Snapshots
- class elasticsearch.client.SnapshotClient(client)
- Parameters:
client (BaseClient)
- cleanup_repository(*, name, error_trace=None, filter_path=None, human=None, master_timeout=None, pretty=None, timeout=None)
Clean up the snapshot repository. Trigger the review of the contents of a snapshot repository and delete any stale data not referenced by existing snapshots.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/clean-up-snapshot-repo-api.html
- Parameters:
name (str) – Snapshot repository to clean up.
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Period to wait for a connection to the master node.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Period to wait for a response.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
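Every parameter after the repository name is keyword-only (the `*` in the signature). A minimal sketch of the call shape, using `unittest.mock.MagicMock` as a stand-in so it runs without a live cluster; a real client would be constructed as `elasticsearch.Elasticsearch("http://localhost:9200")` (the URL is an assumption):

```python
from unittest.mock import MagicMock

# Stand-in for a real Elasticsearch client; the call shape is identical.
client = MagicMock()

# Everything after `name` must be passed by keyword, per the `*` in the signature.
client.snapshot.cleanup_repository(
    name="my_backup_repo",
    master_timeout="30s",  # wait up to 30s for a connection to the master node
    timeout="30s",         # wait up to 30s for the response
)

# Inspect how the client was invoked (possible only because this is a mock).
called_kwargs = client.snapshot.cleanup_repository.call_args.kwargs
```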
- clone(*, repository, snapshot, target_snapshot, indices=None, error_trace=None, filter_path=None, human=None, master_timeout=None, pretty=None, timeout=None, body=None)
Clone a snapshot. Clone part or all of a snapshot into another snapshot in the same repository.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/clone-snapshot-api.html
- Parameters:
repository (str) – A repository name
snapshot (str) – The name of the snapshot to clone from
target_snapshot (str) – The name of the cloned snapshot to create
indices (str | None)
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
- create(*, repository, snapshot, error_trace=None, feature_states=None, filter_path=None, human=None, ignore_unavailable=None, include_global_state=None, indices=None, master_timeout=None, metadata=None, partial=None, pretty=None, wait_for_completion=None, body=None)
Create a snapshot. Take a snapshot of a cluster or of data streams and indices.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/create-snapshot-api.html
- Parameters:
repository (str) – Repository for the snapshot.
snapshot (str) – Name of the snapshot. Must be unique in the repository.
feature_states (Sequence[str] | None) – Feature states to include in the snapshot. Each feature state includes one or more system indices containing related data. You can view a list of eligible features using the get features API. If include_global_state is true, all current feature states are included by default. If include_global_state is false, no feature states are included by default.
ignore_unavailable (bool | None) – If true, the request ignores data streams and indices in indices that are missing or closed. If false, the request returns an error for any data stream or index that is missing or closed.
include_global_state (bool | None) – If true, the current cluster state is included in the snapshot. The cluster state includes persistent cluster settings, composable index templates, legacy index templates, ingest pipelines, and ILM policies. It also includes data stored in system indices, such as Watches and task records (configurable via feature_states).
indices (str | Sequence[str] | None) – Data streams and indices to include in the snapshot. Supports multi-target syntax. Includes all data streams and indices by default.
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
metadata (Mapping[str, Any] | None) – Optional metadata for the snapshot. May have any contents. Must be less than 1024 bytes. This map is not automatically generated by Elasticsearch.
partial (bool | None) – If true, allows restoring a partial snapshot of indices with unavailable shards. Only shards that were successfully included in the snapshot will be restored. All missing shards will be recreated as empty. If false, the entire restore operation will fail if one or more indices included in the snapshot do not have all primary shards available.
wait_for_completion (bool | None) – If true, the request returns a response when the snapshot is complete. If false, the request returns a response when the snapshot initializes.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
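A sketch of a typical nightly-snapshot call, again with a mock standing in for the client. The timestamp-suffix naming scheme and the index patterns are illustrative conventions, not requirements; snapshot names only need to be unique within the repository:

```python
from datetime import datetime, timezone
from unittest.mock import MagicMock

client = MagicMock()  # stand-in; a real Elasticsearch client has the same call shape

# Snapshot names must be unique per repository; a UTC date suffix is one
# common convention (an assumption here, not an Elasticsearch requirement).
snapshot_name = "nightly-" + datetime.now(timezone.utc).strftime("%Y.%m.%d")

client.snapshot.create(
    repository="my_backup_repo",
    snapshot=snapshot_name,
    indices="logs-*,-logs-debug-*",  # multi-target syntax: include logs-*, exclude logs-debug-*
    include_global_state=False,      # skip cluster state and, by default, all feature states
    wait_for_completion=False,       # return as soon as the snapshot initializes
)

kwargs = client.snapshot.create.call_args.kwargs
```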
- create_repository(*, name, repository=None, body=None, error_trace=None, filter_path=None, human=None, master_timeout=None, pretty=None, timeout=None, verify=None)
Create or update a snapshot repository. IMPORTANT: If you are migrating searchable snapshots, the repository name must be identical in the source and destination clusters. To register a snapshot repository, the cluster’s global metadata must be writeable. Ensure there are no cluster blocks (for example, cluster.blocks.read_only and cluster.blocks.read_only_allow_delete settings) that prevent write access.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html
- Parameters:
name (str) – A repository name
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout
verify (bool | None) – Whether to verify the repository after creation
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
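The repository definition is passed via the `repository` (or `body`) argument. A minimal shared-filesystem (`"fs"`) definition as a sketch; the `type` and `settings` keys follow the Elasticsearch repository docs, while the mount path itself is an assumption and must appear in the cluster's `path.repo` setting:

```python
# Minimal "fs" repository definition; the location path is an assumption
# and must be listed in the cluster's path.repo setting.
fs_repository = {
    "type": "fs",
    "settings": {
        "location": "/mnt/backups/my_backup_repo",
        "compress": True,  # compress index metadata files
    },
}
```

It would then be registered with something like `client.snapshot.create_repository(name="my_backup_repo", repository=fs_repository, verify=True)`, where `verify=True` checks the repository on all nodes right after creation.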
- delete(*, repository, snapshot, error_trace=None, filter_path=None, human=None, master_timeout=None, pretty=None)
Delete snapshots.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-snapshot-api.html
- Parameters:
repository (str) – A repository name
snapshot (str) – A comma-separated list of snapshot names
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
- delete_repository(*, name, error_trace=None, filter_path=None, human=None, master_timeout=None, pretty=None, timeout=None)
Delete snapshot repositories. When a repository is unregistered, Elasticsearch removes only the reference to the location where the repository is storing the snapshots. The snapshots themselves are left untouched and in place.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-snapshot-repo-api.html
- Parameters:
name (str | Sequence[str]) – Name of the snapshot repository to unregister. Wildcard (*) patterns are supported.
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
- get(*, repository, snapshot, after=None, error_trace=None, filter_path=None, from_sort_value=None, human=None, ignore_unavailable=None, include_repository=None, index_details=None, index_names=None, master_timeout=None, offset=None, order=None, pretty=None, size=None, slm_policy_filter=None, sort=None, verbose=None)
Get snapshot information.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-snapshot-api.html
- Parameters:
repository (str) – Comma-separated list of snapshot repository names used to limit the request. Wildcard (*) expressions are supported.
snapshot (str | Sequence[str]) – Comma-separated list of snapshot names to retrieve. Also accepts wildcards (*). - To get information about all snapshots in a registered repository, use a wildcard (*) or _all. - To get information about any snapshots that are currently running, use _current.
after (str | None) – Offset identifier to start pagination from as returned by the next field in the response body.
from_sort_value (str | None) – Value of the current sort column at which to start retrieval. Can either be a string snapshot or repository name when sorting by snapshot or repository name, or a millisecond time value or a number when sorting by index count or shard count.
ignore_unavailable (bool | None) – If false, the request returns an error for any snapshots that are unavailable.
include_repository (bool | None) – If true, returns the repository name in each snapshot.
index_details (bool | None) – If true, returns additional information about each index in the snapshot comprising the number of shards in the index, the total size of the index in bytes, and the maximum number of segments per shard in the index. Defaults to false, meaning that this information is omitted.
index_names (bool | None) – If true, returns the name of each index in each snapshot.
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
offset (int | None) – Numeric offset to start pagination from based on the snapshots matching this request. Using a non-zero value for this parameter is mutually exclusive with using the after parameter. Defaults to 0.
order (str | Literal['asc', 'desc'] | None) – Sort order. Valid values are asc for ascending and desc for descending order. Defaults to asc, meaning ascending order.
size (int | None) – Maximum number of snapshots to return. Defaults to 0 which means return all that match the request without limit.
slm_policy_filter (str | None) – Filter snapshots by a comma-separated list of SLM policy names that snapshots belong to. Also accepts wildcards (*) and combinations of wildcards followed by exclude patterns starting with -. To include snapshots not created by an SLM policy you can use the special pattern _none that will match all snapshots without an SLM policy.
sort (str | Literal['duration', 'failed_shard_count', 'index_count', 'name', 'repository', 'shard_count', 'start_time'] | None) – Allows setting a sort order for the result. Defaults to start_time, i.e. sorting by snapshot start time stamp.
verbose (bool | None) – If true, returns additional information about each snapshot such as the version of Elasticsearch which took the snapshot, the start and end times of the snapshot, and the number of shards snapshotted.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
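The `after` and `size` parameters page through large result sets: each response body carries the matching `snapshots` and, while more remain, a `next` offset identifier to feed back as `after`. A sketch of the loop, with a stub standing in for `client.snapshot.get` so it runs without a cluster (the three-page data is fabricated for illustration):

```python
# Fake three pages of results standing in for client.snapshot.get; a real
# response carries "snapshots" and, while more remain, a "next" identifier.
_pages = [
    {"snapshots": [{"snapshot": "snap-1"}, {"snapshot": "snap-2"}], "next": "p2"},
    {"snapshots": [{"snapshot": "snap-3"}, {"snapshot": "snap-4"}], "next": "p3"},
    {"snapshots": [{"snapshot": "snap-5"}]},  # last page: no "next" field
]

def fake_get(repository, snapshot="*", size=2, after=None):
    return _pages[{None: 0, "p2": 1, "p3": 2}[after]]

# The loop itself is what a real caller would write: pass each response's
# "next" value back as `after` until it is absent.
names, after = [], None
while True:
    resp = fake_get("my_backup_repo", size=2, after=after)
    names.extend(s["snapshot"] for s in resp["snapshots"])
    after = resp.get("next")
    if after is None:
        break
```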
- get_repository(*, name=None, error_trace=None, filter_path=None, human=None, local=None, master_timeout=None, pretty=None)
Get snapshot repository information.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-snapshot-repo-api.html
- Parameters:
name (str | Sequence[str] | None) – A comma-separated list of repository names
local (bool | None) – Return local information, do not retrieve the state from master node (default: false)
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
- repository_analyze(*, name, blob_count=None, concurrency=None, detailed=None, early_read_node_count=None, error_trace=None, filter_path=None, human=None, max_blob_size=None, max_total_data_size=None, pretty=None, rare_action_probability=None, rarely_abort_writes=None, read_node_count=None, register_operation_count=None, seed=None, timeout=None)
Analyze a snapshot repository. Analyze the performance characteristics and any incorrect behaviour found in a repository. The response exposes implementation details of the analysis which may change from version to version. The response body format is therefore not considered stable and may be different in newer versions. There are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch. Some storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system. The default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations. Run your first analysis with the default parameter values to check for simple problems. If successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a blob_count of at least 2000, a max_blob_size of at least 2gb, a max_total_data_size of at least 1tb, and a register_operation_count of at least 100. Always specify a generous timeout, possibly 1h or longer, to allow time for each analysis to run to completion. Perform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once. If the analysis fails, Elasticsearch detected that your repository behaved unexpectedly. This usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support. If so, this storage system is not suitable for use as a snapshot repository. 
You will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects. If the analysis is successful, the API returns details of the testing process, optionally including how long each operation took. You can use this information to determine the performance of your storage system. If any operation fails or returns an incorrect result, the API returns an error. If the API returns an error, it may not have removed all the data it wrote to the repository. The error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs. You should verify that this location has been cleaned up correctly. If there is still leftover data at the specified location, you should manually remove it. If the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled. Some clients are configured to close their connection if no response is received within a certain timeout. An analysis takes a long time to complete so you might need to relax any such client-side timeouts. On cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all. The path to the leftover data is recorded in the Elasticsearch logs. You should verify that this location has been cleaned up correctly. If there is still leftover data at the specified location, you should manually remove it. If the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed. The analysis attempts to detect common bugs but it does not offer 100% coverage. Additionally, it does not test the following: * Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster. * Your repository must not suffer from silent data corruption. 
Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted. * Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results. IMPORTANT: An analysis writes a substantial amount of data to your repository and then reads it back again. This consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself. You must ensure this load does not affect other users of these systems. Analyses respect the repository settings max_snapshot_bytes_per_sec and max_restore_bytes_per_sec if available and the cluster setting indices.recovery.max_bytes_per_sec which you can use to limit the bandwidth they consume. NOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions. NOTE: Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones. A storage system that passes repository analysis with one version of Elasticsearch may fail with a different version. This indicates it behaves incorrectly in ways that the former version did not detect. You must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch. NOTE: This API may not work correctly in a mixed-version cluster.
Implementation details
NOTE: This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. The request parameters and response format depend on details of the implementation so may also be different in newer versions.
The analysis comprises a number of blob-level tasks, as set by the blob_count parameter and a number of compare-and-exchange operations on linearizable registers, as set by the register_operation_count parameter. These tasks are distributed over the data and master-eligible nodes in the cluster for execution. For most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote. The size of the blob is chosen randomly, according to the max_blob_size and max_total_data_size parameters. If any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires. For some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes. These reads are permitted to fail, but must not return partial data. If any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires. For some blob-level tasks, the executing node will overwrite the blob while its peers are reading it. In this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs. If any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites. The executing node will use a variety of different methods to write the blob. For instance, where applicable, it will use both single-part and multi-part uploads. Similarly, the reading nodes will use a variety of different methods to read the data back again. For instance they may read the entire blob from start to end or may read only a subset of the data. 
For some blob-level tasks, the executing node will cancel the write before it is complete. In this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob. Linearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation. This operation ensures correct and strongly-consistent behavior even when the blob is accessed by multiple nodes at the same time. The detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type. Repository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed. Repository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results. If an operation fails due to contention, Elasticsearch retries the operation until it succeeds. Most of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob. Some operations also verify the behavior on small blobs with sizes other than 8 bytes.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/repo-analysis-api.html
- Parameters:
name (str) – The name of the repository.
blob_count (int | None) – The total number of blobs to write to the repository during the test. For realistic experiments, you should set it to at least 2000.
concurrency (int | None) – The number of operations to run concurrently during the test.
detailed (bool | None) – Indicates whether to return detailed results, including timing information for every operation performed during the analysis. If false, it returns only a summary of the analysis.
early_read_node_count (int | None) – The number of nodes on which to perform an early read operation while writing each blob. Early read operations are only rarely performed.
max_blob_size (int | str | None) – The maximum size of a blob to be written during the test. For realistic experiments, you should set it to at least 2gb.
max_total_data_size (int | str | None) – An upper limit on the total size of all the blobs written during the test. For realistic experiments, you should set it to at least 1tb.
rare_action_probability (float | None) – The probability of performing a rare action such as an early read, an overwrite, or an aborted write on each blob.
rarely_abort_writes (bool | None) – Indicates whether to rarely cancel writes before they complete.
read_node_count (int | None) – The number of nodes on which to read a blob after writing.
register_operation_count (int | None) – The minimum number of linearizable register operations to perform in total. For realistic experiments, you should set it to at least 100.
seed (int | None) – The seed for the pseudo-random number generator used to generate the list of operations performed during the test. To repeat the same set of operations in multiple experiments, use the same seed in each experiment. Note that the operations are performed concurrently so might not always happen in the same order on each run.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – The period of time to wait for the test to complete. If no response is received before the timeout expires, the test is cancelled and returns an error.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
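The implementation notes above describe linearizable registers manipulated via compare-and-exchange, with most operations atomically incrementing an 8-byte counter blob and retrying on contention. A pure-Python sketch of those semantics; real repositories implement compare-and-exchange natively (for example via conditional writes), so the in-memory register here is only an illustration:

```python
import struct

# An in-memory "register blob" standing in for a repository blob.
register = struct.pack(">Q", 0)  # 8-byte big-endian counter, as in the analysis

def compare_and_exchange(expected: bytes, updated: bytes) -> bool:
    """Replace the register iff it still holds `expected` (atomic in a real store)."""
    global register
    if register == expected:
        register = updated
        return True
    return False  # contention: the caller must re-read and retry

def increment_counter() -> int:
    """Retry until the increment succeeds, as repository analysis does."""
    while True:
        current = register
        (value,) = struct.unpack(">Q", current)
        if compare_and_exchange(current, struct.pack(">Q", value + 1)):
            return value + 1

for _ in range(3):
    final = increment_counter()
```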
- repository_verify_integrity(*, name, blob_thread_pool_concurrency=None, error_trace=None, filter_path=None, human=None, index_snapshot_verification_concurrency=None, index_verification_concurrency=None, max_bytes_per_sec=None, max_failed_shard_snapshots=None, meta_thread_pool_concurrency=None, pretty=None, snapshot_verification_concurrency=None, verify_blob_contents=None)
Verify the repository integrity. Verify the integrity of the contents of a snapshot repository. This API enables you to perform a comprehensive check of the contents of a repository, looking for any anomalies in its data or metadata which might prevent you from restoring snapshots from the repository or which might cause future snapshot create or delete operations to fail. If you suspect the integrity of the contents of one of your snapshot repositories, cease all write activity to this repository immediately, set its read_only option to true, and use this API to verify its integrity. Until you do so: * It may not be possible to restore some snapshots from this repository. * Searchable snapshots may report errors when searched or may have unassigned shards. * Taking snapshots into this repository may fail or may appear to succeed but have created a snapshot which cannot be restored. * Deleting snapshots from this repository may fail or may appear to succeed but leave the underlying data on disk. * Continuing to write to the repository while it is in an invalid state may cause additional damage to its contents. If the API finds any problems with the integrity of the contents of your repository, Elasticsearch will not be able to repair the damage. The only way to bring the repository back into a fully working state after its contents have been damaged is by restoring its contents from a repository backup which was taken before the damage occurred. You must also identify what caused the damage and take action to prevent it from happening again. If you cannot restore a repository backup, register a new repository and use this for all future snapshot operations. In some cases it may be possible to recover some of the contents of a damaged repository, either by restoring as many of its snapshots as needed and taking new snapshots of the restored data, or by using the reindex API to copy data from any searchable snapshots mounted from the damaged repository.
Avoid all operations which write to the repository while the verify repository integrity API is running. If something changes the repository contents while an integrity verification is running then Elasticsearch may incorrectly report having detected some anomalies in its contents due to the concurrent writes. It may also incorrectly fail to report some anomalies that the concurrent writes prevented it from detecting. NOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions. NOTE: This API may not work correctly in a mixed-version cluster.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/verify-repo-integrity-api.html
- Parameters:
blob_thread_pool_concurrency (int | None) – Number of threads to use for reading blob contents
index_snapshot_verification_concurrency (int | None) – Number of snapshots to verify concurrently within each index
index_verification_concurrency (int | None) – Number of indices to verify concurrently
max_bytes_per_sec (str | None) – Rate limit for individual blob verification
max_failed_shard_snapshots (int | None) – Maximum permitted number of failed shard snapshots
meta_thread_pool_concurrency (int | None) – Number of threads to use for reading metadata
snapshot_verification_concurrency (int | None) – Number of snapshots to verify concurrently
verify_blob_contents (bool | None) – Whether to verify the contents of individual blobs
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
- restore(*, repository, snapshot, error_trace=None, feature_states=None, filter_path=None, human=None, ignore_index_settings=None, ignore_unavailable=None, include_aliases=None, include_global_state=None, index_settings=None, indices=None, master_timeout=None, partial=None, pretty=None, rename_pattern=None, rename_replacement=None, wait_for_completion=None, body=None)
Restore a snapshot. Restore a snapshot of a cluster or data streams and indices. You can restore a snapshot only to a running cluster with an elected master node. The snapshot repository must be registered and available to the cluster. The snapshot and cluster versions must be compatible. To restore a snapshot, the cluster’s global metadata must be writable. Ensure there aren’t any cluster blocks that prevent writes. The restore operation ignores index blocks. Before you restore a data stream, ensure the cluster contains a matching index template with data streams enabled. To check, use the index management feature in Kibana or the get index template API:
` GET _index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream `
If no such template exists, you can create one or restore a cluster state that contains one. Without a matching index template, a data stream can’t roll over or create backing indices. If your snapshot contains data from App Search or Workplace Search, you must restore the Enterprise Search encryption key before you restore the snapshot.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/restore-snapshot-api.html
- Parameters:
repository (str) – A repository name
snapshot (str) – A snapshot name
ignore_unavailable (bool | None)
include_aliases (bool | None)
include_global_state (bool | None)
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
partial (bool | None)
rename_pattern (str | None)
rename_replacement (str | None)
wait_for_completion (bool | None) – Should this request wait until the operation has completed before returning
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
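The `rename_pattern` and `rename_replacement` parameters apply a regex rewrite to the names of restored indices and data streams. The rewriting happens server-side using Java regex syntax, where capture-group references are written `$1`; Python's `re.sub` with backslash references illustrates the same effect (the pattern and names below are illustrative):

```python
import re

# Server-side, the replacement would be written "restored_index_$1" (Java
# regex dialect); Python's re module uses \1 for the same capture group.
rename_pattern = "index_(.+)"
rename_replacement = r"restored_index_\1"

restored = [
    re.sub(rename_pattern, rename_replacement, name)
    for name in ["index_logs", "index_metrics"]
]
```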
- status(*, repository=None, snapshot=None, error_trace=None, filter_path=None, human=None, ignore_unavailable=None, master_timeout=None, pretty=None)
Get the snapshot status. Get a detailed description of the current state for each shard participating in the snapshot. Note that this API should be used only to obtain detailed shard-level information for ongoing snapshots. If this detail is not needed or you want to obtain information about one or more existing snapshots, use the get snapshot API. WARNING: Using the API to return the status of any snapshots other than currently running snapshots can be expensive. The API requires a read from the repository for each shard in each snapshot. For example, if you have 100 snapshots with 1,000 shards each, an API request that includes all snapshots will require 100,000 reads (100 snapshots x 1,000 shards). Depending on the latency of your storage, such requests can take an extremely long time to return results. These requests can also tax machine resources and, when using cloud storage, incur high processing costs.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-snapshot-status-api.html
- Parameters:
repository (str | None) – A repository name
snapshot (str | Sequence[str] | None) – A comma-separated list of snapshot names
ignore_unavailable (bool | None) – Whether to ignore unavailable snapshots, defaults to false which means a SnapshotMissingException is thrown
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]
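The warning above implies a cost of one repository read per shard per snapshot. A trivial pre-flight estimate using the documentation's own numbers; the 10,000-read threshold is an arbitrary illustration, not an Elasticsearch limit:

```python
# One repository read per shard per snapshot, per the WARNING above.
def status_read_cost(snapshot_count: int, shards_per_snapshot: int) -> int:
    return snapshot_count * shards_per_snapshot

cost = status_read_cost(100, 1000)   # the documentation's example figures
too_expensive = cost > 10_000        # arbitrary illustrative threshold
```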
- verify_repository(*, name, error_trace=None, filter_path=None, human=None, master_timeout=None, pretty=None, timeout=None)
Verify a snapshot repository. Check for common misconfigurations in a snapshot repository.
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/verify-snapshot-repo-api.html
- Parameters:
name (str) – A repository name
master_timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout for connection to master node
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Explicit operation timeout
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
ObjectApiResponse[Any]