Inference

class elasticsearch.client.InferenceClient

To use this client, access client.inference from an Elasticsearch client. For example:

from elasticsearch import Elasticsearch

# Create the client instance
client = Elasticsearch(...)
# Use the inference client
client.inference.<method>(...)
completion(*, inference_id, input=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)

Perform completion inference on the service

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/post-inference-api.html

Parameters:
  • inference_id (str) – The inference Id

  • input (str | Sequence[str] | None) – Inference input. Either a string or an array of strings.

  • task_settings (Any | None) – Optional task settings

  • timeout (str | Literal[-1] | Literal[0] | None) – Specifies the amount of time to wait for the inference request to complete.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
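
For example, a minimal sketch of a completion request, using the client instance created above; "my-completion-endpoint" is a placeholder ID for an existing completion endpoint, and the field access assumes the documented completion response shape:

# The endpoint must already exist and have the completion task type
resp = client.inference.completion(
    inference_id="my-completion-endpoint",   # placeholder ID
    input="Summarize Elasticsearch in one sentence.",
    timeout="30s",
)
# The completion response carries a "completion" list with one result per input
print(resp["completion"][0]["result"])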

delete(*, inference_id, task_type=None, dry_run=None, error_trace=None, filter_path=None, force=None, human=None, pretty=None)

Delete an inference endpoint

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/delete-inference-api.html

Parameters:
  • inference_id (str) – The inference identifier.

  • task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The task type

  • dry_run (bool | None) – When true, the endpoint is not deleted and a list of ingest processors which reference this endpoint is returned.

  • force (bool | None) – When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

Return type:

ObjectApiResponse[Any]
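
A minimal sketch, assuming "my-embedding-endpoint" is a placeholder for an existing endpoint:

# Check which ingest processors still reference the endpoint, without deleting it
resp = client.inference.delete(
    inference_id="my-embedding-endpoint",   # placeholder ID
    dry_run=True,
)
# Once nothing references it, delete it for real (force=True would skip the check)
client.inference.delete(inference_id="my-embedding-endpoint")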

get(*, task_type=None, inference_id=None, error_trace=None, filter_path=None, human=None, pretty=None)

Get an inference endpoint

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/get-inference-api.html

Parameters:
  • task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The task type

  • inference_id (str | None) – The inference Id

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

Return type:

ObjectApiResponse[Any]
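
For example, assuming the endpoint ID below is a placeholder:

# List all inference endpoints configured in the cluster
all_endpoints = client.inference.get()

# Fetch a single endpoint, optionally scoped to a task type
endpoint = client.inference.get(
    task_type="text_embedding",
    inference_id="my-embedding-endpoint",   # placeholder ID
)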

inference(*, inference_id, input=None, task_type=None, error_trace=None, filter_path=None, human=None, pretty=None, query=None, task_settings=None, timeout=None, body=None)

Perform inference on the service.

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.

For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.

NOTE: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/post-inference-api.html

Parameters:
  • inference_id (str) – The unique identifier for the inference endpoint.

  • input (str | Sequence[str] | None) – The text on which you want to perform the inference task. It can be a single string or an array. NOTE: Inference endpoints for the completion task type currently only support a single string as input.

  • task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The type of inference task that the model performs.

  • query (str | None) – The query input, which is required only for the rerank task. It is not required for other tasks.

  • task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.

  • timeout (str | Literal[-1] | Literal[0] | None) – The amount of time to wait for the inference request to complete.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
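
As a sketch, a rerank request made through the generic inference method; "my-rerank-endpoint" is a placeholder and must refer to an existing rerank endpoint:

resp = client.inference.inference(
    inference_id="my-rerank-endpoint",   # placeholder ID
    task_type="rerank",
    query="best pizza in town",
    input=[
        "Pizza Napoli has a wood-fired oven and homemade dough.",
        "The hardware store opens at 8am on weekdays.",
    ],
)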

put(*, inference_id, inference_config=None, body=None, task_type=None, error_trace=None, filter_path=None, human=None, pretty=None)

Create an inference endpoint. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/put-inference-api.html

Parameters:
  • inference_id (str) – The inference Id

  • inference_config (Mapping[str, Any] | None)

  • task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The task type

  • body (Mapping[str, Any] | None)

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

Return type:

ObjectApiResponse[Any]
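
A minimal sketch of creating an endpoint with the generic put method; the endpoint ID is a placeholder and the service_settings values follow the documented elser service settings:

client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-endpoint",   # placeholder ID
    inference_config={
        "service": "elser",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 1,
        },
    },
)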

put_alibabacloud(*, task_type, alibabacloud_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an AlibabaCloud AI Search inference endpoint.

Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-alibabacloud-ai-search.html

Parameters:
  • task_type (str | Literal['completion', 'rerank', 'space_embedding', 'text_embedding']) – The type of the inference task that the model will perform.

  • alibabacloud_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['alibabacloud-ai-search'] | None) – The type of service supported for the specified task type. In this case, alibabacloud-ai-search.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the alibabacloud-ai-search service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_amazonbedrock(*, task_type, amazonbedrock_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an Amazon Bedrock inference endpoint.

Creates an inference endpoint to perform an inference task with the amazonbedrock service.

NOTE: You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-amazon-bedrock.html

Parameters:
  • task_type (str | Literal['completion', 'text_embedding']) – The type of the inference task that the model will perform.

  • amazonbedrock_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['amazonbedrock'] | None) – The type of service supported for the specified task type. In this case, amazonbedrock.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the amazonbedrock service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_anthropic(*, task_type, anthropic_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an Anthropic inference endpoint.

Create an inference endpoint to perform an inference task with the anthropic service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-anthropic.html

Parameters:
  • task_type (str | Literal['completion']) – The task type. The only valid task type for the model to perform is completion.

  • anthropic_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['anthropic'] | None) – The type of service supported for the specified task type. In this case, anthropic.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the anthropic service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_azureaistudio(*, task_type, azureaistudio_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an Azure AI studio inference endpoint.

Create an inference endpoint to perform an inference task with the azureaistudio service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-azure-ai-studio.html

Parameters:
  • task_type (str | Literal['completion', 'text_embedding']) – The type of the inference task that the model will perform.

  • azureaistudio_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['azureaistudio'] | None) – The type of service supported for the specified task type. In this case, azureaistudio.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the azureaistudio service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_azureopenai(*, task_type, azureopenai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an Azure OpenAI inference endpoint.

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-azure-openai.html

Parameters:
  • task_type (str | Literal['completion', 'text_embedding']) – The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

  • azureopenai_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['azureopenai'] | None) – The type of service supported for the specified task type. In this case, azureopenai.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the azureopenai service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_cohere(*, task_type, cohere_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create a Cohere inference endpoint.

Create an inference endpoint to perform an inference task with the cohere service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-cohere.html

Parameters:
  • task_type (str | Literal['completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform.

  • cohere_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['cohere'] | None) – The type of service supported for the specified task type. In this case, cohere.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the cohere service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_elasticsearch(*, task_type, elasticsearch_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an Elasticsearch inference endpoint.

Create an inference endpoint to perform an inference task with the elasticsearch service.

NOTE: Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings.

If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.

NOTE: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-elasticsearch.html

Parameters:
  • task_type (str | Literal['rerank', 'sparse_embedding', 'text_embedding']) – The type of the inference task that the model will perform.

  • elasticsearch_inference_id (str) – The unique identifier of the inference endpoint. It must not match the model_id.

  • service (str | Literal['elasticsearch'] | None) – The type of service supported for the specified task type. In this case, elasticsearch.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the elasticsearch service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
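
For example, a sketch that creates a text embedding endpoint for the built-in E5 model; the endpoint ID is a placeholder, and the model_id and other service settings follow the elasticsearch service documentation:

client.inference.put_elasticsearch(
    task_type="text_embedding",
    elasticsearch_inference_id="my-e5-endpoint",   # placeholder, must not match the model_id
    service="elasticsearch",
    service_settings={
        "model_id": ".multilingual-e5-small",
        "num_allocations": 1,
        "num_threads": 1,
    },
)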

put_elser(*, task_type, elser_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, body=None)

Create an ELSER inference endpoint.

Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

NOTE: Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create the endpoint using the API if you want to customize the settings.

The API request will automatically download and deploy the ELSER model if it isn't already downloaded.

NOTE: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-elser.html

Parameters:
  • task_type (str | Literal['sparse_embedding']) – The type of the inference task that the model will perform.

  • elser_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['elser'] | None) – The type of service supported for the specified task type. In this case, elser.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the elser service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
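
A minimal sketch; the endpoint ID is a placeholder, the service settings follow the documented elser service settings, and client.options(request_timeout=...) raises the client-side timeout so the call does not time out while the model downloads:

client.options(request_timeout=300).inference.put_elser(
    task_type="sparse_embedding",
    elser_inference_id="my-elser-endpoint",   # placeholder ID
    service="elser",
    service_settings={
        "num_allocations": 1,
        "num_threads": 1,
    },
)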

put_googleaistudio(*, task_type, googleaistudio_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, body=None)

Create a Google AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the googleaistudio service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-google-ai-studio.html

Parameters:
  • task_type (str | Literal['completion', 'text_embedding']) – The type of the inference task that the model will perform.

  • googleaistudio_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['googleaistudio'] | None) – The type of service supported for the specified task type. In this case, googleaistudio.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the googleaistudio service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_googlevertexai(*, task_type, googlevertexai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create a Google Vertex AI inference endpoint.

Create an inference endpoint to perform an inference task with the googlevertexai service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-google-vertex-ai.html

Parameters:
  • task_type (str | Literal['rerank', 'text_embedding']) – The type of the inference task that the model will perform.

  • googlevertexai_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['googlevertexai'] | None) – The type of service supported for the specified task type. In this case, googlevertexai.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the googlevertexai service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_hugging_face(*, task_type, huggingface_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, body=None)

Create a Hugging Face inference endpoint.

Create an inference endpoint to perform an inference task with the hugging_face service.

You must first create an inference endpoint on the Hugging Face endpoint page to get an endpoint URL. Select the model you want to use on the new endpoint creation page (for example, intfloat/e5-small-v2), then select the sentence embeddings task under the advanced configuration section. Create the endpoint and copy the URL after the endpoint initialization has finished.

The following models are recommended for the Hugging Face service:

  • all-MiniLM-L6-v2
  • all-MiniLM-L12-v2
  • all-mpnet-base-v2
  • e5-base-v2
  • e5-small-v2
  • multilingual-e5-base
  • multilingual-e5-small

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-hugging-face.html

Parameters:
  • task_type (str | Literal['text_embedding']) – The type of the inference task that the model will perform.

  • huggingface_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['hugging_face'] | None) – The type of service supported for the specified task type. In this case, hugging_face.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the hugging_face service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
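
A sketch, assuming the URL and API key placeholders are replaced with the values copied from your Hugging Face endpoint page; url and api_key follow the documented hugging_face service settings:

client.inference.put_hugging_face(
    task_type="text_embedding",
    huggingface_inference_id="my-hf-endpoint",   # placeholder ID
    service="hugging_face",
    service_settings={
        "url": "https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder URL
        "api_key": "<your-hugging-face-access-token>",                 # placeholder secret
    },
)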

put_jinaai(*, task_type, jinaai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create a JinaAI inference endpoint.

Create an inference endpoint to perform an inference task with the jinaai service.

To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-jinaai.html

Parameters:
  • task_type (str | Literal['rerank', 'text_embedding']) – The type of the inference task that the model will perform.

  • jinaai_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['jinaai'] | None) – The type of service supported for the specified task type. In this case, jinaai.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the jinaai service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_mistral(*, task_type, mistral_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, body=None)

Create a Mistral inference endpoint.

Creates an inference endpoint to perform an inference task with the mistral service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-mistral.html

Parameters:
  • task_type (str | Literal['text_embedding']) – The task type. The only valid task type for the model to perform is text_embedding.

  • mistral_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['mistral'] | None) – The type of service supported for the specified task type. In this case, mistral.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the mistral service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_openai(*, task_type, openai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create an OpenAI inference endpoint.

Create an inference endpoint to perform an inference task with the openai service or OpenAI-compatible APIs.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-openai.html

Parameters:
  • task_type (str | Literal['chat_completion', 'completion', 'text_embedding']) – The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

  • openai_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['openai'] | None) – The type of service supported for the specified task type. In this case, openai.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the openai service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
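
A sketch of creating a text embedding endpoint; the endpoint ID and API key are placeholders, and api_key and model_id follow the documented openai service settings:

client.inference.put_openai(
    task_type="text_embedding",
    openai_inference_id="my-openai-embeddings",   # placeholder ID
    service="openai",
    service_settings={
        "api_key": "<your-openai-api-key>",       # placeholder secret
        "model_id": "text-embedding-3-small",
    },
)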

put_voyageai(*, task_type, voyageai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)

Create a VoyageAI inference endpoint.

Create an inference endpoint to perform an inference task with the voyageai service.

Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-voyageai.html

Parameters:
  • task_type (str | Literal['rerank', 'text_embedding']) – The type of the inference task that the model will perform.

  • voyageai_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['voyageai'] | None) – The type of service supported for the specified task type. In this case, voyageai.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the voyageai service.

  • chunking_settings (Mapping[str, Any] | None) – The chunking configuration object.

  • task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

put_watsonx(*, task_type, watsonx_inference_id, service=None, service_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, body=None)

Create a Watsonx inference endpoint.

Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-watsonx-ai.html

Parameters:
  • task_type (str | Literal['text_embedding']) – The task type. The only valid task type for the model to perform is text_embedding.

  • watsonx_inference_id (str) – The unique identifier of the inference endpoint.

  • service (str | Literal['watsonxai'] | None) – The type of service supported for the specified task type. In this case, watsonxai.

  • service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the watsonxai service.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]

rerank(*, inference_id, input=None, query=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)

Perform reranking inference on the service

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/post-inference-api.html

Parameters:
  • inference_id (str) – The unique identifier for the inference endpoint.

  • input (str | Sequence[str] | None) – The text on which you want to perform the inference task. It can be a single string or an array. NOTE: Inference endpoints for the completion task type currently only support a single string as input.

  • query (str | None) – Query input.

  • task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.

  • timeout (str | Literal[-1] | Literal[0] | None) – The amount of time to wait for the inference request to complete.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
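
For example, a sketch assuming "my-rerank-endpoint" is a placeholder for an existing rerank endpoint; the response field names follow the documented rerank response shape:

resp = client.inference.rerank(
    inference_id="my-rerank-endpoint",   # placeholder ID
    query="best pizza in town",
    input=[
        "Pizza Napoli has a wood-fired oven and homemade dough.",
        "The hardware store opens at 8am on weekdays.",
    ],
)
# Each entry carries the index of the input document and its relevance score
for entry in resp["rerank"]:
    print(entry["index"], entry["relevance_score"])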

sparse_embedding(*, inference_id, input=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)

Perform sparse embedding inference on the service

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/post-inference-api.html

Parameters:
  • inference_id (str) – The inference Id

  • input (str | Sequence[str] | None) – Inference input. Either a string or an array of strings.

  • task_settings (Any | None) – Optional task settings

  • timeout (str | Literal[-1] | Literal[0] | None) – Specifies the amount of time to wait for the inference request to complete.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
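
A minimal sketch, assuming the placeholder ID refers to an existing sparse embedding endpoint (for example one backed by ELSER); the field access assumes the documented sparse embedding response shape:

resp = client.inference.sparse_embedding(
    inference_id="my-elser-endpoint",   # placeholder ID
    input=["These are not the droids you are looking for."],
)
# One sparse embedding (a token-to-weight map) per input string
print(resp["sparse_embedding"][0]["embedding"])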

text_embedding(*, inference_id, input=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)

Perform text embedding inference on the service

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/post-inference-api.html

Parameters:
  • inference_id (str) – The inference Id

  • input (str | Sequence[str] | None) – Inference input. Either a string or an array of strings.

  • task_settings (Any | None) – Optional task settings

  • timeout (str | Literal[-1] | Literal[0] | None) – Specifies the amount of time to wait for the inference request to complete.

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

  • body (Dict[str, Any] | None)

Return type:

ObjectApiResponse[Any]
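
A minimal sketch, assuming the placeholder ID refers to an existing text embedding endpoint; the field access follows the documented text embedding response shape:

resp = client.inference.text_embedding(
    inference_id="my-e5-endpoint",   # placeholder ID
    input=["The quick brown fox jumps over the lazy dog."],
)
# One dense vector per input string
vector = resp["text_embedding"][0]["embedding"]
print(len(vector))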

update(*, inference_id, inference_config=None, body=None, task_type=None, error_trace=None, filter_path=None, human=None, pretty=None)

Update an inference endpoint.

Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/update-inference-api.html

Parameters:
  • inference_id (str) – The unique identifier of the inference endpoint.

  • inference_config (Mapping[str, Any] | None)

  • task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The type of inference task that the model performs.

  • body (Mapping[str, Any] | None)

  • error_trace (bool | None)

  • filter_path (str | Sequence[str] | None)

  • human (bool | None)

  • pretty (bool | None)

Return type:

ObjectApiResponse[Any]
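
For example, a sketch that rotates the API key of an OpenAI-backed endpoint; the endpoint ID and key are placeholders, and placing the secret under service_settings follows the update inference API documentation:

client.inference.update(
    inference_id="my-openai-embeddings",   # placeholder ID
    inference_config={
        "service_settings": {
            "api_key": "<your-new-openai-api-key>",   # placeholder secret
        },
    },
)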