MistralAIEmbeddings#

class langchain_mistralai.embeddings.MistralAIEmbeddings[source]#

Bases: BaseModel, Embeddings

MistralAI embedding model integration.

Setup:

Install langchain_mistralai and set the environment variable MISTRAL_API_KEY.

pip install -U langchain_mistralai
export MISTRAL_API_KEY="your-api-key"
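The key can also be set from within a Python session. A minimal sketch (getpass is used here only as a convenience for interactive sessions and is not required by the integration):

import getpass
import os

# Prompt for the key only if it is not already present in the environment.
if not os.environ.get("MISTRAL_API_KEY"):
    os.environ["MISTRAL_API_KEY"] = getpass.getpass("Enter your Mistral API key: ")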
Key init args — embedding params:
model: str

Name of MistralAI model to use.

Key init args — client params:
api_key: Optional[SecretStr]

The API key for the MistralAI API. If not provided, it will be read from the environment variable MISTRAL_API_KEY.

max_retries: int

The number of times to retry a request if it fails.

timeout: int

The number of seconds to wait for a response before timing out.

max_concurrent_requests: int

The maximum number of concurrent requests to make to the Mistral API.

See full list of supported init args and their descriptions in the params section.

Instantiate:
from langchain_mistralai import MistralAIEmbeddings

embed = MistralAIEmbeddings(
    model="mistral-embed",
    # api_key="...",
    # other params...
)
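The client params listed above can also be set at construction time. A sketch with illustrative values (not recommendations):

embed = MistralAIEmbeddings(
    model="mistral-embed",
    # api_key="...",             # falls back to MISTRAL_API_KEY if omitted
    max_retries=3,               # default: 5
    timeout=60,                  # seconds; default: 120
    max_concurrent_requests=16,  # default: 64
)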
Embed single text:
input_text = "The meaning of life is 42"
vector = embed.embed_query(input_text)
print(vector[:3])
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Embed multiple texts:
input_texts = ["Document 1...", "Document 2..."]
vectors = embed.embed_documents(input_texts)
print(len(vectors))
# The first 3 coordinates for the first vector
print(vectors[0][:3])
2
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Async:
vector = await embed.aembed_query(input_text)
print(vector[:3])

# multiple:
# await embed.aembed_documents(input_texts)
[-0.009100092574954033, 0.005071679595857859, -0.0029193938244134188]
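The await calls above assume an environment with a running event loop, such as a Jupyter notebook. In a plain script the same calls can be driven with asyncio; a sketch, assuming embed, input_text, and input_texts from the examples above:

import asyncio

async def main() -> None:
    vector = await embed.aembed_query(input_text)
    vectors = await embed.aembed_documents(input_texts)
    print(vector[:3], len(vectors))

asyncio.run(main())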

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

param async_client: AsyncClient = None#
param client: Client = None#
param endpoint: str = 'https://api.mistral.ai/v1/'#
param max_concurrent_requests: int = 64#
param max_retries: int = 5#
param mistral_api_key: SecretStr [Optional] (alias 'api_key')#
param model: str = 'mistral-embed'#
param timeout: int = 120#
param tokenizer: Tokenizer = None#
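The endpoint field can be pointed at a Mistral-compatible gateway or proxy if requests should not go to the public API directly. A sketch (the URL is a placeholder, not a real service):

embed = MistralAIEmbeddings(
    model="mistral-embed",
    # Placeholder URL; the default is https://api.mistral.ai/v1/.
    endpoint="https://my-gateway.example.com/v1/",
)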
async aembed_documents(texts: List[str]) → List[List[float]][source]#

Embed a list of document texts.

Parameters:

texts (List[str]) – The list of texts to embed.

Returns:

List of embeddings, one for each text.

Return type:

List[List[float]]
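Because this is a coroutine, several corpora can be embedded concurrently alongside other async work. A sketch, assuming embed from the examples above and two illustrative lists of texts:

import asyncio

async def embed_corpora() -> None:
    faq_texts = ["How do I reset my password?", "Where is my invoice?"]  # illustrative
    doc_texts = ["Document 1...", "Document 2..."]
    # Both corpora are embedded concurrently.
    faq_vectors, doc_vectors = await asyncio.gather(
        embed.aembed_documents(faq_texts),
        embed.aembed_documents(doc_texts),
    )
    print(len(faq_vectors), len(doc_vectors))

asyncio.run(embed_corpora())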

async aembed_query(text: str) → List[float][source]#

Embed a single query text.

Parameters:

text (str) – The text to embed.

Returns:

Embedding for the text.

Return type:

List[float]

embed_documents(texts: List[str]) → List[List[float]][source]#

Embed a list of document texts.

Parameters:

texts (List[str]) – The list of texts to embed.

Returns:

List of embeddings, one for each text.

Return type:

List[List[float]]
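In practice this method is usually called for you by a vector store during indexing. A sketch using the in-memory store from langchain_core, assuming a recent langchain_core release that provides InMemoryVectorStore and the embed instance from the examples above:

from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore.from_texts(
    ["Document 1...", "Document 2..."],
    embedding=embed,  # embed_documents is called under the hood
)
results = store.similarity_search("Document 1", k=1)
print(results[0].page_content)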

embed_query(text: str) → List[float][source]#

Embed a single query text.

Parameters:

text (str) – The text to embed.

Returns:

Embedding for the text.

Return type:

List[float]
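A common use of the query embedding is to rank documents by cosine similarity against vectors from embed_documents. A minimal pure-Python sketch; the cosine_similarity helper below is illustrative and not part of this class:

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

docs = ["Document 1...", "Document 2..."]
doc_vectors = embed.embed_documents(docs)
query_vector = embed.embed_query("Which document is the first one?")

scores = [cosine_similarity(query_vector, v) for v in doc_vectors]
best = max(range(len(docs)), key=lambda i: scores[i])
print(docs[best], scores[best])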

Examples using MistralAIEmbeddings