DeepInfraEmbeddings#

class langchain_community.embeddings.deepinfra.DeepInfraEmbeddings[source]#

Bases: BaseModel, Embeddings

Deep Infra's embedding inference service.

To use, you should have the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. There are multiple embeddings models available, see https://deepinfra.com/models?type=embeddings.

Example

from langchain_community.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
    model_id="sentence-transformers/clip-ViT-B-32",
    deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
    [
        "Alpha is the first letter of Greek alphabet",
        "Beta is the second letter of Greek alphabet",
    ]
)
r2 = deepinfra_emb.embed_query(
    "What is the second letter of Greek alphabet"
)

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param batch_size: int = 1024#

Batch size for embedding requests.

param deepinfra_api_token: str | None = None#

API token for Deep Infra. If not provided, the token is fetched from the environment variable 'DEEPINFRA_API_TOKEN'.

param embed_instruction: str = 'passage: '#

Instruction used to embed documents.

param model_id: str = 'sentence-transformers/clip-ViT-B-32'#

Embeddings model to use.

param model_kwargs: dict | None = None#

Other keyword arguments passed to the model.

param normalize: bool = False#

Whether to normalize the computed embeddings.

param query_instruction: str = 'query: '#

Instruction used to embed the query.
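
All of the fields above can be set at construction time. A minimal configuration sketch using only the fields documented here; the token, batch size, and normalize values are illustrative placeholders, not recommendations:

from langchain_community.embeddings import DeepInfraEmbeddings

deepinfra_emb = DeepInfraEmbeddings(
    model_id="sentence-transformers/clip-ViT-B-32",
    deepinfra_api_token="my-api-key",  # or rely on DEEPINFRA_API_TOKEN
    batch_size=512,                    # cap the number of texts per request
    normalize=True,                    # normalize the computed embeddings
    embed_instruction="passage: ",     # instruction prefix for documents
    query_instruction="query: ",       # instruction prefix for queries
)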

async aembed_documents(texts: List[str]) → List[List[float]]#

Asynchronously embed search docs.

Parameters:

texts (List[str]) – List of texts to embed.

Returns:

List of embeddings.

Return type:

List[List[float]]
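
A short async usage sketch, assuming DEEPINFRA_API_TOKEN is set in the environment:

import asyncio

from langchain_community.embeddings import DeepInfraEmbeddings

async def main() -> None:
    emb = DeepInfraEmbeddings()  # token read from DEEPINFRA_API_TOKEN
    vectors = await emb.aembed_documents(
        [
            "Alpha is the first letter of Greek alphabet",
            "Beta is the second letter of Greek alphabet",
        ]
    )
    print(len(vectors), len(vectors[0]))  # number of texts, embedding dimension

asyncio.run(main())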

async aembed_query(text: str) → List[float]#

Asynchronously embed query text.

Parameters:

text (str) โ€“ Text to embed.

Returns:

Embedding.

Return type:

List[float]
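
The query counterpart follows the same pattern; a sketch, again assuming the token is set in the environment:

import asyncio

from langchain_community.embeddings import DeepInfraEmbeddings

async def main() -> None:
    emb = DeepInfraEmbeddings()
    vector = await emb.aembed_query("What is the second letter of Greek alphabet")
    print(len(vector))  # embedding dimension

asyncio.run(main())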

embed_documents(texts: List[str]) → List[List[float]][source]#

Embed documents using a Deep Infra deployed embedding model. For larger batches, the input list of texts is chunked into smaller batches to avoid exceeding the maximum request size.

Parameters:

texts (List[str]) โ€“ The list of texts to embed.

Returns:

List of embeddings, one for each text.

Return type:

List[List[float]]
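
Because the input list is chunked according to batch_size, a larger corpus can be embedded in a single call; a sketch with a hypothetical corpus and an illustrative batch size, assuming the default model and a token in the environment:

from langchain_community.embeddings import DeepInfraEmbeddings

corpus = [f"Document number {i}" for i in range(3000)]  # placeholder texts

emb = DeepInfraEmbeddings(batch_size=512)  # at most 512 texts per request
vectors = emb.embed_documents(corpus)

assert len(vectors) == len(corpus)  # one embedding per input text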

embed_query(text: str) → List[float][source]#

Embed a query using a Deep Infra deployed embedding model.

Parameters:

text (str) โ€“ The text to embed.

Returns:

Embedding for the text.

Return type:

List[float]
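
A typical follow-up is to score the query embedding against document embeddings, for example with cosine similarity. A self-contained sketch (the scoring helper is not part of this class, just plain Python for illustration):

import math

from langchain_community.embeddings import DeepInfraEmbeddings

emb = DeepInfraEmbeddings()  # assumes DEEPINFRA_API_TOKEN is set

docs = [
    "Alpha is the first letter of Greek alphabet",
    "Beta is the second letter of Greek alphabet",
]
doc_vectors = emb.embed_documents(docs)
query_vector = emb.embed_query("What is the second letter of Greek alphabet")

def cosine(a: list[float], b: list[float]) -> float:
    # Plain cosine similarity, no external dependencies.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Rank documents by similarity to the query.
scores = sorted(
    ((doc, cosine(query_vector, vec)) for doc, vec in zip(docs, doc_vectors)),
    key=lambda pair: pair[1],
    reverse=True,
)
for doc, score in scores:
    print(f"{score:.3f}  {doc}")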

Examples using DeepInfraEmbeddings