QuantizedBgeEmbeddings#
- class langchain_community.embeddings.itrex.QuantizedBgeEmbeddings[source]#
Bases: BaseModel, Embeddings
Leverage the Itrex runtime to unlock the performance of compressed NLP models.
Please ensure that you have installed intel-extension-for-transformers.
- Input:
- model_name: str =
Model name.
- max_seq_len: int =
The maximum sequence length for tokenization. (default 512)
- pooling_strategy: str =
"mean" or "cls", pooling strategy for the final layer. (default "mean")
- query_instruction: Optional[str] =
An instruction to add to the query before embedding. (default None)
- document_instruction: Optional[str] =
An instruction to add to each document before embedding. (default None)
- padding: Optional[bool] =
Whether to add padding during tokenization or not. (default True)
- model_kwargs: Optional[Dict] =
Parameters to add to the model during initialization. (default {})
- encode_kwargs: Optional[Dict] =
Parameters to add during the embedding forward pass. (default {})
- onnx_file_name: Optional[str] =
File name of the ONNX optimized model exported by itrex. (default "int8-model.onnx")
Example
from langchain_community.embeddings import QuantizedBgeEmbeddings

model_name = "Intel/bge-small-en-v1.5-sts-int8-static-inc"
encode_kwargs = {'normalize_embeddings': True}
hf = QuantizedBgeEmbeddings(
    model_name,
    encode_kwargs=encode_kwargs,
    query_instruction="Represent this sentence for searching relevant passages: "
)
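Once constructed, hf can be used like any other Embeddings implementation. A brief usage sketch (the query text is illustrative; loading the model requires intel-extension-for-transformers and a local or downloadable copy of the model):

query_vector = hf.embed_query("What is ITREX?")  # query_instruction is added to the query before embedding
print(len(query_vector))  # embedding dimension of the quantized BGE model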
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- async aembed_documents(texts: List[str]) → List[List[float]]#
Asynchronously embed search docs.
- Parameters:
texts (List[str]) – List of texts to embed.
- Returns:
List of embeddings.
- Return type:
List[List[float]]
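The async variants can be awaited from an event loop. A minimal sketch, assuming the hf instance constructed in the example above:

import asyncio

async def main() -> None:
    vectors = await hf.aembed_documents(["first passage", "second passage"])
    print(len(vectors), len(vectors[0]))  # 2 documents, embedding dimension

asyncio.run(main())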
- async aembed_query(text: str) → List[float]#
Asynchronously embed query text.
- Parameters:
text (str) – Text to embed.
- Returns:
Embedding.
- Return type:
List[float]
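Likewise for a single query. A short sketch, again assuming the hf instance from the constructor example:

import asyncio

async def lookup() -> None:
    vector = await hf.aembed_query("What is ITREX?")
    print(len(vector))  # embedding dimension

asyncio.run(lookup())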
- embed_documents(texts: List[str]) → List[List[float]][source]#
Embed a list of text documents using the Optimized Embedder model.
- Input:
texts: List[str] = List of text documents to embed.
- Output:
List[List[float]] = The embeddings of each text document.
- Parameters:
texts (List[str]) – List of text documents to embed.
- Return type:
List[List[float]]
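A short synchronous usage sketch, assuming the hf instance from the constructor example above (the texts are illustrative):

texts = [
    "Quantization reduces model size and latency.",
    "BGE models map sentences to dense vectors.",
]
embeddings = hf.embed_documents(texts)  # document_instruction, if set, is added to each text
assert len(embeddings) == len(texts)  # one embedding vector per input document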
Examples using QuantizedBgeEmbeddings