IpexLLMBgeEmbeddings#
- class langchain_community.embeddings.ipex_llm.IpexLLMBgeEmbeddings[source]#
Bases: BaseModel, Embeddings
Wrapper around the BGE embedding model with IPEX-LLM optimizations on Intel CPUs and GPUs.
To use, you should have the ipex-llm and sentence_transformers packages installed. Refer to here for installation on Intel CPU.
- Example on Intel CPU:
```python
from langchain_community.embeddings import IpexLLMBgeEmbeddings

embedding_model = IpexLLMBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={},
    encode_kwargs={"normalize_embeddings": True},
)
```
Refer to here for installation on Intel GPU.
- Example on Intel GPU:
```python
from langchain_community.embeddings import IpexLLMBgeEmbeddings

embedding_model = IpexLLMBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={"device": "xpu"},
    encode_kwargs={"normalize_embeddings": True},
)
```
Initialize the sentence_transformer.
- param cache_folder: str | None = None#
Path to store models. Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
- param embed_instruction: str = ''#
Instruction to use for embedding documents.
- param encode_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass when calling the encode method of the model.
- param model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass to the model.
- param model_name: str = 'BAAI/bge-small-en-v1.5'#
Model name to use.
- param query_instruction: str = 'Represent this question for searching relevant passages: '#
Instruction to use for embedding the query.
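The two instruction parameters above serve different roles: embed_instruction is prepended to each document before encoding, while query_instruction is prepended to the search query. A minimal, dependency-free sketch of this prepending behavior (an illustration of the documented parameters, not the library's actual internals):

```python
# Defaults taken from the parameter list above.
EMBED_INSTRUCTION = ""
QUERY_INSTRUCTION = "Represent this question for searching relevant passages: "

def prepare_documents(texts: list[str]) -> list[str]:
    # embed_instruction is prepended to every document before encoding
    return [EMBED_INSTRUCTION + t for t in texts]

def prepare_query(text: str) -> str:
    # query_instruction is prepended to the query before encoding
    return QUERY_INSTRUCTION + text

print(prepare_query("What is IPEX-LLM?"))
# Represent this question for searching relevant passages: What is IPEX-LLM?
```

For BGE models, prepending the retrieval instruction to queries (but not to documents) is the recommended usage pattern; leaving embed_instruction empty matches that convention.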
- async aembed_documents(texts: list[str]) list[list[float]] #
Asynchronously embed search docs.
- Parameters:
texts (list[str]) – List of text to embed.
- Returns:
List of embeddings.
- Return type:
list[list[float]]
- async aembed_query(text: str) list[float] #
Asynchronously embed query text.
- Parameters:
text (str) – Text to embed.
- Returns:
Embedding.
- Return type:
list[float]
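Unless a subclass overrides them, the async variants on LangChain's Embeddings base class delegate to the synchronous methods via the event loop's executor, so they do not block the loop. A minimal sketch of that delegation pattern using a hypothetical stand-in class (the real class encodes via sentence_transformers):

```python
import asyncio

class SketchEmbeddings:
    """Hypothetical stand-in illustrating the default sync-to-async bridge."""

    def embed_query(self, text: str) -> list[float]:
        # Placeholder for the real model call.
        return [float(len(text))]

    async def aembed_query(self, text: str) -> list[float]:
        # Run the blocking sync method in the default executor
        # so the event loop stays responsive.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self.embed_query, text)

result = asyncio.run(SketchEmbeddings().aembed_query("hello"))
print(result)  # [5.0]
```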