TinyAsyncOpenAIInfinityEmbeddingClient#

class langchain_community.embeddings.infinity.TinyAsyncOpenAIInfinityEmbeddingClient(host: str = 'http://localhost:7797/v1', aiosession: ClientSession | None = None)[source]#

Helper tool for embedding texts with an Infinity server.

It is not part of LangChain’s stable API; direct use is discouraged.

Example

mini_client = TinyAsyncOpenAIInfinityEmbeddingClient(
)
embeds = mini_client.embed(
    model="BAAI/bge-small",
    texts=["doc1", "doc2"]
)
# or
embeds = await mini_client.aembed(
    model="BAAI/bge-small",
    texts=["doc1", "doc2"]
)

Methods

__init__([host, aiosession])

aembed(model, texts)

Call the embedding model asynchronously.

embed(model, texts)

Call the embedding model.

Parameters:
  • host (str) – base URL of the Infinity server; defaults to 'http://localhost:7797/v1'.

  • aiosession (ClientSession | None) – optional aiohttp ClientSession to reuse for requests.

__init__(host: str = 'http://localhost:7797/v1', aiosession: ClientSession | None = None) → None[source]#
Parameters:
  • host (str) – base URL of the Infinity server; defaults to 'http://localhost:7797/v1'.

  • aiosession (ClientSession | None) – optional aiohttp ClientSession to reuse for requests.

Return type:

None

async aembed(model: str, texts: List[str]) → List[List[float]][source]#

Call the embedding model asynchronously.

Parameters:
  • model (str) – name of the embedding model.

  • texts (List[str]) – List of sentences to embed.

Returns:

List of vectors for each sentence

Return type:

List[List[float]]

embed(model: str, texts: List[str]) → List[List[float]][source]#

Call the embedding model.

Parameters:
  • model (str) – name of the embedding model.

  • texts (List[str]) – List of sentences to embed.

Returns:

List of vectors for each sentence

Return type:

List[List[float]]
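The default host ends in /v1, which suggests an OpenAI-compatible embeddings endpoint. A hedged sketch of the request payload such a client presumably builds and how a response would be parsed back into per-text vectors; the exact wire format here is an assumption based on the OpenAI-style endpoint, not taken from this API reference:

```python
from typing import Dict, List

def build_embedding_payload(model: str, texts: List[str]) -> Dict:
    # Assumed OpenAI-style /embeddings request body: model name plus input texts.
    return {"model": model, "input": texts}

def parse_embedding_response(response: Dict) -> List[List[float]]:
    # Assumed OpenAI-style response: one {"index", "embedding"} item per
    # input; sort by index so output order matches input order.
    items = sorted(response["data"], key=lambda d: d["index"])
    return [item["embedding"] for item in items]

payload = build_embedding_payload("BAAI/bge-small", ["doc1", "doc2"])
# Hypothetical server reply with items out of order:
fake_response = {"data": [
    {"index": 1, "embedding": [0.2, 0.4]},
    {"index": 0, "embedding": [0.1, 0.3]},
]}
vectors = parse_embedding_response(fake_response)
```

Sorting by `index` matters because nothing guarantees the server echoes items in input order.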