
MiniMax offers an embeddings service.

This example goes over how to use LangChain to interact with MiniMax Inference for text embedding.

import os

from langchain_community.embeddings import MiniMaxEmbeddings
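`MiniMaxEmbeddings` reads its credentials from the environment. A minimal setup sketch, assuming the `MINIMAX_GROUP_ID` and `MINIMAX_API_KEY` environment variables (the placeholder values are hypothetical):

```python
import os

# Hypothetical placeholder values -- substitute your real MiniMax credentials.
os.environ["MINIMAX_GROUP_ID"] = "your-group-id"
os.environ["MINIMAX_API_KEY"] = "your-api-key"
```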

embeddings = MiniMaxEmbeddings()

# Embed a single query string.
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)

# Embed a list of documents; returns one vector per document.
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
import numpy as np

query_numpy = np.array(query_result)
document_numpy = np.array(document_result[0])
similarity =, document_numpy) / (
    np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)
)
print(f"Cosine similarity between document and query: {similarity}")
Cosine similarity between document and query: 0.1573236279277012