# SingleStoreSemanticCache

This example demonstrates how to get started with the SingleStore semantic cache.
## Integration Overview

`SingleStoreSemanticCache` leverages `SingleStoreVectorStore` to cache LLM responses directly in a SingleStore database, enabling efficient semantic retrieval and reuse of results.
### Integration details
| Class | Package | JS support |
| --- | --- | --- |
| `SingleStoreSemanticCache` | `langchain_singlestore` | ❌ |
## Installation
This cache lives in the `langchain-singlestore` package:

```python
%pip install -qU langchain-singlestore
```
## Usage
```python
from langchain_core.globals import set_llm_cache
from langchain_singlestore import SingleStoreSemanticCache

set_llm_cache(
    SingleStoreSemanticCache(
        embedding=YourEmbeddings(),
        host="root:pass@localhost:3306/db",
    )
)
```
API Reference: `set_llm_cache`
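The cells below call `llm.invoke(...)`, but the snippet above leaves both the embeddings model (`YourEmbeddings()` is a placeholder) and `llm` undefined. As a minimal sketch, here is one way to wire everything together, assuming you use `langchain-openai` with an `OPENAI_API_KEY` set; `OpenAIEmbeddings` and `ChatOpenAI` are illustrative choices, and any LangChain embeddings and chat model would work:

```python
# Sketch only: assumes `pip install -qU langchain-openai` and OPENAI_API_KEY in the environment.
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

from langchain_singlestore import SingleStoreSemanticCache

# Any embeddings model can stand in for the YourEmbeddings() placeholder above.
set_llm_cache(
    SingleStoreSemanticCache(
        embedding=OpenAIEmbeddings(),
        host="root:pass@localhost:3306/db",  # user:password@host:port/database
    )
)

# The `llm` used in the timing cells below.
llm = ChatOpenAI(model="gpt-4o-mini")
```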
```python
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
```
```python
%%time
# The second time, while not a direct hit, the question is semantically similar to the original question,
# so it uses the cached result!
llm.invoke("Tell me one joke")
```