BaseCache

class langchain_core.caches.BaseCache

Interface for a caching layer for LLMs and Chat models.

The cache interface consists of the following methods:

  • lookup: Look up a value based on a prompt and llm_string.

  • update: Update the cache based on a prompt and llm_string.

  • clear: Clear the cache.

In addition, the cache interface provides an async version of each method.

The default implementation of the async methods is to run the synchronous method in an executor. It’s recommended to override the async methods and provide async implementations to avoid unnecessary overhead.
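For example, a minimal synchronous implementation can back all three methods with a plain dictionary keyed by the (prompt, llm_string) pair. The sketch below assumes nothing beyond the interface documented on this page; the class name SimpleDictCache and its internal attribute are illustrative, not part of langchain_core:

from typing import Any, Optional, Sequence

from langchain_core.caches import BaseCache
from langchain_core.outputs import Generation


class SimpleDictCache(BaseCache):
    """Illustrative cache that stores generations in an in-process dict."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], Sequence[Generation]] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        # Return the cached generations on a hit, None on a miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()

Because the async methods have default implementations, this class is already usable from both sync and async code paths; the defaults simply run the methods above in an executor.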

Methods

__init__()

aclear(**kwargs)

Async clear the cache. Accepts additional keyword arguments.

alookup(prompt, llm_string)

Async look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Async update cache based on prompt and llm_string.

clear(**kwargs)

Clear the cache. Accepts additional keyword arguments.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__()
async aclear(**kwargs: Any) → None

Async clear the cache. Accepts additional keyword arguments.

Parameters:

kwargs (Any) – Additional keyword arguments passed to the implementation.

Return type:

None

async alookup(prompt: str, llm_string: str) → Sequence[Generation] | None

Async look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a Chat model, this is a non-trivial serialization of the messages sent to the model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns:

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type:

Sequence[Generation] | None

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None

Async update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match the one used by the lookup method.

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a Chat model, this is a non-trivial serialization of the messages sent to the model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type:

None
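When the backing store is natively asynchronous, overriding the async methods avoids the executor round trip mentioned at the top of this page. The sketch below assumes a hypothetical async key-value client with get, set, and clear coroutines; neither the client nor the class name is a langchain_core API:

class AsyncBackedCache(BaseCache):
    """Illustrative cache over a hypothetical async key-value client."""

    def __init__(self, client: Any) -> None:
        self._client = client  # hypothetical async client, not a real API

    async def alookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        # Await the backend directly instead of dispatching the sync
        # lookup to an executor.
        return await self._client.get((prompt, llm_string))

    async def aupdate(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
        await self._client.set((prompt, llm_string), return_val)

    async def aclear(self, **kwargs: Any) -> None:
        await self._client.clear()

    # The abstract sync methods (lookup, update, clear) must still be
    # implemented before this class can be instantiated; omitted here
    # for brevity.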

abstract clear(**kwargs: Any) → None

Clear the cache. Accepts additional keyword arguments.

Parameters:

kwargs (Any) – Additional keyword arguments passed to the implementation.

Return type:

None

abstract lookup(prompt: str, llm_string: str) → Sequence[Generation] | None

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a Chat model, this is a non-trivial serialization of the messages sent to the model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns:

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type:

Sequence[Generation] | None
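One possible keying scheme, as suggested above, is to join the two strings with a delimiter and optionally hash the result to bound key length. The helper below is illustrative, not part of langchain_core:

import hashlib


def make_cache_key(prompt: str, llm_string: str) -> str:
    # Hypothetical helper: join with a record-separator character that is
    # unlikely to appear in either string, then hash to bound key length.
    joined = f"{prompt}\x1e{llm_string}"
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()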

abstract update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a Chat model, this is a non-trivial serialization of the messages sent to the model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type:

None
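As a usage sketch, the round trip below stores and retrieves a Generation through this interface and then installs the cache globally with set_llm_cache (a real helper in langchain_core.globals). SimpleDictCache is the illustrative class sketched at the top of this page, and the llm_string value is an arbitrary placeholder:

from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation

cache = SimpleDictCache()
llm_string = '{"model": "example", "temperature": 0}'  # illustrative

cache.update("Tell me a joke.", llm_string, [Generation(text="...")])
assert cache.lookup("Tell me a joke.", llm_string)[0].text == "..."
assert cache.lookup("Another prompt.", llm_string) is None  # cache miss

# Install as the global LLM cache so models consult it automatically.
set_llm_cache(cache)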