InMemoryCache

class langchain_core.caches.InMemoryCache(*, maxsize: int | None = None)[source]

Cache that stores LLM generations in memory.

Initialize with an empty cache.

Parameters:

maxsize (int | None) – The maximum number of items to store in the cache. If None, the cache has no maximum size. If the cache exceeds the maximum size, the oldest items are removed. Default is None.

Raises:

ValueError – If maxsize is less than or equal to 0.
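The following is a minimal usage sketch, not part of the reference itself: construct the cache and register it as the process-wide LLM cache via langchain_core.globals.set_llm_cache. The maxsize value is illustrative.

```python
# Minimal sketch: register InMemoryCache as the process-wide LLM cache.
# maxsize=100 is an illustrative choice; omit it for an unbounded cache.
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

set_llm_cache(InMemoryCache(maxsize=100))

# From here on, repeated identical calls to an LLM or chat model in this
# process can be answered from memory instead of re-invoking the model.
```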

Methods

__init__(*[, maxsize])

Initialize with an empty cache.

aclear(**kwargs)

Async clear cache.

alookup(prompt, llm_string)

Async look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Async update cache based on prompt and llm_string.

clear(**kwargs)

Clear cache.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__(*, maxsize: int | None = None) → None[source]

Initialize with an empty cache.

Parameters:

maxsize (int | None) – The maximum number of items to store in the cache. If None, the cache has no maximum size. If the cache exceeds the maximum size, the oldest items are removed. Default is None.

Raises:

ValueError – If maxsize is less than or equal to 0.

Return type:

None
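A short sketch of the constructor's validation and the bounded-size behavior described above. The eviction order assumed here (oldest-inserted entry removed first) follows the docstring's "the oldest items are removed".

```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

# maxsize must be a positive integer, or None for an unbounded cache;
# zero or negative values raise ValueError.
try:
    InMemoryCache(maxsize=0)
except ValueError:
    print("maxsize <= 0 is rejected")

# With maxsize=1, storing a second entry evicts the oldest one
# (insertion-order eviction is assumed here, per the docstring).
cache = InMemoryCache(maxsize=1)
cache.update("prompt-1", "llm-config", [Generation(text="one")])
cache.update("prompt-2", "llm-config", [Generation(text="two")])

assert cache.lookup("prompt-1", "llm-config") is None      # evicted
assert cache.lookup("prompt-2", "llm-config") is not None  # retained
```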

async aclear(**kwargs: Any) → None[source]

Async clear cache.

Parameters:

kwargs (Any) – Additional keyword arguments; accepted for interface compatibility.

Return type:

None

async alookup(prompt: str, llm_string: str) → Sequence[Generation] | None[source]

Async look up based on prompt and llm_string.

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a chat model, this is a non-trivial serialization of the prompt sent to the language model.

  • llm_string (str) – A string representation of the LLM configuration.

Returns:

On a cache miss, return None. On a cache hit, return the cached value.

Return type:

Sequence[Generation] | None

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]

Async update cache based on prompt and llm_string.

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a chat model, this is a non-trivial serialization of the prompt sent to the language model.

  • llm_string (str) – A string representation of the LLM configuration.

  • return_val (Sequence[Generation]) – The value to be cached: a sequence of Generation objects (or subclasses).

Return type:

None
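A self-contained sketch of the async surface (aupdate, alookup, aclear). The prompt and llm_string values are made up for illustration; no model call is involved.

```python
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation


async def main() -> None:
    cache = InMemoryCache()
    prompt = "What is 2 + 2?"
    llm_string = "fake-llm-config"  # illustrative; normally derived from the model

    # Cache miss before anything is stored.
    assert await cache.alookup(prompt, llm_string) is None

    # Store a value, then read it back.
    await cache.aupdate(prompt, llm_string, [Generation(text="4")])
    hit = await cache.alookup(prompt, llm_string)
    assert hit is not None and hit[0].text == "4"

    # aclear removes every entry.
    await cache.aclear()
    assert await cache.alookup(prompt, llm_string) is None


asyncio.run(main())
```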

clear(**kwargs: Any) → None[source]

Clear cache.

Parameters:

kwargs (Any) – Additional keyword arguments; accepted for interface compatibility.

Return type:

None

lookup(prompt: str, llm_string: str) → Sequence[Generation] | None[source]

Look up based on prompt and llm_string.

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a chat model, this is a non-trivial serialization of the prompt sent to the language model.

  • llm_string (str) – A string representation of the LLM configuration.

Returns:

On a cache miss, return None. On a cache hit, return the cached value.

Return type:

Sequence[Generation] | None

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]

Update cache based on prompt and llm_string.

Parameters:
  • prompt (str) – A string representation of the prompt. In the case of a chat model, this is a non-trivial serialization of the prompt sent to the language model.

  • llm_string (str) – A string representation of the LLM configuration.

  • return_val (Sequence[Generation]) – The value to be cached: a sequence of Generation objects (or subclasses).

Return type:

None
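The synchronous counterpart, again a sketch with made-up prompt and llm_string values: a miss, an update, a hit, and a clear.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache()
prompt = "Translate 'hello' to French."
llm_string = "fake-llm-config"  # illustrative; normally derived from the model

assert cache.lookup(prompt, llm_string) is None            # miss
cache.update(prompt, llm_string, [Generation(text="bonjour")])

cached = cache.lookup(prompt, llm_string)                  # hit
assert cached is not None and cached[0].text == "bonjour"

cache.clear()
assert cache.lookup(prompt, llm_string) is None            # cleared
```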

Examples using InMemoryCache