caches

Warning: Beta Feature!

Cache provides an optional caching layer for LLMs.

Cache is useful for two reasons:

  • It can save you money by reducing the number of API calls you make to the LLM provider when you request the same completion multiple times.

  • It can speed up your application for the same reason: a repeated request is answered from the cache instead of a fresh round trip to the provider (see the setup sketch below).

Cache directly competes with Memory. See the documentation for the pros and cons of each.
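The snippet below is a minimal setup sketch, assuming the langchain_core package layout (InMemoryCache from langchain_core.caches, set_llm_cache from langchain_core.globals); the model lines are placeholders:

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Register a process-wide cache: every LLM or chat model call made after
# this point consults the cache before contacting the provider.
set_llm_cache(InMemoryCache())

# llm = ...                      # any LLM or chat model instance
# llm.invoke("Tell me a joke")   # first call goes to the provider
# llm.invoke("Tell me a joke")   # identical call is served from the cache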

Class hierarchy:

BaseCache --> <name>Cache  # Examples: InMemoryCache, RedisCache, GPTCache

Classes

caches.BaseCache()

Interface for a caching layer for LLMs and chat models (see the subclass sketch after this listing).

caches.InMemoryCache(*[, maxsize])

Cache that stores generations in memory, optionally bounded by maxsize (see the usage sketch after this listing).
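Custom backends plug in by subclassing BaseCache. Below is a hedged sketch of a minimal subclass, assuming the abstract methods are lookup, update, and clear, and that RETURN_VAL_TYPE (a sequence of Generation objects) is importable from langchain_core.caches; DictCache is a hypothetical name:

from typing import Any, Optional

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class DictCache(BaseCache):
    """Toy cache backed by a plain dict, keyed on (prompt, llm_string)."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        # Return the cached generations, or None on a cache miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        # Record the generations produced for this prompt/LLM configuration.
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        # Drop every cached entry.
        self._store.clear()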
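And a short InMemoryCache usage sketch; per the signature above, maxsize is keyword-only, and the assumption here is that it bounds the number of stored entries, evicting old ones once the limit is hit:

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache(maxsize=100)  # at most 100 cached entries (assumed bound)

# Keys are (prompt, llm_string) pairs; values are sequences of Generations.
cache.update("prompt", "llm-string", [Generation(text="hello")])
assert cache.lookup("prompt", "llm-string") == [Generation(text="hello")]

cache.clear()
assert cache.lookup("prompt", "llm-string") is None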