set_llm_cache

langchain_core.globals.set_llm_cache(value: BaseCache | None) → None

Set a new LLM cache, overwriting the previous value, if any.

Parameters:

value (BaseCache | None) – The new LLM cache to use. If None, the LLM cache is disabled.

Return type:

None
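
A minimal usage sketch, assuming langchain_core is installed and using the built-in InMemoryCache from langchain_core.caches:

    from langchain_core.caches import InMemoryCache
    from langchain_core.globals import set_llm_cache

    # Enable a simple in-memory cache; repeated identical LLM calls
    # return the cached response instead of re-querying the model.
    set_llm_cache(InMemoryCache())

    # Pass None to disable LLM caching again.
    set_llm_cache(None)

Any BaseCache implementation (for example a Redis- or database-backed cache from an integration package) can be passed in place of InMemoryCache.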

Examples using set_llm_cache

  • Astra DB
  • Cassandra
  • Couchbase
  • DSPy
  • How to cache LLM responses
  • How to cache chat model responses
  • Model caches
  • Momento
  • MongoDB Atlas
  • Redis

