set_llm_cache

langchain.globals.set_llm_cache(value: BaseCache | None) → None

Set a new LLM cache, overwriting the previous value, if any.

Parameters:

value (BaseCache | None) – The cache implementation to install globally. Pass None to remove any existing cache and disable caching.

Return type:

None
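
Example

A minimal usage sketch (the InMemoryCache import path below assumes langchain_core.caches; any BaseCache implementation can be passed instead):

from langchain_core.caches import InMemoryCache
from langchain.globals import set_llm_cache

# Install a global in-memory cache: repeated identical prompts are
# answered from the cache instead of re-querying the provider.
set_llm_cache(InMemoryCache())

# ... invoke LLMs / chat models as usual ...

# Passing None removes the cache and disables caching again.
set_llm_cache(None)

The setting is process-global, so it applies to every subsequent LLM and chat model call.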

Examples using set_llm_cache

  • Astra DB
  • Cassandra
  • Couchbase
  • DSPy
  • How to cache LLM responses
  • How to cache chat model responses
  • Model caches
  • Momento
  • MongoDB Atlas
  • Redis
