# set_llm_cache

langchain.globals.set_llm_cache(value: BaseCache | None) → None

Set a new LLM cache, overwriting the previous value, if any.

**Parameters:**
- value (BaseCache | None) – the cache instance to install, or None to clear the currently set cache.

**Return type:** None

## Examples using set_llm_cache

- Astra DB
- Cassandra
- Couchbase
- DSPy
- How to cache LLM responses
- How to cache chat model responses
- Model caches
- Momento
- MongoDB Atlas
- Redis
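A minimal usage sketch, assuming the `InMemoryCache` implementation from `langchain_core.caches`; any `BaseCache` subclass (such as the Redis or Cassandra caches listed above) can be passed the same way:

```python
from langchain.globals import set_llm_cache
from langchain_core.caches import InMemoryCache

# Install a process-wide in-memory cache: repeated calls with the
# same prompt and model parameters are served from the cache
# instead of triggering a new model call.
set_llm_cache(InMemoryCache())

# Passing None clears the cache, disabling caching again.
set_llm_cache(None)
```

Because the cache is a global, setting it once affects all subsequent LLM and chat model calls in the process until it is overwritten or cleared.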