update_cache
- langchain_core.language_models.llms.update_cache(cache: BaseCache | bool | None, existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → dict | None
Update the cache and get the LLM output.
- Parameters:
  - cache (BaseCache | bool | None) – Cache to use. A BaseCache instance, True/False to enable or disable the globally configured cache, or None to fall back to the global default.
  - existing_prompts (Dict[int, List]) – Mapping from prompt index to generations already found in the cache; updated in place with the new results.
  - llm_string (str) – String representation of the LLM configuration, used as part of the cache key.
  - missing_prompt_idxs (List[int]) – Indices of the prompts that were not found in the cache.
  - new_results (LLMResult) – Newly generated results for the missing prompts.
  - prompts (List[str]) – The full list of prompt strings.
- Returns:
The llm_output metadata dict from new_results, or None if the LLM provided none.
- Raises:
ValueError – If cache is True but no global cache has been configured.
- Return type:
dict | None
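
Below is a minimal sketch of how update_cache might be driven after a cache-aware generation pass, assuming the documented signature above and an InMemoryCache from langchain_core.caches. The prompts, the llm_string value, and the fake results are illustrative stand-ins; in a real flow the inputs would come from a prior cache-lookup step (such as the companion get_prompts helper in the same module) followed by a call to the underlying LLM for the missing prompts only.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import update_cache
from langchain_core.outputs import Generation, LLMResult

cache = InMemoryCache()

prompts = ["What is 2 + 2?", "Name a prime number."]
llm_string = "fake-llm-config"  # stand-in for the serialized LLM params used in the cache key
existing_prompts: dict = {}     # nothing cached yet, so every prompt is missing
missing_prompt_idxs = [0, 1]

# Stand-in results, as if the underlying LLM had just generated the missing prompts.
new_results = LLMResult(
    generations=[[Generation(text="4")], [Generation(text="7")]],
    llm_output={"token_usage": {}},
)

# Stores each new generation in the cache under (prompt, llm_string),
# fills existing_prompts in place, and returns the llm_output metadata.
llm_output = update_cache(
    cache=cache,
    existing_prompts=existing_prompts,
    llm_string=llm_string,
    missing_prompt_idxs=missing_prompt_idxs,
    new_results=new_results,
    prompts=prompts,
)

print(llm_output)                            # the llm_output dict passed in above
print(cache.lookup(prompts[0], llm_string))  # the freshly cached generations for prompt 0
```

Because entries are keyed on the (prompt, llm_string) pair, the same prompt cached under different model configurations does not collide, and existing_prompts ends up holding generations for every prompt index, cached and fresh alike.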