aupdate_cache

async langchain_core.language_models.llms.aupdate_cache(cache: BaseCache | bool | None, existing_prompts: dict[int, list], llm_string: str, missing_prompt_idxs: list[int], new_results: LLMResult, prompts: list[str]) → dict | None

Update the cache and get the LLM output. Async version.

Parameters:
  • cache (BaseCache | bool | None) – Cache to use. A BaseCache instance is used directly; True requires the globally configured cache; False disables caching; None falls back to the global cache if one is set.

  • existing_prompts (dict[int, list]) – Dictionary mapping prompt index to cached generations; updated in place with the new results.

  • llm_string (str) – String representation of the LLM configuration, used as part of the cache key.

  • missing_prompt_idxs (list[int]) – Indexes of the prompts whose results were not found in the cache.

  • new_results (LLMResult) – Newly generated results for the missing prompts.

  • prompts (list[str]) – List of prompts.

Returns:

The LLM output (the llm_output field of new_results), if any.

Raises:

ValueError – If cache is True but no global cache has been set.

Return type:

dict | None
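
Example

A minimal sketch of writing freshly generated results back to a cache after a batch call. The InMemoryCache, the prompts, and the fake results below are illustrative assumptions, not part of this API's documentation.

import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import aupdate_cache
from langchain_core.outputs import Generation, LLMResult


async def main() -> None:
    cache = InMemoryCache()
    llm_string = "fake-llm-config"  # serialized LLM params forming part of the cache key (assumed value)
    prompts = ["What is 2 + 2?", "Name a prime number."]

    # Suppose prompt 0 was already cached and prompt 1 missed the cache.
    existing_prompts = {0: [Generation(text="4")]}
    missing_prompt_idxs = [1]

    # Newly generated results: one list of generations per missing prompt.
    new_results = LLMResult(
        generations=[[Generation(text="7")]],
        llm_output={"token_usage": {"total_tokens": 5}},
    )

    llm_output = await aupdate_cache(
        cache, existing_prompts, llm_string, missing_prompt_idxs, new_results, prompts
    )
    print(existing_prompts)  # now holds generations for indexes 0 and 1
    print(llm_output)        # {'token_usage': {'total_tokens': 5}}


asyncio.run(main())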