aget_prompts

async langchain_core.language_models.llms.aget_prompts(params: dict[str, Any], prompts: list[str], cache: BaseCache | bool | None = None) → tuple[dict[int, list], str, list[int], list[str]]

Get prompts that are already cached. Async version.

Parameters:
  • params (dict[str, Any]) – Dictionary of parameters.

  • prompts (list[str]) – List of prompts.

  • cache (BaseCache | bool | None) – Cache object to use directly, a bool to explicitly enable or disable the global cache, or None to fall back to the global cache. Default is None.

Returns:

A tuple of existing prompts, llm_string, missing prompt indexes, and missing prompts.

Raises:

ValueError – If cache is True but no global cache has been set.

Return type:

tuple[dict[int, list], str, list[int], list[str]]
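
A minimal usage sketch follows, assuming langchain_core's InMemoryCache. The contents of the params dictionary are illustrative only; the function serializes them into the llm_string cache key.

```python
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import aget_prompts


async def main() -> None:
    cache = InMemoryCache()
    # Hypothetical model parameters; only used to derive the llm_string key.
    params = {"model_name": "fake-model", "temperature": 0.0}
    prompts = ["What is 2 + 2?", "Name a prime number."]

    existing, llm_string, missing_idxs, missing = await aget_prompts(
        params, prompts, cache=cache
    )

    # With an empty cache, nothing is found: every prompt is "missing".
    assert existing == {}
    assert missing_idxs == [0, 1]
    assert missing == prompts


asyncio.run(main())
```

With a pre-populated cache, hits would instead appear in the first element of the tuple, keyed by each prompt's index in the input list.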