create_context_cache#
- langchain_google_vertexai.utils.create_context_cache(model: ChatVertexAI, messages: List[BaseMessage], expire_time: datetime | None = None, time_to_live: timedelta | None = None, tools: Sequence[Tool | Tool | _ToolDictLike | BaseTool | Type[BaseModel] | FunctionDescription | Callable | FunctionDeclaration | Dict[str, Any]] | None = None, tool_config: _ToolConfigDict | None = None) → str [source]#
Creates a context cache for content used with a given model.
- Parameters:
model (ChatVertexAI) – ChatVertexAI model. Must be at least gemini-1.5-pro or gemini-1.5-flash.
messages (List[BaseMessage]) – List of messages to cache.
expire_time (datetime | None) – Timestamp of when this resource is considered expired. At most one of expire_time and time_to_live can be set. If neither is set, a default TTL on the API side will be used (currently 1 hour).
time_to_live (timedelta | None) – The TTL for this resource. If provided, the expiration time is computed as created_time + TTL. At most one of expire_time and time_to_live can be set. If neither is set, a default TTL on the API side will be used (currently 1 hour).
tools (Sequence[Tool | Tool | _ToolDictLike | BaseTool | Type[BaseModel] | FunctionDescription | Callable | FunctionDeclaration | Dict[str, Any]] | None) – A list of tool definitions to bind to this chat model. Can be a pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation.
tool_config (_ToolConfigDict | None) – Optional. Immutable. Tool config. This config is shared for all tools.
- Raises:
ValueError – If the model doesn't support context caching.
- Returns:
String with the identifier of the created cache.
- Return type:
str
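
Example (a minimal sketch, not part of the reference above: it assumes a Google Cloud project with Vertex AI enabled, a model that supports context caching, and that the returned cache identifier can be reused via ChatVertexAI's cached_content parameter; the model name and document text are placeholders):

```python
# Sketch: create a context cache for a long prompt and reuse it on later calls.
from datetime import timedelta

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI
from langchain_google_vertexai.utils import create_context_cache

model = ChatVertexAI(model_name="gemini-1.5-pro-002")  # placeholder model name

# Cache a long system prompt / document so subsequent requests can reuse it.
# Note: the API enforces a minimum size for cached content, so the document
# below would need to be genuinely long in practice.
cache_id = create_context_cache(
    model,
    messages=[
        SystemMessage(content="You are an expert on the attached contract."),
        HumanMessage(content="<very long document text here>"),
    ],
    time_to_live=timedelta(hours=2),  # alternatively, pass expire_time
)

# Reuse the cache on later requests by passing its identifier to the model.
cached_model = ChatVertexAI(
    model_name="gemini-1.5-pro-002",
    cached_content=cache_id,
)
response = cached_model.invoke("Summarize the termination clause.")
print(response.content)
```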