tool_example_to_messages
langchain_core.utils.function_calling.tool_example_to_messages(input: str, tool_calls: list[BaseModel], tool_outputs: list[str] | None = None) → list[BaseMessage]
Convert a single example into a list of messages that can be fed into a chat model.
The list of messages per example corresponds to:
- HumanMessage: contains the content from which information should be extracted.
- AIMessage: contains the information extracted by the model.
- ToolMessage: contains confirmation to the model that the model requested the tool correctly.
The ToolMessage is required because some chat models are hyper-optimized for agents rather than for an extraction use case.
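The resulting three-message pattern can be sketched schematically with plain dicts. This is an illustration only: the real function returns HumanMessage, AIMessage, and ToolMessage objects from langchain_core.messages, the tool-call id is generated by the library, and the placeholder tool-output text below is an assumption rather than the library's exact string.

```python
# Schematic sketch of the per-example message shape, using plain dicts
# instead of the real LangChain message classes. The tool-output text and
# call id are illustrative assumptions.
def sketch_example_messages(user_input, tool_name, tool_args):
    call_id = "call-0"  # real messages carry a generated tool-call id
    return [
        # HumanMessage: the text to extract from
        {"role": "human", "content": user_input},
        # AIMessage: an empty content with a structured tool call
        {"role": "ai", "content": "", "tool_calls": [
            {"id": call_id, "name": tool_name, "args": tool_args},
        ]},
        # ToolMessage: confirms the tool call, linked by tool_call_id
        {"role": "tool", "content": "Tool called correctly.",
         "tool_call_id": call_id},
    ]

msgs = sketch_example_messages(
    "Fiona traveled far from France to Spain.",
    "Person",
    {"name": "Fiona", "hair_color": None, "height_in_meters": None},
)
```

Note how the ToolMessage echoes the AIMessage's tool-call id; that link is what tells the model its tool request was well-formed.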
- Parameters:
input (str) – the user input
tool_calls (list[BaseModel]) – a list of tool calls represented as Pydantic BaseModels
tool_outputs (list[str] | None) – an optional list of tool call outputs. If not provided, a placeholder value is inserted for each tool call. Defaults to None.
- Returns:
A list of messages
- Return type:
list[BaseMessage]
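How tool_outputs lines up with tool_calls can be illustrated with a small standalone sketch. The pairing logic is an assumption based on the parameter description above, and the placeholder string is illustrative, not the library's exact text.

```python
# Sketch of tool_outputs handling (assumption: one output per tool call,
# with a placeholder substituted for every call when tool_outputs is None).
PLACEHOLDER = "Tool called correctly."  # illustrative, not the actual string

def pair_tool_outputs(tool_calls, tool_outputs=None):
    """Pair each tool call with its output, filling in placeholders."""
    if tool_outputs is None:
        tool_outputs = [PLACEHOLDER] * len(tool_calls)
    if len(tool_outputs) != len(tool_calls):
        raise ValueError("tool_outputs must match tool_calls in length")
    return list(zip(tool_calls, tool_outputs))

pairs = pair_tool_outputs(["call_a", "call_b"])          # placeholders filled in
explicit = pair_tool_outputs(["call_a"], ["looked up"])  # explicit output kept
```

This is why the parameter "does not need to be provided": for extraction few-shot examples, the confirmation content matters more than any real tool result.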
Examples
from typing import Optional

from pydantic import BaseModel, Field

from langchain_core.utils.function_calling import tool_example_to_messages


class Person(BaseModel):
    '''Information about a person.'''

    name: Optional[str] = Field(..., description="The name of the person")
    hair_color: Optional[str] = Field(
        ..., description="The color of the person's hair if known"
    )
    height_in_meters: Optional[str] = Field(
        ..., description="Height in METERs"
    )


examples = [
    (
        "The ocean is vast and blue. It's more than 20,000 feet deep.",
        Person(name=None, height_in_meters=None, hair_color=None),
    ),
    (
        "Fiona traveled far from France to Spain.",
        Person(name="Fiona", height_in_meters=None, hair_color=None),
    ),
]

messages = []
for txt, tool_call in examples:
    messages.extend(tool_example_to_messages(txt, [tool_call]))