ToolAgentAction#

class langchain.agents.output_parsers.tools.ToolAgentAction[source]#

Bases: AgentActionMessageLog

param log: str [Required]#

Additional information to log about the action. This log can be used in a few ways. First, it can be used to audit what exactly the LLM predicted to lead to this (tool, tool_input). Second, it can be used in future iterations to show the LLM's prior thoughts. This is useful when (tool, tool_input) does not contain the full LLM prediction (for example, any reasoning produced before the tool/tool_input).

param message_log: Sequence[BaseMessage] [Required]#

Similar to log, this can be used to pass along the exact messages the LLM predicted before the (tool, tool_input) was parsed out. This is again useful if (tool, tool_input) alone cannot recreate the LLM prediction and you need that prediction for future agent iterations. Compared to log, this is useful when the underlying LLM is a ChatModel (and therefore returns messages rather than a string).
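
For example (a minimal sketch; the search tool name, query, and call ID are illustrative values, not part of this API), when the underlying chat model returns an AIMessage containing a tool call, the parsed action keeps that raw message in message_log while log holds a string rendering of the same prediction:

    from langchain_core.messages import AIMessage
    from langchain.agents.output_parsers.tools import ToolAgentAction

    # Raw chat-model output containing a single tool call (illustrative values).
    ai_message = AIMessage(
        content="",
        tool_calls=[
            {"name": "search", "args": {"query": "weather in SF"}, "id": "call_abc123"}
        ],
    )

    action = ToolAgentAction(
        tool="search",                          # name of the Tool to execute
        tool_input={"query": "weather in SF"},  # input to pass to the Tool
        log="Invoking: `search` with {'query': 'weather in SF'}\n",
        message_log=[ai_message],               # exact messages predicted by the LLM
        tool_call_id="call_abc123",             # ID of the tool call being answered
    )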

param tool: str [Required]#

The name of the Tool to execute.

param tool_call_id: str [Required]#

The ID of the tool call that this action is responding to.

param tool_input: str | dict [Required]#

The input to pass to the Tool.

param type: Literal['AgentActionMessageLog'] = 'AgentActionMessageLog'#

property messages: Sequence[BaseMessage]#

Return the messages that correspond to this action.
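
Continuing the sketch above (same assumptions about the illustrative tool call), the messages property exposes the recorded message_log, so the exact LLM prediction can be replayed on a later agent iteration, e.g. when rebuilding the scratchpad:

    # The property returns the messages recorded in message_log.
    assert list(action.messages) == [ai_message]

    # On a later iteration these messages (together with the tool's result as a
    # ToolMessage keyed by tool_call_id) can be passed back to the chat model.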