OpenAIFunctionsAgent#
- class langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent[source]#
Bases:
BaseSingleActionAgent
Deprecated since version 0.1.0: Use create_openai_functions_agent() instead.
An Agent driven by OpenAI's function-powered API.
- Parameters:
llm – This should be an instance of ChatOpenAI, specifically a model that supports using functions.
tools – The tools this agent has access to.
prompt – The prompt for this agent; it should support agent_scratchpad as one of the variables. For an easy way to construct this prompt, use OpenAIFunctionsAgent.create_prompt(...).
output_parser – The output parser for this agent. Should be an instance of OpenAIFunctionsAgentOutputParser. Defaults to OpenAIFunctionsAgentOutputParser.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- param llm: BaseLanguageModel [Required]#
- param output_parser: Type[OpenAIFunctionsAgentOutputParser] = <class 'langchain.agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser'>#
- param prompt: BasePromptTemplate [Required]#
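A minimal construction sketch for this deprecated class; the tool, model name, and prompt below are illustrative assumptions, and it presumes langchain-openai is installed and OPENAI_API_KEY is set. New code should prefer create_openai_functions_agent() instead.

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent

@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

# Any chat model that supports function calling works; the model name is illustrative.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_word_length]

# Default prompt: system message, human input, and the agent_scratchpad placeholder.
prompt = OpenAIFunctionsAgent.create_prompt()
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)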
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) → AgentAction | AgentFinish [source]#
Async given input, decide what to do.
- Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to use. Defaults to None.
**kwargs (Any) – User inputs.
- Returns:
Action specifying what tool to use. If the agent is finished, returns an AgentFinish. If the agent is not finished, returns an AgentAction.
- Return type:
AgentAction | AgentFinish
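A hedged sketch of calling aplan() directly, continuing the illustrative agent built above; input is the user-input variable expected by the default prompt.

import asyncio

async def run_one_step() -> None:
    decision = await agent.aplan(
        intermediate_steps=[],
        input="How many letters are in 'hello'?",
    )
    # The first call typically returns an AgentAction naming the tool to invoke.
    print(decision)

asyncio.run(run_one_step())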
- classmethod create_prompt(system_message: SystemMessage | None = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}, response_metadata={}), extra_prompt_messages: List[BaseMessagePromptTemplate] | None = None) → ChatPromptTemplate [source]#
Create prompt for this agent.
- Parameters:
system_message (SystemMessage | None) – Message to use as the system message that will be the first in the prompt.
extra_prompt_messages (List[BaseMessagePromptTemplate] | None) – Prompt messages that will be placed between the system message and the new human input.
- Returns:
A prompt template to pass into this agent.
- Return type:
ChatPromptTemplate
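An illustrative create_prompt() call with a custom system message and a chat-history placeholder; the variable name chat_history is an assumption, not a required value.

from langchain_core.messages import SystemMessage
from langchain_core.prompts import MessagesPlaceholder

prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=SystemMessage(content="You are a terse research assistant."),
    extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
)
# Resulting order: system message, chat_history, human input, agent_scratchpad.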
- classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: BaseCallbackManager | None = None, extra_prompt_messages: List[BaseMessagePromptTemplate] | None = None, system_message: SystemMessage | None = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}, response_metadata={}), **kwargs: Any) → BaseSingleActionAgent [source]#
Construct an agent from an LLM and tools.
- Parameters:
llm (BaseLanguageModel) – The LLM to use as the agent.
tools (Sequence[BaseTool]) – The tools to use.
callback_manager (BaseCallbackManager | None) – The callback manager to use. Defaults to None.
extra_prompt_messages (List[BaseMessagePromptTemplate] | None) – Extra prompt messages to use. Defaults to None.
system_message (SystemMessage | None) – The system message to use. Defaults to 'You are a helpful AI assistant.'.
kwargs (Any) – Additional parameters to pass to the agent.
- Return type:
BaseSingleActionAgent
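A sketch of the one-call constructor, reusing the illustrative llm and tools from the example above.

from langchain_core.messages import SystemMessage

agent = OpenAIFunctionsAgent.from_llm_and_tools(
    llm=llm,
    tools=tools,
    system_message=SystemMessage(content="You are a helpful AI assistant."),
)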
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, with_functions: bool = True, **kwargs: Any) → AgentAction | AgentFinish [source]#
Given input, decide what to do.
- Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to use. Defaults to None.
with_functions (bool) – Whether to use functions. Defaults to True.
**kwargs (Any) – User inputs.
- Returns:
Action specifying what tool to use. If the agent is finished, returns an AgentFinish. If the agent is not finished, returns an AgentAction.
- Return type:
AgentAction | AgentFinish
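A sketch of driving plan() by hand with the illustrative agent and tool from above: run the chosen tool, then feed the (action, observation) pair back through intermediate_steps. This loop is what AgentExecutor automates; the second call usually, though not necessarily, returns an AgentFinish.

from langchain_core.agents import AgentAction

question = "How many letters are in 'hello'?"
decision = agent.plan(intermediate_steps=[], input=question)
if isinstance(decision, AgentAction):
    # Run the selected tool and hand the observation back to the agent.
    observation = get_word_length.invoke(decision.tool_input)
    decision = agent.plan(
        intermediate_steps=[(decision, str(observation))],
        input=question,
    )
print(decision)  # usually an AgentFinish once the tool result is available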
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish [source]#
Return response when agent has been stopped due to max iterations.
- Parameters:
early_stopping_method (str) – The early stopping method to use.
intermediate_steps (List[Tuple[AgentAction, str]]) – Intermediate steps.
**kwargs (Any) – User inputs.
- Returns:
AgentFinish.
- Raises:
ValueError – If early_stopping_method is not force or generate.
ValueError – If agent_decision is not an AgentAction.
- Return type:
AgentFinish
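A sketch of the force strategy, which returns a canned AgentFinish without another LLM call; an AgentExecutor falls back to this when it hits its iteration or time limit.

finish = agent.return_stopped_response(
    early_stopping_method="force",
    intermediate_steps=[],
)
print(finish.return_values["output"])  # a fixed "stopped early" message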
- save(file_path: Path | str) → None #
Save the agent.
- Parameters:
file_path (Path | str) – Path to file to save the agent to.
- Return type:
None
Example:
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
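For context, this agent is normally wrapped in an AgentExecutor (hence agent.agent in the example above); a sketch using the illustrative agent and tools from earlier:

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": "How many letters are in 'hello'?"})
print(result["output"])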
- tool_run_logging_kwargs() → Dict #
Return logging kwargs for tool run.
- Return type:
Dict
- property functions: List[dict]#
Get functions.
- property input_keys: List[str]#
Get input keys. Input refers to user input here.
- property return_values: List[str]#
Return values of the agent.
Examples using OpenAIFunctionsAgent