OpenAIFunctionsAgent#

class langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent[source]#

Bases: BaseSingleActionAgent

Deprecated since version 0.1.0: Use create_openai_functions_agent() instead. It will be removed in version 1.0.

An Agent driven by OpenAI's function-powered API.

Parameters:
  • llm – This should be an instance of ChatOpenAI, specifically a model that supports using functions.

  • tools – The tools this agent has access to.

  • prompt – The prompt for this agent; it should support agent_scratchpad as one of its variables. For an easy way to construct this prompt, use OpenAIFunctionsAgent.create_prompt(…).

  • output_parser – The output parser for this agent. Should be an instance of OpenAIFunctionsAgentOutputParser. Defaults to OpenAIFunctionsAgentOutputParser.

Create a new model by parsing and validating input data from keyword arguments.

Raises a pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

param llm: BaseLanguageModel [Required]#
param output_parser: Type[OpenAIFunctionsAgentOutputParser] = <class 'langchain.agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser'>#
param prompt: BasePromptTemplate [Required]#
param tools: Sequence[BaseTool] [Required]#
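
A minimal construction sketch for this legacy class. The tool, the model name, and the langchain_openai import are illustrative assumptions; new code should prefer create_openai_functions_agent() per the deprecation note above.

    from langchain.agents import AgentExecutor
    from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI  # assumed provider package

    @tool
    def get_word_length(word: str) -> int:
        """Return the number of characters in a word."""
        return len(word)

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # any function-calling model
    tools = [get_word_length]

    # Build the agent directly from its documented fields.
    agent = OpenAIFunctionsAgent(
        llm=llm,
        tools=tools,
        prompt=OpenAIFunctionsAgent.create_prompt(),
    )
    executor = AgentExecutor(agent=agent, tools=tools)
    executor.invoke({"input": "How many letters are in 'langchain'?"})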
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) β†’ AgentAction | AgentFinish[source]#

Async version of plan(): given input, decide what to do.

Parameters:
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to use. Defaults to None.

  • **kwargs (Any) – User inputs.

Returns:

Action specifying what tool to use. If the agent is finished, returns an AgentFinish. If the agent is not finished, returns an AgentAction.

Return type:

AgentAction | AgentFinish
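
A sketch of calling aplan directly; in practice AgentExecutor drives this loop, and the input keyword assumes the default single-input prompt built by create_prompt(). The agent object comes from the construction sketch above.

    import asyncio

    async def first_step(question: str):
        # No (action, observation) pairs yet on the first step.
        return await agent.aplan(intermediate_steps=[], input=question)

    decision = asyncio.run(first_step("How many letters are in 'langchain'?"))
    # decision is an AgentAction (tool call) or an AgentFinish (final answer)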

classmethod create_prompt(system_message: SystemMessage | None = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}, response_metadata={}), extra_prompt_messages: List[BaseMessagePromptTemplate] | None = None) β†’ ChatPromptTemplate[source]#

Create prompt for this agent.

Parameters:
  • system_message (SystemMessage | None) – Message to use as the system message that will be the first in the prompt.

  • extra_prompt_messages (List[BaseMessagePromptTemplate] | None) – Prompt messages that will be placed between the system message and the new human input.

Returns:

A prompt template to pass into this agent.

Return type:

ChatPromptTemplate
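
A sketch of building a customized prompt with a chat-history placeholder, assuming the langchain_core import paths for SystemMessage and MessagesPlaceholder.

    from langchain_core.messages import SystemMessage
    from langchain_core.prompts import MessagesPlaceholder

    prompt = OpenAIFunctionsAgent.create_prompt(
        system_message=SystemMessage(content="You are a terse research assistant."),
        # Inserted between the system message and the new human input:
        extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
    )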

classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: BaseCallbackManager | None = None, extra_prompt_messages: List[BaseMessagePromptTemplate] | None = None, system_message: SystemMessage | None = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}, response_metadata={}), **kwargs: Any) β†’ BaseSingleActionAgent[source]#

Construct an agent from an LLM and tools.

Parameters:
  • llm (BaseLanguageModel) – The LLM to use as the agent.

  • tools (Sequence[BaseTool]) – The tools to use.

  • callback_manager (BaseCallbackManager | None) – The callback manager to use. Defaults to None.

  • extra_prompt_messages (List[BaseMessagePromptTemplate] | None) – Extra prompt messages to use. Defaults to None.

  • system_message (SystemMessage | None) – The system message to use. Defaults to a default system message.

  • kwargs (Any) – Additional parameters to pass to the agent.

Return type:

BaseSingleActionAgent
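
A sketch of the one-call constructor, reusing the llm and tools from the construction sketch near the top of this page.

    from langchain_core.messages import SystemMessage

    agent = OpenAIFunctionsAgent.from_llm_and_tools(
        llm=llm,
        tools=tools,
        system_message=SystemMessage(content="You are a helpful AI assistant."),
    )
    executor = AgentExecutor(agent=agent, tools=tools, max_iterations=3)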

get_allowed_tools() β†’ List[str][source]#

Get allowed tools.

Return type:

List[str]

plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, with_functions: bool = True, **kwargs: Any) β†’ AgentAction | AgentFinish[source]#

Given input, decide what to do.

Parameters:
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to use. Defaults to None.

  • with_functions (bool) – Whether to use functions. Defaults to True.

  • **kwargs (Any) – User inputs.

Returns:

Action specifying what tool to use. If the agent is finished, returns an AgentFinish. If the agent is not finished, returns an AgentAction.

Return type:

AgentAction | AgentFinish
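
A sketch of a manual plan loop; AgentExecutor performs this dispatch for you, and the name-based tool lookup here is illustrative.

    from langchain_core.agents import AgentFinish

    tool_map = {t.name: t for t in tools}
    intermediate_steps = []
    while True:
        decision = agent.plan(
            intermediate_steps=intermediate_steps,
            input="How many letters are in 'langchain'?",
        )
        if isinstance(decision, AgentFinish):
            print(decision.return_values["output"])
            break
        # AgentAction: run the chosen tool and record the observation.
        observation = tool_map[decision.tool].run(decision.tool_input)
        intermediate_steps.append((decision, str(observation)))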

return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) β†’ AgentFinish[source]#

Return response when agent has been stopped due to max iterations.

Parameters:
  • early_stopping_method (str) – The early stopping method to use.

  • intermediate_steps (List[Tuple[AgentAction, str]]) – Intermediate steps.

  • **kwargs (Any) – User inputs.

Returns:

AgentFinish.

Raises:
  • ValueError – If early_stopping_method is not force or generate.

  • ValueError – If agent_decision is not an AgentAction.

Return type:

AgentFinish
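
A sketch of forcing a final answer after hitting an iteration limit. With "force" the returned AgentFinish carries a canned stopped message; "generate" (per the ValueError note above) instead asks the model to produce a final answer from the steps taken so far.

    finish = agent.return_stopped_response(
        early_stopping_method="force",
        intermediate_steps=intermediate_steps,
        input="How many letters are in 'langchain'?",
    )
    print(finish.return_values["output"])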

save(file_path: Path | str) β†’ None#

Save the agent.

Parameters:

file_path (Path | str) – Path to file to save the agent to.

Return type:

None

Example:

    # If working with an agent executor
    agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() β†’ Dict#

Return logging kwargs for tool run.

Return type:

Dict

property functions: List[dict]#

Get the OpenAI function definitions derived from the agent's tools.

property input_keys: List[str]#

Get input keys. Input refers to user input here.

property return_values: List[str]#

Return values of the agent.
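
A brief sketch of inspecting these properties; the values shown in comments are indicative for the default prompt.

    print(agent.input_keys)     # ['input']
    print(agent.return_values)  # ['output']
    print([f["name"] for f in agent.functions])  # function specs sent to OpenAI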

Examples using OpenAIFunctionsAgent