Agents#
Interface for agents.
- pydantic model langchain.agents.Agent[source]#
Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include a variable called "agent_scratchpad" where the agent can put its intermediate work.
- field allowed_tools: Optional[List[str]] = None#
- field llm_chain: langchain.chains.llm.LLMChain [Required]#
- field output_parser: langchain.agents.agent.AgentOutputParser [Required]#
- async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Action specifying what tool to use.
- abstract classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) langchain.prompts.base.BasePromptTemplate [source]#
Create a prompt for this class.
- classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, **kwargs: Any) langchain.agents.agent.Agent [source]#
Construct an agent from an LLM and tools.
- get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) Dict[str, Any] [source]#
Create the full inputs for the LLMChain from intermediate steps.
- plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Action specifying what tool to use.
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) langchain.schema.AgentFinish [source]#
Return response when agent has been stopped due to max iterations.
- abstract property llm_prefix: str#
Prefix to append the LLM call with.
- abstract property observation_prefix: str#
Prefix to append the observation with.
- property return_values: List[str]#
Return values of the agent.
- pydantic model langchain.agents.AgentExecutor[source]#
Agent executor that runs an agent and its tools.
- Validators
raise_deprecation » all fields
set_verbose » verbose
validate_return_direct_tool » all fields
validate_tools » all fields
- field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]#
- field early_stopping_method: str = 'force'#
- field handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False#
- field max_execution_time: Optional[float] = None#
- field max_iterations: Optional[int] = 15#
- field return_intermediate_steps: bool = False#
- classmethod from_agent_and_tools(agent: Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent], tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) langchain.agents.agent.AgentExecutor [source]#
Create from agent and tools.
- lookup_tool(name: str) langchain.tools.base.BaseTool [source]#
Lookup tool by name.
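A minimal sketch of assembling an executor with from_agent_and_tools, assuming an OPENAI_API_KEY is set; the tool list, limits, and question are illustrative:

.. code-block:: python

    from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)
    agent = ZeroShotAgent.from_llm_and_tools(llm=llm, tools=tools)

    executor = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        max_iterations=5,            # stop the loop after 5 agent steps
        early_stopping_method="force",
        handle_parsing_errors=True,  # feed unparseable LLM output back as an observation
        verbose=True,
    )
    print(executor.run("What is 7 raised to the 0.5 power?"))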
- class langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
- CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'#
- CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'#
- CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'#
- REACT_DOCSTORE = 'react-docstore'#
- SELF_ASK_WITH_SEARCH = 'self-ask-with-search'#
- STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'#
- ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'#
- pydantic model langchain.agents.BaseMultiActionAgent[source]#
Base class for agents that can return multiple actions per step.
- abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Actions specifying what tool to use.
- abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Actions specifying what tool to use.
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) langchain.schema.AgentFinish [source]#
Return response when agent has been stopped due to max iterations.
- save(file_path: Union[pathlib.Path, str]) None [source]#
Save the agent.
- Parameters
file_path – Path to file to save the agent to.
Example:

.. code-block:: python

    # If working with an agent executor
    agent.agent.save(file_path="path/agent.yaml")
- property return_values: List[str]#
Return values of the agent.
- pydantic model langchain.agents.BaseSingleActionAgent[source]#
Base class for agents that return a single action per step.
- abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Action specifying what tool to use.
- classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) langchain.agents.agent.BaseSingleActionAgent [source]#
- abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Action specifying what tool to use.
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) langchain.schema.AgentFinish [source]#
Return response when agent has been stopped due to max iterations.
- save(file_path: Union[pathlib.Path, str]) None [source]#
Save the agent.
- Parameters
file_path – Path to file to save the agent to.
Example:

.. code-block:: python

    # If working with an agent executor
    agent.agent.save(file_path="path/agent.yaml")
- property return_values: List[str]#
Return values of the agent.
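A minimal save/reload sketch; the file path is illustrative and an OPENAI_API_KEY is assumed:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)
    executor = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

    # The executor wraps the agent, so serialize the inner agent.
    executor.agent.save("my_agent.yaml")

    # Later: reload the serialized agent; it still needs to be paired with tools.
    agent = load_agent("my_agent.yaml")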
- pydantic model langchain.agents.ConversationalAgent[source]#
An agent designed to hold a conversation in addition to using tools.
- field ai_prefix: str = 'AI'#
- field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
- classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) langchain.prompts.prompt.PromptTemplate [source]#
Create a prompt in the style of the zero-shot agent.
- Parameters
tools – List of tools the agent will have access to, used to format the prompt.
prefix – String to put before the list of tools.
suffix – String to put after the list of tools.
ai_prefix – String to use before AI output.
human_prefix – String to use before human output.
input_variables – List of input variables the final prompt will expect.
- Returns
A PromptTemplate with the template assembled from the pieces here.
- classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) langchain.agents.agent.Agent [source]#
Construct an agent from an LLM and tools.
- property llm_prefix: str#
Prefix to append the LLM call with.
- property observation_prefix: str#
Prefix to append the observation with.
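A minimal conversational sketch, assuming an OPENAI_API_KEY is set; the memory key must match the {chat_history} variable expected by the default prompt:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI
    from langchain.memory import ConversationBufferMemory

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)
    memory = ConversationBufferMemory(memory_key="chat_history")

    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
        memory=memory,
        verbose=True,
    )
    agent.run("Hi, my name is Alice.")
    agent.run("What is 3 * 7? And do you remember my name?")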
- pydantic model langchain.agents.ConversationalChatAgent[source]#
An agent designed to hold a conversation in addition to using tools.
- field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
- field template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."#
- classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, output_parser: Optional[langchain.schema.BaseOutputParser] = None) langchain.prompts.base.BasePromptTemplate [source]#
Create a prompt for this class.
- classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, **kwargs: Any) langchain.agents.agent.Agent [source]#
Construct an agent from an LLM and tools.
- property llm_prefix: str#
Prefix to append the LLM call with.
- property observation_prefix: str#
Prefix to append the observation with.
- pydantic model langchain.agents.LLMSingleActionAgent[source]#
- field llm_chain: langchain.chains.llm.LLMChain [Required]#
- field output_parser: langchain.agents.agent.AgentOutputParser [Required]#
- field stop: List[str] [Required]#
- async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Action specifying what tool to use.
- plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] [source]#
Given input, decide what to do.
- Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
- Returns
Action specifying what tool to use.
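A minimal custom-agent sketch: the prompt template, output parser, and toy Search tool are illustrative stand-ins, and an OPENAI_API_KEY is assumed:

.. code-block:: python

    from typing import Union

    from langchain.agents import AgentExecutor, AgentOutputParser, LLMSingleActionAgent, Tool
    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import StringPromptTemplate
    from langchain.schema import AgentAction, AgentFinish

    TEMPLATE = (
        "Answer the question, using the Search tool when needed.\n"
        "Use the format:\nAction: Search\nAction Input: <query>\n"
        "or, when you are done:\nFinal Answer: <answer>\n\n"
        "Question: {input}\n{agent_scratchpad}"
    )

    class ScratchpadPrompt(StringPromptTemplate):
        """Renders intermediate steps into the {agent_scratchpad} slot."""

        def format(self, **kwargs) -> str:
            scratchpad = ""
            for action, observation in kwargs.pop("intermediate_steps"):
                scratchpad += action.log + f"\nObservation: {observation}\nThought: "
            kwargs["agent_scratchpad"] = scratchpad
            return TEMPLATE.format(**kwargs)

    class SimpleParser(AgentOutputParser):
        """Very small parser for the format described in TEMPLATE."""

        def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
            if "Final Answer:" in text:
                return AgentFinish({"output": text.split("Final Answer:")[-1].strip()}, text)
            query = text.split("Action Input:")[-1].strip()
            return AgentAction(tool="Search", tool_input=query, log=text)

    llm_chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=ScratchpadPrompt(input_variables=["input", "intermediate_steps"]),
    )
    agent = LLMSingleActionAgent(
        llm_chain=llm_chain,
        output_parser=SimpleParser(),
        stop=["\nObservation:"],
    )
    tools = [Tool(name="Search", func=lambda q: "no results found", description="toy search")]
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
    executor.run("Who wrote the ReAct paper?")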
- pydantic model langchain.agents.MRKLChain[source]#
Chain that implements the MRKL system.
Example
.. code-block:: python

    from langchain import OpenAI, MRKLChain
    from langchain.chains.mrkl.base import ChainConfig

    llm = OpenAI(temperature=0)
    chains = [...]
    mrkl = MRKLChain.from_chains(llm=llm, chains=chains)
- Validators
raise_deprecation » all fields
set_verbose » verbose
validate_return_direct_tool » all fields
validate_tools » all fields
- classmethod from_chains(llm: langchain.base_language.BaseLanguageModel, chains: List[langchain.agents.mrkl.base.ChainConfig], **kwargs: Any) langchain.agents.agent.AgentExecutor [source]#
User-friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the MRKL chain.
- Parameters
llm – The LLM to use as the agent LLM.
chains – The chains the MRKL system has access to.
**kwargs – parameters to be passed to initialization.
- Returns
An initialized MRKL chain.
Example
.. code-block:: python

    from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
    from langchain.chains.mrkl.base import ChainConfig

    llm = OpenAI(temperature=0)
    search = SerpAPIWrapper()
    llm_math_chain = LLMMathChain(llm=llm)
    chains = [
        ChainConfig(
            action_name="Search",
            action=search.run,
            action_description="useful for searching",
        ),
        ChainConfig(
            action_name="Calculator",
            action=llm_math_chain.run,
            action_description="useful for doing math",
        ),
    ]
    mrkl = MRKLChain.from_chains(llm, chains)
- pydantic model langchain.agents.ReActChain[source]#
Chain that implements the ReAct paper.
Example
.. code-block:: python

    from langchain import OpenAI, ReActChain, Wikipedia

    # Wikipedia() is one available docstore implementation.
    react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
- Validators
raise_deprecation » all fields
set_verbose » verbose
validate_return_direct_tool » all fields
validate_tools » all fields
- pydantic model langchain.agents.ReActTextWorldAgent[source]#
Agent for the ReAct TextWorld chain.
- classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) langchain.prompts.base.BasePromptTemplate [source]#
Return default prompt.
- pydantic model langchain.agents.SelfAskWithSearchChain[source]#
Chain that does self-ask with search.
Example
.. code-block:: python

    from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper

    search_chain = GoogleSerperAPIWrapper()
    self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
- Validators
raise_deprecation » all fields
set_verbose » verbose
validate_return_direct_tool » all fields
validate_tools » all fields
- pydantic model langchain.agents.StructuredChatAgent[source]#
- field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
- classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None) langchain.prompts.base.BasePromptTemplate [source]#
Create a prompt for this class.
- classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None, **kwargs: Any) langchain.agents.agent.Agent [source]#
Construct an agent from an LLM and tools.
- property llm_prefix: str#
Prefix to append the LLM call with.
- property observation_prefix: str#
Prefix to append the observation with.
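A minimal sketch using the structured-chat agent type through initialize_agent, assuming an OPENAI_API_KEY is set:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.chat_models import ChatOpenAI

    llm = ChatOpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)

    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    agent.run("What is 2.1 raised to the 0.43 power?")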
- pydantic model langchain.agents.Tool[source]#
Tool that takes in a function or coroutine directly.
- field coroutine: Optional[Callable[[...], Awaitable[str]]] = None#
The asynchronous version of the function.
- field description: str = ''#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
- field func: Callable[[...], str] [Required]#
The function to run when the tool is called.
- classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) langchain.tools.base.Tool [source]#
Initialize tool from a function.
- property args: dict#
The tool’s input arguments.
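A minimal sketch of wrapping a plain function as a Tool; the function and names are illustrative:

.. code-block:: python

    from langchain.agents import Tool

    def get_word_length(word: str) -> str:
        """Return the number of characters in a word."""
        return str(len(word))

    # Construct directly ...
    length_tool = Tool(
        name="word-length",
        func=get_word_length,
        description="useful for counting the characters in a single word",
    )
    # ... or via the classmethod, which takes the same core arguments.
    length_tool = Tool.from_function(
        func=get_word_length,
        name="word-length",
        description="useful for counting the characters in a single word",
    )
    print(length_tool.run("hello"))  # -> "5"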
- pydantic model langchain.agents.ZeroShotAgent[source]#
Agent for the MRKL chain.
- field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
- classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) langchain.prompts.prompt.PromptTemplate [source]#
Create a prompt in the style of the zero-shot agent.
- Parameters
tools – List of tools the agent will have access to, used to format the prompt.
prefix – String to put before the list of tools.
suffix – String to put after the list of tools.
input_variables – List of input variables the final prompt will expect.
- Returns
A PromptTemplate with the template assembled from the pieces here.
- classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) langchain.agents.agent.Agent [source]#
Construct an agent from an LLM and tools.
- property llm_prefix: str#
Prefix to append the LLM call with.
- property observation_prefix: str#
Prefix to append the observation with.
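A minimal sketch that builds the prompt explicitly with create_prompt and wires the agent by hand, assuming an OPENAI_API_KEY is set; the prefix and question are illustrative:

.. code-block:: python

    from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools
    from langchain.chains import LLMChain
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)

    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix="Answer the question as precisely as possible. You have access to the following tools:",
        input_variables=["input", "agent_scratchpad"],
    )
    llm_chain = LLMChain(llm=llm, prompt=prompt)
    agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])

    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
    executor.run("What is 13 factorial?")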
- langchain.agents.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) langchain.agents.agent.AgentExecutor [source]#
Create a CSV agent by loading the file into a pandas DataFrame and using the pandas agent.
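A minimal sketch; "titanic.csv" is a placeholder path, pandas_kwargs is forwarded to pandas.read_csv, and an OPENAI_API_KEY is assumed:

.. code-block:: python

    from langchain.agents import create_csv_agent
    from langchain.llms import OpenAI

    agent = create_csv_agent(
        OpenAI(temperature=0),
        "titanic.csv",
        pandas_kwargs={"sep": ","},
        verbose=True,
    )
    agent.run("How many rows are in the file?")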
- langchain.agents.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a JSON agent from an LLM and tools.
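A minimal sketch; "openapi.yml" is a placeholder spec on disk and an OPENAI_API_KEY is assumed:

.. code-block:: python

    import yaml

    from langchain.agents import create_json_agent
    from langchain.agents.agent_toolkits import JsonToolkit
    from langchain.llms import OpenAI
    from langchain.tools.json.tool import JsonSpec

    with open("openapi.yml") as f:
        data = yaml.safe_load(f)

    toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
    agent = create_json_agent(OpenAI(temperature=0), toolkit=toolkit, verbose=True)
    agent.run("What are the required parameters in the request body for the /completions endpoint?")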
- langchain.agents.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct an OpenAPI agent from an LLM and tools.
- langchain.agents.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a pandas DataFrame agent from an LLM and a DataFrame.
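A minimal sketch with a toy DataFrame, assuming an OPENAI_API_KEY is set:

.. code-block:: python

    import pandas as pd

    from langchain.agents import create_pandas_dataframe_agent
    from langchain.llms import OpenAI

    df = pd.DataFrame({"city": ["Paris", "Oslo"], "population": [2_102_650, 709_037]})
    agent = create_pandas_dataframe_agent(
        OpenAI(temperature=0),
        df,
        verbose=True,
        max_iterations=5,
    )
    agent.run("Which city has the larger population?")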
- langchain.agents.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a Power BI agent from an LLM and tools.
- langchain.agents.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a Power BI agent from a chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both the agent and the toolkit.
- langchain.agents.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a Spark DataFrame agent from an LLM and a DataFrame.
- langchain.agents.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a Spark SQL agent from an LLM and tools.
- langchain.agents.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a SQL agent from an LLM and tools.
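A minimal sketch; the SQLite URI is a placeholder and an OPENAI_API_KEY is assumed:

.. code-block:: python

    from langchain.agents import create_sql_agent
    from langchain.agents.agent_toolkits import SQLDatabaseToolkit
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    llm = OpenAI(temperature=0)
    db = SQLDatabase.from_uri("sqlite:///chinook.db")
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)

    agent = create_sql_agent(llm=llm, toolkit=toolkit, top_k=5, verbose=True)
    agent.run("How many tables are in the database, and what are their names?")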
- langchain.agents.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a vectorstore agent from an LLM and tools.
- langchain.agents.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) langchain.agents.agent.AgentExecutor [source]#
Construct a vectorstore router agent from an LLM and tools.
- langchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.base_language.BaseLanguageModel, agent: Optional[langchain.agents.agent_types.AgentType] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) langchain.agents.agent.AgentExecutor [source]#
Load an agent executor given tools and LLM.
- Parameters
tools – List of tools this agent has access to.
llm – Language model to use as the agent.
agent – Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None.
agent_path – Path to serialized agent to use.
agent_kwargs – Additional keyword arguments to pass to the underlying agent.
**kwargs – Additional keyword arguments passed to the agent executor.
- Returns
An agent executor
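A minimal sketch showing how agent_kwargs reaches the agent while other keyword arguments reach the executor; it assumes OPENAI_API_KEY and, for the "serpapi" tool, SERPAPI_API_KEY are set:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["serpapi", "llm-math"], llm=llm)

    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        agent_kwargs={"prefix": "Answer as concisely as possible. Tools available:"},  # goes to the agent
        max_iterations=5,   # goes to the AgentExecutor
        verbose=True,
    )
    print(agent.run("Who is the CEO of OpenAI, and what is 2 to the 10th power?"))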
- langchain.agents.load_agent(path: Union[str, pathlib.Path], **kwargs: Any) langchain.agents.agent.BaseSingleActionAgent [source]#
Unified method for loading an agent from LangChainHub or the local filesystem.
- langchain.agents.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) langchain.tools.base.BaseTool [source]#
- langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.base_language.BaseLanguageModel] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) List[langchain.tools.base.BaseTool] [source]#
Load tools based on their name.
- Parameters
tool_names – Names of the tools to load.
llm – Optional language model, may be needed to initialize certain tools.
callbacks – Optional callback manager or list of callback handlers. If not provided, default global callback manager will be used.
- Returns
List of tools.
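A minimal sketch; "llm-math" needs an LLM to build its chain while "requests_all" does not, and an OPENAI_API_KEY is assumed:

.. code-block:: python

    from langchain.agents import load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math", "requests_all"], llm=llm)
    print([t.name for t in tools])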
- langchain.agents.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) Callable [source]#
Make tools out of functions; can be used with or without arguments.
- Parameters
*args – The arguments to the tool.
return_direct – Whether to return directly from the tool rather than continuing the agent loop.
args_schema – Optional argument schema for the user to specify.
infer_schema – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function.
- Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
.. code-block:: python

    @tool
    def search_api(query: str) -> str:
        """Searches the API for the query."""
        return

    @tool("search", return_direct=True)
    def search_api(query: str) -> str:
        """Searches the API for the query."""
        return