ConversationalAgent#
- class langchain.agents.conversational.base.ConversationalAgent[source]#
Bases:
Agent
Deprecated since version 0.1.0: LangChain agents will continue to be supported, but it is recommended for new use cases to be built with LangGraph. LangGraph offers a more flexible and full-featured framework for building agents, including support for tool-calling, persistence of state, and human-in-the-loop workflows. See the LangGraph documentation for more details: https://langchain-ai.github.io/langgraph/. Refer here for its pre-built ReAct agent: https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/. It will be removed in langchain==1.0.
An agent that holds a conversation in addition to using tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- param ai_prefix: str = 'AI'#
Prefix to use before AI output.
- param allowed_tools: List[str] | None = None#
Allowed tools for the agent. If None, all tools are allowed.
- param output_parser: AgentOutputParser [Optional]#
Output parser for the agent.
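A minimal usage sketch (an assumption-laden example, not part of this reference: it assumes langchain-openai is installed and OPENAI_API_KEY is set, and the word_count tool is hypothetical). The legacy initialize_agent helper with AgentType.CONVERSATIONAL_REACT_DESCRIPTION builds a ConversationalAgent under the hood and wraps it in an AgentExecutor:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI  # assumption: any BaseLanguageModel works here

def word_count(text: str) -> str:
    """Toy tool: count the words in the input string."""
    return str(len(text.split()))

tools = [Tool(name="word_count", func=word_count, description="Counts words in a piece of text.")]

# memory_key must match the {chat_history} placeholder in the default prompt suffix.
memory = ConversationBufferMemory(memory_key="chat_history")

executor = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
executor.invoke({"input": "How many words are in 'hello brave new world'?"})
```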
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) AgentAction | AgentFinish #
Async: given input, decide what to do.
- Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns:
Action specifying what tool to use.
- Return type:
AgentAction | AgentFinish
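A hedged sketch of driving one async planning step by hand (normally the AgentExecutor does this). It assumes agent is a ConversationalAgent built as shown under from_llm_and_tools below:
```python
import asyncio

async def one_step(agent):
    # No tools have been called yet, so intermediate_steps is empty; the remaining
    # keyword arguments fill the prompt's {input} and {chat_history} variables.
    return await agent.aplan(
        intermediate_steps=[],
        input="What's a good name for a bakery?",
        chat_history="",
    )

# result = asyncio.run(one_step(agent))  # AgentAction or AgentFinish
```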
- classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: List[str] | None = None) PromptTemplate [source]#
Create prompt in the style of the zero-shot agent.
- Parameters:
tools (Sequence[BaseTool]) – List of tools the agent will have access to, used to format the prompt.
prefix (str) – String to put before the list of tools. Defaults to PREFIX.
suffix (str) – String to put after the list of tools. Defaults to SUFFIX.
format_instructions (str) – Instructions on how to use the tools. Defaults to FORMAT_INSTRUCTIONS.
ai_prefix (str) – String to use before AI output. Defaults to “AI”.
human_prefix (str) – String to use before human output. Defaults to “Human”.
input_variables (List[str] | None) – List of input variables the final prompt will expect. Defaults to [“input”, “chat_history”, “agent_scratchpad”].
- Returns:
A PromptTemplate with the template assembled from the pieces here.
- Return type:
PromptTemplate
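A hedged sketch that builds the prompt on its own to inspect the variables it expects (the echo tool is a hypothetical example):
```python
from langchain.agents.conversational.base import ConversationalAgent
from langchain_core.tools import tool

@tool
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

prompt = ConversationalAgent.create_prompt(tools=[echo], ai_prefix="Assistant")
print(prompt.input_variables)  # ['input', 'chat_history', 'agent_scratchpad']
```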
- classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: BaseCallbackManager | None = None, output_parser: AgentOutputParser | None = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: List[str] | None = None, **kwargs: Any) Agent [source]#
Construct an agent from an LLM and tools.
- Parameters:
llm (BaseLanguageModel) – The language model to use.
tools (Sequence[BaseTool]) – A list of tools to use.
callback_manager (BaseCallbackManager | None) – The callback manager to use. Default is None.
output_parser (AgentOutputParser | None) – The output parser to use. Default is None.
prefix (str) – The prefix to use in the prompt. Default is PREFIX.
suffix (str) – The suffix to use in the prompt. Default is SUFFIX.
format_instructions (str) – The format instructions to use. Default is FORMAT_INSTRUCTIONS.
ai_prefix (str) – The prefix to use before AI output. Default is “AI”.
human_prefix (str) – The prefix to use before human output. Default is “Human”.
input_variables (List[str] | None) – The input variables to use. Default is None.
**kwargs (Any) – Any additional keyword arguments to pass to the agent.
- Returns:
An agent.
- Return type:
Agent
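A hedged construction sketch, reusing the hypothetical echo tool from the create_prompt example above and assuming a ChatOpenAI-compatible model; the agent is then wrapped in an AgentExecutor with conversation memory:
```python
from langchain.agents import AgentExecutor
from langchain.agents.conversational.base import ConversationalAgent
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI  # assumption: any BaseLanguageModel works here

agent = ConversationalAgent.from_llm_and_tools(
    llm=ChatOpenAI(temperature=0),
    tools=[echo],                # `echo` as defined in the create_prompt sketch
    ai_prefix="Assistant",
)

executor = AgentExecutor(
    agent=agent,
    tools=[echo],
    memory=ConversationBufferMemory(memory_key="chat_history"),
    verbose=True,
)
executor.invoke({"input": "Please repeat 'hello' back to me."})
```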
- get_allowed_tools() List[str] | None #
Get allowed tools.
- Return type:
List[str] | None
- get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) Dict[str, Any] #
Create the full inputs for the LLMChain from intermediate steps.
- Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
**kwargs (Any) – User inputs.
- Returns:
Full inputs for the LLMChain.
- Return type:
Dict[str, Any]
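A hedged sketch of how the scratchpad is assembled from prior (AgentAction, observation) pairs; agent and the word_count tool name are carried over from the earlier sketches:
```python
from langchain_core.agents import AgentAction

steps = [
    (
        AgentAction(
            tool="word_count",
            tool_input="hello brave new world",
            log="Thought: Do I need to use a tool? Yes\n"
                "Action: word_count\nAction Input: hello brave new world",
        ),
        "4",  # observation returned by the tool
    )
]
full_inputs = agent.get_full_inputs(steps, input="How many words was that?", chat_history="")
# The scratchpad replays each log, then appends the observation and llm prefixes,
# e.g. "...Observation: 4\nThought:".
print(full_inputs["agent_scratchpad"])
```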
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) AgentAction | AgentFinish #
Given input, decide what to do.
- Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns:
Action specifying what tool to use.
- Return type:
AgentAction | AgentFinish
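A hedged sketch of one synchronous planning step, branching on whether the agent wants a tool or is done (agent as constructed in the from_llm_and_tools sketch above):
```python
from langchain_core.agents import AgentAction, AgentFinish

result = agent.plan(intermediate_steps=[], input="Hi there!", chat_history="")
if isinstance(result, AgentFinish):
    print(result.return_values["output"])    # direct conversational reply
elif isinstance(result, AgentAction):
    print(result.tool, result.tool_input)    # tool call to execute next
```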
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish #
Return response when agent has been stopped due to max iterations.
- Parameters:
early_stopping_method (str) – Method to use for early stopping.
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
**kwargs (Any) – User inputs.
- Returns:
Agent finish object.
- Return type:
AgentFinish
- Raises:
ValueError – If early_stopping_method is not in [‘force’, ‘generate’].
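A hedged sketch of what an executor does when the iteration limit is reached: with "force" the agent returns a canned AgentFinish, while "generate" asks the LLM for one final answer. It reuses steps from the get_full_inputs sketch above:
```python
finish = agent.return_stopped_response(
    early_stopping_method="force",
    intermediate_steps=steps,    # the steps list from the get_full_inputs sketch
    input="How many words was that?",
    chat_history="",
)
print(finish.return_values["output"])  # a fixed "stopped due to iteration limit" message
```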
- save(file_path: Path | str) None #
Save the agent.
- Parameters:
file_path (Path | str) – Path to file to save the agent to.
- Return type:
None
Example:
```python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
```
- tool_run_logging_kwargs() Dict #
Return logging kwargs for tool run.
- Return type:
Dict
- property llm_prefix: str#
Prefix to append the llm call with.
- Returns:
"Thought: "
- Return type:
str
- property observation_prefix: str#
Prefix to append the observation with.
- Returns:
"Observation: "
- Return type:
str
- property return_values: List[str]#
Return values of the agent.