AgentServiceFactory#
- class langchain_azure_ai.agents.agent_service.AgentServiceFactory[source]#
Bases:
BaseModel
Factory to create and manage declarative chat agents in Azure AI Foundry.
Examples
To create a simple echo agent:
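A minimal sketch, assuming Microsoft Entra ID authentication and placeholder values for the project endpoint and the model deployment name (gpt-4o); the invocation assumes the compiled graph follows the usual LangGraph messages-based state:

from azure.identity import DefaultAzureCredential
from langchain_core.messages import HumanMessage
from langchain_azure_ai.agents.agent_service import AgentServiceFactory

factory = AgentServiceFactory(
    project_endpoint="https://<your-resource>.services.ai.azure.com/api/projects/<your-project>",  # placeholder
    credential=DefaultAzureCredential(),
)

echo = factory.create_declarative_chat_agent(
    model="gpt-4o",  # placeholder model deployment name
    name="echo-agent",
    instructions="Repeat the user's message back to them verbatim.",
)

# The returned CompiledStateGraph is invoked with the standard LangGraph messages state.
result = echo.invoke({"messages": [HumanMessage("Hello there!")]})
print(result["messages"][-1].content)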
Agents can also be created with tools. For example, to create an agent that can perform arithmetic using a calculator tool:
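An illustrative sketch reusing the factory and HumanMessage import from the previous example; the calculator is a plain Python callable exposed through the langchain_core tool decorator:

from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression and return the result as a string."""
    # Demo only: do not eval untrusted input in real code.
    return str(eval(expression))

math_agent = factory.create_declarative_chat_agent(
    model="gpt-4o",  # placeholder model deployment name
    name="math-agent",
    instructions="Use the calculator tool for any arithmetic the user asks about.",
    tools=[calculator],
)

result = math_agent.invoke({"messages": [HumanMessage("What is 23 * 7?")]})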
You can also use the built-in tools from the Agent Service. These tools only work with agents created in Azure AI Foundry. For example, to create an agent that can use Code Interpreter:
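The sketch below is illustrative only; it assumes the built-in Code Interpreter tool is referenced through the Azure AI Agents SDK's CodeInterpreterTool definition, and the exact wrapper expected for AgentServiceBaseTool may differ in your installed version:

from azure.ai.agents.models import CodeInterpreterTool  # assumed import path; may vary by SDK version

code_agent = factory.create_declarative_chat_agent(
    model="gpt-4o",  # placeholder model deployment name
    name="code-interpreter-agent",
    instructions="Write and run Python code to answer the user's data questions.",
    tools=[CodeInterpreterTool()],  # built-in tool; only works with agents created in Azure AI Foundry
)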
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- param api_version: str | None = None#
The API version to use with Azure. If None, the default version is used.
- param client_kwargs: Dict[str, Any] = {}#
Additional keyword arguments to pass to the client.
- param credential: TokenCredential | None = None#
The API key or credential to use to connect to the service. If using a project endpoint, this must be of type TokenCredential, since only Microsoft Entra ID is supported.
- param project_endpoint: str | None = None#
The project endpoint associated with the AI project. If this is specified, the endpoint parameter becomes optional and credential must be of type TokenCredential.
- create_declarative_chat_agent(
- model: str,
- name: str,
- description: str | None = None,
- tools: Sequence[AgentServiceBaseTool | BaseTool | Callable] | ToolNode | None = None,
- instructions: SystemMessage | str | Callable[[StateSchema], PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | Runnable[StateSchema, PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | None = None,
- temperature: float | None = None,
- top_p: float | None = None,
- response_format: Dict[str, Any] | None = None,
- context_schema: Type[Any] | None = None,
- checkpointer: None | bool | BaseCheckpointSaver = None,
- store: BaseStore | None = None,
- interrupt_before: list[str] | None = None,
- interrupt_after: list[str] | None = None,
- trace: bool = False,
- debug: bool = False,
- ) → CompiledStateGraph[source]#
Create a declarative chat agent in Azure AI Foundry.
- Parameters:
name (str) – The name of the agent.
description (str | None) – An optional description of the agent.
model (str) – The model to use for the agent.
tools (Sequence[AgentServiceBaseTool | BaseTool | Callable] | ToolNode | None) – The tools to use with the agent. This can be a list of BaseTools, callables, or tool definitions, or a ToolNode.
instructions (SystemMessage | str | Callable[[StateSchema], PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | Runnable[StateSchema, PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | None) – The prompt instructions to use for the agent.
temperature (float | None) – The temperature to use for the agent.
top_p (float | None) – The top_p to use for the agent.
response_format (Dict[str, Any] | None) – The response format to use for the agent.
context_schema (Type[Any] | None) – The schema for the context to pass to the agent.
checkpointer (None | bool | BaseCheckpointSaver) – The checkpointer to use for the agent.
store (BaseStore | None) – The store to use for the agent.
interrupt_before (list[str] | None) – A list of node names to interrupt before.
interrupt_after (list[str] | None) – A list of node names to interrupt after.
trace (bool) – Whether to enable tracing.
debug (bool) – Whether to enable debug mode.
- Returns:
A CompiledStateGraph representing the agent workflow.
- Return type:
CompiledStateGraph
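As an illustrative, non-authoritative sketch of the checkpointer parameter, passing a LangGraph checkpoint saver lets the compiled graph keep per-thread conversation state; MemorySaver and the thread_id config are standard LangGraph constructs, not specific to this package:

from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage

stateful_agent = factory.create_declarative_chat_agent(
    model="gpt-4o",  # placeholder model deployment name
    name="stateful-agent",
    instructions="You are a helpful assistant.",
    checkpointer=MemorySaver(),
)

config = {"configurable": {"thread_id": "user-42"}}  # standard LangGraph thread config
stateful_agent.invoke({"messages": [HumanMessage("My name is Ada.")]}, config)
stateful_agent.invoke({"messages": [HumanMessage("What is my name?")]}, config)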
- create_declarative_chat_node(
- name: str,
- model: str,
- description: str | None = None,
- tools: Sequence[AgentServiceBaseTool | BaseTool | Callable] | ToolNode | None = None,
- instructions: SystemMessage | str | Callable[[StateSchema], PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | Runnable[StateSchema, PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | None = None,
- temperature: float | None = None,
- top_p: float | None = None,
- response_format: Dict[str, Any] | None = None,
- trace: bool = False,
- ) → DeclarativeChatAgentNode[source]#
Create a declarative chat agent node in Azure AI Foundry.
- Parameters:
name (str) – The name of the agent.
model (str) – The model to use for the agent.
description (str | None) – An optional description of the agent.
tools (Sequence[AgentServiceBaseTool | BaseTool | Callable] | ToolNode | None) – The tools to use with the agent. This can be a list of BaseTools, callables, or tool definitions, or a ToolNode.
instructions (SystemMessage | str | Callable[[StateSchema], PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | Runnable[StateSchema, PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]] | None) – The prompt instructions to use for the agent.
temperature (float | None) – The temperature to use for the agent.
top_p (float | None) – The top_p to use for the agent.
response_format (Dict[str, Any] | None) – The response format to use for the agent.
trace (bool) – Whether to enable tracing.
- Returns:
A DeclarativeChatAgentNode representing the agent.
- Return type:
DeclarativeChatAgentNode
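A hedged sketch of how the returned node might be wired into a LangGraph StateGraph, assuming it behaves like any other LangGraph-compatible node over the standard messages state and reusing the factory from the examples above:

from langgraph.graph import StateGraph, MessagesState, START, END

summarizer = factory.create_declarative_chat_node(
    name="summarizer",
    model="gpt-4o",  # placeholder model deployment name
    instructions="Summarize the conversation so far in two sentences.",
)

builder = StateGraph(MessagesState)
builder.add_node("summarizer", summarizer)  # assumes the node is directly usable as a graph node
builder.add_edge(START, "summarizer")
builder.add_edge("summarizer", END)
graph = builder.compile()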
- delete_agent(
- agent: CompiledStateGraph | DeclarativeChatAgentNode,
- ) → None[source]#
Delete an agent created with create_declarative_chat_agent or create_declarative_chat_node.
- Parameters:
agent (CompiledStateGraph | DeclarativeChatAgentNode) – The CompiledStateGraph or DeclarativeChatAgentNode representing the agent to delete.
- Raises:
ValueError – If the agent ID cannot be found in the graph metadata.
- Return type:
None
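A minimal cleanup sketch, reusing the factory and HumanMessage import from the examples above; the agent definition created in Azure AI Foundry is removed once it is no longer needed:

agent = factory.create_declarative_chat_agent(
    model="gpt-4o",  # placeholder model deployment name
    name="temporary-agent",
    instructions="You are a helpful assistant.",
)
try:
    agent.invoke({"messages": [HumanMessage("Hello!")]})
finally:
    factory.delete_agent(agent)  # removes the agent definition created in Azure AI Foundry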