ChatModelUnitTests
- class langchain_tests.unit_tests.chat_models.ChatModelUnitTests [source]
Base class for chat model unit tests.
Test subclasses must implement the chat_model_class and chat_model_params properties to specify what model to test and its initialization parameters.

Example:

    from typing import Type

    from langchain_tests.unit_tests import ChatModelUnitTests
    from my_package.chat_models import MyChatModel


    class TestMyChatModelUnit(ChatModelUnitTests):
        @property
        def chat_model_class(self) -> Type[MyChatModel]:
            # Return the chat model class to test here
            return MyChatModel

        @property
        def chat_model_params(self) -> dict:
            # Return initialization parameters for the model.
            return {"model": "model-001", "temperature": 0}
Note
API references for individual test methods include troubleshooting tips.
Test subclasses must implement the following two properties:
- chat_model_class

  The chat model class to test, e.g., ChatParrotLink.

  Example:

      @property
      def chat_model_class(self) -> Type[ChatParrotLink]:
          return ChatParrotLink
- chat_model_params

  Initialization parameters for the chat model.

  Example:

      @property
      def chat_model_params(self) -> dict:
          return {"model": "bird-brain-001", "temperature": 0}
In addition, test subclasses can control what features are tested (such as tool calling or multi-modality) by selectively overriding the following properties. Expand to see details:
has_tool_calling
Boolean property indicating whether the chat model supports tool calling.
By default, this is determined by whether the chat model’s bind_tools method is overridden. It typically does not need to be overridden on the test class.
Example override:

    @property
    def has_tool_calling(self) -> bool:
        return True
tool_choice_value
Value to use for tool choice when used in tests.
Some tests for tool calling features attempt to force tool calling via a tool_choice parameter. A common value for this parameter is “any”. Defaults to None.
Note: if the value is set to “tool_name”, the name of the tool used in each test will be set as the value for tool_choice.
Example:

    @property
    def tool_choice_value(self) -> Optional[str]:
        return "any"
has_structured_output
Boolean property indicating whether the chat model supports structured output.
By default, this is determined by whether the chat model’s with_structured_output method is overridden. If the base implementation is intended to be used, this method should be overridden.
See: https://python.langchain.com/docs/concepts/structured_outputs/
Example:

    @property
    def has_structured_output(self) -> bool:
        return True
supports_image_inputs
Boolean property indicating whether the chat model supports image inputs. Defaults to False.

If set to True, the chat model will be tested using content blocks of the form:

    [
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ]
See https://python.langchain.com/docs/concepts/multimodality/
Example:

    @property
    def supports_image_inputs(self) -> bool:
        return True
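As a sketch of what assembling such a test input might look like, the image content block above can be built with the standard library alone (the bytes below are a placeholder standing in for real JPEG data, not an actual image):

```python
import base64

# Placeholder bytes standing in for real JPEG data (assumption for illustration).
fake_jpeg_bytes = b"\xff\xd8\xff\xe0 fake image payload \xff\xd9"
image_data = base64.b64encode(fake_jpeg_bytes).decode("ascii")

# Content blocks in the shape the test suite sends to the model.
content = [
    {"type": "text", "text": "describe the weather in this image"},
    {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
    },
]
```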
supports_video_inputs
Boolean property indicating whether the chat model supports video inputs. Defaults to False. No current tests are written for this feature.
returns_usage_metadata
Boolean property indicating whether the chat model returns usage metadata on invoke and streaming responses.
usage_metadata is an optional dict attribute on AIMessages that tracks input and output tokens: https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html

Example:

    @property
    def returns_usage_metadata(self) -> bool:
        return False
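For orientation, here is a minimal sketch of the usage_metadata shape these tests assert on, written as a plain dict (the real object is a TypedDict in langchain_core, and the counts here are illustrative values):

```python
# Shape of the usage_metadata dict attached to an AIMessage (illustrative values).
usage_metadata = {
    "input_tokens": 11,
    "output_tokens": 24,
    "total_tokens": 35,
}

# Per the UsageMetadata docs, total_tokens is the sum of input and output tokens.
assert usage_metadata["total_tokens"] == (
    usage_metadata["input_tokens"] + usage_metadata["output_tokens"]
)
```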
supports_anthropic_inputs
Boolean property indicating whether the chat model supports Anthropic-style inputs.
These inputs might feature “tool use” and “tool result” content blocks, e.g.,

    [
        {"type": "text", "text": "Hmm let me think about that"},
        {
            "type": "tool_use",
            "input": {"fav_color": "green"},
            "id": "foo",
            "name": "color_picker",
        },
    ]
If set to True, the chat model will be tested using content blocks of this form.

Example:

    @property
    def supports_anthropic_inputs(self) -> bool:
        return False
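A dependency-free sketch of how a conversation turn using these blocks might pair up, as plain dicts (the message wrapper classes from langchain_core are omitted, and the "tool_result" block follows Anthropic's documented shape, linked by the tool_use id):

```python
# Assistant turn containing an Anthropic-style "tool_use" content block.
assistant_content = [
    {"type": "text", "text": "Hmm let me think about that"},
    {
        "type": "tool_use",
        "input": {"fav_color": "green"},
        "id": "foo",
        "name": "color_picker",
    },
]

# The matching "tool result" block refers back to the tool_use id.
tool_result_content = [
    {"type": "tool_result", "tool_use_id": "foo", "content": "green"},
]

# The pairing invariant: result blocks reference the id of the tool_use block.
assert assistant_content[1]["id"] == tool_result_content[0]["tool_use_id"]
```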
supports_image_tool_message
Boolean property indicating whether the chat model supports ToolMessages that include image content, e.g.,
    ToolMessage(
        content=[
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
            },
        ],
        tool_call_id="1",
        name="random_image",
    )
If set to True, the chat model will be tested with message sequences that include ToolMessages of this form.

Example:

    @property
    def supports_image_tool_message(self) -> bool:
        return False
supported_usage_metadata_details
Property controlling what usage metadata details are emitted in both invoke and stream.

usage_metadata is an optional dict attribute on AIMessages that tracks input and output tokens: https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html

It includes optional keys input_token_details and output_token_details that can track usage details associated with special types of tokens, such as cached, audio, or reasoning tokens.

Only needs to be overridden if these details are supplied.
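A sketch of what an override might look like. The exact set of detail labels (for example "cache_read_input" or "reasoning_output") should be confirmed against the langchain_tests source for your installed version; the values below are assumptions for illustration:

```python
# Hypothetical test-class snippet: advertise which token-detail categories
# the model reports, keyed by entry point ("invoke" and "stream").
class TestMyChatModelUnit:  # would normally subclass ChatModelUnitTests
    @property
    def supported_usage_metadata_details(self) -> dict:
        return {
            "invoke": ["cache_read_input", "reasoning_output"],
            "stream": [],
        }


details = TestMyChatModelUnit().supported_usage_metadata_details
```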
- Testing initialization from environment variables
Some unit tests may require testing initialization from environment variables. These tests can be enabled by overriding the init_from_env_params property (see below):

init_from_env_params

This property is used in unit tests to test initialization from environment variables. It should return a tuple of three dictionaries that specify the environment variables, additional initialization args, and expected instance attributes to check.

Defaults to empty dicts. If not overridden, the test is skipped.

Example:

    @property
    def init_from_env_params(self) -> Tuple[dict, dict, dict]:
        return (
            {
                "MY_API_KEY": "api_key",
            },
            {
                "model": "bird-brain-001",
            },
            {
                "my_api_key": "api_key",
            },
        )
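To see how the three dicts are used, here is a rough, dependency-free re-enactment of what the test does. The real test constructs your chat model class; FakeModel below is a hypothetical stand-in introduced only for this sketch:

```python
import os


# Stand-in for a chat model whose API key is read from the environment.
class FakeModel:
    def __init__(self, model: str):
        self.model = model
        self.my_api_key = os.environ["MY_API_KEY"]


env_vars = {"MY_API_KEY": "api_key"}     # 1st dict: environment variables to set
init_args = {"model": "bird-brain-001"}  # 2nd dict: additional initialization args
expected = {"my_api_key": "api_key"}     # 3rd dict: expected instance attributes

# Set the environment, initialize, then check each expected attribute.
os.environ.update(env_vars)
instance = FakeModel(**init_args)
for attr, value in expected.items():
    assert getattr(instance, attr) == value
```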
Attributes

chat_model_class
    The chat model class to test, e.g., ChatParrotLink.
chat_model_params
    Initialization parameters for the chat model.
has_structured_output
    (bool) whether the chat model supports structured output.
has_tool_calling
    (bool) whether the model supports tool calling.
init_from_env_params
    (tuple) environment variables, additional initialization args, and expected instance attributes for testing initialization from environment variables.
returns_usage_metadata
    (bool) whether the chat model returns usage metadata on invoke and streaming responses.
supported_usage_metadata_details
    (dict) what usage metadata details are emitted in invoke and stream.
supports_anthropic_inputs
    (bool) whether the chat model supports Anthropic-style inputs.
supports_image_inputs
    (bool) whether the chat model supports image inputs, defaults to False.
supports_image_tool_message
    (bool) whether the chat model supports ToolMessages that include image content.
supports_video_inputs
    (bool) whether the chat model supports video inputs, defaults to False.
tool_choice_value
    (None or str) value to use for tool choice when used in tests.
Methods

test_bind_tool_pydantic(model, my_adder_tool)
    Test that chat model correctly handles Pydantic models that are passed into bind_tools.
test_init()
    Test model initialization.
test_init_from_env()
    Test initialization from environment variables.
test_init_streaming()
    Test that model can be initialized with streaming=True.
test_serdes(model, snapshot)
    Test serialization and deserialization of the model.
test_standard_params(model)
    Test that model properly generates standard parameters.
test_with_structured_output(model, schema)
    Test with_structured_output method.

- test_bind_tool_pydantic(model: BaseChatModel, my_adder_tool: BaseTool) → None [source]
Test that chat model correctly handles Pydantic models that are passed into bind_tools. Test is skipped if the has_tool_calling property on the test class is False.

Troubleshooting

If this test fails, ensure that the model’s bind_tools method properly handles Pydantic V2 models. langchain_core implements a utility function that will accommodate most formats: https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html

See example implementation of bind_tools here: https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.bind_tools

- Parameters:
model (BaseChatModel)
my_adder_tool (BaseTool)
- Return type:
None
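For orientation, the OpenAI-format tool schema that convert_to_openai_tool produces for a small Pydantic tool looks roughly like the dict below. It is hand-written here rather than generated, so treat the field details as illustrative:

```python
# Approximate output of convert_to_openai_tool for a Pydantic model like:
#   class my_adder_tool(BaseModel):
#       """Add two integers."""
#       a: int
#       b: int
openai_tool = {
    "type": "function",
    "function": {
        "name": "my_adder_tool",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer"},
                "b": {"type": "integer"},
            },
            "required": ["a", "b"],
        },
    },
}
```

A bind_tools implementation that normalizes every accepted tool format into this shape will typically satisfy the test.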
- test_init() → None [source]

Test model initialization. This should pass for all integrations.

Troubleshooting

If this test fails, ensure that:

- chat_model_params is specified and the model can be initialized from those params;
- The model accommodates standard parameters: https://python.langchain.com/docs/concepts/chat_models/#standard-parameters
- Return type:
None
- test_init_from_env() → None [source]

Test initialization from environment variables. Relies on the init_from_env_params property. Test is skipped if that property is not set.

Troubleshooting

If this test fails, ensure that init_from_env_params is specified correctly and that model parameters are properly set from environment variables during initialization.

- Return type:
None
- test_init_streaming() → None [source]

Test that model can be initialized with streaming=True. This is for backward-compatibility purposes.

Troubleshooting

If this test fails, ensure that the model can be initialized with a boolean streaming parameter.

- Return type:
None
- test_serdes(model: BaseChatModel, snapshot: SnapshotAssertion) → None [source]

Test serialization and deserialization of the model. Test is skipped if the is_lc_serializable property on the chat model class is not overwritten to return True.

Troubleshooting

If this test fails, check that the init_from_env_params property is correctly set on the test class.

- Parameters:
model (BaseChatModel)
snapshot (SnapshotAssertion)
- Return type:
None
- test_standard_params(model: BaseChatModel) → None [source]

Test that model properly generates standard parameters. These are used for tracing purposes.

Troubleshooting

If this test fails, check that the model accommodates standard parameters: https://python.langchain.com/docs/concepts/chat_models/#standard-parameters

Check also that the model class is named according to convention (e.g., ChatProviderName).

- Parameters:
model (BaseChatModel)
- Return type:
None
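To make "standard parameters" concrete, here is an illustrative snapshot of the tracing-oriented parameter dict a chat model is expected to expose. The ls_-prefixed key names follow the standard-parameters docs linked above, but verify the exact set against your langchain_core version; the values here are placeholders:

```python
# Illustrative standard tracing parameters for a hypothetical ChatParrotLink model.
ls_params = {
    "ls_provider": "parrot_link",       # provider name
    "ls_model_name": "bird-brain-001",  # model identifier
    "ls_model_type": "chat",            # always "chat" for chat models
    "ls_temperature": 0.0,              # sampling temperature, if set
}

# Every standard tracing parameter is namespaced with the "ls_" prefix.
assert all(key.startswith("ls_") for key in ls_params)
```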
- test_with_structured_output(model: BaseChatModel, schema: Any) → None [source]

Test with_structured_output method. Test is skipped if the has_structured_output property on the test class is False.

Troubleshooting

If this test fails, ensure that the model’s bind_tools method properly handles Pydantic V2 models. langchain_core implements a utility function that will accommodate most formats: https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html

See example implementation of with_structured_output here: https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output

- Parameters:
model (BaseChatModel)
schema (Any)
- Return type:
None
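The schema fixture can arrive in several equivalent forms; one of them is a JSON-schema-style dict, which with_structured_output implementations typically accept alongside Pydantic models. A hand-written sketch of such a dict schema and a response that conforms to it (the "Joke" schema and values are hypothetical, chosen only for illustration):

```python
# Hypothetical dict-form schema, equivalent to a small Pydantic model.
json_schema = {
    "title": "Joke",
    "description": "Joke to tell user.",
    "type": "object",
    "properties": {
        "setup": {"type": "string", "description": "question to set up a joke"},
        "punchline": {"type": "string", "description": "answer to resolve the joke"},
    },
    "required": ["setup", "punchline"],
}

# A structured response is then a dict whose keys satisfy the schema.
response = {
    "setup": "Why did the parrot cross the road?",
    "punchline": "To polly-nate.",
}
```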