init_chat_model

langchain.chat_models.base.init_chat_model(model: str, *, model_provider: str | None = None, configurable_fields: Literal[None] = None, config_prefix: str | None = None, **kwargs: Any) -> BaseChatModel
langchain.chat_models.base.init_chat_model(model: Literal[None] = None, *, model_provider: str | None = None, configurable_fields: Literal[None] = None, config_prefix: str | None = None, **kwargs: Any) -> _ConfigurableModel
langchain.chat_models.base.init_chat_model(model: str | None = None, *, model_provider: str | None = None, configurable_fields: Literal['any'] | List[str] | Tuple[str, ...] = None, config_prefix: str | None = None, **kwargs: Any) -> _ConfigurableModel

Beta

This feature is in beta. It is actively being worked on, so the API may change.

Initialize a ChatModel from the model name and provider.

The integration package corresponding to the model provider must be installed.

New in version 0.2.7.

Changed in version 0.2.8: Support for configurable_fields and config_prefix added.

Changed in version 0.2.12: Support for Ollama via langchain-ollama package added. Previously langchain-community version of Ollama (now deprecated) was installed by default.

Parameters:
  • model – The name of the model, e.g. “gpt-4o”, “claude-3-opus-20240229”.

  • model_provider

    The model provider. Supported model_provider values and the corresponding integration package:

    • openai (langchain-openai)

    • anthropic (langchain-anthropic)

    • azure_openai (langchain-openai)

    • google_vertexai (langchain-google-vertexai)

    • google_genai (langchain-google-genai)

    • bedrock (langchain-aws)

    • cohere (langchain-cohere)

    • fireworks (langchain-fireworks)

    • together (langchain-together)

    • mistralai (langchain-mistralai)

    • huggingface (langchain-huggingface)

    • groq (langchain-groq)

    • ollama (langchain-ollama) [support added in langchain==0.2.12]

    If model_provider is not specified, it will be inferred from model where possible. The following providers are inferred from these model prefixes (a short sketch follows this parameter list):

    • gpt-3… or gpt-4… -> openai

    • claude… -> anthropic

    • amazon… -> bedrock

    • gemini… -> google_vertexai

    • command… -> cohere

    • accounts/fireworks… -> fireworks

  • configurable_fields

    Which model parameters are configurable:

    • None: No configurable fields.

    • "any": All fields are configurable. See the Security Note below.

    • Union[List[str], Tuple[str, ...]]: The specified fields are configurable.

    Field names are assumed to have config_prefix stripped when a config_prefix is set. If model is specified, configurable_fields defaults to None; if model is not specified, it defaults to ("model", "model_provider").

    *Security Note*: Setting configurable_fields="any" means fields like api_key, base_url, etc. can be altered at runtime, potentially redirecting model requests to a different service or user. If you accept untrusted configurations, enumerate the configurable fields explicitly, e.g. configurable_fields=("model", "model_provider"), as in the tool-binding example below.

  • config_prefix – If config_prefix is a non-empty string then model will be configurable at runtime via the config["configurable"]["{config_prefix}_{param}"] keys. If config_prefix is an empty string then model will be configurable via config["configurable"]["{param}"].

  • kwargs – Additional keyword args to pass to <<selected ChatModel>>.__init__(model=model_name, **kwargs).
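
For example, provider inference means model_provider can often be omitted entirely. A minimal sketch (assuming langchain-openai and langchain-anthropic are installed):

from langchain.chat_models import init_chat_model

# "gpt-4o" matches the gpt-4... prefix, so model_provider="openai" is inferred.
gpt_4o = init_chat_model("gpt-4o", temperature=0)

# "claude-3-opus-20240229" matches the claude... prefix -> "anthropic".
claude_opus = init_chat_model("claude-3-opus-20240229", temperature=0)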

Returns:

A BaseChatModel corresponding to the specified model and model_provider if configurability is inferred to be False. If configurable, a chat model emulator that initializes the underlying model at runtime once a config is passed in.
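
To illustrate the two return types (a rough sketch; _ConfigurableModel is a private class, so don't rely on its name):

from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel

concrete = init_chat_model("gpt-4o", model_provider="openai")
assert isinstance(concrete, BaseChatModel)  # real chat model, initialized now

deferred = init_chat_model()  # no model specified -> configurable proxy
assert not isinstance(deferred, BaseChatModel)  # underlying model built at runtime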

Raises:
  • ValueError – If model_provider cannot be inferred or isn’t supported.

  • ImportError – If the model provider integration package is not installed.
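
If a missing integration package or an unknown provider should not crash your program, you can catch these exceptions explicitly (a minimal sketch):

from langchain.chat_models import init_chat_model

try:
    model = init_chat_model("claude-3-opus-20240229", model_provider="anthropic")
except ImportError:
    # The langchain-anthropic package is not installed.
    raise SystemExit("Run: pip install langchain-anthropic")
except ValueError:
    # model_provider could not be inferred or is not supported.
    raise SystemExit("Unknown or unsupported model provider.")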

Initialize non-configurable models:
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
from langchain.chat_models import init_chat_model

gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)
claude_opus = init_chat_model("claude-3-opus-20240229", model_provider="anthropic", temperature=0)
gemini_15 = init_chat_model("gemini-1.5-pro", model_provider="google_vertexai", temperature=0)

gpt_4o.invoke("what's your name")
claude_opus.invoke("what's your name")
gemini_15.invoke("what's your name")
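The returned models are ordinary Runnables, so the standard streaming and batching methods are available as well, e.g.:

for chunk in gpt_4o.stream("what's your name"):
    print(chunk.content, end="", flush=True)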
Create a partially configurable model with no default model:
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model

# When no model is given, configurable_fields defaults to ("model", "model_provider").
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "gpt-4o"}}
)
# GPT-4o response

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "claude-3-5-sonnet-20240620"}}
)
# claude-3.5 sonnet response
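Because the configurable model is itself a Runnable, you can also pin a configuration up front with the standard with_config method instead of passing config on every call (a sketch using the generic Runnable API):

claude_by_default = configurable_model.with_config(
    configurable={"model": "claude-3-5-sonnet-20240620"}
)
claude_by_default.invoke("what's your name")
# claude-3.5 sonnet response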
Create a fully configurable model with a default model and a config prefix:
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model

configurable_model_with_default = init_chat_model(
    "gpt-4o",
    model_provider="openai",
    configurable_fields="any",  # this allows us to configure other params like temperature, max_tokens, etc at runtime.
    config_prefix="foo",
    temperature=0
)

configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0

configurable_model_with_default.invoke(
    "what's your name",
    config={
        "configurable": {
            "foo_model": "claude-3-5-sonnet-20240620",
            "foo_model_provider": "anthropic",
            "foo_temperature": 0.6
        }
    }
)
# Claude-3.5 sonnet response with temperature 0.6
Bind tools to a configurable model:

You can call any ChatModel declarative method (e.g. bind_tools, with_structured_output) on a configurable model in the same way that you would on a regular model.

# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model
from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

configurable_model = init_chat_model(
    "gpt-4o",
    configurable_fields=("model", "model_provider"),
    temperature=0
)

configurable_model_with_tools = configurable_model.bind_tools([GetWeather, GetPopulation])
configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
# GPT-4o response with tool calls

configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
    config={"configurable": {"model": "claude-3-5-sonnet-20240620"}}
)
# Claude-3.5 sonnet response with tools
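
Other declarative methods work the same way. For example, a sketch using with_structured_output with the GetWeather schema defined above:

structured_model = configurable_model.with_structured_output(GetWeather)

structured_model.invoke("What's the weather in San Francisco, CA?")
# GetWeather parsed from the GPT-4o response

structured_model.invoke(
    "What's the weather in San Francisco, CA?",
    config={"configurable": {"model": "claude-3-5-sonnet-20240620"}}
)
# GetWeather parsed from the Claude-3.5 sonnet response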
