ChatPromptTemplate#

class langchain_core.prompts.chat.ChatPromptTemplate[source]#

Bases: BaseChatPromptTemplate

Prompt template for chat models.

Use to create flexible templated prompts for chat models.

Examples

Changed in version 0.2.24: You can pass any message-like format supported by ChatPromptTemplate.from_messages() directly to the ChatPromptTemplate() initializer.

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([
    ("system", "You are a helpful AI bot. Your name is {name}."),
    ("human", "Hello, how are you doing?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "{user_input}"),
])

prompt_value = template.invoke(
    {
        "name": "Bob",
        "user_input": "What is your name?"
    }
)
# Output:
# ChatPromptValue(
#    messages=[
#        SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
#        HumanMessage(content='Hello, how are you doing?'),
#        AIMessage(content="I'm doing well, thanks!"),
#        HumanMessage(content='What is your name?')
#    ]
#)

Messages Placeholder:

# In addition to Human/AI/Tool/Function messages,
# you can initialize the template with a MessagesPlaceholder
# either using the class directly or with the shorthand tuple syntax:

template = ChatPromptTemplate([
    ("system", "You are a helpful AI bot."),
    # Means the template will receive an optional list of messages under
    # the "conversation" key
    ("placeholder", "{conversation}")
    # Equivalently:
    # MessagesPlaceholder(variable_name="conversation", optional=True)
])

prompt_value = template.invoke(
    {
        "conversation": [
            ("human", "Hi!"),
            ("ai", "How can I assist you today?"),
            ("human", "Can you make me an ice cream sundae?"),
            ("ai", "No.")
        ]
    }
)

# Output:
# ChatPromptValue(
#    messages=[
#        SystemMessage(content='You are a helpful AI bot.'),
#        HumanMessage(content='Hi!'),
#        AIMessage(content='How can I assist you today?'),
#        HumanMessage(content='Can you make me an ice cream sundae?'),
#        AIMessage(content='No.'),
#    ]
#)

Single-variable template:

If your prompt has only a single input variable (i.e., one instance of "{variable_name}"), and you invoke the template with a non-dict object, the prompt template will inject the provided argument into that variable location.

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([
    ("system", "You are a helpful AI bot. Your name is Carl."),
    ("human", "{user_input}"),
])

prompt_value = template.invoke("Hello, there!")
# Equivalent to
# prompt_value = template.invoke({"user_input": "Hello, there!"})

# Output:
#  ChatPromptValue(
#     messages=[
#         SystemMessage(content='You are a helpful AI bot. Your name is Carl.'),
#         HumanMessage(content='Hello, there!'),
#     ]
# )

Create a chat prompt template from a variety of message formats.

Parameters:
  • messages – sequence of message representations. A message can be represented using the following formats: (1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of (message type, template); e.g., ("human", "{user_input}"), (4) 2-tuple of (message class, template), (5) a string, which is shorthand for ("human", template); e.g., "{user_input}".

  • template_format – format of the template. Defaults to "f-string".

  • input_variables – A list of the names of the variables whose values are required as inputs to the prompt.

  • optional_variables – A list of the names of the variables for placeholder or MessagesPlaceholder that are optional. These variables are auto-inferred from the prompt, and the user need not provide them.

  • partial_variables – A dictionary of the partial variables the prompt template carries. Partial variables populate the template so that you don't need to pass them in every time you call the prompt.

  • validate_template – Whether to validate the template.

  • input_types – A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings.

Returns:

A chat prompt template.

Examples

Instantiation from a list of message templates:

template = ChatPromptTemplate([
    ("human", "Hello, how are you?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "That's good to hear."),
])

Instantiation from mixed message formats:

template = ChatPromptTemplate([
    SystemMessage(content="hello"),
    ("human", "Hello, how are you?"),
])

Note

ChatPromptTemplate implements the standard Runnable Interface. 🏃

The Runnable Interface has additional methods that are available on runnables, such as with_types, with_retry, assign, bind, get_graph, and more.

param input_types: Dict[str, Any] [Optional]#

A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings.

param input_variables: List[str] [Required]#

A list of the names of the variables whose values are required as inputs to the prompt.

param messages: List[MessageLike] [Required]#

List of messages consisting of either message prompt templates or messages.

param metadata: Dict[str, Any] | None = None#

Metadata to be used for tracing.

param optional_variables: List[str] = []#

A list of the names of the variables for placeholder or MessagesPlaceholder that are optional. These variables are auto-inferred from the prompt, and the user need not provide them.

param output_parser: BaseOutputParser | None = None#

How to parse the output of calling an LLM on this formatted prompt.

param partial_variables: Mapping[str, Any] [Optional]#

A dictionary of the partial variables the prompt template carries.

Partial variables populate the template so that you don't need to pass them in every time you call the prompt.
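As an illustrative sketch (the template text and variable names here are hypothetical), a variable filled in via partial() becomes a partial variable and no longer needs to be supplied at invocation time:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([
    ("system", "Today is {date}."),
    ("human", "{question}"),
]).partial(date="2024-07-01")

# "date" is now a partial variable, so only "question" must be provided.
prompt_value = template.invoke({"question": "What day is it?"})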

param tags: List[str] | None = None#

Tags to be used for tracing.

param validate_template: bool = False#

Whether or not to try validating the template.

async abatch(inputs: List[Input], config: RunnableConfig | List[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) → List[Output]#

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:
  • inputs (List[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | List[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Returns:

A list of outputs from the Runnable.

Return type:

List[Output]
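A minimal sketch of abatch on a prompt template (inputs are illustrative); each input is formatted via ainvoke and results are returned in input order:

import asyncio

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([("human", "{user_input}")])

async def main() -> None:
    # Two inputs formatted concurrently; output order matches input order.
    prompt_values = await template.abatch(
        [{"user_input": "Hi!"}, {"user_input": "Bye!"}]
    )
    print(prompt_values)

asyncio.run(main())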

async abatch_as_completed(inputs: Sequence[Input], config: RunnableConfig | Sequence[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) → AsyncIterator[Tuple[int, Output | Exception]]#

Run ainvoke in parallel on a list of inputs, yielding results as they complete.

Parameters:
  • inputs (Sequence[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | Sequence[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Yields:

A tuple of the index of the input and the output from the Runnable.

Return type:

AsyncIterator[Tuple[int, Output | Exception]]

async aformat(**kwargs: Any) → str#

Async format the chat template into a string.

Parameters:

**kwargs (Any) – keyword arguments to use for filling in template variables in all the template messages in this chat template.

Returns:

formatted string.

Return type:

str

async aformat_messages(**kwargs: Any) → List[BaseMessage][source]#

Async format the chat template into a list of finalized messages.

Parameters:

**kwargs (Any) – keyword arguments to use for filling in template variables in all the template messages in this chat template.

Returns:

list of formatted messages.

Raises:

ValueError – If unexpected input.

Return type:

List[BaseMessage]

async aformat_prompt(**kwargs: Any) → PromptValue#

Async format prompt. Should return a PromptValue.

Parameters:

**kwargs (Any) – Keyword arguments to use for formatting.

Returns:

PromptValue.

Return type:

PromptValue

async ainvoke(input: Dict, config: RunnableConfig | None = None, **kwargs: Any) → PromptValue#

Async invoke the prompt.

Parameters:
  • input (Dict) – Dict, input to the prompt.

  • config (RunnableConfig | None) – RunnableConfig, configuration for the prompt.

  • kwargs (Any) –

Returns:

The output of the prompt.

Return type:

PromptValue

append(message: BaseMessagePromptTemplate | BaseMessage | BaseChatPromptTemplate | Tuple[str | Type, str | List[dict] | List[object]] | str) → None[source]#

Append a message to the end of the chat template.

Parameters:

message (BaseMessagePromptTemplate | BaseMessage | BaseChatPromptTemplate | Tuple[str | Type, str | List[dict] | List[object]] | str) – representation of a message to append.

Return type:

None
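A small sketch (message contents are illustrative); note that append mutates the template in place and returns None:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([("system", "You are a helpful AI bot.")])

# Tuples, strings, and message objects are all accepted;
# a bare string is shorthand for a human message template.
template.append(("human", "{user_input}"))
template.append("{follow_up}")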

async astream(input: Input, config: RunnableConfig | None = None, **kwargs: Any | None) → AsyncIterator[Output]#

Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.

Parameters:
  • input (Input) – The input to the Runnable.

  • config (RunnableConfig | None) – The config to use for the Runnable. Defaults to None.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Yields:

The output of the Runnable.

Return type:

AsyncIterator[Output]

astream_events(input: Any, config: RunnableConfig | None = None, *, version: Literal['v1', 'v2'], include_names: Sequence[str] | None = None, include_types: Sequence[str] | None = None, include_tags: Sequence[str] | None = None, exclude_names: Sequence[str] | None = None, exclude_types: Sequence[str] | None = None, exclude_tags: Sequence[str] | None = None, **kwargs: Any) → AsyncIterator[StandardStreamEvent | CustomStreamEvent]#

Beta

This API is in beta and may change in the future.

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).

  • name: str - The name of the Runnable that generated the event.

  • run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.

  • parent_ids: List[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.

  • tags: Optional[List[str]] - The tags of the Runnable that generated the event.

  • metadata: Optional[Dict[str, Any]] - The metadata of the Runnable that generated the event.

  • data: Dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION This reference table is for the V2 version of the schema.

| event | name | chunk | input | output |
|---|---|---|---|---|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | | 'Hello human!' |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description |
|---|---|---|
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:
  • input (Any) – The input to the Runnable.

  • config (RunnableConfig | None) – The config to use for the Runnable.

  • version (Literal['v1', 'v2']) – The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

  • include_names (Sequence[str] | None) – Only include events from runnables with matching names.

  • include_types (Sequence[str] | None) – Only include events from runnables with matching types.

  • include_tags (Sequence[str] | None) – Only include events from runnables with matching tags.

  • exclude_names (Sequence[str] | None) – Exclude events from runnables with matching names.

  • exclude_types (Sequence[str] | None) – Exclude events from runnables with matching types.

  • exclude_tags (Sequence[str] | None) – Exclude events from runnables with matching tags.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable. These will be passed to astream_log, as this implementation of astream_events is built on top of astream_log.

Yields:

An async stream of StreamEvents.

Raises:

NotImplementedError – If the version is not v1 or v2.

Return type:

AsyncIterator[StandardStreamEvent | CustomStreamEvent]

batch(inputs: List[Input], config: RunnableConfig | List[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) → List[Output]#

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:
  • inputs (List[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | List[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Return type:

List[Output]
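A minimal sketch (inputs are illustrative); for a prompt template this simply runs invoke over each input in a thread pool:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([("human", "{user_input}")])

# Returns one ChatPromptValue per input, in input order.
prompt_values = template.batch(
    [{"user_input": "Hi!"}, {"user_input": "Bye!"}]
)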

batch_as_completed(inputs: Sequence[Input], config: RunnableConfig | Sequence[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) → Iterator[Tuple[int, Output | Exception]]#

Run invoke in parallel on a list of inputs, yielding results as they complete.

Parameters:
  • inputs (Sequence[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | Sequence[RunnableConfig] | None) – A config to use when invoking the Runnable. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Return type:

Iterator[Tuple[int, Output | Exception]]

configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) → RunnableSerializable[Input, Output]#

Configure alternatives for Runnables that can be set at runtime.

Parameters:
  • which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.

  • default_key (str) – The default key to use if no alternative is selected. Defaults to "default".

  • prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id. Defaults to False.

  • **kwargs (Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.

Returns:

A new Runnable with the alternatives configured.

Return type:

RunnableSerializable[Input, Output]

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)

configurable_fields(**kwargs: ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) → RunnableSerializable[Input, Output]#

Configure particular Runnable fields at runtime.

Parameters:

**kwargs (ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) – A dictionary of ConfigurableField instances to configure.

Returns:

A new Runnable with the fields configured.

Return type:

RunnableSerializable[Input, Output]

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print("max_tokens_200: ", model.with_config(
    configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content
)
extend(messages: Sequence[BaseMessagePromptTemplate | BaseMessage | BaseChatPromptTemplate | Tuple[str | Type, str | List[dict] | List[object]] | str]) โ†’ None[source]#

Extend the chat template with a sequence of messages.

Parameters:

messages (Sequence[BaseMessagePromptTemplate | BaseMessage | BaseChatPromptTemplate | Tuple[str | Type, str | List[dict] | List[object]] | str]) – sequence of message representations to append.

Return type:

None

format(**kwargs: Any) → str#

Format the chat template into a string.

Parameters:

**kwargs (Any) – keyword arguments to use for filling in template variables in all the template messages in this chat template.

Returns:

formatted string.

Return type:

str

format_messages(**kwargs: Any) → List[BaseMessage][source]#

Format the chat template into a list of finalized messages.

Parameters:

**kwargs (Any) – keyword arguments to use for filling in template variables in all the template messages in this chat template.

Returns:

list of formatted messages.

Return type:

List[BaseMessage]
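For example (template text is illustrative), format_messages returns the finalized BaseMessage objects directly rather than wrapping them in a ChatPromptValue:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([
    ("system", "You are a helpful AI bot. Your name is {name}."),
    ("human", "{user_input}"),
])

messages = template.format_messages(name="Bob", user_input="Hello!")
# [SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
#  HumanMessage(content='Hello!')]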

format_prompt(**kwargs: Any) → PromptValue#

Format prompt. Should return a PromptValue.

Parameters:

**kwargs (Any) – Keyword arguments to use for formatting.

Returns:

PromptValue.

Return type:

PromptValue

classmethod from_messages(messages: Sequence[BaseMessagePromptTemplate | BaseMessage | BaseChatPromptTemplate | Tuple[str | Type, str | List[dict] | List[object]] | str], template_format: Literal['f-string', 'mustache', 'jinja2'] = 'f-string') → ChatPromptTemplate[source]#

Create a chat prompt template from a variety of message formats.

Examples

Instantiation from a list of message templates:

template = ChatPromptTemplate.from_messages([
    ("human", "Hello, how are you?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "That's good to hear."),
])

Instantiation from mixed message formats:

template = ChatPromptTemplate.from_messages([
    SystemMessage(content="hello"),
    ("human", "Hello, how are you?"),
])

Parameters:
  • messages (Sequence[BaseMessagePromptTemplate | BaseMessage | BaseChatPromptTemplate | Tuple[str | Type, str | List[dict] | List[object]] | str]) – sequence of message representations. A message can be represented using the following formats: (1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of (message type, template); e.g., ("human", "{user_input}"), (4) 2-tuple of (message class, template), (5) a string, which is shorthand for ("human", template); e.g., "{user_input}".

  • template_format (Literal['f-string', 'mustache', 'jinja2']) – format of the template. Defaults to "f-string".

Returns:

a chat prompt template.

Return type:

ChatPromptTemplate

classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate[source]#

Deprecated since version langchain-core==0.0.1: Use from_messages classmethod instead.

Create a chat prompt template from a list of (role, template) tuples.

Parameters:

string_messages (List[Tuple[str, str]]) – list of (role, template) tuples.

Returns:

a chat prompt template.

Return type:

ChatPromptTemplate

classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate[source]#

Deprecated since version langchain-core==0.0.1: Use from_messages classmethod instead.

Create a chat prompt template from a list of (role class, template) tuples.

Parameters:

string_messages (List[Tuple[Type[BaseMessagePromptTemplate], str]]) – list of (role class, template) tuples.

Returns:

a chat prompt template.

Return type:

ChatPromptTemplate

classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate[source]#

Create a chat prompt template from a template string.

Creates a chat template consisting of a single message assumed to be from the human.

Parameters:
  • template (str) – template string.

  • **kwargs (Any) – keyword arguments to pass to the constructor.

Returns:

A new instance of this class.

Return type:

ChatPromptTemplate
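A short sketch (the template string is illustrative); the result is equivalent to a template with a single human message:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_template("Tell me a joke about {topic}")

# Equivalent to ChatPromptTemplate([("human", "Tell me a joke about {topic}")])
prompt_value = template.invoke({"topic": "bears"})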

invoke(input: Dict, config: RunnableConfig | None = None) → PromptValue#

Invoke the prompt.

Parameters:
  • input (Dict) – Dict, input to the prompt.

  • config (RunnableConfig | None) – RunnableConfig, configuration for the prompt.

Returns:

The output of the prompt.

Return type:

PromptValue

partial(**kwargs: Any) → ChatPromptTemplate[source]#

Get a new ChatPromptTemplate with some input variables already filled in.

Parameters:

**kwargs (Any) – keyword arguments to use for filling in template variables. Ought to be a subset of the input variables.

Returns:

A new ChatPromptTemplate.

Return type:

ChatPromptTemplate

Example

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an AI assistant named {name}."),
        ("human", "Hi I'm {user}"),
        ("ai", "Hi there, {user}, I'm {name}."),
        ("human", "{input}"),
    ]
)
template2 = template.partial(user="Lucy", name="R2D2")

template2.format_messages(input="hello")

pretty_print() → None#

Print a human-readable representation.

Return type:

None
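A quick usage sketch (the exact printed layout may vary by version):

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([
    ("system", "You are a helpful AI bot."),
    ("human", "{user_input}"),
])

# Prints each message template under a role header, with input
# variables such as {user_input} left as placeholders.
template.pretty_print()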

pretty_repr(html: bool = False) → str[source]#

Human-readable representation.

Parameters:

html (bool) – Whether to format as HTML. Defaults to False.

Returns:

Human-readable representation.

Return type:

str

save(file_path: Path | str) → None[source]#

Save prompt to file.

Parameters:

file_path (Path | str) – path to file.

Return type:

None

stream(input: Input, config: RunnableConfig | None = None, **kwargs: Any | None) → Iterator[Output]#

Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.

Parameters:
  • input (Input) – The input to the Runnable.

  • config (RunnableConfig | None) – The config to use for the Runnable. Defaults to None.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Yields:

The output of the Runnable.

Return type:

Iterator[Output]
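Since a prompt template produces its output in one step, the default stream yields a single ChatPromptValue, as in this sketch (input is illustrative):

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([("human", "{user_input}")])

# The iterator yields exactly one chunk: the fully formatted prompt.
for chunk in template.stream({"user_input": "Hello!"}):
    print(chunk)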

to_json() → SerializedConstructor | SerializedNotImplemented#

Serialize the Runnable to JSON.

Returns:

A JSON-serializable representation of the Runnable.

Return type:

SerializedConstructor | SerializedNotImplemented
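A brief usage sketch; the comment about the dict's keys and round-tripping reflects the serialization format as we understand it, not a guaranteed schema:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([("human", "{user_input}")])

serialized = template.to_json()
# A dict with keys such as "lc", "type", "id", and "kwargs" that can be
# stored as JSON (e.g., via json.dumps) for later reconstruction.
print(serialized)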
