create_tagging_chain#
- langchain.chains.openai_functions.tagging.create_tagging_chain(schema: dict, llm: BaseLanguageModel, prompt: ChatPromptTemplate | None = None, **kwargs: Any) → Chain [source]#
Deprecated since version 0.2.13: LangChain has introduced a method called with_structured_output that is available on ChatModels capable of tool calling. See the API reference for this function for replacement details: <https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.tagging.create_tagging_chain.html>. You can read more about with_structured_output here: <https://python.langchain.com/v0.2/docs/how_to/structured_output/>. If you notice other issues, please provide feedback here: <langchain-ai/langchain#18154>
- Create a chain that extracts information from a passage
based on a schema.
This function is deprecated. Please use with_structured_output instead. See example usage below:
```python
from typing_extensions import Annotated, TypedDict

from langchain_anthropic import ChatAnthropic


class Joke(TypedDict):
    """Tagged joke."""

    setup: Annotated[str, ..., "The setup of the joke"]
    punchline: Annotated[str, ..., "The punchline of the joke"]


# Or any other chat model that supports tools.
# Please refer to the documentation of structured_output
# to see an up-to-date list of which models support
# with_structured_output.
model = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
structured_llm = model.with_structured_output(Joke)
structured_llm.invoke(
    "Why did the cat cross the road? To get to the other "
    "side... and then lay down in the middle of it!"
)
```
Read more here: https://python.langchain.com/v0.2/docs/how_to/structured_output/
- Parameters:
schema (dict) – The schema of the entities to extract.
llm (BaseLanguageModel) – The language model to use.
prompt (ChatPromptTemplate | None) –
kwargs (Any) –
- Returns:
Chain (LLMChain) that can be used to extract information from a passage.
- Return type:
Chain
Examples using create_tagging_chain
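The example links did not survive extraction here. As a hedged sketch of the deprecated API itself: the `schema` argument is a JSON-Schema-style dict whose `properties` describe the attributes to tag. The keys `sentiment`, `aggressiveness`, and `language` below are illustrative assumptions, not part of this reference; the commented construction lines require installed model packages and API credentials.

```python
# Illustrative tagging schema: a JSON-Schema-style dict describing the
# attributes the chain should extract from a passage. The specific keys
# here are assumptions chosen for the sketch, not mandated by the API.
schema = {
    "properties": {
        "sentiment": {
            "type": "string",
            "enum": ["positive", "neutral", "negative"],
            "description": "Overall sentiment of the passage",
        },
        "aggressiveness": {
            "type": "integer",
            "description": "How aggressive the passage is, from 1 to 10",
        },
        "language": {
            "type": "string",
            "description": "The language the passage is written in",
        },
    },
    "required": ["sentiment", "language"],
}

# Legacy construction (deprecated; requires a chat model and credentials):
# from langchain.chains import create_tagging_chain
# from langchain_openai import ChatOpenAI
#
# chain = create_tagging_chain(schema, ChatOpenAI(temperature=0))
# result = chain.run("Estoy muy contento de haberte conocido.")
```

For new code, prefer the `with_structured_output` pattern shown in the deprecation notice above.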