create_tagging_chain_pydantic
- langchain.chains.openai_functions.tagging.create_tagging_chain_pydantic(pydantic_schema: Any, llm: BaseLanguageModel, prompt: ChatPromptTemplate | None = None, **kwargs: Any) → Chain
Deprecated since version 0.2.13: LangChain has introduced a method called with_structured_output that is available on chat models capable of tool calling. See the API reference for this function for the replacement: <https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.tagging.create_tagging_chain_pydantic.html> You can read more about with_structured_output here: <https://python.langchain.com/docs/how_to/structured_output/>. If you notice other issues, please provide feedback here: <langchain-ai/langchain#18154>
- Create a chain that extracts information from a passage based on a pydantic schema (see the usage sketch below).
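For context, here is a minimal sketch of the deprecated usage. The chat model (ChatOpenAI via langchain_openai), the model name, the Tags schema, and the input text are assumptions for illustration only; new code should use with_structured_output as shown further below.

```python
# Minimal sketch of the deprecated usage (assumed model and schema).
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

from langchain.chains.openai_functions.tagging import create_tagging_chain_pydantic


# Illustrative schema describing the attributes to tag/extract.
class Tags(BaseModel):
    sentiment: str = Field(description="The sentiment of the text")
    language: str = Field(description="The language the text is written in")


# Any chat model that supports OpenAI function calling (assumed here).
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Returns an LLMChain that parses the model's function call into a Tags instance.
chain = create_tagging_chain_pydantic(Tags, llm)
chain.run("Estoy increiblemente contento de haberte conocido!")
```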
This function is deprecated. Please use with_structured_output instead. See example usage below:
```python
from pydantic import BaseModel, Field
from langchain_anthropic import ChatAnthropic


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")


# Or any other chat model that supports tools.
# Please refer to the documentation of structured_output
# to see an up-to-date list of which models support
# with_structured_output.
model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
structured_llm = model.with_structured_output(Joke)
structured_llm.invoke(
    "Why did the cat cross the road? To get to the other "
    "side... and then lay down in the middle of it!"
)
```
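In the example above, structured_llm.invoke(...) returns a Joke instance with the setup and punchline fields populated from the model's response, rather than a raw chat message.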
Read more here: https://python.langchain.com/docs/how_to/structured_output/
- Parameters:
pydantic_schema (Any) – The pydantic schema of the entities to extract.
llm (BaseLanguageModel) – The language model to use.
prompt (ChatPromptTemplate | None)
kwargs (Any)
- Returns:
Chain (LLMChain) that can be used to extract information from a passage.
- Return type:
Chain