langchain: 0.3.13#
Main entrypoint into the package.
agents#
Classes
- Agent that uses tools.
- Base class for parsing agent output into agent action/finish.
- Base Multi Action Agent class.
- Base Single Action Agent class.
- Tool that just returns the query.
- Base class for parsing agent output into agent actions/finish.
- Agent powered by Runnables.
- Agent powered by Runnables.
- Iterator for AgentExecutor.
- Information about a VectorStore.
- Toolkit for routing between Vector Stores.
- Toolkit for interacting with a Vector Store.
- Output parser for the chat agent.
- Output parser for the conversational agent.
- Output parser for the conversational agent.
- Configuration for a chain to use in the MRKL system.
- MRKL output parser for the chat agent.
- AgentAction with info needed to submit custom tool output to an existing run.
- AgentFinish with run and thread metadata.
- Run an OpenAI Assistant.
- Memory used to save agent output AND intermediate steps.
- Parses tool invocations and final answers in JSON format.
- Parses a message into agent action/finish.
- Parses a message into agent actions/finish.
- Parses ReAct-style LLM calls that have a single tool input, in JSON format.
- Parses ReAct-style LLM calls that have a single tool input.
- Parses self-ask style LLM calls.
- Parses a message into agent actions/finish.
- Parses tool invocations and final answers in XML format.
- Output parser for the ReAct agent.
- Chat prompt template for the agent scratchpad.
- Output parser for the structured chat agent.
- Output parser with retries for the structured chat agent.
- Tool that is run when an invalid tool name is encountered by the agent.
Functions
- A convenience method for creating a conversational retrieval agent.
- Construct the scratchpad that lets the agent continue its thought process.
- Construct the scratchpad that lets the agent continue its thought process.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Convert (AgentAction, tool output) tuples into ToolMessages.
- Format the intermediate steps as XML.
- Create an agent that uses JSON to format its logic, built for Chat Models.
- Create an agent that uses OpenAI function calling.
- Create an agent that uses OpenAI tools.
- Parse an AI message potentially containing tool_calls.
- Parse an AI message potentially containing tool_calls.
- Create an agent that uses ReAct prompting.
- Create an agent that uses self-ask with search prompting.
- Create an agent aimed at supporting tools with multiple inputs.
- Create an agent that uses tools.
- Validate tools for single input.
- Create an agent that uses XML to format its logic.
Deprecated classes
Deprecated functions
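A minimal sketch of using these pieces together: a ReAct agent built with create_react_agent and run with AgentExecutor. The ChatOpenAI model, the hub prompt id "hwchase17/react", and the word_length tool are illustrative assumptions, not part of this reference.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # assumed provider package

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

# Pull a published ReAct prompt; "hwchase17/react" is a commonly used hub id.
prompt = hub.pull("hwchase17/react")
llm = ChatOpenAI(model="gpt-4o-mini")

# create_react_agent returns a Runnable; AgentExecutor runs the tool-calling loop.
agent = create_react_agent(llm, [word_length], prompt)
executor = AgentExecutor(agent=agent, tools=[word_length], verbose=True)
executor.invoke({"input": "How many letters are in 'LangChain'?"})
```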
callbacks#
Classes
- Callback handler that returns an async iterator.
- Callback handler that returns an async iterator.
- Callback handler for streaming in agents.
- Tracer that logs via the input Logger.
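A hedged sketch of streaming tokens through the async iterator callback handler listed above; the ChatOpenAI model is an assumption.

```python
import asyncio

from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain_openai import ChatOpenAI  # assumed provider package

async def stream_joke() -> None:
    handler = AsyncIteratorCallbackHandler()
    llm = ChatOpenAI(model="gpt-4o-mini", streaming=True, callbacks=[handler])

    # Start generation in the background, then consume tokens as they arrive.
    task = asyncio.create_task(llm.ainvoke("Tell me a short joke"))
    async for token in handler.aiter():
        print(token, end="", flush=True)
    await task

asyncio.run(stream_joke())
```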
chains#
Classes
- Abstract base class for creating structured sequences of calls to components.
- Base interface for chains combining documents.
- Interface for the combine_docs method.
- Interface for the combine_docs method.
- Class for a constitutional principle.
- Chain for chatting with an index.
- Chain for chatting with a vector database.
- Input type for ConversationalRetrievalChain.
- Chain for interacting with an Elasticsearch database.
- Chain that combines a retriever, a question generator, and a response generator.
- Chain that generates questions from uncertain spans.
- Output parser that checks if the output is finished.
- Generate a hypothetical document for the query, and then embed it.
- Pass input through a moderation endpoint.
- A crawler for web pages.
- A typed dictionary containing information about elements in the viewport.
- Class representing a single statement.
- A question and its answer as a list of facts, each of which should have a source.
- Chain for making a simple request to an API endpoint.
- An answer to the question, with sources.
- Base class for prompt selectors.
- Prompt collection that goes through conditionals.
- Interface for loading the combine documents chain.
- Question-answering with sources over an index.
- Question-answering with sources over a vector database.
- Output parser that parses a structured query.
- A date in ISO 8601 format (YYYY-MM-DD).
- A datetime in ISO 8601 format (YYYY-MM-DDTHH:MM:SS).
- Information about a data source attribute.
- Interface for loading the combine documents chain.
- Use a single chain to route an input to one of multiple candidate chains.
- Create a new instance of Route(destination, next_inputs).
- Chain that outputs the name of a destination chain and the inputs to it.
- Chain that uses embeddings to route between options.
- Parser for the output of the router chain in the multi-prompt chain.
- A multi-route chain that uses an LLM router chain to choose amongst retrieval QA chains.
- Chain where the outputs of one chain feed directly into the next.
- Simple chain where the outputs of one step feed directly into the next.
- Input for a SQL Chain.
- Input for a SQL Chain.
- Interface for loading the combine documents chain.
- Chain that transforms the chain output.
Functions
- Execute a collapse function on a set of documents and merge their metadatas.
- Execute a collapse function on a set of documents and merge their metadatas.
- Split Documents into subsets that each meet a cumulative length constraint.
- Create a chain for passing a list of Documents to a model.
- Return another example given a list of examples for a prompt.
- Create a chain that takes conversation history and returns documents.
- Create a citation fuzzy match Runnable.
- Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
- Return the kwargs for the LLMChain constructor.
- Check if the language model is a chat model.
- Check if the language model is an LLM.
- Construct examples from input-output pairs.
- Fix invalid filter directive.
- Create a query construction prompt.
- Load a query constructor runnable chain.
- Return a parser for the query language.
- Dummy decorator for when lark is not installed.
- Create a retrieval chain that retrieves documents and then passes them on.
- Create a chain that generates SQL queries.
- Get the appropriate function output parser given the user functions.
- Load a summarizing chain.
Deprecated classes
Deprecated functions
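A hedged sketch of combining the retrieval helpers listed above (create_stuff_documents_chain and create_retrieval_chain); the OpenAI model and embedding classes and the sample document are assumptions.

```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings  # assumed provider package

# A tiny in-memory retriever keeps the sketch self-contained.
vectorstore = InMemoryVectorStore.from_documents(
    [Document(page_content="create_retrieval_chain wires a retriever to a document chain.")],
    OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the provided context:\n\n{context}"),
    ("human", "{input}"),
])

# Stuff the retrieved Documents into one prompt, then wrap with retrieval.
combine_docs_chain = create_stuff_documents_chain(ChatOpenAI(model="gpt-4o-mini"), prompt)
rag_chain = create_retrieval_chain(retriever, combine_docs_chain)
rag_chain.invoke({"input": "What does create_retrieval_chain do?"})
```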
chat_models#
Functions
- Initialize a ChatModel from the model name and provider.
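A brief sketch of init_chat_model; the model name and provider are placeholders.

```python
from langchain.chat_models import init_chat_model

# The provider can be inferred from well-known model names or given explicitly.
llm = init_chat_model("gpt-4o-mini", model_provider="openai", temperature=0)
llm.invoke("Say hello in French")
```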
embeddings#
Classes
- Interface for caching results from embedding models.
Functions
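A short sketch of the embedding cache listed above, backed by a LocalFileStore; the OpenAIEmbeddings model and cache path are assumptions.

```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings  # assumed provider package

underlying = OpenAIEmbeddings(model="text-embedding-3-small")
store = LocalFileStore("./embedding_cache")

# Repeated embed_documents calls for the same text hit the local cache.
cached = CacheBackedEmbeddings.from_bytes_store(underlying, store, namespace=underlying.model)
vectors = cached.embed_documents(["hello world", "hello world"])
```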
evaluation#
Classes
- A named tuple containing the score and reasoning for a trajectory.
- A chain for evaluating ReAct style agents.
- Trajectory output parser.
- A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
- A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
- A parser for the output of the PairwiseStringEvalChain.
- A Criteria to evaluate.
- LLM Chain for evaluating runs against criteria.
- A parser for the output of the CriteriaEvalChain.
- Criteria evaluation chain that requires references.
- Embedding Distance Metric.
- Use embedding distances to score semantic difference between a prediction and reference.
- Use embedding distances to score semantic difference between two predictions.
- Compute an exact match between the prediction and the reference.
- Evaluate whether the prediction is equal to the reference after parsing both as JSON.
- Evaluate whether the prediction is valid JSON.
- An evaluator that calculates the edit distance between JSON strings.
- An evaluator that validates a JSON prediction against a JSON schema reference.
- LLM Chain for evaluating QA without ground truth, based on context.
- LLM Chain for evaluating QA using chain of thought reasoning.
- LLM Chain for evaluating question answering.
- LLM Chain for generating examples for question answering.
- Compute a regex match between the prediction and the reference.
- Interface for evaluating agent trajectories.
- The types of the evaluators.
- A base class for evaluators that use an LLM.
- Compare the output of two models (or two outputs of the same model).
- Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
- A chain for scoring the output of a model on a scale of 1-10.
- A chain for scoring, on a scale of 1-10, the output of a model.
- A parser for the output of the ScoreStringEvalChain.
- Compute string edit distances between two predictions.
- Distance metric to use.
- Compute string distances between the prediction and the reference.
Functions
- Resolve the criteria for the pairwise evaluator.
- Resolve the criteria to evaluate.
- Load a dataset from the LangChainDatasets on HuggingFace.
- Load the requested evaluation chain specified by a string.
- Load evaluators specified by a list of evaluator types.
- Resolve the criteria for the pairwise evaluator.
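A sketch of loading and running a criteria evaluator with load_evaluator; the criterion, model, and strings are illustrative.

```python
from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI  # assumed provider package

evaluator = load_evaluator(
    "criteria",
    criteria="conciseness",
    llm=ChatOpenAI(model="gpt-4o-mini"),
)

result = evaluator.evaluate_strings(
    prediction="LangChain is a framework for building LLM applications.",
    input="What is LangChain?",
)
print(result["score"], result["reasoning"])
```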
globals#
Functions
- Get the value of the debug global setting.
- Get the value of the llm_cache global setting.
- Get the value of the verbose global setting.
- Set a new value for the debug global setting.
- Set a new LLM cache, overwriting the previous value, if any.
- Set a new value for the verbose global setting.
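A short sketch of the global getters and setters; InMemoryCache is one BaseCache implementation from langchain_core, used here only for illustration.

```python
from langchain.globals import get_debug, set_debug, set_llm_cache
from langchain_core.caches import InMemoryCache

set_debug(True)          # log chain and LLM runs in full detail
print(get_debug())       # -> True

set_llm_cache(InMemoryCache())  # cache identical LLM calls for the process lifetime
```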
hub#
Functions
indexes#
Classes
- Wrapper around a vectorstore for easy access.
- Logic for creating indexes.
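A hedged sketch of VectorstoreIndexCreator; the embedding and chat model classes are assumptions, and the default vector store backend may differ between releases.

```python
from langchain.indexes import VectorstoreIndexCreator
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI, OpenAIEmbeddings  # assumed provider package

creator = VectorstoreIndexCreator(embedding=OpenAIEmbeddings())  # uses the default vector store backend
index = creator.from_documents([Document(page_content="LangChain indexes wrap a vector store.")])

# The returned wrapper exposes query / query_with_sources over the underlying store.
print(index.query("What do LangChain indexes wrap?", llm=ChatOpenAI(model="gpt-4o-mini")))
```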
memory#
Classes
- Combining multiple memories' data together.
- Memory wrapper that is read-only and cannot be changed.
- Simple memory for storing context or other information that shouldn't ever change between prompts.
- Conversation chat memory with token limit and vectordb backing.
Functions
- Get the prompt input key.
Deprecated classes
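A small sketch of the simple and read-only memory wrappers listed above; the stored keys are arbitrary examples.

```python
from langchain.memory import ReadOnlySharedMemory, SimpleMemory

# SimpleMemory holds constants that should be injected into every prompt.
facts = SimpleMemory(memories={"project": "internal-docs-bot", "owner": "platform team"})
print(facts.load_memory_variables({}))  # -> {'project': 'internal-docs-bot', 'owner': 'platform team'}

# ReadOnlySharedMemory exposes another memory object without letting a chain write to it.
shared = ReadOnlySharedMemory(memory=facts)
print(shared.load_memory_variables({}))
```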
model_laboratory#
Classes
- Experiment with different models.
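A sketch of ModelLaboratory comparing two model configurations side by side; the OpenAI completion model and the temperature settings are assumptions.

```python
from langchain.model_laboratory import ModelLaboratory
from langchain_openai import OpenAI  # assumed provider package

# Compare how two configurations answer the same prompt; results are printed side by side.
lab = ModelLaboratory.from_llms([
    OpenAI(temperature=0),
    OpenAI(temperature=1),
])
lab.compare("Explain what a retriever is in one sentence.")
```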
output_parsers#
Classes
- Parse the output of an LLM call to a boolean.
- Combine multiple output parsers into one.
- Parse the output of an LLM call to a datetime.
- Parse an output that is one of a set of values.
- Wrap a parser and try to fix parsing errors.
- Parse an output using Pandas DataFrame format.
- Parse the output of an LLM call using a regex.
- Parse the output of an LLM call into a Dictionary using a regex.
- Wrap a parser and try to fix parsing errors.
- Wrap a parser and try to fix parsing errors.
- Schema for a response from a structured output parser.
- Parse the output of an LLM call to a structured output.
- Parse YAML output using a pydantic model.
Functions
- Load an output parser.
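A sketch of StructuredOutputParser combined with OutputFixingParser; the response schemas, model, and sample output are illustrative.

```python
from langchain.output_parsers import OutputFixingParser, ResponseSchema, StructuredOutputParser
from langchain_openai import ChatOpenAI  # assumed provider package

schemas = [
    ResponseSchema(name="answer", description="The answer to the question."),
    ResponseSchema(name="source", description="Where the answer came from."),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
print(parser.get_format_instructions())  # paste into your prompt

# Wrap the parser so malformed model output gets one LLM-powered fix-up pass.
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI(model="gpt-4o-mini"))
result = fixing_parser.parse('{"answer": "42", "source": "docs"}')
```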
retrievers#
Classes
- Retriever that wraps a base retriever and compresses the results.
- Document compressor that uses a pipeline of Transformers.
- Document compressor that uses an LLM chain to extract the relevant parts of documents.
- Parse outputs that could return a null string of some sort.
- Filter that drops documents that aren't relevant to the query.
- Interface for cross encoder models.
- Document compressor that uses CrossEncoder for reranking.
- Document compressor that uses embeddings to drop documents unrelated to the query.
- Document compressor that uses Zero-Shot Listwise Document Reranking.
- Retriever that ensembles multiple retrievers.
- Retriever that merges the results of multiple retrievers.
- Output parser for a list of lines.
- Given a query, use an LLM to write a set of queries.
- Retrieve from a set of multiple embeddings for the same document.
- Enumerator of the types of search to perform.
- Retrieve small chunks, then retrieve their parent documents.
- Given a query, use an LLM to re-phrase it.
- Retriever that uses a vector store and an LLM to generate the vector store queries.
- Retriever that combines embedding similarity with recency in retrieving values.
Functions
- Return the compression chain input.
- Return the compression chain input.
- Yield unique elements of an iterable based on a key function.
Deprecated classes
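A hedged sketch of contextual compression over a small in-memory retriever; the model, embeddings, and sample document are assumptions.

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings  # assumed provider package

docs = [Document(page_content="LangChain retrievers return Documents for a query string.")]
base_retriever = InMemoryVectorStore.from_documents(docs, OpenAIEmbeddings()).as_retriever()

# Compress each retrieved document down to the parts relevant to the query.
compressor = LLMChainExtractor.from_llm(ChatOpenAI(model="gpt-4o-mini"))
retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=base_retriever)
retriever.invoke("What do retrievers return?")
```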
runnables#
Classes
- An instance of a runnable stored in the LangChain Hub.
- A function description for ChatOpenAI.
- A runnable that routes to the selected function.
smith#
Classes
- Configuration for a given run evaluator.
- Configuration for a run evaluation.
- Configuration for a run evaluator that only requires a single key.
- A simple progress bar for the console.
- Input for a chat model.
- Your architecture raised an error.
- Raised when the input format is invalid.
- A dictionary of the results of a single test run.
- Extract items to evaluate from the run object from a chain.
- Extract items to evaluate from the run object.
- Map an example, or row in the dataset, to the inputs of an evaluation.
- Evaluate Run and optional examples.
- Extract items to evaluate from the run object.
- Map an input to the tool.
Functions
- Generate a random name.
- Run the Chain or language model on a dataset and store traces to the specified project name.
- Run the Chain or language model on a dataset and store traces to the specified project name.
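A hedged sketch of run_on_dataset driven by a RunEvalConfig; it requires LangSmith credentials, and the dataset and project names here are hypothetical.

```python
from langsmith import Client

from langchain.chat_models import init_chat_model
from langchain.smith import RunEvalConfig, run_on_dataset

client = Client()  # requires LANGCHAIN_API_KEY / LangSmith access
eval_config = RunEvalConfig(evaluators=["qa"])  # built-in evaluator names or custom configs

run_on_dataset(
    client=client,
    dataset_name="my-eval-dataset",        # hypothetical dataset name
    llm_or_chain_factory=init_chat_model("gpt-4o-mini", model_provider="openai"),
    evaluation=eval_config,
    project_name="reference-example-run",  # hypothetical project name
)
```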
storage#
Classes
- Wraps a store with key and value encoders/decoders.
- BaseStore interface that works on the local file system.
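A short sketch of LocalFileStore, one BaseStore implementation; the root path and keys are arbitrary examples.

```python
from langchain.storage import LocalFileStore

# Keys map to files under the root path; values are raw bytes.
store = LocalFileStore("./byte_store")
store.mset([("doc-1", b"hello"), ("doc-2", b"world")])

print(store.mget(["doc-1", "doc-2"]))  # -> [b'hello', b'world']
print(list(store.yield_keys()))        # iterate stored keys
store.mdelete(["doc-1"])
```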