LLMChainExtractor#
- class langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor[source]#
Bases:
BaseDocumentCompressor
Document compressor that uses an LLM chain to extract the relevant parts of documents.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- param get_input: Callable[[str, Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
- async acompress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) Sequence[Document] [source]#
Compress page content of raw documents asynchronously.
- Parameters:
documents (Sequence[Document]) – The documents to compress.
query (str) – The query to use when extracting relevant content.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to pass to the compression chain.
- Return type:
Sequence[Document]
- compress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) Sequence[Document] [source]#
Compress page content of raw documents.
- Parameters:
documents (Sequence[Document]) – The documents to compress.
query (str) – The query to use when extracting relevant content.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to pass to the compression chain.
- Return type:
Sequence[Document]
- classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate | None = None, get_input: Callable[[str, Document], str] | None = None, llm_chain_kwargs: dict | None = None) LLMChainExtractor [source]#
Create an LLMChainExtractor from a language model.
- Parameters:
llm (BaseLanguageModel) – The language model to use for extraction.
prompt (PromptTemplate | None) – Optional custom prompt; if not provided, a default extraction prompt is used.
get_input (Callable[[str, Document], str] | None) – Optional callable for constructing the chain input from the query and a Document.
llm_chain_kwargs (dict | None) – Optional additional keyword arguments to pass to the chain.
- Return type:
LLMChainExtractor
Examples using LLMChainExtractor#