LLMListwiseRerank#

class langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank[source]#

Bases: BaseDocumentCompressor

Document compressor that uses Zero-Shot Listwise Document Reranking.

Adapted from: https://arxiv.org/pdf/2305.02156.pdf

LLMListwiseRerank uses a language model to rerank a list of documents based on their relevance to a query.

NOTE: requires that the underlying model implement with_structured_output.

Example usage:
from langchain.retrievers.document_compressors.listwise_rerank import (
    LLMListwiseRerank,
)
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI

documents = [
    Document("Sally is my friend from school"),
    Document("Steve is my friend from home"),
    Document("I didn't always like yogurt"),
    Document("I wonder why it's called football"),
    Document("Where's waldo"),
]

reranker = LLMListwiseRerank.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"), top_n=3
)
compressed_docs = reranker.compress_documents(documents, "Who is steve")
assert len(compressed_docs) == 3
assert "Steve" in compressed_docs[0].page_content
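The listwise approach from the paper above can be sketched without any LangChain dependencies: the model is shown all candidate documents at once, asked for a ranked ordering, and the top_n are kept. Below is a minimal stand-alone sketch; the Document stand-in and the rank_indices stub are illustrative assumptions (in the real compressor, rank_indices is an LLM call constrained via with_structured_output), not the library's internals.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Document:  # stand-in for langchain_core.documents.Document (assumption)
    page_content: str


def rank_indices(query: str, docs: Sequence[Document]) -> List[int]:
    """Stub "LLM": rank documents by naive word overlap with the query.

    A real listwise reranker would prompt a language model with the query
    and all documents and parse a structured ranked list of indices back.
    """
    q_tokens = set(query.lower().split())
    # Higher overlap first; earlier documents win ties (hence -i).
    scores = [
        (len(q_tokens & set(d.page_content.lower().split())), -i)
        for i, d in enumerate(docs)
    ]
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)


def listwise_rerank(
    ranker: Callable[[str, Sequence[Document]], List[int]],
    documents: Sequence[Document],
    query: str,
    top_n: int = 3,
) -> List[Document]:
    # Reorder all documents in one shot, then keep only the top_n.
    order = ranker(query, documents)
    return [documents[i] for i in order[:top_n]]


docs = [
    Document("Sally is my friend from school"),
    Document("Steve is my friend from home"),
    Document("I didn't always like yogurt"),
]
top = listwise_rerank(rank_indices, docs, "Who is Steve", top_n=2)
```

This is the core difference from pointwise reranking: the model sees the whole candidate list in a single call rather than scoring each document independently.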

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param reranker: Runnable[Dict, List[Document]] [Required]#

LLM-based reranker to use for filtering documents. Expected to take in a dict with 'documents: Sequence[Document]' and 'query: str' keys and output a List[Document].
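Any runnable satisfying that contract can be supplied directly. The following sketch shows only the expected input/output shape, using a plain callable and a Document stand-in (both illustrative assumptions); in practice you would wrap such a function with langchain_core.runnables.RunnableLambda and use real Document objects.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Document:  # stand-in for langchain_core.documents.Document (assumption)
    page_content: str


def reranker(inputs: Dict) -> List[Document]:
    """Satisfies the expected contract: dict in, ranked List[Document] out."""
    docs: List[Document] = list(inputs["documents"])
    query: str = inputs["query"]
    # Illustrative ranking: documents sharing more words with the query first.
    q = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q & set(d.page_content.lower().split())),
        reverse=True,
    )


ranked = reranker(
    {
        "documents": [Document("apples and pears"), Document("apples and oranges")],
        "query": "oranges please",
    }
)
```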

param top_n: int = 3#

Number of documents to return.

async acompress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) β†’ Sequence[Document]#

Async compress retrieved documents given the query context.

Parameters:
  • documents (Sequence[Document]) – The retrieved documents to compress.

  • query (str) – The query context.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.

Returns:

The compressed documents.

Return type:

Sequence[Document]

compress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) β†’ Sequence[Document][source]#

Filter down documents based on their relevance to the query.

Parameters:
  • documents (Sequence[Document]) – The retrieved documents to filter.

  • query (str) – The query to rank documents against.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.

Returns:

The top_n documents ranked most relevant to the query.

Return type:

Sequence[Document]

classmethod from_llm(llm: BaseLanguageModel, *, prompt: BasePromptTemplate | None = None, **kwargs: Any) β†’ LLMListwiseRerank[source]#

Create an LLMListwiseRerank document compressor from a language model.

Parameters:
  • llm (BaseLanguageModel) – The language model to use for filtering. Must implement BaseLanguageModel.with_structured_output().

  • prompt (BasePromptTemplate | None) – The prompt to use for the filter.

  • kwargs (Any) – Additional arguments to pass to the constructor.

Returns:

An LLMListwiseRerank document compressor that uses the given language model.

Return type:

LLMListwiseRerank

Examples using LLMListwiseRerank