RankLLMRerank#
- class langchain_community.document_compressors.rankllm_rerank.RankLLMRerank[source]#
Bases:
BaseDocumentCompressor
Document compressor using the RankLLM reranking interface.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- param client: Any = None#
RankLLM client to use for compressing documents.
- param gpt_model: str = 'gpt-3.5-turbo'#
OpenAI model name to use when model is set to 'gpt'.
- param model: str = 'zephyr'#
Name of model to use for reranking.
- param step_size: int = 10#
Step size for moving sliding window.
- param top_n: int = 3#
Top N documents to return.
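A minimal instantiation sketch (not part of the generated reference); the field names mirror the parameters above. Constructing the compressor typically requires the optional rank_llm package (pip install rank_llm) and may load the chosen reranking model:

    from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank

    # Rerank with the Zephyr reranker, keeping the top 3 documents and
    # moving the reranking window in steps of 10.
    compressor = RankLLMRerank(
        model="zephyr",
        top_n=3,
        step_size=10,
    )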
- async acompress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document]#
Async compress retrieved documents given the query context.
- Parameters:
documents (Sequence[Document]) – The retrieved documents.
query (str) – The query context.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.
- Returns:
The compressed documents.
- Return type:
Sequence[Document]
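A sketch of the async path, assuming compressor was created as in the sketch above:

    import asyncio

    from langchain_core.documents import Document

    docs = [
        Document(page_content="The Eiffel Tower is in Paris."),
        Document(page_content="Bananas are rich in potassium."),
    ]

    # Rerank asynchronously; only the top_n most relevant documents are returned.
    reranked = asyncio.run(
        compressor.acompress_documents(docs, query="Where is the Eiffel Tower?")
    )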
- compress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document] [source]#
Compress retrieved documents given the query context.
- Parameters:
documents (Sequence[Document]) – The retrieved documents.
query (str) – The query context.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.
- Returns:
The compressed documents.
- Return type:
Sequence[Document]
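The synchronous call mirrors the async one; a minimal sketch reusing compressor and docs from the sketches above:

    # Rerank synchronously and inspect the surviving documents.
    reranked = compressor.compress_documents(docs, query="Where is the Eiffel Tower?")
    for doc in reranked:
        print(doc.page_content)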
Examples using RankLLMRerank
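A hedged end-to-end sketch pairing RankLLMRerank with a base retriever through ContextualCompressionRetriever; the FAISS vector store and OpenAIEmbeddings used here are illustrative assumptions, not requirements:

    from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
    from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank
    from langchain_community.vectorstores import FAISS
    from langchain_openai import OpenAIEmbeddings

    # Build a small illustrative vector store to retrieve candidates from.
    vectorstore = FAISS.from_texts(
        ["LangChain supports document reranking.", "Zephyr is an open LLM."],
        embedding=OpenAIEmbeddings(),
    )

    # Retrieve candidates, then let RankLLMRerank keep the top_n most relevant ones.
    retriever = ContextualCompressionRetriever(
        base_compressor=RankLLMRerank(top_n=2, model="zephyr"),
        base_retriever=vectorstore.as_retriever(),
    )
    compressed_docs = retriever.invoke("Which model does LangChain rerank with?")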