VertexAIRank#
- class langchain_google_community.vertex_rank.VertexAIRank[source]#
Bases:
BaseDocumentCompressor
Initializes the Vertex AI Ranker with configurable parameters.
Inherits from BaseDocumentCompressor for document processing and validation features.
- project_id#
Google Cloud project ID.
- Type:
str
- location_id#
Location ID for the ranking service.
- Type:
str
- ranking_config#
The name of the ranking service config, such as default_config. Defaults to default_config if unspecified.
- Type:
str
- model#
The identifier of the model to use. One of:
semantic-ranker-512@latest: semantic ranking model with a maximum input token size of 512.
Defaults to semantic-ranker-512@latest if unspecified.
- Type:
str
- top_n#
The number of results to return. If unset or not greater than zero, all results are returned.
- Type:
int
- ignore_record_details_in_response#
If true, the response contains only the record ID and score. Defaults to false, in which case the response also contains record details.
- Type:
bool
- id_field#
Specifies a unique document metadata field to use as an ID.
- Type:
Optional[str]
- title_field#
Specifies the document metadata field to use as the title.
- Type:
Optional[str]
- credentials#
Google Cloud credentials object.
- Type:
Optional[Credentials]
- credentials_path#
Path to the Google Cloud service account credentials file.
- Type:
Optional[str]
Constructor for VertexAIRank, allowing specification of the ranking configuration and initialization of the Google Cloud ranking client.
The parameters accepted are the same as the attributes listed above; a usage sketch follows the parameter list.
- param client: Any = None#
- param credentials: Credentials | None = None#
- param credentials_path: str | None = None#
- param id_field: str | None = None#
- param ignore_record_details_in_response: bool = False#
- param location_id: str = 'global'#
- param model: str = 'semantic-ranker-512@latest'#
- param project_id: str = None#
- param ranking_config: str = 'default_config'#
- param title_field: str | None = None#
- param top_n: int = 10#
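A minimal construction sketch. The project ID and the "source" metadata key are placeholders, not values from this page, and the snippet assumes application-default credentials are available in the environment.
```python
from langchain_google_community.vertex_rank import VertexAIRank

# Placeholder project ID; the remaining values mirror the defaults documented above.
reranker = VertexAIRank(
    project_id="my-gcp-project",
    location_id="global",
    ranking_config="default_config",
    model="semantic-ranker-512@latest",
    top_n=5,                  # keep only the five highest-scoring documents
    title_field="source",     # assumes documents carry a "source" metadata key
)
```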
- async acompress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document]#
Async compress retrieved documents given the query context.
- Parameters:
documents (Sequence[Document]) – The retrieved documents.
query (str) – The query context.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.
- Returns:
The compressed documents.
- Return type:
Sequence[Document]
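A hedged sketch of the async path. The project ID, query, and document texts are illustrative, and the snippet assumes application-default credentials so the ranking client can be created.
```python
import asyncio

from langchain_core.documents import Document
from langchain_google_community.vertex_rank import VertexAIRank

async def main() -> None:
    # Placeholder project ID; top_n=2 keeps the two most relevant documents.
    reranker = VertexAIRank(project_id="my-gcp-project", top_n=2)
    docs = [
        Document(page_content="Vertex AI exposes a semantic ranking service."),
        Document(page_content="Unrelated text about pasta recipes."),
    ]
    # Documents come back ordered by relevance to the query.
    ranked = await reranker.acompress_documents(
        docs, query="What does the Vertex AI ranking service do?"
    )
    for doc in ranked:
        print(doc.page_content)

asyncio.run(main())
```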
- compress_documents(documents: Sequence[Document], query: str, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document] [source]#
Compresses documents using Vertex AI’s rerank API.
- Parameters:
documents (Sequence[Document]) – List of Document instances to compress.
query (str) – Query string to use for compressing the documents.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to execute during compression (not used here).
- Returns:
The compressed list of Document instances.
- Return type:
Sequence[Document]
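A hedged sketch of the synchronous call that also wires document metadata through id_field and title_field. The "doc_id" and "title" metadata keys and the project ID are illustrative choices, not values mandated by the API.
```python
from langchain_core.documents import Document
from langchain_google_community.vertex_rank import VertexAIRank

# Placeholder project ID; id_field/title_field name the metadata keys the ranker reads.
reranker = VertexAIRank(
    project_id="my-gcp-project",
    id_field="doc_id",
    title_field="title",
    top_n=2,
)

docs = [
    Document(
        page_content="The ranking API scores documents against a query.",
        metadata={"doc_id": "1", "title": "Ranking overview"},
    ),
    Document(
        page_content="Grilled vegetables pair well with rice.",
        metadata={"doc_id": "2", "title": "Dinner ideas"},
    ),
    Document(
        page_content="Scores can be used to reorder retrieved results.",
        metadata={"doc_id": "3", "title": "Using scores"},
    ),
]

# Returns the top_n documents most relevant to the query.
ranked = reranker.compress_documents(docs, query="How are documents scored?")
for doc in ranked:
    print(doc.metadata["title"], "->", doc.page_content)
```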