AstraDBGraphVectorStore#
- class langchain_astradb.graph_vectorstores.AstraDBGraphVectorStore(*, collection_name: str, embedding: Embeddings | None = None, metadata_incoming_links_key: str = 'incoming_links', token: str | TokenProvider | None = None, api_endpoint: str | None = None, environment: str | None = None, namespace: str | None = None, metric: str | None = None, batch_size: int | None = None, bulk_insert_batch_concurrency: int | None = None, bulk_insert_overwrite_concurrency: int | None = None, bulk_delete_concurrency: int | None = None, setup_mode: SetupMode | None = None, pre_delete_collection: bool = False, metadata_indexing_include: Iterable[str] | None = None, metadata_indexing_exclude: Iterable[str] | None = None, collection_indexing_policy: dict[str, Any] | None = None, collection_vector_service_options: CollectionVectorServiceOptions | None = None, collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None, content_field: str | None = None, ignore_invalid_documents: bool = False, autodetect_collection: bool = False, ext_callers: list[tuple[str | None, str | None] | str | None] | None = None, component_name: str = 'langchain_graphvectorstore', astra_db_client: AstraDBClient | None = None, async_astra_db_client: AsyncAstraDBClient | None = None)[source]#
Beta
This feature is in beta. It is actively being worked on, so the API may change.
Graph Vector Store backed by AstraDB.
- Parameters:
embedding (Embeddings | None) – the embeddings function or service to use. This enables client-side embedding functions or calls to external embedding providers. If embedding is provided, the arguments collection_vector_service_options and collection_embedding_api_key cannot be provided.
collection_name (str) – name of the Astra DB collection to create/use.
metadata_incoming_links_key (str) – document metadata key where the incoming links are stored (and indexed).
token (str | TokenProvider | None) – API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.
api_endpoint (str | None) – full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.
environment (str | None) – a string specifying the environment of the target Data API. If omitted, defaults to “prod” (Astra DB production). Other values can be found in the astrapy.constants.Environment enum class.
namespace (str | None) – namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database’s “default namespace”.
metric (str | None) – similarity function to use out of those available in Astra DB. If left out, the Astra DB API default applies (i.e. “cosine”; for performance reasons, “dot_product” is suggested if embeddings are normalized to one).
batch_size (int | None) – size of document chunks for each individual insertion API request. If not provided, astrapy defaults are applied.
bulk_insert_batch_concurrency (int | None) – number of threads or coroutines to insert batches concurrently.
bulk_insert_overwrite_concurrency (int | None) – number of threads or coroutines in a batch used to insert pre-existing entries.
bulk_delete_concurrency (int | None) – number of threads or coroutines for multiple-entry deletes.
setup_mode (SetupMode | None) – mode used to create the collection (SYNC, ASYNC or OFF).
pre_delete_collection (bool) – whether to delete the collection before creating it. If False and the collection already exists, the collection is used as is.
metadata_indexing_include (Iterable[str] | None) – an allowlist of the specific metadata subfields that should be indexed for later filtering in searches.
metadata_indexing_exclude (Iterable[str] | None) – a denylist of the specific metadata subfields that should not be indexed for later filtering in searches.
collection_indexing_policy (dict[str, Any] | None) – a full “indexing” specification for what fields should be indexed for later filtering in searches. This dict must conform to the API specifications (see https://docs.datastax.com/en/astra-db-serverless/api-reference/collections.html#the-indexing-option).
collection_vector_service_options (CollectionVectorServiceOptions | None) – specifies the use of server-side embeddings within Astra DB. If passing this parameter, embedding cannot be provided.
collection_embedding_api_key (str | EmbeddingHeadersProvider | None) – for usage of server-side embeddings within Astra DB. With this parameter one can supply an API key that will be passed to Astra DB with each data request. It can be either a string or a subclass of astrapy.authentication.EmbeddingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra’s key management system.
content_field (str | None) – name of the field containing the textual content in the documents when saved on Astra DB. For vectorize collections, this cannot be specified; for non-vectorize collections, it defaults to “content”. The special value “*” can be passed only if autodetect_collection=True, in which case the actual name of the key for the textual content is guessed by inspecting a few documents from the collection, under the assumption that the longer strings are the most likely candidates. Please understand the limitations of this method and get some understanding of your data before passing “*” for this parameter.
ignore_invalid_documents (bool) – if False (default), exceptions are raised when a document is found on the Astra DB collection that does not have the expected shape. If set to True, such results from the database are ignored and a warning is issued. Note that in this case a similarity search may end up returning fewer results than the requested k.
autodetect_collection (bool) – if True, turns on autodetect behavior: the store looks for an existing collection of the provided name and infers the store settings from it. Default is False. In autodetect mode, content_field can be given as “*”, meaning that an attempt will be made to determine it by inspection (unless vectorize is enabled, in which case content_field is ignored). In autodetect mode, the store not only determines whether embeddings are client- or server-side, but, most importantly, switches automatically between “nested” and “flat” representations of documents on DB (i.e. having the metadata key-value pairs grouped in a metadata field or spread at the documents’ top level). The former scheme is the native mode of the AstraDBVectorStore; the store resorts to the latter in case of vector collections populated with external means (such as a third-party data import tool) before applying an AstraDBVectorStore to them. Note that the following parameters cannot be used if this is True: metric, setup_mode, metadata_indexing_include, metadata_indexing_exclude, collection_indexing_policy, collection_vector_service_options.
ext_callers (list[tuple[str | None, str | None] | str | None] | None) – one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.
component_name (str) – the string identifying this specific component in the stack of usage info passed as the User-Agent string to the Data API. Defaults to “langchain_graphvectorstore”, but can be overridden if this component actually serves as the building block for another component.
astra_db_client (AstraDBClient | None) – DEPRECATED starting from version 0.3.5. Please use ‘token’, ‘api_endpoint’ and optionally ‘environment’. Alternatively to these, you can pass an already-created ‘astrapy.db.AstraDB’ instance.
async_astra_db_client (AsyncAstraDBClient | None) – DEPRECATED starting from version 0.3.5. Please use ‘token’, ‘api_endpoint’ and optionally ‘environment’. Alternatively to these, you can pass an already-created ‘astrapy.db.AsyncAstraDB’ instance.
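To illustrate the indexing controls above, here is a hedged sketch of what a full “indexing” policy can look like. The metadata field names (source, page, raw_html) are hypothetical; metadata_indexing_include expresses the same allowlist at a higher level, while collection_indexing_policy passes the raw Data API dict (in which “allow” and “deny” are mutually exclusive).

```python
# Hypothetical allowlist: index only two metadata subfields for filtering.
# Roughly equivalent high-level form: metadata_indexing_include=["source", "page"]
collection_indexing_policy = {
    "allow": ["metadata.source", "metadata.page"],
}

# A denylist variant ("allow" and "deny" cannot be combined in one policy):
deny_policy = {"deny": ["metadata.raw_html"]}

print(sorted(collection_indexing_policy["allow"]))
```

Only one of metadata_indexing_include, metadata_indexing_exclude, or collection_indexing_policy should be supplied to the constructor.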
Note
For concurrency in synchronous add_texts(): as a rule of thumb, on a typical client machine it is suggested to keep the quantity bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency much below 1000 to avoid exhausting the client multithreading/networking resources. The hardcoded defaults are somewhat conservative to meet most machines’ specs, but a sensible choice to test may be:
bulk_insert_batch_concurrency = 80
bulk_insert_overwrite_concurrency = 10
A bit of experimentation is required to nail the best results here, depending on both the machine/network specs and the expected workload (specifically, how often a write is an update of an existing id). Remember you can pass concurrency settings to individual calls to add_texts() and add_documents() as well.
Attributes
embeddings
Access the query embedding object if available.
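The concurrency rule of thumb in the note above amounts to a simple product check; the values below are only the suggested starting point from the note, not validated limits:

```python
# Suggested starting values from the note above (tune experimentally).
bulk_insert_batch_concurrency = 80
bulk_insert_overwrite_concurrency = 10

# Rule of thumb: keep the product much below 1000 so the client's
# multithreading/networking resources are not exhausted.
product = bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency
print(product)  # 800
```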
Methods
__init__(*, collection_name[, embedding, ...]) – Graph Vector Store backed by AstraDB.
aadd_documents(documents, **kwargs) – Run more documents through the embeddings and add to the vector store.
aadd_nodes(nodes, **kwargs) – Add nodes to the graph store.
aadd_texts(texts[, metadatas, ids]) – Run more texts through the embeddings and add to the vector store.
add_documents(documents, **kwargs) – Run more documents through the embeddings and add to the vector store.
add_nodes(nodes, **kwargs) – Add nodes to the graph store.
add_texts(texts[, metadatas, ids]) – Run more texts through the embeddings and add to the vector store.
adelete([ids]) – Async delete by vector ID or other criteria.
afrom_documents(documents[, embedding, ids, ...]) – Return AstraDBGraphVectorStore initialized from docs and embeddings.
afrom_texts(texts[, embedding, metadatas, ...]) – Return AstraDBGraphVectorStore initialized from texts and embeddings.
aget_by_document_id(document_id) – Retrieve a single document from the store, given its document ID.
aget_by_ids(ids, /) – Async get documents by their IDs.
aget_node(node_id) – Retrieve a single node from the store, given its ID.
amax_marginal_relevance_search(query[, k, ...]) – Async return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(embedding[, k, ...]) – Async return docs selected using the maximal marginal relevance.
ametadata_search([filter, n]) – Get documents via a metadata search.
ammr_traversal_search(query, *[, ...]) – Retrieve documents from this graph store using MMR-traversal.
as_retriever(**kwargs) – Return GraphVectorStoreRetriever initialized from this GraphVectorStore.
asearch(query, search_type, **kwargs) – Async return docs most similar to query using a specified search type.
asimilarity_search(query[, k, filter]) – Retrieve documents from this graph store.
asimilarity_search_by_vector(embedding[, k, ...]) – Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query[, k, ...]) – Async return docs and relevance scores in the range [0, 1].
asimilarity_search_with_score(*args, **kwargs) – Async run similarity search with distance.
atraversal_search(query, *[, k, depth, filter]) – Retrieve documents from this knowledge store.
delete([ids]) – Delete by vector ID or other criteria.
from_documents(documents[, embedding, ids, ...]) – Return AstraDBGraphVectorStore initialized from docs and embeddings.
from_texts(texts[, embedding, metadatas, ...]) – Return AstraDBGraphVectorStore initialized from texts and embeddings.
get_by_document_id(document_id) – Retrieve a single document from the store, given its document ID.
get_by_ids(ids, /) – Get documents by their IDs.
get_node(node_id) – Retrieve a single node from the store, given its ID.
max_marginal_relevance_search(query[, k, ...]) – Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding[, k, ...]) – Return docs selected using the maximal marginal relevance.
metadata_search([filter, n]) – Get documents via a metadata search.
mmr_traversal_search(query, *[, ...]) – Retrieve documents from this graph store using MMR-traversal.
search(query, search_type, **kwargs) – Return docs most similar to query using a specified search type.
similarity_search(query[, k, filter]) – Retrieve documents from this graph store.
similarity_search_by_vector(embedding[, k, ...]) – Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query[, k, ...]) – Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs) – Run similarity search with distance.
traversal_search(query, *[, k, depth, filter]) – Retrieve documents from this knowledge store.
- __init__(*, collection_name: str, embedding: Embeddings | None = None, metadata_incoming_links_key: str = 'incoming_links', token: str | TokenProvider | None = None, api_endpoint: str | None = None, environment: str | None = None, namespace: str | None = None, metric: str | None = None, batch_size: int | None = None, bulk_insert_batch_concurrency: int | None = None, bulk_insert_overwrite_concurrency: int | None = None, bulk_delete_concurrency: int | None = None, setup_mode: SetupMode | None = None, pre_delete_collection: bool = False, metadata_indexing_include: Iterable[str] | None = None, metadata_indexing_exclude: Iterable[str] | None = None, collection_indexing_policy: dict[str, Any] | None = None, collection_vector_service_options: CollectionVectorServiceOptions | None = None, collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None, content_field: str | None = None, ignore_invalid_documents: bool = False, autodetect_collection: bool = False, ext_callers: list[tuple[str | None, str | None] | str | None] | None = None, component_name: str = 'langchain_graphvectorstore', astra_db_client: AstraDBClient | None = None, async_astra_db_client: AsyncAstraDBClient | None = None)[source]#
Graph Vector Store backed by AstraDB.
- Parameters:
embedding (Embeddings | None) – the embeddings function or service to use. This enables client-side embedding functions or calls to external embedding providers. If embedding is provided, the arguments collection_vector_service_options and collection_embedding_api_key cannot be provided.
collection_name (str) – name of the Astra DB collection to create/use.
metadata_incoming_links_key (str) – document metadata key where the incoming links are stored (and indexed).
token (str | TokenProvider | None) – API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.
api_endpoint (str | None) – full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.
environment (str | None) – a string specifying the environment of the target Data API. If omitted, defaults to “prod” (Astra DB production). Other values can be found in the astrapy.constants.Environment enum class.
namespace (str | None) – namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database’s “default namespace”.
metric (str | None) – similarity function to use out of those available in Astra DB. If left out, the Astra DB API default applies (i.e. “cosine”; for performance reasons, “dot_product” is suggested if embeddings are normalized to one).
batch_size (int | None) – size of document chunks for each individual insertion API request. If not provided, astrapy defaults are applied.
bulk_insert_batch_concurrency (int | None) – number of threads or coroutines to insert batches concurrently.
bulk_insert_overwrite_concurrency (int | None) – number of threads or coroutines in a batch used to insert pre-existing entries.
bulk_delete_concurrency (int | None) – number of threads or coroutines for multiple-entry deletes.
setup_mode (SetupMode | None) – mode used to create the collection (SYNC, ASYNC or OFF).
pre_delete_collection (bool) – whether to delete the collection before creating it. If False and the collection already exists, the collection is used as is.
metadata_indexing_include (Iterable[str] | None) – an allowlist of the specific metadata subfields that should be indexed for later filtering in searches.
metadata_indexing_exclude (Iterable[str] | None) – a denylist of the specific metadata subfields that should not be indexed for later filtering in searches.
collection_indexing_policy (dict[str, Any] | None) – a full “indexing” specification for what fields should be indexed for later filtering in searches. This dict must conform to the API specifications (see https://docs.datastax.com/en/astra-db-serverless/api-reference/collections.html#the-indexing-option).
collection_vector_service_options (CollectionVectorServiceOptions | None) – specifies the use of server-side embeddings within Astra DB. If passing this parameter, embedding cannot be provided.
collection_embedding_api_key (str | EmbeddingHeadersProvider | None) – for usage of server-side embeddings within Astra DB. With this parameter one can supply an API key that will be passed to Astra DB with each data request. It can be either a string or a subclass of astrapy.authentication.EmbeddingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra’s key management system.
content_field (str | None) – name of the field containing the textual content in the documents when saved on Astra DB. For vectorize collections, this cannot be specified; for non-vectorize collections, it defaults to “content”. The special value “*” can be passed only if autodetect_collection=True, in which case the actual name of the key for the textual content is guessed by inspecting a few documents from the collection, under the assumption that the longer strings are the most likely candidates. Please understand the limitations of this method and get some understanding of your data before passing “*” for this parameter.
ignore_invalid_documents (bool) – if False (default), exceptions are raised when a document is found on the Astra DB collection that does not have the expected shape. If set to True, such results from the database are ignored and a warning is issued. Note that in this case a similarity search may end up returning fewer results than the requested k.
autodetect_collection (bool) – if True, turns on autodetect behavior: the store looks for an existing collection of the provided name and infers the store settings from it. Default is False. In autodetect mode, content_field can be given as “*”, meaning that an attempt will be made to determine it by inspection (unless vectorize is enabled, in which case content_field is ignored). In autodetect mode, the store not only determines whether embeddings are client- or server-side, but, most importantly, switches automatically between “nested” and “flat” representations of documents on DB (i.e. having the metadata key-value pairs grouped in a metadata field or spread at the documents’ top level). The former scheme is the native mode of the AstraDBVectorStore; the store resorts to the latter in case of vector collections populated with external means (such as a third-party data import tool) before applying an AstraDBVectorStore to them. Note that the following parameters cannot be used if this is True: metric, setup_mode, metadata_indexing_include, metadata_indexing_exclude, collection_indexing_policy, collection_vector_service_options.
ext_callers (list[tuple[str | None, str | None] | str | None] | None) – one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.
component_name (str) – the string identifying this specific component in the stack of usage info passed as the User-Agent string to the Data API. Defaults to “langchain_graphvectorstore”, but can be overridden if this component actually serves as the building block for another component.
astra_db_client (AstraDBClient | None) – DEPRECATED starting from version 0.3.5. Please use ‘token’, ‘api_endpoint’ and optionally ‘environment’. Alternatively to these, you can pass an already-created ‘astrapy.db.AstraDB’ instance.
async_astra_db_client (AsyncAstraDBClient | None) – DEPRECATED starting from version 0.3.5. Please use ‘token’, ‘api_endpoint’ and optionally ‘environment’. Alternatively to these, you can pass an already-created ‘astrapy.db.AsyncAstraDB’ instance.
Note
For concurrency in synchronous add_texts(): as a rule of thumb, on a typical client machine it is suggested to keep the quantity bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency much below 1000 to avoid exhausting the client multithreading/networking resources. The hardcoded defaults are somewhat conservative to meet most machines’ specs, but a sensible choice to test may be:
bulk_insert_batch_concurrency = 80
bulk_insert_overwrite_concurrency = 10
A bit of experimentation is required to nail the best results here, depending on both the machine/network specs and the expected workload (specifically, how often a write is an update of an existing id). Remember you can pass concurrency settings to individual calls to add_texts() and add_documents() as well.
- async aadd_documents(documents: Iterable[Document], **kwargs: Any) list[str] #
Run more documents through the embeddings and add to the vector store.
The Links present in the document metadata field links will be extracted to create the Node links.
E.g., if nodes a and b are connected over a hyperlink https://some-url, the function call would look like:

```python
store.add_documents(
    [
        Document(
            id="a",
            page_content="some text a",
            metadata={
                "links": [Link.incoming(kind="hyperlink", tag="https://some-url")]
            },
        ),
        Document(
            id="b",
            page_content="some text b",
            metadata={
                "links": [Link.outgoing(kind="hyperlink", tag="https://some-url")]
            },
        ),
    ]
)
```
- async aadd_nodes(nodes: Iterable[Node], **kwargs: Any) AsyncIterable[str] [source]#
Add nodes to the graph store.
- Parameters:
nodes (Iterable[Node]) – the nodes to add.
**kwargs (Any) – Additional keyword arguments.
- Return type:
AsyncIterable[str]
- async aadd_texts(texts: Iterable[str], metadatas: Iterable[dict] | None = None, *, ids: Iterable[str] | None = None, **kwargs: Any) list[str] #
Run more texts through the embeddings and add to the vector store.
The Links present in the metadata field links will be extracted to create the Node links.
E.g., if nodes a and b are connected over a hyperlink https://some-url, the function call would look like:

```python
await store.aadd_texts(
    ids=["a", "b"],
    texts=["some text a", "some text b"],
    metadatas=[
        {"links": [Link.incoming(kind="hyperlink", tag="https://some-url")]},
        {"links": [Link.outgoing(kind="hyperlink", tag="https://some-url")]},
    ],
)
```
- Parameters:
texts (Iterable[str]) – Iterable of strings to add to the vector store.
metadatas (Iterable[dict] | None) – Optional list of metadatas associated with the texts. The metadata key links shall be an iterable of Link.
ids (Iterable[str] | None) – Optional list of IDs associated with the texts.
**kwargs (Any) – vector store specific parameters.
- Returns:
List of ids from adding the texts into the vector store.
- Return type:
list[str]
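The link semantics used above can be sketched without the library: an outgoing link on one document connects to an incoming link with the same (kind, tag) on another, yielding a directed edge. Below, Link is a stand-in namedtuple for illustration, not the real class:

```python
from collections import namedtuple

# Stand-in for the real Link class (illustration only).
Link = namedtuple("Link", ["direction", "kind", "tag"])

docs = {
    "a": [Link("incoming", "hyperlink", "https://some-url")],
    "b": [Link("outgoing", "hyperlink", "https://some-url")],
}

# An outgoing (kind, tag) on one doc matches an incoming (kind, tag) on
# another, producing a directed edge b -> a.
edges = [
    (src, dst)
    for src, src_links in docs.items()
    for dst, dst_links in docs.items()
    if src != dst
    for out in src_links
    if out.direction == "outgoing"
    for inc in dst_links
    if inc.direction == "incoming" and (inc.kind, inc.tag) == (out.kind, out.tag)
]
print(edges)  # [('b', 'a')]
```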
- add_documents(documents: Iterable[Document], **kwargs: Any) list[str] #
Run more documents through the embeddings and add to the vector store.
The Links present in the document metadata field links will be extracted to create the Node links.
E.g., if nodes a and b are connected over a hyperlink https://some-url, the function call would look like:

```python
store.add_documents(
    [
        Document(
            id="a",
            page_content="some text a",
            metadata={
                "links": [Link.incoming(kind="hyperlink", tag="https://some-url")]
            },
        ),
        Document(
            id="b",
            page_content="some text b",
            metadata={
                "links": [Link.outgoing(kind="hyperlink", tag="https://some-url")]
            },
        ),
    ]
)
```
- add_nodes(nodes: Iterable[Node], **kwargs: Any) Iterable[str] [source]#
Add nodes to the graph store.
- Parameters:
nodes (Iterable[Node]) – the nodes to add.
**kwargs (Any) – Additional keyword arguments.
- Return type:
Iterable[str]
- add_texts(texts: Iterable[str], metadatas: Iterable[dict] | None = None, *, ids: Iterable[str] | None = None, **kwargs: Any) list[str] #
Run more texts through the embeddings and add to the vector store.
The Links present in the metadata field links will be extracted to create the Node links.
E.g., if nodes a and b are connected over a hyperlink https://some-url, the function call would look like:

```python
store.add_texts(
    ids=["a", "b"],
    texts=["some text a", "some text b"],
    metadatas=[
        {"links": [Link.incoming(kind="hyperlink", tag="https://some-url")]},
        {"links": [Link.outgoing(kind="hyperlink", tag="https://some-url")]},
    ],
)
```
- Parameters:
texts (Iterable[str]) – Iterable of strings to add to the vector store.
metadatas (Iterable[dict] | None) – Optional list of metadatas associated with the texts. The metadata key links shall be an iterable of Link.
ids (Iterable[str] | None) – Optional list of IDs associated with the texts.
**kwargs (Any) – vector store specific parameters.
- Returns:
List of ids from adding the texts into the vector store.
- Return type:
list[str]
- async adelete(ids: list[str] | None = None, **kwargs: Any) bool | None #
Async delete by vector ID or other criteria.
- Parameters:
ids (list[str] | None) – List of ids to delete. If None, delete all. Default is None.
**kwargs (Any) – Other keyword arguments that subclasses might use.
- Returns:
True if deletion is successful, False otherwise, None if not implemented.
- Return type:
Optional[bool]
- async classmethod afrom_documents(documents: Iterable[Document], embedding: Embeddings | None = None, ids: Iterable[str] | None = None, collection_vector_service_options: CollectionVectorServiceOptions | None = None, collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None, **kwargs: Any) AstraDBGraphVectorStore [source]#
Return AstraDBGraphVectorStore initialized from docs and embeddings.
- Parameters:
documents (Iterable[Document])
embedding (Embeddings | None)
ids (Iterable[str] | None)
collection_vector_service_options (CollectionVectorServiceOptions | None)
collection_embedding_api_key (str | EmbeddingHeadersProvider | None)
kwargs (Any)
- Return type:
AstraDBGraphVectorStore
- async classmethod afrom_texts(texts: Iterable[str], embedding: Embeddings | None = None, metadatas: list[dict] | None = None, ids: Iterable[str] | None = None, collection_vector_service_options: CollectionVectorServiceOptions | None = None, collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None, **kwargs: Any) AstraDBGraphVectorStore [source]#
Return AstraDBGraphVectorStore initialized from texts and embeddings.
- Parameters:
texts (Iterable[str])
embedding (Embeddings | None)
metadatas (list[dict] | None)
ids (Iterable[str] | None)
collection_vector_service_options (CollectionVectorServiceOptions | None)
collection_embedding_api_key (str | EmbeddingHeadersProvider | None)
kwargs (Any)
- Return type:
AstraDBGraphVectorStore
- async aget_by_document_id(document_id: str) Document | None [source]#
Retrieve a single document from the store, given its document ID.
- Parameters:
document_id (str) – The document ID
- Returns:
The document if it exists. Otherwise None.
- Return type:
Document | None
- async aget_by_ids(ids: Sequence[str], /) list[Document] #
Async get documents by their IDs.
The returned documents are expected to have the ID field set to the ID of the document in the vector store.
Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.
Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.
This method should NOT raise exceptions if no documents are found for some IDs.
- Parameters:
ids (Sequence[str]) – List of ids to retrieve.
- Returns:
List of Documents.
- Return type:
list[Document]
Added in version 0.2.11.
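The contract above (missing IDs are silently skipped, return order is not guaranteed) can be mimicked with a plain dict lookup. This is a toy model of the behavior, not the store's implementation:

```python
# Toy store: id -> document text (stand-in for real Documents).
toy_store = {"a": "alpha", "b": "beta"}

def get_by_ids(ids):
    # Missing IDs are silently skipped; no exception is raised.
    # (The real store makes no ordering guarantee; match on the
    # returned documents' IDs instead of their positions.)
    return [(i, toy_store[i]) for i in ids if i in toy_store]

print(get_by_ids(["b", "zzz", "a"]))  # [('b', 'beta'), ('a', 'alpha')]
```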
- async aget_node(node_id: str) Node | None [source]#
Retrieve a single node from the store, given its ID.
- Parameters:
node_id (str) – The node ID
- Returns:
The node if it exists. Otherwise None.
- Return type:
Node | None
- async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) list[Document] #
Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
- Parameters:
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Default is 20.
lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
kwargs (Any)
- Returns:
List of Documents selected by maximal marginal relevance.
- Return type:
list[Document]
- async amax_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) list[Document] #
Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
- Parameters:
embedding (list[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Default is 20.
lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Documents selected by maximal marginal relevance.
- Return type:
list[Document]
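The lambda_mult trade-off described above can be sketched with a tiny, stdlib-only MMR selection over toy 2-D vectors. This illustrates the scoring rule (relevance vs. redundancy), not the store's actual implementation:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mmr_select(query, candidates, k=2, lambda_mult=0.5):
    """Toy maximal-marginal-relevance selection (illustration only)."""
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best, best_score = None, -math.inf
        for i in remaining:
            relevance = cosine(query, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected

docs = [[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]]  # two near-duplicates + one outlier
query = [1.0, 0.0]
# Pure relevance (lambda_mult=1.0) keeps both near-duplicates;
# a lower lambda_mult lets the diverse document displace the duplicate.
print(mmr_select(query, docs, k=2, lambda_mult=1.0))  # [0, 1]
print(mmr_select(query, docs, k=2, lambda_mult=0.3))  # [0, 2]
```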
- async ametadata_search(filter: dict[str, Any] | None = None, n: int = 5) Iterable[Document] [source]#
Get documents via a metadata search.
- Parameters:
filter (dict[str, Any] | None) – the metadata to query for.
n (int) – the maximum number of documents to return.
- Return type:
Iterable[Document]
- async ammr_traversal_search(query: str, *, initial_roots: Sequence[str] = (), k: int = 4, depth: int = 2, fetch_k: int = 100, adjacent_k: int = 10, lambda_mult: float = 0.5, score_threshold: float = -inf, filter: dict[str, Any] | None = None, **kwargs: Any) AsyncIterable[Document] [source]#
Retrieve documents from this graph store using MMR-traversal.
This strategy first retrieves the top fetch_k results by similarity to the question. It then selects the top k results based on maximum-marginal relevance using the given lambda_mult.
At each step, it considers the (remaining) documents from fetch_k as well as any documents connected by edges to a selected document retrieved based on similarity (a “root”).
- Parameters:
query (str) – The query string to search for.
initial_roots (Sequence[str]) – Optional list of document IDs to use for initializing search. The top adjacent_k nodes adjacent to each initial root will be included in the set of initial candidates. To fetch only in the neighborhood of these nodes, set fetch_k = 0.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of initial Documents to fetch via similarity. Will be added to the nodes adjacent to initial_roots. Defaults to 100.
adjacent_k (int) – Number of adjacent Documents to fetch. Defaults to 10.
depth (int) – Maximum depth of a node (number of edges) from a node retrieved via similarity. Defaults to 2.
lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
score_threshold (float) – Only documents with a score greater than or equal to this threshold will be chosen. Defaults to -infinity.
filter (dict[str, Any] | None) – Optional metadata to filter the results.
**kwargs (Any) – Additional keyword arguments.
- Return type:
AsyncIterable[Document]
- as_retriever(**kwargs: Any) GraphVectorStoreRetriever #
Return GraphVectorStoreRetriever initialized from this GraphVectorStore.
- Parameters:
**kwargs (Any) –
Keyword arguments to pass to the search function. Can include:
search_type (Optional[str]): Defines the type of search that the Retriever should perform. Can be "traversal" (default), "similarity", "mmr", "mmr_traversal", or "similarity_score_threshold".
search_kwargs (Optional[Dict]): Keyword arguments to pass to the search function. Can include things like:
k (int): Number of documents to return (Default: 4).
depth (int): The maximum depth of edges to traverse (Default: 1). Only applies to search_type "traversal" and "mmr_traversal".
score_threshold (float): Minimum relevance threshold for similarity_score_threshold.
fetch_k (int): Number of documents to pass to the MMR algorithm (Default: 20).
lambda_mult (float): Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum (Default: 0.5).
- Returns:
Retriever for this GraphVectorStore.
- Return type:
GraphVectorStoreRetriever
Examples:
# Retrieve documents traversing edges
docsearch.as_retriever(
    search_type="traversal",
    search_kwargs={'k': 6, 'depth': 2},
)

# Retrieve documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr_traversal",
    search_kwargs={'k': 6, 'lambda_mult': 0.25, 'depth': 2},
)

# Fetch more documents for the MMR algorithm to consider,
# but only return the top 5
docsearch.as_retriever(
    search_type="mmr_traversal",
    search_kwargs={'k': 5, 'fetch_k': 50, 'depth': 2},
)

# Only retrieve documents that have a relevance score
# above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={'score_threshold': 0.8},
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
- async asearch(query: str, search_type: str, **kwargs: Any) list[Document] #
Async return docs most similar to query using a specified search type.
- Parameters:
query (str) – Input text.
search_type (str) – Type of search to perform. Can be “similarity”, “mmr”, or “similarity_score_threshold”.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Documents most similar to the query.
- Raises:
ValueError – If search_type is not one of “similarity”, “mmr”, or “similarity_score_threshold”.
- Return type:
list[Document]
- async asimilarity_search(query: str, k: int = 4, filter: dict[str, Any] | None = None, **kwargs: Any) list[Document] [source]#
Retrieve documents from this graph store.
- Parameters:
query (str) – The query string.
k (int) – The number of Documents to return. Defaults to 4.
filter (dict[str, Any] | None) – Optional metadata to filter the results.
**kwargs (Any) – Additional keyword arguments.
- Returns:
Collection of retrieved documents.
- Return type:
list[Document]
- async asimilarity_search_by_vector(embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None, **kwargs: Any) list[Document] [source]#
Return docs most similar to embedding vector.
- Parameters:
embedding (list[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filter (dict[str, Any] | None) – Filter on the metadata to apply.
**kwargs (Any) – Additional arguments are ignored.
- Returns:
The list of Documents most similar to the query vector.
- Return type:
list[Document]
- async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) list[tuple[Document, float]] #
Async return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
- Parameters:
query (str) – Input text.
k (int) – Number of Documents to return. Defaults to 4.
**kwargs (Any) –
kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.
- Returns:
List of Tuples of (doc, similarity_score)
- Return type:
list[tuple[Document, float]]
- async asimilarity_search_with_score(*args: Any, **kwargs: Any) list[tuple[Document, float]] #
Async run similarity search with distance.
- Parameters:
*args (Any) – Arguments to pass to the search method.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Tuples of (doc, similarity_score).
- Return type:
list[tuple[Document, float]]
- async atraversal_search(query: str, *, k: int = 4, depth: int = 1, filter: dict[str, Any] | None = None, **kwargs: Any) AsyncIterable[Document] [source]#
Retrieve documents from this knowledge store.
First, k nodes are retrieved using a vector search for the query string. Then, additional nodes are discovered up to the given depth from those starting nodes.
- Parameters:
query (str) – The query string.
k (int) – The number of Documents to return from the initial vector search. Defaults to 4.
depth (int) – The maximum depth of edges to traverse. Defaults to 1.
filter (dict[str, Any] | None) – Optional metadata to filter the results.
**kwargs (Any) – Additional keyword arguments.
- Returns:
Collection of retrieved documents.
- Return type:
AsyncIterable[Document]
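Because atraversal_search returns an AsyncIterable, results are typically drained with an async helper. A minimal sketch; the commented store call assumes live Astra DB credentials, and the stand-in generator below exists only to show the consumption pattern:

```python
import asyncio


async def collect(aiter, limit=None):
    """Drain an async iterable (e.g. atraversal_search) into a list."""
    out = []
    async for item in aiter:
        out.append(item)
        if limit is not None and len(out) >= limit:
            break
    return out


# Hypothetical usage against a live store (requires Astra DB credentials):
# docs = asyncio.run(collect(store.atraversal_search("graph stores", k=4, depth=2)))

# Stand-in async generator, used only to demonstrate the pattern:
async def fake_search():
    for text in ["doc-a", "doc-b", "doc-c"]:
        yield text


print(asyncio.run(collect(fake_search(), limit=2)))  # ['doc-a', 'doc-b']
```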
- delete(ids: list[str] | None = None, **kwargs: Any) bool | None #
Delete by vector ID or other criteria.
- Parameters:
ids (list[str] | None) – List of ids to delete. If None, delete all. Default is None.
**kwargs (Any) – Other keyword arguments that subclasses might use.
- Returns:
True if deletion is successful, False otherwise, None if not implemented.
- Return type:
Optional[bool]
- classmethod from_documents(documents: Iterable[Document], embedding: Embeddings | None = None, ids: Iterable[str] | None = None, collection_vector_service_options: CollectionVectorServiceOptions | None = None, collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None, **kwargs: Any) AstraDBGraphVectorStore [source]#
Return AstraDBGraphVectorStore initialized from docs and embeddings.
- Parameters:
documents (Iterable[Document])
embedding (Embeddings | None)
ids (Iterable[str] | None)
collection_vector_service_options (CollectionVectorServiceOptions | None)
collection_embedding_api_key (str | EmbeddingHeadersProvider | None)
kwargs (Any)
- Return type:
AstraDBGraphVectorStore
- classmethod from_texts(texts: Iterable[str], embedding: Embeddings | None = None, metadatas: list[dict] | None = None, ids: Iterable[str] | None = None, collection_vector_service_options: CollectionVectorServiceOptions | None = None, collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None, **kwargs: Any) AstraDBGraphVectorStore [source]#
Return AstraDBGraphVectorStore initialized from texts and embeddings.
- Parameters:
texts (Iterable[str])
embedding (Embeddings | None)
metadatas (list[dict] | None)
ids (Iterable[str] | None)
collection_vector_service_options (CollectionVectorServiceOptions | None)
collection_embedding_api_key (str | EmbeddingHeadersProvider | None)
kwargs (Any)
- Return type:
AstraDBGraphVectorStore
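As a hedged illustration of the parallel inputs from_texts expects (the credentials, endpoint, collection name, and embedding object below are placeholders, not values from this reference):

```python
# Parallel inputs for from_texts: one metadata dict and one id per text.
texts = [
    "Graph vector stores link documents by edges.",
    "Traversal search follows those edges from similar nodes.",
]
metadatas = [{"topic": "overview"}, {"topic": "search"}]
ids = ["doc-overview", "doc-search"]

# All three sequences must line up element-by-element.
assert len(texts) == len(metadatas) == len(ids)

# Hypothetical call (requires a live Astra DB collection and credentials):
# from langchain_astradb import AstraDBGraphVectorStore
# store = AstraDBGraphVectorStore.from_texts(
#     texts,
#     embedding=my_embeddings,  # any LangChain Embeddings instance
#     metadatas=metadatas,
#     ids=ids,
#     collection_name="my_graph_collection",  # placeholder
#     token="AstraCS:...",                    # placeholder
#     api_endpoint="https://<db-id>-<region>.apps.astra.datastax.com",
# )
```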
- get_by_document_id(document_id: str) Document | None [source]#
Retrieve a single document from the store, given its document ID.
- Parameters:
document_id (str) – The document ID
- Returns:
The document if it exists, otherwise None.
- Return type:
Document | None
- get_by_ids(ids: Sequence[str], /) list[Document] #
Get documents by their IDs.
The returned documents are expected to have the ID field set to the ID of the document in the vector store.
Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.
Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.
This method should NOT raise exceptions if no documents are found for some IDs.
- Parameters:
ids (Sequence[str]) – List of ids to retrieve.
- Returns:
List of Documents.
- Return type:
list[Document]
Added in version 0.2.11.
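Since the output order of get_by_ids is not guaranteed, callers who need the request order can re-sort by each document's id field. A small order-restoring helper (pure Python; the store call is sketched in a comment because it needs a live collection):

```python
def in_request_order(ids, docs, key=lambda doc: doc["id"]):
    """Reorder fetched docs to match the requested ids, dropping ids not found."""
    by_id = {key(doc): doc for doc in docs}
    return [by_id[i] for i in ids if i in by_id]


fetched = [{"id": "b"}, {"id": "a"}]  # order not guaranteed by get_by_ids
print(in_request_order(["a", "b", "c"], fetched))  # [{'id': 'a'}, {'id': 'b'}]

# Hypothetical live call:
# docs = store.get_by_ids(["a", "b", "c"])
# ordered = in_request_order(["a", "b", "c"], docs, key=lambda d: d.id)
```

Note the helper also silently drops the missing id "c", matching the documented behavior that fewer documents may come back than were requested.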
- get_node(node_id: str) Node | None [source]#
Retrieve a single node from the store, given its ID.
- Parameters:
node_id (str) – The node ID
- Returns:
The node if it exists, otherwise None.
- Return type:
Node | None
- max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) list[Document] #
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
- Parameters:
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Default is 20.
lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Documents selected by maximal marginal relevance.
- Return type:
list[Document]
- max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) list[Document] #
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
- Parameters:
embedding (list[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Default is 20.
lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Documents selected by maximal marginal relevance.
- Return type:
list[Document]
- metadata_search(filter: dict[str, Any] | None = None, n: int = 5) Iterable[Document] [source]#
Get documents via a metadata search.
- Parameters:
filter (dict[str, Any] | None) – the metadata to query for.
n (int) – the maximum number of documents to return.
- Return type:
Iterable[Document]
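metadata_search matches on metadata only, with no vector similarity involved. A local illustration of the equality-style filtering the filter argument expresses; the live call is commented out since it needs credentials, and exact server-side filter semantics are Astra DB's, so treat this helper as a conceptual sketch:

```python
def matches(metadata, filter):
    """Equality-match a metadata dict against a filter dict (conceptual sketch)."""
    return all(metadata.get(key) == value for key, value in filter.items())


rows = [
    {"topic": "overview", "lang": "en"},
    {"topic": "search", "lang": "en"},
    {"topic": "search", "lang": "fr"},
]
hits = [row for row in rows if matches(row, {"topic": "search", "lang": "en"})]
print(hits)  # [{'topic': 'search', 'lang': 'en'}]

# Hypothetical live call (requires credentials):
# docs = store.metadata_search(filter={"topic": "search"}, n=5)
```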
- mmr_traversal_search(query: str, *, initial_roots: Sequence[str] = (), k: int = 4, depth: int = 2, fetch_k: int = 100, adjacent_k: int = 10, lambda_mult: float = 0.5, score_threshold: float = -inf, filter: dict[str, Any] | None = None, **kwargs: Any) Iterable[Document] [source]#
Retrieve documents from this graph store using MMR-traversal.
This strategy first retrieves the top fetch_k results by similarity to the query. It then selects the top k results based on maximal marginal relevance using the given lambda_mult.
At each step, it considers the (remaining) documents from fetch_k as well as any documents connected by edges to a selected document retrieved based on similarity (a “root”).
- Parameters:
query (str) – The query string to search for.
initial_roots (Sequence[str]) – Optional list of document IDs to use for initializing search. The top adjacent_k nodes adjacent to each initial root will be included in the set of initial candidates. To fetch only in the neighborhood of these nodes, set fetch_k = 0.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of initial Documents to fetch via similarity. Will be added to the nodes adjacent to initial_roots. Defaults to 100.
adjacent_k (int) – Number of adjacent Documents to fetch. Defaults to 10.
depth (int) – Maximum depth of a node (number of edges) from a node retrieved via similarity. Defaults to 2.
lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
score_threshold (float) – Only documents with a score greater than or equal to this threshold will be chosen. Defaults to -infinity.
filter (dict[str, Any] | None) – Optional metadata to filter the results.
**kwargs (Any) – Additional keyword arguments.
- Return type:
Iterable[Document]
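The lambda_mult trade-off follows the standard maximal-marginal-relevance rule: each step picks the candidate maximizing lambda_mult * sim(doc, query) - (1 - lambda_mult) * max sim(doc, already selected). A self-contained sketch of that greedy selection over precomputed similarity scores (illustrative only; this is the textbook MMR rule, not this library's internal implementation):

```python
def mmr_select(query_sims, pair_sims, k, lambda_mult=0.5):
    """Greedy maximal-marginal-relevance selection over precomputed similarities.

    query_sims: {doc_id: similarity to the query}
    pair_sims:  {(id_a, id_b): similarity between the two docs} (one direction suffices)
    """
    selected = []
    candidates = set(query_sims)
    while candidates and len(selected) < k:
        def mmr_score(doc):
            # Redundancy: highest similarity to anything already selected.
            redundancy = max(
                (pair_sims.get((doc, s), pair_sims.get((s, doc), 0.0)) for s in selected),
                default=0.0,
            )
            return lambda_mult * query_sims[doc] - (1 - lambda_mult) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.discard(best)
    return selected


# 'a' and 'b' are near-duplicates; 'c' is less relevant but diverse.
query_sims = {'a': 0.9, 'b': 0.85, 'c': 0.5}
pair_sims = {('a', 'b'): 0.95, ('a', 'c'): 0.1, ('b', 'c'): 0.1}
print(mmr_select(query_sims, pair_sims, k=2, lambda_mult=0.5))  # ['a', 'c']
print(mmr_select(query_sims, pair_sims, k=2, lambda_mult=1.0))  # ['a', 'b']
```

With lambda_mult at 1 the near-duplicate 'b' wins on pure relevance; lowering it lets the diverse 'c' displace it, which is the behavior the parameter controls here.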
- search(query: str, search_type: str, **kwargs: Any) list[Document] #
Return docs most similar to query using a specified search type.
- Parameters:
query (str) – Input text.
search_type (str) – Type of search to perform. Can be “similarity”, “mmr”, or “similarity_score_threshold”.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Documents most similar to the query.
- Raises:
ValueError – If search_type is not one of “similarity”, “mmr”, or “similarity_score_threshold”.
- Return type:
list[Document]
- similarity_search(query: str, k: int = 4, filter: dict[str, Any] | None = None, **kwargs: Any) list[Document] [source]#
Retrieve documents from this graph store.
- Parameters:
query (str) – The query string.
k (int) – The number of Documents to return. Defaults to 4.
filter (dict[str, Any] | None) – Optional metadata to filter the results.
**kwargs (Any) – Additional keyword arguments.
- Returns:
Collection of retrieved documents.
- Return type:
list[Document]
- similarity_search_by_vector(embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None, **kwargs: Any) list[Document] [source]#
Return docs most similar to embedding vector.
- Parameters:
embedding (list[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filter (dict[str, Any] | None) – Filter on the metadata to apply.
**kwargs (Any) – Additional arguments are ignored.
- Returns:
The list of Documents most similar to the query vector.
- Return type:
list[Document]
- similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) list[tuple[Document, float]] #
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
- Parameters:
query (str) – Input text.
k (int) – Number of Documents to return. Defaults to 4.
**kwargs (Any) –
kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.
- Returns:
List of Tuples of (doc, similarity_score).
- Return type:
list[tuple[Document, float]]
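One common pattern is passing score_threshold to drop weak matches from the returned (doc, score) pairs. A local sketch of that thresholding over placeholder data; the inclusive >= cut-off here is an assumption for illustration, not a statement about the store's internals:

```python
def apply_threshold(scored_docs, score_threshold):
    """Keep only (doc, score) pairs at or above the threshold (assumed inclusive)."""
    return [(doc, score) for doc, score in scored_docs if score >= score_threshold]


pairs = [("doc-a", 0.92), ("doc-b", 0.71), ("doc-c", 0.40)]
print(apply_threshold(pairs, 0.7))  # [('doc-a', 0.92), ('doc-b', 0.71)]

# Hypothetical live call (requires credentials):
# results = store.similarity_search_with_relevance_scores(
#     "query", k=4, score_threshold=0.7
# )
```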
- similarity_search_with_score(*args: Any, **kwargs: Any) list[tuple[Document, float]] #
Run similarity search with distance.
- Parameters:
*args (Any) – Arguments to pass to the search method.
**kwargs (Any) – Arguments to pass to the search method.
- Returns:
List of Tuples of (doc, similarity_score).
- Return type:
list[tuple[Document, float]]
- traversal_search(query: str, *, k: int = 4, depth: int = 1, filter: dict[str, Any] | None = None, **kwargs: Any) Iterable[Document] [source]#
Retrieve documents from this knowledge store.
First, k nodes are retrieved using a vector search for the query string. Then, additional nodes are discovered up to the given depth from those starting nodes.
- Parameters:
query (str) – The query string.
k (int) – The number of Documents to return from the initial vector search. Defaults to 4.
depth (int) – The maximum depth of edges to traverse. Defaults to 1.
filter (dict[str, Any] | None) – Optional metadata to filter the results.
**kwargs (Any) – Additional keyword arguments.
- Returns:
Collection of retrieved documents.
- Return type:
Iterable[Document]
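traversal_search yields documents lazily and, once edges are followed, may return more documents than the initial k; a small helper caps how many are materialized. The store call is sketched in comments because it needs a live Astra DB collection:

```python
from itertools import islice


def take(iterable, n):
    """Materialize at most n items from a lazy iterable such as traversal_search."""
    return list(islice(iterable, n))


print(take(iter(range(100)), 3))  # [0, 1, 2]

# Hypothetical live call (store construction omitted; requires credentials):
# docs = take(store.traversal_search("graph stores", k=4, depth=2), 10)
```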