index

langchain_core.indexing.api.index(docs_source: BaseLoader | Iterable[Document], record_manager: RecordManager, vector_store: VectorStore | DocumentIndex, *, batch_size: int = 100, cleanup: Literal['incremental', 'full', 'scoped_full', None] = None, source_id_key: str | Callable[[Document], str] | None = None, cleanup_batch_size: int = 1000, force_update: bool = False, upsert_kwargs: dict[str, Any] | None = None) → IndexingResult

Index data from the loader into the vector store.

Indexing functionality uses a record manager to keep track of which documents are in the vector store.

This allows us to keep track of which documents were updated, which documents were deleted, and which documents should be skipped.

For the time being, documents are indexed using their hashes, and users are not able to specify the uid of the document.

Important

  • In full mode, the loader should return the entire dataset, not just a subset of it. Otherwise, the auto_cleanup will remove documents that it is not supposed to.

  • In incremental mode, if documents associated with a particular source id appear across different batches, the indexing API will do some redundant work. This still results in the correct end state of the index, but is unfortunately not 100% efficient. For example, if a given document is split into 15 chunks and indexed with a batch size of 5, there will be 3 batches that all share the same source id. In general, to avoid doing too much redundant work, choose as large a batch size as possible (see the sketch after this list).

  • The scoped_full mode is suitable if determining an appropriate batch size is challenging or if your data loader cannot return the entire dataset at once. This mode keeps track of source IDs in memory, which should be fine for most use cases. If your dataset is large (10M+ docs), you will likely need to parallelize the indexing process regardless.
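
To make the notes above concrete, here is a minimal sketch of incremental indexing, assuming the in-memory helpers shipped with langchain_core (InMemoryRecordManager, InMemoryVectorStore, DeterministicFakeEmbedding) are acceptable stand-ins; the documents, sources, namespace, and embedding size are illustrative assumptions, not part of the API.

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

# In-memory stand-ins; production setups typically use a persistent record
# manager (e.g. SQLRecordManager from the langchain package) and a real
# vector store backed by a real embedding model.
record_manager = InMemoryRecordManager(namespace="demo/my_docs")
record_manager.create_schema()
vector_store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=32))

docs = [
    Document(page_content="chunk 1 of doc A", metadata={"source": "a.txt"}),
    Document(page_content="chunk 2 of doc A", metadata={"source": "a.txt"}),
    Document(page_content="chunk 1 of doc B", metadata={"source": "b.txt"}),
]

# First run: everything is new.
result = index(
    docs,
    record_manager,
    vector_store,
    cleanup="incremental",
    source_id_key="source",
)
print(result)  # e.g. {'num_added': 3, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}

# Second run with one chunk of doc A edited: the stale chunk associated with
# source "a.txt" is cleaned up continuously, unchanged documents are skipped.
docs[0] = Document(page_content="chunk 1 of doc A, edited", metadata={"source": "a.txt"})
print(index(docs, record_manager, vector_store, cleanup="incremental", source_id_key="source"))
```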

Parameters:
  • docs_source (BaseLoader | Iterable[Document]) – Data loader or iterable of documents to index.

  • record_manager (RecordManager) – Timestamped set to keep track of which documents were updated.

  • vector_store (VectorStore | DocumentIndex) – VectorStore or DocumentIndex to index the documents into.

  • batch_size (int) – Batch size to use when indexing. Default is 100.

  • cleanup (Literal['incremental', 'full', 'scoped_full', None]) – How to handle clean up of documents. Default is None.

    • incremental: Cleans up all documents that haven't been updated AND that are associated with source ids that were seen during indexing. Clean up is done continuously during indexing, which helps minimize the probability of users seeing duplicated content.

    • full: Delete all documents that have not been returned by the loader during this run of indexing. Clean up runs after all documents have been indexed. This means that users may see duplicated content during indexing.

    • scoped_full: Similar to full, but only deletes documents that haven't been updated AND that are associated with source ids that were seen during indexing.

    • None: Do not delete any documents.

  • source_id_key (str | Callable[[Document], str] | None) – Optional key that helps identify the original source of the document. Default is None.

  • cleanup_batch_size (int) – Batch size to use when cleaning up documents. Default is 1_000.

  • force_update (bool) – Force update documents even if they are present in the record manager. Useful if you are re-indexing with updated embeddings. Default is False.

  • upsert_kwargs (dict[str, Any] | None) – Additional keyword arguments to pass to the add_documents method of the VectorStore or the upsert method of the DocumentIndex. For example, you can use this to specify a custom vector field: upsert_kwargs={"vector_field": "embedding"} (see the sketch after this parameter list).

    Added in version 0.3.10.
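
As referenced above, a hedged sketch of forwarding upsert_kwargs to the underlying store, reusing the objects from the earlier sketch. Whether a vector_field keyword is honored depends on the specific add_documents implementation, so treat it as an assumption rather than a universal option.

```python
# Assumes `docs`, `record_manager`, and `vector_store` exist as in the sketch
# above, and that this particular vector store's add_documents accepts a
# `vector_field` keyword (store-specific; not all integrations do).
index(
    docs,
    record_manager,
    vector_store,
    cleanup="incremental",
    source_id_key="source",
    upsert_kwargs={"vector_field": "embedding"},
)
```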

Returns:

Indexing result which contains information about how many documents were added, updated, deleted, or skipped.

Raises:
  • ValueError – If cleanup mode is not one of 'incremental', 'full', 'scoped_full' or None.

  • ValueError – If cleanup mode is incremental and source_id_key is None.

  • ValueError – If the vector store does not have the required "delete" and "add_documents" methods.

  • ValueError – If source_id_key is not None, but is not a string or callable.

Return type:

IndexingResult

Examples using index
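
As a self-contained complement to the incremental sketch above, here is a hedged sketch of full cleanup, again assuming the in-memory helpers from langchain_core; names and contents are illustrative. It shows that a document omitted from a later run is deleted, because full mode treats the loader's output as the complete dataset.

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo/full_cleanup")
record_manager.create_schema()
vector_store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=32))

all_docs = [
    Document(page_content="doc A", metadata={"source": "a.txt"}),
    Document(page_content="doc B", metadata={"source": "b.txt"}),
]

# First run indexes both documents.
index(all_docs, record_manager, vector_store, cleanup="full", source_id_key="source")

# Second run: "b.txt" is no longer returned, so full cleanup deletes it after
# all returned documents have been indexed.
result = index(all_docs[:1], record_manager, vector_store, cleanup="full", source_id_key="source")
print(result)  # e.g. {'num_added': 0, 'num_updated': 0, 'num_skipped': 1, 'num_deleted': 1}
```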