
Astra DB

This page provides a quickstart for using Astra DB as a Vector Store.

DataStax Astra DB is a serverless vector-capable database built on Apache Cassandra® and made conveniently available through an easy-to-use JSON API.

Note: in addition to access to the database, an OpenAI API Key is required to run the full example.

Setup and general dependencies

Use of the integration requires the corresponding Python package:

pip install --upgrade langchain-astradb

Note: the following are all the packages required to run the full demo on this page. Depending on your LangChain setup, some of them may need to be installed separately:

pip install langchain langchain-openai datasets pypdf

Import dependencies

import os
from getpass import getpass

from datasets import load_dataset
from langchain_community.document_loaders import PyPDFLoader
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

os.environ["OPENAI_API_KEY"] = getpass("OPENAI_API_KEY = ")

embe = OpenAIEmbeddings()

Import the Vector Store

from langchain_astradb import AstraDBVectorStore

API Reference: AstraDBVectorStore

Connection parameters

These are found on your Astra DB dashboard:

  • the API Endpoint looks like https://<database_id>-<region>.apps.astra.datastax.com
  • the Token looks like AstraCS:6gBhNmsk135....
  • you may optionally provide a Namespace such as my_namespace

ASTRA_DB_API_ENDPOINT = input("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass("ASTRA_DB_APPLICATION_TOKEN = ")

desired_namespace = input("(optional) Namespace = ")
if desired_namespace:
    ASTRA_DB_KEYSPACE = desired_namespace
else:
    ASTRA_DB_KEYSPACE = None

Now you can create the vector store:

vstore = AstraDBVectorStore(
    embedding=embe,
    collection_name="astra_vector_demo",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
)

Load a dataset

Convert each entry in the source dataset into a Document, then write them into the vector store:

philo_dataset = load_dataset("datastax/philosopher-quotes")["train"]

docs = []
for entry in philo_dataset:
    metadata = {"author": entry["author"]}
    doc = Document(page_content=entry["quote"], metadata=metadata)
    docs.append(doc)

inserted_ids = vstore.add_documents(docs)
print(f"\nInserted {len(inserted_ids)} documents.")

In the above, metadata dictionaries are created from the source data and are part of the Document.

Note: check the Astra DB API Docs for the valid metadata field names: some characters are reserved and cannot be used.
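As a purely illustrative sketch (defer to the API Docs for the authoritative rules), plain alphanumeric keys are safe, while keys that collide with the API's own reserved fields are not:

# Plain alphanumeric metadata keys such as "author" are safe to use:
safe_doc = Document(
    page_content="The unexamined life is not worth living.",
    metadata={"author": "socrates"},
)
# Assumption for illustration: a key such as "$vector" would collide with
# the API's reserved fields and be rejected - check the Astra DB API Docs.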

Add some more entries, this time with add_texts:

texts = ["I think, therefore I am.", "To the things themselves!"]
metadatas = [{"author": "descartes"}, {"author": "husserl"}]
ids = ["desc_01", "huss_xy"]

inserted_ids_2 = vstore.add_texts(texts=texts, metadatas=metadatas, ids=ids)
print(f"\nInserted {len(inserted_ids_2)} documents.")

Note: you may want to speed up the execution of add_texts and add_documents by increasing the concurrency level for these bulk operations - check out the *_concurrency parameters in the class constructor and the add_texts docstrings for more details. Depending on the network and the client machine specifications, your best-performing choice of parameters may vary.
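For instance, a sketch of a store constructed with higher bulk-insert concurrency might look like the following (the parameter name and value here are illustrative assumptions; consult the constructor docstring of your installed version for the authoritative list and defaults):

# Illustrative assumption: raise the concurrency for bulk writes. Verify the
# exact *_concurrency parameter names against your installed version.
vstore_tuned = AstraDBVectorStore(
    embedding=embe,
    collection_name="astra_vector_demo",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
    bulk_insert_batch_concurrency=20,  # concurrent insertion batches
)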

Run searches

This section demonstrates plain similarity search, metadata filtering, retrieval of similarity scores, and MMR (maximal-marginal-relevance) search:

results = vstore.similarity_search("Our life is what we make of it", k=3)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")

results_filtered = vstore.similarity_search(
    "Our life is what we make of it",
    k=3,
    filter={"author": "plato"},
)
for res in results_filtered:
    print(f"* {res.page_content} [{res.metadata}]")

results = vstore.similarity_search_with_score("Our life is what we make of it", k=3)
for res, score in results:
    print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")

results = vstore.max_marginal_relevance_search(
    "Our life is what we make of it",
    k=3,
    filter={"author": "aristotle"},
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")


Note that the Astra DB vector store natively supports the full set of async methods (asimilarity_search, afrom_texts, adelete, and so on), i.e. without thread wrapping involved.
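
For example, a minimal native-async search could look like this (using asyncio.run here; in a notebook with a running event loop, you would await the coroutine directly):

import asyncio


async def run_async_search():
    # asimilarity_search is the native async counterpart of similarity_search
    results = await vstore.asimilarity_search("Our life is what we make of it", k=3)
    for res in results:
        print(f"* {res.page_content} [{res.metadata}]")


asyncio.run(run_async_search())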

Deleting stored documents

delete_1 = vstore.delete(inserted_ids[:3])
print(f"all_succeed={delete_1}") # True, all documents deleted
delete_2 = vstore.delete(inserted_ids[2:5])
print(f"some_succeeds={delete_2}") # True, though some IDs were gone already

A minimal RAG chain

The next cells will implement a simple RAG pipeline:

  • download a sample PDF file and load it into the store;
  • create a RAG chain with LCEL (LangChain Expression Language), with the vector store at its heart;
  • run the question-answering chain.
# Replace the placeholder with the URL of the sample PDF you want to ingest:
!curl -L \
    "<sample-pdf-url>" \
    -o "what-is-philosophy.pdf"
pdf_loader = PyPDFLoader("what-is-philosophy.pdf")
splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)
docs_from_pdf = pdf_loader.load_and_split(text_splitter=splitter)

print(f"Documents from PDF: {len(docs_from_pdf)}.")
inserted_ids_from_pdf = vstore.add_documents(docs_from_pdf)
print(f"Inserted {len(inserted_ids_from_pdf)} documents.")
retriever = vstore.as_retriever(search_kwargs={"k": 3})

philo_template = """
You are a philosopher that draws inspiration from great thinkers of the past
to craft well-thought answers to user questions. Use the provided context as the basis
for your answers and do not make up new reasoning paths - just mix-and-match what you are given.
Your answers must be concise and to the point, and refrain from answering about other topics than philosophy.


QUESTION: {question}


philo_prompt = ChatPromptTemplate.from_template(philo_template)

llm = ChatOpenAI()

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | philo_prompt
    | llm
    | StrOutputParser()
)

chain.invoke("How does Russell elaborate on Peirce's idea of the security blanket?")

For more, check out a complete RAG template using Astra DB here.


Cleanup

If you want to completely delete the collection from your Astra DB instance, run this.

(You will lose the data you stored in it.)
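
vstore.delete_collection()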

