
Oracle AI Vector Search: Vector Store

Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads and allows you to query data based on semantics rather than keywords. One of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in a single system. This is not only powerful but also significantly more effective, because you don't need a separate, specialized vector database, which eliminates the pain of data fragmentation between multiple systems.

In addition, your vectors can benefit from all of Oracle Database's most powerful features, such as partitioning, Real Application Clusters scalability, parallel SQL, security, and disaster recovery.

If you are just starting with Oracle Database, consider exploring the free Oracle Database 23ai, which provides a great introduction to setting up your database environment. While working with the database, it is often advisable to avoid using the system user by default; instead, create your own user for better security and customization. For detailed steps on user creation, refer to our end-to-end guide, which also shows how to set up a user in Oracle. Additionally, understanding user privileges is crucial for managing database security effectively. You can learn more about this topic in the official Oracle guide on administering user accounts and security.
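As a rough sketch of that step (the admin credentials, user name, password, privileges, and tablespace below are placeholders and will vary with your setup; see the guides linked above for authoritative instructions), a dedicated application user can be created from Python with a privileged connection:

import oracledb

# Connect as a privileged account (placeholder credentials) to create the application user.
admin_connection = oracledb.connect(
    user="admin", password="admin_password", dsn="ipaddress:port/orclpdb1"
)

with admin_connection.cursor() as cursor:
    # Placeholder user name and password; adjust privileges and quota to your needs.
    cursor.execute("CREATE USER vector_user IDENTIFIED BY MyStrongPassword1")
    cursor.execute("GRANT CREATE SESSION, CREATE TABLE TO vector_user")
    cursor.execute("ALTER USER vector_user QUOTA UNLIMITED ON users")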

You'll need to install the langchain-community package to use this integration:
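# pip install -qU langchain-community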

Please install the Oracle Python client driver (python-oracledb) to use LangChain with Oracle AI Vector Search.

# pip install oracledb

The following sample code shows how to connect to Oracle Database. By default, python-oracledb runs in 'Thin' mode, which connects directly to Oracle Database and does not need the Oracle Client libraries. However, some additional functionality is available when python-oracledb uses them; python-oracledb is said to be in 'Thick' mode when the Oracle Client libraries are loaded. Both modes have comprehensive functionality supporting the Python Database API v2.0 Specification. See the python-oracledb documentation for the features supported in each mode. You might want to switch to Thick mode if you are unable to use Thin mode.
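If you do need Thick mode, a minimal sketch of enabling it looks like the following (it assumes the Oracle Client libraries are installed locally; the lib_dir path is a placeholder for your Instant Client location):

import oracledb

# Load the Oracle Client libraries to switch python-oracledb into Thick mode.
# On Linux the libraries are often found automatically; elsewhere, point lib_dir
# at your Instant Client installation (placeholder path below).
oracledb.init_oracle_client(lib_dir="/opt/oracle/instantclient_23_4")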

import oracledb

# Replace the placeholders below with your database user, password, and connect string
username = "username"
password = "password"
dsn = "ipaddress:port/orclpdb1"

try:
    connection = oracledb.connect(user=username, password=password, dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print(f"Connection failed: {e}")
from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings

Load Documents

# Define a list of documents (the examples below are excerpts from the Oracle Database Concepts manual)

documents_json_list = [
    {
        "id": "cncpt_15.5.3.2.2_P4",
        "text": "If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/logical-storage-structures.html#GUID-5387D7B2-C0CA-4C1E-811B-C7EB9B636442",
    },
    {
        "id": "cncpt_15.5.5_P1",
        "text": "A tablespace can be online (accessible) or offline (not accessible) whenever the database is open.\nA tablespace is usually online so that its data is available to users. The SYSTEM tablespace and temporary tablespaces cannot be taken offline.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/logical-storage-structures.html#GUID-D02B2220-E6F5-40D9-AFB5-BC69BCEF6CD4",
    },
    {
        "id": "cncpt_22.3.4.3.1_P2",
        "text": "The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table.\nSometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/concepts-for-database-developers.html#GUID-3C50EAB8-FC39-4BB3-B680-4EACCE49E866",
    },
    {
        "id": "cncpt_22.3.4.3.1_P3",
        "text": "The LOB segment stores data in pieces called chunks. A chunk is a logically contiguous set of data blocks and is the smallest unit of allocation for a LOB. A row in the table stores a pointer called a LOB locator, which points to the LOB index. When the table is queried, the database uses the LOB index to quickly locate the LOB chunks.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/concepts-for-database-developers.html#GUID-3C50EAB8-FC39-4BB3-B680-4EACCE49E866",
    },
]
# Create LangChain documents from the raw JSON entries

documents_langchain = []

for doc in documents_json_list:
    metadata = {"id": doc["id"], "link": doc["link"]}
    doc_langchain = Document(page_content=doc["text"], metadata=metadata)
    documents_langchain.append(doc_langchain)

First we will create three vector stores, each with a different distance function. Since we have not yet created indices on them, they will just create tables for now. Later we will use these vector stores to create HNSW indices. To understand more about the different types of indices Oracle AI Vector Search supports, refer to the following guide.

If you connect to the Oracle Database manually, you will see three tables: Documents_DOT, Documents_COSINE, and Documents_EUCLIDEAN.

We will then create three additional tables, Documents_DOT_IVF, Documents_COSINE_IVF, and Documents_EUCLIDEAN_IVF, which will be used to create IVF indices on the tables instead of HNSW indices.
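As a quick sanity check after running the ingestion code below, a small sketch like the following lists those tables through the same connection (user_tables is a standard Oracle data dictionary view; the LIKE pattern assumes the table names used in this guide):

# List the vector store tables owned by the connected user (run after ingestion).
with connection.cursor() as cursor:
    cursor.execute(
        "SELECT table_name FROM user_tables WHERE table_name LIKE 'DOCUMENTS%'"
    )
    for (table_name,) in cursor:
        print(table_name)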

# Ingest documents into Oracle Vector Store using different distance strategies

# When using our API calls, start by initializing your vector store with a subset of your documents
# through from_documents(), then incrementally add more documents using add_texts().
# This approach prevents system overload and ensures efficient document processing.

model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

vector_store_dot = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_DOT",
    distance_strategy=DistanceStrategy.DOT_PRODUCT,
)
vector_store_max = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_COSINE",
    distance_strategy=DistanceStrategy.COSINE,
)
vector_store_euclidean = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_EUCLIDEAN",
    distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,
)

# Ingest documents into the IVF tables using the same distance strategies
vector_store_dot_ivf = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_DOT_IVF",
    distance_strategy=DistanceStrategy.DOT_PRODUCT,
)
vector_store_max_ivf = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_COSINE_IVF",
    distance_strategy=DistanceStrategy.COSINE,
)
vector_store_euclidean_ivf = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_EUCLIDEAN_IVF",
    distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,
)

def manage_texts(vector_stores):
    """
    Adds texts to each vector store, demonstrates error handling for duplicate additions,
    and performs deletion of texts. Showcases similarity searches for each vector store.

    Args:
    - vector_stores (list): A list of OracleVS instances.
    """
    texts = ["Rohan", "Shailendra"]
    metadata = [
        {"id": "100", "link": "Document Example Test 1"},
        {"id": "101", "link": "Document Example Test 2"},
    ]

    for i, vs in enumerate(vector_stores, start=1):
        # Adding texts
        try:
            vs.add_texts(texts, metadata)
            print(f"\n\n\nAdd texts complete for vector store {i}\n\n\n")
        except Exception as ex:
            print(f"\n\n\nExpected error on duplicate add for vector store {i}: {ex}\n\n\n")

        # Deleting texts using the value of 'id'
        vs.delete([metadata[0]["id"]])
        print(f"\n\n\nDelete texts complete for vector store {i}\n\n\n")

        # Similarity search
        results = vs.similarity_search("How are LOBS stored in Oracle Database", 2)
        print(f"\n\n\nSimilarity search results for vector store {i}: {results}\n\n\n")


vector_store_list = [
    vector_store_dot,
    vector_store_max,
    vector_store_euclidean,
    vector_store_dot_ivf,
    vector_store_max_ivf,
    vector_store_euclidean_ivf,
]
manage_texts(vector_store_list)

Demonstrating index creation with specific parameters for each distance strategy

def create_search_indices(connection):
    """
    Creates search indices for the vector stores, each with specific parameters tailored to their distance strategy.
    """
    # Index for DOT_PRODUCT strategy
    # Notice we are creating a HNSW index with default parameters
    # This will default to creating a HNSW index with 8 Parallel Workers and use the Default Accuracy used by Oracle AI Vector Search
    oraclevs.create_index(
        connection,
        vector_store_dot,
        params={"idx_name": "hnsw_idx1", "idx_type": "HNSW"},
    )

    # Index for COSINE strategy with specific parameters
    # Notice we are creating a HNSW index with parallel 16 and Target Accuracy Specification as 97 percent
    oraclevs.create_index(
        connection,
        vector_store_max,
        params={
            "idx_name": "hnsw_idx2",
            "idx_type": "HNSW",
            "accuracy": 97,
            "parallel": 16,
        },
    )

    # Index for EUCLIDEAN_DISTANCE strategy with specific parameters
    # Notice we are creating a HNSW index by specifying Power User Parameters which are neighbors = 64 and efConstruction = 100
    oraclevs.create_index(
        connection,
        vector_store_euclidean,
        params={
            "idx_name": "hnsw_idx3",
            "idx_type": "HNSW",
            "neighbors": 64,
            "efConstruction": 100,
        },
    )

    # Index for DOT_PRODUCT strategy with specific parameters
    # Notice we are creating an IVF index with default parameters
    # This will default to creating an IVF index with 8 Parallel Workers and use the Default Accuracy used by Oracle AI Vector Search
    oraclevs.create_index(
        connection,
        vector_store_dot_ivf,
        params={
            "idx_name": "ivf_idx1",
            "idx_type": "IVF",
        },
    )

    # Index for COSINE strategy with specific parameters
    # Notice we are creating an IVF index with parallel 32 and Target Accuracy Specification as 90 percent
    oraclevs.create_index(
        connection,
        vector_store_max_ivf,
        params={
            "idx_name": "ivf_idx2",
            "idx_type": "IVF",
            "accuracy": 90,
            "parallel": 32,
        },
    )

    # Index for EUCLIDEAN_DISTANCE strategy with specific parameters
    # Notice we are creating an IVF index by specifying Power User Parameters which is neighbor_part = 64
    oraclevs.create_index(
        connection,
        vector_store_euclidean_ivf,
        params={"idx_name": "ivf_idx3", "idx_type": "IVF", "neighbor_part": 64},
    )

    print("Index creation complete.")


create_search_indices(connection)

Demonstrate advanced searches on all six vector stores, with and without attribute filtering. With filtering, we only select the document with id 101 and nothing else.

# Conduct advanced searches after creating the indices
def conduct_advanced_searches(vector_stores):
    query = "How are LOBS stored in Oracle Database"
    # Constructing a filter for direct comparison against document metadata
    # This filter aims to include only documents whose metadata 'id' is exactly '101'
    filter_criteria = {"id": ["101"]}  # Direct comparison filter

    for i, vs in enumerate(vector_stores, start=1):
        print(f"\n--- Vector Store {i} Advanced Searches ---")
        # Similarity search without a filter
        print("\nSimilarity search results without filter:")
        print(vs.similarity_search(query, 2))

        # Similarity search with a filter
        print("\nSimilarity search results with filter:")
        print(vs.similarity_search(query, 2, filter=filter_criteria))

        # Similarity search with relevance score
        print("\nSimilarity search with relevance score:")
        print(vs.similarity_search_with_score(query, 2))

        # Similarity search with relevance score with filter
        print("\nSimilarity search with relevance score with filter:")
        print(vs.similarity_search_with_score(query, 2, filter=filter_criteria))

        # Max marginal relevance search
        print("\nMax marginal relevance search results:")
        print(vs.max_marginal_relevance_search(query, 2, fetch_k=20, lambda_mult=0.5))

        # Max marginal relevance search with filter
        print("\nMax marginal relevance search results with filter:")
        print(
            vs.max_marginal_relevance_search(
                query, 2, fetch_k=20, lambda_mult=0.5, filter=filter_criteria
            )
        )


conduct_advanced_searches(vector_store_list)
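Finally, each of these stores can be used like any other LangChain vector store, for example wrapped as a retriever feeding a RAG chain. A minimal sketch follows (as_retriever and invoke are the standard LangChain vector store and retriever APIs; the choice of vector_store_max and k=2 is arbitrary):

# Wrap one of the vector stores as a retriever and fetch the top matches (sketch).
retriever = vector_store_max.as_retriever(search_kwargs={"k": 2})
retrieved_docs = retriever.invoke("How are LOBS stored in Oracle Database")
for doc in retrieved_docs:
    print(doc.metadata["id"], doc.page_content[:80])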

End-to-End Demo

Please refer to our complete demo guide, the Oracle AI Vector Search End-to-End Demo Guide, to build an end-to-end RAG pipeline with the help of Oracle AI Vector Search.

