This template enables users to use pgvector to combine PostgreSQL with semantic search / RAG.
If you are using `ChatOpenAI` as your LLM, make sure the `OPENAI_API_KEY` is set in your environment. You can change both the LLM and the embeddings model inside the template's `chain.py`.
You can also configure the following environment variables for use by the template (defaults are in parentheses):
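For example, to match the local Docker instance described below (the variable names here are assumptions based on standard Postgres connection settings; check the template's chain for the exact names it reads):

```shell
export POSTGRES_USER=postgres       # (postgres)
export POSTGRES_PASSWORD=test       # (test)
export POSTGRES_HOST=localhost      # (localhost)
export POSTGRES_PORT=5432           # (5432)
export POSTGRES_DB=vectordb         # (vectordb)
```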
If you don't have a Postgres instance, you can run one locally in Docker:
```shell
# Any image that ships the pgvector extension will work; pgvector/pgvector:pg16 is one option.
docker run \
    --name some-postgres \
    -e POSTGRES_PASSWORD=test \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_DB=vectordb \
    -p 5432:5432 \
    pgvector/pgvector:pg16
```
And to start it again later, use the `--name` defined above:
```shell
docker start some-postgres
```
PostgreSQL Database setup
Apart from having the pgvector extension enabled, you will need to do some setup before you can run semantic search within your SQL queries.
In order to run RAG over your PostgreSQL database, you will need to generate embeddings for the specific columns you want to search.
This process is covered in the RAG empowered SQL cookbook, but the overall approach consists of (a minimal sketch follows the list):
- Querying for the unique values in the column
- Generating embeddings for those values
- Storing the embeddings in a separate column or in an auxiliary table
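For illustration, here is a minimal sketch of that pre-processing step, assuming a hypothetical `tracks` table with a text column `name`; it uses `psycopg2` and OpenAI embeddings directly and is not part of the template, so adapt the table, column, vector size, and connection details to your own schema.

```python
import psycopg2
from langchain_openai import OpenAIEmbeddings

conn = psycopg2.connect(
    host="localhost", port=5432, user="postgres", password="test", dbname="vectordb"
)
embeddings = OpenAIEmbeddings()  # 1536-dimensional vectors by default

with conn, conn.cursor() as cur:
    # Make sure the pgvector extension and the auxiliary column exist
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute(
        "ALTER TABLE tracks ADD COLUMN IF NOT EXISTS name_embedding vector(1536)"
    )

    # 1. Query for the unique values in the column
    cur.execute("SELECT DISTINCT name FROM tracks")
    values = [row[0] for row in cur.fetchall()]

    # 2. Generate embeddings for those values
    vectors = embeddings.embed_documents(values)

    # 3. Store the embeddings in the auxiliary column
    for value, vector in zip(values, vectors):
        literal = "[" + ",".join(str(x) for x in vector) + "]"
        cur.execute(
            "UPDATE tracks SET name_embedding = %s WHERE name = %s",
            (literal, value),
        )

conn.close()
```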
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-pgvector
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-pgvector
```
And add the following code to your `server.py` file:
```python
from sql_pgvector import chain as sql_pgvector_chain

add_routes(app, sql_pgvector_chain, path="/sql-pgvector")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta; you can sign up here. If you don't have access, you can skip this section.
```shell
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```
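If you have access, also enable tracing and set your API key via the standard LangSmith environment variables:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
```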
If you are inside this directory, then you can spin up a LangServe instance directly by running:
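```shell
langchain serve
```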
This will start the FastAPI app with a server running locally at http://localhost:8000
We can access the template from code with:
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/sql-pgvector")
```
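For example, to query the chain (the "question" input key is an assumption here; check the template's chain for its exact input schema):

```python
result = runnable.invoke({"question": "Which songs have names similar to 'hope'?"})
print(result)
```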