rag-chroma-multi-modal

Multi-modal LLMs enable visual assistants that can perform question-answering about images.

This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.

It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.

Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.

Input

Supply a slide deck as a PDF in the /docs directory.

By default, this template includes a slide deck about Q3 earnings from Datadog, a public technology company.

Example questions you can ask include:

How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?

To create an index of the slide deck, run:

poetry install
python ingest.py
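
Conceptually, ingestion converts each slide of the PDF into an image and adds those images to the Chroma collection. The following is a hedged sketch of that flow, not the template's exact code: it assumes pypdfium2 for rendering, and the file name and output directory are illustrative.

# A minimal sketch of PDF-to-image ingestion, assuming pypdfium2 is installed.
# The deck file name and slide directory below are illustrative.
from pathlib import Path

import pypdfium2 as pdfium

pdf_path = Path("docs/datadog_q3_earnings.pdf")  # hypothetical file name
img_dir = Path("docs/slides")
img_dir.mkdir(exist_ok=True)

pdf = pdfium.PdfDocument(str(pdf_path))
image_paths = []
for i in range(len(pdf)):
    # Render each slide at 2x scale and save it as a JPEG.
    image = pdf[i].render(scale=2).to_pil()
    out = img_dir / f"slide_{i}.jpg"
    image.save(out)
    image_paths.append(str(out))

# The saved images can then be embedded and stored, e.g. via
# vectorstore_mmembd.add_images(uris=image_paths)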

Storage

This template will use OpenCLIP multi-modal embeddings to embed the images.

You can select from a range of OpenCLIP embedding models (see the results published in the OpenCLIP repository for a comparison).

The first time you run the app, it will automatically download the multi-modal embedding model.

By default, LangChain will use ViT-H-14, an embedding model with moderate performance but lower memory requirements.

You can choose alternative OpenCLIPEmbeddings models in rag_chroma_multi_modal/ingest.py:

from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings

# re_vectorstore_path is defined earlier in ingest.py
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
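
If memory is a concern, a smaller model/checkpoint pair can be substituted. ViT-B-32 with the laion2b_s34b_b79k checkpoint is one published OpenCLIP pairing; verify any pairing against the OpenCLIP pretrained model list before use.

# One smaller alternative (a published OpenCLIP pairing; verify it against
# the OpenCLIP pretrained model list before use):
embedding_function = OpenCLIPEmbeddings(
    model_name="ViT-B-32", checkpoint="laion2b_s34b_b79k"
)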

LLM

The app retrieves images based on the similarity between the text input and the images, both of which are mapped into the same multi-modal embedding space. It then passes the retrieved images to GPT-4V for answer synthesis.
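
Conceptually, the chain does something like the following. This is a hedged sketch reusing vectorstore_mmembd from the snippet above, not the template's exact code; the assumption that retrieved documents carry base64-encoded images matches how Chroma's add_images stores them.

# A hedged sketch of the retrieval + GPT-4V step; the actual chain in
# rag_chroma_multi_modal/chain.py differs in detail.
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import HumanMessage

question = "How many customers does Datadog have?"

# The query text is embedded with the same OpenCLIP model as the images,
# so text-to-image similarity search works directly.
retriever = vectorstore_mmembd.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents(question)

# Assumption: each retrieved document's page_content is a base64-encoded image.
images = [doc.page_content for doc in docs]

llm = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=1024)
message = HumanMessage(
    content=[
        {"type": "text", "text": question},
        *[
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{img}"}}
            for img in images
        ],
    ]
)
answer = llm.invoke([message])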

Environment Setup

Set the OPENAI_API_KEY environment variable to access GPT-4V.
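
For example:

export OPENAI_API_KEY=<your-openai-api-key>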

Usage

To use this package, you should first have the LangChain CLI installed:

pip install -U langchain-cli

To create a new LangChain project and install this as the only package, you can do:

langchain app new my-app --package rag-chroma-multi-modal

If you want to add this to an existing project, you can just run:

langchain app add rag-chroma-multi-modal

And add the following code to your server.py file:

from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain

add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
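
For reference, a minimal complete server.py might look like this. The FastAPI scaffolding below is the standard LangServe pattern; langchain app new generates an equivalent file for you.

# A minimal server.py sketch; `langchain app new` generates an equivalent file.
from fastapi import FastAPI
from langserve import add_routes

from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain

app = FastAPI()

add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)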

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. You can sign up for LangSmith at https://smith.langchain.com/. If you don't have access, you can skip this section.

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"

If you are inside this directory, then you can spin up a LangServe instance directly by running:

langchain serve

This will start the FastAPI app with a server running locally at http://localhost:8000

We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/rag-chroma-multi-modal/playground

We can access the template from code with:

from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
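
From there, you can invoke the chain with a question. The input shape below assumes the chain accepts a plain question string, matching the example questions above.

# Assumption: the chain accepts a plain question string as input.
answer = runnable.invoke("How many customers does Datadog have?")
print(answer)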
