Infinispan
Infinispan is an open-source key-value data grid that can work both as a single node and distributed.
Vector search is supported since release 15.x. For more information, see Infinispan Home.
# Ensure that everything we need is installed
# You may want to skip this
%pip install sentence-transformers
%pip install langchain
%pip install langchain_core
%pip install langchain_community
Setup
To run this demo we need a running Infinispan instance without authentication and a data file. In the next three cells we're going to:
- download the data file
- create the configuration
- run Infinispan in Docker
%%bash
#get an archive of news
wget https://raw.githubusercontent.com/rigazilla/infinispan-vector/main/bbc_news.csv.gz
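If you want a quick look at the downloaded archive before moving on, the snippet below is an optional sanity check (not part of the original demo) that prints the first few rows of the CSV without unpacking it on disk.
# Optional: peek at the first rows of the downloaded archive
import csv
import gzip

with gzip.open("bbc_news.csv.gz", mode="rt", newline="") as f:
    reader = csv.reader(f)
    for i, row in enumerate(reader):
        print(row)
        if i >= 2:
            break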
%%bash
#create infinispan configuration file
echo 'infinispan:
  cache-container:
    name: default
    transport:
      cluster: cluster
      stack: tcp
  server:
    interfaces:
      interface:
        name: public
        inet-address:
          value: 0.0.0.0
    socket-bindings:
      default-interface: public
      port-offset: 0
      socket-binding:
        name: default
        port: 11222
    endpoints:
      endpoint:
        socket-binding: default
        rest-connector:
' > infinispan-noauth.yaml
!docker rm --force infinispanvs-demo
!docker run -d --name infinispanvs-demo -v $(pwd):/user-config -p 11222:11222 infinispan/server:15.0 -c /user-config/infinispan-noauth.yaml
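Before continuing, you can optionally verify that the server is reachable. The sketch below assumes the standard Infinispan REST API is exposed on localhost:11222 (as configured above) and polls the health status endpoint of the `default` cache manager until the container is ready; adjust the URL if your setup differs.
# Optional: wait for the Infinispan REST endpoint to become available
import time

import requests

health_url = "http://localhost:11222/rest/v2/cache-managers/default/health/status"
for _ in range(30):
    try:
        response = requests.get(health_url, timeout=2)
        if response.ok:
            print("Infinispan is up:", response.text)
            break
    except requests.exceptions.ConnectionError:
        pass
    time.sleep(2)
else:
    print("Infinispan did not become ready in time")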
The Code
Pick up an embedding model
In this demo we're using a HuggingFace embedding model.
from langchain_core.embeddings import Embeddings
from langchain_huggingface import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-MiniLM-L12-v2"
hf = HuggingFaceEmbeddings(model_name=model_name)
API Reference: Embeddings | HuggingFaceEmbeddings
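As a quick check that the model loaded correctly, you can embed a sample sentence; all-MiniLM-L12-v2 produces 384-dimensional vectors (the sentence below is just an illustrative example).
# Quick check: embed a sample sentence and inspect the vector size
sample_vector = hf.embed_query("Recent advances in renewable energy")
print(len(sample_vector))  # 384 dimensions for all-MiniLM-L12-v2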
Setup Infinispan cache
Infinispan is a very flexible key-value store: it can store raw bits as well as complex data types. The user has complete freedom in the data grid configuration, but for simple data types everything is configured automatically by the Python layer. We take advantage of this feature so we can focus on our application.
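As a preview of where the next steps lead, here is a minimal sketch of that automatic setup, assuming the `InfinispanVS` class from `langchain_community` with its default connection settings (localhost:11222) and a couple of made-up sample texts; the rest of this guide builds the real store over the news dataset.
# Minimal sketch: let the Python layer configure the cache automatically
from langchain_community.vectorstores import InfinispanVS

sample_texts = [
    "Infinispan is a distributed key-value data grid.",
    "Vector search lets you query documents by semantic similarity.",
]

ispnvs = InfinispanVS.from_texts(texts=sample_texts, embedding=hf)
print(ispnvs.similarity_search("What is Infinispan?", k=1))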