
Predibase

Predibase allows you to train, fine-tune, and deploy any ML model, from linear regression to large language models.

This example demonstrates using LangChain with models deployed on Predibase.

Setup

To run this notebook, you'll need a Predibase account and an API key.

You'll also need to install the Predibase Python package:

%pip install --upgrade --quiet predibase
import os

os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}"
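If the token is missing, the failure surfaces only when the first request is made. A small sketch of a fail-fast check (`get_predibase_token` is a hypothetical helper, not part of the Predibase SDK):

```python
import os


def get_predibase_token() -> str:
    # Hypothetical helper: raise a clear error up front instead of letting
    # a missing token fail later inside an API call.
    token = os.environ.get("PREDIBASE_API_TOKEN")
    if not token:
        raise EnvironmentError(
            "Set PREDIBASE_API_TOKEN before constructing the Predibase LLM."
        )
    return token
```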

Initial Call

from langchain_community.llms import Predibase

model = Predibase(
    model="mistral-7b",
    predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"),
)
from langchain_community.llms import Predibase

# With a fine-tuned adapter hosted at Predibase (adapter_version can be specified; omitting it is equivalent to the most recent version).
model = Predibase(
    model="mistral-7b",
    adapter_id="e2e_nlg",
    adapter_version=1,
    predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"),
)
from langchain_community.llms import Predibase

# With a fine-tuned adapter hosted at HuggingFace (adapter_version does not apply and will be ignored).
model = Predibase(
    model="mistral-7b",
    adapter_id="predibase/e2e_nlg",
    predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"),
)
response = model.invoke("Can you recommend me a nice dry wine?")
print(response)

Chain Call Setup

from langchain_community.llms import Predibase

model = Predibase(
    model="mistral-7b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN")
)
# With a fine-tuned adapter hosted at Predibase (adapter_version can be specified; omitting it is equivalent to the most recent version).
model = Predibase(
    model="mistral-7b",
    adapter_id="e2e_nlg",
    adapter_version=1,
    predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"),
)
# With a fine-tuned adapter hosted at HuggingFace (adapter_version does not apply and will be ignored).
llm = Predibase(
    model="mistral-7b",
    adapter_id="predibase/e2e_nlg",
    predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"),
)

SequentialChain

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
# This is an LLMChain to write a synopsis given a title of a play.
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is an LLMChain to write a review of a play given a synopsis.
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(
    chains=[synopsis_chain, review_chain], verbose=True
)
review = overall_chain.run("Tragedy at sunset on the beach")

Fine-tuned LLM (Use your own fine-tuned LLM from Predibase)

from langchain_community.llms import Predibase

model = Predibase(
    model="my-base-LLM",
    # The adapter argument is optional; it supports both Predibase-hosted
    # and HuggingFace-hosted model repositories.
    adapter_id="my-finetuned-adapter-id",
    # adapter_version=1,  # optional (defaults to the latest version if omitted)
    predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"),
)
# Replace my-base-LLM and my-finetuned-adapter-id with the names of your base model and adapter in Predibase.
# response = model.invoke("Can you help categorize the following emails into positive, negative, and neutral?")
