Learn how to use LangChain with models deployed on Baseten.

Installation and setup

  • Create a Baseten account and API key.
  • Install the Baseten Python client with `pip install baseten`.
  • Use your API key to authenticate with `baseten login`.
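
Taken together, the setup steps above look like this in a shell (a minimal sketch, assuming `pip` is on your path; `baseten login` will prompt for the API key from your Baseten account settings):

```shell
# Install the Baseten Python client
pip install baseten

# Authenticate; paste your API key when prompted
baseten login
```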

Invoking a model

Baseten integrates with LangChain through the LLM module, which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.

You can deploy foundation models like WizardLM and Alpaca with one click from the Baseten model library, or, if you have your own model, deploy it with this tutorial.

In this example, we'll work with WizardLM. Deploy WizardLM here and follow along with the deployed model's version ID.

from langchain.llms import Baseten

# Replace MODEL_VERSION_ID with your deployed model's version ID
wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)

# Call the model directly with a prompt string
wizardlm("What is the difference between a Wizard and a Sorcerer?")
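
Because Baseten implements LangChain's standard LLM interface, the model can also be composed into higher-level constructs such as LLMChain. A minimal sketch, assuming the same deployed WizardLM model; the prompt template and its input variable name here are illustrative, not part of the Baseten integration:

```python
from langchain.chains import LLMChain
from langchain.llms import Baseten
from langchain.prompts import PromptTemplate

# Assumes MODEL_VERSION_ID is the version ID of a model deployed in your
# Baseten workspace, and that you have already run `baseten login`.
wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)

# An illustrative prompt template; {character_class} is filled at run time.
prompt = PromptTemplate(
    input_variables=["character_class"],
    template="Explain the strengths of a {character_class} in one paragraph.",
)

# Chain the prompt and the Baseten-hosted model together
chain = LLMChain(llm=wizardlm, prompt=prompt)
print(chain.run(character_class="Sorcerer"))
```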
