Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.

This example goes over how to use LangChain to interact with Replicate models.


To run this notebook, you'll need to create a Replicate account and install the replicate Python client.

!pip install replicate
# get a token from your Replicate account, then set it as an environment variable

from getpass import getpass
import os

REPLICATE_API_TOKEN = getpass()
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN

from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain

Calling a model#

Find a model on the Replicate explore page, and then paste in the model name and version in this format: owner/model-name:version
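As a quick sanity check, the identifier can be split on the colon into its name and version parts (a hypothetical helper for illustration, not part of LangChain or the replicate client):

```python
# Hypothetical helper: split a Replicate model identifier of the form
# "owner/model-name:version" into its two parts.
def parse_model_id(model_id: str):
    name, sep, version = model_id.partition(":")
    if not sep:
        raise ValueError("expected 'owner/model-name:version'")
    return name, version

name, version = parse_model_id(
    "replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
)
# name is "replicate/dolly-v2-12b"; version is the long hash
```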

For example, for this Dolly model, click on the API tab. The model name and version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5

Only the model param is required, but we can add other model params when initializing.

For example, if we were running Stable Diffusion and wanted to change the image dimensions:

Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})

Note that only the first output of a model will be returned.
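To make the "first output" behavior concrete, here is a purely illustrative sketch (toy data, not a real API call): a model like Stable Diffusion may produce a list of outputs, but the wrapper keeps only the first element.

```python
# Illustrative only: suppose a model run yields several output URLs.
outputs = [
    "https://replicate.delivery/output-0.png",
    "https://replicate.delivery/output-1.png",
]

# The LangChain wrapper returns just the first one.
first_output = outputs[0]
```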

llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step. 
Can a dog drive a car?
'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'

We can call any Replicate model using this syntax. For example, we can call Stable Diffusion.

text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", 
                       input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")

The model spits out a URL. Let's render it.

from PIL import Image
import requests
from io import BytesIO

response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img


Chaining Calls#

The whole point of LangChain is to… chain! Here's an example of how to do that.

from langchain.chains import SimpleSequentialChain
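Conceptually, SimpleSequentialChain just feeds each step's output into the next step's input. Here is a minimal sketch of that idea with toy stand-in functions (illustrative only, not the library's implementation):

```python
# Minimal sketch: pipe each step's output into the next step.
def run_sequential(steps, text):
    for step in steps:
        text = step(text)
    return text

# Toy functions standing in for LLMChain calls:
steps = [
    lambda product: f"name for a {product} company",
    lambda company: f"logo description for: {company}",
    lambda desc: f"https://example.com/image-for/{desc.replace(' ', '-')}",
]
result = run_sequential(steps, "colorful socks")
# result is a made-up URL built from the chained text
```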

First, let's define the LLM as a Dolly model, and text2image as a Stable Diffusion model.

dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")

First prompt in the chain

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=dolly_llm, prompt=prompt)

Second prompt: get a logo description from the company name

second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}",
)

chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)

Third prompt: let's create the image based on the description output from prompt 2

third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)

chain_three = LLMChain(llm=text2image, prompt=third_prompt)

Now let's run it!

# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.

> Finished chain.
response = requests.get("")
img =