Getting Started#

In this tutorial, we will learn about creating simple chains in LangChain: how to create a chain, add components to it, and run it.

We will cover:

  • Using a simple LLM chain

  • Creating sequential chains

  • Creating a custom chain

Why do we need chains?#

Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

Query an LLM with the LLMChain#

The LLMChain is a simple chain that takes in a prompt template, formats it with the user input and returns the response from an LLM.

To use the LLMChain, first create a prompt template.

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
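
If you want to see exactly what will be sent to the model, you can format the template directly; this is just a quick check and not part of the chain itself:

print(prompt.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?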

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.

from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
Rainbow Socks Co.
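
Depending on your LangChain version, there are a couple of other ways to invoke the chain; for example, you can pass the input as a keyword argument, or call the chain directly with a dict to get back a mapping that includes the output (the exact keys may vary between versions):

# Pass the input as a keyword argument instead of positionally.
print(chain.run(product="colorful socks"))
# Or call the chain directly with a dict; this returns a dict that includes the output text.
print(chain({"product": "colorful socks"}))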

You can use a chat model in an LLMChain as well:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
Rainbow Threads
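
Chat prompts can also combine several message templates. As a small sketch (the system message wording below is just an illustrative assumption), you could add a SystemMessagePromptTemplate in front of the human message template defined above:

from langchain.prompts.chat import SystemMessagePromptTemplate

# Hypothetical system instruction; adjust it for your own use case.
system_message_prompt = SystemMessagePromptTemplate.from_template(
    "You are a naming consultant for new companies."
)
chat_prompt_with_system = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)
chat_chain = LLMChain(llm=chat, prompt=chat_prompt_with_system)
print(chat_chain.run("colorful socks"))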

This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.

Combine chains with the SequentialChain#

The next step after calling a language model is to make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type of sequential chain, where each step has a single input/output, and the output of one step is the input to the next.

In this tutorial, our sequential chain will:

  1. First, create a company name for a product. We will reuse the LLMChain we’d previously initialized to create this company name.

  2. Then, create a catchphrase for the product. We will initialize a new LLMChain to create this catchphrase, as shown below.

second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a catchphrase for the following company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)

Now we can combine the two LLMChains, so that we can create a company name and a catchphrase in a single step.

from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True)

# Run the chain specifying only the input variable for the first chain.
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...


Cheerful Toes.


"Spread smiles from your toes!"

> Finished SimpleSequentialChain chain.


"Spread smiles from your toes!"

Create a custom chain with the Chain class#

LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of two LLMChains.

In order to create a custom chain:

  1. Start by subclassing the Chain class.

  2. Fill out the input_keys and output_keys properties.

  3. Implement the _call method, which defines how the chain is executed.

These steps are demonstrated in the example below:

from langchain.chains import LLMChain
from langchain.chains.base import Chain

from typing import Dict, List


class ConcatenateChain(Chain):
    chain_1: LLMChain
    chain_2: LLMChain

    @property
    def input_keys(self) -> List[str]:
        # Union of the input keys of the two chains.
        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))
        return list(all_input_vars)

    @property
    def output_keys(self) -> List[str]:
        return ['concat_output']

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        output_1 = self.chain_1.run(inputs)
        output_2 = self.chain_2.run(inputs)
        return {'concat_output': output_1 + output_2}

Now, we can try running the chain that we just defined.

prompt_1 = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain_1 = LLMChain(llm=llm, prompt=prompt_1)

prompt_2 = PromptTemplate(
    input_variables=["product"],
    template="What is a good slogan for a company that makes {product}?",
)
chain_2 = LLMChain(llm=llm, prompt=prompt_2)

concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)
concat_output = concat_chain.run("colorful socks")
print(f"Concatenated output:\n{concat_output}")
Concatenated output:


Rainbow Socks Co.

"Step Into Colorful Comfort!"

That’s it! For more details about how to do cool things with Chains, check out the how-to guide for chains.