Getting Started#

In this tutorial, we will learn about creating simple chains in LangChain: how to create a chain, add components to it, and run it.

We will cover:

  • Using a simple LLM chain

  • Creating sequential chains

  • Creating a custom chain

Why do we need chains?#

Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

Quick start: Using LLMChain#

The LLMChain is a simple chain that takes in a prompt template, formats it with the user input, and returns the response from an LLM.

To use the LLMChain, first create a prompt template.

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.

from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
Colorful Toes Co.

If there are multiple variables, you can input them all at once using a dictionary.

prompt = PromptTemplate(
    input_variables=["company", "product"],
    template="What is a good name for {company} that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
    'company': "ABC Startup",
    'product': "colorful socks",
}))
Socktopia Colourful Creations.

You can use a chat model in an LLMChain as well:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
Rainbow Socks Co.

Different ways of calling chains#

All classes that inherit from Chain offer a few ways of running chain logic. The most direct one is to use __call__:

chat = ChatOpenAI(temperature=0)
prompt_template = "Tell me a {adjective} joke"
llm_chain = LLMChain(
    llm=chat,
    prompt=PromptTemplate.from_template(prompt_template)
)

llm_chain(inputs={"adjective":"corny"})
{'adjective': 'corny',
 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}

By default, __call__ returns both the input and output key values. You can configure it to only return output key values by setting return_only_outputs to True.

llm_chain("corny", return_only_outputs=True)
{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}

If the Chain only outputs one output key (i.e. only has one element in its output_keys), you can use the run method. Note that run outputs a string instead of a dictionary.

# llm_chain only has one output key, so we can use run
llm_chain.output_keys
['text']
llm_chain.run({"adjective":"corny"})
'Why did the tomato turn red? Because it saw the salad dressing!'

In the case of one input key, you can input the string directly without specifying the input mapping.

# These two are equivalent
llm_chain.run({"adjective":"corny"})
llm_chain.run("corny")

# These two are also equivalent
llm_chain("corny")
llm_chain({"adjective":"corny"})
{'adjective': 'corny',
 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}

Tip: You can easily integrate a Chain object as a Tool in your Agent via its run method, as shown in the sketch below.
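
This sketch assumes the llm_chain from above; the tool name and description are illustrative, not from this tutorial:

from langchain.agents import Tool

# Hypothetical example: expose llm_chain as an agent tool via its run method.
joke_tool = Tool(
    name="JokeTeller",  # illustrative name
    func=llm_chain.run,
    description="Tells a joke in the requested style. Input should be an adjective.",
)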

Add memory to chains#

Chain supports taking a BaseMemory object as its memory argument, allowing a Chain object to persist data across multiple calls. In other words, it makes the Chain a stateful object.

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=chat,
    memory=ConversationBufferMemory()
)

conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")
# -> 'The first three colors of a rainbow are red, orange, and yellow.'
conversation.run("And the next 4?")
# -> 'The next four colors of a rainbow are green, blue, indigo, and violet.'

Essentially, BaseMemory defines the interface for how LangChain stores memory. It allows reading stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section.
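
For instance, here is a minimal sketch of those two methods on a ConversationBufferMemory (the example exchange is illustrative):

memory = ConversationBufferMemory()

# Store a new human/AI exchange.
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})

# Read back the stored history as a memory variable.
print(memory.load_memory_variables({}))
# -> {'history': 'Human: Hi there\nAI: Hello! How can I help?'}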

Debug Chain#

It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting verbose to True will print out some internal states of the Chain object while it is being run.

conversation = ConversationChain(
    llm=chat,
    memory=ConversationBufferMemory(),
    verbose=True
)
conversation.run("What is ChatGPT?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: What is ChatGPT?
AI:

> Finished chain.
'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'

Combine chains with the SequentialChain#

The next step after a single call to a language model is to make a series of calls. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type of sequential chain, where each step has a single input/output, and the output of one step is the input to the next.

In this tutorial, our sequential chain will:

  1. First, create a company name for a product. We will reuse the LLMChain we’d previously initialized to create this company name.

  2. Then, create a catchphrase for the product. We will initialize a new LLMChain to create this catchphrase, as shown below.

second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a catchphrase for the following company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)

Now we can combine the two LLMChains, so that we can create a company name and a catchphrase in a single step.

from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True)

# Run the chain specifying only the input variable for the first chain.
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
Rainbow Socks Co.


"Put a little rainbow in your step!"

> Finished chain.


"Put a little rainbow in your step!"

Create a custom chain with the Chain class#

LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 LLMChains.

In order to create a custom chain:

  1. Start by subclassing the Chain class,

  2. Fill out the input_keys and output_keys properties,

  3. Add the _call method that defines how to execute the chain.

These steps are demonstrated in the example below:

from langchain.chains import LLMChain
from langchain.chains.base import Chain

from typing import Dict, List


class ConcatenateChain(Chain):
    chain_1: LLMChain
    chain_2: LLMChain

    @property
    def input_keys(self) -> List[str]:
        # Union of the input keys of the two chains.
        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))
        return list(all_input_vars)

    @property
    def output_keys(self) -> List[str]:
        return ['concat_output']

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        output_1 = self.chain_1.run(inputs)
        output_2 = self.chain_2.run(inputs)
        return {'concat_output': output_1 + output_2}

Now, we can try running the chain that we defined.

prompt_1 = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain_1 = LLMChain(llm=llm, prompt=prompt_1)

prompt_2 = PromptTemplate(
    input_variables=["product"],
    template="What is a good slogan for a company that makes {product}?",
)
chain_2 = LLMChain(llm=llm, prompt=prompt_2)

concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)
concat_output = concat_chain.run("colorful socks")
print(f"Concatenated output:\n{concat_output}")
Concatenated output:


Funky Footwear Company

"Brighten Up Your Day with Our Colorful Socks!"

That’s it! For more details about how to do cool things with Chains, check out the how-to guide for chains.