Quickstart Guide#

This tutorial walks you through building an end-to-end language model application with LangChain.

Installation#

To get started, install LangChain with the following command:

pip install langchain

Environment Setup#

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.

For this example, we will be using OpenAI’s APIs, so we will first need to install their SDK:

pip install openai

We will then need to set the OPENAI_API_KEY environment variable in the terminal.

export OPENAI_API_KEY="..."

Alternatively, you could do this from inside the Jupyter notebook (or Python script):

import os
os.environ["OPENAI_API_KEY"] = "..."

Building a Language Model Application: LLMs#

Now that we have installed LangChain and set up our environment, we can start building our language model application.

LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
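For example, the most basic building block is calling an LLM on a string, and a prompt template can be combined with an LLM into a simple chain. The sketch below assumes the OpenAI wrapper from langchain.llms together with PromptTemplate and LLMChain; exact import paths and class names may differ between LangChain versions.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Use the OpenAI wrapper as a plain "text in, text out" LLM.
llm = OpenAI(temperature=0.9)
print(llm("What would be a good company name for a company that makes colorful socks?"))

# Combine modules: a prompt template plus an LLM forms a simple chain.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))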

Building a Language Model Application: Chat Models#

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
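For example, getting a completion from a chat model by passing in a list of chat messages might look like the following sketch, which assumes the ChatOpenAI wrapper from langchain.chat_models and the HumanMessage class from langchain.schema; exact import paths may vary between LangChain versions.

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Chat models take chat messages as input and return a chat message.
chat = ChatOpenAI(temperature=0)
response = chat([
    HumanMessage(content="Translate this sentence from English to French: I love programming.")
])
print(response.content)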