
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.

Installation and Setup

  • Install the Python package with pip install gpt4all
  • Download a GPT4All model and place it in your desired directory

In this example, we use mistral-7b-openorca.Q4_0.gguf (best overall fast chat model):

mkdir models
wget -O models/mistral-7b-openorca.Q4_0.gguf https://gpt4all.io/models/gguf/mistral-7b-openorca.Q4_0.gguf



To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model's configuration.

from langchain_community.llms import GPT4All

# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8)

# Generate text
response = model.invoke("Once upon a time, ")
print(response)

API Reference: GPT4All

You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.
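For example, here is a minimal sketch that sets the parameters named above at construction time (the values shown are illustrative, not recommendations):

# Illustrative values only; tune them for your model and task
model = GPT4All(
    model="./models/mistral-7b-openorca.Q4_0.gguf",
    n_threads=8,
    n_predict=256,  # maximum number of tokens to generate
    temp=0.7,       # sampling temperature
    top_p=0.9,      # nucleus-sampling cutoff
    top_k=40,       # sample from the k most likely tokens
)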

To stream the model's predictions, pass in a streaming callback handler.

from langchain_community.llms import GPT4All
from langchain_core.callbacks import StreamingStdOutCallbackHandler

# There are many CallbackHandlers supported, such as
# from langchain_community.callbacks.streamlit import StreamlitCallbackHandler

callbacks = [StreamingStdOutCallbackHandler()]
model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8)

# Generate text. Tokens are streamed to stdout through the callback handler.
model.invoke("Once upon a time, ", config={"callbacks": callbacks})

Model File

You can find links to model file downloads on the GPT4All website (https://gpt4all.io).

For a more detailed walkthrough of this wrapper, see the GPT4All example notebook.
