This page covers how to use the DeepSparse inference runtime within LangChain. It is broken into two parts: installation and setup, followed by examples of DeepSparse usage.

Installation and Setup
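
A minimal setup sketch, assuming the standard PyPI package names (deepsparse for the runtime, langchain-community for the wrapper):

pip install deepsparse langchain-community

You will also need a model for the runtime to execute: either a SparseZoo model stub or a local ONNX model, as used in the examples below.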

There exists a DeepSparse LLM wrapper, which provides a unified interface for all models:

from langchain_community.llms import DeepSparse

# The model argument accepts a SparseZoo stub or a path to a local ONNX model;
# this CodeGen stub is one example.
llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none"
)

print(llm.invoke("def fib():"))

API Reference: DeepSparse
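
The wrapper can also stream tokens as they are generated. A minimal sketch, assuming the streaming constructor flag and the standard LangChain .stream() method (confirm both against the DeepSparse API reference):

llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none",
    streaming=True,  # assumed flag; enables incremental token output
)

# Print tokens as they arrive instead of waiting for the full completion
for chunk in llm.stream("def fib():"):
    print(chunk, end="", flush=True)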

Additional parameters can be passed using the config parameter:

config = {"max_generated_tokens": 256}

llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none",
    config=config,
)
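
The configured model composes with the rest of LangChain like any other LLM. A minimal sketch using a prompt template (the prompt text is illustrative):

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Write a Python function that {task}:")
chain = prompt | llm  # pipe the rendered prompt into DeepSparse

print(chain.invoke({"task": "computes the nth Fibonacci number"}))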
