
OpenLLM

This page demonstrates how to use OpenLLM with LangChain.

OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.

Installation and Setup

Install the OpenLLM package via PyPI:

pip install openllm

LLM

OpenLLM supports a wide range of open-source LLMs as well as serving users' own fine-tuned LLMs. Use the openllm model command to see all available models that are pre-optimized for OpenLLM.
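
For example, running it from a shell prints the models supported by your installation:

openllm model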

Wrappers

There is an OpenLLM wrapper, which supports loading an LLM in-process or connecting to a remote OpenLLM server:

from langchain_community.llms import OpenLLM

Wrapper for OpenLLM server

This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The OpenLLM server can run either locally or on the cloud.
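
If your server runs remotely, point the wrapper at its address instead of localhost. A minimal sketch, where the host name is a placeholder for your own deployment:

from langchain_community.llms import OpenLLM

# Hypothetical endpoint; substitute the address of your deployed OpenLLM server.
llm = OpenLLM(server_url="https://openllm.example.com:3000")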

To try it out locally, start an OpenLLM server:

openllm start flan-t5

Wrapper usage:

from langchain_community.llms import OpenLLM

llm = OpenLLM(server_url="http://localhost:3000")

llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
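
Once connected, the wrapper behaves like any other LangChain LLM, so it composes with prompts and chains. A minimal sketch reusing the llm instance above (the template and input are illustrative):

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Illustrative prompt template; any template works here.
template = "What is a good name for a company that makes {product}?"
prompt = PromptTemplate(template=template, input_variables=["product"])

chain = LLMChain(prompt=prompt, llm=llm)
print(chain.run(product="mechanical keyboards"))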

Wrapper for Local Inference

You can also use the OpenLLM wrapper to load an LLM into the current Python process and run inference locally.

from langchain_community.llms import OpenLLM

llm = OpenLLM(model_name="dolly-v2", model_id="databricks/dolly-v2-7b")

llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
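
Because the model is loaded in-process, you can also submit several prompts at once with generate, which every LangChain LLM provides. A minimal sketch reusing the llm instance above (the prompts are illustrative):

result = llm.generate([
    "What is the difference between a duck and a goose?",
    "Why are there so many geese in Canada?",
])

# generate returns an LLMResult; each prompt gets its own list of generations.
for generations in result.generations:
    print(generations[0].text)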

Usage

For a more detailed walkthrough of the OpenLLM wrapper, see the example notebook.

