ChatOctoAI

OctoAI offers easy access to efficient compute and enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications easily.

This notebook demonstrates the use of langchain_community.chat_models.ChatOctoAI for OctoAI endpoints.

Setup

To run our example app, there are two simple steps to take:

  1. Get an API Token from your OctoAI account page.
  2. Paste your API token in the code cell below or use the octoai_api_token keyword argument.

Note: If you want to use a model that is not among the available models, you can containerize it and create a custom OctoAI endpoint yourself by following Build a Container from Python and Create a Custom Endpoint from a Container, and then updating your OCTOAI_API_BASE environment variable, as sketched below.
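
For example, pointing the integration at a custom endpoint could look like the following sketch (the URL below is only a placeholder for your own endpoint):

import os

os.environ["OCTOAI_API_BASE"] = "https://<your-custom-endpoint-url>"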

import os

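# Paste your OctoAI API token here (the value below is a placeholder)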
os.environ["OCTOAI_API_TOKEN"] = "OCTOAI_API_TOKEN"
from langchain_community.chat_models import ChatOctoAI
from langchain_core.messages import HumanMessage, SystemMessage
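
If you prefer not to set the environment variable, the octoai_api_token keyword argument mentioned above can be passed to the constructor instead; a minimal sketch (the token value is a placeholder):

chat = ChatOctoAI(
    octoai_api_token="OCTOAI_API_TOKEN",
    max_tokens=300,
    model_name="mixtral-8x7b-instruct",
)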

Example

chat = ChatOctoAI(max_tokens=300, model_name="mixtral-8x7b-instruct")
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Tell me about Leonardo da Vinci briefly."),
]
print(chat.invoke(messages).content)

Leonardo da Vinci (1452-1519) was an Italian polymath who is often considered one of the greatest painters in history. However, his genius extended far beyond art. He was also a scientist, inventor, mathematician, engineer, anatomist, geologist, and cartographer.

Da Vinci is best known for his paintings such as the Mona Lisa, The Last Supper, and The Virgin of the Rocks. His scientific studies were ahead of his time, and his notebooks contain detailed drawings and descriptions of various machines, human anatomy, and natural phenomena.

Despite never receiving a formal education, da Vinci's insatiable curiosity and observational skills made him a pioneer in many fields. His work continues to inspire and influence artists, scientists, and thinkers today.
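
Since ChatOctoAI is a standard LangChain chat model, it can also be composed with prompt templates through the Runnable interface. A minimal sketch, reusing the chat instance from the example above:

from langchain_core.prompts import ChatPromptTemplate

# Build a reusable prompt and pipe it into the chat model defined above.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "Tell me about {topic} briefly."),
    ]
)
chain = prompt | chat
print(chain.invoke({"topic": "Leonardo da Vinci"}).content)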

