ChatTogether

This page will help you get started with Together AI chat models. For detailed documentation of all ChatTogether features and configurations, head to the API reference.

Together AI offers an API to query 50+ leading open-source models.

Overview

Integration details

Class | Package | Local | Serializable | JS support | Package downloads | Package latest
ChatTogether | langchain-together | ❌ | beta | ✅ | PyPI - Downloads | PyPI - Version

Model features

Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs

Setup

To access Together models you'll need to create a Together account, get an API key, and install the langchain-together integration package.

Credentials

Head to Together's sign-up page to create an account and generate an API key. Once you've done this, set the TOGETHER_API_KEY environment variable:

import getpass
import os

os.environ["TOGETHER_API_KEY"] = getpass.getpass("Enter your Together API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

The LangChain Together integration lives in the langchain-together package:

%pip install -qU langchain-together


Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_together import ChatTogether

llm = ChatTogether(
    model="meta-llama/Llama-3-70b-chat-hf",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)

API Reference: ChatTogether

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 35, 'total_tokens': 44}, 'model_name': 'meta-llama/Llama-3-70b-chat-hf', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-79efa49b-dbaf-4ef8-9dce-958533823ef6-0', usage_metadata={'input_tokens': 35, 'output_tokens': 9, 'total_tokens': 44})
print(ai_msg.content)
J'adore la programmation.
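
The model features table above also lists token-level streaming and native async support. As a minimal sketch, assuming the standard stream and ainvoke methods that ChatTogether inherits from LangChain's chat model interface:

# Stream tokens as they are generated instead of waiting for the full reply
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)

# From async code, the native async variant can be awaited instead:
# ai_msg = await llm.ainvoke(messages)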

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)

API Reference: ChatPromptTemplate
AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 30, 'total_tokens': 37}, 'model_name': 'meta-llama/Llama-3-70b-chat-hf', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-80bba5fa-1723-4242-8d5a-c09b76b8350b-0', usage_metadata={'input_tokens': 30, 'output_tokens': 7, 'total_tokens': 37})
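
The features table also lists structured output. As a minimal sketch, assuming the standard with_structured_output method from LangChain's chat model interface (the Translation schema here is purely illustrative):

from pydantic import BaseModel, Field


class Translation(BaseModel):
    """Structured translation result."""

    translated_text: str = Field(description="The translated sentence")
    target_language: str = Field(description="The language translated into")


# Wrap the model so responses are parsed into the Translation schema
structured_llm = llm.with_structured_output(Translation)
structured_llm.invoke("Translate 'I love programming.' into German.")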

API reference

For detailed documentation of all ChatTogether features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_together.chat_models.ChatTogether.html

