chat_models
Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
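The message-based interface and the class hierarchy above can be illustrated with a minimal, self-contained sketch. This is not the actual LangChain implementation (the real `BaseMessage`, `HumanMessage`, and `AIMessage` live in `langchain_core` and carry more fields); `EchoChatModel` is a toy stand-in for a concrete subclass such as ChatOpenAI:

```python
from dataclasses import dataclass


@dataclass
class BaseMessage:
    """Simplified stand-in for the real BaseMessage."""
    content: str


class HumanMessage(BaseMessage):
    """A message sent by the user."""


class AIMessage(BaseMessage):
    """A message produced by the model."""


class BaseChatModel:
    """Messages in, message out -- the chat-model contract."""

    def invoke(self, messages: list[BaseMessage]) -> AIMessage:
        raise NotImplementedError


class EchoChatModel(BaseChatModel):
    """Toy concrete model: replies by echoing the last human message."""

    def invoke(self, messages: list[BaseMessage]) -> AIMessage:
        last = messages[-1].content
        return AIMessage(content=f"You said: {last}")


model = EchoChatModel()
reply = model.invoke([HumanMessage(content="hello")])
print(reply.content)  # You said: hello
```

The key point is that `invoke` takes a list of typed messages rather than a raw prompt string, which is what distinguishes chat models from plain LLMs.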
Classes

Anyscale Chat large language models.
Azure ML Online Endpoint chat models.
Chat content formatter for models with an OpenAI-like API scheme.
Deprecated: kept for backwards compatibility.
Content formatter for LLaMA.
Content formatter for Mistral.
Baichuan chat model integration.
Baidu Qianfan chat model integration.
Adapter class to prepare inputs from LangChain into the prompt format the chat model expects.
Custom chat model for Cloudflare Workers AI.
ChatCoze chat models API by coze.com.
Dappier chat large language models.
A chat model that uses the DeepInfra API.
Exception raised when the DeepInfra API returns an error.
EdenAI chat large language models.
EverlyAI Chat large language models.
Fake ChatModel for testing purposes.
Fake ChatModel for testing purposes.
Friendli LLM for chat.
Google PaLM Chat models API.
Error with the Google PaLM API.
GPTRouter by Writesonic Inc.
Error with the GPTRouter APIs.
GPTRouter model.
ChatModel that returns user input as the response.
Tencent Hunyuan chat models API.
Javelin AI Gateway chat models API.
Parameters for the Javelin AI Gateway LLM.
Jina AI Chat models API.
Kinetica LLM Chat Model API.
Fetch and return data from the Kinetica LLM.
Response containing SQL and the fetched data.
Kinetica utility functions.
ChatKonko Chat large language models API.
Chat model that uses the LiteLLM API.
Error with the LiteLLM I/O library.
LiteLLM Router as a LangChain model.
Chat with LLMs via llama-api-server.
llama.cpp model.
MariTalk Chat models API.
Initialize RequestException with request and response objects.
MiniMax chat model integration.
MLflow chat models API.
MLflow AI Gateway chat models API.
Parameters for the MLflow AI Gateway LLM.
MLX chat models.
Moonshot large language models.
NCP ClovaStudio Chat Completion API.
OCI Data Science Model Deployment chat model integration.
OCI large language chat models deployed with Text Generation Inference.
OCI large language chat models deployed with vLLM.
ChatOCIGenAI chat model integration.
OctoAI Chat large language models.
Outlines chat model integration.
Alibaba Cloud PAI-EAS LLM Service chat model API.
Perplexity AI Chat models API.
PremAI Chat models.
Error with the PremAI API.
PromptLayer and OpenAI Chat large language models API.
Reka chat large language models.
SambaNova Cloud chat model.
SambaStudio chat model.
Snowflake Cortex-based chat model.
Error with the Snowpark client.
IFlyTek Spark chat model integration.
Nebula chat large language model (https://docs.symbl.ai/docs/nebula-llm).
Alibaba Tongyi Qwen chat model integration.
Volc Engine Maas hosts a plethora of models.
Writer chat model.
YandexGPT large language models.
Yi chat models API.
Yuan2.0 Chat models API.
ZhipuAI chat model integration.
Functions

Format a list of messages into a full prompt for the Anthropic model.
Async context manager for connecting to an SSE stream.
Convert a message to a dictionary that can be passed to the API.
Convert a list of messages to a prompt for Mistral.
Get the request for the Cohere chat API.
Get the role of the message.
Use tenacity to retry the async completion call.
Use tenacity to retry the completion call for streaming.
Use tenacity to retry the completion call.
Define a conditional decorator.
Convert a dict response to a message.
Get a request for the Friendli chat API.
Get the role of the message.
Use tenacity to retry the async completion call.
Use tenacity to retry the completion call.
Use tenacity to retry the async completion call.
Use tenacity to retry the completion call.
Return the body for the model router input.
Use tenacity to retry the async completion call.
Use tenacity to retry the async completion call.
Get LLM output from usage and params.
Convert a list of messages to a prompt for LLaMA.
Async context manager for connecting to an SSE stream.
Context manager for connecting to an SSE stream.
Use tenacity to retry the async completion call.
Use tenacity to retry the completion call.
Create a retry decorator for PremAI API errors.
Convert LangChain messages to the Reka message format.
Process content to handle both text and media inputs, returning a list of content items.
Process a single content item.
Convert a dict to a message.
Convert a message chunk to a message.
Convert a message to a dict.
Convert a dict to a message.
Use tenacity to retry the async completion call.
Use tenacity to retry the completion call.
Use tenacity to retry the async completion call.
Async context manager for connecting to an SSE stream.
Context manager for connecting to an SSE stream.
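Many of the conversion helpers listed above follow the same serialization pattern: LangChain message objects are turned into plain role/content dicts before being sent to a provider's API, and response dicts are turned back into messages. A minimal, provider-agnostic sketch of that pattern (the `ChatMessage` type and field names here are illustrative assumptions, not the actual LangChain types):

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    """Illustrative message type with the two fields most chat APIs use."""
    role: str
    content: str


def convert_message_to_dict(message: ChatMessage) -> dict:
    # Most chat APIs accept messages as {"role": ..., "content": ...} dicts.
    return {"role": message.role, "content": message.content}


def convert_dict_to_message(d: dict) -> ChatMessage:
    # Responses come back in the same shape; default the role to "assistant".
    return ChatMessage(role=d.get("role", "assistant"), content=d["content"])


payload = convert_message_to_dict(ChatMessage(role="user", content="hi"))
reply = convert_dict_to_message({"content": "hello!"})
```

Each provider integration ships its own variant of these two functions because providers differ in role names, tool-call fields, and streaming chunk formats.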
Deprecated classes