llms

LLM classes provide access to the large language model (LLM) APIs and services.

Class hierarchy:

    BaseLanguageModel --> BaseLLM --> LLM --> <name>  # Examples: AI21, HuggingFaceHub, OpenAI

Main helpers:

    LLMResult, PromptValue,
    CallbackManagerForLLMRun, AsyncCallbackManagerForLLMRun,
    CallbackManager, AsyncCallbackManager,
    AIMessage, BaseMessage
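The hierarchy above can be sketched in plain Python. This is a minimal illustration only, not the real implementation: the actual base classes live in langchain_core and add callback, caching, and serialization machinery, and EchoLLM here is a hypothetical stand-in for a concrete provider class such as AI21 or OpenAI.

```python
from abc import ABC, abstractmethod

class BaseLanguageModel(ABC):
    """Common interface shared by all language models."""

    @abstractmethod
    def invoke(self, prompt: str) -> str: ...

class BaseLLM(BaseLanguageModel):
    """Adds batching over a list of prompts on top of the base interface."""

    def generate(self, prompts: list[str]) -> list[str]:
        return [self.invoke(p) for p in prompts]

class LLM(BaseLLM):
    """Simplified string-in/string-out layer: subclasses implement _call."""

    def invoke(self, prompt: str) -> str:
        return self._call(prompt)

    @abstractmethod
    def _call(self, prompt: str) -> str: ...

class EchoLLM(LLM):
    """Hypothetical provider class; a real one would call a remote API."""

    def _call(self, prompt: str) -> str:
        return f"echo: {prompt}"

print(EchoLLM().generate(["hello"]))  # -> ['echo: hello']
```

Concrete providers only need to fill in `_call`; everything above that layer (batching here, plus callbacks and caching in the real library) comes for free from the base classes.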
Classes

- AI21 large language models.
- Parameters for AI21 penalty data.
- Aleph Alpha large language models.
- Amazon API Gateway for accessing LLMs hosted on AWS.
- Adapter that prepares inputs from LangChain into the format the LLM expects.
- Anyscale large language models.
- Aphrodite language model.
- Arcee's Domain Adapted Language Models (DALMs).
- Aviary hosted models.
- Aviary backend.
- Azure ML Online Endpoint models.
- Azure ML endpoints API types.
- AzureML Managed Endpoint client.
- Azure ML Online Endpoint models.
- Transforms the request and response of an AzureML endpoint to match the required schema.
- Content formatter for models that use an OpenAI-like API scheme.
- Content handler for the Dolly-v2-12b model.
- Content handler for GPT2.
- Content handler for LLMs from the HuggingFace catalog.
- Deprecated: kept for backwards compatibility.
- Deprecated: kept for backwards compatibility.
- Baichuan large language models.
- Baidu Qianfan completion model integration.
- Banana large language models.
- Baseten models.
- Beam API for the gpt2 large language model.
- Base class for Bedrock models.
- Adapter class that prepares inputs from LangChain into the format the LLM expects.
- Wrapper around the BigdlLLM model.
- NIBittensor LLMs.
- CerebriumAI large language models.
- ChatGLM LLM service.
- ChatGLM3 LLM service.
- Clarifai large language models.
- Cloudflare Workers AI service.
- C Transformers LLM models.
- CTranslate2 language model.
- DeepInfra models.
- Neural Magic DeepSparse LLM interface.
- EdenAI models.
- ExllamaV2 API.
- Fake LLM for testing purposes.
- Fake streaming list LLM for testing purposes.
- ForefrontAI large language models.
- Base class for Friendli.
- Friendli LLM.
- GigaChat large language models API.
- GooseAI large language models.
- GPT4All language models.
- Gradient.ai LLM endpoints.
- Train result.
- User input as the response.
- IpexLLM model.
- Javelin AI Gateway LLMs.
- Parameters for the Javelin AI Gateway LLM.
- Kobold API language model.
- Konko AI models.
- Layerup Security LLM service.
- llama.cpp model.
- Llamafile lets you distribute and run large language models with a single file.
- HazyResearch's Manifest library.
- Minimax large language models.
- Common parameters for Minimax large language models.
- MLflow LLM service.
- MLflow AI Gateway LLMs.
- Parameters for the MLflow AI Gateway LLM.
- MLX Pipeline API.
- Modal large language models.
- Moonshot large language models.
- Common parameters for Moonshot LLMs.
- MosaicML LLM service.
- NLPCloud large language models.
- Base class for LLMs deployed on OCI Data Science Model Deployment.
- LLM deployed on OCI Data Science Model Deployment.
- OCI Data Science Model Deployment TGI endpoint.
- VLLM deployed on OCI Data Science Model Deployment.
- Raised when a server error is encountered during inference.
- Raised when the token has expired.
- OCI authentication types as an enumerator.
- OCI large language models.
- Base class for OCI GenAI models.
- OctoAI LLM endpoints (OpenAI compatible).
- Raised when the Ollama endpoint is not found.
- LLM that uses OpaquePrompts to sanitize prompts.
- Base OpenAI large language model class.
- Parameters for identifying a model, as a typed dict.
- OpenLLM, supporting both in-process model instances and remote OpenLLM servers.
- OpenLM models.
- LangChain LLM class to help access the eass LLM service.
- Petals Bloom models.
- PipelineAI large language models.
- Use your Predibase models with LangChain.
- Prediction Guard large language models.
- PromptLayer OpenAI large language models.
- PromptLayer OpenAI large language models.
- Replicate models.
- RWKV language models.
- Handler class to transform input from the LLM into a format that the SageMaker endpoint expects.
- Content handler for the LLM class.
- Parse the byte stream input.
- SageMaker Inference Endpoint models.
- SambaStudio large language models.
- Model inference on self-hosted remote hardware.
- HuggingFace Pipeline API run on self-hosted remote hardware.
- Solar large language models.
- Common configuration for Solar LLMs.
- iFlyTek Spark completion model integration.
- StochasticAI large language models.
- Nebula Service models.
- Text generation models from WebUI.
- The device to use for inference: cuda or cpu.
- Configuration for the reader to be deployed in the Titan Takeoff API.
- Titan Takeoff API LLMs.
- Tongyi completion model integration.
- VLLM language model.
- vLLM OpenAI-compatible API client.
- Base class for VolcEngineMaas models.
- Volc Engine Maas hosts a plethora of models.
- Weight-only quantized model.
- Writer large language models.
- Xinference large-scale model inference service.
- Yandex large language models.
- Yi large language models.
- Wrapper around You.com's conversational Smart and Research APIs.
- Yuan2.0 language models.
Functions

- Create the LLMResult from the choices and prompts.
- Update token usage.
- Get completions from Aviary models.
- List available models.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Get the default Databricks personal access token.
- Get the default Databricks workspace hostname.
- Get the notebook REPL context if running inside a Databricks notebook.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call for streaming.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Conditionally apply a decorator.
- Use tenacity to retry the completion call.
- Remove the trailing slash and /api from the URL if present.
- Default guardrail violation handler.
- Load an LLM from a file.
- Load an LLM from a config dict.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the completion call.
- Update token usage.
- Use tenacity to retry the completion call.
- Generate text from the model.
- Generate elements from an async iterable, and a boolean indicating if it is the last element.
- Async version of stream_generate_with_retry.
- Check the response from the completion call.
- Generate elements from an iterable, and a boolean indicating if it is the last element.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Cut off the text as soon as any stop words occur.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Return True if the model name is a Codey model.
- Return True if the model name is a Gemini model.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the completion call.
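One of the helpers listed above cuts off text at the first occurrence of any stop word. A minimal sketch of that behavior (the name cut_off_at_stop is illustrative, not the library's own):

```python
import re

def cut_off_at_stop(text: str, stop: list[str]) -> str:
    # Split on the first occurrence of any stop sequence and keep only
    # the text before it. re.escape ensures stop sequences containing
    # regex metacharacters are matched literally.
    pattern = "|".join(re.escape(s) for s in stop)
    return re.split(pattern, text, maxsplit=1)[0]

print(cut_off_at_stop("Answer: 42\nQuestion: what next?", ["\nQuestion:"]))
# -> Answer: 42
```

Truncating on stop sequences client-side matters for providers whose APIs do not support stop tokens natively: the model may keep generating past the intended boundary, and the caller trims the overshoot.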
Deprecated classes