LLMs#

Wrappers around large language model APIs.

pydantic model langchain.llms.AI21[source]#

Wrapper around AI21 large language models.

To use, you should have the environment variable AI21_API_KEY set with your API key.

Example

from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
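
A minimal usage sketch, assuming AI21_API_KEY is set in the environment; the prompt and parameter values are illustrative only:

ai21 = AI21(temperature=0.3, maxTokens=100)
print(ai21("Explain what a large language model is in one sentence."))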
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field base_url: Optional[str] = None#

Base url to use, if None decides based on model name.

field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#

Penalizes repeated tokens according to count.

field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#

Penalizes repeated tokens according to frequency.

field logitBias: Optional[Dict[str, float]] = None#

Adjust the probability of specific tokens being generated.

field maxTokens: int = 256#

The maximum number of tokens to generate in the completion.

field minTokens: int = 0#

The minimum number of tokens to generate in the completion.

field model: str = 'j2-jumbo-instruct'#

Model name to use.

field numResults: int = 1#

How many completions to generate for each prompt.

field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#

Penalizes repeated tokens.

field temperature: float = 0.7#

What sampling temperature to use.

field topP: float = 1.0#

Total probability mass of tokens to consider at each step.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance
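
For instance, a hedged sketch of duplicating the model with one field changed (note that update values are not re-validated):

ai21_low_temp = ai21.copy(update={"temperature": 0.2})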

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.
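
As a brief sketch, generate accepts a batch of prompts and returns an LLMResult with one list of generations per prompt:

result = ai21.generate(["Tell me a joke.", "Tell me a fact about whales."])
print(len(result.generations))  # 2, one entry per prompt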

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.
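
A small sketch; the stop sequence here is illustrative:

ai21.predict("List three colors:", stop=["\n\n"])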

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")
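
An LLM saved this way can usually be restored with the companion loader (a sketch; assumes langchain.llms.loading.load_llm is available in this version):

from langchain.llms.loading import load_llm
llm = load_llm("path/llm.yaml")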

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.AlephAlpha[source]#

Wrapper around Aleph Alpha large language models.

To use, you should have the aleph_alpha_client python package installed, and the environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass it as a named parameter to the constructor.

Parameters are explained more in depth here: Aleph-Alpha/aleph-alpha-client

Example

from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
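
A hedged usage sketch, assuming the aleph_alpha_client package is installed; maximum_tokens and stop_sequences correspond to the fields documented below:

aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key", maximum_tokens=40, stop_sequences=["###"])
print(aleph_alpha("Q: What is the capital of France?\nA:"))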
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field aleph_alpha_api_key: Optional[str] = None#

API key for Aleph Alpha API.

field best_of: Optional[int] = None#

Returns the completion with the "best of" result (highest log probability per token).

field completion_bias_exclusion_first_token_only: bool = False#

Only consider the first token for the completion_bias_exclusion.

field contextual_control_threshold: Optional[float] = None#

If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-None value, control parameters are also applied to similar tokens.

field control_log_additive: Optional[bool] = True#

True: apply control by adding the log(control_factor) to attention scores. False: (attention_scores - attention_scores.min(-1)) * control_factor

field echo: bool = False#

Echo the prompt in the completion.

field frequency_penalty: float = 0.0#

Penalizes repeated tokens according to frequency.

field log_probs: Optional[int] = None#

Number of top log probabilities to be returned for each generated token.

field logit_bias: Optional[Dict[int, float]] = None#

The logit bias allows you to influence the likelihood of generating tokens.

field maximum_tokens: int = 64#

The maximum number of tokens to be generated.

field minimum_tokens: Optional[int] = 0#

Generate at least this number of tokens.

field model: Optional[str] = 'luminous-base'#

Model name to use.

field n: int = 1#

How many completions to generate for each prompt.

field penalty_bias: Optional[str] = None#

Penalty bias for the completion.

field penalty_exceptions: Optional[List[str]] = None#

List of strings that may be generated without penalty, regardless of other penalty settings.

field penalty_exceptions_include_stop_sequences: Optional[bool] = None#

Whether stop_sequences should be included in penalty_exceptions.

field presence_penalty: float = 0.0#

Penalizes repeated tokens.

field raw_completion: bool = False#

Force the raw completion of the model to be returned.

field repetition_penalties_include_completion: bool = True#

Flag deciding whether presence penalty or frequency penalty are updated from the completion.

field repetition_penalties_include_prompt: Optional[bool] = False#

Flag deciding whether presence penalty or frequency penalty are updated from the prompt.

field stop_sequences: Optional[List[str]] = None#

Stop sequences to use.

field temperature: float = 0.0#

A non-negative float that tunes the degree of randomness in generation.

field tokens: Optional[bool] = False#

Return the tokens of the completion.

field top_k: int = 0#

Number of most likely tokens to consider at each step.

field top_p: float = 0.0#

Total probability mass of tokens to consider at each step.

field use_multiplicative_presence_penalty: Optional[bool] = False#

Flag deciding whether presence penalty is applied multiplicatively (True) or additively (False).

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Anthropic[source]#

Wrapper around Anthropic’s large language models.

To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor.

Example

import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")

# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")

# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
Validators
  • raise_deprecation » all fields

  • raise_warning » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#

Timeout for requests to Anthropic Completion API. Default is 600 seconds.

field max_tokens_to_sample: int = 256#

Denotes the number of tokens to predict per generation.

field model: str = 'claude-v1'#

Model name to use.

field streaming: bool = False#

Whether to stream the results.

field temperature: Optional[float] = None#

A non-negative float that tunes the degree of randomness in generation.

field top_k: Optional[int] = None#

Number of most likely tokens to consider at each step.

field top_p: Optional[float] = None#

Total probability mass of tokens to consider at each step.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int[source]#

Calculate number of tokens.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

stream(prompt: str, stop: Optional[List[str]] = None) Generator[source]#

Call Anthropic completion_stream and return the resulting generator.

BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.

Parameters
  • prompt – The prompt to pass into the model.

  • stop – Optional list of stop words to use when generating.

Returns

A generator representing the stream of tokens from Anthropic.

Example

prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
    print(token, end="")
classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Anyscale[source]#

Wrapper around Anyscale Services. To use, you should have the environment variables ANYSCALE_SERVICE_URL, ANYSCALE_SERVICE_ROUTE, and ANYSCALE_SERVICE_TOKEN set with your Anyscale service, or pass them as named parameters to the constructor.

Example

from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
                    anyscale_service_route="SERVICE_ROUTE",
                    anyscale_service_token="SERVICE_TOKEN")

# Use Ray for distributed processing
import ray
prompt_list=[]
@ray.remote
def send_query(llm, prompt):
    resp = llm(prompt)
    return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field model_kwargs: Optional[dict] = None#

Keyword arguments to pass to the model. Reserved for future use.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Aviary[source]#

Allows you to use an Aviary.

Aviary is a backend for hosted models. You can find out more about Aviary at ray-project/aviary.

It has no dependencies, since it connects to the backend directly.

To get a list of the models supported on an Aviary, follow the instructions on the website to install the aviary CLI and then use: aviary models

You must at least specify the environment variable or parameter AVIARY_URL.

You may optionally specify the environment variable or parameter AVIARY_TOKEN.

Example

from langchain.llms import Aviary
light = Aviary(aviary_url='AVIARY_URL',
                model='amazon/LightGPT')

result = light.predict('How do you make fried rice?')
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.AzureOpenAI[source]#

Wrapper around Azure-specific OpenAI large language models.

To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.

Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.

Example

from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
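
Azure routes completions by deployment rather than by model name alone, so in practice you typically also set the deployment_name field documented below (the deployment name here is hypothetical):

openai = AzureOpenAI(deployment_name="my-davinci-deployment", model_name="text-davinci-003")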
Validators
  • build_extra » all fields

  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_azure_settings » all fields

  • validate_environment » all fields

field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#

Set of special tokens that are allowed.

field batch_size: int = 20#

Batch size to use when passing multiple documents to generate.

field best_of: int = 1#

Generates best_of completions server-side and returns the “best”.

field deployment_name: str = ''#

Deployment name to use.

field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#

Set of special tokens that are not allowed.

field frequency_penalty: float = 0#

Penalizes repeated tokens according to frequency.

field logit_bias: Optional[Dict[str, float]] [Optional]#

Adjust the probability of specific tokens being generated.

field max_retries: int = 6#

Maximum number of retries to make when generating.

field max_tokens: int = 256#

The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.

field model_kwargs: Dict[str, Any] [Optional]#

Holds any model parameters valid for create call not explicitly specified.

field model_name: str = 'text-davinci-003' (alias 'model')#

Model name to use.

field n: int = 1#

How many completions to generate for each prompt.

field presence_penalty: float = 0#

Penalizes repeated tokens.

field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#

Timeout for requests to OpenAI completion API. Default is 600 seconds.

field streaming: bool = False#

Whether to stream the results or not.

field temperature: float = 0.7#

What sampling temperature to use.

field top_p: float = 1#

Total probability mass of tokens to consider at each step.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) langchain.schema.LLMResult#

Create the LLMResult from the choices and prompts.

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.
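
A small sketch of counting tokens for chat-style messages (HumanMessage comes from langchain.schema; the content is illustrative):

from langchain.schema import HumanMessage
openai.get_num_tokens_from_messages([HumanMessage(content="Tell me a joke.")])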

get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) List[List[str]]#

Get the sub prompts for llm call.

get_token_ids(text: str) List[int]#

Get the token IDs using the tiktoken package.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

max_tokens_for_prompt(prompt: str) int#

Calculate the maximum number of tokens possible to generate for a prompt.

Parameters

prompt – The prompt to pass into the model.

Returns

The maximum number of tokens to generate for a prompt.

Example

max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) int#

Calculate the maximum number of tokens possible to generate for a model.

Parameters

modelname – The modelname we want to know the context size for.

Returns

The maximum context size

Example

max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

prep_streaming_params(stop: Optional[List[str]] = None) Dict[str, Any]#

Prepare the params for streaming.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

stream(prompt: str, stop: Optional[List[str]] = None) Generator#

Call OpenAI with streaming flag and return the resulting generator.

BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.

Parameters
  • prompt – The prompts to pass into the model.

  • stop – Optional list of stop words to use when generating.

Returns

A generator representing the stream of tokens from OpenAI.

Example

generator = openai.stream("Tell me a joke.")
for token in generator:
    print(token, end="")
classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Banana[source]#

Wrapper around Banana large language models.

To use, you should have the banana-dev python package installed, and the environment variable BANANA_API_KEY set with your API key.

Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.

Example

from langchain.llms import Banana
banana = Banana(model_key="")
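
model_key identifies the Banana model endpoint, and extra call parameters can be passed through model_kwargs (the values here are hypothetical):

banana = Banana(model_key="YOUR_MODEL_KEY", model_kwargs={"temperature": 0.9})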
Validators
  • build_extra » all fields

  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field model_key: str = ''#

Model endpoint to use.

field model_kwargs: Dict[str, Any] [Optional]#

Holds any model parameters valid for create call not explicitly specified.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Baseten[source]#

Use your Baseten models in LangChain.

To use, you should have the baseten python package installed, and run baseten.login() with your Baseten API key.

The required model param can be either a model id or model version id. Using a model version ID will result in slightly faster invocation. Any other model parameters can also be passed in with the format input={model_param: value, …}

The Baseten model must accept a dictionary of input with the key “prompt” and return a dictionary with a key “data” which maps to a list of response strings.

Example
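
A minimal sketch based on the description above (the model version id is hypothetical, and baseten.login() must already have been called with your API key):

import baseten
from langchain.llms import Baseten

baseten.login("YOUR_API_KEY")  # hypothetical key
my_llm = Baseten(model="MODEL_VERSION_ID")  # hypothetical model version id
print(my_llm("What is the meaning of life?"))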

Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Beam[source]#

Wrapper around the Beam API for the gpt2 large language model.

To use, you should have the beam-sdk python package installed, and the environment variable BEAM_CLIENT_ID set with your client id and BEAM_CLIENT_SECRET set with your client secret. Information on how to get these is available here: https://docs.beam.cloud/account/api-keys.

The wrapper can then be called as follows, where the name, cpu, memory, gpu, python version, and python packages can be updated accordingly. Once deployed, the instance can be called.

Example

llm = Beam(model_name="gpt2",
    name="langchain-gpt2",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "diffusers[torch]>=0.10",
        "transformers",
        "torch",
        "pillow",
        "accelerate",
        "safetensors",
        "xformers",],
    max_length=50)
llm._deploy()
call_result = llm._call("Your prompt here")
Validators
  • build_extra » all fields

  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field model_kwargs: Dict[str, Any] [Optional]#

Holds any model parameters valid for create call not explicitly specified.

field url: str = ''#

Model endpoint to use.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

app_creation() None[source]#

Creates a Python file which will contain your Beam app definition.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the message.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

run_creation() None[source]#

Creates a Python file which will be deployed on beam.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Bedrock[source]#

LLM provider to invoke Bedrock models.

To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file.

Make sure the credentials / roles used have the required policies to access the Bedrock service.
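
For illustration, a hedged construction sketch using the fields documented below (the profile name is hypothetical; the model id follows the example given for model_id):

from langchain.llms import Bedrock

llm = Bedrock(
    model_id="amazon.titan-tg1-large",         # see the model_id field
    credentials_profile_name="bedrock-admin",  # hypothetical profile in ~/.aws/credentials
    region_name="us-west-2",
)
print(llm("Summarize the benefits of serverless architectures."))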

Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field credentials_profile_name: Optional[str] = None#

The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

field model_id: str [Required]#

ID of the model to call, e.g., amazon.titan-tg1-large. This is equivalent to the modelId property in the list-foundation-models API.

field model_kwargs: Optional[Dict] = None#

Keyword arguments to pass to the model.

field region_name: Optional[str] = None#

The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or the region specified in ~/.aws/config if not provided here.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.CTransformers[source]#

Wrapper around the C Transformers LLM interface.

To use, you should have the ctransformers python package installed. See marella/ctransformers.

Example

from langchain.llms import CTransformers

llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field config: Optional[Dict[str, Any]] = None#

The config parameters. See marella/ctransformers for the available keys; a usage sketch follows the field list below.

field lib: Optional[str] = None#

The path to a shared library or one of avx2, avx, basic.

field model: str [Required]#

The path to a model file or directory or the name of a Hugging Face Hub model repo.

field model_file: Optional[str] = None#

The name of the model file in repo or directory.

field model_type: Optional[str] = None#

The model type.

field verbose: bool [Optional]#

Whether to print out response text.
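As a sketch, generation options can be supplied through the config field described above (the repository name and the config keys max_new_tokens and temperature come from the ctransformers project and are assumptions here, not values defined by this wrapper):

from langchain.llms import CTransformers

# Config keys are forwarded to ctransformers; see marella/ctransformers for the full list.
config = {"max_new_tokens": 256, "temperature": 0.8}

llm = CTransformers(
    model="marella/gpt-2-ggml",  # assumed Hugging Face Hub repo containing a GGML model file
    model_type="gpt2",
    config=config,
)
print(llm("AI is going to"))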

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.CerebriumAI[source]#

Wrapper around CerebriumAI large language models.

To use, you should have the cerebrium python package installed, and the environment variable CEREBRIUMAI_API_KEY set with your API key.

Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.

Example

from langchain.llms import CerebriumAI
cerebrium = CerebriumAI(endpoint_url="")
Validators
  • build_extra » all fields

  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field endpoint_url: str = ''#

Model endpoint to use.

field model_kwargs: Dict[str, Any] [Optional]#

Holds any model parameters valid for the create call that are not explicitly specified; see the sketch after the field list below.

field verbose: bool [Optional]#

Whether to print out response text.
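A sketch of forwarding extra parameters via model_kwargs (the endpoint URL and the parameter names max_length and temperature are placeholders, not values defined by this class):

from langchain.llms import CerebriumAI

# Parameters not declared on the class are sent with each call to the endpoint.
llm = CerebriumAI(
    endpoint_url="https://run.cerebrium.ai/your-endpoint/predict",  # placeholder URL
    model_kwargs={"max_length": 100, "temperature": 0.7},           # assumed parameter names
)
# Equivalently, undeclared keyword arguments (e.g. max_length=100) are
# gathered into model_kwargs by the build_extra validator.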

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Cohere[source]#

Wrapper around Cohere large language models.

To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor.

Example

from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field frequency_penalty: float = 0.0#

Penalizes repeated tokens according to frequency. Between 0 and 1.

field k: int = 0#

Number of most likely tokens to consider at each step.

field max_retries: int = 10#

Maximum number of retries to make when generating.

field max_tokens: int = 256#

Denotes the number of tokens to predict per generation.

field model: Optional[str] = None#

Model name to use.

field p: int = 1#

Total probability mass of tokens to consider at each step.

field presence_penalty: float = 0.0#

Penalizes repeated tokens. Between 0 and 1.

field temperature: float = 0.75#

A non-negative float that tunes the degree of randomness in generation.

field truncate: Optional[str] = None#

Specify how the client handles inputs longer than the maximum token length: truncate from START, END, or NONE.

field verbose: bool [Optional]#

Whether to print out response text.
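Putting the fields above together, a more fully configured client might look like this (the model name and parameter values are illustrative only):

from langchain.llms import Cohere

# Every keyword argument below corresponds to a field documented above.
cohere = Cohere(
    model="command",             # illustrative model name
    cohere_api_key="my-api-key",
    temperature=0.75,
    max_tokens=256,
    k=0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    truncate="END",
    max_retries=10,
)
print(cohere("Write a one-line tagline for a coffee shop."))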

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Databricks[source]#

LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app. It supports two endpoint types:

  • Serving endpoint (recommended for both production and development). We assume that an LLM was registered and deployed to a serving endpoint. To wrap it as an LLM you must have “Can Query” permission to the endpoint. Set endpoint_name accordingly and do not set cluster_id and cluster_driver_port. The expected model signature is:

    • inputs:

      [{"name": "prompt", "type": "string"},
       {"name": "stop", "type": "list[string]"}]
      
    • outputs: [{"type": "string"}]

  • Cluster driver proxy app (recommended for interactive development). You can load an LLM on a Databricks interactive cluster and start a local HTTP server on the driver node to serve the model at / via HTTP POST with JSON input/output. Use a port number in the range [3000, 8000] and have the server listen on the driver IP address, or simply 0.0.0.0, rather than localhost only. To wrap it as an LLM you must have “Can Attach To” permission to the cluster. Set cluster_id and cluster_driver_port and do not set endpoint_name. The expected server schema (using JSON schema) is:

    • inputs:

      {"type": "object",
       "properties": {
          "prompt": {"type": "string"},
          "stop": {"type": "array", "items": {"type": "string"}}},
       "required": ["prompt"]}`
      
    • outputs: {"type": "string"}

If the endpoint model signature is different or you want to set extra params, you can use transform_input_fn and transform_output_fn to apply necessary transformations before and after the query.
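Example (a sketch only; the endpoint name, cluster ID, and port below are placeholders)

from langchain.llms import Databricks

# Case 1: wrap a model serving endpoint.
llm = Databricks(endpoint_name="my-llm-endpoint")

# Case 2: wrap a cluster driver proxy app.
llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")

# If the endpoint expects a different request schema, adapt it with transform_input_fn.
def transform_input(**request):
    # For example, apply a prompt template before sending the request.
    request["prompt"] = f"Answer concisely: {request['prompt']}"
    return request

llm = Databricks(endpoint_name="my-llm-endpoint", transform_input_fn=transform_input)
print(llm("How are clusters billed?"))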

Validators
  • raise_deprecation » all fields

  • set_cluster_driver_port » cluster_driver_port

  • set_cluster_id » cluster_id

  • set_model_kwargs » model_kwargs

  • set_verbose » verbose

field api_token: str [Optional]#

Databricks personal access token. If not provided, the default value is determined by

  • the DATABRICKS_TOKEN environment variable if present, or

  • an automatically generated temporary token if running inside a Databricks notebook attached to an interactive cluster in “single user” or “no isolation shared” mode.

field cluster_driver_port: Optional[str] = None#

The port number used by the HTTP server running on the cluster driver node. The server should listen on the driver IP address, or simply 0.0.0.0, so that it is reachable from outside localhost. We recommend using a port number in the range [3000, 8000].

field cluster_id: Optional[str] = None#

ID of the cluster if connecting to a cluster driver proxy app. If neither endpoint_name nor cluster_id is provided and the code runs inside a Databricks notebook attached to an interactive cluster in “single user” or “no isolation shared” mode, the current cluster ID is used as the default. You must not set both endpoint_name and cluster_id.

field endpoint_name: Optional[str] = None#

Name of the model serving endpoint. You must specify the endpoint name to connect to a model serving endpoint. You must not set both endpoint_name and cluster_id.

field host: str [Optional]#

Databricks workspace hostname. If not provided, the default value is determined by

  • the DATABRICKS_HOST environment variable if present, or

  • the hostname of the current Databricks workspace if running inside a Databricks notebook attached to an interactive cluster in “single user” or “no isolation shared” mode.

field model_kwargs: Optional[Dict[str, Any]] = None#

Extra parameters to pass to the endpoint.

field transform_input_fn: Optional[Callable] = None#

A function that transforms {prompt, stop, **kwargs} into a JSON-compatible request object that the endpoint accepts. For example, you can apply a prompt template to the input prompt.

field transform_output_fn: Optional[Callable[[...], str]] = None#

A function that transforms the output from the endpoint to the generated text.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.DeepInfra[source]#

Wrapper around DeepInfra deployed models.

To use, you should have the requests python package installed, and the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.

Only supports text-generation and text2text-generation for now.

Example

from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
                    deepinfra_api_token="my-api-key")
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.FakeListLLM[source]#

Fake LLM wrapper for testing purposes.
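
A typical use is to script deterministic outputs in unit tests. The responses constructor argument below is an assumption about this class and is not documented on this page:

from langchain.llms import FakeListLLM

# Returns the queued strings in order, one per call, without calling any API.
llm = FakeListLLM(responses=["first canned answer", "second canned answer"])
assert llm("any prompt") == "first canned answer"
assert llm("another prompt") == "second canned answer"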

Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.ForefrontAI[source]#

Wrapper around ForefrontAI large language models.

To use, you should have the environment variable FOREFRONTAI_API_KEY set with your API key.

Example

from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field base_url: Optional[str] = None#

Base url to use, if None decides based on model name.

field endpoint_url: str = ''#

Model endpoint URL to use.

field length: int = 256#

The maximum number of tokens to generate in the completion.

field repetition_penalty: int = 1#

Penalizes repeated tokens according to frequency.

field temperature: float = 0.7#

What sampling temperature to use.

field top_k: int = 40#

The number of highest probability vocabulary tokens to keep for top-k-filtering.

field top_p: float = 1.0#

Total probability mass of tokens to consider at each step.

field verbose: bool [Optional]#

Whether to print out response text.
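Combining the fields above, a configured client might look like this (the endpoint URL is a placeholder):

from langchain.llms import ForefrontAI

# endpoint_url points at your deployed ForefrontAI model.
forefrontai = ForefrontAI(
    endpoint_url="https://your-model.forefront.link",  # placeholder
    length=256,
    temperature=0.7,
    top_k=40,
    top_p=1.0,
    repetition_penalty=1,
)
print(forefrontai("Once upon a time, "))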

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.GPT4All[source]#

Wrapper around GPT4All language models.

To use, you should have the gpt4all python package installed, the pre-trained model file, and the model’s config information.

Example

from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)

# Simplest invocation
response = model("Once upon a time, ")
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field allow_download: bool = False#

If the model does not exist in ~/.cache/gpt4all/, download it.

field context_erase: float = 0.5#

Leave (n_ctx * context_erase) tokens, starting from the beginning, if the context has run out.

field echo: Optional[bool] = False#

Whether to echo the prompt.

field embedding: bool = False#

Use embedding mode only.

field f16_kv: bool = False#

Use half-precision for key/value cache.

field logits_all: bool = False#

Return logits for all tokens, not just the last token.

field model: str [Required]#

Path to the pre-trained GPT4All model file.

field n_batch: int = 1#

Batch size for prompt processing.

field n_ctx: int = 512#

Token context window.

field n_parts: int = -1#

Number of parts to split the model into. If -1, the number of parts is automatically determined.

field n_predict: Optional[int] = 256#

The maximum number of tokens to generate.

field n_threads: Optional[int] = 4#

Number of threads to use.

field repeat_last_n: Optional[int] = 64#

Last n tokens to penalize.

field repeat_penalty: Optional[float] = 1.3#

The penalty to apply to repeated tokens.

field seed: int = 0#

Seed. If -1, a random seed is used.

field stop: Optional[List[str]] = []#

A list of strings to stop generation when encountered.

field streaming: bool = False#

Whether to stream the results or not; see the streaming sketch after the field list below.

field temp: Optional[float] = 0.8#

The temperature to use for sampling.

field top_k: Optional[int] = 40#

The top-k value to use for sampling.

field top_p: Optional[float] = 0.95#

The top-p value to use for sampling.

field use_mlock: bool = False#

Force system to keep model in RAM.

field verbose: bool [Optional]#

Whether to print out response text.

field vocab_only: bool = False#

Only load the vocabulary, no weights.
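As noted at the streaming field above, streaming can be combined with a callback handler to emit tokens as they are generated (a sketch; the model path is a placeholder, as in the example above):

from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Stream generated tokens to stdout as they are produced.
llm = GPT4All(
    model="./models/gpt4all-model.bin",  # placeholder path
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Explain streaming in one sentence.")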

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.GooglePalm[source]#
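A minimal usage sketch (the google_api_key argument below is an assumption about the constructor, not a field documented on this page):

from langchain.llms import GooglePalm

# Uses the default model_name "models/text-bison-001" documented below.
llm = GooglePalm(google_api_key="my-api-key", temperature=0.7)
print(llm("Write a haiku about the ocean."))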
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field max_output_tokens: Optional[int] = None#

Maximum number of tokens to include in a candidate. Must be greater than zero. If unset, will default to 64.

field model_name: str = 'models/text-bison-001'#

Model name to use.

field n: int = 1#

Number of chat completions to generate for each prompt. Note that the API may not return the full n completions if duplicates are generated.

field temperature: float = 0.7#

Run inference with this temperature. Must be in the closed interval [0.0, 1.0].

field top_k: Optional[int] = None#

Decode using top-k sampling: consider the set of top_k most probable tokens. Must be positive.

field top_p: Optional[float] = None#

Decode using nucleus sampling: consider the smallest set of tokens whose probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.GooseAI[source]#

Wrapper around GooseAI large language models.

To use, you should have the openai python package installed, and the environment variable GOOSEAI_API_KEY set with your API key.

Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.

Example

from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
Validators
  • build_extra » all fields

  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field frequency_penalty: float = 0#

Penalizes repeated tokens according to frequency.

field logit_bias: Optional[Dict[str, float]] [Optional]#

Adjust the probability of specific tokens being generated.

field max_tokens: int = 256#

The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.

field min_tokens: int = 1#

The minimum number of tokens to generate in the completion.

field model_kwargs: Dict[str, Any] [Optional]#

Holds any model parameters valid for create call not explicitly specified.

field model_name: str = 'gpt-neo-20b'#

Model name to use.

field n: int = 1#

How many completions to generate for each prompt.

field presence_penalty: float = 0#

Penalizes repeated tokens.

field temperature: float = 0.7#

What sampling temperature to use.

field top_p: float = 1#

Total probability mass of tokens to consider at each step.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.
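generate accepts a batch of prompts and returns an LLMResult whose generations list is parallel to the input prompts; a minimal sketch using the gooseai instance from the example above:

result = gooseai.generate(["Define entropy.", "Define enthalpy."])
for generations in result.generations:
    # each element is a list of Generation objects (n candidates per prompt)
    print(generations[0].text)
print(result.llm_output)  # provider-specific metadata, if any was returned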

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) int#

Get the number of tokens in the messages.

get_token_ids(text: str) List[int]#

Get the token IDs present in the text.
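get_num_tokens and get_token_ids can be used to check a prompt's size before sending it, for example to keep it within the model's context window; a short sketch:

prompt = "Summarize the plot of Hamlet in two sentences."
ids = gooseai.get_token_ids(prompt)   # token ids for the text
n = gooseai.get_num_tokens(prompt)    # typically len(ids)
print(n, ids[:5])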

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are as per json.dumps().

predict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

save(file_path: Union[pathlib.Path, str]) None#

Save the LLM.

Parameters

file_path – Path to file to save the LLM to.

Example:

llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.HuggingFaceEndpoint[source]#

Wrapper around HuggingFaceHub Inference Endpoints.

To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.

Only supports text-generation and text2text-generation for now.

Example

from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
    "https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
    endpoint_url=endpoint_url,
    huggingfacehub_api_token="my-api-key"
)
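Building on the example above, a minimal call sketch; the task value and model_kwargs keys are illustrative and depend on the deployed model:

hf = HuggingFaceEndpoint(
    endpoint_url=endpoint_url,
    huggingfacehub_api_token="my-api-key",
    task="text-generation",               # or "text2text-generation"
    model_kwargs={"max_new_tokens": 64},  # forwarded to the endpoint
)
print(hf("Complete this sentence: The quick brown fox"))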
Validators
  • raise_deprecation » all fields

  • set_verbose » verbose

  • validate_environment » all fields

field endpoint_url: str = ''#

Endpoint URL to use.

field model_kwargs: Optional[dict] = None#

Keyword arguments to pass to the model.

field task: Optional[str] = None#

Task to call the model with. Should be a task that returns generated_text or summary_text.

field verbose: bool [Optional]#

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) str#

Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

async apredict(text: str, *, stop: Optional[Sequence[str]] = None) str#

Predict text from text.

async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) langchain.schema.BaseMessage#

Predict message from messages.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict#

Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) langchain.schema.LLMResult#

Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) int#

Get the number of tokens present in the text.