BedrockBase#

class langchain_community.llms.bedrock.BedrockBase[source]#

Bases: BaseModel, ABC

Base class for Bedrock models.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param config: Any = None#

An optional botocore.config.Config instance to pass to the client.
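For example, a custom botocore Config can be used to tune retries and timeouts (a minimal sketch; the retry and timeout values are only illustrative):

from botocore.config import Config
from langchain_community.llms import Bedrock

# Retry throttled calls up to 5 times and allow long generations to finish.
config = Config(retries={"max_attempts": 5}, read_timeout=120)

llm = Bedrock(model_id="amazon.titan-text-express-v1", config=config)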

param credentials_profile_name: str | None = None#

The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
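For example, a named profile can be selected explicitly (a minimal sketch; the profile name bedrock-admin is a hypothetical placeholder):

from langchain_community.llms import Bedrock

# Resolve credentials from the named profile instead of the default chain.
llm = Bedrock(
    model_id="amazon.titan-text-express-v1",
    credentials_profile_name="bedrock-admin",  # hypothetical profile name
)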

param endpoint_url: str | None = None#

Needed if you don't want to default to the us-east-1 endpoint.
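For example, an endpoint can be pinned to another region (a minimal sketch, assuming the standard bedrock-runtime.<region>.amazonaws.com endpoint pattern):

from langchain_community.llms import Bedrock

llm = Bedrock(
    model_id="amazon.titan-text-express-v1",
    region_name="us-west-2",
    endpoint_url="https://bedrock-runtime.us-west-2.amazonaws.com",
)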

param guardrails: Mapping[str, Any] | None = {'id': None, 'trace': False, 'version': None}#

An optional dictionary to configure guardrails for Bedrock.

This field consists of the keys 'id' and 'version', which should be strings but are initialized to None, plus an optional boolean 'trace' key. It is used to determine whether specific guardrails are enabled and properly set.

Type:

Optional[Mapping[str, Any]]: A mapping with 'id', 'version', and 'trace' keys.

Example:

llm = Bedrock(
    model_id="<model_id>",
    client=<bedrock_client>,
    model_kwargs={},
    guardrails={"id": "<guardrail_id>", "version": "<guardrail_version>"},
)

To enable tracing for guardrails, set the 'trace' key to True and pass a callback handler to the 'run_manager' parameter of the 'generate' or '_call' methods.

Example:

llm = Bedrock(
    model_id="<model_id>",
    client=<bedrock_client>,
    model_kwargs={},
    guardrails={"id": "<guardrail_id>", "version": "<guardrail_version>", "trace": True},
    callbacks=[BedrockAsyncCallbackHandler()],
)

See https://python.langchain.com/docs/modules/callbacks/ for more information on callback handlers.

from typing import Any

from langchain_core.callbacks import AsyncCallbackHandler


class BedrockAsyncCallbackHandler(AsyncCallbackHandler):
    async def on_llm_error(
        self,
        error: BaseException,
        **kwargs: Any,
    ) -> Any:
        reason = kwargs.get("reason")
        if reason == "GUARDRAIL_INTERVENED":
            ...  # logic to handle guardrail intervention

param model_id: str [Required]#

ID of the model to call, e.g., amazon.titan-text-express-v1. This is equivalent to the modelId property in the list-foundation-models API. For custom and provisioned models, an ARN value is expected.

param model_kwargs: Dict | None = None#

Keyword arguments to pass to the model.
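For example, inference parameters can be forwarded through model_kwargs (a minimal sketch; the keys shown are Amazon Titan text parameters and differ between providers):

from langchain_community.llms import Bedrock

llm = Bedrock(
    model_id="amazon.titan-text-express-v1",
    # Provider-specific keys; Anthropic models use e.g. max_tokens_to_sample.
    model_kwargs={"temperature": 0.5, "maxTokenCount": 512},
)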

param provider: str | None = None#

The model provider, e.g., amazon, cohere, ai21, etc. When not supplied, the provider is extracted from the first part of the model_id, e.g., 'amazon' in 'amazon.titan-text-express-v1'. This value should be provided for model IDs that do not include the provider, e.g., custom and provisioned models that have an ARN associated with them.
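For example, a provisioned model referenced by ARN needs the provider set explicitly (a minimal sketch; the ARN is a hypothetical placeholder):

from langchain_community.llms import Bedrock

llm = Bedrock(
    # Hypothetical provisioned-model ARN; no provider can be parsed from it.
    model_id="arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123",
    provider="anthropic",
)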

param provider_stop_sequence_key_name_map: Mapping[str, str] = {'ai21': 'stop_sequences', 'amazon': 'stopSequences', 'anthropic': 'stop_sequences', 'cohere': 'stop_sequences', 'mistral': 'stop'}#

Mapping from model provider to the key name that provider expects for stop sequences in its request body.

param region_name: str | None = None#

The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided here.

param streaming: bool = False#

Whether to stream the results.
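For example, with streaming enabled, output can be consumed incrementally through the standard stream interface (a minimal sketch):

from langchain_community.llms import Bedrock

llm = Bedrock(
    model_id="amazon.titan-text-express-v1",
    streaming=True,
)

# Each chunk is a piece of the generated text as it arrives.
for chunk in llm.stream("Tell me a fact about AWS."):
    print(chunk, end="", flush=True)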