ModerationPromptSafetyConfig

class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig

Bases: BaseModel

Configuration for the Prompt Safety moderation filter.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
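A minimal construction sketch: the import path follows the class signature above, and the 0.8 value is purely illustrative.

```python
from langchain_experimental.comprehend_moderation.base_moderation_config import (
    ModerationPromptSafetyConfig,
)

# Keyword arguments are parsed and validated; a value that cannot be
# coerced to a float raises ValidationError.
default_config = ModerationPromptSafetyConfig()              # threshold defaults to 0.5
strict_config = ModerationPromptSafetyConfig(threshold=0.8)  # illustrative value

print(default_config.threshold, strict_config.threshold)  # 0.5 0.8
```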

param threshold: float = 0.5

Threshold for the Prompt Safety classification confidence score; defaults to 0.5, i.e. 50%.
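For context, a hedged sketch of how this filter is typically passed into the rest of the package; BaseModerationConfig, AmazonComprehendModerationChain, and the package-level imports are assumptions drawn from the wider langchain_experimental.comprehend_moderation package and are not documented on this page.

```python
import boto3

# Assumed package-level exports; only ModerationPromptSafetyConfig is
# documented on this page.
from langchain_experimental.comprehend_moderation import (
    AmazonComprehendModerationChain,
    BaseModerationConfig,
    ModerationPromptSafetyConfig,
)

# Raise the confidence threshold from the 0.5 default to 0.7 (70%).
prompt_safety_filter = ModerationPromptSafetyConfig(threshold=0.7)

# Assumed: BaseModerationConfig aggregates the individual filter configs.
moderation_config = BaseModerationConfig(filters=[prompt_safety_filter])

comprehend_client = boto3.client("comprehend", region_name="us-east-1")
moderation_chain = AmazonComprehendModerationChain(
    moderation_config=moderation_config,
    client=comprehend_client,
)
```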