ModerationPromptSafetyConfig#
- class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig[source]#
Bases: BaseModel
Configuration for Prompt Safety moderation filter.
Create a new model by parsing and validating input data from keyword arguments.
Raises a pydantic ValidationError if the input data cannot be validated to form a valid model.
- param threshold: float = 0.5#
Threshold for the Prompt Safety classification confidence score; defaults to 0.5, i.e. 50%.
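
Below is a minimal sketch of how this config is typically combined with `BaseModerationConfig` and passed to an `AmazonComprehendModerationChain`. The boto3 client setup, region, and the 0.8 threshold are illustrative assumptions, not requirements of the class.

```python
import boto3
from langchain_experimental.comprehend_moderation import (
    AmazonComprehendModerationChain,
    BaseModerationConfig,
    ModerationPromptSafetyConfig,
)

# Assumed boto3 Comprehend client; region is illustrative.
comprehend_client = boto3.client("comprehend", region_name="us-east-1")

# Flag prompts whose unsafe-prompt confidence score is >= 0.8 (80%),
# instead of the default 0.5.
prompt_safety_config = ModerationPromptSafetyConfig(threshold=0.8)
moderation_config = BaseModerationConfig(filters=[prompt_safety_config])

moderation_chain = AmazonComprehendModerationChain(
    client=comprehend_client,
    moderation_config=moderation_config,
)
```

Raising the threshold makes the filter more permissive: only prompts classified as unsafe with high confidence are intercepted, which reduces false positives at the cost of letting borderline prompts through.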