ModerationToxicityConfig#

class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig[source]#

Bases: BaseModel

Configuration for Toxicity moderation filter.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

param labels: List[str] = []#

List of toxic labels; defaults to an empty list (all labels are considered)

param threshold: float = 0.5#

Threshold for the toxic label confidence score; defaults to 0.5, i.e. 50%
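
A minimal usage sketch is shown below. It assumes the sibling BaseModerationConfig class from the same module accepts a list of filter configs via its filters field, and the label names in the comment are examples of Amazon Comprehend toxicity labels, not an exhaustive list.

```python
from langchain_experimental.comprehend_moderation import (
    BaseModerationConfig,
    ModerationToxicityConfig,
)

# Flag content only when a toxic label is detected with confidence
# of at least 0.7 (70%). Leaving `labels` empty applies the threshold
# to all toxicity labels; a subset (e.g. ["HATE_SPEECH", "PROFANITY"])
# could be passed instead to restrict the filter.
toxicity_config = ModerationToxicityConfig(threshold=0.7)

# Combine with other moderation filters via BaseModerationConfig;
# here only the toxicity filter is enabled.
moderation_config = BaseModerationConfig(filters=[toxicity_config])
```

The resulting moderation_config can then be passed to a Comprehend-based moderation chain that accepts this configuration.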