comprehend_moderation

Comprehend Moderation is used to detect and handle Personally Identifiable Information (PII), toxicity, and unsafe prompts in text.

The LangChain experimental package includes the AmazonComprehendModerationChain class for comprehend moderation tasks. It is built on the Amazon Comprehend service and can be configured with specific moderation settings such as PII labels, redaction, toxicity thresholds, and prompt safety thresholds.

See more at https://aws.amazon.com/comprehend/
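
For example, a chain might be assembled roughly like this (a minimal sketch, assuming boto3 credentials are already configured; the region name and filter values are illustrative):

    import boto3

    from langchain_experimental.comprehend_moderation import (
        AmazonComprehendModerationChain,
        BaseModerationConfig,
        ModerationPiiConfig,
        ModerationPromptSafetyConfig,
        ModerationToxicityConfig,
    )

    # Boto3 client for the Amazon Comprehend service.
    comprehend_client = boto3.client("comprehend", region_name="us-east-1")

    # One filter config per moderation task: which PII labels to redact,
    # and score thresholds for toxicity and prompt safety.
    pii_config = ModerationPiiConfig(labels=["SSN"], redact=True, mask_character="X")
    toxicity_config = ModerationToxicityConfig(threshold=0.5)
    prompt_safety_config = ModerationPromptSafetyConfig(threshold=0.5)

    moderation_config = BaseModerationConfig(
        filters=[pii_config, toxicity_config, prompt_safety_config]
    )

    moderation_chain = AmazonComprehendModerationChain(
        moderation_config=moderation_config,
        client=comprehend_client,
        verbose=True,
    )

The resulting chain can be composed with prompts and LLMs like any other chain, moderating text as it passes through.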

The Amazon Comprehend service is used by several other classes:

  • ComprehendToxicity class is used to check the toxicity of text prompts using the AWS Comprehend service and take action based on the configuration

  • ComprehendPromptSafety class is used to validate the safety of given prompt text, raising an error if unsafe content is detected based on the specified threshold (see the error-handling sketch after this list)

  • ComprehendPII class is designed to handle Personally Identifiable Information (PII) moderation tasks, detecting and managing PII entities in text inputs
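
When a filter detects a violation, the chain raises the corresponding moderation exception listed under Classes below. A minimal sketch of guarding an invocation, reusing the moderation_chain built above ("input" is assumed to be the chain's default input key):

    from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
        ModerationPiiError,
        ModerationPromptSafetyError,
        ModerationToxicityError,
    )

    try:
        # Note: whether an error is raised depends on the filter config;
        # with redact=True the chain masks PII instead of raising.
        result = moderation_chain.invoke({"input": "My SSN is 123-45-6789."})
    except ModerationPiiError:
        print("Blocked: PII detected in the text.")
    except ModerationToxicityError:
        print("Blocked: toxic content detected.")
    except ModerationPromptSafetyError:
        print("Blocked: unsafe prompt detected.")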

Classes

comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain

Moderation chain, based on the Amazon Comprehend service.

comprehend_moderation.base_moderation.BaseModeration(client)

Base class for moderation.

comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler()

Base class for moderation callback handlers; see the subclassing sketch after this class list.

comprehend_moderation.base_moderation_config.BaseModerationConfig

Base configuration settings for moderation.

comprehend_moderation.base_moderation_config.ModerationPiiConfig

Configuration for PII moderation filter.

comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig

Configuration for Prompt Safety moderation filter.

comprehend_moderation.base_moderation_config.ModerationToxicityConfig

Configuration for Toxicity moderation filter.

comprehend_moderation.base_moderation_exceptions.ModerationPiiError([...])

Exception raised if PII entities are detected.

comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError([...])

Exception raised if unsafe prompts are detected.

comprehend_moderation.base_moderation_exceptions.ModerationToxicityError([...])

Exception raised if toxic entities are detected.

comprehend_moderation.pii.ComprehendPII(client)

Class to handle Personally Identifiable Information (PII) moderation.

comprehend_moderation.prompt_safety.ComprehendPromptSafety(client)

Class to handle prompt safety moderation.

comprehend_moderation.toxicity.ComprehendToxicity(client)

Class to handle toxicity moderation.
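
For custom handling of moderation results, BaseModerationCallbackHandler can be subclassed and attached to the chain. A hedged sketch (the async on_after_* hook names follow the base class; the "moderation_status" beacon key is an assumption based on the library's callback payload):

    from langchain_experimental.comprehend_moderation import BaseModerationCallbackHandler

    class LoggingModerationCallback(BaseModerationCallbackHandler):
        """Log the moderation beacon emitted after each filter runs."""

        async def on_after_pii(self, moderation_beacon, unique_id):
            # "moderation_status" is assumed to report whether labels were found.
            print(f"[{unique_id}] PII:", moderation_beacon["moderation_status"])

        async def on_after_toxicity(self, moderation_beacon, unique_id):
            print(f"[{unique_id}] Toxicity:", moderation_beacon["moderation_status"])

        async def on_after_prompt_safety(self, moderation_beacon, unique_id):
            print(f"[{unique_id}] Prompt safety:", moderation_beacon["moderation_status"])

The handler is passed via the chain's moderation_callback parameter, optionally together with a unique_id string for tagging requests.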