ModerationToxicityError

class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError(
message: str = 'The prompt contains toxic content and cannot be processed',
)

Exception raised if toxic entities are detected.

Parameters:

message (str) -- explanation of the error.
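A minimal sketch of how this exception can be raised and handled. The `moderate` helper below is hypothetical (it stands in for a real toxicity check, such as the one performed by the Comprehend moderation chain); the exception class and its default message come from the reference above.

```python
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
    ModerationToxicityError,
)


def moderate(prompt: str) -> str:
    # Hypothetical guard standing in for a real toxicity check.
    if "toxic" in prompt:
        # Raised with the documented default message.
        raise ModerationToxicityError()
    return prompt


try:
    moderate("some toxic prompt")
except ModerationToxicityError as exc:
    # Handle the blocked prompt, e.g. log it or return a safe fallback.
    print(f"Moderation blocked the prompt: {exc}")
```

A custom `message` string can be passed to the constructor to override the default explanation.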