ComprehendToxicity#

class langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity(client: Any, callback: Any | None = None, unique_id: str | None = None, chain_id: str | None = None)[source]#

Class to handle toxicity moderation.

Methods

__init__(client[, callback, unique_id, chain_id])

validate(prompt_value[, config])

Check the toxicity of a given text prompt using the AWS Comprehend service and apply actions based on configuration.

Parameters:
  • client (Any)

  • callback (Any | None)

  • unique_id (str | None)

  • chain_id (str | None)

__init__(client: Any, callback: Any | None = None, unique_id: str | None = None, chain_id: str | None = None) → None[source]#
Parameters:
  • client (Any)

  • callback (Any | None)

  • unique_id (str | None)

  • chain_id (str | None)

Return type:

None
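
Example (a minimal construction sketch, assuming boto3 is installed and AWS credentials are configured; client is the only required argument):

    import boto3

    from langchain_experimental.comprehend_moderation.toxicity import (
        ComprehendToxicity,
    )

    # "comprehend" is the boto3 service name for AWS Comprehend; the
    # region is an illustrative choice.
    comprehend_client = boto3.client("comprehend", region_name="us-east-1")
    toxicity = ComprehendToxicity(client=comprehend_client)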

validate(prompt_value: str, config: Any = None) → str[source]#

Check the toxicity of a given text prompt using the AWS Comprehend service and apply actions based on configuration.

Parameters:
  • prompt_value (str) – The text content to be checked for toxicity.

  • config (Dict[str, Any]) – Configuration for toxicity checks and actions.

Returns:

The original prompt_value if allowed or no toxicity found.

Return type:

str

Raises:
  • ValueError – If the prompt contains toxic labels and cannot be processed based on the configuration.
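
Example (a hedged usage sketch: the docstring types config as Dict[str, Any], but the threshold and labels keys below are illustrative assumptions rather than a documented schema; the label names are AWS Comprehend toxicity labels):

    # Assumes `toxicity` was constructed as in the __init__ example above.
    # The "threshold" and "labels" keys are assumptions for illustration.
    config = {"threshold": 0.5, "labels": ["PROFANITY", "HATE_SPEECH"]}

    try:
        # Returns the original prompt_value when no toxicity is found
        # or the configuration allows it through.
        checked = toxicity.validate("user-supplied text to screen", config)
    except ValueError:
        # Raised when the prompt contains toxic labels and cannot be
        # processed based on the configuration.
        checked = None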