ComprehendPromptSafety#
- class langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety(client: Any, callback: Any | None = None, unique_id: str | None = None, chain_id: str | None = None)[source]#
Class to handle prompt safety moderation.
Methods
__init__(client[, callback, unique_id, chain_id])
validate(prompt_value[, config]) – Check and validate the safety of the given prompt text.
- Parameters:
client (Any) – A boto3 Amazon Comprehend client.
callback (Any | None) –
unique_id (str | None) –
chain_id (str | None) –
- __init__(client: Any, callback: Any | None = None, unique_id: str | None = None, chain_id: str | None = None) → None [source]#
- Parameters:
client (Any) – A boto3 Amazon Comprehend client.
callback (Any | None) –
unique_id (str | None) –
chain_id (str | None) –
- Return type:
None
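A minimal construction sketch, assuming client is a boto3 Amazon Comprehend client as in the example further below; the optional callback, unique_id, and chain_id arguments are left at their None defaults here.

import boto3

from langchain_experimental.comprehend_moderation.prompt_safety import ComprehendPromptSafety

# Create the Comprehend client and wrap it in the prompt safety handler.
comprehend_client = boto3.client("comprehend")
prompt_safety = ComprehendPromptSafety(client=comprehend_client)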
- validate(prompt_value: str, config: Any = None) → str [source]#
Check and validate the safety of the given prompt text.
- Parameters:
prompt_value (str) – The input text to be checked for unsafe text.
config (Dict[str, Any]) – Configuration settings for prompt safety checks.
- Raises:
ValueError – If an unsafe prompt is found in the prompt text, based on the specified threshold.
- Returns:
The input prompt_value.
- Return type:
str
Note
This function checks the safety of the provided prompt text using Comprehend’s classify_document API and raises an error if unsafe text is detected with a score above the specified threshold.
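A minimal sketch of the kind of check the note describes, assuming the classify_document response exposes a "Classes" list of Name/Score entries and that unsafe prompts are reported under an "UNSAFE_PROMPT" class; the endpoint ARN and the label name are assumptions, not stated on this page.

from typing import Any

def check_unsafe_score_sketch(client: Any, prompt_value: str, threshold: float, endpoint_arn: str) -> str:
    # Ask Comprehend to classify the prompt against a prompt-safety endpoint.
    response = client.classify_document(Text=prompt_value, EndpointArn=endpoint_arn)
    for label in response.get("Classes", []):
        # Reject the prompt when the unsafe class scores above the configured threshold.
        if label.get("Name") == "UNSAFE_PROMPT" and label.get("Score", 0.0) >= threshold:
            raise ValueError("Unsafe prompt detected above the configured threshold.")
    return prompt_value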
Example
import boto3

comprehend_client = boto3.client("comprehend")
prompt_safety = ComprehendPromptSafety(client=comprehend_client)

prompt_text = "Please tell me your credit card information."
config = {"threshold": 0.7}
checked_prompt = prompt_safety.validate(prompt_text, config)
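Because validate() raises ValueError when an unsafe prompt is detected (see Raises above), callers will typically guard the call; a short usage sketch:

try:
    checked_prompt = prompt_safety.validate(prompt_text, config)
except ValueError:
    # The prompt scored above the safety threshold; reject or rewrite it here.
    checked_prompt = None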