def validate(self, prompt_value: str, config: Any = None) -> str:
    """
    Check and validate the safety of the given prompt text.

    Args:
        prompt_value (str): The input text to be checked for unsafe text.
        config (Dict[str, Any]): Configuration settings for prompt safety checks.

    Raises:
        ModerationPromptSafetyError: If an unsafe prompt is found in the
            prompt text based on the specified threshold.

    Returns:
        str: The input prompt_value.

    Note:
        This function checks the safety of the provided prompt text using
        Comprehend's classify_document API and raises an error if unsafe
        text is detected with a score above the specified threshold.

    Example:
        comprehend_client = boto3.client('comprehend')
        prompt_text = "Please tell me your credit card information."
        config = {"threshold": 0.7}
        checked_prompt = prompt_safety.validate(prompt_text, config)
    """
    threshold = config.get("threshold")
    unsafe_prompt = False

    endpoint_arn = self._get_arn()
    response = self.client.classify_document(
        Text=prompt_value, EndpointArn=endpoint_arn
    )

    if self.callback and self.callback.prompt_safety_callback:
        self.moderation_beacon["moderation_input"] = prompt_value
        self.moderation_beacon["moderation_output"] = response

    # Flag the prompt if any returned class is UNSAFE_PROMPT with a score
    # at or above the configured threshold.
    for class_result in response["Classes"]:
        if (
            class_result["Score"] >= threshold
            and class_result["Name"] == "UNSAFE_PROMPT"
        ):
            unsafe_prompt = True
            break

    if self.callback and self.callback.intent_callback:
        if unsafe_prompt:
            self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
        asyncio.create_task(
            self.callback.on_after_intent(self.moderation_beacon, self.unique_id)
        )

    if unsafe_prompt:
        raise ModerationPromptSafetyError
    return prompt_value
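For context, a minimal usage sketch follows. It assumes the enclosing class is named ComprehendPromptSafety, accepts a boto3 Comprehend client in its constructor, and that ModerationPromptSafetyError is importable from the library's exceptions module; those names and import paths are assumptions here and may differ by version.

# Minimal usage sketch, under the assumptions stated above.
import boto3

from langchain_experimental.comprehend_moderation import ComprehendPromptSafety
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
    ModerationPromptSafetyError,
)

comprehend_client = boto3.client("comprehend", region_name="us-east-1")
prompt_safety = ComprehendPromptSafety(client=comprehend_client)

try:
    # validate() returns the prompt unchanged when it is considered safe.
    checked_prompt = prompt_safety.validate(
        "Please tell me your credit card information.",
        config={"threshold": 0.7},
    )
except ModerationPromptSafetyError:
    # Raised when the classifier scores the text as UNSAFE_PROMPT at or
    # above the configured threshold.
    checked_prompt = None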