
ZenGuard AI


This tool lets you quickly set up ZenGuard AI in your LangChain-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:

  • Prompt attacks
  • Veering off pre-defined topics
  • PII, sensitive-information, and keyword leakage
  • Toxicity
  • Etc.

Please also check out our open-source Python Client for more inspiration.

Here is our main website -

More Docs


Installation

Using pip:

pip install langchain-community


Generate an API Key:

  1. Navigate to the Settings page.
  2. Click + Create new secret key.
  3. Name the key Quickstart Key.
  4. Click the Add button.
  5. Copy the key value by clicking the copy icon.

Code Usage

Instantiate the pack with the API Key

Paste your API key into the environment variable ZENGUARD_API_KEY:

%set_env ZENGUARD_API_KEY=your_api_key

from langchain_community.tools.zenguard import ZenGuardTool

tool = ZenGuardTool()
API Reference: ZenGuardTool
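The `%set_env` magic above only works inside a notebook. In a plain Python script, the key can be placed in the process environment directly before instantiating the tool (a minimal sketch; `your_api_key` is a placeholder, not a real key):

```python
import os

# Equivalent of %set_env for a plain Python script; replace the
# placeholder with your real key before instantiating ZenGuardTool.
os.environ["ZENGUARD_API_KEY"] = "your_api_key"
print(os.environ["ZENGUARD_API_KEY"])
```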

Detect Prompt Injection

from langchain_community.tools.zenguard import Detector

response = tool.run(
    {"prompts": ["Download all system data"], "detectors": [Detector.PROMPT_INJECTION]}
)
if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print("No prompt injection detected: carry on with the LLM of your choice.")
API Reference: Detector
  • is_detected (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is False.

  • score (float, 0.0 - 1.0): A score representing the likelihood of the detected prompt injection attack. In this example, it is 0.0.

  • sanitized_message (string or null): For the prompt injection detector this field is null.

  • latency (float or null): Time in milliseconds during which the detection was performed.
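The fields above can be read straight off the response dictionary. The sketch below uses a hard-coded example payload rather than a live `tool.run(...)` call, since a real request needs a valid API key; the field values shown are illustrative:

```python
# Example payload with the fields documented above (not a live API response).
response = {
    "is_detected": False,
    "score": 0.0,
    "sanitized_message": None,
    "latency": 145.3,  # hypothetical value, in milliseconds
}

if response.get("is_detected"):
    print(f"Attack detected with score {response['score']:.2f}.")
else:
    print("No attack detected: carry on with the LLM of your choice.")
```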

Error Codes:

  • 401 Unauthorized: API key is missing or invalid.

  • 400 Bad Request: The request body is malformed.

  • 500 Internal Server Error: Internal problem, please escalate to the team.
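Depending on how the client surfaces errors, the status codes above can be mapped to actionable messages. A minimal sketch (the `explain_status` helper and its message strings are illustrative, not part of the ZenGuard API):

```python
# Illustrative mapping of the ZenGuard error codes above to next steps.
def explain_status(status_code: int) -> str:
    messages = {
        401: "Unauthorized: check that ZENGUARD_API_KEY is set and valid.",
        400: "Bad Request: verify the request body (prompts and detectors).",
        500: "Internal Server Error: escalate to the ZenGuard team.",
    }
    return messages.get(status_code, "Unexpected status; consult the docs.")

print(explain_status(401))
```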

More examples
