# Build secure and controlled AI Agents with Guardrails

This quickstart shows you how to use **guardrails** to automatically detect and mask PII (Personally Identifiable Information) while enforcing business rules with custom patterns, keeping your integrations secure and compliant.

## What are guardrails?

**Guardrails** act as security layers between the user and the LLM. In Digibee, they:

* **Detect and mask sensitive data:** Identify and mask personal identifiers (CPF, CNPJ), financial data (credit cards, IBAN), contact details (email, phone numbers), technical identifiers (IP addresses, URLs), crypto wallet addresses, and other configurable patterns.
* **Validate input patterns:** Block specific data formats before processing the request to ensure the AI follows a predefined pattern.

## Prerequisites

Before you begin, make sure you have:

* An API key from an LLM provider (for example, OpenAI, Anthropic, or Google).
* The API key registered in Digibee as a **Secret Key** account. For details, see [how to create a Secret Key account](https://app.gitbook.com/s/jvO5S91EQURCEhbZOuuZ/platform-administration/settings/accounts#secret-key).

## Initial setup

Add the **Agent Component** to your pipeline immediately after the trigger and configure it as follows:

* **Model:** Select your preferred model (for example, OpenAI – GPT-4o Mini).
* **Account:** Click the gear icon next to the Model parameter, go to **Account**, and select the Secret Key account you created in Digibee.

Once the basic configuration is complete, you are ready to configure your guardrails.

## Scenario

You are building an Agent to process refund requests. It must identify whether the request is valid, never send sensitive customer data to the AI provider, and return its output in a machine-readable format suitable for storing in a database.

To ensure data privacy and compliance, configure the Agent with the following messages:

**System Message**

```
You are a reimbursement screening assistant. From the user request, return:
1. The order code exactly as shown,
2. Whether an order code is present,
3. A brief summary without sensitive data.
Output JSON only.
```

**User Message**

```
{{ message.$ }}
```

## Step-by-step

### 1. Enable PII detection (Masking) and JSON Schema validation

Protect your customers' privacy by ensuring sensitive data is never sent in plain text to the LLM provider.

1. Open the **Agent Component** and click the gear icon (⚙️) next to **Guardrails**.
2. In the settings, enable **Mask on detection**.
3. Select the specific patterns you want to protect. For this example, select:
   * **CPF detection (Brazilian national identifier for individuals)**
   * **Email detection**
4. After selecting the patterns, enable **JSON Schema** to validate the model’s response against the defined structure. If the response does not match the schema, the Agent automatically requests a correction; if validation still fails, the execution ends with an error.

### 2. Add Custom Regex validation

Add a custom pattern to ensure that the Agent only processes legitimate internal order codes:

1. On the same Guardrails page, enable **Regex**.
2. Configure:
   * **Pattern Name:** `Valid_Order_Code`
   * **Regex Pattern:** `REF-\d{4}`
3. Click **Save**.
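Before saving, you can sanity-check the expression locally. This Python sketch applies the same pattern — for illustration only; in production, Digibee evaluates the pattern inside the Agent Component:

```python
import re

# Same expression configured in the Regex guardrail above.
ORDER_CODE = re.compile(r"REF-\d{4}")

def find_order_codes(text: str) -> list[str]:
    """Return all order codes matching the REF-#### format."""
    return ORDER_CODE.findall(text)

print(find_order_codes("Refund for order REF-9988, not REF-12."))
# → ['REF-9988']
```

Note that `REF-12` is rejected because the pattern requires exactly four digits after the hyphen.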

### 3. Ensure structured output (JSON Schema)

Guardrails also monitor the model's output. You enabled the **JSON Schema** option in Step 1; now define the schema itself to enforce a machine-readable response. Once both configurations are in place, the Agent validates each response against this schema and automatically retries if the output doesn't comply on the first attempt.

1. Click the gear icon (⚙️) next to **Model** and enable **Use JSON Schema**.
2. Define the output schema in the **JSON Schema definition** field (ensure the `$schema` tag is present):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": [
    "order_id",
    "order_code_present",
    "customer_summary",
    "email_present"
  ],
  "properties": {
    "order_id": {
      "type": "string",
      "description": "The order identifier extracted from the text"
    },
    "order_code_present": {
      "type": "boolean",
      "description": "True if an order code is present in the input."
    },
    "email_present": {
      "type": "boolean",
      "description": "True if an email address is present in the input."
    },
    "customer_summary": {
      "type": "string",
      "description": "A summary without sensitive data."
    }
  }
}
```
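To make the validation concrete, the sketch below hand-rolls the two checks this schema expresses (required fields and types). It is an illustration only, not Digibee's validator — the platform applies full JSON Schema draft-07 semantics:

```python
# Required fields and expected Python types, mirroring the schema above.
REQUIRED = {
    "order_id": str,
    "order_code_present": bool,
    "customer_summary": str,
    "email_present": bool,
}

def violations(response: dict) -> list[str]:
    """Return a list of schema problems (an empty list means the response passes)."""
    problems = []
    for field, expected_type in REQUIRED.items():
        if field not in response:
            problems.append(f"missing required field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

print(violations({"order_id": "REF-9988", "order_code_present": True}))
# → ['missing required field: customer_summary', 'missing required field: email_present']
```

Any non-empty result is what triggers the Agent's correction request described in Step 1.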

### 4. Test the Guardrails

Enter the following input, then click **Run** to execute the test. The **Output Details** section shows how the protection is applied.

**Input:**

```json
{
  "body": {
    "text": "Hello, my name is João, my email is joao@email.com, and my CPF is 000.111.222-33. I would like a refund for order REF-9988."
  }
}
```

**Output (Final result):**

```json
{
  "success": true,
  "body": {
    "order_id": "{{{REDACTED-custom-regex-valid_order_code}}}",
    "order_code_present": true,
    "email_present": true,
    "customer_summary": "The customer is requesting a refund for an order. Personal identifiers like email and CPF have been redacted."
  },
  "tokenUsage": {
    "inputTokenCount": 209,
    "outputTokenCount": 55,
    "totalTokenCount": 264
  }
}
```

## How it works

To ensure security and consistency, the Agent Component follows this internal process:

### Data redaction

Before the prompt is sent to the LLM (OpenAI, Google, or others), Digibee scans the text for sensitive data. If information such as a CPF, email address, or other identifiable data is detected, the Platform replaces it with a placeholder. This ensures the provider never receives the real data, helping you stay compliant with privacy laws.
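The redaction step can be pictured with a small Python sketch. The patterns below are simplified illustrations; Digibee's built-in detectors are more robust than these regexes:

```python
import re

# Illustrative detectors only (simplified CPF and email patterns).
PATTERNS = {
    "cpf": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask(text: str) -> str:
    """Replace each detected value with a typed placeholder before the LLM call."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"{{{{{{REDACTED-{name}}}}}}}", text)
    return text

print(mask("my email is joao@email.com and my CPF is 000.111.222-33"))
# → my email is {{{REDACTED-email}}} and my CPF is {{{REDACTED-cpf}}}
```

The LLM only ever sees the placeholders, which is why the final output in the test above contains `{{{REDACTED-...}}}` tokens instead of real values.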

### Reprompt logic

When JSON Schema validation is enabled, the Agent validates the model’s response against the defined schema. If the response doesn’t match the expected structure (for example, a required field is missing), the Agent automatically sends a correction request to the model. If the response still fails validation, the execution ends with an error.
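The retry flow can be sketched as follows. The `call_model` and `validate` helpers are hypothetical stand-ins — the real Agent Component handles all of this internally:

```python
def run_with_schema_validation(prompt: str, call_model, validate, max_retries: int = 1):
    """Validate the model's response; reprompt on failure, then error out."""
    response = call_model(prompt)
    for _ in range(max_retries):
        errors = validate(response)  # empty list means the response passes
        if not errors:
            return response
        # Ask the model to correct the specific validation failures.
        response = call_model(f"{prompt}\nFix these issues and return JSON only: {errors}")
    if validate(response):
        raise ValueError("Response failed schema validation after retry")
    return response
```

The key design choice is that the correction request names the concrete failures, giving the model a targeted chance to fix its output before the execution errors out.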

## Result

Kudos! You've built a secure, production-ready AI Agent with masked data, enforced business rules via Regex, and outputs guaranteed by JSON Schema validation.

