# Agent Component — Complete configuration guide

Create your own AI Agent with the **Agent Component**. This component abstracts the APIs of major LLM providers so you can easily perform tasks such as text classification, information extraction, summarization, content evaluation, and more — all inside your Digibee pipelines.

This guide walks you through:

* Understanding each configuration option
* Exploring advanced features like tools, retrieval, guardrails, and mock responses

{% hint style="success" %}
**Tip:** You can find more **real-world examples** of the Agent Component in the [**Use Cases section**](https://app.gitbook.com/s/aD6wuPRxnEQEsYpePq36/ai-practical-examples).
{% endhint %}

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FOCD8KW1ToykxDignqSON%2Fagent-component-overview.gif?alt=media&#x26;token=9af24928-71b2-4fc5-842b-dc8a3a9f1249" alt=""><figcaption></figcaption></figure>

## **Basic configuration**

Complete the two steps below and your Agent is ready for simple, direct tasks.

{% stepper %}
{% step %}

### **Select the model**

Choose the LLM provider and model your Agent will use. Supported providers:

* **Anthropic Claude**
* **DeepSeek**
* **Google Gemini**
* **OpenAI**
* **Azure OpenAI**
* **Custom Model**
* **Azure OpenAI (Private)**
* **Amazon Bedrock (Private)**

The **Custom Model** option lets you work with a model that isn't in the predefined list; you provide the model name yourself.

After selecting the provider, click the **gear icon** (⚙) to open the **Model Configuration** page. Complete the fields below.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FFMAg8uRv6ONDfPXFENps%2Fmodel-configuration.gif?alt=media&#x26;token=869e3b31-9938-4cfd-84b8-0e1cf7378795" alt=""><figcaption></figcaption></figure>

#### **Account Configuration**

These settings define how Digibee authenticates with the selected LLM provider. If no account is selected, the connector will not operate. Some providers also require a **custom Endpoint**, depending on how they expose their models.

Private providers — such as **Azure OpenAI (Private)** and **Amazon Bedrock (Private)** — always require a custom endpoint. When you select one of these providers, make sure you provide the correct endpoint for your environment.

For **Amazon Bedrock**, additional configuration is required:

* Select the **AWS-V4** account type, the authentication method supported by Bedrock.
* Because Digibee does not offer predefined models for Bedrock, you must enable **Custom Model** and enter the ARN (Amazon Resource Name) of the model you want to use from your AWS account.
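A Bedrock model ARN follows AWS's standard `foundation-model` format; for example (the region and model ID below are illustrative, not a recommendation):

```
arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0
```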

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Account</strong></td><td>The account used to authenticate with the LLM provider. It must be previously registered in <a href="https://app.gitbook.com/s/jvO5S91EQURCEhbZOuuZ/platform-administration/settings/accounts"><strong>Accounts</strong></a>. Supported types: <strong>Secret Key</strong> or <strong>AWS-V4</strong> (for Bedrock).</td><td>Account</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Certificate</strong></td><td>The account used to enable secure HTTPS communication with external endpoints. It must be previously registered in <a href="https://app.gitbook.com/s/jvO5S91EQURCEhbZOuuZ/platform-administration/settings/accounts"><strong>Accounts</strong></a>. Supported type: <strong>Certificate Chain</strong>.</td><td>Account</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Endpoint</strong></td><td>Custom endpoint for the selected provider. Required for private providers (such as Azure OpenAI or Amazon Bedrock).</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Allow Insecure Connections</strong></td><td>Allows the client to accept insecure SSL connections. Recommended only for testing or internal environments.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr></tbody></table>

If you select a private model, you must also configure the **Custom Model Name**. If you choose to fully customize the model, you must set both the **Custom Model Name** and a compatible provider.

#### **Model Parameters**

These values determine the model’s personality and behavior. You can adjust them as needed or keep the default settings.

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Temperature</strong></td><td>Controls creativity. Lower values = more focused; higher values = more diverse.</td><td>Float</td><td>✅</td><td><code>0.7</code></td></tr><tr><td><strong>Max Output Tokens</strong></td><td>Limits the number of tokens in the output.</td><td>Integer</td><td>✅</td><td><code>1024</code></td></tr><tr><td><strong>Top K</strong></td><td>Limits how many word options the model considers at each step. Lower = safer; higher = more diverse.</td><td>Integer</td><td>✅</td><td><code>64</code></td></tr><tr><td><strong>Top P</strong></td><td>Controls variability using probability thresholds. Lower = more focused answers.</td><td>Integer</td><td>✅</td><td><code>1</code></td></tr><tr><td><strong>Frequency Penalty</strong></td><td>Reduces repetition of identical words.</td><td>Integer</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Presence Penalty</strong></td><td>Encourages the model to bring in new ideas instead of staying in the same topic.</td><td>Float</td><td>✅</td><td>N/A</td></tr></tbody></table>
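To build intuition for how **Top K** and **Top P** narrow the model's choices, here is a conceptual sketch in Python. This is an illustration of the sampling idea only, not the provider's actual implementation:

```python
import math

def filter_top_k_top_p(logits, top_k=3, top_p=0.9):
    """Conceptual sketch of Top K / Top P filtering (not a provider's code)."""
    # Convert raw scores (logits) to probabilities with a softmax
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Top K: keep only the k most likely tokens
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Top P: keep the smallest prefix whose cumulative probability reaches top_p
    kept, cumulative = [], 0.0
    for tok, p in ranked:
        kept.append(tok)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# Hypothetical next-token scores: lowering top_p focuses the candidate set
candidates = {"the": 2.0, "a": 1.5, "dog": 0.5, "xylophone": -2.0}
print(filter_top_k_top_p(candidates, top_k=3, top_p=0.8))
```

Lower values of either parameter shrink the candidate set, which is why the table above describes them as "safer"/"more focused".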

#### **Advanced Settings**

Use these options to fine-tune the model’s performance and resilience. You can modify them if necessary or keep the default configuration.

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Timeout</strong></td><td>Maximum allowed time (in ms) for the operation to complete.</td><td>Integer</td><td>✅</td><td><code>30000</code></td></tr><tr><td><strong>Max retries</strong></td><td>Number of retry attempts after a failure.</td><td>Integer</td><td>✅</td><td><code>3</code></td></tr><tr><td><strong>Custom Parameters</strong></td><td>Additional custom parameters to include in the model API request.</td><td>Key-value pair</td><td>❌</td><td>N/A</td></tr></tbody></table>

#### **Response Format**

Define how the model should structure the output. When supported by the provider, setting a JSON Schema is recommended to ensure more precise and consistent responses.

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Use JSON Schema</strong></td><td>Enables providing a JSON Schema to enforce response structure.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>JSON Schema definition</strong></td><td>The Schema defining the expected structure.</td><td>JSON</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Use JSON Mode</strong></td><td>Enables sending a JSON example to guide the response.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>JSON definition</strong></td><td>The JSON example the model should follow.</td><td>JSON</td><td>❌</td><td>N/A</td></tr></tbody></table>
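As an example of the kind of schema you might provide, here is a hypothetical JSON Schema for a ticket-classification Agent, plus a minimal Python check of a sample response against it. The field names are illustrative, and the validator below covers only required keys, types, and enums (a full validator such as the `jsonschema` library covers the complete specification):

```python
import json

# Hypothetical schema for a ticket-classification Agent (field names are illustrative)
schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "technical", "other"]},
        "confidence": {"type": "number"},
    },
    "required": ["category", "confidence"],
}

def conforms(payload, schema):
    """Minimal required-keys / type / enum check against the schema above."""
    types = {"string": str, "number": (int, float), "object": dict}
    for key in schema["required"]:
        if key not in payload:
            return False
    for key, spec in schema["properties"].items():
        if key in payload and not isinstance(payload[key], types[spec["type"]]):
            return False
        if "enum" in spec and key in payload and payload[key] not in spec["enum"]:
            return False
    return True

# A sample model response, parsed from JSON text
response = json.loads('{"category": "billing", "confidence": 0.92}')
print(conforms(response, schema))
```

Constraining the output this way makes downstream pipeline steps simpler, because they can rely on the fields being present and well-typed.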
{% endstep %}

{% step %}

### **Configure the messages**

Messages define how the model behaves and how it interprets your prompt.

* **System message**: Describes rules, tone, and general behavior.
  * Example: “You are a senior data analyst…”
* **User message**: The actual prompt. You can inject pipeline data using Double Braces.
  * Example: “Summarize the content below…”
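Put together, a hypothetical message pair for a summarization Agent could look like this (the Double Braces path is illustrative; adjust it to match your pipeline's payload):

```
System message:
You are a senior support analyst. Answer in English, in at most three sentences.

User message:
Summarize the support ticket below and classify its urgency:
{{ message.ticket.description }}
```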

After adding your messages, you’re ready to test the Agent.

{% hint style="warning" %}
Every execution consumes tokens from your provider. To reduce token usage when testing the full pipeline, consider adding a [**mock response**](#configure-mock-response).
{% endhint %}

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FzCZnQe2bBQ2h0qkGiURO%2Fmessage-configuration.gif?alt=media&#x26;token=3992fece-2759-4279-ba2d-13a52fa48bb6" alt=""><figcaption></figcaption></figure>
{% endstep %}
{% endstepper %}

## **Advanced configuration**

Optional settings let you extend and customize the behavior of the Agent. These are not required, but they are useful when you need integrations, external context retrieval, validation, or controlled testing.

{% stepper %}
{% step %}

### **Add tools**

Tools expand what the Agent can do. Use them when the Agent needs to:

* Call APIs
* Access internal systems
* Trigger other pipelines
* Fetch external web content

If you're only generating text or doing classification, you can skip tools.

Here’s how to configure the supported tool types.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FSUkLZmeNrLXxdYiclhGG%2Ftools-configuration.gif?alt=media&#x26;token=da524848-a6e3-419d-b6ac-24b2c661a585" alt=""><figcaption></figcaption></figure>

#### **MCP Server**

A **Model Context Protocol (MCP)** server handles communication between the AI model and source systems, enabling the LLM to securely access external tools and data.

**Add MCP Tools From Registry**

If you have pipelines deployed with the [**MCP Server Trigger**](https://docs.digibee.com/documentation/connectors-and-triggers/triggers/web-protocols/mcp-server) in **Test** or **Prod**, they will be available in the **MCP From Registry** tab.

In this tab, you can view:

* The pipeline name
* The authentication type
* The configured tools and their respective parameters
* The version of the deployed pipeline
* MCP pipelines currently used as tools in the component (identified with the **In use** tag)

To add an MCP pipeline as a tool in your Agent Component, select the desired pipeline and click **Add Tool**. You will be redirected to the **Add MCP Tools** tab to complete the configuration.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FdO6IOhzOS45pjDYnTEA2%2Fen-mcp-tools-from-registry.gif?alt=media&#x26;token=1b1e0e97-a4cf-4f2b-85c0-9470cc00bd97" alt=""><figcaption></figcaption></figure>

**Add MCP Tools**

Use this tab to configure the MCP Tool.

If the MCP pipeline was selected through the **MCP From Registry** tab, some parameters are automatically pre-filled:

* **Name:** Pre-filled with the pipeline name.
* **Server URL:** Pre-filled with the pipeline endpoint.
* **Headers:** If an authentication method is configured, the **Key** field is automatically populated according to the authentication type:
  * If **API Key** is enabled, the header is `apiKey`.
  * If **Token JWT** is enabled, the header is `Authorization`.
  * If **External JWT** is enabled, the header is `externalJWT`.

You must manually provide the corresponding **Value**.

Refer to the parameter descriptions below for more details about each configuration option.

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Name</strong></td><td>Identifier for this MCP server.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Server URL</strong></td><td>Endpoint that receives the request.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Account</strong></td><td>The account to authenticate with. Must be configured in <strong>Accounts</strong>. Supported type: <strong>Secret Key</strong>.</td><td>Account</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Custom Account</strong></td><td>A secondary account, referenced via Double Braces, for example <code>{{ account.custom.value }}</code>.</td><td>String</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Certificate Account</strong></td><td>The account used to enable secure HTTPS communication with external endpoints. It must be previously registered in <strong>Accounts</strong>. Supported type: <a href="https://app.gitbook.com/s/jvO5S91EQURCEhbZOuuZ/platform-administration/settings/accounts#certificate-chain"><strong>Certificate Chain</strong></a>.</td><td>Account</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Headers</strong></td><td>HTTP headers to include in requests.</td><td>Key-value pairs</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Query Parameters</strong></td><td>Query parameters to include in the request.</td><td>Key-value pairs</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Available Tools</strong></td><td><p>This field is available only when an MCP pipeline is selected in the <strong>MCP From Registry</strong> tab.</p><p></p><p>It displays the tools configured in the selected MCP pipeline. Enable the tools you want to make available to the Agent Component.</p></td><td>Boolean</td><td>❌</td><td>N/A</td></tr></tbody></table>
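For example, if the MCP pipeline uses **API Key** authentication, the resulting header entry could look like this (the value is a placeholder; store real secrets in **Accounts** and reference them via Double Braces rather than hard-coding them):

```
Key:   apiKey
Value: {{ account.custom.value }}
```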

#### **Content Retrieval**

Enable **Retrieval-Augmented Generation (RAG)** so the model can use external context stored in a vector database.

**Embedding Model**

An embedding model converts text into vectors that capture its meaning. Choose a provider and model that match your architecture. Supported options:

<details>

<summary><strong>Local (all-MiniLM-L6-v2)</strong></summary>

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Max Results</strong></td><td>Maximum number of vectors returned from the similarity search.</td><td>Integer</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Min Score</strong></td><td>Minimum similarity score (0.0 to 1.0) for a result to be considered relevant. Higher is more restrictive.</td><td>String</td><td>✅</td><td><code>0.7</code></td></tr></tbody></table>

</details>

<details>

<summary><strong>OpenAI</strong></summary>

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Embedding Account</strong></td><td>The OpenAI account for embeddings. Supported types: <strong>Basic</strong>, <strong>Secret Key</strong>.</td><td>Select</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Embedding Model Name</strong></td><td>Name of the embedding model, for example <code>text-embedding-3-large</code>.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Vector Dimension</strong></td><td>Number of dimensions in the embeddings. Must match your vector store’s config.</td><td>Integer</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Max Results</strong></td><td>Maximum number of similar vectors to return.</td><td>Integer</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Min Score</strong></td><td>Minimum similarity score for a result to be considered. More restrictive when higher.</td><td>String</td><td>✅</td><td><code>0.7</code></td></tr></tbody></table>

</details>

<details>

<summary><strong>Hugging Face</strong></summary>

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Embedding Account</strong></td><td>Hugging Face account credentials. Supported: <strong>Basic</strong>, <strong>Google Key</strong>.</td><td>Select</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Embedding Model Name</strong></td><td>Name of the embedding model (for example, <code>sentence-transformers/all-mpnet-base-v2</code>)</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Vector Dimension</strong></td><td>Number of dimensions in the embedding vector. Must align with your store.</td><td>Integer</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Wait for Model</strong></td><td>Whether the system should wait for the model to load (<code>true</code>) or return an error if it’s not ready (<code>false</code>).</td><td>Boolean</td><td>❌</td><td><code>true</code></td></tr><tr><td><strong>Max Results</strong></td><td>Maximum number of vectors returned.</td><td>Integer</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Min Score</strong></td><td>Minimum similarity score for relevance.</td><td>String</td><td>✅</td><td><code>0.7</code></td></tr></tbody></table>

</details>

**Vector Store**

A vector store is a database optimized to store and retrieve embeddings. You can connect to:

<details>

<summary><strong>PostgreSQL (PGVector)</strong></summary>

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Host</strong></td><td>Hostname or IP address of your PostgreSQL server.</td><td>String</td><td>✅</td><td><code>localhost</code></td></tr><tr><td><strong>Port</strong></td><td>Port number for the Postgres connection.</td><td>Number</td><td>❌</td><td><code>5432</code></td></tr><tr><td><strong>Database Name</strong></td><td>Name of the database containing the vector table.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Vector Store Account</strong></td><td>The Postgres account with credentials.</td><td>Select</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Table Name</strong></td><td>Name of the table where embeddings are stored.</td><td>String</td><td>✅</td><td><code>embeddings</code></td></tr></tbody></table>

</details>

<details>

<summary><strong>Neo4j</strong></summary>

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Database Name</strong></td><td>Name of the Neo4j database used for vector storage.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Vector Store Account</strong></td><td>Credentials for connecting to Neo4j.</td><td>Select</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Index Name</strong></td><td>Index that holds the embedding vectors.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>URI</strong></td><td>URI used to connect to the Neo4j instance.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Node Label</strong></td><td>Label for nodes storing embeddings.</td><td>String</td><td>✅</td><td><code>Document</code></td></tr><tr><td><strong>Embedding Property</strong></td><td>Node property that holds the embedding vector.</td><td>String</td><td>✅</td><td><code>embedding</code></td></tr><tr><td><strong>Text Property</strong></td><td>Node property that stores the original text or document.</td><td>String</td><td>✅</td><td><code>text</code></td></tr></tbody></table>

</details>
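Conceptually, **Max Results** and **Min Score** control a similarity search over the stored embeddings. The sketch below illustrates the idea with cosine similarity over a tiny in-memory store; real vector stores use indexes rather than a linear scan, and the documents and vectors here are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, max_results=2, min_score=0.7):
    """Sketch of the search behind Max Results and Min Score."""
    scored = [(doc, cosine(query_vec, vec)) for doc, vec in store]
    # Min Score: drop results below the relevance threshold
    relevant = [(doc, s) for doc, s in scored if s >= min_score]
    # Max Results: return only the top matches
    relevant.sort(key=lambda item: item[1], reverse=True)
    return [doc for doc, _ in relevant[:max_results]]

# Toy store of (document, embedding) pairs — values are illustrative
store = [
    ("refund policy",  [0.9, 0.1, 0.0]),
    ("billing FAQ",    [0.8, 0.3, 0.1]),
    ("release notes",  [0.0, 0.2, 0.9]),
]
print(retrieve([1.0, 0.2, 0.0], store))
```

Raising **Min Score** filters out loosely related documents (here, "release notes"), while **Max Results** caps how much context is injected into the prompt.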
{% endstep %}

{% step %}

### **Add files**

To include external documents (such as PDFs or images) as context:

1. Upload them using a storage connector (for example, **S3**, **Azure Blob**, or **Digibee Storage**).
2. Click the **plus (+)** button.
3. Enter the filename (for example, `bankslip_1.pdf`) or reference it with Double Braces.

The content will be injected into the prompt as additional context for the model.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2Fai372xHCRNTC4gQGgU9Z%2Fadd-files.gif?alt=media&#x26;token=adb9ffa0-e1d4-4edb-b188-8a36aca7571f" alt=""><figcaption></figcaption></figure>
{% endstep %}

{% step %}

### **Set up guardrails**

Use guardrails when your inputs may contain sensitive data (like PII), or when you must strictly control what is sent to the LLM.

1. Click the **gear icon (⚙)** next to **Guardrails** to open the Guardrail Settings.
2. Check the validations you want, or choose **Select All** to enable all options.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FCjSHydzRabqc12czmHkr%2Fsetup-guardrails.gif?alt=media&#x26;token=37183465-da4f-4dce-b0d5-ddc067d11854" alt=""><figcaption></figcaption></figure>

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th></tr></thead><tbody><tr><td><strong>Mask on detection</strong></td><td>Masks detected PII before sending to the LLM. If disabled, the execution fails with an error.</td></tr><tr><td><strong>CNPJ detection</strong></td><td>Detects CNPJ (Cadastro Nacional da Pessoa Jurídica) patterns in the <strong>User Message</strong>, such as <code>00.000.000/0000-00</code>. It doesn’t validate the numerical logic.</td></tr><tr><td><strong>Credit Card detection</strong></td><td>Detects credit card number patterns in the <strong>User Message</strong>, ranging from 13 to 19 digits. It doesn’t perform checksum (Luhn) validation.</td></tr><tr><td><strong>IP detection</strong></td><td>Detects IPv4 and IPv6 address patterns in the <strong>User Message</strong>. It doesn’t validate network or range correctness.</td></tr><tr><td><strong>Datetime detection</strong></td><td>Detects datetime patterns in the <strong>User Message</strong>, such as <code>2025-11-05</code>, <code>05/11/2025</code>, or <code>Nov 5, 2025</code>. It doesn’t validate date logic.</td></tr><tr><td><strong>IBAN code detection</strong></td><td>Detects IBAN (International Bank Account Number) patterns in the <strong>User Message</strong>, such as <code>GB82WEST12345698765432</code>. It doesn’t perform checksum validation.</td></tr><tr><td><strong>CPF detection</strong></td><td>Detects CPF (Cadastro de Pessoa Física) patterns in the <strong>User Message</strong>, such as <code>000.000.000-00</code>. It doesn’t validate the numerical logic.</td></tr><tr><td><strong>Email detection</strong></td><td>Detects email addresses in the <strong>User Message</strong>, such as <code>user@example.com</code>. 
It doesn’t validate domain or address existence.</td></tr><tr><td><strong>Crypto Wallet Address detection</strong></td><td>Detects crypto wallet address patterns in the <strong>User Message</strong>, such as Bitcoin addresses starting with <code>1</code> and containing 26–35 base58 characters. It doesn’t validate the checksum or blockchain type.</td></tr><tr><td><strong>Phone number detection</strong></td><td>Detects phone number patterns in the <strong>User Message</strong>, such as <code>(11) 99999-9999</code> or <code>+55 11 99999-9999</code>. It doesn’t validate carrier or region.</td></tr><tr><td><strong>URL detection</strong></td><td>Detects URL patterns in the <strong>User Message</strong>, such as <code>http://example.com</code> or <code>https://www.example.org/path</code>. It doesn’t verify link validity.</td></tr></tbody></table>
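As the table notes, these detections are pattern matches, not semantic validation. The sketch below shows the general idea behind "Mask on detection" using simplified regexes; the platform's actual patterns are not published, so these are illustrative approximations only:

```python
import re

# Simplified patterns in the spirit of the guardrails above (illustrative,
# not the platform's actual regexes)
CPF = re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def mask_pii(user_message):
    """Mimic 'Mask on detection': replace matches before text reaches the LLM."""
    masked = CPF.sub("[CPF]", user_message)
    masked = EMAIL.sub("[EMAIL]", masked)
    return masked

print(mask_pii("Customer 123.456.789-09, contact: ana@example.com"))
```

With **Mask on detection** disabled, a match would instead stop the execution with an error, as described in the table.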

On the same page, you can also:

* Enable **JSON Schema validation** for responses. The output is validated against the schema you provide. If it’s invalid, the Agent sends a reprompt requesting the correct format. If the model still returns an invalid response, the execution ends with an error.
* Define **custom regex patterns** to validate the input. Provide both the **Pattern Name** and the **Regex Pattern**.
{% endstep %}
{% endstepper %}

## **Test your Agent**

Use the Test Panel on the right side of the page to validate your configuration before running the full pipeline.

For detailed guidance on how to perform tests within the Agent Component and explore all available technical details, see the [**Build Agent tests: Datasets, Test Cases, and Evaluations**](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/ai-tools/llm/testing-your-agent) documentation.

{% hint style="info" %}
Each **successful test** will consume tokens, and the number of tokens used is shown in the output. If you want to test the entire pipeline with a specific Agent Component response, you can [**mock this response**](#configure-mock-response) during the pipeline run to avoid AI usage costs.
{% endhint %}

## **Version the component**

The Agent Component supports **optional versioning**. You can apply the Agent directly to the pipeline or save versions to track and reuse different configurations over time.

{% hint style="info" %}
You can start versioning the component only after the pipeline is saved.
{% endhint %}

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FCvJSMjea9aLbMdC0UF31%2FAgent%20Component%20-%20Versioning%20v2.gif?alt=media&#x26;token=e26353c0-7e0c-4557-bcc9-410067dcf41d" alt=""><figcaption></figcaption></figure>

### **Without versioning**

By default, the component opens in **Draft** mode. If you choose not to use versioning, configure the component and click **Confirm** in the upper-right corner to apply the configuration directly to the pipeline.

Only the current configuration is stored, and no version history is created.

### **With versioning**

Versioning allows you to save, organize, and reuse different configurations of the Agent Component.

To create or manage versions:

1. After making changes to the component, click **Save** next to the **Draft** indicator in the upper-left corner and select **Create new version**.
2. In the modal, select **Create new version** or **Update existing version**, depending on whether versions already exist.
3. Optionally, add tags to help identify the version, such as the model name (for example, *OpenAI – gpt-4o*).
4. Click **Save**.

Each saved version receives a version number, such as v1, along with any defined tags. Once created, versions cannot be deleted.

To apply a saved version to the pipeline, click **Confirm**. The version currently applied is marked with the **On pipe** tag.

#### **Saving and applying changes to a version**

After updating the component configuration, you can choose how to proceed:

* **Save (upper-left corner)**: Saves the changes to a version and keeps the component open for further editing.
* **Save and Confirm (upper-right corner)**: Saves the changes to a version and immediately applies it to the pipeline.

You can access any version you have created at any time from the upper-left corner to load, review, or compare previous versions.

#### **Deploying a version**

When deploying a pipeline that includes an Agent Component with versions, the deployment automatically uses the version currently applied to the pipeline.

{% hint style="success" %}
**Tip:** To confirm which version is applied, open the component and check which version is marked with the **On pipe** tag.
{% endhint %}

After a version is deployed, it can no longer be changed and is marked with the **Locked** tag. Other versions that were not deployed remain editable and can be updated normally.

## **Configure general settings**

These parameters influence how the component behaves within your pipeline, not how the LLM operates.

They can be accessed on the **Settings** tab.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2FGh5eTtBfrxpEiB9ZXdX6%2Fgeneral-settings.gif?alt=media&#x26;token=90e0b297-0169-4850-a3ad-03351852294e" alt=""><figcaption></figcaption></figure>

### **Configure step**

* **Step Name:** The name displayed in your pipeline.
* **Alias:** An alias you can use to reference this connector’s output later via Double Braces. [Learn more](https://docs.digibee.com/documentation/connectors-and-triggers/double-braces/how-to-reference-data-using-double-braces/previous-steps-access).

### **Configure mock response**

Create a mock response to test your pipeline without sending real requests to an LLM.

This is helpful when you want:

* Deterministic testing
* To avoid usage costs

Mocks substitute live model calls during pipeline execution.

**Steps to create a mock:**

1. Click **Create mock response**.
2. Provide a **Name**.
3. Enter the JSON of the mock response in the **JSON Response** field.
4. Activate **Set as active** to use this mock for tests.
5. Click **Create mock response** to save.
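For instance, a mock for a classification Agent could return a fixed payload like the one below (the fields are hypothetical and should mirror whatever structure your downstream steps expect):

```json
{
  "category": "billing",
  "confidence": 0.92
}
```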

### **Set up error handling**

Enable **Fail On Error** if you want the pipeline to stop on failure. Otherwise, the connector continues and returns `{ "success": false }` inside the output.

## **Document the Agent Component’s usage**

Use the **Documentation** tab to record:

* Use cases
* Business rules
* Required inputs
* Output examples

You can format all of this in Markdown.

<figure><img src="https://3591250690-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEKM2LD3uNAckQgy1OUyZ%2Fuploads%2Fkopm0dI6ZvPOCdKV2peW%2Fdocumentation.gif?alt=media&#x26;token=d5576261-54c4-4666-af87-e4b6a2694f23" alt=""><figcaption></figcaption></figure>

## **FAQ**

<details>

<summary><strong>How can I test and experiment with my prompts?</strong></summary>

Use the **testing panel** located on the right side of the connector’s configuration form. For detailed guidance, see the [Test your Agent](#test-your-agent) section above.

</details>

<details>

<summary><strong>Can I use data from the previous connectors?</strong></summary>

Yes. You can use [Double Braces expressions](https://docs.digibee.com/documentation/connectors-and-triggers/double-braces/overview) to reference data from previous connectors and include it in your prompt.

</details>

<details>

<summary><strong>How is sensitive data handled?</strong></summary>

The connector doesn’t redact or filter payload data. We recommend following the same data handling practices used with other connectors.

</details>

<details>

<summary><strong>Can I chain multiple LLM calls in one pipeline?</strong></summary>

Yes. You can use the output of one LLM call as input for another. For example, first classify a support ticket, then generate a response based on the classification.

</details>

<details>

<summary><strong>What if the connector produces inaccurate or made-up results?</strong></summary>

For critical tasks, reduce the risk of hallucinations by following these best practices:

* Configure parameters such as **Temperature**, **Top K**, **Top P**, **Frequency Penalty**, and **Presence Penalty**.
* Break processes into smaller steps, for example, generate first and verify afterward. This approach provides better control and allows you to validate results before using them.
* Create more effective prompts by applying [prompt engineering techniques](https://www.promptingguide.ai/).

</details>

<details>

<summary><strong>What happens if the provider takes too long to respond?</strong></summary>

If the provider takes too long to respond, the request will time out and an error message will be shown in the Execution Panel.

</details>
