Agent Component — Complete configuration guide
Learn how to quickly create your first Agent and then explore the full set of configuration options step by step.
Create your own AI Agent with the Agent Component. This component abstracts the APIs of major LLM providers so you can easily perform tasks such as text classification, information extraction, summarization, content evaluation, and more — all inside your Digibee pipelines.
This guide walks you through:
- Setting up a working Agent in minutes
- Understanding each configuration option
- Exploring advanced features like tools, retrieval, guardrails, and mock responses

Quickstart: Build your first working agent in 5 steps
Prerequisites
Before starting, you need:
- An account with the chosen LLM provider and an API key (token).
- The API key registered in Digibee as an Account of type “Secret Key”.
This test will consume tokens from your provider.
To add your API key:
Open Accounts, select Create, choose Secret Key, paste the provider token, and save. The token must be generated from the provider console (for example, the OpenAI dashboard). Keep it secure.
Steps
1. Add the Agent Component to your pipeline from the Step Library or via the Smart Connector.
2. In Model, choose a model from any supported provider (for example, OpenAI — GPT-4.1 Mini).
3. Click the gear icon (⚙) and select the Secret Key account you configured on the Accounts page.
4. In the Messages section, insert:
   - System: You are an assistant.
   - User: Explain what an API is in one paragraph.
5. Click Play to test the Agent.
The response should look similar to this:
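The sketch below is only illustrative: the connector signals failures via a `success` field, but the other field names and the exact shape of the output depend on the connector version.

```json
{
  "success": true,
  "response": "An API (Application Programming Interface) is a set of rules that lets one piece of software request services or data from another in a standardized way..."
}
```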
Your Agent is now successfully running.
The next sections explain each configuration in detail.
Detailed configuration
Some settings are required for the component to run properly. After completing the basic configuration, you can explore more advanced features.
Required configuration
With the two steps below, your Agent is ready for simple and direct tasks.
Select the model (required)
Choose the LLM provider and model your Agent will use. Supported providers:

- Anthropic Claude
- DeepSeek
- Google Gemini
- OpenAI
- Azure OpenAI
- Azure OpenAI (Private)
- Amazon Bedrock (Private)

You can also select Custom Model to use a model that isn’t in the predefined list.
After selecting the provider, click the gear icon (⚙) to open the Model Configuration page. Complete the fields below.

Account Configuration
These settings define how Digibee authenticates with the selected LLM provider. If no account is selected, the connector will not operate. Some providers also require a custom Endpoint, depending on how they expose their models.
Private providers — such as Azure OpenAI (Private) and Amazon Bedrock (Private) — always require a custom endpoint. When you select one of these providers, make sure you provide the correct endpoint for your environment.
For Amazon Bedrock, additional configuration is required:

- Select the AWS-V4 account type, the authentication method supported by Bedrock.
- Because Digibee does not offer predefined models for Bedrock, enable Custom Model and enter the ARN (Amazon Resource Name) of the model you want to use from your AWS account.
| Parameter | Description | Type | Supports Double Braces | Default |
| --- | --- | --- | --- | --- |
| Account | The account used to authenticate with the LLM provider. It must be previously registered in Accounts. Supported types: Secret Key or AWS-V4 (for Bedrock). | Account | ❌ | N/A |
| Endpoint | Custom endpoint for the selected provider. Required for private providers (such as Azure OpenAI or Amazon Bedrock). | String | ✅ | N/A |
| Allow Insecure Connections | Allows the client to accept insecure SSL connections. Recommended only for testing or internal environments. | Boolean | ❌ | false |
If you select a private model, you must also configure the Custom Model Name. If you choose to fully customize the model, you must set both the Custom Model Name and a compatible provider.
Model Parameters
These values determine the model’s personality and behavior. You can adjust them as needed or keep the default settings.
| Parameter | Description | Type | Supports Double Braces | Default |
| --- | --- | --- | --- | --- |
| Temperature | Controls creativity. Lower values produce more focused output; higher values, more diverse output. | Float | ✅ | 0.7 |
| Max Output Tokens | Limits the number of tokens in the output. | Integer | ✅ | 1024 |
| Top K | Limits how many token options the model considers at each step. Lower values are safer; higher values, more diverse. | Integer | ✅ | 64 |
| Top P | Controls variability using a cumulative probability threshold. Lower values yield more focused answers. | Float | ✅ | 1 |
| Frequency Penalty | Reduces repetition of identical words. | Float | ✅ | N/A |
| Presence Penalty | Encourages the model to introduce new topics instead of staying on the same one. | Float | ✅ | N/A |
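As a sketch of how these parameters combine: an extraction or classification task usually wants low randomness, while creative tasks tolerate more. A hypothetical configuration for a deterministic extraction task could use values like the following (the values are illustrative, and in practice each one is entered through its own UI field rather than as raw JSON):

```json
{
  "temperature": 0.1,
  "maxOutputTokens": 512,
  "topP": 1,
  "frequencyPenalty": 0,
  "presencePenalty": 0
}
```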
Advanced Settings
Use these options to fine-tune the model’s performance and resilience. You can modify them if necessary or keep the default configuration.
| Parameter | Description | Type | Supports Double Braces | Default |
| --- | --- | --- | --- | --- |
| Timeout | Maximum time (in milliseconds) allowed for the operation to complete. | Integer | ✅ | 30000 |
| Max retries | Number of retry attempts after a failure. | Integer | ✅ | 3 |
| Custom Parameters | Additional custom parameters to include in the model API request. | Key-value pairs | ❌ | N/A |
Response Format
Define how the model should structure the output. When supported by the provider, setting a JSON Schema is recommended to ensure more precise and consistent responses.
| Parameter | Description | Type | Supports Double Braces | Default |
| --- | --- | --- | --- | --- |
| Use JSON Schema | Enables providing a JSON Schema to enforce the response structure. | Boolean | ❌ | false |
| JSON Schema definition | The schema defining the expected structure. | JSON | ❌ | N/A |
| Use JSON Mode | Enables sending a JSON example to guide the response. | Boolean | ❌ | false |
| JSON definition | The JSON example the model should follow. | JSON | ❌ | N/A |
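For example, to force a classification-style response with a label and a confidence score, you could provide a standard JSON Schema such as the one below (the `category` and `confidence` fields and their allowed values are only an illustration):

```json
{
  "type": "object",
  "properties": {
    "category": { "type": "string", "enum": ["billing", "support", "sales"] },
    "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
  },
  "required": ["category", "confidence"],
  "additionalProperties": false
}
```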
Configure the messages (required)
Messages define how the model behaves and how it interprets your prompt.

- System message: Describes rules, tone, and general behavior. Example: “You are a senior data analyst…”
- User message: The actual prompt. You can inject pipeline data using Double Braces. Example: “Summarize the content below…”

After adding your messages, you’re ready to test the Agent.
Every execution consumes tokens from your provider. To reduce token usage when testing the full pipeline, consider adding a mock response.
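A typical message pair for a summarization step might look like the sketch below. The `{{ message.ticket.description }}` path is hypothetical; point it at a field that actually exists in your pipeline payload.

```
System:
You are a support analyst. Summarize tickets in two sentences, in a neutral tone.

User:
Summarize the ticket below:
{{ message.ticket.description }}
```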

Optional configuration
Optional settings let you extend and customize the behavior of the Agent. These are not required, but they are useful when you need integrations, external context retrieval, validation, or controlled testing.
Add tools (optional)
Tools expand what the Agent can do. Use them when the Agent needs to:
- Call APIs
- Access internal systems
- Trigger other pipelines
- Fetch external web content
If you're only generating text or doing classification, you can skip tools.
Here’s how to configure the supported tool types.

MCP Server
A Model Context Protocol (MCP) server handles communication between the AI model and source systems, enabling the LLM to securely access external tools and data.
| Parameter | Description | Type | Supports Double Braces | Default |
| --- | --- | --- | --- | --- |
| Name | Identifier for this MCP server. | String | ✅ | N/A |
| Server URL | Endpoint that receives the request. | String | ✅ | N/A |
| Account | The account used to authenticate. Must be configured in Accounts. Supported type: Secret Key. | Account | ❌ | N/A |
| Custom Account | A secondary account, referenced via Double Braces, for example `{{ account.custom.value }}`. | String | ❌ | N/A |
| Headers | HTTP headers to include in requests. | Key-value pairs | ❌ | N/A |
| Query Parameters | Query parameters to include in the request. | Key-value pairs | ❌ | N/A |
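Conceptually, an MCP server entry combines the fields above. The sketch below uses illustrative values only (in the UI each item is a separate field, and the key names here are not a connector contract):

```json
{
  "name": "internal-tools",
  "serverUrl": "https://mcp.example.com/v1",
  "headers": { "X-Team": "payments" },
  "queryParameters": { "version": "2" }
}
```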
Content Retrieval
Enable Retrieval-Augmented Generation (RAG) so the model can use external context stored in a vector database.
Embedding Model
An embedding model converts text into vectors that capture its meaning. Choose the provider and embedding model from the supported options that best match your architecture.
Vector Store
A vector store is a database optimized to store and retrieve embeddings. Select the supported vector store where your embeddings are kept.
Add files (optional)
To include external documents (PDFs, images, among others) as context:
Upload them using a storage connector (for example, S3, Azure Blob, Digibee Storage)
Click the plus (+) button.
Enter the filename (for example,
bankslip_1.pdf) or reference it with Double Braces.
The content will be injected into the prompt as additional context for the model.

Set up guardrails (optional)
Use guardrails when your inputs may contain sensitive data (such as PII), or when you must strictly control what is sent to the LLM.

1. Click the gear icon (⚙) next to Guardrails to open the Guardrail Settings.
2. Check the validations you want, or choose Select All to enable all options.

| Validation | Description |
| --- | --- |
| Mask on detection | Masks detected sensitive data before sending it to the LLM. If this option is disabled and sensitive data is detected, the execution fails with an error. |
| CNPJ detection | Detects CNPJ (Cadastro Nacional da Pessoa Jurídica) patterns in the User Message, such as 00.000.000/0000-00. It doesn’t validate the numerical logic. |
| Credit Card detection | Detects credit card number patterns (13 to 19 digits) in the User Message. It doesn’t perform checksum (Luhn) validation. |
| IP detection | Detects IPv4 and IPv6 address patterns in the User Message. It doesn’t validate network or range correctness. |
| Datetime detection | Detects datetime patterns in the User Message, such as 2025-11-05, 05/11/2025, or Nov 5, 2025. It doesn’t validate date logic. |
| IBAN code detection | Detects IBAN (International Bank Account Number) patterns in the User Message, such as GB82WEST12345698765432. It doesn’t perform checksum validation. |
| CPF detection | Detects CPF (Cadastro de Pessoa Física) patterns in the User Message, such as 000.000.000-00. It doesn’t validate the numerical logic. |
| Email detection | Detects email addresses in the User Message, such as [email protected]. It doesn’t validate domain or address existence. |
| Crypto Wallet Address detection | Detects crypto wallet address patterns in the User Message, such as Bitcoin addresses starting with 1 and containing 26–35 base58 characters. It doesn’t validate the checksum or blockchain type. |
| Phone number detection | Detects phone number patterns in the User Message, such as (11) 99999-9999 or +55 11 99999-9999. It doesn’t validate carrier or region. |
| URL detection | Detects URL patterns in the User Message, such as http://example.com or https://www.example.org/path. It doesn’t verify link validity. |
On the same page, you can also:

- Enable JSON Schema validation for responses. The output is validated against the schema you provide. If it’s invalid, the Agent sends a reprompt requesting the correct format. If the model still returns an invalid response, the execution ends with an error.
- Define custom regex patterns to validate the input. Provide both the Pattern Name and the Regex Pattern.
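For instance, a hypothetical custom pattern to catch internal order identifiers could be configured as follows (both the name and the regex are illustrative):

```
Pattern Name: Internal Order ID
Regex Pattern: ORD-\d{6}
```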
Testing your Agent
Use the Test Panel to validate your configuration before running the full pipeline.

1. Enter an Input (optional).
2. Click Play.
3. Review the request and response received.
4. Once everything looks good, click Save and Return.

Configure general settings
These parameters influence how the component behaves within your pipeline, not how the LLM operates.
They can be accessed on the Settings tab.

Configure step
- Step Name: The name displayed in your pipeline.
- Alias: An alias you can use to reference this connector’s output later via Double Braces.
Configure mock response
Create a mock response to test your pipeline without sending real requests to an LLM. This is helpful when you want:

- Deterministic testing
- To avoid usage costs

Mocks substitute live model calls during pipeline execution.
Steps to create a mock:

1. Click Create mock response.
2. Provide a Name.
3. Enter the JSON of the mock response in the JSON Response field.
4. Activate Set as active to use this mock for tests.
5. Click Create mock response to save.
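A mock body should mimic the shape your pipeline expects from the real model. A minimal sketch, assuming downstream steps read a `response` field (that field name is an assumption, not part of the connector contract):

```json
{
  "success": true,
  "response": "This is a mocked answer used for deterministic tests."
}
```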
Set up error handling
Enable Fail On Error if you want the pipeline to stop when the Agent fails. Otherwise, the connector continues the execution and returns `{ "success": false }` in its output.
Document the Agent Component’s usage
Use the Documentation tab to record:
Use cases
Business rules
Required inputs
Output examples
You can format all of this in Markdown.
