Agent Connector
Discover more about the Agent Connector and how to use it on the Digibee Integration Platform.
Create your own AI Agent with the Agent Connector. By abstracting the APIs of major LLM providers, it lets you seamlessly perform tasks like text classification, information extraction, summarization, and content evaluation in your Digibee pipelines.
It supports built-in authentication and works with major providers: OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, Amazon Bedrock, and DeepSeek. The configuration allows you to control model behavior, response format, and output structure based on your integration needs.
Parameters
Configure the connector using the parameters below. Fields that support Double Braces expressions are marked in the Supports DB column.
Alias
Name (alias) for this connector’s output, allowing you to reference it later in the flow using Double Braces expressions. Learn more.
String
✅
llm-1
LLM Provider
Specifies the LLM provider to use. Available options: OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, Amazon Bedrock, and DeepSeek. When selecting a private provider (Azure OpenAI), you must also include the endpoint.
String
❌
N/A
Use Custom Model
Enable this option to use a custom AI model. For Amazon Bedrock, provide the model’s ARN from your AWS account.
Boolean
❌
False
Custom
Custom AI model to be used based on the selected provider. Must be entered manually.
String
❌
N/A
Model
The AI model to be used, based on the selected provider. Only text models are supported; image generation is not available.
String
❌
N/A
Account
The account to authenticate with the connector. It must be previously registered on the Accounts page. Supported type: Secret Key.
Account
❌
N/A
Timeout
Maximum time limit (in milliseconds) allowed for the operation to complete. If this limit is exceeded, the operation is aborted.
Integer
✅
30000
Max retries
Defines the maximum number of retry attempts after an operation fails. For example, a value of 3 means the system will attempt the operation up to three times.
Integer
✅
1
System Prompt
A predefined instruction that sets the tone and behavior of the AI. You can use it to define roles or the type of response the model should always follow.
Plain Text
✅
N/A
User Prompt
The prompt sent to the AI model. Supports Double Braces syntax to include data or variables from earlier steps.
Plain Text
✅
N/A
File
Enables including data from files in the prompt. The file must be previously uploaded using one of the platform’s storage connectors.
After that, enter the filename with its extension (for example, bankslip_1.pdf), or reference it using Double Braces.
String
✅
N/A
To configure the connector with Amazon Bedrock, select the AWS-V4 account type. Since there are no predefined models for this provider, you must enable the Custom model option and enter the ARN (Amazon Resource Name) of the desired model from your AWS account.
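Conceptually, the Timeout and Max retries parameters described above combine as in the minimal sketch below. This is illustrative only, not the platform’s implementation; `operation` is a placeholder for the actual provider call, and the platform aborts a slow call itself rather than checking elapsed time afterwards.

```python
import time

def call_with_retries(operation, timeout_ms=30000, max_retries=3):
    """Illustrative only: the operation is attempted up to max_retries
    times in total (a value of 3 means up to three attempts), and any
    attempt exceeding timeout_ms is treated as a failure."""
    last_error = None
    for attempt in range(1, max_retries + 1):
        start = time.monotonic()
        try:
            result = operation()
            elapsed_ms = (time.monotonic() - start) * 1000
            if elapsed_ms > timeout_ms:
                raise TimeoutError(f"attempt {attempt} exceeded {timeout_ms} ms")
            return result
        except Exception as exc:
            last_error = exc
    raise RuntimeError(f"operation failed after {max_retries} attempts") from last_error
```

With the defaults shown in the table (Timeout 30000, Max retries 1), a single failed attempt already aborts the operation.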
Click Add to add a new remote MCP Server. A Model Context Protocol (MCP) server manages communication between AI models and source systems, allowing LLMs to securely access tools and data.
Provide the following parameters:
Name
The name of the MCP server, for identification only.
String
✅
N/A
URL
The endpoint that will receive the call.
String
✅
N/A
Account
The account used to authenticate with the MCP server. It must be previously registered on the Accounts page. Supported type: Secret Key.
Account
❌
N/A
Custom Account
An additional custom account to be used by the connector through Double Braces expressions, for example {{ account.custom.value }}.
String
❌
N/A
Headers
Defines the headers required for the call.
Key-value pairs
❌
N/A
Query Parameters
Defines the query parameters for the call.
Key-value pairs
❌
N/A
Use JSON Schema
When enabled, allows you to provide a JSON Schema to guide the LLM in generating the expected response format.
Boolean
❌
False
JSON Schema definition
The JSON Schema that the AI should follow when generating the response.
JSON
❌
N/A
Use JSON Mode
When enabled, allows you to provide a JSON example to help the LLM produce your desired response format.
Boolean
❌
False
JSON definition
The JSON example that the AI should follow when generating the response.
JSON
❌
N/A
Detect PII
Enables the detection of Personally Identifiable Information (PII) in the User Prompt. You can select which PII types to validate. The detection uses regex patterns to identify known formats.
Boolean
❌
False
Mask on detection
When enabled, the connector masks detected PII before sending the input to the LLM. If disabled, execution is interrupted and an error is returned.
Boolean
❌
False
CPF detection
Detects CPF (Cadastro de Pessoa Física) patterns in the User Prompt, such as 000.000.000-00. It doesn’t validate the numerical logic.
Boolean
❌
False
CNPJ detection
Detects CNPJ (Cadastro Nacional da Pessoa Jurídica) patterns in the User Prompt, such as 00.000.000/0000-00. It doesn’t validate the numerical logic.
Boolean
❌
False
Email detection
Detects email addresses in the User Prompt, such as [email protected]. It doesn’t validate domain or address existence.
Boolean
❌
False
Credit Card detection
Detects credit card number patterns in the User Prompt, ranging from 13 to 19 digits. It doesn’t perform checksum (Luhn) validation.
Boolean
❌
False
Crypto Wallet Address detection
Detects crypto wallet address patterns in the User Prompt, such as Bitcoin addresses starting with 1 and containing 26–35 base58 characters. It doesn’t validate the checksum or blockchain type.
Boolean
❌
False
IP detection
Detects IPv4 and IPv6 address patterns in the User Prompt. It doesn’t validate network or range correctness.
Boolean
❌
False
Phone number detection
Detects phone number patterns in the User Prompt, such as (11) 99999-9999 or +55 11 99999-9999. It doesn’t validate carrier or region.
Boolean
❌
False
Date detection
Detects date patterns in the User Prompt, such as 2025-11-05, 05/11/2025, or Nov 5, 2025. It doesn’t validate date logic.
Boolean
❌
False
URL detection
Detects URL patterns in the User Prompt, such as http://example.com or https://www.example.org/path. It doesn’t verify link validity.
Boolean
❌
False
IBAN code detection
Detects IBAN (International Bank Account Number) patterns in the User Prompt, such as GB82WEST12345698765432. It doesn’t perform checksum validation.
Boolean
❌
False
Custom Regex
Lets you define one or more custom regular expressions for detecting specific patterns. Click Add to include a new regex.
Button
❌
N/A
Name
Name used to identify the custom regex.
String
❌
N/A
Regex Expression
Regular expression to be validated.
String
❌
N/A
Maximum Output Token
The maximum length of the response. Larger numbers allow longer answers, smaller numbers make them shorter. For some newer models (mainly from OpenAI), this parameter is referred to as Max Completion Tokens instead.
Integer
✅
10000
Temperature
Controls creativity. Lower values make the answers more focused and predictable. Higher values make them more varied and creative.
Float
✅
0.1
Top K
Limits how many word choices the model considers at each step. Smaller numbers produce safer, more focused answers; larger numbers produce more variety.
Integer
✅
64
Top P
Another way to control variety. The model only considers the most likely words whose combined probability adds up to this value. Lower values produce more focused answers.
Integer
✅
1
Frequency Penalty
Discourages the model from repeating the same words too often.
Integer
✅
0
Presence Penalty
Encourages the model to bring in new ideas instead of staying in the same topic.
Float
✅
0
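To make the sampling parameters above concrete, the toy sketch below shows how Temperature, Top K, and Top P each reshape the next-token probability distribution before a token is drawn. This is a conceptual illustration, not any provider’s implementation.

```python
import math

def shape_distribution(logits, temperature=1.0, top_k=None, top_p=None):
    """Toy illustration of Temperature, Top K, and Top P. `logits` maps
    tokens to raw scores; the result maps surviving tokens to
    probabilities. Temperature must be greater than zero."""
    # Temperature: divide the logits before softmax; lower => more peaked.
    scaled = {t: s / temperature for t, s in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {t: math.exp(s) / total for t, s in scaled.items()}
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # Top K: keep only the k most likely tokens.
    if top_k is not None:
        ranked = ranked[:top_k]
    # Top P: keep the smallest set of tokens whose probabilities reach p.
    if top_p is not None:
        kept, cumulative = [], 0.0
        for token, p in ranked:
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept
    # Renormalize over the surviving tokens.
    norm = sum(p for _, p in ranked)
    return {t: p / norm for t, p in ranked}
```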
An embedding model converts text or other types of data into numerical vectors that represent their semantic meaning. These vectors allow the system to measure similarity between pieces of content based on meaning rather than exact wording.
Embedding models are commonly used for tasks such as semantic search, clustering, and Retrieval-Augmented Generation (RAG), where they enable efficient comparison and retrieval of contextually relevant information.
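The similarity measurement described above is typically cosine similarity between embedding vectors. The sketch below shows the core idea, including how the Max Results and Min Score parameters defined further down filter a search; it is a simplified illustration, not the connector’s internals.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, ranging from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def similarity_search(query_vector, store, max_results=5, min_score=0.0):
    """Score every stored vector against the query, drop results below
    min_score, and return at most max_results, best first.
    `store` is a list of (text, vector) pairs."""
    scored = [(text, cosine_similarity(query_vector, vec)) for text, vec in store]
    relevant = [(text, s) for text, s in scored if s >= min_score]
    return sorted(relevant, key=lambda pair: pair[1], reverse=True)[:max_results]
```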
Embedding Provider
The embedding model provider to use. Options are: Local (all-MiniLM-L6-v2), OpenAI, Google Vertex AI, and Hugging Face.
Select
❌
N/A
Local (all-MiniLM-L6-v2)
Max Results
Defines the maximum number of vectors returned from the similarity search.
Integer
❌
5
Min Score
Sets the minimum similarity score (from 0.0 to 1.0) required for a result to be considered relevant. Higher values make the search more restrictive.
String
✅
N/A
OpenAI
Embedding Account
Specifies the account configured with OpenAI credentials. Supported type: Secret Key.
Select
❌
N/A
Embedding Model Name
Defines the name of the embedding model to use, such as text-embedding-3-large.
String
✅
N/A
Vector Dimension
Sets the number of dimensions in the generated vector. The value must match the vector store configuration.
Integer
❌
N/A
Max Results
Defines the maximum number of similar vectors returned from the vector store.
Integer
❌
5
Min Score
Sets the minimum similarity score (from 0.0 to 1.0) required for a result to be considered relevant. Higher values make the search more restrictive.
String
✅
N/A
Google Vertex AI
Embedding Account
Specifies the account configured with Google Cloud credentials. Supported type: Google Key.
Select
❌
N/A
Embedding Model Name
Defines the name of the embedding model to use, such as textembedding-gecko@003.
String
✅
N/A
Vector Dimension
Sets the number of dimensions in the generated vector. The value must match the vector store configuration.
Integer
❌
N/A
Project ID
Defines the ID of the Google Cloud project associated with the account.
String
✅
N/A
Location
Specifies the region where the Vertex AI model is deployed, such as us-central1.
String
✅
N/A
Endpoint
Defines the endpoint of the embedding model, such as us-central1-aiplatform.googleapis.com:443.
String
✅
N/A
Publisher
Specifies the publisher of the model, typically google.
String
✅
N/A
Max Results
Defines the maximum number of similar vectors returned from the vector store.
Integer
❌
5
Min Score
Sets the minimum similarity score (from 0.0 to 1.0) required for a result to be considered relevant. Higher values make the search more restrictive.
String
✅
N/A
Max Retries
Defines the maximum number of retry attempts in case of temporary API failures.
Integer
❌
3
Hugging Face
Embedding Account
Specifies the account configured with Hugging Face credentials. Supported type: Secret Key.
Select
❌
N/A
Embedding Model Name
Defines the name of the embedding model to use, such as sentence-transformers/all-mpnet-base-v2.
String
✅
N/A
Vector Dimension
Sets the number of dimensions in the generated vector. The value must match the vector store configuration.
Integer
❌
N/A
Max Results
Defines the maximum number of similar vectors returned from the vector store.
Integer
❌
5
Min Score
Sets the minimum similarity score (from 0.0 to 1.0) required for a result to be considered relevant. Higher values make the search more restrictive.
String
✅
N/A
Wait for Model
Determines whether the system should wait for the model to load before generating embeddings (true) or return an error if the model is not ready (false).
Boolean
❌
True
A vector store is a specialized database designed to store and retrieve vector representations of data (embeddings). It enables similarity searches by comparing numerical vectors instead of exact text matches, allowing more relevant and semantic results.
Vector Store Provider
Defines the database provider used for storing and querying embeddings. Options are: PostgreSQL (PGVector) and Neo4j.
Select
❌
N/A
PostgreSQL (PGVector)
Host
Defines the hostname or IP address of the PostgreSQL server.
String
✅
N/A
Port
Defines the port number used to connect to the PostgreSQL server.
Number
❌
5432
Database Name
Defines the name of the PostgreSQL database containing the vector table.
String
✅
N/A
Vector Store Account
Specifies the account configured with PostgreSQL credentials.
Select
❌
N/A
Table Name
Defines the name of the table where vectors are stored.
String
✅
N/A
Neo4j
Database Name
Defines the name of the Neo4j database where the vector index is stored.
String
✅
N/A
Vector Store Account
Specifies the account configured with Neo4j credentials.
Select
❌
N/A
Index Name
Defines the name of the index used to store and query vectors.
String
✅
N/A
URI
Defines the connection URI to the Neo4j instance.
String
✅
N/A
Node Label
Defines the label assigned to nodes containing embedding data.
String
✅
N/A
Embedding Property
Defines the node property used to store the embedding vector.
String
✅
N/A
Text Property
Defines the node property used to store the original text or document.
String
✅
N/A
Fail On Error
If enabled, interrupts the pipeline execution when an error occurs. If disabled, execution continues, but the "success" property will be set to false.
Boolean
❌
False
Documentation
Optional field to describe the connector configuration and any relevant business rules.
String
❌
N/A
Testing the Agent Connector
The Agent Connector can be tested in isolation, without executing the full pipeline. This makes it easier to quickly refine and improve your prompts.
After configuring the parameters of the connector, enter an Input (optional) and then click Play in the bottom right corner to run a test and view the request responses.
After verifying whether the connector works as expected, click Confirm to save it and return to the main flow.

Agent Connector in action
Configuration with User Prompt Only
This configuration uses only the User Prompt parameter to send a request to the AI model.
Advantages:
Easy to set up with just one input.
Good for testing different prompts quickly.
Works well for simple requests.
Practical example
Use case: A pipeline integrated with Zendesk receives a new customer ticket. The Agent Connector is used to analyze the request and classify its topic.
Goal: Classify the topic of a support ticket.
User Prompt:
Classify the topic of the following customer request:
"My payment was declined, but the amount was debited from my account. I need help fixing this."Example output:
{
"body": {
"text": "**Topic:** Payment Issue / Failed Transaction\n\n**Explanation:** \nThe customer is reporting a problem where their payment was declined, but the money was still debited from their account. This falls under payment issues, specifically failed or unsuccessful transactions with debited funds."
},
"tokenUsage": {
"inputTokenCount": 39,
"outputTokenCount": 52,
"totalTokenCount": 91
}
}
Configuration with User + System Prompts
This configuration uses both the User Prompt and System Prompt parameters to guide the AI response.
Advantages:
Helps guide the AI’s tone and behavior.
Makes responses more consistent.
Adds context that helps the AI understand the prompt better.
Practical example
Use case: After classifying the support ticket, the pipeline queries a knowledge database. The Agent Connector is then used again to generate a personalized response for the customer.
Goal: Generate a custom response using predefined tone and style.
System Prompt:
You are a friendly and helpful support agent. Always use an empathetic tone and provide clear instructions. Return the message as plain text with no line breaks.
User Prompt:
Write a response to the customer below, explaining that we will investigate the payment and get back to them within 24 hours:
"My payment was declined, but the amount was debited from my account. I need help fixing this."Example output:
{
"body": {
"text": "Thank you for reaching out and letting us know about the issue with your payment. I understand how concerning it is to see a charge when your payment was declined. We will investigate this matter right away and get back to you with an update within 24 hours. Thank you for your patience while we look into this for you."
},
"tokenUsage": {
"inputTokenCount": 89,
"outputTokenCount": 65,
"totalTokenCount": 154
}
}
Configuration with Prompts + JSON Schema
Support for JSON Schema may vary across LLM providers and models. We recommend reviewing the official documentation of the provider and model before configuring to confirm compatibility.
This configuration uses User Prompt, System Prompt, and JSON Schema to generate a structured response.
Advantages:
Keeps the output consistent with a defined format.
Validates field types, required fields, and allowed values automatically.
Works as a contract between systems, making integration more reliable.
Prevents invalid data from being processed.
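Because some providers or models may not enforce the schema strictly, you can also verify the contract downstream in your own code. The sketch below is a minimal check mirroring the example schema in this section; a full JSON Schema validator (for example, the `jsonschema` package) would cover the complete specification.

```python
def validate_moderation_result(payload):
    """Minimal contract check mirroring the example schema: required
    fields, types, and the 0-to-1 score range. Illustrative only."""
    errors = []
    body = payload.get("body")
    if not isinstance(body, dict):
        return ["'body' must be an object"]
    for field in ("score", "label", "should_approve"):
        if field not in body:
            errors.append(f"missing required field '{field}'")
    if "score" in body:
        try:
            score = float(body["score"])
            if not 0.0 <= score <= 1.0:
                errors.append("'score' must be between 0 and 1")
        except (TypeError, ValueError):
            errors.append("'score' must be numeric")
    if "should_approve" in body and not isinstance(body["should_approve"], bool):
        errors.append("'should_approve' must be a boolean")
    return errors
```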
Practical example
Use case: A pipeline receives a user-generated comment from an ISV (independent software vendor) platform. The Agent Connector sends the comment to the AI to evaluate whether it’s harmful or offensive. The returned score is then used to decide whether the comment should be published or if the user should be flagged.
Goal: Evaluate and score a comment’s harmfulness and determine whether it should be approved.
System Prompt:
You are a content moderator. Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.
User Prompt:
Evaluate the following comment:
"I had a great experience with this company. The team is professional and very helpful."JSON Schema:
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "ModerationResult",
"type": "object",
"properties": {
"status": {
"type": "integer",
"enum": [200],
"description": "HTTP status code"
},
"body": {
"type": "object",
"properties": {
"score": {
"type": "string",
"pattern": "^(0(\\.\\d+)?|1(\\.0+)?)$",
"description": "Severity score from 0 to 1"
},
"label": {
"type": "string",
"description": "Label describing the content, e.g., harmless, potentially harmful"
},
"should_approve": {
"type": "boolean",
"description": "Indicates whether the comment should be approved"
}
},
"required": ["score", "label", "should_approve"],
"additionalProperties": false
}
},
"required": ["status", "body"],
"additionalProperties": false
}
Possible output:
{
"body": {
"status": "200",
"body": {
"score": "0",
"label": "harmless",
"should_approve": true
}
},
"tokenUsage": {
"inputTokenCount": 168,
"outputTokenCount": 22,
"totalTokenCount": 190
}
}
Configuration with Prompts + plain JSON
This configuration uses User Prompt, System Prompt, and plain JSON (no schema) to return a structured response.
Advantages:
Produces output in a simple, readable format.
Flexible and easy to work with.
Good when strict validation isn’t necessary.
Practical example
Using the same use case as above, the prompts guide the AI to return a JSON object directly, without schema validation.
System Prompt:
You are a content moderator.
Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.
User Prompt:
Evaluate the following comment:
"I had a great experience with this company. The team is professional and very helpful."
Plain JSON:
{
"score": "",
"label": "",
"should_approve": ""
}
Output:
{
"body": {
"score": "0",
"label": "Not harmful",
"should_approve": "yes"
},
"tokenUsage": {
"inputTokenCount": 91,
"outputTokenCount": 26,
"totalTokenCount": 117
}
}
Configuration with Prompts + Double Braces
This configuration uses the User Prompt field to dynamically inject data from a previous connector using Double Braces expressions. In addition, the System Prompt and Output Format fields are used to guide the AI and generate a structured response.
Advantages:
Enables contextual prompts based on pipeline data.
Connects the AI response to runtime information.
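Conceptually, Double Braces resolution is a template-substitution step that runs before the prompt is sent to the model. The sketch below illustrates the idea for simple dotted paths like `{{ message.body }}`; the platform’s expression engine is richer (functions, formatting, and more), so treat this only as an intuition aid.

```python
import re

def resolve_double_braces(template, context):
    """Illustrative substitution of {{ path }} expressions with values
    from the pipeline context. Only walks dotted keys like message.body;
    not the platform's actual expression engine."""
    def replace(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", replace, template)
```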
Practical example
Use case: A pipeline receives address data from a REST connector that queries a Brazilian public ZIP code API (OpenCEP). The Agent Connector is then used to classify the type of address as residential, commercial or rural, based on the street name and neighborhood returned by the API.
Goal: Categorize the address type using dynamic data from the previous connector.
System Prompt:
You are an address classification assistant. Based on the street name and neighborhood, classify the address as residential, commercial, or rural. Explain your reasoning.
User Prompt with Double Braces:
Use the following address to make your evaluation: {{message.body}}
JSON Schema:
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"name": "address_classification_response",
"type": "object",
"properties": {
"status": {
"type": "integer"
},
"body": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["residential", "commercial", "rural"]
},
"reason": {
"type": "string"
}
},
"required": ["type", "reason"]
}
},
"required": ["status", "body"],
"additionalProperties": false
}
Output:
{
"body": {
"status": 200,
"body": {
"type": "residential",
"reason": "The street name 'Rua Josina Teixeira de Carvalho' and the neighborhood 'Vila Anchieta' in São José do Rio Preto indicate a typical urban residential area. There is no indication of commercial or rural characteristics in the address."
}
},
"tokenUsage": {
"inputTokenCount": 170,
"outputTokenCount": 62,
"totalTokenCount": 232
}
}
MCP Server configuration
This configuration uses an MCP Server, combined with User Prompt, System Prompt, and JSON Schema, to request and structure documentation generated from external data sources.
Advantages:
Enables secure communication between AI models and source systems.
Keeps the generated output consistent with a predefined format.
Validates required fields and data types automatically.
Ensures reliable and accurate documentation generation.
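Under the hood, the Model Context Protocol is based on JSON-RPC 2.0, and the connector handles the wire format for you. As a rough sketch of what a tool invocation looks like on the wire, the snippet below builds a `tools/call` request; the tool name and arguments shown are hypothetical, not the actual DeepWiki tool interface.

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Builds a JSON-RPC 2.0 'tools/call' request in the shape used by
    the Model Context Protocol. The tool name and arguments passed by
    the caller below are hypothetical."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example payload; serialize it for transport over HTTP.
request = build_tool_call(1, "read_wiki_contents", {"topic": "Event-Driven Architecture"})
payload = json.dumps(request)
```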
Practical example
Use case: A pipeline connects to the Deepwiki MCP server to retrieve technical knowledge about a topic. The AI transforms this raw information into structured documentation.
Goal: Generate a documentation section about Event-Driven Architecture with a clear title, short description, practical use cases, and best practices.
MCP Server:
Name: DeepWiki
System Prompt:
You are a technical documentation generator. Always write in clear and concise English, using a professional but simple tone.
Your task is to transform raw information retrieved from external tools (such as Deepwiki) into well-structured documentation.
Ensure your output is consistent, accurate, and aligned with the requested format.
User Prompt:
Use the information retrieved from the Deepwiki MCP server about the topic "Event-Driven Architecture" to create a documentation section.
The documentation must include:
- A clear title.
- A concise description (2–3 sentences).
- At least three practical use cases.
- At least three best practices.
Format the response strictly following the provided JSON schema.
JSON Schema:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "DocumentationSection",
"type": "object",
"required": ["title", "description", "use_cases", "best_practices"],
"properties": {
"title": {
"type": "string",
"description": "The title of the documentation section"
},
"description": {
"type": "string",
"description": "A concise description of the topic (2-3 sentences)"
},
"use_cases": {
"type": "array",
"description": "Practical use cases for the topic",
"items": {
"type": "string"
},
"minItems": 3
},
"best_practices": {
"type": "array",
"description": "Recommended best practices for the topic",
"items": {
"type": "string"
},
"minItems": 3
}
},
"additionalProperties": false
}
Output:
{
"body": {
"title": "Event-Driven Architecture",
"description": "Event-Driven Architecture (EDA) is a software design pattern where system components communicate by producing and consuming events. This approach enables loosely coupled systems that can react to changes in real time, improving scalability and flexibility. EDA is commonly used in distributed systems and applications requiring asynchronous processing.",
"use_cases": [
"Building microservices that need to communicate asynchronously.",
"Implementing real-time analytics platforms that process streaming data.",
"Automating workflows in response to business events, such as order processing or user actions."
],
"best_practices": [
"Design events to be self-contained and descriptive to ensure clear communication between components.",
"Use reliable messaging systems to guarantee event delivery and avoid data loss.",
"Monitor and log event flows to quickly detect and resolve issues in the system."
]
},
"tokenUsage": {
"inputTokenCount": 396,
"outputTokenCount": 162,
"totalTokenCount": 558
}
}