Agent Connector

Discover more about the Agent Connector and how to use it on the Digibee Integration Platform.

This connector was named "LLM" until October 2025.

Create your own AI Agent with the Agent Connector. By abstracting the APIs of major LLM providers, it lets you seamlessly perform tasks like text classification, information extraction, summarization, and content evaluation in your Digibee pipelines.

It supports built-in authentication and works with major providers: OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, Amazon Bedrock, and DeepSeek. The configuration allows you to control model behavior, response format, and output structure based on your integration needs.

Parameters

Configure the connector using the parameters below. Fields that accept Double Braces expressions say so in their descriptions.

Alias

Name (alias) for this connector's output, allowing you to reference it later in the flow using Double Braces expressions.
Type: String. Default: llm-1.

LLM Provider

The LLM provider to use. Available options: OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, Amazon Bedrock, and DeepSeek. When selecting a private provider (Azure OpenAI), you must also provide the endpoint.
Type: String. Default: N/A.

Use Custom Model

Enable this option to use a custom model with Amazon Bedrock. Provide the model's ARN from your AWS account.
Type: Boolean. Default: False.

Custom

The custom AI model to use, based on the selected provider. Must be entered manually.
Type: String. Default: N/A.

Model

The AI model to use, based on the selected provider. Only text models are supported; image generation is not available.
Type: String. Default: N/A.

Account

The account used to authenticate with the connector. It must be previously registered on the Accounts page. Supported type: Secret Key.
Type: Account. Default: N/A.

Timeout

Maximum time (in milliseconds) allowed for the operation to complete. If this limit is exceeded, the operation is aborted.
Type: Integer. Default: 30000.

Max retries

Maximum number of retry attempts after an operation fails. For example, a value of 3 means the system attempts the operation up to three times.
Type: Integer. Default: 1.

System Prompt

A predefined instruction that sets the tone and behavior of the AI. Use it to define roles or the type of response the model should always follow.
Type: Plain Text. Default: N/A.

User Prompt

The prompt sent to the AI model. Supports Double Braces syntax to include data or variables from earlier steps.
Type: Plain Text. Default: N/A.

File

Includes data from a file in the prompt. The file must first be uploaded using one of the platform's storage connectors. Then, enter the filename with its extension (for example, bankslip_1.pdf), or reference it using Double Braces.
Type: String. Default: N/A.
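The Timeout and Max retries parameters work together: a failed attempt (for example, a provider timeout) triggers another attempt until the retry limit is reached. The sketch below illustrates those semantics in Python. It is an illustration only, not Digibee's implementation, and it assumes the timeout budget applies per attempt.

```python
def call_with_retries(operation, max_retries, timeout_ms):
    """Illustrative sketch of Timeout + Max retries semantics.

    Each attempt is given the timeout budget; a failed attempt is
    retried until max_retries total attempts have been made, after
    which the last error propagates.
    """
    last_error = None
    for _ in range(max_retries):
        try:
            return operation(timeout_ms / 1000.0)  # budget in seconds
        except Exception as err:  # e.g. a provider timeout
            last_error = err
    raise last_error

# A stand-in provider call that times out once, then succeeds.
attempts = []

def flaky_provider(timeout_s):
    attempts.append(timeout_s)
    if len(attempts) < 2:
        raise TimeoutError("provider took too long to respond")
    return {"body": {"text": "ok"}}

# With Max retries = 3, the second attempt succeeds.
result = call_with_retries(flaky_provider, max_retries=3, timeout_ms=30000)
```

With `max_retries=1` (the default), the first failure would propagate immediately, which matches the FAQ note about timeouts surfacing as errors in the Execution Panel.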

Testing the Agent Connector

The Agent Connector can be tested in isolation, without executing the full pipeline. This makes it easier to quickly refine and improve your prompts.

After configuring the connector's parameters, optionally enter an Input, then click Play in the bottom-right corner to run a test and view the response.

Once you've verified that the connector works as expected, click Confirm to save it and return to the main flow.

Agent Connector in action

You can use the Mock Response feature to validate flows that make external calls without consuming AI tokens.

Configuration with User Prompt Only

This configuration uses only the User Prompt parameter to send a request to the AI model.

Practical example

  • Use case: A pipeline integrated with Zendesk receives a new customer ticket. The Agent Connector is used to analyze the request and classify its topic.

  • Goal: Classify the topic of a support ticket.

User Prompt:

Classify the topic of the following customer request:  
"My payment was declined, but the amount was debited from my account. I need help fixing this."

Example output:

{
  "body": {
    "text": "**Topic:** Payment Issue / Failed Transaction\n\n**Explanation:**  \nThe customer is reporting a problem where their payment was declined, but the money was still debited from their account. This falls under payment issues, specifically failed or unsuccessful transactions with debited funds."
  },
  "tokenUsage": {
    "inputTokenCount": 39,
    "outputTokenCount": 52,
    "totalTokenCount": 91
  }
}
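Inside a pipeline, later steps would reference this output through Double Braces (for example, via the connector's alias, such as llm-1). Outside the platform, the same payload is ordinary JSON; a minimal Python sketch of consuming it:

```python
import json

# The example output above, as a raw JSON string (text shortened).
raw = """
{
  "body": {
    "text": "**Topic:** Payment Issue / Failed Transaction"
  },
  "tokenUsage": {
    "inputTokenCount": 39,
    "outputTokenCount": 52,
    "totalTokenCount": 91
  }
}
"""

response = json.loads(raw)
classification = response["body"]["text"]  # the model's answer
usage = response["tokenUsage"]             # token accounting
total = usage["inputTokenCount"] + usage["outputTokenCount"]
```

The `tokenUsage` block is useful for monitoring consumption across runs: the total is always the sum of input and output tokens.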

Configuration with User + System Prompts

This configuration uses both the User Prompt and System Prompt parameters to guide the AI response.

Practical example

  • Use case: After classifying the support ticket, the pipeline queries a knowledge database. The Agent Connector is then used again to generate a personalized response for the customer.

  • Goal: Generate a custom response using predefined tone and style.

System Prompt:

You are a friendly and helpful support agent. Always use an empathetic tone and provide clear instructions. Return the message as plain text with no line breaks.

User Prompt:

Write a response to the customer below, explaining that we will investigate the payment and get back to them within 24 hours:  
"My payment was declined, but the amount was debited from my account. I need help fixing this."

Example output:

{
  "body": {
    "text": "Thank you for reaching out and letting us know about the issue with your payment. I understand how concerning it is to see a charge when your payment was declined. We will investigate this matter right away and get back to you with an update within 24 hours. Thank you for your patience while we look into this for you."
  },
  "tokenUsage": {
    "inputTokenCount": 89,
    "outputTokenCount": 65,
    "totalTokenCount": 154
  }
}

Configuration with Prompts + JSON Schema

This configuration uses User Prompt, System Prompt, and JSON Schema to generate a structured response.

Practical example

  • Use case: A pipeline receives a user-generated comment from an ISV (independent software vendor) platform. The Agent Connector sends the comment to the AI to evaluate whether it’s harmful or offensive. The returned score is then used to decide whether the comment should be published or if the user should be flagged.

  • Goal: Evaluate and score a comment’s harmfulness and determine whether it should be approved.

System Prompt:

You are a content moderator. Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.

User Prompt:

Evaluate the following comment:  
"I had a great experience with this company. The team is professional and very helpful."

JSON Schema:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "ModerationResult",
  "type": "object",
  "properties": {
    "status": {
      "type": "integer",
      "enum": [200],
      "description": "HTTP status code"
    },
    "body": {
      "type": "object",
      "properties": {
        "score": {
          "type": "string",
          "pattern": "^(0(\\.\\d+)?|1(\\.0+)?)$",
          "description": "Severity score from 0 to 1"
        },
        "label": {
          "type": "string",
          "description": "Label describing the content, e.g., harmless, potentially harmful"
        },
        "should_approve": {
          "type": "boolean",
          "description": "Indicates whether the comment should be approved"
        }
      },
      "required": ["score", "label", "should_approve"],
      "additionalProperties": false
    }
  },
  "required": ["status", "body"],
  "additionalProperties": false
}

Possible output:

{
  "body": {
    "status": "200",
    "body": {
      "score": "0",
      "label": "harmless",
      "should_approve": true
    }
  },
  "tokenUsage": {
    "inputTokenCount": 168,
    "outputTokenCount": 22,
    "totalTokenCount": 190
  }
}
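The schema's key constraints can be spot-checked outside the platform. Below is a stdlib-only sketch of the most important rules (the score pattern and the field types); it is not the connector's validator, which enforces the full JSON Schema.

```python
import re

# Same pattern as the schema's "score" property: a decimal from 0 to 1.
SCORE_PATTERN = re.compile(r"^(0(\.\d+)?|1(\.0+)?)$")

def check_moderation_body(body):
    """Assert the key constraints from the ModerationResult schema."""
    # score is a string matching the 0-to-1 pattern
    assert isinstance(body["score"], str)
    assert SCORE_PATTERN.match(body["score"])
    # label is free-form text; should_approve is a real boolean
    assert isinstance(body["label"], str)
    assert isinstance(body["should_approve"], bool)
    return True

# The inner body of the possible output above passes the checks:
check_moderation_body({"score": "0", "label": "harmless", "should_approve": True})
# A score like "1.5", or a non-boolean should_approve, would fail.
```

Because the schema also sets `additionalProperties: false`, the model cannot smuggle extra fields into the response, which keeps downstream mapping predictable.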

Configuration with Prompts + plain JSON

This configuration uses User Prompt, System Prompt, and plain JSON (no schema) to return a structured response.

Practical example

  • Using the same use case as above, the prompts guide the AI to return a JSON object directly, without schema validation.

System Prompt:

You are a content moderator. 
Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.

User Prompt:

Evaluate the following comment:  
"I had a great experience with this company. The team is professional and very helpful."

Plain JSON:

{
  "score": "",
  "label": "",
  "should_approve": ""
}

Output:

{
  "body": {
    "score": "0",
    "label": "Not harmful",
    "should_approve": "yes"
  },
  "tokenUsage": {
    "inputTokenCount": 91,
    "outputTokenCount": 26,
    "totalTokenCount": 117
  }
}
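Note the type drift compared with the schema-based example: without a schema, score and should_approve come back as strings ("0", "yes") rather than as a patterned string and a boolean. If a downstream step needs typed values, a small normalization step helps. This is a sketch that assumes yes/no-style answers from the model:

```python
def normalize_moderation(raw):
    """Coerce a plain-JSON moderation result into typed values."""
    return {
        "score": float(raw["score"]),
        "label": raw["label"],
        # Accept common truthy spellings the model might return.
        "should_approve": str(raw["should_approve"]).strip().lower()
        in ("yes", "true", "1"),
    }

# Normalizing the output above:
result = normalize_moderation(
    {"score": "0", "label": "Not harmful", "should_approve": "yes"}
)
```

For anything beyond quick experiments, the JSON Schema configuration from the previous section is the safer option, since it pushes type enforcement to the model instead of patching types afterward.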

Configuration with Prompts + Double Braces

This configuration uses the User Prompt field to dynamically inject data from a previous connector using Double Braces expressions. In addition, the System Prompt and Output Format fields are used to guide the AI and generate a structured response.

Practical example

  • Use case: A pipeline receives address data from a REST connector that queries a Brazilian public ZIP code API (OpenCEP). The Agent Connector is then used to classify the type of address as residential, commercial or rural, based on the street name and neighborhood returned by the API.

  • Goal: Categorize the address type using dynamic data from the previous connector.

System Prompt:

You are an address classification assistant. Based on the street name and neighborhood, classify the address as residential, commercial, or rural. Explain your reasoning.

User Prompt with Double Braces:

Use the following address to make your evaluation: {{message.body}}

JSON Schema:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "name": "address_classification_response",
  "type": "object",
  "properties": {
    "status": {
      "type": "integer"
    },
    "body": {
      "type": "object",
      "properties": {
        "type": {
          "type": "string",
          "enum": ["residential", "commercial", "rural"]
        },
        "reason": {
          "type": "string"
        }
      },
      "required": ["type", "reason"]
    }
  },
  "required": ["status", "body"],
  "additionalProperties": false
}

Output:

{
  "body": {
    "status": 200,
    "body": {
      "type": "residential",
      "reason": "The street name 'Rua Josina Teixeira de Carvalho' and the neighborhood 'Vila Anchieta' in São José do Rio Preto indicate a typical urban residential area. There is no indication of commercial or rural characteristics in the address."
    }
  },
  "tokenUsage": {
    "inputTokenCount": 170,
    "outputTokenCount": 62,
    "totalTokenCount": 232
  }
}

MCP Server configuration

This configuration uses an MCP Server, combined with User Prompt, System Prompt, and JSON Schema, to request and structure documentation generated from external data sources.

Practical example

  • Use case: A pipeline connects to the Deepwiki MCP server to retrieve technical knowledge about a topic. The AI transforms this raw information into structured documentation.

  • Goal: Generate a documentation section about Event-Driven Architecture with a clear title, short description, practical use cases, and best practices.

MCP Server:

System Prompt:

You are a technical documentation generator. Always write in clear and concise English, using a professional but simple tone.
Your task is to transform raw information retrieved from external tools (such as Deepwiki) into well-structured documentation.
Ensure your output is consistent, accurate, and aligned with the requested format.

User Prompt:

Use the information retrieved from the Deepwiki MCP server about the topic "Event-Driven Architecture" to create a documentation section.
The documentation must include:
- A clear title.
- A concise description (2–3 sentences).
- At least three practical use cases.
- At least three best practices.
Format the response strictly following the provided JSON schema.

JSON Schema:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "DocumentationSection",
  "type": "object",
  "required": ["title", "description", "use_cases", "best_practices"],
  "properties": {
    "title": {
      "type": "string",
      "description": "The title of the documentation section"
    },
    "description": {
      "type": "string",
      "description": "A concise description of the topic (2-3 sentences)"
    },
    "use_cases": {
      "type": "array",
      "description": "Practical use cases for the topic",
      "items": {
        "type": "string"
      },
      "minItems": 3
    },
    "best_practices": {
      "type": "array",
      "description": "Recommended best practices for the topic",
      "items": {
        "type": "string"
      },
      "minItems": 3
    }
  },
  "additionalProperties": false
}

Output:

{
  "body": {
    "title": "Event-Driven Architecture",
    "description": "Event-Driven Architecture (EDA) is a software design pattern where system components communicate by producing and consuming events. This approach enables loosely coupled systems that can react to changes in real time, improving scalability and flexibility. EDA is commonly used in distributed systems and applications requiring asynchronous processing.",
    "use_cases": [
      "Building microservices that need to communicate asynchronously.",
      "Implementing real-time analytics platforms that process streaming data.",
      "Automating workflows in response to business events, such as order processing or user actions."
    ],
    "best_practices": [
      "Design events to be self-contained and descriptive to ensure clear communication between components.",
      "Use reliable messaging systems to guarantee event delivery and avoid data loss.",
      "Monitor and log event flows to quickly detect and resolve issues in the system."
    ]
  },
  "tokenUsage": {
    "inputTokenCount": 396,
    "outputTokenCount": 162,
    "totalTokenCount": 558
  }
}

FAQ

How can I test and experiment with my prompts?

Use the testing panel located on the right side of the connector’s configuration form. For detailed guidance, see the Testing the Agent Connector topic.

Can I use data from the previous connectors?

Yes. You can use Double Braces expressions to reference data from previous connectors and include it in your prompt.

How is sensitive data handled?

The connector doesn’t redact or filter payload data. We recommend following the same data handling practices used with other connectors.

Can I chain multiple LLM calls in one pipeline?

Yes. You can use the output of one LLM call as input for another. For example, first classify a support ticket, then generate a response based on the classification.
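Conceptually, the classify-then-respond chain looks like the sketch below. Both connector calls are stubbed out here, since the real calls happen inside the pipeline; the function names and canned strings are illustrative only.

```python
def classify_ticket(ticket):
    # Stub for the first Agent Connector call, which classifies
    # the ticket's topic (see the first practical example above).
    return "Payment Issue"

def draft_reply(ticket, topic):
    # Stub for the second call; in a real pipeline, the first call's
    # output would be injected into this prompt via Double Braces.
    return (
        f"Regarding your {topic.lower()} report: we are investigating "
        "and will get back to you within 24 hours."
    )

ticket = "My payment was declined, but the amount was debited from my account."
topic = classify_ticket(ticket)
reply = draft_reply(ticket, topic)
```

Chaining this way lets each call stay small and testable on its own, which also makes the testing panel more useful per step.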

What if the connector produces inaccurate or made-up results?

For critical tasks, reduce the risk of hallucinations by following these best practices:

  • Configure parameters such as Temperature, Top K, Top P, Frequency Penalty, and Presence Penalty.

  • Break processes into smaller steps, for example, generate first and verify afterward. This approach provides better control and allows you to validate results before using them.

  • Create more effective prompts by applying prompt engineering techniques.

What happens if the provider takes too long to respond?

If the provider takes too long to respond, the request will time out and an error message will be shown in the Execution Panel.
