LLM Connector

Discover more about the LLM Connector and how to use it on the Digibee Integration Platform.

The LLM Connector sends requests to Large Language Models (LLMs) within Digibee pipelines, enabling tasks such as text classification, information extraction, summarization, and content evaluation.

It supports built-in authentication and works with major providers: OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, and DeepSeek. The configuration allows you to control model behavior, response format, and output structure based on your integration needs.

Parameters

Configure the connector using the parameters below. Fields that support Double Braces expressions say so in their descriptions.

Alias
Name (alias) for this connector's output, allowing you to reference it later in the flow using Double Braces expressions.
Type: String. Default value: llm-1.

LLM Provider
Specifies the LLM provider to use. Available options are: Anthropic Claude, DeepSeek, Google Gemini, and OpenAI.
Type: String. Default value: N/A.

Use Custom Model
Enable to select a custom AI model.
Type: Boolean. Default value: False.

Model
The AI model to be used, based on the selected provider. Only text models are supported; image generation is not available.
Type: String. Default value: N/A.

Account
The account used to authenticate with the connector. It must be previously registered on the Accounts page. Supported type: Secret Key.
Type: Account. Default value: N/A.

System Prompt
A predefined instruction that sets the tone and behavior of the AI. You can use it to define roles or the type of response the model should always follow.
Type: Plain Text. Default value: N/A.

User Prompt
The prompt sent to the AI model. Supports Double Braces syntax to include data or variables from earlier steps.
Type: Plain Text. Default value: N/A.

Testing the LLM Connector

The LLM connector can be tested in isolation, without executing the full pipeline. This makes it easier to quickly refine and improve your prompts.

After configuring the parameters of the connector, click Play in the bottom right corner to run a test and view the request responses. After verifying the connector works as expected, click Confirm to save it and return to the main flow.

LLM Connector in action

You can use the Mock Response feature to validate flows that make external calls without consuming AI tokens.

Configuration with User Prompt Only

This configuration uses only the User Prompt parameter to send a request to the AI model.

Practical example

  • Use case: A pipeline integrated with Zendesk receives a new customer ticket. The LLM Connector is used to analyze the request and classify its topic.

  • Goal: Classify the topic of a support ticket.

User Prompt:

Classify the topic of the following customer request:  
"My payment was declined, but the amount was debited from my account. I need help fixing this."

Example output:

{
  "status": 200,
  "body": "Payment Issues"
}
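
The connector makes the provider call for you, but when refining a prompt it can help to see the equivalent direct request. Below is a minimal sketch using the OpenAI Python SDK, assuming OpenAI as the provider; the model name is an illustrative choice, not a connector default:

# Minimal sketch: the same user-prompt-only request sent straight to OpenAI.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Classify the topic of the following customer request: "
                '"My payment was declined, but the amount was debited '
                'from my account. I need help fixing this."'
            ),
        }
    ],
)

print(response.choices[0].message.content)  # e.g., "Payment Issues"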

Configuration with User + System Prompts

This configuration uses both the User Prompt and System Prompt parameters to guide the AI response.

Practical example

  • Use case: After classifying the support ticket, the pipeline queries a knowledge database. The LLM Connector is then used again to generate a personalized response for the customer.

  • Goal: Generate a custom response using predefined tone and style.

System Prompt:

You are a friendly and helpful support agent. Always use an empathetic tone and provide clear instructions. Return the message as plain text with no line breaks.

User Prompt:

Write a response to the customer below, explaining that we will investigate the payment and get back to them within 24 hours:  
"My payment was declined, but the amount was debited from my account. I need help fixing this."

Example output:

{
  "status": 200,
  "body": "Thank you for reaching out, and I’m sorry to hear about the payment issue. I completely understand how frustrating this must be. We’ll investigate this right away and get back to you with an update within 24 hours. In the meantime, please rest assured that we’re on it and will do everything we can to resolve this for you. If you have any additional details or questions, feel free to share them. We appreciate your patience!"
}
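
In most provider APIs, the System Prompt maps to a dedicated field rather than to the message list. Below is a minimal sketch of the same configuration using the Anthropic Python SDK, assuming Anthropic Claude as the provider; the model name and token limit are illustrative:

# Minimal sketch: system + user prompts sent straight to Anthropic Claude.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model choice
    max_tokens=300,
    system=(
        "You are a friendly and helpful support agent. Always use an "
        "empathetic tone and provide clear instructions. Return the "
        "message as plain text with no line breaks."
    ),
    messages=[
        {
            "role": "user",
            "content": (
                "Write a response to the customer below, explaining that we "
                "will investigate the payment and get back to them within "
                '24 hours: "My payment was declined, but the amount was '
                'debited from my account. I need help fixing this."'
            ),
        }
    ],
)

print(message.content[0].text)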

Configuration with Prompts + JSON Schema

This configuration uses User Prompt, System Prompt, and JSON Schema to generate a structured response.

Practical example

  • Use case: A pipeline receives a user-generated comment from an ISV (independent software vendor) platform. The LLM Connector sends the comment to the AI to evaluate whether it’s harmful or offensive. The returned score is then used to decide whether the comment should be published or if the user should be flagged.

  • Goal: Evaluate and score a comment’s harmfulness and determine whether it should be approved.

System Prompt:

You are a content moderator. Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.

User Prompt:

Evaluate the following comment:  
"I had a great experience with this company. The team is professional and very helpful."

JSON Schema:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "ModerationResult",
  "type": "object",
  "properties": {
    "status": {
      "type": "integer",
      "enum": [200],
      "description": "HTTP status code"
    },
    "body": {
      "type": "object",
      "properties": {
        "score": {
          "type": "string",
          "pattern": "^(0(\\.\\d+)?|1(\\.0+)?)$",
          "description": "Severity score from 0 to 1"
        },
        "label": {
          "type": "string",
          "description": "Label describing the content, e.g., harmless, potentially harmful"
        },
        "should_approve": {
          "type": "boolean",
          "description": "Indicates whether the comment should be approved"
        }
      },
      "required": ["score", "label", "should_approve"],
      "additionalProperties": false
    }
  },
  "required": ["status", "body"],
  "additionalProperties": false
}

Possible output:

{
  "body": {
    "status": "200",
    "body": {
      "score": "0",
      "label": "harmless",
      "should_approve": true
    }
  },
  "tokenUsage": {
    "inputTokenCount": 168,
    "outputTokenCount": 22,
    "totalTokenCount": 190
  }
}
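
Even with a schema configured, a downstream step may want to re-check the structured answer before acting on it. Below is a minimal sketch using the Python jsonschema package to validate the inner body object from the output above; variable names are illustrative:

# Minimal sketch: validate the model's moderation result against the schema.
# Assumes the jsonschema package is installed.
from jsonschema import ValidationError, validate

# The "body" portion of the schema configured in the connector.
body_schema = {
    "type": "object",
    "properties": {
        "score": {"type": "string", "pattern": "^(0(\\.\\d+)?|1(\\.0+)?)$"},
        "label": {"type": "string"},
        "should_approve": {"type": "boolean"},
    },
    "required": ["score", "label", "should_approve"],
    "additionalProperties": False,
}

# The inner body object from the example output above.
model_answer = {"score": "0", "label": "harmless", "should_approve": True}

try:
    validate(instance=model_answer, schema=body_schema)
    print("Moderation result is well-formed")
except ValidationError as err:
    print(f"Schema violation: {err.message}")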

Configuration with Prompts + plain JSON

This configuration uses User Prompt, System Prompt, and plain JSON (no schema) to return a structured response. The JSON object only hints at the expected shape; without schema validation, field types are not enforced, as the output below shows (should_approve comes back as the string "yes" rather than a boolean).

Practical example

  • Using the same use case as above, the prompts guide the AI to return a JSON object directly, without schema validation.

System Prompt:

You are a content moderator. 
Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.

User Prompt:

Evaluate the following comment:  
"I had a great experience with this company. The team is professional and very helpful."

Plain JSON:

{
  "score": "",
  "label": "",
  "should_approve": ""
}

Output:

{
  "body": {
    "score": "0",
    "label": "Not harmful",
    "should_approve": "yes"
  },
  "tokenUsage": {
    "inputTokenCount": 91,
    "outputTokenCount": 26,
    "totalTokenCount": 117
  }
}

Dynamic configuration: Prompt with Double Braces reference

This configuration uses the User Prompt field to dynamically inject data from a previous connector using Double Braces expressions. In addition, the System Prompt and Output Format fields are used to guide the AI and generate a structured response.

Practical example

  • Use case: A pipeline receives address data from a REST connector that queries a Brazilian public ZIP code API (OpenCEP). The LLM Connector is then used to classify the type of address as residential, commercial, or rural, based on the street name and neighborhood returned by the API.

  • Goal: Categorize the address type using dynamic data from the previous connector.

System Prompt:

You are an address classification assistant. Based on the street name and neighborhood, classify the address as residential, commercial, or rural. Explain your reasoning.

User Prompt with Double Braces:

Use the following address to make your evaluation: {{message.body}}

Output Format Body:

{
  "type": "",
  "reason": ""
}

Possible output:

{
  "status": 200,
  "body": {
    "type": "residential",
    "reason": "The street name 'Rua Abilio Carvalho Bastos' and the neighborhood 'Fósforo' suggest a typical residential area. The presence of house numbers (até 799/800) further supports this classification, as commercial areas are more likely to have business names or larger ranges of numbers."
  }
}
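
Double Braces resolution happens inside the platform, but conceptually it is template substitution: the previous connector's body replaces {{message.body}} in the prompt text. Below is a rough Python sketch of that idea, using a hypothetical OpenCEP-style payload; the field values are assumptions for illustration:

import json

# Hypothetical payload returned by the previous REST connector (OpenCEP-style).
previous_step_body = {
    "cep": "00000-000",  # placeholder ZIP code
    "logradouro": "Rua Abilio Carvalho Bastos",  # street name
    "bairro": "Fósforo",  # neighborhood
}

# Conceptually, {{message.body}} injects the previous step's body into the prompt.
user_prompt = "Use the following address to make your evaluation: " + json.dumps(
    previous_step_body, ensure_ascii=False
)

print(user_prompt)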

MCP Server configuration

This configuration uses an MCP Server, combined with User Prompt, System Prompt, and JSON Schema, to request and structure documentation generated from external data sources.

Practical example

  • Use case: A pipeline connects to the Deepwiki MCP server to retrieve technical knowledge about a topic. The AI transforms this raw information into structured documentation.

  • Goal: Generate a documentation section about Event-Driven Architecture with a clear title, short description, practical use cases, and best practices.

MCP Server: Deepwiki

System Prompt:

You are a technical documentation generator. Always write in clear and concise English, using a professional but simple tone.
Your task is to transform raw information retrieved from external tools (such as Deepwiki) into well-structured documentation.
Ensure your output is consistent, accurate, and aligned with the requested format.

User Prompt:

Use the information retrieved from the Deepwiki MCP server about the topic "Event-Driven Architecture" to create a documentation section.
The documentation must include:
- A clear title.
- A concise description (2–3 sentences).
- At least three practical use cases.
- At least three best practices.
Format the response strictly following the provided JSON schema.

JSON Schema:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "DocumentationSection",
  "type": "object",
  "required": ["title", "description", "use_cases", "best_practices"],
  "properties": {
    "title": {
      "type": "string",
      "description": "The title of the documentation section"
    },
    "description": {
      "type": "string",
      "description": "A concise description of the topic (2-3 sentences)"
    },
    "use_cases": {
      "type": "array",
      "description": "Practical use cases for the topic",
      "items": {
        "type": "string"
      },
      "minItems": 3
    },
    "best_practices": {
      "type": "array",
      "description": "Recommended best practices for the topic",
      "items": {
        "type": "string"
      },
      "minItems": 3
    }
  },
  "additionalProperties": false
}

Output:

{
  "body": {
    "title": "Event-Driven Architecture",
    "description": "Event-Driven Architecture (EDA) is a software design pattern where system components communicate by producing and consuming events. This approach enables loosely coupled systems that can react to changes in real time, improving scalability and flexibility. EDA is commonly used in distributed systems and applications requiring asynchronous processing.",
    "use_cases": [
      "Building microservices that need to communicate asynchronously.",
      "Implementing real-time analytics platforms that process streaming data.",
      "Automating workflows in response to business events, such as order processing or user actions."
    ],
    "best_practices": [
      "Design events to be self-contained and descriptive to ensure clear communication between components.",
      "Use reliable messaging systems to guarantee event delivery and avoid data loss.",
      "Monitor and log event flows to quickly detect and resolve issues in the system."
    ]
  },
  "tokenUsage": {
    "inputTokenCount": 396,
    "outputTokenCount": 162,
    "totalTokenCount": 558
  }
}
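
A later pipeline step could flatten this structured body into publishable text. Below is a minimal, illustrative Python sketch; the doc dictionary abbreviates the body object from the output above:

# Illustrative: render the structured documentation body as plain text.
doc = {
    "title": "Event-Driven Architecture",
    "description": "Event-Driven Architecture (EDA) is a software design pattern...",
    "use_cases": ["Building microservices that need to communicate asynchronously."],
    "best_practices": ["Design events to be self-contained and descriptive."],
}

lines = [doc["title"], "", doc["description"], "", "Use cases:"]
lines += [f"- {item}" for item in doc["use_cases"]]
lines += ["", "Best practices:"]
lines += [f"- {item}" for item in doc["best_practices"]]

print("\n".join(lines))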

FAQ

How can I test and experiment with my prompts?

Use the Execution Panel to test your prompts. The Run Selected Steps option is especially useful for testing prompts separately from the rest of the pipeline.

Can I use data from previous connectors?

Yes. You can use Double Braces expressions to reference data from previous connectors and include it in your prompt.

How is sensitive data handled?

The connector doesn’t redact or filter payload data. We recommend following the same data handling practices used with other connectors.

Can I chain multiple LLM calls in one pipeline?

Yes. You can use the output of one LLM call as input for another. For example, first classify a support ticket, then generate a response based on the classification.
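
Outside the platform, the same classify-then-respond pattern is two sequential calls, with the first answer embedded in the second prompt. Below is a rough sketch using the OpenAI Python SDK; model names and prompts are illustrative:

# Rough sketch: chain two LLM calls, feeding the first result into the second.
from openai import OpenAI

client = OpenAI()

ticket = (
    "My payment was declined, but the amount was debited from my account. "
    "I need help fixing this."
)

# Step 1: classify the ticket.
classification = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": f"Classify the topic of this ticket: {ticket}"}],
).choices[0].message.content

# Step 2: generate a reply based on the classification.
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": "You are a friendly support agent."},
        {
            "role": "user",
            "content": (
                f"The ticket was classified as '{classification}'. "
                f"Write a short reply to: {ticket}"
            ),
        },
    ],
).choices[0].message.content

print(reply)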

What if the LLM produces inaccurate or made-up results?

For critical tasks, reduce hallucination risk by splitting the process into smaller steps, such as generating first and verifying afterward. This gives you more control and lets you validate the result before using it.

What happens if the provider takes too long to respond?

If the provider takes too long to respond, the request will time out and an error message will be shown in the Execution Panel.
