LLM Connector

Discover more about the LLM Connector and how to use it on the Digibee Integration Platform.

The LLM Connector sends requests to Large Language Models (LLMs) within Digibee pipelines, enabling tasks such as text classification, information extraction, summarization, and content evaluation.

It supports built-in authentication and works with major providers: Anthropic Claude, DeepSeek, Google Gemini, and OpenAI. The configuration allows you to control model behavior, response format, and output structure based on your integration needs.

Parameters

Take a look at the configuration parameters for the connector. Parameters supported by Double Braces expressions are marked with (DB).

General

  • LLM Provider: Specifies the LLM provider to use. Available options are: Anthropic Claude, DeepSeek, Google Gemini, and OpenAI. Default value: N/A. Data type: String.

  • Model: The AI model to be used, based on the selected provider. Only text models are supported; image generation is not available. Default value: N/A. Data type: String.

  • Account: The account to authenticate with the connector. It must be previously registered on the Accounts page. Supported type: Secret Key. Default value: N/A. Data type: Account.

  • User Prompt (DB): The prompt sent to the AI model. Supports Double Braces syntax to include data or variables from earlier steps. Default value: N/A. Data type: Plain Text.

  • Output Format: When enabled, allows you to define a custom output format for the AI response. Default value: False. Data type: Boolean.

  • Output Format Body (DB): The structure of the desired output format. Default value: N/A. Data type: JSON.
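
For reference, the Output Format Body is a JSON skeleton whose fields the model fills in. The structure below is only an illustration; the field names are hypothetical and should reflect whatever structure your pipeline expects:

{
  "category": "",
  "confidence": ""
}

With Output Format enabled and a body like this configured, the connector returns the response in that structure instead of plain text, as shown in the high-level configuration example further below.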

System Prompt

  • System Prompt (DB): A predefined instruction that sets the tone and behavior of the AI. You can use it to define roles or the type of response the model should always follow. Default value: N/A. Data type: Plain Text.

  • Maximum Output Token (DB): Sets the maximum number of tokens allowed in the AI response. Lower values may reduce the quality and completeness of the output. Default value: 1024. Data type: Integer.

Settings

  • Stop On Client Error: If enabled, stops the pipeline execution when an HTTP 4xx error occurs. Default value: False. Data type: Boolean.

  • Stop On Server Error: If enabled, stops the pipeline execution when an HTTP 5xx error occurs. Default value: False. Data type: Boolean.

Error Handling

  • Fail On Error: If enabled, interrupts the pipeline execution when an error occurs. If disabled, execution continues, but the "success" property will be set to false. Default value: False. Data type: Boolean.

Documentation

  • Documentation: Optional field to describe the connector configuration and any relevant business rules. Default value: N/A. Data type: String.

LLM Connector in action

Minimal configuration: User Prompt request

This configuration uses only the User Prompt parameter to send a request to the AI model.

Practical example

  • Use case: A pipeline integrated with Zendesk receives a new customer ticket. The LLM Connector is used to analyze the request and classify its topic.

  • Goal: Classify the topic of a support ticket.

User Prompt:

Classify the topic of the following customer request:  
"My payment was declined, but the amount was debited from my account. I need help fixing this."

Example output:

{
  "status": 200,
  "body": "Payment Issues"
}

Mid-level configuration: User + System Prompts request

This configuration uses both the User Prompt and System Prompt parameters to guide the AI response.

Practical example

  • Use case: After classifying the support ticket, the pipeline queries a knowledge database. The LLM Connector is then used again to generate a personalized response for the customer.

  • Goal: Generate a custom response using predefined tone and style.

System Prompt:

You are a friendly and helpful support agent. Always use an empathetic tone and provide clear instructions. Return the message as plain text with no line breaks.

User Prompt:

Write a response to the customer below, explaining that we will investigate the payment and get back to them within 24 hours:  
"My payment was declined, but the amount was debited from my account. I need help fixing this."

Example output:

{
  "status": 200,
  "body": "Thank you for reaching out, and I’m sorry to hear about the payment issue. I completely understand how frustrating this must be. We’ll investigate this right away and get back to you with an update within 24 hours. In the meantime, please rest assured that we’re on it and will do everything we can to resolve this for you. If you have any additional details or questions, feel free to share them. We appreciate your patience!"
}

High-level configuration: Prompts + Output Format request

This configuration uses User Prompt, System Prompt, and Output Format to generate a structured response.

Practical example

  • Use case: A pipeline receives a user-generated comment from an ISV (independent software vendor) platform. The LLM Connector sends the comment to the AI to evaluate whether it’s harmful or offensive. The returned score is then used to decide whether the comment should be published or if the user should be flagged.

  • Goal: Evaluate and score a comment’s harmfulness and determine whether it should be approved.

System Prompt:

You are a content moderator. Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.

User Prompt:

Evaluate the following comment:  
"This company is a joke. Everyone working there is completely incompetent."

Output Format Body:

{
  "score": "",
  "label": "",
  "should_approve": 
}

Possible output:

{
  "status": 200,
  "body": {
    "score": "0.6",
    "label": "potentially harmful",
    "should_approve": false
  }
}

Dynamic configuration: Prompt with Double Braces reference

This configuration uses the User Prompt field to dynamically inject data from a previous connector using Double Braces expressions. In addition, the System Prompt and Output Format fields are used to guide the AI and generate a structured response.

Practical example

  • Use case: A pipeline receives address data from a REST connector that queries a Brazilian public ZIP code API (OpenCEP). The LLM Connector is then used to classify the type of address as residential, commercial or rural, based on the street name and neighborhood returned by the API.

  • Goal: Categorize the address type using dynamic data from the previous connector.

System Prompt:

You are an address classification assistant. Based on the street name and neighborhood, classify the address as residential, commercial, or rural. Explain your reasoning.

User Prompt with Double Braces:

Use the following address to make your evaluation: {{message.body}}
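
In this example, {{message.body}} resolves to the output of the previous REST connector. The payload below only illustrates the shape of an OpenCEP-style response; the values are hypothetical:

{
  "cep": "00000-000",
  "logradouro": "Rua Abilio Carvalho Bastos",
  "complemento": "até 799/800",
  "bairro": "Fósforo",
  "localidade": "Macapá",
  "uf": "AP"
}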

Output Format Body:

{
  "type": "",
  "reason": ""
}

Possible output:

{
  "status": 200,
  "body": {
    "type": "residential",
    "reason": "The street name 'Rua Abilio Carvalho Bastos' and the neighborhood 'Fósforo' suggest a typical residential area. The presence of house numbers (até 799/800) further supports this classification, as commercial areas are more likely to have business names or larger ranges of numbers."
  }
}

FAQ

How can I test and experiment with my prompts?

Use the Execution Panel to test your prompts. The Run Selected Steps option is especially useful for testing prompts separately from the rest of the pipeline.

Can I use data from previous connectors?

Yes. You can use Double Braces expressions to reference data from previous connectors and include it in your prompt.
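
For example, a User Prompt can reference a specific field from an earlier step (the path below is hypothetical and depends on your pipeline's payload):

Summarize the following support ticket in one sentence: {{message.body.ticket.description}}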

How is sensitive data handled?

The connector doesn’t redact or filter payload data. We recommend following the same data handling practices used with other connectors.

Can I chain multiple LLM calls in one pipeline?

Yes. You can use the output of one LLM call as input for another. For example, first classify a support ticket, then generate a response based on the classification.
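
A minimal sketch of that flow, assuming the first connector's output is the current message body when the second one runs (prompt wording and field paths are illustrative):

  • First LLM Connector (User Prompt): Classify the topic of the following customer request: {{message.body.description}}

  • Second LLM Connector (User Prompt): Write an empathetic reply for a ticket classified under the topic: {{message.body}}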

What if the LLM produces inaccurate or made-up results?

For critical tasks, reduce hallucination risk by splitting the process into smaller steps, such as generating first and verifying afterward. This gives you more control and lets you validate the result before using it.
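
For instance, after a generation step, a second LLM Connector could verify the result before the pipeline uses it. The prompt wording and field paths below are only illustrative:

Review the answer below against the original ticket and reply only with "consistent" or "inconsistent".
Ticket: {{message.body.ticket}}
Answer: {{message.body.answer}}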

What happens if the provider takes too long to respond?

If the provider takes too long to respond, the request will time out and an error message will be shown in the Execution Panel.
