LLM Connector
Discover more about the LLM Connector and how to use it on the Digibee Integration Platform.
The LLM Connector sends requests to Large Language Models (LLMs) within Digibee pipelines, enabling tasks such as text classification, information extraction, summarization, and content evaluation.
It supports built-in authentication and works with major providers: OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, and DeepSeek. The configuration allows you to control model behavior, response format, and output structure based on your integration needs.
Parameters
Take a look at the configuration parameters for the connector. Parameters supported by Double Braces expressions are marked with (DB).
General
Alias
Name (alias) for this connector’s output, allowing you to reference it later in the flow using Double Braces expressions. Learn more.
Default value: llm-1 | Data type: String

LLM Provider
Specifies the LLM provider to use. Available options are: Anthropic Claude, DeepSeek, Google Gemini, and OpenAI.
Default value: N/A | Data type: String

Use Custom Model
Enable to select a custom AI model.
Default value: False | Data type: Boolean

Model
The AI model to be used, based on the selected provider. Only text models are supported; image generation is not available.
Default value: N/A | Data type: String

Account
The account to authenticate with the connector. It must be previously registered on the Accounts page. Supported type: Secret Key.
Default value: N/A | Data type: Account

System Prompt (DB)
A predefined instruction that sets the tone and behavior of the AI. You can use it to define roles or the type of response the model should always follow.
Default value: N/A | Data type: Plain Text

User Prompt (DB)
The prompt sent to the AI model. Supports Double Braces syntax to include data or variables from earlier steps (see the sketch after this parameter group).
Default value: N/A | Data type: Plain Text
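Because the User Prompt accepts Double Braces, data produced by earlier steps can be embedded directly in the text sent to the model. A minimal sketch, assuming a previous connector placed the ticket text in a description field (the field name is illustrative):

Classify the topic of the following customer request:
{{ message.body.description }}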
Response Format
Use JSON Schema
When enabled, allows you to provide a JSON Schema to guide the LLM in generating the expected response format.
Default value: False | Data type: Boolean

JSON Schema definition
The JSON Schema that the AI should follow when generating the response. A minimal example follows this parameter group.
Default value: N/A | Data type: JSON

Use JSON Mode
When enabled, allows you to provide a JSON example to help the LLM produce your desired response format.
Default value: False | Data type: Boolean

JSON definition
The JSON example that the AI should follow when generating the response.
Default value: N/A | Data type: JSON
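For reference, a minimal schema of the kind you might supply in JSON Schema definition (the topic field is illustrative):

{
  "type": "object",
  "properties": {
    "topic": { "type": "string" }
  },
  "required": ["topic"]
}

A complete, end-to-end schema example is shown in the LLM Connector in action section below.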
Settings
Maximum Output Token (DB)
The maximum length of the response. Larger numbers allow longer answers, smaller numbers make them shorter.
Default value: 10000 | Data type: Integer

Temperature (DB)
Controls creativity. Lower values make the answers more focused and predictable. Higher values make them more varied and creative (see the illustration after this group).
Default value: 0.1 | Data type: Float or Integer

Top K (DB)
Limits how many word choices the model considers at each step. Smaller numbers produce safer, more focused answers. Larger numbers produce more variety.
Default value: 64 | Data type: Integer

Top P (DB)
Another way to control variety. The model only considers the most likely words whose combined probability adds up to this value. Lower values produce more focused answers.
Default value: 1 | Data type: Integer

Frequency Penalty (DB)
Discourages the model from repeating the same words too often.
Default value: 0 | Data type: Integer

Presence Penalty (DB)
Encourages the model to bring in new ideas instead of staying on the same topic.
Default value: 0 | Data type: Float
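As an illustration only (these values are examples, not recommendations), a classification step that needs predictable output might keep low creativity settings, while a step that drafts customer-facing text might allow more variety:

Focused classification (illustrative): Temperature 0.1, Top P 1, Top K 64
Varied drafting (illustrative): Temperature 0.8, Top P 0.95, Top K 64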
Error Handling
Fail On Error
If enabled, interrupts the pipeline execution when an error occurs. If disabled, execution continues, but the "success" property will be set to false.
Default value: False | Data type: Boolean
Documentation
Documentation
Optional field to describe the connector configuration and any relevant business rules.
Default value: N/A | Data type: String
LLM Connector in action
Configuration with User Prompt Only
This configuration uses only the User Prompt parameter to send a request to the AI model.
Advantages:
Easy to set up with just one input.
Good for testing different prompts quickly.
Works well for simple requests.
Practical example
Use case: A pipeline integrated with Zendesk receives a new customer ticket. The LLM Connector is used to analyze the request and classify its topic.
Goal: Classify the topic of a support ticket.
User Prompt:
Classify the topic of the following customer request:
"My payment was declined, but the amount was debited from my account. I need help fixing this."
Example output:
{
  "status": 200,
  "body": "Payment Issues"
}
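The classification returned in body can then be used by the next connector in the flow. A minimal sketch, assuming the LLM Connector is the immediately preceding step and the next step maps its output with Double Braces (the ticketTopic field name is illustrative):

{
  "ticketTopic": "{{ message.body }}"
}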
Configuration with User + System Prompts
This configuration uses both the User Prompt and System Prompt parameters to guide the AI response.
Advantages:
Helps guide the AI’s tone and behavior.
Makes responses more consistent.
Adds context that helps the AI understand the prompt better.
Practical example
Use case: After classifying the support ticket, the pipeline queries a knowledge database. The LLM Connector is then used again to generate a personalized response for the customer.
Goal: Generate a custom response using predefined tone and style.
System Prompt:
You are a friendly and helpful support agent. Always use an empathetic tone and provide clear instructions. Return the message as plain text with no line breaks.
User Prompt:
Write a response to the customer below, explaining that we will investigate the payment and get back to them within 24 hours:
"My payment was declined, but the amount was debited from my account. I need help fixing this."
Example output:
{
  "status": 200,
  "body": "Thank you for reaching out, and I’m sorry to hear about the payment issue. I completely understand how frustrating this must be. We’ll investigate this right away and get back to you with an update within 24 hours. In the meantime, please rest assured that we’re on it and will do everything we can to resolve this for you. If you have any additional details or questions, feel free to share them. We appreciate your patience!"
}
Configuration with Prompts + JSON Schema
This configuration uses User Prompt, System Prompt, and JSON Schema to generate a structured response.
Advantages:
Keeps the output consistent with a defined format.
Validates field types, required fields, and allowed values automatically.
Works as a contract between systems, making integration more reliable.
Prevents invalid data from being processed.
Practical example
Use case: A pipeline receives a user-generated comment from an ISV (independent software vendor) platform. The LLM Connector sends the comment to the AI to evaluate whether it’s harmful or offensive. The returned score is then used to decide whether the comment should be published or if the user should be flagged.
Goal: Evaluate and score a comment’s harmfulness and determine whether it should be approved.
System Prompt:
You are a content moderator. Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.
User Prompt:
Evaluate the following comment:
"I had a great experience with this company. The team is professional and very helpful."
JSON Schema:
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "ModerationResult",
  "type": "object",
  "properties": {
    "status": {
      "type": "integer",
      "enum": [200],
      "description": "HTTP status code"
    },
    "body": {
      "type": "object",
      "properties": {
        "score": {
          "type": "string",
          "pattern": "^(0(\\.\\d+)?|1(\\.0+)?)$",
          "description": "Severity score from 0 to 1"
        },
        "label": {
          "type": "string",
          "description": "Label describing the content, e.g., harmless, potentially harmful"
        },
        "should_approve": {
          "type": "boolean",
          "description": "Indicates whether the comment should be approved"
        }
      },
      "required": ["score", "label", "should_approve"],
      "additionalProperties": false
    }
  },
  "required": ["status", "body"],
  "additionalProperties": false
}
Possible output:
{
  "body": {
    "status": "200",
    "body": {
      "score": "0",
      "label": "harmless",
      "should_approve": true
    }
  },
  "tokenUsage": {
    "inputTokenCount": 168,
    "outputTokenCount": 22,
    "totalTokenCount": 190
  }
}
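As described in the use case, the should_approve flag can then drive the decision to publish the comment or flag the user. A minimal sketch of a Double Braces reference for a subsequent condition or mapping (the path is an assumption based on the nested output shown above):

{{ message.body.body.should_approve }}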
Configuration with Prompts + plain JSON
This configuration uses User Prompt, System Prompt, and plain JSON (no schema) to return a structured response.
Advantages:
Produces output in a simple, readable format.
Flexible and easy to work with.
Good when strict validation isn’t necessary.
Practical example
Using the same use case as above, the prompts guide the AI to return a JSON object directly, without schema validation.
System Prompt:
You are a content moderator.
Evaluate whether the comment is harmful, assign a score from 0 to 1 for severity, and indicate whether it should be approved.
User Prompt:
Evaluate the following comment:
"I had a great experience with this company. The team is professional and very helpful."
Plain JSON:
{
  "score": "",
  "label": "",
  "should_approve": ""
}
Output:
{
  "body": {
    "score": "0",
    "label": "Not harmful",
    "should_approve": "yes"
  },
  "tokenUsage": {
    "inputTokenCount": 91,
    "outputTokenCount": 26,
    "totalTokenCount": 117
  }
}
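Note that without a schema the model chooses its own value types: in this output, should_approve comes back as the string "yes", whereas the JSON Schema configuration above constrains it to a boolean. If downstream steps depend on strict types, prefer the schema-based configuration.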
Dynamic configuration: Prompt with Double Braces reference
This configuration uses the User Prompt field to dynamically inject data from a previous connector using Double Braces expressions. In addition, the System Prompt and Output Format fields are used to guide the AI and generate a structured response.
Advantages:
Enables contextual prompts based on pipeline data.
Connects the AI response to runtime information.
Practical example
Use case: A pipeline receives address data from a REST connector that queries a Brazilian public ZIP code API (OpenCEP). The LLM Connector is then used to classify the type of address as residential, commercial or rural, based on the street name and neighborhood returned by the API.
Goal: Categorize the address type using dynamic data from the previous connector.
System Prompt:
You are an address classification assistant. Based on the street name and neighborhood, classify the address as residential, commercial, or rural. Explain your reasoning.
User Prompt with Double Braces:
Use the following address to make your evaluation: {{message.body}}
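For reference, the content injected by {{message.body}} could look like the ZIP code lookup response below (a sketch only; field names follow the ViaCEP-style convention used by OpenCEP, and the values are placeholders consistent with the output further down):

{
  "cep": "00000-000",
  "logradouro": "Rua Abilio Carvalho Bastos",
  "complemento": "até 799/800",
  "bairro": "Fósforo",
  "localidade": "Example City",
  "uf": "SP"
}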
Output Format Body:
{
  "type": "",
  "reason": ""
}
Possible output:
{
  "status": 200,
  "body": {
    "type": "residential",
    "reason": "The street name 'Rua Abilio Carvalho Bastos' and the neighborhood 'Fósforo' suggest a typical residential area. The presence of house numbers (até 799/800) further supports this classification, as commercial areas are more likely to have business names or larger ranges of numbers."
  }
}