LLM Connector
Discover more about the LLM Connector and how to use it on the Digibee Integration Platform.
The LLM Connector sends requests to Large Language Models (LLMs) within Digibee pipelines, enabling tasks such as text classification, information extraction, summarization, and content evaluation.
It supports built-in authentication and works with major providers: Anthropic Claude, DeepSeek, Google Gemini, and OpenAI. The configuration allows you to control model behavior, response format, and output structure based on your integration needs.
Parameters
Take a look at the configuration parameters for the connector. Parameters that support Double Braces expressions are marked with (DB).
General
| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| LLM Provider | Specifies the LLM provider to use. Available options: Anthropic Claude, DeepSeek, Google Gemini, and OpenAI. | N/A | String |
| Model | The AI model to be used, based on the selected provider. Only text models are supported; image generation is not available. | N/A | String |
| Account | The account used to authenticate with the selected LLM provider. | N/A | Account |
| User Prompt (DB) | The prompt sent to the AI model. Supports Double Braces syntax to include data or variables from earlier steps. | N/A | Plain Text |
| Output Format | When enabled, allows you to define a custom output format for the AI response. | False | Boolean |
| Output Format Body (DB) | The structure of the desired output format. | N/A | JSON |
System Prompt
| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| System Prompt (DB) | A predefined instruction that sets the tone and behavior of the AI. You can use it to define roles or the type of response the model should always follow. | N/A | Plain Text |
| Maximum Output Token (DB) | Sets the maximum number of tokens allowed in the AI response. Lower values may reduce the quality and completeness of the output. | 1024 | Integer |
Settings
| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| Stop On Client Error | If enabled, stops the pipeline execution when an HTTP 4xx error occurs. | False | Boolean |
| Stop On Server Error | If enabled, stops the pipeline execution when an HTTP 5xx error occurs. | False | Boolean |
Error Handling
| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| Fail On Error | If enabled, interrupts the pipeline execution when an error occurs. If disabled, execution continues and the "success" property is set to false. | False | Boolean |
Documentation
| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| Documentation | Optional field to describe the connector configuration and any relevant business rules. | N/A | String |
LLM Connector in action
Minimal configuration: User Prompt request
This configuration uses only the User Prompt parameter to send a request to the AI model.
Advantages:
Easy to set up with just one input.
Good for testing different prompts quickly.
Works well for simple requests.
Practical example
Use case: A pipeline integrated with Zendesk receives a new customer ticket. The LLM Connector is used to analyze the request and classify its topic.
Goal: Classify the topic of a support ticket.
User Prompt:
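A prompt along these lines could be used (the ticket text below is illustrative):

```
Classify the topic of the following support ticket into one of these categories: Billing, Technical Issue, Account Access, or General Question. Reply with the category name only.

Ticket: "I was charged twice for my subscription this month and I need a refund."
```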
Example output:
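For the prompt above, the model would return something like:

```
Billing
```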
Mid-level configuration: User + System Prompts request
This configuration uses both the User Prompt and System Prompt parameters to guide the AI response.
Advantages:
Helps guide the AI’s tone and behavior.
Makes responses more consistent.
Adds context that helps the AI understand the prompt better.
Practical example
Use case: After classifying the support ticket, the pipeline queries a knowledge database. The LLM Connector is then used again to generate a personalized response for the customer.
Goal: Generate a custom response using predefined tone and style.
System Prompt:
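A System Prompt such as the following could set the agent persona (wording is illustrative):

```
You are a friendly and professional customer support agent. Always reply in a polite, concise tone, address the customer by name when available, and close by offering further help.
```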
User Prompt:
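The User Prompt can then combine the ticket with the knowledge base result retrieved earlier in the pipeline (the values shown are illustrative):

```
Write a response to the customer below, based on the knowledge base article provided.

Customer name: Ana
Ticket: "How do I reset my password?"
Knowledge base article: "Passwords can be reset from the login page by clicking 'Forgot password' and following the emailed instructions."
```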
Example output:
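A response generated under this configuration might look like:

```
Hi Ana! You can reset your password directly from the login page: just click "Forgot password" and follow the instructions we send to your email. If you need anything else, I'm happy to help!
```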
High-level configuration: Prompts + Output Format request
This configuration uses User Prompt, System Prompt, and Output Format to generate a structured response.
Advantages:
Produces output in a clear and structured format.
Useful when the result needs to follow a specific structure.
Helps manage the AI’s unpredictability by setting a fixed format.
Practical example
Use case: A pipeline receives a user-generated comment from an ISV (independent software vendor) platform. The LLM Connector sends the comment to the AI to evaluate whether it’s harmful or offensive. The returned score is then used to decide whether the comment should be published or if the user should be flagged.
Goal: Evaluate and score a comment’s harmfulness and determine whether it should be approved.
System Prompt:
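A moderation-oriented System Prompt could look like this (wording is illustrative):

```
You are a content moderation assistant. Evaluate user-generated comments for harmful or offensive language and score them objectively. Return only the requested fields, with no additional commentary.
```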
User Prompt:
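The User Prompt then carries the comment to be evaluated (the comment below is a made-up example):

```
Evaluate the following comment and score how harmful it is on a scale from 0 (harmless) to 10 (extremely harmful):

"This product is garbage and so is everyone who buys it."
```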
Output Format Body:
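The exact schema is up to you; the field names below are illustrative:

```json
{
  "harmfulness_score": 0,
  "is_offensive": false,
  "recommended_action": ""
}
```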
Possible output:
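For the comment above, the connector could return a structured response such as:

```json
{
  "harmfulness_score": 7,
  "is_offensive": true,
  "recommended_action": "flag_user"
}
```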
Dynamic configuration: Prompt with Double Braces reference
This configuration uses the User Prompt field to dynamically inject data from a previous connector using Double Braces expressions. In addition, the System Prompt and Output Format fields are used to guide the AI and generate a structured response.
Advantages:
Enables contextual prompts based on pipeline data.
Connects the AI response to runtime information.
Practical example
Use case: A pipeline receives address data from a REST connector that queries a Brazilian public ZIP code API (OpenCEP). The LLM Connector is then used to classify the type of address as residential, commercial, or rural, based on the street name and neighborhood returned by the API.
Goal: Categorize the address type using dynamic data from the previous connector.
System Prompt:
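An address-classification System Prompt could look like this (wording is illustrative):

```
You are an assistant that classifies Brazilian addresses. Based on the street name and neighborhood, classify the address type as residential, commercial, or rural. Answer only in the requested format.
```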
User Prompt with Double Braces:
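Assuming the OpenCEP response is available in the pipeline message with its standard fields (logradouro, bairro, localidade, uf), the prompt could reference them with Double Braces; adjust the paths to match where your REST connector stores the response:

```
Classify the type of the following address:

Street: {{ message.logradouro }}
Neighborhood: {{ message.bairro }}
City: {{ message.localidade }} - {{ message.uf }}
```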
Output Format Body:
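A simple JSON template could be used here as well (the field name is illustrative):

```json
{
  "address_type": ""
}
```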
Possible output:
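For a downtown street address, the result might be:

```json
{
  "address_type": "commercial"
}
```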