DB V2
Discover more about the DB V2 connector and how to use it on the Digibee Integration Platform.
The DB V2 connector executes SQL commands such as SELECT, INSERT, UPDATE, and DELETE, and can also call stored procedures. It processes these operations and returns the results in a structured JSON format.
For a complete list of databases compatible with this connector, please refer to our Supported databases documentation.
Parameters
Configure the connector using the parameters below. Fields that support Double Braces expressions are marked in the Supports DB (Double Braces) column.

| Parameter | Description | Type | Supports DB | Default |
| --- | --- | --- | --- | --- |
| Use Dynamic Account | If enabled, the account is defined dynamically during execution. If disabled, a static account is selected from the list. | Boolean | ❌ | false |
| Scoped | If enabled, the stored account is isolated from other subprocesses, and each subprocess sees its own version of the stored account data. Available only when Use Dynamic Account is enabled. | Boolean | ❌ | false |
| Account Name | The name of the dynamic account used by the connector. The account must have been previously configured in a Store Account connector in the pipeline. Available only when Use Dynamic Account is enabled. | String | ✅ | N/A |
| Account Type | The account type used by the connector. Options: AWS V4, Basic, Kerberos, and Azure Key. | String | ❌ | Basic |
| Fail on Error | If enabled, the pipeline execution stops on error. If disabled, the pipeline continues and the error is returned in the connector's output message with "success": false. | Boolean | ❌ | false |
Important information
Credentials and connection pool
When credentials are enabled, the connection pool is created at pipeline execution and closed after completion. Pools can be shared between connectors if the same settings are applied.
Apache Hive
On Apache Hive, updateCount may not be available unless row control is enabled on the server. See the Supported databases documentation for details.
Parameters – Additional information
Bind variables
Bind variables are parameters used in SQL queries to replace fixed values with dynamic ones. They allow the same query to be reused with different input values, reducing the risk of SQL Injection and improving execution efficiency. This feature is supported by several relational databases, such as Oracle and SQL Server.
Use Double Braces ({{ }}) to insert dynamic values in SQL queries. This approach improves security (prevents SQL Injection) and performance (reuses execution plans).
Incorrect (no bind variable):
SELECT *
FROM CUSTOMER
WHERE CUSTOMER_ID = 12345
Correct (with bind variable):
SELECT *
FROM CUSTOMER
WHERE CUSTOMER_ID = {{ message.customerId }}
DB V2 uses JDBC PreparedStatements, sending queries such as:
SQL Statement:
SELECT * FROM CUSTOMER WHERE CUSTOMER_ID = ?
Parameter value:
CUSTOMER_ID = 12345
This guarantees safe parameterization and reuse of execution plans.
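For reference, here is a minimal JDBC sketch of the same parameterization the connector performs internally. The connection URL, credentials, and printed column are illustrative placeholders, not Platform settings:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BindVariableSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings; replace with your own database details.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//db-host:1521/ORCL", "user", "password");
             // The SQL text carries a placeholder, never the literal value.
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT * FROM CUSTOMER WHERE CUSTOMER_ID = ?")) {
            // The value travels separately as data, so it cannot change the SQL structure,
            // and the database can reuse the cached execution plan for any customer id.
            ps.setLong(1, 12345L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // print the first column of each row
                }
            }
        }
    }
}
```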
Checking if binds are applied
You can confirm if binds are being used by checking the database catalog views. For example, in Oracle:
SELECT sql_id, sql_text, bind_data, executions
FROM v$sql
WHERE sql_text LIKE '%CUSTOMER%'
ORDER BY last_load_time DESC;
If bind_data is populated and the SQL text contains placeholders (:1, :B1), the query uses bind variables.
Raw SQL Statement parameter
To add more flexibility when using DB V2, you can enable the Raw SQL Statement option, predefine a query, and reference it via Double Braces in the SQL Statement parameter as follows:
1. Define a query using the Template Transformer.
2. Enable the Raw SQL Statement option.
3. Reference the predefined query in the SQL Statement parameter.
When using Raw SQL Statement, always define queries through the Template Transformer. This allows parameter validation using FreeMarker and parameter declaration via Double Braces. Note that these parameters are resolved by DB V2, which prepares and validates SQL statements (PreparedStatement) by default. This approach reduces the risk of SQL Injection.
Below is a comparison between the recommended usage of the Template Transformer (using Double Braces in the WHERE clause) and a non-recommended approach (using FreeMarker interpolation in the WHERE clause), which may introduce security risks.
✅ Recommended usage:
SELECT * FROM
${table}
WHERE 0=0
<#if codigo??>AND CODE= {{ message._query.code }} </#if>
<#if cep??>AND ZIP_CODE= {{ message._query.zipCode }} </#if>
<#if name??>AND NAME= {{ message._query.name }} </#if>
❌ Non-recommended usage:
SELECT * FROM
${table}
WHERE 0=0
<#if codigo??>AND CODE=${code} </#if>
<#if cep??>AND ZIP_CODE=${zipCode} </#if>
<#if name??>AND NAME='${name}' </#if>
Key parameter
The Key parameter maps placeholders in SQL statements to their values, especially for procedure calls and INSERT queries handling CLOB/BLOB data.
Example:
INSERT INTO TABLE (MY_CLOB, MY_BLOB, MY_STRING)
VALUES ({{ message.clob }}, {{ message.blob }}, {{ message.string }})
Index mapping:

| Index | Value |
| --- | --- |
| 0 | {{ message.clob }} |
| 1 | {{ message.blob }} |
| 2 | {{ message.string }} |
Indexes always start at 0 and follow the order of placeholders in the SQL statement.
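As a rough analogy in plain JDBC, the mapping above corresponds to positional parameter binding. Note that the connector's mapping is 0-based while raw JDBC indexes are 1-based; the table name and sample values below are hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.StringReader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class KeyMappingSketch {
    // Hypothetical helper: binds values in placeholder order, as the Key mapping does.
    static void insertLobs(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO MY_TABLE (MY_CLOB, MY_BLOB, MY_STRING) VALUES (?, ?, ?)")) {
            // The connector's 0-based indexes (0, 1, 2) become JDBC's 1-based ones (1, 2, 3).
            ps.setClob(1, new StringReader("large text content"));         // 0 -> {{ message.clob }}
            ps.setBlob(2, new ByteArrayInputStream(new byte[] {1, 2, 3})); // 1 -> {{ message.blob }}
            ps.setString(3, "plain value");                                // 2 -> {{ message.string }}
            ps.executeUpdate();
        }
    }
}
```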
DB V2 in action
Batch processing
This mode executes queries for arrays of objects:
[
{ "name": "Matthew", "type":"A" },
{ "name": "Jules", "type":"A" },
{ "name": "Raphael", "type":"B" }
]
SQL example:
INSERT INTO TABLE VALUES ({{ item.name }}, {{ item.type }})
The connector iterates through the array and maps each item's properties.
Expected output:
{
"totalSucceeded":3,
"totalFailed":0
}
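In raw JDBC terms, this mode corresponds to statement batching. A hedged sketch follows, with the table and column names invented for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchSketch {
    // items stands in for the input array: each entry holds a name and a type.
    static void insertBatch(Connection conn, String[][] items) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO MY_TABLE (NAME, TYPE) VALUES (?, ?)")) {
            for (String[] item : items) {
                ps.setString(1, item[0]); // {{ item.name }}
                ps.setString(2, item[1]); // {{ item.type }}
                ps.addBatch();
            }
            // Each entry reports one statement's update count; drivers that cannot
            // report per-statement counts return Statement.SUCCESS_NO_INFO (-2) instead.
            int[] results = ps.executeBatch();
            int succeeded = 0;
            for (int r : results) {
                if (r >= 0 || r == Statement.SUCCESS_NO_INFO) succeeded++;
            }
            System.out.println("{ \"totalSucceeded\": " + succeeded
                    + ", \"totalFailed\": " + (results.length - succeeded) + " }");
        }
    }
}
```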
Error handling:
Single error:
{
"totalSucceeded":1,
"totalFailed":1,
"error": "error1"
}
Multiple errors:
{
"totalSucceeded":1,
"totalFailed":1,
"errors": ["error1", "error2"]
}
Rollback on Error parameter
If this option is enabled, the Platform applies the all-or-nothing principle to batch operations:
The commit will only be performed if all operations in the batch succeed.
If at least one operation fails, a rollback is triggered, and all operations in the batch are reverted, ensuring data consistency.
If this option is disabled, the behavior changes:
Each operation is committed individually as it succeeds.
Even if one or more operations fail, the previous successful commits are preserved, which may result in partial updates in the database.
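In plain JDBC, the all-or-nothing behavior maps to wrapping the batch in a single transaction, roughly as in this sketch (connection handling simplified, table name hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RollbackSketch {
    static void allOrNothing(Connection conn) throws SQLException {
        conn.setAutoCommit(false); // group the entire batch into one transaction
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO MY_TABLE (NAME, TYPE) VALUES (?, ?)")) {
            ps.setString(1, "Matthew"); ps.setString(2, "A"); ps.addBatch();
            ps.setString(1, "Jules");   ps.setString(2, "A"); ps.addBatch();
            ps.executeBatch();
            conn.commit();   // reached only if every statement succeeded
        } catch (SQLException e) {
            conn.rollback(); // any failure reverts all operations in the batch
            throw e;
        }
    }
}
```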
Driver-specific behavior
Oracle and certain other drivers cannot return consolidated counts for succeeded and failed executions. In such cases, results may appear as:
{
"totalSucceeded": -1,
"totalFailed": -1,
"errors": ["error1", "error2"],
"success": false
}
Firebird and similar drivers may underreport errors, sometimes returning fewer error entries than the number of failed operations:
{
"totalSucceeded": 0,
"totalFailed": 3,
"errors": ["error1", "error2"],
"success": false
}
Always check the success property. A value of false means at least one error occurred, regardless of the reported totals.
Connection pool
The connection pool size is defined automatically based on the pipeline deployment size:
Small: 10 connections
Medium: 20 connections
Large: 40 connections
Pools are shared among all DB connectors (DB V2 and Stream DB V3) that connect to the same database within a pipeline.
Pool Size by Actual Consumers parameter
You can override the default behavior by enabling the Pool Size By Actual Consumers parameter:
The pool size is defined by the number of consumers manually configured in the deployment screen.
You must enable this property in each connector that should adopt this behavior.
Example: In a Small pipeline with 5 consumers, the DB connectors use a pool size of 5.
Manual configurations can cause deadlocks if multiple concurrent calls compete for the same database.
Exclusive Pool parameter
By default, pools are shared among DB connectors accessing the same database. If you need an isolated pool for a specific connector, enable the Exclusive Pool parameter.
Keep Connection parameter
This parameter defines how pool connections are managed:
Enabled:
Connections remain open and are renewed every 30 minutes.
Suitable for high availability scenarios (reduces overhead of opening connections frequently).
Disabled:
Connections are opened only when needed.
After use, they remain idle for up to 5 minutes before being closed.
More efficient for low-frequency usage.
Custom Pool parameter
For advanced scenarios, you can enable the Custom Pool option to override all defaults and define:
Connection Maximum Lifetime
Minimum Idle Connections
Idle Connection Timeout
Customizing pools requires in-depth knowledge. You must consider:
Deployment size and number of replicas.
Use of different credentials for the same database.
Interaction between DB connectors across multiple pipelines.
Whether the Exclusive Pool option is enabled.
Misconfiguration can degrade performance or cause resource contention. Only use this option if you have experience with pool management and a clear need.
Underlying technology
The connector uses the HikariCP framework to manage connection pools. For advanced configurations and best practices, see the official HikariCP documentation.
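For intuition, the Custom Pool fields plausibly map to HikariCP's maxLifetime, minimumIdle, and idleTimeout settings. Below is a standalone sketch of such a configuration; the JDBC URL, credentials, and values are examples, not Platform defaults:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSketch {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db-host:5432/mydb"); // example URL
        config.setUsername("user");
        config.setPassword("password");
        config.setMaximumPoolSize(10);    // e.g., matches a Small deployment's 10 connections
        config.setMaxLifetime(1_800_000); // Connection Maximum Lifetime: 30 minutes, in ms
        config.setMinimumIdle(2);         // Minimum Idle Connections
        config.setIdleTimeout(300_000);   // Idle Connection Timeout: 5 minutes, in ms
        try (HikariDataSource dataSource = new HikariDataSource(config)) {
            // Hand the DataSource to whatever JDBC code needs pooled connections.
        }
    }
}
```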
More usage scenarios
Refer to DB V2: Usage scenarios documentation for advanced topics, ranging from defining personalized connection properties to handling custom data types.