# DB V2

The **DB V2** connector executes SQL commands such as `SELECT`, `INSERT`, `UPDATE`, and `DELETE`, and can also call stored procedures. It processes these operations and returns the results in a structured JSON format.

For a complete list of databases compatible with this connector, please refer to our [Supported databases documentation](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/structured-data/supported-databases).

## **Parameters**

Configure the connector using the parameters below. Fields that support [Double Braces expressions](https://docs.digibee.com/documentation/connectors-and-triggers/double-braces/overview) are marked in the **Supports DB** column.

{% tabs fullWidth="true" %}
{% tab title="General" %}

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Use Dynamic Account</strong></td><td>If enabled, the account is defined dynamically during execution. If disabled, a static account is selected from the list.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Scoped</strong></td><td>If enabled, the stored account is isolated from other subprocesses. In this case, the subprocesses see their own version of the stored account data. It is only available if the <strong>Use Dynamic Account</strong> parameter is enabled.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Account Name</strong></td><td>The name of the dynamic account used by the connector. This account must have been previously configured in a <a href="../tools/store-account"><strong>Store Account</strong></a> connector in the pipeline for this process to take effect. It is only available if the <strong>Use Dynamic Account</strong> parameter is enabled.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Account Type</strong></td><td>The account type to be used by the connector. Options are: <strong>AWS V4</strong>, <strong>Basic</strong>, <strong>Kerberos</strong>, and <strong>Azure Key</strong>.</td><td>String</td><td>❌</td><td><code>Basic</code></td></tr><tr><td><strong>Account</strong></td><td>The <a href="https://app.gitbook.com/s/jvO5S91EQURCEhbZOuuZ/platform-administration/settings/accounts">account credentials</a> used to connect to the database.</td><td>Account</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Fail on Error</strong></td><td>If enabled, the pipeline execution stops on error. If disabled, the pipeline continues, and the error is returned in the connector's output message with <code>"success": false</code>.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr></tbody></table>
{% endtab %}

{% tab title="Operation" %}

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Type</strong></td><td>The operation type to be executed: a SQL <code>Query</code> or a stored <code>Procedure</code> call.</td><td>String</td><td>❌</td><td><code>Query</code></td></tr><tr><td><strong>Database URL</strong></td><td>The JDBC connection string for the target database.</td><td>String</td><td>✅</td><td><code>jdbc:mysql://host:port/database</code></td></tr><tr><td><strong>SQL Statement</strong></td><td>The SQL query or procedure call to be executed. Dynamic values can be inserted using Double Braces.</td><td>String</td><td>✅</td><td><code>SELECT DATE_FORMAT(SYSDATE(), '%Y-%m-%d') as DATA</code></td></tr><tr><td><strong>Batch</strong></td><td>If enabled, the connector processes multiple items in a single batch operation. See <a href="#batch-mode"><strong>Batch processing</strong></a> below for details.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Rollback on Error</strong></td><td>When <strong>Batch</strong> is enabled, this option ensures that all operations are rolled back if any single item fails. If disabled, successful operations are committed even if others fail.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Batch Items</strong></td><td>When <strong>Batch</strong> is enabled, specify the batch items.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Blob As File</strong></td><td>If enabled, BLOB parameters for Query or Procedure operations are expected to receive a file path from the input message.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Clob As File</strong></td><td>If enabled, CLOB parameters for Query or Procedure operations are expected to receive a file path from the input message.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Charset</strong></td><td>The character set for reading files when <strong>Clob As File</strong> is enabled.</td><td>String</td><td>✅</td><td><code>UTF-8</code></td></tr><tr><td><strong>Type Properties</strong></td><td>Defines specific data types for procedure parameters, especially for <code>OUT</code> parameters. Specify the <strong>Key</strong> (parameter index), <strong>Type</strong> (for example, <code>VARCHAR</code>), <strong>Out Parameter Name</strong>, and <strong>Parameter Type</strong>.</td><td>List</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Custom Connection Properties</strong></td><td>Additional key-value properties for the JDBC connection. Each property must be on a new line, for example, <code>key=value</code>.</td><td>JSON</td><td>❌</td><td>N/A</td></tr></tbody></table>
{% endtab %}

{% tab title="Advanced Settings" %}

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Pool Size By Actual Consumers</strong></td><td>If enabled, the connection pool size is set by the number of consumers configured in the pipeline deployment. If disabled, it defaults to the deployment size.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Exclusive DB Pool</strong></td><td><p>If enabled, creates a new, dedicated connection pool for this connector. If disabled, it may share pools with other connectors that use the exact same configuration.</p><p></p><p><strong>Important:</strong> To share a pool, custom connection properties must be defined in the same order across connectors; otherwise, a separate pool will be created.</p></td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Custom Pool</strong></td><td><p>If enabled, the connection pool is configured using <strong>Connection Maximum Lifetime</strong>, <strong>Minimum Idle Connections</strong>, and <strong>Idle Connection Timeout</strong>. If disabled, it uses the <strong>Keep Connection</strong> parameter.</p><p></p><p><strong>Important:</strong> This advanced feature should be used with caution. See the dedicated <a href="#connection-pool"><strong>Connection pool</strong></a> section below.</p></td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Connection Maximum Lifetime</strong></td><td>(<strong>Custom Pool only</strong>) Defines the maximum lifetime of a connection in the pool. Active connections are not retired and are removed only when closed. Minimum allowed is 30,000 ms; lower values default to 1,800,000 ms (30 minutes).</td><td>Integer</td><td>✅</td><td><code>1800000</code></td></tr><tr><td><strong>Minimum Idle Connections</strong></td><td>(<strong>Custom Pool only</strong>) Sets the minimum number of idle connections in the pool. Maximum allowed depends on deployment size (10, 20, or 40); higher values default to the deployment limit.</td><td>Integer</td><td>✅</td><td><code>10</code></td></tr><tr><td><strong>Idle Connection Timeout</strong></td><td>(<strong>Custom Pool only</strong>) Sets the maximum idle time for a connection in the pool. Must be less than <strong>Connection Maximum Lifetime</strong>, and <strong>Minimum Idle Connections</strong> must be less than the pool size.</td><td>Integer</td><td>✅</td><td><code>600000</code></td></tr><tr><td><strong>Keep Connection</strong></td><td>If enabled, the pool maintains a minimum number of open connections (based on <strong>Pool Size By Actual Consumers</strong>) and renews them every 30 minutes. If disabled, the pool starts empty, opens connections on demand, keeps them up to 5 minutes, and does not renew them. Available only when <strong>Custom Pool</strong> is disabled.</td><td>Boolean</td><td>❌</td><td><code>true</code></td></tr><tr><td><strong>Output Column From Label</strong></td><td>If enabled, the output JSON keys will use the column alias (label) from the <code>SELECT</code> statement instead of the original column name.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Connection Test Query</strong></td><td>An optional SQL query (for example, <code>SELECT 1</code>) that runs before establishing a connection to validate its integrity.</td><td>String</td><td>✅</td><td>N/A</td></tr><tr><td><strong>Raw SQL Statement</strong></td><td>If enabled, the SQL Statement parameter supports dynamic queries using Double Braces. Ensure proper security measures against SQL injection. See the <a href="#raw-sql-statement-parameter">section below</a> for details.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr></tbody></table>
{% endtab %}

{% tab title="Documentation" %}

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Data type</th><th>Default</th></tr></thead><tbody><tr><td><strong>Documentation</strong></td><td>Optional field to describe the connector configuration and any relevant business rules.</td><td>String</td><td>N/A</td></tr></tbody></table>
{% endtab %}
{% endtabs %}

## **Important information**

#### **Credentials and connection pool**

When account credentials are used, the connection pool is created when the pipeline starts executing and closed after the execution completes. Pools can be shared between connectors that use the same configuration.

#### **Apache Hive**

On Apache Hive, `updatecount` may not be available unless row control is enabled on the server. See the [supported databases](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/structured-data/supported-databases) for details.

## **Parameters – Additional information**

### **Bind variables**

Bind variables are parameters used in SQL queries to replace fixed values with dynamic ones. They allow the same query to be reused with different input values, reducing the risk of SQL Injection and improving execution efficiency. This feature is supported by several relational databases, such as Oracle and SQL Server.

Use [**Double Braces**](https://docs.digibee.com/documentation/connectors-and-triggers/double-braces/overview) **(`{{ }}`)** to insert dynamic values in SQL queries. This approach improves security (prevents SQL Injection) and performance (reuses execution plans).

**Incorrect (no bind variable):**

```sql
SELECT * 
FROM CUSTOMER 
WHERE CUSTOMER_ID = 12345
```

**Correct (with bind variable):**

```sql
SELECT * 
FROM CUSTOMER 
WHERE CUSTOMER_ID = {{ message.customerId }}
```

DB V2 uses **JDBC PreparedStatements**, sending queries such as:

1. SQL Statement:\
   `SELECT * FROM CUSTOMER WHERE CUSTOMER_ID = ?`
2. Parameter value:\
   `CUSTOMER_ID = 12345`

This guarantees safe parameterization and reuse of execution plans.
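Outside the Platform, the same parameterization pattern can be sketched with Python's built-in `sqlite3` module, which also uses `?` placeholders. This is an illustrative stand-in for a JDBC PreparedStatement, with a hypothetical table and values:

```python
import sqlite3

# In-memory database standing in for the target of the Database URL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER, name TEXT)")
conn.execute("INSERT INTO customer VALUES (12345, 'Alice')")

# The statement travels with a ? placeholder, just as DB V2 sends
# SELECT * FROM CUSTOMER WHERE CUSTOMER_ID = ? for {{ message.customerId }}
row = conn.execute(
    "SELECT name FROM customer WHERE customer_id = ?",
    (12345,),  # bound value, never concatenated into the SQL text
).fetchone()
print(row[0])  # → Alice
conn.close()
```

Because the value is bound rather than concatenated, the SQL text stays constant across executions, which is what enables execution-plan reuse.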

#### **Checking if binds are applied**

You can confirm if binds are being used by checking the database catalog views. For example, in Oracle:

```sql
SELECT sql_id, sql_text, bind_data, executions
FROM v$sql
WHERE sql_text LIKE '%CUSTOMER%'
ORDER BY last_load_time DESC;
```

If `bind_data` is populated and the SQL text contains placeholders (`:1`, `:B1`), the query uses bind variables.

### **Raw SQL Statement parameter**

To add more flexibility when using DB V2, you can enable the **Raw SQL Statement** option, predefine a query, and reference it via **Double Braces** in the SQL Statement parameter as follows:

1. Define a query using the [**Template Transformer**](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/tools/template-transformer).
2. Enable the **Raw SQL Statement** option.
3. Reference the predefined query in the **SQL Statement** parameter.

{% hint style="success" %}
When using **Raw SQL Statement**, always define queries through the [**Template Transformer**](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/tools/template-transformer). This allows parameter validation using FreeMarker and parameter declaration via Double Braces. Note that these parameters are resolved by DB V2, which prepares and validates SQL statements (PreparedStatement) by default. This approach reduces the risk of SQL Injection.
{% endhint %}

Below is a comparison between the recommended use of the **Template Transformer** on the left (using Double Braces in the `WHERE` clause) and a non-recommended approach on the right (using FreeMarker in the `WHERE` clause), which may introduce security risks.

{% columns %}
{% column %}
:white\_check\_mark: **Recommended usage:**

{% code overflow="wrap" %}

```sql
SELECT * FROM
${table}
WHERE 0=0
<#if codigo??>AND CODE= {{ message._query.code }} </#if>
<#if cep??>AND ZIP_CODE= {{ message._query.zipCode }} </#if>
<#if name??>AND NAME= {{ message._query.name }} </#if>
```

{% endcode %}
{% endcolumn %}

{% column %}
:x: **Non-recommended usage:**

{% code overflow="wrap" %}

```sql
SELECT * FROM
${table}
WHERE 0=0
<#if codigo??>AND CODE=${code} </#if>
<#if cep??>AND ZIP_CODE=${zipCode} </#if>
<#if name??>AND NAME='${name}' </#if>
```

{% endcode %}
{% endcolumn %}
{% endcolumns %}

### **Key parameter**

The Key parameter maps placeholders in SQL statements to their values, especially for `Procedures` and `INSERT` queries handling CLOB/BLOB data.

**Example:**

```sql
INSERT INTO TABLE (MY_CLOB, MY_BLOB, MY_STRING)
VALUES ({{ message.clob }}, {{ message.blob }}, {{ message.string }})
```

**Index mapping:**

| Index | Placeholder            |
| ----- | ---------------------- |
| 0     | `{{ message.clob }}`   |
| 1     | `{{ message.blob }}`   |
| 2     | `{{ message.string }}` |

Indexes always start at 0 and follow the order of placeholders in the SQL statement.
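The mapping above can be derived mechanically. As an illustration only (not Platform code), a short Python sketch that enumerates Double Braces placeholders in statement order:

```python
import re

# The example statement from above
sql = (
    "INSERT INTO TABLE (MY_CLOB, MY_BLOB, MY_STRING) "
    "VALUES ({{ message.clob }}, {{ message.blob }}, {{ message.string }})"
)

# Indexes start at 0 and follow placeholder order in the statement
placeholders = re.findall(r"\{\{\s*[^}]+?\s*\}\}", sql)
for index, placeholder in enumerate(placeholders):
    print(index, placeholder)
```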

## **DB V2 in action**

### **Batch processing**

When **Batch** is enabled, the connector executes the statement once for each object in an input array:

```json
[
  { "name": "Matthew", "type":"A" },
  { "name": "Jules", "type":"A" },
  { "name": "Raphael", "type":"B" }
]
```

**SQL example:**

```sql
INSERT INTO TABLE VALUES ({{ item.name }}, {{ item.type }})
```

* The connector iterates through the input array and maps each object's properties to the `{{ item.* }}` placeholders.
* Expected output:

```json
{
  "totalSucceeded": 3,
  "totalFailed": 0
}
```
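The batch flow can be sketched outside the Platform with `executemany` (illustrative only; DB V2's actual counting depends on the database driver):

```python
import sqlite3

items = [
    {"name": "Matthew", "type": "A"},
    {"name": "Jules", "type": "A"},
    {"name": "Raphael", "type": "B"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, type TEXT)")

# One INSERT per item; {{ item.name }} / {{ item.type }} become bound values
conn.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [(item["name"], item["type"]) for item in items],
)
conn.commit()

result = {
    "totalSucceeded": conn.execute("SELECT COUNT(*) FROM people").fetchone()[0],
    "totalFailed": 0,
}
print(result)  # → {'totalSucceeded': 3, 'totalFailed': 0}
conn.close()
```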

**Error handling**:

1. **Single error**:

```json
{
  "totalSucceeded": 1,
  "totalFailed": 1,
  "error": "error1"
}
```

2. **Multiple errors**:

```json
{
  "totalSucceeded": 1,
  "totalFailed": 2,
  "errors": ["error1", "error2"]
}
```

{% hint style="info" %}
The content of the `errors` property may vary depending on the database driver. Some drivers don’t report all errors that occur during batch execution.
{% endhint %}

#### **Rollback on Error parameter**

If this option is **enabled**, the Platform applies the all-or-nothing principle to batch operations:

* The commit will only be performed if all operations in the batch succeed.
* If at least one operation fails, a rollback is triggered, and all operations in the batch are reverted, ensuring data consistency.

If this option is **disabled**, the behavior changes:

* Each operation is committed individually as it succeeds.
* Even if one or more operations fail, the previous successful commits are preserved, which may result in partial updates in the database.
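The two behaviors can be sketched with explicit transaction control, using `sqlite3` as a stand-in and a hypothetical table where the second row violates the primary key:

```python
import sqlite3

rows = [(1, "ok"), (1, "duplicate"), (2, "ok")]  # second row fails on the PK

def run_batch(rollback_on_error: bool) -> int:
    """Insert rows; return how many rows end up committed."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, label TEXT)")
    try:
        for row in rows:
            try:
                conn.execute("INSERT INTO t VALUES (?, ?)", row)
                if not rollback_on_error:
                    conn.commit()  # each success is committed individually
            except sqlite3.IntegrityError:
                if rollback_on_error:
                    raise  # abort the whole batch
        conn.commit()
        return conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    except sqlite3.IntegrityError:
        conn.rollback()  # all-or-nothing: revert every operation
        return conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    finally:
        conn.close()

print(run_batch(rollback_on_error=True))   # 0: the whole batch is reverted
print(run_batch(rollback_on_error=False))  # 2: the two successes are preserved
```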

**Driver-specific behavior**

Oracle and certain other drivers cannot return consolidated counts for succeeded and failed executions. In such cases, results may appear as:

```json
{
  "totalSucceeded": -1,
  "totalFailed": -1,
  "errors": ["error1", "error2"],
  "success": false
}
```

Firebird and similar drivers may underreport errors, returning fewer entries in `errors` than the number of failed operations:

```json
{
  "totalSucceeded": 0,
  "totalFailed": 3,
  "errors": ["error1", "error2"],
  "success": false
}
```

{% hint style="success" %}
Always check the `success` property. A value of `false` means at least one error occurred, regardless of the reported totals.
{% endhint %}

### **Connection pool**

The connection pool size is defined automatically based on the **pipeline deployment size**:

* Small: 10 connections
* Medium: 20 connections
* Large: 40 connections

Pools are shared among all DB connectors (**DB V2** and [**Stream DB V3**](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/structured-data/stream-db-v3)) that connect to the same database within a pipeline.

#### **Pool Size by Actual Consumers parameter**

You can override the default behavior by enabling the **Pool Size By Actual Consumers** parameter:

* The pool size is defined by the number of consumers manually configured in the deployment screen.
* You must enable this property in each connector that should adopt this behavior.

**Example:** In a Small pipeline with 5 consumers, the DB connectors use a pool size of 5.
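As a rough sketch of the sizing rule described above (the function and names are illustrative, not Platform code):

```python
# Default pool sizes by deployment size, as documented above
DEFAULT_POOL_SIZE = {"small": 10, "medium": 20, "large": 40}

def pool_size(deployment: str, consumers: int, by_actual_consumers: bool) -> int:
    """Pool size: consumer count if overridden, else the deployment default."""
    if by_actual_consumers:
        return consumers
    return DEFAULT_POOL_SIZE[deployment]

print(pool_size("small", consumers=5, by_actual_consumers=True))   # → 5
print(pool_size("small", consumers=5, by_actual_consumers=False))  # → 10
```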

{% hint style="danger" %}
Manual configurations can cause deadlocks if multiple concurrent calls compete for the same database.
{% endhint %}

#### **Exclusive Pool parameter**

By default, pools are shared among DB connectors accessing the same database. If you need an isolated pool for a specific connector, enable the **Exclusive Pool** parameter.

#### **Keep Connection parameter**

This parameter defines how pool connections are managed:

* **Enabled**:
  * Connections remain open and are renewed every 30 minutes.
  * Suitable for high availability scenarios (reduces overhead of opening connections frequently).
* **Disabled**:
  * Connections are opened only when needed.
  * After use, they remain idle for up to 5 minutes before being closed.
  * More efficient for low-frequency usage.

#### **Custom Pool parameter**

For advanced scenarios, you can enable the **Custom Pool** option to override all defaults and define:

* **Connection Maximum Lifetime**
* **Minimum Idle Connections**
* **Idle Connection Timeout**

{% hint style="danger" %}
Customizing pools requires in-depth knowledge. You must consider:

* Deployment size and number of replicas.
* Use of different credentials for the same database.
* Interaction between DB connectors across multiple pipelines.
* Whether the **Exclusive Pool** option is enabled.

Misconfiguration can **degrade performance** or cause **resource contention**. Only use this option if you have experience with pool management and a clear need.
{% endhint %}

#### **Underlying technology**

The connector uses the **HikariCP** framework to manage connection pools. For advanced configurations and best practices, see the [official HikariCP documentation](https://github.com/brettwooldridge/HikariCP).

## **More usage scenarios**

Refer to [DB V2: Usage scenarios](https://docs.digibee.com/documentation/connectors-and-triggers/connectors/structured-data/db-v2/db-v2-usage-scenarios) documentation for advanced topics, ranging from defining personalized connection properties to handling custom data types.
