DB V2: Usage scenarios

Learn more about practical usage scenarios for the DB V2 connector, ranging from defining personalized connection properties to handling custom data types.

DB V2 is a connector that supports SELECT, INSERT, DELETE, and UPDATE operations, as well as procedure calls, returning results in a JSON structure. For the full list of supported databases, see the Supported databases documentation.

This document outlines various usage scenarios for the connector, ranging from defining personalized connection properties to handling custom data types.

Explore the supported usage scenarios below:

Configuring a QUERY operation with a SELECT statement

Use DB V2 to run a SELECT query when you need to retrieve data from a configured database.

To do this, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example SELECT * FROM CLIENTS.

After executing the pipeline, the output will be:

{
  "data": [
    {
      "uf": "SP",
      "codigo": 1,
      "cidade": "Osasco",
      "logradouro": "Rua Laranjeiras",
      "name": "João Bosco",
      "due_date": "2019-03-27",
      "email": "joao.bosco@digibee.com.br",
      "cep": "XXXXX-XXX"
    },
    {
      "uf": "SP",
      "codigo": 2,
      "cidade": "São Paulo",
      "logradouro": "Rua João Bosco",
      "name": "Roberto Carlos",
      "due_date": "2019-03-27",
      "email": "roberto.carlos@digibee.com.br",
      "cep": "XXXXX-XXX"
    }
  ],
  "updateCount": 0,
  "rowCount": 2
}

Understanding the output

The query returns a JSON object with three root properties:

  • data: An array of objects representing the rows returned by the query. Each object contains properties corresponding to the columns in the SELECT statement. For example, if the query is SELECT name FROM CLIENTS, each object in the array will have a single name property.

  • updateCount: The number of rows affected by the query. In this case, the value is 0 because a SELECT operation doesn’t modify the table.

  • rowCount: The number of rows returned by the query.
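
As a minimal sketch of that mapping (the single-column query below is illustrative), each object in the data array carries one property per selected column:

-- Hypothetical single-column query:
SELECT name FROM CLIENTS;
-- Each object in the "data" array then contains only that column, for example: { "name": "João Bosco" }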

Configuring the QUERY operation with an UPDATE statement

Use DB V2 to update records in your database by executing an UPDATE query.

To do this, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example UPDATE CLIENTS SET uf='SP' WHERE uf is null.

After executing the pipeline, the output will be:

{
    "data": null,
    "updateCount": 5,
    "rowCount": 0
}

Understanding the output

  • updateCount: Confirms that five rows in the table were modified in this example.

Configuring the QUERY operation with a DELETE statement

Use DB V2 to remove data from your database with a DELETE query.

To do this, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example DELETE FROM CLIENTS WHERE code='111111'.

After executing the pipeline, the output will be:

{
    "data": null,
    "updateCount": 1,
    "rowCount": 0
}

Understanding the output

  • updateCount: Confirms that one row in the table was deleted in this example.

Configuring the QUERY operation with an INSERT statement

Use DB V2 to insert new records into your database using an INSERT query.

To do this, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example INSERT INTO CLIENTS (name, email) VALUES ('My Name', 'myname@email.com').

After executing the pipeline, the output will be:

{
    "data": null,
    "updateCount": 1,
    "rowCount": 0
}

Understanding the output

  • updateCount: Confirms that one row in the table was created in this example.

Keeping the connection active for older or unreliable drivers

If your database is incompatible with JDBC 4.0 or struggles to maintain a reliable connection, you can still use DB V2 to execute any query, even when the same call is made again after 20 minutes. This ensures that the database connection remains active.

In the Advanced Settings tab, you can configure the Connection Test Query. This query runs before the one specified in the SQL Statement parameter, ensuring that the database connection is established and preventing errors.

If the connection test fails, a new connection will be opened automatically. This helps prevent issues such as "stuck" connections and timeouts caused by previous failed queries.

This option should only be considered for older databases that support JDBC versions earlier than 4.0.

To do this, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example INSERT INTO CLIENTS (name, email) VALUES ('My Name', 'myname@email.com').

Advanced Settings tab:

  • Connection Test Query: Enter a query to test the connection, such as SELECT version().

After executing the pipeline, the query result will be shown in the Output column of the Execution panel.
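
For reference, these are common lightweight connection test queries; which one applies depends on your database vendor, so verify it against your driver's documentation:

-- MySQL, PostgreSQL, SQL Server:
SELECT 1;
-- Oracle:
SELECT 1 FROM DUAL;
-- PostgreSQL and MySQL, as used in the example above:
SELECT version();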

Keeping the connection active for 5 or 30 minutes

Use DB V2 to keep your database connection open for 5 or 30 minutes, depending on your needs.

Configure the connector as follows:

Advanced Settings tab:

  • Keep Connection: Enable this option to extend the default 5-minute connection time to 30 minutes, ensuring a longer active session with the database.

Interrupting pipeline execution in case of an error

Use DB V2 to control pipeline execution by interrupting it when an error occurs during a database operation.

To control how errors impact execution, configure the connector as follows:

General tab:

  • Fail On Error: Enable this option to stop the pipeline if an error occurs while invoking DB V2. If disabled, the pipeline will continue running even if the operation fails.

Accessing and writing files into the database

Use DB V2 to read from or write files into BLOB or CLOB fields in your database. This allows you to:

  • Write files from the pipeline into BLOB or CLOB columns in the database.

  • Read BLOB or CLOB data from the database as files within the pipeline.

By enabling the Blob As File or Clob As File option, you can specify a file path for these operations:

  • When using INSERT, providing a file path from the pipeline stores the file in the database.

  • When using SELECT, the retrieved BLOB or CLOB content is converted into a file within the pipeline.

Step 1: Read the file

To read the file, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Step Name: Name it as “Read File” or similar.

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example SELECT blob, clob FROM CLIENTS WHERE id=78.

  • Blob As File / Clob As File: Enable one or both options. If Clob As File is enabled, the Charset parameter will appear for defining the file encoding.

After executing the pipeline, the output will be:

{
  "data": [
    {
      "blob": "E4G17F.file",
      "clob": "H2K89G.file"
    }
  ],
  "updateCount": 0,
  "rowCount": 1
}

Two files, E4G17F.file and H2K89G.file, were generated and made available in the pipeline.

Step 2: Write the file

To write the file, connect another DB V2 to the "Read File" connector and configure it as follows:

General tab:

  • Step Name: Name it as “Write File” or similar.

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, for example INSERT INTO CLIENTS (blob, clob) VALUES ( {{ message.data.blob }}, {{ message.data.clob }} ).

  • Blob As File / Clob As File: Enable one or both options. If Clob As File is enabled, the Charset parameter will appear for defining the file encoding.

  • Type Properties: Click Add and provide the following details:

    • Key: 0 (each Double Braces expression has an index; 0 refers to the first one)

    • Type: BLOB

    • Out Parameter Name: (leave empty)

    • Parameter Type: IN

This example uses Double Braces expressions ({{ message.data.blob }} and {{ message.data.clob }}) to reference the query result from the previous connector. These expressions retrieve the generated file names (E4G17F.file and H2K89G.file) and pass their paths into the INSERT command for the blob and clob fields.

After executing the pipeline, the output will be:

{
    "data": null,
    "updateCount": 1,
    "rowCount": 0
}

Understanding the output

The result indicates that one row in the table was affected. First, the Read File connector retrieved the BLOB and CLOB content from the database and generated files. Then, the Write File connector used these files from the pipeline and inserted them into the database as a new record.

Executing a procedure

Use DB V2 to call and run a stored procedure in your database and retrieve its result.

Example

In this example, the database contains the following procedure:

CREATE PROCEDURE `proc_sum_one` (INOUT `num` INTEGER)
BEGIN
    SET `num` = `num` + 1;
END;

To execute this procedure in your pipeline, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select PROCEDURE.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, such as call proc_sum_one({{ message.num }}).

  • Type Properties: Click Add and provide the following details:

    • Key: 0 (each Double Braces expression has an index; 0 refers to the first one)

    • Type: INTEGER

    • Out Parameter Name: result (defines the name of the procedure’s output parameter)

    • Parameter Type: INOUT

In this example, the Double Braces expression {{ message.num }} accesses a value from the previous connector. For more details, read the documentation.

Once the connector is configured and saved, open the Execution panel and provide an input value in the Payload column, such as:

{ 
    "num": 3 
}

After executing the pipeline, the output will be:

{
    "data": {
        "result": 4
    },
    "success": true
}

Defining personalized connection properties

Use DB V2 to define personalized connection properties, customizing how the connector interacts with your database.

To do this, connect DB V2 to the trigger and configure it as follows:

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Enter the SQL Statement, such as SELECT * FROM CLIENTS.

  • Custom Connection Properties: Enter the custom connection properties. For example, { "connectTimeout": 1 }.

After executing the pipeline, the output will be:

{
    "success": false,
    "message": "Could not establish connection to the database",
    "error": "java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=xxxxx)(port=XXX)(type=master) : null"
}

In this case, the connectTimeout connection property was set to 1 (one second) to demonstrate that it was applied and resulted in a connection error.

Reading custom data types from an Oracle database

Use DB V2 to read custom data types from an Oracle database when a configured table contains this type of data.

Example:

Suppose you have a table created with the following structure:

CREATE TABLE project (
    name VARCHAR2(50),
    coordinator OWNER
);

And the OWNER custom type is defined as:

CREATE TYPE OWNER AS OBJECT (
    name VARCHAR2(50),
    email VARCHAR2(30)
);

When executing the following query:

SELECT * FROM project

The result will be:

{
  "data": [
    {
      "name": "Example Project",
      "owner": [
        "Coordinator Name",
        "Coordinator Email"
      ]
    }
  ],
  "updateCount": 0,
  "rowCount": 1
}

If the custom type includes other custom types within its structure, the returned data will be nested, as shown in the example below:

{
  "data": [
    {
      "name": "Example Project",
      "owner": [
        "Coordinator Name",
        "Coordinator Email",
        [
          "Field from a 2nd custom type",
          [
            "Field from a 3rd custom type"
          ]
        ]
      ]
    }
  ],
  "updateCount": 0,
  "rowCount": 1
}

Custom types don’t expose the field names defined in their structure; only the order of those fields is preserved.

In the example above, the first element of the "owner" array corresponds to the "name" field and the second element to the "email" field, as defined in the OWNER type.

When working with custom types in PROCEDURE queries, you must set the Out Parameter Name to match the name of the custom type. In this case, that would be OWNER.
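
As a hypothetical sketch of that rule (this procedure is not part of the example schema above), a procedure returning the custom type through an OUT parameter could look like this, and its Type Properties entry in DB V2 would set Out Parameter Name to OWNER:

-- Hypothetical procedure returning the OWNER custom type as an OUT parameter:
CREATE OR REPLACE PROCEDURE GET_COORDINATOR (P_PROJECT IN VARCHAR2, P_OWNER OUT OWNER) IS
BEGIN
  SELECT coordinator INTO P_OWNER FROM project WHERE name = P_PROJECT;
END;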

Sending custom data types to an Oracle database

Use DB V2 to insert custom data types (structs) into a table in an Oracle database.

The DB V2 connector doesn't support Double Braces to resolve custom types (structs) when using INSERT or UPDATE commands.

Example

The following custom types and tables are created in the database:

  • Custom types:

CREATE OR REPLACE TYPE CONTACT AS OBJECT (
  NAME VARCHAR2(50),
  PHONE VARCHAR2(15)
);

CREATE OR REPLACE TYPE PERSON AS OBJECT (
  ID NUMBER(10,0),
  NAME VARCHAR2(50),
  CONTACT_INFO CONTACT
);

CREATE OR REPLACE TYPE DEPARTMENT AS OBJECT (
  ID NUMBER(10,0),
  NAME VARCHAR2(50)
);
  • Table:

CREATE TABLE EMPLOYEE (
  PERSON PERSON, 		
  DEPT DEPARTMENT,
  SALARY NUMBER(12,2)
);

To insert data into the EMPLOYEE table, you can use the following approaches:

  • Example input:

{
	"personName": "John A.",
	"contactName": "John",
	"contactPhone": "99999",
	"departmentName": "IT"
}
  • Using an INSERT statement from a SELECT query:

INSERT INTO EMPLOYEE
SELECT 
  PERSON(
    TRUNC(DBMS_RANDOM.VALUE(0,999999)), 
    {{ message.personName }}, 
    CONTACT({{ message.contactName }}, {{ message.contactPhone }})
  ),
  DEPARTMENT(
    TRUNC(DBMS_RANDOM.VALUE(0,999999)), 
    {{ UPPERCASE(message.departmentName) }}
  ),
  3030.67
FROM DUAL;
  • Using a previously created PROCEDURE:

Creating the procedure:
CREATE OR REPLACE PROCEDURE MY_PROC(DEPT IN DEPARTMENT, EMP IN OUT PERSON) IS
  SAL NUMBER(12,2);
BEGIN
  SAL := 4567.89 * 1.10;
  INSERT INTO EMPLOYEE VALUES (EMP, DEPT, SAL);	
EXCEPTION
  WHEN OTHERS THEN
    RAISE_APPLICATION_ERROR(-20001, 'An error was encountered - ' || SQLCODE || ' - ERROR: ' || SQLERRM);
END;
Calling the procedure:
DECLARE
  CONT CONTACT;
  EMP PERSON;
  DEPT DEPARTMENT;
BEGIN
  DEPT := DEPARTMENT(
    TRUNC(DBMS_RANDOM.VALUE(0,999999)), 
    {{ UPPERCASE(message.departmentName) }}
  );

  CONT := CONTACT(
    {{ message.contactName }}, 
    {{ message.contactPhone }}
  );

  EMP := PERSON(
    TRUNC(DBMS_RANDOM.VALUE(0,999999)), 
    {{ message.personName }}, 
    CONT
  );

  MY_PROC(DEPT, EMP);
END;
  • Using a full SQL Script:

DECLARE
  CONT CONTACT;
  EMP PERSON;
  DEPT DEPARTMENT;
  SAL EMPLOYEE.SALARY%TYPE;
BEGIN
  DEPT := DEPARTMENT(
    TRUNC(DBMS_RANDOM.VALUE(0,999999)), 
    {{ UPPERCASE(message.departmentName) }}
  );

  CONT := CONTACT(
    {{ message.contactName }}, 
    {{ message.contactPhone }}
  );

  EMP := PERSON(
    TRUNC(DBMS_RANDOM.VALUE(0,999999)), 
    {{ message.personName }}, 
    CONT
  );

  SAL := 4567.89;

  INSERT INTO EMPLOYEE VALUES (EMP, DEPT, SAL);	
EXCEPTION
  WHEN OTHERS THEN
    RAISE_APPLICATION_ERROR(-20001, 'An error was encountered - ' || SQLCODE || ' - ERROR: ' || SQLERRM);
END;

Using Raw SQL statement to handle conditional queries

The DB V2 connector supports the Raw SQL Statement feature, allowing you to reference an entire query dynamically using Double Braces expressions. This means you can define the full query in a previous connector and pass it as a variable into DB V2, adapting it at runtime.

To use this feature, you must configure the SQL query in a connector like Template Transformer and then reference it in DB V2.

Suppose your integration flow needs to execute one of two different SQL queries depending on business logic:

Query 1:

SELECT * FROM TABLE_1
WHERE ID = 123

Query 2:

SELECT * FROM TABLE_2
WHERE CODE = 'ABC'

Step-by-step example

Step 1: Configure the pipeline

Add a Template Transformer and a DB V2 connector to your pipeline.

Step 2: Configure the Template Transformer connector

  • Preserve Original: Enable this option.

  • Body: Set the following body:

SELECT * FROM 
${table}
WHERE 
<#if id?? > 
    ID = {{ message._query.id }} 
</#if>

<#if code??>
    CODE = {{ message._query.code }} 
</#if>

Step 3: Configure the DB V2 connector

General tab:

  • Account Type: Select the type of account you configured.

  • Account: Choose the account used to authenticate with the database.

Operation tab:

  • Type: Select QUERY.

  • Database URL: Enter your database URL, such as jdbc:mysql://myserver/mydb.

  • SQL Statement: Set the SQL statement to: {{ message.query }}

Advanced Settings tab:

  • Raw SQL Statement: Enable this option.

Step 4: Test the execution

On the Execution panel, test the following input payloads:

Payload for Query 1:

{
    "query": {
        "table": "TABLE_1",
        "id": 123
    }
}

Payload for Query 2:

{
    "query": {
        "table": "TABLE_2",
        "code": "ABC"
    }
}

The expected results for each query are:

Result for Query 1:

{
    "data": [
        {
            "id": 123,
            "name": "John",
            "role": "Developer",
            ...
        }
    ],
    "updateCount": 0,
    "rowCount": 1
}

Result for Query 2:

{
    "data": [
        {
            "code": "ABC",
		 "fullName": "Paul Smith",
		 "zipCode": "99999",
            ...
        }
    ],
    "updateCount": 0,
    "rowCount": 1
}

Understanding the output

The query result is returned as a JSON with three main properties:

  • data: An array of objects representing each row returned. Each object contains the columns selected by the query.

    • Example: If your query is SELECT name FROM TABLE_1 WHERE ID = 123, each object in the data array will contain only the name property.

  • updateCount: The number of rows affected by the SQL command. For SELECT operations, this is always 0, as no table is affected.

  • rowCount: The number of rows returned by the query.

Although other connectors like JSON Generator or Transformer (JOLT) can also be used to define the query string, Template Transformer offers some important advantages:

  • Uses FreeMarker technology to validate data before executing the target query.

  • Helps prevent SQL Injection: the Double Braces declaration is not resolved by Template Transformer itself, but by DB V2, which configures and validates the SQL statement parameters in advance using Prepared Statements.

  • Provides a safer and easier way to create queries in your pipeline.
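
To illustrate the Prepared Statement behavior conceptually (the placeholder form below is an illustration, not the literal statement DB V2 produces):

-- Statement as written in the connector, with a Double Braces parameter:
SELECT * FROM TABLE_1 WHERE ID = {{ message._query.id }}
-- Conceptual equivalent sent to the database: the value is bound as a parameter
-- instead of being concatenated into the SQL text, which is what prevents SQL injection.
SELECT * FROM TABLE_1 WHERE ID = ?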
