Key practices for securing sensitive information in pipelines with Digibee

Securing sensitive information while building scalable integrations is crucial. Data security doesn't just involve encryption — it encompasses a comprehensive approach to how data is handled, protected, and audited.

In this use case, you’ll explore best practices for securing sensitive data and how to implement security measures in your pipeline-building process within the Digibee Integration Platform.

This article focuses on four aspects of pipeline security:

  • Encryption

  • Hashing

  • Obfuscation of sensitive fields

  • Management of authentication credentials through Accounts

Encryption: A foundation for securing sensitive data

One of the primary ways to safeguard sensitive data is through encryption. Think of encryption as a secure lockbox: the public key is like a padlock that anyone can close, but only the person holding the matching private key can open it. This ensures that even if someone intercepts the lockbox, they can’t access its contents without the private key.

When implementing encryption and decryption within the Digibee Integration Platform, the Asymmetric Cryptography connector acts as this lockbox system, enabling secure data management by leveraging public and private key pairs. The connector supports two primary operations — Encrypt and Decrypt — where the data is encrypted using a public key and decrypted with the corresponding private key.

In contrast, the Symmetric Cryptography connector uses the same key for both encryption and decryption, making it more efficient for scenarios where performance is a priority. Symmetric encryption is typically used for encrypting large volumes of data, while asymmetric encryption is preferred for scenarios requiring higher security, such as secure key exchanges.
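
Outside the Platform, the two models can be sketched in a few lines of Python. This is only an illustration of the concepts (not connector code), assuming the third-party cryptography package is installed:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: one shared key both locks and unlocks (fast, good for bulk data).
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
assert f.decrypt(f.encrypt(b"4111-1111-1111-1111")) == b"4111-1111-1111-1111"

# Asymmetric: anyone can lock with the public key (the "padlock"),
# but only the private-key holder can unlock.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = private_key.public_key().encrypt(b"4111-1111-1111-1111", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"4111-1111-1111-1111"
```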

Enhancing pipeline security with Digibee connectors

In this case, the focus will be on the Asymmetric Cryptography connector. However, Digibee offers many more security options that you can leverage to protect your data. To explore the full range of security connectors, visit our Documentation Portal.

Putting theory into practice: Securing sensitive data

Encryption plays a critical role in securing sensitive data as it flows through pipelines. For instance, imagine a financial service that handles sensitive customer data, such as credit card numbers and personal identification information. The pipeline must ensure that this information remains secure as it passes through various processes.

The following sequence diagram provides a high-level overview of the solution, illustrating how encryption and decryption can be applied within a pipeline.

Here’s a breakdown of the implementation (a code sketch of the same flow follows the list):

  1. Start with data already encrypted

  • For this example, imagine that sensitive data (such as payment details) arrives at the pipeline already encrypted by the customer. This ensures that the information is secure from the very beginning.

  2. Decrypt for processing

  • Once the data reaches the pipeline, it’s decrypted using the Asymmetric Cryptography connector. A private key is used to access the encrypted data securely.

  3. Process the decrypted data

  • Once the data is decrypted, the pipeline first transforms it into a structured JSON object using the JSON String to JSON Transformer connector. After the data is properly formatted, the pipeline performs necessary actions, such as validation and transformations.

  4. Re-encrypt for security

  • After processing, the data is encrypted again using the recipient's public key before being sent to another service or stored securely. This adds an extra layer of protection during transmission or storage.
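
The four steps map naturally onto a small program. The sketch below is a hypothetical Python rendering of the flow — the helper name and sample payload are illustrative, not Digibee connector code — again using RSA/OAEP from the cryptography package:

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def process_payment(encrypted_payload, our_private_key, recipient_public_key):
    raw = our_private_key.decrypt(encrypted_payload, OAEP)  # step 2: decrypt
    record = json.loads(raw)                                # step 3: parse...
    if "card_number" not in record:                         # ...and validate
        raise ValueError("missing card_number")
    return recipient_public_key.encrypt(raw, OAEP)          # step 4: re-encrypt

# Step 1: the customer encrypts the payload before it ever reaches the pipeline.
us = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = us.public_key().encrypt(b'{"card_number": "4111"}', OAEP)

forwarded = process_payment(payload, us, recipient.public_key())
assert recipient.decrypt(forwarded, OAEP) == b'{"card_number": "4111"}'
```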

Hashing: A secure approach to duplicate prevention

Unlike encryption, hashing is a one-way process that can’t be reversed, meaning that once data is hashed, it can’t be reconstructed into its original form. This makes hashing an ideal solution for verifying data integrity and storing sensitive information that doesn’t require decryption, such as passwords.

For instance, hashing can be used to manage user registration by preventing duplicate entries: applying a hash to a specific field, such as the user’s email address, makes it easy to detect whether a request has already been processed.
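
As a small illustration in plain Python (the normalization rule is a hypothetical choice), fingerprinting the email with hashlib makes repeat submissions detectable without keeping the address in clear text:

```python
import hashlib

def registration_fingerprint(email: str) -> str:
    # Normalize first so "  ANA@Example.com" and "ana@example.com" match.
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(normalized).hexdigest()

seen = set()
for email in ["ana@example.com", "  ANA@Example.com"]:
    fingerprint = registration_fingerprint(email)
    if fingerprint in seen:
        print("Duplicate registration:", email)
    seen.add(fingerprint)
```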

Understanding the Hash connector

The Hash connector is designed to generate a unique digital fingerprint — a hash — for any given data input. It allows users to hash fields, payloads, or files. Users can select from a wide range of hashing algorithms and adjust parameters like salt and cost factor, among others.
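
For intuition about those parameters, here is a minimal Python sketch — a conceptual stand-in, not the connector's implementation — where a random salt defeats precomputed tables and the scrypt cost parameters (n, r, p) make brute force expensive:

```python
import hashlib
import os

salt = os.urandom(16)  # a unique salt per record
digest = hashlib.scrypt(b"123-45-6789", salt=salt, n=2**14, r=8, p=1)
print(salt.hex(), digest.hex())
```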

Putting theory into practice: Preventing duplicate data with hashing

Imagine a financial institution that processes loan applications. One of the key pieces of data for identity verification is the Social Security Number (SSN).

Here's how hashing can prevent duplicate SSNs while maintaining security in your integrations. The following sequence diagram provides a high-level overview of the solution, illustrating how hashing can be used within a pipeline to securely manage data and prevent duplicates.

Here’s a breakdown of the implementation (a code sketch of steps 4 and 5 follows the list):

  1. Start the pipeline

  • Begin by setting up an integration that receives data about a customer loan application. For simplicity, assume it’s a REST Trigger, although other triggers can be applied.

  2. Extract the required information

  • Use a transformation connector (such as JSON Generator (Mock), Transformer (JOLT), or other) to extract the customer's SSN from the application payload. This SSN will be used to verify whether a duplicate entry already exists in the database.

  3. Hash the SSN

  • Pass the SSN through the Hash connector to create a digital fingerprint of the SSN.

  4. Check for duplicates

  • Compare the hashed SSN with existing entries in the database:

    • If the hash exists: The system flags the SSN as a duplicate and prevents further processing of the loan application.

    • If the hash doesn’t exist: The SSN is recognized as unique, and the process continues to the next connector.

  5. Prevent duplicate applications

    • Store the newly generated hash in the database to mark the SSN as registered.

    • Any future applications with the same SSN will be flagged as duplicates and blocked from further processing.
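
The sketch below condenses steps 4 and 5 into plain Python. The in-memory set is a hypothetical stand-in for the database lookup; a real pipeline would query and insert through a database connector:

```python
import hashlib

ssn_index = set()  # stand-in for the hashes already stored in the database

def submit_application(ssn: str) -> bool:
    fingerprint = hashlib.sha256(ssn.encode("utf-8")).hexdigest()
    if fingerprint in ssn_index:   # step 4: the hash already exists
        return False               # duplicate: block further processing
    ssn_index.add(fingerprint)     # step 5: mark the SSN as registered
    return True                    # unique: continue to the next connector

assert submit_application("123-45-6789") is True   # first application passes
assert submit_application("123-45-6789") is False  # repeat is flagged
```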

Sensitive field obfuscation

Logging is critical for troubleshooting, but it’s equally important to ensure that sensitive data remains protected. To address this, you can configure sensitive fields within the pipeline itself, ensuring that specific data is obfuscated or masked in the logs. These fields will appear obfuscated with the "***" character set in the log output.

To enable log obfuscation in your pipeline, click the gear icon in the top-right corner of the screen and enter the sensitive fields in the appropriate section.

Log obfuscation of sensitive fields requires additional processing resources and memory. The impact on performance depends on the number of sensitive fields configured and the size of the message.

In addition to pipeline-level configurations, you can also create a Sensitive fields policy that allows you to define obfuscation at a realm-wide level. This policy applies across all pipelines within the realm, making it easier to standardize data protection across multiple integrations.
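
Conceptually, the masking applied to log output behaves like the following sketch (the field names are illustrative; the Platform applies this for you based on the configured sensitive fields):

```python
SENSITIVE_FIELDS = {"ssn", "card_number", "password"}

def obfuscate(value):
    # Recursively mask configured fields anywhere in the payload.
    if isinstance(value, dict):
        return {key: "***" if key in SENSITIVE_FIELDS else obfuscate(item)
                for key, item in value.items()}
    if isinstance(value, list):
        return [obfuscate(item) for item in value]
    return value

print(obfuscate({"customer": {"name": "Ana", "ssn": "123-45-6789"}}))
# {'customer': {'name': 'Ana', 'ssn': '***'}}
```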

Leverage Accounts for enhanced security

In addition to the previous measures, managing accounts securely is a critical part of protecting sensitive information in pipelines. Accounts enhance the security of credentials such as passwords, private keys, and API tokens, among others.

With Accounts, you can store sensitive data securely, ensuring that your team can use the credentials within integration flows without directly accessing the sensitive values. This allows you to lock the credentials, ensuring that only authorized systems and services can use them in integrations, while team members can’t view or modify the sensitive values.

This secure management of authentication credentials ensures that you’re protecting not just the data flowing through your pipelines but also the access controls that govern it, adding another layer of defense against potential leaks or unauthorized access.
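
The underlying principle can be shown in a few lines. This is a generic sketch, not Digibee's mechanism — the environment variable stands in for the Platform's secure store: flows reference a credential by name, and the secret value itself never appears in the pipeline definition.

```python
import os

def get_account(name: str) -> str:
    # Stand-in for a secure credential store lookup.
    secret = os.environ.get(name)
    if secret is None:
        raise KeyError(f"Credential {name!r} is not provisioned")
    return secret

# Resolved at runtime, never hard-coded in the flow:
# api_token = get_account("CRM_API_TOKEN")
```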

Final thoughts

Security is a shared responsibility among all teams involved in integration development. Each team member has a role in implementing and maintaining security practices to minimize risks. This collaboration leads to secure and reliable integrations.

Explore more possibilities in our Documentation Portal, take courses on Advanced Security at Digibee Academy, or visit our Blog to discover more resources and insights.

If you have feedback on this use case or suggestions for future articles, we’d love to hear from you through our feedback form.
