How to resolve common pipeline issues

Learn how to identify and resolve common errors in your pipelines.


What qualifies as an error?

In integrations, an error is a failure or unexpected behavior that interrupts the normal flow of data between systems, services, or connectors. These errors can occur at any stage of execution.

Examples of errors in integrations

Timeout

Description: Occurs when the pipeline exceeds the configured execution timeout, forcing the process to stop.

Example: A pipeline that queries an external database takes more than 15 minutes to complete the request due to a large volume of data.

{
  "timestamp": 1730373482528,
  "error": "Pipeline execution timed-out.",
  "code": 500
}
Out of Memory (OOM)

Description: This error occurs when the pipeline consumes more memory or CPU than the configuration allows and stops execution.

Example: A pipeline that processes a large file or queries a high-volume API without pagination, resulting in excessive memory usage. Learn how to resolve the "Pipeline execution was aborted" error.

{
  "timestamp": 1730438185028,
  "error": "This pipeline execution was aborted and cannot be automatically retried because it was configured to disallow redeliveries.",
  "code": 500
}
Connection failures

Description: Communication error between the pipeline and an external service or database, such as authentication failures or unavailable servers.

Example: A pipeline tries to access an external API but encounters an authentication error or cannot connect because the server is temporarily unavailable.

{
  "timestamp": 1734254417051,
  "error": "java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 60000ms.",
  "code": 500
}
Configuration errors

Description: Errors caused by incorrect settings, such as invalid endpoints, missing parameters, or incorrect size limits.

Example: A pipeline with an incorrect endpoint for a REST API cannot send data, resulting in an error with an invalid URL.

{
    "timestamp": 1734134205495,
    "error": "An internal error has occurred. Exception: java.lang.NullPointerException: Cannot invoke \"String.length()\" because \"sql\" is null",
    "code": 500
}
Unexpected restarts

Description: When the integration infrastructure, such as pipeline containers, is restarted due to internal problems such as OOM errors.

Example: A pipeline that consumes a lot of memory leads to an automatic restart by Kubernetes and generates a “recycled” warning in the Runtime screen.

Message expiration in event queues

Description: Messages expire if they are not processed in time, which leads to their elimination.

Example: A message expires before it is consumed by the next pipeline, especially if the queue expiration time is short and the data load is high.

Step-by-step guide to identifying, analyzing, and solving issues

Identifying the issue

1

Gather initial information

Make sure you have the execution key from the error logs or find it by filtering the logs by time and relevant fields.

2

Access the pipeline and execution

Open two tabs: one for the examined pipeline and another for the associated execution on the Completed Executions page. This allows you to compare the configured integration (pipeline) with what actually happened (execution).

  • Build: Shows how your integration is configured.

  • Monitor: Shows execution details and possible error points.

3

Analyze the logs

Check the execution logs to trace the integration path, to find the point at which the integration was aborted, or to identify the error. The logs can be analyzed individually or across multiple executions. You can view the logs in two ways:

  • Completed Executions: Displays the execution key, the input/output messages, and the start and end timestamps.

  • Pipeline Logs: Provides logs of the steps that were executed from the request to the pipeline response.

Not all pipeline connectors automatically generate logs. Connectors such as Log and external interaction connectors (for example, REST, Google Storage, Stream DB) always log messages. However, the results of data transformation or external connectors are only displayed if they are explicitly logged.

Analyzing the issue

Method 1

Compare a successful integration with a failed integration. Look for data that has been integrated correctly (for example, an order number) and data that has failed to integrate. If similar data follows different flows, compare the logs to find the point of divergence.

Method 2

Identify patterns in execution failures. For example, if concurrent executions fail with a timeout, analyze the logs to determine which step caused the issue and which connector or service is associated with it.

Resolving common issues

Timeout

What it is: A timeout occurs when a pipeline exceeds the execution time, usually 15 minutes (900,000 milliseconds).

How to fix it:

  • Increase the timeout in the pipeline’s trigger, but avoid exceeding the maximum limit.

  • Restructure pipelines that are approaching the 10-minute limit by processing the data in smaller chunks using pagination (a minimal sketch follows this list).

  • Identify where the timeout occurs in the flow. Adjust the timeout in the connector for specific requests such as REST or SOAP.
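
As an illustration of the pagination approach, here is a minimal sketch in Python. The /orders endpoint and its page and size parameters are assumptions for the example; on the Platform this pattern would typically be built with a REST connector inside a loop rather than a script.

import requests  # stand-in for a REST connector call

BASE_URL = "https://api.example.com/orders"  # hypothetical endpoint
PAGE_SIZE = 200  # keep each request small enough to finish well within the timeout

def fetch_pages():
    page = 0
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "size": PAGE_SIZE}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()   # assumption: the endpoint returns a JSON array
        if not batch:         # an empty page means everything was consumed
            break
        yield from batch      # hand records downstream instead of accumulating them
        page += 1

for record in fetch_pages():
    print(record)             # placeholder for the downstream processing step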

Out of Memory (OOM)

What it is: Occurs when the pipeline exceeds its configured memory limit.

How to fix it:

  1. Identify the cause: Use the Monitor page to analyze the logs and determine where the error is occurring. Adding logs can narrow down the issue.

  2. Optimize the flow: Implement pagination to reduce the data flow, or split the flow into primary and secondary pipelines to separate enrichment and data submission logic. A chunked-processing sketch follows this list.

  3. Adjust the deployment: After you restructure the pipeline, test it and then deploy it to Production. If the error persists, increase the pipeline size and deploy it again.
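
To illustrate step 2, here is a minimal chunked-processing sketch; the orders.csv input file is an assumption for the example. On the Platform, connectors such as Stream File Reader play this role, keeping memory bounded instead of loading the whole file at once.

import csv

def read_in_chunks(path, chunk_size=500):
    """Yield fixed-size chunks of rows so memory use stays bounded."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        chunk = []
        for row in reader:
            chunk.append(row)
            if len(chunk) >= chunk_size:
                yield chunk
                chunk = []
        if chunk:
            yield chunk

for chunk in read_in_chunks("orders.csv"):  # hypothetical input file
    print(len(chunk))  # placeholder: pass each chunk to the next step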

Trigger configuration and queue expiration

What it is: The pipeline trigger starts the execution, and the queue expiration determines how long a message can wait to be processed.

How you can fix it:

  • For low-volume scenarios, set shorter expiration times (2-5 minutes) to avoid unnecessary message accumulation.

  • For high-volume scenarios, extend the expiration time to 10-15 minutes, depending on how long the messages take to be consumed and processed.

  • Adjust the expiration time to avoid the loss of messages that can occur when queues expire before processing. A quick way to sanity-check these values is sketched after the suggestions below.

Suggestions:

  • Make sure the trigger is configured according to the expected load before running the pipeline.

  • Monitor the queues regularly to adjust the expiration time to the volume and expected behavior of the pipeline.
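
Following up on the sizing guidance above, a rough arithmetic check can be done before deploying; every number below is an illustrative assumption, not a platform default.

# Illustrative sizing check; every number here is an assumption, not a platform default.
backlog = 3000          # messages expected to accumulate at peak
throughput = 10         # messages consumed per second across all replicas
safety_factor = 2       # headroom for slow periods and retries

drain_seconds = backlog / throughput * safety_factor
print(f"Allow at least {drain_seconds / 60:.0f} minutes before messages expire")
# -> 10 minutes here; a shorter configured expiration risks losing messages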

Delay in Object Store or Session Management

What it is: A delay in Object Store or Session Management occurs when the volume of data stored increases significantly without a cleanup or deletion routine in place. This overloads the database, slowing down queries and pipeline execution and affecting the overall performance of the pipeline.

How to fix it:

  • Check the data volume: Access the Object Store and analyze the current size of the database. If the data volume is high, implement a cleanup routine to improve performance.

  • Implement a cleanup routine: Configure a cleanup routine to remove old or unnecessary data. This avoids the accumulation of excessive data and keeps storage efficient (an example filter is sketched after this list).

  • Divide the processing into smaller parts: For pipelines that process large amounts of data, use pagination or split the flow into different pipelines. One pipeline performs the first query and sends the data to other pipelines. This keeps the Object Store cleaner and the integration process scalable.

  • Test and monitor: After you have implemented a cleanup routine or restructured the pipeline, monitor performance in a Test environment. Observe whether the delay has decreased and, if necessary, adjust the cleanup process or the integration flow before deploying it in the Production environment.
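
As a sketch of the date-based filter such a cleanup routine might use, the snippet below builds a MongoDB-style query; the createdAt epoch-millis field is an assumption for the example.

from datetime import datetime, timedelta, timezone

# Assumption: each stored document carries a createdAt field in epoch milliseconds.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
cleanup_query = {"createdAt": {"$lt": int(cutoff.timestamp() * 1000)}}

print(cleanup_query)  # use a filter like this in a scheduled delete step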

External tools support

Sometimes you can use external tools to help you with certain aspects of the integration:

  • SSL certificate validation: If you encounter connection issues, it can be helpful to check that certificates are correctly updated and active.

  • XML validation and comparison: For debugging errors in XML workflows and comparing successful executions with failed ones.

  • JSON Schema generator: For generating schemas from a valid JSON file.

  • Mock data generator: Useful for testing with large amounts of data (a small sketch follows this list).

  • HTML generator: For creating HTML messages for use in connectors such as Email and Template Transformer.

  • AI tools: For simplifying logical solutions such as the creation of RegEx.
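
As an illustration of the mock data idea, a few lines of scripting can stand in for a dedicated tool; the record shape below is made up for the example.

import json
import random
import string

def mock_order(i):
    """Build one fake order record; the shape is purely illustrative."""
    return {
        "orderNumber": f"ORD-{i:06d}",
        "customer": "".join(random.choices(string.ascii_uppercase, k=8)),
        "total": round(random.uniform(10, 500), 2),
    }

payload = [mock_order(i) for i in range(1000)]  # a reasonably large test payload
print(json.dumps(payload[:2], indent=2))        # preview the first records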

Improving diagnostics

Pay attention to logs

Logs are crucial to quickly identify where errors occur in the pipeline. For effective diagnostics, we recommend placing the Log connector at strategic points in the pipeline. Place it before and after critical connectors, such as data transformations, calls to external services, and flow decisions. This will allow you to track the exact path of execution and pinpoint specific issues.

  • Monitor these logs regularly to identify error patterns or signs of poor performance. With the Log connector configured in key areas, you get a more detailed overview of execution, making it easier to track problems.

Check trigger and queue settings

  • Make sure that the pipeline trigger is configured correctly and that the queue expiration time is appropriate for the volume of data you are processing. Errors in the trigger or an expiry time that is too short can lead to the loss of important messages.

  • Configure the trigger to meet the requirements of the data flow and adjust the time in the queue if necessary to ensure that the messages have enough time to be processed. Pay attention to the expiration time of the event queue and regularly check the lifetime of the messages in the queues.

Review pipeline implementation

  • Analyze each pipeline connector to identify possible points of failure. If an error is detected in a specific connector (for example, REST or SOAP), check its configurations, such as timeout settings and the data structure sent or received.

  • You can use an Assert connector to check whether the data returned by external services matches the expected format, which ensures a more robust integration (a sketch follows).

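As a sketch of that kind of format check, here is the same idea expressed with the Python jsonschema package; the schema and field names are assumptions for the example, not the Assert connector's configuration.

from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical contract for an external service response.
schema = {
    "type": "object",
    "required": ["orderNumber", "status"],
    "properties": {
        "orderNumber": {"type": "string"},
        "status": {"type": "string", "enum": ["OPEN", "SHIPPED", "CANCELLED"]},
    },
}

response = {"orderNumber": "ORD-000123", "status": "SHIPPED"}  # sample payload
try:
    validate(instance=response, schema=schema)
    print("response matches the expected format")
except ValidationError as err:
    print(f"unexpected response shape: {err.message}")
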
Monitor deployment settings

Deployment settings such as the number of Replicas, Concurrent executions, and memory allocation (Pipeline size) have a significant impact on pipeline performance. Ensure that the memory and replicas are appropriately sized for the expected workload, especially for pipelines with high demand. A rough capacity check is sketched below.

  • Adjust these settings as needed to ensure stable performance and minimize execution errors.
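
A back-of-the-envelope check can reveal undersized deployments early; the figures below are assumptions for the example, not platform limits.

# Illustrative capacity check; the values are assumptions, not platform limits.
replicas = 2
concurrent_executions = 10     # per replica
avg_execution_seconds = 3
peak_requests_per_second = 5

capacity = replicas * concurrent_executions / avg_execution_seconds
print(f"Sustained capacity ~{capacity:.1f} executions/s vs peak {peak_requests_per_second}/s")
# If peak demand exceeds capacity, add replicas, raise concurrency, or shorten executions.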

Use Digibee monitoring tools

The Digibee Integration Platform provides monitoring tools and insights to detect issues such as increased response times or unusual error rates. Enable these tools to monitor pipeline health in real time and receive proactive alerts about performance or configuration issues.

  • Alerts Configuration: On this page, you can customize integration performance notifications based on metrics to take quick action to minimize operational impact. Learn more in the Alerts documentation.

  • Monitor Insights: This feature provides you with insights into the health of completed integrations. Selecting an insight takes you to the metrics of the executions that require attention, already filtered by the time period of occurrence. Further information can be found in the Monitor Insights documentation.

Manually reprocess executions

  • This feature allows you to retry previously processed calls, which is useful in scenarios where the original call failed due to temporary issues such as network interruptions or errors in the target system.

Manual reprocessing requires identifying the failed call and checking the previously used parameters. Depending on the configuration, you can keep the same parameters or change them. After reprocessing the call, perform a new search on the Pipeline Logs page; it will be displayed with the value "manual" in the Source field.

Although manual reprocessing is an effective tool, it should be used with caution. In some cases, it may be necessary to investigate the causes of errors before simply retrying calls to avoid repeating the same errors.

Still having issues with your integration?

If so, open a ticket with our Support team. Here you can find out how to open a ticket and how long it is likely to take.

When you open the ticket, make sure you provide the relevant information:

  • Pipeline name or Execution ID

  • Previous analysis performed by the customer with proof of testing

  • Other pertinent information that helps identify and analyze the issue

  • Expected behavior versus observed behavior

  • Error message with date and time

  • Pipeline key (for errors or incidents)

  • Environment (Prod or Test)

  • Project name (if applicable)

Conclusion

By following the recommendations in this guide, you can troubleshoot pipelines on the Digibee Integration Platform more efficiently and independently. With an organized approach and the application of best practices — such as detailed log analysis, proper queue monitoring, and adjusting deployment settings — you can quickly identify and resolve common errors that affect integration execution.

The Digibee Integration Platform provides tools that, along with the troubleshooting procedures outlined, allow you to resolve many issues on your own. When necessary, use resources such as manual reprocessing and monitoring tools to validate corrections and ensure pipeline stability.
