Integration best practices for developers on the Digibee Integration Platform
In this use case, you’ll explore integration best practices essential for developers at all levels. This documentation covers guidelines to ensure your integrations are effective, scalable, and secure, with a focus on the Digibee Integration Platform.
The emphasis will be on three key pillars:
Improving debugging with better troubleshooting practices.
Ensuring data consistency through payload validation.
Validating responses from external calls.
Debugging is essential for maintaining integration flows. One way to achieve this is by improving visibility into your payloads during runtime. The Log connector plays a key role in this, making troubleshooting easier.
To maximize the effectiveness of logging, it's important to strategically position the logs within your integrations. For example:
At the beginning of an event-triggered pipeline to monitor the input payload.
In the outputs of a Choice connector to track the path the integration follows.
As the first connector within onProcess and onException subflows to track which subprocess the execution is progressing through.
Before making external calls to track the information being passed in the request.
Beyond placement, configure each Log connector carefully:
Assign a descriptive name (step name) to the connector.
Choose the appropriate log level (Info, Error, or Warning), as it will appear in the Monitor logs.
Use the Log connector with caution inside loop connectors (for example, For Each or any Stream connector) to avoid excessive logging, which can overwhelm the pipeline’s memory.
Log only relevant data to minimize exposure of sensitive information, for example {{ message.id }} instead of {{ message.$ }}, as shown in the sketch after this list.
Protect sensitive information by applying sensitive field configurations to obfuscate those fields in the Monitor using “***” characters. This can be done in the pipeline settings on Canvas or through a sensitive fields policy for the entire realm.
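As an illustration, the Log connector’s message could reference only the fields needed for troubleshooting. The configuration below is a minimal sketch: the field names (orderId, status) are hypothetical, and it assumes the message parameter accepts Double Braces expressions, as shown earlier in this section.

```json
{
  "logLevel": "INFO",
  "message": "Order received - id: {{ message.data.orderId }}, status: {{ message.data.status }}"
}
```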
To achieve data conformity, you should implement validation as part of your integration process. The Validator V2 connector is used to enforce data structure validation based on a predefined JSON schema. The connector should be placed strategically in your pipeline, typically after receiving data and before any external calls that require a specific payload structure.
Place the Validator V2 connector in your pipeline at the point where you need to verify the structure of the incoming data.
Select the appropriate Schema version for your use case or enable the Detect Draft Version option to automatically detect the version from the provided schema.
In the JSON Payload field, provide the JSON content that needs to be validated (this field accepts Double Braces functions).
In the JSON Schema field, enter the JSON Schema that will be used to validate the JSON content.
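As a minimal sketch, suppose the pipeline receives a hypothetical order payload. The JSON Payload field could reference the incoming message through Double Braces or contain the content directly:

```json
{
  "orderId": "12345",
  "amount": 150.75
}
```

The JSON Schema field could then hold a schema such as the one below (draft-07 is used here only as an example; match it to the Schema version you selected):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["orderId", "amount"],
  "properties": {
    "orderId": { "type": "string" },
    "amount": { "type": "number", "minimum": 0 }
  }
}
```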
If the connector successfully validates the JSON content, it outputs the JSON configured in the JSON Payload parameter.
If there is an error, the connector outputs an attribute success: false and a validation object containing the error details.
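For reference, a failed validation of the example above (with the amount field missing) might produce an output along these lines; the exact structure of the validation object is an assumption and may differ in your environment:

```json
{
  "success": false,
  "validation": {
    "errors": [
      "required key [amount] not found"
    ]
  }
}
```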
For assistance in creating your schemas, use the JSON Schema tool.
Validating external calls is crucial to ensure responses are handled appropriately. For example, a REST call with different status codes, such as 200 or 500, may require distinct handling. Similarly, an error key in the response might indicate the need for specific error treatment. Ensuring the correct routing of the payload based on the response type helps avoid issues in your integration flow.
For REST, SOAP, and similar connectors, understanding what to check is only the first step—you also need to know how to perform these checks. Two common methods are:
Using the Assert connector: This connector checks if the data received meets specific validation conditions. If the check fails, the pipeline triggers an error, and the execution is interrupted.
Using the Choice connector: Instead of interrupting the pipeline immediately when an error is found, it allows you to branch the flow and perform appropriate error handling before deciding whether to stop or continue the execution.
Common checks include:
Check for an error attribute in the response payload after making the request.
Check the HTTP status code.
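To make this concrete, the conditions configured in a Choice connector after a REST call might look like the sketch below. The response field names (status, body.error) and the exact condition syntax depend on how the preceding connector maps its response, so treat them as assumptions to adapt:

```json
{
  "conditions": [
    { "name": "success",       "condition": "{{ message.status }} == 200" },
    { "name": "serverError",   "condition": "{{ message.status }} >= 500" },
    { "name": "businessError", "condition": "{{ message.body.error }} != null" }
  ]
}
```

An Otherwise path in the Choice connector can then catch any response that matches none of these conditions.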
Database connectors (DB V2, Object Store, and similar): For database operations, validate the response by checking the following attributes:
rowCount and updateCount: Ensure they are different from zero, depending on the operation performed.
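For instance, an UPDATE executed through DB V2 might return a payload along the lines of the example below (the exact shape depends on the operation and connector version), and an Assert or Choice condition would then confirm that updateCount is greater than zero, for example {{ message.updateCount }} > 0:

```json
{
  "updateCount": 1
}
```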
When using the Assert connector, the flow will be interrupted if the data doesn’t meet the validation condition, and the pipeline will mark the execution as having an ERROR status in the log.
For situations where you need to stop the pipeline with a custom error message, use the Throw Error connector. It enables you to define a specific error message and return a corresponding HTTP status code in the response.
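As a sketch, a Throw Error connector placed after a failed check could be configured roughly as follows; the parameter names shown are illustrative stand-ins for the connector’s error message and HTTP status code settings:

```json
{
  "code": 400,
  "message": "Order validation failed: required field 'amount' is missing"
}
```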
By following these best practices, you'll be on your way to building efficient, secure, and scalable integrations. Implementing these practices into your workflows will enhance the reliability of your pipelines, minimize errors, and ultimately improve the scalability of your solutions.
You can explore more possibilities in our Documentation Portal and Digibee Academy, or visit our Blog to discover more resources and insights.
If you have feedback on this use case or suggestions for future articles, share your thoughts on our feedback form.