# How to run a load test in Digibee

## **Overview**

This document provides guidance for running load tests on pipelines in the Digibee Integration Platform, assessing performance, response time, and stability to ensure a safe go-live in production.

Here you will find the recommended strategy, the ideal settings for the planned load, and best practices.

Support materials are also available:

* Test Plan spreadsheet template, which helps organize and document the load test.
* Checklist with the main steps of the Test Plan, for quick reference whenever needed.

## **Context**

The guidance in this document is based on a real case: testing a synchronous REST API pipeline that needed to support 4,000 simultaneous users (*threads*) with a maximum response time of 4,000 ms. This load occurred only on the fifth business day of the month; on other days, the peak was 10 *threads*.
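
As a sanity check before planning, Little's law relates concurrency, response time, and throughput. A minimal sketch using the figures from the case above (treating them as a steady state where every user always has a request in flight is an assumption, so this is an upper-bound target):

```python
def required_rps(concurrent_users: int, response_time_ms: float) -> float:
    """Little's law: throughput = concurrency / latency.

    Assumes a steady state in which every simulated user always has
    a request in flight, giving an upper-bound throughput target.
    """
    return concurrent_users / (response_time_ms / 1000.0)

# Case from this document: 4,000 users at a 4,000 ms response-time ceiling
print(required_rps(4000, 4000))  # 1000.0 requests per second
```

A result like 1,000 requests per second gives the test tool and the Digibee team a concrete throughput figure to size the environment against.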

## **Important considerations**

### **Disabling autoscaling**

During the test, [autoscaling](https://app.gitbook.com/s/jvO5S91EQURCEhbZOuuZ/development-cycle/overview/runtime/autoscaling) must be disabled: the cold start it introduces distorts metrics without representing real use, since it only affects the first requests to a new replica. After the test, configure the minimum number of replicas to ensure good performance for initial requests and for below-average loads.

### **Warm-up on new replicas**

Each incremental test aims to identify limits and adjust either horizontal scaling (replicas) or vertical scaling (pipeline size). When increasing replicas, disregard the first execution: it triggers the warm-up process on the new replicas, which distorts the performance analysis.
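
Discarding the warm-up run can be built into how results are aggregated. A minimal sketch (the flat list of per-run response times is an illustrative data shape, not a Platform export format):

```python
def steady_state_mean(response_times_ms: list[float]) -> float:
    """Mean response time excluding the first run, which triggers
    warm-up on new replicas and would distort the analysis."""
    if len(response_times_ms) < 2:
        raise ValueError("need at least two runs to discard the warm-up run")
    return sum(response_times_ms[1:]) / (len(response_times_ms) - 1)

# First run (cold) takes 2400 ms; steady-state runs hover around 300 ms
print(steady_state_mean([2400, 310, 295, 305]))
```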

### **Load test costs**

Costs depend on the licensing model used ([Licensing models](https://app.gitbook.com/s/OhzhFKeTGI53IPSy3U9T/licensing-models/licensing-models)):

* **Pipeline-Based (Licenses):** Production and Test consume the same package.
* **Subscription-Based (RTUs):** Production and Test consume different packages.
* **Consumption-Based (Credits):** Based on actual usage; load tests can **consume credits quickly** due to the high volume of requests.
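
For the Consumption-Based model, a back-of-the-envelope estimate of request volume helps anticipate consumption before running the test. A minimal sketch (the steady-state assumption mirrors the one above; actual credit cost per request depends on your contract and is not modeled here):

```python
def estimated_requests(threads: int, duration_s: int, avg_response_time_s: float) -> int:
    """Upper-bound request count for a test window: assumes each
    thread issues a new request as soon as the previous completes."""
    return int(threads * duration_s / avg_response_time_s)

# 4,000 threads for a 10-minute run at ~4 s per request
print(estimated_requests(4000, 600, 4.0))  # 600000
```

Even a rough figure like this makes it clear why load tests can consume credits quickly.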

### **Digibee team involvement**

The load test must include Digibee team support starting in **Phase 2**, ensuring assistance during execution and making necessary adjustments. More information can be found in the section [Phase 2: Preparing the test environment](#phase-2-preparing-the-test-environment).

## **Phase 1: Test planning**

A test plan is essential to assess the environment, execute scenarios, analyze results, and successfully complete the load test.

{% hint style="info" %}
We recommend downloading the [Test Plan Spreadsheet](#test-plan-spreadsheet-template), which will serve as support for designing and documenting the load test.
{% endhint %}

### **Load parameters**

The first step of the plan is to record the general conditions for the test, covering the following topics:

* What will be tested
* Relevant parameters for test execution (simultaneous requests, message size, example of the test message, expected response time)
* Notes (tool used for running the tests, information about peak transactions and average load outside the peak)

See an example:

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FWxkOTYd5QdAGda9aVpo0%2Fload-test-1.png?alt=media&#x26;token=3966a325-d5a2-4a24-be7e-7bba087dcd93" alt=""><figcaption></figcaption></figure>

### **Test rehearsal**

The Test Plan should include scenarios with gradual load growth, starting with minimal resources and increasing with each run until reaching the final goal. This process is called a “rehearsal” because its results help define the parameters for the final test.
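
The gradual ramp-up can be drafted programmatically before being copied into the spreadsheet. A minimal sketch (the linear step progression and the parameter names are illustrative choices, not a Platform convention):

```python
def rehearsal_plan(target_threads: int, steps: int) -> list[dict]:
    """Generate rehearsal scenarios with gradually increasing load,
    from a minimal level up to the final target."""
    plan = []
    for i in range(1, steps + 1):
        plan.append({
            "scenario": i,
            # Linear ramp toward the target, never below one thread
            "threads": max(1, target_threads * i // steps),
        })
    return plan

for scenario in rehearsal_plan(target_threads=4000, steps=5):
    print(scenario)
```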

Below is an example of the planning in a Test Plan spreadsheet page:

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2F3CM1nI8MEkaGB988Ua5I%2Fload-test-2.png?alt=media&#x26;token=d59a2386-6c5c-4ba2-a3b5-01daf1b7f956" alt=""><figcaption></figcaption></figure>

## **Phase 2: Preparing the test environment**

With the plan defined, it’s time to prepare the environment together with Digibee. At this stage, it’s essential to align responsibilities among the **customer team**, **Digibee internal team**, and **Product team**, ensuring that all necessary adjustments are made before starting the tests.

### **Customer team**

Responsible for enabling test execution and providing the necessary external resources.

* Define the tool to be used for test execution (for example, JMeter).
* Assign people responsible for running and analyzing results together with the Digibee team.
* Provide resources for simulating external access steps, even with simulated responses, such as:
  * Request endpoints
  * Databases
  * File server
  * Messaging brokers
  * Others
* Participate in the pipeline pre-analysis to identify components requiring scale.

### **Digibee internal team**

Acts as an intermediary between the customer and the Product team, coordinating actions and ensuring the environment is ready for the load test.

For this, one of the following teams must be contacted:

1. **Professional Services**: Specialized technical support, **if previously contracted**.
2. **Customer Success**: Oversees the process, ensuring alignment with the customer.

Main activities:

* Verify completion of infrastructure requirements.
* Confirm availability of external access resources or validate alternatives (for example, Mock).
* Conduct pre-analysis of pipelines and functional tests.
* Assess environment capacity.
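
When an external resource is unavailable, a simple mock can stand in for it during the test. A minimal sketch using Python's standard library (the endpoint, port, and response payload are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockHandler(BaseHTTPRequestHandler):
    """Returns a fixed JSON response, simulating an external endpoint."""

    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging during high-volume tests

# To run the mock:
# HTTPServer(("0.0.0.0", 8080), MockHandler).serve_forever()
```

A mock like this keeps response times constant, so variations observed in the test can be attributed to the pipeline rather than to the external system.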

### **Digibee Product team**

Responsible for evaluating and calibrating the test environment, ensuring resources are adequate before execution.

Main activities:

* Analyze load parameters and the test plan, identifying necessary adjustments.
* Review and calibrate Platform components such as AWS EC2, GCE, Gateway (Kong), Trigger, RabbitMQ, Object Store, and Digibee Store.
* Define test monitoring: team presence, execution window, reports and metrics, access, and other relevant details.

{% hint style="warning" %}
It’s essential that Digibee and the customer conduct a pipeline pre-analysis together to identify components that require scaling.
{% endhint %}

Initial configurations can be recorded in the “general” page of the Test Plan spreadsheet.

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FD80zLb4EiLAv1YtkbOac%2Fload-test-3.png?alt=media&#x26;token=74dcf96c-796f-4b74-b598-db3eb753c30f" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
Some of the information above refers to adjustments made during the tests, as described in [**Phase 3: Execution**](#phase-3-execution).
{% endhint %}

## **Phase 3: Execution** <a href="#phase-3-execution" id="phase-3-execution"></a>

With the previous phases completed, the planned tests should be executed sequentially.

### **Updating general environment data**

During testing, configuration adjustments may be required.

Whenever this happens, it’s a good practice to update the general conditions page so that the final analysis reflects the actual scenario, as illustrated below:

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2F0iKWCnKaQTgk23uxGvsW%2Fload-test-4.png?alt=media&#x26;token=79b6ade8-68ef-4b49-a4a8-e677f89105ea" alt=""><figcaption></figcaption></figure>

### **Points to note**

During testing, it is common to observe relevant behaviors that should be recorded for evaluation or future consideration.

For this, we recommend creating a dedicated page in the spreadsheet:

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FMP5uMheUyliG3YZXB5N5%2Fload-test-5.png?alt=media&#x26;token=15c50bb4-5006-4c6e-a619-dc7ac4a641d5" alt=""><figcaption></figcaption></figure>

### **Final test**

After rehearsals and necessary adjustments, the team and customer select the most relevant tests to repeat, usually those with the best initial results in terms of success rate and response time.

In the example, the five best results from the initial scenario are highlighted, considering the success rate and response time.

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FZsubS92tngAc3xQREknk%2Fload-test-6.png?alt=media&#x26;token=74f3bb33-102d-470c-ae0e-873d67d2679c" alt=""><figcaption></figcaption></figure>
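
The selection of the best runs can be reproduced over the rehearsal data. A minimal sketch (the result records and field names are illustrative):

```python
def best_runs(results: list[dict], top: int = 5) -> list[dict]:
    """Rank runs by success rate (descending), breaking ties by
    average response time (ascending), and keep the top candidates."""
    ranked = sorted(results, key=lambda r: (-r["success_rate"], r["avg_ms"]))
    return ranked[:top]

runs = [
    {"id": 1, "success_rate": 0.99, "avg_ms": 3200},
    {"id": 2, "success_rate": 1.00, "avg_ms": 3900},
    {"id": 3, "success_rate": 0.95, "avg_ms": 2800},
    {"id": 4, "success_rate": 1.00, "avg_ms": 3500},
]
for run in best_runs(runs, top=2):
    print(run["id"])  # 4, then 2
```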

## **Phase 4: Final result analysis and action plan**

### **Final analysis of tests**

Based on the final test, analyses are conducted to assess results, considering metrics such as **RPS (requests per second)**, **response time**, and **success rate**, correlated with the pipeline deployment size (**size × replicas × threads**).

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FY7xp5iBJkqkbqIlBkcvT%2Fload-test-7.png?alt=media&#x26;token=e80551c0-8dea-4b6a-8f30-cffd1ccdbaaf" alt=""><figcaption></figcaption></figure>
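
These metrics can be computed directly from raw samples collected by the test tool. A minimal sketch (the `(response_time_ms, succeeded)` sample format is an illustrative data shape):

```python
def summarize(samples: list[tuple[float, bool]], duration_s: float) -> dict:
    """Compute RPS, mean response time, and success rate from
    (response_time_ms, succeeded) samples over one test window."""
    total = len(samples)
    successes = sum(1 for _, ok in samples if ok)
    return {
        "rps": total / duration_s,
        "avg_ms": sum(ms for ms, _ in samples) / total,
        "success_rate": successes / total,
    }

samples = [(300.0, True), (350.0, True), (4100.0, False), (250.0, True)]
print(summarize(samples, duration_s=2.0))
```

Comparing these summaries across deployment configurations (size × replicas × threads) is what reveals which setup meets the target.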

### **Test conclusion**

This phase concludes with a report covering the business need, context, results, test summary, success rate observations, platform configuration, adjustments made (rehearsal and final tests), points of attention, and general notes.

The report can be added in Markdown to the “conclusion” page of the Test Plan spreadsheet.

<figure><img src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FVbMBf0ZzV2PK8MxaxZbz%2Fload-test-8.png?alt=media&#x26;token=c42020a9-c567-453a-a814-5fca766e232c" alt=""><figcaption></figcaption></figure>

## **Final considerations**

The strategy outlined in this document ensures the execution of a **structured, controlled, and iterative** load test, capable of indicating not only pipeline scalability needs but also possible platform capacity adjustments.

This process is essential for validating scenarios with high transaction volumes, where **response time** and **stability** are critical factors for a safe production go-live.

## **Support materials**

### Test Plan spreadsheet template

To track the tests, we recommend using a structured spreadsheet based on the following template.

{% file src="https://3750561495-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaD6wuPRxnEQEsYpePq36%2Fuploads%2FYMP0A4sLrCRtYCxivgwS%2FTemplate_%20Load%20Test%20Project.xlsx?alt=media&token=f6d99fd5-8c3e-447c-bcc5-df0c701f7470" %}

### Checklist for the load test

Use the checklist below as a guide to create a test plan according to the instructions in this document.

<details>

<summary><strong>Checklist: Guidelines for the load test</strong></summary>

**Phase 1 – Test planning**

1.0 – Detail with the customer the parameters to be tested:

* [x] Number of simultaneous requests.
* [x] Message sizes (all possible sizes).
* [x] Expected response time.
* [x] Business requirements.
* [x] Infrastructure limits (for example, *Rate Limit*).
* [x] Environment configuration.
* [x] Routes (with intermediate pieces).
* [x] Success criteria and expectations.

1.1 – Structure the test plan (Test rehearsal):

* [x] Create a spreadsheet with the parameters to be tested, including:
  * [x] Flow to be tested
  * [x] Message size
  * [x] Number of messages
  * [x] Pipeline capacity (size and replicas)
  * [x] Processing times
* [x] Vary parameters gradually, increasing load compared to the previous test.
* [x] Change only one parameter at a time and measure, with documented evidence. For example:
  * *1 message of 150 KB, Small, 1 replica*
  * *2 messages of 150 KB, Small, 1 replica*
  * *1 message of 150 KB, Medium, 1 replica*

* [x] Record times and behavior of each variation. Repeat until identifying the limits of that configuration.

**Phase 2 – Preparing the test environment**

* [x] Obtain examples of the messages to be used.
* [x] Record parameters of the components to be monitored (gateway, triggers, connections, etc.).
* [x] Define monitoring tools, reports, and metrics (with the Product team).
* [x] Share the defined test parameters with the Product team.
* [x] Request the necessary resources from the customer to simulate external access steps (even if mocked), such as:
  * [x] Request endpoints
  * [x] Databases
  * [x] File server
  * [x] Messaging broker
* [x] Preparation checklist – validate if all items are ready as planned:
  * [x] Infrastructure requirements completed.
  * [x] External resources requested from the customer available.
    * [x] If unavailable, evaluate alternatives (e.g., mock).
  * [x] Pipeline pre-analysis performed to identify components requiring scale.
  * [x] Environment capacity validated.

**Phase 3 – Execution**

* [x] Start only after all Phase 2 validations.
* [x] Follow the test plan defined in Phase 1, executing sequentially.

**Phase 4 – Final result analysis and action plan**

* [x] Generate a results report from the Test Plan spreadsheet.
* [x] Evaluate whether requirements were met.
* [x] Define whether a new test cycle is needed.
  * [x] If yes, request infrastructure adjustments and repeat Phase 3.

</details>
