March 10
Connectors
Manual evaluations for test executions in the Agent Component
You can now manually evaluate test results when running Datasets in the Agent Component. In each Test Case, rate the result as positive (👍🏼) or negative (👎🏼) based on your assessment.
Your rating adds a manual-evaluations column to the execution details in the Executions tab and contributes to a manual success rate for the Dataset. This value is combined with Code Scorers results to provide a more complete view of the Dataset’s overall performance.

Add MCP tools from the registry to the Agent Component
You can now add MCP pipelines deployed with the MCP Server Trigger directly from the MCP From Registry tab when configuring tools in the Agent Component.
Previously, you had to manually copy the endpoint information from the Run page to configure an MCP tool; this update removes that step.
The MCP From Registry tab lists available pipelines deployed in Test or Prod, including their authentication type, configured tools, parameters, and version. When you select a pipeline, key fields such as Name and Server URL are pre-filled automatically, making MCP Tool configuration faster and easier.

Choice and For Each connectors now support Double Braces
Both the Choice and For Each connectors now support Double Braces expressions, eliminating the need for temporary variables and making it easy to use data from previous steps.
In For Each, in addition to Double Braces support in the iteration expression, a new reserved reference was introduced to access the current element of the loop:
{{iterators.<for-each-alias>.current}}
This reference ensures consistent access to the element being processed, even when connectors inside the iteration modify the payload.
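As an illustration, assuming a hypothetical For Each step whose alias is splitOrders (the alias and field name below are examples, not part of this release), a connector inside the loop could reference the element being processed like this:

```
{{iterators.splitOrders.current.orderId}}
```

Because the reference is bound to the loop alias rather than to the message payload, it resolves to the same element throughout the iteration.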
These improvements increase expression syntax consistency across connectors, reduce implementation errors, and make automation flows more predictable and reliable.
Documentation
We’ve created the following use case documentation to help you expand your knowledge of AI on the Digibee Integration Platform:
Build secure and controlled AI Agents with Guardrails: Learn how to ensure data privacy by masking sensitive information and enforcing strict response rules.
We’ve also updated the following documentation to provide more relevant guidance for your learning journey:
Build your first AI testing workflow with Datasets, Evaluations, and Versioning: Learn how to create an AI testing workflow that validates structured JSON outputs across multiple input variations.
Bug fixes
Undo/Redo fixes on Canvas: We improved the undo and redo behavior on the Canvas. These options are now disabled when the Execution Panel or the Test Cases configuration is open to avoid unexpected actions.
Error when creating a minor version: We fixed a bug that prevented users from creating new minor versions directly from the Pipeline History.
Long input/output in the Results tab of the Execution Panel: We fixed a bug that, after clicking “Open Execution” for a Test Case, prevented users from viewing the full input or output in the Results tab of the Execution Panel when the content was very long.