February 24
Connectors & Triggers
Evaluations for Agent Testing
We’ve introduced Evaluations in the Agent Component to enable automated validation of model outputs.
Evaluations allow you to define structured validation rules using JSONPath expressions and configurable comparison logic (such as Not Empty, Contains, or Equals). These rules are executed against each Experiment within a Dataset and return clear pass/fail results.
With Evaluations, you can now:
Automatically verify field presence and data types
Enforce deterministic value rules
Detect structural regressions before deployment
Validate outputs across multiple input variations
This enhancement adds an objective validation layer to AI workflows, improving reliability and reducing manual inspection.
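The rule-based pass/fail idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not the platform's implementation: it uses a simplified dotted-path lookup as a stand-in for full JSONPath expressions, and the rule names and operators (`not_empty`, `contains`, `equals`) are assumptions mirroring the comparison logic described.

```python
# Minimal sketch of Evaluation-style rules: each rule pairs a path
# (a simplified dotted-path stand-in for a JSONPath expression) with a
# comparison, and returns a pass/fail result per rule.

def lookup(doc, path):
    """Resolve a dotted path like 'order.id' against a nested dict."""
    node = doc
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

# Comparison logic analogous to Not Empty, Contains, and Equals.
COMPARISONS = {
    "not_empty": lambda actual, _: actual not in (None, "", [], {}),
    "contains":  lambda actual, expected: expected in (actual or ""),
    "equals":    lambda actual, expected: actual == expected,
}

def evaluate(output, rules):
    """Run every rule against one model output; return pass/fail per rule."""
    return {
        rule["name"]: COMPARISONS[rule["op"]](
            lookup(output, rule["path"]), rule.get("value")
        )
        for rule in rules
    }

# Example: validate one Experiment's structured output.
output = {"order": {"id": "A-42", "status": "shipped"}}
rules = [
    {"name": "id present", "path": "order.id", "op": "not_empty"},
    {"name": "status shipped", "path": "order.status",
     "op": "equals", "value": "shipped"},
]
print(evaluate(output, rules))
# {'id present': True, 'status shipped': True}
```

Running the same rule set against every Experiment in a Dataset yields the deterministic, repeatable results that replace manual inspection.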

Update to the MCP Server Trigger response format
In the previous release, the MCP Server Trigger response format was updated to align with the official MCP protocol specification for text outputs, encapsulating responses inside the content field.
With this release, we have extended the implementation to support structured outputs, which are now returned in the structuredContent field, as defined by the MCP protocol.
This improvement ensures full compliance with the tools/call response specification and guarantees consistent handling of both textual and structured data across MCP-compatible integrations.
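For illustration, the shape of such a response can be sketched as a Python dict. This is a simplified example of an MCP tools/call result (the field values are invented), showing how a text rendering and its structured counterpart travel together:

```python
# Sketch of an MCP tools/call result: text output is carried in the
# "content" array, while structured output is returned in the
# "structuredContent" field, per the MCP protocol specification.

result = {
    "content": [
        # Text representation for clients that only render text blocks.
        {"type": "text", "text": '{"status": "ok", "items": 2}'}
    ],
    # Structured representation, usable without re-parsing the text.
    "structuredContent": {"status": "ok", "items": 2},
}

print(result["structuredContent"]["items"])
# 2
```

MCP-compatible clients can consume either representation; the structured form spares them from parsing JSON out of the text block.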
Platform Improvements
Undo and Redo on Canvas
Boost your workflow efficiency and edit with confidence. You can now quickly reverse or reapply actions within the Canvas — such as adding/removing connectors or updating configurations — using the new Undo and Redo buttons. Located in the lower-right corner, these controls are also accessible via keyboard shortcuts for Windows, Linux, and macOS.
Undo: Ctrl + Z (Windows/Linux) or Cmd + Z (macOS).
Redo: Ctrl + Shift + Z (Windows/Linux) or Cmd + Shift + Z (macOS).

Data Streaming now available for Splunk
Data Streaming on the Digibee Integration Platform now supports Splunk. You can automatically stream logs and execution data to one of the world’s most powerful data analysis platforms in real time.

Documentation
We’ve created the following use case documentation to expand your knowledge of AI on the Digibee Integration Platform:
Build your first AI testing workflow with Datasets and Evaluations: Build a working AI testing workflow that validates structured JSON outputs across multiple input variations.
Bug fixes
“View more logs” button doesn’t work: We fixed a bug that prevented users from viewing the full list of logs in the detailed Executions tab of the Executions page.