Stream Avro File Reader

Learn more about the Stream Avro File Reader connector and how to use it in the Digibee Integration Platform.

The Stream Avro File Reader connector allows you to read Avro files, triggering subpipelines to process each message individually. The connector is recommended for large files.

Avro is a popular data serialization framework used within the Hadoop Big Data ecosystem, known for its schema evolution support and compactness. For more information, see the official Apache Avro website.
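To illustrate the kind of file the connector consumes, the sketch below writes a small Avro file outside the Platform. It assumes the optional fastavro Python package and a hypothetical schema and file name (file.avro); it is not part of the connector itself.

# A minimal sketch, assuming the fastavro package, that produces a small
# compressed Avro file which a Stream Avro File Reader connector could read.
from fastavro import parse_schema, writer

# Hypothetical record schema used only for this example.
schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": "int"},
    ],
})

records = [
    {"name": "Alice", "age": 30},
    {"name": "Bob", "age": 25},
]

# Write the records to "file.avro" using deflate compression.
with open("file.avro", "wb") as out:
    writer(out, schema, records, codec="deflate")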

Parameters

Take a look at the configuration parameters of the connector. Parameters supported by Double Braces expressions are marked with (DB).

General Tab

| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| File Name (DB) | The name of the Avro file to be read. | {{ message.fileName }} | String |
| Parallel Execution | If activated, the subpipeline executions occur in parallel instead of sequentially (in a loop). | false | Boolean |
| Fail On Error | If activated, the pipeline execution is interrupted when an error occurs. Otherwise, the execution proceeds, but the result shows the "success" property as false. | false | Boolean |

Documentation Tab

| Parameter | Description | Default value | Data type |
| --- | --- | --- | --- |
| Documentation | Section for documenting any necessary information about the connector configuration and business rules. | N/A | String |

A compressed Avro file generates JSON content larger than the file itself when read. You must check if the pipeline has enough memory to handle the data, since it will be stored in the pipeline's memory.
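As a rough way to check this outside the Platform, the sketch below reads an Avro file locally and compares its size on disk with the size of its JSON representation. It assumes the optional fastavro Python package and a local file named file.avro; the result is only an estimate of the memory the pipeline will need.

# A minimal sketch, assuming fastavro and a local "file.avro", that compares
# the compressed Avro file size with the size of the expanded JSON content.
import json
import os

from fastavro import reader

avro_path = "file.avro"  # hypothetical local file

with open(avro_path, "rb") as fo:
    records = list(reader(fo))  # each Avro record becomes a Python dict

json_size = len(json.dumps(records, default=str).encode("utf-8"))
file_size = os.path.getsize(avro_path)

print(f"Avro file size on disk: {file_size} bytes")
print(f"JSON representation in memory: {json_size} bytes")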

Usage examples

Reading an Avro file

  • File Name: file.avro

  • Parallel Execution: deactivated

Output:

{
	"total": 1000,
	"success": 1000,
	"failed": 0
}

If the messages are processed correctly, their respective subpipelines return { "success": true } for each individual message.
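The sketch below, which is not Platform code, shows one way a downstream step could interpret the summary output above. The strict failure handling mirrors an activated Fail On Error option and is only an illustrative assumption.

# A minimal sketch of interpreting the connector's summary output.
import json

# Example summary taken from the output shown above.
summary = json.loads('{"total": 1000, "success": 1000, "failed": 0}')

if summary["failed"] > 0:
    # Mirrors "Fail On Error" activated: interrupt processing when any message fails.
    raise RuntimeError(f'{summary["failed"]} of {summary["total"]} messages failed')

print(f'All {summary["success"]} messages were processed successfully')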
