# Stream Parquet File Reader

The **Stream Parquet File Reader** connector allows you to read Parquet files, triggering subpipelines to process each record individually. This connector is recommended for large files.

Parquet is a columnar file format designed for efficient data storage and retrieval. For more information, [see the official website](https://parquet.apache.org/).

## **Parameters**

Configure the connector using the parameters below. Fields that support [Double Braces expressions](https://docs.digibee.com/documentation/connectors-and-triggers/double-braces) are marked in the **Supports DB** column.

{% tabs fullWidth="true" %}
{% tab title="General" %}

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Alias</strong></td><td>Name (alias) for this connector’s output, allowing you to reference it later in the flow using Double Braces expressions.</td><td>String</td><td>✅</td><td><code>stream-parquet-reader-1</code></td></tr><tr><td><strong>File Name</strong></td><td>Name of the Parquet file to be read.</td><td>String</td><td>✅</td><td><code>{{ message.fileName }}</code></td></tr><tr><td><strong>Parallel Execution</strong></td><td>If enabled, the subpipeline executions occur in parallel with the loop execution.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Convert Date Fields</strong></td><td>If enabled, <code>DATE/TIMESTAMP</code> fields from the file are converted to string format (e.g. <code>yyyy-MM-dd</code> for <code>DATE</code>, ISO-8601 for <code>TIMESTAMP</code>). When disabled, dates remain as numeric values (days/millis since epoch).</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Date Field Paths (optional)</strong></td><td>Manually indicates date fields when the schema does not declare a logical type <code>DATE</code>.</td><td>String</td><td>❌</td><td>N/A</td></tr><tr><td><strong>Decode Base64 Fields</strong></td><td>If enabled, the connector recursively scans the output JSON nodes. Any string identified as a valid Base64 sequence is automatically decoded to UTF-8 and replaced in-place.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr><tr><td><strong>Fail On Error</strong></td><td>If enabled, the pipeline execution is interrupted when an error occurs. Otherwise, the pipeline execution proceeds, but the result shows a <code>false</code> value for the “success” property.</td><td>Boolean</td><td>❌</td><td><code>false</code></td></tr></tbody></table>
{% endtab %}

{% tab title="Documentation" %}

<table data-full-width="true"><thead><tr><th>Parameter</th><th>Description</th><th>Type</th><th>Supports DB</th><th>Default</th></tr></thead><tbody><tr><td><strong>Documentation</strong></td><td>Section for documenting any necessary information about the connector configuration and business rules.</td><td>String</td><td>❌</td><td>N/A</td></tr></tbody></table>

{% endtab %}
{% endtabs %}
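The **Convert Date Fields** behavior follows from how Parquet stores dates: `DATE` values are days since the Unix epoch, and `TIMESTAMP` (millis) values are milliseconds since the epoch. A minimal Python sketch of the conversion (the function names are illustrative, not part of the connector):

```python
from datetime import date, datetime, timedelta, timezone

def convert_date(days_since_epoch: int) -> str:
    # Parquet DATE: days since 1970-01-01, rendered as yyyy-MM-dd.
    return (date(1970, 1, 1) + timedelta(days=days_since_epoch)).isoformat()

def convert_timestamp(millis_since_epoch: int) -> str:
    # Parquet TIMESTAMP (millis): milliseconds since the Unix epoch,
    # rendered as an ISO-8601 string in UTC.
    return datetime.fromtimestamp(
        millis_since_epoch / 1000, tz=timezone.utc
    ).isoformat()

print(convert_date(0))        # 1970-01-01
print(convert_timestamp(0))   # 1970-01-01T00:00:00+00:00
```

With the parameter disabled, these raw integers appear in the output JSON unchanged.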

{% hint style="info" %}
A compressed Parquet file generates JSON content larger than the file itself when it is read. Check whether the pipeline has enough memory to handle the data, as it will be stored in the pipeline's memory.
{% endhint %}
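The **Decode Base64 Fields** scan can be pictured as a recursive walk over the output JSON. A sketch under stated assumptions (this is an illustration of the described behavior, not the connector's actual implementation; note that a short plain string can be misidentified as Base64, since the check is purely syntactic):

```python
import base64
import binascii

def decode_base64_fields(node):
    # Recursively walk a JSON-like structure (dicts, lists, scalars).
    if isinstance(node, dict):
        return {k: decode_base64_fields(v) for k, v in node.items()}
    if isinstance(node, list):
        return [decode_base64_fields(v) for v in node]
    if isinstance(node, str):
        try:
            # Replace the string only if it is valid Base64 AND the
            # decoded bytes form valid UTF-8 text.
            return base64.b64decode(node, validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError, ValueError):
            return node  # not Base64: keep the original string
    return node  # numbers, booleans, nulls pass through untouched

record = {"greeting": "SGVsbG8=", "items": ["V29ybGQ="], "count": 42}
print(decode_base64_fields(record))
```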

## **Usage examples**

### **Reading a Parquet file**

* **File Name:** file.parquet
* **Parallel Execution:** deactivated

**Output:**

```json
{
	"total": 1000,
	"success": 1000,
	"failed": 0
}
```

If the records have been processed correctly, each record's subpipeline returns `{ "success": true }`.
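The summary object above can be understood as a simple aggregation over the streamed records. A minimal sketch, where `process_record` is a hypothetical stand-in for the subpipeline triggered per record:

```python
def process_record(record):
    # Hypothetical stand-in for the subpipeline call; a real subpipeline
    # would return {"success": False} for records that fail.
    return {"success": True}

def read_stream(records):
    # Aggregate per-record results into the connector's summary output.
    summary = {"total": 0, "success": 0, "failed": 0}
    for record in records:
        summary["total"] += 1
        if process_record(record).get("success"):
            summary["success"] += 1
        else:
            summary["failed"] += 1
    return summary

print(read_stream([{}] * 1000))
# {"total": 1000, "success": 1000, "failed": 0}
```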
