New Feature: Pass Data Between Custom Processors in the Same Flow
If your team uses custom processors to handle data transformations, validations, or file enrichment inside Chain.io, this update is for you.
We've added a new feature called executionContext — a simple way to share data between custom processors within the same flow run. It's a small change that solves a real frustration, and it makes your integration logic cleaner and easier to maintain.
The Problem It Solves
Custom processors are one of the most flexible tools in Chain.io. They let you write logic specific to your business — transforming data, validating records, enriching files before they reach their destination.
But until now, processors in the same flow couldn't easily share information with each other. If a pre-processor extracted a value you needed later in a post-processor, there was no clean way to pass it along. The common workaround was stuffing that data into an unused field and carrying it through to the output — messy, fragile, and easy to break when file structures change.
Introducing executionContext
executionContext is a shared data store, available to all custom processors within a single flow run. Whatever values you write in an earlier processor are available to read in any later processor in the same invocation. When the flow run ends, the context is cleared automatically.
It works in a straightforward way. In a Flow pre-processor, you write values into executionContext — things like order references, counts, totals, or timestamps. In a Flow post-processor, you read those values back out and use them. For Visual Builder file processors, the context is both readable and writable, so any value stored by one file processor is available to the next one in the same run.
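As a rough sketch of that write-then-read pattern, the snippet below simulates a flow run where a pre-processor stashes values from the source payload and a post-processor reads them back out. The function signatures, the plain-object shape of executionContext, and the orderReference field are all illustrative assumptions, not the exact Chain.io API; see the examples repository for the real entry points.

```javascript
// Hypothetical sketch: the same executionContext object is shared by
// every processor in one flow run. Signatures and field names below are
// assumptions for illustration only.

// Flow pre-processor: extract values from the source and stash them.
function preProcessor(payload, executionContext) {
  const order = JSON.parse(payload);
  executionContext.orderReference = order.reference; // hypothetical field
  executionContext.lineCount = order.lines.length;
  return payload; // pass the payload through unchanged
}

// Flow post-processor: read the stashed values back out and use them.
function postProcessor(payload, executionContext) {
  const enriched = JSON.parse(payload);
  enriched.meta = {
    orderReference: executionContext.orderReference,
    lineCount: executionContext.lineCount,
  };
  return JSON.stringify(enriched);
}

// Simulated flow run: one context object lives for the whole invocation.
const context = {};
const source = JSON.stringify({ reference: 'PO-1001', lines: [{}, {}, {}] });
const output = postProcessor(preProcessor(source, context), context);
console.log(JSON.parse(output).meta);
// { orderReference: 'PO-1001', lineCount: 3 }
```

The point of the pattern is that the payload itself never has to carry the metadata; the context does.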
The current size limit is 10 MB per flow invocation — more than enough for the metadata most processors need to share.
Why this matters
The real value here is cleaner, more maintainable processor logic.
Before executionContext, keeping data in sync across processors meant bending the data model to suit your logic — using fields for things they weren't designed for, or restructuring files mid-flow just to carry a value through. That works until it doesn't: a schema change, a new trading partner, or a different file format, and suddenly your workaround breaks.
With executionContext, the data your processors need stays in a purpose-built place, scoped to the flow run, and cleared automatically when it's done. Your file structure stays clean. Your logic stays readable.
It's particularly useful when you need to:
- Enrich output files with metadata calculated earlier in the flow (totals, counts, timestamps)
- Rename destination files based on values from the source (like an orderReference captured in an earlier processor)
- Carry audit data through a flow without touching the payload
- Coordinate logic across multiple file processors that each handle part of a larger dataset
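The last case, coordinating across multiple file processors, can be sketched like this: each processor handles one file and accumulates running totals in the shared context. Again, the signature and the plain-object context are illustrative assumptions, not the exact Chain.io API.

```javascript
// Hypothetical sketch: several file processors in one flow run each handle
// part of a larger dataset and accumulate totals in executionContext.
function fileProcessor(fileContent, executionContext) {
  const rows = fileContent.trim().split('\n');
  // Initialize counters on first use, then accumulate across processors.
  executionContext.totalRows = (executionContext.totalRows || 0) + rows.length;
  executionContext.filesSeen = (executionContext.filesSeen || 0) + 1;
  return fileContent; // pass each file through unchanged
}

// Simulated run: three files, one shared context.
const ctx = {};
['a\nb', 'c', 'd\ne\nf'].forEach((file) => fileProcessor(file, ctx));
console.log(ctx); // { totalRows: 6, filesSeen: 3 }
```

A later post-processor could then read `ctx.totalRows` to stamp a record count into a summary or output filename.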
Getting started
The AI Processor Helper is already aware of executionContext — just describe what you're trying to do and it will suggest how to use it.
You can also browse working examples directly in the custom processor examples repository.
If you have questions, reach out to your Customer Success contact or open a support ticket. We're here to help.
Chain.io connects systems used by supply chain teams, forwarders, agents, and carriers to simplify complex supply chains. With our integrations to leading supply chain technology providers, we make it easy to automate your logistics and gain valuable freight insights, all in one enterprise-grade platform. Learn more at www.chain.io.
