The client is a Logistics Service Provider (LSP) looking to move from an older legacy system to a new transportation management system (TMS). The LSP is a U.S.-based company offering global freight forwarding, logistics management and customs brokerage. They specialize in project freight, which demands a higher level of attention to detail than normal while moving time-sensitive freight.



In 2017, the LSP selected a new global operating platform for their core logistics and accounting functions.

Due to the unique requirements surrounding the movement of specialized, high value project cargo, the LSP chose to maintain the internally developed legacy shipment planning and execution platform for their special project shipments.

To maintain consistent and compliant freight and accounting processes while gaining operational efficiencies, the company’s management wanted to process all freight, customs and accounting transactions through the new global operating platform.  The legacy system would be retained and used exclusively for the complex coordination and planning activities unique to project freight.



Time Constraints

To bring the new accounting system up in time for the start of the fiscal year, the company wanted to build multiple real-time integrations in a 45-day period – less than seven weeks – including design, implementation and testing. Missing this important accounting deadline was not an option, so the team would need to prioritize for business value and ensure that the mission-critical functionality was in place on day one.


Real Time

All interactions between the legacy system and the new platform had to take place in real time.  Since operators would be working side by side in both systems, they couldn’t wait for scheduled processing to synchronize data at some future point in time.


Legacy System

The company’s legacy system had an output format that had been used for prior internal integrations but had not been formally documented.  They needed a partner who understood the freight industry and could collaborate with their in-house developers and users to reverse engineer the interface and ensure that data would be transported to the right place.


How the Integration Partner Helped

Strong Project Management & Analysis

With a short timeframe for implementation, the team doubled down on best practices around project management and documentation. While there is always a temptation to skip key project management steps during a compressed implementation, strong fundamentals are the key to velocity and on-time delivery.

The team spent the first two weeks of the project making sure they had a deep understanding of the business objectives as well as the technical components of both the source and target systems. During this time, not a single line of code was written. This comprehensive analysis surfaced many potential roadblocks and data synchronization issues before they could be built into the code. The two teams were able to work through these issues proactively and resolve them before the development team started building the interfaces.


FTP to API Bridge

The legacy system is not API enabled. It was, however, able to send files to an FTP site when the user triggered the synchronization job within each shipment. Under a traditional integration model, the integration would poll the FTP site every few minutes and push the data to the new application. That approach would have created unacceptable delays in the end user's workflow. Instead, the team enabled a real-time FTP to API bridge. As soon as the legacy system writes a file to the secure FTP site, a real-time trigger fires an API call to the Cloud Execution Platform, which in turn transforms the data and posts it to the new application via its existing API.

The legacy system did not have to be rewritten to include API capability and the company’s operations got the benefits of a real-time API.
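The bridge pattern above can be sketched in a few lines. This is a minimal illustration, not the production implementation: the legacy export format was reverse engineered and is not public, so the `key=value` layout, field names, and the `on_file_arrival` hook are assumptions for demonstration only. In production the hook would be wired to the secure FTP server's upload event and `post_fn` would call the new platform's REST API.

```python
import json
from pathlib import Path


def parse_legacy_file(text: str) -> dict:
    """Parse a flat key=value export into a record.

    Illustrative only -- the real legacy format was reverse engineered
    and is not documented here.
    """
    record = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            record[key.strip()] = value.strip()
    return record


def on_file_arrival(path: Path, post_fn) -> dict:
    """Fired the moment a file lands on the FTP site (no polling).

    Transforms the legacy record and pushes it to the target system
    via post_fn, e.g. an HTTP POST to the new platform's API.
    """
    record = parse_legacy_file(path.read_text())
    post_fn(json.dumps(record))  # real-time push, no scheduled batch
    return record
```

The design choice worth noting is event-driven delivery: the trigger fires on file arrival rather than on a timer, which is what removes the synchronization delay from the operator's workflow.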


Data Duplication Detection

Given the critical nature of the underlying compliance and financial transactions, the company elected to enforce a “write once” rule where each unique transaction could only be posted from the legacy system one time. If a transaction was sent more than once, it was critical that the second or subsequent postings be rejected. This business rule posed two problems.

First, the target system allowed updates via its API and the legacy system did not have the ability to prevent the transactions from going out more than once. Since the Cloud Execution Platform actively inspects data moving across the network, the team was able to implement checks between the two systems that stopped duplicate transactions in their tracks.
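The "write once" rule is essentially an idempotency check keyed on a unique transaction identifier. The sketch below shows the idea with an in-memory ledger; the class name and interface are hypothetical, and a real deployment would persist the ledger durably and inspect the data stream in flight, as the Cloud Execution Platform did.

```python
class WriteOnceGuard:
    """Enforce a write-once rule: each transaction ID may post exactly once.

    Minimal in-memory sketch; a production check would use durable,
    shared storage so duplicates are caught across restarts and nodes.
    """

    def __init__(self):
        self._posted = set()

    def allow(self, txn_id: str) -> bool:
        """Record the ID on first sight and return True; reject repeats."""
        if txn_id in self._posted:
            return False  # duplicate posting -- stop it in its tracks
        self._posted.add(txn_id)
        return True
```

Rejected transactions never reach the target system's API, which is what protects the compliance and financial records even though the API itself would accept an update.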

Second, end users had no mechanism within the legacy system to know if they were sending a duplicate transaction. When a duplicate was found, the system could have notified the company's help desk, but that would have led to thousands of support requests to follow up on and many confused users. Instead, when the Cloud Execution Platform detects a duplicate (or any other data processing error), it reviews the data feed, identifies the user who sent the data, and emails them directly with clear instructions on what went wrong, what options they have to resolve the problem, and who to contact if they have questions. This puts actionable information on the user's desktop while they are processing the shipment, in real time.
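The notification step can be sketched as a small function that turns a rejected feed into a message addressed to the submitter rather than to the help desk. All field names here (`submitted_by`, `shipment_id`) and the message wording are illustrative assumptions; the real platform identifies the user from the data feed itself.

```python
def build_user_notice(record: dict, error: str) -> dict:
    """Compose a direct email to the user who sent a bad feed.

    Routing the error to the submitter (not the help desk) keeps
    thousands of tickets out of the support queue. Field names are
    illustrative -- the real feed layout is not documented here.
    """
    return {
        "to": record.get("submitted_by", "unknown"),
        "subject": f"Action needed on shipment {record.get('shipment_id', '?')}",
        "body": (
            f"What went wrong: {error}\n"
            "Your options: correct the shipment in the legacy system and "
            "resend, or contact support if you believe this is in error.\n"
            "Questions: reach out to the integration support contact "
            "for your team."
        ),
    }
```

The message deliberately answers the three questions the case study calls out: what went wrong, what the user can do about it, and who to contact.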



By leveraging disciplined project management and the power of the Cloud Execution Platform, the integration team and the company partnered to deliver the project on time and with no disruption to their clients. Understanding the unique business and technical requirements allowed the team to deploy an innovative solution that exceeded client expectations, working closely with the company's staff to find answers and deliver them on a tight schedule. Fielding a team with strong industry knowledge made the difference in delivering a solution that works for the operations staff of a Logistics Service Provider.