Consider: it takes one to four weeks for a supplier to process a purchase order (PO) from a customer.
That is the same amount of time it typically takes to ship goods from China to the US. Our global data handling is so poor that data which should move at the speed of light instead looks like a container ship crossing the largest ocean on the planet. Poor data handling is restricting the flexibility of our supply chains. We were in a similar position with the handling of physical goods and overcame it.
We can apply what we learned then to what we are facing now.
Through the end of the 1960s, shipping goods was a complicated and slow process made up of incompatible transportation systems. Each supplier had to maintain a staff of drivers and a fleet of vehicles. Each supplier had their own rules for transportation. If a supplier’s customers were within driving distance, the supplier could keep things simple by loading up their own trucks and delivering direct.
The increasing globalization of supply chains, however, meant that a supplier’s customers were more and more often beyond driving distance. The supplier’s staff had to drive to a freight carrier, unload their packages, and pack them into the carrier’s containers. The carrier might then have to unpack their containers and repack them into yet another container when changing from rail to ship or vice-versa.
In some industries customers had the staff of drivers and fleet of vehicles instead of suppliers, but the problem was the same: the handling of physical goods required highly trained staffs of people duplicating effort as the goods moved from supplier to customer.
Two innovations solved the problem: standard intermodal containers and freight carriers that handled logistics from supplier dock to customer dock.
Standard intermodal containers, also known simply as shipping containers, reduced the expertise needed within the transportation chain, while dock-to-dock freight carriers like FedEx reduced the staff and vehicles needed by customers and suppliers. Lowering the bar for expertise opened the field to more suppliers and increased flexibility in the supply chain. Customers could now get components and materials from a supplier an ocean away as reliably as from a supplier in their hometown.
We are in a similar position with data handling today. From RFQs to POs to demand schedules to engineering and quality requirements, the data moved at all levels of the supply chain is hand-carried from one system to another. Doing so is a complicated and slow process made up of incompatible systems.
Each supplier has to maintain a staff of data entry specialists and a fleet of computers. Each supplier has their own rules for structuring the data and moving it around. If a supplier has only a couple of large customers, the supplier can keep things simple by building their data handling processes around those customers’ idiosyncrasies. The supplier’s specialists have to perform ongoing translations between the customers’ data and the supplier’s systems of record, which limits the number of customers the supplier can support.
In some industries and some segments of supply chains it is customers who have a few large suppliers and therefore those customers maintain the staff of data entry specialists and fleet of computers, but the problem is the same: the handling of data requires highly trained staffs of people duplicating effort as the data moves through the supply chain.
Globalized supply chains, however, mean that suppliers are selling to more customers and customers are buying from more suppliers. The data entry staff hand-carrying data to and from customers and suppliers can’t grow quickly enough. Our supply chains have lost flexibility as a result.
Systems like EDI are not the answer to restoring that flexibility. EDI is 1970s technology designed to work around patchy or non-existent connections between large companies. It is prohibitively expensive to implement, and where it persists it does so primarily through inertia.
The internet solved the problem of unreliable connections comprehensively. We now have a new problem: data silos that are unreachable and mutually unintelligible. A supplier may have their ERP full of accurate and useful information, but it doesn’t know how to receive requests from other ERPs or how to make its own requests of them. That ERP is an isolated village accessible only via the dirt paths trod by its data entry specialists.
If we wish to improve the flexibility of our supply chains, we must understand all of the types of data we move through them, map the roles that data plays, and commit to eliminating hand-carried data by building robust and inexpensive connections between silos. Initiatives like Industry 4.0 and the Digital Thread are a good start, but they are threatened by products that purport to achieve their goals while merely shifting the hand-carried work rather than eliminating it.
Like we did in the 1960s, we need to create and use solutions that increase interoperability and reduce the expertise needed to move data from one silo to another. Just as standard intermodal containers eliminated the complications of changing physical goods from one mode of transport to another, so too do we need to establish standard data containers. Just as dock-to-dock companies like FedEx eliminated the local expertise needed at suppliers and customers, so too do we need to encourage the emergence of silo-to-silo data transportation companies.
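To make the idea of a standard data container concrete, here is a minimal sketch in Python. The schema tag and field names are hypothetical, not any existing industry standard; the point is that both the customer’s and the supplier’s systems read and write the same self-describing container, rather than each maintaining translators for the other’s internal formats.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical identifier for the shared container format.
# Illustrative only -- not an existing standard.
SCHEMA = "example.org/po/v1"

@dataclass
class PurchaseOrder:
    po_number: str
    buyer: str
    supplier: str
    part_number: str
    quantity: int
    due_date: str  # ISO 8601 date string

    def pack(self) -> str:
        """Serialize into the shared container: JSON tagged with its schema."""
        return json.dumps({"schema": SCHEMA, "payload": asdict(self)})

    @classmethod
    def unpack(cls, container: str) -> "PurchaseOrder":
        """Deserialize a container, rejecting unknown schemas instead of guessing."""
        doc = json.loads(container)
        if doc.get("schema") != SCHEMA:
            raise ValueError(f"unsupported schema: {doc.get('schema')}")
        return cls(**doc["payload"])

# The customer's system packs a PO once...
container = PurchaseOrder("PO-1001", "Acme", "Widget Co",
                          "W-42", 500, "2025-09-01").pack()

# ...and the supplier's system unpacks it with no per-customer translation.
po = PurchaseOrder.unpack(container)
```

Like the intermodal container, the value is not in the box itself but in the agreement: any system that speaks the schema can exchange POs with any other, with no data entry specialist in between.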
Let us use the lessons of our past to guide the solutions to our future.