The growing trend of collecting and analyzing data has the potential to make a positive impact on manufacturing. Thanks to the growth of the Internet of Things, today's advanced machines are built with numerous sensors, all producing an ever-increasing amount of sensor and log data.
Actionable insights gleaned from this data can be leveraged to improve efficiencies and reduce costs; however, this vast amount of data can be overwhelmingly complex and costly to process, and that cost can outweigh the potential efficiencies and savings.
As more devices are connected across the factory floor, the amount of data produced will increase exponentially. This explosive growth in data also demands more computing, storage and networking power and infrastructure.
But the traditional approach to analytics isn't a natural fit for IoT. Pushing all of this data to a central data center to be processed and analyzed is costly and does not scale easily.
One solution is clear: to handle IoT analytics effectively and efficiently, some of the analytics must run "on the edge." Edge analytics (also known as fog computing) uses computation and bandwidth resources far more efficiently and scales far better. And for factories and plants already embracing IoT, it will optimize operational efficiency and scalability.
The following are representative examples:
- Data can enable new services for customers. For example, in the industrial sector, sensor data has traditionally been collected and used in a very limited way: measuring some aspect of an industrial process or machinery, adjusting a controller and then discarding the measured data. Not only can more sensors be utilized, but the data can be stored, mined and analyzed in new ways and then used for new services.
- IoT data coupled with advanced machine learning approaches can detect anomalies in an industrial process to drive response and action in a faster and more efficient manner.
- By inspecting and analyzing historical IoT data with advanced data analytics, we can detect patterns indicating changes that require attention. For example, detecting the gradual deterioration of machinery enables predictive maintenance: fixing the problem while the cost in money and lost production time is still low.
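To make the anomaly-detection idea concrete, here is a minimal sketch of the kind of lightweight check that can run on an edge device: a rolling z-score over recent sensor readings. The class name, window size and threshold are illustrative assumptions, not part of any specific product.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag readings that deviate sharply from a rolling baseline.

    Illustrative sketch: window size and z-score threshold are
    assumed values, to be tuned per sensor and process.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold          # z-score cutoff

    def check(self, value):
        """Return True if `value` is anomalous vs. recent history."""
        anomalous = False
        if len(self.window) >= 10:  # need a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous
```

Because the window holds only the last few dozen readings, the memory and CPU cost stays constant, which is what makes this feasible on a constrained device.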
But how would distributed IoT analytics work? The hierarchy begins with "simple" analytics on the smart device itself, moves to more complex analytics across multiple devices on the IoT gateways, and ends with the heavy lifting: the big data analytics running in the Cloud. This distribution of analytics offloads the network and the data centers, creating a model that scales.
Many business processes do not require “heavy duty” analytics, and therefore the data collected, processed and analyzed on or near the edge can drive automated decisions. For example, a local valve can be turned off when edge analytics detect a leak.
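The valve example above can be sketched as a simple edge-side rule. The names `flow_lpm` and `close_valve`, and the threshold value, are hypothetical stand-ins for a device's real I/O layer:

```python
# Assumed alert limit in litres per minute -- illustrative only.
LEAK_THRESHOLD_LPM = 120.0

def on_sensor_reading(flow_lpm, close_valve):
    """Decide locally, with no round trip to the cloud.

    `close_valve` is a callable wrapping the (hypothetical)
    actuator interface; only the outcome is reported upstream.
    """
    if flow_lpm > LEAK_THRESHOLD_LPM:
        close_valve()          # act immediately at the edge
        return "valve_closed"  # transmit the result, not the raw data
    return "ok"
```

The point of the sketch is the shape of the decision: the raw reading never leaves the device, only the outcome does.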
Harnessing the computational power of the sensor or device allows valuable analytics to run on the device itself. Additionally, these sensors and other smart connected devices are typically tied to a local gateway with potentially more computational power available.
This, in turn, enables more complex multi-device analytics close to the edge. Offloading data analysis from the network and the data centers creates a model that effectively scales.
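A gateway-level, multi-device check might look like the following sketch: combining the latest readings from several sensors on one machine before deciding whether to escalate. The sensor-map shape, units and limit are assumptions for illustration:

```python
def machine_health(readings, limit=5.0):
    """Gateway-level check across multiple sensors on one machine.

    `readings` maps sensor id -> latest vibration amplitude
    (mm/s, assumed units). A single hot sensor may be noise;
    a majority over the limit suggests a real mechanical
    problem worth escalating to the cloud.
    """
    over = [sid for sid, v in readings.items() if v > limit]
    if len(over) > len(readings) / 2:
        return ("alert", over)
    return ("normal", over)
```

Cross-checking devices against each other is something a single sensor cannot do for itself, which is why this tier of the hierarchy sits on the gateway.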
Some actions need to be taken in real time because they cannot tolerate any delay between the sensor-registered event and the reaction to that event. This is especially true of industrial control systems, where there is sometimes no time to transmit the data to a remote Cloud. A distributed model remedies this.
As we consider edge analytics, we begin to see that it involves some trade-offs.
Edge analytics is all about processing and analyzing subsets of the collected data and transmitting only the results. We are essentially discarding some of the raw data and potentially missing some insights.
The question is whether we can live with this “loss,” and if so, how should we choose which pieces we are willing to “discard” and which need to be kept and analyzed?
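One common answer to the "what do we keep?" question can be sketched in code: transmit summary statistics for normal readings, and keep raw values only when they fall outside an alert band. The function name and the band limits are illustrative assumptions:

```python
import statistics

def summarize_batch(samples, alert_low=0.0, alert_high=100.0):
    """Reduce a batch of raw readings to what is worth transmitting.

    Normally only summary statistics leave the edge; raw values
    are retained only when they fall outside the (assumed) alert
    band, so the cloud still sees the interesting outliers.
    """
    outliers = [s for s in samples if not (alert_low <= s <= alert_high)]
    return {
        "count": len(samples),
        "mean": statistics.fmean(samples),
        "min": min(samples),
        "max": max(samples),
        "outliers": outliers,  # raw data kept only for anomalies
    }
```

A scheme like this makes the "loss" explicit and tunable: widening the alert band discards more raw data, narrowing it preserves more, and the application decides where that line sits.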
The answer is not simple and depends on the application. Some organizations may never be willing to lose any data, but the vast majority will accept that not everything can be analyzed. This is where we will have to learn by experience as organizations enter this new field of IoT analytics and review the results.
It's also important to learn the lessons of past distributed systems. For example, when many devices are analyzing and acting on the edge, it may be important to maintain a single "up-to-date view" somewhere, which in turn may impose various constraints. The fact that many edge devices are also mobile complicates the situation even further.
If you believe that the IoT will expand and become as ubiquitous as predicted, then distributing the analytics and the intelligence is inevitable and desirable. It will help us deal with big data and relieve bottlenecks in the networks and in the data centers; however, it will require new tools for developing analytics-rich IoT applications.
Making sense of this flood of data will be the challenge that drives better and more advanced analytics in manufacturing; however, many now recognize that the gap between collected data and analyzed data is growing.
We need to be smart about how we tackle this challenge: Do we collect everything? Do we store everything? Do we analyze everything just because we can?
About the Author: Gadi Lenz is Chief Scientist at AGT International. He discusses topics such as IoT, big data, analytics and other insights over at his blog: The Analytics of Everything.