Sundeep Sanghavi

Industrial machines generate billions of data points every year. Somewhere, hidden in all that noise are the critical signals that pinpoint the possibility of partial or complete machine failure.

Identifying the series of data points that indicates your multi-million-dollar machine is about to break down can help you prevent downtime, additional costs, and long-term damage. Without an automated solution, however, it’s like finding a needle in an unimaginably large haystack.

With effective insight into the health of their machines, manufacturers can solve issues before they become serious, potentially saving millions of dollars a year in repair costs and extending machine lifetimes. The Industrial IoT market is worth $11 trillion, and predictive maintenance could help companies save $630 billion over the next 15 years. But traditional approaches to data and data science contribute little toward realizing these savings.

So what are the primary challenges the industry faces when it comes to predictive maintenance? And how will fully automated predictive maintenance improve services to clients and the bottom line for companies?

Finding the Signal in the Noise

It is well understood that maintenance performed at the right time reduces costs. According to an example analysis in a PMMI report, regular preventative maintenance carried out on a 10-year-old air compressor valued at $32,900 can extend the machine’s life by up to four years and represent savings of up to $6,359. These savings can be increased further with targeted predictive maintenance based on effective predictive analytics, simply because there is no more guesswork involved: an engineer can say with certainty which parts will need replacing, and when.

How does this work? Industrial machines don’t just stop working. Failure is almost always the result of a chain of events. As one problem leads to another, a digital signal is created; think of it as the symptoms of an illness. For machines connected to the IIoT, these symptoms are scattered across millions of data points, arrive from different sensors at different times, and are stored in separate silos. Finding the critical signal amid these millions of scattered data points is not humanly possible.
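To make the silo problem concrete, here is a minimal sketch, using pandas and entirely invented sensor names and readings, of aligning two separately stored sensor streams that were sampled at different times:

```python
import pandas as pd

# Hypothetical readings from two sensors, sampled at different times
# and stored separately (simulating siloed data).
vibration = pd.DataFrame({
    "time": pd.to_datetime(["2016-01-01 00:00:00", "2016-01-01 00:00:05"]),
    "vibration_mm_s": [2.1, 7.8],
})
temperature = pd.DataFrame({
    "time": pd.to_datetime(["2016-01-01 00:00:02", "2016-01-01 00:00:06"]),
    "temp_c": [61.0, 84.5],
})

# Align the streams on a shared timeline: for each vibration reading,
# take the most recent temperature reading at or before it.
aligned = pd.merge_asof(vibration, temperature, on="time")
print(aligned)
```

In practice each silo might hold years of readings from dozens of sensors, but the alignment step, putting scattered symptoms on one timeline so they can be read together, is the same.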

While many factories now have teams of data scientists in place to analyze this data and diagnose issues, current manual methodologies are simply not getting consistent results.

As Jon Sobel points out in an article for Techonomy, “Manufacturers generate data across massive, distributed operations, but […] until we can make use of the data we already have, collecting ever more data just buries us deeper.”

Unlike generic data patterns, machine data patterns change constantly, so prediction models quickly become obsolete; and because failure can occur at different points in a process, monitoring a single sensor on a machine doesn’t give a complete picture. A Gartner study found that 72 percent of the manufacturing industry’s data goes unused because of the complexity of variables such as pressure, temperature, and time.

The reality is that manual monitoring and human-led analytics are not only inefficient but also ineffective, especially when you consider that only around 25 percent of these signals denote a true major failure event; the rest are false positives.

Take an industrial washing machine operation, for example. In this scenario, the data science team monitoring operations faces a number of challenges. With around 75 different sensors per machine, each providing millisecond-range feedback, there is a huge amount of data to sift through.

Regular operations present numerous variations in sensor output, and while this data falls within normal ranges, pinpointing significant anomalies is extremely complex for a human team; rules cannot be defined manually because real-world usage causes unpredictable wear and tear.

What’s more, for the same reasons, no predefined data exists to indicate what a failure state looks like in sensor readings, because failure occurs on a case-by-case basis. Despite manual prototyping and factory testing, we are still left with false positives. In effect, this makes it impossible to train a team to consistently identify the warning signs.

In other words, the model is completely broken.

Automating Predictive Maintenance Will Revolutionize Industry

Understandably, few CEOs or CFOs would willingly invest in predictive analytics, knowing that it will only be effective a quarter of the time.

So what can we do? We need to put machines first. Machines, able to analyze thousands of data points and permutations per second, are far better suited to building prediction models. When trained correctly, they can accurately and consistently predict future failures.

Let’s go back to that industrial washing machine operation. Unlike humans, machines can monitor all 75 sensors in parallel and, rather than sifting through data silos, compare and contrast the data in real time.
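As an illustration only, here is a minimal sketch of watching all 75 sensors at once, assuming the feed can be loaded into a table; the data, sensor names, and thresholds below are all invented, and a simple deviation-from-baseline check stands in for a production anomaly detector:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated feed from 75 sensors (columns) over 1,000 ticks (rows);
# the sensor count mirrors the washing machine example above.
data = pd.DataFrame(rng.normal(0.0, 1.0, size=(1000, 75)),
                    columns=[f"sensor_{i}" for i in range(75)])

# Inject a drift into one sensor's most recent readings.
data.iloc[-10:, 42] += 8.0

# Baseline statistics from historical readings, computed for all
# 75 columns in one pass -- no per-sensor manual rules.
baseline = data.iloc[:900]
z = (data.iloc[-1] - baseline.mean()) / baseline.std()

# Flag any sensor whose latest reading strays beyond 4 sigma.
alerts = z[z.abs() > 4.0]
print(alerts.index.tolist())
```

The point is not the specific statistic but the parallelism: one vectorized pass covers every sensor, where a human team would have to inspect each stream separately.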

By collating months, or even years, of data, machines can also build thousands of candidate data models in parallel in order to deliver the optimal version. In this way, machines can effectively segment data, compare current real-world output against historical records, and make far more accurate predictions as a result.
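The model-search idea can be sketched with a toy example. Assuming failure history can be reduced to labeled training data (every value below is synthetic), many candidate models, here simple thresholds standing in for real model configurations, are fitted in parallel and the best performer is kept:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history: one summary feature per machine-week plus a label
# for whether a failure followed (all values invented for illustration).
x = rng.normal(size=1000)
y = (x + rng.normal(0.0, 0.5, size=1000) > 0).astype(int)

# Split the history into a training window and a validation window.
x_train, y_train = x[:800], y[:800]
x_val, y_val = x[800:], y[800:]

# Candidate "models": 101 thresholds evaluated in parallel, a stand-in
# for the thousands of model configurations described above.
thresholds = np.linspace(-2.0, 2.0, 101)
train_preds = (x_train[None, :] > thresholds[:, None]).astype(int)
train_acc = (train_preds == y_train[None, :]).mean(axis=1)

# Keep the best candidate and check it on held-out data.
best = thresholds[train_acc.argmax()]
val_acc = ((x_val > best).astype(int) == y_val).mean()
print(f"best threshold: {best:.2f}, validation accuracy: {val_acc:.2f}")
```

A real system would search over far richer model families, but the shape is the same: train many candidates against history, then promote the one that predicts best.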

The upshot is that by using predictive maintenance and machine learning, we can dramatically improve efficiency and lower costs in factory and industrial environments. Machines build the meta-learning data model from a year’s worth of captured data, and are up to 300 percent more accurate and 30 percent faster than human teams.

Other Industry Considerations

Through maintenance analytics, instead of simply reacting to failures and false positives, we can proactively fix problems before they occur. But there are other, more wide-ranging implications for the manufacturing industry.

Part Harmonization

Through machine learning-driven predictive maintenance, for example, managers can leverage part harmonization. Predictive models show which parts are first in line to fail and what will need replacing in the next six months, for instance. This allows teams to better manage inventories, stockpile the right parts, and even bulk-order replacements before they are needed.
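As a minimal sketch of the inventory side, assuming a model has already produced a predicted failure date per part (all part names and dates below are invented):

```python
from datetime import date, timedelta

# Hypothetical per-part failure predictions from a maintenance model.
predicted_failures = {
    "drive_belt": date(2016, 8, 1),
    "bearing_assembly": date(2016, 10, 15),
    "heating_element": date(2017, 6, 30),
}

today = date(2016, 6, 1)
horizon = timedelta(days=180)  # the six-month window from the text

# Parts predicted to fail within the horizon, ordered by urgency,
# so replacements can be stocked before the failures occur.
to_order = sorted(
    (part for part, when in predicted_failures.items()
     if when - today <= horizon),
    key=lambda part: predicted_failures[part],
)
print(to_order)
```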

Cost-Benefit Analyses

Teams are also better able to perform cost-benefit analyses and understand the risks of not performing maintenance at any given time. Presenting this data to the C-suite, weighing future risk against a smaller outlay now, makes a far more compelling argument than suggesting a piston might eventually need replacing.
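That argument reduces to a short expected-value calculation; every figure below is hypothetical:

```python
# Hypothetical numbers: compare the expected cost of deferring
# maintenance against the known cost of acting now.
p_failure = 0.30            # model's predicted failure probability this quarter
failure_cost = 250_000.0    # downtime, emergency repair, penalty fees
maintenance_cost = 7_500.0  # planned service performed today

expected_cost_defer = p_failure * failure_cost
expected_saving = expected_cost_defer - maintenance_cost

print(f"expected cost if deferred: ${expected_cost_defer:,.0f}")
print(f"expected saving from acting now: ${expected_saving:,.0f}")
```

Framed this way, a $7,500 service order is weighed against a $75,000 expected loss, which is the kind of comparison a C-suite can act on.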

Warranty Claims

Moreover, companies can better assess their warranty offerings. Defining the optimal cost and duration for any given warranty is a great challenge for many manufacturers. Analytics can help define these boundaries by modeling usage patterns.

Risk Mitigation

In the same vein, manufacturers can avoid paying penalty fees by fixing issues as soon as they are notified of predicted failures. The current crisis at Volkswagen, for example, might well have been prevented with effective predictive maintenance, saving millions of dollars in costs.

The future of predictive maintenance will move away from human-driven teams and toward machine learning systems. As the industry grows, we’ll see improvements in service both on the factory floor and at the consumer level. The machine learning age is here; it’s time to embrace it once and for all.

Sundeep Sanghavi is the co-founder and CEO of DataRPM.