By Adurthi Ashwin Swarup, DataRPM. Sponsored Post.
Making the most of your Industrial Assets in the Digital Age - Roadblocks and beyond
While leading the data science team at DataRPM, we have had the opportunity to interact with major players in discrete and process manufacturing worldwide. What we have found is that most companies (at least those that matter) are moving towards an era of resource consciousness, since the cost of resources keeps going up. In the past, companies could focus on increasing top-line growth and comfortably lead the market. That is no longer true. Every CEO today has to worry about every line under the costs header and keep it under control.
Among these headers, Cost of Quality is a big opportunity for companies. One can no longer afford product recalls or wastage from replacement parts. This is where "Predictive Maintenance" (PdM) comes into play. PdM kills two birds with one stone:
- a) It helps companies focus limited resources in a very specific manner. For example, when one has limited human resources to tackle incoming customer service requests, sensors hooked up to the end product can intelligently inform the SMEs which usage patterns are causing issues, reducing resolution time. One could also inform the users themselves, because it is in their interest to extend the remaining useful life of the asset. In fact, PdM can create a feedback loop between end users and product designers that ordinarily does not exist in discrete manufacturing companies.
- b) The second big thing PdM does is act as an insurance policy. If there is anything CEOs fear, it is waking up the next day to see their faces plastered across television screens over a product recall. Most of the time such events are not in the control of the executive group, and in today's environment the variables are simply too many to keep under human control. The manufacturing industry understands that black swan events need to be prepared for.
Accepting that such things occur and preparing for them is in the best interest of companies. In fact, we predict that many insurers will factor the probability of a recall into their premium calculations, and having a productionized PdM solution will help bring those premiums down.
An interesting question we often get asked is about reactive maintenance: how are these solutions different, and can the data collected for one be used for predictive maintenance?
The short answer is: it depends. The longer answer follows.
Reactive maintenance/monitoring is mostly about firefighting.
In such a scenario, one is focused on the resumption of normalcy, whether that means getting the assembly line running again or solving asset hardware problems. The resources involved are under constant pressure to move from one firefighting scenario to the next. Record keeping is at best symptomatic, and more resources are spent on training for corrective measures.
PdM, on the other hand, is by its very nature based on collating and analyzing the past to predict the future in time. The stress is on "in time", if you haven't guessed it already. The focus thus shifts from corrective measures alone to the agility with which one can analyze the past to predict the future.
That is the major challenge. How does one enable data stored for reactive measures to align itself with predicting the future? How does one change database notions from "storage efficiency" to "analytical efficiency"? How does one move from a "knowledge repository" for field engineers to a database mapping "sensor patterns" to potential root causes?
The key takeaway here is that if the data being used for reactive maintenance carries enough redundancy for the sake of analytical efficiency, then the transition to predictive maintenance is smooth. As leaders in our industry, we are fighting for this fundamental shift in the way people think about their data storage.
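To make "analytical efficiency" concrete, here is a minimal sketch, with hypothetical field names (`asset_id`, `temperature`), of denormalizing raw reactive-maintenance event logs into one redundant summary row per asset that a predictive model can consume directly:

```python
# Minimal sketch (field names are illustrative assumptions): turn
# event-style reactive logs into a denormalized feature row per asset,
# trading storage efficiency for analytical efficiency.
from collections import defaultdict
from statistics import mean

def build_feature_rows(events):
    """events: list of dicts with 'asset_id' and 'temperature' keys.
    Returns one redundant summary row per asset, ready for a model."""
    by_asset = defaultdict(list)
    for e in events:
        by_asset[e["asset_id"]].append(e["temperature"])
    return {
        asset: {
            "mean_temp": mean(vals),   # redundant aggregates, precomputed
            "max_temp": max(vals),     # so analysis does not re-scan logs
            "n_readings": len(vals),
        }
        for asset, vals in by_asset.items()
    }
```

The redundancy is deliberate: the aggregates could always be recomputed from the raw log, but storing them per asset is what makes the data immediately usable for prediction.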
This discussion of "in-time" predictive maintenance leads to the next question, on real-time systems. Real-time systems are sensor-level systems that capture and pre-process data before sending it out for analysis. Since reactive maintenance sits at one end of the spectrum, executives automatically assume that real-time predictions sit at the opposite end and are therefore good. This is not true.
It would be disingenuous to say that plugging sensors into products and tracking them will, by itself, realize the perceived benefits. Even though open-source IIoT frameworks based on Arduino or Raspberry Pi are on the rise, jumping into these systems without a cost-benefit analysis can lead to heartburn. The problem is not a technical limitation; it is proving that the amount of information a sensor yields is worth its cost.
Without getting into the mathematics, one can roughly estimate which part of the product is causing the largest problems, and then check whether that cost outweighs the cost of adding a sensor that collects the relevant data to warn of impending failures.
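The back-of-the-envelope estimate above can be sketched as follows; the component names, failure costs, and assumed catch rate are purely illustrative, not real figures:

```python
# Illustrative sketch only: names, costs, and the assumed catch rate are
# made up for the example.
def rank_components(failure_costs):
    """failure_costs: dict of component -> list of failure costs.
    Returns components sorted worst-first: a simple Pareto view."""
    return sorted(failure_costs, key=lambda c: sum(failure_costs[c]), reverse=True)

def sensor_is_worth_it(costs, sensor_cost, catch_rate=0.6):
    """Assume a sensor pre-empts `catch_rate` of a component's failures;
    compare the expected savings against the cost of the sensor."""
    return sum(costs) * catch_rate > sensor_cost

failures = {"bearing": [12000, 8000], "gasket": [900, 400]}
worst = rank_components(failures)[0]   # address the biggest offender first
```

The point of the sketch is the ordering of decisions: find the worst offender first, then justify the sensor against that component's failure cost rather than instrumenting everything.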
I like to use the analogy of a patient in a hospital. The more critical the patient's condition, the higher the frequency of observation needed, and consequently the more advanced the technology required. That does not mean every patient is hooked up in the ICU.
The takeaway? Get a good predictive maintenance solution in place and see whether it serves the purpose. Only when predictive maintenance fails to capture the required scenarios should one move towards end-point systems.
TL;DR: First improve the data, then put in place a system that can use this data to predict what could go wrong, and only if that fails add more data collection mechanisms.
Here is a quick look at the questions we will address in the next part of this article.
So, what happens when you need to predict a failure that has not occurred before?
Cognition through meta-learning is the next big thing in predictive maintenance. The marriage between in-situ condition monitoring and highly iterative machine learning algorithms is made possible by supplying the algorithms with information from previous runs to speed them up.
Imagine walking into your kitchen at night to grab a drink. You don't want to wake your wife, so you tentatively navigate towards the refrigerator. If you do step on your baby's squeaky toy, you will remember not to step on it on your way back. That is learning through experience. Now imagine your son hears you step on the toy and becomes careful during his own rendezvous with the refrigerator. That is learning through others' experience, or cognition. The same applies to machine learning algorithms, where meta-learning lets them exchange information about convergence based on the data provided.
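A toy sketch of that idea (not DataRPM's actual algorithm, and deliberately simplified to one parameter): a slope learned on one asset's data warm-starts gradient descent on a similar asset, which then converges in fewer iterations.

```python
# Toy warm-starting illustration: "experience" from one run (a fitted
# slope) seeds the fit on a similar run, so it converges faster.
def fit(data, w=0.0, lr=0.01, max_steps=1000):
    """1-D least-squares slope via gradient descent.
    Returns (fitted slope, iterations used until the gradient is tiny)."""
    for step in range(max_steps):
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        if abs(grad) < 1e-6:
            return w, step
        w -= lr * grad
    return w, max_steps

old_run = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # y is roughly 2x
new_run = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # a similar asset

w_prev, _ = fit(old_run)                  # experience from a previous run
_, cold_steps = fit(new_run)              # learning from scratch
_, warm_steps = fit(new_run, w=w_prev)    # "cognition": reuse the experience
```

Because the two runs are similar, the warm start lands near the new optimum and `warm_steps` comes out well below `cold_steps`, which is the whole payoff of carrying information between runs.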
Bio: Adurthi Ashwin Swarup is a Principal Data Scientist at DataRPM.