In every aspect of the manufacturing business today there seems to be a lot of hype about Big Data Analytics and Predictive Analytics. In the last month I have listened to over a dozen different vendor briefings or webinars related either to Big Data or to using data to glean more information about some business function. In some cases, it is about using Predictive Analytics to improve asset performance and increase reliability. In others, it is about driving optimal process performance, understanding the consumer/customer better, or optimizing supply chain performance.
And that is not even considering one of the most common forms of predictive technology: simulation in the Product Lifecycle Management (PLM) space. From IBM with its Watson technology, to specialized vendors providing enhanced analytics that sit above a data historian, to traditional Predictive Maintenance (PdM) solution providers, Predictive Analytics has become the market space, second only to the Industrial Internet of Things (IIoT), that every vendor seems to be pursuing. So what is driving all this interest, and why now?
Predictive Analytics: Science or Fortune Telling?
There is a lot of hype about Predictive Analytics today. Some of the claims about its value seem too good to be true, and most likely are. There are many definitions of Predictive Analytics, but the most commonly accepted versions all center on three key points:
- It’s about extracting information from data.
- It is about predicting potential trends or behavior patterns.
- It can be related to future, current or past events.
So, taken together, Predictive Analytics is about analyzing data to predict potential outcomes, not about declaring with absolute certainty what will, could, or did happen. For most businesses, Predictive Analytics is about understanding the realm of possibilities that may occur based on the data at hand. The quality of a prediction then depends on two things: the fidelity of the predictive algorithm and the quality of the data the algorithm operates on. Data quality can suffer in several ways. First, the data may simply be bad, that is, not accurate or precise; no matter how good the predictive algorithm, a prediction built on bad data will not be very good. Second, the data may be accurate but incomplete; here, too, the prediction may suffer, because the algorithm does not have enough data to make a good forecast. And of course, if the algorithm itself is poor, then no matter how good the data, the resulting prediction is most likely flawed.
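The interplay between data quality and prediction quality can be illustrated with a small sketch. The scenario below is hypothetical (it is not from any vendor mentioned here): a machine's vibration level rises linearly with operating hours, and a simple linear model, standing in for the "predictive algorithm," forecasts vibration at a future hour from three versions of the same data set: good and complete, noisy, and accurate but incomplete.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ground truth: vibration rises linearly with operating hours.
true_slope, true_intercept = 2.0, 5.0
hours = np.arange(0, 100, dtype=float)
vibration = true_slope * hours + true_intercept

def predict_at(x_train, y_train, x_new):
    """A simple 'predictive algorithm': fit a line, extrapolate to x_new."""
    slope, intercept = np.polyfit(x_train, y_train, 1)
    return slope * x_new + intercept

target = 150.0  # forecast vibration at hour 150, beyond the training window
truth = true_slope * target + true_intercept

# 1. Good, complete data: the forecast is essentially exact.
good = predict_at(hours, vibration, target)

# 2. Bad data: heavy measurement noise degrades the forecast,
#    no matter how sound the fitting procedure is.
noisy = predict_at(hours, vibration + rng.normal(0, 50, hours.size), target)

# 3. Accurate but incomplete data: only the first 10 hours are available,
#    so small errors in the narrow window are amplified when extrapolating.
few = hours[:10]
sparse = predict_at(few, vibration[:10] + rng.normal(0, 5, few.size), target)

print("errors:", abs(good - truth), abs(noisy - truth), abs(sparse - truth))
```

Running the sketch shows the forecast error growing from near zero with clean, complete data to substantial errors in the noisy and incomplete cases, even though the algorithm is identical throughout, which is exactly the point: a good algorithm cannot rescue bad or insufficient data.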
Predictive Analytics: Then vs. Now
It was this need for a good algorithm that drove the Predictive Analytics market in the past. Whether for MRO Inventory Optimization, Condition-Based and Reliability-Centered Maintenance (CBM/RCM), or advanced model-based process control and process optimization, the quality of the predictive engine and an understanding of process or domain dynamics were the keys to reliable and profitable results. That was the "then" of Predictive Analytics. The "now" of Predictive Analytics began with IBM's Watson taking on TV game show Jeopardy! champions and emerging victorious. The emergence of powerful, computing-platform-based Predictive Analytics engines that can tackle almost any problem, tolerate imperfect data, and still deliver reliable results is changing the Predictive Analytics landscape. Big Data, which is varied and not necessarily structured, drove the makers of the repositories of that data, such as enterprise software vendors and database suppliers, to provide readily configurable, powerful analytics engines. In turn, these powerful engines make it feasible for almost anyone to run Predictive Analytics on almost any kind of data and get acceptable results.
Predictive Analytics in the Future
With the growth of the IIoT, there will be exponentially increasing amounts of data from an ever-growing array of sensors, systems, and devices. The heavily engineered, domain-specific Predictive Analytics solutions of the past will not be able to keep pace with this expanding cloud of data. The power of Cloud computing will continue to grow, and self-learning Predictive Analytics solutions that adapt to new and varied information flows will challenge the pre-defined models of the past. However, the winners will not simply be the newer Cloud-based self-learning systems, no matter how inexpensive they are, unless they can deliver the quality of predictions that purpose-built solutions can. The ultimate winners will embody the best of both: domain-specific models that deliver high-quality results out of the box, and learning-based Predictive Analytics solutions that adapt to an ever-growing stream of complex data to deliver better and better predictions, all at a cost that makes the technology affordable for all businesses.