On Wednesday, April 10, LNS Research hosted the webcast, “Digital Twin: Unlocking IX in Process Plant and Industry.” The presentation revealed the state of the market, clarified what Digital Twin really means beyond the marketing hype, and showed how companies can use it to drive Operational Excellence.
1. Which twins will be the most important and why?
From a process plant perspective, asset (device) and process twins will be the most important, but we will also see planning and scheduling integrated with the process twin. Remember, what matters most to owner/operators is that plants make money through availability, yield, throughput, and efficiency.
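Those four levers combine multiplicatively: a shortfall on any one of them reduces what the plant actually delivers. The sketch below illustrates that arithmetic; the function name, numbers, and the treatment of yield and efficiency as simple fractional losses are assumptions for illustration, not a plant model.

```python
# Hypothetical illustration only: availability, yield, and efficiency are
# modeled as fractional losses applied to design throughput.

def effective_output(design_throughput_tpd, availability, process_yield, efficiency):
    """Tonnes per day actually delivered, given fractional losses on each lever."""
    return design_throughput_tpd * availability * process_yield * efficiency

# A plant designed for 1000 t/d, 95% available, 90% yield, 92% efficiency
out = effective_output(1000.0, 0.95, 0.90, 0.92)
print(round(out, 1))  # 786.6
```

The point of the multiplication is that improving any single lever by a few percent flows straight to the bottom line, which is why asset and process twins that protect these levers are the ones owner/operators will pay for.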
2. Can original equipment manufacturer (OEM) companies build twins and license them to industrial companies? Is this a real business model?
Yes, they certainly can. Examples include GE Baker Hughes for its turbines and compressors on the GE Predix platform, and Flowserve for its fluid motion products. I am surprised not to see more OEMs doing this, as they know their products best from both a design and an operational viewpoint. Frankly, pump and valve manufacturers should be all over this rather than ceding it to others.
3. Do you think that the technology (platforms) to implement digital twins is already mature enough (i.e. keeping pace with what can be done, and what customers would like to have, etc.)?
I cannot say that customers yet know exactly what they would like to have, and vendors have not quite settled on the best architectures either, but customers are beginning to realize that traditional historians are not the be-all and end-all of plant data sources. Historians are only one component in the new operational technology (OT) ecosystem, which now includes wireless, pervasive sensors, edge devices, and the cloud. Furthermore, while historians can store most types of data, including vibration waveforms, they cannot analyze all of them without the assistance of external systems, and even then there may be limitations. Having said that, I think we are closing in on a new OT ecosystem architecture. The issue has always been how to relate, if not overlay, context across all these data sources. Once we can do this and access to quality data is easy, building and managing twins will be much easier.
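The "overlay context" problem can be sketched very simply: points arriving from different sources (historian, edge device) only become useful for a twin once they are joined to an asset registry that says what the tag physically is. Everything below is hypothetical, including the tag naming convention (asset ID as the tag prefix) and all field names.

```python
# A minimal sketch of overlaying asset context on multi-source plant data.
# Tags, asset IDs, and metadata fields are invented for illustration.

historian = [
    {"tag": "P-101.VIB", "ts": "2019-04-10T08:00:00Z", "value": 4.2},
    {"tag": "P-101.FLOW", "ts": "2019-04-10T08:00:00Z", "value": 310.0},
]
edge_device = [
    {"tag": "P-101.WAVEFORM", "ts": "2019-04-10T08:00:01Z", "value": [0.1, 0.4, -0.2]},
]
asset_registry = {
    "P-101": {"type": "centrifugal pump", "unit": "crude unit", "oem": "ExampleCo"},
}

def with_context(points, registry):
    """Attach asset metadata to each point by matching the tag's asset prefix."""
    enriched = []
    for p in points:
        asset_id = p["tag"].split(".")[0]
        enriched.append({**p, "asset": registry.get(asset_id, {})})
    return enriched

for rec in with_context(historian + edge_device, asset_registry):
    print(rec["tag"], "->", rec["asset"].get("type"))
```

In practice the registry is an asset model or engineering data source rather than a dictionary, but the join is the same idea: until context is overlaid, a historian value is just a number against a tag name.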
4. The term Digital Twin of the organization emerged about 18 months ago. How does it relate to Digital Twin for process companies?
The organization Digital Twin is analogous to other twins, but companies should question the value of expending time and effort on it instead of other twins. A process twin or an asset twin will have a direct impact on the bottom line and is more concrete to measure. The organization Digital Twin may in fact be a glitzy concept to keep CIOs busy.
5. Could we suppose the Digital Twin is an evolution of discrete event simulation, where events are directly related to the physical events the Digital Twin sends to the simulator? Can real-time discrete event simulation anticipate the possible behavior of physical assets in processing?
Absolutely! A discrete event simulation or state machine twin is one of the twins to use. The question is how to use it with the other twins. When I was at KBC, we built two models, one with Fidelis and one with Petro-SIM. We iterated back and forth until we came up with a likely capacity-based process model that accurately reflected the various states, so that we could account for planned and unplanned downtime and turnarounds.
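At its simplest, the state machine twin described above alternates an asset between running and down states and tallies the resulting availability. The sketch below shows that alternating-renewal idea; the exponential failure and repair times are invented for illustration and are not from any real plant or the KBC models mentioned.

```python
# A hedged sketch of a discrete-event state machine twin: the asset runs
# until a failure event, is repaired, and resumes; availability is the
# fraction of the horizon spent running. Rates are assumed, not real data.
import random

def simulate_availability(horizon_h=8760.0, mtbf_h=700.0, mttr_h=24.0, seed=42):
    """Alternating-renewal simulation over one year (8760 h) by default."""
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < horizon_h:
        run = rng.expovariate(1.0 / mtbf_h)   # time until the next failure event
        uptime += min(run, horizon_h - t)
        t += run
        if t >= horizon_h:
            break
        t += rng.expovariate(1.0 / mttr_h)    # repair duration (asset down)
    return uptime / horizon_h

print(round(simulate_availability(), 3))
```

Extending the state set (planned turnaround, reduced-capacity operation) is what lets such a twin anticipate capacity under planned and unplanned downtime rather than just report it.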
6. Could you comment on the competition between vendor specific proprietary vs. standardized neutral building blocks (analogy to "locked in" vs "open source")?
If you are referring to the analytics building blocks, then the open source, well-known published libraries of, say, machine learning algorithms will suffice for most twins. Why? Because we have good data on, and a good understanding of, how and why devices and systems fail, so these problems lend themselves to supervised and semi-supervised learning approaches, which are faster to build and deploy. There will be cases where one needs a proprietary approach to solve a specific problem; I see this more in upstream than in downstream, midstream, and power. The other problem is how an end user compares proprietary offerings without being a PhD data scientist. End users, particularly engineers, have enough trouble accepting the black box of neural networks without trying to compare two unknowns.
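The supervised approach described above can be illustrated without any proprietary tooling: labeled examples of healthy versus failing behavior train a simple, fully inspectable model. The nearest-centroid classifier and the vibration/temperature readings below are assumptions chosen for illustration, precisely because an engineer can see exactly why it decides what it decides, unlike a black-box network.

```python
# A minimal sketch of supervised learning on labeled failure data using a
# nearest-centroid classifier. All features and readings are invented.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(healthy, failing):
    return {"healthy": centroid(healthy), "failing": centroid(failing)}

def predict(model, x):
    """Label of the nearest class centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Features: [vibration mm/s, bearing temperature degC]
model = train(
    healthy=[[2.0, 60.0], [2.4, 62.0], [1.9, 58.0]],
    failing=[[7.5, 85.0], [8.1, 90.0], [6.9, 88.0]],
)
print(predict(model, [7.0, 83.0]))  # failing
```

Open-source libraries package far better versions of this, but the structure is the same: good labeled failure data does most of the work, which is why these problems rarely need a proprietary algorithm.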
Our recent research on Industrial Transformation (IX) Digital Readiness indicates that many companies start their IX program, for varied reasons, seemingly without a real commitment to it and without the organizational consensus needed to drive transformational change. Without commitment, they are starting a program doomed to stagnation or outright failure. Digital twins are part of this transformation. It pays to measure, or in this case prepare, not twice but three times before cutting.