Gartner calls it the next evolution after simulation. Manufacturing calls it the next stage of prototyping. Maintenance calls it "try before you do". Financial Services calls it risk management. Everyone has a name for it, and everyone has a use for it.
This is the second column of the 12-part series called Navigator MasterClass, in which we find our way through the myths and realities of one bleeding-edge technology each month: where it truly stands at the time of writing, and its business applications, whether implemented, being attempted, or speculative. This month's topic is Digital Twins.
As is the wont of this columnist, the topic was first put through a Google search. As "expected", a billion results showed up. Idle curiosity then led to some clicks, and an interesting discovery: many large players have their own flavour of "Digital Twins" (which rather explains why no one can clearly explain it).
- IBM calls it a tool to monitor the entire product lifecycle
- Microsoft calls it a replication of the entire environment
- GE calls it the road to improved productivity, operations, and profits
One can go on, but the point is that everyone looks at it from their own perspective. In reality, a Digital Twin is a replica of a physical object, person, or process, used to monitor, diagnose, or prognosticate real-life situations. The idea was conceived in 1991 by David Gelernter in his book "Mirror Worlds", but it took a University of Michigan professor, Michael Grieves, to give it credibility in 2002. Then, in 2010, NASA gave it the official name of Digital Twin. In the meantime, NASA had already used a physical twin to rescue the crew of Apollo 13, and had realized that the same could have been done in the virtual world.
So, in simple language, a Digital Twin (DT) replicates the physical world. There are, of course, multiple interpretations. MIT describes a two-part model (a physical model, plus sensors to record data), while others describe a three-part model (the physical world, a virtual world, and the connections between them). There is not much of a difference, except in where one chooses to lay the emphasis.
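To make the three-part model concrete, here is a minimal sketch in Python. It is an illustration only; all the names (PhysicalPump, PumpTwin, sync) are hypothetical and not drawn from any vendor's product. A physical asset emits sensor readings, a virtual twin mirrors its state, and a connection keeps the two in sync:

```python
import random

class PhysicalPump:
    """The physical world: a pump with a temperature sensor."""
    def read_temperature(self):
        # In reality this would come from an IoT sensor; simulated here.
        return 60 + random.uniform(-5, 25)

class PumpTwin:
    """The virtual world: mirrors the pump's state and diagnoses it."""
    def __init__(self, max_safe_temp=80):
        self.max_safe_temp = max_safe_temp
        self.history = []

    def update(self, temperature):
        self.history.append(temperature)

    def diagnose(self):
        if self.history and self.history[-1] > self.max_safe_temp:
            return "ALERT: overheating, schedule maintenance"
        return "OK"

def sync(physical, twin, cycles=5):
    """The connection: moves data from the physical asset to its twin."""
    for _ in range(cycles):
        twin.update(physical.read_temperature())
        print(twin.diagnose())

sync(PhysicalPump(), PumpTwin())
```

Notice that the MIT two-part view and the three-part view are both visible here: collapse the sync function into the twin and you have two parts; keep it separate and you have three.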
The world of DT is expanding exponentially because of leaps (and cost drops) in IoT, Machine Learning/AI, Augmented Reality, Virtual Reality, and Edge Computing. Imagine all of these coming together, as they are beginning to. Imagine them being used to break the limits of innovation, as they already are. Imagine them starting to discover.
MIT believes that DT is moving from tactical to strategic: what started as a specialized tool for PLM (product lifecycle management) can now be used for any IT project or business function. It also says that the three most common emerging uses are:
- Sustainability
- Smart innovations (innovations on the edge)
- Health and safety
All of these relate to one word: INNOVATION. And DT fosters innovation because it enables (i) continuous evaluation, (ii) faster and cheaper prototyping, and (iii) working at the limits. Let us now look at two hot, emerging applications.
McKinsey talks extensively about the aftermarket business of a commercial engine maker. The huge success of this business lies in a granular view of opportunity, customized offers, and a customized experience, all delivered with real-time adjustments based on data fed back from a DT.
Gartner espouses the cause of "machine customers". Machine customers are everywhere, slowly doing more and more for us. They can be purely rule-based (for example, Amazon subscriptions), can "select" based on rules (for example, HP's printer-ink shipping model), or can be autonomous (emerging technology like autonomous vehicles). And we see them around us without realizing it: a refrigerator that orders food, a washing machine that orders detergent, a car that schedules its own service….
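The purely rule-based flavour is simple enough to sketch. Below is a hypothetical example (FakeOrderAPI and DetergentReorderRule are invented names, not any retailer's real API) of a washing machine's twin reordering detergent once the estimated level drops below a threshold:

```python
class FakeOrderAPI:
    """Stand-in for a retailer's ordering endpoint (hypothetical)."""
    def place_order(self, item, quantity_ml):
        print(f"Ordered {quantity_ml} ml of {item}")

class DetergentReorderRule:
    """A purely rule-based 'machine customer': reorder when supplies run low."""
    def __init__(self, order_api, threshold_ml=200, order_size_ml=1000):
        self.order_api = order_api
        self.threshold_ml = threshold_ml
        self.order_size_ml = order_size_ml
        self.order_pending = False

    def on_reading(self, level_ml):
        # The whole "customer" is one threshold check: no judgment, no feelings.
        if level_ml < self.threshold_ml and not self.order_pending:
            self.order_api.place_order("detergent", self.order_size_ml)
            self.order_pending = True

    def on_delivery(self):
        self.order_pending = False

machine = DetergentReorderRule(FakeOrderAPI())
for reading in (800, 450, 190, 150):   # simulated detergent levels, in ml
    machine.on_reading(reading)        # orders exactly once, at 190 ml
```

The order_pending flag is the only "memory" this customer needs; everything else is a rule applied to the latest reading.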
Then there is the example of the Apple Watch: it senses when you have fallen and NOT gotten up for a specific period, and it then calls for help automatically. There have been numerous false alarms, including from people too lazy to bother getting up, but they have all appreciated the fact that the feature was there.
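The logic the column describes can be sketched as follows. This is not Apple's actual algorithm; the thresholds and the FakeAccelerometer are invented for illustration. A hard impact followed by a window of immobility triggers the call for help:

```python
import random
import time

IMPACT_G = 3.0          # hypothetical impact threshold, in g
IMMOBILE_SECONDS = 60   # hypothetical "not gotten up" window

class FakeAccelerometer:
    """Stand-in for a real sensor, so the sketch runs end to end."""
    def peak_g(self):
        return random.uniform(0.9, 4.0)   # occasionally exceeds IMPACT_G

    def is_moving(self):
        return random.random() < 0.3      # sometimes the wearer gets up

def call_for_help():
    print("No movement after a hard fall: contacting emergency services")

def monitor(accelerometer, cycles=100):
    """Detect a hard impact, then watch for movement during a fixed window."""
    for _ in range(cycles):
        if accelerometer.peak_g() > IMPACT_G:
            fall_time = time.time()
            while time.time() - fall_time < IMMOBILE_SECONDS:
                if accelerometer.is_moving():
                    break                  # the wearer got up: false alarm
                time.sleep(1)
            else:
                call_for_help()            # immobile for the whole window
        time.sleep(0.1)

monitor(FakeAccelerometer())
```

The "too lazy to get up" false alarms the column mentions fall straight out of this structure: any immobility after an impact, whatever the reason, looks identical to the rule.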
There are four things that make DTs tick:
- They are logical; they do not think, "I can delay my laundry."
- They thrive on information, not feelings.
- Speed is not an issue; it is of the essence.
- Machines are connected, so they "learn" faster from each other.
And these are the three things that make them scary:
- Privacy risks
- Does the technology work all the time?
- Plain old FEAR
At the end of the day, DT is here to stay. As noted at the start: everyone has a name for it, and everyone has a use for it.
Like any emerging technology, DT carries risks. The biggest is the loss of jobs. But we have known since the Industrial Revolution that jobs do not disappear; they reinvent themselves.
But we will have to watch for the real fear that comes from books and movies: will the machines take over our world? "Unlikely", according to this columnist, simply because DTs are replicas (and can therefore be shut down by "reality"). And as we march in this direction, we will use them only for mundane tasks (vacuuming, or sorting envelopes).
Yes, DTs have been around us for a while, and they are increasingly taking over the jobs that we humans do not want to do.
The author has managed large IT organizations for global players like MasterCard and Reliance, as well as lean IT organizations for startups, with experience in financial and retail technologies.