The idea of a “digital twin”, a virtual replica of a physical object, is not a new one. In the 1960s NASA built computer simulations of spacecraft for the Apollo missions. The idea was that telemetry from real-world sensors on the Apollo craft would be fed into the simulator, in order to see what might happen under particular conditions. The approach was put to real use during the ill-fated Apollo 13 mission, when an oxygen tank exploded in the service module on April 13th 1970, 56 hours into the mission. The crew survived by using the lunar module as a kind of lifeboat for the journey back to Earth. NASA used the digital twin of the spacecraft to simulate various scenarios and validate procedures on the simulation before recommending them to the crew.
The concept gained traction in the 1990s, when advances in simulation and modelling, such as Monte Carlo methods, allowed rich representations of physical systems like turbines, production lines or whole factories. Since around 2017, digital twins have been widely adopted in building management, aerospace and manufacturing, notably to help with predictive maintenance.
The idea is the same as the original Apollo simulators: a computer model of a physical object (such as a turbine) is set up that mimics the sensors of the physical object. The real-life sensor feeds, with data like vibration, humidity, valve positions or speed of operation, are sent to the virtual model, which is also configured with basic information about the physical object, such as its type, location, serial number, asset hierarchy and dimensions. Further information, like maintenance history, inspection results and part replacements, is also provided to the digital twin, and events such as alarms and operator interventions are fed into the simulator as well. If the data is sufficiently complete, an operator can test out scenarios on the simulation, such as changing the torque on a machine, or use it to plan maintenance shutdowns.
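The ingredients described above (static asset metadata, live sensor feeds, and a record of events and maintenance) can be sketched as a simple data structure. This is an illustrative skeleton only; the class and field names are invented for this example, and a real digital-twin platform would be far richer.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    # A single timestamped measurement mirrored from the physical asset.
    timestamp: float
    channel: str      # e.g. "vibration", "humidity", "valve_position"
    value: float

@dataclass
class DigitalTwin:
    # Static metadata describing the physical object.
    asset_type: str
    serial_number: str
    location: str
    # Live telemetry, recorded events (alarms, operator interventions)
    # and maintenance history, as described in the text.
    readings: list = field(default_factory=list)
    events: list = field(default_factory=list)
    maintenance_history: list = field(default_factory=list)

    def ingest(self, reading: SensorReading) -> None:
        """Mirror a real-world sensor feed into the virtual model."""
        self.readings.append(reading)

    def latest(self, channel: str):
        """Most recent value for a given sensor channel, or None."""
        for r in reversed(self.readings):
            if r.channel == channel:
                return r.value
        return None

twin = DigitalTwin(asset_type="turbine", serial_number="T-1001", location="plant-7")
twin.ingest(SensorReading(timestamp=0.0, channel="vibration", value=0.42))
twin.ingest(SensorReading(timestamp=1.0, channel="vibration", value=0.47))
print(twin.latest("vibration"))  # 0.47
```

The key design point is simply that the virtual model holds both the slowly changing description of the asset and the fast-moving telemetry, so that scenarios can be run against an up-to-date mirror of the real machine.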
Such simulations can be quite cost-effective. Shell estimates that it sees 35% less equipment downtime and 20% lower maintenance costs by using digital twins on its oil rigs. Given the huge cost of oil operations, the savings mount up quickly; Shell reckons it saves $2 billion a year through its use of digital twins. Petrobras found $154 million of savings through the use of this technology in its refinery operations across eleven plants. Machine learning, a form of AI, sits at the heart of analytics for such models, while optimisation models can be used as a sandbox to explore maintenance strategies. The models can spot subtle degradation patterns and suggest optimal intervention times. Large language models (LLMs) can be used to provide a natural language interface to machine operators. They can also be used to structure data like logs and maintenance records into more readable formats.
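To give a flavour of how a model might spot a degradation pattern, here is a deliberately crude stand-in: a rolling-mean drift detector over a simulated vibration feed. The function name, window size and threshold are all invented for this sketch; production systems use far more sophisticated machine learning models.

```python
from statistics import mean

def detect_degradation(values, window=5, threshold=1.3):
    """Return the index at which the rolling mean of a sensor feed first
    exceeds `threshold` times the baseline (the mean of the first window),
    or None if no drift is found. A toy stand-in for real ML analytics."""
    if len(values) < 2 * window:
        return None
    baseline = mean(values[:window])
    for i in range(window, len(values) - window + 1):
        if mean(values[i:i + window]) > threshold * baseline:
            return i
    return None

# Simulated vibration readings: stable at first, then slowly worsening.
readings = [1.0, 1.1, 0.9, 1.0, 1.05] + [1.2, 1.4, 1.6, 1.8, 2.0]
print(detect_degradation(readings))  # 5: drift detected at the sixth reading
```

The point of even a toy detector like this is the one made in the text: by watching the mirrored sensor feed, the twin can flag subtle degradation early and suggest when to intervene, rather than waiting for a failure.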
The advent of chatbots like ChatGPT has led to the idea of digital twins of people, though this is a murky area. It is possible to build a model of things like heart rates, sleep patterns, nutrition and medication for a human, or even a specific organ like a heart or liver, but a biological creature like a person is vastly more complex than a machine like a turbine or part of an oil rig. There is no equivalent of sensors that can read the thinking of a human being or measure the internal states of a body with any degree of completeness. It is certainly possible to take an LLM and train it on the data of a person, for example, the writings of an author. It could then produce an essay in the style of that author. However, it is a long way from this to creating a digital model that could respond in the same way as a specific human, especially in circumstances outside the training data. Human decision-making is complex and based on individual personality and circumstances, something far beyond the boundaries of what an LLM can model. The idea of a digital twin of a human being remains within the realm of science fiction, as explored, for example, in the Black Mirror episode “Be Right Back”, and indeed in other episodes of the same series, such as “San Junipero” in season 3. Today, it is possible to mimic a human voice very effectively, and to produce a digital representation of a human for predicting physical diseases. Actual human thought, however, is vastly more complex, and modelling it is strictly a matter of research and speculation at present.
Digital twins are a well-proven technique in the field of engineering. The approach has established benefits over many years and is an excellent use case for AI, especially in the area of machine learning. The extension of digital twins to specific aspects of medicine is an interesting area of research. However, the notion of a digital twin of a human being in terms of thought processes is a long, long way away from any plausible reality. Mimicking someone’s voice or appearance in a deepfake is very possible and indeed already established, but getting an AI to think like a human is a fantasy at present.