Nobel laureate Richard Feynman famously said that “no one understands quantum mechanics”, despite having delivered a celebrated set of lectures on the subject. That has not stopped companies such as Google and IBM from building small, early working models of quantum computers, which promise to solve certain classes of mathematical problem drastically faster than current computing architectures. If and when larger, fault-tolerant quantum computers emerge, they may transform certain areas, such as some optimisation problems in logistics, and cryptography.
Cryptography is not just for spies. Much of modern secure communication, from web browsing to banking, relies on public key cryptography, which exploits a mathematical property of prime numbers. It is easy to multiply two large prime numbers (typically hundreds of digits long) together to form the basis of a public key, but it is incredibly difficult to factor that product back into its original primes. Factoring a number of that size would take current computing technology hundreds of thousands of years. A sufficiently large quantum computer, running Shor’s algorithm, may be able to do it in hours. This would have huge implications for encrypting email, bank transactions and just about everything else that is encrypted today.
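To make the asymmetry concrete, here is a toy sketch in Python. The primes below are tiny, hand-picked values for illustration only; real systems use primes hundreds of digits long, where trial division becomes hopeless.

```python
import math

# Toy illustration of the asymmetry behind public key cryptography.
# Multiplying two primes is one cheap operation; undoing it is not.
p, q = 1000003, 1000033        # two small primes, chosen purely for this example
n = p * q                      # the "public" product: computed instantly

def trial_factor(n):
    """Recover the primes by trial division: work grows with sqrt(n)."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None

print(trial_factor(n))         # finds (1000003, 1000033) after about a million divisions
```

Even here, factoring needs roughly a million divisions to undo a single multiplication; at cryptographic sizes the gap between the two directions becomes astronomical.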
Classical computer “bits” are either on or off, like a light switch: set to 0 or 1 in computing terms. Quantum computing instead uses quantum bits, or qubits. These can be set to 0, to 1 or, crucially, to a mix (superposition) of both at the same time. An analogy would be tossing a coin in the air. While it is spinning, the coin is in an uncertain state, both heads and tails; only when it lands does it resolve to a definite result.
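The spinning-coin picture can be sketched in a few lines of Python. This is a minimal classical simulation, not real quantum hardware: a qubit is represented by two amplitudes, and measurement picks 0 or 1 with probability equal to the squared amplitude.

```python
import random

# An equal superposition of |0> and |1>, like the coin in mid-air.
# Each amplitude is 1/sqrt(2), so each squared probability is 1/2.
amp0 = amp1 = 2 ** -0.5

def measure():
    """Measurement collapses the superposition to a definite 0 or 1."""
    return 0 if random.random() < amp0 ** 2 else 1

counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(counts)  # roughly [5000, 5000]: each outcome about half the time
```

Before measurement the state genuinely holds both possibilities; afterwards, only one survives, which is why repeated runs are needed to see the underlying probabilities.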
When multiple qubits are combined, they can become “entangled”, meaning their states are strongly correlated no matter how far apart they are. This is rather like sending a pair of gloves by post, one glove each in a separate box, to two different cities, say London and New York. If you open the box in New York and see a right-handed glove, then you know immediately that the box in London contains a left-handed glove. You don’t have to open the other box, and you are thousands of miles away from London, but you still know what is inside it. Similarly, when two qubits are entangled, they are described by a single quantum state. Measuring one instantly determines the state of the other, even if the other has been transported light-years away. This counterintuitive behaviour, which Albert Einstein described as “spooky action at a distance”, has been robustly demonstrated by experiment. It has even been shown to exist at room temperature between small diamonds.
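The glove-like correlation can be sketched as follows. This simulation captures only the perfect agreement of outcomes for the Bell state (|00⟩ + |11⟩)/√2 measured in one basis; the genuinely quantum effects, which show up when measuring in different bases, are beyond a toy like this.

```python
import random

# Measuring the entangled Bell state (|00> + |11>)/sqrt(2):
# the only possible joint outcomes are 00 and 11, each with probability 1/2,
# so the two qubits' results always agree, like the paired gloves.
def measure_bell_pair():
    """Sample one joint measurement of the entangled pair."""
    outcome = random.choice([0, 1])  # joint result is 00 or 11
    return outcome, outcome          # qubit A and qubit B always match

for _ in range(5):
    a, b = measure_bell_pair()
    print(a, b)  # the two values are always identical
```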
Superposition and entanglement together let a quantum computer represent and manipulate an enormous number of possible configurations in parallel, within a single quantum state. A quantum computation works by preparing qubits in a superposition, applying a sequence of operations that entangle them and create interference patterns, and then measuring the qubits at the end of the process. An algorithm is designed so that interference cancels out wrong answers and amplifies the probability of seeing the right ones when measured.
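Interference, the cancellation at the heart of that last step, can be shown in miniature. The sketch below applies the Hadamard operation H twice to a single qubit: the first H creates an equal superposition, and the second makes the two paths to |1⟩ cancel exactly, so the qubit returns to |0⟩ with certainty.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a qubit's (amp0, amp1) state vector."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)       # start in |0>
state = hadamard(state)  # equal superposition: both amplitudes ~0.707
state = hadamard(state)  # the |1> contributions cancel; |0> paths reinforce
print(state)             # back to (1.0, 0.0), up to floating-point error
```

A quantum algorithm arranges the same kind of cancellation across vastly many paths at once, so that wrong answers destructively interfere while the right one is reinforced.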
Quantum computers are not faster versions of today’s machines; instead, they are better suited to very specific problem types. Even with decades of development, quantum computers will still be worse than classical computers at many kinds of calculation. Running operating systems, querying databases and executing business logic are inherently deterministic tasks. Emulating this type of processing on a quantum computer would just add pointless overhead. We are therefore likely to see a mix of classical and quantum computers, chosen according to the type of problem.
What does quantum computing have to do with AI? It turns out that quantum computing is well suited to some tasks that are central to current AI technology. In particular, deep learning systems rely on matrix multiplication for training, a process that can take weeks for a large model. This could, in theory, be done in a much shorter timeframe by a future quantum computer. Image processing, natural language processing and genomic sequencing are other AI or AI-adjacent areas where identifying patterns in massive datasets is crucial, and quantum computing may be able to deliver here too. There is even the possibility of designing new types of neural networks based on quantum technology, and of quantum-enhanced reinforcement learning, another area within current AI that requires considerable computing resources.
It should be emphasised that quantum computing is still largely at a research and experimental stage. Current graphics processing units (GPUs) produced by companies like NVIDIA are stable, mass-produced silicon chips that run continuously in normal conditions. By contrast, quantum computers are fragile and often require extreme environments, such as temperatures near absolute zero, which makes them very difficult to scale. A single GPU today offers vastly better and more stable performance than current quantum computers. The key question, of course, is whether that will change, and over what time frame. No one can be certain how quickly quantum computing will develop, but small-scale demonstrations in specific domains like advanced materials science may be viable within a few years. Within a decade, experts think we may see commercially viable quantum applications. Beyond that, large-scale, fault-tolerant quantum computers capable of solving a wide range of complex commercial problems, or of breaking current encryption standards, may appear, but probably not for 15-30 years. However, scientific advances are hard to predict, and breakthroughs sometimes happen faster than expected, so AI companies are keeping a close eye on developments in quantum computing, just in case some spooky quantum action arrives ahead of schedule.