Artificial intelligence requires enormous amounts of computing power. Large language models (LLMs), the technology underlying generative AI tools such as ChatGPT, need vast amounts of data to train on, and that training in turn demands massive computing power. Even executing a single prompt in an AI chatbot takes processing power: by one estimate, roughly ten times that of a regular Google search. Training a modern LLM is therefore expensive: GPT-4, reportedly a trillion-parameter model, may have cost OpenAI as much as $100 million to train.
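To get a sense of the scale, a commonly used back-of-envelope rule puts training compute at roughly six floating-point operations per parameter per training token. The sketch below applies that rule to an illustrative trillion-parameter model; the token count and per-GPU throughput are assumptions chosen for illustration, not reported figures for GPT-4.

```python
# Back-of-envelope estimate of LLM training compute.
# Rule of thumb: total training FLOPs ~= 6 * parameters * training tokens.
# All figures below are illustrative assumptions, not reported GPT-4 numbers.

parameters = 1e12          # a trillion-parameter model
training_tokens = 10e12    # assume ~10 trillion training tokens
flops_total = 6 * parameters * training_tokens

# Assume a single GPU sustaining ~300 TFLOP/s on this workload.
gpu_flops_per_sec = 300e12
gpu_seconds = flops_total / gpu_flops_per_sec
gpu_years = gpu_seconds / (3600 * 24 * 365)

print(f"Total training compute: {flops_total:.1e} FLOPs")
print(f"Single-GPU time at 300 TFLOP/s: {gpu_years:,.0f} GPU-years")
```

Numbers like these are why training runs are spread across thousands of accelerators at once, which is exactly the demand that has reshaped the chip market.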
It turns out that some specialist chips are well suited to the needs of modern AI. NVIDIA started out making chips for the video game industry: set up in 1993 to bring 3D graphics to gaming, it brought the Graphics Processing Unit (GPU) to market in 1999 and followed it with the CUDA parallel computing platform in 2006, innovations that spurred rapid development in the gaming industry. NVIDIA went public in 1999 with revenues of $158 million; by 2020, these had risen to almost $11 billion. The massively parallel design of GPUs is ideal for computationally intensive tasks such as rendering graphics, but also, it turns out, for training LLMs: a GPU is at least ten times quicker than a regular central processing unit (CPU) at this work. The advent of generative AI, spurred on by the release of ChatGPT in November 2022, caused demand for NVIDIA’s chips to explode. By 2024, the company’s revenues were over $60 billion, and in 2025 they are likely to more than double again. In September 2025, the company was capitalised at $4.3 trillion, making it the most valuable company on the planet, having overtaken Microsoft, Apple, Amazon and Alphabet. In 2020, over half of NVIDIA’s revenues came from gaming; by 2025 that share was down to just 8%, the growth coming mostly from data centres using its chips for AI.
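As a minimal sketch of why that parallelism matters, the snippet below times a single large matrix multiplication, the operation that dominates LLM training, on a CPU and then on a GPU. It assumes PyTorch is installed and a CUDA-capable GPU is present; the exact speed-up will depend on the hardware, but on typical machines it is an order of magnitude or more.

```python
import time
import torch

# Time a large matrix multiplication, the core operation in LLM training,
# on the CPU and (if available) on a CUDA GPU.
N = 4096
a_cpu = torch.randn(N, N)
b_cpu = torch.randn(N, N)

start = time.perf_counter()
_ = a_cpu @ b_cpu
cpu_time = time.perf_counter() - start
print(f"CPU: {cpu_time:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a_cpu.to("cuda"), b_cpu.to("cuda")
    torch.cuda.synchronize()          # make sure timing starts cleanly
    start = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()          # wait for the asynchronous kernel to finish
    gpu_time = time.perf_counter() - start
    print(f"GPU: {gpu_time:.3f} s  ({cpu_time / gpu_time:.0f}x faster)")
```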
The boom is not restricted to processors. The specialised high-bandwidth memory used in AI accelerators offers over four times the bandwidth of regular memory, making it well suited to the large amounts of data that AI models consume. Non-volatile memory (NVM) retains stored information even after power is switched off, enhancing reliability. Field-programmable gate arrays (FPGAs) are integrated circuits that can be tailored to specific use cases and reprogrammed after manufacturing, and have long been used in applications such as digital television and robotics. They are power-efficient, but they require specialist hardware design knowledge, so they have largely been supplanted by GPUs for LLM processing. A rival technology to NVIDIA’s GPUs has appeared in the form of Google’s Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored to the matrix operations at the heart of AI workloads. TPUs can be even faster than GPUs for LLM operations. At present they represent only around 3% of the market (NVIDIA being by far the dominant player), but they are likely to grow, with rival accelerators appearing from companies including AMD and Intel. GPUs have a significant “moat” to defend against this challenge, however, in the form of their broader and more mature software ecosystem.
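For a flavour of the matrix operations that TPUs and similar ASICs are built for, here is a short sketch using JAX, which compiles the same Python code for CPU, GPU or TPU backends. The function name and array shapes are illustrative, standing in for one simplified transformer-style layer rather than any real model.

```python
import jax
import jax.numpy as jnp

# The dense matrix multiplications below are exactly the operations that
# TPUs (and other AI ASICs) are built to accelerate. JAX compiles this
# function with XLA for whichever backend is present: CPU, GPU or TPU.
@jax.jit
def transformer_like_layer(x, w1, w2):
    # Two chained matrix multiplications with a non-linearity in between,
    # a simplified stand-in for one layer of a transformer.
    return jnp.maximum(x @ w1, 0.0) @ w2

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 1024))
w1 = jax.random.normal(key, (1024, 4096))
w2 = jax.random.normal(key, (4096, 1024))

y = transformer_like_layer(x, w1, w2)
print("Backend:", jax.default_backend(), "| output shape:", y.shape)
```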
The onward surge of interest in AI is driving a boom in the building of data centres. The global data centre industry is worth around $243 billion and may double in size by 2032. Data centres already consume around 1.5% of the world’s electricity, and around 9% of US consumption, with a quarter of that in Virginia – “data centre alley”. This boom has a significant environmental impact of its own. Data centres need clean water to cool their servers. Salt water would corrode the equipment; seawater can be used in a closed-loop system with heat exchangers, but that requires corrosion-resistant hardware and meticulous maintenance. Using drinkable water has its own impacts: data centres are often built in areas with low humidity, which reduces metal corrosion, but such areas also tend to be short of drinking water. Technology can help here, with more efficient liquid cooling systems an area of promise, though these in turn require a lot of capital investment. It is unclear whether shareholders will see a return on that hefty investment.
As AI continues its march into society, we will see continued demand for data centres and for the specialist chips inside them built for AI workloads. We should expect increased competition at the hardware level, and growing public debate and concern about the AI-driven data centre boom’s effects on electricity consumption, water use, and the wider environment. Of course, this assumes that we are not actually in an AI bubble, in which case all bets are off.







