The growth of interest in AI has been a global phenomenon over the last three years.
However, while large language model (LLM) adoption has been rapid, there is an open question as to whether the industry’s pricing model is sustainable: is current LLM pricing a steady-state business model or a temporary subsidy? Consider the two leading LLM vendors, OpenAI and Anthropic. OpenAI leads in the consumer market, with around 70% of all consumer traffic. The enterprise picture is different: by 2025, Anthropic had overtaken OpenAI to lead in the enterprise, with a 32% share of foundation model revenue against 25% for OpenAI.
At present, OpenAI and Anthropic are both haemorrhaging money due to heavy research and development and other costs. That is not unusual for tech pioneers, though the scale of the losses being racked up is huge. OpenAI’s finances are private, but estimates are that it may have done around $13 billion in revenue in 2025, having generated $4.3 billion in revenue in the first half of the year. Anthropic’s 2025 revenue was likely around $7 billion. This sounds great, but it has to be set against the costs. According to estimates based on Microsoft’s financial disclosures, OpenAI likely lost $12 billion in the quarter from July to September 2025, despite those revenues. OpenAI hopes to break even by 2030, by which point it projects it will need $125 billion in revenue, having burned through $115 billion in cash in 2029. Anthropic expects to break even in 2028, with projected revenue of $70 billion that year.
Foundation model vendors have several categories of cost. Apart from the usual items like salaries, office rent and network costs, the training of LLMs is notoriously expensive. Training GPT-3 cost perhaps $3 million, training GPT-4 cost in the range of $100 million, and training GPT-5 may have cost as much as $2 billion. However, these are one-off costs: capital expenditure rather than ongoing operating expenditure. More important for long-term economics are the continuous costs of executing user queries (inference), each of which consumes processing power.
What is the true cost of running an LLM query? A single query has been estimated to cost an average of around 0.36 US cents in marginal GPU processing power, so a hundred queries would cost 36 cents. Costs vary greatly with the complexity of the model: a “reasoning” model like o1 costs far more to run than a standard model like GPT-4o. These figures also exclude overheads such as staff and the running costs of data centres. Unsurprisingly, OpenAI directs consumer queries to standard models by default.
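To put that marginal-cost estimate in context, a back-of-the-envelope sketch (using only the ~0.36 cent per-query figure quoted above, which excludes staff and data centre overheads):

```python
# Marginal GPU cost per query, in US cents, as estimated above.
# This is compute only; overheads like staff and data centres are excluded.
COST_PER_QUERY_CENTS = 0.36

def gpu_cost_dollars(num_queries: int) -> float:
    """Estimated marginal GPU cost in dollars for a number of queries."""
    return num_queries * COST_PER_QUERY_CENTS / 100

print(round(gpu_cost_dollars(100), 2))        # 100 queries -> about $0.36
print(round(gpu_cost_dollars(1_000_000), 2))  # a million queries -> about $3,600
```

The point is that the marginal compute cost of a standard query is small but far from zero, and it scales linearly with usage.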
LLM vendors make money by pricing tokens, selling subscriptions, or through enterprise licenses. Consumers can generally access LLMs for free, at least up to certain limits; indeed, it is estimated that only 2% of ChatGPT users pay for a subscription. Around 10 million consumers hold paid subscriptions, plus perhaps 5 million enterprise seats, out of some 800 million users of the product overall. An OpenAI “ChatGPT Team” license is currently $30 a month. Enterprise licenses are subject to negotiation, but are estimated at around $60 a month per user with a minimum of 150 seats (about $108,000 a year); very large businesses will doubtless negotiate better unit prices. By mid-2025, around 49% of enterprise compute spending on LLMs went on inference, i.e. executing queries, rather than on development.
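The enterprise license figure above is simple to sanity-check (the seat price and minimum are the estimates quoted in the text, not published list prices):

```python
# Estimated enterprise license economics: ~$60/user/month, 150-seat minimum.
# Both figures are estimates from the text, not vendor-published prices.
seat_price_per_month = 60
minimum_seats = 150
months_per_year = 12

annual_minimum = seat_price_per_month * minimum_seats * months_per_year
print(annual_minimum)  # 108000 -> about $108,000 a year, as stated above
```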
Output token prices, which might be $10 per million tokens, are considerably higher than the processing cost of generating those tokens. The picture is complex, since more capable models consume more tokens per task. For a user paying (say) $20 a month, each query effectively costs the user $0.04 (assuming 500 queries a month), and executing a standard query likely costs OpenAI around $0.01 to $0.03 in compute. For reasoning models the picture is different: the query still costs the user the same effective $0.04, but costs OpenAI perhaps $0.10 to $0.50 in compute, implying a significant loss per query for the vendor. OpenAI caps the number of such messages a user can send, but even so, complex queries are loss-making for OpenAI, only partly offset by the profit on simpler ones. It is estimated that just two complex questions per day will burn through the entire value of a $20 monthly subscription; anything beyond that is a loss for OpenAI. Casual users are effectively subsidising power users.
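The unit economics described above can be sketched in a few lines. All figures are the rough estimates quoted in the text (subscription price, query volume, per-query compute costs), not vendor-published numbers:

```python
# Rough per-subscriber economics, using the estimates quoted above.
SUBSCRIPTION = 20.0      # $/month flat fee
QUERIES_PER_MONTH = 500  # assumed usage

revenue_per_query = SUBSCRIPTION / QUERIES_PER_MONTH  # effective $0.04/query

def monthly_margin(cost_per_query: float, queries: int = QUERIES_PER_MONTH) -> float:
    """Vendor margin on one subscriber, assuming a flat per-query compute cost."""
    return SUBSCRIPTION - cost_per_query * queries

# Standard model (~$0.01-$0.03/query in compute): the subscriber is profitable.
print(monthly_margin(0.02))   # $10 margin at $0.02/query

# Reasoning model (~$0.10-$0.50/query): the same subscriber is loss-making.
print(monthly_margin(0.30))   # -$130 at $0.30/query

# "Two complex questions per day" at the upper cost estimate:
print(2 * 30 * 0.50)  # $30/month of compute, exceeding the $20 subscription
```

This is why vendors cap reasoning-model usage, and why casual users end up subsidising power users.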
All this matters because corporate businesses are currently basing their AI budgets on the vendors’ current pricing models. However, these vendors are losing money at an impressive rate, and at some point their investors will want to see them turn profitable. Some of this could come from continued growth, but already around a tenth of the world’s population uses LLMs. Costs may fall through future model efficiencies, but another route to profitability would be for vendors to hike their prices, following a pattern seen in other technology sectors. Uber initially priced aggressively below market, driving many traditional minicab and taxi companies out of business; founded in 2009, it only turned profitable in 2023, having raised its prices by 83% between 2018 and 2022. Investors can be patient: Amazon was founded in 1994 but only turned profitable in 2003. A key question is whether these analogies really hold for the LLM industry, and at what point vendors will raise their prices. Enterprises justifying AI projects on current pricing could run into an unpleasant surprise if LLM pricing rises sharply. To be fair, most enterprise technology projects aim to make a return within three years or less, precisely to reflect future uncertainty of all kinds, but at present AI projects are failing to deliver any financial return at an alarming rate, even under the current economics.
The history of Uber and Amazon shows that investors can be patient in demanding profitability from ground-breaking technology, but there are doubts about the economics of the AI industry. Uber waited until it had gained market dominance before hiking prices; by that time it was very difficult for competitors to make headway, given the level of lock-in Uber had built up in terms of drivers and customers. It is a different story with LLMs, where the core underlying technology is essentially the same across vendors. Competing vendors can train on different data or tweak their model weights, but there is no obvious “moat” to build. There are currently over a hundred LLMs, with emerging low-cost open source models from China such as DeepSeek and Qwen eating away at the market share of OpenAI, Anthropic, Google and Meta. Switching costs for enterprises and users are low: if a new vendor appears with a smarter or cheaper model, it is quite easy to move to it, and at present there is not much lock-in. For example, if you implement a corporate customer service chatbot that simply makes API calls with prompts, switching models is trivial; indeed, libraries like LangChain can abstract away provider differences. It is a different story if an enterprise fine-tunes its own model, as a switch will require retraining, and there are also differences in context windows and in the embeddings used for RAG. An enterprise that switches providers will certainly have to re-test and validate model performance, but the level of lock-in for LLMs is not remotely comparable to, say, switching ERP providers, which would be a mammoth undertaking. Vendors will certainly try to increase lock-in through proprietary tooling, workflows and agent frameworks, but at this stage the barriers to switching remain low.
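The low switching cost comes down to the API surface: if an application only sends prompts and reads back text, the provider is an implementation detail. A minimal sketch of that pattern (the provider functions here are hypothetical stubs standing in for real vendor SDK calls, which in practice would each wrap an HTTP request to that provider’s endpoint):

```python
from typing import Callable, Dict

# Hypothetical stand-ins for real vendor SDK calls.
def _vendor_a_complete(prompt: str) -> str:
    return f"[vendor-a] {prompt}"

def _vendor_b_complete(prompt: str) -> str:
    return f"[vendor-b] {prompt}"

# The application depends only on this registry, not on any one vendor.
PROVIDERS: Dict[str, Callable[[str], str]] = {
    "vendor-a": _vendor_a_complete,
    "vendor-b": _vendor_b_complete,
}

def ask(prompt: str, provider: str = "vendor-a") -> str:
    """Route a prompt to the configured provider. Switching vendors
    becomes a one-line configuration change, not a rewrite."""
    return PROVIDERS[provider](prompt)

print(ask("Summarise this contract."))                       # served by vendor-a
print(ask("Summarise this contract.", provider="vendor-b"))  # served by vendor-b
```

This is essentially what abstraction libraries do at greater scale; the residual switching work is re-testing prompts and validating output quality, not rebuilding the application.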
Vendors are striving to outcompete each other, but building smarter models requires major investment in compute. This is great news for NVIDIA, which makes the chips that run most LLM queries today, and for companies building data centres, but the sheer level of investment is already distorting markets. The price of computer memory has rocketed, which matters because LLM processing is particularly memory-intensive: Dynamic Random Access Memory (DRAM) contract prices in Q4 2025 were up 170% year on year, while server-grade DRAM saw a 50% price spike in 2025 and may double again in 2026. Data centre capacity may double by 2030. This jump in memory prices is a real cost for AI companies, and the money to pay for it has to come from somewhere.
These stresses are becoming visible. Oracle has off-balance-sheet lease commitments of $248 billion for AI data centres, which has contributed to a slump in its share price. More worryingly, there was also a sharp rise in the price of credit default swaps (CDS) on Oracle in late 2025; a CDS is essentially a way to bet on whether a company will go bust. Not all companies are as lavishly financed as OpenAI, and AI companies are not immune to the need to make money at some point. Legal AI start-up Robin AI collapsed in late 2025, and Builder.ai (which at one point had a $1.5 billion valuation) became insolvent in mid-2025. A fuller list of AI companies that have closed can be found here.
The combination of leading vendors haemorrhaging money, minimal vendor lock-in, sharply rising memory prices and escalating data centre costs is a heady mix. AI continues to find applications across a wide range of industries, even if the rate of failure of corporate AI projects is troubling, as indicated in this September 2025 BCG report. However, for all the growth in demand for LLMs, vendors have so far failed to demonstrate a clear path to sustainable profitability. In a headlong rush for growth, the can of profitability is being kicked further down the road. At some point the bills for all this investment will come due, and the shakeout among vendors may not be pretty. The same applies to the project economics of the corporates implementing these AI tools: enterprises that assume LLM prices are stable inputs rather than temporary subsidies risk building their business cases on shifting sands. Current LLM pricing looks more like an investor-subsidised land grab, and enterprises need to factor that into their AI economics.







