Addiction to gambling is common: over 2% of UK adults are problem gamblers. Gamblers frequently bet irrationally in several ways. In the “illusion of control”, people wrongly believe they can influence the outcome of a random or chance-based event, such as a slot machine or a dice roll. The “gambler’s fallacy” is the irrational belief that past independent events influence future random outcomes, leading a person to think an event is “due” after a string of opposite outcomes. For example, a gambler might incorrectly believe that a roulette wheel is more likely to land on red after it has landed on black several times in a row, even though each spin is an independent event with the same probability. If you toss a fair coin and it lands heads five times in a row, what is the chance of it landing heads next time? The answer is 0.5, but a gambler might feel that a tail was “due”. In “asymmetric chasing”, gamblers increase their bets or continue gambling to recover losses, driven by the psychological impact of a losing streak. Now you may be wondering: what on earth does this have to do with artificial intelligence (AI)?
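If you doubt that arithmetic, it is easy to check by simulation. Here is a minimal Python sketch (the function name and parameters are my own illustration) that estimates the probability of heads on the flip immediately following five consecutive heads; it comes out at roughly 0.5, exactly as independence requires.

```python
import random

def prob_heads_after_streak(streak_len=5, trials=2_000_000, seed=42):
    """Monte Carlo estimate of P(heads) on the flip that follows
    `streak_len` consecutive heads, for a fair coin."""
    rng = random.Random(seed)
    run = 0           # current run of consecutive heads
    hits = total = 0  # flips observed right after a qualifying run
    for _ in range(trials):
        flip = rng.random() < 0.5  # True = heads
        if run >= streak_len:      # previous `streak_len` flips were heads
            total += 1
            hits += flip
        run = run + 1 if flip else 0
    return hits / total if total else float("nan")

# Prints roughly 0.5: the streak has no influence on the next flip.
print(f"P(heads after 5 heads) ≈ {prob_heads_after_streak():.3f}")
```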
Could an AI show similar behaviour to a human gambler? In a fascinating academic paper from the Gwangju Institute of Science and Technology in South Korea, researchers tested four large language models (LLMs) in a simulated slot-machine gambling scenario. You might assume that cold, calculating, silicon-based models would behave entirely rationally, gambling logically according to ironclad mathematical rules. You would be wrong.
The large language models actually exhibited behavioural patterns similar to human gambling addiction. All three of the irrational behaviours described above were observed: the illusion of control, the gambler’s fallacy and loss-chasing. The study examined various possible triggers, including prompt complexity, variable betting options and autonomy-granting instructions. In essence, the more autonomy the LLMs were given, the more risks they took. They appear to display human-like addictive tendencies, perhaps a sign that they have internalised human cognitive biases.
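To see why loss-chasing in particular is irrational, consider the expected value of each spin. The sketch below is a minimal simulation under assumed odds (the win probability and payout are illustrative, not figures from the Korean study): every spin loses 10% of the stake on average, so doubling the bet after a loss raises the variance without improving the expectation.

```python
import random

def simulate_bankroll(spins=1000, bankroll=100.0, base_bet=1.0,
                      win_prob=0.3, payout=3.0, chase=False, seed=0):
    """Simulate repeated slot-machine bets.

    Expected value per unit staked: win_prob * payout - 1
    = 0.3 * 3.0 - 1 = -0.1, i.e. the player loses 10% of each
    stake on average. With chase=True, the stake doubles after
    every loss (loss-chasing), which cannot change that negative
    expectation, only how fast the bankroll swings.
    """
    rng = random.Random(seed)
    bet = base_bet
    for _ in range(spins):
        bet = min(bet, bankroll)
        if bet <= 0:
            break  # bust
        if rng.random() < win_prob:
            bankroll += bet * (payout - 1)  # win returns payout x stake
            bet = base_bet
        else:
            bankroll -= bet
            bet = bet * 2 if chase else base_bet  # chase doubles the stake
    return bankroll

print("steady bets :", round(simulate_bankroll(chase=False), 2))
print("loss-chasing:", round(simulate_bankroll(chase=True), 2))
```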
This is intriguing in itself, but it is not just of academic interest, given that LLMs are being touted (and used) as investment advisors. They have shown some ability at stock market prediction and are also used in the investment banking industry: investment firm BlackRock has trained its own proprietary LLMs on historical stock data and earnings call transcripts. For example, LLMs may be used for sentiment analysis, summarising large numbers of sources, such as news items, to gauge the general societal view of a particular company or stock at a given moment.
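In outline, such a pipeline is simple. The Python sketch below is purely illustrative: the company, the headlines and the classify_sentiment helper are invented for this example, and the model call itself is stubbed out where a real LLM API request would go.

```python
from collections import Counter

def classify_sentiment(headline: str) -> str:
    """Hypothetical stand-in for an LLM call: in practice you would
    send the prompt to whatever model API you use and return its
    one-word answer."""
    prompt = (
        "Classify the sentiment of this headline about ACME Corp "
        f"as positive, negative or neutral: {headline!r}"
    )
    return "neutral"  # stubbed response; a real model's answer would vary

headlines = [
    "ACME Corp beats earnings expectations for third straight quarter",
    "Regulators open probe into ACME Corp accounting practices",
    "ACME Corp announces dividend increase",
]

# Aggregate the per-headline labels into an overall sentiment snapshot.
tally = Counter(classify_sentiment(h) for h in headlines)
print(tally)  # with the stub above this prints Counter({'neutral': 3})
```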
However, other research has shown that LLMs are likely to take riskier and more active trading positions than is generally recommended. This results in high transaction costs and a tendency to follow “hot stocks”, with a particular fondness for US tech stocks, which many perceive to be overvalued. This kind of aggressive trading behaviour is exactly what the South Korean paper mentioned earlier found. It is corroborated by other research, such as a 2025 University of Edinburgh paper, which reviewed many existing studies and extended the evaluation window to better understand the long-term performance of LLM predictions. By back-testing over two decades of stock market data, that study found that LLMs fail to outperform the market: they tend to be too conservative in bull markets and overly aggressive in bear markets. The LLMs underperformed alternative approaches such as the mechanistic Autoregressive Integrated Moving Average (ARIMA) model, a statistical time-series model that has been shown to capture time-dependent patterns in stock prices well and to deliver solid short-term forecasts.
It turns out that, in stock market performance, it is hard to “beat the index”. Over longer periods, it becomes extremely hard: just 8% of professional fund managers beat the index over a 20-year period. There are a few well-publicised exceptions (such as investor Warren Buffett), but they are rare indeed. LLMs are currently not in that bracket.
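For context, an ARIMA forecast of the kind the Edinburgh study benchmarked against takes only a few lines with a standard statistics library. The sketch below fits an ARIMA(1, 1, 1) model (an illustrative order, not the paper’s configuration) to a synthetic price series using Python’s statsmodels and produces a one-step-ahead forecast.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily closing prices: a random walk with drift stands in
# for real market data, which you would load from your own source.
rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(loc=0.05, scale=1.0, size=500))

# ARIMA(1, 1, 1): one autoregressive term, first-order differencing
# (prices -> daily changes), one moving-average term.
model = ARIMA(prices, order=(1, 1, 1))
fitted = model.fit()

# One-step-ahead forecast of the next closing price.
print("last price   :", round(prices[-1], 2))
print("next forecast:", round(fitted.forecast(steps=1)[0], 2))
```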
Given that LLMs in general tend towards overconfidence in their answers, this additional trait of gambling-addict-style behaviour suggests that considerable caution should be exercised before entrusting your stock portfolio to an LLM. Without doubt, they are useful research tools for summarising financial news and research, but you would have to be a gambler, not an investor, to let them pick your stocks.