Financial services make up at least a fifth of the entire global economy. The sector has always been a heavy user of technology, investing at least three times as much in it as retail or manufacturing. Given this, it is unsurprising that financial services firms have also been early adopters of artificial intelligence (AI). AI is used for loan decisions, customer support, document analysis, financial forecasting, fraud detection, and generating financial reports. As one example, Mastercard used AI and graph technology to look for potentially fraudulent patterns in credit card data; the approach reportedly doubled the detection rate of compromised cards.
The use of AI in finance goes back to the 1980s, when hedge funds adopted statistical techniques to identify price discrepancies, and machine learning was used in credit scoring. Machine learning was further used in risk management and customer segmentation. Trading, traditionally done by highly paid humans, started to migrate to high-frequency trading, where algorithms conduct a large number of trades at great speed. The algorithms analyse real-time stock price data, looking for anomalies or patterns that may represent trading opportunities. They are empowered to trade on their own, executing much faster than a human could, sometimes thousands of times a second. Over half of all US stock market trades are now algorithmic, though this has caused some high-profile issues on occasion, such as the 2010 “flash crash”.
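The anomaly-hunting step can be illustrated with a toy example. The sketch below is illustrative only; the rolling window, the threshold, and the z-score rule are my assumptions, not any real trading system's. It flags price ticks that deviate sharply from the recent rolling mean:

```python
from statistics import mean, stdev

def zscore_signal(prices, window=20, threshold=2.0):
    """Flag price ticks that deviate sharply from a rolling mean.

    Returns (index, z-score) pairs for ticks whose deviation exceeds
    the threshold -- a crude stand-in for the 'anomalies' a trading
    algorithm might treat as opportunities.
    """
    signals = []
    for i in range(window, len(prices)):
        recent = prices[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma == 0:  # flat window: no meaningful z-score
            continue
        z = (prices[i] - mu) / sigma
        if abs(z) > threshold:
            signals.append((i, z))
    return signals

# A gently jittering price series with a sudden spike at index 20:
ticks = [100.0, 100.2, 99.8, 100.1, 99.9] * 4 + [105.0]
spikes = zscore_signal(ticks, window=10)  # flags only index 20
```

A real high-frequency system would of course work on order-book data at microsecond resolution and with far more sophisticated signals; the point here is only the shape of the pattern-detection loop.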
In the last few years, many financial services start-up companies have emerged to challenge the big banks, with the “FinTech” sector, worth around $250 billion in 2024, being a heavy user of AI for everything from chatbots to portfolio optimisation to delivering insights to traders. Companies such as Stripe, Tencent, Revolut and Chime are all examples of this rapidly expanding sector. Ant Group, an affiliate of Alibaba, serves over 1.3 billion customers and 80 million merchants.
Using AI has some clear advantages for banks. A machine learning model can detect patterns in huge amounts of data that a human would miss, making the technology a good fit for fraud detection. With banks losing around 5% of their annual revenue to fraud, any improvement is welcome. Chatbots can be deployed to supplement or replace some customer service staff. Although this is a controversial area, some studies have found that customers report higher satisfaction with such chatbots than with human agents. At the very least, chatbots are available 24/7 and do not get tired or cranky, even if they may lack empathy. There is little doubt that AI models will continue to be used for risk assessment, creditworthiness, evaluating loan collateral, and other areas that involve weighing multiple data sources. Similarly, we can be sure that AI models will keep their role in high-frequency trading, given that they operate at speeds human traders cannot match.
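To make the fraud-detection idea concrete, here is a deliberately simple sketch. Production systems use far richer features and models (and, as in Mastercard's case, graph structure), but the core idea of flagging transactions that sit far outside a customer's normal behaviour looks like this; the function name and the MAD-based rule are illustrative assumptions, not any bank's actual method:

```python
from statistics import median

def flag_outliers(history, new_amounts, k=6.0):
    """Flag card transactions far outside a customer's spending history.

    Uses the median absolute deviation (MAD), a robust spread estimate,
    so one past large purchase does not mask new anomalies.
    """
    med = median(history)
    mad = median(abs(x - med) for x in history) or 1.0  # guard against 0
    return [amt for amt in new_amounts if abs(amt - med) / mad > k]

# Typical spending around $10-$40; the $950 charge stands out.
suspicious = flag_outliers(
    [12.5, 40.0, 18.0, 25.0, 33.0, 9.99, 27.5],
    [30.0, 22.0, 950.0],
)  # -> [950.0]
```

The appeal of a learned model over a hand-set threshold like `k` is precisely that it can discover which combinations of amount, merchant, location, and timing are anomalous, rather than relying on a single rule.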
Generative AI’s ability to write program code in addition to text means that it has potential applications in the back-office systems that are used by banks. Trading systems are typically huge, proprietary and complex, involving millions of lines of code. The programming language COBOL, designed in 1959 by a US government and industry committee building on Grace Hopper’s earlier compiler work, still powers nearly half of all global banking IT systems, according to Reuters. AI code generation is not ready to just vibe-code a major trading system to replace all that COBOL any time soon, but AI tools can be useful in debugging and producing test cases for programmers.
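As an illustration of the test-case point, here is the kind of unit test an AI assistant might draft for a legacy routine. The `monthly_interest` function is a hypothetical Python stand-in for fixed-point COBOL arithmetic (balances in integer cents, rates in basis points, fractional cents truncated), not real banking code:

```python
import unittest

def monthly_interest(balance_cents, annual_rate_bp):
    """Hypothetical port of a legacy routine: one month's simple interest
    on a balance in integer cents, rate in basis points, truncating
    fractional cents as the original fixed-point code would."""
    return balance_cents * annual_rate_bp // (10_000 * 12)

class TestMonthlyInterest(unittest.TestCase):
    # Edge cases of the sort an AI assistant might propose.
    def test_round_balance(self):
        # $1,000.00 at 3.00% -> 250 cents per month
        self.assertEqual(monthly_interest(100_000, 300), 250)

    def test_truncation(self):
        # $999.99 at 3.00% -> 249.9975 cents, truncated to 249
        self.assertEqual(monthly_interest(99_999, 300), 249)

    def test_zero_rate(self):
        self.assertEqual(monthly_interest(100_000, 0), 0)
```

The value of machine-drafted tests for decades-old COBOL lies less in any single case than in cheaply pinning down the current behaviour, truncation quirks included, before anyone dares to change it.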
There are plenty of challenges as well as opportunities. Data privacy is a major issue in a highly regulated industry like financial services, and AI has more than its share of security issues. Banks are particularly juicy targets for hackers, for obvious reasons, and AI is presenting new avenues for fraudsters and attackers, as well as new opportunities for the banks themselves. Bias in AI models is another problem. In a 2024 study, borrowers with black or brown skin were twice as likely to be denied a loan as those with white skin. A 2022 study found that racial minority borrowers were charged higher interest rates than white borrowers, costing an extra $450 million in interest per year. The EU AI Act, which entered into force in August 2024 with most of its provisions applying from August 2026, explicitly addresses algorithmic bias, and will require a response from the industry.
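Disparities like those in the studies above are commonly quantified as an approval-rate ratio between groups; the “four-fifths rule” used in US adverse-impact analysis treats a ratio below 0.8 as a warning sign. A minimal sketch of that check, with hypothetical data and function names:

```python
def approval_rates(decisions):
    """Per-group approval rates plus the disparate-impact ratio.

    decisions: iterable of (group, approved) pairs.  A ratio of the
    lowest to highest group approval rate below 0.8 fails the
    'four-fifths' screening test.
    """
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Toy data mirroring the 2024 study's 'twice as likely to be denied':
loans = [("A", True)] * 8 + [("A", False)] * 2 \
      + [("B", True)] * 4 + [("B", False)] * 6
rates, ratio = approval_rates(loans)  # ratio = 0.5, well below 0.8
```

Screening metrics like this are only a first step; the EU AI Act's requirements for high-risk systems such as credit scoring go further, covering data governance, documentation, and human oversight.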
Over the next few years we can expect to see AI, in its various forms, used more and more by financial services firms as they seek to innovate and find opportunities to automate routine back-office processing. Just the ability to scan and index paper documents en masse is a major step forward for an industry that still uses plenty of paper, despite its embrace of high technology. We can expect generative AI to be used to generate customised financial advice and produce synthetic data for testing, though that has its own issues. Human advisers will likely be supplemented with AI tools to help them be more efficient. New AI models will continue to be used to detect patterns in data, both for trading and for fraud detection. Financial institutions will doubtless continue to be early adopters of AI, for better and for worse, just as they have been for other new technologies.
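On the synthetic-data point, a minimal sketch of what generated test transactions might look like. All names and distributions here are illustrative; real pipelines must also reproduce the statistical structure of production data without leaking any real customer's records, which is exactly where the issues mentioned above arise:

```python
import random

def synthetic_transactions(n, seed=0):
    """Generate toy card transactions for testing -- a minimal sketch.

    A fixed seed makes test runs reproducible.  This toy version only
    mimics the *shape* of transaction records, not their real-world
    statistics."""
    rng = random.Random(seed)
    merchants = ["grocer", "fuel", "online", "travel"]
    return [
        {
            "merchant": rng.choice(merchants),
            # Log-normal gives many small purchases and a long tail of
            # large ones, a common rough shape for spending amounts.
            "amount": round(rng.lognormvariate(3.0, 1.0), 2),
            "hour": rng.randrange(24),
        }
        for _ in range(n)
    ]

sample = synthetic_transactions(5)  # five fake, reproducible records
```

Records like these can exercise back-office code paths without a single real account number ever leaving production.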