It is now almost three years since OpenAI released ChatGPT on an unsuspecting world. In that time, large language models (LLMs) have had some significant impacts. Over half of all venture capital investment is now related to artificial intelligence (AI). Nvidia, which makes the graphics processing units that power most AI workloads, is now the most valuable company in the world, having grown in market value from $337 billion to $4.5 trillion by October 2025, and alone accounts for over 7% of the S&P 500 index. OpenAI claims 800 million weekly active users of ChatGPT, a tenth of the planet’s population. AI is starting to affect the job market, is being used in industries as diverse as defence, education and wine, and is even serving people as virtual friends and therapists (the latter rarely ends well).
So, three years on, what does the public actually think of all this? Outside the rarefied worlds of Silicon Valley and the technology industry, how is AI regarded? In the US, a Pew Research Center survey found that the public is much more sceptical about AI than experts are: just 17% of the public believes that AI will have a positive impact, and only 11% are excited about its increased use in daily life. 43% of the public believes AI will harm them, compared to just 24% who think it will benefit them. Both the public (55%) and experts (57%) want more control and regulation of AI. There is quite a wide gender gap, with men much more positive than women. Fully 64% of the US public thinks that AI will lead to fewer jobs over the next two decades. Both the public (66%) and experts (70%) are concerned about inaccurate information from AIs, suggesting that people have become well aware of LLMs’ propensity to hallucinate. Both experts and the public (55% for both) are highly concerned about bias in AI decisions in the US.
In the UK, the Alan Turing Institute and Ada Lovelace Institute survey was conducted in November 2024, repeating an earlier study. It found that 61% of the UK public had heard of LLMs and that 40% had used them (yet 93% had heard of driverless cars). Concern about the technology has risen from 44% in 2022/2023 to 59% in 2024/2025. Respondents cite false information (61%), deepfakes (61%) and financial fraud (58%) as the biggest areas of concern. Over 75% of the public wanted government regulators to have a suite of safety powers. People who use LLMs most commonly use them for recommendations and answers; 9% use them for writing emails, 14% for entertainment and 11% for help with job applications. A troubling 7% of the population (two million people) have used a mental health chatbot, yet only 36% thought such chatbots were beneficial and 63% were concerned about their use for mental health. Perceptions vary across demographics: just 39% of the public overall were concerned about facial recognition technology being used in policing, but that rose to 57% among black people and 52% among Asians. The most popular use case of AI, in terms of perceived benefit versus concern, was assessing cancer risk from medical scans, followed by facial recognition in policing and assessing loan risk. People were most anxious about robotics, mental health chatbots and assessing welfare eligibility.
A separate UK survey by the National Centre for Social Research found that people’s attitudes to AI are influenced by their political views. For example, 23% of people with left-wing views are concerned about discriminatory outcomes in welfare eligibility decisions, compared to just 8% of people with right-wing views. People with left-wing views are also much more concerned about AI taking jobs than people with right-wing views. Yet there is no political divide over concern about mental health chatbots. Perhaps surprisingly, 70% of people want AI to be governed by laws and regulations, irrespective of their political leanings.
A global study from KPMG and the University of Melbourne, covering 48,000 people in 47 countries in 2025, found that 54% of people are wary about trusting AI. People in advanced economies are markedly more sceptical and less trusting than those in emerging economies (39% vs 57%). Two in three respondents use AI regularly, but 61% have had no AI training. People highlighted improved efficiency and innovation as benefits, while the biggest perceived risks were cybersecurity, privacy, misinformation and job loss. A more business-oriented survey from the Henley Business School found broadly similar results, with the biggest AI frustrations being mistakes, a lack of reliable data, model bias and privacy concerns.
Interestingly, public attitudes to AI do not vary much around the world, though they show nuances by gender and political belief. I find it quite encouraging that the public has identified AI misinformation, cybersecurity and deepfakes as the biggest risks, showing that people are not naively buying the relentless boosterism of the AI vendors and the consulting firms and investors supporting them. Trust in AI has fallen significantly since 2023, and it will be interesting to see how it continues to evolve. As more and more stories appear in the mass media about chatbot-related suicides, hallucinating AIs and security breaches, there will be plenty for us all to weigh against the genuine opportunities emerging for well-chosen use cases.