The web browsing market has been relatively stable for some years, with Google Chrome holding 72% market share in 2025, ahead of Apple’s Safari at 12%. These are followed by Microsoft Edge, Firefox and Opera. Recently, the AI chatbot vendors have decided to muscle in on this market, and existing incumbents have added AI features in response.
An AI browser can, in principle, do things that traditional browsers could not. For example, they can learn the behaviour of a user and carry out tasks on that person’s behalf. The AI browser can offer summaries of a topic rather than just display a list of links to websites, by reading and synthesising several relevant websites at once. Google has offered an “AI summary” feature since May 2024. An AI browser may remember previous user interactions and use the context from those to answer follow-up questions.
There is already a landscape of AI browsers, quite apart from the AI features of browser incumbents. Perplexity Comet launched in July 2025, and OpenAI’s ChatGPT Atlas in October 2025. There are also specialist AI browsers such as Sigma AI Browser, Dia, Genspark and Browserbase, Fellou and many more.
Although there are some benefits, there are also many issues and challenges associated with web browsers. These tools collect data on individuals, further reducing privacy in a world where we are all increasingly tracked by digital tools. The vendor’s collection of personal data becomes a juicy target for hackers, who have demonstrated for many years their ability to hack into all manner of corporate systems to gain access to personal data such as bank accounts and credit card information. If you are a lawyer, and if you use an AI browser as part of working on a client case, then the notion of “client confidentiality” becomes a thorny one if you are sharing data with an AI browser provider, whose data is stored somewhere in a cloud server or on their own premises.
AI browsers are already having an effect on internet traffic and causing problems to advertisers. Previously, if you ran a Google search and saw a list of pages, then advertisers would pay to appear prominently in association with that search. An AI browser that summarises several websites and brings an answer to a user has read the websites, but the human user has not seen these websites at all, meaning that the advertising money is wasted. This rewiring of the web is a major issue.
However, by far the most troubling aspect of AI browsers is that of the security of the products themselves. Large language models (LLMs) are notoriously vulnerable to attack in various ways, from prompt injection to data poisoning and more. Any browser that uses an LLM is fundamentally open to such attacks, as it must read user inputs as prompts and also access websites, which themselves can be infected with LLM-specific instructions and malware. Such instructions can be hidden using invisible webpage text, within images or as HTML, all readable by an LLM but not obvious to a human.
Browsers that use agentic AI are even more vulnerable, as they will have been given permission to take actions on a user’s behalf, the LLM acting with the user’s privileges. Such a hijacked LLM could do all manner of things, such as sending emails out from your email address, or revealing banking or other private information, such as your email history, healthcare data or even passwords. It could initiate money transfers from your account, since it is acting with your privileges, notionally on your behalf. These actions may be entirely invisible to the user.
This is not a theoretical concern. It took just a single day for ChatGPT to be hacked by multiple security researchers. Several researchers found ways to inject prompts into this and various other AI browsers. The chief information security officer for OpenAI, Dane Stuckey, admitted this in an interview, saying: “Prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks.” In August 2025 security researchers successfully hacked Perplexity Comet via prompt injection. Researchers found that the browser failed to properly separate genuine user instructions from hidden commands that had been planted in webpage content. Troublingly, the researchers reported this to Perplexity before going public to enable them to address it, but Perplexity “could not identify any security impact”, despite a demonstration of the penetration.
There are steps that AI browser providers could take to mitigate these risks. For a start, any agentic instruction could be required to have explicit user permission. The LLMs could be fine-tuned in their training to try to look out for malicious prompts. The problem is that they already are, yet hacking exploits were discovered almost immediately after the release of these tools.
It is not so easy for users to just avoid this issue, for example, by refusing to use these latest AI browsers. The problem is that existing vendors are happily adding AI features to current products in an effort to respond to the competitive threat. The security vulnerabilities are not vendor-specific. Any LLM is susceptible to a prompt injection attack, and although vendors try and plug each gap that appears, they are in a constant “whack a mole” game with attackers. They only have to miss one cleverly worded or well-hidden instruction and a hacker can have a field day. So far in 2025 one report estimated that there had been 25 million AI-related hacking incidents across 92 countries. This includes attacks leveraging browsers, prompt injection, phishing, and agentic browser abuse. A separate report found 752,000 browser-specific attacks.
AI vendors are doing what they can to improve security, but the very nature of LLMs and free text chat interfaces makes it very difficult indeed to counter these threats. End users also have limited recourse. You should keep your browser and software up to date, limit permissions to any agentic AI tools to the absolute minimum, install security tools, check URLs carefully and use multi-factor authentication as far as you can. You can avoid switching on LLM-based browsers where possible. These steps will help, but the sieve-like state of LLM security at the moment represents a serious ongoing challenge. Vendors need to carefully consider rethinking the very architecture of browsers, balancing convenience with control, and automation with accountability.







