As generative artificial intelligence (AI) flooded into the public consciousness in late 2022, corporations wrestled with how best to utilise and control it at work. Initial hopes of productivity improvements were tempered by the realisation that the public chatbots like ChatGPT and Claude were executing prompts on their servers. This content, which at work could include contracts or product plans or other trade secrets, could potentially appear as training data used in a large language model (LLM). Samsung found this out the hard way in 2023, when some of its employees used ChatGPT to help with a new program. The notes and even the source code were entered by the employees, and so they leaked this sensitive information to the chatbot. Samsung subsequently banned the use of generative AI on company-owned devices. Other large companies followed suit, putting policies and restrictions in place to prevent trade secrets leaking from behind corporate firewalls into the wild. By the summer of 2023 a majority of UK companies had implemented such restrictions. In the USA, high-profile companies such as Apple, Amazon, Bank of America and many more followed the same path. Some companies have built enterprise-specific servers and secure cloud implementations in order to reduce the risk. A raft of policies and suggested security approaches have followed.
So – problem solved? Hardly. A survey by WalkMe in summer 2025 found that 78% of the thousand US staff surveyed were actually using unauthorised AI tools at work; just 8% of them had received any AI training. A different 2025 survey showed that 13% of companies that responded admitted that they had suffered financial losses, reputational damage or customer fallout as a result of AI. Yet another survey found that 70% of the staff in North America who were surveyed admitted to unauthorised use of AI tools at work. This ranged from brainstorming content to analysing data to drafting emails or documents and more, including writing code and generating customer-facing content. Quite apart from the confidentiality issues of using such tools and leaking trade secrets, the use of public chatbots in heavily regulated industries like finance could cause legal exposure for the companies involved. This suggests a lack of effective governance on the part of many companies, and also an unmet need from staff who want to be able to use productivity tools that are not being provided to them.
One interesting aspect of the use of AI, shadow or otherwise, is that the people who are least knowledgeable about AI are the ones who are most likely to be receptive to AI and use it. A 2025 study into this confirmed the results of another 2025 study into the same area, showing the tendency of those who understand AI least being the ones most likely to be impressed with it. This was also seen in a major study of IT professionals, who found that executives were likely to have much higher levels of trust in AI than those developers who were actually using it day to day. For example, even today not everyone is aware that LLMs “hallucinate” i.e. produce fabricated or nonsensical answers, and that they do so regularly, especially in areas outside of their original training data. At least 15% of AI answers contain hallucinations, and this rate is actually worse with more recent AI models. Another major concern about LLMs is their security vulnerability. It is notoriously easy for bad actors to manipulate chatbots into actions that are not intended. Some of the techniques do not involve elaborate or sophisticated hacking, but merely sending prompts that get LLMs to disregard their guardrails. There are some more elaborate techniques that can be used by hackers, such as steganography, but LLMs have been shown to be easily manipulated by much simpler methods. In one set of tests, a 65% success rate was achieved in getting LLMs to override their safety guardrails, merely by disguising the request amidst benign content.
What should companies do? To begin with, it would seem prudent to embark on comprehensive training of their staff about the risks of using LLMs as well as the opportunities. Many staff simply don’t realise the risks that they are taking. As noted, the more that people work with AI, the more they are likely to understand the issues and to be less trusting. In one survey, 93% of software engineers encounter problems with their models on a daily or weekly basis. A broad assessment of security risks would also seem in order, since LLMs clearly have substantial security weaknesses at present, but there are steps that can be taken to introduce better governance and to improve the security associated with AI. One Microsoft report found that 87% of UK businesses are vulnerable to AI-related cyberattacks. Another survey found that 74% of companies perceive AI threats as a major challenge for their organisations. This is not a rare, “black swan” style risk: over a quarter of all UK companies admitted to being victims of a cyber-attack in the last year, in one huge survey of over 8,000 businesses.
As with any new or ground-breaking technology, it will take time for companies to get to grips with how to implement AI effectively. However, it is important that corporate executives get a handle on these issues and do so quickly. The widespread rise in shadow AI has shown that the current approaches are not succeeding for most companies. Hackers are agile and adaptable, and time is not on the side of the angels here. Companies need to understand the risks that are lurking in the shadows.







