It is now three years since ChatGPT was launched and introduced most of the world to generative AI. Corporations and public sector bodies alike have rushed to adopt the technology, but with limited success. The proportion of AI projects that actually succeed has been estimated at up to 20% (RAND), 13% (Gartner) and as low as 5% (MIT, in a large and influential study). Why is the failure rate so high?
Partly, people have been motivated by a “fear of missing out” mentality, caught up in the hype generated by AI vendors, the media and the consulting firms that make money from implementing these projects. In many cases, generative AI has been a solution looking for a problem, not tied to business priorities. One issue is that the fluency of large language models (LLMs) seems almost magical when first encountered, and this has led many people to skip taking the time to understand what LLMs are good at and what they are not. To be sure, LLMs are fluent communicators, able to generate well-written text on a wide range of subjects almost instantly, as well as program code. However, they are probabilistic by nature, giving slightly different answers to the same question, and so are unsuitable for situations where consistency is a requirement. They also hallucinate at a significant rate, meaning that their answers and generated text need to be carefully checked rather than automatically trusted as true and accurate. Another major issue is that they are vulnerable to various types of attack from hackers, from prompt injection to data poisoning and more.
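The inconsistency comes from how LLMs generate text: each next token is sampled from a probability distribution, usually scaled by a temperature parameter. A toy sketch (the tokens and their scores are invented for illustration; a real model has a vocabulary of many thousands of tokens) shows why repeated runs on the same prompt can diverge:

```python
# Toy illustration of sampling-based generation: the next token is drawn from a
# probability distribution, so repeated runs can produce different outputs.
# The candidate tokens and their logit scores below are made up.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Apply a temperature-scaled softmax to the logits, then draw one token."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, [weights[t] / total for t in tokens])[0]

logits = {"approved": 2.0, "rejected": 1.5, "pending": 0.5}
samples = {sample_next_token(logits) for _ in range(200)}
# Across 200 draws the same "prompt" yields more than one distinct token;
# lowering the temperature toward zero makes the top-scoring token dominate.
```

This is also why setting temperature to zero (greedy decoding) reduces, but does not eliminate, variability in practice.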
In a corporate context, an off-the-shelf LLM has no knowledge of your particular company policies, products or customers. This can be addressed by a technique called retrieval augmented generation (RAG), where relevant company documents are supplied to the LLM alongside the prompt to supplement its training data. However, this raises the issue of the quality of the data in those documents. The quality of an LLM’s answers is highly dependent on the quality of its training data and data sources, yet much corporate data is of distinctly variable quality. So data governance and data quality are important foundations to put in place if you are to improve the chances of success in your AI projects.
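The retrieval step that RAG adds can be sketched in a few lines. In this toy example (the documents, query and word-overlap scoring are all invented for illustration; real systems typically use vector embeddings for retrieval and then pass the assembled prompt to an LLM, both of which are omitted here), the most relevant company document is found and prepended to the user's question:

```python
# Minimal sketch of the retrieval step in retrieval augmented generation (RAG).
# Documents and scoring are illustrative only, not a production implementation.
from collections import Counter

documents = {
    "refund-policy": "Customers may request a refund within 30 days of purchase.",
    "shipping": "Standard shipping takes 5 to 7 business days within the EU.",
    "warranty": "All hardware products carry a two year limited warranty.",
}

def tokenize(text: str) -> list[str]:
    """Lowercase and strip basic punctuation (toy tokenizer)."""
    return [w.strip(".,?!") for w in text.lower().split()]

def score(query: str, text: str) -> int:
    """Count words shared between the query and a document (toy relevance)."""
    return sum((Counter(tokenize(query)) & Counter(tokenize(text))).values())

def build_prompt(query: str, top_k: int = 1) -> str:
    """Retrieve the most relevant document(s) and prepend them to the question."""
    ranked = sorted(documents.values(), key=lambda t: score(query, t), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many days do I have to request a refund?")
```

The quality point in the paragraph above follows directly: whatever the retriever surfaces is what the model grounds its answer on, so stale or wrong documents produce confidently stale or wrong answers.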
To begin with, align the choice of AI projects with business priorities and build an investment case for AI, just as you would for any new technology. The money you invest in AI should deliver real business value and a positive return on investment; otherwise, why are you doing it? Above all, select use cases that suit AI well, and avoid situations where it will struggle, such as those demanding a high level of consistency and accuracy. LLMs bring unique security risks, so a careful review of security procedures that addresses these as far as possible is essential.
You may want to set up a central AI delivery team, perhaps complemented by an AI governance group with business as well as IT representation. The central team should establish the core AI architecture, including the products to be selected, central templates and the sharing of best practices. AI is a fast-moving area, so it is unrealistic to expect a lengthy evaluation process for things like model selection; equally, you do not want a chaotic landscape where every project chooses its own models and approaches and fails to learn from others. The kind of architecture you should put in place is a subject in itself and is explored in more detail in this blog. There is a lot to learn about how AI works, so it makes sense to invest in comprehensive staff training, so that people can be more productive with the technology. One other important step is to treat AI projects as a complete life cycle, from development through operations to support and monitoring. AI models can deteriorate over time, so ongoing monitoring and support are essential.
So, given all that, what are the practical steps that you can put in place to succeed with AI? Here is a partial list.
- build a proper investment/business case
- choose projects that suit AI well
- establish a foundation of data governance and data quality
- set up a cross-functional AI delivery and governance team
- set up a sound AI architecture, including robust operations
- institute a review of security procedures with AI in mind
- invest in training and AI literacy
- establish a process for the reuse of platforms, models, and templates
- monitor AI models carefully and measure return on investment
With these preparatory steps, you may not be able to guarantee that all your AI projects run smoothly, but you can at least improve your odds of success.