Delve into the fundamentals of artificial intelligence with our blog series. Explore key AI types, how large language models (LLMs) work, the role of training data, and the challenge of LLM hallucinations. Learn about transformer architectures, neural networks, and the difference between generative and analytic AI, with practical explanations and real-world examples for beginners and experts alike.
Large language models (LLMs) have developed significantly since ChatGPT, built on the GPT-3 model, burst onto the public scene in late 2022. There have been increases in scale, with GPT-4 several times larger than GPT-3 in terms of the number of parameters, while context windows have grown from a few thousand tokens to over a million.…
Large language models (LLMs) such as ChatGPT have excellent linguistic skills, but to use them for a specific business process, they need additional knowledge. For example, a customer service chatbot would not be much use unless it knew things such as which products the company sold, and who had purchased them and when. To supplement…
You may be aware that a large language model (LLM) is trained on data, but did you know that there is a multi-billion-dollar industry of human data labelling that supports this? Behind every LLM lies an invisible workforce that toils away, labelling files, images and videos to help train the AIs. A large language model…
We are all being deluged by news stories about artificial intelligence (AI) and the large language models (LLMs) at the heart of the latest AI trend, generative AI. But how do they actually work? Some level of understanding of this would seem quite useful for a technology that now creates over half of the content…
Several studies show that the current failure rate of AI projects is shocking. The highest-profile is a 2025 MIT study, based on hundreds of interviews, which found that the failure rate of AI projects was 95%. This stark number is not actually so different from other estimates, which have ranged from…
We are used to dealing with multiple sensory inputs: the information from our senses of sight, hearing, taste, touch and smell. We combine this information to make decisions easily and unconsciously. By contrast, artificial intelligence (AI) chatbots based on large language models (LLMs) have been mostly restricted to a single mode of communication – text.…
As the large language models (LLMs) that underlie artificial intelligence chatbots like ChatGPT permeate more and more of daily life, people are increasingly coming to depend on them. Students use LLMs to help with their homework, programmers use them to debug or write new code, and marketers use them to write descriptive content about their…
We have grown used to computers behaving consistently. If you type a formula into Excel to multiply two numbers together, not only will the answer be correct, but it will produce the same result when you next open the spreadsheet. This is not the way that large language models (LLMs), the models underlying generative AI,…
With all the current excitement about generative artificial intelligence (AI) in the form of chatbots like ChatGPT and its rivals, it is easy to forget that the scope of AI goes well beyond this particular strand. The media is awash with stories about the large language models (LLMs) that underpin generative…
One curious aspect of generative AI is the different rates of progress that large language models (LLMs) have made in general-purpose text generation compared with image creation. It is now getting on for three years since the release of OpenAI's ChatGPT in November 2022, yet the progress of…