The Information Difference
  • SERVICES
    • Software Vendor Services
      • Vendor Profiles
    • Data Management Consultancy
    • Market Research
    • IT Strategy Facilitation
    • Enterprise Services
  • Our Expertise
    • Focus Areas
      • Artificial Intelligence
      • Master Data Management
      • Data Quality
      • Data Governance
    • Landscapes
      • MDM Landscape Q2 2025
      • DQ Landscape Q2 2025
      • BDW Landscape Q4 2022
    • Product Evaluation Format
    • Mergers and Acquisitions
  • ABOUT US
  • BLOG
  • CONTACT

Daily Archives: 14 October, 2025

A stylised scene showing both humans and AI working together to annotate visual data such as people or digital avatars drawing bounding boxes, polygons, or tags on images of everyday items, vehicles, or faces. This conveys the blend of human insight and machine efficiency fundamental to the labelling process.

Under the Covers of AI – Data Labelling

Artificial Intelligence, Foundations of AIBy Mat Newcomb14 October, 2025

You may be aware that a large language model (LLM) is trained on data, but did you know that there is a multi-billion-dollar industry of human data labelling that supports this? Behind every LLM lies an invisible workforce that toils away, labelling files, images and videos to help train the AIs. A large language model…

A cartoon robot with desperate, wide eyes pulling the lever on a slot machine, surrounded by coins, error codes or circuit diagrams.

Machine Yearning – AI Addiction

Artificial Intelligence, Emerging Topics in AIBy Mat Newcomb14 October, 2025

Addiction to gambling is common: over 2% of UK adults are problem gamblers. Gamblers frequently bet irrationally in several ways. In the “illusion of control”, people wrongly believe they can influence the outcome of a random or chance-based event, such as a slot machine or dice roll. The “gambler’s fallacy” is the irrational belief that…

A cartoon ghost or floating robot with a surprised expression looming over a set of formulas, hinting at “haunted” calculations or “exorcising” Copilot.

Formula for Disaster: Copilot in Excel

Artificial Intelligence, Emerging Topics in AIBy Mat Newcomb14 October, 2025

I saw a disturbing image today. Not something from a war zone or a horror film, but a post on LinkedIn: Microsoft have, in their wisdom, introduced a large language model (LLM)-driven AI helper to Excel called Copilot. After entering “=copilot( )”, you can type in any text you like within the brackets, and Excel…

Cartoon-style art highlighting both hype and anxiety (e.g., a robot handing out conflicting "Hope" and "Hype" flyers to a sceptical crowd).

What the World Thinks About AI

Artificial Intelligence, Ethics of AIBy Mat Newcomb14 October, 2025

It is now almost three years since the release of ChatGPT by OpenAI on an unsuspecting world. In this time large language models (LLMs) have had some significant impacts. Over half of all venture capital investment is now related to artificial intelligence (AI). Nvidia, which makes the graphics processing units that power most AI searches,…

A computer or robot hand reaching for a bright red apple, which has a digital “poison” icon subtly embedded on its surface. This references both classic iconography (“poisoned apple”) and the idea of tempting but corrupt content entering the system.

Tainted Texts – AI Data Poisoning

Artificial Intelligence, Emerging Topics in AIBy Mat Newcomb14 October, 2025

Large language models (LLMs), the engines at the heart of generative AI chatbots like ChatGPT, Claude, Gemini and Grok, are susceptible to various kinds of attack by hackers. For example, prompt injection is where an attacker fills a prompt with malicious input to either leak data or bypass controls. There are actually many other types…

The Information Difference
Copyright © 2007-2025 The Information Difference Ltd. All Rights Reserved.