Generative artificial intelligence (AI) has many useful applications, such as automating repetitive tasks, translating between languages and writing software. However, its ability to produce “deepfakes” of images, audio and video is a controversial and potentially troubling area.
There are two main approaches used in AI to produce deepfakes. The first is a technique called “generative adversarial networks” (GANs). Two AI models are used: the first (the “generator”) either creates a fake image or video or picks a real one, and presents its choice to the second model (the “discriminator”), which decides whether the image is real or fake. The two models repeat this process a vast number of times, learning as they go: the generator gets better at producing fakes, and the discriminator gets better at spotting them. Training an effective deepfake model can involve up to 100,000 images, processed in batches over millions of iterations.
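The adversarial loop can be sketched in miniature. The toy below is a deliberately tiny one-dimensional “GAN”: the generator maps random noise to numbers and tries to mimic real samples drawn from a Gaussian centred on 4, while the discriminator is a simple logistic classifier. All the parameter names and hyperparameters are illustrative choices, not anything from a real deepfake system.

```python
# A tiny 1-D GAN sketch: generator vs discriminator, alternating updates.
# Real deepfake models use deep networks over images; here both "models"
# are one-parameter-pair functions so the adversarial loop is visible.
import math, random
random.seed(0)

def sigmoid(s):
    s = max(-60.0, min(60.0, s))          # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-s))

w, b = 1.0, 0.0        # generator: fake = w * z + b, noise z ~ N(0, 1)
a, c = 0.1, 0.0        # discriminator: D(x) = sigmoid(a * x + c)
lr, batch = 0.05, 64

for _ in range(2000):
    real = [random.gauss(4.0, 0.5) for _ in range(batch)]
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [w * zi + b for zi in z]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = [sigmoid(a * x + c) for x in real]
    d_fake = [sigmoid(a * x + c) for x in fake]
    grad_a = sum(-(1 - dr) * x for dr, x in zip(d_real, real)) / batch \
           + sum(df * x for df, x in zip(d_fake, fake)) / batch
    grad_c = sum(-(1 - dr) for dr in d_real) / batch \
           + sum(df for df in d_fake) / batch
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = [sigmoid(a * x + c) for x in fake]
    dl_dx = [-(1 - df) * a for df in d_fake]
    w -= lr * sum(g * zi for g, zi in zip(dl_dx, z)) / batch
    b -= lr * sum(dl_dx) / batch

samples = [w * random.gauss(0.0, 1.0) + b for _ in range(1000)]
print(sum(samples) / len(samples))   # should drift toward the real mean of 4
```

After training, the generator’s output distribution has moved toward the real data, even though it never sees the real samples directly: all it gets is the discriminator’s feedback.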
The other main approach to producing deepfakes is the use of “diffusion models”. Here, a model is trained to restore an image to its original state after visual noise has been artificially added; for example, the model may fill in gaps in an image with something plausible. In some cases these models are easier to train than GANs, though they can demand a lot of processing power, and may not produce quite such high-quality results.
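That training objective — corrupt the data with noise, then learn to undo the corruption — can be illustrated without any neural network at all. The sketch below stands in a closed-form linear “denoiser” (a classic shrinkage factor) for the deep network a real diffusion model would use, on numbers rather than images; the data sizes and noise level are arbitrary illustrative choices.

```python
# Toy illustration of the diffusion training objective: add Gaussian noise
# to clean data, then fit a "denoiser" that restores it. A real diffusion
# model trains a deep network over many noise levels; here a single
# least-squares shrinkage factor k plays the denoiser's role.
import random
random.seed(1)

clean = [random.gauss(0.0, 2.0) for _ in range(5000)]   # "images" (numbers)
noisy = [x + random.gauss(0.0, 1.0) for x in clean]     # corrupted versions

# Fit x_hat = k * y by least squares: k = cov(clean, noisy) / var(noisy).
mean_n = sum(noisy) / len(noisy)
mean_c = sum(clean) / len(clean)
cov = sum((y - mean_n) * (x - mean_c) for y, x in zip(noisy, clean)) / len(clean)
var = sum((y - mean_n) ** 2 for y in noisy) / len(noisy)
k = cov / var

denoised = [k * y for y in noisy]

def mse(xs, ys):
    return sum((u - v) ** 2 for u, v in zip(xs, ys)) / len(xs)

print(round(mse(noisy, clean), 3), round(mse(denoised, clean), 3))
```

The denoised values sit measurably closer to the clean originals than the noisy ones do. Generation then works by running this restoration in reverse: start from pure noise and repeatedly “restore” it into something plausible.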
Deepfakes have obvious potential for abuse, such as controversial (fake) statements made by AI-generated versions of real politicians during an election campaign. Deepfakes can be used to make pornographic images of public figures, and have even been used to produce child abuse imagery. They can also be used by hackers for financial fraud. In 2024, a Hong Kong employee of engineering firm Arup was tricked into transferring $25 million after a video call with a deepfake “chief financial officer” and several other apparently real colleagues, including some he knew personally.
Can we spot a deepfake? A deepfake image that made a stir in the world’s press in March 2023 was one of Pope Francis seemingly wearing a quilted jacket. This image was not even particularly good, as it had some reasonably obvious flaws, as pointed out by The New Indian Express below:

Nonetheless, it captured the public imagination and popularised awareness of deepfake images. Technology moves on, especially in the world of AI-generated images, and it is now easy for anyone to produce a similar image that is harder to spot as a fake. Below is an image I created on 30 August 2025 in Leonardo.

Even this basic attempt of mine seems harder to spot as a fake than the one that made such a stir in 2023. Actually spotting a deepfake can be hard. Experts look for lighting and shadow issues, unexpected reflections, overly smooth skin, and unusual blinking patterns of a person in a video. The metadata of a file can be examined for clues, and reverse image searches can be used to track down original images. Technology can help too, with various tools emerging to detect deepfakes from vendors including OpenAI, Intel and others.
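Metadata examination, at least, is easy to automate. Many image tools, AI generators included, write identifying text entries (such as a “Software” key) into the PNG files they save. The sketch below parses a PNG’s chunk structure using only the Python standard library and lists any tEXt metadata entries; the generator name in the demo (“MadeUpGenerator”) is fictitious, and the absence of such entries proves nothing, since metadata is easily stripped.

```python
# List tEXt metadata entries in a PNG file. A PNG is a signature followed by
# chunks: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
import struct, zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    entries, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":                  # uncompressed text chunk
            key, _, value = body.partition(b"\x00")
            entries[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length                    # length + type + data + CRC
        if ctype == b"IEND":
            break
    return entries

def _chunk(ctype: bytes, body: bytes) -> bytes:
    return struct.pack(">I", len(body)) + ctype + body + \
        struct.pack(">I", zlib.crc32(ctype + body))

# Demo on a minimal in-memory PNG with a fictitious generator tag.
demo = (PNG_SIG
        + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + _chunk(b"tEXt", b"Software\x00MadeUpGenerator 1.0")
        + _chunk(b"IEND", b""))
print(png_text_chunks(demo))   # {'Software': 'MadeUpGenerator 1.0'}
```

Running this over a suspect file takes seconds and is a sensible first step before reaching for heavier detection tools.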
Deepfakes can be audio as well as visual, with obvious scope for misuse. One US senator was targeted by a deepfake Zoom call in 2024. UK Prime Minister Sir Keir Starmer was the victim of a deepfake audio recording that seemed to capture him swearing at staff. In the 2024 US election, a deepfake audio message purportedly from Joe Biden urged voters in a Democratic primary not to vote. In March 2022, Russia circulated a deepfake video of President Zelensky urging his military to lay down their arms.
Not all uses of deepfake technology are bad. It can be used to restore old films, or to produce synthetic voices for people who have lost their own to disease. People experiencing persecution in repressive regimes can tell their stories via deepfakes without the risk that real video footage might accidentally reveal their identity. Nonetheless, deepfakes are clearly a source of concern.
In response to the rise of deepfakes, governments around the world have been tightening regulation. The UK has its Online Safety Act 2023; the EU has the AI Act; and the USA has the Take It Down Act, signed into law in May 2025, as well as various state-level initiatives. In June 2025, Denmark moved to give its citizens copyright over their own likenesses. How effective this flurry of legislation will actually be remains to be seen, but the level of activity at least suggests that politicians are taking the issue seriously.
Deepfakes are here to stay, whether we like it or not: the technology to build them cannot be un-invented. Businesses and individuals need to be alert to the possibility of deepfake scams, and should consider using modern detection technologies to counter them. Governments need to continue to legislate actively in this area to protect people’s rights, and law enforcement staff need to be trained in it and to actively pursue scammers. Seeing is no longer believing. As Chico Marx put it in the 1933 movie “Duck Soup”:
“Who are you going to believe, me or your lying eyes?”