Artificial intelligence (AI) is having a major impact on the creative arts. This does not just apply to books and articles, but also to artistic images, music and video. Large language models (LLMs), the underlying basis of generative AI, are capable of writing a poem, a song or an essay, and generating an image, cartoon or video. This creates a major issue for authors, graphic designers, artists and film-makers, whose income and careers are threatened by this new technology. In a Microsoft list of the careers most likely to be affected by AI, writers and authors are near the top, along with interpreters. Technical writers, proofreaders, copywriters and editors are also on the list of most impacted careers.
A January 2024 survey of around 800 authors and illustrators found that a quarter of the illustrators had already lost work due to AI, and well over half of the authors believed that generative AI would negatively impact their future incomes. The impact on creative jobs is real: a Guardian article in May 2025 interviewed several people, from copywriters to graphic designers to voice actors, who had been laid off due to AI. Photographers are affected, with AI being used to generate stock images. The film industry is affected too, with AI capable of producing realistic crowd scenes, generating soundscapes, cloning voices, performing foreign-language dubbing, building visual effects and even writing scene scripts. The Oscar-nominated film “The Brutalist” used AI to alter voices, fine-tuning the Hungarian accent of actor Adrien Brody. AI is also creeping into production tasks in advertising, and is already used by companies including the huge marketing agency WPP; Coca-Cola ran an entirely AI-generated TV advert for Christmas 2024.

It is getting harder and harder to tell the difference between photographs and AI-generated images. This image was created by me in August 2025 using the free version of OpenArt. Other professional image-generation models can produce even more photo-realistic results; there is a host of them available, including Midjourney, Stable Diffusion, DALL-E, Leonardo and Adobe Firefly. Given this, it is no surprise that the list of professions in danger from AI includes photographic models. AI video generators include Synthesia, Pictory, InVideo and Runway.
Entire AI-written books have been appearing on Amazon, including a field guide to foraging mushrooms that was riddled with errors. Given the toxicity of some wild mushrooms, this kind of thing could lead to more than just a reduced income for writers on fungi. Hundreds of AI-written books were appearing on Amazon as far back as 2023, in a wide range of genres. Some fake books even appear under the names of well-known authors, in order to fool their fans into buying them. Generative AI is good at producing images, and this is one area where the thorny issue of AI “hallucination”, a problem wherever accuracy is required, say in factual documents, has little impact. If a court submission hallucinates some fake precedent cases, that is a serious problem; but if an AI image contains a hallucination, such as a person with an extra finger, you can simply regenerate until you get an image that you like. Deep-fake videos are already of worryingly high quality, and have been used in elaborate scams. It should be noted that not all artists are against AI: the German artist Mario Klingemann, for example, produces art that actively uses AI.
The issue of copyright and intellectual property regarding AI is a hot topic. LLMs use vast swathes of content as training data, and this may include books, films and video games as well as social media posts. The authors of these books, films and video games are understandably unhappy that the content they painstakingly created is being used, without permission or payment, to build LLMs that may put them out of a job. There are already databases tracking the lawsuits filed by companies such as Condé Nast, Dow Jones, Getty Images and Thomson Reuters against assorted AI companies. Some of these lawsuits have the potential to impact the industry significantly, as for example in the case of a class action lawsuit initiated by three authors against Anthropic, the creator of the popular AI Claude. There is also the question of who actually owns the copyright on a book or image generated by an AI. Is it the person writing the AI prompt, or the AI company, or no one at all? At present, the picture varies by country, with some countries allowing protection for computer-generated work, and others being less clear. The picture here will doubtless evolve as people become more familiar with this area, and governments gradually react by providing legislation to clarify the position.
It is unclear how this fast-moving picture will settle down. At the very least, it would be useful to have clear disclosure of when AI has been used in producing a book, image or video. The idea of an invisible watermark to identify AI-generated content has been around for a while, but the field is young and the use of such techniques is voluntary; people producing fake books will not be queuing up to use them. There is an industry of AI detection tools that claim to detect plagiarism and AI-generated content, but a March 2024 paper found these to be just 39% effective, a rate that fell to 17% when the content was lightly manipulated to fool the detector software.
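To make the watermarking idea concrete, here is a toy sketch in the spirit of the published “green-list” approach to watermarking machine-generated text. Everything here is illustrative, not any vendor's actual implementation: a hash of the previous word deterministically marks roughly half of a tiny stand-in vocabulary as “green”, a watermarking generator prefers green words, and a detector checks whether a suspiciously large share of words in a passage are green.

```python
import hashlib
import math

# Illustrative sketch of a "green-list" text watermark. The vocabulary,
# function names and parameters are assumptions made for this example.

VOCAB = [f"word{i}" for i in range(100)]  # stand-in vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Deterministically mark ~`fraction` of the vocabulary as 'green',
    keyed on a hash of the previous token. A watermarking generator
    prefers green words; normal text hits them only by chance."""
    green = set()
    for w in VOCAB:
        h = hashlib.sha256((prev_token + "|" + w).encode()).digest()
        if h[0] / 255.0 < fraction:
            green.add(w)
    return green

def green_fraction(tokens: list[str]) -> float:
    """Detector helper: the share of tokens that fall inside the
    green list implied by their predecessor."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

def z_score(tokens: list[str], fraction: float = 0.5) -> float:
    """How many standard deviations above chance the green count is.
    A large value suggests watermarked (machine-generated) text."""
    n = max(len(tokens) - 1, 1)
    observed = green_fraction(tokens) * n
    expected = fraction * n
    return (observed - expected) / math.sqrt(n * fraction * (1 - fraction))
```

A generator that always picks a green word produces text whose z-score is far above chance, so the detector flags it, while human text scores near zero. The same weakness mentioned above applies here too: the scheme only works if the generator opts in, and paraphrasing the text breaks the green-word statistics.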
Society needs to find a way forward through this thicket of AI legal and ethical complexity. Creators of content, from authors to film-makers, need to be able to invest in their work in the expectation that it will be protected by law and that they can earn income from it. It will need a broad effort from governments, lawyers and the creative industries, as well as the technology industry, to find workable solutions. As artist Joanna Maciejewska said, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”







