Make It Fair: AI, Creativity, and the Crisis No One Is Stopping

If you picked up a British newspaper today, you might have noticed something strange. Every major paper ran the same front cover. The Daily Mail and The Guardian agreeing on something? That alone should tell you how serious this is.

The Make It Fair campaign is calling out AI companies for using creative work without permission or payment. It is backed by journalists, musicians, authors, and photographers who are all watching their industries being quietly taken over by AI. And the reality is, this has been happening for a while.


What is the ‘Make It Fair’ Campaign?

AI companies are using news articles, books, photography, music, and art to train their models. This means AI can now write news stories, generate images, create music, and even mimic an artist’s style without the original creator seeing a penny.

The campaign is pushing for a simple change. Right now, AI companies can take creative work unless the creator actively opts out. The campaign argues that it should be the other way around. AI companies should have to get permission first.

For a journalist, this would mean AI companies could not scrape and repurpose their articles without asking first. For a musician, it would mean no one could clone their voice without consent. It is about basic rights over work people have created.

How is AI Getting Away with This?

The UK government is considering changing copyright laws to favour AI companies. The proposed changes would allow tech platforms to use British creative content without permission unless the creator specifically blocks them.

The problem is that most creators do not even know when their work is being used. AI companies do not ask, they do not notify, and they do not pay. The only way for a creator to fight back is to somehow track their work across billions of pieces of data. It is impossible.

This is why Make It Fair is asking the government to change the law. Instead of creatives having to chase AI companies, the companies should be responsible for getting permission first.

How Can Creatives Even Tell If AI is Using Their Work?

One of the biggest problems with AI scraping creative content is that it happens on such a massive scale that detecting it is nearly impossible. Unlike traditional copyright infringement—where an artist might see their work on a t-shirt they never licensed, or a journalist might find their article copy-pasted onto a blog—AI-generated content doesn’t leave obvious traces.

The Near-Impossible Task of Tracking AI Use

For most creators, there is no central database where they can check if their work has been scraped into an AI model. If a book, article, or song is being used in AI training, it is not as if the company will send a notification saying, “Hey, just so you know, we used your work to train our chatbot.”

Here’s why it is so difficult:

AI Models Don’t List Their Sources

When AI companies scrape content, they pull from millions of sources at once. The data is blended together, making it impossible to pinpoint exactly what was taken from where.

No Way to Search a Training Dataset

Even if you suspect your work has been used, there is no tool that lets you search a model’s training data. Some leaked datasets have been analysed, revealing entire books and articles included without permission, but for most people, finding their own content is a shot in the dark.
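For creators with a little technical help, the closest thing to a search today is a brute-force scan of the rare dataset dumps that have leaked or been made public. Below is a minimal Python sketch of that idea. Everything in it is a hypothetical placeholder: it assumes you have downloaded a dump as a folder of plain-text files, and the folder path and phrases are stand-ins for distinctive lines from your own work.

```python
# Minimal sketch: scan a locally downloaded dataset dump for your own text.
# The folder path and phrases below are hypothetical placeholders.
from pathlib import Path

# Pick distinctive sentences from your own work -- unusual phrasing,
# not common sentences that could appear anywhere. (Lowercase, since the
# scan lowercases each file before matching.)
MY_PHRASES = [
    "a distinctive sentence from my own article",
    "another unusual phrase only i have written",
]

def scan_dump(dump_dir: str) -> None:
    """Print every file in the dump that contains one of MY_PHRASES."""
    for path in Path(dump_dir).rglob("*.txt"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
        except OSError:
            continue  # skip unreadable files
        for phrase in MY_PHRASES:
            if phrase in text:
                print(f"Possible match for {phrase!r} in {path}")

if __name__ == "__main__":
    scan_dump("./dataset_dump")  # hypothetical local copy of a published dump
```

Even this only works for the handful of datasets that have ever surfaced publicly. The overwhelming majority of training data never does.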

The Only Clue is the Output

The only way a creator might realise their work has been used is by testing AI models themselves. Writers have found AI generating text that mimics their unique style, artists have seen AI produce images that look suspiciously like their own, and musicians have come across AI-generated voices that sound exactly like them. But even then, proving it is difficult.

AI Hallucinations Make It Even Trickier

AI is known for “hallucinating” sources, meaning it sometimes attributes information to real people who had nothing to do with it. This means even when an AI chatbot spits out something with a journalist’s name attached, it does not necessarily mean the AI actually read that journalist’s work.

The Only Current Workarounds

Until AI companies are forced to disclose what they are using, creatives have very few options:

Reverse Image and Text Searches – Artists sometimes use tools like Google Reverse Image Search or specialised AI detection tools to see if their work has been replicated. Writers can run excerpts of their work through AI models to see if it produces something eerily similar.

Testing AI Models Directly – Some authors have asked ChatGPT and other models directly about their own work and found them summarising books and articles they should not have access to (a rough sketch of this kind of test follows this list).

Third-Party Investigations – Some independent researchers and tech watchdogs are trying to track what datasets AI companies are using. For example, The Atlantic published an investigation revealing thousands of books that had been used to train AI without permission.
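As a rough illustration of the direct-testing approach, here is a minimal Python sketch. The ask_model function is a placeholder you would swap for a real call to whichever chatbot you are testing; the comparison uses Python's standard difflib to flag long verbatim runs shared between your original text and the model's output. A long overlap is a clue worth documenting, not proof.

```python
# Rough sketch: check a model's output for verbatim overlap with your text.
# ask_model() is a placeholder -- substitute a real call to the model you test.
from difflib import SequenceMatcher

def ask_model(prompt: str) -> str:
    """Placeholder for a real chatbot or API call."""
    return "model output would go here"

def longest_shared_run(original: str, generated: str) -> str:
    """Return the longest verbatim substring the two texts share."""
    match = SequenceMatcher(None, original, generated).find_longest_match(
        0, len(original), 0, len(generated)
    )
    return original[match.a : match.a + match.size]

original_excerpt = "the opening paragraph of your own article goes here"
output = ask_model(f"Continue this passage: {original_excerpt[:100]}")

shared = longest_shared_run(original_excerpt, output)
if len(shared.split()) >= 8:  # heuristic: 8+ word verbatim runs are suspicious
    print(f"Suspicious verbatim overlap: {shared!r}")
else:
    print("No long verbatim overlap (which proves nothing either way).")
```

A simple string comparison like this cannot prove a model was trained on your work, but it does give creators something concrete to point to.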

But none of this is an actual solution. The burden is entirely on creatives to somehow catch AI companies in the act. That is why the Make It Fair campaign is pushing for legislation that shifts the responsibility to the AI companies themselves.

If AI companies were forced to be transparent about their data sources, this wouldn’t even be a question. Until then, creatives are stuck playing detective in an impossible game.

Why This Matters to Everyone

This is not just an issue for journalists and artists. It affects anyone who values:

Real news and journalism. If AI replaces journalists, newsrooms will shrink and investigative journalism—the kind that uncovers corruption and holds power to account—will suffer.

Authentic creativity. AI-generated books, music, and art are already flooding the internet. The more it happens, the harder it is to find real, human-made content.

A fair economy. The UK’s creative industries generate £120 billion a year. Undermining them in favour of AI benefits Silicon Valley, not Britain.

AI is not just automating creativity; it is replacing it.

What Needs to Change?

The Make It Fair campaign is calling on the UK government to:

Enforce copyright laws that prevent AI companies from using creative work without permission.

Ensure fair compensation for journalists, artists, and content creators whose work is used in AI training.

Put the responsibility on tech platforms to obtain consent before scraping content, instead of placing the burden on individuals to opt out.

Simply put, AI companies should not be profiting from stolen content.

What Can You Do?

If you support this campaign, there are ways to take action:

Read more about the campaign and sign the petition at newsmediauk.org/make-it-fair.

If you are in the UK, write to your MP using the template provided by Creative Rights UK.

Support real journalism, art, and music—buy newspapers, subscribe to independent creators, and be mindful of where your content comes from.

A PR Crisis Waiting to Happen

AI is here to stay, but it should not come at the cost of exploiting human talent.

If AI companies continue to push boundaries, they are going to face serious backlash—not just from the creative industries, but from the public. People are already sceptical of AI, and this fight will only add to that distrust.

The UK government has a choice. Protect its creative industries or hand them over to Silicon Valley for free.
