The Mobile Review

The Mobile Review: Your Trusted Guide to the Latest Tech Trends.

YouTube’s Plan to Demonetize AI-Generated Videos

Are AI-generated YouTube videos really on their way out? Google has just announced a change to YouTube’s monetization rules that will cut certain AI-generated content off from earning money. No money, no AI? This magic solution, however, ignores the heart of the problem with this kind of video.

YouTube’s new policy comes into force on July 15, 2025, and not just in the U.S., but in France too. The platform won’t be demonetizing all AI-assisted videos outright, however. What it has in its sights is what it calls repetitive, inauthentic, and mass-produced content.

What’s an inauthentic AI video?

At least once in your life, you’ve come across a YouTube video or Short with a generic, robotic voice playing over random clips lifted from here and there: a low-effort video with no added value. This is what YouTube seems to want to attack by insisting on the notion of authenticity.

“Attack” is a big word, though, since YouTube claims, on its official website, to have “always demanded that monetized content be original and authentic.” In an explanatory video, YouTube’s creator liaison Rene Ritchie even asserts that this is a “minor update to the long-standing YouTube Partner Program rules, designed to better spot mass-produced or repetitive content.”

Curious coincidence or not, Rene Ritchie’s video itself looks a lot like AI slop, thanks to YouTube’s rather approximate automatic dubbing.

YouTube downplays the AI problem

YouTube seems intent on downplaying the proliferation of low-effort AI-generated content on its platform, a proliferation that goes hand in hand with monetization, since earning money is the main motivation behind producing this kind of content. YouTube would therefore do well to clarify exactly what it considers inauthentic, mass-produced, and repetitive content.

The 2.6-million-subscriber YouTube channel “Bloo”, recently spotted by CNBC, fits the bill. On this channel, there’s no human presence on screen; instead, a clumsily animated virtual avatar addresses the audience in an AI voice dubbed into several languages. The channel churns out at least one video a day, the avatar yelling without ever pausing for breath over gameplay from whatever game is popular at the moment (GTA 5, Roblox, and the like). This is exactly the kind of video designed to hook children, or even young teenagers, while dumbing them down.

If YouTube already excluded this kind of content from monetization, as it claims, why is that not the case here? And if it becomes the case from July 15, will YouTube distinguish between this kind of content and videos from VTubers in general, those creators who only appear on video through a virtual avatar? VTubers can produce original content, with or without AI. But in the “Bloo” example above, the only human intervention in the production process is the creators’ fingers typing out prompts.

AI videos have more than an authenticity problem

It makes sense for YouTube to distinguish between authentic and inauthentic AI content. Admittedly, its notion of authenticity remains very vague, but not lumping together all AI-created or AI-augmented content is a good thing. Personally, I love following a series called “Presidents play Mass Effect” from the PrimeRadiancy channel.

These videos feature Barack Obama, Joe Biden, and Donald Trump as if they were chatting on Discord while playing video games from the Mass Effect saga. The script is humorous, and you can tell a human was behind the writing. The gameplay is also created by a human. At the start of each video, there’s a disclaimer stating that these are AI voices. In short, this kind of content can be considered authentic.

But beyond authenticity and originality, the proliferation of AI videos poses another problem, and not a minor one. More and more often, I come across racist or sexist videos made entirely by AI. These videos are often presented as comedy sketches, and it only takes a few seconds to realize what harmful message they are spreading.

Without sharing it with you here, the example that struck me most showed a white couple sitting on the porch in front of their house. Suddenly, a black man runs through their garden with a TV under his arm and disappears into the distance. The wife exclaims, “I think this is mine,” referring to the TV supposedly stolen by the running man. The husband then corrects her: “But no, darling, ours is in the garden,” pointing to another black man crouched down pulling weeds.

When AI is used to create racist memes, it’s already too late

Obviously, YouTube isn’t going to question whether this content is genuine or not before demonetizing it. Racism and incitement to hatred are against YouTube’s rules. But this illustrates just how accessible the creation of purely AI content has become. So accessible that cutting-edge AI video generation tools are being hijacked for simple shitposting.

Deutsche Welle dedicated a program to this subject, though focused on TikTok rather than YouTube. Many of the videos it examined had been created using Google’s Veo 3 tool. And although YouTube downplays the importance of this update, presenting it as a simple “minor adjustment” or clarification, the reality is quite different.

Allowing this type of content to proliferate, and its creators to profit from it, could ultimately damage the platform’s reputation and value. YouTube’s apparent calm, and its lack of precision about what counts as authentic, may well betray an intention to strike hard.

What do you think of this change in YouTube’s policy? Have you also noticed a proliferation of low-end AI content? Do you find it hard to distinguish AI slop from genuine content? Can a video generated by AI or using AI-generated elements be authentic, in your opinion?
