Google is planning a decisive move against so-called creators who use generative AI to churn out low-quality videos for YouTube in an effort to rake in advertising revenue.
A report in TechCrunch reveals that Google is about to tighten its policy on what types of content users can monetise on YouTube, clamping down on the flood of AI-generated garbage that provides little value to anyone.
Google has known for some time that a growing number of YouTube “creators” are using AI services from companies like RunwayML, Synthesia, Luma and Kling to rapidly generate videos and post them to the platform, chasing as many views as possible to crank up their revenue streams.
These mass-produced videos, often labelled “AI slop”, are overwhelmingly low-quality, created with minimal effort, and offer viewers very little in the way of educational or entertainment value.
Such content includes videos in which AI-generated voices babble meaningless garbage over existing, repurposed photos and video clips. There are also dozens of channels with subpar, AI-generated music that have somehow managed to rack up millions of views. On top of that, there are AI-generated clips about news events, with rehashed commentary on topics such as the P Diddy trial, which offer viewers nothing beyond what’s available on “proper” news channels.
There’s also plenty of misleading content on YouTube, such as an AI-generated “true crime” series featuring murder cases that never happened. Neither the victims nor the criminals existed, and the stories were entirely fabricated, yet the series somehow got itself in front of millions of viewers.
This is the kind of crap Google wants to clamp down on. Thankfully (and despite being partly responsible for it), the company has decided it no longer wants to reward “creators” for such low-effort, low-impact garbage, and to do something about it, it’s going to cut them off from its revenue-sharing program.
YouTube is set to start enforcing an existing Partner Program policy that requires participating channels to upload only “original and authentic” content. For some reason, that rule hasn’t previously been applied to AI-generated videos; instead, it has mostly been used to combat those who blatantly rip off content from other creators.
The change will come into effect on July 15, according to a video posted on YouTube by its head of editorial, Rene Ritchie. From that date, Google will attempt to filter out mass-produced videos that rely too heavily on AI and repackaged content and lack any real creative input.
Ritchie said YouTube will be looking out especially for automated and low-quality content, though it’s not yet clear exactly how it intends to define this AI junk. Some YouTubers have expressed concern that the policy change could affect the commentary videos they post in reaction to news and events, but Ritchie stressed that such videos will still be eligible for monetisation, so long as the commentary and interpretation are novel and unique.
It’s easy to see why Google has decided to act. There seems to be no end to the tidal wave of AI crud flooding onto YouTube, and it’s eroding the quality of the platform itself. By removing the financial incentive to post this kind of content, Google makes it likely that those responsible will simply give up.