Meta is joining YouTube in the fight against repetitive, reposted videos — including the ones generated by AI models. The company formerly known as Facebook has announced a crackdown on its namesake platform that will deny monetization for producers of “spammy content.”
Details about Meta’s new policy can be found in a post published on the official Facebook blog. “Too often the same meme or video pops up repeatedly – sometimes from accounts pretending to be the creator and other times from different spammy accounts,” reads the post. “It dulls the experience for all and makes it harder for fresh voices to break through. To improve your Feed, we’re introducing stronger measures to reduce unoriginal content on Facebook and ultimately protect and elevate creators sharing original content.”
The post explained that offending accounts will lose access to monetization and will “receive reduced distribution” for future uploads. Those policies are reminiscent of changes YouTube recently introduced; at the start of July, Google’s video hub announced new rules governing the repetitive, reposted videos that are currently running amok on YouTube Shorts.
Though YouTube users initially speculated that the new policy could nix monetization for certain types of creator content (such as reaction videos), a more recent update clarified that the rules target the sort of “inauthentic” fare produced by generative AI programs. Facebook has recently been riddled with posts of that nature, leading to a wide-ranging initiative that looks to address the “unfortunate reality” of generative AI spam.
Even before Meta updated its rules, it had already kicked off its fight against AI slop. The latest Facebook blog post notes that 500,000 accounts engaged in “spammy behavior” have received sanctions since the start of 2025. Meta also eighty-sixed 10 million profiles that impersonated notable content creators.
Those steps, when combined with the monetization update, are welcome changes for the creators who have long complained about freebooted content on Facebook. But not everyone is happy with Meta’s moderation practices. Earlier this year, a rash of mistaken account deactivations led to the launch of a petition asking Meta to stop issuing takedowns without offering human customer support. More than 30,000 people have signed that petition.
The crackdown on AI slop will naturally lead to more moderator action against offending accounts. That has a lot of potential for good, but Meta will need to cut down the number of false positives if it wants its new policy to please Facebook users.