In the midst of Slopageddon, Facebook says it wants to promote original content and protect creators from impersonation

Ah, the great duality of generative AI: Platforms like YouTube and Facebook have gone whole-hog into developing LLM-based tools in hopes of making big dolla dolla bills…only to realize those same tools are aiding the spread of slop content across their UGC platforms.

In late 2024, Meta's VP of Product for Generative AI publicly envisioned a future where human creators share the spotlight with AI content mills. "They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform…that's where we see all of this going," he said.

Less than six months later, as Slopageddon kicked in, Facebook started trying to put a lid on repetitive and low-quality content, as well as content reuploaders and people otherwise impersonating bona fide creators.

Now it has results from those efforts, and more updates about how it’s “rewarding creators who post original content on Facebook” and “testing enhancements to our content protection tool to help creators more quickly detect potential impersonation across our platforms and submit reports more easily.”

According to Facebook’s data, its crackdown on spammy, low-quality content had a significant impact on audience engagement: “both views and time spent watching original Reels on Facebook approximately doubled in the second half of 2025,” it wrote in a company blog post.

To be clear, Facebook doesn’t specifically say AI-generated content is categorized as spammy and/or low-quality. What it does say is that it considers content “original” (and thus worth promoting in users’ feeds, giving creators a better shot at making money) if it:

  • Is “filmed or produced directly by a creator or owner of a Profile or Page”
  • “selectively incorporat[es] third-party content (such as remixes or overlays)” when the “focus is an on-screen presence from a creator presenting something genuinely new–like fresh information, analysis, or substantial improvements to a storyline”

Regarding that second bullet point, Facebook says content that is "duplicative or involves minor edits to another creator's post will be considered unoriginal and deprioritized." (This is part of its crackdown on reuploads and impersonators.)

As you can see, nothing in here solely pertains to AI, and Facebook's language in the first bullet (that content must be "produced" directly by creators, not "made" directly by them) leaves some wiggle room for AI.

That being said, Facebook is clearly trying to cut down on B.S. filling people’s feeds (because if it doesn’t, people will click away, and it won’t make money, and investors are already mad about how much it’s spending on AI this year). YouTube undertook a similar effort in March 2025, when it joined with nearly 20 kids’ content creators and distribution companies, including Pinkfong, WildBrain, The Wiggles, and Cocomelon owner Moonbug, to form an initiative CEO Neal Mohan said was dedicated to “the development of high-quality, age-appropriate content for young people.”

As for further targeting reuploads and other kinds of impersonation, Facebook said it “removed more than 20 million accounts impersonating large content creators[,] and impersonation reports related to large content creators dropped by 33%” in 2025.

Those reports came from a content protection tool Facebook launched in November; moving forward, it's testing "enhancements" to that same tool that will "also detect potential impersonation and make it easy for creators to submit reports," it said.

To be clear about this one, too, Facebook does not appear to be targeting deepfakes of creators the way YouTube is; instead, it’s going after reuploaded videos Content ID-style, and taking action against accounts that repost creators’ videos wholesale, without any significant change or contribution of their own. Deepfakes are a separate and also serious problem propagated by gen AI. (YouTube is doing a better job of tackling those: This week, it expanded its automatic deepfake detection/reporting program to journalists, politicians, and government officials.)

Facebook says these updates are part of its “commitment to creators.”

“We’re committed to making Facebook a place where creativity is celebrated and rewarded,” it said. “With clearer original content guidelines and stronger tools to protect creators’ work, it’s easier than ever for authentic voices to stand out.”

It’s a nice sentiment, but one that’s kinda hard to get behind when your VP of Product is out here putting AI accounts on the mainstage like they’re real people.

Published by
James Hale
