TikTok wants its fact-checking to be user-generated, too

07/31/2025
We live in a disturbingly post-truth era, where public figures lie for clout and laypeople lap up conspiracy theories that suit their worldview, without bothering to check what’s real and what’s fake. And nowhere does “post-truth” spread better than online.

We saw this firsthand around the 2020 presidential election, when the Trump camp’s claims of voting interference found a home on YouTube. After the platform finally cracked down on that content, researchers at NYU saw the amount of related misinformation on platforms like Facebook and Twitter “drop sharply,” too.

This study showed that video is a primary source for misinformation online, and that by running a tight moderation ship, platforms can curb it.

Tubefilter

Unfortunately, in the five years since, YouTube, Facebook, and other platforms have significantly scaled back their misinformation moderation. YouTube ruled that, actually, creators are allowed to lie about elections, and just before Donald Trump took office for his second term, Meta ended its official fact-checking program altogether. In the months since Trump returned to office, YouTube has also begun telling human moderators to take down fewer videos containing misinformation, derogatory language, and other violative content. And then there's Elon Musk's X, which is a whole other story.

This leaves us in a social media environment where misinformation thrives. But platforms seem to recognize that simply abandoning moderation won't sit well with many users, so instead of getting rid of fact-checking systems altogether, they're turning to crowdsourcing.

X and Meta already have Community Notes systems where users submit fact-checks on one another’s posts. YouTube also has “information panels,” which are crowdsourced but only from an approved list of publishers, so it half counts.

Now TikTok is implementing its own version, called Footnotes.

First announced this past April, Footnotes “draws on the collective knowledge of the TikTok community by allowing people to add relevant information to content on our platform,” TikTok said.

TikTok is doing some screening for the program. Users have to apply to become fact-checkers and must meet certain criteria: be based in the U.S., be over 18, have been on TikTok for at least six months, and have no recent history of violating the platform's Community Guidelines.

TikTok says nearly 80,000 people have been approved as Footnotes contributors so far.

Footnotes works very similarly to X's and Meta's programs, using a ranking system that is supposed to find an objective contribution or middle ground: a statement that people who typically hold differing opinions can both agree with.

Submitted Footnotes become visible to other Footnotes contributors first. Then, if enough contributors rate a note as “helpful,” it goes out to the TikTok community at large, where anyone can vote on its helpfulness.
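The cross-viewpoint agreement idea above can be sketched in a few lines of code. TikTok hasn't published Footnotes' actual algorithm, so this is purely an illustrative toy (the function name, grouping scheme, and thresholds are all hypothetical): a note surfaces only when raters from more than one viewpoint group independently find it helpful, rather than by a simple overall majority.

```python
from collections import defaultdict

def bridged_helpfulness(ratings, min_per_group=2):
    """Toy 'bridging' aggregator, not TikTok's real scoring.

    ratings: list of (rater_group, is_helpful) pairs, where rater_group
    is a stand-in for a cluster of raters with similar past opinions.
    The note surfaces only if EVERY group that weighed in rated it
    helpful by majority, with a minimum number of raters per group.
    """
    helpful = defaultdict(int)
    total = defaultdict(int)
    for group, is_helpful in ratings:
        total[group] += 1
        helpful[group] += is_helpful
    if len(total) < 2:  # need agreement across differing viewpoints
        return False
    return all(
        total[g] >= min_per_group and helpful[g] * 2 > total[g]
        for g in total
    )

# A note both viewpoint groups like surfaces...
agree = [("A", True), ("A", True), ("B", True), ("B", True), ("B", False)]
# ...while a note only one side likes does not, even with more total votes.
one_sided = [("A", True), ("A", True), ("A", True), ("B", False), ("B", False)]
```

The key design choice this illustrates is that per-group majorities replace a global vote count, which is what keeps a large but one-sided bloc of raters from pushing a note through on its own.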

Worth noting: Though TikTok is leaning into crowdsourced fact-checking, it’s not stopping its own internal moderation program. Instead, Footnotes is intended as an ancillary force whose main objective is providing context around hot-button topics, while TikTok’s moderation team will continue enforcing all its policies, including those around misinformation.

In April, TikTok said Footnotes will “[add] to these efforts with more ways to add helpful details that may be missing, ultimately enriching discussions on TikTok and improving the experience for viewers, contributors and creators alike.”
