As YouTube continues to invest in generative AI, it is also developing the resources creators need to stay ahead of emerging technology. One of the most important tools in that defensive arsenal is a likeness detection solution that officially launched last year, with a group of eligible YouTube Partners serving as guinea pigs.
Now, YouTube’s likeness detection technology is entering the next phase of its rollout. Public figures like journalists and political candidates will gain access to the tool, which can find and moderate AI deepfakes as they pop up across the platform.
AI deepfakes became a mainstream issue during the 2024 election cycle, when YouTube enlisted creators in its effort to root out unlicensed depictions of public figures. YouTube followed up that initiative in a big way: It endorsed Congress' NO FAKES Act to express its commitment to anti-AI safeguards, and it announced a partnership with CAA that led to the development of its likeness detection solution.
With another major election cycle underway, YouTube is ensuring that politicians and journalists have access to sophisticated deepfake protection. In a video announcing the expanded rollout, YouTube Creator Liaison Rene Ritchie cited his own experience as a tech journalist. He explained that reporters can protect the integrity of their work by relying on a tool built on the “same foundation as Content ID.”
Ultimately, YouTube plans to add more moderation options to its likeness detection system. “YouTube wants a future where AI helps creativity thrive,” Ritchie said, “and that means building the legal and technical frameworks to ensure that creators stay in the driver’s seat.”
YouTube, of course, is not the only entity tackling this particular problem. As other deepfake shields — including some developed by creators — operate across multiple platforms, YouTube is bringing its solution up to par. More details about the likeness detection update can be found on the official YouTube blog.