YouTube, like other major online video platforms, is taking harassment seriously. To gather all of its protections against hate speech, spam, and unwanted attention in one place, Google's video platform has launched the Creator Safety Center.
Visitors to the Creator Safety Center can review a number of different anti-harassment tools launched by YouTube over the past few months. In June, the company gave its users the ability to enable stricter content moderation within their comment sections, thus helping embattled creators root out spam. It’s also now possible to set guidelines on one’s channel, which allows a channel owner to establish the rules of engagement among their commenters.
YouTube’s overall policies on hate speech and harassment, as well as the steps creators can take to protect themselves from those all-too-common problems, are outlined in a video that appears front and center on the Safety Center homepage:
YouTube noted that its new hub is a response to a common pain point among creators. In a blog post introducing the Creator Safety Center, the platform cited research from the Association for Computing Machinery (ACM), which found that 95% of creators deal with unwanted attention on social media, but only 50% of them feel they have access to the resources they need to navigate those interactions.
“We collected in-depth information and tips on topics like how to stay safe when starting out as a new creator, what to do as your channels are growing and how to navigate experiencing things like bullying, trolling, account hijacking and more,” reads YouTube’s post. “Creators can now access all this information in our new Creator Safety Center, which is launching today.”
YouTube is not the only platform looking to make its creator experience safer. Twitch took steps to combat “hate raids” following a 2021 creator protest, and TikTok is one of several platforms that have rolled out strong policies against sexual harassment.