Insights is a weekly series featuring entertainment industry veteran David Bloom. It represents an experiment of sorts in digital-age journalism and audience engagement with a focus on the intersection of entertainment and technology, an area that David has written about and thought about and been part of in various career incarnations for much of the past 25 years. David welcomes your thoughts, perspectives, calumnies, and kudos on Twitter @DavidBloom.

When comments were cool.

Back in those halcyon days when the Internet was young, offering comment sections and online reviews on your site was seen as a crucial step in the democratization of media, opening the conversation to everyone after centuries of centralized gatekeepers. That ideal was even kind of true.

Amateur reviewers on Amazon and Yelp developed their own reputations and fan bases. Fan-made playlists on Spotify became essential tastemakers. YouTube enabled a generation of creators who built followings in part through their comment sections. I personally saw the value of user comments while working on a Hollywood insider news site, where reader posts often provided as much insight as even an experienced reporter might bring.

And then people (and governments) decided to weaponize the newly empowered vox populi.

These days, who trusts Yelp user reviews, given all the rating manipulations by companies against their competitors? And as those Spotify user playlists became more influential, labels skewed the tastemaking with modern-day payola to get their tunes included.

Social-media influencers routinely sign huge marketing deals to post about products. As detailed at a Tubefilter event earlier this year, that trend has left influencers crunched between making a living, remaining authentic, disclosing conflicts, and limiting hostile algorithm treatment of #ad and #sponsored posts. But who do you trust?

And that Hollywood news site? The comment sections were always stuffed with spam, but that was easy enough to manage. More problematic were the ad hominem attacks against people mentioned in stories. We’d try to leave negative material intact if it offered useful perspective, but woe to the hapless editor who left in place criticism of someone with a connection to the site’s leadership. Next time, just email us a tip, dude.

Once again, we’re discovering why we can’t have nice things.

And more companies are getting out of the public input business as a result.

ESPN recently confirmed to Deadspin that its websites dropped their comment sections, which were linked to Facebook. The sports giant said fans have “more touch points than ever” to share opinions elsewhere. To touch fans on those points, ESPN will create more “social-media material” (i.e., post even more inane social-media memes about LeBron James, Tom Brady and Cristiano Ronaldo).

Crucially for ESPN, however, the company has offloaded the headache of moderating conversations to someone else.

So, too, has Netflix, which said it will ditch user-created reviews at the end of July. The video powerhouse already dropped its five-star rating system for a thumbs-up or -down alternative. A company spokesman told CNET that viewers were using reviews less and less. More importantly, Netflix algorithms weren’t using the reviews at all.

The reviews had been part of Netflix from its 1997 beginnings, so this marks a definitive shift from O.G. Internet democracy. Add it to the funeral pyre along with net neutrality and StumbleUpon.

Meanwhile, the Moderators in Chief – Facebook, YouTube, and Twitter – continue to demonstrate why no one should want this gig.

Despite billions of dollars in investments and thousands of hires, they’re all still stepping into one controversy after another.

Twitter, conscious of its position as chief bullhorn for the chief bullspewer, has reportedly been killing off tens of thousands of fake accounts every month, trying to avoid a repeat of 2016’s Russian election manipulation. Reports continue to surface, however, of another year of fake accounts tied to Russian tricksters, now posing as, among other things, former Democrats urging others to #walkaway.

The Comment Moderation Blues continue apace at Facebook, too. The latest snafu: blocking a Texas newspaper’s post of a section of the Declaration of Independence as “hate speech.”

“The post was removed by mistake and restored as soon as we looked into it,” a company spokesperson said. “We process millions of reports each week, and sometimes we get things wrong.”

Admittedly, the Declaration’s paragraphs 27 to 31 do include the problematic phrase “merciless Indian Savages,” part of a lengthy list of crimes against the colonies committed by their feckless former English monarch. And the Founding Fathers did, after all, have issues sharing those unalienable rights granted by their Creator with women, poor people, Native Americans, slaves and others.

Nonetheless, last year, some Donald Trump supporters thought NPR was spreading “propaganda” when it also reposted sections of the Declaration. And it’s always useful to remember we’ve had dumb people for a long time. A 1990 poll (i.e., before the Internet went public) found only a third of respondents could identify the Bill of Rights or its purpose in the Constitution.

Regardless, Facebook’s latest Adventure in Boneheaded Moderation is both a near-comical illustration of the challenges the online giant faces in trying to keep its 2.2 billion users less jerky and evidence of a much bigger issue.

Basically, that issue is whether we can trust much of anything we’re seeing online.

We have a few choices to maintain our sanity:

  • Go back to relying on a handful of trusted gatekeepers that we hope can wade through all the muck on our behalf. Of course, no one believes any gatekeepers anymore, so that’ll be a challenging leap of faith.
  • Stop reading and interacting with social media. Lots of people say they’re doing less with Facebook, for instance, since the last election and resulting data-sharing scandals. But it’s interesting to note that, despite all that, Facebook continues to have banner earnings reports, and Mark Zuckerberg just became the world’s third-richest man.
  • Really hope those artificial-intelligence bots get way better fast at blocking the crap and giving us only the good stuff. We’ll be getting AI in everything we do soon enough. I’m just hoping Elon Musk is wrong when he keeps talking about all the horrible things that can happen when AI isn’t properly controlled on the front end. Ray Kurzweil’s Singularity isn’t automatically going to come with a happy ending for humanity.

Facebook has been hiring 4,000 human moderators and doubling down on its AI investments to improve its automated screening. YouTube has hired 10,000 moderators, and reportedly has some of the world’s best AI technology, thanks to all the data parent company Google scrapes from you. All those moderators risk PTSD and other mental-health issues as they screen thousands of awful posts. So perhaps it’s not a surprise both Internet giants keep smacking up against new controversies.

This stuff is hard, and the Duopoly’s efforts to make the Internet A Clean, Well-Lighted Place to hang haven’t consistently delivered. And if they can’t do it, I understand why even big companies such as ESPN and Netflix are getting out of the moderation game completely. No input from you, sir or madam!

I, for one, won’t miss ESPN comments, or Netflix user reviews or, truth be told, much of what I see on Facebook and Twitter. But as the Internet grows and evolves, we have to figure out who to trust to help us through the thickets of politics, pop culture and so much else. Relying on amateurs wasn’t a big success, humans get burned out, and gatekeepers aren’t what they used to be. Dare I root for the robots?
