Insights is a weekly series featuring entertainment industry veteran David Bloom. It represents an experiment of sorts in digital-age journalism and audience engagement with a focus on the intersection of entertainment and technology, an area that David has written about and thought about and been part of in various career incarnations for much of the past 25 years. David welcomes your thoughts, perspectives, calumnies, and kudos at email@example.com, or on Twitter @DavidBloom.
Sundown on Wednesday, September 27 marked the start of the Jewish New Year (Happy 5778 for those of you keeping track). It was also the day that Facebook COO Sheryl Sandberg acknowledged in a post that the company let buyers target ads to people who self-identified with topics such as “Jew hater,” “How to burn jews,” or “History of ‘why jews ruin the world.’”
“Seeing those words made me disgusted and disappointed – disgusted by these sentiments and disappointed that our systems allowed this,” Sandberg wrote. “Hate has no place on Facebook – and as a Jew, as a mother, and as a human being, I know the damage that can come from hate.”
The post’s timing at the start of the Jewish High Holy Days was possibly intentional, but was certainly ripe for wry commentary. Once again, we’re talking about how one of the most powerful and profitable ad platforms in history didn’t know about crappy things being done with the tools that make it so much money.
It’s particularly remarkable that this happened at a company with the reach and resources of Facebook, and one led by a CEO and COO who are both Jewish. The company seemingly didn’t even know that targeting ads for racists was possible, and wasn’t aware of the offensive ad categories.
It makes you wonder what else they don’t know. Basically, who’s been paying attention to the beast that Facebook has become?
Sandberg called the “Jew hater” situation “totally inappropriate and a fail on our part,” an “unintended consequence” that “in the future, we will be unrelenting in identifying and fixing … as quickly as possible.” Good goals, now that they’re starting to realize what they’ve created.
As it is, this was also another week of developments in an earlier ad-targeting scandal at Facebook: the likely Russian effort to sway last year’s elections.
CEO Mark Zuckerberg said towards the end of September that Facebook will hand over to Congressional investigators material from more than 3,000 ads that it sold to a Russian troll farm over a two-year period. Similar material already has been handed over to Special Counsel Robert Mueller’s separate investigation.
To Facebook’s credit, once ProPublica exposed the racist-slur targeting categories, the company moved quickly on several fronts, removing the offensive terms and temporarily freezing and reviewing thousands of others before reinstating them.
Sandberg said Facebook was “strengthening our ads targeting policies and tools,” promising to better enforce existing rules that ban attacks on people based on race, ethnicity, religion, gender and several other categories. The company will add more human review and oversight of the automated processes that enable the giant ad machine.
And Sandberg said the company is creating a program to allow users to directly report “potential abuses of our ads system,” though the whole point of ad targeting is to send messages to people who want them. Basically, if Facebook does a good job targeting ads, the people receiving them are unlikely to complain, no matter how objectionable the content.
In April, eMarketer projected that Facebook would make nearly $34 billion in advertising revenue this year. Combined with Google’s projected $73 billion, the Big Two are expected to claim nearly half of the entire world’s spending on digital advertising.
With that kind of money, it might be good for both companies to invest heavily in human oversight of their mostly automated systems, to better anticipate and avoid issues like the racist ad categories, fake-news manipulations, and the Google Ad-Pocalypse.
I’m particularly interested in what happens to both companies as they use more and more technology based on their huge investments in artificial intelligence, taking automation and algorithmic power to radical new levels that may be beyond oversight by most humans.
Elon Musk has spoken in dystopian, even apocalyptic terms of the potential for runaway AI to ruin the world. He may be right to be alarmed, though others seem more sanguine. At this point, however, I’d settle for better oversight by actual people of the Facebook and Google ad operations so they can stop enabling the worst impulses of their billions of all-too-human users.