After facing significant backlash for its recent decision not to ban political ads that contain false information, Facebook today made a flurry of announcements about what it’s doing to prevent interference in the 2020 U.S. Presidential election.
The platform, which in July was hit with a record $5 billion fine over the Cambridge Analytica scandal that came to light after the 2016 election, is now taking a proactive approach to election security, CEO Mark Zuckerberg said in a call with reporters.
“The bottom line here is that elections have changed significantly since 2016,” he said, per TechCrunch. In response, Facebook has “gone from being on our back foot to proactively going after some of the biggest threats out there.”
That proactivity includes targeting and taking down networks of people seeking to influence elections by spreading misinformation. In a post about the latest announcements, Facebook said that over the past three years, it’s taken down more than 50 of these networks, including four just this morning. Three of the networks taken down today originated in Iran, and one originated in Russia. All four targeted the U.S. and its upcoming election.
These networks and others like them engage in what Facebook calls “coordinated inauthentic behavior”: groups of people working together to spread misinformation while concealing who they are and what they’re doing. Taking down these networks has given Facebook “a deeper understanding of different threats and how best to counter them,” the platform said.
Facebook Will Keep (Some) Users’ Data Under Extra Security
To that end, the most prominent of today’s announcements is a new program called Facebook Protect, which will provide added security for Facebook and Instagram accounts belonging to elected officials, candidates, and federal and state departments and agencies, as well as staff members of all of the above. The program is intended to keep outside forces from breaking into these people’s accounts and potentially accessing sensitive, election-related information.
Those enrolled in the program will have mandatory two-factor authentication for all login attempts, and their accounts will be actively monitored for hacking attempts, with Facebook keeping a lookout for logins from unusual locations and new devices. If a Facebook Protect account is compromised, Facebook will automatically review other accounts in that person’s network. (So, for example, if a candidate’s staffer has their account compromised, accounts belonging to that candidate and all other staffers would undergo review.)
Facebook also announced a number of moves it’s making with regard to political ads. As mentioned above, its ad policies have been a point of contention for Facebook over the past couple of weeks. During the call, Zuckerberg reaffirmed that Facebook will not remove political ads containing misinformation, because “people should make up their own minds about what candidates are credible. I don’t think those determinations should come from tech companies.”
While it’ll continue to allow politicians to lie in their ads, Facebook is simultaneously introducing tools that are meant to make ads and other content from political entities more transparent. It’s adding a U.S. presidential candidate spend tracker that will show people how much candidates are spending on Facebook and Instagram advertising, with details about how much is being spent to target people in specific regions. And, beginning next month, Facebook will label Pages and ads that are “wholly or partially under the editorial control of their government” as “state-controlled media,” so people who see them will know the content is potential propaganda.
Facebook Is Still Trying To Minimize Misinformation That Doesn’t Come From Candidates
Facebook also announced that it’s cracking down on misinformation spread by people who aren’t politicians. It noted that it already reduces the distribution of content containing misinformation by limiting how prominently it appears in Facebook’s News Feed and on Instagram’s Explore page and hashtags. Over the next month, it’ll expand this suppression with help from third-party fact-checkers. Video and photo content those fact-checkers find false will be “prominently labeled” as misinformation, with overlays users must click through to see the content, the platform said.
Another misinformation announcement: Facebook has now banned content that encourages people not to vote or tells them that voting is meaningless. This is in addition to its previously implemented voting content policies, which ban content containing lies about things like who can vote and when and where they can cast their ballots. Those policies also ban content that threatens violence if people don’t vote a certain way, or if a certain candidate wins.
And last but not least, Facebook revealed it’s investing $2 million in a media literacy initiative to “support projects that empower people to determine what to read and share — both on Facebook and elsewhere.” Projects receiving money “range from training programs to help ensure the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers, and libraries in cities across the country,” the company said.