Protecting Democracy and Public Health from Online Disinformation
The Challenge: Disinformation Undermines Our Ability to Govern Ourselves
In July, a video titled “America’s Frontline Doctors” raced like a runaway train across the major digital platforms. The video, hosted on Breitbart’s Facebook page, claimed that face masks are dangerous, social distancing is unnecessary, and the drug hydroxychloroquine is a miracle cure for the coronavirus. It racked up 20 million views in just 12 hours on Facebook alone before it was ultimately removed by Facebook, Twitter, and YouTube for violating their guidelines.
Today only about half of Americans say they would take a coronavirus vaccine when it is available—not enough to provide for herd immunity.1 According to one poll, over 40 percent of Americans would decline a shot in part because they believe the vaccine is a scheme by Bill Gates to implant a microchip inside them.2
The platforms deserve credit for limiting some of the disinformation related to the U.S. presidential election count, but hoaxes continued to spread through private groups, such as those devoted to QAnon, which now has millions of adherents.3 A report by the campaigning network Avaaz reveals that health misinformation generated a staggering 3.8 billion views on Facebook globally in the past year.4 GMF Digital has found that websites that repeatedly publish false content, or that gather and present information irresponsibly, have tripled their Facebook interactions in the United States since early 2017, and they now rival some of the most reputable news outlets.5
Relying on platforms to play whack-a-mole with individual pieces of dishonest content is clearly not working. In fact, the number of posts that would require whacking is so vast that any platform with the power to monitor it all in real time would itself represent a further threat to the democratic tenet of free speech.
But the disinformation emanates from an ecosystem of manipulation that the platforms could disable with sufficient commitment. A relatively small number of high-traffic outlets launder content as news: the top ten of GMF Digital’s most engaged-with deceptive sites account for 62 percent of the interactions among the 721 sites in the sample. Content from these outlets is promoted by networks of pages, influencers, and groups, and then algorithmically amplified to many more users through their newsfeeds.
Despite all the new anti-disinformation rules announced by platforms, the manipulation ecosystem continues to operate online, enlisting users into inadvertently spreading disinformation to others. An Internet utopianism characterized by the belief that the network would enhance democracy by its very design—bringing voice to the voiceless, power to the powerless, and the wisdom of crowds—lulled many into assuming it should be a policy-free zone. But, as our lives and our news consumption moved online, the Wild West atmosphere created too many opportunities for malign actors to manipulate users, distort democratic debate, and undermine the consensus building needed to address major challenges like the one the coronavirus presents.
The Solution: Change the Incentives to Protect the Digital Public Square
Dismantling the disinformation ecosystem, as GMF Digital proposed in its roadmap for Safeguarding Digital Democracy, must avoid conscripting government or industry to play the role of “truth police.”6 Instead, platform incentives should be changed so that expectations of fairness from the analog world are honored in the digital world. For this new system to work, platforms should implement circuit breakers to give themselves time to act. And a Public Broadcasting Service (PBS) of the Internet should be created to support independent journalism.
Updating expectations from the analog world for the digital era would start with campaign advertising transparency as required by the proposed bipartisan Honest Ads Act.7 Consumers should be protected against computer-generated deceptions such as deepfakes and the intrusive collection and use of their personal data—just as they are protected against fraud offline. Civil rights protections against discrimination and harassment must apply online as well. Importing a version of the transparency that offline journalism traditionally practices (for example, through bylines and mastheads) would hold platforms accountable to the public for enforcing their own rules, such as limiting the reach of websites that repeatedly violate platform standards. It would also help users protect themselves against manipulation by clarifying the origins, coordination, and funding sources for websites, pages, channels, influencers, and groups.
These new practices can only be put in place if the lightning speed of online information sharing can be paused before it does irreversible harm. Platforms should employ “circuit breakers”—like those used to prevent market-driven panics and slow down high-frequency trading—to halt the viral roller coaster and give themselves the opportunity to evaluate content before it reaches a mass audience.8 Human review to determine whether a piece of content violates platform guidelines would ensure that platforms are aware of dangerous viral spread as it happens, rather than after the damage has been done. Twitter and Facebook have said they are already considering variations on this notion.9
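To make the mechanism concrete, the sketch below shows one way a virality circuit breaker could work. It is a minimal illustration, not a description of any platform’s actual system: the share-velocity threshold, the sliding window, and the `ViralityCircuitBreaker` class are all illustrative assumptions.

```python
from collections import deque
from typing import Optional
import time

# Illustrative parameters -- a real platform would tune these per content type.
SHARE_LIMIT = 10_000      # shares within the window that trip the breaker
WINDOW_SECONDS = 3_600    # length of the sliding window (one hour)

class ViralityCircuitBreaker:
    """Pauses further algorithmic amplification of a post once its share
    velocity crosses a threshold, holding it until a human reviews it."""

    def __init__(self) -> None:
        self.share_times: dict[str, deque] = {}  # post_id -> share timestamps
        self.paused: set[str] = set()            # posts awaiting human review

    def record_share(self, post_id: str, now: Optional[float] = None) -> bool:
        """Record one share. Returns True if the post may still be amplified,
        False if the breaker has tripped and the post is held for review."""
        now = time.time() if now is None else now
        times = self.share_times.setdefault(post_id, deque())
        times.append(now)
        # Discard shares that have fallen out of the sliding window.
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()
        if post_id in self.paused:
            return False
        if len(times) > SHARE_LIMIT:
            self.paused.add(post_id)  # trip the breaker: queue for review
            return False
        return True

    def review(self, post_id: str, violates_guidelines: bool) -> None:
        """Apply a human reviewer's verdict: release the post or keep it held."""
        if not violates_guidelines:
            self.paused.discard(post_id)
```

The design mirrors a market circuit breaker: amplification continues normally below the threshold, and it is a velocity spike, rather than the content itself, that triggers the pause, creating exactly the window for human review that the paragraph above describes.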
Finally, users need sources of accurate information. As the advertising revenues that once supported independent journalism have moved to the platforms, it has become clear that journalism is a public good in need of support. A PBS of the Internet would have the platforms subsidize the news content from which they—and democracy—benefit.10
Conclusion
It has become clear that the current whack-a-mole approach to disinformation is inadequate. At a time of public-health emergency and democratic erosion, the information ecosystem can be cleaned up by updating analog expectations of fairness for the digital world, including transparency about rules and sources of information, and by treating independent journalism like the public good that it is.
Karen Kornbluh is a senior fellow and director of GMF Digital. She previously served as the U.S. Ambassador to the Organization for Economic Cooperation and Development and as a senior official at the Federal Communications Commission and the U.S. Department of the Treasury.
1 Ben Kamisar and Melissa Holzberg, “Poll: Less than Half of Americans Say They Will Get a Coronavirus Vaccine,” NBC News, August 18, 2020.
2 Andrew Romano, “New Yahoo News/YouGov Poll Shows Coronavirus Conspiracy Theories Spreading on the Right May Hamper Vaccine Efforts,” Yahoo News, May 22, 2020.
3 Ari Sen and Brandy Zadrozny, “QAnon Groups Have Millions of Members on Facebook, Documents Show,” NBC News, August 10, 2020.
4 Avaaz, Facebook’s Algorithm: A Major Threat to Public Health, August 19, 2020.
5 Karen Kornbluh, Adrienne Goldstein, and Eli Weiner, New Study by Digital New Deal Finds Engagement with Deceptive Outlets Higher on Facebook Today Than Run-up to 2016 Election, German Marshall Fund of the United States, October 12, 2020.
6 Karen Kornbluh and Ellen P. Goodman, Safeguarding Digital Democracy, German Marshall Fund of the United States, March 24, 2020.
7 U.S. Senate, Honest Ads Act (S. 1356), introduced on May 7, 2019.
8 Ellen P. Goodman and Karen Kornbluh, “Social Media Platforms Need to Flatten the Curve of Dangerous Misinformation,” Slate, August 21, 2020.
9 See Vijaya Gadde and Kayvon Beykpour, Additional Steps We Are Taking Ahead of the 2020 U.S. Election, Twitter, October 9, 2020; and Hamza Shaban, “WhatsApp Is Trying to Clamp Down on Viral Misinformation with a Messaging Limit,” Washington Post, January 22, 2019.
10 Ellen Goodman, “Building Civic Infrastructure for the 21st Century,” in #Tech2021: Ideas for Digital Democracy.