How to Fight Online Disinformation in the Wake of a Bitter Election

Since last week’s election was called in favor of President-elect Joe Biden, social media venues have been flooded with allegations that massive vote fraud altered the outcome. President Donald Trump clings to this view, which is objectively false, and polls taken in the last few days suggest that 70 percent of Republicans now also hold this view. While the internet platforms did not invent this false content, it is circulating widely on their sites, and it’s an important factor in undermining trust in our democracy.

The exploitation of social media to promote provably false political disinformation is not just about elections; it is a broader and growing phenomenon. It includes assertions that the Holocaust never happened, allegations that the students at Marjory Stoneman Douglas High School in Parkland, Florida, who became gun control activists were really actors, and claims that climate change is a liberal fantasy. It’s a global trend that autocratic political leaders are exploiting to advance their agendas. Unless the platforms adjust their fundamental approach to disinformation, this trend will escalate and come to dominate political discourse to the detriment of all of our societies.

Some critics are advocating that governments use antitrust laws to break up these companies, in part to address this problem. Yet even if those efforts are successful, they will not alter the fact that the social media companies control access to their sites. They alone design and control the algorithms that determine what users see. They have the technical capacity to moderate the content on their sites and the resources to reduce harmful content in real time. Recognizing these factors, the social media platforms themselves need to develop a new approach to addressing provably false political disinformation, especially as it goes viral.

Facebook, YouTube, and Twitter are not news organizations. They don’t have reporters, and they don’t generate their own political content. But their role is very different from that of common carriers like telephone companies, which simply run lines connecting people who wish to talk. As Mark Zuckerberg said in February during his remarks at the Munich Security Conference, these companies are “somewhere in between” the traditional media and telco models. 

A core function of their operations is moderating content on their sites. Recently, they have become more vigilant in taking down some harmful content, like the Infowars musings of Alex Jones, the conspiracy posts of QAnon followers, and various false cures for COVID-19 that have circulated online. Over the last dozen years, the social media companies have delineated a growing list of categories of content that they have decided to take down, such as child pornography, content that promotes violence or bullying, and “inauthentic” content. First, going forward, they should add demonstrably false political content to this list.

This does not mean removing everything that is false, or policing posts likely to reach only a trivial number of people. It does not mean refereeing inconsequential political hyperbole or name-calling, and it does not mean taking down political opinions. The focus should be on factually inaccurate disinformation that is likely to have wide circulation and substantial impact. The companies’ fact-checking partners are already identifying material of this sort on a daily basis. But rather than following their current practice of labeling and demoting such content, the platforms need to remove it altogether.


When they take down provably false content, they should archive a copy in a restricted area where journalists, scholars, and others can gain access to it. But they need to broaden their current framework to take down provably false political content, especially when it is going viral, even if it is not tied to violence or inauthentic accounts and does not otherwise fall into an existing community standards category. The failure to weed out provably false content is resulting in a flood of disinformation that is seriously distorting democratic discourse.

Second, it’s time to set aside the claim that social media sites are not “arbiters of the truth.” Their own actions—for instance, relying on fact-checkers who distinguish between true and false content—undercut the sweeping assertion that they don’t make such judgments. Along the same lines, they should stop implying that they are barred by the First Amendment or other national laws from constraining speech. The First Amendment applies to government, not private companies. The tech companies are well within their rights to take down content that is provably false. 

Third, each of these companies should invite a wide discussion about the way forward, involving representatives from within their own organizations, from their competitors, and from outside experts. Academic centers like ours at NYU’s Stern School of Business would welcome the opportunity to engage in this type of review.

Fourth, each company should create a new senior position to lead this effort: a content overseer reporting directly to C-suite leadership. Ideally, this should be someone with extensive news and editorial experience. While this person would not function in a traditional editorial role, the companies would benefit from that person’s experience in determining what is factually accurate as they navigate the challenges they face on a daily basis.

Fifth, each of these companies should bring content moderation in-house, rather than relying so heavily, as they now do, on outside contractors to perform this vital function. Given the sensitivity of the content in question, it does not make sense to outsource responsibility for this core business function. 

Because of the extreme political polarization in our world today, this new approach will itself become a subject of intense political interest. It will generate fierce resistance from those who seek to manipulate online communications. The tech companies will need to insulate the process they adopt, to the extent they can, by involving people and organizations whose views cut across the political spectrum. Failure to pursue this alternative model will pose existential risks to the social media industry and to our society more generally. Nothing less than the health of our democracy is at stake.

Michael Posner is the Jerome Kohlberg professor of ethics and finance at NYU Stern School of Business and director of the Center for Business and Human Rights. Follow him on Twitter @mikehposner.

Lead image: Geoff Livingston / Flickr

Reprinted with permission from Forbes.