YouTube Amplifies Misinformation and Hatred, But Here’s What We Can Do About It

Because of its vast reach and its inadequate attention to moderating harmful content, YouTube poses a serious threat to democratic discourse.

The House Select Committee investigating the Jan. 6 insurrection at the Capitol is rightly focused on the actions of former President Trump and those around him. But as the public hearings make clear, while Trump and his allies may have encouraged and enabled the insurrection, several violent extremist groups, like the Proud Boys and the Oath Keepers, led the assault.

These militant organizations recruited people from around the country for weeks before the Jan. 6 attack on the Capitol, relying heavily on social media platforms to stir their emotions and plan the attack on Congress. Videos circulated on YouTube played an essential role in this effort.

YouTube has grown into one of the world’s most popular and influential social media sites since its launch in 2005 and acquisition by Google a year later. Today the video site has more than 2 billion monthly users who collectively watch more than 1 billion hours of videos each day. Last year, YouTube generated nearly $29 billion in revenue, mostly from advertising.

A small percentage of YouTube users—but a meaningful number in absolute terms—visit YouTube regularly to consume and share alarming videos, including content that is undermining democracy. In June 2022, the NYU Stern Center for Business and Human Rights, which I direct, published a report entitled A Platform ‘Weaponized’: How YouTube Spreads Harmful Content—And What Can Be Done About It. Written by my colleague Paul Barrett and Justin Hendrix, editor of Tech Policy Press, it describes how the platform allows video creators to share in revenue from advertising and other sources, a distinctive feature that makes it attractive to provocateurs seeking to amplify sensationalistic messages—and make a living while doing it.

One such video creator highlighted in the report is Tim Pool. His five YouTube channels, including Timcast IRL, collectively have millions of subscribers and provide a prominent platform for extremist figures like Alex Jones of Infowars and Enrique Tarrio of the Proud Boys. Tarrio and four other Proud Boys members have been indicted on charges of seditious conspiracy for allegedly planning the January 6 attack. Lawyers for the defendants have said there is no evidence that their clients engaged in such a plot. As a video creator, Pool is compensated by YouTube for amplifying these far-right individuals and boosting their profiles.

In an email to Barrett, Pool said that in preparing his commentaries he relies exclusively on sources approved by NewsGuard, a service that rates the reliability of news and information sites. For interview subjects, he added, “We seek out relevant people in news and culture to discuss issues, just like CNN or other mainstream outlets.”

In a February 2022 blog post, YouTube’s Chief Product Officer, Neal Mohan, wrote about changes the company is exploring to address disinformation and harmful content on its site. These include blocking users on other platforms from embedding links to YouTube videos that contain false or conspiratorial material but don’t quite cross the line that would lead YouTube to remove them. But as our report concludes, “to date, the reforms that YouTube has actually adopted have not been adequate.”

In response to our report, YouTube said that it diligently seeks to remove content that violates its guidelines and to down-rank recommendations of “borderline” content, meaning false or conspiratorial material that brushes up against the line but doesn’t quite constitute a violation. For example, in defending its response to the fallout from the 2020 election, the company told us that it “removed tens of thousands of videos for violating our U.S. elections-related policies, the majority before hitting 100 views. In addition, our systems actively point to high-authority channels and limit the spread of harmful misinformation for election-related topics. We remain vigilant ahead of the 2022 elections.”


Because of its vast reach and its inadequate attention to moderating harmful content, YouTube poses a serious threat to democratic discourse. Dartmouth researchers Annie Y. Chen and Brendan Nyhan oversaw a study published in 2021 by the Anti-Defamation League that investigated racist or otherwise hateful channels on YouTube. They found little systematic evidence that the platform guided unsuspecting individuals to harmful content. Still, their data did “indicate that exposure to videos from extremist or white supremacist channels on YouTube remains disturbingly common.”

So, where do we go from here? The NYU report recommends that Google and YouTube explain “the specific criteria its algorithms use to rank, recommend, and remove content—as well as how often and why those criteria change and how they are weighted relative to one another.” In addition, YouTube needs to dramatically expand and improve its content moderation system, including adding more human reviewers and making all moderators direct employees, rather than following the industry’s cost-cutting practice of outsourcing this vital corporate function. At the same time, YouTube must continue to refine the design and operation of the automated filtering that is responsible for the vast majority of video removals. These are massive tasks that will require Google to invest significantly greater resources, especially in countries in the Global South.

I have long believed that governments generally should avoid interfering in the moderation of online content, based on the wise free speech protections contained in the U.S. Constitution and comparable international law. But my views have changed because YouTube and the other major social media companies have been so slow to act. 

The NYU report recommends carefully circumscribed federal action requiring greater platform transparency and procedurally adequate content moderation. Specifically, it urges Congress to authorize the Federal Trade Commission (FTC) to oversee mandatory disclosure of currently secret data that would help researchers understand, for example, why certain content “goes viral” and what steps YouTube and other companies might take when harmful material spreads far and wide.

It also proposes that the FTC ensure that the companies build adequate content moderation systems that enable them to fulfill the promises they have already made to users in their terms of service about reducing the spread of divisive and sensationalistic content. The agency would ensure that platform standards are clear and internally consistent, that enforcement decisions are explained in a way that users can understand, and that users have ready access to an appeals process. The agency also would have authority to assess whether content moderation resources—budgets, personnel, and management attention—are commensurate with the daunting task.

Our Center has proposed an ambitious agenda for reforming the social media industry. Given the level of political volatility in the U.S. and in countries around the world, it is high time that YouTube and other influential platforms assume greater responsibility for their role in exacerbating this instability—and do so under more watchful government oversight.

Michael Posner is the Jerome Kohlberg professor of ethics and finance at NYU Stern School of Business and director of the Center for Business and Human Rights. Follow him on Twitter @mikehposner.

Reprinted with permission from Forbes