Some surprising findings about content moderation

Lucas Rentschler and Will Rinehart
Tribune News Service (TNS)

On Tuesday, the Supreme Court held oral arguments for a case that could substantially alter the internet. While the case was specifically focused on who should be held liable for automatic recommendations, the justices will ultimately be deciding how platforms manage content moderation and misinformation.

While there are a lot of unanswered questions in content moderation, a paper we just published is one of the few rigorous studies of fact-checking that could settle some of them. It offers two striking insights.

First, we find that crowdsourced fact-checking, much like Twitter’s Birdwatch program, is incredibly effective. Overall, it does a better job than relying solely on the platform to police content.



Second, we find that platforms should focus moderation efforts on policing content, rather than policing individuals. In other words, suspending accounts probably does more harm than good.

Nearly everyone agrees that misinformation is a problem. Upward of 95% of people cite it as a challenge when accessing news or other information. But there is little agreement about the right mix of policies that can balance the need for context without needlessly censoring content.

Rightly, everyone is concerned that platforms are picking and choosing which content to flag. Previous work from our lab has shown that social media platforms have a vested interest in policing misinformation. If companies want to promote user engagement and connections, they will need to address misinformation.

On top of this, measuring how misinformation affects real-world events is a tough empirical challenge for researchers. The Experimental Economics Lab, which is a part of the Center for Growth and Opportunity (CGO) at Utah State University, was set up specifically to understand these tangled questions.

This study, which is part of a larger research program on misinformation, was set up to evaluate fact-checking policies in a controlled laboratory experiment. Importantly, the decisions made by participants affect the amount of money they earn, so misinformation has real consequences. The study was also structured to allow people to interact with others over multiple rounds via a messaging system that replicates a platform. While no study is perfect, ours comes as close as possible to approximating real-world decision-making on platforms.

Three kinds of fact-checking scenarios were tested. In the first, individuals could fact-check information shared by other group members, but they had to pay a small fee to do so. The second scenario placed fact-checking in the hands of the platform, which checked posts at random. Finally, we tested a combination of the two, with both individual and platform fact-checking.

There were two consequences of posting misinformation. If misinformation was identified, it was flagged, so participants knew the content was false. In addition, users who were found to have posted misinformation were automatically fact-checked in the following round.

The results are remarkable. It is widely assumed that peer-to-peer monitoring, especially when users must pay to fact-check content, would lead to bad outcomes. But to the contrary, we find that this approach yields better outcomes than just relying on the platform. We also found that adding platform moderation to this peer-to-peer approach has only a small additional benefit.

Platforms would do well to leverage this pro-social behavior because it does not require them to evaluate posts, it provides more objective fact-checking, and it is transparent.

In other words, social media users can be relied upon for fact-checking.

Even more important, the research suggests that added scrutiny of users who post misinformation doesn’t really deter them. At the same time, it can lower user engagement. Given this, platforms are likely better off focusing their efforts on individual posts, rather than trying to identify bad actors and ban them from the platform.

In total, our results provide support for Twitter’s current approach to content moderation. Birdwatch’s decentralized and user-provided evaluations are effective. And the decision to be extremely judicious in banning accounts that have posted misinformation is the right move.

It seems that Elon Musk and Jack Dorsey have the right idea.

— Lucas Rentschler is the Director of the Experimental Economics Lab at the Center for Growth and Opportunity at Utah State University. Will Rinehart is a Senior Research Fellow at the Center for Growth and Opportunity at Utah State University.