Facebook’s announcement that its Oversight Board will decide whether former President Donald Trump can regain access to his account, which the company suspended, along with other high-profile moves by technology companies to address misinformation, has reignited the debate about what responsible self-regulation by technology companies should look like.
Research shows three key ways social media self-regulation can work: deprioritize engagement, label misinformation and crowdsource accuracy verification.
Deprioritizing engagement in content recommendations should lessen the “rabbit hole” effect of social media, where people look at post after post, video after video. The algorithmic design of Big Tech platforms prioritizes new and microtargeted content, which fosters an almost unchecked proliferation of misinformation. Apple CEO Tim Cook recently summed up the problem: “At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement – the longer the better – and all with the goal of collecting as much data as possible.”
In one experiment, researchers hired anonymous temporary workers to rate the trustworthiness of posts, which were then displayed on Facebook with the crowdsourced labels attached. In that experiment, crowd workers from across the political spectrum were able to distinguish mainstream sources from hyperpartisan or fake news sources, suggesting that crowds often do a good job of telling the difference between real and fake news.
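The aggregation step behind such crowdsourcing is simple in principle: pool many individual trust ratings into one score and label the source accordingly. Here is a minimal sketch in Python; the source names, ratings and the 0.5 threshold are illustrative assumptions, not data from the study.

```python
from statistics import mean

# Hypothetical trust ratings (0 = untrustworthy, 1 = trustworthy) from
# crowd workers across the political spectrum. Illustrative values only.
ratings = {
    "mainstream-news.example": [0.9, 0.8, 0.85, 0.7],
    "hyperpartisan-blog.example": [0.2, 0.4, 0.1, 0.3],
}

def crowd_trust_score(scores):
    """Aggregate individual worker ratings into a single crowd score."""
    return mean(scores)

def crowd_label(source, threshold=0.5):
    """Label a source based on the aggregated crowd score."""
    score = crowd_trust_score(ratings[source])
    return "trusted" if score >= threshold else "flagged"
```

Averaging many imperfect judgments tends to wash out individual bias, which is why mixed-politics crowds can still converge on sensible labels.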
In my own work, I have studied how combinations of human annotators, or content moderators, and artificial intelligence algorithms – what is referred to as human-in-the-loop intelligence – can be used to classify health care-related videos on YouTube. While it is not feasible to have medical professionals watch every single YouTube video on diabetes, it is possible to use a human-in-the-loop method of classification. For example, my colleagues and I recruited subject-matter experts to give feedback to AI algorithms, which resulted in better assessments of the content of posts and videos.
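The core routing logic of a human-in-the-loop pipeline can be sketched briefly: the model labels videos it is confident about, and uncertain cases go to a subject-matter expert whose answer is fed back as training data. This is a simplified illustration of the general pattern, not the actual system from my research; the field names and the 0.8 confidence threshold are assumptions.

```python
# A minimal human-in-the-loop sketch: confident AI predictions are
# accepted automatically; uncertain ones are routed to an expert, and
# the expert's label is collected for retraining. Illustrative only.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for "confident enough"

def model_predict(video):
    # Stand-in for a real classifier: returns (label, confidence).
    return video["model_label"], video["model_confidence"]

def ask_expert(video):
    # Stand-in for a review queue handled by medical professionals.
    return video["expert_label"]

def classify(videos):
    labels, new_training_data = {}, []
    for v in videos:
        label, confidence = model_predict(v)
        if confidence >= CONFIDENCE_THRESHOLD:
            labels[v["id"]] = label
        else:
            expert_label = ask_expert(v)          # human in the loop
            labels[v["id"]] = expert_label
            new_training_data.append((v["id"], expert_label))  # feedback
    return labels, new_training_data
```

The design choice is the threshold: experts see only the small fraction of videos the model cannot handle, which is what makes expert review affordable at YouTube scale.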
However, a Wikipedia-style model needs robust mechanisms of community governance to ensure that individual volunteers follow consistent guidelines when they authenticate and fact-check posts. Wikipedia recently updated its community standards specifically to stem the spread of misinformation. Whether the Big Tech companies will voluntarily allow their content moderation policies to be reviewed so transparently is another matter.
Ultimately, social media companies could use a combination of deprioritizing engagement, partnering with news organizations, and AI and crowdsourced misinformation detection. These approaches are unlikely to work in isolation and will need to be designed to work together.
Coordinated actions facilitated by social media can disrupt society, from financial markets to politics. The technology platforms play an extraordinarily large role in shaping public opinion, which means they bear a responsibility to the public to govern themselves effectively.
Some form of government regulation is likely in the U.S. Big Tech still has an opportunity to engage in responsible self-regulation – before the companies are compelled to act by lawmakers.
Anjana Susarla does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.