How is observational correction populist?

There are, then, several reasons why observational correction might work. But its biggest potential benefit is that it offers not only a scalable way to address misinformation on social media but also one that is populist in its reliance on ordinary users. Observational correction does not depend on elite external actors — experts, platforms, fact-checking journalists, or others — but can come from people within the very communities affected by misinformation. This means that social media users — everyday people — can play a major role in shaping the information environment of which they are a part. While this arguably has value in and of itself, it may also result in greater receptivity to user corrections than to other actions designed to address misinformation.

Identifying and removing inaccurate content from social media is also virtually impossible at scale, given the massive quantity of information that passes through such platforms on a daily basis. Even successfully identifying a very high percentage of misinformation would still leave many thousands of pieces of inaccurate content persisting on social media (Bode 2020). While observational correction does not ‘fix’ the problem of scale, it can, at least hypothetically, address it more effectively. Given that one in three people around the world use social media (Ortiz-Ospina 2019), a virtual army of correctors exists, ready to launch into action upon seeing misinformation posted.

Of course, user correction is only populist — and effective and scalable — if a wide range of people on social media platforms participate. Notably, observational correction is not as uncommon as people may assume. A recent study we conducted found that 23 percent of Americans report having corrected someone else on social media with regard to COVID-19 in the past week, and 34 percent report having seen such corrections occur (Bode & Vraga 2020), which aligns with earlier work suggesting that 32 percent of Americans reported publicly flagging or reporting made-up news (Mitchell et al. 2019). Importantly, both those engaging in correction and those observing it come from a wide variety of backgrounds — including both sides of the partisan aisle. This is important: correction is not siloed within specific groups of people but can be found across broad swaths of the public. Moreover, corrections coming from others ‘like us’ are more likely to be trusted (Margolin et al. 2018; Tandoc 2019), meaning that anyone can experience correction, and such corrections are likely to be effective.

Likewise, our study found that people generally endorsed correction as a valuable practice on social media. Not only do 68 percent of people agree that people should respond when they see someone sharing misinformation, but 56 percent also say they personally like seeing such corrections. While majorities of the public may believe that the news media hold primary responsibility for reducing made-up news and information broadly (Mitchell et al. 2019), this does not negate public support for a community solution to the problem on social media. Nascent social norms promoting user correction on social media may be emerging, which could make it even more populist in nature. If the public increasingly believes such corrections are appropriate or valued, and perceives that other people are engaging in correction, this lays the groundwork for injunctive and descriptive norms that may powerfully affect behaviors (Ajzen 1985, 2011; Cialdini et al. 2006), allowing more people to feel comfortable correcting others on social media.

Areas for future exploration

While existing research provides the groundwork for a populist solution to misinformation on social media, much work remains to be done in this space. First, more research is needed to examine the sources of misinformation and correction, and how their credibility, proximity, relevance, and expertise affect how people perceive the information they share. User corrections, for example, may be more effective when they come from people we know personally or have close ties with (e.g. Margolin et al. 2018), or when they come from users with credibility or expertise cues in their name or post (for example, someone who claims to be a doctor, a scientist, or a professor) (Vraga, Kim, et al. 2020).

Likewise, user corrections may be more effective for some groups of people than for others. Given rising levels of scepticism towards elite institutions, news organisations, politicians, and scientists (Edelman 2020), user corrections from ordinary people may reach audiences who are sceptical of authority. Some evidence for this claim comes from research suggesting that user corrections were seen as just as credible as algorithmic corrections on Facebook among those higher in conspiracy ideation (Bode & Vraga 2018). Given the extensive research highlighting the importance of source cues in credibility and persuasion (Metzger et al. 2010; Petty & Cacioppo 1986), more research is needed into how myriad sources intersect (i.e. the source of the misinformation, the source of the correction, and any additional links within the posts) when people see misinformation and correction online.

Second, more research is needed to understand who is most likely to engage in observational correction. As noted earlier, more people tend to ignore misinformation on social media than to respond to it (Bode & Vraga 2020; Tandoc et al. 2020), meaning that, in practice, user correction is not yet truly populist. Yet not enough is known about why some people respond to misinformation while others do not. Our initial research suggests that more educated people were more likely to say they had corrected others with regard to COVID-19, whereas older adults were less likely to say they had done so (Bode & Vraga 2020), although previous research had not found differences by education or age in responding to ‘fake news’ on social media (Tandoc et al. 2020). In other research, we found that people who were misinformed about a topic were more likely to say they would respond to a misinformation post on the subject, suggesting that those most willing to respond may be the least equipped to reduce misperceptions (Tully, Vraga, & Bode 2020b, working paper). Clearly, much more research is needed to discover who is willing to correct others and the circumstances that may facilitate or deter correction.

Research is also urgently needed into what can be done to motivate people to correct others. Our research suggests user correction can be effective, but it requires people to be willing and able to engage with one another on social media. While research has not yet examined how to motivate corrections on social media specifically, several behavioral theories offer promising avenues to pursue. Notably, the theory of planned behavior highlights the importance of social norms — in conjunction with attitudes and perceived behavioral control — in spurring behavior (Ajzen 1985, 2011). Therefore, interventions that highlight public support for and engagement in correction offer a promising technique for encouraging more people to engage in such corrections themselves (e.g. Cialdini et al. 2006). Likewise, user correction may be facilitated if combined with algorithmic correction; social media companies might consider prioritising user comments that include links to expert sources, elevating the visibility of such corrections, as sketched below.
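To make that last suggestion concrete, the following is a minimal sketch in Python of what such prioritisation might look like. Everything in it (the comment structure, the domain list, the scoring weight) is an illustrative assumption, not a description of any platform's actual ranking system.

```python
# Hypothetical sketch: when ranking replies under a post, boost user
# comments that link to expert or fact-checking sources. The domain list,
# comment fields, and weights are illustrative assumptions only.
import re

EXPERT_DOMAINS = {"who.int", "cdc.gov", "snopes.com", "factcheck.org"}

# Capture the host portion of any http(s) URL, ignoring a leading "www.".
URL_PATTERN = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def links_to_expert_source(text: str) -> bool:
    """Return True if the comment text links to a known expert domain."""
    return any(domain in EXPERT_DOMAINS for domain in URL_PATTERN.findall(text))

def rank_comments(comments: list[dict]) -> list[dict]:
    """Order comments by engagement, boosting expert-sourced corrections."""
    def score(comment: dict) -> float:
        base = comment.get("likes", 0)
        # Corrections citing expert sources get elevated visibility.
        return base * (2.0 if links_to_expert_source(comment["text"]) else 1.0)
    return sorted(comments, key=score, reverse=True)

if __name__ == "__main__":
    comments = [
        {"text": "lol this is obviously true", "likes": 40},
        {"text": "Actually false, see https://www.who.int/mythbusters", "likes": 25},
    ]
    for c in rank_comments(comments):
        print(c["likes"], c["text"])
```

A real system would of course need a vetted source list and safeguards against gaming, but the sketch illustrates how user correction and algorithmic amplification could be layered.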

In addition, most work has focused on either Facebook or Twitter, although other platforms like Instagram, Pinterest, and YouTube are also significant disseminators of misinformation. Indeed, initial studies suggest that incorporating video correction (Young et al. 2018) and graphics (Amazeen et al. 2018) enhances debunking efforts. Understanding which elements of observational correction transfer successfully between platforms, and how corrections may need to be adapted to fit the affordances of a given platform, is essential.

 