Observational correction

Partly as a consequence of the limitations of these strategies for correcting misinformation, we propose one more: observational correction. Observational correction occurs whenever people update their own attitudes after seeing someone else being corrected on social media (Vraga & Bode 2017).

There are several key features of this definition that set it apart from other types of correction and may make it more successful than traditional formats for correction. First, it focuses on the community seeing an interaction on social media in which both misinformation and correction are offered, rather than considering the individuals (or organisations) sharing misinformation. Second, the community observes the correction simultaneously with or immediately after the misinformation (we therefore cannot speak to whether observational correction can occur when someone was exposed to the misinformation absent correction initially and only later sees a corrective response).

Most importantly, observational correction is not limited to one particular source. Our previous research outlines three potential ways in which observational correction can occur. First, corrections can come from algorithms driven by the platforms themselves — most notably, through Facebook’s related stories algorithm (e.g. Bode & Vraga 2015, 2018; Smith & Seitz 2019). Newer examples of platform responses — like showing messages to people who liked, reacted to, or commented on COVID-19 misinformation on Facebook (Rosen 2020) or adding fact-checks to manipulated information on Twitter (Roth & Achuthan 2020) — would fall under this domain of algorithmic correction. Future research should explore which of these mechanisms are most successful in responding to misinformation on the platform, but theoretically, these approaches may have merit when applied correctly.

Second, corrections can come directly from expert organisations. Our research shows that corrective responses by expert organisations reduce misperceptions, and engaging in these types of corrections does not hurt, and may in fact help, organisational credibility (Bode et al. n.d.; Vraga & Bode 2017; Vraga, Bode, et al. 2020). More research, however, is needed to understand the boundaries of who is considered an expert on a topic. For example, does simply including the title ‘professor’ in a social media handle imbue that user with sufficient credibility to function as an expert when offering a correction (e.g. Vraga, Kim, et al. 2020)? What level of existing organisational prominence or trust can be leveraged when offering corrections? And to what extent does a perception of expertise depend on the individual characteristics of the person viewing the correction?

Third, corrections can come from other social media users. This is where the true populist power of social media comes into play. Although user correction does reduce misperceptions, multiple corrections are sometimes necessary to have an impact, and corrections should provide links to expert information (for example, from the Centers for Disease Control, the American Medical Association, or a fact-check) on the topic to be effective (Vraga & Bode 2017, 2018). In other words, to mitigate misinformation, social media users should indicate the information is false, even if someone else has already done so, backing up their arguments with links to an expert organisation.

One might think the tone of the correction would affect its persuasiveness. However, the tone of user corrections does not appear to affect their effectiveness — at least on those observing the correction, rather than those being corrected (Bode et al. 2020). Uncivil responses are just as effective as civil ones; likewise, expressing empathy and affirmation for why a user may be confused or sharing misinformation neither increases nor limits the efficacy of such a response for the social media audience. Given that incivility can have numerous other deleterious effects on democratic society (Mutz 2006), a civil or understanding tone is still likely the best approach when offering corrective responses.

So why does observational correction work when other corrective responses often fail? One explanation relates to the distance between the person updating their attitudes and the correction. Observational correction is about people who see an interaction, rather than those who take part in it directly. People merely observing a correction may not be as determined to engage in motivated reasoning to protect their identity and are therefore more flexible in updating their beliefs (although there is at least some evidence to the contrary; see Bode & Vraga 2015).

Second, people are particularly sceptical of information shared on social media and thus may be more receptive to corrections that leverage expert sources. Indeed, 89 percent of Americans say they at least ‘sometimes’ come across made-up news and information (with 38 percent saying ‘often’), and these numbers are higher among those who prefer social media for news (Mitchell et al. 2019). This scepticism may increase reliance on expertise, either from user corrections that highlight expert sources or from experts themselves. When facing information overload — as is common on social media — heuristic processing and relying on source cues regarding credibility may become even more important in shaping processing (Metzger et al. 2010; Tandoc 2019). Moreover, the fact that these corrections occur on social media — where exposure to news and information is often incidental to the social purposes motivating online exposure (Bode 2016) — may also facilitate correction.

Third, the immediacy of the correction likely contributes to the success of observational correction on social media. In tests of observational correction, exposure to the corrective information occurs simultaneously with or immediately after exposure to the misinformation. The misinformation therefore has little time to linger and shape attitudes, reducing both motivated reasoning in the face of corrective information and belief echoes of the misinformation (Thorson 2016). Future research could use eye tracking to determine attention patterns — for example, whether people first look at the misinformation or the correction, their relative attention to the misinformation versus the correction, or how often they shift their attention between the misinformation and correction posts. The size, position, and memorable content attached to each are also likely to play a role.

Fourth, although this is outside the realm of what we have tested directly, correction imposes an indirect cost for sharing misinformation. Through observational correction, people see this cost — it may be embarrassing or harmful to one’s reputation to share information that is then called out as false. If people believe sharing misinformation will have negative consequences such as these, it should theoretically produce disincentives for sharing misinformation in the first place. Because even small nudges to encourage people to think twice about sharing false information can have important impacts (Fazio 2020; Pennycook et al. 2019), this is a useful potential consequence of observational correction.

 