Why is correction ineffective? Cognitive and motivational processes

Misbeliefs won’t be corrected (1) if people deliberately avoid debunking messages (selective avoidance), (2) if misinformation has been well integrated into the recipient’s mental model (proactive interference), and/or (3) if corrective messages are processed in a biased manner (motivated reasoning).

First, getting a message across to the target audience in this high-choice environment is an increasingly arduous task. Research on cognitive dissonance (Festinger 1957) and confirmation bias (Nickerson 1998) suggests that people are inherently motivated to avoid information incongruent with their existing beliefs and attitudes. Consequently, when a debunking message is expected to cause psychological discomfort by challenging their current understanding of the world, people actively avoid it. This notion is supported by Hameleers and van der Meer’s (2020) finding that people who opposed immigration were more likely than immigration supporters to choose a fact-check that refuted pro-immigration news.

Second, even when people are exposed to corrections, processing of corrective messages may be hampered by proactive interference, as previously obtained misinformation impedes ‘the acquisition and retrieval of newer materials’ (Teague et al. 2011: 2028). Updating the mental associations established around the old (mis)information with new facts is challenging, much as memorising a friend’s new phone number is difficult when the old number keeps coming to mind. Proactive interference naturally transpires because of the way our memory works, so it presents inherent challenges for any correction attempt, especially when the previous misinformation plays a central role in explaining the focal event. Unless the correction provides a better alternative explanation, people will fall back on the old misinformation rather than on the recent corrections (Wilkes and Leatherbarrow 1988).

Third, message recipients’ existing attitudes and beliefs may bias how they process corrective messages. Research on motivated reasoning in general, and disconfirmation bias in particular, suggests that people ‘spend time and cognitive resources denigrating and counter-arguing attitudinally incongruent arguments’ (Taber et al. 2009: 139). In the face of corrective information, people may engage in motivated reasoning, discounting the validity of arguments and evidence that counter their misbeliefs while selectively heeding and summarily accepting attitudinally coherent facts. The meta-analysis finding that counter-attitudinal correction is less effective than pro-attitudinal correction (Walter and Tukachinsky 2020) suggests the robust operation of biased information processing.

What next? Future directions

The extant literature offers useful insights into what makes correction efforts more and less effective and why, but the empirical evidence and theoretical explanations are far from conclusive. Next, we identify several limitations in previous scholarship and suggest directions for future research.

Search for theory-grounded explanations

As research on misinformation has accumulated rapidly, inconsistencies have also been noted. For example, does repetition of misinformation in the corrective message, most notably in the myth-versus-fact format, lower its effectiveness or even backfire? Some have warned against the familiarity backfire effect, arguing that misinformation included in the corrective message may become more familiar and hence more believed (e.g. Chan et al. 2017). Others have shown that false information should be explicitly negated for correction to occur (e.g. Weil et al. 2020). Although meta-analyses may provide an answer relatively immune to the idiosyncratic biases of individual studies, merely counting which side gets more empirical support falls short of elucidating why repetition matters. Instead, theory-based explanations need to be developed and tested to reconcile the seemingly inconsistent or even contradictory findings. If the detection of incompatibility is the key to explaining why repetition of misinformation enhances correction efficacy (Ecker et al. 2017), for example, one may systematically vary the salience of incompatibility between misinformation and corrective information and examine how it moderates the efficacy of correction.

The need for deeper theoretical grounding is also evident in studies comparing diverse forms of correction for their relative effectiveness: rating scale plus contextual correction versus contextual correction alone (Amazeen et al. 2018) and visual truth scale (fact-o-meter) versus simple retraction versus short-form refutation (Ecker et al. 2020). While it is certainly useful to evaluate which correction method works better, without sufficient theorising, one cannot explain why one message format outperforms the other(s), which lowers confidence in the observed findings. For example, why does the inclusion of graphical elements, such as a truth scale, reduce the effectiveness of correction messages (Walter et al. 2019)? Is it because an extra visual element distracts people from key facts and core arguments? Or is it because the truth scale makes the discrepancy between one’s own beliefs and opinions and the correction message so clear that people reject the message outright without even reading it? To offer reliable practical recommendations for effective debunking strategies, more theoretical exploration is in order.

The search for theoretical explanations also involves identifying potential confounds in previous work. As Thorson (2016) suggested, corrections might have been ineffective because they were phrased as negations rather than affirmations. Given that ‘what is’ is easier to process than ‘what is not’ (i.e. the affirmative-representation principle) (Pacini and Epstein 1999), the efficacy of corrective information might have been suppressed by the use of negations in corrective messages (e.g. ‘Vaccines do not cause autism’). Another potential confound is the valence of the misinformation (and the correction). If negative information tends to be considered more informative and thus more influential in social judgments (Fiske 1980), debunking misinformation might be less effective when the misinformation is negative than when it is positive. Misinformation, however, need not be negative. Amid the coronavirus pandemic, for instance, the widely shared news about an Italian priest who allegedly died from coronavirus after giving up a respirator to a younger patient was later invalidated (Pleasance 2020). Potential methodological confounds, including syntactic and semantic features of misinformation, should be assessed thoroughly before any theoretical explanation is attempted.

User engagement at the core of conceptual framework

Most research has thus far focused on how people respond to misinformation and corrective information once exposed. To make causal claims uncontaminated by self-selection, participants were typically forced to read whatever was randomly assigned to them. However, individual traits, such as processing style, affect not only how people respond to corrective information (Carnahan and Garrett 2019) but also how likely they are to engage with it in the first place. Moreover, just as ‘forced’ exposure to partisan news yields different outcomes than ‘selected’ exposure (Stroud et al. 2019), those who willingly choose to read corrective information process and evaluate it differently from those incidentally exposed to it. Therefore, research should start with questions such as what dispositional (e.g. need for closure, need for orientation) and situational (e.g. accuracy motivation, issue involvement) factors predict volitional exposure to corrective information such as fact-checks.

User engagement also deserves more attention as an outcome variable. Possibly due to the legacy of the media effects paradigm, message acceptance is often positioned as the end result of the communication process. However, communication begets more communication. Especially considering that social media platforms and news aggregation sites are where people encounter misinformation most frequently, the specific patterns and consequences of user engagement, such as commenting and sharing, demand empirical investigation. After all, it is how fast and far misinformation travels in the networked society that makes it a real threat, and that travel cannot happen without the voluntary engagement of countless individuals.

Beyond controlled experiment

Much of the research reviewed herein relies on controlled experiments, whose known limitations include demand characteristics. Measuring participants’ factual knowledge or the perceived accuracy of misinformation immediately after presenting corrective information, for example, comes fairly close to a manipulation check. Similarly, examining whether participants’ support for more investment in scientific research changes after they are informed that the current budget is smaller than most people believe (Goldfarb and Kriner 2017) might have rendered the research objective all too transparent.

Another reason why researchers should look beyond controlled settings is the paucity of research on context. Amid a global pandemic, for instance, people overwhelmed by the sheer amount of (mis)information might become even more susceptible to confirmation biases in the form of selective exposure and motivated reasoning. Alternatively, heightened accuracy motivation might lead people to seek information from more diverse sources than they normally prefer. By actively incorporating the information environment as a variable and keeping track of how people seek, process, and share information over time, researchers can offer ecologically valid accounts of how the debunking of misinformation works in real life.

 