When is correction more and less effective?

Debunking misinformation is an inherently persuasive attempt whose objective is to undo or override the influence of false information. As a result, research has examined how factors known to affect persuasion in general, such as source credibility and message congruency, account for the (in)effectiveness of corrective information.

Attributes of misinformation

The stronger the misbeliefs, the more resistant they are to correction attempts. Thus, what makes misinformation more persuasive renders the correction less effective. For example, misbeliefs are more likely to persist when the misinformation comes from a credible (versus a less credible) source and when it is repeated (versus not) (Walter and Tukachinsky 2020). Misinformation is also more difficult to correct when it offers causal explanations of the focal event than when it is only topically related. For instance, participants were more likely to believe misinformation about the effects of kombucha, even after retraction, when it explained a story character's actions than when it did not (Hamby et al. 2019). Just as providing a 'reason,' even a placebic one, was effective in eliciting compliance (Langer et al. 1978), misinformation that enables people to understand why something happens is better integrated into their knowledge structure and hence more persistent (Johnson and Seifert 1994).

Attributes of corrective information

More research has examined how various attributes of corrective information predict its effectiveness. Not surprisingly, expert sources such as a government agency (e.g. Centers for Disease Control and Prevention) (Vraga and Bode 2017) or news media (van der Meer and Jin 2019) are more effective in increasing belief accuracy than peer sources. Interestingly, algorithmic correction and peer correction (by other Facebook users) were not significantly different in reducing people’s misbeliefs (Bode and Vraga 2018).

Self-interest is another source-related factor that affects how likely people are to accept corrective information. When the corrective message runs against the communicator's (presumed) self-interest, as when Republicans (versus Democrats) endorse corrections supporting the scientific consensus on climate change, people are more likely to accept it and modify their beliefs (Benegal and Scruggs 2018). Similarly, self-correction can be seen as a special case of self-defeating messages, for retracting one's own message potentially hurts the communicator's credibility. A meta-analysis indeed confirmed that self-corrections are more effective than third-party corrections (Walter and Tukachinsky 2020).

Why, then, are self-defeating corrections more persuasive? First, according to Fiske (1980), negative aspects of stimuli are more influential in the formation of judgments than positive ones (i.e. the negativity effect) because negative information is less typical and thus deemed more informative. If (perceived) typicality determines the value of information, corrections against self-interest would be considered more informative and taken more seriously, for they are uncommon and atypical. Second, self-defeating messages are unlikely to raise suspicions about the communicator's ulterior motives. Considering that the suspicion of ulterior motives heightens concerns about deception, which subsequently trigger resistance to persuasion (Fransen et al. 2015), corrections against self-interest are probably more effective because people are less likely to infer ulterior motives. The meta-analytic finding that corrections involving health are more successful than those concerning politics and marketing (Walter and Murphy 2018) might likewise be due to the lower levels of ulterior motive attributed to the former.

As for message characteristics, counter-attitudinal corrections are less effective than pro-attitudinal corrections (Walter and Tukachinsky 2020). Debunking messages are also more effective when they explicitly state what to believe and what not to believe, rather than merely questioning the truth of the misinformation (Pingree et al. 2014). For example, applying a 'rated false' tag to a false news headline, rather than a 'disputed' tag, improved the accuracy of perceptions (Clayton et al. 2019). Similarly, correction was more effective when it refuted the false information completely rather than partially, leaving no room for the falsified misbeliefs (Walter et al. 2019).

Message quality also matters. As shown in a meta-analysis (Walter and Tukachinsky 2020), appeals to coherence (e.g. providing alternative explanations for the misinformation) were found to be more effective than appeals to source credibility. Although such results might seem indicative of systematic (versus heuristic) processing of corrective information, evidence also suggests that simple peripheral cues interfere with the correction of misbeliefs. For example, a message debunking the misbelief that an Imam backing an Islamic cultural centre in NYC was a terrorist sympathizer was less successful when it included a photo of him dressed in Middle-Eastern-style attire, as opposed to Western-style clothing (Garrett et al. 2013). Likewise, participants were less likely to accept the debunking message when it contained the Imam's previous controversial, yet unrelated, statements (Garrett et al. 2013). Thus, more research is needed to determine whether such findings stem from unwarranted reliance on peripheral cues or from biased systematic processing guided by prior beliefs or attitudes.

Findings are also mixed as to how the emotionality of misinformation or corrective information influences the continuation of misperceptions. Although the intensity of emotion arising from misinformation (e.g. a report attributing a plane crash to a terrorist attack versus bad weather) did not moderate the effectiveness of correction (Ecker et al. 2011), corrections containing negative emotional feedback from the victim of the misinformation were more effective than corrections with no emotional feedback (Sangalang et al. 2019).

In addition to these source- and message-related factors, context can play a role too. Most researched is the time lag between exposure to the initial misinformation and the correction. Generally, immediate corrections, such as presenting corrective 'related articles' right after fake news on social media (Smith and Seitz 2019), work better than delayed corrections (Walter and Tukachinsky 2020), because misinformation is less likely to be firmly integrated into individuals' knowledge structure if refuted immediately. However, such real-time corrections were more effective for those who found the corrections favourable, rather than unfavourable, to their preexisting attitudes (Garrett and Weeks 2013), suggesting potential contingencies.
