Shifting motivations

One such strategy is to prompt an accuracy motivation. As mentioned, such motivation does not ensure accurate beliefs, but it can help. Bolsen and Druckman (2015) find that a correction works only when respondents are primed to pursue an accuracy motivation. Similarly, Pennycook et al. (2019) find that a prompt to think about ‘the concept of accuracy’ reduces respondents’ propensity to share false and misleading news attacking their political opponents.

There are a variety of approaches to stimulating accuracy motivation. One approach attenuates the effects of directional motivation by satisfying the underlying goal (see Dunning 2015), which reduces directional rejection of accurate information. For example, Bolsen and Druckman (2018b) show that affirming participants’ worldview, even if it involves conspiratorial tendencies, increases their inclination to accept accurate scientific information. Alternatively, encouraging deliberation about science can induce accuracy (Dietz 2013). Finally, highlighting the salience or personal relevance of an issue can temper directional motivations and promote accuracy, since holding inaccurate beliefs may have direct personal consequences. Indeed, this is one possible reason that residents of areas most affected by climate change often accept climate science (Scannell and Gifford 2013). The same has been found with regard to COVID-19: in areas with more cases, people were less likely to form opinions about policies based on their partisan leanings (Druckman et al. 2020).

Framing

Another approach to combating misperceptions is framing, in which messages strategically highlight considerations that may be persuasive for a target audience (Druckman and Lupia 2017). As mentioned earlier, a values-based directional motivation often leads to misperceptions when scientific findings are at odds with strongly held values; reframing seems especially effective for correcting such misperceptions. For example, Campbell and Kay (2014) demonstrated that reframing the need for climate action in free-market-friendly terms allowed proponents of laissez-faire economics to express greater belief in human-induced climate change. Other studies report similar success using moral value frames that appeal to conservatives, such as in-group loyalty, purity, and respect for authority (Feinberg and Willer 2013; Wolsko et al. 2016).

Institutional and techno-cognitive strategies

In addition to social-psychological approaches, scholars are examining the degree to which institutional and ‘techno-cognitive’ strategies can counter the systemic changes in the information environment, discussed earlier, that exacerbate the spread of scientific misperceptions (Lazer et al. 2018). Addressing the institutional and financial roots of misinformation requires a coordinated multidisciplinary effort to identify the groups that finance, produce, and disseminate misinformation and politicisation as a way to confuse the public and protect the policy status quo. Farrell et al. (2019) suggest several interconnected strategies to combat the spread of misinformation: (1) having academics work more closely with journalists and educators to disseminate inoculations or warnings where possible, (2) using lawsuits to defend climate scientists against personal attacks and to identify the most prominent creators and distributors of misinformation, and (3) enacting legislation that requires greater financial transparency, so that hidden private contributions no longer shield the individuals and companies that produce fake news.

Another strategy involves using ‘technocognition [to] design better information architectures’ to suit the ‘post-truth era’ (Lewandowsky et al. 2017: 362). This approach advocates technological adaptations that prevent the spread of misinformation, alongside cognitive approaches that better educate and inform the public. For instance, social media outlets such as Facebook and Twitter can (1) give users feedback that helps them better identify fake news, (2) provide credible sources, trusted by different groups, that confirm when a particular story is false, (3) develop and deploy algorithms that detect bots and eliminate their ability to spread misinformation, and (4) identify the primary producers of fake news and cut off their access to social media platforms. Of course, given the contemporary social and informational landscape, technological solutions must be accompanied by serious discussion of their political and ethical complications.

 