Reasoning and goals: from psychopathological patients to healthy people

Amelia Gangemi and Francesco Mancini

1 Introduction

Laura is 55 years old and works as a doctor at the local hospital. Her parents died two years ago, and she has been living alone since then. She is not married and has no children, and since the age of 30 she has had an affair with a married colleague who is 25 years older. A few months ago she realized that their relationship has no future and that, having no children, she is going to live a solitary life. She is sad, she often bursts into tears, and she has lost interest in her work and friends. In addition, her sadness and apathy make her feel a complete failure.

Giovanni thought he was dying when, while driving on a motorway, he felt a sudden chest pain: his heart was beating fast, he found it difficult to breathe, and he felt an annoying tingling in his arms. Convinced he was having a heart attack, he asked his fellow passenger to take over driving and bring him to the nearest E.R., where the doctors, having done multiple tests, found nothing wrong with his heart.

Sabrina is 15 years old, and since starting secondary school she has had great difficulty with oral tests, although she likes studying and has no difficulty learning. During examinations, she is often afraid of being unable to speak. She looks at her classmates and sees some of them giggling and others minding their own business. She thinks she is going to make a fool of herself and that her classmates will laugh at her even more. Then she starts to stutter and turns red.

Although apparently different, the stories of Laura, Giovanni, and Sabrina are alike: each concerns someone who, starting from data that are far from remarkable, draws an exaggerated negative conclusion (I'm a failure, I'm having a heart attack, I'm making a fool of myself).

Through these stories we can describe mood disorders, like depression (as in the case of Laura), which is characterized by sadness, pessimism, anhedonia (i.e., the inability to enjoy pleasures and interests), and reduced activity (American Psychiatric Association, 2013), and anxiety disorders (as in the cases of Giovanni and Sabrina), that is, pathologies whose symptoms include pounding heart, tachycardia, choking sensation, chest pain, and, in some cases, fainting. In the case of Giovanni, the disorder, called panic disorder, involves feelings of overwhelming panic and, often, of an impending catastrophe. In the case of Sabrina, the disorder, called social phobia, leads to avoiding social situations because of the fear of failure. The stories also show that these disorders can persist even when the sufferers have information available that would allow them to evaluate reality differently.

More generally, these stories point to a crucial question in cognitivist clinical research. How can a person's mistaken assessment of social or bodily reality lead to such consequences, and, above all, why do these assessments persist even in the face of data that could easily change them? What maintains psychological disorders is indeed a topical question for clinical and cognitive scientists, including psychiatrists, psychologists, and neuroscientists.

In this chapter we want to show how the research aimed at answering this fundamental clinical question, that is, at explaining the maintenance of psychological illnesses and their resistance to change, has deepened our understanding of a general psychological process, reasoning, and of how reasoning is a tool at the service of our goals. According to a functional and pragmatic account of reasoning, the best kind of thinking is whatever kind best helps people to achieve or protect their goals and to reduce the costs of crucial errors. And, surprisingly, we found that this holds for both healthy people and patients.

2 Psychopathology and reasoning

Observations of behaviour show that all of us often fail to be rational. We make frequent errors, drawing fallacious inferences (e.g., Johnson-Laird, 2006; Gangemi et al., 2013). Yet we usually function quite well and manage to survive, and so biases in reasoning are not always maladaptive and may even have real benefits (cf. Smeets et al., 2000). Cognitive theories of emotional disorders, however, commonly attribute dysfunctional behaviour and psychological illnesses, including disorders of emotion, mood, and personality, either to biases in reasoning (e.g., Beck, 1976, 2019; Garety and Hemsley, 1997; Harvey et al., 2004) or to the interaction of patients' beliefs and concerns with normal processes of reasoning, an interaction that in turn leads to biased or irrational inferences (Beck, 1976, 2019). For this reason, cognitive biases have become an important part of cognitive models of psychopathology (e.g., Bögels and Mansell, 2004; Clark and McManus, 2002; Hirsch and Clark, 2004).

According to such accounts, we were in good company when we originally thought that patients were more irrational than healthy individuals when reasoning about topics pertinent to their illnesses. Following Beck (1976), we were convinced that if patients made fewer logical mistakes, they would be able to counteract the effects of these biases in maintaining their illnesses. Logic should therefore help patients to recognize and correct their flawed thinking (e.g., Leahy, 2004), and the identification of the inferential errors that lead to dysfunctional beliefs and psychopathology should contribute to cognitive therapies (Smeets and de Jong, 2005). In sum, psychotherapy should address errors in reasoning (Young and Beck, 1982). Here is an example of a man, under treatment (by F.M.) for paranoia, explaining why he thinks people are taking the mickey out of him (translated from the Italian, Johnson-Laird et al., 2006):

As soon as I entered the lecture room I saw the students chatting together and among their almost imperceptible words I caught the word 'queer'. They were taking the mickey out of me. Did you see how they were sniggering yesterday at the lecture and in the corridors as I was going past? Then the other day one of them was sitting in the first row right in front of me; I was about to start the lecture and he addressed the student next to him in an effeminate tone of voice. He was clearly referring to me. It is a known fact that students are cruel to teachers and like to have fun at their expense. I remember that when I was in high school, there was a teacher, probably gay, and my friends and I had fun at his expense for years. And I remember how my friends made fun of him as soon as his back was turned. Of course they are taking the mickey out of me!

This report is an example of the confirmatory pattern of inference: the patient focuses only on the worst case (e.g., the students were taking the mickey out of me), searches for confirmatory evidence (e.g., a student addressed the student next to him in an effeminate tone of voice; he was clearly referring to me), and ignores disconfirming alternatives (e.g., the student was not referring to me). For this reason, this reasoning is likely to end in the confirmation and strengthening of the worst initial hypothesis.

This form of reasoning seems to confirm the traditional thesis, originally supported by Beck: patients reason in a faulty way, since their reasoning involves the tendency to search only for confirmatory evidence, and it always leads them to draw the same wrong conclusion and to hold on to the pathological belief that creates, worsens, and maintains the pathology. However, a large number of experiments in recent years have shown that:

  • 1 It is not true that patients systematically confirm the worst hypothesis: they can also falsify the safety hypothesis (i.e., reassurances), and falsification is a very demanding cognitive process.
  • 2 It is not true that patients are unable to reason logically: they can even reason better than healthy people, but only when they reason about topics relevant to their disturbance.

Starting from these empirical observations, i.e., patients confirm the worst hypothesis and falsify the safety ones (i.e., reassurances), and their reasoning is not impaired, it has been argued that they are motivated to reason effortfully to pursue their goals, thus reducing the likelihood of crucial errors and thereby avoiding their costs (see Friedrich, 1993).

In what follows, we are going to examine the empirical evidence supporting these conclusions, and what this tells us about reasoning processes in normal people.

3 Patients can also falsify hypotheses

Starting from the end of the 1990s, a group of Dutch researchers examined the hypothesis-testing process in patients affected by anxiety disorders, such as specific phobia and hypochondria (de Jong et al., 1997; Smeets et al., 2000). From the very first experiments, they demonstrated, surprisingly, that patients' hypothesis-testing is domain-specific and guided by the relevance of the hypothesis to their personal interests (Evans and Over, 1996; Kirby, 1994; Manktelow and Over, 1991; Smeets et al., 2000). In patients affected by anxiety disorders, a positive hypothesis-testing strategy (seeking confirming information) coexists with more normative test strategies (seeking falsifying information), and these variations in testing strategy (confirmation vs. falsification) depend precisely on the perceived utility of the outcomes. In de Jong and colleagues' experiments, phobics, hypochondriacal patients, and healthy controls were presented with modified Wason Selection Tasks. The Wason Selection Task (WST, Wason, 1968) is a paradigm often used to investigate individuals' reasoning strategies concerning conditional rules, and it indicates to what extent individuals tend to look for potentially confirmatory or potentially disconfirmatory information concerning these rules (i.e., propositions). Using safety and danger rules, they found, for example, that in the context of health threats, individuals with hypochondriasis not only are more likely to selectively search for confirming information when asked to judge the validity of a danger conditional hypothesis (e.g., if a person suffers from a headache, then that person has a brain tumour), but can also search for falsifying information when asked to judge the validity of a safety conditional hypothesis (e.g., if a person suffers from a headache, then that person has influenza); that is, they tend to look for falsifications in the case of safety rules (de Jong et al., 1998).
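The truth conditions behind these selection strategies can be sketched in a few lines of code (a minimal illustration of the logic of the task, not material from the original studies): a conditional rule "if P then Q" can be falsified only by a case in which P holds and Q does not, so a falsifying search inspects the P and not-Q cases, while a confirming search inspects the P and Q cases.

```python
# Minimal sketch of the logic behind the Wason Selection Task
# (an illustration, not material from the original studies).
# A conditional "if P then Q" is falsified only by a case
# in which P holds and Q does not.

def falsifies(p, q):
    """True iff a case with antecedent p and consequent q falsifies 'if P then Q'."""
    return p and not q

# All four cases for the danger rule
# "if a person has a headache (P), then that person has a brain tumour (Q)":
cases = [(True, True), (True, False), (False, True), (False, False)]
print([falsifies(p, q) for p, q in cases])
# → [False, True, False, False]
```

Only the second case (headache without tumour) can refute the rule; hypochondriacal patients in the studies above tend to ignore this case for danger rules while actively seeking it for safety rules.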
Other research has demonstrated that similar reasoning processes are also involved in the context of phobic threat (e.g., de Jong et al., 1997). In two experiments, participants were presented with WSTs pertaining to phobic threats. The WSTs contained safety rules (if it is a new house, then there are only a few spiders) and danger rules (if it is a modern house, then there are a lot of spiders). Both experiments showed that only clinically diagnosed phobics rely on corroborating information regarding danger rules and disconfirming information regarding safety rules. Thus, these results suggest that the perception of phobic threat is sufficient to activate a danger-confirming reasoning strategy, and such a reasoning pattern serves to maintain or even increase phobic fears.

The threat can also be related to the emotion of guilt and to responsibility, which play a crucial role in the genesis and maintenance of obsessive-compulsive disorder (OCD, cf. Mancini and Gangemi, 2015). OCD is a mental disorder in which a person feels the need to perform certain routines repeatedly (compulsions), or has certain thoughts repeatedly (obsessions).

In their review, Shapiro and Stewart (2011) show that in healthy people, a mental state of guilt leads to obsessive-compulsive (OC)-like symptoms, including increased threat perception (see Gangemi et al., 2007), over-responsibility, and intrusive thoughts/impulses (Niler and Beck, 1989). Moreover, in neuroimaging studies of healthy groups, the mental state of guilt is associated with brain activation in regions proximal to OCD-affected regions (Shin et al., 2000; Takahashi et al., 2004). In obsessive patients, reasoning processes should thus aim to reject the possibility of being guilty of having done something wrong, for example, of having created a risk of contamination. If one wants to falsify a risk with certainty, one can only try to imagine all the possibilities in which it could be true, and falsify them one by one. The following is a real, typical protocol describing a problem relevant to an obsessive patient's illness, using the falsificatory strategy (translated from the Italian; comments in parentheses highlight the crucial cues to the strategy):

I get off the bus and I touch someone. I physically feel that my hand, or rather my fist, punched him. I think I hit him on the head. I think he could be dead (the patient focuses on his action, seeking to corroborate its negative consequences; he makes a transition to the emotion of guilt). I looked back, but the bus was already gone. I keep thinking about it. . . . If I had hit him he would have at least reacted, he would have called for help, he would have beaten me (he tries to infer counter-examples to the negative outcome of his having harmed the other person). Yes, but it all happened so fast. But people would have said something, they would have stopped me (he searches again for counter-examples to the negative outcome). What if no-one noticed it until it was too late? (He thinks again of a corroboration)

Accordingly, in a recent study (Gangemi et al., 2019), we found that obsessive patients focus on all the possibilities that could put them at risk, and try to refute them beyond any reasonable doubt. This falsificationist strategy is chosen because there is no possibility of acting on the facts, for example, of changing them (e.g., I cannot go back and avoid touching someone). In this case, not only the results obtained but also one's own efforts are evaluated against very high standards. The ultimate goal of this strategy is to prevent the self-accusation of not having been up to fulfilling one's duties. This goal has a paradoxical effect: it suggests possible mechanisms by which the risk could be real (see Johnson-Laird et al., 2006; Gangemi et al., 2019). The reasoning of obsessive-compulsive patients should therefore be refutatory, searching for evidence falsifying the risk.

To examine this strategy in obsessive patients, in our experiment, we used vignettes in which the protagonist was guilty and responsible for the negative outcome. One vignette was as follows:

Imagine that it's Sunday afternoon and I'm with my niece. I'm playing with her on the sofa, when my nose starts itching and I sneeze. I don't care and keep on playing with her. Later, it strikes me that my niece might get sick because of my sneeze. It would be because of my carelessness. I should have been more careful.

After reading the story, all participants were asked to try to reassure themselves about this possibility, beyond any reasonable doubt. According to the idea that obsessive symptomatology is based on the threat of being guilty, appraised as imminent, and on the goal of preventing it, we found that obsessive patients used the falsificationist strategy in this kind of scenario. For example, in the attempt to reassure himself, a patient wrote:

  • Surely it doesn't depend on that, but if I had a cold it could. The mere fact that I sneezed made the air full of germs (the participant corroborates the negative outcome).
  • Maybe the window was open. Therefore, the germs could have gone out (to refute the negative outcome).
  • Nevertheless, they could have contaminated the kid; they could have been everywhere in the air (to corroborate the negative outcome).
  • Surely it was a coincidence. Maybe she already had a cold (a refutation).
  • But what if this is not the case? (a corroboration)

Our findings appear to add to the growing list of studies showing that the effects of reasoning in psychological disorders run counter to the real intentions of patients. For example, the falsificationist strategies used by obsessive patients are counterproductive and lead to an increase, instead of a decrease, in confidence that there will be a negative outcome, and this in turn leads to the maintenance of the dysfunctional beliefs.

In general, thanks to all these studies with clinical populations, we concluded that a threat-related mental state draws all of patients' attention to the importance of avoiding harm more effectively. Thus, participants take account of their beliefs (e.g., that they have a very serious illness, or that they are going to cause harm due to their irresponsibility) and their goals (e.g., to avoid a late diagnosis, or to avoid guilt due to irresponsibility), and manage hypotheses (safety versus danger) following the kind of strategy that helps them to achieve those goals (Baron, 2008). In all the research reviewed, patients tended to consider only the hypothesis that best served their goal (e.g., to prevent feeling guilty due to irresponsibility, or to avoid a late diagnosis) or that best fitted their beliefs (e.g., that they were going to cause harm; that the harm was imminent and probable); moreover, they tended to seek evidence and draw inferences in a way that favoured the hypothesis that already appealed to them, and thus the one they focused on (Baron, 2008). In this way, patients put themselves in a position that makes it harder to revise a hypothesis.

4 Hypothesis-testing process: from psychopathology to normal reasoning

But what do we learn from these findings on the hypothesis-testing process with regard to reasoning processes in normal people? The answers come again from de Jong and colleagues' studies (e.g., de Jong et al., 1998). They started from the question of whether it was the threat mental state that led anxious patients to test their hypotheses so as to prevent the crucial error of underestimating a danger, thus confirming the danger hypothesis and disconfirming the safety hypothesis. To answer it, the authors decided to induce a threat mental state in healthy individuals and check whether they used the same reasoning strategy observed in patients. In a further experiment (Smeets et al., 2000) with hypochondriacs and normal controls, using the same conditional rules (danger versus safety) as in the previous studies (e.g., de Jong et al., 1998), they demonstrated that inducing a threat mental state in normal controls leads them to test information as if they were worried. Healthy controls indeed showed the same threat-confirming and safety-disconfirming strategy in the domain of health threats that was characteristic of hypochondriacal individuals in that context. In other words, the addition of the sentence "After hearing this you get worried" changed the sensitivity of normal controls to the WST's conditionals, leading them to change their usual hypothesis-testing strategies.

In line with this procedure, we decided to examine whether the induction of a mental state of responsibility and fear of guilt in healthy people resulted in the same reasoning strategy observed in obsessive patients. Using the WST, we thus investigated the influence of the induction of this mental state on the hypothesis-testing strategies (confirmation vs. falsification) adopted by participants in the case of both the danger (e.g., if my patient's symptoms, then Ebola virus) and the safety (e.g., if my patient's symptoms, then influenza) hypotheses. The task instructions to activate responsibility and guilt in the participants were as follows:

You are the only doctor in your ward, and are solely responsible for several patients. In the last few months, although you had everything necessary, i.e., diagnostic equipment, time and medical know-how, you made several mistaken diagnoses due to superficiality, inattention and lack of commitment that led to serious consequences for your patients. You feel guilty about this and are fearful of making new serious mistakes.

In line with de Jong and colleagues' results, we found that in the responsibility and fear-of-guilt mental state, control participants became interested in seeking examples confirming the worst hypothesis (the danger rule), whereas, when faced with a positive hypothesis (the safety rule), they prudentially tended to search for falsifying information about it. Thus, responsible and guilt-fearing participants tended to select potentially confirming information in the case of the danger rule, and potentially disconfirming information in the case of the safety rule.

All these results are consistent with research on reasoning, and show that both the mental state (i.e., a mental state of threat) and the kind of conditional rule (safety vs. danger) have a strong impact on the reasoning strategy that participants tend to use when asked to check the rule's validity (e.g., Cheng and Holyoak, 1985; Cosmides, 1989; Smeets et al., 2000). Indeed, in the domain of threats, it is adaptive to rely on confirming information concerning danger rules. For example, given the rule "If the alarm bell rings, then there is a fire", one is well advised to check whether the bell's ringing is indeed followed by fire and whether fire is indeed preceded by the bell's ringing. The logical possibility of a false alarm (the bell rings in the absence of fire) is less relevant for survival. In other words, although it is very uncomfortable to escape for nothing on some occasions, ignoring the bell even once may be fatal. Thus, individuals' interests are better served by knowing whether the bell rings when there is a fire than by knowing whether the bell sometimes rings in the absence of a fire. The opposite is true for safety rules such as "If the dog barks, then it will not bite". In this case, it is adaptive to test whether it is indeed safe when the signal is present: maybe there are barking dogs that bite. Thus, in the case of safety rules, individuals' interests are best served by searching for potentially disconfirming information. In line with the idea that individuals' reasoning in the context of threat is guided by perceived utilities, it has thus been demonstrated that healthy people rely on a confirmatory reasoning strategy when reasoning about danger rules, whereas they actively seek falsifications in the case of safety rules. To sum up, all people adopt a better-safe-than-sorry reasoning strategy (e.g., Smeets et al., 2000; Gangemi et al., 2015). People tend to adopt this strategy in the face of exposure to a threat. It focuses them on the danger and leads them to search for examples confirming it. Such a reasoning pattern is functional and adaptive when faced with threats. However, if the perceived threats are related to the disorders, and thus exaggerated (e.g., dysfunctional beliefs, as in the case of hypochondriacal concerns), actively searching for danger-confirming information while ignoring disconfirming evidence logically serves to maintain dysfunctional beliefs.
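The fire-alarm argument is, at bottom, an expected-cost comparison, which can be made concrete with a short sketch (all numbers are hypothetical illustrations, not data from the studies cited):

```python
# Better-safe-than-sorry as an expected-cost comparison.
# All numbers below are hypothetical illustrations, not data
# from the studies discussed in the text.
p_fire_given_bell = 0.05            # most alarms are false alarms
cost_escape_for_nothing = 1         # mild inconvenience
cost_ignore_real_fire = 10_000      # potentially fatal

expected_cost_of_ignoring = p_fire_given_bell * cost_ignore_real_fire
expected_cost_of_escaping = (1 - p_fire_given_bell) * cost_escape_for_nothing

# Even though false alarms are far more frequent, always treating the
# bell as a danger signal minimizes the expected cost of a crucial error.
print(expected_cost_of_ignoring > expected_cost_of_escaping)  # → True
```

The asymmetry holds whenever the cost of missing a real danger dwarfs the cost of a needless escape, which is exactly the utility structure that the confirming strategy for danger rules exploits.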

5 Patients can even reason better

In the previous section, we showed that reasoning in psychological disorders works in the same way as it does in healthy people, that it departs from normative rules no more than usual, and, above all, that it is goal-directed just as in healthy people (cf. Harvey et al., 2004). In what follows, we will see not only that faulty reasoning does not characterize psychopathology but that, on the contrary, psychopathology can even improve reasoning, though only on topics relevant to the disorder. In other words, patients can be better reasoners than normals when they are in their pathological domain. This means, again, that their reasoning is goal-directed, and directed in a specific way: to minimize and contain the risk of catastrophic errors (cf. the principle of Primary Error Detection and Minimization, Friedrich, 1993, and Trope and Lieberman, 1996).

Our clinical observations indeed suggest that patients are highly competent reasoners, at least about matters related to their psychological illnesses. For example, Francesco Mancini treated a patient suffering from obsessive-compulsive disorder. She was worried that she might have contracted HIV after having touched a magazine photograph of Rock Hudson, who had died from AIDS. To justify her worries, she reasoned as follows (translated from the Italian):

The photographer must have been near to Hudson because the photograph was a close-up. So, he might have been contaminated. So, when he developed the negative, he could have contaminated it. The photographic negative was in contact with the print and so could have contaminated it. The man in charge of printing the newspaper used the print, and so, he could have passed its contamination on to the newspaper's printer. The printing press could have passed the contamination on to the picture in every newspaper. So, when I touched the newspaper, I too might have been contaminated.

Like expert reasoners, the patient constructed a long chain of interconnected inferences, and she envisaged more than just the obvious possibilities. She recognized that her conclusion was questionable; yet, typically for patients of this kind, she could not reject it, and so she obsessed about that possibility. Clinical observations of this kind led us to doubt whether faulty reasoning is the cause of psychological illnesses.

In research, too, the assumption that patients and non-patients with propensities towards psychological illness are more irrational than normals has not found robust support. In this regard, the studies conducted by the Dutch group of researchers are again rather interesting. For example, Smeets and de Jong (2005) found that in solving linear syllogisms, patients are not poorer reasoners than normal controls. Other studies have even shown that patients reason better than normal controls, but only when the contents of the task are relevant to their pathology (Johnson-Laird et al., 2006; Johnson-Laird, 2006; Mancini et al., 2007). For example, in two initial experiments, we examined the reasoning of participants with obsessive-compulsive tendencies and of those with depressive tendencies (Johnson-Laird et al., 2006). The first experiment compared individuals with obsessive-compulsive tendencies and normal controls. All participants read a short scenario that ended with a specific proposition: The alarm rings or I feel guilty, or both. They were then asked to list what was possible and what was impossible according to this sentence. With this proposition, there are three different possibilities: (a) the alarm rings, (b) I feel guilty, and (c) the alarm rings and I feel guilty. There is only one impossibility: the alarm doesn't ring and I don't feel guilty. Each group of participants was further subdivided into two. In one group, the participants had to list possibilities for assertions in scenarios aimed at eliciting guilt, such as:

Suppose I am at my house with some friends. We decide to join some other friends in a bar. We leave the house joking amongst ourselves, but I forget to close the bathroom window. The burglar alarm rings or I feel guilty, or both.

They listed possibilities for the final sentence. In the other group, the participants listed possibilities for neutral scenarios, which ended with a sentence such as: The burglar alarm rings or I feel tired, or both. All the participants carried out the task four times with different contents: two of the scenarios had a test proposition based on "and", while the other two had a test proposition based on "or". The obsessive individuals listed many more correct possibilities for sentences about guilt than the control participants did, but no reliable difference occurred between the two groups for neutral or depressing propositions.

The second experiment was the same, except that the participants were individuals at risk of depression. Again, these individuals listed many more correct possibilities for propositions about being depressed than the normal controls did, but no reliable difference occurred between the two groups for neutral propositions or those about guilt.

What both studies showed is that participants with tendencies towards mental illness reason about topics relevant to their illness better than about other topics, and better than control participants do.
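The possibility-listing task described above can be sketched mechanically (an illustrative snippet, not part of the original experiments): for an inclusive disjunction "A or B, or both", a case is possible if and only if at least one disjunct holds.

```python
from itertools import product

# Enumerate the cases for an inclusive disjunction "A or B, or both"
# (e.g., "The alarm rings or I feel guilty, or both"):
# a case is possible iff at least one disjunct holds.
possible = [case for case in product([True, False], repeat=2)
            if case[0] or case[1]]
impossible = [case for case in product([True, False], repeat=2)
              if not (case[0] or case[1])]

print(len(possible))   # → 3  (A alone, B alone, A and B)
print(impossible)      # → [(False, False)]
```

Listing all three possibilities, rather than overlooking one, is precisely what counted as correct performance in the experiments.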

In two further studies we examined participants who drew their own conclusions from syllogistic premises (Gangemi et al., 2013). In the first experiment we compared depressed patients and control participants. All the participants stated in their own words what followed, if anything, from several pairs of syllogistic premises. One set had premises (e.g., Sometimes when I think of my future, I feel sad. Every time I feel sad, I'm very pessimistic) and conclusions (e.g., Therefore, sometimes when I think of my future, I'm very pessimistic) that were depressing, and the other set had premises (e.g., Sometimes when I look back at my life, I find myself smiling. Every time I find myself smiling, I feel very satisfied with myself) and conclusions (e.g., Sometimes when I look back at my life, I feel very satisfied with myself) that were neutral. Overall, the depressed patients were more accurate than the control participants when they drew conclusions from premises about depression. A second study compared the reasoning of students who were at high risk of panic attacks with controls who were not. The experiment was identical to the previous study apart from the different participants and contents. As in the previous experiment, the anxious participants outperformed control participants when the premises and the conclusions (e.g., Sometimes when I am in an elevator I find it difficult to breathe) were relevant to their disturbance.
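The validity of such syllogisms can be checked with a set-based sketch (the set contents are arbitrary illustrations): "Some A are B; every B is C; therefore some A are C" holds because any shared element of A and B must also lie in C.

```python
# Set-based sketch of the depressing syllogism
# "Some A are B; every B is C; therefore some A are C".
# The set contents are arbitrary illustrations, not experimental materials.
A = {"mon", "tue", "fri"}   # occasions of thinking of my future
B = {"fri", "sun"}          # occasions of feeling sad
C = {"fri", "sun", "sat"}   # occasions of being very pessimistic

assert A & B                # premise 1: some A are B
assert B <= C               # premise 2: every B is C
# Any element of the intersection A∩B lies in C (since B ⊆ C),
# so the conclusion "some A are C" must hold:
print(bool(A & C))          # → True
```

The conclusion follows in every model of the premises, not just this one, because a nonempty A∩B is a subset of both A and C.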

Overall, all the previous studies falsify the hypothesis that psychological illnesses impair reasoning: the obsessive, depressed, and anxious participants all outperformed control participants, but only when they drew conclusions relevant to their illnesses (e.g., Goel and Vartanian, 2011).

These findings suggest that psychological disorders help patients to explore more possibilities in reasoning about their symptoms, because patients are more motivated to draw conclusions about these symptoms. This could mean that individuals feeling anxious, depressed, or guilty tend to reason about this kind of topic more expertly, thinking of possibilities that might otherwise escape them. The effect is to increase motivation and to improve reasoning, perhaps because emotions enable individuals, whether or not they are psychologically ill, to think of possibilities that they would otherwise not imagine.

The crucial question is now, why do clinical people reason better than non-clinical individuals when topics are pertinent to their illnesses?

According to the hyper-emotion theory (Johnson-Laird et al., 2006, see also Johnson-Laird in this book), psychological illnesses are disorders in which individuals have emotions that are appropriate to the situation but inappropriate in their intensity. This theory combines a theory of emotions and a theory of reasoning.

The theory of emotions is based on a cognitive view in which emotions dispose individuals towards certain thoughts and actions, and can be caused by both unconscious and conscious evaluations of situations (see, e.g., Oatley and Johnson-Laird, 1987). An individual suffering from panic attacks says, for example, "I don't know why I am so frightened on the highway, other than that I feel that I might lose control of myself". Thus, although individuals may be aware of the cause of an emotion, they cannot be aware of the process that makes the transition to the emotion itself, which is exaggerated in its intensity. Individuals have no voluntary control over these emotions. The best they can do is to adopt some method to decrease the emotion, such as avoiding its object.

The theory of reasoning assumes that reasoning depends on envisaging the possibilities to which propositions refer, and on drawing conclusions that hold in those possibilities (Johnson-Laird, 2006). A common error in reasoning is to overlook a possibility (Barres and Johnson-Laird, 2003), and so any factor that diminishes such mistakes should improve reasoning. One such factor is an emotion concerning the topic of inference. When individuals experience such an emotion, they are bound to reason about its cause. This focus, in turn, leads to the maintenance of the emotion and its concomitant pathology, which are beyond the reach of reason. But what does this theory tell us about reasoning in healthy people?

6 Reasoning and emotion: from psychopathology to normal people

Before answering this question, we want to focus on the relation between rational thinking and emotion. A wide literature tends to see this relationship as a simple contrast between the two. Indeed, many scholars use the term “emotion” as a substitute for the word “irrationality”. They hold that rational thinking must always be cold: we cannot reason well, and reach our goals, if we are influenced by our emotions. In this perspective, emotions should always worsen our reasoning, making us irrational. Yet this is not the position of the appraisal theories of emotion. In general, these theories assume that an individual’s goals (i.e., desires, needs, values) and beliefs (i.e., cognitions, representations, assumptions) are proximal determinants of behaviour (cf. Castelfranchi and Paglieri, 2007). In particular, appraisal-based theories claim that all emotional states and behaviours are based on “a person’s subjective evaluation or appraisal of the personal significance of a situation, object, or event on a number of dimensions or criteria” (Scherer, 1999: 637). Among these dimensions or criteria, our own goals or interests are the most important. When we believe that they could be compromised, our reasoning becomes a tool to make them safe. Accordingly, these theories claim that our emotions can even improve our capacity to protect our goals by driving our reasoning processes. And emotions can improve our reasoning in some contexts, as demonstrated by several studies (e.g., Blanchette and Richards, 2010, see also Chapter 4 in this book). Let’s see what these contexts are. Some research shows that when emotions are incidental, induced for example by music, they actually burden the system and lead to poor performance in reasoning tasks (e.g., Blanchette, 2006; Blanchette and Richards, 2004).
But when emotions are integral, i.e., induced by the topic of reasoning, they improve it (e.g., Johnson-Laird et al., 2006; Gangemi et al., 2013). For example, Blanchette and her colleagues have shown these effects in a study with war veterans suffering from post-traumatic stress disorder: the veterans solved syllogisms better when the conclusions referred to a topic relevant to them, i.e., war, than to neutral topics (Blanchette and Campbell, 2005). Similar effects were found in the evaluation of syllogisms after the terrorist attacks in London, UK, in July 2005 (Blanchette et al., 2007). Participants who lived in London were more accurate in drawing conclusions from syllogisms concerning terrorism than those living in Manchester, UK, who in turn were more accurate than those living in another country (Canada). In other words, the closer the participants’ geographical proximity to the attacks, the greater the proportion of them who correctly evaluated the syllogisms. The difference between the Londoners and the Canadians disappeared six months later, even though the Londoners still reasoned more accurately about terrorism than the other two groups. These results were due to the emotion related to the terrorist attack: the three groups differed in the reported intensity of their basic emotions.

So, how do emotions explain an improvement in reasoning? The mental model theory of reasoning offers a hypothesis (e.g., Johnson-Laird, 2006; Johnson-Laird et al., 2015). The theory postulates that reasoning depends on imagining possibilities, and so emotions induced by the topic lead individuals to make a more exhaustive search for the possibilities relevant to their cause than the search they make in other cases (e.g., Bucciarelli and Johnson-Laird, 1999; Johnson-Laird et al., 2006; Gangemi et al., 2013; Gangemi et al., 2015; Gangemi et al., 2019). In this way, our emotions may help us to achieve or protect our goals by orienting our reasoning processes. Reasoning thus becomes a tool to make those goals safe. This means that rational reasoning does not need to be cold: in many contexts, such as a dangerous one, emotion can even improve our capacity to reason about the threat. An example of a functional reasoning strategy activated by our emotions is the previously mentioned better-safe-than-sorry strategy. In a context of danger, we focus on the threat, and this leads us to feel a congruent emotion, such as anxiety. This emotion leads us to prudentially seek evidence confirming the danger hypothesis. Confirming the threat protects us from crucial errors, i.e., underestimating a danger when it is real. For example, if we are worried because of a persistent symptom, such as stomach pain, we may make a transition to great anxiety, focusing on the worst case: we could have a serious illness. This danger hypothesis is likely to start a confirmatory pattern of inferences, rather than a falsification process. We thus search for evidence confirming the hypothesis from any available source of information, such as an analogy with a friend, a relative, or a case in a newspaper, and this strengthens our belief in the worst-case scenario (see, e.g., Gangemi et al., 2019). We then infer that we should consult a doctor.
If we are mistaken about our illness, no harm is done; but if we fail to consult a doctor and we do have the illness, the consequences will be disastrous. We are adopting the better-safe-than-sorry reasoning strategy (de Jong et al., 1998), which helps us to focus on the danger and leads us to search for examples confirming it.
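The cost asymmetry behind the better-safe-than-sorry strategy can be sketched as a simple expected-cost comparison. The figures below are hypothetical illustrations, not data from the chapter: they merely show that when the cost of missing a real threat dwarfs the cost of a false alarm, acting on the danger hypothesis minimizes expected cost even when the danger is improbable.

```python
# A minimal decision-theoretic sketch of the better-safe-than-sorry strategy.
# All numbers are hypothetical illustrations chosen for this example.

def expected_cost(p_danger: float, cost_miss: float,
                  cost_false_alarm: float, act: bool) -> float:
    """Expected cost of acting on the danger hypothesis (e.g., seeing a
    doctor) versus ignoring it."""
    if act:
        # Acting when the danger is absent incurs only the false-alarm cost.
        return (1 - p_danger) * cost_false_alarm
    # Not acting is costless unless the danger turns out to be real.
    return p_danger * cost_miss

p = 0.05                 # small subjective probability of serious illness
cost_if_missed = 1000.0  # disastrous consequences of an untreated illness
cost_of_visit = 10.0     # minor inconvenience of a needless consultation

# Consulting the doctor has the lower expected cost: 9.5 versus 50.0.
assert expected_cost(p, cost_if_missed, cost_of_visit, act=True) < \
       expected_cost(p, cost_if_missed, cost_of_visit, act=False)
```

On these assumptions, the prudential, confirmatory strategy is the rational one: the asymmetry in error costs, not a failure of logic, drives the focus on the threat.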

In summary, the relationship between rational thinking and emotion is more complex than a simple contrast between the two, provided we avoid using the term “emotion” as a substitute for the word “irrationality”. Emotions can make our reasoning rational when they orient it toward our relevant goals, for example, by achieving or protecting them. This interpretation accords with the general principle that individuals think more carefully about what is important to them than about what is unimportant (Blanchette and Richards, 2010; Tanner and Medin, 2004).

7 Conclusions

Thanks to a wide number of experiments, we now know that patients’ reasoning is not impaired, and that the source and maintenance of their illnesses do not lie in faulty inferences. We argued that patients are motivated to reason effortfully in pursuit of their goals, thus reducing the likelihood of crucial errors and thereby avoiding their costs (see Friedrich, 1993). And this makes their reasoning rational. The same holds for normal people. All of us are rational whenever we think in order to achieve a relevant goal or to avoid compromising it. Our thinking depends on the relevance of the hypothesis we have to test and on the context in which we test it. In other words, what makes our reasoning process rational or irrational is not, for example, the systematic use of a falsification strategy, but the choice of the strategy (confirmatory vs. falsificatory) that allows us to avoid crucial errors from the point of view of our goals. Our hypothesis testing is indeed a pragmatically directed process, mainly motivated by the costs of our inferential errors. Several studies have demonstrated that people’s hypothesis testing is domain-specific and guided by their relevant goals: individuals’ reasoning performance depends on the perceived relevance of the hypothesis to their personal interests (Baron, 2008; Evans and Over, 1996; Kirby, 1994; Manktelow and Over, 1991; Smeets et al., 2000).

Moreover, we now know that individuals suffering from mental illness experience intense emotions, and that, thanks to these emotions, they can even reason better than those who are mentally healthy (e.g., Johnson-Laird et al., 2006; Owen et al., 2007; Vroling and de Jong, 2009). Yet cognitive therapists keep suggesting that the source of illnesses lies in faulty inferences, and that correcting these inferential errors would help to prevent patients’ aberrant emotions. The hyper-emotion theory postulates instead that the emotion directs attention, interpretation, and reasoning to its potential cause (Johnson-Laird et al., 2006). And again, this is valid for every one of us. Emotions, whether they are induced by the task or are a result of a psychological disorder, make individuals more likely to construct models of possibilities pertinent to their source than to do so for other contents. Indeed, the theory postulates that reasoning depends on imagining possibilities, the key assumption in the model-based account of reasoning (see, e.g., Bucciarelli and Johnson-Laird, 1999; Johnson-Laird, 2006; Johnson-Laird and Byrne, 1991). It follows that all human beings will be more likely to envisage the possibilities needed to infer conclusions about the source of an emotion than about other matters. Several experiments (cf. Johnson-Laird et al., 2006; Gangemi et al., 2019) have corroborated this prediction, and they are consistent with a growing body of evidence that individuals show increased normatively correct thinking when reasoning about “protected values”, that is, issues they feel very strongly about, relative to more neutral issues (Blanchette and Richards, 2010; Tanner and Medin, 2004).


American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (DSM-5) (5th ed.). Washington, DC: American Psychiatric Association.

Baron, J. (2008). Thinking and deciding. Cambridge: Cambridge University Press.

Barres, P., and Johnson-Laird, P. N. (2003). On imagining what is true (and what is false). Thinking and Reasoning, 9, 1-42.

Beck, A. T. (1976). Cognitive therapy and the emotional disorders. New York: Meridian.

Beck, A. T. (2019). A 60-year evolution of cognitive theory and therapy. Perspectives on Psychological Science, 14, 16-20.

Blanchette, I. (2006). The effect of emotion on interpretation and logic in a conditional reasoning task. Memory and Cognition, 34, 1112-1125.

Blanchette, I., and Campbell, M. (2005). The effect of emotion on syllogistic reasoning in a group of war veterans. In Proceedings of the XXVIIth annual conference of the Cognitive Science Society (p. 1401). Mahwah, NJ: Erlbaum.

Blanchette, I., and Richards, A. (2004). Reasoning about emotional and neutral materials. Is logic affected by emotion? Psychological Science, 15, 745-752.

Blanchette, I., and Richards, A. (2010). The influence of affect on higher level cognition: A review of research on interpretation, judgement, decision-making and reasoning. Cognition and Emotion, 24, 561-595.

Blanchette, I., Richards, A., Melnyk, L., and Lavda, A. (2007). Reasoning about emotional contents following shocking terrorist attacks: A tale of three cities. Journal of Experimental Psychology. Applied, 13, 47-56.

Bögels, S. M., and Mansell, W. (2004). Attention processes in the maintenance and treatment of social phobia: Hypervigilance, avoidance and self-focused attention. Clinical Psychology Review, 24, 827-856.

Bucciarelli, M., and Johnson-Laird, P. N. (1999). Strategies in syllogistic reasoning. Cognitive Science, 23, 247-303.

Castelfranchi, C., and Paglieri, F. (2007). The role of beliefs in goal dynamics: Prolegomena to a constructive theory of intentions. Synthese, 155, 237-263.

Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Clark, D. M., and McManus, F. (2002). Information processing in social phobia. Biological Psychiatry, 51, 92-100.

Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason Selection Task. Cognition, 31, 187-276.

de Jong, P. J., Haenen, M.-A., Schmidt, A., and Mayer, B. (1998). Hypochondriasis: The role of fear-confirming reasoning. Behaviour Research and Therapy, 36, 65-74.

de Jong, P. J., Mayer, B., and van den Hout, M. (1997). Conditional reasoning and phobic fear: Evidence for a fear-confirming pattern. Behaviour Research and Therapy, 35, 507-516.

Evans, J. St. B. T., and Over, D. E. (1996). Rationality in the selection task: Epistemic utility versus uncertainty reduction. Psychological Review, 103, 356-363.

Friedrich, J. (1993). Primary Error Detection and Minimization (PEDMIN) strategies in social cognition: A reinterpretation of confirmation bias phenomena. Psychological Review, 100, 298-319.

Gangemi, A., Mancini, F., and Dar, R. (2015). An experimental re-examination of the inferential confusion hypothesis of obsessive-compulsive doubt. Journal of Behavior Therapy and Experimental Psychiatry, 48, 90-97.

Gangemi, A., Mancini, F., and Johnson-Laird, P. N. (2013). Models and cognitive change in psychopathology. Journal of Cognitive Psychology, 25, 157-164.

Gangemi, A., Mancini, F., and van den Hout, M. (2007). Feeling guilty as a source of information about threat and performance. Behaviour Research and Therapy, 45, 2387-2396.

Gangemi, A., Tenore, K., and Mancini, F. (2019). Two reasoning strategies in psychological illnesses. Frontiers in Psychology, 10, 2335.

Garety, P. A., and Hemsley, D. R. (1997). Delusions: Investigations in the psychology of delusional reasoning. Hove: Psychology Press Ltd.

Goel, V., and Vartanian, O. (2011). Negative emotions can attenuate the influence of beliefs on logical reasoning. Cognition and Emotion, 25, 121-131.

Harvey, A., Watkins, E., Mansell, W., and Shafran, R. (2004). Cognitive behavioural processes across psychological disorders: A transdiagnostic approach to research and treatment. Oxford: Oxford University Press.

Hirsch, C. R., and Clark, D. M. (2004). Information-processing bias in social phobia. Clinical Psychology Review, 24, 799-825.

Johnson-Laird, P. N. (2006). How we reason. Oxford: Oxford University Press.

Johnson-Laird, P. N., and Byrne, R. M. J. (1991). Deduction. Hillsdale, NJ: Erlbaum.

Johnson-Laird, P. N., Khemlani, S. S., and Goodwin, G. P. (2015). Logic, probability, and human reasoning. Trends in Cognitive Sciences, 19, 201-214.

Johnson-Laird, P. N., Mancini, F., and Gangemi, A. (2006). A hyper-emotion theory of psychological illnesses. Psychological Review, 113, 822-842.

Kirby, K. N. (1994). Probabilities and utilities of fictional outcomes in Wason’s four card selection task. Cognition, 51, 1-28.

Leahy, R. L. (2004). Contemporary cognitive therapy: Theory, research, and practice. London: Guilford Press.

Mancini, F., and Gangemi, A. (2015). Deontological guilt and obsessive compulsive disorder. Journal of Behavior Therapy and Experimental Psychiatry, 49, 157-163.

Mancini, F., Gangemi, A., and Johnson-Laird, P. N. (2007). Il ruolo del ragionamento nella psicopatologia secondo la Hyper Emotion Theory [The role of reasoning in psychopathology according to the Hyper Emotion Theory]. Giornale Italiano di Psicologia, 4, 763-793.

Manktelow, K. I., and Over, D. E. (1991). Social roles and utilities in reasoning with deontic conditionals. Cognition, 39, 85-105.

Niler, E. R., and Beck, S. J. (1989). The relationship among guilt, dysphoria, anxiety and obsessions in a normal population. Behaviour Research and Therapy, 27, 213-220.

Oatley, K. J., and Johnson-Laird, P. N. (1987). Towards a cognitive theory of emotions. Cognition and Emotion, 1, 29-50.

Owen, G. S., Cutting, J., and David, A. S. (2007). Are people with schizophrenia more logical than healthy volunteers? British Journal of Psychiatry, 191, 453-454.

Scherer, K. R. (1999). Appraisal theory. In T. Dalgleish and M. J. Power (Eds.), Handbook of cognition and emotion (pp. 637-663). Chichester, UK/New York: John Wiley and Sons Ltd.

Shapiro, L. J., and Stewart, E. S. (2011). Pathological guilt: A persistent yet overlooked treatment factor in obsessive-compulsive disorder. Annals of Clinical Psychiatry, 23, 63-70.

Shin, L. M., Dougherty, D. D., Orr, S. P., Pitman, R. K., Lasko, M., Macklin, M. L., et al. (2000). Activation of anterior paralimbic structures during guilt-related script-driven imagery. Biological Psychiatry, 48, 43-50.

Smeets, G., and de Jong, P. J. (2005). Belief bias and symptoms of psychopathology in a non-clinical sample. Cognitive Therapy and Research, 29, 377-386.

Smeets, G., de Jong, P. J., and Mayer, B. (2000). If you suffer from a headache, then you have a brain tumour: Domain specific reasoning “bias” and hypochondriasis. Behaviour Research and Therapy, 38, 763-776.

Takahashi, H., Yahata, N., Koeda, M., Matsuda, T., Asai, K., and Okubo, Y. (2004). Brain activation associated with evaluative processes of guilt and embarrassment: An fMRI study. NeuroImage, 23, 967-974.

Tanner, C., and Medin, D. L. (2004). Protected values: No omission bias and no framing effects. Psychonomic Bulletin and Review, 11, 185-191.

Trope, Y., and Liberman, A. (1996). Social hypothesis testing: Cognitive and motivational mechanisms. In E. Higgins and A. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 239-270). New York: Guilford.

Vroling, M. S., and de Jong, P. J. (2009). Deductive reasoning and social anxiety: Evidence for a fear-confirming belief bias. Cognitive Therapy and Research, 33, 633-644.

Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273-281.

Young, J. E., and Beck, A. T. (1982). Cognitive therapy: Clinical application. In A. J. Rush (Ed.), Short-term psychotherapies for depression. London: Guilford Press.
