The Emotive Character of Religious Rhetoric

These expectations are informed in part by the assumption that there is a relationship between the emotive character of religious rhetoric and the emotions of the people who hear these messages—the voters. This assumption should not be controversial. Volumes of psychological research have used language (and other message characteristics such as music, pitch, and facial expressions) to induce emotions in research participants. In short, it is possible to craft speech so as to bring about emotions— psychologists and candidates alike engage in this regularly.

What is less clear is how to measure a message according to the likely emotional effect it will have on audience members. Ultimately, to make a claim about the emotive character of religious rhetoric, any measure of emotive character ought to bear a relationship with the emotions it induces in message recipients. No tools exist to determine expected emotional inducement; however, several content analysis computer programs have been developed to assess the emotional state of the message source. These tools have been widely applied in a number of contexts. For example, James Pennebaker’s Linguistic Inquiry and Word Count (LIWC) (Pennebaker, Francis, and Booth 2001) software was initially designed to analyze emotional writing (Rude, Gortner, and Pennebaker 2004). Since its development, however, LIWC has also been used to analyze a number of political phenomena. For example, LIWC has been used to track the psychological state of presidential candidates, finding that John Kerry used many more negatively valenced words than his running mate, John Edwards (Pennebaker, Slatcher, and Chung 2005).3 Likewise, Whissell’s Dictionary of Affect in Language (WDAL) is a computerized content analysis tool used for assessing the emotions of speakers (Duhamel and Whissell 1998; Whissell 1994). Like LIWC, WDAL has been used to assess the emotions of political figures. For example, Cynthia Whissell and Lee Sigelman (2001) find that “power language” (which is language characterized, in part, by positive affect words) in presidential speech has increased with the advent of television.4

Content analysis software is thus a reliable tool to draw speaker personality inferences—at least when the text being analyzed represents the speaker’s own mental disposition (and not that of speechwriters). In addition, there are several reasons why these programs should be good tools to determine the mood induced by public utterances. Theories of emotional contagion hold that when a speaker expresses an emotion, a corresponding emotion can be induced in the message recipient—speakers and audiences essentially converge emotionally (Hatfield, Cacioppo, and Rapson 1994, 153-54).5 These effects should be particularly strong in the case of political speech, when the speakers are experienced at conveying mood and the messages are crafted to do so. In fact, given that campaign rhetoric is often crafted to bring about specific emotional effects, LIWC and WDAL may be more effective tools for determining induced mood than the genuine emotional state of the speaker.6

Both LIWC and WDAL are essentially word count programs that contain large dictionaries of “emotional words.” In both cases, words were rigorously scored by teams of coders for their emotional character.7 Although each program computes emotion scores in a slightly different manner, the underlying logic is essentially the same: scores are computed by counting the number of emotion words in a particular speech or text and dividing by the total number of words in that text. Because these tools were not designed with the aforementioned expectations in mind, there is some disconnect between the measures included in the software and the actual emotional constructs detailed in my hypotheses. For example, neither WDAL nor LIWC contains a direct measure of enthusiasm—an emotion that is theorized to be quite important politically. To address this, I examined the data across all available measures, aiming to arrive at robust conclusions across both computer programs. This also provided an important cross-validation between the two measurement tools.8
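The word-count logic described above can be sketched in a few lines of code. To be clear, this is only an illustration of the general scoring procedure, not the actual LIWC or WDAL software: the tiny emotion dictionary below is a hypothetical stand-in for the large, professionally coded dictionaries those programs use.

```python
import re

# Hypothetical stand-in for the rigorously coded emotion dictionaries
# used by LIWC and WDAL (the real dictionaries contain thousands of words).
EMOTION_WORDS = {
    "positive": {"hope", "joy", "proud", "love", "great"},
    "negative": {"fear", "angry", "sad", "threat", "worry"},
}

def emotion_scores(text):
    """Score a text as (emotion words in category) / (total words)."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    if total == 0:
        return {category: 0.0 for category in EMOTION_WORDS}
    return {
        category: sum(word in vocabulary for word in words) / total
        for category, vocabulary in EMOTION_WORDS.items()
    }

scores = emotion_scores("We have great hope, but fear still lingers.")
# 2 of 8 words are positive (0.25); 1 of 8 is negative (0.125).
```

Both real programs report scores in this normalized form (emotion words as a share of total words), which makes texts of different lengths comparable.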
