More Recent Cannell-Inspired Research

Interview Observation

The use of interview observation focused almost solely on interviewer evaluation during the 1970s (Cannell, Lawson, and Hauser 1975; Mathiowetz and Cannell 1980) but then emerged as a pretesting technique with publications by Morton-Williams (1979), Morton-Williams and Sykes (1984), and Oksenberg, Cannell, and Kalton (1991). The expanded use of behavior coding as an evaluation tool for pretesting provided researchers with a systematic, objective, and quantitative means by which to assess survey questions.

The coding scheme used by Oksenberg, Cannell, and Kalton (1991) is relatively simple, consisting of three interviewer behaviors, all related to how the initial question was asked (exactly, slight change, or major change), and seven respondent behaviors (interruption with answer, request for clarification, adequate answer, qualified answer, inadequate answer, don't know, and refusal to answer). All respondent behaviors apart from the provision of an adequate answer were treated as indications of problems with the question. In contrast, the coding scheme used by Marquis and Cannell (1969) included 12 interviewer behaviors, 7 respondent behaviors, and 9 behaviors (e.g., laughing) that could be exhibited by either the interviewer or the respondent.
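To make the simpler scheme concrete, the sketch below lays out the Oksenberg, Cannell, and Kalton (1991) codes as enumerations and tallies the share of coded exchanges that signal a problem with a question. The code labels and the problem_rate helper are illustrative conveniences, not the authors' published codebook or software.

```python
# Illustrative sketch of the Oksenberg, Cannell, and Kalton (1991) behavior codes.
# Labels and the tallying helper are hypothetical, assumed for illustration only.
from collections import Counter
from enum import Enum


class InterviewerCode(Enum):
    EXACT = "question read exactly as worded"
    SLIGHT_CHANGE = "slight change in wording"
    MAJOR_CHANGE = "major change in wording"


class RespondentCode(Enum):
    INTERRUPTION = "interruption with answer"
    CLARIFICATION = "request for clarification"
    ADEQUATE = "adequate answer"
    QUALIFIED = "qualified answer"
    INADEQUATE = "inadequate answer"
    DONT_KNOW = "don't know"
    REFUSAL = "refusal to answer"


def problem_rate(codes: list) -> float:
    """Share of coded exchanges signaling a question problem
    (every respondent behavior other than an adequate answer)."""
    if not codes:
        return 0.0
    problems = sum(code is not RespondentCode.ADEQUATE for code in codes)
    return problems / len(codes)


# Example: respondent codes assigned to one question across several interviews.
q1_codes = [RespondentCode.ADEQUATE, RespondentCode.CLARIFICATION,
            RespondentCode.ADEQUATE, RespondentCode.INADEQUATE]
print(Counter(code.name for code in q1_codes))
print(f"problem rate: {problem_rate(q1_codes):.2f}")
```

Questions whose problem rate exceeds a chosen threshold would then be flagged for revision during pretesting, which is the quantitative, systematic use of the codes described above.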

What began as a simple coding scheme for evaluating interviewers, and later the quality of questions, has grown into a far more nuanced and detailed examination of the interviewer-respondent interaction, for example, in conversational analysis (e.g., Maynard, et al. 2002). Today we see behavior coding used for purposes ranging from the evaluation of questions as part of pretesting activities (e.g., Presser, et al. 2004) to serving as a metric of data quality in time use studies (e.g., Freedman, et al. 2013).

Interviewing Methods

More recent data collection research demonstrates the potential effectiveness of Commitment for online, self-administered surveys. Cibelli (2017) experimented with Commitment, and with Commitment combined with Feedback, in two studies as part of her dissertation work. Using a simple binary Commitment request, in which the respondent either did or did not agree to commit to working hard to provide accurate information in an online labor force survey, she found a significant decline in item nonresponse and straightlining, as well as more accurate data, compared with a control group that was not offered a Commitment statement. A second experiment, however, in which Commitment was operationalized as a request to agree to five distinct behaviors (reading each question carefully; being precise; using records; providing as much information as possible; and answering honestly), coupled with feedback and contextual recall cues, did not yield markedly more accurate or higher-quality data than a control group.
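For readers unfamiliar with the two data-quality indicators just mentioned, the sketch below shows one common way to compute item nonresponse and straightlining from grid responses. The data layout and the group comparison are assumptions made purely for illustration; they do not reproduce Cibelli's (2017) instruments or analysis.

```python
# Minimal sketch of two data-quality indicators: item nonresponse and straightlining.
# Data layout (one list of grid responses per respondent, None = skipped item) is
# assumed for illustration only.
from statistics import mean


def item_nonresponse_rate(responses: list) -> float:
    """Share of items a respondent left unanswered."""
    return sum(r is None for r in responses) / len(responses)


def is_straightlining(responses: list) -> bool:
    """True if every answered grid item received the identical response."""
    answered = [r for r in responses if r is not None]
    return len(answered) > 1 and len(set(answered)) == 1


# Hypothetical respondents: Commitment group vs. control group.
commitment = [[4, 3, None, 5, 4], [2, 3, 2, 4, 3]]
control = [[3, 3, 3, 3, 3], [None, None, 4, 4, 4]]

for label, group in (("commitment", commitment), ("control", control)):
    print(label,
          "nonresponse:", round(mean(item_nonresponse_rate(r) for r in group), 2),
          "straightlining:", round(mean(is_straightlining(r) for r in group), 2))
```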

Conclusions

Where do we find ourselves now with respect to Cannell's influence on research and practice? Both the social interaction and the cognitive models of the survey interview process continue to inform the construction of questionnaires and our understanding of the sources of response error (e.g., Schaeffer and Dykema 2011). Behavior coding has become a tool widely used by questionnaire designers (e.g., Presser, et al. 2004). Research on the interviewing techniques that Cannell pioneered, however, has not diffused throughout current survey practice.

Cannell's legacy to survey researchers is a perspective on response error and a set of tools for reducing it. The perspective has currency today: it should guide the design of questionnaires and of instructions for interviewer behavior. As we have seen, Cannell advocated theorizing about likely reporting error in the measurements planned for each survey. Such hypotheses should direct how we ask questions and how we train interviewers to ask them. Scripting questionnaires with orientation for respondents about the need for accuracy, and reinforcing behavior that contributes to this goal, is an approach that should be tested and widely employed. Implementing techniques, such as Commitment, to motivate respondents to do the necessary work to provide accurate information is particularly important in an era in which "surveys" are myriad and easy to dismiss. Rather than an invitation to refuse participation, a Commitment request at the start of an interview, followed by Instructions and Feedback that teach the respondent role and reinforce appropriate behavior, can distinguish our surveys from trivial requests. Cannell's approach can help to legitimate survey requests in the public eye.

Although the applicability of the social interaction model of the survey process may not be obvious for self-administered online data collection, any data collection activity is a process with actors and interaction among those actors. Some actors may be distant (the unseen researcher), some may be inanimate (smartphone; see Schober, et al., Chapter 13, this volume), but the interaction among these actors contributes to determining the quality of survey data. We believe that the world of online data collection is ripe for further research with respect to how Instructions, Feedback, and Commitment could improve the overall quality of such data. The perspective can be applied in Web, interactive voice response (IVR), and short message service (SMS or "texting") surveys. In fact, implementing the techniques in such contexts may be more effective, since automated communication will faithfully execute interviewing instructions that human interviewers do not always follow. If surveys employ avatar interviewers (Cassell and Miller 2007; see Conrad, et al., Chapter 11, this volume), nonverbal reinforcement procedures, attempted but not validated in earlier research (Marquis 1970), can be reliably implemented. In short, the addition of more communication media for conducting surveys opens up new possibilities for testing and adopting methods that Cannell pioneered.
