Section III. Managing and Monitoring Interviewers and the Survey Process

Exploring the Mind of the Interviewer: Findings from Research with Interviewers to Improve the Survey Process


The survey interviewer's task in the data collection process is a complex one. Interviewers make many judgments and decisions during the process of interacting with respondents that may contribute to total survey error (West and Blom 2017). Similar to the components of the response process undertaken by respondents during the survey interview (Tourangeau, Rips, and Rasinski 2000), interviewers must: (1) understand what the respondent said; (2) compare the response with the intent of the question and/or match the response to the available response options; (3) judge whether the response was adequate; (4) decide whether or not to probe the respondent's answer if it is inadequate; and (5) record the response (e.g., Japec 2008; Ongena and Dijkstra 2007; Sander, et al. 1992).

How interviewers go about accomplishing these complex tasks can vary. For example, survey organizations typically train interviewers using some form of standardized interviewing (Fowler and Mangione 1990). Standardized approaches aim to minimize variation in the ways that interviewers ask questions, primarily by requiring interviewers to read every question as scripted and to use standardized probes; this approach may reduce error. Conversational interviewing, in contrast, acknowledges the dynamic nature of the survey interview process and allows interviewers the flexibility to probe without substantially changing the question meaning, which can increase data quality (Schober and Conrad 1997; Conrad and Schober 2000; Groves and Couper 1998; West, et al. 2018). Interviewers encounter many situations in which the rules of standardization are lacking or incomplete, or they may find it difficult to adhere to standardized practices (see Olson, Smyth, and Cochran 2018, and Chapter 3 of this volume). For example, interviewers are more likely to deviate from an interview script when survey questions are repetitive, sensitive, or difficult (e.g., Haan, Ongena, and Huiskes 2013; Houtkoop-Steenstra 2000). Minor deviations from interview scripts can occur as often as 33% of the time (Ongena and Dijkstra 2006), demonstrating that interviewers often feel a need to make changes to repair question wording.

How interviewers probe responses is a critical yet understudied aspect of the survey process, and data quality and measurement issues may arise during probing (e.g., Olson, Smyth, and Ganshert 2019). Probing involves decisions about when and how to get additional information about a response. While interviewers are typically trained to probe neutrally, probes are often conversational, emergent, and unscripted, as interviewers respond to the survey context in real-time. Because the survey process can be unpredictable, survey organizations may face challenges training interviewers to use standardized probes consistently for unexpected situations. Interviewer training may help with consistency, but training on how to handle challenging situations is more difficult when interviewers work on multiple survey topics with varying question sensitivity or difficulty.


This research was motivated by a desire to learn more about interviewers' cognitive and decision-making processes and to use this information to support interviewers and improve interviewer training. Most research conducted directly with interviewers has been informal, such as debriefings with interviewers, limiting the ability to generalize from its findings. In addition to probing, the use of unscripted question lead-ins, such as apologizing (e.g., Dykema and Schaeffer 2005), forgiving wording (e.g., Näher and Krumpal 2012; Peter and Valkenburg 2011), and distancing oneself from the survey question or organization (e.g., Schaeffer, et al. 2008), is well documented, but little is known about how interviewers decide to use these techniques. The current research was designed to take a more systematic approach to understanding interviewer cognition and decision-making in the field by conducting in-depth interviews with interviewers and asking them to react to vignettes that closely represent situations they might face during real survey interviews.

Research Topics

The study presented in this chapter sought to investigate the following three research areas:

  1) Sensitive questions (i.e., questions that are perceived as personal, invasive, or threatening) are typically studied from the perspective of the respondent rather than the interviewer. However, the task of asking respondents sensitive questions may also affect the survey process. For example: Do interviewers perceive some questions as being sensitive to ask of respondents? Do interviewers ask sensitive questions differently than non-sensitive questions?
  2) Difficult questions (i.e., questions that are cognitively burdensome; require calculations, estimation, or looking up information in records; or are effortful to answer due to insufficient knowledge or recall problems) are also often examined from the respondent's perspective. However, interviewers play an important role in motivating and assisting respondents in the process of answering difficult questions. For example: Do interviewers perceive some questions as being difficult to ask of respondents? Do interviewers ask difficult questions differently than less difficult questions? What cues do interviewers look for to determine if respondents are having difficulty responding?
  3) How do interviewers approach the process of probing respondents' answers? For instance: What cues or features of the interaction (e.g., uncodable answers) do interviewers look for to determine when a response needs to be probed further? How do interviewers decide whether or not to probe? How do interviewers approach probing responses to sensitive or difficult questions?