Discussion

The evidence from this corpus is that text interviews have quite different dynamics than voice interviews on the same device: they take longer overall but involve fewer turns of interaction - allowing respondents to answer when convenient for them and while multitasking, which a number of respondents report finding preferable (see Conrad et al. 2017a) - and they are more "to the point," with less small talk. The interaction analyses reported here suggest (or are at least consistent with an explanation that) the decreased social presence of the interviewer and the asynchrony of interaction in text may have important effects on data quality (in general, benefits) - but that in voice interviews interviewer behaviors that display human fallibility (laughter and repairs) may be associated with improved data quality (whether they cause or result from it). For precision of answers (rounding), interviews with no laughter (text interviews, automated interviews) were associated with more precise responding (better data quality), but within voice interviews more interviewer laughter was associated with more precise responding. For disclosure, respondents reported more socially undesirable behavior (and data quality was presumably better) in interviews with no or fewer disfluencies (text and automated interviews), but within voice interviews interviewers' speech disfluencies (in particular, fillers) seem to have been associated with greater disclosure.

From a Total Survey Error perspective, text interviewing (vs. voice interviewing) in this data set clearly improved both participation and measurement (Schober et al. 2015). Although our corpus doesn't allow systematic calculation of interviewer effects (respondents were not assigned to interviewers randomly), the interaction analyses reported here suggest that text interviewing has the potential to reduce interviewer effects. To the extent that interviewer variance is related to interviewer behavior, texting simply involves less interviewer behavior; by largely streamlining the interview to its essential question-asking and question-answering elements, text interviewing should lead to more standardized interviews than interviewing conducted via voice.
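To make "interviewer effects" concrete for readers less familiar with the measure, the conventional summary statistic - not computed in the analyses here, and sketched only as a point of reference - is the intraclass correlation

ρ_int = σ²_b / (σ²_b + σ²_w),

where σ²_b is the between-interviewer component of response variance and σ²_w is the within-interviewer (residual) component. Larger values of ρ_int mean that more of the variation in answers is attributable to which interviewer conducted the interview; estimating this decomposition requires (at least approximately) random assignment of respondents to interviewers, which is precisely what the present corpus lacks.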

Questions and Implications

The analyses reported here raise at least as many questions as they answer. It remains to be seen whether the patterns of findings extend to other implementations of these modes (different variants of text messaging interviews or automated voice interviews) or to different survey questions, for example, behavioral questions that are more or less sensitive, or attitudinal and opinion questions. Will the findings generalize to non-convenience (probability) samples, to differently incentivized participants, or to subpopulations of respondents with different levels of experience in particular modes? One might suspect that respondents who are unfamiliar or uncomfortable with texting, or who only feel comfortable talking with a live human, might show different patterns of behavior and interaction than those observed in the current sample of respondents, who opted in to a study on their iPhones after seeing advertisements in online sources.

A major challenge in interpreting studies of this sort is how much of a moving target they are attempting to hit. Communication devices themselves continue to evolve, with new versions of mobile devices and operating systems changing features frequently, which means that the features of the modes potentially available for use in survey interviews can be in flux even as a survey is deployed. Different groups or subpopulations may have different time courses for adopting a new device or a particular app, and so researchers deploying any particular mode at any moment may be tapping into unknown levels of experience among subgroups. And even among experienced users of a device or mode, norms for everyday communication evolve over time; people who once would have answered a voice call from an unknown caller may no longer be willing to pick up the phone, which may substantially change their willingness to participate in a particular mode, or their motivation to provide high-quality responses.

All the unknowns make this an exciting - if complex! - time to be exploring communication and interviewer effects in survey interviews. Despite how much more needs to be understood, the current findings do suggest a few main takeaway messages:

1. Interviewer effects (of the sort measured by the intraclass correlation) may take unexpected forms in different modes and as people's communication patterns and norms - not only with other people but with automated systems, in both personal and professional life - evolve. Modes that reduce interviewers' social presence and streamline the interaction (reducing time pressure to respond) have the potential to reduce interviewer effects, though the effects may vary for different measures of data quality in different modes.
2. Systematic methodological study over time, in multiple interviewing modes, and with a range of respondent populations will be needed to understand how interviewers can best be deployed moving forward. If interviews via text (or any other new mode) prove popular, new interviewer training - and possibly even selecting interviewers with particular experience in or affinity for texting (or another mode) - may be needed.
3. Long-standing (if not always explicitly articulated) assumptions about face-to-face (FTF) interviewing as the gold standard may need to be rethought (Schober 2018). The "human touch" in interviewing will no doubt continue to have important benefits for respondent motivation, rapport, and (as evidenced here) satisfaction with interviews. But the social presence of an interviewer can also have serious drawbacks as norms and practices of communication evolve in newer technologies. (The very fact that it is now routine in FTF interviews to switch to self-administration when the topic is judged to be sensitive demonstrates researchers' recognition that social presence matters.) At least some respondents may well end up finding interviewers' physical or virtual presence intrusive or burdensome, much as many now already find interacting with human (vs. automated) bank tellers. How survey researchers should go about developing new gold standards will be an upcoming challenge.

Acknowledgments

This research was supported by NSF grants SES-1026225 and SES-1025645 (Methodology, Measurement, and Statistics program) to Frederick Conrad and Michael Schober. Many thanks to Patrick Ehlen, Stefanie Fail, Michael Johnston, Courtney Kellner, Monique Kelly, Mingnan Liu, Kelly Nichols, Leif Percifield, Lucas Vickers, and Chan Zhang for advice and assistance, and to the editors for their helpful questions and comments.

References

Antoun, C., C. Zhang, F. G. Conrad, and M. F. Schober. 2016. Comparisons of online recruitment strategies for convenience samples: Craigslist, Google AdWords, Facebook, and Amazon Mechanical Turk. Field Methods 28(3):231-246.

Conrad, F. G., J. Broome, J. Benki, F. Kreuter, R. Groves, D. Vannette, and C. McClain. 2013. Interviewer speech and the success of survey invitations. Journal of the Royal Statistical Society: Series A 176(1):191-210.

Conrad, F. G., M. F. Schober, C. Antoun, H. Y. Yan, A. L. Hupp, M. Johnston, P. Ehlen, L. Vickers, and C. Zhang. 2017a. Respondent mode choice in a smartphone survey. Public Opinion Quarterly 81(S1):307-337.

Conrad, F. G., M. F. Schober, A. L. Hupp, C. Antoun, and H. Y. Yan. 2017b. Text interviews on mobile devices. In: Total Survey Error in Practice, ed. P. P. Biemer, E. de Leeuw, S. Eckman, B. Edwards, F. Kreuter, L. E. Lyberg, C. Tucker, and B. T. West, 299-318. Hoboken, NJ: John Wiley and Sons.

Dijkstra, W. 2018. Sequence Viewer. Amsterdam, Netherlands. http://www.sequenceviewer.nl/.

Garbarski, D., N. C. Schaeffer, and J. Dykema. 2016. Interviewing practices, conversational practices, and rapport: Responsiveness and engagement in the standardized survey interview. Sociological Methodology 46(1):1-38.

Johnston, M., P. Ehlen, F. G. Conrad, M. F. Schober, C. Antoun, S. Fail, A. Hupp, L. Vickers, H. Yan, and C. Zhang. 2013. Spoken dialog systems for automated survey interviewing. In: Proceedings of the 14th Annual SIGDIAL Meeting on Discourse and Dialogue (SIGDIAL 2013), 329-333.

Kreuter, F., S. Presser, and R. Tourangeau. 2008. Social desirability bias in CATI, IVR, and web surveys: The effects of mode and question sensitivity. Public Opinion Quarterly 72(5):847-865.

Lind, L. H., M. F. Schober, F. G. Conrad, and H. Reichert. 2013. Why do survey respondents disclose more when computers ask the questions? Public Opinion Quarterly 77(4):888-935.

Schaeffer, N. C., and D. W. Maynard. 1996. From paradigm to prototype and back again: Interactive aspects of cognitive processing in survey interviews. In: Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Interviews, ed. N. Schwarz and S. Sudman, 65-88. San Francisco, CA: Jossey-Bass.

Schober, M. F. 2018. The future of face-to-face interviewing. Quality Assurance in Education 26(2):293-302.

Schober, M. F., F. G. Conrad, C. Antoun, P. Ehlen, S. Fail, A. L. Hupp, M. Johnston, L. Vickers, H. Y. Yan, and C. Zhang. 2015. Precision and disclosure in text and voice interviews on smartphones. PLoS One 10(6):e0128337.

Tourangeau, R., and T. W. Smith. 1996. Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly 60(2):275-304.

Villar, A., and R. Fitzgerald. 2017. Using mixed modes in survey data research: Results from six experiments. In: Values and Identities in Europe: Evidence from the European Social Survey, ed. M. J. Breen, 273-310. New York: Routledge.

West, B. T., and W. G. Axinn. 2015. Evaluating a modular design approach to collecting survey data using text messages. Survey Research Methods 9(2):111-123.
