Virtual Interviewers, Social Identities, and Survey Measurement Error

Introduction

Many considerations can, and should, go into the critical decision of survey mode for empirical studies, including cost, data quality, efficiency, and the trade-offs between them. For example, interviewer-administration is more costly than self-administration (e.g., Baker 1998), yet survey interviewers remain indispensable for some types of surveys (e.g., when high response rates are a priority, when studying populations whose members cannot be reached via other modes, and when studying populations with limited literacy). Interviewers can increase response rates, motivate conscientious responding, explain how certain questions should be interpreted or how response tasks should be performed, and help assure respondents that their answers will be kept confidential. However, interviewers can also introduce error into the measurement process, for example, by promoting socially desirable responding about sensitive topics (e.g., Tourangeau and Yan 2007) and by introducing bias through their perceived social identities, such as gender, age, race, and sub-group membership, when the questions concern those identities (e.g., Davis et al. 2009; Ehrlich and Reisman 1961; Groves et al. 2009, 292-295; Kane and McCaulay 1993; Liu 2016; Schuman and Converse 1971; Wolford et al. 1995).

The effects of interviewers' social identities on responses generally take the same form irrespective of the particular identity involved: respondents express greater support for positions that seem consistent with the views an interviewer is presumed to hold given an identity such as race or gender. For example, Schuman and Converse (1971) reported that 35% of Black respondents indicated they could trust most White people when a White interviewer read the question, but only 7% did so when the interviewer was Black. Similarly, Kane and McCaulay (1993) found that 19.8% of respondents reported sharing childcare with their spouse when the interviewer was female, compared to only 13.5% when the interviewer was male.

In this chapter, we explore an approach that may incorporate attractive features of both interviewer- and self-administration while reducing the monetary costs and measurement error inherent in each. The proposal is that by implementing embodied, animated agents, or "virtual interviewers" (VIs), which are considerably cheaper than live human interviewers, it may be possible to engage respondents more than in traditional text-based web surveys, while deliberately using VIs' social identities to improve the quality of answers. In fact, VIs may facilitate further improvements in data quality by (1) matching VIs' perceived social identities with respondents' self-reported social identities or even (2) allowing respondents to choose their VI.

Our research questions are: (1) Do VIs bias responses on the basis of their perceived social identities (in this case, race and gender) to at least the same extent that human interviewers do? (2) Are respondents more likely to disclose undesirable information in response to questions about sensitive topics when they match the VI on a perceived identity than when there is no such match? (3) Do respondents disclose more undesirable information when they choose their VI than when they do not?

 