Response effects are measurable differences in interview responses that are predictable from characteristics of the interviewer and the respondent—like whether the sex, race, or age of the interviewer and the respondent are the same or different—and from dozens of other things (box 8.6).
As early as 1929, Stuart Rice showed that the political orientation of interviewers can affect what they report people told them. Rice was doing a study of derelicts in flophouses, and he noticed that the men contacted by one interviewer consistently said that their down-and-out status was the result of alcohol; the men contacted by the other interviewer blamed social and economic conditions and lack of jobs. It turned out that the first interviewer was a prohibitionist and the second was a socialist (cited in Cannell and Kahn 1968:549). Katz (1942) found that middle-class interviewers got more politically conservative answers in general from lower-class respondents than did lower-class interviewers, and Robinson and Rhode (1946) found that interviewers who looked non-Jewish and had non-Jewish-sounding names were almost four times more likely to get anti-Semitic answers to questions about Jews than were interviewers who were Jewish looking and who had Jewish-sounding names.
THE EXPECTANCY EFFECT
In 1966, Robert Rosenthal and Lenore Jacobson (1968) conducted an experiment. At the beginning of the school year, they told some teachers at a school that the children they were about to get had tested out as "spurters." That is, according to tests, they said, those particular children were expected to make significant gains in their academic scores during the coming year. Sure enough, those children did improve dramatically—which was really interesting, because Rosenthal and Jacobson had selected the "spurter" children at random.
This experiment showed the power of the expectancy effect, or "the tendency for experimenters to obtain results they expect, not simply because they have correctly anticipated nature's response but rather because they have helped to shape that response through their expectations" (Rosenthal and Rubin 1978:377).
Strictly speaking, the expectancy effect is not a response effect at all. But for fieldworkers, it is an important effect to keep in mind. If you are studying a small community, a neighborhood in a city, or a hospital or clinic for a year or more, interacting daily with a few key informants, your own behavior can affect theirs in subtle (and not-so-subtle) ways, and vice versa. Don't be surprised if you find your own behavior changing over time in relation to key informants.
Since these pioneering efforts, hundreds of studies have been conducted on the impact of things like the race, sex, age, and accent of both the interviewer and the informant; features of the environment where the interview takes place (like whether the interview is done in private or in the presence of a third party); the nature of the task that people are asked to perform (like whether the respondent is asked to write out an answer in text or just to circle a number on a form); and the mode of the interview (like comparing face-to-face, telephone, and Internet interviews about the same topic).
Sex-of-interviewer effects have been the focus of many studies. Hyman and Cobb (1975), for example, found that female interviewers who took their cars in for repairs themselves (as opposed to having their husbands do it) were more likely to have female respondents who reported getting their own cars repaired. Zehner (1970) found that when women in the United States were asked by women interviewers about premarital sex, they were more inhibited than if they were asked by men. Male respondents' answers were not affected by the gender of the interviewer. McCombie and Anarfi (2002) found the same sex-of-interviewer effect 30 years later in Ghana: Young men (15–18 years old) were equally likely to tell male or female interviewers that they had had sex, but young women were more likely to divulge this to male interviewers than to female interviewers. In the Tamang Family Research Project in Nepal, William Axinn (1991) found that women were simply better interviewers than men: The female interviewers had significantly fewer "don't know" responses than did the male interviewers. Axinn supposes this might be because the survey dealt with marital and fertility histories. In a multiyear study in Kenya of women's networks and their AIDS-related behavior, Alex Weinreb (2006) found that the most reliable data were collected by female insider interviewers—that is, women from the local area who were trained to be interviewers for the project—compared to the stranger interviewers who were brought in from the outside (box 8.7).