Summary and Discussion

The literature explaining interviewer effects on survey unit nonresponse across different surveys has thus far reported a great deal of variability in the most important predictors of unit nonresponse. We reasoned that this is because the findings in the literature are based on diverse surveys that differ widely in key survey characteristics. In this chapter, we attempted to explain interviewer effects on survey unit nonresponse in a more coordinated way. We based our analyses on four surveys using face-to-face recruitment (GIP 2012, PIAAC, SHARE, and GIP 2014), each of which was conducted in Germany in approximately the same time period by the same survey agency with overlapping interviewers. Across the four surveys, we estimated the same multilevel models for two aspects of survey unit nonresponse, successful contact and cooperation, including the same explanatory variables.

Our study revealed two main findings. First, using multilevel modeling, we analyzed the amount of variance at the interviewer level in a sampled person's propensity to be successfully contacted and in a sampled person's propensity to cooperate. To account for the non-random assignment of sampled persons to interviewers and the associated risk that area effects are confounded with interviewer effects, we adjusted for numerous sample composition variables in all models. Our results revealed a rather similar interviewer effect on a sampled person's propensity to be successfully contacted in the 2012 GIP, the PIAAC, and the 2014 GIP, with interviewer variance representing around 20% of the total variance. We assume this to be the case because the contact strategies used in the three surveys are rather similar. We did not estimate interviewer effects for SHARE, as the majority of its interviewers had a contact rate of 100%.
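
How such variance shares are computed is worth making explicit. In the two-level random-intercept logistic models that are standard in this literature (the chapter's exact specification, including its covariates, may differ), the share of total variance attributable to interviewers is the intraclass correlation on the latent logistic scale:

  logit P(y_ij = 1) = x_ij′β + u_j,   u_j ~ N(0, σ_u²)

  ρ = σ_u² / (σ_u² + π²/3)

Here y_ij indicates successful contact (or cooperation) for sampled person i assigned to interviewer j, σ_u² is the between-interviewer variance, and π²/3 ≈ 3.29 is the fixed residual variance of the standard logistic distribution at the level of the sampled person. An interviewer share of around 20% of the total variance thus corresponds to ρ ≈ 0.20.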

For a sampled person's propensity to participate in each survey, the percentage of variance due to interviewer effects was rather small for PIAAC (2.1%) and for SHARE (5.2%). The percentages of total variance due to interviewer effects for the 2012 and 2014 GIP were much higher (GIP 2012: 12.8%; GIP 2014: 17.2%). The reason for this difference is unknown. We can speculate that it stems from the procedures used to identify the household member eligible for the interview, which differ between the GIP surveys on the one hand and SHARE and PIAAC on the other. GIP interviewers were allowed to select any household member within the eligible age range, whereas for PIAAC and SHARE the target person was prespecified (see Blom et al. 2015 for details). Under the GIP field protocols, interviewer traits may therefore have had a greater influence on cooperation.

The second finding, which is also the main finding of this chapter, concerns the explanatory power of interviewer characteristics across the four surveys. Although there was a high level of consistency in the designs of the four studies, and although the interviewers employed in them were quite similar with regard to most of the characteristics collected via the interviewer questionnaire, the significant predictors of survey unit nonresponse were not consistent across the studies.

There were some differences between the four surveys that might influence a sampled person's decision to participate more strongly than interviewer characteristics do. For example, the survey topic, the amount of information provided in the cover letter, the age of the target population, the amount of interviewer training, the sponsor, and the research team differed between the surveys examined in this chapter. However, the differences in the explanatory power of interviewer characteristics between the GIP 2012 and GIP 2014 cannot be explained by differences in survey design, as all of the above-mentioned factors were kept constant between the two surveys. Still, we found no consistent interviewer effects across these two surveys. We have to conclude that other factors unobserved in our study affect interviewers' success in gaining contact and cooperation.

We can only speculate about the reasons for these differences in the explanatory power of interviewer characteristics across the surveys. Even with a high level of comparability across the four studies, our results agree with the current literature: interviewer effects during the recruitment phase seem to be study-specific and need to be analyzed, monitored, and treated accordingly. Thus, individual surveys need to take steps to minimize variance in these recruitment outcomes through careful interviewer selection, training, and active fieldwork monitoring (e.g., re-training interviewers found to have unusually low contact and cooperation rates during data collection).
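
As a minimal illustration of what such fieldwork monitoring could look like in practice, the following sketch flags interviewers whose contact or cooperation rates fall well below the fieldwork average. The paradata, column names, and the two-standard-deviation threshold are all hypothetical; real monitoring rules would be set by the survey's fieldwork team.

  import pandas as pd

  # Hypothetical case-level fieldwork paradata: one row per sampled person,
  # with the assigned interviewer and binary contact/cooperation outcomes.
  cases = pd.DataFrame({
      "interviewer_id": [101, 101, 101, 102, 102, 103, 103, 103],
      "contacted":      [1,   1,   1,   1,   0,   0,   0,   1],
      "cooperated":     [1,   0,   1,   1,   0,   0,   0,   0],
  })

  # Per-interviewer contact and cooperation rates.
  rates = cases.groupby("interviewer_id")[["contacted", "cooperated"]].mean()

  # Flag interviewers more than two standard deviations below the mean on
  # either rate as candidates for re-training (threshold is illustrative).
  low = rates[(rates < rates.mean() - 2 * rates.std()).any(axis=1)]
  print(low)

In production monitoring one would of course use rolling fieldwork data and thresholds informed by case difficulty, since raw rates conflate interviewer performance with the sample composition effects adjusted for in our models.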
