QUESTIONNAIRES AND SURVEY RESEARCH

Survey research goes back over 200 years (take a look at John Howard's monumental 1973 [1792] survey of British prisons), but it really took off in the mid-1930s when quota sampling was first applied to voting behavior studies and to helping advertisers target consumer messages. Over the years, government agencies in all industrialized countries have developed an insatiable appetite for information about various "target populations" (poor people, users of public housing, users of private health care, etc.). Japan developed an indigenous survey research industry soon after World War II, and India, South Korea, Jamaica, Greece, Mexico, and many other countries have since developed their own survey research capabilities (box 9.1).

THE COMPUTER REVOLUTION IN SURVEY RESEARCH

There are four methods for collecting questionnaire data: (1) personal, face-to-face interviews, (2) self-administered questionnaires, (3) telephone interviews, and (4) online interviews. All of these methods can be assisted by, or fully automated with, computers.

The computer revolution in survey research began in the 1970s with the development of software for CATI, or "computer-assisted telephone interviewing." With CATI software, you program a set of survey questions and then let the computer do the dialing. Interviewers sit at their computers, wearing telephone headsets, and when a respondent agrees to be interviewed, they read the questions from the screen. With the kind of fixed-choice questions that are typical in surveys, interviewers only have to click a box on the screen to put in the respondent's answer to each question. For open-ended questions, respondents talk and the interviewer types in the response.
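The logic of that fixed-choice loop is simple enough to sketch. Here is a minimal, illustrative version in Python: the questions and answer choices are invented, and a real CATI package would add the dialing, call scheduling, skip patterns, and a proper on-screen interface.

# Minimal sketch of a CATI-style fixed-choice question loop (illustrative only).
# A real CATI package adds dialing, call scheduling, skip logic, and a GUI.

QUESTIONS = [  # invented example items, not from any actual survey
    ("Did you visit a health clinic in the past month?", ["yes", "no"]),
    ("How would you rate the service you received?",
     ["very good", "good", "fair", "poor", "did not visit"]),
]

def run_interview():
    responses = {}
    for number, (text, choices) in enumerate(QUESTIONS, start=1):
        print(f"\nQ{number}. {text}")
        for i, choice in enumerate(choices, start=1):
            print(f"  {i}. {choice}")
        while True:  # keep asking until the interviewer enters a valid code
            entry = input("Enter choice number: ").strip()
            if entry.isdigit() and 1 <= int(entry) <= len(choices):
                responses[f"Q{number}"] = choices[int(entry) - 1]
                break
            print("Invalid entry; try again.")
    return responses

if __name__ == "__main__":
    print(run_interview())

A CASI survey runs essentially the same loop; the only difference is that the respondent, not an interviewer, sits at the keyboard.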

BOX 9.1

ANTHROPOLOGY AND SURVEYS

Anthropologists are finding more and more that good survey technique can add a lot of value to ethnography. In the 1970s, Sylvia Scribner and Michael Cole studied literacy among the Vai of Liberia. Some Vai are literate in English, others are literate in Arabic, and some adult Vai men use an indigenous script for writing letters. As part of their project, Scribner and Cole ran a survey with 650 respondents. Michael Smith, the cultural anthropologist on their team, was skeptical about using this method with the Vai. He wrote the project leaders about his experience in administering the survey there:

I was surprised when I first saw how long it [the questionnaire] was. I didn't think that anyone would sit down for long enough to answer it, or, if they did, that they would answer it seriously. . . . Well, I was wrong—and it fascinates me why the Vai should, in the busiest season of the year—during two of the worst farming years one could have picked . . . spend a lot of time answering questions which had little to do with the essential business at hand. . . . Not only did the majority of people eventually come, but when they got there they answered with great deliberation. How many times does one remember someone saying, "I don't know, but I'll come back and tell you when I've checked with so-and-so." (Scribner and Cole 1981:47)


CASI stands for "computer-assisted self-administered interview." People sit at a computer and answer questions on their own, just like they would if they received a questionnaire in the mail. People can come to a central place to take a CASI survey or you can send them a disk in the mail that they can plug into their own computer (Van Hattum and de Leeuw 1999) . . . or you can set up the survey on the web and people can take it from any Internet connection. With Internet cafes now in the most out-of-the-way places, we can continue to interview our informants between trips to the field (box 9.2).

People take quickly to computer-based interviews and often find them to be a lot of fun. Fun is good because it cuts down on fatigue. Fatigue is bad because it sends respondents into robot mode and they stop thinking about their answers (Barnes et al. 1995; O'Brien and Dugdale 1978). I ran a computer-based interview in 1988 in a study comparing the social networks of people in Mexico City and Jacksonville, Florida. One member of our team, Christopher McCarty, programmed a laptop to ask respondents in both cities about their acquaintanceship networks. Few people in Jacksonville and almost no one in Mexico City had ever seen a computer, much less one of those clunky lug-ables that passed for laptops back then. But our respondents said they enjoyed the experience. "Wow, this is like some kind of computer game," one respondent said.

The technology is wildly better now and fieldworkers are running computer-assisted interview surveys all over the world. Hewett et al. (2004) used A-CASI technology ("audio, computer-assisted, self-administered interview") in a study of 1,293 adolescents in rural and urban Kenya about very sensitive issues, like sexual behavior, drug and alcohol use, and abortion. With A-CASI, the respondent listens to the questions through headphones and types in his or her answers. The computer, in a digitized voice, asks the questions, waits for the answers, and moves on. In the Kenya study, Hewett et al. used yes/no and multiple-choice questions and had people punch in their responses on an external keypad. The research team had to replace a few keypads and they had some cases of battery failure, but overall, they report that the computers worked well, that only 2% of the respondents had trouble with the equipment (even though most of them had never seen a computer), and that people liked the format (Hewett et al. 2004:322-24).
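The core A-CASI loop (play a recorded question, accept a keypad response, move on) can be sketched in a few lines. In this illustrative Python version, play_audio is a hypothetical placeholder for whatever sound library an actual fieldwork machine would use, and the question files and answer codes are invented.

# Sketch of an A-CASI loop: play each recorded question aloud, then accept
# a keypad answer. play_audio() is a hypothetical placeholder; swap in a
# real sound library (and real recordings) on an actual fieldwork machine.

QUESTIONS = [
    # (audio file with the spoken question, valid keypad codes) -- invented
    ("q01_alcohol.wav", {"1": "yes", "2": "no"}),
    ("q02_age_group.wav", {"1": "12-14", "2": "15-17", "3": "18-19"}),
]

def play_audio(filename):
    # Placeholder: a real implementation would play the recording through
    # the respondent's headphones.
    print(f"[playing {filename}]")

def run_acasi():
    answers = []
    for audio_file, codes in QUESTIONS:
        play_audio(audio_file)
        key = input("Press a key: ").strip()
        while key not in codes:  # replay the question instead of having anyone intervene
            play_audio(audio_file)
            key = input("Press a key: ").strip()
        answers.append(codes[key])
    return answers

if __name__ == "__main__":
    print(run_acasi())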

BOX 9.2

INTERNET-BASED SURVEYS

Internet surveys are easy to build (there's lots of interactive software out there for it) and easy to analyze (typically, the results come to you on a spreadsheet that you can pop into your favorite stats program). In theory, they should also be easy to administer (you just send people a URL that they can click), but it can be tough getting people to actually take an Internet survey. In 2000, my colleagues and I ran an Internet survey of people who had purchased a new car in the last 2 years or were in the market for a car now. There's no sampling frame of such people, so we ran a national, RDD (random-digit-dialing) screening survey. We offered people who were eligible and who said they had access to the Internet $25 to participate in the survey. If they agreed, we gave them the URL of the survey and a PIN. We made 11,006 calls and contacted 2,176 people. That's about right for RDD surveys. (The rest of the numbers either didn't answer, or were businesses, or there was only a child at home, etc.) Of the 2,176 people we contacted, 910 (45%) were eligible for the web survey. Of them, 136 went to the survey site and entered their PIN, and of them, 68 completed the survey.

The data from those 68 people were excellent, but it took an awful lot of work for a purposive (nonrepresentative) sample of 68 people. At the time, 45% of adult Americans had access to the Internet (SAUS 2000: table 913). Today, that number is 83% (SAUS 2009: table 1120), but it's still hard to motivate people to take most web surveys.

When people are motivated, though, web surveys can reach respondents in hard-to-reach groups. To study gay Latino men, Ross et al. (2004) ran banner ads on gay-themed websites (about 47 million ad impressions) inviting potential respondents to a university-sponsored study. The ads produced about 33,000 clicks, 1,742 men who started the survey, and 1,026 men who finished it. Those 1,026 men were obviously not a random, representative sample, but Internet surveys aren't always meant for getting that kind of data. (For more on increasing response rates to Internet and mixed-mode surveys, see Dillman et al. 2009.)
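The arithmetic behind both of these funnels is worth laying out. The short Python sketch below just recomputes each stage-to-stage rate from the counts reported above; because the percentages come straight from the counts, they may differ a bit from the rounded figures quoted in the text.

# Recompute the stage-to-stage rates for the two survey funnels described
# above, using the counts reported in the text.

def funnel_rates(label, stages):
    print(label)
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        print(f"  {name_b}: {n_b:>10,} ({100 * n_b / n_a:.2f}% of {name_a})")

funnel_rates("Car survey (RDD screen + web survey):", [
    ("calls made", 11_006),
    ("people contacted", 2_176),
    ("eligible", 910),
    ("entered PIN at site", 136),
    ("completed survey", 68),
])

funnel_rates("Ross et al. (2004) banner-ad survey:", [
    ("ad impressions", 47_000_000),
    ("clicks", 33_000),
    ("started survey", 1_742),
    ("finished survey", 1_026),
])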


This doesn't mean that computers are going to replace live interviewers any time soon. Computers-as-interviewers are fine when the questions are clear and people don't need a lot of extra information. Suppose you ask: "Did you go to the doctor last week?" and the informant responds: "What do you mean by doctor?" She may have gone to a free-standing clinic and seen a nurse practitioner or a physician's assistant. She probably wants to know if this counts as "going to the doctor" (box 9.3) (Further Reading: computer-based interviews).

BOX 9.3

CAPI AND MCAPI

CAPI software supports "computer-assisted personal interviewing." MCAPI (mobile CAPI) is particularly good for anthropologists. You build your interview on a laptop or a handheld computer. The computer prompts you (not the respondent) with each question, suggests probes, and lets you enter the data as you go. CAPI and MCAPI make it easier for you to enter and manage the data. Easier is better, and not just because it saves time. It also reduces errors in the data. When you write down data by hand in the field, you are bound to make some errors. When you input those data into a computer, you're bound to make some more errors. The fewer times you have to handle and transfer data, the better. Clarence Gravlee (2002a) used this method to collect data on lifestyle and blood pressure from 100 people in Puerto Rico. His interviews had 268 multiple-choice, yes/no, and open-ended questions, and took over an hour to conduct, but when he got home each night from his fieldwork, he had the day's data in the computer.
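The interviewer-facing loop that CAPI tools implement can be sketched like this. It is only an illustration: the items and probes are invented, and a real package would add skip logic and input validation. The point is the design choice Gravlee exploited, that answers go straight into a machine-readable file.

# Sketch of a CAPI-style interviewer prompt loop: show the interviewer each
# question and a suggested probe, record the answer, and append everything
# to a CSV as soon as the interview ends. Items and probes are invented.

import csv

ITEMS = [
    ("occupation", "What is your main occupation?",
     "Probe: ask for the main income source if the respondent lists several jobs."),
    ("salt_use", "Do you add salt to food at the table?",
     "Probe: distinguish salt added in cooking from salt added at the table."),
]

def run_capi(respondent_id, outfile="interviews.csv"):
    record = {"respondent_id": respondent_id}
    for var, question, probe in ITEMS:
        print(f"\nASK: {question}\n({probe})")
        record[var] = input("Record answer: ").strip()
    with open(outfile, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(record))
        if f.tell() == 0:  # write a header only when the file is new/empty
            writer.writeheader()
        writer.writerow(record)
    return record

if __name__ == "__main__":
    run_capi(respondent_id="R001")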

 