Field experiments come in a range of ethical varieties, from innocuous to borderline to downright ugly. I see no ethical problems with the lost-letter technique. When people mail one of the lost letters, they don’t know that they are taking part in an experiment, but that doesn’t bother me. Personally, I see no harm in the experiment to test whether people vent their anger by honking their car horns more quickly at drivers they believe to be of lower socioeconomic status. These days, however, with road rage an increasing problem, I don’t recommend repeating Doob and Gross’s experiment.

Randomized field experiments, used mostly in evaluation research, can be problematic. Suppose you wanted to know whether fines or jail sentences are better at changing the behavior of drunk drivers. One way to find out would be to randomly assign people convicted of the offense to one condition or the other and watch the results. But what if one of the participants whom you didn’t put in jail kills an innocent person?

The classic experimental design in drug testing requires that some people get the new drug, that some people get a placebo (a sugar pill that has no effect), and that neither the patients nor the doctors administering the drugs know which is which. This double-blind placebo design is responsible for great advances in medicine and the saving of many lives. But suppose that, in the middle of a double-blind trial, you find out that the drug really works. Do you press on and complete the study? Or do you stop right there and make sure that you aren’t withholding treatment from people whose lives could be saved? The ethical problems associated with withholding treatment are under increasing scrutiny (Storosum et al. 2003; Walther 2005; Wertz 1987).

There is a long history of debate about the ethics of deception in psychology and social psychology (see Hertwig and Ortmann [2008] for a review). My own view is that, on balance, some deception is clearly necessary—certain types of research just can’t be done without it. When you use deception, though, you run all kinds of risks—not just to research participants, but to the research itself. These days, college students (who are the participants for most social psych experiments) are very savvy about all this and are on the lookout for clues as to the “real” reason for an experiment the minute they walk in the door.

If you don’t absolutely need deception in true behavioral experiments, that’s one less problem you have to deal with. If you decide that deception is required, then understand that the responsibility for any bad outcomes is yours and yours alone.

The experiments by Piliavin et al. (1969) and Harari et al. (1985) on whether people will come to the aid of a stricken person, or of a woman being raped, present real ethical problems. Some of the participants (who neither volunteered to be in an experiment nor were paid for their services) might still be wondering what happened to that poor guy on the subway whom they stepped over in their hurry to get away from an uncomfortable situation—or that woman whose screams they ignored. In laboratory experiments, at least, participants are debriefed—told what the real purpose of the study was—to reduce emotional distress. In the guerrilla-theater type of field experiment, though, no debriefing is possible.

Even debriefing has its dark side. People don’t like to find out that they have been duped into being part of an experiment, and some may suffer a terrible loss of self-esteem if they do find out and conclude that they acted badly. How would you feel if you were one of the people who failed to respond to a rape victim and then were told that you were just part of an experiment—that no real rape ever took place, and thank you very much for your help? (Further Reading: deception and debriefing).

If you think some of these cases are borderline, consider the study by West et al. (1975) on whether there is a little larceny in us all.
