The Risks of Agricultural Biotechnology
In contrast to my work with animal producers, which mostly consists of explaining and applying basic philosophical concepts, I think of myself as having done original philosophical work on risk and risk assessment. The characterization of risk that dominates in the sciences takes it to be a quantifiable function of hazard (the bad thing that might happen) and exposure (the conditional probability that it will actually occur, given specifiable circumstances). There is often, but not always, a further presumption that classical utilitarian ethics provides the appropriate normative framework for managing risk. Since 1980, several philosophers have critiqued these framing assumptions, but my particular work has stressed two points. First, the hazard/exposure conceptualization does not provide an adequate analysis of the way the word ‘risk’ is actually used in ordinary language. As a verb, at least, the word ‘risk’ has a grammar that suggests it is a form of intentional action. It indicates a category of actions distinguished in part, perhaps, because they call for caution and, in some contexts, acts of heroism or courage. This usage presupposes a distinction between acts that are risky and those that are not, irrespective of the ubiquitous potential for exposure to hazards. This act-categorizing usage of the word ‘risk’ links hazardous activities with notions of responsibility. Philosophers might say that this yields a more agent-centered account of risk, while hazard and exposure characterize outcomes. Hence to presume, as many risk specialists do, that ‘there is no zero risk’ (e.g., that risk is pervasive across all circumstances) functionally disables grammatical conventions that are normatively important (Thompson, 1999).
Second, in contrast to philosophers such as Kristin Shrader-Frechette (1991) or Carl Cranor (1990), I have argued for the view that epistemic factors can properly give rise to the condition of being at risk. This is, in the most basic case, simply to say that when one is aware of one’s lack of knowledge, it is reasonable and philosophically defensible to characterize one’s uncertainties as generating risk (Thompson, 1986a). In a similar vein, I have defended the view that those who act from ignorance take more risk than those who do not, even when they both do virtually the same thing (Thompson, 1986b). Space does not allow even a summary of the reactions to this scholarship or the complex tangle of conceptual, epistemic, and ontological questions that must be sorted out in order to develop an adequate view of what risk is. Here, I will simply highlight some ways in which this work has affected thinking in policy and practice for agricultural biotechnology.
Put succinctly, there is a widespread tendency for people with scientific training to presume that risks are just obviously acceptable when they are offset by benefits, and especially so when the benefits are morally compelling. Since agricultural scientists think of themselves as contributing to future generations’ ability to ‘feed the world,’ they regard the benefits of their work as morally compelling. There are quite a few philosophical presumptions that need deconstructing here, but the most obvious one is a basic point in moral philosophy. Even if, as a classical utilitarian, one is indeed convinced that risk–benefit optimization is the right standard for evaluating technological innovations, any philosopher is going to recognize that there are alternative views and that one has to provide an argument in favor of one’s perspective. Many (and I mean many) scientists do not see this, and hence there are numerous opportunities to explain why people are so enraged. This is a point on which I think Shrader-Frechette, Cranor, and I all agree.
My work among molecular biologists using gene transfer to develop new crops and food animals has often taken the tack of getting this basic point across, and then helping them to appreciate other ways that values influence the conceptualization of risks. I must have given a dozen or so talks to scientific groups in which the main message has simply been to show that an ‘informed consent’ standard—the standard they all must meet when research involves human subjects—is actually intended to block risk-benefit rationalizations for exposing people to risk. As such, people are not being irrational or crazy when their response to novel food technologies is to insist on an institutional structure that allows them to ‘opt out’ (Thompson, 1996). This doesn’t mean they are right to insist on this, but it does place a burden upon the advocates of technology to provide reasons for their preferred alternative. And, as I have argued repeatedly, the advocates of biotechnology were very, very slow to do this (Thompson, 1998; Wolfenbarger et al., 2004).
Thus, a fairly basic point in moral philosophy opens into a set of more complex issues that remain largely unresolved. What are the fiduciary responsibilities of scientists with respect to warning the public about risks, or of advising them when putative dangers are overblown? How does the institutional structure of the disciplines militate against scientists participating in public discourse on technically complex issues, and how do epistemic values (such as objectivity) intersect with the need to undertake educative or persuasive engagement with interested parties? Even more basic questions concern the way that experience and community values have influenced the way that agricultural scientists define the potential hazards from gene transfer, or how they frame gene transfer approaches to new variety development as similar to or different from conventional breeding. As I have argued, these values significantly shape the way that one evaluates the risks of agricultural biotechnology (Thompson, 1988, 2003). Significant for the conception of field philosophy embodied in this volume is that I have published this work in scientific journals, rather than talking to other philosophers. Unfortunately for me, this has made most of my better philosophical work totally invisible to my colleagues in the philosophy of science. This is another point that needs revisiting in the concluding section.
Has this work had influence? While I can be fairly confident in asserting the influence of my largely uncreative work on animal issues, it is much more difficult to document any influence I may have had on the trajectory of agricultural biotechnology and its policy. There have certainly been changes in policy on labeling that are consistent with positions that I have argued. Early on, the U.S. Food and Drug Administration (FDA) was promulgating rules on labeling products of biotechnology that did not accommodate the range of reasons why people wanted to know about it (Thompson, 2002), but, of course, I am not the only or the loudest voice that has argued for those changes. I have been asked to participate in a number of advisory roles, including service on a National Research Council committee and many years of service on Genome Canada’s Science Advisory Committee. But my actual influence would be difficult to track, and it was made even more diffuse by Monsanto’s appointment of the Canadian philosopher R. Paul Thompson to its ethics advisory group, and his lecturing in support of gene transfer techniques. This led to a very uncomfortable session prior to my knowledge of the other Paul Thompson’s work with Monsanto, in which officials at the U.S. National Research Council were suggesting that I had not disclosed my own industry connections. I was also vilified by activists in Mexico as an industry spokesperson, and I have been frozen out of the U.S. policy dialogue ever since. Can I confess my suspicion that muting and muddling my voice is broadly what someone at Monsanto hoped to achieve? Yet more issues to take up in the concluding section.