Scientific Freedom and Social Responsibility

Heather Douglas

DEPARTMENT OF PHILOSOPHY, MICHIGAN STATE UNIVERSITY

Introduction

Over the past two decades, we have witnessed a sea change in the understanding of scientific freedom and social responsibility among scientists. Scientists have gone from thinking of freedom and responsibility as in tension, such that more of one meant less of the other, to thinking of them as yoked together, such that more of one means more of the other. This is a crucial change, and one that is easy to lose sight of in the midst of debates over particular scientific practices. I will show in this chapter that such a change has occurred (drawing on the central documents and scientific institutions that reflect scientists’ understanding of these issues) and articulate some reasons why such a change occurred (although I can make no claims to a complete causal account here). The change, regardless of its causes, has important implications for the crafting of institutional support for responsible science in the twenty-first century, implications that have yet to be fully grasped, much less implemented.

I will begin with a description of how scientists have thought about their freedoms and their responsibilities in the period after World War II. During this time, scientific freedom and responsibility, particularly freedom from external control over their research agendas and moral responsibility1 for the social impacts of science, were thought to be in opposition to each other. In other words, the more freedom one had, the less responsibility, and vice versa. This conception of scientific freedom was made plausible by the developing science policy context of the time, in which the linear model for science policy and a clear distinction between pure and applied science predominated. Starting in the twenty-first century, this understanding began to change, and now we have a different understanding — that scientific freedom must come with social responsibility. However, this new understanding is not yet fully implemented or institutionalized. Embracing it (as I think we should) has important implications for science policy and science education.

In what follows, I will focus primarily on freedom to set research agendas (without external planning, the key concern for scientists discussing freedom in science) and responsibility for the social impact of choices regarding which projects to pursue in science. There are many other responsibilities in science (e.g., responsibilities to colleagues, students, evidence, and subjects), but this is not the place to detail them, although their pervasive importance means they will appear in the historical narrative (see Shamoo and Resnik 2003 and Douglas 2014a for more comprehensive accounts).

In addition, I will focus on individual responsibility and individual freedom in science, rather than collective or communal responsibility. I will argue that we cannot do without individual responsibility and freedom, even if collective responsibility and communal responsibility are also important. We might decide in some circumstances to collectivize responsibility, thus partially offloading it from the individual to a designated collective (as we do with human subject oversight committees). We might decide that the scientific community as a whole is the more appropriate location for some responsibilities.2 In general, however, some responsibility will always rest with individual scientists. Communal responsibilities need to be acted upon by individuals; collective responsibilities need responsible individual actors to perform their roles properly. I do not have the space to delineate what should ideally be individual vs. collective vs. communal responsibilities here. I do argue that we need to craft better institutional support for the ability of scientists to meet their social responsibilities, particularly those that continue to rest with the individual.

Freedom vs. Responsibility (1945-2000)

Just as World War II was beginning, an international debate about the purpose of science and the nature of scientific freedom was brewing. In 1939, J.D. Bernal published The Social Function of Science, in which he argued against an ideal of pure science for its own sake and called on scientists to work together for the betterment of humanity (Bernal 1939). The publication of this book alarmed some scientists in the UK, who took it as an assault on the freedom of scientists to pursue knowledge for its own sake (Nye 2011). In 1940, John R. Baker and Michael Polanyi formed the Society for Freedom in Science (SFS) in order to counter this line of thinking from Bernal and other Marxist-leaning scientists (McGucken 1978). For leaders in the SFS, any external effort to direct scientific work toward particular societal problems was unacceptable (Douglas 2014b; Polanyi 1962). The Society grew slowly during the war, and recruited scientists in part by arguing that freedom of science was necessary to counter the authoritarian dictators Britain was fighting.

In 1944, the SFS reached out to Percy Bridgman in the US, believing he would be sympathetic to their ideas. He had already defended in print the ideal of pure science, an essential part of the SFS platform (Bridgman 1943). Bridgman became a key proponent of the freedom in science movement in the US (Bridgman 1944). As policy debates over how to fund science in the US post-war context heated up, Bridgman argued for the ability of scientists to be the sole decision makers on which projects to fund, in contrast to the Kilgore Bill, which would have required geographic equity considerations and issues of social utility to be part of decisions regarding the distribution of US federal research funds (Kleinman 1995). The debates over science funding instantiated a new science policy framework, one which utilized a clear distinction between pure and applied science, and placed science in a “linear model”. In the linear model, government poured money into basic or pure research. Scientists would use these funds and themselves decide which research projects to pursue and how to pursue them, with no central planning. Basic or pure science was to be unfettered. The linear model, articulated in Vannevar Bush’s 1945 monograph Science — The Endless Frontier, argued that investments in basic science would eventually have societal payoffs, as basic became applied research, and with application would come social benefit (thus justifying public expenditure [Bush 1945]). The essential freedom of scientists to choose their own research projects was granted for the pursuit of pure science, which was to be funded without specific social restrictions or considerations.

It was in the midst of these debates about scientific freedom and research planning3 that Percy Bridgman (1947) wrote his classic statement on freedom and responsibility, “Scientists and Social Responsibility”.4 In it, Bridgman set forth the understanding of freedom as being opposed to responsibility (particularly responsibility for the effects of scientific work). The idea is that the more freedom one has, the less responsibility, and vice versa. It was a zero-sum game in Bridgman’s view, and one that he thought should be decided in favor of complete freedom for scientists, both freedom to decide what to pursue and freedom from having to think about the social impacts of research. Indeed, he viewed with alarm growing calls for scientists’ societal responsibility. Bridgman noted that in the aftermath of the development and use of nuclear weapons, the general public had “displayed a noteworthy concern with the social effects of [scientists’] discoveries” (Bridgman 1947, 148). He argued that calls for scientists to embrace moral responsibility for the effects of science would “make impossible that specialization and division of labor that is one of the foundation stones of our modern industrial civilization” and would impose a special and unique burden on scientists, because no one else is responsible for all the effects of their work (Bridgman 1947, 149). More importantly, this burden would entail a “loss of freedom” for scientists. This would undermine the value of science, according to Bridgman, because “the scientist cannot make his [sic] contribution unless he is free, and the value of his contribution is worth the price society pays for it” (1947, 149). Bridgman rejected the idea that scientists have special competencies that make them particularly qualified to grapple with the effects of science, noting the (by then) failure of scientists to establish a National Science Foundation (legislation would be disputed and delayed for another few years). But most importantly, according to Bridgman, responsibility was anathema to freedom: “The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man [sic] can dare to accept no handicaps” (1947, 153). For Bridgman, any attempt to impose social responsibility considerations on scientists was such a handicap and was tantamount to a removal of the freedom to select research agendas and to pursue scientific ideas wherever they might lead.

The linear model and Bridgman’s model of freedom vs. responsibility together provided for freedom of research (at least for pure scientists who are only concerned with uncovering new truths) along with a rejection of responsibility for the uses of that pure research. It was only in the application of pure science (or basic science) that one could speak of societal responsibility. Those who applied research to particular ends, who used basic research for particular purposes, bore responsibility for the impacts of science.5 Freedom in science meant both freedom from external control of research agendas and freedom from bearing responsibility for the choices made regarding research agendas.

This view of scientific freedom and scientific responsibility was predominant for the remainder of the twentieth century. It insulated scientists from responsibility for the impacts of science on society. At the same time, concerns about responsibility for the methods of science would become potent in the post-World War II era, with the revelations of Nazi scientific atrocities and the enshrinement of human rights protections within the context of scientific research.6 Neither Bridgman nor the SFS addressed these kinds of moral responsibilities for scientists. Within the context of the linear model, imposing moral restrictions on scientists’ methodologies seemed compatible with freedom from responsibility for the downstream effects of science, and indeed compatible with the freedom of scientists to decide which knowledge was worth pursuing (how to pursue it was another issue). Thus, even as controversies about scientists’ responsibilities to human subjects7 grew (and legislation to regulate human subject research was introduced around the world — e.g., in 1974, the US passed the National Research Act), the structure of the Bridgman view on freedom and responsibility remained unchallenged.8

The potency of this conception can be seen in the 1975 report from the AAAS Committee on Scientific Freedom and Responsibility (Edsall 1975). The Committee had been formed in 1970, at the height of concern over Nixon’s abuse of science advice (including preventing scientific advisors from speaking to Congress about particular issues and lying to Congress about the content of advising reports) and concern over the freedom of scientists in the authoritarian Soviet Union (Edsall 1975; von Hippel and Primack 1972). In addition to these concerns about government abuse of scientists, the broadly beneficial nature of science had come under sharp scrutiny in the 30 years since Bridgman’s argument. In the aftermath of World War II, the development of atomic weapons could be seen as a wartime aberration from the usual course of science providing societal benefit. By 1970, such a sequestering of harm to wartime could no longer be maintained. It was not just the development of poison gases in World War I, atomic weapons in World War II, or weaponized herbicides in Vietnam that raised concerns. There were also peacetime industrial processes that contaminated the environment with DDT and other pesticides, the threat of the sonic boom from supersonic transport, the problem of nuclear safety from peaceful nuclear power, and the growing concerns about the possible uses of genetic engineering. Science was increasingly seen as a double-edged sword, even in times of peace, and there was some consternation among scientists about what to do about this.9 What were scientists’ responsibilities and freedoms?

Despite the rejection of the blanket assumption that science was usually societally beneficial, the work at this time did not reject the framework of the linear model for discussing scientists’ responsibilities. Bridgman’s opposition model for freedom and responsibility continued to enjoy predominance. At first, it seemed that the AAAS Committee was open to rejecting Bridgman’s oppositional approach, writing in the opening pages:

The Committee concluded, early in its deliberations, that the issues of scientific freedom and responsibility are basically inseparable. Scientific freedom, like academic freedom, is an acquired right, generally approved by society as necessary for the advancement of knowledge from which society may benefit. The responsibilities are primary; scientists can claim no special rights, other than those possessed by every citizen, except those necessary to fulfill the responsibilities that arise from the possession of special knowledge and of the insight arising from that knowledge.

(Edsall 1975, 5)

Responsibility, being primary, seems to come with scientific freedom — at least that is one way to read this passage. Yet, for the remainder of the report, the Bridgman model predominated. The Committee divided science into basic and applied for discussions of freedom and responsibility. For basic science, the central responsibilities were to maintain scientific integrity, give proper allocation of credit to other scientists, and treat human subjects appropriately (i.e., the standard Responsible Conduct of Research list). In general, basic scientists were to be given the freedom to follow their research wherever it led (Edsall 1975, 7), and were not to bear a general responsibility for thinking about societal impact.

Even when discussing clearly controversial research, the Committee was reluctant to suggest that scientists were responsible for the impact of their work. For example, the Committee did note the growing concern about recombinant DNA techniques, and the efforts by scientists to grapple with these concerns, including potential moratoriums. They wrote: “Clearly this declaration [of scientists to temporarily restrict research] represents a landmark in the assumption of scientific responsibility by scientists themselves for the possible dangerous consequences of their work” (Edsall 1975, 13-14). While the Committee thought it wise to “refrain temporarily from further experiments” when those experiments might be a direct threat to human health, they were much more reluctant to consider restrictions on the freedom to do research when that research threatened “human integrity, dignity, and individuality,” concerns raised with cloning (Edsall 1975, 14). They also rejected calls for an end to research into the genetic basis for individual differences in IQ (particularly when correlated with “race”) (Edsall 1975, 15). In general, the Committee argued that the pursuit of knowledge should proceed even when profound ethical worries about the knowledge being produced were present. Restrictions on basic research were only to be found in the clear ethical demands for the protection of human subjects and prevention of immediate threats to human health.

For applied science, responsibility for societal impacts was a far more pressing concern. Here the Committee was much clearer in calling for general social responsibility, arguing “[m]any schemes that are technically brilliant must be rejected because their wider impact, on the whole, would be more damaging than beneficial” (Edsall 1975, 26). Further, the Committee argued that those working in applied science had much less freedom than those working in basic science, having their efforts directed by the institutions in which they worked (Edsall 1975, 27). Applied scientists thus had less freedom over their research agendas (being directed by the laboratory in which they worked, either governmental or commercial), and also more responsibility to consider the social impact of their work. In short, the Committee still maintained Bridgman’s idea that the more freedom one had, the less responsibility, and the more responsibility, the less freedom, particularly regarding the direction of the research agenda.

While the AAAS Committee report represents the thinking of the mainstream of the scientific community, some scientists had decided in previous decades that their work should represent a stronger sense of responsibility. Thus, organizations arose like Pugwash (begun in 1957) and the Union of Concerned Scientists (founded in 1969), which enabled scientists to directly encourage and facilitate the pursuit of socially beneficial work by scientists. Notions such as von Hippel and Primack’s “public interest science” (which was initially focused on science advising for the public interest) also helped to mark pathways to embracing greater social responsibility (von Hippel and Primack 1972; Krimsky 2003). But it was not argued that all scientists needed to embrace this sense of social responsibility, that scientists should all be trying to benefit the broader society. This was something some scientists could choose to do, but was not obligatory.

By the 1980s, some academics challenged Bridgman’s view of the responsibilities of scientists (e.g., Lakoff 1980), while others (e.g., Lubbe 1986) reinforced it. Most work for the remainder of the century claimed that moral responsibilities to consider the societal impact of one’s work came only with special circumstances. Thus, if one was doing applied research, if one was dealing with human or animal subjects, if one was grappling with a process that posed an immediate human health threat, or if one was involved with giving advice to the government, then such moral responsibilities might obtain. But for most scientists pursuing basic research, there were no general societal responsibilities. This can be seen, for example, in the 1992 US National Academy of Sciences (NAS) report Responsible Science, which was responding to the charge to examine the role of institutions in “promoting responsible research practices” (NAS 1992, 22). The report focused almost entirely on issues of research misconduct (i.e., data fabrication, falsification, and plagiarism). Responsible research at that time did not generally include societal responsibilities.

This is reflected as well in the NAS booklet On Being a Scientist, an educational booklet for the budding scientist. First published in 1989, the booklet focused on then-standard philosophy of science topics (e.g., questions of scientific method, epistemic values in science, the assessment of hypotheses) and internal research ethics questions (e.g., fraud and error, apportioning credit for discovery, peer review). A brief section at the end addressed broader societal responsibilities of scientists. After quickly noting the importance of protecting human subjects, animal subjects, and the environment, and the prevalence of societal responsibility in applied science, it said:

Scientists conducting basic research also need to be aware that their work ultimately may have a great impact on society. ... The occurrence and consequences of discoveries in basic research are virtually impossible to foresee. Nevertheless, the scientific community must recognize the potential for such discoveries and be prepared to address the questions that they raise.

(NAS 1989, 9072)

Examples like the temporary moratorium on recombinant DNA were touted as exceptional demonstrations of societal responsibility (NAS 1989, 9072). In addition, the booklet noted that some scientists take on roles that have more social responsibility (such as governmental science advising), but that other scientists prefer not to be involved in such efforts. The booklet upheld the freedom of the individual scientist to choose whether to take on explicit social responsibility, even if “dealing with the public is a fundamental responsibility for the scientific community” (NAS 1989, 9073). Someone needed to interact with the public to maintain public trust, but plenty of scientists could simply opt out. Thus continued the model of societal responsibility only arising under special circumstances, and otherwise being optional.

When the NAS revised the booklet for a second edition (1995), this treatment of societal responsibilities in science remained mostly unchanged. The booklet repeated much of what was written in 1989, but added that “If scientists do find that their discoveries have implications for some important aspect of public affairs, they have a responsibility to call attention to the public issues involved” (NAS 1995, 20-21).

Here we see the beginning of a new, general responsibility for all scientists, starting off as a responsibility to communicate the results of their research when relevant to the public interest. (This reflected the growing call for scientists to communicate their research relevant to climate change, for example.) Scientists still had general freedom to choose their basic research agendas, and freedom from responsibility for the social impacts of their choices.

This division between responsibility and freedom was not restricted to the US. It can also be seen in the structures of the International Council of Scientific Unions (ICSU). Work on the freedom of movement of scientists (particularly to attend international scientific meetings) began in 1965, and by 1996 the committee responsible for that work had shifted its attention from freedom of movement to freedom in science more generally, becoming the Standing Committee on Freedom in the Conduct of Science. Also in 1996, a new committee, the Standing Committee on Responsibility and Ethics in Science (SCRES), was formed to address issues of responsible conduct of science. The two committees operated independently of each other (Schindler 2009).

In the 60 years after Bridgman proposed that freedom and responsibility were antagonistic, an accumulating list of exceptions to this rule (not for applied science, not for human subjects, not for science advisors, etc.) helped to protect the idea that, at the heart of science, scientists generally should not be responsible for the societal impact of their work. Responsibilities for societal impact were limited to special circumstances, and could sometimes be offloaded to institutional oversight (such as IRBs for human subject research or IACUCs for animal research). But it was not considered a responsibility of scientists to actively attempt to shape their research agendas, at least when doing “basic research”, to avoid societal harms or to provide societal benefits. This was still seen as part of the protection of scientific freedom in research agenda-setting from external control. Thus, by the end of the twentieth century, the expectation of general societal responsibility for scientists was still limited.

 