Technoscientific Governance: Ethics and the Life World of Citizens in a Digital Society

The notion of governance that this volume proposes is introduced in Strand and Funtowicz’s chapter (Chap. 1). These authors revisit key historical moments in the movement from the foundations of modern science to current technoscientific governance. According to Strand and Funtowicz, in the European context, government evolved into governance in science policy as an institutional attempt to regain legitimacy in the face of increasing public distrust of governments and expert institutions (see also Wynne 2006; EC 2010). In this context of crisis in representative politics, governance was proposed as a programme of ‘broadening and improving government’ through more public participation (see p. 6). Whereas accountability, transparency and inclusion have been presented as elements of good governance, ideals have not always matched realities (Delgado and Strand 2009). From the mid-1990s to the early 2000s, Europe experienced a participatory ‘boom’ (Delgado et al. 2011); however, the ‘new politics of talk’ (Irwin 2006) was created with the goal of producing consensus, and dissenting voices were often misrepresented. Somewhat paradoxically, a crisis of political representation was meant to be addressed by constructing a representative public (Lezaun and Soneryd 2007).

During those years of political discussion, institutional representations of the public have coexisted with uninvited or spontaneous public reactions and developments (Wynne 2007). Environmental movements flourished in connection with nuclear and GM (Genetically Modified) technologies, raising important concerns that were misrepresented by governments. Arguably, through activists’ campaigns and other forms of collective action, such movements have endeavoured to be visible and to have their views and values included in the democratic politics of representation. However, the spread of digital infrastructures and related emerging technologies across society may be creating a different experience of what it means to live with technology (see Chap. 2) and perhaps new manners of articulating political action and citizenship (Rommetveit and Wynne, forthcoming). Arguably, formations such as DIY (Do It Yourself) medicine, the open hardware movement or citizens’ groups that use GIS technologies act as users and as producers, appropriating and modifying the technologies themselves. These groups are a type of public that is perhaps less insistent on pressing its claims upon institutions (as ‘old’ social movements did) and that articulates its concerns by ‘doing’. In this light, citizens as users of technologies, and particularly digital technologies, can be perceived as public articulations of another type. They do not necessarily expect their concerns and claims to be addressed by institutions (in a Deweyan sense).[1] Rather, these citizens perform more direct forms of action, becoming involved with the technologies in novel manners. Immediate realities such as the human body or local space become the issues of concern, addressed directly, beyond institutional arrangements and constraints. These sorts of interventions pose new challenges to classical expert-lay divisions.
In neoliberal politics, large state infrastructures tend to dissolve while representative politics weakens. Technology as applied to health (for example, in personalized medicine) and other realms of everyday life increasingly becomes a private matter, a matter of individual choice. A key question is how and to what extent the new formations of the public are challenging such technological individualization, proposing new forms of collective action and citizenship.

On the side of governments, new forms of constructing publics are emerging along with the spread of digital tools and infrastructures. Digital platforms and design software are crucial technical developments enabling the proliferation of a new institutional talk regarding ‘social innovation’. In the Horizon 2020 framework, the public should be actively involved in technological development, as co-designer or end user; public values should be integrated in the early stages of scientific research; and responsibility for the production of desired technological trajectories should ideally be shared (Owen et al. 2012). In this new, innovation-driven politico-technical constellation, the governance of science has been configured in terms of distributed responsibility. A relevant question is to what extent the ‘social contract’ and ‘technological citizenship’ (Frankenfeld 1992) might enable an empowerment of the citizenry or rather a liberation of states from liabilities that could previously be demanded of them.

Emerging technologies such as geo-engineering, biometrics and converging technologies for human enhancement are oriented towards changing realities in the pursuit of desirable futures. At the governance level, this orientation towards the future results in a paradoxical situation known as the Collingridge dilemma: ‘The social consequences of a technology cannot be predicted early in the life of the technology. By the time undesirable consequences are discovered, however, the technology is often so much a part of the entire economic and social fabric that its control is extremely difficult. This is the dilemma of control. When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming’ (Collingridge 1980:11 in Nordmann 2010). At the base of this dilemma is the radical uncertainty inherent in scientific research (Strand 2002). Whereas scientific practice has attempted to suppress uncertainty through a number of black-boxing epistemic strategies (Latour 1987), emerging technosciences attempt to control natural and social uncertainty through design strategies (Nordmann 2007). At the governance level, one way to address the paradoxical situation noted by Collingridge is to develop strategies that build social capacity, so that better decisions are made in the present and better technological trajectories are developed (Guston 2014). In the face of uncertainty, ethics has been introduced into the governance of technoscience. Both in Europe and the US, research on the ethical aspects of emerging and ICT-based technologies[2] is becoming mandatory in research funding programmes. Another crucial reason why ethics is emphasized as an element of technoscientific governance is that emerging technologies are perceived as potentially having a direct effect on the lives of citizens, transforming them in substantial manners. Such technologies may have a double nature: promising but also intimately intrusive.

In FP7, within the framework of the Science-in-Society programme, the European Commission launched a call to develop ‘new ethical frameworks’ for new and emerging technologies. One of the research projects resulting from that call was TECHNOLIFE.[3] From 2009 to 2012, the authors of this book collaborated on that project. Using audiovisual material, TECHNOLIFE provided an online forum for the discussion of three technological domains: digital maps, technologies of body enhancement and biometrics (corresponding to the three sections of this book). Citizens who were concerned about, or whose lives could be affected by, those technologies were invited to discuss their social, ethical and technological implications. The project was designed to allow dissent rather than to produce a ‘representative’ public opinion. A final movie sums up the forum’s discussions and concludes, ‘It is not easy to sum up our experience. Many responses were highly imaginative and elaborate, and only a few examples could be presented here. Many emphasized that we are living in times of great change. No clear ideologies or political platforms could be singled out. Many saw politics and bureaucracies as outdated institutions. Many were critical of the monopolies of the mass media and large corporations. Some saw free software, open sources and an open Internet as indicating more sustainable futures’.[4] As Strand and Funtowicz emphasize in their chapter, a diagnosed problem was that ‘The potential is particularly large when the speed of science-based innovation is not matched by the speed of institutions’ and people’s capacity to cope with the change. The result is a particularly poorly governed (as government and governance) process of innovation possessing characteristic difficulties as a fast-moving, unpredictable, uncontrollable and sometimes invisible target’.
Such criticism was communicated to the Commission, constituting (or so we thought) a reflective and positive experience of inclusive governance.

  • [1] (Dewey 1927).
  • [2] As the ‘Science-in-Society’ programme becomes integrated into scientific programmes in FP7 of the European Commission.
  • [3] TECHNOLIFE stands for ‘Transdisciplinary Approach to the Emerging Challenges of Novel Technologies: Lifeworld and Imaginaries in Foresight and Ethics’.
  • [4]