Facing Distributed Epistemic Responsibility
To understand the epistemic responsibilities of knowers in our contemporary, hyperconnected world, I think all insights outlined above need to be accounted for. Yet it remains to be explored and discussed in detail a) whether, how and to what extent they can be aligned, and b) what the implications on both an individual and a governance level could or should be. That means we need conceptual advancements as well as practical solutions and guidance, both for individuals and for policy makers. Before I turn to both tasks, let me recapitulate the challenges regarding epistemic responsibility in our hyperconnected era.
As knowers we move and act within highly entangled socio-technical epistemic systems. In our attempts to know, we permanently need to decide when and whom to trust, when to withhold trust, and when to remain vigilant. Loci of trust in these entangled and highly complex environments are not only other humans, but also technologies, companies, or organizations—and they usually cannot be conceived in separation, but only as socio-technical compounds. This holds true for our daily lives: just imagine someone booking a flight online. It holds even more true for scientific environments, where information acquisition and processing involve various hyperconnected agents and institutions.
Socio-technical epistemic systems are highly entangled but also highly differentiated systems, consisting of human, non-human and compound or collective entities, each equipped with very different amounts of power. Search engines are a useful example here. In highly simplified terms, search engines can be conceived as code written, run and used by human and non-human agents embedded in socio-technical infrastructures as well as in organizational, economic, societal and political environments. While there are potentially many ways to enter the World Wide Web, search engines have emerged as major points of entrance, and specific search engines nowadays function as “obligatory passage points” (Callon 1986), exerting a tremendous amount of not only economic, but also epistemic power.
That is to say that the fact that both human and non-human entities can qualify as agents does not imply that we have entered a state of harmony and equality: there are enormous differences in power between different agents. To use Barad's terminology, some agents matter much more than others. And—for better or worse—those that matter most do not necessarily have to be human agents.
In Actor-Network-Theory (e.g. Latour 1992; Law and Hassard 1999), power is conceived as a network effect—a view that is highly plausible and useful in the context of search engines, recommender systems or social networking sites, because the power of specific search engines does not stem from any a priori advantage, but rather is the result of collective socio-technical epistemic practices in which we all are involved: it is our practices of knowing, of relying on and using information which influence and shape the power distributions in our environment.
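The claim that power is a network effect can be made concrete with a toy computation. The following sketch is my illustrative addition, not part of the original argument: the function and the miniature "web" are hypothetical. It implements a minimal PageRank-style centrality, the kind of measure underlying search-engine ranking, in which a node's score derives entirely from how the other nodes link to it, i.e. from collective linking practices rather than from any intrinsic property of the node.

```python
# Illustrative sketch (added here, not from the source text): a minimal
# PageRank-style computation showing "power as a network effect". A
# node's score comes entirely from the linking practices of the others.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}          # start with equal "power"
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in links.items():
            if targets:
                # each node passes its rank on to the nodes it links to
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # dangling node: spread its rank evenly over all nodes
                for t in nodes:
                    new_rank[t] += damping * rank[node] / n
        rank = new_rank
    return rank

# A hypothetical miniature web in which everyone's practice of linking
# to one hub concentrates "epistemic power" in that hub.
web = {
    "hub": ["a"],
    "a":   ["hub"],
    "b":   ["hub"],
    "c":   ["hub"],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # the hub accumulates the most rank
```

The point of the sketch is conceptual: change the collective linking practices and the hub's "power" evaporates; no a priori property of the hub itself is involved.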
It is in these socio-technical, hyperconnected and entangled systems that the notion of epistemic responsibility is becoming a key challenge, both for policy makers and for us as individual epistemic agents processing information in research just as much as in our everyday lives.
5.1 Re-Conceptualizing Epistemic Responsibility
Responsibility is a rich concept, a concept with many nuances, a noun that changes its meaning if coupled with different verbs. There is a difference between being responsible and taking responsibility: we can be responsible for something, but deny assuming responsibility for it. This temptation to shirk responsibility is probably as old as humankind and has led to sophisticated techniques for cutting down chains of responsibility in law or the insurance sector. On the other hand, we may also accept full responsibility for something even if we are not, or are only partially, responsible. If a minister steps down because of some misconduct in her ministry that she was not even aware of, she takes responsibility; she responds. Moreover, responsibility can be assumed oneself as well as attributed to someone else.
All these different meanings of responsibility and their intersections are crucial for understanding what it takes to be epistemically responsible in socio-technical environments consisting of human and non-human agents. For instance, before asking for criteria of how exactly responsibility can be assumed or attributed, and further how it should be assumed or attributed, we may start by asking two related but distinct basic questions that are of increasing relevance in our computational age: (1) Can epistemic responsibility be assumed only by human agents, or also by other agents? (2) Can epistemic responsibility be attributed only to human agents, or also to non-human agents?
As a first step towards addressing these questions, I suggest disentangling the notions of agency, accountability and responsibility more carefully. Both Barad and Suchman seem to use the terms responsibility and accountability interchangeably. However, taking some philosophical insights into account, it seems fruitful to maintain a distinction between these two notions. As noted before, for Floridi and Sanders (2004), agency requires interactivity, autonomy and adaptivity, but no intentionality. Accountability is bound to agency only and hence also does not require the intentionality of agents. Responsibility, however, differs from accountability precisely by requiring intentionality. Hence, if we agree with Floridi and Sanders (2004) that responsibility, as opposed to agency and accountability, requires intentionality, then it makes no sense to talk about responsibility with respect to technical artifacts. A car cannot be made responsible for a crash; it is the driver who is to blame—for negligence or ill will—or maybe the manufacturer, if a technical flaw caused the crash. If an unmanned vehicle that drives autonomously, interactively and adaptively causes a crash, this car may be accountable for the crash, but it cannot be held responsible. Note that it is only the technical artifact in isolation that cannot be made responsible. For socio-technical compounds, the possibility of attributing responsibility would still be given; hence this perspective may in the end be well compatible with Barad's agential realism (Barad 2007).
To my mind, the distinction between accountability and responsibility is crucial, and I think we need a strong concept of responsibility reserved for intentional agents to really account for Barad's insights regarding the entanglement between (a) the social, the technical and the epistemic, as well as (b) epistemology, ontology and ethics. Reconsider the core distinction between being responsible and taking responsibility: while Barad rightly stresses our interdependences (or rather intradependences), the entanglement of human and non-human agents in knowing, being and doing, the process of assuming responsibility is and remains an intentional act.
For epistemic responsibility this means that, as responsible epistemic agents, we intentionally assume responsibility for what we claim to know. In full awareness of our socio-technical epistemic entanglement, we accept being challenged on what we claim to know; we commit ourselves to providing evidence for our claims and to revising our beliefs in the light of new evidence. Hence, to understand and improve our processes of knowing, to be responsible knowers as individuals, we first need to acknowledge the deep entanglements both between the social, the technical and the epistemic, and between epistemology, ethics and ontology. However, the only adequate reaction to this awareness must be to assume responsibility as an intentional act. It is only we humans (so far?) who can take this stance; hence it is our duty to assume responsibility for our interrelated ways of knowing, being and doing.
However, what is also clear is that the ease with which epistemic responsibility can be assumed differs between socio-technical environments: in some environments, assuming responsibility for what one knows is rather easy; in others it is much more difficult. Access to various types of evidence, to supporting or contradicting information, is essential for becoming epistemically responsible in knowing. It is in this sense that supporting open access is a very important and valid aspect of Responsible Research and Innovation. More generally, it means that our individual efforts must be complemented by appropriate policies that support environments in which the assumption of epistemic responsibility is enabled, fostered and incentivized.
5.2 Governance for Epistemic Responsibility
Based upon conceptual work regarding the basic meaning of concepts such as responsibility, accountability, action or intentionality, we need to come up with practical solutions to support responsibility assumption and attribution in our hyperconnected reality from a governance perspective. We need to develop policy frameworks that enable and support epistemically responsible behaviour.
How should such frameworks be conceptualized? Take the example I have given before: Responsible Research and Innovation (RRI), which is clearly meant to offer guidance for designing and governing environments that elicit and support responsible epistemic practices. Yet despite its name, Responsible Research and Innovation, as currently conceived, cannot fulfill these tasks properly, because it fails to tackle important challenges worked out in this contribution, namely a) to properly acknowledge the socio-technical entanglement of knowers, b) to properly acknowledge the interdependency of the epistemic, ontological and ethical aspects of science, c) to support responsibility assumption and attribution, and d) to be attentive to power asymmetries within entangled socio-technical environments.
Hence, in order to really enable and support epistemic responsibility, it would be essential to revise and amend the current RRI guidelines by adding new guiding thoughts such as the following:
1. Acknowledge the interrelation of epistemology, ethics and ontology: knowing, doing and being are interrelated, i.e. our processes of knowing have effects on what can be done and what we are—and vice versa.
2. Keep in mind the deep socio-technical entanglement of contemporary epistemic practices: Within our practices of knowing, we depend upon other human and non-human agents just as much as these other agents depend on us.
3. Bear in mind that epistemic relations are power relations: Within socio-technical epistemic systems, different epistemic agents, human as well as non-human agents, such as algorithms, are equipped with different amounts of power.
4. … etc
Thus, if revised appropriately, RRI could provide guidance on how to act responsibly in research and innovation as particularly knowledge-intense domains. Yet epistemic practices exist beyond research, and governance supporting epistemic responsibility accordingly has to be expanded beyond advice or regulations regarding research and innovation. Each and every one of us has to assume epistemic responsibility for the things we claim to know in our everyday lives as well. When and whom should we trust to know about climate change, about the war on terrorism, or just about the latest unemployment numbers? How vigilant do we have to be when accepting information received from various on- and offline sources?
While these are challenges that we all face on a daily basis, they also pose challenges for the governance of socio-technical epistemic systems. In a computational age characterized by ever more powerful personalization and profiling techniques, assuming epistemic responsibility becomes much harder, because we may be able to decide neither which information we receive nor which information is received about us. After all, how can we be responsible knowers if we cannot assess how trustworthy our sources of knowledge are?
Without denying the utility of personalized services, in order to act in an epistemically responsible way in an age of extensive profiling and personalization, we need the possibility to access, understand and even trick the systems which are accessing, understanding and potentially tricking us. As Mireille Hildebrandt stresses in her contribution, we need to develop “first (…) human machine interfaces that give us unobtrusive intuitive access to how we are being profiled, and, second, a new hermeneutics that allows us to trace at the technical level how the algorithms can be 'read' and contested” (Hildebrandt 2013). We need policies that address more broadly the challenges related to distributed epistemic responsibility in a hyperconnected reality: policies that set the parameters for an environment in which individuals can act responsibly, i.e. in which they can both assume and attribute responsibility even if they are deeply socio-technically entangled.
To conclude: in the long run, it will be essential to develop a concept of epistemic responsibility that can account for the responsibilities of various differently empowered agents within entangled socio-technical epistemic systems. Moreover, we will need to develop policy frameworks that provide guidance both for the individual seeking to act responsibly in knowing and for the design and governance of environments that support epistemically responsible behaviour. In addition to the goals that Pagallo has described for his notion of “good enough Onlife governance” (Pagallo 2013), these frameworks should entail support for individuals (e.g. education and support of digital literacy) as well as incentives for the research and design of epistemically beneficial systems (e.g. transparency-by-design, research on better interface design, development of tools for argumentation extraction and visualization, etc.).
This research was supported by the Austrian Science Fund (P23770) and the Karlsruhe Institute of Technology.