Distributed Cognition in Self-Organizing Teams

Neelam Naikar

CONTENTS

Introduction

Adaptation in Sociotechnical Workplaces

Self-Organization in Sociotechnical Workplaces

Distributed Cognition in Sociotechnical Workplaces

Models for Work Analysis and Design

Discussion

Acknowledgments

References

INTRODUCTION

Cognitive work in sociotechnical systems has individual, social, material, and cultural dimensions. The theory of distributed cognition (Hollan, Hutchins, & Kirsh, 2000; Hutchins, 1995; Norman, 1991) explains how computational tasks or processes are distributed across these dimensions of work practice. However, this theory and its associated modeling tools (e.g., Furniss, Masci, Curzon, Mayer, & Blandford, 2015; Sellberg & Lindblom, 2014; Stanton, 2014), though important developments in the study of cognitive work practice, give insufficient emphasis to the self-organizing behaviors of these systems, which are vital for effective performance. The concept of self-organization in sociotechnical systems (Naikar & Elix, 2019) recognizes that computational work processes are not only distributed across actors and artifacts, but are also self-organizing, such that the distribution and content of the computations may change unpredictably with the spontaneous behaviors of actors, even over very short timescales. Consequently, models of work practice must accommodate not just the distributed but also the spontaneous, emergent characteristics of computational task performance, so that we can design systems that properly account for manifestations of such cognitive phenomena in actual workplaces. As a result, workers will be better supported in dealing with the challenges of cognitive work in unstable, uncertain, and unpredictable environments.

This chapter considers the nature of distributed cognition in self-organizing teams. This frame of reference was motivated by the observation that, in sociotechnical workplaces, new or different organizational structures emerge from individual, interacting actors' spontaneous behaviors, in ways that migrate toward becoming fitted to the demands of a continually evolving task environment, so that these systems are self-organizing. This chapter therefore begins by illustrating the nature of individual and organizational adaptation in sociotechnical workplaces, and by describing the concept of self-organization as it manifests in these settings. It then considers the nature of distributed cognition in sociotechnical workplaces, showing how observations from Hutchins's (1990, 1991, 1995) classic field studies support the argument that the computational processes of work are not only distributed but are also self-organizing. Following that, the chapter examines the implications of the self-organization phenomenon for modeling and designing computational work in these systems, again using examples from Hutchins's case studies to illustrate the arguments. This chapter therefore leads to an understanding of how complex cognitive work can be supported through design to promote the inherent capacity of sociotechnical systems for self-organization, a phenomenon that is essential for dealing with instability, uncertainty, and unpredictability in the task conditions.

ADAPTATION IN SOCIOTECHNICAL WORKPLACES

Sociotechnical workplaces are systems with strong psychological, social, cultural, and technological attributes (see also Vicente, 1999). In such workplaces, which include hospitals, air traffic control centers, naval vessels, and emergency management organizations, human workers bring both psychological and social dimensions to the task performance, and the work is carried out in the context of a substantive cultural and technological environment. In some cases, such as petrochemical refineries, the technological dimension may be more prominent, whereas in other cases, such as stock market trading, the psychological, social, and cultural dimensions may be more apparent. Nevertheless, in all cases, all of these facets of complex cognitive work must be accounted for in system design. Designs that fail to consider any one or more of these dimensions may compromise performance, sometimes with disastrous consequences, as demonstrated by a number of recent, high-profile accidents, such as the fatal crashes of two Boeing 737 MAX 8 aircraft in Indonesia and Ethiopia (National Transportation Safety Board, 2019) and the Fukushima Daiichi nuclear power plant disaster (Director General, 2015).

Sociotechnical workplaces are necessarily distributed systems. The scale and demands of the work are such that it cannot be undertaken by any one individual alone. The work is therefore distributed across multiple actors and artifacts, such that it may span mental, social, physical, and temporal spaces, as Hutchins (1995) describes in depth in his classic text on distributed cognition.

The challenges of cognitive work in sociotechnical systems arise in large part from the instability, uncertainty, and unpredictability of the task environment (Naikar & Brady, 2019; Vicente, 1999; Woods, 1988). Workers must operate in highly dynamic conditions, so the problems, demands, or pressures they are faced with may change constantly with the evolving situation. As an example, the threats posed by wildfires to emergency management workers may shift constantly depending on the weather conditions, such as the amount of rainfall and wind directions, and the habitation and infrastructure in the affected areas. Workers must also operate with considerable uncertainty. In naval work, for instance, it is difficult to establish the presence of obstacles or adversaries in a ship's underwater environment with complete certainty because of imperfections in the onboard sensors. Finally, workers must also contend with significant unpredictability in the work requirements. That is, they must deal with events that cannot be foreseen or specified fully a priori by analysts or designers (e.g., Leveson, 1995; Perrow, 1984; Rasmussen, 1969; Reason, 1990; Vicente, 1999). Examples of such events include a new type of military threat (Herzog, 2011; Reich, Weinstein, Wild, & Cabanlong, 2010), an unanticipated reaction of a patient to an anesthetic during a surgical procedure (Hoppe & Popham, 2007), or an unforeseen string of supplier collapses in the aftermath of a natural disaster (Park, Hong, & Roh, 2013).

Field studies show that, given conditions of instability, uncertainty, and unpredictability in the task environment, workers adjust both their individual behaviors and collective structures in line with the evolving demands (Naikar & Brady, 2019; Naikar & Elix, 2016, 2019). Such observations have been documented in a variety of settings including emergency management (e.g., Bigley & Roberts, 2001; Lundberg & Rankin, 2014), military (e.g., Hutchins, 1990, 1991, 1995; Rochlin, La Porte, & Roberts, 1987), commercial aviation (e.g., Hutchins & Klausen, 1998), law enforcement (e.g., Linde, 1988), transport (e.g., Heath & Luff, 1998; Luff & Heath, 2000), and healthcare (e.g., Bogdanovic, Perry, Guggenheim, & Manser, 2015; Klein, Ziegert, Knight, & Xiao, 2006).

As an example, in a field study of emergency management workers, specifically teams of firefighters, Bigley and Roberts (2001) observed considerable flexibility in workers’ use of tools, rules, and routines. When a fire truck arrives at the scene of an incident, for instance, firefighters may have no choice but to improvise with the tools available on the truck to handle the emergency. In addition, workers may find it necessary to breach standard operating procedures or rules. In one case, firefighters deliberately chose the strategy of “opposing hose streams” to deal with an emergency, despite the fact that this strategy is prohibited by procedure, as one group can push the fire into another. Finally, firefighters often tailor their standard routines, such as those for “hose laying” or “ladder throwing,” to local contingencies.

Field studies also provide clear evidence for changes in the work structure or organization. A compelling case is provided by Rochlin et al. (1987), who conducted detailed observations of naval aircraft carriers. Their findings showed that the formal organizational structure of these carriers is strongly hierarchical and defined by clear chains of command and means of enforcing authority. However, during complex operations, the work organization shifts—without a priori planning, external intervention, or centralized coordination—to configurations that may be described as informal in that they are not officially documented. These informal configurations, which are flat and distributed, are not defined by any simple mapping between people and roles. Instead, the mapping changes spontaneously with the local circumstances.

SELF-ORGANIZATION IN SOCIOTECHNICAL WORKPLACES

The preceding case studies show that adaptations in sociotechnical systems can be observed at two levels: in the behaviors of individual actors and the structures of multiple actors. The concept of self-organization (e.g., Haken, 1988; Heylighen, 2001; Hofkirchner, 1998) is important in this context because it provides a plausible and parsimonious framework for understanding why adaptations in both actors' individual behaviors and collective structures are necessary, and how such adaptations can be achieved spontaneously, continuously, and relatively seamlessly (Naikar, 2018; Naikar & Brady, 2019; Naikar & Elix, 2019; Naikar, Elix, Dagge, & Caldwell, 2017). Moreover, in that the adaptations signify continuous, spontaneous change in computational work processes, the self-organization concept contributes to the understanding of cognitive task performance in distributed systems.

Specifically, the concept of self-organization (e.g., Haken, 1988; Heylighen, 2001; Hofkirchner, 1998), when analyzed in the context of sociotechnical workplaces, suggests that a system’s formal structure bounds individual actors’ degrees of freedom for action in ways that are suited to particular circumstances, usually those that are routine or familiar. Therefore, when new or different conditions are encountered, the formal structure may constrain the system’s response in ways that are unsuitable or ineffective (Figure 4.1). However, in responding to the changed conditions, the spontaneous actions of individual, interacting actors may result in changes in the work organization. Consequently, when a new or different structure emerges that is better suited to the present circumstances, it will stabilize, such that it will continue to constrain and enable the behaviors of individual actors in ways that are fitted to the local conditions. However, when the situation changes again in some fundamental respect, the spontaneous behaviors of individual, interacting actors will result in further structural changes, so that the system is self-organizing (Naikar, 2018; Naikar & Brady, 2019; Naikar & Elix, 2019; Naikar et al., 2017).

Further, the concept of self-organization in sociotechnical workplaces recognizes that, under conditions of instability, uncertainty, and unpredictability, actors move away from formal structures or procedures toward the intrinsic constraints of the sociotechnical system as the principal governing mechanism for their conduct (Figure 4.1). These constraints, which are boundaries on behavior that must be respected by actors for a system to perform effectively, still afford actors many degrees of freedom for action (Rasmussen, Pejtersen, & Goodstein, 1994; Vicente, 1999). For example, organizational values and resources place limits on actors’ behaviors, but still afford actors many possibilities for action. Therefore, given a commitment to these fundamental constraints on behavior, rather than to formal specifications, workers can safely and productively adjust their actions to the local conditions, such that new structures may emerge from their spontaneous behaviors. This phenomenon of self-organization is integral to the process of a system adapting to its environment.

As Naikar and Elix (2019) discuss, the processes of self-organization in sociotechnical workplaces are not without challenges and may not be regarded as perfect or flawless (e.g., Lundberg & Rankin, 2014), at least when assessed against idealized measures or benchmarks. However, many sociotechnical systems are described as high-reliability organizations (La Porte, 1996; Rochlin, 1993; Weick & Sutcliffe, 2001) because of their capacity to balance safety and productivity goals successfully in the face of considerable instability, uncertainty, and unpredictability. Therefore, the processes of self-organization can be relatively seamless, particularly in well-established systems, and, importantly, they are necessary. Alternative strategies, such as a priori planning, centralized coordination, or external intervention, are rarely feasible under conditions of instability and ambiguity. Instead, spontaneous behaviors and emergent work structures are required to resolve—in time and in situ—the "proper, immediate balance" (Rochlin et al., 1987, pp. 83-84) between a system's safety and productivity objectives.

FIGURE 4.1 The phenomenon of self-organization in sociotechnical workplaces: Given the opportunities for action, afforded by the intrinsic constraints of the physical, social, and cultural work environment, new or different organizational structures may emerge from actors' spontaneous behaviors. The emergent structures constrain and enable actors' behaviors in ways that are fitted to the local conditions, such that the system is self-organizing.

Note: The dashed lines signify possible or acceptable behavioral trajectories, which are innumerable.

DISTRIBUTED COGNITION IN SOCIOTECHNICAL WORKPLACES

The theory of distributed cognition (Hollan et al., 2000; Hutchins, 1995; Norman, 1991) focuses on human cognition in its natural habitat or in naturally occurring contexts. This theory suggests that, in sociotechnical workplaces, it does not make sense to draw the boundaries of cognitive processes around individuals. Rather, the boundaries must extend beyond individuals to their social, physical, and temporal spaces to account for interactions between people and their environments. Thus, this theory recognizes that in actual workplaces—or in the wild—cognitive accomplishments are necessarily a product of the sociotechnical system, resulting from interactions between mental, social, material, and cultural structures.

The theory of distributed cognition is compelling and has led to significant advancements in the understanding of the computational processes of work in sociotechnical systems (e.g., Furniss et al., 2015; Sellberg & Lindblom, 2014; Stanton, 2014). However, the concept of self-organization suggests that it is important to appreciate that these processes are not only distributed, but are also self-organizing. Given the self-organizing behavior of distributed cognitive systems, the distributions of computational work, as well as its content, may change in ways that are closely fitted to evolving circumstances, which cannot be specified fully a priori. Therefore, the phenomenon of self-organization has significant implications for modeling and designing distributed cognitive systems.

In this section, this argument is illustrated and elaborated with Hutchins's (1990, 1991, 1995) classic studies of the computational task of navigation on naval ships. Although his works contain many observations that may be used to substantiate this argument, the theory of distributed cognition does not directly address the self-organizing characteristics of distributed cognitive work in complex workplaces.

First, Hutchins's (1990, 1991, 1995) field studies may be used to support the claim that the distributions of computational processes across human workers in sociotechnical systems may vary with their spontaneous behaviors. In the navigation task that Hutchins (1995) describes, it is formally the job of the two bearing takers, who are located on the ship's left and right wings, to identify nominated landmarks in the surrounding area and to measure and report their bearings. In addition, it is the bearing timer-recorder's responsibility to time and record the reported bearings in a log, and the plotter's responsibility to plot those bearings onto a navigation chart. However, Hutchins (1990) observed this formal structure being violated on a number of different occasions. In one incident, the bearing timer-recorder, who is stationed in the pilothouse, assisted the bearing taker on the ship's right wing with identifying a landmark over the telephone. In another incident, the plotter, who is also stationed in the pilothouse, walked out onto the ship's right wing to point out a landmark to the bearing taker. Furthermore, in a third incident, the bearing timer-recorder performed the first step of the plotting job when the plotter was called away from the chart table for a consultation with the ship's captain. These observations suggest that the distributions of cognitive processes across workers may be reconfigured continually in the course of their spontaneous interactions, as they "negotiate" their responsibilities on the job and "participate" in each other's work (Luff & Heath, 2000).

Hutchins's (1990, 1991, 1995) field studies may also be used to make the case that it is not just the distributions of cognitive activities across human workers that may vary, but also the distributions of computational processes across material artifacts. In one incident observed by Hutchins (1991), the ship's gyrocompass failed when it lost its power supply, so that the true bearing of landmarks could no longer be measured directly with this instrument but, instead, had to be computed arithmetically from the relative bearings. This meant that, to navigate the ship safely through restricted waters (the ship was in a narrow harbor on its return to port), new computational tasks to establish the ship's position in space, and an efficient structure for organizing these activities across the team's internal (i.e., mental) and external resources, were required. Initially, as the navigation team improvised with different processes for the computational task, such as performing the arithmetic mentally or with a hand-held electronic calculator, there was no consistent pattern in the distribution of computational processes across actors and artifacts. In fact, 30 lines of position for the ship were plotted before the distribution of the cognitive work stabilized. Hutchins (1991, p. 23) commented that "the social structure (division of labor) seems to have emerged from the interactions among the participants without any explicit planning." This emergent social distribution, which involved the plotter and the bearing timer-recorder calculating the true bearings of landmarks with the aid of an electronic calculator, in place of the bearing takers measuring the true bearings directly with the gyrocompass, was fitted to the local conditions in that it allowed the individuals in the team to perform the new computational work in a manner that was cognitively efficient for each of them given the artifacts available at the time.

Third, observations from Hutchins's (1990, 1991, 1995) field studies may be used to support the argument that the distributions of computational processes across actors and artifacts may vary in ways that are not fully specifiable a priori. As an example, in the incident from Hutchins's (1991) study when the ship's gyrocompass failed, the bearing timer-recorder seemed to prefer performing the calculation for obtaining the true bearing of landmarks from their relative bearings by adding the mathematical terms in the order of the availability of the data, despite the fact that the plotter was encouraging him to perform the computation in another, arguably more meaningful, order. This "conflict" was resolved temporarily, prior to the stabilized structure being reached, by the bearing timer-recorder adopting the order the plotter desired when performing the computation in interaction with him, but not otherwise. These observations suggest that computational strategies that were cognitively efficient for the plotter were not cognitively efficient for the bearing timer-recorder. More generally, the distributions that emerge at any point in time depend on specific details of the local situation, including individual differences between human participants, so that it would be impossible to specify in advance a distribution to cover every circumstance.
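To make the arithmetic at issue concrete, the relationship can be written in a simplified form (this is the standard navigational relationship rather than a reproduction of the exact procedure Hutchins documents; with the gyrocompass lost, the ship's heading had to be taken from the magnetic compass and corrected before it could be used):

\[
\text{true bearing} = (\text{relative bearing} + \text{ship's true heading}) \bmod 360^{\circ}
\]

where the ship's true heading is itself obtained by summing the compass heading with its correction terms (variation and deviation). For example, a landmark at a relative bearing of 045° from a ship whose corrected heading is 310° lies at a true bearing of (45 + 310) mod 360 = 355°. Because addition is commutative, the order in which the terms are summed is computationally irrelevant, which is why the plotter and the bearing timer-recorder could legitimately prefer different orders of addition without affecting the result.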

Fourth, Hutchins’s (1990, 1991, 1995) field studies may be used to make the point that it is not just the distributions of the computations that may vary, but also the computations that are required in the first place. In the incident in Hutchins’s (1991) study when the ship’s gyrocompass failed, actors were suddenly faced with the task of computing the true bearing of landmarks from the relative bearings, rather than measuring the true bearings directly with the gyrocompass. In other words, the computational tasks that were required when the gyrocompass was functional were different from when it was not. Thus, both the content and distribution of the computational processes may vary.

Finally, Hutchins’s (1990, 1991, 1995) case studies may be used to demonstrate that the distribution and content of the computational work varies with actors’ spontaneous acts on the affordances of the environment. For example, in the incident when the ship’s gyrocompass failed, the computational processes exhibited depended on whether the plotter chose to perform the arithmetic mentally; whether the plotter decided to perform the computation with the aid of a handheld calculator, which he had to obtain by walking over to the charthouse; or whether the plotter elected to share the task with the bearing timer-recorder, when he realized that he could not keep up with the timing requirements for the task even with the aid of an electronic calculator. These observations suggest that the work environment affords different computational possibilities for the job, and the content and distribution of the cognitive work varies depending on which affordances the actors spontaneously act on for the job.

In summary, the concept of self-organization in sociotechnical systems suggests that, given the inherent instability and ambiguity of the task conditions, the computational processes of work are not only distributed across actors and artifacts, but are also self-organizing. In responding to the local conditions, actors' spontaneous acts on the affordances of the environment produce variations in the distribution and content of the computational work, such that the system's behavior migrates toward becoming fitted to the demands of the evolving circumstances. This type of self-organizing behavior is evident in a variety of distributed cognitive systems, including in commercial and military aviation, healthcare, transport, law enforcement, and emergency management, as discussed in more depth elsewhere (Naikar & Elix, 2019). The capacity of distributed cognitive systems for self-organization—or for spontaneous change in the computational processes—is fundamental to their viability in task conditions with high levels of instability, uncertainty, and unpredictability.

MODELS FOR WORK ANALYSIS AND DESIGN

The concept of self-organization presented in this chapter recognizes that, in sociotechnical systems, computational work processes change in ways that become closely fitted to evolving circumstances, which cannot be specified fully a priori. The spontaneous behaviors of individual, interacting actors, and the emergence of new structural forms or organizations from these behaviors, make it possible for a system to adapt to the changing demands of the work environment, and thus to “survive in the wild.” This phenomenon of self-organization has distinct implications for modeling and designing distributed cognitive systems or, in other words, for modeling the work requirements of these systems and developing designs for interfaces, teams, and training systems, for instance, that support the work requirements effectively.

Models for work analysis and design may be viewed as having normative, descriptive, or formative orientations (Rasmussen, 1997; Vicente, 1999). In this section, the capacity of these three types of approaches to support the analysis and design of distributed cognitive systems, which are self-organizing, is examined. As in the preceding section, the following arguments are substantiated with observations from Hutchins’s (1990, 1991, 1995) field studies of the navigation task on naval vessels.

First, a key implication of the self-organization phenomenon is that normative models, which formalize or prescribe the processes of work required under specific conditions, are inadequate for distributed cognitive systems. This view may be supported by Hutchins's (1995) observations of the formal procedures for the navigation task, which he reproduces in his classic text. His findings show that, although these procedures are detailed, they are only nominal processes for ideal conditions. Consequently, the procedures are routinely violated during everyday operations (Hutchins, 1990, 1995). Furthermore, his studies suggest that it would be impossible to specify a distribution of computational processes to cover every single circumstance. As a case in point, there was no specified procedure for how the navigation task should be carried out when the ship's gyrocompass failed (Hutchins, 1991). Normative models, therefore, are likely to be incomplete and inflexible, leading to designs that are limited to supporting computational processes in situations that can be pre-specified or anticipated.

Second, the self-organization phenomenon highlights that descriptive approaches, which characterize or describe the processes of work observed in particular situations, are also unsuitable for distributed cognitive systems. In his classic text, Hutchins (1995) presents a detailed "activity score," which describes how computational processes for position fixing (i.e., establishing the ship's location in space) are distributed, or coordinated temporally, across actors and artifacts on the vessel. However, this activity score is likely only relevant for a single instance of position fixing—that which Hutchins observed in creating the record. The computational processes in other instances, even very similar ones, are likely to be different. Certainly, Hutchins's activity score is not relevant to the case when the ship's gyrocompass failed. Moreover, activity scores cannot be produced for instances that have not been observed, and it is impossible to observe the full range of instances, especially those that have not yet occurred. Consequently, descriptive models of computational work are also likely to be incomplete and inflexible, leading to designs that are limited to supporting computational processes that have been or can be observed. Such designs, like those produced by normative models, may inhibit, rather than foster, the spontaneity needed in the workplace for dealing with instability and ambiguity in the task requirements. Such arguments are also applicable to tools developed more recently for modeling distributed cognition, such as Event Analysis of Systemic Teamwork (EAST; Stanton, 2014) and Distributed Cognition for Teamwork (DiCoT; Furniss et al., 2015).

Compared with normative and descriptive approaches, formative models are more appropriate for distributed cognitive systems, given the intent to accommodate their self-organizing characteristics. A well-established framework that falls in this category is cognitive work analysis (CWA; Rasmussen et al., 1994; Vicente, 1999). This framework provides a strong starting point for modeling and designing distributed cognitive systems because it focuses attention on the possibilities for computational work, rather than on formalized or observed computational processes. Designs based on such models can accommodate adaptations in the computational processes to the local conditions, even those that have not been observed or anticipated.

The CWA framework is particularly powerful in the approach it offers for modeling the constraints, or affordances, of the work environment. Specifically, work domain analysis (Naikar, 2013; Rasmussen et al., 1994; Vicente, 1999), the first dimension of the framework, models the constraints of the physical, social, and cultural environment at five levels of abstraction, relating to the system's purposes, values and priorities, functions, processes, and objects. As indicated above in the section "Self-Organization in Sociotechnical Workplaces", although these constraints place limits on actors' behaviors, they also afford actors many degrees of freedom for action. Moreover, as these constraints are event-independent, the affordances are relevant to a wide range of situations, including those that have not been observed or anticipated. Further, activity analysis and strategies analysis, the second and third dimensions of CWA, provide increasingly detailed views of the constraints in terms that relate to the possible activities and strategies in recurring classes of situation.

One problem with the standard CWA framework (Rasmussen et al., 1994; Vicente, 1999), however, is the approach it takes for modeling the distributions of work across actors (Naikar & Elix, 2016, 2019). Specifically, in social organization and cooperation analysis, the fourth dimension of the framework, the distributions are limited to recurring classes of situation and even further to organizational structures that have been observed or are judged to be reasonable in these situations. As a result, the standard framework does not account for the range of work organization possibilities, especially those that may emerge in novel or unforeseen circumstances.

Recently, however, a new modeling tool has been developed for social organization and cooperation analysis, namely the diagram of work organization possibilities (Naikar & Elix, 2016). This tool models the behavioral opportunities of individual actors and the structural possibilities of multiple actors, as afforded by the constraints of the system, in a single, integrated representation—in a way that is compatible with the phenomenon of self-organization in sociotechnical workplaces (Naikar & Elix, 2019). Therefore, this tool has the potential to support the analysis and design of distributed cognitive systems, which are self-organizing.

The basic form of the diagram of work organization possibilities is shown in Figure 4.2. In this representation, the actors signify agents, whether human or artificial, that are capable of goal-directed behaviors. The behavioral opportunities of individual actors are demarcated by sets of work demands, or constraints, from the first three CWA dimensions. The terms “constraints” and “work demands” are used interchangeably in CWA because the constraints place demands on actors by defining boundaries that must be respected in their actions for effective performance (Vicente, 1999). Accordingly, for each actor, the work demands collectively demarcate a field of opportunities for safe and productive behavior. The structural possibilities of multiple actors, then, emerge from the behavioral opportunities of individual, interacting actors (cf. Figure 4.1).

As a simplified example, Figure 4.2 shows that the behavioral spaces of both Actors A and C, though not B, are delimited by work demand 1. Therefore, some of the structural forms that can emerge from these actors' spontaneous behaviors are: Actor A is engaged in behaviors accommodated by work demand 1, while Actors B and C are occupied in other behaviors; Actor C is involved in behaviors accommodated by work demand 1, while Actors A and B are engaged in other behaviors; or both Actors A and C are involved in behaviors accommodated by work demand 1, while Actor B is occupied in other behaviors. Which structural possibility emerges at any point in time depends on the details of the circumstances, including individual differences between human workers, which cannot be predicted in full a priori.
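The combinatorial reading of this example can be sketched in a few lines of code. The snippet below is a hypothetical illustration only; the data structure and names are invented here and are not part of the CWA toolset. Given which work demands each actor's behavioral space accommodates, it enumerates the combinations of actors that could take up a particular demand, any one of which may emerge in situ:

```python
from itertools import chain, combinations

# Hypothetical sketch: each actor's behavioral space is reduced to the set of
# work demands that delimit it (cf. Figure 4.2).
behavioral_spaces = {
    "A": {"demand_1", "demand_2"},
    "B": {"demand_2", "demand_3"},
    "C": {"demand_1", "demand_3"},
}

def candidates(demand):
    """Actors whose behavioral space accommodates the given work demand."""
    return sorted(actor for actor, demands in behavioral_spaces.items() if demand in demands)

def structural_possibilities(demand):
    """Every non-empty combination of capable actors that could take up the demand.
    Which combination actually emerges depends on the local circumstances and
    cannot be predicted a priori."""
    capable = candidates(demand)
    return list(chain.from_iterable(combinations(capable, k) for k in range(1, len(capable) + 1)))

print(candidates("demand_1"))                # ['A', 'C']: Actor B's space excludes work demand 1
print(structural_possibilities("demand_1"))  # [('A',), ('C',), ('A', 'C')]
```

The point of the enumeration is that the model rules possibilities in or out; it does not prescribe which possibility will be realized on any given occasion.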

FIGURE 4.2 Diagram of work organization possibilities. Source: Reproduced from Naikar et al. (2017).

Figure 4.3 presents a diagram of work organization possibilities for a future airborne system for maritime surveillance (Elix & Naikar, 2019). This diagram shows, for instance, that actors on the flight deck as well as actors located in the cabin of the aircraft, specifically at two observer stations and six tactical workstations, are afforded behaviors for satisfying the constraints of navigation. Therefore, any one or more of these actors may be occupied in such behaviors at any point in time, so that the emergent structural forms may vary with the circumstances. Another example is that only the actors on the flight deck and at the observer stations in the cabin are afforded behaviors for sighting or observing targets out of a window. However, at any point in time, any one or more of the two flight deck actors and two observer station actors, who are normally located in these positions, may be occupied in these behaviors. Furthermore, if there is an electrical failure, so that some of the sensors available to the six workstation actors can no longer be used for detecting, tracking, localizing, or identifying targets, any one or more of these actors might relocate to the stations with a window to increase the chances of finding the target. The diagram of work organization possibilities therefore accommodates a range of behavioral and structural possibilities for adaptation, which are relevant to a wide range of situations, including novel or unanticipated events.

The processes for constructing a diagram of work organization possibilities have been described in depth elsewhere (Elix & Naikar, 2019; Naikar & Elix, 2016). Generally, however, the processes involve applying a set of criteria to the work demands from the earlier CWA dimensions to identify the limits on the distribution of the work demands across actors, or the organizational constraints. These criteria, which include compliance with policies or regulations, safety and reliability, access to information/controls, and feasible coordination, competencies, and workload, govern shifts in work organization dynamically in sociotechnical systems (Rasmussen et al., 1994; Vicente, 1999). Therefore, in applying these criteria to construct a model of work organization possibilities, the aim is to rule out behaviors and structures that cannot be manifested in situ, regardless of the circumstances, and to support the possible behaviors and structures in design.
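As a rough sketch of this filtering step (the actors, demands, and criteria below are invented for illustration and are far coarser than those applied in a real analysis), each criterion can be treated as a predicate that either admits or rules out a pairing of an actor with a work demand:

```python
# Hypothetical sketch: a pairing of an actor with a work demand survives only if
# every criterion admits it; the surviving pairings delimit the behavioral spaces
# represented in the diagram. Data and criteria are invented for illustration.
access_to_controls = {"flight_deck": {"navigation"},
                      "workstation": {"navigation", "sensor_management"}}
competencies = {"flight_deck": {"navigation"},
                "workstation": {"navigation", "sensor_management"}}

criteria = [
    lambda actor, demand: demand in access_to_controls.get(actor, set()),  # access to information/controls
    lambda actor, demand: demand in competencies.get(actor, set()),        # competencies
    # further criteria (regulations, workload, feasible coordination) would be added here
]

def feasible(actor, demand):
    """A pairing is ruled out as soon as any criterion excludes it."""
    return all(criterion(actor, demand) for criterion in criteria)

demands = ["navigation", "sensor_management"]
print([(a, d) for a in access_to_controls for d in demands if feasible(a, d)])
# [('flight_deck', 'navigation'), ('workstation', 'navigation'), ('workstation', 'sensor_management')]
```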

FIGURE 4.3 Sample of a work organization possibilities diagram for a future maritime surveillance system. Source: Reproduced from Elix and Naikar (2019).

In the context of a discussion on distributed cognitive systems, it is important to emphasize that the diagram encompasses the computational possibilities for the work given both the actors and artifacts in the system. As mentioned, the actors in the diagram may represent human workers or artificial agents that are capable of goal-directed behaviors. Therefore, some cognitive artifacts, or “physical objects made by humans for the purpose of aiding, enhancing, or improving cognition” (Hutchins, 1999, p. 126), may be represented as actors. As Hutchins (1999) recognizes, computers are a special class of cognitive artifact that “mimic certain aspects of human cognitive function” (p. 127). Cognitive artifacts that are not capable of goal-directed behaviors are incorporated in the diagram as objects or resources in the work environment that define the behavioral spaces of actors. Therefore, depending on whether actors choose to act on the affordances of such artifacts or not, the actors’ cognitive trajectories will be different. Accordingly, the computational structures for the work will vary. (Although physical objects or resources were included in the analysis of the future maritime surveillance system discussed earlier in this section, they are not evident in the diagram shown in Figure 4.3 because, as Elix and Naikar (2019) explain, the constraints were represented at a higher level of abstraction for practical reasons in this case.)

To return to Hutchins’s (1990, 1991, 1995) field studies of the navigation task on naval vessels, let us assume that Actors A, B, and C in the basic form of the work organization possibilities diagram represent the positions of the bearing timer-recorder, bearing taker, and plotter, respectively (Figure 4.4). Further, let us superimpose some of the cognitive artifacts referenced in this study, specifically a gyrocompass (G), paper and pencil (P), and hand-held electronic calculator (C), onto the behavioral spaces of these actors. This representation shows that the gyrocompass is only available to an actor in the position or location of the bearing taker, whereas the paper and pencil and electronic calculator are accessible to actors in any of the three positions. This means that only an actor in the position of the bearing taker can measure the true bearings of landmarks directly with the gyrocompass. However, in the incident when the ship’s gyrocompass failed, any combination of these three actors could have been involved in computing the true bearings of landmarks from their relative bearings using the paper and pencil or hand-held calculator, given the afforded behavioral spaces of these actors. In Hutchins’s (1991) study, the stabilized social structure involved the bearing timer-recorder and the plotter sharing this computational task, with the aid of an electronic calculator, and this possibility for the distribution of computational work is accommodated in the diagram. Notably, this possibility was not accounted for in the formal procedures of the ship or documented in an activity score a priori.
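Continuing the earlier sketch with invented names, the situation in Figure 4.4 can be expressed by recording which artifacts fall within each position's behavioral space; removing the gyrocompass then shows which positions could still take up the work of establishing true bearings:

```python
# Hypothetical rendering of Figure 4.4: artifacts accessible from each position.
artifact_access = {
    "bearing_taker":          {"gyrocompass", "paper_and_pencil", "calculator"},
    "bearing_timer_recorder": {"paper_and_pencil", "calculator"},
    "plotter":                {"paper_and_pencil", "calculator"},
}

def can_measure_directly(position, working):
    # Direct measurement of true bearings requires a working gyrocompass in reach.
    return "gyrocompass" in (artifact_access[position] & working)

def can_compute_from_relative(position, working):
    # Computing true bearings from relative bearings needs only paper and pencil
    # or a calculator within the position's behavioral space.
    return bool(artifact_access[position] & working & {"paper_and_pencil", "calculator"})

working_before = {"gyrocompass", "paper_and_pencil", "calculator"}
working_after = working_before - {"gyrocompass"}   # the gyrocompass loses power

print([p for p in artifact_access if can_measure_directly(p, working_before)])
# ['bearing_taker']: only this position affords direct measurement
print([p for p in artifact_access if can_compute_from_relative(p, working_after)])
# all three positions: any combination could take up the computation; in Hutchins's
# (1991) account, the plotter and the bearing timer-recorder eventually did
```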

FIGURE 4.4 The work organization possibilities diagram encompasses the computational possibilities for work.

By modeling the computational work possibilities, given the actors and artifacts in the system, the diagram of work organization possibilities provides a means for designing distributed cognitive systems. Designs based on this diagram, whether of interfaces, teams, training systems, or workspace layouts, will not be limited to prespecified computational processes, such as the formal procedures for the task of navigation on naval vessels (Hutchins, 1995). In addition, designs based on this diagram will not be limited to computational processes that have already been observed, such as those documented in the activity score for position fixing presented by Hutchins (1995). Instead, designs based on this diagram will have the potential to accommodate the range of possibilities for computational work, regardless of the circumstances. Consequently, actors may be better supported in responding spontaneously to the local conditions, whether these are defined by small variations in routine situations, dramatic changes in circumstances, or workers' individual preferences. As a result, the system's inherent capacity for self-organization, a phenomenon that is essential for dealing with instability and ambiguity in the task environment, may be preserved.

For example, Elix and Naikar (2019) present a case study of the future airborne maritime surveillance system discussed earlier in this section that demonstrates how the diagram (Figure 4.3) led to a team design that is integrated with the training and career progression pathway of the crew in a way that maximizes the system’s behavioral and structural possibilities for adaptation, specifically in relation to a novel operational concept involving the control of an uninhabited aerial system (UAS) from the manned aircraft. In this approach, the learning requirements of the flight deck actors, observer station actors, and workstation actors are based on their spaces of behavioral opportunities, as defined by the intrinsic constraints of the system, rather than on pre-specified roles or responsibilities. Consequently, their actual roles and responsibilities on any occasion emerge from their bounded spaces of possibilities for action.

Designs based on the diagram of work organization possibilities, then, may differ significantly from designs produced with conventional approaches. The diagram for the future maritime surveillance system (Figure 4.3) identified that all six workstation actors had the capacity for piloting the UAS and detecting, tracking, localizing, and identifying targets with its sensor. Therefore, an integrated system design was produced, which combined the team, training, and career progression elements in a way that enabled all six workstation actors to operate the UAS. As a result, the design accommodates a range of possibilities for managing the instability, uncertainty, and unpredictability of the tasking environment, such as shifting responsibility for the UAS or combining some of the roles to allow for a dedicated UAS operator if necessary.

In contrast, normative and descriptive approaches are likely to have identified the best crew member for the job in view of specific circumstances. Specifically, normative approaches are likely to have identified the best crew member for dealing with known or anticipated situations, whereas descriptive approaches are likely to have identified the best crew member for the job in situations that have been observed. However, in sociotechnical workplaces, events are rarely reproduced exactly and novel or unforeseen events are possible (Hoffman & Woods, 2011; Rasmussen et al., 1994; Vicente, 1999). Therefore, normative and descriptive approaches are likely to result in designs that are limited in changed or novel circumstances.

Indeed, many field studies show that workers often act outside the boundaries of their professional roles when necessary (e.g., Bogdanovic et al., 2015; Lundberg & Rankin, 2014; Rochlin et al., 1987). For example, in Hutchins's studies (1990, 1991, 1995), the plotter and the bearing timer-recorder were observed to act outside the formal definitions of their roles following the loss of the gyrocompass as well as in more routine situations. However, despite the existing body of empirical evidence, conventional design practices persist in utilizing relatively inflexible forms of social organization, which are rarely manifested during actual operations, in the system development process. As a result, workers may not be well supported by the system design in their improvisations, which are necessary for survival in the wild.

For instance, Lundberg and Rankin (2014) found that role improvisations by workers in crisis response teams, which were not accounted for in training, meant that jobs were sometimes performed less effectively or efficiently than they would have been by specialists in those roles. Moreover, Klein, Wiggins, and Deal (2008) observed that the Three Mile Island accident highlighted that nuclear power plant control rooms presented information to workers in ways that sometimes interfered with their ability to understand the situation and adapt to the circumstances. Clearly, then, designs based on inflexible work practices are not only inefficient, but they may also be dangerous, particularly when the task circumstances stretch the capacity of workers to keep the system afloat (pardon the pun!).

In some systems, relatively robust designs may evolve through the bottom-up practices of workers, as opposed to resulting from the top-down practices of designers. For example, in considering the division of labor among members of the navigation team on naval vessels, Hutchins (1990) observes that “the progress of various team members through the career cycle of navigation practitioners produces an overlapping distribution of expertise” (p. 191) that allows team members to take responsibility for all parts of the process to which they can contribute—not just for their own jobs—which enables the system to adapt to changing circumstances.

This observation may explain, for instance, how the bearing timer-recorder was able to perform the first step of the plotting job, when the plotter was called away from the chart table for a consultation with the ship’s captain. In addition, this observation may explain how the plotter and bearing timer-recorder were able to establish the true bearing of landmarks from the relative bearings when the gyrocompass failed, despite the fact that establishing the true bearings of landmarks is formally the responsibility of the bearing takers.

In contrast, in the case of designing future systems, or designing upgrades to existing systems, robust work practices will not already have evolved, at least not for any new or revolutionary components. Instead, robust work practices must be envisaged and created by designers. For example, in relation to the future maritime surveillance system, all of the six workstation actors are afforded behaviors for navigation, such that they could assist the flight deck actors with these activities. However, although such capabilities may be provided by the software at all of the workstations by default, as it is inexpensive to do so, and the crew may take advantage of these opportunities, these possibilities must still be accounted for in the team, interface, and training design of the system if adaptation is to be supported effectively. By deliberately seeking to enable adaptation within a set of boundary conditions on safe and productive performance, the diagram provides a means for systematically designing for constrained flexibility in the workplace.

DISCUSSION

This chapter has considered the nature of distributed cognition in self-organizing teams. It has been observed that, while the theory of distributed cognition (Hollan et al., 2000; Hutchins, 1995; Norman, 1991) has led to significant advancements in the understanding of computational work processes in sociotechnical systems (e.g., Furniss et al., 2015; Sellberg & Lindblom, 2014; Stanton, 2014), it is important to appreciate that these processes are not only distributed, but are also self-organizing. Given the system's intrinsic constraints, or affordances, new computational structures may emerge from actors' spontaneous behaviors, and constrain and enable their behaviors, in a way that is fitted to the local conditions. This phenomenon is essential to the system's viability under conditions of instability, uncertainty, and unpredictability. Therefore, the concept of self-organization in sociotechnical workplaces was put forward in this chapter as a framework for characterizing the self-organizing behavior of distributed cognitive systems.

This chapter has also proposed an approach for modeling and designing distributed cognitive systems. Specifically, it has been suggested that the diagram of work organization possibilities (Naikar & Elix, 2016), a tool recently developed as an addition to CWA, is suitable for this purpose. This tool, which has a formative orientation, is aligned with the phenomenon of self-organization in sociotechnical workplaces (Naikar & Elix, 2019; Naikar et al., 2017). Therefore, designs based on this framework have the potential to support the self-organizing behavior of distributed cognitive systems.

It should be acknowledged explicitly that Hutchins's (1990, 1991, 1995) work does not disregard the importance of human adaptation for successful performance in complex environments. His studies of the navigation task on naval vessels report a number of incidents in which adaptations are not only described but are regarded as important for system robustness. However, the theory of distributed cognition naturally places greater emphasis on the distributed nature of computational work processes, rather than on their self-organizing characteristics. Accordingly, the methods that Hutchins (1995) utilizes for mapping distributions of computational processes across actors and artifacts, namely the activity score, cannot readily accommodate the self-organizing behaviors of distributed cognitive systems.

Future research on distributed cognition should focus on elaborating its self-organizing properties, so that more robust accounts of how complex cognitive work is accomplished in sociotechnical workplaces can be developed. For example, research that examines the sociocognitive mechanisms by which self-organization occurs in distributed cognitive systems would be beneficial. In addition, future research should be concerned with methods for modeling and designing distributed cognitive systems. The diagram of work organization possibilities, which is aligned with the phenomenon of self-organization in sociotechnical systems, provides a potential starting point. However, further to some initial studies (Elix & Naikar, 2019; Naikar & Elix, 2016), other proof-of-concept demonstrations and analytical and empirical validation of the diagram, whether in laboratory, field, or industrial settings, are needed.

In conclusion, this chapter has examined some of the characteristics of self-organizing behaviors in distributed cognitive systems. It has been suggested that cognitive theories and methods used in the analysis and design of sociotechnical workplaces must be able to account for these behaviors. By developing a more complete account of how cognitive work is actually accomplished in complex settings, designs may be created that better support actors in dealing with instability, uncertainty, and unpredictability, and therefore in surviving in the wild.

ACKNOWLEDGMENTS

I am grateful to Mike McNeese and Mica R. Endsley for their thoughtful comments on a draft of this chapter, which were very helpful to me in revising it for publication.

REFERENCES

Bigley, G. A., & Roberts, K. H. (2001). The incident command system: High-reliability organizing for complex and volatile task environments. Academy of Management Journal, 44(6), 1281-1299.

Bogdanovic, J., Perry, J., Guggenheim, M., & Manser, T. (2015). Adaptive coordination in surgical teams: An interview study. BMC Health Services Research, 15, 128-139.

Director General. (2015). The Fukushima Daiichi accident. Vienna, Austria: International Atomic Energy Agency.

Elix, B., & Naikar, N. (2020). Designing for adaptation in workers' individual behaviors and collective structures with cognitive work analysis: Case study of the diagram of work organization possibilities. Human Factors. https://doi.org/10.1177/0018720819893510

Furniss, D., Masci, P., Curzon, P., Mayer, A., & Blandford, A. (2015). Exploring medical device design and use through layers of Distributed Cognition: How a glucometer is coupled with its context. Journal of Biomedical Informatics, 53, 330-341.

Haken, H. (1988). Information and self-organization: A macroscopic approach to complex systems. Berlin, Heidelberg, New York: Springer.

Heath, C., & Luff, P. (1998). Convergent activities: Line control and passenger information on the London underground. In Y. Engestrom & D. Middleton (Eds.), Cognition and communication at work (pp. 96-129). Cambridge: Cambridge University Press.

Herzog, S. (2011). Revisiting the Estonian cyber attacks: Digital threats and multinational responses. Journal of Strategic Security, 4(2), 49-60.

Heylighen, F. (2001). The science of self-organization and adaptivity. The Encyclopedia of Life Support Systems, 5(3), 253-280.

Hoffman, R. R., & Woods, D. D. (2011). Beyond Simon's slice: Five fundamental trade-offs that bound the performance of macrocognitive work systems. IEEE Intelligent Systems, 26(6), 67-71.

Hofkirchner, W. (1998). Emergence and the logic of explanation: An argument for the unity of science. Acta Polytechnica Scandinavica: Mathematics, Computing and Management in Engineering Series, 91, 23-30.

Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.

Hoppe, J., & Popham, P. (2007). Complete failure of spinal anaesthesia in obstetrics. International Journal of Obstetric Anesthesia, 16(3), 250-255.

Hutchins, E. L. (1990). The technology of team navigation. In J. Galagher, R. E. Kraut, & C. Egido (Eds.), Intellectual teamwork: Social and technological foundations of cooperative work (pp. 191-221). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hutchins, E. L. (1991). Organizing work by adaptation. Organization Science, 2(1), 14-39.

Hutchins, E. L. (1995). Cognition in the wild. Cambridge, MA: The MIT Press.

Hutchins, E. L. (1999). Cognitive artifacts. In R. A. Wilson & F. C. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 126-128). Cambridge, MA: The MIT Press.

Hutchins, E. L., & Klausen, T. (1998). Distributed cognition in an airline cockpit. In Y. Engestrom & D. Middleton (Eds.), Cognition and communication at work (pp. 15-34). Cambridge: Cambridge University Press.

Klein, G., Wiggins, S., & Deal, S. (2008). Cognitive systems engineering: The hype and the hope. IT systems perspective, March, 95-97.

Klein, K. J., Ziegert, J. C., Knight, A. P., & Xiao, Y. (2006). Dynamic delegation: Shared, hierarchical, and deindividualized leadership in extreme action teams. Administrative Science Quarterly, 51, 590-621.

La Porte, T. R. (1996). High reliability organizations: Unlikely, demanding and at risk. Journal of Contingencies and Crisis Management, 4(2), 60-71.

Leveson, N. G. (1995). Safeware: System safety and computers. Reading, MA: Addison-Wesley.

Linde, C. (1988). Who’s in charge here? Cooperative work and authority negotiation in police helicopter missions. In Proceedings of the second annual ACM conference on computer supported collaborative work (pp. 52-64). Portland, Oregon: ACM Press.

Luff, P., & Heath, C. (2000). The collaborative production of computer commands in command and control. International Journal of Human-Computer Studies, 52, 669-699.

Lundberg, J., & Rankin, A. (2014). Resilience and vulnerability of small flexible crisis response teams: Implications for training and preparation. Cognition, Technology and Work, 16, 143-155.

Naikar, N. (2013). Work domain analysis: Concepts, guidelines, and cases. Boca Raton, FL: CRC Press.

Naikar, N. (2018). Human-automation interaction in self-organizing sociotechnical systems. Journal of Cognitive Engineering and Decision Making, 12(1), 62-66. https://doi.org/10.1177/1555343417731223

Naikar, N., & Brady, A. (2019). Cognitive systems engineering: Expertise in sociotechnical systems. In P. Ward, J. M. Schraagen, J. Gore, & E. Roth (Eds.), The Oxford handbook of expertise: Research & application. Oxford: Oxford University Press.

Naikar, N., & Elix, B. (2016). Integrated system design: Promoting the capacity of sociotechnical systems for adaptation through extensions of cognitive work analysis. Frontiers in Psychology, 7(962), 44-64. http://journal.frontiersin.org/article/10.3389/fpsyg.2016.00962/full

Naikar, N., & Elix, B. (2019). Designing for self-organisation in sociotechnical systems: Resilience engineering, cognitive work analysis, and the diagram of work organisation possibilities. Cognition, Technology & Work. https://doi.org/10.1007/s10111-019-00595-y

Naikar, N., Elix, B., Dagge, C., & Caldwell, T. (2017). Designing for self-organisation with the diagram of work organisation possibilities. In J. Gore & P. Ward (Eds.), Proceedings of the 13th international conference on naturalistic decision making (pp. 159-166). Bath: University of Bath. Retrieved from www.eventsforce.net/uob/media/uploaded/EVUOB/event_2/GoreWard_NDM13Proceedings_2017.pdf

National Transportation Safety Board. (2019). Assumptions used in the safety assessment process and the effects of multiple alerts and indications on pilot performance. Washington, DC: National Transportation Safety Board.

Norman, D. A. (1991). Cognitive artifacts. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 17-38). Cambridge: Cambridge University Press.

Park, Y., Hong, P., & Roh, J. J. (2013). Supply chain lessons from the catastrophic natural disaster in Japan. Business Horizons, 56(1), 75-85.

Perrow, C. (1984). Normal accidents: Living with high risk technologies. New York, NY: Basic Books.

Rasmussen, J. (1969). Man-machine communication in the light of accident records (Report No. S-1-69). Roskilde, Denmark: Danish Atomic Energy Commission, Research Establishment Risø.

Rasmussen, J. (1997). Merging paradigms: Decision making, management, and cognitive control. In R. Flin, E. Salas, M. Strub, & L. Martin (Eds.), Decision making under stress: Emerging themes and applications (pp. 67-81). Aldershot: Ashgate.

Rasmussen, J., Pejtersen, A. M., & Goodstein, L. P. (1994). Cognitive systems engineering. New York, NY: John Wiley & Sons.

Reason, J. (1990). Human error. Cambridge: Cambridge University Press.

Reich, P. C., Weinstein, S., Wild, C., & Cabanlong, A. S. (2010). Cyber warfare: A review of theories, law, policies, actual incidents—and the dilemma of anonymity. European Journal of Law and Technology, 1(2), 1-58.

Rochlin, G. I. (1993). Defining "high reliability" organisations in practice: A taxonomic prologue. In K. Roberts (Ed.), New challenges to understanding organizations. New York: Macmillan.

Rochlin, G. I., La Porte, T. R., & Roberts, K. H. (1987). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review, 40(4), 76-90.

Sellberg, C., & Lindblom, J. (2014). Comparing methods for workplace studies: A theoretical and empirical analysis. Cognition, Technology & Work, 16, 467-486.

Stanton, N. A. (2014). Representing distributed cognition in complex systems: How a submarine returns to periscope depth. Ergonomics, 57(3), 403-418.

Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive, and healthy computer-based work. Mahwah, NJ: Lawrence Erlbaum Associates.

Weick, K., & Sutcliffe, K. M. (2001). Managing the unexpected: Assuring high performance in an age of complexity. San Francisco, CA: Jossey Bass.

Woods, D. D. (1988). Coping with complexity: The psychology of human behaviour in complex systems. In L. P. Goodstein, H. B. Andersen, & S. E. Olsen (Eds.), Tasks, errors, and mental models: A festschrift to celebrate the 60th birthday of professor Jens Rasmussen
