Reflections on Team Simulations—Part I: Historical Precedence

Michael D. McNeese, Nathaniel J. McNeese, Lisa A. Delise, Joan R. Rentsch, and Clifford E. Brown

CONTENTS

Introduction

Use of Constructivism and Situated Learning

History of Teamwork Research at Wright-Patterson AFB, Ohio

Necessity of Teams for Military Operations

Early Beginnings in the 1960s and 1970s at USAF Aerospace Medical Research Laboratory

Research in Group Decision Making/Team Problem Solving: Simulations in the 1980s/1990s

C3 Operator Performance Engineering (COPE) Program

C3 Interactive Task for Identifying Emerging Situations (CITIES)

Jasper/Repsaj

Summary and Concluding Remarks

Notes

References

INTRODUCTION

The research landscape to study and understand teamwork and team cognition has been complex and has evolved through various eras of influence—often determined by the Zeitgeist of the times (e.g., orientations may reflect behavioral, cognitive, and ecological values of the researcher conducting the studies). In particular, the study of team cognition has become increasingly focused on distributed aspects of teams related to information, technology, people, place, and environment. In recent years there has been a blurring between the research areas of team cognition (Cooke, Gorman, Duran, & Taylor, 2007; Salas, Fiore, & Letsky, 2012) and distributed cognition (Hollan, Hutchins, & Kirsh, 2000; Nardi & Miller, 1991), therein the raison d’être for this handbook. The handbook, including this chapter, illustrates how these areas meld together into the integrated research niche of distributed team cognition. We have developed two interrelated chapters (Chapter 2 and Chapter 3) for the purpose of examining a longitudinal perspective and reviewing distributed team cognition research in which the authors have been intimately involved, in collaboration with the senior author, over the last 35 years. Both chapters examine theoretical foundations, methodological tools, collaborative technologies, and pertinent measures that afford new levels of understanding, insight, and advancement. This chapter develops a historical precedent that lays (1) the foundation for development and (2) the conceptual underpinnings of interdisciplinary research in distributed team cognition, whereas the next chapter focuses more on a contemporary progression of research and practice.

These chapters highlight how teams and team cognition research have evolved through the use of simulations and how simulations should reflect the real-world environment they represent. In doing so we explore frequently researched topics and report what is known about these topics while reviewing lessons learned in utilizing simulation and modeling within a broader context. We also present selected preeminent simulations that have been used with much success.

As the research trajectory of team cognition is examined through various lenses associated with team simulation theory and practice, general issues may become evident as one reads through the text. There certainly are different ways to conceptualize what any team simulation is about and how it might be designed to accomplish the researcher’s goals. One way to conceptualize a team simulation corresponds to Jens Rasmussen’s ideas (Rasmussen, Pejtersen, & Goodstein, 1994) inherent in the abstraction hierarchy (AH). In consideration of any given team simulation it is useful to delineate the following:

  • What is its purpose and how is this meaningful?
  • What are some of the values that are salient?
  • What abstract functions are required?
  • What are the constraints and measures associated with these?
  • What measurements are imminent—what measurements may be hidden?
  • What priorities are suggested?
  • What general functions will be needed to make it work?
  • What functions will be allocated to humans, machines, and/or agents?
  • What physical forms may be taken?

As scientists address these questions a simulation can be conceptualized from abstract to concrete levels of specificity, but simultaneously can be decomposed from global to local spaces. One moves through the hierarchy logically in terms of the why-what-how nature of a complex system, wherein the hierarchy comprises multiple interrelated means-ends relationships. That is, the bottommost levels are the means to accomplish the levels above them (their ends), which then successively become the means to accomplish the next level up, and so on. When a designer or scientist builds a team simulation it may be constructed conceptually to answer these kinds of questions to comprehensively consider multiple, interrelated facets of the design. The AH provides a socio-technical perspective for understanding complex systems such as simulations, and hence adds value by thoroughly considering design components when the phenomena under question contain social and technical nuances. This philosophy elevates the ecological necessity underlying design, simulation, and human activities (ecological psychology; see Neisser, 1976).
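To make this means-ends structure concrete, the sketch below encodes the five AH levels for a hypothetical team simulation. It is our illustration only; the level names follow Rasmussen, Pejtersen, and Goodstein (1994), while every entry is an invented example rather than prescribed content.

    from collections import OrderedDict

    # Levels run top (why) to bottom (how); entries are hypothetical examples.
    abstraction_hierarchy = OrderedDict([
        ("functional purpose",   ["study team decision making under time stress"]),
        ("abstract function",    ["information flow constraints", "resource limits"]),
        ("generalized function", ["task allocation", "communication", "scoring"]),
        ("physical function",    ["task displays", "control units", "data logging"]),
        ("physical form",        ["consoles", "large group display", "network"]),
    ])

    def means_for(level):
        """Return the level below, which serves as the means to this level (its end)."""
        levels = list(abstraction_hierarchy)
        i = levels.index(level)
        return levels[i + 1] if i + 1 < len(levels) else None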

Although there are many insights gained with the use of the abstraction hierarchy in conceptually designing a given team simulation, our approach utilizes a more open-ended but related prescription based on a series of questions that help define conceptualization. Our experience shows the following questions are important to consider:

  1. Why is the simulation designed to begin with? What purpose does it serve?
  2. What are the salient demands inherent within the context for the simulation? How could they impact human/team performance?
  3. How does the simulation facilitate team cognition? What architecture and processes are utilized?
  4. What are some important studies or cases that were produced while using the simulation?
  5. What was learned by using the simulation? What did it produce?

Because there is a diverse population of team simulations available for specific research intentions, the focus of this chapter is from the perspective of (1) understanding cognitive processes in teamwork and how they impact outcomes, (2) extending human capabilities within teams through the use of support technologies that enhance teamwork, team performance, and communication, and (3) examining the ecological validity of a simulation within a given field of practice.

Use of Constructivism and Situated Learning

One tenet taken to assess and articulate state-of-the-art practice is the idea of constructivism. Constructivist philosophy implies that people build up knowledge and discover new ideas from one state of experience to another (based on what they already know). Inherent in this view is that thinking and learning are coupled to the real-life contexts in which they occur—situated learning (Bransford, Sherwood, Hasselbring, Kinzer, & Williams, 1990). Hence the notion that knowledge is constructed in and through the context of a specific field of practice. Oftentimes constructivist theory is connected to ideas of distributed intelligence (Pea, 1993). Activities in problem solving require intelligence that is distributed across information, people, media, and the environment. Furthermore, constructivist philosophy often is embedded in joint social activities that provide opportunities for new learning through stages of knowledge development (Vygotsky, 1978). More broadly stated, situated cognition pertains to the perspective that people’s knowledge is immersive and embedded in the activity, history, context, and culture where it was learned (Brown, Collins, & Duguid, 1989). As applied within the parameters of this chapter, an initiation point of our work with early team simulations is offered. Overall, the goal is to facilitate conceptualization by using embedded concrete examples and activities and at the same time show evolution within the research that is useful for other researchers involved with the design and utilization of team simulations.

Therein, the chapter develops by capitalizing on work from our own distributed team cognition history and the development of simulations appropriate to the needs at the time. Although the point is not to provide an advertisement for our own simulations, this history affords fertile ground for discussing research foci, foundations, relevant issues, lessons learned, and in situ research approaches and measures. It especially offers an overall anchor point for exploring many elements of distributed team cognition in early work and provides a bridge to the follow-on chapter, which in turn is more relevant to the incorporation and integration of technology, computing, and decision-making agents and aids within a simulation. Our development of simulations is indicative of the proactive use of user-centered technological innovation and design. These chapters are presented with the bias to demonstrate the symbiotic (and interdisciplinary) relationship among information, technology, people, and context. In summary, one of the main objectives is to review and show how collaborative-based simulation technologies can incorporate meaningful experiences of distributed team cognition.

HISTORY OF TEAMWORK RESEARCH AT WRIGHT-PATTERSON AFB, OHIO

It may be the case that current researchers are biased towards using literature and methods that have been published in the last five years—the recency effect. Although this is effective for having the most up-to-date findings and advancing the most precise methodologies, there is also value in knowing where research has come from (historical precedent) and why it has evolved the way it has (reasoned explanation). Our belief is that there is much value in history and that constructivist approaches utilize history to understand current context of use.

Necessity of Teams for Military Operations

Military engagement has historically involved warfighting with adversaries to obtain intentional outcomes for the purpose of securing freedom. Through the use of organizations, information, intelligence, technologies, and people, missions have been carried out with both success and failure. More contemporary military objectives include non-warfighting activities (e.g., peacekeeping operations, medical support in various endeavors, and regime restoration) and the necessary training. This being said, eventually the pathway of current practice is often traceable to objectives that involve warfare, secure operations, fighting terroristic entities, and protecting national interests and resources.

People working together effectively and efficiently as a team lies at the heart of successful operations; however this does not occur automatically. In fact, it can be challenging to take a group of people and transform them into a team that successfully accomplishes a goal with interdependent relationships. Teams formulate a core system of values that allow them to (1) strategically pull together and allocate their joint resources and (2) collectively induct their respective knowledge to perform a mission. Likewise, many operations require members to be part of different teams simultaneously and function as a “team of teams.” Hence, one sees that command, control, communications, and intelligence frequently form a core set of capabilities that enable teamwork to succeed. Also, teamwork does not just happen but requires mutual cooperative learning where knowledge, skills, and abilities are acquired to perform essential tasks within a mission (Johnson & Johnson, 1994; McNeese, 1992). James Clapper, former Director of National Intelligence of the United States, mentions in his new memoir (Clapper, 2018),

I’d been in the SIGINT world and around the rest of the community long enough to realize that each agency needed to embrace its own culture, traditions, and capabilities. After honoring that we could inspire them to cooperate to take advantage of complementary strengths . . . bringing together different perspectives and experiences enabled us to formulate a range of different options for action . . . the old saying “the sum is greater than the parts” has profound meaning.

(p. 102)

Inherently, the idea of teamwork contains the theme that the cognition of team members can be integrated—to be roughly on the same page—to accomplish mutual goals while yet retaining the stability of their individual performance, skills, and knowledge. When knowledge, awareness, and perspectives are disparate it is much more difficult to formulate teamwork and thereby establish correct action given the situation at hand. Clapper (2018, p. 4) refers to the creation of a new field, geospatial intelligence, as requiring the functional synthesis of mappers and image analysts, whereupon “one of our big goals was to get people with different skill sets to physically and functionally work together.” In today’s world, distributed work is often the norm and working together has many challenges at a distance. Many of the simulations involving team cognition therein require specification of distributed cognition—not just team cognition at collocated facilities. Distributed cognition has become possible and even an everyday occurrence. Texting on a smartphone sent to multiple parties (with videos or pictures embedded) is a very basic example of distributed team cognition. More expansive opportunities exist through the power of online collaboration spaces (that contain workflow management, discussion boards, group posts, or feeds), smartphones with various applications such as Facebook, YouTube, Google Docs, and specific collaborative software platforms designed solely to support cooperative work whether it includes video teleconferencing or group chat capability (e.g., the applications Zoom or Slack).

Within military teamwork, the role of technology is often prevalent as new technological innovations are designed and tested with real situational urgency (e.g., distributed crews working with autonomous vehicles can be used for land, space, ground, and underwater missions that typically involve surveillance, reconnaissance, weaponization involving precision strikes, and intelligence gathering). As is often the case, technology-centered solutions may occur simply because they could be designed, and in turn fail to properly serve the human, usually resulting in errors or misuse. When teams and the individuals who compose teams cannot use technologies because they fail to be user-centered in their conception and design, then failure is imminent. Oftentimes it is catastrophic failure resulting in loss of life and/or significant material goods. Therein, one of the primary reasons to begin studying teams was to eliminate errors in practice, to help facilitate cooperative learning, and to ensure technology fits the needs and capabilities of the humans involved (human factors engineering; Meister, 1999).

As one begins to study teamwork there are six major fundamental research components to consider, which have led to incisive research in both quantitative and qualitative spectrums. They are important to facilitate proper functioning and integration of teammates as they work towards mutual, interdependent activity given their select roles in the team:

  1. Coordination
  2. Communication
  3. Collaboration
  4. Cooperation
  5. Awareness
  6. Context/culture

These components should be taken under proper consideration to encourage and support a mutual work objective, improve decision making in cooperative work that makes a real difference, and properly design user-centered interface technologies that are successful in practice. As an example of these elements applied individually as well as collectively the reader is referred to Letsky, Warner, Fiore, and Smith (2008) and their work on macrocognition.

As history would have it, different eras of research produce related but different points of view and may even ignore other areas simply as a matter of what was salient at the time. The knowledge and culture that has informed much of the senior author’s work was inspired through his work with the U.S. Air Force. In turn, this has provided a specific time and place for what was learned and practiced in team simulations and how it has evolved across the decades. A foundational conceptual question that underlies much of this research is: What is the purpose of a team simulation?

Early Beginnings in the 1960s and 1970s at USAF Aerospace Medical Research Laboratory

In order to obtain a sense of historical development of team simulations it is instructive to look at the approach, context, and outcomes that were operative and in vogue within a given phase. Therein, this section utilizes select themes to capture constructivist foundations of simulation. The senior author began work in the summer of 1977 at Wright-Patterson Air Force Base early in his career in the role of a designer. His first introduction to teams was that of participating on engineering design teams1 that developed integrative systems avionics products within the Aeronautical Systems Division. Typically, these teams were composed of engineers, designers, draftspersons, technicians, and business managers. In the 1980s, he began a civil servant career applying human factors engineering to aviation systems, work that shifted from applied design and evaluation of real fielded products to more applied research in human factors. By way of history then, this research turned towards the path of teamwork and how teams could enhance human-system performance. As mentioned, the military used teams to produce gains in operational advantage in warfare, benefits in training, and collective insight beyond what an individual could muster alone. In turn, much of the early work enabled a lifetime of research that began with practical problems found within the context of the U.S. Air Force, in particular at the Fitts Human Engineering Division, Aerospace Medical Research Laboratory, Wright-Patterson AFB, where he worked as an engineering research psychologist beginning in the mid-1980s. However, teamwork and the value of team performance were actually studied much earlier than this at the USAF Aerospace Medical Research Laboratory during the 1960s (and probably prior to this, although the history of this chapter will only span back to the 1960s).

Through the early work of Paul M. Fitts and his colleagues (e.g., Dr. Walter Grether and Dr. Mel Warrick),2 human factors research took root specifically at Wright-Patterson Air Force Base in the Aero Medical Laboratory. This occurred within the crossroads of military psychology practice (from the 1940s and before, see Alluisi, 1994), aviation psychology, and the subsequent emergence of the field of engineering psychology, as military operations required proper training and selection, avoidance of human error, and peak human performance (Grether, 1995). In these early days, human factors primarily focused on individual performance, but not completely. The earliest team-based work in the 1960s could be thought of as an extension of experimental psychology (see Chapanis, Garner, & Morgan, 1949) applied to group settings such as air crews (see Williges, Johnston, & Briggs, 1966 for exemplary work on this) prior to the introduction of contemporary cognition studies per se. Hence, as examples, some early ideas of teaming focused on how social theories might impact structural relationships in a team (Morrissette, 1958), how team size and communication affect decision making (Kincade & Kidd, 1958), and how confinement and sustained operations impact group function (Alluisi, Chiles, Hall, & Hawkes, 1963). Even at this early juncture, there was a recognition and opportunity to address human problems with (1) theories associated with social behavior and (2) the discipline of human factors engineering (see McCormick, 1957; Meister, 1999), hence focusing on individual-to-team performance (e.g., Lorge, Fox, Davitz, & Brenner, 1958). The signal of human factors involvement suggests that work at this time touched upon technological support, albeit in more primitive formulations related to equipment design.

These early simulations were more primitive and patterned after the typical toy tasks used in experimental psychology, but generalized to team-level functions. Yet the research emphasized important considerations that remain salient for contemporary researchers engaged with the design and use of team simulations, such as ecological validity, fidelity of the task simulation, level and degree of training, reliability and validity of team measures, the structure of a team, and apropos statistical analyses. Although cognition was not a foreign concept (see “cognitive dissonance,” Festinger, 1962), the way it was framed prior to the cognitive revolution in psychology (see Gardner, 1985) was coupled to things like sensory-perception integration, decision making, and the level of demands inherent within a job (i.e., workload, see Kidd, 1961). Likewise, these studies were conducted prior to a strong and relevant focus on team cognition, which would come later. These early attempts to garner the power and utility of team performance for military teams laid a solid foundation for work that would emerge in the 1980s.

RESEARCH IN GROUP DECISION MAKING/TEAM PROBLEM SOLVING: SIMULATIONS IN THE 1980S/1990S

Continuing at Wright-Patterson AFB, the senior author transferred into the Aerospace Medical Research Laboratory/Human Engineering Division in 1984. The focus evolved from the early days of human factors, and the effectiveness and efficiency of teamwork was garnering support as a major research topic of interest for the USAF and other branches of service. Three primary reasons for the increasing importance of teamwork were (1) command posts in real warfare operations, (2) command, control, communications, and intelligence (C3I) operations, and (3) air crew performance and associated team training. The first author transferred into the Crew Systems Branch, whose purpose was to examine crew systems performance while readily identifying support technologies that would enhance warfighter capabilities, focusing more on areas 1 and 2 above rather than area 3.

It is worth noting that the context at this point in time drove the development of team performance out of necessity, as teams were required to assess, process, interpret, and act on different kinds of information and intelligence. A popular early model used to capture this was Joe Wohl’s SHOR (Stimulus-Hypothesis-Option-Response) model (Wohl, 1981). Although context drove the need for understanding teamwork and team performance, most approaches were still heavily coupled to quantitative, experimental approaches to understanding behavior. One of the most difficult problems to overcome was a bias towards technology-centric designs to support teamwork (the “build it and they will come” mentality), which elevated the idea that just because something could be produced it would inherently be good. The fallacy of this logic is that technology was produced but it created human errors and failure, as it was designed devoid of understanding human constraints and capabilities. Therein, human factors3 had a real calling to improve designs for individual and team performance. And herein was where the role and importance of team simulation came to pass. Team simulations could actively identify individual and team errors for tasks of varying degrees of difficulty and pinpoint where failure would be most inclined to happen, while identifying potential causes. Hence the value of team simulations began to be accepted.

During the 1970s the cognitive revolution took hold with much zeal, and there was much more focus on how humans used their abilities such as attention, perception, memory, language, judgment and decision making, problem solving, reasoning, and learning (Gardner, 1985; Neisser, 1967, 1976). While cognition was coming into its own, the approach primarily utilized the same experimental psychology paradigm, and the essential elements mentioned in the last section were still relevant for designing and implementing a sound experiment. Teamwork and team performance studies started looking at cognitive activities that coupled the individual with teamwork. Therein, team simulation was often at the heart of these studies to enable controlled experimentation with high reliability and validity, under precise conditions, where multiple measurements could be acquired of participants. One major difference owing to the advancement of both software and hardware technologies (i.e., computing) was that the tasks an experimenter used were not just toy tasks but more along the lines of what has been termed a synthetic task environment (Martin, Lyon, & Schreiber, 1998). These new computer-enabled environments provided significant power increases in display, control, precision, measurement, and flexibility while abstracting out important elements from the real-world context (Cooke, Rivera, Shope, & Caukwell, 1999). Moreover, they also enabled the incorporation, testing, and evaluation of collaborative technologies one might be designing and building (e.g., a large group display). Next, we examine some of these simulations that were important for our research and development.

C3 Operator Performance Engineering (COPE) Program

The COPE program at the Fitts Human Engineering Division (see McNeese & Brown, 1986, for representative work applicable to military command, control, and communications (C3)) provided the first real opportunity for research engagement with teams, especially given how team performance could be improved through types of collaborative technology and with the application of human factors engineering. This particular program allowed connection with a U.S. working group called the DAWG (Decision Aiding Working Group). This is important as it underlines how the focus on decision making from earlier work continued as a major theme and led to the development of team cognition as an important concept in current research. The time period representative of the COPE work is approximately 1983-1989. This work approached understanding of team interaction from a human information processing perspective (Lindsay & Norman, 1977) while still preserving the experimental psychology perspective. This is still an acceptable viewpoint in the teamwork literature (Hinsz, Tindale, & Vollrath, 1997; Mesmer-Magnus & DeChurch, 2009); however, it would prove to have some limitations, as will be pointed out in the next chapter.

As the COPE research group considered the contexts within C3 where cognition was pertinent for successful teamwork, it was necessary to conceptualize (1) the research purpose of a given team simulation, (2) the kinds of cognitive teamwork required and hence the specific demands inherent within the task simulation to be authentic to specific research needs, and (3) the kinds of technologies that might support cognitive teamwork. One of the pertinent frameworks that has allowed development of different simulations relevant to points 1, 2, and 3 is McGrath’s Group Circumplex (1984). The circumplex breaks down task demands into specific categories as related to how a team performs. In particular, the vertical dimension of the framework looks at whether the teamwork is representative of collaboration, coordination, or conflict resolution, whereas the horizontal dimension portrays activities as either cognitive or behavioral. These crossings then result in eight different task demands. Simulations therein should represent a given kind of task demand that will help accomplish research objectives. This relates to what we mentioned in the introduction about how a researcher needs to spend copious amounts of time focused on the conceptualization of the team simulation that is appropriate to their needs. As team simulations are reviewed—emergent from the COPE program and beyond—answers to specific questions of conceptualization can be presented as apropos. As a guide, almost all of our research tasks are related to decision making or problem solving, therein falling within the quadrants of “generate” and “choose.” Breaking that down further, the tasks designed within our simulations fell into the regions of “intellective” or “creativity” or “planning” tasks.
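For reference, the circumplex can be summarized compactly. The quadrant and task-type labels below follow McGrath (1984); the dictionary arrangement itself is simply our sketch.

    # McGrath's (1984) group task circumplex: four processes (quadrants),
    # each with two task types.
    circumplex = {
        "generate":  ["planning tasks", "creativity tasks"],
        "choose":    ["intellective tasks", "decision-making tasks"],
        "negotiate": ["cognitive-conflict tasks", "mixed-motive tasks"],
        "execute":   ["contests/battles", "performances/psychomotor tasks"],
    }
    # Our simulations' tasks fall mainly under "generate" and "choose".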

During the late 1970s into the 1980s there was a distinctive shift in experimental psychology towards utilizing cognitive constructs as a means of understanding human behavior (the cognitive revolution). This perspective also applied to understanding team behavior and whether teams were effective or not. This constituted the human information processing perspective, but it also included classical decision-making approaches such as the judgment and decision-making perspective (Baron, 2004; Tversky & Kahneman, 1974), where models portrayed optimal decisions based on laws of probability. Some of the work related to heuristics and biases eventually led in part to a new approach termed naturalistic decision making, which demonstrated that actual decision makers in real environments do not often utilize optimal strategies but construct naturalistic ones (Klein, Orasanu, Calderwood, & Zsambok, 1993). Many of the topics of interest to the DAWG included looking at teamwork in terms of biases, belief functions, probabilistic reasoning, heuristics, and judgment. Consequently, the development of systems and technologies to support teamwork was predicated on these kinds of perspectives coming into play, especially when complexity existed. An early example of this genre of research, conducted early in the COPE program, looked at how decision making could be enhanced through the use of decision aids (Eimer, 1987). Dr. Erhard Eimer was an experimental psychologist and professor at Wittenberg University who was on sabbatical at Wright-Patterson Air Force Base. Dr. Eimer provided a niche for this kind of quantitative work in the study of team decision making from the tradition of Kahneman, Tversky, and others, therein laying some of the groundwork that helped establish this area in the laboratory.

The earliest seminal paper (McNeese & Brown, 1986) that the senior author produced while in the COPE program focused on the use of large group display technologies in terms of how they affected team performance. This DTIC technical report looked at guidelines, research, and human factors considerations that would improve teamwork. The simulations developed under COPE were developed with the C3 context in mind. This period of growth shows a distinct transition from the earlier focus on equipment design for teams into collaborative support systems (aids, associates, interfaces) that assist team members along the cognitive dimension. Therein, the COPE simulations developed tasks with inherent demands that centered around a specific type of decision making.

A significant component of the COPE program worked directly with national-level command posts (e.g., North American Aerospace Defense Command—NORAD) for performance improvement vis-à-vis adoption of a new technology area—human-computer interaction (Myers, 1998). This was the beginning of qualitative work representative of naturalistic decision making that helped inform understanding of missions-scenarios-problems. Research findings showed evidence that team workers felt pressure with very demanding tasks that were ensconced within coordination, communication, collaboration, cooperation, awareness, and cultural imprints (Cannon-Bowers & Salas, 1998).

Next we examine specific team simulations developed under the COPE program, and what they provided in terms of building blocks for team cognition research today.

As previously mentioned, this is only one strand of team simulation work derived from the 1980s and 1990s that afforded investigation of team cognition. In order to compare/contrast any team simulation we propose using the conceptualization questions presented in the introduction to assess the distinctiveness, power, and viability of the simulation for research purposes.

The Team Resource Allocation Problem (TRAP) was the first team simulation (introduced by Brown & Leupp, 1985) produced for the COPE program. The design and utilization of TRAP was overseen by Dr. Clifford Brown, one of the authors of this chapter. Dr. Brown, following in the footsteps of Dr. Eimer from Wittenberg University, was a National Research Council Research Associate (twice) at Wright-Patterson Air Force Base, where he began work with the COPE program. During this time TRAP was designed, integrated into experimental requirements, and utilized for experimental research purposes. TRAP represented a highly quantitative approach to distributed team cognition. It was a mathematically based team decision-making task that was designed to be a generic testbed for studying issues of importance in actual command, control, and communication environments. It provided multiple objective measures of team performance that could be compared across experimental conditions and compared to models of effective team problem solving. Importantly, it could be sped up or slowed down to represent the effect of time stress without altering the problem or its corresponding measures of performance. Therefore, for example, the effects on team performance of variables such as graphic versus alphanumeric display of information under low or high time stress could be systematically investigated.

The research goals using the TRAP simulation were to explore the intersections of team information display, social/organizational psychology, and cognitive processes, and to explore useful support mechanisms with specific kinds of collaborative technologies. This interdisciplinary approach became the hallmark for most of the simulations our research group has undertaken. TRAP provided a simulation indicative of interdependent, dynamic decision making. The original TRAP was developed by upgrading a complex, individual decision-making problem/task (Pattipati, Kleinman, & Ephrath, 1983) to generate a group-level problem where cognition was shared across a small team. It comprised elements of intellective choice tasks and planning tasks from McGrath’s framework.

The context underlying the creation of TRAP focused on analytical reasoning, information sharing, and team collaboration within C3 domains. These are real-world factors that come into play in actual command post interactions. Team performance is accomplished dependent on how well these activities are done. Note that team performance is a joint product of individual tasks accomplished in addition to the team-level tasks that need to be processed. Part of the complexity involves figuring out how to work individually while also interacting with team-level demands, and what takes precedence or priority at any given point in time. Some of the major principles that contributed to the dynamics of TRAP were: (a) how difficult the demands were across a small team at any given juncture (individual and team workload, urgency); (b) awareness of other team members’ activities and whether they could contribute to a solution at a given point in time (information sharing and team situation awareness); (c) deriving team solution tactics with specific rules embedded in the game—with given input parameters under changing conditions (cognitive analytics and interdependency); and (d) comprehending how team members communicated and made decisions while being supported with collaborative technologies (e.g., this continued work comparing small and large group displays; see Wilson, McNeese, & Brown, 1987). TRAP represented an abstract formulation of team resource allocation within a typical C3 context as performed in a synthetic task environment (simulation). One of the benefits derived was that individual and team performance could both be evaluated at any point in time, as a team member could be working individually or as part of a team. Individual and team performance could be calculated (team performance scores were determined based on the processing of numerous tasks by the team and the point values derived). Although the context roughly approximated team cognition in a C3 domain, the task resonated more towards abstract planning and reasoning rather than a concrete, situated team-level task that replicated an actual C3 team task (i.e., realism). This enabled a broad framework for testing different theories and hypotheses relevant to the research base. This reified the experimental psychology/experimental design values of controlled, repeatable performance trials, training to criterion levels, precise measurements, and a focus on internal validity.

Inherently, this task required timely coordination and communication to enable optimal team performance. Interdependency—one of the important principles underlying team cognition and decision making—is a prominent part of the task structure. The simulation structure enabled abstract “processing of tasks” according to: (1) whether there is sufficient time to complete the task, (2) whether the required resources are available to “process” the task, and (3) whether the point value as derived from task characteristics (e.g., shape and color) optimizes team performance. Specific tasks are worth more than others according to a defined rule set (see Brown & Leupp, 1985 for specific rule structure). A team must work through the cognitive analytics necessary to produce the highest number of accumulated points based on a combination of processing opportunities. Because team members only have a set number of resources, they need to make timely and thoughtful decisions with teammates to accrue the most points. Also, note that a team member is required to stick with a given task for the duration of the task cycle required (e.g., 15 or 30 seconds) before starting on a new task. This requires communication, coordination, task-team member awareness, and recognition of best possible outcomes by all team members (team situation awareness).
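The rule structure just described can be made concrete with a minimal sketch. The fragment below is our illustration only: the durations and point values are invented, crediting points at task start is a simplification, and the actual rule set appears in Brown and Leupp (1985).

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        members_required: frozenset   # e.g., {"A"}, {"A", "B"}, or {"A", "B", "C"}
        duration_s: int               # task cycle length, e.g., 15 or 30 seconds
        points: int                   # value derived from task characteristics

    @dataclass
    class Team:
        busy_until: dict = field(default_factory=lambda: {"A": 0, "B": 0, "C": 0})
        score: int = 0

        def try_start(self, task, now):
            # All required members must be free; a member sticks with a task
            # for its full cycle before starting another.
            if all(self.busy_until[m] <= now for m in task.members_required):
                for m in task.members_required:
                    self.busy_until[m] = now + task.duration_s
                self.score += task.points   # simplification: credit at start
                return True
            return False

    team = Team()
    team.try_start(Task(frozenset({"A", "B"}), duration_s=30, points=8), now=0)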

TRAP employed a three-person team working at a console with a display that showed the opportunity window, i.e., the number of tasks available within a given period of time. As time progressed in a trial, different tasks (opportunities) became available for team members to process (some for individuals, some for dyads, and some for all three team members). The initial setup used computers to generate the tasks on the opportunity window for a given trial and to collect data from team members for that trial. The display provided all the information required for the team to work through the tasks. Each team member utilized a control unit (module) to move their cursor over a task; once all required team members’ cursors were over the task, any one of them could press their start button to begin processing it. While processing one task, team members could move their cursors to the next task they planned to process, but until the previous task was completed (or aborted) they were not able to start processing the next task. Therein, the simulation consisted of a computer system that provided the appropriate controls and displays for the team and recorded data as needed. Variations were made to the display units, as the information could be displayed on a team-level large group display and/or optionally on individual display units for each team member.

The value produced by processing various one-person, two-person, and three-person tasks dynamically changed across time, requiring adaptive and coordinated responses for the best overall performance solution. TRAP highlighted temporal dynamics, interdependence, and adaptive changes among team members.

One of the positive lessons learned was that designing a highly controlled team-level decision-making simulation with easily changed parameters yielded many advantages in terms of creating experimental designs and testing different hypotheses. Also, the ability to seamlessly integrate new technologies within the team simulation provided researchers with an initial ability to assess the value and impact of innovations and designs upon team performance. Another advantage was that the problem involved both individual and team performance, and the demand to switch from individual to team focus during the course of a session (which often replicates real-world demands in decision making and problem solving). The TRAP task generated a high level of interdependency among team members, which is important for simulations that hope to replicate teamwork with common goals and mutuality. Although the TRAP simulation provided a testbed for investigating variables of importance in the C3 domain, it was abstract and generic (an experimental strength), but without any real-world context. With the growing influence of naturalistic decision making and ecological validity, TRAP was seen by some as just a simple game. The decision making was well-defined once the decision rule set was mastered and if participants communicated at the proper times and in a clear way. This well-defined component of decision making may be an advantage, but many situated problems that occur in C3 are ill-defined and without objectively identifiable optimal solutions.

TRAP captured some of the dynamics and coordination issues that are important for team cognition, but it also had some challenges. TRAP emulated the cognitive psychology representative of the 1980s, but adapted it for understanding team performance. Given the requirements it was designed for, it provided a reliable team simulation that enabled performance with and without specific technologies, thereby enabling meaningful evaluations of their effects on teamwork.

Many useful results were produced by TRAP. The first experiments investigated (1) relative differences in team performance when using small versus large group displays, (2) whether differing formats of information display (graphical versus alphanumeric representation) provided to a team helped facilitate team performance, and (3) how information presentation rates (moderate or fast) impacted team performance. The results provided useful understanding in coupling technologies with information processing for individual and team performance. Performance was also assessed with respect to subjective workload measurements, showing how team information processing could impact one’s perception of workload in a cognitively engaging team task. Workload during the 1980s was a measure of prime interest in applied cognition and human factors, hence one of the reasons why it was employed as a dependent measure in some of the TRAP experiments. Additional research using TRAP (see McBride & Brown, 1989) demonstrated that providing decision heuristics enhanced team performance.

C3 Interactive Task for Identifying Emerging Situations (CITIES)

The second major team simulation developed under the COPE program was C3 Interactive Task for Identifying Emerging Situations (CITIES). Dr. A. Rodney Wellens was its creator and implementer (during his time as a senior scholar at the Air Force Office of Scientific Research at Wright-Patterson Air Force Base). Dr. Brown, Dr. Wellens, and the senior author of this chapter worked together in the COPE program in the 1980s to understand the intersection of team decision making and potential use of technologies to improve performance.

The limitations of TRAP provided inputs for the design of the next team simulation. The goal was to look at interdependent teamwork that emerged in a situated C3 context. In particular, the focus remained team decision making, but the purpose of the CITIES simulation was to make the context more prominent in terms of task content, materials, and interfaces that represented a real-world domain. This was directly in contrast to the abstract, context-independent nature of TRAP. CITIES nonetheless still enabled a controlled experimental task with a requisite team composite score that required individual and team performance, dependent upon what events transpired.

Additionally, the simulation was designed to test a specific hypothesis of team-technology interaction, termed psychological distancing (see Wellens, 1990), wherein different kinds of technological media facilitated degrees of psychological closeness (e.g., face-to-face communication provides a high degree of closeness whereas computer messaging does not). Originally, this construct was portrayed as a linear dimension and varied from face-to-face to two-way TV (to emulate video teleconferencing) to telephone to two-way messaging (to emulate computer communication). Experiments using CITIES could assess how technology impacted closeness and in turn how this might influence team performance. Furthermore, the initial experiments pioneered the use of expert systems as a means of aiding team decision makers, and additionally pioneered the use of “talking heads” to represent how an expert system could interact with a human, in contrast to the typical text-based verbal communication (Wellens, 1993). These technological innovations made the CITIES research very much ahead of its time.

One of the strategic moves with CITIES was to generate a realistic C3 context that could be used without concerns for classified materials. Because of this constraint it was decided that the simulation could utilize emergency crisis management, a context similar to C3 wherein teamwork and sharing of information for different situations/events could be easily utilized. The overall simulation, like TRAP, was a resource allocation task, but it was more realistic. Information regarding events that took place in a city was sent to two different dispatchers for processing: one dispatcher controlled police and towing resources while the other dispatcher controlled fire and rescue resources. One element of the context that was specifically emphasized and remains of interest was team situation awareness (SA). At the time, SA had been introduced as an individual cognition concept (Endsley, 1995) and had not been applied much to teamwork. Therein, the CITIES simulation was unique in that it looked at SA at the team level, with the purpose of seeing how team SA and performance might vary with the use of different technologies, defined by their relative degree of psychological closeness (Wellens, 1993).

CITIES was similar to the abstract TRAP task in that it was predicated on team resource allocation, required individual performance as well as group-level performance, required communication and interdependent decision making, included a built-in team performance score, and employed teams to process shared information resulting in decisions that had consequences on upcoming choices and outcomes. However, the simulation was entirely embedded in real-world situations that contained events that could reveal underlying scenarios and attributes. As events unfolded, greater SA across the team was possible, which theoretically provided a greater opportunity for understanding what was going on and in turn what was required for resource allocation. The task required the police/tow dispatcher and the fire/rescue dispatcher to allocate specific but limited resources to be applied to events that emerged as portrayed on a computer interface. Depending on the severity and type of event, different resources would need to be allocated. If appropriate resources were not allocated to an event in a timely manner, an event could grow worse over time (e.g., a fire could grow out of control, a traffic accident could create gridlock).
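This escalation dynamic can be sketched in a few lines. The severity rules and resource figures below are our invention for illustration and are not taken from Wellens (1990).

    # Hypothetical CITIES-style escalation: an under-resourced event grows
    # worse each time step; an adequately resourced one is contained.
    events = [
        {"type": "fire",     "severity": 2, "assigned": 0, "needed": 2},
        {"type": "accident", "severity": 1, "assigned": 1, "needed": 1},
    ]

    def tick(events):
        for e in events:
            if e["assigned"] < e["needed"]:
                e["severity"] += 1                         # e.g., the fire spreads
            else:
                e["severity"] = max(0, e["severity"] - 1)  # response contains it
        return events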

Team cognition therefore involved communication among the team members to coordinate resources allocated for specific events, to keep track of them, and to keep monitoring the situation for new events and what would be needed (anticipation). Team members also had to keep an eye on whether their own resources were nearing depletion. Obtaining team SA helped to understand the big picture and know in advance what the projected resource demand and allocation would entail. It was hypothesized that different kinds of technologies (basically looking at electronically mediated communication) would afford relative levels of psychological closeness and team SA, resulting in improvement or decline in team performance scores. In summary, this simulation focused on the team cognition constructs of team resource allocation-based decision making and team SA, but was designed to gain an understanding of whether technology would either support or detract from team performance, in contrast to face-to-face performance.

CITIES utilized a fairly basic computer hardware setup of two Apple II computers comprising two experimental rooms connected to each other through a control room (Wellens, 1990). The computers presented information at each team member’s workstation as the information flow dynamically changed and propagated various event streams. Standard programming was used, but it was somewhat ad hoc (i.e., the programming was specific to this simulation only) for many tasks. The architecture needed to simulate and introduce an electronic aid as a team member, wherein an expert rule-based system was produced to enable team members to work with a decision aid. Uniquely, the system employed a message transformer whereupon a message from the aid could be communicated to the other team members via text-based dialogue or through a talking head, which could be either a male or female representation. Furthermore, the computer systems underlying CITIES afforded great integration and depth of measurement. In addition to the composite team performance scores, automated communication measures were captured to obtain signal detection and duration to help understand how team communications were shared and in turn contributed to SA and performance. In addition to the communication measures, CITIES also integrated heart-rate monitoring as a physiological measure of workload. All in all, the CITIES simulation was ahead of its time, and the architecture utilized, while ad hoc in nature, was adept at accomplishing the research purposes at hand.
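The message-transformer idea can likewise be conveyed with a small sketch; the function, mode, and persona names here are hypothetical stand-ins, not the original Apple II implementation.

    # Hypothetical sketch: the same advisory from the rule-based aid can be
    # delivered as text dialogue or routed to a "talking head" rendering.
    def deliver(message, mode="text", persona="female"):
        if mode == "text":
            return f"[AID] {message}"
        if mode == "talking_head":
            return f"<talking head ({persona}) speaks: {message}>"
        raise ValueError(f"unknown mode: {mode}")

    deliver("Recommend dispatching two fire units to sector 4.", mode="talking_head")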

Some of the major results obtained through the use of CITIES included (1) showing the impact and effect of having an expert system as a team member, (2) showing how different forms of electronic-mediated technologies could impact team situation awareness and team performance, (3) demonstrating that higher team SA does not necessarily result in having higher team performance, and (4) utilizing early forms of physiological and communication measures within the context of teamwork and how this could provide useful information. Although the theories underlying some of the design of CITIES have decreased in prominence, the basic bones and premises of having a dynamic, temporal-based team simulation have withstood the test of time. As will be communicated in the next chapter, CITIES became the foundation for NeoCITIES, which continues in use today.

Jasper/Repsaj

The next simulation to be reviewed, termed Jasper/Repsaj, was actually more of what is termed a “problem set,” which was housed within a simulation shell for the purpose of team research. The Jasper series was obtained through the first author’s dissertation work with the Vanderbilt Learning Technology Center (through PhD advisor Dr. John Bransford) in the late 1980s and early 1990s, with the research specifically taking place at the Air Force Research Laboratory. The use of the Jasper series represented a somewhat dramatic turn in focus in terms of the demands of the task and the kind of task that team members were required to solve. Qualitatively, Jasper provided an alternative look at team cognition, one that required deeper levels of cognition than TRAP or CITIES but was also highly immersed in perception and context.

The Jasper series is a specific problem set officially called “The Adventures of Jasper Woodbury” (The Cognition and Technology Group at Vanderbilt, 1992) and was designed to engage people in understanding, learning, and practicing “distance = rate × time” physics problems within a complex and interconnected context. The context involved crisis management, and the purpose was to have a simulation representative of problem-based learning. In particular, the use of Jasper was designed to assess the degree to which a team can learn together in a realistic context and determine how memory and transfer of learning are encoded when an individual is given a near-term transfer problem. As such, Repsaj (Jasper spelled in reverse) was a near-term analogical problem given to individuals after they had solved Jasper in the team context. Therein, the overall purpose of Jasper/Repsaj was to simulate team problem solving in a highly situated context and determine to what extent collective induction produced positive impacts on the transfer of learning and memory of an individual (see McNeese, 2000).

In this case, Jasper had all the real-world constraints and solutions for multi-step planning/problem solving embedded in a video. The video contained all the information needed to create an optimal solution path; however, the problem space was ill-defined and ill-structured. Therein, participants had to determine what the actual problem was and integrate elements in context to determine how a solution path might proceed. To make things more difficult, Jasper could be solved in a variety of ways, so participants had to contrast and compare different possible solutions to see which one was best. The problem-solution paths contained aspects of “distance = rate × time” physics problems, but also contained other information interdependencies that constrained possible solutions.

Specifically, Jasper required participants to figure out how to save an endangered species (an eagle) that was shot deep within a forested area without much access. This set up an urgent temporal component to the problem because the faster the eagle obtained veterinary help, the higher its probability of survival. The problem context was heavily embedded with differing means of transportation (cars, walking on foot, use of an ultralight air vehicle) and other constraints that impacted the major variable of interest: the time it took to rescue the eagle. The primary source of problem solving involved calculating D = R × T equations for various scenarios wherein secondary issues would impact the solution space (for example, the combined weight of a flyer, gas tank, and gas could be one constraint that might prohibit the use of the ultralight). The problem demanded dynamic decision making to figure out the most optimal solution. Many different solutions could be utilized, but there was one optimal solution based on insight, analytics, and planning. The problems, while being ill-defined in nature, were all presented in a highly real-world context generated by the video. The demands inherently required understanding of different roles, basic physics, and being able to test the validity of generated plans.
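A toy worked example conveys the flavor of the reasoning; all distances, rates, and weights below are invented, since the actual constraints reside in the Jasper video itself.

    # Toy Jasper-style comparison of rescue plans using t = d / r.
    def travel_time(distance_mi, rate_mph):
        return distance_mi / rate_mph

    drive_then_walk = travel_time(18, 60) + travel_time(4, 3)   # 0.30 + 1.33 = 1.63 h
    ultralight      = travel_time(22, 30)                       # 0.73 h

    # Secondary constraint: the ultralight is feasible only if its payload
    # (pilot + tank + fuel) stays within the limit (all figures invented).
    payload_ok = (140 + 20 + 15) <= 200
    best_time  = ultralight if payload_ok else drive_then_walk  # 0.73 h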

Jasper/Repsaj is similar to CITIES only from the standpoint that both utilize an emergency crisis management context, albeit different ones. One very unique element of Jasper is that it provides a broad exposure to problem finding and problem solving. Because the problem set contains many ill-defined elements (actually requiring a participant to parse the Jasper video into highly specific but interrelated sub-problems), it requires (using McGrath’s circumplex framework) planning, creativity, and intellective decision making to generate the best solution. The way our research group utilized it for team performance was by allowing dyads (two-person teams) to work in a way that required mutual cooperative learning. This is a type of open-ended teamwork that is used for differing contexts that do not predefine how teams have to work together, so it is very valuable for real-world, on-the-fly decision making that may have wicked problems embedded. Our instructions for the joint problem solving stated that the dyad was to work together and solve the challenge problem.

One of the elements studied in the Jasper experiments was the extent to which collective induction (Laughlin, 1999) developed and, in turn, how this helped to create memory and learning. How much dyads actually worked together (collective induction) was determined through analyses of videos of teamwork that were encoded along a number of dimensions (see McNeese, 2000). After completing the cooperative work condition, each individual was given the Repsaj problem to test whether collective induction facilitated spontaneous access of knowledge, memory response, and analogical transfer. Repsaj was a near-term analogy also involving emergency crisis management, but in the context of a military officer rescuing another officer experiencing frostbite in a remote area of Canada using a flying snowmobile contraption.

In contrast to TRAP and CITIES, Jasper/Repsaj tapped into a different team cognitive skill set owing to the demands presented. It provided a more naturalistic, open-ended scenario for team cognition to develop. In addition to the level of the solution generated by dyads, the actual diagnosis and problem solving accomplished were evaluated through the use of a planning net (Goldman, Vye, Williams, Rewey, & Hmelo, 1992) to determine which sub-problems were actually solved effectively. The simulation represented a socio-cognitive science approach and relied on both quantitative and qualitative methods of evaluation.
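One way to picture a planning net is as a directed graph of sub-problems linked by prerequisite relations. The sketch below is a deliberately simplified, hypothetical rendering: the node names, edges, and scoring rule are invented for illustration, and the actual coding scheme of Goldman et al. (1992) is considerably richer.

```python
# Hypothetical planning net: sub-problem -> list of prerequisite sub-problems.
# Node names and structure are invented; they do not reproduce Goldman et al.
PLANNING_NET = {
    "identify_goal": [],
    "choose_vehicle": ["identify_goal"],
    "check_payload": ["choose_vehicle"],
    "compute_flight_time": ["check_payload"],
    "compare_routes": ["compute_flight_time"],
}

def effectively_solved(solved: set) -> set:
    """Sub-problems solved with all of their prerequisites also solved."""
    return {p for p in solved
            if all(q in solved for q in PLANNING_NET.get(p, []))}

# A dyad that computed flight time without ever checking the payload
# gets no credit for that sub-problem under this (invented) rule.
dyad_solved = {"identify_goal", "choose_vehicle", "compute_flight_time"}
print(effectively_solved(dyad_solved))  # {'identify_goal', 'choose_vehicle'}
```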

During the late 1980s and early 1990s, simulation setups were not sophisticated and often required some improvisation to emulate the processes needed. In the case of Jasper/Repsaj this was certainly true, as the computing architecture was rather basic. The architecture for Jasper was centered around a Macintosh computer connected to a laserdisc player and a color monitor. A time-signal apparatus was used to record specific timing behavior of the participants. Dyads were required to solve the Jasper problem in a “think-aloud” paradigm. Therein, three video cameras with integrated microphone systems were connected to a VCR to record the video/audio components of a dyad’s think-aloud protocols. One affordance of the laserdisc player was that it enabled participants to easily return to specific scenes in the video and replay them. This provided “real” reenactments of different sub-problem elements embedded within the video case, allowing the process of perceptual differentiation to occur, wherein contrasts and comparisons of facts, scenes, and transitions among scenes were available and could potentially be integrated. The architecture captured all forms of problem-solving behavior. For the transfer problem solved by individuals, separate rooms were provided wherein the Repsaj problem was presented as a verbal analogue of Jasper. This architecture achieved the desired purposes of the experiment but was inherently different from the TRAP and CITIES simulations.

This particular orchestration of Jasper/Repsaj enabled experimenters to look at three phases of team cognition vis-à-vis dyadic team problem solving: knowledge acquisition and solving of a problem, transfer of learning to a near-term analogy, and memory recall of both the Jasper and Repsaj problems. Among the many findings was that transfer of knowledge is difficult for individuals solving Repsaj as a verbal problem, even though they had acquired the knowledge in Jasper, a video that emphasized perceptual learning. The setup provided considerable guidance on how to conduct an extended, complex experimental session that would test and evaluate dyads engaged in ill-defined, dynamic problem solving within a realistic, perceptual environment while capturing multiple measures of team and individual performance.

SUMMARY AND CONCLUDING REMARKS

Much of the work at Wright-Patterson Air Force Base was entrenched in actual applied settings such as aircraft crews, command and control teams, and engineering design teams to address issues of great concern for the Department of Defense. The history of this work set the stage for more contemporary progressions of distributed team cognition (as portrayed in the next chapter). The foundational work presented in this chapter provided much insight and ingenuity. Of course, along the way there were failures, mistakes, and false starts. This is the nature of real-world work and research, where limited understanding creates gaps that produce consequences. Yet much was learned and adapted for future work, which helped to inspire and provide the feedback needed for learning to take place. This is how lessons learned are formed and contribute to progress. When the senior author began research on teams in the early 1980s, major theoretical positions and reviews of the literature were provided by Dyer (1984), Hackman and Morris (1975), Hill (1982), and Roby (1968). Today these perspectives and reviews have been succeeded by the likes of Salas, Cooke, Mathieu, Fiore, Mohammed, DeChurch, and others as paragons of distributed team cognition. As the reader will see, many of these scientists are represented within this handbook and provide great depth of knowledge and perspicacious advice within different research areas of distributed team cognition. Likewise, there have been numerous advances in methods and measures that have facilitated greater comprehension of the target research area. Technology has grown more complex but has also produced innovative advances that allow precise support of teamwork, especially as it encompasses distributed information, places, and people. Tools have been designed that employ new modes of data collection and data interpretation that were not thought possible 30 years ago. When one considers all these changes over the last 30 years, it is truly remarkable what has come to pass in advancing the state of the art of the field. This chapter has captured historical and conceptual developments for the greater good, and the next chapter will show how simulation has evolved in many different ways, and what is still possible.

NOTES

1. Ironically, later in his research life at the Air Force Research Laboratory, Wright-Patterson AFB, OH, he was able to become the Director of the Collaborative Design Technology Laboratory, which conducted research on how computer-supported cooperative work and technology could be adapted and developed for engineering design teams. This represents a salient progression from actual design work with an engineering team to the study of distributed team cognition within engineering teams, therein providing an important ecological niche for much of his research focused on supporting teams with technological innovation.
2. The first author had the pleasure to meet and interact with Dr. Warrick, one of the early pioneers of human factors, while at the Fitts Human Engineering Division. Dr. Warrick continued to volunteer at the laboratory into his 80s.
3. The term “human factors” is used generically to refer collectively to human factors engineering, user-centered design, human-computer interaction, cognitive systems engineering, and human-system integration.

REFERENCES

Alluisi, E. A., Chiles, W. D., Hall, T. J., & Hawkes, G. R. (1963). Human group performance during confinement. USAF AMRL Report No. TDR-63-87. Wright-Patterson Air Force Base, OH: Armstrong Aerospace Medical Research Laboratory.

Alluisi, E. A. (1994). Roots and rooters. In H. L. Taylor (Ed.), Division 21 members who made distinguished contributions in engineering psychology (pp. 4-22). Washington, DC: APA.

Baron, J. (2004). Normative models of judgment and decision making. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 19-36). London: Blackwell.

Bransford, J., Sherwood, R., Hasselbring, T., Kinzer, C., & Williams, S. (1990). Anchored instruction: Why we need it and how technology can help. In D. Nix & R. Spiro (Eds.), Cognition, education, and multimedia: Exploring ideas in high technology (pp. 115-141). Hillsdale, NJ: Lawrence Erlbaum.

Brown, C. E., & Leupp, D. G. (1985). Team performance with large and small screen displays. AAMRL-TR-85-033. Wright-Patterson Air Force Base, OH: Armstrong Aerospace Medical Research Laboratory.

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-41.

Cannon-Bowers, J. A., & Salas, E. (1998). Individual and team decision making under stress: Theoretical underpinnings. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 17-38). Washington, DC: American Psychological Association.

Chapanis, A. R., Garner, W. R., & Morgan, C. T. (1949). Applied experimental psychology. New York: John Wiley & Sons.

Clapper, J. R. (2018). Facts and fears: Hard truths from a life in intelligence. New York: Viking.

Cognition and Technology Group at Vanderbilt. (1992). The Jasper experiment: An exploration of issues in learning and instructional design. Educational Technology Research and Development, 40(1), 65-80.

Cooke, N. J., Gorman, J. C., Duran, J. L., & Taylor, A. R. (2007). Team cognition in experienced command-and-control teams. Journal of Experimental Psychology: Applied, Special Issue on Capturing Expertise across Domains, 13, 146-157.

Cooke, N. J., Rivera, K., Shope, S. M., & Caukwell, S. (1999). A synthetic task environment for team cognition research. Proceedings of the Human Factors and Ergonomics Society 43rd Annual Meeting (pp. 303-307). Santa Monica, CA: Human Factors and Ergonomics Society.

Dyer, W. G. (1984). Strategies for managing change. Reading, MA: Addison Wesley Publishing.

Eitner, E. O. (1987). When decision aids fail. AAMRL-TR-87-035. Wright-Patterson Air Force Base, OH: Armstrong Aerospace Medical Research Laboratory.

Endsley, M. R. (1995). Measurement of situation awareness in dynamic systems. Human Factors, 37(1), 65-84.

Festinger, L. (1962). Cognitive dissonance. Scientific American, 207(4), 93-107.

Gardner, H. (1985). The mind's new science: A history of the cognitive revolution. New York: Basic Books.

Goldman, S. R., Vye, N. J., Williams, S. M., Rewey, K., & Hmelo, C. (1992, April). Planning net representations and analyses of complex problem solving. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.

Grether, W. F. (1995). Human engineering: The first 40 years, 1945-1984. In R. J. Green, H. C. Self, & T. S. Ellifritt (Eds.), 50 years of human engineering: History and cumulative bibliography of the Fitts Human Engineering Division. Wright-Patterson Air Force Base, OH: Crew Systems Directorate, Armstrong Laboratory, Air Force Materiel Command.

Hackman, J. R., & Morris, C. G. (1975). Group tasks, group interaction process, and group performance effectiveness: A review and proposed integration. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 8). New York: Academic Press.

Hill, G. W. (1982). Group versus individual performance: Are n+1 heads better than one? Psychological Bulletin, 91(3), 517-539.

Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43-64.

Hollins, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Towards a new foundation of HCI. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.

Johnson, D. W., & Johnson, R. T. (1994). Learning together and alone (4th ed.). Needham Heights, MA: Allyn and Bacon.

Kidd, J. S. (1961). A comparison of one-, two-, and three-man work units under various conditions of work load. Journal of Applied Psychology, 45(3), 195-200.

Kincade, R. G., & Kidd, J. S. (1958). The effect of team size and intermember communication on decision-making performance. WADC TR 58-474. Wright-Patterson Air Force Base, OH: Aero Medical Laboratory, Wright Air Development Center.

Klein, G. A., Orasanu, J., Calderwood, R., & Zsambok, C. E. (Eds.). (1993). Decision making in action: Models and methods. Westport, CT: Ablex Publishing.

Laughlin, P. (1999). Collective induction: Twelve postulates. Organizational Behavior and Human Decision Processes, 80(1), 50-69.

Letsky, M. P., Warner, N. M., Fiore, S. M., & Smith, C. A. P. (Eds.). (2008). Macrocognition in teams. Burlington, VT: Ashgate.

Lindsay, P. H., & Norman, D. A. (1977). Human information processing: An introduction to psychology. New York: Academic Press.

Lorge, I., Fox, D., Davitz, J., & Brenner, M. (1958). A survey of studies contrasting the quality of group performance and individual performance, 1920-1957. Psychological Bulletin, 55(6), 337-372.

Martin, E., Lyon, D. R., & Schreiber, B. T. (1998). Designing synthetic tasks for human factors research: An application to uninhabited air vehicles. In Proceedings of the Human Factors and Ergonomics Society annual meeting (pp. 123-127). Santa Monica, CA: Human Factors and Ergonomics Society.

McBride, D. J., & Brown, C. E. (1989). Team performance in a dynamic resource allocation task: The importance of heuristics. In Proceedings of the Human Factors and Ergonomics Society 33rd annual meeting (pp. 831-835). Santa Monica, CA: Human Factors and Ergonomics Society.

McCormick, E. J. (1957). Human engineering. New York: McGraw-Hill Publishers.

McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.

McNeese, M. D. (1992). Analogical transfer in situated cooperative learning. Unpublished doctoral dissertation. Vanderbilt University, Nashville, TN.

McNeese, M. D. (2000). Socio-cognitive factors in the acquisition and transfer of knowledge. Cognition, Technology, and Work, 2, 164-177.

McNeese, M. D., & Brown, C. E. (1986). Large group displays and team performance: An evaluation and projection of guidelines, research, and technologies. AAMRL-TR-86-035. Wright-Patterson Air Force Base, OH: Armstrong Aerospace Medical Research Laboratory.

Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Lawrence Erlbaum Associates.

Mesmer-Magnus, J., & DeChurch, L. (2009). Information sharing and team performance: A meta-analysis. Journal of Applied Psychology, 94(2), 535-546.

Morrissette, J. O. (1958). An experimental study of the theory of structural balance. Human Relations, 11(3), 239-254.

Myers, B. A. (1998). A brief history of human computer interaction technology. ACM Interactions, 5(2), 44-54.

Nardi, B., & Miller, J. (1991). Twinkling lights and nested loops: Distributed problem solving and spreadsheet development. International Journal of Man-Machine Studies, 34, 161-184.

Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.

Neisser, U. (1976). Cognition and reality: Principles and implications of cognitive psychology. New York: Freeman.

Pattipati, K. R., Kleinman, D. L., & Ephrath, A. R. (1983). A dynamic decision model of human task selection performance. IEEE Transactions on Systems, Man, & Cybernetics, SMC-13(2), 145-166.

Pea, R. D. (1993). Practices of distributed intelligence and designs for education. In G. Salomon (Ed.), Distributed cognitions (pp. 47-87). New York: Cambridge University Press.

Rasmussen, J., Pejtersen, A. M., & Goodstein, L. P. (1994). Cognitive systems engineering. New York: Wiley.

Roby, T. B. (1968). Small group performance. Chicago, IL: Rand McNally.

Salas, E., Fiore, S. M., & Letsky, M. P. (Eds.). (2012). Theories of team cognition: Cross-disciplinary perspectives. New York, NY: Routledge.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wellens, A. R. (1990). Assessing multi-person and person-machine distributed decision making using an extended psychological distancing model. AAMRL-TR-90-006. Wright-Patterson Air Force Base, OH: Armstrong Aerospace Medical Research Laboratory.

Wellens, A. R. (1993). Group situation awareness and distributed decision making: From military to civilian applications. In J. Castellan (Ed.), Individual and group decision making: Current issues (pp. 267-291). Hillsdale, NJ: Lawrence Erlbaum Associates.

Williges, R. C., Johnston, W. A., & Briggs, G. E. (1966). Role of verbal communication in teamwork. Journal of Applied Psychology, 50(6), 473-478.

Wilson, D., McNeese, M. D., & Brown, C. E. (1987). Team performance of a dynamic resource allocation task: Comparison of shared versus isolated work setting. In Proceedings of the 31st annual meeting of the Human Factors Society (Vol. 2, pp. 1345-1349). Santa Monica, CA: Human Factors Society.

Wohl, J. G. (1981). Force management decision requirements for air force tactical command and control. IEEE Transactions on Systems, Man, and Cybernetics, 11(9), 618-639.

 