Expertise and Distributed Team Cognition: A Critical Review and Research Agenda

James A. Reep

CONTENTS

Introduction
Expertise
    Knowledge Organization
    Skill
    Cognitive Reasoning
    Expertise Distributed in Context
    Expertise Distributed in Teams
Measuring and Capturing Expertise
    Quantitative Measures
        CWS Performance Index
        Inference Verification Technique
        Thurstonian Model
    Qualitative Measures
        Mental Models
        Recognition-Primed Decision Model
        Recognition/Metacognition Model
Discussion
Future Research Agenda
Conclusion
Notes
References

INTRODUCTION

Understanding expertise is an important endeavor in a variety of research contexts. Cognitive scientists seek to understand individual differences in knowledge structures and representations, and their influence on decision-making processes (Wright & Bolger, 1992). AI researchers are interested in how experts learn in hopes of creating intelligent systems that perform at high or human-like levels (Hambrick & Hoffman, 2016). Educationalists seek an understanding of how experts are created so that methods of instruction and training can be improved (Alexander, 2005; Bereiter & Scardamalia, 1986).

Within complex fields of practice, formalized education (i.e. attending a college or university) provides a much-needed knowledge structure for students (Alexander, 2005; Bereiter & Scardamalia, 1986; Goldman & Petrosino, 1999; Mehta, Suto, Elliott, & Rushton, 2011) but often lacks the specialized task and contextual knowledge employers need for them to be autonomous employees, or experts (Bishop, 1989). To mitigate this lack of expertise in newly hired employees, informal and formal training programs can be implemented to fill an individual's knowledge gap. The problem that arises in these programs, however, is that they neglect the teamwork aspect of the environment, which has been identified as being important to employers (Hesketh, 2000). As such, having someone familiar with the context (i.e. an expert) train alongside a novice in these settings would be beneficial in addressing teamwork and communication between novice and expert employees, emulating some of these very characteristics (i.e., learning, problem solving, communication, and others) during the training (Tracey et al., 2015; Urick, 2017).

Unfortunately, experts and novices1 have very different understandings and perspectives on the environment (Hinds, Patterson, & Pfeffer, 2001; Hmelo-Silver & Pfeffer, 2004), how they approach problem solving (Fischer, Greiff, & Funke, 2012; Gick, 1986; Klein & Borders, 2016), and how their cognition is distributed (or not) across content and artifacts within the environment (Hollan, Hutchins, & Kirsh, 2000; Hutchins, 1995). This presents organizations with the challenge of imbuing novices with the knowledge contained within individual experts with years of service, particularly due to the disparate mental models of these two groups (i.e. experts versus novices) and the composition of their cognitive processes. More pointedly, experts' cognitive processes are distributed among their internal biological cognitive processes, the context, and the artifacts within their environment, whereas novices lack strong connections between the environment and the task, which limits the distribution of their cognitive processes.

To better understand expertise, this chapter will critically analyze the extant literature on the topic, examine how expertise is measured and used, and propose a future research agenda based on that analysis.

EXPERTISE

Because of the important role that experts play in our society, the nature of expertise has been the topic of much research. Most researchers define expertise as knowledge and experience gained from a significant amount of deliberate practice at a particular skill or within a particular domain (Ericsson, Krampe, & Tesch-Romer, 1993; Ericsson, Prietula, & Cokely, 2007; Hambrick et al., 2014). The key to developing expertise is this idea of deliberate practice, which "entails considerable, specific, and sustained efforts to do something you can't do well—or even at all" (Ericsson et al., 2007, p. 3).

The general amount of time ascribed to becoming an expert is about 10,000 hours of deliberate practice (Ericsson et al., 1993; Gladwell, 2008) or ten years (Hayes, 1989; H. A. Simon & Chase, 1973). While many researchers disagree with both of these figures, they agree that developing expertise still requires a large amount of painstaking effort and time (Baker & Young, 2014; Hambrick & Hoffman, 2016; Hambrick et al., 2014). In fact, a distinguishing characteristic of an expert is their ability to consistently and reliably display superior performance upon demand (Ericsson & Lehmann, 1996).

Weinstein (1993) argues for identification of two categories of expertise: (1) epistemic expertise, which is a function of what an expert knows, and (2) performative expertise, which is a function of what an expert does. Within the realm of cognitive science, expertise is defined in terms of development, knowledge structures, and reasoning processes (Hoffman, 1998). These loosely correlate with the categories identified by Weinstein in that development of expertise is gaining the skills necessary to perform a task, or performative expertise. Likewise, knowledge structures and their organization relate to epistemic expertise. However, Hoffman identified an additional attribute of expertise, "reasoning processes," that relates to the cognitive processes involved in making decisions.

Knowledge Organization

The organization of knowledge that an expert possesses—their epistemic expertise—is much different from that of a novice. An expert's extensive knowledge affords them the ability to abstract both the knowledge and the problem, whereas novices have a much more simplistic grasp (Hmelo-Silver & Pfeffer, 2004). Furthermore, when discussing complex systems, experts tend to "have a more functional and behavioral understanding whereas novices, regardless of age, have a more structural representation" (Hmelo-Silver & Pfeffer, 2004, p. 132). Having greater knowledge, experts describe complex systems in terms of relationships, patterns, and outcomes, expressing a more integrative understanding (Chase & Simon, 1973a; Chi, Feltovich, & Glaser, 1981; D. P. Simon & Simon, 1978). This affords experts the capability of distributing cognition into the context, whereby artifacts within the system are coupled together with the cognitive processes of the individual (Hutchins, 1995). Novices, on the other hand, describe these same complex systems more simplistically (Adelson, 1981, 1984), making reference to syntactically organized knowledge. For example, Hmelo-Silver and Pfeffer (2004) asked experts and children to create and then describe an aquarium. Experts discussed not only the physical structures that were present in their drawing but also their function, purpose, and behavior within the tank. The added dimensions of context and artifacts in the distributed cognitive processes further strengthen the important connections that couple epistemic knowledge with performative knowledge and reasoning.

Studies have shown that experts are also able to connect concepts and memories in meaningful ways. Their conceptual categories (Voss, Greene, Post, & Penner, 1983) can then be utilized to recognize patterns and provide the expert with the intuition that conceptually different problem types may exhibit the same characteristics (Murphy & Wright, 1984). For example, Groen and Patel (1988) found when comparing medical diagnosticians and medical students that experts tended to remember the essence of cases rather than their individual specifics.

Ericsson and Lehmann (1996) also found that when comparing experts to novices, experts are able to arrange problems into categories using features of their solutions, while novices rely on features of the problem statement alone. For example, Chi, Glaser, and Rees (1982) found that not only did experts possess more knowledge about physics than novices, but, because of their superior knowledge organization, experts could represent problems in terms of their underlying principles. Conversely, novices were only able to express problems in terms of their surface elements.

The pattern recognition and interrelated problem categories possessed by experts afford them the ability to plan solutions, when possible, in memory and on the fly (VanLehn, 1996). Chase and Simon (1973b) demonstrated that an expert's pattern recognition accounted for superior chess move selection and seemingly supernatural memory without breaking inherent human information processing limitations (e.g. limited short-term memory capacity) (Newell & Simon, 1972). Furthermore, Klein, Calderwood, and Clinton-Cirocco (1986) developed the recognition-primed decision (RPD) model, which emphasizes the use of recognition for decision making rather than calculation or analysis. While studying fire ground commanders (FGCs), Klein et al. (1986) discovered that FGCs relied primarily on experience to identify an appropriate course of action rather than on comparisons and evaluations of alternative options. FGCs were able to effectively identify when situations were typical (or not) and select the most effective action to take, incorporating the distributed context into their cognitive process. Moreover, FGCs relied on their expertise to avoid meticulous internal deliberations when selecting an appropriate course of action (Klein, 1993).

Skill

Much of the relevant research regarding performing as an expert measures expertise on a continuum from novice to expert (Abelson, 1981; Phelps & Shanteau, 1978). When engaging in deliberate practice, a novice first gains knowledge about the rules in order to at least be able to perform the skill being practiced. Through continued repetition and practice, the skill becomes second nature. For instance, when learning to ride a bicycle, novices must learn how to balance, turn, pedal, and brake (i.e. the rules and functions of the bicycle). As a skill is practiced and developed there are "stage-like qualitative shifts that occur as expertise develops" (Hoffman, 1998, p. 84) whereby explicit instructions and knowledge become tacit or "automatic" (Sanderson, 1989).

When knowledge becomes tacit, it is more readily available for use in decision making and skilled performance. A sort of muscle memory develops that leads to decreased response times, quicker decision making, and more precise reactions (McLeod & Jenkins, 1991). These benefits are generally quantitatively measurable. For example, elite athletes, when prompted with opportunities to perform, can produce required reactions faster and earlier in the process than athletes with less skill. The reason for this difference is that tacit knowledge provides experts with the ability to perform anticipatory movements (Helsen & Pauwels, 1993) within a much smaller reaction window, often with greater accuracy and precision (McLeod & Jenkins, 1991).

In an analysis of expert jugglers, Huys, Daffertshofer, and Beek (2004) demonstrated that with a partially obstructed view, and thus shorter available reaction times, experts were able to make necessary movement corrections while only able to view an object's apex. Furthermore, the researchers found that an expert juggler has much lower variability in object trajectory patterns, minimizing the corrections needed to keep the objects aloft in the first place. Likewise, in studies of expert typists it was shown that experts tended to read well ahead of the text they were typing (Gentner, 1988) and were able to link together prepared movements of their fingers in advance (Gentner, 1983; Salthouse, 1984), which led to an increase in typing speed as compared to that of novice typists. In fact, Salthouse (1984) determined that when experts were disallowed the ability to look ahead, their typing skill was nearly reduced to that of a novice. Furthermore, Klein (1993) concluded that an expert's ability to mentally simulate a situation affords them the capability to discover an optimal solution to problems quickly rather than having to compare several options. As a result, experts often identify a reasonable solution quickly, given what is known, and therefore do not necessarily need to generate alternative courses of action.

To further illustrate the look-ahead capability of experts, researchers have found that expert sight-readers—those who can play an unfamiliar piece of music on an instrument given the musical score—tend to look further ahead to anticipate notes and movements (Bean, 1938; Goolsby, 1994), similar to expert typists. Additionally, novice sight-readers often look at individual notes, whereas experts tend to look at chunks of notes throughout the score. Consequently, experts are able to utilize their advanced knowledge of music theory, which "facilitates the efficiency of encoding of patterns and chunks" in anticipation of upcoming notes (Lehmann & Ericsson, 1996, p. 5).

Cognitive Reasoning

Similar to the phases of skill acquisition, cognitive reasoning in an individual progresses in stages (Kim, Ritter, & Koubek, 2013). In the early stages of learning, the novice spends their time attempting to grasp knowledge about the domain without applying it. This phase of development is characterized by reading, studying, and other methods of acquiring information. As practice continues, individuals begin to transition from knowledge acquisition to application whereby their attention shifts toward solving problems (VanLehn, 1996).

The study of expert performance began with the study of expert chess players and has continued in that domain for decades (see Chase & Simon, 1973a; De Groot, 1965). In his seminal work, De Groot (1965) examined the cognitive processes of expert and novice chess players using think-aloud strategies as they selected the best move when presented with various chess boards. Results showed that while both world-class chess players and those with considerably less skill performed planning and cognitive searches, the experts consistently selected the best move because the chess board is internally assimilated into the expert's cognitive processing. Conversely, those with less skill often failed to consider the best move despite undergoing the same cognitive processes. Further study (Chase & Simon, 1973b) of the phenomenon determined that expert chess players did not generate the best selection during cognitive searching but rather by memory recall. Visual cuing by the chess board prompted the experts—those with considerably more practice and exposure to chess move permutations—to simply recall the best move from a previous example.

Anderson (1993) claims the ability to "speed up" is due to knowledge being converted from declarative knowledge (i.e. what they know) into procedural knowledge (i.e. what they can do), and "the speed of the individual pieces of procedural knowledge also increase with practice" (VanLehn, 1996, p. 25). Other researchers (Newell, 1994; Newell & Rosenbloom, 1981) have shown that this process of practice and experience gradually allows an individual to integrate several smaller pieces of knowledge into larger subsystems for the specific tasks to be accomplished. These larger subsystems of knowledge can then be applied to a problem or task with the addition of only a few pieces of declarative knowledge, which increases efficiency and speed.

Expertise Distributed in Context

In the formative years of research on expertise, chess was the dominant domain of interest (see De Groot, 1965, 1966). Chess is a highly organized, rules-based context that is ideal for research but lacks the complexity of actual human activity. Since all human activity takes place within a complex environment (Feltovich, Ford, & Hoffman, 1997), chess simplified that environment, facilitating the study of expertise. Many of the findings from the chess context were important and supported research into more complex environments (Lewandowsky & Kirsner, 2000).

Context has been shown to be an important element to consider when studying knowledge and expertise because of its role in codifying and structuring knowledge within an individual. Perception of the environment provides cues that prompt knowledge recall and application. Nassehi (2004) articulates that there is a gap between knowing and doing (application). Novices who possess knowledge or skill but do not act, or are not able to accurately apply it, present a real problem. Moreover, just because someone possesses a particular piece of knowledge does not mean that it gets transferred, recalled, or applied when appropriate.

Bransford, Franks, Vye, and Sherwood (1989) define this possessed but not accessed knowledge as "inert knowledge" in that the knowledge "is accessed only in a restricted set of contexts even though it is applicable to a wide variety of domains" (Bransford et al., 1989, p. 472). For example, Carraher, Carraher, and Schliemann (1985) found that street vendors in Brazil accustomed to mentally calculating a customer's bill were unable to perform the same calculations when removed from the context, without the visual cues of handling the product. Unfamiliar with the usefulness of the calculations they employed daily, the vendors were unable to apply them to the tests given by the researchers. The disconnect between context and cognitive reasoning can be addressed during the learning process via examples that serve to give deeper meaning to the knowledge being acquired in order to foster its use when faced with similar problems (van Gog, Paas, & van Merrienboer, 2004; Kaminski, Sloutsky, & Heckler, 2008). Specifically, examples reinforce the "why" and the "how" of information use, providing grounding in the various uses of the knowledge as well as the rationale for each step in the cognitive reasoning process (van Gog et al., 2004). Experts, having been exposed to several scenarios and examples, are able to consistently and effectively access and thereby apply their knowledge in both familiar and unfamiliar situations.

Furthermore, Choi and Hannafin (1995, p. 53) state that knowledge is a result of "unique relationships between an individual and the environment," implying that the environment is an integral part of an individual's cognitive process. Removal from the context likewise removes the perceptual cues provided by the environment and the requisite interactions between the context and the team (Evans & Garling, 1991; Kaplan, 1991). Context is therefore important in "establishing meaningful linkages with experience and in promoting connections among knowledge, skill, and experience" (Choi & Hannafin, 1995, p. 54). Learning that occurs without context is less likely to be accessed and applied in unrelated situations (Black, Segal, Vitale, & Fadjo, 2012; Carraher et al., 1985). Consequently, learning that occurs through the use of contextually relevant problems and authentic tasks promotes engagement, motivation, and learning in students (Choi & Hannafin, 1995; Dochy, Segers, Van den Bossche, & Gijbels, 2003; Wilson, 1993).

Since human activity is situated within a given context, the environment can allow an individual to off-load some of their cognitive tasks (e.g. through the use of alarms, signs, and other environmental cues), which is important in complex work domains (Hollan et al., 2000; Smith & Collins, 2010). Responding to an alarm system or system notifications within a complex work environment, for example, is situated within the specific context, and decisions are made internally but in concert with the external inputs presented to the individual (Shattuck & Miller, 2004, 2006).

Expertise Distributed in Teams

Just as context is an important aspect of expertise to consider, so too are tasks that involve teams of people working together. In organizations or tasks that utilize teamwork, effectively managing and coordinating the expertise distributed among the team members is important to ensure quality output (Faraj & Sproull, 2000). Not only does teamwork require expertise to perform tasks, but there must also be an awareness of where expertise is located within the team itself, where expertise is needed, and how it is to be applied to the task at hand. Faraj and Sproull (2000) found that coordination of expertise was strongly related to team performance (i.e. teams that perform well also optimally coordinate their expertise).

In team-oriented complex fields of practice requiring peak performance, much research has been conducted on team performance, showing that teams with experience working together generally perform better than those with no prior experience with their teammates (Faraj & Sproull, 2000; Larson, Christensen, Abbott, & Franz, 1996; Lewis, 2003; Moreland, Argote, & Krishnan, 1996; Smith-Jentsch, Kraiger, Cannon-Bowers, & Salas, 2009). Much of this success comes from an accurate, shared understanding of the context, tasks, and problems (Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000), awareness of the common knowledge overlap, generally known as a shared mental model (SMM) (Cannon-Bowers, Salas, & Converse, 1993), and knowledge of where information can be elicited from within the team (Reagans, Argote, & Brooks, 2005).

The expertise gained through prolonged cooperation among team members has also been shown to increase levels of trust (Uzzi, 1996), which in turn leads to a greater degree of information sharing (Uzzi & Lancaster, 2003). Increased trust thus allows an alignment of mental models and fosters an awareness of the knowledge and expertise distributed throughout the team.

Research has also shown that the degree of information sharing, specifically the sharing of “private” information, affects the decision-making processes within teams (Bowman & Wittenbaum, 2012; Lu, Yuan, & McLeod, 2012). When information is shared fluidly between team members, it has been observed that teams tend to make decisions more quickly and accurately due to everyone having a common mental picture, or team mental model, of the problem, context, and possible solutions (Mathieu et al., 2000; Mohammed & Dumville, 2001). As individuals work together in a team, they develop a conceptual picture of the skills and knowledge possessed by the other members on the team (Cannon-Bowers & Salas, 2001; Fiore, Salas, Cannon-Bowers, & London, 2001; Mohammed & Dumville, 2001). As a result, individuals are aware of who is best suited to provide a specific piece of expertise or skill to be leveraged in response to a problem, task, or event.

Moreover, research has shown that cross-training employees has a desirable impact on team performance as a result of individuals being trained in the skills and responsibilities of each other's roles within the organization (i.e. team expertise) (Cannon-Bowers, Salas, Blickensderfer, & Bowers, 1998; Marks, Sabella, Burke, & Zaccaro, 2002; Volpe, Cannon-Bowers, Salas, & Spector, 1996). Entin and Serfaty (1999) acknowledge that cross-training provides a framework for increasing the accuracy of SMMs, providing each individual with an explicit view of their teammates' roles, responsibilities, and expertise. However, they also recognize that, while cross-training positively influences SMMs, for the most benefit to team performance, training must afford teams an opportunity "to exercise their SMMs through specific training of coordination strategies" (Entin & Serfaty, 1999, p. 324). In other words, teams must be trained together in situations that replicate their working environment, tasks, and necessary intra-team interactions (i.e. coordination strategies) to ensure that the training is authentic. Cross-training, therefore, not only positively impacts task-specific expertise, but also fosters team-oriented expertise.

MEASURING AND CAPTURING EXPERTISE

One of the difficult endeavors when studying expertise is its measurement, especially given the domain specificity associated with being an expert. While expertise may be present, it may have only limited application and may not translate well to similar domains whose conditions exceed the expertise a person has. Furthermore, the validity of such measures can be affected by cognitive biases, prejudices, emotion, and cultural factors. Expertise therefore has boundary conditions, and its generalizability may be coupled to its bandwidth: if expertise is measured along only a narrow spectrum, it may not generalize well to other fields of practice. As a result, many measures of expertise have been created, but they are all domain specific and have little generalizability outside of the target domain.

Quantitative Measures

Despite the inherent challenges of measuring expertise, attempting to do so provides many tangible benefits. When expertise is quantifiable, it can be used to identify experts among a pool of individuals within a given subject (Boeva, Krusheva, & Tsiporkova, 2012) or to compare the expertise of one individual to another (e.g. when hiring new employees) (Royer, Carlo, Dufresne, & Mestre, 1996; Shanteau, Weiss, Thomas, & Pounds, 2002). Quantitative measurement is often trivial in domains and activities that have a visible ground truth with "fixed capabilities and limited moves, [and] the goal is unambiguous" (Serfaty, MacMillan, Entin, & Entin, 1997, p. 233), such as the formative research on expertise among chess players. For problems that are ill-defined, complex, ambiguous, and reliant on information availability, methodologies have been developed that seek to bridge the gap between the observational data inherent in qualitative measurement and the difficulty of reliably measuring expertise.

CWS Performance Index

One such quantitative index developed to measure expertise, the Cochran-Weiss-Shanteau (CWS) Performance Index (Cochran, 1943; Weiss & Shanteau, 2003), has been used to assess expertise based on the premise that "expert judgment involves discrimination—seeing fine gradations among the stimuli—and consistency—evaluating similar stimuli similarly" (Germain & Tejeda, 2012, p. 206). The general idea of the CWS is that experts must be able to discriminate consistently. Measuring an individual's responses to different stimuli gauges discrimination, while the variance of responses to repeated presentations of the same stimulus measures inconsistency. Combined as a ratio (i.e. discrimination to inconsistency), these give a quantifiable measurement of performance expertise. Moreover, the CWS Performance Index can be used to measure expertise in teams (a single measurement for the entire team) and individuals (a single-subject score). Indeed, it has been used to measure the expert performance of air traffic controllers (Thomas, Willem, Shanteau, Raacke, & Friel, 2001), ergonomists, agricultural judges, and occupational therapists.
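In symbols, the index is the ratio just described. The variance-based operationalization below is one common reading of the measure, shown here for illustration rather than as Weiss and Shanteau's exact formulation:

\[
\mathrm{CWS} \;=\; \frac{\text{Discrimination}}{\text{Inconsistency}}
\;\approx\; \frac{\operatorname{Var}_{s}\!\left(\bar{r}_{s}\right)}{\operatorname{mean}_{s}\!\left(\operatorname{Var}_{t}\left(r_{s,t}\right)\right)},
\]

where \(r_{s,t}\) denotes the judge's rating of stimulus \(s\) on its \(t\)-th presentation and \(\bar{r}_{s}\) is the mean rating of stimulus \(s\). Higher values indicate judgments that separate different stimuli sharply while treating repeated presentations of the same stimulus alike.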

However, while the CWS Performance Index has been successfully applied within various contexts, it has limitations. First, CWS is interpreted relative to other experts, not absolutely. As such, CWS can be used to compare the expertise of individuals to determine which is performing better (Germain & Tejeda, 2012), but only relative to each other. This presents an issue when even experts respond consistently but incorrectly to stimuli. While their CWS Performance Index score might be high, indicative of an "expert," in actuality they may not be acting appropriately. Therefore, an expert might receive a high CWS Performance Index score, but that does not guarantee expertise. Germain and Tejeda explain that "a dance judge who evaluates the contenders primarily on the basis of appearance, taking into account hairdo and outfit very heavily, would be deemed an expert according to the CWS Performance Index if those attributes were used to discriminate consistently among the dancers" (Germain & Tejeda, 2012, p. 207). In other words, the judge would consistently discriminate using the same criteria for each dancer, yielding a high discrimination/inconsistency ratio despite not accounting for the dancers' skill. This is clearly not a true measure of the dancers' ability, and so we could not say that true expertise is being measured.

Inference Verification Technique

The Inference Verification Technique (IVT) developed by Royer, Carlo, Dufresne, and Mestre (1996) is another quantitative measure that can be used to discriminate between experts and novices. To measure expertise in a particular domain, an individual is given a corpus of text and asked to draw inferences connecting two items taken from the text, called near inferences. The individual is then asked to connect an item from the text with prior knowledge about the domain of interest, drawing valid inferences between the two items, called far inferences. Royer et al. (1996) found that experts were able to see underlying principles in problem solutions, whereas novices were only able to see the surface problem elements.
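As a rough illustration of how responses to such a measure might be scored, the sketch below computes separate proportion-correct scores for near and far inference items. The scoring scheme, function name, and example data are illustrative assumptions, not the procedure reported by Royer et al. (1996):

```python
def ivt_scores(near_responses, far_responses):
    """Score near and far inference verification items separately.

    Each argument is a list of (answer, correct_answer) pairs from
    true/false verification items: near items connect two statements
    drawn from the studied text; far items connect a text statement
    with prior domain knowledge.
    """
    def proportion_correct(pairs):
        return sum(answer == key for answer, key in pairs) / len(pairs)
    return proportion_correct(near_responses), proportion_correct(far_responses)

# A large near/far gap suggests surface-level understanding; experts
# would be expected to verify far inferences nearly as well as near ones.
near = [(True, True), (False, False), (True, True), (True, False)]
far = [(True, False), (False, False), (False, True), (True, False)]
print(ivt_scores(near, far))  # (0.75, 0.25)
```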

Thurstonian Model

The Thurstonian Model, developed by Steyvers, Miller, Lee, and Hemmer (2009), relies on the wisdom of the crowd as applied to ordering data: the crowd's average ordering of a set of items is at least as good as, and often better than, each of the individual orderings. When developing the model, it is important to weight the answers of those with more expertise and experience more heavily than others. The model is also designed to account for individual differences, as the distributions of answers are allowed to vary, which accommodates differences between individuals while still capturing information about the objective ground truth from the crowd.

The Thurstonian Model has been shown to reliably identify the expert among a large pool of participants and can reliably measure expertise that "correlates highly with the actual accuracy of the answers" (Lee, Steyvers, de Young, & Miller, 2012, p. 3). This does mean, however, that the model requires a large pool of participants from which to draw answers. Additionally, when ground truth is not necessarily known, the model can be applied to represent ground truth as drawn from the crowd, and it can further be applied to prediction tasks. For example, the model could be used to ask individuals to predict end-of-season rankings for a sports team, and the information could then be used to identify the expert ahead of time. Furthermore, the results could be used at the end of the season to further refine the model for subsequent seasons.
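To make the crowd-ordering intuition concrete, the sketch below aggregates individual orderings into a consensus and scores each participant by their distance from it. This is a deliberately simplified stand-in for the full Thurstonian latent-variable model, which infers noisy latent item positions rather than averaging rank positions; the function names and example data are our own illustrative assumptions:

```python
from itertools import combinations

def consensus_order(rankings):
    """Borda-style aggregation: sort items by their average rank position.

    Each ranking lists the same items from first to last. The average
    position stands in for the latent crowd ordering that the full
    Thurstonian model would infer.
    """
    items = rankings[0]
    avg_rank = {item: sum(r.index(item) for r in rankings) / len(rankings)
                for item in items}
    return sorted(items, key=avg_rank.get)

def kendall_distance(a, b):
    """Count pairwise order disagreements between two orderings."""
    pos = {item: i for i, item in enumerate(b)}
    return sum(1 for x, y in combinations(a, 2) if pos[x] > pos[y])

def expertise_scores(rankings):
    """Score each participant by closeness to the crowd consensus."""
    consensus = consensus_order(rankings)
    return [kendall_distance(r, consensus) for r in rankings]

# Three participants order four items (here, US presidents by date).
rankings = [
    ["Washington", "Adams", "Jefferson", "Madison"],  # fully correct
    ["Washington", "Jefferson", "Adams", "Madison"],  # one swap
    ["Adams", "Washington", "Madison", "Jefferson"],  # two swaps
]
print(expertise_scores(rankings))  # [0, 1, 2]; lower = closer to crowd
```

In the full model, participants whose orderings sit closer to the recovered consensus receive greater weight, which is how an expert can be identified even when the true ordering is never observed directly.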

Qualitative Measures

Quantitative measurement of expertise is a worthwhile endeavor but has the potential to overlook some of the nuances of such a complex concept. Expertise is more than just performing to some gold standard. While an individual might possess extensive knowledge about a particular subject (i.e. they are a subject matter expert), they might not be able to perform tasks much better than a novice. Furthermore, in highly dynamic and complex environments that are uncertain, multidimensional, and (potentially) dangerous, quantitative measurement is difficult or near impossible. Dynamic environments stand in stark contrast to more stable environments; compare the early research into the expertise of chess players (De Groot, 1966) with the activities of operational planners (Rasmussen, Sieck, & Smart, 2009). Fortunately, dynamic environments can still provide useful knowledge about expertise through qualitative measurement, which can be employed to gather an understanding of the tasks, context, and knowledge necessary to perform as an expert. In contrast to traditional research on expertise, researchers have sought to study how experts make decisions in their natural contexts or in simulations that capture the essential elements of their environment (Zsambok, 1997). To that end, naturalistic decision making (NDM) research studies "the way people use their experience to make decisions in field settings" (Zsambok, 1997, p. 4). Many qualitative models for measuring and studying expertise were developed within the NDM research domain (e.g. mental models, the recognition/metacognition model, the recognition-primed decision model).

Mental Models

Used in several disciplines for many years, mental models are the organized knowledge structures within individuals that afford the capability to describe and make predictions about the physical environment with which they are interacting (Greca & Moreira, 2000). Mental models also enable individuals to recognize and remember relationships among the various components of their surrounding environment (Mathieu et al., 2000). Furthermore, mental models provide a vehicle for experts to perform mental simulation, thereby making effective and accurate decisions, even under time stress. In fact, Cannon-Bowers and Salas (2001) found that in time-sensitive, high-stress environments whose time constraints hinder planning, accurate mental models are crucial for optimal performance. Not only do individuals have mental models that can be expressed, so too do teams of individuals. These intra-team mental models, or shared mental models, allow teams to adapt quickly to rapidly changing environments while still performing their job effectively (Cannon-Bowers et al., 1993).

Capturing and measuring mental models can be accomplished by a variety of qualitative methodologies depending on the phenomenon being studied, such as field observations (Button & Sharrock, 2009), think-aloud protocols (Halasz & Moran, 1983), and cognitive mapping, to name a few (Kolkman, Kok, & van der Veen, 2005). Often, in order to fully capture a complex environment, it may be necessary to utilize multiple methodologies (Kraiger & Wenzel, 1997), and the researcher must be prepared to justify the choice of technique as directed by their research questions (Mohammed, Klimoski, & Rentsch, 2000). These qualitative methodologies provide an opportunity not only to capture the knowledge and expertise required to perform complex tasks in dynamic environments, but also to examine the interactions with, and the importance of, the context to the overall work.

Early research on shared mental models used the results to retrospectively explain performance differences among teams working on related tasks (Kleinman & Serfaty, 1989). However, more contemporary studies have been conducted in an effort to measure expertise and knowledge more directly. Results have shown that “teamwork and taskwork related positively to team process and performance” (Mohammed & Dumville, 2001, p. 91).

Recognition-Primed Decision Model

The recognition-primed decision (RPD) model was developed through the study of firefighters to understand how these experts make decisions under time pressure and in uncertain situations. Research for the RPD model was conducted through probing, question-based interviews with firefighters who had an average of 23 years of experience (Klein, Calderwood, & Macgregor, 1989). Results showed that participants overwhelmingly selected the first course of action they identified rather than comparing the multiplicity of options that may have been available. The tendency in critical situations involving time pressure or uncertainty is to "go with what we know" based on prior experience. In this way, expertise provides a framework for mitigating problems rather than retrieving an analog, although that is not to say that analogical reasoning is not occurring (Lipshitz, Klein, Orasanu, & Salas, 2001).

The RPD model suggests that given time pressure and uncertainty or ill-defined goals, experts are able to work forward from existing situations, rather than working backward from a goal state to the current situation. Working forward allows the expert to continually adapt their course of action with regard to the current state as patterns are recognized (Lipshitz et al., 2001). Conversely, novices and intermediate individuals tend to work backward from a desired goal state to the current situation. Unfortunately, this approach falls apart in dynamic situations as the process must be restarted multiple times to work backward from the desired goal to the now-existing circumstance.

Recognition/Metacognition Model

The RPD model relies heavily on an expert's ability to recognize patterns and intuitively make decisions in spite of time pressure and uncertainty. However, when recognition is insufficient, in the absence of discernible patterns for the current situation, the RPD model falls short. Consequently, the recognition/metacognition (R/M) model, an extension of the RPD model, adds a component that addresses the cognitive processes that occur in these unrecognizable situations (Cohen, Adelman, Tolcott, Bresnick, & Freeman, 1994). Cohen et al. (1994) suggest that when experts are presented with situations that are unrelated to prior patterns, they cognitively engage in metarecognition tasks, whereby they critique their understanding of the problem, correct their mental models, and reassess their course of action considering the time available, the cost of any errors, and their degree of uncertainty.

The R/M model provides a framework for understanding how expert decision makers test and improve the results of their pattern recognition and solution application (Cohen, Freeman, & Thompson, 1997). According to the R/M model, experts handle complex, dynamic, and unfamiliar situations by engaging in metacognitive strategies as they work forward towards an effective course of action. While this process is occurring, expert decision makers continue to evaluate their current knowledge and expertise, identifying knowledge gaps, and remain cognizant of the dangers of excessive trial and error.

DISCUSSION

While researchers disagree about the precise amount of effort and time required to become an expert, they agree that it is considerable. Expertise is gained through deliberate and consistent practice, which prepares an individual to understand the rules and functions necessary to perform a particular task, develops organized knowledge structures, and promotes cognitive reasoning skills. Increased experience gained through practice allows an expert to perform a task consistently, reliably, and accurately. Experts are able to apply these abilities in complex situations and to make connections to prior knowledge, allowing knowledge to be transferred.

Despite its importance to society as a whole, expertise is a difficult concept to measure in any generalized manner due to its domain specificity. Many measures do a fair job of measuring the construct within their particular research domain but lack generalizability to the larger population. That does not mean, however, that we should stop using these measures, as they provide valuable empirical evidence of expertise despite their specificity. The data can be used to quantifiably identify experts, which could be beneficial for identifying colleagues to collaborate with on interdisciplinary research topics or for making predictive analyses of possible experts based on their responses to specialized domain-specific problem sets.

Despite the difficulty of quantifying expertise, naturalistic decision models can provide researchers a qualitative framework for analyzing complex and dynamic contexts and tasks. Complex tasks can be observed in their natural contexts so that not only the tasks are explored, but also the interactions of individuals with each other and with the context directly. Other qualitative processes, such as think-aloud protocols, can provide the researcher a view of the decision-making process of an expert while it occurs, rather than relying on retrospective recollection of the task. Real-time data can therefore be gathered, and the cognitive processes of the expert can be further explored by the researcher as they occur. Quantitative measures typically lack this explorative nature and therefore are not flexible enough to capture the intricacies of complex environments.

FUTURE RESEARCH AGENDA

One of the fundamental problems surrounding the research of experts is the practicalities and logistics involved in such endeavors. When performing research within a specific domain, recruiting expert participants can be costly and inefficient. Recruitment becomes increasingly more difficult if a study is to be replicated across multiple domains. In an effort to alleviate these recruitment challenges and to enable comparisons between novices and experts, we present a framework for the comparison of novices and experts within a variety of domains. To address the difficulty of recruiting experts and then comparing the results with novices, we propose the creation of "convenient experts" (see Figure 8.1). Typically, in academic research students are recruited as participants as a matter of convenience due to their (1) quick and easy availability and (2) cost effectiveness. In the early stages of research, this has been shown to be sufficient to at least explore concepts at a cursory level. However, since expertise is domain specific, finding students with expertise in a particular domain (e.g. the chemical processing industry) is problematic at best. Hence, the problem is that the results of many experiments and studies are predicated on novice—not expert—performance. What would be useful is expert performance in studies that can be compared with novice performance in meaningful ways.

FIGURE 8.1 Creation of "convenient experts."

Studies have shown that real-world simulations modeled after a context provide a rich research environment for performing empirical research within a controlled laboratory setting. Furthermore, simulations provide flexibility in testing scenarios, hypotheses, and policies. Additionally, simulations can be used as an educational platform for training (Gallagher et al., 2005; Johansson, Trnka, Granlund, & Gotmar, 2010), as a decision-making tool (Sheridan & Parasuraman, 2005), and as a planning model (Paul, Reddy, & DeFlitch, 2010). For example, the Living Lab Framework (LLF) developed by McNeese and colleagues (McNeese, Mancuso, McNeese, Endsley, & Forster, 2013) has successfully been deployed in multiple research studies over the last 15 years to further a variety of research agendas. The LLF methodology is an ecological psychology approach that presents a flexible framework useful for designing and evaluating technology in an authentic simulated scaled-world environment. It is useful for gaining a deep understanding of real-world contexts, identifying problems, creating relevant solutions, and then evaluating them. For instance, the framework has been utilized in several studies to analyze such varying domains as crisis management (Hamilton et al., 2010; Jones, 2007; McNeese et al., 2005), military command and control (Hellar & Hall, 2009; Jones, McNeese, Connors, Jefferson, & Hall, 2004), and cybersecurity (Mancuso, Minotra, Giacobe, McNeese, & Tyworth, 2012; McNeese et al., 2013; Minotra & McNeese, 2017).

In addition to being deployed across a multitude of domains, the framework has also been instrumental in conducting research from varying theoretical perspectives using a plethora of methodologies. For example, team mental models and situation awareness have been studied using intelligent group interfaces and fuzzy cognitive maps. Other theories explored include transactive memory, task prioritization, information overload, and workload, using virtual storytelling, geo-collaborative interfaces, shared workspaces, and cyber-visualizations (McNeese et al., 2013).

To alleviate the difficulties associated with recruiting experts, we propose a process whereby we create domain-specific experts utilizing students. Creating these experts will be accomplished through a regimented training program that takes advantage of real-world simulations modeled after the target research domain.2 The training will consist of providing the students with the foundational information needed to perform tasks normally reserved for experts. Following the completion of the declarative knowledge training, several simulations will be given to familiarize participants with the tasks and how they are performed (i.e. procedural knowledge).

For example, NeoCITIES, a scaled-world environment developed by the MINDS Group at Penn State University, could be used for the simulation portion of the research. NeoCITIES has enjoyed a lengthy career as a testbed for research in many domains covering numerous theoretical perspectives. The underlying NeoCITIES architecture provides an adaptive framework, giving researchers broad latitude in modifying the user interface to support their particular research interests.

NeoCITIES has the capability to calculate team-based and individual performance measures. These measures can be utilized to identify experts once training and practice no longer provide measurable benefit and progression in performance. Once experts have been identified,3 they can then be used for comparative studies alongside novices who receive limited training and exposure. Being able to run comparative analyses on novices and experts provides a basis for testing interface design changes and their effects on the performance of both experts and novices.
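As one illustration of how that plateau point might be detected from repeated session scores, the sketch below compares mean performance across two adjacent windows of sessions and flags when the relative gain falls below a threshold. The function name, window size, and threshold are illustrative assumptions, not part of the NeoCITIES scoring model:

```python
def plateau_reached(scores, window=3, min_gain=0.02):
    """Flag when session-over-session improvement has leveled off.

    scores: chronological performance scores from repeated simulation
    sessions. A plateau is declared when the mean of the most recent
    `window` sessions exceeds the mean of the preceding `window`
    sessions by less than `min_gain` (relative improvement).
    """
    if len(scores) < 2 * window:
        return False  # not enough sessions to compare two windows
    recent = sum(scores[-window:]) / window
    previous = sum(scores[-2 * window:-window]) / window
    return (recent - previous) / max(previous, 1e-9) < min_gain

# Example: scores rise steeply, then level off after session 7.
sessions = [40, 55, 66, 74, 80, 83, 84, 84.1, 84.2, 84.3, 84.4]
print(plateau_reached(sessions))  # True: recent gains are under 2%
```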

CONCLUSION

The demographics of the modern workplace are changing as the Baby Boomer generation nears retirement (Kuyken, Ebrahimi, & Saives, 2009; Spitulnik, 2006; Vu, 2006). In environments increasingly supported by complex socio-technical systems and characterized by interdependent workflow processes, ensuring the continuity of employee skills and knowledge is critical (Hinds et al., 2001; Vashisth, Kumar, & Chandra, 2010). If left unattended, this gap in specialized knowledge could lead to the loss of human capital, talent shortages, and increased costs associated with training younger talent. In a 2015 Center for Energy Workforce Development (CEWD) survey,4 over 35% of the workforce was age 53 and above. As such, organizations should be preparing a means of capturing the skills and knowledge already possessed by their aging workforce (Davenport & Prusak, 2005; Spender & Grant, 1996). In industries that rely heavily on teams to monitor and maintain critical system infrastructure, such as the chemical processing industry (e.g. natural gas and offshore drilling operations), military command and control, and aviation environments, it becomes even more imperative that steps be taken now to prevent loss of service and allow continued safe operation. The expertise framework provided here can be utilized to examine this and other challenging, complex, and dynamic domains of practice in hopes of developing a deeper understanding of the challenges of learning and training within them.

To that end, within this chapter we have provided a framework and methodology for creating "convenient experts" composed of students, who are convenient to recruit in higher education environments. First, we expose students to declarative knowledge training, providing a foundational basis for performing domain-specific tasks. Utilizing specialized software, such as NeoCITIES, a simulated scaled-world environment, then offers a context-specific environment for performing laboratory-controlled research whereby participants are repeatedly exposed to the tasks (i.e. procedural knowledge). Their progress can then be tracked and used for analyses concerning their level of expertise. Upon establishing "experts," further study can be done comparing the "convenient experts" to novices regardless of the research domain of interest.

NOTES

1. While literature exists that describes expertise in a nonlinear fashion (see Araújo et al., 2010), in an effort to remain parsimonious this chapter will use the more traditional novice/expert distinction unless noted otherwise.
2. Given that the natural gas industry is plagued by an impending crisis of retirement, and thereby a loss of expertise, we will be conducting research in this domain. It is our goal to understand this complex and dynamic environment so that knowledge acquisition and transfer can be facilitated through properly developed training protocols employing a real-world simulation of the environment that leverages current experts training alongside newly hired employees (i.e. novices), further supplemented with an electronic cognitive aid.
3. One phase of the study will be to create "experts" through multiple exposures to the simulated environment and with training. The NeoCITIES scoring model will be used to identify the point at which performance plateaus, indicating that a participant has reached the point of limited progression and is therefore now an expert (see Figure 8.1).
4. www.cewd.org/surveyreport/CEWD2015SurveySummary.pdf

REFERENCES

Abelson, R. P. (1981). Psychological status of the script concept. American Psychologist, 36(7), 715-729. https://doi.org/10.1037/0003-066X.36.7.715

Adelson, B. (1981). Problem solving and the development of abstract categories in programming languages. Memory & Cognition, 9(4), 422-433. https://doi.org/10.3758/BF03197568

Adelson, B. (1984). When novices surpass experts: The difficulty of a task may increase with expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(3), 483-495. https://doi.org/10.1037/0278-7393.10.3.483

Alexander, P. A. (2005). Teaching towards expertise. British Journal of Educational Psychology, 2(3), 29-45.

Anderson, J. R. (1993). Rules of the mind. New York, NY: Psychology Press. https://doi.org/10.4324/9781315806938

Araújo, D., Fonseca, C., Davids, K., Garganta, J., Volossovitch, A., Brandão, R., & Krebs, R. (2010). The role of ecological constraints on expertise development. Talent Development and Excellence, 2(2), 165-179.

Baker, J., & Young, B. (2014). 20 years later: Deliberate practice and the development of expertise in sport. International Review of Sport and Exercise Psychology, 7(1), 135-157. https://doi.org/10.1080/1750984X.2014.896024

Bean, K. L. (1938). An experimental approach to the reading of music. Psychological Monographs, 50(6), i-80. https://doi.org/10.1037/h0093540

Bereiter, C., & Scardamalia, M. (1986). Educational relevance of the study of expertise. Interchange, 17(2), 10-19.

Bishop, J. (1989). Occupational training in high school: When does it pay off? Economics of Education Review, 8(1), 1-15. https://doi.org/10.1016/0272-7757(89)90031-9

Black, J. B., Segal, A., Vitale, J., & Fadjo, C. (2012). Embodied cognition and learning environment design. In D. Jonassen and S. Lamb (Eds.), Theoretical foundations of student-centered learning environments (Vol. 2). New York, NY: Routledge.

Boeva, V., Krusheva, M., & Tsiporkova, E. (2012). Measuring expertise similarity in expert networks. In 2012 6th IEEE International Conference Intelligent Systems (pp. 53-57). Sofia, Bulgaria: IEEE. https://doi.org/10.1109/IS.2012.6335190

Bowman, J. M., & Wittenbaum, G. M. (2012). Time pressure affects process and performance in hidden-profile groups. Small Group Research, 43(3), 295-314. https://doi.org/10.1177/1046496412440055

Bransford, J. D., Franks, J. J., Vye, N. J., & Sherwood, R. D. (1989). New approaches to instruction: Because wisdom can't be told. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 470-497). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511529863.022

Button, G., & Sharrock, W. (2009). Studies of work and the workplace in HCI: Concepts and techniques. In J. M. Carroll (Ed.), Synthesis lectures on human-centered informatics (Vol. 2, pp. 1-96). Williston, VT: Morgan & Claypool. https://doi.org/10.2200/S00177ED1V01Y200903HCI003

Cannon-Bowers, J. A., & Salas, E. (2001). Reflections on shared cognition. Journal of Organizational Behavior, 22(2), 195-202. https://doi.org/10.1002/job.82

Cannon-Bowers, J. A.. Salas. E.. Blickensderfer, E.. & Bowers. C. A. (1998). The impact of cross-training and workload on team functioning: A replication and extension of initial findings. Human Factors, 40(1). 92-101. https://doi.org/10.1518/001872098779480550

Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In Individual and group decision making: Current issues (Vol. 221, pp. 221-246). Hillsdale, NJ: Lea Lawrence Erlbaum Associates.

Carraher, T. N., Carraher, D. W„ & Schliemann. A. D. (1985). Mathematics in the streets and in schools. British Journal of Developmental Psychology, 3(1), 21-29. https://doi. org/10.111 l/j.2O44-835X.1985.tbOO951.x

Chase, W. G., & Simon. H. A. (1973a). Perception in chess. Cognitive Psychology, 4(1), 55-81. https://doi.org/10.1016/0010-0285(73)90004-2

Chase, W. G., & Simon. H. A. (1973b). The mind’s eye in chess. In W. G. Chase (Ed.). Visual information processing (pp. 215-281). New York, NY: Academic Press. https://doi. org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H. H., Feltovich, P. J., & Glaser, R. (1981). Catagorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121-152. https://doi. org/10.1207/sl 5516709cog0502_2

Chi. M. T. H. H., Glaser. R.. & Rees. E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.) Advances in the psychology of human intelligence (Vol. 1, pp. 7-75). Hillsdale, NJ: Erlbaum.

Choi, J.-L, & Hannafin, M. (1995). Situated cognition and learning environments: Roles, structures, and implications for design. Source: Educational Technology Research and Development, 43(2). 53-69. https://doi.org/10.1007/Bf02300472

Cochran. W. G. (1943). The comparison of different scales of measurement for experimental results. The Annal of Mathematical Statistics, 14(3), 206-216.

Cohen. M. S.. Adelman, L„ Tolcott, M. A.. Bresnick. T. A., & Freeman. M. F. (1994). A cognitive frameworkfor battlefield commanders’ situation assessment. Arlington, VA: Cognitive Technologies Inc.

Cohen, M. S., Freeman, J. T.. & Thompson, В. B. (1997). Training the naturalistic decision maker. In C. E. Zsatnbok & G. Klein (Eds.), Naturalistic decision making (pp. 257-268). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Davenport, T. H., & Prusak, L. (2005). Working knowledge: How organizations manage what they know [Book review]. IEEE Engineering Management Review, 31(4), 301. https://doi.org/10.1109/EMR.2003.1267012

De Groot, A. D. (1965). Thought and choice in chess. The Hague: Mouton.

De Groot, A. D. (1966). Perception and memory versus thought: Some old ideas and recent findings. In B. Kleinmuntz (Ed.), Problem solving: Research, method and theory (pp. 19-51). New York, NY: John Wiley.

Dochy, F., Segers, M., Van den Bossche, P., & Gijbels, D. (2003). Effects of problem-based learning: A meta-analysis. Learning and Instruction, 13(5), 533-568. https://doi.org/10.1016/S0959-4752(02)00025-7

Entin, E. E., & Serfaty, D. (1999). Adaptive team coordination. Human Factors, 41(2), 312-325. https://doi.org/10.1518/001872099779591196

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406. https://doi.org/10.1037/0033-295X.100.3.363

Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annual Review of Psychology, 47, 273-305.

Ericsson, K. A., Prietula, M. J., & Cokely, E. T. (2007). The making of an expert. Harvard Business Review, 1-9.

Evans, G. W., & Gärling, T. (1991). Environment, cognition, and action: The need for integration. In T. Gärling & G. W. Evans (Eds.), Environment, cognition, and action: An integrated approach (pp. 3-14). New York: Oxford University Press.

Faraj, S., & Sproull, L. (2000). Coordinating expertise in software development teams. Management Science, 46(12), 1554-1568. https://doi.org/10.1287/mnsc.46.12.1554.12072

Feltovich, P. J., Ford, K. M., & Hoffman, R. R. (Eds.). (1997). Expertise in context. Menlo Park, CA: AAAI Press.

Fiore, S. M., Salas, E., Cannon-Bowers, J. A., & London, M. (2001). Group dynamics and shared mental model development. In How people evaluate others in organizations (pp. 309-336). Mahwah, NJ: Lawrence Erlbaum Associates.

Fischer, A., Greiff, S., & Funke, J. (2012). The process of solving complex problems. Journal of Problem Solving, 4(1), 19-42. https://doi.org/10.7771/1932-6246.1118

Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G., . . . Satava, R. M. (2005). Virtual reality simulation for the operating room. Annals of Surgery, 241(2), 364-372. https://doi.org/10.1097/01.sla.0000151982.85062.80

Gentner, D. R. (1983). The acquisition of typewriting skill. Acta Psychologica, 54(1-3), 233-248. https://doi.org/10.1016/0001-6918(83)90037-9

Gentner, D. R. (1988). Expertise in typewriting. In The nature of expertise (pp. 1-21). Mahwah, NJ: Lawrence Erlbaum Associates.

Germain, M.-L., & Tejeda, M. J. (2012). A preliminary exploration on the measurement of expertise: An initial development of a psychometric scale. Human Resource Development Quarterly, 23(2), 203-232. https://doi.org/10.1002/hrdq.21134

Gick, M. L. (1986). Problem-solving strategies. Educational Psychologist, 21(1-2), 99-120. https://doi.org/10.1207/s15326985ep2101&2_6

Gladwell, M. (2008). Outliers: The story of success. New York: Little, Brown and Company.

Goldman, S. R., & Petrosino, A. J. (1999). Design principles for instruction in content domains: Lessons from research on expertise and learning. In F. T. Durso (Ed.), Handbook of applied cognition (pp. 595-628). Chichester: Wiley.

Goolsby, T. W. (1994). Eye movement in music reading: Effects of reading ability, notational complexity, and encounters. Music Perception: An Interdisciplinary Journal, 12(1), 77-96. https://doi.org/10.2307/40285756

Greca, I. M., & Moreira, M. A. (2000). Mental models, conceptual models, and modelling. International Journal of Science Education, 22(1), 1-11. https://doi.org/10.1080/095006900289976

Groen, G. J., & Patel, V. L. (1988). The relationship between comprehension and reasoning in medical expertise. In The nature of expertise (pp. 287-310). Mahwah, NJ: Lawrence Erlbaum Associates.

Halasz, F. G., & Moran, T. P. (1983). Mental models and problem solving in using a calculator. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI '83, 212-216. https://doi.org/10.1145/800045.801613

Hambrick, D. Z., & Hoffman, R. R. (2016). Expertise: A second look. IEEE Intelligent Systems, 31(4), 50-55. https://doi.org/10.1109/MIS.2016.69

Hambrick, D. Z., Oswald, F. L., Altmann, E. M., Meinz, E. J., Gobet, F., & Campitelli, G. (2014). Deliberate practice: Is that all it takes to become an expert? Intelligence, 45, 34-45. https://doi.org/10.1016/j.intell.2013.04.001

Hamilton, K., Mancuso, V., Minotra, D., Hoult, R., Mohammed, S., Parr, A., . . . McNeese, M. (2010). Using the NeoCITIES 3.1 simulation to study and measure team cognition. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54(4), 433-437. https://doi.org/10.1177/154193121005400434

Hayes, J. R. (1989). Cognitive processes in creativity. Handbook of Creativity, 7(18), 135-145.

Hellar, D. B., & Hall, D. L. (2009). NeoCITIES: An experimental test-bed for quantifying the effects of cognitive aids on team performance in C2 situations. Proceedings of SPIE—The International Society for Optical Engineering, 7348. https://doi.org/10.1117/12.818797

Helsen, W., & Pauwels, J. M. (1993). The relationship between expertise and visual information processing in sport. Advances in Psychology, 102, 109-134. https://doi.org/10.1016/S0166-4115(08)61468-5

Hesketh, A. J. (2000). Recruiting an elite? Employers' perceptions of graduate education and training. Journal of Education and Work, 13(3), 245-271. https://doi.org/10.1080/713676992

Hinds, P. J., Patterson, M., & Pfeffer, J. (2001). Bothered by abstraction: The effect of expertise on knowledge transfer and subsequent novice performance. Journal of Applied Psychology, 86(6), 1232-1243. https://doi.org/10.1037/0021-9010.86.6.1232

Hmelo-Silver, C. E., & Pfeffer, M. G. (2004). Comparing expert and novice understanding of a complex system from the perspective of structures, behaviors, and functions. Cognitive Science, 28(1), 127-138. https://doi.org/10.1016/S0364-0213(03)00065-X

Hoffman, R. R. (1998). How can expertise be defined? Implications of research from cognitive psychology. In R. Williams, W. Faulkner, & J. Fleck (Eds.), Exploring expertise (pp. 81-100). London: Macmillan Press. https://doi.org/10.1007/978-1-349-13693-3_4

Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196. https://doi.org/10.1145/353485.353487

Hutchins, E. (1995). How a cockpit remembers its speeds. Cognitive Science, 19(3), 265-288. https://doi.org/10.1016/0364-0213(95)90020-9

Huys, R., Daffertshofer, A., & Beek, P. J. (2004). Multiple time scales and multiform dynamics in learning to juggle. Motor Control, 8(2), 188-212. https://doi.org/10.1123/mcj.8.2.188

Johansson, B. J. E., Trnka, J., Granlund, R., & Götmar, A. (2010). The effect of a geographical information system on performance and communication of a command and control organization. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447310903498981

Jones, R. E. T. (2007). The development of an emergency crisis management simulation to assess the impact a fuzzy cognitive map decision-aid has on team cognition and team decision-making. Dissertation Abstracts International: Section B: The Sciences and Engineering, 67, 4521.

Jones, R. E. T., McNeese, M. D., Connors, E. S., Jefferson, T., & Hall, D. L. (2004). A distributed cognition simulation involving homeland security and defense: The development of NeoCITIES. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 631-634. https://doi.org/10.1177/154193120404800376

Kaminski, J. A., Sloutsky, V. M., & Heckler, A. F. (2008). Learning theory: The advantage of abstract examples in learning math. Science, 320(5875), 454-455. https://doi.org/10.1126/science.1154659

Kaplan, R. (1991). Environmental description and prediction: A conceptual analysis. In T. Gärling & G. W. Evans (Eds.), Environment, cognition, and action: An integrated approach (pp. 19-34). New York: Oxford University Press.

Kim, J. W., Ritter, F. E., & Koubek, R. J. (2013). An integrated theory for improved skill acquisition and retention in the three stages of learning. Theoretical Issues in Ergonomics Science, 14(1), 22-37. https://doi.org/10.1080/1464536X.2011.573008

Klein, G. A. (1993). A Recognition-Primed Decision (RPD) model of rapid decision making. New York: Ablex Publishing Corporation.

Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the fire ground. Proceedings of the Human Factors Society Annual Meeting, 30(6), 576-580. https://doi.org/10.1177/154193128603000616

Klein, G. A., Calderwood, R., & Macgregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man and Cybernetics, 19(3), 462-472. https://doi.org/10.1109/21.31053

Klein, G., & Borders, J. (2016). The ShadowBox approach to cognitive skills training: An empirical evaluation. Journal of Cognitive Engineering and Decision Making, 10(3), 268-280. https://doi.org/10.1177/1555343416636515

Kleinman, D. L., & Serfaty, D. (1989). Team performance assessment decision making. In Proceedings of the symposium on Interactive Networked Simulation for Training (pp. 22-27). Orlando, FL: University of Central Florida.

Kolkman, M. J., Kok, M., & van der Veen, A. (2005). Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management. Physics and Chemistry of the Earth, Parts A/B/C, 30(4-5), 317-332. https://doi.org/10.1016/j.pce.2005.01.002

Kraiger, K., & Wenzel, L. H. (1997). Conceptual development and empirical evaluation of measures of shared mental models as indicators of team effectiveness. Team Performance Assessment and Measurement: Theory, Methods, and Applications, 63-84.

Kuyken, K., Ebrahimi, M., & Saives, A.-L. (2009). Intergenerational knowledge transfer in high-technological companies: A comparative study between Germany and Quebec. In Proceedings of the annual conference of the Administrative Sciences Association of Canada (ASAC), Niagara Falls, 6-9 June. Montréal, Québec, Canada: Emerald.

Larson, J. R., Christensen, C., Abbott, A. S., & Franz, T. M. (1996). Diagnosing groups: Charting the flow of information in medical decision-making teams. Journal of Personality and Social Psychology, 71(2), 315-330. https://doi.org/10.1037/0022-3514.71.2.315

Lee, M. D., Steyvers, M., de Young, M., & Miller, B. (2012). Inferring expertise in knowledge and prediction ranking tasks. Topics in Cognitive Science, 4(1), 151-163. https://doi.org/10.1111/j.1756-8765.2011.01175.x

Lehmann, A. C., & Ericsson, K. A. (1996). Performance without preparation: Structure and acquisition of expert sight-reading and accompanying performance. Psychomusicology: A Journal of Research in Music Cognition, 15(1-2), 1-29. https://doi.org/10.1037/h0094082

Lewandowsky, S., & Kirsner, K. (2000). Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition, 28(2), 295-305. https://doi.org/10.3758/BF03213807

Lewis, K. (2003). Measuring transactive memory systems in the field: Scale development and validation. Journal of Applied Psychology, 88(4), 587-604. https://doi.org/10.1037/0021-9010.88.4.587

Lipshitz, R., Klein, G., Orasanu, J., & Salas, E. (2001). Taking stock of naturalistic decision making. Journal of Behavioral Decision Making, 14(5), 331-352. https://doi.org/10.1002/bdm.381

Lu, L., Yuan, Y. C., & McLeod, P. L. (2012). Twenty-five years of hidden profiles in group decision making: A meta-analysis. Personality and Social Psychology Review, 16(1), 54-75. https://doi.org/10.1177/1088868311417243

Mancuso, V. F., Minotra, D., Giacobe, N., McNeese, M., & Tyworth, M. (2012). idsNETS: An experimental platform to study situation awareness for intrusion detection analysts. In 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, CogSIMA 2012 (pp. 73-79). New Orleans, LA: IEEE. https://doi.org/10.1109/CogSIMA.2012.6188411

Marks, M. A., Sabella, M. J., Burke, C. S., & Zaccaro, S. J. (2002). The impact of cross-training on team effectiveness. Journal of Applied Psychology, 87(1), 3-13. https://doi.org/10.1037/0021-9010.87.1.3

Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85(2), 273-283. https://doi.org/10.1037/0021-9010.85.2.273

McLeod, P., & Jenkins, S. (1991). Timing accuracy and decision time in high-speed ball games. International Journal of Sport Psychology, 22(3-4), 279-295.

McNeese, M. D., Bains, P., Brewer, I., Brown, C., Connors, E. S., Jefferson, T., & Terrell, I. (2005). The NeoCITIES simulation: Understanding the design and experimental methodology used to develop a team emergency management simulation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(3), 591-594. https://doi.org/10.1177/154193120504900380

McNeese, M. D., Mancuso, V., McNeese, N., Endsley, T., & Forster, P. (2013). Using the living laboratory framework as a basis for understanding next-generation analyst work. In B. D. Broome, D. L. Hall, & J. Llinas (Eds.), SPIE 8758, next-generation analyst (p. 87580F). Baltimore, MD: International Society for Optics and Photonics. https://doi.org/10.1117/12.2016514

Mehta, S., Suto, I., Elliott, G., & Rushton, N. (2011). Why study economics? Perspectives from 16-19-year-old students. Citizenship, Social and Economics Education, 10(2-3), 199-212.

Minotra, D., & McNeese, M. D. (2017). Predictive aids can lead to sustained attention decrements in the detection of non-routine critical events in event monitoring. Cognition, Technology & Work, 19(1), 161-177.

Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge framework: Expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior, 22(2), 89-106. https://doi.org/10.1002/job.86

Mohammed, S., Klimoski, R., & Rentsch, J. R. (2000). The measurement of team mental models: We have no shared schema. Organizational Research Methods, 3(2), 123-165. https://doi.org/10.1177/109442810032001

Moreland, R. L., Argote, L., & Krishnan, R. (1996). Socially shared cognition at work: Transactive memory and group performance. In What's social about social cognition? Research on socially shared cognition in small groups (pp. 57-84). Thousand Oaks, CA: Sage.

Murphy, G. L., & Wright, J. C. (1984). Changes in conceptual structure with expertise: Differences between real-world experts and novices. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(1), 144-155. https://doi.org/10.1037/0278-7393.10.1.144

Nassehi, A. (2004). What do we know about knowledge? An essay on the knowledge society. The Canadian Journal of Sociology, 29(3), 439-449. https://doi.org/10.1353/cjs.2004.0043

Newell, A. (1994). Unified theories of cognition. Cambridge, MA: Harvard University Press.

Newell, A., & Rosenbloom, P. S. (1981). Mechanisms of skill acquisition and the law of practice. In J. Anderson (Ed.), Cognitive skills and their acquisition (pp. 1-56). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Paul, S. A., Reddy, M. C., & DeFlitch, C. J. (2010). A systematic review of simulation studies investigating emergency department overcrowding. Simulation, 86(8-9), 559-571. https://doi.org/10.1177/0037549710360912

Phelps, R. H., & Shanteau, J. (1978). Livestock judges: How much information can an expert use? Organizational Behavior and Human Performance, 21(2), 209-219. https://doi.org/10.1016/0030-5073(78)90050-8

Rasmussen, L. J., Sieck, W. R., & Smart, P. R. (2009). What is a good plan? Cultural variations in expert planners' concepts of plan quality. Journal of Cognitive Engineering and Decision Making, 3(3), 228-252. https://doi.org/10.1518/155534309X474479

Reagans, R., Argote, L., & Brooks, D. (2005). Individual experience and experience working together: Predicting learning rates from knowing who knows what and knowing how to work together. Management Science, 51(6), 869-881. https://doi.org/10.1287/mnsc.1050.0366

Royer, J. M., Carlo, M. S., Dufresne, R., & Mestre, J. (1996). The assessment of levels of domain expertise while reading. Cognition and Instruction, 14(3), 373-408. https://doi.org/10.1207/s1532690xci1403_4

Salthouse, T. A. (1984). Effects of age and skill in typing. Journal of Experimental Psychology: General, 113(3), 345-371. https://doi.org/10.1037/0096-3445.113.3.345

Sanderson, P. (1989). Verbalizable knowledge and skilled task performance: Association, dissociation, and mental models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(4), 729-747. https://doi.org/10.1037/0278-7393.15.4.729

Serfaty, D., MacMillan, J., Entin, E. E., & Entin, E. B. (1997). The decision-making expertise of battle commanders. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 233-246). New York, NY: Psychology Press.

Shanteau, J., Weiss, D. J., Thomas, R. P., & Pounds, J. C. (2002). Performance-based assessment of expertise: How to decide if someone is an expert or not. European Journal of Operational Research, 136(2), 253-263. https://doi.org/10.1016/S0377-2217(01)00113-8

Shattuck, L. G., & Miller, N. L. (2004). A process tracing approach to the investigation of situated cognition. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 658-662. https://doi.org/10.1177/154193120404800382

Shattuck, L. G., & Miller, N. L. (2006). Extending naturalistic decision making to complex organizations: A dynamic model of situated cognition. Organization Studies, 27(7), 989-1009. https://doi.org/10.1177/0170840606065706

Sheridan, T. B., & Parasuraman, R. (2005). Human-automation interaction. Reviews of Human Factors and Ergonomics, 1(1), 89-129. https://doi.org/10.1518/155723405783703082

Simon, D. P., & Simon, H. A. (1978). Individual differences in solving physics problems. Children's Thinking: What Develops, 325-348.

Simon, H. A., & Chase, W. G. (1973). Skill in chess. American Scientist, 61, 394-403. https://doi.org/10.1511/2011.89.106

Smith, E. R., & Collins, E. C. (2010). Situated cognition. In B. Mesquita, L. F. Barrett, & E. R. Smith (Eds.), The mind in context (pp. 126-148). New York: The Guilford Press.

Smith-Jentsch, K. A., Kraiger, K., Cannon-Bowers, J. A., & Salas, E. (2009). Do familiar teammates request and accept more backup? Transactive memory in air traffic control. Human Factors, 51(2), 181-192. https://doi.org/10.1177/0018720809335367

Spender, J.-C., & Grant, R. M. (1996). Knowledge of the firm: Overview. Strategic Management Journal, 17(S2), 5-9. https://doi.org/10.1002/smj.4250171103

Spitulnik, J. J. (2006). Cognitive development needs and performance in an aging workforce. Organization Development Journal, 24(3), 44-53.

Steyvers, M., Miller, B., Lee, M., & Hemmer, P. (2009). The wisdom of crowds in the recollection of order information. In Twenty-third annual conference on Neural Information Processing Systems (pp. 1785-1793). Red Hook, NY: Curran Associates.

Thomas, R. P., Willem, B., Shanteau, J., Raacke, J., & Friel, B. (2001). CWS applied to controllers in a high fidelity simulation of ATC. In International symposium on aviation psychology. Columbus, OH: Ohio State University.

Tracey, J. B., Hinkin, T. R., Tran, T. L. B., Emigh, T., Kingra, M., Taylor, J., & Thorek, D. (2015). A field study of new employee training programs. Cornell Hospitality Quarterly, 56(4), 345-354. https://doi.org/10.1177/1938965514554211

Urick, M. (2017). Adapting training to meet the preferred learning styles of different generations. International Journal of Training and Development, 21(1), 53-59. https://doi.org/10.1111/ijtd.12093

Uzzi, B. (1996). The sources and consequences of embeddedness for the economic performance of organizations: The network effect. American Sociological Review, 61(4), 674-698. https://doi.org/10.2307/2096399

Uzzi, B., & Lancaster, R. (2003). Relational embeddedness and learning: The case of bank loan managers and their clients. Management Science, 49(4), 383-399. https://doi.org/10.1287/mnsc.49.4.383.14427

van Gog, T., Paas, F., & van Merrienboer, J. J. G. (2004). Process-oriented worked examples: Improving transfer performance through enhanced understanding. Instructional Science, 32(1-2), 83-98. https://doi.org/10.1023/B:TRUC.0000021810.70784.b0

VanLehn, K. (1996). Cognitive skill acquisition. Annual Review of Psychology, 47, 513-539. https://doi.org/10.1146/annurev.psych.47.1.513

Vashisth, R., Kumar, R., & Chandra, A. (2010). Barriers and facilitators to knowledge management: Evidence from selected Indian universities. The IUP Journal of Knowledge Management, VIII(4), 7-27.

Volpe, C., Cannon-Bowers, J., Salas, E., & Spector, P. (1996). The impact of cross-training on team functioning: An empirical investigation. Human Factors, 38(1), 87-100.

Voss, J. F., Greene, T. R., Post, T. A., & Penner, B. C. (1983). Problem-solving skill in the social sciences. Psychology of Learning and Motivation, 17, 165-213. https://doi.org/10.1016/S0079-7421(08)60099-7

Vu, Y. (2006). Unprepared for aging workers. Canadian HR Reporter, 19(14).

Weinstein, B. D. (1993). What is an expert? Theoretical Medicine, 14, 57-73. https://doi.org/10.1007/BF00993988

Weiss, D. J., & Shanteau, J. (2003). Empirical assessment of expertise. Human Factors, 45(1), 104-116. https://doi.org/10.1518/hfes.45.1.104.27233

Wilson, A. L. (1993). The promise of situated cognition. New Directions for Adult and Continuing Education, 1993(57), 71-79. https://doi.org/10.1002/ace.36719935709

Wright, G., & Bolger, F. (1992). Expertise and decision support. New York, NY: Plenum Press. https://doi.org/10.1007/b102410

Zsambok, C. E. (1997). Naturalistic decision making: Where are we now? In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 3-16). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

 