A Method for Rigorously Assessing Causal Mental Models to Support Distributed Team Cognition

Jill L. Drury, Mark S. Pfaff, and Gary L. Klein

CONTENTS

Introduction

Task Knowledge, Team Knowledge, and Temporal Knowledge

    Task Knowledge

    Team Knowledge

    Temporal Knowledge

Capturing and Comparing Mental Models Using DESIM

    Overview

    Eliciting Causal Mental Models and Representing Them Interactively

    Assigning Edge Weights

    Examining Edge-Weight Differences and Viewing Results

Example Use of DESIM

Implications for Support to Distributed Team Cognition

    DESIM and the Coordination Space Framework

    General Implications

References

INTRODUCTION

The concept of team cognition has evolved as the global culture of teamwork has continuously changed. A variety of non-traditional distributed teaming situations now occur frequently, such as:

  • A cross-organizational team composed of individuals scattered across time zones who never meet face-to-face and have different organizational reporting structures
  • A team including Amazon Mechanical Turk workers as temporary colleagues from the “gig” economy, at least some of whom are unfamiliar with corporate cultural norms, and others having a variety of experiences with corporate cultures
  • Assistance on tasks rendered by pseudonym-identified contributors working from unknown locations who post helpful comments, observations, or answers to questions in response to social media blog entries describing a work challenge

Such situations have called into question the classic definitions of teams: “a distinguishable set of two or more people who interact, dynamically, interdependently, and adaptively toward a common and valued goal/objective/mission” (Salas, Dickinson, Converse, & Tannenbaum, 1992, p. 4). The primary sticking point pertains to the concept of a “common and valued goal.” While all individuals have goals when they engage in activities (even if the goal is to avoid being bored), people who work together may not have the same goals. For example, the pseudonym-identified contributor described in the third situation above has the goal of demonstrating his or her expertise and garnering “likes” and other forms of reputational currency in an online environment, whereas the person being helped has the goal of resolving a specific business challenge.

Another issue regarding the definition of teaming arises from the nature of team members’ interdependence: specifically, its level, constancy, and/or predictability. In traditional teaming arrangements, there is an expectation that members will engage with each other through at least somewhat predictable and/or pre-negotiated dependencies on each other’s work processes and products. In a social media environment, unpredictable interactions and dependencies may arise as different, perhaps previously unknown, people choose to engage with different challenges based on the relevance of each work task to their expertise and interests.

Some would argue that the lack of common goals and variable interdependence necessarily mean that the set of people do not constitute a team, but instead are members of a group (a set of individuals who coordinate their efforts), an online distributed group, or a collective (a large, undifferentiated set of people with diverse membership characteristics). However, given the fluidity of interactivity and the potential emergence over time of common goals from online social interaction, the definitions of teams and teamwork are softening (McNeese, 2019). Besides crowdsourcing and other forms of social online communication, the current culture of the gig economy, rapid change, and technological innovation may require stretching the teaming definitions that have been applied historically. It is now quite common, in fact, to speak even of “human-machine teaming” despite the fact that machines do not have the sentience and individual volition that are normally deemed necessary to form and adapt goals. Thus, when we speak of “distributed teams” in the context of this chapter we are taking a broad view of team characteristics, knowing that some may disagree with our inclusive stance. This broadened definition of distributed teams means that the conception of a teaming or shared mental model among team members may need to be quite different than our traditional notion of it. In turn, the ways we model and measure these kinds of teaming entities may have to change or be addressed in new ways. Accordingly, this chapter offers a new way to model and measure distributed team cognition.

The broadening of distributed team characteristics and configurations means that team members’ viewpoints, backgrounds, motivations, and loci of attention are almost certainly becoming more diverse. In the aggregate, these characteristics may result in team members belonging to different cognitive cultures (Sieck, Rasmussen, & Smart, 2010). This broadening increases the need to make salient components of distributed team cognition more transparent, so that team members’ diverse conceptualizations of the situation and possible decision choices can be readily perceived and thus more fully considered (Endsley, Reep, McNeese, & Forster, 2015). These conceptualizations of options and their potential outcomes can be described as mental models, most often defined as a “mechanism whereby humans generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states” (Rouse & Morris, 1986, p. 360).

To reinforce the aspect of mental models that leads to predictions of future states, we refer to them as causal mental models. “A causal mental model is a network of ideas that constitute people’s explanations for how things work. These models influence people’s behavior, judgments and real-world decision making” (Gentner & Stevens, 1983, cited in Sieck et al., 2010). This description emphasizes the importance of capturing and including in mental models the causation chains that lead to state transitions. A causal model can be contrasted with a non-causal declarative model that captures the features of an object or lists facts about a situation.

The commonly occurring non-collocation of team members has a deleterious effect on forming shared causal mental models: “Team distribution has its largest impact on the development and utilization of shared mental models because of the team opacity [decreased awareness of team member interactions] arising from distributed interaction” (Fiore, Salas, Cuevas, & Bowers, 2003, p. 346). Achieving synchrony among team members is difficult when causal mental models among team members differ and those differences are not revealed. For example, two people might agree that the last ten years have been the hottest in recorded history, but if they disagree on why that is happening, they will disagree on how to address it. Shared causal mental models typically derive from continual articulation of events as situations evolve, including negotiating the meaning and causation of what is being shared. This negotiation is simply more difficult to achieve when team members are distributed because of diminished opportunities for establishing common understanding, including the inability to use deictic references and visual cues to establish whether conversation partners are experiencing confusion (McNeese, 2019). Having shared causal mental models, or at least an articulated understanding of divergences among causal mental models held by team members, is important because it can lead to higher team performance (Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000).

The literature around diverse workforces also notes that “diversity is a key driver of innovation and is a critical component of being successful on a global scale” (Forbes Insights Team, 2011, p. 3). The act of identifying that diversity and then discussing differing viewpoints so that the resulting shared causal mental model is different from anyone’s original model is important to achieving better work outcomes (Gallo, 2018). Even when it is not possible to develop a single causal mental model, having awareness of different views of causation held by team members could enable decision makers to avoid taking actions that would be catastrophic if any of the known alternative viewpoints hold true. Further, maintaining awareness of diverse viewpoints makes it possible for team members to be alert for information that confirms one viewpoint versus another. Bringing this information back to the team could result in merging causal mental models over time.

The centrality of causal mental models as a mechanism to describe and operationalize team cognition is broadly accepted, despite the diversity in frameworks and theories pertaining to distributed team cognition (e.g., Hutchins, 1995; Fiore et al., 2003; Letsky, Warner, Fiore, Rosen, & Salas, 2007; Jonker, van Riemsdijk, & Vermeulen, 2010; Hinsz & Ladbury, 2012; Perry, 2017). It is challenging, however, to extract causal mental models from people’s heads and compare them in a fashion that is both rigorous and accessible to people who are not modeling experts. The Descriptive to Executable SIMulation modeling methodology (DESIM; Pfaff, Drury, & Klein, 2015) provides a means to describe and assess causal mental models qualitatively and quantitatively, and therefore can allow team members to explicitly discuss their different understandings of the ways in which they believe the situation and their potential courses of action will evolve.

The DESIM methodology captures mental models in the form of influence diagrams that allow us to assess and quantify the relationships in models, and which ultimately allow us to execute models as simulations. The first part of DESIM is based upon Sieck’s work in cultural network analysis, which focuses on “gathering, analyzing, and representing the relevant cultural concepts, beliefs and values that drive decisions” (Sieck et al., 2010, p. 237). Our contribution is that we have extended Osei-Bryson’s (2004) method for individually quantifying the relationships in a model into a flexible crowdsourcing technique for quantifying the strengths of those relationships. Crowdsourcing enables modelers to quickly tap into a variety of knowledgeable subject matter experts. This technique is flexible because it handles incomplete data, if necessary, from each expert, thus enabling modelers to split up the information-provision load among the experts. Finally, we apply our own algorithm to execute the quantified model as a simulation. DESIM’s unique contribution is integrating these separate steps into an efficient crowdsourced process and developing prototype tools to streamline the elicitation, validation, and simulation of causal mental models.

The contribution of this chapter is a presentation of a rigorous method to expose differences in causal mental models held by distributed team members such that they can be resolved or otherwise accounted for. In the following sections, we review the different types of knowledge from the standpoint of their abilities to provide the raw data for causal mental models. Next we present DESIM, followed by an example of using DESIM to highlight its support to distributed team cognition. This chapter concludes by identifying the relationship between DESIM and the theoretical construct from distributed team cognition known as a distributed coordination space (Fiore et al., 2003).

TASK KNOWLEDGE, TEAM KNOWLEDGE, AND TEMPORAL KNOWLEDGE

Usually causal mental models are developed from task-based, team-based, and temporal knowledge (Mohammed, Hamilton, Tesler, Mancuso, & McNeese, 2015). Task-based knowledge pertains to an understanding of what needs to be done, including a shared terminology so communications about tasks can occur successfully. Team-based knowledge refers to understanding who team members are, whether they are available to engage in joint work at any given time, which team member knows what, and who is responsible for what parts of the joint task flow. We suggest that this list of team-based knowledge can be augmented by an awareness of the differences in causal mental models held by team members. Temporal knowledge refers to a detailed understanding of who is taking (or should take) what action at any given time, and how to pursue interactions with team members at any given moment. Team members may have good shared comprehension of the task (task-based knowledge) and their team members’ skills and responsibilities (team-based knowledge), but if they have poor knowledge about the intricacies of time-based dependencies (temporal knowledge), their performance may suffer (Mohammed et al., 2015). These three knowledge types are described in more detail below.

Task Knowledge

An in-depth knowledge of the facts of the environment in which the task is situated is most often known as situation awareness (Endsley, 1988). Situation awareness is defined as the perception of an environment within a volume of time and space, plus the comprehension of that environment to the degree that enables the projection of what will happen in the near future (Endsley, 1988). Situation awareness originated as a concept to describe an individual’s knowledge of an environment, but it has been expanded to encompass and describe the knowledge held by members of a collaborating team that is working towards a joint goal (e.g., Endsley, 1995). Several approaches to team situation awareness have been proposed (see She & Li, 2017 for a review). For example, one approach consists of the aggregation of the facts that individuals know (Endsley, 1995), and another focuses on the content of the team’s factual knowledge plus the understanding of which knowledge content is shared (Espinosa & Clark, 2014).

While having good situation awareness is a necessary prerequisite for creating a causal mental model of the environment and for making decisions and taking actions in any given situation, it is not sufficient to completely support the decision-making process (Pfaff et al., 2013). Option awareness is also needed: the knowledge of the available courses of action and the relative desirability of one choice versus another (Drury, Klein, Pfaff, & More, 2009). To paraphrase, option awareness is the understanding of the options available and the plausible outcomes of choosing one option versus another. Still, people who possess the same situation awareness and have been given the same type of option awareness support may nevertheless assess options differently if they hold different causal mental models.

A concept for team-based option awareness (Klein, Drury, Pfaff, & More, 2010; Liu, Moon, Pfaff, Drury, & Klein, 2011) relies on identifying synergistic joint courses of action. Simply collaborating over individual decisions will not achieve collaborative option awareness because jointly executing even the most robust individual options may not yield the most robust joint option (Klein et al., 2010). Collaborative decision making under these conditions is complex not only due to the difficulty of forecasting the impact of this synergy, but also the need to achieve a common causal mental model among the decision-making participants.

Team Knowledge

Team-based knowledge refers to an understanding of what team members are doing, whether they are present or absent, and their goals and intentions for future task-related actions. Knowing about team members’ activities is more challenging when team members are distributed across locations and potentially time zones, adding asynchronicity to the mix. A process for generating team-based knowledge is described as “macrocognition in teams,” which Fiore, Smith-Jentsch, Salas, Warner, and Letsky (2010) defined as “the process of transforming internalised team knowledge into externalised team knowledge through individual and team knowledge building processes” (p. 258). Knowledge of one’s collaborators’ states is described as having awareness of what team members are doing, have done, or are intending to do (Dourish & Bellotti, 1992; Drury & Williams, 2002; Gutwin & Greenberg, 2002; Gaver et al., 1992; Gross, Stary, & Totter, 2005). This type of awareness is essential for synchronizing activities, avoiding duplicated work, and ensuring that important tasks are not left undone; it primarily serves to promote effortless coordination (Gross, 2013). This type of knowledge supplements and augments users’ causal mental models of the situation to the degree that team-related information affects the available options and the plausible outcomes of choosing one option versus another.

It is difficult to assess the degree to which a common understanding among team members has been attained. Espinosa and Clark (2014) developed a network model that shows which elements of the situation are known by which team members. While such networks illustrate the degree to which there is common knowledge of the facts of the situation (that is, situation awareness), they do not provide an assessment of whether team members share common causal models that can lead to a shared option awareness.

Temporal Knowledge

Team members have sufficient temporal knowledge when they are aware of the time-based dependencies of their assigned tasks, the moment-by-moment progress towards task completion, and the point when they need to execute their tasks to respect those dependencies (Mohammed et al., 2015). Such knowledge is enriched by a shared causal model because it can inform why the temporal dependencies exist and what to do to repair a situation in which the task sequence is disrupted.

CAPTURING AND COMPARING MENTAL MODELS USING DESIM

Overview

The DESIM process elicits and transforms descriptive causal models into executable computer simulation models based on information obtained from multiple experts in a subject area. A computer user interface backed by computational algorithms produces quantitative values for the strengths of causal relationships between variables in the descriptive models, resulting in unbiased distributions of estimated values for each relationship and enabling the models to be computationally processed. The result is a quantitative depiction of causal mental models, shown as influence diagrams, plus an improved depiction of potential outcomes known as a decision space visualization (Pfaff et al., 2013). Decision space visualizations present the relationships among options, actions, or variables that can be used to analyze a focus question and support decision making. Figure 6.1 depicts the DESIM process.

The advantages of this process include capturing perceptions that are difficult to quantify and collecting data in an engaging, participatory approach that is accessible to domain experts unfamiliar with modeling processes (Ozesmi & Ozesmi, 2004). Participatory modeling (Prell et al., 2007; McNeese, Zaff, Citera, Brown, & Whitaker, 1995) provides domain experts with more control over how the model is constructed than situations in which the model development occurs solely via the interpretation by the analyst of the expert’s answers to interview questions. Involving domain experts as co-constructors of the models is critical to developing accurate models quickly. Using multiple experts in participatory modeling ensures that the


FIGURE 6.1 Overview of the DESIM process.

Note: Boxes indicate processes and arrows indicate inputs to and/or outputs from the processes.

resulting model(s) are not idiosyncratic of a single domain expert (Vennix, 1999). Finally, this method allows for detailed comparisons of causal models developed with different domain experts, enabling the differences to be highlighted and probed. For example, multiple sessions may be held with domain experts to negotiate developing a composite model, or they may “agree to disagree,” resulting in preserving multiple models that represent the different viewpoints.

The rest of this section describes the methods for eliciting one or more cognitive models of a problem, representing the models interactively, eliciting values for the models from many experts, and analyzing and viewing the data resulting from executing the models.

Eliciting Causal Mental Models and Representing Them Interactively

The DESIM process starts by eliciting one or more domain expert’s causal mental model via interviews that explore a focal question such as “What is causing employee dissatisfaction?” Most often, multiple domain experts are interviewed to understand different and potentially conflicting perspectives on the problem. Experts are asked questions requesting that they identify model components, which are factors that experts believe influence the outcomes of the focus question; links between components; and the dynamic and functional relationships among the components. This process is facilitated by an analyst who displays concept mapping software to develop one causal mental model per expert.

The causal mental models are represented on a computer as a concept map or influence diagram: a graph of nodes connected by edges. A node in a descriptive model is a variable that represents a concept such as an action, option, or policy that has a continuous or discrete range of values. Multiple tools exist for visually depicting and computationally representing mental models in this manner. CmapTools (Canas et al., 2004), for example, provides a graphical interface for constructing and editing cognitive models and provides machine-readable output for use by other computational tools. Similarly, MentalModeler (Gray, Gray, Cox, & Henly-Shepard, 2013) was designed to support a fuzzy cognitive modeling process, including model elicitation, graphical representation, and exploratory simulation.
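To make this representation concrete, the sketch below shows one way such a machine-readable influence diagram might be structured in Python. The class and field names are our own illustration, not part of CmapTools, MentalModeler, or the DESIM tooling, and the example edges are a hypothetical fragment of the employee-satisfaction model discussed later in this chapter.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edge:
    """A directed causal link: `source` influences `target`."""
    source: str
    target: str
    sign: int = 1  # +1 or -1: direction of correlation; magnitude elicited later

@dataclass
class CausalModel:
    """An influence diagram: concept nodes joined by signed causal edges."""
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def add_edge(self, source: str, target: str, sign: int = 1) -> None:
        self.nodes.update((source, target))
        self.edges.append(Edge(source, target, sign))

# Hypothetical fragment of the employee-satisfaction example:
model = CausalModel()
model.add_edge("Shared understanding of division priorities",
               "Degree of connection among staff")
model.add_edge("Degree of connection among staff", "Knowledge sharing")
```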

The validation process begins by displaying a graphical representation of the newly created model to the domain expert, who checks it for completeness and accuracy (Sieck et al., 2010). We have had good results with using a mental modeling tool interactively during the interview session to turn responses to questions immediately into nodes and edges (see Figure 6.2 for an example). This approach enables the analyst to obtain model-related information, construct a model, and receive initial validation and/or real-time corrections from the expert during the same meeting, usually one to two hours in length depending upon the complexity of the subject area being explored.

The causal mental models created for each expert may be similar or may diverge. The divergence may consist of additional factors (nodes) that were included by one or more experts. Alternatively, the same set of nodes may be connected by experts in different ways (that is, the sets of edges are dissimilar), signifying differences in beliefs regarding the relationships among the same factors. It is also possible that


FIGURE 6.2 Example mental model illustrating causal relationships between concepts (arrows) and distributions of edge weights elicited from domain experts (histograms).

the direction of the edges differs from one expert’s model to another, indicating a disagreement in which factors cause other factors to occur (a chicken-and-egg dilemma). Analysts combine multiple models based on concepts they hold in common and invite the experts to review the resulting model for completeness and accuracy. However, it is also illuminating to have more than one model describing the experts’ mental models and apply the remainder of the DESIM process to each one and compare the results.

Assigning Edge Weights

Once a model is elicited from one or more domain experts and represented structurally, the DESIM computer program processes it in parts to obtain edge weights in the range from -1 to +1. An edge weight quantifies the causal association or relationship between the two nodes connected by the edge, similar to that described by Perusich and McNeese (2006) and McNeese, Rentsch, and Perusich (2000). The sign of an edge weight denotes the direction of correlation between the nodes, and the magnitude denotes the strength of the causal relationship between them. While a static value for an edge weight could be elicited from a single expert (but may be unreliable), a more trustworthy distribution of values for the edge weights can be determined by appropriately querying multiple experts. In a departure from classic fuzzy cognitive maps, an algorithm determines how this feedback defines a distribution of edge weights for each edge.

While the interviewed experts are able to give the sign (+ or -) of a causal relationship, they are less able to give an accurate estimate of its magnitude (Osei-Bryson, 2004). Because subjective point estimates are unreliable, another method is necessary to produce accurate edge weights. This is achieved through a systematic set of pairwise comparisons of the connected node pairs in the model, for which an expert rates the comparative strength of two relationships. The choice is whether relationship X1 → X2 is stronger than X3 → X4, and by how much, repeated for as many pairwise comparisons of edges in the model as necessary to produce a complete set of edge weights.

DESIM uses “expert sourcing” (crowdsourcing among domain experts) to quantify the relationships in the descriptive causal model. Crowdsourcing is a process of obtaining services, ideas, or content by soliciting contributions from a large group of people referred to as a crowd (Howe, 2006). Crowdsourcing combines the incremental efforts of numerous contributors to achieve a greater result in a relatively short period of time. Lin et al. (2012) used crowdsourcing to understand mental models of privacy in mobile applications, but did not create a model explicitly.

Our web-based automated survey tool called IMPACT (Interactive Model PAirwise Comparison Tool) can obtain information from large numbers of people in a population with expertise about the problem. IMPACT takes the machine-readable model produced in the preceding steps and generates a set of pairwise comparisons that are presented in sequence to the experts. First, a single relationship X1 → X2 is presented graphically to the user with the question “Do you agree with this relationship?” The two choices are “Agree: An increase on the left causes an increase on the right,” and “Disagree: An increase on the left does not cause an increase on the right.”

After the respondent has agreed with at least two relationships in the model, these relationships (A = X1 → X2 and B = X3 → X4) are presented with the question “Which relationship is stronger?” with the choices “A is stronger than B,” “B is stronger than A,” or “A is the same as B.” If either of the first two choices is selected, the respondent is additionally asked “How much stronger?” and is presented with a slider ranging from “A is much stronger than B” to “A is the same as B.” After answering, the respondent proceeds to the next comparison. When the respondent disagrees with a given relationship, it is given a weight of zero and eliminated from all future pairwise comparisons given to that expert.
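The question flow just described can be sketched as follows. This is an illustrative reconstruction of the IMPACT survey logic, not its actual implementation; the callback names and the encoding of the slider answer as a strength ratio are our assumptions.

```python
import itertools

def run_comparisons(edges, agree, compare):
    """Sketch of the IMPACT question flow (hypothetical helper).

    edges   : edge identifiers in the model
    agree   : callback(edge) -> bool, the agree/disagree screen
    compare : callback(a, b) -> float ratio r > 0; r > 1 means
              relationship a is stronger than b, r == 1 means equal
    Returns sparse ratio judgments plus the edges the respondent
    disagreed with (these receive an edge weight of zero).
    """
    kept, rejected = [], []
    for e in edges:
        (kept if agree(e) else rejected).append(e)
    ratios = {}
    # In practice only a connected subset of all pairs must be asked;
    # Harker's (1987) method tolerates the remaining gaps.
    for a, b in itertools.combinations(kept, 2):
        ratios[(a, b)] = compare(a, b)
    return ratios, rejected
```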

After a respondent completes all of the essential pairwise comparisons, IMPACT analyzes the results using a form of the analytic hierarchy process (AHP; Saaty, 1990) modified to accommodate incomplete sets of comparisons (Harker, 1987). From this analysis, it calculates the complete set of edge weights and, as a final calibration step, asks the respondent to provide an absolute weight for the relationship with the strongest edge. A respondent who rates one relationship stronger than the rest does not necessarily believe that it is a very strong relationship, but without this calibration step, that strongest edge would be rated near the top of the scale. Rescaling each respondent’s edge weights accordingly more accurately captures their true beliefs and enables more accurate comparisons between experts and across models.
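A minimal sketch of that computation appears below, assuming the slider responses have been encoded as strength ratios. It follows Harker’s (1987) treatment of missing comparisons (zero the unknown off-diagonal entries and add their count to the diagonal) and applies the calibration rescaling described above; DESIM’s production code may differ in detail.

```python
import numpy as np

def edge_weights(n, ratios, strongest_abs, signs):
    """Recover one respondent's edge weights from sparse pairwise ratios.

    Harker's (1987) completion: missing off-diagonal entries become 0
    and each diagonal entry becomes 1 + (missing entries in that row);
    the principal eigenvector then estimates relative edge strengths.

    n             : number of edges the respondent agreed with
    ratios        : {(i, j): r} meaning edge i is r times as strong as j
    strongest_abs : respondent's absolute [0, 1] rating of the strongest
                    edge (the final calibration question)
    signs         : sequence of +1/-1 per edge, taken from the model
    """
    a = np.zeros((n, n))
    for (i, j), r in ratios.items():
        a[i, j], a[j, i] = r, 1.0 / r
    missing = (a == 0) & ~np.eye(n, dtype=bool)
    np.fill_diagonal(a, 1.0 + missing.sum(axis=1))
    vals, vecs = np.linalg.eig(a)
    w = np.abs(np.real(vecs[:, np.argmax(vals.real)]))
    w = w / w.max() * strongest_abs  # rescale so the strongest edge
    return w * np.asarray(signs)     # matches the absolute rating
```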

The model validation begun earlier continues by computing the internal consistency of pairwise responses (creating a consistency ratio), and by examining the level of support from respondents for each relationship as derived from the computed edge weights (see Pfaff, Klein, & Egeth, 2017, for more validation details).
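For the internal consistency check, a conventional Saaty-style consistency ratio can be computed as sketched below; this is a standard formulation, and the exact variant used in DESIM may differ (see Pfaff et al., 2017).

```python
import numpy as np

# Saaty's random index for matrix sizes n = 1..10 (CR is undefined for n < 3)
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def consistency_ratio(a):
    """Saaty-style CR for a (Harker-completed) comparison matrix `a`.

    CI = (lambda_max - n) / (n - 1); CR = CI / random index for size n.
    A CR below roughly 0.10 is conventionally treated as consistent.
    """
    n = a.shape[0]
    lam = np.max(np.linalg.eigvals(a).real)
    ci = (lam - n) / (n - 1)
    return ci / RANDOM_INDEX[n - 1]
```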

The sets of edge weights for all respondents are aggregated and used to populate the original model with distributions of edge weights for each relationship in the model. Using these distributions of weights, multiple simulation model processing runs can be performed to assess how the values in the distributions for each variable affect the ranges of outcomes for each possible decision option to address the focal question. These processing runs generate one or more outcome depictions using an iterative fuzzy cognitive modeling (FCM) method (Kosko, 1986). In this method, initial node values and edge weights can be varied for each processing run to create the distribution of outcomes. Team members’ analyses of the resulting distribution of outcomes provide a more comprehensive understanding of the tradeoffs and tipping points than a single aggregated mean estimate regarding how various variables impact the focal question (Pfaff et al., 2013).
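The execution step can be sketched as follows: each processing run propagates activation through the weighted graph until the node values settle, and repeating the run over the elicited edge-weight sets yields the distribution of outcomes. The logistic squashing function and convergence test here are common FCM choices (cf. Kosko, 1986), not necessarily the ones DESIM uses.

```python
import numpy as np

def fcm_run(W, s0, clamp=(), steps=50, lam=1.0):
    """One fuzzy-cognitive-map processing run (cf. Kosko, 1986).

    W     : n x n signed edge-weight matrix, W[i, j] = influence of
            node i on node j (one draw from the elicited distributions)
    s0    : initial node activations
    clamp : indices of input/scenario nodes held at their initial value
    Returns the settled activation vector; outcomes are read off the
    evaluative node(s).
    """
    s0 = np.asarray(s0, dtype=float)
    squash = lambda x: 1.0 / (1.0 + np.exp(-lam * x))  # logistic squashing
    s = s0.copy()
    for _ in range(steps):
        nxt = squash(s @ W)
        nxt[list(clamp)] = s0[list(clamp)]  # hold clamped inputs constant
        if np.allclose(nxt, s, atol=1e-6):
            break
        s = nxt
    return s
```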

Examining Edge-Weight Differences and Viewing Results

The edge weights can be interpreted not only as a respondent’s estimate of the strength of the causal relationship between two nodes, but also his or her level of agreement with the idea that the two nodes are related at all. Calculated weights that are at or very near zero indicate broad disagreement with a proposed relationship. While there may have been agreement within the small population that was interviewed by analysts to produce the initial model, the larger population accessed via the IMPACT survey may reveal differences in beliefs regarding the model’s structure.

Even subpopulations that agree completely on the structure of models may assign different values to edge weights. For example, there may be bi-modal or multi-modal distributions of edge weights among subpopulations based on differences in belief regarding the relative strengths of relationships between factors represented by the model’s nodes. Each relationship for which disagreement exists can form the basis for further probing in follow-up interviews or focus groups.
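As an illustration of how such contested edges might be screened automatically, the sketch below flags, for each edge, the share of zeroed-out weights and a crude sign of bimodality. The gap-based heuristic and its threshold are our own illustrative choices, not part of DESIM.

```python
import numpy as np

def edge_disagreement(weights, gap=0.25):
    """Crude screen for contested edges (one elicited weight per respondent).

    Flags an edge when many respondents zeroed it out (structural
    disagreement) or when a large gap splits the nonzero weights into
    two clusters (a bi-modal strength belief).
    """
    w = np.sort(np.asarray(weights, dtype=float))
    zero_share = float(np.mean(np.isclose(w, 0.0)))
    nz = w[~np.isclose(w, 0.0)]
    gaps = np.diff(nz) if nz.size > 1 else np.array([0.0])
    return {"zero_share": zero_share,
            "bimodal": bool(gaps.max() > gap),
            "split_below": float(nz[np.argmax(gaps)]) if nz.size > 1 else None}
```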

The model can be executed across each of the sets of edge weights elicited from the subject matter experts. To score the outcomes, one or more nodes in the model, such as cost or mission effectiveness, are selected as evaluative criteria. Because the fuzzy cognitive modeling method used in DESIM is computationally lightweight, we can additionally examine a wide variety of plausible conditions, such as environmental factors beyond the control of decision makers, under which an option might be executed. An example of these environmental factors is weather, which, for certain domains such as military command and control or emergency response, may affect the team’s performance when executing an option. (For example, performance in a case that assumes dry and gusty conditions may be very different from another case that assumes a drenching downpour with limited visibility.) The result is a range of outcomes computed for each decision option, generated by running each set of edge weights crossed with a set of plausible values for each of the variables in the environment.
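This crossing of expert models with environmental cases amounts to a nested loop over both sets, as sketched below. The function signature and node encoding are illustrative assumptions, with `run` standing in for an FCM executor such as the `fcm_run` sketch above.

```python
import itertools

def decision_space(weight_sets, option_states, env_cases, env_nodes,
                   outcome, run):
    """Outcome distribution per option: every elicited edge-weight set
    is crossed with every plausible environmental case.

    weight_sets   : list of n x n matrices, one per respondent
    option_states : {option name: initial activation vector}
    env_cases     : list of value tuples for the environmental nodes
                    (e.g., weather severity levels)
    env_nodes     : indices of the environment nodes (clamped inputs)
    outcome       : index of the evaluative node (e.g., cost)
    run           : an FCM executor such as the fcm_run sketch above
    """
    space = {name: [] for name in option_states}
    for name, s0 in option_states.items():
        for W, env in itertools.product(weight_sets, env_cases):
            s = list(s0)
            for i, v in zip(env_nodes, env):
                s[i] = v  # impose this environmental case
            space[name].append(run(W, s, clamp=env_nodes)[outcome])
    return space
```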

The outcomes are displayed as a range of possible results for each option—a decision space visualization, such as can be seen in Figure 6.3. Team members can apply their beliefs and judgment regarding the distribution of outcomes that would be acceptable and can dig into the parameters of each data point to determine tradeoffs and tipping points. Further, they can use decision space visualizations to facilitate team discussions regarding risks and costs (or other evaluative criteria).


FIGURE 6.3 Two visualizations of the same example decision space, using a scatter plot and box plots.

Note: For each option, the box plots summarize the maximum, minimum, mean, and the upper and lower bounds of the middle 50% of the cases; the options are scored based on cost in this situation. Note that the three options have almost the same mean values, but the shapes of their distributions are very different. If only the means were reported rather than the entire distributions, much of the nuance of the results for each option would be lost.
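Given the per-option outcome distributions, a box-plot view like the right panel of Figure 6.3 can be produced with standard plotting tools. The sketch below assumes the `space` dictionary returned by the hypothetical `decision_space` function above.

```python
import matplotlib.pyplot as plt

def plot_decision_space(space):
    """Box plots of the simulated outcome range for each option."""
    fig, ax = plt.subplots()
    ax.boxplot(list(space.values()))   # one box per option
    ax.set_xticklabels(list(space.keys()))
    ax.set_ylabel("Outcome score (e.g., cost)")
    plt.show()
```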

EXAMPLE USE OF DESIM

DESIM was used in a real-world application after an organization’s Likert-scale survey about employee satisfaction revealed unexpected results that required further examination. DESIM was chosen because of its potential to uncover the causal relationships staff members believed affect various aspects of job satisfaction and to reveal the differences in opinions among subpopulations.

To start, 50 employees participated in hour-long semi-structured interviews. Analysis resulted in seven causal models being created using MentalModeler (Gray et al., 2013) regarding the subjects of compensation, connections among staff, getting good work, management practices, perceived value, conflict between organizational groups (called divisions), and promotions. Models were validated with employees.

These seven models contained a total of 80 nodes and 98 edges, leading to a total of 199 pairwise comparisons. Because the time needed to complete all 199 comparisons would be too long to expect of an individual, employees participating


FIGURE 6.4 Influence diagram describing a causal mental model for the topic “Connections Among Staff,” part of an investigation into the factors that affect job satisfaction and knowledge sharing.

Source: Adapted from Pfaff et al., 2017.

in the crowdsourcing phase were asked to complete the comparisons for three randomly selected models, with the remaining model comparisons being optional if the employee wished to continue.

A total of 232 employees used IMPACT to create over 10,000 individual edge weights. After computing the consistency ratio (CR), the “compensation” model was revealed as having particularly inconsistent results, meaning that the model required further review and clarification before its results could be trusted. In contrast, the “connections among staff” model was shown to have extremely consistent results (see Figure 6.4). Within this model, however, there were disagreements regarding the strengths of relationships.

For example, roughly one-third of respondents explicitly disagreed with the proposition formed by the node-edge-node tuple “Shared understanding of division priorities increases the degree of connection among staff.” Roughly another one-third gave the relationship a weight of less than 0.05, which indicates a very small effect. The remainder believed there was at least a moderately strong relationship, leading to a multi-modal response pattern.

Another example of disagreement regarding edge weights pertains to the proposition “Degree of connection among staff increases knowledge sharing.” This proposition received a lot of support overall, with a quarter of the respondents rating this edge strongest in the model and only one respondent explicitly disagreeing with it. A bi-modal distribution, however, revealed two subpopulations: one believing that connection among staff strongly increases knowledge sharing and one that sees only a moderate relationship between the two nodes. This result prompted follow-up that determined that staff from two out of the seven departments within the total population were responsible for a majority of the moderate edge weights. This helped management to understand the necessity of investigating the culture of those two departments to understand why they, compared with other departments in the same division, perceive a substantially smaller effect of connection among staff on knowledge sharing. Other cases of bi-modal and tri-modal distributions in the models led to similar insights into specific departments. Because of the differences in causal mental models regarding the factors that affect employee satisfaction, management came to understand that there is no single solution that would satisfy everyone.

Once the distribution of edge weights was computed, and by assigning initial values to nodes, it was possible to manipulate factors in the “connections among staff” model and execute it across all of the sets of edge weights elicited from employees to see their potential impacts on other nodes in the model, in this case looking at effects on knowledge sharing. We modeled the distribution of outcomes in four potential scenarios defined by combinations (low versus high) of two factors: whether there is local space for new hires and whether there is miscommunication from the organization’s managers. The box plots in Figure 6.5 summarize the distributions of outcome


FIGURE 6.5 Distributions of modeled outcomes for four scenarios that imply different courses of action.

Source: Adapted from Pfaff et al., 2017.

values ordered by decreasing mean value, with the outlier points indicating more extreme values. A large fraction of the possible outcomes of the first two scenarios suggest little to no effects of either factor on knowledge sharing, but many of the possible outcomes were more negative for the last two scenarios. These results indicate that non-collocation of department members has a stronger negative effect on knowledge sharing than communications from management, shown through the clearly more negative outcomes of having little local office space, regardless of the level of organizational miscommunication. Without this type of visual assistance, people are usually incapable of exploring the implications of their mental models. Using the visualizations, we worked with department management to understand the results so that they could determine data-supported courses of action for organizational improvement.

IMPLICATIONS FOR SUPPORT TO DISTRIBUTED TEAM COGNITION

The foregoing explained how DESIM supports distributed teamwork by identifying the similarities and differences of team members’ causal mental models and by providing visualizations that depict the range of possible outcomes for the analyzed options (such as in Figures 6.3 and 6.5). But how should DESIM be situated in relationship to theories that describe how distributed teamwork takes place? To investigate this question, we examined DESIM’s relationship to a theoretical construct known as a distributed coordination space (Fiore et al., 2003). A distributed coordination space is “a theoretical framework designed to elucidate the many issues surrounding distributed team performance, emphasizing how work characteristics associated with such teams may alter both the processes and the products emerging from distributed interaction” (Fiore et al., 2003, p. 340).

DESIM and the Coordination Space Framework

The distributed coordination space framework consists of three phases of team-based information sharing to achieve coordination. In doing so, it highlights issues that are most characteristic of distributed team performance during each phase and asserts that this team-based information sharing may modify the processes and products resulting from the team’s interaction. As described in this section, DESIM models can enhance the coordination space framework in each of the framework’s phases: pre-process coordination, in-process coordination, and post-process coordination.

Pre-process coordination for information management is defined as distributing targeted (relevant) information to specific team members during pre-briefs or executive summaries prior to actual interaction (Fiore et al., 2003). By capturing the mental models at or prior to the pre-process phase, there is a baseline from which to compare later activations of long-term memory, which may be influenced by the context and therefore evolve over time. Memories will be biased by the context of what one does in the moment, and so having a shared mental model created prior to the in-process stage can provide a more stable basis for team interaction during the in-process stage.

The DESIM method can provide two types of distributed targeted information relevant to distributed teams during the pre-process coordination phase. First, DESIM allows teams to identify differences in mental models about how the world works and/or how the team works together and take the necessary remediation steps to align the team’s diverse mental models or create a new model reflecting and preserving the diverse beliefs. Sharing the DESIM-generated products enables conversations around the graphical influence diagrams describing the causal relationships among the environmental factors as understood by each team member: in other words, beliefs about how the environment came to its current state or will progress to other states. This approach contrasts with simply modeling and sharing the values of the environmental factors, which depict the state of the environment (the “what”) but not the mechanisms that resulted in the environment’s state (the “how”). Moreover, for each team member, the strengths of these causal relationships can also be represented quantitatively on the diagrams. Understanding distributed team members’ beliefs about the environment and factors that affect its changing state can positively impact task performance (Lim & Klein, 2006).

The second type of targeted information that DESIM can provide during the pre-process phase consists of depictions of the ranges of likely outcomes for each possible decision option (that is, the decision space). These depictions are created using the iterative FCM process described earlier in the chapter, which produces computational models that create distributions of plausible outcomes. Doing so can develop a team’s common understanding of the relative desirability of the possible courses of action and the factors that lead to better versus worse outcomes. Thus, team members can be better prepared to take actions during the next phase.

The next phase in the coordination space framework is in-process coordination, which consists of parsing information during task execution, for example via a knowledge manager (Fiore et al., 2003). This approach is similar to Perusich and McNeese’s (2006) use of fuzzy cognitive maps to filter and direct information to experiment participants acting as crew members of a simulated command and control aircraft. Likewise, the causal mental model(s) that DESIM captures can serve as guidance for disseminating information that is relevant to each team member’s roles and beliefs.

We have used DESIM to develop models of how processes work, but it would be possible to create models of how the team works together to perform in-process coordination during task execution, that is, capture episodic or transactive memory (Fiore et al., 2003). DESIM could depict interpersonal processes that make up team behavior, with different cases based on different contexts. For example, the team leader might need three different models for how the team works in three different situations. By eliciting models from team members under these three situations, we could identify any incongruities among them. The DESIM process allows us to quantify these different models and run them as executable models to determine whether their outcomes would be different given the same assumptions about environmental factors as input values. We could not only determine what conflicts exist among the team members, but could also determine which conflicts would need to be resolved to improve team performance because they have a large impact on outcomes.

Post-process coordination includes information disseminated after interaction and post-interaction assessment of performance (Fiore et al., 2003). The long-term memory models captured by the DESIM process can also assist in such performance assessment by providing a standard structure against which performance can be assessed. Specifically, actual performance can be compared to the performance the models predict, because the models codify what each individual expected would happen.

General Implications

DESIM can improve team members’ joint understanding of each other’s causal mental models by enabling conversations around accessible graphical representations of members’ diverse causal mental models. By sharing this information to make diversity more explicit, teams could be better able to align causal mental models, which Mathieu et al. (2000) have found leads to performance increases. Alternatively, teams could harness the strength of their diversity so that they can consider the issues explicitly from multiple perspectives and develop more innovative or robust solutions as a consequence (De Dreu & West, 2001). The key to both approaches is to make causal mental models explicit and understandable to team members.

DESIM is an example of a process and toolset that can be used to rigorously examine causal mental models held by distributed team members and thereby enable them to better understand and/or align those models across the distributed team. DESIM’s rigor comes from a mathematical process for crowdsourcing pieces of mental models and recombining the results to obtain unbiased estimates of edge weights, whose distributions can be examined for agreement and patterns. In addition to the example of the organizational effectiveness evaluation presented in this chapter, we have used DESIM to explore attitudes regarding information technology acceptance in healthcare, military decision making, geopolitical events, and electric vehicle purchase decisions (Pfaff et al., 2015; Pfaff, Drury, Klein, & Boston-Clay, 2016; Pfaff et al., 2017). DESIM is a context-agnostic process that therefore is appropriate for use in a wide variety of domains.

DESIM is not well suited for analyzing situations that are both infrequently encountered and have not been addressed in training, such that subject matter experts have not had a chance to form causal mental models around those situations. We have found DESIM to be most useful when there is a high likelihood of complex model(s) or segmentation of beliefs among subpopulations. At the other end of the spectrum, DESIM is not a good candidate for analyzing simple models with a high degree of agreement among subject matter experts, because the effort needed for a full DESIM analysis in these cases is usually unwarranted. For these situations, or when there is time only for an abbreviated investigation, participatory modeling sessions using a freely available tool such as MentalModeler (Gray et al., 2013) can be beneficial and quickly performed. Comparing the structures of the resulting models from several experts can result in identifying points of similarities and differences that can be explored further via focus groups, followed by broad dissemination of findings that encourage further conversation and convergence by team members on a shared causal mental model.

REFERENCES

Canas, A. J., Hill, G., Carff, R., Suri, N., Lott, J., & Gomez, G. (2004). CmapTools: A knowledge modeling and sharing environment. In A. J. Canas, J. D. Novak, & F. M. Gonzalez (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the first international conference on concept mapping (pp. 125-133). Pamplona, Spain: Universidad Publica de Navarra.

De Dreu, C. K. W., & West, M. A. (2001). Minority dissent and team innovation: The importance of participation in decision making. Journal of Applied Psychology, 86(6), 1191-1201. http://doi.org/10.1037//0021-9010.86.6.1191

Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. In Proceedings of the 1992 ACM conference on Computer-Supported Cooperative Work (CSCW '92) (pp. 107-114). New York, NY: ACM. http://doi.org/10.1145/143457.143468

Drury, J. L., Klein, G. L., Pfaff, M. S., & More, L. D. (2009). Dynamic decision support for emergency responders. In Proceedings of the 2009 IEEE conference on Technologies for Homeland Security (HST '09) (pp. 537-544). New York, NY: IEEE.

Drury, J. L., & Williams, M. G. (2002). A framework for role-based specification and evaluation of awareness support in synchronous collaborative applications. In Proceedings of the eleventh IEEE international Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE) 2002 (pp. 12-17). New York, NY: IEEE.

Endsley, M. R. (1988). Situational awareness global assessment technique (SAGAT). In Proceedings of the IEEE National Aerospace and Electronics conference (pp. 789-795). New York, NY: IEEE.

Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64.

Endsley, T., Reep, J., McNeese, M. D., & Forster, P. (2015). Crisis management simulations: Lessons learned from a cross-cultural perspective. In Proceedings of the 6th Int'l conference on Applied Human Factors and Ergonomics (AHFE 2015). Amsterdam: Elsevier. http://doi.org/10.1016/j.promfg.2015.07.918

Espinosa, J., & Clark, M. (2014). Team knowledge representation: A network perspective. Human Factors, 56, 333-348. http://doi.org/10.1177/0018720813494093

Fiore, S. M., Salas, E., Cuevas, H. M., & Bowers, C. A. (2003, July-December). Distributed coordination space: Toward a theory of distributed team process and performance. Theoretical Issues in Ergonomics Science, 4(3-4), 340-364.

Fiore, S. M., Smith-Jentsch, K. A., Salas, E., Warner, N., & Letsky, M. (2010). Towards an understanding of macrocognition in teams: Developing and defining complex collaborative processes and products. Theoretical Issues in Ergonomics Science, 11(4), 250-271. http://doi.org/10.1080/14639221003729128

Forbes Insights Team. (2011). Global diversity and inclusion: Fostering innovation through a diverse workforce. New York: Forbes. Retrieved from https://i.forbesimg.com/forbes-insights/StudyPDFs/Innovation_Through_Diversity.pdf

Gallo, A. (2018, January 3). Why we should be disagreeing more at work. Harvard Business Review. Retrieved from https://hbr.org/2018/01/why-we-should-be-disagreeing-more-at-work

Gaver, W. W., Moran, T., MacLean, A., Lovstrand, L., Dourish, P., Carter, K. A., & Buxton, W. (1992). Realising a video environment: EUROPARC's RAVE system. In Proceedings of the conference on human factors in computing systems (CHI '92) (pp. 27-35). New York, NY: ACM.

Gentner, D., & Stevens, A. L. (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum Associates.

Gray, S. A., Gray, S., Cox, L. J., & Henly-Shepard, S. (2013). Mental modeler: A fuzzy-logic cognitive mapping modeling tool for adaptive environmental management. In Proceedings of the 46th Hawaii international conference on system sciences (pp. 965-973). New York, NY: IEEE.

Gross, T. (2013). Supporting effortless coordination: 25 years of awareness research. Computer Supported Cooperative Work, 22(4-6), 425-474.

Gross, T., Stary, C., & Totter, A. (2005, June). User-centered awareness in computer-supported cooperative work-systems: Structured embedding of findings from social sciences. The International Journal of Human-Computer Interaction (IJHCI), 18(3), 323-360.

Gutwin, C., & Greenberg, S. (2002). A descriptive framework of workspace awareness for real-time groupware. Computer Supported Cooperative Work, 11(3-4), 411-446.

Harker, P. T. (1987). Incomplete pairwise comparisons in the analytic hierarchy process. Mathematical Modelling, 9(11), 837-848. http://doi.org/10.1016/0270-0255(87)90503-3

Hinsz, V. B., & Ladbury, J. L. (2012). Combinations of contributions for sharing cognitions in teams. In E. Salas, S. M. Fiore, & M. P. Letsky (Eds.), Theories of team cognition: Cross-disciplinary perspectives (pp. 245-270). New York, NY: Taylor and Francis Group.

Howe, J. (2006, June). The rise of crowdsourcing. Wired. Retrieved from http://archive.wired.com/wired/archive/14.06/crowds_pr.html

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.

Jonker, C. M., van Riemsdijk, M. B., & Vermeulen, B. (2010). Shared mental models: A conceptual analysis. In Proceedings of 9th Int. conference on Autonomous Agents and Multiagent Systems (AAMAS 2010) (pp. 132-151). New York, NY: ACM.

Klein, G. L., Drury, J. L., Pfaff, M. S., & More, L. D. (2010). COAction: Enabling collaborative option awareness. In Proceedings of the 15th International Command and Control Research and Technology Symposium (ICCRTS). Washington, DC: Department of Defense.

Kosko, B. (1986). Fuzzy cognitive maps. The International Journal of Man-Machine Studies, 24(1), 65-75.

Letsky, M., Warner, N., Fiore, S. M., Rosen, M., & Salas, E. (2007). Macrocognition in complex team problem solving. In Proceedings of the 12th International Command and Control Research and Technology Symposium (ICCRTS). Washington, DC: Department of Defense.

Lim, B.-C., & Klein, K. J. (2006). Team mental models and team performance: A field study of the effects of team mental model similarity and accuracy. The Journal of Organizational Behavior, 27, 403-418. http://doi.org/10.1002/job.387

Lin, J., Sadeh, N., Amini, S., Lindqvist, J., Hong, J. I., & Zhang, J. (2012). Expectation and purpose. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp '12) (pp. 501-510). New York, NY: ACM Press. http://doi.org/10.1145/2370216.2370290

Liu, Y., Moon, S. P., Pfaff, M. S., Drury, J. L., & Klein, G. L. (2011). Collaborative option awareness for emergency response decision making. In Proceedings of the 8th Int'l conference on Information Systems for Crisis Response and Management (ISCRAM). New York, NY: ACM.

Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85(2), 273-283.

McNeese, M. D. (2019). Personal communication regarding the changing nature of teaming, 24 April (original communication freely available upon request from the authors).

McNeese, M. D., Rentsch, J. R., & Perusich, K. (2000). Modeling, measuring, and mediating teamwork: The use of fuzzy cognitive maps and team member schema similarity to enhance BMC3I decision making. In Proceedings of the IEEE Int'l conference on systems, man, and cybernetics (pp. 1081-1086). New York, NY: IEEE.

McNeese, M. D., Zaff, B. S., Citera, M., Brown, C. E., & Whitaker, R. (1995). AKADAM: Eliciting user knowledge to support participatory ergonomics. International Journal of Industrial Ergonomics, 15, 345-363.

Mohammed, S., Hamilton, K., Tesler, R., Mancuso, V., & McNeese, M. (2015). Time for temporal team mental models: Expanding beyond “what” and “how” to incorporate “when”. European Journal of Work and Organizational Psychology, 24(5), 693-709. http://doi.org/10.1080/1359432X.2015.1024664

Osei-Bryson, K.-M. (2004). Generating consistent subjective estimates of the magnitudes of causal relationships in fuzzy cognitive maps. Computers & Operations Research, 31(8), 1165-1175. http://doi.org/10.1016/S0305-0548(03)00070-4

Ozesmi, U., & Ozesmi, S. L. (2004). Ecological models based on people's knowledge: A multi-step fuzzy cognitive mapping approach. Ecological Modelling, 176(1-2), 43-64.

Perry, M. (2017). Socially distributed cognition in loosely coupled systems. In S. J. Crowley & F. Vallee-Tourangeau (Eds.), Cognition beyond the Brain: Computation, interactivity and human artifice (2nd ed., pp. 147-169). London: Springer-Verlag.

Perusich, K. A., & McNeese, M. D. (2006). Using fuzzy cognitive maps for knowledge management in a conflict environment. IEEE Systems, Man and Cybernetics, 36(6), 810-821.

Pfaff, M. S., Drury, J. L., & Klein, G. L. (2015). Crowdsourcing mental models using DESIM (Descriptive to Executable Simulation Modeling). In Proceedings of the naturalistic decision making conference. McLean, VA: The MITRE Corporation.

Pfaff, M. S., Drury, J. L., Klein, G. L., & Boston-Clay, C. (2016). Modeling knowledge using a crowd of experts. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 183-187.

Pfaff, M. S., Klein, G. L., Drury, J. L., Moon, S. P., Liu, Y., & Entezari, S. O. (2013). Supporting complex decision making through option awareness. Journal of Cognitive Engineering and Decision Making, 7(2), 155-178.

Pfaff, M. S., Klein, G. L., & Egeth, J. D. (2017). Characterizing crowdsourced data collected using DESIM (Descriptive to Executable Simulation Modeling). In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 178-182. Newbury Park, CA: Sage Publishing. https://doi.org/10.1177/1541931213601529

Prell, C., Hubacek, K., Reed, M., Quinn, C., Jin, N., Holden, J., . . . Sendzimir, J. (2007). If you have a hammer, everything looks like a nail: Traditional versus participatory model building. Interdisciplinary Science Reviews, 32(3), 263-282.

Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100, 349-363.

Saaty, T. L. (1990). How to make a decision: The analytic hierarchy process. European Journal of Operational Research, 48(1), 9-26.

Salas, E., Dickinson, T. L., Converse, S. A., & Tannenbaum, S. I. (1992). Toward an understanding of team performance and training. In R. W. Swezey & E. Salas (Eds.), Teams: Their training and performance (pp. 3-29). Norwood, NJ: Ablex.

She, M., & Li, Z. (2017). Team situation awareness: A review of definitions and conceptual models. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics: Performance, emotion and situation awareness, EPCE 2017, lecture notes in computer science, 10275 (pp. 406-415). London: Springer-Verlag.

Sieck, W., Rasmussen, L., & Smart, P. R. (2010). Cultural network analysis: A cognitive approach to cultural modeling. In D. Verma (Ed.), Network science for military coalition operations: Information extraction and interaction (pp. 237-255). Hershey, PA: IGI Global.

Vennix, J. (1999). Group model-building: Tackling messy problems. System Dynamics Review, 15(4), 379-401.

Copyright 2019 The MITRE Corporation. All rights reserved. Approved for public release, distribution unlimited [case 18-1450-2]. The authors' affiliations with The MITRE Corporation are provided for identification purposes only, and are not intended to convey or imply MITRE's concurrence with, or support for, the positions, opinions or viewpoints expressed by the authors.

 