Fuzzy Cognitive Maps for Modeling Human Factors in Systems

Karl Perusich



Constructing a Fuzzy Cognitive Map: Nodes

Constructing a Fuzzy Cognitive Map: Edges

Constructing a Fuzzy Cognitive Map: Assigning Values to Edges

Inference in a Fuzzy Cognitive Map

Other Uses of Fuzzy Cognitive Maps: The Reachability Matrix




A security director of a large midwestern university was concerned about the possibility of a terrorist attack at her institution. Recognizing that the response and the deployment of resources would be very different if an incident were in fact a terrorist attack rather than a routine emergency, she wondered whether it would be possible to develop an algorithm to analyze the available data in real time to assess its likelihood. To this end she canvassed a number of experts on campus, posing the following question: How likely would it be that a terrorist attack was underway if there were a fire in a biology lab that contained lethal toxins, an accident at a strategic intersection on campus that stopped traffic, and an attack on personnel in the security office?

Since each individual had their own area of expertise, the director received a number of responses. One expert, the fire chief for the campus, was concerned about the fire in the biology lab. This individual hypothesized that a suspicious individual seen in the area might indicate that the fire was set. The fire being set definitely indicated the incident was non-routine, i.e. not an accident. Such a state of affairs might indicate that the attack was planned and might be part of a terrorist plot.

A second individual contacted by the director, a professor in the political science department with expertise in international terrorism, concentrated on whether simultaneous attacks were present, believing that this increased the likelihood of the attack being planned, and any planned attack likely indicated a terrorist plot. Good evidence of a terrorist plot and a catastrophic incident were a good indication of a local terrorist attack. For this individual, the presence of at least two of the three incidents described by the director indicated that a simultaneous attack was underway.

The campus director of emergency management services felt that any incident that resulted in fatalities should be classified as a catastrophic incident. Since fatalities were generally the end game for a terrorist attack, any catastrophic incident would likely indicate the possibility of a local terrorist attack. This individual believed that the attack on the security office would result in the loss of communication for emergency personnel on campus, in turn reducing their ability to coordinate a response. Without coordination the chances of fire or EMS resources reaching affected areas in time to reduce the threat of casualties would be reduced. The delay of the EMS personnel would increase the potential fatalities. Without department resources reaching the biology building, the fire would get out of control, likely increasing the chances lethal toxins would be released. The release of lethal toxins would definitely cause fatalities.

A graduate student in the policy department indicated that an accident at a strategic intersection on campus would block traffic, reducing the ability of fire department and EMS personnel to get to the place where the incidents were occurring. This in turn would allow the fire to get out of control, releasing deadly toxins that would result in potential fatalities.

These insights provided the director with a somewhat disjointed assessment of the potential situation she might encounter. She noted a few things about the information provided. Each subject matter expert provided a detailed assessment of the domain they were most familiar with, and did so by developing a chain of events. None provided a global, overarching picture of the situation. Many used common concepts in their assessments (simultaneous events, potential fatalities, attack was planned) in the chains of causality they developed. These could be used to connect the thinking of the different experts into an overall picture or model of the situation. What she needed was a modeling tool that could use these individual assessments to develop an overall picture of the situation.

Enter fuzzy cognitive maps. Fuzzy cognitive maps are a unique way to model multi-faceted, multi-disciplined systems that incorporate a variety of attributes. These attributes can range from “hard” concepts or values like the fuel remaining in an aircraft to very “soft” concepts like the attitudes of an adversary. Because the map captures causal relationships and requires only knowing the level of change that has occurred, a common numeric metric for the nodes is not necessary. In addition to this ability to compare apples to oranges, another key value of a fuzzy cognitive map is the ability to use multiple subject matter experts. A system, i.e. overall fuzzy cognitive map, is made by piecing individual maps of the subject matter experts together through common nodes. This allows each individual to develop their map using a language and concepts that they are familiar with.

A fuzzy cognitive map is a signed di-graph that captures the causality a subject matter expert or experts believe defines a problem space. The mapping concept has been used in a variety of contexts and in fields as diverse as economics, political science, and human factors engineering (Kosko, 1987; McNeese, Rentsch, & Perusich, 2000; McNeese & Perusich, 2000; Papageorgiou & Poczeta, 2015). As a modeling technique, fuzzy cognitive maps have a number of strengths that make them ideally suited for capturing the underlying relationships in a complex decision space that includes factors with a variety of attributes. Chief among these are the ability to compare “apples to oranges,” the lack of a need for a common numerical metric, the identification of feedback loops, and the ability to construct a complete map from individual sub-maps. These strengths come with a price: for the most part a fuzzy cognitive map can infer only qualitative changes, not quantitative ones. The best a fuzzy cognitive map can do is postulate a “large” increase in a concept, not an exact value for that increase (Kosko, 1986).

Once the map has been constructed, it can be used in several ways to analyze a problem space. In the first, the reachability matrix is calculated. This matrix is used to assess whether a particular node is one of the causes of another node, and can be used to identify only those nodes relevant to a particular effect. In the second, certain nodes are designated inputs and assigned values that are then propagated through the map until some sort of equilibrium is reached. This infers a state of the system as represented by all of the nodes in the map.
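As a concrete illustration of the first use, the reachability matrix can be computed as the transitive closure of the map's adjacency structure. The sketch below uses Warshall's algorithm on an invented three-node map; the node names and edges are illustrative only, not taken from the chapter's figures.

```python
def reachability(adj):
    """adj[i][j] = True if an edge runs from node i to node j.
    Returns R where R[i][j] = True if j is reachable from i through
    one or more causal links (Warshall's transitive-closure algorithm)."""
    n = len(adj)
    r = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Toy map: A -> B -> C, with no edge back to A
adj = [
    [False, True,  False],   # A causes B
    [False, False, True],    # B causes C
    [False, False, False],   # C causes nothing
]
R = reachability(adj)
print(R[0][2])  # True: A is an (indirect) cause of C
print(R[2][0])  # False: C never causes A
```

Reading off a row of R lists every effect a node can ultimately influence; reading off a column lists every cause relevant to an effect, which is the filtering use described above.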


Constructing a Fuzzy Cognitive Map: Nodes

A fuzzy cognitive map is composed of nodes connected by directed lines. A node in a fuzzy cognitive map represents a changeable concept. Nodes are connected by directed line segments, each indicating a causal relationship between a source and a destination node. A change in the causal node is said to cause a change in the effect node, with the roles of cause and effect determined by the direction of the line segment connecting the two. Note that nodes are not strictly limited to being either a cause or an effect; the role depends on the context at that point in the map. A particular node can be a cause for one pair of nodes and an effect for another pair.

Key to the construction of a valid fuzzy cognitive map is recognizing that each node must represent a changeable quantity, typically a characteristic of the underlying concept that can increase or decrease. A simple example from arms control theory can illustrate this. “Our battleships” causes “their battleships” would be an inappropriate pair of nodes for a fuzzy cognitive map: neither the cause nor the effect in this relationship can change. But “an increase in the number of our battleships” causes “an increase in the number of their battleships” meets the criteria for defining nodes in a fuzzy cognitive map, because each node can now increase or decrease. In essence, the map infers that a change in the underlying concept of the causal node produces an increase (or decrease) in the underlying concept of the effect node. In addition to capturing an increase or decrease in an underlying concept, nodes can also capture the presence or absence of some attribute associated with the space being modeled; these are termed binary nodes.

Let’s return to the security director of the midwestern university. One of the experts that provided her information was concerned about the fire in the biology lab. Specifically, this individual hypothesized that a suspicious individual seen in the area might indicate that the fire was set. The fire being set definitely indicated the incident was non-routine, i.e. not an accident. Such a state of affairs might indicate that the attack was planned and might be part of a terrorist plot.

This individual has identified six changeable concepts that can be captured as nodes in a fuzzy cognitive map (underlined in the previous paragraph). Each of these nodes can be characterized as a binary node, which represents the presence or absence of the concept. Part of the challenge in constructing a fuzzy cognitive map is recasting or distilling the description given by an expert into some sort of causal chain. Keywords like indicate, shows, etc. mean that a causal relationship connects the concepts described. So for this description each of the changeable concepts identified can have two values: there was a fire in the biology lab or there wasn’t, a suspicious individual was or wasn’t seen in the area, the fire was set or it wasn’t, the incident was routine or it wasn’t, the attack was planned or not planned, and finally there was a terrorist plot or there wasn’t one.

The causal connections are indicated by how an individual has described the relationships between the concepts. For example, “a suspicious individual seen in the area might indicate that the fire was set” shows that the expert has identified a causal relationship between suspicious individual and fire was set, with the suspicious individual being a cause of the fire being set. In this expert’s thinking, a suspicious individual present at the biology building would be a cause of the fire being set.

A completed fuzzy cognitive map for this expert’s description of the situation is given in Figure 8.1. Each node is a circle in the map, with the concept it captures given by the label within it: Fire in Biology Lab, Fire was Set, etc. Causal relationships are indicated by a line connecting two nodes, with the arrow showing the direction of “flow” of causality: the line segment starts on the cause and ends on the effect. Fuzzy cognitive maps for the descriptions provided by the other three experts contacted by the director are given in Figures 8.2 to 8.4.

Constructing a Fuzzy Cognitive Map: Edges

This brings up another point: the line segments connecting a cause-effect pair are signed. If the line segment is positive, the changes in the underlying concepts are of the same type; an increase in the causal node gives an increase in the effect node, and likewise a decrease in the cause causes a decrease in the effect. For binary nodes connected by a positive edge, the presence of the cause indicates the presence of the effect, and the absence of the cause indicates the absence of the effect. The relationship is inverse if the sign of the directed segment linking the nodes is negative. In this case an increase in the source causes a decrease in the effect and vice versa. For binary nodes with inverse causality, the presence of the cause would indicate the absence of the effect and vice versa.

To jump ahead a bit, inference is done in a fuzzy cognitive map by selecting a subset of nodes to define as inputs and assigning values to them. These input nodes represent the sources of causality in the map in the same way a voltage source represents a source of energy in an electric circuit. Keep in mind that a node represents a change (increase or decrease, presence or absence) in the underlying concept it captures. Although it is possible to define fuzzy values for this change using linguistic qualifiers, for example somewhat increases or significantly decreases, it is more standard to simply map the node to crisp values of increase, decrease, or no change, or in the case of binary nodes, presence or absence (Perusich & McNeese, 1998). This allows the nodes to be modeled with simple numerical values: 1 for increase, -1 for decrease, and 0 for no change, or 1 for presence and 0 for absence for binary nodes.



FIGURE 8.1 Fuzzy cognitive map for fire chief.

FIGURE 8.2 Fuzzy cognitive map for political science professor.

FIGURE 8.3 Fuzzy cognitive map for campus director of EMS.


FIGURE 8.4 Fuzzy cognitive map for graduate student in the policy department.


Common Nodes for Experts (Partial List)

                                   Fire    Political Science   Director   Policy Department
Node                               Chief   Professor           of EMS     Graduate Student
Fire in biology lab                X       X                   X          X
Accident at strategic location             X                              X
Security office personnel attack           X                   X
Simultaneous attack                        X
Attack is planned                  X       X
Terrorist plot                     X       X                   X
Potential fatalities                                           X          X
Catastrophic incident                      X                   X

Nodes common to the indicated expert are denoted by X.

One of the advantages of fuzzy cognitive maps is that they can be constructed from sub-maps developed by multiple experts and joined through common nodes. Each expert can describe the space in which they are most fluent using a language they are comfortable with. Very often these experts will only describe (i.e. map) a part of the overall problem space under examination. Fuzzy cognitive mapping allows a high-fidelity model of a problem to be built up from multiple experts.

Returning to the problem of identifying a terrorist attack on a university, the director received expert information from four individuals, each with a different viewpoint. None completely described all facets of the problem. Instead, each relied on their own expertise and described what they knew best. These individual sub-maps can be merged through common nodes to develop a composite, global map of the entire problem. A partial list of common nodes for the four experts is given in Table 8.1, with the composite map given in Figure 8.5.


The fuzziness in the map comes in primarily through the strengths of the line segments connecting cause-effect pairs. Take for example a simple two-node pair where A is the cause and B is the effect, as shown in Figure 8.5. As can be seen in the figure, the line segment, directed from A to B, has a strength of “a little.” The idea here is that an increase in A causes a little increase in B. The small increase in B may or may not be important to the system as modeled by the totality of nodal values in the map.

As previously stated, one of the chief strengths of a fuzzy cognitive map is its ability to compare apples to oranges. Since the map infers changes in nodes from changes in the underlying concepts, a common numerical metric is not needed. Values for the nodes do not need to be mapped, often artificially, to a common numerical measure such as money in order to be used in the map. The only knowledge needed is whether the underlying concept a node represents has changed and how, qualitatively, that change affects the effects it is deemed to cause. Each


FIGURE 8.5 Composite fuzzy cognitive map for analyzing terrorist attack against a university.

subject matter expert that may have insight into part of the problem space can construct maps using their own “language.” This allows the individual to incorporate a variety of attributes into the map, from hard details about the system like fuel level, remaining weapons, and altitude, to soft characteristics and decisions, like the cognitive state of the pilot and the rules of engagement they are adhering to.

This highlights another important value of fuzzy cognitive maps as a modeling tool. Very often problem spaces are large, complex, and cut across many disciplines and areas of expertise. Rarely does a single individual have enough insight to model the entire problem space, yet it is this meta-space that is most interesting. Since common numeric metrics are not needed in a fuzzy cognitive map, a group of subject matter experts can be used to model the meta-space. Each develops a small map (sub-map) that incorporates their understanding of the causal relationships that exist from their viewpoint as a relevant subject matter expert. This viewpoint typically does not incorporate or attempt to model the entire problem space. The meta-model is then constructed by piecing these individual sub-maps together through common nodes.

For example, assume that some problem beset an aircraft as it was making a final approach for a landing in a storm. In this case one might identify three subject matter experts relevant to understanding the situation of the aircraft at the time of the mishap: the pilot, the air traffic controller, and the meteorologist at the weather service. Each would have a particular vantage point with relevant information, but most likely none of the three has a complete picture of all of the information important for understanding the situation, if for no other reason than that each only has access to data through their instruments. The pilot can see firsthand through the cockpit window the prevailing weather conditions in the immediate surroundings of the aircraft, with additional information about the status of the aircraft available through the flight instruments. The air traffic controller would have data about flight paths, the location of other aircraft in the area, etc. The weather officer would have real-time data about winds, changing weather conditions along a flight path, the location of thunderstorms and their movements, etc.

Each has a particular vantage on the problem space that encompasses all of this data, and each would make decisions based on what is available to them. Ideally, information sharing would occur, but one actor might not know that data they have in real time is of value to another actor unless asked. Likewise, an actor may not realize the importance of a particular datum in the developing situation, and hence not ask for it, or may not know it is available. When the problem space is reconstructed using the data and vantage point of each actor and then pieced together to form a meta-map of the situation at the time of the mishap, the importance of certain data may become apparent. Future similar situations can then be avoided by providing this relevant data in real time and educating the participants on its importance.

One of the interesting results of constructing a meta-map from individual sub-maps is the emergence of feedback loops. It may turn out that as a change in a concept is propagated through the meta-map, a lengthy causal chain returns to the originating node, i.e. there is a causal feedback loop through the map back to this initial cause. If positive feedback is present, the causal loop tends to reinforce the initial change, increasing its effect, sometimes without bound. If negative feedback is present, the initial change will decrease and decay away, normally the desired state of affairs.

Identification of feedback loops in a map is one of the most desirable features of a fuzzy cognitive map when it is constructed from multiple subject matter experts. One can argue that since an expert “sees” only a piece of the problem space they tend to be unaware of the global effects of their decisions and designs, i.e. they are missing the forest for the trees. They are missing the big picture not necessarily because they want to but more normally because they have to. They do not have the cognitive bandwidth or the knowledge or access to relevant information to fully understand other aspects of the problem space in which their expertise is embedded. Feedback loops are one of those unintended consequences that are only apparent when expertise from multiple individuals is combined to form a meta-model and can be used to explain negative effects when a “big picture” approach is taken to understanding a problem.

Once constructed, a fuzzy cognitive map can be used in two major ways. In the first, the map serves as a guide for future data assessment by examining the causal chains between nodes; here a user is interested in which causes have a direct connection to a particular effect. In the second, the map is used as a meta-model of the problem space for predictive purposes: certain nodes are assigned values that are then propagated through the map to assess the resulting changes in other nodes. In this case the totality of nodal values is the assessment of the system's state given the inputs.


Constructing a Fuzzy Cognitive Map: Assigning Values to Edges

Inference in a fuzzy cognitive map will typically involve the propagation of nodal values through the map. As stated previously, nodes can be given fuzzy values through linguistic qualifiers or crisp values that strictly represent an increase or decrease in the concept represented. Let’s concentrate on the latter situation, where nodes are assigned crisp values of -1 for decrease, 0 for no change, or 1 for increase. A standard technique for inferring nodal values, explained in detail in the next section, is to sum all of the causal nodes connected to a particular node, weighted by their edge strengths. This value is then mapped to one of the three possible nodal values. The fuzziness, then, in both the map and the inference process is captured by the strengths of the edges connecting nodes (Osoba & Kosko, 2017).

The values of these edges can be assigned directly by mapping fractional values to the particular linguistic terms used in the description of the causal chain being modeled. Typically this involves first sorting the terms in a prescribed way, for example from weakest to strongest. Fractional values on the interval [0,1] are then assigned to each. Since there is normally no compelling reason to do otherwise, the interval is divided into equally spaced increments, with each value then assigned to one of the terms. For example, if the description has identified five terms, the interval would be divided into five increments: 0.2, 0.4, 0.6, 0.8, and 1.0. Each term would then be assigned one of these values based on where it falls in the ranking from weakest to strongest. An example of these assignments is given in Table 8.1.


Numerical Values for Fuzzy Edges

Linguistic Qualifier for an Edge in a Map    Numerical Value Assigned
Very little                                  0.2
A little                                     0.4
Somewhat                                     0.6
A lot                                        0.8
A great deal                                 1.0

These numeric values are then checked by inferring nodal values in the map to ensure that they give the desired results, and adjusted if necessary.
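The equal-spacing assignment just described can be sketched in a few lines. The five ranked terms below are an assumed ordering (the rounding is purely cosmetic); any ranked list of qualifiers would work the same way.

```python
def edge_values(terms_weakest_to_strongest):
    """Divide (0, 1] into len(terms) equal increments and assign each
    ranked term the value at the top of its increment."""
    n = len(terms_weakest_to_strongest)
    return {t: round((i + 1) / n, 2)
            for i, t in enumerate(terms_weakest_to_strongest)}

terms = ["very little", "a little", "somewhat", "a lot", "a great deal"]
print(edge_values(terms))
# {'very little': 0.2, 'a little': 0.4, 'somewhat': 0.6,
#  'a lot': 0.8, 'a great deal': 1.0}
```

These starting values would then be adjusted, as the text notes, until inference over the map reproduces known outcomes.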

Let us return to the description provided to the director of security by the political science professor. This expert was most concerned about whether a simultaneous attack was present. If simultaneous attacks were present, then the likelihood of the attack being planned increased, and any planned attack likely indicated a terrorist plot. For this individual, a simultaneous attack meant that at least two of the three incidents described by the director were present. Again the causal concepts captured by nodes in the map are underlined. Except for the likelihood of the attack being planned, each of the causal concepts is binary in nature: rather than an increase or decrease, the presence or absence of the concept is captured by the node. The map for this expert’s reasoning is given in Figure 8.2.

Assessing the strength of the causal connections in this map is done partly with reference to the inference process, described in the next section. For this map, inference will be done by summing the causal nodes for a particular effect, weighted by the strengths of the edges connecting them. The sum is then mapped to a nodal value of 1 (increase or presence), -1 (decrease), or 0 (no change). As stated in the description by the professor, the presence of simultaneous attacks “likely” indicates that the attack is planned. This is not direct causality: simultaneous attacks do not a priori indicate the attack was planned, but there is a strong relationship present. One might assume, given the strong causal relationship, that few if any other causes would need to be present to give the effect attack is planned, so a value of 0.6 is assigned.

For the node simultaneous attacks, the condition defined by the professor is that if any two of the three incidents (attack on the security office, accident at a critical location, and fire in the biology lab) are present, then the node simultaneous attacks will fire. Values of 0.6 are assigned to the edge strengths in this case. In this way the sum of the causes weighted by their strengths for the node simultaneous attacks will always be greater than 1 whenever any two of the causes are present.
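The arithmetic of the professor's two-of-three condition can be checked numerically. The function below is an illustrative sketch using the 0.6 edge strengths assigned above and a firing threshold of 1.

```python
EDGE = 0.6  # strength of each incident -> "simultaneous attacks" link

def simultaneous_attacks(fire, accident, office_attack):
    """Each argument is 1 (incident present) or 0 (absent). The effect
    node fires when the weighted sum of its causes exceeds 1."""
    total = EDGE * fire + EDGE * accident + EDGE * office_attack
    return 1 if total > 1 else 0

print(simultaneous_attacks(1, 1, 0))  # 1: two incidents give 1.2 > 1
print(simultaneous_attacks(1, 0, 0))  # 0: one incident gives only 0.6
```

With these weights any single incident (0.6) falls short of the threshold, while any two (1.2) or all three (1.8) exceed it, which is exactly the two-of-three behavior the professor described.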


Inference in a Fuzzy Cognitive Map

There are several ways in which a fuzzy cognitive map can be used to understand a problem space. In one, the fuzzy cognitive map is a model with predictive powers: given an initial set of nodal values, it can be used to infer the other values in the map.

The idea is to hold these input nodes constant and propagate the causality through the map until it equilibrates to static values or a limit cycle, a repeating fluctuation in nodal values (dynamic equilibrium). The state, or solution, of the problem at hand is the set of all nodal values once initial conditions are applied and the map reaches static or dynamic equilibrium.

To begin the process a subset of nodes in the map are identified as “inputs.” One of the interesting things about a fuzzy cognitive map is that this subset can change depending on the context of the problem. In some cases less information is available so a smaller set of nodes is used. In other cases more information is available so a larger subset is used. In still others, an entirely different set of nodes is used that may or may not overlap with the original set.

Once a set of input nodes is identified, they must be assigned initial values. These will represent the sources of causality for the inference process and will be held constant throughout the process. In some cases the initial value is set purely subjectively by the user, for example, a “large increase” in the adversary’s willingness to attack. In other cases the nodal value (keeping in mind that it must represent an increase or decrease in the underlying concept represented by the node) can be mapped from actual data from the attribute that the node represents. This is generally done by breaking the range of actual values associated with the node into mutually exclusive intervals that are then mapped to a linguistic modifier. Again there is no a priori method for assessing these intervals. They are determined by the judgment of the subject matter experts. Very often an initial set of intervals is assigned and then adjusted based on the inference process to give known outcomes.
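A minimal sketch of this interval-to-qualifier mapping is shown below. The breakpoints and labels are assumptions for illustration (they echo the example weight intervals later in the chapter); in practice the experts would set them and then tune them against known outcomes.

```python
import bisect

BREAKS = [100, 200, 300, 400]          # upper bounds of the intervals (kgs)
LABELS = ["a little increase", "somewhat increases",
          "a lot of increase", "very big increase",
          "extremely big increase"]

def qualify(delta_kg):
    """Map an observed increase to the linguistic qualifier for the
    mutually exclusive interval it falls in."""
    return LABELS[bisect.bisect_left(BREAKS, delta_kg)]

print(qualify(245))   # 'a lot of increase'
print(qualify(50))    # 'a little increase'
```

Because the intervals are mutually exclusive and exhaustive over the expected range, every raw observation maps to exactly one nodal value, which is what the inference process requires of its inputs.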

An alternative approach is to assign each node one of three possible values: 1 to indicate an increase in the underlying concept captured by the node, -1 to indicate a decrease, and 0 to indicate no change. For binary nodes in the map, 1 would indicate the presence of the concept and 0 its absence. Which format is used for the definition of nodal values determines which inference technique is used. Both will be examined here.

Let us first examine the case where nodal values are defined with linguistic qualifiers. An example fuzzy cognitive map is given in Figure 8.6. In this case node A captures changes in the weight associated with a component of the system being modeled by the fuzzy cognitive map. The exact ranges of weight increase for each linguistic qualifier would be determined by the system component the node is capturing and the context it is in. For this example, if the exact increase were 245 kgs then the node would be given a fuzzy value of “big increase.” If this node were considered an input during the inference process, it would retain the value “big increase” regardless of changes the map might attempt to make to it through its connections to other nodes. It is a source of causality, so its value is fixed in the same way that a voltage source’s value is fixed in an electric circuit.

A basic fuzzy cognitive map is defined by its connection matrix A, where A_ij represents the causal link, sign and strength, from node i to node j. An entry of 0 indicates that no causal link exists from node i to node j. With this definition the rows of A represent causes and the columns represent effects. The value of an entry captures the fuzziness in some way, with a positive value indicating direct causality and a negative sign indicating inverse causality. Fractional values capture


FIGURE 8.6 Fuzzy cognitive map after initial conditions are propagated through map.


Example Interval Mapping to Linguistic Qualifier

Change in Weight    Linguistic Qualifier
0-100 kgs           A little increase
101-200 kgs         Somewhat increases
201-300 kgs         A lot of increase
301-400 kgs         Very big increase
401-500 kgs         Extremely big increase

the fuzziness in the strength of the causality, representing modifiers like “somewhat,” “a little,” “a lot,” etc.

Inference proceeds in the following way once the input nodes have been defined and their initial values have been assigned. A vector of all nodes with the input nodes set to their assigned values is multiplied by the connection matrix to propagate the causality one step through the map, yielding an updated vector of nodal values. Should they change because of connections in the map, the input nodes, which represent sources of causality, are reset to their initial values. This process is then repeated with this updated vector being multiplied by the connection matrix, with input nodes reset if necessary, to give an updated vector. Mathematically the update process is given by the following equation:

V_{N+1} = V_N A

where V_{N+1} is the updated vector of nodal values, A is the connection matrix, and V_N is the current vector of nodal values. This process is repeated until the nodal vector V_N equilibrates to a static set of values or a limit cycle. With a limit cycle the nodal values repeat in a particular pattern, indicating an instability in the system. The final set of nodal values reached represents the state of the system the map models. All nodal values are part of this inference.
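The update process just described (multiply the nodal vector by the connection matrix, squash each sum back to a crisp value, re-clamp the inputs) can be sketched as follows. The three-node map, the 0.5 firing threshold, and the iteration cap are all assumptions for illustration.

```python
def squash(x, threshold=0.5):
    """Map a weighted sum back to a crisp nodal value of -1, 0, or 1.
    The 0.5 firing threshold is an assumption of this sketch."""
    if x >= threshold:
        return 1
    if x <= -threshold:
        return -1
    return 0

def infer(A, inputs, max_steps=20):
    """A[i][j]: signed strength of the link from node i to node j.
    inputs: {node index: clamped value}. Returns the equilibrated
    vector of nodal values (or the last vector if a limit cycle
    keeps it from settling)."""
    n = len(A)
    v = [inputs.get(i, 0) for i in range(n)]
    for _ in range(max_steps):
        nxt = [squash(sum(v[i] * A[i][j] for i in range(n)))
               for j in range(n)]
        for i, val in inputs.items():   # inputs are sources of causality: reset them
            nxt[i] = val
        if nxt == v:                    # static equilibrium reached
            return v
        v = nxt
    return v

# Toy map: A somewhat causes B (0.6), B strongly causes C (0.8)
A = [[0.0, 0.6, 0.0],
     [0.0, 0.0, 0.8],
     [0.0, 0.0, 0.0]]
print(infer(A, {0: 1}))   # [1, 1, 1]: an increase in A propagates to B and C
```

Because this toy map is purely feed-forward, the values settle after two passes; with feedback loops present, the same loop would either converge more slowly or cycle, which is exactly the limit-cycle behavior described above.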

For maps that use linguistic qualifiers for nodal definitions, a methodology that mimics fuzzy logic, a max-min approach, is used. In this approach each cause is compared against its link: the minimum of the causal node’s value and the strength of its link is taken as that cause’s contribution to the effect node. This is done for each causal node, and the maximum of these contributions is chosen as the value of the effect node.

E_j = max_{i=1..K} [ min(C_i, A_ij) ]

where E_j is the effect node, K is the number of causal nodes that have non-zero links to E_j, C_i is the fuzzy value of the ith causal node, and A_ij is the strength of the connection from C_i to E_j.

As an example of the max-min inference process, consider the simple fuzzy cognitive map given in Figure 8.6. It contains four nodes with four causal connections. The connections have linguistic strengths summarized in Table 8.2, where “somewhat” is considered stronger than “a little,” “a lot” is stronger than “somewhat,” and so on. So, for example, the causal link from A to B can be read as “an increase in A somewhat causes an increase in B.”

To see how max-min inference works, assume that nodes A and C are designated as inputs, with linguistic values of “a little” and “a lot” respectively. Since there is no feedback present in the map (it is entirely feed forward), the inference process is straightforward in the sense that it takes only one pass through the map to equilibrate. The values for A and C are propagated through the map until all remaining nodes have values. Had feedback been present, the inference process would have been repeated multiple times, with nodes A and C reset to their initial values as inputs whenever connections in the map changed them.

Node B is affected by the two input nodes, A and C. To determine a value for node B, the values of nodes A and C are evaluated against the strength of the connection between them and B, with the minimum value, as given in Table 8.2, taken as the contribution of each node to B. Node A has a value of “a little” while its connection strength to node B is “somewhat.” Choosing the minimum of the two, “a little” becomes the contribution of A to causing B.

TABLE 8.2
Sample Linguistic Values for a Fuzzy Cognitive Map (weakest to strongest)

Linguistic Value
A little
Somewhat
A lot
A great deal

Node D in turn is caused by nodes B and C. (Note this is why the value of node B must be calculated before node D.) The contribution of node C to node D is the minimum of “a lot,” the nodal value of C, and “somewhat,” the strength of the connection from C to D, thus giving a value of “somewhat.” The minimum of “a little” (the value of node B) and “a lot” (the strength of the connection from B to D) is the contribution of B to D, in this case “a little.” The value of node D, then, is the maximum of “a little” and “somewhat,” or “somewhat.” Remembering that the system is the totality of nodes, its final state is given in Figure 8.6. Although only some nodes may be considered outputs and of immediate interest, the state of the system is the inferred values for all nodes.
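The worked example can be reproduced with a short sketch. The ordinal encoding of the linguistic scale is an assumption, as is the C-to-B link strength of “a little” (the text states only the other three link strengths).

```python
# Ordinal encoding of the linguistic scale of Table 8.2 (an assumed scale)
SCALE = {"none": 0, "a little": 1, "somewhat": 2, "a lot": 3, "a great deal": 4}
NAMES = {v: k for k, v in SCALE.items()}

def max_min_update(causes, links):
    """Effect = max over causal nodes of min(cause value, link strength)."""
    return max((min(c, l) for c, l in zip(causes, links)), default=0)

# Figure 8.6 sketch: A and C are inputs; the C->B strength is assumed here.
A, C = SCALE["a little"], SCALE["a lot"]
B = max_min_update([A, C], [SCALE["somewhat"], SCALE["a little"]])
D = max_min_update([B, C], [SCALE["a lot"], SCALE["somewhat"]])
print(NAMES[B], NAMES[D])   # prints: a little somewhat
```

Note that B must be computed before D, exactly as the text requires, because D takes B as one of its causes.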

If the nodes are not defined using linguistic qualifiers, an arithmetic approach can be used. In this case, numerical values are assigned to the nodes and to the strengths of the links between them. Nodal values are limited to 1, -1, and 0, as defined previously. A positive value of 1 for a node indicates an increase, while a value of -1 indicates a decrease in the underlying concept of the node. A node is updated by multiplying each cause connected to it by the strength of its link. These products are then summed to give a value for the effect node. This value can then be mapped to a linguistic scale. In this approach causality accumulates. Mathematically:

E_j = Σ_{i=1}^{K} A_ij C_i

where E_j is the value of the effect node, C_i is the ith node that is a cause of E_j, and A_ij is the strength of the connection between C_i and E_j. The sum is over the K nodes that are causes. Once the value of E_j is determined it can then be mapped back to a fractional interval and its associated linguistic value. This method is most often used when the nodal values are defined by 1, -1, and 0. A common mapping used is:
E_j maps to +1 if the sum is greater than 1, to -1 if the sum is less than -1, and to 0 otherwise.
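The arithmetic update can be sketched as follows. The positive cutoff (a sum greater than 1 maps to +1) is confirmed by the chapter's footnote on edge strengths; the symmetric negative cutoff is an assumption.

```python
def arithmetic_update(causes, weights):
    """Weighted sum of causes, mapped back to {+1, 0, -1}.
    Cutoffs: sum > 1 -> +1, and (assumed symmetric) sum < -1 -> -1."""
    s = sum(w * c for w, c in zip(weights, causes))
    return 1 if s > 1 else (-1 if s < -1 else 0)

# Two of three causes firing over 0.6 links is enough (0.6 + 0.6 = 1.2 > 1):
print(arithmetic_update([1, 1, 0], [0.6, 0.6, 0.6]))   # prints: 1
```

Unlike max-min, causality accumulates here: several weak causes can together push an effect node over the cutoff.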
Returning to the composite map the director prepared for assessing the likelihood of a terrorist attack, any set of its nodes can be used as inputs. Let's assume that information is available that there was a fire in the biology lab, that no suspicious character was seen in the area, and that there was also an accident at a key intersection on campus that blocked traffic. For this case the initial nodal values would be as given in Table 8.5.


TABLE 8.5
Initial Conditions for Scenario

Node                              Numerical Nodal Value
Suspicious character present      -1
Fire in biology lab               +1
Accident at strategic location    +1
Blocked traffic                   +1


FIGURE 8.7 Inference in composite fuzzy cognitive map.

Note: Initial inputs are: No suspicious person present. There is a fire in the biology lab and there is an accident at a strategic location on campus.

To illustrate how the inference process works, let's examine it for the node Simultaneous Attacks. In this case the node has three causes: Fire in Biology Lab, Accident at Strategic Location, and Security Office Personnel Attack. As stated in the scenario description, Fire in Biology Lab and Accident at Strategic Location are initial conditions, so their values are set at +1 and held fixed during the inference process. No information is available about Security Office Personnel Attack, so it is given a value of 0. The initial sum for the node Simultaneous Attacks is then calculated:
Simultaneous Attacks = (0.6)(+1) + (0.6)(+1) + (0.6)(0) = 1.2
This value is then mapped to give a value of 1 for the node Simultaneous Attacks. The remaining initial conditions are propagated through the map to give the values in Figure 8.7.
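The Simultaneous Attacks computation can be checked directly with the 0.6 edge strengths given in the footnote (a sketch; the variable names are illustrative):

```python
# Initial conditions from Table 8.5; the 0.6 links are from footnote 3.
fire, accident, security = 1, 1, 0
s = 0.6 * fire + 0.6 * accident + 0.6 * security
value = 1 if s > 1 else (-1 if s < -1 else 0)  # mapping: sum > 1 -> 1
print(s, value)   # prints: 1.2 1
```

As the footnote notes, the 0.6 strength was chosen deliberately so that any two of the three incidents firing is enough to drive this node to 1, while one alone is not.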



One of the important ways in which a fuzzy cognitive map can be used in the assessment of a problem space is to determine the causal links between nodes. In this instance a user is trying to identify which nodes will ultimately affect a node or nodes of interest, that is, the user is trying to find causal paths from nodes that are defined as inputs to nodes assigned as outputs. Note that depending on the context, the set of nodes defined as "inputs" and "outputs" can change (Perusich & McNeese, 2005).

Such information can be used in two different ways. In the first, a user is trying to identify which input nodes affect output nodes of interest. Large maps constructed from multiple subject matter experts may contain tens, hundreds, even thousands of nodes cutting across multiple expert domains. It is not always obvious which nodes will ultimately affect another node, or how. Developing this information helps the user concentrate on only those nodes that do in fact affect the output node of interest. This can be useful when looking for causes when the output node is some type of disaster, steering the user to only those nodes, and the concepts that they model, that do in fact cause the output. Sometimes nodes that would initially seem to contribute to the output can be eliminated. More importantly, nodes that might not be considered initially can become highlighted through this process, especially when multiple sub-maps are pieced together in developing the final map. It might not be obvious in a sub-map that a particular node could be a contributing trigger to some event, but when the sub-maps are combined a chain of causality is identified that the individual subject matter experts failed to recognize.

Once a causal link from cause to effect has been determined, it can be used to identify and assess "interventions" that can change the outcome. Interventions are add-ons that can be incorporated into the system being modeled. These add-ons then become new nodes in the map that change the causal nodes present in it in some way. In some instances it may be possible to break the link by eliminating a node, for example, blocking an intersection if the traffic at that point is contributing to a problem downstream. A second possibility is to change the map by adding additional nodes at key points that mitigate or change the effects of already-present nodes. For example, it may be the case that a particular node is significantly contributing to the overheating of the system modeled by the map. In this case it may be possible to add nodes that contribute to the cooling of other nodes in the map. If so, the technology modeled by the added node would need to be physically built into the system. Using the map in this way, though, helps the user identify practical and effective ways to design interventions that would give the desired results before time and resources are committed to changing the actual system.

Another way to design interventions is to force or eliminate feedback in the map. As stated previously, one of the interesting but often unrecognized attributes of a fully constructed map is the presence of feedback loops. These loops of causality reinforce or mitigate certain nodes. Identifying these loops can suggest ways in which they can be broken if their net result is undesirable. It may also be possible, by careful examination of the causal chain leading back to the node, to change the loop's character. Many times these loops give positive feedback, reinforcing a node. Just as in a control circuit this can be undesirable, causing a particular attribute to grow without bound and saturate. If such a loop can be flipped to negative feedback, then the "shock" to the system will be mitigated and decay away.

To identify whether one node will ultimately affect another node, the reachability matrix for the map is calculated. To calculate the reachability matrix, entries in the connection matrix A are first replaced with a 1 if they are non-zero. Since this calculation requires knowing only that a causal connection exists, its strength is not needed. The reachability matrix R is calculated by repeated multiplication of A. After each multiplication the entries in R are again mapped to 1: non-zero values in the matrix are given values of 1, with values of 0 remaining unchanged. This process is repeated until the entries in R are static. In theory this could take N multiplications, where N is the number of nodes in the map. In practice the map equilibrates well before N multiplications.

A non-zero entry R_ij in the reachability matrix indicates that there is a causal path from node i to node j. Note that there may be multiple paths through the nodes in the map that connect node i to node j. A visual inspection of the map is then generally used to determine the path or paths from node i to node j.
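One way to compute the reachability matrix, following the repeated binarized multiplication described above, is sketched below (an illustrative NumPy sketch, not the author's code):

```python
import numpy as np

def reachability(A):
    """R[i][j] = 1 iff some causal path leads from node i to node j."""
    B = (np.asarray(A) != 0).astype(int)         # keep only link existence
    R = B.copy()
    while True:
        R_next = ((R + R @ B) != 0).astype(int)  # extend each path by one link
        if np.array_equal(R_next, R):            # static: all paths found
            return R
        R = R_next
```

For a simple chain 0 -> 1 -> 2, for instance, the result reports a path from node 0 to node 2 even though no direct link exists, which is exactly the information used to trace causal paths from inputs to outputs.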


Fuzzy cognitive maps are an effective way to model multi-faceted problem spaces with many different types of attributes present. Nodes in the map represent changes in the underlying concepts, so a common numeric metric is not needed to compare the different concepts being modeled. Another advantage of this technique is that the map can be constructed from sub-maps joined through common nodes, each prepared by an expert with knowledge of only part of the problem. The map itself is a true model in the sense that it can predict outcomes. A set of nodes is chosen as inputs and their causal effects are allowed to propagate through the map. The resulting state of the system is then the values of all the nodes in the map viewed together. In addition to predictive capabilities, a fuzzy cognitive map can be used to understand the relationships between the concepts incorporated in the model. This can be used to examine the effectiveness of proposed changes to aspects of it.


  • 1. Each line segment has a value associated with it, indicating the strength of the causal relationship between the two nodes. A description of the meaning of these values and how they are assigned is given in the next section.
  • 2. The incidents were a fire in a critical biology lab, an accident at a strategic location, and an attack on personnel in the security office.
  • 3. The mapping used later for inference in the map is that if the sum is greater than 1, then the node maps to 1. So by choosing an edge strength of 0.6, only two of the causes for this node need to fire for it to be 1.


Kosko, B. (1986). Fuzzy cognitive maps. International Journal of Man-Machine Studies, 24, 65-75.

Kosko, B. (1987). Adaptive inference in fuzzy knowledge networks. IEEE International Conference on Neural Networks, June (II-261-268).

McNeese, M. D., & Perusich, K. (2000). Constructing a battlespace to understand macro- ergonomic factors in team situational awareness. In Proceedings of the Industrial Ergonomics Association/Human Factors and Ergonomics Society (IEA/HFES) 2000 Congress (pp. 2-618-2-621). Santa Monica, CA: Human Factors and Ergonomics Society.

McNeese, M. D., Rentsch, J. R., & Perusich, K. (2000). Modeling, measuring, and mediating teamwork: The use of fuzzy cognitive maps and team member schema similarity to enhance BMC3I decision making. In Proceedings of the IEEE international conference on systems, man, and cybernetics (pp. 1081-1086). New York: Institute of Electrical and Electronics Engineers.

Osoba, O., & Kosko, B. (2017). Fuzzy cognitive maps of public support for insurgency and terrorism. Journal of Defense Modeling and Simulation: Applications, Methodology, Technology, 14(1), 17-32.

Papageorgiou, E., & Poczeta, K. (2015). Application of fuzzy cognitive maps to electricity consumption prediction. 2015 annual conference of the North American Fuzzy Information Processing Society. 10.1109/NAFIPS-WConSC.2015.7284139

Perusich, K., & McNeese, M. D. (1998). Understanding and modeling information dominance in battle management: Applications of fuzzy cognitive maps. Air Force Research Laboratory. AFRL-HE-WP-TR-1998-0040.

Perusich, K., & McNeese, M. D. (2005). Using fuzzy cognitive maps as an intelligent analyst. Proceedings of the 2005 IEEE international conference on computational intelligence for homeland security and personal safety (pp. 9-15). 10.1109/ CIHSPS.2005.1500602

9 Understanding Human-Machine Teaming through Interdependence Analysis

Matthew Johnson, Micael Vignatti, and Daniel Duran



Understanding Interdependence.............................................................................210

Interdependence Design Principles........................................................................211

Interdependence Analysis Tool..............................................................................217

Modeling of Joint Activity................................................................................219

What Is Joint Activity?.................................................................................219

How to Model Joint Activity—The Joint Activity Graph............................220

Assessing Potential Interdependence................................................................221

Enumerating Viable Team Role Alternatives................................................221

Assessing Capacity to Perform and Capacity to Support.............................223

Identifying Potential Interdependence Issues...............................................224

Determining System Requirements..............................................................224

Analyzing Potential Workflows.........................................................................225

Establishing Workflows................................................................................225

Assessing Workflows....................................................................................226

Interdependence Analysis Example.......................................................................226

Summary of IA Tool Capabilities.....................................................................229




As any craftsman knows, having the right tool for the job makes all the difference. Sadly, adequate formative tools are lacking for the design of human-machine teaming. The vast majority of formative design tools are technology-centric: the human is not even considered part of the system (e.g. MATLAB, LabVIEW).

Human factors research has highlighted the need for consideration of the human, but this is often accomplished using summative assessments to evaluate existing systems at the end of development (e.g. NASA TLX). User-centered design has pushed for such evaluations to happen earlier in the development process. Apple, for example, has a reputation for early user evaluations translating into successful products. However, this remains the exception rather than the norm for most system development efforts. Moreover, while user evaluations are undoubtedly valuable, designers need to be able to account for such considerations long before there is a system to evaluate. To assist designers in accomplishing this, they need formative tools available in the design process that include the human as part of the system from the beginning.

Another major impediment to designing human-machine teaming is that most existing tools focus on the taskwork, in particular the physical activity. Taskwork is only a small piece of the teaming puzzle. More advanced efforts from cognitive systems engineers extend this with consideration for cognitive activity (e.g. Crandall & Klein (2006)). This is a valuable contribution, but still tends to be taskwork focused. While these traditional task-based approaches are important, they do not sufficiently capture human-machine teaming requirements. Moreover, the language, concepts, and products of those who focus on cognitive aspects are often far removed from those who design and implement working systems and do not translate “into a language that helps a product be built or coded” (Hoffman & Deal, 2008).

The key to understanding human-machine teaming, and in fact teaming in general, is understanding interdependence. Teaming, both human-human and human-machine, comes in many forms with varying characteristics and properties. The one concept that is consistent in every case is the importance of interdependence. This truth is invariant across all domains. This paper begins with an explanation of interdependence. To help designers reorient their minds and view the problem space through the lens of interdependence, we propose several design principles. These principles are intended to help designers shift their perspective and reframe the design challenges to focus on the core elements of teaming, specifically the interdependence needed to support it. Lastly, we discuss a formative tool called the Interdependence Analysis tool.


In order to analyze a human-machine system’s interdependence, it is necessary to understand the concept of interdependence. It is often simply equated to dependence, where one entity relies on another because it lacks some capability provided by the other. However, this definition of the concept is too simplistic to capture the nuances observed in interdependence relationships among teams engaged in joint activity. Our definition of interdependence is as follows:

“Interdependence” describes the set of complementary relationships that two or more parties rely on to manage required (hard) or opportunistic (soft) dependencies in joint activity. (Johnson et al., 2014)

Interdependence is about relationships, not taskwork. It is true that the taskwork influences the potential relationships; however, designing human-machine teaming is about designing the interdependence relationships, not the taskwork functions. These relationships are used to manage the interdependence in the joint activity. Relationships can be required, but a significant portion of teaming is about exploiting opportunistic relationships.

To better intuit the concept of interdependence, consider an example of playing the same sheet of music as a solo versus a duet. Although the music is the same, the processes involved are very different (Clark, 1996). The difference is that the process of a duet requires ways to support the interdependence among the players. Understanding the nature of the interdependencies among team members provides insight into the kinds of coordination or teaming that will be required, such as the management of timing, tempo, and volume. Supporting these relationships through agreed signaling and exploiting these interdependence relationships at run-time is what teaming is all about. Success in a duet requires not only execution of the musical score (i.e. individual taskwork competency), but also the extra work of coordinating with others via interdependence relationships to produce effective performance. Flawless execution of the musical score individually is not enough for the duet to be successful, nor is exceptional individual competence sufficient for effective teaming.


While having the right tools is valuable, having the right mindset or perspective on the problem is also essential to effectively leveraging the tools. This turns out to be particularly challenging for human-machine teaming largely because of the pervasiveness of today’s function allocation-based paradigms (Johnson, Bradshaw, Feltovich, Hoffman et al., 2011). In general, such approaches date back to Sheridan and Verplank’s work on supervisory control (Sheridan & Verplank, 1978). Follow-up research on dynamic and adaptive function allocation has led to numerous proposals for dynamic adjustment of autonomy. Such approaches have been variously called adjustable autonomy, dynamic task allocation, sliding autonomy, flexible autonomy, and adaptive automation. In each case, the system must decide at runtime which functions to automate and to what level of autonomy (Parasuraman, Sheridan, & Wickens, 2000), thus we refer to them as function allocation-based approaches. A restrictive outcome of such approaches is that automation choices dictate interaction possibilities. A simple example of this can be seen with the Roomba vacuum. The original bump-and-go exploration strategy of the Roomba prevented the ability to pause and resume work. Newer models actually map the room and plan efficient routes, enabling such interaction. A more recent example can be found in machine learning. This automation choice has demonstrated sophisticated capabilities that have raised interest, while at the same time the DARPA XAI project is examining how to redesign these types of technologies to be more explainable, so they can be more useful to the humans with whom they will need to work. These are just a few of the examples where automation choices limited interaction potential.

Our approach inverts this paradigm, suggesting that desired teaming interactions should shape the automation design (Johnson, Bradshaw, Feltovich, Jonker et al., 2011). Thus, our focus is on the interdependence relationships that enable interaction. Focusing on relationships instead of functions is challenging for most people raised under the traditional task decomposition paradigm. Task decomposition is the staple approach to tackling hard problems by breaking them down into smaller ones. Understanding interdependence requires an additional step of considering how those pieces will fit back together again when distributed between people and machines. As Peter Drucker noted long ago, “when it comes to the job itself, however, the problem is not to dissect it into parts or motions but to put together an integrated whole” (Drucker, 1954). While his statement was about business management, it summarizes the true challenge of human-machine teaming nicely.

To help designers reorient their minds and view the problem space through the lens of interdependence, we propose several design principles. The first principle is about the value of teaming. While there is certainly a cost to teaming, and that cost must be controlled (Klein, Woods, Bradshaw, Hoffman, & Feltovich, 2004), there is a value to teaming that must be balanced against the cost. Historically, traditional approaches appear almost surprised by interdependence. For example, an early task decomposition approach to planning noted that “the expansion of each node produces child nodes. Each child node contains a more detailed model of the action it represents. The individual subplan for each node will be correct, but there is as yet no guarantee that the new plan, taken as a whole, will be correct. There may be interactions between the new, detailed steps that render the overall plan invalid” (Sacerdoti, 1975). Similarly, another approach noted “because the relevant constraints have been shared during the planning process, the expectation is that few, if any, conflicts will appear during plan merging. However, because of the complexity of planning dependencies, conflicts can arise” (desJardins & Wolverton, 1999). These references from early planning systems unknowingly highlighted the need to handle interdependence. Their solution was to try to supplement their task decomposition approach with capabilities such as critics used to resolve conflicts and backtracking techniques to handle cases that could not be resolved. Today's function allocation approaches are no different from these early planning approaches because they focus on what to automate and who to assign it to (Parasuraman et al., 2000). More recent work still notes that “‘conventional wisdom’ is often an over-simplification, and will be modified and sometimes reversed by a host of contextual factors” (Wickens, Li, Santamaria, Sebok, & Sarter, 2010).
The main problem with traditional approaches was noted by Stefik long ago, who observed “Subproblems interact. This observation is central to problem solving” (Stefik, 1981). This is a critical insight; however, he concluded, “a key step in design is to minimize the interactions between separate subsystems” (Stefik, 1981). While traditional approaches view interdependence as an unexpected or unfortunate problem to be resolved, avoided, or minimized, we view it as the core design element. It is something to be leveraged opportunistically. The main reason is that the value of teaming comes not from avoiding, ignoring, limiting, or minimizing teaming, but from exploiting it. For example, searching a building can be done more effectively as a team. However, to attain the benefits, the team must exploit their interdependencies by coordinating search activity and sharing information about their individual search efforts. Thus, our first principle:

Principle 1: The value of teaming comes from exploiting it, not avoiding it.

In order to exploit teaming, it is important to understand teaming. Historically, engineers have designed work to be done by an individual. Staying with our building search example, engineers have created robotic algorithms for an individual robot to search a building. However, if two robots with such an algorithm were put into a building, the result would not be teaming, but two individual efforts. The reason is the algorithm designers did not view the work as joint work. In contrast, we propose that all behaviors should be designed from the beginning to be joint work (Johnson, Bradshaw, Feltovich, Jonker et al., 2011). This means that as any behavior or algorithm is created, designers should consider the potential teaming associated with the activity. If this is done, the single agent case is just a degenerate case and is achieved for free. However, the converse is not true. Thus, our second principle is:

Principle 2: All work should be designed as joint activity (coactive), with independent work being the degenerate case.

The implication of this principle is that designers should be designing collaborative algorithms for distributed and decentralized systems. This is important because human-machine systems are inherently distributed and decentralized.

In order to achieve the second principle, we need to be able to describe joint activity. This requires a theoretical understanding of how work can be performed jointly, which is absent in existing approaches. Currently, engineers decide where to divide work into sub-tasks or actions. This can be aided by approaches like hierarchical task analysis (Annett, 2003), but few teams employ such approaches formally in practice. Even if task analysis approaches are used, they provide no logic for the division choices. Thus work decomposition is done with limited understanding of the consequences or impact of these choices, resulting in well-known issues like the substitution myth (Christoffersen & Woods, 2002). It is not just the automation, but the design of the automation that affects performance. Because of this, there is a need for understanding the work itself and the interdependence created by decomposition and distribution. Some examples of this can be found in research on group processes and productivity which provides a taxonomy of group tasks (Steiner, 1972). It includes knowing if a task is divisible or not, whether the task requires maximizing or optimizing, and whether the task is additive, disjunctive, or conjunctive just to name a few. Continuing with the building search example, moving debris is an additive task allowing for joint activity, while pushing an elevator button is disjunctive, with little value if done jointly. This leads us to our third principle:

Principle 3: Any work has inherent potential for jointness and limitations to jointness.

This principle is about uncovering and identifying the potential interdependencies in joint work. One particularly human capability is our amazing ability to team. People are thrown into teams without training and sometimes even without domain knowledge and yet are capable of at least rudimentary teaming. Our life experiences have enabled us to understand what aspects of work require synchronization, when acknowledgment is useful, what information is relevant to share, and how to request assistance. While people do make mistakes and some people are more naturally skilled than others, this generalizability across domains—regularly exhibited by people—provides the intuition that there are common patterns in joint work which support our third principle.

In order to model joint work, we must first address what counts as work. Traditional engineering and planning target physical actions that affect the world, but can ignore cognitive aspects such as sensing, perception, memory, reasoning, and understanding. This oversight is demonstrated by the development of an entirely separate field known as cognitive engineering (Norman, 1986) to address issues ignored by traditional engineering. While teamwork does involve coordinating physical actions, a significant role of teamwork is coordinating cognitive activities as well (Fong, 2001). Examples include monitoring the state of the world, drawing attention to significant events, assessing the progress of team members, and reasoning over current circumstances, all of which can be enhanced through effective teaming. Designs that do not include these aspects will not produce effective team players (Klein et al., 2004). Thus, our fourth principle:

Principle 4: Teaming occurs not just on the physical level, but also on the cognitive level.


As we get into more specific discussions on modeling and representation, we can draw upon good design principles that cross domain boundaries. Our fifth principle is just good design practice in general and will be applied in several ways:

Principle 5: Separate the “what” from the “how.”

As an example, consider the goal of making sure a teammate does not fall into a hole while working. This could be accomplished by specifying a route for the teammate that avoids the hole, informing the teammate about the hole and letting them avoid it, or positioning oneself or an object by the hole to force the teammate to avoid both the obstacle and the hole. All of these achieve the same purpose (the what) but through different means (the how).

There are many factors that go into effective teaming. These include the work itself, the team members involved in the work, and the environmental factors in which the work is performed. The challenge is that all of these factors interact. In accordance with principle five, we propose our sixth principle:

Principle 6: Joint work can and should be modeled in an agent-agnostic manner.

By this we mean that the description of the work should not include considerations for specific team compositions or team member capabilities. Using the building search example, the work would involve moving around the building and identifying people. It would not include walking versus flying, or infrared versus visible light detection. This does not mean team composition will not be accounted for, just that the initial work description is not the appropriate place to do this. This is indeed a foreign concept in robotics, as most developers strive to customize their solutions to the specific targeted hardware, though recent efforts like ROS (robot operating system; Quigley et al., 2009) have shown the value of abstraction in the robotics domain. This concept is less foreign in domains such as cross-platform application development. In such domains, layers of abstraction are used to enable generalization. Similarly, our principle is trying to emphasize that you may not know the characteristics of the teammates ahead of time, so designing your joint work on top of a reasonable abstraction of the work itself will allow for broader applicability and reuse.

If work should be modeled as joint work, how is this different than regular work? We propose it is the inclusion of interdependence in the model that captures the potential jointness. Malone and Crowston define coordination as “managing dependencies between activities” (Malone & Crowston, 1994). Studying human teams and team effectiveness, researchers have identified team member interdependence as a critical feature defining the essence of a team (Salas, Rosen, Burke, & Goodwin, 2009). From human-machine research, Feltovich et al. propose interdependence is the essence of joint activity (Feltovich, Bradshaw, Clancey, & Johnson, 2007). Interdependence focuses on how the decomposed and potentially distributed work remains interdependent. This provides exactly what is needed to understand joint work, to understand the implications of different decomposition choices, and to specify requirements based on teaming (Johnson et al., 2014). Thus, our seventh principle:

Principle 7: Interdependence provides the basis for understanding potential jointness.

Some interdependencies are obvious. For example, resource constraints on a shared resource or sequencing constraints on dependent tasks. These are hard requirements that must be coordinated through teaming. As hard requirements, they are unavoidable and therefore obvious. However, a significant amount of normal human teaming involves opportunistic (i.e., soft) interdependence relationships. Soft interdependence does not stem from a hard constraint or a lack of capability. It arises from recognizing opportunities to be more effective, more efficient, and/or more robust by working jointly. Soft interdependence is less obvious because it is optional and opportunistic rather than strictly required. It includes a wide range of helpful things that a participant may do to enhance team performance. Examples include progress appraisals (“I’m running late”), warnings (“Watch your step”), helpful adjuncts (“Can I get the door for you?”), and observations about relevant unexpected events (“It has started to rain”). Many aspects of teamwork are best described as soft interdependencies. Our observations to date suggest that good teams can often be distinguished from great ones by how well they manage soft interdependencies. Thus, our eighth principle:

Principle 8: Teaming involves both required (hard) and opportunistic (soft) interdependencies.

So how does a designer identify interdependencies, particularly the less obvious soft interdependencies? Coactive design proposed three essential interdependence relations: observability, predictability, and directability (Johnson et al., 2014). Observability means making pertinent aspects of one’s status, as well as one’s knowledge of the team, task, and environment, observable to others. Predictability means one’s actions should be predictable enough that others can reasonably rely on them when considering their own actions. Directability means being able to influence the behavior of others and, complementarily, being open to influence by them. There are certainly additional types of interdependence relationships, such as explainability and trust, but we view these three as foundational to the others. This leads to principle nine:

Principle 9: Observability, predictability, and directability are compulsory interdependencies in teamwork.

One characteristic of teaming that provides compelling value is a team’s ability to be robust to individual failures. This does not happen by accident or without effort. Teams monitor and assess each other’s state in order to achieve this advantage. Underlying this skill is the understanding that failure is always an option. Human failings and limitations are well known and include issues of experience, motivation, and attention. These issues can vary over time and across individuals. Human failings are often the motivation for more automation (Johnson & Vera, 2019). Yet automation has its own failings and limitations: it can have blind spots, it can be brittle, and it often lacks contextual awareness, to name a few. Neither human failings nor automation failings are a problem as long as the team is attentive to the potential for failure and, more directly, as long as the human-machine team is designed to address it. Thus, principle ten:

Principle 10: Failure is always an option and teaming should be designed to help the team be robust to failure of both people and machines.

This means that designers should not only be considering if performing some task is possible, but should be considering the risks and frailties of a task with respect to all team members. This provides insight into the importance and potential necessity of different teaming options in order to prevent any single team member from being a critical point of failure. Supporting this principle involves designing appropriate observability and predictability relationships to support monitoring, backup behaviors, and other teaming competencies.

Our last principle is another based on principle five—separating the what from the how. It addresses an important aspect of teaming, namely teaming strategy. Teaming strategy is the means by which a team chooses to exploit available interdependence options. In traditional systems that tackle multi-agent teaming, the teaming strategy is often tightly coupled to the overall system implementation, making it difficult if not impossible to change teaming strategies (e.g., Dias, Zlot, Kalra, & Stentz, 2006). It also confounds the scientific analysis of teamwork by not separating the teaming capability within the work from the teaming strategy employed by the team members. Learning to understand the teaming capability of the work


TABLE 9.1 Summary of Interdependence Design Principles

1. The value of teaming comes from exploiting it, not avoiding it.
2. All work should be designed as joint activity (coactive), with independent work being the degenerate case.
3. Any work has inherent potential for jointness and limitations to jointness.
4. Teaming occurs not just on the physical level, but also on the cognitive level.
5. Separate the “what” from the “how.”
6. Joint work can and should be modeled in an agent-agnostic manner.
7. Interdependence provides the basis for understanding potential jointness.
8. Teaming involves both required (hard) and opportunistic (soft) interdependencies.
9. Observability, predictability, and directability are compulsory interdependencies in teamwork.
10. Failure is always an option and teaming should be designed to help the team be robust to failure of both people and machines.
11. The jointness of work and the teaming strategy can and should be considered separately, though they need to be designed to work together.
separate from the teaming strategy of the team members helps designers comprehend the influence of each on overall system performance. Thus, principle eleven is:

Principle 11: The jointness of work and the teaming strategy can and should be considered separately, though they need to be designed to work together.

The purpose of these principles is to help reshape a designer’s mindset in order to effectively employ the Interdependence Analysis tool. Table 9.1 summarizes the interdependence design principles.


The purpose of interdependence analysis (IA) is to understand how people and automation can effectively team by identifying, and providing insight into, the potential interdependence relationships used to support one another throughout an activity. The Interdependence Analysis tool was developed to assist with IA. The tool can be used to analyze existing systems, but one of its strengths is that it can also be used formatively to guide the initial design process. The need for formative tools is consistent with Kirlik et al., who emphasize “the importance of understanding why cognitive demands are present, prior to determining a strategy for aiding the operator in meeting these demands” (Kirlik, Miller, & Jagacinski, 1993, p. 950). Understanding and designing for interdependence can provide this type of guidance. IA provides insight into when the information requirements needed for specific interdependencies are adequately supported and when they are not. It can inform the designer of what is and is not needed, what is critical, and what is optional. Most importantly, it can indicate how changes in capabilities affect relationships.

FIGURE 9.1 Generic interdependence analysis table with three main sections labeled.

Note: Section 1 helps designers model the joint activity, section 2 helps them identify potential interdependencies in the activity, and section 3 helps analyze the potential workflows to better understand the flexibility and risk in the human-machine system.

As systems develop and improve, understanding how these changes impact human-machine teaming is critical to ensuring the acceptance and utility of new technology.

The IA tool is in the form of a table, as shown in Figure 9.1, with three main sections: joint activity modeling, assessment of potential interdependence, and analysis of potential workflows.

Modeling of Joint Activity

The first section of the IA tool focuses on modeling the joint activity. In accordance with principle two, all work should be modeled as joint activity. In order to accomplish this, it is important to understand what is unique and important about joint activity and how these aspects can be modeled.

What Is Joint Activity?

So, what does it mean to model some task or function as joint activity? Our view of joint activity comes from work on joint activity theory (Feltovich et al., 2007; Klein et al., 2004), a generalization of Herbert Clark’s work in linguistics (Clark, 1996). Joint activity has important characteristics with respect to structure, process, and the potential for interaction (Bradshaw, Feltovich, & Johnson, 2011).

The first distinction is in the overall structure of joint activity. Joint activity is sets of nested actions. When viewed as a task, traditional engineering practice suggests making the given work an atomic isolated module that performs the specified function when called. The internals of the standalone module are typically hidden through encapsulation. While this is good programming practice for software development and integration, it turns out to be problematic for teaming. This is because tasks or functions are actually part of sets of nested actions, in other words an activity. The function may have several sets of nested actions within it, or it may itself be nested within a set of actions. Though team members may be working on functions that can be represented individually, it is important, from a teaming perspective, to understand the overall team activity context provided by the activity structure.

The second distinction is that joint activity is a process, one that extends in space and time. When viewed as a task or function, this is often overlooked, as if no time transpires and the world stands still. This has many implications from a teaming perspective. The first is that events of the past, current status within an activity and intentions for the future are all potentially relevant for effective teaming. Additionally, having a process implies that there is additional work necessary to compose the hierarchical structure. This additional work, often referred to as coordination, is also important for effective teaming.

The last distinction important to this discussion is that joint activity has the potential for interaction. The previous distinctions (considering work as sets of nested actions that are part of a process) shift thinking from individual tasks to activities. However, it is consideration of the potential for interaction that enables the activity to be considered joint. If there is not substantive interaction, then the work is parallel— not joint (Bradshaw et al., 2011). This need to support interaction drives what it means to model joint work. It means to decompose joint activity into a set of nested actions and to instrument those actions to properly support interactions needed for distributed team members to recompose the final solution. This is where designing for interdependence opposes traditional information-hiding practices. Functional encapsulation and opaque automation are often at odds with effective teaming. Understanding the nature of joint activity also plays a role in a more nuanced understanding of interaction. Since activity involves nested sets of activities, interaction can happen across a variety of levels of abstraction. Since it is a process, interaction can involve the past, the future, and ongoing progress during an activity. Defining a system as a human-machine team is defining it as joint activity, but defining it alone is not enough to make it a reality; the system must be designed and built.

How to Model Joint Activity—The Joint Activity Graph

While there are conceivably numerous ways to model joint activity, we propose one solution called the Joint Activity Graph (JAG). The purpose of the JAG is to capture the key elements of joint activity: structure, process and potential interactions.

The data structure underlying a JAG is a simple graph. It is a tree of unit height—a single activity with zero or more children. More complex activity can be constructed by assembling multiple JAGs into a larger structure that is itself a JAG. Any high-level JAG can be understood by recursively expanding all the children to provide a tree-like view of the overall activity. This recursive data structure has intrinsic benefits for composition and reuse. The JAG provides a description of all the nested activities that are necessary to the proper execution of the activity it describes. As a hierarchical structure, it is similar to hierarchical task networks (HTNs) (Erol, Hendler, & Nau, 1994; Georgievski & Aiello, 2014). As with HTNs, JAGs can be directly executable or conceptual like a goal, sometimes referred to as primitive and non-primitive tasks respectively. However, JAGs do not require explicit declaration as such. Whereas planners are built on top of existing systems with known capabilities, human-machine teaming is constantly evolving, and the potential variation in team composition means rigid assumptions about the type of actions executable by one agent will likely be invalid for some other agent. Thus, JAGs were designed to postpone the decision on what is directly executable, allowing teams the flexibility to resolve this at runtime.

If structure alone was all that was needed to understand teaming, then JAGs would be unnecessary as a hierarchical tree would suffice. However, the assembly of a JAG into a more complex activity also requires a process description. JAGs contain a process model whose purpose is modeling how to assemble its immediate descendants into the parent. As with structure, flexibility is an important aspect of the teaming process. The goal of modeling the process with a JAG is not to dictate the “right way” to complete an activity, but to model the permissible ways to do so. The JAG aims to depict the roadmap of options not just one viable road. In other words, it is not meant to define the one solution, but a solution space. The re-composition process can be individually defined for each unit JAG and is fully customizable. While these can be designed from scratch to fit specific needs, we natively support a set of common useful alternatives. These alternatives are combinations of two broad concepts: sequencing and logic. Sequencing provides for sequential or parallel execution. Logic provides the operators “And” and “Or” for specifying conjunctive and disjunctive tasks. Together these provide four useful combinations:

1. Sequential-And: All activities need to be completed successfully in sequence.
2. Sequential-Or: Activities are executed in sequence until one is completed successfully.
3. Parallel-And: All activities can be executed in parallel and all need to be completed successfully.
4. Parallel-Or: All activities can be executed in parallel but race each other until one completes successfully.

These four processes provide sufficient mechanisms to model many different activities. However, because it is unlikely that we can cover all types of processes imaginable, we left the process definition open for future extensions.
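A unit JAG and its four native process models can be sketched as a small recursive data structure. This is an illustrative reconstruction, not the authors' implementation; the class and method names are ours:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Process(Enum):
    SEQUENTIAL_AND = "sequential-and"  # all children succeed, in order
    SEQUENTIAL_OR = "sequential-or"    # try in order until one succeeds
    PARALLEL_AND = "parallel-and"      # all children succeed, any order
    PARALLEL_OR = "parallel-or"        # race; first success wins

@dataclass
class JAG:
    """A tree of unit height: a single activity with zero or more child JAGs."""
    name: str
    process: Process = Process.SEQUENTIAL_AND
    children: List["JAG"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        return not self.children

    def expand(self, depth: int = 0) -> List[str]:
        """Recursively render the tree-like view of the overall activity."""
        lines = [f"{'  ' * depth}{self.name} [{self.process.value}]"]
        for child in self.children:
            lines.extend(child.expand(depth + 1))
        return lines
```

Composing `JAG("search building", Process.PARALLEL_AND, [JAG("move around building"), JAG("identify people")])` yields a larger structure that is itself a JAG, and `expand()` produces the recursive tree view. Note that nothing in the structure declares which leaves are directly executable, leaving that decision to runtime as described above.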

Structure and process dictate how an activity can be decomposed and distributed and how it can be recomposed to achieve the goals of the activity, however neither defines the potential for interactions that determine the jointness. This is accomplished by identifying the interdependence relationships that underlie and define the needed interactions which in turn enable teamwork. Some interdependencies come from the structure. For example, if a JAG’s children are distributed across team members, to know that the parent JAG is successful may require information sharing. Interdependence can also stem from the process. For example, if sequential activities are distributed across team members then they will need to coordinate their sequencing. However, many of the important teamwork dimensions such as adaptability, situation awareness, performance monitoring and feedback, and decision making (Driskell, Goodwin, Salas, & O’Shea, 2006) involve interdependencies beyond those associated with structure or process. The purpose of the IA tool is to help designers identify potential interdependencies. Specifically, observability, predictability and directability (OPD) requirements explain what kind of interactions are potentially valuable to achieve effective teaming (principle 9). More importantly they depict precisely how a given activity must be instrumented to expose and ingest appropriate information.

Assessing Potential Interdependence

After modeling the joint activity, the next step is to assess the potential interdependence. This involves enumerating the viable team role alternatives, assessing the capacity to perform the work, assessing the capacity to support another team member as they perform the work, identifying potential interdependencies, and then determining the requirements to support the interdependence relationships of interest.

Enumerating Viable Team Role Alternatives

While the joint activity is modeled in an agent-agnostic manner, the assessment of interdependence is in general not. This is because interdependence is about relationships. Therefore, any team alternative needs at a minimum two team members, though larger teams are permitted. Having specific individual team members is not required, though it does permit greater specificity. The team alternatives section of the tool captures the fact that team composition has an impact on teaming. It also permits analysis and comparison of how changes to team composition impact teaming.

To aid interdependence analysis, the IA tool makes a distinction between a performer and supporting team members. The performer is the individual primarily doing the given aspect of the work. The supporting team members are then viewed from the perspective of assisting the performer. It is the supporting team member columns that are key to identifying interdependencies. For a given alternative, only one entity is assigned as the performer, labeled P* in Figure 9.1. This is not to say others cannot do the work, but is simply a mechanism to aid the designer in considering a certain perspective.

In general, IA tables will have a minimum of two alternatives: the first with a given performer and supporter, and the second with the roles reversed. This allows the designer to consider all of the joint work from both perspectives. This permutation is another key element to identifying all potential interdependencies, which in turn guides the design of effective teaming. In addition to considering any party performing any part of the work, this also forces consideration of any party assisting in any part of the work. If a team consisted of two identical team members, then having two team alternatives would be redundant since they would be identical. The main reason for a minimum of two alternatives is that people and machines are inherently different. This asymmetry needs to be understood and accounted for when designing human-machine teaming.

The columns in each alternative can represent specific individuals (existing or planned). If the team has more than two members then additional columns can be used, as represented by columns A, B, C, and D in Figure 9.1. For larger teams this can become unwieldy, but categories and roles can be used to keep it manageable. For example, consider a single operator managing four unmanned aerial vehicles (UAVs) and two unmanned ground vehicles (UGVs). Assuming the vehicles are of the same type, categories can be used. The team alternative would be three columns: one for the human operator, one for UAVs, and one for UGVs. Multiple people are also permitted and can be simplified with roles. Consider extending the previous example to two such units managed by a commander. These 15 entities can be captured by four columns: commander, operator, UAV, UGV. There is a limit to the feasibility of extending the table to large teams, but such teams are more like organizations than teams.

In general, team alternatives are created by permuting which team member is assigned the performer role. While permutations across a team may seem daunting and suggest visions of exponential explosion, in practice we have found that humans generally team in small numbers, and larger teams use categories and roles to manage scale. Some may question whether such consideration of team composition is necessary for effective design. Consider a simple example of giving a driver directions to your house when road construction blocks the obvious route. If the driver you are teaming with is familiar with your neighborhood, you probably need different coordination mechanisms than if they are unfamiliar with it. If the driver is a teenager learning to drive, a young adult, or a senior citizen, you might also imagine different teaming mechanisms. There is no “one size fits all” when designing teaming, especially when considering team composition. This is not a limitation of our approach, but merely a reflection of reality. There is a tradeoff to be made with respect to how much detail to include, but having no representation of team composition is unlikely to be effective.
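The role rotation that generates team alternatives can be sketched as a small helper. This is a hypothetical function assuming each alternative names exactly one performer, as in the IA table:

```python
from typing import List, Tuple

def team_alternatives(members: List[str]) -> List[Tuple[str, List[str]]]:
    """Rotate the performer role through the team; everyone else supports.

    Returns one (performer, supporters) pair per team member, which is
    the minimum set of role-reversed alternatives discussed above.
    """
    return [(performer, [m for m in members if m != performer])
            for performer in members]
```

For example, `team_alternatives(["human", "robot"])` yields the two role-reversed alternatives, and using category columns such as `["commander", "operator", "UAV", "UGV"]` keeps the number of alternatives linear in the number of columns rather than in the number of entities.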

Assessing Capacity to Perform and Capacity to Support

After the team alternatives are determined, the next step is an assessment of each column in a team alternative to each row in the joint activity model. For formative tasks, the assessment is necessarily subjective, but when assessing existing systems, it is possible to use empirical data to inform the assessment (Johnson et al., 2017). The assessment process uses a color coding scheme, as shown in Figure 9.2. The color scheme is dependent on the type of column being assessed.

Under the “performer” columns, the colors are used to assess the individual’s capacity to perform the activity specified by the row. It complies with principle 10— considering the potential for failure. The color green in the “performer” column indicates that the performer can do the task. For example, a robot may have the capacity to navigate around an office without any assistance. Yellow indicates less than perfect reliability. For example, a robot may not be able to reliably recognize a coffee mug all the time. Orange indicates some capacity, but not enough for the task. For example, a robot may have a 50-pound lifting capacity, but would need assistance lifting anything over 50 pounds. Another use of orange is to indicate hardcoded assumptions that limit the performance to very specific contexts. For example, a robot may be able to pick up a coffee mug from a table, but it may not be able to do so from the floor or cluttered cabinet. The color red indicates no capacity, for example, a robot may have no means to open a door.

Under the “supporting team member” columns, the colors are an assessment of that team member’s potential to support the performer for the activity specified by the row. The color red indicates no potential for interdependence, thus independent operation is the only viable option for the task. Orange indicates a hard constraint, such as providing supplemental lifting capacity when objects are too heavy. Another example of orange is when a machine needs human authorization to perform the activity. Yellow is used to represent improvements to reliability. For example, a

FIGURE 9.2 Color key for team member role alternative capability assessment.

human could provide recognition assistance to a robot and increase the reliability in identifying coffee mugs. Green is used to indicate assistance that may improve efficiency. For example, a robot may be able to determine the shortest route much faster than a human or could assist in cleaning up a room.

One last note on color coding. Some relationships are not significant and so either the performer or the supporting team members’ assessment may not be applicable. In these cases, use of gray is suggested to minimize the attention drawn by such cases (Beierl & Tschirley, 2017).
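The two color keys described above can be restated compactly as enumerations. This is simply a summary of the scheme in Figure 9.2; the class names are ours:

```python
from enum import Enum

class PerformerCapacity(Enum):
    """Assessment colors for the performer column."""
    GREEN = "can perform the task"
    YELLOW = "can perform, with less than perfect reliability"
    ORANGE = "some capacity, but not enough (or only in limited contexts)"
    RED = "no capacity"
    GRAY = "not applicable"

class SupporterPotential(Enum):
    """Assessment colors for the supporting team member columns."""
    GREEN = "could improve efficiency"
    YELLOW = "could improve reliability"
    ORANGE = "required support (hard constraint, e.g. authorization)"
    RED = "no potential for interdependence; independent operation only"
    GRAY = "not applicable"
```

Note that the same color carries different meanings in the two column types, which is why the key distinguishes performer capacity from supporter potential.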

Identifying Potential Interdependence Issues

Once the assessment process is finished, the color pattern can be analyzed. The color pattern characterizes the nature of the interdependence within a team for the given joint activity. Colors other than green in the “performer” column indicate some limitation of the performer, such as potential brittleness due to reliability (yellow) or hard constraints due to lack of capacity (orange). The hard constraints in the performer column indicate a need to team to accomplish the work and are usually fairly obvious. The more interesting situation is the potential brittleness which is often less obvious. Teaming is not required in this circumstance, but doing so can make the team more resilient.

Colors other than red in the “supporting team member” columns indicate required (orange) or opportunistic (yellow and green) interdependence relationships between team members. Again, it is the opportunistic cases that tend to be the most interesting, less obvious, and contribute to resilience.

Determining System Requirements

With the assessment complete, the IA tool can now help designers extract clear-cut design requirements needed to enable and support specific interdependence relationships. For each relationship of interest, the designer considers the compulsory interdependencies of observability, predictability, and directability (OPD) in accordance with principle 9. In other words, identifying who needs to observe what from whom, who needs to be able to predict what, and how members need to be able to direct each other for a given aspect of the work. As an example, we have created a small IA table based on Fong’s collaborative control work (Fong, 2001), as shown in Figure 9.3. In Fong’s example there was one teaming alternative: the human assisting the robot. The robot was capable of performing obstacle avoidance; however, it was less than 100% reliable (yellow) in interpreting whether an obstacle is passable. The human was capable of providing assistance, thus increasing the reliability (yellow) of the robot in this task. The yellow coloring of the human column indicates soft interdependence.

Requirements can be derived from analyzing the IA table in Figure 9.3. The robot must be predictable in notifying the human when unsure about an obstacle because the human is not constantly watching (red). The human’s ability to interpret depends on being able to sense the obstacle, so there is an observability requirement. Once the human has interpreted whether the obstacle is passable, this information must have a way to alter the robot’s behavior, so there is a directability requirement. These requirements define what is needed by both the algorithm and the human interface to support this interaction. Note that the IA tool’s purpose is to identify what the requirements are, not how to meet them (principle 5). That is an implementation choice. These particular OPD requirements are based on the desire
FIGURE 9.3 Interdependence analysis example from Fong’s (2001) collaborative control work, showing observability, predictability, and directability requirements based on choosing to allow the human to provide interpretation assistance to the robot during navigation.

to support a particular interdependence relationship: the human assisting in interpretation of whether an obstacle is passable. This example demonstrates how OPD requirements derive from the role alternatives the designer chooses to support, their associated interdependence relationships, and the required capacities. This example does not include the reciprocal teaming alternative with the human assisted by the robot because this was outside the scope of the original work.

Analyzing Potential Workflows

The original IA tool was introduced as a way to help designers understand and design for the interdependence in human-machine teams (Johnson et al., 2014). The assessment of potential interdependence provided this, but demanded a fair bit of imagination to envision the implementation alternatives. As the tool was applied in the development process within a team of engineers, it became clear that it would be valuable to provide an improved visualization of the teaming alternatives to aid the team in developing a unified understanding (Johnson et al., 2017). Specifically, we needed a way to connect the theoretical understanding of interdependence to the physical instantiation of what had been developed thus far (i.e., the implementation) and what was planned to be developed.

Establishing Workflows

To achieve this, we associated each leaf JAG with the particular team members capable of supporting it. We also expanded the team members to allow more detailed mapping to specific algorithms, interface elements, or human abilities. Across the top of section 3 in Figure 9.1, there are column headings for each algorithm, interface element, or human capability used to accomplish the task. Below each heading is a black dot to indicate where in the activity that particular component has a role. We then connect the dots with arrows to indicate potential workflows to accomplish the goal. The resulting graph structure is a visual description of all existing and potential workflows. Therefore, it is a depiction of the flexibility in the system (Johnson et al., 2015). The graph can be thought of as the adjustment options in adjustable autonomy or the initiative options in mixed-initiative interaction (Allen, Guinn, & Horvitz, 1999). However, what these options really are is an enumeration of the possible ways a team can complete the joint activity. This is how the IA tool provides not a solution, but a solution space.

Additionally, the graph makes it clear that discrete function allocation is not what is happening because the human is informed by automation and display elements, and automation can be assisted by the human as indicated by the numerous horizontal and diagonal lines shown in section 3 in Figure 9.1. Most importantly, it shows where teaming is supported and where it is not. This allows grounded debate on the value of additional teaming support versus the cost of implementation.

Assessing Workflows

The color coding in the workflow section of the IA tool, section 3 of Figure 9.1, is indicative of risk for a given pathway. The color for a given cell is determined by the performer column of the assessment section. Moving vertically the risk is compounded across different functions. However, horizontal pathways indicate potential mitigations through teaming.

The potential workflows section of the IA tool is valuable for several reasons. First, it ensures the joint activity model grounds out in actionable terms and does not fall prey to wishful mnemonics (McDermott, 1976) or suitcase words (Brooks, 2017). Delineating the roadmap of possibilities also helps to guide a design team in their development. They can easily see which pathways are supported and which are not; more importantly, they can be guided to opportunistic pathways they may not have found otherwise. The IA tool structure is also well suited to the collection of empirical data to help inform design choices throughout the design iteration process (e.g., Johnson et al., 2017). Information about reliability, performance time, and frequency of use associated with specific pathways can help shape design decisions and provide credence to implementation directions.


As an example of how to use the IA tool, we will use it to analyze manned aircraft collision avoidance. Figure 9.4 depicts an abbreviated IA of this challenge. The JAG, shown on the left side of Figure 9.4, depicts the agent-agnostic joint activity (principles 3, 5, and 6). The work includes the cognitive aspects (principle 4) of sensing and interpreting traffic and deciding when there is a conflict and what to do about it, as well as the physical task of taking the necessary actions. We assume the work is jointly shared (principle 2) by some automation working together with a human pilot. Our hypothetical system includes automation, such as a Traffic Collision Avoidance System (TCAS) and a Detect and Avoid System (DAA), shown as headings in the workflow section of Figure 9.4. The figure depicts two TCAS options for discussion purposes. In teaming alternative one (TCAS-I), assessing the automation's ability to perform, we see that TCAS can reliably sense and interpret traffic, but only cooperative traffic. TCAS has no ability to detect non-cooperative (i.e., non-transponder-equipped) traffic.

FIGURE 9.4 Interdependence analysis of manned traffic collision avoidance.

While TCAS-I only warns about traffic, TCAS-II can decide what to do and will provide vocalized instructions, but this is based on the limited sensing capability of TCAS-I. While DAA systems are not fully operationalized yet, here we assume we are designing a system to have such capability. Our envisioned DAA system leverages TCAS decisions to determine control inputs. We have assumed perfect execution (green) but left open the possibility that the automated system might misdiagnose on rare occasions (yellow), accounting for potential failure (principle 10). By considering that the human pilot has capabilities that could assist the automation (yellow), such as recognizing uncooperative traffic and making the avoidance decision, we have identified potential soft interdependencies (principles 7 and 8). The TCAS-II assessment shows the human is capable of everything except directly executing the controls, with errors most likely occurring in sensing (due to attention) or decision making (slow reaction time). Automation can assist the pilot in sensing cooperative traffic and making the avoidance decision (yellow) and generating control commands (green).

The potential workflows are depicted in the right half of Figure 9.4. One can see the potential automated solution with TCAS identifying traffic and DAA commanding avoidance maneuvers. Though final execution of commands by today's automation is highly reliable (green), it depends on sensing that lacks some context (red) and decision making that may not be perfect (yellow). This provides designers with an overall risk assessment for the automated solution path. A manual solution is also possible, with the pilot detecting traffic, deciding what to do, and taking action. There is risk in this solution too, but it is different: pilots have limited attention and may miss traffic or misjudge a decision. These are just two solutions, neither of which exploits teaming. By considering the supporting team member columns in each teaming alternative, designers are guided toward teamwork options throughout the activity.

Teaming alternative 2 (TA2) considers how the automation can help the pilot. Each option can be represented by horizontal lines that cross any boundary between teammates. Each relationship will have its own associated OPD requirements (principle 9). For example, TCAS-I can sense and interpret cooperating traffic for the pilot. This interdependence relationship is indicated by the upper horizontal red line connecting the TCAS-I interpretation to the human pilot cognition. To support this teaming pattern, the system must make potential traffic conflicts it detects observable to the pilot, in this case through a warning display. This activity can be done in parallel and thus both the pilot and the automation can contribute. The workflow includes both sense and interpret pathways because the automation is not replacing, but supplementing the pilot’s ability, as most flight manuals will warn. It also means the risks of each individual pathway, missing context for automation and lack of attention for pilots, can potentially be mitigated by the other. Alternatively, TCAS-II can provide a suggested course of action. This interdependence relationship is indicated by the lower horizontal red line connecting the TCAS-II decision to the human pilot decision. It has different OPD requirements than the first. It requires a mechanism to direct pilot action, in this case voice instructions. Note that although the DAA has the potential to determine control inputs, it currently does not support providing such assistance directly to the pilot, only through the fully automated solution.

Teaming alternative 1 (TA1) considers how the human can help the automation. Although the assessment column for the human in TA1 indicates the potential to help recognize uncooperative traffic or decide on a course of action, no support is provided. There are no horizontal arrows from the human to the machine in the workflow, except the control input command. If the pilot performs these activities, they are done outside of any teaming and are not sharable with the automation in the current design.

Figure 9.4 represents the existing or proposed interaction possibilities. By describing the interdependence landscape in such a manner, it is easy for the designer to see where interdependencies exist and where they do not. This can guide designers toward new interaction concepts. For example, is it possible for the human pilot to supplement the TCAS automation of sensing and interpretation, and if so, what kind of OPD requirements would need to be supported? How can the automation and pilot jointly engage in decision making, instead of the either-or situation shown in Figure 9.4? The IA tool is useful both in understanding one's design and in helping designers identify novel design alternatives.

This example demonstrates the process of interdependence analysis and how it helps designers understand human-machine teaming. We connected the design principles to the analysis story to show how they relate to teaming and how the IA table guides the designer through these considerations. It should be clear that the potential for teaming and the requirements for supporting it will change as design changes are considered, such as different types of automation (e.g. TCAS-I vs. TCAS-II) or having the pilot be remote. This is how the IA tool aids designers in diagnosing the teaming implications of design choices.

Summary of IA Tool Capabilities

The IA tool is unique in its capability to aid designers in assessing interdependence in human-machine teams. Currently, it is the only design tool that specifically shows interdependence. It is also novel in that it does not try to determine which team member, the human or the machine, should be allocated to what aspects of work. Instead, its focus is on capitalizing on the opportunities for synergy in the team. It enables designers to more effectively find teaming opportunities and helps them understand the requirements needed to exploit those opportunities (principle 1).

The IA tool is also novel in how it includes both people and machines in the design of a system. Many approaches do not include people as part of the system, while others assess human performance without the context of the automation and interfaces. The novel approach of modeling all work as joint work in an agent-agnostic way allows for complete flexibility of teaming. The support for consideration of teaming alternatives helps designers to see the problem from multiple perspectives. Providing workflow visualization that includes people, algorithms, and interfaces grounds the theory in actionable practice.

The IA tool is also unique in how it captures soft interdependencies. These types of capabilities are often overlooked, but their importance to teaming should not be. Support for such opportunistic teaming options can lead to improved flexibility and better overall system resilience (Johnson et al., 2017). All too often the purpose of human-machine teaming is viewed as a constraint due to automation not being able to do something. IA is about capitalizing on opportunities for synergy, not just filling required slots. It is about mutual enhancement, not replacement or substitution.

The IA tool is also unique in providing not just a single solution, but a roadmap of design alternatives. This type of view is critical for development of advanced technologies which involve highly iterative development. Supporting multiple pathways on the roadmap provides flexibility and contributes to system resilience.

It is worth noting that teaming strategy is not addressed by the IA tool, but it is included in the discussion because strategy is an important part of teaming. The IA tool provides insight into the tools and options available to teaming strategies (e.g. state, structure, and skills; Johnson & Vera, 2019), but not the strategy itself.

A critique of the IA process is that it is potentially time consuming to develop joint activity models and consider the different teaming alternatives. This is a valid concern, but building systems that fail or struggle to meet people’s needs is also costly. While interdependence analysis will definitely take more time than simple task decomposition, it is unlikely to take more time than the first iterative development of any complex system. Our experience is that its value throughout the life of the project greatly exceeds the initial time investment.

Another concern is that IA requires a high level of expertise. In some sense this is true. The main issue is shifting away from the traditional paradigm. Shifting from a function allocation mindset is not easy, just as shifting to object-oriented programming or functional programming is not easy for someone trained in procedural programming. It also requires two skill sets not often found in combination: technology expertise and human expertise. However, ignoring either technology or the human because dealing with both is difficult is not a viable answer. Future educational tracks that emphasize both could alleviate this problem. We have proposed several guiding principles to help designers reorient their perspective, but as in any field, there is no substitute for practice and experience.


The purpose of interdependence analysis (IA) is to understand how people and automation can effectively team by identifying, and providing insight into, the potential interdependence relationships used to support one another throughout an activity. These interdependence relationships enumerate the potential human-machine interactions that comprise teaming. In this way, IA provides a roadmap of the opportunities afforded by a given system design. Interdependence relationships help to illuminate both the human factors and the technological factors that enhance or inhibit effective teaming. IA can be used to derive specific design requirements, both algorithmic and interface, that determine human-machine interactions for each alternative. The analysis also provides a risk assessment associated with each teaming option. Having alternatives provides flexibility, while understanding the risk associated with each option assists operational decision making.

As we build more intelligent machines to tackle more sophisticated challenges, designers need formative tools for creating effective human-machine teaming. IA meets this need by explicitly modeling the machine, the human, the work, and the interplay of all three. This enables developers to architect effective teaming by designing support for interdependence. IA should be viewed as an approach to determining how and what automated capabilities are built such that intelligent systems are imbued with teaming competence.


Allen, J. E., Guinn, C. I., & Horvitz, E. (1999). Mixed-initiative interaction. IEEE Intelligent Systems, 14(5), 14-23. https://doi.org/10.1109/5254.796083

Annett, J. (2003). Hierarchical task analysis. In Handbook of cognitive task design (pp. 17-35). London: Lawrence Erlbaum Associates.

Beierl, C., & Tschirley, D. (2017). Unmanned tactical autonomous control and collaboration situation awareness. Retrieved from www.dtic.mil/dtic/tr/fulltext/u2/1046299.pdf

Bradshaw, J. M., Feltovich, P. J., & Johnson, M. (2011). Human-agent interaction. In The handbook of human-machine interaction: A human-centered design approach. Burlington, VT: CRC Press/Ashgate Publishing Company.

Brooks, R. (2017). The seven deadly sins of AI predictions. Retrieved November 17, 2017, from MIT Technology Review website: www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/

Christoffersen, K., & Woods, D. D. (2002). How to make automated systems team players. Advances in Human Performance and Cognitive Engineering Research, 2, 1-12.

Clark, H. H. (1996). Using language. Retrieved from www.loc.gov/catdir/toc/cam023/ 95038401.html

Crandall, B., & Klein, G. (2006). Working minds: A practitioner's guide to cognitive task analysis. Cambridge, MA: The MIT Press.

desJardins, M., & Wolverton, M. (1999). Coordinating a distributed planning system. AI Magazine, 20(4), 45. https://doi.org/10.1609/aimag.v20i4.1478

Dias, M., Zlot, R., Kalra, N., & Stentz, A. (2006). Market-based multirobot coordination: A survey and analysis. In Proceedings of the IEEE (pp. 1257-1270). Retrieved from http://robotics.cse.tamu.edu/dshell/cs689/papers/dias06market.pdf

Driskell, J. E., Goodwin, G. F., Salas, E., & O'Shea, P. G. (2006). What makes a good team player? Personality and team effectiveness. Group Dynamics: Theory, Research, and Practice, 10(4), 249-271. https://doi.org/10.1037/1089-2699.10.4.249

Drucker, P. F. (1954). The practice of management. Retrieved from www.amazon.com/ The-Practice-Management-Peter-Drucker/dp/0060878975

Erol, K., Hendler, J., & Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI (pp. 1123-1128). Retrieved from www.aaai.org/Papers/AAAI/1994/AAAI94-173.pdf

Feltovich, P. J., Bradshaw, J. M., Clancey, W. J., & Johnson, M. (2007). Toward an ontology of regulation: Socially-based support for coordination in human and machine joint activity. In G. O'Hare, M. O'Grady, A. Ricci, & O. Dikenelli (Eds.), Engineering societies in the agents world VII (pp. 175-192). Heidelberg, Germany: Springer.

Fong, T. W. (2001). Collaborative control: A robot-centric model for vehicle teleoperation. Pittsburgh, PA: Robotics Institute, Carnegie Mellon University.

Georgievski, I., & Aiello, M. (2014). An overview of hierarchical task network planning. Retrieved from http://arxiv.org/abs/1403.7426

Hoffman, R. R., & Deal, S. V. (2008). Influencing versus informing design, part 1: A gap analysis. IEEE Intelligent Systems, 23(5), 78-81.

Johnson, M., Bradshaw, J. M., Feltovich, P. J., Hoffman, R. R., Jonker, C., van Riemsdijk, B., & Sierhuis, M. (2011). Beyond cooperative robotics: The central role of interdependence in coactive design. IEEE Intelligent Systems, 26(3). https://doi.org/10.1109/MIS.2011.47

Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C., van Riemsdijk, B., & Sierhuis, M. (2011). The fundamental principle of coactive design: Interdependence must shape autonomy. In M. De Vos, N. Fornara, J. Pitt, & G. Vouros (Eds.), Coordination, organizations, institutions, and norms in agent systems VI (Vol. 6541, pp. 172-191). Berlin: Springer-Verlag. https://doi.org/10.1007/978-3-642-21268-0_10

Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., van Riemsdijk, B. M., & Sierhuis, M. (2014). Coactive design: Designing support for interdependence in joint activity. Journal of Human-Robot Interaction, 3(1), 43-69.

Johnson, M., Shrewsbury, B., Bertrand, S., Calvert, D., Wu, T., Duran, D., … Pratt, J. (2017). Team IHMC's lessons learned from the DARPA robotics challenge: Finding data in the rubble. Journal of Field Robotics, 34(2), 241-261. https://doi.org/10.1002/rob.21674

Johnson, M., Shrewsbury, B., Bertrand, S., Wu, T., Duran, D., Floyd, M., … Pratt, J. (2015). Team IHMC's lessons learned from the DARPA robotics challenge trials. Journal of Field Robotics, 32(2), 192-208. https://doi.org/10.1002/rob.21571

Johnson, M., & Vera, A. H. (2019, Spring). No AI is an island: The case for teaming intelligence. AI Magazine, pp. 16-28.

Kirlik, A., Miller, R. A., & Jagacinski, R. J. (1993). Supervisory control in a dynamic and uncertain environment II: A process model of skilled human-environment interaction. IEEE Transactions on Systems, Man, and Cybernetics, 23, 929-952. https://doi.org/10.1109/21.247880

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91-95. https://doi.org/10.1109/MIS.2004.74

Malone, T. W., & Crowston, K. (1994). The interdisciplinary study of coordination. ACM Computing Surveys, 26(1), 87-119. https://doi.org/10.1145/174666.174668

McDermott, D. (1976). Artificial intelligence meets natural stupidity. ACM SIGART Bulletin. Retrieved from http://dl.acm.org/citation.cfm?id=1045340

Norman, D. A. (1986). Cognitive engineering. In User-centered system design (pp. 31-61). Hillsdale, NJ: Lawrence Erlbaum Associates.

Parasuraman, R., Sheridan, T., & Wickens, C. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 30(3), 286-297. https://doi.org/10.1109/3468.844354

Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., … Ng, A. (2009). ROS: An open-source robot operating system. ICRA Workshop on Open Source Software. Retrieved from www.willowgarage.com/papers/ros-open-source-robot-operating-system

Sacerdoti, E. D. (1975). The nonlinear nature of plans. Proceedings of the 4th International Joint Conference on Artificial Intelligence, 1, 206-214.

Salas, E., Rosen, M. A., Burke, C. S., & Goodwin, G. F. (2009). The wisdom of collectives in organizations: An update of the teamwork competencies. In Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (pp. 39-79). New York: Routledge/Taylor & Francis Group.

Sheridan, T. B., & Verplank, W. (1978). Human and computer control of undersea teleoperators. Cambridge, MA: Man-Machine Systems Laboratory, Department of Mechanical Engineering, MIT.

Stefik, M. (1981). Planning with constraints (MOLGEN: Part 1). Artificial Intelligence, 16(2), 111-139. https://doi.org/10.1016/0004-3702(81)90007-2

Steiner, I. D. (1972). Group process and productivity. New York: Academic Press. Retrieved from http://books.google.com/books?id=20S3AAAAIAAJ&pgis=1

Wickens, C. D., Li, H., Santamaria, A., Sebok, A., & Sarter, N. B. (2010). Stages and levels of automation: An integrated meta-analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54(4), 389-393. https://doi.org/10.1177/154193121005400425

10 Using Conceptual Recurrence Analysis to Decompose Team Conversations

Michael T. Tolston, Gregory J. Funke, Michael A. Riley, Vincent Mancuso, and Victor Finomore


Conceptual Recurrence Analysis

Recurrence Quantification Analysis and Networks

Lost in the Desert Task

CRA: Data Preparation

Dimensionality Reduction: Multidimensional Scaling

Thresholding and Network Creation

Community Detection

Calculating Representative Conceptual Content

Identifying Conceptual Transitions





Teams form the essential substrate of most modern organizations, from small businesses to large government agencies. In order for teams to perform effectively, team members have to coordinate behaviors and tasks and interact in prosocial ways. This requires team members to possess a common understanding of the team's resources, long-term goals, immediate objectives, and the constraints under which the team works (Salas, Sims, & Burke, 2005). In other words, effective teamwork requires the establishment and alignment of shared conceptual understanding between teammates. Examples include team member knowledge, beliefs, and attitudes regarding entities, processes, and strategies relevant to setting and achieving team objectives and goals (DeChurch & Mesmer-Magnus, 2010). The overlap of these cognitive structures across teammates has broadly been referred to as "shared mental models" (Cannon-Bowers, Salas, & Converse, 1993; Cannon-Bowers & Salas, 2001; Mohammed, Klimoski, & Rentsch, 2000), and alignment of these models is thought to be an essential coordinative mechanism that underwrites successful team outcomes (Salas et al., 2005). However, some authors argue that conventional operationalizations of this mechanism only partly explain how teams successfully interact (Cooke, Gorman, Myers, & Duran, 2013; Gorman, Dunbar, Grimm, & Gipson, 2017).

While the concept of shared mental models was initially formed to explain why teams can efficiently coordinate behaviors implicitly by relying on shared understanding (Cannon-Bowers et al., 1993), the term has taken on a narrower meaning largely synonymous with similarity in long-term memory structures (Mohammed, Hamilton, Sanchez-Manzanares, & Rico, 2017). As such, the "mental models" aspect of shared mental models has been conceptualized as having existence prior to and independent of the task or process the models would serve. In this framework, measuring shared mental models equates to assessing pre-existing cognitive structure individually for each team member and aggregating or comparing them within teams to understand the degree to which conceptual knowledge structures overlap (DeChurch & Mesmer-Magnus, 2010). However, some have argued that the static long-term knowledge structures participants have in place prior to engaging a task, while important, are insufficient predictors of performance outcomes (Cooke et al., 2013; Gorman, Dunbar, Grimm, & Gipson, 2017). Accordingly, the dynamic nature of teams and team tasks may benefit from alternative approaches to understanding how teams solve problems.

Teams can be characterized as dynamic systems with emergent properties (Kozlowski & Ilgen, 2006). Importantly, both teams and circumstances frequently change in many ways; teams expand and dissolve, task objectives are met and replaced with new ones, team goals are revised, and task constraints shift, intensify, or disappear. As such, many important processes and properties of teams exist in the high-order dynamic relationships team members establish with each other and with their environment (Cooke, Gorman, & Rowe, 2009). From this perspective, interactive and dynamic alignments at the conceptual, perceptual, and behavioral levels underlie cognitive similarities in teammates. This process of dynamic alignment has been called team cognition (Cooke et al., 2013).

In order to assess higher-order relational properties inherent in team cognition, higher-order observables are needed; measuring team cognition requires measuring the team interacting in a context specific to the goals and objectives relevant to the motivating research question. Cooke et al. (2013) argued that team effectiveness and subsequent successes are dependent upon the ways information and knowledge are employed by the team, i.e., through the emergent information processing that implicitly relies on knowledge structures and which is embodied in the interactions between teammates as they uncover, share, and negotiate the context-specific meaning of goal-relevant information. As such, many efforts to understand and enhance team performance have been directed toward analysis of team communication patterns (Gorman, Cooke, Amazeen, & Fouse, 2012; Gorman et al., 2019; Russell, Funke, Knott, & Strang, 2012; Wiltshire, Butner, & Fiore, 2018).

Although team communications provide a potential window to the underlying cognitive processes that enable team performance, there are many challenges associated with operationalizing and measuring cognition (Mohammed, Ferzandi, & Hamilton, 2010). Any method aimed at doing so must address three fundamental aspects of cognitive measurement (DeChurch & Mesmer-Magnus, 2010): elicitation—how cognitive content is observed; structural aspects—how the elements of cognitive content are related to each other; and emergence—how team-level understanding relates to individual understanding. Importantly, the inferred content and relational structure of cognition is sensitive to the techniques employed in its measurement (DeChurch & Mesmer-Magnus, 2010).

In regard to these three challenges, it is important to remark that team settings allow cognitive behaviors that are normally implicit or hidden in observations of individual cognitive performance to be readily observed as teammates interact and communicate. Team members often interact through naturalistic communication to establish and operate on common ground understanding—shared mental models—that supports descriptions, explanations, and predictions of team tasks and efforts (Mohammed et al., 2010). In other words, teams verbally share and operate on knowledge during their interactions, rendering the act of information processing into an observable operation. Thus, a natural language solution to team-level cognitive measurement that takes advantage of team interactions to provide answers to all three aspects outlined by DeChurch and Mesmer-Magnus (2010) is to present a collaborative problem-solving task to a group and record their communicative interactions to determine the content and structure of their cognitive processing (cf. Ericsson & Simon, 1998). As outlined in this chapter, this approach, combined with natural language processing (NLP) of the recorded communications, allows the observation of the concepts defining a problem space and the important relationships between them, and also provides a way to directly capture emergent team cognition.

Despite its potential for revealing insights into team cognitive processes, team communication analysis presents a number of special challenges to researchers. Largely, these challenges arise because semantic information is often ambiguous and difficult to quantify. Simple approaches such as frequency counts from hand-coded communication data can provide meaningful descriptions of communicative exchanges (e.g., Mancuso, Finomore, Rahill, Blair, & Funke, 2014), but these methods are time consuming and limited to pre-defined dictionaries that may not make use of idiosyncratic information available in a particular communication dataset. Other approaches, like latent semantic analysis (Foltz, 1996) and, more recently, word2vec (Mikolov, Chen, Corrado, & Dean, 2013), make use of powerful mathematical techniques that can uncover statistical regularities in text data. Though such techniques have been used to meaningfully quantify team communications (e.g., Gorman et al., 2016; Gorman, Foltz, Kiekel, Martin, & Cooke, 2003), these approaches can produce abstract mathematical spaces whose dimensions can be difficult to understand (Iliev, Dehghani, & Sagi, 2014). In this chapter, we discuss the utility of conceptual recurrence analysis (CRA; Angus, 2019; Angus, Smith, & Wiles, 2012a) and show how CRA conducted in the Discursis software package (Angus et al., 2012a; Angus, Smith, & Wiles, 2012b) can be extended using network analysis to provide insights into the structure of team communications.

Specifically, in this chapter we show: (1) how CRA can be used to identify key concepts that form the basis of meaningful task-specific discourse; (2) how CRA can then be used to evaluate the similarity of verbal exchanges in team process; (3) how dimensionality reduction of the matrix encoding conceptual similarity between utterances can be combined with network analysis to facilitate clustering of utterances and coding according to semantic themes; (4) how this classification then forms the basis of generating a categorical time series from team utterances; (5) how the categorical time series can be used to assess transitions between conceptual themes; and (6) how network approaches can again be used to construct a graph from the transition matrix to show the relationships between themes.

We view the final product as a representation of an aggregated team mental model of task-relevant concepts and relationships between them obtained by measuring team cognition. In other words, by observing temporal relations between concept transitions, we can observe how teams are processing conceptual information. We present the following methods as an introduction that we believe will be useful to team researchers.
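The transition-counting stage of this pipeline can be sketched in a few lines: given a categorical series of theme labels (one per utterance), count the transitions between consecutive themes; the resulting counts form a weighted transition matrix that can serve as the adjacency matrix of a theme graph. The theme labels below are invented for illustration.

```python
from collections import Counter

# Sketch of building a theme transition matrix from a categorical time series
# of utterance themes. Each (source, target) pair with its count becomes a
# weighted directed edge in a theme network. The labels are invented.

def transition_counts(theme_series):
    """Count transitions between consecutive themes in a categorical series."""
    return Counter(zip(theme_series, theme_series[1:]))

themes = ["water", "water", "navigation", "water", "signaling", "navigation"]
counts = transition_counts(themes)

for (src, dst), n in sorted(counts.items()):
    print(f"{src} -> {dst}: {n}")
```

Repeated transitions between a pair of themes would accumulate larger edge weights, which is what lets the final graph show which conceptual relationships a team revisits most.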


CRA is an extension of recurrence quantification analysis (RQA; Webber & Zbilut, 1994), a nonlinear technique used to quantify structure in complex time series data. RQA quantifies structure in terms of the recurrence (i.e., repetition) of states of the system (i.e., of values in the time series). CRA quantifies communication data, such as a conversation, by measuring the extent to which individuals' utterances—complete conversational turns—are semantically similar. Importantly, in addition to providing a measure of semantic similarity between utterances, CRA also provides a set of concepts over which similarity is measured. This approach, as implemented in the program Discursis (Angus et al., 2012a, 2012b), has been used to quantify interpersonal communication data, including those observed in doctor-patient interactions (Angus, Watson, Smith, Gallois, & Wiles, 2012; Atay et al., 2015; Watson, Angus, Gore, & Farmer, 2015), talk show interviews (Angus et al., 2012a; Angus & Wiles, 2018), and team communications (Tolston, Riley, Mancuso, Finomore, & Funke, 2019).

Details underlying CRA have been outlined in previous work (Angus et al., 2012a, 2012b), but here we will broadly discuss essential aspects of the technique. CRA as implemented in Discursis first pre-processes text data by removing common words and optionally stemming words to remove prefixes and suffixes. The resultant words are referred to as concepts. Important words—referred to as key concepts—are automatically chosen from this list, partly based on their prevalence in the corpus. This has been conceptualized as a "bottom up" approach to revealing conceptual content, in contrast with a "top down" approach in which important concepts are identified a priori (Tolston et al., 2019). Discursis then constructs a semantic vector space from these key concepts by assessing concepts for similarity to key concepts using statistical techniques that depend in part on word co-occurrence. This forms the mathematical structure by which each utterance can be represented in a vector space in terms of how prevalent each key concept is in that utterance. Specifically, the conceptual content of each utterance is computed as the sum total of the projections of all individual concepts in that utterance onto the key concept basis. In other words, the semantic content of an utterance is computed by the vector addition of the vectorized representations of the concepts in the utterance. This resultant vector codes the extent to which each key concept is invoked in each utterance. A similarity value is then computed for each pair of utterances, yielding a number that captures the degree to which key concepts occur in similar proportions between the utterances.
This value is similar to a pointwise correlation; for each pair of utterances, the dimension-wise (i.e., key-concept) projections of the two utterances in the semantic space are multiplied and summed (akin to a covariance) and then normalized by the product of the lengths of the two projections (akin to the product of the standard deviations). The result is a matrix that captures the degree of semantic similarity between all utterances. Readers interested in a more comprehensive discussion of CRA and Discursis are referred to Angus, Watson et al. (2012) and to Tolston et al. (2019).
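The utterance-by-utterance similarity just described can be sketched in a few lines. The function name and toy concept vectors below are illustrative, not taken from Discursis; each row of `U` stands for one utterance's loading on the key concepts.

```python
import numpy as np

def utterance_similarity(U):
    """Pairwise cosine similarity between utterance vectors.

    U is an (n_utterances x n_key_concepts) matrix in which each row
    codes how strongly each key concept is invoked in that utterance.
    """
    U = np.asarray(U, dtype=float)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # all-zero utterances stay at similarity 0
    V = U / norms                    # unit-length utterance vectors
    return V @ V.T                   # dot products of unit vectors = cosines

# Utterances 1 and 2 invoke key concepts in identical proportions,
# so their similarity is 1; utterance 3 shares no concepts with them.
U = np.array([[2.0, 0.0, 2.0],
              [1.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])
S = utterance_similarity(U)
```

Because the vectors are normalized before the dot product, utterances that use the same proportions of key concepts score 1 regardless of their length.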


As we will demonstrate in this chapter, the utility of CRA can be expanded using network-based analyses. This expansion is derived from techniques developed to extend the parent analysis (RQA) to generate recurrence networks—complex networks generated from an adjacency matrix obtained from RQA of time-series data (Donner et al., 2011; Donner, Donges, Zou, & Feldhoff, 2015; Donner, Zou, Donges, Marwan, & Kurths, 2010). Such an approach requires a similarity threshold that discretizes the distance matrix into a recurrence matrix that contains values of 0 (for non-recurrent states) and 1 (for recurrent states), which can then be visualized in a recurrence plot with “on” pixels indicating recurrent states and “off” pixels indicating non-recurrent states. In other words, the threshold sets the minimum degree of computed conceptual similarity that utterances must have to be defined as recurrent, i.e., as having the same conceptual content. The resultant recurrence matrix can be used as an adjacency matrix to create a complex network, with nodes in the network corresponding to utterances, and edges linking utterances that have sufficiently similar content. Donner et al. (2010) described how this network-based approach can reveal geometric properties of time series in terms of linking together similar system states. Importantly, such network constructions provide a set of powerful metrics that can quantify geometric and topological aspects of the system under study (e.g., Zhang & Small, 2006).
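The thresholding step described above, which turns a similarity matrix into a binary recurrence matrix usable as a network adjacency matrix, might look like the following sketch (names are illustrative):

```python
import numpy as np

def recurrence_matrix(S, threshold):
    """Binarize a similarity matrix: 1 where a pair of utterances is
    'recurrent' (similarity >= threshold), 0 elsewhere."""
    R = (np.asarray(S) >= threshold).astype(int)
    np.fill_diagonal(R, 0)           # exclude trivial self-similarity
    return R

# Utterances 1 and 2 are similar enough to be linked; 3 is not.
S = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.4],
              [0.1, 0.4, 1.0]])
R = recurrence_matrix(S, 0.5)
```

The resulting symmetric 0/1 matrix can be handed directly to any graph library as an adjacency matrix, with rows/columns as utterance nodes.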

Networks are often characterized in terms of both local and global characteristics (e.g., Donner et al., 2015), where local characteristics capture information relating to each node or edge and global characteristics describe the network as a whole. However, there are also intermediate, or meso-scale, structures that may be informative for quantifying complex networks (Saggar et al., 2018). For example, community structure captures how well a network can be partitioned into meaningful subgraphs, or communities, with a quality metric comparing the proportion of edges that join nodes within a community to edges that join nodes outside of the community. This metric can capture intermediate-level structures that exist when highly similar observations are clustered together within the larger network. This community detection can be seen as a form of clustering, in which nodes in a network are classified according to the community to which they belong.

With respect to communication analysis of team exchanges, community structure can capture the degree to which teams are producing heterogeneous utterances. In other words, the degree of community structure in a semantic network indicates the degree to which utterances are focused on particular combinations of key terms: utterances within a community dominated by a particular combination of key terms are more similar to one another than to utterances in other communities. Traditionally, the analysis of similarities between concepts to create themes or kernels—clusters of related concepts—has been a central endeavor in the concept mapping approach to studying cognition (McNeese & Reddy, 2015; Zaff, McNeese, & Snyder, 1993). In the present work, we show how clusters of highly related concepts can be obtained automatically by combining CRA with network analyses.

In previous research, CRA has largely been conducted utilizing similarity matrices without a threshold applied (e.g., Angus & Wiles, 2018; Tolston et al., 2019). In this chapter, we evaluated the effect that thresholding has on the structure of semantic networks derived from CRA applied to communication data collected from teams performing a collaborative consensus building task. Specifically, we combined thresholding with a dimensionality reduction technique to determine their effects on a resulting network’s capacity to capture the semantic relationships between utterances. Our main research question was whether there was specific structure in the utterances created by teams in a collaborative consensus-building task that could lend insight into the ways that teams were processing semantic information. We used a task with known expert logic linking items being reasoned about to external concepts. We asked whether our decomposition using CRA and network analyses could be used to create a categorical time series to show how conceptual clusters may be associated during team conversations about a consensus-building task in ways that reveal how teams reason about the task.


Lost in the Desert Task

“Lost in the Desert” is a problem-solving task in which groups or teams of individuals are presented with a desert survival scenario (Lafferty, Eady, & Elmers, 1974). Teams are told to imagine that they are the sole survivors of a passenger plane crash-landing in a desert. They are told that they are all uninjured, that they should stay where they are until help arrives, and that there are 15 items that can be salvaged from the plane wreckage. Team members are then asked to rank those items according to their importance for survival, first individually and then as a team via consensus building discussion (see Table 10.1 for a list of the items, ranked by subject matter experts in order of most to least important for survival).

This task was chosen as an ice breaker for a subsequent distributed team decision-making task (results for the subsequent task are reported in Tolston et al., 2017). This and similar tasks have been shown to generate rich conversations and are often used

TABLE 10.1

Items Presented to Teams in the Lost in the Desert Scenario




Item                                                  Expert Rationale

A cosmetic mirror                                     Visual signaling
Overcoat (for everyone)                               Helps ration sweat by slowing evaporation
Two liters of water per person                        Drinking water (a person requires a gallon of water a day in the desert)
Flashlight with four battery cells                    Nighttime signaling
Parachute (red and white)
Folding knife                                         For cutting rope, preparing food, etc.
Plastic raincoat (large size)                         To collect condensation
.45-caliber pistol (loaded)                           Defense and signaling
Sunglasses (for everyone)                             Protection against sun glare
First-aid kit                                         Emergency use and ad hoc rope (nobody on the team is injured in the crash)
Magnetic compass                                      Reflective signaling device
Air map of the area                                   Paper for fire and environmental protection
A book entitled “Desert Animals That Can Be Eaten”    Food is less important than water in the desert; digestion consumes water
Two liters of 80-proof vodka                          Useful as an antiseptic, firestarter, etc.; will cause dehydration if consumed
Bottle of 1,000 salt tablets                          Of no use in the desert

to assess factors that affect team behaviors and decision making (e.g., Citera, 1998; Ferrin & Dirks, 2003; Liu & Li, 2017).



Data were collected from 64 participants (29 men, 35 women) recruited from the Dayton, Ohio area. The range of participants’ ages was 18 to 33 years (M = 23.05, SD = 3.76). Participants completed the experiment as members of four-person teams, resulting in 16 experimental teams in our sample. Participants were compensated $15/hour for their time. All participants had normal or corrected-to-normal vision.


Upon arrival, participants completed an informed consent document and were provided instructions regarding the task. During the task, participants sat at individual computer workstations that were visually isolated from each other. Team members received instructions about the task and were asked to individually rank the 15 items. They were then given an unlimited amount of time to discuss their ranking preferences with their teammates. The goal of this phase was for the team to reach a consensus regarding the rank for each item. During the experiment, participants were asked to communicate exclusively by typing in a chat interface. Participants did not receive any guidance regarding how to coordinate or strategize. After the team ranked all items, the task ended.


CRA: Data Preparation

Data were preprocessed by correcting spelling errors and merging semantically identical terms (e.g., “rain coat” was changed to “raincoat”). The data for each team was then concatenated into a long-form file containing all team communications. This file was then analyzed with Discursis. The default parameter was used for the upper limit on the number of concepts extracted (100) and the merge word variants option was turned on, which is a word-stemming option that identifies and merges words with similar roots. These settings were chosen from an evaluation of parameter settings in CRA that was conducted in a previous study (Tolston et al., 2019). The default stop-list from Discursis—which includes 431 common words, such as “a” and “and”—was expanded to include additional terms that were prevalent in team communications (e.g., “important” and “use”). This stop-list was used to remove common words, including pronouns and prepositions, which provide little discriminating information and that would otherwise saturate the semantic space. The output similarity matrix from Discursis was read into MATLAB for further processing.

Dimensionality Reduction: Multidimensional Scaling

In the present analyses, we used multidimensional scaling (MDS) to assess the impact of dimensionality reduction on community structures in the complex networks obtained from similarity matrices outputted from Discursis. MDS is similar to principal components analysis (PCA), but MDS is a more general dimensionality reduction technique that can be used on either Euclidean (i.e., classical MDS) or non-Euclidean distance matrices (e.g., generalized MDS), rather than only Euclidean vector spaces, as in PCA. Dimensionality reduction techniques like PCA can improve clustering results (Ding & He, 2004), and MDS has been used specifically to relate clusters in a semantic space to patterns in sentiment analysis (Cambria, Song, Wang, & Howard, 2014). We used classical MDS to capture the regularities in the relationships between utterances in the space spanned by the important topics identified by CRA, and then projected back through the original space spanned by key concepts to identify interpretable combinations of data.

Classical MDS—hereafter referred to as MDS—requires a Euclidean distance matrix upon which to operate. Such a distance matrix can be calculated from the similarity matrix obtained from CRA using the fact that cosine similarity is related to Euclidean distance by the following relationship:

d_ij = √(2(1 − s_ij))

(Seber, 2004), where d_ij is a Euclidean metric distance and s_ij identifies pairwise cosine similarities between utterances. The output from MDS is a lower-dimensional representation of the metric space that best preserves the distances between points in the original space. We used MDS to create a vector space representation of the utterance-utterance similarity matrix such that the maximum variance in utterance similarity was accounted for with the fewest number of dimensions. Prior to analyzing the similarity matrix with MDS, all utterances with no similarity to other utterances were removed from the matrix.
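Under the assumption of unit-normalized utterance vectors, the similarity-to-distance conversion and classical MDS can be sketched as a simple eigendecomposition; this is a minimal illustration, not the exact routine used in the study:

```python
import numpy as np

def classical_mds(S, k):
    """Classical MDS on a cosine-similarity matrix S.

    Distances follow d_ij = sqrt(2 * (1 - s_ij)), the Euclidean
    distance between unit-length utterance vectors.
    """
    S = np.asarray(S, dtype=float)
    D2 = 2.0 * (1.0 - S)                       # squared distances
    n = S.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ D2 @ J                      # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    vals = np.clip(vals[:k], 0.0, None)        # guard tiny negative values
    return vecs[:, :k] * np.sqrt(vals)         # k-dimensional coordinates
```

The returned coordinates reproduce the pairwise distances as faithfully as `k` dimensions allow, which is the property exploited when only the leading components are retained.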

Thresholding and Network Creation

The output of MDS is a projection of the data onto the coordinates of the vector space reconstructed from the distance matrix, with a component for each observation in the distance matrix. If the space is Euclidean and there is enough data, then eigenvalues will be non-zero and positive up to the number of dimensions that define the coordinate axes of the original space from which the distance matrix was derived, and zero after. A subset of these components can be used in subsequent analyses, for example by creating new distance matrices that use only the first few principal components that account for the majority of variance in the distance matrix. For our analyses, we created a range of such matrices, using an increasing number of components to identify the number that resulted in the clearest community structure. For each component of the MDS vector space, from 2 to 96—the number of key concepts identified in our CRA analyses of the team communications (see the section “Results” for further information regarding key concepts extracted)—a new distance matrix was created from reconstructions of the vector space using components from one up to that number, and then thresholded using the connectivity efficiency metric introduced in Huang, Xu, Wang, Zhang, and Liu (2018). The connectivity efficiency metric identifies the threshold that results in a network that has the least number of disconnected components (i.e., the fraction of the coverage of the network is maximal) while penalizing for network density. It is defined as E = F / D,

where F is the fraction of the coverage of the network—the proportion of nodes connected to the largest component of the network—and D is the network density—the number of edges observed in the network divided by the number of possible edges. Connectivity efficiency has been shown to have a convex shape as a function of similarity threshold, meaning that a single optimal value is often obtained that balances network connectivity with sparsity (Huang et al., 2018).
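A minimal sketch of this threshold search follows. Here connectivity efficiency is taken to be coverage divided by density, one plausible reading of "maximal coverage penalized by density"; it is an assumption for illustration, not necessarily the exact formula of Huang et al. (2018). All names are illustrative.

```python
import numpy as np

def coverage_and_density(A):
    """F: fraction of nodes in the largest connected component;
    D: observed edges divided by possible edges."""
    n = A.shape[0]
    parent = list(range(n))                    # union-find over edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    edges = 0
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j]:
                edges += 1
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n, edges / (n * (n - 1) / 2)

def best_threshold(S, thresholds):
    """Pick the threshold maximizing E = F / D (assumed form)."""
    best, best_e = None, -np.inf
    for t in thresholds:
        A = (S >= t).astype(int)
        np.fill_diagonal(A, 0)
        F, D = coverage_and_density(A)
        if D == 0:
            continue                           # empty network: undefined
        if F / D > best_e:
            best, best_e = t, F / D
    return best
```

On a toy similarity matrix, a middling threshold that keeps the network connected but sparse wins over both a permissive one (dense) and a strict one (fragmented or empty).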

Community Detection

For each of the thresholded networks, the community structure of that network was assessed using the Louvain method for community detection (Blondel, Guillaume, Lambiotte, & Lefebvre, 2008), which results in a quality metric (Q) that captures the proportion of edges that link nodes within a community relative to edges that link outside of a community. Such quality metrics are used to compare the communities identified by different algorithms and also as an objective function (Blondel et al., 2008), which is how it is implemented here. After finding the community quality for each network, the best network (i.e., the network that had the largest Q) was chosen for subsequent evaluation.
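The quality metric Q referenced here is the modularity that the Louvain method optimizes; it can be computed directly from its standard definition. A numpy sketch, with the function name illustrative:

```python
import numpy as np

def modularity(A, labels):
    """Modularity Q of a partition of an undirected network.

    A: 0/1 symmetric adjacency matrix; labels[i] is the community of
    node i. Q compares within-community edge weight to what would be
    expected by chance given the node degrees.
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                      # node degrees
    two_m = A.sum()                        # twice the number of edges
    same = np.equal.outer(labels, labels)  # True where nodes share a community
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

For example, two triangles joined by a single bridge edge, partitioned into their natural communities, yield Q = 5/14, a clearly modular partition.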

Calculating Representative Conceptual Content

Once the final graph was found, the vectorized representations of the conceptual content of the utterances were normalized to unit length such that the sum of the squared components of each vector equaled one. These were averaged together in each community to create a representative vector capturing the average conceptual content of utterances in that community. These were then used to identify prominent conceptual themes.
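The computation just described reduces to normalizing each utterance vector to unit length and averaging within communities; a sketch with illustrative names:

```python
import numpy as np

def community_profiles(U, labels):
    """Average unit-normalized utterance vectors within each community.

    Returns a dict mapping community label -> representative vector of
    average conceptual content; the largest entries of each vector
    identify that community's prominent key concepts.
    """
    U = np.asarray(U, dtype=float)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    V = U / norms                           # sum of squared components == 1
    labels = np.asarray(labels)
    return {c: V[labels == c].mean(axis=0) for c in np.unique(labels)}

# Two utterances about concept 0 and one about concept 1:
U = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [0.0, 5.0]])
profiles = community_profiles(U, [0, 0, 1])
```

Sorting a profile's components in descending order then gives the top concepts used to label communities in the figures below.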

Identifying Conceptual Transitions

Once the conceptual network was constructed and community detection was conducted, each node in the network (i.e., each utterance) was coded according to its community. This was used to create a categorical time series for each team where each utterance took on the label of the community to which it belonged. This time series was then used to form a transition matrix of first-order transitions between conceptual communities. For example, if a categorical time series 1-2-3-2-1 was observed, the transition matrix would be




       1     2     3
  1    0    1.0    0
  2   0.5    0    0.5
  3    0    1.0    0






where the rows correspond to the initial state and the columns correspond to the state that is transitioned to, and the entries are the empirical probabilities of going to a state given an initial state. This approach leverages the fact that structure between concepts can be observed in the temporal interdependencies of transitions between concepts. Specifically, the probabilities of transitions between categories in a time-ordered symbolic sequence can be considered a measure of similarity between them (Cooke, Gorman, & Kiekel, 2008; Cooke, Neville, & Rowe, 1996). Such transition matrices have been used to quantify the structure of hand-coded categorical transitions in individual and dyadic brainstorming activities (Brown, Tumeo, Larey, & Paulus, 1998), as well as interaction patterns in team communications (Cooke et al., 2008).
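The construction of this matrix from a categorical time series can be sketched as follows, using the 1-2-3-2-1 example from the text (function name illustrative):

```python
import numpy as np

def transition_matrix(series, n_states):
    """Empirical first-order transition probabilities.

    T[i, j] is the probability of moving to state j+1 given that the
    current state is i+1 (states are labeled 1..n_states).
    """
    T = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):  # consecutive pairs
        T[a - 1, b - 1] += 1
    rows = T.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                      # avoid division by zero
    return T / rows

T = transition_matrix([1, 2, 3, 2, 1], 3)
```

State 2 is visited twice and leaves once to state 3 and once to state 1, so its row splits evenly; states 1 and 3 each transition to state 2 with probability 1.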


On average, teams spent 33.50 minutes (SD = 14.62) and produced 242.50 utterances (SD = 78.66) discussing the Lost in the Desert scenario. A word-cloud depicting the relative frequencies of terms in team utterances can be seen in Figure 10.1. As can be seen in the figure, items that the team had to rank, such as “knife” and “mirror,” make up the majority of prominent terms.

The similarity matrix output from Discursis was based on a semantic space of 96 dimensions (key concepts). Of the 3,880 total utterances observed from


FIGURE 10.1 Word cloud showing relative frequency of words in team communications after removing stop words.

Note. In the figure, increases in font size correlate with increases in the frequency of occurrence of the word in team communications. The words highlighted in orange (color online) are those which occur most frequently.

teams discussing the Lost in the Desert scenario, 2,281 were similar to at least one other utterance and were retained in MDS analyses—the high number of discarded utterances can be largely accounted for by the fact that short utterances consisting entirely of common stop words, like “yes” and “uh huh,” were removed during preprocessing.

The eigenvalues of the components extracted from MDS of the distance matrix obtained by transforming the similarity matrix output by Discursis can be seen in Figure 10.2. A visual inspection shows that the first 20 components were highly important, accounting for about 65% of the variance in the 96-dimensional space. The eigenvalues decrease sharply to zero after 96 components, meaning that the implicit dimensions that determine the distances in the matrix were fully recovered.

A representative example of the results from calculating the connectivity efficiency of networks obtained by thresholding the distance matrix can be seen in Figure 10.3, where the number of dimensions used to construct the similarity matrix was 57 (the number of dimensions identified as optimal for community detection; see the section “Results” for further information). The convex curve around the peak value shows a single point that optimally balances network connectivity with network sparsity.

The quality results of the community detection algorithm as a function of the number of components retained from MDS can be seen in Figure 10.4. There is an obvious peak of .87 at 57 dimensions in the thresholded distance matrix. These results show the value of applying a similarity threshold during CRA, since it leads


FIGURE 10.2 Eigenvalues of metric multidimensional scaling: Components 1 through 96 all have non-zero positive eigenvalues, while components greater than 96 have eigenvalues of zero (values past 97 are not shown).

Note: The implicit dimensions of the conceptual space constraining the values in the distance matrix were fully recovered using MDS.


FIGURE 10.3 The connectivity efficiency of the graph obtained using 57 components from the MDS analysis of the distance matrix computed from the Discursis similarity matrix.

Note: The convex curve shows a single best answer balancing network connectivity with sparsity.


FIGURE 10.4 Number of components retained from MDS and the resultant community quality for both thresholded and unthresholded distance matrices.

Note: Since using the distance matrix from the full span of non-zero eigenvalues is equivalent to the distance matrix from Discursis, the highest value for the non-thresholded matrix is equal to that from the raw Discursis output.

to clearer identification of similar utterances and a better separation of dissimilar utterances.

A simplified version of the final network obtained from using the optimal parameters estimated from the connectivity efficiency and community detection procedures can be seen in Figure 10.5. The graph was plotted using a force-directed algorithm (Fruchterman & Reingold, 1991). Each community is uniquely colored to aid visual inspection of the graph (colored figures available online). The high degree of modularity in the graph is apparent in that communities with high intra-community edge density and low inter-community edge density dominate the graph. To increase legibility of the figure, only well-populated communities (with more than 21 utterances) and densely connected nodes (with at least 20 edges) are shown. The full graph has visual characteristics similar to those of the reduced graph presented here. The distribution of utterances by community can be seen in Figure 10.6.

After the community structure was calculated, each utterance was categorized by the community to which it belonged, and a transition matrix was created for each team. These were then aggregated into a single matrix to obtain empirical transition probabilities between conceptual clusters (see Figure 10.7). To reduce the dimensionality of the transition matrix, transitions to communities that had fewer than 21 observations were relabeled as transitions to a single ground state that represented all of these less frequently occurring states (Fu, Shi, Zou, & Xuan, 2006). The number 21 was chosen as a cutoff from a visual inspection of the graphs for intelligibility,


FIGURE 10.5 Final network with communities numbered, highlighted, and labeled; labels correspond to the top three concepts of each community.

Note: To increase legibility of the figure, only well-populated communities (with more than 21 utterances) and densely connected nodes (with at least 20 edges) are shown.


FIGURE 10.6 The number of utterances belonging to each community.


FIGURE 10.7 Network showing directed transitions between communities.

Note: Nodes are labeled with the top three concepts, in order, observed in each community. Edges represent transitions between nodes with probabilities of at least .075. The width of edge lines in the figure are proportional to the probability of a transition and node size is proportional to the number of utterances belonging to a community. The central node is a ground node that represents transitions to communities with fewer than 21 observations.

sparsity, and connectivity. There were 22 communities that had at least 21 observations. Finally, the transition matrix was thresholded to highlight important transitions and increase interpretability. From visual inspections of the graphs obtained by thresholding, a transition probability of .075 was chosen as a cutoff. Further decreasing this value led to a saturation of connections, while increasing it removed informative structure from the graph.
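The relabeling of infrequent communities to a single ground state might be sketched as follows; the ground label of 0 is a hypothetical choice for illustration, and the cutoff is a parameter:

```python
import numpy as np

def relabel_rare(labels, min_count, ground=0):
    """Relabel communities with fewer than `min_count` members to a
    single ground state, as done before forming the transition network."""
    labels = np.asarray(labels)
    vals, counts = np.unique(labels, return_counts=True)
    rare = vals[counts < min_count]            # infrequent community labels
    return np.where(np.isin(labels, rare), ground, labels)

# Community 2 occurs only once, so it collapses into the ground state:
relabeled = relabel_rare([1, 1, 1, 2, 3, 3], min_count=2)
```

In the chapter's analyses the cutoff was 21 observations; here a cutoff of 2 on a toy series shows the mechanics.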


Measuring the cognitive structures underlying team reasoning and decision making is a critical endeavor with many important practical implications. In the present chapter, we introduced a methodological approach to mapping team cognition using a naturalistic language problem-solving task combined with CRA and network analyses. Our results showed the utility of generating networks from conceptual recurrence matrices, how MDS can improve clustering of similar utterances, how community structure in the networks reveals thematic patterning in the conceptual content of utterances, and how transitions between communities reveal structural aspects of team cognition. In doing so, we showed how this method can be used to leverage the overt nature of cognition in team interactions to map the conceptual content, structure, and emergent patterns underlying how teams think about a problem. By using bottom-up statistical approaches based on NLP, we uncovered conceptual themes that fit well with expectations based on expert reasoning in a problem-solving task without the a priori concept selection that some other approaches to measuring cognitive content rely upon. Taking this a step further, we then showed how researchers can make use of the temporal order of team communications to reveal the links between thematic conceptual clusters that are important for team decision making. Below, we review our specific findings and provide several directions researchers may pursue with the given techniques.

The task we used, Lost in the Desert, has a set of items that teams must reason about with respect to their utility for desert survival. Our motivating question was whether the approach to measuring team cognition outlined in this chapter provides insight into the conceptual content and structure that constrains team reasoning in a decision making task. We believe it does. We were further interested in whether team-level cognitive structure would be in line with expert reasoning in that task. Our results reveal several ways in which it was. Analyses showed that utterances in the Lost in the Desert scenario tended to be about specific thematic combinations of key concepts. This was evidenced by our findings showing that the connectivity efficiency of thresholded distance matrices was a useful criterion for generating semantic networks, and that the best network had a high-quality community structure (0.84). Many of the themes uncovered in our analyses revolved around groups of nouns which have similar utility. Several examples include that the compass was often discussed along with the map and direction, the parachute was associated with shelter and the sun, the book identifying edible desert animals was discussed in conjunction with the gun and knife, the flashlight and night were discussed along with fire, mirror, and signaling, and nighttime was associated with cold. Teams also discussed vodka in conjunction with the first aid kit (i.e., as an antiseptic) and as a means of starting a fire. Many of these themes highlight conceptual connections that are in line with expectations based on expert reasoning (see Table 10.1). By next looking at transitions between themes, our analyses showed some of the reasoning behind team decision making that also lined up with expert analysis.

The transition matrix based on community structure in the network of utterances shows that semantically similar items were often discussed along trajectories exploring the meaning and utility of similar or complementary items in ways that lined up with reasoning by experts. For instance, discussions of needing protection against the sun (community 21 in Figure 10.7) are connected to items that provide shelter (e.g., the parachute, community 6) and protection from extreme temperatures (communities 14 and 15). Below is example data showing a transition from an utterance mainly about the desert (community 16) to one mainly about water (community 3):

“How much desert experience do you have from your deployment, Mike?”

“Um none really but I know water will be important.”

These connections and their overlap with expert reasoning illustrate the utility of the method for generating meaningful representations of cognitive structure from bottom-up analysis of team communications.

Interestingly, the network analysis shows that while the terms teams used routinely clustered into semantically coherent groups, the specific way that teams created and traversed the conceptual landscape was largely unique, with a substantial portion of the utterances teams made being rather idiosyncratic combinations of topics (represented as the central ground node). Specifically, transitions to the central node in the network identified in Figure 10.7 account for about 19% of the total number of utterances analyzed, with the other transitions accounting for 7% or less each. However, the network provides insight into the concerns of a large proportion of the teams.

Using the approaches outlined in this chapter, we were able to gain insight into the motivations and concerns of the teams that guided their decision making. However, there are important links between syntactic, semantic, behavioral coordination, and communication that go beyond the current analysis (e.g., Dale, Fusaroli, Duran, & Richardson, 2013; Dale & Kello, 2018). Future work can leverage the present techniques, along with other measures of linguistic and behavioral alignment, to evaluate the interactive and hierarchical nature of team interactions that embody information processing. For instance, measures of syntactic alignment as well as measures of similarity of postural dynamics can both be obtained with RQA (i.e., Dale & Spivey, 2006; Shockley, Santana, & Fowler, 2003). Networks can then be generated from these analyses which can be assessed using multilayer network approaches to determine how well the structure across these modalities can predict team effectiveness and performance outcomes (Kivela et al., 2014).

Additionally, it would be possible to evaluate the relationship between the timing of transitions between postural configurations and conceptual clusters to view the bottom-up influence of lower-level processes on higher-level interactions. This provides a framework for the analysis of multilevel team dynamics, an area of study with important theoretical implications (Kozlowski, 2015).

Another possible extension is to assess the temporal structure of the thematic data. The current study aggregated all utterances and was therefore blind to differences in the evolution of topic emergence over time. Outstanding questions include whether the meso-scale communities reach a stationary distribution or continually evolve in time. Additionally, prior work using CRA has shown that in a different consensus building task the content of discussion shifts reliably over time depending on the manner in which information is presented to the teams (Tolston et al., 2017). Expanding the current work with a temporal network approach (Holme & Saramaki, 2012) is a promising direction.

Another future direction is to assess individual team cognitive models against the aggregated model. Moving towards a multivariate framework that can simultaneously decompose structure in data across teams to get an average conceptual model while also placing teams in the conceptual space spanned by the average model to determine interrelations between teams may be an important step in understanding key differences in teams (Thioulouse, Simier, & Chessel, 2004). This would also help determine if there is a heterogeneous distribution of team mental models and cognition. Further, the hierarchical nature of teams means that the aggregation of data from the CRA framework can be conducted in multiple ways to measure different aspects of team interactions, from the individual level, the intermediate level of subgroups within teams, the within-team level, and the between-team level (Angus & Wiles, 2018). The current work was based on analyses of data aggregated at the between-team level, but the richness of the data obtained from natural language interaction means there are many more layers to unravel in future work.

A possible limitation of the current approach is the use of modularity quality to determine the optimal network. Though widely used, this metric can result in a number of graphs that have similar quality metrics but are qualitatively distinct (Good, de Montjoye, & Clauset, 2010). An extension of the current approach would be to evaluate the similarity of the set of optimal networks to identify different community structures that have similar partition qualities and to evaluate representative members (i.e., graphs) from multiple communities in the graph relating the different networks. Also of interest is the similarity between the eigenvalues obtained from MDS (Figure 10.2) and the number of utterances in each community (Figure 10.6). This is likely no coincidence and could be explored as an alternative way to choose the number of thematic communities.

Importantly, modifying the parameters that led to the graph shown in Figure 10.7 can result in very different graph topologies, with no obvious cutoff for the probability threshold. The graph presented was selected by visually inspecting candidate graphs for the interpretability of prominent edges. Future work can extend the current efforts by utilizing surrogate techniques to generate null distributions against which significant similarity can be inferred (Lancaster, Iatsenko, Pidde, Ticcinelli, & Stefanovska, 2018).
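The threshold sensitivity noted above is easy to reproduce on synthetic data: sweeping a threshold over a random symmetric similarity matrix (a stand-in for the chapter's similarity estimates, which are not reproduced here) shows how quickly edge density, and hence topology, changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic symmetric "similarity" matrix in [0, 1], standing in for
# the utterance/community similarity estimates discussed in the text.
n = 30
S = rng.random((n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0.0)

def edge_count(S, t):
    """Number of edges kept when thresholding: edge i-j iff S[i, j] > t."""
    return int(np.count_nonzero(np.triu(S > t, k=1)))

# Sweeping the threshold shows how sharply the topology can change,
# with no principled cutoff evident from the counts alone.
for t in (0.5, 0.6, 0.7, 0.8, 0.9):
    e = edge_count(S, t)
    print(f"t={t:.1f}: {e:3d} edges (density {e / (n * (n - 1) / 2):.2f})")
```

A surrogate-based null distribution, as suggested above, replaces this visual sweep with a principled significance criterion for keeping an edge.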

The current similarity assessment technique relies on an assumption of linearity (i.e., the meaning of a sentence or utterance is the additive combination of the meanings of all the words contained therein). In other words, the key concepts form a basis of a Euclidean space and can be operated upon with vector addition and scalar multiplication to span the semantic space. We note that this is a rather strong assumption that is certainly incorrect; language is inherently nonlinear and contextual, with multiplicative interactions where small changes in the context or composition of an utterance can have large effects on its meaning. Recent trends in NLP have begun incorporating contextual information into discovering the meaning of sentences and utterances, which can better address this important limitation (Devlin, Chang, Lee, & Toutanova, 2018).
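Under this linearity assumption, an utterance vector is just the sum of its word vectors, and utterance similarity is the cosine between those sums. A minimal sketch with invented three-dimensional word vectors (in practice these would come from the key-term space or a model such as word2vec):

```python
import numpy as np

# Toy 3-d word vectors, invented purely for illustration.
vecs = {
    "team":     np.array([1.0, 0.2, 0.0]),
    "decision": np.array([0.1, 1.0, 0.1]),
    "water":    np.array([0.0, 0.1, 1.0]),
    "good":     np.array([0.3, 0.3, 0.3]),
}

def utterance_vector(words):
    # Linearity assumption: utterance meaning = sum of word meanings.
    return np.sum([vecs[w] for w in words], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

u1 = utterance_vector(["team", "decision"])
u2 = utterance_vector(["good", "decision"])
u3 = utterance_vector(["water"])
print(cosine(u1, u2))  # high: shared "decision" content
print(cosine(u1, u3))  # low: little shared content
```

The limitation follows directly: because the representation is additive, word order and context cannot change an utterance vector, which is exactly what contextual models such as BERT are designed to capture.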

Of final note, the methods introduced in this chapter represent new directions in the assessment of team interactions and cognition. While the underlying method of CRA has been shown to relate consistently to experimental manipulations in expected ways, there are outstanding questions regarding the measurement of semantic data, including validity, accuracy, and reliability. Future work might directly compare CRA using Discursis to other methods of assessing the conceptual alignment of utterances in order to better understand the degree to which there is convergence among purportedly similar measurement techniques. Further, while Discursis has been shown to be effective for the analysis of small data sets (Tolston et al., 2019), a rigorous analysis of the effect of corpus size on its output would be welcome. Additionally, we emphasize the need to explore how identifying and measuring higher-order observables that exist and operate directly at the level of interest to the researcher (e.g., team-level measurements) can improve our understanding of teams. In particular, comparing the predictive performance of the models of team cognition outlined in this chapter, which measure team-level cognition directly, to traditionally obtained estimates of shared mental models, which typically infer team-level cognition from individual-level models, would be quite interesting.


Evaluating information flow in distributed team contexts can be difficult. Language analysis is an important key to understanding the relationships between the concepts underlying team decision making in complex tasks. Done carefully, it can yield detailed information about the co-occurrence and mixture of concepts, showing how teams process and group information to make a decision. Here, we have shown that combining CRA with network-based approaches yields consistent structures that allow researchers to understand how teams are dealing with conceptual-linguistic information.

Importantly, aside from team dynamics and team-specific questions, studying behavior in distributed teams provides a critical resource to cognitive science, in that distributed teams often have to talk to one another while they solve problems, and the language they use provides an excellent avenue to map out the cognitive processing of conceptual information that is key to decision making. We think the tools and methods presented in this chapter offer exciting possibilities for this direction.


Angus, D. (2019). Recurrence methods for communication data, reflecting on 20 years of progress. Frontiers in Applied Mathematics and Statistics, 5, 54.

Angus, D., Smith, A. E., & Wiles, J. (2012a). Conceptual recurrence plots: Revealing patterns in human discourse. IEEE Transactions on Visualization and Computer Graphics, 18(6), 988-997.

Angus, D., Smith, A. E., & Wiles, J. (2012b). Human communication as coupled time series: Quantifying multi-participant recurrence. IEEE Transactions on Audio, Speech, and Language Processing, 20(6), 1795-1807.

Angus, D., Watson, B., Smith, A., Gallois, C., & Wiles, J. (2012). Visualising conversation structure across time: Insights into effective doctor-patient consultations. PLoS One, 7(6), e38014.

Angus, D., & Wiles, J. (2018). Social semantic networks: Measuring topic management in discourse using a pyramid of conceptual recurrence metrics. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(8), 085723.

Atay, C., Conway, E. R., Angus, D., Wiles, J., Baker, R., & Chenery, H. J. (2015). An automated approach to examining conversational dynamics between people with dementia and their carers. PLoS One, 10(12), e0144327.

Blondel, V. D., Guillaume, J.-L., Lambiotte, R., & Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10), P10008.

Brown, V., Tumeo, M., Larey, T. S., & Paulus, P. B. (1998). Modeling cognitive interactions during group brainstorming. Small Group Research, 29(4), 495-526.

Cambria, E., Song, Y., Wang, H., & Howard, N. (2014). Semantic multidimensional scaling for open-domain sentiment analysis. IEEE Intelligent Systems, 29(2), 44-51.

Cannon-Bowers, J. A., & Salas, E. (2001). Reflections on shared cognition. Journal of Organizational Behavior, 22(2), 195-202.

Cannon-Bowers, J. A., Salas, E., & Converse, S. A. (1993). Shared mental models in expert team decision making. In N. J. Castellan Jr. (Ed.), Individual and group decision making: Current issues (pp. 221-246). Hillsdale, NJ: Erlbaum.

Citera, M. (1998). Distributed teamwork: The impact of communication media on influence and decision quality. Journal of the American Society for Information Science, 49(9), 792-800.

Cooke, N. J., Gorman, J. C., & Kiekel, P. A. (2008). Communication as team-level cognitive processing. In M. P. Letsky, N. W. Warner, S. M. Fiore, & C. A. P. Smith (Eds.), Macrocognition in teams: Theories and methodologies (pp. 51-64). Hants: Ashgate.

Cooke, N. J., Gorman, J. C., Myers, C. W., & Duran, J. L. (2013). Interactive team cognition. Cognitive Science, 37(2), 255-285.

Cooke, N. J., Gorman, J. C., & Rowe, L. J. (2009). An ecological perspective on team cognition. In E. Salas, J. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (SIOP Organizational Frontiers Series, pp. 157-182). New York: Taylor & Francis.

Cooke, N. J., Neville, K. J., & Rowe, A. L. (1996). Procedural network representations of sequential data. Human-Computer Interaction, 11(1), 29-68.

Dale, R., Fusaroli, R., Duran, N. D., & Richardson, D. C. (2013). The self-organization of human interaction. In B. H. Ross (Ed.), Psychology of learning and motivation (Vol. 59, pp. 43-95). Waltham, MA: Academic Press.

Dale, R., & Kello, C. T. (2018). "How do humans make sense?" Multiscale dynamics and emergent meaning. New Ideas in Psychology, 50, 61-72.

Dale, R., & Spivey, M. J. (2006). Unraveling the dyad: Using recurrence analysis to explore patterns of syntactic coordination between children and caregivers in conversation. Language Learning, 56(3), 391-430.

DeChurch, L. A., & Mesmer-Magnus, J. R. (2010). Measuring shared team mental models: A meta-analysis. Group Dynamics: Theory, Research, and Practice, 14(1), 1-14.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Ding, C., & He, X. (2004). K-means clustering via principal component analysis. In Proceedings of the twenty-first international conference on machine learning (pp. 29-36). Banff, Alberta, Canada: Association for Computing Machinery.

Donner, R. V., Donges, J. F., Zou, Y., & Feldhoff, J. H. (2015). Complex network analysis of recurrences. In Recurrence quantification analysis (pp. 101-163). London: Springer.

Donner, R. V., Small, M., Donges, J. F., Marwan, N., Zou, Y., Xiang, R., & Kurths, J. (2011). Recurrence-based time series analysis by means of complex network methods. International Journal of Bifurcation and Chaos, 21(4), 1019-1046.

Donner, R. V., Zou, Y., Donges, J. F., Marwan, N., & Kurths, J. (2010). Recurrence networks—a novel paradigm for nonlinear time series analysis. New Journal of Physics, 12(3), 033025.

Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity, 5(3), 178-186.

Ferrin, D. L., & Dirks, K. T. (2003). The use of rewards to increase and decrease trust: Mediating processes and differential effects. Organization Science, 14(1), 18-31.

Foltz, P. W. (1996). Latent semantic analysis for text-based research. Behavior Research Methods, 28(2), 197-202.

Fruchterman, T. M., & Reingold, E. M. (1991). Graph drawing by force-directed placement. Software: Practice and Experience, 21(11), 1129-1164.

Fu, D., Shi, Y. Q., Zou, D., & Xuan, G. (2006). JPEG steganalysis using empirical transition matrix in block DCT domain. In IEEE 8th workshop on multimedia signal processing, 2006 (pp. 310-313). Piscataway, NJ: IEEE.

Good, B. H., de Montjoye, Y.-A., & Clauset, A. (2010). Performance of modularity maximization in practical contexts. Physical Review E, 81(4), 046106.

Gorman, J. C., Cooke, N. J., Amazeen, P. G., & Fouse, S. (2012). Measuring patterns in team interaction sequences using a discrete recurrence approach. Human Factors: The Journal of the Human Factors and Ergonomics Society, 54(4), 503-517.

Gorman, J. C., Dunbar, T. A., Grimm, D., & Gipson, C. L. (2017). Understanding and modeling teams as dynamical systems. Frontiers in Psychology, 8, 1053.

Gorman, J. C., Foltz, P. W., Kiekel, P. A., Martin, M. J., & Cooke, N. J. (2003). Evaluation of Latent Semantic Analysis-based measures of team communications content. In Proceedings of the human factors and ergonomics society annual meeting (Vol. 47, No. 3, pp. 424-428). Los Angeles, CA: SAGE Publications.

Gorman, J. C., Grimm, D. A., Stevens, R. H., Galloway, T., Willemsen-Dunlap, A. M., & Halpin, D. J. (2019). Measuring real-time team cognition during team training. Human Factors. Advance online publication. https://doi.org/10.1177/0018720819852791

Gorman, J. C., Martin, M. J., Dunbar, T. A., Stevens, R. H., Galloway, T. L., Amazeen, P. G., & Likens, A. D. (2016). Cross-level effects between neurophysiology and communication during team training. Human Factors, 58(1), 181-199.

Holme, P., & Saramaki, J. (2012). Temporal networks. Physics Reports, 519(3), 97-125.

Huang, Z., Xu, L., Wang, L., Zhang, G., & Liu, Y. (2018). Construction of complex network with multiple time series relevance. Information, 9(8), 202.

Iliev, R., Dehghani, M., & Sagi, E. (2014). Automated text analysis in psychology: Methods, applications, and future developments. Language and Cognition, 7, 1-26.

Kivela, M., Arenas, A., Barthelemy, M., Gleeson, J. P., Moreno, Y., & Porter, M. A. (2014). Multilayer networks. Journal of Complex Networks, 2(3), 203-271.

Kozlowski, S. W. (2015). Advancing research on team process dynamics: Theoretical, methodological, and measurement considerations. Organizational Psychology Review, 5(4), 270-299.

Kozlowski, S. W., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7(3), 77-124.

Lafferty, J. C., Eady, P. M., & Elmers, J. (1974). The desert survival problem. Plymouth, MI: Experimental Learning Methods.

Lancaster, G., Iatsenko, D., Pidde, A., Ticcinelli, V., & Stefanovska, A. (2018). Surrogate data for hypothesis testing of physical systems. Physics Reports, 748, 1-60.

Liu, H., & Li, G. (2017). To gain or not to lose? The effect of monetary reward on motivation and knowledge contribution. Journal of Knowledge Management, 21(2), 397-415.

Mancuso, V. F., Finomore, V. S., Rahill, K. M., Blair, E. A., & Funke, G. J. (2014). Effects of cognitive biases on distributed team decision making. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 58, 405-409.

McNeese, N. J., & Reddy, M. C. (2015). Concept mapping as a methodology to develop insights on cognition during collaborative information seeking. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59, 245-249.

Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Mohammed, S., Ferzandi, L., & Hamilton, K. (2010). Metaphor no more: A 15-year review of the team mental model construct. Journal of Management, 36(4), 876-910.

Mohammed, S., Hamilton, K., Sanchez-Manzanares, M., & Rico, R. (2017). Team cognition: Team mental models and situation awareness. In E. Salas, R. Rico, & J. Passmore (Eds.), The Wiley Blackwell handbook of the psychology of teamwork and collaborative processes. West Sussex, UK: John Wiley & Sons, Ltd.

Mohammed, S., Klimoski, R., & Rentsch, J. R. (2000). The measurement of team mental models: We have no shared schema. Organizational Research Methods, 3(2), 123-165.

Russell, S. M., Funke, G. J., Knott, B. A., & Strang, A. J. (2012). Recurrence quantification analysis used to assess team communication in simulated air battle management. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56, 468-472.

Saggar, M., Sporns, O., Gonzalez-Castillo, J., Bandettini, P. A., Carlsson, G., Glover, G., & Reiss, A. L. (2018). Towards a new approach to reveal dynamical organization of the brain using topological data analysis. Nature Communications, 9(1), 1399.

Salas, E., Sims, D. E., & Burke, C. S. (2005). Is there a "big five" in teamwork? Small Group Research, 36(5), 555-599.

Seber, G. A. F. (2004). Multivariate observations. Hoboken, NJ: John Wiley & Sons.

Shockley, K., Santana, M. V., & Fowler, C. A. (2003). Mutual interpersonal postural constraints are involved in cooperative conversation. Journal of Experimental Psychology: Human Perception and Performance, 29(2), 326.

Thioulouse, J., Simier, M., & Chessel, D. (2004). Simultaneous analysis of a sequence of paired ecological tables. Ecology, 85(1), 272-283.

Tolston, M. T., Finomore, V., Funke, G. J., Mancuso, V., Brown, R., Menke, L., & Riley, M. A. (2017). Effects of biasing information on the conceptual structure of team communications. Advances in Intelligent Systems and Computing, 488, 433-455.

Tolston, M. T., Riley, M. A., Mancuso, V., Finomore, V., & Funke, G. J. (2019). Beyond frequency counts: Novel conceptual recurrence analysis metrics to index semantic coordination in team communications. Behavior Research Methods, 51(1), 342-360.

Watson, B. M., Angus, D., Gore, L., & Farmer, J. (2015). Communication in open disclosure conversations about adverse events in hospitals. Language & Communication, 41, 57-70.

Webber, C., & Zbilut, J. P. (1994). Dynamical assessment of physiological systems and states using recurrence plot strategies. Journal of Applied Physiology, 76(2), 965-973.

Wiltshire, T. J., Butner, J. E., & Fiore, S. M. (2018). Problem-solving phase transitions during team collaboration. Cognitive Science, 42(1), 129-167.

Zaff, B. S., McNeese, M. D., & Snyder, D. E. (1993). Capturing multiple perspectives: A user-centered approach to knowledge and design acquisition. Knowledge Acquisition, 5(1), 79-116.

Zhang, J., & Small, M. (2006). Complex network from pseudoperiodic time series: Topology versus dynamics. Physical Review Letters, 96(23), 238701.



abstraction, 79, 211, 216 accidents, 3 accuracy, 93, 136, 155 adaptivity, 57, 58, 62, 66, 71,94,207 affordances, 77, 78, 79, 83, 86 agency, 30, 38

aircraft, see fields of practice

air traffic control, see fields of practice

analyses, 79, 122, 134,231

Coordinated Awareness of Situation by Teams (CAST), 58 cognitive modeling (CMAP), 118 cognitive network analysis, 114 cognitive work analysis (CWA), 79-81, 86 communication analysis, 17, 36, 64, 65, 66, 94-96, 98-101, 104, 232-249 conceptual recurrence analysis (CRA), 61, 231-251

contextual inquiry, 13 coordination space framework, 111, 114, 125-126

Descriptive to Simulation modeling (DESIM), 111, 114, 117 distributed cognition for teamwork (DiCoT), 87

dynamic network analysis, 63 episodic analysis, 43

event analysis of systemic teamwork (EAST), 87 fuzzy cognitive model (FCM), 118, 119, 121, 122, 125, 126, 127 gesture and posture analyses, 100 goal directed task analysis (GDTA), 6,9 hierarchical task analysis, 209 interdependence analysis, 205, 206, 213, 214, 218,221-225

Joint Activity Graph (JAG), 205, 216, 217, 222 recurrence matrix, 235 role alternative capability assessment, 219 situation awareness requirements analysis, 13 social network analysis, 63 task analysis, 209 time series analysis, 92 video analysis, 37, 92 work analysis, 78-80 assessment, 18, 103 attacks, 187, 188, 189, 201 attention, 3, 11, 13, 79, 101, 113, 139, 141, 143, 152, 167, 172, 173, 210, 212, 220, 224 automation, 133, 140, 141, 152, 212, 224 levels of automation, 142

see also expert systems; decision support

systems; levels of automation; human- automation teaming; intelligent agents; recommender systems aviation, see fields of practice


business, see fields of practice


cognitive engineering, 210 cognitive load, 135, 137, 139, 143, 159, 164, 165, 166, 170, 171, 172, 174, 175, see also information overload; overload; workload

cognitive reflection, 135, 143, 176 Cognitive Reflection Theory (CRT), 143, 159, 164, 169, 175

cognitive traits, 135, 136, 175, 176 collaborative, 7, 53, 59, 62, 116, 146, 147, 156 collaborative problem solving, 53, 59,62 command and control, see fields of practice command center, see fields of practice commander, 36, 37, 57,218 communication, 2,6-14, 29-45, 55, 56,59-66, 82,92,94,95, 97-100, 102, 104, 112, 115, 125, 139, 145, 184,232-241,


complex, 2,3,4,5, 15, 30-33,40-42,47,48,58, 62,63,67,72, 73,76, 87,90-92,98, 99, 116, 118, 127, 134-141, 144, 145, 164, 171, 173, 185, 193,208,216,226, 234, 235,238, 249

comprehension, 2, 3,9, 10, 11, 18,93, 115, 145, 172, see also shared understanding; situation awareness computation, 63 concepts, 91, 93

constraints, 54, 55, 56, 57, 58, 59, 63, 65, 66, 74, 75, 79, 80, 81, 83, 84, 86, 102, 172, 174, 208, 211, 220, 231, 232 coordination, 6, 9, 10, 11, 13, 36, 43, 45, 56, 58, 59, 61, 63, 66, 73, 74, 81, 111, 114, 116, 125, 126, 127, 184, 207, 211, 215, 218, 247, see also team coordination coordination space framework, 111, 114, 125-126

crisis management, 31, 38-42, 93 crowd-sourcing, 114 cultural differences, 15


decision making, 133, 163, 165 decision support system, 38, 169, see also automation; expert systems; recommender systems

Descriptive to Simulation modeling (DESIM), see analyses


design principles/guidelines, 8, 9, 97, 172, 206, 208,210,213 interface design, 133 dispatch, 43

displays, 2,7, 8,9, 10, 14, 15, 18, 19, 118 augmented reality, 174 common operating picture (COP), 8 digital map, 38, 39,40,41 shared displays, 7-10, 14 multi-modal, 121, 123, 174 see also transparency

distributed, 7, 8, 13, 14, 15, 16, 29, 31, 33, 36, 51, 59, 71, 72, 73, 74, 75, 76, 78, 79, 80, 83, 86, 87, 92, 111, 112, 113, 114, 116, 125, 126, 127, 139, 208, 209, 211, 216, 217, 236, 249

distributed team cognition, 71, 79, 111 Dunning-Kruger effect, 143, 144, 169 dynamics, 9, 16, 30, 31, 32, 43, 46, 50, 53, 55, 62, 63, 65, 66, 72, 91, 92, 93, 94, 118, 135, 140, 152, 165, 196, 207, 232 dynamical systems, 53, 55, 57, 59, 61, 63, 65, 67, 69


elicitation, 114, 118, 233 embodied cognition, 138 emergency call center, see fields of practice Emergency Management System (EMS), see fields of practice event-based, 102

expertise, 2, 9,46,57, 85, 112, 120, 163, 169, 171, 183, 191, 193, 194, 226 expert systems, 133, 138, 140, 141, 173,

see also decision support systems; recommender systems

explanation, 53, 133, 137, 140, 141, 147, 152, 153, 156, 165, 167, 168, 170, 181


fields of practice

aircraft, 2,4, 6,7, 10, 13, 14, 15,72,73, 80, 82, 84, 126, 184, 193, 222 air traffic control, 2,4, 14, 15, 17,72, 101, 193 aviation, 2,6,7, 15,73, 78 business, 5

command and control, 6, 8, 9, 14, 15, 16, 18, 29, 36, 37, 43, 45, 53, 57, 58, 121, 126

command center, 38,40 dispatch, 43

emergency call center, 42

Emergency Management System (EMS), 2, 8, 14, 19, 32, 45, 72, 73, 78, 184, 189, 190, 191, 200

fire fighting, 15, 31, 33, 35, 40, 42, 59, 73 healthcare, 2, 15, 57, 60, 65, 73, 78, 95, 127 maintenance, 7, 15 manufacturing, 139 medical, 5, 7, 18, 19, 60, 61, 64, 66 military, 2, 10, 12, 15, 16, 19, 31, 36, 37, 43, 44, 45, 57, 65, 73, 78, 102, 121, 127 naval, 11, 72, 73, 76, 78, 83, 85, 87 pilots, 2-7, 10, 13, 14, 20, 46, 61, 95, 193, 222, 224, 225 police, 15, 43 power grid, 2, 139 power plant, 45, 72, 85 robotics, 138, 140, 172, 177, 209, 211, 219, 220, 221 soldier, 2

submarine, 60,95,96 unmanned aerial vehicles, 139,218 unmanned ground vehicles, 218 fire fighting, see fields of practice function allocation, 207, 208,222,226 fuzzy cognitive maps, 119, 126, 184, 191, 193 causal relationships, 186 edge strengths, 194, 195 equilibrium, 185, 196 feedback loops, 185, 193, 194, 202 reachability matrix, 185, 202 see also analyses, fuzzy cognitive modeling


games, 150

goals, 2-9, 11-16, 20, 31, 38, 43-45, 54-61, 66, 67, 74, 80, 83, 94, 112, 115, 116, 137, 138, 141, 173, 174, 210, 216, 217, 221, 231, 232, 237 groups, 236


healthcare, 2, 15, 57, 60, 65, 73, 78, 95, 127 history, 156

human-machine teaming, 112, 206, 207,208,

215, 216, 218, 225, 226 human-agent teaming, 142 human-automation interaction (HAI), 134, 135, 175, 176

human-autonomy teaming, 61

impact, 120, 121, 123

information overload, 14, see also cognitive load intelligent agents, 134, 135, 137, 142, 159, 175 interdependence, 3,4, 112,206,207, 208, 209, 211, 212, 213, 215, 216, 217,219,220, 221, 224, 225, 226,227 interviews, 94


joint activity, 206, 207, 209, 211-222, 226

joint activity theory, 215

joint work, 115, 209-211, 216, 218, 225


knowledge, 111, 116, 123, 133, 136, 155, 170, 252, see also team knowledge


leadership, 16 learning, 181,212


macrocognition, 59 maintenance, see fields of practice manufacturing, see fields of practice measurement, 1,53,91,94,96, 102, 103, 136, 142 activity, 97 eye-tracking, 17, 101 NASA-TLX, 206

physiological, 92, 95,96, 97, 102, 104, 138 Situation Awareness Global Assessment Technique (SAGAT), see situation awareness

unobtrusive measurement, 91-109 medical, see fields of practice memory, 3, 93, 94, 125, 126, 127, 210, 232 mental load, see cognitive load mental models, 3, 10, 11, 45, 55, 93, 94, 113, 114, 115, 117, 118, 119, 120, 124, 125, 126, 127, 247, 249

causal mental models, 113, 114, 115, 116, 118, 123, 126, 127

divergent mental models, 11 shared mental model (SMM), 10-13, 16, 18, 43, 93, 102, 112, 113, 125, 232, 233 team mental models (TMM), 12 Mental Modeler, 118, 122, 127 microworlds, 29, 30, 31, 32, 35, 37, 38, 43, 44, 45, 46

C3fire, 29, 30, 31,32,33, 34, 35, 36,37,38, 39,41,42,43,44,45,51

LabVIEW, 205 MATLAB, 205, 238 Midland Flight 92, 3 military, see fields of practice mobile, 37

modeling, 43, 62, 63, 71, 72, 76-87, 96, 99, 114, 117, 118, 126, 127, 210, 215-217, 225, 226

fuzzy cognitive maps, 183-203 quantitative modeling, 133-182 multi-agent systems, 135, 137, 139, 171, 172, 173, 174, 175


naval, see fields of practice neurophysiological, 65,95


observation, 46, 71, 86, 208, 233, 239 option awareness, 115

organization, 14, 16, 33, 36, 40, 43, 44, 45, 54, 62, 63, 65, 71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 84, 85, 86, 87, 94, 95, 96, 97, 122, 123 self-organization, 84

work organization, 73, 74, 80, 81, 83, 84, 87 overload, 4, 8,9, 10, 140, 143, 167, see also

cognitive load; information overload; workload


paradigms, 133 patterns, 96

perception, 2, 93, 96, 115, 142, 165, 169, 171, 173, 210, see also situation awareness pilots, see fields of practice planning, 9, 12, 13, 36, 43, 74, 77, 82, 137, 208,


police, see fields of practice power grid, see fields of practice power plant, see fields of practice problem solving, 59

projection, 2,3, 6, 7,9-11, 18, 93, 115,235,239 psychology, 104


recommender systems, 133, 140, see also decision support systems; expert systems

reflection, 133, 136, 155 requirements, 205,220

design requirements, 220,226 situation awareness requirements, 13

resilient, 57, 62, 220 robotics, see fields of practice


scenario development, 103 schema, 3

self-organization, 72, 74, 76, 78, 80, 87 shared, 4, 7, 18, 29, 43, 88, 113, 123 shared mental models, see mental models shared situation awareness (SSA), 1, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 159

shared understanding, 7, 12, 18, 29,43,44,

see also shared situation awareness simulation, 5, 18, 30, 31, 33, 34, 35, 37, 40, 41, 42, 43, 45, 46, 60, 63, 64, 65, 66, 92, 93, 95, 97, 101, 102, 114, 117, 118, 120, 139, 172, 173, 203, see also microworlds

situation awareness (SA), 1-20,29,36,46, 101, 115, 133, 136, 137, 142, 166, 172-174 SAG, see SAGAT Situation Awareness Based Agent

Transparency (SAT), 142, 154, 155, 158, 159, 164, 165, 166, 167, 169, 170, 172, 173, 174, 175, 176 Situation Awareness Global Assessment

Technique (SAGAT), 18, 19,20, 154, 155, 164, 172, 173

situation awareness requirements analysis, 13 see also perception; projection; shared

situation awareness (SSA); shared understanding; team situation awareness (TSA); understanding situation dynamics, 2 skill, 55, 57, 134, 212, 226 sociotechnical, 12, 71-87, 92 soldier, see fields of practice structure, 217

submarine, see fields of practice


Diner’s Dilemma (DD), 133-175 Lost In The Desert, 231,236,237,240,241,246 Movie Recommendation (MR), 133-175 taskwork, 11,93, 206,207 teams

team-based, 91, 116 team-based knowledge, 115, 116 team cognition, 29, 30, 32,45, 54, 55,57, 63, 65,66, 67,91,92, 93, 94, 95,97, 101, 102, 103, 104, 111, 113, 114, 232,233, 234,245,246,249

team coordination, 4, 10, 11, 12, 13, 17, 60, 64, 67

team mental models (TMM), 11, 17,93, 102 team performance, 91 team processes, 11, 12, 13, 17, 18, 54, 94 team situation awareness (TSA), 1-21, 36,45, 93, 101, 102, 115

teamwork, 5, 11, 13, 14, 15, 17,29,36,44,45,60, 79,91,93,94,95, 111, 112, 125,210, 211,212,213,217,224, 231 technologies, 128 temporal knowledge, 115 tools, 7, 8, 9, 15, 18, 31, 33, 38, 39, 71. 73, 79,92, 93,98, 103, 104, 114, 118, 134, 141, 144, 145, 148, 173, 205,206, 207, 213, 226, 249

training, 4, 5, 6, 10, 11, 16, 21, 30, 35, 39, 41, 42, 45, 46, 54, 57, 60, 64, 65, 66, 78, 83, 84, 85, 86, 92, 98, 99, 103, 127, 153, 154, 173, 209 cross-training, 11

transactive memory system (TMS), 93 transparency, 31, 141, 142, 154, 159, 161, 163, 165, 167, 169, 175 transportation, 72

trust, 15, 16,46, 133-143, 154, 155, 159, 160, 163, 167-176,212 trust propensity, 136, 155


uncertainty, 31, 43, 72, 73, 74, 78, 86, 142

understandability, 159

understanding, 2-20, 30, 44, 45, 54, 55, 63, 72-75, 86, 89-101, 104, 111-116, 121, 123, 126, 127, 134, 135, 142, 156, 165, 166, 170, 174-176, 193, 194, 206, 209-215, 221, 225, 226, 231-233, 247, 249, see also shared understanding

unmanned aerial vehicles, see fields of practice unmanned ground vehicles, see fields of practice

user experience, 133, 137, 141, 148, 156, 159, 164, 165, 166, 169, 170, 175 users, 146, 152, 153, 165, 171


validation, 103 virtual, 138


work analysis and design, 71 workflows, 205, 214, 215, 221, 222, 224 workload, 6, 8, 13, 15, 16, 20, 45, 81, 92, 97, 102, 103, 174

NASA-TLX, 206 see also cognitive load
