Distributed Cognition in Teams Is Influenced by Type of Task and Nature of Member Interactions
R. Scott Tindale, Jeremy R. Winget, and Verlin B. Hinsz
Simple Aggregation—No Interaction
Aggregation with Limited Information Exchange
Fully Interacting Groups
Summary and Conclusions
The study of behavior in and by groups has a long history in the social and behavioral sciences (Triplett, 1897; Sherif, 1936; Lorge & Solomon, 1955). Due largely to military funding after WWII, many researchers began studying how teams performed in a variety of different contexts and under varying conditions. This work was integrated and summarized in a landmark volume by Steiner (1972). Two of Steiner’s main conclusions were that groups rarely perform up to their full potential and that group performance is heavily influenced by the type of task on which they work. These conclusions remain relevant to more recent attempts at theory and research on team performance. Steiner also focused on issues of both coordination and motivation in explaining team performance, which are still present in current work in this area (cf. Kerr & Hertel, 2011; Rico, Sanchez-Manzanares, Gil, & Gibson, 2008). However, the term “cognition” was rarely if ever used to describe the work Steiner reviewed.
Shortly after Steiner’s review, the types of tasks on which groups worked began to change from mostly physical tasks to more cognitive or information processing tasks. In addition, the social and behavioral sciences were being swept up in the cognitive revolution (Lachman, Lachman, & Butterfield, 1979; Newell & Simon, 1972). Individuals (and later groups) began to be seen as information processing systems, and the theories that guided performance research began to focus both on information and the ways in which it was processed. Several review articles on groups published in the 1990s began to reflect this shift in emphasis (Hinsz, Tindale, & Vollrath, 1997; Larson & Christensen, 1993; Thompson & Fine, 1999). Much of the research from this period focused on how information that was distributed among the group members was processed or used by the group (Stasser & Titus, 1985, 1987). Consistent with one of Steiner’s (1972) main conclusions, groups rarely performed up to their potential as information processors. However, the focus on information tended to overshadow Steiner’s emphases on other concepts like motivation and coordination. Research since the 1990s has begun to reintroduce such notions into models of group information processing (De Dreu, Nijstad, & van Knippenberg, 2008; Abele, Stasser, & Chartier, 2010).
Another recent trend in group research has focused on how simply aggregating individual judgments (sans interaction or communication) can lead to quite accurate group judgments. The power of groups (or more colloquially, the “wisdom of crowds,” Surowiecki, 2004) has led to the use of “big data” to help organizations make several different types of decisions (Tetlock & Gardner, 2015). In such instances, group members (broadly defined) serve as data points or information sources, but computer algorithms aggregate the information for further processing and final judgments. This has led some researchers to argue that member interaction is superfluous, or even detrimental, to team performance (Armstrong, 2006). This would imply that using teams to gain information is useful, but the processing of such information should be done elsewhere. However, other research has shown that group member interaction can and should help groups process information in a number of task situations (Mellers et al., 2014; Kerr & Tindale, 2011).
Our goal in the present chapter is to review and integrate research on teams as cognitive or information processing systems (cf. Hinsz, 2001), taking into account how such processes function for, or are affected by, different types of tasks and the nature of member interaction. It will be a targeted review, attempting to highlight key task features and key aspects of member interaction (or lack thereof). One of the key task distinctions we will make draws from Steiner’s (1972) distinction between unitary and divisible tasks. Unitary tasks are those where all group members are basically working on the same task together. For example, a team of programmers correcting an error in a computer program would be a unitary task. Divisible tasks are those where each team member (or subgroup of members) works on an ostensibly different task that, when combined with the tasks performed by other members (or subgroups), leads to some collective goal. An example of a divisible task would be an organization launching a new product, with some team members working on marketing the product, others on production, and still others on staffing, etc. This distinction is in some senses arbitrary since many unitary tasks could be broken up into subtasks. However, the role of information exchange and interaction is different depending on how easily divisible the task tends to be. We will also discuss how context, task type, and amount of interaction affect how members’ knowledge and preferences are combined into final group products (Davis, 1982; Hinsz & Ladbury, 2012). How groups combine their individual preferences, knowledge, skills, etc. to reach group goals has been an important topic in group performance for both fully interacting and non-interacting (simple aggregate) groups.
Some combination processes are possible regardless of levels of interaction (i.e., averaging) but others require greater levels of interaction in order to emerge (Hinsz & Ladbury, 2012; Kerr & Tindale, 2011). Thus, we will emphasize the role of combination processes throughout the chapter. Finally, we will attempt to use current theory and research findings to make suggestions on how best to use teams as information processing systems and to discuss where the field might productively head in the future.
Simple Aggregation—No Interaction
Although the basic finding has been known since Galton’s (1907) demonstration, Surowiecki (2004) brought the notion of the wisdom of crowds to the forefront of popular culture. The basic idea behind the wisdom of crowds is that an aggregation of many individual judgments will tend to be more accurate than a randomly selected individual judgment and will often be more accurate than a single judgment from an expert (Steiner, 1972). This phenomenon has been replicated many times in a variety of different judgment task domains (Larrick & Soll, 2006; Surowiecki, 2004). Ariely et al. (2000) showed, assuming pairwise conditional independence and random individual error distributions (although rare in many decision contexts), the average of J probability estimates (J = the number of estimators) will always be better than any of the component individual estimates, and as J increases, the average will tend toward perfect calibration and diagnosticity (accurate representation of the true state of affairs), even when information provided to the various estimators is less than optimal. In addition, Johnson, Budescu, and Wallsten (2001) empirically showed the accuracy of the average probability estimate to be robust over several conditions, even when individual estimates were not independent. Recent work on forecasting has shown a simple average of multiple independent forecasts will perform better than individual experts and often perform as well as more sophisticated aggregation techniques (Armstrong, 2001).
Larrick and Soll (2006) have explained the advantage of simple averages over individual judgments using the concept of “bracketing.” If the group member judgments are independent, some members’ estimates will fall above the “true score” and others’ below it. Thus, the estimates “bracket” the true score. When this is true, it can be mathematically shown that the mean of the multiple estimates will always be more accurate than the average individual judge. If the true score is well bracketed by the multiple estimates (near the median or mean), the aggregate accuracy will be far superior to the typical individual judge. However, even if the true score is closer to one of the tails of the distribution, the mean will still outperform the typical individual, though not to the same degree. Larrick and Soll (2006) also show that even when the true score is not bracketed by the estimates, the group (average) will do no worse than the typical individual judge.
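The bracketing argument can be illustrated with a minimal simulation. The true score, noise level, and number of judges below are arbitrary assumptions rather than values from any of the studies cited; the key point is that the absolute error of the average can never exceed the average of the individual absolute errors.

```python
import random

random.seed(42)
TRUE_SCORE = 100.0

# 25 independent, noisy estimates scattered around the true score,
# so that some fall above it and some below (bracketing)
estimates = [random.gauss(TRUE_SCORE, 15.0) for _ in range(25)]

# Error of the crowd average vs. the error of the average individual judge
crowd_error = abs(sum(estimates) / len(estimates) - TRUE_SCORE)
mean_individual_error = sum(abs(e - TRUE_SCORE) for e in estimates) / len(estimates)
# By the triangle inequality, crowd_error can never exceed mean_individual_error
```

When the estimates bracket the true score well, the gap between the two error values is large; without bracketing, the crowd average merely ties the typical judge, mirroring Larrick and Soll’s claim.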
From an information processing perspective, bracketing is a function of distributed information. Different group members have different information about the particular judgment context. This differential information access leads to judgments that vary as a function of that information. Assuming the generally available information is not biased toward a particular tail of the distribution, the judgments should randomly vary around the true score. Thus, the wisdom of crowds can be viewed as a function of the natural distribution of information across members.
Although central tendency aggregation models have been shown to do quite well in many situations (Larrick & Soll, 2006), a number of researchers have attempted to improve aggregate forecasts by modifying the aggregation procedure. Budescu and Chen (2014) formulated a method for improving group forecasts by eliminating members whose forecasts detract from the aggregate performance. They had individuals make probabilistic forecasts for a variety of events and then assessed whether the aggregate forecast was better or worse when each individual was included in (or removed from) the aggregate. By only including those individuals whose forecasts showed a positive influence on accuracy, they consistently improved the accuracy of the aggregate forecasts relative to the simple average and other less effective weighting schemes, and the improvements persisted for future judgments not used to define the inclusion criteria (see also Mellers et al., 2014). Mannes, Soll, and Larrick (2014) suggest a select-crowd strategy, which ranks judges based on a cue to ability (e.g., the accuracy of several recent judgments) and averages the opinions of the top judges (e.g., the top five). Through both simulation and an analysis of 90 archival data sets, results show select crowds of five knowledgeable judges yield very accurate judgments across a wide range of possible settings—the strategy is both accurate and robust (Mannes et al., 2014). Following this, they examine how people prefer to use information from a crowd. The authors’ findings demonstrate people are drawn to experts and dislike crowd averages, but importantly, they view the select-crowd strategy favorably and are willing to use it. The select-crowd strategy is accurate, robust, and appealing as a mechanism for helping individuals tap collective wisdom.
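A hedged sketch of the select-crowd strategy follows. The judge labels, past error scores, and current forecasts are all made up for illustration; the only mechanics taken from the text are ranking judges on a cue to ability (here, smaller recent error) and averaging the forecasts of the top five.

```python
# Hypothetical recent-accuracy records (lower error = more able judge)
past_error = {"j1": 4.1, "j2": 9.8, "j3": 2.5, "j4": 7.0, "j5": 3.3,
              "j6": 12.2, "j7": 5.6, "j8": 1.9, "j9": 8.4, "j10": 6.1}
# Hypothetical current forecasts from the same ten judges
forecast = {"j1": 52.0, "j2": 70.0, "j3": 49.0, "j4": 61.0, "j5": 50.0,
            "j6": 75.0, "j7": 55.0, "j8": 48.0, "j9": 66.0, "j10": 58.0}

# Rank judges by the ability cue and keep the top five
top5 = sorted(past_error, key=past_error.get)[:5]

# The select-crowd estimate averages only those five forecasts
select_crowd = sum(forecast[j] for j in top5) / len(top5)
```

The full-crowd average of these ten forecasts would also be computable in one line, which makes it easy to compare the two strategies on real data.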
Aggregation with Limited Information Exchange
Although simple aggregation tends to produce fairly accurate decisions, there is little chance for members to share information or defend their positions. In addition, group members often remain unaware of others’ positions and the final group product. Although there is evidence that often little is gained by member exchanges of information for some judgment tasks (Armstrong, 2006; Lorenz, Rauhut, Schweitzer, & Helbing, 2011), it is difficult for members with insights or valuable information to have influence without some type of interaction among team members (Kerr & Tindale, 2011). Obviously full group deliberation allows members to share and defend their positions. However, there is evidence the most influential members in freely interacting groups (based on status or confidence) are not always the most accurate or correct (Littlepage, Robison, & Reddington, 1997). Thus, various compromise procedures have been suggested in which some information exchange is allowed but conformity pressures and incidental influence are minimized.
Probably the most famous of these limited-exchange procedures is the Delphi technique (Dalkey, 1969; Rowe & Wright, 1999, 2001). The Delphi technique has been used frequently for idea generation and forecasting, but it has also been adapted to other situations (Rohrbaugh, 1979). The procedure starts by having a group of
(typically) experts make a series of estimates, rankings, idea lists, etc. on some topic of interest to the group or facilitator. The facilitator then compiles the list of member responses and summarizes them in a meaningful way (mean rank or probability estimate, list of ideas with generation frequencies, etc.). The summaries are given back to the group members, and they are allowed to revise their initial estimates. The group members are typically anonymous, and the summaries do not specify which ideas or ratings came from each member. This procedure allows information from the group to be shared among the group members but avoids conformity pressure or undue influence by high-status members. The procedure can be repeated as many times as seems warranted but is usually ended when few if any revisions are recorded. The final outcome can range from a frequency distribution of ideas to a choice for the most preferred outcome or the mean or median estimate. A number of related procedures (e.g., nominal group technique; Van de Ven & Delbecq, 1974) use similar procedures but vary in terms of how much information is shared and whether group members can communicate directly. Overall, the purpose of these procedures is to allow for some information exchange while holding potential distortions due to social influence in check. Research on the Delphi technique has tended to show positive outcomes. Delphi groups do better than single individuals and do at least as well as, if not better than, face-to-face groups (Rohrbaugh, 1979). They have also been found to work well in forecasting situations (Rowe & Wright, 1999, 2001).
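The Delphi feedback loop can be sketched as a toy model. This is not a prescribed implementation: it assumes, purely for illustration, a rule in which each anonymous member revises partway toward the fed-back group median on every round, with made-up starting estimates.

```python
from statistics import median

def delphi_round(estimates, pull=0.5):
    """Feed the anonymous summary (the median) back to members,
    each of whom revises partway toward it."""
    m = median(estimates)
    return [e + pull * (m - e) for e in estimates]

# Hypothetical initial estimates from five anonymous experts
estimates = [40.0, 55.0, 90.0, 60.0, 48.0]

# Repeat the summarize-and-revise cycle; in practice the facilitator
# stops when few if any revisions are recorded
for _ in range(3):
    estimates = delphi_round(estimates)

final = median(estimates)   # the group's final estimate
```

Under this revise-toward-the-median rule the estimates converge while the median itself is preserved, which captures the spirit of Delphi: information is shared through anonymous summaries rather than direct pressure from high-status members.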
A more recent technique is the use of prediction markets (cf. Wolfers & Zitzewitz, 2004). Much like financial markets, prediction markets use buyers’ willingness to invest in alternative events (e.g., Great Britain will vote to stay vs. leave the European Union, the U.S. will launch a cyber-attack against Iran in the next year, etc.) as a gauge of their likelihood. They typically do not prohibit direct communication among forecasters/investors/bettors, but in usual practice, there is little, if any, communication. However, because the value placed on the assets is typically set in an open market of buyers and sellers, those already in (or out) of the markets can be informed and swayed by various market indicators (e.g., movements in prices, trading volume, volatility), and thus mutual social influence can occur through such channels. Prediction markets are a dynamic and continuous aggregation process in which bids and offers can be made, accepted, and rejected by multiple parties, and the collective expectations of the “group” can continue to change right up to the occurrence of the event in question (e.g., an election). Except for those with ulterior motives (e.g., to manipulate the market, or to use the market as a form of insurance), investments in such markets are likely to reflect the investors’ honest judgments about the relative likelihood of events. Members can use current market values to adjust their thinking and learn from the behavior of other members. However, such investment choices are not accompanied by any explanation or justification. Indeed, such investors may even have incentives to withhold vital information that would make other investors’ choices more accurate (e.g., that might inflate the price of a “stock” one wants to accumulate). Thus, in terms of opportunities for mutual education and persuasion, prediction markets fall somewhere between statistical aggregation methods (which allow none) and face-to-face groups (which allow many).
There is now a growing body of evidence supporting the accuracy of prediction markets (Forsythe, Nelson, Neumann, & Wright, 1992; Rothchild, 2009; Wolfers
& Zitzewitz, 2004). Like much of human judgment, prediction markets sometimes overestimate the likelihood of very rare events (Kahneman & Tversky, 1979), but they have done extremely well at predicting various elections in the U.S. and elsewhere. There is also experimental evidence group members can learn from participating in market-type environments. Maciejovsky and Budescu (2007) had people participate in a competitive auction bidding for information in order to solve the Wason card task, which requires using disconfirming evidence for testing a hypothesis (i.e., overcoming the confirmation bias). Their results showed that participants were better at solving such problems (chose the appropriate evidence in an efficient manner) after having participated in the auctions. Thus, even with very minimal exchange, groups can be very accurate decision makers and their members can gain expertise during the process.
Vroom and Yetton (1973) argued one of the ways managers make decisions is through consultation: The decision is made by the manager but only after getting advice from key members of the team. Vroom and Yetton (1973) argued consultation is optimal when managers do not have all the information at their disposal to make a good decision. Thus, they utilize the information distributed among the rest of the team members. Sniezek and Buckley (1995) referred to this mode of social decision making as the “judge-advisor” systems approach. Such decision systems are quite common in the military and in many organizations (Sniezek et al., 1995). The judge is responsible for the final decision but he/she seeks out suggestions from various advisors. Judge-advisor systems have received a fair amount of research attention (see Bonaccio & Dalal, 2006 for a review). Based on the research just discussed, unless the judge had far more expertise than an advisor, the judge should weight the advice equally with their own opinion. Although receiving advice usually does improve judges’ decisions relative to when they receive no advice, a vast amount of research has shown judges tend to weight their own opinions more than twice as much as the advice they receive (Larrick, Mannes, & Soll, 2012). This has been referred to as egocentric advice discounting (Yaniv, 2004; Yaniv & Kleinberger, 2000). This effect has been found to be extremely robust and has been replicated in many decision situations with several types of judges and advisors (Bonaccio & Dalal, 2006).
Judges do take the expertise of the advisors into account when re-evaluating their position. Thus, judges discount less when the advisors are known experts or their past advice has been accurate (Goldsmith & Fitch, 1997). Judges are also more likely to use advice when making judgments in unfamiliar domains (Harvey & Fischer, 1997), and they learn to discount poor advice to a greater degree than good advice (Yaniv & Kleinberger, 2000). However, judges are not always accurate in their appraisals of advisors’ expertise. Sniezek and Van Swol (2001) have shown one of the best predictors of judges’ use of advice is advisor confidence, which is poorly correlated with advisor accuracy. Discounting occurs less for advice a judge solicits than advice a judge simply receives (Gibbons, Sniezek, & Dalal, 2003). In addition, judges discount less when the task is complex (Schrah, Dalal, & Sniezek, 2006), when there are financial incentives for being accurate (Sniezek & Van Swol, 2001), and when they trust the advisor (Van Swol & Sniezek, 2005). However, discounting is present in virtually all judge-advisor situations, and it almost always reduces decision accuracy.
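Egocentric discounting can be expressed as a simple weighted combination of the judge's own estimate and the advisor's. The 70/30 split and the estimate values below are illustrative stand-ins, not parameters from the studies cited; they simply show how weighting one's own opinion more than twice as heavily as the advice pulls the revision away from the equal-weighting benchmark.

```python
def revise(own, advice, own_weight):
    """Weighted combination of a judge's own estimate and an advisor's advice."""
    return own_weight * own + (1 - own_weight) * advice

own, advice = 80.0, 60.0

# Typical egocentric discounting: own opinion weighted roughly 70/30,
# so the revised estimate stays close to the judge's original position
discounted = revise(own, advice, own_weight=0.7)

# Equal weighting, appropriate when judge and advisor are similarly expert,
# is simply the two-person average
equal = revise(own, advice, own_weight=0.5)
```

When judge and advisor errors bracket the truth, the equal-weighted average enjoys the same aggregation advantage described earlier, which is why discounting usually costs accuracy.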
Much of the research on judge-advisor systems has only allowed advisors to provide judgments or judgments with confidence ratings (Bonaccio & Dalal, 2006). This does not allow judges to hear arguments in support of particular positions or estimates. In addition, most of the judgment tasks used for this research (and for the simple aggregation research discussed previously) are what Laughlin (1980) would call “judgmental,” rather than “intellective,” tasks. Judgment tasks involve matters of opinion that do not allow group members to actually “demonstrate” the accuracy or correctness of their judgments. On the other hand, intellective tasks are matters of fact for which correct or more accurate responses exist. For intellective tasks, under particular conditions (Laughlin & Ellis, 1986), group members should be able to convince other members their position is correct or most accurate using the information available. However, to do this, members need to be able to discuss and share information relevant to performance on the task. Limited information exchange strategies tend not to allow for such interactions. Kerr and Tindale (2011) showed limited exchange strategies do well when the correct answer is bracketed by the group member preferences or when the correct solution is the most popular. Nevertheless, when correct solutions are only preferred by a minority of the members or are far from the mean or median member positions, such strategies lead to performance much below levels expected for interacting groups. Now, our attention turns toward fully interacting groups.
Fully Interacting Groups
Most of the research on group decision making has focused on groups in which the members meet face-to-face and discuss the decision problem until they reach consensus (Kerr & Tindale, 2004). Early research in this area tended to focus on member preferences as the major feature predicting group decision outcomes (Davis, 1973; Kameda, Tindale, & Davis, 2003). More recent research has focused on how groups process information (Hinsz et al., 1997) and the degree to which the group uses available information (Brodbeck, Kerschreiter, Mojzisch, Frey, & Schulz-Hardt, 2007; Lu, Yuan, & McLeod, 2012). Additionally, the motivational aspects of groups and group members have begun to receive attention (De Dreu et al., 2008). We will focus mainly on the two more recent areas in the sections below.
A popular approach to studying interacting groups working on a unitary task utilizes the hidden profile paradigm (Stasser & Titus, 1985). This paradigm is marked by a biased pattern of information distribution in which, prior to group discussion, some information is common to all group members and other information is unique to individual members. The common or fully shared information favors a suboptimal decision alternative, whereas all the unique information combined reveals the optimal alternative. Ultimately, this “hides” the optimal decision choice from the group as a whole. It can only be discovered when each individual shares their unique information and the group uses this information to inform its decision.
Research on hidden profile tasks has shown groups generally do not exchange information efficiently and decision quality suffers as a result. A meta-analysis of the hidden profile paradigm showed (1) groups mention more pieces of common information than unique information; (2) hidden profile groups are less likely to find the solution than are groups having full information; and (3) information pooling (i.e., percentage of unique information mentioned out of total available information, percentage of unique information out of total discussion) is positively related to decision quality. Moreover, communication medium (i.e., computer-mediated communication vs. face-to-face) does not affect (4) unique information pooling or (5) group decision quality (Lu, Yuan, & McLeod, 2012). However, group size, total information load, the proportion of unique information, task demonstrability, and hidden profile strength (i.e., degree of bias created by the hidden profile) moderated these effects.
Most of the current research findings have been nicely encapsulated by Brodbeck et al. (2007) in their Information Asymmetries Model of group decision making. The model categorizes the various conditions that lead to poor information sharing into three basic categories. The first category, negotiation focus, encompasses the various issues surrounding initial member preferences. If groups view the decision-making task mainly as a negotiation over which alternative should be chosen, members tend to focus on the alternatives rather than on the information underlying them. The second category, discussion bias, encompasses those aspects of group discussion that tend to favor shared vs. unshared information (e.g., items shared by many members are more likely to be discussed). The third category, evaluation bias, encompasses the various positive perceptions associated with shared information (e.g., shared information is perceived as more valid, sharing shared information leads to positive evaluations by other group members). All three categories are good descriptions of typical group decision making and can lead to biased group decisions and inhibit cross-fertilization of ideas and individual member learning (Brodbeck et al., 2007).
A key aspect of the Information Asymmetries Model is that the various aspects of information processing in typical groups only lead to negative outcomes when information is distributed asymmetrically across group members, as when a hidden profile is present. Although such situations do occur, and groups can make disastrous decisions under such circumstances (Janis, 1982; Messick, 2006), they are not typical of most group decision environments. In situations where members have independently gained their information through experience, the shared information they have is probably highly valid and more useful than unique information or beliefs held by only one member. Thus, the fact members share preferences and information in many group decision contexts is probably adaptive and has generally served human survival well (Hastie & Kameda, 2005; Kameda & Tindale, 2006). In addition, groups are often (but not always) sensitive to cues in the environment that indicate information is not symmetrically distributed (Brauner, Judd, & Jacquelin, 2001; Stewart & Stasser, 1998).
Although minorities often are not very influential in groups, if minority members have at their disposal critical information others do not have and that implies the initial group consensus may be wrong, other group members will pay attention to them. However, such minority effects may only be realized when groups are (or think they are) working on intellective tasks. Several studies have shown moderating effects of tasks having a demonstrably correct solution. Lu and colleagues (2012) found the odds of a manifest profile group (i.e., one in which all members have access to all information) finding the optimal task solution, relative to a hidden profile group, increased when working on tasks with high (vs. low) solution demonstrability (i.e., odds ratios of 15.18 vs. 2.46, respectively). Their results indicate hidden profile tasks without a clear preferred solution are most detrimental to information sharing and decision quality, whereas highly demonstrable tasks increase information sharing (Lu et al., 2012). These findings are consistent with other research showing information pooling is more predictive of decision quality (Mesmer-Magnus & DeChurch, 2009) and group discussions are less likely to focus on common information during high demonstrability tasks (Reimer, Reimer, & Czienskowski, 2010).
Other research provides converging evidence for these claims of the influence of task demonstrability. Specifically, Laughlin, Bonner, and Miner (2002) had 82 four-person cooperative groups and 328 independent individuals solve a random coding of the letters A-J to the numbers 0-9. On each trial the group or individual proposed an equation in letters (e.g., A + D = ?), received the answer in letters (e.g., A + D = B), proposed one specific mapping (e.g., A = 3), received the answer (e.g., True, A = 3), and proposed the full mapping of the ten letters to the ten numbers. Researchers found groups needed fewer trials to find the solution, proposed more complex equations, and identified more letters per equation than each of the best, second-best, third-best, and fourth-best individuals. In this experiment, the nature of the task had a clearly appropriate solution (i.e., it was intellective rather than judgmental), which required demonstrable recognition of correct answers, demonstrable rejection of erroneous answers, and multiple insights into effective collective information processing strategies.
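The structure of the letters-to-numbers task can be sketched in a few lines. The trial logic below is a simplified stand-in for the full experimental procedure: a hidden random coding maps the letters A–J onto the digits 0–9, a proposed equation in letters is answered in letters, and a specific letter-to-number guess receives true/false feedback.

```python
import random

random.seed(0)
LETTERS = "ABCDEFGHIJ"

# Hidden random coding of the ten letters to the ten digits
digits = list(range(10))
random.shuffle(digits)
code = dict(zip(LETTERS, digits))
decode = {v: k for k, v in code.items()}

def answer_in_letters(a, b):
    """Answer a proposed equation 'a + b' with the sum spelled in letters."""
    total = code[a] + code[b]
    return "".join(decode[int(d)] for d in str(total))

# One trial: propose an equation, receive the answer in letters,
# then test one specific mapping and receive true/false feedback
reply = answer_in_letters("A", "D")
guess_correct = (code["A"] == 3)
```

Each trial thus yields information that constrains the full mapping, and the group (or individual) wins by proposing the complete ten-letter assignment in as few trials as possible; groups' advantage came from proposing more informative equations per trial.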
According to the theory of combinations of contributions, the outcomes of group interaction on a task can be predicted by two components: the contributions and the combinations (Hinsz & Ladbury, 2012). The contributions refer to the inputs group members bring with them to the task situation (e.g., cognitive skills, processing goals, etc.). The combinations refer to the aggregation principle by which the contributions are combined to lead to the group outcomes (e.g., strategies to pool, share, and integrate information). Importantly, contributions and combinations directly relate to the cognitive processes involved in how group inputs result in team outcomes on a task (Hinsz & Ladbury, 2012).
Groups always exist in a context, and they are sensitive to this context (Hinsz & Ladbury, 2012). Thus, the combinatorial rule that summarizes the processes by which inputs are transformed into outcomes is dependent on the context as well. One of the key findings concerning how teams process information is the common knowledge effect; that is, information shared by many team members plays a larger role in team process and performance than unshared information (Stasser & Titus, 1985). Given this finding, it seems that to increase the amount of information sharing within a team, all team members should have access to all the information available. However, despite the benefits of such manifest profile groups (e.g., Lu et al., 2012), in information-rich environments, assigning all information to all members may overload each member’s cognitive capabilities.
Tindale and Sheffey (2002) examined ways to optimally assign information to group members. Following a model proposed by Zajonc and Smoke (1959), the researchers assessed the effects of information assignment redundancy and group interaction on group memory performance. Participants in five-person groups received either a full list of consonant-vowel-consonant non-word trigrams to memorize or a partial list with each trigram distributed to two group members. Groups recalled trigrams as either coacting or interacting groups. In terms of correct recall, coacting groups outperformed interacting groups, and partial redundancy produced better recall than total redundancy. However, intrusion errors were greatly reduced by group interaction and/or a reduction in the cognitive load on the individual group members (i.e., partial redundancy). Groups in the partial redundancy condition tended to perform near optimal levels. A thought experiment of a similar problem using the ideal group model (Sorkin & Dai, 1994) produced similar results (Wallace & Hinsz, 2010). By comparing distributions of information that were unique to each member of the group, partially redundant among group members, or completely redundant, the simulation indicated partially redundant distributions produced superior memory performance to that of the unique or completely redundant conditions.
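A minimal sketch of a partial-redundancy assignment in the spirit of this design follows. The trigram strings are invented placeholders; the mechanics shown are that each item is studied by exactly two of the five members, with member pairs rotated so the memory load is spread evenly.

```python
from itertools import cycle

members = ["m1", "m2", "m3", "m4", "m5"]
# Hypothetical consonant-vowel-consonant non-word trigrams
trigrams = ["KEZ", "VUD", "JOF", "BIW", "LAX",
            "NUP", "GEV", "TIB", "ROZ", "FUM"]

# Partial redundancy: assign each trigram to exactly two members,
# rotating through all member pairs to balance the load
assignment = {m: [] for m in members}
pair_stream = cycle([(a, b) for i, a in enumerate(members)
                     for b in members[i + 1:]])
for tri, (a, b) in zip(trigrams, pair_stream):
    assignment[a].append(tri)
    assignment[b].append(tri)
```

With ten trigrams and ten possible pairs of five members, each member ends up responsible for four items instead of all ten, which is the load reduction credited with the near-optimal recall in the partial redundancy condition.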
Motivation in groups has been a topic of interest in social psychology since its earliest days as a field of inquiry (Triplett, 1897). Many studies have focused on how groups affect the amount of effort expended by their members, and both motivation gains and losses have been demonstrated (Kerr & Tindale, 2004; Weber & Hertel, 2007). Motivation has also been an important topic in group, as well as individual, decision making, and until recently the basic motivational assumption was hedonism. Many models of collective decision-making use basic game theoretic, or utility maximization, principles to explain how members both choose initial preferences and move toward consensus (Kahn & Rapoport, 1984). Thus, much of the early work on group decision making tended to treat individual group members as players in a utility maximization game (Budescu, Erev, & Zwick, 1999). Game theory approaches are quite prevalent and also quite useful for understanding social behavior (Kameda & Tindale, 2006), but other motives more associated with the group level of analysis have also been found to be important (Levine & Kerr, 2007). In addition, many of these motivations were discovered because social behavior did not follow game theoretic expectations (Dawes, van de Kragt, & Orbell, 1988).
Probably the most heavily researched of these more recent motives in groups is the ingroup bias (Hogg & Abrams, 1988). There is now substantial evidence that when group members think of themselves as a group (thus sharing a social identity), they begin to behave in ways that protect the group from harm or enhance its overall welfare. Many of the implications of this bias are positive for the group, but there are situations where it prevents groups from making good decisions. For example, groups are more likely than individuals to lie about preferences and resources in a negotiation setting (Stawiski, Tindale, & Dykema-Engblade, 2009). Probably the most prominent example in which protecting or enhancing the group's welfare leads to less than optimal decisions is the inter-individual-intergroup discontinuity effect (Wildschut, Pinter, Vevea, Insko, & Schopler, 2003). McCallum et al. (1985) initially demonstrated this effect by comparing individuals to groups playing a prisoner's dilemma game. The prisoner's dilemma is a mixed-motive game in which the dominant, or individually rational, response is not to cooperate with the other player. However, when both players make the non-cooperative choice, they both do poorly. The only collectively rational choice is for both players to cooperate, which leads to the greatest collective payoff and moderate positive gains for each player. When two individuals play the game and can discuss it before making choices, they both end up cooperating more than 80% of the time. However, when two groups play the game and each group must choose between cooperation and non-cooperation, groups quite often choose not to cooperate. Over multiple plays of the game, groups end up locked into mutual non-cooperation and earn far worse payoffs than individuals do in the inter-individual situation.
This effect has been replicated many times using several types of mixed motive game structures and different sized groups (see Wildschut et al., 2003 for a review).
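The structure of the dilemma can be made concrete with a standard payoff matrix. The numbers below are illustrative values satisfying the usual ordering T > R > P > S (temptation > reward > punishment > sucker), not payoffs from the studies reviewed above.

```python
# Standard prisoner's dilemma payoffs (row player's payoff listed first).
T, R, P, S = 5, 3, 1, 0  # illustrative values with T > R > P > S

payoff = {
    ("C", "C"): (R, R),  # mutual cooperation
    ("C", "D"): (S, T),  # cooperator exploited
    ("D", "C"): (T, S),  # defector exploits
    ("D", "D"): (P, P),  # mutual non-cooperation
}

def best_reply(other_choice):
    """Individually rational response: the move with the higher own payoff."""
    return max("CD", key=lambda my: payoff[(my, other_choice)][0])

# Whatever the other side does, defection is the dominant reply ...
assert best_reply("C") == "D" and best_reply("D") == "D"
# ... yet mutual defection pays less than mutual cooperation.
assert payoff[("D", "D")][0] < payoff[("C", "C")][0]
```

This is the tension the discontinuity research exploits: the individually rational move and the collectively rational move diverge, and groups, unlike communicating individuals, tend to land on the former.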
However, giving groups the right motivation can help them become better information processors. De Dreu et al. (2008) developed a model of group judgment and decision making based on the combination of epistemic and social motives. Called the "motivated information processing in groups" (MIP-G) model, it argues that information processing in groups is better understood by incorporating two somewhat orthogonal motives: high vs. low epistemic motivation and pro-social vs. pro-self motivation. Earlier work on negotiation had shown that negotiators who share both high epistemic motivation and a pro-social orientation are better able to find mutually beneficial tradeoffs and reach more integrative agreements than negotiators with any other combination of motives (De Dreu, 2010). Recent research shows the same appears to hold true for groups working cooperatively to solve a problem or make a decision. According to the model, high epistemic motivation involves a goal to be accurate or correct, which should lead to deeper and more thorough information search and analysis (Kruglanski & Webster, 1996). Work on information sharing effects has consistently demonstrated that instilling a goal of accuracy, or defining the task in terms of solving a problem, increases information sharing (Postmes, Spears, & Cihangir, 2001; Stasser & Stewart, 1992). Members high in pro-social motivation help to ensure that all types of information held by each member are likely to be disseminated, rather than just information supporting the position an individual member holds. Recent research showing that members' focusing on preferences rather than information tends to impede information sharing is quite consistent with this assertion (Mojzisch & Schulz-Hardt, 2010). The model predicts that information processing in groups will only approach optimal levels when group members are high in both epistemic motivation and pro-social orientation.
This is because it is the only combination that produces both systematic and thorough processing of information in an unbiased manner. The MIP-G model appears to do a good job of explaining several well replicated findings and has fared well in the few direct attempts to test it (Bechtoldt, De Dreu, Nijstad, & Choi, 2010; De Dreu, 2007).
Steiner’s (1972) definition of divisible tasks probably maps more closely onto most organizational team tasks than the unitary tasks discussed thus far. Unfortunately, much more work on information processing has been done on unitary tasks. This is partially a function of using laboratory studies to follow how information flows through a group. Unitary tasks are easier to use when time is limited, and the implications of information are more clearly defined for unitary tasks. However, recent research in organizational contexts has given cognition a much more prominent role (Salas, Goodwin, & Burke, 2009). Another difference between unitary and divisible tasks involves how information is used. For unitary tasks, information exchange and processing are usually oriented toward solving a specific problem or choosing a particular course of action. Perhaps a useful example of how information is processed for divisible tasks is in terms of command and control teams (Cooke, Gorman, Duran, & Taylor, 2007). Each member of the team has a role and responsibilities for the team’s performance. Moreover, for many command and control teams, the information is processed so a team decision can be reached.
For divisible tasks such as command and control, information processing can also serve a coordination function. Coordination is one of the major obstacles to effective team functioning on divisible tasks. The information needed for performance is distributed among the team members, and communication of this information is required so the team can meet its objectives. Consequently, for divisible tasks, communication is part of the team cognition (Cooke, Gorman, Myers, & Duran, 2013) and part of the processing of the team's information (Hinsz et al., 1997). Moreover, in a number of command and control situations, subgroups may be working on different, interdependent aspects of a task. Knowing when other subgroups will complete their tasks is important for judging the timing and performance of one's own subgroup (Marks, Mathieu, & Zaccaro, 2001). Thus, information processing is critical for performance on divisible tasks, but in different ways (Cooke et al., 2013).
Probably one of the main cognitive constructs relevant to divisible tasks is shared mental models (Cannon-Bowers, Salas, & Converse, 1993; Hinsz, 1995; Mohammed, Ferzandi, & Hamilton, 2010). Mental models refer to mental representations of the task and the behaviors associated with performing the task (Rouse & Morris, 1986). At the team level, mental models also involve roles and interdependencies among team members (Klimoski & Mohammed, 1994; Mohammed et al., 2010). Cannon-Bowers et al. (1993) differentiated between task models and team models, and these were incorporated into the conceptualization of groups as information processors (Hinsz et al., 1997). Task models involve the various steps involved in the task and the resources (equipment, etc.) necessary to accomplish it. Group, or team, models involve the information and skills members have that are relevant to the task and the ways in which their skills and behaviors must be coordinated to move efficiently toward task completion. Such shared cognitive structures help team members coordinate actions and interpret information from other team members in consistent ways. They allow team members to develop similar explanations of the environment and to more effectively communicate implicitly, both of which improve team performance (Rico et al., 2008). Team mental models can enhance performance to the degree the models are accurate and the members all share the same model (Hinsz, 1995; Salas et al., 2009).
A few theorists have noted that a missing component of much team/groups research is time (McGrath & Tschan, 2004; Mohammed, Hamilton, & Lim, 2009). Time or timing is a critical component of divisible tasks where early outcomes inform later decisions and behaviors. Until recently, time has been a missing component of team mental models as well (Mohammed et al., 2009). However, recent research has demonstrated the importance of adding the time dimension to team mental models (Mohammed, Hamilton, Tesler, Mancuso, & McNeese, 2015). They argue that adding the notion of temporal team mental models to the general team mental model framework would improve our understanding of how teams coordinate their actions to achieve better outcomes. Using multiple operationalizations of temporal team mental models, they showed teams that incorporated temporal aspects into their mental models performed better than teams with mental models lacking in such aspects.
Team training on both task and team models tends to improve performance by ensuring that all aspects of both models are shared (Cannon-Bowers et al., 1993). A well-known team-training program, Cockpit Resource Management (Wiener, Kanki, & Helmreich, 1993), shows how training helps to create effective mental models. In an attempt to decrease errors in airline cockpit crews, the program had each crewmember cross-train on their own specific role and task as well as on every other role in the cockpit. This cross-training allowed team members to better understand how their role fit in with other roles and how the information they possessed affected other roles. In addition, team members were trained to feel comfortable communicating the information they had and to argue for its relevance in the presence of higher-status team members. Thus, teams were trained to share not only a mental model of the cockpit but also a norm of free-flowing information exchange across team members and status differences. Teams trained in this way showed substantial reductions in errors and an increase in airline safety. Similar performance enhancements have been shown for surgery and other teams in hospitals (King et al., 2008).
However, sharedness of either the task or the group model will only enhance performance to the degree the model is accurate (Hinsz, 1995; Hinsz & Ladbury, 2012). Stasser and Augustinova (2008) have shown that hierarchical, distributed decision situations in which each member has only incomplete information often produce better outcomes if information is simply sent up through the system by each group member, without requiring any type of intermediary judgment by members. In practice, however, many groups assume allowing judgments from various members is useful and use such a model to guide their behavior. Although aggregate judgments by many actors with different types and amounts of information tend to be more accurate than judgments made by single individuals (Kerr & Tindale, 2011), in distributed systems where each member has only one type of information, asking all the members to make judgments adds noise to the system. In addition, research has shown it is better for members not to know that others might have the same information they do, because such knowledge reduces their feelings of criticality and decreases the likelihood they will send all their relevant information forward (cf. Kerr & Hertel, 2011). Tschan et al. (2009) have shown that critical information easily available to emergency medical teams is often overlooked because each member assumes someone else would have discovered and presented the information if it were relevant. Thus, intuitive mental models shared by group members can inhibit performance if they are inaccurate in terms of the task or if they lead to decreased information sharing.
Another type of cognitive construct that has received a fair amount of attention in the literature is transactive memory (Wegner, 1986; Peltokorpi, 2008). Using an individual-level metaphor, Wegner argued team members encode, store, and retrieve information much like single individuals do (see also Wegner, 1995, for a computer metaphor). However, unlike individuals, teams have multiple information storage units, each associated with a different member. Thus, the memory capacity of a team is considerably larger than that of any given team member. However, for the group to be able to use the additional memory storage efficiently, different team members must encode and store different information. For teams working on divisible tasks, the different aspects of the task often define which member will be responsible for encoding and storing certain types of information. For example, pilots may be responsible for knowing flight plans and schedules, whereas copilots may be responsible for knowing current protocols for final safety checks. The copilot does not need to remember specific details of the flight plan because he/she could always retrieve them from the pilot, and vice versa. Although groups working on unitary tasks can divide up relevant information about the task and form transactive memory systems (Stewart & Stasser, 1995), such systems tend to form naturally for groups working on divisible tasks (Baumann & Bonner, 2011). Transactive memory systems allow for the efficient storage and retrieval of information and also increase team memory capacity. However, for a transactive memory system to work, the members must share a model of who knows what in that they understand how the information is distributed. Consequently, transactive memory systems are instrumental to the effective functioning of teams processing distributed information with divisible tasks.
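The "who knows what" structure can be sketched loosely in code. The sketch below is a metaphor rather than Wegner's model, and the members and stored items are hypothetical: each member stores different content, while the shared directory of locations lets any member retrieve any item.

```python
# Toy transactive memory system: members store different items, but the
# directory of "who knows what" is shared by everyone.
member_stores = {
    "pilot": {"flight_plan": "depart 0900, route via waypoint A4"},
    "copilot": {"safety_checks": "flaps, trim, radios, transponder"},
}

# The directory is the shared component: each member knows who holds an
# item even without knowing the item's content.
directory = {item: member
             for member, store in member_stores.items()
             for item in store}

def team_retrieve(item):
    """Route a query to the member the directory says holds the item."""
    return member_stores[directory[item]][item]
```

The design point the metaphor captures is that total storage grows with the number of members, but only if encoding is differentiated and the directory itself is accurate and shared.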
Liang, Moreland, and Argote (1995) showed that training groups together as a team, rather than training each individual separately, naturally leads to the formation of more effective transactive memory systems. Three-person teams were trained on how to assemble a small radio. The assembly consisted of three component parts, and each team member was trained on each component. Half of the teams were trained together and practiced the different components as a team. The other half involved each individual member being trained individually and then the three individuals were brought together to work as a team. Teams trained together performed better than teams trained as individuals, and the development of a transactive memory system (which team members were better at and knew more about different components) accounted for the difference. These results have now been replicated a number of times (Moreland, Argote, & Krishnan, 1998; Moreland, 1999) and have generalized to training groups in more natural settings (Peltokorpi, 2008).
Another recent construct beginning to receive research attention is the notion of "macrocognition" (Fiore, Smith-Jentsch, Salas, Warner, & Letsky, 2010). The term, as originally used by McNeese (1986), referred to higher level cognition that would be needed to coordinate human-machine systems, such as AI systems in aircraft that aid pilots in flight. More recent conceptualizations have focused on how teams adapt to complex environments and the new knowledge and cognitive processes that emerge from such adaptations (Fiore et al., 2010). Though a fair amount of research has focused on how team mental models can influence performance, macrocognition research focuses on how team mental models change and become more complex as a function of team performance and adaptation.
Although Steiner (1972) referred to divisible tasks within groups in which different members perform different subtasks, recent theorizing on teams in organizations has conceptualized parts of organizations as multi-team systems (Marks, Mathieu, & Zaccaro, 2001; Zaccaro, Marks, & DeChurch, 2011). A multi-team system involves "two or more teams that interface directly and interdependently in response to environmental contingencies toward the accomplishment of collective goals" (Marks et al., 2001, p. 290). Thus, the larger organizational task is divided among different teams. DeChurch and Mathieu (2009) argue such systems can be interdependent in at least three domains: inputs, processes, and outputs. As long as the teams share at least one over-arching goal and show interdependence in at least one domain, the teams can be seen as forming a system within the larger organization.
Many of the same issues associated with teams working on divisible tasks also appear for multi-team systems (Zaccaro et al., 2011). Importantly, the two factors that contribute to suboptimal team performance (Steiner, 1972) have also been shown to be important for the motivation (Rico, Hinsz, Burke, & Salas, 2017) and coordination (Rico, Hinsz, Davison, & Salas, 2018) of multi-team systems. Moreover, the motive for ingroup bias can arise for the component teams of a multi-team system, such that the other teams in the system are perceived as outgroups (Hinsz & Betts, 2011). Consequently, as with other forms of team structure, the types of interdependencies among members and teams will influence the nature of the interaction among the teams and their members.
The information processing among the component teams in a multi-team system also has similarities to information processing within teams. The knowledge and information residing within one team may serve as inputs to another team (Hinsz, Wallace, & Ladbury, 2009). Similarly, the actions of one team are likely to require a certain degree of coordination over time to ensure efficient system functioning (Hinsz et al., 2009). The conceptualization of multi-team systems can also serve as a bridge to the recent research focus on science teams (Fiore, 2008). Many wicked societal problems involve issues that span different levels of analysis and scientific disciplines. For example, understanding global warming and finding ways to ameliorate it involves meteorology, chemistry, psychology, and sociology, as well as additional disciplines. Thus, future research that helps to further explain how teams and team systems can operate most efficiently should contribute to the solution of other societal problems as well.
Summary and Conclusions
Research to date shows quite clearly that distributed cognition is one of the strongest influences on the quality of team performance across the variety of tasks teams confront in organizations. The ability to combine and evaluate information from multiple sources to solve a problem or choose a course of action is what allows teams to perform better than individuals working alone (Hinsz et al., 1997). Recent research on aggregation has shown that even without inter-member communication, the diversity of knowledge across members enhances the accuracy of the judgments groups make (Larrick & Soll, 2006). However, for teams to maximize their potential and to ensure unique information is exchanged, other cognitive and motivational factors must be involved.
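The averaging principle behind this aggregation result can be shown in a few lines (the judges' estimates here are hypothetical): when individual errors bracket the truth, they partially cancel, so the average judgment is more accurate than the average judge.

```python
truth = 100.0
estimates = [80.0, 95.0, 120.0, 130.0]  # hypothetical judges bracketing the truth

crowd_judgment = sum(estimates) / len(estimates)  # 106.25
crowd_error = abs(crowd_judgment - truth)         # 6.25

# Average error of the individual judges, for comparison.
avg_judge_error = sum(abs(e - truth) for e in estimates) / len(estimates)  # 18.75
```

The cancellation depends on diversity: if every judge erred on the same side of the truth, the crowd's error would equal the average judge's error, which is why diverse knowledge across members matters for aggregation.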
First, in addition to unique, distributed knowledge, teams need a core base of shared knowledge so all members can see the relevance of the unique information when it is shared (Laughlin & Ellis, 1986; Cannon-Bowers et al., 1993; Hinsz, 1995). Thus, appropriate shared background knowledge and accurate shared mental models allow distributed cognition to aid team performance. Such shared cognitions also allow team members to better coordinate their efforts. Second, teams need high epistemic and pro-social motivations in order to fully use the information at their disposal (De Dreu et al., 2008). Research has consistently shown that viewing tasks as having a correct or optimal solution leads to better information sharing (Stewart & Stasser, 1995). In addition, recent work on team forecasting has found that being open to the opinions of other group members is one of the key predictors of team success and of forecasting improvement over time (Mellers et al., 2014). When distributed cognition is yoked with accurate shared knowledge, appropriate motivation, and well-coordinated action, teams should be able to use their distributed knowledge to its fullest extent.
Research on improving the utilization of distributed cognition in teams has contributed to our general understanding of information processing in teams. Moreover, as this chapter illustrates, the research on distributed cognition in teams builds on our classic approaches to group productivity (Steiner, 1972). As the research on multi-team systems reinforces, the types of tasks team members face and the ways in which those tasks are assigned to members have dramatic influences on how team members interact and how they process information. It is also the nature of the interactions among team members, whether they share the information available or keep it to themselves, that impacts how teams distribute and process information. Much has been learned about distributed cognition in teams, and much more can be learned by examining factors such as type of task and the nature of member interaction in the study of team performance. More work is also needed on how to design and utilize technology to aid groups in performing cognitive tasks. Many if not most teams now depend on technology to some degree for information storage and/or dissemination. However, as work in AI increases how well technology can learn and adjust from data, teams will probably become more dependent on technology for all aspects of task performance. New research in this area will be key to helping teams develop, use, and learn from distributed cognition environments.
Abele, S., Stasser, G., & Chartier, C. (2010). Conflict and coordination in the provision of public goods: A conceptual analysis of step-level and continuous games. Personality and Social Psychology Review, 14, 385-401.
Ariely, D., Au, W. T., Bender, R. H., Budescu, D. V., Dietz, C. B., et al. (2000). The effect of averaging subjective probability estimates between and within groups. Journal of Experimental Psychology: Applied, 6, 130-147.
Armstrong, J. S. (2001). Principles of forecasting: A handbook for researchers and practitioners. Boston, MA: Kluwer Academic.
Armstrong, J. S. (2006). Should the forecasting process eliminate face-to-face meetings? Foresight: The International Journal of Applied Forecasting, 5, 3-8.
Baumann, M. R., & Bonner, B. L. (2011). Expected group longevity and expected task difficulty on learning and recall: Implications for the development of transactive memory. Group Dynamics: Theory, Research, and Practice, 15(3), 220-232.
Bechtoldt, M. N., De Dreu, C. K. W., Nijstad, B. A., & Choi, H. S. (2010). Motivated information processing, epistemic social tuning, and group creativity. Journal of Personality and Social Psychology, 99, 622-637.
Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision making: An integrative literature review and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101, 127-151.
Brauner, M., Judd, C. M., & Jacquelin, V. (2001). The communication of social stereotypes: The effects of group discussion and information distribution on stereotypic appraisals. Journal of Personality and Social Psychology, 81, 463-471.
Brodbeck, F. C., Kerschreiter, R., Mojzisch, A., Frey, D., & Schulz-Hardt, S. (2007). Group decision making under conditions of distributed knowledge: The information asymmetries model. Academy of Management Review, 32, 459-479.
Budescu, D. V., & Chen, E. (2014). Identifying expertise to extract the wisdom of crowds. Management Science, 61(2), 267-280.
Budescu, D. V., Erev, I., & Zwick, R. (Eds.) (1999). Games and human behavior. Mahwah, NJ: Lawrence Erlbaum Associates.
Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In J. Castellan Jr. (Ed.), Current issues in individual and group decision making (pp. 221-246). Hillsdale, NJ: Lawrence Erlbaum.
Cooke, N. J., Gorman, J. C., Duran, J. L., & Taylor, A. R. (2007). Team cognition in experienced command-and-control teams. Journal of Experimental Psychology: Applied, Special Issue on Capturing Expertise across Domains, 13, 146-157.
Cooke, N. J., Gorman, J. C., Myers, C. W., & Duran, J. L. (2013). Interactive team cognition. Cognitive Science, 37, 255-285. https://doi.org/10.1111/cogs.12009
Dalkey, N. C. (1969). An experimental study of group opinion. Futures, 1(5), 408-426.
Davis, J. H. (1973). Group decision and social interaction: A theory of social decision schemes. Psychological Review, 80, 97-125.
Davis, J. H. (1982). Social interaction as a combinatorial process in group decision. In H. Brandstatter, J. H. Davis, & G. Stocker-Kreichgauer (Eds.), Group decision making (pp. 27-58). London, UK: Academic Press.
Dawes, R. M., van de Kragt, A. J., & Orbell, J. M. (1988). Not me or thee but we: The importance of group identity in eliciting cooperation in dilemma situations: Experimental manipulations. Acta Psychologica, 68, 83-97.
De Dreu, C. K. W. (2007). Cooperative outcome interdependence, task reflexivity, and team effectiveness: A motivated information processing perspective. Journal of Applied Psychology, 92, 628-638.
De Dreu, C. K. W. (2010). Social conflict: The emergence and consequences of struggle and negotiation. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., Vol. 2, pp. 983-1023). New York: Wiley.
De Dreu, C. K. W., Nijstad, B. A., & van Knippenberg, D. (2008). Motivated information processing in group judgment and decision making. Personality and Social Psychology Review, 12, 22-49.
DeChurch, L. A., & Mathieu, J. E. (2009). Thinking in terms of multiteam systems. In Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (pp. 267-292). New York: Psychology Press.
Fiore, S. M. (2008). Interdisciplinarity as teamwork: How the science of teams can inform team science. Small Group Research, 39(3), 251-277.
Fiore, S. M., Smith-Jentsch, K. A., Salas, E., Warner, N., & Letsky, M. (2010). Towards an understanding of macrocognition in teams: Developing and defining complex collaborative processes and products. Theoretical Issues in Ergonomics Science, 11(4), 250-271. https://doi.org/10.1080/14639221003729128
Forsythe, R., Nelson, F., Neumann, G. R., & Wright, J. (1992). Anatomy of an experimental political stock market. American Economic Review, 82, 1142-1161.
Galton, F. (1907). Vox Populi—The wisdom of the crowd. Nature, 75(1949), 450-451.
Gibbons, A. M., Sniezek, J. A., & Dalal, R. S. (2003, November). Antecedents and consequences of unsolicited versus explicitly solicited advice. In D. Budescu (Chair), Symposium in honor of Janet Sniezek. Symposium presented at the annual meeting of the Society for Judgment and Decision Making, Vancouver, BC.
Goldsmith, D. J., & Fitch, K. (1997). The normative context of advice as social support. Human Communication Research, 23, 454-476.
Harvey, N., & Fischer, I. (1997). Taking advice: Accepting help, improving judgment and sharing responsibility. Organizational Behavior and Human Decision Processes, 70, 117-133.
Hastie, R., & Kameda, T. (2005). The robust beauty of majority rules in group decisions. Psychological Review, 112, 494-508.
Hinsz, V. B. (1995). Mental models of groups as social systems: Considerations of specification and assessment. Small Group Research, 26, 200-233.
Hinsz, V. B. (2001). A groups-as-information-processors perspective for technological support of intellectual teamwork. In M. D. McNeese, E. Salas, & M. R. Endsley (Eds.), New trends in collaborative activities: Understanding system dynamics in complex settings (pp. 22-45). Santa Monica, CA: Human Factors & Ergonomics Society.
Hinsz, V. B., & Betts, K. R. (2011). Conflict in multiple-team situations. In S. J. Zaccaro, M. A. Marks, & L. DeChurch (Eds.), Multi-team systems (pp. 289-321). New York: Taylor & Francis.
Hinsz, V. B., & Ladbury, J. L. (2012). Combinations of contributions for sharing cognitions in teams. In E. Salas, S. M. Fiore, & M. P. Letsky (Eds.), Theories of team cognition: Cross-disciplinary perspectives (pp. 245-270). New York: Routledge.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43-64.
Hinsz, V. B., Wallace, D. M., & Ladbury, J. L. (2009). Team performance in dynamic task environments. In G. P. Hodgkinson & J. K. Ford (Eds.), International review of industrial and organizational psychology (Vol. 24, pp. 183-216). New York: Wiley.
Hogg, M. A., & Abrams, D. (1988). Social identification: A social psychology of intergroup relations and group processes. London, UK: Routledge.
Janis, I. L. (1982). Groupthink: Psychological studies of policy decisions and fiascoes (2nd ed.). New York: Houghton Mifflin.
Johnson, T. R., Budescu, D. V., & Wallsten, T. S. (2001). Averaging probability judgments: Monte Carlo analyses of asymptotic diagnostic value. Journal of Behavioral Decision Making, 14, 123-140.
Kahn, J. P., & Rapoport, A. (1984). Theories of coalition formation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
Kameda, T., & Tindale, R. S. (2006). Groups as adaptive devices: Human docility and group aggregation mechanisms in evolutionary context. In M. Schaller, J. A. Simpson, & D. T. Kenrick (Eds.), Evolution and social psychology (pp. 317-342). New York: Psychology Press.
Kameda, T., Tindale, R. S., & Davis, J. H. (2003). Cognitions, preferences, and social sharedness: Past, present and future directions in group decision making. In S. L. Schneider & J. Shanteau (Eds.), Emerging perspectives on judgment and decision research (pp. 458-485). New York: Cambridge University Press.
Kerr, N. L., & Hertel, G. (2011). The Kohler group motivation gain: How to motivate the “weak links” in a group. Social and Personality Psychology Compass, 5(1), 43-55.
Kerr, N. L., & Tindale, R. S. (2004). Small group decision making and performance. Annual Review of Psychology, 55, 623-656.
Kerr, N. L., & Tindale, R. S. (2011). Group-based forecasting: A social psychological analysis. International Journal of Forecasting, 27, 14-40. https://doi.org/10.1016/j.ijforecast.2010.02.001
King, H. B., Battles, J., Baker, D. P., et al. (2008). TeamSTEPPS™: Team strategies and tools to enhance performance and patient safety. In K. Henriksen, J. B. Battles, M. A. Keyes, et al. (Eds.), Advances in patient safety: New directions and alternative approaches (Vol. 3: Performance and tools). Rockville, MD: Agency for Healthcare Research and Quality (US). Retrieved from www.ncbi.nlm.nih.gov/books/NBK43686/
Klimoski, R., & Mohammed, S. (1994). Team Mental Model: Construct or metaphor? Journal of Management, 20(2), 403-437. https://doi.org/10.1177/014920639402000206
Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: "Seizing" and "freezing". Psychological Review, 103, 263-283.
Lachman, R., Lachman, J. L., & Butterfield, E. C. (1979). Cognitive psychology and information processing: An introduction. New York: Psychology Press.
Larrick, R. P., & Soll, J. B. (2006). Intuitions about combining opinions: Misappreciation of the averaging principle. Management Science, 52, 111-127.
Larrick, R. P., Mannes, A. E., & Soll, J. B. (2012). The social psychology of the wisdom of crowds. In J. I. Krueger (Ed.), Social judgment and decision making (pp. 227-242). New York: Psychology Press.
Larson, J. R., Jr., & Christensen, C. (1993). Groups as problem-solving units: Towards a new meaning of social cognition. British Journal of Social Psychology, 32, 5-30.
Laughlin, P. R. (1980). Social combination processes in cooperative problem-solving groups on verbal intellective tasks. In M. Fishbein (Ed.), Progress in social psychology (pp. 127-155). Hillsdale, NJ: Erlbaum.
Laughlin, P. R., & Ellis, A. L. (1986). Demonstrability and social combination processes on mathematical intellective tasks. Journal of Experimental Social Psychology, 22, 177-189.
Laughlin, P. R., Bonner, B. L., & Miner, A. G. (2002). Groups perform better than the best individuals on letters-to-numbers problems. Organizational Behavior and Human Decision Processes, 88, 605-620.
Levine, J. M., & Kerr, N. L. (2007). Inclusion and exclusion: Implications for group processes. In A. W. Kruglanski & E. T. Higgins (Eds.), Social psychology: Handbook of basic principles (2nd ed., pp. 759-784). New York: Guilford Press.
Liang, D. W., Moreland, R. L., & Argote, L. (1995). Group versus individual training and group performance: The mediating role of transactive memory. Personality and Social Psychology Bulletin, 21, 384-393.
Littlepage, G. E., Robison, W., & Reddington, K. (1997). Effects of task experience and group experience on performance, member ability, and recognition of expertise. Organizational Behavior and Human Decision Processes, 69, 133-147.
Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, USA, 108, 9020-9025.
Lorge, I., & Solomon, H. (1955). Two models of group behavior in the solution of eureka-type problems. Psychometrika, 20, 139-148.
Lu, L., Yuan, Y., & McLeod, P. L. (2012). Twenty-five years of hidden profile studies: A meta-analysis. Personality and Social Psychology Review, 16, 54-75.
Maciejovsky, B., & Budescu, D. V. (2007). Collective induction without cooperation? Learning and knowledge transfer in cooperative groups and competitive auctions. Journal of Personality and Social Psychology, 92, 854-870.
Mannes, A. E., Soll, J. B., & Larrick, R. P. (2014). The wisdom of select crowds. Journal of Personality and Social Psychology, 107(2), 276.
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26(3), 356-376.
McCallum, D. M., Harring, K., Gilmore, R., Drenan, S., Chase, J., Insko, C. A., et al. (1985). Competition between groups and between individuals. Journal of Experimental Social Psychology, 21, 310-320.
McGrath, J. E., & Tschan, F. (2004). Temporal matters in social psychology. Washington, DC: American Psychological Association.
McNeese, M. D. (1986). Humane intelligence: A human factors perspective for developing intelligent cockpits. IEEE Aerospace and Electronic Systems, 1(9), 6-12.
Mellers, B., Ungar, L., Baron, J., Ramos, J., Burcay, B., Fincher, K., et al. (2014). Psychological strategies for winning a geopolitical forecasting tournament. Psychological Science, 25, 1106-1115.
Mesmer-Magnus, J. R., & DeChurch, L. A. (2009). Information sharing and team performance: A meta-analysis. Journal of Applied Psychology, 94(2), 535.
Messick, D. M. (2006). Ethics in groups: The road to hell. In E. Mannix, M. Neale, & A. Tenbrunsel (Eds.), Research on managing groups and teams: Ethics in groups (Vol. 8). Oxford, UK: Elsevier Science Press.
Mohammed, S., Ferzandi, L., & Hamilton, K. (2010). Metaphor no more: A 15-year review of the team mental model construct. Journal of Management, 36(4), 876-910. https://doi.org/10.1177/0149206309356804
Mohammed, S., Hamilton, K., & Lim, A. (2009). The incorporation of time in team research: Past, current, and future. In E. Salas, G. F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations: Cross-disciplinary perspective and approaches (pp. 321-348). New York: Routledge/Taylor & Francis Group.
Mohammed, S., Hamilton, K., Tesler, R., Mancuso, V., & McNeese, M. (2015). Time for temporal team mental models: Expanding beyond “what” and “how” to incorporate “when”. European Journal of Work and Organizational Psychology. https://doi.org/10.1080/1359432X.2015.1024664
Mojzisch, A., & Schulz-Hardt, S. (2010). Knowing others’ preferences degrades the quality of group decisions. Journal of Personality and Social Psychology, 98, 794-808. https://doi.org/10.1037/a0017627
Moreland, R. L. (1999). Transactive memory: Learning who knows what in work groups and organizations. In L. L. Thompson, J. M. Levine, & D. M. Messick (Eds.), Shared cognition in organizations: The management of knowledge (pp. 3-31). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Moreland, R. L., Argote, L., & Krishnan, R. (1998). Training people to work in groups. In R. S. Tindale, L. Heath, J. Edwards, E. J. Posavac, F. B. Bryant, Y. Suarez-Balcazar, . . . J. Myers (Eds.), Theory and research on small groups (pp. 37-60). New York: Plenum Press.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice Hall.
Peltokorpi, V. (2008). Transactive memory systems. Review of General Psychology, 12, 378-394.
Postmes, T., Spears, R., & Cihangir, S. (2001). Quality of decision making and group norms. Journal of Personality and Social Psychology, 80(6), 918.
Reimer, T., Reimer, A., & Czienskowski, U. (2010). Decision-making groups attenuate the discussion bias in favor of shared information: A meta-analysis. Communication Monographs, 77(1), 121-142.
Rico, R., Hinsz, V. B., Burke, S., & Salas, E. (2017). A multilevel model of multiteam performance. Organizational Psychology Review, 7, 197-226.
Rico, R., Hinsz, V. B., Davison, R. B., & Salas, E. (2018). Structural and temporal influences upon coordination and performance in multiteam systems. Human Resource Management Review, 28, 332-346.
Rico, R., Sanchez-Manzanares, M., Gil, F., & Gibson, C. (2008). Team implicit coordination processes: A team knowledge-based approach. Academy of Management Review, 33, 163-184.
Rohrbaugh, J. (1979). Improving the quality of group judgment: Social judgment analysis and the Delphi technique. Organizational Behavior and Human Performance, 24, 73-92.
Rothschild, D. (2009). Forecasting elections: Comparing prediction markets, polls, and their biases. Public Opinion Quarterly, 73, 895-916.
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100, 349-363.
Rowe, G., & Wright, G. (1999). The Delphi technique as a forecasting tool: Issues and analysis. International Journal of Forecasting, 15, 353-375.
Rowe, G., & Wright, G. (2001). Expert opinions in forecasting: Role of the Delphi technique. In J. S. Armstrong (Ed.), Principles of forecasting: A handbook of researchers and practitioners (pp. 125-144). Boston, MA: Kluwer Academic Publishers.
Salas, E., Goodwin, G. F., & Burke, C. S. (Eds.). (2009). Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches. New York: Routledge/Taylor & Francis Group.
Schrah, G. E., Dalal, R. S., & Sniezek, J. A. (2006). No decision-maker is an island: Integrating expert advice with information acquisition. Journal of Behavioral Decision Making, 19(1), 43-60.
Sherif, M. (1936). The psychology of social norms. New York: Harper and Brothers.
Sniezek, J. A., & Buckley, T. (1995). Cueing and cognitive conflict in Judge-Advisor decision making. Organizational Behavior and Human Decision Processes, 62, 159-174.
Sniezek, J. A., & Van Swol, L. M. (2001). Trust, confidence, and expertise in a judge-advisor system. Organizational Behavior and Human Decision Processes, 84(2), 288-307.
Sorkin, R. D., & Dai, H. (1994). Signal detection analysis of the IDEAL group. Organizational Behavior and Human Decision Processes, 60, 1-13.
Stasser, G., & Augustinova, M. (2008). Social engineering in distributed decision making teams: Some implications for leadership at a distance. In S. Weisband (Ed.), Leadership at a distance (pp. 151-167). New York: Lawrence Erlbaum Associates.
Stasser, G., & Stewart, D. (1992). Discovery of hidden profiles by decision-making groups: Solving a problem versus making a judgment. Journal of Personality and Social Psychology, 63(3), 426.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48, 1467-1478.
Stasser, G., & Titus, W. (1987). Effects of information load and percentage of shared information on the dissemination of unshared information during group discussion. Journal of Personality and Social Psychology, 53, 81-93.
Stawiski, S., Tindale, R. S., & Dykema-Engblade, A. (2009). The effects of ethical climate on group and individual level deception in negotiation. International Journal of Conflict Management, 20, 287-308.
Steiner, I. D. (1972). Group process and productivity. New York: Academic Press.
Stewart, D. D., & Stasser, G. (1995). Expert role assignment and information sampling during collective recall and decision making. Journal of Personality and Social Psychology, 69, 619-628.
Stewart, D. D., & Stasser, G. (1998). The sampling of critical, unshared information in decision making groups: The role of an informed minority. European Journal of Social Psychology, 28, 95-113.
Surowiecki, J. (2004). The wisdom of crowds. New York: Doubleday.
Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The art and science of prediction. New York: Crown Publishers.
Thompson, L., & Fine, G. A. (1999). Socially shared cognition, affect, and behavior: A review and integration. Personality and Social Psychology Review, 3, 278-302.
Tindale, R. S., & Sheffey, S. (2002). Shared information, cognitive load, and group memory. Group Processes and Intergroup Relations, 5, 5-18.
Triplett, N. (1897). The dynamogenic factors in pacemaking and competition. American Journal of Psychology, 9, 507-533.
Tschan, F., Semmer, N. K., Gurtner, A., Bizzari, L., Spychiger, M., Breuer, M., & Marsch, S. U. (2009). Explicit reasoning, confirmation bias, and illusory transactive memory: A simulation study of group medical decision making. Small Group Research, 40, 271-300.
Van de Ven, A. H., & Delbecq, A. L. (1971). Nominal vs. interacting group processes for committee decision-making effectiveness. Academy of Management Journal, 14, 203-212.
Van Swol, L. M., & Sniezek, J. A. (2005). Factors affecting the acceptance of expert advice. British Journal of Social Psychology, 44(3), 443-461.
Vroom, V. H., & Yetton, P. (1973). Leadership and decision-making. Pittsburgh, PA: University of Pittsburgh Press.
Wallace, D. M., & Hinsz, V. B. (2010). Teams as technology: Applying theory and research to model macrocognition processes in teams. Theoretical Issues in Ergonomics Science, 11, 359-374.
Weber, B., & Hertel, G. (2007). Motivation gains of inferior group members: A meta-analytical review. Journal of Personality and Social Psychology, 93(6), 973-993.
Wegner, D. M. (1995). A computer network model of human transactive memory. Social Cognition, 13, 319-339.
Wegner, D. M. (1986). Transactive memory: A contemporary analysis of the group mind. In B. Mullen & G. R. Goethals (Eds.), Theories of group behavior (pp. 185-208). New York: Springer-Verlag.
Wiener, E. L., Kanki, B., & Helmreich, R. L. (1993). Cockpit resource management. San Diego, CA: Academic Press.
Wildschut, T., Pinter, B., Vevea, J. L., Insko, C. A., & Schopler, J. (2003). Beyond the group mind: A quantitative review of the interindividual-intergroup discontinuity effect. Psychological Bulletin, 129, 698-722.
Wolfers, J., & Zitzewitz, E. (2004). Prediction markets. Journal of Economic Perspectives, 18(2), 107-126.
Yaniv, I. (2004). Receiving other people’s advice: Influence and benefits. Organizational Behavior and Human Decision Processes, 93, 1-13.
Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making: Egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes, 83, 260-281.
Zaccaro, S. J., Marks, M. A., & DeChurch, L. A. (Eds.). (2011). Multiteam systems: An organization form for dynamic and complex environments. New York: Routledge.
Zajonc, R. B., & Smoke, W. (1959). Redundancy in task assignments and group performance. Psychometrika, 24, 361-370.
Bees Do It