Bees Do It: Distributed Cognition and Psychophysical Laws

S. Orestis Palermos



Extended Mind, Extended Cognition, and Functionalism

Standard Argument for Extended Cognition

Standard Argument for Extended Mind

Parity Principle

Which Functionalism?

Distributed Cognition and Functionalism

Standard Argument for Distributed Cognition

Social Parity Principle

Argument for Distributed Cognition (ADC)

Bees Do It

Discussion: Psycho-Functionalism and Active Externalism




Within philosophy of mind and cognitive science, the hypothesis of distributed cognition puts forward the provocative assumption that appropriately interacting individuals can give rise to collective entities with cognitive properties, over and above the sum of their individual members’ cognitive properties. In such cases, we may speak of a distributed cognitive system or, more loosely, of a group mind (for various formulations of this idea, see Barnier, Sutton, Harris, & Wilson, 2008; Heylighen, Heath, & Van Overwalle, 2004; Hutchins, 1996; Sutton, Harris, Keil, & Barnier, 2010; Sutton, 2008; Theiner, Allen, & Goldstone, 2010; Theiner, 2013a, 2013b; Theiner & O'Connor, 2010; Tollefsen & Dale, 2012; Tollefsen, 2006; Wilson, 2005).

The hypothesis of distributed cognition is a version of a broader hypothesis within philosophy of mind, known as active externalism (Clark & Chalmers, 1998; Menary, 2007; Rowlands, 1999; Wilson, 2000, 2004). Active externalism is the assumption that the material realizers of mind and cognition are not necessarily restricted to the agent’s organismic machinery. Instead, mental states and cognitive processes may cross organismic boundaries in a number of ways. One assumption is that cognitive processes (such as memory, reasoning, and perception) may extend to the tools that the organism interacts with in order to perform a cognitive task; this is known as the hypothesis of extended cognition. Another assumption is that mental states (such as beliefs and desires) may be partly constituted by aspects of the agent’s environment; this is the extended mind thesis. Finally, there is also the assumption that mind and cognition may be distributed between several individuals (possibly along with their artifacts); this is the hypothesis of distributed cognition (or group minds). Though in this chapter I am mainly interested in the latter hypothesis, the discussion will often involve the other two versions of active externalism too.

In the first instance, active externalism (in all of its forms) is a metaphysical thesis about the nature of mind and cognition. Besides philosophy of mind, however, active externalism is also important from the point of view of philosophy of science and scientific practice itself. Lakatos, the famous philosopher of science, noted that, in the hard core of every scientific research program, there lies a set of fundamental metaphysical assumptions, which provide the program with its distinctive identity (Lakatos, 1970). Seen from this perspective, active externalism has the potential to set the tone for a number of scientific research programs within cognitive science. If cognition can indeed extend beyond the agent’s organismic boundaries or even be distributed between several agents at the same time, then cognitive scientists should allow this possibility to guide both the development of theory as well as the design of scientific experiments.

Indeed, a growing body of research within cognitive science appears to be implicitly motivated by, or at least open to, active externalism in the form of the hypothesis of distributed cognition. For example, an increasing volume of studies focuses not only on modeling and understanding swarm intelligence and the collective behavior of animals,1 but also on human collective behaviors, such as sports team performance and interpersonal coordination.2 Though such studies do not always refer to collective behavior as cognitive behavior, some are open to employing such terminology. As Cooke et al. (2013, p. 256) note, for example:

The term “cognition” used in the team context refers to cognitive processes or activities that occur at a team level. Like the cognitive processes of individuals, the cognitive processes of teams include learning, planning, reasoning, decision making, problem solving, remembering, designing, and assessing situations [. . .]. Teams are cognitive (dynamical) systems in which cognition emerges through interactions.

Nevertheless, despite a few cognitive scientists’ tacit uptake of the hypothesis of distributed cognition, several philosophers and cognitive scientists remain skeptical of it. In an attempt to alleviate their concerns, I explore, in what follows, a worrying objection that may be raised against the view. The objection I have in mind has already been raised against the hypothesis of extended cognition and the extended mind thesis, and it centers on their frequent reliance on common-sense functionalism. If common-sense functionalism were to be replaced by the more scientifically informed psycho-functionalism, then those two versions of active externalism would appear untenable, especially from the point of view of scientific practice. As I note, the same objection can be used to target the hypothesis of distributed cognition with equal force. As it happens, however, existing research within cognitive science suggests that the hypothesis of distributed cognition is immune to it. If that’s correct, then cognitive scientists (and philosophers alike) should be less disinclined to place active externalism—at least in the form of the hypothesis of distributed cognition—at the core of their research projects.



Extended Mind, Extended Cognition, and Functionalism

To appreciate what the objection is, it will be useful to see how it has already been raised against the hypothesis of extended cognition and the extended mind thesis. A standard argumentative line for the hypothesis of extended cognition and the extended mind thesis involves two steps and a conclusion.3

Standard Argument for Extended Cognition

  • (1) EXTENDED SYSTEM: Identify a case where organismically internal and external components integrate with each other in an extended system to realize a certain process.
  • (2) COGNITIVE PROCESS: Demonstrate that the process of the target extended system can be readily accepted as a cognitive process.

CONCLUSION: There exists an extended system (consisting of the integrated internal and external components) that realizes a cognitive process—i.e., there exists an extended cognitive system.

This standard argumentative structure can also be used to produce an argument for the extended mind. The only difference is that, instead of focusing on cognitive processes, the argument for the extended mind focuses on mental states.

Standard Argument for Extended Mind

  • (1’) EXTENDED SYSTEM: Identify a case where organismically internal and external components integrate with each other in an extended system to realize a certain state.
  • (2’) MENTAL STATE: Demonstrate that the state of the target extended system can be readily accepted as a mental state.

CONCLUSION’: There exists an extended system (consisting of the integrated internal and external components) that realizes a mental state—i.e., there exists an extended mind.

Part of the above reasoning is implicit in what, within the literature on active externalism, has come to be known as the Parity Principle.

Parity Principle

If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process.

(Clark & Chalmers, 1998, p. 8)

Proponents of active externalism offer the Parity Principle as an intuition pump (Clark, 2007; Menary, 2007, 2010). Its main purpose is to remind us that considerations about the constituents of mind and cognition should not be guided by spatial location alone (Clark, 2007). The principle, of course, is not theory free. Essentially, it restates the basic functionalist premise: So long as a process/state realizes a function that we would accept as a specific kind of cognitive/mental function, then our judgments about its cognitive/mental status should not be affected by its material realizers, or—in the case of cognitive and mental extension—where these realizers are located. In other words, the Parity Principle is not a full-fledged argument in itself. It only draws attention to the fact that, in the context of active externalism, and so long as functionalism is true, the location of cognition’s material realizers should be of no concern; if functionalism is accepted, then pointing merely to the boundaries of skin and skull in order to deny a process/state cognitive status is question-begging.

This is a good starting point—it prevents one from rejecting the hypothesis of extended cognition and the extended mind thesis from the outset. Nevertheless, to fully accept these two hypotheses, rather than remaining open to them, one should motivate the truth of the premises in the standard arguments just discussed. That is, one should demonstrate that, indeed, there is an overall integrated system that consists of both internal and external components, and that the process, or the state, of the extended system is one that we would readily accept as a cognitive process or as a mental state.

With respect to (1) and (1’), Clark (Clark & Chalmers, 1998; Clark, 2008) has attempted to specify when two systems are integrated by invoking the mathematical concept of a coupled system from Dynamical Systems Theory. Following Clark, in Palermos (2014), I have provided an extended analysis of when components form parts of an integrated coupled system by focusing on the underlying mathematical details (Chemero, 2011). Rehearsing the details of this analysis would take us too far afield from the present discussion (a summary is provided later on, in the section entitled “Distributed Cognition and Functionalism,” where I discuss the integration of group processes), but the main idea is the following: According to Dynamical Systems Theory, two (or more) systems are integrated, or “coupled,” if and only if they bring about a process or a state by mutually interacting with each other on the basis of ongoing feedback loops.4 To appreciate what this means, consider what taking the existence of ongoing mutual interactions as the criterion of integration implies: While a painter and his ladder would not qualify as an integrated system (because there are no ongoing mutual interactions between the two), an agent equipped with a tactile visual substitution system (Tyler, Danilov, & Bach-y-Rita, 2003) or a magnetic perception system (Nagel, Carl, Kringe, Märtin, & König, 2005) would.
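To make the coupling criterion concrete, here is a minimal numerical sketch. The equations and values are my own toy choices for illustration, not part of Palermos’s or Chemero’s analysis: when two variables continuously adjust to each other, the pair settles on a value fixed by the system as a whole, whereas under one-way influence the driven component merely converges on what the driving component already was.

```python
def run(coupled=True, steps=200, dt=0.05):
    """Euler-integrate two variables x and y.

    coupled=True : x and y each adjust toward the other, i.e., mutual
                   interaction via ongoing feedback loops.
    coupled=False: x tracks y, but y ignores x, i.e., one-way influence,
                   as with the painter and his ladder.
    (Toy equations for illustration only.)
    """
    x, y = 0.1, 0.9
    for _ in range(steps):
        x_new = x + (y - x) * dt
        y_new = y + (x - y) * dt if coupled else y
        x, y = x_new, y_new
    return x, y

# Coupled: both variables settle at 0.5, the conserved average of the
# pair. This equilibrium is a property of the two-variable system as a
# whole; neither component fixes it on its own.
print(run(coupled=True))

# One-way: x simply converges to y's pre-set value of 0.9; no new
# system-level regularity emerges.
print(run(coupled=False))
```

The coupled pair’s equilibrium is a regularity of the system as a whole, which is the kind of consideration that licenses treating mutually interacting components as one integrated system.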

Let’s now move to the defense of claims (2) and (2’): How do we accept a process or a state—be it wholly internal or extended—as a cognitive process or a mental state? While the Parity Principle invokes functionalism, it does not specify which kind of functionalism one should focus on. Clark and Chalmers (1998), however, seem to rely on common-sense functionalism, according to which mental states and processes should be categorized and understood in terms of our everyday common-sense folk psychology. This becomes apparent in the discussion of their famous case of Otto and his notebook, which they present as a case for the extended mind thesis. Otto is an Alzheimer’s patient who compensates for his failing memory by always carrying a well-organized notebook. In order to claim that Otto believes a piece of information inscribed in his notebook—say that MOMA is on 53rd Street—even before looking it up, Clark and Chalmers (1998) compare him to a normal subject, Inga. Upon hearing about an interesting exhibition, Inga thinks for a moment, recalls that the museum is on 53rd Street, and starts walking to the museum. Clark and Chalmers argue that if one wants to say that Inga had her belief before consulting her memory, one must also accept that Otto and his notebook believed the museum was on 53rd Street even before Otto looked up the address in his notebook. This is because the two cases are functionally on a par, given our everyday, common-sense understanding of how memory works:

the notebook plays for Otto the same role that memory plays for Inga; the information in the notebook functions just like the information [stored in Inga’s biological memory] constituting an ordinary non-occurrent [i.e., dispositional] belief; it just happens that this information lies beyond the skin.

(Clark & Chalmers, 1998, p. 13)

To strengthen their point, Clark and Chalmers spell out the relevant common-sense intuitions by noting that, judging from the case of biological memory, the availability, portability, and reliability of the information resource are functionally crucial in determining whether a piece of information can qualify as one’s dispositional belief. Specifically, they suggest that, in order for any information-storage device to be included in an individual’s mind, the following criteria must be met:

  • (1) The resource [must] be reliably available and typically invoked.
  • (2) Any information thus retrieved [must] be more-or-less automatically endorsed.
  • (3) [Any] information contained in the resource should be easily accessible as and when required.
  • (Clark, 2010, p. 46)5

It is apparent, then, that according to Clark and Chalmers, in order to determine whether an extended process (or state) is a cognitive process (or mental state), one must consult one’s common-sense intuitions. Taking this route to argue for the hypothesis of extended cognition and the extended mind thesis is in fact unsurprising, since, as some authors have noted, these two hypotheses are actually a consequence of common-sense functionalism. Weiskopf (2008, p. 267) submits, for example, that “functionalism has all along been committed to the possibility of extrabodily states playing the role of beliefs and desires.”

This heavy reliance of the standard arguments on common-sense functionalism creates an easy target for opponents of the hypothesis of extended cognition and the extended mind thesis.6 One could simply suggest that philosophy of mind and cognitive science should steer away from common-sense functionalism and embrace instead a different kind of functionalism. To see how this strategy works, consider Rupert (2004), who argues that, if we consider fine-grained functional details, Otto’s way of recalling information differs from biological memory to such an extent that the two mechanisms cannot both be treated as mental processes. Specifically, Rupert notes, retrieving information from the notebook does not seem likely to exhibit the “negative transfer” and the “generation” effects, which are typically manifested in the process of recalling information from biological memory.7 Similarly, Adams and Aizawa (2010, p. 63) note that the extended system of Otto and his notebook is unlikely to exhibit the “primacy” and “recency” effects.8

This is indeed a promising strategy for rejecting the claim that Inga, on the one hand, and Otto and his notebook, on the other, are functionally on a par. Essentially, it amounts to giving up common-sense functionalism for what is known as psycho-functionalism. Psycho-functionalism claims that “mental states and processes are just those entities, with just those properties, postulated by the best scientific explanation of human behaviour” (Levin, 2018). Contrary to common-sense functionalism, “the information used in the functional characterization of mental states and processes needn’t be restricted to what is considered common knowledge or common sense, but can include information available only by careful laboratory observation and experimentation” (Levin, 2018). For instance, Rupert’s and Adams and Aizawa’s resistance to treating Inga and Otto-and-his-notebook as functionally on a par invokes the “negative transfer,” “generation,” “recency,” and “primacy” effects, which are not part of our everyday understanding of how memory works and can only be revealed through careful scientific research.

Taking this line of criticism (as well as drawing on other arguments against the hypothesis of extended cognition and the extended mind thesis), several authors have expressed skepticism towards active externalism, of varying strengths. One may argue, for example, that active externalism is, in principle, incorrect. According to this view, which Rupert (2004, 2009) seems to embrace, cognition is necessarily brain- or, at most, organism-bound. A weaker form of skepticism holds that, as a contingent matter of fact, mind and cognition have, so far, been confined within the skull; future technological advancements, however, may allow them to extend beyond the brain. This is known as “contingent intracranialism” and it is the position that Adams and Aizawa hold: “Insofar as we are intracranialists, we are what might be called ‘contingent intracranialists,’ rather than ‘necessary intracranialists’” (Adams & Aizawa, 2001, p. 57).


Which Functionalism?

So far, we have seen that invoking fine-grained details in identifying and categorizing mental states and processes would, necessarily or contingently, speak against the hypothesis of extended cognition and the extended mind thesis. However, this psycho-functionalist approach invites a different problem. By focusing on fine-grained details specific to human psychology, psycho-functionalism is open to the charge of being overly “chauvinistic” (Block, 1980). “Creatures whose internal states share the rough, but not fine-grained, causal patterns of ours wouldn’t count as sharing our mental states” (Levin, 2018). Unsurprisingly, it is precisely this problem for psycho-functionalism that Clark puts his finger on in his reply to Rupert’s objection against the case of Otto: “just because some alien neural system failed to match our own in various ways (perhaps they fail to exhibit the ‘generation effect’ during recall [. . .]) we should not thereby be forced to count the action of such systems as noncognitive” (Clark, 2008, pp. 114–115).

This appears to be a significant worry against psycho-functionalism, though psycho-functionalists may not be impressed by it. One reason for this, they could argue, is that humans are the only organisms we can be certain have a rich cognitive and mental life. Since it is the level and kind of human cognition we are primarily interested in, it makes sense to set it as our exemplar in studying and theorizing about cognition in general. Yet this kind of response in favor of psycho-functionalism and against common-sense functionalism may beg the question. For how do we decide whether some behavior is cognitive behavior, in the case of human beings? More crucially, how do cognitive scientists decide what behaviors to focus on, set them apart from behaviors that may not be classed as cognitive (such as merely biological or physiological behaviors), and finally study them in the detail that could subsequently inform psycho-functionalist judgments on what may count as cognitive? Psycho-functionalism seems to be on the receiving end of this process, not its starting point.

Perhaps, there is a “mark of the cognitive” (Adams & Aizawa, 2001, 2008) that permeates all cognitive phenomena and which cognitive scientists could use in order to identify and isolate them. But in the actual practice of cognitive science, no such mark of the cognitive exists, and philosophical reflection has made apparent that devising one is particularly elusive, even in theory.9 So in the absence of a mark of the cognitive, how do cognitive scientists choose what to study?

The answer is that cognitive scientists seem to employ, perhaps only implicitly, common-sense functionalism. They choose to study behaviors that on the basis of common-sense intuitions we would readily classify as cognitive. Of course, one may worry that focusing on behavior to identify cognition sounds too close to the doctrine of analytical behaviorism (Huebner, 2013; Ludwig, 2015). But it should be obvious that this would be an uncharitable characterization of what cognitive scientists do. While cognitive scientists do take as their starting point that some process may be classed as cognitive, because it exhibits behavior that we would normally classify as such, contrary to analytical behaviorism, they do not assume that all there is to cognition is behavior alone.

In fact, this subtle yet important distinction between analytical behaviorism and the practice of employing intelligent behavior as evidence for the presence of cognition is not a novel point. As Graham (2015) notes, Sellars pointed this out a long time ago, by invoking the notion of “attitudinal behaviorism”:

Wilfrid Sellars (1912–89), the distinguished philosopher, noted that a person may qualify as a behaviorist, loosely or attitudinally speaking, if they insist on confirming “hypotheses about psychological events in terms of behavioural criteria” (1963, p. 22). A behaviorist, so understood, is someone who demands behavioral evidence for any psychological hypothesis. [. . .] Arguably, there is nothing truly exciting about behaviorism loosely understood. It enthrones behavioral evidence, an arguably inescapable premise in not just psychological science but in ordinary discourse about mind and behavior. Just how behavioral evidence should be “enthroned” (especially in science) may be debated. But enthronement itself is not in question. Not so [analytical] behaviorism the doctrine.

Keeping in mind the distinction between “attitudinal” and analytical behaviorism is important, because, in its absence, we would be led to believe that cognitive science embraces, at its starting point at least, analytical behaviorism. In the absence of a mark of the cognitive, cognitive scientists’ only way for distinguishing between cognitive and non-cognitive phenomena is by employing their common-sense intuitions about what behavior may count as such—but unlike analytical behaviorism they do not further assume that all there is to cognition is mere behavior. This is what Sellars called “attitudinal” behaviorism, or what we may now refer to as common-sense functionalism.10 Wilson (2001) summarizes this general methodological attitude within philosophy of mind and cognitive science in the following way:

In order for something to have a mind, that thing must instantiate at least some psychological processes or abilities. Rather than attempting to offer a definition or analysis of what a psychological or mental process or ability is, let the following incomplete list suffice to fix our ideas: perception, memory, imagination (classical Faculties); attention, motivation, consciousness, decision-making, problem-solving (processes or abilities that are the focus of much contemporary work in the cognitive sciences); and believing, desiring, intending, trying, willing, fearing, and hoping (common, folk psychological states).

Common-sense functionalism therefore appears to play an important role within cognitive science, such that abandoning the view entirely and opting instead for psycho-functionalism alone is not a clear-cut decision. Consequently, psycho-functionalist arguments against the hypothesis of extended cognition and the extended mind thesis are, to say the least, inconclusive. Nevertheless, invoking the principle of charity, in what follows I will assume that psycho-functionalism is the only correct approach to philosophy of mind and cognitive science. The reason for this is that I wish to consider whether the view, if it were retained to the exclusion of common-sense functionalism, could raise equally important problems for active externalism in the form of the hypothesis of distributed cognition.


Distributed Cognition and Functionalism

Arguments for the hypothesis of distributed cognition usually follow a similar structure to standard arguments for the hypothesis of extended cognition and the extended mind thesis.11

Standard Argument for Distributed Cognition

  • (1*) DISTRIBUTED COGNITION: Identify a case where several individuals integrate with each other in a distributed system to realize a certain process.
  • (2*) COGNITIVE PROCESS: Demonstrate that the process of the target distributed system can be readily accepted as a cognitive process.

CONCLUSION*: There exists a distributed system (consisting of all participating individuals) that realizes a cognitive process—i.e., there exists a distributed cognitive system.

The requirement that these two premises be satisfied is evident, for example, in Theiner’s (Theiner et al., 2010; Theiner, 2013a) Social Parity Principle, which parallels Clark and Chalmers’ original Parity Principle.

Social Parity Principle

If, in confronting some task, a group collectively functions in a process which, were it done in the head, would be accepted as a cognitive process, then that group is performing that cognitive process.

Theiner et al. (2010, p. 384) note that the Social Parity Principle, like the original Parity Principle, is not a full-fledged argument in support of distributed cognition. It acts as an intuition pump for judging whether a collective process may qualify as a collective cognitive process, on the basis of functionalism. In this way, the Social Parity Principle can motivate premise (2*) of the standard argument for distributed cognition. That is, it can help judge whether a given collective process can qualify as a cognitive process. Nevertheless, it cannot establish the truth of premise (1*), as it provides no reason for thinking that the target cognitive process is a collective cognitive process. Instead of arguing for this claim, the antecedent of the Social Parity Principle already presupposes it.

In an attempt to ensure that both premise (1*) and premise (2*) are satisfied, I have previously offered the following argument, by drawing on the details of Dynamical Systems Theory (Palermos, 2016).

Argument for Distributed Cognition (ADC)

P1: A process D is brought about on the basis of mutual interactions between the members of a group.

P2: According to the systemic properties and ongoing feedback loops arguments, when component parts mutually interact with each other in order to bring about some process P, there exists (with respect to P) an overall system that consists of all the interacting components at the same time.

C1 (from P1 and P2): With respect to D, the underlying group constitutes a distributed system that consists of all the interacting individuals.

P3: D is a process that, on the basis of common-sense intuitions, we would readily classify as cognitive.

C2 (from C1 and P3): With respect to D, the underlying group constitutes a distributed cognitive system that consists of all the interacting individuals.

In this argument, premises P1 and P2 and the conclusion C1 seek to establish the truth of premise (1*) of the standard argument for distributed cognition—i.e., that there is a process (i.e., systemic behavior) which is generated by an integrated distributed system. While this is not the place to rehearse the details of the systemic properties and ongoing feedback loops arguments mentioned in P2, it is worth mentioning the main ideas behind them.

The systemic properties argument points out that, according to Dynamical Systems Theory, when individuals engage in ongoing mutual interactions, there arise novel systemic properties in the form of unprecedented regularities of behavior. These properties do not belong to any of the contributing members (or their aggregate), but to their coupled system as a whole. Accordingly, to account for such systemic properties, we have to postulate the overall distributed system. (Alternatively, distributed systems cannot be ontologically eliminated, because they are necessary for accounting for certain systemic properties.)

The ongoing feedback loops argument holds that, according to Dynamical Systems Theory, when individuals mutually interact—on the basis of ongoing feedback loops—to bring about some behavior, they form a nonlinear, causal amalgam that cannot be decomposed in terms of distinct inputs and outputs from the one agent to the other. The reason is that, because of the ongoing feedback loops, the way each individual affects others is not entirely endogenous to itself and, conversely, the way it is affected by others is not entirely exogenous either. Accordingly, to account for the way individuals behave, we cannot but postulate the overall distributed system they form part of.

This suggests that when individuals bring about a behavior by mutually interacting with each other, there are good reasons for postulating an overall distributed system that consists of all of them. In other words, there are good reasons for thinking that premise (1*) of the standard argument for distributed cognition is satisfied. Nevertheless, in order for the hypothesis of distributed cognition to be vindicated, it is also necessary to demonstrate that the relevant distributed system is a distributed cognitive system. This further step requires that premise (2*) of the standard argument for distributed cognition be true, which requires showing that the behavior of the distributed system is cognitive behavior. In ADC, this is established on the basis of premise P3, which invokes common-sense functionalism.

This reliance of premise P3 on common-sense functionalism means that psycho-functionalism can raise problems for the hypothesis of distributed cognition as motivated by ADC. Moreover, even if, unlike ADC, the Social Parity Principle or the standard argument for distributed cognition invoked psycho-functionalism instead of common-sense functionalism, the prospects of vindicating the hypothesis of distributed cognition would still appear bleak if there existed no distributed behavior that could be classed as cognitive behavior by the standards of our best scientific theories of cognition.

To counter this worry, in the following section, I review a recent study on the way bee colonies reach decisions on new nest locations. The study demonstrates that even if the Social Parity Principle, premise (2*) of the standard argument for distributed cognition, and premise P3 of ADC invoked psycho-functionalism (instead of common-sense functionalism), the decision-making process of bee colonies would still qualify as a case of distributed cognition.


Bees Do It

Colonies of the European honey bee (Apis mellifera) reproduce via fission. The old queen leaves the nest, taking with her thousands of worker bees. During this event, part of the swarm engages in a decision-making process to identify the best location for building a new nest. Scout bees explore the environment and, upon locating a potential nest site, rejoin the swarm in order to recruit other bees to that location through the waggle dance. When a bee that is committed to a potential nest site meets a scout that propounds an alternative site, she may receive a stop signal. Bees that receive several stop signals fall back to an uncommitted state. This is an ongoing process, during which bees continuously affect each other on the basis of positive (recruitment signals) and negative (stop signals) feedback loops. Eventually, the process ends with a decision being made at the colony level, when the bees committed to the same nest site reach a quorum.12
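The interplay of positive and negative feedback just described can be caricatured in a few lines of code. The following mean-field sketch is purely illustrative (all rate constants are arbitrary assumptions, and it is not an empirically derived model of real colonies): uncommitted scouts commit to a site through independent discovery and quality-weighted recruitment (positive feedback), while committed scouts are pushed back toward the uncommitted state by stop signals from scouts committed to the rival site (negative, cross-inhibitory feedback).

```python
def swarm_decide(q1, q2, dt=0.05, steps=5000):
    """Toy mean-field caricature of nest-site selection.

    c1, c2 : fractions of scouts committed to site 1 / site 2
    u      : uncommitted fraction (= 1 - c1 - c2)

    Gains : independent discovery (u * 0.1*q) plus recruitment via
            waggle dances (u * q * c), i.e., positive feedback.
    Losses: spontaneous abandonment (0.02/q) plus stop signals from
            scouts committed to the rival site (q_other * c_other),
            i.e., negative, cross-inhibitory feedback.
    All rate constants are arbitrary illustrative choices.
    """
    c1, c2 = 0.0, 0.0
    for _ in range(steps):
        u = 1.0 - c1 - c2
        d1 = u * (0.1 * q1 + q1 * c1) - c1 * (0.02 / q1 + q2 * c2)
        d2 = u * (0.1 * q2 + q2 * c2) - c2 * (0.02 / q2 + q1 * c1)
        c1, c2 = c1 + d1 * dt, c2 + d2 * dt
    return c1, c2

# Qualities 0.9 vs. 0.5: support for the better site snowballs to
# quorum level, while support for the rival site collapses.
c1, c2 = swarm_decide(0.9, 0.5)
print(round(c1, 2), round(c2, 2))
```

Even this crude sketch reproduces the colony-level outcome: the better site ends up with well over 80 percent of the scouts committed, without any individual scout comparing the two sites directly.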

In a recent article, Reina, Bose, Trianni, and Marshall (2018) report computer-based stochastic simulations of a model representing the way the bee colony reaches the above decision. Their nonlinear dynamical model was empirically derived from previous field observations. Astonishingly, their simulations demonstrate that the superorganism's (i.e., the bee colony's) decision-making process manifests the same psychophysical laws (Weber's Law, the Hick-Hyman Law, and Piéron's Law) that the human brain obeys when it engages in decision making.

Weber's Law states that the brain can detect which of two options is of higher quality only when the difference between their values exceeds a minimum threshold, and that this threshold increases as the magnitudes being compared increase. That is, there is a linear relationship between the mean value of the compared options and the minimum difference between them that is required for successful discrimination. For example, the human brain can easily tell that a set of 30 dots is larger than a set of 25 dots, but it struggles to decide between a set of 150 and a set of 155 dots, even though, in both cases, the difference is 5 dots. In Reina et al.'s (2018) study, the model of the bee colony manifested the same linear dependence between the minimum required quality difference between options (i.e., nest locations) and their mean quality. Piéron's Law states that the brain reaches decisions faster when the two options are of high quality than when they are of lower quality. Reina et al.'s study found that the honeybee colony model is likewise faster to decide between two nest locations of high quality than between two of low quality. Finally, the Hick-Hyman Law states that the time the brain requires to reach a decision increases with the number of available options. In line with this law, Reina et al. found that the model of the bee colony took longer to reach decisions as the number of alternative nest locations increased.13
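For readers who prefer the three laws in symbols: Weber's Law says two magnitudes are discriminable only if their difference exceeds a fixed fraction of their mean; Piéron's Law says response time falls as a power law of stimulus quality; and the Hick-Hyman Law says decision time grows logarithmically with the number of options. The snippet below encodes these standard formulations; all numerical constants are illustrative placeholders, not estimates from Reina et al. (2018) or from human data.

```python
import math

# Illustrative constants only, not empirical estimates.
WEBER_FRACTION = 0.1           # minimum detectable relative difference
HICK_A, HICK_B = 0.2, 0.15     # intercept and slope (seconds)
PIERON_R0, PIERON_K, PIERON_B = 0.3, 0.5, 1.0

def weber_discriminable(i1, i2, k=WEBER_FRACTION):
    """Weber's Law: two magnitudes are discriminable when their
    difference exceeds a fixed fraction of their mean."""
    return abs(i1 - i2) / ((i1 + i2) / 2) >= k

def hick_hyman_rt(n_options, a=HICK_A, b=HICK_B):
    """Hick-Hyman Law: decision time grows logarithmically with the
    number of options, RT = a + b * log2(n + 1)."""
    return a + b * math.log2(n_options + 1)

def pieron_rt(quality, r0=PIERON_R0, k=PIERON_K, beta=PIERON_B):
    """Piéron's Law: response time falls as a power law of stimulus
    quality, RT = r0 + k * quality**(-beta)."""
    return r0 + k * quality ** (-beta)

print(weber_discriminable(30, 25))     # True: 5/27.5 ≈ 0.18 >= 0.1
print(weber_discriminable(155, 150))   # False: 5/152.5 ≈ 0.03 < 0.1
print(hick_hyman_rt(2) < hick_hyman_rt(8))   # True: more options, slower
print(pieron_rt(0.9) < pieron_rt(0.3))       # True: better options, faster
```

The dot-counting example from the text falls out directly: the 30-versus-25 comparison clears the (assumed) Weber fraction while the 155-versus-150 comparison does not, even though both differ by exactly 5 dots.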

Drawing on these results, the authors conclude that the bee colony, as a distributed dynamical system, obeys the same psychophysical laws as the human brain. In their words:

This study shows for the first time that groups of individuals, in our case honeybee colonies, considered as a single superorganism, might also be able to obey the same laws. Similarly to neurons, no individual explicitly encodes in its simple actions the dynamics determining the psychophysical laws; instead it is the group as a whole that displays such dynamics.

(Reina et al., 2018, p. 5)

A few points are in order here. First, the fact that the bees mutually interact with each other on the basis of ongoing positive (recruitment) and negative (inhibitory) feedback loops in order to reach a decision indicates that P1 of ADC is satisfied. Second, the fact that Reina et al. employ a nonlinear dynamical model suggests that their experiment is consistent with P2 of ADC, which presupposes the validity of two arguments derived from Dynamical Systems Theory. From this, C3 of ADC follows, which affirms that there exists an integrated distributed system—i.e., what Reina et al. refer to as the “superorganism.” Reina et al.’s study, therefore, satisfies premise (1*) of the standard argument for distributed cognition in precisely the way ADC suggests.

What of premise (2*)? Premise (2*) is meant to establish that the relevant distributed system manifests behavior that can be classed as cognitive behavior. ADC seeks to affirm this with P3, by invoking common-sense functionalism. As noted in the previous section, this is going to be problematic if we only retain psycho-functionalism, especially if there are no collective processes that would qualify as cognitive processes by psycho-functionalist standards. Nevertheless, Reina et al.’s experiment focuses on the cognitive process of decision making, not as this is understood on the basis of common-sense functionalism, but as it is understood through the lens of psycho-functionalism. Their results demonstrate that the decision-making behavior of the bee colony obeys the same laws as human decision making, when the latter is studied on the basis of careful laboratory observation and experimentation.

This suggests that even if one were to modify ADC such that P3 invoked psycho-functionalism instead of common-sense functionalism, Reina et al.’s study would still carry ADC to its final conclusion, C2: With respect to the bee colony’s decision-making process, there exists a distributed system (i.e., the “superorganism”) that is a distributed cognitive system.

Reina et al.’s study therefore indicates that, in the case of the bee colony’s decision-making process, all premises of both the standard argument for distributed cognition and (the modified version of) ADC can be satisfied, even by psycho-functionalist standards. As a result, there is at least one case showing that psycho-functionalism cannot be used against the hypothesis of distributed cognition.



Discussion: Psycho-Functionalism and Active Externalism

To reiterate, psycho-functionalist opponents of active externalism argue against the view by appealing to psychophysical laws discovered by the best scientific theories of human cognition and psychology:

Insofar as such laws govern processes in the core of the brain, but not combinations of brains, bodies, and environments, there are principled differences between intracranial brain processes and processes that span brains, bodies, and environments.

(Adams & Aizawa, 2010, p. 61)

Reina et al.’s study, however, indicates that, even by the psycho-functionalist’s standards, there is a good case to be made against not only necessary intracranialism but also contingent intracranialism.

One possible objection to this is to claim that it does not present us with a case of human distributed cognition, but with a case of swarm cognition. Such an objection, however, would miss the point. If the letter of functionalism (be it common-sense functionalism or psycho-functionalism) is to be followed, this should make no difference at all. According to functionalism, in all of its forms, one should not focus on the properties of the supervenience base. The fact that the underlying components of the distributed cognitive system are bees instead of humans should be irrelevant, especially when the psychophysical laws obeyed by the target system are revealed by scientific studies of the behavior of the human brain. Reina et al.’s experiment demonstrates that the bee colony’s process of decision making is human-like. Psycho-functionalists must therefore accept that this is a genuine case of distributed cognition, even if the underlying group is made up of bees. One may then add that, given that humans possess more complex and advanced means of communication and interaction, it is only a matter of time before we discover that distributed systems made up of humans obey (or, with advances in communication technologies, will come to obey) the requisite psychophysical laws. Cognitive scientists must simply remain open-minded and keep their active externalist goggles on.14

But if active externalism in the form of the hypothesis of distributed cognition is by psycho-functionalist standards true, does this mean that it is also true in the form of the hypothesis of extended cognition and the extended mind thesis? Contentiously, one could argue that the hypothesis of distributed cognition is more radical than the hypothesis of extended cognition and the extended mind thesis, because of how widely cognition is dispersed in the former case. Accordingly, if the more radical hypothesis is true, then we should not be surprised if the more moderate versions of active externalism are also true—indeed we might expect them to be so. For example, one can imagine that, even though, currently, there are no extended systems that manifest the effects that Adams, Aizawa, and Rupert point to, in the future, there could be brain-machine interfaces that would augment individuals’ memory systems in a way that would obey the same psychophysical laws that biological memory does.

If we follow this line of thought, however, a number of questions arise: Why build into such memory extensions the downsides of biological memory as these are captured by some psychophysical laws? Why, for example, would we like to incorporate the negative transfer, primacy, and recency effects? And if we only incorporated the positive characteristics of biological memory, how many do we need for the extended system to qualify, by the lights of psycho-functionalism, as a cognitive system? Conversely, if a human subject does not exhibit all of the negative transfer, primacy, or recency effects, does their system for information encoding, storage, and retrieval fail to qualify as a memory system? As far as I can see, no straightforward answer exists to these puzzles, because behind them lies a more fundamental question: Which are the essential psychophysical laws that a process must manifest in order to qualify as a process of a specific cognitive kind, besides the coarse-grained characterization that common-sense functionalism has to offer for it?

Along with the other objections mentioned in the section entitled “Which Functionalism?,” these are important questions that psycho-functionalism needs to address before it can convincingly weigh against active externalism in any of its forms. Nevertheless, even if proponents of psycho-functionalism did come up with solutions to all of these problems, the foregoing suggests that they should remain open to active externalism, at least in the form of the hypothesis of distributed cognition.

This leaves us with two general points. First, from the point of view of philosophy of mind, psycho-functionalism does not, in principle, commit one to either necessary or contingent intracranialism. Second and relatedly: From the point of view of philosophy of science and scientific practice itself, cognitive scientists should be less hesitant to incorporate the hypothesis of distributed cognition into the hard cores of their research programs, whether these are further guided by psycho-functionalism or by common-sense functionalism.


  • 1. See, for example (Okubo, 1986; Niwa, 1994; Parunak, 1997; Li, Yang, & Peng, 2009; Becco, Vandewalle, Delcourt, & Poncin, 2006; Peng, Li, Yang, & Liu, 2010; Tunstrøm et al., 2013; Li, Peng, Kurths, Yang, & Schellnhuber, 2014; Attanasi et al., 2015).
  • 2. See, for example (Schmidt, Bienvenu, Fitzpatrick, & Amazeen, 1998; Riley, Richardson, Shockley, & Ramenzoni, 2011; Marsh et al., 2009; Schmidt & Richardson, 2008; Duarte et al., 2013a, 2013b; Dale, Fusaroli, Duran, & Richardson, 2013; Richardson, Dale, & Marsh, 2014).
  • 3. A number of alternative arguments for the hypothesis of extended cognition exist. See, for example, Menary (2006, 2007); Heersmink (2015).
  • 4. A state can be thought of as a process that is at equilibrium.
  • 5. This paper has been available online since 2006. These criteria, however, date even earlier as they had already made their appearance in Clark & Chalmers (1998) (although the phrasing was somewhat different). Also, in Clark & Chalmers (1998, p. 17), the authors consider a further criterion: “Fourth, the information in the notebook has been consciously endorsed at some point in the past, and indeed is there as a consequence of this endorsement.” As the authors further note, however, “the status of the fourth feature as a criterion for belief is arguable (perhaps one can acquire beliefs through subliminal perception, or through memory tampering?),” so they subsequently drop it.
  • 6. A further problem for the extended mind thesis as motivated by the case of Otto and his notebook is that the two of them may not actually qualify as a coupled system, as this is understood within Dynamical Systems Theory. In Palermos (2014), I note that this is not necessarily an unwelcome result since, intuitively, it is not so obvious that Otto and his notebook form an extended mind. Nevertheless, Dynamical Systems Theory can still motivate the hypothesis of extended cognition on the basis of more plausible examples, such as agents densely interacting with tactile visual substitution systems and magnetic perception systems (for more details, see Palermos, 2014).
  • 7. According to the ‘negative transfer’ effect, previous learning negatively affects subjects’ ability to learn and recall new associations. According to the ‘generation’ effect, subjects who generate their own meaningful associations between stored information are better at recalling it. For more details see Rupert (2004).
  • 8. For example, in a free recall task, where subjects are asked to memorize a list of 20 words, the words at the beginning and at the end of the list are more likely to be correctly recalled than the rest. This is supposed to be the consequence of the primacy and recency effects on biological memory, respectively.
  • 9. For details on the debate over the mark of the cognitive, how it may be used against active externalism, and the considerable difficulty of coming up with an unproblematic account of such a concept, see Clark (2010), Menary (2006), Adams and Aizawa (2001, 2008, 2010), and Ross and Ladyman (2010).
  • 10. According to common-sense functionalism, behavior is a significant (though not the only) part of cognitive phenomena. Pain, for example, may be broadly characterized as “the state that tends to be caused by bodily injury, to produce the belief that something is wrong with the body and the desire to be out of that state, to produce anxiety, and, in the absence of any stronger, conflicting desires, to cause wincing or moaning” (Levin, 2018).
  • 11. Besides standard arguments, there are a number of ways that one could argue for the hypothesis of distributed cognition. For a comprehensive list of the existing approaches, see Theiner (2015).
  • 12. For more details, see Reina et al. (2018).
  • 13. See “Honeybees may unlock the secrets of how the human brain works” (March 27, 2018) (retrieved 17 December 2019).
  • 14. A research program that accepts active externalism could also be beneficial for understanding the brain. As Reina et al. (2018, p. 5) note, “research synergies between neuroscience and collective intelligence studies can highlight analogies that could help better to understand both systems.”


Adams, F., & Aizawa, K. (2001). The bounds of cognition. Philosophical Psychology, 14(1), 43-64.

Adams, F., & Aizawa, K. (2008). The bounds of cognition. Oxford: Blackwell Publishing Ltd.

Adams, F., & Aizawa, K. (2010). Defending the bounds of cognition. In R. Menary (Ed.), The extended mind. Cambridge, MA: MIT Press.

Attanasi, A., Cavagna, A., Del Castello, L., Giardina, I., Jelic, A., Melillo, S., et al. (2015). Emergence of collective changes in travel direction of starling flocks from individual birds’ fluctuations. Journal of the Royal Society, Interface, 12(108), 20150319.

Barnier, A. J., Sutton, J., Harris, C. B., & Wilson, R. A. (2008). A conceptual and empirical framework for the social distribution of cognition: The case of memory. Cognitive Systems Research, 9(1-2), 33-51. https://doi.org/10.1016/j.cogsys.2007.07.002.

Becco, C., Vandewalle, N., Delcourt, J., & Poncin, P. (2006). Experimental evidences of a structural and dynamical transition in fish school. Physica A: Statistical Mechanics and its Applications, 367, 487-493.

Block, N. (1980). Troubles with functionalism. In Readings in the philosophy of psychology (Vols. 1 and 2). Cambridge, MA: Harvard University Press.

Chemero, A. (2011). Radical embodied cognitive science. Cambridge, MA: MIT Press.

Clark, A. (2007). Curing cognitive hiccups: A defense of the extended mind. The Journal of Philosophy, 104, 163-192.

Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. New York: Oxford University Press.

Clark, A. (2010). Memento’s revenge: The extended mind, extended. In R. Menary (Ed.), The extended mind. Cambridge, MA: MIT Press.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 10-23.

Cooke, N. J., Gorman, J. C., Myers, C. W., & Duran, J. L. (2013). Interactive team cognition. Cognitive Science, 37(2), 255-285.

Dale, R., Fusaroli, R., Duran, N., & Richardson, D. C. (2013). The self-organization of human interaction. Psychology of Learning and Motivation, 59, 43-95.

Duarte, R., Araujo, D., Correia, V., Davids, K., Marques, P., & Richardson, M. J. (2013a). Competing together: Assessing the dynamics of team-team and player-team synchrony in professional association football. Human Movement Science, 32(4), 555-566.

Duarte, R., Araujo, D., Folgado, H., Esteves, P., Marques, P., & Davids, K. (2013b). Capturing complex, non-linear team behaviours during competitive football performance. Journal of Systems Science and Complexity, 26(1), 62-72.

Graham, G. (2015). Behaviorism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2015 ed.). Retrieved from entries/behaviorism

Heersmink, R. (2015). Dimensions of integration in embedded and extended cognitive systems. Phenomenology and the Cognitive Sciences, 14, 577-598.

Heylighen, F., Heath, M., & Van Overwalle, F. (2004). The emergence of distributed cognition: A conceptual framework. In Proceedings of collective intentionality IV.

Huebner, B. (2013). Macrocognition: Distributed minds and collective intentionality. New York: Oxford University Press.

Hutchins, E. (1996). Cognition in the wild. Cambridge, MA: MIT Press.

Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge. Cambridge: Cambridge University Press.

Levin, J. (2018, Fall). Functionalism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from functionalism/.

Li, L., Peng, H., Kurths, J., Yang, Y., & Schellnhuber, H. J. (2014). Chaos-order transition in foraging behavior of ants. Proceedings of the National Academy of Sciences, 111(23), 8392-8397.

Li, L., Yang, Y., & Peng, H. (2009). Fuzzy system identification via chaotic ant swarm. Chaos, Solitons & Fractals, 41(1), 401-409.

Ludwig, K. (2015). Is distributed cognition group level cognition? Journal of Social Ontology, 1(2), 189-224.

Marsh, K. L., Richardson, M. J., & Schmidt, R. C. (2009). Social connection through joint action and interpersonal coordination. Topics in Cognitive Science, 1(2), 320-339.

Menary, R. (2006). Attacking the bounds of cognition. Philosophical Psychology, 19(3), 329-344.

Menary, R. (2007). Cognitive integration: Mind and cognition unbound. London: Palgrave Macmillan.

Menary, R. (2010). Introduction: The extended mind in focus. In R. Menary (Ed.), The extended mind. Cambridge, MA: MIT Press.

Nagel, S. K., Carl, C., Kringe, T., Märtin, R., & König, P. (2005). Beyond sensory substitution—Learning the sixth sense. Journal of Neural Engineering, 2(4), R13-R26.

Niwa, H. S. (1994). Self-organizing dynamic model of fish schooling. Journal of Theoretical Biology, 171(2), 123-136.

Okubo, A. (1986). Dynamical aspects of animal grouping: Swarms, schools, flocks, and herds. Advances in Biophysics, 22, 1-94.

Palermos, S. O. (2014). Loops, constitution, and cognitive extension. Cognitive Systems Research, 27, 25-41.

Palermos, S. O. (2016). The dynamics of group cognition. Minds and Machines, 26(4), 409-440.

Parunak, H. V. D. (1997). “Go to the ant”: Engineering principles from natural multi-agent systems. Annals of Operations Research, 75, 69-101.

Peng, H., Li, L., Yang, Y., & Liu, F. (2010). Parameter estimation of dynamical systems via a chaotic ant swarm. Physical Review E, 81(1), 016207.

Reina, A., Bose, T., Trianni, V., & Marshall, J. A. (2018). Psychophysical laws and the superorganism. Scientific Reports, 8(1), 4387.

Richardson, M. J., Dale, R., & Marsh, K. L. (2014). Complex dynamical systems in social and personality psychology: Theory, modeling, and analysis. In Handbook of research methods in social and personality psychology (pp. 253-282). Cambridge: Cambridge University Press.

Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Interpersonal synergies. Frontiers in Psychology, 2, 38.

Ross, D., & Ladyman, J. (2010). The alleged coupling-constitution fallacy and the mature sciences. In R. Menary (Ed.), The extended mind. Cambridge, MA: MIT Press.

Rowlands, M. (1999). The body in mind: Understanding cognitive processes. New York: Cambridge University Press.

Rupert, R. D. (2004). Challenges to the hypothesis of extended cognition. Journal of Philosophy, 101, 389-428.

Rupert, R. D. (2009). Cognitive systems and the extended mind. Oxford: Oxford University Press.

Schmidt, R. C., Bienvenu, M., Fitzpatrick, P. A., & Amazeen, P. G. (1998). A comparison of intra-and interpersonal interlimb coordination: Coordination breakdowns and coupling strength. Journal of Experimental Psychology: Human Perception and Performance, 24(3), 884.

Schmidt, R. C., & Richardson, M. J. (2008). Dynamics of interpersonal coordination. In Coordination: Neural, behavioral and social dynamics (pp. 281-308). Berlin: Springer.

Sellars, W. (1963). Philosophy and the scientific image of man. In Science, perception, and reality (pp. 1-40). New York: Routledge & Kegan Paul.

Sutton, J. (2008). Between individual and collective memory: Interaction, coordination, distribution. Social Research, 75(1), 23-48.

Sutton, J., Harris, C. B., Keil, P. G., & Barnier, A. J. (2010). The psychology of memory, extended cognition, and socially distributed remembering. Phenomenology and the Cognitive Sciences, 9(4), 521-560.

Theiner, G. (2013a). Onwards and upwards with the extended mind: From individual to collective epistemic action. In L. Caporael, J. Griesemer, & W. Wimsatt (Eds.), Developing scaffolds (pp. 191-208). Cambridge, MA: MIT Press.

Theiner, G. (2013b). Transactive memory systems: A mechanistic analysis of emergent group memory. Review of Philosophy and Psychology, 4(1), 65-89. https://doi.org/10.1007/s13164-012-0128-x.

Theiner, G. (2015). Group-sized distributed cognitive systems. In The Routledge handbook of collective intentionality. New York: Routledge.

Theiner, G., & O’Connor, T. (2010). The emergence of group cognition. In A. Corradini & T. O’Connor (Eds.), Emergence in science and philosophy. New York: Routledge.

Theiner, G., Allen, C., & Goldstone, R. L. (2010). Recognizing group cognition. Cognitive Systems Research, 11(4), 378-395. https://doi.org/10.1016/j.cogsys.2010.07.002.

Tollefsen, D. P. (2006). From extended mind to collective mind. Cognitive Systems Research, 7(2-3), 140-150. https://doi.org/10.1016/j.cogsys.2006.01.001.

Tollefsen, D., & Dale, R. (2012). Naturalizing joint action: A process-based approach. Philosophical Psychology, 25(3), 385-407.

Tunstrøm, K., Katz, Y., Ioannou, C. C., Huepe, C., Lutz, M. J., & Couzin, I. D. (2013). Collective states, multistability and transitional behavior in schooling fish. PLoS Computational Biology, 9(2), e1002915.

Tyler, M., Danilov, Y., & Bach-y-Rita, P. (2003). Closing an open-loop control system: Vestibular substitution through the tongue. Journal of Integrative Neuroscience, 2, 159-164.

Weiskopf, D. (2008). Patrolling the mind’s boundaries. Erkenntnis, 68, 265-276.

Wilson, R. A. (2000). The mind beyond itself. In D. Sperber (Ed.), Metarepresentations: A multidisciplinary perspective (pp. 31-52). New York: Oxford University Press.

Wilson, R. A. (2001). Group-level cognition. Philosophy of Science, 68(3), 262-273.

Wilson, R. A. (2004). Boundaries of the mind: The individual in the fragile sciences: Cognition. New York: Cambridge University Press.

Wilson, R. A. (2005). Collective memory, group minds, and the extended mind thesis. Cognitive Processing, 6(4), 227-236.
