Looking Under the Hood—Do the Processes Underlying Our Conscious Experience Fit Homo prospectus?

We have been speculating about psychological processes and their connection to prospection. We've suggested, for example, that conscious feelings and intuitions about language or about others' thoughts, feelings, and likely behavior might be manifestations of unconscious, but informationally rich, model-building, projective, evaluative, and decision-making mental processes. That is a lot to ask of the human brain, imperfect as it often seems to be. Such suggestions on our part make predictions about what neuroscientists will find as they penetrate beneath the conscious surface of mind. The last two decades in neuroscience have given an emphatic answer: There is a great deal of evidence that the brain is involved in just these sorts of informationally rich processes, even if our conscious mind has no clue of this.

Let's start with a look at the findings regarding the spontaneous empathic modeling of one another. Experiencing a mild shock, anticipating the arrival of such a shock, watching another person undergo such a shock, and imagining inflicting such a shock upon another appear to activate extensively overlapping or adjacent regions of the affective system (Decety & Ickes, 2009; Ruby & Decety, 2001; but see Singer & Lamm, 2009 for more detailed analysis of the areas involved). That is, the same or very similar core affective responses are observed in all these cases, and where they differ is what use the brain then makes of this affective encoding of "what it is like" to feel an electric shock. This information can serve to guide one's own immediate behavior, one's expectations and feelings about one's future possibilities, or one's expectations for and feelings about others. Moreover, those with profound deficits in these capacities find the "intuitive" ability to anticipate their own futures or to spontaneously understand the state of mind of others exceptionally difficult to acquire or use effectively (Baron-Cohen, 1997; Decety & Ickes, 2009).

Regarding actions, systems engineers had discovered the effectiveness of a control process for complex movements in which the issuance of a motor command is accompanied by generating a "forward model" of the predicted outcome of executing the command, permitting detection of discrepancies with desired outcomes even before sensory feedback can occur. Any such anticipated discrepancy can in turn be fed back via an "inverse model" to determine what changes in motor command might bring the system closer to the desired state (Craig, 1986). Underlying this capacity is, as we might suspect, a capacity to construct and update a causal model of the situation, available actions, and likely outcomes. Soon this approach was being applied theoretically to motor control in living systems (Lacquaniti, Borghese, & Carrozzo, 1992; Miall & Wolpert, 1996), and immediately the search for actual neural mechanisms began.
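
To make the forward/inverse pairing concrete, here is a deliberately toy sketch in Python. The linear "plant" (outcome = gain × command), the gain value, and the target position are all invented for illustration; this is a sketch of the control idea, not a model of any actual neural circuit.

    # Toy one-dimensional reach with assumed linear dynamics.
    gain = 0.8          # internal estimate of how the limb responds to a command
    target = 10.0       # desired hand position

    command = 11.0                 # an initial, somewhat miscalibrated command
    predicted = gain * command     # forward model: predicted outcome (8.8)

    # Discrepancy is detected against the prediction, before any slow
    # sensory feedback could arrive.
    error = target - predicted     # 1.2

    # Inverse model: translate the anticipated discrepancy back into a
    # change of motor command that closes the gap.
    command += error / gain        # now gain * command equals the target

The crucial point is timing: the correction is available the moment the command is issued, long before proprioceptive or visual feedback could report the actual outcome.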

Within a decade, the theorized model-based motor control had been given substantial empirical grounding, and still more sophisticated versions of dynamic model-based control using Bayesian approaches to uncertainty and principles of optimality in behavior selection had come into play (Kording & Wolpert, 2006; Liu & Todorov, 2007; Todorov & Jordan, 2002). The slowness of actual sensory feedback means that the system's reliance on expectations is critical. It appears that this same predictive and inverse modeling capacity is used empathically to simulate internally and thereby predict and interpret the observed behavior of others. It can then be used to generate the expectations needed to guide our own actions with respect to them, preparing responses to actions not yet performed and immensely facilitating tasks, such as social learning and cooperation, although the mechanisms involved in such simulation remain subject to debate (Calvo-Merino, Glaser, Grezes, Passingham, & Haggard, 2005; Lamm, Batson, & Decety, 2007).

A causal, predictive internal model thus provides an integrated way of interpreting recent experience and selecting a response to it, which can be used equally for the self or others. Empathy is not a simple replication of the attitude or behavior of another by a kind of "emotional contagion." We don't just become angry when someone else is angry or bored when we see that someone else is bored. The simulation of empathy uses our own affective system, but "off-line," so that the connection with action is mediated by representations of the self versus others. Anatomically, these differences in simulation appear to be neurally correlated with differences in self- versus other-representation in general (Ruby & Decety, 2001).

Given the tight social conditions under which humans and their immediate ancestors have come into the world, grown to maturity, lived, and reproduced, it would be surprising if a capacity to be attuned in real time to the evolving thoughts, feelings, intentions, and likely behavior of those around us were not part of the core functioning of the brain. What could be more important in small-scale human societies for the meeting of one's needs or accomplishing of one's goals than one's relations with others?

This picture of the working of empathy requires, however, that our minds be capable of sophisticated kinds of learning necessary to build and update the underlying predictive models. We must somehow acquire sufficient information about the causal and intentional structures of the world. And these learning processes must be able to proceed via "unsupervised" as well as supervised learning, both because explicit instruction or correction is comparatively rare in social situations, so that implicit learning is more the norm, and because, at the very beginning, the infant must herself learn which gestures or words of others correspond to instruction or correction. Could the learning systems we inherit from our animal and hominid ancestors be up to such an informationally intensive and computationally complex task?

As long as implicit personal and social learning was regarded as a purely associative process, an ingraining of stimulus-response dispositions on the basis of past reward or punishment, such a task seemed far out of reach. Such processes acquire information slowly, adapt poorly to changing environments, and effect "local" links between stimuli and responses, which do not support higher-level modeling of the explanatory mechanisms behind what is experienced.

However, a revolution has occurred in learning theory, profoundly enriching our picture of how intelligent animals actually learn. A breakthrough came when neuroscientists using microelectrode readings of the behavior of single neurons began to assemble a picture of the neural infrastructure of reward learning. Wolfram Schultz and co-investigators (Schultz, Dayan, & Montague, 1997) placed microelectrodes in so-called dopamine neurons in the midbrain of macaques, and they were able to record the train of spiking activity that took place when an animal first received an unexpected reward (a squirt of sweet fruit juice). Later, they conditioned the macaque to expect this reward by turning on a light 1.5 seconds before the arrival of the juice. They observed, perhaps unsurprisingly, that the dopamine neurons initially underwent a spike in firing when the unexpected juice arrived (the "unconditioned stimulus"). After all, isn't dopamine about pleasure? But with increased exposure to the "conditioning stimulus" of the light, the spike moved from the time of the arrival of the juice to the time of the coming on of the light. And then when the juice arrived 1.5 seconds later, there was no spike. Had the macaque become indifferent to the juice, getting pleasure instead from the light? Schultz et al. (1997) tried turning on the light, and then, 1.5 seconds later, giving no juice. When this occurred, an extraordinary event took place. Normally in any collection of neurons, a certain amount of seemingly random firing is taking place, often seen as minimal, baseline neural noise. But when the juice failed to arrive, the dopamine neurons fell nearly silent—nothing—right on cue at 1.5 seconds.

So the initial spike in the dopamine neurons was not pleasure or reward; it was information about prospective reward (Brooks & Berns, 2013; de la Fuente-Fernandez et al., 2002). Prior to the introduction of the conditioning stimulus, the unexpected arrival of the unconditioned stimulus, a squirt of sweet juice, was good news—a "better than expected" event. But after conditioning, the news value attached instead to the coming on of the light, which interrupted an otherwise boring bit of time in the lab. So the light became the good news, and when the squirt of juice was delivered right on time, 1.5 seconds after the light came on, its arrival was "no news"—the juice was neither better nor worse than expected, so no spiking behavior was observed. And then, when the juice failed to arrive 1.5 seconds after the light came on, this "worse than expected" result immediately induced a dramatic suppression of the normal activity of the dopamine neurons—a pronounced and distinctive neural "error signal." This error signal also functioned as a "teaching signal" for the macaque, and if the failure to deliver a squirt of juice continued, the spike associated with the coming on of the light would attenuate and disappear.

The macaque did not simply "associate" juice with the light, but internally generated a specific expectation of how the world would be, given the information provided by the light. This forward-looking representation set the macaque up to expect juice in 1.5 seconds, creating a state in which the absence of juice in 1.5 seconds was not simply one more boring moment in a boring day, but a mistake, a distinctive neural event signaling an error in the forward representation. Through such "prediction-error"-based learning, the animal seemed able to acquire and retain an internal representation of the probabilities and rewards of its world. Here's a schematic way of picturing the process:

(*) representation → expectation → observation → discrepancy detection (feedback) → error-reducing representation revision → revised expectation → observation → discrepancy detection ...

By repeated application of a process of this kind, the animal's internal representations will tend to become "tuned" to the actual frequencies in its environment.
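
The (*) cycle has a standard formal counterpart in learning theory: the delta-rule (Rescorla–Wagner) update. Here is a minimal sketch of the Schultz-style experiment in Python, with an invented learning rate and reward size, and with no pretense of capturing the underlying neural machinery:

    alpha = 0.2     # learning rate: how strongly each error revises the model
    V = 0.0         # how much reward the light currently leads one to expect

    # Conditioning: the light is reliably followed by juice (reward = 1.0).
    for trial in range(100):
        delta = 1.0 - V       # discrepancy detection: actual minus expected
        V += alpha * delta    # error-reducing representation revision

    # Early on, delta is large and positive (juice "better than expected");
    # after training, delta is ~0 (juice is "no news").
    delta_probe = 0.0 - V     # probe trial with juice omitted: strongly
                              # negative, the analogue of the dopamine dip

Temporal-difference extensions of this same rule additionally reproduce the migration of the burst from the time of the juice to the time of the light.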

Researchers asked what would happen if one varied the probability with which juice followed the light, so that it was received 75%, 50%, or 25% of the time. Studies by Schultz and many others showed via individual neural recordings of spike rates that the primate and rat brains can keep accurate track of all of these variations. For example, when the probability is 50%, the spiking activity is roughly half as high as when the probability is 100%; when the probability is 75%, the spiking activity is midway between the two (Fiorillo, Tobler, & Schultz, 2003). And what would happen if one varied the value of the reward from time to time, increasing it from one squirt to two, or reducing it from two to one? Neural firing activity in the orbital prefrontal cortex was monitored, and again, the cells' responses tracked variations in magnitude—not just absolute, but relative to prior expectations and preferences (Tremblay & Schultz, 1999). Further research found that macaques and rats not only formed separate representations of probability versus reward value, but also computed a joint product, expected value (probability × reward), that appears to guide choice behavior (Preuschoff, Bossaerts, & Quartz, 2006; Schultz, 2002; Singer, Critchley, & Preuschoff, 2009; Tobler, O'Doherty, Dolan, & Schultz, 2006). Eventually, a complete and coherent "marginal utility function" could be reconstructed from the monkeys' choices in gambles between banana slices and squirts of sweet juice (Stauffer, Lak, & Schultz, 2014).
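
A back-of-the-envelope illustration of the joint quantity these recordings suggest is computed; the cue names, probabilities, and reward sizes below are invented:

    # Expected value = probability * reward, computed per cue.
    cues = {
        "cue_A": (0.75, 1.0),   # 75% chance of one squirt of juice
        "cue_B": (0.50, 2.0),   # 50% chance of two squirts
    }

    expected_value = {name: p * r for name, (p, r) in cues.items()}
    # {"cue_A": 0.75, "cue_B": 1.0} -> choice behavior should favor cue_B
    choice = max(expected_value, key=expected_value.get)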

Contrary to years of thinking about the animal mind, it appeared that intelligent mammals respond to information and rewards in their world very much in the manner rational decision theory would recommend. Moreover, their learning pattern closely resembled the behavior of idealized models of Bayesian probabilistic learners, for whom expectations are updated in light of experience via "conditionalization," a mathematical function that takes into account prior expectations as well as the likelihood of the new evidence, given what was antecedently expected. Successive episodes of updating will tend, as evidence grows, to generate an increasingly accurate array of forward expectations of outcomes. Indeed, there is something more: The brains of intelligent animals and humans respond to experience by forming expectations not only about individual events, but about higher-order regularities and types of events and outcomes, thus creating causal-explanatory models of the world much as a scientist does (Badre & Frank, 2012; Courville, Daw, & Touretzky, 2006; Frank & Badre, 2012; Gershman & Niv, 2015; Tenenbaum et al., 2011).
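
For reference, conditionalization is simply Bayes' rule applied as an update policy: on learning evidence E, the new degree of belief in a hypothesis H is set to

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

where P(H) encodes the prior expectation and P(E | H) the likelihood of the evidence given the hypothesis. Repeated application of this rule is what drives the convergence discussed below.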

Philosophers, mathematicians, and statisticians had previously shown that Bayesian learning has a number of singularly important features for creatures such as ourselves and for the foraging mammals from which we descended. First, it links learning directly to the guidance of behavior by placing expectation at the center of learning: Learning is not about assembling a massive archive of past events, but about efficient extraction from ongoing experience of probabilistic information used to update model-based expectations. A record of past events, however accurate, is mute about what will happen next (the "time/date" coding of stored information will always be earlier than the present moment). The expectations resulting from (*)-like or Bayesian processes are, in effect, running summaries of the impact of past information, and thus afford instantaneous access to the net result of what one has learned without requiring an extensive search of memory in order to determine "the weight of evidence." For a creature living on the edge of existence, this is a tremendous saving of the neural structures it must grow and sustain. And for humans who ordinarily deliberate using intuitively "felt" strengths of belief or degrees of uncertainty, such processes suggest a mechanism by which such intuitions could be sensitive to what we have learned implicitly as well as explicitly, and how reliable this has been.

Second, (*)-like and Bayesian processes are "self-correcting." Expectation always introduces an element of bias, because it anticipates outcomes without waiting to see what actually happens. However, if expectations are consistently modified in the face of experience in a (*)-like or Bayesian manner, then over time, the influence of initial expectations will tend to diminish as new experiences "tune" expectations to actual frequencies through the reduction of prediction error. As experience grows in magnitude and diversity, Bayesians point out, initial expectations tend to "wash out," and individuals who began from different starting assumptions, but encountered similar experience, will tend to converge in their expectations. And importantly, they will tend to converge on the actual "natural statistics" of their environment (for further perspectives on the "Bayesian brain", see Chater & Oaksford, 2008 and Hohwy, 2013).
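
A small numerical sketch of this "washing out," using Beta-distributed beliefs about how often a cue is followed by reward. The priors, the true rate, and the sample size are all invented, and the Beta-Bernoulli model is a standard textbook idealization rather than a claim about neural implementation:

    # Two learners with opposite initial expectations, expressed as
    # Beta(a, b) priors over the probability that the cue pays off.
    priors = {"optimist": (8.0, 2.0), "pessimist": (2.0, 8.0)}

    true_rate = 0.6
    trials = 1000
    successes = int(trials * true_rate)    # shared stream of experience

    for name, (a, b) in priors.items():
        posterior_mean = (a + successes) / (a + b + trials)
        print(name, round(posterior_mean, 3))
    # Both print ~0.6: learners who started from very different assumptions
    # converge on the actual "natural statistics" of the environment.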

Associative learning, we noted, has been criticized as far too slow to permit fine-tuning to a rapidly changing environment. Such learning is cumulative in character, and the great weight of past experience tends to offset changes in recent experience (Lieberman, 2000). But statistical learners employing (*)-like or Bayesian methods see things differently; the more strongly a certain outcome is expected, the more surprise one experiences when the expected outcome does not occur, and the more one revises one's expectations going forward. If animals and humans are (*)-like or Bayesian learners, then they should pay greatest attention to incongruous rather than familiar experiences, as these afford the greatest potential information value for learning. And when researchers look at the attention patterns of very young infants, they find just this pattern. Even at a few weeks of age, infants show greater interest when they hear scrambled patterns of phonemes in their native language as opposed to normal speech (Saffran, Aslin, & Newport, 1996). And by 8 months, infants are able to discriminate incongruities in an artificial language and appear to update their conditional expectations accordingly (Aslin, Saffran, & Newport, 1998). More recent research indicates that infants in the first year of life use statistical information across contexts to resolve the reference of words (Smith & Yu, 2007) and exhibit quite general capacities for causal and statistical learning across a variety of domains (Kirkham, Slemmer, & Johnson, 2002; Sobel & Kirkham, 2006). Studies of causal learning by rats suggest that even they abjure purely associative learning and statistically learn to distinguish different causal models of situations and to discriminate between the absence of events and the lack of evidence (Blaisdell, Sawa, Leising, & Waldmann, 2006; Waldmann, Schmid, Wong, & Blaisdell, 2012). Recent work on "deep learning" models of object recognition and on Bayesian causal learning shows how the development of hierarchical information structures in the face of experience can even result in the formation of new categories. Drawing on the large amount of information about objects encoded in learned "deep" hierarchies, it becomes possible to correctly identify examples in a new category after experiencing only a few instances—the way that children quickly spot the difference between a bicycle and a scooter, even after seeing only a few samples of each (Lake, Salakhutdinov, & Tenenbaum, 2015).
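
One standard way to make "potential information value" precise is Shannon surprisal, on which improbable events carry more information. A two-line illustration with invented probabilities:

    from math import log2

    def surprisal(p):
        """Information content, in bits, of an event with probability p."""
        return -log2(p)

    surprisal(0.9)    # ~0.15 bits: a familiar, well-predicted pattern
    surprisal(0.05)   # ~4.32 bits: an incongruity worth attending to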

The importance of statistical learning sheds light on the difficult problem of implicit bias in human social interactions. The bad news is that children appear to learn over time to make implicit evaluations of the abilities and aptitudes of groups based on the biased samples of the total population to which they are typically exposed when growing up in the United States, which remains significantly segregated in residence and primary education. By age 6, children already show implicit learning of the biases of their society, even children who belong to the stigmatized group itself (Baron & Banaji, 2006). Because this learning is implicit, it can shape behavior even when the individual is trying consciously not to be prejudiced (Macrae, Bodenhausen, Milne, & Jetten, 1994). Is implicit bias, therefore, an intractable problem? The good news is that the same kinds of statistical learning mechanisms can operate to weaken implicit bias if individuals are exposed to samples less biased in their representation of abilities (Dasgupta, 2013). If bias is learned in a (*)-like or Bayesian implicit way from living in unequal social relationships, then it can be unlearned in a (*)-like or Bayesian implicit way by living in more equal social relationships. What looks like an intractable social problem arising from hardwired or inbred ingroup/outgroup attitudes seems instead to reflect, and thus be amenable to, statistical learning. We will return to the problem of implicit bias, and to a remarkable "natural experiment" that illustrates how learning can change even centuries-old biases, when we discuss morality in Chapter 9.

Reinforcement learning, so long as it involves a rich and varied environment, is not about entrenching habits by hammering them home through repetition, but about attending to the most informative cues in the environment to construct probabilistic representations of what to expect (Gallistel, Mark, King, & Latham, 2001; Rescorla, 1988). In Chapter 1, we introduced the idea of a "good regulator" and suggested that we should think of the challenge individuals face in the world as one of regulating their interchanges with their physical and social environment so as to meet their needs and realize their aims. A good regulator, we noted, must build an internal model of the system as a whole and make its decisions by consulting this model to understand how its actions will affect its world and itself.

(*)-like learning is not enough for a good regulator to build an accurate model of the system as a whole. The regulator would need to use the probabilistic information thus acquired to construct structural models of causal relationships (Tenenbaum et al., 2011). And considerable evidence attests that human children engage in this kind of causal modeling from early on (Gopnik et al., 2004; Sobel & Kirkham, 2006), and that intelligent animals develop and use model-like representations of spatial relationships and potential actions in navigating their environment (Moser, Kropff, & Moser, 2008). Recently, work indicating the use of causal-explanatory models by intelligent animals has also begun to appear (Blaisdell & Waldmann, 2012). Model-based learning is of special importance in understanding the pervasive dynamic flexibility of behavior in real time—a flexibility that is difficult to accommodate on habit-based models (Balleine & Dickinson, 1998), and even behaviors traditionally seen as habitual are being rethought as more complex (Smith & Graybiel, 2014).

In thinking about the ice fisherman's actions in Chapter 1, we appealed to the flexibility of model-based action to suggest how he was able to intelligently improvise a new approach and capture the fish (Gillan, Otto, Phelps, & Daw, 2015). He not only caught a fish, he caught a new way of catching fish. And he will be able to use this new approach the very next time he fishes, without waiting for a long history of reinforcement in that new pattern of behavior. But this does not require departure from broadly Bayesian learning. For (*)-like or Bayesian responses to causal events of diverse kinds over the course of his lifetime have given the fisherman not only first-order expectations about ways of catching fish, but higher-order expectations about the ways in which the world is regular, and this general knowledge can be brought to bear to permit extrapolation even from individual instances. That is, the fisherman has used something like Bayesian hierarchical learning to develop and assess causal models of his world. This is how, it seems, actual humans solve the age-old problem of induction—they don't. Instead of induction, they use causal modeling, starting off with prior expectations and updating these on the basis of experience (Gopnik et al., 2004). In the intensely social human world, the experience of exchanging information with others is of special importance. We speculate that humans can accomplish their remarkable adaptive improvisation not only because of their brains, but also because of the language and culture they inherit (the socialis in Homo prospectus socialis and the topic of Chapter 5). This inheritance enables them to acquire rich representations of the physical and social world that extend far beyond the course of first-person experience.

But humans are not the only ones good at adaptive innovation. To understand how deep the idea of prospection is in understanding the evolved brain and the great advantages it confers, we need to see how key powers of prospection earned their way into the minds of our ancestors long before the emergence of culture as we know it. Where better to start than the favored species of experimental psychology, the white rat? Rats suffered through decades of maze running to test the associative theory of learning, and a large research program, behaviorism, was built upon their backs. On the associative theory, a rat learns to turn right in a maze because the motor response of executing a right turn was soon followed by a reinforcement—a bit of food. By repeated running of the maze, this motor response was "trained" into the rat. But the great experimentalist Karl Lashley made an observation in 1920, in the heady early days of behaviorism, which raised questions about this idea of what the rat had learned. One day, a rat escaped the start box of its maze, climbing up on top of the structure. What would it do? The conditioned response model would predict that the rat would walk (say) 10 steps forward, turn right, walk 5 steps, turn right again, and find the food. After all, that was the motor pattern that had been so assiduously reinforced.

But instead the rat scampered diagonally across the top of the maze, directly to the food station (Lashley, 1929). Laboratory conditions prevented the rat from "following its nose" to the food—somehow, it was following something else, something more abstract that it had learned while running trials in the maze. Thus began the idea that rats might not be slaves to stimulus-response conditioning; they might form mental representations of locations and paths extending in space, permitting them to respond flexibly and intelligently to entirely novel opportunities afforded by the world. Not only that, but the novel behavior had never been reinforced. Could learning really take place in light of internally represented values or goals ("purposes"), without the external carrot of reward or stick of punishment?

It was the Berkeley psychologist Edward Tolman who took the next step. He held that rats could learn about their environment from exploration alone, without external reinforcement, and posited that rats developed a "cognitive map" of the spatial layout of the maze that was not tied to any specific pattern of motor responses. In a series of experiments in which rats had to perform novel actions to get to the food, such as swimming or managing to put together a sequence of rolling or rotating movements enabling them to turn right after their ability to turn right directly had been surgically removed, his hypothesis held up (Tolman, 1948). Tolman came to see rats as purposive creatures whose cognitive maps enabled them to pursue goals in a manner he called "autonomous": They can extract information from their confined experience of the maze to build general-purpose mental representations that give them an ability to shift the way they pursue their goals without new incentives.

Ironically, it was Tolman's concern with autonomy, not just running down the channels laid down for him, that led him to refuse to sign the loyalty oath required by the Regents of the University of California. Despite his eminence as a researcher, his refusal to sign the loyalty oath on grounds of academic freedom led the McCarthy-inspired Regents to seek his dismissal in the early 1950s. Tolman, of course, had the scientific accomplishments needed to obtain a position elsewhere, and he was able to continue his research at McGill University in Canada. But his commitment to autonomy wouldn't let him stop there, and he sued the California Regents in Tolman v. Underhill (1955), in which the California Supreme Court struck down the loyalty oath and ordered Tolman's reinstatement. Today, the building on the Berkeley campus built to house the departments of psychology and education is named Tolman Hall—a monument to standing up for the freedom to pursue one's own paths.

Tolman's cognitive maps, like the notions of "autonomy" and "purposive behaviorism" that went with them, were greeted with great skepticism by hard-core behaviorists, who remained steadfastly loyal to the stimulus-response, ingrained motor-pattern model. Direct experimental testing of Tolman's ideas of rat mentation would await the development of sophisticated neuroscientific techniques seven decades later. And when testing did come, Tolman's "cognitive maps" hypothesis turned out to be true, in spades. As a rat explores the environment, "place cells" in the hippocampus and "grid cells" in the entorhinal cortex respectively construct relational and absolute maps of the environment. The system keeps track of where the animal currently is, but also represents a totality of possible locations (Ainge, Tamosiunaite, Worgotter, & Dudchenko, 2012; Derdikman & Moser, 2010; Langston et al., 2010). These maps have substantial independence from direct experience and show repeated activation when the rat is awaiting its chance to run the maze or during rapid eye movement (REM) sleep after a day of training in the maze (Ji & Wilson, 2007). During these episodes, activation preferentially occurs in the areas the rat spent less time in, and follows backward as well as forward trajectories. These are patterns you would expect if animals make use of past experience to build a richer representation of the world behind experience, and the opposite of what you'd expect if animals operated by associationist principles. And Lashley's rat was finally explained: During REM sleep, activation of the maze map includes the construction of shortcut paths across the grid, so that these actions can be readily available should circumstances permit them (Gupta, van der Meer, Touretzky, & Redish, 2010). Moreover, in the aftermath of these periods of brain activity simulating the running of the maze, performance improves: Learning takes place without external reinforcement, again, the opposite of what the behaviorists for so long preached must obviously be the case. Disrupt these periods of "off-line" activation, and performance deteriorates (Ward et al., 2009), a result that has been duplicated with human sleep and learning (Stickgold, 2013).

Now, the real test of the idea of prospection is: Does a rat actually running the maze consult the map prospectively when making its way through? Does it engage in prospective guidance by simulating alternate possible paths and using evaluative information to select among them? Do mapping and expected value estimation combine in the rat mind to yield genuinely prospective guidance? David Redish and colleagues at Minnesota tested this idea by watching activation patterns in the rat's cognitive maps as the rats reached choice points during the period when they were actively learning the maze. They found that prior to the rat turning left or right, while still poised at the junction, activation in its mental map spread alternately down the two arms of the maze, ahead of the rat's current location. These "sweeps" of possible pathways appear to serve as prospective models of possible actions that afford a projective frame to guide evaluation and action selection (Johnson & Redish, 2007; Johnson, van der Meer, & Redish, 2007). Without leaving the choice point, the rat has "looked down" both of the paths; and drawing on what it learned thus far about the probability and value of the rewards it received in prior exploration, it elects the arm of the maze with the higher expected value. Rats are not only good at Bayesian learning of probabilities and values, they are good at modeling this information in a spatially mapped array of possible actions to guide actual choices in line with maximizing expected value.
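
Schematically (and without claiming this is Redish's analysis), a "sweep" can be rendered as simulation through a learned map: evaluate each arm by mentally running down it and propagating back the value found there. The maze structure and reward numbers below are invented for illustration:

    # A toy map: each location leads to further locations; terminal
    # locations carry reward estimates learned from prior exploration.
    maze = {
        "junction": {"left": "arm_L", "right": "arm_R"},
        "arm_L": {"ahead": "food_L"},
        "arm_R": {"ahead": "food_R"},
    }
    value_at = {"food_L": 1.0, "food_R": 2.5}   # learned reward estimates

    def sweep(state):
        """Value of the best path from state, found by simulation alone."""
        if state in value_at:                    # a remembered reward site
            return value_at[state]
        options = maze.get(state, {})
        if not options:
            return 0.0
        return max(sweep(nxt) for nxt in options.values())

    # Without leaving the junction, compare the two simulated paths.
    best_turn = max(maze["junction"],
                    key=lambda arm: sweep(maze["junction"][arm]))  # "right"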

A key task for testing such a hypothesis about the guidance of animal behavior by causal modeling and expected value computations comes with foraging. Mammals need to gain their nutrition from the environment around them, and prior to the human invention of agriculture, this called for hunting and gathering (like our ice fisherman from Chapter 1). Foragers need to figure out the shape of their environment, the location and reliability of possible sources of food, the costs and risks of obtaining food from the different sources, the balance of nutritional needs, the trade-offs between exploring for new resources and exploiting known resources, and so on. Ethologists have observed that mammals and other intelligent species are able to develop nearly optimal foraging patterns via sampling the physical and social environment for food as well as other vital resources, such as partners for cooperation and mating (Dugatkin, 2004). An account of the mechanisms by which they do this has been missing. In effect, the animals face an optimal control problem, and systems theory tells us that we should look to model-based control as an effective way of solving such optimization-under-constraint problems (Braun, Nagengast, & Wolpert, 2011; Conant & Ashby, 1970). The very machinery that neuroscience has been discovering is ideally suited to enabling animals to forage effectively in the real world.
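
The explore/exploit trade-off mentioned above is the classic "multi-armed bandit" problem. An epsilon-greedy forager is one simple strategy for it, sketched here with invented patch payoffs and with no suggestion that animals literally implement this rule:

    import random

    # Three hypothetical food patches with unknown payoff probabilities.
    true_payoff = {"patch_A": 0.2, "patch_B": 0.5, "patch_C": 0.8}

    estimate = {p: 0.0 for p in true_payoff}   # learned value per patch
    visits = {p: 0 for p in true_payoff}
    epsilon = 0.1          # fraction of visits spent exploring at random

    for step in range(2000):
        if random.random() < epsilon:              # explore
            patch = random.choice(list(true_payoff))
        else:                                      # exploit the best estimate
            patch = max(estimate, key=estimate.get)
        reward = 1.0 if random.random() < true_payoff[patch] else 0.0
        visits[patch] += 1
        # Running-average update: another (*)-style error reduction.
        estimate[patch] += (reward - estimate[patch]) / visits[patch]

    # Over time the forager concentrates on patch_C while still sampling
    # the others often enough to track a changing environment.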

And what about humans? Recent experiments in which human subjects face simulated foraging tasks, involving risk and money, rather than food and predation, indicate that given time to explore and sample, humans can also develop nearly optimal foraging patterns (Kolling, Behrens, Mars, & Rushworth, 2012) and can use the volatility of rewards to adjust decision-making for uncertainty in an optimal manner (Behrens, Woolrich, Walton, & Rushworth, 2007).

Model-based control theory has also come to play a central role in the study of skilled movement in animals and humans. Elite athletes appear to differ from merely excellent athletes not in the speed of their reflexes, ability to jump, or degree of training of basic motor patterns. The crucial difference appears to be that they possess more detailed and accurate models of complex movements and competitive situations, which allow them to get the drop on their rivals (e.g., placing a tennis shot where it can't be returned, identifying a fast-emerging scoring opportunity before the opponent can spot and close it) and to achieve more efficiency and effectiveness in exploiting the body's resources (just how to take off and twist in a high jump) (Yarrow, Brown, & Krakauer, 2009). Intriguingly, the elite athletes continue to show variability in large-scale motor performance (swinging a bat, golf club, or racquet) even when some of the small-scale components of such performance have achieved a high level of consistency (Yarrow et al., 2009). It appears that they are constantly, implicitly, experimenting. Famously, artisans expert at making cigars showed continued improvement in performance even after rolling cigars for 7 years (Crossman, 1959). Finally, expert models in competitive sports and games must include accurate evaluative information, because successful competitors must make trade-offs involving risks, benefits, and costs.

We are still at the beginning of an emerging understanding of how animals and humans are able to construct and use evaluative-causal models to perform effectively and efficiently in the face of the challenges facing them. But thanks to research in psychology and neuroscience, underpinned by a solid foundation in philosophy and systems theory, for the first time we are getting a unified account of how this might actually work, whether in a foraging field mouse, a skilled athlete, or an excellent diagnostic physician.

 