Inter-Modal Sensory Redundancy

Moreno and Mayer (2000) theorized a redundancy principle concerning the integration of similar information in two modal forms within a single message to reinforce that information. The same kind of principle applies to the theory of multimodal neuron processing. Bizley and King (2012) describe the visual-auditory link and how auditory information helps to process visual information, with both facilitated via multimodal neurons. Newell (2004) echoes this finding. A basic principle of cognition is the reinforcement of information, and this model acknowledges that reinforcement with this principle.

Generally, it appears that neurons associated with the first sense engaged attempt to process information toward cognition; however, processing benefits from the involvement of additional senses. This may be why multimodal neurons exist and become active with sensory information from various modes of representation. Multimodal neurons facilitate the processing of multimodal information, which, as Arnheim (1969) explains, produces a Gestalt effect—a more complete picture of the information. Further, many neurons appear to converge in a portion of the mid-brain (Clemo, Keniston, and Meredith, 2012; King and Calvert, 2001). Neurons of all types—unimodal and multimodal—intersect or come to a point where they seem to intersect (Clemo, Keniston, and Meredith, p. 5). While different cortices are associated with particular stimuli, the neurons all follow paths into the mid-brain. Clemo et al. report on studies of cats and monkeys, but they generalize these findings to humans.

The neurons engaged initially process the information, but neurons associated with other modes reinforce or further define it, much as Moreno and Mayer (2000) theorized. For example, visual information shows what an object or abstract concept is or looks like, while other modes contribute to refining the definition or composition of that object. In the example Moreno and Mayer provide—learning about lightning—an image of a house, clouds, and lightning is presented, and text labels various attributes of the process involved in generating lightning. The pictures of the objects convey some of the information, but without the text it would be difficult to comprehend the process fully. The study group that received the animation-and-narration combination (visual-audio) learned the information better than the group that received animation and text (strictly visual).

This also reinforces what Mitchell (1995) states about “ekphrasis,” a concept relating to optimal combinations of modes toward best articulating a message. Information related to a single sense can facilitate cognition; however, information from multiple modes facilitates cognition better. Perrault, Rowland, and Stein (2012) call this “multi-sensory enhancement” or “multi-sensory synergy.” Connecting neurophysiology to behavior, they observe that multisensory inputs tend to “elicit more vigorous responses than are evoked by the strongest of them individually” (p. 281).

This principle is echoed throughout the cases presented in subsequent chapters. For example, Chapter 5 discusses it relative to simulators and how they engage visual as well as tactile and spatial senses. Flight simulators, specifically, place a student pilot in the environment in which he or she would operate and allow that student to experience all the sensory stimuli a pilot flying a particular aircraft would experience.
