VSTM and perceptual integration

A second proposal regarding the function of VSTM is that VSTM supports the integration of perceptual information across disruptions in visual input (e.g., Brockmole, Irwin, & Wang, 2002; Irwin, 1992). In particular, VSTM has been proposed to play a central role in the integration of visual information across saccadic eye movements. In this view, as attention and the eyes are directed to objects in scenes, information from the attended target of the next saccade (and perhaps one or two additional objects) is consolidated into VSTM. Upon landing, newly acquired perceptual information is integrated with the stored information in VSTM. Support for this proposal has come from evidence that participants can remember properties of the saccade target object in VSTM across a saccade (Irwin, 1992; Irwin & Andrews, 1996) and that a preview of an object prior to a saccade leads to speeded naming of that object when the eyes land (Henderson & Anes, 1994; Henderson, Pollatsek, & Rayner, 1987; Pollatsek, Rayner, & Collins, 1984). Although these effects certainly demonstrate that visual representations can be stored in VSTM across an eye movement, they do not necessarily indicate that VSTM is used to integrate perceptual information available on separate fixations into a composite representation. Moreover, given the very limited capacity of VSTM (one or two natural objects during scene viewing; Hollingworth, 2004), any possible integration would have to be minimal and local; VSTM certainly could not support any large-scale integration of scene information.

A few studies have directly examined the role of VSTM in visual integration. It is well established that visible persistence integrates with a trailing stimulus if the stimulus onset asynchrony (SOA) between the two stimuli is very short (< 80 ms). For example, Di Lollo (1980) sequentially displayed two arrays of dots in a grid pattern. In the first array, half of the grid cells contained dots. In the second array, dots filled all but one of the cells that were unfilled in the first array. Thus, across the two arrays, exactly one grid cell never contained a dot, and the task was to specify the location of this “missing dot”. At very short SOAs, the visible persistence of the first array integrated with perceptual processing of the second, and participants saw a single array with all but one cell filled, which made the task very easy to perform. However, at slightly longer SOAs, no such integration was observed, likely due to masking of the first array by the second.
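
To make the logic of the missing-dot task concrete, the following sketch simulates the two displays and the integration account (a minimal sketch in Python; the 5 x 5 grid and 12 dots per array match the description above, but the specific parameters and function names are illustrative assumptions, not Di Lollo's procedure):

import random

# 5 x 5 grid of cell coordinates (an illustrative parameter choice).
GRID = [(row, col) for row in range(5) for col in range(5)]

def make_arrays(rng=random):
    """Build the two dot arrays, leaving exactly one cell empty overall."""
    cells = GRID[:]
    rng.shuffle(cells)
    array1 = set(cells[:12])       # first array: roughly half the cells
    remaining = cells[12:]         # the 13 cells left empty by array 1
    missing = remaining[0]         # the one cell empty in both arrays
    array2 = set(remaining[1:])    # second array: the other 12 cells
    return array1, array2, missing

def integrate(array1, array2):
    """Integration account: fuse the two displays into a composite.

    If visible persistence of the first array overlaps with processing
    of the second, the observer effectively sees the union of the two,
    and the single empty cell simply pops out.
    """
    (missing,) = set(GRID) - (array1 | array2)
    return missing

array1, array2, true_missing = make_arrays()
assert integrate(array1, array2) == true_missing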

Brockmole et al. (2002) extended this approach to examine integration at SOAs likely to be supported by VSTM. At long SOAs (greater than 1,000 ms), performance on the missing-dot task increased significantly, returning to levels similar to those observed at very short SOAs, when perceptual integration is known to occur. Brockmole et al. concluded that VSTM can indeed support perceptual integration. However, Hollingworth, Hyun, and Zhang (2005) and Jiang, Kumar, and Vickery (2005) found that, at long SOAs, the task typically is performed not by integrating information in VSTM but, rather, by comparing memory for the empty cells of the first array with the occupied locations in the second array (the one empty cell from the first array that does not contain a dot in the second array is the location of the “missing dot”). This alternative is consistent with a general role for VSTM in perceptual comparison, reviewed subsequently. Although the results of Hollingworth et al. and Jiang et al. do not rule out the possibility that participants can solve the missing-dot task by integration in VSTM, high levels of performance at long SOAs cannot be taken as strong evidence of such integration. In summary, although VSTM could potentially support the integration of scene information, little direct evidence for integration in VSTM has been found, and the highly limited capacity of VSTM dictates that any potential for integration must also be highly limited.
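
Continuing the sketch above, the comparison strategy can be expressed without ever forming a composite: the observer holds the empty cells of the first array in VSTM and checks which of them remains unoccupied in the second array. This is a hedged, schematic rendering of the strategy Hollingworth et al. and Jiang et al. describe; the function and variable names are hypothetical.

def compare(array1, array2):
    """Comparison account: no composite image is ever formed.

    Memory for the empty cells of the first array is compared against
    the occupied locations of the second; the one remembered empty cell
    that the second array fails to fill is the missing dot.
    """
    empty_after_first = set(GRID) - array1   # locations held in VSTM
    (missing,) = empty_after_first - array2
    return missing

# Both strategies pick out the same cell, which is why high accuracy at
# long SOAs cannot, by itself, distinguish integration from comparison.
assert compare(array1, array2) == integrate(array1, array2)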
