Efficient Coding in Visual Short-Term Memory: Evidence for an Information-Limited Capacity

Timothy F. Brady (tfbrady@mit.edu), Talia Konkle (tkonkle@mit.edu), George A. Alvarez (alvarez@mit.edu)
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA

Abstract

Previous work on visual short-term memory (VSTM) capacity has typically used patches of color or simple features drawn from a uniform distribution, and estimated the capacity of VSTM to be 3-4 items (Luck & Vogel, 1997). Here, we introduce covariance information between colors and ask whether VSTM can take advantage of this redundancy to form a more efficient representation of the displays. We find that observers can successfully remember 5 colors on these displays, significantly more than the 3 colors remembered when the displays were changed to be uniformly distributed in the final block of the experiment. We suggest that quantifying capacity in terms of the number of objects remembered fails to capture factors such as object complexity or statistical redundancy, and that information-theoretic measures are better suited to characterizing the capacity of VSTM. We use Huffman coding to model our data, and demonstrate that the data are consistent with a fixed VSTM capacity in bits rather than in number of objects.

Keywords: Visual short-term memory; Working memory; Information theory; Memory capacity

Introduction

It is widely accepted that observers are highly sensitive to statistical regularities in the world. This capacity has been used to explain effects from speech segmentation to the emergence of visual objects (Saffran, Aslin & Newport, 1996; Turk-Browne, Isola, Scholl, & Treat, in press). Such regularities also provide an opportunity for memory systems to form more efficient representations by eliminating redundancies. This may be especially important for visual short-term memory, which is known to have a severely limited capacity.

Previous work on VSTM capacity suggests that observers can remember about four objects, independent of the number of features remembered per object (Luck & Vogel, 1997; Vogel, Woodman, & Luck, 2001). In one experiment, observers were shown lines of different colors and orientations. When required to remember either color or orientation alone, they could remember 4 items. Surprisingly, when required to remember both color and orientation, observers could still remember 4 items. In fact, performance was the same even when observers had to remember up to four features per object. These data suggested that the amount of information remembered per object is not a limiting factor in memory, and that memory capacity instead depends only on the number of objects to be remembered, consistent with the idea of 'chunks' proposed by Miller (1956) and Cowan (2001).

However, it has recently become clear that there is a serious cost in memory performance for increasing the information content of an object (e.g., objects with multiple colors that need to be stored; Wheeler & Treisman, 2002). This suggests that visual short-term memory (VSTM) cannot hold an unlimited amount of information just because that information has been bound to a single object. Alvarez and Cavanagh (2004) proposed an alternate framework that specifically takes into account the amount of information needed to represent each object. They demonstrated that while observers can remember up to four simple objects, they can remember only 1 or 2 complex objects, presumably because a greater amount of information is required for the complex objects to be remembered well enough to succeed at test.

However, because of the nature of the real-world objects used in their task, Alvarez and Cavanagh (2004) could not measure the true (information-theoretic) information content of their stimuli. In the present study, we had observers remember color patches, because it is possible to exactly quantify the information content of these stimuli in bits (Shannon, 1948). We varied the amount of information per stimulus not by changing the physical appearance of the patches, but by changing the probability of their co-occurrence. Introducing statistical redundancy reduces the amount of information needed to encode the items in the display. This manipulation enabled us to directly compare VSTM models that propose a capacity limit in terms of a fixed number of objects versus a fixed amount of information (in bits).

First, we conduct two behavioral experiments, in which we draw stimuli from a uniform distribution (Experiment 1) or from a distribution containing covariance information between presented colors (Experiment 2). Next, using a hierarchical Bayesian model of the learning process and a Huffman encoding scheme, we show that a computational model can predict VSTM performance.
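To make the connection between statistical redundancy and bits concrete, the sketch below computes the Shannon entropy and the average Huffman code length for two color distributions: a uniform one, where no compression is possible, and a redundant one in which colors co-occur in frequent pairs. The eight-color alphabet and all probabilities are hypothetical values chosen for illustration; this is a minimal demonstration of the encoding idea, not the authors' actual model or stimulus parameters.

```python
# A minimal sketch of the information-theoretic idea, not the authors' model.
# Shannon entropy gives the minimum average bits per symbol, and a Huffman
# code comes within one bit of that bound. All probabilities below are
# hypothetical, chosen only to illustrate the effect of redundancy.
import heapq
import math
from itertools import count

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def huffman_code_lengths(probs):
    """Return {symbol: code length in bits} for an optimal Huffman code."""
    tiebreak = count()  # breaks probability ties without comparing symbol lists
    heap = [(p, next(tiebreak), [sym]) for sym, p in probs.items()]
    heapq.heapify(heap)
    lengths = dict.fromkeys(probs, 0)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for sym in syms1 + syms2:   # each merge adds one bit to these codes
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, next(tiebreak), syms1 + syms2))
    return lengths

# Uniform case: 8 equally likely colors -> 3 bits per color, no savings.
uniform = {c: 1 / 8 for c in "ABCDEFGH"}

# Redundant case: colors tend to co-occur in pairs, so encoding the pairs
# jointly exploits the covariance between items in the display.
pairs = {"AB": 0.35, "CD": 0.35, "EF": 0.10, "GH": 0.10,
         "AC": 0.025, "BD": 0.025, "EG": 0.025, "FH": 0.025}

for name, dist in [("uniform colors", uniform), ("color pairs", pairs)]:
    lengths = huffman_code_lengths(dist)
    avg = sum(dist[s] * lengths[s] for s in dist)
    print(f"{name}: entropy = {entropy(dist):.2f} bits, "
          f"Huffman average = {avg:.2f} bits")
```

With these illustrative numbers, each uniform color costs 3.00 bits, while the pair code averages about 2.35 bits per pair (entropy 2.26), roughly 1.2 bits per color: the same physical patches require far fewer bits once their co-occurrence statistics are exploited.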
Experiment 1: Uniform Displays

We first assessed the capacity of VSTM for colors drawn from a uniform distribution. This allowed us to estimate the number of bits of color information people can remember under circumstances where no compression is possible.
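Since no compression is possible under a uniform distribution, each of N equally likely colors carries exactly log2(N) bits, and the resulting estimate is simply the number of items remembered times the bits per item. A toy calculation follows; the eight-color alphabet and the three-item capacity are assumptions for illustration, not the measured values:

```python
# Back-of-the-envelope capacity estimate for uniform displays. The 8-color
# alphabet and the 3-item capacity are illustrative assumptions; the actual
# stimulus set and estimates come from the experiment itself.
import math

n_colors = 8          # hypothetical number of equally likely colors
items_remembered = 3  # approximate capacity for uniform displays (see Abstract)

bits_per_item = math.log2(n_colors)            # uniform: log2(N) bits per color
total_bits = items_remembered * bits_per_item  # total color information retained
print(f"{bits_per_item:.0f} bits/item x {items_remembered} items = {total_bits:.0f} bits")
```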
References

Alvarez, G. A., & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science.

Brady, T. F., & Oliva, A. (2008). Statistical learning using real-world scenes: Extracting categorical regularities without conscious intent. Psychological Science.

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision.

Conway, C. M., & Christiansen, M. H. (2005). Modality-constrained statistical learning of tactile, visual, and auditory sequences. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences.

Hauser, M. D., Newport, E. L., & Aslin, R. N. (2001). Segmentation of the speech stream in a non-human primate: Statistical learning in cotton-top tamarins. Cognition.

Huffman, D. A. (1952). A method for the construction of minimum-redundancy codes. Proceedings of the IRE.

Kirkham, N. Z., Slemmer, J. A., & Johnson, S. P. (2002). Visual statistical learning in infancy: Evidence for a domain general learning mechanism. Cognition.

Loftus, G. R., & Masson, M. E. J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin & Review.

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review.

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision.

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs.

Turk-Browne, N. B., Isola, P. J., Scholl, B. J., & Treat, T. A. (in press). Multidimensional visual statistical learning. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Turk-Browne, N. B., Jungé, J. A., & Scholl, B. J. (2005). The automaticity of visual statistical learning. Journal of Experimental Psychology: General.

Verghese, P., & Pelli, D. G. (1992). The information capacity of visual attention. Vision Research.

Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of features, conjunctions and objects in visual working memory. Journal of Experimental Psychology: Human Perception and Performance.

Wheeler, M. E., & Treisman, A. M. (2002). Binding in short-term visual memory. Journal of Experimental Psychology: General.