Journal of Experimental Psychology:
Learning, Memory, and Cognition
2009, Vol. 35, No. 6, 1545–1551
© 2009 American Psychological Association
0278-7393/09/$12.00 DOI: 10.1037/a0017010
No Evidence for Temporal Decay in Working Memory
Stephan Lewandowsky, University of Western Australia
Klaus Oberauer, Universität Zürich
What drives forgetting in working memory? Recent evidence suggests that in a complex-span task in
which an irrelevant processing task alternates with presentation of the memoranda, recall declines when
the time taken to complete the processing task is extended while holding the time for rehearsal in between
processing steps constant (Portrat, Barrouillet, & Camos, 2008). This time-based forgetting was interpreted as support for the role of time-based decay in working memory. In this article, we argue the
contrary position by (a) showing in an experiment that the processing task in Portrat et al.’s (2008) study
gave rise to uncontrolled post-error processes that occupied the attentional bottleneck, thus preventing
restorative rehearsal, and (b) showing that when those post-error processes are statistically controlled,
there is no evidence for temporal decay in Portrat et al.’s study. We conclude that currently there exists
no direct evidence for temporal decay in the complex-span paradigm.
Keywords: working memory, short-term memory, forgetting

Stephan Lewandowsky, School of Psychology, University of Western Australia, Crawley, Australia; Klaus Oberauer, Department of Psychology, Universität Zürich, Zurich, Switzerland.

Preparation of this article was facilitated by a Discovery Grant from the Australian Research Council and an Australian Professorial Fellowship to Stephan Lewandowsky, and a Linkage International Grant from the Australian Research Council to Stephan Lewandowsky, Klaus Oberauer, Simon Farrell, and Gordon Brown.

Correspondence concerning this article should be addressed to Stephan Lewandowsky, School of Psychology, University of Western Australia, Crawley, Western Australia 6009, Australia. E-mail: [email protected]
There has been much recent interest in the variables determining
forgetting over the short term. Why do we forget a stranger’s name
only seconds after being introduced? Why do we transpose digits
so readily between looking up a phone number and dialing it?
Does forgetting occur through the mere passage of time, by some
passive decay process, or does it require interference from subsequent events that disrupt the memory trace? Understanding forgetting is crucial because it underpins the pervasive and well-known
capacity limitations of short-term memory (STM) or working
memory (WM). This capacity limitation, in turn, is a strong predictor of people’s higher level cognitive abilities (e.g., Oberauer,
Süß, Wilhelm, & Sander, 2007), suggesting that understanding
forgetting over the short term will ultimately offer a window into
the very core of cognition.
To date, the issue of forgetting has been studied largely independently in two arenas—STM versus WM—that, despite their
obvious empirical and theoretical linkages, have been pursued in
parallel and without much attempt at integration. In the STM
literature, there has been considerable recent interest in the underlying causes of forgetting (e.g., Berman, Jonides, & Lewis, 2009;
Lewandowsky, Duncan, & Brown, 2004; Oberauer & Lewandowsky, 2008), with the debate focusing on whether memories
inexorably fade over time (e.g., Burgess & Hitch, 2006; Page &
Norris, 1998) or whether they may persist over time unless inter-
fered with by subsequent events (e.g., Lewandowsky & Farrell,
2008; Nairne, 1990). Recent evidence favors the latter alternative.
For example, Berman et al. (2009) showed that no longer relevant
materials persist in STM over time. They exploited the fact that in
short-term recognition, negative probes that were study items on
the preceding trial were rejected considerably more slowly than
novel lures, reflecting their persisting familiarity. Berman et al.
found that the persistent familiarity of no longer relevant items
diminished only negligibly when the intertrial interval was increased from 0.3 to 10 s or more, whereas it was eliminated by
insertion of a single intervening study-test trial of equal (10-s)
duration. Similarly, Lewandowsky et al. (2004) manipulated retention time during recall by training participants to recall at three
different speeds (0.4, 0.8, and 1.6 s per item). Rehearsal, which
could counteract decay, was blocked by articulatory suppression
(i.e., overt articulation of an irrelevant word). Although recall of
the last item was delayed by over 5 s at the slowest compared with
the fastest speed, this added delay caused no appreciable decrement in performance. This result has been replicated with children
as participants (Cowan et al., 2006), and it has been found to
persist even if a further, attention-demanding secondary task is
introduced during retrieval (thus blocking even attentional forms
of rehearsal; Oberauer & Lewandowsky, 2008).
In the WM literature, by contrast, recent evidence appears to
implicate time-based decay. This is best illustrated within the
framework of the time-based resource sharing model (TBRS;
Barrouillet, Bernardin, & Camos, 2004), which relies entirely on
decay to explain forgetting in the complex-span task that is at the
core of much WM research. In a complex-span task, a processing
task (e.g., reading a sentence, solving an arithmetic equation, or
performing a choice task) alternates with encoding of to-be-remembered items. Thus, people might be presented with a sequence such as 2 + 3 = 5?, A, 5 + 1 = 7?, B, . . . in which the
equations have to be judged for correctness, and the letters must be
memorized for immediate serial recall after the sequence has been
completed. Not surprisingly, the additional processing task impairs
memory performance, and typical WM span values are considerably lower than STM spans. Is this impairment due to the additional time taken up by the processing task—thus delaying recall
and creating an opportunity for decay—or is it due to the interference associated with performing another task? The TBRS
firmly endorses the former option, and it does so in an elegant and
intriguing manner. In particular, the TBRS does not necessarily
predict increased forgetting as the duration of the processing task
is extended. The TBRS assumes the presence of an attentional
mechanism that can be deployed for multiple purposes, but it
cannot handle more than one attention-demanding task at a time
and, therefore, creates a bottleneck. This attentional mechanism is
thought to rapidly alternate between “refreshing” of the memory
traces and performing the processing task. Thus, although memory
is thought to decay while people engage in the processing task, the
model also postulates that brief pauses in between processing steps
can be used for attentional refreshing. Forgetting is thus thought to
be a function not of absolute duration of the processing period in
between presentation of two memory items but of the proportion of
that time that the attentional mechanism is actually devoted to the
processing task. This proportion is referred to as cognitive load and
is a joint function of the pace at which individual processing
operations are demanded and the duration of these operations.
When operations take a long time to complete and are required at a fast
pace, such that there is little or no time between successive
operations, then the model must predict time-based forgetting.
However, if there is ample time for refreshing in between processing steps (we refer to this time period as restoration time from here
on), then the model predicts little forgetting because decay during
the processing time can be counteracted by refreshing during the
restoration time.
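In the spirit of this account (our notation, not a formula taken verbatim from Barrouillet et al., 2004), cognitive load can be written as the proportion of the interitem interval during which attention is captured by processing:

\[
CL = \frac{\sum_{i=1}^{N} a_i}{T},
\]

where \(a_i\) is the attentional capture duration of the \(i\)th of \(N\) processing operations and \(T\) is the total time between two memory items; \(T - \sum_i a_i\) is the restoration time available for refreshing. On this formulation, forgetting should track \(CL\), not \(T\) itself.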
The core piece of evidence for the TBRS consists of the pervasive finding that complex span is indeed a function of cognitive
load, that is, of the balance between processing duration and
restoration time, exactly as predicted by the model (e.g., Barrouillet et al., 2004; Barrouillet, Bernardin, Portrat, Vergauwe, &
Camos, 2007). In particular, there is no doubt that increasing
restoration time in between processing steps increases memory
performance, all other variables being equal. For example, in
Barrouillet et al.’s (2004) Experiment 7, span increased from
around 2.5 to nearly 4 when the total restoration time interspersed
between 12 processing steps was increased from 840 ms to nearly
5 s. This finding demonstrates that pauses in between processing
steps can indeed be used to improve memory for the list items, thus
counteracting short-term forgetting. The beneficial effect of extending restoration time does not, however, have any implications
for the source of forgetting, as was first noted by Oberauer and
Kliegl (2006). Forgetting could be caused by decay, but it could
equally be caused by interference. Regardless of whether traces are
degraded by decay or interference or some other cause, some type
of restoration process appears mandated by the observed beneficial
effects of additional restoration time.
Stronger evidence for decay as the cause of short-term forgetting would come from a complementary manipulation of cognitive
load, in which the duration of each processing step is varied while
holding everything else constant. The first evidence that directly
bears on this issue was provided by Portrat, Barrouillet, and Camos
(2008). They tested participants on a complex-span paradigm in
which encoding of letters alternated with a “burst” of several trials
of a spatial judgment task. Portrat et al. varied the processing
duration of each spatial judgment while holding the number of
judgments and the restoration time following each choice constant.
The methodology is summarized by the timelines of events in the
top two panels of Figure 1.
The judgment task involved a decision about the location of a
square (i.e., whether it was in the upper or lower portion of the
screen). The difficulty of the task—and hence processing duration—was manipulated via the spatial proximity of the two possible stimulus locations. When the two locations were close together, processing duration was longer than when the two locations
were further apart. Processing duration was measured by the
latency of individual judgments (recorded by participants pressing
one of two keys). Judgment difficulty was manipulated between
trials, so that on trials with difficult judgments, the retention
interval was up to 5 s longer (for the first item on 8-item lists) than
on trials with easy judgments. This timing manipulation engendered a small but significant decrement in memory performance of
around 4 percentage points. At first glance, as shown by the
hypothesized evolution of memory strength in Panels A and B of
Figure 1, this result seems to provide evidence for time-based
forgetting in a complex-span task. However, as we show next,
there are reasons to doubt this conclusion.
The crucial manipulation in the study by Portrat et al. (2008)
consisted of the difficulty of the processing task: In addition to the
intended increase in total processing duration, another consequence of the difficulty manipulation was a considerable increase
in error rates on the processing task itself (from 1% to 13%). We
suggest that this increase in error rate introduced a confound that
prevents an unambiguous interpretation of the results.
There is much consensus in the literature that responses following an error are considerably slower than responses that are preceded by a correct response (e.g., Laming, 1979). This post-error
slowing is presumed to arise because people often self-detect an
erroneous response, even in the absence of overt feedback. More
recently, Jentzsch and Dudschig (2008) analyzed the cause of this
pervasive post-error slowing and showed that it arises from a
process that follows after an error response (viz., evaluating the
response and making appropriate adjustments), which temporarily
occupies a central attentional bottleneck. We call this an attentional postponement effect because it delays processing of the next
stimulus (provided it follows in rapid succession) while post-error
processing occupies the bottleneck.
This result is of considerable relevance in the present context
because it provides an alternative explanation for the finding of
Portrat et al. (2008), which is sketched in Panels C and D of
Figure 1. A crucial premise in Portrat et al.’s argument was that
they held the restoration time after each processing step constant,
thus manipulating the potential time for decay independently of the
time available for restoring memory traces. This premise holds
only if the attentional bottleneck is free to refresh memory traces
as soon as participants enter their response to the processing
stimulus (i.e., the location judgment). The results of Jentzsch and
Dudschig (2008) clearly show that this cannot be the case on error
trials. It follows that because the difficult spatial judgments led to
more errors than the easy ones, they more often engendered
post-error processing (labeled PE in Figure 1) that occupied the
bottleneck during the restoration time following the overt response, thereby shortening the time available for refreshing memory traces (represented by the narrower upward triangles in Panel D than in Panel C in Figure 1). By implication, the differences in memory performance between the easy and difficult processing tasks that were observed by Portrat et al. may have reflected differences in the time available for restoration by the attentional bottleneck rather than the differences in time during which memory traces could decay.

[Figure 1 schematic omitted; for each of Panels A–D it shows a timeline of events and the assumed evolution of memory strength. The caption follows.]

Figure 1. Timeline of events during one processing episode in between two memory items (M1 and M2) in a complex-span trial with a manipulation of processing difficulty (e.g., Portrat et al., 2008). Panels A and C show easy processing stimuli, and Panels B and D show difficult processing stimuli (downward pointing triangles, labeled P). Constant restoration times (shown by upward pointing triangles, labeled R) follow each processing stimulus. In each panel, the assumed evolution of memory strength of the M1 item is shown below the events. Panels A and B illustrate the assumption of the time-based resource sharing model; Panels C and D illustrate an alternative explanation not involving decay. A: During short processing durations, memory decays; in the following restoration time, traces are attentionally refreshed, thus fully compensating decay. B: During longer processing (more difficult distractors), memory decays more, such that the following refreshing episode of unchanged duration cannot fully restore memory. C: Distractors interfere with memory, reducing its strength independent of processing duration. In the following restoration time, memory traces are repaired. D: Difficult decisions take longer but cause no additional interference. Difficult decisions entail more errors, triggering post-error processing (downward pointing triangles, labeled PE) that occupies part of the restoration time. In the short remaining restoration time (peaked upward triangles), memory cannot be fully restored. See the text for further details.
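To make the alternative in Panels C and D concrete, the following sketch (our illustration with arbitrary parameter values, not a fitted model) shows how an interference account in which errors consume part of the restoration interval can mimic time-based forgetting:

```python
# Illustrative sketch only (our construction, not the authors' model):
# contrast a decay account with an interference account in which
# post-error processing consumes part of each restoration interval.

def final_strength(n_steps, proc_dur, restore_dur, p_error,
                   decay_rate=0.0, interference=0.0,
                   refresh_rate=0.10, post_error_cost=0.3):
    """Expected strength of one memory item after a burst of steps.

    decay_rate      -- strength lost per second of processing (decay account)
    interference    -- strength lost per processing step (interference account)
    post_error_cost -- seconds of restoration consumed by post-error processing
    """
    strength = 1.0
    for _ in range(n_steps):
        # loss while the processing task occupies the attentional bottleneck
        strength -= decay_rate * proc_dur + interference
        # restoration time, shortened in expectation when errors are likely
        usable = max(restore_dur - p_error * post_error_cost, 0.0)
        strength = min(1.0, strength + refresh_rate * usable)
    return strength

# Decay account: forgetting grows with processing duration per se.
print(final_strength(4, 0.34, 0.5, 0.01, decay_rate=0.4))     # easy
print(final_strength(4, 0.40, 0.5, 0.13, decay_rate=0.4))     # difficult

# Interference account: duration is irrelevant; the difficult condition is
# worse only because its higher error rate leaves less restoration time.
print(final_strength(4, 0.34, 0.5, 0.01, interference=0.12))  # easy
print(final_strength(4, 0.40, 0.5, 0.13, interference=0.12))  # difficult
```

With decay_rate set to zero and a constant interference cost per step, the difficult condition still ends up weaker, purely because its higher error rate (13% vs. 1% in Portrat et al., 2008) leaves less usable restoration time.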
In the remainder of this article, we first present an experiment
that establishes the presence of post-error processing in the distractor task used by Portrat et al. (2008). We then report a reanalysis of their data and show that when errors are statistically
controlled, the study by Portrat et al. provides no evidence for
temporal decay: Performance is a sole function of processing
accuracy, not processing time. We conclude that at present there is
no direct evidence for time-based forgetting in WM.
Establishing the Presence of Post-Error Processing
We next present an experiment that examines whether the
square-location task used by Portrat et al. (2008; see also Barrouillet et al., 2007) entailed a postponement effect resulting from
post-error processing. We therefore modeled our procedure on the
method of Portrat et al. and incorporated only a few changes
necessary to detect post-error processing. Analysis focused primarily on the consequences associated with committing an error.
Method
Participants and apparatus. The participants were 16 members of the campus community at the University of Western
Australia who participated voluntarily in the 7-min experimental
session. A Windows computer running a Matlab program designed
with the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997) was
used to present stimuli and for the recording of responses. The
same apparatus was used in all remaining experiments.
Materials. Stimuli consisted of an unfilled square with a black
boundary (18 × 18 mm) that was presented either in the top or
bottom half of the screen. Participants’ task was to determine
which half of the screen stimuli appeared in. For easy stimuli, the
two possible screen locations were 68 mm apart vertically,
whereas for difficult stimuli, the vertical separation was 15 mm.
In addition, each square was horizontally offset from the
screen’s center by a random amount ranging from 0.61 to 6.06 cm.
This offset was introduced because without it, judgments become
very easy when a difficult stimulus follows another difficult stimulus on the opposite half of the screen—in this case, because of the
short response–stimulus interval, observers could detect the
change as a vertical apparent motion from just above to just below
the midline. This judgment shortcut was prevented by the random
horizontal shifts of the squares between successive trials.
Procedure. Participants were presented with two blocks of
250 trials each. Following Portrat et al. (2008), all stimuli in a
block were either easy or difficult, with the order of the two blocks
randomly determined for each participant. On each trial, participants saw a single square that remained visible until a response
was made. Participants pressed the ?/ key to indicate a “below
centerline” response, and they used the Z key for the opposite
response. Each response was followed by presentation of the next
stimulus after 100 ms. The rapid succession of stimuli constituted
an important departure from the methodology of Portrat et al. and
was necessary to maximize the opportunity for detection of postponement effects. The total sequence of 500 trials was divided into
four segments that were separated by a self-paced break period.
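For concreteness, here is a minimal sketch of the stimulus schedule just described (our reconstruction in Python; the actual experiment ran as a Matlab program using the Psychophysics Toolbox, and all names below are our own):

```python
# Hypothetical sketch of the stimulus schedule (our reconstruction,
# not the original experimental code). Distances are in mm.
import random

def make_block(n_trials=250, difficult=False):
    """Generate square locations for one block of the location task."""
    vertical_sep = 15 if difficult else 68  # separation of the two locations
    trials = []
    for _ in range(n_trials):
        half = random.choice(("top", "bottom"))
        # random horizontal offset (6.1-60.6 mm, either side of center)
        # prevents the apparent-motion shortcut on difficult trials
        x_offset = random.uniform(6.1, 60.6) * random.choice((-1, 1))
        trials.append({"half": half,
                       "y_offset": vertical_sep / 2,  # from the midline
                       "x_offset": x_offset,
                       "rsi_ms": 100})  # response-stimulus interval
    return trials
```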
Results

Trials whose response time (RT) fell below 150 ms or exceeded a participant’s mean RT for that task and level of difficulty by more than three intraindividual standard deviations were eliminated from consideration (2% of all responses). The difficult condition gave rise to slightly lower accuracy (0.96) than the easy condition (0.97), and correct responses in the difficult condition were considerably slower (502 ms) than in the easy condition (411 ms). The remainder of the analysis focused on the consequences of errors on subsequent trials.

We considered two types of trial pairs: a correct response that was preceded by an error response (E–C), and a correct response preceded by another correct response (C–C). The mean RTs for the two pair types and both levels of difficulty are shown in Table 1.

Table 1
Mean Response Times (and Standard Errors) in Milliseconds Obtained in the Experiment as a Function of Difficulty and Trial Pair

Trial pair    Easy        Difficult
C–C           405 (11)    492 (24)
E–C           582 (31)    760 (75)

Note. C–C = a correct response preceded by another correct response; E–C = a correct response that was preceded by an error response.
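As an illustration, the screening and pair classification described above can be sketched as follows (our reconstruction; rts and correct stand for one participant’s per-trial response times and accuracies at one difficulty level):

```python
# Minimal sketch of the RT screening and trial-pair classification
# (our reconstruction of the analysis described in the text).
import statistics

def pair_means(rts, correct, floor_ms=150.0, sd_cut=3.0):
    """Mean RT for C-C and E-C trial pairs after trimming extreme RTs."""
    mean_rt = statistics.mean(rts)
    sd_rt = statistics.stdev(rts)
    # keep trials that are neither anticipations nor extreme outliers
    keep = [floor_ms <= rt <= mean_rt + sd_cut * sd_rt for rt in rts]
    cc, ec = [], []
    for i in range(1, len(rts)):
        if not (keep[i] and correct[i]):  # analyze retained correct trials
            continue
        (cc if correct[i - 1] else ec).append(rts[i])
    return statistics.mean(cc), statistics.mean(ec)
```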
A 2 × 2 (Difficulty × Trial Pair) within-subjects analysis of variance confirmed the obvious pattern in the table, with a significant main effect of difficulty, F(1, 15) = 8.18, MSE = 0.034, p ≅ .01, ηp² = .35, and a significant main effect of trial pair, F(1, 15) = 62.96, MSE = 0.012, p < .0001, ηp² = .81. The interaction failed to reach significance, F(1, 15) = 1.87, MSE = 0.019, p > .10, ηp² = .11.
Discussion
We preface our discussion by noting that the procedural differences between this experiment and the study by Portrat et al.
(2008) were mandated by the focus on post-error processing. Thus,
here there were no memoranda, and the stimuli followed each
other in rapid succession. The rapid succession, in turn, mandated
the introduction of a random horizontal offset of each stimulus. In
all other respects, the study was nearly identical to Portrat et al.’s
procedure.
The experiment suggests a very clear conclusion: The processing task used by Portrat et al. (2008) demonstrably entailed a
substantial attentional postponement effect, such that errors caused
a slowing of subsequent correct responses. The results of Jentzsch
and Dudschig (2008) have strongly implicated the attentional
bottleneck as the locus of this postponement effect. It follows that
the imbalance in error rates between conditions in Portrat et al.’s
study almost certainly entailed a differential engagement of the
attentional bottleneck during the restoration time, thus giving rise
to the appearance of time-based forgetting. To further test this
alternative possibility, we now present a re-analysis of Portrat et
al.’s (2008) data.
A Re-Analysis of Portrat et al. (2008):
No Time-Based Forgetting
We re-analyzed the results of Portrat et al. (2008) in two ways.
First, for comparability, we repeated their analysis of recall performance, including only trials on which participants made few
errors on the processing task. For the empirical reasons just reported, we expected these trials to involve less post-error processing that could diminish the available restoration time, and we
therefore expected the effect of the difficulty manipulation on
memory to be much reduced.
We included only trials on which performance fell at or above
an accuracy criterion of 0.84 on the processing task. This criterion
was chosen because it was the most stringent possible criterion that
still retained all participants in the analysis; use of more stringent
criteria would have eliminated an increasing number of participants, thus reducing statistical power and raising the possibility
that only well-performing participants contributed to the observed
pattern.
The mean correct-in-position recall performance (averaging
across all list lengths) was 0.855 and 0.852, respectively, for the
easy and difficult processing stimuli (this compares with 0.82 vs.
0.78 in the original unconditionalized analysis reported by Portrat
et al., 2008; see Footnote 1). The tiny difference in the conditionalized data was nowhere near statistically significant, notwithstanding the fact that the processing times continued to differ substantially—344.4 versus 403.2 ms per stimulus, for the easy and difficult location judgments, respectively. Thus, whereas the difference in time available for temporal decay was unchanged from the values presented by Portrat et al. (2008), the differences in memory performance disappeared upon (a very lenient) conditionalization on accuracy on the processing task.

Footnote 1. We thank Sophie Portrat for providing us with the raw data for this analysis. The re-analysis is based on the data from 22 participants; two further participants had to be eliminated from consideration here because data on their responses for the processing task were either corrupted or could not be unambiguously matched to their memory performance. Note that unlike Portrat et al. (2008), we did not collapse across list length but retained that important variable by computing a separate proportion correct for each participant and list length using the trials that satisfied our accuracy criterion for the processing task. Our conditionalization retained all trials for the easy processing stimuli, and it retained 50, 50, 44, 45, 36, and 40 trials, respectively, for List Lengths 3–8 with the difficult processing task.
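For concreteness, the conditionalization amounts to the following (a sketch under our assumptions about the data layout; all field names are hypothetical):

```python
# Sketch of the conditionalization (our assumptions about the data
# layout; field names are hypothetical, not from the original dataset).
from collections import defaultdict

def conditionalized_recall(trials, criterion=0.84):
    """Mean recall per (participant, list length), keeping only lists
    whose processing-task accuracy meets the criterion."""
    cells = defaultdict(list)
    for t in trials:
        if t["proc_accuracy"] >= criterion:
            cells[(t["subject"], t["list_length"])].append(t["recall"])
    return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}
```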
In a second re-analysis, we did not conditionalize on accurate
performance on the processing task but used both accuracy and
latency on the processing task as a predictor of recall performance.
Specifically, we entered participants’ correct-in-position recall
scores (averaged across serial positions and all replications for a
given list-length and processing-difficulty condition) as a dependent measure into a multilevel regression. Multilevel regression
(also known as hierarchical regression or mixed-effects modeling;
see, e.g., Pinheiro & Bates, 2000) permits an aggregate analysis of
data from all participants without confounding within- and
between-subjects variability, and it has been used previously to
analyze data on the role of time in STM (for details, see Lewandowsky, Brown, & Thomas, 2009; Lewandowsky, Brown, Wright,
& Nimmo, 2006; Lewandowsky, Nimmo, & Brown, 2008). One
advantage of multilevel regression is that it maximizes statistical
power because it enables researchers to use all available data on a
low level of aggregation and without the loss of information
incurred by dichotomizing continuous variables (for a discussion,
see Hoffman & Rovine, 2007).
In the present case, multilevel regression enabled us to use the
actual measured time spent on the processing task as a continuous
predictor, rather than using the experimental manipulation of difficulty as a dichotomous proxy. We predicted memory performance on each list from three variables: list length (squared to
capture its nonlinear effect), processing duration for the spatial
judgment task, and accuracy on the spatial judgment task (the latter
two were averaged across all processing stimuli within a list). If
our reasoning above is correct, memory performance should be
predicted by accuracy but not by time spent on the spatial judgment task. The results of this analysis are shown in Table 2 and are
unambiguous: In confirmation of the earlier conditionalization,
there was no effect of processing duration after processing accuracy was also entered into the analysis.
Table 2
Summary of the Re-Analysis of Portrat et al.’s (2008) Results via Multilevel Regression

Effect                  Estimate    SE        t(df = 21)    p
Intercept               0.64        0.19      3.35          .003
Processing accuracy     0.37        0.14      2.62          .016
Processing time         0.0002      0.0002    0.80          >.10
List length (squared)   −0.006      0.0006    −10.10        <.0001
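A comparable multilevel regression can be sketched with statsmodels in Python (the original analysis may well have used different software; the file and column names below are hypothetical):

```python
# Sketch of a multilevel regression of recall on processing accuracy,
# processing time, and squared list length (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("portrat_reanalysis.csv")  # hypothetical data file
model = smf.mixedlm(
    "recall ~ proc_accuracy + proc_time + I(list_length ** 2)",
    data=df,
    groups=df["subject"],  # random intercept per participant
)
print(model.fit().summary())
```

The random intercept per participant keeps within- and between-subjects variability separate, which is the property of multilevel regression emphasized above.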
We conclude that Portrat et al.’s (2008) data provide no evidence for time-based forgetting in WM. Instead, their results arose
from an attentional postponement effect that occurred whenever
people detected an error during the processing task, thus shortening the period available for attentional refreshing of memory
traces.
Discussion
Is forgetting in WM driven by time-based decay? Barrouillet et
al.’s (2004) TBRS model assumes that it is. The seemingly most
compelling evidence so far for decay in WM comes from Portrat
et al.’s (2008) finding that a more difficult—and hence more
time-consuming—processing task leads to more forgetting in the
complex-span paradigm, even when the time for restoration of
memory traces following each processing step is held constant. We
suggest that this finding calls for a different explanation that does
not involve decay: When the processing task was made more
difficult, participants committed more errors, and these errors led
to post-error processing, thus taking away part of the restoration
time. As a consequence, there was less opportunity for restoring
memory traces in the condition with the difficult processing task
than in the condition with the easy task, thus creating the false
appearance of temporal forgetting when in fact the observed performance differences reflected differences in the time available for
restoration.
The evidence for this alternative interpretation comes from
two sources. First, our experiment, in conjunction with the work
of Jentzsch and Dudschig (2008), provides direct evidence that
in the spatial judgment task used by Portrat et al. (2008),
post-error processes occupy an attentional bottleneck and
thereby slow down further processes immediately after an error.
Second, our re-analysis of Portrat et al.’s data shows that
memory was predicted by the number of errors on the processing task, not its duration. When the analysis was limited to trials
with few errors on the processing task, there was no difference
in recall between trials with fast (easy) and trials with slow
(difficult) spatial judgments.
The apparent absence of forgetting when processing time was
manipulated provides direct evidence against time-based decay as
the cause of forgetting in the complex-span paradigm: Portrat et al.
(2008) extended the total processing duration by several seconds,
during which the WM system was busy processing material unrelated to the memoranda, and yet they found no effect on memory
accuracy when errors on the processing task were controlled. The
absence of forgetting as a function of longer processing durations
cannot be explained by compensatory rehearsal or some other
compensatory process because it is a key assumption of the TBRS
model—as of all other theoretical accounts of the complex-span
paradigm that appeal to decay (e.g., Towse, Hitch, & Hutton,
2000)—that the processing task prevents such compensatory activity. Our finding is, however, fully compatible with alternative
views that attribute forgetting in the complex-span paradigm, and
related paradigms in STM and WM research, to interference between representations (Farrell & Lewandowsky, 2002; Oberauer &
Kliegl, 2006; Oberauer & Lewandowsky, 2008; Saito & Miyake,
2004).
Conclusions
The present article supports two messages for researchers on
WM: one methodological and one substantive. The methodological conclusion is that, to study the causes of forgetting in paradigms that combine memory maintenance with a concurrent processing task, it is essential to control not only the timing of the
processing events (a point forcefully made by Barrouillet et al.,
2004) but also the errors on the processing task. Errors on the
processing task can have consequences for the time available for
restoration of the memory representations, and as we have learned
from the TBRS model, this time is a crucial limiting factor for
memory performance.
The substantive conclusion concerns the cause of forgetting in
WM. Since the work of Brown (1958) and Peterson and Peterson
(1959), it has been known that STM representations are quickly forgotten
during a delay filled with a distracting processing task. A popular
explanation of that finding has been that memory traces quickly
decay, whereas the distracting task prevents the rehearsal that would maintain
them. The same explanation has been applied to the complex-span
paradigm (Barrouillet et al., 2004, 2007; Towse et al., 2000). An
inevitable prediction of that account is that when the duration of
processing steps on the concurrent task is increased while holding
all other task parameters constant, more forgetting must occur.
This was not found in the only experiment to have tested this
prediction (Portrat et al., 2008) after the effect of errors on the
processing task was controlled. Thus, in confirmation of other
recent surveys (Lewandowsky, Oberauer, & Brown, 2009; see also
Lewandowsky & Oberauer, 2008), there is no direct evidence that
decay is responsible for the rapid forgetting of information from
WM. The generality of this effect remains to be ascertained, and
alternative accounts of the exact mechanisms by which forgetting
occurs remain to be developed.
References

Barrouillet, P., Bernardin, S., & Camos, V. (2004). Time constraints and resource sharing in adults’ working memory spans. Journal of Experimental Psychology: General, 133, 83–100.

Barrouillet, P., Bernardin, S., Portrat, S., Vergauwe, E., & Camos, V. (2007). Time and cognitive load in working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 570–585.

Berman, M. G., Jonides, J., & Lewis, R. L. (2009). In search of decay in verbal short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 317–333.

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.

Brown, J. (1958). Some tests of the decay theory of immediate memory. Quarterly Journal of Experimental Psychology, 10, 12–21.

Burgess, N., & Hitch, G. J. (2006). A revised model of short-term memory and long-term learning of verbal sequences. Journal of Memory and Language, 55, 627–652.

Cowan, N., Elliott, E., Saults, J., Nugent, L., Bomb, P., & Hismjatullina, A. (2006). Rethinking speed theories of cognitive development: Increasing the rate of recall without affecting accuracy. Psychological Science, 17, 67–73.

Farrell, S., & Lewandowsky, S. (2002). An endogenous distributed model of ordering in serial recall. Psychonomic Bulletin & Review, 9, 59–79.

Hoffman, L., & Rovine, M. J. (2007). Multilevel models for the experimental psychologist: Foundations and illustrative examples. Behavior Research Methods, 39, 101–117.

Jentzsch, I., & Dudschig, C. (2008). Why do we slow down after an error? Mechanisms underlying the effects of post-error slowing. Quarterly Journal of Experimental Psychology, 62, 209–218.

Laming, D. (1979). Choice reaction time performance following an error. Acta Psychologica, 43, 199–224.

Lewandowsky, S., Brown, G. D. A., & Thomas, J. L. (2009). Traveling economically through memory space: Characterizing output order in memory for serial order. Memory & Cognition, 37, 181–193.

Lewandowsky, S., Brown, G. D. A., Wright, T., & Nimmo, L. M. (2006). Timeless memory: Evidence against temporal distinctiveness models of short-term memory for serial order. Journal of Memory and Language, 54, 20–38.

Lewandowsky, S., Duncan, M., & Brown, G. D. A. (2004). Time does not cause forgetting in short-term serial recall. Psychonomic Bulletin & Review, 11, 771–790.

Lewandowsky, S., & Farrell, S. (2008). Short-term memory: New data and a model. In B. H. Ross (Ed.), The psychology of learning and motivation (Vol. 49, pp. 1–48). London, England: Elsevier.

Lewandowsky, S., Nimmo, L. M., & Brown, G. D. A. (2008). When temporal isolation benefits memory for serial order. Journal of Memory and Language, 58, 415–428.

Lewandowsky, S., & Oberauer, K. (2008). The word length effect provides no evidence for decay in short-term memory. Psychonomic Bulletin & Review, 15, 875–888.

Lewandowsky, S., Oberauer, K., & Brown, G. D. A. (2009). No temporal decay in verbal short-term memory. Trends in Cognitive Sciences, 13, 120–126.

Nairne, J. S. (1990). A feature model of immediate memory. Memory & Cognition, 18, 251–269.

Oberauer, K., & Kliegl, R. (2006). A formal model of capacity limits in working memory. Journal of Memory and Language, 55, 601–626.

Oberauer, K., & Lewandowsky, S. (2008). Forgetting in immediate serial recall: Decay, temporal distinctiveness, or interference? Psychological Review, 115, 544–576.

Oberauer, K., Süß, H.-M., Wilhelm, O., & Sander, N. (2007). Individual differences in working memory capacity and reasoning ability. In A. R. A. Conway, C. Jarrold, M. J. Kane, A. Miyake, & J. N. Towse (Eds.), Variation in working memory (pp. 49–75). New York, NY: Oxford University Press.

Page, M. P. A., & Norris, D. (1998). The primacy model: A new model of immediate serial recall. Psychological Review, 105, 761–781.

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.

Peterson, L. R., & Peterson, M. J. (1959). Short-term retention of individual verbal items. Journal of Experimental Psychology, 58, 193–198.

Pinheiro, J. C., & Bates, D. M. (2000). Mixed-effects models in S and S-PLUS. New York, NY: Springer.

Portrat, S., Barrouillet, P., & Camos, V. (2008). Time-related decay or interference-based forgetting in working memory? Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1561–1564.

Saito, S., & Miyake, A. (2004). On the nature of forgetting and the processing–storage relationship in reading span performance. Journal of Memory and Language, 50, 425–443.

Towse, J. N., Hitch, G. J., & Hutton, U. (2000). On the interpretation of working memory span in adults. Memory & Cognition, 28, 341–348.
Received December 3, 2008
Revision received May 26, 2009
Accepted May 27, 2009