Educational Psychology
An International Journal of Experimental Educational Psychology
ISSN: 0144-3410 (Print) 1469-5820 (Online) Journal homepage: https://www.tandfonline.com/loi/cedp20
Dividing attention in the classroom reduces exam
performance
Arnold L. Glass & Mengxue Kang
To cite this article: Arnold L. Glass & Mengxue Kang (2019) Dividing attention in the
classroom reduces exam performance, Educational Psychology, 39:3, 395-408, DOI:
10.1080/01443410.2018.1489046
To link to this article: https://doi.org/10.1080/01443410.2018.1489046
Published online: 26 Jul 2018.
Arnold L. Glass
and Mengxue Kang
Department of Psychology, Rutgers University, Piscataway, NJ, USA
ABSTRACT
The intrusion of internet-enabled electronic devices (laptop, tablet, and cell phone) has transformed the modern college lecture into a divided attention task. This study measured the effect of using an electronic device for a non-academic purpose during class on subsequent exam performance. In a two-section college course, electronic devices were permitted in half the lectures, so the effect of the devices was assessed in a within-student, within-item counterbalanced experimental design. Dividing attention between an electronic device and the classroom lecture did not reduce comprehension of the lecture, as measured by within-class quiz questions. Instead, divided attention reduced long-term retention of the classroom lecture, which impaired subsequent unit exam and final exam performance. Students self-reported whether they had used an electronic device in each class. Exam performance was significantly worse than in the no-device control condition both for students who did and for students who did not use electronic devices during that class.

ARTICLE HISTORY
Received 7 June 2017; Accepted 7 June 2018

KEYWORDS
Divided attention; instruction; long-term retention; social distraction; exam performance
Introduction
The senior author can remember when etiquette at all levels of schooling required students to devote full attention to the teacher. Regardless of class size, dividing one's attention between a lecture and some other activity, such as doing homework for another course, was not a possibility. This cultural prohibition was beneficial because 60 years of laboratory studies have found that when a person divides attention between two tasks, they remember less from at least one of them (Cherry, 1953; Wood & Cowan, 1995). However, technological advances have changed cultural norms. The intrusion of electronic devices (laptop, tablet, and cell phone) allows students to divide attention between the classroom and unrelated activities, for example, texting, watching a movie, playing a game, and shopping. Recent laboratory studies in which one of the tasks requires using an electronic device on the internet have replicated this finding (Grace-Martin & Gay, 2001; Hembrooke & Gay, 2003; Sana, Weston, & Cepeda, 2013). So, when students divide attention between an electronic device and the classroom lesson, they should be learning less. In fact, several classroom studies have found a negative correlation between the amount of time spent using an electronic device for non-academic purposes during class and subsequent exam performance, using both self-report measures of electronic device use (Fried, 2008; Jacobsen & Forste, 2011; Junco, 2012; Ravizza, Hambrick, & Fenn, 2014) and direct electronic monitoring (Kraushaar & Novak, 2010; Ravizza, Uitvlugt, & Fenn, 2017).

CONTACT Arnold L. Glass, [email protected], Department of Psychology, Rutgers University, Piscataway, NJ, USA
Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/cedp
© 2018 Informa UK Limited, trading as Taylor & Francis Group
Unfortunately, all of these informative studies are correlational. They only establish
that students who divide attention in class do worse on exams. They do not establish
that it is the division of attention in class that causes the poorer performance. These
results (Fried, 2008; Jacobsen & Forste, 2011; Junco, 2012; Kraushaar & Novak, 2010;
Ravizza et al., 2014, 2017) cannot rule out the plausible hypothesis that the students
who are more likely to divide attention between academic and non-academic tasks in
class are also more likely to do so out of class. On this account, it is generally reduced attention to study, both in and out of class, that ultimately lowers exam performance, rather than a specific effect of divided attention on classroom learning.
A recent experimental classroom study (Carter, Greenberg, & Walker, 2016) partly
fills this gap. In some course sections, electronic devices were forbidden in the classroom but in other sections, they were not. There was poorer final exam performance
for those sections in which electronic devices were permitted. Since the students had
no control over their assignment to a course section, it may be inferred that the exam
performance was an effect of assignment to either an electronic devices allowed or
electronic devices not allowed section. Hence, this study does show an ultimate effect
of divided attention in class on final exam performance.
However, the study (Carter et al., 2016) did not address important questions about
how divided attention affects exam performance. Laboratory studies have shown at
least three distinct costs of divided attention. The first is the selection effect: for example, when more than one person is speaking, a listener attending to one speaker hears nothing of what the other speaker is saying (Cherry, 1953; Wood & Cowan, 1995). The second is the switching effect: when two tasks are being performed, there is a switching time between tasks during which neither task is being performed
(Jersild, 1927; Rogers & Monsell, 1995; Ruthruff, Pashler, & Klaassen, 2001; Spector &
Biederman, 1976). Both selection and switching immediately degrade performance on
at least one, and usually both tasks, causing an immediate effect of divided attention.
However, there is a third, delayed effect of divided attention on retention. When attention is divided between two tasks, fewer targets of a study task are subsequently
remembered (Mulligan & Picklesimer, 2016). Even when there is little or no selection
or switching effect, divided attention reduces retention of the targets for both tasks.
The effect of divided attention on retention is one example of a more general effect
of limiting the amount of time available for study when the study material is initially
presented. There are two ways to limit study time. One way is to present the study
material rapidly. The other way is to require the performance of a secondary task that
occurs either simultaneously or immediately after the study material. The effect of
reducing the time available for study is to reduce the robustness of the memory of
the study material so that it more rapidly degrades from the perception of similar material and so is rapidly forgotten. When study time is completely eliminated, forgetting is immediate. For example, Rundus and Atkinson (1970) found that when a list of words is presented serially, long-term retention is determined by the
number of distributed rehearsals (Modigliani, 1980) that each word receives. The
beginning of the list is remembered better (the primacy effect) because early list
members receive more distributed rehearsals. When the rate of presentation is
increased, the number of rehearsals is reduced, consequently reducing both primacy
and overall recall. In the limit, Potter (1975, 1976) found that when a list of words or
pictures is presented at a rate of 114 milliseconds per item, everything is recognised
and nothing, except for a single target item that elicits a response, is remembered.
Similarly, when Müller and Pilzecker (1900) had participants study two lists in succession, long-term recall of the first list was impaired when the second list was presented immediately after the first, hence eliminating continued rehearsal of the first
list. When the performance of a secondary distracter task was both required and
immediate, Brown (1958) and Peterson and Peterson (1959) found that recall of the
study item was immediately impaired. However, when the secondary task was less
demanding and so more similar to the effect of the voluntary division of attention
studied here, only an effect on delayed retention was found (Mulligan &
Picklesimer, 2016).
The well-established dual-system description of mammalian memory (Yin &
Knowlton, 2006) provides an explanation of the essential role of study in long-term
retention. According to the dual system hypothesis, two distinct systems comprise the
mammalian memory system. The instrumental system, which includes the posterior
hippocampus and surrounding medial temporal region, encodes the perceptual representations of novel inputs that are subsequently used for navigation and recognition.
The habit system, which includes the caudate nucleus, the surrounding basal ganglia
structures, and the anterior hippocampus, encodes sequences of actions (Woolley
et al., 2013).
Packard and McGaugh (1996) found that when a perceptual target and a successful
response are repeated, a more robust, hence much-longer-retained representation is
preserved in the habit system than is initially encoded by the instrumental system.
Despite the fact that it encodes sequences of actions, the habit system is also involved
in the long-term retention of perceptual sequences, which are encoded as response–effect sequences (Ziessler & Nattkemper, 2001). Because the dual-system hypothesis predicts that long-term retention is a consequence not of repeated perception but of repeated responses to the same perceptual or semantic target, it directly predicts the testing effect: that long-term retention of semantic material is increased by question-answering when compared with reading or listening, a prediction confirmed by both laboratory (Karpicke & Roediger, 2007, 2008) and classroom studies (Glass, Brill, & Ingate, 2008; McDermott, Agarwal, D'Antonio, Roediger, & McDaniel, 2014). More specifically, the dual-system hypothesis predicts an interaction such that question-answering increases performance not on immediate or short-term retention but only on long-term retention, because its effect is mnemonic: it does not facilitate learning but long-term retention. The specific effect on only long-term retention has been
confirmed by both laboratory (Karpicke & Roediger, 2007, 2008) and classroom studies
(Glass, Ingate, & Sinha, 2013).
Similarly, the dual-system hypothesis directly predicts that divided attention will
impair long-term retention because of reduced opportunity for mnemonic activities,
e.g. rehearsal, that increase long-term retention. The specific effect of divided attention on long-term retention has been confirmed in at least one laboratory study
(Mulligan & Picklesimer, 2016) and is tested by this classroom study for the first time.
If selection and/or switching contribute to the ultimate impairment of exam performance, then engaging in these mental activities must interfere with comprehension of the material presented during the lecture. An immediate question during class should reveal the impaired comprehension. However, if the entire effect of divided attention is on retention, then immediate comprehension, as measured by classroom questions, may be unimpaired and only delayed comprehension may be impaired, as found in the laboratory by Mulligan and Picklesimer (2016).
Selection, switching and retention effects are all effects of an individual dividing
attention on the individual’s own performance. When considered in the context of a
classroom, allowing students to divide attention between an electronic device for a
non-academic purpose and the classroom may also have the social effect of distracting other students who are not trying to divide attention. The social effect of distraction was tested in a laboratory simulation in which all students took notes on a
lecture and some students also had to perform internet tasks on laptops in addition
to the note-taking (Sana et al., 2013). Both the students performing the internet tasks
and the students who sat near them performed worse on a subsequent exam on the
content of the lecture.
A critical gap in devising an instructional strategy to respond to the intrusion of electronic devices into the classroom for recreational, non-academic purposes has been the lack of understanding of how divided attention by some students in a classroom affects the comprehension and retention of the lesson by all of the students in the classroom. To fill this gap, the study reported here embedded
a within-subject, within-item experimental design in an actual multi-section college
lecture course in which tight control was maintained over the teaching of each section
in order to obtain the full power of an analysis of variance. The purpose of the study
was to measure the effect of attention to electronic devices for content and/or tasks
unrelated to the immediate classroom activity on both classroom and subsequent
exam performance within the framework of an experimental design that would permit
a causal attribution of the effect of divided attention on both classroom and exam
performance.
During the course of the semester, students answered 126 questions in class on
just-presented content using personal response software which allowed them to use
any electronic device. Each of these questions was paired with another question that
appeared on both a unit exam and the final exam. The same principle or fact statement implied both the answer to the classroom question and the answer to the exam
question. The primary purpose of the classroom questioning was as an instructional
methodology to ensure long-term retention of the course material. However, it also
provided precise information about the level of student knowledge during the
class lesson.
Two sections received exactly the same 25 lessons and questions back-to-back.
During the first two lessons, during the first week of class, the rules for the use of
electronic devices during the remainder of the course were repeatedly explained to
the students. For the remaining 23 lessons, the classes were identical except for when
electronic devices were permitted. Except during the brief time periods when used to
answer a question, a proctor ensured that students did not have a laptop or cell
phone open for the odd-numbered lessons for section 1 and for the even-numbered
lessons for section 2. For the other lessons, students were permitted to have laptops
and cell phones open. As a consequence of the within-subject, within-item design,
item difficulty was counterbalanced across the condition in which electronic device
use was not allowed (Not Allow condition) and the condition in which electronic
device use was allowed (Allow condition).
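The odd/even counterbalancing described above can be sketched in code. This is a hypothetical illustration only (the function and numbering convention are ours, not the authors'): section 1 may not use devices in odd-numbered lessons and section 2 in even-numbered lessons, so every lesson's material is taught once in each condition across the two sections.

```python
LESSONS = range(1, 24)  # the 23 experimental lessons

def condition(section: int, lesson: int) -> str:
    """Return the device condition for a given section and lesson."""
    devices_forbidden_on_odd = (section == 1)
    is_odd = lesson % 2 == 1
    if is_odd == devices_forbidden_on_odd:
        return "Not Allowed"
    return "Allowed"

# Across the two sections, each lesson (and its paired quiz/exam items)
# is assessed once per condition, balancing item difficulty.
for lesson in LESSONS:
    assert {condition(1, lesson), condition(2, lesson)} == {"Allowed", "Not Allowed"}
```

Because every item appears in both the Allow and Not Allow conditions (in different sections), any difference between conditions cannot be attributed to item difficulty.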
Method
Participants
A total of 118 students, attending one of two sections of an upper-level college cognitive psychology course taught by the same instructor in an identical manner and
scheduled during successive class periods in the same lecture hall, participated in the
experiment. They were 39% Caucasian, 31% Asian, 13% Latino, 6% African, 4% Mixed
Race, and 7% Other. There were 72 students in one section and 46 students in the
other. There were 34 males, 82 females, and 2 no responses. There were 114 students
between the ages of 18 and 24 and 4 students between the ages of 24 and 36. There
was no known factor that would have influenced a student's selection of one of the identical sections over the other. The university was a flagship state university in the United States.
Materials
A total of 126 pairs of five-alternative multiple-choice questions were selected from
the question sets accompanying the course textbook (Glass, 2016). The principles and
statements implying the answers to all of these questions could be found within the
text. One question from each pair was asked in class and the other question appeared
on the unit exam and final exam.
Procedure
Two sections of a college course on cognitive psychology were taught by the same
instructor back-to-back. Both sections received the same Tuesday and Thursday 80-minute lectures, quizzes, and exams during 28 back-to-back class sessions. There were
three unit exams during the class sessions of the semester and a cumulative 3-hour
final exam during finals week. The first two classes of the course were not included in
the experimental design and three classes were devoted to unit exams. This left 23
lecture classes included in the experimental design. There was complete redundancy
between the textbook and the 23 classroom lectures so that the answer to every
exam question was presented both in the textbook and on a PowerPoint slide presented during the lecture by the instructor. All of the lecture slides were available to
the students on the course website at the beginning of the semester.
During the course of the semester, students answered 126 multiple-choice questions in class on content just presented on the previous slide, using personal response software which allowed them to use any electronic device. The question appeared on a slide. The instructor read the question from the slide once and then activated a 10-second countdown, visible on the slide, during which students could respond with any (properly registered) electronic device. At the conclusion of the countdown, the percent of students who made each response was presented. Next, a checkmark indicated the correct answer. Finally, the instructor asked if there were any questions about why that response was correct.
Sections 1 and 2 received exactly the same 23 lessons and questions back-to-back.
However, as was stated on the syllabus and at the start of each class, the students were allowed to use electronic devices for only half of the lectures, and this prohibition was enforced by a proctor who attended only those classes for which dividing attention to electronic devices was forbidden. Except during the brief periods when used to
answer a question, a proctor ensured that students did not have a laptop or a cell
phone open for the odd-numbered lessons for section 1 (held on Tuesday) and for
the even-numbered lessons for section 2 (held on Thursday). When electronic devices
were not allowed, students were told to leave them closed on the arm extensions of
their seats and to touch them only to answer questions. The lecture hall had arena
style seating in which each row of seats is above its predecessor. So, a proctor at the
back of the lecture hall had a good view of the electronic devices on the arm extensions below. When the proctor noticed an open electronic device, the proctor
approached the student and instructed that it be closed. Students always complied
immediately. After the first three weeks, the prohibition became routine and was no
longer violated, though the proctor continued to attend until the end of the semester.
For the other lessons, students were permitted to have laptops and cell phones open.
At the end of each class at which electronic devices were permitted, the students
answered the following question: During class today, did you look at your laptop or cell
phone? (It was made clear that this was for a use other than answering a question.)
The interval between a classroom lecture and unit exam varied from 5 d to 28 d. The
interval between a unit exam and the final exam varied from 5 d to 3.5 months.
Because nothing was done in this course that was not also done in other courses throughout the university (the only difference being that the effects of divergent, widely used classroom decorum rules were being systematically evaluated), the method received an exemption from additional informed consent requirements from the Institutional Review Board.
Results
Divided attention in the classroom did not affect classroom performance but did affect
subsequent exam performance. Exam performance was poorer for material taught in
classes that permitted electronic device use both for students who did and did not
direct attention to an electronic device for a non-academic purpose during
those classes.
Responses to the classroom quizzes, unit exams, and final exam were analysed in a 3 × 2 within-subject, within-item design in which the independent variables were Question (class quiz, unit exam, and final exam) and Electronic Device Use (Allowed, Not Allowed) and the dependent variable was percent correct on the classroom and exam questions. The results are shown in Figure 1. The effects of both Question, F(2, 234) = 140.5, p < .01 (partial eta-squared = 0.10), and Electronic Device Use, F(1, 117) = 12.8, p < .01 (partial eta-squared = 0.55), and their interaction, F(2, 234) = 12.4, p < .01 (partial eta-squared = 0.10), were significant. Post hoc comparisons confirmed that the difference between Allowed and Not Allowed was not significant for classroom questions, F(1, 117) = 0.7, p = .40 (partial eta-squared = 0.03), but was significant for the unit exam, F(1, 117) = 5.9, p = .02 (partial eta-squared = 0.17), and the final exam, F(1, 117) = 56.5, p < .01 (partial eta-squared = 0.15), questions.
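As a check on reported effect sizes of this kind, partial eta-squared can be recovered from an F statistic and its degrees of freedom via eta_p^2 = (F × df_effect) / (F × df_effect + df_error). A minimal sketch with made-up values, not the study's data:

```python
def partial_eta_squared(F: float, df_effect: int, df_error: int) -> float:
    """eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (F * df_effect) / (F * df_effect + df_error)

# Generic illustration: F(2, 100) = 10.0 gives eta_p^2 = 20 / 120
example = partial_eta_squared(10.0, 2, 100)
assert abs(example - 20 / 120) < 1e-12
```

This conversion lets a reader cross-check any reported F against its accompanying effect size.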
We excluded from further analysis those students who reported whether they had used an electronic device after only one class. Among the remaining students, a total of 79 students reported that they used electronic devices in more than half of the lectures in which they were allowed, and among these, 43 reported always using electronic devices. A total of 30 students reported that they did not use an electronic device in more than half of the lectures in which they were allowed, and among these, six reported never using an electronic device. To assess the performance of students who did use electronic devices versus those who did not when they were allowed, two analyses were performed. Every response of every student was included in at least one of these two analyses.
Figure 1. The effect of allowing electronic devices in the classroom on percent correct on classroom quizzes and a subsequent unit exam and the final exam.

A between-subjects analysis was performed in which a student was assigned to the Use group if the student reported using an electronic device in the majority of the classes in which it was allowed, and a student was assigned to the Not Use group if the student reported not using an electronic device in the majority of the classes in which it was allowed. If a student was in the Use group, then any responses made in the Allow condition when the student reported not using an electronic device were not included in the analysis. Similarly, if a student was in the Not Use group, then any responses made in the Allow condition when the student reported using an electronic device were not included in the analysis. Then, a 2 × 3 × 2 between-subjects analysis of variance was performed in which the independent variables were Group (Use, Not Use), Question (class quiz, unit exam, and final exam), and Electronic Device Use (Allowed, Not Allowed), and the dependent variable was the percent of correct responses on the classroom and exam questions. The results of this analysis are shown in Figure 2. Neither the effect of Group, F(1, 107) = 2.1, p = .15 (partial eta-squared = 0.02), nor any of its interactions were significant; however, the effects of both Question, F(2, 214) = 114.5, p < .01 (partial eta-squared = 0.52), and Electronic Device Use, F(1, 107) = 0.8, p < .01 (partial eta-squared = 0.15), and their interaction, F(2, 214) = 10.4, p < .01 (partial eta-squared = 0.09), were significant. As shown in Figure 2, the significant interaction was the same for both the Use and Not Use groups and replicated the interaction shown in Figure 1. Post hoc comparisons confirmed that the difference between Allow and Not Allow was not significant for classroom questions, F(1, 108) = 0.58, p = .45 (partial eta-squared = 0.05), and was not significant for the unit exam, F(1, 108) = 2.65, p = .11 (partial eta-squared = 0.02), but was significant for the final exam, F(1, 108) = 50.06, p < .01 (partial eta-squared = 0.32), questions. There were no significant differences for Not Use versus Use.
Since the group sizes of Not Use and Use were unequal, we also performed a correlation analysis and an ANCOVA to demonstrate that there were no differences for Not Use versus Use. The correlation between the number of classes in which students reported using electronic devices and the percent correct was not significant for classroom questions (Pearson r = 0.03, p = .74), for the unit exam (Pearson r = 0.09, p = .37), or for the final exam (Pearson r = 0.12, p = .21).
When the number of classes was added as a covariate, the results were the same as in the between-subjects analysis above. The effects of both Question, F(2, 214) = 16.2, p < .01 (partial eta-squared = 0.13), and Electronic Device Use, F(1, 107) = 0.03, p = .04 (partial eta-squared = 0.04), and their interaction, F(2, 214) = 11.8, p < .01 (partial eta-squared = 0.10), were significant.

Figure 2. The results of a between-student analysis of the effects of allowing and using electronic devices in the classroom on percent correct on classroom quizzes and a subsequent unit exam and the final exam.
Also, a within-subjects analysis was carried out on the performance of the 39 students who sometimes did and sometimes did not use an electronic device when it was allowed. A 3 × 3 within-subject analysis was performed in which the independent variables were Question (class quiz, unit exam, and final exam) and Electronic Device Use (Used, Not Used, and Not Allowed) and the dependent variable was percent correct on the classroom and exam questions. The results of this analysis are shown in Figure 3. The effect of Question was significant, F(2, 76) = 23.7, p < .01 (partial eta-squared = 0.38). The effect of Electronic Device Use was not significant, F(2, 76) = 1.3, p = .27 (partial eta-squared = 0.03). However, their interaction, F(4, 152) = 3.3, p = .01 (partial eta-squared = 0.08), was significant, replicating the between-subjects interaction shown in Figures 1 and 2. Post hoc comparisons found that classroom performance was actually better in the Allow condition, F(1, 38) = 5.1, p = .03 (partial eta-squared = 0.07), but performance was again better in the Not Allow condition for the unit exam, F(1, 38) = 4.6, p = .01 (partial eta-squared = 0.01), and for the final exam, F(1, 38) = 16.8, p < .01 (partial eta-squared = 0.11), questions. There were no significant differences for Not Use versus Use.
Discussion
Divided attention had no effect on performance on the classroom lesson questions. However, following the lessons in which cell phones and laptops were allowed, performance was poorer on the unit exam and final exam questions. This finding demonstrates for the first time that the main effect of divided attention in the classroom is not an immediate effect of selection or switching on comprehension but a long-term
effect of divided attention on retention. Hence, the effect on long-term retention was obtained in the absence of an immediate effect on comprehension, and the effect on retention was largest at the longest retention interval, on the final exam.

Figure 3. The results of a within-student analysis of the effects of allowing and using electronic devices in the classroom on percent correct on classroom quizzes and a subsequent unit exam and the final exam.
Furthermore, when the use of electronic devices was allowed in class, performance
on the unit exams and final exams was poorer for students who did not use electronic
devices during the class as well as for the students who did use an electronic device.
This is the first-ever finding in an actual classroom of the social effect of classroom distraction on subsequent exam performance. The effect of classroom distraction on
exam performance confirms the laboratory finding of the social effect of distraction
(Sana et al., 2013).
Obviously, the effect of allowing electronic devices on those students who chose not to divide attention was the result of a large majority of the students (87) choosing to use devices in most classes, while only six students never used them at all. This ubiquitous use certainly changed the social character of the class from an occasion for joint attention to something more like a group of individuals in a waiting room occasionally looking up. It meant that for the few students who tried to direct attention to the instructor, there was distracting activity on both sides and in front of them. The instructor often noticed two students giggling as they viewed an image together on a laptop. It seemed that such behaviour would be distracting to individuals around them.
Confirmation of the dual-system hypothesis
The specific effect of distraction on long-term retention is further confirmation that
the dual-system model of mammalian memory extends to humans. Within the framework of the dual-system model, long-term retention is a consequence of voluntary
action within the habit system. Consequently, any distraction that reduces mnemonic
activities reduces long-term retention.
The severity of the effect of classroom divided attention on exam performance
Divided attention to a laptop or cell phone during a classroom lesson clearly impaired
long-term retention of the lesson, as demonstrated by the poorer performance on the
corresponding review, unit exam, and final exam questions. The magnitude of the
effect, a 5% decrease, was a meaningful amount, about half a letter grade in conventional scoring systems.
However, the design of the experiment minimised the effect of divided attention
during a lesson. First, all of the course material was presented redundantly. Students
who failed to retain material presented in the classroom could have learned it from
the textbook and the online lecture slides. In a course in which redundant presentation of the course material may be less systematic and comprehensive, there may be
less opportunity to compensate for divided attention or distraction in the classroom.
Thus, the resulting long-term effect of the inattention would be greater.
Second, in this experiment, immediately after a fact was presented in lecture the
students had to answer a multiple-choice question that tested immediate retention.
Requiring students to answer a question reduces the effect of divided attention on
retention compared with a similar condition without question-answering (Mulligan &
Picklesimer, 2016). Furthermore, answering the question increases long-term retention
of the answer and the testing effect alone causes a meaningful increase in classroom
performance (Glass, 2009; Glass & Sinha, 2013; Roediger, Agarwal, McDaniel, &
McDermott, 2011). If a question had not followed the presentation of the fact statement, but merely passive listening was required, then there would have been a much
greater reduction in retention of the content being presented. Hence, the results of
this study represent the minimum reduction in exam performance as the result of divided attention and distraction in class.
Third, the insidious nature of the retention deficit complicates the decision of what to study for a subsequent exam. One possible strategy is to restudy those items previously answered incorrectly on a quiz or exam. However, the effect of divided attention is rapid forgetting of the items that were answered correctly. So, a student would have to study not only what she did not already know but also what she was about to forget.
Failure to find an immediate effect of divided attention on performance
The finding of an effect of divided attention on long-term performance does not preclude an additional immediate effect on classroom performance. As mentioned above,
such immediate effects of divided attention are well known from laboratory studies.
Why then was such an effect not found in the classroom?
Divided attention has an immediate effect when the demands of two tasks create
an irreconcilable conflict between which of two competing responses should be
made. When a participant can schedule when to respond to each task and gives one
task priority, no decrement in the performance of that task is observed (Schumacher
et al., 2001). This is the situation in the classroom. A student momentarily being entertained by cell phone content or distracted by the student in the adjacent seat can
nevertheless focus on each question when it is asked in class. Consequently, as measured in this study, there was no decrement in immediate memory when a student
was asked a question about a just-presented slide. Students had good reason to prioritise answering classroom questions because their responses influenced their grades.
Laboratory studies that produced an immediate effect of divided attention on retention (e.g. Risko, Buchanan, Medimorec, & Kingstone, 2013) are not comparable,
because they did not provide a similarly strong incentive to prioritise the immediate
memory test in the presence of distraction. Furthermore, students in the laboratory studies watched a video of a lecture. Wammes and Smilek (2017) found that less attention is directed to a video of a lecture than to a live classroom lecture, so a video lecture is more susceptible to an immediate effect of distraction.
Multi-tasker awareness of the effect of divided attention on performance
The finding that divided attention affected exam performance through its long-term
effect on retention rather than through an immediate effect of selection or task
switching on comprehension may explain why students who are experienced at
multi-tasking are unaware of any negative effect it has on their performance (Ophir, Nass, & Wagner, 2009; Sana et al., 2013). In a real-life multi-tasking situation, the multi-tasker may be aware of any immediate error on one of the tasks. However, when an
error has not been made or noticed, there is no reason for a multi-tasker to suspect
that long-term retention of the current experience is being compromised.
Social interaction and classroom learning
Given the comprehensive redundancy of the course, it is notable that merely dividing
attention in the classroom had any effect on subsequent performance on an exam
weeks later. After all, besides attending class, students often study the course material
immediately prior to taking an exam. This is why performance was greater on the final exam than on the unit exams. Why does this post-class, pre-exam studying not completely overwhelm and eliminate the effect of classroom attention?
Post-class study may be inherently inferior to classroom instruction for many individuals because human cognitive capabilities evolved to support human social abilities
(Dunbar & Sutcliffe, 2013) and for many humans, learning occurs more in a social context
than an asocial context. Consequently, most humans of all ages, from infants (Kuhl, Tsao,
& Liu, 2003), to young children (Nussenbaum & Amso, 2016; Roseberry, Hirsh-Pasek, &
Golinkoff, 2014), to college students (Keefe, 2003), retain more from face-to-face social
interactions than they do from nearly equivalent instructional situations (e.g. involving
video or other online instruction) that do not involve social interaction. Also, how much is
learned in the classroom may influence the quality of post-class study.
Policy implication
The finding that divided classroom attention ultimately reduces exam performance suggests re-calculating the risk/benefit ratio of technology that disables devices and applications not being used for educational purposes. While laboratory studies of learning are essential, it is only through experimental studies conducted in the classroom, measuring actual exam performance, that the effects of learning variables on classroom study and exam performance can be determined.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Arnold L. Glass: http://orcid.org/0000-0002-3202-5953
References
Brown, J. (1958). Some tests of the decay theory of immediate memory. Quarterly Journal of
Experimental Psychology, 10, 12–21. doi:10.1080/17470215808416249
Carter, S. P., Greenberg, K., & Walker, M. (2016). The impact of computer usage on academic performance: Evidence from a randomized trial at the United States Military Academy. Economics
of Education Review, 56, 118–132.
Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and two ears.
Journal of the Acoustical Society of America, 25, 975–979. doi:10.1121/1.1907229
Dunbar, R. I. M., & Sutcliffe, A. G. (2013). Social complexity and intelligence In J. Vonk & T. K.
Shackelford (Eds.), The Oxford handbook of comparative evolutionary psychology (pp. 102–117).
New York, NY: Oxford University Press.
Fried, C. B. (2008). In-class laptop use and its effects on student learning. Computers & Education,
50, 906–914. doi:10.1016/j.compedu.2006.09.006
Glass, A. L. (2009). The effect of distributed questioning with varied examples on exam performance on inference questions. Educational Psychology, 29, 831–848. doi:10.1080/
01443410903310674
Glass, A. L. (2016). Cognition: A neuroscience approach. Cambridge, UK: Cambridge University Press.
Glass, A. L., Brill, G., & Ingate, M. (2008). Combined online and in-class pre-testing improves
exam performance in general psychology. Educational Psychology, 28, 483–503. doi:10.1080/
01443410701777280
Glass, A. L., Ingate, M., & Sinha, N. (2013). The effect of a final exam on long-term retention.
Journal of General Psychology, 140, 224–241. doi:10.1080/00221309.2013.797379
Glass, A. L., & Sinha, N. (2013). Multiple-choice questioning is an efficient instructional methodology that may be widely implemented in academic courses to improve exam performance.
Current Directions in Psychological Science, 22, 471–477. doi:10.1177/0963721413495870
Grace-Martin, M., & Gay, G. (2001). Web browsing, mobile computing, and academic performance. Journal of Educational Technology & Society, 4, 95–107.
Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning
environments. Journal of Computing in Higher Education, 15, 46–64. doi:10.1007/BF02940852
Jacobsen W. C., & Forste, R. (2011). The wired generation: Academic and social outcomes of
electronic media use among university students. Cyberpsychology, Behavior, and Social
Networking, 14, 275–280. doi:10.1089/cyber.2010.0135
Jersild, A. T. (1927). Mental set and shift. Archives of Psychology, 14, 81.
Junco, R. (2012). In-class multitasking and academic performance. Computers in Human Behavior,
28, 2236–2243. doi:10.1016/j.chb.2012.06.031
Karpicke, J. D., & Roediger, H. L., III. (2007). Repeated retrieval during learning is the key to long-term retention. Journal of Memory and Language, 57, 151–162. doi:10.1016/j.jml.2006.09.004
Karpicke, J. D., & Roediger, H. L., III. (2008). The critical importance of retrieval for learning.
Science, 319, 966–968. doi:10.1126/science.1152408
Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction.
EDUCAUSE Quarterly, 26, 24–34.
Kraushaar, J. M., & Novak, D. C. (2010). Examining the affects [sic] of student multitasking with
laptops during the lecture. Journal of Information Systems Education, 21, 241–251.
Kuhl, P. K., Tsao, F.-M., & Liu, H.-M. (2003). Foreign-language experience in infancy: Effects of
short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100, 9096–9101.
McDermott, K. B., Agarwal, P. K., D’Antonio, L., Roediger, H. L., III, & McDaniel, M. A. (2014). Both
multiple-choice and short-answer quizzes enhance later exam performance in middle and
high school classes. Journal of Experimental Psychology: Applied, 20, 3–21. doi:10.1037/
xap0000004
Modigliani, V. (1980). Immediate rehearsal and initial retention interval in free recall. Journal of
Experimental Psychology: Human Learning and Memory, 6, 241–253. doi:10.1037/0278-7393.6.3.241
Müller, G. E., & Pilzecker, A. (1900). Experimentelle Beiträge zur Lehre vom Gedächtnis. Zeitschrift für Psychologie, Ergänzungsband, 1, 1–300.
Mulligan, N. W., & Picklesimer, M. (2016). Attention and the testing effect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 938–950. doi:10.1037/xlm0000227
Nussenbaum, K., & Amso, D. (2016). An attentional Goldilocks effect: An optimal amount of
social interactivity promotes word learning from video. Journal of Cognition and Development,
17, 30–40. doi:10.1080/15248372.2015.1034316
Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106, 15583–15587. doi:10.1073/pnas.0903620106
Packard, M. G., & McGaugh, J. L. (1996). Inactivation of hippocampus or caudate nucleus with
lidocaine differentially affects expression of place versus response learning. Neurobiology of
Learning and Memory, 65, 65–72. doi:10.1006/nlme.1996.0007
Peterson, L. R., & Peterson, M. J. (1959). Short term retention of individual items. Journal of
Experimental Psychology, 58, 193–198. doi:10.1037/h0049234
Potter, M. C. (1975). Meaning in visual search. Science, 187, 965–966. doi:10.1126/science.1145183
Potter, M. C. (1976). Short-term conceptual memory for pictures. Journal of Experimental
Psychology: Human Learning and Memory, 2, 509–522. doi:10.1037/0278-7393.2.5.509
Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Nonacademic Internet use in the classroom
is negatively related to classroom learning regardless of intellectual ability. Computers &
Education, 78, 109–114. doi:10.1016/j.compedu.2014.05.007
Ravizza, S. R., Uitvlugt, M. G., & Fenn, K. M. (2017). Logged in and zoned out: How laptop internet
use impacts classroom learning. Psychological Science, 28, 171–180. doi:10.1177/0956797616677314
Risko, E. F., Buchanan, D., Medimorec, S., & Kingstone, A. (2013). Everyday attention: Mind wandering and computer use during lectures. Computers & Education, 68, 275–283. doi:10.1016/j.compedu.2013.05.001
Roediger, H. L., III, Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382–395. doi:10.1037/a0026252
Rogers, R. D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks.
Journal of Experimental Psychology: General, 124, 207–231. doi:10.1037/0096-3445.124.2.207
Roseberry, S., Hirsh-Pasek, K., & Golinkoff, R. M. (2014). Skype me! Socially contingent interactions
help toddlers learn language. Child Development, 85, 956–970. doi:10.1111/cdev.12166
Rundus, D., & Atkinson, R. C. (1970). Rehearsal processes in free recall: A procedure for direct
observation. Journal of Verbal Learning and Verbal Behavior, 9, 99–105. doi:10.1016/S0022-5371(70)80015-9
Ruthruff, E., Pashler, H. E., & Klaassen, A. (2001). Processing bottlenecks in dual-task performance:
Structural limitation or strategic postponement? Psychonomic Bulletin & Review, 8, 73–80.
doi:10.3758/BF03196141
Sana, F., Weston, T., & Cepeda, N. (2013). Laptop multitasking hinders classroom learning for both
users and nearby peers. Computers & Education, 62, 24–31. doi:10.1016/j.compedu.2012.10.003
Schumacher, E. H., Seymour, T. L., Glass, J. M., Fencsik, D. E., Lauber, E. J., Kieras, D. E., & Meyer,
D. E. (2001). Virtually perfect time sharing in dual-task performance: Uncorking the central
cognitive bottleneck. Psychological Science, 12, 101–108. doi:10.1111/1467-9280.00318
Spector, A., & Biederman, I. (1976). Mental set and mental shift revisited. American Journal of
Psychology, 89, 669–679. doi:10.2307/1421465
Wammes, J. D., & Smilek, D. (2017). Examining the influence of lecture format on degree of
mind wandering. Journal of Applied Research in Memory and Cognition, 6, 173–184.
Wood, N. L., & Cowan, N. (1995). The cocktail party phenomenon revisited: Attention and memory in the classic selective listening procedure of Cherry (1953). Journal of Experimental
Psychology: General, 124, 243–262. doi:10.1037/0096-3445.124.3.243
Woolley, D. G., Laeremans, A., Gantois, I., Mantini, D., Vermaercke, B., Op de Beeck, H. P., … D'Hooge, R. (2013). Homologous involvement of striatum and prefrontal cortex in rodent
and human water maze learning. Proceedings of the National Academy of Sciences of the
United States of America, 110, 3131–3136. doi:10.1073/pnas.1217832110
Yin, H. H., & Knowlton, B. J. (2006). The role of the basal ganglia in habit formation. Nature
Reviews Neuroscience, 7, 464–476. doi:10.1038/nrn1919
Ziessler, M., & Nattkemper, D. (2001). Learning of event sequences is based on response-effect
learning: Further evidence from a serial reaction task. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 27, 595–613. doi:10.1037/0278-7393.27.3.595