E-Reading vs. Paper-Based Reading

System
journal homepage: www.elsevier.com/locate/system

Article history:
Received 23 February 2020
Received in revised form 7 December 2020
Accepted 7 December 2020
Available online 10 December 2020

Keywords:
Interactive e-books
L2 reading comprehension
Digital reading
Multimedia
CALL
Online learning
Paper-based vs. digital reading

Abstract
This exploratory study compared the effects of two different mediums of reading (interactive e-book reading and paper-based reading) on learners' reading comprehension. A specific focus was placed on literal vs. inferential comprehension. Thirty Korean middle school English language learners were randomly assigned to an interactive e-book reading (treatment) group or a paper-based reading (control) group. A pre-test and five comprehension tests were administered to both groups over a total of six reading sessions. A survey was also conducted to investigate students' perceptions of how helpful interactive e-book features were in aiding their reading comprehension. Results from the reading comprehension test scores showed no statistical differences between the groups across the five tests. Analysis of literal and inferential questions also showed no significant difference overall. Findings from the survey data, along with researcher observation notes, suggest that interactive features that are not designed to aid students' understanding can distract students from the task of reading, which may hinder their comprehension. The overall results of this study suggest that it may not be the medium of reading, but how students engage with each medium, that affects their comprehension of text.

© 2020 Elsevier Ltd. All rights reserved.
1. Introduction
Most, if not all, second language (L2) learners of English have at some point in time engaged in reading, listening to stories,
and watching videos in the target language to foster their L2 development (Bada & Okan, 2000). With advances in technology
over the years, new opportunities have arisen through computer-assisted language learning (CALL) in which audio, video, and
text can be integrated through digital platforms. This has led to the development of interactive e-books which include highly
interactive multimedia features that allow readers to engage in various ways with electronic text. As the number of users of
smartphones, individual computers, and tablets has increased, the number of users of interactive e-books has also steadily
increased (Smeets & Bus, 2013). Thus, in recent years a great deal of research has looked into ways in which interactive e-
books may benefit students’ L2 learning process (Biancarosa & Griffiths, 2012; Chen et al., 2013; De Jong & Bus, 2004; Moreno
& Mayer, 2007; Wood et al., 2018).
* Corresponding author.
E-mail addresses: [email protected] (J. Lim), [email protected] (G.E.K. Whitehead), [email protected] (Y. Choi).
https://doi.org/10.1016/j.system.2020.102434
0346-251X/© 2020 Elsevier Ltd. All rights reserved.
To date, various positive effects of interactive e-books on students’ L1 and L2 reading performance have been reported
with benefits including vocabulary growth (Korat & Shamir, 2007; Smeets & Bus, 2015), improvements in reading
comprehension (Kao et al., 2016; Marzban, 2011a, 2011b; Takacs & Bus, 2016), and the enhancement of phonological
awareness (Korat, 2009, 2010). However, negative effects have also been found relating to students’ reading comprehension
as a result of eye fatigue from reading on a screen (Jeong, 2012), and formed habits of skimming and scanning screen content
(Lenhard et al., 2017; Singer Trakhman et al., 2018). As it currently stands, understanding the impacts of interactive e-books on
L2 learning is still a work in progress. Furthermore, when comparing the benefits of interactive e-book reading to paper-based
reading, it is still unclear whether one form is better than the other, and in what capacity (Jeong, 2012).
To further understand the effects of this new digital-reading platform, this small-scale exploratory study examined the
effects of reading interactive e-books on Korean middle school English learners’ reading comprehension in comparison with
paper-based reading. Particular attention was given to literal and inferential comprehension. Additionally, students’ per-
ceptions of the helpfulness of various interactive features in assisting their reading comprehension were investigated. The
findings of this study contribute to a deeper understanding of the effects of interactive e-books on L2 learners’ reading
comprehension and provide stakeholders with meaningful information about users’ perceptions of the roles certain inter-
active features play in aiding their understanding.
2. Literature review
Reading comprehension has been defined in various ways over the years. Kucer (2005) puts forth that reading compre-
hension is achieved when readers have understood a text and can discuss various aspects of what they have read in detail.
Torres and Constain (2009) describe reading comprehension as “a complex problem-solving process in which the reader
makes sense out of a text not just from the words and sentences on the page, but from ideas, memories, and knowledge
evoked by those words and sentences as well as experience" (p. 56). Others have defined reading comprehension in more general terms, stating that it is simply the understanding of the meaning of written material: getting the correct message as a final result of interacting with what one has read (Nuttall, 1982; Torres & Constain, 2009). In this paper, the researchers
adopt Grabe and Stoller’s (2011) general definition of reading comprehension, “the ability to understand information in a text
and interpret it appropriately” (p.11).
Grabe and Stoller (2011) distinguish between lower-level processes and higher-level comprehension processes that make use of the reader's background knowledge and inferencing abilities. As they explain,
lower-level processes work together to process the meanings of words and grammatical information in a text. Simulta-
neously, higher-level processes work to link what is being read to the reader’s background knowledge, and, at the same time,
monitor comprehension and use of reading strategies. A more detailed summary of the lower-level and higher-level processes
is presented in Table 1 below.
Table 1
Lower-level and higher-level processes.
Note. Adapted from “Teaching and researching reading (2nd ed.)”, by Grabe, W., & Stoller, F., 2011, p. 14, Pearson Education.
The lower-level and higher-level cognitive processes presented in the table above should be viewed as a network of in-
formation and related processes that are constantly being engaged while trying to comprehend what one is reading (Grabe &
Stoller, 2011). This is something that is commonly referred to as ‘working memory’. Working memory connects information
from one’s long term memory with new information coming in (Cain et al., 2004), and therefore is necessary for coherently
processing text (Joh & Plakans, 2017). While working memory is acknowledged as crucial for reading comprehension to be
achieved, as Sweller et al. (1998) explain, our working memory has a limited capacity, dealing with only 2–3 pieces of information at a time. Thus, according to Sweller's (1988) cognitive load theory, if the total mental activity (i.e., cognitive load)
imposed on the working memory exceeds its limits, a reader’s comprehension may be negatively affected.
In recent years, with the increase in multi-media learning, researchers have cautioned that the processing demands
evoked by the stimulation of words, pictures, and tasks may exceed the processing capacity of the cognitive system, resulting in
cognitive overload (Mayer & Moreno, 2003). Additionally, if learners are involved in cognitive tasks that are not related to the
instructional goal of the task, and/or additional information or material is given that is not needed to obtain the goal, learners
may suffer extraneous processing overload, also referred to as extraneous cognitive processing or extraneous cognitive load
(Sweller, 1999). Therefore, the design of interactive e-books and the features they offer should be taken into consideration
when discussing learners’ reading comprehension of input consumed via that medium.
With so many textual factors and cognitive processes involved in comprehending what one has read, at present, it is
uncertain whether readers are able to comprehend e-books with interactive features any better than paper-based books
without them. It is also unclear exactly how the design of different interactive e-book features may impact learners’ reading
comprehension. Thus, this study addresses gaps in both of these areas by comparing the effects of interactive e-book
reading with paper-based reading, and reporting researcher observations as well as learners’ perceptions of the helpfulness of
different interactive features in facilitating reading comprehension.
E-books have come a long way since Michael Hart initiated Project Gutenberg and the digitization and archiving of cultural
works in 1971 (Hart, 1992). In the beginning, e-books were nothing more than text that had been converted to digital form.
Nowadays, however, the majority of educational computer-based e-books include interactive features as a way to provide an
exciting learning experience and help improve students’ reading performance (Guernsey et al., 2012). These interactive e-
books have the potential to change the way our students read and comprehend what they read (Schugar et al., 2013) as the
features are responsive to them as they engage with text (Moreno & Mayer, 2007).
In recent years studies have started to emerge which look at how interactive e-books affect different levels of reading
comprehension. Kao et al. (2016) found that highly interactive e-books may positively contribute to students' literal and evaluative comprehension but not to inferential comprehension. They explained that highly interactive features (features that include guidance, prompts, and feedback) might help students easily locate necessary information while focusing on the text, and discover causes and effects so as to accurately answer evaluative questions. On the other hand, Danaei et al. (2020) concluded
that popular e-book features such as animated pictures and narration could enhance students’ inferential comprehension
since the features made students focus on the story and enabled them to infer more easily from visual and auditory clues.
Considering the mixed results of previous studies, further research is required that investigates the roles that interactive
features play in facilitating different levels of reading comprehension. The sections below outline common interactive fea-
tures and empirical findings on their impact on reading comprehension.
2.2.4. Hotspots
Hotspots refer to a function that is activated when users click pictures, words, or sentences (Korat et al., 2014; Korat &
Shamir, 2007; Smeets & Bus, 2015). When clicked, pop-ups or animations are initiated to aid learners’ understanding by
providing additional information to the text which is not originally available (Korat et al., 2014). This can include dictionary
features such as the activation of a gloss when students click on a word to help them understand the meaning (Homer et al.,
2014; Smeets & Bus, 2015). Such dictionary hotspot features have been found to make a noticeable contribution to vocabulary
learning and growth, which can facilitate learners’ reading comprehension (Constantinescu, 2007; Khezrlou et al., 2017; Korat
et al., 2013; Marzban, 2011b).
Although hotspots can support students' reading comprehension, De Jong and Bus (2002) found that when students read e-books with hotspots, some of them (especially lower-level students) became distracted and played with the hotspots. This led to
the students reading e-books with hotspots comprehending less than students reading paper-based books. In contrast, other
studies found that although students played with hotspots, they could comprehend as much as those who read paper-based
books. This was possibly because the hotspots were specifically designed by the developers not to distract students by
limiting the number of hotspots, making hotspots relevant to the stories, and allowing students to explore them only after
reading (De Jong & Bus, 2004; Korat, 2009; Shamir et al., 2008).
2.2.5. Games
As Li (2017) discusses, games are also widely used in interactive e-books as a means of promoting learning, as they often
incorporate the opportunity to practice language, apply what is learned from the content of the reading, or demonstrate
comprehension of what is read through game-based activities. However, research into how games affect students’ reading
comprehension has produced mixed results (Bus et al., 2015). De Jong and Bus (2002) found that when games irrelevant to
what students were reading were available without any restrictions, students’ understanding of the text was negatively
influenced. What is interesting in their study was that games had such a strong appeal that students became absorbed in
playing games and spent less time reading and using hotspots. On the other hand, other studies found that games can help
enhance students’ reading comprehension (Homer et al., 2014; Zipke, 2017). According to Bus et al. (2015), games can have
positive results if the games do not distract students away from the main task of comprehending the text, and programs have
built-in features that limit the use of game features.
To summarize the research to date on interactive e-book features and students' reading comprehension: interactive e-books must be designed carefully to include features closely related to facilitating students' comprehension of what they
are reading. At the same time, restrictions on the features should be imposed (such as when they can be used and for how long) to avoid students
being distracted by the features which can result in a negative impact on their reading comprehension (Bus et al., 2015; Kao
et al., 2016; Sargeant, 2015; Takacs & Bus, 2016; Zipke, 2017). Nevertheless, it is hard to exclude the possibility that some
interactive features can be harmful no matter how carefully they are designed (Takacs et al., 2015). The reason is that, because students frequently and repeatedly shift from reading to engaging with interactive features, extraneous cognitive processing can overload a reader's working memory and interfere with their reading comprehension (Mayer, 2009).
To date, most of the research has focused on the effects of interactive e-books on young learners (e.g., Chambers et al.,
2006; Kao et al., 2016; Korat, 2009, 2010; Korat & Shamir, 2007; Schugar et al., 2013; Takacs & Bus, 2016) or college stu-
dents (Chen et al., 2013; Marzban, 2011a). This has left the effectiveness of interactive e-books on middle school students
underexplored. Additionally, some methodological problems identified in the previous studies result in additional gaps in the
literature regarding this topic. Singer Trakhman and Alexander (2017b) outline some of the issues as follows:
1. A lack of sufficient information about the texts in terms of length and type
2. A limited use of different question types and levels of difficulty
3. A failure to reflect various levels/types of comprehension (i.e., literal, inferential, evaluative, and applied)
The purpose of this small-scale exploratory study was to address the research gap and the methodological limitations outlined by Singer Trakhman and Alexander (2017b), and to provide recommendations for the future research that this study may inspire. Thus,
the researchers aimed to investigate how interactive e-books affect middle school students’ reading performance in com-
parison with paper-based reading on literal and inferential comprehension questions while providing clear explanations
about the materials used. This includes details of the text characteristics (type, length, and difficulty) and question formats
(true/false, multiple-choice, and open-ended questions). Furthermore, it also sought to examine middle school students’
perceptions of how helpful specific interactive features were in aiding their comprehension of short stories. Thus, the
following research questions formed the basis of this study:
1. How do different mediums of reading (interactive e-book reading vs. paper-based reading) affect students’ performance
on comprehension tests?
2. How does students' performance on literal and inferential reading test items differ depending on the medium of reading (interactive e-book vs. paper-based reading)?
3. What interactive e-book features do learners find most and least helpful in aiding their reading comprehension when
reading independently?
3. Method
3.1. Participants
A convenience sample of thirty Korean middle school students, who were attending a private supplementary English
language institute, participated in this study. In contrast to public schools in South Korea, private schools are more
often equipped with computer labs in which students can engage with various learning programs including interactive e-
books. Those attending private supplementary English institutes are therefore the most likely to engage with the materials
being examined in this study. Thus, the data collection site and participants were specifically chosen to obtain a represen-
tative sample of Korean middle school students who are likely to interact with both paper-based books and interactive e-books regularly.
Table 2 below presents the demographic information about the participating students ranging from the 1st year of middle
school (USA grade 7 equivalent) to the 3rd year of middle school (USA grade 9 equivalent).
Table 2
Participant demographics.
Participants all had prior English language learning experience ranging from five to seven years, and prior experience with
both English interactive e-books and paper-based reading. To measure participants' language reading level, a Test of English for International Communication® (TOEIC) Bridge mock test was administered before formal data collection. The test results
indicated their reading levels ranged from A1 to A2 in the Common European Framework of Reference (CEFR). According to
the Council of Europe (2018), those that fall within an A1-A2 descriptor level are considered basic users. They can
comprehend the main ideas of short, simple texts which contain familiar words and sentences and are related to their
interests.
3.2. Materials
public textbooks and mimicking the format students normally encounter. Thus, each comprehension test followed the same
format of three true/false, five multiple-choice, and two open-ended questions, and had a Lexile measure of 500–600, which corresponds to the A2 level in the CEFR (University of Colorado Boulder, 2018). As suggested by various scholars (e.g., Carr, 2011; Kirschner et al., 1996; Shohamy, 1984), to avoid students answering questions incorrectly as a result of misunderstanding the questions, the test items and examples were written in the students' native language. Translated examples of the pre-
test and Test 1 are provided in Appendix A.
3.3. Procedures
The participants were randomly assigned to one of two groups through a lottery name draw so that each participant had
an equal opportunity to be assigned to either the treatment or the control group. Both the e-book reading group and the
paper-based reading group consisted of 15 participants each. As displayed in Fig. 1 below, after the random assignment, a
paper-based pre-test was conducted to ensure the comparability of the two groups in their reading comprehension ability on
Day 1 before the experiment (Mackey & Gass, 2015). In the pre-test, all of the participants individually read a written text for
20 min and then answered the 10 comprehension questions which took, on average, about 25 min. While answering
questions, students were allowed to refer to the book to minimize the possible effects of forgetting what they had read on
their reading performance (Nation, 2009). After the pre-test, the reading sessions were held over five consecutive days from
Day 2 to Day 6 in a computer lab at the supplementary English language institute equipped with twenty individual desktop
computers. Before the 1st reading session on Day 2, students in the treatment group were introduced to the available features
of e-books.
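To make the assignment step concrete, the sketch below mirrors the logic of the lottery name draw with a seeded random shuffle in Python. The participant IDs and the seed are placeholders for illustration only; the study itself used a physical name draw rather than software.

```python
import random

random.seed(2020)  # fixed seed so this illustrative split is reproducible
# 30 hypothetical participant IDs (placeholders, not the study's participants)
participants = [f"S{i:02d}" for i in range(1, 31)]

random.shuffle(participants)            # software analogue of the lottery name draw
treatment = sorted(participants[:15])   # interactive e-book (treatment) group, n = 15
control = sorted(participants[15:])     # paper-based reading (control) group, n = 15

print("Treatment:", treatment)
print("Control:", control)
```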
In each reading session, students in both groups read the same book for 20 min in the format their group was assigned to.
Students reading an interactive e-book in the treatment group were free to choose and use whatever functions they desired.
On the other hand, students in the control group simply read the equivalent paper-based version of the story. To provide
further insights into students’ interactions with interactive e-books vs. paper-based books, during each reading session the
principal researcher acted as a non-participant observer and wrote field notes detailing how students were interacting with
the different reading mediums. Immediately after each reading session, students in both groups were required to complete a
reading comprehension test. During the tests, the principal researcher observed and monitored the students to prevent any
form of cheating which could affect the internal validity of the results. Field notes written during these observations focused specifically on how the learners interacted with the different reading mediums. On Day 7, students in the treatment
group responded to the survey.
For each comprehension question in the pre- and five reading comprehension tests (Tests 1 to 5), one correct answer
accounted for one point; thus, the total maximum possible score was 10 across the tests. The two open-ended questions
requiring simple short answers were scored independently by the researchers and their ratings were cross-analyzed to
provide inter-rater reliability. Exact agreement between the researchers was 88.7%, which is considered acceptable (Mackey
& Gass, 2015). The independent-samples t-test was performed in SPSS 22 to compare the scores of the pre- and reading
comprehension tests between the two groups. The assumptions of the t-test (an interval-level dependent variable, inde-
pendence of observation, normal distribution of data, and equal variances) examined before the main analysis indicated the
data (except Test 2 scores) did not meet the assumptions of normal distribution and equal variances. Consequently, bootstrapped bias-corrected and accelerated (BCa) confidence intervals (CIs) for the t-test were employed (Larson-Hall, 2016). The
bootstrapped BCa is a robust statistic that does not require the assumptions of normality and equal variances (Larson-Hall,
2016). Differences in the literal and inferential test scores depending on groups were analyzed by the same statistical
technique because the assumption of normal distribution was not met in all tests. Effect sizes were measured using Cohen's d (specifically, Glass's Δ). The standard deviations (SDs) of the paper-reading group were used as standardizers (Glass, 1976;
Olejnik & Algina, 2000).
Before the analysis, survey response data were reverse coded due to the fact that the original scales were in a different
order. As aforementioned, smaller numbers originally indicated agreement while larger numbers represented disagreement
regarding the effectiveness of the interactive features in facilitating reading comprehension. After the reverse coding, 1
indicated strong disagreement (Strongly Disagree), 5 indicated strong agreement (Strongly Agree), 3 remained neutral (Neither Agree nor Disagree), and 4 and 2 represented Agree and Disagree, respectively. The re-coded data were analyzed
descriptively in SPSS.
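As a concrete illustration of this recoding step, the short sketch below (using invented responses, not the study's data) reverse-codes a 5-point Likert scale by subtracting each raw value from 6, so that 1 becomes 5 and 5 becomes 1, before descriptive statistics are computed.

```python
import pandas as pd

# Hypothetical raw responses, where 1 = Strongly Agree ... 5 = Strongly Disagree
raw = pd.DataFrame({
    "sentence_highlighting": [1, 2, 3, 1, 2],
    "incentive_game":        [4, 5, 3, 4, 5],
})

# Reverse-code so that 5 = Strongly Agree and 1 = Strongly Disagree
recoded = 6 - raw

# Descriptive statistics (means, SDs, etc.) on the recoded scale
print(recoded.describe().round(2))
```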
4. Results
4.1. Differences in overall reading comprehension test scores between the interactive e-book readers and paper-based readers
Table 3 presents the descriptive statistics, BCa 95% CIs for the test score differences and effect sizes across the six tests. The
mean scores of the paper-based reading group were slightly higher than those of the e-book reading group on the pre-test,
Tests 1, 3, and 4. On the other hand, the mean of the e-book reading group was somewhat higher than that of its paper-based
counterpart on Test 5. The scores of the two groups on Test 2 were equal.
Table 3
Descriptive statistics, BCa 95% CIs for test score differences, and effect sizes for overall comprehension scores.
The BCa 95% CI for the mean difference of the pre-test scores indicates the two groups were not statistically different in
reading comprehension ability before the treatment, as the CI includes 0. The magnitude of the difference was trivial (Plonsky
& Oswald, 2014). Likewise, the test found no evidence of a statistical difference between the groups on any of the five reading
comprehension tests; all CIs included 0. This suggests that the medium of reading did not have any bearing on the students'
performances on reading comprehension tests. However, the somewhat wide CIs indicate the estimates about the score
differences are not precise due to the small sample size (Larson-Hall, 2016).
The effect sizes of Tests 1, 3, 4, and 5 were close to 0.4, and that of Test 2 was 0. This indicates the magnitudes of the score
differences across the five tests were either small or nil (Plonsky & Oswald, 2014), meaning that, at most, the medium of reading accounted for only a small part of the variation in reading comprehension scores.
4.2. Mediums of reading and their effects on literal and inferential reading comprehension
As Table 4 shows, the mean scores of the two groups on both literal and inferential questions across the six tests were
equivalent or similar. The CIs for the pre-test indicate the two groups were comparable to each other regarding both literal
and inferential reading abilities before the treatment. Similarly, the CIs on Tests 1 to 5 show the scores of the two groups were
not statistically different regardless of question type, with small-sized effects. This suggests the medium of reading exerted, at most, a small effect on the students' literal and inferential reading abilities. However, the two groups' scores on
the literal questions in Test 4 were significantly different with a medium-sized effect (Plonsky & Oswald, 2014), which shows
the students’ literal reading comprehension scores were dependent upon the different reading mediums to a noticeable
extent on Test 4. It seems that their literal comprehension benefitted much more from a paper-based reading condition.
Overall, the BCa 95% CIs were rather wide, which indicates the estimates of the score differences are not precise, possibly
because of the small-sized sample (Larson-Hall, 2016).
Table 4
Descriptive statistics, BCa 95% CIs for the score differences in the literal and inferential questions, and effect sizes.
Test Literal vs inferential Mean (SD) BCa 95% CI Effect size (d)
4.3. Interactive features most and least favored by students to aid reading comprehension
Table 5 presents the mean ratings with percentages of the students’ perceptions of the effectiveness of the interactive
features as aiding tools for their reading comprehension. In terms of mean ratings, the feature of sentence highlighting, which
is synchronized with narration, was believed to be the most helpful function with the highest mean score (4.07), followed by
the page control button (4.00), animated pictures (3.93), picture dictionary (3.73), text re-reading (3.67), word game (3.60),
and voice recording (3.60). When it comes to the percentages, 80% of the students either agreed or strongly agreed that the
page control button and animated pictures were important. 66.7% of them either agreed or strongly agreed sentence high-
lighting, picture dictionary, and text re-reading features facilitated their reading comprehension. Additionally, the word game
and voice recording features were perceived as helpful tools by 60% of the students.
Table 5
Mean ratings of students' perceptions of the interactive features (N = 15).

1. The sentence highlighting feature helped me to better understand the text.
   Strongly Disagree 0 (0%); Disagree 0 (0%); Neither Agree nor Disagree 5 (33.3%); Agree 4 (26.7%); Strongly Agree 6 (40.0%); Mean 4.07
2. The page control button helped me to better understand the text.
   Strongly Disagree 1 (6.7%); Disagree 1 (6.7%); Neither Agree nor Disagree 1 (6.7%); Agree 6 (40.0%); Strongly Agree 6 (40.0%); Mean 4.00
3. The animated pictures helped me to better understand the text.
   Strongly Disagree 0 (0%); Disagree 1 (6.7%); Neither Agree nor Disagree 2 (13.3%); Agree 9 (60.0%); Strongly Agree 3 (20.0%); Mean 3.93
4. The picture dictionary helped me to better understand the text.
   Strongly Disagree 0 (0%); Disagree 2 (13.3%); Neither Agree nor Disagree 3 (20.0%); Agree 7 (46.7%); Strongly Agree 3 (20.0%); Mean 3.73
5. The feature of text re-reading helped me to better understand the text.
   Strongly Disagree 0 (0%); Disagree 1 (6.7%); Neither Agree nor Disagree 4 (26.7%); Agree 9 (60.0%); Strongly Agree 1 (6.7%); Mean 3.67
6. The word game helped me to better understand the text.
   Strongly Disagree 0 (0%); Disagree 1 (6.7%); Neither Agree nor Disagree 5 (33.3%); Agree 8 (53.3%); Strongly Agree 1 (6.7%); Mean 3.60
7. The voice recording helped me to better understand the text.
   Strongly Disagree 0 (0%); Disagree 2 (13.3%); Neither Agree nor Disagree 4 (26.7%); Agree 7 (46.7%); Strongly Agree 2 (13.3%); Mean 3.60
8. The incentive game helped me to better understand the text.
   Strongly Disagree 3 (20.0%); Disagree 5 (33.3%); Neither Agree nor Disagree 5 (33.3%); Agree 0 (0%); Strongly Agree 2 (13.3%); Mean 2.53
In contrast, the incentive game was thought by the students to be the least helpful (2.53) in facilitating their reading
comprehension. Specifically, more than half of the students (53.3%) expressed their disagreement regarding the effectiveness
of the feature, and an additional one-third of them (33.3%) neither agreed nor disagreed.
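The mean ratings and agreement percentages in Table 5 follow directly from the response counts; as a check, the short sketch below reproduces the figures reported for the sentence highlighting item.

```python
import numpy as np

# Response counts for item 1 (sentence highlighting) in Table 5:
# [Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, Strongly Agree]
counts = np.array([0, 0, 5, 4, 6])
scale = np.array([1, 2, 3, 4, 5])

mean_rating = (counts * scale).sum() / counts.sum()   # (5*3 + 4*4 + 6*5) / 15
pct_agree = counts[3:].sum() / counts.sum() * 100     # Agree + Strongly Agree

print(round(mean_rating, 2))   # 4.07
print(round(pct_agree, 1))     # 66.7
```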
5. Discussion
The first research question we set out to examine was how interactive e-book reading affects reading comprehension in
comparison with paper-based reading. Despite the interactive e-book group having access to various features in the program
that could facilitate their comprehension of the text, no statistical difference in students’ reading comprehension was found
between the two different mediums of reading. One possible explanation relates to distraction: while observing students in the interactive e-book group, the principal researcher noted that some students were distracted by the interactive features. More specifically,
the primary researcher observed that students reading e-books spent a portion of their time playing with interactive features
rather than engaging directly with the text (specifically the games and hotspot features) while those in the paper-based
reading group tended to spend time reading and re-reading the texts several times. This resulted in the interactive e-book
readers spending less time and attention on the texts overall than those in the paper-based group. As De Jong and Bus (2002) pointed out, all levels of students may be distracted by games, which can take the learner's focus away from
the task of comprehending the text. This may have negated the comprehension benefits that may have been gained from the
interactive features. In other words, it is possible that even with less time and attention spent on reading, students in the
interactive e-book group were still able to obtain similar scores to their paper-based reading counterparts due to the
comprehension benefits gained from playing with the interactive features. Further research would need to be carried out to
investigate if this was, in fact, the case.
In addition, considering extraneous cognitive processing (Mayer, 2009), it is also possible that students’ working memory
was pushed beyond its capacity by students alternately switching between reading the texts and using interactive features, especially if those features were not aimed at, or pedagogically designed for, facilitating learners' comprehension (Paas & Sweller, 2014).
This may have been the case with the incentive game and the picture dictionary, as the incentive game was simply not aimed at providing support for reading comprehension and therefore distracted students from focusing on reading and compre-
hending, and the picture dictionary was poorly designed to aid learners’ comprehension. More specifically, the picture dic-
tionary simply provided a matching picture for each word entry with some of the pictures being ambiguous and failing to
clearly depict the meaning. The lack of clarity between the word and the meaning being presented in the accompanying
picture may have caused confusion and led to additional extraneous processing in students, resulting in little to no gains in
their comprehension of the interactive e-book text. From the results of this study, we cannot be certain that this was the case
and therefore suggest further research be conducted which investigates the possible links between interactive features,
extraneous cognitive processing, and reading comprehension.
Considering the second research question examining the difference in literal and inferential comprehension depending on
the mediums of reading, the results revealed that there was no statistical difference in students’ inferential comprehension
depending on different reading mediums. On the other hand, a statistical difference in literal comprehension between the
two groups was found in one of the five tests. The results are contrary to Kao et al.'s (2016) findings, where students who read e-books with high interactivity performed better on literal and critical comprehension questions than students reading low-
interactive e-books. The different results may be explained by a poorly developed interactive design. The interactive features
in the research by Kao et al. (2016) were developed to have students actively engage in reading and think deeply whereas
interactive features provided in the program used for this research did not. In line with the findings of De Jong and Bus (2002),
poorly designed features in the interactive e-books used in this study may have led to less repeated reading and focus on
reading texts which may account for the lack of increase in comprehension. Another possible reason might be that the
majority of inferential comprehension questions adopted a multiple-choice format (3 out of 4). Thus, students may have been
able to get the correct answer by the process of elimination or guessing rather than genuinely being able to make inferences
based on their comprehension of the text.
The results of the third research question investigating what interactive features students find the most and the least
helpful in relation to aiding their reading comprehension revealed that sentence highlighting was felt to be the most helpful
interactive feature in the e-books used in the research, followed by the page control button, animated pictures, picture
dictionary, text re-reading, word game, and voice recording. The mean ratings of those features were all 3.6 or higher, which indicates that students felt those features were somewhat helpful in aiding their comprehension when reading e-
books. The incentive game, however, was considered unhelpful as more than half of the students (53%) expressed
disagreement that the feature aided their comprehension with the mean being 2.53. It is not surprising that the incentive
game was thought to be the least helpful of the eight features because it was not designed to contribute to facilitating the
reading process but rather aimed to entertain and motivate students. As previously mentioned, Moreno and Mayer (2007)
argued that interactive games should be developed based on empirical findings of how people learn. It seems that the
game did not follow their recommendation because it only promoted behavioral activity and did not reflect features such as
coaching (providing explanations or advice), self-explanation (providing questions asking players to explain or select ex-
planations from a menu), pre-training (providing pre-game activities), modality (presenting words in spoken form), and
personalization (presenting words in conversational style) (Clark & Mayer, 2016).
Many researchers agree on the importance of careful consideration and planning in interactive e-book designs (Korat,
2010; Sargeant, 2015; Takacs et al., 2015; Takacs & Bus, 2016). When instructional design is poorly developed (i.e., layout,
design, or features that distract from reading, or confuse students), students’ limited working memory may be taken up by an
extraneous cognitive load which can hinder their comprehension of the digital text (Mayer, 2014; Paas & Sweller, 2014). As we
found in this study, games should be given careful consideration before being adopted in interactive e-books. Making games accessible only before or after reading might be a good way to avoid negatively influencing the reading process. Thus, it is of
primary importance that the instructional design is developed in a way that facilitates reading comprehension while at the
same time reducing extraneous cognitive load (Paas & Sweller, 2014). In line with previous studies (see Biancarosa & Griffiths,
2012), we strongly believe that the teacher's role is critically important to provide guidance and support during interactive e-
book reading. Teachers should be well aware of which features will and will not aid students' reading comprehension, and need to discourage students from using distracting features or features that do not aim to facilitate their
understanding of what they are reading. Additionally, as Moreno and Mayer (2007) stated, materials developers need to
consider what interactive features should or should not be included in these e-book programs based on empirical research.
Relating to previous literature on the topic, students’ reading comprehension in this study may have also been affected by
formed habits of digital text reading. As previously presented, students tend to read digital materials casually, browsing and scanning (Liu, 2005), and therefore their reading comprehension can be compromised (Lenhard et al.,
2017; Liu, 2005; Singer Trakhman et al., 2018). Singer Trakhman et al. (2018) reported that students reading digital texts
seemed to read books faster than those reading paper texts, and their overall comprehension scores were lower than the paper-based reading group's. In line with these previous studies, the primary researcher also observed students reading e-books quickly and then spending their time playing with interactive features.
We must caution at this point, however, that individual differences in reading skills (differences in abilities in lower-level processes and higher-level processes) can affect students' reading comprehension more than the medium in which they read
the content (Singer Trakhman & Alexander, 2017b). For example, Alisaari et al. (2018) found that students with good decoding
skills at the word level (lower-level processes) performed equally well on a digital comprehension test following on-screen
reading as they did on a paper-based comprehension test following paper-based reading. This finding lent support to
previous studies that pointed out the importance of fast, automatic, and accurate word recognition for the comprehension of
written discourse (Alderson, 2000; Grabe, 2009). Students who read digital texts slowly and engaged in in-depth thinking
processes (higher-level processes) performed better in measures of reading comprehension than others who did not (Singer
Trakhman et al., 2018). Similarly, Yeom and Jun (2020) reported that high-performing English language learners regardless of
medium read through texts thoroughly utilizing reading comprehension strategies and background knowledge to aid their
comprehension (higher-level processes). Interestingly, similar reading comprehension strategies (higher-level processes)
seem to be applied in both mediums (Rockinson-Szapkiw et al., 2013; Song et al., 2020; Yeom & Jun, 2020). Taken together, it may be reasonable to conclude that it is how one reads, rather than whether the text is on paper or on screen, that is of crucial importance for students' reading comprehension.
Thus, in the case of interactive e-books where readers may have unknowingly picked up skimming and scanning habits
during their everyday interaction with digital text, materials developers need to introduce research-based designs, features,
and tasks that offset such formed habits. As Singer Trakhman et al. (2018) suggested, introducing designs that promote the
slow reading of digital texts can promote deeper thinking, which can ultimately lead to better comprehension. For example, imposing longer time mandates for each portion of a text, incorporating tasks that require students to read back and forth through the text three or four times, or setting a timer lock on the page function so that learners cannot simply skim and flip the pages as they wish may all be ways of counteracting students' digital reading habits.
Along with material developers, teachers must guide students towards effective ways of reading or engaging with
interactive e-books. To do so, teachers need to understand how e-book reading habits may influence students’ reading
performance, promote slower reading practices when students engage with digital text, and involve students in tasks that
require deeper cognitive processing while reading.
6. Conclusion
As CALL technology advances, English language learners are engaging more and more with interactive e-books. In this
study, the impact of interactive e-books on students’ reading comprehension and students’ opinions on interactive features
were investigated. The results did not reveal significant effects of interactive e-books on students' reading comprehension compared to paper-based reading, either overall or when comparing literal to inferential comprehension. The results of the
survey shed light on what interactive features students felt helped them to comprehend what they read. Although the
majority of the features themselves were perceived to be useful to some extent, in some cases the design within those features
could have hindered rather than aided students’ comprehension. A good example of this is the picture dictionary which
provided ambiguous pictures for keywords with no further support. This sort of design can lead to confusion in learners
where they are trying to decipher the meaning of the words they do not know and are left having to guess the meaning based
on the vague picture provided. Therefore, it is important that interactive e-book designers not only consider the features but
the design within those features as well to ensure they support the reading comprehension process. To take this point one
step further, it is imperative that the development of such materials is research-based. Furthermore, features that are
included just for fun, or that distract students from the reading process should be reconsidered by developers of future
interactive e-books, as they can detract from the aims of reading and from fostering students' autonomous understanding of
what they read.
This study aimed to contribute to a better understanding of how middle school students perform when reading interactive e-
books compared to paper-based reading and initiate further research on the topic. As aforementioned, the previous research
on interactive e-book reading has mainly focused on either young learners or college students, and the research about how
middle school students perform in reading interactive e-books has not been given proper attention. Moreover, this study
addressed shortfalls that Singer Trakhman and Alexander (2017b) found in previous research by providing detailed infor-
mation on texts (the type, the length, and the difficulty of texts) and question formats (true/false, multiple-choice, and open-
ended questions), examining different levels of comprehension (literal and inferential comprehension) and considering
students’ language proficiency.
It should be noted, however, that this study has a few notable limitations. Therefore, caution should be exercised when
interpreting the results. First, the sample size is small which raises a few concerns regarding the representativeness of the
sample, generalizability of the research findings, and accuracy of the effect estimates (Larson-Hall, 2016). Nevertheless, we do
believe it is a starting point from which more research in this area can (and should) be conducted. To address these issues, future
research should determine the necessary sample size based on a targeted precision (Larson-Hall, 2016) and replicate the
current study with larger samples. In addition, the current study took mainly a quantitative approach, focusing on the product
of reading (test scores) as well as the survey data. The students could have been asked about their perceptions of beneficial
interactive features after each reading session and reasons for using certain features. Thus, why the students perceived each of
the interactive features to be useful for their reading comprehension remains unknown. Accordingly, future studies are
needed to explore this topic by incorporating qualitative data collection techniques (e.g., interviews, diaries) and analysis
methods to gain a fuller understanding (Alderson, 2000; Dörnyei & Taguchi, 2010). An additional limitation was in the reading
comprehension test design. Since these tests were designed to mimic the current reading comprehension testing norms in
middle school English classrooms in South Korea, the tests were predominantly multiple-choice questions and did not
contain many open-ended questions. Future studies should incorporate a variety of open and closed-ended question types in
order to gain deeper insight into students’ reading comprehension.
The findings of this study provide various avenues for future studies. First, the results of this study shed light on students’
reading comprehension ability depending on different mediums of reading. Nonetheless, the question of whether the mediums of reading differentially affect the improvement of students' reading comprehension ability, and in what capacity, remains to be examined, mainly due to the short duration of the experiment (i.e., the total amount of reading time was 100 min). As outlined
in the literature review, reading comprehension is a complex task that involves various lower-level and higher-level cognitive
processes; therefore, development in reading comprehension takes time (see Oakhill & Cain, 2007). Thus, longitudinal
research investigations must be undertaken, employing longer and/or more reading sessions that look into effects on
different levels of reading comprehension (i.e. literal, inferential, applied, and evaluative).
Second, as research into this area is still in its infancy, more studies must be conducted in a variety of contexts with a
variety of interactive e-book materials. It would also be of interest to investigate whether training students to be actively
involved in deeper processing while reading interactive e-books can lead to improvements in students’ reading
comprehension.
Jongyun Lim: Conceptualization, Investigation, Data curation, Formal analysis, Writing - original draft, Writing - review &
editing. George E.K. Whitehead: Conceptualization, Methodology, Supervision, Writing - original draft, Writing - review &
editing. YunDeok Choi: Formal analysis, Writing - original draft, Writing - review & editing.
References
De Jong, M. T., & Bus, A. G. (2004). The efficacy of electronic books in fostering kindergarten children's emergent story understanding. Reading Research Quarterly, 39(4), 378–393. https://doi.org/10.1598/rrq.39.4.2
Dörnyei, Z., & Taguchi, T. (2010). Questionnaires in second language research: Construction, administration, and processing (2nd ed.). Routledge.
Gass, S. M. (1997). Input, interaction, and the second language learner. Lawrence Erlbaum Associates.
Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5(10), 3e8. https://doi.org/10.3102/0013189x005010003
Grabe, W. (2009). Reading in a second language: Moving from theory to practice. New York: Cambridge University Press.
Grabe, W., & Stoller, F. (2011). Teaching and researching reading (2nd ed.). Pearson Education.
Guernsey, L., Levine, M., Chiong, C., & Severns, M. D. (2012). Pioneering literacy in the digital wild west: Empowering parents and educators. Retrieved from
https://joanganzcooneycenter.org/wp-content/uploads/2012/12/GLR_TechnologyGuide_final.pdf.
Hart, M. (1992, August). The history and philosophy of Project Gutenberg by Michael Hart. Project Gutenberg. https://www.gutenberg.org/wiki/Gutenberg:The_History_and_Philosophy_of_Project_Gutenberg_by_Michael_Hart.
Homer, B. D., Kinzer, C. K., Plass, J. L., Letourneau, S. M., Hoffman, D., Bromley, M., Hayward, E. O., Turkay, S., & Kornak, Y. (2014). Moved to learn: The effects
of interactivity in a Kinect-based literacy game for beginning readers. Computers & Education, 74, 37e49. https://doi.org/10.1016/j.compedu.2014.01.007
Jenkins, J. R., & Pany, D. (1981). Instructional variables in reading comprehension. In J. T. Guthrie (Ed.), Comprehension and teaching: Research reviews (pp.
163e202). International Reading Association.
Jeong, H. (2012). A comparison of the influence of electronic books and paper books on reading comprehension, eye fatigue, and perception. The Electronic
Library, 30(3), 390e408. https://doi.org/10.1108/02640471211241663
Joh, J., & Plakans, L. (2017). Working memory in L2 reading comprehension: The influence of prior knowledge. System, 70, 107e120. https://doi.org/10.1016/j.
system.2017.07.007
Kao, G. Y. M., Tsai, C. C., Liu, C. Y., & Yang, C. H. (2016). The effects of high/low interactive electronic storybooks on elementary school students’ reading
motivation, story comprehension and chromatics concepts. Computers & Education, 100, 56e70. https://doi.org/10.1016/j.compedu.2016.04.013
Khezrlou, S., Ellis, R., & Sadeghi, K. (2017). Effects of computer-assisted glosses on EFL learners’ vocabulary acquisition and reading comprehension in three
learning conditions. System, 65, 104e116. https://doi.org/10.1016/j.system.2017.01.009
Kirschner, M., Spector-Cohen, E., & Wexler, C. (1996). A teacher education workshop on the construction of EFL tests and materials. Tesol Quarterly, 30(1),
85e112. https://doi.org/10.2307/3587608
KnK International Homeschool. (n.d.). KnK CAPPYTOWN [Online learning program]. Retrieved from www.cappytown.com.
Korat, O. (2009). The effects of CD-ROM storybook reading on Israeli children’s early literacy as a function of age group and repeated reading. Education and
Information Technologies, 14(1), 39e53. https://doi.org/10.1007/s10639-008-9063-y
Korat, O. (2010). Reading electronic books as a support for vocabulary, story comprehension and word reading in kindergarten and first grade. Computers &
Education, 55(1), 24e31. https://doi.org/10.1016/j.compedu.2009.11.014
Korat, O., Levin, I., Atishkin, S., & Turgeman, M. (2013). E-book as facilitator of vocabulary acquisition: Support of adult, dynamic dictionary and static dictionary. Reading and Writing, 27, 613–629. https://doi.org/10.1007/s11145-013-9474-z
Korat, O., & Shamir, A. (2007). Electronic books versus adult readers: Effects on children’s emergent literacy as a function of social class. Journal of Computer
Assisted Learning, 23(3), 248e259. https://doi.org/10.1111/j.1365-2729.2006.00213.x
Korat, O., Shamir, A., & Segal-Drori, O. (2014). E-books as a support for young children’s language and literacy: The case of Hebrew-speaking children. Early
Child Development and Care, 184(7), 998e1016. https://doi.org/10.1080/03004430.2013.833195
Kucer, S. B. (2005). Dimensions of literacy (2nd ed.). Lawrence Erlbaum.
Larson-Hall, J. (2016). A guide to doing statistics in second language research using SPSS and R (2nd ed.). Routledge.
Leeser, M. J. (2007). Learner-based factors in L2 reading comprehension and processing grammatical form: Topic familiarity and working memory. Language Learning, 57(2), 229–270. https://doi.org/10.1111/j.1467-9922.2007.00408.x
Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency.
Discourse Processes, 54(5e6), 427e445. https://doi.org/10.1080/0163853X.2017.1319653
Li, L. (2017). New technologies and language learning. Palgrave Macmillan.
Liu, Z. (2005). Reading behavior in the digital environment. Journal of Documentation, 61(6), 700e712. https://doi.org/10.1108/00220410510632040
Mackey, A., & Gass, S. M. (2015). Second language research: Methodology and design. Routledge.
Marzban, A. (2011a). Improvement of reading comprehension through computer-assisted language learning in Iranian intermediate EFL students. Procedia
Computer Science, 3, 3e10. https://doi.org/10.1016/j.procs.2010.12.003
Marzban, A. (2011b). Investigating the role of multimedia annotations in EFL reading comprehension. Procedia-Social and Behavioral Sciences, 28, 72e77.
https://doi.org/10.1016/j.sbspro.2011.11.015
Mayer, R. E. (2009). Multimedia learning (2nd ed.). Cambridge University Press.
Mayer, R. E. (2014). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 43e71). Cambridge
University Press.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43e52. https://doi.org/10.1207/
S15326985EP3801_6
Moreno, R., & Mayer, R. E. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19(3), 309e326. https://doi.org/10.1007/
s10648-007-9047-2
Nation, I. S. P. (2009). Teaching ESL/EFL reading and writing. Routledge.
Nuttall, C. (1982). Teaching reading skills in a foreign language. Heinemann Educational.
Oakhill, J. V., & Cain, K. (2007). Introduction to comprehension development. In K. Cain, & J. Oakhill (Eds.), Children’s comprehension problems in oral and
written language: A cognitive perspective (pp. 3e40). New York: Guilford.
Olejnik, S., & Algina, J. (2000). Measures of effect size for comparative studies: Applications, interpretations, and limitations. Contemporary Educational
Psychology, 25(3), 241e286.
Paas, F., & Sweller, J. (2014). Implications of cognitive load theory for multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia
learning (pp. 27e42). Cambridge University Press.
Pearson, P. D., & Camperell, K. (1981). Comprehension of text structures. Comprehension and teaching: Research reviews. Retrieved from https://files.eric.ed.gov/fulltext/ED203299.pdf#page=35.
Plonsky, L., & Oswald, F. L. (2014). How big is “big”? Interpreting effect sizes in L2 research. Language Learning, 64(4), 878e912. https://doi.org/10.1111/lang.
12079
Rockinson-Szapkiw, A. J., Courduff, J., Carter, K., & Bennett, D. (2013). Electronic versus traditional print textbooks: A comparison study on the influence of
university students’ learning. Computers & Education, 63, 259e266. https://doi.org/10.1016/j.compedu.2012.11.022
Sadeghi, K. (2007). The key for successful reader-writer interaction: Factors affecting reading comprehension in L2 revisited. Asian EFL Journal, 9(3), 198–220. Retrieved from https://www.researchgate.net/profile/Massoud_Yaghoubi-Notash3/publication/43480419_Examining_gender-based_variability_in_task-prompted_monologic_L2_oral_performance/links/555a3b9708ae6fd2d8281fd8/Examining-gender-based-variability-in-task-prompted-monologic-L2-oral-performance.pdf#page=198.
Sargeant, B. (2015). What is an ebook? What is a book app? And why should we care? An analysis of contemporary digital picture books. Children’s
Literature in Education, 46(4), 454e466. https://doi.org/10.1007/s10583-015-9243-5
Schugar, H. R., Smith, C. A., & Schugar, J. T. (2013). Teaching with interactive picture e-books in grades K-6. The Reading Teacher, 66(8), 615e624. https://doi.
org/10.1002/trtr.1168
Shamir, A., Korat, O., & Barbi, N. (2008). The effects of CD-ROM storybook reading on low SES kindergarteners’ emergent literacy as a function of learning
context. Computers & Education, 51(1), 354e367. https://doi.org/10.1016/j.compedu.2007.05.010
Shohamy, E. (1984). Does the testing method make a difference? The case of reading comprehension. Language Testing, 1(2), 147e170. https://doi.org/10.
1177/026553228400100203
Singer Trakhman, L. M., & Alexander, P. A. (2017a). Reading across mediums: Effects of reading digital and print texts on comprehension and calibration. The
Journal of Experimental Education, 85(1), 155e172. https://doi.org/10.1080/00220973.2016.1143794
Singer Trakhman, L. M., & Alexander, P. A. (2017b). Reading on paper and digitally: What the past decades of empirical research reveal. Review of Educational
Research, 87(6), 1007e1041. https://doi.org/10.3102/0034654317722961
Singer Trakhman, L. M., Alexander, P. A., & Silverman, A. B. (2018). Profiling reading in print and digital mediums. Learning and Instruction, 57, 5e17. https://
doi.org/10.1016/j.learninstruc.2018.04.001
Smeets, D. J. H., & Bus, A. G. (2013). Picture storybooks go digital: Pros and cons. In S. B. Neuman, & L. B. Gambrell (Eds.), Quality reading instruction in the age
of common core standards (pp. 176e189). International Reading Association.
Smeets, D. J. H., & Bus, A. G. (2015). The interactive animated e-book as a word learning device for kindergartners. Applied PsychoLinguistics, 36(4), 899e920.
https://doi.org/10.1017/s0142716413000556
Song, K., Na, B., & Kwon, H. J. (2020). A comprehensive review of research on reading comprehension strategies of learners reading in English-as-an
additional language. Educational Research Review, 29, 100308.
Sweet, A. P., & Snow, C. E. (2003). Reading for comprehension. In C. E. Snow, & A. P. Sweet (Eds.), Rethinking reading comprehension (pp. 1e11). Guilford Press.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257e285. https://doi.org/10.1016/0364-0213(88)
90023-7
Sweller, J. (1999). Instructional design in technical areas. ACER Press.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. https://doi.org/10.1023/A:1022193728205
Takacs, Z. K., & Bus, A. G. (2016). Benefits of motion in animated storybooks for children’s visual attention and story comprehension. An eye-tracking study.
Frontiers in Psychology, 7, 1591. https://doi.org/10.3389/fpsyg.2016.01591
Takacs, Z. K., Swart, E. K., & Bus, A. G. (2015). Benefits and pitfalls of multimedia and interactive features in technology-enhanced storybooks. Review of
Educational Research, 85(4), 698e739. https://doi.org/10.3102/0034654314566989
The Extensive Reading Foundation. (2011). Guide to extensive reading. Retrieved from https://erfoundation.org/wordpress/guides/.
Tompkins, G. (2006). Literacy for the 21st century: A balanced approach (4th ed.). Pearson Education.
Tompkins, G. (2015). Literacy in the early grades: A successful start for PreK-4 readers and writers (4th ed.). Pearson Education.
Torres, N. G., & Constain, J. J. A. (2009). Improving reading comprehension skills through reading strategies used by a group of foreign language learners. How, 16(1), 55–70. Retrieved from https://dialnet.unirioja.es/descarga/articulo/5249670.pdf.
University of Colorado Boulder. (2018). Reading metrics explanations and justifications. Retrieved from https://www.colorado.edu/flatironsforum/2018/05/
14/reading-metrics-explanations-and-justifications.
Vacca, J. A. L., Vacca, R. T., Gove, M. K., Burkey, L. C., Lenhart, L. A., & McKeon, C. A. (2015). Reading and learning to read (9th ed.). Pearson Education.
Wood, S. G., Moxley, J. H., Tighe, E. L., & Wagner, R. K. (2018). Does use of text-to-speech and related read-aloud tools improve reading comprehension for
students with reading disabilities? A meta-analysis. Journal of Learning Disabilities, 51(1), 73e84. https://doi.org/10.1177/0022219416688170
Yeom, S., & Jun, H. (2020). Young Korean EFL learners’ reading and test-taking strategies in a paper and a computer-based reading comprehension tests.
Language Assessment Quarterly, 1e18. https://doi.org/10.1080/15434303.2020.1731753
Zipke, M. (2017). Preschoolers explore interactive storybook apps: The effect on word recognition and story comprehension. Education and Information
Technologies, 22, 1695e1712. https://doi.org/10.1007/s10639-016-9513-x