Interacting with Computers 18 (2006) 891–909

www.elsevier.com/locate/intcom

When humans form media and media form humans: An experimental study examining the effects different digital media have on the learning outcomes of students who have different learning styles

J.L. Alty a,*, A. Al-Sharrah b, N. Beacham a

a Department of Computer Science, Loughborough University, Loughborough, Leics LE11 3TU, UK
b Department of Computer Science, Business College, Kuwait

Received 18 April 2006; accepted 21 April 2006


Available online 30 June 2006

Abstract

A set of computer-based experiments is reported investigating the understanding achieved by learners when studying a complex domain (statistics) in a real e-learning environment using three different media combinations: Text Only, Text and Diagrams, and Spoken Text and Diagrams. The results agree with earlier work carried out on more limited domains. The work is then extended to examine how student interaction and student learning styles affect the learning outcomes. Different responses to the media combinations are observed, and significant differences occur between learners classified as Sensing and Intuitive learners. The experiment also identified some important differences in performance with the different media combinations by students registered as Dyslexic. The experiment was therefore repeated with a much larger sample of Dyslexic learners and the earlier effects were found to be significant. The results were surprising and may provide useful guidance for the design of material for Dyslexic students.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Multimedia; Learning; Learning style; Dyslexia; Sensing and intuitive learners; Experimental study

* Corresponding author. Tel.: +44 116 294 088.
E-mail addresses: [email protected], [email protected] (J.L. Alty), [email protected]
(A. Al-Sharrah), [email protected] (N. Beacham).

0953-5438/$ - see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.intcom.2006.04.002

1. Introduction

Until graphical and multimedia user interfaces emerged in the 1990s, the choice of
digital media for representing information on a computer screen was limited to text,
iconic symbols and sounds. Today, a wide choice of media are available ranging
from video, animation, and sound to music, gesture and speech. Designers of com-
puter systems are now faced, therefore, with a major problem—that of choosing the
best digital media (or media combination) for a particular task, user and domain
combination. They have to ‘form’ or develop multimedia interfaces ‘that best match
all the resources of their target learners’ (Cobb, 1997, p. 12) and understand how
such interfaces assist in ‘forming’ or developing understanding (or mental models,
for example) in the user. Therefore, it is important for the designers of multimedia
interfaces to have a clear understanding of how information that is presented in dif-
ferent digital media is stored, manipulated and recalled by learners.
However, different resources are often required for the same application depend-
ing on the users’ goals. Users may have a variety of different task sub-goals that
they wish to achieve, such as completing the task as quickly as possible, improving
their understanding of what is presented, or simply enjoying the interaction, but
the media choices that support the learning goal, for example, might actually inhib-
it efficiency. A task analysis approach will clarify what the user sub-goals are (Dia-
per and Stanton, 2004) but will not necessarily identify the most appropriate media
to be used.

2. Studies of digital media effects

Studies into media effects using long-established technologies such as paper-based studies (Bissell et al., 1971) have encouraged subsequent research into digital media
effects. These later studies have suggested that some media combinations may
improve performance in areas such as:

• Increasing learner motivation
• Improving understanding
• Making sense of large data sets
• Reducing cognitive workload
• Providing information to users with special needs

Examples of interdisciplinary studies investigating digital media effects can be found within the literature in the areas of Psychology, Education and Computing,
and within subjects such as Instructional Design, Computer-Based Learning,
Human–Computer Interaction, and Cognitive and Educational Psychology (Alty,
2002; Beacham et al., 2002; Clark and Paivio, 1991; Paivio, 1986, 1991; Sadoski
and Paivio, 2001; Mayer, 2001). The work has included empirical studies on the utility of different media combinations, such as work on process control interfaces (Alty et al., 1993; Alty, 1999) and the effects of animation (Faraday and Sutcliffe, 1997,

1998). Such work has suggested general rules for media selection and use, but more
commonly little empirical evidence is produced to substantiate decisions for selecting
appropriate media (Race, 1988).
Mayer and his colleagues have carried out a number of experiments on the
effects of media on learning—a summary of which can be found in Mayer
(2001). Subjects were provided with material in a number of different multimedia
presentations. The material presented explained, for example, how a lightning storm
developed, how a car’s braking system worked or how a bicycle tyre pump worked.
The material was presented in many different multimedia forms and the subjects’
remembering and understanding of the material were measured in a series of tests.
The results obtained gave rise to a set of design principles about multimedia design.
However, Scaife and Rogers (1996), in an analysis of static and graphical represen-
tations, have argued that the absence of cognitive processing models has still
resulted in a lack of practical guidelines for interface designers. A good review of
the work currently in progress is given in the Handbook of Multimedia Learning
(Mayer, 2005).

3. Alternative views on media effects

The use of digital technologies has generated a new paradigm in our educational
methodologies and strategies (Ken and Mai Neo, 2002), and some have claimed that
an overemphasis on technology is a distraction to the main issue (Clarke, 1994). The
introduction of new technologies has always led to predictions of massive effects on
learning that have often not been borne out in practice. Clarke claimed that informa-
tion can be represented using any number of different media and that pedagogy is the
really important issue. Whilst we support the importance of pedagogy, we also support
Cobb’s (1997) position that digital media can still affect learning outcomes from the
perspective of cognitive efficiency ‘‘Efficient instructional media systems are symbol
systems that do some of the learner’s cognitive work for them. It goes without saying
that the most efficient medium would not necessarily be ideal for every stage of learn-
ing’’ (Cobb, 1997, p. 11).
More recently, Narayanan and Hegarty (1998) have stressed the importance of
employing a cognitive process model when designing multi-modal material. They
carried out empirical studies investigating how this could affect the learning perfor-
mance of students particularly when communicating dynamic information using ani-
mation techniques (Narayanan and Hegarty, 2002). Their approach is based upon a
set of design principles derived from applying the model. They found that there was
no significant difference in the performance of subjects being taught using a multi-
modal presentation compared with the performance obtained when a paper based
representation of the same model was used. The work suggests that structure and
content are more important than the dynamics and interactivity offered by the mul-
ti-modal approach. However, it is important to note that their domains were that of
the operation of a mechanical system and the execution of an algorithm. In each
case, interactivity and animation played key roles.

4. A cognitive theory of multimedia learning

The Cognitive Theory of Multimedia Learning (Mayer, 2001) is based on the Dual Coding Theory of Paivio (Paivio, 1986). The theory proposes that information is processed through two independent channels: a verbal channel (handling information such as text, the spoken word and auditory events) and a non-verbal channel (handling visual information such as diagrams, animations and photographs). The verbal and non-verbal processing systems
can function independently, though there are cross linkages between the two. Thus
one would expect the recall of material to be affected by the way it is presented, so
different media may be more suitable than others for allowing people to recognise,
retain and recall particular types of information, and this may be further affected
by individual differences.
Mayer’s Theory is a compromise between Paivio’s two channel ‘presentation
mode’ approach and Baddeley’s ‘sensory-modality’ approach (Baddeley, 1986).
The former focuses on verbal and non-verbal stimuli, and the latter on processing
through the eyes or ears. Schnotz has developed an integrative model of text and pic-
ture comprehension based upon a similar distinction between descriptive and depic-
tive representations (Schnotz and Bannert, 2003).
The theory, summarised in Fig. 1 (taken from Mayer, 2001), provides useful insights into why different combinations of media can have different effects on comprehension and learning. Mayer divides Sensory and Working Memory into two channels that deal with verbal and non-verbal representations; information presented visually or aurally is processed either by the eyes or the ears. However, once in memory, words
(which may have been sensed visually) may then be converted to auditory words and
processed through the auditory channel and vice versa. Such conversions can involve
additional cognitive processing.
Fig. 1. Mayer's cognitive theory of multimedia learning. (Words and pictures in a multimedia presentation reach Sensory Memory through the ears and eyes; in Working Memory, selected sounds and images are organised into a Verbal Model and a Pictorial Model, which are integrated with Prior Knowledge held in Long-Term Memory.)

By applying this theory to the empirical studies described above, Mayer suggested a number of multimedia design principles:



• Spatial Contiguity: words and pictures should be presented close together.
• Temporal Contiguity: words and pictures should be presented simultaneously.
• Coherence: avoid unnecessary words, music and pictures.
• Modality: text is better spoken when presented with animations or pictures.
• Redundancy: the same spoken and written text, when presented together, can inhibit learning.
• Prior Knowledge Effect: design effects are larger for low-knowledge learners than for high-knowledge learners.

The Spatial Contiguity Principle has been supported by other work including that
of Sweller and Chandler (1994), and research on textbook illustrations (Mayer et al.,
1995). The Coherence Principle is related to the idea of expressiveness (Alty, 1999;
Williams and Alty, 1998) where it is suggested that media effects can be expressed
in terms of a Signal-to-Noise ratio.
There are alternative views of memory storage that challenge Dual Coding Theory. One challenge comes from Propositional Theory (Rieber, 1994), which proposes that linguistic information is transformed into a semantic form of storage in long-term memory. Propositional Theory disputes the Dual Coding explanation that pictures are remembered better than words because people process and rehearse pictures more fully than words. However, approaches based on the two-channel view still seem to be useful today, in spite of advances in new technology and changes in education (Paivio, 1991; Sadoski and Paivio, 2001).

5. The experimental approach

Earlier experiments that have examined the effects of media on learning have been
constructed over relatively simple subject domains (for example, the operation of a braking system or cistern), so we decided to carry out a series of multimedia learning
experiments on a much more complex domain to see how well the results would scale
up. One problem with using complex domains is student motivation. If the material is complex and there is no over-riding goal to support improving learning and understanding, it is likely that students will lose interest. A domain is therefore required that students need as part of their studies (so that they are motivated).
A domain within a University context that is acknowledged to be inherently dif-
ficult, and yet for many students constitutes a most desirable skill to attain, is the
domain of statistics. Most masters and doctoral students require statistical knowl-
edge for analysing their experiments, and this sub-domain of statistics is reasonably
compact. At Loughborough University, there is a Masters course on Multimedia
Interface Design, and an important aspect of the course is the design and evaluation
of HCI experiments using statistics. The statistics material is typically taught in four
one-hour lectures on the course and covers basic information about the Null
Hypothesis, the Binomial Distribution, Non-parametric tests and Normal Distribu-
tions and their use in HCI experiments. The material used in this course was there-
fore chosen for our experiments.

Four computer-based teaching modules (lasting between 12 and 16 min each) were created covering the basic statistics required for experimental analysis. The
modules were: The Null Hypothesis, The Binomial Approach, The Non-parametric
Approach, and Normal Distributions. These presentations had previously been
given on the course using a traditional lecturing style. The four modules provide
the basis for an extended and realistic test of the effects of media combinations on
learning.
The material was presented in the four modules using combinations of three
media—text, diagrams and the spoken voice. The three combinations chosen
were—Text Only, Text + Diagrams, and Spoken Text + Diagrams. These three
media were chosen so that the results could be compared with those of Mayer.
Furthermore, they are typical multimedia presentation combinations used in many
e-learning situations. The initial hypothesis is that students will show overall
improved learning when information is presented using the Sound + Diagrams
or Text + Diagrams combinations compared with a Text-Only presentation in a
realistic learning situation. A second hypothesis is that student performance will vary
between media combinations for modules with different content. This work differs
from other work in that the domain is a complex one in a real teaching environment
where the students are highly motivated. The four modules take over 1 h to present and the students needed the information for the end-of-module examination; they should therefore have been highly motivated.
All three media presentations were based upon identical material.
For example, the written text and spoken text were identical. The diagrams were
presented on the left-hand-side of the screen and the text on the right and the audio
presentation used the speakers. An example screen for a Text Only presentation from
Module 3 is shown in Fig. 2 (though the actual screens were in colour). This
illustrates how many different sums of ranks can be built up from the rankings 1
to 6. The diagrams and the text were progressively built up, synchronized in stages.
The material was constructed using Macromedia Flash 5 (Ulrich, 2000) and each
module was divided into a number of Flash scenes. The organization of the scenes
was transparent to subjects, but this approach was used so that at a later date the
material could be presented in a more parallel, interactive manner. In this experi-
ment, students passively watched the presentation without interaction.
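The 'sums of ranks' idea that Fig. 2 builds up can be made concrete with a short sketch. This is illustrative only and is not part of the teaching material (which was authored in Flash): it simply enumerates, for the rankings 1 to 6 mentioned above, every rank sum that one of the two conditions could receive and how often each sum occurs, which is the basis of the non-parametric test taught in Module 3.

from itertools import combinations
from collections import Counter

# Illustrative sketch: for the rankings 1..6 shown in Fig. 2, enumerate every
# possible "sum of ranks" that one of the two conditions could receive.
ranks = [1, 2, 3, 4, 5, 6]
sum_counts = Counter()
for size in range(len(ranks) + 1):
    for subset in combinations(ranks, size):
        sum_counts[sum(subset)] += 1

print(min(sum_counts), max(sum_counts))   # possible sums run from 0 to 21
print(sum(sum_counts.values()))           # 64 equally likely assignments in total
# The distribution of sums is symmetric about 10.5, which is why extreme
# rank sums are unlikely to occur by chance.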
Students on the MSc course were all graduates and from a variety of disciplines.
They were told that the four created modules would be examined at the end of the
semester, but, because students would experience different styles of presentation
during the experiment, there was a risk that, if presentation style really did make a dif-
ference to learning outcomes, some students might be disadvantaged in their examina-
tion depending on which presentation they used. To avoid disadvantaging students, a
repeat of the presentation in a standard lecture format was given by the same tutor to all
students after the conclusion of each experimental module presentation.
The Text-only module was constructed first. Considerable attention was given to the structure and content of the module so that ideas were gradually introduced at a high level and then progressively decomposed. The Text + Diagrams module was then designed; this required some changes to the Text-only module to keep the text identical. Finally, the text was replaced by an identical verbal description. Again, some modifications were required to ensure the content and structure were, as far as possible, equivalent. Extensive trials with users resulted in a number of improvements.

Fig. 2. A typical screen for the text presentation in module 3.

6. Taking into account student learning style

One individual difference that may interact with media combinations is learning style, so it was decided to include a measurement of learning style in the study. Choosing a particular approach to the measurement of learning styles, however, is not simple. For example, Coffield et al. (2004) have identified 71 learning style models and have broadly grouped them under thirteen major models (shown in Table 1), together with their assessments of those models.
Choosing the most appropriate learning style model from these 71 models to carry
out an empirical investigation into the effects of different media combinations on learn-
ing outcomes for different learning styles is not a simple task. Coffield et al. examined
each major model for evidence that it could show internal consistency, test–re-test reli-
ability and construct and predictive validity. They concluded that only three of the thir-
teen models came close to meeting the criteria—the models of Allinson and Hayes,
Apter, and Vermunt—whilst a further three of the major models—those of Entwistle,
Herrmann, and Myers-Briggs—met two of the criteria.

Table 1
The 13 major learning style models identified by Coffield et al. (2004), with their assessments

Allinson and Hayes CSI (1996): Best evidence of reliability and validity. Pedagogical implications not fully explored. Suitable tool.
Apter (2001): Merits further research in an educational context.
Dunn and Griggs (2003): Lack of independent research on the model. Forceful claims about impact are questionable.
Entwistle (1998): Potentially useful but needs more development.
Gregorc (1994): Theoretically and psychometrically flawed.
Herrmann (1989): Although largely ignored, offers promise. More inclusive and systematic.
Honey and Mumford (2000): Widely used but needs to be redesigned to address weaknesses.
Jackson (2002): Has promise for wider use and consequential refinement.
Kolb (1999): Problems about reliability, validity and the learning cycle.
Myers and McCaulley (1985): Not clear which of the 16 elements are most relevant.
Riding and Rayner (1998): Potential value not well served by an unreliable instrument.
Sternberg (1999): An unnecessary addition to the many models.
Vermunt (1998): A rich model with potential use for post-16 education where text-based learning is important.

The factors that influenced our choice of learning style test were:

– it should be completable in a reasonable time
– it should be aimed at adults (not children)
– it should be easy to take with minimal instruction
– it should be pleasant and informative
– it should be suitable for learning engineering and scientific material

The model adopted was that of Felder and Soloman (Felder, 1993). This model
characterises learning style on four major axes—Sensing versus Intuitive, Sequential
versus Global, Active versus Reflective and Visual versus Verbal learning styles. One
important reason for choosing this approach was its previous use in scientific and
engineering situations. Furthermore, in Coffield’s classification, the Felder model
is termed a ‘flexibly stable learning preferences’ learning style, and two of the three
major models which came close to meeting Coffield’s consistency, reliability and
validity criteria were also in this learning style family. The inventory has been inde-
pendently tested and validated, and shown to produce reliable results (Zywno, 2003).
The test is also easy to administer.
Since the proposed research was of an exploratory nature and the test was readily available, it was decided to use it, although the test has been criticised for confounding aural and symbolic modalities. Table 2 summarises some differences across the four axes of the model.
The position of the learner on each of the four Felder axes is determined by administering a test with 44 questions about attitudes. The result on each axis is expressed as an odd integer (1–11) followed by the letter 'a' or the letter 'b' (e.g. 7a). The 'a' and 'b' refer to the two polar styles and the integer gives the strength of the tendency towards that style. Thus a 9a on the Sensing/Intuitive axis suggests a strong tendency towards a Sensing style, whereas a 5b would indicate a moderate tendency towards an Intuitive learning style. Usually the different learning styles are spread evenly across the population as a whole; however, on the Visual/Verbal axis there is usually a predominance of visual learners. Two example questions from the Felder test are given in Table 3. Question 17 is concerned with Active/Reflective learning and Question 20 with Sequential/Global learning.

Table 2
Brief description of the different Felder learning styles

Sensing learners prefer facts and using well-known relationships; Intuitive learners prefer to discover possibilities and relationships.
Sequential learners tend to learn material in steps; Global learners absorb material often randomly, without necessarily seeing the connections.
Active learners prefer rushing in and doing; Reflective learners prefer to reflect before starting.
Visual learners prefer pictures and visual material; Verbal learners prefer written and spoken text.
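To make the scoring scheme concrete, the following sketch (our illustration, not Felder and Soloman's code) shows how a score such as '9a' can be derived, assuming, as in the standard ILS scoring sheet, that each axis is covered by 11 of the 44 questions and that each answer is either 'a' or 'b'.

def axis_score(answers):
    # Hypothetical Felder-Soloman ILS-style scoring for one axis: 11 'a'/'b'
    # answers give a strength equal to the difference between the two counts,
    # which is necessarily an odd number between 1 and 11.
    if len(answers) != 11 or any(ans not in ("a", "b") for ans in answers):
        raise ValueError("expected 11 answers, each 'a' or 'b'")
    a_count = answers.count("a")
    b_count = 11 - a_count
    strength = abs(a_count - b_count)            # 1, 3, 5, 7, 9 or 11
    polarity = "a" if a_count > b_count else "b"
    return f"{strength}{polarity}"

print(axis_score(list("aaaaaaaaaab")))   # '9a': strong tendency to the 'a' pole
print(axis_score(list("aaabbbbbbbb")))   # '5b': moderate tendency to the 'b' pole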
Before the presentation of the first module, therefore, students were asked to
answer the 44 questions in the Felder questionnaire to determine their learning style.
The resulting distribution of learning styles over the student class is shown in Fig. 3.
All students really enjoyed taking this test and they were interested in their indi-
vidual result. As expected, the visual style was much more common than the verbal
style, so this axis was ignored in these experiments. The Visual/Verbal distribution
shows the typical visual bias in the population obtained in the Felder test.
The nature of the domain did not lend itself to animation. However, ideas and
diagrams were built up progressively on the screen. As new diagrammatic elements
were introduced, the text would simultaneously appear on the screen or the spoken
commentary would occur. Occasionally, blinking was used to emphasise elements
being discussed. Colour was also used to connect important sections of text or dia-
grams. In this experiment, the students did not interact with the presentations.
A post-test was administered after each module presentation. The post-tests for
all four presentations carried 19 marks. Some typical questions are shown in Table
4. There were recall questions, recognition questions and questions that tested trans-
fer knowledge. The experimenters independently marked the questions, and the
marks awarded were almost identical in each case. Where there were disagreements
these were resolved by discussion (but there were few).

Table 3
Two example questions from the Felder test
17. When I start a homework problem, I am more likely to:
(a) start working on the solution immediately
(b) try to fully understand the problem first
20. It is more important to me that an instructor:
(a) lays out the material in clear logical steps
(b) gives me an overall picture and relates the material to other subjects

Fig. 3. The distribution of learning styles across the subjects (four histograms: Active/Reflective, Sensing/Intuitive, Visual/Verbal and Sequential/Global, each plotted over the score categories 11a to 11b).

Table 4
Typical post-test questions: the number indicates from which module the question is taken

1. Some statistical methods require the data to be transformed into a particular format. Is anything lost as a result? Can you give an example?
1. What examples were given in the lesson of one-tailed and two-tailed hypotheses?
1. When we carry out an experiment and the results appear to support our hypothesis, what actually might be happening?
2. Which of these are properties of the binomial distribution?
   – Equal chances of success and failure
   – Chance of success = 1 − chance of failure
   – Only symmetrical distributions
   – Fixed number of trials
3. For two conditions in an experiment we have the following differences expressed as ranks: +1, +2, −3, +4, −5, −6, −7.
   – What is the smallest Rank Sum?
   – What is the largest Rank Sum?
   – What Positive Sum would you expect if this was a random result?
4. Why is the Normal Distribution so important? How does it relate to the other distributions such as Ranking and Binomial?

7. The results obtained

The students were divided into three groups (A, B, and C) with each group bal-
anced for gender and learning styles as far as possible. Groups were then given
the four modules (in the different presentation formats) as detailed below (Table 5).
The presentations were given on succeeding days of the course (Monday, Tuesday, Thursday and Friday) and the post-tests were conducted immediately after the presentations.

Table 5
Groups and presentation formats

Module                   Text       Text + Diagrams   Voice + Diagrams
Null hypothesis          Group A    Group B           Group C
Binomial distribution    Group B    Group C           Group A
Ranking                  Group C    Group A           Group B
Normal distribution      Group A    Group B           Group C
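The rotation in Table 5 is a simple counterbalancing scheme: the three groups are cycled through the three presentation formats from one module to the next. The sketch below (illustrative only, not code used in the study) reproduces the assignments in Table 5.

modules = ["Null hypothesis", "Binomial distribution", "Ranking", "Normal distribution"]
formats = ["Text", "Text + Diagrams", "Voice + Diagrams"]
groups = ["Group A", "Group B", "Group C"]

for m_index, module in enumerate(modules):
    # Format j of module i is seen by group (i + j) mod 3, which cycles the
    # groups through the formats and repeats after three modules.
    row = {fmt: groups[(m_index + f_index) % 3] for f_index, fmt in enumerate(formats)}
    print(module, row)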

Fig. 4. The knowledge test scores across the four modules (mean score for the Sound + Diagrams, Text + Diagrams and Text Only presentations on the Null Hypothesis, Binomial, Ranking and Normal modules).

As can be seen in Table 5, the groups of students were moved between the presentation formats on succeeding days to avoid biasing the results by the characteristics of any one group. An analysis of the students' reported previous knowledge revealed that very few had prior knowledge of the subject area, and even some of that knowledge was incorrect. Any student who indicated that they
had more than 30% previous knowledge was eliminated from the test. In fact, only
one student was eliminated on the first day and three on the second day. None were
excluded on the third and fourth days. As a check, a full analysis was carried out
ignoring previous knowledge and the results were almost identical. Altogether there
were 61, 66 and 66 students in each of the three presentation types.
The most striking result was the superiority of the Sound + Diagrams presenta-
tion format over the other two. The scores achieved in the four modules are shown
in Fig. 4.
A one-way ANOVA on performance between the three presentation styles revealed a significant difference (F(2, 190) = 4.612, p < 0.011). The means and standard deviations were: Text Only 9.82 (3.7), Text + Diagrams 9.94 (4.09) and Sound + Diagrams 11.69 (3.92).
A post hoc LSD comparison showed the superiority of the Sound + Diagrams presentation over the other two (Table 6).

Table 6
LSD Post hoc comparison between the three presentations
Presentation comparison Significance level
Text Only
Text + Diagrams 0.857
Sound + Diagrams 0.007
Text + Diagrams
Text Only 0.857
Sound + Diagrams 0.011
Sound + Diagrams
Text Only 0.007
Text + Diagrams 0.011
Dependent variable: score.
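For readers who wish to run this kind of analysis themselves, a minimal sketch in Python is given below. The score lists are invented placeholders, not the study's data, and the pairwise comparisons are plain unadjusted t-tests, which is roughly what an LSD post hoc procedure amounts to once the omnibus F test is significant.

from itertools import combinations
from scipy import stats

# Invented placeholder scores for the three presentation formats.
scores = {
    "Text Only":        [9, 11, 8, 10, 12, 7, 9, 13],
    "Text + Diagrams":  [10, 9, 12, 8, 11, 10, 13, 9],
    "Sound + Diagrams": [12, 14, 11, 13, 10, 15, 12, 13],
}

# One-way ANOVA across the three groups.
f_value, p_value = stats.f_oneway(*scores.values())
print(f"omnibus: F = {f_value:.3f}, p = {p_value:.3f}")

# Unadjusted pairwise t-tests, in the spirit of an LSD comparison.
for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    _, p_pair = stats.ttest_ind(a, b)
    print(f"{name_a} vs {name_b}: p = {p_pair:.3f}")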

Furthermore, the effect persists across the different module contents even though the
nature of the content varied considerably. For example, the first module (Null Hypoth-
esis) is very descriptive, whereas the Binomial and Ranking modules are more mathemat-
ical in nature. Although scores generally increased over the 4 days, there was no appreciable learning effect. The means and standard deviations of the total score for each module over the 4 days are 9.60 (4.71), 10.18 (3.43), 10.79 (3.7) and 11.41 (3.52). An analysis of variance indicates no significant performance difference over the 4 days (F(3, 189) = 1.915, p < 0.129). A post hoc LSD comparison indicated a significance level of p < 0.024 between days 1 and 4, but this may be due to the nature of the material presented on day 4 compared with day 1 rather than to a learning effect.
The results agree with those of Mayer (2001). A similar improved performance for
the Sound + Diagrams presentation is observed as with Mayer’s Sound + Pictures
presentation. The similarity in performance between Text + Diagrams and Text
Only surprised us. Dual Coding Theory (and Mayer’s results) predicts that the form-
er will be more effective. We suspect that the way in which the material was presented
affected this result. Text was placed on one side of the screen and the Diagrams on
the other (see Fig. 2). This violates Mayer's Spatial Contiguity principle and probably results in additional cognitive processing. It is likely that the Text + Diagrams presentation would have outperformed Text Only had the text and diagrams been placed closer together.

Fig. 5. The performance of sensing and intuitive learners (mean scores of Sensing and Intuitive learners on each module, for the Sound + Diagrams, Text + Diagrams and Text Only presentations).
An analysis was carried out to determine if the participants’ learning style had an
effect on learning. There were no clear effects for Global versus Sequential learners,
or for Active versus Reflective learners. However, there were interesting differences
for Sensing versus Intuitive learners. The results for this group are shown in Fig. 5.
The dependent variable, mean score, is the same as used for the previous analysis.
A 2×3 ANOVA with presentation and learning style as factors yielded the
results displayed in Tables 7 and 8.
In all cases the means are higher for Intuitive learners, though the difference is less marked in the Text Only case. The standard deviations are well within a factor of two, and Levene's test for equality of error variances yielded a significance level of p < 0.973, indicating that the error variance can be treated as equal across groups.
The analysis of variance shows a highly significant effect between Sensing and Intuitive learners (F(1, 187) = 18.506, p < 0.0001), but interestingly no interaction effect between presentation and learning style.
Table 7
Descriptive statistics for sensing and intuitive learners (dependent variable: score)

Presentation      Sens/Int     Mean     SD       N
Text Only         Sensing      9.31     3.623    45
                  Intuitive    10.90    3.714    21
                  Total        9.82     3.700    66
Text + Diag       Sensing      8.80     3.816    46
                  Intuitive    12.55    3.546    20
                  Total        9.94     4.095    66
Sound + Diag      Sensing      11.09    3.569    44
                  Intuitive    13.24    3.882    17
                  Total        11.69    3.753    61
Total             Sensing      9.72     3.775    135
                  Intuitive    12.16    3.773    58
                  Total        10.45    3.928    193

Table 8
2×3 ANOVA results for sensing and intuitive learners

Source              Type III sum of squares   df    Mean square   F          Sig.
Corrected model     425.444 (a)               5     85.089        6.273      0.000
Intercept           19460.921                 1     19460.921     1434.821   0.000
Present             116.847                   2     58.423        4.307      0.015
Sensint             250.998                   1     250.998       18.506     0.000
Present x Sensint   34.942                    2     17.471        1.288      0.278
Error               2536.338                  187   13.563
Total               24041.000                 193
Corrected total     2961.782                  192

(a) R squared = 0.144 (adjusted R squared = 0.121).
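A factorial analysis of the kind reported in Table 8 could be run today with standard statistical libraries. The sketch below uses invented placeholder data and requests Type III sums of squares via sum-to-zero contrasts; it is an illustration of the technique, not a re-analysis of the study's data.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented placeholder data: two learning styles crossed with three formats.
data = pd.DataFrame({
    "score":   [9, 11, 8, 12, 10, 13, 9, 14, 11, 12, 10, 15],
    "present": ["Text", "Text", "TextDiag", "TextDiag", "SoundDiag", "SoundDiag"] * 2,
    "style":   ["Sensing"] * 6 + ["Intuitive"] * 6,
})

# Sum-to-zero contrasts give meaningful Type III sums of squares.
model = smf.ols("score ~ C(present, Sum) * C(style, Sum)", data=data).fit()
print(anova_lm(model, typ=3))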

A 2×4 analysis of variance yielded the following results for performance over the four modules. There is a highly significant difference in the performance of Sensing and Intuitive learners over the four modules (F(1, 185) = 17.525, p < 0.0001). There is no significant difference in performance between the individual modules (F(3, 185) = 1.729, p < 0.163) and there is no significant interaction effect (F(3, 185) = 2.3462, p < 0.918).
It is not completely clear why the Intuitive-style learners performed better overall. The effect is not due to the type of presentation, nor to the information in each module. However, the content of the four modules is more theoretical than practical, and this might explain the result: perhaps the theoretical nature of the material favoured the Intuitive learners. We are planning a much more practically orientated learning experiment involving the replacement of components of a computer, which will hopefully benefit Sensing learners. More carefully designed experiments are needed in which the tasks and media chosen more closely match the learning style characteristics.

8. The effects of student Dyslexia

In the study described in Section 7, there were six registered Dyslexic students and
we were able to examine the difference in scores in the post-test between Dyslexic stu-
dents and non-Dyslexic students. Although the sample was too small to achieve signif-
icance, the experimental results suggested that computer-based media combinations
might affect learners who have Dyslexia differently to non-Dyslexic learners. This
was unexpected, since the learning materials used involved both verbal and nonverbal
content.
The experiment was therefore repeated with 30 Dyslexic students from Loughbor-
ough University. The participants were taken from various courses taught at the
University and all volunteered for the study. Participants were mainly from Science
departments (10) and Engineering departments (12), but there were eight students
from Arts departments. The participants completed a number of cognitive assess-
ments using the Lucid Adult Dyslexia Screening software (LADS, 2000) and a Visual
Perceptual Problems Inventory (VPPI). Whilst each participant was being assessed,
the participant’s data from the Learning Style test was analysed and used to place
him or her in one of three groups according to his or her Sensing/Intuitive learning
style. As far as possible, each group was also balanced according to gender and
learning styles.
Because of the different backgrounds of the students in this experiment, each participant was given a pre-test. The three groups were then presented with the material from the first module (The Null Hypothesis) in the three different media combinations: sound and diagrams, text and diagrams, and text alone. After seeing the presentation, each participant was given a post-test. The results are shown in Fig. 6. A one-way ANOVA showed that the difference was highly significant (F(2, 27) = 3.735, p < 0.037), and a post hoc LSD test showed significant differences between Text Only and both Sound + Diagrams (p < 0.016) and Text + Diagrams (p < 0.049), with Text Only having the better learning scores. Compared with the earlier results (the first part of Fig. 4), it is interesting that the Dyslexic students responded quite differently to the non-Dyslexic students.

Fig. 6. The pre- and post-test results for Dyslexic students (percentage of correct answers in the pre-test, the post-test and the difference between them, for the Sound and Diagrams, Text and Diagrams and Text Only groups, together with the total).

Interestingly, the analysis of the performance of the Dyslexic learners in relation to learning styles produced different results from the previously reported experiment. In this case, the media differences produced significant results for Active, Sensing, Visual and Global learners (p < 0.031, 0.002, 0.015 and 0.04, respectively) but not for Reflective, Intuitive or Sequential learners (a full report of the experiment is given in Beacham and Alty, 2006).
It should be noted that the Dyslexic students came from a wider population than the students in the previous experiment. The students in the former experiment were all computer-literate students with an interest in Computer Science, whereas in the Dyslexic group about two thirds of the subjects were Scientists or Engineers and one third came from Art and Design, Social Sciences and Business Studies. This may have affected the result and needs further study.
The results from the study suggest that the combinations of media affected the understanding achieved by Dyslexic and non-Dyslexic students in different ways. For example, the Sound + Diagrams combination gave a significant performance advantage for non-Dyslexic students, but not for Dyslexic students. In contrast, Text-only presentations had the opposite result, with Dyslexic students performing better with this medium than with the other combinations. The superior performance of Dyslexic students with the Text-only presentation was surprising, since Dyslexic students are usually thought to have difficulties with textual presentations. It is possible that this difference is due to the development of compensating strategies for handling text.
These findings also broadly agree with the ideas associated with Dual Coding
Theory (Paivio and Begg, 1981; Paivio, 1991; Sadoski and Paivio, 2001). Paivio
reported that whilst in general using text and diagrams is more effective than text
alone in conveying information, because of individual differences there are cases
where this may not prove true. However, Paivio was unable to provide a clear expla-
nation for this in terms of Dyslexic learners.
It is likely that the e-learning materials used in this study placed different cog-
nitive demands on the Dyslexic learners compared to the cognitive demands on non-
Dyslexic learners in the original study. This resulted in Dyslexic learners recalling
more information when presented with text alone. The media combinations may
have exacerbated the difficulties that Dyslexic learners experience due to their differ-

ent level of cognitive skills, learning style preferences, competence and experience.
The sound and diagrams presentation may have assisted in the retention of non-ver-
bal information but resulted in little retention of the verbal information. The presen-
tation containing text and diagrams may have resulted in little retention of either the visually presented verbal information or the non-verbal information because of a split-attention effect (Sweller et al.,
1998). The effect may exacerbate the difficulties Dyslexic learners experience due
to their skills being less fluent, more demanding and more error prone (Peer, 2003).

9. Conclusions

The initial experiments replicated the effects observed by Mayer and his col-
leagues. However, the domain of learning used was a more complex domain—that
of Statistics applied to Null Hypothesis testing, and the students were highly moti-
vated. The experiment has now been carried out three times (on large numbers of
students) and in each case the Sound + Diagrams media combination significantly
outperforms the Text + Diagrams and the Text-only presentations. Dual Coding
Theory suggests that a difference ought to have been observed between the Text + Dia-
grams and Text-only presentations but performance was actually very similar. We
suspect this was because our design introduced a ‘split attention’ effect from the Text
and Diagrams being physically separated on the screen. This required increased cog-
nitive effort by the learner and reduced the Text + Diagrams effectiveness. In future
experiments, we will endeavour to eliminate this effect.
The results differ from those of Narayanan and Hegarty in that significant differ-
ences were observed between the different media combinations. We suspect that the
nature of the domain and the lack of interactivity contributed to this difference.
There was no animation in our presentations, and the progressive explanation of
the nature and use of statistics is very different from explaining a mechanical oper-
ation (where animation is usually highly relevant). An experiment that uses media
combinations to guide the replacement of components in a computer, where the
effects of Narayanan and Hegarty might be observed, is now being planned.
The second set of experiments showed that there was a significant effect of one of
the student learning styles—Sensing versus Intuitive learning—though no significant
differences were observed on the other two axes (Sequential versus Global or Active
versus Reflective). It is not obvious why this difference was observed, and it requires further study, perhaps using a more sensitive learning style test. It is likely to be related to the nature of the material presented (practical examples versus theory), but the media used may also have an effect. For example, the use of video
material (not used in our experiments) might favour Sensing learners over Intuitive
learners, as would a very practical task. On the other hand, a learning situation
where theory was heavily used is likely to favour Intuitive learners. The effects of dif-
ferent media combinations on learning in practical tasks (such as changing compo-
nents in a computer) might yield interesting learning style differences.
One of the most interesting effects found in the study was the very different
response to media combinations of Dyslexic students, initially observed in the first

experiment, but replicated in the special study which concentrated on Dyslexic stu-
dent subjects only. The experiment suggested that different computer-based media
combinations affected learners who have Dyslexia differently to non-Dyslexic learn-
ers. This was unexpected, since the learning materials used consisted of both verbal
and nonverbal content. Whilst we would have expected the Dyslexic subjects to have
problems with text alone, it was not expected they would have problems with text
and diagrams or with sound and diagrams. Interestingly, some Dyslexic subjects
obtained higher scores when information was presented as text alone than with
text and diagrams.
The findings from the study also suggest that information presented using text
and diagrams for non-Dyslexic learners may not be the most efficient way of present-
ing information to Dyslexic learners. Furthermore, the results could not be com-
pletely explained by Dual Coding Theory (Paivio and Begg, 1981). It is possible
that the different media combinations put different cognitive loads on the Dyslexic
students compared with non-Dyslexic students. Perhaps the Text-only presentation
facilitated Dyslexic coping strategies. Interestingly, the presentation style we thought
might have given the best results (Text and Diagrams) actually resulted in the worst
performance. For a more detailed analysis of these results for Dyslexic students, see
Beacham and Alty (2006).
Findings from the study have raised a number of new and important issues.
Firstly, how might computer-based media affect Dyslexic learners differently to
non-Dyslexic learners, and secondly, should e-learning materials be designed specif-
ically for Dyslexic students? The study suggests that varying the combinations
of computer-based media affects the learning outcomes of Dyslexic students in a
different way to those of non-Dyslexic students implying that serious consideration
needs to be given as to the way e-learning materials are designed and delivered.

References

Allinson, C., Hayes, J., 1996. The cognitive styles index. Journal of Management Studies 33, 119–135.
Alty, J.L., 1999. Multimedia and process control: signals or noise?. Transactions of the Institute of
Measurement and Control 21 (4/5) 181–190.
Alty, J.L., (2002), Dual Coding Theory and education: some media experiments to examine the effects of
different media on learning. In: Proceedings of the Ed-MEDIA: World Conference on Educational
Multimedia and Telecommunications, Denver, Colorado, USA, pp. 42–47.
Alty, J.L., Bergan, M., Craufurd, P., Dolphin, C., 1993. Multimedia and process control: some initial
experimental results. Computers and Graphics 17 (3), 205–218.
Apter, M.J., 2001. Motivation Styles in Everyday Life: a Guide to Reversal Theory. American
Psychological Association, Washington, DC, USA.
Baddeley, A., 1986. Working Memory. Clarendon Press, Oxford, England.
Beacham, N.A., Alty, J.L., 2006. An Investigation into the effects that digital media can have on the
learning outcomes of individuals who have Dyslexia. Computers and Education Journal 41 (1), 74–93.
Beacham, N., Elliott, A., Alty, J.L., Al-Sharrah, A., (2002). Media combinations and learning styles: a
dual coding theory approach. In: Proceedings of ED-MEDIA: World Conference on Educational
Multimedia, Hypermedia and Telecommunications, Denver, Colorado, pp. 111–116.
Bissell, J., White, S., Zivin, G., 1971. Sensory modalities in children’s learning. In: Lesser, G.S. (Ed.),
Psychology and Educational Practice. Scott, Foresman and Company, London, pp. 130–155.

Clark, J.M., Paivio, A., 1991. Dual Coding Theory and Education. Educational Psychology Review 3,
149–210.
Clarke, R.E., 1994. Media will never influence learning. Educational Technology Research Development
42 (2), 21–29.
Cobb, T., 1997. Cognitive efficiency: towards a revised theory of media. Educational Technology Research
and Development 45 (4), 21–35.
Coffield, F., Moseley, D., Hall, E., Ecclestone, K., 2004. Should we be using Learning Styles? What research has to say in practice. Report of the Learning and Skills Development Agency, Regent Arcade House, Argyle St., London.
Diaper, D., Stanton, N.A., 2004. The Handbook of task Analysis for Human Computer Interaction.
Lawrence Erlbaum Associates, Mahwah, NJ.
Dunn, R., Griggs, S., 2003. Synthesis of the Dunn and Dunn Learning Styles Model Research: Who,
What, When, Where and so What—the Dunn and Dunn Learning Styles Model and its Theoretical
Cornerstone. St Johns University, New York, USA.
Entwistle, N.J., 1998. Improving teaching through research on student learning. In: Forrest, J.J.F. (Ed.),
University Teaching: International Perspectives. Garland, NY.
Faraday, P.M., Sutcliffe, A.G., 1997. Multimedia: design for the moment. In: Proceedings of Multimedia'97. ACM, pp. 183–193.
Faraday, P.M., Sutcliffe, A.G., 1998. Providing advice for multimedia designers. In: Proceedings of CHI'98. ACM, pp. 124–131.
Felder, R., 1993. Reaching the second tier: Learning and teaching styles in college science education.
Journal of College Science Teaching 23, 286–290.
Gregorc, A.F., 1984. Style as a symptom: a phenomenological perspective. Theory and Practice 23 (1), 51–55.
Herrmann, N. et al., 1989. The Creative Brain: Brain Books. The Ned Herrmann Group, North Carolina,
USA.
Honey, P., Mumford, A., 2000. The Learning Styles Helpers Guide. Peter Honey Publications Ltd,
Maidenhead, UK.
Jackson, C., (2002). Manual of the Learning Styles Profiler, www.psi-press.co.uk
Ken, Neo, Mai, Neo, 2002. Building a constructivist learning environment using a multimedia design
project—a Malaysian experience. Journal of Multimedia and Hypermedia 11 (2), 141–153.
Kolb, D.A., 1999. The Kolb Learning Style Inventory: Version 3. Hay Group, Boston, MA.
LADS, (2002). Lucid Adult Dyslexia Screening Administrator’s Manual, Version 1.0, C. Singleton, K.
Thomas, (Eds.).
Mayer, R.E., 2001. Multimedia Learning. Cambridge University Press, Cambridge, UK.
Mayer, R.E., 2005. In: Mayer, R.E. (Ed.), The Cambridge Handbook of Multimedia Learning.
Cambridge University Press, Cambridge.
Mayer, R.E., Sims, V., Tajika, H., 1995. For whom is a picture worth a thousand words? Extensions of a
Dual Coding Theory of multimedia learning. Journal of Educational Psychology 84, 389–401.
Myers, I.B., McCaulley, M.H., 1985. Manual: a guide to the development and use of the Myers-Briggs
Type indicator. Consulting Psychologists Press, Palo Alto, CA.
Narayanan, N.H., Hegarty, M., 1998. On designing comprehensible interactive hypermedia manuals.
International Journal of Human—Computer Studies 48, 267–301.
Narayanan, N.H., Hegarty, M., 2002. Multimedia design for communication of dynamic information.
International Journal of Human—Computer Studies 57, 279–315.
Paivio, A., 1986. Mental Representations: A Dual Coding Approach. Oxford University Press, New York.
Paivio, A., 1991. Dual Coding Theory: Retrospect and current status. Canadian Journal of Psychology 45,
255–287.
Paivio, A., Begg, I., 1981. Psychology of Language. Prentice-Hall, London, UK.
Peer, L., (2003). Ethnic minority learners with dyslexia. British Dyslexia Association Series of Termly
Papers, January.
Race, P., 1988. 500 Tips for Open and Flexible Learning. Kogan Page, London.
Riding, R., Rayner, S., 1998. Cognitive Styles and Learning Strategies: Understanding Style Differences in
Learning Behaviour. David Fulton, London, UK.

Rieber, L.P., 1994. Computers, Graphics and Learning. WCB Brown & Benchmark, Madison, WI.
Sadoski, M., Paivio, A., 2001. Imagery and Text: A Dual Coding Theory of Reading and Writing. Lawrence Erlbaum Associates, Inc.
Scaife, M., Rogers, Y., 1996. External Cognition: how do graphical representations work? International
Journal of Human–Computer Studies 45, 185–213.
Schnotz, W., Bannert, M., 2003. Construction and interference in learning from multiple representations.
Learning and Instruction 13, 141–156.
Sweller, J., Chandler, P., 1994. Why some material is difficult to learn. Cognition and Instruction 12,
185–233.
Sweller, J., van Merrienboer, J.J.G., Paas, F.G.W.C., 1998. Cognitive architecture and instructional
design. Educational Psychology Review 10 (3), 251–296.
Ulrich, K., 2000. Flash 5 for Windows and Macintosh: Visual Quick Start Guide, third ed. Peachpit Press.
Vermunt, J.D., 1998. The regulation of constructive learning processes. British Journal of Educational
Psychology 68, 149–171.
Williams, D.M.L., Alty, J.L., 1998. Expressiveness and multimedia interface design. In: Ottman, T.,
Tomek, I. (Eds.), Proceedings of Edmedia-98. Freiburg, Germany, pp. 1505–1510.
Zywno, M.S., 2003. A contribution to validation of score meaning for Felder-Soloman's Index of Learning Styles, Session 2351. In: Proceedings of the 2003 ASEE Annual Conference and Exposition, Nashville, Tennessee, June 23–25, 2003. Available online at: http://www.ncsu.edu/felderpublic/ILSdir/Zywno_Validation_Study.pdf.
