INTERNATIONAL JOURNAL OF EDUCATION AND INFORMATION TECHNOLOGIES
Issue 1, Volume 3, 2009
Meaningful Hybrid e-Training Model via
POPEYE Orientation
Rosseni Din, Mohamad Shanudin Zakaria, Khairul Anwar Mastor,
Norizan Abdul Razak, Mohamed Amin Embi & Siti Rahayah Ariffin
Abstract— This study aims to develop a meaningful hybrid e-training model for ICT trainers by distinguishing the usefulness of its content, delivery, service, outcome and infrastructure. In doing so, the study sought to establish the content validity and test reliability of, and to construct-validate, the factors affecting the usefulness of the hybrid e-training approach. The overall reliability coefficient of the instrument, analyzed with SPSS 15.0 using Cronbach's alpha, was .986, while reliabilities at the scale level were also acceptable, ranging from .886 to .971. Subsequently, external construct validity was examined by structural equation modeling, using confirmatory factor analysis (CFA) with AMOS 7.0. The overall analyses suggest that the instrument is valid and reliable for measuring the usefulness of a hybrid e-Training module or program. Internal consistency was maintained after the CFA, with an overall reliability coefficient of .959 and scale-level coefficients ranging from .814 to .909. A revised model was developed from the hypothesized measurement model, with findings showing evidence of construct validity. The goodness-of-fit measures of the comparative fit index (CFI) and the non-normed fit index (NNFI, also known as the TLI) were above the suggested threshold of .90 (CFI = .943; TLI = .930). The paper also showcases some of the instructional media and methods used in the study to promote good practice of the problem oriented project based hybrid e-training (POPEYE) orientation.
Keywords— E-training model, reliability, structural equation modeling, validity, problem oriented project based learning.

I. INTRODUCTION

It is evident that, in order to progress further in the area of e-Learning, particularly e-Training for ICT trainers, an appropriate measurement scale is required. This scale would ideally distinguish the usefulness of a program in terms of its content, delivery, service, outcome and infrastructure. This is in line with the rapid change in information and communication technology (ICT) and in business practices and innovations, which warrants a realignment of the IT curriculum to suit the needs of business strategies [1], [2].

II. AIM OF THE STUDY

The aim of this study is to examine the reliability and validity of the Hybrid e-Training Instrument (HiTs), used to measure the usefulness of a Hybrid e-Training (HiT) module, and eventually to develop a model by comparing the measurement model against reality as represented by the sample data. The HiT module was designed to adopt the problem oriented project based hybrid e-training (POPEYE) orientation to deliver computer and technology courses via various instructional media. The module's aim is to provide a meaningful e-Training experience by integrating an online learning strategy into the regular face-to-face and self-learning methods.

The constructs of this instrument were adapted from the Demand Driven Learning Model (DDLM) inventory, a web-based learning model [3] and evaluation tool developed by MacDonald et al. [4]. The DDLM is defined by five key constructs: Structure, Content, Delivery, Service and Outcomes. The 59-item DDLM inventory was then modified and adapted for HiTs to fit the Asian and local university culture. The adaptation was mainly guided by the results of the interaction and document analyses done during the feasibility phase of the study. The first version of the adapted instrument yielded 61 items regarding e-Training for adult learners in a hybrid environment on a Likert-type scale.

A. Research Objectives
Specifically, the objectives of the study were to (i) establish face and content validity, (ii) determine the reliability and internal consistency of HiTs, and finally (iii) investigate its construct validity by developing a revised model of Hybrid e-Training using confirmatory factor analysis.

B. Research Question
Upon establishing the first and second objectives of the study, as discussed in the methodology section (Section VI), the study was guided by the following research question to achieve the third objective: "Is the trainees' perspective towards the usefulness of the Hybrid e-Training module influenced by the module's content, delivery, service, structure and outcome?"

This research is part of a three-year multidisciplinary project funded by the Malaysian Government under the Research University Grant UKM-GUP-TMK-03-08-308.
III. OPERATIONAL DEFINITION

Terminology associated with the use of structural equation modeling, together with the major terms of this study, is operationally defined as follows. More elaborate terminology, including structural equation modeling and confirmatory factor analysis, is discussed in the subsequent sections.

A. Hybrid
The term hybrid refers to a combination of learning and instructional strategies comprising face-to-face, online and self-learning.

B. E-Training
E-Training in this study refers to a course, module or program delivered in a hybrid environment as a process of developing the knowledge, skills and abilities of ICT trainers for the achievement of organizational goals.

C. ICT trainers
ICT trainers in this study refers to (i) university staff appointed by the university's ICT Center, whose role is to support and direct staff in the area of ICT and Computer Science; (ii) educational developers and learning technologists attached to the university's Computer Center, whose role is to work with or alongside practitioners to enable and enhance e-learning, and researchers into learning and e-learning, including academic researchers, action researchers and research-project staff and assistants; (iii) appointed ICT trainers, teachers and teacher trainees; and (iv) ICT educators in the country or in Asia in general.

D. Observed variables
Observed variables, in this study also termed measured, indicator or manifest variables, are traditionally designated graphically by a square or rectangle. A response to a Likert-scaled item in this study is an example of an observed variable.

E. Unobserved variables
Unobserved variables in this study are termed latent factors. Factors or constructs are depicted graphically with circles or ovals. Common factor is another term used, because the effects of unobserved variables are shared in common with one or more observed variables [5]. In Fig. 1, each large circle labeled with the prefix Unobserved is an unobserved or latent variable.

F. Unique factor
In reference to Fig. 1, the small circles labeled with the prefix letter "e" are the unique factors or measurement errors in the variables. Unique factors differ from latent factors in that their effect is associated with only one observed variable.

G. Causal effect
In reference to Fig. 1, a straight line pointing from a latent variable to an observed variable indicates the causal effect of the latent variable on that observed variable.

H. Correlation
In reference to Fig. 1, the curved arrow between the latent variables indicates that they are correlated. If the curve were changed to a straight one-headed arrow, a hypothesized direct relationship between the two latent variables would be indicated. In addition, the directional path would be considered a structural component of the model [5]-[7].

I. Face Validity
Face validity is the extent to which the content of the items is consistent with the construct definition, based solely on the researcher's judgment [8]. In this study, after face validation, judgments by fellow researchers and an English-as-a-second-language expert were sought to ensure that the sentences constructed were of a technical and language level understandable by the targeted respondents.

J. Content Validity
Content validity is the assessment of the degree of correspondence between the items selected to constitute a summated scale and its conceptual definition [8]. In this study, expert judgments from the fields of education and training, measurement and evaluation, educational technology, general studies, information systems, ICT and computer education were used to assess whether the hybrid e-training instrument measures what it is proposed to measure.

K. Construct Reliability
Construct reliability is a measure of the reliability and internal consistency of the measured variables representing a latent construct. It must be established before construct validity can be assessed [8].

L. Construct Validity
Construct validity is the extent to which a set of measured variables actually represents the theoretical latent construct it is designed to measure [8].

M. Confirmatory Factor Analysis
Confirmatory factor analysis is the use of factor analysis to test hypotheses about the latent traits that underlie a set of measured variables [8].

IV. STRUCTURAL EQUATION MODELING

Structural equation modeling (SEM) is primarily a confirmatory technique, but it can also be used for exploratory purposes [5]-[7]. SEM encompasses two components: (i) a measurement model, essentially the confirmatory factor analysis (CFA), and (ii) a structural model. The measurement model of SEM is the CFA, as in Fig. 1; it depicts the pattern of observed variables for the latent constructs in the hypothesized model.

A major component of a CFA, the testing of the reliability of the observed variables, was conducted in this study. As part of the process, factor loadings, unique variances and modification indices (should a variable be dropped or a path added?) are estimated to derive the best indicators of the latent variables prior to testing a structural model. However, discussion of the testing of the structural model, which is the larger part of this study, is beyond the scope of this paper.
V. CONFIRMATORY FACTOR ANALYSIS
Confirmatory factor analysis (CFA) is a confirmatory technique that is theory driven; the planning of the analysis is therefore driven by the theoretical relationships among the observed and unobserved variables. When CFA was conducted in this research, the researchers sought to minimize the difference between the estimated and observed matrices. In the example in Fig. 1, each of the two latent variables is measured by five observed variables. The ten observed variables are responses to statements from two Likert-type scales. The numbers "1" in the diagram indicate regression coefficients that have been fixed to 1; coefficients are fixed to minimize the number of parameters estimated in the model. As noted in Section III-H, the curved arrow between the latent variables indicates that they are correlated; replacing it with a straight one-headed arrow would indicate a hypothesized direct relationship, a structural component of the model [5]-[7].
Fig. 1 Generic example of a structural equation modeling application to test the factorial validity of a theoretical construct using confirmatory factor analysis (e = error). [The diagram shows two correlated latent variables, Unobserved1 and Unobserved2, each measured by five observed variables (Observed1-Observed10) with associated error terms e1-e10 and one loading per factor fixed to 1.]

Fig. 2 A reconstructed preliminary model based on the literature review and the Demand Driven Learning Model [3], combined with the new themes that emerged from data collected during training sessions
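To make the generic model in Fig. 1 concrete, the following is a minimal sketch of how such a two-factor CFA could be specified and fitted in Python. It assumes the third-party semopy package (version 2 API, with lavaan-style model syntax); the variable names mirror the generic diagram, and the data are simulated stand-ins, not the study's data.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats  # third-party SEM package (v2 API assumed)

# Simulate 208 cases from two correlated latent factors, five indicators each
# (purely for illustration).
rng = np.random.default_rng(1)
factors = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=208)
noise = rng.normal(scale=0.6, size=(208, 10))
x = np.hstack([0.8 * factors[:, [0]] + noise[:, :5],
               0.8 * factors[:, [1]] + noise[:, 5:]])
data = pd.DataFrame(x, columns=[f"Observed{i}" for i in range(1, 11)])

# "=~" declares factor loadings; "~~" declares the factor correlation.
# Like AMOS, semopy fixes the first loading of each factor to 1 for
# identification, matching the "1"s in Fig. 1.
description = """
Unobserved1 =~ Observed1 + Observed2 + Observed3 + Observed4 + Observed5
Unobserved2 =~ Observed6 + Observed7 + Observed8 + Observed9 + Observed10
Unobserved1 ~~ Unobserved2
"""
model = Model(description)
model.fit(data)
print(calc_stats(model).T)  # reports chi2, CFI, TLI, RMSEA, among others
```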
VI. RESEARCH METHOD
A. Feasibility & Early Study
A preliminary model was constructed based on the literature review and the Demand Driven Learning Model (DDLM) [3]. The model was then further reconstructed based on new themes that emerged from the data collected during training sessions, interviews, and content analysis of the interactions and feedback (616 postings) from ICT trainers who attended the hybrid training courses at UKM, the National University of Malaysia, between 2003 and 2005. A task analysis was then conducted to identify what was significant enough to be worth including in the new, updated curriculum and to identify learner needs. Fig. 2 shows the preliminary model, while Table 1 and Table 2 show the results of the task analysis conducted to determine appropriate, demand-driven content as well as the instructional media and methods to be used for the hybrid course. Consequently, based on the findings of this early study, a handbook for the computer training delivery course was developed, as shown in Fig. 3. Table 3 shows the learning matrix embedded in the course handbook; the learning matrix specifies all the learning outcomes expected from the course, together with the associated learning processes and assessments. This section also shows some captured screens of learning resources developed using the university's learning management system (Fig. 4) and the course blog (Figs. 5-6).

B. Sample
A number of different communities of users are referred to in this study. Broadly speaking, they are ICT trainers as defined in the terminology section. Despite their internal complexities, these communities will be referred to in this paper simply as ICT trainers. The pilot sample comprised 42 ICT trainers from the same institution. The subsequent sample
originally encompassed 213 participants, 172 females and 37 males, studying at a public university in Malaysia. The trainees were enrolled in credit-bearing education and computer education courses. Their ages ranged from 20 to 48 years; the highest frequency was in the 21-25 range, comprising 62% (132) of the whole sample. The trainees came from four places of origin: 31.9% (68) from East Malaysia, 51.6% (110) from West Malaysia, 1.4% (3) from Brunei and 14.6% (31) from mainland China. They comprised four main ethnic groups: 71.4% (152) Malays, 23.9% Chinese, 2.8% (6) Indians and 1.4% (3) from other groups. All but 28.2% (60) of the participants had no teaching experience or less than one year of it.
Table 1 Task analysis for computer education content

Table 2 Task analysis for instructional media and method
C. Instrument
The first version of the adapted instrument yielded 61 items to measure the usefulness of a hybrid e-Training course on a Likert-type scale. A Likert scale has five points from strongly agree to strongly disagree; those with 6, 7, 8 or more points are Likert-type scales [9]. Likert actually scaled the category labels he used; although the instrument for this study uses a 1-5 scale, no scaling was done to determine the anchors. In addition, a response category of "Not Applicable" was added for each item [10]. As such, we refer to it as a "Likert-type" scale.

The first phase of the study was to establish face and content validity and to test the reliability and internal consistency of HiTs. The instrument was reviewed in various aspects, technical, language and instructional design, in terms of (i) pedagogical/learning strategy, (ii) theories in practice, (iii) cosmetic design of instructional media and (iv) course functionality. The 61-item instrument still contained five constructs at this point, namely Content (9 items), Delivery (9 items), Service (7 items), Outcome (12 items) and Structure (24 items). Respondents rated aspects of the course on a 1 to 5 scale, where 1 equals "strongly disagree" and 5 equals "strongly agree"; 1 represents the lowest and most negative impression, 3 an adequate impression, and 5 the highest and most positive impression. Respondents chose N/A if an item was not appropriate or not applicable to the course. Table 4 shows the contents of HiTs after face and content validation.
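Since the N/A category sits outside the 1-5 continuum, it must be treated as missing rather than as a sixth scale point before any reliability computation. A small hypothetical sketch of that recoding step:

```python
import numpy as np
import pandas as pd

# Hypothetical raw responses: "N/A" recoded to missing before analysis.
raw = pd.DataFrame({"C01": [5, 4, "N/A", 3], "C02": [4, "N/A", 2, 5]})
coded = raw.replace("N/A", np.nan).astype(float)  # N/A is missing, not a 6th point
print(coded.mean())  # item means computed over valid responses only
```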
Fig. 3 A course handbook for the updated curriculum of the course Instruction for Computer Training Delivery
D. Face and Content Validation
To achieve face and content validity, we thoroughly reviewed the related literature and conducted interaction analysis as well as document analysis. Following discussions with language and technical experts, a judgment process by a jury of ten experts from the fields of educational technology, measurement and evaluation, general studies, information systems, computer training and education was carried out; a similar method was employed by Mohamad Sahari et al. [13]. A pre-test involving 42 students who fit the description of computer trainers at an institution of higher learning in Malaysia was completed. As a result, we came up with the 61-item HiT instrument. Although the scales were previously established, expert judgment was still sought to ensure that adaptations, deletions and additions were justified. When two items had virtually identical content, one was dropped; items on which the judges could not agree were also dropped. Summated scales were created from the pre-test, and items with an item-total correlation of less than 0.5 were deleted [8]. Factor analysis was not done at this stage because the sample size was less than 50.
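The screening rule just described can be sketched as follows, taking the item-total correlation in its corrected form (each item against the sum of the other items). The data frame and item names are hypothetical stand-ins for the pre-test responses.

```python
import numpy as np
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the *other* items."""
    total = items.sum(axis=1)
    return pd.Series({c: items[c].corr(total - items[c]) for c in items.columns})

def screen_items(items: pd.DataFrame, cutoff: float = 0.5) -> list:
    """Names of items whose corrected item-total correlation >= cutoff."""
    r = corrected_item_total(items)
    return r[r >= cutoff].index.tolist()

# Hypothetical pre-test: 42 respondents, six 5-point Likert items.
rng = np.random.default_rng(7)
base = rng.integers(1, 6, size=(42, 1))
items = pd.DataFrame(
    np.clip(base + rng.integers(-1, 2, size=(42, 6)), 1, 5),
    columns=[f"C{i:02d}" for i in range(1, 7)],
)
print(screen_items(items))  # items surviving the 0.5 cut-off
```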
E. Reliability and Internal Consistency
For the assessment of reliability, the 61-item instrument was administered to 42 computer trainees in a pre-test and then to another 213 respondents at a higher learning institution. Cronbach's alpha reliability analysis was conducted to ensure that internal consistency was at least maintained, if not improved, from the pre-test. In the pre-test with 42 respondents, the overall Cronbach's alpha was 0.957. The reliability test using data from the 213 respondents revealed an overall Cronbach's alpha of .986, as shown in Table 5. After deleting five cases for missing data and outliers, the Cronbach's alpha came to .987. As seen in Table 5, the alphas of the hybrid e-training measures were high for each of the five constructs, ranging from 0.886 to 0.971. Overall, the analyses suggested that the instrument is reliable for measuring the usefulness of the hybrid e-training module.
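To make the reliability computation concrete, here is a small sketch of Cronbach's alpha computed from its definition; the simulated responses are placeholders, not the study's data. The alpha-if-item-deleted values reported by SPSS correspond to calling the same function on the frame with one column dropped at a time.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the scale items."""
    items = items.dropna()                     # listwise deletion of missing cases
    k = items.shape[1]                         # number of items
    item_var = items.var(axis=0, ddof=1).sum() # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summated scale
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 213 respondents on a 9-item scale with 1-5 responses.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(213, 1))
scale = pd.DataFrame(
    np.clip(base + rng.integers(-1, 2, size=(213, 9)), 1, 5),
    columns=[f"C{i:02d}" for i in range(1, 10)],
)
print(f"alpha = {cronbach_alpha(scale):.3f}")
```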
Table 3 Learning Matrix
Fig. 4 Some of the resources for computer education available in the university's LMS

Fig. 5 The front page of the computer education blog, the official course blog at http://rosseni.wordpress.com

Fig. 6 An example reflection by the course instructor in the Malay version of the official course blog
Table 4 Contents of HiTs

Factors     Item ID      Total items
Content     C01 - C09     9
Delivery    C10 - C18     9
Service     C19 - C25     7
Outcome     C26 - C37    12
Structure   C38 - C61    24

*Total items = 61 (before extraction during principal component analysis)
F. Preparation for Confirmatory Factor Analysis
The last step, taken after achieving research objectives one and two, was to prepare for the confirmatory factor analysis needed to achieve research objective three and answer the research question. This preparation was done using principal component analysis with varimax rotation. The varimax method has proved successful as an analytic approach to obtaining an orthogonal rotation of factors and is the most widely used rotation method for data reduction [8], [11] meant for subsequent use in other multivariate techniques [8]. According to Kaiser (1958), as cited in [11], varimax orthogonal rotation attempts to maximize the variance on factors by minimizing the number of variables loading highly on separate factors. This process is the default in SPSS; the method normalizes the loadings on pairs of factors prior to rotation and tends to find simple structures in which loadings are high on one factor and near zero on the others.

A preliminary examination of the factor matrix in terms of the factor loadings was made based on theory and practical significance. Loadings in the range of .30 to .40, considered the minimal level for interpretation of structure, were kept. However, research has shown that factor loadings have substantially larger standard errors than typical correlations [8]. Thus, to obtain a power level of 80 percent at a .05 significance level with a sample size of 208, a factor loading of at least .40 is required for significance [8]. Table 6 shows the contents of HiTs after the principal component analysis.
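A compact sketch of this data-reduction step, assuming standardized item scores in a NumPy array: components are extracted from the correlation matrix with the Kaiser eigenvalue-greater-than-one rule and rotated with Kaiser's varimax criterion. This mirrors the SPSS procedure described above in outline, not the authors' exact run.

```python
import numpy as np

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Kaiser varimax rotation of a p x k loading matrix (gamma = 1)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion (Kaiser, 1958).
        grad = loadings.T @ (rotated ** 3
                             - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        if s.sum() - objective < tol:
            break
        objective = s.sum()
    return loadings @ rotation

def pca_loadings(scores: np.ndarray, min_eigenvalue: float = 1.0) -> np.ndarray:
    """Unrotated component loadings from the correlation matrix, keeping
    components with eigenvalues above 1 (the Kaiser criterion)."""
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(scores, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > min_eigenvalue
    return eigvecs[:, keep] * np.sqrt(eigvals[keep])

# Usage: rotated = varimax(pca_loadings(item_scores)); loadings below .40
# would then be suppressed, as in the text.
```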
Table 5 Reliability analysis with overall reliability coefficient = 0.986

Construct measure (hybrid e-Training system)   N of items   Cronbach's alpha
CONTENT                                            9             0.933
DELIVERY                                           9             0.921
SERVICE                                            7             0.886
OUTCOME                                           12             0.948
STRUCTURE                                         24             0.971

Note: for each individual item (e.g., "Face to face instruction was helpful.", "Online resources are useful.", "The course provides opportunities for self-reflection"), the full table also reports the corrected item-total correlation (ranging from .381 to .881 across items) and Cronbach's alpha if the item is deleted.
Table 6 Contents of HiTs after PCA

Factors     Item ID                                        Total items
Content     C03, C04, C05, C06                              4
Delivery    C10, C11, C12, C17, C18                         5
Service     C19, C20, C21, C23                              4
Outcome     C28, C31, C33, C35, C37                         5
Structure   C38, C42, C46, C48, C54, C56, C58, C60, C61     9

*Total items = 27
VII. FINDINGS

This section presents the results of the study by answering the research question: is the trainees' perspective towards the usefulness of the Hybrid e-Training module influenced by the module's content, delivery, service, structure and outcome? This is done by reporting the results of the structural equation modeling process, using confirmatory factor analysis to achieve external construct validation.
Fig. 7 The first hypothesized measurement model
A. CFA and Construct Validity
This section illustrates the first four stages of the procedure for performing CFA [8] to confirm the hypothesized hybrid model. Having completed Stage 1, defining the individual constructs, as explained in the methods section, Stage 2, developing the overall measurement model, was carried out. A visual diagram depicting the first hypothesized measurement model, consisting of 27 measured indicator variables and five latent constructs, is shown in Fig. 7.
As prescribed in the CFA procedure [8], all constructs are allowed to correlate with all other constructs, and each measured item is allowed to load on only one construct; the error terms are not allowed to relate to any other measured variable. Two constructs (Content and Service) are indicated by four measured indicators, another two (Delivery and Outcome) by five measured indicators, and one by nine indicators. Every individual construct is identified.

The overall model has more degrees of freedom than paths to be estimated. Therefore, abiding by the rule of thumb [8] recommending a minimum of three indicators per construct but encouraging at least four, the order condition is satisfied, which means the model is over-identified. Given the number of indicators and the sufficient sample size of 208, no problem with the rank condition is expected either.

Stage 3 requires that the study be designed and executed to collect data for testing the measurement model constructed in Stage 2. Having done that, AMOS 7.0 was selected to estimate the parameters in the measurement model drawn earlier using the graphical interface, as depicted in Fig. 7. The model was estimated using the default maximum likelihood estimation; the result of the estimation is shown in Fig. 8.

The next stage is Stage 4, assessing measurement model validity. This is done by comparing the theoretical measurement model against reality as represented by the sample. Key fit statistics and the parameter estimates from Fig. 8 and subsequent iterations were reviewed.
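As a back-of-the-envelope illustration of the order condition above (a sketch, not the authors' computation), the free parameters of the 27-indicator, five-factor CFA can be counted against the distinct sample moments:

```python
# Hypothetical order-condition check for the 27-indicator, five-factor model.
p, factors = 27, 5
moments = p * (p + 1) // 2                         # 378 distinct (co)variances
free_loadings = p - factors                        # 22: one loading per factor fixed to 1
error_variances = p                                # 27
factor_variances = factors                         # 5
factor_covariances = factors * (factors - 1) // 2  # 10: all constructs correlate
params = free_loadings + error_variances + factor_variances + factor_covariances
print(moments - params)  # 314 degrees of freedom > 0: the model is over-identified
```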
B. Data Analysis
To arrive at the conclusion, a confirmatory factor analysis was conducted on the hypothesized five-factor structure model using the AMOS model-fitting program [5]. The program adopted maximum likelihood estimation to generate estimates in the full-fledged measurement model. To assess the fit of the measurement model, the analysis relied on a number of descriptive fit indices: (1) the relative chi-square (χ2/df), (2) the comparative fit index (CFI), (3) the Tucker-Lewis index (TLI) and (4) the root mean square error of approximation (RMSEA). Wheaton et al., in Hair et al. [8], suggest the use of the relative chi-square (chi-square/df) as a fit measure, proposing a ratio of approximately five or less as beginning to be reasonable. Carmines and McIver, in [8], however, state from their experience that chi-square/df values in the range of two to three are indicative of an acceptable fit between the hypothetical model and the sample data. The possible values of CFI and TLI range from zero to one, with values close to one demonstrating a good fit [5]. Finally, a value of approximately .08 or less for the RMSEA indicates a reasonable error of approximation.
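The descriptive fit indices named above can be computed from the model and baseline (independence-model) chi-square statistics using the standard formulas; this sketch uses hypothetical numbers, not the study's output.

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Relative chi-square, CFI, TLI and RMSEA from model (m) and baseline (b) fits."""
    rel_chi2 = chi2_m / df_m
    d_m = max(chi2_m - df_m, 0.0)  # model noncentrality
    d_b = max(chi2_b - df_b, 0.0)  # baseline noncentrality
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = (chi2_b / df_b - chi2_m / df_m) / (chi2_b / df_b - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return rel_chi2, cfi, tli, rmsea

# Hypothetical values for a 27-indicator model on N = 208 cases:
print(fit_indices(chi2_m=620.0, df_m=314, chi2_b=5400.0, df_b=351, n=208))
# -> relative chi-square ~2.0, CFI ~.94, TLI ~.93, RMSEA ~.07
```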
C. Hypothesized Model
Fig. 8 presents the estimated five-factor model for the hybrid module using the data drawn from the test sample (N = 208). Items from each scale are assumed to load only on their respective latent variables, and some of the overall fit indicators and parameter values are shown in the figure. The results indicated that the parameters were free from offending estimates, ranging from .56 to .87. Both fit indicators (CFI and TLI) exceeded the threshold of .90, the standard deemed important for model fit. However, the root mean square error of approximation of .088 reflects a possible fit problem.
D. Revised Model
A closer examination of the results revealed one possible reason for the model's lack of fit in terms of the RMSEA. Evidently, the residuals associated with the observed indicators C6 (e1 = .32) and C18 (e5 = .76) may have created some problems. Typically, residuals of less than |2.5| do not suggest a problem; conversely, residuals greater than |4| raise a red flag and suggest a potentially unacceptable degree of error. To deal with these "noises", the hypothesized model was revised, with the two problematic indicators excluded from the subsequent analysis [8].

To validate the likelihood of the revised five-factor model, another confirmatory factor analysis was applied to the same sample. Fig. 8 shows the revised hypothesized measurement model for the hybrid e-training module, while Fig. 9 shows the final revised model. Note that in the revised model only 18 indicators are left: three constructs (Content, Service and Structure) are indicated by four measured indicators each, and the two other constructs (Delivery and Outcome) by three measured indicators each. Based on the modification indices, nine items were deleted over nine iterations to bring the RMSEA down towards the required threshold of 0.08 for an adequately fitting model, as shown in Fig. 8.

The overall fit of the final 16-item revised measurement model is summarized in Fig. 9. The magnitudes of the factor loadings were substantial, with CFI = .943 and TLI = .930, while the RMSEA improved slightly to 0.086. Since little further improvement in the RMSEA was being made and all loadings and residuals were acceptable, we stopped the iterations at what is shown in Fig. 9 as the final revised model. The model is free from offending estimates, with loadings ranging from .75 to .87. The internal consistency estimates satisfied the standard deemed necessary in scale construction: the Cronbach's alphas for the five sub-constructs after CFA range from .814 to .909 (Content = .831, Delivery = .865, Service = .885, Outcome = .814, Structure = .909), while the Cronbach's alpha for the whole instrument is .959.

Fig. 8 The revised hypothesized measurement model for the hybrid e-Training module: C3-C58 represent observed variables; e4-e19 represent error variances; single-headed arrows from factors depict factor loadings

Fig. 9 The final revised model for the hybrid module
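The residual screening described above can be sketched as follows, using one common normal-theory approximation for the standardized residuals; the matrices here are placeholders, not the study's estimates.

```python
import numpy as np

def standardized_residuals(S: np.ndarray, Sigma: np.ndarray, n: int) -> np.ndarray:
    """Residuals between the sample covariance matrix S and the model-implied
    matrix Sigma, scaled by an approximate normal-theory standard error."""
    se = np.sqrt((np.outer(np.diag(S), np.diag(S)) + S ** 2) / n)
    return (S - Sigma) / se

# Indicator pairs with |residual| > 4 raise a red flag, as noted above; e.g.:
# flags = np.argwhere(np.abs(standardized_residuals(S, Sigma, 208)) > 4)
```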
E. Descriptive Findings
At all three phases of this study, the overall Cronbach's alpha for the instrument exceeded the standard. In addition, the principal component analysis results indicated that five dimensions emerged for the Hybrid scales, namely content (c3, c4 and c5); delivery (c11, c12 and c17); service (c19, c20 and c21); outcome (c31, c35 and c37); and structure (c46, c48, c56 and c58). To confirm which items belong to which constructs, i.e., to test the construct validity of the Hybrid Module, confirmatory factor analysis was conducted, and the findings showed evidence of construct validity. As such, the answer to the research question, whether the trainees' perspective towards the usefulness of the Hybrid e-Training module is influenced by the module's content, delivery, service, structure and outcome, is valid as portrayed in the descriptive results for this particular group of trainers in Table 7. Table 7 shows the mean scores of all items measuring the usefulness of the hybrid e-training module. Considering that a mean score of 1 is very low, 2 low, 3 average, 4 high and 5 very high, it is safe either to round the average mean up to 4.0 and consider it high, or to leave it at 3.97 and consider it on the high side of average, approaching high.
Table 7 Average mean scores of items measuring the usefulness of HiT

VIII. DISCUSSION

In summary, a psychometrically sound instrument is evidenced by high reliability and validity. Therefore, rigorous effort has been invested in developing the Hybrid e-Training Instrument. According to Hair [8] and Thorndike [12], the generally agreed-upon lower limit for Cronbach's alpha is .70. As mentioned in the previous section, at all three phases of the study the overall Cronbach's alpha for the instrument exceeded this standard. The results indicate that the instrument is highly reliable.

The goodness-of-fit measures of the comparative fit index (CFI) and the non-normed fit index (NNFI, also known as the TLI) were above the suggested threshold of .90 (CFI = .943; TLI = .930). With reference to model fit, researchers use numerous goodness-of-fit indicators to assess a model, but in general, for a one-time analysis, the TLI, CFI and RMSEA are preferred [5]. According to Browne and Cudeck, in [6], a value of 0.08 or less for the RMSEA would indicate a reasonable error of approximation, and one would not want to employ a model with an RMSEA greater than 0.1. As such, we consider the RMSEA of 0.086 for the final revised model acceptable, although the generally accepted threshold is RMSEA < .08.

In this study, SEM estimates the degree to which the hypothesized model fits the data. In the CFA test, goodness-of-fit indices are estimated for the latent variable Hybrid Module as a distinct structural model. Although it is wise and appropriate to measure items found in other studies, such as the items making up the DDLM from [3] and [4], to form a certain construct, it is not appropriate to assume that a group of items found to form a valid and reliable construct in another study will form an equally valid and reliable construct when measured on a different set of data. Similarly, constructs tested on a national data set are valid in a new study only in the rare instance when the new study uses the identical observations in the same data with the same theoretical underpinning. Divergent choices for addressing the problem of missing data will normally change construct validity results such that a new confirmatory analysis is appropriate [5].

IX. CONCLUSION

As a national and research university, UKM is among the most established universities in Malaysia. Building on its past and present successes, the university will continue to move ahead and carve a name internationally alongside other reputable universities. This can be achieved by exploring new opportunities, such as implementing the hybrid method, which has been empirically tested as a verified local model. Implementing the hybrid model will guide the university, particularly its academics and trainers, in optimising existing resources and leveraging the university's strengths to make a global impact.

ACKNOWLEDGMENT

In accomplishing this report, we would like to convey our greatest appreciation to Professor Sahari Nordin, Dean of the Research and Innovation Centre, International Islamic University Malaysia, who generously gave us his time and insights on the details of data analysis using structural equation modeling; Dr. Igusti Darmawan of the University of Adelaide, Australia, for his expert judgment in reviewing the methods and for valuable insights into various other hierarchical and alternative mathematical modeling approaches; Dr. Philippa Gerbic of the Auckland University of Technology, New Zealand, who gave much of her precious time and expertise in the field of hybrid learning by auditing the various versions of the instruments, not to mention conducting rigorous expert review checklists and heuristic reviews; and all the other experts involved in this study, including the many involved during face and content validation whom we cannot possibly name here. Last but not least, our sincere gratitude goes to Associate Professor Dr. Kamisah Osman and Dr. Sharifah for allowing us to enlist their students as our respondents; to Pn. Kemboja of the university's Language Centre for reviewing and editing the language of the instruments up to the latest version 7.1; and finally to Professor Colla MacDonald of the University of Ottawa, Canada, for giving us permission to use and adapt the Demand Driven Learning Model and instrument.
REFERENCES
[1] A.R. Ahlan, M.A. Suhaimi, H. Hussin, and Y. Arshad, "Assessing future needs of IT education in Malaysia: A preliminary result," in Proceedings of the 4th WSEAS/IASME International Conference on Educational Technology (EDUTE'08), Corfu, Greece, October 26-28, 2008, pp. 193-196.
[2] A.R. Ahlan, Y. Arshad, M.A. Suhaimi, and H. Hussin, "The future skill-sets expectations of IT graduates in Malaysia IT outsourcing industry," presented at the 7th WSEAS International Conference on E-Activities, Cairo, Egypt, Dec 29-31, 2008, Paper 605-256.
[3] C.J. MacDonald, E. Stodel, L. Farres, K. Breithaupt, and M.A. Gabriel, "The Demand Driven Learning Model: A framework for Web-based learning," The Internet and Higher Education, 4(1), 2001, pp. 9-30.
[4] C.J. MacDonald, K. Breithaupt, E. Stodel, L. Farres, and M.A. Gabriel, "Evaluation of Web-based educational programs: A pilot study of the Demand-Driven Learning Model," International Journal of Testing, 2(1), 2002, pp. 35-61.
[5] B.B. James, F.K. Stage, J. King, and A. Nora, "Reporting structural equation modeling and confirmatory factor analysis results: A review," The Journal of Educational Research, Heldref Publications, Vol. 99, No. 6, 2006, pp. 323-337.
[6] B.M. Byrne, Structural Equation Modeling with AMOS: Basic Concepts, Applications and Programming. New Jersey: Lawrence Erlbaum Associates, 2001.
[7] R.B. Kline, Principles and Practice of Structural Equation Modeling (2nd ed.). New York: Guilford Press, 2005.
[8] J.F. Hair, R.E. Anderson, R.L. Tatham, and W.C. Black, Multivariate Data Analysis (6th ed.). Upper Saddle River, NJ: Prentice Hall, 2006.
[9] R. Likert, "A technique for the measurement of attitudes," Archives of Psychology, vol. 140, 1932, p. 52.
[10] J. Pallant, SPSS Survival Manual. Australia: Allen & Unwin, 2001.
[11] J.C. Reinard, Communication Research Statistics. USA: Sage Publications, 2006, p. 419.
[12] M.D. Gall, J. Gall, and W.R. Borg, Educational Research: An Introduction (7th ed.). Boston: Allyn & Bacon, 2003.
[13] N. Sahari, A.A.A. Ghani, H. Selamat, and A.S.M. Yunus, "Mathematics courseware usefulness items construction," presented at the 7th WSEAS International Conference on E-Activities, Cairo, Egypt, Dec 29-31, 2008, Paper 605-300.
Manuscript received November 11, 2008; revised version received December 31, 2008. This work was supported by the Malaysian Government under the Research University Grant UKM-GUP-TMK-08-03-308.
Rosseni Din is a senior lecturer in the field of Computer Education and E-Learning at the Faculty of Education, Universiti Kebangsaan Malaysia (the National University of Malaysia), Bangi 43600 Selangor, Malaysia (phone: +6016-225-6420; fax: +603-8925-4372; e-mail: [email protected]).
Mohamad Shanudin Zakaria is an Associate Professor and Head of the Centre of Excellence for Artificial Intelligence at the Faculty of Technology and Information Science, Universiti Kebangsaan Malaysia, Bangi 43600 Selangor, Malaysia (e-mail: [email protected]).
Khairul Anwar Mastor is an Associate Professor and Head of the Centre for General Studies, Universiti Kebangsaan Malaysia, Bangi 43600 Selangor, Malaysia (e-mail: [email protected]).
Norizan Abdul Razak is an Associate Professor and Head of the Centre of Excellence for E-Community Research at the Faculty of Humanities and Social Science, Universiti Kebangsaan Malaysia, Bangi 43600 Selangor, Malaysia (e-mail: [email protected]).
Mohamed Amin Embi is a Professor and Head of E-Learning at the Center of Academic Advancement, Universiti Kebangsaan Malaysia, Bangi 43600 Selangor, Malaysia (e-mail: [email protected]).
Siti Rahayah Ariffin is a Professor and Dean of the Faculty of Education, Universiti Kebangsaan Malaysia, Bangi 43600 Selangor, Malaysia (e-mail: [email protected]).