Journal Prob Solving 1
ARTICLE INFO
Article history:
Available online 28 March 2015
Keywords:
Learning progress
Cognitive change
Problem solving
Mental models
Latent class
ABSTRACT
This study proposes that learning is a process of transitioning from one stage to another stage within a
knowledge base that features concepts and relations. Drawing on the theories of expertise, this study
explored two different models of learning progress (i.e., three- and two-stage models) in the context of
classroom learning and identified a model that was a good fit to the data. Participants in this investigation included 136 students and 7 professors from six different universities in the United States. In order
to detect and validate stages of learning progress in participants' written responses to an ill-structured
and complex problem scenario, this study utilized Exploratory Factor Analysis (EFA) and the Continuous Log-Linear Cognitive Diagnostic Model (C-LCDM) method (Bozard, 2010). The results demonstrate
that the three latent classes matched the three stages of the three-stage model. This study provides an
account of a diagnostic model of learning progress and associated assessment methods, yet further studies
are required to investigate different conditions.
© 2015 Elsevier Inc. All rights reserved.
1. Introduction
Learning is fundamentally about the change in knowledge and
skills needed to solve problems (Bransford, Brown, & Cocking, 2000;
Spector, 2004). Over the past few decades, many studies have addressed how people learn through complex problem solving in
diverse disciplines, including cognitive psychology (Sinnott, 1989),
applied psychology (Zsambok & Klein, 1997), and educational psychology (Jonassen, 1997; Spiro, Feltovich, & Coulson, 1996), yet
understanding changes in the ability to solve a problem within the
context of classroom teaching remains a challenge due to the uncertain and complex nature of problems (Choi & Lee, 2009) and
changes that take place in the short term.
The theories of mental models and expertise development can
help address these issues. The theory of mental models explains that
problem solving involves a process of building mental representations of a problem situation (Johnson-Laird, 1983). Mental models
are holistic structural representations of the facts, concepts, variables, objects, and their relationships within a problem situation
(Dochy, Segers, Van den Bossche, & Gijbels, 2003; Jonassen, Beissner,
& Yacci, 1993; Segers, 1997). Cognitive change takes place when
learners confront unfamiliar and challenging situations (diSessa,
2006; Festinger, 1962; Piaget, 1964) or a pre-structural lack of knowledge (Biggs & Collis, 1982). When striving to resolve problem
situations, learners experience changes in their mental representations whereby the problem situations are recognized, defined, and
organized (Seel, 2003, 2004).
Learners possibly experience qualitatively different levels of
knowledge structure when engaged in problem solving. In line with
the idea that children experience qualitatively distinct but sequential knowledge states (Piaget, 1964), developmental psychologists
have seen that learning and development evolve as the learner constructs a qualitatively distinct knowledge structure (Alexander, 2003,
2004; Flavell & Miller, 1998; Siegler, 2005; Siegler, Thompson, &
Opfer, 2009; Vygotsky, 1934/1978; Werner, 1957). A number of experimental studies have demonstrated that qualitatively different
cognitive stages take place when learners respond to problems in
both the short term and the long term (e.g., Alexander & Murphy,
1998; Chen & Siegler, 2000; Opfer & Siegler, 2004; Siegler et al., 2009;
Vosniadou & Vamvakoussi, 2008).
Expertise studies have sought evidence to explain the development of expertise. As Fig. 1 illustrates, traditional expert-novice
studies define an expert as one who consistently and successfully
performs in a specific, selected domain due to a highly enhanced,
efficient, and effective long-term memory (Ericsson & Kintsch, 1995;
Flavell, 1985; Simon & Chase, 1973). In addition, contemporary
studies of expertise do not restrict expertise development to chunking and pattern recognition. For example, Ericsson (2003, 2005, 2006)
suggested that expertise is developed by deliberate practice in which
learners engage in appropriate and challenging tasks carefully chosen
by masters, devote years of practice to improve their performance, and refine their cognitive mechanisms through self-monitoring efforts, such as planning, reflection, and evaluation. From
this point of view, expertise development in a particular domain
requires long-term devotion (e.g., ten or more years) to disciplined and focused practice (Ericsson, 2003, 2005, 2006).
This study addresses two major limitations often found in the
traditional approach. One is that traditional studies tend to contrast two extremes, experts and novices, resulting in a "lack of
developmental focus" (Alexander, 2004, p. 278). Missing accounts
of the middle stages leave the developmental process somewhat
unclear. Furthermore, Alexander (2004) argued that educators
cannot confidently apply the findings of research conducted in
non-academic settings.
In response to these problems, Alexander (2003, 2004) developed a Model of Domain Learning (MDL) to explain multiple stages
of expertise development in the educational context and validated the model (Alexander & Murphy, 1998; Alexander, Sperl, Buehl,
Fives, & Chiu, 2004). In spite of the unique value of the model, it
still assumes that long-term changes during schooling are required to master a domain.
The current study shifts the focus of expertise development to
problem-solving situations (i.e., task level) in the classroom and examines how short-term changes can lead to expertise. Several studies
(e.g., Chi, 2006; Newell & Simon, 1972) have suggested that an individual's understanding of a problem situation reflects levels of
expertise in solving that problem. Based on the same idea, Gobet
and Wood (1999) tested an explicit knowledge model of learning
and expertise for the design of computer-based tutoring systems.
Concerning problem solving in the classroom, recent studies have
investigated differences between and changes in expertise (Ifenthaler,
Masduki, & Seel, 2009; Schlomske & Pirnay-Dummer, 2008; Spector
& Koszalka, 2004). For example, Ifenthaler et al. (2009) investigated longitudinal changes using hierarchical linear modeling (HLM)
techniques. However, a common limitation of these studies is a "lack
of developmental focus" (Alexander, 2004, p. 278), either simplifying the difference to expert vs. novice or falling short of explicitly
modeling stages of expertise development.
Using confirmatory analysis, the current study explored potential models of learning progress in solving ill-structured, complex
problem situations in a classroom setting. A valid model can provide
primary activities: (a) recognizing the problem, (b) defining and representing the problem, (c) developing a solution strategy, (d)
organizing one's knowledge about the problem, (e) allocating mental
resources for solving the problem, (f) monitoring one's progress
toward the goal, and (g) evaluating the solution. These problem-solving activities can typically be summarized into two main phases:
(a) problem representation and (b) solution development. Concerning the first phase, Newell and Simon (1972) explained that a
problem solver conceptualizes the problem space in which all of
the possible conditions of a problem exist. Studies of expertise have
demonstrated clear distinctions among the mentally represented
problem spaces of experts and novices (Chase & Ericsson, 1982; Chi,
Glaser, & Farr, 1988; Ericsson, 2005; Ericsson & Staszewski, 1989;
Spector, 2008; Spector & Koszalka, 2004). The focus of the current
study was to model the stages of building a problem space (i.e.,
problem representation).
2.2. Learning progress as changes in an individual's problem space
Learning progress can be defined as a series of gradual or sudden
changes in a learner's understanding (i.e., problem space) that are
facilitated by instruction. The theory of mental models accounts for
these cognitive changes. Mental models are cognitive artifacts that
a learner constructs in order to understand a given problem situation (Anzai & Yokoyama, 1984; Craik, 1943; Greeno, 1989;
Johnson-Laird, 2005a, 2005b; Kieras & Bovair, 1984; Mayer, 1989;
Norman, 1983; Seel, 2003, 2004). A problem situation is mentally
represented in a learner's mind when he or she is involved in the
problem-solving process. Mental models depend primarily on a learner's prior knowledge (i.e., prior mental models), and they change
over time (Seel, 1999, 2001). Mental model changes in a learning
situation are not simple shifts without direction but transformations that lead closer to a pre-defined goal. In this way, such changes
can be considered progress. Thus, progress in mental modeling involves learning-dependent and developmental transitions between
preconceptions and causal explanations (Anzai & Yokoyama, 1984;
Carley & Palmquist, 1992; Collins & Gentner, 1987; Johnson-Laird,
1983; Mayer, 1989; Seel, 2001, 2003, 2004; Seel & Dinter, 1995; Shute
& Zapata-Rivera, 2008; Smith, diSessa, & Roschelle, 1993; Snow,
1990).
Mental models improve over time as a learner develops mastery
in a given problem situation (Johnson-Laird, 2005a, 2005b; Seel,
2003, 2004). Progress in mental models might involve both quantitative and qualitative change. For instance, a mental model might
be enlarged when a learner imports a new concept into an existing model (e.g., "A sharp increase in the world population could be
a major cause of global warming."). Or a mental model might fundamentally change to adapt to a new situation (e.g., "The current wildfire rate is not that high because this frequency has been observed
in the past. Therefore, global warming might not be associated with
wildfires.").
2.3. 3S knowledge structure dimensions: surface, structural, and
semantic dimension
Studies have shown that mental models progress as structural
knowledge heightens (Johnson-Laird, 2005a, 2005b; Jonassen et al.,
1993). When problem solving is defined as a mental activity that
relies on structurally represented mental models (Dochy et al., 2003;
Segers, 1997), the assessment of problem-solving knowledge and
skills must be sensitive to the structural characteristics of the knowledge base (Gijbels, Dochy, Van den Bossche, & Segers, 2005).
Some have argued that knowledge structure consists of (a)
surface, (b) structural, and (c) semantic dimensions (Ifenthaler, 2006;
Kim, 2012b; Pirnay-Dummer, 2006; Spector & Koszalka, 2004). For
example, Kim (2012b) demonstrated a theoretical basis for these
Table 1
Models of learning progress in the development of expertise.

Level    Three-stage model    Two-stage model
1        Acclimation          Misconception
2        Competence           Conception
3        Proficiency          Conception
empirically identified using analytic methods such as cluster analysis (Alexander & Murphy, 1998; Alexander et al., 2004). While
Alexander's Model of Domain Learning theorizes that expertise develops in the long term, the current study deploys the three-stage
model to explain short-term cognitive changes while solving a
problem in a classroom setting. The premise that qualitatively distinct changes take place remains the same in both the short term
and the long term (Siegler et al., 2009; Vosniadou & Vamvakoussi,
2008; Vygotsky, 1934/1978; Werner, 1957).
Table 2 describes the knowledge structure that characterizes each
stage according to the current study. The acclimation stage refers
to the initial point at which a learner becomes familiar with an unfamiliar problem. Accordingly, learners at this stage typically have
a pre-structural lack of knowledge (Biggs & Collis, 1982). Another
type of acclimating learner is one who has only limited and fragmented knowledge that is neither cohesive nor interconnected.
Learners become more competent in understanding a problem
through teaching and learning. For some competent learners, knowledge structures contain a small number of concepts and relations
that appear to be structured well but still require a better sense of
what is important in a particular situation. Other competent learners internalize a larger number of principles that have been taught
and exhibit a good semantic dimension, but their contextual concepts are poorly organized.
The last stage is proficiency-expertise. In this stage, proficient learners, with increasing experience, construct sufficient
contextual information and organize their knowledge base for a
problem situation that includes some domain-specific principles.
They conceptualize a sufficient problem space in all three dimensions that accommodates the real features of a given problem
situation. The probability of resolving a problem markedly increases. Proficient learners sometimes represent a relatively small
but efficient knowledge structure in which sufficient key concepts
are well organized (good structural and semantic dimensions with
a surface dimension that is lacking). These knowledge structures
are in accord with the idea that experts sometimes create mental
models that contain an optimal rather than maximum number
of concepts and relations (Glaser, Abelson, & Garrison, 1983).
3.2. Two-stage model
The two-stage model is another possible candidate to explain
learning progress in the classroom setting where a particular problem
is presented. Some studies of conceptual change provide accounts
that suggest why only two stages are likely to occur.
Table 2
Three-stage model of learning progress.

Stage: Acclimation
(a) All dimensions (surface, structural, and semantic) are quite dissimilar to expert models; or (b) knowledge structures have a similar surface dimension to expert models but are missing some structural and semantic dimensions.

Stage: Competence
(a) The structural dimension might appear to be mastered because mental models, which consist of a small amount of contextual and abstract knowledge, are likely to look cohesive and connected; or (b) student and expert models are highly similar in the semantic dimension but dissimilar in the surface and structural dimensions.

Stage: Proficiency-expertise
(a) The structural dimension shows sufficient complexity while the surface dimension is adequate, but not enough to guarantee a semantic fit; (b) knowledge structures are well-featured in all dimensions (surface, structural, and semantic); or (c) a significant number of principles (semantic) create a cohesive structure (structural) but with a small number of concepts (surface).
Table 3
Participant demographics.

                                            Total          T1    T2    T3    T4    N/A
Total                                       136            13    76    17    25    5
Gender              Female                  113 (83.1%)    11    62    14    22    4
                    Male                     23 (16.9%)     2    14     3     3    1
Class year          Freshman                  9 (6.6%)      0     6     1     2    0
                    Sophomore                38 (27.9%)     5    16     8     8    1
                    Junior                   52 (38.2%)     4    31     3    13    1
                    Senior                   37 (27.2%)     4    23     5     2    1
Course taken        None                     56 (42.2%)     3    37     9     6    1
                    ET-related (a)            8 (5.9%)      2     4     0     2    0
                    Other (b)                72 (52.9%)     8    35     8    17    4
Teacher experience  Less than 2 years         6 (4.4%)      0     5     0     1    0
                    No                      127 (93.4%)    13    70    16    24    4
                    Missing                   3 (2.2%)
Teaching career     Very interested          33 (24.3%)     4    20     3     5    1
                    Interested               19 (14.0%)     4     7     0     8    0
                    Somewhat                 32 (23.5%)     2    18     6     5    1
                    Not so                   33 (24.3%)     2    20     4     5    2
                    Not at all               19 (14.0%)     1    11     4     2    1

(a) Courses related to the use of educational technology. (b) Other education-related courses.
respective doctorate-level teaching assistants majoring in Instructional Technology (T1, T3, and T4). The full-time instructor had six
years of teaching experience in middle schools and had taught the
course for over six years. For quality control, the instructor supervised the sessions taught by the three doctoral students.
Most participating students were undergraduate pre-service
teachers who had considered teaching in K-12. As Table 3 shows,
female students comprised 83% of the study participants, while 17%
were male. The majority of students were juniors and seniors (65%),
followed by sophomores (28%) and freshmen (7%). For 42% of the
students, the course was their first related to educational technology, while only 6% had taken previous courses on the use of
educational technology. However, over half of students had taken
previous education-related courses (52%). In contrast to their coursework experience, over 93% of the students had no teaching
experience. Only 38% indicated a positive interest in being a teacher
even though they were all taking a course designed for preservice teachers. The same proportion of students (38%) indicated
little interest in teaching in K-12 schools.
Participants also included seven professors teaching at six different universities in the United States. They participated in the study
to build a reference model for a given problem situation. In order
to recruit these professors, we rst created a pool of potential candidates who were members of a scholarly association related to
Educational Technology and then made a short list of professors
based on pre-set criteria: (a) professors in Instructional Technology or related fields; (b) professors teaching a course titled
"Instructional Design" or "Technology Integration in Learning"; (c)
professors who research technology-integration in classroom learning; and (d) professors whose doctorates were received at least three
years ago. Seventeen professors were invited to participate in the
study, and seven professors were willing to contribute their expertise to the research.
Table 4
Delphi procedure.

Round    Activity
R1       Brainstorming
R2       Narrowing down/ranking
R3       Refinement

Note. Adapted from Kim (2013, p. 960).
s = 1 - |f1 - f2| / max(f1, f2)

where f1 is the frequency of a student model and f2 is the frequency of a reference model. The similarity ranged from 0 to 1 (0 ≤ s ≤ 1).
If f1 was not less than f2 (f1 ≥ f2), the similarity value was
set to 1 because the student value was greater than the reference value, indicating that the student model exceeded the reference
model in the relevant criteria.
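The rule above can be sketched in code (a minimal illustration; the function name is ours, and we assume the frequencies are non-negative with at least one of them positive):

```python
def numerical_similarity(f1: float, f2: float) -> float:
    """Numerical similarity of a student-model frequency (f1) to a
    reference-model frequency (f2): set to 1 when the student value
    meets or exceeds the reference value, otherwise the proportional
    closeness 1 - |f1 - f2| / max(f1, f2)."""
    if f1 >= f2:
        return 1.0  # student model meets or exceeds the reference criterion
    return 1.0 - abs(f1 - f2) / max(f1, f2)
```

For example, a student frequency of 4 against a reference frequency of 8 yields a similarity of 0.5, while a student frequency of 8 against a reference frequency of 4 yields 1.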
For the three similarity measures (concept, proposition, and balanced semantic matching score), conceptual similarity was applied
because these measures concern the proportion of fully identical
elements between two concept maps. The conceptual similarity
method used in this study applied Tversky's (1977) similarity
formula, which assumes that the similarity of object A and object
B is not merely a function of the features common to A and B but
relies on the unique features of each object. As in the case of numerical similarity, an adjustment was made. Just as a picture
resembles an object rather than the inverse, a student model resembles the reference model, which is more salient. In this
asymmetric relation, the features of the student model are weighted
more heavily than the reference model features (Colman & Shafir,
2008; Tversky & Shafir, 2004). When the conceptual similarities were
calculated using Tversky's (1977) formula, α in this study was
weighted more heavily than β (α = 0.7 and β = 0.3).
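Under these assumptions, Tversky's (1977) ratio model with the asymmetric weights reported here (α = 0.7 on the student model's distinctive features, β = 0.3 on the reference model's) can be sketched as follows; representing each model as a set of features, and the function name itself, are our simplifications:

```python
def tversky_similarity(student: set, reference: set,
                       alpha: float = 0.7, beta: float = 0.3) -> float:
    """Tversky (1977) ratio model: shared features divided by shared
    features plus weighted distinctive features. With alpha > beta,
    the student model's distinctive features count more heavily,
    making the comparison asymmetric."""
    common = len(student & reference)
    distinct_student = len(student - reference)
    distinct_reference = len(reference - student)
    denominator = common + alpha * distinct_student + beta * distinct_reference
    return common / denominator if denominator else 1.0
```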
Table 5
Similarity measures.

Similarity measure
1. Number of concepts
2. Number of relations (a)
3. Average degree
4. Density of graphs
5. Mean distance
6. Diameter (a)
7. Connectedness
8. Subgroups
9. Concept matching (a)

The matching measures are operationalized through conceptual similarity, computed with Tversky's ratio:

s = f(A ∩ B) / [f(A ∩ B) + α f(A - B) + β f(B - A)]

Balanced semantic matching is operationalized as a qualitative comparison based on the values of two similarities: concept and propositional matching.

Note. These similarity measures are adopted from Pirnay-Dummer and Ifenthaler (2010). (a) Similarity measures are calculated by comparing the parameters that are suggested in Wasserman and Faust (1994). The shortest path between two nodes is referred to as a geodesic. These parameters are also introduced by Ifenthaler (2010).
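Several of the structural measures in Table 5 (number of concepts, number of relations, average degree, density) can be computed directly from a concept map's edge list; the sketch below treats the map as a simple undirected graph and uses our own function and key names:

```python
def graph_parameters(edges: list[tuple[str, str]]) -> dict:
    """Basic structural parameters of a concept map treated as a
    simple undirected graph (cf. Wasserman & Faust, 1994):
    concepts = nodes, relations = unique edges, average degree,
    and density = existing links / all possible links."""
    nodes = {node for edge in edges for node in edge}
    unique_edges = {frozenset(edge) for edge in edges}
    n, m = len(nodes), len(unique_edges)
    possible = n * (n - 1) / 2
    return {
        "concepts": n,
        "relations": m,
        "average_degree": 2 * m / n if n else 0.0,
        "density": m / possible if possible else 0.0,
    }
```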
Table 6
Class-to-profile table.

                 3S attribute
Latent class     S1    S2    S3    Three-stage model    Two-stage model
Class 1 (C1)     0     0     0     Acclimation          Misconception
Class 2 (C2)     0     0     1     Competence           Conception
Class 3 (C3)     0     1     0     Competence           Misconception
Class 4 (C4)     0     1     1     Proficiency          Conception
Class 5 (C5)     1     0     0     Acclimation          Misconception
Class 6 (C6)     1     0     1     Proficiency          Conception
Class 7 (C7)     1     1     0     Proficiency          Conception
Class 8 (C8)     1     1     1     Proficiency          Conception
For instance, class 4 has a mastery profile (i.e., S1: 0, S2: 1, and
S3: 1) that is matched to the proficient learner in the three-stage
model, and conception in the two-stage model. These matches were
determined by theoretical assumptions about knowledge status at
each level. For example, a proficient learner represents a relatively small number of concepts and relations (S1: 0 = surface feature
is absent) but an efficient knowledge structure (S2: 1 = structure
feature is present) in which sufficient key concepts are well-structured (S3: 1 = semantic feature is present).
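One way to read this class-to-profile mapping is as a simple rule over the 3S mastery profile; the rule below reproduces the stage labels in Table 6 but is our reading of the pattern, not an algorithm given by the study:

```python
def stage_for_profile(surface: int, structure: int, semantic: int) -> str:
    """Map a 3S mastery profile (1 = attribute mastered) to a stage of
    the three-stage model: two or more mastered attributes indicate
    proficiency; structure or semantic alone indicates competence;
    no mastery, or surface alone, indicates acclimation."""
    if surface + structure + semantic >= 2:
        return "proficiency"
    if structure or semantic:
        return "competence"
    return "acclimation"
```

For example, the class 4 profile (0, 1, 1) maps to proficiency, while the class 5 profile (1, 0, 0) maps to acclimation.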
as continuous variables (0 ≤ s ≤ 1). As Table 7 presents, most similarity measures showed biased distributions. The similarity measures
M9 to M11 were distributed in the lower area of the similarity band,
ranging from 0 to 0.68, with lower means ranging from 0.05 to 0.24,
while the distributions of M3, M5, M6, and M7 ranged from 0.15
to 1.00, with means ranging from 0.73 to 0.82. M1, M2, M4, and M8
had means between 0.31 and 0.40. These descriptive results suggest
that the similarity measures likely indicate different constructs, which
the current study assumes to be the different dimensions of knowledge structure.
Next, as shown in Table 8, correlations among the similarity measures were calculated. Very high correlations (r > .9) were identified
among variables M1, M2, M4, and M8. Strong correlations also
emerged between two paired variables, M4 and M8 (r = 0.85) and
M10 and M11 (r = 0.86). These high correlations might provoke multicollinearity concerns for a general linear model, such as multivariate
regression; however, they were less problematic for C-LCDM because
LCDMs allow multiple items (i.e., observed variables) to measure
the same set or similar sets of attributes (Rupp et al., 2010b). Consequently, we retained all 11 measures for further analysis. The
overall Kaiser-Meyer-Olkin (KMO) value was 0.7, within the range
of "good" (between 0.7 and 0.8), even though two individual measures were below the recommended value of > 0.5 (M7 = 0.42 and
M14 = 0.42). Bartlett's test of sphericity, χ²(55) = 2570.92, p < .01, indicated that correlations between variables were significantly large;
therefore, factor analysis was appropriate.
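Bartlett's test compares the correlation matrix of the measures against an identity matrix; a sketch of the standard computation (our own function, not the authors' analysis code) with NumPy and SciPy:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray) -> tuple[float, int, float]:
    """Bartlett's test of sphericity for an n x p data matrix:
    chi-square = -(n - 1 - (2p + 5) / 6) * ln(det(R)), where R is the
    correlation matrix, with p(p - 1)/2 degrees of freedom. A small
    p-value indicates correlations large enough for factor analysis."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return chi_square, df, stats.chi2.sf(chi_square, df)
```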
5. Results
5.2. The relations between similarity measures and 3S knowledge
structure dimensions
Table 7
Descriptive statistics of similarity measures.

                               n      Minimum    Maximum    Mean    SD
M1. Concept                    140    0.04       0.96       0.35    0.18
M2. Relation                   140    0.02       0.98       0.31    0.20
M3. Average degree             140    0.42       1.00       0.82    0.14
M4. Density                    140    0.05       0.96       0.40    0.19
M5. Mean distance              140    0.28       0.99       0.73    0.17
M6. Diameter                   140    0.14       1.00       0.75    0.22
M7. Connectedness              140    0.15       1.00       0.73    0.25
M8. Subgroups                  140    0.00       1.00       0.30    0.20
M9. Concept matching           140    0.05       0.57       0.24    0.08
M10. Propositional matching    140    0.00       0.28       0.05    0.04
M11. Balanced matching         140    0.00       0.68       0.18    0.15
We anticipated that the eleven similarity measures would be associated with three latent factors: the surface, structural, and
semantic dimensions of knowledge structure. In order to test this
assumption, we ran an initial analysis to determine the number of
factors we had to extract from the data. The principal component
analysis calculated eigenvalues for each component and produced
two types of scree plots (see Fig. 3).
Three factors on the left graph had eigenvalues over Kaiser's
criterion of 1, and both graphs suggested that these three factors
were above the cut-off point of inflexion, explaining 76% of the
variance. Consequently, three factors were retained for final
analysis.
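The Kaiser criterion step can be sketched as follows (an illustration with our own function name, not the study's analysis script):

```python
import numpy as np

def kaiser_retained_factors(data: np.ndarray) -> int:
    """Count the principal components of the correlation matrix whose
    eigenvalues exceed Kaiser's criterion of 1; this is the number of
    factors retained before rotation."""
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)  # eigenvalues of a symmetric matrix
    return int((eigenvalues > 1.0).sum())
```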
Table 9 shows the factor loadings after oblique rotation. The similarity measures that clustered on the same factors suggest that factor
1 represents "surface," factor 2 "structural," and factor 3 "semantic." For instance, M4. Density, clustered on factor 1 (surface), is
defined in graph theory as the proportion of existing links to all possible lines in the graph (Wasserman & Faust, 1994). Density is
associated with the number of existing links (i.e., a surface feature).
Table 8
Correlations of the similarity measures.

       M1      M2      M3      M4      M5      M6      M7      M8      M9      M10     M11
M1     1
M2     .97*    1
M3     .60*    .71*    1
M4     .95*    .85*    .39*    1
M5     .64*    .61*    .65*    .59*    1
M6     .65*    .62*    .62*    .63*    .94*    1
M7     .01     .16     .51*    -.20*   .36*    .21*    1
M8     .92*    .92*    .63*    .85*    .71*    .72*    .14     1
M9     .54*    .56*    .49*    .46*    .43*    .46*    .06     .50*    1
M10    .38*    .43*    .37*    .27*    .21*    .22*    .14     .37*    .61*    1
M11    .16     .21*    .24*    .08     .08     .08     .13     .15     .26*    .86*    1

n = 140.
* Correlation is significant at the 0.05 level (2-tailed).
Table 9
Summary of exploratory factor analysis.

Similarity measure             Surface    Structure    Semantic    Communality
M1. Concept                                                        99.50
M2. Relation                                                       94.71
M3. Average degree             (0.35)     0.42                     51.72
M4. Density                    0.82                                90.47
M5. Mean distance                         0.96                     99.50
M6. Diameter                              0.86                     89.90
M7. Connectedness              (0.30)     0.55                     22.85
M8. Subgroups                                                      88.17
M9. Concept matching                                   0.46        49.65
M10. Propositional matching                            1.01        99.50
M11. Balanced matching                                 0.95        77.99

Note: Factor loadings in parentheses are excluded when the 0.4 cut-off value is applied.
Table 10
Similarity measures associated with attributes of knowledge structure (Q-matrix).

Similarity measure             Surface (S1)    Structure (S2)    Semantic (S3)
M1. Concept                    1               0                 0
M2. Relation                   1               0                 0
M3. Average degree             (1)             1                 0
M4. Density                    1               0                 0
M5. Mean distance              0               1                 0
M6. Diameter                   0               1                 0
M7. Connectedness              (1)             1                 0
M8. Subgroups                  1               0                 0
M9. Concept matching           (1)             0                 1
M10. Propositional matching    0               0                 1
M11. Balanced matching         0               0                 1

Note: According to the factor loadings (see Table 9), the mastery associations in parentheses (i.e., for measures M3, M7, and M9) were excluded (cut-off value: 0.4).

Table 12
The estimated final class counts and proportions.

Latent class     Q-matrix (S1, S2, S3)    Model 1       Model 2
Class 1 (C1)     0, 0, 0                  0.288 (38)    0.274 (37)
Class 2 (C2)     0, 0, 1
Class 3 (C3)     0, 1, 0                  0.427 (62)    0.418 (60)
Class 4 (C4)     0, 1, 1
Class 5 (C5)     1, 0, 0
Class 6 (C6)     1, 0, 1
Class 7 (C7)     1, 1, 0                  0.285 (40)    0.308 (43)
Class 8 (C8)     1, 1, 1
Table 11
Comparison of relative fit of C-LCDMs.

Model      Log-likelihood    AIC          BIC
Model 1    1078.495          -2076.990    -1959.324
Model 2    1093.882          -2095.763    -1960.448
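Both criteria are simple functions of the log-likelihood, the number of estimated parameters k, and the sample size n; for continuous response data the log-likelihood can legitimately be positive, which makes AIC and BIC negative, and lower values indicate better relative fit. A generic sketch (our own helper, not tied to the study's parameter counts):

```python
import math

def aic_bic(log_likelihood: float, k: int, n: int) -> tuple[float, float]:
    """Information criteria for comparing models fitted to the same
    data: AIC = -2*LL + 2k and BIC = -2*LL + k*ln(n). Lower values
    indicate better relative fit."""
    aic = -2.0 * log_likelihood + 2.0 * k
    bic = -2.0 * log_likelihood + k * math.log(n)
    return aic, bic
```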
Latent class     Three-stage model    Two-stage model
Class 1 (C1)     Acclimation          Misconception
Class 2 (C2)     Competence           Conception
Class 3 (C3)     Competence           Misconception
Class 4 (C4)     Proficiency          Conception
Class 5 (C5)     Acclimation          Misconception
Class 6 (C6)     Proficiency          Conception
Class 7 (C7)     Proficiency          Conception
Class 8 (C8)     Proficiency          Conception

Note: The boldfaced characters indicate the identified classes and matched stages.
the C-LCDM with a small number of samples (N = 140), the interpretation of the parameter estimates should be limited to this study.
Lastly, based on the three-stage model, we investigated the associations between the similarities and the three identified latent classes:
class 1 (Acclimation); class 3 (Competence); and class 7 (Proficiency).
As Table 14 shows, a series of Spearman rank-order correlations indicated that the ordered classes had a significant positive relationship with
all similarity measures. Group mean difference tests proved that there
were significant differences among the groups in all similarity measures except M11 (Balanced matching) (F(2, 130) = 1.97, p > .05).
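These two checks can be sketched with SciPy; the scores below are illustrative stand-ins, not the study's data, and the ordered codes 1-3 stand in for classes 1, 3, and 7:

```python
import numpy as np
from scipy import stats

# Illustrative similarity scores for three ordered classes
# (1 = acclimation, 2 = competence, 3 = proficiency).
class_labels = np.array([1] * 5 + [2] * 5 + [3] * 5)
scores = np.array([.10, .20, .15, .12, .18,    # class 1
                   .30, .35, .28, .32, .31,    # class 2
                   .60, .55, .62, .58, .65])   # class 3

# Spearman rank-order correlation between ordered class and score.
rho, rho_p = stats.spearmanr(class_labels, scores)

# One-way ANOVA as a group mean difference test across the classes.
f_stat, f_p = stats.f_oneway(scores[:5], scores[5:10], scores[10:])
```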
6. Conclusions
6.1. Discussion and implications
The primary goal of this study was two-fold. One was to model
different levels of learning progress in the context of teaching and
Table 13
Parameter estimates for main effect only (model 1).

Parameter                       Estimate    Standard error
M1_intercept                    0.420       0.037
M1_main effect of surface       0.167       0.020
M2_intercept                    0.384       0.038
M2_main effect of surface       0.178       0.017
M3_intercept                    0.772       0.015
M3_main effect of structure     0.109       0.011
M4_intercept                    0.462       0.038
M4_main effect of surface       0.157       0.025
M5_intercept                    0.671       0.019
M5_main effect of structure     0.145       0.013
M6_intercept                    0.676       0.025
M6_main effect of structure     0.179       0.019
M7_intercept                    0.678       0.026
M7_main effect of structure     0.124       0.027
M8_intercept                    0.377       0.041
M8_main effect of surface       0.173       0.027
M9_intercept                    0.664       0.006
M9_main effect of semantic      0.423       0.000
M10_intercept                   0.522       0.004
M10_main effect of semantic     0.476       0.000
M11_intercept                   0.645       0.013
M11_main effect of semantic     0.465       0.000
Table 14
Means, standard deviations, Spearman's rho, and group difference statistics.

                               Class 1        Class 3        Class 7
                               Mean (SD)      Mean (SD)      Mean (SD)      rs(2)     F(2, 130)    Sig.
M1. Concept                    .194 (.080)    .290 (.070)    .587 (.140)    .829**    178.69       .000
M2. Relation                   .130 (.057)    .252 (.079)    .563 (.161)    .873**    180.81       .000
M3. Average degree             .655 (.101)    .850 (.092)    .923 (.079)    .725**    91.43        .000
M4. Density                    .270 (.122)    .326 (.087)    .618 (.154)    .691**    102.02       .000
M5. Mean distance              .515 (.108)    .777 (.104)    .870 (.084)    .779**    134.19       .000
M6. Diameter                   .485 (.157)    .802 (.138)    .929 (.112)    .756**    109.61       .000
M7. Connectedness              .547 (.282)    .825 (.197)    .759 (.185)    .266**    19.18        .000
M8. Subgroups                  .119 (.065)    .256 (.070)    .551 (.164)    .880**    175.92       .000
M9. Concept matching           .199 (.075)    .245 (.055)    .274 (.087)    .332**    11.20        .000
M10. Propositional matching    .032 (.036)    .042 (.034)    .066 (.059)    .223**    6.59         .002
M11. Balanced matching         .154 (.170)    .171 (.139)    .217 (.138)    .175*     1.97         .144

Note: rs(2) denotes Spearman's rank-order correlations with 2 degrees of freedom. Standard deviations appear in parentheses next to means.
* p < .05.
** p < .01.
learning when solving ill-structured, complex problems. Theoretical discussions about expertise development underpin the models
of learning progress, including two-stage and three-stage models.
The other goal was to investigate a model of learning progress that
better fits the data obtained from 133 students and 7 expert professors. The validation analysis using C-LCDM demonstrated that
the data contained three latent classes that exactly corresponded
to the three-stage model (see Fig. 4): (a) acclimation, (b) competence, and (c) proficiency.
Although the two-stage model was also explained by these three
latent classes, the study suggests that the three-stage model provides a better framework for measuring learning progress in complex
problem solving. In the three-stage model, the middle stage overcomes the limitation associated with contrasting extremes, such as
conception (i.e., experts) and misconception (i.e., novices).
The three levels of learning progress explain cognitive changes
over a short period of practice, such as in-class problem solving, that
correspond to Alexander's (2003, 2004) three levels of domain learning that develop over a relatively long period of time (i.e., featured
acclimation cluster, competence cluster, and proficiency cluster). Interestingly, whether in the short term or the long term, the three
stages suggest that learners are likely to experience three qualitatively distinct levels in learning and development (Siegler et al., 2009;
Vygotsky, 1934/1978; Werner, 1957).
The current study observed two key conditions that relate to the
stages of learning progress: (a) teaching and learning and (b) prior
knowledge. The research context implies that students' prior experience played an essential role in determining their levels of
understanding. Since data were collected early in the semester, participants in this study responded to the problem without instruction
in related content. Moreover, all participants were undergraduates, most of whom were non-education majors. In this limited-instruction condition, the association with teaching experience showed
a significant chi-square value, yet there was no significant association between levels of learning progress and student demographics:
school year, interest in becoming a teacher, and prior coursework.
Still, we withheld a generalized interpretation because some
of the expected cell values were less than five (see Appendix C).

Fig. 4. The measurable three stages of learning progress, modified from Kim (2012b).
We also had to consider the ill-structured and complex nature
of a given problem (e.g., its unknown elements and high number
of interactions between associated factors; see Choi & Lee, 2009;
Eseryel et al., 2013; Jonassen, 2014; Kim, 2012b). The nature of the
problem explains why there was no other latent class pertaining to the proficiency stage. For example, due to the problem's
unknown elements and high complexity, the participants rarely
conceptualized a knowledge structure featuring all dimensions (i.e.,
Class 8).
Interestingly, all seven experts were identified as proficient, but unexpectedly, their knowledge structures did not perfectly match the reference model in all three dimensions. The current study began with the assumption that experts in a domain would build similar understandings of a problem despite its ill-structuredness and high complexity. However, the results showed that their proficiency was built on their own beliefs, forming somewhat different knowledge structures that emphasized some concepts and facts in the reference model. Although they constructed the reference model in collaboration with each other, the individual expert models did not include most of the internal and external factors associated with the given problem situation (cf. Jonassen, 2014).
These findings imply that we might observe different sets of latent classes within the three-stage model. For example, complex yet well-structured problems are likely to produce common and convergent answers (Bogard et al., 2013; Jonassen, 2014; Shin et al., 2003). These answers might represent two types of proficiency: (a) latent class 4 (a cohesive knowledge structure with a significant number of principles (semantic) but a small number of concepts (surface)); or (b) latent class 8 (a well-featured knowledge structure at all levels (surface, structural, and semantic)).
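The latent-class labels discussed here follow from the three binary mastery dimensions analyzed in the study (surface, structural, semantic): three dimensions yield 2³ = 8 possible attribute profiles. The sketch below enumerates them under an assumed lexicographic numbering convention; the profile-to-class mapping is an illustrative assumption, not taken from the paper.

```python
# Enumerate the 2^3 = 8 latent classes implied by three binary mastery
# dimensions (surface, structural, semantic) in a diagnostic model.
# The lexicographic class numbering is an assumption for illustration.
from itertools import product

dimensions = ("surface", "structural", "semantic")
profiles = {
    idx: dict(zip(dimensions, profile))
    for idx, profile in enumerate(product((0, 1), repeat=3), start=1)
}

# Under this ordering, class 4 masters the structural and semantic
# dimensions but not the surface one, and class 8 masters all three.
print(profiles[4])  # {'surface': 0, 'structural': 1, 'semantic': 1}
print(profiles[8])  # {'surface': 1, 'structural': 1, 'semantic': 1}
```

Under this assumed ordering, class 4 corresponds to mastery of the structural and semantic dimensions without the surface one, and class 8 to mastery of all three, consistent with the two proficiency profiles described above.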
Lastly, the three-stage model of learning progress and the proposed measurement could guide the development of technology-enhanced assessment applications. Current demands for student-centered learning require prompt and precise understanding of student changes, both cognitive and non-cognitive (Lee & Park, 2007). Enhanced knowledge about students is critical to helping students engage in their own learning. For example, one such application might be an in-class response system. If a teacher
Appendix C. Associations between latent class membership and student characteristics.

School year                      Acclimation   Competence   Proficiency   Total
Freshman                               3            4             2           9
Sophomore                              8           20             9          37
Junior                                17           23            12          52
Senior                                10           15            10          35
Total                                 38           62            33         133
χ² (df) = 1.914 (6), p = .927. * p < .05.

Interest in becoming a teacher   Acclimation   Competence   Proficiency   Total
Not interested at all                  5           10             2          17
Not so interested                      9           14             9          32
Somewhat interested                    9           14             9          32
Interested                             5            8             6          19
Very interested                       10           16             7          33
Total                                 38           62            33         133
* p < .05.
Teaching experience              Acclimation   Competence   Proficiency
None                                  33           59            32
Yes, but less than 2 years             5            1             0
Total                                 38           60            32
* p < .05.
Prior coursework                 Acclimation   Competence   Proficiency   Total
None                                  20           22            12          54
Other education courses               16           36            19          71
Instructional Design courses           2            4             2           8
Total                                 38           62            33         133
* p < .05.
References
Agresti, A. (2007). An introduction to categorical data analysis. Hoboken, NJ: Wiley
Inter-Science.
Alexander, P. A. (1997). Mapping the multidimensional nature of domain learning: The interplay of cognitive, motivational, and strategic forces. In M. L. Maehr & P. R. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 213–250). Greenwich, CT: JAI Press.
Alexander, P. A. (2003). The development of expertise: The journey from acclimation to proficiency. Educational Researcher, 32(8), 10–14.
Alexander, P. A. (2004). A model of domain learning: Reinterpreting expertise as a multidimensional, multistage process. In D. Y. Dai & R. J. Sternberg (Eds.), Motivation, emotion, and cognition: Integrative perspectives on intellectual functioning and development (pp. 273–298). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.
Alexander, P. A., & Murphy, P. K. (1998). Profiling the differences in students' knowledge, interest, and strategic processing. Journal of Educational Psychology, 90, 435–447.
Alexander, P. A., Sperl, C. T., Buehl, M. M., Fives, H., & Chiu, S. (2004). Modeling domain learning: Profiles from the field of special education. Journal of Educational Psychology, 96(3), 545–557. doi:10.1037/0022-0663.96.3.545.
Anzai, Y., & Yokoyama, T. (1984). Internal models in physics problem solving. Cognition and Instruction, 1, 397–450.
Biggs, J., & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York: Academic Press.
Bogard, T., Liu, M., & Chiang, Y. V. (2013). Thresholds of knowledge development in complex problem solving: A multiple-case study of advanced learners' cognitive processes. Educational Technology Research and Development, 61(3), 465–503. <http://doi.org/10.1007/s11423-013-9295-4>.
Bozard, J. L. (2010). Invariance testing in diagnostic classification models (Unpublished master's thesis). University of Georgia.
Bransford, J. D., Barclay, J. R., & Franks, J. J. (1972). Sentence memory: A constructive versus interpretive approach. Cognitive Psychology, 3, 193–209.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.), (2000). Learning and transfer. In How people learn: Brain, mind, experience, and school (pp. 31–78). Washington, DC: National Academy Press.
Bransford, J. D., & Franks, J. J. (1972). The abstraction of linguistic ideas. Cognitive Psychology, 2, 331–350.
Fodor, J. A., Bever, T. G., & Garrett, M. F. (1974). The psychology of language: An
introduction to psycholinguistics and generative grammar. New York: McGraw-Hill.
Gentner, D., & Medina, J. (1998). Similarity and the development of rules. Cognition, 65, 263–297.
Gijbels, D., Dochy, F., Van den Bossche, P., & Segers, M. (2005). Effects of problem-based learning: A meta-analysis from the angle of assessment. Review of Educational Research, 75(1), 27–61.
Glaser, E. M., Abelson, H. H., & Garrison, K. N. (1983). Putting knowledge to use: Facilitating the diffusion of knowledge and the implementation of planned change (1st ed.). San Francisco: Jossey-Bass.
Gobet, F., & Wood, D. (1999). Expertise, models of learning and computer-based tutoring. Computers & Education, 33(2–3), 189–207.
Goldsmith, T. E., & Kraiger, K. (1997). Applications of structural knowledge assessment to training evaluation. In J. K. Ford (Ed.), Improving training effectiveness in organizations (pp. 73–95). Mahwah, NJ: Lawrence Erlbaum Associates.
Goodman, C. M. (1987). The Delphi technique: A critique. Journal of Advanced Nursing, 12(6), 729–734.
Greeno, J. G. (1989). Situations, mental models, and generative knowledge. In D. Klahr & K. Kotovsky (Eds.), Complex information processing (pp. 285–318). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Grow-Maienza, J., Hahn, D., & Joo, C. (2001). Mathematics instruction in Korean primary schools: Structures, processes, and a linguistic analysis of questioning. Journal of Educational Psychology, 93(2), 363–376.
Guadagnoli, E., & Velicer, W. F. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), 265–275.
Hatano, G., & Inagaki, K. (1994). Young children's naive theory of biology. Cognition, 50(1–3), 171–188. doi:10.1016/0010-0277(94)90027-2.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Taylor & Francis Group.
Heinen, T. (1996). Latent class and discrete latent trait models: Similarities and differences. Thousand Oaks, CA: Sage Publications.
Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74(2), 191–210.
Holyoak, K. J., & Koh, K. (1987). Surface and structural similarity in analogical transfer. Memory & Cognition, 15, 332–340.
Hsu, C., & Sandford, B. (2007). The Delphi technique: Making sense of consensus.
Practical Assessment Research & Evaluation, 12(10).
Ifenthaler, D. (2006). Diagnose lernabhängiger Veränderung mentaler Modelle. Entwicklung der SMD-Technologie als methodologische Verfahren zur relationalen, strukturellen und semantischen Analyse individueller Modellkonstruktionen. Freiburg: Universitäts-Dissertation.
Ifenthaler, D. (2010). Relational, structural, and semantic analysis of graphical representations and concept maps. Educational Technology Research & Development, 58(1), 81–97. doi:10.1007/s11423-008-9087-4.
Ifenthaler, D., Masduki, I., & Seel, N. M. (2009). The mystery of cognitive structure and how we can detect it: Tracking the development of cognitive structures over time. Instructional Science. doi:10.1007/s11251-009-9097-6.
Ifenthaler, D., & Seel, N. M. (2005). The measurement of change: Learning-dependent progression of mental models. Technology, Instruction, Cognition, and Learning, 2(4), 321–340.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language,
inference, and consciousness. Cambridge: Cambridge University Press.
Johnson-Laird, P. N. (2005a). Mental models and thoughts. In K. J. Holyoak (Ed.), The Cambridge handbook of thinking and reasoning (pp. 185–208). Cambridge University Press.
Johnson-Laird, P. N. (2005b). The history of mental models. In K. I. Manktelow & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 179–212). Psychology Press.
Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology Research and Development, 45(1), 65–94.
Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology Research and Development, 48(4), 63–85.
Jonassen, D. H. (2014). Assessing problem solving. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 269–288). New York: Springer. Retrieved January 17, 2015, from: <http://link.springer.com/chapter/10.1007/978-1-4614-3185-5_22>.
Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge: Techniques for representing, conveying, and acquiring structural knowledge. Hillsdale, NJ: Lawrence Erlbaum Associates.
Kaplan, D. (2008). An overview of Markov chain methods for the study of stage-sequential development processes. Developmental Psychology, 44(2), 457–467.
Katz, J. J., & Postal, P. M. (1964). An integrated theory of linguistic descriptions. Cambridge: M.I.T. Press.
Kieras, D. E., & Bovair, S. (1984). The role of a mental model in learning to operate a device. Cognitive Science, 8, 255–273.
Kim, M. (2012a). Cross-validation study on methods and technologies to assess mental models in a complex problem-solving situation. Computers in Human Behavior, 28(2), 703–717. doi:10.1016/j.chb.2011.11.018.
Kim, M. (2012b). Theoretically grounded guidelines for assessing learning progress: Cognitive changes in ill-structured complex problem-solving contexts. Educational Technology Research and Development, 60(4), 601–622. doi:10.1007/s11423-012-9247-4.
Kim, M. (2013). Concept map engineering: Methods and tools based on the semantic relation approach. Educational Technology Research and Development, 61(6), 951–978. doi:10.1007/s11423-013-9316-3.
Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85(5), 363–394. doi:10.1037/0033-295X.85.5.363.
Kitchner, K. S. (1983). Cognition, metacognition, and epistemic cognition: A three-level model of cognitive processing. Human Development, 4, 222–232.
Lee, J., & Park, O. (2007). Adaptive instructional systems. In J. M. Spector, M. D. Merrill, J. van Merrienboer, & M. P. Driscoll (Eds.), Handbook of research for educational communications and technology (pp. 469–484). Routledge/Taylor & Francis Group.
Martin, R. A., Velicer, W. F., & Fava, J. L. (1996). Latent transition analysis to the stages of change for smoking cessation. Addictive Behaviors, 21(1), 67–80.
Mayer, R. E. (1989). Models for understanding. Review of Educational Research, 59(1), 43–64.
McKeown, J. O. (2009). Using annotated concept map assessments as predictors of
performance and understanding of complex problems for teacher technology
integration (Unpublished doctoral dissertation). Tallahassee, FL: Florida State
University.
Monge, P. R., & Contractor, N. S. (2003). Theories of communication networks. New
York: Oxford University Press.
Narayanan, V. K. (2005). Causal mapping: An historical overview. In V. K. Narayanan & D. J. Armstrong (Eds.), Causal mapping for research in information technology (pp. 1–19). Hershey, PA: Idea Group Publishing.
Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice Hall.
Norman, D. (1983). Some observations on mental models. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 7–14). Hillsdale, NJ: Erlbaum.
Novak, J. D., & Cañas, A. J. (2006). The origins of the concept mapping tool and the continuing evolution of the tool. Retrieved May 6, 2012, from: <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.3382>.
Okoli, C., & Pawlowski, S. D. (2004). The Delphi method as a research tool: An example, design considerations and applications. Information & Management, 42(1), 15–29. doi:10.1016/j.im.2003.11.002.
Opfer, J. E., & Siegler, R. S. (2004). Revisiting preschoolers' living things concept: A microgenetic analysis of conceptual change in basic biology. Cognitive Psychology, 49, 301–332.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.), (2001). Knowing what students
know. Washington, DC: National Academy Press.
Piaget, J. (1964). Development and learning. In R. E. Ripple & V. N. Rockcastle (Eds.), Piaget rediscovered (pp. 7–20). Ithaca, NY: Cornell University.
Pirnay-Dummer, P. (2006). Expertise und Modellbildung: MITOCAR. Freiburg: Universitäts-Dissertation.
Pirnay-Dummer, P., & Ifenthaler, D. (2010). Automated knowledge visualization and assessment. In D. Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge. New York: Springer.
Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. (2010). Highly integrated model assessment technology and tools. Educational Technology Research & Development, 58(1), 3–18. doi:10.1007/s11423-009-9119-8.
Pretz, J. E., Naples, A. J., & Sternberg, R. J. (2003). Recognizing, defining, and representing problems. In J. E. Davidson & R. J. Sternberg (Eds.), The psychology of problem solving (pp. 3–30). New York: Cambridge University Press.
Rost, J., & Langeheine, R. (Eds.), (1997). Applications of latent trait and latent class models
in the social sciences. New York: Waxmann.
Rupp, A. A., Sweet, S., & Choi, Y. (2010a). Modeling learning trajectories with epistemic
network analysis: A simulation-based investigation of a novel analytic method for
epistemic games. Presented at the annual meeting of the International Society
for Educational Data Mining (EDM), Pittsburgh, PA.
Rupp, A. A., Templin, J. L., & Henson, R. A. (2010b). Diagnostic measurement: Theory,
methods, and applications. New York: The Guilford Press.
Schlomske, N., & Pirnay-Dummer, P. (2008). Model based assessment of learning dependent change during a two semester class. In Kinshuk, D. Sampson, & M. Spector (Eds.), Proceedings of IADIS International Conference Cognition and Exploratory Learning in Digital Age 2008 (pp. 45–53). Freiburg, Germany.
Schraw, G., Dunkle, M. E., & Bendixen, L. D. (1995). Cognitive processes in well-defined and ill-defined problem solving. Applied Cognitive Psychology, 9, 1–16.
Schvaneveldt, R. W., Durso, F. T., Goldsmith, T. E., Breen, T. J., & Cooke, N. M. (1985). Measuring the structure of expertise. International Journal of Man-Machine Studies, 23, 699–728.
Seel, N. M. (1999). Semiotics and structural learning theory. Journal of Structural Learning and Intelligent Systems, 14(1), 11–28.
Seel, N. M. (2001). Epistemology, situated cognition, and mental models: Like a bridge over troubled water. Instructional Science, 29(4–5), 403–427.
Seel, N. M. (2003). Model-centered learning and instruction. Technology, Instruction, Cognition, and Learning, 1(1), 59–85.