

Published in: Am. J. Phys. 55 (5), May 1987, pp 440-454.

Toward a modeling theory of physics instruction


David Hestenes
Department of Physics, Arizona State University, Tempe, Arizona 85287

An analysis of the conceptual structure of physics identifies essential factual and procedural knowledge which is not explicitly formulated and taught in physics courses. It leads to the conclusion that mathematical modeling of the physical world should be the central theme of physics instruction. There are reasons to believe that traditional methods for teaching physics are inefficient and that substantial improvements in instruction can be achieved by a vigorous program of pedagogical research and development.

I. WHO NEEDS A THEORY OF INSTRUCTION?


The generally unsatisfactory outcome of instruction in introductory physics is too
familiar to require documentation. Blame is usually placed on poor prior training
in mathematics and science. However, cognitive research in the last decade has
documented serious deficiencies in traditional physics instruction. There is reason
to doubt that these deficiencies can be eliminated without extensive pedagogical
research and development.
Pedagogical theory is generally held in low esteem by university
scientists. But their own practices show how sorely it is needed. They practice in
the classroom what they would never tolerate in the laboratory. In the laboratory
they are keen to understand the phenomena and critically evaluate reasonable
alternative hypotheses. But their teaching is guided by unsubstantiated beliefs
about students and learning which are often wrong or partial truths at best. This
kind of behavior would be as disastrous in the laboratory as it is in the classroom.
Why don’t they evaluate their teaching practices with the same critical standards
they apply to scientific research?
Although deficiencies in physics instruction are most serious at the
introductory level, there is no reason to believe that they are insignificant at
higher levels. To be sure, some excellent physicists emerge from our graduate
programs. But symptoms of a problem are easy to find in the frequent jokes and
lamentations by faculty over the poor performance of graduate students,
especially on oral exams where students are supposed to demonstrate a coherent
understanding of their subject. The possibility that this might be a consequence of
deficient instruction seems never to occur to the faculty. In the absence of
evidence you can believe what you like. The fact that, with sufficient time and
effort, some students learn physics in our universities should not make us
complacent. The question is not whether students can learn physics, but whether
instruction can be designed to help them learn it more efficiently.
Most physics professors take their teaching seriously, so it seems strange
that they have not promoted the kind of coherent research program to improve
teaching which they know is essential to the development of physics. My purpose
in this article is to discuss what is needed to get such a program started. I aim to
formulate the rudiments of an instructional theory in sufficient detail to serve as a
basis for criticizing current instructional practice and guiding pedagogical
research. And I hope to call attention to related research in cognitive science
which can be expected to contribute to the development of instructional theory.
The ultimate goal of pedagogical research should be to establish a mature
instructional theory which consolidates and organizes a nontrivial body of
knowledge about teaching. Without such a theory, little pedagogical knowledge
can be transmitted between generations of teachers, teachers cannot improve
without repeating mistakes of their predecessors, and only the most capable and
dedicated can progress to teaching with a moderate degree of insight and subtlety.
Such is the situation today. Without the consolidation of physical knowledge in
theory, could we expect any physicists to develop beyond the insights of Galileo?
A theory of instruction must answer two questions: "What are the essentials of the
subject to be taught?" and "How can the essentials be taught effectively?" Section
II of this article offers an answer to the first question from an analysis of the
structure of scientific knowledge. A specific theory of mathematical models and
modeling is outlined for pedagogical purposes. This provides the basis for a
model-centered instructional strategy, which is the heart of the instructional
theory.
For the most part, the modeling theory should appear obvious to
physicists, since it is supposed to provide an explicit formulation of things they
know very well. That does not mean that the theory is trivial or unnecessary.
Much of the knowledge it explicates is so basic and well known to physicists that
they take it for granted and fail to realize that it should be taught to students. A
systematic explication of basic knowledge is an obvious prerequisite to the
development of an instructional program which assures that the basics are
adequately taught. When an instructor takes certain basics for granted and fails to
teach them, the students flounder until they rediscover those basics for themselves
or, more likely, develop inferior alternatives to cope with their difficulties. I
submit that this unfortunate state of affairs is rampant in physics courses and
contributes heavily to their legendary difficulty.
At the risk of purveying jargon, I have introduced some new terminology
to designate important concepts in instructional science. Hopefully, this will
contribute to further sharpening and exploitation of these concepts in the future.
Like any other science, instructional science needs to develop its own specialized
vocabulary and conceptual structure.
As a nontrivial application of the theory, I use it for a systematic
explication of basic knowledge which should be taught in introductory mechanics.
This leads to specific criticisms of current teaching practice. My intent is not to
condemn, but to ascertain how teaching can be improved. If I seem to be
articulating the obvious, let it be noted how haphazardly basic knowledge is
taught in physics courses. If my analysis is defective, let that be a point of
departure for improving the theory.
While Sec. II offers an answer to the question about what should be
taught, a satisfactory answer to the question about how it can be taught effectively
is not to be expected without extensive pedagogical research. To help guide and
stimulate the necessary research, Sec. III is devoted to delineating some of the
issues and reviewing relevant facts and ideas from cognitive science and other
sources.

II. THE STRUCTURE OF SCIENTIFIC KNOWLEDGE
Scientific knowledge is of two kinds, factual and procedural. The factual
knowledge consists of theories, models, and empirical data interpreted (to some
degree) by models in accordance with theory. A theory is to be regarded as
factual, rather than hypothetical, because the laws of the theory have been
corroborated, though theories differ in range of application and degree of
corroboration. The procedural knowledge of science consists of strategies, tactics,
and techniques for developing, validating, and utilizing factual knowledge. This
rather vague and disorganized system, or some part of it, is commonly referred to
as the scientific method.
Factual knowledge is presented in science textbooks in a fairly explicit
and orderly fashion, though rather haphazardly, with frequent logical gaps and
hidden assumptions. However, the usual textbook treatment of procedural
knowledge is almost totally inadequate, consisting of little more than platitudes
about the power of scientific method and off-hand remarks about problem
solving. Students are left to discover essential procedural knowledge for
themselves by struggling with practice problems and observing the performance
of professors and teaching assistants. This is as difficult for students as it has been
for the philosophers, who have failed to give an adequate account of scientific
method. No wonder that so many students fail in this endeavor. But the fact that
so many succeed testifies to the widespread creative powers of the human
intellect.
To teach procedural knowledge efficiently, we need a theory to organize
it. This will depend on how we characterize the structure of factual knowledge.
Scientists generally agree that such structure is supplied by models and theories,
but satisfactory definitions of the terms "model" and "theory" are not to be found
in standard physics textbooks, and few scientists could supply them. For most
scientists their meanings are derived from familiarity with a large collection of
examples. But we can hardly hope to impart clear concepts of "model" and
"theory" to students, who lack the background of scientists, unless we can
characterize these concepts explicitly. Our first task in this section will be to
supply such a characterization. The results will help us identify logical gaps and
tacit assumptions in conventional textbooks which surely leave students confused.
More important, we shall see that the concept of theory presupposes the concept
of model. This leads us to the identification of model development and
deployment as the main activities of scientists, and thus provides the key to a
coherent theory of procedural knowledge in science.
The second task in this section will be to explicate the principles and
techniques of modeling for pedagogical purposes. I will be specifically concerned
with applications to the teaching of introductory mechanics, since that is where
physics instruction usually begins. But I aim to formulate general modeling
principles applicable to every branch of physics, indeed, to every branch of
science.
Our discussion of models, theories and modeling in this section hits only
the highlights of greatest pedagogical interest. A more detailed discussion is given
in Ref. 1, from which the main ideas in this section were taken. A valuable
analysis of knowledge structure in mechanics from a closely related point of view
is given by Reif and Heller.2

A. Model

A model is a surrogate object, a conceptual representation of a real thing. The models in physics are mathematical models, which is to say that physical
properties are represented by quantitative variables in the models.
A mathematical model has four components:
(1) A set of names for the object and agents that interact with it, as well as
for any part of the object represented in the model.
(2) A set of descriptive variables (or descriptors) representing properties of
the object.
(3) Equations of the model, describing its structure and time evolution.
(4) An interpretation relating the descriptive variables to properties of some
object which the model represents.
There are three types of descriptors: object variables, state variables, and
interaction variables.
Object variables represent intrinsic properties of the object. For example,
mass and charge are object variables for an electron, while moment of inertia and
specifications of size and shape are object variables for a rigid body. The object
variables have fixed values for a particular object, but they are still variables
from the viewpoint of modeling theory, since their values differ from one object to another.
State variables represent intrinsic properties with values which may vary
with time. For example, position and velocity are state variables for a particle. A
descriptor regarded as a state variable in one model may be regarded as an object
variable in another model. Mass, for example, is a state variable in a particle
model of a rocket, though it is constant in most particle models. Thus, object
variables can be regarded as state variables with constant values.
An interaction variable represents the interaction of some external object
(called an agent) with the object being modeled. The basic interaction variable in
mechanics is the force vector. Work, potential energy, and torque are alternative
interaction variables.
In particle mechanics, the equations of a model typically consist of
equations of motion (dynamical equations) for each particle in the model and
possibly equations of constraint describing certain kinds of interaction. For some
purposes it is convenient to replace the equations of motion by conservation laws
relating state variables at different times. This gives an alternative representation
of the object, but it is not a different model unless the specified conservation laws
contain less information than the equations of motion, as is sometimes the case. In
the equations of a model the "internal interaction variables," describing
interactions among parts of a composite object, are expressed as functions of the
state variables, so they are dependent variables which can be eliminated
mathematically. Nevertheless, they are essential to the interpretation of the model.
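To make the four components and the three descriptor types concrete, here is a minimal sketch in Python (my own illustration, not part of the original article) of a particle model of a ball thrown straight up near the Earth's surface; the names, numerical values, and interpretation comments are illustrative choices, not prescribed by modeling theory.

```python
# A minimal particle model of a ball thrown vertically near the Earth's surface.
# Illustrative only: the names, values, and interpretations are one possible choice.

# (1) Names: the object and the agent that interacts with it.
names = {"object": "ball", "agent": "Earth (gravity)"}

# (2) Descriptive variables.
object_variables = {"m": 0.15}                      # mass in kg: fixed for this ball
state_variables = {"x": 0.0, "v": 20.0}             # position (m) and velocity (m/s): vary in time
interaction_variables = {"F": lambda m: -9.8 * m}   # weight: force of Earth on ball (N)

# (3) Equations of the model: Newton's second law with the gravitational force law,
#     integrated here by a simple Euler step.
def step(x, v, m, dt):
    F = interaction_variables["F"](m)
    a = F / m
    return x + v * dt, v + a * dt

# (4) Interpretation: x is the ball's height above the launch point, v its upward
#     velocity, F the gravitational force exerted by the agent (Earth) on the ball.
x, v = state_variables["x"], state_variables["v"]
m = object_variables["m"]
for _ in range(100):            # 1 s of simulated motion in 0.01 s steps
    x, v = step(x, v, m, 0.01)
print(f"after 1 s: x = {x:.2f} m, v = {v:.2f} m/s")
```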
Interpretations are treated so casually in physics textbooks that one should
not be surprised to find them muddled by students. Indeed, a common practice
among physicists and mathematicians is to identify the equations of a model with
the model itself. This, of course, takes the interpretation of the model for granted,
which may be okay for experienced scientists, though the interpretation is not
infrequently a serious bone of contention. But students need to recognize the
interpretation as a critical component of a model. Without an interpretation the
equations of a model represent nothing; they are merely abstract relations among
mathematical variables. Undoubtedly, this is how the equations often appear to
confused physics students, who have not developed the ability of the instructor to
supply an interpretation automatically.

B. Theory
A scientific theory can be regarded as a system of design principles for modeling
real objects. This viewpoint makes it clear that the concept of theory presupposes
the concept of model. Indeed, a scientific theory can be related to experience only
by using it to construct specific models which can be compared with real objects.
The laws of a theory can be tested and validated only by testing and validating
models derived from the theory.
A scientific theory has three major components:
(I) A framework of generic and specific laws characterizing the descriptive
variables of the theory.
(II) A semantic base of correspondence rules relating the descriptive
variables to properties of real objects.
(III) A superstructure of definitions, conventions and theorems to facilitate
modeling in a variety of situations.

The framework determines the structure of the theory, while the semantic
base determines the interpretation of the theory and any models derived from it.
The framework and semantic base are essential components of the theory, and any
significant change in them produces a new theory. However, the superstructure is
subsidiary, growing and changing with new applications of the theory.
The concept of a scientific law is widely recognized as the key concept in a
scientific theory, yet textbooks rarely attempt to define it, or even distinguish
clearly between the different types of law. A scientific law is a relation among
descriptive variables which is presumed to represent a relation among properties
of real objects, because it has been validated in some empirical domain by the
testing of models. Most of the laws of physics are expressed as mathematical
equations. The laws of a theory are either basic or derived. The basic laws, such
as Newton’s laws of motion, are independent assumptions in the framework of the
theory. The derived laws, such as the Work-Energy Theorem and Galileo’s law of
falling bodies, are theorems in the superstructure of the theory.
Generic laws define the basic descriptive variables of the theory. Generic
laws apply to every model derived from the theory, whereas specific laws apply
only under special conditions. Newton’s three laws of motion are generic laws of
classical mechanics defining the basic variables mass and force. Unfortunately,
textbooks give the false impression that these are the only generic laws of
mechanics, and they fail to point out that Newton’s formulation is insufficient to
define the concept of force completely. A complete formulation and analysis of
the generic laws of mechanics is given in Ref. 1. I will not go into such detail
here, because beginning students are not equipped to appreciate it. However, a
thorough analysis helps identify serious deficiencies in the conventional
formulation which are likely to cause students difficulty.
Newton’s laws are often paraphrased in the textbooks to make them more
intelligible to students, but the deficiencies in Newton’s original formulation are
retained. First, the laws fail to explicitly state that every force has an agent, that
every force is a binary function describing the action of an agent on an object. The
seriousness of this deficiency is shown by empirical evidence3,4 that the majority
of students hold the "impetus belief” that a force can be imparted to an object and
act on it independently of any agent; moreover, few students change this belief
after instruction in mechanics. Second, Newton's formulation speaks of the force
on a body rather than a particle. A rigorous formulation begins with forces on and
by particles and later defines the force on a body as the sum of forces on its
particles. Most textbooks do not make it clear that Newton's second law cannot be
applied directly to a body unless the body is modeled as a particle. This blurs the
distinction between Newton's second law and the center of mass theorem. Surely,
the blurred distinction between body and particle contributes to the difficulties
students have in identifying the location of forces acting on an extended body. A
third deficiency is Newton's failure to state the force superposition principle as a
separate law, because he mistakenly believed it could be derived from his other
laws. The superposition principle is sometimes stated as part of the second law,
but it is so important it deserves separate billing. There are more subtle difficulties
with the formulation of Newton's laws which I need not go into here, because they
are of lesser pedagogical interest.
Besides deficiencies in the formulation of Newton's Laws, the whole set of
laws is incomplete in two important respects: First, the basic kinematical laws
defining the concepts of position, time and motion are not explicitly formulated.
Second, the logical status of specific force laws, like Newton's law of gravitation,
is not sharply delineated. Students are not informed that the concept of a force law
is an essential part of the concept of a force. Incompleteness in the formulation of
the basic laws of mechanics is a matter of pedagogical concern. For how are
students to distinguish basic concepts and laws from derived concepts and laws?
How are they to distinguish the essential from the peripheral? How are they to
identify discrepancies between their own beliefs and scientific concepts if the
latter are not sharply delineated?
Instead of merely listing Newton's laws, as is usually done, it would be
better to classify the laws according to their roles in the theory. The laws of
mechanics are of three types: kinematical, dynamical, and interaction laws. This
classification applies to basic as well as derived laws. And it has the added
advantage of wide applicability outside of mechanics. Awareness of the
classification should help guide the students in applying the laws.
The basic kinematical laws define the concepts of physical space, time,
reference frame, particle, position, and trajectory. Newton's first law belongs to
this class, because it defines inertial frames by distinguishing them from
accelerated reference frames. Aside from this law, textbooks introduce the other
kinematical laws informally and unsystematically without identifying the laws. It
is all very well to teach kinematics informally, but students need help to
distinguish between physical laws and mere mathematical formulas. Why not let
them know that the familiar Pythagorean Theorem applied to physical space is a
physical law, because it specifies a relation between independent measurements
of length? This prepares them for the idea of curved space in Einstein's general
theory. Why don't the textbooks identify the velocity addition theorem (relating
velocities in different reference systems) as a derived kinematical law? That
would prepare students for the eventual realization that the law is only
approximately true according to Einstein's special theory. Without formulating the
basic kinematical laws of classical physics explicitly, how are students to
appreciate that Einstein's special and general theories of relativity are both
modifications of those laws? If this is deemed to be too esoteric for an
introductory physics class, let it be asked, "To what degree is the confusion of
students on relations between physical descriptions in different reference systems
the result of insufficient specifications of the physical laws involved?"
Dynamical laws determine the time evolution of state variables in models.
The basic dynamical law of mechanics is, of course, Newton’s second law. But
there are many other derived dynamical laws that apply under special conditions,
e.g., the laws of energy, momentum and angular momentum conservation. Other
dynamical laws apply to special model types. For example, two dynamical laws
are needed to characterize rigid body motion: the center-of-mass theorem to
characterize translational motion, and the torque-angular momentum theorem to
characterize rotational motion.
The basic interaction laws of Newtonian mechanics include Newton’s third
law and the force superposition law as well as a variety of specific force laws,
such as Newton’s law of gravitation. Conservative interactions can alternatively
be characterized by potential energy functions. Sometimes interactions are
expressed by equations of constraint. Students could benefit from a complete and
systematic classification and description of interactions and interaction laws, with
emphasis on the agents for each interaction type and conditions under which the
interactions are significant. Research3,4 shows that, under conventional
instruction, students are slow to master interaction concepts, perhaps because they
are not clear about what to master.
I. Description Stage
   Object description: model type; object variables.
   Motion description: reference system; motion variables (basic or derived).
   Interaction description: agent and type; interaction variables (basic or derived).

II. Formulation Stage
   Motion laws and interaction laws applied to the object yield the abstract model:
   descriptive variables, equations of motion, equations of constraint, initial conditions.

III. Ramification Stage
   Ramified model: trajectories; energy gain or loss.

IV. Validation Stage

Fig. 1. Model development in mechanics.

C. Modeling and problem solving
The cognitive process of applying the design principles of a theory to produce a
model of some physical object or process is called model development or simply
modeling. A strategy for model development in mechanics is outlined
schematically in Fig. 1. The strategy coordinates the application of scientific and
mathematical knowledge to the modeling of physical objects and processes. It
subdivides the model development process into four major stages to be
implemented successively, as indicated in Fig. 1.
The implementation of each substage is directed by special modeling tactics for
the particular kind of model being developed.
The modeling strategy outlined in Fig. 1 is obvious to physicists, since
they have learned to follow it automatically in the analysis of physical situations
and problems. Indeed, Fig. 1 may be regarded as an outline of essential steps in
the modeling process instead of a prescribed strategy. However, since each step is
essential to modeling, the prescribed strategy must be followed, though there is
some leeway in the order in which the steps are taken and back-tracking is often
necessary. The physicist has learned the modeling strategy from long experience,
and beginning physics students will flounder until they learn it themselves. The
teaching of explicitly formulated modeling strategies and tactics should accelerate
the learning of effective modeling skills.
I submit that problem solving in physics is primarily a modeling process.
Accordingly, I propose the modeling strategy of Fig. 1 as a general problem
solving strategy to be taught explicitly to physics students. To understand how the
strategy applies, we need to see how it coordinates specific modeling tactics and
techniques. With that objective, let us discuss the four stages of modeling—(I)
Description, (II) Formulation, (III) Ramification, and (IV) Validation—in the
order of their implementation.
(I) The Description Stage is severely constrained by our choice of
mechanics as the theory to be applied, for the theory specifies what kind of
objects and properties can be modeled. Note that the three components of the
descriptive stage correspond to the three types of descriptive variables. The main
output of the description stage is a complete set of names and descriptive variables
for the model, along with physical interpretations for all the variables.
The object description requires a decision as to the type of model to be
developed. For example, a given solid object could be modeled as a material
particle, a rigid body, or an elastic solid. The theory provides special modeling
principles and techniques for each different model type.
In a motion description (Fig. 1) the state variables of the model are
specified. The state variables may be either basic or derived. Basic variables are
defined implicitly by generic laws of the theory, while derived variables are
defined explicitly in terms of basic variables. In mechanics, particle positions and
velocities are the basic state variables, while center-of-mass position, kinetic
energy, momentum, and angular momentum are derived variables. To determine
the optimal choice of state variables is a tactical problem whose solution depends
on the type of process being modeled. For example, position and velocity
variables are usually best for projectile motion, while momentum and kinetic
energy are usually best for collisions. Note that the state variables are not well-
defined without (tacitly, at least) invoking kinematical laws and specifying a
reference frame.
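For instance, a one-dimensional perfectly inelastic collision is most simply described with momentum as the state variable. The following sketch (a Python illustration of mine with made-up numbers, not from the article) shows the tactic at work.

```python
# A perfectly inelastic 1-D collision described with momentum variables.
# Illustrative values; the point is the choice of descriptors, not the numbers.
m1, v1 = 2.0, 3.0    # kg, m/s
m2, v2 = 1.0, -1.0   # kg, m/s

p_total = m1 * v1 + m2 * v2      # momentum is conserved through the collision
v_final = p_total / (m1 + m2)    # the bodies move together afterwards

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2
print(f"final velocity: {v_final:.2f} m/s")
print(f"kinetic energy lost: {ke_before - ke_after:.2f} J")
```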

Besides specifying the state variables, a motion description characterizes
the motion as a whole, at least qualitatively, and specifies any known values of
the state variables. A variety of modeling techniques have been developed for this
purpose, including tables, maps, and graphs. For pedagogical purposes, motion
maps, such as those in Fig. 2, deserve special emphasis.

[Fig. 2 tabulates, for each principal kinematical model type, its defining equations and their solutions in vectorial and coordinate form, together with a representative motion map; only the equations are reproduced below.]

Uniform velocity (a = 0):
  Defining: v = v0 (constant).
  Solution: r = r0 + vt; in coordinates, x = x0 + vt.

Uniform acceleration, one-dimensional (a × v = 0):
  Defining: a = a0 (constant), collinear with v.
  Solution: v = v0 + at, ∆r = v0 t + (1/2)at²;
  in coordinates, v = v0 + at, x = x0 + v0 t + (1/2)at², v² = v0² + 2a(x − x0).

Uniform acceleration, two-dimensional (a × v ≠ 0):
  Defining: a = a0 (constant), not collinear with v.
  Solution: v = v0 + at, ∆r = v0 t + (1/2)at²;
  in coordinates (with a along the y axis), vx = v0x, x = x0 + v0x t,
  vy = v0y + at, y = y0 + v0y t + (1/2)at², vy² = v0y² + 2a(y − y0).

Uniform circular motion (a · v = 0, with |r| = r and |v| = v constant):
  Defining: a = −ω²r, v = ωr, ω = dθ/dt.
  Solution: θ = ωt + θ0, r = r(i cos θ + j sin θ), v = v(−i sin θ + j cos θ).

Simple harmonic motion (a × v = 0):
  Defining: a = −ω²r; in coordinates, a = −ω²x.
  Solution: r = A cos(ωt + δ); in coordinates, x = A cos(ωt + δ),
  v = −ωA sin(ωt + δ) = −V sin(ωt + δ), with V = ωA.

Fig. 2. Kinematic models.

A motion map for a
particle is a diagram of its trajectory in position space, with vector or scalar labels
for kinematical variables only. The vectors indicate velocity, acceleration, and
position (as appropriate) at critical and typical times, such as the beginning, end
and middle of the trajectory. As Heller and Reif have emphasized,5 forces do not
belong on a motion map, else they get confused with the kinematical variables.
Of course, an accurate motion map cannot usually be drawn until the
equations of motion have been solved, so the motion description may have to
remain incomplete until then. Just the same, a qualitative description based on the
kinematical assumption (actually, law) that motion is continuous is possible in the
initial modeling stage.
In an interaction description, each agent acting on the object is identified
along with the type of interaction. Then interaction variables are introduced to
represent the interactions, and features of the interaction are described
qualitatively using diagrammatic techniques. The appropriate technique depends on
the model and interaction type. Force diagrams are appropriate when interactions
are represented by forces. But energy diagrams are appropriate when interactions
are represented by potentials (derived interaction variables). However, in contrast
to a force diagram, an energy diagram cannot be drawn until the interaction law
has been specified, so it is less useful for an initial description. For pedagogical
purposes, interaction maps deserve special emphasis, as Reif and Heller have
shown. An interaction map describes the forces acting on a particle at key points
on its trajectory. It is like a motion map, except that force diagrams are drawn at
the key points instead of kinematical diagrams indicating velocity and
acceleration. By comparing an interaction map with the corresponding motion
map, students can check for agreement between resultant force and acceleration
vectors. This is a valuable check for consistency between their motion and
interaction descriptions, as well as an important test of basic understanding.
Introductory textbooks are liberally decorated with diagrams, but they fail
to convey to students the essential role of diagrams in problem solving or, indeed,
to distinguish the roles of different kinds of diagram. It is true that an expert
sometimes solves problems without using diagrams, but then the information in a
diagram must be given an equivalent representation in the expert’s head. Students
need to deal with the explicit representation of information in diagrams, because
they have not developed the necessary "physical intuition" to get along without
them. Indeed, practice in constructing and interpreting diagrams of various kinds
probably contributes greatly to the development of physical intuition.
The purpose of a map in modeling is to represent geometrical relations
among objects in the model, or kinematic features of a trajectory in a motion map.
Labels for geometrical variables like distances, angles, directions, and position
coordinates are integral parts of a map, because they specify the physical referents
or, if you will, the physical interpretation of symbols that appear in the
mathematical equations of the model.
Forces should be represented on a map by arrows with tails attached to the
points where the forces act. Unfortunately, introductory physics and engineering
textbooks often disregard this important convention. Sometimes they place a force
arrowhead at the point of application to indicate a "push" instead of a "pull." This
little bit of anthropomorphism only makes it more difficult for students to
distinguish their sensory perceptions of contact with objects from the objective
physical concept of force. Moreover, to develop a clear conception of a force
field, it is essential to associate each force with a particular point at which it acts.
This is reason enough to teach students to associate the tail of a force vector with
a point of application from the beginning.
To write the equations of motion for a material object, one must
conceptually separate the object from its environment. The construction of "free-
body diagrams" helps students learn to do this. Such a diagram represents only the
forces on a body and the points at which they act. It should be distinguished from
a force diagram which represents forces alone. Unfortunately, many textbooks
fail to do this. They draw a full free-body diagram for some rigid body such as a
block on an inclined plane; then they write down equations for a point particle
model of the body, without even mentioning that, in doing so, they have ignored
information on the diagram about where the forces act. This is a good opportunity
to emphasize to students that every model is only a partial representation of an
object, sometimes ignoring obvious properties. In this case, the particle model
ignores the size and shape of an object, properties which are later taken into
account in a more complete rigid body model. Accordingly, when a particle
model is employed, the free-body diagram for an extended body should be
reduced to a force diagram, the free-body diagram for a particle.
The pedagogical importance of drawing labeled force diagrams can hardly
be overestimated. To draw such a diagram, the student must first identify the
relevant forces. The labels on the diagram provide a physical interpretation for
symbols in the equations of motion. A complete diagram with tails of the force
vectors at one point provides the guide a student needs to write down the correct
equations of motion. Few students realize the significance of force diagrams
unless it is stressed in instruction.
Our discussion of the description stage has been rather long-winded,
because this is where students have the most difficulty, yet description is the
modeling stage passed over most quickly by textbooks and teachers.
(II) In The Formulation Stage of model development, the physical laws of
motion and interaction are applied to determine definite equations of motion for
the model object and any subsidiary equations of constraint. The passage from
force diagrams to equations of motion is by no means automatic. The main point
to be understood is that f = ma becomes an equation of motion only when the
functional form of f has been specified with one or more specific force laws.
Textbooks and instructors have been known to write down equations of motion
without helping students identify all the assumptions involved. In particular, there
is a tendency to pull equations of constraint "out of a hat." Textbooks usually fail
to explain that equations of constraint in mechanics arise from tacit assumptions
about internal forces. Students should be alerted to this by analyzing particular
examples so they understand that every physical connection in mechanics comes
from forces.
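As a minimal illustration of this point (a Python sketch of mine, not from the article, with assumed values), consider a particle falling with linear air resistance: only after the force law f = mg − bv is specified does f = ma become a definite equation of motion that can be integrated.

```python
# Formulation stage, illustrated: f = m*a is only a schema until a force law is given.
# The force law f(v) = m*g - b*v (weight plus linear drag) turns it into the definite
# equation of motion  m*dv/dt = m*g - b*v, integrated here with simple Euler steps.
m, g, b = 1.0, 9.8, 0.5       # illustrative values: kg, m/s^2, kg/s

def force(v):
    return m * g - b * v      # the specific force law chosen in the formulation stage

v, dt = 0.0, 0.01
for _ in range(2000):         # 20 s of simulated fall
    v += (force(v) / m) * dt

print(f"speed after 20 s: {v:.2f} m/s  (terminal speed m*g/b = {m*g/b:.2f} m/s)")
```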
(III) In The Ramification Stage, the special properties and implications of
the model are worked out. The equations of motion are solved to determine
trajectories with various initial conditions; the time dependence of derived
descriptors such as energy is determined; results are represented analytically and
graphically and then analyzed. A ramification which describes the time evolution
of some descriptive variable, such as energy, can be regarded as a process model.
Let us refer to a model object together with one or more of its main ramifications
as a ramified model.
The ramification process is largely mathematical and textbooks usually
treat it adequately, for example, in analyzing simple harmonic motion as a
ramification of the abstract model for a particle bound by a linear force. Some
general ramification techniques, such as the method of constraint satisfaction, are
discussed by Reif and Heller.2 The main deficiency in textbook treatments is that
ramifications are not clearly identified as such and integrated with the general
modeling process. This contributes to the difficulty students have in recognizing
when a particular ramification is called for.
Figure 2 displays ramifications for the principal kinematical models of
particle mechanics. The models are specified by defining equations in vectorial or
coordinate form. The ramifications include solutions and motion maps as aids to
interpret the solutions. Figure 2 is offered here for general use as an instructional
aid, displaying the most important things students need to know about
ramifications in introductory particle mechanics. I have found it helpful to supply
a copy to each student for quick reference when problem solving in class and
homework. I recommend that students become thoroughly familiar with the
ramified models of each kinematical type in the figure before studying the
associated dynamics. Then they should be required to identify the kinematical
type in every dynamical model until it becomes automatic. Thus, for example,
they should come to recognize that a constant force, whatever its origin, implies
uniform acceleration, so they have its ramifications already in Fig. 2. They should
become so familiar with the contents of the figure that the printed copy is unnecessary at exam time.
(IV) The Validation Stage is concerned with empirical evaluation of the
ramified model. In a textbook problem this may amount to no more than assessing
the reasonableness of numerical results. However, in scientific research it may
involve an elaborate experimental test.
Students frequently fail to realize when the answer to a textbook problem
is unreasonable and have no idea how the answer might be checked. I submit that
a major reason for such failure is that the students are only vaguely aware of the
model underlying their results. They do not realize that the complete solution to a
problem is based on a model from which any numerical answers come as
subsidiary results. It is the whole model which needs to be evaluated when a
solution is checked. As long as students regard the solution as a mere number or
formula, the only way they have to check it is by comparison with an answer key.
The approach I am advocating here is aptly characterized by the slogan
THE MODEL IS THE MESSAGE.6 Students should be taught that the key to
solving a typical physics problem is the development of a model from the given
information. Indeed, the problem cannot be fully understood until the model has
been constructed. Moreover, the information given in a problem is invariably
insufficient even for understanding the problem. It must be supplemented by
theoretical knowledge to construct a model. Thereafter, the problem solution
follows from some ramification of the model. This, I submit, is the core of truth in
the old saying "a problem understood is half solved!"
The modeling strategy I have been discussing provides a coherent
framework for lectures about models throughout a physics course. I recommend
that the instructor explain the relevance to the appropriate modeling stages of
everything he discusses. If he doesn’t know, he may learn something by thinking
about it. He should strive, also, to show students how models are used to
"understand" empirical phenomena. In particular, every lecture demonstration or
experiment should be accompanied by a clear explication of the model or partial
model used to interpret it. THE MODEL IS THE MESSAGE! And it is
worthwhile to compare alternative models to show how empirical evidence is
used to determine which one is "better." DIFFERENT MODELS, DIFFERENT
MESSAGES! Finally, students should be encouraged to employ the modeling
strategy in analyzing what they read in the textbook. They should learn to
recognize in their reading when THE MODEL IS THE MESSAGE.
For problem solving, our modeling strategy needs to be supplemented by
some additional procedural knowledge. Any physics problem can be attacked
with the following general model deployment strategy:
(I) Develop a suitable model of the situation specified by the problem (if
possible).
(II) Ramify the model to generate the desired information (if possible).
This deployment strategy directs model development toward a specific goal. A
physicist possesses a battery of abstract models with ramifications already worked
out or easily generated. He solves many problems routinely by simply selecting a
ramified model from this battery and matching it to the situation in the problem.
For example, once he has identified a problem as a "projectile problem," he is
ready immediately to deploy the ramification for uniform acceleration in Fig. 2 to
solve it. If necessary, he can readily generate further ramifications, such as a
projectile "range formula." For problems like this, the key to model deployment is
simply choosing the right ramified model. For other problems, it is necessary to
develop a model from scratch. To implement the general deployment strategy we
need some deployment tactics:
(1) The attack on a problem begins by extracting the information which can be
used in model development and representing it in some schematic form.
This information is of two types: about objects and their properties or
about processes.
(2) The initial analysis of the problem is completed by formulating the goal in
terms of information about objects or processes to be determined.
(3) From the given information about properties one can determine the
relevant scientific theory and select model types for the objects of interest.
For a problem in mechanics, we can proceed with the model development
process outlined in Fig. 1.
(4) Before generating a model description, one must decide whether to use
basic or derived variables. The best decision depends on specialized
knowledge about the processes in the problem. For example, we know
from experience that momentum variables are most convenient for
describing collision processes.
(5) After a model has been formulated, it should be checked to see if the
specified information is theoretically sufficient to determine the desired
information. At this point it should also be possible to identify any
specified information which is contradictory or irrelevant to the goal.
(6) To get most quickly to the goal, it is often best to select or derive
equations for desired variables from the laws of the model, and then
proceed to solve those equations. The main point here is that, in model
deployment, ramification is directed toward a specific goal, whereas, in
general model development, the purpose of ramification is to explore and
survey implications of the model. Such exploration and survey are the
main source of the specific information needed to guide model
deployment, including decisions as to the best choice of variables in the
descriptive stage.
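To make these tactics concrete, here is a minimal Python sketch (my own illustration, not the article's) of deploying the ready-made uniform-acceleration ramification of Fig. 2 on a routine projectile problem: the desired "range formula" is generated directly from the solved equations of the model. The function name and numerical values are illustrative assumptions.

```python
# Deploying a ramified model: the uniform-acceleration solutions of Fig. 2,
#   x = x0 + v0x*t,  y = y0 + v0y*t - (1/2)*g*t**2,
# are matched to a projectile problem to generate the range on level ground.
import math

def projectile_range(speed, angle_deg, g=9.8):
    """Range on level ground, read off from the already-worked-out ramification."""
    v0x = speed * math.cos(math.radians(angle_deg))
    v0y = speed * math.sin(math.radians(angle_deg))
    t_flight = 2 * v0y / g            # time to return to y = 0
    return v0x * t_flight             # horizontal distance covered in that time

# Illustrative deployment: a 20 m/s launch at 35 degrees.
print(f"range: {projectile_range(20.0, 35.0):.1f} m")
```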
Physicists have learned such modeling strategies and tactics from long
experience. The pedagogical question is whether students can learn them more
quickly and efficiently when they are explicitly formulated and taught.
Problem solving research shows that for routine problems physicists pass
quickly and effortlessly through the descriptive stage of model development.
They have mastered the descriptive process so thoroughly that most of it can be
carried out mentally with only a few overt manifestations, such as a rapidly drawn
diagram. But students frequently fail to complete some part of the descriptive
stage and are consequently unable to complete the modeling needed to solve the
problem. On the other hand, for complex or "tricky" problems the expert typically
spends much more time than the novice on the descriptive stage. The expert does
not move on to the formulation stage until a satisfactory description has been
achieved. Since description is the first stage in modeling, all this suggests that
improved instruction on the descriptive stage in modeling is the most critical step
toward improving student understanding and performance. Empirical evidence in
support of this surmise will be given in a subsequent paper.

D. Modeling theory
Modeling theory is a general theory of procedural knowledge in science. In the
preceding pages modeling theory has been formulated with an eye to specific
applications in mechanics. But we are equally interested in developing a general
theory of instruction applicable to the teaching of any part of physics. To show
that the formulation of modeling theory is easily generalized, the model
development process is outlined in Fig. 3 as a straightforward generalization of
Fig. 1.

I. Description Stage
   Object description: type; composition; object variables.
   Process description: reference system; state variables.
   Interaction description: type and agent; interaction variables.

II. Formulation Stage
   Dynamical laws and interaction laws applied to the object yield the model:
   descriptive variables, equations of change, equations of constraint, boundary conditions.

III. Ramification Stage
   Ramified model: emergent properties; process.

IV. Validation Stage

Fig. 3. General model development.

The formulation of model development schematized in Fig. 3 applies not only to every branch of physics, but to every field of science. Modeling theory
should be regarded as an adjunct of Systems Theory, a general theory of the
structure and function of mathematical models which has been under development
during the last few decades chiefly by engineers and applied mathematicians.
Mario Bunge7 has developed Systems Theory into a general theory of the
structure of science. The applicability of modeling theory to every scientific field
is evident in his work.
Let us note some distinctive features of Fig. 3, especially in comparison
with Fig. 1. The object to be modeled may be a composite object, in other words,
a system composed of more than one object. Accordingly, a description of its
composition must specify the type of each component object as determined by the
relevant scientific theory. The motion description of Fig. 1 has been generalized
to the concept of process description, but still the state variables represent
intrinsic properties of the system which may vary with time. The interaction
description includes a description of the structure of the system by specifying the
internal connections (or interactions) between component objects. The specific
relations of the internal interactions to the state variables are determined by the
relevant theory. Especially important is the determination of emergent properties
in the ramification stage. These are distinctive properties of the system as a whole
which are not properties of any of its component objects. The determination of
emergent properties can be quite difficult for "nonlinear systems."
To be assured that the characterization of model development in Fig. 3 is
not impractically vague, let us consider its application to modeling outside the
domain of mechanics. Its application to modeling a simple electrical circuit is
shown in Fig. 4. Note that classification of the object as an electrical circuit
amounts to a decision that electrical circuit theory is the relevant theory to employ
in modeling it. Circuit theory specifies the kinds of variables to be employed in a
description and tells us that other variables, such as the mass of the object, are
irrelevant. Circuit theory has its own system of special modeling techniques,
including rules for constructing and interpreting circuit diagrams such as the one
in Fig. 4. Note how the diagram relates to the three descriptive substages.

[The circuit diagram shows an inductor L, resistor R, and capacitor C connected in series with an AC generator.]

I. Description
   A. Object description. Type: electric circuit. Composition: inductor, resistor, capacitor.
      Object variables: L, R, C.
   B. Process description (see ramifications). State variables: current I and capacitor charge Q.
   C. Interaction description. Type and agent: AC generator. Internal connections (among
      components): see diagram. External connections: some source of Emf. Interaction
      variable: Emf E.
II. Formulation
   A. Interaction laws: (1) E = E0 sin ωt; (2) potentials across components: L dI/dt, RI, Q/C.
   B. Dynamical laws: Kirchhoff's laws.
   C. Abstract model. Equations of change: L dI/dt + RI + Q/C = E, with I = dQ/dt.
III. Ramifications (some examples): steady-state solution described by a phasor diagram. Emergent
   properties: resonance and tuning.

Fig. 4. Modeling an LRC circuit.
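To suggest how the ramification stage might be carried out for this model, here is a small Python sketch (my own illustration, not part of the article) that computes the steady-state current amplitude implied by the equations of change above; the component values are assumed for illustration, and the resonance peak near ω = 1/√(LC) is the emergent property mentioned in the figure.

```python
# Ramifying the LRC model: steady-state current amplitude versus driving frequency.
# For E = E0*sin(w*t), the equations of change  L*dI/dt + R*I + Q/C = E,  I = dQ/dt
# have the steady-state amplitude  I0 = E0 / sqrt(R**2 + (w*L - 1/(w*C))**2).
import math

L, R, C, E0 = 0.1, 5.0, 1e-4, 10.0    # illustrative component values (SI units)

def current_amplitude(w):
    reactance = w * L - 1.0 / (w * C)
    return E0 / math.sqrt(R**2 + reactance**2)

w_res = 1.0 / math.sqrt(L * C)        # emergent property: the resonance frequency
for w in (0.5 * w_res, w_res, 2.0 * w_res):
    print(f"w = {w:7.1f} rad/s  ->  I0 = {current_amplitude(w):.2f} A")
```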

Circuit theory could be formulated as a self-contained theory for modeling a special class of objects, including applicability and utility conditions for
deciding when the theory is relevant. This would facilitate the use of circuit
theory in model development and deployment. However, in the typical
introductory physics textbook the principles and special techniques of circuit
theory are scattered about and never brought together in a systematic and
complete formulation. This seems to be due to the fact that circuit theory is not a
fundamental theory of physics, so the textbooks are preoccupied with establishing
the basis for circuit theory from more fundamental principles such as the laws of
Ampere and Faraday. To be sure, the derivation of circuit theory principles is of
great importance, because it establishes the limitations as well as the generic
origins of those special principles. However, the derivation should be clearly
separated from the formulation of circuit theory for theoretical reasons as well as
the practical reasons I have already mentioned. Circuit theory provides students
with an excellent example of the level structure of science, where the principles
for modeling objects at one level are self-contained, but derivable from principles
at a more generic level. As Bunge7 shows, elucidation of the level structure of
science is the grand theme of Systems Theory for interrelating all the sciences.
Circuit theory also provides striking examples of emergent properties that appear
in systems of increasing complexity, in particular the properties of resonance and
tuning that appear in ramifications of an LRC circuit.
These observations are meant to suggest that a systematic use of modeling
theory in instruction should help students gain a unified and coherent view of
science. To be sure, modeling theory stands as much in need of development as of
application. And the task is far from trivial, though in large part it will consist of
articulating and organizing well-known ideas.

III. COGNITIVE SCIENCE AND INSTRUCTIONAL THEORY


Most physicists pay scant attention to psychology, but they cannot avoid it in their
teaching. Instructional practice is necessarily grounded in some system of beliefs
about knowing and learning, however tacit or rudimentary. Cognitive psychology
is still too immature to provide a secure foundation for instructional theory, but
the situation is changing. Recent cognitive research has identified serious flaws in
traditional instructional practice and clarified difficult problems that need to be
addressed. My purpose here is to call attention to some of this work and its
pedagogical implications. This extends and updates the discussion in a previous
article,8 which will be taken for granted as background.
The development of a scientific theory of cognitive processes is too
difficult and too important to be left to the psychologists alone. Indeed, it has
already evolved into a multidisciplinary research program called Cognitive
Science.9 The general aim is to produce a comprehensive theory of intelligent
systems, including artificial intelligence on one hand and human intelligence on
the other. The program thus cuts across every field of intellectual endeavor from
computer science to psychology, including the history and philosophy of science
as well as linguistics, mathematics, and the various sciences. The processes of
learning and understanding physics and mathematics have become a focus of
cognitive research, because these subjects are especially clear cut and well
developed. I will emphasize such work because of its special relevance to physics
teaching. But my main concern is to place the development of instructional theory
within the broader program of cognitive science.

A. Problem solving and expert systems


The analysis of cognitive processes in problem solving has emerged as a major
line of cognitive research in the last two decades.10 This includes research on
solving textbook problems in introductory physics.11-15 The research has been
concerned with identifying and documenting empirical differences between expert
and novice problem solving performance, and developing a plausible theory to
account for those differences. Although the expert, of course, possesses much
more factual knowledge than the novice, superior problem-solving performance is
due mainly to procedural knowledge that enables the expert to bring the right
facts and principles to bear on a problem at the right time. To identify and
describe that procedural knowledge precisely is a difficult problem in cognitive
research, for much of it is tacit knowledge which is not recognized even by the
expert who possesses it. The pedagogical value of such research in any cognitive
domain is obvious, for it should help pinpoint precisely what needs to be learned
for skilled performance in that domain. Another application is to the design of
expert systems. An expert system is a computer program capable of skilled
problem-solving performance in some specific domain. Clearly, such a program
cannot be written until the necessary procedural knowledge can be precisely
described. The development of expert systems for specialized applications, from
the daily adjustment of airline fares to medical diagnoses, is expanding rapidly.
This will undoubtedly have a major economic and social impact over the next two
decades.
In a specified cognitive domain, such as Newtonian mechanics, the
procedural knowledge required for problem solving can be described as a system
of productions. A production16 is defined as a condition-action pair expressible in
the form: If condition A is satisfied, then perform action B. A good test for the
adequacy of a procedural description is using it to write a computer program to
emulate the problem solving performance of a human subject. This is often
difficult or impossible with current computer systems, because they cannot match
human perceptual capabilities for recognizing when conditions are satisfied. Even
so, the demanding goal of computer emulation has stimulated researchers to
sharpen their descriptions of procedural knowledge and identify gaps therein. The
difficulty of the task has made it clear that expert knowledge, even in a "simple"
domain like elementary mechanics, is much more complex and extensive than the
experts themselves generally realize.
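A toy production system can make the condition-action format concrete. The Python sketch below is my own illustration under assumed conditions and actions; it is not a catalogue of rules from the research cited.

```python
# A toy production system: each production is a (condition, action) pair.
# Conditions test a schematic description of the problem; actions name a modeling move.
# The rules are illustrative assumptions, not taken from the cited research.
productions = [
    (lambda d: d.get("force") == "constant",
     "classify the motion as uniform acceleration"),
    (lambda d: d.get("force") == "zero",
     "classify the motion as uniform velocity"),
    (lambda d: d.get("process") == "collision",
     "choose momentum as the state variable"),
]

def run(description):
    """Fire every production whose condition is satisfied by the description."""
    return [action for condition, action in productions if condition(description)]

print(run({"force": "constant"}))
print(run({"process": "collision"}))
```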
Problem solving research and theory has important implications for
physics teaching. To begin with, the fact that experts do not solve problems the
way they say that they do has been carefully documented. What the expert says
looks like a recital from a standard textbook, but what the expert does is quite
different. Some time ago Einstein noted a similar discrepancy in what theoretical
physicists say about the methods they use, and he offered the advice17: "Don’t
listen to their words, fix your attention on their deeds." I believe this attitude is
one of the sources of Einstein’s genius, for his greatest work came from a critique
of physical presuppositions which everyone else overlooked. All this suggests that
the typical physics professor is not likely to be very good at teaching problem
solving to beginners.
A number of general problem solving strategies, such as "means-end
analysis," have been identified and thoroughly studied. A strategy is "general" if it
applies to a wide range of cognitive domains. Such strategies are also "weak" in
the sense that, by themselves, they support inefficient problem solving methods.
Strong strategies and methods employ domain specific knowledge, so they have a
limited range of applicability. There has not been much success in formulating
strategies of intermediate strength. Thus, most expert systems developed so far
are little more than bundles of productions, without significant strategies to
organize them. And most problem solving research in physics has recognized only
very weak and very strong methods. However, we have seen in Sec. II that
modeling theory provides a powerful problem solving strategy with wide
applicability in science.
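
To make the notion of a "weak" general strategy concrete, here is a minimal sketch of means-end analysis in the spirit of Newell and Simon's General Problem Solver (an illustrative reconstruction of mine, not an example from the article; the toy operator table is hypothetical):

```python
# Minimal sketch of means-end analysis (illustrative; the operators and the
# toy problem are invented). The strategy itself is domain neutral: pick an
# operator relevant to the current difference from the goal, and recursively
# remove whatever differences block that operator. All of the physics lives
# in the operator table, not in the strategy.

# Each operator: (name, preconditions, quantities it produces, action)
OPERATORS = [
    ("second_law", ("F", "m"), ("a",), lambda s: {"a": s["F"] / s["m"]}),
    ("kinematics", ("a", "v0", "t"), ("v",), lambda s: {"v": s["v0"] + s["a"] * s["t"]}),
]

def achieve(state, var, depth=5):
    """Try to determine `var`, recursively achieving operator preconditions."""
    if var in state:
        return True
    if depth == 0:
        return False
    for name, needs, produces, action in OPERATORS:
        if var in produces and all(achieve(state, pre, depth - 1) for pre in needs):
            state.update(action(state))
            return True
    return False

state = {"F": 10.0, "m": 2.0, "v0": 0.0, "t": 3.0}
achieve(state, "v")
print(state["v"])   # 15.0
```

The procedure knows nothing about mechanics; everything specific is packed into the two operators, which is precisely why such general strategies remain weak without domain-specific knowledge.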
Accordingly, I submit that experts tacitly employ a model-centered
strategy for solving physics problems. This is to say that the expert typically
attacks a physics problem by first constructing an abstract model from the
"givens" in the problem and then deploying the model to determine the desired
"unknowns." The expert’s tactical tricks for problem solving are thus coordinated
in a general modeling strategy. In other words, the modeling theory discussed in
Part I provides the basis for a detailed modeling theory of expert problem solving
in physics. It can be expected to give a good account of empirical data on expert
problem solving, but we are more concerned here with the way it fits into a
general instructional theory.
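
By way of illustration only (the sketch below is mine, not a formal part of the modeling theory), a model-centered solution first constructs an explicit model from the "givens" and then deploys that single model to answer whatever is asked:

```python
# Hypothetical sketch of a model-centered solution: construct a model of the
# situation (object, interaction, law), then deploy it for any "unknown".
# All names are illustrative, not taken from the article.

from dataclasses import dataclass

@dataclass
class UniformAccelerationModel:
    """Particle model: constant net force in one dimension."""
    m: float          # mass (kg)
    F: float          # net force (N)
    x0: float = 0.0   # initial position (m)
    v0: float = 0.0   # initial velocity (m/s)

    @property
    def a(self) -> float:
        return self.F / self.m            # Newton's second law

    def velocity(self, t: float) -> float:
        return self.v0 + self.a * t       # deployment: kinematics of the model

    def position(self, t: float) -> float:
        return self.x0 + self.v0 * t + 0.5 * self.a * t**2

# Construct the model from the givens, then interrogate it for the unknowns.
model = UniformAccelerationModel(m=2.0, F=10.0, v0=0.0)
print(model.velocity(3.0))   # 15.0 m/s
print(model.position(3.0))   # 22.5 m
```

Once constructed, the same model answers every question about the situation; no new search is needed for each unknown.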
Problem solving is traditionally taught by providing examples for the
student to emulate. The drawback is that students tend to emulate what they see.
What they see, typically, is that after a little talk some formulas are written down
from which a numerical solution is obtained by manipulation and substitution.
The teacher or textbook may say that it is important to do such things as to "draw
a diagram," but they seldom say why, and the student can see that the answer
comes from a formula, so why bother with a diagram? Little wonder that students
come to see selection of the correct formulas as the key to problem solving. Thus,
they tend to develop a formula-centered problem solving strategy15 like the
following: (a) search the problem statement for a list of given and unknown
variables, (b) search a list of formulas for an equation which involves those
variables alone, and (c) solve the equation for the unknown and presto, the
solution! This strategy is especially effective for homework problems when the
necessary formulas can be found in the chapter from which the problems are
assigned. Dedicated students learn this strategy well by working a lot of assigned
problems, for they know that "practice makes perfect!" Indeed, they may become
quite adept at formula hunting.
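
Steps (a)-(c) are so mechanical that they can be caricatured in a few lines of code (again an illustrative sketch of mine, here using the sympy library; the formula list is the familiar trio of constant-acceleration equations):

```python
# Hypothetical caricature of the formula-centered strategy (a)-(c): match the
# set of given/unknown variable names against a list of memorized formulas.

import sympy as sp

v, v0, a, t, x = sp.symbols("v v0 a t x")
formulas = [sp.Eq(v, v0 + a * t),
            sp.Eq(x, v0 * t + sp.Rational(1, 2) * a * t**2),
            sp.Eq(v**2, v0**2 + 2 * a * x)]

def formula_hunt(givens: dict, unknown: sp.Symbol):
    """(a) list givens/unknown, (b) find a formula involving those variables
    alone, (c) solve it for the unknown."""
    known = set(givens) | {unknown}
    for eq in formulas:
        if eq.free_symbols <= known:          # involves those variables alone
            sol = sp.solve(eq.subs(givens), unknown)
            if sol:
                return sol[0]
    return None                               # no matching formula: stuck

print(formula_hunt({v0: 0, a: 5, t: 3}, v))   # 15
```

The contrast with the model-centered sketch above is the point: here nothing is represented but variable names and formulas, and the "solution" carries no model of the physical situation at all.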
The trouble with teaching by example is that so much of the expert’s
procedural knowledge is invisible to the student. Considering the research
required to identify the expert’s tacit knowledge, one wonders how such
knowledge ever gets transmitted to students under traditional instruction. Perhaps
it is not transmitted, and it is only learned by those few students who rediscover it
for themselves. At any rate, tacit expert knowledge should be more teachable after
it has been given explicit formulation.
Complex procedural knowledge is not easy to teach even when an explicit
formulation of the procedures is available. In his studies of mathematical problem
solving, Schoenfeld18 has shown how a student’s intellectual performance is
profoundly affected by noncognitive beliefs about self, the discipline, the
environment and the task at hand. For example, a student who believes that
"science is a collection of facts" is likely to approach physics as a fact collector
and so be blind to the structure of physics. Students are not easily weaned from a
formula-centered problem solving strategy that has been successful in the past.
They must be confronted with situations where the formula-centered strategy
clearly fails, and led to recognize that a better strategy is available. To facilitate the
transition to a powerful model-centered strategy, the instructor needs a clear
understanding of modeling theory and a systematic method for teaching it.

B. Representations of knowledge
There is a temptation to regard computer emulations of human problem solving
performance as models of human cognitive processes. It should be realized,
therefore, that such models are severely limited and in some ways downright
misleading. Most computer models of human cognition to date suffer two major
defects. In the first place, they are incapable of learning outside of mere data
accumulation. Their performances can be improved only by programmers who do
the learning for them. In the second place, they represent and process information
serially, whereas the human brain is undoubtedly a parallel processor. Of course,
one reason for this is the fact that available computers are serial processors. But
serial models also result from modeling problem solving behavior directly,
because behavior is necessarily serial even if the underlying processes that direct
it are not. Computer emulation of human perceptual skills like object recognition
has proved to be especially difficult, in large part, no doubt, because this is a
modeling of parallel processes by a serial computer.
Aside from computer emulations, serial models of cognition have been
seriously considered by philosophers and psychologists operating on the premise
that language is essential to cognition, for language production appears to be a
serial process, on the surface at least. Beginning with George Boole in 1854,
analysis of language syntax led to the development of propositional (or symbolic)
logic, which has often been regarded as a theory of basic operations underlying
cognition. A variant of this idea was proposed by psychologist Jean Piaget,
though Piaget held that logic is not a derivative of language, but originates rather
in the total coordination of actions.19 Piaget suggested that a (mathematical) group
of four operations, called the INRC group, develops spontaneously in humans as a
system of mental operations underlying his "Formal Operational Stage," a level of
cognitive development at which mathematical and scientific thinking becomes
possible. This bold hypothesis is strong enough to be subjected to empirical test.
Unfortunately, the conclusion after careful evaluation is that it fails.20 Evidently
the relation between basic mental operations and logical operations is not so
simple. But much of Piaget’s theory remains viable, including his characterization
of stages in cognitive development.8
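
For readers unfamiliar with Piaget's proposal, the INRC group (identity I, negation N, reciprocity R, and correlativity C) has the structure of the Klein four-group, so the hypothesis can be stated concretely by its composition table (a standard rendering, not specific to this article):

```latex
% Composition table of the INRC group (isomorphic to the Klein four-group):
% every element is its own inverse, and the product of any two distinct
% non-identity elements is the third.
\[
\begin{array}{c|cccc}
\circ & I & N & R & C \\ \hline
I & I & N & R & C \\
N & N & I & C & R \\
R & R & C & I & N \\
C & C & R & N & I
\end{array}
\qquad\text{e.g. } N\circ R = C, \qquad N\circ N = I .
\]
```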
The idea of using propositional logic to model cognitive processes faces
many other difficulties, most notably in explaining what it means to understand a
proposition. Johnson-Laird has argued persuasively that one understands a
proposition about the real world only by constructing a mental model of the
situation to which the proposition refers.21 This idea has far-reaching pedagogical
implications, and I propose to take it as a premise of our pedagogical theory. The
idea that the development of mental models is crucial to "physical understanding"
has been the subject of a number of cognitive studies.22 However, there is little
agreement on what constitutes a mental model.
Modeling theory tells us that a situation in the real world is accounted for
"physically" by constructing a mathematical model to represent it. Now we must
distinguish two kinds of representation for the model: an external (objective)
representation in terms of mathematical symbols, maps, diagrams, etc., and an
internal (mental or subjective) representation in the brain of someone who
understands it. In other words, to understand a mathematical model one needs a
corresponding mental model. Evidently an ability to construct such mental models
is what physicists mean by "physical intuition." Perhaps cognitive development
could best be described as development of mental modeling skills. We don’t know
much about mental representations, but we do know they are related to external
representations, and there is every reason to believe that the development of
physical intuition is stimulated by suitable experience with external
representations.
Any bit of knowledge can be represented in many different ways. The
most appropriate mode of representation depends on how the information is to be
used. Specific modes of representation have often played critical roles in scientific
discovery. This has been documented by Arthur Miller in his studies of the role of
imagery in important discoveries in physics.23,24 By "imagery" he evidently means
some form of mental representation, but his analysis emphasizes the development
of external representations for that imagery, such as energy level diagrams and
Feynman diagrams.
Surely diagrammatic techniques are as important in pedagogy and
understanding as in research and discovery. Yet their pedagogical role has hardly
been studied. It is not enough to know what the diagrammatic techniques are.
Research is needed to determine what it takes to teach skillful use of the
techniques. We know that most students fail to master the use of "free-body
diagrams" in introductory physics. Surely, there is greater failure to understand
the more complex techniques for representing electric and magnetic fields. Every
physicist knows these techniques are essential to understanding the field concept,
and textbooks show that great efforts have been expended to teach them.
However, these efforts have all been directed toward the techniques themselves,
without any study of the difficulties students have in learning them or even what
constitutes expert utilization of the techniques.
The upshot of my argument here is that mastery of various modes of
external knowledge representation is essential to the development of physical
intuition, and substantial pedagogical research is needed to determine how this
can be taught efficiently. Although pedagogical research should be able to go
a long way by systematically studying how external knowledge representations are
learned and employed, pedagogical theory should be based ultimately on a theory
of internal knowledge representation in the brain. Few people realize that a
significant theory of this kind exists already and is developing rapidly. As the
theory is not far from the point where it will have pedagogical implications, a
brief report on its status may be useful.
Neural network theories of information processing in the brain are based
on the "neuron doctrine," which holds that neurons are the basic processing
elements. Although the neuron doctrine has not been established beyond doubt,
there is considerable evidence in its favor, and no significant alternatives are
available. Moreover, there is wide agreement on the modes of neuronal signal
transmission, though this is a subject of active research. Stephen Grossberg has
developed a neural network theory with a handful of basic mechanisms and
principles which accounts for an impressive range of data on perception, learning,
memory, attention, motor control, and even emotions.25,26 According to this
theory, the basic representational unit of information in the brain is a spatial
pattern. Even temporal information is represented as a spatial pattern. The theory
identifies specific neural mechanisms for what psychologists call short-term
memory (STM) and long-term memory (LTM). The STM is a mechanism for
temporary information storage which allows rapid processing and the selection of
information worth storing permanently by the LTM mechanism. The LTM
mechanism for permanent information storage and retrieval is also a universal

learning mechanism. Every kind of human learning is explained by embedding
this single mechanism in a suitable network.
Grossberg has formulated a number of general problems, principles and
theorems to guide the design of networks with particular information processing
capabilities. A key problem is the design of an adaptive filter for encoding
significant patterns. The successful solution of this problem explains how the
brain spontaneously develops and updates a code for classifying incoming
patterns. It tells how to design a self-organizing system for extracting invariants
from its experience. The same design explains the unitization process whereby
coherent clusters of coded information are compressed into single units, a process
called chunking by psychologists. Any adaptive system capable of developing and
updating codes for incoming information needs a code protection mechanism to
prevent the adventitious encoding of erroneous information. Grossberg calls this
the stability-plasticity dilemma: "How can an organism’s adaptive mechanisms be
stable enough to resist environmental fluctuations which do not alter its
behavioral success, but plastic enough to rapidly change in response to
environmental demands that do alter its behavioral success?" His brilliant solution
to this problem has profound implications for learning theory. He maintains that
familiar and novel events must be processed differently. To accomplish that, he
proposes a system with two complementary subsystems: an attentional subsystem
for processing familiar (or expected) events and an orienting subsystem activated
by unfamiliar (or unexpected) events. The entire system controls the allocation of
conscious attention, a mechanism for identifying meaningful patterns and storing
them in LTM (learning).
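
The following toy program (my own, and drastically simplified relative to Grossberg's differential-equation formulation) conveys the flavor of that attentional/orienting interplay: an input pattern that matches a stored prototype closely enough, as judged by a "vigilance" threshold, is treated as familiar and refines the existing code, while a poor match triggers a reset and recruits a new category.

```python
import numpy as np

# Toy sketch in the spirit of adaptive resonance (greatly simplified; these are
# not Grossberg's actual equations). Binary input patterns are matched against
# stored prototypes; a match above the vigilance threshold refines the winning
# prototype (familiar event), otherwise a new category is created (novel event).

def categorize(patterns, vigilance=0.8, lr=0.5):
    prototypes = []                      # long-term memory traces
    labels = []
    for x in patterns:
        x = np.asarray(x, dtype=float)
        best, best_match = None, -1.0
        for i, w in enumerate(prototypes):
            match = np.minimum(x, w).sum() / (x.sum() + 1e-9)  # overlap with input
            if match > best_match:
                best, best_match = i, match
        if best is not None and best_match >= vigilance:
            # "Attentional" path: familiar pattern, refine the existing code.
            prototypes[best] = (1 - lr) * prototypes[best] + lr * np.minimum(x, prototypes[best])
            labels.append(best)
        else:
            # "Orienting" path: unexpected pattern, recruit a new category.
            prototypes.append(x.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes

data = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [1, 1, 0, 0]]
labels, protos = categorize(data)
print(labels)   # [0, 1, 2, 0]: the repeated pattern is recognized as familiar
```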
These qualitative observations barely skim the surface of Grossberg’s
theory. I should add that the theory has a mathematical formulation in the
tradition of theoretical physics. It is sufficiently well defined to be emulated on
computers. Indeed, it provides the theory for the design of new computer systems
with real time interactive parallel processing, hence capabilities beyond the reach
of serial computers.27 Such systems are being built even now and promise to take
the computer revolution beyond the so-called "fifth generation computers."

C. Dialectics of conceptual change


There is now substantial evidence that beginning students in physics have prior
common sense (mis)conceptions about the physical world which interfere with
learning and understanding of the subject (Refs. 3, 4, and references cited
therein). These misconceptions are very stable in the sense that conventional
instruction is ineffective in correcting them. Moreover, it appears that some
serious misconceptions about physics are inadvertently promoted by instruction;
for example, the formula-centered approach to problem solving. To design
efficient instruction we need to account for the stability of these misconceptions
and the conditions necessary and sufficient to correct them. There is precious little
pedagogical research bearing directly on this issue. My purpose here is to set forth
an intellectual perspective for such research.
The first thing to realize is the stability of misconceptions about physics is
not a sign of stupidity among students. On the contrary, it is a consequence of the
resistance to conceptual change which every intelligent system must possess to
escape the horns of the stability-plasticity dilemma. It is a sign that the necessary
conditions for conceptual change have not been met. What are those conditions?
Piaget’s theory of cognitive development suggests an answer.
According to Piaget, cognition has a figurative aspect and an operative aspect.19
The figurative aspect concerns the representation of information in static
configurations (or states). The operative aspect concerns operations which
transform one state to another. These operations are integrated into systems or
"mental structures," called schemata, which are used to interpret and respond to
environmental input. He holds that "the central problem of (cognitive)
development is to understand the formation, elaboration, organization, and
functioning of these structures." Grossberg goes well beyond Piaget in identifying
the configurations as spatial patterns of a definite type and characterizing the
basic operations on these patterns as well as mechanisms for performing the
operations and organizing them into schemata.
Following biological theory, Piaget suggests that there are two kinds of
learning, which he calls assimilation and accommodation. Assimilation is the
integration of information into an existing schema. Accommodation is the
modification of a schema to be consistent with new information. Under
appropriate conditions schema change occurs spontaneously by a process called
equilibration or self-regulation, which involves a feedback loop relating an action
to its consequences. Here again Grossberg has gone well beyond Piaget in
investigating the design of neural mechanisms capable of both kinds of learning.
These include the attentional and orienting subsystems mentioned earlier. It
appears that Piaget’s "assimilation" can be identified with processing by the
attentional subsystem. This can be regarded as an encoding of information into an
existing code, while "accommodation" involves the formation of new codes.
Actually, in the neural theory any encoding of information produces subtle
changes in the code. Grossberg28 argues that "learners are minimal adaptive
predictors who change their internal representations of external events when
behavior based upon these representations generates unexpected environmental
feedback." This supports "an epistemological framework wherein each observer
can possess idiosyncratic and personal definitions of objects or events, and these
definitions are refined only when they are maladaptive."
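
A toy sketch (mine, purely illustrative) of such a "minimal adaptive predictor": the internal representation, here a single expected value, is revised only when feedback is surprising enough, so small discrepancies are assimilated while large ones force accommodation.

```python
# Illustrative sketch only: the internal representation (a single predicted
# value) changes only when the prediction error exceeds a tolerance, i.e.,
# when environmental feedback is unexpected.

def adapt(observations, estimate=0.0, tolerance=1.0, rate=0.5):
    history = [estimate]
    for obs in observations:
        error = obs - estimate
        if abs(error) > tolerance:      # unexpected feedback -> accommodate
            estimate += rate * error
        # expected feedback -> assimilate: representation left essentially intact
        history.append(estimate)
    return history

print(adapt([0.2, -0.3, 5.0, 5.2, 4.9]))
# the representation shifts only once the observations become surprising
```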
The same general principles of adaptive self-organization within an
individual can be seen operating in the scientific community. Historian Thomas
Kuhn distinguishes between the processes of "normal science" and "revolutionary
science" which are analogous to the processes of assimilation and
accommodation.29 In a scientific revolution the conceptual structure of science is
revised. Resistance to scientific revolution must be strong to insure the stability of
science against faddish fluctuations in opinion. Revolutionary ideas are never
incorporated in the framework of science until they have been thoroughly tested
in competition with alternatives. On closer examination, a distinction between the
processes of normal and revolutionary science is difficult to make.30 Thus, in
normal science the accommodation process is continually operating to reconcile
discrepancies between scientific models and empirical data.
The process by which new ideas about the real world are tested, accepted
and integrated into a conceptual framework is of such importance that it deserves
a name. I propose to call it a dialectical process. The term "dialectical" has
unfortunate connotations today, but it calls to mind the rich and controversial
history of the underlying idea. Dialectics is about controversy and its resolution
by rational means, the engine of conceptual change in every intellectual domain.
The essence of the dialectical process is a competition between incompatible
ideas, within the context of some conceptual framework, to achieve a resolution

called "truth." The competition in science follows fairly definite rules, though a
coherent account of the rules cannot be found in any physics textbook. These
rules, in fact, can be regarded as rules of inference, so let me speak of dialectical
inference in science instead of a dialectical process. A coherent, though
incomplete, theory of dialectical inference has been put together by Edwin Jaynes
and successfully applied to a great range of problems in physics and
engineering.31 It should have a central place in the science curriculum. But the
main point of interest here is that dialectical inference is an extension, or perhaps
a reflection, of the self-regulation process in human cognition. The main function
of this process, within the individual or the entire scientific community, is to
produce an optimal representation of the real world consistent with presently
available information and prior knowledge.
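
Jaynes’ program treats probability theory as extended logic, so the competition between incompatible ideas can be given a quantitative form. The fragment below (my own illustration with invented numbers, not an example taken from Ref. 31) lets two rival hypotheses about a measured quantity compete for posterior probability in the light of the same data:

```python
import numpy as np

# Bayesian comparison of two incompatible hypotheses (invented numbers).
# Each hypothesis predicts the measured value with Gaussian noise; the data
# redistribute the prior probability between them.

def gaussian_likelihood(data, mu, sigma=0.5):
    data = np.asarray(data)
    return np.prod(np.exp(-0.5 * ((data - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi)))

data = [9.9, 10.2, 9.7, 10.1]                      # measurements (arbitrary units)
hypotheses = {"H1: value = 9.8": 9.8, "H2: value = 11.0": 11.0}
prior = {h: 0.5 for h in hypotheses}               # no initial preference

likelihood = {h: gaussian_likelihood(data, mu) for h, mu in hypotheses.items()}
evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)   # H1 receives essentially all of the posterior probability
```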
A scientific revolution is a revolution in the conceptual framework of
individual scientists. For beginning students the transition from a common sense
conceptual framework to the Newtonian framework is a revolution of comparable
magnitude. Indeed, it corresponds to one of the great scientific revolutions. From
this perspective, there is no wonder that students have difficulty making the
transition. However, the same perspective suggests that the transition can be made
only by a dialectical process. So we should be able to use what we know about the
dialectical process to design an instructional strategy to promote it. Let me call
that a dialectical teaching strategy.
A dialectical teaching strategy is appropriate when the student has prior
concepts (schemata) which are incompatible with concepts to be taught. The
strategy is to provoke in the student a state of "cognitive conflict" between the
new and prior concepts and induce the student to resolve the conflict by rational
means. Hopefully, then, the desired transition will occur because the student
recognizes deficiencies in the prior concepts and merit in the new.
Before a dialectical teaching strategy can be implemented, something must
be known about the initial conceptual state of the student. The more that is
known, the more effective the implementation is likely to be. Unfortunately, such
knowledge is not easy to obtain. It is stored in the student’s schemata, which are
not open to direct inspection by the instructor. The common sense physical
intuition of a student entering a first course in physics was developed from long
personal experience, mostly without conscious effort. Traditional physics
instruction does not adequately take the intuitions of students into account, so it
frequently fails to establish the conditions of cognitive conflict needed to drive a
transition from common sense intuition to the more veridical intuition of a
physicist.
To design instruction that takes the intuitions of individual students into
account, one needs to know something about the structure and composition of
physical intuition. A "physical" schema is a kind of unarticulated belief which can
be observed only indirectly in the behavior patterns of an individual.
Nevertheless, schemata can be studied, classified, and represented (partially, at
least) as verbally formulated beliefs. This has been done with some success for
common sense intuitions about motion and its causes.4 Fortunately, the physical
intuitions of untutored individuals are not totally idiosyncratic. Rather, common
sense beliefs (schemata) fall into a small number of fairly well-defined categories.
Therefore, practical instruction can be designed to deal with these categories of
common sense beliefs without worrying about the idiosyncrasies of individual
students.

An individual’s schemata (beliefs) are interrelated, so good instructional
design should take this into account whenever possible. Physical intuition can be
regarded as a system of unarticulated beliefs. Research shows that the common
sense beliefs of most individuals are only weakly interrelated and frequently
inconsistent.4 In contrast, the beliefs of a physicist are strongly interrelated and
highly consistent. This suggests that the beliefs of a physicist may be more resistant
to change (as in a scientific revolution) than the beliefs of a student.
To provoke cognitive conflict and direct equilibration toward an internally
consistent system of beliefs in full accord with experience, that is, a scientific
belief system, I recommend a dialectical teaching strategy with the following
elements:
(1) Explicit formulation. Students should be engaged in considering systems
of explicitly formulated common sense beliefs (not necessarily their own).
Usually, the instructor will have to supply the formulation, though it is
desirable to have students contribute whenever possible. Research on
common sense beliefs makes it possible to do this systematically.
(2) Check for external validity. Students should be induced to check the
beliefs for consistency with empirical evidence.
(3) Check for internal consistency. Students should be induced to check for
mutual consistency among beliefs.
(4) Comparison with alternative beliefs. Students should be induced to
compare and decide between conflicting beliefs and belief systems,
including, of course, relevant scientific beliefs.
The dialectical teaching strategy can be implemented in a variety of different
teaching methods which should be carefully evaluated for effectiveness. Some
success using this strategy in a method for teaching problem solving is reported in
a subsequent article.32 Minstrell reports impressive success in teaching the
Newtonian force concept to high school students33; if his results can be replicated
and extended, then we will have convincing empirical evidence that suitable
instruction can transform most students into Newtonian thinkers. I should mention
that others have argued for teaching methods to promote cognitive conflict.34 Of
course, the venerable Socratic method employs a dialectical strategy, and the
Socratic dialog35 is a proven means for provoking cognitive conflict. However,
the strategy proposed here is more systematic and goal directed than the usual
open-ended Socratic method.
Dialectical teaching methods are slow and ponderous, though they can be
exciting and effective when skillfully employed. Fortunately, our theory tells us
for what purpose dialectical methods are most appropriate, namely, to root out
deep-seated and unarticulated beliefs for critical evaluation. For other
instructional purposes didactical methods are certainly more efficient and possibly
more effective.
The objective of dialectical teaching should be much more than alteration
of deep-seated beliefs. It should aim to teach objective procedures and criteria for
evaluating beliefs. Students should learn to recognize the flaws in common sense
beliefs and justify their own beliefs. They should learn the value of explicit and
precise formulations, the need for careful empirical tests, and the processes of
objective evaluation. In short, dialectical science teaching should aim to teach
rational (scientific) methods for evaluating beliefs about the physical world.

IV. CONCLUSION AND RECOMMENDATIONS FOR RESEARCH
The main contention of this article is that mathematical modeling should be the
central theme of physics instruction. This means that the teaching of physical
facts and theories should be subsidiary to teaching the principles and techniques
of mathematical modeling. It calls for a severe reorganization of priorities in
physics teaching which can be justified on strong epistemological and
psychological grounds.
Since modeling theory is concerned with procedural knowledge, it is best
learned in a context of specific modeling activities, where the theory is developed
gradually to monitor and guide those activities. I call this a model-centered
instructional strategy. Active modeling requires the coordination and integration
of facts with scientific theory, rather than a mere passive collection of facts and
formulas. Modeling principles organize the information in a scientific theory for
application to concrete physical situations and problems. Therefore, model-
centered instruction should be an efficient means for teaching the organization of
scientific knowledge. Moreover, such instruction focuses on the construction and
deployment of well-defined models of physical objects and processes. A model
object is a conceptual representation of a real thing which behaves in accordance
with physical laws. Thus, model-centered instruction focuses on conceptual
reconstructions of physical reality. It should be expected, therefore, to be effective
in developing the physical intuition of students.
Although some encouraging empirical evidence is available, the best
justification for model-centered instruction comes from an analysis of the
scientific enterprise. Science has developed rapidly by emulating Newtonian
mechanics. I submit that the success of this emulation derives from a transfer to
other domains of the general modeling principles imbedded in Newtonian science.
Modeling theory is an extraction and analysis of these principles, producing a
formulation of modeling strategy and tactics applicable to all the sciences.
Instruction in modeling theory is, therefore, instruction in the unity of science and
scientific method. But modeling theory is also concerned with domain-specific
modeling techniques. The modeling techniques we teach should exemplify good
scientific practice, to minimize the artificiality of textbook exercises and
maximize transferability to genuine scientific research.
Substantial pedagogical research and development will be required to
implement effective model-centered instruction. A formulation of domain-specific
modeling techniques sufficient for instructional purposes has been completed only
for Newtonian mechanics. And this is the only domain for which a fairly complete
profile of student misconceptions is available. But effective methods of
instruction which make optimal use of this information remain to be developed.
Although we have good reason to believe that basic misconceptions can be
eliminated with dialectical methods, this has yet to be demonstrated in practice.
Although modeling theory should be helpful in the design of instruction, how
much of the theory should be explicitly taught to students remains to be
determined. Also, for the efficient control of problem solving instruction, a "model-
theoretic" analysis of all assigned problems may be necessary. Finally, it will
probably be necessary to integrate modeling theory into an introductory textbook
on mechanics.
Newtonian mechanics should certainly be the domain where the
instructional theory is strenuously tested to see how effective physics teaching can
be. Application of modeling theory to other physical domains is a nontrivial task

requiring a high order of scientific insight. I submit, for example, that the archaic
formulation of thermodynamics in current textbooks would be substantially
modified by a thoroughgoing analysis from the perspective of modeling theory. In
every physical domain there is need for analysis of domain specific modeling
techniques, including diagrammatic techniques of concept representation, to
determine how they can be used most effectively in instruction and perhaps even
improved.
Unfortunately, the prospects for pedagogical research in physics are bleak,
because adequate support by the physics community is lacking. There is no
shortage of young people who would like to become engaged in such research,
but the opportunities are almost nonexistent. Only a handful of U. S. physics
departments have allowed people to complete doctorates in the subject. And most
of these people soon turned to something else, as they learned there would be no
rewards in academia for continuing their research. Indeed, even a tenured physics
professor is more likely to be penalized than rewarded for pedagogical research.
No wonder that most pedagogical research is of such poor quality! In any
field it is extremely difficult for an isolated individual to carry out significant
research. A productive research program requires a community of active
investigators. Efforts to establish a meaningful program of research in science
education have been repeatedly squelched by the scientists at the national as well
as the departmental level. When science education is forced to compete with
scientific research for the budget pie, it is invariably left with the crumbs. The
National Science Foundation had to be forced by Congress to allocate significant
funds to science education research. It took more than a decade and the vigorous
advocacy of Robert Karplus for the NSF Science Education directorate to
assemble the necessary expertise and develop programs promoting meaningful
research. All this was destroyed overnight when the Reagan administration came
to power.
As awareness of a national crisis in science education has increased
recently, substantial federal funds have been allocated to cope with the crisis on
the secondary level. However, little of this is directed toward significant
pedagogical research, and much of it promotes a reactionary "back-to-basics"
approach. I am not alone in the dour prediction that the main result of this
movement will be more bad science teaching and in the opinion that substantial
pedagogical research will be essential to a more salutary outcome.36,37

a) Partially supported by a grant from TRW Corporation.
1. D. Hestenes, New Foundations for Classical Mechanics (Reidel, Dordrecht/Boston, 1986), Chap. 9. (A reasonably priced student edition will be available in the summer of 1987.)
2. F. Reif and J. I. Heller, "Knowledge Structure and Problem Solving in Physics," Educ. Psychol. 17, 102-127 (1982).
3. I. A. Halloun and D. Hestenes, "The Initial Knowledge State of College Physics Students," Am. J. Phys. 53, 1043 (1985).
4. I. A. Halloun and D. Hestenes, "Common Sense Concepts about Motion," Am. J. Phys. 53, 1086 (1985).
5. J. I. Heller and F. Reif, "Prescribing Effective Human Problem Solving Processes: Problem Description in Physics," Cognition and Instruction 1, 177-216 (1984).
6. This slogan was coined by psychologist Robert J. Sternberg in an article about models of intelligence, Science 230, 1111 (1985).
7. M. Bunge, The Furniture of the World, Volume 4 of Treatise on Basic Philosophy (Reidel, Dordrecht/Boston, 1973).
8. D. Hestenes, "Wherefore a Science of Teaching?", Phys. Teach. 17, 235-242 (1979).
9. D. A. Norman, ed., Perspectives on Cognitive Science (Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1981).
10. J. R. Anderson, ed., Cognitive Skills and Their Acquisition (Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1981).
11. J. H. Larkin, J. McDermott, D. P. Simon, and H. A. Simon, "Expert and Novice Performance in Solving Physics Problems," Science 208, 1335-1342 (1980).
12. M. Chi, P. J. Feltovich, and R. Glaser, "Categorization and Representation of Physics Problems by Experts and Novices," Cognitive Sci. 5, 121-152 (1981).
13. J. H. Larkin and F. Reif, "Understanding and Teaching Problem Solving in Physics," Eur. J. Sci. Educ. 1, 191-203 (1979).
14. J. H. Larkin, J. McDermott, D. P. Simon, and H. A. Simon, "Models of Competence in Solving Physics Problems," Cognitive Sci. 4, 317-345 (1980).
15. J. H. Larkin, "Cognition of Learning Physics," Am. J. Phys. 49, 534-541 (1981).
16. Allen Newell, "Production Systems: Models of Control Structures," in W. G. Chase, ed., Visual Information Processing (Academic, New York, 1973).
17. A. P. French, ed., Einstein (Harvard U.P., Cambridge, 1979), p. 310.
18. A. H. Schoenfeld, Mathematical Problem Solving (Academic, New York, 1985).
19. J. Piaget, "Cognitive Development in Children: Development and Learning," J. Res. in Sci. Teach. 2, 176-186 (1964).
20. W. Wollman, Developmental Implications of Science Teaching: Early Adolescence (ERIC Clearinghouse for Science, Mathematics and Environmental Education, Ohio State University, Columbus, Ohio, 1978).
21. P. N. Johnson-Laird, Mental Models (Harvard U.P., Cambridge, 1983).
22. D. Gentner and A. L. Stevens, eds., Mental Models (Lawrence Erlbaum Associates, Hillsdale, NJ, 1983).
23. A. I. Miller, Imagery in Scientific Thought: Creating 20th Century Physics (Birkhauser, Boston, 1984).
24. A. I. Miller, "Werner Heisenberg and the Beginning of Nuclear Physics," Phys. Today 38, 60-68 (1985).
25. S. Grossberg, Studies of Mind and Brain (Reidel, Dordrecht, 1982).
26. An introduction to Grossberg's theory for physicists and engineers is given by D. Hestenes, "How the Brain Works," in Maximum Entropy and Bayesian Methods in Spectral Analysis and Estimation Problems, edited by C. Ray Smith (Reidel, Dordrecht/Boston, 1987).
27. R. Hecht-Nielsen, "Neural Analog Processing," Proc. SPIE 360, 180-199 (1982).
28. Ref. 25, p. 229.
29. T. S. Kuhn, The Structure of Scientific Revolutions (University of Chicago Press, Chicago, 1970), 2nd ed.
30. I. B. Cohen, Revolution in Science (Harvard U.P., Cambridge, 1985).
31. E. T. Jaynes, Papers on Probability, Statistics and Statistical Physics, R. D. Rosenkrantz, ed. (Reidel, Dordrecht, 1983). An analytical review of this work is given by D. Hestenes, Found. Phys. 14, 187-191 (1984).
32. I. Halloun and D. Hestenes, "Modeling Instruction in Mechanics," Am. J. Phys. 55, 455 (1987).
33. J. Minstrell, "Teaching for the Understanding of Ideas: Forces on Moving Objects," in the 1984 Yearbook of the Association for the Education of Teachers (ERIC Clearinghouse for Science, Mathematics and Environmental Education, Ohio State University, Columbus, Ohio).
34. Peter Hewson and Mariana A'Beckett Hewson, "The Role of Conceptual Conflict in Conceptual Change and the Design of Science Instruction," Instructional Sci. 13, 1 (1984).
35. A. Arons, "Thinking, Reasoning and Understanding in Introductory Physics Courses," Phys. Teach. 19, 166-172 (1981).
36. Charles W. Anderson and Edward L. Smith, "Teaching Science," in The Educator's Handbook: A Research Perspective, Virginia Koehler, ed. (Longman, New York, 1986).
37. F. Reif, "Educational Challenges for the University," Science 184, 537-542 (1974).
