The Parsons Game: The First Simulation of Talcott Parsons' Theory of Action
by Stan Rifkin
Contents
Abstract ............................................................................................................v
Acknowledgments.......................................................................................... vi
I. Introduction................................................................................................1
How this study differs from its predecessors.............................................4
Novelty of results.......................................................................................5
Conceptual framework...............................................................................7
Problem ....................................................................................................16
Research question ....................................................................................17
Significance..............................................................................................17
Limitations ...............................................................................................21
II. Literature review.....................................................................................25
Theory, model, and simulation ................................................................25
The theory of action .................................................................................25
Tension management and learning ..........................................................26
Place of Parsons' theory of action in sociology .......................................27
The bad news ...........................................................................................29
Locating this work within all of Parsons'.................................................29
Models......................................................................................................30
Formalization ...........................................................................................32
Time .........................................................................................................34
Process .....................................................................................................36
Simulations of social systems ..................................................................38
Discrete event simulation.........................................................................39
III. Methods....................................................................................................43
Research overview ...................................................................................43
Research methods ....................................................................................43
Delimitations............................................................................................52
IV. The model and simulation........................................................................55
Why simulate? .........................................................................................55
Model construction ..................................................................................56
Basic concept ...........................................................................................57
Model of tension and learning .................................................................58
Operation of the simulation .....................................................................63
Rules ........................................................................................................67
Assumptions.............................................................................................69
Mapping the model to theory...................................................................71
V. Results......................................................................................................76
Example ...................................................................................................76
Base cases ................................................................................................80
Extension..................................................................................................84
VI. Conclusions and recommendations for further study ..............................87
Review of purpose and research question................................................87
Review of findings...................................................................................87
Discussion ................................................................................................88
Implications..............................................................................................91
Recommendations for further study.........................................................93
In sum.......................................................................................................95
Epilogue .........................................................................................................96
References......................................................................................................99
Appendix – Attestation of an Expert ...........................................................114
Appendix – Simulation Program Listing .....................................................115
Figures
Figure 1. Sastry’s "simplified causal diagram of the punctuated change theory." .............4
Figure 2. The components of action systems. ..................................................................12
Figure 3. The action system in relation to its environment. .............................................14
Figure 4. Interchange media (whose paths are represented by arrows) in the general
theory of action.. ..............................................................................................15
Figure 5. Phases in the relationship of a system to its situation. ......................................16
Figure 6. Flow from theory to action. ...............................................................................18
Figure 7. Relationship among process, event, and state (notional). ..................................37
Figure 8. Intersection of the theory of action and system simulation. ..............................43
Figure 9. Classification of social systems simulators, indicating the position of this
research in bold. ...............................................................................................46
Figure 10. Thorngate’s one-armed clock. .........................................................................21
Figure 11. Evolution of computer simulations of organizations. .....................................22
Figure 12. Structure of the three-parameter hyperbolic learning curve model. ...............61
Figure 13. Illustration of a negative exponential distribution as a "forgetting" function..62
Figure 14. User view of dedicated Excel spreadsheet.......................................................64
Figure 15. User view of the simulation. ............................................................................65
Figure 16. User display for example with long window. ..................................................78
Figure 17. Energy for the long window example. .............................................................79
Figure 18. Energy for the short window example. ............................................................79
Figure 19. Base case for affect vs. affect-neutrality..........................................................81
Figure 20. Pattern of internal energy following external with a strong culture. ...............82
Figure 21. Pattern of internal energy following external with a weak culture. .................83
Figure 22. Pattern of internal energy following external energy with very weak culture.84
Figure 23. Simulation after two energetic events per year, both with affect. Illustrates
queuing effects. ................................................................................................85
Tables
Table 1. Works by Parsons, his supporters, and his critics................................................2
Table 2. Additional delimitations of the study ..................................................................22
Table 3. The system dynamics modeling process across the classic literature. ..............57
Table 4. Rules of the simulation........................................................................................67
Table 5. Assumptions made in the simulation...................................................................69
Table 6. Map of the theory to the model. ..........................................................................72
Table 7. Correspondence between what was required and what was developed.............88
ABSTRACT
Talcott Parsons was perhaps the best known American sociologist of the 20th
Century. He postulated a general theory of the structure and function of social systems,
one at all levels of analysis and all levels of abstraction. The center of his theory is action,
which he defined in his own terms of situation, conditions, ends, and norms.
For those familiar with Parsons' work, the creation described here used a digital computer to simulate a very small subset of Parsons' theory of action, including his framework of four functions or functional prerequisites, one of the four pairs of pattern variables, the cybernetic hierarchy, and the interchange media. The simulation was meant to be a proof-of-concept, a toy demonstration of the feasibility of modeling such a complete and richly described social action theory.
Most simulations of social systems use a modeling technique called system
dynamics, a way of characterizing flows and accumulations over time. Other researchers
have tried to simulate the theory of action using system dynamics but have failed. One
contribution of this research is the application of a different technique, discrete event
simulation, to social systems. There are only two published applications of discrete event simulation to social systems. Accordingly, this work offers some insight into how
to incorporate time ordering into reasoning about social systems.
Simulations were executed to demonstrate consistency with outcomes predicted
by the theory. One finding was that Parsons neglected to take into account the disposition
of (motivational) energy transiting through a system or organization when the energy is
blocked by having to wait for the processing of predecessor energy. The effect of this
oversight can be a very long wait for the availability of a prerequisite function and no
guidance on whether, for example, the energy decays, dissipates, waits, interrupts, or is
channeled elsewhere.
ACKNOWLEDGMENTS
"We shape our models and then our models shape us."
- Michael Schrage. (2000). Serious play: How the world's best companies simulate to innovate,
Harvard Business School Press, p. 3.
Several fellow students in the Executive Leadership Program (ELP) cohort have
sustained my enthusiasm, including, but not limited to, Dr. Betty Beene, Dr. Brenda
Conley, Linda Hodo, and Dr. Ted Willey. Dr. Margaret Gorman has been a steady force
moving me ahead and she has been a safe harbor for my administrative challenges. I
always feel like Margaret treats me as a distinctive student, a gift she has for making each
one of us feel special. And her dissertation was breath-taking.
I am grateful to Laura Reid of Simul8 Corporation for attesting to the veracity of
the technical operation of the simulation described in this dissertation and along the way
helping me to improve its operation and correctness. I am also grateful for those who
came before me in the application of computation methods to organizational problems, in
particular Profs. Rich Burton at Duke, and John Kunz and Ray Levitt at Stanford, for per-
sonally helping me to see that engineering tools such as discrete event simulation could
be beneficially applied to social systems. Profs. Kathleen Carley at Carnegie Mellon and
Anjali Sastry at MIT were also instrumental in personally inspiring me to try to apply
engineering methods to the questions of social systems.
Dr. Chris Johnson gave me the courage to undertake the study of Talcott Parsons.
Dr. Johnson is a Parsons scholar and his energy and enthusiasm about Parsons are conta-
gious and set the bar high. He is very, very busy and I am especially grateful that he has
taken on three roles: expert who attests to the mapping of the theory of action to the
simulation model, an outside reader for the dissertation defense, and a friend and col-
league.
I consider myself an adequate researcher, but it took me too long to find Prof.
Tom Fararo, a professor in the School of Sociology at the University of Pittsburgh and
another Parsons scholar. Prof. Fararo, besides being an inspiration in his writing and
interpretation of Parsons, has been a ready and energetic reader of my manuscripts. I am
grateful for his generous expenditure of time and energy on my behalf.
Professors Walter Brown and Robert Hanneman, members of the dissertation
committee, have been generous with their time and energy. They have in their unique
ways given me important, stimulating feedback.
Prof. Dave Schwandt has been my close mentor throughout this long journey. He
is the person who spoke to me at the very beginning about joining ELP. I was struck im-
mediately then by his boundary spanning, openness to people not in his field(s), and how
gentle he was with people like me who knew nothing of human resource development. A
few years ago he introduced me as, "This is Stan. He is the only person I know who has
the whole dissertation in his head!" Prof. Schwandt has been so generous with his time
that I feel guilty. It should not have taken all of this prodding to get me to finish, but I am
slow and Dave has been a patient, steady, and exacting influence. He, too, is a Parsons
scholar and has not been put off by my excursions into what I felt might be important,
while he kept me focused and fixed. He has the gift, too, of making each one of his stu-
dents feel special and unique, and I am forever grateful for his friendship and guidance.
All doctoral work is a family effort. My wife, Jan, from the very first moment we
spoke about the Program and the time it would mean away from her, has cheered me on,
even during the lonely days and evenings she spent as I studied and wrote. Everyone
loves Jan and she, too, has made friends among my ELP colleagues and leaders. She has
made this long journey worthwhile and possible. She also proof-read this final version, a
sacrifice well beyond the pale. I am responsible for any remaining errors, faults, failures,
oversights, defects, and misinterpretations.
I thank all of the people above for gently persuading me to finish.
I. INTRODUCTION
"All models are wrong. Some are useful." - George E.P. Box
Table 1. Works by Parsons, his supporters, and his critics.
By applying this kind of modeling to the study of social systems, a researcher can watch the interactions of social forces reveal themselves with time slowed down or speeded up inside a computer, and can obtain a very detailed understanding of the contributions of structure and of function to the specific observed behavior of the social system.
Also, the replication of theory inside a computer is not a new idea, not even for
theories of organization, as illustrated in early histories provided by Hanneman (1988)
and Garson (1994). The first comprehensive simulation of an organization was probably
A Behavioral Theory of the Firm (Cyert & March, 1963). This work was a tour de force
integration of microeconomics (that is, the setting by a firm of output level and product
price) with organizational goal-setting, choice, and rational decision-making. While the
unit of analysis was the description of a single firm among competitors, it could be
expanded to the description of aggregates of firms and to non-business organizations,
and to the normative analysis of a single firm and of economic policy. Incidentally, while Cyert and March do not cite Parsons' constructs, their work contains many unacknowledged references to them.1
The replication of theory inside a computer is experiencing recent interest in the
social sciences as more researchers come to the social sciences from other, hard science
areas (e.g., computer science, mathematics, psychology) (Burton & Obel, 1995; Carley &
Prietula, 1994; Coleman, 1965; Conte, Hegselmann, & Terno, 1997; Gilbert & Conte,
1995; Gullahorn & Gullahorn, 1963; Hanneman, 1988; Ilgen & Hulin, 2000; Jacobsen &
Bronson, 1995; Jacobsen & Bronson, 1987; Jacobsen & Bronson, 1997; Phelan, 1995;
Prietula, Carley, & Gasser, 1998; Samuelson, 2000). The normal course of research in
computational and mathematical organization theory is to wring structure and function
from a theory, operationalize or animate them, and then make changes in the simulated
environment or the structure/function and watch the computer’s results for interesting,
informative patterns, including validation with respect to the underlying theory. To "ani-
mate" in this context means to bring to life graphically on a computer screen.
For example, Sastry for her Massachusetts Institute of Technology doctoral dis-
sertation, redacted in Sastry (1997), took a detailed, simulation-oriented look at the struc-
ture and operation of how Tushman and Romanelli (1985) explained strategic change.
She was able to show inconsistencies in their explanation, a more parsimonious explana-
tion, and to more clearly reason about cause and effect. She showed, among other things,
that there were loops that reinforced positive or negative feedback, thereby speeding up
or retarding the change, respectively. The lines in Figure 1, below, represent flows of
information, and the noun phrases (e.g., "Strategic orientation required") represent either
inputs or accumulations of values. By simulating the operation of strategic change at the
organizational level, Sastry was able to identify which postulated stores (accumulations)
would be a priority to measure in the real world because of their dominant effects and
which would be a lower priority because they may have only secondary effects.
1. The particular constructs, the four functional prerequisites, are explained later in the text, on p. 11. In Cyert & March
(1963, chap. 6, pp. 114-127) "A summary of basic concepts in the behavioral theory of the firm," there are goals,
expectations, and choices. Regarding organizational goals, e.g., "... we have argued that organizational goals change
as new participants enter and old participants leave the coalition [making the decisions]." p. 115. This is latent
pattern maintenance. Organizational expectations are based on "search," an analog to environmental interface, the
adaptation function. p. 116. And as to organizational choice, "the standard decision rules are affected primarily by
the past experience of the organization and past record of organizational slack," which are references to pattern
maintenance and integration functions, respectively. p. 116.
Sastry accomplished her task by reading the Tushman and Romanelli article
(1985) and coding each passage as applying to a definition, a structure (i.e., static
relationship), dynamic behavior, or not applicable for her study. She collected the state-
ments about structure, for example, and derived constructs consistent with her modeling
choice (system dynamics in that case) and training as a system dynamicist. She con-
structed a computer replica of the static and dynamic components and animated it by
having information from the simulated environment flow along the lines of the diagram.
She then graphed the changes in accumulations and how well the strategic orientation
tracked the required one. She made changes in the flow rates and accumulation rules as
experiments. Her article essentially reports the patterns she observed based on varying
what she postulated were independent variables. In conclusion, she observed through simulation six novel ways in which managing strategic change failed (Sastry, 1997).
The approach of this study was to construct a high fidelity replica of the essential
aspects of the theory of action, so the replica mirrored the elements of action that Parsons
described as indispensable: the situation, conditions, ends, and norms (Parsons, 1968a, p.
44). As well, it modeled time because Parsons’ theory described time-varying behavior:
action by its definition and nature is dynamic.
How this study differs from its predecessors
Sastry (1997) and Jacobsen and Bronson (1985) both applied system dynamics
(Hanneman, 1988), a symbolic representation of systems of differential equations, to
organizational and social studies, respectively. Jacobsen and Bronson (1997), in a paper
summarizing their 15 years of social system simulation, note, "A ... theory we tried to
model was Parsons' General Theory of Action. We chose it for its renown and because of
the controversies around it. We soon found that it could not be modeled at all because of
the unclarity and inconsistencies in Parsons' use of concepts." (p. 98) Their challenge
was to translate elements of Parsons’ theory of action into the standard system dynamics
form of information flows among accumulations. They tried having material flow (the
kind that is physical and is allocated as part of Goal Attainment) be a model of roles or
resources. They tried modeling the way of acting and the method of giving meaning to
actions, but were left wondering where the next generation arose. They tried having three
of the functions concentric around one of them, but that would contradict Parsons' dia-
gram that shows them all interconnected. They tried a causal loop diagram among the
four functions, but it traveled in the opposite direction that Parsons envisioned. They tried
using the four functions as "valves" or controls on the stock of loyalty, power, order, and
goods. They considered pattern variables as "the ranges on which the other concepts
could be measured." In the end they abandoned their modeling effort. (Jacobsen, personal
communication, October 31, 2000).
Parsons (1953a) writes, referring to his theory of action, "Since we are dealing
with processes which occur in a temporal order, therefore we must treat systems and the
processes of their units as changing over time." (p. 167) [italics in original.] "The first
important implication is that an act is always a process in time. The time category is basic
to the scheme." (Parsons, 1968a, p. 45). Jacobsen and Bronson can justify their (failed)
approach by these statements (alone) because they sought to replicate the theory in terms
of time-varying constructs. The present research took a (slightly) different approach and
applied the iconic model, per Jackson’s advice to construct a high-fidelity model
(Jackson, 1983, pp. 4-5)2, not the abstract mathematical one of system dynamics. This
way there was no need to guess what in Parsons’ theory corresponded to the system
dynamics constructs of flows and accumulations (which Jacobsen and Bronson had to).
Instead, in this research there was a more literal translation between the elements of Par-
sons’ theory and the simulation – though the translation was not total, as many, many bits
of the theory were left out. For example, Parsons posits four pairs of "pattern variables"
and this research only models one of them, the one dealing with affect vs. rationality.
In particular, this research will concentrate on the "temporal order" aspect of Par-
sons’ descriptions.
The idea of a mathematical model as theory in mathematical form began to take
hold [in the 1950s]. Writers of variant interests all agreed that such models per-
mitted the logical derivation of falsifiable claims about some class of phenomena.
This differentiated mathematical models from curve-fitting and data analytic
reduction methods. (Fararo, 1984, p. 152)
2. Not all models seek fidelity. One operational measure of fidelity is correspondence: for every important construct in
the world there is a (corresponding) construct in the model. Another term for correspondence might be requisite
variety: for every variation in input there is a corresponding control or regulation such that the output matches the
variation (Ashby, 1956, chap. 11). Some researchers deliberately translate what they sense into frameworks that are
not mirrors of the originals, thereby not seeking correspondence or requisite variety.
what would happen in an organization that faced a high frequency of changes in its environment, too much for it to absorb in the interval in which the changes could be made sense of and acted upon (so-called "permanent white water" (Vaill, 1996)). Would the
changes accumulate, be discarded, decay, or queue? Perhaps a computer acting like a
system of action could shed some light.
What is surprise, what is novelty? Van Fraassen (2002) argues that if science is
objective, then when a scientist sets up an experiment he/she anticipates that the values of
measured parameters will fall within certain bounds; the experimental setup is established
to measure just those parameters and just at those levels. This, after all, is the orthodoxy
of the science being invoked. So what can be regarded as novel that would be sensed
during such an experiment? In part it might be that the objectively measured results
would not match those anticipated by the theory, even though the experimental apparatus
were established within the theory in the first place.
And van Fraassen (2002) reminds us that Kuhn (1970) has struggled with the
same dilemma and concluded that novelty, when it can be sensed, may yield a change in
the orthodoxy – incidentally, still in terms of scientific methods that imply objectivity –
if not the facts of the particular theory; there would be no resort to mythology or religion
(because that would alter the method). So, novelty in van Fraassen and Kuhn's views is
possible and admissible.
Shackle (1969) postulates a calculus of surprise by introducing the notion of
potential surprise.
A man cannot, in general, tell what will happen, but his conception of the nature
of things, the nature of men and of their institutions and affairs and of the non-human world, enables him to form a judgement as to whether any suggested thing can happen. In telling himself that such and such a thing 'can' happen, he means
that its occurrence would not surprise him; for we are surprised by the occurrence
of what we had supposed to be against nature. (p. 67) [italics in original]
Shackle first divides a spectrum into extremes: perfect possibility would not sur-
prise a person and impossibility would engage the absolute maximum degree of surprise
(p. 68). Between them are degrees of possibility with their corresponding inverse degrees
of surprise. While not important for the research here, Shackle posited that the dispersion
of degree of possibility and corresponding degree of surprise is not a probability distribu-
tion, but rather is non-distributional, that is, not a probability function. One can have many events
that are not a surprise and their probability would not sum to unity.
To summarize, Shackle relates the degree of belief inversely with the degree of
surprise: we are surprised by that which we believe cannot happen.
What might surprise look like in the research to be described here? First, the
reader would have to think it was impossible to achieve. To a small subjective degree,
this has happened. When this researcher mentioned to other members of his school cohort
what he was trying to do, many of them expressed doubt that it would be possible. Further,
Jacobsen and Bronson tried it and failed, so there is a hint of impossibility. "Some people
don't believe that models of human behavior can be developed." (Sterman, 2000)
Second, the method of inquiry, a computer simulation, is far less restrictive of an
experimental setup than a traditional laboratory so some behavior might be observed that
was not within expectation, not predicted by the orthodoxy, and therefore would be sur-
prising within the ambit described by van Fraassen and Kuhn.
There are perhaps two more reasons that surprise might be expected, both because
Parsons has written so much. First among these is that it is predictable that there might be
contradictions or gaps in there somewhere, the precise nature of which might generate
surprise.3 And second among these is that so many people have already vetted Parsons'
theory that anything new would be unexpected at this late date.
Conceptual framework
The fundamental framework that informed this research is that of systems. A sys-
tem is a collection of elements and interactions whose structure and function can be
understood by looking at the whole, the summation, the interaction of elements. This
description highlights a tension in systems study. Some scholars infer qualities of the
whole by studying the parts (methodological reductionism, typical in normal science), while others claim that one can never appreciate the whole by dissecting the parts (holism) (Honderich, 1995, pp. 750, 372; Sibeon, 1999).
The approach in this study was somewhere in the middle: the whole was studied
by understanding its parts, but not separately, rather as they interacted and collaborated in
patterns to define the whole. Thus, the emphasis was on how the parts were connected,
what flowed along those connections, and how the interplay of connections and flows
defined an organic whole.
Parsons (1968b) wrote:
Action systems have properties that are emergent only on a certain level of com-
plexity in the relations of unit acts to each other. These properties cannot be iden-
tified in any single unit act considered apart from its relation to others in the same
system. They cannot be derived by a process of direct generalization of the prop-
erties of the unit act. (p. 739) [emphasis in original]
The term energy used in this research denotes the material in the environment of
the system that is external to it and that can be sensed by the system. That is, energy is
the term used to label the elements in the world external to the system under study that
can be used to stimulate the system, that can energize and activate the system to respond
to the environment. Sometimes Parsons refers to this energy as motivation (Parsons et al.,
1953a). Concretely, the energy could be news, ideas, or information, for example. News,
of say an invention or a move by a competitor, could stimulate the system (in our case an
organization) to evaluate the content and respond to what it sensed in the external world.
One important point is that the term energy used here is not the same as that used in
physics; in physics energy is conserved, that is neither consumed nor created, but in the
use here (social) energy may be infinitely created and transformed and possibly even
consumed. Parsons postulated a law of conservation for motivational energy, which in its
central part would claim that motivational energy is exchanged for changes in the system
(Parsons et al., 1953a, p. 168). Alas, the merits of such a law of conservation of social
energy are beyond the scope of this research.
If one considers a unit act to begin with the importation of exogenous energy,
then the social system presented by this research will follow the trajectory of that energy
as it transits the replica of an organization. In order to imagine what the emergent proper-
3. Brownstein (1982) has found contradictions and gaps by converting a portion of the theory of action into a set of
logic statements and showing the inconsistencies therein.
While the unit act is the atomic level, the social system describes social interac-
tion, behavior. Behavior that directly concerned the "cultural level" Parsons called action.
Said another way, relying on Weber, which Parsons often did, particularly related to
action:
Interpretive understanding of social actions is a prerequisite for the causal analy-
sis of social structures and processes. In modern form, we can put it this way:
there is an actual world of events and some events are behaviors. Behaviors are
treated as actions when they are analyzed relative to cultural frames of reference,
according to which the behavior means one or more things to the producer of the
behavior and to the situational interpreters of the behavior. (Fararo & Skvoretz,
1984, p. 148) [italics in original]
Action includes four generic types of subsystems (that is, collections at which unit
acts occur): organism (atomic level, the individual), social system (generated by the
interaction among individual units), cultural systems (patternings of meaning, such as
beliefs and ideas), and personality (the learned patterns of social and cultural interaction)
(Parsons, 1977b, p. 178). We might re-phrase these units of analysis, in order from small-
est to largest, in terms of the sciences that usually describe and study them: biology, the
organism's physics and chemistry, its "atomic" structure; psychology, the individual's
learned behavior and decisions; sociology, the collective structure and action of individu-
als; and anthropology, the national or religious influence.
4. "System seems to me to be an indispensable master concept …." (Parsons, 1977b, p. 101).
5. The non-randomness is the subject of an entire work, Parsons, Bales & Shils (1953b), according to Parsons (1960, p. 195).
Parsons described his theory, in the terms most important for this research, using
several constructs: pattern variables, functional prerequisites, interchange media, and the
cybernetic hierarchy. Parsons' theory was much larger than these constituents, but they
were the ones replicated in this research. It was an untestable (and therefore not falsifi-
able) assumption of this research that the axes mentioned are the core of the theory of
action. Or stated more positively, if one can simulate these elements then the theory of
action can be simulated.
Pattern variables.
Robert Bales, a student of Parsons', was studying small group interaction. Bales'
method of primary research was observation: he would watch actual groups deal with real
situations. He came to see patterns, broadly tasks and emotions. And he saw in groups
that questions about tasks and emotions were asked and answered. He subdivided the
patterns into what he called four problem areas: positive and negative reactions in the expressive-integrative social-emotive area, and questions and answers in the instrumental-adaptive task area. The modern depiction of this is illustrated in Bales (1999, figure 6.1, p. 165).
In a few words, Bales observed small groups and saw patterns in the interactions
among the participants. He saw questions and corresponding answers, he saw attention to
the work or tasks of the group, he saw positive and negative emotions, he saw reactions
to external and internal stimuli, he saw planning of tasks and work processes, and he saw
setting of norms and expectations and the response of performance to them, among
others.
Parsons adopted Bales' framework and adapted it to describe the patterned struc-
ture and function of organizations. He called the original five pairs, later reduced to four, pattern variables (Parsons, 1960):
Each variable defines one property of a particular class of components. In the first
instance, they distinguish between two sets of components, orientations and
modalities. Orientation concerns that actor's relationship to the objects in his
situation and is conceptualized by the two "attitudinal" variables of diffuseness-
specificity and affectivity-neutrality. … Modality concerns the meaning of the
object for the actor and is conceptualized by the two "object-categorization" vari-
ables of quality-performance and universalism-particularism. It refers to those
aspects of the object that have meaning for the actor, given the situation. (p. 468)
[italics in original.]
The purpose of the classification was to suggest propositions about action systems
in terms of the components and the type of act their combination defines and controls. In
this section pattern variables are described, then their patterned movement is described,
and finally the patterned movement is structured to yield what becomes, in the section after this one, the four functional prerequisites.
At base, action is grounded in motivation and emotion or its polar opposite, disci-
pline. The emotional pole is called affect and the discipline or deferred gratification pole
is called affect-neutral. The affect pole is considered expressive and the affect-neutral pole rational or cognitive. Fararo (2001) illustrates the differ-
ence: "In the judge-defendant social relation in American society, in the public trial
situation, the judge is expected to restrain herself from expressing feelings of liking or
not liking the defendant. This constitutes a specific aspect of socially responsible action
expected of a judge." (p. 150)
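To make the affect vs. affect-neutrality pair concrete, the sketch below shows one way this pattern variable could be made operational in the simulation, consistent with the parameters listed later in this chapter (the time spent in each functional area differs depending on whether an energetic event is handled affectively or not). The sketch is illustrative Python, not the dissertation's program; the names and the particular durations are assumptions.

from dataclasses import dataclass

@dataclass
class EnergyEvent:
    magnitude: int      # small integer, ordinal scale
    affective: bool     # True = affect pole, False = affect-neutrality pole

# Assumed service times, in simulated days, for each functional prerequisite
# under each pole of the pattern variable. The numbers are placeholders only.
SERVICE_DAYS = {
    "affective":      {"A": 1, "G": 1, "I": 2, "L": 3},
    "affect_neutral": {"A": 2, "G": 3, "I": 5, "L": 8},
}

def service_time(event: EnergyEvent, function: str) -> int:
    """Days the event spends in a functional area ("A", "G", "I", or "L")."""
    pole = "affective" if event.affective else "affect_neutral"
    return SERVICE_DAYS[pole][function]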
6. "Analytically" is used in the sense of Kant (1896), namely that it is true by definition or logic or deduction, not by
experience (which would be synthetic, inductive, empirical). At one point, Parsons writes (1968a, p. 34), "It is these
general attributes of concrete phenomena relevant within the framework of a given descriptive frame of reference …
to which the term 'analytical elements' will be applied." [italics in original.]
7. "The difference between system and environment has two especially important implications. One is the existence and
importance of boundaries between the two. Thus, the individual living organism is bounded by something like a
'skin' inside of which a different state prevails from that outside it. … The second basic property … is that in some
sense they [organisms] are self-regulating. The maintenance of relative stability, including stability of certain
processes of change like growth …, in the face of substantially greater environmental variability, means that … there
must be 'mechanisms' that adjust the state of the system relative to changes in its environment." (Parsons, 1977b, p. 101)
The Adaptation (A) function imports and filters energy from the external world,
the environment, and, based on the external and internal norms, attaches symbolic
meaning to it. The Goal Attainment (G) function sets goals (that is, ends) and allocates
resources in the service of those goals, based on the symbolic meaning of achieving them.
Integration (I) aligns the structure and function of the organization to the goals in accor-
dance with the resources allocated. Latent Pattern-Maintenance (L) establishes and then
maintains the internal norms. "Latent" is used to refer to something unseen, the opposite
of manifest, and the pattern being maintained is what lay persons call organizational cul-
ture. Parsons (1977b) says:
"The most important single condition of the integration of an interaction system is
a shared basis of normative order. Because it must operate to control the disrup-
tive potentialities (for the system of reference) of the autonomy of units, as well
as to guide autonomous action into channels which, through reinforcement,
enhance the potential for autonomy of both the system as a whole and its member
units, such a basis of order must be normative." (p. 168) [italics in original]
Figure 2 also illustrates a collateral point: Parsons set out to develop a grand uni-
fied theory that could be applied up and down the units of analysis, from individual to
collective to culture. Accordingly, each of the four functional prerequisites can be sub-
divided into four more functional prerequisites, and so on infinitely. The figure shows the
first two divisions, one at the systems level and then the next at the level of each func-
tional prerequisite. Note that the lower right quadrant, the one corresponding to Integra-
tion, contains the four functions in the same order as the square containing it. This illus-
trates the importance and centrality of Integration, as indicated by the quotation in the
paragraph above. And it also illustrates that the diagrams can be used to designate differ-
ent levels of abstraction or granularity.
The conceptualization of the pattern variables potentiated Parsons' understanding
of the four functional prerequisites because they all fit together so harmoniously.
Cybernetic hierarchy.
Figure 3 is the same as Figure 2 in the sense that it contains the same 16 pattern
variable combinations (listed in the upper right hand corner of each box), but the rows
and columns are arranged differently; it is not necessary to understand everything in the
figure. The rows (i.e., functional prerequisites) are now in the order L-I-G-A, and the
columns in the order L-I-A-G. Note along the left edge that there is a direction of control
and a direction of limiting conditions. These are the cybernetic hierarchy of control. The
organization is controlled, first and foremost, by its internal norms. The norms even control which energy is imported and the sense that is made of it; which particular energy is imported and what particular sense is made of it depend upon the value ascribed to the norm. Therefore, L is the most controlling and A the least.
Each cell categorizes the necessary but not sufficient conditions for operation of
the cell next above it in the column, and in the opposite direction, the categories of
each cell control the processes categorized in the one below it. For instance, the
definition of an end or goal controls the selection of means for its attainment
(Parsons, 1960, p. 477).
Interchange media.
Parsons next postulated the means by which the 2 x 2 quadrants intercommuni-
cated. Each quadrant is a function and, to form a system, it communicates to and is com-
municated from each other one. As one can see in Figure 4 there are 12 such paths (lines
with arrowheads to and from each of the four functional prerequisites); it is not necessary
to understand everything in the figure. He called the paths interchange media and along
them pass symbols, not (usually) physical objects. That is, each function produces and
consumes symbols, and that is how each function intercommunicates. One particularly
salient depiction of the interchange media is Figure 5, in which a cycle or phase move-
ment is illustrated; note the (clockwise) sequence 1, 2, 3, and 4 among the functional pre-
requisites in AGIL order. It is not necessary to understand everything in the figure, only
the order in which the phases occur with respect to the situation of the organization.
Figure 5. Phases in the relationship of a system to its situation: Adaptation, Goal Attainment, Integration, Latent Pattern Maintenance.
The intuition is that the Adaptation function scans and senses the external envi-
ronment and might find some information there that could be imported as energy and
passed along (via the interchange medium) to the Goal Attainment function. The Goal
Attainment function then might use that information either to change its goals or to
change its resource allocation. These changes, expressed symbolically as new goals or
new resource budgets, would travel along an interchange medium to the Integration
function. The Integration function would then decide how best to arrange the elements of
the organization in order to achieve the goals in light of the resources. One can imagine,
for example, goals around improved quality and productivity and these would get trans-
lated by the Integration function into organizational entities (e.g., VP of Quality or the
Quality Improvement Department), job descriptions, new methods of rating job and unit
performance, new methods of incenting the desired behavior, new methods of recruit-
ment and advancement, and new training. In turn these new structures and functions
would activate the Latent Pattern Maintenance function via an interchange medium and
the L function would respond, principally by trying to reset the organization to the status
quo ante. L communicates via interchange media connected to the other three func-
tions.
The L function works internally by manipulating a construct called tension, which
is the difference between what the organization aspires to (expressed by goals and resource allocation, that is, the Goal Attainment function) and what it achieves. When achievement is low with respect to aspiration and the environmental situation, the L function is more controlling: it tries to track more closely the energy being imported so that it can
match the organization to the environment. Symmetrically, when there is little difference
between aspirations and achievement, that is, when tension is low, then the L function is
less controlling, more "quiet."
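The tension mechanism just described can be paraphrased as a small computation. The sketch below is an interpretation in Python, not Parsons' own formula: it assumes that tension is simply the gap between aspiration and achievement and that the L function's degree of control rises, with diminishing returns, as that gap grows. The saturating form and the sensitivity parameter are this sketch's assumptions.

def tension(aspiration: float, achievement: float) -> float:
    """Difference between what the organization aspires to and what it achieves."""
    return max(0.0, aspiration - achievement)

def l_control_level(aspiration: float, achievement: float,
                    sensitivity: float = 1.0) -> float:
    """Return a value in [0, 1]: near 1 when tension is high (L tracks the
    imported energy closely), near 0 when tension is low (L stays "quiet")."""
    t = tension(aspiration, achievement)
    return t / (t + sensitivity)

# High aspiration, low achievement: L is strongly controlling (about 0.89).
print(l_control_level(aspiration=10.0, achievement=2.0))
# Aspiration nearly met: L is relatively quiet (about 0.33).
print(l_control_level(aspiration=10.0, achievement=9.5))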
Recapitulation.
To recapitulate the structure of the theory of action, the 16 possible combinations
of the four pairs of pattern variables gave rise to the 2 x 2 arrangement at the next higher
level, the so-called AGIL framework that captures the functional prerequisites that every
organization has to address to establish and sustain itself. The quadrants communicate via
interchange media and there is a priority of control in that communication, in accordance
with the cybernetic hierarchy.
As stated in Section I, Introduction, no one knows a priori how much or little is
needed to simulate a particular social system. The researcher speculated that in order to
simulate the theory of action there must be at least representatives of the pattern vari-
ables, functional prerequisites, interchange media, and cybernetic hierarchy.
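The minimal representation just listed can be restated compactly in code. The sketch below, in Python, encodes the four functional prerequisites, the cybernetic ordering in which L is most controlling and A least, the A-G-I-L phase order in which imported energy transits the system (per Figure 5), and the single pattern-variable pair modeled here. It is a restatement of the text for orientation, not the dissertation's actual simulation program.

from enum import Enum

class Function(Enum):
    A = "Adaptation"
    G = "Goal Attainment"
    I = "Integration"
    L = "Latent Pattern Maintenance"

# Cybernetic hierarchy of control: L is the most controlling, A the least.
CONTROL_ORDER = [Function.L, Function.I, Function.G, Function.A]

# Phase order in which imported energy transits the system.
PHASE_ORDER = [Function.A, Function.G, Function.I, Function.L]

# The one pattern-variable pair modeled in this research.
PATTERN_VARIABLE = ("affect", "affect-neutrality")

def controls(x: Function, y: Function) -> bool:
    """True if x sits above y in the cybernetic hierarchy of control."""
    return CONTROL_ORDER.index(x) < CONTROL_ORDER.index(y)

assert controls(Function.L, Function.A)   # internal norms control adaptation
assert PHASE_ORDER[0] is Function.A       # energy enters via Adaptation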
Problem
The interface between description of systems and social action was informed by
soft systems methodology (Checkland, 1999; Checkland & Scholes, 1999). Checkland
realized that many "hard" engineering projects fail because they do not take into account
the "soft" factors that humans introduce, such as the power structure around the project.
He offered a step-by-step method for integrating hard systems and soft ones. His was a
pioneering effort to integrate social and engineered systems, and was a source of inspira-
tion for this research.
However, present computer simulations require specific details about the structure
and behavior of variables and the interactions they animate. In addition, much of the
work in social systems is limited in the kind of detailed description, what Checkland calls "rich description," that simulation requires (Checkland & Scholes, 1999, p. 45). Thus, one of the problems is that either
sufficient data for detailed description or a comprehensive and appropriately complex
theory (e.g., one with requisite variety (Ashby, 1956)) needs to be found, and then one
needs to see if it is sufficient for a computer simulation to be constructed and operated.
More specifically, can Parsons’ large body of descriptive text be understood? Can
the salient factors (structure and function) be extracted? Finally, is it possible to instanti-
ate, make concrete, those salient factors so that a high fidelity representation of the
descriptive theory of action can be constructed?
Even if the questions could be answered, one is left with: Are there any novel
insights? Is there anything useful to be learned? Can anything significant be predicted?
Can the simulation predict something on which Parsons is silent? Is it possible to obtain
enough confidence in such an undertaking that it could function as "the right answer"?
Asked a different way, "Is it possible to develop a laboratory replica of the theory of
action, and if it is then can anything interesting be inferred from operating it"?
In addition, there is no published attempt that successfully simulates any part of
Parsons’ theories. Also, there are few published applications of discrete event simulation
to social systems. Therefore, this contribution is an early and humble set of results in the
use of that tool, to be added to that scarce literature.
Research question
The question guiding this research was "What is the minimum set of structures
and related functions that can simulate Parsons' theory of action to some criteria of
validity?" That is, what was the most parsimonious selection of theory of action con-
structs that, when animated, achieved a given level of fidelity? Can the theory of action
be simulated using only the functional prerequisites, (one pair of) the pattern variables,
(four of) the interchange media, and the cybernetic hierarchy of social control?
Significance
This study contributes to the three areas traditionally addressed by social science
research:
• Theory building – This work may enrich Parsons’ description by making con-
nections that Parsons did not, for example between the frequency of changes
in the environment and the rate at which change can be sensed and incorpo-
rated by an organization. In addition it may identify gaps in description, at
least gaps needing to be filled in order to simulate. In addition, this study will
contribute to the evolution of applying Parsons’ theory to additional contexts,
following a long tradition (Black, 1961; Etzioni, 1975).
• Methods – This work may add methods of translating theory statements into
structure and function constructs. These constructs can in turn act as testable
hypotheses amenable to a range of theory validation techniques. It also may
help to make the case for additional study of time-varying research.
• Theory (of science) – This work may add a brick in the discussion of where
simulation fits into the practice of science: it is a tool of theory understanding,
a tool for theory building, and/or a tool for theory testing.
The reason "game" appears in the title of the dissertation is that there is something
of a game that the user of the simulation described in this research can play by varying
the inputs and seeing what an execution will produce.8
Simulation
The conceptual framework for constructing a simulation from descriptive text
emanates from the flow from theory to action, Figure 6.
Figure 6. Flow from theory to action: Theory → Model → Constructs → Variables → Data → Analysis → Action.
Theories are our ontologies; they are the bases for our beliefs about what we can
know for sure (epistemology) and what constitutes valid activities to seek certainty
(methodology). We extract from theories various features and organize them, calling that
organization a model, which is the theory with some parts left out (that is, the translation
from the theory to the model is incomplete). The features and organization are at a level
of abstraction, usually the highest, the one with the largest blocks and thickest lines
between them. Sometimes collections of the blocks and connections are named or
renamed. The thing renamed is called a construct. For example, we use the term "orienta-
tion" to identify the performance and learning perspectives of Parsons theory of action.
We invent the term "orientation" to be used in that sense. Constructs in turn are com-
posed of variables, factors that take on different values, that is, that eponymously vary.
The collection of values is called data, which are analyzed so that inferences about
actions can be obtained.
The description of Parsons' theory of action forms the theory referred to in Figure
6. The model in that figure is the same theory but only with certain (not all) elements and
connections and is the subject of this research. As stated above, one should at least be
able to discern in the model to be presented the pattern variables, four functional prereq-
uisites, interchange media, and cybernetic hierarchy because they are the cornerstones of
the descriptive theory.
8. Using the terms described in the Conceptual Framework section, the "game" is to see whether latent pattern
maintenance will follow the input energy.
The goal of any simulation is to animate the elements, to put the time-varying
elements onto a "canvas" or work space where their "movement" through time can
somehow be visualized. In this research the canvas is a computer screen with a drawing
resembling Figure 5 on it and "behind" the picture, in a way not seen by the user, the
computer simulates the path of energy entering the organization and transiting in turn
through the AGIL cells. Details are provided in Chapter III.
The simulation represents a set of choices – and inventions.9 Explicating what
choices are available and what choices were made and why is the subject of this sub-
section. Fararo and Hummon (1994, pp. 29 ff), mirroring Fararo (1989, ch. 2), provide a
conceptual framework for presenting the choices and for making clearer which parts of
the simulation are provided by Parsons and which are provided by the researcher. There
are six "key menus" that have to be selected and explained (these are categories and their
scales):
i. State space: categorical or continuous
ii. Parameter space: categorical or continuous
iii. Time domain: discrete or continuous
iv. Timing of events: regular, incessant, or irregular
v. Generator: deterministic or stochastic
vi. Postulational basis: equations or transition rules
The state space is the cross product – the combination – of all valid values of all
variables, including how the "boxes" are connected and what flows among them. In the
instant case the space is made up of category values, not continuous ones. For example,
the Adaptation function is connected to the Goal Attainment function; both of these
functions are categories, as is "connected." Parameter space is the cross product of all
fixed properties of the system. In this case, parameters include, but are not limited to:
• The magnitude of energy entering the system – A small integer, ordinal scale.
• Energy threshold; value that has to be compared and if true then the energy passes
into the system – Same units as the magnitude of energy.
• Sense of the comparison test – Category: >, >=, =, <, <=
• Whether the energy will be dealt with affectively or not – Boolean.
• Time to be spent in each functional area if affective – A quantity of simulated
time; without loss of generality time is represented as a positive integer.
• Time to be spent in each functional area if not affective – A quantity of simulated
time.
• With respect to learning and forgetting:
o Value of prior learning – A quantity of simulated time.
o Time to reach the current pattern – A quantity of simulated time.
o Time since the last change – A quantity of simulated time.
o Starting value of Latent Pattern Maintenance energy – Same units as magni-
tude of energy
• Length of time to simulate – A quantity of simulated time.
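To make the parameter space above concrete, the sketch below collects the listed parameters into a single configuration object in Python. The field names and the default values are placeholders chosen for illustration; they are not the values used in the dissertation's runs.

from dataclasses import dataclass

@dataclass
class SimulationParameters:
    energy_magnitude: int = 5            # magnitude of energy entering the system (ordinal)
    energy_threshold: int = 3            # threshold compared against the magnitude
    comparison: str = ">="               # sense of the comparison test: >, >=, =, <, <=
    affective: bool = True               # whether the energy is dealt with affectively
    days_per_function_affective: int = 2 # time in each functional area if affective
    days_per_function_neutral: int = 5   # time in each functional area if not affective
    prior_learning: int = 10             # value of prior learning (simulated time)
    time_to_current_pattern: int = 30    # time to reach the current pattern
    time_since_last_change: int = 0      # time since the last change
    initial_l_energy: int = 5            # starting Latent Pattern Maintenance energy
    horizon_days: int = 365              # length of time to simulate

    def admits(self, magnitude: int) -> bool:
        """Apply the threshold test that gates energy into the system."""
        return {">":  magnitude > self.energy_threshold,
                ">=": magnitude >= self.energy_threshold,
                "=":  magnitude == self.energy_threshold,
                "<":  magnitude < self.energy_threshold,
                "<=": magnitude <= self.energy_threshold}[self.comparison]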
9. This point is important because we define model as a subset, an incomplete isomorphism, of the theory. So the model
cannot contain anything invented. But the simulation might in the service of finding a computer method of
replicating the elements, structure, or flow of the model. And that is what is meant by the additional clause
"inventions."
The time domain is described in discrete units. Parsons did not indicate what reasonable time values were, so the researcher assumed that the basic unit was one day. Accordingly, one day passed for every click of the simulated clock, and all clicks occurred at integer multiples: time is discrete, not continuous. The timing of events was irregular and depended upon what had happened before. In fact, the simulation clock did not click at a fixed rate; rather, it moved ahead to the time of the next event, and in general that interval cannot be known a priori. The state variables are defined only at the discrete, integer time units.
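The next-event time advance described in the preceding paragraph can be sketched as follows. This is a standard discrete event mechanism written in Python for illustration, not a listing from the appendix; the class and method names are assumptions.

import heapq

class EventClock:
    def __init__(self):
        self.now = 0          # simulated days (integers)
        self._events = []     # heap of (time, sequence number, action)
        self._seq = 0         # tie-breaker so the heap never compares actions

    def schedule(self, delay_days: int, action):
        heapq.heappush(self._events, (self.now + delay_days, self._seq, action))
        self._seq += 1

    def run(self, horizon_days: int):
        while self._events and self._events[0][0] <= horizon_days:
            self.now, _, action = heapq.heappop(self._events)  # clock jumps ahead
            action()  # the action may schedule further events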
By the generator of the process, Fararo and Hummon (1994, pp.30-31) signify the
means or mechanism by which the system produces changes in its state. The two mecha-
nisms are by rolling the dice or determinism. Rolling the dice, or having the transition be
probabilistic, can be accomplished in discrete event systems; in fact, any probability dis-
tribution can be imitated. Deterministic means that there is certainty (probability = 1) that
a state changes from the current one to the next. In the research described here, the tran-
sitions were deterministic, there was no randomness to selecting the next state.10
By postulational, the authors mean the mechanism by which transition to the next state is specified. Typically, in discrete event simulation the next state depends directly upon the current state and the transition rules. For the research described here, the primary transition rule was: when it is time for energy to move from one functional area to the next, the system attempts the move; if the next functional area is already occupied, the energy is placed in a corresponding queue instead; otherwise the energy moves into the (empty) functional area.
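A minimal sketch of that primary transition rule (illustrative Python; the names are invented, and this is not the code of the simulation itself):

from collections import deque

occupant = {"Goal Attainment": None}       # who, if anyone, occupies each functional area
waiting = {"Goal Attainment": deque()}     # the corresponding queue for each area

def move_energy(energy, destination):
    # Attempt the move; queue the energy if the destination is already occupied.
    if occupant[destination] is None:
        occupant[destination] = energy     # the (empty) functional area receives the energy
    else:
        waiting[destination].append(energy)

move_energy("bundle-1", "Goal Attainment")   # occupies the area
move_energy("bundle-2", "Goal Attainment")   # waits in the queue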
Which of the foregoing has been described by Parsons and which was
invented/created by the research? The categorical state space has been specified in Parsons, Bales and Shils (1953a), and so has much of the parameter space, though the actual values of the parameters were assumed in the research; without loss of generality any user of the simulation can change any of the parameter values, so this invention does no
violence to the structure of the theory. That the time domain is discrete is a computational
convenience and is not suggested one way or the other by Parsons. The timing of events
as irregular is consistent with Parsons, Bales and Shils (1953a) and the other two options
(regular and incessant) would not be. The deterministic generation of next states is im-
plied in Parsons, Bales and Shils (1953a) because there are no probabilities mentioned or
suggested. And the postulational basis is clearly not equations, so transition rules are im-
plied.
Therefore, to create a model to represent the dynamics of Parsons' scheme we
developed a system that managed energy in discrete units and moved those bundles of
energy through the processes in accordance with the AGIL framework and governed by
the feedback and control hierarchy. Specifically, we envisioned a concrete organization
that processed inputs from its environment, though perhaps Parsons would have argued
for the generality of the processes at every level of analysis.
10
Here is an example of a probabilistic transition. Imagine a simulation of a retail store, such as a grocery. A shopper would select a random number of items to purchase, and that number would put the shopper into the cashier line for the appropriate number of items, such as the regular line or the 15-items-or-less line. It would be impossible to know in advance how many shoppers would end up in the 15-items-or-less line because the selection is random and generated during the execution of the simulation.
Limitations
The limitations of a study are those characteristics of design or method that set
parameters on the application or interpretation of the results; that is, the constraints on
generalizability and utility of findings that are the result of the devices of design or
method that establish internal and external validity. In a quantitative study the most
obvious limitation would relate to the ability to draw descriptive or inferential conclu-
sions from sample data about a larger group.
There are two viewpoints that both properly characterize this first attempt at simulating Parsons' theory of action:
1. Thorngate (1976), in attempting to describe the range of explanatory power of theory,
drew the one-armed clock in Figure 7. He stated that a particular model or theory can-
not simultaneously be general, simple, and accurate. Rather, the researcher must trade
among those outcomes. Clearly, the research described here was simple, so it was
neither general nor accurate. Accordingly, the results will have to be used with great
care (not general) and will not apply to any actual social system (not accurate).
Figure 7. Thorngate's one-armed clock (labels: General, Accurate, Simple).
2. Thomsen, Levitt and Kunz (1999) suggested that simulations go through stages,
Figure 8. The first stage is to build a "toy" to see if the simulation can even be built
and whether it will have interesting properties. Again, clearly that was the stage of the
simulation presented here. Accordingly, the results are vigorously disclaimed as a
modest first attempt, really a toy, that may not be applicable to any set of facts, but
rather should be seen as a foundation to be enhanced and expanded. Indeed, some
elements of the simulation were given arbitrary values in order to achieve simplicity
and the arbitrariness detracts from the significance of the outcome (Fararo &
Hummon, 1994).
This theoretical approach to models [theory in mathematical form] included the
idea of "successive approximation" articulated in sociological theory first by
Comte, then by Pareto, and later stressed by Homans. Models were not
expected to be correct in every detail nor to cover the entire potential scope of
interest in a class of phenomena. They were to be modified and generalized (in a
formal sense) over time. Even though such a model might include entities and
processes not presently observable, the logical connections among ideas in a con-
ceptual network assured that the theory was testable. (Fararo, 1984, p. 152)
when applied to a holistic theory as Parsons purported his to be. The challenge was to maintain the holistic nature of the theory of action and simulate something that is whole. The next level down of this challenge is that Parsons described the functional components of the theory of action in terms that looked like tokens that travel along wires (media of interchange) among nodes (functional prerequisites). Therefore, the simulation can have the appearance of rats in a maze because at some level that resembles Parsons' description.
mit manipulation or reporting at the atomic (what Parsons calls the unit) level. This is consistent with Parsons' view that the unit act cannot be viewed by itself but rather in a much, much larger context.
Simulation often postulates a sequence of states through which the system passes. Simulation, then, presents the states that were encountered, but not all of the possible states.15 That is, simulation cannot give the richness that a grammar or contingent approach could (Fararo, 1984, p. 146).
Noted as the nature of simulation vs. a production system (i.e., grammar) orientation (see Fararo & Skvoretz (1984) for an example of the production system approach, described above beginning on p. 32).
It may be worth mentioning that a significant limitation is that elements outside of
Parsons' theory are outside of the simulation of it.
15
This is the same distinction in biology between ontogeny (an individual instance seen in nature) and phylogeny (all possible instances for a species), and between phenotypes (the expression of genes found in an instance) and genotypes (everything that is possible genetically).
16
One sight gag is when the protagonist, Neo, gives some contraband diskettes to "clients." He hides those diskettes in
a hollowed out edition of Baudrillard (1995).
Parsons, 1968b). "The most immediate interpretative thesis was that the four – and they
did not stand alone – had converged on what was essentially a single conceptual scheme.
In the intellectual milieu of the time this was by no means simple common sense."
(Parsons, 1977a, pp. 25-26)
The conceptualization that Parsons created flowed from his observation that the
theories of Marshall, Pareto, Weber, and Durkheim had in common an action system,
first suggested by Weber and then elaborated by Parsons; the result of the conceptualiza-
tion was Parsons’ theory of action (Parsons, 1968a). That is, what these seemingly dispa-
rate writers described in common was a system of actions, human actions that had pat-
terns that could be described in accordance with a framework. Parsons has said that
scientists of his era were informed by the progress in the conception of systems using
mechanics and physico-chemistry (Parsons, 1977a, p. 27). In those disciplines one starts
at the atomic level and defines what is meant by a "unit."
Accordingly, Parsons started by defining the "unit act." (Parsons, 1968a, pp. 43
ff):
(1) It implies an agent, an "actor." (2) For purposes of definition the act must have
an "end," a future state of affairs toward which the process of action is oriented.
(3) It must be initiated in a "situation" of which the trends of development differ
in one or more important respects from the state of affairs to which the action is
oriented, the end. This situation is in turn analyzable into two elements: those
over which the actor has no control … and those over which he has such control.
The former may be termed the "conditions" of action, and the latter the "means."
Finally, (4) there is inherent in the conception of this unit, in its analytical uses, a
certain mode of relationship between these elements. That is, in the choice of
alternative means to the end, insofar as the situation allows alternatives, there is a
"normative orientation" of action. (p. 44)
fore an adequate model of learning has to account for accumulation. In essence Nembhard and his collaborators studied possible descriptions of the gain in productivity due to learning in a factory and tried to fit it to a mathematical model, evaluating 11 models
against actual factory floor data (Nembhard & Uzumeri, 2000). The authors used almost
4,000 data points to test the models, which included all of the popular ones, such as
exponential, log-linear, and S-curve. One model fit best under a broad set of criteria, the
three-parameter hyperbolic learning curve. It is described – and applied – in the Model
section on page 58.
Place of Parsons' theory of action in sociology
The purpose of this section is to place Parsons' theory in the spectrum of socio-
logical theories of the time and to address a few of the many criticisms, particularly those
applicable to the research described here.
• Action systems as unification. Finding a unifying thread in such diverse theories
as those of Marshall, Pareto, Weber, and Durkheim was a breakthrough of major
proportions. Later Parsons added Marx and Freud, no small accomplishment. The
unification put Parsons on the map in sociological theory and he spent the rest of
his professional life refining the theory of action.
• Structural-functionalism. Structural-functionalism is a school of thought within
sociology that concentrates on describing social systems by describing their
(static) structures and (dynamic) functions. Structural-functionalists tend not to
address questions of how the structures or functions arise nor whether some are
better than others.18 In the framework of Burrell and Morgan (1979), structural
functionalists are more interested in the sociology of regulation than of radical
change, more interested in objectivity than subjectivity, and "tend to assume that
the social world is composed of relatively concrete empirical artefacts and rela-
tionships which can be identified, studied and measured through approaches
derived from the natural sciences. The use of mechanical and biological analogies
as a means of modeling and understanding the social world is particularly
favoured" (p. 26). Parsons (1977b) wrote:
"I well remember at a meeting of the International Sociological Associa-
tion, held in Washington D.C. , in 1961, [Robert] Merton very cogently
made the point of objecting to the phrase 'structural-functionalism.' He
particularly did not like having it labeled an 'ism' and suggested that the
simple descriptive phrase "functional analysis" was more appropriate. I
heartily concur in this judgment." "The two concepts 'structure' and 'func-
tion' are not parallel. … [T]he concept 'structure' does not stand at the
same level as that of function, but at a lower analytical level. It is a cog-
nate with the concept of 'process,' not function. Sometimes, the levels are
consolidated or fused by reference not to functions but to functioning.
From this point of view, the verb form may be considered to be a synonym
for process. We do not wish to hypostatize structure. It is any set of rela-
tions among parts of a living system which on empirical grounds can be
assumed or shown to be stable over a time period…. Thus, … the concept
18
These questions are not usually considered part of positivism, of which structural-functionalism is a school.
Positivists try not to go beyond what can be verified, lest their work be considered metaphysics and religion (Keat &
Urry, 1982, p. 5).
It might be worth mentioning that the whole idea of functional analysis and its
related synonyms had come into question in the heyday of Parsons and his collaborators.
The central issues were questions of what a theory is and whether it has to be empirically verifiable or can instead be a framework, a naming of the parts. Davis (1959) argues that investigating the functions and functioning of a social system is not a special method, does not need a special method, and is not a school of thought.
• Homeostasis. The structural-functionalists have been criticized for postulating sta-
ble structures, for not taking account of social revolution, of a set of norms that
aim to upset the status quo. Clearly, Parsons admired and sometimes quoted
biologists describing homeostasis, the dynamic balance of elements inside an
organization/organism and balance of the organization/organism with respect to
changes in its environment (Parsons, 1977a, p. 28). While the structural-
functionalists do not, indeed, emphasize upsetting the legitimization mechanism
of social entities, they do not obviate it either. Moore (1959) wrote, "I have come
to the personal conviction that for most purposes the equilibrium model of social
systems must be abandoned, as leading to too much distortion, particularly in
treating change as external, accidental, and in any event regrettable." (p. 718) A
balanced discussion, relying on cybernetics (à la Ashby (1956)) and the kind of
control Parsons characterized, can be found in Cadwallader (1959). Parsons him-
self (1977b) wrote:
"[Functional analysis] has nothing essentially to do with judgments about
the specific balances between elements of integration in social systems
and elements of conflict and/or disorganization. … Biology does not have
two basic theoretical schemes, a theory of healthy organisms and one of
pathological phenomena in organisms, but health and pathological states
are understandable in basically the same general theoretical terms. … A
related polemical orientation is the claim frequently put forward that
'functionalists' are incapable of accounting for social change: that is, their
type of theory has a built-in 'static' bias. This is also entirely untrue. If we
have any claim to competence as social scientists, we must be fully aware
that there are problems both of stability and of change, as there are prob-
lems of positive integration and malintegration." (pp. 108-109)
Perhaps the most succinct critique was Berger and Zelditch (1968), which took
Parsons to task on four grounds in 4+ pages. In the context of a book review, their ques-
tions were: (a) had there been an improvement in confirmation status, had there been empirical studies confirming the theory of action; (b) was there increased rigor in the framework or its arguments, had it become more logically structured; (c) had the theory become more precise, more accurate; and (d) had the scope increased in order to make the theory more general?19 In every case the authors believed that Parsons failed. They went on to ask what the importance of Parsons was, why he was (still) read. They concluded that there were several reasons, among them admiration for the ambitiousness of his enterprise and his value as an "inexhaustible source of ideas" (p. 450).
Locating this work within all of Parsons'
A small subset of everything written about the theory of action by Parsons and
others was sought that could form the basis for animating the theory. Accordingly, in
harmony with the state of simulation as reviewed in the next large section, rich descrip-
tions were sought that illustrated the structure and function of the theory of action. That
is, detailed descriptions of static structures and dynamic (time-varying) functions or proc-
esses were sought. When such descriptions were found, the same processes that Sastry used were applied, namely parsing them into simulation constructs. In a sense this operation is a culling of a specific description of the theory of action in order to find the passages best suited to the narrow purpose of simulation.
19
The comparison to Thorngate's one-armed clock is palpable.
As stated on p. 48, one writing stood out with respect to this search: Parsons,
Bales and Shils (1953a). The volume in which the chapter appears, Parsons, Bales and
Shils (1953b), is an historical account of the development of the theory of action, and
Parsons, Bales and Shils (1953a) is the last chapter, therefore the most recent in the
collection. The chapter traces the path (Parsons et al. (1953a, p. 167) called it an orbit) of
energy moving among the four functional prerequisites in accordance with the pattern
variables. It is a step-by-step description of how energy enters a social system and trav-
erses the four units, possibly transforming the unit or itself or attributes of the social sys-
tem as it is passed from unit to unit.
Table 6, below on p. 72, presents in some detail the description of the transit of
energy through a social system described in Parsons, Bales & Shils (1953a) and the
corresponding elements of the simulation.
Models
Young journalist (YJ): Why do you work with models? Why don't you work with
the real world?
Albert Einstein (AE): Are you married, young man?
YJ: Yes.
AE: Do you have a picture of your wife?
YJ: (Fetches his wallet and digs around, finally producing a photo and handing it
to AE.) Yes, here.
AE: (Looks at the photo for a moment and hands it back.) She is rather small.
- Ronald W. Clark. (1971). Einstein: Life and times. New York: World Publishing.
Model ships appear frequently in bottles; model boys in heaven only. Model ships
are copies of real ones. Asked to describe a ship, we could point to its model. A
model boy, on the other hand, having no earthly counterpart, is everything a boy
ought to be. (Brodbeck, 1959) [italics in original]
of declarative sentences." (Brodbeck, 1959) The sentences contain two kinds of words in
them: names for characteristics or attributes or events, and for relations among them.
Characteristics are the name of some state, such as grass that is in the state where its color
attribute has the value of green. Relations require at least two individuals or operands,
such as before, faster, and between. Some terms, such as north, fast, and first, appear to
be about a single individual, but are in fact in relation to some standard, and therefore
about at least two elements. While sentences may contain only attributes and relations,
the subject matter, the content, differs from one scientific discipline to another.
Sentences may have meaning because of the facts, the content, or they may have
meaning no matter what their content. For example, "He is tall and he is blond" is of the
form "X is A and X is B." One can speak of the truth value of either sentence, but in our
example the truth value of the first will be the only one we would be able to ascertain,
based on whether it were true that the subject was both tall and blond. In other words, the
truth value of the form cannot be known unless we know the values of X, A, and B. If we
can know the truth value of a form, then it is called a logical truth because it is true for all
possible values of the variables; it is also called tautological or analytic. An example is X
= X, because this is true for all possible values of X in the sense that we usually give to
the equal sign. "Sentences whose truth depends upon their descriptive words as well as on
their form are called empirical statements, or also contingent or synthetic." (Brodbeck,
1959) [italics in original]
Perhaps the most common class of logical truths is arithmetic. All statements in
arithmetic are true by definition, by form, not because we examine the subject matter of
the sentences and from them deduce the truth value.
A concept is a term referring to a descriptive property or relation. A fact is a par-
ticular or specific thing, characteristic, event, or kind of event. To state a fact is to state
that a concept has an instance. Facts are significant when they are connected with other
facts to form generalizations or laws. "A law states that whenever there is an instance of
one kind of fact, then there is also an instance of another. Laws, therefore, are empirical
generalizations." (Brodbeck, 1959) A theory is a deductively connected set of laws. Some
of the laws, called axioms or postulates of the theory, imply others, called theorems. Axi-
oms are presupposed; their truth is taken for granted, at least for the purposes of the exercise of seeing what else is true if they are.
"Two theories whose laws have the same form are isomorphic or structurally
similar to each other. If the laws of one theory have the same form as the laws of another,
then one may be said to be a model for the other." (Brodbeck, 1959) [italics in original]
How does one know if one theory is the model of another? One puts the second into one-
to-one correspondence with the first. If the forms are the same and the relations are pre-
served, then they are isomorphic. That is, one translates the form of the second into the
first and then ascertains whether the truth of the relations is preserved. If it is, then the
translation demonstrates the isomorphism between the theories, and the second can be
said to be a model of the first.20
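The check just described can be made mechanical. The following sketch (Python, with toy domains and a single relation invented for illustration) puts one "theory" into one-to-one correspondence with another and asks whether the relation is preserved:

# Toy domains with one binary relation each.
R_first = {("x", "y"), ("y", "z")}           # relation in the first theory
R_second = {(1, 2), (2, 3)}                  # relation in the second theory
translation = {"x": 1, "y": 2, "z": 3}       # proposed one-to-one correspondence

def preserves(translation, R_first, R_second):
    # Translate every related pair and check that relatedness is preserved both ways.
    image = {(translation[a], translation[b]) for a, b in R_first}
    return image == R_second

print(preserves(translation, R_first, R_second))   # True: the second is a model of the first

If the test fails for every candidate correspondence, the two theories are not isomorphic.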
"It is all too easy to overestimate the significance of structural isomorphism. The
fact that all or some of the laws in one area [of discourse] have the same form as those of
another need not signify anything whatsoever about any connection between the two areas." (Brodbeck, 1959) The example she gives is all things that can be ranked and measured: they are structurally isomorphic with arithmetic addition, yet that is quite possibly the only thing they have in common. Another example is "taller" and "smarter" because they are both transitive (p. 394). Accordingly, we should not be misled: structural isomorphism is only a necessary condition for complete isomorphism. The relations have to hold as well for there to be complete isomorphism.
20
In fairness, the relation between the two theories is completely symmetric; one could be the model of the other and vice versa.
The flow in the research described here is from (1) Parsons' theory to (2) a model
constructed by the researcher that is an incomplete isomorphism to (3) a simulation con-
structed by the researcher that is both an extension and contraction of the model. That is,
the model redacts elements, structures, and relations from the theory, and then the simu-
lation further reduces the elements, structures, and relations, and also adds some ele-
ments, structures, and relations that are neither in the model nor in the theory.
Formalization
In lay terms, formalization is an expression of something so that it can be reason-
ed about. The most common formalization is mathematical, but there are other forms,
too. Two others that will be dealt with here are logic, which is more officially called first
order predicate calculus, and production systems. Simulation itself can also be a formalism because it represents an opportunity to reason.
Production systems
Approximately how many sentences are there in English (or any natural language;
natural languages are the ones we speak and read)? The short answer is: infinite,
approximately. How do we learn an infinite thing? How do we teach one? We look for
what is finite about it, and in the case of languages, as with many other things, it is the
(list of) rules that is finite. The collection of rules for the construction of a language is called a
grammar. The rules can be viewed either as specifying what is legal to read or what is
legal to construct, generate. That is, we can hand a sentence to a grammar and ask "Is this
sentence in the language, that is, is it properly formed according to the rules?" Or we can
"run" the rules of a grammar and generate correct sentences in the language. The rules
enable us to say, "That is not a sentence," or more properly, "That is not a sentence that is
allowed by the rules of grammar."
Note that the rules at this point evaluate or generate content-free sentences. The
rules (of grammar) have nothing to say about the content, only about the form. The form
is called syntax. That is, grammar describes syntax, without regard to (truth) value of the
words.
The appearance of sentences in a language is guided by grammar and by what
symbols and symbol combinations are valid. The symbols (e.g., letters of the alphabet)
are the lexical aspect of language. In principle there are two types of symbols: terminal
and non-terminal. Terminals are the ones we read, the symbols being read right now. Non-terminals describe constructs in the language, such as sentence, paragraph, verb-phrase, sub-
ordinate clause, genitive case, pronoun, etc. In English, as in most natural languages, the
non-terminals are also terminals, so it is a bit confusing. But when describing artificial
languages there is attention paid to the difference between sentences in the language
(terminals) and terms used to describe sentences in the language (non-terminals).
Fararo and his collaborators have developed a grammar of social actions (Axten
& Fararo, 1977; Fararo & Skvoretz, 1984; Skvoretz & Fararo, 1980; Skvoretz, Fararo, &
Axten, 1980) and symbolic interaction (Skvoretz & Fararo, 1996), drawing on the work
of Nowakowska (1973) and Hayes (1981). According to Fararo et al. one of their inspira-
tions was Harré (1972), where there are descriptions of rule-condition and role-condition
forms. Each of them can be thought of as "if … then" statements: if the condition is true
then this rule is executed by the role that matches the role-condition. That is precisely the
structure of grammar rules: if the right hand side of the rule matches the state of the
parsing, then the state is changed to the value of the left hand side, and the matching proceeds until there are no more matches possible. If the final match is what is called the dis-
tinguished symbol21 then the whole sentence or social action is recognized and declared
valid, otherwise the sentence/social action is not one that is described by the grammar
and is therefore noted as impossible or an error.
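A minimal sketch of that matching process (illustrative Python; the rules and symbols are invented stand-ins, not Fararo's grammar of social action):

# Each rule rewrites its right-hand side (a sequence of symbols) to its left-hand side.
RULES = [
    ("SOCIAL-ACTION", ["ACTOR", "ACT", "OBJECT"]),   # the distinguished symbol is on the left
    ("ACTOR", ["ego"]),
    ("ACT", ["greets"]),
    ("OBJECT", ["alter"]),
]
DISTINGUISHED = "SOCIAL-ACTION"

def recognize(symbols):
    # Repeatedly replace right-hand sides with left-hand sides until no rule matches.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            for i in range(len(symbols) - len(rhs) + 1):
                if symbols[i:i + len(rhs)] == rhs:
                    symbols = symbols[:i] + [lhs] + symbols[i + len(rhs):]
                    changed = True
                    break
            if changed:
                break
    return symbols == [DISTINGUISHED]

print(recognize(["ego", "greets", "alter"]))   # True: a valid "social action"
print(recognize(["alter", "greets"]))          # False: not described by the grammar

Run generatively instead of as a recognizer, the same rules would produce valid "sentences," that is, valid social actions.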
Another inspiration was Heise (1979): "[A] simple event is conceived as a
syntactically ordered conjunction of cognitive elements (usually culturally defined) des-
ignating actor, act, and object." Heise's method of "processing" situations that give rise to
actions is to scan actor-object combinations. When a match is found, the associated action is executed. These are precisely the steps that Fararo et al. take in their grammar approach
(loc. cit.).
Fararo and his colleagues have created descriptions of valid constructs such that
the descriptions can be used like any other grammar, either to assess the validity of an
existing "sentence" (that is, social action) or to generate valid social actions. The most
important aspect from this dissertation's point of view is that the application of much of
Fararo's work was Parsons' theory of action. The grammar described social actions that
enacted the theory of action.
Logic
Brownstein (1982) has formalized the theory of action, too. He used first order
predicate logic, the same axioms and method used in high school geometry and trigo-
nometry proofs. In some-odd 300 pages Brownstein in the standard language of logic
tries to reconstruct the propositions Parsons intended. His conclusions are a bit discour-
aging.
Though his [Parsons'] scheme calls for functional explanations, precise, explicit
specifications of functional relations are not particularly salient. Moreover, sub-
stantive propositions, definitions, regulative principles, preliminary redescrip-
tions, and the like are rarely distinguished, thereby rendering it difficult to assess
… its conceptual health. … For a scheme with as many fundamental conceptual
disorders as Parsons', conceptual analysis becomes of primary concern …. Grave
difficulties attend the conceptualizations of the pattern variables. … Assessment
… leads to the conclusion that a proper analysis of action in Parsons' terms
demands a revision of the basic analytical framework which Parsons has con-
structed. (Brownstein, 1982)
Dubin (1960) used a form of logic, too. He looked at the pattern variables at the
personality level of analysis. He succeeded in enumerating all of the possible combina-
tions of the pattern variables at that level and offered that the choice among them in a
particular instance of action might be based on probability. In other words, Dubin used the logic of arithmetic to reason about the number of states that are available for an actor to get into.
21
This non-terminal is usually called "sentence" or in our case "social action."
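Computationally, such an enumeration is just a cross product. A minimal sketch (Python; the four dichotomies shown are commonly cited pattern variables, offered for illustration rather than as a claim about Dubin's exact list):

from itertools import product

# Each pattern variable is treated, for illustration, as a simple dichotomy.
pattern_variables = [
    ("affectivity", "affective neutrality"),
    ("universalism", "particularism"),
    ("ascription", "achievement"),
    ("specificity", "diffuseness"),
]

states = list(product(*pattern_variables))
print(len(states))      # 2 ** 4 = 16 possible combinations at this level of analysis
print(states[0])        # ('affectivity', 'universalism', 'ascription', 'specificity')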
Time
If one skims the literature on the subject of "social time," one finds complaints
everywhere about the neglect and marginality of the time problem in sociology,
formulated concisely by Kurt Lüscher in the title of an essay: "Time: A much-
neglected dimension in social theory and research." (1974) … However, even
more decisive, in my opinion, for the impression of marginality and neglect is the
minimal theoretical basis of many of the available studies. … Many authors lose
themselves entirely in the momentum of their subject by making philosophical,
anthropological and everyday observations without even beginning to achieve
conceptual precision and a categorization of time within a sociological theory.
(Bergmann, 1992)
Greater emphasis has been given to statics than to dynamics in most social sci-
ence theorizing. And, while comparative static analysis is a necessary and impor-
tant task, too much emphasis can deflect attention from other important theory-
building tasks. To the extent that social scientist's theoretical activities seek to
build explanations of phenomena, rather than descriptions, they must focus on
causal processes that occur over time. (Hanneman, 1988)
relate variables linearly have "infected" thinking so that some social science researchers
believe that reality is generally linear22. He bases his argument in relevant part on:
1. Fixed entities with attributes. Linear equations, such as those used in regression
and structural equation modeling, assume fixed entities that have attributes. The entities
are fixed in such equations and the attributes can change value. This clearly assumes that
the existence of the entities is stable over the modeling period. Oddly enough, many of
the subjects studied in sociology are not fixed, such as occupations, roles, social move-
ments, and organizations. Abbott asks us to compare this fixed nature with its most com-
mon alternative, the central-subject/event model.
A historical narrative is organized around a central subject. This central subject
may be a sequence of events, a transformation of an entity or set of entities into a
new one, or indeed a simple entity. The central subject includes or endures a
number of events, which may be large or small, directly relevant or tangential,
specific or vague. (Abbott, 2001) [parenthetical material omitted]
Fixed entities ignore changes that occur due to birth, death, combination, division,
and transformation. These changes will need to be simulated in the present research
because Parsons describes them in his theory.
2. Monotonic causal flow. The right sides of (linear) equations do not differentiate
the value over time or in time that each variable would contribute to the dependent vari-
able. They are all equal throughout all of time. That is, each right-hand side variable is
equally relevant at all times. There is no contingent time. Perhaps worse, the time horizon
for all variables in a single equation is identical. That is, if we are trying to measure the
effect of several factors on an outcome, all of the factors would have to be measured over
the same time scale and the outcome would have to be expressed in that time scale, too.
One can see how this could be a problem in the theory of action on several counts: (a)
actions happen on a smaller scale inside the organization than are sensed outside it, and
(b) there may be a different scale altogether in each functional prerequisite (that is, there
is nothing a priori to suggest that the time scales inside each functional prerequisite are
commensurate).
3. Univocal meaning. In linear modeling each variable can have only a positive or
negative effect, not one and then another under different situations. But (Abbott, 2001)
illustrates many cases where a variable may have at first a positive effect and then later
on a negative one.
4. Absence of sequence effects. The order of events does not affect the values of
variables in a linear combination, so that the actual time story or path or trajectory or un-
folding is completely lost using normal statistical tools. In the present research order
matters a great deal, because the timing of an external event has a great impact on the
organization's response, in light of its history to date of responses.
While the picture of taking time into account in social settings is a bit dark, there are new methods for dealing with time in structural equation modeling, e.g., (Collins & Sayer, 2001; Hamagami & McArdle, 2001),23 and there has always been auto-regressive integrated moving average time series analysis (ARIMA, also called Box-Jenkins), but it has been applied almost exclusively to economic data until recently (McCleary & Hay, 1980). And ARIMA usually addresses only a single entity and a single variable (Abbott, 2001).
22
The term linear can have many meanings. The shortest one for our purpose is that a change in the value of an independent variable causes a proportional change in the dependent variable.
23
To indicate the extent that time is rarely accounted for in sociological studies and make the point a bit closer to home, Ralph O. Mueller is the chair of the George Washington University Graduate School of Education's Department of Educational Leadership. He is also the author of an introduction to structural equation modeling (SEM) (Mueller, 1996) that does not mention the problem of time in the general linear model nor the newer approaches to modeling time in SEM.
There is some modeling of time in sociology, including longitudinal studies, such
as Durkheim’s famous one of suicide (Durkheim, 1951). There is some new interest in
time, for example, a 2002 special issue of Academy of Management Journal (Barkema,
Baum, & Mannix, 2002).
The treatment of time, above, contains a subtlety: it refers to a combination of
clock and social time. Clock time is what one gets from calendars, clocks, and other time
pieces. It is the time in physics, the one with which derivatives are taken; it is even rela-
tivistic time in the Einsteinian sense. Social time is socially constructed and includes such
diverse activities/entities as lunch time, waiting, graduation, career progression, and
stages of life. Which type drives Parsons' theory? It must be social time because there are
none of the attributes of clock time in Parsons' description, such as uniformity of
cadence. How can social time be simulated? Oddly, since it is socially constructed a uni-
form cadence can simulate social time as long as the social constructions demarking
events are present. After all, calendar time is an adequate backdrop for social time.
In the present case, here are some examples of Parsons' social constructions of
time in his theory of action: energy arrives at the system boundary at a particular mo-
ment, a functional prerequisite consumes time to perform its function, energy passes (in a
time interval) from one functional prerequisite to another in accord with the cybernetic
hierarchy, and a message is transmitted across a medium of interchange (in a time inter-
val). In fact, Parsons himself recognized the importance of time, "The first important im-
plication is that an act is always a process in time. The time category is basic to the
scheme." (1968a, p. 45)
The simulation of this social time is simply the ticking of a notional clock whose
moments are normatively agreed to mark forward time in an interval small enough to
permit the shortest social event to transpire.
Process
In the last decade a number of writers have proposed narrative as the foundation
for sociological methodology. By this they do not mean narrative in its common
senses of words as opposed to number and complexity as opposed to formaliza-
tion. Rather, they mean narrative in the more generic sense of process or story.
They want to make processes the fundamental building block of sociological
analysis. For them social reality happens in sequences of actions located within
constraining or enabling structures. It is a matter of particular social actors, in
particular social places, at particular social times.
ries disappear. The only narratives present in such methods are just-so stories jus-
tifying this or that relation between variables. Contingent narrative is impossible.
… While action and process have largely disappeared from empirical sociology,
they are by contrast central to much of sociological theory, both classic and
recent. (Abbott, 2001), reprinted from (Abbott, 1992)
Too much emphasis in empirical research ... has been placed on the study of indi-
viduals rather than social systems, and on single-time points in these systems
rather than on their continuing process. [Editors' note] Despite the early preoccu-
pation of sociologists with research on social stability and change, much of to-
day's research is neither dynamic nor oriented to social systems. [italics in origi-
nal] (In a volume honoring Talcott Parsons, Riley & Nelson, 1971, p. 407)
Lave and March (1993) advise modelers to "think process." By this the authors
meant that one should seek to describe, explain, predict the unfolding of the interaction of
social forces and the emergence of the resulting outcomes. Process has been variously
defined as "change that follows a stable pattern long enough for us to recognize continu-
ity, transient as the continuity itself may be," "a series of progressive and interdependent
steps by which an end is attained," "the interweaving of invariance and variance," "a
becoming of continuity," and "a tension between linear succession and sequential recur-
rence," as summarized in Abbott (1989).
Process is related to time in a straightforward way: the steps in a process are
described from a time perspective. "It is clear that process is inherently temporal."
(Rowell, 1989) Time in this sense may be an ordering, such as before, during, or after.
Or, "when this happens, then that happens." Or it may be in terms of delays, such as Act
B happens about six months after Act A. Or it may be any other indication of time or
timing. And it may be necessary to mention that time in the process meaning is social
time, not necessarily clock time, that is, how time is sensed, not how it clicks off of an
absolute clock.
The intuition is that what happens in a process is that events occur and something inside those events triggers changes in the system state that in turn cause other events to
happen. In this way, process, system state, and events are related as follows:
Figure 9. Relationship among process, event, and state (notional).
Time is what travels on the lines in the direction of the arrowheads, indicating that
State 1 happens before State 2, etc. State is the value of all of the variables in the system.
In principle, then, a system rests with its variables having some fixed value, then an event
happens that changes the values of some variables, putting the system into a different
state. The event may consume time, the state change may consume time, and the interval
between them may consume time. To the extent that there is a pattern in the transition of
states and events, we call it a process and usually give it a name (e.g., adoption of a new
idea). This is pure construct; there is no commitment that such juxtaposition of event and
state exist independently in nature (this is one meaning that Parsons makes of "analyti-
cal."). Sometimes the sequence of state/event pairs (also called feeling-activity states
(Bergmann, 1992)) is called history, trace, story, time track, course of events, life cycle,
narrative, enchainment, or trajectory. Some sociologists call it cause (particularly those
committed to statistical methods, especially structural equation modeling), but we do not.
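The bookkeeping implied by this view can be sketched in a few lines (illustrative Python; the state variables and events are invented): an event changes the values of some variables, and the recorded sequence of state/event pairs is the trace or trajectory just named.

state = {"energy_in_system": 0}
trace = []   # the sequence of (time, event, state) triples: a history or trajectory

def occur(event, changes, at_time):
    # An event changes the values of some state variables, yielding a new state.
    state.update(changes)
    trace.append((at_time, event, dict(state)))

occur("energy arrives", {"energy_in_system": 1}, at_time=1)
occur("energy absorbed", {"energy_in_system": 0}, at_time=5)

for moment in trace:
    print(moment)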
And process is linked to structure: "theories that focus on 'process' or social
dynamics must have (at least implicitly) models of structure embedded in them."
(Hanneman, 1988)
The research described here is the process kind. It attempts in very crude and
rough terms to explain, among other things, how, for example, Latent Pattern Mainte-
nance impacts the Adaptation function with respect to which energy it (Adaptation)
allows into the system. There are many steps in the flow between the entry of energy into
a system and the response of Latent Pattern Maintenance, and Parsons explains them
notionally. The simulation described here attempted to imitate and animate that flow, to
cause the stand-in for Latent Pattern Maintenance to react to perturbations of energy
entering and flowing through the organization.
The process view imposed a considerable burden on the researcher because much had to be mechanized, in comparison with, say, another handy tool of sociological research, structural equation modeling, in which the researcher collects and feeds numbers into a "black box" computation engine and interprets the stream of numbers that come out. In that approach there is far less of a burden to construct the intricate relations among the moving parts and to specify how exactly each one interacts with and impacts the others. There is no "answer" in a simulation: the simulation itself is the answer!
Simulations of social systems
Computer simulations of social and organizational systems are not new (Bronson &
Jacobsen, 1986; Bronson, Jacobsen, & Crawford, 1988; Burton & Obel, 1995; Carley &
Prietula, 1994; Coleman, 1965; Conte et al., 1997; Coyle, 2000; Cyert & March, 1963;
Cyert & March, 1992; Epstein & Axtell, 1996; Gilbert & Conte, 1995; Gullahorn &
Gullahorn, 1963; Hamblin, Jacobsen, & Miller, 1973; Hanneman & Patrick, 1997;
Hanneman, 1988; Ilgen & Hulin, 2000; Jacobsen & Bronson, 1995; Jacobsen & Bronson,
1985; Jacobsen & Bronson, 1987; Jacobsen & Bronson, 1997; Jacobsen et al., 1990;
Lane, 2001; Leik & Meeker, 1995; Lin, 2000; Markley, 1967; Moss, 2000; Phelan, 1995;
Prietula et al., 1998; Rasmussen, 1985; Samuelson, 2000; Sastry, 1997; Senge, 1990;
Thomsen et al., 1999; Tuma & Hannan, 1984). Even the use of simulation games to illus-
trate concepts and let sociology students try their hands at applying what they already
have learned by more passive means, such as reading and discussion, is not new
(Simulation and Gaming and the Teaching of Sociology, 1997; Coleman, 1965; Cross,
1980; Dukes, 1975; Hanneman & Patrick, 1997; Hanneman, 1988; Markley, 1967; Pfahl,
Laitenberger, Dorsch, & Ruhe, 2003). There is an annual conference on computational
and mathematical organization theory, including social systems simulation (Computa-
tional, Social and Organizational Science), several professional societies (the American
24
Mathematical sociology is a topic much larger than simulation, but simulation is included in its ambit.
on the solution was demand for a surge manufacturing capability in a US defense build-
up for the Cold War.
And the advent of the Internet, then called ARPANET, presented questions about
how big the computer storage on the network had to be in order to hold messages in the
event of transient outages, and was it better to have a few long messages or lots of short
ones.
The first DES systems were used by manufacturing, transportation, and telecom-
munications engineers. Then Leonard Kleinrock, a UCLA engineering professor who
pioneered much of the design of ARPANET, analytically solved many of the queuing
problems in closed form (Kleinrock, 1975-1976) and some of the pressure to simulate
waned.25 Kleinrock’s formulæ assume that inputs arrive at a random rate according to
some distribution and are serviced/ processed/transformed at another random rate, possi-
bly according to a different distribution; that is, that there is a probabilistic element to the
operation of the systems under study.
DES views the world as compartments among which items and information flow.
The items have to be "born" as they cross the boundary into the system, then are trans-
formed or processed or serviced, and then possibly consumed, and finally they exit the
boundary of the system and in effect "die." This view is sometimes called process, esp.
by social science researchers (for example, (Lave & March, 1993)) who are trying to
differentiate themselves from others who take a more static view.
There are two approaches to DES: event-scheduling and process-interaction
(Fishman, 2001). At the outset it is important to understand that the results are the same
independent of the approach, but during simulation construction there is a trade-off
between simulation simplicity and simulation control depending upon which approach is
used. "Every discrete-event system has a collection of state variables that change values
as time elapses. A change in a state variable is called an event" (Fishman, 2001)[italics in
original]. The simulation is thought of as comprising a set of events, such as, in this research: energy appears at the boundary of the system; Adaptation filters are altered based on the tension between internal stability and external energy level; if affective energy is present it takes priority over affectively neutral energy; etc. In this conceptual
scheme, "each event contains all decision flows and state variables. Simulation is the
execution of a sequence of events ordered chronologically on desired execution times. No
time elapses within an event" (Fishman, 2001).
In the process-interaction approach the focus centers on the processing or trans-
forming entities, those parts of the simulation that take inputs and transform them. The
approach provides a sequence of activities (events) in a time order, other terms for which
are flow and process. In other words, the process-interaction approach concentrates on the
time history of the transactions and their transformers; it is not a "disconnected" list of
events that change the state of the simulation.
The research reported here uses the process-interaction approach as a way to trace
the time history of energy as it transits the organization and the time history of the
organization as it responds to the energy. This approach was selected because it more
25
This researcher may have written the first discrete event simulation program in a simulation language in Southern
California, in 1965-1966, and was a graduate student at UCLA in the department of, and at the time of, Kleinrock's work.
closely follows Parsons' style of description in which flows are described in a series of
time-related steps.
To reiterate the baking example in the Methods section, p. 47, what would happen
if dough could be formed into loaves more quickly than the time it took to cook the
loaves in a batch? If the average rate at which loaves were created exceeded the average rate at which they could be cooked, then an unbounded queue would grow in front of the ovens. If the average rate at which loaves are created is about the same as or lower than the rate at which loaves are cooked, then on average any queue that forms would be finite, and a queuing theory formula can tell us how long it might be at its maximum. Formulæ could also be em-
ployed to evaluate whether there should be multiple slow ovens to compensate for fast
loaf making, etc.
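The arithmetic of the baking example can be sketched directly (illustrative Python; the rates are invented): when loaves are formed faster than they can be baked, the queue in front of the ovens grows without bound; otherwise it stays finite.

def queue_in_front_of_ovens(arrival_rate, baking_rate, hours):
    # Deterministic approximation: net growth per hour, never allowed below zero.
    queue = 0.0
    for _ in range(hours):
        queue = max(0.0, queue + arrival_rate - baking_rate)
    return queue

print(queue_in_front_of_ovens(arrival_rate=12, baking_rate=10, hours=8))   # 16.0 loaves waiting, and growing
print(queue_in_front_of_ovens(arrival_rate=9, baking_rate=10, hours=8))    # 0.0: the queue stays finite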
In terms of simulating the theory of action, the transactions to be "born" are units
of energy, and this researcher thinks of them as news, such as a new idea. The DES com-
partments would be Parsons’ functional prerequisites, the processing or transforming
would be what happens inside each function (e.g., scanning the environment in Adapta-
tion; setting goals and allocating resources in Goal Attainment; recruiting and training
new staff, and integrating new processes in Integration; and establishing the filters by
which sense is made in the other functions in Latent Pattern Maintenance), and the dying
would be what happens to the imported energy after the last function in the flow, Latent
Pattern Maintenance, has responded. And the flow would be of (a) energy, and (b) mes-
sages along the paths of the interchange media among the functions. If the rate of arrival
of energy and other interchange media exceeded the rate by which it could be processed
then queues would grow between the producer and consumer. Such attention to rates and
queues, while an integral part of DES, is absent from Parsons’ conceptualization and
therefore writing. This may be an important indication that using DES is inappropriate for
simulating the theory of action, as prominently mentioned in the Limitations section, p.
20.
Without loss of generality, the arrival rate of news and service rates of the func-
tions are set to be fixed amounts, so the simulation here is deterministic. The reason is
that Parsons gives no insight into the rates, so the assumption of randomness, while more
realistic in terms of the real world, would only reflect invention by the researcher in Par-
sonian terms.
The researcher could only find two applications of discrete event simulation
applied to social systems. Fararo and Hummon (1994) used DES to analyze several
aspects of social networks; the senior author is well-known for his contributions and
extensions to the theory of action (e.g., (Fararo & Skvoretz, 1984)), so it is worth noting
that he (with collaborators) did not employ DES to simulate it. Jin and Levitt (1996)
described knowledge-work projects in terms of how they are organized, the tasks to be performed, and the links between the two. Then tokens, as stand-ins for real work items, were moved through the task network (PERT chart) in a simulation of the work to be accomplished; delays, errors, and noise were inserted into the project so that the final performance of the project could be predicted, taking into account important social (particularly
team and organizational) aspects of knowledge work. The actual mechanism of the
simulation was discrete-event.26 This project simulation system was described in lay
terms in Samuelson (2000).
Accordingly, while simulation itself was no stranger to social systems, discrete
event simulation was very rarely used.
26
Disclosure: the system described is an educational version. There were also several commercial versions and the
researcher's employer was a distributor and partner of the Stanford University spin-off created to enhance and market
the commercial version of the simulator. The researcher was the in-house expert of his employer.
III. METHODS
Research overview
Like Sastry’s, this study was a test of our understanding of the interface between
(narrative) description and the technical goal of predicting the future of a social system
by constructing and "bringing to life" a laboratory replica of such a social system (see
Figure 10). In another sense, it was an application of the translation of description into
enactable constructs that can mirror the structure and function of a social system. The
study, therefore, had two conceptual forks: (a) understand the description of the theory,
and then (b) reify that understanding so that a laboratory replica can be created and oper-
ated. Expanded into more detail it looked like:
1. Understand the theory of action
a. Read what Parsons, his disciples, and his critics wrote.
b. Select descriptions.
c. Translate into constructs (say, structure and function).
d. Validate the simulation when it is completed.
2. Construct a computer simulation program
a. Develop constructs from the authoritative text, as needed.
b. "Program" the constructs into a computer simulation program.
c. Assure the correct technical operation.
Figure 10. Intersection of the theory of action and system simulation (the overlap of "system simulation" and "Parsons' theory of action" is the subject of this dissertation).
Research methods
Place of simulation and theory
Parsons (1977b) wrote:
Methodologically, one must distinguish a theoretical system, which is a complex
of assumptions, concepts, and propositions having both logical integration and
empirical reference, from an empirical system, which is a set of phenomena in the
observable world that can be described and analyzed by means of a theoretical
system. An empirical system … is never a totally concrete entity but, rather, a
selective organization of those properties of the concrete entity defined as relevant
to the theoretical system in question. (p. 177)
Parsons above restates the definition of "model," just like the subject of this
research. In other words, a model is the theory with some details left out, abstracted
away. So, is a model the same as a theory? Is a model the same as a simulation of a
theory? These are questions still being debated by the social simulation community.
Perhaps the clearest explanation is from an electronic mail message that was in
response to this question, posed on the social systems simulation SIMSOC LISTSERV
([email protected]):
Is a simulation a (logical) derivation of a theory? Either (a) the simulation is a logical
consequence of the theory and is [therefore] a theorem of this theory and then "If you can
derive a contradiction from a given set of axioms, then that [sic] axioms are invalid"; or
(b) it is an entity that is something like a theory with its "own"
axioms and theorems:
theory: axioms(T) -> theorems(T)
sim(1): axioms(sim(1)) -> theorems(sim(1))
…
sim(n): axioms(sim(n)) -> theorems(sim(n))
So, now what is the relationship between the axioms of T and sim(1)...sim(n)? In the
philosophy of science there are several attempts to clarify the relationship between
theories and models (here: simulation). One attempt is from Morrison/Morgan [(Morrison
& Morgan, 1999)]. Another attempt is from Sneed, Stegmüller and other authors
[(Balzer, Sneed, & Moulines, 2000; Stegmüller, 1979)]: the structuralist conception of
theories (structuralism) (cf. Klaus G. Troitzsch on simulation and structuralism
[(Troitzsch, 1998)])27. According to structuralism a theory is a theory-net consisting of
several theory-elements connected to a basic theory-element. There are several links
between the theory-elements: specialization, extension etc. From the perspective of
structuralism, you can specify the relationship (links) between "theory" and simulation(s).
The entity called "discursive sociological theory"28 is the basic theory-element that speci-
fies the basic concepts, the basic axioms. A simulation is a theory-element, which speci-
fies additional, new concepts, new functions and introduces new axioms. The introduc-
tion of new ("gap-filling") axioms is in principle not a problem. It becomes only a prob-
lem if you add new axioms that are contradictory to the axioms of the basic theory-ele-
ment. To the point "explanation and simulation": According to structuralism a simulation
is an extension/specialization etc. of a "discursive sociological theory" and can explain
certain aspects of reality. In the case of specialization it refers only to a subset of the
applications of the "discursive sociological theory"!
27
As evidence that this is not a settled matter, Troitzsch has called for a workshop on Epistemological Perspectives on
Simulation, in Koblenz, Germany, 1-2 July 2004, http://www.uni-koblenz.de/EPOS/. He states in his call for
abstracts "Simulation has been a research instrument for long in various disciplines. In recent years, it is gaining
further attention. This may be contributed to the lack of theories that would allow for explaining and predicting the
behaviour of complex systems. In addition to that, new modelling paradigms, associated with object-oriented
concepts, intelligent agents, or models of (business) processes inspire the use of simulation. … Furthermore, it seems
that simulation is regarded by some as an alternative to research methods that do not provide convincing support for
certain research topics. At the same time, the epistemological status of simulation remains unclear. This is, for
instance, the case for its relationship to core epistemological concepts, like truth and reason. Against this
background, it seems worthwhile to reflect upon the preconditions of using simulation successfully as a research
tool."
28 Parsons' theory of action is this type, a descriptive sociological theory, as opposed to, say, a mathematical or formal
expression of a theory.
In a word, then, a simulation of a theory is not the theory itself, but rather an
extension and specialization, perhaps with some elements added ("gap-filling" axioms)
because in order to conduct the simulation they had to be. That is, one of the challenges
in simulating a theory is placing into the simulation that which is missing in the theory,
but is necessary for the simulation to proceed. An example in the instant case is queues or
buffers. There are no queues or buffers or waiting lines in Parsons' theory of action, but what if interchange messages arrive in quick succession, too quickly for a function to process or absorb them? What happens to each interchange message? Is it queued, lost, or resent? Whatever the answer, it is a "gap-filling" axiom, added to the simulation but absent from the theory, in order to get the simulation to run.
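To make the point concrete, such a gap-filling choice can be written down explicitly rather than left implicit in the program. The sketch below is illustrative only; the names OverflowPolicy and handle_arrival are invented here and are not SIMUL8 constructs. It shows the three candidate answers as an explicit parameter; the model built for this research takes the queueing option, consistent with its rule that news waits and does not drop or go away.

```python
from enum import Enum
from collections import deque

class OverflowPolicy(Enum):
    QUEUE = "queue"    # hold the interchange message until the function is free
    DROP = "drop"      # discard it (the energy is lost)
    RESEND = "resend"  # return it to be re-presented later

def handle_arrival(message, busy, buffer, policy=OverflowPolicy.QUEUE):
    """Gap-filling rule: what happens when a message arrives while the
    receiving function is still processing an earlier one.  Parsons' text
    is silent on this, so the choice is an added axiom of the simulation."""
    if not busy:
        return message              # process immediately
    if policy is OverflowPolicy.QUEUE:
        buffer.append(message)      # wait in a buffer (the choice made in this study's model)
    elif policy is OverflowPolicy.RESEND:
        buffer.appendleft(message)  # crude stand-in for "try again later"
    # DROP: do nothing; the message is lost
    return None

waiting = deque()
print(handle_arrival("news item", busy=True, buffer=waiting))  # None; item now queued
```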
This research was about a method: applying the method of simulation to a theory of sociology. This section describes simulation and how it was applied in this instance. There would be a scientific elegance if the steps had been performed in the order given in the outline. In fact, the steps were applied in a messy fashion, because choices made at any step may not work downstream. And any researcher seeking to simulate a theory is, as she reads, mentally applying the techniques she already knows, even if subconsciously. Those techniques form part of the grounding (a bias) that any researcher brings to a simulation problem, and the particular assortment of techniques she knows colors her perception of the problem. Accordingly, in this research a prototype approach was taken:
1. Read a representative sample of the theory of action.
2. Try a few descriptions as the basis of a pilot exploration.
3. Try a simulation technique (say, system dynamics or discrete event).
4. Encode the description using the simulation technique’s representation system
(that is, programming language).
5. Operate the simulation to see if the results correspond to what the theory
describes.
If the pilot obtains results of sufficient fidelity (an unevaluated term), then the
researcher will expand on the sample, will expand on the descriptions to be encoded, will
stay with the simulation technique but may increase the fidelity of the representation
(which in principle can be done nearly infinitely), and will operate a number of cases to
increase confidence that the simulated situation corresponds to the description of theory.
In addition, the operation of the computer program so constructed was validated.
One of the primary contributions of this research was the application of a particular simulation technique to a sociological theory: discrete event. It was central to the contribution even though its choice did not occur first in the sequence of research events.
Choose appropriate simulation technology
There are styles of (social system) simulation. Two at the top of the description
tree (Figure 11) are the main ones: continuous and discrete event. The continuous style
mirrors systems that are continuous in time, such as most physical phenomena (e.g., dis-
tance, velocity, acceleration). The discrete mirrors step-by-step events that occur at a
particular time or in a particular order and for a particular duration, such as taking a test,
filling a car with gasoline, cooking a meal, etc. One of the differences is the mathematics
involved and the underlying mechanical way that time is advanced by the computer
simulator.
In the discrete event technique, "work" moves through a series of transformations (work stations) that operate on the work in some simulation-useful fashion. The user specifies at what rate or interval "work" arrives, where it goes as it traverses the network of transformations, and when it moves outside the boundary of the simulation (that is, when it dies).
One can imagine an industrial baking oven in which dough loaves arrive at some
rate, move through an oven at some rate, and then move out of the system at some point.
Each of the "stations" has attributes, such as oven temperature. And each of the units of
"work" has attributes, such as composition (rye, pumpernickel, etc.). And any of the
attributes can be a random variable or a stochastic variable (that is, depend upon a ran-
dom value).
There are two parts to a simulation system: the engine, which interprets constructs
in the simulation language, and the simulation language itself. The engine moves work
along in accordance with the specification stated in the language. The language in the
case of the discrete event technique is often represented as boxes and lines between them.
Work travels along the lines and is transformed inside the boxes. Sometimes the work
arrives more quickly than the boxes can service it, so the work has to wait. Work waits in
a queue. One can again think of the baking example: what happens if dough loaves arrive
more quickly than the ovens can cook what has already arrived? The new dough loaves
wait. What if the ovens are always slower than the process that creates the dough loaves?
An infinite queue builds if the arrival rate exceeds the service rate.
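To illustrate the mechanics that the engine hides, here is a hand-rolled sketch of a minimal discrete-event loop for a single work station, written in Python. It is not SIMUL8 and not the model built for this research; it only shows the textbook mechanism assumed here: events sit in a time-ordered list, the clock jumps from event to event, and when the service time exceeds the arrival interval the waiting line grows without bound.

```python
import heapq

def run(arrival_interval=1.0, service_time=1.5, horizon=100.0):
    """Minimal discrete-event loop: one arrival process, one work station.
    Events are (time, kind) pairs kept in a heap; the clock jumps from
    event to event rather than advancing in fixed steps."""
    events = [(0.0, "arrival")]
    queue = 0          # work items waiting
    busy_until = 0.0   # when the station next becomes free
    max_queue = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            queue += 1
            heapq.heappush(events, (t + arrival_interval, "arrival"))
        # start service whenever the station is free and work is waiting
        if queue and t >= busy_until:
            queue -= 1
            busy_until = t + service_time
            heapq.heappush(events, (busy_until, "departure"))
        max_queue = max(max_queue, queue)
    return max_queue

# With service slower than arrival (1.5 > 1.0) the queue grows steadily;
# swap the two rates and it stays near zero.
print(run())
```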
The internals of the operation of a simulator are beyond the scope of this disserta-
tion. Suffice it to say that simulation is a mature discipline and that there are many
choices available to the researcher so that she does not have to build a simulation engine
or develop a simulation language (Banks & Carson, 1984; Mize & Cox, 1968; Zeigler,
Praehofer, & Kim, 2000). The criteria to be applied in the search for such a combination
of discrete event simulation engine and language for this research were, in priority order:
1. Building-block approach, with the existence of many pre-built components. This
refers to the simulation language. Many computer programming languages are
functional, where each line instructs the computer to perform a specific function,
such as arithmetic or printing. Building blocks, in contrast, specify the compo-
nents (work stations, units that transform the work) and connections in a net-
work. The advantage of a building block approach is that less has to be specified
and what is specified is particular to discrete event simulation. The pre-built com-
ponents simplify the job of specification because at some level of abstraction all
discrete event simulations are similar. All create work at some rate, distribute it
among work stations, transform the work item, collect like items, and then have
them exit the simulation at some rate.
2. The ability to create blocks that are not already in existence (by writing a com-
puter program) if the appropriate pre-built component is not available. In the
event the selected language could not specify something important in this
research, then it would be valuable if the block could be created, even if that
meant writing a (presumably) small computer program. This attribute is called
extensibility in the computing literature.
3. Ability to create a graphical user interface for the user, in which input values can
be requested and outputs can be viewed graphically. The primary users were envi-
29 Researchers in artificial intelligence call such a source an "oracle" in order to convey the status as an authority.
By truth the authors mean correctness. The model should accurately (synonym for
correctly) reflect the assumptions and derivations of the underlying theory. The emphasis
is on testing the derivations, not the assumptions, because assumptions are often axioms
and therefore true by definition. Truth is sought by several means:
• Testing for circularity. Are the definitions tautological?
• Promulgation and evaluation of alternative derivations. Seeking alternatives can
expose errors in the original derivations and at least help sharpen models.
• Differentiation among competing derivations. Can an experiment verify or favor
one derivation over another?
30 The comment about the appreciation of nuance can be related to March's long association with models, beginning
with Cyert & March (1963), one of the first simulations of an organization.
Computer programs are not physical objects, and the laws of physics do not apply to them. On the other hand, computer programs can be treated as mathematical objects, and a branch of mathematics could be used to prove their properties. The intuition is akin to the logic we employed in high school geometry when we proved theorems about triangles, such as congruence. We could say that the lines in a computer program are like the lines of a geometry proof, where our task is to show that each line is an entry in a proof that, given the inputs and the transformations being applied, the output is what we want. This way computer programmers would be able to prove that their programs worked before they were ever executed, before they were ever tried on any computer.
As appealing as this approach was conceptually, it has not been widely applied in
practice. For example, even though they are prime candidates for this mathematical proof
of correctness, the most popular computer programs for statistical analysis (e.g., SAS,
SPSS, BMDP, and Systat) did not use the approach. Rather they, and nearly all other
computer programs, are tested as a method of improving confidence in the results pro-
duced by the computer program. Testing can only show the existence of errors, never
their absence.
Testing is essentially a (serious) mathematics problem. Even a simple computer program has more states than there are estimated to be molecules in the universe. Therefore, exhaustive testing, that is, testing of every state that a computer program can reach, is not feasible as a matter of practice. Even with the fastest computers it would take hundreds of years to try all of the states in a simple program.
Accordingly, one application of mathematics to the problem is to partition the possible states into equivalence classes, select one example from each large subset of states, and have that one example stand in for all of them. For instance, if we were testing the printing of United States ZIP codes, those five-digit numbers that indicate the general geographic location of mail destinations, then we might select a sample of them instead of all 00000-99999, one hundred thousand possibilities. In fact, it might be normal to select only three values to test the printing: 00000, 99999, and a random choice in between.
This leads to another testing approach that is ad hoc but often used: test the places
in a computer program where errors are known to "hide": boundary conditions and inter-
faces. Boundary conditions are the extreme values that a computer program takes in or
puts out, such as a large negative number, zero, and a large positive number. Interfaces
are places where one computer program uses the services of another, such as, in the cur-
rent research, the simulation engine invokes Microsoft Excel for input from a table.
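A minimal sketch of these two ideas, equivalence classes and boundary values, follows. The routine under test, format_zip, is hypothetical and stands in for the ZIP-code printing example above; the point is only that a handful of well-chosen cases substitute for one hundred thousand.

```python
import random

def format_zip(code: int) -> str:
    """Hypothetical routine under test: render an integer as a 5-digit ZIP code."""
    if not 0 <= code <= 99999:
        raise ValueError("ZIP code out of range")
    return f"{code:05d}"

def test_format_zip():
    # Boundary values plus one representative from the interior equivalence
    # class stand in for all 100,000 possible inputs.
    for code in (0, 99999, random.randint(1, 99998)):
        out = format_zip(code)
        assert len(out) == 5 and out.isdigit()
    # Inputs just past the boundaries should be rejected, not silently formatted.
    for bad in (-1, 100000):
        try:
            format_zip(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"{bad} should have been rejected")

test_format_zip()
```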
Upon closer examination of the research described here, errors could occur in the following places in the computer programs involved:
1. Errors in the specification of the simulation, that is, in what can be specified in
the SIMUL8 language. The researcher is the author of the specification. This
type of error can be discovered by reading and by being traced back from
anomalous results.
2. Errors in the execution of the SIMUL8 language program. The provider of
SIMUL8, SIMUL8 Corporation, is the author of the engine. Therefore, errors
of this type are the most difficult to discern, hopefully are the most rare, and
can be discovered by tracing back from anomalous results. Also, SIMUL8
Corporation regularly updates the engine based on user input from world-wide
usage.
3. Errors in the values provided in the Excel sheet that is read in during program
execution. The researcher is the author of the specification. This type of error
can be discovered by reading and by being traced back from anomalous
results.
In sum, the quality of the results of the simulation cannot be proved and will
always be suspect. Confidence in the results can be improved by increased testing, which
is a problem of multiplicity of states, of trying to test as many states as practical given the
constraints on the research resources.
Delimitations
The delimitations of a study are those characteristics that limit the scope (i.e.,
define the boundaries) of the inquiry as determined by the conscious decisions to exclude
and include that were made. Among these were the choice of objectives and questions,
variables of interest, alternative theoretical perspectives that could have been adopted,
etc. The first limiting step was the choice of problem itself.
This study was highly biased by the search for strong time orderings; that is the
basis of discrete event simulation. In order to construct the simulation the researcher
scoured Parsons, Bales and Shils (1953a) for even the remotest indications of time order-
ing and surely this biased the fidelity of the simulation. Furthermore, it could never be
argued that the time ordering in Parsons' theory was an essential feature, surely not on the
level of the pattern variables, functional prerequisites, interchange media, and cybernetic
hierarchy. Accordingly, the claim must be made that the research here was one transla-
tion of the theory, not the (definitive) translation. The simulation was not comprehensive;
at best it is intended to be a scaffold that other researchers will use to build higher fidel-
ity, more comprehensive simulations of the theory of action.
In addition, the theory of action contains many permutations and layers. The study
here limited its scope to:
1. Performance; neglects learning – The Parsons theory can be applied to two views
of organizations: their performance in pursuit of their "exterior" goals, and their
"interior" learning as they perform. In the performance case, energy flows from
Adaptation to Goal Attainment; in the learning case energy flows from Integration
to Goal Attainment (Parsons, 1960, p. 217). This research addressed only the per-
formance view/flow. As an aside, the organization simulated does learn how to
perform (well, how to reduce tension, the difference between the energy outside
the organization and the energy circulating within), using classical conditioning,
which is not what Parsons implied in his learning vs. performance dichotomy.
2. A single unit of analysis: the organization – Parsons illustrated that the unit of
analysis of his theory can be any size, from, for example, individual to a nation or
national culture. This research selected a single unit in order to demonstrate feasi-
bility. Also, because of a single unit of analysis the research did not address inter-
penetration, the impact of levels of analysis on each other, as, for example, norms
for the personality level can impact the performance of an individual at the col-
lective level.
3. Single level of the four functional prerequisites (not the infinite regress) – In addi-
tion to multiple units and levels of analysis, Parsons illustrated that each of the
four functional prerequisites can, in turn, be subdivided into four units, and each
of those four units into four more, ad infinitum. This research addressed a single
32 Four pairs of pattern variables generate 2^4 = 16 combinations.
Latent Pattern Maintenance, and then Latent Pattern Maintenance back to Adap-
tation.
9. Only a few of the possible process features in Parsons, Bales and Shils (1953a) were simulated, by no means all of them.
In every case, the choice of boundary was caused by the nature of this research: a
toy simulation to investigate the feasibility of simulating Parsons' theory. That is, by its
nature this research was bounded.
there is a preliminary (usually mental, hypothetical) theory that is elaborated as more and
more concepts are added. That first version of the theory is sufficient if there is a place to
put each elaboration without "too much" work. This is the cognitive mechanism behind
grounded theory, a creative process of deriving an explanation from raw data. It is also
the cognitive mechanism behind the process point of view: a creative process of deriving
the mechanism that yields an outcome.
Numerous authors (e.g., Bainbridge (1992), relying on Rodney Stark's "sociologi-
cal process" (2003)) have described their steps for building a theory or model. Here is a
sample (Lave & March, 1993):
1. Observe some facts.
2. Look at the facts as though they were the end result of some unknown process
(model). Then speculate about the processes that might have produced such a
result.
3. Then deduce other results (implications/consequences/predictions) from the
model.
4. Then ask yourself whether these other implications are true and produce new
models if necessary.
A recent table, below, shows the variety of steps possible, summarizing those
offered by various system dynamics authorities:
Table 3.
The system dynamics modeling process across the classic literature. (Luna-Reyes & Andersen, 2003)
While all of the proposals, above, appear logical and linear, in fact the process is a
non-linear, iterative one of creative speculation and hypothesis testing. No more will be
said of this inchoate process; the point is to appreciate the difference between what is
prescribed as a set of steps to establish a theory or model and what really transpires inside
the mind and workbench of the theorist or model builder.
The plan of this chapter is, first, a description of the elementary model, followed
by a short tutorial on the implementation of learning and tension. After that is a descrip-
tion of what the user saw as she operated the simulation, along with the rules and
assumptions that were implemented. The chapter concludes with a parsing of selections
from Parsons, Bales and Shils (1953a) and their correspondence in the possibly more
elaborated model to illustrate the fruits of the process that built a bridge between Parsons'
text and the simulation model.
Basic concept
The basic concept of the simulation is that of a baking oven fed by a conveyor
belt on which is raw dough. The dough represents the energy outside the system under
study (the oven). The oven represents the heat that will be applied in successive internal
chambers as the raw dough is transformed into its cooked form. The goal of the oven
(system) is to produce bread that is cooked "just right." This particular oven senses how
large the dough mass is and adjusts its internal heat based on it. And, in fact, it adjusts
based on the pattern of dough masses as it sees them one at a time as they enter the oven
door.
In Parsonian terms, the dough represents energy from outside an organization, say
news or a new idea. The news will pass through the four functional prerequisites as it is
transformed and it transforms the organization. Success is measured by how well the
latent pattern maintenance matches the pattern of energy entering the system. The differ-
ence between energy presenting itself to the system and the energy inside the system (at
the latent pattern maintenance stage) is called tension. Our goal is to minimize tension, so
the goal of the interaction of the internal functions and structures is to match or fit the
latent pattern maintenance energy to that entering the organization.
To increase the fidelity a bit, there are not only different bread masses, but differ-
ent kinds of bread (rye, poppy, sourdough, etc.). For each the oven has to react differently
because in order to cook properly it is not only a matter of temperature but also of time.
Some dough has to be cooked more quickly, some more slowly, even at the same tem-
perature.
In Parsonian terms, in addition to raw energy in the environment, there are values
of pattern variables that are intrinsic to different types of energy. The effect of processing
of energy that has one type of pattern variable, affectivity/neutrality, is modification of
the time that the system takes to respond to the energy. An affective value moves the
energy more quickly; an affective neutral (that is, rational) value moves the energy more
slowly. So, external energy is typed – by the value of the affectivity/neutrality pattern
variable.
Model of tension and learning
To increase the fidelity a bit more, imagine that the oven knows that it works best
with raw dough that exceeds a certain mass, that is, the dough has certain characteristics
or a "signature." So it filters out – rejects – dough that is not heavy enough. It takes in
only a certain size and above. And -- here we are stretching -- the filter at the opening of
the oven is operated from inside the oven: the intuition is that the oven comes to learn the
minimum value that it will accept and adjusts the filter as it learns. The filter setting
could be different for every lump of raw dough as the oven learns.
In Parsonian terms, the Adaptation function filters the energy that it accepts at the
boundary of the system. That filter is set by Latent Pattern Maintenance. If the pattern
maintenance function sets the filter too high, then some useful energy in the environment
will not be imported, potentially creating tension. If the filter is set too low, then the sys-
tem responds to everything and patterns are difficult to develop, expending system
energy with no added patterned capability.
Another way to think of the simulation, that is, another analogy, is target tracking.
The organization being simulated is trying to track the pattern of energy in the world out-
side of it, just as radar tracks a target. If the target turns out to be a "bad guy," then the
tracking attention should increase. If the target is noise, not an important thing at all, then
it should quickly identify that and not expend extra energy. The difference between what
is expended and what should be expended is tension, a quantity to be minimized.
Before providing more detail it is important to take note of advice that Parsons
sprinkled liberally throughout his writing (for example, (1968a, p. 47)): be careful about
the unit and level of analysis. If the system under study is an organization, then it is
treated as an indivisible "black box." We should look only at its input and output, not
how it processed the input to achieve the observed output. But our situation was a bit dif-
ferent – not that we are inattentive to Parsons’ generous advice – because the simulation
created here generated the output by processing the input. Therefore, the researcher had
to know something of the inner workings; otherwise he could not have transformed the input
into the output. Or, put another way, the computer simulation IS the black box.
Accordingly, the flow of energy from outside the system was first filtered in the
Adaptation function, as stated above. If the energy passes through the filter, whose value
was set by the Latent Pattern Maintenance function, then it will pass to the Goal Attain-
ment function after a delay depending upon whether the energy to be responded to is
affect or is affect-neutral. If it is to be responded to by affect, then it will traverse this and the rest of its journey quickly, according to a user-set value. If it is affect-
neutral, then it will travel more slowly, consuming time to "think." The energy will then
pass from Goal Attainment to the Integration function according to delay and selection
rules, and then it will pass from the Integration function to the Latent Pattern Mainte-
nance function according to delay rules. Different delay values can be set by the user for
each of the functional prerequisites x pattern variable value (affect or affect-neutral).
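The paragraph above can be restated as a small, time-stepped toy in Python. The dwell values and function names below are illustrative placeholders, not the values or constructs of the SIMUL8 model; the sketch only shows how a filter at Adaptation, plus per-function dwell times keyed to the affectivity/neutrality pattern variable, produce a fast path and a slower path.

```python
# Toy restatement of the flow: energy is filtered at Adaptation, then carried
# A -> G -> I -> L with a dwell time that depends on whether it is to be
# responded to with affect or affect-neutrally.
DWELL = {  # clock ticks per function, keyed by (function, pattern-variable value); values are made up
    ("A", "affect"): 1, ("A", "neutral"): 3,
    ("G", "affect"): 1, ("G", "neutral"): 5,
    ("I", "affect"): 2, ("I", "neutral"): 8,
    ("L", "affect"): 1, ("L", "neutral"): 1,
}

def transit_time(energy, orientation, filter_threshold):
    """Return total ticks from entry to Latent Pattern Maintenance,
    or None if the Adaptation filter rejects the energy bundle."""
    if energy <= filter_threshold:          # filter value is set by L on A's boundary
        return None                         # the energy never enters the system
    return sum(DWELL[(fn, orientation)] for fn in ("A", "G", "I", "L"))

print(transit_time(4, "affect", 1))    # fast path
print(transit_time(2, "neutral", 1))   # slower, "thinking" path
print(transit_time(1, "neutral", 1))   # filtered out -> None
```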
Systems are goal-oriented, so there must exist some mechanism that matches what
outside energy is allowed in compared with the goals of the system. The goal of the toy
system described here is to reduce tension – that is, the difference between the energy
outside the system and the energy circulating inside the system. The mechanism, then, is
to adjust the filter of the in-coming energy so that the energy circulating inside the system
matches the outside energy. In principle there are many ways to accomplish the match.
In Latent Pattern Maintenance a complex interaction will occur that will set the
filter on the Adaptation function. In essence, the Latent Pattern Maintenance function has
as its goal to seek to minimize tension; that is, the system's goal is administered by the
Latent Pattern Maintenance function. Symbolically it will do this by learning the pattern
of energy arriving from the outside and matching the filter to it in order to let in enough
energy to sustain the enterprise. Parsons, Bales and Shils (1953a, Fig. 7, p. 223) call the
style of learning classical conditioning. Accordingly, this is the style of learning that was
simulated. But it is learning by an organization, not by an individual, despite a question
about whether organizations can be classically conditioned.
Classical conditioning
As mentioned in the Literature Review, one of the challenges in simulation is
transforming qualitative concepts into quantities so that a digital computer can manipu-
late them. Unfortunately, there were almost no reports of quantitative measures of
organizational learning, despite the abundance of references to organizational learning
and learning organizations. One of the only quantitative studies of organizational learning
was the formulation of Nembhard and Uzumeri (2000), which was used in this study:
y = k((x + p) / (x + p + r)), a three-parameter hyperbolic function, where
p = cumulative prior learning, in clock ticks (see footnote 33). Must be a positive integer. Increments with every clock tick. Default is 500.
x = units of time since the last change in k. Must be a positive integer or zero. Increments with every clock tick. Default is zero.
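The function itself is simple to compute. In the sketch below, k and r are the remaining two parameters of the three-parameter model; their definitions are not reproduced here, so the comments describe them only by their mathematical role: as x + p grows, y approaches k, and r governs how quickly it does so.

```python
def learning_curve(x: int, p: int, k: float, r: float) -> float:
    """Three-parameter hyperbolic learning function y = k*(x+p)/(x+p+r).

    x: clock ticks since the last change in k (>= 0)
    p: cumulative prior learning, in clock ticks (> 0)
    k: the target value being sought; y approaches k as x + p grows
    r: shape parameter; a larger r means a slower approach to k
    """
    return k * (x + p) / (x + p + r)

# With a large stock of prior learning (p) the output hugs the target k almost
# immediately; with little prior learning it approaches k only gradually.
for x in (0, 10, 100, 1000):
    print(x, round(learning_curve(x, p=500, k=2.0, r=250.0), 3))
```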
Figure 12 illustrates the intuition. k is the value being sought by the system, the
value that L is trying to obtain by adjusting the energy filter on A. Given this value,
Latent Pattern Maintenance must compute a possibly new value for the energy it lets into
the system via the Adaptation function. Basically, the formula smoothes prior values in
order to reach its goal of k in a stable and planned way. There are two cases: approaching
the target k from above and approaching it from below, both of which are illustrated.
Imagine that the L function has determined – in a process opaque to us at the moment –
that the target value of some important variable is 2. If the outside environment is pre-
senting, say, 4, then L clamps down on the filter that lets values in so that the 4s are not
permitted to enter. This is Case 2 in the figure. Given the same target of 2, imagine that
the outside energy is less than 2, then L opens up the filter and lets it all inside. This is
Case 1.
In each case, the L function responds to the difference between its target value
and the value circulating within the system, that is, the value let in. It then exercises its
considerable control (it is the highest in the cybernetic hierarchy on the control dimen-
sion) to bring the circulating energy closer to the target, either from above or below.
33 Clock tick represents an interval of time. In the simulation the clock "ticks" once every business day, as a default.
[Figure: energy (vertical axis, 0 to 4) plotted against time (horizontal axis, 0 to 60), illustrating the two cases of approaching the target value k from above and from below.]
34 It would be a trivial upgrade to have the simulation itself change the values, or, at certain intervals or on the occurrence of certain events, to ask the user whether changes were wanted.
Once the spreadsheet is completed, the simulation program is invoked and a screen like Figure 15 appears. The user typically performs just two operations on this screen: reset the clock (and all other variables) to zero and then start the simulation. The default duration is 2400 business days, or approximately ten years. There is a row of buttons along the top of the display. The leftmost one is reset; the next one is "step," which advances the clock one tick each time it is touched; and the next one is "run," which starts the clock so that the simulation runs automatically until the final value of the clock is reached. If the run button is pushed during the actual simulation then the program pauses; touching it again starts the simulation where it left off. The other icons are not used by the user, only by the researcher to develop the simulation in the first place.
Based on the results of the inputs the user can view the convergence of internal
and external energy on a graph after the simulation has ended; it is in the file containing
the Excel spreadsheet. That is, the user can judge how well Latent Pattern Maintenance
performs its function of restoring spikes or challenges to its target value of "culture,"
which in the simulated case is instantiated by external energy.
Figure 15 is the tableau on which the user witnesses the simulation. Energy enters
from the left on the device that looks like a conveyor belt, something like the bakery
example. The value of energy and its affect/affect-neutral pattern variable comes from
successively reading the Excel spreadsheet. This external energy is presented to the test
in the spreadsheet: Pass Energy if Energy [operator] Filter Threshold, where
operator and Filter Threshold are read from the spreadsheet. If the energy does not pass,
then it goes to the element Energy that does not enter. If the energy enters then the Adapta-
tion function looks at its affect/affect-neutral pattern variable value. If it is affect then the
energy takes the top path, the one marked Affect path, and is processed for a period read
from the spreadsheet. During that time, Latent Pattern Maintenance, in an unseen (hence
latent) process resets the Adaptation filter to a possibly new value in order to reduce tension. After that the energy goes to Goal Attainment, where it may have to wait in a queue if the GA function is busy. If the energy in Adaptation is affect-neutral, then it takes the
lower path, possibly to a queue, where it waits for the affect-neutral processing for the
period of time specified on the spreadsheet. At the end of that processing the Latent Pat-
tern Maintenance function possibly resets that Adaptation filter in order to reduce ten-
sion; that is, on each of the possible exits from Adaptation (affect and affect-neutral),
Latent Pattern Maintenance potentially resets the Adaptation filter for the next time it
encounters outside energy.
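The filter test just described is driven entirely by values read from the spreadsheet. A small sketch of that test is below; the relation-lookup table and the function name passes_filter are inventions for illustration, not SIMUL8 constructs.

```python
import operator

# The spreadsheet supplies both the relation and the threshold; a small lookup
# table turns the textual relation into a comparison.  The relation symbols
# listed here are assumptions about what a spreadsheet cell might contain.
RELATIONS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
             "<=": operator.le, "=": operator.eq}

def passes_filter(energy: float, relation: str, threshold: float) -> bool:
    """Pass Energy if Energy [operator] Filter Threshold (compare footnote 35:
    energy 5 with relation '>' and threshold 2 passes because 5 > 2)."""
    return RELATIONS[relation](energy, threshold)

print(passes_filter(5, ">", 2))   # True: the bundle enters at Adaptation
print(passes_filter(1, ">", 2))   # False: routed to "Energy that does not enter"
```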
The icon depicts a "work station," where the input is transformed into an
output for some duration and decisions are made about where to go next. In our case each
icon is where energy exits the simulation. The number above the picture is the num-
ber of "dead transactions."
Energy can enter Goal Attainment from two sources, both of which are paths out
of Adaptation. The top is the Affect path and it is fed to the Goal Attainment function im-
mediately, unless G is already working on energy affectively. If G is already occupied
with an affective action, then the in-coming affective energy is queued. If energy enters
on the lower path then it is affect-neutral and it enters a queue for rational processing
once every resource allocation interval. When the interval occurs, then all of the queued
resource requests are read by Prepare budget proposal and a percentage of them are passed
on to the Integration function and the rest exit the organization and end up in Ideas not
resourced. The percentage of ideas that are not resourced is set on the spreadsheet and
remains constant for the duration of the simulation.
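A sketch of that periodic budget review follows. The fraction resourced, the function name, and the use of random selection within the percentage are assumptions made for illustration; the text above states only that a fixed percentage, set on the spreadsheet, is passed on and that the rest exit as Ideas not resourced.

```python
import random

def budget_review(queued_requests, fraction_resourced=0.4, rng=random.Random(0)):
    """At each resource-allocation interval, all queued affect-neutral requests
    are reviewed at once; roughly a fixed fraction moves on to Integration and
    the rest leave the system as "Ideas not resourced".  The fraction (0.4 here)
    is illustrative; in the model it comes from the spreadsheet and is constant
    for the whole run."""
    approved, rejected = [], []
    for request in queued_requests:
        (approved if rng.random() < fraction_resourced else rejected).append(request)
    queued_requests.clear()           # the queue is emptied at every interval
    return approved, rejected

pending = ["idea-%d" % i for i in range(10)]
to_integration, not_resourced = budget_review(pending)
print(len(to_integration), "approved;", len(not_resourced), "not resourced")
```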
Energy enters Integration from two sources, too, both from Goal Attainment. If
the energy is to be responded to affectively then it goes directly to the Integration func-
tion. If the Integration function was already working affect-neutrally, then that process is suspended and held in Interrupted Integration until the affective processing is completed, and then it is restored for the remainder of its time. If the Integration function is working rationally on energy when the next batch of rational energy to be integrated arrives, the arriving batch waits; waiting batches are processed one at a time on a first come, first served basis. If
the energy takes the Affect path from Goal Attainment to Integration and Integration is
already working on energy that is to be affectively integrated, then it waits in the queue
on the Affect path until the Integration function has completed its processing of the current
affective activity.
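The preemption logic of the Integration function can be restated as a small class, sketched below. The class and method names are invented for illustration, and one tick of simulated time corresponds to one call to step(); only the priority and resume rules come from the description above.

```python
from collections import deque

class Integration:
    """Toy restatement of the preemption rule: affective energy interrupts
    affect-neutral work, which is parked with its remaining time and resumed
    afterwards."""
    def __init__(self):
        self.current = None            # (label, orientation, remaining ticks)
        self.interrupted = None        # suspended affect-neutral work, if any
        self.affect_queue = deque()
        self.neutral_queue = deque()

    def submit(self, label, orientation, duration):
        if orientation == "affect":
            if self.current and self.current[1] == "neutral":
                self.interrupted = self.current      # suspend, keeping remaining time
                self.current = (label, orientation, duration)
            elif self.current is None:
                self.current = (label, orientation, duration)
            else:
                self.affect_queue.append((label, orientation, duration))
        else:
            if self.current is None:
                self.current = (label, orientation, duration)
            else:
                self.neutral_queue.append((label, orientation, duration))

    def step(self):
        if self.current is None:
            return
        label, orientation, remaining = self.current
        remaining -= 1
        if remaining > 0:
            self.current = (label, orientation, remaining)
            return
        # Finished: queued affective work first, then any suspended neutral
        # work, then whatever is waiting first come, first served.
        if self.affect_queue:
            self.current = self.affect_queue.popleft()
        elif self.interrupted is not None:
            self.current, self.interrupted = self.interrupted, None
        elif self.neutral_queue:
            self.current = self.neutral_queue.popleft()
        else:
            self.current = None

# Example: a rational TQM effort is interrupted by the sudden loss of the CEO.
integ = Integration()
integ.submit("TQM rollout", "neutral", 5)
integ.submit("CEO succession", "affect", 2)   # preempts the TQM work
for _ in range(8):
    integ.step()                              # TQM resumes after the interruption
```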
After the energy is integrated it passes to Latent Pattern Maintenance, where in
the current model nothing happens except that the energy is passed out of the organiza-
tion, out of its system boundary. L has already had its effect by potentially altering the
Adaptation function every time energy leaves Adaptation. The alteration of the filter is
truly latent here.
It is very important to note the cumulative delay that has occurred between when
the energy first enters the system and when it finally impacts Latent Pattern Maintenance.
The delay is the sum of the processing times in Adaptation, Goal Attainment, and Inte-
gration, plus waiting times. It is not insubstantial. Delay in time-varying systems can
Page 67
cause many kinds of dysfunctional behavior, including most notably oscillation as the
organization tries to respond to reduce tension.
Rules
All models are the union of their parameters and rules, as mentioned on page 19.
Here is a list of rules programmed into the simulation.
Table 4.
Rules of the simulation.
35 For example, if the external energy is of magnitude 5, the spreadsheet relation is >, and the spreadsheet value is 2, then the energy bundle passes because 5 > 2 is true.
6. Adaptation and Goal Attainment can work on only two energy bundles at once, one requiring affect and the other not requiring affect.
Interpretation: The capability of the organization to respond to news is limited to one emotional event and one rational event at the same time.

7. Integration and Latent Pattern Maintenance can only work on one energy bundle at a time.
Interpretation: Only one (major, funded) process can be integrated at a time, be it rational or emotional, e.g., Total Quality Management or the loss of the CEO. And LPM can respond to only one event at a time, too, though its effect can be quite long-lasting, as it controls how much news is let in.

8. If A, G, or I are finished processing an energy bundle but its successor is not ready to accept the bundle, then the bundle is put into a queue between them and the function is given more energy to process if any has arrived at that point in the cycle.
Interpretation: News waits to be responded to; it does not drop or go away.

9. If energy is the type that will be processed by affect, then the (media interchange) paths between A and G and between G and I are different than if the energy is not processed by affect.
Interpretation: News that will be responded to emotionally takes a "fast path" through the functions.

10. Goal Attainment enqueues the energy passed to it from Adaptation and processes it all at once at a given interval if not affect; if affect, then it is processed as it arrives, consuming the affect-neutral delay according to the spreadsheet.
Interpretation: Goal Attainment simulates the rational budget process (if the news is affect-neutral) and queues resource (that is, funding) requests until a definite period has transpired, such as every six months.

11. Not every queued affect-neutral energy package transits from Goal Attainment to Integration, only a percentage does. That percentage is set at simulation run time.
Interpretation: Think of these Goal Attainment energy packages as funding requests. The Goal Attainment function processes the budget requests all at once every six months, passing on only a portion of them as "approved."

12. Integration addresses the incoming energy for the duration specified in the spreadsheet, if not affect. If affect, then any non-affect current integration efforts are suspended (that is, made to pause and put into a special queue) and the affect energy is given priority of Integration. After all of the interrupting (that is, affect) …
Interpretation: Imagine that an integration activity is going on, such as implementing Total Quality Management. Then the organization learns that its CEO is suddenly, unexpectedly deceased. The organization suspends the TQM initiative and focuses on succession and how to respond to the urgent news. After responding to the urgency, the organization goes back to implementing TQM.
…ance from Parsons on the numerical or relative values.
Interpretation: …is no telling how many business days each function consumes.

4. The paths through Adaptation and Goal Attainment are separate and parallel for affect and affect-neutral energy. That is, as energy enters the organization it is identified as affect or affect-neutral and then put on its own path.
Interpretation: There is a "fast path" through some functions in order to simulate the effects of immediate response implied by "emotional events" vs. the more studied and time-consuming one of the rational response.

5. The paths are separate because one is accelerated and the other is not. Energy on the accelerated path then may be handled differently than that on the affect-neutral path.

6. The affect-neutral path to Goal Attainment queues energy such that the Goal Attainment function empties that queue only every so many days, to simulate the periodic (that is, calendar-driven) resource allocation review process.

7. The paths come together at Integration because an organization can only integrate one set of processes at a time. Therefore, the energy that has been identified as affective preempts the energy that has been identified as affectively-neutral.
Interpretation: Integration is so consuming that only one organizational initiative can be accomplished at a time. And emotional ones have priority.

8. Preempted energy is enqueued. That is, it is put aside and waits for the preempting force to finish and then it resumes. No energy is lost, it is stored for later use. Its strength does not diminish during storage.
Interpretation: Rational Integration functions that are interrupted by emotional ones are not lost, but rather are delayed by the time it takes to address the emotional one. Then when the integration of the emotional event has been completed either another emotional event can be addressed if there is one, or the rational event that was interrupted will be restored and continue to process as if nothing had interrupted it.

9. There is a single path through Latent Pattern Maintenance.
Interpretation: Latent Pattern Maintenance actually appears in two places: during Adaptation it changes the filter on incoming news to let more or less in based on its response to tension (the difference between what…
Table 6.
Map of the theory to the model. Each entry pairs a phrase from Parsons, Bales and Shils (1953a), with original orthography and punctuation, with its interpretation and place in the model.

Phrase (p. 164): We make four major assumptions in our analysis of boundary-maintaining systems which are composed of a plurality of units, or "particles". We assume first, the principle of inertia, namely that a unit or "particle" always tends to move in the same direction at a constant rate unless deflected or impeded.
Interpretation: Parsons was referring to Newtonian inertia, something that is observed (or defined) in the physical world. Surely the terms "direction" and "constant rate" have a different meaning in an organizational setting. By direction, the model assumes that energy passes first through A, then G, then I, and then exits the system after passing through L. By rate, the model assumes "cognitive rate," the rate at which energy is made sense of in organizations. Since there is no conversion factor to rates in the physical world, the simulation lets the user set the duration (called dwell) for each quadrant; the simulation permits different values for affect and affect-neutral energy, so there are eight possible user-set durations (four functional prerequisites x two values of a single pattern variable).

Phrase (p. 164): In no concrete case of system-process can this constancy of direction and rate be maintained for any span of time, since the interdependence of units is the very essence of the conception of system.
Interpretation: Indeed, as energy transits the system the rate of process can vary (in this research deterministically, not randomly). There are several cases where direction can vary: at the outset energy might not be let into the system due to the setting of the filter in Adaptation; not all energy (news or ideas) will be allocated resources in Goal Attainment, so some will travel onward (those approved) and some will exit the system; and if the energy is to be affectively responded to then in Integration it can at least temporarily displace activities that were being addressed affectively-neutral.

Phrase (p. 164): The unit in a stable state of the system will tend to follow a sequential pattern of changes of direction as its relations to the other units in the systems and to the external situation change over time.
Interpretation: The fixed set of rules and assumptions guarantees a sequential (in this case deterministic) pattern, both in relation to each of the predecessor functions and to the external situation over time.

Phrase (p. 164): This sequence may be oscillatory or cyclical or it may have some other form, but it will always involve changes of direction (and of rate). These changes will always follow a pattern, although there may be some random elements intermixed with the pattern.
Interpretation: The pattern in this research depends primarily on the pattern of the external energy over time, and the initial settings of the values that control dwell times, rate of learning and forgetting, and the window over which the system looks back in which to formulate its latent pattern maintenance response.

Phrase (p. 164): We assume the principle of action and reaction tend to be equal in "force" and opposite in direction. We interpret this to be another version of, or a premise underlying, the conception of system-equilibrium. No more than the statement of the principle of inertia does the statement of this principle imply that actions and reactions empirically are always equal and opposite; it does imply that where they are not equal and opposite, a problem is presented.
Interpretation: The implementation of "equal and opposite" is the setting of the filter on Adaptation by the Latent Pattern Maintenance function. If Adaptation lets in "too much" energy LPM compensates by decreasing the amount to be let in in the future, and mutatis mutandis for "too little." And if the reaction of LPM is not appropriately "equal and opposite" then tension increases, which in turn increases the pressure on LPM to adjust the incoming energy.

Phrase (p. 165): We assume the principle of acceleration which asserts that changes of rates of process must be accounted for by "forces" operating on (or in) the unit(s) in question. An increase of rate implies an "input" of energy from a source outside the unit in question, and decrease of rate, a loss of energy, and "output" of some sort from the unit.
Interpretation: The model does not handle this. Rates are not adjusted based on the consumption of energy. There are increases and losses of energy in the sense of sinks and sources in the model, but those changes do not affect the rates of anything.

Phrase (p. 165): We assume the principle of system-integration. We interpret this to mean that, independently of the operation of the other three principles, there is an imperative placed on systems of action which require that pattern-elements in the organization of their components should be compatible with each other while maintaining the boundaries of the system vis-a-vis its external situation.
Interpretation: The system and its boundary are given in the simulation and cannot be changed. On the other hand, the components are compatible with each other in the sense that they non-destructively interchange information. If there is a question of coexistence of the system in its environment, then it is reflected by increased tension, that is, by an increase in the difference between the pattern of energy of LPM and the pattern of energy external to the system.

Phrase (p. 165): Central to our scheme is the conception of action as a process occurring in or constituting boundary-maintaining systems conceived within a given frame of reference. This frame of reference involves, above all, the four dimensions … and … four pattern variables.
Interpretation: The model simulates action as process by the four dimensions and one of the four pattern variables.

Phrase (p. 166): An orientation cannot be both affective and neutral [at the same time].
Interpretation: Each bundle of energy crossing the system boundary is tagged as either that which is reacted to by affect or by affect-neutrality. There are no degrees of affect; affect and affect-neutrality are mutually-exclusive and exhaustive.

Phrase (p. 166): The dimensions are, we assume, essentially directional coordinates with reference to which the process of action is analyzed. Motivational energy entering the system from an organism cannot simultaneously operate in all possible processes which go to make up the system. It must be specifically located, in the sense that it must be allocated to one or more units of the system. But at any given time this unit must be located at some definite point in the action space, and must be moving … in a definite manner.
Interpretation: Energy enters the system and moves along pre-determined paths, operating in one place at a time. Energy in the simulation is always specifically located. And the units are connected by definite interchange paths, along which energy and information move.

Phrase (p. 167): The system operates through interaction of its member units. Every change of state of one unit … will affect all of the other units in the system and in turn the effects of these effects on the other units will "feed back" to the original unit.
Interpretation: This describes interchange media, of which only two types have been implemented in the model: feed forward, the forward transit of energy, and a single feedback path from LPM to Adaptation that sets the filter on A to determine how much energy to let in. So, there is only a single instance of feedback in the current model; it is not fully-connected.

Phrase (p. 167): We derive the conclusion that systems of action must be treated as differentiated systems. It then becomes clear that this differentiation will work out in two ways. Since we are dealing processes which occur in a temporal order, there we must treat systems and the processes of the units as changing over time.
Interpretation: Time is a fundamental construct in the model, indeed in discrete-event simulation.

Phrase: The one way character of the process we have deduced from the nature of motivational energy—the fact that it is "expended" in action. We assume throughout … there is, if not a law of conservation of motivational energy, a law of "equivalence" in the sense that this energy does not simply disappear, but, "produces" some kind of consequences, that there is a bal…
Interpretation: The energy that transits the system does produce a kind of reactive consequence: the setting of the filter on the Adaptation function so that the inputs and outputs are, indeed, in balance. It does this in a numerical, quantitative way, but without any measurement in The Real World. That is, the numerical aspect was created in the model for purely illustrative purposes.
V. RESULTS
The results of the simulation are divided into three broad areas: an example, base
cases, and an extension. The base cases and extension are quite similar in their pattern of
presentation: briefly explain the theory and what it would predict, indicate what the
inputs to the simulation were, and then illustrate the output of the simulation run com-
pared to the theory prediction. They directly address Parsons' theory of action. The
example has been offered to provide some concreteness to the application of the theory, a
topic largely beyond the scope of this research.
Example
Parsons' theory is opaque, so the model of it was correspondingly abstract. An
example taken from real events might aid the comprehension of the theory and therefore
the model. At the outset we must be mindful of Parsons' admonitions about misplaced
concreteness (see p. 28 above), about the value of analytic thought, so this example must
be disclaimed from the outset as being presented here for illustrative purposes only. Nothing is
intended to be proved by it.
Up to now patterns of flows among functions have been described (Parsons'
phases), but there have been no acts! The example here is an attempt to show how the
flows and functions could describe actual human action.
For the example a situation was sought in which the energy outside of the system
was relatively uniform for a long time (stable) and then there was a jolt, an impulse of
sudden energy, mirroring some of the events to be presented below in the base cases and
extension. Waller (1999) examined the order and timing of events in a commercial airline
cockpit simulator during training drills with real airline flight crews while they were
addressing "nonroutine" events, the kind that were associated with high outside energy.
A real situation was sought, but the example explored here was itself a simulation of such a real series of events. The problem is a scarcity of reports about
real world events in which timing and order are recorded, along with outcomes. There-
fore, a simulated though realistic setting is presented.
There were ten flight crews of three persons each, so each crew was a small
group, the kind that exhibits collective behavior. The setting was naturalistic, as such
training simulators are constructed precisely to mirror real world situations and condi-
tions. The nonroutine events were arranged in a sequence of six unexpected items of
news during a planned 60-minute flight from Los Angeles to San Francisco.
The unexpected events were:
1. Poor weather forecast; bad weather at San Francisco and its alternates; heavy
takeoff weight.
2. Light to moderate turbulence during the climb and cruise phase.
3. Fast, noisy descent required by air traffic control during approach to San
Francisco.
4. The approach was missed due to hydraulic failure; crew must select an alter-
native destination (Sacramento).
5. During the flight to Sacramento emergency procedures had to be performed,
including trying to manually extend the nose landing gear and force the flap
that experienced the hydraulic failure.
6. During the landing the crew had to compensate for the non-responsive flap, the loss of steering with the nose landing gear, and a high landing speed, because the flap, which normally also acts as an air brake, was not working.
Waller hypothesized that success at handling nonroutine events would depend
upon information collection and dissemination, task prioritization, and task distribution.
She noted that these all dealt with the level of the behavior, but not the timing [emphasis
hers]. She noted, for example, "rather than viewing the time of change as a function of
internal stages or clocks, the time of change may be seen as more tightly linked to exter-
nal events." p. 130, relying on Ancona and Chong (1996, p. 263)
Therefore, she hypothesized and tested for timing by studying whether there was
a relationship between the time an external event occurred and when it was reacted to.
Waller found that, for example, there was no difference in the level of workload between
crews that responded quickly and those that responded less quickly, though the crews that
responded quickly to external events all performed much better than the crews that
responded less quickly or did not let the external events come to their notice. In other
words, there was no difference in the level of behavior, but the difference in timing made
the significant difference in crew performance outcome. As Waller pointed out, the
higher performing teams did not work harder, did not perform more tasks, but did achieve
more, all because of timing, because of noticing and following significant external
events. (p. 134)
Two scenarios are described within the theory of action simulation, one with a
relatively long window and one with a relatively short one. The window, as one may
recall, is how far back the Latent Pattern Maintenance function looks in order to "remember" what happened historically. Strong cultures look back a long way and weak
ones a short time. Here was the input from the user for the long window, the whole flight.
Each period of the simulation is one minute and there are 60 periods, to mirror
Waller's experiment. The pattern of external energy, not shown, is eight minutes of rela-
tive calm followed by a single one-minute message of high energy (Energy=4) that has to
be dealt with affectively. Figure 16 shows that the assumed period of prior learning is
approximately ten years (in minutes! "1.00E+08" is 10^8 minutes) and the period of time
to reach the current level of expertise is one year. All external events are permitted to
enter (Energy filter=1 and Threshold to respond to change=1). The Adaptation function
looks at the external environment once every minute.
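For concreteness, the external energy stream of this example can be generated as below. Only the spike value of 4, the affective handling of the spikes, and the eight-minutes-calm-plus-one-minute-spike rhythm over a 60-minute flight are stated in the text; the calm-period energy level of 2 is an assumption borrowed from the base cases.

```python
def waller_energy_stream(minutes=60, calm=2, spike=4, cycle=9):
    """External energy for the cockpit example: eight minutes of relative calm,
    then a one-minute spike of energy 4 to be handled affectively, repeating
    over the 60-minute flight."""
    stream = []
    for minute in range(1, minutes + 1):
        if minute % cycle == 0:
            stream.append((minute, spike, "affect"))
        else:
            stream.append((minute, calm, "neutral"))
    return stream

spikes = [m for m, e, o in waller_energy_stream() if o == "affect"]
print(spikes)   # minutes 9, 18, 27, 36, 45, 54 -> the six non-routine events
```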
Here is the pattern of internal and external energy, assuming such a strong culture
that the strength of the culture remains constant during the 60-minute flight.
[Figure: internal and external energy levels (vertical axis, 0 to 4.5) over simulated time (0 to 60 minutes) for the long-window, strong-culture case.]
One can see that after the first (of six) non-routine events, the culture only lets in
really high energy events as an indication that it has learned that such non-routine events
can occur and that attention has to be paid to them immediately.
Now we make a single change: the window is reduced to a few minutes, as if the
crew forgets the (disruptive) impact of each non-routine event. The window is set to five
minutes and here are the results:
[Figure: internal and external energy levels (vertical axis, 0 to 4.5) over simulated time (0 to 60 minutes) for the five-minute, weak-culture window.]
The energy bounces up and down as the culture more closely follows the pattern
of the external energy, with a non-routine event every nine minutes. This, too, is pre-
dicted by the theory of action because a weak culture (= short window) will respond
much more quickly to changes in outside energy and therefore will maintain less of a
pattern, will enforce less of a culture.
In summary, Waller notes that groups can "match the rhythms of [their] task-ori-
ented behaviors to exogenous events, rhythms, or deadlines" (p. 135). This is precisely what the model described in this dissertation shows: better performance is achieved by matching
the energy in the environment with the energy circulating internally, presumably con-
trolled by the most powerful function in the cybernetic hierarchy, Latent Pattern Mainte-
nance.
In the Waller example cast in Parsonian terms, the crew executes the Adaptation
function itself, sometimes by asking for news (such as weather conditions) or by noticing
indicators (such as the nose gear not engaging and the noisy approach descent). Based on
its sense-making during Adaptation the crew determines whether each event is routine or
not. When it was not routine, then the best crews responded to it affectively, accelerating
the transit of the event through the crew's equivalent of Goal Attainment (redirect atten-
tion towards the new event, immediately "approving" it for (attention) funding), and Inte-
gration (executing the standard procedure for that unexpected event, but interrupting or
suspending standard processing).
Base cases
Affect vs. affect-neutrality
Affect and affect-neutrality are traditionally ascribed to each of the four functions:
affect to G and I, and affect-neutrality to A and L (see Figure 5). That is, A and L are to
be more cognitive, rational, thought-out, and G and I address gratification and emotional
aspects; they are not rational. There is also the view that energy dealt with affectively transits the functions more quickly than energy that is dealt with in a studied, reflective, rational way. One case, then, examines the extent to which energy that is dealt with affectively passes through the organization more quickly than energy dealt with affectively-neutral.
One run of the simulation had the following values:
The simulation ran for 2400 simulated business days (about ten years) with an
energy stream of all 2's that were to be dealt with in an affect-neutral way, except that every
240 business days (approximately one business year) there was an event of energy 4 to
be dealt with affectively (as one would assume, since it represents a large departure
from "normal"). Accordingly, there were ten such events of magnitude 4, including one
on the last business day simulated. The results: there were 480 events (one every business
week, which was the frequency at which the Adaptation function scanned the environment);
408 events did not pass through the Adaptation filter and therefore were not processed
further, 49 were not resourced, four were still in processing in the four functions, and 22
completed the journey through all four functions. Among those 22 were ALL nine of the mag-
nitude-4 events that were to be dealt with affectively; the remainder were the
"normal," affect-neutral events. Clearly, the events dealt with affectively sped
through the organization.
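The priority effect behind this base case can be illustrated with a single-server sketch (plain Python, not the SIMUL8 model). Weekly events arrive for ten business years, with roughly one affective, magnitude-4 event per business year; the service times used here (10 business days for affective work, 60 for affect-neutral work) are assumptions standing in for the dwell times of Figure 19, and the Adaptation filter and resourcing steps, which turn away most events in the actual model, are omitted.

import heapq

HORIZON = 2400                               # simulated business days, about ten years
# (arrival day, is_affective): one event per business week, one affective event
# near the end of each business year.
arrivals = [(day, day % 240 == 235) for day in range(0, HORIZON, 5)]

ready, done, clock, i = [], {"affective": 0, "affect-neutral": 0}, 0, 0
while clock < HORIZON:
    # Admit everything that has arrived by now into the priority queue.
    while i < len(arrivals) and arrivals[i][0] <= clock:
        day, affective = arrivals[i]
        heapq.heappush(ready, (0 if affective else 1, day, affective))
        i += 1
    if not ready:
        if i >= len(arrivals):
            break
        clock = arrivals[i][0]               # idle until the next arrival
        continue
    _, _, affective = heapq.heappop(ready)   # affective work is always served first
    clock += 10 if affective else 60         # assumed dwell times, not Figure 19's
    done["affective" if affective else "affect-neutral"] += 1

print(done, "unfinished:", len(ready) + len(arrivals) - i)

Under these assumptions nearly every affective event completes while the great majority of affect-neutral events remain waiting, which is the shape, though not the exact counts, of the result reported above.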
Strong vs. weak culture
Organizations with strong cultures have, in essence, a strong Latent Pattern
Maintenance function, one that restores the pattern after any disturbance that might enter. In fact, one of
the ways LPM can limit disturbances is not to let them in in the first place, by limiting the
information/energy that the Adaptation function lets pass. By restricting the filter on the
Adaptation function, LPM limits the excursions of energy inside the organization. In a
weak culture, the organization more closely follows the external energy, in a sense
tracking it with all of its ups and downs.
Figure 19, above, represents the tableau of a strong culture. First, its memory win-
dow is long, 45 events; assuming one event per five days, that's almost one business year.
In other words, the reaction time of this organization can be delayed by a year, after
which it again reacts to the outside energy. Here is the pattern of external energy and the
pattern of following it internally:
[Figure: internal and external energy levels (0 to 4.5) over 2400 simulated business days; strong-culture case.]
As can be seen, there are annual jumps in energy to a value of 4, with long peri-
ods of 2 between them. This strong culture "forgets" the high values over time until
another one hits, and then its internal energy jumps up again to follow it. The figure illus-
trates the deterministic, repeated pattern of internal energy following external pertur-
bations. Consistent with the prescription of classical conditioning, there was no long-term
learning: the pattern of internal energy is completely determined by the pattern of exter-
nal energy.
By changing just two parameters (the window to 25, about half of the previous
value, and the Threshold to respond to change from 1 to 0.5), the organization mirrors a
weaker culture. Here are the results, with the same stream of energy as in the figure
above:
[Figure: internal and external energy levels (0 to 4.5) over 2400 simulated business days; weaker-culture case.]
In this case the organization gradually forgets the high values, and when the
window has passed it says to itself, in effect, "Let's stop responding to old news and get
synchronized with what is happening now; let's loosen the reins a bit and let some new
energy in." But, again, consistent with the prescription of classical conditioning, there
was still no long-term learning: the pattern of internal energy is completely determined by
the pattern of external energy. The only change was the period of looking back.
And here are the results for an organization with a really weak culture: the window
was set to about one month. This would be the case in an organization where something
like the terrorist attacks of 9/11 happened every year and within a month the organization
was incorporating the weekly news as if nothing had happened. It would be as if there
were no heritage, no legacy. No pattern maintenance.
[Figure: internal and external energy levels (0 to 4.5) over 2400 simulated business days; really-weak-culture case (window of about one month).]
Extension
The outcomes in the previous cases were easily predicted by the theory of action.
Here is a case in which there is no theory to guide predictions. In some sense, the simula-
tion is the prediction.
In this case, Figure 19 is used. The pattern of inputs was varied slightly: instead of
49 weeks of a constant Energy=2 and no affective processing fol-
lowed by a single week of Energy=4 with affect, there are 25 weeks of Energy=2 with no
affect, followed by one week of Energy=3 with affect, followed by 25 weeks of Energy=2
and no affect, followed by one week of Energy=4 with affect. In all there are 52 weeks,
with one energetic event in the middle and one at the very end, those being the
only two events to be dealt with affectively. Everything else is the same dull news, to be
dealt with in an affect-neutral way.
Here was the simulation at the end of 2400 business days:
Figure 23. Simulation after two energetic events per year, both with affect. Illustrates queuing effects.
While it is difficult to read, the small numbers inside the main quadrants repre-
sent the number of energy bundles presently being processed. As can be seen, there
were 28 pending "requests" for Integration, along with 16 Integration processes that were
interrupted while higher-priority ones were being processed (presumably those that had to
be dealt with affectively). Why are they all waiting? Because the current Integration
activity is processing energy affectively. With the current values, each affective process
takes 60 business days to complete Integration, so about two such durations (about 120
business days, roughly six months) must pass before the first affect-neutral Integra-
tion process could even begin. A total of 44 affect-neutral events were funded in Goal
Attainment and all of them are queued: 28 in the Integration queue and 16 in
Interrupted Integration. The queues build because, according to Figure 19, the dwell time
increased as the energy made its way around the AGIL circuit; this was
logical because Adaptation took less time than Goal Attainment, which in turn took less
time than Integration.
To reiterate, all 17 items in Spent Energy (those that have completely transited the
organization) were limited to those that were dealt with affectively. That is, in the ten years
simulated NO affect-neutral events were processed all the way through! That is due to the
frequency of the events that had to be dealt with affectively (two per year) and the long
time it takes to integrate the responses to them (60 business days, about three calendar months).
Therefore, one of the extensions to the theory of action is the impact of time spent
in each functional unit relative to the rate at which inputs and messages arrive. If the
average time spent exceeds the average inter-arrival time of the inputs and messages, then
queues will build. This is a fundamental principle of queuing theory (Kleinrock, 1975-
1976). Parsons did not write about what happens when some energy has to wait three
years to be integrated.
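The arithmetic behind that principle, using the rates from the base case above, can be shown directly; this back-of-the-envelope sketch ignores the Adaptation filter, which in the actual model turns most events away before they ever queue.

inter_arrival = 5    # business days between scanned events (one per business week)
service_time = 60    # business days for an affect-neutral event to complete Integration
horizon = 2400       # simulated business days

utilization = service_time / inter_arrival   # greater than 1: the queue grows without bound
offered = horizon // inter_arrival           # 480 events offered over the run
servable = horizon // service_time           # at most 40 Integrations can complete
backlog = offered - servable                 # the rest must wait or be filtered out
print(utilization, offered, servable, backlog)   # 12.0 480 40 440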
In sum, the results of operating the simulation were both accurately predicted by
the theory and, without much effort, demonstrated potential extensions to it. The results
were collectively an encouraging step towards a workbench that scientists could use to char-
acterize and experiment with their understanding of social systems.
researcher might use the model, which adds weight to the practicality of the future pros-
pect of a more extensive (deeper and broader) exploration of the theory of action. That is,
the current research was conducted by beginning with a modest baseline of operational
capabilities and successively adding to it in order to increase the coverage of, and there-
fore fidelity to, the theory.
The fidelity was illustrated by several "base cases," whose outcomes were pre-
dicted by the theory. One case was a strong culture, one with a strong Latent Pattern
Maintenance function that could remember for a long time. Theory would predict that
such an LPM function could counter new information entering the organization by quickly
decreasing the amount of new information permitted in until the organization was "over"
(in the sense of having forgotten) the out-of-the-ordinary impulses. Another case, a weak culture,
illustrated what the theory of action predicts: the organization under study closely follows
the pattern of external information (as though it had no memory) and was therefore
whipsawed by the shape of external events; there was virtually no counter-force to the im-
pulses entering the Adaptation function from a possibly turbulent environment.
On the basis of these base cases, one can have a degree of confidence that the
model enacts the theory. In addition, an Appendix contains the attestation of an expert in
the selected simulation language that the modeling and the model achieved what was
sought.
Discussion
As presented above in the Literature review, p. 25, there was a notional set of
steps to be taken to build the simulation:
Table 7.
Correspondence between what was required and what was developed.
1953a), in particular phases and cycles. Sequence by its definition suggests ordering and
therefore time and timing. The translation of Parsons' time to the model developed here
was aided in a straightforward way using the trick of discrete event simulation, a method
of simulation that explicitly specified order and time.
The challenge with respect to time was obtaining times (intervals) for the duration
of each of the four functional prerequisites. It was not sufficient to leave that to the user.
The arbitrary unit of a single business day was selected as the atomic unit of time, with-
out loss of generality. The user of the simulation then specified durations in units of busi-
ness days, in the hope that this was somehow consistent with what Parsons envisioned
but never wrote. Again, the unit of time could be changed throughout the simulation to
another one, as long as the same atomic unit was used everywhere. That is, there is no
subjective or social time in the simulation; all time is in terms of a clock tick or cadence
of equal duration.
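The discrete event treatment of time referred to here can be reduced to a few lines: a future-events list ordered by time, and a clock that advances only when an event occurs, in whatever atomic unit the modeler chose. This is generic discrete event bookkeeping, not the SIMUL8 model, and the event descriptions are invented for the illustration.

import heapq

# (time in business days, description), kept in time order.
future_events = []
heapq.heappush(future_events, (5, "energy bundle passes the Adaptation filter"))
heapq.heappush(future_events, (15, "Goal Attainment funds the bundle"))
heapq.heappush(future_events, (75, "Integration of the bundle completes"))

clock = 0
while future_events:
    clock, event = heapq.heappop(future_events)   # time jumps straight to the next event
    print(f"business day {clock}: {event}")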
The impact of using uniform time instead of subjective time is not clear. If there
were some way to model subjective time then the mismatch among durations of the first
three functional prerequisites would still exist, queues would still build, and learning
would still be time-based. That is, in the main the results would be the same whether time
was modeled as uniform or subjective.
Process
The description of social systems as process was introduced in the Literature
Review on p. 36. There it was argued that the process focus imposed a heavy burden be-
cause it required a rich description of the mechanism and steps by which states change
inside an organization in response to external stimuli, as opposed to what one usually
finds in School of Education dissertations, which are statistical analyses of scores; there
are no dynamics and no detailed mechanism of how a score gets its value.
The heavy burden is manifest in writing a simulation because the computer has to
be told everything! Not only did the structure and function have to be made manifest for the
computer, but so did the many details about which Parsons gave no guidance: were there
queues between functions; how exactly did Latent Pattern Maintenance learn (we know
that it was by classical conditioning, but what was the model and what were the values of its
parameters?); how exactly did Latent Pattern Maintenance affect Adaptation (that is, how
did LPM affect the energy that Adaptation sensed or did not sense?); how did LPM measure or
sense tension, and then what exactly did it do to Adaptation in order to present a counter-
force to energy that disturbed the previous state; and so on.
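One way to appreciate how much had to be invented is to write a candidate answer down. The class below is a hypothetical reading, in Python, of the learning mechanism: on each exit from Adaptation, Latent Pattern Maintenance remembers the event's energy for a fixed window of events and raises the Adaptation threshold toward the largest remembered departure from the baseline pattern. The rule and every name in it are assumptions made for illustration; the rule actually used is the Visual Logic routine "Learning Model Common" listed in the Appendix.

from collections import deque

class LatentPatternMaintenance:
    """A hypothetical sketch of LPM's classical-conditioning-style learning."""

    def __init__(self, window, baseline=2.0):
        self.memory = deque(maxlen=window)   # finite memory: old disturbances roll off
        self.baseline = baseline             # the pattern the culture tries to maintain

    def on_exit_from_adaptation(self, energy):
        """Record the energy that just left Adaptation; return the new filter threshold."""
        self.memory.append(energy)
        remembered_peak = max(self.memory)
        tension = remembered_peak - self.baseline
        # The larger the remembered disturbance, the less new energy is let in.
        return self.baseline + tension

lpm = LatentPatternMaintenance(window=45)
for energy in [2, 2, 4, 2, 2]:
    print(energy, "->", lpm.on_exit_from_adaptation(energy))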
As the simulation results unfolded, another process question arose: did Parsons
foresee that organizations had a capacity to respond that is finite over an arbitrarily small
period? Did he foresee that the functions might get to a state where they could no longer
absorb or respond to any more energy? And then would he have predicted what would
happen? While these questions might properly belong in the section below on recom-
mendations for further study, they in fact suggest the fruits of the process view taken
during this research.
Discrete event simulation as a technique
Most modelers of social systems use the techniques of system dynamics for good
reason: there is a community of practice centered around MIT and other distinguished
universities (e.g., System Dynamics Society), an excellent text with many examples
(Hanneman, 1988), and there is a growing corpus of applications (quarterly System
Dynamics Review and the annual Proceedings of the International Conference of the
System Dynamics Society). However, system dynamics did not appear to be up to the
task of modeling the theory of action, particularly using the direct words of Parsons as the
oracle.
Probably the breakthrough in this research came when Parsons' description of the
phase movement was seen as a partial time-ordering and then discrete event simulation
techniques were applied to see if there was a fit. The single largest contribution of this
research may be the application of discrete event simulation to a social system, as this
was only the third recorded instance of such an application.
Those who most often use discrete event simulation are trying to understand how
waiting lines form, so it was no surprise that the waiting lines in the theory of action were
exposed. This, too, may be a contribution of this research, as the topic appeared to be
neglected by Parsons, his supporters, and his critics.
Other difficulties
The effort to model the theory of action was difficult for several additional
reasons: (a) so much has been written by and about Parsons; (b) what has been written is
difficult to understand; and (c) there is a paucity (indeed, a complete absence) of empirical,
time-varying results that could be used to verify the simulation.
Each of these difficulties was addressed, though not all to the same level of rigor.
To avoid claiming any relation to the totality of Parsons' work, a single chapter was
selected as indicative, and then only a very small portion of it was selected to be simu-
lated. There is likely no antidote to the difficulty of understanding what has been written
by and about Parsons; one can only try to triangulate among Parsons' texts and expert
commentary, and then have the result reviewed by experts in order to increase confidence
in the fidelity of the understanding.
Above all, this research should be seen for what it was: a small, toy experiment,
without verification, to see if something bigger is possible. Only by seeing that bigger
thing, perhaps produced by a future researcher building on the foundation presented here, can
the import of the current research be assessed.
Implications
For theory of social systems
One implication for social systems research is the consequence of framing inter-
actions in terms of time sequences but not attending to the impact of those time
sequences. For example, the instant research illustrated the impact of not attending to the
relationship between arrival rates and service rates. That is, in an external or even internal
environment of turmoil and "white water" (Vaill, 1996), a scan of that environment can
identify many items that need the attention of the organization (a high arrival rate). Will
there be enough time (high enough service rate) to attend to them all? What happens to
the ones not attended to? These are questions about which Parsons offered no guidance.
In addition to the typical problem of queues building when the average arrival rate
exceeds the average service rate, there is also the question of priority queues and high-
priority processing. While usually the domain of industrial engineering and operations
research, those topics entered this research during the simulation of an organization
responding affectively to stimuli. In the affective case the service rate is higher
(Parsons et al., 1953a, p. 201); at least one reason is that affect is by definition
emotional while its opposite, affect-neutrality, is rational, reasoned, and cognitive, and it takes
longer to be rational than not. The faster service rate for energy addressed
affectively can compensate for a higher arrival rate of external energy. The dilemma is
that the quality of decisions arrived at affectively is lower than that of decisions made
affect-neutrally (that is, rationally). As Fararo (2001, p. 157) avers, the problem for further
research is finding the "sweet spot," the stable region, between the two; he said it is
the kind of theorem one would like to see for the operation of pattern maintenance (loc.
cit.).
For research in simulation
Clearly discrete event simulation is underused in social systems research; only
two previous examples were found. Perhaps the most compelling reason for that is the
paucity of social systems research that incorporates time; time is the main independent
variable in discrete event simulation. In that sense Parsons was many decades ahead of
social systems research. And it might be premature to suppose that sociology has caught
up with his practice of seeing social processes as events in a time sequence, the kind of
string of actions that is ideally suited to being simulated in a discrete event framework.
Another force that might augur for additional application of the discrete-event
approach is the increased cross-over between sociology and other computer- and mathe-
matics-related disciplines. One finds the CMOT (computer and mathematical organiza-
tion theory) community increasingly using engineering-oriented tools to address socio-
logical problems. For example, Burton and Obel (1995), management scientists, have
found that the design of an organization (its structure) can be optimum, a term never used by
sociologists or organizational designers. Burton and Obel cast the problem as one of linear
programming, in which an objective function is to be maximized (such as decision
speed or decision quality) or minimized (such as communication expense, overhead, or
rework), subject to constraints. This framing as a linear programming problem is an
example of two disciplines intersecting and of new techniques growing from that intersection.
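As a toy illustration of that framing (not Burton and Obel's actual model), the fragment below casts a two-mechanism design choice as a linear program: choose the mix of hierarchical and lateral coordination that maximizes decision speed subject to a communication budget. Every coefficient is invented for the example, and scipy's general-purpose solver stands in for whatever Burton and Obel used.

from scipy.optimize import linprog

# Decision variables: x1 = share of coordination handled by the hierarchy,
# x2 = share handled by lateral teams. Maximize decision speed 3*x1 + 5*x2
# (linprog minimizes, so the objective is negated).
objective = [-3.0, -5.0]
communication_cost = [[2.0, 4.0]]    # cost per unit of each mechanism
budget = [3.0]                       # available communication budget
coverage = [[1.0, 1.0]]              # the two shares must cover all coordination
total = [1.0]

result = linprog(objective, A_ub=communication_cost, b_ub=budget,
                 A_eq=coverage, b_eq=total, bounds=[(0, 1), (0, 1)])
print(result.x, -result.fun)         # optimal mix and the decision speed it achieves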
The implication here is that the confluence of interest in the theory of action from
outside sociology with the increased capability and expressiveness of simulation languages
and tools might result in a renaissance of investigations into the meaning of the theory of
action, using the mechanism of a workbench that a researcher could manipulate to explore
his or her understanding.
For practice
The scientist finds his reward in what Henri Poincaré calls the joy of comprehen-
sion, and not in the possibilities of application to which any discovery may lead.
- Albert Einstein36
There is very little here for the practitioner. After all, this is a simulation of a
theory, in effect an abstraction of an already abstract theory. If there is one finding for a
practitioner it is the power of affect and the need to balance it with affect-neutrality for
the long-term health of the organization. This was not lost on Fararo (2001), who notes
that "A 'functional necessity' or 'functional imperative' for an ongoing social system is that the
element of affective neutrality be built into it (i.e., action in some situations should take
the form of disciplined attention to instrumental and moral considerations in priority over
immediate gratification)" (p. 137).
36
In Alice Calaprice (ed.). (1996). The quotable Einstein. Princeton, NJ: Princeton University Press, p. 173, in turn
quoting from "Prologue" in Max Planck. (1932). Where is science going? New York: Norton, p. 211.
One might speculate about the conditions under which the balance between
affect-neutrality and a purely affective response should be struck: namely, when tension is not
being tracked, when Latent Pattern Maintenance is not matching the internal energy of the
organization to the energy of the environment, and when the pattern of matching is
diverging over time, getting worse, more tense. In that case, greater affect-neutrality (that is,
cognition) might be required in order to make higher-quality decisions, and those decisions in
turn would better match the internal energy to the external.
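A speculative sketch of that monitoring rule follows, with names and parameters that are assumptions rather than anything in the simulation: track the gap between internal and external energy, and flag the need for more affect-neutral (cognitive) processing when the gap has grown over a recent window.

def needs_more_affect_neutrality(internal, external, window=10):
    """True when the internal/external mismatch has grown over the last window ticks."""
    tension = [abs(i - e) for i, e in zip(internal, external)]
    if len(tension) < 2 * window:
        return False                          # not enough history to compare
    recent = sum(tension[-window:]) / window
    earlier = sum(tension[-2 * window:-window]) / window
    return recent > earlier                   # tension is diverging, not being restored

# Internal energy stuck at 2 while the environment has moved to 4: flag it.
print(needs_more_affect_neutrality([2] * 20, [2] * 10 + [4] * 10))   # True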
Recommendations for further study
Surely the most commonly heard expression from a reader upon reaching the end of
this dissertation will be "Why did he stop here? This is just the beginning. There is so
much further to go." The dilemma when attempting something never done before is
determining when a beginning has been accomplished, when it is time to declare the end of
one phase so that another may begin (possibly conducted by another researcher).
This dissertation presented a simulation of a part of Talcott Parsons’ theory of
action. Like all simulations its fidelity can be improved in a number of dimensions, in
this case: taking more theory into account, being more accurate, being more general,
being more user-friendly, accounting for more pattern variables and their interaction,
having finer granularity, having coarser granularity, having adjustable granularity, and
dealing with more interchange media. In fact, one way to generate a list of considerations
for further study would be to systematically address this study's Limitations section, p.
52.
The decision about where/when to stop was based on a single judgment: could
another researcher pick up where this work left off and continue along a path of refine-
ment or generality? While the author could have gone further (in fact, without limit), it is
the researcher's judgment that the state of the simulation is complete enough so that an-
other person can carry on. Accordingly, while this work could have continued, it is also
true that other researchers can join in the fun now.
[The model here] exhibits a logic of "theoretical models in progress." This usually
means starting with initial simplifications and then adding complications in suc-
cessive revisions. In [computer] programming terms, any one theoretical model
becomes successively embodied in a series of programs, the later programs cor-
recting and extending the earlier. … At any one point in this series of develop-
ments, a simulation model is both a theoretical model and a program. There is
really never a last program in the series, only a place of rest or termination
through exhaustion of the creative possibilities or diversion into work on other
such projects. (Fararo & Hummon, 1994)
4. The remaining three pattern variables could be simulated, along with their (combi-
natorial, multiple order) effects on each other. Dubin (1960) gives insight into the size
of the combinatorics and suggests a probability associated with each pattern variable
occurrence, something quite beyond Parsons' conceptualization. While there, though,
it would be possible to augment the measure of the magnitude of external energy (0 to
4 in the current simulation) with some measure of the certainty (or ambiguity) of the
information communicated by the energy, and thereby give the probabilistic approach
greater richness in explaining potentially non-deterministic situations. Again, we note
that this was quite beyond what Parsons explained.
In addition, the unit of analysis could be changed either up (to culture) or down
(to personality) or one embedded in the other, which is a variation of item 2, above.
Zelditch (1955) adds considerable detail to the explanation of the orbit and phase
movement that originate in Parsons et al. (1953a), so incorporating Zelditch's work might
be an important exercise to determine whether the simulation presented here would be
extensible along the lines that Parsons and his colleagues might have taken it.
Apply to more reported situations where there is a response to external energy
Only one application was made in the Results chapter to an instance reported in
scholarly journals. Therefore, it would be instructive to move from the demonstration of a
toy to a tool that explains reported structural and functional responses to external stimuli.
Candidates might include (Audia, Locke, & Smith, 2000; Barr, 1998; Chattopadhyay,
Glick, & Huber, 2001; Haveman, Russo, & Meyer, 2001; Hoffman, 1999; Holmwood,
1983; Marcus & Nichols, 1999).
In addition, a future approach might also focus on affect vs. affect-neutrality in
decision making, relying on such sources as "Toxic decision processes: A study of emo-
tion and organizational decision making" (Maitlis, 2004), The neurotic organization:
Diagnosing and changing counterproductive styles of management (Kets de Vries, 1984),
Unstable at the top: Inside the troubled organization (Kets de Vries, 1987), and The
Icarus paradox: How exceptional companies bring about their own downfall; new lessons
in the dynamics of corporate success, decline, and renewal (Miller, 1990). One aspect of
the focus on affect vs. affect-neutrality that is missing in the current research is that of
decision quality or organizational fitness: is there a better or worse Latent Pattern Main-
tenance function with respect to a realistic goal to be optimized? The current research
adopts the single, naïve goal of matching internal energy to the pattern of external energy.
Clearly there is much room here for improvement.
Apply to agent-based systems
In the Research Methods section of the Methods chapter on the topic of selecting
the appropriate simulation technology, the observation was made that agent-based simu-
lation systems were gaining currency. In addition, there it was stated,
Simulating agents in Parsons’ theory of action might be a future application of the
simulation described here. If one viewed agents as co-operating and communi-
cating sequential processes (Hoare, 1985) (in the context of agent-based simula-
tion), then this study gives insight into the program that might be inside each
agent, that is, the instant research is a necessary precursor to an agent-based
simulation of Parsons’ theory of action.
Now, therefore, to apply the instant research to agent-based simulation, one must
construct a hierarchy or network into which agents fit. Much of this has been worked out
for the theory of action in a different context (Fararo & Skvoretz, 1984), namely, a hierar-
chy of interconnected automata that operate at different levels of interpenetrating
abstraction, different units of analysis. One of the advantages of the approach described
by Fararo and Skvoretz (1984) is that it preserves the non-determinism of agent-based
systems and, again, it is entirely grounded in the theory of action.
Address the dual of performance: organizational learning
The theory of action contains a duality of performance and learning. The simula-
tion reported here deals only with performance and neglects learning, so it could be
expanded to take learning into account. It is not clear how organiza-
tions learn, and particularly how Parsons thought they did; further research
could therefore experiment with how each function makes sense of the energy presented to it and
how it changes its internal processing correspondingly.
Increase technical robustness
The human-computer interface could be improved. The current version is, to be
charitable, unusable by anyone but the researcher. There is a significant literature written
about how to construct effective interfaces between computers and humans. The ecologi-
cal interface seems particularly applicable (Bennett & Flach, 1992; Chistoffersen, Hunter,
& Vicente, 1998; Goldstein, 1969; Hoffman & Ocasio, 2001; Howie & Vicente, 1998a;
Howie & Vicente, 1998b; Howie, Sy, Ford, & Vicente, 2000; Janzen & Vicente, 1998;
Mitchell & Miller, 1986; Pawlak & Vicente, 1996; Rasmussen & Batstone, n.d.;
Rasmussen, Duncan, & Leplat, 1987; Shneiderman, 1983; Vicente & Rasmussen, 1990;
Vicente & Rasmussen, 1992; Weir, 1991; Woods, 1984; Woods, 1991).
In addition, the simulation could be rewritten with provable correctness in mind,
so that testing and evaluation by a third party would be less important because the com-
puter program could be proved correct, given its specification. There are several (award-
winning) methods for constructing and proving correct computer programs in which the
programs contain timing (Hoare, 1985; Manna & Pnueli, 1991; Manna & Pnueli, 1995).
In sum
Alas, the real estimate of whether the model reported here will be sufficient for
further enrichment (which was the purpose of this research) can only be made by the
next researcher in turn, who will evaluate this scaffold for its ability to continue the con-
struction of a high-fidelity replica of Parsons' theory of action.
EPILOGUE
As can be said of so many other doctoral candidates, this was not the dissertation I
set out to write. My first idea was to create a method of translating causal loop diagrams
(CLDs) into system dynamics models (an example of which is shown in Figure 1, p. 4).
Causal loop diagrams are informal drawings that show what are presumably causes and
effects among circular influences. They were made popular in Senge (1990); Senge was a
student of Jay Forrester, the "father" of system dynamics (Forrester, 1968). No one has
ever been able to translate from CLDs to system dynamics models because there is (so
much) information missing. I found some patterns that – when a few additional questions
were asked and answered – would provide a first draft system dynamics model from
CLDs. I was going to use some then-new results from qualitative physics, a branch of
mathematics that does not rely on exact quantities, in order to reason about the relation-
ships among variables.
In addition, I thought that causal loop diagrams might help with an endemic
problem in system dynamics modeling: the misperception of feedback (Diehl & Sterman,
1995; Kleinmuntz, 1993; Paich & Sterman, 1993; Sterman, 1989a; Sterman, 1989b). It
seems that our human cognition is not very good at seeing non-linear or cyclic or attenu-
ated cause and effect connections. And this has been demonstrated even among people
who construct such connections every day.
During tea at a George Washington University function I was chatting with Karl
Weick about my work because I knew that he was interested (I had written a school paper
in which I pointed out that I thought he was mistaken in Weick (1979, p. 69 ff) about cer-
tain system dynamics applications). He asked me whether I thought I was solving a
problem of ambiguity or of uncertainty. These are his shorthand terms for the two types
of equivocality. Weick has written that the purpose of organizations is to reduce equivo-
cality. Uncertainty is the want of information. Ambiguity is the want of sense(-making):
there may be enough information, but it may be contradictory.
I was stunned because I did not know the answer to that simple question. I pon-
dered it a long time and spoke with system dynamics experts, including the author of
Figure 1. I came away with no answer, so I abandoned that work, in which I had invested
a significant portion of my research energy.
My next attempt was to see if I could apply some of the concepts of complex
adaptive systems (CAS) – also called chaos or complexity theory – to some real organ-
izational events. I had an idea that some of the arguments in the field – particularly about
whether change is (a) rapid and cataclysmic (averred by those supporting punctuated
equilibrium (Romanelli & Tushman, 1994) and quantum change (Miller, Friesen, &
Mintzberg, 1984)), or (b) incremental (Donaldson, 1996) – are simply on a continuum of
rate of change and that those changes could be more parsimoniously (and less polemi-
cally) explained by a fact of (non-linear) differential equations, the staple of CAS.
The problem was that I could not figure out what to measure. I still cannot, and,
it seems, neither can anyone who studies organizations from the CAS perspective. CAS
appears to be a metaphor, not really yet a computational tool.37
37
"Despite the promise indicated by various authors within the field, complexity science has thus far failed to deliver
tangible tools that might be utilized in the examination of complex systems." (Richardson, Cilliers, & Lissack, 2001)
One of the turning points in my search to apply what I knew as an engineer and
physical scientist to social systems came when David Schwandt, the chair of the disserta-
tion committee and the Director of the Executive Leadership Program, invited me to read
Daft and Weick (1984), which relied on Boulding (1956). Basically, that work argues that
social systems interpret the forces that impinge upon them; they do not "robotically"
absorb and then reflect the energy that is aimed at them, as billiard balls would. That is, a
social system could absorb energy, reflect energy, multiply energy, delay energy, con-
sume energy, or do with energy whatever it wanted to, completely and totally unlike
physical systems; physical systems conserve energy. That is, in physical systems there is
a fixed amount of energy and for an object to gain more means a loss of some somewhere
else, and vice versa. In social systems there is no conservation, no limit to the energy in
the system. Nearly all of physics is based upon an equation, an equality, that connects
energy to its other embodiments. What would the energy in a social system be equal to?
What equality would be preserved/conserved across social acts? I could not and cannot
answer those questions, so I dropped my search for physics-like thinking, especially
complex adaptive systems (also called complexity theory), applied to social systems.
The current research flowed from my interest in Parsons' theory of action because
I apply it every day in the delivery of advisory services. I use the theory of action to
evaluate the situation, diagnose the current state, and look for leverage for change. I
wondered whether I could animate the theory, as so many have for other social systems
before me.
I started to go to college by attending night school. It was the time of the military
draft and students were deferred if they made normal progress. In trying to make normal
progress I was forced to take courses that I could get into, whether I had the inclination or
prerequisites or not. During an early semester I took a computer course and did badly.
The next semester the only course for which I really had taken the prerequisite was the
follow-on computer course. In that more advanced course (it was the most advanced
offered at the university at the time) the instructor asked me to learn about a new thing,
discrete event simulation (DES). I learned the principles (current in 1967) and wrote a
computer program in the General Purpose Simulation System (GPSS) language, which
was brand new at the time, that simulated a grocery store, in particular something that
was first being tried in that era: designated lines for a small number of items. I was curi-
ous about whether those lines worked or not.
The effect on waiting times notwithstanding, my GPSS program impressed quite a
few people, so it ended up impressing me! And by that experience simulation became
something of a lens through which I viewed a part of the world, particularly the world of
management decision-making, which was to become my undergraduate focus in business
administration.
In 1975 for my employer at the time I was trying to predict the growth of adoption
of a new product. I already knew about the usual S-shaped growth pattern that one gets in
a restricted medium, like a Petri dish, and it had been applied to the adoption of technol-
ogy despite the obvious violation of assumptions. I was looking for something, well,
more human.
Limits to growth (Meadows, Meadows, Randers, & Behrens, 1972) had been
recently published and made fascinating reading. It was a simulation of how the world
would grow in the next century. It was my first exposure to system dynamics and it was
an impressive one. I wrote a simulation of product adoption based on what I learned from
Limits to growth. And I tried to stay current with what was called the world model by the
system dynamics community.
In the end I did not let my initial exposure to computing in that first university
course influence my final direction, mostly because I did so much better in the second
course, and did so by learning and applying discrete event simulation. DES and system
dynamics have different heritages and often appear as intellectual schools that fight over
the same turf, much like any school of thought before it becomes "normal" (Kuhn, 1970).
Having some facility in both and no commitment to either, this would not be the last time
I would be spanning boundaries.
By the time I had earned my undergraduate degree in business I was very inter-
ested in computers, so I tried to pursue a graduate program in that field, ostensibly inside a
graduate school of business. The business school I selected turned out to be having a bat-
tle about the place of computing inside it, and I could see myself becoming a pawn in
the conflict, so I sought another place at the same university where I could learn com-
puting in a different setting. In the end I entered the school of engineering and applied
science, for which I virtually completely lacked the prerequisites and did not understand
most of the course titles! I had a lot of catching up to do.
I completed my master's work with a thesis that was well regarded and earned
me a visiting scientist position for a year at a distinguished physics institute. I was work-
ing on my PhD dissertation there, a simulation system that would permit an arbitrary
level of detail. One of the challenges in creating any simulation is that there are some
things you care about and some you do not. In each simulation system the choices about
what to care about and what not to are already made for the researcher; I wanted to permit
the modeler an arbitrary level of concern. As part of my literature search I read about 300
engineering dissertations, nearly all of which represented one year of work and did not build
on previous work, so none of them could attack the arbitrariness of the level of detail. I
ran out of time, too, and never completed the research. And there has never been a simu-
lation system that lets the researcher select an arbitrary level of detail/concern/abstrac-
tion.
Much later in my career I became a consultant to the parts of organizations in
which software is developed. Gradually the level of my clients inside those organizations
rose and the nature of their questions changed from technical to organizational: "You are
advocating that we work in teams. How long does it take a team to do its work?" "What’s
the best way for me to organize the 7500 people who work for me?"
This set of questions, and others like them about how innovation is adopted, took me
away from my technical background and placed me on weak ground. So I pursued the
learning offered by the Executive Leadership Program’s doctoral degree. Again, I had
none of the prerequisites and had to study very hard just to catch up and then stay in
place.
I use every day what I learned and I believe that, despite my fits and starts on a
dissertation topic, I am a poster-child for the Program, a Program that encourages bound-
ary spanning by the example of its leader, Prof. Dave Schwandt, who is a recovering
physicist.
REFERENCES
(1997). Simulation and Gaming and the Teaching of Sociology. ASA Resources
Materials for Teaching. American Sociological Association. 19 pages.
Abbott, A. (1988). Transcending general linear reality. Sociological Theory, 6(2), 169-
186.
Abbott, A. (1992). From causes to events. Sociological Methods and Research, 20, 428-
455.
Abbott, A. (2001). Time matters: On theory and method. Chicago: University of Chicago
Press.
Achterkamp, M., & Imhof, P. (1999). The importance of being systematically surprise-
able: Comparative social simulation as experimental technique. Journal of Mathe-
matical Sociology, 23(4), 327-347.
Alexander, J. C. (1983). The later period (1): The interchange model and Parsons' final
approach to multidimensional theory. In J. C. Alexander, The modern reconstruction
of classical thought: Talcott Parsons (Vol. Four, pp. 73-118). Berkeley, CA:
University of California Press.
Ancona, D. G., & Chong, C. (1996). Entrainment: Pace, cycle, and rhythm in
organizational behavior. In L. L. Cummings, & B. M. Staw, (Eds.), Research in
organizational behavior (Vol. 18, pp. 251-284). Greenwich, CT: JAI Press.
Audia, P. G., Locke, E. A., & Smith, K. G. (2000). The paradox of success: An archival
and a laboratory study of strategic persistence following radical environmental
change. Academy of Management Journal, 43(5), 837-853.
Banks, J., & Carson, J. S.II. (1984). Discrete-event system simulation. Englewood Cliffs,
NJ: Prentice-Hall.
Barber, B., & Inkeles, A. (Eds.). (1971). Stability and change: A volume in honor of
Talcott Parsons. Boston: Little, Brown and Co.
Barkema, H. G., Baum, J. A. C., & Mannix, E. A. (Eds.). (2002). A new time. [Special
research forum]. Academy of Management Journal, 45(5).
Bennett, K. B., & Flach, J. M. (1992). Graphical displays: Implications for divided
attention, focused attention, and problem solving. Human Factors, 34(5), 513-534.
Berger, J., & Zelditch, M., Jr. (1968). Sociological theory and modern society. [Book
review]. American Sociological Review, 33(3), 446-450.
Black, M. (Ed.). (1961). The social theories of Talcott Parsons. Englewood Cliffs, NJ:
Prentice-Hall.
Bluth, B. J. (1982). Parsons' general theory of action: A summary of the basic theory.
Granada Hills, CA: NBS.
Bronson, R., & Jacobsen, C. (1986). Simulation and social theory. Simulation, 47(2), 58-62.
Bronson, R., Jacobsen, C., & Crawford, J. (1988). Estimating functional relationships in a
macrosociological model. Mathematical Computer Modelling, 11, 386-390.
Burrell, G., & Morgan, G. (1979). Sociological paradigms and organisational analysis.
Portsmouth, NH: Heinemann.
Burton, R. M., & Obel, B. (Eds.). (1995). Design models for hierarchical organizations:
Computation, information, and decentralization. Boston, MA: Kluwer Academic
Publishers.
Camic, C. (1998). Reconstructing the theory of action. Sociological Theory, 16(3), 283-
291.
Checkland, P., & Scholes, J. (1999). Soft systems methodology in action (30-year
retrospective ed.). West Sussex, England: John Wiley & Sons.
Cherns, A. (1980). Work and values: Shifting patterns in industrial society. International
Social Science Journal, 32(3), 427-441.
Chistoffersen, K., Hunter, C. N., & Vicente, K. J. (1998). A longitudinal study of the
effects of ecological interface design on deep knowledge. International Journal of
Human-Computer Studies, 48(6), 729-762.
Collins, L. M., & Sayer, A. G. (Eds.). (2001). New methods for the analysis of change.
Washington DC: American Psychological Association.
Conte, R., Hegselmann, R., & Terno, P. (Eds.). (1997). Simulating social phenomena.
Heidelberg, Germany: Springer.
Conway, R. W., & McClain, J. O. (2003). The conduct of an effective simulation study.
INFORMS Transactions on Education, 3(3).
Cyert, R. M., & March, J. G. (1963). A behavioral theory of the firm. Englewood Cliffs,
NJ: Prentice-Hall.
Cyert, R. M., & March, J. G. (1992). A behavioral theory of the firm (2nd ed.).
Cambridge, MA: Blackwell.
Davis, K. (1959). The myth of functional analysis as a special method in sociology and
anthropology. American Sociological Review, 24(6), 757-772.
Diehl, E., & Sterman, J. D. (1995). Effects of feedback complexity on dynamic decision
making. Organizational Behavior & Human Decision Processes, 62(2), 198-215.
Donaldson, L. (1996). For positivist organization theory: Proving the hard core. London:
Sage.
Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the
bottom up. Washington, DC: Brookings Institution Press.
Fararo, T. J., & Hummon, N. P. (1994). Discrete event simulation and theoretical models
in sociology. In B. Markovsky, K. Heimer, & J. O'Brien, (Eds.), Advances in group
processes (Vol. 11, pp. 25-66). Greenwich, CT: JAI Press.
Garson, G. D. (1994). Social science computer simulation: Its history, design, and future.
Social Science Computer Review, 12(1), 55-82.
Gilbert, N., & Conte, R. (Eds.). (1995). Artificial societies: The computer simulation of
social life. London: UCL Press.
Harré, R., & Secord, P. F. (1972). The explanation of social behaviour. Oxford, England:
Basil Blackwell.
Hassard, J. (1990). Introduction: The sociological study of time. In J. Hassard, (Ed.), The
sociology of time (pp. 1-18). New York: St. Martin's Press.
Heise, D. R. (1979). Understanding events: Affect and the construction of social actions.
Cambridge, England: Cambridge University Press.
Hills, R. J. (1968). Towards a science of organization. Eugene, OR: Center for the
Advanced Study of Educational Administration.
Hoffman, A. J., & Ocasio, W. (2001). Not all events are attended equally: Toward a
middle-range theory of industry attention to external events. Organization Science,
12(4), 414-434.
Holmwood, J. (1996). Founding sociology? Talcott Parsons and the idea of general
theory. New York: Longman.
Holmwood, J. M. (1983). Action, system and norm in the action frame of reference:
Talcott Parsons and his critics. Sociological Review, New Series, 31, 310-336.
Honderich, T. (Ed.). (1995). Oxford companion to philosophy. New York, NY: Oxford
University Press.
Howie, D. E., & Vicente, K. J. (1998a). Making the most of ecological interface design:
The role of self-explanation. International Journal of Human-Computer Studies,
49(5), 651-674.
Howie, E., Sy, S., Ford, L., & Vicente, K. J. (2000). Human-computer interface design
can reduce misperceptions of feedback. System Dynamics Review, 16(3), 151-171.
Jaber, M. Y., & Sikström, S. (2004). A note on "An empirical comparison of forgetting
models". IEEE Transactions on Engineering Management, 51(2), 233-234.
Jacobsen, C., & Bronson, R. (1995). Computer simulation and empirical testing of
sociological theory. Sociological Methods & Research, 23(4), 479-506.
Jacobsen, C., & Bronson, R. (1985). Simulating violators. Operations Research Society
of America [now Institute for Operations Research and Management Science].
Jacobsen, C., & Bronson, R. (1987). Defining sociological concepts as variables for
system dynamics modeling. System Dynamics Review, 3(1), 1-7.
Jacobsen, C., & Bronson, R. (1997). Computer simulated empirical tests of social theory:
Lessons from 15 years' experience. In R. Conte, R. Hegselmann, & P. Terno (Eds.),
Simulating social phenomena (pp. 97-102). Heidelberg, Germany: Springer.
Jacobsen, C., Bronson, R., & Vekstein, D. (1990). A strategy for testing the empirical
adequacy of macro-sociological theories. Journal of Mathematical Sociology, 15(2),
137-148.
Janzen, M. E., & Vicente, K. J. (1998). Attention allocation within the abstraction
hierarchy. International Journal of Human-Computer Studies, 48(4), 521-545.
Jin, Y., & Levitt, R. (1996). The virtual design team: A computational model of project
organizations. Computational & Mathematical Organization Theory, 2(3), 171-196.
Keat, R., & Urry, J. (1982). Social theory as science. London: Routledge & Kegan Paul.
Kets de Vries, M. F. R. (1987). Unstable at the top: Inside the troubled organization. New
York, NY: New American Library.
Lackey, P. N. (1987). Invitation to Talcott Parsons' theory. Houston: Cap and Gown
Press.
Lave, C. A., & March, J. G. (1993). An introduction to models in the social sciences.
Lanham, MD: University Press of America.
Leik, R. K., & Meeker, B. F. (1995). Computer simulation for exploring theories: Models
of interpersonal cooperation and competition. Sociological Perspectives, 38(4), 463-
482.
Lin, Z. (2000). Organizational performance under critical situations -- exploring the role
of computer modeling in crisis case analysis. Computational & Mathematical
Organization Theory, 6(3), 277-310.
Loubser, J. J., Baum, R. C., Effrat, A., & Lidz, V. M. (Eds.). (1976a). Explorations in
general theory in social science: Essays in honor of Talcott Parsons Vol. I. New York:
Free Press.
Loubser, J. J., Baum, R. C., Effrat, A., & Lidz, V. M. (Eds.). (1976b). Explorations in
general theory in social science: Essays in honor of Talcott Parsons Vol. II. New
York: Free Press.
Luna-Reyes, L. F., & Andersen, D. L. (2003). Collecting and analyzing qualitative data
for system dynamics: Methods and models. System Dynamics Review, 19(4), 271-
296.
Manna, Z., & Pnueli, A. (1991). The temporal logic of reactive and concurrent systems.
New York: Springer.
Manna, Z., & Pnueli, A. (1995). Temporal verification of reactive systems: Safety. New
York: Springer.
Marcus, A. A., & Nichols, M. L. (1999). On the edge: Heeding the warnings of unusual
events. Organization Science, 10(4), 482-499.
McCleary, R., & Hay, R. A., Jr. (1980). Applied time series analysis for the social
sciences. Beverly Hills, CA: Sage.
Meadows, D. H., Meadows, D. L., Randers, J., & Behrens, W. W. (1972). The limits to
growth. New York, NY: Universe Books.
Miller, D. (1990). The Icarus paradox: How exceptional companies bring about their own
downfall; new lessons in the dynamics of corporate success, decline, and renewal.
New York, NY: Harper Business.
Miller, D., Friesen, P. H., & Mintzberg, H. (1984). Organizations: A quantum view.
Englewood Cliffs, NJ: Prentice-Hall.
Mitchell, C. M., & Miller, R. A. (1986). A discrete control model of operator function: A
methodology for information display design. IEEE Transactions on Systems, Man,
and Cybernetics, SMC-16(3), 343-357.
Mize, J. H., & Cox, J. G. (1968). Essentials of simulation. Englewood Cliffs, NJ:
Prentice-Hall.
Moore, W. E. (1959). The whole state of sociology. [Book reviews of Sociology today:
Moss, S. (2000). Canonical tasks, environments and models for social simulation.
Computational & Mathematical Organization Theory, 6(3), 249-275.
Nembhard, D. A., & Osothsilp, N. (2004). Authors' reply to "A note on 'An empirical
comparison of forgetting models'". IEEE Transactions on Engineering Management,
51(2), 235.
Paich, M., & Sterman, J. D. (1993). Boom, bust, and failures to learn in experimental
markets. Management Science, 39(12), 1439-1458.
Parsons, T. (1954). Essays in sociological theory (Revised ed.). New York: NY: Free
Press.
Parsons, T. (1961c). The point of view of the author. In M. Black (Ed.), The social
theories of Talcott Parsons. Englewood Cliffs, NJ: Prentice-Hall.
Parsons, T. (1968a). The structure of social action: A study in social theory with special
reference to a group of recent European Writers (With a new introduction ed.). Vol. I.
New York: NY: Free Press.
Parsons, T. (1968b). The structure of social action: A study in social theory with special
reference to a group of recent European Writers (With a new introduction ed.). Vol.
II. New York: NY: Free Press.
Parsons, T. (1970). The impact of technology on culture and emerging new modes of
behaviour. International Social Science Journal, XXII(4), 607-627.
Parsons, T. (1977b). Social systems and the evolution of action theory. New York: NY:
Free Press.
Parsons, T., Bales, R. F., & Shils, E. A. (1953a). Phase movement in relation to
motivation, symbol formation, and role structure. In T. Parsons, R. Bales, & E. A.
Shils, Working papers in the theory of action (Chap. 5, pp. 163-269). Glencoe, IL:
Free Press.
Parsons, T., Bales, R. F., & Shils, E. A. (Eds.). (1953b). Working papers in the theory of
action. Glencoe, IL: Free Press.
Parsons, T., & Platt, G. M. (1973). The American university. Cambridge, MA: Harvard
University Press.
Parsons, T., & Shils, E. A. (1951). Toward a general theory of action. Cambridge, MA:
Harvard University Press.
Parsons, T., & Smelser, N. J. (1956). Economy and society: A study in the integration of
economic and social theory. Glencoe, IL: Free Press.
Pawlak, W. S., & Vicente, K. J. (1996). Inducing effective operator control through
ecological interface design. International Journal of Human-Computer Studies, 44(5),
653-688.
Pfahl, D., Laitenberger, O., Dorsch, J., & Ruhe, G. (2003). An externally replicated
experiment for evaluating the learning effectiveness of using simulations in software
project management education. Empirical Software Engineering, 8(4), 367-395.
Podell, L. (1966). Sex and role conflict. Journal of Marriage and the Family, 28(2), 163-165.
Rasmussen, J., & Batstone, R. (n.d.). Why do complex organizational systems fail?
World Bank.
Rasmussen, J., Duncan, K., & Leplat, J. (Eds.). (1987). New technology and human error.
Chichester, England: John Wiley & Sons.
Richardson, K. A., Cilliers, P., & Lissack, M. (2001). Complexity science: A "gray"
science for the "stuff in between". Emergence: A Journal of Complexity Issues in
Organizations and Management, 3(2), 6-18.
Riley, M. W., & Nelson, E. E. (1971). Research on stability and change in social systems.
In B. Barber, & A. Inkeles, (Eds), Stability and social change (pp. 407-449). Boston,
MA: Little, Brown and Co.
Rocher, G. (1975). Talcott Parsons and American sociology. New York: Barnes & Noble.
Rowell, L. (1989). Foreword. In Time and process: Interdisciplinary issues (The Study of
Time VII) (p. vii-ix). Madison, CT: International Universities Press.
Savage, S. P. (1981). The theories of Talcott Parsons: The social relations of action. New
York, NY: St. Martin's Press.
Senge, P. M. (1990). The fifth discipline: The art & practice of the learning organization.
New York, NY: Doubleday.
Shackle, G. L. S. (1969). Decision order and time in human affairs (2nd ed.). Cambridge,
England: Cambridge University Press.
Skvoretz, J., & Fararo, T. J. (1980). Languages and grammars of action and interaction:
A contribution to the formal theory of action. Behavioral Science, 25(1), 9-22.
Skvoretz, J., & Fararo, T. J. (1996). Generating symbolic interaction: Production system
models. Sociological Methods & Research, 25(1), 60-102.
Skvoretz, J., Fararo, T. J., & Axten, N. (1980). Role-programme models and the analysis
of institutional structure. Sociology, 14(1), 49-67.
Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex
world. Boston: Irwin McGraw-Hill.
Thomsen, J., Levitt, R. E., & Kunz, J. (1999). A trajectory for validating computational
emulation models of organizations. Computational & Mathematical Organization
Theory, 5(4), 385-401.
Thorngate, W. (1976). "In general" vs. "It depends": some comments on the Gergen-
Schlenker debate. Personality & Social Psychology Bulletin, 2, 404-410.
Tsuchiya, S. (1996). A new role for computerized simulation in social science: Summary
thoughts on a case study. Simulation & Gaming, 27(1), 103-109.
Tuma, N. B., & Hannan, M. T. (1984). Social dynamics: Models and methods. Orlando,
FL: Academic Press.
Turner, B. S. (Ed.). (1999). The Talcott Parsons reader. Malden, MA: Blackwell.
Udy, S. H., Jr. (1960). Structure and process in modern societies. [Book review].
American Journal of Sociology, 66(1), 96.
Uzmeri, M., & Nembhard, D. (1998). A population of learners: A new way to measure
organizational learning. Journal of Operations Management, 16(5), 515-528.
van Fraassen, B. C. (2002). The empirical stance. New Haven, CT: Yale University Press.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II:
Mediating "direct perception" in complex work domains. Ecological Psychology, 2(3), 207-249.
Weick, K. (1979). The social psychology of organizing (2nd ed.). New York: McGraw-
Hill.
Whitehead, A. N. (1927). Science and the modern world: Lowell lectures, 1925. New
York, NY: Macmillan.
Williams, R. M., Jr. (1959). Friendship and social values in a suburban community: An
exploratory study. Pacific Sociological Review, 2(1), 3-10.
Zeigler, B. P., Praehofer, H., & Kim, T. G. (2000). Theory of modeling and simulation:
Integrating discrete event and continuous complex dynamic systems (2nd ed.). San
Diego, CA: Academic Press.
Zelditch, M., Jr. (1955). A note on the analysis of equilibrium systems. In T. Parsons, &
R. F. Bales, Family, socialization and interaction process (pp. 401-408). New York:
Free Press.
This section contains the complete simulation model in the language of SIMUL8,
a product from Simul8 Corporation, 26th Floor, 225 Franklin Street, Boston, MA 02110;
telephone: 800 547 6024. More information is available at http://www.simul8.com.
The model shown here was written and executed in Release 10 Standard.
The most important part of the listing is the last one, Learning Model Common. It
enacted the learning portion of Latent Pattern Maintenance and adjusted the Adaptation
filter in order to reduce tension. It was invoked on each exit from the Adaptation
function.
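
As a reading aid, the following is a minimal sketch in Python (not SIMUL8 Visual Logic) of the kind of threshold adjustment Learning Model Common performs. The variable names echo globals that appear later in this listing (EnergyThreshold, DecayCoefficient, ResponseWindow, MovingAverage, Relation), but the particular moving-average-with-decay update shown here is an assumption made only for illustration; the actual rule is the one given in the Learning Model Common section.

    # Illustrative sketch only -- NOT the SIMUL8 Visual Logic of Learning Model
    # Common. It assumes a moving-average update with exponential decay, loosely
    # patterned on the listing's globals (MovingAverage, DecayCoefficient,
    # ResponseWindow, EnergyThreshold).
    import math
    from collections import deque


    class AdaptationFilter:
        def __init__(self, threshold=2.0, decay=-0.001, window=52):
            self.threshold = threshold   # cf. EnergyThreshold (reset value 2)
            self.decay = decay           # cf. DecayCoefficient (reset value -0.001)
            self.window = window         # cf. ResponseWindow (reset value 52)
            self.recent = deque(maxlen=window)

        def learn(self, energy, elapsed):
            """Adjust the admission threshold after a pass through Adaptation."""
            self.recent.append(energy)
            moving_average = sum(self.recent) / len(self.recent)  # cf. MovingAverage
            # Drift the threshold toward the moving average, discounted by elapsed time.
            weight = math.exp(self.decay * elapsed)
            self.threshold = weight * self.threshold + (1 - weight) * moving_average

        def admits(self, energy):
            """The Adaptation filter: does this input carry enough energy to enter?"""
            return energy >= self.threshold  # cf. Relation, whose current value is >=


    # Example: feed a few inputs and watch the threshold adapt.
    f = AdaptationFilter()
    for week, e in enumerate([1.0, 3.0, 2.5, 0.5, 4.0]):
        print(week, f.admits(e), round(f.threshold, 3))
        f.learn(e, elapsed=1.0)
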
There is a narrated illustration of the model in action at
http://www.Master-Systems.com/Parsons.ivnu
***********************************************************************
General Simulation Information
------------------------------
Distributions
SetEnergy
External
Column of Data
Cell R6C2
GameInput.xls
SetAffect
External
Column of Data
Cell R6C1
GameInput.XLS
Energy
Label Based :Energy
ADwell
Label Based :ADwell
GDwell
Labels
Energy
(Number)
Affect
(Number)
Label
(Number)
ADwell
(Number)
GDwell
(Number)
IDwell
(Number)
LDwell
(Number)
DUE
(Number)
AlwaysOne
(Number)
WAIT TIME
(Number)
WORK TIME
(Number)
PRIORITY
(Number)
Images
Default Image Entry
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Storage Bin
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Work Center
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Exit
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Resource
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Conveyor
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Tank
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Rotz
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Process
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Loader
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Vehicle
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Component
Width: 32 Height: 32
Transparent Color: 16777215
Default Image 3D Light
Width: 32 Height: 32
Transparent Color: 16777215
Default Image 3D Object
Width: 32 Height: 32
Transparent Color: 16777215
Bolt_m
Width: 16 Height: 13
Transparent Color: 16777215
Image 2
Width: 32 Height: 32
Transparent Color: 16777215
Image 3
Width: 31 Height: 32
Transparent Color: 16777215
Open
Icon Location X:640 Y:497 W:32 H:32
Window Location X:4 Y:124 W:1255 H:823
Color 16777215
PRIORITY
***********************************************************************
Simulation Objects
------------------
Work Entry
External World
--------------
Display Parameters 4
X:205 Y:298 W:32 H:32
Xinc -10 Yinc 0
Image 0 Default Image Conveyor
Show Image
Do not collect results
Work Item Type: Main Work Item Type
Inter-arrival time
Distribution Detail:
Fixed 5 0 0 0
Route Out Objects
AFilter
On Label Action Visual Logic:
VL SECTION: Set dwell time properties
SET PRIORITY = Affect
'If Affect = 1 then Affect is required.
IF Affect = 1
  SET ADwell = Table[10,6]
  SET GDwell = Table[10,9]
  SET IDwell = Table[10,12]
  SET LDwell = Table[10,15]
ELSE
  SET ADwell = Table[10,5]
  SET GDwell = Table[10,8]
  SET IDwell = Table[10,11]
  SET LDwell = Table[10,14]
SET DUE = IDwell
Label Actions
PRIORITY
None
AlwaysOne
Set
Distribution Detail:
Fixed 1 0 0 0
Affect
Set
Distribution Detail:
Uses: SetAffect
External (Excel)
Energy
Set
Distribution Detail:
Uses: SetEnergy
External (Excel)
GDwell
None
ADwell
None
LDwell
None
IDwell
None
DUE
None
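
The dwell-time logic in the Work Entry's "Set dwell time properties" section above can also be restated as the following Python sketch (not SIMUL8 Visual Logic). The table argument stands in for the model's Table spreadsheet; the example values at the end are hypothetical.

    # Illustrative sketch of the "Set dwell time properties" logic above. Row 10
    # of the table is read, and the column chosen for each dwell time depends on
    # whether the work item carries affect (Affect = 1).

    DWELL_COLUMNS = {
        # label: (column when Affect = 0, column when Affect = 1)
        "ADwell": (5, 6),
        "GDwell": (8, 9),
        "IDwell": (11, 12),
        "LDwell": (14, 15),
    }


    def set_dwell_times(table, affect):
        """Return the label values the entry logic stamps on a work item."""
        labels = {"PRIORITY": affect, "Affect": affect}
        for name, (neutral_col, affect_col) in DWELL_COLUMNS.items():
            col = affect_col if affect == 1 else neutral_col
            labels[name] = table[10][col]
        labels["DUE"] = labels["IDwell"]  # DUE is set from IDwell, as in the listing
        return labels


    # Example with a hypothetical table (outer key = row, inner key = column).
    table = {10: {5: 10, 6: 12, 8: 20, 9: 24, 11: 30, 12: 36, 14: 40, 15: 48}}
    print(set_dwell_times(table, affect=1))
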
Work Center
AFilter
-------
Display Parameters 4
X:345 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
External World
Require resources before collecting any work items
Routing Out
Label
On label: Label
Preference only
Route Out Objects
Energy that does not enter
Waiting for Adaptation
Affect processing
Release resources as soon as task complete
Operation Time
Distribution Detail:
Fixed 0 0 0 0
On Label Action Visual Logic:
VL SECTION: AFilter Action Logic
CALL Learning Model Common
'GT is the greater than relation (>) and GTE is greater than or equal to (>=).
'Temp contains the new threshold, computed from the learning function.
IF Relation = GT
  IF Energy > Temp
    IF Affect = 0
      SET Label = 2
      'Path 2 = (Normal, affect neutral) Adaptation
    ELSE
      SET Label = 3
      'Path 3 = Adaptation with Affect
  ELSE
    SET Label = 1
    'Path 1 = exit the organization
ELSE IF Relation = GTE
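
The visible portion of the AFilter Action Logic can be restated as the following Python sketch (not SIMUL8 Visual Logic). The GTE branch is cut off at the page break in the original listing; the sketch assumes it mirrors the GT branch with >= in place of >.

    # Illustrative rendering of the AFilter routing decision above.

    PATH_EXIT = 1          # Path 1 = exit the organization
    PATH_ADAPT = 2         # Path 2 = (normal, affect-neutral) Adaptation
    PATH_ADAPT_AFFECT = 3  # Path 3 = Adaptation with Affect


    def afilter_route(energy, affect, temp, relation=">"):
        """Choose the routing label for a work item entering the A filter.

        temp is the threshold produced by Learning Model Common; relation
        corresponds to the global Relation (">" or ">=").
        """
        passes = energy > temp if relation == ">" else energy >= temp
        if not passes:
            return PATH_EXIT
        return PATH_ADAPT if affect == 0 else PATH_ADAPT_AFFECT


    assert afilter_route(energy=3, affect=0, temp=2) == PATH_ADAPT
    assert afilter_route(energy=3, affect=1, temp=2) == PATH_ADAPT_AFFECT
    assert afilter_route(energy=1, affect=0, temp=2) == PATH_EXIT
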
Work Center
Adaptation
----------
Display Parameters 4
X:444 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Waiting for Adaptation
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Queue for Goal Attainment
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: ADwell
Label Based
Work Center
Integration
-----------
Display Parameters 4
X:660 Y:509 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Priority label: Affect
Routing In
Priority
Route In Objects
Queue from GA affect neutral
Goal Attainment Affect
Interrupted Integration
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Queue for Latent Pattern Maintenance
Release resources as soon as task complete
Operation Time
Distribution Detail:
Fixed 120 0 0 0
Interruptable
Work Center
Latent Pattern Maintenance
--------------------------
Display Parameters 4
X:376 Y:506 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Queue for Latent Pattern Maintenance
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Spent Energy
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: LDwell
Label Based
Label Actions
Affect
None
AFilter
Storage Bin
Waiting for Adaptation
----------------------
Passed the Adaptation filter and now waits for a blocked Adaptation
activity, which is evidently busy making sense of "old" news.
Display Parameters 0
X:392 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
AFilter
Output Objects
Adaptation
Storage Bin
Queue for Goal Attainment
-------------------------
Display Parameters 5
X:563 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
Adaptation
Output Objects
Prepare budget proposal
Storage Bin
Queue from GA affect neutral
----------------------------
Display Parameters 0
X:658 Y:438 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Capacity: -1
Input Objects
Prepare budget proposal
Output Objects
Integration
Storage Bin
Queue for Latent Pattern Maintenance
------------------------------------
Display Parameters 0
X:473 Y:508 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
Integration
Output Objects
Latent Pattern Maintenance
Work Center
Goal Attainment Affect
----------------------
Display Parameters 4
X:733 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Queue for Goal Attainment Affect
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Integration
Release resources as soon as task complete
Batching
Product type of fixed value: 1
Operation Time
Distribution Detail:
Uses: GDwell
Label Based
Storage Bin
Interrupted Integration
-----------------------
Display Parameters 0
X:836 Y:520 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Capacity: -1
Output Objects
Integration
Storage Bin
Queue for Goal Attainment Affect
--------------------------------
Display Parameters 0
X:610 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
Affect processing
Output Objects
Goal Attainment Affect
Work Center
Affect processing
-----------------
Display Parameters 4
X:496 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Passive
Route In Objects
AFilter
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Queue for Goal Attainment Affect
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: ADwell
Label Based
Label Actions
Label
None
Show Count
Show Image
Do not collect results
Input Objects
Prepare budget proposal
Work Center
Prepare budget proposal
-----------------------
Display Parameters 4
X:623 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Queue for Goal Attainment
Require resources before collecting any work items
Label Batching
Label
Min batch quantity 1
Max batch quantity 10000
Routing Out
Percent
Route Out Objects
Ideas not resourced
(0%)
Queue from GA affect neutral
(100%)
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: GDwell
Label Based
Information Store
-----------------
Simulation Time
---------------
SIMUL8 Data
Current Value 0
Warm Up Period
--------------
SIMUL8 Data
Current Value 0
Overhead Cost
-------------
SIMUL8 Data
Current Value 0
Overhead Revenue
----------------
SIMUL8 Data
Current Value 0
Temp
----
Number
Current Value 0
Reset Value 0
Table
-----
Spreadsheet
EnergyThreshold
---------------
Number
Current Value 0
Reset Value 2
Relation
--------
Text
Current Value >=
Reset Value NOCHANGE
GT
--
Text
Current Value >
Reset Value NOCHANGE
GTE
---
Text
Current Value >=
Reset Value NOCHANGE
-----------------
Spreadsheet
y
-
Number
Current Value 0
Reset Value 0
x
-
Number
Current Value 0
Reset Value 0
p
-
Number
Current Value 0
Reset Value 0
k
-
Number
Current Value 0
Reset Value 0
r
-
Number
Current Value 0
Reset Value 0
Saved_clock
-----------
Time
Current Value 0
Reset Value 0
RowCtr
------
Number
Current Value 29
Reset Value -2147483648
ValCol
------
Number
Current Value 10
Reset Value 10
BotRow
------
Number
Current Value 29
Reset Value 29
TimeOfMax
---------
Time
Current Value 0
Reset Value 0
MaxEnergy
---------
Number
Current Value 0
Reset Value 0
ThresholdToRespond
------------------
Number
Current Value 0
Reset Value 0
PercentNotFunded
----------------
Number
Current Value 0
Reset Value -2147483648
DecayCoefficient
----------------
Number
Current Value 0
Reset Value -0.001
ResponseWindow
--------------
Number
Current Value 0
Reset Value 52
MovingAverage
-------------
Number
Current Value 0
Reset Value 0
NbrInputs
---------
Number
Current Value 0
Reset Value 0
RunningTotal
------------
Number
Current Value 0
Reset Value 0
TempRow
-------
Number
Current Value 0
Reset Value 0
I
-
Number
Current Value 0
Reset Value 0
Divisor
-------
Number
Current Value 0
Reset Value 0
'Obeyed every time the user clicks the RUN button (at any simulation time)