
THE PARSONS GAME:


THE FIRST SIMULATION OF TALCOTT PARSONS' THEORY OF ACTION

by Stan Rifkin

B.S., Business Administration (Quantitative Methods), School of Business &


Economics, California State University at Northridge, 1968
M.S., Computer Science, School of Engineering & Applied Science,
University of California at Los Angeles, 1972

A dissertation submitted to:

The Faculty of the


Graduate School of Education and Human Development
of The George Washington University
in partial fulfillment of the requirements
for the degree of Doctor of Education

January 30, 2005

Dissertation directed by:

Dr. David Schwandt, Professor of Human Resource Development, and


Director, Center for the Study of Learning and
Executive Leadership Doctoral Program,
Graduate School of Education and Human Development,
The George Washington University

Committee:

Dr. Walter Andre Brown, Assistant Professor of Higher Education Administration,


Graduate School of Education and Human Development,
The George Washington University

Dr. Robert Hanneman, Professor of Sociology,


College of Humanities, Arts, and Social Sciences,
University of California at Riverside

1.0 – 7 Dec 2004



© Copyright 2004 Stan Rifkin


All rights reserved

Contents
Abstract ............................................................................................................v
Acknowledgments.......................................................................................... vi
I. Introduction................................................................................................1
How this study differs from its predecessors.............................................4
Novelty of results.......................................................................................5
Conceptual framework...............................................................................7
Problem ....................................................................................................16
Research question ....................................................................................17
Significance..............................................................................................17
Limitations ...............................................................................................21
II. Literature review.....................................................................................25
Theory, model, and simulation ................................................................25
The theory of action .................................................................................25
Tension management and learning ..........................................................26
Place of Parsons' theory of action in sociology .......................................27
The bad news ...........................................................................................29
Locating this work within all of Parsons'.................................................29
Models......................................................................................................30
Formalization ...........................................................................................32
Time .........................................................................................................34
Process .....................................................................................................36
Simulations of social systems ..................................................................38
Discrete event simulation.........................................................................39
III. Methods....................................................................................................43
Research overview ...................................................................................43
Research methods ....................................................................................43
Delimitations............................................................................................52
IV. The model and simulation........................................................................55
Why simulate? .........................................................................................55
Model construction ..................................................................................56
Basic concept ...........................................................................................57
Model of tension and learning .................................................................58
Operation of the simulation .....................................................................63
Rules ........................................................................................................67
Assumptions.............................................................................................69
Mapping the model to theory...................................................................71
V. Results......................................................................................................76
Example ...................................................................................................76
Base cases ................................................................................................80
Extension..................................................................................................84
VI. Conclusions and recommendations for further study ..............................87
Review of purpose and research question................................................87
Review of findings...................................................................................87
Discussion ................................................................................................88
Implications..............................................................................................91
Recommendations for further study.........................................................93
In sum.......................................................................................................95
Epilogue .........................................................................................................96
References......................................................................................................99
Appendix – Attestation of an Expert ...........................................................114
Appendix – Simulation Program Listing .....................................................115

Figures
Figure 1. Sastry’s "simplified causal diagram of the punctuated change theory." .............4
Figure 2. The components of action systems. ..................................................................12
Figure 3. The action system in relation to its environment. .............................................14
Figure 4. Interchange media (whose paths are represented by arrows) in the general
theory of action.. ..............................................................................................15
Figure 5. Phases in the relationship of a system to its situation. ......................................16
Figure 6. Flow from theory to action. ...............................................................................18
Figure 7. Relationship among process, event, and state (notional). ..................................37
Figure 8. Intersection of the theory of action and system simulation. ..............................43
Figure 9. Classification of social systems simulators, indicating the position of this
research in bold. ...............................................................................................46
Figure 10. Thorngate’s one-armed clock. .........................................................................21
Figure 11. Evolution of computer simulations of organizations. .....................................22
Figure 12. Structure of the three-parameter hyperbolic learning curve model. ...............61
Figure 13. Illustration of a negative exponential distribution as a "forgetting" function..62
Figure 14. User view of dedicated Excel spreadsheet.......................................................64
Figure 15. User view of the simulation. ............................................................................65
Figure 16. User display for example with long window. ..................................................78
Figure 17. Energy for the long window example. .............................................................79
Figure 18. Energy for the short window example. ............................................................79
Figure 19. Base case for affect vs. affect-neutrality..........................................................81
Figure 20. Pattern of internal energy following external with a strong culture. ...............82
Figure 21. Pattern of internal energy following external with a weak culture. .................83
Figure 22. Pattern of internal energy following external energy with very weak culture.84
Figure 23. Simulation after two energetic events per year, both with affect. Illustrates
queuing effects. ................................................................................................85
Tables
Table 1. Works by Parsons, his supporters, and his critics................................................2
Table 2. Additional delimitations of the study ..................................................................22
Table 3. The system dynamics modeling process across the classic literature. ..............57
Table 4. Rules of the simulation........................................................................................67
Table 5. Assumptions made in the simulation...................................................................69
Table 6. Map of the theory to the model. ..........................................................................72
Table 7. Correspondence between what was required and what was developed.............88

ABSTRACT
Talcott Parsons was perhaps the best known American sociologist of the 20th
Century. He postulated a general theory of the structure and function of social systems,
one at all levels of analysis and all levels of abstraction. The center of his theory is action,
which he defined in his own terms of situation, conditions, ends, and norms.
For those familiar with Parsons' work, the creation described here used a digital
computer to simulate a very small subset of Parsons' theory of action, including his frame-
work of four functions or functional prerequisites, one of the four pairs of pattern vari-
ables, the cybernetic hierarchy, and interchange media. The simulation was meant to be a
proof-of-concept, a toy demonstration of the feasibility of modeling such a complete and
richly described social action theory.
Most simulations of social systems use a modeling technique called system
dynamics, a way of characterizing flows and accumulations over time. Other researchers
have tried to simulate the theory of action using system dynamics but have failed. One
contribution of this research is the application of a different technique, discrete event
simulation, to social systems. There are only two published applications of discrete event
simulation to social systems. Accordingly, this work offers some insight into how
to incorporate time ordering into reasoning about social systems.
Simulations were executed to demonstrate consistency with outcomes predicted
by the theory. One finding was that Parsons neglected to take into account the disposition
of (motivational) energy transiting through a system or organization when the energy is
blocked by having to wait for the processing of predecessor energy. The effect of this
oversight can be a very long wait for the availability of a prerequisite function and no
guidance on whether, for example, the energy decays, dissipates, waits, interrupts, or is
channeled elsewhere.

ACKNOWLEDGMENTS
"We shape our models and then our models shape us."
- Michael Schrage. (2000). Serious play: How the world's best companies simulate to innovate,
Harvard Business School Press, p. 3.

Several fellow students in the Executive Leadership Program (ELP) cohort have
sustained my enthusiasm, including, but not limited to, Dr. Betty Beene, Dr. Brenda
Conley, Linda Hodo, and Dr. Ted Willey. Dr. Margaret Gorman has been a steady force
moving me ahead and she has been a safe harbor for my administrative challenges. I
always feel like Margaret treats me as a distinctive student, a gift she has for making each
one of us feel special. And her dissertation was breath-taking.
I am grateful to Laura Reid of Simul8 Corporation for attesting to the veracity of
the technical operation of the simulation described in this dissertation and along the way
helping me to improve its operation and correctness. I am also grateful for those who
came before me in the application of computation methods to organizational problems, in
particular Profs. Rich Burton at Duke, and John Kunz and Ray Levitt at Stanford, for per-
sonally helping me to see that engineering tools such as discrete event simulation could
be beneficially applied to social systems. Profs. Kathleen Carley at Carnegie Mellon and
Anjali Sastry at MIT were also instrumental in personally inspiring me to try to apply
engineering methods to the questions of social systems.
Dr. Chris Johnson gave me the courage to undertake the study of Talcott Parsons.
Dr. Johnson is a Parsons scholar and his energy and enthusiasm about Parsons are conta-
gious and set the bar high. He is very, very busy and I am especially grateful that he has
taken on three roles: expert who attests to the mapping of the theory of action to the
simulation model, an outside reader for the dissertation defense, and a friend and col-
league.
I consider myself an adequate researcher, but it took me too long to find Prof.
Tom Fararo, a professor in the School of Sociology at the University of Pittsburgh and
another Parsons scholar. Prof. Fararo, besides being an inspiration in his writing and
interpretation of Parsons, has been a ready and energetic reader of my manuscripts. I am
grateful for his generous expenditure of time and energy on my behalf.
Professors Walter Brown and Robert Hanneman, members of the dissertation
committee, have been generous with their time and energy. They have in their unique
ways given me important, stimulating feedback.
Prof. Dave Schwandt has been my close mentor throughout this long journey. He
is the person who spoke to me at the very beginning about joining ELP. I was struck im-
mediately then by his boundary spanning, openness to people not in his field(s), and how
gentle he was with people like me who knew nothing of human resource development. A
few years ago he introduced me as, "This is Stan. He is the only person I know who has
the whole dissertation in his head!" Prof. Schwandt has been so generous with his time
that I feel guilty. It should not have taken all of this prodding to get me to finish, but I am
slow and Dave has been a patient, steady, and exacting influence. He, too, is a Parsons
scholar and has not been put off by my excursions into what I felt might be important,
while he kept me focused and fixed. He has the gift, too, of making each one of his stu-
dents feel special and unique, and I am forever grateful for his friendship and guidance.

All doctoral work is a family effort. My wife, Jan, from the very first moment we
spoke about the Program and the time it would mean away from her, has cheered me on,
even during the lonely days and evenings she spent as I studied and wrote. Everyone
loves Jan and she, too, has made friends among my ELP colleagues and leaders. She has
made this long journey worthwhile and possible. She also proof-read this final version, a
sacrifice well beyond the pale. I am responsible for any remaining errors, faults, failures,
oversights, defects, and misinterpretations.
I thank all of the people above for their gently persuading me to finish.

I. INTRODUCTION
"All models are wrong. Some are useful." - George E.P. Box

Creating a computer program to simulate Talcott Parsons’ general theory of


action has proved elusive. Two researchers who have had considerable success simulat-
ing social systems (e.g., Jacobsen & Bronson, 1985; Jacobsen & Bronson, 1987;
Jacobsen, Bronson, & Vekstein, 1990) have written that they were unable to construct a
computer program that would simulate Parsons’ theory (Jacobsen & Bronson, 1997). The
purpose of this dissertation is to describe a proof of the concept that it is possible to cre-
ate such a computer program.
Many simulations of social systems cast those systems in terms of rates of change
of key constructs (Hanneman, 1988). They are systems of differential equations, even if
somewhat disguised. The current research framed organizations in Parsons’ terms as
having a significant time-ordering of events, not in terms of rates of change. Time-
ordering, directly according to Parsons’ words, was expressed as "before," "then," "next,"
"cycles," "phases," etc. Using a time-ordering style of simulation yielded a toy or proof of
concept that was animated inside a digital computer and used to ask and answer a new set
of questions about the theory, and might serve as a basis for the construction of a high
fidelity simulation.
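To make the contrast concrete, the following sketch shows the two framings side by side: a rate-of-change model advances state by integrating an equation over fixed time steps, while a time-ordered model changes state only when the next event occurs. The sketch is in Python rather than the Simul8 tool actually used for the dissertation's simulation, and every name and parameter in it is illustrative.

# Two framings of change over time (illustrative only).
import heapq

# 1. Rate-of-change framing (system dynamics style): integrate d(tension)/dt = -k * tension.
def run_rate_of_change(tension=1.0, decay_rate=0.2, dt=0.1, steps=50):
    history = []
    for _ in range(steps):
        tension += (-decay_rate * tension) * dt
        history.append(round(tension, 4))
    return history

# 2. Time-ordering framing (discrete event style): process events strictly "before," "then," "next."
def run_time_ordered(arrival_times=(0.5, 1.2, 3.0), horizon=5.0):
    events = [(t, "energy_arrives") for t in arrival_times]
    heapq.heapify(events)
    log = []
    while events:
        time, kind = heapq.heappop(events)   # always the earliest pending event
        if time > horizon:
            break
        log.append((time, kind))
    return log

print(run_rate_of_change()[:5])
print(run_time_ordered())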
Social scientists have long sought a test bed for their ideas, concepts, and theories.
Typically "the scientific method" applies to the "hard" sciences, where researchers con-
sult a theory, postulate a question or hypothesis, enter a laboratory where they control
environmental factors, and put together an apparatus to generate some phenomena and
measure the outcomes (Kuhn, 1970). "Normal" science laboratories do not exist in the
social sciences, primarily because environments cannot be controlled – or, more accu-
rately, controlling the environment changes it, so normal science cannot be applied. As a
result, there is some appeal in attempting to create an environment inside of a digital
computer where the structure and function of a social system could be mirrored. Com-
puter simulation, if appropriately validated, has offered a means to instantiate and exer-
cise richly described social systems (Hanneman, 1988).
In order to instruct a computer to simulate (that is, "act like," in a reified
context) a social system – to bring the social system to life – it was required to have a
"sufficient" description of the (static) structure of the constructs along with a description
of the (dynamic, time-varying) functioning of the static constructs. While no one knows
what the definition of "sufficient" is a priori, Parsons’ theory of action was a candidate at
least from a volume perspective because Parsons himself wrote thousands of pages about
it, supporters have written thousands, and critics another thousand or so (see Table 1
below). It remained to read the contributions to the theory of action and extract its
descriptions of salient structure and function to see if they were sufficiently detailed to
simulate a social system resembling the one described by Parsons. The main purpose of
this dissertation is to document the extraction and resulting simulated social system.
It is useful to note that there are detractors as well as supporters of Parsons' work.
Clearly there is not universal agreement on the meaning and importance of his model. A
few of the areas of disagreement are described at the end of "Place of Parsons' theory of
action in sociology," in the Literature Review chapter, p. 27.

Table 1.
Works by Parsons, his supporters, and his critics

Works by Parsons Works by supporters Works by critics


Parsons, 1951-1978b; Alexander, 1983; Alexander Black, 1961; Camic, 1996;
Parsons & Bales, 1953; & Sciortino, 1996; Barber Camic, 1998; Dubin, 1960;
Parsons, Bales, & Shils, & Inkeles, 1971; Bluth, Habermas, 1981; Kolb,
1953a; Parsons, Bales, & 1982; Boudon & 1962; McGowan, 1998;
Shils, 1953b; Parsons & Bourricaud, 1989; Savage, 1981; Selznick,
Platt, 1973; Parsons & Bourricaud, 1981; 1961
Shils, 1951; Parsons & Brownstein, 1982; Etzioni,
Smelser, 1956 1975; Hills, 1968;
Holmwood, 1996, 1983;
Lackey, 1987; Loubser,
Baum, Effrat, & Lidz,
1976a, 1976b; Park, 1967;
Rocher, 1975; Turner, 1999

Constructing a model of time-variation, especially of mathematical equations,


inside a computer is not new and is done often in the "hard" sciences. Jacobsen and Bron-
son (1985) point out:
Generally, models can be classified as one of three types: iconic, analog, or
abstract. An iconic model is one that looks identical to the system it represents.
An example is a wood and paint mockup of an automobile shell. From a distance,
the mockup appears to be an automobile, but as the mockup has no engine or inte-
rior, it is not a complete representation of an automobile. Yet if the purpose of the
model is to determine the aerodynamic characteristics of the system it represents
by subjecting it to wind tunnel tests, then this iconic model is quite adequate. An
analog model is one that does not look like the systems it represents but has corre-
sponding behavior. Engineers often build electrical systems as analog models of
mechanical systems. By measuring the current in an analogous electrical system,
they can predict the motion in a mechanical spring system. An abstract model is a
set of statements about the structure of a real system. If these are formulated as
mathematical equations, they may be solved and used to predict the behavior of
the real system. It is this last type, abstract mathematical models, which form the
basis of continuous simulation. … (pp. 57-58)

By applying this kind of modeling to the study of social systems, a researcher can
watch the interaction of social forces reveal themselves with time slowed down or
speeded up inside a computer and can obtain a very detailed understanding of the contri-
butions of structure and of function to the specific observed behavior of the social sys-
tem.
Also, the replication of theory inside a computer is not a new idea, not even for
theories of organization, as illustrated in early histories provided by Hanneman (1988)
and Garson (1994). The first comprehensive simulation of an organization was probably
A Behavioral Theory of the Firm (Cyert & March, 1963). This work was a tour de force
integration of microeconomics (that is, the setting by a firm of output level and product
price) with organizational goal-setting, choice, and rational decision-making. While the
unit of analysis was the description of a single firm among competitors, it could be
expanded to the description of aggregates of firms and to non-business organizations,
and to the normative analysis of a single firm and of economic policy. Incidentally, while
the work does not cite Parsons' constructs, it contains many references to them without
acknowledgment.1
The replication of theory inside a computer is experiencing recent interest in the
social sciences as more researchers come to the social sciences from other, hard science
areas (e.g., computer science, mathematics, psychology) (Burton & Obel, 1995; Carley &
Prietula, 1994; Coleman, 1965; Conte, Hegselmann, & Terno, 1997; Gilbert & Conte,
1995; Gullahorn & Gullahorn, 1963; Hanneman, 1988; Ilgen & Hulin, 2000; Jacobsen &
Bronson, 1995; Jacobsen & Bronson, 1987; Jacobsen & Bronson, 1997; Phelan, 1995;
Prietula, Carley, & Gasser, 1998; Samuelson, 2000). The normal course of research in
computational and mathematical organization theory is to wring structure and function
from a theory, operationalize or animate them, and then make changes in the simulated
environment or the structure/function and watch the computer’s results for interesting,
informative patterns, including validation with respect to the underlying theory. To "ani-
mate" in this context means to bring to life graphically on a computer screen.
For example, Sastry for her Massachusetts Institute of Technology doctoral dis-
sertation, redacted in Sastry (1997), took a detailed, simulation-oriented look at the struc-
ture and operation of how Tushman and Romanelli (1985) explained strategic change.
She was able to show inconsistencies in their explanation, to offer a more parsimonious
explanation, and to reason more clearly about cause and effect. She showed, among other things,
that there were loops that reinforced positive or negative feedback, thereby speeding up
or retarding the change, respectively. The lines in Figure 1, below, represent flows of
information, and the noun phrases (e.g., "Strategic orientation required") represent either
inputs or accumulations of values. By simulating the operation of strategic change at the
organizational level, Sastry was able to identify which postulated stores (accumulations)
would be a priority to measure in the real world because of their dominant effects and
which would be a lower priority because they may have only secondary effects.

1. The particular constructs, the four functional prerequisites, are explained later in the text, on p. 11. In Cyert & March
(1963, chap. 6, pp. 114-127) "A summary of basic concepts in the behavioral theory of the firm," there are goals,
expectations, and choices. Regarding organizational goals, e.g., "... we have argued that organizational goals change
as new participants enter and old participants leave the coalition [making the decisions]." p. 115. This is latent
pattern maintenance. Organizational expectations are based on "search," an analog to environmental interface, the
adaptation function. p. 116. And as to organizational choice, "the standard decision rules are affected primarily by
the past experience of the organization and past record of organizational slack," which are references to pattern
maintenance and integration functions, respectively. p. 116.

Figure 1. Sastry’s "simplified causal diagram of the punctuated change


theory." (Sastry, 1997)

Sastry accomplished her task by reading the Tushman and Romanelli article
(1985) and coding each passage as applying to a definition, a structure (i.e., static
relationship), dynamic behavior, or not applicable for her study. She collected the state-
ments about structure, for example, and derived constructs consistent with her modeling
choice (system dynamics in that case) and training as a system dynamicist. She con-
structed a computer replica of the static and dynamic components and animated it by
having information from the simulated environment flow along the lines of the diagram.
She then graphed the changes in accumulations and how well the strategic orientation
tracked the required one. She made changes in the flow rates and accumulation rules as
experiments. Her article essentially reports the patterns she observed based on varying
what she postulated were independent variables. In conclusion, she observed by simula-
tion six novel ways that managing strategic change failed (Sastry, 1997).
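For readers unfamiliar with system dynamics, the following sketch shows the bare mechanics of the modeling style Sastry used: one accumulation (a "stock") fed and drained by flows, integrated over small time steps, with experiments run by varying a postulated independent variable. It is a generic single-stock illustration, not a reconstruction of Sastry's model; the stock name, flow rules, and parameter values are hypothetical.

# Generic single-stock system dynamics sketch (not Sastry's model).
def simulate_stock(inflow_rate=1.0, drain_fraction=0.25, dt=0.25, t_end=10.0):
    """Euler-integrate one accumulation fed by a constant inflow and drained
    in proportion to its own level; return its trajectory for graphing."""
    stock, t, trajectory = 0.0, 0.0, []
    while t <= t_end:
        inflow = inflow_rate                  # information "flowing along the lines"
        outflow = drain_fraction * stock      # drain depends on the accumulation itself
        stock += (inflow - outflow) * dt      # the accumulation
        trajectory.append((round(t, 2), round(stock, 3)))
        t += dt
    return trajectory

# "Experiments" vary a postulated independent variable and compare trajectories.
for drain in (0.1, 0.25, 0.5):
    print(drain, simulate_stock(drain_fraction=drain)[-1])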
The approach of this study was to construct a high fidelity replica of the essential
aspects of the theory of action, so the replica mirrored the elements of action that Parsons
described as indispensable: the situation, conditions, ends, and norms (Parsons, 1968a, p.
44). As well, it modeled time because Parsons’ theory described time-varying behavior:
action by its definition and nature is dynamic.
How this study differs from its predecessors
Sastry (1997) and Jacobsen and Bronson (1985) both applied system dynamics
(Hanneman, 1988), a symbolic representation of systems of differential equations, to
organizational and social studies, respectively. Jacobsen and Bronson (1997), in a paper
summarizing their 15 years of social system simulation, note, "A ... theory we tried to
model was Parsons' General Theory of Action. We chose it for its renown and because of
the controversies around it. We soon found that it could not be modeled at all because of
the unclarity and inconsistencies in Parsons' use of concepts." (p. 98) Their challenge
was to translate elements of Parsons’ theory of action into the standard system dynamics
form of information flows among accumulations. They tried having material flow (the
kind that is physical and is allocated as part of Goal Attainment) be a model of roles or
resources. They tried modeling the way of acting and the method of giving meaning to
actions, but were left wondering where the next generation arose. They tried having three
of the functions concentric around one of them, but that would contradict Parsons' dia-
gram that shows them all interconnected. They tried a causal loop diagram among the
four functions, but it traveled in the opposite direction that Parsons envisioned. They tried
using the four functions as "valves" or controls on the stock of loyalty, power, order, and
goods. They considered pattern variables as "the ranges on which the other concepts
could be measured." In the end they abandoned their modeling effort. (Jacobsen, personal
communication, October 31, 2000).
Parsons (1953a) writes, referring to his theory of action, "Since we are dealing
with processes which occur in a temporal order, therefore we must treat systems and the
processes of their units as changing over time." (p. 167) [italics in original.] "The first
important implication is that an act is always a process in time. The time category is basic
to the scheme." (Parsons, 1968a, p. 45). Jacobsen and Bronson can justify their (failed)
approach by these statements (alone) because they sought to replicate the theory in terms
of time-varying constructs. The present research took a (slightly) different approach and
applied the iconic model, per Jackson’s advice to construct a high-fidelity model
(Jackson, 1983, pp. 4-5)2, not the abstract mathematical one of system dynamics. This
way there was no need to guess what in Parsons’ theory corresponded to the system
dynamics constructs of flows and accumulations (which Jacobsen and Bronson had to).
Instead, in this research there was a more literal translation between the elements of Par-
sons’ theory and the simulation – though the translation was not total, as many, many bits
of the theory were left out. For example, Parsons posits four pairs of "pattern variables,"
and this research models only one of them, the pair dealing with affect vs. rationality.
In particular, this research concentrates on the "temporal order" aspect of Par-
sons’ descriptions.
The idea of a mathematical model as theory in mathematical form began to take
hold [in the 1950s]. Writers of variant interests all agreed that such models per-
mitted the logical derivation of falsifiable claims about some class of phenomena.
This differentiated mathematical models from curve-fitting and data analytic
reduction methods. (Fararo, 1984, p. 152)

Indeed, this dissertation relies on a "cousin" of mathematical models, simulation,


and therefore is not of the curve-fitting or data analytic reduction variety. There are no
correlations, no Cronbach's alpha, no significance tests. In fact, there are almost no sta-
tistics at all because, in great part, it deals with a theory at the analytic level.
Novelty of results
Results were sought that are important and significant, but what might be the defini-
tion of those terms? Should "novel," "useful," "interesting" or "surprising" be added? In
the spirit of propounding testable hypotheses, Parsons' theory of action does not predict
what would happen in an organization that faced a high frequency of changes in its envi-
ronment, too much for it to absorb in the interval in which the changes could be made
sense of and acted upon (so-called "permanent white water" (Vaill, 1996)). Would the
changes accumulate, be discarded, decay, or queue? Perhaps a computer acting like a
system of action could shed some light.

2. Not all models seek fidelity. One operational measure of fidelity is correspondence: for every important construct in
the world there is a (corresponding) construct in the model. Another term for correspondence might be requisite
variety: for every variation in input there is a corresponding control or regulation such that the output matches the
variation (Ashby, 1956, chap. 11). Some researchers deliberately translate what they sense into frameworks that are
not mirrors of the originals, thereby not seeking correspondence or requisite variety.
What is surprise, what is novelty? Van Fraassen (2002) argues that if science is
objective, then when a scientist sets up an experiment he/she anticipates that the values of
measured parameters will fall within certain bounds; the experimental setup is established
to measure just those parameters and just at those levels. This, after all, is the orthodoxy
of the science being invoked. So what can be regarded as novel that would be sensed
during such an experiment? In part it might be that the objectively measured results
would not match those anticipated by the theory, even though the experimental apparatus
was established within the theory in the first place.
And van Fraassen (2002) reminds us that Kuhn (1970) has struggled with the
same dilemma and concluded that novelty, when it can be sensed, may yield a change in
the orthodoxy – incidentally, still in terms of scientific methods that imply objectivity –
if not the facts of the particular theory; there would be no resort to mythology or religion
(because that would alter the method). So, novelty in van Fraassen and Kuhn's views is
possible and admissible.
Shackle (1969) postulates a calculus of surprise by introducing the notion of
potential surprise.
A man cannot, in general, tell what will happen, but his conception of the nature
of things, the nature of men and of their institutions and affairs and of the non-
human world, enables him to form a judgement as to whether any suggested thing
can happen. In telling himself that such and such a thing 'can' happen, he means
that its occurrence would not surprise him; for we are surprised by the occurrence
of what we had supposed to be against nature. (p. 67) [italics in original]

Shackle first divides a spectrum into extremes: perfect possibility would not sur-
prise a person and impossibility would engage the absolute maximum degree of surprise
(p. 68). Between them are degrees of possibility with their corresponding inverse degrees
of surprise. While not important for the research here, Shackle posited that the dispersion
of degree of possibility and corresponding degree of surprise is not a probability distribu-
tion, but rather is non-distributional, that is, is not a function. One can have many events
that are not a surprise and their probability would not sum to unity.
To summarize, Shackle relates the degree of belief inversely with the degree of
surprise: we are surprised by that which we believe cannot happen.
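As a purely numerical illustration of that inverse relationship (not Shackle's own formalism, which he insisted is non-distributional rather than probabilistic), one can treat surprise as the complement of judged possibility and observe that nothing forces the possibilities of rival outcomes to sum to one:

# Illustrative rendering of "potential surprise" as the inverse of judged possibility.
def potential_surprise(possibility: float) -> float:
    """Map a degree of possibility in [0, 1] to a degree of surprise in [0, 1]."""
    if not 0.0 <= possibility <= 1.0:
        raise ValueError("possibility must lie between 0 and 1")
    return 1.0 - possibility

outcomes = {"expected event": 1.0, "plausible rival": 0.9, "near-impossible event": 0.05}
surprises = {name: potential_surprise(p) for name, p in outcomes.items()}

# Unlike probabilities, these possibilities sum to well over 1; no normalization is implied.
print(surprises, sum(outcomes.values()))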
What might surprise look like in the research to be described here? First, the
reader would have to think it was impossible to achieve. To a small subjective degree,
this has happened. When this researcher mentioned to other members of his school cohort
what he was trying to do, many of them expressed doubt that it would be possible. Further,
Jacobsen and Bronson tried it and failed, so there is a hint of impossibility. "Some people
don't believe that models of human behavior can be developed." (Sterman, 2000)
Second, the method of inquiry, a computer simulation, is far less restrictive of an
experimental setup than a traditional laboratory so some behavior might be observed that
was not within expectation, not predicted by the orthodoxy, and therefore would be sur-
prising within the ambit described by van Fraassen and Kuhn.

There are perhaps two more reasons that surprise might be expected, both because
Parsons has written so much. First among these is that it is predictable that there might be
contradictions or gaps in there somewhere, the precise nature of which might generate
surprise.3 And second among these is that so many people have already vetted Parsons'
theory that anything new would be unexpected at this late date.
Conceptual framework
The fundamental framework that informed this research is that of systems. A sys-
tem is a collection of elements and interactions whose structure and function can be
understood by looking at the whole, the summation, the interaction of elements. This
description highlights a tension in systems study. Some scholars infer qualities of the
whole by studying the parts (methodological reductionism, typical in normal science);
others claim that one can never appreciate the whole by dissecting the parts (holism)
(Honderich, 1995, pp. 750, 372; Sibeon, 1999).
The approach in this study was somewhere in the middle: the whole was studied
by understanding its parts, but not separately, rather as they interacted and collaborated in
patterns to define the whole. Thus, the emphasis was on how the parts were connected,
what flowed along those connections, and how the interplay of connections and flows
defined an organic whole.
Parsons (1968b) wrote:
Action systems have properties that are emergent only on a certain level of com-
plexity in the relations of unit acts to each other. These properties cannot be iden-
tified in any single unit act considered apart from its relation to others in the same
system. They cannot be derived by a process of direct generalization of the prop-
erties of the unit act. (p. 739) [emphasis in original]

The term energy used in this research denotes the material in the environment of
the system that is external to it and that can be sensed by the system. That is, energy is
the term used to label the elements in the world external to the system under study that
can be used to stimulate the system, that can energize and activate the system to respond
to the environment. Sometimes Parsons refers to this energy as motivation (Parsons et al.,
1953a). Concretely, the energy could be news, ideas, or information, for example. News,
of say an invention or a move by a competitor, could stimulate the system (in our case an
organization) to evaluate the content and respond to what it sensed in the external world.
One important point is that the term energy used here is not the same as that used in
physics; in physics energy is conserved, that is, neither consumed nor created, but in the
use here (social) energy may be infinitely created and transformed and possibly even
consumed. Parsons postulated a law of conservation for motivational energy, which in its
central part would claim that motivational energy is exchanged for changes in the system
(Parsons et al., 1953a, p. 168). Alas, the merits of such a law of conservation of social
energy are beyond the ken of this research.
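One way to carry this notion of energy into a program is as a small record whose fields follow the description above: where the energy came from, what symbolic meaning was attached to it on import, whether its orientation is affective or affect-neutral, and when it arrived; unlike physical energy, it can be amplified, transformed, or consumed outright. The field names and the consume operation below are this writer's illustrative assumptions, not definitions taken from Parsons or from the dissertation's simulation.

# One possible record for a packet of (social, motivational) energy.
from dataclasses import dataclass

@dataclass
class Energy:
    source: str             # e.g., news of a competitor's move
    content: str            # symbolic meaning attached when the energy is imported
    affective: bool         # affect vs. affect-neutral orientation (one pattern variable)
    arrived_at: float       # simulation time of importation
    magnitude: float = 1.0  # may be amplified, transformed, or consumed

    def consume(self) -> None:
        # Social energy, unlike physical energy, need not be conserved.
        self.magnitude = 0.0

packet = Energy("news of an invention", "possible threat to the product line",
                affective=True, arrived_at=0.0)
packet.consume()
print(packet)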
If one considers a unit act to begin with the importation of exogenous energy,
then the social system presented by this research will follow the trajectory of that energy
as it transits the replica of an organization. In order to imagine what emergent properties
could be possible, it is necessary to understand at least the rudiments of Parsons'
theory of action.

3. Brownstein (1982) has found contradictions and gaps by converting a portion of the theory of action into a set of
logic statements and showing the inconsistencies therein.
Parsons' theory of action
Talcott Parsons, perhaps the best known American sociologist of the 20th century,
attempted to construct a theory that explained organizations at all units of analysis. He
was attracted to and influenced by the systems view4, among many other influences, and
noted that organizations are structured in specific ways, not randomly. He noted that
structure and function interplayed, that the functions of an organization were executed by
elements of its structure. He noted that the execution also was not random, that it
responded to normative pressures that could be applied exogenously and endogenously.5
He sought to understand the patterns of structure and function. His view was not
reductionist; he was trying to see organizations at their highest levels of abstraction. In
order to construct a high-level framework, Parsons (1960) defined the atomic unit, the
unit act, to which everything else would refer:
The unit act involves the relationship of an actor to a situation composed of
objects …. The unit act, however, does not occur independently but as one unit in
the context of a wider system of actor-situation relationships, … referred to as an
action system. … Action is thus viewed as a process occurring between two
structural parts of a system – actor and situation. (p. 467) [italics in original]

While the unit act is the atomic level, the social system describes social interac-
tion, behavior. Behavior that directly concerned the "cultural level" Parsons called action.
Said another way, relying on Weber, which Parsons often did, particularly related to
action:
Interpretive understanding of social actions is a prerequisite for the causal analy-
sis of social structures and processes. In modern form, we can put it this way:
there is an actual world of events and some events are behaviors. Behaviors are
treated as actions when they are analyzed relative to cultural frames of reference,
according to which the behavior means one or more things to the producer of the
behavior and to the situational interpreters of the behavior. (Fararo & Skvoretz,
1984, p. 148) [italics in original]

Action includes four generic types of subsystems (that is, collections at which unit
acts occur): organism (atomic level, the individual), social system (generated by the
interaction among individual units), cultural systems (patternings of meaning, such as
beliefs and ideas), and personality (the learned patterns of social and cultural interaction)
(Parsons, 1977b, p. 178). We might re-phrase these units of analysis, in order from small-
est to largest, in terms of the sciences that usually describe and study them: biology, the
organism's physics and chemistry, its "atomic" structure; psychology, the individual's
learned behavior and decisions; sociology, the collective structure and action of individu-
als; and anthropology, the national or religious influence.

4. "System seems to me to be an indispensable master concept …." (Parsons, 1977b, p. 101).
5. The non-randomness is the subject of an entire work, Parsons, Bales & Shils (1953b), according to Parsons (1960, p. 195).

Parsons described his theory, in the terms most important for this research, using
several constructs: pattern variables, functional prerequisites, interchange media, and the
cybernetic hierarchy. Parsons' theory was much larger than these constituents, but they
were the ones replicated in this research. It was an untestable (and therefore not falsifi-
able) assumption of this research that the axes mentioned are the core of the theory of
action. Or stated more positively, if one can simulate these elements then the theory of
action can be simulated.
Pattern variables.
Robert Bales, a student of Parsons', was studying small group interaction. Bales'
method of primary research was observation: he would watch actual groups deal with real
situations. He came to see patterns, broadly tasks and emotions. And he saw in groups
that questions about tasks and emotions were asked and answered. He subdivided the
patterns into what he called four problem areas: expressive-integrative social-emotive
positive and negative reactions, and instrumental-adaptive task area questions and
answers. The modern depiction of this is illustrated in Bales (1999, figure 6.1, p. 165).
In a few words, Bales observed small groups and saw patterns in the interactions
among the participants. He saw questions and corresponding answers, he saw attention to
the work or tasks of the group, he saw positive and negative emotions, he saw reactions
to external and internal stimuli, he saw planning of tasks and work processes, and he saw
setting of norms and expectations and the response of performance to them, among
others.
Parsons adopted Bales' framework and adapted it to describe the patterned struc-
ture and function of organizations. He called the pairs, originally five and later reduced to
four, pattern variables (Parsons, 1960):
Each variable defines one property of a particular class of components. In the first
instance, they distinguish between two sets of components, orientations and
modalities. Orientation concerns that actor's relationship to the objects in his
situation and is conceptualized by the two "attitudinal" variables of diffuseness-
specificity and affectivity-neutrality. … Modality concerns the meaning of the
object for the actor and is conceptualized by the two "object-categorization" vari-
ables of quality-performance and universalism-particularism. It refers to those
aspects of the object that have meaning for the actor, given the situation. (p. 468)
[italics in original.]

The purpose of the classification was to suggest propositions about action systems
in terms of the components and the type of act their combination defines and controls. In
this section pattern variables are described, then their patterned movement is described,
and finally the patterned movement is structured to yield what becomes, in the section
after this one, the four functional prerequisites.
At base, action is grounded in motivation and emotion or its polar opposite, disci-
pline. The emotional pole is called affect and the discipline or deferred gratification pole
is called affect-neutral. The affect pole is considered an expressive orientation and the affect-
neutral pole a rational or cognitive one. Fararo (2001) illustrates the differ-
ence: "In the judge-defendant social relation in American society, in the public trial
situation, the judge is expected to restrain herself from expressing feelings of liking or
not liking the defendant. This constitutes a specific aspect of socially responsible action
expected of a judge." (p. 150)

Parsons averred – based on Bales' small group interaction observations – that


cognitive standards were expressed in a more general form that transcended any particu-
lar group, while emotional or "appreciative" standards were applied to more particular
collections. The polar opposites then became the universalism-particularism pattern vari-
able. Fararo (2001) illustrates the difference: "A judge is expected to apply the same body
of law to any defendant before her. Socially responsible action defined by this universal-
ism means transcendence of the particular relationship to the specific defendant before
her." [italics in original] (p. 151)
The outcome of social action can be characterized either by the result of interac-
tion or by the role-status of an actor or actors. The pair of opposites is variously called
quality-performance or ascription-achievement. Fararo (2001) illustrates the difference:
"To be appointed as a federal judge, a person must satisfy certain performance or
achievement criteria pertaining to education and experience. The judge is evaluated by
reference to performance, not according to race or gender." (p. 152)
In social action each actor may focus attention on a specific social object or on a
"plurality" of social objects. The pair is called the specificity-diffuseness pattern variable.
Fararo (2001) illustrates the difference: "A judge is expected to confine her interest in the
defendant to trial-related matters." (p. 152) That is, the judge would have a specific focus,
not a diffuse interest in the affairs (that is, social actions) of the defendant.
Before relating these pattern variables to each other, it is worth mentioning that
either separately or in the combinations to be described next, the values of each pair
represent, in each social situation, what is acceptable, the norm, the expected, the institu-
tionalized pattern of appropriate orientation. In this sense, as Fararo reminds us (2001,
p.149), the value of pattern variables acts as part of a (yet to be described) control mecha-
nism to stabilize social action. When the values are the expected ones, then there is rein-
forcement; when the values are not the expected ones then the social system responds to
set the value right.
Parsons unlinked and then re-linked the pattern variables, each orientation with
each modality, this way: universalism with specificity, particularism with diffuseness,
performance with affectivity, and quality with neutrality. While not evident at this point
in the exposition, the re-linking corresponded (Parsons said "converged," (1960, p. 468))
to the classification of functional problems or prerequisites that Bales had earlier formu-
lated (Bales, 1950).
The researcher asserts that in order to demonstrate the feasibility of simulating
Parsons' theory, only one pattern variable needed to be selected. As will be explained on
page 46, below, the affect vs. affect-neutrality pair was selected. The affective orientation
is that the actor responds to the situation emotionally, and its opposite, the affective-neu-
tral orientation, is that the actor responds rationally, cognitively, not emotionally. Heise
(1979) states, "Events cause people to respond affectively." Clearly, Heise is not going to
agree with Parsons on this issue! This would have been important if Heise's work on af-
fect, situated action, affective reactions, events, and social processes (op. cit.) were going
to inform the work reported here. Instead, Heise notes in his comprehensive and accessi-
ble work that his framework is incommensurate with Parsons' (Heise, 1979).

An action system is not characterized solely by the actor's orientations and


modalities; it is also a structured system with analytically independent6 aspects that the
pattern variable combinations cannot take into account. That is, pattern variables are a
necessary but not sufficient categorization. In particular, in a structured system both actor
and object share norms (Parsons, 1960, p. 468); this is one definition of interpenetration.
Four functional prerequisites.
Starting with the pattern variables and after a set of steps that consumed hundreds
of pages in Parsons (1960), Figure 2, below, the components of action systems, was ulti-
mately offered; it is not necessary to understand everything in the figure. At its heart are
four major quadrants at the intersections of external-internal and instrumental-consum-
matory. These correspond to the four functional problems that Bales identified and Par-
sons refined. Internal and external refer to inside and outside of the organization, endoge-
nous and exogenous, respectively. Instrumental applies to means and consummatory
applies to ends. The names of the major quadrants, starting in the upper left corner and
moving clockwise, are Adaptation, Goal Attainment, Integration, and (Latent) Pattern-
Maintenance (AGIL). These four are imperatives, prerequisites for any organization, in
fact any organism, to address in order to survive, that is, in systems terminology to
maintain its boundary.7 They are also the functions performed with respect to social
actions.

6. "Analytically" is used in the sense of Kant (1896), namely that it is true by definition or logic or deduction, not by
experience (which would be synthetic, inductive, empirical). At one point, Parsons writes (1968a, p. 34), "It is these
general attributes of concrete phenomena relevant within the framework of a given descriptive frame of reference …
to which the term 'analytical elements' will be applied." [italics in original.]
7. "The difference between system and environment has two especially important implications. One is the existence and
importance of boundaries between the two. Thus, the individual living organism is bounded by something like a
'skin' inside of which a different state prevails from that outside it. … The second basic property … is that in some
sense they [organisms] are self-regulating. The maintenance of relative stability, including stability of certain
processes of change like growth …, in the face of substantially greater environmental variability, means that … there
must be 'mechanisms' that adjust the state of the system relative to changes in its environment." (Parsons, 1977b, p.
101)

Figure 2. The components of action systems. (Parsons, 1960, p. 470)

The Adaptation (A) function imports and filters energy from the external world,
the environment, and, based on the external and internal norms, attaches symbolic
meaning to it. The Goal Attainment (G) function sets goals (that is, ends) and allocates
resources in the service of those goals, based on the symbolic meaning of achieving them.
Integration (I) aligns the structure and function of the organization to the goals in accor-
dance with the resources allocated. Latent Pattern-Maintenance (L) establishes and then
maintains the internal norms. "Latent" is used to refer to something unseen, the opposite
of manifest, and the pattern being maintained is what lay persons call organizational cul-
ture. Parsons (1977b) says:
"The most important single condition of the integration of an interaction system is
a shared basis of normative order. Because it must operate to control the disrup-
tive potentialities (for the system of reference) of the autonomy of units, as well
as to guide autonomous action into channels which, through reinforcement,
enhance the potential for autonomy of both the system as a whole and its member
units, such a basis of order must be normative." (p. 168) [italics in original]
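Read as a processing chain, the four functions can be glossed roughly as follows. This is a toy rendering in Python: the dictionary fields and transformations are this writer's illustrative paraphrases of the definitions above, not code from the dissertation's model.

# Toy gloss of the four functional prerequisites as stages acting on imported energy.
def adaptation(event: dict) -> dict:
    """A: import and filter external energy, attaching symbolic meaning."""
    event["meaning"] = f"interpreted({event['raw']})"
    return event

def goal_attainment(event: dict) -> dict:
    """G: set ends and allocate resources in light of that meaning."""
    event["goal"], event["budget"] = "respond to " + event["meaning"], 100
    return event

def integration(event: dict) -> dict:
    """I: align structure and function with the goal and its resources."""
    event["structure"] = "unit chartered for " + event["goal"]
    return event

def latent_pattern_maintenance(event: dict) -> dict:
    """L: test the result against internal norms and maintain the pattern."""
    event["norm_ok"] = event["budget"] <= 150   # hypothetical norm
    return event

energy = {"raw": "news of a competitor's move"}
for stage in (adaptation, goal_attainment, integration, latent_pattern_maintenance):
    energy = stage(energy)
print(energy)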

Figure 2 also illustrates a collateral point: Parsons set out to develop a grand uni-
fied theory that could be applied up and down the units of analysis, from individual to
collective to culture. Accordingly, each of the four functional prerequisites can be sub-
divided into four more functional prerequisites, and so on infinitely. The figure shows the
first two divisions, one at the systems level and then the next at the level of each func-
tional prerequisite. Note that the lower right quadrant, the one corresponding to Integra-
tion, contains the four functions in the same order as the square containing it. This illus-
trates the importance and centrality of Integration, as indicated by the quotation in the
paragraph above. And it also illustrates that the diagrams can be used to designate differ-
ent levels of abstraction or granularity.
The conceptualization of the pattern variables potentiated Parsons' understanding
of the four functional prerequisites because they all fit together so harmoniously.
Cybernetic hierarchy.
Figure 3 is the same as Figure 2 in the sense that it contains the same 16 pattern
variable combinations (listed in the upper right hand corner of each box), but the rows
and columns are arranged differently; it is not necessary to understand everything in the
figure. The rows (i.e., functional prerequisites) are now in the order L-I-G-A, and the
columns in the order L-I-A-G. Note along the left edge that there is a direction of control
and a direction of limiting conditions. These are the cybernetic hierarchy of control. The
organization is controlled, first and foremost, by its internal norms. The norms even con-
trol which energy is imported and the sense is made of it; which particular energy is im-
ported and what particular sense is made of it depends upon the value ascribed to the
norm. Therefore, L is the most controlling and A the least.
Each cell categorizes the necessary but not sufficient conditions for operation of
the cell next above it in the column, and in the opposite direction, the categories of
each cell control the processes categorized in the one below it. For instance, the
definition of an end or goal controls the selection of means for its attainment
(Parsons, 1960, p. 477).
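A compact way to state the two directions is as an ordered list, with control running from L down to A and limiting conditions running back up from A to L; the helper predicate below is only an illustrative gloss on that ordering.

# The cybernetic hierarchy as an ordering: L most controlling, A least.
CONTROL_ORDER = ["L", "I", "G", "A"]

def controls(higher: str, lower: str) -> bool:
    """True if `higher` sits above `lower` in the cybernetic hierarchy of control."""
    return CONTROL_ORDER.index(higher) < CONTROL_ORDER.index(lower)

assert controls("L", "A")       # norms ultimately control adaptation
assert not controls("A", "G")   # adaptation conditions, rather than controls, goal attainment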

Figure 3. The action system in relation to its environment.


(Parsons, 1960, p. 476)

Interchange media.
Parsons next postulated the means by which the 2 x 2 quadrants intercommuni-
cated. Each quadrant is a function and, to form a system, it communicates to and is com-
municated from each other one. As one can see in Figure 4 there are 12 such paths (lines
with arrowheads to and from each of the four functional prerequisites); it is not necessary
to understand everything in the figure. He called the paths interchange media and along
them pass symbols, not (usually) physical objects. That is, each function produces and
consumes symbols, and that is how each function intercommunicates. One particularly
salient depiction of the interchange media is Figure 5, in which a cycle or phase move-
ment is illustrated; note the (clockwise) sequence 1, 2, 3, and 4 among the functional pre-
requisites in AGIL order. It is not necessary to understand everything in the figure, only
the order in which the phases occur with respect to the situation of the organization.
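
One way to see the combinatorics is the following minimal sketch, which simply enumerates the twelve directed paths among the four functional prerequisites. The single-letter labels and the phase ordering come from the AGIL discussion above; the variable names themselves are illustrative only.

from itertools import permutations

# A minimal sketch: the four functional prerequisites and the twelve directed
# interchange paths among them (every ordered pair of distinct functions).
FUNCTIONS = ["A", "G", "I", "L"]
INTERCHANGE_PATHS = list(permutations(FUNCTIONS, 2))
assert len(INTERCHANGE_PATHS) == 12   # matches the twelve arrowed paths in Figure 4

# The clockwise phase sequence of Figure 5, in AGIL order.
PHASE_SEQUENCE = ["A", "G", "I", "L"]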
[Figure 4 shows the four quadrants: Adaptation, Goal Attainment, Integration, Latent Pattern Maintenance.]
Figure 4. Interchange media (whose paths are represented by arrows) in the general theory of action. (Adapted from Parsons & Platt, 1973, p. 435; see also Parsons, 1977c, p. 263.)

The intuition is that the Adaptation function scans and senses the external envi-
ronment and might find some information there that could be imported as energy and
passed along (via the interchange medium) to the Goal Attainment function. The Goal
Attainment function then might use that information either to change its goals or to
change its resource allocation. These changes, expressed symbolically as new goals or
new resource budgets, would travel along an interchange medium to the Integration
function. The Integration function would then decide how best to arrange the elements of
the organization in order to achieve the goals in light of the resources. One can imagine,
for example, goals around improved quality and productivity and these would get trans-
lated by the Integration function into organizational entities (e.g., VP of Quality or the
Quality Improvement Department), job descriptions, new methods of rating job and unit
performance, new methods of incenting the desired behavior, new methods of recruit-
ment and advancement, and new training. In turn these new structures and functions
would activate the Latent Pattern Maintenance function via an interchange medium and
the L function would respond, principally by trying to reset the organization to the status quo ante. L communicates via interchange media connected to the other three functions.
The L function works internally by manipulating a construct called tension, which is the difference between what the organization aspires to (expressed by goals and resource allocation, that is, the Goal Attainment function) and what it achieves. When achievement is low with respect to aspiration and the environmental situation, the L function is more controlling: it tries to track more closely the energy being imported so that it can match the organization to the environment. Symmetrically, when there is little difference between aspirations and achievement, that is, when tension is low, then the L function is less controlling, more "quiet."
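
Parsons gives no formula for tension, so the following is only a minimal sketch of the relationship just described, assuming tension is the simple gap between aspiration and achievement and that the L function's degree of control rises with that gap. The numeric scaling is an illustrative assumption, not part of the theory.

# A minimal sketch, assuming tension = max(0, aspiration - achievement); the
# proportional scaling of L's "control level" is an illustrative assumption.
def tension(aspiration: float, achievement: float) -> float:
    return max(0.0, aspiration - achievement)

def l_control_level(aspiration: float, achievement: float) -> float:
    """Higher tension means L is more controlling; zero tension means L is 'quiet'."""
    if aspiration <= 0:
        return 0.0
    return min(1.0, tension(aspiration, achievement) / aspiration)

print(l_control_level(10.0, 4.0))    # high tension: L tracks imported energy closely
print(l_control_level(10.0, 10.0))   # no tension: L stays quiet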
Figure 5. Phases in the relationship of a system to its situation. (Parsons et al., 1953a, p. 182)

Recapitulation.
To recapitulate the structure of the theory of action, the 16 possible combinations
of the four pairs of pattern variables gave rise to the 2 x 2 arrangement at the next higher
level, the so-called AGIL framework that captures the functional prerequisites that every
organization has to address to establish and sustain itself. The quadrants communicate via
interchange media and there is a priority of control in that communication, in accordance
with the cybernetic hierarchy.
As stated in Section I, Introduction, no one knows a priori how much or little is
needed to simulate a particular social system. The researcher speculated that in order to
simulate the theory of action there must be at least representatives of the pattern vari-
ables, functional prerequisites, interchange media, and cybernetic hierarchy.
Problem
The interface between description of systems and social action was informed by
soft systems methodology (Checkland, 1999; Checkland & Scholes, 1999). Checkland
realized that many "hard" engineering projects fail because they do not take into account
the "soft" factors that humans introduce, such as the power structure around the project.
He offered a step-by-step method for integrating hard systems and soft ones. His was a
pioneering effort to integrate social and engineered systems, and was a source of inspira-
tion for this research.
However, present computer simulations require specific details about the structure and behavior of variables and the interactions they animate. In addition, much of the work in social systems is limited in detailed description, what Checkland calls "rich description" (Checkland & Scholes, 1999, p. 45). Thus, one of the problems is that either sufficient data for detailed description or a comprehensive and appropriately complex theory (e.g., one with requisite variety (Ashby, 1956)) needs to be found, and then one needs to see whether it is sufficient for a computer simulation to be constructed and operated.
More specifically, can Parsons’ large body of descriptive text be understood? Can
the salient factors (structure and function) be extracted? Finally, is it possible to instanti-
ate, make concrete, those salient factors so that a high fidelity representation of the
descriptive theory of action can be constructed?
Even if the questions could be answered, one is left with: Are there any novel
insights? Is there anything useful to be learned? Can anything significant be predicted?
Can the simulation predict something on which Parsons is silent? Is it possible to obtain
enough confidence in such an undertaking that it could function as "the right answer"?
Asked a different way, "Is it possible to develop a laboratory replica of the theory of
action, and if it is, then can anything interesting be inferred from operating it?"
In addition, there is no published attempt that successfully simulates any part of Parsons' theories. Also, there are few published applications of discrete event simulation to social systems. Therefore, this contribution is an early and humble set of results in the use of that tool, added to a sparse literature.
Research question
The question guiding this research was "What is the minimum set of structures and related functions that can simulate Parsons' theory of action to some criteria of validity?" That is, what was the most parsimonious selection of theory of action constructs that, when animated, achieved a given level of fidelity? Can the theory of action be simulated using only the functional prerequisites, (one pair of) the pattern variables, (four of) the interchange media, and the cybernetic hierarchy of social control?
Significance
This study contributes to the three areas traditionally addressed by social science
research:
• Theory building – This work may enrich Parsons’ description by making con-
nections that Parsons did not, for example between the frequency of changes
in the environment and the rate at which change can be sensed and incorpo-
rated by an organization. In addition, it may identify gaps in description, at least gaps needing to be filled in order to simulate. Further, this study will contribute to the evolution of applying Parsons' theory to additional contexts,
following a long tradition (Black, 1961; Etzioni, 1975).
• Methods – This work may add methods of translating theory statements into
structure and function constructs. These constructs can in turn act as testable
hypotheses amenable to a range of theory validation techniques. It also may
help to make the case for additional time-varying research.
• Theory (of science) – This work may add a brick to the discussion of where simulation fits into the practice of science: as a tool for theory understanding, a tool for theory building, and/or a tool for theory testing.
The reason "game" appears in the title of the dissertation is that there is something
of a game that the user of the simulation described in this research can play by varying
the inputs and seeing what an execution will produce.8
Simulation
The conceptual framework for constructing a simulation from descriptive text
emanates from the flow from theory to action, Figure 6.
Theory → Model → Constructs → Variables → Data → Analysis → Action
Figure 6. Flow from theory to action. (David Schwandt, personal communication)

Theories are our ontologies: they are the bases for our beliefs about what we can know for sure (epistemology) and what constitutes valid activities to seek certainty (methodology). We extract from theories various features and organize them, calling that organization a model, which is the theory with some parts left out (that is, the translation from the theory to the model is incomplete). The features and organization are at a level of abstraction, usually the highest, the one with the largest blocks and thickest lines between them. Sometimes collections of the blocks and connections are named or renamed. The thing renamed is called a construct. For example, we use the term "orientation" to identify the performance and learning perspectives of Parsons' theory of action. We invent the term "orientation" to be used in that sense. Constructs in turn are composed of variables, factors that take on different values, that is, that eponymously vary.
The collection of values is called data, which are analyzed so that inferences about
actions can be obtained.
The description of Parsons' theory of action forms the theory referred to in Figure
6. The model in that figure is the same theory but with only certain (not all) elements and connections, and it is the subject of this research. As stated above, one should at least be
able to discern in the model to be presented the pattern variables, four functional prereq-
uisites, interchange media, and cybernetic hierarchy because they are the cornerstones of
the descriptive theory.

8 Using the terms described in the Conceptual Framework section, the "game" is to see whether latent pattern maintenance will follow the input energy.

The goal of any simulation is to animate the elements, to put the time-varying
elements onto a "canvas" or work space where their "movement" through time can
somehow be visualized. In this research the canvas is a computer screen with a drawing
resembling Figure 5 on it and "behind" the picture, in a way not seen by the user, the
computer simulates the path of energy entering the organization and transiting in turn
through the AGIL cells. Details are provided in Chapter III.
The simulation represents a set of choices – and inventions.9 Explicating what
choices are available and what choices were made and why is the subject of this sub-
section. Fararo and Hummon (1994, pp. 29 ff), mirroring Fararo (1989, ch. 2), provide a
conceptual framework for presenting the choices and for making clearer which parts of
the simulation are provided by Parsons and which are provided by the researcher. There
are six "key menus" that have to be selected and explained (these are categories and their
scales):
i. State space: categorical or continuous
ii. Parameter space: categorical or continuous
iii. Time domain: discrete or continuous
iv. Timing of events: regular, incessant, or irregular
v. Generator: deterministic or stochastic
vi. Postulational basis: equations or transition rules
The state space is the cross product – the combination – of all valid values of all
variables, including how the "boxes" are connected and what flows among them. In the
instant case the space is made up of category values, not continuous ones. For example,
the Adaptation function is connected to the Goal Attainment function; both of these
functions are categories, as is "connected." Parameter space is the cross product of all
fixed properties of the system. In this case, parameters include, but are not limited to (a sketch of these as a configuration record appears after the list):
• The magnitude of energy entering the system – A small integer, ordinal scale.
• Energy threshold – the value against which the incoming energy is compared; if the comparison is true then the energy passes into the system. Same units as the magnitude of energy.
• Sense of the comparison test – Category: >, >=, =, <, <=
• Whether the energy will be dealt with affectively or not – Boolean.
• Time to be spent in each functional area if affective – A quantity of simulated
time; without loss of generality time is represented as a positive integer.
• Time to be spent in each functional area if not affective – A quantity of simulated
time.
• With respect to learning and forgetting:
o Value of prior learning – A quantity of simulated time.
o Time to reach the current pattern – A quantity of simulated time.
o Time since the last change – A quantity of simulated time.
o Starting value of Latent Pattern Maintenance energy – Same units as magni-
tude of energy
• Length of time to simulate – A quantity of simulated time.
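
A minimal sketch of the parameter space as a single configuration record follows. The field names and default values are illustrative assumptions made here for concreteness, not values prescribed by Parsons or fixed by the simulation described in Chapter III.

from dataclasses import dataclass

# A minimal sketch of the parameter space as one configuration record.
# Field names and defaults are illustrative assumptions only.
@dataclass
class SimulationParameters:
    energy_magnitude: int = 3            # small integer, ordinal scale
    energy_threshold: int = 2            # compared against the incoming energy
    comparison: str = ">="               # one of >, >=, =, <, <=
    affective: bool = True               # whether energy is dealt with affectively
    time_per_area_affective: int = 2     # simulated days in each functional area
    time_per_area_nonaffective: int = 5
    prior_learning: int = 10             # value of prior learning (simulated time)
    time_to_reach_pattern: int = 20
    time_since_last_change: int = 0
    initial_lpm_energy: int = 1          # starting Latent Pattern Maintenance energy
    run_length: int = 365                # length of simulated time to run

params = SimulationParameters()          # any user can override any default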

9 This point is important because we define model as a subset, an incomplete isomorphism, of the theory. So the model cannot contain anything invented. But the simulation might, in the service of finding a computer method of replicating the elements, structure, or flow of the model. And that is what is meant by the additional clause "inventions."

The time domain is described in discrete units. Parsons did not indicate what reasonable time values were, so the researcher assumed that the basic unit was one day. Accordingly, one day passed for every tick of the simulated clock, and all ticks occurred at integer multiples: time is discrete, not continuous. The timing of events was irregular and depended upon what had happened before. In fact, the simulation clock did not tick at a fixed rate; rather, it moved ahead to the time of the next event, and in general that interval cannot be known a priori. The state variables are defined only for the discrete, integer time units.
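
The next-event style of time advance described above can be sketched minimally as follows; the event list and its contents are invented for illustration and are not the dissertation's actual implementation.

import heapq

# A minimal sketch of next-event time advance: the clock does not tick day by
# day but jumps to the time of the next scheduled event. Times are integer
# simulated days; the two events are invented for illustration.
events = []
heapq.heappush(events, (3, "energy enters the Adaptation function"))
heapq.heappush(events, (8, "energy moves from A to G"))

clock = 0
while events:
    clock, what = heapq.heappop(events)   # jump straight to the next event
    print(f"day {clock}: {what}")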
By the generator of the process, Fararo and Hummon (1994, pp. 30-31) signify the means or mechanism by which the system produces changes in its state. The two mechanisms are rolling the dice and determinism. Rolling the dice, or having the transition be probabilistic, can be accomplished in discrete event systems; in fact, any probability distribution can be imitated. Deterministic means that there is certainty (probability = 1) that a state changes from the current one to the next. In the research described here, the transitions were deterministic; there was no randomness in selecting the next state.10
By postulational, the authors mean the mechanism by which transition to the next
state is specified. Typically, in discrete event simulation the next state depends directly
upon the current state and the transition rules. For the research described here, the pri-
mary transition rule was: when it is time for energy to move from one functional area to
the next, the system attempts the move; if the next functional area is already occupied
then the energy is moved to a corresponding queue instead, otherwise it moves the energy
to the (empty) functional area.
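
The primary transition rule lends itself to a very small sketch. The data structures below (a per-area occupant and a per-area queue) are illustrative choices, not the dissertation's actual implementation.

from collections import deque

# A minimal sketch of the primary transition rule: energy enters the next
# functional area if that area is empty, otherwise it waits in the area's queue.
occupant = {"A": None, "G": None, "I": None, "L": None}
queues = {area: deque() for area in occupant}

def move_energy(energy: str, destination: str) -> None:
    if occupant[destination] is None:
        occupant[destination] = energy        # area is free: occupy it
    else:
        queues[destination].append(energy)    # area is busy: wait in its queue

move_energy("bundle-1", "G")
move_energy("bundle-2", "G")                  # G is occupied, so bundle-2 queues
print(occupant["G"], list(queues["G"]))       # bundle-1 ['bundle-2']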
Which of the foregoing has been described by Parsons and which was invented/created by the researcher? The categorical state space has been specified in Parsons, Bales and Shils (1953a) and so has much of the parameter space, though the actual values of the parameters were assumed in the research; without loss of generality any user of the simulation can change any of the parameter values, so this invention does no violence to the structure of the theory. That the time domain is discrete is a computational
convenience and is not suggested one way or the other by Parsons. The timing of events
as irregular is consistent with Parsons, Bales and Shils (1953a) and the other two options
(regular and incessant) would not be. The deterministic generation of next states is im-
plied in Parsons, Bales and Shils (1953a) because there are no probabilities mentioned or
suggested. And the postulational basis is clearly not equations, so transition rules are im-
plied.
Therefore, to create a model to represent the dynamics of Parsons' scheme we
developed a system that managed energy in discrete units and moved those bundles of
energy through the processes in accordance with the AGIL framework and governed by
the feedback and control hierarchy. Specifically, we envisioned a concrete organization
that processed inputs from its environment, though perhaps Parsons would have argued
for the generality of the processes at every level of analysis.

10 Here is an example of a probabilistic transition. Imagine a simulation of a retail store, such as a grocery. A shopper would select a random number of items to purchase, and that number would put the shopper into the cashier line for the appropriate number of items, such as the regular line or the 15-items-or-less line. It would be impossible to know in advance how many shoppers ended up in the 15-items-or-less line because the selection is random and generated during the execution of the simulation.

Limitations
The limitations of a study are those characteristics of design or method that set
parameters on the application or interpretation of the results; that is, the constraints on
generalizability and utility of findings that are the result of the devices of design or
method that establish internal and external validity. In a quantitative study the most
obvious limitation would relate to the ability to draw descriptive or inferential conclu-
sions from sample data about a larger group.
There are two viewpoints that both properly identify this first attempt at simulat-
ing Parsons’ theory of action:
1. Thorngate (1976), in attempting to describe the range of explanatory power of theory,
drew the one-armed clock in Figure 7. He stated that a particular model or theory can-
not simultaneously be general, simple, and accurate. Rather, the researcher must trade
among those outcomes. Clearly, the research described here was simple, so it was
neither general nor accurate. Accordingly, the results will have to be used with great
care (not general) and will not apply to any actual social system (not accurate).

Figure 7. Thorngate's one-armed clock, with the positions labeled General, Accurate, and Simple. (Adapted from Thorngate, 1976, p. 406)

2. Thomsen, Levitt and Kunz (1999) suggested that simulations go through stages,
Figure 8. The first stage is to build a "toy" to see if the simulation can even be built
and whether it will have interesting properties. Again, clearly that was the stage of the
simulation presented here. Accordingly, the results are vigorously disclaimed as a
modest first attempt, really a toy, that may not be applicable to any set of facts, but
rather should be seen as a foundation to be enhanced and expanded. Indeed, some
elements of the simulation were given arbitrary values in order to achieve simplicity
and the arbitrariness detracts from the significance of the outcome (Fararo &
Hummon, 1994).
This theoretical approach to models [theory in mathematical form] included the
idea of "successive approximation" articulated in sociological theory first by
Comte, then by Pareto and the later stressed by Homans. Models were not
expected to be correct in every detail nor to cover the entire potential scope of
interest in a class of phenomena. They were to be modified and generalized (in a
formal sense) over time. Even though such a model might include entities and
processes not presently observable, the logical connections among ideas in a con-
ceptual network assured that the theory was testable. (Fararo, 1984, p. 152)
Figure 8. Evolution of computer simulations of organizations. (Thomsen et al., 1999)

As stated in the Introduction, the purpose of this dissertation was to report on a first attempt to simulate Parsons' theory of action. Accordingly, the study did not seek to
operationalize or transform into constructs everything Parsons wrote on the subject, but
rather it functioned as a starting point of a single working simulation. Even applying that
working simulation will leave much for future research. It was necessary to select the few
constructs that formed the kernel of this simulation from all of the possible candidates.
At the outset these additional limitations have been identified in Table 2.
Table 2.
Additional limitations of the study

Origin of limitation: Parsons never intended his description of the theory of action to be granular enough to support simulation. He stated that his theory was not at the logico-deductive stage yet (Parsons, 1961c, p. 321). There have been other attempts to identify propositions in Parsons' work and to test whether they form a set that is logical in the sense that conclusions can be deduced from those propositions (Brownstein, 1982). Those other attempts have found gaps in Parsons' reasoning that Parsons acknowledged and did not apologize for because the theory was never intended to be "logical" yet.
Limiting action: It is not known why Parsons' theory cannot be simulated unless and until there has been an attempt. On the other hand, Jacobson and Bronson (1997) reported failure, and they are experienced, published sociologists and modelers. Also, in fairness, Parsons (1968a, pp. 77 ff) contradicted his own observation by providing some formalization that could have been interpreted as a beginning of a logico-deductive base.

Origin of limitation: Parsons was a prolific writer. He has been roundly criticized for being difficult to understand (Selznick, 1961)11; (Kolb, 1962)12; (Bressler, 1961)13. One reason for the criticism is that Parsons would write a sentence and then in the next use different terms for what appeared to be the same thought. Was Parsons restating the previous sentence in a different way aimed at increasing our understanding through redundancy or repetition, or was he saying something that was (slightly) different from the first sentence?14 Parsons' writing style confounds understanding, which was a problem because the study sought a deep, detailed understanding so that it could illustrate the theory by replicating that understanding inside a computer.
Limiting action: Identify a single work (Parsons et al., 1953a) and acknowledge the implications for generalizability.

Origin of limitation: One never knows when to stop trying to increase the fidelity of a simulation. This is a problem with all simulations. It is equivalent to the question of validation: when is the computer simulation sufficiently like the Real World to be trusted? One can always try one more interesting case, one more tweak that will increase fidelity.
Limiting action: Use the heuristic: can the computer simulation serve as a foundation for further research work where only incremental enhancements would be needed, not wholesale simulation changes? The simulation constructed here has instances of many of the important structures and functions, so that additional structures and functions can be based on those already represented.

Origin of limitation: It is tempting to label simulation as reductionist, an especially unfortunate moniker when applied to a holistic theory, as Parsons purported his to be. The challenge was to maintain the holistic nature of the theory of action and simulate something that is whole. The next level down of this challenge is that Parsons described the functional components of the theory of action in terms that looked like tokens that travel along wires (media of interchange) among nodes (functional prerequisites). Therefore, the simulation can have the appearance of rats in a maze because at some level that resembles Parsons' description.
Limiting action: Keep the unit of analysis at the system, structure, and function levels. Do not permit manipulation or reporting at the atomic (what Parsons calls the unit) level. This is consistent with Parsons' view that the unit act cannot be viewed by itself but rather in a much, much larger context.

Origin of limitation: Simulation often postulates a sequence of states through which the system passes. Simulation, then, presents the states that were encountered, but not all of the possible states.15 That is, simulation cannot give the richness that a grammar or contingent approach could (Fararo, 1984, p. 146).
Limiting action: Noted as the nature of simulation vs. a production system (i.e., grammar) orientation (see Fararo & Skvoretz (1984) for an example of the production system approach, described above beginning on p. 32).

11 "It is a case of the Emperor's clothes. Is his [Talcott Parsons'] complexly textured raiment really there? Or is it all (or largely) an illusion, a conjuration, a bad and costly joke?" p. 932 "The problem of arriving at a reassured assessment of Parsons' thought is greatly complicated by a remarkable obscurity of structure and style. Even those accustomed to abstract philosophical discussion find it a considerable chore to decide what is being said on any page, let alone also to assess its intellectual worth. I suspect that a great many sociologists, otherwise sympathetic to the need for general theory, have simply abandoned the effort." p. 932
12 "The essays ... establish beyond question ... the difficulty of understanding his [Parsons'] work at any but the most generalized level." p. 590 "... [T]here is concern with the obscurity of Parsons' language, the shifting meaning of some of his terms, ..." p. 591
13 "His detractors have chided Parsons for a linguistic style which reads like pure hardtack." p. 149
14 For example: "This conception of the orbit of the action-process is integral to that of phase movement which will figure prominently in our subsequent discussion. It is applicable both to the unit and to the system as a whole, the latter distinction being a matter of points of reference, not of the concrete structure of processes." (Parsons et al., 1953a, p. 164) In the second sentence to what does "It" apply? Orbit? Phase movement? Subsequent discussion?
It may be worth mentioning that a significant limitation is that elements outside of
Parsons' theory are outside of the simulation of it.

15 This is the same distinction in biology between ontogeny (an individual instance seen in nature) and phylogeny (all possible instances for a species), between genotypes (the expression of genes found in an instance) and phenotypes (everything that is possible genetically).

II. LITERATURE REVIEW


The review of the literature is divided into parts. The first part examines the ques-
tion of what is a theory, what is a model, and how does simulation interact with them.
The theory to be simulated is reviewed next, with an eye on a particular articulation from
Parsons himself. The salient features are identified, as they formed the basis of the selec-
tion of the subset of all constructs that shape the simulation. The review then shifts to the
art and science of computer simulation applied to social science theories.
Theory, model, and simulation
Simulation-building and theory-building are identical in their notional steps (the
steps are from Hanneman (1988); a skeletal sketch follows the list):
1. Define boundaries of the system.
2. Define the elements of the state space and partition the state space into
subsystems.
3. Describe the connectivity of the state space elements, and the forms of
relations among the states of the system.
4. Define the dynamic aspects of the relations among state space elements.
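
The skeletal sketch promised above illustrates the four notional steps using the AGIL subsystems and the clockwise A-G-I-L phase sequence from Figure 5. Every entry below is a placeholder for illustration, not the model developed later in this study.

# A skeletal, illustrative outline of Hanneman's four notional steps applied
# to the AGIL scheme; the contents are placeholders only.
model_outline = {
    # 1. Boundaries of the system
    "boundary": "one concrete organization and its environment",
    # 2. Elements of the state space, partitioned into subsystems
    "subsystems": {"A": {}, "G": {}, "I": {}, "L": {}},
    # 3. Connectivity and forms of relations among the states
    "relations": [("A", "G"), ("G", "I"), ("I", "L"), ("L", "A")],
    # 4. Dynamic aspects of the relations (how states change over time)
    "dynamics": "energy moves among the subsystems along the relations over time",
}
print(list(model_outline["subsystems"]))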
Simulation is not the real thing; it is an imitation. In the instant case it is very far
removed from anything real because the research here is a simulation of a theory and that
theory has never been asserted to be related to reality: Parsons' theory is a frame of refer-
ence. "In a certain sense, all theories about social action and interaction are simulations –
theories are designed to mimic (albeit in highly selected and abstract ways) characteris-
tics of real social action." (Hanneman, 1988) Baudrillard (1995) has coined the term
simulacra for a copy without an original. One committee member remarked that this dis-
sertation was like The Matrix, a popular movie that incorporates much of Baudrillard
(1995), especially the question of simulacra, in it, where it is difficult for the viewer to
tease out what is real and what is simulated.16 In fact, simulation is also the name of a
social theory that addresses the problem that so much of the culture of developed coun-
tries is simulated, not real, put there by advertising and other media (Cubitt, 2001). This
dissertation does not use simulation in the sense of a stand-in or substitute, but rather in
the sense of an animation or reification, a coming to life of something inanimate (in this
case a theory). Again, the simulation is not the real thing, the real thing is Parsons' theory
of action.
The theory of action
Parsons traced the history of the development of the theory of action in Parsons
(1977a). He came to sociology from economics, starting in about 1930. "It gradually
became clear to me that economic theory should be conceived as standing within some
sort of theoretical matrix in which sociological theory was also included." p. 24. Parsons
studied and tried to find the common threads among Alfred Marshall, a dominant English
neoclassical economist (one of his students was John Maynard Keynes); Vilfredo Pareto,
an Italian economist and sociologist; Max Weber, a German scholar who had ideas on the
nature of modern capitalism and how to organize for economic gain; and Emile Durk-
heim, a French scholar who, among many other subjects, wrote about the division of
labor. Parsons' effort culminated in 1937 in the two-volume work (Parsons, 1968a; Parsons, 1968b).

16 One sight gag is when the protagonist, Neo, gives some contraband diskettes to "clients." He hides those diskettes in a hollowed out edition of Baudrillard (1995).

"The most immediate interpretative thesis was that the four – and they
did not stand alone – had converged on what was essentially a single conceptual scheme.
In the intellectual milieu of the time this was by no means simple common sense."
(Parsons, 1977a, pp. 25-26)
The conceptualization that Parsons created flowed from his observation that the
theories of Marshall, Pareto, Weber, and Durkheim had in common an action system,
first suggested by Weber and then elaborated by Parsons; the result of the conceptualiza-
tion was Parsons’ theory of action (Parsons, 1968a). That is, what these seemingly dispa-
rate writers described in common was a system of actions, human actions that had pat-
terns that could be described in accordance with a framework. Parsons has said that
scientists of his era were informed by the progress in the conception of systems using
mechanics and physico-chemistry (Parsons, 1977a, p. 27). In those disciplines one starts
at the atomic level and defines what is meant by a "unit."
Accordingly, Parsons started by defining the "unit act." (Parsons, 1968a, pp. 43
ff):
(1) It implies an agent, an "actor." (2) For purposes of definition the act must have
an "end," a future state of affairs toward which the process of action is oriented.
(3) It must be initiated in a "situation" of which the trends of development differ
in one or more important respects from the state of affairs to which the action is
oriented, the end. This situation is in turn analyzable into two elements: those
over which the actor has no control … and those over which he has such control.
The former may be termed the "conditions" of action, and latter the "means."
Finally, (4) there is inherent in the conception of this unit, in its analytical uses, a
certain mode of relationship between these elements. That is, in the choice of
alternative means to the end, insofar as the situation allows alternatives, there is a
"normative orientation" of action. (p. 44)

Tension management and learning


One of the functions of Latent Pattern Maintenance is tension management. Ten-
sion is the difference between what the inflexible, external environment demands and
what the social system provides inside its boundaries (Parsons et al., 1953a, p. 212). Par-
sons did not make completely clear the mechanism that Latent Pattern Maintenance used
to manage this dynamic tension, but he did say that Latent Pattern Maintenance learned
how to perform the function. In addition, he described classical conditioning as the particular learning style (Parsons et al., 1953a, p. 226).
While there is an extensive literature on how organizations learn, there is very,
very little of a quantitative nature; Dar-El (2000) informed the literature review of
quantitative organizational learning. And what little existed of a quantitative nature was
nearly the only literature on the actual mechanism, on how learning actually took place in
an organization.
One line of study particularly stood out as applicable to this research because of
its quantitative aspirations: Nembhard and Uzumeri (2000), Nembhard and Osothsilp
(2001)17, and Uzumeri and Nembhard (1998). This line relied on Mazur and Hastie (1978), who found that learning was related to the accumulation of knowledge and therefore an adequate model of learning has to account for accumulation. In essence Nembhard and his collaborators studied possible descriptions of the gain in productivity due to learning in a factory and tried to fit it to a mathematical model, evaluating 11 models against actual factory floor data (Nembhard & Uzumeri, 2000). The authors used almost 4,000 data points to test the models, which included all of the popular ones, such as exponential, log-linear, and S-curve. One model fit best under a broad set of criteria: the three-parameter hyperbolic learning curve. It is described – and applied – in the Model section on page 58.

17 There is a small controversy about the results of this research (Jaber & Sikström, 2004; Nembhard & Osothsilp, 2004).
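
For concreteness, a minimal sketch of the three-parameter hyperbolic learning curve follows. The parameterization shown (k for the asymptotic rate, p for prior expertise, and r for the learning rate) is the commonly cited form of this hyperbolic model and is assumed here; the form actually used is given in the Model section, and the numeric values below are illustrative only.

# A minimal sketch of the three-parameter hyperbolic learning curve; the
# parameterization and the sample numbers are assumptions for illustration.
def hyperbolic_learning(x: float, k: float, p: float, r: float) -> float:
    """Estimated productivity after x units of cumulative work or practice."""
    return k * (x + p) / (x + p + r)

for x in (0, 10, 50, 200):
    print(x, round(hyperbolic_learning(x, k=100.0, p=5.0, r=25.0), 1))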
Place of Parsons' theory of action in sociology
The purpose of this section is to place Parsons' theory in the spectrum of socio-
logical theories of the time and to address a few of the many criticisms, particularly those
applicable to the research described here.
• Action systems as unification. Finding a unifying thread in such diverse theories
as those of Marshall, Pareto, Weber, and Durkheim was a breakthrough of major
proportions. Later Parsons added Marx and Freud, no small accomplishment. The
unification put Parsons on the map in sociological theory and he spent the rest of
his professional life refining the theory of action.
• Structural-functionalism. Structural-functionalism is a school of thought within
sociology that concentrates on describing social systems by describing their
(static) structures and (dynamic) functions. Structural-functionalists tend not to
address questions of how the structures or functions arise nor whether some are
better than others.18 In the framework of Burrell and Morgan (1979), structural
functionalists are more interested in the sociology of regulation than of radical
change, more interested in objectivity than subjectivity, and "tend to assume that
the social world is composed of relatively concrete empirical artefacts and rela-
tionships which can be identified, studied and measured through approaches
derived from the natural sciences. The use of mechanical and biological analogies
as a means of modeling and understanding the social world is particularly
favoured." p. 26. Parsons (1977b) wrote:
"I well remember at a meeting of the International Sociological Associa-
tion, held in Washington D.C. , in 1961, [Robert] Merton very cogently
made the point of objecting to the phrase 'structural-functionalism.' He
particularly did not like having it labeled an 'ism' and suggested that the
simple descriptive phrase "functional analysis" was more appropriate. I
heartily concur in this judgment." "The two concepts 'structure' and 'func-
tion' are not parallel. … [T]he concept 'structure' does not stand at the
same level as that of function, but at a lower analytical level. It is a cog-
nate with the concept of 'process,' not function. Sometimes, the levels are
consolidated or fused by reference not to functions but to functioning.
From this point of view, the verb form may be considered to be a synonym
for process. We do not wish to hypostatize structure. It is any set of rela-
tions among parts of a living system which on empirical grounds can be
assumed or shown to be stable over a time period…. Thus, … the concept
'function,' unlike that of structure and of process, is not a rubric in terms of which an immediately empirical description of a set of features … can be stated. It is, rather, a concept that stands at a higher level of theoretical generality and is more analytic than either structure or process. Its reference is to the formulation of sets of conditions governing the states of living systems as 'going concerns' in relation to their environments. These conditions concern the stability and/or instability, the survival and/or probable extinction, and not least, the temporal duration of such systems." [italics in original] (pp. 100-103)

18 These questions are not usually considered part of positivism, of which structural-functionalism is a school. Positivists try not to go beyond what can be verified, lest their work be considered metaphysics and religion (Keat & Urry, 1982, p. 5).

It might be worth mentioning that the whole idea of functional analysis and its
related synonyms had come into question in the heyday of Parsons and his collaborators.
The central issues were questions of what a theory is and whether it has to be empirically verifiable or can be a framework, a naming of the parts. Davis (1959) argues that investi-
gating the functions and functioning of a social system is not a special method, does not
need a special method, and is not a school of thought.
• Homeostasis. The structural-functionalists have been criticized for postulating sta-
ble structures, for not taking account of social revolution, of a set of norms that
aim to upset the status quo. Clearly, Parsons admired and sometimes quoted
biologists describing homeostasis, the dynamic balance of elements inside an
organization/organism and balance of the organization/organism with respect to
changes in its environment (Parsons, 1977a, p. 28). While the structural-
functionalists do not, indeed, emphasize upsetting the legitimization mechanism
of social entities, they do not obviate it either. Moore (1959) wrote, "I have come
to the personal conviction that for most purposes the equilibrium model of social
systems must be abandoned, as leading to too much distortion, particularly in
treating change as external, accidental, and in any event regrettable." (p. 718) A
balanced discussion, relying on cybernetics (à la Ashby (1956)) and the kind of
control Parsons characterized, can be found in Cadwallader (1959). Parsons him-
self (1977b) wrote:
"[Functional analysis] has nothing essentially to do with judgments about
the specific balances between elements of integration in social systems
and elements of conflict and/or disorganization. … Biology does not have
two basic theoretical schemes, a theory of healthy organisms and one of
pathological phenomena in organisms, but health and pathological states
are understandable in basically the same general theoretical terms. … A
related polemical orientation is the claim frequently put forward that
'functionalists' are incapable of accounting for social change: that is, their
type of theory has a built-in 'static' bias. This is also entirely untrue. If we
have any claim to competence as social scientists, we must be fully aware
that there are problems both of stability and of change, as there are prob-
lems of positive integration and malintegration." (pp. 108-109)

• Problem of concreteness. The theory of action is an abstraction, a made-up framework, a world view, a way to interpret and make sense of (empirical) phenomena.
Its descriptions are not, in Parsons' terms "out there" (1968a, p. 46), but rather are
mental constructs. In particular they are analytic (deduced by theory and logic), as
opposed to synthetic (induced by observation), referring to a dichotomy described in a work about which Parsons often expressed admiration, Critique of Pure
Reason (Kant, 1896). It is important, Parsons reminds us, relying on Alfred North
Whitehead (1927), not to (mis)place concreteness on the abstractions (Parsons,
1968a, p. 29); the theory of action cannot itself be observed, but rather what is
observed in the world (what is "out there") can be described in terms of the theory
of action. Therefore, it would not be appropriate to validate the simulation of the
theory by world experience, but rather by careful comparison to the text of the
theory.
The bad news
As mentioned above, particularly in and near Table 1, p. 2, there are criticisms
and critics of Parsons. While the work described here took Parsons' theory as-is, warts
and all, without making a commitment to its veracity or even efficacy, it is useful to
expose at least a window on the counter-arguments to the theory of action. Udy (1960)
characterized the normalcy of criticizing Parsons, "Certain criticisms of Parsons' work
have become virtually traditional…. [T]heory [of action] is by and large equated either
with taxonomy or with functionalist arguments as to the requisite character of categories.
There is an almost complete absence of propositions containing variables." Likewise,
Bressler (1961) observed,
"Parsons has transformed the rhetoric of sociological discourse, and it has long
since become de rigeur for every sociologist to [hold] strong opinions about his contri-
butions to sociological theory. In fact, given the obstacles created by what D. A.
Sprott has been pleased to call Parsons' "spritely" prose, it would not be at all sur-
prising if Parsons had rather more critics than readers." (p. 149)

Perhaps the most succinct critique was Berger and Zelditch (1968), which took
Parsons to task on four grounds in 4+ pages. In the context of a book review, their ques-
tions were: (a) had there been an improvement in confirmation status, have there been
empirical studies confirming the theory of action; (b) was there increased rigor in the
framework or its arguments, had it become more logically structured; (c) had the theory
become more precise, more accurate; and (d) had the scope increased in order to make
the theory more general?19 In every case the authors believed that Parsons failed. They
went on to ask what was the importance of Parsons, why was he (still) read. They con-
cluded that there were several reasons, among them admiration for the ambitiousness of his enterprise and his value as an "inexhaustible source of ideas." p. 450
Locating this work within all of Parsons'
A small subset of everything written about the theory of action by Parsons and
others was sought that could form the basis for animating the theory. Accordingly, in
harmony with the state of simulation as reviewed in the next large section, rich descrip-
tions were sought that illustrated the structure and function of the theory of action. That
is, detailed descriptions of static structures and dynamic (time-varying) functions or proc-
esses were sought. When found, the same processes that Sastry used were applied, namely parsing them into simulation constructs. In a sense this operation is a culling of a specific description of the theory of action in order to find the passages best suited to the narrow purpose of simulation.

19 The comparison to Thorngate's one-armed clock is palpable.
As stated on p. 48, one writing stood out with respect to this search: Parsons,
Bales and Shils (1953a). The volume in which the chapter appears, Parsons, Bales and
Shils (1953b), is an historical account of the development of the theory of action, and
Parsons, Bales and Shils (1953a) is the last chapter, therefore the most recent in the
collection. The chapter traces the path (Parsons et al. (1953a, p. 167) called it an orbit) of
energy moving among the four functional prerequisites in accordance with the pattern
variables. It is a step-by-step description of how energy enters a social system and trav-
erses the four units, possibly transforming the unit or itself or attributes of the social sys-
tem as it is passed from unit to unit.
Table 6, below on p. 72, presents in some detail the description of the transit of
energy through a social system described in Parsons, Bales & Shils (1953a) and the
corresponding elements of the simulation.
Models
Young journalist (YJ): Why do you work with models? Why don't you work with
the real world?
Albert Einstein (AE): Are you married, young man?
YJ: Yes.
AE: Do you have a picture of your wife?
YJ: (Fetches his wallet and digs around, finally producing a photo and handing it
to AE.) Yes, here.
AE: (Looks at the photo for a moment and hands it back.) She is rather small.
- Ronald W. Clark. (1971). Einstein: Life and times. New York: World Publishing.
Model ships appear frequently in bottles; model boys in heaven only. Model ships
are copies of real ones. Asked to describe a ship, we could point to its model. A
model boy, on the other hand, having no earthly counterpart, is everything a boy
ought to be. (Brodbeck, 1959) [italics in original]

Model is sometimes used in the sense of a replica, a non-verbal description of the thing being modeled. A replica gives no new knowledge, and in that sense is not scien-
tifically interesting. On the other hand, models can help us understand more about the
thing modeled.
The term for the similarity between a thing and its model is isomorphism. In order
for there to be an isomorphism two conditions have to hold: a one-to-one correspondence
between the elements of the model and the elements of the thing modeled, and certain
relations must be preserved. If, besides structure and relations, the model works the same
way that the thing does then the isomorphism is called complete.
If, for instance, a model of a steam engine is also steam propelled, then the iso-
morphism is complete. The similarity or isomorphism of a planetarium with heav-
enly bodies is not complete. All the planets and their moons and the sun, together
with their spatial relations to each other, are duplicated. But the motions of these
bodies across the hemispherical ceiling are not cause by gravitational attraction.
(Brodbeck, 1959)

What is the difference between a model and a drawing of a model? In order to pursue this, some distinctions must be made. "The language of science … consists wholly
of declarative sentences." (Brodbeck, 1959) The sentences contain two kinds of words in
them: names for characteristics or attributes or events, and for relations among them.
Characteristics are the name of some state, such as grass that is in the state where its color
attribute has the value of green. Relations require at least two individuals or operands,
such as before, faster, and between. Some terms, such as north, fast, and first, appear to
be about a single individual, but are in fact in relation to some standard, and therefore
about at least two elements. While sentences may contain only attributes and relations,
the subject matter, the content, differs from one scientific discipline to another.
Sentences may have meaning because of the facts, the content, or they may have
meaning no matter what their content. For example, "He is tall and he is blond" is of the
form "X is A and X is B." One can speak of the truth value of either sentence, but in our
example the truth value of the first will be the only one we would be able to ascertain,
based on whether it were true that the subject was both tall and blond. In other words, the
truth value of the form cannot be known unless we know the values of X, A, and B. If we
can know the truth value of a form, then it is called a logical truth because it is true for all
possible values of the variables; it is also called tautological or analytic. An example is X
= X, because this is truth for all possible values of X in the sense that we usually give to
the equal sign. "Sentences whose truth depends upon their descriptive words as well as on
their form are called empirical statements, or also contingent or synthetic." (Brodbeck,
1959) [italics in original]
Perhaps the most common class of logical truths is arithmetic. All statements in
arithmetic are true by definition, by form, not because we examine the subject matter of
the sentences and from them deduce the truth value.
A concept is a term referring to a descriptive property or relation. A fact is a par-
ticular or specific thing, characteristic, event, or kind of event. To state a fact is to state
that a concept has an instance. Facts are significant when they are connected with other
facts to form generalizations or laws. "A law states that whenever there is an instance of
one kind of fact, then there is also an instance of another. Laws, therefore, are empirical
generalizations." (Brodbeck, 1959) A theory is a deductively connected set of laws. Some
of the laws, called axioms or postulates of the theory, imply others, called theorems. Axi-
oms are presupposed; their truth is taken for granted, at least for the purposes of the exercise of seeing what else is true if they are.
"Two theories whose laws have the same form are isomorphic or structurally
similar to each other. If the laws of one theory have the same form as the laws of another,
then one may be said to be a model for the other." (Brodbeck, 1959) [italics in original]
How does one know if one theory is the model of another? One puts the second into one-
to-one correspondence with the first. If the forms are the same and the relations are pre-
served, then they are isomorphic. That is, one translates the form of the second into the
first and then ascertains whether the truth of the relations is preserved. If it is, then the
translation demonstrates the isomorphism between the theories, and the second can be
said to be a model of the first.20
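
The check just described, a one-to-one correspondence plus preservation of relations, can be sketched minimally as follows. The two toy relations (an ordering by height and an ordering by test score) and the function name are invented for illustration.

# A minimal sketch: translate one relation through a one-to-one mapping and
# test whether the other relation is preserved. The toy relations are invented.
def is_isomorphic(relation_a, relation_b, mapping) -> bool:
    """True if the mapping carries every pair of relation_a onto relation_b exactly."""
    translated = {(mapping[x], mapping[y]) for (x, y) in relation_a}
    return translated == set(relation_b)

taller = {("ann", "bob"), ("bob", "cal")}     # ann is taller than bob, bob than cal
smarter = {("x", "y"), ("y", "z")}            # x outscores y, y outscores z
print(is_isomorphic(taller, smarter, {"ann": "x", "bob": "y", "cal": "z"}))  # True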
"It is all too easy to overestimate the significance of structural isomorphism. The
fact that all or some of the laws in one area [of discourse] have the same form as those of another need not signify anything whatsoever about any connection between the two areas." (Brodbeck, 1959) The example she gives is all things that can be ranked and measured: they are structurally isomorphic with arithmetic addition, yet that is quite possibly the only thing they have in common. Another example is "taller" and "smarter" because they are both transitive (p. 394). Accordingly, we should not be misled: structural isomorphism is only a necessary condition for complete isomorphism. The relations have to hold as well for there to be complete isomorphism.

20 In fairness, the relation between the two theories is completely symmetric: one could be the model of the other and vice versa.
The flow in the research described here is from (1) Parsons' theory to (2) a model
constructed by the researcher that is an incomplete isomorphism to (3) a simulation con-
structed by the researcher that is both an extension and contraction of the model. That is,
the model redacts elements, structures, and relations from the theory, and then the simu-
lation further reduces the elements, structures, and relations, and also adds some ele-
ments, structures, and relations that are neither in the model nor in the theory.
Formalization
In lay terms, formalization is an expression of something so that it can be reason-
ed about. The most common formalization is mathematical, but there are other forms,
too. Two others that will be dealt with here are logic, which is more officially called first
order predicate calculus, and production systems. The place of formalism is that simula-
tion can also be a formalism because it represents an opportunity to reason.
Production systems
Approximately how many sentences are there in English (or any natural language;
natural languages are the ones we speak and read)? The short answer is: infinite,
approximately. How do we learn an infinite thing? How do we teach one? We look for
what is finite about it, and in the case of languages, as with many other things, it is the
(list of) rules that is finite. The collective rules of the construction of a language is called
grammar. The rules can be viewed either as specifying what is legal to read or what is
legal to construct, generate. That is, we can hand a sentence to a grammar and ask "Is this
sentence in the language, that is, is it properly formed according to the rules?" Or we can
"run" the rules of a grammar and generate correct sentences in the language. The rules
enable us to say, "That is not a sentence," or more properly, "That is not a sentence that is
allowed by the rules of grammar."
Note that the rules at this point evaluate or generate content-free sentences. The
rules (of grammar) have nothing to say about the content, only about the form. The form
is called syntax. That is, grammar describes syntax, without regard to (truth) value of the
words.
The appearance of sentences in a language is guided by grammar and by what
symbols and symbol combinations are valid. The symbols (e.g., letters of the alphabet)
are the lexical aspect of language. In principle there are two types of symbols: terminal
and non-terminal. Terminals are the ones we read, like the ones being read right now. Non-ter-
minal describe constructs in the language, such as sentence, paragraph, verb-phrase, sub-
ordinate clause, genitive case, pronoun, etc. In English, as in most natural languages, the
non-terminals are also terminals, so it is a bit confusing. But when describing artificial
languages there is attention paid to the difference between sentences in the language
(terminals) and terms used to describe sentences in the language (non-terminals).
Fararo and his collaborators have developed a grammar of social actions (Axten
& Fararo, 1977; Fararo & Skvoretz, 1984; Skvoretz & Fararo, 1980; Skvoretz, Fararo, &
Axten, 1980) and symbolic interaction (Skvoretz & Fararo, 1996), drawing on the work
of Nowakowska (1973) and Hayes (1981). According to Fararo et al. one of their inspira-
tions was Harré (1972), where there are descriptions of rule-condition and role-condition
forms. Each of them can be thought of as "if … then" statements: if the condition is true
then this rule is executed by the role that matches the role-condition. That is precisely the
structure of grammar rules: if the right hand side of the rule matches the state of the
parsing, then the state is changed to the value of the left hand side, and the matching pro-
ceeds until that are no more matches possible. If the final match is what is called the dis-
tinguished symbol21 then the whole sentence or social action is recognized and declared
valid, otherwise the sentence/social action is not one that is described by the grammar
and is therefore noted as impossible or an error.
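
A minimal sketch of this matching process follows, assuming a toy set of rewrite rules invented for the purpose; it is not Fararo's actual grammar, only an illustration of recognizing a "sentence" by repeatedly replacing matched right-hand sides until the distinguished symbol is reached.

# A minimal sketch of recognition by rewriting: keep replacing a matched
# right-hand side with the rule's left-hand side until the distinguished
# symbol is reached or no rule applies. The toy rules are invented.
RULES = [
    ("SOCIAL_ACTION", ["ACTOR", "ACT", "OBJECT"]),
    ("ACTOR", ["ego"]),
    ("ACT", ["greets"]),
    ("OBJECT", ["alter"]),
]

def recognize(tokens, goal="SOCIAL_ACTION"):
    state = list(tokens)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            for i in range(len(state) - len(rhs) + 1):
                if state[i:i + len(rhs)] == rhs:
                    state[i:i + len(rhs)] = [lhs]   # rewrite RHS as LHS
                    changed = True
    return state == [goal]

print(recognize(["ego", "greets", "alter"]))   # True: a valid "social action"
print(recognize(["greets", "ego"]))            # False: not derivable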
Another inspiration was Heise (1979): "[A] simple event is conceived as a
syntactically ordered conjunction of cognitive elements (usually culturally defined) des-
ignating actor, act, and object." Heise's method of "processing" situations that give rise to
actions is to scan actor-object combinations. When a match is found, the associated action is executed. These are precisely the steps that Fararo et al. take in their grammar approach
(loc. cit.).
Fararo and his colleagues have created descriptions of valid constructs such that
the descriptions can be used like any other grammar, either to assess the validity of an
existing "sentence" (that is, social action) or to generate valid social actions. The most
important aspect from this dissertation's point of view is that the application of much of
Fararo's work was Parsons' theory of action. The grammar described social actions that
enacted the theory of action.
Logic
Brownstein (1982) has formalized the theory of action, too. He used first order
predicate logic, the same axioms and method used in high school geometry and trigo-
nometry proofs. In some 300-odd pages Brownstein, in the standard language of logic,
tries to reconstruct the propositions Parsons intended. His conclusions are a bit discour-
aging.
Though his [Parsons'] scheme calls for functional explanations, precise, explicit
specifications of functional relations are not particularly salient. Moreover, sub-
stantive propositions, definitions, regulative principles, preliminary redescrip-
tions, and the like are rarely distinguished, thereby rendering it difficult to assess
… its conceptual health. … For a scheme with as many fundamental conceptual
disorders as Parsons', conceptual analysis becomes of primary concern …. Grave
difficulties attend the conceptualizations of the pattern variables. … Assessment
… leads to the conclusion that a proper analysis of action in Parsons' terms
demands a revision of the basic analytical framework which Parsons has con-
structed. (Brownstein, 1982)

Dubin (1960) used a form of logic, too. He looked at the pattern variables at the
personality level of analysis. He succeeded in enumerating all of the possible combina-
tions of the pattern variables at that level and offered that the choice among them in a
particular instance of action might be based on probability. In other words, Dubin used

21 This non-terminal is usually called "sentence" or in our case "social action."
the logic of arithmetic to reason about the number of states that are available for an actor
to get into.
Time
If one skims the literature on the subject of "social time," one finds complaints
everywhere about the neglect and marginality of the time problem in sociology,
formulated concisely by Kurt Lüscher in the title of an essay: "Time: A much-
neglected dimension in social theory and research." (1974) … However, even
more decisive, in my opinion, for the impression of marginality and neglect is the
minimal theoretical basis of many of the available studies. … Many authors lose
themselves entirely in the momentum of their subject by making philosophical,
anthropological and everyday observations without even beginning to achieve
conceptual precision and a categorization of time within a sociological theory.
(Bergmann, 1992)

Time is the missing variable in modern sociological analysis. … Most sociologists
treat time as a contingent feature of their research, rather than a topic in its own
right. … Indeed, sociology can be almost said to be "time free." As emphasis has
been placed upon developing state perspectives – such as structural-functionalism
or system theory – then temporal analysis has been largely ignored. The socio-
logical research process has been 'synchronic rather than longitudinal'; that is, it
has stressed the enduring features of structure rather than the flux and dynamics
of change. The dominant research paradigm has been one favoring 'slice-through-
time' investigations, and in particular studies whose conclusions are based on one-
shot statistical correlations. In short, time has tended either to be excluded as an
explanatory variable, or else introduced only in post hoc justification. (Hassard,
1990) [italics in original]

Greater emphasis has been given to statics than to dynamics in most social sci-
ence theorizing. And, while comparative static analysis is a necessary and impor-
tant task, too much emphasis can deflect attention from other important theory-
building tasks. To the extent that social scientist's theoretical activities seek to
build explanations of phenomena, rather than descriptions, they must focus on
causal processes that occur over time. (Hanneman, 1988)

The typical quantitative theory in sociology is a function or formula, usually of
the form that, to compute a value for some dependent variable Y, there is an arithmetic
combination of independent variables, Xi. The changes in the Xi over time are not usually
considered, and therefore there is but a single Y in time.
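In symbols (a generic rendering, not a formula taken from any of the authors cited here),
such a theory takes the familiar static linear form

\[
Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k + \varepsilon ,
\]

in which no term carries a time index: each Xi and Y is measured once, so the model
yields a single Y and no time path for the actors.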
Abbott (2001), a reprint and possible update of Abbott (1988), makes the point
more strongly. He claims, using many detailed examinations of published sociological
studies, that the equation described above and ones like it are more than mathematical
tools, that they influence sociologists to neglect time, to ignore the time path of actors as
they interact with themselves and their environments. He argues that the equations that
relate variables linearly have "infected" thinking so that some social science researchers
believe that reality is generally linear22. He bases his argument in relevant part on:
1. Fixed entities with attributes. Linear equations, such as those used in regression
and structural equation modeling, assume fixed entities that have attributes. The entities
are fixed in such equations and the attributes can change value. This clearly assumes that
the existence of the entities is stable over the modeling period. Oddly enough, many of
the subjects studied in sociology are not fixed, such as occupations, roles, social move-
ments, and organizations. Abbott asks us to compare this fixed nature with its most com-
mon alternative, the central-subject/event model.
A historical narrative is organized around a central subject. This central subject
may be a sequence of events, a transformation of an entity or set of entities into a
new one, or indeed a simple entity. The central subject includes or endures a
number of events, which may be large or small, directly relevant or tangential,
specific or vague. (Abbott, 2001) [parenthetical material omitted]

Fixed entities ignore changes that occur due to birth, death, combination, division,
and transformation. These changes will need to be simulated in the present research
because Parsons describes them in his theory.
2. Monotonic causal flow. The right-hand sides of (linear) equations do not differentiate
the contribution that each variable makes to the dependent variable over time or in time;
the contributions are treated as equal throughout all of time. That is, each right-hand-side
variable is equally relevant at all times. There is no contingent time. Perhaps worse, the time horizon
for all variables in a single equation is identical. That is, if we are trying to measure the
effect of several factors on an outcome, all of the factors would have to be measured over
the same time scale and the outcome would have to be expressed in that time scale, too.
One can see how this could be a problem in the theory of action on several counts: (a)
actions happen on a smaller scale inside the organization than are sensed outside it, and
(b) there may be a different scale altogether in each functional prerequisite (that is, there
is nothing a priori to suggest that the time scales inside each functional prerequisite are
commensurate).
3. Univocal meaning. In linear modeling each variable can have only a positive or
negative effect, not one and then the other under different situations. But Abbott (2001)
illustrates many cases where a variable may have at first a positive effect and later
on a negative one.
4. Absence of sequence effects. The order of events does not affect the values of
variables in a linear combination, so that the actual time story or path or trajectory or un-
folding is completely lost using normal statistical tools. In the present research order
matters a great deal, because the timing of an external event has a great impact on the
organization's response, in light of its history to date of responses.
While the picture of taking time into account in social settings is a bit dark, there
are new methods for dealing with time in structural equation modeling, e.g., (Collins &
Sayer, 2001; Hamagami & McArdle, 2001),23 and there has always been auto-regressive
integrated moving average time series analysis (ARIMA, also called Box-Jenkins), but it
has been applied almost exclusively to economic data until recently (McCleary & Hay,
1980). And ARIMA usually addresses only a single entity and a single variable (Abbott,
2001).

22 The term linear can have many meanings. The shortest one for our purpose is that a change in the value of an
independent variable causes a proportional change in the dependent variable.

23 To indicate the extent that time is rarely accounted for in sociological studies, and to make the point a bit closer to
home, Ralph O. Mueller is the chair of the George Washington University Graduate School of Education's
Department of Educational Leadership. He is also the author of an introduction to structural equation modeling
(SEM) (Mueller, 1996) that does not mention the problem of time in the general linear model nor the newer
approaches to modeling time in SEM.
There is some modeling of time in sociology, including longitudinal studies, such
as Durkheim's famous study of suicide (Durkheim, 1951). There is some new interest in
time, for example, a 2002 special issue of Academy of Management Journal (Barkema,
Baum, & Mannix, 2002).
The treatment of time, above, contains a subtlety: it refers to a combination of
clock and social time. Clock time is what one gets from calendars, clocks, and other time
pieces. It is the time in physics, the one with which derivatives are taken; it is even rela-
tivistic time in the Einsteinian sense. Social time is socially constructed and includes such
diverse activities/entities as lunch time, waiting, graduation, career progression, and
stages of life. Which type drives Parsons' theory? It must be social time because there are
none of the attributes of clock time in Parsons' description, such as uniformity of
cadence. How can social time be simulated? Oddly, since it is socially constructed, a uni-
form cadence can simulate social time as long as the social constructions demarcating
events are present. After all, calendar time is an adequate backdrop for social time.
In the present case, here are some examples of Parsons' social constructions of
time in his theory of action: energy arrives at the system boundary at a particular mo-
ment, a functional prerequisite consumes time to perform its function, energy passes (in a
time interval) from one functional prerequisite to another in accord with the cybernetic
hierarchy, and a message is transmitted across a medium of interchange (in a time inter-
val). In fact, Parsons himself recognized the importance of time, "The first important im-
plication is that an act is always a process in time. The time category is basic to the
scheme." (1968a, p. 45)
The simulation of this social time is simply the ticking of a notional clock whose
moments are normatively agreed to mark forward time in an interval small enough to
permit the shortest social event to transpire.
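As an illustration of that last point, the sketch below lays socially constructed events over
a uniformly ticking notional clock; the events, their labels, and their tick times are invented
for exposition and are not taken from Parsons or from the simulation built for this research.

# Minimal sketch: social events laid over a uniformly ticking notional clock.
# The events and their tick times are invented for illustration only.

TICK = 1  # one notional clock unit, small enough for the shortest social event

# Socially constructed demarcations, expressed as (tick at which they occur, label).
social_events = [
    (3,  "energy arrives at the system boundary"),
    (5,  "Adaptation finishes filtering the energy"),
    (9,  "interchange message crosses to Goal Attainment"),
    (14, "Latent Pattern Maintenance responds"),
]

clock = 0
pending = sorted(social_events)          # order the demarcations by clock time
while pending:
    clock += TICK                        # the clock ticks uniformly...
    while pending and pending[0][0] <= clock:
        _, label = pending.pop(0)
        print(f"t={clock:3d}  {label}")  # ...but meaning attaches only at events

The tick itself carries no social meaning; the meaning lives entirely in the event labels,
which is why a uniform cadence suffices as a backdrop.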
Process
In the last decade a number of writers have proposed narrative as the foundation
for sociological methodology. By this they do not mean narrative in its common
senses of words as opposed to number and complexity as opposed to formaliza-
tion. Rather, they mean narrative in the more generic sense of process or story.
They want to make processes the fundamental building block of sociological
analysis. For them social reality happens in sequences of actions located within
constraining or enabling structures. It is a matter of particular social actors, in
particular social places, at particular social times.

In the context of contemporary empirical practice, such a conception is
revolutionary. Our normal methods parse social reality into fixed entities with
variable qualities. They attribute causality to the variables—hypostatized social
characteristics—rather than to agents; variables do things, not social actors. Sto-

ries disappear. The only narratives present in such methods are just-so stories jus-
tifying this or that relation between variables. Contingent narrative is impossible.
… While action and process have largely disappeared from empirical sociology,
they are by contrast central to much of sociological theory, both classic and
recent. (Abbott, 2001), reprinted from (Abbott, 1992)

Too much emphasis in empirical research ... has been placed on the study of indi-
viduals rather than social systems, and on single-time points in these systems
rather than on their continuing process. [Editors' note] Despite the early preoccu-
pation of sociologists with research on social stability and change, much of to-
day's research is neither dynamic nor oriented to social systems. [italics in origi-
nal] (In a volume honoring Talcott Parsons, Riley & Nelson, 1971, p. 407)

Lave and March (1993) advise modelers to "think process." By this the authors
meant that one should seek to describe, explain, and predict the unfolding of the interaction of
social forces and the emergence of the resulting outcomes. Process has been variously
defined as "change that follows a stable pattern long enough for us to recognize continu-
ity, transient as the continuity itself may be," "a series of progressive and interdependent
steps by which an end is attained," "the interweaving of invariance and variance," "a
becoming of continuity," and "a tension between linear succession and sequential recur-
rence," as summarized in Abbott (1989).
Process is related to time in a straightforward way: the steps in a process are
described from a time perspective. "It is clear that process is inherently temporal."
(Rowell, 1989) Time in this sense may be an ordering, such as before, during, or after.
Or, "when this happens, then that happens." Or it may be in terms of delays, such as Act
B happens about six months after Act A. Or it may be any other indication of time or
timing. And it may be necessary to mention that time in the process sense is social
time, not necessarily clock time; that is, how time is sensed, not how it ticks off an
absolute clock.
The intuition is that what happens in a process is that events occur and something
inside those events triggers changes in the system state, which in turn cause other events to
happen. In this way, process, system state, and events are related as follows:

[Figure 9 depicts a notional chain in which State1 leads, via Event1, to State2, and State2 leads, via Event2, to State3; the chain as a whole is the process.]
Figure 9. Relationship among process, event, and state (notional).

Time is what travels on the lines in the direction of the arrowheads, indicating that
State1 happens before State2, etc. State is the value of all of the variables in the system.
In principle, then, a system rests with its variables having some fixed value, then an event
happens that changes the values of some variables, putting the system into a different
state. The event may consume time, the state change may consume time, and the interval
between them may consume time. To the extent that there is a pattern in the transition of
states and events, we call it a process and usually give it a name (e.g., adoption of a new
idea). This is a pure construct; there is no commitment that such a juxtaposition of event and
state exists independently in nature (this is one meaning that Parsons gives to "analytical").
Sometimes the sequence of state/event pairs (also called feeling-activity states
(Bergmann, 1992)) is called history, trace, story, time track, course of events, life cycle,
narrative, enchainment, or trajectory. Some sociologists call it cause (particularly those
committed to statistical methods, especially structural equation modeling), but we do not.
And process is linked to structure: "theories that focus on 'process' or social
dynamics must have (at least implicitly) models of structure embedded in them."
(Hanneman, 1988)
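To pin down the intuition about events, states, and process, here is a minimal sketch in
which an event is simply a function that maps one system state to the next and the process
is the recorded trace of event/state pairs. The state variables and event names are invented
for illustration; they are not constructs from the simulation itself.

# Minimal sketch: an event maps a system state to a new state; the recorded
# sequence of event/state pairs is the process (trace, history, trajectory).
# State variables and events here are invented for illustration only.

State = dict  # the value of all of the variables in the system

def energy_arrives(state: State) -> State:
    new = dict(state)
    new["energy"] = new.get("energy", 0) + 1
    return new

def adaptation_filters(state: State) -> State:
    new = dict(state)
    new["filtered"] = new["energy"] > 0
    return new

def run_process(initial: State, events) -> list:
    """Apply each event in order, recording the trace of (event, state) pairs."""
    trace, state = [], initial
    for event in events:
        state = event(state)             # the event changes the values of some variables
        trace.append((event.__name__, state))
    return trace

for name, state in run_process({"energy": 0}, [energy_arrives, adaptation_filters]):
    print(name, state)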
The research described here is the process kind. It attempts in very crude and
rough terms to explain, among other things, how, for example, Latent Pattern Mainte-
nance impacts the Adaptation function with respect to which energy it (Adaptation)
allows into the system. There are many steps in the flow between the entry of energy into
a system and the response of Latent Pattern Maintenance, and Parsons explains them
notionally. The simulation described here attempted to imitate and animate that flow, to
cause the stand-in for Latent Pattern Maintenance to react to perturbations of energy
entering and flowing through the organization.
The process view imposed a considerable burden on the researcher because much
has to be mechanized, in comparison with, say, another handy tool of sociological
research, structural equation modeling, in which the researcher collects and feeds num-
bers into a "black box" computation engine and interprets the stream of numbers that
come out. With such tools there is far less of a burden to construct the intricate relations among the
moving parts and to specify exactly how each one interacts with and impacts the others. There is no
"answer" in a simulation: the simulation itself is the answer!
Simulations of social systems
Computer simulations of social and organizational systems are not new (Bronson &
Jacobsen, 1986; Bronson, Jacobsen, & Crawford, 1988; Burton & Obel, 1995; Carley &
Prietula, 1994; Coleman, 1965; Conte et al., 1997; Coyle, 2000; Cyert & March, 1963;
Cyert & March, 1992; Epstein & Axtell, 1996; Gilbert & Conte, 1995; Gullahorn &
Gullahorn, 1963; Hamblin, Jacobsen, & Miller, 1973; Hanneman & Patrick, 1997;
Hanneman, 1988; Ilgen & Hulin, 2000; Jacobsen & Bronson, 1995; Jacobsen & Bronson,
1985; Jacobsen & Bronson, 1987; Jacobsen & Bronson, 1997; Jacobsen et al., 1990;
Lane, 2001; Leik & Meeker, 1995; Lin, 2000; Markley, 1967; Moss, 2000; Phelan, 1995;
Prietula et al., 1998; Rasmussen, 1985; Samuelson, 2000; Sastry, 1997; Senge, 1990;
Thomsen et al., 1999; Tuma & Hannan, 1984). Even the use of simulation games to illus-
trate concepts and to let sociology students try their hands at applying what they have
already learned by more passive means, such as reading and discussion, is not new
(Simulation and Gaming and the Teaching of Sociology, 1997; Coleman, 1965; Cross,
1980; Dukes, 1975; Hanneman & Patrick, 1997; Hanneman, 1988; Markley, 1967; Pfahl,
Laitenberger, Dorsch, & Ruhe, 2003). There is an annual conference on computational
and mathematical organization theory, including social systems simulation (Computa-
tional, Social and Organizational Science), several professional societies (the American
Sociological Association section on mathematical sociology,24 European Social Simula-
tion Association, North American Association for Computational Social and Organiza-
tional Science, International Simulation and Gaming Association), several journals
(Journal of Mathematical Sociology, Computational and Mathematical Organization
Theory, and the Journal of Artificial Societies and Social Simulation), a university
research program (Centre for Research in Social Simulation at the University of Surrey in
the UK), and a LISTSERV (SimSoc).
Why use simulation to study a system? Fishman (2001) lists eight reasons:
1. Enables an investigator to organize his/her theoretical beliefs and empirical
observations about a system and to deduce the logical implications of this
organization.
2. Leads to improved system understanding.
3. Brings into perspective the need for detail and relevance.
4. Expedites the speed with which an analysis can be accomplished.
5. Provides a framework for testing the desirability of system modifications.
6. Is easier to manipulate than the system.
7. Permits control over more sources of variation than direct study of a system
allows.
8. Is generally less costly than direct study of the system.
In addition, Fishman (2001) lists some technical attractions of simulation:
1. Compress time so that years of activity can be simulated in minutes. It can
also expand time so that detailed interaction can be seen and analyzed.
2. Identify and control sources of variation in order to postulate the relationships
among the dependent and independent variables.
3. Correct operation can be at least subjectively assessed during the execution of
the simulation by stopping time and examining the state of the system without
impacting the system under observation.
4. The state of the simulation can be stored for later analysis and then replicated
with the same initial conditions, enabling a kind of experimentation that is
impossible in the real world.
Discrete event simulation
Discrete event simulation (DES) was created in the 1960s to address a set of
problems for which there were no closed form equations that could be solved. The prob-
lems were an area of operations research called queuing theory, the study of waiting
lines. In fact, queuing theory addressed a number of related concerns that were growing
in importance, everything from how long to make left-turn traffic lanes to how many of
those expensive shopping carts to have in a grocery store. The primary application of
queuing theory was a particular type of manufacturing capability called job shop. A job
shop is a facility that makes custom parts, not a full-scale production line. Every major
manufacturing plant has a job shop, and because of the custom nature of its work and because
it fills in for unexpected/unplanned incidents on the assembly-line floor, it is a challenge
to plan its work. Many so-called dispatching schemes were created – including shortest
operation time first, longest first, prioritizing those that waited longest – and needed to be
tried. But without a formula to solve, it was going to be tedious. And part of the pressure
on the solution was demand for a surge manufacturing capability in a US defense build-up
for the Cold War.

24 Mathematical sociology is a topic much larger than simulation, but simulation is included in its ambit.
And the advent of the Internet, then called ARPANET, presented questions about
how big the computer storage on the network had to be in order to hold messages in the
event of transient outages, and whether it was better to have a few long messages or lots
of short ones.
The first DES systems were used by manufacturing, transportation, and telecom-
munications engineers. Then Leonard Kleinrock, a UCLA engineering professor who
pioneered much of the design of ARPANET, analytically solved many of the queuing
problems in closed form (Kleinrock, 1975-1976) and some of the pressure to simulate
waned.25 Kleinrock’s formulæ assume that inputs arrive at a random rate according to
some distribution and are serviced/ processed/transformed at another random rate, possi-
bly according to a different distribution; that is, that there is a probabilistic element to the
operation of the systems under study.
DES views the world as compartments among which items and information flow.
The items have to be "born" as they cross the boundary into the system, then are trans-
formed or processed or serviced, and then possibly consumed, and finally they exit the
boundary of the system and in effect "die." This view is sometimes called process, especially
by social science researchers (for example, Lave & March, 1993) who are trying to
differentiate themselves from others who take a more static view.
There are two approaches to DES: event-scheduling and process-interaction
(Fishman, 2001). At the outset it is important to understand that the results are the same
independent of the approach, but during simulation construction there is a trade-off
between simulation simplicity and simulation control depending upon which approach is
used. "Every discrete-event system has a collection of state variables that change values
as time elapses. A change in a state variable is called an event" (Fishman, 2001)[italics in
original]. The simulation is thought of as comprising a set of events, such as, in this
research: energy appears at the boundary of the system; Adaptation filters are altered
based on the tension between internal stability and external energy level; if affective
energy is present then it takes priority over affective-neutral; etc. In this conceptual
scheme, "each event contains all decision flows and state variables. Simulation is the
execution of a sequence of events ordered chronologically on desired execution times. No
time elapses within an event" (Fishman, 2001).
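A minimal sketch of the event-scheduling mechanism Fishman describes might look like the
following; the event names and state variables are invented for illustration, and this is not
the SIMUL8 model built for this research. A future-event list is kept in chronological order,
and executing an event consumes no simulated time; it only changes state and may schedule
further events.

import heapq

# Minimal event-scheduling sketch (Fishman's first approach). Event names and
# state variables are invented for illustration; this is not the research model.

state = {"energy_at_boundary": 0, "adaptation_filter": 1.0}
future_events = []   # entries of (execution time, sequence number, event name)
_seq = 0

def schedule(time, name):
    global _seq
    heapq.heappush(future_events, (time, _seq, name))
    _seq += 1

def execute(name, now):
    """No simulated time elapses inside an event; it only changes state or schedules."""
    if name == "energy_arrives":
        state["energy_at_boundary"] += 1
        schedule(now + 4, "adaptation_adjusts")   # a consequence, later in time
    elif name == "adaptation_adjusts":
        state["adaptation_filter"] *= 0.9

schedule(0, "energy_arrives")
schedule(7, "energy_arrives")
while future_events:
    now, _, name = heapq.heappop(future_events)   # chronological order
    execute(name, now)
    print(f"t={now}: after {name}: {state}")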
In the process-interaction approach the focus centers on the processing or trans-
forming entities, those parts of the simulation that take inputs and transform them. The
approach provides a sequence of activities (events) in a time order, other terms for which
are flow and process. In other words, the process-interaction approach concentrates on the
time history of the transactions and their transformers; it is not a "disconnected" list of
events that change the state of the simulation.
The research reported here uses the process-interaction approach as a way to trace
the time history of energy as it transits the organization and the time history of the
organization as it responds to the energy. This approach was selected because it more
closely follows Parsons' style of description, in which flows are described in a series of
time-related steps.

25 This researcher may have written the first discrete event simulation program in a simulation language in Southern
California, in 1965-1966, and was a graduate student at UCLA, in Kleinrock's department, at the time of Kleinrock's
work.
To reiterate the baking example in the Methods section, p. 47, what would happen
if dough could be formed into loaves more quickly than the loaves could be cooked in a
batch? If the average rate at which loaves were created exceeded the average rate at which
they could be cooked, then an infinite queue would grow in front of the ovens. If the average
rate at which loaves are created is about the same as or lower than the rate at which loaves
are cooked, then on average any queue that forms would be finite, and a queuing theory
formula can tell us how long it might be at its maximum. Formulæ could also be employed
to evaluate whether there should be multiple slow ovens to compensate for fast loaf
making, etc.
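For the simplest textbook case, Poisson arrivals at rate \(\lambda\) served one at a time at
exponential rate \(\mu\) (the standard M/M/1 queue, offered here purely as an illustration of
the reasoning and not as anything Parsons specifies), the familiar results are

\[
\rho = \frac{\lambda}{\mu}, \qquad L_q = \frac{\rho^2}{1-\rho}, \qquad W_q = \frac{\rho}{\mu - \lambda},
\]

for the utilization, the mean queue length, and the mean wait, respectively; these remain
finite only while \(\lambda < \mu\), and as \(\lambda\) approaches \(\mu\) the queue grows
without bound.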
In terms of simulating the theory of action, the transactions to be "born" are units
of energy, and this researcher thinks of them as news, such as a new idea. The DES com-
partments would be Parsons’ functional prerequisites, the processing or transforming
would be what happens inside each function (e.g., scanning the environment in Adapta-
tion; setting goals and allocating resources in Goal Attainment; recruiting and training
new staff, and integrating new processes in Integration; and establishing the filters by
which sense is made in the other functions in Latent Pattern Maintenance), and the dying
would be what happens to the imported energy after the last function in the flow, Latent
Pattern Maintenance, has responded. And the flow would be of (a) energy, and (b) mes-
sages along the paths of the interchange media among the functions. If the rate of arrival
of energy and other interchange media exceeded the rate by which it could be processed
then queues would grow between the producer and consumer. Such attention to rates and
queues, while an integral part of DES, is absent from Parsons’ conceptualization and
therefore writing. This may be an important indication that using DES is inappropriate for
simulating the theory of action, as prominently mentioned in the Limitations section, p.
20.
Without loss of generality, the arrival rate of news and service rates of the func-
tions are set to be fixed amounts, so the simulation here is deterministic. The reason is
that Parsons gives no insight into the rates, so the assumption of randomness, while more
realistic in terms of the real world, would only reflect invention by the researcher in Par-
sonian terms.
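A rough sketch of this mapping, in plain Python rather than the SIMUL8 model actually
built for this research, and with invented, deterministic arrival and service times, might
walk units of energy ("news") through the four functional prerequisites like this:

# Rough sketch only: deterministic walk of "news" (units of energy) through the
# four functional prerequisites. Service times are invented; the dissertation's
# actual simulation was built in SIMUL8, not in Python.

ARRIVAL_INTERVAL = 5      # a new unit of energy appears every 5 notional ticks
SERVICE_TIME = {          # time each function consumes (invented values)
    "Adaptation": 2,
    "Goal Attainment": 3,
    "Integration": 4,
    "Latent Pattern Maintenance": 2,
}
FLOW = list(SERVICE_TIME)  # the performance-case order, LPM responding last

def simulate(units=3):
    free_at = {f: 0 for f in FLOW}       # when each function is next free
    for n in range(units):
        t = n * ARRIVAL_INTERVAL         # energy is "born" at the boundary
        for f in FLOW:
            wait = max(0, free_at[f] - t)            # queueing if the function is busy
            start = t + wait
            t = start + SERVICE_TIME[f]              # processing consumes time
            free_at[f] = t
            print(f"energy {n}: {f:28s} wait={wait} done at t={t}")
        print(f"energy {n} exits the system ('dies') at t={t}\n")

simulate()

The point of the sketch is only the shape of the computation: energy is born at a fixed
interval, each function consumes a fixed amount of time, and waiting occurs whenever a
unit arrives before the next function is free.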
The researcher could only find two applications of discrete event simulation
applied to social systems. Fararo and Hummon (1994) used DES to analyze several
aspects of social networks; the senior author is well-known for his contributions and
extensions to the theory of action (e.g., (Fararo & Skvoretz, 1984)), so it is worth noting
that he (with collaborators) did not employ DES to simulate it. Jin and Levitt (1996)
described knowledge-work projects in terms of how they are organized, the tasks to be
performed, and the links between the two. Then tokens, as stand-ins for real work items,
are moved through the task network (PERT chart) in a simulation of the work to be
accomplished; as delays, errors, and noise are inserted into the project, the final
performance of the project is predicted, taking into account important social (particularly
team and organizational) aspects of knowledge work. The actual mechanism of the
simulation was discrete-event.26 This project simulation system was described in lay
terms in Samuelson (2000).
Accordingly, while simulation itself was no stranger to social systems, discrete
event simulation was very rarely used.

26 Disclosure: the system described is an educational version. There were also several commercial versions and the
researcher's employer was a distributor and partner of the Stanford University spin-off created to enhance and market
the commercial version of the simulator. The researcher was the in-house expert of his employer.
III. METHODS
Research overview
Like Sastry’s, this study was a test of our understanding of the interface between
(narrative) description and the technical goal of predicting the future of a social system
by constructing and "bringing to life" a laboratory replica of such a social system (see
Figure 10). In another sense, it was an application of the translation of description into
enactable constructs that can mirror the structure and function of a social system. The
study, therefore, had two conceptual forks: (a) understand the description of the theory,
and then (b) reify that understanding so that a laboratory replica can be created and oper-
ated. Expanded into more detail, it looked like this:
1. Understand the theory of action
a. Read what Parsons, his disciples, and his critics wrote.
b. Select descriptions.
c. Translate into constructs (say, structure and function).
d. Validate the simulation when it is completed.
2. Construct a computer simulation program
a. Develop constructs from the authoritative text, as needed.
b. "Program" the constructs into a computer simulation program.
c. Assure the correct technical operation.

System Parsons’
simulation theory of
action

Subject of this
dissertation
Figure 10. Intersection of the theory of action and system simulation.

Research methods
Place of simulation and theory
Parsons (1977b) wrote:
Methodologically, one must distinguish a theoretical system, which is a complex
of assumptions, concepts, and propositions having both logical integration and
empirical reference, from an empirical system, which is a set of phenomena in the
observable world that can be described and analyzed by means of a theoretical
system. An empirical system … is never a totally concrete entity but, rather, a
selective organization of those properties of the concrete entity defined as relevant
to the theoretical system in question. (p. 177)

Parsons above restates the definition of "model," just like the subject of this
research. In other words, a model is the theory with some details left out, abstracted
away. So, is a model the same as a theory? Is a model the same as a simulation of a
theory? These are questions still being debated by the social simulation community.
Perhaps the clearest explanation is from an electronic mail message that was in
response to this question, posed on the social systems simulation SIMSOC LISTSERV
([email protected]):
Is a simulation a (logical) derivation of a theory? Either (a) the simulation is a logical
consequence of the theory and is [therefore] a theorem of this theory and then "If you can
derive a contradiction from a given set of axioms, then that [sic] axioms are invalid"; or
(b) it is an entity that is something like a theory with its "own" axioms and theorems:
theory: axioms(T) -> theorems(T)
sim(1): axioms(sim(1)) -> theorems(sim(1))
…
sim(n): axioms(sim(n)) -> theorems(sim(n))
So, now what is the relationship between the axioms of T and sim(1)...sim(n)? In the
philosophy of science there are several attempts to clarify the relationship between
theories and models (here: simulation). One attempt is from Morrison/Morgan [(Morrison
& Morgan, 1999)]. Another attempt is from Sneed, Stegmüller and other authors
[(Balzer, Sneed, & Moulines, 2000; Stegmüller, 1979)]: the structuralist conception of
theories (structuralism) (cf. Klaus G. Troitzsch on simulation and structuralism
[(Troitzsch, 1998)])27. According to structuralism a theory is a theory-net consisting of
several theory-elements connected to a basic theory-element. There are several links
between the theory-elements: specialization, extension etc. From the perspective of
structuralism, you can specify the relationship (links) between "theory" and simulation(s).
The entity called "discursive sociological theory"28 is the basic theory-element that speci-
fies the basic concepts, the basic axioms. A simulation is a theory-element, which speci-
fies additional, new concepts, new functions and introduces new axioms. The introduc-
tion of new ("gap-filling") axioms is in principle not a problem. It becomes only a prob-
lem if you add new axioms that are contradictory to the axioms of the basic theory-ele-
ment. To the point "explanation and simulation": According to structuralism a simulation
is an extension/specialization etc. of a "discursive sociological theory" and can explain
certain aspects of reality. In the case of specialization it refers only to a subset of the
applications of the "discursive sociological theory"!

27 As evidence that this is not a settled matter, Troitzsch has called for a workshop on Epistemological Perspectives on
Simulation, in Koblenz, Germany, 1-2 July 2004, http://www.uni-koblenz.de/EPOS/. He states in his call for
abstracts "Simulation has been a research instrument for long in various disciplines. In recent years, it is gaining
further attention. This may be contributed to the lack of theories that would allow for explaining and predicting the
behaviour of complex systems. In addition to that, new modelling paradigms, associated with object-oriented
concepts, intelligent agents, or models of (business) processes inspire the use of simulation. … Furthermore, it seems
that simulation is regarded by some as an alternative to research methods that do not provide convincing support for
certain research topics. At the same time, the epistemological status of simulation remains unclear. This is, for
instance, the case for its relationship to core epistemological concepts, like truth and reason. Against this
background, it seems worthwhile to reflect upon the preconditions of using simulation successfully as a research
tool."
28 Parsons' theory of action is this type, a descriptive sociological theory, as opposed to, say, a mathematical or formal
expression of a theory.
In a word, then, a simulation of a theory is not the theory itself, but rather an
extension and specialization, perhaps with some elements added ("gap-filling" axioms)
because in order to conduct the simulation they had to be. That is, one of the challenges
in simulating a theory is placing into the simulation that which is missing in the theory,
but is necessary for the simulation to proceed. An example in the instant case is queues or
buffers. There are no queues or buffers or waiting lines in Parsons' theory of action, but
what if there is a succession of interchange messages arriving too quickly for a function to
process or absorb? What happens to the interchange messages? Are they queued, lost, or
resent? Whatever the answer, it is a "gap-filling" axiom that is added to the simulation
but absent from the theory – in order to get the simulation to run.
This research was about a method: applying the method of simulation to a theory
of sociology. This section describes simulation and how it was applied in this instance.
There would be a scientific elegance if the steps were performed in the order given in the
outline. In fact, the steps were applied in a messy fashion because choices made in any step
may not work downstream. And any researcher seeking to simulate a theory is, as she
reads, mentally applying the techniques known to her, even if subconsciously.
The techniques form a part of the grounding (a bias) that any researcher brings to a
simulation problem. The particular assortment of techniques that a researcher knows
colors her perception of the problem. Accordingly, in this research a prototype approach
was taken:
1. Read a representative sample of the theory of action.
2. Try a few descriptions as the basis of a pilot exploration.
3. Try a simulation technique (say, system dynamics or discrete event).
4. Encode the description using the simulation technique’s representation system
(that is, programming language).
5. Operate the simulation to see if the results correspond to what the theory
describes.
If the pilot obtains results of sufficient fidelity (an unevaluated term), then the
researcher will expand on the sample, will expand on the descriptions to be encoded, will
stay with the simulation technique but may increase the fidelity of the representation
(which in principle can be done nearly infinitely), and will operate a number of cases to
increase confidence that the simulated situation corresponds to the description of theory.
In addition, the operation of the computer program so constructed was validated.
One of the primary contributions of this research was the application of a par-
ticular simulation technique to a sociological theory: discrete event. It was central to the
contribution even though its choice did not occur first in the sequence of research events.
Choose appropriate simulation technology
There are styles of (social system) simulation. Two at the top of the description
tree (Figure 11) are the main ones: continuous and discrete event. The continuous style
mirrors systems that are continuous in time, such as most physical phenomena (e.g., dis-
tance, velocity, acceleration). The discrete mirrors step-by-step events that occur at a
particular time or in a particular order and for a particular duration, such as taking a test,
filling a car with gasoline, cooking a meal, etc. One of the differences is the mathematics
involved and the underlying mechanical way that time is advanced by the computer
simulator.
[Figure 11 shows a classification tree: by treatment of time, social system simulation divides into discrete event, continuous, and other methods of accounting for time; beneath these appear traditional single-agent and agent-based simulation (discrete event) and system dynamics (continuous), among others.]
Figure 11. Classification of social systems simulators, indicating the position of this research in bold.

An example of continuous simulation is system dynamics, of which Sastry's work
is illustrative (Sastry, 1997). System dynamics formulates its problems as ones of flows
and accumulations. The standard example is a bathtub: water in the tub accumulates as
water flows in from a tap and water in the tub falls as water flows out through a drain.
What does this have to do with social systems? Nothing directly, but it can be used to
explain any accumulation, such as competence, performance, and the ability to change, to
mention a few in Sastry’s case, per Figure 1.
For the simulation described here, the discrete event technique was selected
because its style most naturally reflects a description of the sort "This happened, then that
happened, after which this happened." Discrete event simulation reflects the importance
of time-ordering. This is how, by and large, Parsons described the dynamics of his theory,
therefore there is a match, fit, and correspondence between the description and the com-
puter simulation technique. Such correspondence is the operational definition of "high
fidelity." For example, in describing some of the attributes of the pattern variable affec-
tivity/affective-neutrality, Parsons stated that affective actions take less time than affec-
tive-neutral ones, that it takes longer to react rationally (affective-neutrality), to study a
situation, than it does to respond emotionally (affectively). (Parsons, 1982) Discrete event
simulation makes the enactment of significant time-ordering a centerpiece, hypothesized
here as a good fit with Parsons’ description of the time-dependent aspects of his theory.
The use of agents is gaining currency in social systems simulation (Epstein &
Axtell, 1996; Gilbert & Conte, 1995). Basically, agents are autonomous "computers" that
imitate individuals that each execute their own "program" of social actions. They interact
with other agents (individuals), using the outputs of those agents to inform their "pro-
grams" and possibly changing their internals. They form "communities," "(artificial)
societies," and take collective action. There are agents (actors) in Parsons’ theory and he
described them and their actions at a lower unit of analysis than that treated here. Simu-
lating agents in Parsons’ theory of action might be a future application of the simulation
described here. If one viewed agents as co-operating and communicating sequential proc-
esses (Hoare, 1985) (in the context of agent-based simulation), then this study gives
insight into the program that might be inside each agent, that is, the instant research is a
necessary precursor to an agent-based simulation of Parsons’ theory of action.
The use of discrete event simulation was a centerpiece of this research because
prior work has almost never used it. Almost all previous social systems simulations have
used continuous or other methods of representing time (such as discrete Markov chains
(Coleman, 1964), or a cadence at which all agents communicate and change internal state
(Lomi & Larsen, 1995)).
The construction of a simulation, once the subject matter is understood, consists of
translating that subject matter into a language understood by a type of computer program, a
simulation engine. The simulation engine acts like the subject matter by, in the case of
the discrete event technique, moving "work" through a series of transformations (work
stations) that operate on the work in some simulation-useful fashion. The user will spec-
ify at what rate or interval "work" arrives, where it will go as it traverses the network of
transformations, and how it will move outside the boundary of the simulation (that is,
dies).
One can imagine an industrial baking oven in which dough loaves arrive at some
rate, move through an oven at some rate, and then move out of the system at some point.
Each of the "stations" has attributes, such as oven temperature. And each of the units of
"work" has attributes, such as composition (rye, pumpernickel, etc.). And any of the
attributes can be a random variable or a stochastic variable (that is, depend upon a ran-
dom value).
There are two parts to a simulation system: the engine, which interprets constructs
in the simulation language, and the simulation language itself. The engine moves work
along in accordance with the specification stated in the language. The language in the
case of the discrete event technique is often represented as boxes and lines between them.
Work travels along the lines and is transformed inside the boxes. Sometimes the work
arrives more quickly than the boxes can service it, so the work has to wait. Work waits in
a queue. One can again think of the baking example: what happens if dough loaves arrive
more quickly than the ovens can cook what has already arrived? The new dough loaves
wait. What if the ovens are always slower than the process that creates the dough loaves?
An infinite queue builds if the arrival rate exceeds the service rate.
The internals of the operation of a simulator are beyond the scope of this disserta-
tion. Suffice it to say that simulation is a mature discipline and that there are many
choices available to the researcher so that she does not have to build a simulation engine
or develop a simulation language (Banks & Carson, 1984; Mize & Cox, 1968; Zeigler,
Praehofer, & Kim, 2000). The criteria to be applied in the search for such a combination
of discrete event simulation engine and language for this research was, in priority order:
1. Building-block approach, with the existence of many pre-built components. This
refers to the simulation language. Many computer programming languages are
functional, where each line instructs the computer to perform a specific function,
such as arithmetic or printing. Building blocks, in contrast, specify the compo-
nents (work stations, units that transform the work) and connections in a net-
work. The advantage of a building-block approach is that less has to be specified
and what is specified is particular to discrete event simulation. The pre-built com-
ponents simplify the job of specification because at some level of abstraction all
discrete event simulations are similar. All create work at some rate, distribute it
among work stations, transform the work item, collect like items, and then have
them exit the simulation at some rate.
2. The ability to create blocks that are not already in existence (by writing a com-
puter program) if the appropriate pre-built component is not available. In the
event the selected language could not specify something important in this
research, then it would be valuable if the block could be created, even if that
meant writing a (presumably) small computer program. This attribute is called
extensibility in the computing literature.
3. Ability to create a graphical user interface for the user, in which input values can
be requested and outputs can be viewed graphically. The primary users were envi-
sioned to be people interested in social systems, not necessarily computer-centric
professionals. Therefore, graphic communication was important.
4. Visual programming approach for the developer of the simulation. This technical
attribute makes using the simulation language easier because the specification of
the components and their interconnections would be graphic, too, like a diagram
that becomes animated.
5. Low price.
There is no loss of generality in the selection of discrete event simulation for Par-
sons’ theory of action and the computer program could be converted to continuous simu-
lation by another researcher (Zeigler et al., 2000). That is, (mathematically) every dis-
crete event simulation can be represented in continuous time, but the character of the dis-
crete event outlook is destroyed in the process. In other words, the choice of the discrete
event technique will not preclude the choice of the continuous time technique, but the
transformation of discrete event to continuous time leaves the original, underlying dis-
crete event simulation unrecognizable. In yet other words, one might ask, "Does the
choice of the discrete event technique mean that there are cases that cannot be repre-
sented, that choosing, say, continuous time representation would be strictly more power-
ful?" The answer is that the continuous time representation is strictly more powerful (that
is, there is at least one model that can be represented in continuous time but not as discrete
events), and that any discrete event representation can be translated into the more inclu-
sive continuous time representation by a series of (algebra-like) steps. But the translated
model will not look like the original discrete event one because the translation process
does such violence to it.
Translate from description to simulation
Any simulation from descriptive text is built in a step by step process:
1. Identify description that will serve as the authority.
2. Transform the description into constructs:
a. Reflect the structure in the simulation.
b. Reflect the functions in the simulation.
3. Build the simulation based on the collection of constructs.
4. Operate the simulation.
5. Validate its fidelity.
6. Run more cases, rework, improve, expand.
To some degree the focus on time-ordering filtered how Parsons' texts were read
and interpreted, how they were parsed into constructs about structure and function. In par-
ticular, when reading to find constructs one inevitably was led to questions of timing or
precedence. To the extent these questions are not out of place, not forced, the choice of
discrete event simulation was (informally) validated.
Construct selection criteria
In order to identify description that would serve as an authority, many writings of
Parsons were examined. One stood out as especially suitable for research purposes: Par-
sons, Bales and Shils (1953a). This 107-page working paper was the anchor29 of the work
described here, just as Tushman and Romanelli (1985) was for Sastry. While the work
here was not limited to the anchor, it relied heavily on it.

29 Researchers in artificial intelligence call such a source an "oracle" in order to convey the status as an authority.
The following criteria were applied to each candidate construct:


1. Essential – Will it be possible to build a high fidelity simulation without
this construct? If not, then it was included.
2. Parsimony – Will this construct already be present or in a more general
form? If so, then it was excluded.
3. Dependency – Will the construct stand alone or depend on (many) others?
If it will stand alone then it was favored for inclusion.
4. Spectrum of interpretation – Will this construct be relatively free from
interpretative bias (e.g., discrete time) or multiple interpretations? If so,
then it was included.
5. Richness – Will the candidate add a lot of meaning or will it be a detail? If
it will add a lot of meaning then it was favored.
6. Recency – Favor Parsons' most recent rendition of his theory.
Validate the simulation
Since the simulation reflects a theory, the validation step attempts to answer the
question, "Has the simulation been faithful to Parsons theory of action, according to Par-
sons et al. (1953a)?" There were two steps in the validation, both performed by a third-
party expert on Parsons' theory of action:
1. Did the structures and functions in the simulation accurately reflect the
theory of action?
2. Did the technical operation of the simulation produce results that were
predicted by the theory of action?
That is, the validation step was a subjective assessment by experts about the accu-
racy of the translation of theory to simulation artifacts. The attestations of the two
experts, one for each question, are found in an appendix.
Lave and March (1993, ch. 3, Evaluation of speculations) posit another approach:
truth, beauty, and justice.
The construction and contemplation of models are æsthetic experiences. Like
other æsthetic experiences they become richer and more enjoyable with an appre-
ciation of their nuances. The dicta of methodology are nothing more mysterious
than rules of thumb for improving the artistry of speculation (Lave & March, 1993,
p. 52).30

30 The comment about the appreciation of nuance can be related to March's long association with models, beginning
with Cyert & March (1963), one of the first simulations of an organization.

By truth the authors mean correctness. The model should accurately (synonym for
correctly) reflect the assumptions and derivations of the underlying theory. The emphasis
is on testing the derivations, not the assumptions because assumptions are often axioms
and therefore true by definition. Truth is sought by several means:
• Testing for circularity. Are the definitions tautological?
• Promulgation and evaluation of alternative derivations. Seeking alternatives can
expose errors in the original derivations and at least help sharpen models.
• Differentiation among competing derivations. Can an experiment verify or favor
one derivation over another?

• Playfulness. By being intellectually playful a researcher can become less invested
in and less fervent about his/her model and therefore more open to questioning its
correctness.
By beauty the authors mean three criteria:
• Simplicity. A smaller number of assumptions is more attractive than a larger num-
ber, all other things being equal. This is Occam's razor and parsimony restated.
• Fertility. The simulation produces a large number of interesting derivations per
assumption. That is, the simulation is rich in descriptiveness. This is also a way to
say that more general simulations are preferred because they apply to a greater
number of situations.
• Surprise. Some of the implications are surprising and not immediately obvious
from the assumptions. The critical impetus for system dynamics models, for
example, according to its creator, Jay Forrester, was to be an instrument of policy
study because the outcome of feedback loops is counterintuitive (that is, surpris-
ing) (Sterman, 2000). Achterkamp and Imhof (1999) also list surprisability as one
of three important features to credibly establish computer simulation in sociology
(generality and power to separate theoretical from technical results are the other
two).
Justice is the third dimension offered. It is a reminder that we should pursue the
explication of social myths and that our own philosophical commitments are not neutral,
that we have to expose, examine, and question our own worldviews.
How might we evaluate or judge the truth, beauty, and justice of the model/modeling
described in this research? One way would be to ask the dissertation committee
members to make subjective evaluations on each of the dimensions and compare them
with other models they admire. This way the research described here would be placed in
a "quality" context related to other, like models.
It may be worth noting that in the case of pattern variables there were some field
experiments that potentially could have been used to validate the model (Cherns, 1980;
Park, 1967; Podell, 1966; Podell, 1967; Williams, 1959), but, alas, none of them were
able to make inferences at the level of analysis used here, namely organizational.
Assure correct technical operation
There are several parts of the technical operation. Conceptually, there is the
simulation that is constructed by the researcher, the computer program that interprets and
executes it, and the information consumed during the operation of the computer program.
There has been general frustration with the quality of computer software since its humble
beginnings in the late 1940s, and this dissertation's application of computing will not
solve those concerns.31

31 It may be worth noting from an authority perspective that the author of this dissertation is the Associate Editor in
Chief of Quality of the Institute of Electrical and Electronics Engineers (IEEE) Software Magazine. IEEE is the
largest professional organization in the world and the Computer Society is the largest subdivision of IEEE. Software
is a publication of the IEEE Computer Society.
There are two general approaches to increasing the quality of computer software:
assuring/proving its correctness and increasing the confidence in the final version by
testing. Other hard science disciplines often assure the correctness of their work by
proving certain properties of the results. For example, scientists and engineers can use
physics to "prove" that a bridge will withstand certain forces. But computer programs are
not physical objects and the laws of physics do not apply. On the other hand, computer
programs can be mathematical objects and a branch of mathematics could be used to
prove properties. The intuition appeals to the logic we employed in high school geometry
when we proved theorems about triangles, such as congruence. We could say that the
lines in a computer program are like a geometry proof, where our task is to show that
each line is an entry in a proof that, given the inputs and the transformations being
applied, the output is what we want. This way computer programmers would be able to
prove that their programs worked before they were ever executed, before they were ever
tried on any computer.
As appealing as this approach was conceptually, it has not been widely applied in
practice. For example, even though they are prime candidates for this mathematical proof
of correctness, the most popular computer programs for statistical analysis (e.g., SAS,
SPSS, BMDP, and Systat) did not use the approach. Rather they, and nearly all other
computer programs, are tested as a method of improving confidence in the results pro-
duced by the computer program. Testing can only show the existence of errors, never
their absence.
Testing is essentially a (serious) mathematics problem. Even a simple computer
program has more states in it than there are assumed to be molecules in the universe.
Therefore, exhaustive testing – testing of every state that can be obtained in a computer
program – is not feasible as a matter of practice. Even with the fastest computers it would
take hundreds of years to try all of the states in a simple program.
Accordingly, one application of mathematics to the problem is to find/compute equiva-
lence classes, select one example of each large subset of the possible states, and have that
one example stand in for all of them. For instance, if we were testing the printing of United
States ZIP codes, those five-digit numbers that indicate the general geographic location
of mail destinations, then we might select a sample of them instead of all 00000-99999 =
one hundred thousand possibilities. In fact, it might be normal to select only three values
to test the printing: 00000, 99999, and a random choice in between.
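A small, invented illustration of that three-value strategy (the function under test is
hypothetical and stands in for "printing a ZIP code"; it is not one of the dissertation's
SIMUL8 or Excel artifacts):

# Small illustration of equivalence-class and boundary-value testing.
# format_zip() is an invented function standing in for "printing a ZIP code".

def format_zip(code: int) -> str:
    if not 0 <= code <= 99999:
        raise ValueError("not a five-digit ZIP code")
    return f"{code:05d}"          # pad with leading zeros to five digits

# One representative per equivalence class, plus the boundaries where errors hide.
cases = {
    0:      "00000",   # lower boundary
    99999:  "99999",   # upper boundary
    60614:  "60614",   # a representative interior value
}
for value, expected in cases.items():
    assert format_zip(value) == expected

# Values just outside the boundaries should be rejected.
for bad in (-1, 100000):
    try:
        format_zip(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad} should have been rejected")

print("all ZIP-code printing tests passed")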
This leads to another testing approach that is ad hoc but often used: test the places
in a computer program where errors are known to "hide": boundary conditions and inter-
faces. Boundary conditions are the extreme values that a computer program takes in or
puts out, such as a large negative number, zero, and a large positive number. Interfaces
are places where one computer program uses the services of another, such as, in the cur-
rent research, the simulation engine invokes Microsoft Excel for input from a table.
Upon closer examination of the research described here errors could occur in the
following places in computer programs:
1. Errors in the specification of the simulation, that is, in what can be specified in
the SIMUL8 language. The researcher is the author of the specification. This
type of error can be discovered by reading and by being traced back from
anomalous results.
2. Errors in the execution of the SIMUL8 language program. The provider of
SIMUL8, SIMUL8 Corporation, is the author of the engine. Therefore, errors
of this type are the most difficult to discern, hopefully are the most rare, and
can be discovered by tracing back from anomalous results. Also, SIMUL8
Corporation regularly updates the engine based on user input from world-wide
usage.
3. Errors in the values provided in the Excel sheet that is read in during program
execution. The researcher is the author of those values. This type of error
can be discovered by reading and by being traced back from anomalous
results.
In sum, the quality of the results of the simulation cannot be proved and will
always be suspect. Confidence in the results can be improved by increased testing, which
is a problem of multiplicity of states, of trying to test as many states as practical given the
constraints on the research resources.
Delimitations
The delimitations of a study are those characteristics that limit the scope (i.e.,
define the boundaries) of the inquiry as determined by the conscious decisions to exclude
and include that were made. Among these were the choice of objectives and questions,
variables of interest, alternative theoretical perspectives that could have been adopted,
etc. The first limiting step was the choice of problem itself.
This study was highly biased by the search for strong time orderings; that is the
basis of discrete event simulation. In order to construct the simulation the researcher
scoured Parsons, Bales and Shils (1953a) for even the remotest indications of time order-
ing and surely this biased the fidelity of the simulation. Furthermore, it could never be
argued that the time ordering in Parsons' theory was an essential feature, surely not on the
level of the pattern variables, functional prerequisites, interchange media, and cybernetic
hierarchy. Accordingly, the claim must be made that the research here was one transla-
tion of the theory, not the (definitive) translation. The simulation was not comprehensive;
at best it is intended to be a scaffold that other researchers will use to build higher fidel-
ity, more comprehensive simulations of the theory of action.
In addition, the theory of action contains many permutations and layers. The study
here limited its scope to:
1. Performance; neglects learning – The Parsons theory can be applied to two views
of organizations: their performance in pursuit of their "exterior" goals, and their
"interior" learning as they perform. In the performance case, energy flows from
Adaptation to Goal Attainment; in the learning case energy flows from Integration
to Goal Attainment (Parsons, 1960, p. 217). This research addressed only the per-
formance view/flow. As an aside, the organization simulated does learn how to
perform (well, how to reduce tension, the difference between the energy outside
the organization and the energy circulating within), using classical conditioning,
which is not what Parsons implied in his learning vs. performance dichotomy.
2. A single unit of analysis: the organization – Parsons illustrated that the unit of
analysis of his theory can be any size, from, for example, individual to a nation or
national culture. This research selected a single unit in order to demonstrate feasi-
bility. Also, because of a single unit of analysis the research did not address inter-
penetration, the impact of levels of analysis on each other, as, for example, norms
for the personality level can impact the performance of an individual at the col-
lective level.
3. Single level of the four functional prerequisites (not the infinite regress) – In addi-
tion to multiple units and levels of analysis, Parsons illustrated that each of the
four functional prerequisites can, in turn, be subdivided into four units, and each
of those four units into four more, ad infinitum. This research addressed a single
level in order to demonstrate feasibility of simulating the characteristics of each of the four different prerequisites.
4. One of the pattern variables, not all of them – Parsons constructed four pairs of
variables to explain the underlying behavior and motivation of the four functional
prerequisites. This research addressed one, affect and affect-neutrality, because it
is described in Parsons, Bales and Shils (1953a) as having a time-varying impact.
That impact was that events that engage affect (emotional) are dealt with more
quickly than those that are affect-neutral (cognitive, rational). Also, on account of
not simulating all of the pattern variables the interaction among them, as seen in
the proliferation of 16 combinations32 in Figure 2, was not investigated. There-
fore, interaction effects were neglected, perhaps to the significant detriment of the
simulation's fidelity. Not taking into account the rest of the pattern variables may,
in fact, invalidate the model because the feedback and interaction among the pat-
tern variables could well change the outcomes entirely, as in any dynamic system
where a model simplifies feedback (Hanneman, 1988). On the other hand, since
the purpose of this research was to show feasibility, simulating only one pair of
pattern variables may serve that goal.
5. Only orientation, not modality – Each pattern variable pair defines one property
of a particular class of components. Orientation, one set of components, concerns
an actor's relationship to the objects in his/her situation and is conceptualized by
the two "attitudinal" pairs of variables of diffuseness-specificity and affectivity-
neutrality. Modality, the other set of components, concerns the meaning of the
object for the actor and is conceptualized by the two "object-categorization" pairs
of variables of quality-performance and universalism-particularism. This research
only addressed orientation, and, as mentioned above, only one pair of them,
namely affect and affect-neutrality.
6. Sufficient affect-neutrality, not sufficient affect – Fararo observes (2001) that in
order to sustain a system there needs to be sufficient attention to the tasks con-
fronting it. In order to address those tasks a system must emphasize affect-neu-
trality, though, of course, not to the total exclusion of affect. "A necessary condi-
tion for social order is that affective neutrality is not too small." (p. 157) This
research did not address sufficiency, but did address a related topic, the pursuit of
reducing tension.
7. No direct simulation of the cybernetic hierarchy – The hierarchy was implicitly
simulated on account of the direction of flow of energy, only Adaptation to Goal
Attainment to Integration to Pattern Maintenance. Therefore control is manifest
only from Pattern Maintenance to Adaptation. Here, too, the observation in 4,
above, about the omission of feedback paths invalidating the fidelity applies.
There would likely be an entirely different set of outcomes if all of the paths were
modeled. As stated in 4, the purpose of this research was to demonstrate feasibil-
ity, so modeling some of the paths may achieve that goal.
8. Only a few of the interchange media – There are twelve interchange media. This
research only examined four: Adaptation to Goal Attainment to Integration to Latent Pattern Maintenance, and then Latent Pattern Maintenance back to Adaptation.
32 Four pairs of pattern variables generate 2^4 = 16 combinations.
9. Very few of the possible process features in Parsons, Bales and Shils (1953a)
were simulated, not all of them.
In every case, the choice of boundary was caused by the nature of this research: a
toy simulation to investigate the feasibility of simulating Parsons' theory. That is, by its
nature this research was bounded.
IV. THE MODEL AND SIMULATION
Why simulate?
Regardless of the form of the model or technique used, the result of the elicitation
and mapping process is never more than a set of causal attributions, initial
hypotheses about the structure of the system, which must then be tested. Simula-
tion is the only practical way to test these models. The complexity of our …
models vastly exceeds our capacity to understand their implications. (Sterman,
2000)

Chatting about Sociological Laboratory [a book and accompanying computer programs by Bainbridge] over lunch with me one day, George Homans said that
he had once hoped to write a third book in the spirit of The Human Group and
Social Behavior. To be called A Toy Society, it would start from his set of simple
axioms and build a miniature society operating according to principles logically
derived from the axioms and through its realism, demonstrating that Homans's
approach could indeed explain the chief features of human society. Unfortunately,
Homans said, he could not find the means to produce a functioning toy society.
Computer simulation, he agreed, could be that means. Unavailable to Homans,
modern computer simulation techniques make possible a variety of experiments
with toy societies, leading ultimately to a grand test of the logical coherence and
sufficiency of any theory of human behavior constructed along the lines proposed
by Homans. (Bainbridge, 1992)

There are only a few alternatives to a computer simulation of a social system: normal science, a descriptive model, real world experimentation, or a formal model.
Normal science is possible when parameter variables are able to be controlled, and then
the whole mechanism of hypothesis testing, experimental design (Campbell & Stanley,
1963) and its corresponding statistics are available.
Parsons provided the second: a descriptive model, using words and drawings to
communicate his meaning. How do we come to understand the interactions of the com-
ponents he proposed? How can we test our understanding of how the parts fit together
and what the outcomes are?
Real world experimentation is a problem in all social sciences because of the lack
of control: parameter values cannot easily be fixed and even if they could the very fixing
often interferes with the process in situ that was sought to be observed. In addition, there
is considerable attenuation (that is, delay between cause and effect) in the real world,
particularly with the Latent Pattern Maintenance function. Worse, the very nature of it, its
latency, means by definition that it cannot be directly observed.
Formal models are those about which we can reason, usually by manipulating the
statements. The most common formal model is mathematical; there are also a number of
formal models stated in first order predicate logic or as the type of axiomatic logic one
finds in high school geometry or trigonometry theorem proofs. There is a formal model
(in the predicate calculus) of Parsons' theory (Brownstein, 1982) and it does illustrate
inconsistencies and gaps. However, due to those inconsistencies and gaps, according to
its author it cannot be used for inference. There is no complete mathematical model of the
theory of action.
One objection to formalization in general is that important aspects of a problem might be omitted in order to obtain or preserve tractability. Simulation was specifically
developed to overcome the limitation of tractability, but, in fairness, imposes others, such
as a significant barrier to entry in terms of computer and operations research knowledge,
just as a mathematics-based approach does, too (Hanneman, 1988).
Mathematical formulations of complex problems often exceed the capacities of
the creators and consumers to understand and explicate them. The introduction of
conditional relationships, nonlinear relationships, and complex patterns of coup-
ling among even small numbers of variables can rapidly exceed our capacity to
solve such systems, or to comprehend the meaning of the solution if one is found.
(Hanneman, 1988)

Simulation languages are intermediate between mathematics and description. They are invented languages that are more restrictive than natural languages (such as
English) and less restrictive than traditional mathematics. They force upon the program-
mer a discipline to describe in some detail the structure and function of the system under
study.
Another objection to simulation is that the social world is unsystematic and am-
biguous in causality (Tsuchiya, 1966). Two researchers, in particular, have had success
using simulation to aid understanding of "soft," that is, qualitative problems, including
those of ambiguity, which holds out hope that simulation can be used in poorly-quantified
domains (Robinson, 2001; Tsuchiya, 1966). In addition, the whole application of system
dynamics to social systems is a response to this objection, as documented in the annual
Proceedings of the International Conference of the System Dynamics Society and in the
quarterly journal, System Dynamics Review. And (Hanneman, 1988) is a tour de force in
the application of system dynamics to social systems. System dynamics, in that role,
attempts to aid understanding by showing the consequences of assumptions about struc-
ture and function.
In some sense we are "stuck" with simulation as a laboratory workbench for
exploring our understanding of social systems and where that understanding might take
us. Simulation, while formulated in the orthodox theory it is trying to animate, nonethe-
less can lead to radical changes in such fundamentals as the presuppositions, model
boundary, time horizon, and dynamic hypotheses (Sterman, 2000).
Model construction
Models are two things: a choice of goals and a choice of constructs from many.
The goal of this model was to illustrate feasibility: was it feasible to simulate Parsons’
theory of action? The choice of constructs was more multidimensional: unit of analysis,
level of abstraction, granularity, time horizon, fidelity. For example, clearly the level of
fidelity will drive the choice of granularity (greater fidelity requires greater granularity),
and the choice of granularity will drive the choice of the unit of analysis (the greater the
granularity the lower the unit of analysis). Since the goal was feasibility, the highest unit
of analysis was selected, along with low granularity and low fidelity.
In the sense used here, a model is a theory with bits left out. The construction of a
theory – like that of a model – is a messy mental process. Most reports of theories do not,
thank goodness, give the details of the creative and cognitive processes that gave rise to
them. In its briefest form, the researcher reads something that inspires an outline of a
theory and gives rise to a place to "hang" or represent future insights. In other words,
there is a preliminary (usually mental, hypothetical) theory that is elaborated as more and
more concepts are added. That first version of the theory is sufficient if there is a place to
put each elaboration without "too much" work. This is the cognitive mechanism behind
grounded theory, a creative process of deriving an explanation from raw data. It is also
the cognitive mechanism behind the process point of view: a creative process of deriving
the mechanism that yields an outcome.
Numerous authors (e.g., Bainbridge (1992), relying on Rodney Stark's "sociologi-
cal process" (2003)) have described their steps for building a theory or model. Here is a
sample (Lave & March, 1993):
1. Observe some facts.
2. Look at the facts as though they were the end result of some unknown process
(model). Then speculate about the processes that might have produced such a
result.
3. Then deduce other results (implications/consequences/predictions) from the
model.
4. Then ask yourself whether these other implications are true and produce new
models if necessary.
A recent table, below, shows the variety of steps possible, summarizing those
offered by various system dynamics authorities:
Table 3.
The system dynamics modeling process across the classic literature. (Luna-Reyes & Andersen, 2003)

While all of the proposals, above, appear logical and linear, in fact the process is a
non-linear, iterative one of creative speculation and hypothesis testing. No more will be
said of this inchoate process; the point is to appreciate the difference between what is
prescribed as a set of steps to establish a theory or model and what really transpires inside
the mind and workbench of the theorist or model builder.
The plan of this chapter is, first, a description of the elementary model, followed
by a short tutorial on the implementation of learning and tension. After that is a descrip-
tion of what the user saw as she operated the simulation, along with the rules and
assumptions that were implemented. The chapter concludes with a parsing of selections
from Parsons, Bales and Shils (1953a) and their correspondence in the possibly more
elaborated model to illustrate the fruits of the process that built a bridge between Parsons'
text and the simulation model.
Basic concept
The basic concept of the simulation is that of a baking oven fed by a conveyor
belt on which is raw dough. The dough represents the energy outside the system under
study (the oven). The oven represents the heat that will be applied in successive internal
chambers as the raw dough is transformed into its cooked form. The goal of the oven
(system) is to produce bread that is cooked "just right." This particular oven senses how
large the dough mass is and adjusts its internal heat based on it. And, in fact, it adjusts
based on the pattern of dough masses as it sees them one at a time as they enter the oven
door.
In Parsonian terms, the dough represents energy from outside an organization, say
news or a new idea. The news will pass through the four functional prerequisites as it is
transformed and it transforms the organization. Success is measured by how well the
latent pattern maintenance matches the pattern of energy entering the system. The differ-
ence between energy presenting itself to the system and the energy inside the system (at
the latent pattern maintenance stage) is called tension. Our goal is to minimize tension, so
the goal of the interaction of the internal functions and structures is to match or fit the
latent pattern maintenance energy to that entering the organization.
To increase the fidelity a bit, there are not only different bread masses, but differ-
ent kinds of bread (rye, poppy, sourdough, etc.). For each the oven has to react differently
because in order to cook properly it is not only a matter of temperature but also of time.
Some dough has to be cooked more quickly, some more slowly, even at the same tem-
perature.
In Parsonian terms, in addition to raw energy in the environment, there are values
of pattern variables that are intrinsic to different types of energy. The effect of processing
of energy that has one type of pattern variable, affectivity/neutrality, is modification of
the time that the system takes to respond to the energy. An affective value moves the
energy more quickly; an affective neutral (that is, rational) value moves the energy more
slowly. So, external energy is typed – by the value of the affectivity/neutrality pattern
variable.
Model of tension and learning
To increase the fidelity a bit more, imagine that the oven knows that it works best
with raw dough that exceeds a certain mass, that is, the dough has certain characteristics
or a "signature." So it filters out – rejects – dough that is not heavy enough. It takes in
only a certain size and above. And -- here we are stretching -- the filter at the opening of
the oven is operated from inside the oven: the intuition is that the oven comes to learn the
minimum value that it will accept and adjusts the filter as it learns. The filter setting
could be different for every lump of raw dough as the oven learns.
In Parsonian terms, the Adaptation function filters the energy that it accepts at the
boundary of the system. That filter is set by Latent Pattern Maintenance. If the pattern
maintenance function sets the filter too high, then some useful energy in the environment
will not be imported, potentially creating tension. If the filter is set too low, then the sys-
tem responds to everything and patterns are difficult to develop, expending system
energy with no added patterned capability.
Another way to think of the simulation, that is, another analogy, is target tracking.
The organization being simulated is trying to track the pattern of energy in the world out-
side of it, just as radar tracks a target. If the target turns out to be a "bad guy," then the
tracking attention should increase. If the target is noise, not an important thing at all, then
it should quickly identify that and not expend extra energy. The difference between what
is expended and what should be expended is tension, a quantity to be minimized.
Before providing more detail it is important to take note of advice that Parsons
sprinkled liberally throughout his writing (for example, (1968a, p. 47)): be careful about
the unit and level of analysis. If the system under study is an organization, then it is
treated as an indivisible "black box." We should look only at its input and output, not
how it processed the input to achieve the observed output. But our situation was a bit dif-
ferent – not that we are inattentive to Parsons’ generous advice – because the simulation
created here generated the output by processing the input. Therefore, the researcher had
to know something of the inner workings; otherwise he could not have transformed the input
into the output. Or, put another way, the computer simulation IS the black box.
Accordingly, the flow of energy from outside the system was first filtered in the
Adaptation function, as stated above. If the energy passes through the filter, whose value
was set by the Latent Pattern Maintenance function, then it will pass to the Goal Attain-
ment function after a delay depending upon whether the energy to be responded to is
affect or is affect-neutral. If it is to be responded to by affect, then it will traverse this step and the rest of its journey quickly, according to a user-set value. If it is affect-
neutral, then it will travel more slowly, consuming time to "think." The energy will then
pass from Goal Attainment to the Integration function according to delay and selection
rules, and then it will pass from the Integration function to the Latent Pattern Mainte-
nance function according to delay rules. Different delay values can be set by the user for
each of the functional prerequisites x pattern variable value (affect or affect-neutral).
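A minimal sketch of that delay selection, assuming the eight user-set dwell times are held in a simple table keyed by functional prerequisite and pattern-variable value; the numbers are the spreadsheet defaults shown later in Figure 14, and the short function names (A, G, I, L) are a convenience, not Parsons' notation:

    # Dwell times in clock ticks (business days), keyed by (function, pattern-variable value).
    DWELL = {
        ("A", "affect"): 5,   ("A", "neutral"): 10,
        ("G", "affect"): 20,  ("G", "neutral"): 120,
        ("I", "affect"): 60,  ("I", "neutral"): 120,
        ("L", "affect"): 0,   ("L", "neutral"): 0,
    }

    def dwell(function: str, is_affect: bool) -> int:
        """Return how long an energy bundle dwells in a functional prerequisite."""
        return DWELL[(function, "affect" if is_affect else "neutral")]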
Systems are goal-oriented, so there must exist some mechanism that matches what
outside energy is allowed in compared with the goals of the system. The goal of the toy
system described here is to reduce tension – that is, the difference between the energy
outside the system and the energy circulating inside the system. The mechanism, then, is
to adjust the filter of the in-coming energy so that the energy circulating inside the system
matches the outside energy. In principle there are many ways to accomplish the match.
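One minimal sketch of such a mechanism, under the assumption (mine, for illustration only) that tension is simply the magnitude of the difference between the external energy and the energy circulating at the Latent Pattern Maintenance stage, and that the filter is nudged by a fixed step:

    def tension(external_energy: float, internal_energy: float) -> float:
        """Tension: the gap the system tries to minimize."""
        return abs(external_energy - internal_energy)

    def adjust_filter(filter_threshold: float, external_energy: float,
                      internal_energy: float, step: float = 0.1) -> float:
        """Nudge the Adaptation filter so circulating energy tracks external energy."""
        if internal_energy < external_energy:
            return filter_threshold - step   # lower the threshold to let more energy in
        if internal_energy > external_energy:
            return filter_threshold + step   # raise the threshold to let less energy in
        return filter_threshold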
In Latent Pattern Maintenance a complex interaction will occur that will set the
filter on the Adaptation function. In essence, the Latent Pattern Maintenance function has
as its goal to seek to minimize tension; that is, the system's goal is administered by the
Latent Pattern Maintenance function. Symbolically it will do this by learning the pattern
of energy arriving from the outside and matching the filter to it in order to let in enough
energy to sustain the enterprise. Parsons, Bales and Shils (1953a, Fig. 7, p. 223) call the
style of learning classical conditioning. Accordingly, this is the style of learning that was
simulated. But it is learning by an organization, not by an individual, despite a question
about whether organizations can be classically conditioned.
Classical conditioning
As mentioned in the Literature Review, one of the challenges in simulation is
transforming qualitative concepts into quantities so that a digital computer can manipu-
late them. Unfortunately, there were almost no reports of quantitative measures of
organizational learning, despite the abundance of references to organizational learning
and learning organizations. One of the only quantitative studies of organizational learning
was the formulation of Nembhard and Uzumeri (2000), which was used in this study:
y = k • (x + p) / (x + p + r), a three-parameter hyperbolic function, where
p = cumulative prior learning, in clock ticks.33 Must be a positive integer. Increments with every clock tick. Default is 500.
r = cumulative time to reach k, in clock ticks. Must be a positive integer. Default is 250.
x = units of time since the last change in k. Must be a positive integer or zero. Increments with every clock tick. Default is zero.
k = asymptotic value for Latent Pattern Maintenance. Initial value is 2.
y = successive values of k as x approaches infinity. This is the value that is presented for Latent Pattern Maintenance in the computation of tension, where it is called the "energy level of L."

Figure 12 illustrates the intuition. k is the value being sought by the system, the
value that L is trying to obtain by adjusting the energy filter on A. Given this value,
Latent Pattern Maintenance must compute a possibly new value for the energy it lets into
the system via the Adaptation function. Basically, the formula smoothes prior values in
order to reach its goal of k in a stable and planned way. There are two cases: approaching
the target k from above and approaching it from below, both of which are illustrated.
Imagine that the L function has determined – in a process opaque to us at the moment –
that the target value of some important variable is 2. If the outside environment is pre-
senting, say, 4, then L clamps down on the filter that lets values in so that the 4s are not
permitted to enter. This is Case 2 in the figure. Given the same target of 2, imagine that
the outside energy is less than 2, then L opens up the filter and lets it all inside. This is
Case 1.
In each case, the L function responds to the difference between its target value
and the value circulating within the system, that is, the value let in. It then exercises its
considerable control (it is the highest in the cybernetic hierarchy on the control dimen-
sion) to bring the circulating energy closer to the target, either from above or below.

33 Clock tick represents an interval of time. In the simulation the clock "ticks" once every business day, as a default.
Figure 12. Structure of the three-parameter hyperbolic learning curve model. (Nembhard & Uzumeri, 2000)
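A minimal coding of the Nembhard and Uzumeri formula, using the default parameter values listed above (the function name lpm_energy is mine):

    def lpm_energy(k: float, x: int, p: int = 500, r: int = 250) -> float:
        """Three-parameter hyperbolic learning curve: y = k(x + p)/(x + p + r)."""
        return k * (x + p) / (x + p + r)

    # y approaches the asymptote k as x (time since the last change in k) grows.
    k = 2.0
    for x in (0, 250, 2500):
        print(x, round(lpm_energy(k, x), 3))   # 1.333, 1.5, 1.846 for the defaults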

Setting the target
Now we address how the target of the learning model was set. That is, we needed
to determine what the successive values of k, above, should be. There was no reference in
any of Parsons' works to guide us, and little elsewhere. One guidepost is that negative
exponential distributions have appeared to model forgetting since 1885 (Ebbinghaus,
1987); see Wixted & Ebbesen (1997) for an argument that power functions are a better fit
and Nembhard & Osothsilp (2001) for a review of more accurate forgetting models. The
negative exponential distribution, N_new = N_base • e^(gt), depends upon two parameters, N_base and g, where t is time, e is the base of the natural logarithm, and N_i is the magnitude of, in our case, the energy. N_base is the value from which the declining curve begins, the "anchor." g is a negative number that determines the rate of decline and its asymptote. The figure below helps to visualize the general shape of this "forgetting" curve, where N_base is 4, g is –0.013, and the period is 52 (as in weeks in a year).
Figure 13. Illustration of a negative exponential distribution as a "forgetting" function.

In the model, the setting of the target proceeds as follows:
Step 1. Given a range of time over which the system is looking (a window) and a sensitivity factor (a threshold to respond to change), determine whether the new value of the external energy is higher than any other in the window, that is, whether it is a new maximum.
Step 2. If so, then adjust to the new maximum by changing the base of the decay and reset the time to 1. The formula in this step is: if the external energy exceeds the maximum in the window and that difference exceeds the threshold, then we have a new maximum, so set N_base to the new maximum and t to 1.
Reason for Steps 1 and 2: The system does not want to respond to small (presumably random) variations around a maximum, so an absolute value is set by the user above which change can happen. And a window is specified (in an analog to simulation time units) in order to tune the responsiveness to change (a shorter window implies more change; a longer window implies less change).
Step 3. Compute the target value as N_new = N_base • e^(gt).
Reason for Step 3: The new value will depend upon either (a) the new maximum and t = 1, or (b) continuing to follow the negative exponential decay downward with the old base and the next natural value of t.
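A minimal sketch of the target-setting procedure above, using the spreadsheet defaults (decay coefficient g = -0.003, threshold to respond to change = 1, response window = 52); the state-variable names and the example energies are mine:

    import math

    def update_target(window: list, new_energy: float, state: dict,
                      g: float = -0.003, change_threshold: float = 1.0,
                      window_size: int = 52) -> float:
        """Steps 1-3: detect a new maximum in the look-back window, else decay toward it."""
        window.append(new_energy)
        if len(window) > window_size:
            window.pop(0)
        prior_max = max(window[:-1]) if len(window) > 1 else new_energy
        if new_energy > prior_max and (new_energy - prior_max) > change_threshold:
            state["N_base"] = new_energy           # step 2: the new maximum becomes the base
            state["t"] = 1
        else:
            state["t"] += 1                        # continue the decay with the old base
        return state["N_base"] * math.exp(g * state["t"])   # step 3: N_new = N_base * e^(g*t)

    # Example: start from a base of 2 and feed in a spike of 4.
    state = {"N_base": 2.0, "t": 0}
    window = []
    for e in [2, 2, 4, 2, 2]:
        target = update_target(window, e, state)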
Operation of the simulation
The user of the simulation began by reviewing a dedicated Excel spreadsheet for two types of values: the parameter values and the pattern of energy that the Adaptation function would see. Both can be seen in Figure 14, a likeness of the user's Excel spreadsheet. The parameter values were entered into column 10. The "periods" referred to are clock ticks, ostensibly one business day by default.
Columns 1 and 2 contained in successive rows the magnitude of external energy
and whether the energy will be processed as affect or not. If Affect=1, then the energy
will be processed as affect, otherwise (nominally zero) it will be processed by an affect-
neutral (rational) mechanism, implying that the processing of that energy will take longer
than energy processed by affect. The first row was consumed at time=1, the second at
time=2, etc. When a blank row was encountered, then the values began at the top again.
In this way, the two columns can be thought of as continuously repeating until a target
number of clock ticks has transpired.
Without loss of generality, the range of the magnitude of energy can be thought of as
having a maximum. It is helpful to have a maximum value for the magnitude so that the
value of k (the target value of internal energy) can be easily compared to the external
energy as a way to visualize how much learning must take place during the simulation.
Also, once all of the values are set in the spreadsheet they cannot be changed.34
Since the simulation is deterministic (not probabilistic), the outcome of a particular run is
completely determined by the values on the spreadsheet and the length of time (that is,
number of clock ticks) the simulation is run.

34 It would be a trivial upgrade to have the simulation itself change the values, or, at certain intervals or on the occurrence of certain events, ask the user whether changes were wanted.
Ver. 0.8 For The Parsons Game by Stan Rifkin
Columns 1 and 2 hold the repeating energy pattern, one Affect-Energy pair per row; in the example shown, every row is Affect (1=true) = 0 and Energy (Level) = 2. Column 10 holds the parameter values:
Dwell time values
  Adaptation: Affect-Neutral (periods) (should be greater than Affect) = 10; Affect (periods) = 5
  Goal Attainment: Affect-Neutral (periods) = 120; Affect (periods) = 20
  Integration: Affect-Neutral (periods) = 120; Affect (periods) = 60
  Latent Pattern Maintenance: Affect-Neutral (periods) = 0; Affect (periods) = 0
Percentage of ideas not funded
  Percentage (expressed as decimal < 1.0) = 0.80
Propensity to change
  Energy filter threshold (initial value) = 2
  Sense of test: Pass Energy if Energy [operator] Filter Threshold; operator = >=
Learning/Forgetting
  Prior learning (p) = 500
  Time to reach current pattern (r) = 250
  Starting value of L energy (k) = 2
Negative exponential decay parameters
  Threshold to respond to change (in energy units) = 1
  Decay coefficient (< 0 and in time units) = -0.003
  Response window (in number of Affect-Energy pairs) = 52
Figure 14. User view of dedicated Excel spreadsheet.
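A minimal sketch of how the pattern and parameter values in a sheet laid out like Figure 14 might be read programmatically, assuming the openpyxl library; the file name and the exact row offsets are assumptions, not part of the actual simulation:

    from openpyxl import load_workbook

    wb = load_workbook("parsons_game.xlsx")      # hypothetical file name
    ws = wb.active

    # Read the repeating Affect/Energy pattern from columns 1 and 2 until a blank row.
    pattern = []
    row = 2                                      # assumes row 1 holds the column headings
    while ws.cell(row=row, column=2).value is not None:
        affect = ws.cell(row=row, column=1).value == 1
        energy = ws.cell(row=row, column=2).value
        pattern.append((affect, energy))
        row += 1

    # Parameter values sit in column 10, one per labeled row.
    filter_threshold = ws.cell(row=14, column=10).value   # the row number is an assumption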

Once the spreadsheet is completed, the simulation program is invoked and a screen like Figure 15 appears. The user typically performs just two operations on this screen: reset the clock (and all other variables) to zero and then start the simulation. The default duration is 2400 business days, or approximately ten years. There is a row of buttons along the top of the display. The leftmost one is reset; the next one is "step," which advances the clock one tick each time it is touched; and the next one is "run," which starts the clock so that the simulation runs automatically until the final value of the clock is reached. If the run button is pushed during the actual simulation then the program pauses; touching it again starts the simulation where it left off. The other icons are not used by the user, only by the researcher to develop the simulation in the first place.
Based on the results of the inputs the user can view the convergence of internal
and external energy on a graph after the simulation has ended; it is in the file containing
the Excel spreadsheet. That is, the user can judge how well Latent Pattern Maintenance
performs its function of restoring spikes or challenges to its target value of "culture,"
which in the simulated case is instantiated by external energy.
Figure 15. User view of the simulation.
Figure 15 is the tableau on which the user witnesses the simulation. Energy enters
from the left on the device that looks like a conveyor belt, something like the bakery
example. The value of energy and its affect/affect-neutral pattern variable comes from
successively reading the Excel spreadsheet. This external energy is presented to the test
in the spreadsheet: Pass Energy if Energy [operator] Filter Threshold, where
operator and Filter Threshold are read from the spreadsheet. If the energy does not pass,
then it goes to the element Energy that does not enter. If the energy enters then the Adapta-
tion function looks at its affect/affect-neutral pattern variable value. If it is affect then the
energy takes the top path, the one marked Affect path, and is processed for a period read
from the spreadsheet. During that time, Latent Pattern Maintenance, in an unseen (hence latent) process, resets the Adaptation filter to a possibly new value in order to reduce tension. After that the energy goes to Goal Attainment, where it may have to wait in a queue if the GA function is busy. If the energy in Adaptation is affect-neutral, then it takes the lower path, possibly to a queue, where it waits for the affect-neutral processing for the period of time specified on the spreadsheet. At the end of that processing the Latent Pattern Maintenance function possibly resets the Adaptation filter in order to reduce tension; that is, on each of the possible exits from Adaptation (affect and affect-neutral), Latent Pattern Maintenance potentially resets the Adaptation filter for the next time it encounters outside energy.

One icon depicts a "work station," where the input is transformed into an output for some duration and decisions are made about where to go next. In our case each occurrence of a work station can make a different transformation. Two other icons are queues where energy waits for a function to become available to process it; the number above the picture is the number of energy "bundles" waiting at the moment. A final icon is where energy exits the simulation; the number above the picture is the number of "dead transactions."
Energy can enter Goal Attainment from two sources, both of which are paths out
of Adaptation. The top is the Affect path and it is fed to the Goal Attainment function im-
mediately, unless G is already working on energy affectively. If G is already occupied
with an affective action, then the in-coming affective energy is queued. If energy enters
on the lower path then it is affect-neutral and it enters a queue for rational processing
once every resource allocation interval. When the interval occurs, then all of the queued
resource requests are read by Prepare budget proposal and a percentage of them are passed
on to the Integration function and the rest exit the organization and end up in Ideas not
resourced. The percentage of ideas that are not resourced is set on the spreadsheet and
remains constant for the duration of the simulation.
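A minimal sketch of that periodic budget step (the 0.80 not-funded percentage is the Figure 14 default; which queued requests are selected is not specified by the model, so the sketch simply takes the first ones):

    def budget_review(queued_requests: list, pct_not_funded: float = 0.80) -> tuple:
        """Empty the affect-neutral queue at the review interval: fund some, reject the rest."""
        n_funded = round(len(queued_requests) * (1.0 - pct_not_funded))
        funded = queued_requests[:n_funded]          # passed on to Integration
        not_funded = queued_requests[n_funded:]      # exit to "Ideas not resourced"
        queued_requests.clear()
        return funded, not_funded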
Energy enters Integration from two sources, too, both from Goal Attainment. If
the energy is to be responded to affectively then it goes directly to the Integration func-
tion. If the Integration function is already working on affect-neutral energy, then that process
is suspended and held in Interrupted Integration until the affective processing is completed
and then it is restored for the remainder of its time. If the Integration function is working
rationally on energy when the next batch of rational energy to be integrated arrives, that
arriving batch waits and is processed one at a time on a first come, first served basis. If
the energy takes the Affect path from Goal Attainment to Integration and Integration is
already working on energy that is to be affectively integrated, then it waits in the queue
on the Affect path until the Integration function has completed its processing of the current
affective activity.
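A minimal sketch of the preemption behavior just described, with invented state names; it is not the SIMUL8 implementation:

    class Integration:
        """Affective energy preempts affect-neutral work; interrupted work resumes later."""
        def __init__(self):
            self.current = None            # (bundle, remaining_ticks, is_affect)
            self.interrupted = []          # suspended affect-neutral work
            self.affect_queue = []         # affective work waiting its turn
            self.neutral_queue = []        # affect-neutral work, first come first served

        def accept(self, bundle, ticks: int, is_affect: bool):
            if is_affect:
                if self.current and not self.current[2]:
                    self.interrupted.append(self.current)      # suspend the rational work
                    self.current = (bundle, ticks, True)
                elif self.current:
                    self.affect_queue.append((bundle, ticks, True))
                else:
                    self.current = (bundle, ticks, True)
            else:
                self.neutral_queue.append((bundle, ticks, False))
        # A scheduler tick (not shown) would serve affect_queue first, then resume
        # interrupted work, and only then draw from neutral_queue.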
After the energy is integrated it passes to Latent Pattern Maintenance, where in
the current model nothing happens except that the energy is passed out of the organiza-
tion, out of its system boundary. L has already had its effect by potentially altering the
Adaptation function every time energy leaves Adaptation. The alteration of the filter is
truly latent here.
It is very important to note the cumulative delay that has occurred between when
the energy first enters the system and when it finally impacts Latent Pattern Maintenance.
The delay is the sum of the processing times in Adaptation, Goal Attainment, and Inte-
gration, plus waiting times. It is not insubstantial. Delay in time-varying systems can
cause many kinds of dysfunctional behavior, including most notably oscillation as the
organization tries to respond in order to reduce tension.
Rules
All models are the union of their parameters and rules, as mentioned on page 19.
Here is a list of rules programmed into the simulation.
Table 4.
Rules of the simulation.

Each rule is followed by its real world interpretation or application.
Rule 1. The simulation is deterministic. There are no random values. All values are entered before the simulation begins.
   Real world interpretation: If the same values are used then the same results will obtain. There is no randomness.
Rule 2. Energy comes in bundles, a pair of values: magnitude and type. The values of the magnitude and type (whether it is processed as affect or affect-neutral) are fixed for the simulation period.
   Real world interpretation: Think of energy as "news," the kind that comes from newspapers and other media broadcasts. Then the power of the news message itself contains whether that information will be handled in a rational or emotional way.
Rule 3. There is only one place in the system where the energy can enter from outside: the Adaptation function. In its role to scan the environment, Adaptation will permit energy to enter the system if it passes certain tests.
   Real world interpretation: News can only come in through one door in the organization.
Rule 4. The tests are: If the magnitude of the external energy is in the appropriate relation to the target value, then the energy is permitted to enter and transit the system. Else it leaves the system. "Appropriate relation" means that its value "passes" the relation, where both the value and the relation are found in the spreadsheet. "Passes" means that "External-energy Relation Value-in-spreadsheet" is true.35
   Real world interpretation: If the news is not significant enough then it does not rise to a level sufficient to get the attention of the organization.
Rule 5. If the energy is the type that will be processed as affect, then the processing times are selected from one set of cells on the spreadsheet, else they are selected from the other set.
   Real world interpretation: Information that is sensed as needing to be responded to emotionally is processed in a shorter duration than that requiring rational administration.
35 For example, if the external energy is of magnitude 5, the spreadsheet relation is >, and the spreadsheet value is 2, then the energy bundle passes because 5 > 2 is true.
Rule 6. Adaptation and Goal Attainment can work on only two energy bundles at once, one requiring affect and the other not requiring affect.
   Real world interpretation: The capability of the organization to respond to news is limited to one emotional event and one rational event at the same time.
Rule 7. Integration and Latent Pattern Maintenance can only work on one energy bundle at a time.
   Real world interpretation: Only one (major, funded) process can be integrated at a time, be it rational or emotional, e.g. Total Quality Management or the loss of the CEO. And LPM can respond to only one event at a time, too, though its effect can be quite long-lasting, as it controls how much news is let in.
Rule 8. If A, G, or I are finished processing an energy bundle but its successor is not ready to accept the bundle, then the bundle is put into a queue between them and the function is given more energy to process if any has arrived at that point in the cycle.
   Real world interpretation: News waits to be responded to, it does not drop or go away.
Rule 9. If energy is the type that will be processed by affect, then the (media interchange) paths between A and G and between G and I are different than if not processed by affect.
   Real world interpretation: News that will be responded to emotionally takes a "fast path" through the functions.
Rule 10. Goal Attainment enqueues the energy passed to it from Adaptation and processes it all at once at a given interval if not affect; if affect, then it is processed as it arrives, consuming the affect delay according to the spreadsheet.
   Real world interpretation: Goal Attainment simulates the rational budget process (if the news is affect-neutral) and queues resource (that is, funding) requests until a definite period has transpired, such as every six months.
Rule 11. Not every queued affect-neutral energy package transits from Goal Attainment to Integration, only a percentage does. That percentage is set at simulation run time.
   Real world interpretation: Think of these Goal Attainment energy packages as funding requests. The Goal Attainment function processes the budget requests all at once every six months, passing on only a portion of them as "approved."
Rule 12. Integration addresses the incoming energy for the duration specified in the spreadsheet, if not affect. If affect, then any non-affect current integration efforts are suspended (that is, made to pause and put into a special queue) and the affect energy is given priority in Integration. After all of the interrupting (that is, affect) Integration has been processed, then the non-affect queue is resumed.
   Real world interpretation: Imagine that an integration activity is going on, such as implementing Total Quality Management. Then the organization learns that its CEO is suddenly, unexpectedly deceased. The organization suspends the TQM initiative and focuses on succession and how to respond to the urgent news. After responding to the urgency, the organization goes back to implementing TQM.
Rule 13. Latent Pattern Maintenance computes a new value for the Adaptation filter once per energy bundle that arrives at Adaptation. The new value is computed according to the learning model.
   Real world interpretation: Every time there is news the culture responds by tuning its filter on what it will sense during the next news-gathering cycle. The tuning is accomplished as Latent Pattern Maintenance attempts to reduce the difference (tension) between the news presented from the outside and its response to it inside.
Rule 14. Energy leaving Latent Pattern Maintenance leaves the system.
   Real world interpretation: Once the culture has responded to news then its influence remains, passively.
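A minimal sketch, in Python rather than SIMUL8, of how rules 1, 3, 4, and 5 might be expressed; the function and parameter names are mine:

    def adaptation_step(energy: float, is_affect: bool, threshold: float,
                        relation: str, dwell_affect: int, dwell_neutral: int):
        """Rules 3/4: test incoming energy at the single entry point; rule 5: pick its dwell time."""
        passes = energy > threshold if relation == ">" else energy >= threshold
        if not passes:
            return None                               # energy does not enter the system
        return dwell_affect if is_affect else dwell_neutral

    # Rule 1: deterministic -- the same inputs always produce the same routing.
    print(adaptation_step(5, False, 2, ">", 5, 10))   # 10: enters, takes the affect-neutral dwell
    print(adaptation_step(1, True, 2, ">=", 5, 10))   # None: filtered out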
Assumptions
As in all modeling, the underlying theory may not provide enough details for a
machine to operate like the theory. Therefore, the modeler must make assumptions,
always in the absence of concrete guidance from the theorist. The following is a list of
assumptions made for the model of the theory of action:
Table 5.
Assumptions made in the simulation.

Each assumption is followed by its real world interpretation or application.
Assumption 1. Transit times through the framework are significant, that is, they matter.
   Real world interpretation: The time it takes for news to pass from one function to the next is significant. This is because if a downstream function takes longer to respond to news than an upstream one, the next news will have to wait on or be lost or preempt the news being responded to by the slower function. And those options are significant to how the organization responds to external events/change.
Assumption 2. Transit times can be different, depending upon whether the energy to be imported will be handled with affect or in an affect-neutral way. Affect-neutral energy can be processed in a longer time period in order to simulate the time to be consumed during rational consideration, and energy that will be addressed by affect can take a shorter period in order to simulate that non-rational actions can take a significantly shorter duration than affect-neutral ones.
   Real world interpretation: News that is handled in an emotional way does not take as long as that handled in a rational, studied way.
Assumption 3. The modeled transit times are in units of a business day. There is no guidance from Parsons on the numerical or relative values.
   Real world interpretation: The granularity of "time" described in the simulation is the business day. There is no telling how many business days each function consumes.
Assumption 4. The paths through Adaptation and Goal Attainment are separate and parallel for affect and affect-neutral energy. That is, as energy enters the organization it is identified as affect or affect-neutral and then put on its own path.
   Real world interpretation: There is a "fast path" through some functions in order to simulate the effects of immediate response implied by "emotional events" vs. the more studied and time-consuming one of the rational response.
Assumption 5. The paths are separate because one is accelerated and the other is not. Energy on the accelerated path then may be handled differently than that on the affect-neutral path.
Assumption 6. The affect-neutral path to Goal Attainment queues energy such that the Goal Attainment function empties that queue only every so many days, to simulate the periodic (that is, calendar-driven) resource allocation review process.
Assumption 7. The paths come together at Integration because an organization can only integrate one set of processes at a time. Therefore, the energy that has been identified as affective preempts the energy that has been identified as affectively-neutral.
   Real world interpretation: Integration is so consuming that only one organizational initiative can be accomplished at a time. And emotional ones have priority.
Assumption 8. Preempted energy is enqueued. That is, it is put aside and waits for the preempting force to finish and then it resumes. No energy is lost; it is stored for later use. Its strength does not diminish during storage.
   Real world interpretation: Rational Integration functions that are interrupted by emotional ones are not lost, but rather are delayed by the time it takes to address the emotional one. Then when the integration of the emotional event has been completed either another emotional event can be addressed if there is one, or the rational event that was interrupted will be restored and continue to process as if nothing had interrupted it.
Assumption 9. There is a single path through Latent Pattern Maintenance.
   Real world interpretation: Latent Pattern Maintenance actually appears in two places: during Adaptation it changes the filter on incoming news to let more or less in based on its response to tension (the difference between what is outside and what is inside), and after Integration, where it does nothing and lets its lasting influence be the setting of the Adaptation filter.
Assumption 10. There are only two threshold relations of interest in the Adaptation function: if the outside energy is greater than the threshold, or if the outside energy is greater than or equal to the threshold.
   Real world interpretation: The Adaptation function checks to see whether the strength of the outside news exceeds a certain level or whether it is at or in excess of a certain level. No other relationships are permitted.
Assumption 11. There are three related quantities that could be conflated: the threshold for energy to enter the Adaptation function, the value Latent Pattern Maintenance learns to use as the threshold, and the target Latent Pattern Maintenance uses in its learning.
   Real world interpretation: The culture tries to reduce the tension between what is sensed externally and what it can tolerate internally. The culture may or may not respond to a spike in the news, for example, if there is a strong cultural counter-force to ignore that presumably one-time event, so that event is smoothed away by looking at a longer-term trend instead. The culture selects how smooth of a trend it will use as input to what it learns as the trend of the real, external events.
Assumption 12. The smoothing of the external energy values uses this procedure: if the new energy exceeds the maximum value already seen during the time period being considered (the "look-back" window) by a specified margin (the threshold to respond to change), then establish a new maximum and that is the value to be used. Else move along a negative exponential curve from the maximum towards the moving average of the external energy (where the window of the moving average is the same as for locating the local maximum).
   Real world interpretation: If there is a spike (a very disturbing bit of news), is it greater than the last maximum the organization can remember? If it is, then that becomes the new maximum value. If not, then it is considered a bump along the way to forgetting that maximally disturbing event. By whichever means the maximum is established, if it really is a spike, then gradually forget it in light of new, lower intensity news. This slowly seeks the equilibrium level of the daily news after a very disturbing event by gradually forgetting the effect of the disturbing event until the next very disturbing one.

Mapping the model to theory
This section is an abbreviated version of the parsing that the researcher
performed on the 107-page Parsons, Bales and Shils (1953a). For every feature of the
model the corresponding phrase in the working paper is identified in the table below.
Table 6.
Map of the theory to the model.

Each entry below gives first the phrase in Parsons, Bales and Shils (1953a), with original orthography and punctuation, and then its interpretation and place in the model.
Phrase: We make four major assumptions in our analysis of boundary-maintaining systems which are composed of a plurality of units, or "particles". We assume first, the principle of inertia, namely that a unit or "particle" always tends to move in the same direction at a constant rate unless deflected or impeded. (p. 164)
Interpretation: Parsons was referring to Newtonian inertia, something that is observed (or defined) in the physical world. Surely the terms "direction" and "constant rate" have a different meaning in an organizational setting. By direction, the model assumes that energy passes first through A, then G, then I, and then exits the system after passing through L. By rate, the model assumes "cognitive rate," the rate at which energy is made sense of in organizations. Since there is no conversion factor to rates in the physical world, the simulation lets the user set the duration (called dwell) for each quadrant; the simulation permits different values for affect and affect-neutral energy, so there are eight possible user-set durations (four functional prerequisites x two values of a single pattern variable).
Phrase: In no concrete case of system-process can this constancy of direction and rate be maintained for any span of time, since the interdependence of units is the very essence of the conception of system. (p. 164)
Interpretation: Indeed, as energy transits the system the rate of process can vary (in this research deterministically, not randomly). There are several cases where direction can vary: at the outset energy might not be let into the system due to the setting of the filter in Adaptation; not all energy (news or ideas) will be allocated resources in Goal Attainment, so some will travel onward (those approved) and some will exit the system; and if the energy is to be affectively responded to then in Integration it can at least temporarily displace activities that were being addressed affectively-neutral.
Phrase: The unit in a stable state of the system will tend to follow a sequential pattern of changes of direction as its relations to the other units in the systems and to the external situation change over time. (p. 164)
Interpretation: The fixed set of rules and assumptions guarantees a sequential (in this case deterministic) pattern, both in relation to each of the predecessor functions and to the external situation over time.
Phrase: This sequence may be oscillatory or cyclical or it may have some other form, but it will always involve changes of direction (and of rate). These changes will always follow a pattern, although there may be some random elements intermixed with the pattern. (p. 164)
Interpretation: The pattern in this research depends primarily on the pattern of the external energy over time, and the initial settings of the values that control dwell times, rate of learning and forgetting, and the window over which the system looks back in which to formulate its latent pattern maintenance response.
Phrase: We assume the principle of action and reaction tend to be equal in "force" and opposite in direction. We interpret this to be another version of, or a premise underlying, the conception of system-equilibrium. No more than the statement of the principle of inertia does the statement of this principle imply that actions and reactions empirically are always equal and opposite; it does imply that where they are not equal and opposite, a problem is presented. (p. 164)
Interpretation: The implementation of "equal and opposite" is the setting of the filter on Adaptation by the Latent Pattern Maintenance function. If Adaptation lets in "too much" energy LPM compensates by decreasing the amount to be let in in the future, and mutatis mutandis for "too little." And if the reaction of LPM is not appropriately "equal and opposite" then tension increases, which in turn increases the pressure on LPM to adjust the incoming energy.
Phrase: We assume the principle of acceleration which asserts that changes of rates of process must be accounted for by "forces" operating on (or in) the unit(s) in question. An increase of rate implies an "input" of energy from a source outside the unit in question, and decrease of rate, a loss of energy, and "output" of some sort from the unit. (p. 165)
Interpretation: The model does not handle this. Rates are not adjusted based on the consumption of energy. There are increases and losses of energy in the sense of sinks and sources in the model, but those changes do not affect the rates of anything.
Phrase: We assume the principle of system-integration. We interpret this to mean that, independently of the operation of the other three principles, there is an imperative placed on systems of action which require that pattern-elements in the organization of their components should be compatible with each other while maintaining the boundaries of the system vis-a-vis its external situation. (p. 165)
Interpretation: The system and its boundary are given in the simulation and cannot be changed. On the other hand, the components are compatible with each other in the sense that they non-destructively interchange information. If there is a question of coexistence of the system in its environment, then it is reflected by increased tension, that is, by an increase in the difference between the pattern of energy of LPM and the pattern of energy external to the system.
Phrase: Central to our scheme is the conception of action as a process occurring in or constituting boundary-maintaining systems conceived within a given frame of reference. This frame of reference involves, above all, the four dimensions … and … four pattern variables. (p. 165)
Interpretation: The model simulates action as process by the four dimensions and one of the four pattern variables.
An orientation cannot be both affective and Each bundle of energy crossing the system
neutral [at the same time]. p. 166 boundary is tagged as either that which is
reacted to by affect or by affect-neutrality.
There are no degrees of affect; affect and
affect-neutrality are mutually-exclusive and
exhaustive.
The dimensions are, we assume, essentially Energy enters the system and moves along
directional coordinates with reference to pre-determined paths, operating in one
which the process of action is analyzed. place at a time. Energy in the simulation is
Motivational energy entering the system always specifically located. And the units
from an organism cannot simultaneously are connected by definite interchange
operate in all possible processes which go paths, along which energy and information
to make up the system. It must be specifi- move.
cally located, in the sense that it must be
allocated to one or more units of the sys-
tem. But at any given time this unit must be
located at some definite point in the action
space, and must be moving … in a definite
manner. p. 166
The system operates through interaction of This describes interchange media, of which
its member units. Every change of state of only two types have been implemented in
one unit … will affect all of the other units the model: feed forward, the forward transit
in the system and in turn the effects of of energy, and a single feedback path from
these effects on the other units will "feed LPM to Adaptation that sets the filter on A
back" to the original unit. p. 167 to determine how much energy to let in.
So, there is only a single instance of feed-
back in the current model; it is not fully-
connected.
We derive the conclusion that systems of Time is a fundamental construct in the
action must be treated as differentiated model, indeed in discrete-event simulation.
systems. It then becomes clear that this dif-
ferentiation will work out in two ways.
Since we are dealing processes which occur
in a temporal order, there we must treat
systems and the processes of the units as
changing over time. p. 167
The one way character of the process we The energy that transits the system does
have deduced from the nature of motiva- produce a kind of reactive consequence: the
tional energy—the fact that it is setting of the filter on the Adaptation func-
"expended" in action. We assume through- tion so that the inputs and outputs are,
out … there is, if not a law of conservation indeed, in balance. It does this in a numeri-
of motivational energy, a law of "equiva- cal, quantitative way, but without any
lence" in the sense that this energy does not measurement in The Real World. That is,
simply disappear, but, "produces" some the numerical aspect was created in the
kind of consequences, that there is a bal- model for purely illustrative purposes.
Page 75

ancing of input and output. We believe,


though at this time we cannot prove, that
this is a quantitative balance which will
eventually proved to be reducible to terms
of numerical equivalence. pp. 168-169
As a process of output contrasted with the Learning is manifest in the simulation as a
inputs of motivational energy, of percep- force against Adaptation taking in all (mo-
tion of objects, of facilities and rewards as tivational) energy in the environment.
the two primary categories of possessions, Rather, learning is a counter-force in that it
we thus treat the learning process as "oppo- limits, potentially reducing, the amount of
site" in directionality from the motivational energy "ingested."
input processes. p. 170
The distinction between performance and The model focuses on performance, not
learning aspects of action process forms the learning (except classical conditioning), on
basis for a further classification of types of attaining a system-goal and does not
process in systems…. In general the type of address "changes in the properties of the
analysis … presented … provides a model group and its constituent role-units." In
for the typical performance process where fact, there are no modeled properties or
the primary concern is not with changes in roles.
the properties of the group and its constitu-
ent role-units, but with task performance,
that is. in the terminology we are adopting
here, with attainment of a system-goal. p.
170
In the absence of an adequate mathematical The simulation is a toy, a proof-of-concept,
model, any feasible form chosen necessar- a stepping off point for further inquiry.
ily involves elements of arbitrariness which
all too readily become distortions. We have
chosen one such mode of presentation …,
but in order to counteract any tendency to
reify such a scheme, we have thought it
best to say explicitly that it is arbitrary, that
there are many possible types of models
appropriate to the fundamental ideas, that
we have experimented with several and are
looking for others. We feel that in the pre-
sent stage of development of this type of
theory it is exceedingly important to be
highly pragmatic about these matters and to
try out a variety of devices. Only in such a
way can we be protected against a pre-
mature rigidity of formulations…. pp. 171-
172
V. RESULTS
The results of the simulation are divided into three broad areas: an example, base
cases, and an extension. The base cases and extension are quite similar in their pattern of
presentation: briefly explain the theory and what it would predict, indicate what the
inputs to the simulation were, and then illustrate the output of the simulation run com-
pared to the theory prediction. They directly address Parsons' theory of action. The
example has been offered to provide some concreteness to the application of the theory, a
topic largely beyond the scope of this research.
Example
Parsons' theory is opaque, so the model of it was correspondingly abstract. An
example taken from real events might aid the comprehension of the theory and therefore
the model. At the outset we must be mindful of Parsons' admonitions about misplaced
concreteness (see p. 28 above) and about the value of analytic thought, so this example must be disclaimed from the outset as being presented here for illustrative purposes only. Nothing is
intended to be proved by it.
Up to now patterns of flows among functions have been described (Parsons'
phases), but there have been no acts! The example here is an attempt to show how the
flows and functions could describe actual human action.
For the example a situation was sought in which the energy outside of the system
was relatively uniform for a long time (stable) and then there was a jolt, an impulse of
sudden energy, mirroring some of the events to be presented below in the base cases and
extension. Waller (1999) examined the order and timing of events in a commercial airline
cockpit simulator during training drills with real airline flight crews while they were
addressing "nonroutine" events, the kind that were associated with high outside energy.
On the one hand a real situation was sought, but the example explored here was itself a simulation of such a real series of events. The problem is a scarcity of reports about real-world events in which timing and order are recorded along with outcomes. Therefore, a simulated though realistic setting is presented.
There were ten flight crews of three persons each, so each crew was a small
group, the kind that exhibits collective behavior. The setting was naturalistic, as such
training simulators are constructed precisely to mirror real world situations and condi-
tions. The nonroutine events were arranged in a sequence of six unexpected items of
news during a planned 60-minute flight from Los Angeles to San Francisco.
The unexpected events were:
1. Poor weather forecast; bad weather at San Francisco and its alternates; heavy
takeoff weight.
2. Light to moderate turbulence during the climb and cruise phase.
3. Fast, noisy descent required by air traffic control during approach to San
Francisco.
4. The approach was missed due to hydraulic failure; crew must select an alter-
native destination (Sacramento).
5. During the flight to Sacramento emergency procedures had to be performed,
including trying to manually extend the nose landing gear and force the flap
that experienced the hydraulic failure.
6. During the landing the crew had to compensate for the non-responsive flap, no
steering possible with the nose landing gear, and high landing speed because
the non-working flap also acted as an air brake.
Waller hypothesized that success at handling nonroutine events would depend
upon information collection and dissemination, task prioritization, and task distribution.
She noted that these all dealt with the level of the behavior, but not the timing [emphasis
hers]. She noted, for example, "rather than viewing the time of change as a function of
internal stages or clocks, the time of change may be seen as more tightly linked to exter-
nal events." p. 130, relying on Ancona and Chong (1996, p. 263)
Therefore, she hypothesized and tested for timing by studying whether there was
a relationship between the time an external event occurred and when it was reacted to.
Waller found that, for example, there was no difference in the level of workload between
crews that responded quickly and those that responded less quickly, though the crews that
responded quickly to external events all performed much better than the crews that
responded less quickly or did not let the external events come to their notice. In other
words, there was no difference in the level of behavior, but the difference in timing made
the significant difference in crew performance outcome. As Waller pointed out, the
higher performing teams did not work harder, did not perform more tasks, but did achieve
more, all because of timing, because of noticing and following significant external
events. (p. 134)
Two scenarios are described within the theory of action simulation, one with a relatively long window and one with a relatively short one. The window, one may recall, is how far back the Latent Pattern Maintenance function looks in order to "remember" what happened historically. Strong cultures look back a long way and weak ones a short time. Here is the input from the user for the long window, which spans the whole flight.
Dwell times (in periods)
  Adaptation: Affect-Neutral = 3 (should be greater than Affect); Affect = 1
  Goal Attainment: Affect-Neutral = 10; Affect = 1
  Integration: Affect-Neutral = 30; Affect = 5
  Latent Pattern Maintenance: Affect-Neutral = 0; Affect = 0
Percentage of ideas not funded
  Percentage (expressed as a decimal < 1.0) = 0.20
Propensity to change
  Energy filter threshold (initial value) = 1
  Sense of test: pass energy if Energy >= Filter Threshold
Learning/Forgetting
  Prior learning (p) = 1.00E+08
  Time to reach current pattern (r) = 1.00E+07
  Starting value of L energy (k) = 1
Negative exponential decay parameters
  Threshold to respond to change (in energy units) = 1
  Decay coefficient (< 0, in time units) = -0.001
  Response window (in number of Affect-Energy pairs) = 60

Figure 16. User display for example with long window.

Each period of the simulation is one minute and there are 60 periods, to mirror
Waller's experiment. The pattern of external energy, not shown, is eight minutes of rela-
tive calm followed by a single one-minute message of high energy (Energy=4) that has to
be dealt with affectively. Figure 16 shows that the assumed period of prior learning is
approximately ten years (in minutes! "1.00E+08" is 10^8 minutes) and the period of time
to reach the current level of expertise is one year. All external events are permitted to
enter (Energy filter=1 and Threshold to respond to change=1). The Adaptation function
looks at the external environment once every minute.
Here is the pattern of internal and external energy, assuming such a strong culture
that the strength of the culture remains constant during the 60-minute flight.
[Line chart, "External energy vs. LPM energy": energy level (0 to 4.5) versus simulated time (0 to 60 minutes), with one series for internal (LPM) energy and one for external energy.]

Figure 17. Energy for the long window example.

One can see that after the first (of six) non-routine events, the culture lets in only very high-energy events, an indication that it has learned that such non-routine events can occur and that attention has to be paid to them immediately.
Now we make a single change: the window is reduced to a few minutes, as if the
crew forgets the (disruptive) impact of each non-routine event. The window is set to five
minutes and here are the results:

[Line chart, "External energy vs. LPM energy": energy level (0 to 4.5) versus simulated time (0 to 60 minutes), with one series for internal (LPM) energy and one for external energy.]

Figure 18. Energy for the short window example.


The energy bounces up and down as the culture more closely follows the pattern
of the external energy, with a non-routine event every nine minutes. This, too, is pre-
dicted by the theory of action because a weak culture (= short window) will respond
much more quickly to changes in outside energy and therefore will maintain less of a
pattern, will enforce less of a culture.
In summary, Waller notes that groups can "match the rhythms of [their] task-oriented behaviors to exogenous events, rhythms, or deadlines." (p. 135) This is precisely what the model described in this dissertation does: better performance is achieved by matching the energy in the environment with the energy circulating internally, presumably controlled by the most powerful function in the cybernetic hierarchy, Latent Pattern Maintenance.
In the Waller example cast in Parsonian terms, the crew executes the Adaptation
function itself, sometimes by asking for news (such as weather conditions) or by noticing
indicators (such as the nose gear not engaging and the noisy approach descent). Based on
its sense-making during Adaptation the crew determines whether each event is routine or
not. When it was not routine, then the best crews responded to it affectively, accelerating
the transit of the event through the crew's equivalent of Goal Attainment (redirect atten-
tion towards the new event, immediately "approving" it for (attention) funding), and Inte-
gration (executing the standard procedure for that unexpected event, but interrupting or
suspending standard processing).
Base cases
Affect vs. affect-neutrality
Affect and affect-neutrality are traditionally ascribed to each of the four functions: affect to G and I, and affect-neutrality to A and L (see Figure 5). That is, A and L are to be more cognitive, rational, and thought-out, while G and I address gratification and emotional aspects; they are not rational. There is also the view that energy dealt with affectively transits the functions more quickly than energy that is dealt with in a studied, reflective, rational way. One case, then, examines the extent to which energy that is dealt with affectively passes through the organization more quickly than energy dealt with affectively-neutrally.
One run of the simulation had the following values:
Ver. 0.9 For The Parsons Game by Stan Rifkin

External input stream (first periods shown): Affect (1 = true) = 0 and Energy (level) = 2 in every period displayed.

Dwell times (in periods)
  Adaptation: Affect-Neutral = 10 (should be greater than Affect); Affect = 5
  Goal Attainment: Affect-Neutral = 120; Affect = 20
  Integration: Affect-Neutral = 120; Affect = 60
  Latent Pattern Maintenance: Affect-Neutral = 0; Affect = 0
Percentage of ideas not funded
  Percentage (expressed as a decimal < 1.0) = 0.80
Propensity to change
  Energy filter threshold (initial value) = 2
  Sense of test: pass energy if Energy >= Filter Threshold
Learning/Forgetting
  Prior learning (p) = 500
  Time to reach current pattern (r) = 250
  Starting value of L energy (k) = 2
Negative exponential decay parameters
  Threshold to respond to change (in energy units) = 1
  Decay coefficient (< 0, in time units) = -0.001
  Response window (in number of Affect-Energy pairs) = 45

Figure 19. Base case for affect vs. affect-neutrality.

The simulation ran for 2400 simulated business days (about ten years) with an energy stream of all 2's that were to be dealt with affectively-neutrally, except that every 240 business days (approximately one business year) there was an event of energy 4 that was to be dealt with affectively (which one would assume, as it represents a large departure from "normal"). Accordingly, there were ten such events of magnitude 4, including one on the last business day simulated. The results: there were 480 events (one every business week, which was the frequency of scanning the environment by the Adaptation function); 408 events did not pass through the Adaptation filter and therefore were not processed further, 49 were not resourced, four were still in processing in the four functions, and 22 completed the journey through all four functions. Among those 22 were all nine of the magnitude-4 events that had been dealt with affectively (the tenth arrived only on the final day simulated and so could not have completed); the remainder were the "normal," affectively-neutral events. Clearly, the events dealt with affectively sped through the organization relative to those dealt with affectively-neutrally.
Strong vs. weak culture
Organizations with strong cultures have, in essence, a strong Latent Pattern Maintenance function, one that counteracts any disturbances that might enter. In fact, one of the ways LPM can limit disturbances is not to let them in in the first place, by limiting the information/energy that the Adaptation function lets pass. By restricting the filter on the Adaptation function, LPM limits the excursions of energy inside the organization. In a weak culture, the organization more closely follows the external energy, in a sense tracking it with all of its ups and downs.
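To make the contrast concrete, the short sketch below shows one way a look-back window could produce these two behaviors. It is offered purely as an illustration and is not the code of the simulation itself; the choice of a window maximum, the exponential decay back toward a baseline of 2, and the function and parameter names are assumptions introduced here for exposition.

# Illustrative sketch only; NOT the dissertation's implementation.
# Internal (LPM) energy remembers the largest excursion in the look-back
# window and decays from it back toward the baseline level.
import math

def internal_energy(external, window, decay=-0.001, baseline=2.0):
    levels = []
    for t in range(len(external)):
        recent = external[max(0, t - window + 1): t + 1]
        peak = max(recent)
        # periods elapsed since the most recent occurrence of that peak
        since_peak = len(recent) - 1 - max(i for i, e in enumerate(recent) if e == peak)
        levels.append(baseline + (peak - baseline) * math.exp(decay * since_peak))
    return levels

# Ten "years" of energy 2, with an energy-4 jolt every 240 periods.
stream = [4 if (t + 1) % 240 == 0 else 2 for t in range(2400)]
strong = internal_energy(stream, window=225)   # long memory: holds on to the jolts
weak = internal_energy(stream, window=25)      # short memory: soon falls back to 2

In this sketch the long window keeps the internal level elevated long after each jolt, while the short window lets the level fall back to the baseline as soon as the jolt leaves the window.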
Figure 19, above, represents the tableau of a strong culture. First, its memory win-
dow is long, 45 events; assuming one event per five days, that's almost one business year.
In other words, the reaction time of this organization can be delayed by a year, after
which it again reacts to the outside energy. Here is the pattern of external energy and the
pattern of following it internally:

[Line chart, "External energy vs. LPM energy": energy level (0 to 4.5) versus simulated time (0 to 2400 business days), with one series for internal (LPM) energy and one for external energy.]

Figure 20. Pattern of internal energy following external with a strong culture.

As can be seen, there are annual jumps in energy to a value of 4, with long peri-
ods of 2 between them. This strong culture "forgets" the high values over time until
another one hits and then its internal energy jumps up again to follow it. The figure illus-
trates the deterministic, repeated pattern of how internal energy followed external pertur-
bations. Consistent with the prescription of classical conditioning, there was no long term
learning, the pattern of internal energy is completely determined by the pattern of exter-
nal energy.
By changing just two parameters, the window to 25, about half of the previous
value, and the Threshold to respond to change from 1 to 0.5, the organization mirrors a
weaker culture. Here are the results, with the same stream of energy as in the figure
above:
[Line chart, "External energy vs. LPM energy": energy level (0 to 4.5) versus simulated time (0 to 2400 business days), with one series for internal (LPM) energy and one for external energy.]

Figure 21. Pattern of internal energy following external with a weak culture.

In this case the organization gradually forgets the high values and then when the
window has passed it says to itself, in effect, "Let's stop responding to old news and get
synchronized with what is happening now, let's loosen the reins a bit and let some new
energy in." But, again, consistent with the prescription of classical conditioning, there
was still no long-term learning, the pattern of internal energy is completely determined by
the pattern of external energy. The only change was the period of looking back.
And here are the results for an organization with a really weak culture: the window
was set to about one month. This would be the case in an organization where something
like the terrorist attacks of 9/11 happened every year and within a month the organization
was incorporating the weekly news, as if nothing had happened. It would be as if there
were no heritage, no legacy. No pattern maintenance.
[Line chart, "External energy vs. LPM energy": energy level (0 to 4.5) versus simulated time (0 to 2400 business days), with one series for internal (LPM) energy and one for external energy.]

Figure 22. Pattern of internal energy following external energy with a very weak culture.

Extension
The outcomes in the previous cases were easily predicted by the theory of action.
Here is a case in which there is no theory to guide predictions. In some sense, the simula-
tion is the prediction.
In this case, Figure 19 is used. The pattern of inputs was varied slightly: instead of
there being 49 weeks of a constant value of Energy=2 and no affective processing fol-
lowed by a single week of Energy=4 with affect, there are 25 weeks of Energy=2 with no
affect followed by one week of Energy=3 with affect, followed by 25 weeks of Energy=2 and no affect, followed by one week of Energy=4 with affect. In all there are 52 weeks,
during which there is an energetic event in the middle and one at the very end, both the
only two events to be dealt with affectively. Everything else is the same dull news, to be
dealt with affectively-neutral.
Here was the simulation at the end of 2400 business days:
Figure 23. Simulation after two energetic events per year, both with affect. Illustrates queuing effects.

While it is difficult to read, the small numbers inside the main quadrants repre-
sented the number of energy bundles presently being processed. As can be seen, there
were 28 pending "requests" for Integration, along with 16 Integration processes that were
interrupted while higher priority ones were being processed (presumably those that had to
be dealt with affectively). Why are they all waiting? It is because the current Integration activity is processing energy affectively. So, with the current values, each pending affective process occupies Integration for 60 business days, meaning roughly two such periods, about 120 business days (six months), will pass before the first affect-neutral Integration process can even begin. A total of 44 affect-neutral events were funded in Goal
Attainment and all of them are queued: 28 in the Integration queue and 16 of them in
Interrupted Integration. The queues build because, according to Figure 19, each value for
the dwell time increased as the energy made its way around the AGIL circuit. This was
logical because Adaptation took less time than Goal Attainment, which in turn took less
time than Integration.
To reiterate, all 17 items in Spent Energy, those that had completely transited the organization, were limited to those that were dealt with affectively. That is, in the ten years simulated NO affect-neutral events were processed all the way through! That is due to the frequency of the events that had to be dealt with affectively (two per year) and the long time it takes to integrate the responses to them (60 business days, about three calendar months).
Therefore, one of the extensions to the theory of action concerns the time spent in each functional unit relative to the rate at which inputs and messages arrive. If the average time spent (the service time) is greater than the average inter-arrival time of the inputs and messages, then queues will build without bound. This is a fundamental principle of queuing theory (Kleinrock, 1975-1976). Parsons did not write about what happens when some energy has to wait three years to be integrated.
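The principle can be seen in a few lines of code. The sketch below is not the dissertation's simulation; it is a deliberately minimal, hypothetical single-server queue (deterministic arrivals and service) that shows the backlog growing steadily once the service time exceeds the inter-arrival time, and staying near zero otherwise.

# Minimal deterministic single-server queue, for illustration only.
def queue_sizes(interarrival, service, n_arrivals):
    """Number of items ahead of each arrival in a single-server queue."""
    sizes = []
    busy_until = 0.0                               # when the server clears its backlog
    for n in range(n_arrivals):
        t = n * interarrival                       # arrival time of the n-th item
        backlog = max(0.0, busy_until - t)         # unfinished work at this arrival
        sizes.append(int(backlog // service))      # items still ahead of this one
        busy_until = max(busy_until, t) + service  # when this item will finish
    return sizes

# Adaptation scans every 5 business days but affective Integration dwells 60:
print(queue_sizes(interarrival=5, service=60, n_arrivals=10))   # grows: 0, 0, 1, 2, ...
# Reverse the relationship and the queue never forms:
print(queue_sizes(interarrival=5, service=3, n_arrivals=10))    # all zeros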
In sum, the results of operating the simulation were both accurately predicted by the theory and, without much effort, demonstrated potential extensions to the theory. The results
were collectively an encouraging step towards a workbench for scientists to use to char-
acterize and experiment with their understanding of social systems.
VI. CONCLUSIONS AND RECOMMENDATIONS FOR FURTHER STUDY


Lave and March (1993) stress how indispensable it is for modelers to recognize when they are wrong. "A final protection from the danger of believing too fer-
vently in a theory [of your own] is to be intellectually playful." (p. 61) [emphasis in
original] And in another place: "Play to your analytical strength. Do not be afraid of
twisting a phenomena around a bit to make it fit into an analytical scheme that can drive
some implications for you. Do not hesitate to look for phenomena that can be examined
usefully with the models and techniques you have." (p. 73) One of the key points of the
research reported in this dissertation is that attention to timing of social systems events
can have an impact on theory. But if timing were not a central focus of the theory of
action in the first place, then is the time-related research of no account?
Review of purpose and research question
The purpose of this research was to see if Parsons' large body of descriptive text could be understood well enough to fashion an animation of it. A partial response was that doing so in full was beyond the scope of dissertation-scale research. Rather, a single, indicative work of Parsons (1953a) that described the dynamics of the theory of action was selected as a stand-in for the totality of Parsons' works.
Can the salient factors (structure and function) be extracted? Saliency is clearly in the eye of the beholder. A more concrete response is that a simulation was constructed and, in the judgment of the dissertation committee (and the researcher, of course!), it performed according to the theory.
Finally, was it possible to instantiate, make concrete, those salient factors so that a
high fidelity representation of the descriptive theory of action can be constructed? Again,
fidelity is in the eye of the beholder, and the dissertation committee agreed that the fidel-
ity was sufficient to demonstrate that it was possible to simulate the selected aspects of
Parsons et al. (1953a).
Ultimately, the question guiding this research was "What is the minimum set of structures and related functions that can simulate Parsons' theory of action to some criteria of
validity?" That is, what was the most parsimonious selection of theory of action con-
structs that, when animated, achieved a given level of fidelity? Can the theory of action
be simulated using only the functional prerequisites, (one pair of) the pattern variables,
(four of) the interchange media, and the cybernetic hierarchy of social control?
The research did not really address minimality so much as it appealed to the reader's sense of parsimony and asked rhetorically, "Could the theory be simulated with fewer structures and corresponding functions?" This question is the opposite of the usual one, which in the instant case might be "How many structures and functions can be included (jammed?) into a simulation in order to make it high fidelity?" The stated level of fidelity of the simulation described here is one that can serve as a building block for the next researcher to construct a more accurate or more general model of the theory of action; this research was a demonstration of possibility, a proof of concept. Accordingly, the best judge of whether the simulation achieves its purpose will be the next person in line to use it!
Review of findings
Accordingly, the primary finding is that a toy, proof-of-concept simulation of Par-
sons' theory of action can be constructed and operated. It was constructed mentally and in computer programming terms by scaffolding feature after feature, much the way the next researcher might use the model, which adds weight to the practicality of the future prospect of a more extensive (deeper and broader) exploration of the theory of action. That is,
the current research was conducted by beginning with a modest baseline of operational
capabilities and successively adding to it in order to increase the coverage of, and there-
fore fidelity to, the theory.
The fidelity was illustrated by several "base cases," whose outcomes were pre-
dicted by the theory. One case was for a strong culture, one with a strong Latent Pattern
Maintenance function that could remember for a long time. Theory would predict that
such an LPM function could counter new information entering the organization by quickly decreasing the amount of new information permitted in until the organization was "over" (in the sense of having forgotten) the out-of-the-ordinary impulses. Another case, of weak culture, illustrated what the theory of action predicts: the organization under study closely followed the pattern of external information (as though it had no memory) and was therefore "whipsawed" by the shape of external events; there was virtually no counter-force to the impulses entering the Adaptation function from a possibly-turbulent environment.
On account of these base cases, one can have a degree of confidence that the
model enacts the theory. In addition, an Appendix contains the attestation of an expert on
the simulation language selected that the modeling and the model achieved what was
sought.
Discussion
As presented above in the Literature review, p. 25, there was a notional set of
steps to be taken to build the simulation:
Table 7.
Correspondence between what was required and what was developed.

What was required (Hanneman, 1988), and what was developed:

Required: Define boundaries of the system.
Developed: The system boundary was defined in terms of "internal" and "external," meaning inside and outside of the system or organization under study. The important element that came from the outside and was evaluated and contingently transformed inside was energy, notionally in the form of news, information.

Required: Define the elements of the state space and partition the state space into subsystems.
Developed: The partitioning was given by Parsons' four functional prerequisites, so there are four elements of the state space. All four were simulated. In addition, Parsons defined four pairs of pattern variables that define the dynamic aspects, below. A single pair, affect and affect-neutrality, was simulated.

Required: Describe the connectivity of the state space elements, and the forms of relations among the states of the system.
Developed: Parsons defined a fully-connected space using interchange media as the paths among all possible combinations of elements. That is, there was a 12-part bi-directional connection in Parsons' model, but only four of those paths were simulated. And only a single connection between inside and outside was defined, namely the one-way connection from the outside to the Adaptation function.

Required: Define the dynamic aspects of the relations among state space elements.
Developed: Parsons stated that energy passed along interconnections from each state space element to each other one. The simulation defined only two paths, really: a clockwise one from A -> G -> I -> L and a counter-clockwise one from L -> A. Another dynamic aspect suggested by Parsons but not described in much detail was the classical conditioning by which L learns as it reduces tension. One more dynamic aspect was that energy to be treated affectively had a faster path through the state space than energy treated affectively-neutrally. In addition, there were other dynamic aspects that were in the simulation but not in Parsons: energy queued when the next function was not able to absorb it, a target value of external energy was selected in order to achieve gradual learning, under affect-neutral conditions G empties its queue all at once, and the dwell time grew longer at each successive function in the cycle A -> G -> I. (A sketch of this overall structure follows the table.)
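The sketch below restates the structure just tabulated as a small, runnable Python fragment: four stations visited in the fixed order A -> G -> I -> L, dwell times that differ for affective and affect-neutral energy (the values are those of Figure 19), and a single feedback path by which L adjusts the filter that A applies to incoming energy. It is a simplified paraphrase for exposition, not the dissertation's program; the class and function names, the time-stepped loop, and the particular rule used for the feedback adjustment are assumptions made here.

# Simplified paraphrase of the simulated structure; not the dissertation's code.
from collections import deque

DWELL = {"A": (10, 5), "G": (120, 20), "I": (120, 60), "L": (0, 0)}  # (neutral, affect)
ORDER = ["A", "G", "I", "L"]

class Bundle:
    def __init__(self, energy, affect):
        self.energy, self.affect, self.remaining = energy, affect, 0

def step(queues, in_service, filter_threshold, arrivals):
    """Advance one period; return the (possibly adjusted) Adaptation filter."""
    for b in arrivals:                         # Adaptation's filter on external energy
        if b.energy >= filter_threshold:
            queues["A"].append(b)
    for i, name in enumerate(ORDER):
        if in_service[name] is None and queues[name]:   # start the next waiting bundle
            in_service[name] = queues[name].popleft()
            in_service[name].remaining = DWELL[name][1 if in_service[name].affect else 0]
        b = in_service[name]
        if b is not None:
            b.remaining -= 1
            if b.remaining <= 0:               # done here; pass along the A->G->I->L path
                in_service[name] = None
                if i + 1 < len(ORDER):
                    queues[ORDER[i + 1]].append(b)
                else:                          # L -> A feedback (assumed rule):
                    filter_threshold = max(filter_threshold, b.energy - 1)
    return filter_threshold

queues = {name: deque() for name in ORDER}
in_service = {name: None for name in ORDER}
threshold = 2
for t in range(2400):                          # ten "years" of business days
    arrivals = ([Bundle(4, True)] if (t + 1) % 240 == 0
                else [Bundle(2, False)] if t % 5 == 0 else [])
    threshold = step(queues, in_service, threshold, arrivals)
print({name: len(q) for name, q in queues.items()}, "filter =", threshold)

In this sketch queues build wherever the dwell time exceeds the arrival interval, the same queuing effect that Figure 23 illustrated; the reported simulation additionally gives higher priority to affectively-treated energy, which this fragment omits.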
Latent Pattern Maintenance
Latent Pattern Maintenance has a profound and profuse effect on the other three
functions, according to the theory. Qualitatively, LPM resets every function whenever a disturbance enters from the outside. LPM performs the reset in order to keep the difference between the energy outside and the energy inside (a difference defined as tension) within a threshold whose value it has learned. According to Parsons the
style of this learning is classical conditioning (Parsons et al., 1953a, p. 226). The chal-
lenge in the research here was to translate the notional, qualitative learning into one that
could be simulated, that is, was quantitative. The simulation enacted the computation of a
reset value according to the formula in Nembhard and Uzumeri (2000). It is a subject of
future work to identify other, potentially better, formulæ for learning.
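For concreteness, the three-parameter hyperbolic learning curve associated with Nembhard and Uzumeri maps naturally onto the p, r, and k fields of the user display (prior learning, the time to reach the current pattern, and the k value for L's energy). The sketch below shows the general shape of that formula; the exact variant enacted in the simulation may differ, so it should be read as illustrative rather than as the implementation.

# General shape of the three-parameter hyperbolic learning curve in the
# spirit of Nembhard and Uzumeri (2000); the simulation's exact variant may
# differ, so this is illustrative only.
def hyperbolic_learning(x, k, p, r):
    """Performance after x units of cumulative experience.
    k: asymptotic level; p: prior experience brought to the task;
    r: cumulative experience (including p) at which performance reaches k/2."""
    return k * (x + p) / (x + p + r)

# With the base-case values of Figure 19 (k=2, p=500, r=250), the curve starts
# at two-thirds of its asymptote and approaches 2 as experience accumulates:
print(hyperbolic_learning(0, k=2, p=500, r=250))     # 2 * 500 / 750 = 1.33...
print(hyperbolic_learning(5000, k=2, p=500, r=250))  # approximately 1.91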
Time
As mentioned in the Literature Review on p. 34, time is almost never taken into
account in social systems studies. But time was the organizing principle for the research
presented here, so it differs markedly from other sociological tracts. The way time was
presented in the theory of action was in terms of "before," "during," "after," "longer,"
"shorter," and "then." There are sequences of patterned action described in (Parsons et al.,
1953a), in particular phases and cycles. Sequence by its definition suggests ordering and
therefore time and timing. The translation of Parsons' time to the model developed here
was accomplished in a straightforward way using the device of discrete event simulation, a method of simulation that explicitly specifies order and time.
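The essential device is the future-event list: pending events are stored with the times at which they are scheduled to occur and are processed strictly in time order, so both order and timing are explicit. The fragment below is a generic illustration of that mechanism, not the simulation's own scheduler; the event descriptions are invented.

# Generic future-event-list loop; illustrative, not the simulation's scheduler.
import heapq

events = []                                   # (scheduled time, description)
heapq.heappush(events, (5, "energy bundle passes the Adaptation filter"))
heapq.heappush(events, (3, "Adaptation scans the external environment"))
heapq.heappush(events, (65, "Integration finishes an affective bundle"))

clock = 0
while events:
    clock, what = heapq.heappop(events)       # always the earliest pending event
    print(f"t = {clock}: {what}")             # order and time are explicit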
The challenge with respect to time was obtaining times (intervals) for the duration
of each of the four functional prerequisites. It was not sufficient to leave that to the user.
The arbitrary unit of a single business day was selected as the atomic unit of time, with-
out loss of generality. Then the user of the simulation specified durations in units of busi-
ness days in the hopes that that somehow was consistent with what Parsons envisioned
but never wrote. Again, the unit of time could be changed throughout the simulation to
another other one, as long as it was the same atomic unit everywhere. That is, there is no
subjective or social time in the simulation; all time is in terms of a clock tick or cadence
of equal duration.
The impact of using uniform time instead of subjective time is not clear. If there
were some way to model subjective time then the mismatch among durations of the first
three functional prerequisites would still exist, queues would still build, and learning
would still be time-based. That is, in the main the results would be the same whether time
was modeled as uniform or subjective.
Process
The description of social systems as process was introduced in the Literature
Review on p. 36. There it was argued that the process focus imposed a heavy burden be-
cause it required a rich description of the mechanism and steps by which states change
inside an organization in response to external stimuli, as opposed to what one usually
finds in School of Education dissertations, which are statistical analyses of scores; there
is no dynamics, no detailed mechanism of how a score gets its value.
The heavy burden is manifest in writing a simulation because the computer has to
be told everything! Not only was the structure and function to be made manifest for the
computer, but also the many details about which Parsons gave no guidance: were there
queues between functions, how exactly did Latent Pattern Maintenance learn (we know
that it was classical conditioning, but what was the model and what were the values of its
parameters?), how exactly did Latent Pattern Maintenance affect Adaptation (that is, how
did LPM affect the energy that Adaptation sensed or not?), how did LPM measure or
sense tension, and then what exactly did it do to Adaptation in order to present a counter-
force to energy that disturbed the previous state, etc.
As the simulation results unfolded, another process question arose: did Parsons
foresee that organizations had a capacity to respond that is finite over an arbitrarily small
period? Did he foresee that the functions might get to a state where they could no longer
absorb or respond to any more energy? And then would he have predicted what would
happen? While these questions might properly belong in the section below on recom-
mendations for further study, they in fact suggest the fruits of the process view as taken during this research.
Discrete event simulation as a technique
Most modelers of social systems use the techniques of system dynamics for good
reason: there is a community of practice centered around MIT and other distinguished
universities (e.g., System Dynamics Society), an excellent text with many examples
(Hanneman, 1988), and there is a growing corpus of applications (quarterly System
Dynamics Review and the annual Proceedings of the International Conference of the
System Dynamics Society). However, system dynamics did not appear to be up to the
task of modeling the theory of action, particularly using the direct words of Parsons as the
oracle.
Probably the breakthrough in this research came when Parsons' description of the
phase movement was seen as a partial time-ordering and then discrete event simulation
techniques were applied to see if there was a fit. The single largest contribution of this
research may be the application of discrete event simulation to a social system, as this
was only the third recorded instance of such an application.
Those who most often use discrete event simulation are trying to understand how
waiting lines form, so it was no surprise that the waiting lines in the theory of action were
exposed. This, too, may be a contribution of this research, as the topic appeared to be
neglected by Parsons, his supporters, and his critics.
Other difficulties
The effort to model the theory of action was difficult for several additional
reasons: (a) so much is written by and about Parsons; (b) what is written is difficult to
understand; and (c) the paucity (well, complete absence) of empirical, time-varying
results that could be used to verify the simulation.
Each of these difficulties was addressed, though not all to the same level of rigor.
In order not to claim any relation to the totality of Parsons' work, a single chapter was selected as indicative, and then only a very small portion of it was selected to be simulated. There is likely no antidote to the difficulty of understanding what has been written by and about Parsons. One can only try to triangulate among Parsons and the experts, and then have the resulting understanding reviewed by experts in order to increase confidence in its fidelity.
Above all, this research should be seen for what it was: a small, toy experiment –
without verification – to see if something bigger is possible. Only by seeing that bigger
thing, produced by a future researcher perhaps built on the foundation presented here, can
the import of the current research be assessed.
Implications
For theory of social systems
One implication for social systems research is the consequence of framing inter-
actions in terms of time sequences but not attending to the impact of those time
sequences. For example, the instant research illustrated the impact of not attending to the
relationship between arrival rates and service rates. That is, in an external or even internal
environment of turmoil and "white water," (Vaill, 1996) a scan of that environment can
identify many items that need the attention of the organization (high arrival rate). Will
there be enough time (high enough service rate) to attend to them all? What happens to
the ones not attended to? These are questions about which Parsons offered no guidance.
In addition to the typical problem of queues building when the average arrival rate
exceeds the average service rate, there is also the question of priority queues and high
priority processing. While usually the domain of industrial engineering and operations
research, those topics entered this research during the simulation of an organization
responding affectively to stimuli. In the affective case the service rate is faster (Parsons et al., 1953a, p. 201), at least one reason being that affect is by definition emotional while its opposite, affect-neutrality, is rational, reasoned, and cognitive, and it takes longer to be rational than not. The faster service rate of energy being addressed
affectively can compensate for a higher arrival rate of external energy. The dilemma is
that the quality of decisions arrived at affectively is lower than those decided affectively-
neutral (that is, rationally). As Fararo (2001, p. 157) avers, the problem for further
research is finding the "sweet spot," the stable region, between the two. Fararo said it is
the kind of theorem one would like to see for the operation of pattern maintenance (loc.
cit.).
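A priority queue is the standard way to express such high-priority processing. The fragment below is hypothetical and not part of the reported simulation; it simply shows affectively-tagged items being served ahead of affect-neutral ones, using the Integration dwell times of Figure 19, so that the affective items complete while the neutral backlog waits, much as the extension run showed.

# Hypothetical priority queue for Integration; not the reported simulation.
import heapq

AFFECT, NEUTRAL = 0, 1                 # lower number = higher priority
SERVICE = {AFFECT: 60, NEUTRAL: 120}   # Integration dwell in business days (Figure 19)

queue, order = [], 0                   # `order` breaks ties by arrival sequence
for label in ["neutral-1", "neutral-2", "affective-1", "neutral-3", "affective-2"]:
    priority = AFFECT if label.startswith("affective") else NEUTRAL
    heapq.heappush(queue, (priority, order, label))
    order += 1

clock = 0
while queue:
    priority, _, label = heapq.heappop(queue)   # affective items jump the line
    clock += SERVICE[priority]
    print(f"{label} completes Integration on day {clock}")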
For research in simulation
Clearly discrete event simulation is underused in social systems research; only
two previous examples were found. Perhaps the most compelling reason for that is the
paucity of social systems research that incorporates time; time is the main independent
variable in discrete event simulation. In that sense Parsons was many decades ahead of
social systems research. And it might be premature to suppose that sociology has caught
up with his practice of seeing social processes as events in a time sequence, the kind of
string of actions that are ideally-suited to be simulated in a discrete event framework.
Another force that might augur for additional application of the discrete-event
approach is the increased cross-over between sociology and other computer- and mathe-
matics-related disciplines. One finds the CMOT (computer and mathematical organiza-
tion theory) community increasingly using engineering-oriented tools to address socio-
logical problems. For example, Burton and Obel (1995), management scientists, have
found that the design of an organization (structure) can be optimum, a term never used by sociologists or organizational designers. Burton and Obel cast the problem as one in linear programming, where an objective function is to be maximized (such as decision speed or decision quality) or minimized (such as communication expense, overhead, or rework), subject to constraints. This framing as a linear programming problem is an example of the intersection of two disciplines and of the new techniques that grow from it.
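As a toy of that framing only, and emphatically not Burton and Obel's actual model, one could maximize a weighted combination of decision speed and decision quality subject to a budget on communication overhead; every coefficient, bound, and variable name below is invented purely to show the shape of such a linear program.

# Toy linear program in the spirit of the organization-design framing;
# the coefficients, bounds, and variable names are invented for illustration.
from scipy.optimize import linprog

# Decision variables: x[0] = effort put into faster decision paths,
#                     x[1] = effort put into decision-quality reviews.
c = [-3.0, -2.0]            # maximize 3*x0 + 2*x1 by minimizing its negation
A_ub = [[2.0, 4.0]]         # communication overhead consumed per unit of effort
b_ub = [20.0]               # total overhead budget
bounds = [(0, 8), (0, 8)]   # limits on each kind of effort

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x, -result.fun)    # optimal efforts and the maximized objective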
The application here is that perhaps the confluence of interest in the theory of action among people outside of sociology with the increased capabilities and expressiveness of simulation languages and tools might result in a renaissance of investigations into the meaning of the theory of action, using the mechanism of a workbench that a researcher could manipulate to explore understanding.
For practice
The scientist finds his reward in what Henri Poincaré calls the joy of comprehen-
sion, and not in the possibilities of application to which any discovery may lead.
- Albert Einstein [36]

There is very little here for the practitioner. After all, this is a simulation of a
theory, in effect an abstraction of an already abstract theory. If there is one finding for a
practitioner it is the power of affect and the need to balance it with affect-neutrality for
the long-term health of the organization. This was not lost on Fararo (2001), who notes
"A 'functional necessity' or 'functional imperative' for an ongoing social system is that the
element of affective neutrality be built into it (i.e., action in some situations should take
the form of disciplined attention to instrumental and moral considerations in priority over
immediate gratification)." p. 137
[36] In Alice Calaprice (Ed.). (1996). The quotable Einstein. Princeton, NJ: Princeton University Press, p. 173, in turn quoting from "Prologue" in Max Planck. (1932). Where is science going? New York: Norton, p. 211.
One might speculate about the conditions under which the balance between affect-neutrality and purely affective response needs to be struck anew, namely when tension is not being tracked, when Latent Pattern Maintenance is not matching the internal energy of the organization with the energy of the environment, and when the pattern of matching is diverging over time, getting worse, more tense. In that case, one cause might be that greater affect-neutrality (that is, cognition) is required in order to make higher quality decisions, and those decisions in turn would better match the internal energy to the external.
Recommendations for further study
Surely the most commonly-heard expression by a reader upon reaching the end of
this dissertation will be "Why did he stop here? This is just the beginning. There is so
much further to go." The dilemma when attempting something never done before is to
determine when a beginning has been accomplished, when is it time to declare the end of
one phase so that another may begin (possibly conducted by another researcher).
This dissertation presented a simulation of a part of Talcott Parsons’ theory of
action. Like all simulations its fidelity can be improved in a number of dimensions, in
this case: taking more theory into account, being more accurate, being more general,
being more user-friendly, accounting for more pattern variables and their interaction,
having finer granularity, having coarser granularity, having adjustable granularity, and
dealing with more interchange media. In fact, one way to generate a list of considerations
for further study would be to systematically address this study's Limitations section, p.
52.
The decision about where/when to stop was based on a single judgment: could
another researcher pick up where this work left off and continue along a path of refine-
ment or generality? While the author could have gone further (in fact, without limit), it is
the researcher's judgment that the state of the simulation is complete enough so that an-
other person can carry on. Accordingly, while this work could have continued, it is also
true that other researchers can join in the fun now.
[The model here] exhibits a logic of "theoretical models in progress." This usually
means starting with initial simplifications and then adding complications in suc-
cessive revisions. In [computer] programming terms, any one theoretical model
becomes successively embodied in a series of program, the later programs cor-
recting and extending the earlier. … At any one point in this series of develop-
ments, a simulation model is both a theoretical model and a program. There is
really never a last program in the series, only a place of rest or termination
through exhaustion of the creative possibilities or diversion into work on other
such projects. (Fararo & Hummon, 1994)

Increase simulation fidelity


As mentioned in Delimitations and Limitations, above, fidelity can be increased
infinitely. In particular, there are four obvious areas of concentration:
1. The number of interchange media could be increased from four to the full comple-
ment of 12. See Figure 4.
2. The number of interpenetrations could be increased from zero to at least one, as illus-
trated in Figure 2 and Figure 3. That is, in addition to the four functional prerequisites
alone, each of them has inside four of its own and this could be simulated, too.
3. The cybernetic hierarchy could be simulated explicitly.
4. The remaining three pairs of pattern variables could be simulated, along with their (combinatorial, multiple-order) effects on each other. Dubin (1960) gives insight into the size
of the combinatorics and suggests a probability associated with each pattern variable
occurrence, something quite beyond Parsons' conceptualization. While there, though,
it would be possible to augment the measure of the magnitude of external energy (0 to
4 in the current simulation) with some measure of the certainty (or ambiguity) of the
information communicated by the energy, and thereby give the probabilistic approach
greater richness in explaining potentially non-deterministic situations. Again, we note
that this was quite beyond what Parsons explained.
In addition, the unit of analysis could be changed either up (to culture) or down
(to personality) or one embedded in the other, which is a variation of item 2, above.
Zelditch (1955) adds considerable detail to the explanations of an orbit and phase
movement that originate in Parsons et al. (1953a), so incorporating Zelditch's work might
be an important exercise to determine whether the simulation presented here would be
extensible along the lines that Parsons and his colleagues might have taken it.
Apply to more reported situations where there is a response to external energy
Only one application was made in the Results chapter to an instance reported in
scholarly journals. Therefore, it would be instructive to move from the demonstration of a
toy to a tool that explains reported structural and functional responses to external stimuli.
Candidates might include (Audia, Locke, & Smith, 2000; Barr, 1998; Chattopadhyay,
Glick, & Huber, 2001; Haveman, Russo, & Meyer, 2001; Hoffman, 1999; Holmwood,
1983; Marcus & Nichols, 1999).
In addition, a future approach might also focus on affect vs. affect-neutrality in
decision making, relying on such sources as "Toxic decision processes: A study of emo-
tion and organizational decision making" (Maitlis, 2004), The neurotic organization:
Diagnosing and changing counterproductive styles of management (Kets de Vries, 1984),
Unstable at the top: Inside the troubled organization (Kets de Vries, 1987), and The
Icarus paradox: How exceptional companies bring about their own downfall; new lessons
in the dynamics of corporate success, decline, and renewal (Miller, 1990). One aspect of
the focus on affect vs. affect-neutrality that is missing in the current research is that of
decision quality or organizational fitness: is there a better or worse Latent Pattern Maintenance function with respect to a realistic goal to be optimized? The current research adopts the single and naïve goal of matching internal energy to the pattern of external energy.
Clearly there is much room here for improvement.
Apply to agent-based systems
In the Research Methods section of the Methods chapter on the topic of selecting
the appropriate simulation technology, the observation was made that agent-based simu-
lation systems were gaining currency. In addition, there it was stated,
Simulating agents in Parsons’ theory of action might be a future application of the
simulation described here. If one viewed agents as co-operating and communi-
cating sequential processes (Hoare, 1985) (in the context of agent-based simula-
tion), then this study gives insight into the program that might be inside each
agent, that is, the instant research is a necessary precursor to an agent-based
simulation of Parsons’ theory of action.

Now, therefore, to apply the instant research to agent-based simulation, one must
construct a hierarchy or network into which agents fit. Much of this has been worked out
for the theory of action in a different context (Fararo & Skvoretz, 1984), namely, a hierar-
chy of interconnected automata that operate at different levels of interpenetrating
abstraction, different units of analysis. One of the advantages of the approach described
by Fararo and Skvoretz (1984) is that it preserves the non-determinism of agent-based
systems and, again, it is entirely grounded in the theory of action.
Address the dual of performance: organizational learning
The theory of action contains a duality of performance and learning. The simula-
tion reported here deals only with performance and neglects learning. Therefore the
simulation could be expanded to take into account learning. It is not clear how organiza-
tions learn and particularly how Parsons thought they did. Therefore, further research
could experiment with how each function makes sense of the energy presented to it and
how it changes its internal processing correspondingly.
Increase technical robustness
The human-computer interface could be improved. The current version is, to be
charitable, unusable by anyone but the researcher. There is a significant literature written
about how to construct effective interfaces between computers and humans. The ecologi-
cal interface seems particularly applicable (Bennett & Flach, 1992; Christoffersen, Hunter,
& Vicente, 1998; Goldstein, 1969; Hoffman & Ocasio, 2001; Howie & Vicente, 1998a;
Howie & Vicente, 1998b; Howie, Sy, Ford, & Vicente, 2000; Janzen & Vicente, 1998;
Mitchell & Miller, 1986; Pawlak & Vicente, 1996; Rasmussen & Batstone, n.d.;
Rasmussen, Duncan, & Leplat, 1987; Shneiderman, 1983; Vicente & Rasmussen, 1990;
Vicente & Rasmussen, 1992; Weir, 1991; Woods, 1984; Woods, 1991).
In addition, the simulation could be rewritten with provable correctness in mind,
so that testing and evaluation by a third party would be less important because the com-
puter program could be proved correct, given its specification. There are several (award
winning) methods for constructing and proving correct computer programs where the
programs contain timing (Hoare, 1985; Manna & Pnueli, 1991; Manna & Pnueli, 1995).
In sum
Alas, the real estimate of whether the model reported here will be sufficient for
further enrichment – which was the purpose of this research – can only be made by the
next researcher in turn, who will evaluate this scaffold for its ability to continue the con-
struction of a high fidelity replica of Parsons' theory of action.
EPILOGUE
As can be said of so many other doctoral candidates, this was not the dissertation I
set out to write. My first idea was to create a method of translating causal loop diagrams
(CLDs) into system dynamics models (an example of which is shown in Figure 1, p. 4).
Causal loop diagrams are informal drawings that show what are presumably causes and
effects among circular influences. They were made popular in Senge (1990); Senge was a
student of Jay Forrester, the "father" of system dynamics (Forrester, 1968). No one has
ever been able to translate from CLDs to system dynamics models because there is (so
much) information missing. I found some patterns that – when a few additional questions
were asked and answered – would provide a first draft system dynamics model from
CLDs. I was going to use some then-new results from qualitative physics, a branch of
mathematics that does not rely on exact quantities, in order to reason about the relation-
ships among variables.
In addition, I thought that causal loop diagrams might help with an endemic
problem in system dynamics modeling: the misperception of feedback (Diehl & Sterman,
1995; Kleinmuntz, 1993; Paich & Sterman, 1993; Sterman, 1989a; Sterman, 1989b). It
seems that our human cognition is not very good at seeing non-linear or cyclic or attenu-
ated cause and effect connections. And this has been demonstrated even among people
who construct such connections every day.
During tea at a George Washington University function I was chatting with Karl
Weick about my work because I knew that he was interested (I had written a school paper
in which I pointed out that I thought he was mistaken in Weick (1979, p. 69 ff) about cer-
tain system dynamics applications). He asked me whether I thought I was solving a
problem of ambiguity or of uncertainty. These are his shorthand terms for the two types
of equivocality. Weick has written that the purpose of organizations is to reduce equivo-
cality. Uncertainty is the want of information. Ambiguity is the want of sense(-making); there may be enough information, and it may be contradictory.
I was stunned because I did not know the answer to that simple question. I pon-
dered it a long time and spoke with system dynamics experts, including the author of
Figure 1. I came away with no answer, so I abandoned that work, in which I had invested
a significant portion of my research energy.
My next attempt was to see if I could apply some of the concepts of complex
adaptive systems (CAS) – also called chaos or complexity theory – to some real organ-
izational events. I had an idea that some of the arguments in the field – particularly about
whether change is (a) rapid and cataclysmic (averred by those supporting punctuated
equilibrium (Romanelli & Tushman, 1994) and quantum change (Miller, Friesen, &
Mintzberg, 1984)), or (b) incremental (Donaldson, 1996) – are simply on a continuum of
rate of change and that those changes could be more parsimoniously (and less polemi-
cally) explained by a fact of (non-linear) differential equations, the staple of CAS.
The problem was that I could not figure out what to measure. I still cannot, nor, it seems, can anyone who studies organizations from the CAS perspective. CAS appears to be a metaphor, not yet a computational tool.37

37 "Despite the promise indicated by various authors within the field, complexity science has thus far failed to deliver tangible tools that might be utilized in the examination of complex systems." (Richardson, Cilliers, & Lissack, 2001)
One of the turning points in my search to apply what I knew as an engineer and
physical scientist to social systems came when David Schwandt, the chair of the disserta-
tion committee and the Director of the Executive Leadership Program, invited me to read
Daft and Weick (1984), which relied on Boulding (1956). Basically, that work argues that social systems interpret the forces that impinge upon them; they do not "robotically" absorb and then reflect the energy aimed at them, as billiard balls would. A social system could absorb energy, reflect it, multiply it, delay it, consume it, or do with it whatever it wanted to, completely unlike physical systems, which conserve energy. That is, in a physical system there is a fixed amount of energy, and for one object to gain more means a loss somewhere else, and vice versa. In social systems there is no conservation, no limit to the energy in the system. Nearly all of physics is based upon an equation, an equality, that connects energy to its other embodiments. What would the energy in a social system be equal to? What equality would be preserved/conserved across social acts? I could not and cannot answer those questions, so I dropped my search for physics-like thinking, especially complex adaptive systems (also called complexity theory), applied to social systems.
The current research flowed from my interest in Parsons' theory of action because
I apply it every day in the delivery of advisory services. I use the theory of action to
evaluate the situation, diagnose the current state, and look for leverage for change. I
wondered whether I could animate the theory, as so many have for other social systems
before me.
I started to go to college by attending night school. It was the time of the military
draft and students were deferred if they made normal progress. In trying to make normal
progress I was forced to take courses that I could get into, whether I had the inclination or
prerequisites or not. During an early semester I took a computer course and did badly.
The next semester the only course for which I really had taken the prerequisite was the
follow-on computer course. In that more advanced course (it was the most advanced
offered at the university at the time) the instructor asked me to learn about a new thing,
discrete event simulation (DES). I learned the principles (current in 1967) and wrote a
computer program in the General Purpose Simulation System (GPSS) language, which
was brand new at the time, that simulated a grocery store, in particular something that
was first being tried in that era: designated lines for a small number of items. I was curi-
ous about whether those lines worked or not.
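For the curious reader, a minimal re-creation of that question – in Python rather than GPSS, and with illustrative arrival, basket-size, and service-time assumptions that are not taken from the original program – might look like the sketch below: two checkout lanes, one reserved for small baskets when an express limit is set, with waiting times computed by Lindley's recursion for a first-come, first-served server.

import random

def simulate(express_limit=10, n_customers=50_000, seed=1):
    """Mean waiting time (minutes) at a two-lane checkout."""
    rng = random.Random(seed)
    lane_free = [0.0, 0.0]                  # time at which each lane next becomes idle
    clock, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1 / 2.0)   # Poisson arrivals, mean gap 2 minutes
        items = rng.randint(1, 40)          # basket size
        service = 0.5 + 0.05 * items        # checkout time grows with basket size
        if express_limit:
            lane = 0 if items <= express_limit else 1      # lane 0 is the express line
        else:
            lane = min((0, 1), key=lambda i: lane_free[i]) # no express: join the sooner-free lane
        wait = max(0.0, lane_free[lane] - clock)  # Lindley's recursion for a FIFO server
        lane_free[lane] = clock + wait + service
        total_wait += wait
    return total_wait / n_customers

print("express lane for 10 items or fewer:", round(simulate(express_limit=10), 2))
print("two general-purpose lanes:         ", round(simulate(express_limit=0), 2))

Whatever the numbers show for a particular parameterization, the point is the same one the 1967 GPSS program made: the express-line policy question can be explored by running the store forward in simulated time rather than by argument.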
The effect on waiting times notwithstanding, my GPSS program impressed quite a
few people, so it ended up impressing me! And by that experience simulation became
something of a lens through which I viewed a part of the world, particularly the world of
management decision-making, which was to become my undergraduate focus in business
administration.
In 1975 for my employer at the time I was trying to predict the growth of adoption
of a new product. I already knew about the usual S-shaped growth pattern that one gets in
a restricted medium, like a Petri dish, and it had been applied to the adoption of technol-
ogy despite the obvious violation of assumptions. I was looking for something, well,
more human.
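For reference, the S-shaped pattern in question is the logistic curve of growth in a restricted medium. In its standard textbook form (not specific to this study), an adopting population N grows at intrinsic rate r toward a carrying capacity K:

$$\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right), \qquad N(t) = \frac{K}{1 + \frac{K-N_0}{N_0}\,e^{-rt}}$$

The assumptions the curve imports from the Petri dish – a fixed, well-mixed population adopting at a rate proportional both to those who have adopted and to those who have not – are exactly the assumptions that seemed strained when the population consists of people choosing whether to adopt.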
Limits to growth (Meadows, Meadows, Randers, & Behrens, 1972) had been
recently published and made fascinating reading. It was a simulation of how the world
would grow in the next century. It was my first exposure to system dynamics and it was
an impressive one. I wrote a simulation of product adoption based on what I learned from
Limits to growth. And I tried to stay current with what was called the world model by the
system dynamics community.
In the end I did not let my initial exposure to computing in that first university
course influence my final direction, mostly because I did so much better in the second
course, and that by learning and applying discrete event simulation. DES and system
dynamics have different heritages and often appear as intellectual schools that fight over
the same turf, much like any school of thought before it becomes "normal" (Kuhn, 1970).
Having some facility in both and no commitment to either, this would not be the last time
I would be spanning boundaries.
By the time I had earned my undergraduate degree in business I was very inter-
ested in computers, so I tried to pursue a graduate program in that field, ostensibly inside a
graduate school of business. The business school I selected turned out to be having a bat-
tle about the place of computing inside it and I could see myself as becoming a pawn in
the conflict, so I sought another place at the same university where I could learn com-
puting in a different setting. In the end I entered the school of engineering and applied
science, for which I virtually completely lacked the prerequisites and did not understand
most of the course titles! I had a lot of catching up to do.
I completed my master's work with a thesis that was well regarded and earned
me a visiting scientist position for a year at a distinguished physics institute. I was work-
ing on my PhD dissertation there, a simulation system that would permit an arbitrary
level of detail. One of the challenges in creating any simulation is that there are some
things you care about and some you do not. In each simulation system the choices about what a researcher can and cannot attend to have already been made. I wanted to permit
the modeler an arbitrary level of concern. As part of my literature search I read about 300
engineering dissertations, nearly all of which had been one year of work and did not build
on previous work, so none of them could attack the arbitrariness of the level of detail. I
ran out of time, too, and never completed the research. And there has never been a simu-
lation system that lets the researcher select an arbitrary level of detail/concern/abstraction.
Much later in my career I became a consultant to the parts of organizations in
which software is developed. Gradually the level of my clients inside those organizations
rose and the nature of their questions changed from technical to organizational: "You are
advocating that we work in teams. How long does it take a team to do its work?" "What’s
the best way for me to organize the 7500 people who work for me?"
This set of questions, and ones like it about how innovation is adopted, took me
away from my technical background and placed me on weak ground. So I pursued the
learning offered by the Executive Leadership Program’s doctoral degree. Again, I had
none of the prerequisites and had to study very hard just to catch up and then stay in
place.
I use every day what I learned and I believe that, despite my fits and starts on a
dissertation topic, I am a poster-child for the Program, a Program that encourages bound-
ary spanning by the example of its leader, Prof. Dave Schwandt, who is a recovering
physicist.
REFERENCES

(1997). Simulation and Gaming and the Teaching of Sociology. ASA Resources Materials for Teaching. American Sociological Association. 19 pages.

Abbott, A. (1988). Transcending general linear reality. Sociological Theory, 6(2), 169-186.

Abbott, A. (1992). From causes to events. Sociological Methods and Research, 20, 428-455.

Abbott, A. (2001). Time matters: On theory and method. Chicago: University of Chicago Press.

Achterkamp, M., & Imhof, P. (1999). The importance of being systematically surpriseable: Comparative social simulation as experimental technique. Journal of Mathematical Sociology, 23(4), 327-347.

Alexander, J. C. (1983). The later period (1): The interchange model and Parsons' final approach to multidimensional theory. In J. C. Alexander, The modern reconstruction of classical thought: Talcott Parsons (Vol. Four, pp. 73-118). Berkeley, CA: University of California Press.

Alexander, J. C., & Sciortino, G. (1996). On choosing one's intellectual predecessors: The reductionism of Camic's treatment of Parsons and the Institutionalists. Sociological Theory, 14(2), 154-171.

Ancona, D. G., & Chong, C. (1996). Entrainment: Pace, cycle, and rhythm in organizational behavior. In L. L. Cummings, & B. M. Staw, (Eds.), Research in organizational behavior (Vol. 18, pp. 251-284). Greenwich, CT: JAI Press.

Ashby, W. R. (1956). An introduction to cybernetics. New York, NY: John Wiley.

Audia, P. G., Locke, E. A., & Smith, K. G. (2000). The paradox of success: An archival and a laboratory study of strategic persistence following radical environmental change. Academy of Management Journal, 43(5), 837-853.

Axten, N., & Fararo, T. J. (1977). The information processing representation of institutionalized social action. Sociological Review Monograph, 24, 35-77.

Bainbridge, W. S. (1992). Social research methods and statistics: A computer-assisted introduction. Belmont, CA: Wadsworth.

Bales, R. F. (1950). Interaction process analysis. Cambridge: Addison-Wesley.

Bales, R. F. (1999). Social interaction systems: Theory and measurement. New Brunswick, NJ: Transaction Publishers.

Balzer, W., Sneed, J. D., & Moulines, C. U. (2000). Structuralist knowledge representation: Paradigmatic examples. Amsterdam: Rodopi.

Banks, J., & Carson, J. S., II. (1984). Discrete-event system simulation. Englewood Cliffs, NJ: Prentice-Hall.

Barber, B., & Inkeles, A. (Eds.). (1971). Stability and change: A volume in honor of Talcott Parsons. Boston: Little, Brown and Co.

Barkema, H. G., Baum, J. A. C., & Mannix, E. A. (Eds.). (2002). A new time. [Special research forum]. Academy of Management Journal, 45(5).

Barr, P. S. (1998). Adapting to unfamiliar environmental events: A look at the evolution of interpretation and its role in strategic choice. Organization Science, 9(6), 644-669.

Baudrillard, J. (1995). Simulacra and simulation. Ann Arbor, MI: University of Michigan Press.

Bennett, K. B., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34(5), 513-534.

Berger, J., & Zelditch, M., Jr. (1968). Sociological theory and modern society. [Book review]. American Sociological Review, 33(3), 446-450.

Bergmann, W. (1992). The problem of time in sociology: An overview of the literature on the state of theory and research on 'Sociology of Time,' 1900-82. Time & Society, 1(1), 81-134.

Black, M. (Ed.). (1961). The social theories of Talcott Parsons. Englewood Cliffs, NJ: Prentice-Hall.

Bluth, B. J. (1982). Parsons' general theory of action: A summary of the basic theory. Granada Hills, CA: NBS.

Boudon, R., & Bourricaud, F. (1989). A critical dictionary of sociology. London: Routledge.

Boulding, K. E. (1956). General systems theory: The skeleton of a science. Management Science, 2, 197-207.

Bourricaud, F. (1981). The sociology of Talcott Parsons. Chicago, IL: University of Chicago Press.

Bressler, M. (1961). Supplement: Some selected aspects of American sociology, September 1959 to December 1960. Annals of the American Academy of Political and Social Science, 337, 146-159.

Brodbeck, M. (1959). Models, meaning, and theories. In L. Gross, (Ed.), Symposium on sociological theory (pp. 373-403). New York: Harper & Row.

Bronson, R., & Jacobsen, C. (1986). Simulation and social theory. Simulation, 47(2), 58-62.

Bronson, R., Jacobsen, C., & Crawford, J. (1988). Estimating functional relationships in a macrosociological model. Mathematical Computer Modelling, 11, 386-390.

Brownstein, L. (1982). Talcott Parsons' general theory of action: An investigation of fundamental principles. Cambridge, MA: Schenkman Publishing Co.

Burrell, G., & Morgan, G. (1979). Sociological paradigms and organisational analysis. Portsmouth, NH: Heinemann.

Burton, R. M., & Obel, B. (Eds.). (1995). Design models for hierarchical organizations: Computation, information, and decentralization. Boston, MA: Kluwer Academic Publishers.

Cadwallader, M. L. (1959). The cybernetic analysis of change in complex social organizations. American Journal of Sociology, 65(2), 154-157.

Camic, C. (1996). Alexander's antisociology. Sociological Theory, 14(2), 172-186.

Camic, C. (1998). Reconstructing the theory of action. Sociological Theory, 16(3), 283-291.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.

Carley, K. M., & Prietula, M. J. (Eds.). (1994). Computational organization theory. Hillsdale, NJ: Lawrence Erlbaum Associates.

Chattopadhyay, P., Glick, W. H., & Huber, G. R. (2001). Organizational actions in response to threats and opportunities. Academy of Management Journal, 44(5), 937-955.

Checkland, P. (1999). Systems thinking, systems practice (30-year retrospective ed.). West Sussex, England: John Wiley & Sons.

Checkland, P., & Scholes, J. (1999). Soft systems methodology in action (30-year retrospective ed.). West Sussex, England: John Wiley & Sons.

Cherns, A. (1980). Work and values: Shifting patterns in industrial society. International Social Science Journal, 32(3), 427-441.

Chistoffersen, K., Hunter, C. N., & Vicente, K. J. (1998). A longitudinal study of the effects of ecological interface design on deep knowledge. International Journal of Human-Computer Studies, 48(6), 729-762.

Coleman, J. S. (1964). Introduction to mathematical sociology. New York, NY: Free Press of Glencoe.

Coleman, J. S. (1965). The use of electronic computers in the study of social organizations. Archives Européennes De Sociologie, VI(I), 89 ff.

Collins, L. M., & Sayer, A. G. (Eds.). (2001). New methods for the analysis of change. Washington DC: American Psychological Association.

Conte, R., Hegselmann, R., & Terno, P. (Eds.). (1997). Simulating social phenomena. Heidelberg, Germany: Springer.

Conway, R. W., & McClain, J. O. (2003). The conduct of an effective simulation study. INFORMS Transactions on Education, 3(3).

Coyle, G. (2000). Qualitative and quantitative modelling in system dynamics: Some research questions. System Dynamics Review, 16(3), 225-244.

Cross, W. M. (1980). The use of situation-generated simulation games in the teaching of sociology. For presentation at the Annual Meeting of the Illinois Sociological Society.

Cubitt, S. (2001). Simulation and social theory. London: Sage.

Cyert, R. M., & March, J. G. (1963). A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.

Cyert, R. M., & March, J. G. (1992). A behavioral theory of the firm (2nd ed.). Cambridge, MA: Blackwell.

Daft, R. L., & Weick, K. E. (1984). Toward a model of organizations as interpretation systems. Academy of Management Review, 9(2), 284-295.

Dar-El, E. M. (2000). Human learning: From learning curves to learning organizations. Boston, MA: Kluwer Academic Publishers.

Davis, K. (1959). The myth of functional analysis as a special method in sociology and anthropology. American Sociological Review, 24(6), 757-772.

Diehl, E., & Sterman, J. D. (1995). Effects of feedback complexity on dynamic decision making. Organizational Behavior & Human Decision Processes, 62(2), 198-215.

Donaldson, L. (1996). For positivist organization theory: Proving the hard core. London: Sage.

Dubin, R. (1960). Parsons' actor: Continuities in social theory. American Sociological Review, 25(4), 457-466.

Dukes, R. L. (1975). An evaluation of six prominent simulation games for teaching undergraduate sociology. Presented at the 50th Annual Meeting of the Southwestern Sociological Association.

Durkheim, E. (1951). Suicide, a study in sociology. Glencoe, IL: Free Press.

Ebbinghaus, H. (1987). Memory: A contribution to experimental psychology. New York: Dover Publications.

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Washington, DC: Brookings Institution Press.

Etzioni, A. (1975). A comparative analysis of complex organizations. New York, NY: Free Press.

Fararo, T. J. (1984). Neoclassical theorizing and formalization in sociology. In T. J. Fararo, (Ed.), Mathematical ideas and sociological theory (pp. 143-175). New York: Gordon and Breach.

Fararo, T. J. (1989). The meaning of general theoretical sociology: Tradition and formalization. Cambridge, England: Cambridge University Press.

Fararo, T. J. (2001). Social action systems: Foundation and synthesis in sociological theory. Westport, CT: Praeger.

Fararo, T. J., & Hummon, N. P. (1994). Discrete event simulation and theoretical models in sociology. In B. Markovsky, K. Heimer, & J. O'Brien, (Eds.), Advances in group processes (Vol. 11, pp. 25-66). Greenwich, CT: JAI Press.

Fararo, T. J., & Skvoretz, J. (1984). Institutions as production systems. Journal of Mathematical Sociology, 10, 117-182.

Fishman, G. S. (2001). Discrete-event simulation: Modeling, programming, and analysis. New York: Springer-Verlag.

Forrester, J. (1968). Principles of systems. Cambridge, MA: MIT Press.

Garson, G. D. (1994). Social science computer simulation: Its history, design, and future. Social Science Computer Review, 12(1), 55-82.

Gilbert, N., & Conte, R. (Eds.). (1995). Artificial societies: The computer simulation of social life. London: UCL Press.

Goldstein, M. (1969). Display aspects of algebra. Psychological Reports, 24, 937-938.

Gullahorn, J. T., & Gullahorn, J. E. (1963). A computer model of elementary social behavior. Computers in Behavioral Science.

Habermas, J. (1981). Talcott Parsons: Problems of theory construction. Sociological Inquiry, 51(3/4), 173-196.

Hamagami, F., & McArdle, J. J. (2001). Advanced studies of individual differences: Linear dynamic models for longitudinal data analysis. In G. A. Marcoulides, & R. E. Schumacker, (Eds.), New developments and techniques in structural equation modeling (pp. 203-246). Mahwah, NJ: Lawrence Erlbaum Associates.

Hamblin, R. L., Jacobsen, R. B., & Miller, J. L. L. (1973). A mathematical theory of social changes. New York, NY: Wiley-Interscience.

Hanneman, R., & Patrick, S. (1997). On the uses of computer-assisted simulation modeling in the social sciences. Sociological Research Online, 2(2).

Hanneman, R. A. (1988). Computer-assisted theory building: Modeling dynamic social systems. Newbury Park, CA: Sage.

Harré, R., & Secord, P. F. (1972). The explanation of social behaviour. Oxford, England: Basil Blackwell.

Hassard, J. (1990). Introduction: The sociological study of time. In J. Hassard, (Ed.), The sociology of time (pp. 1-18). New York: St. Martin's Press.

Haveman, H. A., Russo, M. V., & Meyer, A. D. (2001). Organizational environments in flux: The impact of regulatory punctuations on organizational domains, CEO succession, and performance. Organization Science, 12(3), 253-273.

Hayes, A. C. (1981). Structure and creativity: The use of transformational-generative models in action theory. Sociological Inquiry, 51(3-4), 219-239.

Heise, D. R. (1979). Understanding events: Affect and the construction of social actions. Cambridge, England: Cambridge University Press.

Hills, R. J. (1968). Towards a science of organization. Eugene, OR: Center for the Advanced Study of Educational Administration.

Hoare, C. A. R. (1985). Communicating sequential processes. Englewood Cliffs, NJ: Prentice-Hall International.

Hoffman, A. J. (1999). Institutional evolution and change: Environmentalism and the U.S. chemical industry. Academy of Management Journal, 42(4), 351-371.

Hoffman, A. J., & Ocasio, W. (2001). Not all events are attended equally: Toward a middle-range theory of industry attention to external events. Organization Science, 12(4), 414-434.

Holmwood, J. (1996). Founding sociology? Talcott Parsons and the idea of general theory. New York: Longman.

Holmwood, J. M. (1983). Action, system and norm in the action frame of reference: Talcott Parsons and his critics. Sociological Review, New Series, 31, 310-336.

Honderich, T. (Ed.). (1995). Oxford companion to philosophy. New York, NY: Oxford University Press.

Howie, D. E., & Vicente, K. J. (1998a). Making the most of ecological interface design: The role of self-explanation. International Journal of Human-Computer Studies, 49(5), 651-674.

Howie, D. E., & Vicente, K. J. (1998b). Measures of operator performance in complex, dynamic microworlds: Advancing the state of the art. Ergonomics, 41(4), 485-500.

Howie, E., Sy, S., Ford, L., & Vicente, K. J. (2000). Human-computer interface design can reduce misperceptions of feedback. System Dynamics Review, 16(3), 151-171.

Ilgen, D. R., & Hulin, C. L. (Eds.). (2000). Computational modeling of behavior in organizations: The third scientific discipline. Washington, DC: American Psychological Association.

Jaber, M. Y., & Sikström, S. (2004). A note on "An empirical comparison of forgetting models". IEEE Transactions on Engineering Management, 51(2), 233-234.

Jackson, M. A. (1983). System development. Englewood Cliffs, NJ: Prentice-Hall.

Jacobsen, C., & Bronson, R. (1995). Computer simulation and empirical testing of sociological theory. Sociological Methods & Research, 23(4), 479-506.

Jacobsen, C., & Bronson, R. (1985). Simulating violators. Operations Research Society of America [now Institute for Operations Research and Management Science].

Jacobsen, C., & Bronson, R. (1987). Defining sociological concepts as variables for system dynamics modeling. System Dynamics Review, 3(1), 1-7.

Jacobsen, C., & Bronson, R. (1997). Computer simulated empirical tests of social theory: Lessons from 15 years' experience. In R. Conte, R. Hegselmann, & P. Terno (Eds.), Simulating social phenomena (pp. 97-102). Heidelberg, Germany: Springer.

Jacobsen, C., Bronson, R., & Vekstein, D. (1990). A strategy for testing the empirical adequacy of macro-sociological theories. Journal of Mathematical Sociology, 15(2), 137-148.

Janzen, M. E., & Vicente, K. J. (1998). Attention allocation within the abstraction hierarchy. International Journal of Human-Computer Studies, 48(4), 521-545.

Jin, Y., & Levitt, R. (1996). The virtual design team: A computational model of project organizations. Computational & Mathematical Organization Theory, 2(3), 171-196.

Kant, I. (1896). Critique of pure reason. New York, NY: Macmillan.

Keat, R., & Urry, J. (1982). Social theory as science. London: Routledge & Kegan Paul.

Kets de Vries, M. F. R. (1984). The neurotic organization: Diagnosing and changing counterproductive styles of management. San Francisco, CA: Jossey-Bass.

Kets de Vries, M. F. R. (1987). Unstable at the top: Inside the troubled organization. New York, NY: New American Library.

Kleinmuntz, D. (1993). Information processing and misperceptions of the implications of feedback in dynamic decision making. System Dynamics Review, 9(3), 223-237.

Kleinrock, L. (1975-1976). Queueing systems (Vols. 1-2). New York: Wiley.

Kolb, W. L. (1962). The social theories of Talcott Parsons: A critical examination. American Journal of Sociology, 67(5), 590-591.

Kuhn, T. S. (1970). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.

Lackey, P. N. (1987). Invitation to Talcott Parsons' theory. Houston: Cap and Gown Press.

Lane, D. (2001). Rerum cognoscere causas: Part II -- Opportunities generated by the agency/structure debate and suggestions for clarifying the social theoretic position of system dynamics. System Dynamics Review, 17(4), 293-309.

Lave, C. A., & March, J. G. (1993). An introduction to models in the social sciences. Lanham, MD: University Press of America.

Leik, R. K., & Meeker, B. F. (1995). Computer simulation for exploring theories: Models of interpersonal cooperation and competition. Sociological Perspectives, 38(4), 463-482.

Lin, Z. (2000). Organizational performance under critical situations -- exploring the role of computer modeling in crisis case analysis. Computational & Mathematical Organization Theory, 6(3), 277-310.

Lomi, A., & Larsen, E. R. (1995). The emergence of organizational structure. In R. M. Burton, & B. Obel, (Eds.), Design models for hierarchical organizations: Computation, information, and decentralization (pp. 209-231). Boston: Kluwer.

Loubser, J. J., Baum, R. C., Effrat, A., & Lidz, V. M. (Eds.). (1976a). Explorations in general theory in social science: Essays in honor of Talcott Parsons (Vol. I). New York: Free Press.

Loubser, J. J., Baum, R. C., Effrat, A., & Lidz, V. M. (Eds.). (1976b). Explorations in general theory in social science: Essays in honor of Talcott Parsons (Vol. II). New York: Free Press.

Luna-Reyes, L. F., & Andersen, D. L. (2003). Collecting and analyzing qualitative data for system dynamics: Methods and models. System Dynamics Review, 19(4), 271-296.

Lüscher, K. (1974). Time: A much-neglected dimension in social theory and research. Sociological Analysis and Theory, 4, 101-117.

Maitlis, S. O. H. (2004). Toxic decision processes: A study of emotion and organizational decision making. Organization Science, 15(4), 375-393.

Manna, Z., & Pnueli, A. (1991). The temporal logic of reactive and concurrent systems. New York: Springer.

Manna, Z., & Pnueli, A. (1995). Temporal verification of reactive systems: Safety. New York: Springer.

Marcus, A. A., & Nichols, M. L. (1999). On the edge: Heeding the warnings of unusual events. Organization Science, 10(4), 482-499.

Markley, O. W. (1967). A simulation of the SIVA model of organizational behavior. American Journal of Sociology, 73(3), 339-347.

Mazur, J. E., & Hastie, R. (1978). Learning as accumulation: Reexamination of the learning curve. Psychological Bulletin, 85(6), 1256-1274.

McCleary, R., & Hay, R. A., Jr. (1980). Applied time series analysis for the social sciences. Beverly Hills, CA: Sage.

McGowan, J. (1998). Towards a pragmatic theory of action. Sociological Theory, 16(3), 292-297.

Meadows, D. H., Meadows, D. L., Randers, J., & Behrens, W. W. (1972). The limits to growth. New York, NY: Universe Books.

Miller, D. (1990). The Icarus paradox: How exceptional companies bring about their own downfall; new lessons in the dynamics of corporate success, decline, and renewal. New York, NY: Harper Business.

Miller, D., Friesen, P. H., & Mintzberg, H. (1984). Organizations: A quantum view. Englewood Cliffs, NJ: Prentice-Hall.

Mitchell, C. M., & Miller, R. A. (1986). A discrete control model of operator function: A methodology for information display design. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(3), 343-357.

Mize, J. H., & Cox, J. G. (1968). Essentials of simulation. Englewood Cliffs, NJ: Prentice-Hall.

Moore, W. E. (1959). The whole state of sociology. [Book reviews of Sociology today: Problems and prospects and Symposium on sociological theory]. American Sociological Review, 24(5), 715-718.

Morrison, M., & Morgan, M. S. (1999). Models as mediating instruments. In M. S. Morgan, & M. Morrison, (Eds.), Models as mediators: Perspectives on natural and social science (pp. 10-37). Cambridge, England: Cambridge University Press.

Moss, S. (2000). Canonical tasks, environments and models for social simulation. Computational & Mathematical Organization Theory, 6(3), 249-275.

Mueller, R. O. (1996). Basic principles of structural equation modeling: An introduction to LISREL and EQS. New York: Springer.

Nembhard, D. A., & Osothsilp, N. (2001). An empirical comparison of forgetting models. IEEE Transactions on Engineering Management, 48(3), 283-291.

Nembhard, D. A., & Osothsilp, N. (2004). Authors' reply to "A note on 'An empirical comparison of forgetting models'". IEEE Transactions on Engineering Management, 51(2), 235.

Nembhard, D. A., & Uzumeri, M. V. (2000). An individual-based description of learning within an organization. IEEE Transactions on Engineering Management, 47(3), 370-378.

Nowakowska, M. (1973). A formal theory of actions. Behavioral Science, 18, 393-416.

Paich, M., & Sterman, J. D. (1993). Boom, bust, and failures to learn in experimental markets. Management Science, 39(12), 1439-1458.

Park, P. (1967). Measurement of the pattern variables. Sociometry, 30(2), 187-198.

Parsons, T. (1951). The social system. Chicago, IL: Free Press.

Parsons, T. (1954). Essays in sociological theory (Revised ed.). New York, NY: Free Press.

Parsons, T. (1960). Pattern variables revisited: A response to Robert Dubin. American Sociological Review, 25(4), 467-483.

Parsons, T. (1961a). The general interpretation of action: Editorial forward. In T. Parsons, E. Shils, K. D. Naegele, & J. R. Pitts, (Eds.), Theories of society: Foundations of modern sociological theory (pp. 85-97). New York: Free Press.

Parsons, T. (1961b). An outline of the social system. In T. Parsons, E. Shils, K. D. Naegele, & J. R. Pitts, (Eds.), Theories of society: Foundations of modern sociological theory (pp. 30-79). New York: Free Press.

Parsons, T. (1961c). The point of view of the author. In M. Black (Ed.), The social theories of Talcott Parsons (pp. 311-363). Englewood Cliffs, NJ: Prentice-Hall.

Parsons, T. (1968a). The structure of social action: A study in social theory with special reference to a group of recent European writers (With a new introduction ed., Vol. I). New York, NY: Free Press.

Parsons, T. (1968b). The structure of social action: A study in social theory with special reference to a group of recent European writers (With a new introduction ed., Vol. II). New York, NY: Free Press.

Parsons, T. (1970). The impact of technology on culture and emerging new modes of behaviour. International Social Science Journal, XXII(4), 607-627.

Parsons, T. (1974). The institutional function in organization theory. Organization and Administrative Sciences, 5(1), 3-12.

Parsons, T. (1977a). On building social systems theory: A personal history. In T. Parsons, Social systems and the evolution of action theory (pp. 22-76). New York, NY: Free Press.

Parsons, T. (1977b). Social systems and the evolution of action theory. New York, NY: Free Press.

Parsons, T. (1977c). Some problems of general theory in sociology. In T. Parsons, Social systems and the evolution of action theory (pp. 229-269). New York, NY: Free Press.

Parsons, T. (1978a). Epilogue. In The doctor-patient relationship in the changing health scene (pp. 445-455). Washington, D.C.: U.S. Department of Health, Education, and Welfare.

Parsons, T. (1978b). Comment on R. Stephen Warner's "Toward a redefinition of action theory: Paying the cognitive element its due". American Journal of Sociology, 83(6), 1350-1358.

Parsons, T. (1982). The pattern variables. In L. Mayhew, (Ed.), Talcott Parsons on institutions and social evolution: Selected writings (pp. 106-114). Chicago: University of Chicago Press.

Parsons, T., & Bales, R. F. (1953). The dimensions of action-space. In T. Parsons, R. F. Bales, & E. A. Shils, Working papers in the theory of action (Chap. 3, pp. 63-109). Glencoe, IL: Free Press.

Parsons, T., Bales, R. F., & Shils, E. A. (1953a). Phase movement in relation to motivation, symbol formation, and role structure. In T. Parsons, R. F. Bales, & E. A. Shils, Working papers in the theory of action (Chap. 5, pp. 163-269). Glencoe, IL: Free Press.

Parsons, T., Bales, R. F., & Shils, E. A. (Eds.). (1953b). Working papers in the theory of action. Glencoe, IL: Free Press.

Parsons, T., & Platt, G. M. (1973). The American university. Cambridge, MA: Harvard University Press.

Parsons, T., & Shils, E. A. (1951). Toward a general theory of action. Cambridge, MA: Harvard University Press.

Parsons, T., & Smelser, N. J. (1956). Economy and society: A study in the integration of economic and social theory. Glencoe, IL: Free Press.

Pawlak, W. S., & Vicente, K. J. (1996). Inducing effective operator control through ecological interface design. International Journal of Human-Computer Studies, 44(5), 653-688.

Pfahl, D., Laitenberger, O., Dorsch, J., & Ruhe, G. (2003). An externally replicated experiment for evaluating the learning effectiveness of using simulations in software project management education. Empirical Software Engineering, 8(4), 367-395.

Phelan, S. E. (1995). Using simulation for theory generation in strategic management. 2nd Australasian Conference in Strategic Management, La Trobe University, Melbourne, Australia. 6 pages.

Podell, L. (1966). Sex and role conflict. Journal of Marriage and the Family, 28(2), 163-1165.

Podell, L. (1967). Occupational and familial role-expectations. Journal of Marriage and the Family, 29(3), 492-493.

Prietula, M. J., Carley, K. M., & Gasser, L. (1998). A computational approach to organizations and organizing. In M. J. Prietula, K. M. Carley, & L. Gasser, (Eds.), Simulating organizations: Computational models of institutions and groups (pp. xiii-xix). Menlo Park, CA: AAAI Press/MIT Press.

Rasmussen, J. (1985). The role of hierarchical knowledge representation in decision making and system management. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(2), 234-243.

Rasmussen, J., & Batstone, R. (n.d.). Why do complex organizational systems fail? World Bank.

Rasmussen, J., Duncan, K., & Leplat, J. (Eds.). (1987). New technology and human error. Chichester, England: John Wiley & Sons.

Richardson, K. A., Cilliers, P., & Lissack, M. (2001). Complexity science: A "gray" science for the "stuff in between". Emergence: A Journal of Complexity Issues in Organizations and Management, 3(2), 6-18.

Riley, M. W., & Nelson, E. E. (1971). Research on stability and change in social systems. In B. Barber, & A. Inkeles, (Eds.), Stability and social change (pp. 407-449). Boston, MA: Little, Brown and Co.

Robinson, S. (2001). Soft with a hard centre: Discrete-event simulation in facilitation. Journal of the Operational Research Society, 52, 905-915.

Rocher, G. (1975). Talcott Parsons and American sociology. New York: Barnes & Noble.

Romanelli, E., & Tushman, M. L. (1994). Organizational transformation as punctuated equilibrium: An empirical test. Academy of Management Journal, 37(5), 1141-1166.

Rowell, L. (1989). Foreword. In Time and process: Interdisciplinary issues (The Study of Time VII) (pp. vii-ix). Madison, CT: International Universities Press.

Samuelson, D. A. (2000). Designing organizations: CMOT launches success on a solid scientific foundation (What is CMOT and why is it taking off?). ORMS Today, 27(6), 24-27.

Sastry, M. A. (1997). Problems and paradoxes in a model of punctuated organizational change. Administrative Science Quarterly, 42(2), 237-275.

Savage, S. P. (1981). The theories of Talcott Parsons: The social relations of action. New York, NY: St. Martin's Press.

Selznick, P. (1961). The social theories of Talcott Parsons. American Sociological Review, 26(6), 932-935.

Senge, P. M. (1990). The fifth discipline: The art & practice of the learning organization. New York, NY: Doubleday.

Shackle, G. L. S. (1969). Decision order and time in human affairs (2nd ed.). Cambridge, England: Cambridge University Press.

Shneiderman, B. (1983). Direct manipulation: A step beyond programming languages. IEEE Computer, 57-69.

Sibeon, R. (1999). Anti-reductionist sociology. Sociology: Journal of the British Sociological Association, 33(2), 317.

Skvoretz, J., & Fararo, T. J. (1980). Languages and grammars of action and interaction: A contribution to the formal theory of action. Behavioral Science, 25(1), 9-22.

Skvoretz, J., & Fararo, T. J. (1996). Generating symbolic interaction: Production system models. Sociological Methods & Research, 25(1), 60-102.

Skvoretz, J., Fararo, T. J., & Axten, N. (1980). Role-programme models and the analysis of institutional structure. Sociology, 14(1), 49-67.

Stegmüller, W. (1979). The structuralist view of theories. Berlin: Springer-Verlag.

Sterman, J. D. (1989a). Misperceptions of feedback in dynamic decision making. Organizational Behavior and Human Decision Processes, 43(3), 301-335.

Sterman, J. D. (1989b). Modeling managerial behavior: Misperceptions of feedback in a dynamic decision making experiment. Management Science, 35(3), 321-339.

Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. Boston: Irwin McGraw-Hill.

Thomsen, J., Levitt, R. E., & Kunz, J. (1999). A trajectory for validating computational emulation models of organizations. Computational & Mathematical Organization Theory, 5(4), 385-401.

Thorngate, W. (1976). "In general" vs. "It depends": Some comments on the Gergen-Schlenker debate. Personality & Social Psychology Bulletin, 2, 404-410.

Troitzsch, K. G. (1998). Multilevel process modeling in the social sciences: Mathematical analysis and computer simulation. In W. B. G. Liebrand, A. Nowak, & R. Hegselmann, (Eds.), Computer modeling of social processes (pp. 20-36). London: Sage.

Tsuchiya, S. (1966). A new role for computerized simulation in social science: Summary thoughts on a case study. Simulation & Gaming, 27(1), 103-109.

Tuma, N. B., & Hannan, M. T. (1984). Social dynamics: Models and methods. Orlando, FL: Academic Press.

Turner, B. S. (Ed.). (1999). The Talcott Parsons reader. Malden, MA: Blackwell.

Tushman, M. L., & Romanelli, E. (1985). Organizational evolution: A metamorphosis model of convergence and reorientation. Research in Organizational Behavior, 7, 171-222.

Udy, S. H., Jr. (1960). Structure and process in modern societies. [Book review]. American Journal of Sociology, 66(1), 96.

Uzmeri, M., & Nembhard, D. (1998). A population of learners: A new way to measure organizational learning. Journal of Operations Management, 16(5), 515-528.

Vaill, P. B. (1996). Learning as a way of being: Strategies for survival in a world of permanent white water. San Francisco: Jossey-Bass.

van Fraassen, B. C. (2002). The empirical stance. New Haven, CT: Yale University Press.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating 'direct perception' in complex work domains. Ecological Psychology, 2(3), 207-249.

Vicente, K. J., & Rasmussen, J. (1992). Ecological interface design: Theoretical foundations. IEEE Transactions on Systems, Man, and Cybernetics, 22(4), 589-606.

Waller, M. J. (1999). The timing of adaptive group responses to nonroutine events. Academy of Management Journal, 42(2), 127-137.

Weick, K. (1979). The social psychology of organizing (2nd ed.). New York: McGraw-Hill.

Weir, G. R. S. (1991). Living with complex interactive systems. In G. R. S. Weir, & J. L. Alty, (Eds.), Human-computer interaction and complex systems (Chap. 1, pp. 1-21). London: Academic Press.

Whitehead, A. N. (1927). Science and the modern world: Lowell lectures, 1925. New York, NY: Macmillan.

Williams, R. M., Jr. (1959). Friendship and social values in a suburban community: An exploratory study. Pacific Sociological Review, 2(1), 3-10.

Wixted, J. T., & Ebbesen, E. B. (1997). Genuine power curves in forgetting: A quantitative analysis of individual subject forgetting functions. Memory & Cognition, 25(1), 731-739.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1991). The cognitive engineering of problem representations. In G. R. S. Weir, & J. L. Alty, (Eds.), Human-computer interaction and complex systems (Chap. 7, pp. 169-188). London: Academic Press.

Zeigler, B. P., Praehofer, H., & Kim, T. G. (2000). Theory of modeling and simulation: Integrating discrete event and continuous complex dynamic systems (2nd ed.). San Diego, CA: Academic Press.

Zelditch, M., Jr. (1955). A note on the analysis of equilibrium systems. In T. Parsons, & R. F. Bales, Family, socialization and interaction process (pp. 401-408). New York: Free Press.

APPENDIX – ATTESTATION OF AN EXPERT



APPENDIX – SIMULATION PROGRAM LISTING

This section contains the complete simulation model in the language of SIMUL8, a product from Simul8 Corporation, 26th Floor, 225 Franklin Street, Boston, MA 02110; telephone: 800 547 6024. More information is available at http://www.simul8.com
The model shown here was written and executed in Release 10 Standard.
The most important part of the listing is the last section, Learning Model Common. It enacted the learning portion of Latent Pattern Maintenance and adjusted the Adaptation filter in order to reduce tension. It was invoked on each exit from the Adaptation function.
There is a narrated illustration of the model in action at http://www.Master-Systems.com/Parsons.ivnu
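Because not every reader has SIMUL8 available, a rough Python paraphrase of the Learning Model Common logic that closes this listing is given below. It follows the structure of the Visual Logic – a windowed history of external energy, an exponentially decaying maximum floored by the recent moving average, and a hyperbolic learning curve that yields the new Adaptation threshold (Temp) – but it is a sketch rather than a line-for-line port, and the parameter defaults shown are illustrative, not the values read from GameInput.XLS.

import math
from collections import deque

class LearningModelCommon:
    """Rough paraphrase of the SIMUL8 'Learning Model Common' Visual Logic below."""

    def __init__(self, window=52, threshold=2.0, decay=-0.001, p=0.0, r=1.0):
        self.window = window        # ResponseWindow: how much external history is remembered
        self.threshold = threshold  # ThresholdToRespond: ignore jumps smaller than this
        self.decay = decay          # DecayCoefficient: forgetting rate of the old maximum
        self.p = p                  # cumulative prior learning interval
        self.r = r                  # learning-curve shape parameter
        self.history = deque(maxlen=window)
        self.max_energy = 0.0       # largest external energy seen (its effect decays over time)
        self.time_of_max = 0.0
        self.saved_clock = 0.0
        self.k = 0.0                # current target level for the learning curve

    def update(self, energy, now):
        """Process one work item's external energy; return the new Adaptation filter (Temp)."""
        self.history.append(energy)
        if energy - self.max_energy > self.threshold:
            # A sufficiently large new maximum: reset the decaying reference level.
            self.max_energy, self.time_of_max = energy, now
            self.k = self.max_energy * math.exp(self.decay)
        else:
            # Usual case: the remembered maximum decays exponentially since it occurred.
            self.k = self.max_energy * math.exp((now - self.time_of_max) * self.decay)
        moving_average = sum(self.history) / len(self.history)
        self.k = max(self.k, moving_average)       # never aim below the recent average
        self.p += now                              # mirrors SET p = p + Simulation Time
        x = now - self.saved_clock                 # time since the last adjustment
        temp = self.k * (x + self.p) / ((x + self.p) + self.r)  # hyperbolic learning curve
        self.saved_clock = now
        return temp

Fed the same series of external energy values, update() should trace an internal-energy curve of broadly the same shape as the column the Visual Logic writes back to the spreadsheet.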

SIMUL8 Documentation for: Game 1.0.S8 at time 10/9/2004 10:15:10 PM


Version: 10.0.0.678
-----------------------------------------------------------------------
Parsons' Game

A "game" where an organization tries to match its internal energy to


the external, whose future pattern is unknown. It performs the match by
imitating Parsons' theory of action, and in particular the Latent
Pattern Maintenance function that adjusts internal energy to patterns
in the past external energy.

Created by: Stan Rifkin


Last opened by: Stan Rifkin

***********************************************************************
General Simulation Information
------------------------------

Warm Up Time: 0 Results Collection Time: 2400 (Days)


Start of day: 540 Length of day: 480 , Days per week: 5
Current Random Stream Set: 1
Data display when simulation stopped: Work Item Count
***********************************************************************

Distributions
SetEnergy
External
Column of Data
Cell R6C2
GameInput.xls
SetAffect
External
Column of Data
Cell R6C1
GameInput.XLS
Energy
Label Based :Energy
ADwell
Label Based :ADwell
GDwell
Label Based :GDwell


IDwell
Label Based :IDwell
LDwell
Label Based :LDwell
SelectIDwell
SetDUE

Labels
Energy
(Number)
Affect
(Number)
Label
(Number)
ADwell
(Number)
GDwell
(Number)
IDwell
(Number)
LDwell
(Number)
DUE
(Number)
AlwaysOne
(Number)
WAIT TIME
(Number)
WORK TIME
(Number)
PRIORITY
(Number)

Images
Default Image Entry
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Storage Bin
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Work Center
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Exit
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Resource
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Conveyor
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Tank
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Rotz
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Process
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Loader
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Vehicle
Width: 32 Height: 32
Transparent Color: 16777215
Default Image Component
Width: 32 Height: 32
Transparent Color: 16777215
Default Image 3D Light
Width: 32 Height: 32
Transparent Color: 16777215
Default Image 3D Object
Width: 32 Height: 32
Transparent Color: 16777215
Bolt_m
Width: 16 Height: 13
Transparent Color: 16777215
Image 2
Width: 32 Height: 32
Transparent Color: 16777215
Image 3
Width: 31 Height: 32
Transparent Color: 16777215

SIMUL8 Windows and sub-windows


------------------------------

Open
Icon Location X:640 Y:497 W:32 H:32
Window Location X:4 Y:124 W:1255 H:823
Color 16777215

Work Item Types


---------------

Main Work Item Type


Image: Bolt_m
Length 1
Attached Labels:
Energy
Affect
Label
ADwell
GDwell
IDwell
LDwell
DUE
AlwaysOne
WAIT TIME
WORK TIME
PRIORITY

***********************************************************************

Simulation Objects
------------------

Work Entry
External World
--------------
Display Parameters 4
X:205 Y:298 W:32 H:32
Xinc -10 Yinc 0
Image 0 Default Image Conveyor
Show Image
Do not collect results
Work Item Type: Main Work Item Type
Inter-arrival time
Distribution Detail:
Fixed 5 0 0 0
Route Out Objects
AFilter
On Label Action Visual Logic:
VL SECTION: Set dwell time properties
SET PRIORITY = Affect
'If Affect = 1 then Affect is required.
IF Affect = 1
SET ADwell = Table[10,6]
SET GDwell = Table[10,9]
SET IDwell = Table[10,12]
SET LDwell = Table[10,15]
ELSE
SET ADwell = Table[10,5]
SET GDwell = Table[10,8]
SET IDwell = Table[10,11]
SET LDwell = Table[10,14]
SET DUE = IDwell
Label Actions
PRIORITY
None
AlwaysOne
Set
Distribution Detail:
Fixed 1 0 0 0
Affect
Set
Distribution Detail:
Uses: SetAffect
External (Excel)
Energy
Set
Distribution Detail:
Uses: SetEnergy
External (Excel)
GDwell
None
ADwell
None
LDwell
None
IDwell
None
DUE
None

Work Center
AFilter
-------
Display Parameters 4
X:345 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
External World
Require resources before collecting any work items
Routing Out
Label
On label: Label
Preference only
Route Out Objects
Energy that does not enter
Waiting for Adaptation
Affect processing
Release resources as soon as task complete
Operation Time
Distribution Detail:
Fixed 0 0 0 0
On Label Action Visual Logic:
VL SECTION: AFilter Action Logic
CALL Learning Model Common
'GT is the greater than relation (>) and GTE is greater than or
equal to (>=).
'Temp contains the new threshold, computed from the learning
function.
IF Relation = GT
IF Energy > Temp
IF Affect = 0
SET Label = 2
'Path 2 = (Normal, affect neutral) Adaptation
ELSE
SET Label = 3
'Path 3 = Adaptation with Affect
ELSE
SET Label = 1
'Path 1 = exit the organization
ELSE IF Relation = GTE
IF Energy >= Temp


IF Affect = 0
SET Label = 2
'Path 2 = (affect neutral) Adaptation
ELSE
SET Label = 3
'Path 3 = Adaptation with Affect
ELSE
SET Label = 1
'Path 1 = exit the organization
ELSE
SET Label = 1
Label Actions
Label
None

Work Center
Adaptation
----------
Display Parameters 4
X:444 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Waiting for Adaptation
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Queue for Goal Attainment
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: ADwell
Label Based

Work Center
Integration
-----------
Display Parameters 4
X:660 Y:509 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Priority label: Affect
Routing In
Priority
Route In Objects
Queue from GA affect neutral
Goal Attainment Affect
Interrupted Integration
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Queue for Latent Pattern Maintenance
Release resources as soon as task complete
Operation Time
Distribution Detail:
Fixed 120 0 0 0
Interruptable

Work Center
Latent Pattern Maintenance
--------------------------
Display Parameters 4
X:376 Y:506 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Queue for Latent Pattern Maintenance
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Spent Energy
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: LDwell
Label Based
Label Actions
Affect
None

Work Exit Point


Energy that does not enter
--------------------------
Display Parameters 4
X:238 Y:368 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Input Objects
AFilter

Storage Bin
Waiting for Adaptation
----------------------
Passed the Adaptation filter and now waits for a blocked Adaptation
activity, which is evidently busy making sense of "old" news.
Display Parameters 0
X:392 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
AFilter
Output Objects
Adaptation

Storage Bin
Queue for Goal Attainment
-------------------------
Display Parameters 5
X:563 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
Adaptation
Output Objects
Prepare budget proposal

Storage Bin
Queue from GA affect neutral
----------------------------
Display Parameters 0
X:658 Y:438 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Capacity: -1
Input Objects
Prepare budget proposal
Output Objects
Integration

Storage Bin
Queue for Latent Pattern Maintenance
------------------------------------
Display Parameters 0
X:473 Y:508 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
Integration
Output Objects
Latent Pattern Maintenance

Work Exit Point


Spent Energy
------------
Display Parameters 4
X:374 Y:696 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Input Objects
Latent Pattern Maintenance

Work Center
Goal Attainment Affect
----------------------
Display Parameters 4
X:733 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Queue for Goal Attainment Affect
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Integration
Release resources as soon as task complete
Batching
Product type of fixed value: 1
Operation Time
Distribution Detail:
Uses: GDwell
Label Based

Storage Bin
Interrupted Integration
-----------------------
Display Parameters 0
X:836 Y:520 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Capacity: -1
Output Objects
Integration

Storage Bin
Queue for Goal Attainment Affect
--------------------------------
Display Parameters 0
X:610 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Count
Show Image
Do not collect results
Capacity: -1
Input Objects
Affect processing
Output Objects
Goal Attainment Affect

Work Center
Affect processing
-----------------
Display Parameters 4
X:496 Y:225 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Passive
Route In Objects
AFilter
Require resources before collecting any work items
Routing Out
Circulate
Preference only
Route Out Objects
Queue for Goal Attainment Affect
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: ADwell
Label Based
Label Actions
Label
None

Work Exit Point


Ideas not resourced
-------------------
Display Parameters 4
X:827 Y:271 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Do not collect results
Input Objects
Prepare budget proposal

Work Center
Prepare budget proposal
-----------------------
Display Parameters 4
X:623 Y:292 W:32 H:32
Xinc -10 Yinc 0
Show Title
Show Count
Show Image
Replicate 1
Do not collect results
Priority 50
Routing In
Priority
Route In Objects
Queue for Goal Attainment
Require resources before collecting any work items
Label Batching
Label
Min batch quantity 1
Max batch quantity 10000
Routing Out
Percent
Route Out Objects
Ideas not resourced
(0%)
Queue from GA affect neutral
(100%)
Release resources as soon as task complete
Operation Time
Distribution Detail:
Uses: GDwell
Label Based

Information Store
-----------------

Simulation Time
---------------
SIMUL8 Data
Current Value 0

Warm Up Period
--------------
SIMUL8 Data
Current Value 0

Results Collection Period


-------------------------
SIMUL8 Data
Current Value 2400

Current Work Item


-----------------
SIMUL8 Data
Current Value 0

Overhead Cost
-------------
SIMUL8 Data
Current Value 0

Overhead Revenue
----------------
SIMUL8 Data
Current Value 0

Graph Sync Interval


-------------------
SIMUL8 Data
Current Value 1

Temp
----
Number
Current Value 0
Reset Value 0

Table
-----
Spreadsheet

EnergyThreshold
---------------
Number
Current Value 0
Reset Value 2

Relation
--------
Text
Current Value >=
Reset Value NOCHANGE

GT
--
Text
Current Value >
Reset Value NOCHANGE

GTE
---
Text
Current Value >=
Reset Value NOCHANGE

Detail Timing Log


-----------------
Spreadsheet

y
-
Number
Current Value 0
Reset Value 0

x
-
Number
Current Value 0
Reset Value 0

p
-
Number
Current Value 0
Reset Value 0

k
-
Number
Current Value 0
Reset Value 0

r
-
Number
Current Value 0
Reset Value 0

Saved_clock
-----------
Time
Current Value 0
Reset Value 0

RowCtr
------
Number
Current Value 29
Reset Value -2147483648

ValCol
------
Number
Current Value 10
Reset Value 10

BotRow
------
Number
Current Value 29
Reset Value 29
TimeOfMax
---------
Time
Current Value 0
Reset Value 0

MaxEnergy
---------
Number
Current Value 0
Reset Value 0

ThresholdToRespond
------------------
Number
Current Value 0
Reset Value 0

PercentNotFunded
----------------
Number
Current Value 0
Reset Value -2147483648

DecayCoefficient
----------------
Number
Current Value 0
Reset Value -0.001

ResponseWindow
--------------
Number
Current Value 0
Reset Value 52

Summarize Input Data


--------------------
Spreadsheet

MovingAverage
-------------
Number
Current Value 0
Reset Value 0

NbrInputs
---------
Number
Current Value 0
Reset Value 0

RunningTotal
------------
Number
Current Value 0
Reset Value 0
TempRow
-------
Number
Current Value 0
Reset Value 0

I
-
Number
Current Value 0
Reset Value 0

Divisor
-------
Number
Current Value 0
Reset Value 0

Reset Visual Logic:


VL SECTION: Reset Logic
'Obeyed just after all simulation objects are initialized at time
zero
Clear Sheet Table[1,1]
'Read external Excel sheet into internal one in order to speed up
execution.
Get from EXCEL Table[1,1] , "[GameInput.XLS]ParsonsGameInput" ,
1 , 1 , ValCol , BotRow-1
'Set constants from the table just read in.
SET PercentNotFunded = Table[ValCol,17]
SET p = Table[ValCol,22]
SET r = Table[ValCol,23]
SET k = Table[ValCol,24]
SET Table[3,BotRow] = "Affect"
SET Table[4,BotRow] = "External"
SET Table[5,BotRow] = "Clock"
SET Table[6,BotRow] = "Internal"
SET RowCtr = BotRow
Clear all Display Plus ""
SET EnergyThreshold = Table[ValCol,19]
SET Temp = EnergyThreshold
Get from EXCEL Relation , "[GameInput.XLS]ParsonsGameInput" ,
ValCol , 20 , 1 , 1
'GT is the greater than relation (>) and GTE is greater than or
equal to (>=).
SET ThresholdToRespond = Table[ValCol,26]
SET DecayCoefficient = Table[ValCol,27]
SET ResponseWindow = Table[ValCol,28]
Set Route Out Discipline Prepare budget proposal , Percent
Set Route Out Percent Prepare budget proposal ,
PercentNotFunded*100 , Ideas not resourced
Set Route Out Percent Prepare budget proposal , [1-
PercentNotFunded]*100 , Queue from GA affect neutral

Start Run Visual Logic:


VL SECTION: Start Run Logic
'Obeyed every time the user clicks the RUN button (at any
simulation time)

End Run Visual Logic:


VL SECTION: End Run Logic
'Obeyed when the simulation reaches end of "Results Collection
Period"
SET RowCtr = RowCtr+1
SET Table[3,RowCtr] = ""
SET Table[4,RowCtr] = ""
SET Table[5,RowCtr] = ""
SET Table[6,RowCtr] = ""
Set in EXCEL Table[3,BotRow+1] ,
"[GameInput.XLS]ParsonsGameInput" , 3 , BotRow+1 , 4 , [RowCtr-
BotRow]+2

Other Visual Logic:


VL SECTION: Learning Model Common
'Common processing for the learning model.
'Construct the data that are in the "Window".
SET NbrInputs = NbrInputs+1
IF NbrInputs > ResponseWindow
'Find the Maximum in the new range.
SET MaxEnergy = 0
SET TempRow = [BotRow+NbrInputs]-ResponseWindow
LOOP TempRow >>> I >>> [[BotRow+NbrInputs]-1]
IF Table[4,I] > MaxEnergy
SET MaxEnergy = Table[4,I]
'Adjust the MovingAverage by dropping off the least recent entry
in the Window.
SET RunningTotal = RunningTotal-Table[4,TempRow]
SET Divisor = ResponseWindow
ELSE
SET Divisor = NbrInputs
IF Energy > MaxEnergy
IF [Energy-MaxEnergy] > ThresholdToRespond
'The new Energy has to be greater than Max by a threshold in
order to make a change.
'Here if there is a new maximum in the external energy (within
the Window).
SET MaxEnergy = Energy
SET k = MaxEnergy*EXP[DecayCoefficient]
SET TimeOfMax = Simulation Time
SET RunningTotal = RunningTotal+Energy
ELSE
'Standard case: no new maximum.
SET k = MaxEnergy*EXP[[Simulation Time-
TimeOfMax]*DecayCoefficient]
SET RunningTotal = RunningTotal+Energy
SET MovingAverage = RunningTotal/Divisor
IF k < MovingAverage
SET k = MovingAverage
'The code below represents the learning by LPM in order to adjust
the energy filter to a target value in the Adaptation function.
'p represents cum prior learning interval, so have to add current
elapsed time to running total
SET p = p+Simulation Time
'x is the time since the last change


SET x = Simulation Time-Saved_clock
'Now compute the new value to be used as an energy filter in
Adaptation.
SET Temp = k*[[x+p]/[[x+p]+r]]
'Save the current time so that it can be used next time for the
computation of x
SET Saved_clock = Simulation Time
'Prepare elements needed to be saved in spreadsheet so that we can
follow the graph of the external energy vs. the internal energy
generated by LPM.
SET RowCtr = RowCtr+1
SET Table[3,RowCtr] = Affect
SET Table[4,RowCtr] = Energy
SET Table[5,RowCtr] = Simulation Time
SET Table[6,RowCtr] = Temp
