TRAPEZNIKOV INSTITUTE
OF CONTROL SCIENCES
(ICS RAS)
D.A. Novikov
CYBERNETICS:
From Past to Future
2015
NOVIKOV D.A. Cybernetics: From Past to Future. – Heidelberg:
Springer, 2016. – 107 p.
This book is a brief “navigator” across the history of cybernetics, its state of the art and prospects.
The evolution of cybernetics (from N. Wiener to the present day) and the reasons for its ups and downs are considered. The correlation of cybernetics with the philosophy and methodology of control, as well as with systems theory and systems analysis, is established.
A detailed analysis focuses on the modern trends of research in cybernetics. A new development stage of cybernetics (the so-called cybernetics 2.0) is discussed as a science of the general regularities of systems organization and control. The author substantiates the topicality of elaborating a new branch of cybernetics, organization theory (O3), which studies an organization as a property, a process and a system.
The book is intended for theoreticians and practitioners, as well as for students, postgraduates and doctoral candidates. In the first place, the target audience includes tutors and lecturers preparing courses on cybernetics, control theory and systems science.
CONTENTS
Introduction
1. Cybernetics in the 20th century
1.1. Wiener’s Cybernetics
1.2. Cybernetics of Cybernetics and Other Types of Cybernetics
1.3. Achievements and Disillusions of Cybernetics
2. Cybernetics, Control Philosophy and Control Methodology
2.1. Control Philosophy
2.2. Control Methodology
3. Laws, Regularities and Principles of Control
4. Systems Theory and Systems Analysis. Systems Engineering
5. Some Trends and Forecasts
5.1. Topic Analysis of Leading Control Conferences
5.2. Interdisciplinarity
5.3. “Networkism”
5.4. Heterogeneous Models and Hierarchical Modeling
5.5. Strategic Behavior
5.6. Big Data and Big Control
Conclusion: Cybernetics 2.0
References
Appendix I: A List of Basic Terms
Appendix II: Topics for Further Self-study
About the Author
In warm memory of my father, Academician A.M. Novikov, who opened up the world of cybernetics to me
We had dreamed for years of an institution of independent scientists, working together in one of these backwoods of science, not as subordinates of some great executive officer, but joined by the desire, indeed by the spiritual necessity, to understand the region as a whole, and to lend one another the strength of that understanding. – N. Wiener
Introduction
and physiologists, etc. for obtaining fundamentally new results and breakthrough technologies.
The third factor was that the role and “benefit” of science became evident to laypeople and politicians alike. The former enjoyed scientific results owing to their rapid and mass implementation. The latter (a) realized that science is an important public and economic driver of society and (b) became accustomed to the fact that project-based management of applied research allows predicting and, in part, guaranteeing its duration and results.
However, the flight of thought and stormy feelings of any romanticism go hand in hand with overestimated expectations. Moreover, the onrush development of any science is inevitably followed by its normal advancement (e.g., in the sense of T. Kuhn).
All these regularities fully affected cybernetics–a science born in the above “romantic period” (1948) that underwent a romantic childhood, the disillusionment of adolescence and the decay of maturity.1 The book discusses exactly these issues, representing a brief “navigator” across the history of cybernetics, its state of the art and prospects. The style of a “navigator” implies renouncing a detailed characterization of results: numerous references cover almost all2 classical works on cybernetics3 published to date. On the other hand, such a style a priori makes the exposition somewhat incomplete, eclectic and nonrigorous, as it would seem to a representative of any concrete science mentioned.
The book has the following structure. First, we consider the evolution of cybernetics (from N. Wiener to the present day), see Sections 1.1 and 1.2. A detailed analysis focuses on the reasons for its ups and downs in Section 1.3. Next, we study the interrelation of cybernetics with control philosophy and control methodology (Chapter 2), as well as with systems theory and systems analysis (Chapter 4). Chapter 3 discusses the basic laws, regularities and principles of control. Chapter 5 identifies some modern development trends of cybernetics. In the Conclusion, we introduce the new stage of cybernetics development–the so-called
1 Note that general systems theory and systems analysis proceeded along a similar path, see below.
2 Cybernetics is a synthetic science and any attempt to give a comprehensive bibliography of its components (e.g., control theory) is doomed to failure. By saying “all,” we mean cybernetics proper (Cybernetics with a capital C, as explained in Section 1.1).
3 Most references are publicly available to an interested reader on the Internet.
Cybernetics 2.0–as the science of systems organization and control. Appendices contain a list of basic terms and topics for self-study.
The author is deeply grateful to V. Afanas’ev, V. Breer, V. Burkov, A. Chkhartishvili, M. Goubko, A. Kalashnikov, K. Kolin, V. Kondrat’ev, N. Korgin, O. Kuznetsov, A. Makarenko, R. Nizhegorodtsev, B. Polyak, I. Pospelov, A. Raikov, P. Skobelev, A. Teslinov and V. Vittikh for fruitful discussions and valuable remarks. Of course, the author takes full responsibility for all remaining shortcomings.
And finally, my deep appreciation goes to Dr. A. Mazurov for his careful translation, permanent feedback and contribution to the English version of the book.
4 This root induced the words “governor,” “government” and “governance.”
5 Depending on the mutual penetration of subjects and methods, a pair of sciences often appears at the junction of two sciences (e.g., physical chemistry and chemical physics).
In ancient Greece, the term “cybernetics” denoted the art of a municipal governor (e.g., in Plato’s Laws).
A. Ampère (1834) related cybernetics to political sciences: the book [6] defined cybernetics (“the science of civil government”) as a science of current policy and practical governance in a state or society.
B. Trentowsky (1843, see [136, 201]) viewed cybernetics as “the art of how to govern a nation.”
In his Tektology (1925, see [29]), A. Bogdanov examined organizational principles common to all types of systems. In fact, he anticipated many results of N. Wiener and L. von Bertalanffy, as both were unfamiliar with Bogdanov’s works.
The history and evolution of cybernetics can be traced in [85, 84, 179, 65, 168, 206].
The modern (and classical!) interpretation of the term “cybernetics” as “the scientific study of control and communication in the animal and the machine” was pioneered by Norbert Wiener in 1948, see the monograph [221]. Two years later, Wiener added society as the third object of cybernetics [225]. Among other classics, we mention William Ashby6 [14, 15] (1956) and Stafford Beer [23] (1959), who placed their emphasis on the biological and “economic” aspects of cybernetics, respectively.
Therefore, cybernetics 1.0 (or simply cybernetics) can be defined7 as “THE SCIENCE OF CONTROL AND DATA PROCESSING IN ANIMALS, MACHINES AND SOCIETY.” An alternative is the definition of Cybernetics (with a capital C, to distinguish it from cybernetics whenever confusion may occur) as “THE SCIENCE OF GENERAL REGULARITIES OF CONTROL AND DATA PROCESSING IN ANIMALS, MACHINES AND SOCIETY.” The second definition differs from the first in the words “general regularities,” which is crucial and will be repeatedly underlined and used below. In the former case, the matter concerns the “umbrella brand,” i.e., the “integrated” results of all sciences dealing with problems of control and data processing in animals, machines and society. The latter case covers the partial “intersection” of these results8 (see Fig. 9), i.e., the results common to all component sciences. In what follows, we will adhere to this approach
6 Interestingly, W. Ashby introduced and analyzed the categories of “variety” and “self-organization,” as well as the terms “homeostat” and “black box” in cybernetics.
7 These definitions will be addressed throughout the whole book, except the Conclusion.
8 Figuratively speaking, the central rod of the “umbrella.”
over and over again, to distinguish between the corresponding umbrella brand and the results common to all component sciences in the context of different categories such as interdisciplinarity, systems analysis, organization theory, etc.
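In rough set-theoretic shorthand (our notation, not the author’s), if Ri denotes the body of results of the i-th component science, then

cybernetics ≈ R1 ∪ R2 ∪ … ∪ Rn (the “umbrella brand” over all component results),
Cybernetics ≈ R1 ∩ R2 ∩ … ∩ Rn (the general regularities common to all components).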
9 This somewhat conditional differentiation applies not only to sciences, but also to researchers. As mentioned in [149], in some fields of science researchers are traditionally divided into two categories. The first one is called “screwmen”: they study new problem domains (“screws”) using common methods (“spanners”). The second category is known as “spannermen”; such researchers design new technologies of cognition (methods, “spanners”) and illustrate their efficiency in different problem domains (for unscrewing common “screws”).
3) Rather easy and fast generation/accumulation of nontrivial theoretical and applied results and their popularization, both within the scientific community and among laypeople.
Speaking about cybernetics, the first and second conditions had been satisfied by the middle of the 1940’s (see the Introduction). And the long-term cooperation between N. Wiener and biologists, alongside his wide and deep professional interests (recall Wiener processes, Banach-Wiener spaces, the Wiener-Hopf equations), ensured “subjective” satisfaction of these conditions. In his late interview for Russian Studies in Philosophy (1960, No. 9), Wiener noted that “the aim was to unite efforts in different branches of science and get focused on uniform solution of similar problems.” The third condition–rapid accumulation and popularization of new results–was also realized, see the discussion below.
In 1948, the integration of results obtained by different sciences and their substantiated applicability to different subjects (see Fig. 1) gave birth to a new synthetic science known as Wiener’s cybernetics.
[Fig. 1. Wiener’s cybernetics at the junction of method-oriented sciences (control theory, communication theory) and subject-oriented sciences (engineering sciences, social sciences), with control and communication applied to the machine and society.]
as a synthetic science10 mostly employs the results of its components (source sciences);
– explanatory (explicative) function, i.e., elucidation of phenomena and processes, their internal mechanisms. Here the question to be answered is “Why is the world exactly like this?”. In this function, cybernetics plays a more visible role: even analogies may have powerful explanatory force;
– generalizing function, i.e., formulation of laws and regularities systematizing and absorbing numerous fragmented phenomena and facts (the associated question is “What are the common features of ... ?”). Perhaps this is the main function of cybernetics, since generalizations (in the form of laws, regularities, models, research approaches) comprise the framework of its results;
– predictive (prognostic) function, i.e., scientific knowledge allows predicting new processes and phenomena (this function answers the question “What and why will happen?”). Efficient forecasting is possible using substantiated analogies and constructive generalizations within the synthetic science of cybernetics;
– prescriptive (normative) function, i.e., scientific knowledge allows organizing activity with certain goals (the corresponding question is “What and how should be done for goal achievement?”). The normative function has a close connection with the solution of control problems, an important subject of cybernetics.
Definitions. Just like any comprehensive category, cybernetics would
hardly possess a unique definition. Moreover, the meanings of terms
describing this category also evolve with the course of time. Let us give a
series of widespread definitions of cybernetics:
“A science concerned with the study of systems of any nature which
are capable of receiving, storing, and processing information so as to use
it for control”–A. Kolmogorov;
“The art of steersmanship: deals with all forms of behavior in so far as they are regular, or determinate, or reproducible: stands to the real machine–electronic, mechanical, neural, or economic–much as geometry stands to a real object in our terrestrial space; offers a method for the scientific treatment of the system in which complexity is outstanding and too important to be ignored.”–W. Ashby;
10 For instance, A. Kolmogorov believed that cybernetics is not a science but a scientific direction; however, the listed functions also apply to the latter.
“A branch of mathematics dealing with problems of control, recursiveness, and information, focusing on forms and the patterns that connect.”–G. Bateson;
“The art of effective organization.”–S. Beer;
“The art of securing efficient operation.”–L. Couffignal;
“The art and science of manipulating defensible metaphors.”–
G. Pask;
“The art of creating equilibrium in a world of constraints and possi-
bilities.”–E. Glasersfeld;
“The science and art of understanding.”–H. Maturana;
“A synthetic science of control, information and systems”–
A.G. Butkovsky;
“A system of views a governor must have for efficient control of his κυβερνη”–N. Moiseev;
“The art of interaction in dynamic networks.” – R. Ascott.
Almost all definitions involve the terms “control” and “system”; see the definition of “cybernetics 2.0” in the Conclusion. Therefore, they are mutually consistent and agree well with the definition of cybernetics accepted here.
Consequently, Wiener’s cybernetics has the following key terms: control, communication, system, information, feedback, black box, variety, homeostat.
Cybernetics today (disciplines included in cybernetics in the descending order of their “grades” of membership, see Fig. 9, with the year of birth if available):
– control theory11 (1868–the papers [127, 216] published by J. Maxwell and I. Vyshnegradsky);
– mathematical theory of communication and information (1948–K. Shannon’s works [187, 188]);
– general systems theory, systems engineering and systems analysis12 (1968–the book [26] and 1956–the book [92]);
11 According to an established tradition, control science will be called control theory (although such a name narrows its subject).
12 Chapter 4 discusses the history of these scientific directions in more detail.
– optimization (including linear and nonlinear programming; dynamic programming; optimal control; fuzzy optimization; discrete optimization; genetic algorithms, and so on);
– operations research (graph theory, game theory and statistical decisions, etc.);
– artificial intelligence (1956–The Dartmouth Summer Research Project on Artificial Intelligence);
– data analysis and decision-making;
– robotics
and others (purely mathematical and applied sciences and scientific directions, in an arbitrary order), including systems engineering, recognition, artificial neural networks and neural computers, ergatic systems, fuzzy systems (rough sets, grey systems [91, 94, 162, 165]), mathematical logic, identification theory, algorithm theory, scheduling theory and queuing theory, mathematical linguistics, programming theory, synergetics and all that jazz.
In its components, cybernetics intersects considerably with many other sciences, in the first place with such metasciences as general systems theory and systems analysis (see Chapter 4) and informatics13 (see the Conclusion).
There exist a few classical monographs and textbooks on Cybernetics with its “own” results; here we refer to [2, 14, 22, 23, 26, 62, 63, 133, 222-225]. On the other hand, textbooks on cybernetics (mostly published in the former USSR) include many of the above-mentioned directions (par excellence, control in technical systems and informatics)–see [52, 68, 108, 113, 119].
The prefix “cyber” induces new terms on a regular basis, viz., cybersystem, cyberspace, cyberthreat, cybersecurity, etc. In a broader view, this prefix embraces everything connected with automation, computers, virtual reality, the Internet and so on.14
Nowadays, cybernetics attracts the attention of several hundred dedicated research centers (institutes, departments, research groups) and
13 Or even with computer science, but we will omit this aggregative term due to its undetermined and eclectic character.
14 Perhaps this reflects the place of the word “cybernetics” in mass consciousness, even though experts in the field disagree with such (general and simplified) usage of the prefix.
associations15 worldwide (with explicit usage of the term “cybernetics” in their names), plus hundreds of scientific journals and regular conferences. For instance, see the following Internet resources on cybernetics:
– http://www.asc-cybernetics.org/
– http://pespmc1.vub.ac.be/
– http://wosc.co/
– http://neocybernetics.com/wp/links/
and others.
“Sectoral” types of cybernetics. Alongside general cybernetics, there exist special (“sectoral”) types of cybernetics [113]. A most natural approach (which follows from Wiener’s extended definition) is to separate out technical cybernetics, biological cybernetics and socioeconomic cybernetics besides theoretical cybernetics (i.e., Cybernetics).
It is possible to compile a more complete list of special types of cybernetics (in the descending order of their current level of exploration):
– technical cybernetics, engineering cybernetics;
– biological and medical cybernetics, evolutionary cybernetics, cybernetics in psychology [5, 9, 10, 15, 24, 61, 100, 109, 160, 169, 202];
– economic cybernetics [22, 23, 99, 138, 227];
– physical cybernetics (to be more precise, “cybernetical physics”16, see [203]);
– social cybernetics, educational cybernetics;
– quantum cybernetics (quantum systems control, quantum computing) (see surveys in [69, 72]).
Standing somewhat apart is a branch of biological cybernetics known as cybernetic brain modeling, integrated with artificial intelligence, neural and cognitive sciences. The romantic idea of creating a cybernetic (computer-aided) brain at least partially resembling the natural one stimulated the founding fathers of cybernetics (see the works of W. Ashby [15], G. Walter [218], M. Arbib [11], F. George [61], K. Steinbuch [193] and others) and their followers (for a modern overview, we refer to [169]).
15 Principia Cybernetica (V. Turchin et al.), American Society for Cybernetics (http://www.asc-cybernetics.org), World Organization of Systems and Cybernetics, to name a few.
16 Cybernetical physics is a science studying physical systems by cybernetical methods. Owing to the maturity of physical object modeling (in the sense of duration and depth), today the results in this field can be stated as general and well-grounded laws, see [59, pp. 38-40] and below.
Bibliometric analysis. The degree of penetration of cybernetics into other sciences and the scale of its “synthetic” character can be estimated using a simple bibliometric analysis. Fig. 2 and Fig. 3 demonstrate the usage of the terms “Cybernetics” and “Control” in scientific publications (paper titles) indexed by Scopus. Clearly, these terms appear interdisciplinary and widespread in many branches of modern science.
[Fig. 2 and Fig. 3. Numbers of papers indexed by Scopus with “Cybernetics” and “Control” in their titles, 1947–2012.]
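In principle, such counts can be reproduced from a title-search export of a bibliographic database. Below is a minimal sketch, not the author’s actual methodology: the file name “scopus_export.csv” and the columns “Title” and “Year” are hypothetical stand-ins for a Scopus export.

# A minimal sketch of the counting behind Fig. 2 and Fig. 3.
# Assumptions: scopus_export.csv is a Scopus search export with
# columns "Title" and "Year"; both names are hypothetical.
import csv
from collections import Counter

def titles_per_year(path, term):
    """Count papers per year whose title contains the given term."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if term.lower() in row["Title"].lower():
                counts[int(row["Year"])] += 1
    return counts

counts = titles_per_year("scopus_export.csv", "cybernetics")
for year in sorted(counts):
    print(year, counts[year])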
17 Such an approach has been and still is conventional for the theory of control in organizations (e.g., see Fig. 4.15 in [131] and [158]).
[Figure: First-order and second-order cybernetics. In first-order cybernetics, a control subject applies control to a controlled object and observes its state under external disturbances; in second-order cybernetics, an observer watches this whole loop.]
The “biological” stage in second-order cybernetics is associated with the names of H. Maturana and F. Varela [125, 126, 210] and their notion of autopoiesis (self-generation and self-development of systems). F. Varela underlined that “first-order cybernetics is the cybernetics of observed systems; second-order cybernetics is the cybernetics of observing systems.” The latter focuses on the feedback between a controlled system and an observer.
Therefore, the key terms of second-order cybernetics are recursiveness, self-regulation, reflexion, autopoiesis. For a good survey of this direction, we refer to [116].
P. Asaro [13] believed that there exist three interpretations of cybernetics (actually, we have mentioned the first two above):
1) the narrow interpretation, i.e., a science studying feedback control;
2) the wide interpretation, i.e., “cybernetics is all the things, and we live in the Century of Cybernetics”;
3) the intermediate (epistemological) interpretation, i.e., second-order cybernetics (an emphasis on the feedback between a controlled system and an observer).
However, the historical picture has turned out much more colorful and diverse, not confined to the second order–see Fig. 8.
Some authors adopt the terms “third-order cybernetics” (social autopoiesis; second-order cybernetics considering autoreflexion) and “fourth-order cybernetics” (third-order cybernetics considering the observer’s system of values), but these are conceptual and still have no generally accepted meanings (e.g., see the discussion in [31, 95, 121, 122, 140, 206, 207]).
For instance, V. Lepsky wrote: “Third-order cybernetics can be formed basing on the thesis ‘from observing systems to self-developing systems.’ In this case, control is gradually transformed into a wide spectrum of support processes for system self-development, namely, social control, stimulation, maintenance, modeling, organization, ‘assembly/disassembly’ of subjects and others.” [118, p. 7793].
[Fig. 8. Directions of development beyond classical cybernetics: second-order cybernetics, cybernetics of the third and fourth orders, neo-cybernetics, post-cybernetics, conceptual cybernetics, Hi-Hume cybernetics, autopoiesis, evergetics, control methodology.]
The observed variety of approaches claiming (explicitly or implicitly) to be a new mainstream in the development of classical cybernetics seems natural, as it reflects the evolution of cybernetics. With the lapse of time, certain approaches will be developed further, while others will stop growing. Of course, it is extremely desirable to obtain a general picture with integration, generalization and joint positioning of all existing approaches or most of them (see the Conclusion).
18 This also applies to systems theory and systems analysis, see Chapter 4.
slowed down, its integral flow was decomposed into numerous partial subflows and “lost in details”: the number of scientific directions19 (see Fig. 9) increased and each of them continued to develop, but general regularities were hardly identified or systematized. Actually, cybernetics enjoyed rapid growth owing to its components, but Cybernetics stood still.
[Fig. 9. Cybernetics as an “umbrella” over its component sciences, with their years of birth: control theory (1868), information theory and mathematical communication theory (1948), and others (through 1968).]
Concerning Fig. 9 and similar ones (see Fig. 20, Fig. 21 and Fig. 55), the author asks esteemed readers to acknowledge that any ideas about the correlation of sciences and their branches are “egocentric”: a scientist places his own (“favorite”) science “at the core.” Moreover, any scientific branch or scientific school hyperbolizes its achievements and capabilities. Such subjectivism seems natural, and the real picture can always be reconstructed with appropriate corrections.
19 Exactly scientific directions, i.e., sciences, groups of sciences and application domains.
Another argument: since the 1950’s, mankind has been observing the “exponential” growth of technological innovations and the same growth of scientific publications, in parallel with the differentiation of sciences (N. Wiener wrote: “Since Leibniz there has perhaps been no man who has had a full command of all the intellectual activity of his day.” [221, p. 43]). An interesting paradox: over this period, the number of researchers, scientific papers, journals and conferences has been increasing, but almost without the appearance of revolutionary fundamental scientific discoveries “clear to laypeople.” Fundamental science “has passed ahead of” technologies, and its groundwork is now implemented in new technologies. Yet, intensive development of fundamental science cannot be stimulated without an explicit mass “demand” from technologies.
In the era of cumulative differentiation of sciences, cybernetics has remained a striking example of the synergetic effect, i.e., a successful attempt to integrate different sciences, to search for their common language and regularities. Unfortunately, it is one of the last such examples: the modern fashionable “convergent sciences” (NBICS: nano-, bio-, informational, cognitive and humanitarian social sciences) have still not realized themselves in this sense. Indeed, the widespread “interdisciplinarity” is rather an advertising umbrella brand or, at best, a “junction” of two or more sciences. Genuine interdisciplinarity must operate with results and regularities common to several sciences.
As an epistemological digression, note that the dialectic spiral “from partial results to generalizations, from generalizations to new partial results” is characteristic of theories at any scale, i.e., from a partial (yet integral) direction of investigation20 to full-scale scientific research (see Fig. 10, imported from [148]). Wiener’s ideas about the general regularities of control and communication in systems of different nature resulted from generalizing some (of course, not all!) achievements of automatic control theory, communication theory, physiology and a series of other sciences of that time. Wiener’s cybernetics with the key concepts of feedback (causality),
20 For instance, an efficient solution method for a certain class of control problems becomes applicable to problems in adjacent fields (e.g., communication, production, etc.). Thereby, this method is “transferred” from control theory to cybernetics. And then it can become an asset of applied mathematics, i.e., a “spanner” for experts in various fields (whenever the studied systems satisfy its initial requirements).
homeostasis and others spurred new results in control, informatics and
other sciences.
[Fig. 10. The dialectic spiral of theory development: the process of ascending from the concrete to the abstract runs through primary generalizations, secondary generalizations and so on, up to conceptual statements; the descent returns through principles, requirements, conditions, mechanisms, procedures, models and so on.]
Thus, the “romantic” period (see the Introduction) was followed by a period of rapidly obtained results and, hence, of growing expectations. Those expectations were not necessarily professional. Cybernetics became fashionable and many authors started popularizing it.21 Sometimes the number of popularizers even exceeded the number of professionals (in fairness, we emphasize that professionals realized not all of their expectations either). A. Kolmogorov was right in saying: “I do not belong to great enthusiasts of all rich literature on cybernetics published today and see numerous exaggerations (on the one part) and much oversimplification (on the other part) in it.” [104].
Perhaps such a situation is typical for the development of scientific branches and directions. There exist many examples of failed expectations originally created and maintained by dilettantes. For instance, the terminology of rather fruitful independent sciences such as nonlinear dynamics and synergetics [45, 77, 120, 175, 194] (attractors, bifurcations, etc.) is often employed by humanists to construct a scientific entourage for outsiders. Fuzzy set theory, artificial neural networks, genetic algorithms and many other scientific fields have already passed or are now facing a crisis due to the collapse of corresponding overrated expectations.
Consider the following groups of subjects:
– researchers focused on cybernetics proper;
– researchers representing adjacent (component) sciences;
– popularizers of cybernetics (mass media or dilettantish “researchers” interpreting the results of others22);
– authorities (“politicians”) and potential users of applied results (“customers”) in business structures.
The failed expectations for cybernetics caused disillusions in all these groups. Answering the question “Where are the results?”, experts in cybernetics parried: “We work as hard as possible; all promises were given by popularizers, and they must bear the responsibility.” Due to their
21 Actually, the first popularizer was N. Wiener himself. Later he mentioned that the appearance of the book [221] in a moment transformed him from a working scientist with a definite authority in his research field into a public figure. That was pleasing, but also had negative consequences, as henceforth N. Wiener was obliged to maintain business relations with various scientific groups and take part in a movement which gained in scope so rapidly that he could not even cope with it.
22 Such “researchers” exist in any science, especially in and around intensively developing ones.
sound “jealousy” of cybernetics, the representatives of adjacent sciences replied: “Things are going well with us”23 (indeed, many “components” of cybernetics such as control theory, informatics and others were quite successful, see Fig. 9). Popularizers rarely feel pangs of conscience24 and can always note: “We are not experts, we were deluded.” With the course of time, politicians also felt definite pessimism about cybernetics, particularly due to the attitude towards cybernetics in the early 1950’s in the USSR, the Chilean experiments of S. Beer’s team (implementation of cybernetical ideas and approaches in real economy management) and V. Glushkov’s unrealized intentions to deploy all-embracing computer-based centralized planning in the USSR.
No guilty persons were found; something failed, and that’s it. Actually, the situation is not as bad as it seems. First, cybernetics is rather efficient as an integrative science: its components have been and will be developing for years, while a unified view and a holistic picture covering a whole group of sciences are surely needed (see Chapter 2). Reflexion on the disillusions and their reasons is useful in any case.
Second, for several decades cybernetics was considered a “magic lamp” throwing light on the correct structure of any subject domain and systematizing its organization (N. Moiseev noted that cybernetics defines “a thinking standard” [136]). In many cases (technical systems, numerous results in biology and economics, etc.), the hopes were justified, inducing higher expectations. However, any synthetic science, including cybernetics, represents not a “lamp” but a “lens” properly focusing rays (scientific and applied results) from a “source” (concrete sciences): a lens gives no light, but acts as a light converter.
The main problem of cybernetics as a “lens” consists in the following. Except for the founding fathers of classical cybernetics (N. Wiener, W. Ashby and S. Beer), only a few researchers have studied Cybernetics deeply and professionally, endeavoring to reveal, formulate and develop its general laws (see Chapter 3), despite the huge growth of knowledge in adjacent sciences within the past decades. (No new round of appreciable generalizations has taken place, see Fig. 10.) Moreover, the
23 In fact, valuable results in automatic control theory, statistical communication theory, etc. were followed by some recession (quite naturally, see Fig. 32).
24 During his speech at the 1962 IFIP Meeting, the USSR representative A. Dorodnitsyn suggested two terms for the glossary of information processing, namely, “Cybernetics active” and “Cybernetics talkative.”
interdisciplinarity of cybernetics (multiple subjects and methods of study25) testified to its “fuzziness.” Contrariwise, Cybernetics is a more holistic science with its own subject: the general regularities of control and communication. Therefore, experts and specialists should pay attention to Cybernetics and apply every effort to develop it!
Concluding this section, recall the “principle of uncertainty” described in [149]: epistemologically weak sciences introduce minimal constraints (or no constraints at all) and obtain the fuzziest results. Contrariwise, epistemologically strong sciences impose many limiting conditions and involve special scientific languages, but yield more precise (and better grounded) results. However, their field of application appears rather narrow (i.e., clearly bounded by these conditions). In other words, the current level of science development is characterized by certain mutual constraints imposed on the validity and the applicability of results, see Fig. 11. That is, the “product” of the domains of applicability and validity does not exceed a constant (increasing one “multiplicand” reduces the other).
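In rough symbolic form (our notation, not the author’s), this “principle of uncertainty” reads

A · V ≤ C,

where A measures the breadth of the applicability domain of a science’s results, V their validity (precision and soundness), and C is a constant characterizing the current development level of the science. As noted below, generalizations increase C itself rather than merely trading A against V.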
[Fig. 11. The tradeoff between results’ validity and applicability: a curve running from epistemologically weak sciences (cybernetics, decision theory) through operations research to epistemologically strong sciences (control theory and others).]
25 In this sense, interdisciplinarity is rather a negative feature.
But this regularity applies only to the current development level of a corresponding science. The presence of generalizations (the main role of Cybernetics!) extends the borders by shifting the curve to the right and upward (see Fig. 11). As a result, some progress is achieved in both domains.
metaphysics and then the trunk is physics. The branches coming out of the trunk are all the other sciences.”
R. Mirzoyan rightly observed, on the basis of historical and philosophical analysis, that the first control/management theorists were exactly philosophers [135]. Confucius, Lao-tzu, Socrates, Plato, Aristotle, N. Machiavelli, T. Hobbes, I. Kant, G. Hegel, K. Marx, M. Weber, A. Bogdanov–this is a short list of the philosophers who laid down the foundations of modern control theory for the development and perfection of managerial practice.
Consider Fig. 12 [152], illustrating different connections between the categories of philosophy and control; both are treated in the broadest possible interpretation (philosophy includes ontology, epistemology, logic, axiology, ethics, aesthetics, etc.; control is viewed as a science and as a type of practical activity). We believe that the three shaded domains in Fig. 12 are the major ones.
[Fig. 12. The mutual arrangement of philosophy, control philosophy, management philosophy, cybernetics, control science and managerial practice.]
Presently, concrete control problems are no longer the subject of philosophical analysis. Philosophy (as a form of social consciousness, the theory of general principles of being and cognition and of human attitude to reality, as the science of the universal laws of development) studies GENERAL problems and regularities separated out by experts in specific sciences.
V. Diev believed that control philosophy is “a system of generalizing philosophical judgments about the subject and methods of control, the place of control among other sciences and in the system of scientific cognition, as well as about its cognitive and social role in a modern society.” [50, p. 36].
One can define control philosophy as a branch of philosophy connected with the comprehension and interpretation of control processes and control cognition, studying the essence and role of control [152]. This meaning of the term “control philosophy” (see the dashed-line contour in Fig. 12) has a rich internal structure and covers epistemological research on control science and the analysis of its logical, ontological, ethical and other foundations (both for control science and for managerial practice).
Cybernetics (with a capital C) is a branch of control science studying its most general theoretical regularities. According to V. Diev, “... for many scientific disciplines, there exists a range of problems related to their foundations and traditionally referred to as the philosophy of a corresponding science. Control science follows this tradition, as well.” [50, p. 36]. The foundations of control science also include the general regularities and principles of efficient control representing the subject of Cybernetics (see Chapter 3).
In the 1970-1990’s, against the background of the first disillusions with cybernetics, the only bearers of the canonical cybernetic traditions were philosophers (!), whereas experts in control theory lost their confidence in the ample opportunities of cybernetics.
Things can’t carry on as they are. On the one hand, philosophers vitally need knowledge of the subject (actually, generalized knowledge). In this context, V. Il’in mentioned that “philosophy represents second-rank reflexion; it provides theoretical grounds to other ways of spiritual production. The empirical base of philosophy consists in specific reflections of different types of cognition; philosophy covers not the reality itself, but the treatment of reality in figurative and category-logical forms.” [87].
On the other hand, experts in control theory need “to see the wood for the trees.” Hence, one can hypothesize that Cybernetics must and will play the role of control “philosophy” (here the quotation marks are crucial!) as a branch of control theory studying its most general regularities. The emphasis should be made on the constructive development of control philosophy, i.e., on the formation of its content through obtaining concrete results (probably, first partial results and then general ones). Reflexion can be continued by considering cybernetics philosophy, and so on.
The book [152] briefly analyzed the correlation of control philosophy (as a branch of philosophy studying general problems of control theory and practice), Cybernetics (as a branch of control science generalizing the methods and results of solving theoretical control problems) and management “philosophy” (as a branch of control science generalizing the experience of successful managerial practice), see Fig. 13.
[Fig. 13. Control philosophy within philosophy (epistemological, logical and other foundations; ontological, ethical and other foundations); within control theory, Cybernetics (general regularities of efficient control) and management “philosophy” (general regularities of control activity; theory verification, generalization of practical experience, etc.).]
2.2. Control Methodology
[Figure: Control methodology in relation to control theory, the control subject and the controlled object.]
[Figure: The structural components of the activity of the control subject and of the controlled object: needs and motives → goal → task → technology (content and forms, methods and means) → action → result, framed by conditions, norms and principles; assessment against criteria, corrections and self-regulation close the loop. The control subject applies control to the controlled object and observes its state under external disturbances.]
26 Note that “organism” and “organization” are paronyms.
[Figure: Components of control theory: scheme of control activity, subjects of control, methods and types of control, control tasks and control mechanisms, forms of control, control principles, conditions of control, control functions and control means.]
will also address its third interpretation as a key feature of system structure.
A law is a permanent cause-and-effect relation of phenomena or processes.
A law is a necessary, essential, stable and repetitive relation among phenomena.
In contrast to laws, regularities are not compulsory; principles can be treated as strict imperatives or as desirable properties.
There exists a hierarchy of laws and principles (see Fig. 17): philosophical laws are the most general; the next level is occupied by more “partial” logical and other general scientific laws and principles (including those of cognition and practical activity, see [148, 149]); and finally, the laws, regularities and principles of specific sciences appear least general (on the one hand, control theory as a science possesses its own laws and principles; on the other hand, it employs the laws and principles of other sciences relating to a controlled object).
[Fig. 17. The hierarchy of laws: philosophical laws; logical and other general scientific laws and principles; laws, regularities and principles of specific sciences.]
30 Optimization consists in seeking the best alternatives among a set of admissible ones under given constraints (optimal alternatives). In this phrase each word is important. “Best” means the presence of a criterion (or several criteria) and a way (or several ways) to compare alternatives. It is crucial to take into account existing conditions and constraints: their variation may lead to a situation when other alternatives become best under the same criterion (or criteria). The notion of optimality has received a rigorous and exact representation in different mathematical sciences, has become firmly entrenched in the practical design and exploitation of technical systems, and has played a prominent role in the formation of modern systems ideas. Moreover, this notion is widespread in administrative and public practice and is known to almost everyone. Obviously, the aspiration to increase the efficiency of any purposeful activity has found its expression, in a clear and intelligible form, in the idea of optimization.
31 20% of people drink 80% of beer.
– 80% of a company’s stocking cost corresponds to 20% of its product types;
– 80% of a company’s sales income is generated by 20% of its customers;
– 80% of problems are created by 20% of causes;
– 20% of working time is spent on 80% of the work;
– 80% of the work is performed by 20% of employees, and so on.
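A toy numeric check of such 80/20 statements (our illustration; the data are invented) can be performed directly on any list of contributions:

# Share of the total contributed by the top fraction of items
# (invented data; an illustration of the 80/20 regularity only).
def top_share(values, fraction=0.2):
    ordered = sorted(values, reverse=True)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / sum(ordered)

sales = [1200, 950, 90, 70, 60, 55, 40, 30, 25, 20]  # hypothetical per-customer sales
print(f"top 20% of customers yield {top_share(sales):.0%} of sales")  # prints ~85%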
Another example concerns the principle of harmony. Using the proportions established by L. da Vinci (the golden section) and the well-known properties of the Fibonacci sequence, one postulates the corresponding ratios of other indicators (e.g., the number of employees, wages, budget articles, and so on).
Such “principles” and their apologists can be treated with a smile, as both have no relation to science proper.
Second, none of the researchers (!) has stated any enumeration basis for the principles and laws they suggest. This fact testifies to their possible non-universality, as well as to the possible incompleteness of the enumeration, its weak soundness, possible internal inconsistency, etc.
And third, the list of laws, regularities and principles should be extended and systematized.
As an illustration, consider some principles and laws of control and functioning of complex systems proposed by different authors.
PRINCIPLES OF COMPLEX SYSTEMS FUNCTIONING [86, p. 60–67]:
1) The principle of reactions–responding to an external influence, a system reinforces processes that compensate for it (the Le Chatelier–Brown principle imported from physics and chemistry).
2) The principle of system cohesion–a system’s form is maintained by a balance, static or dynamic, between cohesive and dispersive influences. The form of an interacting set of systems is similarly maintained.
3) The principle of adaptation–for continued system cohesion, the mean rate of system adaptation must equal or exceed the mean rate of change of the environment (the response times obey the reverse rule).
4) The principle of connected variety–the stability of interacting systems increases with variety, and with the degree of connectivity of that variety within the environment.
5) The principle of limited variety–variety in interacting systems is
limited by the available space and the minimum degree of differentiation.
6) The principle of preferred pattern–the probability that interacting
systems will adopt locally-stable configurations increases both with the
variety of systems and with their connectivity.
40
7) The principle of cyclic progression–interconnected systems driven by an external energy source will tend to a cyclic progression in which system variety is generated, dominance emerges to suppress the variety, the dominant mode decays or collapses, and survivors emerge to regenerate variety.
According to [156], most well-known principles and laws of functioning of complex (in the first place, biological) systems are exactly regularities or hypotheses. To explain this statement, consider the PRINCIPLES OF BIOLOGICAL SYSTEMS32 FUNCTIONING, which are also the subject of Cybernetics (see the surveys in [10, 156]).
1. The principle of least action. A dynamic system moves from an initial configuration to a final configuration in a specified time along a trajectory which minimizes the action (a functional of the trajectory; see the formal statement after this list). Actually, this principle coincides with the law of optimality pioneered in physics in the 1790–1800’s.
2. The principle of the permanent inequilibrium (E. Bauer, 1935).
The living and only the living systems are never in an equilibrium, and,
on the debit of their free energy, they continuously invest work against
the realization of the equilibrium which should occur within the given
outer conditions on the basis of the physical and chemical laws [21, p. 43]
(see the principle of reactions).
3. The principle of simplest structure (N. Rashevsky, 1943). A
concrete structure of a living system which exists in nature is the simplest
among all structures being able to perform a given function or a set of
functions [178].
4. The principle of feedback (see also the principle of functional systems by P. Anokhin [9]). In this context, we have to mention his principle of anticipatory reflection of reality: “One universal regularity was formed during the adaptation of the organisms to the environment, which was further developed during the whole period of evolution of living organisms: the highest order of speed for the reflection of the low speed deployment of the events of the real world.” [7]. A complex adaptive system responds not to an external influence as a whole, but “to the first chain of a repeated series of external influences.” [7].
Practical realizations of the principle of feedback have a long history–from several mechanisms in Egypt (Ctesibius’ water clock, the 2nd-3rd century B.C.) to perhaps the first feedback usage in Drebbel’s thermostat (1572–1633), Polzunov’s water-level float regulator (1765), Watt’s steam engine governor (1781), Jacquard’s loom with program control (1804–1808), etc.
32 Interestingly, the overwhelming majority of these principles were formulated in the 1940-1960’s.
The pioneering fundamental works on mathematical control theory were published by J. Maxwell [127] and I. Vyshnegradsky [216]33. Actually, the first general systematic analysis of feedback was performed by P. Anokhin (1935) [8], later jointly by A. Rosenblueth, N. Wiener, and J. Bigelow (1943) [181] and, in its final statement, by N. Wiener (1948) [221]. In fairness, note that feedback had been studied and used in electrical engineering in the 1920’s.
5. The principle of least interaction (I. Gelfand and M. Tseitlin, 1962 [60]). Nerve centers aspire to achieve a situation where afference (the informational and control flows and signals transmitted in the central nervous system) is minimal. In other words, a system functions rationally in some external environment if it seeks to minimize interaction with the environment [202].
6. The principle of brain’s stochastic organization (A. Kogan, 1964 [100]). Each neuron has no independent function, i.e., it is a priori not responsible for the solution of a concrete task; all tasks are distributed randomly.
7. The principle of hierarchical organization (particularly, of information processing by the brain), see the works of N. Amosov, N. Bernshtein, G. Walter, and W. Ashby [5, 15, 24, 218]. Achieving a whole goal is equivalent to achieving the set of its subgoals.
8. The principle of adequacy (W. Ashby, 1956 [14], Yu. Antomonov [10] and others). For effective control, the complexity of the controller (the dynamics of its states) must be adequate to the complexity (rate of change) of the controlled processes. In other words, the “capacity” of the controller defines the absolute limit of control regardless of the capabilities of the controlled system (see the law of requisite variety above and the formal statement after this list).
9. The principle of probabilistic prediction in action design (N. Bernshtein, 1966) [24]. The world is reflected in two models, viz., the model of the desired future (probabilistic prediction based on previously accumulated experience) and the model of the past (which explicitly reflects the observed reality).
10. The principle of necessary degrees of freedom selection (N. Bernshtein, 1966). Initially, learning involves more degrees of freedom of the learned system than is actually required [24]. During learning, the number of “involved” variables decreases as inessential variables are “eliminated” (compare this principle with the phenomena of generalization and concentration of nervous processes–I. Pavlov, A. Ukhtomskii, P. Simonov, and others).
33 The first course of lectures entitled “Theory of direct-action regulators,” by D. Chizhov, appeared in Russia in 1838.
11. The principle of determinism destruction (H. Foerster,
Yu. Antomonov [10, 55] and others, 1966). To achieve a qualitatively
new state and to increase the level of system organization, it is necessary
to destroy (rearrange) the existing deterministic structure of connections
among system elements, which was formed by the previous experience.
12. The principle of requisite variety (W. Ashby, 1956). This principle (see above) is close to the principle of adequacy [14].
13. The principle of natural selection (S. Dancoff, 1953). In systems becoming efficient due to natural selection, the variety of mechanisms and the capacity of information transmission channels do not appreciably exceed the minimum level required [48].
14. The principle of deterministic representation (J. Kozielecki, 1979 and others). Modeling of decision-making by an individual assumes that his beliefs about reality contain no random variables or uncertain factors (the consequences of decisions depend on well-defined rules) [109].
15. The principle of complementarity (inconsistency) (N. Bohr,
1927; L. Zadeh, 1973). The high accuracy of description of a certain
system is inconsistent with its high complexity [228]. Sometimes, this
principle is given a simpler interpretation: the real complexity of a system
and the accuracy of its description are roughly inversely proportional.
16. The principle of monotonicity (“keep the achieved,” W. Ashby, 1952). In learning, self-organization, adaptation, etc., a system must “keep” an achieved (current) positive result (equilibrium, goal of learning, etc.) [14, 15].
17. The principle of natural technologies in biological systems (A. Ugolev, 1967 [205]). The principle of block structure states that physiological functions and their evolution are based on combinations of universal functional blocks implementing different elementary functions and operations.
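Two of the principles above admit compact formal statements; the notation is standard but ours, not the book’s. For the principle of least action (no. 1), the action is a functional of the trajectory q(t),

S[q] = ∫[t0, t1] L(q, q̇, t) dt, δS = 0,

i.e., the realized motion makes the action stationary (minimal in the typical case), where L is the Lagrangian. For the principle of requisite variety (no. 12; cf. the principle of adequacy, no. 8), Ashby’s law bounds the variety of outcomes E from below:

V(E) ≥ V(D) − V(R),

where V(D) is the variety of disturbances and V(R) the variety of the regulator’s responses: only variety in the controller can absorb variety in the controlled process.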
At first glance, the discussed principles of functioning of biological systems can be formally divided into natural-scientific ones (e.g., principles nos. 1, 2, 5, 8, and 15), empirical ones (e.g., principles nos. 4, 6, 10, 11, 14, 16, and 17) and intuitive ones (principles nos. 3, 7, 9, 12, and 13).
Natural-scientific principles (“laws”) reflect the general regularities, constraints and capabilities of biological systems imposed by natural laws. As a rule, empirical principles are formulated via the analysis of experimental data, the results of experiments and observations, thereby having a more local character than natural-scientific ones. And finally, intuitive laws and principles (ideally, not contradicting the natural-scientific ones and being consistent with the empirical ones) appear least formal and universal, as they proceed from intuitive understanding and common sense.
Yet, a detailed consideration shows that all the “natural-scientific” principles mentioned above are rather empirical and/or intuitive (not formally justified). For instance, the principle of least action (seemingly a classical physical law) is formulated for mechanical systems (there exist analogs in optics and other branches of physics). Its unadapted application to biological and other systems is therefore somewhat incorrect and only partially substantiated. In other words, the claim that biological systems obey the principle of least action is merely a hypothesis made by researchers: today, in many cases it possesses no well-defined grounds.
Therefore, all the well-known and accepted principles (and laws) of biological systems functioning agree with one of the following standard statements: a regularity–“if a system has a (concrete) internal structure, then it demonstrates an (appropriate) behavior”–or a hypothesis–“if a system demonstrates a (concrete) behavior, then it most likely has an (appropriate) internal structure.” Here the words “most likely” are essential: first-type statements establish sufficient conditions for the realization of an observed behavior and can be (partially or completely) verified in experiments; second-type statements act as hypotheses, i.e., “necessary” conditions (in most cases, postulated without rigorous argumentation and bearing the explanatory function) which are imposed on the structure and properties of a system on the basis of its observation.
Particular laws and principles. We emphasize that different branches of control theory formulate separate laws and principles valid under corresponding assumptions. Here are some examples.
The book [59] presented several laws of cybernetical physics:
– The value of any controlled invariant of a free system can be changed by an arbitrary quantity via an arbitrarily small feedback;
– For a controlled Lagrange or Hamilton system with a small dissipation rate ρ, the energy achievable by a control action of level γ has the order of (γ/ρ)²;
– Each controllable chaotic trajectory can be transformed into a periodic one using an arbitrarily small control action.
The book [157] introduced several principles of control in organizations:
– The principle of agents’ game decomposition, stating that a Principal applies controls implementing a dominant strategy equilibrium of the agents’ game (a toy illustration is sketched below);
– The principle of functioning periods’ decomposition, stating that a Principal applies controls making agents’ decisions independent of game history;
– The principle of trust (the fair play principle [36, 39] and the revelation principle [141] as its analog), stating that an agent trusts information reported by a Principal, whereas the latter makes decisions assuming the truth of information reported by the former;
– The principle of sufficient reflexion, stating that the reflexion depth of an agent is defined by its awareness.
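As a toy illustration of the first principle (our example, not from [157]): a control “decomposes the agents’ game” if, under it, each agent’s prescribed action is dominant regardless of the others’ choices. A brute-force check for a hypothetical two-agent game:

# Check that a strategy profile is a (weakly) dominant strategy equilibrium
# in a finite two-agent game; the payoff matrices are invented for illustration.
# payoffs[i][a1][a2] = payoff of agent i when agent 1 plays a1 and agent 2 plays a2.
payoffs = [
    [[3, 3], [1, 2]],  # agent 1
    [[2, 0], [2, 1]],  # agent 2
]

def is_weakly_dominant(agent, action, n_actions=2):
    """True if `action` is at least as good as any alternative, whatever the opponent does."""
    for other in range(n_actions):      # opponent's action
        for alt in range(n_actions):    # agent's alternative action
            own = payoffs[agent][action][other] if agent == 0 else payoffs[agent][other][action]
            dev = payoffs[agent][alt][other] if agent == 0 else payoffs[agent][other][alt]
            if own < dev:
                return False
    return True

profile = (0, 0)
print(all(is_weakly_dominant(i, profile[i]) for i in range(2)))  # True: a dominant strategy equilibrium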
Obviously, the above and similar laws and principles represent fruitful and general results derived in separate branches of control theory, but they have no universal character: they are inapplicable or only selectively applicable in “adjacent” branches.
CONTROL PRINCIPLES34 [152].
Principle 1 (the principle of hierarchy). Generally, a control system
has a hierarchical structure. It must agree with the functional structure of
a controlled system and not contradict the hierarchy of (horizontally or
vertically) adjacent systems. Tasks and resources supporting the activity
of a controlled system must be decomposed according to its structure.
Principle 2 (the principle of unification). Controlled systems and
control systems of all levels must be described and studied using common
principles (this applies both to the parameters of their models and the
efficiency criteria of their functioning). However, such principles must
not eliminate the necessity of considering the specifics of a concrete
system. Most real control situations can be reduced to a set of the so-
called typical situations, where the corresponding typical decisions ap-
pear optimal.
On the other hand, control inevitably causes specialization (restriction of variety) of control subjects and controlled subjects.
Principle 3 (the principle of purposefulness). Any impact of a con-
trol system on a controlled system must be purposeful.
Principle 4 (the principle of openness). Operation of a control sys-
tem must be open to information, innovations, etc.
34 Of course, ideally principles should not be stated as requirements to control systems ("it must be that…", "it is necessary that…" and so on), which can be satisfied or not. Instead, they should be such that, whenever a certain principle fails, a control system becomes unable to work properly. Unfortunately, such "hard" principles do not exist (perhaps, except the feasibility of control).
Principle 5 (the principle of efficiency). A control system must im-
plement the most efficient control actions from the set of feasible control
actions (also see the principle of extremization).
Principle 6 (the principle of responsibility). A control system ap-
pears responsible for decisions made and the efficiency of controlled
system operation.
Principle 7 (the principle of non-interference). A Principal of any level interferes in a process iff its direct subordinates are unable to implement a complex of necessary functions (at present and/or based on a forecast).
Principle 8 (the principle of social and state control, participation).
Control of a social system must aim at the maximal involvement of all
interested subjects (society, bodies of state power, individual and artifi-
cial persons) in the development of a controlled system and its operation.
Principle 9 (the principle of development). A control action lies in
modifying a control system proper (being induced from within, it can be
treated as self-development). The matter also concerns the development
of a controlled system.
Principle 10 (the principle of completeness and prediction). Under a given range of external conditions, the set of control actions must ensure the achievement of posed goals (the completeness requirement) in an optimal and/or feasible way. This must be done taking into account the possible response of a controlled system to certain control actions under predicted external conditions.
Principle 11 (the principle of regulation and resource provision).
Control activity must be regulated (standardized) and correspond to
constraints set by a metasystem (a system possessing a higher hierar-
chical level). Any management decision or control action must be feasi-
ble (also, in the sense of provision with necessary resources).
Principle 12 (the principle of feedback). Efficient control generally
requires information on the state of a controlled system and on the condi-
tions of its functioning. Moreover, implementation of a control action and
corresponding consequences must be monitored by a control subject.
Principle 13 (the principle of adequacy). A control system (its struc-
ture, complexity, functions) must be adequate to a controlled system (to
its structure, complexity, functions, respectively). Problems to-be-solved
by a controlled system must be adequate to its capabilities.
Principle 14 (the principle of well-timed control). This principle
states that, in real-time control, information required for decision-making
must be supplied at the right time. Moreover, management decisions
(control actions) must be made and implemented (chosen and generated,
respectively) quickly enough according to any changes in a controlled
system and external conditions of its functioning. In other words, the
characteristic time of management decisions or control actions must not
exceed the characteristic time of changes in a controlled system (i.e., a
control system must be adequate to controlled processes in the sense of
their rate of change).
Principle 15 (principle of predictive reflection). A complex adaptive
system predicts feasible changes in essential external parameters. Conse-
quently, when generating control actions, one should predict and antici-
pate such changes.
Principle 16 (the principle of adaptivity). The principle of predictive reflection underlines the necessity of predicting the states of a controlled system and the corresponding actions of a Principal. In contrast, the principle of adaptivity states that (1) one must consider all available information on the history of controlled system functioning and (2) decisions once made and control actions once chosen (together with the corresponding principles of decision-making) must be regularly revised (see the principle of well-timed control) following any changes in the states of a controlled system and in the conditions of its functioning.
Principle 17 (the principle of rational decentralization). This prin-
ciple claims that, in any complex multi-level system, there exists a ration-
al decentralization level for control, authorities, responsibility, awareness,
resources, etc. Rational decentralization implies adequate decomposition
and aggregation of goals, problems, functions, resources, and so on.
In [154] it was shown that multilevel hierarchical systems gain new
properties (in comparison with two-level ones) mainly due to the follow-
ing factors:
– the “aggregative” factor, consisting in aggregation (“convolu-
tion,” “compression,” and so on) of information about system elements,
subsystems, an environment, etc. as the level of hierarchy grows;
– the “economic” factor, consisting in variation of financial, materi-
al and other resources of a system under any changes in the composition
of its components;
– the “uncertainty” factor, consisting in variation of the awareness
of system elements about the essential (internal and external) parameters
of their functioning;
– the "organizational" factor, consisting in power sharing, i.e., the feasibility of some system elements to establish "rules of play" for the others;
– the “informational” factor, consisting in variation of informational
load on system elements.
“In fact, any complex system, whether it has arisen naturally or been
created by human beings, can be considered organized only if it is based
on some kind of hierarchy or interweaving of several hierarchies. At least
we do not yet know any organized systems that are arranged differently.”
[203, p. 37].
Principle 18 (the principle of democratic control, also known as the
principle of anonymity). This principle requires equal conditions and
opportunities for all participants of a controlled system (without a priori
discrimination in informational, material, financial, educational and other
resources).
Principle 19 (the principle of coordination). This principle declares
that, under existing institutional constraints, control actions must be
maximally coordinated with the interests and preferences of controlled
subjects.
Principle 20 (the principle of ethics, the principle of humanism) implies that, in management and control, consideration of existing ethical norms (in a society or an organization) has priority over other criteria.
Note that the above control principles are applicable to systems of almost any nature (probably, except the principle of social and state control and the principle of coordination, which make no sense in control of technical systems).
Possible classification bases for the listed control principles are the
relations between objects (a controlled system, a control system, an
external environment–see Fig. 18) or the temporal relations (past, present,
future–see Fig. 19).
[Fig. 18 groups the control principles by the relations they concern: the external environment (2 unification, 4 openness, 8 participation, 16 adaptivity); the control system itself (1 hierarchy, 9 development, 10 completeness and prediction, 11 regulation and resource provision, 13 adequacy, 14 well-timed control, 17 rational decentralization); the relation between the control system and the controlled system (3 purposefulness, 5 efficiency, 6 responsibility, 7 non-interference, 12 feedback, 15 predictive reflexion, 18 anonymity, 19 coordination).]
Therefore, the general laws and principles of control are the subject
of Cybernetics. Their list is far from final canonization, and its supple-
mentation and systematization represent a major task of Cybernetics!
4. Systems Theory and Systems Analysis.
Systems Engineering
35 Integrity and commitment to a common goal form a backbone factor.
36 An aggregate of stable connections among system elements, ensuring its integrity and self-identity, is called its structure.
… substantiation and implementation of complex problem solving, including political, social, economic, technical and other problems [166].
To solve well-defined problems (i.e., the ones admitting an explicit quantitative description and strong formalization), systems analysis employs optimization and operations research methods: a researcher constructs an adequate mathematical model and seeks optimal purposeful actions (control) within the model. To solve ill-defined problems, systems analysis operates different techniques including typical stages (see Table 2 for a series of common approaches to systems analysis and strategic analysis of problem solving). Actually, systems analysis suggests universal methods of problem solving applicable to a wide range of fields: organizational control, economics, military science, engineering, and others.
Table 2. Systems analysis and strategic analysis of problem solving (see [73])

E. Golubkov: 1. Problem statement. 2. Examination. 3. Analysis. 4. Preliminary judgment. 5. Confirmation. 6. Final judgment. 7. Implementation of chosen decision.

P. Drucker: 1. Purpose and expected results. 2. Key elements of process design: time, resources, budget, major steps. 3. Roles and responsibilities of self-assessment team. 4. Elements essential to success: utilizing an experienced facilitator; engaging dispersed leadership; encouraging constructive dissent; using data to inform dialogue.

D. Novikov: 1. Monitoring and analysis of actual state. 2. Forecasting of evolution. 3. Goal-setting. 4. Choosing technology of activity. 5. Planning and resources allocation. 6. Motivation. 7. Control and operative management. 8. Reflexion, analysis and improvement of activity.

S. Optner: 1. Symptoms identification. 2. Problem urgency estimation. 3. Goal-setting. 4. Definition of system structure and its defects. 5. Capabilities assessment. 6. Alternatives search. 7. Alternatives assessment. 8. Decision elaboration. 9. Decision acceptance. 10. Decision procedure initiation. 11. Decision implementation. 12. Assessment of implemented decision and its consequences.

N. Fedorenko: 1. Problem formulation. 2. Definition of goals. 3. Data acquisition. 4. Elaboration of the maximal number of alternatives. 5. Selection of alternatives. 6. Modeling by equations, programs or scenarios. 7. Costs estimation. 8. Sensitivity tests (parametric analysis).

Yu. Chernyak: 1. Problem analysis. 2. Definition of system. 3. Structural analysis. 4. Formation of general goal and criterion. 5. Goal decomposition, identification of demands in resources and processes. 6. Identification of resources and processes. 7. Forecasting and analysis of future conditions. 8. Assessment of goals and means. 9. Selection of control alternatives. 10. Diagnosis of existing system. 11. Elaboration of complex development program. 12. Design of organization for goals' achievement.

S. Young: 1. Goal-setting for organization. 2. Problem identification. 3. Diagnosing. 4. Decision search. 5. Assessment and choice of alternatives. 6. Decision negotiation. 7. Decision approval. 8. Preparation for decision implementation. 9. Decision application control. 10. Efficiency verification.
Therefore, in the USSR systems analysis was considered side by side
with systems theory (and later almost “absorbed” the latter) as a set of
general principles of examining any systems (systems approach). Similar-
ly to cybernetics, systems analysis (being an integrative science) admits
the “umbrella” definition as a union of different component sciences
under the auspices of “systemacy”: artificial intelligence, operations
research,37 decision theory, systems engineering and others, see Fig. 20.
According to this viewpoint, systems analysis has almost no results of its own.
[Fig. 20. The "umbrella" structure of systems analysis: artificial intelligence, data analysis and decision-making, operations research, optimization, systems engineering, general systems theory, information technology, and others.]
39 S. Beer defined management science as "the business use of operations research."
… systems science (present days). In other words, "systems analysis" as comprehended in Russia rather matches "systems science" (SS) in foreign research, i.e., the sciences about systems (systems studies)–see Fig. 21.
As a matter of fact, general systems science evolved in several direc-
tions. First, its “mainstream” gave birth to two known subdirections:
K. Boulding’s theory of systems classes [30] and P. Checkland’s soft
systems methodology [43, 44].
Second, note that the 1950s–1970s were remarkable for a significant breakthrough in mathematical systems theory [40, 93, 132, 133], which later merged with control theory.
Third, we naturally mention system dynamics, which explores the influence of a system's elements and structure on its behavior over time. Here the main apparatus is simulation modeling based on differential equations or discrete mappings. The pioneering works were [57, 58], and the most famous application to global development was described in the book [130]. The state-of-the-art in this field can be traced in [78, 129].
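For the reader unfamiliar with system dynamics, a minimal sketch may help; the predator-prey model and all coefficients below are illustrative assumptions, not models from [57, 58, 130].

```python
# A toy system-dynamics run: two stocks evolve by coupled differential
# equations, integrated by the Euler method.
a, b, c, d = 1.0, 0.1, 1.5, 0.075    # growth, predation, death, conversion rates
prey, pred, dt = 10.0, 5.0, 0.001
for _ in range(int(50 / dt)):        # simulate 50 time units
    d_prey = a * prey - b * prey * pred
    d_pred = d * prey * pred - c * pred
    prey += dt * d_prey
    pred += dt * d_pred
print(round(prey, 2), round(pred, 2))  # the populations cycle rather than settle
```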
Reverting to the subject, systems analysis actually concerns any ana-
lytical study assisting a decision-maker to choose an appropriate course
of actions [147].
Subsequently, SA developed towards systems engineering (SE) (see
the classical publications [67, 70]). This is a branch of science and tech-
nology covering the whole life cycle of a complex system (design, pro-
duction, testing, exploitation, support, maintenance and repair, upgrade
and utilization).
As years passed, SA became a set of practice-oriented analysis tech-
nologies for concrete systems, i.e., products and/or services [142, 185,
209, 219]. Systems analysis goes in parallel with systems design (SD),
systems development and other associated stages.
Nowadays, SS and SE (e.g., see the modern textbooks and standards
[49, 76, 88, 185, 196, 209, 219]) comprise SA, SD, product lifecycle
management (PLM), project and program management, several branches
of management science and others, as illustrated by Fig. 21. And general
systems theory forms their common methodological core, see Fig. 21.
Most applications of SE are complex technical and organization-
technical systems, as well as software development.
[Fig. 21. The structure of systems science: general systems theory as the common core, surrounded by systems analysis, systems design, systems development, management science, project and program management, PLM, and others.]
40 Holism is an approach treating complex systems as a whole; it claims that the properties of complex systems cannot be derived via examining the properties of their elements.
5. Some Trends and Forecasts
41 The existing grant-based funding of research facilitates differentiation of sciences and partially stimulates the existence of scientific self-reproducing "sects" in all fields of investigations.
42 Actually, these conferences gather researchers from many other countries.
… All-Union Meetings on Regulation Theory (later, on Automatic Control and, then, on Control Problems). Interestingly, the gradually changing title of these scientific events agrees with the evolution of control theory and its subjects (see below).
Generally speaking, world science demonstrates a stable growth of
publications dedicated to control (see Fig. 4–Fig. 6).
43 All figures show the relative shares of papers having a corresponding topic.
44 In AMCP-2014, about 25-33% of the papers were dedicated to control problems in interdisciplinary systems (socioeconomic, organizational and technical, etc.). They have been eliminated from our analysis.
45 Notwithstanding its "classical" character, ACT demonstrates intensive development, including the appearance of new problems in well-known fields (e.g., in linear control systems) and new controlled objects (e.g., the rapid growth of publications on quantum systems control).
Fig. 22. The general topics of ACC and CDC
Fig. 22 and Fig. 23 illustrate (a) the relative "stability of traditions" of the corresponding scientific events and (b) the well-known fact that CDC papers are more "theoretical," whereas IFAC congresses are par excellence application-oriented. In this sense, AMCP-2014 most likely follows the tradition of IFAC congresses.
Fig. 24. The topics of papers at AMCP-2014
… communications in MAS, cooperative control, upper levels of control (strategic behavior of agents), "others" (mostly, information and communication networks with a slight emphasis on control problems).
… transport and traffic), marine vehicles and "others" (from agriculture to education).
5.2. Interdisciplinarity
Modern control theory (see Fig. 30 and Fig. 32) studies control prob-
lems for different classes of controlled objects by designing or applying
appropriate methods and means of control.
[Fig. 30. Control as the interface between objects, methods and means of control: measuring, converting and actuation means on one side, informational and computing means on the other.]
46 According to the Merriam-Webster dictionary, science is knowledge about or study of the natural world based on facts learned through experiments and observation; a particular area of scientific study (such as biology, physics, or chemistry) or a particular branch of science; a subject that is formally studied in a college, university, etc.
47 In the case of technical systems, initial information "suppliers" are mechanics, aerodynamics, and so on.
[A figure showing the path from scientific knowledge about a controlled object through control theory and its applications to control technologies.]
Within the last 50 years,48 mathematical control theory has continuously involved new classes of controlled objects (since the 1950s-1960s, economic systems; later, ecological-economic and other systems). In recent decades, the focus of attention has been gradually drifting to living systems and social systems. The fruitful development of the corresponding branches of control theory and the accumulation of knowledge about controlled objects require a close cooperation between mathematicians (control experts) and representatives of associated sciences.
Moreover, the application domain of control theory is becoming wider. A key problem in the dissemination of its methods (the integration problem) is the availability of sufficiently adequate models of controlled objects. Again, here we need a close cooperation between control experts and representatives of associated sciences (physics, economics, biology, sociology and others).
For a large scientific organization, institution or scientific school to maintain and/or gain leading positions in the field of control in several decades, when the seemingly new objects of control will have become classical, it is necessary to initiate their intensive research right now!
48 Interestingly, a broader retrospective review indicates that social systems cyclically interchange with technical ones in the focus of control theory, getting "back" at a new turn of the dialectical spiral. Indeed, perhaps the first object of control (in prehistoric society) was a group of people; later on, transport and elementary mechanisms; again followed by groups of people (Plato – N. Machiavelli – F. Bacon – T. Hobbes – … – A. Ampère – B. Trentowski). Starting from the middle of the 19th century, control theory switched to technical (mechanical) systems. Today, control of human beings, their groups and/or collectives is again on the agenda.
[Fig. 32 shows a timeline from the 1860's to the 2020's: mechanical systems, then technical systems, then organization-technical and informational systems, then decentralized (networked) intelligent systems; the classes of controlled objects are (1) technical, (2) economic, (3) ecological-economic, (4) living and (5) social systems.]
Fig. 32. The past, present and future of control theory
Increasingly often, controlled objects represent the so-called interdisciplinary-nature systems [152]. Suppose that the corresponding classification is based on the subject of human activity ("nature – society – production"). In this case, we may distinguish among organizational systems (people), ecological systems (nature), social systems (society), as well as economic and technical systems (production), see Fig. 33. Different paired combinations emerge at the junction of these classes of systems:
combinations emerge at the junction of these classes of systems:
organization-technical systems;
socio-economic systems;
ecological-economic systems;
socio-ecological systems;
normative-value systems;
noosphere systems.49
[Fig. 33. Interdisciplinary-nature systems: organizational systems (human beings) and economic and technical systems (production), with organization-technical and man-machine, socioeconomic, ecological-economic, normative-value and noosphere systems at their junctions.]
49 Systems, where a specially organized activity of human beings is a determining factor for the development of large-scale (global) ecological systems.
The following research priorities evidence the growing interest of investigators in interdisciplinary-nature systems:
– US National Science Foundation: group control, spacecraft clus-
ters, combat control, control of financial and economic systems, control
of biological and ecological systems, multiple-profile teams in control
loop, etc.;
– Research in the European Union: man-machine symbiosis (model-
ing of a human being in control loops including the case of a controlled
subject), complex distributed systems and quality improvement of sys-
tems in an uncertain environment (global manufacturing, security, heter-
ogeneous control strategies, new principles of multidisciplinary coordina-
tion and control) and others;
– Key directions of fundamental research by the Russian Academy
of Sciences: control in interdisciplinary models of organizational, social,
economic, biological and ecological systems; group control; cooperative
control and others.
The paper [66] mentioned three global challenges to cybernetics,
namely, transitions:
1) from nonliving to living (from chemistry to biology);
2) from living to intelligent (from living organisms to human con-
sciousness);
3) from human consciousness to human spirit as the highest level of
consciousness.
The specifics of interdisciplinary-nature systems incorporating hu-
man beings as a control object consist in the following:
– independent goal-setting, purposeful behavior (conscious infor-
mation misrepresentation and strategic behavior, non-fulfillment
of commitments, etc.);
– reflexion (nontrivial mutual awareness, foresight, behavior fore-
casting for a Principal or control object/subject, the effect of roles
exchange,50 etc.);
– bounded rationality (decision-making in uncertain conditions and
under existing constraints on the volume of processed data);
– cooperative and/or competitive interaction (formation of coali-
tions, informational contagion, etc.);
– hierarchical structure;
– multicomponent structure;
50 In systems whose elements have strategic behavior, discrimination between control subjects and controlled ones can be ambiguous; e.g., in some situations a subordinate manipulates its superior.
– distributed/networked structure and/or different scale (in space
and/or time, see the paper [135] discussing the principle of requi-
site variety and its extension to multiscale systems).
Historically, "mechanical" systems (later, technical ones) were the first classes of controlled objects studied theoretically on a mass scale (see Fig. 32). As a matter of fact, the deepest and most extensive theoretical results of control were obtained exactly for these classes. As new controlled objects appear, researchers naturally endeavor to perform "results transfer," i.e., to translate some existing results to the new objects. That was exactly the case for interdisciplinary-nature systems: general results of Cybernetics and concrete analysis results of control problems for technical systems were transferred to the former, see arrow I in Fig. 34.
[Fig. 34. Results transfer between technical systems and interdisciplinary-nature systems, with multi-agent systems as an intermediary.]
51 For the sake of justice, note that at all times living systems encouraged scientists and engineers to apply analogies, i.e., to "repeat" certain properties of living nature objects in artificial systems.
In recent years, the inverse tendency has been gradually showing itself: more and more artificial technical or informational systems are assigned the inherent properties of social or living systems. This represents a basic trend which will perhaps intensify in the future. In many cases, multi-agent systems act as a tool of "inverse results translation" (see arrow II in Fig. 34). Multi-agent systems are discussed below. For instance, such inverse translation takes place in numerous manifestations of "intellectuality": cooperative behavior, reflexion, etc.
5.3. “Networkism”
52 Control problems of quantum systems are mostly treated in theory, but micro-level controlled objects ("microsystems") have become almost common.
53 Not to mention the penetration of ICT into engineering and everyday life, with the associated educatory and social capabilities and threats.
[A figure contrasting centralized control with decentralized (networked) control.]
54 Network-centrism operates its own abbreviations differing from control theory (see above): C3I–Command, Control, Communications and Intelligence; C4I–Command, Control, Communications, Computers and Intelligence; and others.
…ties and industrial sectors). One such aspect consists in informational control as a purposeful impact on the awareness of controlled subjects; therefore, a topical problem is to develop a mathematical apparatus providing an adequate description of the relationship between the behavior of system participants and their mutual awareness [158].
Design of intelligent analytic systems for informational and analytic
support of goal-setting and control cycle represents another important
informational aspect of control in decentralized hierarchical systems.
Here it seems necessary to substantiate methodological approaches to
control efficiency in decentralized control systems, including elaboration
of principles and intelligent technologies for data acquisition, representa-
tion, storage and exchange.
We underline that an appreciable share of the information required for situation assessment, goal-setting and control strategy choice in decentralized systems is ill-structured (mostly, in the form of text), and there arise the problems of relevant search and further analysis of such information. These circumstances lead to the need for new information retrieval methods (or even knowledge processing methods) based on proper consideration of the lexis of information and its different quantitative characteristics and, moreover, on analysis of its semantics, separation of target data and situation parameters, assessment of their dynamics and scenario modeling of situation development in future periods.
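A minimal sketch of such lexis-based retrieval is the classical TF-IDF weighting with cosine similarity; the toy corpus below is an assumption for demonstration purposes only, and semantic analysis is deliberately left out.

```python
import math
from collections import Counter

docs = ["sensor reports anomalous vibration in turbine",
        "quarterly report on turbine maintenance costs",
        "vibration analysis of the main turbine bearing"]
query = "turbine vibration"

vocab = {w for d in docs for w in d.split()}
idf = {w: math.log(len(docs) / sum(w in d.split() for d in docs)) for w in vocab}

def tfidf(text):                     # term frequency weighted by term rarity
    tf = Counter(text.split())
    return {w: tf[w] * idf.get(w, 0.0) for w in tf}

def cosine(u, v):
    dot = sum(u.get(w, 0.0) * v.get(w, 0.0) for w in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

q = tfidf(query)
for d in sorted(docs, key=lambda d: -cosine(q, tfidf(d))):
    print(round(cosine(q, tfidf(d)), 3), d)   # documents ranked by relevance
```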
In recent years, control theory has increasingly addressed the notion of system "heterogeneity," comprehended, in the first place, as the multiplicity of its mathematical description (e.g., descriptive dissimilarity of separate subsystems: the type and scale of time/space of subsystems functioning, multi-type descriptive languages for certain regularities of a studied object, etc.). "Heterogeneity" also means complexity appearing in (qualitative, temporal and functional) dissimilarity, (spatial and temporal) distribution and the hierarchical/networked structure of a controlled object and an associated control system (see Section 5.3).
An adequate technology for the design and joint analysis of a set of heterogeneous system models is the so-called hierarchical modeling. According to this technology, models describing different parts of a studied system or its different properties (perhaps, with different levels of detail) are ordered on the basis of some logic, thereby forming a hierarchy or a sequence (a horizontal chain). Generally, lower hierarchical levels correspond to higher levels of detail in the description of the modeled system. Each element of a sequence possesses almost the same level of detail, and the results (outputs) of the current model represent input data for the next model. This approach to modeling was born and further developed in the 1960-1970's [40, 133].
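A minimal sketch of such a chain is given below; the three model functions are hypothetical placeholders chosen only to show how the output of a more detailed model becomes the input of a coarser one.

```python
def micro_model(raw_measurements):   # high detail: per-unit measurements
    return sum(raw_measurements) / len(raw_measurements)

def meso_model(average_state):       # medium detail: subsystem load and reserve
    return {"load": average_state * 0.8, "reserve": 1.0 - average_state * 0.8}

def macro_model(subsystem):          # low detail: a single go/no-go indicator
    return subsystem["reserve"] > 0.2

result = [0.4, 0.6, 0.5]             # external information enters the first model
for model in (micro_model, meso_model, macro_model):
    result = model(result)           # each output feeds the next, coarser model
print(result)
```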
In some sense, hierarchical models are a wider category than hybrid
models and the multi-model approach. A hybrid model is a model com-
bining elements of two or more models reflecting different aspects of a
studied phenomenon or process and/or employing different apparatuses
(languages) of modeling–see Fig. 37. For instance, a hybrid model can
include discrete and continuous submodels, digital and analog submodels,
and so on.
[Fig. 37. A chain of models: external information enters model 1, whose outputs feed model 2, and so on up to model n, which produces the result.]
55 The classical CBG has the following statement. Two commanders (colonels Blotto and Lotto) distribute their forces among a finite number of springboards. The winner at each springboard is the player having more forces. Each commander strives to win at as many springboards as possible.
… that, in most cases, it is difficult to find an analytical solution to the CBG (see a survey in [107]).
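The difficulty is easy to feel even on a toy instance. The brute-force sketch below (with assumed small sizes N = 5 units and K = 3 springboards, and payoff equal to springboards won minus springboards lost) shows that every pure strategy can be beaten, so equilibria are necessarily mixed:

```python
from itertools import product

N, K = 5, 3
strategies = [s for s in product(range(N + 1), repeat=K) if sum(s) == N]

def payoff(a, b):                    # springboards won by a minus won by b
    return sum((x > y) - (x < y) for x, y in zip(a, b))

best = max(strategies, key=lambda a: min(payoff(a, b) for b in strategies))
print(best, min(payoff(best, b) for b in strategies))  # worst case is negative
```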
In addition, Lanchester's models allow the hierarchical approach. At the lower level, the Monte Carlo method serves for simulating the interaction of separate military units. At the middle level, this interaction is described by Markov models. And finally, the upper (aggregated, deterministic) level involves Lanchester's differential equations proper. By introducing control variables (temporal distributions of forces and means, reserves engagement, etc.), one can superstruct control problems "over" these models (in terms of controlled dynamic systems, differential and/or repeated games, etc.). Consequently, we obtain the hierarchical model illustrated by Table 3.
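A hedged sketch of the upper (aggregated, deterministic) level alone: Lanchester's aimed-fire equations dx/dt = –b·y, dy/dt = –a·x, integrated by the Euler method. The effectiveness coefficients and initial strengths are illustrative assumptions; the Monte Carlo and Markov levels of the hierarchy are omitted here.

```python
a, b = 0.08, 0.05                    # fire effectiveness of sides X and Y
x, y, dt, t = 100.0, 120.0, 0.01, 0.0
while x > 0 and y > 0:
    x, y = x - dt * b * y, y - dt * a * x
    t += dt
print(round(t, 1), round(x, 1), round(y, 1))  # here side Y is annihilated first
```

In agreement with Lanchester's square law, side X wins here because a·x0^2 = 800 exceeds b·y0^2 = 720.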
… observing the behavior of the reconnaissance agents, the remaining ones perform "reflexion," assess the limits of dangerous areas and solve the posed problem. Strategic interaction of counteracting sides can be described in terms of game theory, see [106].
The hierarchical model defined by Table 4 serves in [105] for appraising and choosing the most efficient algorithms of behavior:

Table 4. Levels of agent's architecture and the corresponding apparatus
– Goal-setting level (including control of mechanisms of functioning): confrontation, hierarchies; apparatus–game theory;
– Strategic level (decision-making, adaptation, learning, reflexion): collective decision-making, cooperative control, use of external information; apparatus–models of collective behavior, artificial intelligence;
– Tactical level: distributed optimization (e.g., task assignment), mission planning, formation control; apparatus–dynamic systems;
– Operational level (execution level): consensus problem, action.
Consider the strategic level of agent's architecture, which is responsible for adaptation, learning, reflexion and other aspects of strategic decision-making. Game theory and the theory of collective behavior analyze interaction models for rational agents. In game theory, a common scheme consists in (1) describing the "model of a game," (2) choosing an equilibrium concept defining the stable outcome of the game and (3) stating a certain control problem: find the values of controlled "game parameters" implementing a required equilibrium (see Fig. 41, where "levels" correspond to the functions of science discussed in Section 1.1).
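This three-step scheme admits a minimal sketch: a 2×2 game whose payoffs depend on a control parameter c (say, a penalty assigned by a Principal to the "defect" action); all numbers are assumptions for illustration.

```python
from itertools import product

def payoffs(c):                      # (row, column) payoffs; c penalizes action 1
    return {(0, 0): (3, 3), (0, 1): (0, 5 - c),
            (1, 0): (5 - c, 0), (1, 1): (1 - c, 1 - c)}

def pure_nash(p):                    # step 2: the chosen equilibrium concept
    return [(i, j) for i, j in product((0, 1), repeat=2)
            if p[(i, j)][0] >= p[(1 - i, j)][0]
            and p[(i, j)][1] >= p[(i, 1 - j)][1]]

for c in range(6):                   # step 3: find a c implementing cooperation
    if pure_nash(payoffs(c)) == [(0, 0)]:
        print("smallest penalty implementing (0, 0):", c)
        break
```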
Taking into account informational reflexion leads to the necessity of
constructing and analyzing awareness structures [158]. This enables
defining an informational equilibrium, as well as posing and solving
informational control problems. Taking into account strategic reflexion
generates a similar chain marked by heavy lines, i.e., posing and solving
“reflexive control” problems [154].
[Fig. 41. From the model of a game to normative, informational and reflexive control problems.]
… the US President F.D. Roosevelt in 1939). Today, even experts have no totally clear understanding of the social impact of ICT. No doubt, ICT provide ample opportunities for decision-making, particularly, for expertise [73]. On the other hand, there arise new problems, too.
The results of functioning of computer-aided decision support systems (including results obtained within formal models using modern ICT) are applied to make real, important decisions. Hence, security problems are aggravated, i.e., making decisions and their consequences proof against the negative impacts of all the participating elements (both hardware components and active subjects).
Furthermore, society and government display growing interest in social media (online networks) as a source of specific information for predictive detection of emerging implicit tendencies to be controlled.
In other words, we inevitably face the problems of social, economic
and informational security for an individual, society and a whole country:
social, expert and other networks actually form an arena of informational
contagion when control subjects struggle for the “minds” of other net-
work members, whereas a social network itself represents an object
and/or tool of informational impacts.
5.4.5. "Hierarchical automation" in organization-technical systems. Since the 1980's, production systems have followed a long path from flexible to holonic systems. In recent years, they have attracted growing interest of researchers in connection with new market challenges: the efficiency of production specialization and decentralization, product and service differentiation, etc. There appear networked productions and "cloud" productions. Along with the implementation of fundamentally new production technologies (nanotechnologies, additive technologies, digital production, and so on), we observe gradual changes in its organization, i.e., the emphasis is shifted from operations automation to control automation at all life cycle stages.
Existing challenges such as:
– a huge number of product’s customized configurations;
– integration of small- and large-scale production;
– lead-time reduction for an individual order;
– supply chains integration for stock optimization;
and others call for solutions guaranteeing:
– the universality of production systems and their separate compo-
nents;
– the capability of rapid and flexible adjustment with respect to new
tasks;
– autonomous decision-making in production owing to high-level
control automation;
– survivability, replicability and scalability owing to network-centric
control and multi-agent technologies;
– decision-making in production with proper consideration of eco-
nomic factors, etc.
Modern production systems have a hierarchical structure, as indicated by Fig. 42. The complexity of the control problems treated induces their decomposition into decision-making levels. Each level of control problems solution corresponds to its own goals, models and tools (Fig. 42) at each stage of control (organizing, planning, implementing, controlling and analyzing). Hence, in organization-technical production systems it is possible (and necessary) to apply hierarchical modeling.
[Fig. 42. The upper levels of the production control hierarchy: strategic planning (scenario-based financial and economic analysis); structure design (discrete optimization models, networked games); assortment planning (economic demand models, discrete optimization models); …]
– production planning and management systems (MRP, CRP, …, MRP2, …);
– integrated systems (MES, …, ERP, …);
– systems responsible for interaction with an external environment or development (SCM, CRM, PMS, …);
– upper-level analytic systems (OLAP, BSC, DSS, …).
These classes of systems use mathematical models, but very sparsely; as a rule, the higher the level of hierarchy,56 the lower their usage. For instance, lower-level controllers employ automatic control theory in full; project management systems (PMS) incorporate classical algorithms for critical path search, Monte Carlo methods for project duration estimation, and heuristics for resource balancing; ERP systems and logistics systems (SCM) involve elementary results from stock management theory, and so on.
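As a hedged sketch of the Monte Carlo part, consider a tiny assumed activity network (activities A and B in parallel, then C, with triangular durations); real PMS operate the same idea on real project networks.

```python
import random

random.seed(1)

def project_duration():
    a = random.triangular(2, 6, 3)   # activity A: low, high, mode
    b = random.triangular(1, 8, 4)   # activity B runs in parallel with A
    c = random.triangular(2, 4, 3)   # activity C starts after both finish
    return max(a, b) + c             # critical path length for this sample

samples = sorted(project_duration() for _ in range(100_000))
print(round(sum(samples) / len(samples), 2),        # expected duration
      round(samples[int(0.9 * len(samples))], 2))   # 90% quantile
```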
Nevertheless, full-fledged implementation of the so-called “hard”
models and “quantitative science” (operations research, discrete optimiza-
tion, data analysis and other branches of modern applied mathematics) in
informational systems still waits in the wings.
Several global problems exist here. On the one hand, mathematical models require very accurate and up-to-date information, often associated with inadmissibly high organizational and other costs. On the other hand, in many cases "soft" models (putting things in order in production processes, implementation of typical solutions and standards in the form of qualitative best practices, etc.) yield an effect exceeding manyfold the outcomes of quantitative models, yet consume reasonable efforts. Therefore, it seems that quantitative models should be applied at the second stage, "extracting" the remainder of the potential efficiency increase.
Concluding this section dedicated to heterogeneous models and hierarchical modeling, we underline a series of common classes of problems. Modern controlled objects are so complicated that sometimes a researcher can hardly separate out purely hierarchical or purely networked components. In such cases, it is necessary to consider networks of hierarchies and hierarchies of networks.
First, at each level models have their own intricacies induced by a
corresponding mathematical apparatus. Moreover, there arise “conceptual
coupling” dilemmas and the common language problem among the
representatives of different application domains.
56 This statement is true for separate informational systems and for integrated informational systems of product life cycle management (PLM), including computer-aided design systems, which realize the complex of the listed functions.
Second, a complex of "joined" models inherits all negative properties of each component. Just imagine that at least one model in a "chain" admits no analytic treatment; then the whole chain is doomed to simulation modeling. The speed of computations in a chain is determined by the slowest component, and so on.
And third, it is necessary to assess the comparative efficiency of the
solutions of aggregated problems, as well as to elaborate and disseminate
typical solutions of corresponding control problems in order to transfer
them to the engineering ground.
[Fig. 45. MAS and strategic behavior sciences: the normative picture of interaction between MAS/group control on one side and game theory, mechanism design, collective behavior theory and bounded rationality on the other.]
[A figure relating the "result," "effect" and "costs" of a system to its "intellectualization" level.]
58 Alternatives are, e.g., consideration of evolutionary games [220] or learning effects in games [141].
Informational reflexion is the process and result of an agent's thinking about (a) the values of uncertain parameters and (b) what its opponents (other agents) know about these values. Here the "game" component actually disappears: an agent makes no decisions.
Strategic reflexion is the process and result of an agent's thinking about which decision-making principles its opponents (other agents) employ under the awareness assigned by it via informational reflexion, see Fig. 41.
A key role belongs to the notions of informational/reflexive struc-
tures describing the nontrivial mutual awareness of agents (or their self-
awareness, see ethical choice models in [115]) and phantom agents
existing in the minds of other real and phantom agents and possessing
certain awareness.
The concept of phantom agents yields a rigorous statement of reflexive games as games of real and phantom agents (the term was suggested in 1965 by V. Lefebvre [116]). Moreover, this concept allows defining informational equilibria as a generalization of Nash equilibria for reflexive games: each (real or phantom) agent evaluates its subjective equilibrium (an equilibrium in the game this agent thinks it actually plays) based on an existing hierarchy of beliefs about the objective and reflexive realities [158].
Reflexive games research yields the following. First, it provides a
uniform methodology and mathematical framework to describe and
analyze various situations of collective decision-making by agents pos-
sessing different awareness, to study the impact of reflexion ranks on
agents’ payoffs, to obtain conditions of existence and implementability of
informational equilibria, etc. Second, such research makes it possible to
establish the existence conditions and properties of an informational
equilibrium, as well as to pose constructively and correctly the problem
of informational control. In this problem, a Principal has to find an
awareness structure such that the informational equilibrium implemented
in it appears most beneficial to it. An interested reader can find a neces-
sary theoretical background and numerous applications of reflexive
games and informational control in the book [158].
The achievements and illusions of "emergent intelligence." This section ends with a brief consideration of a phenomenon related to "intelligent" control and behavior of artificial (e.g., multi-agent) systems.
In the two recent decades, much attention of researchers in cybernetics and artificial intelligence has been paid to emergent intelligence: a system composed of very many relatively simple homogeneous elements (e.g., agents in MAS59) locally interacting with each other and an external environment demonstrates a complex60 "intelligent" behavior in comparison with the simplicity of its elements. Investigations in this field are also motivated by existing analogs in nature (Swarm Intelligence, i.e., heuristic algorithms of distributed optimization in ant colonies and beehives, flocks of birds, fish shoals, etc.).
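A minimal sketch of such a heuristic is particle swarm optimization: simple homogeneous agents following local rules jointly minimize a function. The objective and the coefficients below are illustrative assumptions following common PSO practice.

```python
import random

random.seed(0)
f = lambda x: (x - 1.7) ** 2 + 0.5            # a toy objective to minimize

n, w, c1, c2 = 20, 0.7, 1.5, 1.5              # swarm size and PSO coefficients
xs = [random.uniform(-10, 10) for _ in range(n)]
vs = [0.0] * n
pbest = xs[:]                                  # each agent's best-known point
gbest = min(xs, key=f)                         # the swarm's best-known point

for _ in range(100):
    for i in range(n):
        vs[i] = (w * vs[i]
                 + c1 * random.random() * (pbest[i] - xs[i])
                 + c2 * random.random() * (gbest - xs[i]))
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=f)
print(round(gbest, 3), round(f(gbest), 3))     # close to the optimum x = 1.7
```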
Such systems enjoy a series of obvious advantages: the cheapness
and simplicity of a separate element, local fault-tolerance, scalability,
reconfigurability, asynchrony, parallel processing of local information
(ergo, high-level performance of real-time operation). They have numer-
ous applications: social systems (crowd wisdom, e-expertise, social
networks, etc.), economic systems (financial and other markets, national
and regional economics, etc.), telecommunication networks, models of
production and transport logistics systems, robotics, knowledge extrac-
tion (particularly, from Internet), Internet of Things and others [53, 73,
75, 151, 183, 195].
The appearance of qualitatively new properties in a whole system
(against the individual properties of its elements), i.e., transition from
simple local and decentralized interaction of elements to a nontrivial and
complex global behavior, allows treating the latter as adaptive and self-
organizing. Indeed, nonlinearity, evolution, adaptivity and self-
organization are the characteristic features of real modern complex
systems (e.g., see examples and their discussion in [183]).
In addition to many achievements and good prospects, emergent in-
telligence sometimes creates several illusions. Actually, emergent intelli-
gence concerns artificial systems, but adaptation and self-organization
(despite all their pluses) are embedded at the stage of system design.
Notwithstanding the law of emergence (the whole is greater than the sum
of its parts, see above), the behavior of artificial systems gets predeter-
mined by the behavior/interaction of its elements.
Similar delusions occurred in the history of science (e.g., at the early
development stages of cybernetics and artificial intelligence61). They
59 This class also includes the problematique of artificial neural and immune networks, probabilistic automata, genetic algorithms, and so on.
60 Some authors insist on the birth of a new science called complexity science.
61 A cybernetical system always has the behavior defined by its embedded algorithms ("stochastic," "nondeterministic," and others), despite the seeming generation of new knowledge or demonstration of qualitatively new ("unexpected") behavior. This is especially the case under interaction of very many elements (a simple-structure system shows a complex behavior).
induced much disappointment and put the brakes on the evolution of
these scientific directions.
Furthermore, recall that MAS realize heuristics and it is necessary to
assess the guaranteed efficiency of their solutions, see above.
Generally speaking, there exist three large sources of “new” proper-
ties of a system:
- additive interaction62 of its elements;
- for an observer/researcher having limited information and cognitive
capabilities, the multiplicity of elements and their mutual relations (per-
haps, nonlinear, asynchronous, with delayed information exchange, etc.)
makes it impossible to conduct a mental experiment for reproducing
agents’ behavior in detail; and computer simulation yields “surprising”63
results (an unexpected system behavior);
- artificial randomization (embedded into behavioral algorithms de-
scribing agents’ interaction with each other and/or an external environ-
ment) is necessary for variety creation (in the final analysis, for self-
organization).64
62 For instance, a microrobot cannot move a heavy load, in contrast to many microrobots applying their joint efforts.
63 The complete model of a system is so complicated that the appearance of new properties represents a "miracle" for an external observer (at the same time, scientists intensively exploit it and start believing that an artificial system can demonstrate an "independent" behavior).
64 An uncertainty is always induced by some other uncertainty, potentially comprising lack of knowledge (insufficient information) and/or the action of random factors (an uncertainty never arises from an abstract "complexity" and similar conceptual factors). Facing an "uncertainty," one should analyze cause-and-effect relations and seek its source ("initial uncertainty"). Of course, different complexity factors merely muddle things.
65 In some classifications, big data handling is associated with 4D (data discovery, discrimination, distillation and delivery/dissemination).
– storage (including recording and extraction);
– processing (transformation, modeling, computations and analysis);
– usage (including visualization) in practical, scientific, educational
and other types of human activity.
In the narrow interpretation, the term “big data” sometimes covers
only the technologies of their acquisition, transmission and storage. In
this case, big data processing (including construction and analysis of
corresponding models) is called big analytics (including big computa-
tions), whereas visualization of the corresponding results (depending on
user’s cognitive capabilities) is called big visualization (see Fig. 50).
[A figure showing the triad: big data, big analytics, big visualization.]
66 We will not discuss another fashionable triad (big data, high-performance computations, cloud technologies).
[A figure showing the control loop between an object and a subject: data acquisition, transmission and processing on the way from the object; knowledge usage on the side of the subject.]
67 As we have mentioned above, in the recent 15 years experts in control theory have tended to consider the problems of control, computations and communication jointly (the so-called C3 problem: Control, Computation, Communication). According to this viewpoint, control actions are synthesized in real time taking into account the existing delays in communication channels and information processing time (including computations). There is another generally accepted term (large-scale systems control), but big data can be generated by "small" systems.
68 An alternative interpretation of "big control" concerns control of big data handling processes. Actually, this represents an independent and nontrivial problem.
… distant Earth probing, geology and geophysics, aerodynamics and hydrodynamics, genetics, biochemistry and biology, etc.);
– Internet (in the wide sense, including Internet of things) and other
telecommunication systems;
– business, commerce and finances, as well as marketing and adver-
tising (including trading, targeting and adviser systems, CRM-systems,
RFID–radiofrequency identifiers used in sales, transportation, logistics
and so on);
– monitoring (geo-, bio-, eco-; space, air, etc.);
– security (military systems, antiterrorist activity, etc.);
– power engineering (including nuclear power engineering),
SmartGrid;
– medicine;
– governmental services and public administration;
– production and transport (objects, units and assemblies, control
systems, etc.).
Numerous applications69 of big data in these fields can be found in
popular science literature (or even “glossy” journals) available at public
Internet sources. We will not describe these applications here, so as not to overwhelm the reader with "zettabytes" and "yottabytes."
In almost all the cited fields, the modern level of automation is such that big data are generated automatically. Therefore, the following question gains growing importance: what is the volume of "lost" data flows (due to insufficient capabilities or time for their storage or processing)? This question seems correct for an engineer in ICT, but not for a scientist or a user of big data processing results. Rather, a scientist would ask "What are the essential losses in this case?", whereas a user would ask "What would change if we successfully acquired and processed all data?"
Traditionally, big data are unstructured data whose volume exceeds the available handling capabilities within the required time. However, this definition appears somewhat "cunning": data considered big today cease to be such tomorrow owing to the progress of data handling methods and means. Data that looked big several hundred or even thousand years ago (in the absence of automatic treatment) are easily processed today by home computers.
69 The principal idea of using big data is revealing "implicit regularities," i.e., answering nontrivial questions: epidemic prediction based on information from social networks and sales in drugstores; medical and technical diagnostics; retention of clients by analyzing sellers' behavior in stores (the spatial movements of RFID-tags of products); and others.
The competition between the (hypothetical) computational demands of mankind and the corresponding technical capabilities has been known for a very long time. Of course, the capabilities have always been chasing the needs, and the gap between them represents a monumental stimulus for science development: researchers have to suggest simpler (yet adequate) models, design more efficient algorithms, etc.
Sometimes, the definition of big data includes the so-called 5V
properties (Volume, Velocity, Variety, Veracity, Validity). Alternatively,
the difference between the big volume of conventional data and big data
proper is that the latter form the big flow of unstructured70 data (in the
sense of volume and velocity as the volume per unit time).
In the wide comprehension, the unstructuredness of big data (text, video, audio, communication structures, etc.) is actually their characteristic feature and a challenge for applied mathematics, linguistics, cognitive sciences and artificial intelligence. Creation of real-time processing technologies,71 including the feasibility of implicit information revelation, for large flows of text, audio, video and other information forms the mainstream of applications of the above sciences72 to ICT.
Therefore, we observe a direct (and explicit) query from technologies to science. The second explicit query concerns the adaptation of traditional statistical analysis, optimization and other methods to big data analysis. Furthermore, it is necessary to develop new methods with due consideration of big data specifics. A modern fashionable trend is boosting analytics tools (generally, business analytics) for big data. But their list almost coincides with the classical kit of statistical tools (or is even narrower, since some methods are inapplicable to big data). This is also the case for:
– machine learning methods (support vector machines, random forests, artificial neural networks, Bayesian networks, including separation of informational attributes and dimension reduction of attribute spaces in model relearning) and artificial intelligence methods;
– high-dimensional optimization problems (in addition to traditional
parallel computing, intensive research focuses on distributed optimiza-
tion);
70 Data unstructuredness can be the result of their omissions and/or different scales of studied phenomena and processes (in space and time, see the so-called multi-scale systems).
71 In the first place, these technologies must perform data aggregation (e.g., detecting changes in technological data or storing aggregated indices). Really, one does not need all data (especially, "homogeneous" data).
72 Mathematics rather easily operates structured data; and so, data structuring makes an important problem.
– discrete optimization methods (here an "alternative" lies in application of multi-agent program systems–see the above discussion of distributed optimization problems and the sketch below).
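A hedged sketch of consensus-based distributed optimization: each agent holds a private quadratic cost, exchanges estimates only with its ring neighbors, and mixes neighbor averaging with a local gradient step. The costs, topology and step size are assumptions; with a fixed step, the agents converge only to a neighborhood of the global minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, step = 6, 0.05
targets = rng.uniform(-5, 5, n)      # agent i privately minimizes (x - t_i)^2
x = rng.uniform(-10, 10, n)          # initial local estimates

for _ in range(500):
    avg_nb = (np.roll(x, 1) + np.roll(x, -1)) / 2   # ring-neighbor averaging
    grad = 2 * (x - targets)                        # purely local gradients
    x = 0.5 * x + 0.5 * avg_nb - step * grad        # consensus plus descent

print(np.round(x, 3), round(targets.mean(), 3))     # estimates near the mean
```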
The common feature of the stated queries from technologies to science is the insufficiency of adaptation or small modification of well-known, tried-and-true methods. We have to be aware of the following. Generally, automatic modeling (by traditional tools73) based on raw data represents just a fashionable delusion.74 We expect to suggest algorithms and apply them to bulky volumes of unstructured (often irrelevant) information, thereby improving the efficiency of decision-making (recall the "emergent intelligence illusion"). There exist no miracles in science: generally, new conclusions require new models and new paradigms (e.g., see the books on science methodology [112, 149]).
The complexity of the surrounding world grows at a smaller rate than the capabilities of data detection ("measurement") and storage. Perhaps, these capabilities have exceeded the ability of mankind to realize the feasibility and reasonability of their usage. In other words, we "choke" with data, trying to find what to do with them.
However, there exists an alternative viewpoint of this situation. Obtaining big data (of an arbitrarily large volume) is possible and easy enough (obvious examples arise in combinatorial optimization, nonlinear dynamics or thermodynamics, see below). But we have to understand how to manage big data (and to ask Nature the correct questions). Furthermore, it is possible to construct an arbitrarily complex model using big data and then try to reach a higher accuracy within the model. But the associated dilemma is whether we obtain new results or not (in addition to very many new problems75). Mathematicians and physicists have long known that increasing the dimensionality and complexity of a model (aspiring to consider more factors and relations among them) does not necessarily improve the quality of modeling results; sometimes, it even carries to the point of absurdity.76
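This old observation is easy to reproduce numerically; in the assumed sketch below, raising the polynomial degree keeps improving the fit on noisy observations, while the predictive error eventually grows.

```python
import numpy as np

rng = np.random.default_rng(2)
x_tr = np.linspace(0.0, 1.0, 15)
y_tr = np.sin(2 * np.pi * x_tr) + rng.normal(0.0, 0.2, x_tr.size)
x_te = np.linspace(0.0, 1.0, 200)
y_te = np.sin(2 * np.pi * x_te)                 # the truth behind the noise

for deg in (1, 3, 9):
    coef = np.polyfit(x_tr, y_tr, deg)
    fit_err = float(np.mean((np.polyval(coef, x_tr) - y_tr) ** 2))
    pred_err = float(np.mean((np.polyval(coef, x_te) - y_te) ** 2))
    print(deg, round(fit_err, 4), round(pred_err, 4))
```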
73 An additional encumbrance is the accumulated experience of a researcher/developer and the traditions of his scientific school. Successful solution of a certain problem leads to the conviction that the same methods (only!) are applicable to the rest of the open problems.
74 In some cases, additional information can be obtained by increasing the volume of data (under correct processing).
75 We recognize the importance of a model's adequacy and the stability of modeling results, but omit these problems.
76 Not to mention situations when existing scientific paradigms make it impossible in principle to model system behavior on a large time horizon (e.g., accurate weather forecasting).
Based on the analysis of several examples, the paper [151] distinguished between natural and artificial big data depending on their source. In the former case, data are generated by some independent object, and we ("investigators") decide what should be "measured." In the latter case, the source of big data is a model; complexity (data flow) is partially controlled and defined during simulation.
“Recipes.” There exist four large groups of subjects (see Fig. 50) operating (explicitly or implicitly) with big data in their professional (scientific and/or practical) activity:
– manufacturers of big data handling tools (software/hardware developers, suppliers, consultants, integrators, etc.);
– designers of big data handling methods (experts in applied mathematics and computer science);
– specialists in application domains (scientists focused on real objects or their models) that represent big data sources;
– customers utilizing or planning to utilize the results of big data analysis in their activity.
[Fig. 50. The four groups of subjects operating with big data: manufacturers of big data handling tools, designers of big data handling methods, specialists in application domains, and customers]
77
Though, it is possible to store data de bene esse (e.g., to verify a certain hypothesis in
future based on them).
[18] and macrodescriptions (in terms of distribution functions of essential parameters) [34], as well as establish a correspondence between them [33]. Such an approach is also developed within the frameworks of sociophysics and econophysics, where statistical physics tools are applied to model complex networks and big socioeconomic systems.
Some threats. In addition to the emphasized necessity of searching for adequate simple models and the alerting trend of anticipatory technology development, we expect the future relevance of the following problems (the list below is unstructured and incomplete).
The informational security of big data. This requires the adaptation of well-known methods and tools, as well as the development of fundamentally new ones. Indeed, alongside the growing topicality of cybersecurity problems (in the wide sense, the informational security of control systems) and the problem of security “against information” (especially in social networks), one should consider the specifics of big data proper.
The energy efficiency of big data. Even today, data processing centers represent a considerable class of power consumers. The bigger the data to be processed, the more energy is needed.
The principle of complementarity was established in physics long
ago; it declares that measurements modify the state of a system. Howev-
er, does it apply to social systems whose elements (people) are active,
i.e., possess their own interests and preferences, choose their actions
independently, etc. [36, 131, 157]?
A demonstration of this principle lies in so-called information manipulation (strategic behavior). According to the theory of choice [36, 38, 39, 131], an active subject reports information by forecasting the results of its usage; generally speaking, an active subject does not adhere to truth-telling (a toy numerical illustration is given below).
Another example concerns the so-called active forecasting: a system
changes its behavior based on new knowledge about itself [158].
Are these and similar problems eliminated or aggravated in the case
of big data?
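As a toy illustration (our own construction, not taken from the cited references; the mechanism and numbers are purely illustrative), consider a planner that divides a fixed resource among agents proportionally to their reported needs. A simple best-response computation shows that an active agent profits from inflating its report, so truth-telling is not an equilibrium of this mechanism.

import numpy as np

R = 10.0                           # total resource
true_needs = np.array([4.0, 8.0])  # agents’ true needs

def allocation(reports):
    # Proportional mechanism: each agent receives a share of R
    # proportional to its reported need.
    return R * reports / reports.sum()

def utility(i, reports):
    # An agent values the resource only up to its true need.
    return min(allocation(reports)[i], true_needs[i])

# Agent 0 best-responds while agent 1 reports truthfully.
candidates = np.linspace(0.1, 30.0, 300)
best = max(candidates,
           key=lambda s: utility(0, np.array([s, true_needs[1]])))
print(f"true need: {true_needs[0]}, best response: {best:.2f}")
print(f"utility if truthful: {utility(0, true_needs):.2f}")
print(f"utility at best response: "
      f"{utility(0, np.array([best, true_needs[1]])):.2f}")
# The best response exceeds the true need: reporting the truth is not
# optimal, which is exactly the strategic behavior discussed above.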
Recall the principle of uncertainty in the following (epistemological) statement [149]: the current level of science development is characterized by certain mutual constraints imposed on the “validity” of results and their applicability, see Fig. 11. In the context of big data, this principle means the existence of a rational balance between the level of detail in the description of a studied system and the validity of the results and conclusions to be made on the basis of this description.
A traditional assumption in the design and operation of information systems (corporate systems, decision support systems of governmental services, inter-agency document circulation, etc.) is that all information in such systems must be complete, unified and publicly available (under existing access rights). But it is possible to show a “distorting-mirror” reality to each person, i.e., to create an individual informational picture,78 thereby performing informational control [157, 158]. Should we strive for or struggle against these effects in the field of big data?
Summarizing the above consideration of trends and forecasts in control theory, we declare that a similar (or even more systematic, regular and in-depth) analysis is vital for other sciences, viz., cybernetics, systems analysis, optimization, artificial intelligence, etc. This would give an impetus to the evolution of cybernetics via the appearance of new generalizations in the form of corresponding laws, regularities, principles and so on.
78 At the very least, a fragment of the “objective” picture (hushing up the whole truth); at the most, an arbitrary inconsistent system of beliefs about reality.
79 Another “educational” question ensuing from the generality of control laws and principles can be stated as follows: is it better to organize a department of control problems in each “sectoral” university, or a university dedicated to control problems with “sectoral” departments? The book touches on this question.
systems as a “universal” descriptive framework for any controlled objects) and pay little attention to intelligent control, networked control, the “sectoral” specifics of controlled objects and so on. To our regret, and despite the efforts of N. Wiener and his followers to create a universal control science, none of the textbooks on ACT deeply treats the generality of the laws and processes of control in the animal, the machine and society.
Imagine that this request (“Please recommend a textbook on modern control theory...”) is addressed to a potential reader without well-developed skills in higher mathematics (e.g., a schoolchild). How can we make the results of modern control theory clear to such readers? Here the situation seems even worse. Of course, (a) the amount of scientific knowledge accumulated in control theory is huge, (b) the study of this knowledge requires special training, (c) a dilettante would never perceive it, (d) the described function is performed by handbooks and reference books, ... But a counterargument is that today many sciences (physics, chemistry, biology) can be presented at the level of a school textbook, a university textbook or a scientific monograph. For instance, such “encyclopaedic” textbooks exist for other “capacious” sciences, namely informatics, artificial intelligence, game theory, operations research, etc. Why are there no school textbooks on control theory80 and only a few broad university textbooks? The creation of easy-to-understand (yet rigorous and complete) textbooks on control theory is an urgent challenge for experts in the field!
80 Speaking about “control theory,” we mean exactly mathematical control theory (and not the corresponding branch of management science discussed in numerous belles-lettres textbooks available today at stores).
But take a broader view of communication.81 Both in the paper [181] and in the original book [221], N. Wiener explicitly or implicitly mentioned interrelation (or intercommunication, or interaction), reasonability and causality (cause-effect relations). Really, in feedback control systems the control (effect) is defined by its cause, i.e., the state of a controlled system (plant); conversely, the control supplied to the input of a plant is induced by its cause, i.e., the state of a controller, and so on. No doubt, the channels and methods of communication are important, but they are secondary whenever the matter concerns universal regularities for animals, machines and society.
A much broader view of communication implies interpreting communication as INTERCOMMUNICATION, e.g., between the elements of a plant, between a controller and a plant, etc., including different types of impacts and interactions (material, informational and other ones). “Intercommunication” is a more general category than “communication.”
In the general systems context, intercommunication corresponds to the category of ORGANIZATION (see its definition and discussion below). Therefore, a simple correction (replacing “communication” with “organization” in Wiener’s definition of cybernetics) yields a more general and modern definition of cybernetics: “the science of systems organization and their control.” We call it cybernetics 2.0.
Making this substitution, we become distanced from informatics. Let us consider the soundness and consequences of this distancing.
Cybernetics and informatics. Nowadays, cybernetics and informatics form independent interdisciplinary fundamental sciences [101]. According to a figurative expression of B. Sokolov and R. Yusupov [191], informatics and cybernetics are “Siamese twins.” Yet, in nature Siamese twins represent a pathology.82
81 Academician A. Kolmogorov was against such an interpretation. In 1959 he wrote: “Cybernetics studies systems of any nature that are capable of perceiving, storing and processing information, as well as of using it for control and regulation. Cybernetics intensively employs mathematical methods and aims at obtaining concrete special results, both in order to analyze such systems (restore their structure based on the experience of their operation) and to design them (calculate schemes of systems implementing given actions). Owing to this concrete character, cybernetics is in no way reduced to the philosophical discussion of reasonability in machines or the philosophical analysis of the circle of phenomena it explores.” We venture to disagree with this opinion of the great Soviet mathematician.
82 For instance, the definition of informatics as the “union” of the general laws of informatics and control would induce a megascience without concrete content, subsisting exclusively at the conceptual level.
Cybernetics and informatics have a strong intersection (including a common scientific basis, statistical information theory83), but their accents differ considerably. The fundamental ideas of cybernetics are Wiener’s “control and communication in the animal and the machine,” whereas the fundamental ideas of informatics are formalization (theory) and computerization (practice). Accordingly, in the mathematical sense cybernetics is based on control theory and information theory, whereas informatics proceeds from the theory of algorithms and formal systems.84
The subject of modern informatics (or even of the “umbrella brand” of informational sciences covering information science, computer science and computational science [102]) is informational processes.
Indeed, on the one hand, information processing arises everywhere (!), not only in control and/or organization. On the other hand, informational processes and the corresponding information and communication technologies are integrated into control processes85 so that discriminating between them seems almost impossible. The close cooperation of informatics and cybernetics at the partial operational level will be continued and even extended in the future.
Organization. Organization theory. Organizational culture. According to the definition provided by the Merriam-Webster dictionary, an organization is:
1. The condition or manner of being organized;
2. The act or process of organizing or of being organized;
3. An administrative and functional structure (such as a business or a political party); also, the personnel of such a structure–see Fig. 52.
83 Note that the mathematical (statistical) theory of communication and information operates with quantitative assessments of information. Unfortunately, no essential advancements have been made in the field of the substantial (semantic) value of information. This problem is still a global challenge of informatics.
84 This distinction partly elucidates why some sciences often related to informatics or computer science have not been reflected in the book: theory of formal languages and grammars, “true” artificial intelligence (knowledge engineering, reasoning formalization, behavior planning, etc., as opposed to artificial neural networks as a modern empirical engineering science), automata theory, computational complexity theory, and so on.
85 N. Wiener believed that control processes are, in the first place, informational processes: information acquisition, processing and transmission (see the above discussion of the joint solution of problems appearing in control, computations and communication).
[Fig. 52. The three meanings of “organization”]
The present book uses the notion of “organization” mostly in its second and first meanings, i.e., as a process and as the result of this process. The third meaning (an organizational system), as a class of controlled objects, appears in the theory of control in organizational systems [131, 157].
At the descriptive (phenomenological) and explanatory levels, “system organization” reflects HOW and WHY EXACTLY SO, respectively, a system is organized (organization as a property). At the normative level, “system organization” reflects how it MUST be organized (requirements to the property of organization) and how it SHOULD be organized (requirements to the process of organization).
A scientific branch responsible for the posed questions–Organization86 theory, or O3 (organization as a property, a process and a system, by analogy to C3 as discussed above)–has almost not been developed to date. Yet, this branch obviously has a close connection and partial intersection with general systems theory and systems analysis (mostly focused on descriptive-level problems and only partly dealing with normative-level ones), as well as with methodology (as the general science of activity organization [148]). Creating a full-fledged Organization theory is a topical problem of cybernetics!
86 Note that there also exists a “theory of organizations” (“organizational theory”)–a branch of management science differing both in its subject (organizational systems) and in the methods used. Unfortunately, numerous textbooks (and just a few monographs!) give only descriptive generalizations on the property and process of organization in their introductions, with most attention then switched to organizational systems, viz., the management of organizations (for instance, see the classical textbooks [47, 134]).
Speaking about the notion of organization, one should not ignore the phenomenon of organizational culture. Different historical periods in the evolution of civilization are remarkable for different types of activity organization, now called organizational cultures, see Table 6.
87 The author believes that “the knowledge-based type of organizational culture,” “knowledge society,” “knowledge management” and others are lame terms in this context. Indeed, the preceding type of organizational culture–the professional (scientific) one–was also founded on scientific knowledge. Nevertheless, these terms are widely used. Let us clarify the meaning of knowledge here. In the professional (scientific) type of organizational culture, the leading role belonged to scientific knowledge in the form of
of social structure (nowadays, the term “knowledge economy” has widespread occurrence). Cybernetics 1.0 de bene esse matched the project-technological type of organizational culture, whereas cybernetics 2.0 corresponds to the knowledge-based type (at the new stage of development, organization becomes crucial).
Consider the correlation of the two basic categories in the definition of cybernetics 2.0 (“organization” and “control”).
Control is “an element, a function of different organized systems (biological, social, technical ones) preserving their definite structure, maintaining the activity mode, implementing a program, a goal of activity.” Control is “an impact on a controlled system, intended for ensuring its necessary behavior” [157].
Consequently, the categories of organization and control do intersect, but do not coincide. The former fits system design and the latter fits system functioning88; they are jointly realized during system implementation and adaptation, see Fig. 53. In other words, organization (the strategic loop) “precedes” control (the tactical loop).
[Fig. 53. Organization and control across the aggregative stages (I–III) of the system life cycle]
[Figure: technologies as a combination of science, craft, individual (creative) experience and art]
90 A craft is a personal skill of routine operations based on experience.
91 Art is a system of techniques and methods in some branch of practical activity; the process of talent usage; an extremely developed creative skill or ability.
[Fig. 57. The general architecture of cybernetics 2.0: at the conceptual level, control philosophy and control methodology; the basic sciences of control, organization and systems (organization theory, systems engineering); the complementary sciences (informatics, optimization, operations research, artificial intelligence, etc.); metasciences, subject-oriented and method-oriented sciences; sciences (research) and technologies (implementation)]
The general architecture of cybernetics 2.0 (see Fig. 57) admits projection to different application domains and branches of subject-oriented sciences depending on the class of posed problems (technical, biological, social, etc.).
The prospects of cybernetics 2.0. Further development of cybernetics has several alternative scenarios as follows:
– the negativistic scenario (the prevailing opinion is that “cybernetics does not exist,” and it gradually falls into oblivion);
– the “umbrella” scenario (owing to past endeavors, cybernetics is considered as a “mechanistic” (non-emergent) union, and its further development is forecasted using the aggregate of trends displayed by the basic and complementary sciences under the “umbrella brand” of cybernetics);
– the “philosophical” scenario (the framework of new results in cybernetics 2.0 includes conceptual considerations only–the development of the conceptual level);
– the subject-oriented (sectoral) scenario (the basic results of cybernetics are obtained at the junction of sectoral applications);
– the constructive-optimistic (desired) scenario (the balanced development of the basic, complementary and “conceptual” sciences, accompanied by the convergence and interdisciplinary translation of their common results, with the subsequent generation of conceptual-level generalizations–the realization of Wiener’s dream “to understand the region as a whole,” see the epigraph to this book).
Let us return to the trends and groups of subjects mentioned in Section 1.3. Note that the development of cybernetics 2.0 under the intensified differentiation of sciences provides the following (see Fig. 58):
– for scientists specialized in cybernetics proper and the representatives of adjacent sciences: the general picture of a wide subject domain (and a common language for its description), the positioning of their results, and promotion in new theoretical and applied fields;
– for potential users of applied results (authorities, business structures): (1) confidence in the uniform positions92 of researchers; (2) more efficient solution of control problems for different objects based on new fundamental results and the associated applied results.
The main challenges are control in social and living systems. Several classes of control problems seem topical, namely:
– network-centric systems (including military applications, networked and cloud production);
– informational control and cybersecurity;
– life cycle control of complex organizational-technical systems;
– activity systems engineering.
Among promising application domains, we mention living systems, social systems, microsystems, energy and transport.
There exists a series of global challenges to cybernetics 2.0 (i.e., observed phenomena going beyond cybernetics 1.0), see Chapter 5:
1) the scientific Tower of Babel (interdisciplinarity and the differentiation of sciences; in the first place, in the context of cybernetics, the sciences of control and adjacent sciences);
92 The diversity and inconsistency of opinions and approaches suggested by experts (subordinates) always confuse customers (superiors).
2) centralization collapse (decentralization and networkism, including systems of systems, distributed optimization, emergent intelligence, multi-agent systems, and so on);
3) strategic behavior (in all manifestations, including the inconsistency of interests, goal-setting, reflexion and so on);
4) the curse of complexity (including all aspects of the complexity and nonlinearity93 of modern systems, as well as the curse of dimensionality–big data and big control).
[Fig. 58. Cybernetics 2.0: global challenges, classes of problems and application domains]
Thus, the main tasks of cybernetics 2.0 are developing the basic and complementary sciences, responding to the stated global challenges, as well as advancing in the appropriate application domains, see Fig. 58. More specifically, the main tasks of cybernetics 2.0 are:
1) ensuring the interdisciplinarity of investigations (with respect to the basic and complementary sciences, as illustrated by Fig. 57);
2) revealing, systematizing and analyzing the general laws, regularities and principles of control for systems of different nature within control philosophy; this would require ever newer generalizations (see Fig. 10);
3) elaborating and refining Organization theory (O3).
This book has described the phylogenesis of a new stage of cybernetics–cybernetics 2.0. The further development of cybernetics will call for a considerable joint effort of mathematicians, philosophers, experts in control theory, systems engineering and many others involved.
93 Figuratively speaking, in this sense cybernetics 2.0 has to include a nonlinear automatic control theory studying nonlinear decentralized objects with nonlinear observers, etc.
References
17 Baker K., Kropp D. Management Science: Introduction to the
Use of Decision Models. – New York: John Wiley and Sons Ltd, 1985. –
650 p.
18 Barabanov I., Korgin N., Novikov D., Chkhartishvili A. Dynamic
Models of Informational Control in Social Networks // Automation and
Remote Control. 2010. Vol. 71. No. 11. P. 2417–2426.
19 Bar-Yam Y. Multiscale Variety in Complex Systems // Complex-
ity. 2004. Vol. 9. No 4. P. 37–45.
20 Bateson G. Steps to an Ecology of Mind. – San Francisco: Chan-
dler Pub. Co., 1972. – 542 p.
21 Bauer E. Theoretical Biology. – Moscow, Leningrad: All-USSR
Institute of Experimental Medicine, 1935. – 206 p. (in Russian)
22 Beer S. Brain of the Firm: A Development in Management Cy-
bernetics. – London: Herder and Herder, 1972. – 319 p.
23 Beer S. Cybernetics and Management. – London: The English
University Press, 1959. – 214 p.
24 Bernstein N. Sketches on the Physiology of Movements and the
Physiology of Activity. – Moscow: Meditsina, 1966. – 347 p. (in Rus-
sian)
25 Bertalanffy L. General System Theory – a Critical Review //
General Systems. 1962. Vol. 7. P. 1–20.
26 Bertalanffy L. General System Theory: Foundations, Develop-
ment, Applications. – New York: George Braziller, 1968. – 296 p.
27 Bertalanffy L. The Theory of Open Systems in Physics and Biol-
ogy // Science. 1950. 13 Jan. Vol. 111. P. 23–29.
28 Blauberg I., Yudin E.G. The Formation and Essence of Systems
Approach. – Moscow: Nauka, 1973. – 271 p. (in Russian)
29 Bogdanov A. The General Organizational Science. – Moscow:
Ekonomika, 1913-17. Vol. 1-2., 1925-29. Vol. 3. (in Russian) /
Bogdanov A. Allgemeine Organisationslehre (Tektologie). – Berlin:
Hirzel, 1926. I; 1928. II / Bogdanov A. Essays in Tektology. – Seaside:
Intersystems Publications, 1980. – 291 p.
30 Boulding K. General System Theory – The Skeleton of Science //
Management Science. 1956. Vol. 2. P. 197–208.
31 Boxer P., Kenny V. Lacan and Maturana: Constructivist Origins for a 3° Cybernetics // Communication and Cognition. 1992. Vol. 25. No. 1. P. 73–100.
32 Boyd S., Parikh N., Chu E., et al. Distributed Optimization and
Statistical Learning via the Alternating Direction Method of Multipliers //
Foundations and Trends in Machine Learning. 2011. No. 3(1). P. 1–122.
33 Breer V.V., Novikov D.A., Rogatkin A.D. Micro- and
Macromodels of Social Networks // Automation and Remote Control.
Part 1: General Theory; Part 2: Identification and Simulation Experi-
ments. 2015.
34 Breer V.V., Novikov D.A., Rogatkin A.D. Stochastic Models of
Mob Control // Large-Scale Systems Control. 2014. No. 52. P. 85–117.
(in Russian)
35 Bubnicki Z. Modern Control Theory. – Berlin: Springer, 2005. –
423 p.
36 Burkov V. Foundations of Mathematical Theory of Active Sys-
tems. – Moscow: Nauka, 1977. – 255 p. (in Russian)
37 Burkov V., Enaleev A. Stimulation and Decision-Making in the
Active Systems Theory: Review of Problems and New Results // Mathe-
matical Social Sciences. 1994. Vol. 27. P. 271–291.
38 Burkov V., Goubko M., Korgin N., Novikov D. Introduction to
Theory of Control in Organizations. – New York: CRC Press, 2015. –
352 p.
39 Burkov V., Lerner A. Fairplay in Control of Active Systems /
Differential Games and Related Topics. Amsterdam, London: North-
Holland Publishing Company, 1971. P. 164–168.
40 Buslenko N. Modeling of Complex Systems. – Moscow: Nauka,
1978. – 420 p. (in Russian)
41 Cannon W. The Wisdom of the Body. – New York: Norton,
1932. – 312 p.
42 Casti J. Connectivity, Complexity and Catastrophe in Large-Scale
Systems. – Chichester: John Wiley and Sons, 1979. – 203 p.
43 Checkland P. Soft System Methodology: A Thirty Years Retro-
spective // Systems Research and Behavioral Science. 2000. Vol. 17. P.
11–58.
44 Checkland P. Systems Thinking, Systems Practice. – Chichester:
John Wiley & Sons Ltd., 1981. – 331 p.
45 Chernavsky D. Synergetics and Information. – Moscow: Editorial
URSS, 2004. – 288 p. (in Russian)
46 Chernyak Yu. Systems Analysis in Economy Management. –
Moscow: Ekonomika, 1975. – 191 p. (in Russian)
47 Daft R. Organization Theory and Design. 11th ed. – New York:
Cengage Learning, 2012. – 688 p.
48 Dancoff S., Quastler H. The Information Content and Error Rate
of Living Things / Essays on the Use of Information Theory in Biology. –
Illinois: University of Illinois Press, 1953. P. 263–274.
115
49 Dennis A., Wixom B., Roth R. Systems Analysis and Design. 5th
ed. – New York: Wiley, 2012. – 594 p.
50 Diev V. Control. Philosophy. Society // Voprosy Filosofii. 2010.
No. 8. P. 35–41. (in Russian)
51 Dorf R., Bishop R. Modern Control Systems. 12th ed. – Upper
Saddle River: Prentice Hall, 2011. – 1111 p.
52 Druzhinin V., Kontorov D.S. Introduction to Conflict Theory. –
Moscow: Radio i Svyaz’, 1989. – 288 p. (in Russian)
53 Emergent Intelligence of Networked Agents / Ed. by
Namatame A., Kurihara S., Nakashima H. – Berlin: Springer, 2007. –
261 p.
54 Foerster H. The Cybernetics of Cybernetics. 2nd edition. Minne-
apolis: Future Systems, 1995. – 228 p.
55 Foerster H. Understanding Understanding: Essays on Cybernetics
and Cognition, New York: Springer-Verlag, 2003. – 362 p.
56 Forrest J., Novikov D. Modern Trends in Control Theory: Net-
works, Hierarchies and Interdisciplinarity // Advances in Systems Science
and Application. 2012. Vol.12. No. 3. P. 1–13.
57 Forrester J. Industrial Dynamics. – Cambridge: Pegasus Commu-
nications, 1961. – 464 p.
58 Forrester J. Principles of Systems. – Cambridge: Pegasus Com-
munications, 1968. – 387 p.
59 Fradkov A. Cybernetical Physics: From Control of Chaos to
Quantum Control (Understanding Complex Systems). – Berlin: Springer,
2006. – 236 p.
60 Gelfand I., Gurfinkel V.S., Tseitlin M.L. On Tactics of Complex
Systems Control in Connection with Physiology / Biological Aspects of
Cybernetics. – Moscow: USSR Academy of Sciences, 1962. P. 66–73. (in
Russian)
61 George F. The Brain as a Computer. – New York: Pergamon
Press, 1962. – 437 p.
62 George F. The Foundations of Cybernetics. – London: Gordon
and Breach Science Publisher, 1977. – 286 p.
63 George F.H. Philosophical Foundations of Cybernetics. – Kent:
Abacus Press, 1979. – 157 p.
64 Germeier Yu. Non-Antagonistic Games, 1976. – Dordrecht: D.
Reidel Publishing Company, 1986. – 331 p.
65 Gerovich S. From Newspeak to Cyberspeak: A History of Soviet
Cybernetics. – Cambridge: MIT Press, 2002. – 383 p.
66 Gershenson C., Csermely P., Érdi P., Knyazeva H., Laszlo A.
The Past, Present and Future of Cybernetics and Systems Research //
Systems. Connecting Matter, Life, Culture and Technology. 2013. Vol. 1
No 3. P. 4–13.
67 Gigch J. Applied General Systems Theory. 2nd ed. – New York:
Harper & Row, 1978. – 736 p.
68 Glushkov V. Introduction to Cybernetics. – Kiev: Ukr. SSR
Academy of Sciences, 1964. – 324 p. (in Russian)
69 Gonçalves C. Quantum Cybernetics and Complex Quantum Sys-
tems Science – A Quantum Connectionist Exploration // NeuroQuantology.
2015. Vol. 13. No 1.
70 Goode H., Machol R. System Engineering: An Introduction to the Design of Large-scale Systems. – New York: McGraw-Hill Book Company, 1957. – 551 p.
71 Gorsky Yu. A System-Informational Analysis of Control Pro-
cesses. - Novosibirsk: Nauka, 1988. - 327 p. (in Russian)
72 Grössing G. Quantum Cybernetics. Toward a Unification of
Relativity and Quantum Theory via Circularly Causal Modeling. – New
York: Springer, 2000. – 153 p.
73 Gubanov D., Korgin N., Novikov D., Raikov A. E-Expertise:
Modern Collective Intelligence. – Heidelberg: Springer, 2014. – 150 p.
74 Gubanov D., Makarenko A., Novikov D. Analysis Methods for
the Terminological Structure of a Subject Area // Automation and Re-
mote Control. 2014. Vol. 75. No. 12. P. 2231–2247.
75 Gubanov D., Novikov D., Chkhartishvili A.G. Social Networks:
Models of Informational Influence, Control and Confrontation. – Mos-
cow: Fizmatlit, 2010. – 228 p. (in Russian)
76 Guide to the Systems Engineering Body of Knowledge (SEBoK)
v1.3.2. BKCASE, INCOSE 2015. – 971 p.
77 Haken H. Advanced Synergetics: Instability Hierarchies of Self-
Organizing Systems and Devices. 2nd ed. – New York: Springer-Verlag,
1993. – 356 p.
78 Handbook of Dynamic Systems Modeling / Ed. by P. Fishwick. –
New York: CRC Press, 2007. – 760 p.
79 Kharitonov V., Alekseev A.O. The Concept of Subject-Oriented
Control in Social and Economic Systems // Polythematic Electronic
Journal of Kuban State Agricultural University [Electronic source]. –
Krasnodar: Kuban State Agricultural University, 2015. – No. 05 (109). –
IDA [article ID]: 1091505043. – Available at
http://ej.kubagro.ru/2015/05/pdf/43.pdf. (in Russian)
80 Heylighen F. Principles of Systems and Cybernetics: An Evolu-
tionary Perspective / Cybernetics and Systems’92. – Singapore: World
Science, 1992. P. 3–10.
81 Heylighen F., Joslyn C. Cybernetics and Second-Order Cybernet-
ics / Encyclopedia of Physical Science & Technology. 3rd ed. – New
York: Academic Press, 2001. P. 155–170.
82 Hillier F. and Lieberman G. Introduction to Operations Research
(8th ed.). – Boston: McGraw-Hill, 2005. – 1061 p.
83 Historic Control Textbook / Ed. by J. Gertler. – Oxford: Elsevier,
2006. – 304 p.
84 Vus M.A. The History of Informatics and Cybernetics in Saint
Petersburg (Leningrad). Vol. 1. Striking Historical Examples // Ed. by
Corr. Member of RAS R.M. Yusupov; Institute of Informatics and Auto-
mation of RAS. – St. Petersburg: Nauka, 2008. – 356 p. (in Russian)
85 The History of Cybernetics / Ed. by Ya.I. Fet. – Novosibirsk:
Geo, 2006. – 339 p. (in Russian)
86 Hitchins D. Putting Systems to Work. – New York: Wiley, 1993.
– 342 p.
87 Il’in V. The Philosophy and History of Science. – Moscow:
Lomonosov Moscow State University, 2005. – 432 p. (in Russian)
88 INCOSE Systems Engineering Handbook Version 3.2.2 – A
Guide for Life Cycle Processes and Activities / Ed. by C. Haskins. – San
Diego: INCOSE, 2012. – 376 p.
89 Jackson M. Social and Economic Networks. – Princeton: Prince-
ton Univ. Press, 2010. – 520 p.
90 Jaradat R., Keating C. A Histogram Analysis for System of Sys-
tems // International Journal System of Systems Engineering. 2014. Vol.
5. No. 3. P. 193–227.
91 Julong D. Introduction to Grey System Theory // The Journal of
Grey System. 1989. Vol. 1. P. 1–24.
92 Kahn H., Mann I. Techniques of Systems Analysis. – Santa Mon-
ica: RAND Corporation, 1956. – 168 p.
93 Kalman R., Falb P., Arbib M. Topics in Mathematical System
Theory. – McGraw Hill Book Co., 1969.
94 Kaufman A. Introduction to Fuzzy Arithmetic. – New York: Van
Nostrand Reinhold Company, 1991. – 384 p.
95 Kenny V. There’s Nothing Like the Real Thing. Revisiting the
Need for a Third-Order Cybernetics // Constructivist Foundations. 2009.
No 4(2). P. 100–111.
96 Khalil H. Nonlinear Systems. 2nd ed. – Upper Saddle River: Pren-
tice Hall, 1996. – 734 p.
97 Klaus G. Kybernetic und Gesellschaft. – Berlin: Veb Deutscher
Verlag der Wissenschaften, 1964. – 384 p.
98 Klaus G. Kybernetik in Philosophischer Sicht. – Berlin: Dietz
Verlag Berlin, 1961. – 491 p.
99 Kobrinsky N., Maiminas E.Z., Smirnov A.D. Economic Cyber-
netics. – Moscow: Ekonomika, 1982. – 408 p. (in Russian)
100 Kogan A., Naumov N.P., Rezhabek V.G., Chorayan O.G. Bio-
logical Cybernetics. – Moscow: Vysshaya Shkola, 1972. – 384 p. (in
Russian)
101 Kolin K. Philosophical Problems of Informatics. – Moscow:
BINOM, 2010. – 270 p. (in Russian)
102 Kolin K. The Formation of Informatics as a Fundamental Sci-
ence and a Complex Scientific Problem // Sistemy i Sredstva Informatiki.
2006. Special Issue on Scientific and Methodological Problems of Infor-
matics. P. 7–58. (in Russian)
103 Kolin K. The Structure of Scientific Research on the Complex
Problem of Informatics / Sotsial’naya Informatika. - Moscow: Higher
Commercial School, 1990. P. 19–33. (in Russian)
104 Kolmogorov A. Mathematics – A Science and Profession //
Kvant. No. 64. – Moscow: Nauka, 1988. P. 43–62. (in Russian)
105 Korepanov V., Novikov D. The Diffuse Bomb Problem // Au-
tomation and Remote Control. 2013. Vol. 74. No 5. P. 863–874.
106 Korepanov V., Novikov D. Models of Strategic Behavior in the
Diffuse Bomb Problem // Control Sciences. 2015. No. 2. P. 38–44. (in
Russian)
107 Korepanov V., Novikov D. Reflexive Colonel Blotto Game //
Control Systems and Information Technology. 2012. No. 1 (47). P. 55–
62. (in Russian)
108 Korshunov Yu. Mathematical Foundations of Cybernetics. –
Moscow: Energoatomizdat, 1987. – 496 p. (in Russian)
109 Kozielecki J. Psychological Decision Theory. – London:
Springer, 1982. – 424 p.
110 Kozlov V. Systems Analysis, Optimization and Decision-
Making. – Moscow: Prospekt, 2010. – 176 p. (in Russian)
111 Krylov S. Neocybernetics: Algorithms, Evolution Mathematics
and Future Technologies. – Moscow: LKI, 2008. – 288 p. (in Russian)
112 Kuhn T. The Structure of Scientific Revolutions. – Chicago:
University of Chicago Press, 1962. – 264 p.
113 Kuzin L. The Foundations of Cybernetics. – Moscow: Energiya,
1979. Vol. 1. – 504 p. Vol. 2. – 584 p. (in Russian)
114 Larichev O. Systems Analysis: Problems and Prospects // Au-
tomation and Remote Control. 1975. Vol. 36. No. 2. P. 241–249.
115 Lefevbre V. Algebra of Conscience. – London: Springer, 2001.
– 372 p.
116 Lefevbre V. Second-Order Cybernetics in the Soviet Union and
Western Countries // Reflexive Processes and Control. 2002. No. 1. Vol.
2. P. 96–103. (in Russian)
117 Lefevbre V. The Structure of Awareness: Toward a Symbolic
Language of Human Reflexion. – New York: Sage Publications, 1977. –
199 p.
118 Lepsky V. The Philosophy and Methodology of Control in the
Context of Scientific Rationality Development / XII All-Russian Meeting
on Control Problems. – Moscow: Trapeznikov Institute of Control Sci-
ences, 2014. P. 7785–7796. (in Russian)
119 Lerner A. Fundamentals of Cybernetics. – Berlin: Springer,
1972. – 294 p.
120 Malinetsky G., Potapov A.B., Podlazov A.V. Nonlinear Dynam-
ics: Approaches, Results, Expectations. 3rd Ed. – Moscow: URSS, 2011. –
280 p. (in Russian)
121 Mancilla R. Introduction to Sociocybernetics (Part 1): Third-
Order Cybernetics and a Basic Framework for Society // Journal of
Sociocybernetics. 2011. Vol. 42. No. 9. P. 35–56.
122 Mancilla R. Introduction to Sociocybernetics (Part 3): Fourth-
Order Cybernetics // Journal of Sociocybernetics. 2013. Vol. 44. No. 11.
P. 47–73.
123 Mansour Y. Computational Game Theory. – Tel Aviv: Tel Aviv
University, 2003. – 150 p.
124 Maruyama M. The Second Cybernetics: Deviation-Amplifying
Mutual Causal Processes // American Scientist. 1963. Vol. 5. No. 2. P.
164–179.
125 Maturana H., Varela F. Autopoiesis and Cognition. – Dordrecht:
D. Reidel Publishing Company, 1980. – 143 p.
126 Maturana H., Varela F. The Tree of Knowledge. – Boston:
Shambhala Publications, 1987. – 231 p.
127 Maxwell J.C. On Governors // Proceedings of the Royal Society
of London. 1868. Vol. 16. P. 270–283.
128 Mead M. The Cybernetics of Cybernetics / Purposive Systems.
Ed. by H. von Foerster et al. – New York: Spartan Books, 1968. P. 1–11.
129 Meadows D. Thinking in Systems. – London: Earthscan, 2009. –
218 p.
130 Meadows D., Randers J., Behrens W. The Limits to Growth. –
New York: Universe Books, 1972. – 205 p.
131 Mechanism Design and Management: Mathematical Methods
for Smart Organizations / Ed. by Prof. D. Novikov. – New York: Nova
Science Publishers, 2013. – 163 p.
132 Mesarovic M., Takahara Y. General Systems Theory: Mathemat-
ical Foundations (Mathematics in Science and Engineering). – Elsevier,
1975. – 322 p.
133 Mesarović M., Mako D., Takahara Y. Theory of Hierarchical
Multilevel Systems. – New York: Academic, 1970. – 294 p.
134 Milner B. Theory of Organization. 2nd ed. – Moscow: INFRA-
M, 2000. – 480 p. (in Russian)
135 Mirzoyan R. Control as a Subject of Philosophical Analysis //
Russian Studies in Philosophy. 2010. No. 4. P. 35–47. (in Russian)
136 Moiseev N. Mathematical Problems of Systems Analysis. –
Moscow: Nauka, 1981. – 488 p. (in Russian)
137 Moiseev N. People and Cybernetics. – Moscow: Molodaya
Gvardiya, 1984. – 224 p. (in Russian)
138 Morris W. Management Science: A Bayesian Introduction. –
New York: Prentice Hall, 1968. – 226 p.
139 Morse P., Kimball G. Methods of Operations Research. – New
York: Wiley, 1951. – 258 p.
140 Müller K. The New Science of Cybernetics: A Primer // Journal
of Systemics, Cybernetics and Informatics. 2013. Vol. 11. No. 9. P. 32–
46.
141 Myerson R. Game Theory: Analysis of Conflict. – London:
Harvard Univ. Press, 1991. – 568 p.
142 NASA Systems Engineering Handbook. 2007. – 360 p.
143 Nash J. Non-cooperative Games // Ann. Math. 1951. Vol. 54. P.
286–295.
144 Nature. – 2008. September 3 (Special Issue).
145 Neumann J., Morgenstern O. Theory of Games and Economic
Behavior. – Princeton: Princeton University Press, 1944. – 776 p.
146 Nikanorov S. Conceptualization of Subject Domains. – Mos-
cow: Kontsept, 2009. – 268 p. (in Russian)
147 Novick D. Program Budgeting. – Cambridge: Harvard Universi-
ty Press, 1965. – 88 p.
148 Novikov A., Novikov D. Methodology. – Moscow: Sinteg,
2007. – 668 p. (in Russian)
149 Novikov A., Novikov D. Research Methodology: From Philoso-
phy of Science to Research Design. – Amsterdam, CRC Press, 2013. –
130 p.
150 Novikov D. Analysis of Some Leading Conferences on Control
Problems // Automation and Remote Control. 2014. No. 12. P. 160–166.
(in Russian)
151 Novikov D. Big Data and Big Control // Advances in Systems
Studies and Applications. 2015. Vol. 15. No. 1. P. 21–36.
152 Novikov D. Control Methodology. – New York: Nova Science
Publishers, 2013. – 76 p.
153 Novikov D. Hierarchical Models of Warfare // Automation and
Remote Control. 2013. Vol. 74. No. 10. P. 1733–1752.
154 Novikov D. Mechanisms of Functioning of Multilevel Organiza-
tional Systems. – Moscow: Control Problems Foundation, 1999. – 150 p.
(in Russian)
155 Novikov D. Models of Strategic Behavior // Automation and
Remote Control. 2012. Vol. 73. No. 1. P. 1–19.
156 Novikov D. Regularities of Iterative Learning. – Moscow:
Trapeznikov Institute of Control Sciences RAS, 1998. – 98 p. (in Rus-
sian)
157 Novikov D. Theory of Control in Organizations. – New York:
Nova Science Publishers, 2013. – 341 p.
158 Novikov D., Chkhartishvili A. Reflexion and Control: Mathe-
matical Models. – London: CRC Press, 2014. – 298 p.
159 Novikov D., Rusyaeva E. Foundations of Control Methodology
// Advances in Systems Science and Application. 2012. Vol. 12. No. 3. P.
33–52.
160 Novosel’tsev V. Control Theory and Biosystems. – Moscow:
Nauka, 1978. – 319 p. (in Russian)
161 Ogata K. Modern Control Engineering. 5th ed. – Upper Saddle
River: Prentice Hall, 2010. – 905 p.
162 Orlovski S. Optimization Models Using Fuzzy Sets and Possi-
bility Theory. – Berlin: Springer, 1987. – 452 p.
163 Optner S. Systems Analysis for Business Management. – New
York: Prentice Hall, 1960. – 190 p.
164 Pareto V. Cours d’Economie Politique. Vol. 2. 1897. – 420 p.
165 Pawlak Z. Rough Sets: Theoretical Aspects of Reasoning about
Data. – Dordrecht: Kluwer Academic Publishing, 1991.
166 Peregudov F., Tarasenko F. Introduction to Systems Analysis. –
OH: Columbus: Glencoe/Mcgraw-Hill, 1993. – 320 p.
167 Pervozvansky A. A Course on Automatic Control Theory. –
Moscow: Nauka, 1986. – 616 p. (in Russian)
168 Peters B. Normalizing Soviet Cybernetics // Information & Cul-
ture: A Journal of History. 2012. Vol. 47. No. 2. P. 145–175.
169 Pickering A. The Cybernetic Brain. – Chicago: The University
of Chicago Press, 2010. – 537 p.
170 Polonnikov R., Yusupov R.M. Will the 21st Century Perceive Cybernetics? // Problemy Upravleniya i Informatiki. 2001. No. 6. P. 132–
152. (in Russian)
171 Polyak B., Scherbakov P. Robust Stability and Control. – Mos-
cow: Nauka, 2002. – 303 p. (in Russian)
172 Polyak B., Stepanov O., Fradkov A.L. The 19th IFAC World
Congress // Automation and Remote Control. 2015. No. 2. P. 150–156.
(in Russian)
173 Pospelov I. A Preface to Wiener’s books “The Human Use of
Human Beings. Cybernetics and Society” and “God and Golem”. –
Moscow: Taideks, 2003. – 248 p. (in Russian)
174 Prangishvili I. Systems Approach and System-wide Regularities.
– Moscow: SINTEG, 2000. – 528 p. (in Russian)
175 Prigogine I., Stengers I. Order Out of Chaos. – New York: Ban-
tam Books, 1984. – 285 p.
176 Pushkin V., Ursul A.D. Informatics, Cybernetics, Intelligence:
Philosophical Sketches. – Kishinev: Shtiintsa, 1989. – 341 p. (in Russian)
177 Rapoport A. General System Theory: Essential Concepts & Ap-
plications. – Kent: Abacus Press, 1986. – 250 p.
178 Rashevsky N. Outline of a New Mathematical Approach to
General Biology // Bulletin of Mathematical Biophysics. 1943. Vol. 5. P.
33–47, 49–64, 69–73.
179 Fet Ya.I. A Reading Book on the History of Informatics / Ed. by
B.G. Mikhailichenko; Institute of Computational Mathematics and Math-
ematical Geophysics, Siberian Branch of RAS. – Novosibirsk: Geo, 2014.
– 559 p. (in Russian)
180 Ren W., Yongcan C. Distributed Coordination of Multi-agent
Networks. – London: Springer, 2011. – 307 p.
181 Rosenblueth A., Wiener N., Bigelow J. Behavior, Purpose and
Teleology // Philosophy of Science. 1943. No. 10. P. 18–24.
182 Rukov A. Models and Methods of Systems Analysis: Decision-
Making and Optimization. – Moscow: Moscow Institute of Steel and
Alloys, 2005. – 352 p. (in Russian)
183 Rzevski G., Skobelev P. Managing Complexity. – London: WIT
Press, 2014. – 216 p.
184 Sadovsky V. Foundations of General System Theory. – Mos-
cow: Nauka, 1978. – 280 p. (in Russian)
185 Satzinger J., Jackson R., Burd S. Introduction to Systems Analy-
sis and Design. 6th ed. – Boston: Course Technology, 2011. – 512 p.
186 Schedrovitsky G. Selected Proceedings. - Moscow: Higher
School of Culturology, 1995. - 800 p. (in Russian)
187 Shannon C. A Mathematical Theory of Communication // Bell
System Technical Journal. 1948. Vol. 27. P. 379–423, 623–656.
188 Shannon C., Weaver W. The Mathematical Theory of Commu-
nication. – Illinois: University of Illinois Press, 1948. – 144 p.
189 Shoham Y., Leyton-Brown K. Multiagent Systems: Algorithmic,
Game-Theoretic, and Logical Foundations. – Cambridge: Cambridge
University Press, 2008. – 504 p.
190 Smuts J. Holism and Evolution. – London: Macmillan, 1926. –
368 p.
191 Sokolov B., Yusupov R.M. Analysis of Interdisciplinary Interac-
tion between Modern Informatics and Cybernetics: Theoretical and
Practical Aspects // XII All-Russian Meeting on Control Problems. –
Moscow: Trapeznikov Institute of Control Sciences RAS, 2014. P.
8625–8636. (in Russian)
192 Sokolov B., Yusupov R.M. Neocybernetics in the Modern Struc-
ture of System Knowledge // Robototekhnika i Tekhnicheskaya
Kibernetika. 2014. No. 2(3). P. 3–10. (in Russian)
193 Steinbuch K. Automat und Mensch. Kybernetische Tatsachen
und Hypothesen. – Berlin: Springer-Verlag, 1963. – 392 p.
194 Strogatz S. Nonlinear Dynamics and Chaos: With Applications
to Physics, Biology, Chemistry, and Engineering (Studies in Nonlineari-
ty). – Boulder: Westview Press, 2001. – 512 p.
195 Surowiecki J. The Wisdom of Crowds: Why the Many Are
Smarter Than the Few and How Collective Wisdom Shapes Business,
Economies, Societies and Nations. – New York: Doubleday, 2004. –
336 p.
196 Systems Engineering Guide. – Bedford: MITRE Corporation,
2014. – 710 p.
197 Systems Theory and Systems Analysis in Control of Organiza-
tions: A Handbook / Ed. by V.N. Volkova and A.A. Emel’yanov. –
Moscow: Finansy i Statistika, 2006. – 848 p. (in Russian)
198 Taha H. Operations Research: An Introduction (9th ed.). – New
York: Prentice Hall, 2011. – 813 p.
199 Tesler G. New Cybernetics. – Kiev: Logos, 2004. – 404 p. (in
Russian)
200 The Control Handbook. 2nd ed. / Ed. by W. Levine. – New York:
CRC Press, 2010. – 786 p.
201 Trentowski B. Stosunek Filozofii do Cybernetyki, Czyli Sztuki
Rządzenia Narodem. – Warszawa, 1843. – 195 p.
202 Tsetlin M. Studies on Automata Theory and Modeling of Bio-
logical Systems. – Moscow: Nauka, 1969. – 316 p. (in Russian)
203 Turchin V. The Phenomenon of Science. – New York: Colum-
bia University Press, 1977. – 348 p.
204 Uemov A. Systems Approach and General Systems Theory. –
Moscow: Mysl’, 1978. – 272 p. (in Russian)
205 Ugolev A. Natural Technologies of Living Systems. – Lenin-
grad: Nauka, 1987. – 317 p. (in Russian)
206 Umpleby S. A Brief History of Cybernetics in the United States
// Austrian Journal of Contemporary History. 2008. Vol. 19. No. 4. P. 28–
40.
207 Umpleby S. The Science of Cybernetics and the Cybernetics of
Science // Cybernetics and Systems. 1990. Vol. 21. No. 1. P. 109–121.
208 Ursul A. The Nature of Information. – Moscow: Politizdat,
1968. – 288 p. (in Russian)
209 Valacich J., George J., Hoffer J. Essentials of Systems Analysis
and Design. 5th ed. – Prentice Hall: Pearson, 2012. – 445 p.
210 Varela F. A Calculus for Self-reference // International Journal
of General Systems. 1975. Vol. 2. P. 5–24.
211 Vassilyev S., Zherlov A.K., Fedosov E.A., Fedunov B.E. Intelli-
gent Control of Dynamic Systems. – Moscow: Fizmatlit, 2000. – 352 p.
(in Russian)
212 Vittikh V.A. Evolution of Ideas on Management Processes in
the Society: From Cybernetics to Evergetics // Group Decision and Nego-
tiation. http://link.springer.com/article/10.1007/s10726-014-9414-
6/fulltext.html. Published online on September 14, 2014.
213 Volkova V. From the History of Systems Analysis Evolution in
Russia. – St. Petersburg: St. Petersburg State Technical University, 2001.
– 210 p. (in Russian)
214 Volkova V., Denisov A.A. The Foundations of Systems Theory
and Systems Analysis. 2nd ed. – St. Petersburg: St. Petersburg State
Technical University, 2001. – 512 p. (in Russian)
215 Voronov A. Controllability, Observability, Stability. – Moscow:
Nauka, 1979. – 339 p. (in Russian)
216 Vyshnegradsky I. On Direct-Action Controllers // Izvestiya St.
Petersburg Practical technological Institute. 1877. Vol. 1. P. 21–62. (in
Russian)
217 Wagner H. Principles of Operations Research. 2-nd ed. – NJ
Upper Saddle River: Prentice Hall, 1975. – 1039 p.
218 Walter G. The Living Brain. – London: Pelican Books, 1963. –
255 p.
219 Wasson C. System Analysis, Design and Development: Con-
cepts, Principles and Practices. – Hoboken: Wiley, 2006. – 832 p.
220 Weibull J. Evolutionary Game Theory. – Cambridge: The MIT
Press, 1995. – 256 p.
221 Wiener N. Cybernetics: or the Control and Communication in
the Animal and the Machine. – Cambridge: The Technology Press, 1948. –
194 p.
222 Wiener N. Ex-Prodigy: My Childhood and Youth. – Cambridge:
The MIT Press, 1964. – 317 p.
223 Wiener N. God and Golem, Inc.: A Comment on Certain Points
where Cybernetics Impinges on Religion. – Cambridge: The MIT Press,
1966. – 99 p.
224 Wiener N. I Am a Mathematician. – Cambridge: The MIT Press,
1964. – 380 p.
225 Wiener N. The Human Use of Human Beings. Cybernetics and
Society. – Boston: Houghton Mifflin Company, 1950. – 200 p.
226 Wooldridge M. An Introduction to Multi-Agent Systems. – New
York: John Wiley and Sons, 2002. – 376 p.
227 Young S. Management: A Systems Approach. – Glenview:
Scott, Foresman and Company, 1966. – 360 p.
228 Zadeh L. Outline of a New Approach to the Analysis of Com-
plex Systems and Decision Processes // IEEE Trans. Syst., Man, Cybern.
1973. Vol. SMC-3. No. 1. P. 28–44.
Appendix I: A List of Basic Terms94
94 Analysis methods for the terminological structure of a subject area were studied in [74].
CONTROL is 1) an impact on a controlled system intended for ensuring its necessary behavior; 2) the science of control; 3) an object, i.e., a tool of control, a structure (e.g., a department) of several subjects performing control.
DEVELOPMENT is an irreversible, directed and consistent change of material and ideal objects. Development in a desired direction is called progress; development in an undesired direction is called regress.
DIVERSITY is a quantitative characteristic of a system, which
equals the number of its admissible states or the logarithm of this number.
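For instance (an illustrative calculation of ours, not part of the original entry), a device with eight admissible states has diversity 8 or, in binary logarithmic units, log2 8 = 3 bits.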
EXTERNAL ENVIRONMENT is the set of all objects and subjects lying outside a given system whose behavior and/or changed properties affect the system, together with all objects/subjects whose behavior and/or properties vary depending on the system’s behavior.
FEEDBACK (FB) is a reverse impact exerted by the results of a certain process on its course; information on the state of a controlled system, which is supplied to a control system (see CONTROL). FB characterizes control systems in wild life, society and technology. There exist positive and negative FB. If the results of a process strengthen its effect, FB is positive; negative FB takes place whenever the results of a process weaken its effect. Negative FB stabilizes process behavior, whereas positive FB often accelerates process evolution and causes oscillations (see the numerical sketch below). In complex systems (e.g., social or biological ones), it seems difficult or even impossible to identify FB types. In addition, FB loops are classified based on the character of the bodies and media realizing them: mechanical (e.g., the negative FB realized by Watt’s steam engine governor); optical (e.g., the positive FB realized by an optical cavity in a laser); electrical, and others. The notion of FB as a form of interaction plays an important role in the analysis of complex control systems (their functioning and development) in wild life and society.
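A minimal numerical sketch (our addition; the scalar system and gains are illustrative, not part of the original entry) of the stabilizing effect of negative FB and the amplifying effect of positive FB for the dynamics x' = a*x around the setpoint 0:

def simulate(a, x0=1.0, dt=0.01, steps=500):
    # Euler integration of x' = a*x: a < 0 models negative feedback
    # (deviations from the setpoint are damped), a > 0 models positive
    # feedback (deviations are amplified).
    x = x0
    for _ in range(steps):
        x += dt * a * x
    return x

print("negative feedback (a = -2):", simulate(-2.0))  # decays toward 0
print("positive feedback (a = +2):", simulate(+2.0))  # grows away from 0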
FUNCTION is 1) (philosophy) a phenomenon dependent on another phenomenon, which varies simultaneously with the latter; 2) (mathematics) a law assigning a certain well-defined quantity to each value of a variable (argument), as well as this quantity itself; a relation between two (or more) objects such that variation of one object causes an appropriate variation of the other object(s); 3) a job performed by an organ or organism; 4) a role or meaning of something; a role a subject or a social institute plays with respect to the needs of an upper subsystem or the interests of its groups and individuals; a duty or circle of activity.
GOAL is anything strived for or to be implemented. In philosophy,
a goal (of an action or activity) is an element in the behavior and con-
scious activity of a human being, which characterizes anticipation in
thinking of the activity result and ways of its implementation using
definite forms, methods and means. A goal represents a way of integrat-
ing different actions of a human being into a certain sequence or system.
HIERARCHY (from the Greek ἱεραρχία, “rule of a high priest”) is a structural organization principle of complex multilevel systems, which lies in ordering the interactions between the levels of a system (top-down), characterizes the mutual correlation and collateral subordination of processes at different levels, and ensures the system’s functioning and behavior as a whole.
HOMEOSTAT (from the Greek ὁμοιος, “like, resembling,” and στάσις, “a standing still”) is 1) the capability of an open system to preserve its internal state via coordinated responses maintaining a dynamic equilibrium; 2) (biological systems) the permanence of characteristics essential for the system’s vital activity under existing disturbances in an external environment; the state of relative constancy; the relative independence of an internal environment from external conditions [14, 41, 160].
MODEL (in the wide sense) is any image, analog (mental or condition-
al, e.g., a picture, description, scheme, diagram, graph, plan, map, and so
on) of a certain object, process or phenomenon (the original of a given
model); a model is an auxiliary object chosen or transformed for cogni-
tive goals, which provides new information about the primary object.
Model design proper does not guarantee that the resulting model answers
its purposes. For normal functioning, a model must meet a series of
requirements such as inherence, adequacy and simplicity.
ORGANIZATION is 1) the internal order, the coordinated interaction of more or less differentiated and autonomous parts of a whole, caused by its structure; 2) a set of processes or actions leading to the formation or perfection of interconnections between the parts of a whole; 3) an association of people engaged in the joint implementation of a certain program or task, using specific procedures and rules, i.e., mechanisms of operation (a mechanism is a system or device determining the order of a certain activity). The last meaning of the term “organization” is the definition of an organizational system. The category of organization is a backbone element of control theory [157].
SELF-ORGANIZATION is a process leading to creation, reproduc-
tion or perfection of complex system organization. Self-organization
processes run only in systems having a high level of complexity and a
large number of elements with nonrigid (e.g., probabilistic) connections.
Self-organization properties are inherent to objects of different nature,
namely, a living cell, an organism, a biological population,
biogeocenosis, a collective of human beings, complex technical systems,
etc. Self-organization processes run via readjusting the existing connec-
tions and forming new connections among system elements. A distinctive
feature of such processes is their purposeful, yet natural (spontaneous)
character. Self-organization processes imply system interaction with an
external environment, are somewhat autonomous and relatively inde-
pendent from an environment.
SELF-REGULATION is generally defined as reasonable function-
ing of living systems; it represents a closed control loop (see
FEEDBACK), where the subject and object of control do coincide. Self-
regulation has the following structure: an activity goal accepted by the
subject, a model of significant activity conditions, a program of actions
proper, a system of activity efficiency criteria, information on real results
achieved, an assessment of the existing correspondence between real
results and efficiency criteria, decisions on the necessity and character of
activity corrections.
STRUCTURE is a set of stable connections among the elements of a
certain system, ensuring its integrity and self-identity.
SYNERGETICS is an interdisciplinary research direction of self-
organization processes in complex systems, which describes and explains
the appearance of qualitatively new properties and structures at the
macrolevel as the result of interactions among the elements of an open
system at the microlevel. Synergetics employs the framework of nonline-
ar dynamics (including catastrophe theory) and nonequilibrium thermo-
dynamics.
SYNTHESIS is a mental operation which integrates different ele-
ments or sides of a certain object in a comprehensive whole (a system).
Synthesis appears opposite to and has an indissoluble connection with
analysis. Synthesis represents a theoretical method-operation inherent to
any activity.
SYSTEM is a set of elements having mutual relations and connec-
tions, which forms a definite unity and is dedicated to goal achievement.
Systems have the following basic features: integrity, relative isolation
from an external environment, connections with the environment, the
existence of parts and their connections (structuredness), whole system
dedication to goal achievement.
UNCERTAINTY is the absence or incompleteness of information.
Appendix II: Topics for Further Self-study
About the Author
NOVIKOV,
DMITRY A.