
RUSSIAN ACADEMY OF SCIENCES

TRAPEZNIKOV INSTITUTE
OF CONTROL SCIENCES
(ICS RAS)

D.A. Novikov

CYBERNETICS:
From Past to Future

2015

NOVIKOV D.A. Cybernetics: From Past to Future. – Heidelberg:
Springer, 2016. – 107 p.

ISBN 978-3319273969, 3319273965

This book is a brief “navigator” across the history of cybernetics, its state of the art and its prospects.
The evolution of cybernetics (from N. Wiener to the present day) and the reasons for its ups and downs are considered. The correlation of cybernetics with the philosophy and methodology of control, as well as with systems theory and systems analysis, is established.
A detailed analysis focuses on the modern trends of research in cybernetics. A new development stage of cybernetics (the so-called cybernetics 2.0) is discussed as a science of the general regularities of systems organization and control. The author substantiates the relevance of elaborating a new branch of cybernetics, i.e., organization theory (O3), which studies organization as a property, a process and a system.
The book is intended for theoreticians and practitioners, as well as for students, postgraduates and doctoral candidates. In the first place, the target audience includes tutors and lecturers preparing courses on cybernetics, control theory and systems science.

CONTENTS

Introduction
1. Cybernetics in the 20th Century
1.1. Wiener’s Cybernetics
1.2. Cybernetics of Cybernetics and Other Types of Cybernetics
1.3. Achievements and Disillusions of Cybernetics
2. Cybernetics, Control Philosophy and Control Methodology
2.1. Control Philosophy
2.2. Control Methodology
3. Laws, Regularities and Principles of Control
4. Systems Theory and Systems Analysis. Systems Engineering
5. Some Trends and Forecasts
5.1. Topic Analysis of Leading Control Conferences
5.2. Interdisciplinarity
5.3. “Networkism”
5.4. Heterogeneous Models and Hierarchical Modeling
5.5. Strategic Behavior
5.6. Big Data and Big Control
Conclusion: Cybernetics 2.0
References
Appendix I: A List of Basic Terms
Appendix II: Topics for Further Self-study
About the Author

In warm memory of my father, Academician A.M. Novikov, who opened up the world of cybernetics to me

We had dreamed for years of an institution of independent scientists, working together in one of these backwoods of science, not as subordinates of some great executive officer, but joined by the desire, indeed by the spiritual necessity, to understand the region as a whole, and to lend one another the strength of that understanding.
N. Wiener

Introduction

The history of science has its “romantic” periods. One of them fell in the middle of the 1940’s. This “romanticism” was determined by several factors.
The first factor was an intensive flow of scientific and applied results. Just imagine: the end of the terrible World War II (1945); the dynamic growth of industry; the way out of the crisis in physics (which occurred at the beginning of the 20th century), namely the appearance and rapid development of atomic physics, quantum mechanics, general and special relativity theory, and astrophysics; the first atomic bomb explosion (1945), followed by the first atomic power plant launch (1954); the everyday use of electrical and radio devices; a series of discoveries in biology, physiology and medicine (penicillin (1928), commercially produced from 1941, saved millions of lives; the three-dimensional DNA helix model soon appeared (1953); radiobiology and genetics developed rapidly; etc.); the creation of the first computer (1945) and the bipolar transistor (1947); and the birth of choice theory [12] (1951), artificial neural networks (1943), game theory (1944, see [143, 145]) and operations research (1943), the last being a striking example of an interdisciplinary synthetic science.
The second factor was associated with the comprehension of the interdisciplinarity of science by researchers from different branches.
Interdisciplinarity implies that (a) there exist general approaches and regularities in different scientific branches and (b) it is possible to perform an adapted translation of results between some branches. This made obvious the necessity of searching for generalizations, not only within the framework of a certain field of knowledge or at a junction of fields, but (in the first place) at their “intersection.” In other words, the point was not even to create new paradigms in T. Kuhn’s sense [112] for a branch, but to join the efforts of physicists and biologists, mathematicians, engineers and physiologists, etc., in order to obtain fundamentally new results and breakthrough technologies.
The third factor was that the role and “benefit” of science became evident to laymen and politicians alike. The former enjoyed scientific results owing to their rapid and mass implementation. The latter (a) realized that science is an important social and economic driver of a society and (b) became accustomed to the fact that project-based management of applied research allows predicting and, in part, guaranteeing its duration and results.
However, the flight of thought and stormy feelings of any romanticism go hand in hand with inflated expectations. Moreover, the headlong development of any science is inevitably followed by its normal advancement (e.g., in T. Kuhn’s sense).
All these regularities fully affected cybernetics, a science born in the above “romantic period” (1948) that went through a romantic childhood, the disillusionment of youth and the decay of maturity.1 The book discusses exactly these issues, representing a brief “navigator” across the history of cybernetics, its state of the art and its prospects. The style of a “navigator” implies forgoing a detailed characterization of results: the numerous references cover almost all2 classical works on cybernetics3 published to date. On the other hand, such a style a priori makes the exposition somewhat incomplete, eclectic and nonrigorous, as it would seem to a representative of any concrete science mentioned.
The book has the following structure. First, we consider the evolution of cybernetics (from N. Wiener to the present day) in Sections 1.1 and 1.2. A detailed analysis of the reasons for its ups and downs is given in Section 1.3. Next, we study the interrelation of cybernetics with control philosophy and control methodology (Chapter 2), as well as with systems theory and systems analysis (Chapter 4). Chapter 3 discusses the basic laws, regularities and principles of control. Chapter 5 identifies some modern development trends of cybernetics. In the Conclusion, we introduce the new stage of cybernetics development, the so-called Cybernetics 2.0, as the science of systems organization and control. Appendices contain a list of basic terms and topics for self-study.

1 Note that general systems theory and systems analysis followed a similar path; see below.
2 Cybernetics is a synthetic science, and any attempt to give a comprehensive bibliography of its components (e.g., control theory) is doomed to failure. By saying “all,” we mean cybernetics proper (Cybernetics with a capital C, as explained in Section 1.1).
3 Most references are publicly available to an interested reader on the Internet.
The author is deeply grateful to V. Afanas’ev, V. Breer, V. Burkov, A. Chkhartishvili, M. Goubko, A. Kalashnikov, K. Kolin, V. Kondrat’ev, N. Korgin, O. Kuznetsov, A. Makarenko, R. Nizhegorodtsev, B. Polyak, I. Pospelov, A. Raikov, P. Skobelev, A. Teslinov and V. Vittikh for fruitful discussions and valuable remarks. Of course, all remaining shortcomings are the author’s own.
And finally, my deep appreciation belongs to Dr. A. Mazurov for his careful translation, permanent feedback and contribution to the English version of the book.

1. Cybernetics in the 20th century

This section briefly considers the history of cybernetics and describes “classical” cybernetics. Let us call it “cybernetics 1.0.”
CYBERNETICS (from the Greek κυβερνητική “governance,” κυβερνώ “to steer, navigate or govern,” κυβερνη “an administrative unit; an object of governance containing people”4) is the science of the general regularities of control and information transmission processes in different systems, be they machines, animals or society.
Cybernetics studies the concepts of control and communication in living organisms, machines and organizations, including self-organization. It focuses on how a (digital, mechanical or biological) system processes information, responds to it, and changes or is changed for better functioning (including control and communication).
Cybernetics is an interdisciplinary science. It originated “at the junction”5 of mathematics, logic, semiotics, physiology, biology and sociology. Among its inherent features, we mention the analysis and revelation of general principles and approaches in scientific cognition. Control theory, communication theory, operations research and others (see Section 1.1) represent the weightiest theories within cybernetics 1.0.

4 This root gave rise to the words “governor,” “government” and “governance.”
5 Depending on the mutual penetration of subjects and methods, a pair of sciences often appears at the junction of two sciences (e.g., physical chemistry and chemical physics).
In ancient Greece, the term “cybernetics” denoted the art of a municipal governor (e.g., in Plato’s Laws).
A. Ampere (1834) related cybernetics to the political sciences: the book [6] defined cybernetics (“the science of civil government”) as a science of current policy and practical governance in a state or society.
B. Trentowsky (1843, see [136, 201]) viewed cybernetics as “the art of how to govern a nation.”
In his Tektology (1925, see [29]), A. Bogdanov examined organizational principles common to all types of systems. In fact, he anticipated many results of N. Wiener and L. Bertalanffy, neither of whom was familiar with Bogdanov’s works.
The history and evolution of cybernetics can be traced in [85, 84, 179, 65, 168, 206].
The modern (and classical!) interpretation of the term “cybernetics” as “the scientific study of control and communication in the animal and the machine” was pioneered by Norbert Wiener in 1948; see the monograph [221]. Two years later, Wiener added society as the third object of cybernetics [225]. Among other classics, we mention William Ashby6 [14, 15] (1956) and Stafford Beer [23] (1959), who placed their emphasis on the biological and “economic” aspects of cybernetics, respectively.
Therefore, cybernetics 1.0 (or simply cybernetics) can be defined7 as “THE SCIENCE OF CONTROL AND DATA PROCESSING IN ANIMALS, MACHINES AND SOCIETY.” An alternative is the definition of Cybernetics (with a capital C, to distinguish it from cybernetics whenever confusion may occur) as “THE SCIENCE OF GENERAL REGULARITIES OF CONTROL AND DATA PROCESSING IN ANIMALS, MACHINES AND SOCIETY.” The second definition differs from the first in the words “general regularities,” which is crucial and will be repeatedly underlined and used below. In the former case, the matter concerns the “umbrella brand,” i.e., the “integrated” results of all sciences dealing with problems of control and data processing in animals, machines and society. The latter case covers the partial “intersection” of these results8 (see Fig. 9), i.e., the results common to all component sciences. We will return to this approach over and over again to discriminate between the corresponding umbrella brand and the common results of all component sciences in the context of different categories such as interdisciplinarity, systems analysis, organization theory, etc.

6 Interestingly, W. Ashby introduced and analyzed the categories of “variety” and “self-organization,” as well as the terms “homeostat” and “black box,” in cybernetics.
7 These definitions will be used throughout the book, except in the Conclusion.
8 Figuratively speaking, the central rod of the “umbrella.”
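Set-theoretically, the distinction between the two definitions above can be rendered in one line; the following is a schematic formalization in our own notation (not the book’s), with R_i denoting the body of results of the i-th component science:

```latex
% cybernetics as the umbrella brand (the union of component results),
% Cybernetics as the general regularities (their common part)
\[
  \text{cybernetics} = \bigcup_{i} R_i ,
  \qquad
  \text{Cybernetics} = \bigcap_{i} R_i .
\]
```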

1.1. Wiener’s Cybernetics

Some historical facts (an epistemological view). Any science is determined by its “subject” (problem domain) and its “method” (an integrated set of methods) [112, 131, 148]. Therefore, sciences9 can be divided into:
– subject-oriented sciences studying a certain subject by different
methods (e.g., physics, biology, sociology);
– method-oriented sciences (in the restricted sense, the so-called model-based sciences), developing a certain set of methods applicable to different subjects; a classical example is applied mathematics: the apparatus and methods of its branches (differential equations, game theory, etc.) serve for the description and analysis of systems of different nature;
– synthetic sciences (“metasciences”), mostly developing and/or generalizing the methods of certain sciences in application to the subjects of these and/or other sciences (e.g., operations research, systems analysis, cybernetics). Over time, synthetic sciences find or generate their “own” subjects and methods.
As sciences of all types develop, their subjects and methods split and intersect, causing further differentiation of sciences.
The following conditions guarantee the appearance (1 and 2) and survival (3) of synthetic sciences:
1) a sufficient development level of the origin/source sciences;
2) numerous analogies (and then generalizations) among the partial results of the source sciences;

9 This somewhat conditional differentiation applies not only to sciences, but also to researchers. As mentioned in [149], in some fields of science researchers are traditionally divided into two categories. The first is called “screwmen”: they study new problem domains (“screws”) using common methods (“spanners”). The second category is known as “spannermen”; such researchers design new technologies of cognition (methods, “spanners”) and demonstrate their efficiency in different problem domains (by unscrewing common “screws”).
3) rather easy and fast generation/accumulation of nontrivial theoretical and applied results and their popularization, both within the scientific community and among the general public.
Speaking about cybernetics, the first and second conditions had been satisfied by the middle of the 1940’s (see the Introduction). The long-term cooperation between N. Wiener and biologists, alongside his wide and deep professional interests (recall Wiener processes, Banach–Wiener spaces, the Wiener–Hopf equations), ensured the “subjective” satisfaction of these conditions. In a late interview with Russian Studies in Philosophy (1960, No. 9), Wiener noted that “the aim was to unite efforts in different branches of science and get focused on uniform solution of similar problems.” The third condition, the rapid accumulation and popularization of new results, was also satisfied; see the discussion below.
In 1948, the integration of results obtained by different sciences and their substantiated applicability to different subjects (see Fig. 1) gave birth to a new synthetic science known as Wiener’s cybernetics.
Fig. 1. The phylogenesis of Wiener’s cybernetics: method-oriented sciences (control theory, communication theory) and subject-oriented sciences (engineering sciences, biology, the social sciences) converge in cybernetics around control and communication in the machine, the animal and society

A science, as a system of knowledge, has the following epistemological functions [149]:
– the descriptive (phenomenological) function, i.e., the acquisition and accumulation of data and facts. Any science starts from this function, viz., answering the question “What is the structure of the world?”, since any science rests on a large body of facts. From this viewpoint, cybernetics as a synthetic science10 mostly employs the results of its components (source sciences);
– the explanatory (explicative) function, i.e., the elucidation of phenomena and processes and their internal mechanisms. Here the question to be answered is “Why is the world exactly like this?”. In this function, cybernetics plays a more visible role: even analogies may have powerful explanatory force;
– the generalizing function, i.e., the formulation of laws and regularities systematizing and absorbing numerous fragmented phenomena and facts (the associated question is “What are the common features of ...?”). Perhaps this is the main function of cybernetics, since generalizations (in the form of laws, regularities, models and research approaches) comprise the framework of its results;
– the predictive (prognostic) function, i.e., scientific knowledge allows predicting new processes and phenomena (this function answers the question “What and why will happen?”). Efficient forecasting is possible using substantiated analogies and constructive generalizations within the synthetic science of cybernetics;
– the prescriptive (normative) function, i.e., scientific knowledge allows organizing activity with certain goals (the corresponding question is “What and how should be done to achieve the goal?”). The normative function is closely connected with the solution of control problems, an important subject of cybernetics.
Definitions. Just like any comprehensive category, cybernetics hardly admits a unique definition. Moreover, the meanings of the terms describing this category evolve over time. Let us give a series of widespread definitions of cybernetics:
“A science concerned with the study of systems of any nature which are capable of receiving, storing, and processing information so as to use it for control”–A. Kolmogorov;
“The art of steersmanship: deals with all forms of behavior in so far as they are regular, or determinate, or reproducible: stands to the real machine–electronic, mechanical, neural, or economic–much as geometry stands to a real object in our terrestrial space; offers a method for the scientific treatment of the system in which complexity is outstanding and too important to be ignored.”–W. Ashby;

10 For instance, A. Kolmogorov believed that cybernetics is not a science but a scientific direction; however, the listed functions also apply to the latter.
“A branch of mathematics dealing with problems of control, recursiveness, and information, focusing on forms and the patterns that connect.”–G. Bateson;
“The art of effective organization.”–S. Beer;
“The art of securing efficient operation.”–L. Couffignal;
“The art and science of manipulating defensible metaphors.”–G. Pask;
“The art of creating equilibrium in a world of constraints and possibilities.”–E. Glasersfeld;
“The science and art of understanding.”–H. Maturana;
“A synthetic science of control, information and systems”–A.G. Butkovsky;
“A system of views a governor must have for efficient control of his κυβερνη”–N. Moiseev;
“The art of interaction in dynamic networks.”–R. Ascott.
Almost all definitions involve the terms “control” and “system”; see the definition of “cybernetics 2.0” in the Conclusion. Therefore, they are mutually noncontradictory and well consistent with the definition of cybernetics accepted here.
Consequently, Wiener’s cybernetics has the following key terms: control, communication, system, information, feedback, black box, variety, homeostat.
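Since “variety” figures among these key terms, it may help to recall W. Ashby’s law of requisite variety in its usual entropy form; this is a standard statement from the cybernetics literature, not a formula appearing in this book:

```latex
% Law of requisite variety (entropy form): the variety H(E) remaining in
% the essential variables E is bounded below by the variety of the
% disturbances D minus the variety of the regulator R.
\[
  H(E) \;\ge\; H(D) - H(R)
\]
```

In words: only variety in the regulator can destroy the variety coming from the disturbances.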
Cybernetics today (disciplines included in cybernetics, in descending order of their “grade” of membership, see Fig. 9, with year of birth if available):
– control theory11 (1868, the papers [127, 216] published by J. Maxwell and I. Vyshnegradsky);
– mathematical theory of communication and information (1948, K. Shannon’s works [187, 188]);
– general systems theory, systems engineering and systems analysis12 (1968, the book [26]; and 1956, the book [92]);
11 According to an established tradition, control science will be called control theory (although this name narrows its subject).
12 Chapter 4 discusses the history of these scientific directions in more detail.
– optimization (including linear and nonlinear programming, dynamic programming, optimal control, fuzzy optimization, discrete optimization, genetic algorithms, and so on);
– operations research (graph theory, game theory, statistical decisions, etc.);
– artificial intelligence (1956, the Dartmouth Summer Research Project on Artificial Intelligence);
– data analysis and decision-making;
– robotics;
and others (purely mathematical and applied sciences and scientific directions, in arbitrary order), including systems engineering, recognition, artificial neural networks and neural computers, ergatic systems, fuzzy systems (rough sets, grey systems [91, 94, 162, 165]), mathematical logic, identification theory, algorithm theory, scheduling theory and queuing theory, mathematical linguistics, programming theory, synergetics and the like.
In its components, cybernetics intersects considerably with many other sciences, in the first place with such metasciences as general systems theory and systems analysis (see Chapter 4) and informatics13 (see the Conclusion).
There exist a few classical monographs and textbooks on Cybernetics with its “own” results; here we refer to [2, 14, 22, 23, 26, 62, 63, 133, 222-225]. On the other hand, textbooks on cybernetics (mostly published in the former USSR) include many of the above-mentioned directions (par excellence, control in technical systems and informatics); see [52, 68, 108, 113, 119].
The prefix “cyber” spawns new terms on a regular basis, viz., cybersystem, cyberspace, cyberthreat, cybersecurity, etc. More broadly, this prefix embraces everything connected with automation, computers, virtual reality, the Internet and so on.14
Nowadays, cybernetics attracts the attention of several hundred dedicated research centers (institutes, departments, research groups) and associations15 worldwide (with explicit usage of the term “cybernetics” in their names), plus hundreds of scientific journals and regular conferences.

13 Or even with computer science, but we omit this aggregative term due to its undetermined and eclectic character.
14 Perhaps this reflects the place of the word “cybernetics” in mass consciousness, even though experts in the field disagree with such (general and simplified) usage of the prefix.
For instance, see Internet resources on cybernetics:
– http://www.asc-cybernetics.org/
– http://pespmc1.vub.ac.be/
– http://wosc.co/
– http://neocybernetics.com/wp/links/
and others.
“Sectoral” types of cybernetics. Alongside general cybernetics, there exist special (“sectoral”) types of cybernetics [113]. The most natural approach (which follows from Wiener’s extended definition) is to separate out technical cybernetics, biological cybernetics and socioeconomic cybernetics besides theoretical cybernetics (i.e., Cybernetics).
It is possible to compile a more complete list of special types of cybernetics (in descending order of the current level of exploration):
– technical cybernetics, engineering cybernetics;
– biological and medical cybernetics, evolutionary cybernetics, cybernetics in psychology [5, 9, 10, 15, 24, 61, 100, 109, 160, 169, 202];
– economic cybernetics [22, 23, 99, 138, 227];
– physical cybernetics (to be more precise, “cybernetical physics”16, see [203]);
– social cybernetics, educational cybernetics;
– quantum cybernetics (quantum systems control, quantum computing) (see surveys in [69, 72]).
Standing apart, we mention a branch of biological cybernetics known as cybernetic brain modeling, integrated with artificial intelligence and the neural and cognitive sciences. The romantic idea of creating a cybernetic (computer-aided) brain at least partially resembling the natural brain stimulated the founding fathers of cybernetics (see the works of W. Ashby [15], G. Walter [218], M. Arbib [11], F. George [61], K. Steinbuch [193] and others) and their followers (for a modern overview, we refer to [169]).

15 Principia Cybernetica (V. Turchin et al.), the American Society for Cybernetics (http://www.asc-cybernetics.org) and the World Organization of Systems and Cybernetics, to name a few.
16 Cybernetical physics is a science studying physical systems by cybernetical methods. Owing to the maturity of physical object modeling (in the sense of duration and depth), the results in this field can today be stated as general and well-grounded laws; see [59, pp. 38-40] and below.
Bibliometric analysis. The degree of penetration of cybernetics into other sciences and the scale of its “synthetic” character can be estimated using a simple bibliometric analysis. Fig. 2 and Fig. 3 demonstrate the usage of the terms “Cybernetics” and “Control” in scientific publications (paper titles) indexed by Scopus. Clearly, these terms are interdisciplinary and widespread in many branches of modern science.

Fig. 2. The usage of the term “Cybernetics” by scientific branches in paper titles indexed by Scopus

Fig. 3. The usage of the term “Control” by scientific branches in paper titles indexed by Scopus
Fig. 4 and Fig. 5 illustrate the usage of the terms “Cybernetics” and “Control” by year in scientific publications indexed by Scopus.

Fig. 4. The usage of the term “Cybernetics” by year in publications indexed by Scopus

Fig. 5. The usage of the term “Control” by year in publications indexed by Scopus
And finally, Fig. 6 shows the usage of the terms “Cybernetics” and “Control” by year in the texts of publications indexed by Google Scholar. The dip observed in recent years can be explained by indexing delays.

Fig. 6. The usage of the terms “Cybernetics” and “Control” by year (1947-2012) in the texts of publications indexed by Google Scholar (left axis: “Control,” up to roughly 1,800,000 publications per year; right axis: “Cybernetics,” up to roughly 60,000)
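Plots like Fig. 6 are straightforward to rebuild from raw yearly counts. Below is a minimal sketch; the counts are hypothetical placeholders roughly at the scale of the figure’s axes, and a real plot would use an export from Google Scholar or Scopus. Since the two series differ by two orders of magnitude, twin y-axes are used:

```python
import matplotlib.pyplot as plt

# Hypothetical yearly hit counts; real values would come from a
# Google Scholar or Scopus export.
years = [1950, 1970, 1990, 2010]
control = [20_000, 250_000, 700_000, 1_600_000]   # hits for "Control"
cybernetics = [1_000, 30_000, 25_000, 50_000]     # hits for "Cybernetics"

fig, ax_control = plt.subplots()
ax_cyber = ax_control.twinx()  # twin y-axis: the series differ in scale

ax_control.plot(years, control, color="tab:blue", label="Control")
ax_cyber.plot(years, cybernetics, color="tab:orange", label="Cybernetics")

ax_control.set_xlabel("Year")
ax_control.set_ylabel('Publications mentioning "Control"')
ax_cyber.set_ylabel('Publications mentioning "Cybernetics"')
fig.legend(loc="upper left")
plt.show()
```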

1.2. Cybernetics of Cybernetics and Other Types of Cybernetics

In addition to Wiener’s classical cybernetics, the last 50+ years have yielded other types of cybernetics declaring their connection with the former and endeavoring to develop it further.
No doubt, the most striking phenomenon was the appearance of second-order cybernetics (cybernetics of cybernetics, metacybernetics, new cybernetics; here “order” corresponds to “reflexion rank”). The cybernetics of cybernetic systems is associated with the names of M. Mead, G. Bateson and H. Foerster and puts its emphasis on the role of the subject/observer performing control17 [20, 54, 55, 81, 128] (see Fig. 7). The central concept of second-order cybernetics is the observer as a subject separating the subject from the object: indeed, any system is a “model” generated from reality for a certain cognitive purpose and from some point of view/abstraction.

17 Such an approach has been and still is conventional for the theory of control in organizations (e.g., see Fig. 4.15 in [131] and [158]).
Fig. 7. First- and second-order cybernetics: in first-order cybernetics, a control subject applies control to a controlled object (subject to external disturbances) and observes the state of the controlled object; in second-order cybernetics, an observer encompasses this whole loop

H. Foerster noted that “a brain is required to write a theory of a brain. From this follows that a theory of the brain, that has any aspirations for completeness, has to account for the writing of this theory. And even more fascinating, the writer of this theory has to account for her or himself. Translated into the domain of cybernetics; the cybernetician, by entering his own domain, has to account for his or her own activity. Cybernetics then becomes cybernetics of cybernetics, or second-order cybernetics.” [55].
In contrast to Wiener’s cybernetics, second-order cybernetics has a conceptual-philosophical character (for a mathematician or engineer, it is telling that publications on second-order cybernetics contain no formal models, algorithms, etc.). In fact, this type of cybernetics “transfers” the complementarity principle (on insufficient grounds) from physics to all other sciences, phenomena and processes. Moreover, a series of works postulated that any system must have positive feedback loops amplifying positive control actions (e.g., see [124]). But any expert in control theory knows the potential danger of such loops for system stability!
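The danger is easy to demonstrate with a toy simulation; the sketch below is our own illustration (not from the book) of a scalar plant with proportional feedback, where a negative loop gain keeps the state near zero while a positive one makes it blow up:

```python
import random

def simulate(gain: float, steps: int = 50) -> float:
    """Scalar plant x' = x + u + w with proportional feedback u = gain * x.

    gain < 0 gives negative feedback (stabilizing for -2 < gain < 0);
    gain > 0 gives positive feedback: the loop amplifies its own output.
    """
    x = 1.0
    for _ in range(steps):
        w = random.uniform(-0.1, 0.1)  # external disturbance
        u = gain * x                   # proportional control action
        x = x + u + w
    return x

random.seed(0)
print("negative feedback:", simulate(gain=-0.5))  # stays near zero
print("positive feedback:", simulate(gain=+0.5))  # grows without bound
```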

The “biological” stage of second-order cybernetics is associated with the names of H. Maturana and F. Varela [125, 126, 210] and their notion of autopoiesis (the self-generation and self-development of systems). F. Varela underlined that “first-order cybernetics is the cybernetics of observed systems; second-order cybernetics is the cybernetics of observing systems.” The latter focuses on the feedback between a controlled system and an observer.
Therefore, the key terms of second-order cybernetics are recursiveness, self-regulation, reflexion and autopoiesis. For a good survey of this direction, we refer to [116].
P. Asaro [13] believed that there exist three interpretations of cybernetics (we have actually mentioned the first two above):
1) the narrow interpretation, i.e., a science studying feedback control;
2) the wide interpretation, i.e., “cybernetics is all the things, and we live in the Century of Cybernetics”;
3) the intermediate (epistemological) interpretation, i.e., second-order cybernetics (an emphasis on the feedback between a controlled system and an observer).
However, the historical picture has turned out much more colorful and diverse, not confined to the second order; see Fig. 8.
Some authors adopt the terms “third-order cybernetics” (social autopoiesis; second-order cybernetics considering autoreflexion) and “fourth-order cybernetics” (third-order cybernetics considering the observer’s system of values), but these are conceptual and still have no generally accepted meanings (e.g., see the discussion in [31, 95, 121, 122, 140, 206, 207]).
For instance, V. Lepsky wrote: “Third-order cybernetics can be formed basing on the thesis ‘from observing systems to self-developing systems.’ In this case, control is gradually transformed into a wide spectrum of support processes for system self-development, namely, social control, stimulation, maintenance, modeling, organization, ‘assembly/disassembly’ of subjects and others.” [118, p. 7793].

Fig. 8. The ontogenesis of cybernetics: different types of cybernetics on a timeline from 1950 to 2010 (Cybernetics; second-order cybernetics; autopoiesis; homeostatics; conceptual cybernetics of third and fourth orders; neo-cybernetics; new cybernetics and post-cybernetics; control methodology; Hi-Hume Cybernetics; evergetics)

We point out other directions (see Table 1 and Chapter 2 of the book):
– homeostatics (Yu. Gorsky and his scientific school), a science studying the control of contradictions for the sake of maintaining the permanency of processes, functions, development trajectories, etc. [71];
– neo-cybernetics (B. Sokolov and R. Yusupov), an interdisciplinary science which elaborates a methodology for stating and solving analysis and synthesis problems of intelligent control processes and systems for complex objects of arbitrary nature [191, 192];
– neo-cybernetics (S. Krylov) [111];
– new cybernetics, post-cybernetics (G. Tesler), a fundamental science about the general laws and models of informational interaction and influence in processes and phenomena running in animate, inanimate and artificial nature [199]. Interestingly, K. Kolin had proposed almost the same definition for informatics 20 years before G. Tesler; see [101];
– evergetics (V. Vittikh), a value-oriented science about control processes in a society, which focuses on problem situations for a group of heterogeneous actors with different viewpoints, interests and value preferences [212]. In other words, evergetics can be defined as third-order cybernetics for interacting control subjects. According to Vittikh’s fair remark, in everyday social life control processes will be realized by a “tandem” of ordinary people and professional control experts (theoreticians): the former face concrete problem situations in their daily routine, acquire conventional knowledge (in the sense of H. Poincare) of the situation and define the directions of its control, whereas the latter create the necessary methods and means for their activity. The involvement of “ordinary” people in social control processes is an important development trend of control science;
– subject-oriented control in the noosphere, the so-called Hi-Hume Cybernetics (V. Kharitonov and A. Alekseev), a science mostly considering the subjectness and subjectivity of control [79].

Table 1. Different types of cybernetics

Type                                               | Authors                           | Period
Cybernetics                                        | N. Wiener, W. Ashby, S. Beer      | the 1948-1950's
Second-order cybernetics                           | M. Mead, G. Bateson, H. Foerster  | the 1960-1970's
Autopoiesis                                        | H. Maturana, F. Varela            | the 1970's
Homeostatics                                       | Yu. Gorsky                        | the 1980's
Conceptual cybernetics of third and fourth orders  | V. Kenny, R. Mancilla, S. Umpleby | the 1990-2010's
Neo-cybernetics                                    | B. Sokolov, R. Yusupov            | the 2000's
Neo-cybernetics                                    | S. Krylov                         | the 2000's
Third-order cybernetics                            | V. Lepsky                         | the 2000's
New cybernetics, post-cybernetics                  | G. Tesler                         | the 2000's
Control methodology                                | D. Novikov                        | the 2000's
Evergetics                                         | V. Vittikh                        | the 2010's
Subject-oriented control in noosphere (Hi-Hume Cybernetics) | V. Kharitonov, A. Alekseev | the 2010's

It is possible to introduce the notion of “fifth-order cybernetics” as fourth-order cybernetics considering the mutual reflexion of control subjects [158] making coordinated decisions, etc. Note that all the types of cybernetics in Table 1 are conceptual, i.e., absorbed by Cybernetics.

The observed variety of approaches claiming (explicitly or implicitly) to be a new mainstream in the development of classical cybernetics seems natural, as it reflects the evolution of cybernetics. With the lapse of time, some approaches will be developed further; others will stop growing. Of course, it is extremely desirable to obtain a general picture with the integration, generalization and joint positioning of all existing approaches, or most of them (see the Conclusion).

1.3. Achievements and Disillusions of Cybernetics

Cybernetics has always been assigned a wide range of assessments by experts and laymen alike (at least since the middle of the 1960’s), from “cybernetics has discredited itself against all expectations and ceased to exist by today” to “cybernetics covers all the things.”18 As ever, the truth lies in the golden mean.
Doubts about the existence of cybernetics “today,” and arguments supporting them (e.g., see [170, 191, 192]), have been stated repeatedly since the middle of the 1980’s. Here are some quotations:
– “As a scientific discipline, cybernetics still exists but its claims for
the role of some all-embracing control science disappeared.” [135];
– “We have to acknowledge that general cybernetics has failed to form a scientific discipline.” “… It is difficult to find a specialist identifying himself as a cybernetician.” [173];
– “Today the term ‘cybernetics’ is mentioned here, there and everywhere, on and off the point.” [136].
Such opinions are only partially correct. Cybernetics was born in the middle of the 1940’s as the science of “control and communication in the animal and the machine,” or even as the science of GENERAL control laws (recall the definitions of cybernetics and Cybernetics above and Fig. 9). The triumph of cybernetics in the 1950-1960’s, namely the appearance of technical, economic, biological and other types of cybernetics, their close connections with operations research and mathematical control theory, as well as the intensive application of its results in the design and refinement of technical and information systems, created the illusion of universalism and the illusion of inevitable rapid progress of cybernetics in the future. Nevertheless, in the early 1970’s the development of cybernetics slowed down; its integral flow decomposed into numerous partial subflows and was “lost in details”: the number of scientific directions19 (see Fig. 9) increased and each of them continued to develop, but general regularities went almost unidentified and unsystematized. Actually, cybernetics grew rapidly owing to its components, but Cybernetics stood still.

18 This also applies to systems theory and systems analysis; see Chapter 4.
Fig. 9. The composition and structure of cybernetics: Cybernetics (1948) at the core, surrounded by its component sciences and their years of birth, including control theory (1868), information theory and mathematical communication theory (1948), general systems theory and systems analysis (1956, 1968), operations research, optimization, data analysis and decision-making, and artificial intelligence (1956)

Concerning Fig. 9 and similar ones (see Fig. 20, Fig. 21 and Fig. 55), the author appeals to esteemed readers to acknowledge that any ideas about the correlation of sciences and their branches are “egocentric”: a scientist places his or her own (“favorite”) science “at the core.” Moreover, any scientific branch or scientific school hyperbolizes its achievements and capabilities. Such subjectivism seems natural, and a real picture can always be reconstructed by applying appropriate corrections.

19 Precisely scientific directions, i.e., sciences, groups of sciences and application domains.
Another argument: since the 1950’s, mankind has observed the “exponential” growth of technological innovations and a similar growth of scientific publications, parallel to the differentiation of sciences (N. Wiener wrote: “Since Leibniz there has perhaps been no man who has had a full command of all the intellectual activity of his day.” [221, p. 43]). An interesting paradox: over this period, the number of researchers, scientific papers, journals and conferences has been increasing, but almost without the appearance of revolutionary fundamental scientific discoveries “clear to the layman.” Fundamental science has “run ahead of” technologies, and its groundwork is now being implemented in new technologies. Yet, the intensive development of fundamental science cannot be stimulated without an explicit mass “demand” from technologies.
In the era of cumulative differentiation of sciences, cybernetics has remained a striking example of the synergetic effect, i.e., a successful attempt to integrate different sciences and to search for their common language and regularities. Unfortunately, it is one of the last examples: the modern fashionable “convergent sciences” (NBICS: nano-, bio-, informational, cognitive and humanitarian/social sciences) have still not realized themselves in this sense. Indeed, the widespread “interdisciplinarity” is rather an advertising umbrella brand or a mere “junction” of two or more sciences. Genuine interdisciplinarity must operate with the common results and regularities of several sciences.
As an epistemological digression, note that the dialectic spiral “from the partial to generalizations, from generalizations to new partial results” is characteristic of theories at any scale, i.e., from a partial (yet integral) direction of investigation20 to full-scale scientific research (see Fig. 10, imported from [148]). Wiener’s ideas about the general regularities of control and communication in systems of different nature were the result of generalizing some (of course, not all!) achievements of automatic control theory, communication theory, physiology and a series of other sciences of that time. Wiener’s cybernetics, with its key concepts of feedback (causality), homeostasis and others, spurred new results in control, informatics and other sciences.

20 For instance, an efficient solution method for a certain class of control problems becomes applicable to problems in adjacent fields (e.g., communication, production, etc.). Thereby, this method is “transferred” from control theory to cybernetics. It can then become an asset of applied mathematics, i.e., a “spanner” for experts in various fields (whenever the studied systems satisfy its initial requirements).

Fig. 10. The logical structure of a theory [148]: ascending from the concrete to the abstract, the whole set of certain results yields primary generalizations, secondary generalizations, and so on, up to the backbone element (a concept, a research approach, a system of axioms, etc.); descending from the abstract to the concrete then produces conceptual statements, principles, requirements, conditions, mechanisms, models, procedures, and so on

Thus, the “romantic” period (see the Introduction) was followed by a period of rapidly obtained results, and hence by growing expectations. Those expectations were not necessarily professional. Cybernetics became fashionable and many authors started popularizing it.21 Sometimes the number of popularizers even exceeded the number of professionals (in fairness, we emphasize that professionals did not realize all their expectations either). A. Kolmogorov was right in saying: “I do not belong to great enthusiasts of all rich literature on cybernetics published today and see numerous exaggerations (on the one part) and much oversimplification (on the other part) in it.” [104].
Perhaps such a situation is typical of the development of scientific branches and directions. There exist many examples of failed expectations originally created and maintained by dilettantes. For instance, the terminology of rather fruitful independent sciences such as nonlinear dynamics and synergetics [45, 77, 120, 175, 194] (attractors, bifurcations, etc.) is often employed by humanities scholars to construct a scientific entourage for outsiders. Fuzzy set theory, artificial neural networks, genetic algorithms and many other scientific fields have already passed through, or are now facing, a crisis due to the collapse of the corresponding overrated expectations.
Consider the following groups of subjects:
– researchers focused on cybernetics proper;
– researchers representing adjacent (component) sciences;
– popularizers of cybernetics (mass media or dilettantish “researchers” interpreting the results of others22);
– authorities (“politicians”) and potential users of applied results (“customers”) in business structures.
The failed expectations for cybernetics caused disillusionment in all these groups. Answering the question “Where are the results?”, experts in cybernetics parried: “We work as hard as possible; all promises were given by popularizers and they must bear the responsibility.” Due to their sound “jealousy” of cybernetics, the representatives of adjacent sciences replied, “Things are going well with us”23 (indeed, many “components” of cybernetics such as control theory, informatics and others were quite successful, see Fig. 9). Popularizers rarely feel pangs of conscience24 and can always note: “We are not experts, we were deluded.” With the course of time, politicians also felt a definite pessimism about cybernetics, in particular due to the attitude towards cybernetics in the USSR in the early 1950’s, the Chilean experiments of S. Beer’s team (the implementation of cybernetical ideas and approaches in real economy management) and V. Glushkov’s unrealized intentions to deploy all-embracing computer-based centralized planning in the USSR.

21 Actually, the first popularizer was N. Wiener himself. Later he mentioned that the appearance of the book [221] transformed him in a moment from a working scientist with a definite authority in his research field into a public figure. That was pleasing, but it also had negative consequences, as henceforth N. Wiener was obliged to maintain business relations with various scientific groups and take part in a movement which rapidly gained such scope that he could not even cope with it.
22 Such “researchers” exist in any science, especially in and around intensively developing ones.
23 In fact, valuable results in automatic control theory, statistical communication theory, etc. were followed by some recession (quite naturally, see Fig. 32).
24 During his speech at the 1962 IFIP Meeting, the USSR representative A. Dorodnitsyn suggested two terms for the glossary of information processing, namely “Cybernetics active” and “Cybernetics talkative.”
No guilty persons were found; something failed, and that’s it. Actually, the situation is not as bad as it seems. First, cybernetics is rather efficient as an integrative science: its components have been and will be developing for years, while a unified look and a holistic picture covering a whole group of sciences is surely needed (see Chapter 2). Reflexion with respect to disillusions and their reasons is useful in any case.
Second, for several decades cybernetics was considered a “magic lamp” throwing light on the correct structure of any subject domain and systematizing its organization (N. Moiseev noted that cybernetics defines “a thinking standard” [136]). In many cases (technical systems, numerous results in biology and economics, etc.), the hopes were justified, inducing higher expectations. Any synthetic science, including cybernetics, represents not a “lamp” but a “lens” properly focusing rays (scientific and applied results) from a “source” (the concrete sciences): a lens gives no light, but acts as a light converter.
The main problem of cybernetics as a “lens” consists in the following. Except for the founding fathers of classical cybernetics (N. Wiener, W. Ashby and S. Beer), just a few researchers have studied Cybernetics deeply and professionally, endeavoring to reveal, formulate and develop its general laws (see Chapter 3), despite the huge growth of knowledge in the adjacent sciences within the past decades. (A new round of appreciable generalizations never took place; see Fig. 10.) Moreover, the interdisciplinarity of cybernetics (multiple subjects and methods of study25) testified to its “fuzziness.” Contrariwise, Cybernetics is a more holistic science with its own subject: the general regularities of control and communication. Therefore, experts and specialists should pay attention and apply every effort to develop Cybernetics!
Concluding this section, recall the “principle of uncertainty” described in [149]: epistemologically weak sciences introduce minimal constraints (or no constraints at all) and obtain the fuzziest results. Contrariwise, epistemologically strong sciences impose many limiting conditions and involve scientific languages, but yield more precise (and well-grounded) results. However, their field of application appears rather narrow (i.e., clearly bounded by these conditions). In other words, the current level of science development is characterized by certain mutual constraints imposed on the “validity” of results and their applicability; see Fig. 11. That is, the “product” of the domains of applicability and validity of results does not exceed a constant (increasing one “multiplicand” reduces the other).
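Schematically, the author’s “product” metaphor can be written in one line; this is our notation, with A for the extent of the domain of applicability and V for the extent of the domain of validity:

```latex
% "Principle of uncertainty": applicability times validity is bounded
\[
  A \cdot V \;\le\; C ,
\]
```

where the constant C characterizes the current development level of the science.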

Fig. 11. The “principle of uncertainty”: the domain of applicability plotted against the domain of validity, with epistemologically weak sciences at one extreme and, moving toward the epistemologically strong sciences, cybernetics, decision theory, operations research and control theory

25 In this sense, interdisciplinarity is rather a negative feature.
But this regularity applies only to the current development level of the corresponding science. The presence of generalizations (the main role of Cybernetics!) extends the borders by shifting the curve to the right and upward (see Fig. 11). As a result, some progress is achieved in both domains.

2. Cybernetics, Control Philosophy and Control Methodology

Having reached a certain level of epistemological maturity, scientists perform “reflexion” by formulating general laws in the corresponding scientific fields, i.e., they create metasciences [149, 152]. On the other hand, any “mature” science becomes the subject of philosophical research. For instance, the philosophy of physics appeared at the turn of the 20th century as the result of such processes.
Research in the field of control theory, originating in the 1850’s, led to the appearance of other metasciences, i.e., cybernetics and systems analysis. Moreover, cybernetics quickly became the subject of philosophical investigations (e.g., see [20, 50, 54, 87, 97, 98, 126, 176, 207, 208]) conducted by the “fathers” of cybernetics and by professional philosophers.
The 20th century was accompanied by the rapid progress of management science [38, 131, 157] as a branch of general control theory studying practical control in organizational systems. By the beginning of the 2000’s, management science had engendered management philosophy. Books and papers entitled “Management Philosophy” and “Control Philosophy” appeared exactly at that time (for instance, see the references in [152]); as a rule, their authors were professional philosophers. Generally speaking, one may acknowledge the long-felt need for a more precise mutual positioning of philosophy and control, and of methodology and control, as well as for an analysis of the general laws and regularities of complex systems functioning and control.

2.1. Control Philosophy

Historically (and similarly to the subjects of most modern sciences), the analysis of control problems was first the prerogative of philosophy. R. Descartes used to say, “Philosophy is like a tree whose roots are metaphysics and then the trunk is physics. The branches coming out of the trunk are all the other sciences.”
R. Mirzoyan rightly noted, on the basis of historical and philosophical analysis, that the first control/management theorists were philosophers [135]. Confucius, Lao-tzu, Socrates, Plato, Aristotle, N. Machiavelli, T. Hobbes, I. Kant, G. Hegel, K. Marx, M. Weber, A. Bogdanov: this is a short list of the philosophers who laid down the foundations of modern control theory for the development and perfection of managerial practice.
Consider Fig. 12 [152], illustrating different connections between the categories of philosophy and control; both are treated in their broadest possible interpretation (philosophy includes ontology, epistemology, logic, axiology, ethics, aesthetics, etc.; control is viewed as a science and as a type of practical activity). We believe that the three shaded domains in Fig. 12 are the major ones.

PHILOSOPHY

Control
philosophy

Epistemological, Ontological, logical,


logical and other ethical and other
foundations of foundations of
managerial practice
control science Control problems
as the subject
of research in
philosophy

Management
Cybernetics philosophy

Control Managerial
science practice

SCIENCE CONTROL PRACTICE

Fig. 12. Philosophy and control

Presently, concrete control problems are no longer the subject of philosophical analysis. Philosophy (as a form of social consciousness, the theory of the general principles of being and cognition and of the human attitude to reality, the science of the universal laws of development of nature) studies GENERAL problems and regularities separated out by experts in the particular sciences.
V. Diev believed that control philosophy is “a system of generalizing philosophical judgments about the subject and methods of control, the place of control among other sciences and in the system of scientific cognition, as well as about its cognitive and social role in a modern society.” [50, p. 36].
One can define control philosophy as a branch of philosophy connected with the comprehension and interpretation of control processes and control cognition, studying the essence and role of control [152]. In this meaning, the term “control philosophy” (see the dashed-line contour in Fig. 12) has a rich internal structure and covers the epistemological research of control science and the analysis of its logical, ontological, ethical and other foundations (both for control science and for managerial practice).
Cybernetics (with a capital C) is a branch of control science studying its most general theoretical regularities. According to V. Diev, “... for many scientific disciplines, there exists a range of problems related to their foundations and traditionally referred to as the philosophy of a corresponding science. Control science follows this tradition, as well.” [50, p. 36]. The foundations of control science also include the general regularities and principles of efficient control representing the subject of Cybernetics (see Chapter 3).
In the 1970-1990’s, against the background of the first disillusions of cybernetics, the only bearers of the canonical cybernetic traditions were philosophers (!), whereas experts in control theory lost their confidence in the ample opportunities of cybernetics.
Things cannot carry on as they are. On the one hand, philosophers vitally need knowledge of the subject (actually, generalized knowledge). In this context, V. Il’in mentioned that “philosophy represents second-rank reflexion; it provides theoretical grounds to other ways of spiritual production. The empirical base of philosophy consists in specific reflections of different types of cognition; philosophy covers not the reality itself, but the treatment of reality in figurative and category-logical forms.” [87].
On the other hand, experts in control theory need “to see the wood for the trees.” Hence, one can hypothesize that Cybernetics must and will play the role of control “philosophy” (here the quotation marks are crucial!) as a branch of control theory studying its most general regularities. The emphasis here should be placed on the constructive development of control philosophy, i.e., on the formation of its content through obtaining concrete results (probably, first partial results and then general ones). Reflexion can be continued by considering the philosophy of cybernetics, and so on.
The book [152] briefly analyzed the correlation of control philosophy (as a branch of philosophy studying the general problems of control theory and practice), Cybernetics (as a branch of control science generalizing the methods and results of solving theoretical control problems) and management “philosophy” (as a branch of control science generalizing the experience of successful managerial practice); see Fig. 13.

Fig. 13. Control philosophy, Cybernetics and management “philosophy”: within control philosophy, Cybernetics corresponds to the general regularities of efficient control (with epistemological, logical and other foundations), and management “philosophy” to the general regularities of control activity (with ontological, ethical and other foundations); the two are connected through theory verification, the generalization of practical experience, etc., within control theory

2.2. Control Methodology

Methodology is the theory of the organization of activity [148, 149]. Accordingly, the subject of methodology is the organization of an activity (an activity being a purposeful human action).
Control activity is a certain type of practical activity. Control methodology is the theory of the organization of control activity, i.e., of the activity of a control subject [152]. Whenever a control system incorporates a human being, control activity becomes an activity of organizing activity. Control theory puts its emphasis on the interaction of the control subject and the controlled object (the latter can be another subject); see Fig. 14. At the same time, control methodology explores the activity of a control subject and hence has to be included in Cybernetics.

Control
methodology

Control subject

Control
theory

Controlled object

Fig. 14. Control methodology and control theory

The development of control methodology has formulated the structure of control activity (see Fig. 15) and identified the structural components of control theory [152].

Fig. 15. The structural components of control activity: for both the control subject and the controlled object, needs and motives lead to a goal, a task, a technology (content and forms, methods and means), an action and a result, under given conditions, norms and principles; the result is assessed against criteria, with corrections and self-regulation closing the loop; the subject’s control acts on the object, whose state, perturbed by external disturbances, is fed back to the subject


34
A theory is an organizational form of scientific knowledge about a
certain set of objects, representing a system of interconnected assertions
and proofs and containing methods of explanation and prediction of
phenomena and processes in a given problem domain, i.e., of all phenom-
ena and processes described by this theory. First, any scientific theory
consists of interrelated structural elements. Second, any theory includes
in its initial basis a backbone element [148].
The backbone element of control theory (for social systems, organizational systems and other interdisciplinary systems) is the category of organization26; indeed, control is the process of organizing, which leads to good organization as a property of a controlled system (see the Conclusion).
The structural components of control theory (see Fig. 16) are:
– control tasks;
– scheme of control activity;
– conditions of control;
– types of control;
– subjects of control;
– methods of control;
– forms of control;
– control means;
– control functions;
– factors having an impact on control efficiency;
– control principles;
– control mechanisms.
They are considered in detail in [152].

26 Note that “organism” and “organization” are paronyms.

Fig. 16. Components of control theory (the scheme of control activity, subjects of control, methods and types of control, control tasks and control mechanisms, forms of control, control principles, conditions of control, control functions, and control means)

The foundations of control methodology, the characteristics of control activity, its logical and temporal structures, as well as the structure of control theory (as a set of stable relations among its components) are discussed in [152, 157, 158].

3. Laws, Regularities and Principles of Control

Among the important subjects studied by Cybernetics are the laws, regularities and principles of complex systems functioning and control.
Laws, regularities and principles. According to Merriam-Webster
Dictionary, a principle is:
1. a basic statement of a certain theory, science, etc.; a guidance idea
or a basic rule of an activity;
2. an internal belief or view of something, which defines norms of
behavior;
3. a key feature in the structure of a mechanism, a device or an in-
stallation.
Let us adhere to the first interpretation of the term “principle”; thus, control principles will be understood as the rules of control activity. We will also address its third interpretation, i.e., a key feature in system structure.
A law is a permanent cause-and-effect relation between phenomena or processes; in other words, it is a necessary, essential, stable and repetitive relation among phenomena.
In contrast to laws, regularities are not compulsory; principles can be
treated as strict imperatives or desirable properties.
There exists a hierarchy of laws and principles (see Fig. 17): philo-
sophical laws are most general; the next level is occupied by more “par-
tial” logical and other general scientific laws and principles (including the
ones of cognition and practical activity, see [148, 149]); and finally, laws,
regularities and principles of specific sciences appear least general (on
the one hand, control theory as a science possesses its own laws and
principles; on the other hand, it employs laws and principles of other
sciences relating to a controlled object).
Fig. 17. The hierarchy of laws, regularities and principles (philosophical laws; logical and other general scientific laws and principles; laws, regularities and principles of specific sciences, the latter including the laws and principles of control, general systems regularities and principles, and general principles of functioning in biological systems)

What are the general and accepted control laws? Unfortunately, today one would hardly provide an exhaustive answer.
First, we should distinguish between two well-established interpretations of the term “control law.” The general interpretation has already been given. The narrow interpretation states that a control law is a relationship (or a class of relationships) between control actions and the available information on the state of a controlled system (e.g., the law of proportional control, proportional-integral control, etc.; see the sketch below). We are concerned with the general interpretation.
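To make the narrow interpretation concrete, here is a minimal sketch of such a control law: a discrete-time proportional-integral (PI) law mapping the observed error into a control action. The plant, gains and horizon are illustrative assumptions, not taken from the literature cited here.

```python
# A control law in the narrow sense: u = Kp*e + Ki*integral(e),
# a fixed relationship between the control action and the observed error.

def make_pi_controller(kp: float, ki: float, dt: float):
    integral = 0.0
    def control(error: float) -> float:
        nonlocal integral
        integral += error * dt           # accumulate the error (the I-term)
        return kp * error + ki * integral
    return control

pi = make_pi_controller(kp=2.0, ki=0.5, dt=0.1)
state, setpoint = 0.0, 1.0
for _ in range(200):                     # crude first-order plant: dx/dt = u
    u = pi(setpoint - state)
    state += u * 0.1
print(f"state after 20 s: {state:.3f}")  # close to the setpoint 1.0
```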
Second, it would seem that many laws of modern control science are not control laws in the above general sense. For instance, feedback control is widespread in control theory but is not universal: indeed, there exist programmed control and other types of control involving no direct information on the current state of a controlled system.
Third, the “control laws” mentioned in scientific literature (such as the presence of a goal, the presence of feedback, etc.) are rather control principles or control regularities than control laws27 (see below). Let us consider the well-known control laws.
GENERAL CONTROL LAWS (REGULARITIES):
1) The law of goal-directedness: any control has a goal;
2) The law of requisite variety (sometimes called the adequacy principle, stated by W. Ashby [14]): the variety28 of a controller must be adequate to the variety of a controlled object29 (a numerical sketch of this bound is given after the list). In [19] variety was treated as complexity, and the law of requisite variety was reformulated as the law of requisite complexity. Ashby himself believed that “every law of nature is a constraint” [14];
3) The law of emergence (synergy), the main law of systems theory. It claims that “the whole is greater than the sum of its parts” (Aristotle); in other words, the properties of a system are richer than the “sum” of the properties of its elements. W. Ashby believed that the larger a system is, and the bigger the difference between the sizes of the whole and its parts, the higher the probability that the properties of the whole differ appreciably from the properties of its parts;
4) The law of external complementarity, suggested by S. Beer (the so-called third principle of cybernetics): any control system needs a black box, i.e., certain reserves for compensating the disregarded impact of the external and internal environments (actually, this idea underlies robust control);
5) The law (or principle) of feedback (cause-and-effect relations): see below;
6) The law of optimality: a control action must be the “best” in the sense of goal attainment under the existing constraints.30 L. Euler wrote: “Since the fabric of the universe is most perfect and the work of a most wise Creator, nothing at all takes place in the universe in which some rule of maximum or minimum does not appear.” On the other hand, Yu. Germeier thought that, by observing a certain behavior of a system, one can a posteriori construct a functional optimized by this behavior [64]. The law of optimality does not imply that all real systems are optimal, i.e., have the maximum efficiency; rather, it serves as a norm for the designers of artificial/control systems.

27 The following opinion is also cultivated in the scientific community: being a language, mathematics has no inherent laws (in contrast, e.g., to natural sciences); similarly, control theory, as a general descriptive language of control processes, operates no inherent laws until a class of controlled objects is specified.
28 A quantitative characteristic of a system, determined as the number of its admissible states or the logarithm of this quantity.
29 The law of requisite variety should be given a more precise definition: the variety of a controller must be adequate to the variety of a CONTROLLED SUBJECT reflecting the goal aspects of a controlled object. Indeed, one would hardly imagine a “controller” with greater variety than a human being.
The above-mentioned laws are often accompanied by the principles of causality, decomposition (analysis), aggregation (synthesis), hierarchy, homeostasis, consistency (prior to control design, consider the problems of observability, identifiability, controllability, including stability), adaptability, and others.
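Returning to the law of requisite variety: with variety measured as in footnote 28 (the logarithm of the number of admissible states), Ashby’s bound admits a short numerical illustration. The state counts below are hypothetical; the sketch merely computes the residual outcome variety that no controller of the given capacity can eliminate.

```python
import math

def variety(num_states: int) -> float:
    """Variety of a system: log2 of its number of admissible states."""
    return math.log2(num_states)

# Hypothetical capacities: a disturbance with 64 states, a controller with 8.
disturbance_states, controller_states = 64, 8

# Ashby-type bound: outcome variety cannot be pushed below
# V(disturbance) - V(controller); only a controller at least as "various"
# as the disturbance can hold the outcome fixed.
residual = max(0.0, variety(disturbance_states) - variety(controller_states))
print(f"disturbance variety: {variety(disturbance_states):.0f} bits")
print(f"controller variety:  {variety(controller_states):.0f} bits")
print(f"best achievable outcome variety: {residual:.0f} bits")
```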
Some authors (e.g., see [80, 174] and a survey in [152]) have proposed their own laws, regularities and principles of cybernetics, control and development. First, many principles stated in the literature are disputable, as they represent examples of an unadapted, groundless transfer and/or “generalization” of results. For instance, V. Pareto established empirically that 20% of the population own 80% of the capital in the world [164]. Nowadays, the Pareto principle (also known as “the 80/20 principle” or “the beer law”31) is formulated as a universal natural law without proper substantiation:
– 20% of efforts yield 80% of a result;
– 80% of a company’s stocking cost corresponds to 20% of its product types;
– 80% of a company’s sales income is made by 20% of its customers;
– 80% of problems are created by 20% of causes;
– 20% of working time is spent on 80% of work;
– 80% of work is performed by 20% of employees, and so on.

30 Optimization consists in seeking the best alternatives among a set of admissible ones under given constraints (optimal alternatives). In this phrase, each word is important. “Best” means the presence of a criterion (or several criteria) and a way (or several ways) to compare alternatives. It is crucial to take into account the existing conditions and constraints: their variation may lead to a situation when other alternatives appear best under the same criterion (criteria). The notion of optimality has received a rigorous and exact representation in different mathematical sciences, has become firmly entrenched in the practical design and exploitation of technical systems, and has played a prominent role in the formation of modern systems ideas. Moreover, this notion is widespread in administrative and public practice and is known to almost everyone. Obviously, the aspiration for increasing the efficiency of any purposeful activity has found a clear and intelligible expression in the idea of optimization.
31 20% of people drink 80% of beer.
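Returning to the 80/20 claims above: rather than postulating such ratios, one can at least test them on data. A minimal sketch (with a synthetic heavy-tailed sample standing in for real data) computes the share of the total held by the top 20% of items:

```python
import random

random.seed(1)
# Synthetic heavy-tailed data; shape alpha ≈ 1.16 is the Pareto distribution
# for which the "80/20" split holds in theory. Real data would replace this.
values = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)
top_share = sum(values[: len(values) // 5]) / sum(values)
print(f"top 20% of items hold {100 * top_share:.0f}% of the total")
```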
Another example concerns the principle of harmony. Using the pro-
portions established by L. da Vinci (the golden section) and the well-
known properties of the Fibonacci sequence, one postulates the corre-
sponding ratio of other indicators (e.g., the number of employees, wages,
budget articles, and so on).
Such “principles” and their apologists can be treated with a smile, as neither has any relation to science proper.
Second, none of the researchers (!) has stated any enumeration basis for the principles and laws they suggest. This fact testifies to their possible non-universality, as well as to the possible incompleteness of the enumeration, its weak soundness, possible internal inconsistency, etc.
And third, the list of laws, regularities and principles should be ex-
tended and systematized.
As an illustration, consider some principles and laws of control and
functioning of complex systems proposed by different authors.
PRINCIPLES OF COMPLEX SYSTEMS FUNCTIONING [86,
p. 60–67]:
1) The principle of reactions: responding to an external influence, a system reinforces the processes that compensate for it (the Le Chatelier–Brown principle, imported from physics and chemistry).
2) The principle of system cohesion–a system's form is maintained
by a balance, static or dynamic, between cohesive and dispersive influ-
ences. The form of an interacting set of systems is similarly maintained.
3) The principle of adaptation–for continued system cohesion, the
mean rate of system adaptation must equal or exceed the mean rate of
changes of environment (the response times obey the reverse rule).
4) The principle of connected variety–interacting systems stability
increases with variety, and with the degree of connectivity of that variety
within the environment.
5) The principle of limited variety–variety in interacting systems is
limited by the available space and the minimum degree of differentiation.
6) The principle of preferred pattern–the probability that interacting
systems will adopt locally-stable configurations increases both with the
variety of systems and with their connectivity.

7) The principle of cyclic progression: interconnected systems driven by an external energy source tend to a cyclic progression in which system variety is generated, dominance emerges to suppress the variety, the dominant mode decays or collapses, and survivors emerge to regenerate variety.
According to [156], most well-known principles and laws of func-
tioning of complex (in the first place, biological) systems are exactly
regularities or hypotheses. To explain this statement, consider
PRINCIPLES OF BIOLOGICAL SYSTEMS32 FUNCTIONING
which are also the subject of Cybernetics (see surveys in [10, 156]).
1. The principle of least action. A dynamic system moves from an
initial configuration to a final configuration in a specified time along a
trajectory which minimizes the action (a functional of the trajectory).
Actually, this principle coincides with the law of optimality pioneered in
physics in the 1790–1800’s.
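For reference, the standard variational statement of this principle, as used in mechanics (conventional notation, not taken from the sources cited here): the action functional is stationary on the realized trajectory, which yields the Euler–Lagrange equation.

```latex
S[q] = \int_{t_0}^{t_1} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
\quad \Longrightarrow \quad
\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```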
2. The principle of the permanent inequilibrium (E. Bauer, 1935).
The living and only the living systems are never in an equilibrium, and,
on the debit of their free energy, they continuously invest work against
the realization of the equilibrium which should occur within the given
outer conditions on the basis of the physical and chemical laws [21, p. 43]
(see the principle of reactions).
3. The principle of simplest structure (N. Rashevsky, 1943). A
concrete structure of a living system which exists in nature is the simplest
among all structures being able to perform a given function or a set of
functions [178].
4. The principle of feedback (see also the principle of functional systems by P. Anokhin [9]). In this context, we have to mention his principle of anticipatory reflection of reality: “One universal regularity was formed during the adaptation of the organisms to the environment, which was further developed during the whole period of evolution of living organisms: the highest order of speed for the reflection of the low speed deployment of the events of the real world.” [7]. A complex adaptive system responds not to an external influence as a whole, but “to the first chain of a repeated series of external influences.” [7].
Practical realizations of the principle of feedback have a long history: from ancient mechanisms in Egypt (Ctesibius’ water clock, the 3rd century B.C.) to perhaps the first deliberate usage of feedback in Drebbel’s thermostat (1572–1633), Polzunov’s water-level float regulator (1765), Watt’s steam engine governor (1781), Jacquard’s loom with program control (1804–1808), etc.
The pioneering fundamental works on mathematical control theory were published by J. Maxwell [127] and I. Vyshnegradsky [216]33. Actually, the first general systematic analysis of feedback was performed by P. Anokhin (1935) [8], later jointly by A. Rosenblueth, N. Wiener and J. Bigelow (1943) [181] and, in the final statement, by N. Wiener (1948) [221]. For justice’ sake, note that feedback had already been studied and used in electrical engineering in the 1920’s.

32 Interestingly, the overwhelming majority of these principles were formulated in the 1940-1960’s.
33 The first course of lectures entitled “Theory of direct-action regulators,” by D. Chizhov, appeared in Russia in 1838.
5. The principle of least interaction (I. Gelfand and M. Tseitlin, 1962 [60]). Nerve centers aspire to achieve a situation when afference (the informational and control flows and signals transmitted in the central nervous system) is minimal. In other words, a system functions rationally in some external environment if it seeks to minimize its interaction with the environment [202].
6. The principle of the brain’s stochastic organization (A. Kogan, 1964 [100]). No single neuron has an independent function, i.e., none is a priori responsible for the solution of a concrete task; all tasks are distributed randomly.
7. The principle of hierarchical organization (particularly, infor-
mation processing by brains), see the works of N. Amosov,
N. Bernshtein, G. Walter, and W. Ashby [5, 15, 24, 218]. Achieving a
whole goal is equivalent to achieving the set of its subgoals.
8. The principle of adequacy (W. Ashby, 1956 [14],
Yu. Antomonov [10] and others). For effective control the complexity of
the controller (dynamics of its states) must be adequate to the complexity
(rate of change) of controlled processes. In other words, the “capacity” of
the controller defines the absolute limit of control regardless of the capa-
bilities of the controlled system (see the law of requisite variety above).
9. The principle of probabilistic prediction in action design (N. Bernshtein, 1966) [24]. The world is reflected in two models, viz., the model of the desired future (a probabilistic prediction based on previously accumulated experience) and the model of the past (which explicitly reflects the observed reality).
10. The principle of necessary degrees of freedom selection (N. Bernshtein, 1966). Initially, learning involves more degrees of freedom of a learned system than is actually required for learning [24]. During learning, the number of “involved” variables decreases as inessential variables are “eliminated” (compare this principle with the phenomena of generalization and concentration of nervous processes: I. Pavlov, A. Ukhtomskii, P. Simonov, and others).
11. The principle of determinism destruction (H. Foerster,
Yu. Antomonov [10, 55] and others, 1966). To achieve a qualitatively
new state and to increase the level of system organization, it is necessary
to destroy (rearrange) the existing deterministic structure of connections
among system elements, which was formed by the previous experience.
12. The principle of requisite variety (W. Ashby, 1956). This prin-
ciple (see above) is close to the principle of adequacy [14].
13. The principle of natural selection (S. Dancoff, 1953). In systems becoming efficient due to natural selection, the variety of mechanisms and the capacity of information transmission channels do not appreciably exceed the minimum level required [48].
14. The principle of deterministic representation (J. Kozielecki, 1979, and others). Modeling of decision-making by an individual assumes that his or her beliefs about reality contain no random variables or uncertain factors (the consequences of decisions depend on well-defined rules) [109].
15. The principle of complementarity (inconsistency) (N. Bohr,
1927; L. Zadeh, 1973). The high accuracy of description of a certain
system is inconsistent with its high complexity [228]. Sometimes, this
principle is given a simpler interpretation: the real complexity of a system
and the accuracy of its description are roughly inversely proportional.
16. The principle of monotonicity (“keep the achieved,” W. Ashby,
1952). In learning, self-organization, adaptation, etc., a system must
“keep” an achieved (current) positive result (equilibrium, goal of learn-
ing, etc.) [14, 15].
17. The principle of natural technologies in biological systems (A. Ugolev, 1967 [205]) and the related principle of block structure: physiological functions and their evolution are based on combinations of universal functional blocks implementing different elementary functions and operations.
At first glance, the discussed principles of functioning of biological systems can be formally divided into natural-scientific ones (e.g., principles no. 1, 2, 5, 8, and 15), empirical ones (e.g., principles no. 4, 6, 10, 11, 14, 16, and 17) and intuitive ones (principles no. 3, 7, 9, 12, and 13).
Natural-scientific principles (“laws”) reflect the general regularities, constraints and capabilities of biological systems imposed by natural laws. As a rule, empirical principles are formulated via the analysis of experimental data, the results of experiments and observations, thereby having a more local character than natural-scientific ones. And finally, intuitive laws and principles (ideally, not contradicting the natural-scientific ones and being consistent with the empirical ones) appear least formal and universal, as they proceed from intuitive understanding and common sense.
Yet, a detailed consideration shows that all the “natural-scientific” principles mentioned above are rather empirical and/or intuitive (not formally justified). For instance, the principle of least action (seemingly, a classical physical law) is formulated for mechanical systems (there exist analogs in optics and other branches of physics). Its unadapted application to biological and other systems is somewhat incorrect and only partially substantiated. In other words, the claim that biological systems obey the principle of least action is merely a hypothesis made by researchers: today, in many cases it possesses no well-defined grounds.
Therefore, all the well-known and accepted principles (and laws) of biological systems functioning agree with one of the following standard statements. A regularity: “if a system has a (concrete) internal structure, then it demonstrates an (appropriate) behavior”; or a hypothesis: “if a system demonstrates a (concrete) behavior, then it most likely has an (appropriate) internal structure.” Here the words “most likely” are essential: statements of the first type establish sufficient conditions for the realization of an observed behavior and can be (partially or completely) verified in experiments; statements of the second type act as hypotheses, i.e., “necessary” conditions (in most cases, postulated without rigorous argumentation and bearing an explanatory function) which are imposed on the structure and properties of a system on the basis of its observation.
Particular laws and principles. We emphasize that different branches of control theory formulate separate laws and principles valid under corresponding assumptions. Here are some examples.
The book [59] presented several laws of cybernetical physics:
– The value of any controlled invariant of a free system can be changed by an arbitrary quantity via an arbitrarily small feedback;
– For a controlled Lagrangian or Hamiltonian system with a small dissipation rate ρ, the energy achievable by a control action of level γ is of the order of (γ/ρ)²;
– Each controllable chaotic trajectory can be transformed into a periodic one using an arbitrarily small control action.
The book [157] introduced several principles of control in organizations:
– The principle of agents’ game decomposition, stating that a Prin-
cipal applies controls implementing a dominant strategy equilibrium of
agents’ game;
– The principle of functioning periods’ decomposition, stating that a Principal applies controls making agents’ decisions independent of the game history;
– The principle of trust (the fair play principle [36, 39] and the reve-
lation principle [141] as its analog), stating that an agent trusts infor-
mation reported by a Principal, whereas the latter makes decisions assum-
ing the truth of information reported by the former;
– The principle of sufficient reflexion, stating that the reflexion depth
of an agent is defined by its awareness.
Obviously, the above and similar laws and principles represent fruit-
ful and general results derived in separate branches of control theory, but
have no universal character: they are inapplicable or selectively applica-
ble in “adjacent” branches.
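To make the first of these principles concrete: a dominant strategy is a best response to every possible choice of the opponents, so controls inducing dominant strategies decompose the agents’ game into independent single-agent problems. Below is a minimal sketch of checking a finite two-agent game for dominant strategies; the payoff tables are invented for illustration and are not taken from [157].

```python
# Payoffs keyed by (strategy of agent 1, strategy of agent 2); invented numbers.
payoff_1 = {("a", "x"): 4, ("a", "y"): 2, ("b", "x"): 3, ("b", "y"): 1}
payoff_2 = {("a", "x"): 1, ("a", "y"): 3, ("b", "x"): 0, ("b", "y"): 2}
strategies_1, strategies_2 = ["a", "b"], ["x", "y"]

def dominant(my_strategies, opp_strategies, payoff, i_am_first):
    """Return a dominant strategy if one exists: best against every opponent move."""
    def value(mine, theirs):
        return payoff[(mine, theirs)] if i_am_first else payoff[(theirs, mine)]
    for s in my_strategies:
        if all(value(s, o) >= value(t, o)
               for o in opp_strategies for t in my_strategies):
            return s
    return None

# If every agent has a dominant strategy, their profile is the equilibrium
# implemented under the game decomposition principle.
print(dominant(strategies_1, strategies_2, payoff_1, True))   # -> 'a'
print(dominant(strategies_2, strategies_1, payoff_2, False))  # -> 'y'
```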
CONTROL PRINCIPLES34 [152].
Principle 1 (the principle of hierarchy). Generally, a control system
has a hierarchical structure. It must agree with the functional structure of
a controlled system and not contradict the hierarchy of (horizontally or
vertically) adjacent systems. Tasks and resources supporting the activity
of a controlled system must be decomposed according to its structure.
Principle 2 (the principle of unification). Controlled systems and
control systems of all levels must be described and studied using common
principles (this applies both to the parameters of their models and the
efficiency criteria of their functioning). However, such principles must
not eliminate the necessity of considering the specifics of a concrete
system. Most real control situations can be reduced to a set of the so-
called typical situations, where the corresponding typical decisions ap-
pear optimal.
On the other hand, control inevitably causes specialization (restriction of variety) of control subjects and controlled subjects.
Principle 3 (the principle of purposefulness). Any impact of a con-
trol system on a controlled system must be purposeful.
Principle 4 (the principle of openness). Operation of a control sys-
tem must be open to information, innovations, etc.
34 Of course, ideally principles should not be stated as requirements to control systems (“it must be that…,” “it is necessary that…,” and so on), which can be satisfied or not satisfied. Instead, the general claim should be that, whenever a certain principle fails, a control system becomes unable to work properly. Unfortunately, such “hard” principles do not exist (except, perhaps, the feasibility of control).
Principle 5 (the principle of efficiency). A control system must im-
plement the most efficient control actions from the set of feasible control
actions (also see the principle of extremization).
Principle 6 (the principle of responsibility). A control system ap-
pears responsible for decisions made and the efficiency of controlled
system operation.
Principle 7 (the principle of non-interference). A Principal of any level interferes in a process iff its direct subordinates are unable to implement the complex of necessary functions (at present and/or based on a forecast).
Principle 8 (the principle of social and state control, participation).
Control of a social system must aim at the maximal involvement of all
interested subjects (society, bodies of state power, individual and artifi-
cial persons) in the development of a controlled system and its operation.
Principle 9 (the principle of development). A control action may consist in modifying the control system proper (when induced from within, such modification can be treated as self-development). The matter also concerns the development of a controlled system.
Principle 10 (the principle of completeness and prediction). Under a
given range of external conditions, the set of control actions must ensure
posed goals (the completeness requirement) in an optimal and/or feasible
way. This must be done taking into account a possible response of a
controlled system to certain control actions in predicted external condi-
tions.
Principle 11 (the principle of regulation and resource provision).
Control activity must be regulated (standardized) and correspond to
constraints set by a metasystem (a system possessing a higher hierar-
chical level). Any management decision or control action must be feasi-
ble (also, in the sense of provision with necessary resources).
Principle 12 (the principle of feedback). Efficient control generally
requires information on the state of a controlled system and on the condi-
tions of its functioning. Moreover, implementation of a control action and
corresponding consequences must be monitored by a control subject.
Principle 13 (the principle of adequacy). A control system (its struc-
ture, complexity, functions) must be adequate to a controlled system (to
its structure, complexity, functions, respectively). Problems to-be-solved
by a controlled system must be adequate to its capabilities.
Principle 14 (the principle of well-timed control). This principle
states that, in real-time control, information required for decision-making
must be supplied at the right time. Moreover, management decisions
(control actions) must be made and implemented (chosen and generated,
respectively) quickly enough according to any changes in a controlled
46
system and external conditions of its functioning. In other words, the
characteristic time of management decisions or control actions must not
exceed the characteristic time of changes in a controlled system (i.e., a
control system must be adequate to controlled processes in the sense of
their rate of change).
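A small numerical illustration of this principle (the dynamics and numbers are invented): the same proportional controller stabilizes an unstable plant when its update period is short relative to the plant’s characteristic time, and fails when decisions come too late.

```python
def simulate(update_period: float, horizon: float = 20.0) -> float:
    """Euler simulation of an unstable plant dx/dt = x + u whose control
    action u = -3x is recomputed only once per update_period."""
    state, u, t, dt, next_update = 1.0, 0.0, 0.0, 0.001, 0.0
    while t < horizon:
        if t >= next_update:
            u = -3.0 * state              # a decision made at discrete instants
            next_update += update_period
        state += (state + u) * dt
        t += dt
    return abs(state)

print(simulate(update_period=0.1))   # well-timed: the state decays towards 0
print(simulate(update_period=1.5))   # too slow: the closed loop diverges
```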
Principle 15 (principle of predictive reflection). A complex adaptive
system predicts feasible changes in essential external parameters. Conse-
quently, when generating control actions, one should predict and antici-
pate such changes.
Principle 16 (the principle of adaptivity). The principle of predictive
reflection underlines the necessity of predicting the states of a controlled
system and corresponding actions of a Principal. In contrast, the principle
of adaptivity states that (1) one must consider all available information on
the history of controlled system functioning and (2) once made decisions
or chosen control actions (and the corresponding principles of decision-
making) must be regularly revised (see the principle of well-timed con-
trol) following any changes in the states of a controlled system and in the
conditions of its functioning.
Principle 17 (the principle of rational decentralization). This prin-
ciple claims that, in any complex multi-level system, there exists a ration-
al decentralization level for control, authorities, responsibility, awareness,
resources, etc. Rational decentralization implies adequate decomposition
and aggregation of goals, problems, functions, resources, and so on.
In [154] it was shown that multilevel hierarchical systems gain new
properties (in comparison with two-level ones) mainly due to the follow-
ing factors:
– the “aggregative” factor, consisting in aggregation (“convolu-
tion,” “compression,” and so on) of information about system elements,
subsystems, an environment, etc. as the level of hierarchy grows;
– the “economic” factor, consisting in variation of financial, materi-
al and other resources of a system under any changes in the composition
of its components;
– the “uncertainty” factor, consisting in variation of the awareness
of system elements about the essential (internal and external) parameters
of their functioning;
– the “organizational” factor, consisting in power sharing, i.e., the feasibility of some system elements establishing the “rules of play” for the others;
– the “informational” factor, consisting in variation of informational
load on system elements.

“In fact, any complex system, whether it has arisen naturally or been
created by human beings, can be considered organized only if it is based
on some kind of hierarchy or interweaving of several hierarchies. At least
we do not yet know any organized systems that are arranged differently.”
[203, p. 37].
Principle 18 (the principle of democratic control, also known as the
principle of anonymity). This principle requires equal conditions and
opportunities for all participants of a controlled system (without a priori
discrimination in informational, material, financial, educational and other
resources).
Principle 19 (the principle of coordination). This principle declares
that, under existing institutional constraints, control actions must be
maximally coordinated with the interests and preferences of controlled
subjects.
Principle 20 (the principle of ethics, the principle of humanism) implies that, in management and control, the consideration of existing ethical norms (in a society or an organization) has priority over other criteria.
Note that the above control principles are applicable to systems of almost any nature (probably, except the principle of social and state control and the principle of coordination, which make no sense in the control of technical systems).
Possible classification bases for the listed control principles are the
relations between objects (a controlled system, a control system, an
external environment–see Fig. 18) or the temporal relations (past, present,
future–see Fig. 19).

Fig. 18. Control principles: a classification based on the relations between objects (the external environment, the control system and the controlled system); the figure groups the principles of (2) unification, (4) openness, (8) participation and (16) adaptivity; (1) hierarchy, (9) development, (10) completeness and prediction, (11) regulation and resource provision, (13) adequacy, (14) well-timed control and (17) rational decentralization; (3) purposefulness, (5) efficiency, (6) responsibility, (7) non-interference, (12) feedback, (15) predictive reflexion, (18) anonymity and (19) coordination

Fig. 19. Control principles: a classification based on the temporal relations (past: the principle of adaptivity; present: the principles of adequacy and well-timed control; future: the principles of prediction and predictive reflection)

Therefore, the general laws and principles of control are the subject
of Cybernetics. Their list is far from final canonization, and its supple-
mentation and systematization represent a major task of Cybernetics!
4. Systems Theory and Systems Analysis.
Systems Engineering

Logically and historically, the content of cybernetics is indissolubly connected with the category of “system” (see Appendix I). Here the key role belongs to two terms: systems approach and systems analysis.
From the historical perspective, systems analysis appeared within the framework of general systems theory (GST), founded by the biologist L. Bertalanffy, who proposed the concept of an open system in the 1930’s [27]. The first comprehensive publications on GST were [25, 26]; see also [30, 163, 177]. Interestingly, the term “systems analysis” originated in RAND Corporation reports dating back to 1948 (the first book was [92]).
The later development of systems analysis in the USSR (Russia) and
other countries was different. First of all, systems analysis was assigned
nonidentical interpretations. Our discussion begins with the traditions of
the Russian scientific schools.
SYSTEMS APPROACH is a direction in the methodology of scien-
tific cognition and social practice, which studies objects as systems, i.e.,
an integral35 set of elements in the aggregate of their relations and con-
nections.36 Systems approach facilitates adequate problem formulation in
concrete sciences and gives efficient strategies of their study.
Systems approach is a general way of activity organization, which
embraces any type of activity, reveals regularities and interconnections
for their efficient usage [148].
SYSTEMS ANALYSIS (“a practical methodology of problem solv-
ing”) is a set of methods oriented towards analysis of complex systems
(technical, economic, ecological, educational and other ones).
As a rule, systems studies result in a choice of a well-defined alterna-
tive (a development program of an organization or a region, design pa-
rameters, etc.). Systems approach is valuable, since consideration of
systems analysis categories underlies general logical and sequential
solution of control and decision-making problems. The efficiency of
problem solving using systems analysis depends on the structure of
problems [148].
Being remarkable for its interdisciplinary status, systems analysis considers, e.g., an activity as a complex system aiming at the elaboration, substantiation and implementation of solutions to complex problems, including political, social, economic, technical and other ones [166].

35 Integrity and commitment to a common goal form a backbone factor.
36 An aggregate of stable connections among system elements, ensuring its integrity and self-identity, is called its structure.
To solve well-defined problems (i.e., the ones admitting an explicit quantitative description and strong formalization), systems analysis employs optimization and operations research methods: a researcher constructs an adequate mathematical model and seeks optimal purposeful actions (control) within the model (a minimal example is sketched below). To solve ill-defined problems, systems analysis operates different techniques including typical stages (see Table 2 for a series of common approaches to systems analysis and strategic analysis of problem solving). Actually, systems analysis suggests universal methods of problem solving applicable to a wide range of fields: organizational control, economics, military science, engineering, and others.
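As a toy instance of the well-defined case, here is a tiny linear program solved with SciPy (assuming SciPy is available; the production-planning numbers are a textbook-style invention):

```python
from scipy.optimize import linprog

# Maximize profit 3*x1 + 5*x2 under capacity constraints; linprog minimizes,
# so the objective is negated. All coefficients are illustrative.
result = linprog(
    c=[-3.0, -5.0],
    A_ub=[[1.0, 0.0],    # x1 <= 4
          [0.0, 2.0],    # 2*x2 <= 12
          [3.0, 2.0]],   # 3*x1 + 2*x2 <= 18
    b_ub=[4.0, 12.0, 18.0],
    bounds=[(0, None), (0, None)],
)
print(result.x)      # optimal plan, here [2. 6.]
print(-result.fun)   # optimal profit, here 36.0
```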

Table 2. Systems analysis and strategic analysis of problem solving (see [73])

E. Golubkov: 1. Problem statement. 2. Examination. 3. Analysis. 4. Preliminary judgment. 5. Confirmation. 6. Final judgment. 7. Implementation of the chosen decision.
P. Drucker: 1. Purpose and expected results. 2. Key elements of process design: time, resources, budget, major steps. 3. Roles and responsibilities of the self-assessment team. 4. Elements essential to success: utilizing an experienced facilitator; engaging dispersed leadership; encouraging constructive dissent; using data to inform dialogue.
D. Novikov: 1. Monitoring and analysis of the actual state. 2. Forecasting of evolution. 3. Goal-setting. 4. Choosing the technology of activity. 5. Planning and resources allocation. 6. Motivation. 7. Control and operative management. 8. Reflexion, analysis and improvement of activity.
S. Optner: 1. Symptoms identification. 2. Problem urgency estimation. 3. Goal-setting. 4. Definition of system structure and its defects. 5. Capabilities assessment. 6. Alternatives search. 7. Alternatives assessment. 8. Decision elaboration. 9. Decision acceptance. 10. Decision procedure initiation. 11. Decision implementation. 12. Assessment of the implemented decision and its consequences.
N. Fedorenko: 1. Problem formulation. 2. Definition of goals. 3. Data acquisition. 4. Elaboration of the maximal number of alternatives. 5. Selection of alternatives. 6. Modeling by equations, programs or scenarios. 7. Costs estimation. 8. Sensitivity tests (parametric analysis).
Yu. Chernyak: 1. Problem analysis. 2. Definition of the system. 3. Structural analysis. 4. Formation of the general goal and criterion. 5. Goal decomposition, identification of demands in resources and processes. 6. Identification of resources and processes. 7. Forecasting and analysis of future conditions. 8. Assessment of goals and means. 9. Selection of control alternatives. 10. Diagnosis of the existing system. 11. Elaboration of a complex development program. 12. Design of an organization for goals’ achievement.
S. Young: 1. Goal-setting for the organization. 2. Problem identification. 3. Diagnosing. 4. Decision search. 5. Assessment and choice of alternatives. 6. Decision negotiation. 7. Decision approval. 8. Preparation for decision implementation. 9. Decision application control. 10. Efficiency verification.
Therefore, in the USSR systems analysis was considered side by side with systems theory (and later almost “absorbed” the latter) as a set of general principles for examining any systems (the systems approach). Similarly to cybernetics, systems analysis (being an integrative science) admits the “umbrella” definition as a union of different component sciences under the auspices of “systemacy”: artificial intelligence, operations research,37 decision theory, systems engineering and others, see Fig. 20. According to this viewpoint, systems analysis has almost no results of its own.

Fig. 20. The composition and structure of systems analysis (under the “umbrella” of systems analysis: artificial intelligence, information technology, data analysis and decision-making, general systems theory, operations research, systems engineering, optimization, and others)

This result has definite causes: historically, systems analysis appeared via the development of operations research (see the first book [139], the classical textbook [217],38 the modern textbooks [82, 198]) and systems engineering (see the first book [70]). In the course of time, operations research transformed into management science39 with basic applications to the control of organizational and production systems [17].

37 Systems analysis and operations research are correlated as strategy and tactics, see [92, p. 1].
38 The classical range of operations research includes choice problems, multicriteria decision-making, linear, nonlinear and dynamic programming, Markov processes, queuing theory, game-theoretic methods in decision-making, networked planning and reliability theory.
Nowadays, many Russian scientists (e.g., [110, 136, 182, 197]) still
understand systems analysis as an aggregate of methods of optimization,
operations research, decision-making, mathematical statistics and others,
in addition to the concept of systemacy proper. In this context, we refer to
the classical textbook [166]. For an interested reader, the publications
[197, 213] survey the history of systems analysis development in the
USSR and Russia.
The second interpretation of systems analysis (by analogy with Cy-
bernetics, Systems analysis with capital S–compare Fig. 9 and Fig. 20)
covers the general laws, regularities, principles, etc. of functioning and
exploration of different-nature systems. Here the main body of scientific
results is the philosophical and conceptual aspects of systems analysis
and general systems theory, see [28, 46, 184, 204].
Among the Soviet and Russian scientific schools focused on Systems analysis, we emphasize two fruitful theoretical and applied research groups, viz., the methodological school of G. Schedrovitsky [186] and the followers of S. Nikanorov, “the school of conceptual analysis and design of organizational control systems” [146]. Both schools operate with the categories of system, control, organization and methodology, and seek to analyze and synthesize the most general solution methods for a wide range of problems. In other words, they are inseparably linked with Cybernetics.
Systems analysis, just like cybernetics, has passed through its “romantic” period and its period of disillusions (see Section 1.3). “Presently, the terms
“analysis of systems” or “systems analysis” often excite the antithetical
feelings of different people. On the one part, here is faith in the omnipo-
tence of the new approach capable of solving difficult and large-scale
problems and, on the other part, charges of dalliance decorated by a
fashionable terminology.” [114]. These words of O. Larichev preserve
their topicality even now. Both Cybernetics and Systems Analysis need GENERAL results, including generalizations from the intensively developing sciences under the “umbrella brand” of systems analysis (see Fig. 20).
Systems theory and systems engineering. Let us analyze the “systems” terminology in the English-language segment of publications. The high level of abstraction and generality of systems studies in the USSR and Russia corresponds to the English terms “general systems theory” (initially) and “systems science” (nowadays). In other words, “systems analysis” as it is comprehended in Russia rather matches “systems science” (SS) in foreign research, as the science about systems and systems studies; see Fig. 21.

39 S. Beer defined management science as “the business use of operations research.”
As a matter of fact, general systems science evolved in several direc-
tions. First, its “mainstream” gave birth to two known subdirections:
K. Boulding’s theory of systems classes [30] and P. Checkland’s soft
systems methodology [43, 44].
Second, note that the 1950-1970’s were remarkable for a significant
breakthrough in mathematical system theory [40, 93, 132, 133], which
later merged with control theory.
Third, we naturally mention system dynamics, which explores the influence of system elements and structure on its behavior in time; here the main apparatus is simulation modeling based on differential equations or discrete mappings. The pioneering works were [57, 58], and the most famous application to global development was described in the book [130]. The state-of-the-art in this field can be traced in [78, 129].
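The flavor of such models can be conveyed by a minimal sketch: a single stock with a reinforcing inflow limited by a carrying capacity (logistic growth), integrated by the Euler method. All parameters are invented.

```python
# One stock, one feedback loop: inflow = growth_rate * stock * (1 - stock/capacity).
stock, growth_rate, capacity, dt, steps = 10.0, 0.3, 1000.0, 0.25, 200
for _ in range(steps):
    inflow = growth_rate * stock * (1 - stock / capacity)
    stock += inflow * dt
print(f"stock after {steps * dt:.0f} time units: {stock:.1f}")  # saturates near capacity
```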
Reverting to the subject, systems analysis actually concerns any ana-
lytical study assisting a decision-maker to choose an appropriate course
of actions [147].
Subsequently, SA developed towards systems engineering (SE) (see
the classical publications [67, 70]). This is a branch of science and tech-
nology covering the whole life cycle of a complex system (design, pro-
duction, testing, exploitation, support, maintenance and repair, upgrade
and utilization).
As years passed, SA became a set of practice-oriented analysis tech-
nologies for concrete systems, i.e., products and/or services [142, 185,
209, 219]. Systems analysis goes in parallel with systems design (SD),
systems development and other associated stages.
Nowadays, SS and SE (e.g., see the modern textbooks and standards
[49, 76, 88, 185, 196, 209, 219]) comprise SA, SD, product lifecycle
management (PLM), project and program management, several branches
of management science and others, as illustrated by Fig. 21. And general
systems theory forms their common methodological core, see Fig. 21.
Most applications of SE are complex technical and organization-
technical systems, as well as software development.

Fig. 21. The composition and structure of systems science (around general systems theory as the common methodological core: systems analysis, systems design, systems development, project and program management, PLM, management science, and others)

Presently, the branches of “systems studies” (SA, SD, …) are rather an aggregate of technologies and a common language in the form of standards (arising through the generalization of successful practical experience, see the Conclusion) than scientific directions.
Systems of systems. An intensively developing direction of systems
theory and systems engineering embraces the problematique of a so-
called system of systems; it considers the interaction of autonomous (self-
sufficient) systems jointly forming an integral system with its own goals,
functions, etc. Among examples, we refer to networks of networks,
SmartGrid in power engineering, the interaction of units and corps in
military science, complex production processes, and so on. This direction
employs the concept of holism40 [190] and dates back to the late 1960's
(see the classical paper [1] by R. Ackoff). A good survey of latest
achievements can be found in [90].

40 Holism is an approach treating complex systems as a whole; it claims that the properties of complex systems cannot be derived via examining the properties of their elements.
5. Some Trends and Forecasts

Any mature science necessarily predicts its own development and the development of adjacent sciences. As Cybernetics represents a metascience (see Chapter 2) with respect to its components (control theory and others), its functions should include analyzing their trends, seeking generalizations and forecasting. Ideally, the matter concerns normative forecasting, i.e., constructing a multi-alternative scenario forecast with the separation of desired trajectories and an action plan for their implementation.
The current chapter reveals a series of trends in control theory (the list does not claim to be exhaustive; rather, it is a call for such activities). Particularly, we briefly consider the topic structure of some leading control conferences (Section 5.1), interdisciplinarity (Section 5.2), “networkism” (Section 5.3), heterogeneous models and hierarchical modeling (Section 5.4), “intellectualization” and reflexion (Section 5.5), and big data and big control (Section 5.6). However, our discussion does not touch the internal paradigm problems of different branches of control theory, including the effects of their “linear” development, the aspiration towards self-isolation41 and others.

5.1. Topic Analysis of Leading Control Conferences

Nowadays, several hundred scientific conferences (seminars, symposia, meetings, etc.) on various aspects of control theory and applications are organized yearly worldwide. Yet, there are a few “emblematic” leading scientific events reflecting and predetermining the basic trends [150]. Being subjective and not pretending to a complete overview, the author emphasizes the triennial world congresses conducted by the International Federation of Automatic Control (IFAC) and the annual Conferences on Decision and Control (CDC) held under the auspices of the Institute of Electrical and Electronics Engineers (IEEE). Alongside these major events (or even jointly with CDC), there are regular “national”42 conferences: the American Control Conference (ACC) and the European Control Conference (ECC). In the USSR, the role of such national conferences belonged to the All-Union Meetings on Regulation Theory (later, on Automatic Control and, then, on Control Problems). Interestingly, the gradually changing title of these scientific events agrees with the evolution of control theory and its subjects (see below).

41 The existing grant-based funding of research facilitates the differentiation of sciences and partially stimulates the existence of scientific self-reproducing “sects” in all fields of investigation.
42 Actually, these conferences gather researchers from many other countries.
Generally speaking, world science demonstrates a stable growth of
publications dedicated to control (see Fig. 4–Fig. 6).

Fig. 22–Fig. 24 present a “quantitative” comparison43 of the topics at the 2011 and 2014 IFAC World Congresses (also see [171]), ACC-2011, CDC-ECC-2011, CDC-2012, CDC-2013 and the All-Russia Meeting on Control Problems (AMCP-2014).44
General topics. The author has classified the papers, being mostly
concerned with relative (not absolute) indexes: they reflect the current
distribution and dynamics of priorities, despite the subjectiveness and
certain arbitrariness of classification bases. The following groups of
topics have been identified via expertise: mathematical control theory
(mathematical results invariant with respect to application domains of
controlled objects), “classics” (automatic control theory (ACT) in a wide
interpretation45), “networked control” (covering situations when a control
object and/or subject and/or communication between them has a net-
worked structure), technical means of control, applied control problems,
see Fig. 22–Fig. 24.

43 All figures show the relative shares of papers having a corresponding topic.
44 In AMCP-2014, about 25-33% of the papers were dedicated to control problems in interdisciplinary systems (socioeconomic, organizational and technical, etc.); they have been eliminated from our analysis.
45 Notwithstanding its “classical” character, ACT develops intensively, including the appearance of new problems in well-known fields (e.g., in linear control systems) and new controlled objects (e.g., the rapid growth of publications on quantum systems control).
Fig. 22. The general topics of ACC and CDC
Fig. 22 and Fig. 23 illustrate (a) the relative “stability of traditions”
of appropriate scientific events and (b) a well-known fact that CDC are
more “theoretical,” whereas IFAC congresses are par excellence applica-
tion-oriented. In this sense, AMCP-2014 most likely follows the tradition
of IFAC congresses.

Fig. 23. The general topics of IFAC congresses and AMCP-2014

Fig. 24. The topics of papers at AMCP-2014

Networked control. We emphasize the growing interest of researchers in networked control problems (the number of papers in peer-reviewed journals has almost doubled within 5-6 years). This observation also follows from the analysis of publications indexed by Web of Science, see Fig. 25.

Fig. 25. The number of papers on networked control published worldwide (according to Web of Science)

Fig. 26 and Fig. 27 specify the topics of networked control by the levels of agent architecture in multi-agent systems (MAS) and the problems treated at these levels (see Section 5.3). The following groups of topics have been identified via expertise: MAS and consensus problems (a minimal sketch of the basic consensus protocol is given below), communications in MAS, cooperative control, upper levels of control (strategic behavior of agents), and “others” (mostly, information and communication networks with a slight emphasis on control problems).
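For reference, the classical first-order consensus protocol on a fixed undirected graph, x_i(t+1) = x_i(t) + ε Σ_j (x_j(t) − x_i(t)), admits a few-line sketch (the graph and initial states below are invented):

```python
# Four agents on a path graph; each repeatedly averages with its neighbors.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [1.0, 5.0, 9.0, 13.0]   # initial agent states
eps = 0.25                  # a step size below 1/max_degree ensures convergence

for _ in range(200):
    x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
         for i, xi in enumerate(x)]
print([round(v, 3) for v in x])   # all states converge to the average, 7.0
```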

Fig. 26. Specification of networked control topics at ACC and CDC

Fig. 27. Specification of networked control topics at IFAC congresses and AMCP-2014

According to Fig. 26 and Fig. 27, investigators gradually shift their efforts towards the higher levels of agents’ architecture, i.e., from consensus and communication problems to cooperative control and strategic behavior models of agents [56].
Applications. Fig. 28 and Fig. 29 specify the internal structure of applied topics at the scientific events under consideration. The following groups of applications have been identified via expertise: power engineering, biology and medicine, aerospace, production (mostly, industrial production), mechatronics and robotics, transport (mostly, automobile transport and traffic), marine vehicles, and “others” (from agriculture to education).

Fig. 28. Specification of applied topics at ACC and CDC

Fig. 29. Specification of applied topics at IFAC congresses and AMCP-2014

Clearly, in recent years the emphasis has been gradually shifting from traditional control problems in production and telecommunication systems to power engineering and biomedical applications.

5.2. Interdisciplinarity

Modern control theory (see Fig. 30 and Fig. 32) studies control prob-
lems for different classes of controlled objects by designing or applying
appropriate methods and means of control.

Fig. 30. Controlled objects, methods and means of control (the means comprise measuring, converting and actuation devices, as well as informational and computing means)

The term “interdisciplinarity,” understood as standing at the junction of sciences,46 their branches, etc., reflects the variety of controlled objects and the variety of methods and means of control. (Interdisciplinarity with a capital I reflects their generality.) This subsection mostly deals with the variety of controlled objects.
For a certain class of controlled objects, the life cycle structure of
control theory is illustrated by Fig. 31. Using information acquired by a
corresponding science about a controlled object,47 control experts formu-
late appropriate models and perform their theoretical study (analysis and
synthesis of control actions, exploration of different properties such as
observability, identifiability, controllability, stability and others).

46 According to the Merriam-Webster dictionary, science is knowledge about or study of the natural world based on facts learned through experiments and observation; a particular area of scientific study (such as biology, physics, or chemistry) or a particular branch of science; a subject that is formally studied in a college, university, etc.
47 In the case of technical systems, the initial information “suppliers” are mechanics, aerodynamics, and so on.
Fig. 31. The life cycle of control theory for a certain class of controlled objects (scientific knowledge about the controlled object → control theory → applications of the theory → control technologies, plotted along the characteristic time of development)

Subsequently, the theory finds applications implemented in the form of control technologies. From the temporal viewpoint, each class of controlled objects perhaps demonstrates its own characteristic time of development and a “golden period” with the maximum pace of results accumulation (see Fig. 31).
Control theory has embraced various controlled subjects and objects during more than a century and a half of its development, see Fig. 32. In the area of technical and organization-technical systems, the main emphasis has recently shifted to decentralized intelligent systems (see Section 5.5). For instance, more and more research works are dedicated to the upper control levels in the following hierarchy of control types [211] (a toy illustration follows the list):
1) programmed control;
2) feedback control;
3) robust control;
4) adaptive control;
5) intelligent control;
6) intellectual (smart) control (in contrast to intelligence, intellectu-
ality means the presence of autonomous goal-setting (autonomous and
adaptive generation of efficiency criteria) in control loops).
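As a toy contrast between levels 2 and 4 of this hierarchy, here is a sketch in which a feedback gain is itself tuned online by a crude integral-type update (not a rigorous adaptive law; the plant and rates are invented):

```python
# Plant: dx/dt = plant_gain * u - x, with plant_gain unknown to the controller.
plant_gain = 2.0
k, adapt_rate, dt = 0.1, 0.5, 0.01        # initial gain, adaptation rate, step
state, reference = 0.0, 1.0

for _ in range(2000):
    error = reference - state
    u = k * error                         # level 2: plain feedback control
    state += (plant_gain * u - state) * dt
    k += adapt_rate * error * error * dt  # level 4: the gain grows while error persists

print(f"adapted gain k = {k:.2f}, residual tracking error = {reference - state:.3f}")
```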

Within the last 50 years,48 mathematical control theory has successively involved new classes of controlled objects (since the 1950-1960’s, economic systems; later, ecological-economic and other systems). In recent decades, the focus of attention has been gradually drifting to living systems and social systems. The fruitful development of the corresponding branches of control theory and the accumulation of knowledge about controlled objects require close cooperation between mathematicians (control experts) and representatives of the associated sciences.
Moreover, the application domain of control theory keeps widening. A key problem in the dissemination of its methods (the integration problem) is the availability of sufficiently adequate models of controlled objects. Again, here we need close cooperation between control experts and representatives of the associated sciences (physics, economics, biology, sociology and others).
For a large scientific organization, institution or scientific school to maintain and/or gain leading positions in the field of control several decades from now, when seemingly new objects of control will have become classical, it is necessary to initiate their intensive research right now!

48 Interestingly, a broader retrospective review indicates that social systems cyclically interchange with technical ones in the focus of control theory, getting “back” at a new turn of the dialectical spiral. Indeed, perhaps the first object of control (in prehistoric society) was a group of people; later on, transport and elementary mechanisms; again followed by groups of people (Plato – N. Machiavelli – F. Bacon – T. Hobbes – … – A. Ampère – B. Trentowski). Starting from the middle of the 19th century, control theory switched to technical (mechanical) systems. Today, control of human beings, their groups and/or collectives is again on the agenda.
Fig. 32. The past, present and future of control theory (timeline from the 1860s to the 2020s: mechanical systems → technical systems → organization-technical and informational systems → decentralized (networked) intelligent systems; curves 1–5 trace the accumulation of results for technical, economic, ecological-economic, living and social systems, respectively)

Increasingly often, controlled objects represent the so-called systems of interdisciplinary nature [152]. Imagine that the corresponding classification is based on the subject of human activity (“nature – society – production”). In this case, we may distinguish among organizational systems (people), ecological systems (nature), social systems (society), as well as economic (technical) systems (production), see Fig. 33. Different paired combinations emerge at the junctions of these classes of systems:
organization-technical systems;
socio-economic systems;
ecological-economic systems;
socio-ecological systems;
normative-value systems;
noosphere systems.49

Fig. 33. Systems of interdisciplinary nature: a classification (production: economic and technical systems; society: social systems; nature: ecological systems; human being: organizational systems; at the junctions: organization-technical and man-machine systems, socioeconomic systems, ecological-economic systems, social-ecological systems, normative-value systems, noosphere systems)

49 Systems where a specially organized activity of human beings is a determining factor for the development of large-scale (global) ecological systems.
The identification of the following research priorities evidences the growing interest of investigators in systems of interdisciplinary nature:
– US National Science Foundation: group control, spacecraft clusters, combat control, control of financial and economic systems, control of biological and ecological systems, multiple-profile teams in the control loop, etc.;
– research in the European Union: man-machine symbiosis (modeling of a human being in control loops, including the case of a controlled subject), complex distributed systems and quality improvement of systems in an uncertain environment (global manufacturing, security, heterogeneous control strategies, new principles of multidisciplinary coordination and control) and others;
– key directions of fundamental research by the Russian Academy of Sciences: control in interdisciplinary models of organizational, social, economic, biological and ecological systems; group control; cooperative control and others.
The paper [66] mentioned three global challenges to cybernetics, namely, the transitions:
1) from nonliving to living (from chemistry to biology);
2) from living to intelligent (from living organisms to human consciousness);
3) from human consciousness to human spirit as the highest level of consciousness.
The specifics of interdisciplinary-nature systems incorporating human beings as a control object consist in the following:
– independent goal-setting, purposeful behavior (conscious information misrepresentation and strategic behavior, non-fulfillment of commitments, etc.);
– reflexion (nontrivial mutual awareness, foresight, behavior forecasting for a Principal or control object/subject, the effect of roles exchange,50 etc.);
– bounded rationality (decision-making in uncertain conditions and under existing constraints on the volume of processed data);
– cooperative and/or competitive interaction (formation of coalitions, informational contagion, etc.);
– hierarchical structure;
– multicomponent structure;

50 In systems whose elements have strategic behavior, the discrimination between control subjects and controlled ones can be ambiguous; e.g., in some situations a subordinate manipulates its superior.
– distributed/networked structure and/or different scales (in space and/or time, see the paper [135] discussing the principle of requisite variety and its extension to multiscale systems).
Historically, “mechanical” systems (later, technical ones) were the first classes of controlled objects theoretically studied on a mass scale (see Fig. 32). As a matter of fact, the deepest and most extensive theoretical results of control were obtained exactly for these classes. As new controlled objects appear, researchers naturally endeavor to perform a “results transfer,” i.e., to translate some existing results to the new objects. That was exactly the case for interdisciplinary-nature systems: general results of Cybernetics and concrete results of the analysis of control problems for technical systems were transferred to the former, see arrow I in Fig. 34.

Fig. 34. Results transfer (arrow I: from control models and methods of technical systems to control models and methods of interdisciplinary-nature systems; arrow II: the inverse translation, mediated by multi-agent systems)

Following the accumulation of its own results within the framework of control models and methods of “non-mechanical” (e.g., living51) and/or interdisciplinary-nature systems (e.g., socioeconomic systems), the inverse tendency has been gradually showing itself: more and more artificial technical or informational systems are assigned properties inherent in social or living systems. This represents a basic trend which will perhaps intensify in the future. In many cases, multi-agent systems (discussed below) act as a tool of this “inverse results translation” (see arrow II in Fig. 34). For instance, such inverse translation takes place in numerous manifestations of “intellectuality”: cooperative behavior, reflexion, etc.

51 In all fairness, note that at all times living systems have encouraged scientists and engineers to apply analogies, i.e., to “repeat” certain properties of living nature objects in artificial systems.

5.3. “Networkism”

For the recent 15 years, a modern tendency in control theory has been the movement towards “miniaturization,”52 “decentralization” and “intellectualization” in systems of very many interacting autonomous agents of social, technical or informational nature. Inherent properties of multi-agent systems (MAS) such as decentralized interaction and agents’ multiplicity induce fundamentally new emergent properties (autonomy, smaller vulnerability to unfavorable factors, etc.) crucial in several applications [180, 189, 226].
MAS can be divided into hardware MAS (pioneering publications dating back to the middle of the 1990s) and software ones (since the middle of the 1970s), as illustrated by Fig. 35. The former include mobile robots (wheeled robots, unmanned aerial vehicles (UAV), autonomous unmanned submersibles (AUS), etc.) and control systems of complex industrial and technological objects (computer-aided control systems of industrial processes, power engineering–SmartGrid and so on). The latter include control systems where agents are softbots, i.e., autonomous programmed modules solving distributed optimization problems according to established protocols (possible applications are logistics systems in manufacturing and transport and softbots in digital networks, i.e., real-time scheduling, assignment of functions and tasks, and so on).

52 Control problems of quantum systems are mostly treated in theory, but micro-level controlled objects (“microsystems”) have become almost common.
Fig. 35. Types of multi-agent systems (hardware MAS, including program-technical MAS: wheeled robots, …, UAV, AUS; software MAS: “softbots”)

On the other hand, a striking tendency of the recent 10–15 years concerns the transition from centralized control (the same control system is responsible for each of several controlled objects, e.g., agents, including their pairwise interactions) to decentralized control (a control network is superstructed over a network of interacting objects), and then to communication between control systems and agents via a network. Here a separate problem lies in the control of this network itself, see Fig. 36. Networked MAS are considered in the next section.
Consequently, today “networkism” exists in controlled objects, control systems and their interaction. In many cases, a control system is even “immersed” into a controlled object, thereby forming an integrated (perhaps, hierarchically organized) network of interacting agents. The number of research works on networks (in the wider interpretation,53 information and communication technologies (ICT), Internet and other technologies in complex distributed systems) is huge and still continues to grow (see Section 5.1).

53 Not to mention the penetration of ICT into engineering and everyday life, and the associated educational and social capabilities and threats.
Fig. 36. Decentralized control (from centralized control of separate objects to a control network superstructed over the network of interacting objects)

Today, the overwhelming majority of multi-agent systems investigations are theoretical, despite their mass character. As a rule, consideration is confined to computing experiments, and there exists merely a small number of open-access publications describing real applications of MAS.
The forthcoming years will be remarkable for the transition from the so-called C3 paradigm (joint solution of Control + Computations + Communications problems) to the C5 concept (Control + Computations + Communications + Costs + Life Cycle). Here the above-mentioned problems are solved taking into account cost aspects (in the general sense) over the whole life cycle of a system, including the joint design of a control system and its controlled object.
Speaking about “networkism,” we have to touch upon “network-centrism,”54 extremely fashionable nowadays (also called the “network-centric fever”). It admits several interpretations covering organization and analysis principles of any networks in general or of temporary networks created for specific task or mission execution at the right place and right time (networked organizations, e.g., interaction of military units in a combat theater). This approach finds wide application in network-centric warfare problems for the vertical and horizontal integration of all elements during a military operation (control, communication, reconnaissance and annihilation systems).
Another manifestation of “networkism” concerns the growing popularity of distributed decision support systems. The intensive development of ICT increases the role of the informational aspects of control in decentralized hierarchical systems (an example is decision-making support in distributed decision systems which integrate heterogeneous information on strategic planning and forecasting from different government authorities and industrial sectors). One of such aspects consists in informational control as a purposeful impact on the awareness of controlled subjects; therefore, a topical problem is to develop a mathematical apparatus providing an adequate description of the existing relationship between the behavior of system participants and their mutual awareness [158].

54 Network-centrism operates with its own abbreviations differing from those of control theory (see above): C3I–Command, Control, Communications and Intelligence; C4I–Command, Control, Communications, Computers and Intelligence; and others.
The design of intelligent analytic systems for the informational and analytic support of goal-setting and the control cycle represents another important informational aspect of control in decentralized hierarchical systems. Here it seems necessary to substantiate methodological approaches to control efficiency in decentralized control systems, including the elaboration of principles and intelligent technologies for data acquisition, representation, storage and exchange.
We underline that an appreciable share of the information required for situation assessment, goal-setting and control strategy choice in decentralized systems is ill-structured (mostly, in the form of text), and there arise the problems of relevant search and further analysis of such information. The described circumstances bring about the need for new information retrieval methods (or even knowledge processing methods) based on proper consideration of the lexis and different quantitative characteristics of this information and, moreover, on the analysis of its semantics, separation of target data and situation parameters, assessment of their dynamics and scenario modeling of situation development in future periods.

5.4. Heterogeneous Models and Hierarchical Modeling

In recent years, control theory has increasingly addressed the notion of system “heterogeneity,” comprehended, in the first place, as the multiplicity of a system’s mathematical description (e.g., descriptive dissimilarity of separate subsystems: the type and scale of time/space of subsystems functioning, multi-type descriptive languages for certain regularities of a studied object, etc.). “Heterogeneity” also means complexity appearing in the (qualitative, temporal and functional) dissimilarity, (spatial and temporal) distribution and the hierarchical/networked structure of a controlled object and an associated control system (see Section 5.3).
An adequate technology for the design and joint analysis of a certain set of heterogeneous system models is the so-called hierarchical modeling. According to this technology, models describing different parts of a studied system or its different properties (perhaps, with different levels of detail) are ordered on the basis of some logic, thereby forming a hierarchy or a sequence (a horizontal chain). Generally, lower hierarchical levels correspond to higher levels of detail in the description of the modeled system. Each element of a sequence possesses almost the same level of detail, and the results (outputs) of the current model represent input data for the next model. Such an approach to modeling was born and further developed in the 1960s–1970s [40, 133].
In some sense, hierarchical models are a wider category than hybrid models and the multi-model approach. A hybrid model is a model combining elements of two or more models reflecting different aspects of a studied phenomenon or process and/or employing different apparatuses (languages) of modeling, see Fig. 37. For instance, a hybrid model can include discrete and continuous submodels, digital and analog submodels, and so on.

Fig. 37. The narrow interpretation of a hybrid model (model 1 and model 2 combined into a hybrid model)

In the wider interpretation, a hybrid model represents a complex of models, each chosen under well-defined conditions, see Fig. 38. As an example, consider hybrid dynamic systems (HDS, also known as switching systems). The expression in the right-hand side of the HDS differential equation is chosen from a given set of options depending on the current state of the system and/or time and/or auxiliary conditions.
Within the multi-model approach, several models are used sequentially or simultaneously with further or current analysis and selection of the “best” results.
Fig. 38. The modern interpretation of a hybrid model. The multi-model approach (external information and a model selection principle determine which of models 1, 2, …, n produces the result)
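To make the switching idea concrete, here is a minimal Python sketch (the matrices and the switching rule are invented for illustration, not taken from the cited literature) of a hybrid dynamic system whose right-hand side is selected by the sign of the first state coordinate:

```python
import numpy as np

# A hypothetical hybrid (switching) dynamic system dx/dt = A_i x:
# the right-hand side is chosen from two invented options
# depending on the sign of the first state coordinate.
A1 = np.array([[-0.1, 1.0], [-1.0, -0.1]])   # used while x[0] >= 0
A2 = np.array([[-0.1, 2.0], [-2.0, -0.1]])   # used while x[0] < 0

def rhs(x):
    """Switching rule: select the right-hand side by the current state."""
    return (A1 if x[0] >= 0 else A2) @ x

# Forward-Euler integration of the switched system.
dt, steps = 0.01, 1000
x = np.array([1.0, 0.0])
for _ in range(steps):
    x = x + dt * rhs(x)
print("state after integration:", x)
```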
Hierarchical (sequential) models may have a more complex structure, see Fig. 39. At each level, a model can be hybrid or follow the multi-model approach. Hierarchical models lead to the problems of aggregation and decomposition well-known in mathematical modeling.

Fig. 39. A hierarchical (sequential) model (models 1, 2, …, n are ordered from the level of detail to the level of abstraction; external information enters at each level, and the result is produced at the top)
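The chaining itself is straightforward to express in code. The following toy Python sketch, with three invented stand-in “models,” illustrates the rule formulated above: the output of the current model becomes the input of the next one.

```python
# A sequential (hierarchical) chain of models: the output of the
# current model serves as the input of the next one (cf. Fig. 39).
# The three "models" below are invented stand-ins for the levels.
def model_detailed(raw):            # lowest level: highest detail
    return [v * 2.0 for v in raw]   # e.g., unit conversion of raw data

def model_intermediate(values):     # middle level: partial aggregation
    return sum(values) / len(values)

def model_aggregated(mean_value):   # upper level: abstract verdict
    return "high load" if mean_value > 1.0 else "normal load"

def run_chain(data, chain):
    for model in chain:             # outputs become inputs downstream
        data = model(data)
    return data

print(run_chain([0.4, 0.9, 1.3],
                [model_detailed, model_intermediate, model_aggregated]))
```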

The following subsections give some examples of hierarchical models.


5.4.1. A model of warfare [153]. Suppose that opponents choose the “spatial” distribution of their forces (among springboards) once and simultaneously. In this case, we obtain the Colonel Blotto game (CBG55), where the winner at each springboard results from solving the corresponding Lanchester equations. In other words, it is possible to study a “hierarchical” model as follows. At the upper level, players allocate their forces among springboards within a certain variation of the game-theoretic model of the CBG. At the lower level, the result of the battle at each springboard is described by some modification of Lanchester’s model. The complexity of such hierarchical models lies in that, in most cases, it is difficult to find an analytical solution of the CBG (see a survey in [107]).

55 The classical CBG has the following statement. Two commanders (colonels Blotto and Lotto) distribute their forces among a finite number of springboards. The winner at each springboard is the player having more forces there. Each commander strives to win at as many springboards as possible.
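In the discrete setting, small instances of the CBG are nevertheless amenable to direct computation. A toy Python sketch (the numbers of units and springboards are invented, and the opponent is assumed to mix uniformly; this is merely an illustration, not a solution method from [107]):

```python
def allocations(units, fields):
    """All distributions of `units` indivisible force units among `fields` springboards."""
    if fields == 1:
        return [(units,)]
    return [(k,) + rest for k in range(units + 1)
            for rest in allocations(units - k, fields - 1)]

def score(a, b):
    """Springboards won by the first colonel (a tie counts as half)."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x, y in zip(a, b))

strategies = allocations(5, 3)
# Expected score of each pure strategy against a uniformly mixing opponent:
best = max(strategies,
           key=lambda a: sum(score(a, b) for b in strategies) / len(strategies))
print("best allocation against a uniform opponent:", best)
```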
In addition, Lanchester’s models themselves allow the hierarchical approach. At the lower level, the Monte Carlo method serves for simulating the interaction of separate military units. At the middle level, this interaction is described by Markov models. And finally, the upper (aggregated, deterministic) level involves Lanchester’s differential equations proper. By introducing control variables (temporal distributions of forces and means, reserves engagement, etc.), one can superstruct control problems “over” these models (in terms of controlled dynamic systems, differential and/or repeated games, etc.). Consequently, we obtain the hierarchical model illustrated by Table 3.

Table 3. The model of warfare
Level 5. Spatial distribution of forces and means – the Colonel Blotto game and its modifications.
Level 4. Temporal distribution of forces and means – optimal control, repeated games, etc.
Level 3. Size dynamics – Lanchester’s equations and their modifications.
Level 2. “Local” interaction of units – Markov models.
Level 1. Interaction of separate military units – simulation, the Monte Carlo method.
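For level 3, a minimal numerical sketch of the classical aimed-fire (square-law) Lanchester model dx/dt = -b*y, dy/dt = -a*x (the attrition coefficients and force sizes are invented) shows how the square law lets a more effective side defeat a larger one:

```python
# Aimed-fire (square-law) Lanchester model: each side's losses are
# proportional to the opponent's current size,
#   dx/dt = -b * y,   dy/dt = -a * x.
a, b = 0.06, 0.04          # per-unit effectiveness of sides X and Y
x, y = 100.0, 120.0        # initial force sizes
dt = 0.01
while x > 0 and y > 0:     # Euler integration until one side is destroyed
    x, y = x - dt * b * y, y - dt * a * x
print("winner:", "X" if x > 0 else "Y",
      "with residual force", round(max(x, y), 1))
```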

5.4.2. The model of distributed penetration through a defense system (the so-called diffuse bomb problem [105]). An example of a hierarchical model of a MAS is the diffuse bomb problem stated below.
A group of autonomous moving agents must hit a target with given coordinates. At each time step, any agent can be detected and destroyed by a defense system (with a certain probability). The detection/annihilation probability depends on the agent’s coordinates and speed, as well as on the relative arrangement of all objects in the group. The problem is to synthesize algorithms of decentralized interaction among agents and of their decision-making (the choice of the direction and speed of their motion) that maximize the number of agents reaching the target. Agents appear “intelligent” in the following sense. Some agents (reconnaissance) can acquire on-line information on the parameters of the defense system. By observing the behavior of the reconnaissance agents, the remaining ones perform “reflexion,” assess the limits of dangerous areas and solve the posed problem. The strategic interaction of the counteracting sides can be described in terms of game theory, see [106].
The following hierarchical model, defined by Table 4, serves in [105] for appraising and choosing the most efficient algorithms of behavior:

Table 4. The diffuse bomb model
Level 6. Choosing the set of agents and their properties – discrete optimization methods.
Level 5. Choosing the paths and speeds of agents – optimal control.
Level 4. Agent’s forecast of the behavior of other agents – reflexive games, the reflexive partitions method.
Level 3. Detection probability minimization based on current information – algorithms of course choice.
Level 2. Collision avoidance, obstacle avoidance – algorithms of local path choice.
Level 1. Object’s movement towards a target – dynamic motion equations.
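To convey the flavor of the lower levels of this model, here is a deliberately simplified Monte Carlo sketch (the detection model and all parameters are invented): agents fly straight at the target, and the per-step annihilation probability grows with speed, so a slower approach trades longer exposure for lower per-step risk.

```python
import random

def run_mission(n_agents=50, distance=10.0, speed=1.0, p_base=0.02):
    """Toy Monte Carlo for the lower levels of Table 4: agents fly
    straight at the target; at every step an agent is destroyed with
    a probability growing with its speed (an invented detection model)."""
    steps = int(distance / speed)
    p_step = min(1.0, p_base * speed ** 2)   # per-step annihilation probability
    survivors = 0
    for _ in range(n_agents):
        if all(random.random() > p_step for _ in range(steps)):
            survivors += 1
    return survivors

random.seed(1)
# A slower approach means more exposure steps but a lower per-step risk:
for v in (0.5, 1.0, 2.0):
    print(f"speed {v}: {run_mission(speed=v)} of 50 agents reach the target")
```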

5.4.3. The hierarchical structure of agents in multi-agent systems (MAS). In multi-agent systems (see Sections 5.1 and 5.3), the hierarchy of models is inter alia generated by the functional structure of the agent. The latter may have several hierarchical levels, see Fig. 40 [153, 158]. The lowest (operational) level is responsible for the implementation of actions (e.g., motion stabilization with respect to a preset path). The tactical level corresponds to the choice of actions (including interaction with other agents). The strategic level is in charge of decision-making, learning and adaptivity of behavior. And finally, the highest level (goal-setting) is responsible for the principles of goal-setting and the choice of the mechanisms of functioning for agents. The diffuse bomb problem in subsection 5.4.2 realizes the general structure described by Fig. 40.

Fig. 40. The hierarchical structure of an agent in MAS (goal-setting level, including control of the mechanisms of functioning: game theory, confrontation, hierarchies; strategic level–decision-making, adaptation, learning, reflexion: models of collective behavior and decision-making, artificial intelligence, cooperative control; tactical level: distributed optimization (e.g., task assignment), mission planning, formation control; operational (execution) level: dynamic systems, the consensus problem, action; external information enters the levels)

The structure presented in Fig. 40 seems rather universal. However, most realizations of multi-agent systems involve merely the two lower levels and the framework of dynamic systems theory.
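A canonical example of that lowest-level machinery is the consensus problem named in Fig. 40. A minimal Python sketch of the standard discrete-time consensus protocol on an invented ring of six agents:

```python
import numpy as np

# Discrete-time consensus protocol on an undirected communication graph:
#   x_i(t+1) = x_i(t) + eps * sum over neighbors j of (x_j(t) - x_i(t)).
# The ring topology of six agents and the initial states are invented.
n, eps = 6, 0.2
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
x = np.array([0.0, 1.0, 5.0, 2.0, 8.0, 3.0])

for _ in range(200):
    x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                            for i in range(n)])
print("agents converge to the average of the initial states:", np.round(x, 3))
```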
In mission planning problems, one can use different means of artificial intelligence, e.g., neural networks, evolutionary and logical methods, etc.
Also, let us mention distributed optimization (agent-based computing, see [32]) as a direction of modern optimization widespread in MAS. Its key idea consists in the following. An optimization problem for a multivariable function is decomposed into several subproblems solved by separate agents under limited information. For instance, each agent is “responsible” for a certain variable; at the current step, it chooses the value of this variable, being aware of the previous choices of some of its “neighbors” and seeking to maximize its own local “goal function.” Given an initial (global) goal function, is it possible to find “goal functions” of agents and their interaction rules such that the autonomous behavior of agents implements the centralized optimum? (In algorithmic/computational game theory [4, 123], this optimum can correspond to a Nash equilibrium or a Pareto efficient state of the agents’ game.)
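The following Python sketch illustrates this decomposition idea for a quadratic global goal function (the matrix and vector are invented): each agent “owns” one coordinate and repeatedly best-responds to the latest values of the others, which here reproduces the centralized optimum.

```python
import numpy as np

# Agent-based decomposition of min F(x) = 0.5 x^T Q x - b^T x:
# agent i "owns" coordinate x_i and repeatedly sets it to the value
# minimizing F given the latest known values of the other variables
# (a best-response / coordinate-descent scheme). Q and b are invented.
Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)

for _ in range(50):                  # rounds of local updates
    for i in range(3):               # agent i's local best response
        x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]

print("decentralized solution:", np.round(x, 4))
print("centralized optimum:  ", np.round(np.linalg.solve(Q, b), 4))
```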

Consider the strategic level of the agent’s architecture, which is responsible for adaptation, learning, reflexion and other aspects of strategic decision-making. Game theory and the theory of collective behavior analyze interaction models for rational agents. In game theory, a common scheme consists in (1) describing the “model of a game,” (2) choosing an equilibrium concept defining the stable outcome of the game and (3) stating a certain control problem: find the values of the controlled “game parameters” implementing a required equilibrium (see Fig. 41, where the “levels” correspond to the functions of science discussed in Section 1.1).
Taking informational reflexion into account leads to the necessity of constructing and analyzing awareness structures [158]. This enables defining an informational equilibrium, as well as posing and solving informational control problems. Taking strategic reflexion into account generates a similar chain (marked by heavy lines in Fig. 41), i.e., posing and solving “reflexive control” problems [154].

Fig. 41. Decision-making: informational and strategic reflexion (the levels correspond to the functions of science: at the phenomenological (descriptive) level–models of collective behavior, game theory and models of reflexive decision-making, with models of informational reflexion (awareness structures) and models of strategic reflexion (reflexive structures, k-level models, reflexion models in cognitive hierarchies, etc.); at the “optimization” level–“optimization” models of collective behavior and equilibrium concepts: equilibrium concepts in bimatrix games, informational equilibrium, Nash equilibrium, predictive and reflexive equilibria; at the normative level–control problems: informational control and reflexive control)

5.4.4. The model of informational confrontation in social networks. The object and means of control is a social network or another “networked” object [75, 89].
One can distinguish among several levels of description and analysis of social networks, see Table 5. At level 1 (the lowest one), a network is considered “in toto”; such a description provides no details but is essential for rapid analysis of the general properties enjoyed by the object. The aggregated description of a network employs statistical methods, semantic analysis techniques, etc.
Level 2 examines the structural properties of a network using the framework of graph theory.
The informational interaction of agents is analyzed at level 3; here we dispose of a wide range of applicable models (Markov models, finite-state automata, models of innovations diffusion, infection models, and others).
Level 4 involves optimal control or discrete optimization methods to formulate and solve control problems.
And finally, level 5 serves to describe the interaction of subjects affecting a social network (and pursuing their individual goals). As a rule, this level utilizes game theory, including reflexive games.
Consequently, we arrive at the hierarchical model illustrated by Table 5.

Table 5. The model of informational confrontation in social networks
Level 5. Informational confrontation – game theory, decision theory.
Level 4. Informational control – optimal control, discrete optimization.
Level 3. Informational interaction of agents – Markov models, finite-state automata, models of innovations diffusion, infection models, etc.
Level 2. Analysis of the structural properties of a network – graph theory.
Level 1. General analysis of a network – statistical methods, semantic analysis techniques, etc.
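As an illustration of level 3, here is a sketch of the independent cascade model of informational diffusion over an invented six-agent network (the topology and persuasion probability are arbitrary): a newly convinced agent gets one chance to convince each of its neighbors.

```python
import random

# Independent cascade model of informational diffusion (level 3 of
# Table 5): a newly "convinced" agent gets one chance to convince
# each neighbor with probability p. The network and p are invented.
edges = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
p = 0.4

def expected_reach(seed_set, trials=10_000):
    """Average number of eventually convinced agents (Monte Carlo)."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seed_set), set(seed_set)
        while frontier:
            new = set()
            for u in frontier:
                for v in edges[u]:
                    if v not in active and random.random() < p:
                        new.add(v)
            active |= new
            frontier = new
        total += len(active)
    return total / trials

random.seed(0)
print("expected reach of a message seeded at agent 0:", expected_reach({0}))
```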

The example of social media [75] highlights the problems of social, economic and informational security in ICT. Technological progress gradually increases its pace, and society appears unable to fully realize the new opportunities and threats created by a certain technology. While discovering atomic power, scientists recognized the possible problems of its military application (e.g., recall the Einstein–Szilárd letter to the US President F.D. Roosevelt in 1939). Today, even experts have no totally clear understanding of the social impact of ICT. No doubt, ICT provide ample opportunities for decision-making, particularly, for expertise [73]. On the other hand, there arise new problems, too.
The results of the functioning of computer-aided decision support systems (including the ones obtained within some formal models using modern ICT) are applied to make real important decisions. Hence, this aggravates security problems, i.e., making decisions and their consequences proof against the negative impacts of all the participating elements (both hardware components and active subjects).
Furthermore, society and government display growing interest in social media (online networks) as a source of specific information for the predictive detection of emerging implicit tendencies to be controlled.
In other words, we inevitably face the problems of social, economic and informational security for an individual, society and a whole country: social, expert and other networks actually form an arena of informational confrontation where control subjects struggle for the “minds” of other network members, whereas a social network itself represents an object and/or tool of informational impacts.
5.4.5. “Hierarchical automation” in organization-technical systems. Since the 1980s, production systems have followed a long path from flexible to holonic systems. In recent years, they have attracted the growing interest of researchers in connection with new market challenges: the efficiency of production specialization and decentralization, product and service differentiation, etc. Networked and “cloud” productions appear. Along with the implementation of fundamentally new production technologies (nanotechnologies, additive technologies, digital production, and so on), we observe gradual changes in the organization of production, i.e., the emphasis is shifted from the automation of operations to the automation of control at all life cycle stages.
Existing challenges such as:
– a huge number of customized product configurations;
– integration of small- and large-scale production;
– lead-time reduction for an individual order;
– supply chains integration for stock optimization;
and others call for solutions guaranteeing:
– the universality of production systems and their separate components;
– the capability of rapid and flexible adjustment to new tasks;
– autonomous decision-making in production owing to high-level control automation;
– survivability, replicability and scalability owing to network-centric control and multi-agent technologies;
– decision-making in production with proper consideration of economic factors, etc.
Modern production systems have a hierarchical structure, as indicated by Fig. 42, and the complexity of the control problems treated induces their decomposition into decision-making levels. Each level of control problems solution corresponds to its own goals, models and tools (Fig. 42) at each stage of control (organizing, planning, implementing, controlling and analyzing). Hence, in organization-technical production systems it is possible (and necessary) to apply hierarchical modeling.

Fig. 42. Hierarchical models in production systems (strategic planning – scenario-based financial and economic analysis; structure design – discrete optimization models, networked games; assortment planning – economic demand models, discrete optimization models; assembly line balancing – discrete optimization models; supply planning – linear models, game-theoretic models of enterprises interaction; production planning – optimization models, scenario analysis, mechanism design (order assignment); purchase planning – game-theoretic models, distributed optimization, mechanism design (tournaments and tenders); shopfloor planning and scheduling – distributed optimization; movement planning for robotized machines and trolleys – optimal control problems; industrial process control – distributed parameter models, fuzzy logic, simulation modeling)

This possibility is implemented, but on an irregular and unsystematized basis. Obviously, one can solve real problems of automation, analysis and decision support for production systems only within appropriate computer-aided informational systems. As an illustration, consider the classes of such systems in the ascending order of their “hierarchical level”:
– lower-level control systems (PLC, MicroPC, …);
– supervising and scheduling systems (SCADA, DCS, …);
– production planning and management systems (MRP, CRP, …, MRP2, …);
– integrated systems (MES, …, ERP, …);
– systems responsible for interaction with an external environment or development (SCM, CRM, PMS, …);
– upper-level analytic systems (OLAP, BSC, DSS, …).
These classes of systems use mathematical models, but very sparsely; as a rule, the higher the level of hierarchy,56 the lesser their usage. For instance, lower-level controllers employ automatic control theory in full; project management systems (PMS) incorporate classical algorithms for critical path search, Monte Carlo methods for project duration estimation, and heuristics for resource balancing; ERP systems and logistics systems (SCM) involve elementary results from stock management theory, and so on.
Nevertheless, the full-fledged implementation of the so-called “hard” models and “quantitative science” (operations research, discrete optimization, data analysis and other branches of modern applied mathematics) in informational systems still waits in the wings.
Several global problems exist here. On the one hand, mathematical models require very accurate and up-to-date information, often associated with inadmissibly high organizational and other costs. On the other hand, in many cases “soft” models (putting things in order in production processes, implementation of typical solutions and standards in the form of qualitative best practices, etc.) yield an effect exceeding manyfold the outcomes of quantitative models, yet consume reasonable efforts. Therefore, it seems that quantitative models should be applied at the second stage, “extracting” the remainder of the potential efficiency increase.
Concluding this section dedicated to heterogeneous models and hierarchical modeling, we underline a series of common classes of problems. Modern controlled objects are so complicated that sometimes a researcher can hardly separate out purely hierarchical or purely networked components. In such cases, it is necessary to consider networks of hierarchies and hierarchies of networks.
First, at each level models have their own intricacies induced by the corresponding mathematical apparatus. Moreover, there arise “conceptual coupling” dilemmas and the common language problem among the representatives of different application domains.

56 This statement is true both for separate informational systems and for integrated informational systems of product life cycle management (PLM), including computer-aided design systems, which realize the complex of the listed functions.
Second, a complex of “joined” models inherits all the negative properties of each component. Just imagine that at least one model in a “chain” admits no analytic treatment; then the whole chain is doomed to simulation modeling. The speed of computations in a chain is determined by the slowest component, and so on.
And third, it is necessary to assess the comparative efficiency of the solutions of aggregated problems, as well as to elaborate and disseminate typical solutions of the corresponding control problems in order to transfer them to the engineering ground.

5.5. Strategic Behavior

Control theory has followed a long path of development from automatic regulation systems to intelligent control systems, as illustrated by Fig. 32 and Fig. 43.
Intelligent control can be defined in different ways, namely, as control including goal-setting [211]; as control based on artificial intelligence methods (e.g., artificial neural networks, evolutionary (genetic) algorithms, logical inference or logical and dynamic models, knowledge representation and knowledge management, etc.57); as control imitating human behavior, and so on. Not all of these “definitions” seem appropriate.

Fig. 43. From automatic regulation to intelligent control (automatic regulation systems → automatic control systems → information and control systems → intelligent control systems)

Unfortunately, the term “intelligent” has become a fashionable attachment to descriptions of control systems (behavior, etc.), and the absence of this characteristic is interpreted as being out-of-date. This “devalues” the whole essence of intelligence.
In the previous sections, we have identified several properties of interdisciplinary-nature systems comprising human beings (or artificial systems “imitating” human beings), such as independent goal-setting, purposeful behavior, reflexion, bounded rationality, cooperative and/or competitive interaction. All these properties can be covered by the category of strategic behavior. From the historical perspective, the systematic consideration of the human factor (including the strategic behavior of a controlled object) in mathematical control problems was pioneered by the theory of active systems in the late 1960s. This theory was founded by V. Burkov, see the first publications [36, 39], the survey [37] and the modern textbooks and monographs [38, 131].

57 Each of these classes possesses certain advantages and shortcomings, especially in the sense of real-time requirements. Today, the choice of concrete tools is defined by the skill of a researcher or engineer, as well as by the accumulated experience and traditions of the corresponding scientific schools. Global challenges concern the maximum suppression of the existing shortcomings of separate tools and the design of general methods for their integration subject to the posed problems.
Further exposition focuses on some topical aspects of strategic behavior. However, we will not discuss many “internal” problems of the associated scientific directions such as game theory, mechanism design, and others.
Intelligent multi-agent systems. One modern tendency in the theory of multi-agent systems, game theory and artificial intelligence lies in that researchers strive for their integration. Yet, game theory and artificial intelligence aim at the higher levels of the agent’s architecture, see Fig. 40.
Within the so-called algorithmic game theory [4], one observes a “transition downwards” (see Fig. 44), i.e., from the uniform description of a game to its decentralization and the analysis of the feasibility of autonomously implementing the mechanisms of equilibrium behavior and realization. On the other hand, the theory of MAS moves “upwards” (see Fig. 44) in a parallel, noncoincident way due to the local character of scientific communities. The theory of MAS aspires to a better consideration of strategic behavior and to the design of typical test problems and scenarios. The latter are necessary since, in most cases, the tactical level employs certain heuristic algorithms to be compared in terms of complexity, efficiency and other criteria (the number of heuristic algorithms demonstrates rapid growth owing to the intensive research on multi-agent systems).
The concept of bounded rationality gradually becomes widespread in analysis (perhaps, this tendency will grow even stronger in the future): in the absence of time, possibility or vital necessity, investigators search for admissible pseudo-optimal control actions instead of optimal ones (in many situations, on the basis of heuristic methods).
Furthermore, consideration of the human factor calls for employing mechanism design [131] and behavioral theories (experimental economics, experimental game theory; see a survey and references in [155]). The “normative” picture of the interaction between MAS and the strategic behavior sciences has the form demonstrated by Fig. 45.

Fig. 44. MAS and strategic behavior: state-of-the-art (game theory descends towards algorithmic game theory; MAS and group control ascend towards distributed optimization)

Fig. 45. MAS and strategic behavior sciences: the normative picture of interaction (MAS and group control interact with distributed optimization and algorithmic game theory, which in turn connect to game theory and mechanism design, collective behavior theory and bounded rationality, and experimental economics and experimental game theory)

In addition, we emphasize that the aspiration for maximum intellectualization is bounded by “costs” (computational, cognitive, tactical and technical, economic and other costs), see Fig. 46. In other words, MAS agents must have a rational “intellectualization” level adequate to the posed problem in terms of “costs.” On the other hand, the aspiration for maximum intellectualization as maximization of the guaranteed efficiency of MAS functioning over the set of feasible situations at the goal-setting level corresponds to the rejection of decentralization, i.e., a transition to a centralized system.

"Result"

"Effect" "Costs"

"Intellectualization"

Fig. 46. The price of “intellectualization”

Reflexion. Game theory studies the interaction of superintelligent agents having the same cognitive capabilities as their researcher [141], whereas the theory of collective behavior proceeds from agents’ rationality (or bounded rationality). A possible bridge58 between them for the transition from rational to superintelligent agents consists in increasing agents’ “intellectualization” by endowing them with reflexive capabilities, see Fig. 47 and the surveys in [154, 158].

Fig. 47. Reflexion and growing “intellectualization” (along the “intellectualization” axis: the theory of collective behavior with rational agents → reflexion → game theory with superintelligent players)
Fig. 47. Reflexion and growing “intellectualization”

58 Alternatives are, e.g., the consideration of evolutionary games [220] or learning effects in games [141].
Informational reflexion is the process and result of the agent’s thinking about (a) the values of uncertain parameters and (b) what its opponents (other agents) know about these values. Here the “game” component actually disappears: the agent makes no decisions.
Strategic reflexion is the process and result of the agent’s thinking about which decision-making principles its opponents (other agents) employ under the awareness assigned to them via informational reflexion, see Fig. 41.
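The effect of the reflexion rank is easy to demonstrate with the k-level models mentioned in Fig. 41. A toy Python sketch for the classical p-beauty contest (a textbook example chosen here for illustration, not taken from the cited works):

```python
# Strategic reflexion of rank k in the p-beauty contest: players name
# numbers in [0, 100]; the winner is closest to p times the average.
# A rank-0 player chooses naively (50); a rank-k player best-responds
# assuming all others reason at rank k - 1. Parameters illustrative.
p = 2.0 / 3.0

def choice(k, naive=50.0):
    """Choice of a player performing k steps of strategic reflexion."""
    x = naive
    for _ in range(k):
        x = p * x          # best response to opponents who choose x
    return x

for k in range(6):
    print(f"reflexion rank {k}: choose {choice(k):.2f}")
# As the rank grows, choices approach 0, the Nash equilibrium.
```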
A key role belongs to the notions of informational/reflexive structures describing the nontrivial mutual awareness of agents (or their self-awareness, see the ethical choice models in [115]) and phantom agents, which exist in the minds of other real and phantom agents and possess certain awareness.
The concept of phantom agents yields a rigorous statement of reflexive games as games of real and phantom agents (the term was suggested in 1965 by V. Lefebvre [116]). Moreover, this concept allows defining informational equilibria as a generalization of Nash equilibria to reflexive games: each (real or phantom) agent evaluates its subjective equilibrium (an equilibrium in the game this agent thinks it actually plays) based on an existing hierarchy of beliefs about the objective and reflexive realities [158].
Reflexive games research yields the following. First, it provides a uniform methodology and mathematical framework to describe and analyze various situations of collective decision-making by agents possessing different awareness, to study the impact of reflexion ranks on agents’ payoffs, to obtain conditions for the existence and implementability of informational equilibria, etc. Second, such research makes it possible to establish the existence conditions and properties of an informational equilibrium, as well as to pose constructively and correctly the problem of informational control. In this problem, a Principal has to find an awareness structure such that the informational equilibrium implemented in it appears most beneficial to the Principal. An interested reader can find the necessary theoretical background and numerous applications of reflexive games and informational control in the book [158].
The achievements and illusions of “emergent intelligence.” This section ends with a brief consideration of a phenomenon related to “intelligent” control and behavior of artificial (e.g., multi-agent) systems.
In the two recent decades, much attention of researchers in cybernetics and artificial intelligence has been paid to emergent intelligence: a system composed of very many relatively simple homogeneous elements (e.g., agents in MAS59) locally interacting with each other and with an external environment demonstrates complex60 “intelligent” behavior in comparison with the simplicity of its elements. Investigations in this field are also motivated by existing analogs in nature (Swarm Intelligence, i.e., heuristic algorithms of distributed optimization in ant colonies and beehives, flocks of birds, fish shoals, etc.).
Such systems enjoy a series of obvious advantages: the cheapness and simplicity of a separate element, local fault-tolerance, scalability, reconfigurability, asynchrony, and parallel processing of local information (ergo, high performance in real-time operation). They have numerous applications: social systems (crowd wisdom, e-expertise, social networks, etc.), economic systems (financial and other markets, national and regional economies, etc.), telecommunication networks, models of production and transport logistics systems, robotics, knowledge extraction (particularly, from the Internet), the Internet of Things and others [53, 73, 75, 151, 183, 195].
The appearance of qualitatively new properties in a whole system (against the individual properties of its elements), i.e., the transition from the simple local and decentralized interaction of elements to a nontrivial and complex global behavior, allows treating the latter as adaptive and self-organizing. Indeed, nonlinearity, evolution, adaptivity and self-organization are the characteristic features of real modern complex systems (e.g., see the examples and their discussion in [183]).
In addition to many achievements and good prospects, emergent intelligence sometimes creates several illusions. Actually, emergent intelligence concerns artificial systems, but adaptation and self-organization (despite all their pluses) are embedded at the stage of system design. Notwithstanding the law of emergence (the whole is greater than the sum of its parts, see above), the behavior of artificial systems is predetermined by the behavior/interaction of their elements.
Similar delusions have occurred in the history of science (e.g., at the early development stages of cybernetics and artificial intelligence61). They induced much disappointment and put the brakes on the evolution of these scientific directions.

59 This class also includes the problematique of artificial neural and immune networks, probabilistic automata, genetic algorithms, and so on.
60 Some authors insist on the birth of a new science called complexity science.
61 A cybernetical system always has the behavior defined by its embedded algorithms (“stochastic,” “nondeterministic,” and others), despite the seeming generation of new knowledge or demonstration of qualitatively new (“unexpected”) behavior. This is especially the case under the interaction of very many elements (a simple-structure system shows a complex behavior).
Furthermore, recall that MAS realize heuristics, and it is necessary to assess the guaranteed efficiency of their solutions, see above.
Generally speaking, there exist three large sources of “new” properties of a system:
– the additive interaction62 of its elements;
– for an observer/researcher having limited information and cognitive capabilities, the multiplicity of elements and their mutual relations (perhaps, nonlinear, asynchronous, with delayed information exchange, etc.) makes it impossible to conduct a mental experiment reproducing the agents’ behavior in detail, and computer simulation yields “surprising”63 results (an unexpected system behavior);
– artificial randomization (embedded into the behavioral algorithms describing agents’ interaction with each other and/or an external environment), which is necessary for variety creation (in the final analysis, for self-organization).64

62 For instance, a microrobot cannot move a heavy load, in contrast to many microrobots applying their joint efforts.
63 The complete model of a system is so complicated that the appearance of new properties represents a “miracle” for an external observer (at the same time, scientists intensively exploit it and start believing that an artificial system can demonstrate an “independent” behavior).
64 An uncertainty is always induced by some other uncertainty, potentially comprising lack of knowledge (insufficient information) and/or the action of random factors (an uncertainty never arises from an abstract “complexity” and similar conceptual factors). Facing an “uncertainty,” one should analyze cause-and-effect relations and seek for its source (the “initial uncertainty”). Of course, different complexity factors merely muddle things.

5.6. Big Data and Big Control

In information technology, big data represents a direction of theoretical and practical investigations concerned with the development and application of methods and means for handling big volumes of unstructured data. Perhaps, the term was first mentioned in the special issue of Nature [144].
Big data handling comprises their65:
– acquisition;
– transmission;
– storage (including recording and extraction);
– processing (transformation, modeling, computations and analysis);
– usage (including visualization) in practical, scientific, educational and other types of human activity.

65 In some classifications, big data handling is associated with 4D (data discovery, discrimination, distillation and delivery/dissemination).
In the narrow interpretation, the term “big data” sometimes covers only the technologies of data acquisition, transmission and storage. In this case, big data processing (including the construction and analysis of corresponding models) is called big analytics (including big computations), whereas the visualization of the corresponding results (depending on the user’s cognitive capabilities) is called big visualization (see Fig. 48).

Fig. 48. “The big triad”66: Data, analytics, visualization

The universal cycle of big (generally, any) data handling is illustrated by Fig. 49. Here the key role belongs to an object and a subject (a “customer”); the latter requires knowledge of the state and dynamics of the former. However, sometimes there exists a chasm between the data acquired on an object and the knowledge necessary for a subject. Primary data must be preprocessed, i.e., transformed into more or less structured information. Subsequently, the necessary knowledge is extracted from this information depending on the specific task solved by the subject.

66 We will not discuss another fashionable triad (big data, high-performance computations, cloud technologies).
Fig. 49. The universal cycle of big data handling (object → acquisition, transmission, storage and processing of data → information → further processing, transmission and storage → knowledge → usage by the subject, who exerts control over the object)

Particularly, a subject may adopt this knowledge for object control, viz., for exerting purposeful impacts on an object to ensure its required behavior. In a special case (an inanimate subject), control can be automatic. Perhaps, the term “big control”67 will soon become common for indicating control based on big data, big analytics and, possibly, big visualization,68 see [151].
The overwhelming majority of big data investigations create the technologies of big data acquisition, transmission, storage and preprocessing, whereas big analytics and visualization receive by far less consideration. However, the emphasis is gradually shifting towards efficient algorithms of big data handling.
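A staple of such algorithms is one-pass (streaming) aggregation, which maintains compact statistics instead of storing the raw flow. A minimal Python sketch of Welford’s online algorithm for the running mean and variance (the data values are invented):

```python
# One-pass (streaming) aggregation: Welford's online algorithm keeps
# a running mean and variance without storing the raw data flow.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # uses the updated mean

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for value in (2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0):
    stats.push(value)                 # each observation is seen once
print("mean:", stats.mean, "variance:", round(stats.variance, 3))
```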
Sources and “customers” of big data:
– science (astronomy and astrophysics, meteorology, nuclear physics, high-energy physics, geoinformation and navigation systems, distant Earth probing, geology and geophysics, aerodynamics and hydrodynamics, genetics, biochemistry and biology, etc.);
– Internet (in the wide sense, including the Internet of Things) and other telecommunication systems;
– business, commerce and finances, as well as marketing and advertising (including trading, targeting and adviser systems, CRM systems, RFID–radiofrequency identifiers used in sales, transportation, logistics and so on);
– monitoring (geo-, bio-, eco-; space, air, etc.);
– security (military systems, antiterrorist activity, etc.);
– power engineering (including nuclear power engineering), SmartGrid;
– medicine;
– governmental services and public administration;
– production and transport (objects, units and assemblies, control systems, etc.).

67 As we have mentioned above, in the recent 15 years experts in control theory have tended to consider the problems of control, computations and communication jointly (the so-called C3 problem: Control, Computation, Communication). According to this viewpoint, control actions are synthesized in real time taking into account the existing delays in communication channels and the information processing time (including computations). There is another generally accepted term (large-scale systems control), but big data can be generated by “small” systems.
68 An alternative interpretation of “big control” concerns the control of big data handling processes. Actually, this represents an independent and nontrivial problem.
Numerous applications69 of big data in these fields can be found in popular science literature (or even “glossy” journals) available at public Internet sources. We will not describe these applications here to avoid embarrassing “zettabytes” and “yottabytes.”
In almost all the fields cited, the modern level of automation is such that big data are generated automatically. Therefore, the following question gains growing importance: what is the volume of “lost” data flows (due to insufficient capabilities or time for their storage or processing)? This question seems correct for an engineer in ICT, but not for a scientist or a user of big data processing results. Rather, the former and the latter would ask “What are the essential losses in this case?” and “What would change if we successfully acquired and processed all the data?”, respectively.
Traditionally, big data are unstructured data whose volume exceeds the available handling capabilities in the required time. However, this definition appears somewhat “cunning”: data considered big today cease to be such tomorrow owing to the progress of data handling methods and means. Data that looked big several hundreds or even thousands of years ago (in the absence of automatic treatment) are easily processed today by home computers. The competition between the (hypothetical) computational demands of mankind and the corresponding technical capabilities has been known for a very long time. Of course, the capabilities have always been chasing the needs, and the gap between them represents a monumental stimulus for the development of science. Researchers have to suggest simpler (yet adequate) models, design more efficient algorithms, etc.

69 The principal idea of using big data is revealing “implicit regularities,” i.e., answering nontrivial questions: epidemic prediction based on information from social networks and sales in drugstores; medical and technical diagnostics; retention of clients by analyzing sellers’ behavior in stores (the spatial movements of the RFID tags of products); and others.
Sometimes, the definition of big data includes the so-called 5V properties (Volume, Velocity, Variety, Veracity, Validity). Alternatively, the difference between a big volume of conventional data and big data proper is that the latter form a big flow of unstructured70 data (in the sense of volume and of velocity as the volume per unit time).
In the wide comprehension, the unstructuredness of big data (text, video, audio, communication structures, etc.) is actually their characteristic feature and a challenge for applied mathematics, linguistics, cognitive sciences and artificial intelligence. The creation of real-time processing technologies71 for large flows of text, audio, video and other information, including the feasibility of implicit information revelation, forms the mainstream of applications of the above sciences72 to ICT.
Therefore, we observe a direct (and explicit) query from technologies to science. The second explicit query concerns the adaptation of traditional statistical analysis, optimization and other methods to big data analysis. Furthermore, it is necessary to develop new methods with due consideration of big data specifics. A modern fashionable trend is boosting analytics tools (generally, business analytics) for big data. But their list almost coincides with the classical kit of statistical tools (or is even narrower, since some methods are inapplicable to big data). This is also the case for:
– machine learning methods (support vector machines, random forests, artificial neural networks, Bayesian networks, including the selection of informative attributes and the dimension reduction of attribute spaces to counter model overfitting) and artificial intelligence methods;
– high-dimensional optimization problems (in addition to traditional parallel computing, intensive research focuses on distributed optimization);

70
Data unstructuredness can be the result of their omissions and/or different scales of
studied phenomena and processes (in space and time, see the so-called multi-scale
systems).
71
In the first place, these technologies must perform data aggregation (e.g., detecting
changes in technological data or storing aggregated indices). Really, one does not need
all data (especially, “homogeneous” data).
72
Mathematics rather easily operates structured data; and so, data structuring makes an
important problem.
94
- discrete optimization methods (here an “alternative” lies in applica-
tion of multi-agent program systems–see the above discussion of distrib-
uted optimization problems).
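To convey the flavor of the distributed optimization mentioned in the list above, here is a minimal sketch under strong simplifying assumptions (quadratic local losses, a complete communication graph, a fixed step size; all data are synthetic): each agent holds a private chunk of data, and the agents jointly minimize the total least-squares loss by alternating local gradient steps with averaging over the network, in the spirit of the consensus-based methods surveyed in [32].

    # A minimal sketch of consensus-based distributed optimization
    # (illustrative assumptions: quadratic losses, complete graph, fixed step).
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, dim = 5, 3
    x_true = rng.normal(size=dim)

    # Each agent privately holds a chunk of the data (A_i, b_i).
    A = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
    b = [A_i @ x_true + 0.01 * rng.normal(size=20) for A_i in A]

    x = [np.zeros(dim) for _ in range(n_agents)]   # local estimates
    step = 0.005
    for _ in range(300):
        # 1) local gradient step on each agent's private loss ||A_i x - b_i||^2
        x = [x_i - step * 2.0 * A_i.T @ (A_i @ x_i - b_i)
             for x_i, A_i, b_i in zip(x, A, b)]
        # 2) consensus step: mix the local estimate with the network average
        avg = sum(x) / n_agents
        x = [0.5 * (x_i + avg) for x_i in x]

    print("max disagreement:", max(np.linalg.norm(x_i - avg) for x_i in x))
    print("estimation error:", np.linalg.norm(avg - x_true))

No agent ever transmits its raw data (A_i, b_i); only the low-dimensional estimates are exchanged, which is exactly what makes such schemes attractive for big data flows.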
The common feature of the stated queries of technologies to science is that mere adaptation or small modification of well-known, tried-and-true methods is insufficient. We have to be aware of the following. Generally, automatic modeling (by traditional tools73) based on raw data represents just a fashionable delusion.74 The expectation that suggested algorithms, applied to bulky volumes of unstructured (often irrelevant) information, will by themselves improve the efficiency of decision-making is misplaced (recall the “emergent intelligence illusion”). There exist no miracles in science: generally, new conclusions require new models and new paradigms (e.g., see the books on science methodology [112, 149]).
The complexity of the surrounding world grows at a smaller rate than the capabilities of data detection (“measurement”) and storage. Perhaps these capabilities have already exceeded the ability of mankind to realize the feasibility and reasonability of their usage. In other words, we are “choking” on data while trying to figure out what to do with them.
However, there exists an alternative viewpoint of this situation. Obtaining big data (of arbitrarily large volume) is possible and easy enough (obvious examples arise in combinatorial optimization, nonlinear dynamics or thermodynamics, see below). But we have to understand how to manage big data (and ask Nature the correct questions). Furthermore, it is possible to construct an arbitrarily complex model using big data and then try to reach a higher accuracy within that model. But the associated dilemma is whether we obtain new results or not (in addition to very many new problems75). Mathematicians and physicists have long known that increasing the dimensionality and complexity of a model (the aspiration to consider more factors and relations among them) does not necessarily improve the quality of modeling results; sometimes it even carries the matter to the point of absurdity.76
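This caveat can be made concrete with a toy computation (the data here are synthetic, an assumption of this example): as the degree of a fitted polynomial grows, the training error falls towards zero while the error on fresh data explodes, i.e., the “more complex” model describes the available observations better and the studied phenomenon worse.

    # A toy illustration (assumed data): higher model complexity does not
    # necessarily improve modeling quality -- it may degrade it.
    import numpy as np

    rng = np.random.default_rng(1)
    x_train = np.linspace(0.0, 1.0, 10)
    y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=x_train.size)
    x_test = np.linspace(0.0, 1.0, 100)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)   # "more complex" = higher degree
        mse_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        mse_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE = {mse_train:.4f}, test MSE = {mse_test:.4f}")

    # Typically, the degree-9 polynomial almost interpolates the 10 training
    # points, yet its error on the test grid is far worse than for degree 3.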
73 An additional encumbrance is the accumulated experience of a researcher/developer and the traditions of his scientific school. The successful solution of a certain problem leads to the conviction that the same methods (and only they!) are applicable to the remaining open problems.
74 In some cases, additional information can be obtained by increasing the volume of data (under correct processing).
75 We recognize the importance of a model’s adequacy and the stability of modeling results, but omit these problems here.
76 Not to mention situations when existing scientific paradigms make it impossible in principle to model system behavior on a large time horizon (e.g., accurate weather forecasting).
Based on the analysis of several examples, the paper [151] distinguished between natural and artificial big data depending on their source. In the former case, data are generated by some independent object, and we (“investigators”) decide what should be “measured.” In the latter case, the source of big data is a model; complexity (the data flow) is partially controlled and defined during simulation.
“Recipes.” There exist four large groups of subjects (see Fig. 50) operating (explicitly or implicitly) with big data in their professional (scientific and/or practical) activity:
– manufacturers of big data handling tools (software/hardware developers, suppliers, consultants, integrators, etc.);
– designers of big data handling methods (experts in applied mathematics and computer science);
– specialists in application domains (scientists focused on real objects or their models) that represent big data sources;
– customers utilizing or planning to utilize the results of big data analysis in their activity.
Fig. 50. Subjects operating with big data: manufacturers of big data handling tools, designers of big data handling methods, specialists in application domains, and customers

Representatives of the mentioned groups interact with each other (see the dashed lines in Fig. 51). The normative (“ideal”) division of their “responsibility areas” is illustrated by Fig. 51; there, the thickness of the arrows corresponds to the level of involvement.
Fig. 51. The division of “responsibility areas”: from the object, data acquisition, transmission and storage (manufacturers of big data handling tools) through information processing (designers of handling methods and specialists in application domains) to knowledge utilization (customers)

Proceeding from sensus communis and not claiming to be constructive, we formulate the following general “recipes” for the listed groups of subjects.
For manufacturers of big data handling tools: in the course of time, it will become difficult to sell big data solutions (including analytical ones) without offering new adequate mathematical methods and providing for close cooperation among customers, the developers of appropriate methods and specialists in application domains.
For mathematicians (the author’s “brothers-in-arms”): a topical query concerns adapting well-known methods and developing new processing methods (in the first place, with nonlinear complexity!) for the large flows of unstructured data, which represent a good testing area for new models, methods and algorithms (to the extent possible, at the expense of manufacturers and/or customers).
For specialists in application domains: big data technologies lead to new capabilities for acquiring and storing bulky arrays of “experimental” information and for conducting so-called computational experiments; the associated methods of applied mathematics enable the systematic generation and rapid verification of hypotheses (the revelation of implicit regularities).
For customers: the expensive technologies of big data acquisition and storage would hardly be economically sound without involving specialists in the appropriate methods and subject areas (unless it is absolutely clear which questions the customer would like to answer using big data77).

77 Though, it is possible to store data de bene esse (e.g., to verify a certain hypothesis in the future based on them).
As a positive trend in big data handling, note the aspiration for seeking adequate macrodescriptions of big systems. For instance, consider research on social systems modeling (social networks, mobs and so on) that involves microdescriptions (at the level of separate agents) [18] and macrodescriptions (in terms of the distribution functions of essential parameters) [34], as well as establishes a correspondence between them [33]. Such an approach is also developed within the frameworks of sociophysics and econophysics, where statistical physics tools are applied to model complex networks and big socioeconomic systems.
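For a hedged illustration of this micro/macro correspondence, here is a minimal sketch of a threshold (“mob”) model; the Granovetter-type dynamics and the particular threshold distribution are assumptions of this example rather than the models of [33, 34]. At the micro level, an agent becomes active once the share of active agents exceeds its personal threshold; at the macro level, the same dynamics reduce to iterating the distribution function of the thresholds.

    # Micro- vs macrodescription of a threshold ("mob") model -- an
    # illustrative sketch with assumed parameters, in the spirit of the
    # correspondence discussed in [33, 34].
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    thresholds = rng.uniform(0.0, 1.0, size=n) ** 2   # assumed threshold distribution

    # Micro level: an agent activates when the active share exceeds its threshold.
    share = 0.1                                        # initial share of active agents
    for _ in range(50):
        share = float(np.mean(thresholds <= share))    # synchronous best response

    # Macro level: iterate x -> F(x), where F is the threshold CDF;
    # here F(x) = P(U**2 <= x) = sqrt(x) for U uniform on [0, 1].
    x = 0.1
    for _ in range(50):
        x = np.sqrt(x)

    print(f"micro-level equilibrium share: {share:.3f}")
    print(f"macro-level fixed point:       {x:.3f}")

Both descriptions converge to the same equilibrium, while the macrodescription requires no agent-level data at all, which is precisely the attraction of adequate macromodels for big systems.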
Some threats. In addition to the emphasized necessity of searching for adequate simple models and the alarming trend of anticipatory technology development, we expect the future relevance of the following problems (the list below is unstructured and incomplete).
The informational security of big data. This requires the adaptation of well-known methods and tools, as well as the development of fundamentally new ones. Indeed, alongside the growing topicality of cybersecurity problems (in the wide sense, the informational security of control systems) and the problem of security “against information” (especially in social networks), one should consider the specifics of big data proper.
The energy efficiency of big data. Even today, data processing centers represent a considerable class of power consumers. The bigger the data to be processed, the more energy is needed.
The principle of complementarity was established in physics long ago; it declares that measurements modify the state of a system. However, does it apply to social systems whose elements (people) are active, i.e., possess their own interests and preferences, choose their actions independently, etc. [36, 131, 157]?
A demonstration of this principle lies in so-called information manipulation (strategic behavior). According to the theory of choice [36, 38, 39, 131], an active subject reports information by forecasting the results of its usage; generally speaking, an active subject does not adhere to truth-telling.
Another example concerns so-called active forecasting: a system changes its behavior based on new knowledge about itself [158].
Are these and similar problems eliminated or aggravated in the case of big data?
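To see why truth-telling may fail, consider a minimal sketch (the proportional allocation rule and all numbers are assumptions of this illustration, not a mechanism from [36, 38, 39, 131]): a principal divides a fixed resource proportionally to reported needs, and an agent that forecasts how its report will be used benefits from inflating it.

    # A toy illustration of strategic behavior in information reporting
    # (assumed proportional allocation mechanism; parameters are illustrative).
    true_needs = [3.0, 5.0, 2.0]     # agents' actual resource needs
    RESOURCE = 6.0                   # total resource to be divided

    def allocate(reports, total=RESOURCE):
        """Principal's rule: share the resource proportionally to reports."""
        s = sum(reports)
        return [total * r / s for r in reports]

    # If everybody is truthful, agent 0 receives less than its need:
    honest = allocate(true_needs)
    print("truthful reports :", [round(a, 2) for a in honest])

    # Agent 0 forecasts the result of its report being used and inflates it:
    strategic = allocate([30.0, 5.0, 2.0])
    print("agent 0 inflates :", [round(a, 2) for a in strategic])

    # Inflating the report raises agent 0's allocation at the others' expense,
    # so truth-telling is not an equilibrium of this mechanism.

Big data do not automatically remove such manipulation: if the reported (or behaviorally generated) data remain under the agents’ control, the same incentive to distort them persists at scale.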
Recall the principle of uncertainty in the following (epistemological) statement [149]: the current level of science development is characterized by certain mutual constraints imposed on the “validity” of results and their applicability, see Fig. 11. In the context of big data, this principle means the existence of a rational balance between the level of detail in the description of a studied system and the validity of the results and conclusions to be made on the basis of this description.
A traditional assumption in the design and operation of information systems (corporate systems, decision support systems of governmental services, inter-agency document circulation, etc.) is that all information in such systems must be complete, unified and publicly available (under the existing access rights). But it is possible to show a “distorting-mirror” reality to each person, i.e., to create an individual informational picture,78 thereby performing informational control [157, 158]. Should we strive for or struggle against these effects in the field of big data?
Summarizing the above consideration of trends and forecasts in control theory, we declare that a similar (or even more systematic, regular and in-depth) analysis is vital for other sciences, viz., cybernetics, systems analysis, optimization, artificial intelligence, etc. This would give an impetus to the evolution of Cybernetics via the appearance of new generalizations in the form of corresponding laws, regularities, principles and so on.

Educational support. Concluding this chapter, we discuss a partial (yet important) aspect of the modern state of control theory, namely, its educational support.
Let us appeal to readers considering themselves experts in control theory with the following dilettantish request79: “Please recommend a textbook on modern control theory (a one-year course not restricted to automatic control theory (ACT), or even to linear systems, robust control or another single branch of control theory) such that an uninitiated student specializing in mathematics or engineering would form a complete and, at the same time, not superficial notion of modern control theory.”
Unfortunately, the request leads to deplorable results. On the one hand, there are good reference books [200], textbooks and handbooks on ACT, both classical (e.g., see [167, 215] and a survey in [83]) and modern ones (e.g., see [3, 16, 35, 51, 161]). On the other hand, excellent textbooks and monographs focus on separate branches of ACT: robust control [171], nonlinear systems control [96] and others.
And so, modern textbooks and reference books provide a good coverage of classical ACT, but almost ignore general statements of control and decision-making problems (being confined to dynamic systems as a “universal” descriptive framework for any controlled objects) and pay little attention to intelligent control, networked control, the “sectoral” specifics of controlled objects and so on. To our regret, and despite the efforts of N. Wiener and his followers towards the creation of a universal control science, none of the textbooks on ACT deeply treats the generality of the laws and processes of control in the animal, the machine and society.

78 At the very least, a fragment of the “objective” picture (hushing up the whole truth); at the most, an arbitrary inconsistent system of beliefs about reality.
79 Another “educational” question, ensuing from the generality of control laws and principles, can be stated as follows: “Is it better to organize a department for control problems in each ‘sectoral’ university, or a university dedicated to control problems with ‘sectoral’ departments?” The book touches on this question.
Imagine that this request (“Please recommend a textbook on modern control theory...”) is addressed to a potential reader without well-developed skills in higher mathematics (e.g., a schoolchild). How can we make the results of modern control theory clear to such readers? Here the situation seems even worse. Of course, (a) the amount of scientific knowledge accumulated in control theory is huge, (b) the study of this knowledge requires special training, (c) a dilettante would never perceive it, (d) the described function is performed by handbooks and reference books, and so on. But a counterargument is that today many sciences (physics, chemistry, biology) can be presented at the levels of a school textbook, a university textbook or a scientific monograph. For instance, such “encyclopaedic” textbooks exist for other “capacious” sciences, namely informatics, artificial intelligence, game theory, operations research, etc. Why are there no school textbooks on control theory80 and only a few broad university textbooks? The creation of easy-to-understand (yet rigorous and complete) textbooks on control theory is an urgent challenge for experts in the field!

Conclusion: Cybernetics 2.0

Therefore, we have briefly considered the history of cybernetics and its state-of-the-art, as well as the development trends and prospects of several components of cybernetics (mainly, control theory). What are the prospects of cybernetics itself? To answer this question, let us address the primary source, i.e., the initial definition of cybernetics as the science of CONTROL and COMMUNICATION.
The interrelation with control seems more or less clear. At first glance, this is also the case for communication: by the joint effort of scientists (including N. Wiener), the mathematical theory of communication and information appeared in the 1940s (quantitative models of information, the capacity of communication channels, coding theory, etc.).
80 Speaking about “control theory,” we mean exactly mathematical control theory (and not the corresponding branch of management science discussed in the numerous belles-lettres textbooks available today in stores).
But let us take a broader view of communication.81 Both in the paper [181] and in the original book [221], N. Wiener explicitly or implicitly mentioned interrelation, intercommunication and interaction–reasonability and causality (cause-effect relations). Really, in feedback control systems the control (effect) is defined by its cause, i.e., the state of the controlled system (plant); conversely, the control supplied to the input of the plant is induced by its cause, i.e., the state of the controller, and so on. No doubt, the channels and methods of communication are important, but they are secondary whenever the matter concerns universal regularities for animals, machines and society.
A much broader view implies interpreting communication as INTERCOMMUNICATION, e.g., between the elements of a plant, between a controller and a plant, etc., including different types of impacts and interactions (material, informational and others). “Intercommunication” is a more general category than “communication.”
In the general systems context, intercommunication corresponds to the category of ORGANIZATION (see its definition and discussion below). Therefore, a simple correction (replacing “communication” with “organization” in Wiener’s definition of cybernetics) yields a more general and modern definition of cybernetics: “the science of systems organization and their control.” We call it cybernetics 2.0.
Making such a substitution, we get distanced from informatics. Consider the soundness and consequences of this distancing.
Cybernetics and informatics. Nowadays, cybernetics and informatics form independent interdisciplinary fundamental sciences [101]. According to a figurative expression of B. Sokolov and R. Yusupov [191], informatics and cybernetics are “Siamese twins.” Yet, in nature Siamese twins represent a pathology.82
81 Academician A. Kolmogorov was against such an interpretation. In 1959 he wrote: “Cybernetics studies systems of any nature capable of perceiving, storing and processing information, as well as of using it for control and regulation. Cybernetics intensively employs mathematical methods and aims at obtaining concrete special results, both in order to analyze such systems (restore their structure based on the experience of their operation) and to design them (calculate schemes of systems implementing given actions). Owing to this concrete character, cybernetics is in no way reduced to the philosophical discussion of reasonability in machines and the philosophical analysis of the circle of phenomena it explores.” We venture to disagree with this opinion of the great Soviet mathematician.
82 For instance, the definition of informatics as the “union” of the general laws of informatics and control would induce a megascience without concrete content, subsisting at the conceptual level exclusively.
Cybernetics and informatics have a strong intersection (including the level of a common scientific basis–statistical information theory83), but their accents differ considerably. The fundamental ideas of cybernetics are Wiener’s “control and communication in the animal and the machine,” whereas the fundamental ideas of informatics are formalization (theory) and computerization (practice). Accordingly, in the mathematical sense cybernetics rests on control theory and information theory, whereas informatics proceeds from the theory of algorithms and formal systems.84
The subject of modern informatics (or even of the “umbrella brand” of informational sciences covering information science, computer science and computational science [102]) is informational processes.
Indeed, on the one hand, information processing arises everywhere (!), not only in control and/or organization. On the other hand, informational processes and the corresponding information and communication technologies are integrated into control processes85 so that discriminating between them seems almost impossible. The close cooperation of informatics and cybernetics at the operational level will continue and even extend in the future.
Organization. Organization theory. Organizational culture. According to the definition provided by the Merriam-Webster dictionary, an organization is:
1. The condition or manner of being organized;
2. The act or process of organizing or of being organized;
3. An administrative and functional structure (such as a business or a political party); also, the personnel of such a structure–see Fig. 52.
83 Note that the mathematical (statistical) theory of communication and information operates with quantitative assessments of information. Unfortunately, no essential advancements have been made in the field of the substantial (semantic) value of information. This problem is still a global challenge of informatics.
84 This distinction partly elucidates why some sciences often related to informatics or computer science have not been reflected in the book: the theory of formal languages and grammars, “true” artificial intelligence (knowledge engineering, reasoning formalization, behavior planning, etc., as opposed to artificial neural networks as a modern empirical engineering science), automata theory, computational complexity theory, and so on.
85 N. Wiener believed that control processes are, in the first place, informational processes: information acquisition, processing and transmission (see the above discussion of the joint solution of problems appearing in control, computations and communication).
Fig. 52. Definition of organization: a property (the condition or manner of being organized), a process (the act or process of organizing or of being organized), and an organizational system (an association of people engaged in the joint implementation of a certain program or task, acting based on specific procedures and rules–mechanisms of operation)

The present book uses the notion of “organization” mostly in its second and first meanings, i.e., as a process and as the result of this process. The third meaning (an organizational system) as a class of controlled objects appears in the theory of control in organizational systems [131, 157].
At the descriptive (phenomenological) and explanatory levels, “system organization” reflects HOW and WHY EXACTLY SO, respectively, a system is organized (organization as a property). At the normative level, “system organization” reflects how it MUST be organized (requirements to the property of organization) and how it SHOULD be organized (requirements to the process of organization).
A scientific branch responsible for the posed questions–Organization theory,86 or O3 (organization as a property, process and system, by analogy with C3 as discussed above)–has almost not been developed to date. Yet, this branch obviously has a close connection and partial intersection with general systems theory and systems analysis (mostly focused on descriptive-level problems and dealing a little with normative-level ones), as well as with methodology (as the general science of activity organization [148]). Creating a full-fledged Organization theory is a topical problem of cybernetics!
86 Note that there also exists a “theory of organizations” (“organizational theory”)–a branch of management science differing from O3 both in its subject (organizational systems) and in the methods used. Unfortunately, numerous textbooks (and just a few monographs!) give only descriptive generalizations on the property and process of organization in their introductions, with most attention then switched to organizational systems, viz., the management of organizations (for instance, see the classical textbooks [47, 134]).
Speaking about the notion of organization, one should not ignore the phenomenon of organizational culture. Different historical periods of civilization are remarkable for different types of activity organization, now called types of organizational culture; see Table 6.

Table 6. Types of organizational culture: A characterization [148, 152]

Type of organizational culture | Method of normalization and translation of activity | Form of social structure implementing the corresponding method
Traditional | Myths and rituals | Communities based on the kinship principle
Corporate-handicraft | Samples and recipes for their recreation | Corporations with a formal hierarchical structure (masters, apprentices and journeymen)
Professional (scientific) | Theoretical knowledge in the form of texts | Professional organizations based on the principle of ontological relations (relations of objective reality)
Project-technological | Projects, programs and technologies | A technological society structured by the communicative principle and professional relations
Knowledge-based | (Individual) and collective knowledge about activity organization | A networked society of knowledge

Presently, the knowledge-based type of organizational culture gradually manifests itself. Here exactly the (individual and collective) knowledge about activity organization (!) is the product and the means of activity normalization and translation, while the networked society of knowledge87 is the form of social structure (nowadays, the term “knowledge economy” has wide occurrence). Cybernetics 1.0 de bene esse matched the project-technological type of organizational culture, whereas cybernetics 2.0 corresponds to the knowledge-based type (at the new stage of development, organization becomes crucial).

87 The author believes that “the knowledge-based type of organizational culture,” “knowledge society,” “knowledge management” and others are lame terms in this context. Really, the preceding type of organizational culture–the professional (scientific) one–was also founded on scientific knowledge. Nevertheless, these terms are widely used. Let us clarify the meaning of knowledge here. In the professional (scientific) type of organizational culture, the leading role belonged to scientific knowledge in the form of texts. The knowledge-based type of organizational culture operates the knowledge of people and organizations about the organization of activity.
Consider the correlation of the two basic categories in the definition of cybernetics 2.0 (“organization” and “control”).
Control is “an element, a function of different organized systems (biological, social, technical ones) preserving their definite structure, maintaining the activity mode, implementing the program or goal of activity.” Control is “an impact on a controlled system intended to ensure its necessary behavior” [157].
Consequently, the categories of organization and control do intersect but do not coincide. The former fits system design and the latter fits system functioning88; they are jointly realized during system implementation and adaptation, see Fig. 53. In other words, organization (the strategic loop) “foregoes” control (the tactical loop).

Fig. 53. Organization and control across the aggregative stages of the system life cycle: I–design, II–implementation, III–functioning

The domains in Fig. 53 have the following content (as examples):
I. Design (construction) of systems (including their staff, structure and functions)–organization but not control (despite the fact that the theory of control in organizational systems considers staff control and structure control).
II. Joint design of a system and a controlled object. Adaptation. Adjustment of control mechanisms.
III. Functioning of controllers in technical systems–control but not organization.
88 A conditional analogy: organization corresponds to deism (the creator of a system does not interfere in its functioning), while control corresponds to theism (the opposite picture).
Organization and control can also have a “hierarchical” correlation.89 On the one hand, the control process calls for organization (organization as a stage in Fayol’s management cycle and a function of organizational control, see [131]). On the other hand, the organization process (e.g., a system life cycle) might and should be controlled.
As the systems created by mankind become more complicated, the process and property of organization will attract more and more attention. Indeed, control of standard objects (e.g., controller design for technical and/or production systems) gradually becomes a handicraft rather than a science; modern challenges highlight the standardization of activity organization technologies, the creation of new activity technologies, etc. (activity systems engineering).
A fruitful combination of organization and control within cybernetics 2.0 would give a substantiated and efficient answer to the primary question of activity systems engineering: how should control systems for such systems be constructed? Actually, this is a “reflexive” question related to second-order and even higher-order cybernetics. Mankind has to learn to design and implement control systems for complex systems (high-technology manufacturing, product life cycles, organizations, regions, etc.), similarly to the existing achievements in technical systems engineering.
Cybernetics is also important from the general educational viewpoint, since it forms an integral modern scientific world outlook.
Cybernetics 2.0. We have defined cybernetics 2.0 as the science of (the general regularities in) systems organization and their control.
The close connection between cybernetics and general systems theory and systems analysis, as well as the growing role of technologies (see Fig. 9 and Figs. 20-21), leads to a worthy hypothesis. Cybernetics 2.0 includes cybernetics (Wiener’s cybernetics and the higher-order cybernetics discussed in Section 1.2), Cybernetics, and general systems theory and systems analysis, with results in the following forms:
– general laws, regularities and principles studied within the metasciences–Cybernetics and Systems analysis;
– a set of results obtained by the component sciences (the “umbrella brands” of cybernetics and systems studies uniting the appropriate sciences);
– design principles of the corresponding technologies.
89 Generally speaking, the correlation of organization and control is far from trivial and requires further comprehension. For instance, in multi-agent systems decentralized control (choosing the laws and rules of autonomous agents’ interaction) can be treated as organization. Another example is the Bible as a tool of organization [174] (a system of norms forming common knowledge and implementing institutional control of a society).
Let us discuss the latter in detail. A technology is a system of conditions, forms, criteria, methods and means of solving a posed problem [148, 149]. Today, technologies standardize craft/skill90 and art91 via the identification and generalization of best practices; the creation of technologies calls for appropriate scientific grounds, see Fig. 54.

Fig. 54. Science, technology, craft and art: technologies draw on the laws, regularities and principles of science, on the wide practice of craft, and on the individual (creative) experience of art

We separate out the following general technologies:
– systems technologies (general principles; activity organization);
– informational technologies (support of activity);
– organizational technologies (implementation of coordinated joint activity).
Alongside these general technologies, there exist “sectoral” technologies of practical activity (“production”); they depend on application domains and possess their specifics.
According to this viewpoint, the complex study and design of any systems (whether machines, animals or society) within cybernetics 2.0 employs the corresponding results obtained by method- and subject-oriented sciences, as well as by general and sectoral technologies–see Fig. 55.
90 A craft is a personal skill of routine operations based on experience.
91 Art is a system of techniques and methods in some branch of practical activity; the process of using talent; an extremely developed creative skill or ability.
Fig. 55. Sciences and technologies within cybernetics 2.0: a system is studied by the metasciences and by subject- and method-oriented sciences (research), and is dealt with via systems, informational and organizational technologies (implementation)


Keywords for cybernetics 2.0 are control, organization and system
(see Fig. 56).

Fig. 56. Keywords of cybernetics 2.0: control, organization and system (between science and technologies)

Similarly to cybernetics in its common sense, cybernetics 2.0 has a conceptual core (Cybernetics 2.0, with a capital C). At the conceptual level, Cybernetics 2.0 is composed of control philosophy (including the general laws, regularities and principles of control), control methodology, and Organization theory (including the general laws, regularities and principles of (a) complex systems functioning and (b) the development and choice of general technologies), as illustrated by Fig. 57.
The basic sciences for cybernetics 2.0 are control theory, general systems theory and systems analysis, as well as systems engineering–see Fig. 57.
The complementary sciences for cybernetics 2.0 are informatics, optimization, operations research and artificial intelligence–see Fig. 57.
Fig. 57. The composition and structure of cybernetics 2.0: the conceptual level (Cybernetics 2.0–control philosophy, control methodology, Organization theory), the level of basic sciences (control theory, general systems theory and systems analysis, systems engineering) and the level of complementary sciences (informatics, optimization, operations research, artificial intelligence)

The general architecture of cybernetics 2.0 (see Fig. 57) admits projection onto different application domains and branches of subject-oriented sciences, depending on the class of posed problems (technical, biological, social, etc.).
The prospects of cybernetics 2.0. The further development of cybernetics has several alternative scenarios as follows:
– the negativistic scenario (the prevailing opinion is that “cybernetics does not exist,” and it gradually falls into oblivion);
– the “umbrella” scenario (owing to past endeavors, cybernetics is considered a “mechanistic” (non-emergent) union, and its further development is forecasted using the aggregate of trends displayed by the basic and complementary sciences under the “umbrella brand” of cybernetics);
– the “philosophical” scenario (the framework of new results in cybernetics 2.0 includes conceptual considerations only–the development of the conceptual level);
– the subject-oriented (sectoral) scenario (the basic results of cybernetics are obtained at the junction with sectoral applications);
– the constructive-optimistic (desired) scenario (the balanced development of the basic, complementary and “conceptual” sciences takes place, accompanied by the convergence and interdisciplinary translation of their common results, with the subsequent generation of conceptual-level generalizations–the realization of Wiener’s dream “to understand the region as a whole,” see the epigraph to this book).
Let us revert to the trends and groups of subjects mentioned in Section 1.3. Note that the development of cybernetics 2.0 under the intensified differentiation of the sciences provides the following (see Fig. 58):
– for scientists specialized in cybernetics proper and the representatives of adjacent sciences: the general picture of a wide subject domain (and a common language for its description), the positioning of their results, and promotion in new theoretical and applied fields;
– for potential users of applied results (authorities, business structures): (1) confidence in the uniform positions92 of researchers; (2) more efficient solution of control problems for different objects based on new fundamental results and the associated applied results.
The main challenges are control in social and living systems. Several classes of control problems seem topical, namely:
– network-centric systems (including military applications, networked and cloud production);
– informational control and cybersecurity;
– life cycle control of complex organization-technical systems;
– activity systems engineering.
Among promising application domains, we mention living systems, social systems, microsystems, energetics and transport.
There exists a series of global challenges to cybernetics 2.0 (i.e., observed phenomena going beyond cybernetics 1.0), see Chapter 5:
1) the scientific Tower of Babel (interdisciplinarity and the differentiation of sciences; in the context of cybernetics, first of all, the sciences of control and the adjacent sciences);
2) centralization collapse (decentralization and networkism, including systems of systems, distributed optimization, emergent intelligence, multi-agent systems, and so on);
3) strategic behavior (in all its manifestations, including the inconsistency of interests, goal-setting, reflexion and so on);
4) the complexity damnation (including all aspects of the complexity and nonlinearity93 of modern systems, as well as the dimensionality damnation–big data and big control).

92 The diversity and inconsistency of the opinions and approaches suggested by experts (subordinates) always confuse customers (superiors).

Fig. 58. The challenges, classes of problems and application domains of cybernetics 2.0

Thus, the main tasks of cybernetics 2.0 are developing the basic and complementary sciences, responding to the stated global challenges, and advancing in the appropriate application domains, see Fig. 58.
And here are the main Tasks of Cybernetics 2.0:
1) ensuring the interdisciplinarity of investigations (with respect to the basic and complementary sciences, as illustrated by Fig. 57);
2) revealing, systematizing and analyzing the general laws, regularities and principles of control for systems of different nature within control philosophy; this will require newer and newer generalizations (see Fig. 10);
3) elaborating and refining Organization theory (O3).
This book has described the phylogenesis of a new stage of cybernetics–cybernetics 2.0. The further development of cybernetics will call for a considerable joint effort of mathematicians, philosophers, experts in control theory and systems engineering, and many others involved.
93 Figuratively speaking, in this sense cybernetics 2.0 has to include a nonlinear automatic control theory studying nonlinear decentralized objects with nonlinear observers, etc.
References

1 Ackoff R. Towards a System of Systems Concepts // Management Science. 1971. Vol. 17. No 11. P. 661–671.
2 Ackoff R., Emery F. On Purposeful Systems: An Interdisciplinary
Analysis of Individual and Social Behavior as a System of Purposeful
Events. 2nd ed. – New York: Aldine Transaction, 2005. – 303 p.
3 Albertos P., Mareels I. Feedback and Control for Everyone. – Ber-
lin: Springer, 2010. – 318 p.
4 Algorithmic Game Theory / Eds. Nisan N., Roughgarden T.,
Tardos E., and Vazirani V. – New York: Cambridge University Press,
2009. – 776 p.
5 Amosov N. Modeling of Complex Systems. – Kiev: Naukova
Dumka, 1968. – 81 p. (in Russian)
6 Ampère A.-M. Essai sur la philosophie des sciences. – Paris: Chez
Bachelier, 1843. P. 140–142.
7 Anokhin P. Anticipatory Reflection of Reality // Russian Studies
in Philosophy. 1962. No. 7. P. 97–112. (in Russian)
8 Anokhin P. The Center-Periphery Problem in the Modern Physiol-
ogy of Neural Activity. – Gorky, 1935. P. 9–70. (in Russian)
9 Anokhin P. Theory of Functional System as a Premise of Physio-
logical Cybernetics Development / Biological Aspects of Cybernetics. –
Moscow: USSR Academy of Sciences, 1962. P. 74–91. (in Russian)
10 Antomonov Yu. Modeling of Biological Systems: A Handbook. –
Kiev: Naukova Dumka, 1977. – 259 p. (in Russian)
11 Arbib M. The Metaphorical Brain: An Introduction to Cybernet-
ics as Artificial Intelligence and Brain Theory. – New York: Wiley, 1972.
– 384 p.
12 Arrow K. Social Choice and Individual Values. – New York:
Wiley, 1951. – 99 p.
13 Asaro P. Whatever Happened to Cybernetics / Geist in der
Maschine. – Wien: Verlag Turia, 2010. – P. 39–50.
14 Ashby W. An Introduction to Cybernetics. – London: Chapman
and Hall, 1956. – 295 p.
15 Ashby W. Design for a Brain: The Origin of Adaptive Behavior.
– New York: John Wiley & Sons, 1952. – 298 p.
16 Astrom K., Murray R. Feedback Systems: An Introduction for
Scientists and Engineers. – Princeton: Princeton University Press, 2012. –
408 p. (http://press.princeton.edu/titles/8701.html)

17 Baker K., Kropp D. Management Science: Introduction to the
Use of Decision Models. – New York: John Wiley and Sons Ltd, 1985. –
650 p.
18 Barabanov I., Korgin N., Novikov D., Chkhartishvili A. Dynamic
Models of Informational Control in Social Networks // Automation and
Remote Control. 2010. Vol. 71. No. 11. P. 2417–2426.
19 Bar-Yam Y. Multiscale Variety in Complex Systems // Complex-
ity. 2004. Vol. 9. No 4. P. 37–45.
20 Bateson G. Steps to an Ecology of Mind. – San Francisco: Chan-
dler Pub. Co., 1972. – 542 p.
21 Bauer E. Theoretical Biology. – Moscow, Leningrad: All-USSR
Institute of Experimental Medicine, 1935. – 206 p. (in Russian)
22 Beer S. Brain of the Firm: A Development in Management Cy-
bernetics. – London: Herder and Herder, 1972. – 319 p.
23 Beer S. Cybernetics and Management. – London: The English
University Press, 1959. – 214 p.
24 Bernstein N. Sketches on the Physiology of Movements and the
Physiology of Activity. – Moscow: Meditsina, 1966. – 347 p. (in Rus-
sian)
25 Bertalanffy L. General System Theory – a Critical Review //
General Systems. 1962. Vol. 7. P. 1–20.
26 Bertalanffy L. General System Theory: Foundations, Develop-
ment, Applications. – New York: George Braziller, 1968. – 296 p.
27 Bertalanffy L. The Theory of Open Systems in Physics and Biol-
ogy// Science. 1950. 13 Jan. Vol. 111. P. 23–29.
28 Blauberg I., Yudin E.G. The Formation and Essence of Systems
Approach. – Moscow: Nauka, 1973. – 271 p. (in Russian)
29 Bogdanov A. The General Organizational Science. – Moscow:
Ekonomika, 1913-17. Vol. 1-2., 1925-29. Vol. 3. (in Russian) /
Bogdanov A. Allgemeine Organisationslehre (Tektologie). – Berlin:
Hirzel, 1926. I; 1928. II / Bogdanov A. Essays in Tektology. – Seaside:
Intersystems Publications, 1980. – 291 p.
30 Boulding K. General System Theory – The Skeleton of Science //
Management Science. 1956. Vol. 2. P. 197–208.
31 Boxer P., Kenny V. Lacan and Maturana: Constructivist Origins for a 3° Cybernetics // Communication and Cognition. 1992. Vol. 25. No. 1. P. 73–100.
32 Boyd S., Parikh N., Chu E., et al. Distributed Optimization and
Statistical Learning via the Alternating Direction Method of Multipliers //
Foundations and Trends in Machine Learning. 2011. No. 3(1). P. 1–122.

33 Breer V.V., Novikov D.A., Rogatkin A.D. Micro- and
Macromodels of Social Networks // Automation and Remote Control.
Part 1: General Theory; Part 2: Identification and Simulation Experi-
ments. 2015.
34 Breer V.V., Novikov D.A., Rogatkin A.D. Stochastic Models of
Mob Control // Large-Scale Systems Control. 2014. No. 52. P. 85–117.
(in Russian)
35 Bubnicki Z. Modern Control Theory. – Berlin: Springer, 2005. –
423 p.
36 Burkov V. Foundations of Mathematical Theory of Active Sys-
tems. – Moscow: Nauka, 1977. – 255 p. (in Russian)
37 Burkov V., Enaleev A. Stimulation and Decision-Making in the
Active Systems Theory: Review of Problems and New Results // Mathe-
matical Social Sciences. 1994. Vol. 27. P. 271–291.
38 Burkov V., Goubko M., Korgin N., Novikov D. Introduction to
Theory of Control in Organizations. – New York: CRC Press, 2015. –
352 p.
39 Burkov V., Lerner A. Fairplay in Control of Active Systems /
Differential Games and Related Topics. Amsterdam, London: North-
Holland Publishing Company, 1971. P. 164–168.
40 Buslenko N. Modeling of Complex Systems. – Moscow: Nauka,
1978. – 420 p. (in Russian)
41 Cannon W. The Wisdom of the Body. – New York: Norton,
1932. – 312 p.
42 Casti J. Connectivity, Complexity and Catastrophe in Large-Scale
Systems. – Chichester: John Wiley and Sons, 1979. – 203 p.
43 Checkland P. Soft System Methodology: A Thirty Years Retro-
spective // Systems Research and Behavioral Science. 2000. Vol. 17. P.
11–58.
44 Checkland P. Systems Thinking, Systems Practice. – Chichester:
John Wiley & Sons Ltd., 1981. – 331 p.
45 Chernavsky D. Synergetics and Information. – Moscow: Editorial
URSS, 2004. – 288 p. (in Russian)
46 Chernyak Yu. Systems Analysis in Economy Management. –
Moscow: Ekonomika, 1975. – 191 p. (in Russian)
47 Daft R. Organization Theory and Design. 11th ed. – New York:
Cengage Learning, 2012. – 688 p.
48 Dancoff S., Quastler H. The Information Content and Error Rate
of Living Things / Essays on the Use of Information Theory in Biology. –
Illinois: University of Illinois Press, 1953. P. 263–274.

49 Dennis A., Wixom B., Roth R. Systems Analysis and Design. 5th
ed. – New York: Wiley, 2012. – 594 p.
50 Diev V. Control. Philosophy. Society // Voprosy Filosofii. 2010.
No. 8. P. 35–41. (in Russian)
51 Dorf R., Bishop R. Modern Control Systems. 12th ed. – Upper
Saddle River: Prentice Hall, 2011. – 1111 p.
52 Druzhinin V., Kontorov D.S. Introduction to Conflict Theory. –
Moscow: Radio i Svyaz’, 1989. – 288 p. (in Russian)
53 Emergent Intelligence of Networked Agents / Ed. by
Namatame A., Kurihara S., Nakashima H. – Berlin: Springer, 2007. –
261 p.
54 Foerster H. The Cybernetics of Cybernetics. 2nd edition. Minne-
apolis: Future Systems, 1995. – 228 p.
55 Foerster H. Understanding Understanding: Essays on Cybernetics
and Cognition, New York: Springer-Verlag, 2003. – 362 p.
56 Forrest J., Novikov D. Modern Trends in Control Theory: Net-
works, Hierarchies and Interdisciplinarity // Advances in Systems Science
and Application. 2012. Vol.12. No. 3. P. 1–13.
57 Forrester J. Industrial Dynamics. – Cambridge: Pegasus Commu-
nications, 1961. – 464 p.
58 Forrester J. Principles of Systems. – Cambridge: Pegasus Com-
munications, 1968. – 387 p.
59 Fradkov A. Cybernetical Physics: From Control of Chaos to
Quantum Control (Understanding Complex Systems). – Berlin: Springer,
2006. – 236 p.
60 Gelfand I., Gurfinkel V.S., Tseitlin M.L. On Tactics of Complex
Systems Control in Connection with Physiology / Biological Aspects of
Cybernetics. – Moscow: USSR Academy of Sciences, 1962. P. 66–73. (in
Russian)
61 George F. The Brain as a Computer. – New York: Pergamon
Press, 1962. – 437 p.
62 George F. The Foundations of Cybernetics. – London: Gordon
and Breach Science Publisher, 1977. – 286 p.
63 George F.H. Philosophical Foundations of Cybernetics. – Kent:
Abacus Press, 1979. – 157 p.
64 Germeier Yu. Non-Antagonistic Games, 1976. – Dordrecht: D.
Reidel Publishing Company, 1986. – 331 p.
65 Gerovich S. From Newspeak to Cyberspeak: A History of Soviet
Cybernetics. – Cambridge: MIT Press, 2002. – 383 p.
66 Gershenson C., Csermely P., Érdi P., Knyazeva H., Laszlo A. The Past, Present and Future of Cybernetics and Systems Research // Systems. Connecting Matter, Life, Culture and Technology. 2013. Vol. 1. No 3. P. 4–13.
67 Gigch J. Applied General Systems Theory. 2nd ed. – New York:
Harper & Row, 1978. – 736 p.
68 Glushkov V. Introduction to Cybernetics. – Kiev: Ukr. SSR
Academy of Sciences, 1964. – 324 p. (in Russian)
69 Gonçalves C. Quantum Cybernetics and Complex Quantum Systems Science – A Quantum Connectionist Exploration // NeuroQuantology. 2015. Vol. 13. No 1.
70 Goode H., Machol R. System Engineering: An Introduction to the Design of Large-scale Systems. – New York: McGraw-Hill Book Company, 1957. – 551 p.
71 Gorsky Yu. A System-Informational Analysis of Control Pro-
cesses. - Novosibirsk: Nauka, 1988. - 327 p. (in Russian)
72 Grössing G. Quantum Cybernetics. Toward a Unification of
Relativity and Quantum Theory via Circularly Causal Modeling. – New
York: Springer, 2000. – 153 p.
73 Gubanov D., Korgin N., Novikov D., Raikov A. E-Expertise:
Modern Collective Intelligence. – Heidelberg: Springer, 2014. – 150 p.
74 Gubanov D., Makarenko A., Novikov D. Analysis Methods for
the Terminological Structure of a Subject Area // Automation and Re-
mote Control. 2014. Vol. 75. No. 12. P. 2231–2247.
75 Gubanov D., Novikov D., Chkhartishvili A.G. Social Networks:
Models of Informational Influence, Control and Confrontation. – Mos-
cow: Fizmatlit, 2010. – 228 p. (in Russian)
76 Guide to the Systems Engineering Body of Knowledge (SEBoK)
v1.3.2. BKCASE, INCOSE 2015. – 971 p.
77 Haken H. Advanced Synergetics: Instability Hierarchies of Self-
Organizing Systems and Devices. 2nd ed. – New York: Springer-Verlag,
1993. – 356 p.
78 Handbook of Dynamic Systems Modeling / Ed. by P. Fishwick. –
New York: CRC Press, 2007. – 760 p.
79 Kharitonov V., Alekseev A.O. The Concept of Subject-Oriented
Control in Social and Economic Systems // Polythematic Electronic
Journal of Kuban State Agricultural University [Electronic source]. –
Krasnodar: Kuban State Agricultural University, 2015. – No. 05 (109). –
IDA [article ID]: 1091505043. – Available at
http://ej.kubagro.ru/2015/05/pdf/43.pdf. (in Russian)
80 Heylighen F. Principles of Systems and Cybernetics: An Evolu-
tionary Perspective / Cybernetics and Systems’92. – Singapore: World
Science, 1992. P. 3–10.
81 Heylighen F., Joslyn C. Cybernetics and Second-Order Cybernet-
ics / Encyclopedia of Physical Science & Technology. 3rd ed. – New
York: Academic Press, 2001. P. 155–170.
82 Hillier F. and Lieberman G. Introduction to Operations Research
(8th ed.). – Boston: McGraw-Hill, 2005. – 1061 p.
83 Historic Control Textbook / Ed. by J. Gertler. – Oxford: Elsevier,
2006. – 304 p.
84 Vus M.A. The History of Informatics and Cybernetics in Saint
Petersburg (Leningrad). Vol. 1. Striking Historical Examples // Ed. by
Corr. Member of RAS R.M. Yusupov; Institute of Informatics and Auto-
mation of RAS. – St. Petersburg: Nauka, 2008. – 356 p. (in Russian)
85 The History of Cybernetics / Ed. by Ya.I. Fet. – Novosibirsk:
Geo, 2006. – 339 p. (in Russian)
86 Hitchins D. Putting Systems to Work. – New York: Wiley, 1993.
– 342 p.
87 Il’in V. The Philosophy and History of Science. – Moscow:
Lomonosov Moscow State University, 2005. – 432 p. (in Russian)
88 INCOSE Systems Engineering Handbook Version 3.2.2 – A
Guide for Life Cycle Processes and Activities / Ed. by C. Haskins. – San
Diego: INCOSE, 2012. – 376 p.
89 Jackson M. Social and Economic Networks. – Princeton: Prince-
ton Univ. Press, 2010. – 520 p.
90 Jaradat R., Keating C. A Histogram Analysis for System of Sys-
tems // International Journal System of Systems Engineering. 2014. Vol.
5. No. 3. P. 193–227.
91 Julong D. Introduction to Grey System Theory // The Journal of
Grey System. 1989. Vol. 1. P. 1–24.
92 Kahn H., Mann I. Techniques of Systems Analysis. – Santa Mon-
ica: RAND Corporation, 1956. – 168 p.
93 Kalman R., Falb P., Arbib M. Topics in Mathematical System Theory. – McGraw-Hill Book Co., 1969.
94 Kaufman A. Introduction to Fuzzy Arithmetic. – New York: Van
Nostrand Reinhold Company, 1991. – 384 p.
95 Kenny V. There’s Nothing Like the Real Thing. Revisiting the
Need for a Third-Order Cybernetics // Constructivist Foundations. 2009.
No 4(2). P. 100–111.
96 Khalil H. Nonlinear Systems. 2nd ed. – Upper Saddle River: Pren-
tice Hall, 1996. – 734 p.
97 Klaus G. Kybernetic und Gesellschaft. – Berlin: Veb Deutscher
Verlag der Wissenschaften, 1964. – 384 p.

98 Klaus G. Kybernetik in Philosophischer Sicht. – Berlin: Dietz
Verlag Berlin, 1961. – 491 p.
99 Kobrinsky N., Maiminas E.Z., Smirnov A.D. Economic Cyber-
netics. – Moscow: Ekonomika, 1982. – 408 p. (in Russian)
100 Kogan A., Naumov N.P., Rezhabek V.G., Chorayan O.G. Bio-
logical Cybernetics. – Moscow: Vysshaya Shkola, 1972. – 384 p. (in
Russian)
101 Kolin K. Philosophical Problems of Informatics. – Moscow:
BINOM, 2010. – 270 p. (in Russian)
102 Kolin K. The Formation of Informatics as a Fundamental Sci-
ence and a Complex Scientific Problem // Sistemy i Sredstva Informatiki.
2006. Special Issue on Scientific and Methodological Problems of Infor-
matics. P. 7–58. (in Russian)
103 Kolin K. The Structure of Scientific Research on the Complex
Problem of Informatics / Sotsial’naya Informatika. - Moscow: Higher
Commercial School, 1990. P. 19–33. (in Russian)
104 Kolmogorov A. Mathematics – A Science and Profession //
Kvant. No. 64. – Moscow: Nauka, 1988. P. 43–62. (in Russian)
105 Korepanov V., Novikov D. The Diffuse Bomb Problem // Au-
tomation and Remote Control. 2013. Vol. 74. No 5. P. 863–874.
106 Korepanov V., Novikov D. Models of Strategic Behavior in the
Diffuse Bomb Problem // Control Sciences. 2015. No. 2. P. 38–44. (in
Russian)
107 Korepanov V., Novikov D. Reflexive Colonel Blotto Game //
Control Systems and Information Technology. 2012. No. 1 (47). P. 55–
62. (in Russian)
108 Korshunov Yu. Mathematical Foundations of Cybernetics. –
Moscow: Energoatomizdat, 1987. – 496 p. (in Russian)
109 Kozielecki J. Psychological Decision Theory. – London:
Springer, 1982. – 424 p.
110 Kozlov V. Systems Analysis, Optimization and Decision-
Making. – Moscow: Prospekt, 2010. – 176 p. (in Russian)
111 Krylov S. Neocybernetics: Algorithms, Evolution Mathematics
and Future Technologies. – Moscow: LKI, 2008. – 288 p. (in Russian)
112 Kuhn T. The Structure of Scientific Revolutions. – Chicago:
University of Chicago Press, 1962. – 264 p.
113 Kuzin L. The Foundations of Cybernetics. – Moscow: Energiya,
1979. Vol. 1. – 504 p. Vol. 2. – 584 p. (in Russian)
114 Larichev O. Systems Analysis: Problems and Prospects // Au-
tomation and Remote Control. 1975. Vol. 36. No. 2. P. 241–249.

115 Lefevbre V. Algebra of Conscience. – London: Springer, 2001.
– 372 p.
116 Lefevbre V. Second-Order Cybernetics in the Soviet Union and
Western Countries // Reflexive Processes and Control. 2002. No. 1. Vol.
2. P. 96–103. (in Russian)
117 Lefevbre V. The Structure of Awareness: Toward a Symbolic
Language of Human Reflexion. – New York: Sage Publications, 1977. –
199 p.
118 Lepsky V. The Philosophy and Methodology of Control in the
Context of Scientific Rationality Development / XII All-Russian Meeting
on Control Problems. – Moscow: Trapeznikov Institute of Control Sci-
ences, 2014. P. 7785–7796. (in Russian)
119 Lerner A. Fundamentals of Cybernetics. – Berlin: Springer,
1972. – 294 p.
120 Malinetsky G., Potapov A.B., Podlazov A.V. Nonlinear Dynam-
ics: Approaches, Results, Expectations. 3rd Ed. – Moscow: URSS, 2011. –
280 p. (in Russian)
121 Mancilla R. Introduction to Sociocybernetics (Part 1): Third-
Order Cybernetics and a Basic Framework for Society // Journal of
Sociocybernetics. 2011. Vol. 42. No. 9. P. 35–56.
122 Mancilla R. Introduction to Sociocybernetics (Part 3): Fourth-
Order Cybernetics // Journal of Sociocybernetics. 2013. Vol. 44. No. 11.
P. 47–73.
123 Mansour Y. Computational Game Theory. – Tel Aviv: Tel Aviv
University, 2003. – 150 p.
124 Maruyama M. The Second Cybernetics: Deviation-Amplifying
Mutual Causal Processes // American Scientist. 1963. Vol. 5. No. 2. P.
164–179.
125 Maturana H., Varela F. Autopoiesis and Cognition. – Dordrecht:
D. Reidel Publishing Company, 1980. – 143 p.
126 Maturana H., Varela F. The Tree of Knowledge. – Boston:
Shambhala Publications, 1987. – 231 p.
127 Maxwell J.C. On Governors // Proceedings of the Royal Society
of London. 1868. Vol. 16. P. 270–283.
128 Mead M. The Cybernetics of Cybernetics / Purposive Systems.
Ed. by H. von Foerster et al. – New York: Spartan Books, 1968. P. 1–11.
129 Meadows D. Thinking in Systems. – London: Earthscan, 2009. –
218 p.
130 Meadows D., Randers J., Behrens W. The Limits to Growth. –
New York: Universe Books, 1972. – 205 p.

131 Mechanism Design and Management: Mathematical Methods
for Smart Organizations / Ed. by Prof. D. Novikov. – New York: Nova
Science Publishers, 2013. – 163 p.
132 Mesarovic M. Takahara Y. General Systems Theory: Mathemat-
ical Foundations (Mathematics in Science and Engineering). – Elsevier,
1975. – 322 p.
133 Mesarović M., Mako D., Takahara Y. Theory of Hierarchical
Multilevel Systems. – New York: Academic, 1970. – 294 p.
134 Milner B. Theory of Organization. 2nd ed. – Moscow: INFRA-
M, 2000. – 480 p. (in Russian)
135 Mirzoyan R. Control as a Subject of Philosophical Analysis //
Russian Studies in Philosophy. 2010. No. 4. P. 35–47. (in Russian)
136 Moiseev N. Mathematical Problems of Systems Analysis. –
Moscow: Nauka, 1981. – 488 p. (in Russian)
137 Moiseev N. People and Cybernetics. – Moscow: Molodaya
Gvardiya, 1984. – 224 p. (in Russian)
138 Morris W. Management Science: A Bayesian Introduction. –
New York: Prentice Hall, 1968. – 226 p.
139 Morse P., Kimball G. Methods of Operations Research. – New
York: Wiley, 1951. – 258 p.
140 Müller K. The New Science of Cybernetics: A Primer // Journal
of Systemics, Cybernetics and Informatics. 2013. Vol. 11. No. 9. P. 32–
46.
141 Myerson R. Game Theory: Analysis of Conflict. – London:
Harvard Univ. Press, 1991. – 568 p.
142 NASA Systems Engineering Handbook. 2007. – 360 p.
143 Nash J. Non-cooperative Games // Ann. Math. 1951. Vol. 54. P.
286–295.
144 Nature. – 2008. September 3 (Special Issue).
145 Neumann J., Morgenstern O. Theory of Games and Economic
Behavior. – Princeton: Princeton University Press, 1944. – 776 p.
146 Nikanorov S. Conceptualization of Subject Domains. – Mos-
cow: Kontsept, 2009. – 268 p. (in Russian)
147 Novick D. Program Budgeting. – Cambridge: Harvard Universi-
ty Press, 1965. – 88 p.
148 Novikov A., Novikov D. Methodology. – Moscow: Sinteg,
2007. – 668 p. (in Russian)
149 Novikov A., Novikov D. Research Methodology: From Philoso-
phy of Science to Research Design. – Amsterdam, CRC Press, 2013. –
130 p.

150 Novikov D. Analysis of Some Leading Conferences on Control
Problems // Automation and Remote Control. 2014. No. 12. P. 160–166.
(in Russian)
151 Novikov D. Big Data and Big Control // Advances in Systems
Studies and Applications. 2015. Vol. 15. No. 1. P. 21–36.
152 Novikov D. Control Methodology. – New York: Nova Science
Publishers, 2013. – 76 p.
153 Novikov D. Hierarchical Models of Warfare // Automation and
Remote Control. 2013. Vol. 74. No. 10. P. 1733–1752.
154 Novikov D. Mechanisms of Functioning of Multilevel Organiza-
tional Systems. – Moscow: Control Problems Foundation, 1999. – 150 p.
(in Russian)
155 Novikov D. Models of Strategic Behavior // Automation and
Remote Control. 2012. Vol. 73. No. 1. P. 1–19.
156 Novikov D. Regularities of Iterative Learning. – Moscow:
Trapeznikov Institute of Control Sciences RAS, 1998. – 98 p. (in Rus-
sian)
157 Novikov D. Theory of Control in Organizations. – New York:
Nova Science Publishers, 2013. – 341 p.
158 Novikov D., Chkhartishvili A. Reflexion and Control: Mathe-
matical Models. – London: CRC Press, 2014. – 298 p.
159 Novikov D., Rusyaeva E. Foundations of Control Methodology
// Advances in Systems Science and Application. 2012. Vol. 12. No. 3. P.
33–52.
160 Novosel’tsev V. Control Theory and Biosystems. – Moscow:
Nauka, 1978. – 319 p. (in Russian)
161 Ogata K. Modern Control Engineering. 5th ed. – Upper Saddle
River: Prentice Hall, 2010. – 905 p.
162 Orlovski S. Optimization Models Using Fuzzy Sets and Possi-
bility Theory. – Berlin: Springer, 1987. – 452 p.
163 Optner S. Systems Analysis for Business Management. – New York: Prentice Hall, 1960. – 190 p.
164 Pareto V. Cours d’Economie Politique. Vol. 2. 1897. – 420 p.
165 Pawlak Z. Rough Sets: Theoretical Aspects of Reasoning about
Data. – Dordrecht: Kluwer Academic Publishing, 1991.
166 Peregudov F., Tarasenko F. Introduction to Systems Analysis. – Columbus, OH: Glencoe/McGraw-Hill, 1993. – 320 p.
167 Pervozvansky A. A Course on Automatic Control Theory. –
Moscow: Nauka, 1986. – 616 p. (in Russian)
168 Peters B. Normalizing Soviet Cybernetics // Information & Cul-
ture: A Journal of History. 2012. Vol. 47. No. 2. P. 145–175.
169 Pickering A. The Cybernetic Brain. – Chicago: The University
of Chicago Press, 2010. – 537 p.
170 Polonnikov R., Yusupov R. Will the 21st Century Perceive Cybernetics? // Problemy Upravleniya i Informatiki. 2001. No. 6. P. 132–152. (in Russian)
171 Polyak B., Scherbakov P. Robust Stability and Control. – Mos-
cow: Nauka, 2002. – 303 p. (in Russian)
172 Polyak B., Stepanov O., Fradkov A. The 19th IFAC World Congress // Automation and Remote Control. 2015. No. 2. P. 150–156. (in Russian)
173 Pospelov I. A Preface to Wiener’s books “The Human Use of
Human Beings. Cybernetics and Society” and “God and Golem”. –
Moscow: Taideks, 2003. – 248 p. (in Russian)
174 Prangishvili I. Systems Approach and System-wide Regularities.
– Moscow: SINTEG, 2000. – 528 p. (in Russian)
175 Prigogine I., Stengers I. Order Out of Chaos. – New York: Ban-
tam Books, 1984. – 285 p.
176 Pushkin V., Ursul A. Informatics, Cybernetics, Intelligence: Philosophical Sketches. – Kishinev: Shtiintsa, 1989. – 341 p. (in Russian)
177 Rapoport A. General System Theory: Essential Concepts & Ap-
plications. – Kent: Abacus Press, 1986. – 250 p.
178 Rashevsky N. Outline of a New Mathematical Approach to
General Biology // Bulletin of Mathematical Biophysics. 1943. Vol. 5. P.
33–47, 49–64, 69–73.
179 Fet Ya.I. A Reading Book on the History of Informatics / Ed. by
B.G. Mikhailichenko; Institute of Computational Mathematics and Math-
ematical Geophysics, Siberian Branch of RAS. – Novosibirsk: Geo, 2014.
– 559 p. (in Russian)
180 Ren W., Cao Y. Distributed Coordination of Multi-agent Networks. – London: Springer, 2011. – 307 p.
181 Rosenblueth A., Wiener N., Bigelow J. Behavior, Purpose and Teleology // Philosophy of Science. 1943. Vol. 10. P. 18–24.
182 Rukov A. Models and Methods of Systems Analysis: Decision-
Making and Optimization. – Moscow: Moscow Institute of Steel and
Alloys, 2005. – 352 p. (in Russian)
183 Rzevski G., Skobelev P. Managing Complexity. – London: WIT
Press, 2014. – 216 p.
184 Sadovsky V. Foundations of General System Theory. – Mos-
cow: Nauka, 1978. – 280 p. (in Russian)
185 Satzinger J., Jackson R., Burd S. Introduction to Systems Analy-
sis and Design. 6th ed. – Boston: Course Technology, 2011. – 512 p.
186 Schedrovitsky G. Selected Proceedings. – Moscow: Higher School of Culturology, 1995. – 800 p. (in Russian)
187 Shannon C. A Mathematical Theory of Communication // Bell
System Technical Journal. 1948. Vol. 27. P. 379–423, 623–656.
188 Shannon C., Weaver W. The Mathematical Theory of Communication. – Urbana: University of Illinois Press, 1949. – 144 p.
189 Shoham Y., Leyton-Brown K. Multiagent Systems: Algorithmic,
Game-Theoretic, and Logical Foundations. – Cambridge: Cambridge
University Press, 2008. – 504 p.
190 Smuts J. Holism and Evolution. – London: Macmillan, 1926. – 368 p.
191 Sokolov B., Yusupov R. Analysis of Interdisciplinary Interaction between Modern Informatics and Cybernetics: Theoretical and Practical Aspects // XII All-Russian Meeting on Control Problems. – Moscow: Trapeznikov Institute of Control Sciences RAS, 2014. P. 8625–8636. (in Russian)
192 Sokolov B., Yusupov R. Neocybernetics in the Modern Structure of System Knowledge // Robototekhnika i Tekhnicheskaya Kibernetika. 2014. No. 2(3). P. 3–10. (in Russian)
193 Steinbuch K. Automat und Mensch. Kybernetische Tatsachen
und Hypothesen. – Berlin: Springer-Verlag, 1963. – 392 p.
194 Strogatz S. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (Studies in Nonlinearity). – Boulder: Westview Press, 2001. – 512 p.
195 Surowiecki J. The Wisdom of Crowds: Why the Many Are
Smarter Than the Few and How Collective Wisdom Shapes Business,
Economies, Societies and Nations. – New York: Doubleday, 2004. –
336 p.
196 Systems Engineering Guide. – Bedford: MITRE Corporation,
2014. – 710 p.
197 Systems Theory and Systems Analysis in Control of Organiza-
tions: A Handbook / Ed. by V.N. Volkova and A.A. Emel’yanov. –
Moscow: Finansy i Statistika, 2006. – 848 p. (in Russian)
198 Taha H. Operations Research: An Introduction (9th ed.). – New
York: Prentice Hall, 2011. – 813 p.
199 Tesler G. New Cybernetics. – Kiev: Logos, 2004. – 404 p. (in
Russian)
200 The Control Handbook. 2nd ed. / Ed. by W. Levine. – New York: CRC Press, 2010. – 786 p.
201 Trentowski B. Stosunek Filozofii do Cybernetyki, Czyli Sztuki Rządzenia Narodem. – Warszawa, 1843. – 195 p.
202 Tsetlin M. Studies on Automata Theory and Modeling of Bio-
logical Systems. – Moscow: Nauka, 1969. – 316 p. (in Russian)
203 Turchin V. The Phenomenon of Science. – New York: Colum-
bia University Press, 1977. – 348 p.
204 Uemov A. Systems Approach and General Systems Theory. –
Moscow: Mysl’, 1978. – 272 p. (in Russian)
205 Ugolev A. Natural Technologies of Living Systems. – Lenin-
grad: Nauka, 1987. – 317 p. (in Russian)
206 Umpleby S. A Brief History of Cybernetics in the United States
// Austrian Journal of Contemporary History. 2008. Vol. 19. No. 4. P. 28–
40.
207 Umpleby S. The Science of Cybernetics and the Cybernetics of
Science // Cybernetics and Systems. 1990. Vol. 21. No. 1. P. 109–121.
208 Ursul A. The Nature of Information. – Moscow: Politizdat,
1968. – 288 p. (in Russian)
209 Valacich J., George J., Hoffer J. Essentials of Systems Analysis and Design. 5th ed. – Boston: Pearson, 2012. – 445 p.
210 Varela F. A Calculus for Self-reference // International Journal
of General Systems. 1975. Vol. 2. P. 5–24.
211 Vassilyev S., Zherlov A., Fedosov E., Fedunov B. Intelligent Control of Dynamic Systems. – Moscow: Fizmatlit, 2000. – 352 p. (in Russian)
212 Vittikh V. Evolution of Ideas on Management Processes in the Society: From Cybernetics to Evergetics // Group Decision and Negotiation. http://link.springer.com/article/10.1007/s10726-014-9414-6/fulltext.html. Published online on September 14, 2014.
213 Volkova V. From the History of Systems Analysis Evolution in
Russia. – St. Petersburg: St. Petersburg State Technical University, 2001.
– 210 p. (in Russian)
214 Volkova V., Denisov A. The Foundations of Systems Theory and Systems Analysis. 2nd ed. – St. Petersburg: St. Petersburg State Technical University, 2001. – 512 p. (in Russian)
215 Voronov A. Controllability, Observability, Stability. – Moscow:
Nauka, 1979. – 339 p. (in Russian)
216 Vyshnegradsky I. On Direct-Action Controllers // Izvestiya St. Petersburg Practical Technological Institute. 1877. Vol. 1. P. 21–62. (in Russian)
217 Wagner H. Principles of Operations Research. 2nd ed. – Upper Saddle River: Prentice Hall, 1975. – 1039 p.
218 Walter G. The Living Brain. – London: Pelican Books, 1963. –
255 p.
219 Wasson C. System Analysis, Design and Development: Con-
cepts, Principles and Practices. – Hoboken: Wiley, 2006. – 832 p.
220 Weibull J. Evolutionary Game Theory. – Cambridge: The MIT
Press, 1995. – 256 p.
221 Wiener N. Cybernetics: or Control and Communication in the Animal and the Machine. – Cambridge: The Technology Press, 1948. – 194 p.
222 Wiener N. Ex-Prodigy: My Childhood and Youth. – Cambridge:
The MIT Press, 1964. – 317 p.
223 Wiener N. God and Golem, Inc.: A Comment on Certain Points
where Cybernetics Impinges on Religion. – Cambridge: The MIT Press,
1966. – 99 p.
224 Wiener N. I Am a Mathematician. – Cambridge: The MIT Press, 1964. – 380 p.
225 Wiener N. The Human Use of Human Beings. Cybernetics and
Society. – Boston: Houghton Mifflin Company, 1950. – 200 p.
226 Wooldridge M. An Introduction to Multi-Agent Systems. – New
York: John Wiley and Sons, 2002. – 376 p.
227 Young S. Management: A Systems Approach. – Glenview:
Scott, Foresman and Company, 1966. – 360 p.
228 Zadeh L. Outline of a New Approach to the Analysis of Complex Systems and Decision Processes // IEEE Trans. Syst., Man, Cybern. 1973. Vol. SMC-3. No. 1. P. 28–44.

Appendix I: A List of Basic Terms
(Analysis methods for the terminological structure of a subject area were studied in [74].)

ACTIVITY is an energetic interaction of a human being with an environment, where the former plays the role of a subject exerting a purposeful impact on an object and thereby satisfies his or her needs. The basic structural components of activity are illustrated by Fig. 15.
ADAPTATION is a process establishing or maintaining a system’s adjustment (i.e., keeping up its key parameters) under changing conditions of the external and internal environment. Quite often, the term “adaptation” denotes the result of such a process, i.e., the system’s fitness to some environmental factor. The notion of adaptation was pioneered in the context of biological systems, first of all, a separate organism (or its organs and other subsystems) and then a population of organisms. Following the appearance of cybernetics, where an adaptation mechanism is a negative feedback loop ensuring a rational response of a complex hierarchical self-controlled system to the varying conditions of an environment, the notion of adaptation has spread to the social and technical sciences.
ANALYSIS is a mental operation which decomposes a studied whole into parts and separates out particular attributes and qualities of a phenomenon or process, as well as relations between phenomena or processes. Analysis procedures represent an integral component of any study of an object and usually form its first phase: the researcher passes from exploring the object as a whole to revealing its structure, composition, properties and attributes. Analysis is a theoretical method-operation inherent to any activity.
BEHAVIOR is one of several sequences of movements or actions
possible in given conditions (a given environment). Behavioral phenome-
na are inseparably linked with the environment they take place in. Some-
times, human behavior means only the external manifestation of human
activity.
BLACK BOX is a system whose internal structure and mechanism of functioning are very complicated, unknown or irrelevant within the framework of a given problem (i.e., only its external behavior makes sense).
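The black-box viewpoint can be expressed in a minimal Python sketch (added here as an illustration; the affine mechanism inside is a hypothetical stand-in, not from the original text):

def black_box(u):
    # The internal mechanism is deliberately hidden from the observer.
    return 3 * u + 2

# The observer works only with (input, output) pairs:
observations = [(u, black_box(u)) for u in range(5)]
print(observations)  # [(0, 2), (1, 5), (2, 8), (3, 11), (4, 14)]

Identifying an input–output model from such pairs is exactly the study of “external behavior” that the definition refers to.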
CONTROL is 1) an element or function of organized systems of different kinds (biological, social, technical ones), which preserves their definite structure, maintains the activity mode, and implements the program or goal of activity; an impact on a controlled system, intended for ensuring its necessary behavior; 2) the science of control; 3) an object, i.e., a tool of control, or a structure (e.g., a department) of several subjects performing control.
DEVELOPMENT is an irreversible, directed and consistent change of material and ideal objects. Development in a desired direction is called progress; development in an undesired direction is called regress.
DIVERSITY is a quantitative characteristic of a system, which
equals the number of its admissible states or the logarithm of this number.
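As a simple numerical illustration (added here, not part of the original definition, and assuming the binary logarithm convention), a system with N = 8 equiprobable admissible states has diversity 8 in the first sense, or

\[ \log_2 N = \log_2 8 = 3 \text{ bits} \]

in the second, logarithmic sense; it is the logarithmic measure that is usually compared between a controller and a controlled system in Ashby-type requisite variety arguments.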
EXTERNAL ENVIRONMENT is the set of all objects and subjects lying outside a given system whose behavior and/or changed properties affect the system, together with all objects/subjects whose behavior and/or properties vary depending on the system’s behavior.
FEEDBACK (FB) is a reverse impact exerted by the results of a cer-
tain process on its behavior; information on the state of a controlled
system, which is supplied to a control system (see CONTROL). FB
characterizes control systems in wild life, society and technology. There
exist positive and negative FB. If the results of a process strengthen its
effect, FB is positive. Negative FB takes place whenever the results of a
process weaken its effect. Negative FB stabilizes process behavior,
whereas positive FB often accelerates process evolution and causes
oscillations. In complex systems (e.g., social or biological ones), it seems
difficult or even impossible to identify FB types. In addition, FB loops
are classified based on the character of bodies and media realizing them:
mechanical (e.g., the negative FB realized by Watt’s steam engine gover-
nor); optical (e.g., the positive FB realized by an optical cavity in a laser);
electrical, and others. The notion of FB as a form of interaction plays an
important role in the analysis of complex control systems (their function-
ing and development) in wild life and society.
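The stabilizing effect of negative FB and the amplifying effect of positive FB can be contrasted with a minimal discrete-time sketch in Python (an illustration added for this glossary, not part of the original definitions; the gain values are arbitrary assumptions):

def simulate(gain, x0=1.0, steps=8):
    # Iterate x_{t+1} = (1 + gain) * x_t: a share `gain` of the current
    # deviation x_t from equilibrium is fed back into the process.
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append((1 + gain) * trajectory[-1])
    return trajectory

print(simulate(gain=-0.5))  # negative FB: the deviation decays toward zero
print(simulate(gain=+0.5))  # positive FB: the deviation grows step by step

Here any gain with |1 + gain| < 1 damps deviations (stabilization), while gain > 0 amplifies them, in line with the definition above.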
FUNCTION is 1) (philosophy) a phenomenon dependent on another
phenomenon, which varies simultaneously with the latter;
2) (mathematics) a law assigning a certain well-defined quantity to each
value of a variable (argument), as well as this quantity itself; a ratio of
two (or more) objects such that variation of one object causes an appro-
priate variation of another object (other objects); 3) a job performed by an
organ or organism; 4) a role or meaning of something; a role a subject or
a social institute plays with respect to the needs of an upper subsystem or
the interests of its groups and individuals; a duty or circle of activity.
GOAL is anything striven for or intended to be implemented. In philosophy, a goal (of an action or activity) is an element in the behavior and conscious activity of a human being, which characterizes the anticipation, in thinking, of the activity result and of the ways to implement it using definite forms, methods and means. A goal represents a way of integrating different actions of a human being into a certain sequence or system.
HIERARCHY (from the Greek ἱεραρχία, “rule of a high priest”) is a structural organization principle of complex multilevel systems, which consists in ordering the interactions between the levels of a system (top-down), characterizes the mutual correlation and subordination of processes at different levels, and ensures the functioning and behavior of the system as a whole.
HOMEOSTAT (from the Greek ὅμοιος, “like, resembling,” and στάσις, “a standing still”) is 1) the capability of an open system to preserve its internal state via coordinated responses maintaining a dynamic equilibrium; 2) (in biological systems) the permanence of characteristics essential for the system’s vital activity under disturbances in the external environment; the state of relative constancy; the relative independence of the internal environment from external conditions [14, 41, 160].
MODEL (in a wide sense) is any image or analog (mental or conventional, e.g., a picture, description, scheme, diagram, graph, plan, map, and so on) of a certain object, process or phenomenon (the original of the given model); a model is an auxiliary object chosen or transformed for cognitive goals, which provides new information about the primary object. Model design proper does not guarantee that the resulting model serves its purposes. To function properly, a model must meet a series of requirements such as inherence, adequacy and simplicity.
ORGANIZATION is 1) the internal order, i.e., the coordinated interaction of more or less differentiated and autonomous parts of a whole, caused by its structure; 2) a set of processes or actions leading to the formation or perfection of interconnections between the parts of a whole; 3) an association of people engaged in the joint implementation of a certain program or task, using specific procedures and rules, i.e., mechanisms of operation (a mechanism is a system or device determining the order of a certain activity). The last meaning of the term “organization” is the definition of an organizational system. The category of organization is a backbone element of control theory [157].
SELF-ORGANIZATION is a process leading to the creation, reproduction or perfection of a complex system’s organization. Self-organization processes run only in systems having a high level of complexity and a large number of elements with nonrigid (e.g., probabilistic) connections. Self-organization properties are inherent to objects of different nature, namely, a living cell, an organism, a biological population, a biogeocenosis, a collective of human beings, complex technical systems, etc. Self-organization processes run via readjusting the existing connections and forming new connections among system elements. A distinctive feature of such processes is their purposeful, yet natural (spontaneous) character. Self-organization processes imply system interaction with an external environment, being somewhat autonomous and relatively independent from the environment.
SELF-REGULATION is generally defined as the reasonable functioning of living systems; it represents a closed control loop (see FEEDBACK), where the subject and object of control coincide. Self-regulation has the following structure: an activity goal accepted by the subject, a model of significant activity conditions, a program of actions proper, a system of activity efficiency criteria, information on the real results achieved, an assessment of the correspondence between real results and efficiency criteria, and decisions on the necessity and character of activity corrections.
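The components just listed can be summarized as a simple record type (a schematic Python sketch added here; all field and method names are hypothetical, not from the original text):

from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class SelfRegulationLoop:
    goal: Any                      # activity goal accepted by the subject
    conditions_model: Any          # model of significant activity conditions
    action_program: List[Any]      # program of actions proper
    efficiency_criteria: List[Callable[[Any], bool]]  # efficiency criteria
    observed_results: Any = None   # information on real results achieved

    def assess(self) -> List[bool]:
        # Assessment of the correspondence between real results and the
        # efficiency criteria; the outcome informs decisions on corrections.
        return [criterion(self.observed_results)
                for criterion in self.efficiency_criteria]

loop = SelfRegulationLoop(goal="keep temperature near 20",
                          conditions_model={},
                          action_program=["heat", "wait"],
                          efficiency_criteria=[lambda r: abs(r - 20) < 1],
                          observed_results=20.4)
print(loop.assess())  # [True]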
STRUCTURE is a set of stable connections among the elements of a
certain system, ensuring its integrity and self-identity.
SYNERGETICS is an interdisciplinary research direction of self-
organization processes in complex systems, which describes and explains
the appearance of qualitatively new properties and structures at the
macrolevel as the result of interactions among the elements of an open
system at the microlevel. Synergetics employs the framework of nonline-
ar dynamics (including catastrophe theory) and nonequilibrium thermo-
dynamics.
SYNTHESIS is a mental operation which integrates different ele-
ments or sides of a certain object in a comprehensive whole (a system).
Synthesis appears opposite to and has an indissoluble connection with
analysis. Synthesis represents a theoretical method-operation inherent to
any activity.
SYSTEM is a set of elements having mutual relations and connec-
tions, which forms a definite unity and is dedicated to goal achievement.
Systems have the following basic features: integrity, relative isolation
from an external environment, connections with the environment, the
existence of parts and their connections (structuredness), whole system
dedication to goal achievement.
UNCERTAINTY is the absence or incompleteness of definition or information.

Appendix II: Topics for Further Self-study

1) The scientific discoveries of the 20th century. The interdisciplinary translation of results
2) Ampère's cybernetics
3) Trentowski's cybernetics
4) Bogdanov’s tectology
5) N. Wiener and his contribution to cybernetics
6) W. Ashby and his contribution to cybernetics
7) S. Beer and his contribution to cybernetics
8) L. von Bertalanffy and general systems theory
9) H. von Foerster and general systems theory
10) A. Berg and his contribution to cybernetics
11) V. Glushkov and his contribution to cybernetics
12) A. Kolmogorov and his contribution to cybernetics
13) A.A. Lyapunov and his contribution to cybernetics
14) The history of controller theory
15) The history of control theory
16) The history of general systems theory and systems analysis
17) The history of informatics
18) The history of artificial intelligence
19) The history of operations research
20) The history of cybernetics in the USSR and USA
21) The history of systems science and systems engineering
22) Ontological analysis of basic definitions in cybernetics
23) Systems of systems
24) Bibliometric analysis of general cybernetics and applied cybernetics
25) Bibliometric analysis of conferences on cybernetics
26) Second-order cybernetics
27) Autopoiesis
28) Third- and higher-order cybernetics
29) Economic cybernetics
30) Cybernetical physics
31) Control philosophy
32) Control methodology
33) The philosophy and methodology of informatics. Information philos-
ophy
34) The methodology of “soft” systems
35) Boulding’s system classes
36) System dynamics
37) Laws, regularities and principles of control
38) Solution methods for weakly formalized problems
39) Hybrid models. The multimodel approach. Hierarchical modeling
40) “Hard” and “soft” models
41) Organization theory
42) Emergent intelligence
43) Big data and control problems

About the Author

NOVIKOV,
DMITRY A.

Born in 1970. Dr. Sci. (Eng.), Prof., corresponding member of the Russian Academy of Sciences. Deputy Director of Trapeznikov Institute of Control Sciences of the Russian Academy of Sciences, and head of the Control Sciences Department at Moscow Institute of Physics and Technology.

Author of over 500 scientific publications on theory of control for interdisciplinary systems, methodology, systems analysis, game theory, decision-making, project management and control mechanisms for organizational and socioeconomic systems.

E-mail: [email protected], www.mtas.ru.

