Integral Biomathics: A Post-Newtonian View into the Logos of Bio
Plamen L. Simeonov
Technische Universität Berlin
[email protected]
28 February 2007
Abstract
This work addresses the phenomena of emergence, adaptive dynamics and evolution of self-assembling, self-organizing, self-maintaining and self-replicating biosynthetic systems. We regard this research as an integral part of the emerging discipline of nature-inspired or natural computation, i.e. computation inspired by or occurring in nature. Within this context, we are interested in studies which represent a significant departure from traditional theories about complex systems and self-organization, emergent phenomena and artificial biology. In particular, these include non-conventional approaches exploring (i) the aggregation, composition, growth and development of physical forms and structures along with their networks of production (autopoiesis), and (ii) the associated abstract information structures and computational processes.
Our ultimate objective is to unify classical mathematical biology with biomathics (or biological mathematics) on the way to genuine biological system engineering. The convergence of these disciplines is to be carried out both from the perspective of the traditional (analytic) life and physical sciences and from that of the engineering (synthetic) sciences. In this regard, our approach differs from most present-day biomimetic efforts in automata and computation design, which develop autonomic systems by emulating a limited set of “organic” features using traditional mathematical methods and computational models suited to the physical sciences rather than to the life sciences. We call this new field integral biomathics.
This paper presents a survey of approaches related to the above domain and defines a generalized epistemological model with the objective of setting out an ecology for symbiotic research in the life, physical and engineering sciences.
Keywords: systems biology; synthetic biology; relational biology; autopoiesis; theoretical physics; evolving formal models; naturalistic computation; non-Turing and post-Newtonian computation; self-* biosynthetic systems; artificial life.
“I’m not happy with all the analyses that go with just classical theory, because Nature isn’t classical… How can we simulate the quantum mechanics?.. Can you do it with a new kind of computer - a quantum computer? It is not a Turing machine, but a machine of a different kind.”
Richard Feynman, Simulating physics with computers, 1981.
1. Introduction
This work addresses the phenomena of emergence, adaptive dynamics and evolution of self-assembling, self-organizing, self-maintaining and self-replicating biosynthetic systems. We regard this research as an integral part of the emerging discipline of nature-inspired or natural computation, i.e. computation inspired by or occurring in nature (Ballard, 1997; Shadboldt, 2004; MacLennan, 2005; Zomaya, 2006). Within this context, we are interested in studies which represent a significant departure from traditional theories about complex systems and self-organization, emergent phenomena and artificial biology. In particular, these include non-conventional approaches exploring (i) the aggregation, composition, growth and development of physical forms and structures along with their networks of production (autopoiesis), and (ii) the associated abstract information structures and computational processes.
This study was motivated by previous research in system design and network engineering (Simeonov,
1998; Simeonov, 1999a/b/c; Simeonov, 2002a/b/c), where the limits of contemporary information
technology and multimedia communication systems were identified and a novel approach towards
autopoietic networking was proposed. The present work continues the above line of research towards a deeper understanding of biological phenomena such as emergence and organisation in a holistic manner, as seen in relational biology (Rashevsky, 1954 ff.; Rosen, 1958a/b ff.). Hence, our ultimate objective is to unify classical mathematical biology with biomathics 1 on the way to genuine biological system engineering. This study is therefore carried out both from the perspective of the traditional (analytic) life and physical sciences and from that of the engineering (synthetic) sciences. In this regard, our approach differs from most present-day efforts of biomimetics 2 in automata and computation design, which develop autonomic systems by emulating a limited set of “organic” features using traditional mathematical methods and computational models suited to the physical sciences rather than to the life sciences.
In addition, it is essential to note that classical information theory (Shannon, 1948) should also be developed along the same line of research in order to obtain an authentic picture of natural biological systems that will enable the creation of artificial ones. This viewpoint has certainly become an important issue in the design of complex networked systems deploying large numbers of distributed components with dynamic exchange of information in the presence of noise and under power and bandwidth constraints in the areas of telecommunications, transport control and industrial automation.
1 Biomathics or biological mathematics is defined as the study of mathematics as it occurs in biological systems. In contrast, mathematical biology is concerned with the use of mathematics to describe or model biological systems (Rashevsky, 1940).
2 Biomimetics is generally defined as the “concept of taking ideas from nature and implementing them in another technology such as engineering, design, computing, etc.”, cf. http://www.bath.ac.uk/mech-eng/biomimetics/about.htm.
To address these critical issues, researchers pursue the amelioration and unification of classical
theories such as those of control, thermodynamics and information. For instance, Allwein and
colleagues propose the integration of Shannon’s general quantitative theory of communication flow with the Barwise-Seligman general qualitative theory of information flow to obtain a more powerful theoretical framework for qualitative and quantitative analysis of distributed information systems (Allwein et al., 2004). Other authors are concerned with important theoretical issues such as reliable state estimation over noisy digital channels (Matveev & Savkin, 2004) and the treatment of data density equilibrium analogous to thermodynamic equilibrium (Kafri, 2006, 2007).
Some works also introduce the physics of information in the context of biology and genomics (Adami, 2004). However, what is important for the design of naturalistic systems is the perception of signalling and information content, including their processing and distribution, from the perspective of biological systems (Miller, 1978) and in correlation with the autonomous regulation of power consumption and other life-maintaining mechanisms. This topic has not been addressed sufficiently by
present research in both natural and artificial systems. Therefore, it should become an integral part of
the models and methods of our approach to naturalistic computation.
This presentation is organized as follows. Our research develops along two planes 3 or conceptions of
discourse, the physical or the ‘realization’ plane and the logical or ‘abstract’ one. The following two
sections review previous research in naturalistic computation along these planes. Section four is
devoted to non-classical computation models beyond the Turing machine model. Next, section five
introduces the kernel part of this article, the integral approach to biological computation. Section six
discusses the implications of the new field. Finally, section seven presents the conclusions with an
outlook for research in integral biomathics.
2. The Physical Plane
The physical plane lies within the domains of autonomous cellular automata (CA) and evolving
complex systems such as autopoietic, autocatalytic and non-linear eco-networks. This area comprises
classical systems and ‘discrete’ automata theories endorsed by artificial intelligence (AI) and artificial
life (ALife) approaches such as evolutionary computation (Fogel et al., 1966), synthetic neural
networks (Dyer, 1995) or adaptive autonomous agents (Maes, 1995) for transducing knowledge from
biology and related life science disciplines into computer science and engineering. Such systems and
automata are often referred to as bio-inspired or organic.
The conceptual and theoretical foundations of these fields have been elaborated in previous works on
self-replicating systems such as Boolean models of neural networks (Pitts, 1943; McCulloch & Pitts, 1943), Moore’s artificial living plants (1956a), sequential machines (1956b) and other machine models (1962), von Neumann’s kinematic model and Universal Constructor (1966), Conway’s game of ‘Life’ (Gardner, 1970-71), Arbib’s self-replicating spacecraft (1974), Dyson’s self-replicating systems (1970, 1979) and Drexler’s bio-nanomolecular engines (1986, 1992). Some of these architectures, such as Codd’s and Morita’s simplified cellular automata, have been realized in practice (Codd, 1968; Winograd, 1970; Morita & Imai, 1995). However, discrete mathematics has its application limits in modelling and emulating biological systems. Richard Feynman came to a similar conclusion, namely that classical computers are inappropriate for simulating quantum systems (1982, 1985). Apart from maintaining living functions, biological systems demonstrate complex computational mechanisms. Only those artificial or hybrid biosynthetic systems exhibiting behaviour characteristic of natural organic systems could truly reflect the essence of biological computing.
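To make the flavour of this discrete class of models concrete, the following minimal sketch implements one synchronous update of Conway’s ‘Life’ on a toroidal grid. It is our own illustrative Python rendering of the published rule, not code from any of the cited works.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A dead cell with exactly 3 neighbours is born; a live cell with 2 or 3 survives.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: the classic self-propagating pattern.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)  # after four steps the glider has moved one cell diagonally
```

The point of the sketch is precisely what the main text questions: every quantity is discrete and every step is synchronous and context-free, which is exactly where such models diverge from living matter.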
3 We refer here to the first two spheres of Penrose’s “Three Worlds Model” (Penrose, 1995, p. 414; Penrose, 2004, p. 18): the physical world of phenomena and the Platonic mathematical world.
Therefore, we argue that this area of research needs to be placed on broader foundations which reflect the emergence and organisation of artefacts and processes in nature more adequately than modern discrete automata and computation approaches do. In this work, we aim to expand and reinforce frontier research on the emergence and organisation of living forms (Thompson, 1917; Bertalanffy, 1928; Franck, 1949; Rashevsky, 1954; Miller, 1978; Thom, 1989; Rosen, 1991) and their networks of production within a broader discourse, beyond classical state automata theory and the traditional models of the life and physical sciences.
There have been a number of artificial life techniques using computational models and algorithms adopted from life science disciplines such as genetics, immunology and neurology, as well as evolutionary and molecular biology (Schuster, 1995; Ray, 1995). Among the most prominent examples of artificial life systems are Tierra (Ray, 1991) and Avida (Adami, 1998). Other well-known references are the SCL model (McMullin, 1997a), the self-assembling cells (Ono & Ikegami, 1999, 2000) and the self-assembling lipid bilayers (Rasmussen et al., 2001). Recently, artificial chemistry approaches have been attracting much attention (di Feroni & Dittrich, 2002; Hutton, 2002; Matsumaru et al., 2006). In particular, experimentation on the design and implementation of living systems from scratch is now becoming an intensive research area (Bedau, 2005; Bedau, 2006). A good interim report on the field is given in (McMullin, 2004).
As yet, the disciplines of artificial life and artificial chemistry still harbour many open issues (Bedau et al., 2000; McMullin, 2000a), including the controversial question of whether molecular self-replicating assemblers can be engineered at all (chemistry with or without mechanics, the Smalley vs. Drexler debate) in nanotechnology (Baum, 2003; Freitas & Merkle, 2004). One of them refers to a basic question and to probably the best-known effort to explain life in general, the autopoietic theory (Varela et al., 1974; Maturana & Varela, 1980), which has influenced various sciences including biology and sociology for the past 30 years (Kneer & Nassehi, 1993). Autopoiesis provides a good model for understanding the organization and evolution of natural living systems. The question in artificial life research is, however, whether this model can serve as a basis for creating synthetic organisms that mimic real ones.
To date, autopoiesis has been used as the basis for a number of computational models, simulations, engineering and architecture solutions (Zeleny & Pierre, 1975; Zeleny, 1978; McMullin & Varela, 1997; Cardon & Lesage, 1998; Ruiz-del-Solar & Köppen, 1999; McMullin & Groß, 2001; Simeonov, 2002; Kawamoto, 2003; Keane, 2005). However, autopoietic theory evidently failed, at quite an early stage of its development, to provide a consistent view of the spontaneous organization of living systems (Zeleny, 1980). Since then, it has been the subject of various criticisms and controversial discussions.
Thus, a key point in understanding the mechanisms of self-assembly and self-organisation in living systems is the notion of organisational closure (McMullin, 2000b). According to Maturana and Varela, an autopoietic system is not only one that is (a) clearly separated from its environment by a boundary, but also one that has (b) an internal organisation capable of dynamically sustaining itself (including its boundary). It is not yet clear how stringently this definition should be taken. Today, autopoiesis still appears to be the kind of “theory-at-work” which is very general and undifferentiated both in terms of mathematical formalization and of technical implementation. The following two sections illustrate this point.
2.1 The Formalization Gap
In his book “Life Itself” (Rosen, 1991), Robert Rosen presents a category-theoretical framework for the formalization of living systems, which he studied over three decades (Rosen, 1958 ff.). Rosen poses what we call the Fundamental Question of Artificial Life. According to his conclusion, living systems, which are essentially metabolism/repair (M, R) systems, are not realizable in computational universes.
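To fix ideas, one common algebraic rendering of an (M, R) system, following Rosen (1991) and the reconstruction in Letelier et al. (2004), can be sketched as follows; the notation is ours and serves only as orientation:

$$ f\colon A \to B \quad \text{(metabolism: substrates to products)}, $$
$$ \Phi\colon B \to H(A,B), \quad \Phi(f(a)) = f \quad \text{(repair: products re-entail the metabolic map)}, $$
$$ \beta\colon H(A,B) \to H\bigl(B, H(A,B)\bigr), \quad \beta(f) = \Phi \quad \text{(replication: metabolism re-entails repair)}, $$

where $H(X, Y)$ denotes the set of mappings from $X$ to $Y$. Closure to efficient causation means that every map in this chain is entailed from within the system itself; Rosen’s contested claim is that such closed entailment escapes any Turing-computable model.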
If Rosen were right, his conclusion could mean that Artificial Life cannot exist at all, or at least not in computational spaces as we know them now. It could be the case that the entire ALife research is going in the wrong direction. Letelier et al. (2004) analysed (M, R) systems from the viewpoint of autopoietic theory. They provided an algebraic example of defining metabolic closure while suggesting a relationship to autopoiesis. In a series of works (1997, 2002, 2006) reflecting the contributions of Rosen (1972, 1991), Luhmann (Kneer & Nassehi, 1993) and Kawamoto (1995, 2000), Nomura reviewed the formal roots of autopoiesis in the light of category theory (Mitchell, 1965) and proposed a mathematical model of autopoiesis based on Rosen’s definition (1997). After having examined some of the central postulates of the autopoietic theory, Nomura comes to the conclusion that previous research has not delivered an unambiguous description of the phenomenon (♣) 4 and proposes a more general and strict formal definition.
Another category-theoretical argument and revision of Rosen’s theorem was provided by Chu and Ho (Chu & Ho, 2006). The authors review the essence of Rosen’s ideas leading up to his rejection of the possibility of simulating real artificial life in computing systems. They argue that the class of machines Rosen distinguished from closed systems is not equivalent to a realistic formalization of ALife, and conclude that Rosen’s central proof, stating that living systems are not mechanisms, is wrong. As a result, Rosen’s claim remains an open issue. The conclusion that some of Rosen’s central notions were probably ill defined raises some interesting theoretical concerns which deserve further investigation. Yet, Rosen himself warned in his book: “there is no property of an organism that cannot at the same time be manifested by inanimate systems” (Life Itself, p. 19). Thus, the fact that he apparently failed in his own proof does not change the matter at all. From the viewpoint of contemporary logic, we cannot take for granted that organisms are mechanisms and that they can be constructed in the way we are used to building machines.
In a recent paper (2007), Nomura analyses the possibilities of an algebraic description of living systems and clarifies the differences between the aspects of closedness required in (M, R) systems and in autopoiesis. He identifies two essential differences between autopoietic and (M, R) systems. The first is the difference in the forms of their closedness under entailment of the components and in the categories required for describing that closedness. The second is the distinction between organization and structure. Nomura points out that the first difference depends on the assumption that completely closed systems, modelled as an isomorphism from the space of operands to the space of operators, are necessary conditions of autopoiesis. However, this requirement has not yet been proved in a mathematically strict way. Furthermore, the definition of autopoiesis itself deserves special attention. There were differences in the interpretation of autopoiesis between Maturana and Varela. When Varela collaborated with McMullin on computational models of autopoiesis, the original algorithm was revised within the same year (McMullin, 1997b; McMullin & Varela, 1997). We assume that Varela may not have considered the implications of autopoietic theory for the axiomatization of discrete mathematics for modelling biological processes.
Summarizing all these facts, we draw the following two conclusions: (i) we need a more precise and formal definition of autopoiesis and of its relationship to (M, R) systems, as outlined by (Nomura, 2007); (ii) we need novel mathematical techniques and tools which adequately describe and simulate biological processes (Rashevsky, 1954, 1960, 1961). In other words, in order to make progress in this area, we need either to provide further means of formalization, or to invent new formal approaches that best suit the original definition, or to redefine (refine, extrapolate) the theory itself. Rosen tried to distinguish living systems from machines with his original definition. Although his proof was incomplete, Chu and Ho acknowledged the importance of Rosen’s idea itself. Nomura also agrees with them on this point (2007). Therefore, in order to provide the engineering basis for artificial organisms and systems in the physical space, we need to further investigate the necessary conditions for modelling the characteristics of living systems and to provide more stringent definitions of living systems and machines based on Rosen’s attempt.
4 With the symbol ♣ we henceforth denote the anchor arguments of our conclusion about integral biomathics.
2.2 The Implementation Gap
In the C5 database product presentation, Stalbaum makes the point that we should differentiate between the implementation of computational autopoiesis as a proof of concept (‘computational autopoiesis is possible’) and the potential practical applications of such a system. He reckons that Varela’s early work makes a strong case for the former (McMullin & Varela, 1997), but that it is a substantially different problem to design autopoietic automata implementing computing applications such as database (self-)management systems. Stalbaum states that the challenge of finding or engineering congruency between autopoietic systems and problems that yield solutions is enormous. Indeed, demonstrating that autopoiesis can be used for computational and communication processes with a minimal implementation, e.g. of an artificial chemistry model, is a relatively simple task. However, the mere possibility of computational autopoiesis does not necessarily imply that autopoiesis can be effectively implemented to perform work. This is because the internal purpose of an autopoietic system is restricted to the ongoing maintenance of its own organization. Yet, this goal becomes a problem for anyone intending to use autopoiesis for computation as we know it, i.e. to deliver an output result from a given number of inputs within a limited number of steps. In fact, computation might be a by-product of ongoing structural coupling (a posteriori) between a collection of autopoietic elements, such as neurons, and their environment, but it cannot be deterministically defined as a purposeful task for the solution of a specific problem or class of problems in the way we are used to expecting from today’s computational and engineered systems. In other words, we cannot count on the natural drift inherent in living systems to directly solve problems that do not primarily serve the conservation, adaptation and maintenance of those systems. In addition, the multiple orders of structural coupling in autopoiesis define an even more complex picture of the interacting units. In this respect, autopoietic computing is analogous 5 to both associative computing (Wichert, 2000) and quantum computing (Feynman, 1982-1985). It exhibits an overlaid multiplicity of potential results which becomes apparent only at the very moment of system interrogation.
The following example illustrates the problem of implementing artificial biology using conventional computing techniques more vividly. A bottom-up synthesis approach, Substrate Catalyst Link (SCL), built on the concept of evolving autopoietic artificial agents (McMullin & Groß, 2001), was integrated within a top-down analytical system, Cell Assembly Kit (CellAK), for developing models and simulations of cells and other biological structures (Webb & White, 2004). The original top-down design of the CellAK system was based on the object-oriented (OO) paradigm with the Unified Modelling Language (UML) and the Real-Time Object-Oriented Methodology (ROOM) formalisms, with models consisting of a hierarchy of containers (e.g. cytosol), active objects with behaviour (e.g. enzymes, lipid bilayers, transport proteins), and passive small molecules (e.g. glucose, pyruvate). Thus, the enhanced CellAK architecture comprised a network of active objects (polymers) whose behaviour causally depended in part on their own fine-grained structure (monomers), with this structure constantly changing through interaction with other active objects. In this way, active objects influence other active objects by having an effect on their constituent monomers. The enhanced tool was validated quantitatively against GEPASI (Mendes, 1993) and demonstrated its capability for simulating bottom-up synthesis using the cell bilayer active object. The authors claim that this result clearly confirms the value of agent-based modelling techniques reported in (Kahn et al., 2003).
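The flavour of this active-object design can be conveyed by a deliberately toy sketch. It is our own illustration: the class names, the monomer alphabet and the interaction rule are hypothetical and do not reproduce the CellAK or SCL code.

```python
import random

class ActiveObject:
    """A coarse-grained 'polymer' whose behaviour depends on its monomer sequence."""
    def __init__(self, name, monomers):
        self.name = name
        self.monomers = list(monomers)      # fine-grained structure

    def activity(self):
        # Behaviour is a (toy) function of the fine-grained structure.
        return sum(1 for m in self.monomers if m == "A") / max(len(self.monomers), 1)

    def interact(self, other):
        # Active objects influence one another by modifying constituent monomers.
        if self.activity() > 0.5 and other.monomers:
            i = random.randrange(len(other.monomers))
            other.monomers[i] = "A"

class Container:
    """A compartment (e.g. a cytosol-like container) holding active objects."""
    def __init__(self, members):
        self.members = members

    def step(self):
        for obj in self.members:
            target = random.choice(self.members)
            if target is not obj:
                obj.interact(target)

cytosol = Container([
    ActiveObject("enzyme-1", "AABA"),
    ActiveObject("transporter-1", "BBBB"),
])
for _ in range(10):
    cytosol.step()
print([(o.name, "".join(o.monomers)) for o in cytosol.members])
```

Even in this caricature, the behaviour of each “polymer” is a function of its monomer sequence, and interactions rewrite that sequence; this is the causal pattern the cited authors exploit.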
However, there is a major difficulty in applying this method to organic structures more complex than lipid bilayers, such as enzymes and proteins (Rosen, 1978). This is because the amino acids that compose proteins are coded in the DNA, and their order, which determines the folded 3D shape, is of crucial importance. Therefore, the behaviour of a protein is an extremely complex function of its fine-grained structure. This quite easily turns the design and validation of artificial biological structures, such as medicaments, using conventional computing techniques into a problem of prohibitive computational complexity. Causal relationships may not provide the sole basis for investigating biological processes. We do not believe that significant progress can be made in this area without a paradigm change.
5 Meaning proportional (Latin: rational), or following the principle of mediation in the Pythagorean sense (Guthrie, 1987).
Taking into account the above arguments about formalization and implementation, we identify the need for a new, broader and unifying automata theory for studying ‘natural’ and artificial living systems, as well as the combination of both, the cybernetic organism (cyborg), the hybrid between an animal or plant and a machine. Furthermore, we need a new kind of unifying and real Artificial Intelligence, the expected ‘quantum leap for AI’ (Hirsch, 1999), that goes beyond the heuristics and hypothesis-driven modelling of the pioneer days by placing itself much closer to the essence of living systems and to the nature of the underlying processes in both organic and inorganic system complexes.
3. The Logical Plane
The second plane of research we are interested in is the abstract or logical plane in the Pythagorean sense of the word 6, i.e. one that implies analogy or proportion, a relation that is generalized as a law, habit, principle or basic pattern of system organization. The essential distinction of this conception from Hilbert’s purely syntactic definition is the inclusion of semantics or context-related information, which allows a multiplicity of interpretations, a characteristic typical of biological, social and eco-systems. The logical plane is concerned with the development of new integral computational paradigms that reflect the emergence and organization of information for living systems more adequately than traditional formal approaches based on binary logic and the Church-Turing thesis (Church, 1932-41; Turing, 1936-51). The latter represents an idealized, but not necessarily unique, model of computation, where representations are formal, finite and definite (MacLennan, 2004). By exhibiting these very characteristics, the Church-Turing theory of discrete states excludes alternative approaches to computation by definition. Even worse, since it has been successful for the past 70 years, the Church-Turing model evokes in the majority of our contemporaries the conviction that it is universal, as demonstrated in (Shannon, 1956), and that there is no other option for computation at all.
Thus, binary logic remains a good working and, of course, economical model for our everyday discourse, just as Newtonian celestial mechanics was useful for global navigation until the arrival of Einstein’s theory of relativity, which made extraterrestrial journeys possible.
Therefore, we identify the need for new theories of computation in order to understand tough issues in science such as the emergence and evolution of brain and thought. In his books “The Emperor’s New Mind” (Penrose, 1989) and “Shadows of the Mind” (Penrose, 1994), Roger Penrose’s main theme was the understanding of mind from the perspective of contemporary physics and biology. While discussing the non-computational physics of mind, he indirectly posed what we call the Fundamental Question of Strong Artificial Intelligence 7. Penrose claimed to prove that Gödel’s incompleteness theorem 8 implies that human thought cannot be mechanized. In fact, Penrose does not actually use Gödel’s theorem, but rather an easier result inspired by Gödel, namely Turing’s theorem that the halting problem is unsolvable.
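For reference, the result Penrose actually leans on can be sketched in a few lines of deliberately hypothetical code: halts below is an assumed oracle, not a real function, and the contradiction shows that no such total procedure can exist.

```python
def halts(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually terminates."""
    raise NotImplementedError  # by Turing's theorem, no general implementation exists

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for 'program' run on itself.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# Feeding diagonal to itself is contradictory: if halts(diagonal, diagonal) were True,
# diagonal(diagonal) would loop forever; if it were False, it would halt.
# Hence no such halts can exist.
```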
Penrose’s key idea was essentially the same as that of the philosopher J. R. Lucas (Lucas, 1961), who argued as follows. Gödel’s incompleteness theorem shows that, given any sufficiently strong and consistent formal system, there is a true sentence which the formal system cannot prove to be true. But since the truth of this unprovable sentence is established as part of the incompleteness theorem, humans can prove the sentence in question. Hence, human abilities cannot be captured by formal systems made by humans. This corresponds to the eye’s blind-spot paradox 9, which in our view should rather be regarded as a principle in biological systems.
6 Guthrie, 1987.
7 “...according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind” (Searle, 1980).
8 Stating that Number Theory is more complex than any of its formalizations (Gödel, 1931-34).
9 The blind spot in an eye’s visual field is caused by the lack of light-detecting photoreceptor cells on the optic disc of the retina, where the optic nerve passes through it. Usually, this ‘defect’ is not perceived, since the brain compensates for the missing details of the image with information from the other eye.
The question here is again, as with Rosen’s thesis about Artificial Life (Rosen, 1991), see Section 2.1, whether this proof is correct or not. If Penrose and Lucas are right, their conclusions might lead to the interesting result that Strong Artificial Intelligence cannot be realized at all, or at least not in computational spaces as we know them now.
Lucas’s standpoint was already criticized by Benacerraf (1967). Some of Penrose’s technical errors were pointed out in (Boolos et al., 1990) and (Putnam, 1995). LaForte and colleagues review Penrose’s arguments and demonstrate, following Benacerraf’s line of thought, that they depend crucially on ambiguities between precise and imprecise definitions of key terms (♣♣) (LaForte et al., 1998). The authors show that these ambiguities cause the Gödel/Turing diagonalization argument to lead from apparently intuitive claims about human abilities to paradoxical or highly idiosyncratic conclusions, and they conclude that any similar argument will fail in the same ways. This situation is similar to that of the counter-proof of Rosen’s thesis discussed above (Nomura, 2006; Chu & Ho, 2006).
In fact, the arguments of Lucas and Penrose have the same foundation as Rosen’s discussion of the entailment of formal systems (Rosen, 1991, pp. 47-49), stating that “from the standpoint of the formalism [assuming here Gödel’s thesis], anything that happens outside is accordingly unentailed.” However, Aristotle’s fourth category, the Final Cause 10, which appears to violate the unidirectional (temporal) flow from axioms to theorems, places (some) human abilities inside the formal system. Rosen finds a solution to this contradiction by postulating that final causation requires modes of entailment that are not generally present in formalisms. Following Rashevsky’s concept of relational biology (Rashevsky, 1960), he also suggests the possibility of separating finality from teleology, e.g. by retaining the former while discarding the latter. A similar approach could be taken here to investigate the Lucas/Penrose arguments more precisely. Another solution may be placed entirely or partly on quantum mechanical foundations. The final word about strong artificial intelligence has not been said yet.
However, whatever result the polemic around computation and human intelligence may deliver, as in the case of artificial life, we cannot take for granted that human thought is mechanistic, i.e. formal, finite and definite, and that it can be constructed in the way we have built computers so far.
The following example illustrates the above viewpoint. Although purely subjective, it does not require a formal proof to be apprehended by the reader. It concerns human intuition as a creative act, a process which cannot be expressed by formal means. In our case, a two-dimensional concentric multi-ring topology of a logical peer-to-peer overlay communications network, with the most robust resource discovery and routing algorithm known yet (Wepiwé & Simeonov, 2006), came into being as an analogy, i.e. one involving semantics, of a natural system model: the electromagnetic field gradient structure of a metal sphere. It simply popped up intuitively during a discussion, together with the persuasion that this was the optimal system architecture for the domain in question (Wepiwé & Simeonov, 2005). This happened prior to proving that very fact. There are numerous examples in the scientific literature that confirm such common experiences and the Gödelian implicit truth-without-proof, so that the process of its discovery, or the disentanglement of semantics into the axiomatic frame of a specific formalism, can be taken as objective reality and as a principle in living systems (Penrose, 1994, 2004). Hence, there is no need for hypothetical theories about time travel and precognition; we simply face a different kind of awareness and computation from those that contemporary technology delivers to us. Ultimately, from the standpoint of Aristotelian epistemology, the question is not what was proved and how, but rather why it happened. A reasonable answer may contain the assumption that the solution of a problem is delivered by the human system itself, as a useful answer to the stimulus of a purposeful (re-)search, through an autopoietic response (natural drift) which belongs to the nature of the phenomenon itself.
10 A proposition that requires something of its own effect.
Therefore, we claim that computation which occurs in nature always involves semantics and cannot be
expressed within formalisms in purely syntactic terms. Whereas this argument is relatively weak in
physical systems, an adequate picture of biological phenomena requires semantics.
In our view, new opportunistic theories of computation should basically regard computation as a major property of living matter and be able to develop their own principles within their specific domains of application. In this respect, our approach to the logical plane conforms with that of MacLennan in his definition of computation as a physical process for the abstract manipulation of abstract objects, the nature of which could be discrete, continuous or hybrid (MacLennan, 2004, 2006). Besides, the introduction of broader and integral concepts of computing is supported by such arguments as the fact that a computer whose functioning is based on the state superposition principle can be realised with both classical and quantum elements (Oraevsky, 2000).
Basically, MacLennan’s investigation into these alternative models of information processing led to conclusions about computation similar to those of Feynman for quantum systems (1982, 1985) and Rosen for artificial biology (1991, 1999). He argues that conventional digital computers are
inadequate for realizing the full potential of natural computation, and therefore that alternative and
more brain-like technologies should be developed. In his view, which we share, the Turing Machine
model is unsuited to address the class of questions about natural computing (MacLennan 2003a).
Furthermore, MacLennan argues that natural computation which occurs in natural neural networks
requires continuous and non-Turing models of computation (MacLennan 2003b), although these
processes cannot be regarded as ’computation’ in terms of the Turing machine definition (Turing,
1937). Yet, Turing’s thesis is not the ultimate verdict about computing. We only need to refer to
Euclid’s geometry in a historical context. There are numerous examples of how new generalizations in mathematics and physics emerged out of apparent axioms and postulates. When the perspective shifted, the latter turned out to be conceptual deadlocks, as they were not able to deal
with new ideas or factual observations. When the new theories were finally reconciled with the
established world order, the apparent ‘paradoxes’ turned out to be new, more general facts and the old
frame of thought became a special case within the new one (Rosen, 1991). It is a well-known fact 11 that science sometimes requires a few iterations of denial and rediscovery over the centuries before a new idea or paradigm is accepted by the dominant majority (Sacks, 1995).
We have a similar situation with the dominant computing paradigm of today, the Turing Machine (TM) model. This concept is now leading to a major crisis 12 in computer science, analogous to that of the irrational numbers in mathematics centuries ago. A major paradox in adopting the TM model
for natural computing was pointed out in (MacLennan, 2004). Accordingly, formal logic considers a
function computable if for any input data the corresponding output would be produced after finitely
many steps, i. e. a proof can be of any finite length. For the sake of completeness and consistency, the
TM model imposes no bounds on the length of the individual steps and on the size of formulas, so
long as they are finite. The concept of time assumed in the TM model is not discrete time in the
familiar sense that each time interval has the same duration. Therefore, it is more accurate to call TM
time ‘sequential time’ (MacLennan, 2006). Since the individual steps have no definite duration, there is no point in counting the number of steps in order to translate that count into real time. Consequently, the only reasonable way to compare the time required by computational processes is in terms of their asymptotic behaviour. Therefore, once we have chosen to ignore the speed of the individual steps, all we have as a complexity measure is the size of the formulas produced during the computation, or the rate at which the number of steps grows with the size of the input.
11 Max Planck: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it."
12 Indeed, there is nothing wrong with the TM model, as long as it is exploited within its frame of relevance, which deals with questions about formal derivability and the limits of effective calculability (MacLennan, 2006).
In fact, Turingian sequential time is reasonable in a model of formal derivability or effective calculability, since the time duration required for individual operations was irrelevant to the research questions of formal mathematics. However, this perspective leads to the result that any polynomial-time algorithm is “tractable” and “fast” (e.g. matrix multiplication, ~O(N^3)), that exponential-time algorithms are “intractable” (e.g. the Travelling Salesman problem), and that problems which are polynomial-time reducible to one another are virtually identical. This step-duration-independent view is indeed peculiar from the standpoint of natural systems. MacLennan ironically describes it as one where “an algorithm that takes N^100 years is fast, but one that takes 2^N nanoseconds is intractable.” We cannot really afford to ignore the duration of the steps in natural computing, where an instant response can have survival value for a living organism.
„In nature, asymptotic complexity is generally irrelevant... Whether the algorithm is linear,
quadratic, or exponential is not so important as whether it can deliver useful results in required
real-time bounds for the inputs that actually occur. The same applies to other computational
resources. … it is not so important whether the number of neurons required varies linearly or
with the square of the number of inputs to the net; what matters is the absolute number of
neurons required for the numbers of inputs there actually are, and how well the system will
perform with the number of inputs and neurons it actually has.“ (MacLennan, 2006).
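To make the irony concrete, a back-of-the-envelope comparison (our own illustration; the figures are not from the cited work) evaluates both hypothetical running times for inputs of realistic size:

```python
# Compare the "tractable" N^100-year algorithm with the "intractable" 2^N-nanosecond one.
SECONDS_PER_YEAR = 3.15e7
NANOSECOND = 1e-9

def poly_seconds(n: int) -> float:
    return float(n) ** 100 * SECONDS_PER_YEAR   # N^100 years, expressed in seconds

def exp_seconds(n: int) -> float:
    return 2.0 ** n * NANOSECOND                # 2^N nanoseconds, expressed in seconds

for n in (10, 50, 100):
    print(n, poly_seconds(n), exp_seconds(n))
# n = 10 : ~3e107 s for the polynomial algorithm vs ~1e-6 s for the exponential one
# n = 100: ~3e207 s vs ~1.3e21 s (about 4e13 years)
# 2^N overtakes N^100 only for n beyond roughly a thousand; for every input size that
# actually occurs, the asymptotically "fast" algorithm is hopelessly slow.
```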
In addition, adaptation and reaction to unpredicted stimuli, large-scale cluster optimizations and the continuity of input and output values in space and time, such as those occurring in neural networks, immune systems and insect swarms, are further characteristics of natural systems that lie outside the scope of the TM model and cannot be addressed appropriately with traditional analytical approaches. Finally, evolvability and robustness (Wagner, 2005), i.e. continuous development and effective (not necessarily correct!) operation in the presence of noise, uncertainty, imprecision, error and damage, complete this admittedly fragmentary set of characteristics of living systems (Rashevsky, 1954, 1960, 1965; Miller, 1978).
The following section discusses some alternative approaches to the TM model in more detail.
4. About Non-classical Computation Models
The term non-classical computation denotes computation beyond and outside the classical Turing
machine model such as extra-Turing, non-Turing and post-Newtonian computation (Stannet, 1991).
Representative approaches include Super-Turing and hypercomputation, as well as nano-, quantum,
analog and field computation.
Super-Turing computation (Siegelmann, 1995, 1996a, 2003) is a synonym for any computation that cannot be carried out by a Turing Machine, as well as for any (algorithmic) computation that can be carried out by one. Accordingly, Super-Turing computers are any computational devices capable of performing Super-Turing computation, e.g. non-Turing-computable operations such as integrations of real-valued functions that provide exact rather than approximate results.
In fact, Turing himself proposed a larger class of “non-Turing” computing architectures, including oracle machines (o-machines), choice machines (c-machines), and unorganized machines (u-machines). He did not anticipate that his original I/O model would dominate computer science for over five decades. Other approaches include the pi-calculus (Milner, 1991, 2004), the $-calculus (Eberbach, 2000-2001), Evolutionary Turing Machines 13 (Eberbach & Wegner, 2003; Eberbach, 2005) and (recurrent) neural networks 14 (Garzon & Franklin, 1989; Siegelmann & Sontag, 1992; Siegelmann, 1993).
13 A more complete model for evolutionary computing than common Turing Machines, recursive algorithms or Markov chains.
14 Some of them with real numbers as weights.
–9 –
Integral Biomatics
28.02.2007
Furthermore, there are Interaction Machines 15 (Wegner, 1997-98), Persistent Turing Machines 16
(Goldin & Wegner, 1999; Goldin, 2000), Site and Internet Machines (van Leeuwen & Wiedermann,
2000a/b) and, finally, self-replicating cellular automata (von Neumann, 1966). The latter we regard as a special class of generic, self-realizable computing architectures that we assign to the physical plane for the purpose of our dyadic model of an evolving intelligent “computational organism”, discussed in the next section. In fact, CA and some other Super-Turing architectures belong to both planes, since they can be an abstract theoretical concept and its physical realization at the same time.
Hypercomputation (Copeland, 2004) studies models of computation that expand the concept of
computation beyond the Church-Turing thesis and perform better than the Turing machine. It refers to
various proposed methods for the computation of non-Turing computable functions such as the
general halting problem. A good survey report in this area is given in (Ord, 2002). Here the author
introduces ten different types of hypermachines 17 and compares their capabilities while explaining
how such non-classical models fit into the classical theory of computation. His central argument is that
the Church-Turing thesis is commonly misunderstood. Ord claims that the negative results of Gödel and Turing depend mainly on the nature of physics 18 (♣♣♣). Other authors endorse this view as well (Kieu, 2002). We also share Ord’s thesis that the Turing machine model is based on concepts that conform to Newtonian physics and on mathematical abstractions that are inadequate from a biological viewpoint, such as negligible power consumption and unlimited memory. On the other hand, quantum computation has already demonstrated 19 that the feasibility of algorithms depends on the nature of the physical laws themselves. Thus, extrapolating new physical laws would automatically mean new approaches to computation. This also holds in the case of biology. In fact, we face an epistemological problem: the more we know about natural phenomena, the more we expand our models. Therefore, to deal with the rising complexity of abstractions, we need an evolving networked model of living systems (Capra, 1997) that addresses a new, inclusive theory of computation and automation based on the principles of emergence and self-organisation from general system theory (Bertalanffy, 1950 ff.), living systems theory (Miller, 1978) and systems biology (Kitano, 2002).
Mathematical models for hypercomputers include:
• a Turing machine that can complete infinitely many steps (Shagrir & Pitowsky, 2003);
• an idealized analog computer (MacLennan, 1990; Siegelmann, 1996b), a real-number computer that could perform hypercomputation if physics allows in some way the computation with general real variables, i.e. not only computable real numbers;
• a relativistic digital computer working in a Malament-Hogarth space-time, which can perform an infinite number of operations while remaining in the past light cone of a particular space-time event (Etesi & Németi, 2002);
• a quantum mechanical system, but not an ordinary qubit quantum computer, which uses (e.g.) an infinite superposition of states to compute non-computable functions (Feynman, 1982, 1985).
Currently all these devices are only theoretical concepts, but they may move some day to the physical
(realization) plane and become our everyday reality in the same way as von Neumann’s cellular
automata did.
15 While Turing machines cannot accept external input during computation, interaction machines extend the TM model by input and output actions that support dynamic interaction with an external environment.
16 Multitape machines with a persistent worktape preserved between successive interactions; they represent minimal extensions of the TM that express interactive behaviour characterized by input-output streams.
17 Infinite state TM, probabilistic TM, error-prone TM, accelerated TM, infinite time TM, fair non-deterministic TM, coupled TM, TM with initial inscriptions, asynchronous networks of TM, and O-machines (also classified as Super-Turing computation).
18 This result is comparable with the ones in the discussions on the mechanization of life (♣) and thought (♣♣).
19 E.g. through such effects as quantum superposition, quantum entanglement or wave function collapse.
Nanocomputation (MacLennan, 2006) involves computational processes with nano-devices and
information structures which are not fixed, but in constant flux and temporary equilibria. It includes
sub-atomic and molecular modes of computation such as quantum computation (Nielsen & Chuang,
2000) and DNA computation (Amos, 2005).
A fundamental characteristic of nanocomputation is microscopic reversibility in the device and information structures. This means that chemical reactions always have a non-zero probability of running backwards. Therefore, molecular computation systems must be designed so that they accomplish their purposes in spite of such reversals. Furthermore, computation proceeds asynchronously, in continuous-time parallelism and superposition. Also, operations cannot be assumed to proceed correctly, and the probability of error is always non-negligible. Therefore, errors should be built into nanocomputational models from the very beginning. Due to thermal noise and quantum effects, errors, defects and instability are unavoidable and must be taken as given. Examples of nanocomputing devices include quantum logic gates and DNA chips.
Analog / continuous computation uses physical phenomena (mechanical, electrical, etc.) to model the problem being solved, using continuously varying values of one kind of physical parameter (e.g. water or air pressure, electrical voltage, magnetic field intensity, etc.) to obtain, measure and represent another as a goal function. A major characteristic is the operation on signals in their natural continuous state, without conversion (sampling and quantization).
Analog computers have been used since ancient times 20 in agriculture, construction and navigation (Bromley, 1990). When in 1941 Shannon proposed the first General Purpose Analog Computer (Shannon, 1941) as a mathematical model of an analog device, the Differential Analyser (Bush, 1931), this invention announced the age of electronic analog computers (Briant et al., 1960). From the very beginning, they were in competition with their digital counterparts and initially outperformed them by optimally deploying electronic components (capacitors, inductors, potentiometers, and operational amplifiers). Analog computers have three major advantages over digital ones: i) instantaneous response, ii) inherent parallelism, and iii) time-continuity (no numerical instabilities or time steps). They are well suited to simulating highly complex and dynamic systems in real time and at accelerated rates, such as aircraft operation, industrial chemical processes and nuclear power plants. Until 1975, analog computers were considered unbeatable in solving scientific and engineering problems defined by systems of ordinary differential equations. However, their major disadvantages, namely the limited precision of results (3 to 5 digits), their size and their price, made them unsuitable for future applications with the advent of the transistor and the growing performance of integrated circuits in digital computers.
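As an illustration of the class of problems on which analog machines excelled, the toy sketch below mimics an analog integrator loop solving a damped harmonic oscillator in (approximated) continuous time; it is our own example and does not model any particular machine.

```python
# Toy "analog computer" patch: two integrators in a feedback loop solving
#   x'' + 2*zeta*omega*x' + omega^2 * x = 0   (damped harmonic oscillator)
omega, zeta = 2.0, 0.1      # natural frequency and damping ratio
x, v = 1.0, 0.0             # initial condition: displaced, at rest
dt = 1e-4                   # a tiny step standing in for continuous time

t = 0.0
while t < 5.0:
    a = -2.0 * zeta * omega * v - omega ** 2 * x   # summing amplifier
    v += a * dt                                    # first integrator:  v = integral of a
    x += v * dt                                    # second integrator: x = integral of v
    t += dt

print(f"x(5.0) = {x:.4f}")  # a decaying oscillation, as expected
```

On a real analog machine the two integrations happen physically and simultaneously, with no time step and no rounding, which is precisely the source of the three advantages listed above.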
Nevertheless, the interest of the scientific community in continuous computation now arises from several different perspectives (Graca, 2004). Recent research in the computing theory of stochastic analog networks (Siegelmann, 1999) and of neural networks and automata (Siegelmann, 1997, 2002) has challenged the long-standing assumption that digital computers are more powerful than analog ones. The analog formulation of Turing’s computability thesis now suggests that no possible abstract analog device can have more computational capabilities than neural networks (Siegelmann & Fishman, 1998; Siegelmann et al., 1999; Natschläger & Maass, 1999).
Field computation (MacLennan, 1990, 1999, 2000) can be considered a special case of neural computation which operates on data represented as fields. The latter are either spatially continuous arrays of continuous values, or discrete arrays of data (e.g. visual images) that are sufficiently large to be treated mathematically as though they were spatially continuous. A Fourier transform of a visual image, for example, is an instance of field computation.
20 E.g. the Antikythera mechanism, the earliest known mechanical analog computer (dated 150-100 BC), designed to calculate astronomical positions (François, 2006).
This approach provides a good basis for naturalistic computation. It is a model of information processing inside cortical maps in the mammalian brain. Field computers can operate in discrete time, like conventional digital computers, or in continuous time, like analog computers.
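A small discrete-time illustration of the idea: the image below is treated as a single field and transformed as a whole rather than element by element. The snippet is our own example on a synthetic ‘image’.

```python
import numpy as np

# A 256x256 array standing in for a visual image, treated as one field.
image = np.random.rand(256, 256)

# One field-level operation: Fourier-transform the whole field at once,
# apply an ideal low-pass mask in the frequency domain, and transform back.
spectrum = np.fft.fftshift(np.fft.fft2(image))            # centred 2D spectrum
rows, cols = np.ogrid[:256, :256]
mask = (rows - 128) ** 2 + (cols - 128) ** 2 <= 20 ** 2   # keep only low frequencies
smoothed = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

print(image.shape, smoothed.shape)   # the extent of the field is unchanged
```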
Other realizations of field computation could be analog matrices of field programmable gate arrays
(FPGAs) and grids thereof for image and signal processing problems. Further examples for field
computation include optical computing (Stocker & Douglas, 1999; Woods, 2005), as well as
Kirchhoff-Lukasiewicz machines 21 and very dense cellular automata (MacLennan, 2006).
Finally, quantum computation, which was traditionally concerned with discrete subatomic systems such as qubits, has come to recognize that many quantum variables of continuous character, such as the position and momentum of electromagnetic fields, can be quite useful. Noise is a difficult problem for quantum computation, and continuous variables are more susceptible to noise than discrete ones. Thus, quantum computation over continuous variables becomes an interesting option towards robust and fault-tolerant quantum computation and the simulation of continuous quantum systems such as quantum field theories. In (Lloyd & Braunstein, 1999), the authors provide the necessary and sufficient conditions for the construction of a universal quantum computer capable of performing "quantum floating point" computations on the amplitudes of the electromagnetic field.
The above list of alternatives to classical computation models is not exhaustive, but it provides a good starting point for the transition to the next section, which introduces the main part of this contribution.
5. Integral Biomathics
The review of the diverse non-classical computation approaches beyond the Turing frame in the
previous section leads to the conclusion that these models hypothesize different post-Newtonian era
laws of physics as a precondition for their implementation. Although modern computer science has not
really entered the relativistic sub-nuclear age of modern physics yet, the abundance of computational ideas and approaches indicates that researchers are well aware of the limitations of the Turing computation model. They are certainly going to use every discovery and invention in physics
to realize their concepts. Perhaps the most significant aspect of this finding in historical perspective is
the comeback of analog computing now based not on mechanical components and electronic circuits,
but on artificial neural networks and continuous quantum computation. This is a very interesting fact
which shows that computation is now closer to biology than to physics.
Indeed, computer science and biology today maintain a relationship similar to that of 19th and 20th century mathematics and physics. Progress in the one field will influence progress in the other and vice versa, for we cannot avoid analogy and semantics (in terms of formal logic as we know it) and limit ourselves to a priori established conventions (based on past facts) about what is general and what is special in theoretical research. In this respect we are led by the ideas presented in the introductory chapters of Rosen’s Life Itself (1991).
It becomes evident from the discussion in the previous two sections that the two fields or planes of research are closely interdependent through synergy and correlation and by addressing a number of parallel phenomena, paradoxes and questions, thus representing a congruent pair, a dyad, of knowledge in complex system design and automation that deserves special attention from the scientific community. It is therefore our intention to move beyond the limits of autopoietic theory and Turing computation and to explore new computational models in complex autonomous systems. Our focus is research in natural automation and computation, including models of information emergence, genetic encoding and transformations into molecular and organic structures. This area is the joint domain of the life sciences, the physical sciences and cybernetics.
21 J. Mills, http://www.cs.indiana.edu/~jwmills/ANALOG.NOTEBOOK/klm/klm.html.
In this respect, we are interested in investigating natural systems which are robust and fault-tolerant and which share the characteristics of both living organisms and machines, systems which can be implemented and maintained as autonomous organisations on the molecular and atomic scale without being planned.
Therefore, we are going to use concepts, models and methods from such disciplines as evolutionary
biology, synthetic microbiology, molecular nanotechnology, neuroscience, quantum information
processing and field theory. We are going to enhance them with other techniques and formalisms from
classical and non-classical computation, network and information theory to address specific challenges
in the pervasive cyberzoic era of integral research beyond nano-robotics and nano-computation into
molecular self-assembly, synthetic morphogenesis, evolutionary computation, swarm intelligence,
evolvable morphware, adaptive behaviour and co-evolution of biosynthetic formations (Zomaya,
2006). What we mean by this is ultimately the convergence and transformation of biology,
mathematics and informatics into a new naturalistic discipline that we call integral biomathics.
This new field is founded on principles of mathematical biophysics (Rashevsky, 1948; Rosen, 1958),
systems biology (Wolkenhauer, 2001; Alon, 2007) and information theory (Shannon, 1948) endorsed
by biological communication and consciousness studies such as those described in (Sheldrake, 1981;
Crick, 1994; Penrose, 1996; Hameroff, 1998; Edelman & Tononi, 2000). Here the term ‘integral’
implies also ‘relational’ and denotes the associative and comparative character of the field from the
perspective of cybernetics (Bateson 1972) and systemics (François, 1999). Hence, the goal of integral
biomathics is the integration of the numerous system-theoretic and pragmatic approaches to artificial
life and natural computation and communication within a common research framework. The latter
pursues the creation of a stimulating ecology of disciplines for studies in life sciences and computation
oriented towards naturalistic system engineering. Our approach does not antagonize old theories and results, nor does it dismiss recent or new ones. In this way, we answer the growing appeal of systems biologists to develop integrative and reconciling philosophies towards the diverse approaches to genetics, molecular and evolutionary biology (O’Malley & Dupré, 2005).
Thus, following the line of research set up by the pioneering works of Rashevsky (1940 ff.), Rosen (1958 ff.), Bateson (1972), Miller (1978), and Maturana and Varela (1980), integral biomathics is going to
address questions arising in the widened relational theory of natural and artificial systems. It is
concerned with the evolutionary dynamics of living systems (Nowak, 2006) in a unified manner while
accentuating the higher-layer dynamic relationship, interplay and cross-fertilization among the
constituting research areas. In particular, integral biomathics is dedicated to the construction of general theoretical formalisms related to all aspects of emergence, self-assembly, self-organisation and self-regulation of neural, molecular and atomic structures and processes of living organisms, as well as to
the implementation of these concepts within specific experimental systems such as in silico
architectures, embryonic cell cultures, wetware components (artificial organic brains, neurocomputers)
and biosynthetic tissues, materials and nano-organisms (cyborgs). Therefore, the research methods of
integral biomathics include not only the traditional ones of pragmatic and theoretical biology,
involving such disciplines as molecular biology and functional genomics, but also the dynamic
inclusion of novel computational analysis and synthesis techniques which are characteristic of the corresponding frame of relevance, FoR (MacLennan, 2004), and which go beyond the existing taxonomic
framework for modelling schemes (Finkelstein et al., 2004). This corresponds to a qualitatively new
development stage in systems biology and engineering.
The algorithmization of sciences not only placed biology closer to the traditional ‘hard’ sciences such
as physics and chemistry, but also provided the base for a paradigm shift in the role distribution
between biology and mathematics (Easton, 2006). Therefore, the goal of this new field of biologically driven mathematics and informatics (biomathics) and biological information theory is the
elaboration of naturalistic foundations for synthetic biology, systems bioengineering, biocomputation
and biocommunication which are based on understanding the patterns and mechanisms for emergence
and development of living formations.
Our major objection to previous efforts at unification in this field, however, concerns the roles of causation and entailment in the process of creating and organising life forms. These processes are qualitatively different from the system models we know in the physical sciences. Therefore, integral biomathics aims at: (i) removing the restrictive reductionist hypotheses of contemporary physics, (ii) adopting the appropriate mathematical formalisms and models, and (iii) deriving, out of biological reality, new formalisms that address the complexity of living systems more adequately.
Such arguments are in line with pioneering research in mathematical biophysics (Rashevsky, 1948, 1954) as well as with recent results in systems biology (Westerhoff et al., 2004; Mesarovic et al., 2004). Therefore, we provide an extended model of the interdependence between the various disciplines in this field based on Rosen's category-theoretical definition (Rosen, 1991, p. 60).
Figure 1 illustrates Rosen’s relational model of science 22 based on the concept of Natural Law
(Whitehead, 1929, 1933). The arrows 1 and 3 represent the recursive entailment structures within the
corresponding domains, – causation in the natural (physical) world and inference in the formal
(abstract) world, – whereas the arrows 2 and 4 express the possibility of consistent use of syntactic and
semantic truth through encoding/measurement and decoding/realization. In particular, arrow 2 depicts
the flow path of abstraction and generating hypotheses about the natural world in physical (analytical)
sciences. Arrow 4, in turn, represents the path of de-abstraction and creating forecasts about natural
world events using formal models and theories. These forecasts are then used to prove the truth of
scientific theories, inferred from hypotheses in the formal world, by observation and measurement in
the “internal” loop 1 of the physical world 23 . At the same time, arrow 4 shows the path of invention
and engineering of artificial systems in synthetic sciences (e.g. computer science) that emerge out of
mathematical models and theorems within the “internal” loop 3 in the formal world. Thus, arrow 4 can
be regarded as a double one. In engineering, when we start with inference systems in the formal world
(e.g. our minds) that are intended to realize physical world systems systematically, i.e. “by natural
law”, we face the so-called realization problem, according to Rosen, which involves “modes of
entailment falling completely outside contemporary science” 24 .
Figure 1: Rosen’s modelling relation for science and natural law (Rosen, 1991, p. 60)
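As an illustrative aid only (it is not Rosen's own formalism), the commutation requirement behind the modelling relation, namely that the "internal" path 1 in the natural world and the composite path 2 + 3 + 4 through the formal world deliver the same replicable results, can be sketched in a few lines of code; all names below are hypothetical.

from typing import Callable, TypeVar

N = TypeVar("N")  # states of the natural system
F = TypeVar("F")  # propositions / states of the formal system

def is_model(causation: Callable[[N], N],   # arrow 1: entailment in the natural world
             encode: Callable[[N], F],      # arrow 2: measurement / abstraction
             inference: Callable[[F], F],   # arrow 3: entailment in the formal world
             decode: Callable[[F], N],      # arrow 4: realization / prediction
             observations: list[N]) -> bool:
    """The formal system models the natural one when paths 1 and 2 + 3 + 4 agree."""
    return all(causation(n) == decode(inference(encode(n))) for n in observations)

# Toy check: a 'natural' process that doubles a quantity, encoded as integers.
print(is_model(lambda n: 2 * n, int, lambda f: 2 * f, int, [1, 2, 3]))  # True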
22 associated with physical systems.
23 There are two separate paths of cognition in (physical) science: 1 and 2 + 3 + 4. If both of them deliver the same replicable results, the theory derived from the hypothesis in the formal world becomes a model of the natural world (Rosen, 1991).
24 e.g. knowledge through intuition mentioned in section 3.
Indeed, both arrows 2 and 4 of encoding and decoding remain unentailed 25 according to Rosen's model. These arrows are not part of the natural world, nor of its environment; they do not belong to the formal world either. They appear like mappings, but they are not such in any formal sense. This finding is consistent with Gödel's incompleteness theorems 26, which state that there is always a true statement within a formal system that cannot be proved within that system and requires a higher-level formalism.
Therefore, in a next step, we propose four substantial amendments to Rosen's modelling relation in Fig. 1, defining the major distinctions of biological systems when compared to physical systems. Figure 2 depicts these changes, which extrapolate and endorse the interchange circle model between synthetic biology and systems biology presented in (Barrett et al., 2006). Firstly, we adopt Rosen's postulate about the generalization of biology over physics (Rosen, 1991, 1999). In other words, we regard physical systems as a subset of the more complex biological systems and not the other way around.
Secondly, we adopt and develop some concepts related to the inductive force field in morphogenesis
(Thompson, 1917; Franck, 1949) and the formative causation or ‘morphic’ resonance hypothesis 27
(Sheldrake, 1981) as basic organization principles in biological systems, analogous to those in
quantum physics 28 . Thirdly, we replace the human-centric concept of ‘natural law’ by the biological
one of natural habit 29 which is synonymous with ‘natural pattern’. Finally, we apply the principle of
‘lateral’ induction 30 in the formal world for creating new formalisms about natural systems
through pattern recognition and analogy, based on observations, experiments and associations
with other formalisms. Lateral induction corresponds to morphic resonance in the natural world
and addresses the larger set of formal systems that reflect the behaviour of biological systems from the
standpoint of integral biomathics. In this way, biology is generalized and physics becomes its special
case with causation and inference being entailments of resonance and induction respectively.
Figure 2: The revised Rosen’s modelling relation with generalization of biology and natural habit
25 There is no mechanism within the formal world to change an axiom or a production rule, and there is no such mechanism within the physical world to change the flow of causation.
26 In particular, the second incompleteness theorem: "For any formal theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent."
27 Cf. section 6 for more details.
28 "The universe -- being composed of an enormous number of these vibrating strings -- is akin to a cosmic symphony." (Greene, 1999).
29 Coined by Sheldrake; the term "law" is too strong for biology, whereas habits are less restrictive towards changes.
30 Lateral induction is analogous to de Bono's concept of 'lateral thinking' (de Bono, 1967); cf. section 6 for more details.
We still do not know how an idea emerges from, relates to, coincides or reacts with, or is influenced by other ideas in human minds. Thoughts appear to exhibit properties reminiscent of quantum entanglement (Aspect et al., 1982). Yet, we do not know the idiosyncratic nature of such mechanisms as the (spontaneous) inversion, correlation, fusion and fission of concepts. This is an interesting research area along the pathways of arrows 3 and 4 in Figure 2. Furthermore, in the bidirectional relation of natural habit between the two worlds in Figure 2, we identify two epistemological processes, denoted by arrows 2 and 4. The first is the process of pattern recognition in the natural world and the phenomenon of memory viewed by the self or the formal world (arrow 2). The second is the case of relational science in the formal world, based on formal theories of inference in the first concentric circle of mathematical models, but also on metaphors, analogies and non-local relations or induction in the second, incorporating circle (arrow 4). The latter is associated with the next-level entailments in the formal world and with resonance in the natural world, related to such phenomena as life, thought and consciousness. Thus, arrows 2 and 4 represent an open infinite spiral of science development rather than a closed stuttering loop.
Figure 3 represents the evolution of formalizations and realizations with the shifted perspective of
higher layer entailments along the time axis. It suggests that a vertical view on the slices of shifted flat
world layers along the temporal axis may deliver new insights into the cross-layer links between the
entities in the diagram and about the real dimensions of the interplay within the formal and natural
worlds. Similar models of the epistemological ontogenesis are present in Rashevsky’s topological
overlays of allocated organic functions (Rashevsky, 1961) and in the amino-acid interrelationships
within the 3D folding structure of DNA (Crick, 1988). This layered interrelationship within the
developmental spiral of the world “versions” results from paradigm changes and inflection points in
the formal world.
Complexity in the natural world is manifested through implicate and explicate order, which corresponds to semantic enfolding and unfolding in the formal world (Bohm, 1980). The latter is associated in our enhanced model with recursive pattern generation (including failures) followed by
reflexive processes of evaluation, change and adaptation in response to external stimuli in the
presence of noise and disturbance. Essentially, we propose an integral evolving model of a layered
dynamic interdependence between the logical formal meta-computational and reasoning system world
and the natural autonomous biophysical system world. The two worlds represent a dyad in perpetual
development which ultimately embeds the unentailed relationships 2 and 4 of Rosen’s original
modelling relation in figure 1.
Figure 3: The evolving Rosen’s modelling relation for science
The above amendments to Rosen's modelling relation are necessary because, in historical perspective, mathematics has been derived from and developed for descriptions of physical phenomena. Mathematics has proved to be a suitable tool for statics, celestial mechanics, thermodynamics, electromagnetism and relativistic theory. However, most of its formalisms might be insufficient to handle biology in a straightforward manner. We may need to develop new mathematical foundations for modelling different types of biological systems, just as von Neumann and Dirac proposed their own formalisms to explain quantum mechanics. Indeed, quantum theory and theoretical biology are closely related to each other (Rashevsky, 1961), which means that biomathics could successfully adopt and develop the formal toolset of quantum theory.
The basic characteristic of physical systems is that they are closed and that everything that cannot be included within a closed system is neglected. In biology, however, we face a world inverted with respect to that of physics, because systems are fundamentally open and because the second law of thermodynamics and the notion of system equilibrium are expressed in terms of order (negentropy 31) rather than disorder (entropy). Therefore, we now need a different kind of mathematics, a new kind of science (Wolfram, 2002) devoted to the discovery of recursive patterns of organization in biological systems (Miller, 1978), such as those of neural activity, where neural systems can be understood in terms of pattern computation and abstract communication systems theory (Andras, 2005). This science should be able to deal with the complexity of biological systems 32 by restructuring its ontology base to correct its models whenever appropriate.
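For orientation only (the notation below is ours, not the author's), the negentropy mentioned above can be written, following Schrödinger's usage, as the distance of the statistical entropy from its maximum:

\[
J \;=\; S_{\max} - S, \qquad S \;=\; -\,k_B \sum_i p_i \ln p_i ,
\]

so that a fully disordered system has \(J = 0\), while increasing order (decreasing \(S\)) corresponds to increasing negentropy \(J\).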
6. Discussion
The approach to integral biomathics presented in the previous section appears related to the classical dialectical scheme of thesis-antithesis-synthesis (Hegel, 1807). Yet, we have a different motivation for explaining the philosophy of this new discipline, which we derived from the analysis of the latest developments in the participating scientific fields discussed in sections 1-4. In particular, we identified the following categories that play special roles in integral biomathics:
Syntax vs. Semantics. Formal concepts are either purely syntactic in Hilbert's sense or they are created through recursive extrapolation from other formal concepts. In the second case, they contain a semantic component of truth that links them to the context of those previous concepts they were derived from. Rashevsky (1961) pointed out that there are different ways in mathematics to approach the relational problem in biology, using such means as set theory, topology or group theory. Each one of these approaches is able to represent different aspects of relations in a system within different contexts. Thus, context (semantics) and relation (syntax) can be interchanged depending on the purpose of the description. Therefore, the author regards even purely syntactic models in the formal world as semantic inclusions of axiomatic truths which are obtained in an empirical way from the physical world through the associative link of natural habit.
In order to change the historical discrimination between syntax and semantics in the formal world, imposed by the differing perspective of our present-day understanding of the natural world, we need to step back and generalize the description of the domain of discourse. This can be realized by relaxing the restrictions imposed by the TM model 33 to a degree that allows us to reconcile traditional Turing-based (incl. Super-Turing) and non-Turing-based approaches to computation (Rosen, 1991; Hogarth, 1994). The above considerations demand a systematic study of the traditional and the opportunistic approaches to computation and automation, along with their relations and frames of relevance.
31 Schrödinger, 1944.
32 At a certain level of complexity, human beings and machines cannot recognise patterns anymore.
33 Derived from the principles of Newtonian mechanics.
Resonance vs. Causation. Sheldrake’s hypothesis of formative causation or morphic resonance states
that morphogenetic fields shape and organize systems at all levels of complexity (atoms, molecules,
crystals, cells, tissues, organs, organisms, societies, ecosystems, planetary systems, galaxies, etc.).
Accordingly, "morphogenetic fields play a causal role in the development and maintenance of the forms of systems" (Sheldrake, 1981, p. 71); they contain an inherent memory given by the processes of morphic resonance in the past, where each entity has access to a collective memory.
Our approach differs from Sheldrake’s original definition in two points. Firstly, we clearly distinguish
between causation and resonance as universal organization principles of the Natural World (Fig. 2).
Whereas in causation we can semantically identify multiple linear cause–implication chains about
events in the domain of discourse, we understand resonance as non-linear, non-local and (sometimes)
semantically ‘hidden’ spatio-temporal relationships between entities in the broad sense. The
entailment of causation within resonance is however allowed in our model analogously to the scales of
interactions in physics. Secondly, resonance is for us a dyad consisting of (i) energetic resonance as we know it in physics (wave mechanics, electromagnetism, quantum mechanics, string theory, etc.), and (ii) information resonance corresponding both to classical communication theory (Shannon, 1948) and to Sheldrake's morphogenetic field theory.
Induction vs. Inference. With the term 'induction' in integral biomathics we do not mean the classical mathematical induction used for formal proofs. It is the counterpart of resonance within the formal world, which entails not only the classical formal reasoning theories in mathematics, but also (yet) unknown structures and pathways of logic based on complex, semantically enfolded relationships within and between formalisms. We can imagine induction as the self-organized process of generating and evolving formalisms from a set of basic axioms and theorems that can mutate depending on the results of cognition.
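Purely as a toy illustration of this reading of induction (it is not a formal proposal of ours or of the cited authors), one can picture a numeric "axiom" that mutates and is retained according to how well the rule it generates reproduces observed patterns; every name and parameter below is invented for the example.

import random

def induce(observations, axiom=1.0, steps=2000, seed=0):
    """Hill-climb a single numeric 'axiom' a so that the rule x -> a * x
    reproduces the observed (input, output) pairs as closely as possible."""
    rng = random.Random(seed)
    def error(a):
        return sum((a * x - y) ** 2 for x, y in observations)
    best = axiom
    for _ in range(steps):
        mutant = best + rng.gauss(0, 0.1)  # mutate the current axiom
        if error(mutant) < error(best):    # retain formalisms that fit cognition better
            best = mutant
    return best

# Observations generated by a hidden 'natural habit' y = 3x; induction recovers a value close to 3.
print(induce([(1, 3), (2, 6), (5, 15)]))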
Synchronicity and Chance vs. Determinism. When we observe the above categories as an evolving
dynamic set of cross-interacting patterns of structures and processes for organization and exchange
between the natural and the formal worlds, we can recognize a third epistemological dimension: the multi-relational helix Gestalt of scientific exploration along the temporal axis (Fig. 3). We believe
that the usage and verification of this macro-model can deliver new insights into the natural
phenomena of synchronicity and chance in the context of emergence, differentiation, organization and
development of biological structures and processes. Following the line of thought from the previous
paragraphs, we can hypothesize that determinism is entailed within synchronicity and chance.
7. Conclusions
Recent efforts in autonomic computing (IBM, 2001; Kephart & Chess, 2003) and autonomic communications (Smirnov, 2005; Dobson et al., 2006) are directed at studying how such systems can automatically adjust their performance and behaviour in response to the changing conditions of their work environments. The goal of this research, which spans the whole scale of contemporary computer science from automata theory to artificial intelligence, is to improve and enhance the complex design of modern computing and communications architectures with capabilities that occur in living systems, such as self-configuration, self-optimization, self-repair and self-protection. The expected outcome is the development of autonomic algorithms, protocols, services and architectures for the next-generation pervasive Internet that "evolve and adapt to the surrounding environments like living organisms evolve by natural selection" (Miorandi, 2006).
Formality in contemporary computing means that information processing is both abstract and syntactic: the operation of a calculus depends only on the form, i.e. on the organisation of the representations, which has been considered deterministic. Yet, the more computation pervades natural environments, the more it faces critical phenomena of incompatibility (Koestler, 1967).
One of them is the oversimplification of state-based computing models discussed in the previous sections. Although we are equipped today with a whole range of theories and tools for dealing with complexity in nature (Boccara, 2004; Sornette, 2004), we still miss the real essence of natural computation in such areas as geology, meteorology and stock markets.
The diverse models and tools for solving differential and recurrence equations and for modelling stochastic processes and power-law distributions using cellular automata and networks operate on Turing machine computer architectures based on the concept of state. The latter is central to Newtonian mechanics, which describes reality as sets of discrete interactions between stable elements. Yet, physics already underwent a paradigm shift towards quantum mechanics in the past century, and natural systems, examined at their utmost detail, are now seen to follow the wave function equation, which describes reality as a mesh of possibilities where only an undifferentiated potential describes what might be observed prior to measurement. It was shown in this paper that biological systems are even more complex than physical ones and that we need a different paradigm, embedded in the underlying computation architecture, in order to achieve the vision of truly autonomic computing and communications.
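For reference, the wave function equation alluded to here is the Schrödinger equation in its standard textbook form (the notation is ours, not the paper's):

\[
i\hbar\,\frac{\partial}{\partial t}\,\psi(x,t) \;=\; \hat{H}\,\psi(x,t),
\]

where \(|\psi(x,t)|^{2}\) gives the probability density of observing the value \(x\) at time \(t\), i.e. the undifferentiated potential of what might be observed prior to measurement.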
Networks are seen as the general organizing principle of living matter. A recent IBM report in Life Sciences (Burbeck & Jordan, 2004) referred to a Science article which drew this conclusion (Oltvai & Barabasi, 2002). Indeed, this notion originates from Rashevsky's biological topology (1954) and Rosen's metabolism-repair model (1958), which became the foundations of relational and theoretical biology. The network concept in life was then reinforced 20 years later by the autopoietic theory of Maturana and Varela (1974) and by Miller's living systems (1978).
Almost a decade before carrying out the first computer simulation experiments on autopoiesis (McMullin & Varela, 1997), Varela and Letelier tested Sheldrake's theory of morphic resonance in silicon chips using a microcomputer simulation (Varela & Letelier, 1988). Briefly, they let a crystal grow in silico. It was expected that, if the morphic fields theory were correct, the synthesis of the same crystal pattern would be accelerated after millions of iterations. Yet, Varela and Letelier found no detectable acceleration of the growing process at all. They concluded that either Sheldrake's hypothesis is falsified or it does not apply to silicon chips. However, there might be other explanations of this result. One of them could be that conventional Turing machines were used to run the experiments. In this case, new computation models beyond the TM concept are needed to address such issues as autopoiesis.
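A minimal sketch of the kind of test described, given only to make the protocol concrete (the actual Varela-Letelier program is not available to us, and every detail below is invented for illustration): the same crystal pattern is synthesized repeatedly, the number of iterations to completion is recorded for each run, and one checks whether later runs complete faster than earlier ones.

import random

def grow_crystal(size=200, seed=None):
    """Fill a 1-D lattice by random attachment; return the number of iterations needed."""
    rng = random.Random(seed)
    lattice, steps = [False] * size, 0
    while not all(lattice):
        lattice[rng.randrange(size)] = True  # attach a unit at a randomly chosen site
        steps += 1
    return steps

def acceleration_over_runs(runs=50):
    """Compare the mean completion time of the first and second half of repeated syntheses.
    Under the morphic resonance hypothesis the second half should be faster; a conventional
    (Turing machine) simulation shows no such trend."""
    times = [grow_crystal(seed=i) for i in range(runs)]
    half = runs // 2
    return sum(times[:half]) / half - sum(times[half:]) / (runs - half)

print(acceleration_over_runs())  # fluctuates around zero: no systematic acceleration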
Several issues arise in the investigation of non-Turing computation: (i) What is computation in the
broad sense? (ii) What frames of relevance are appropriate to alternative conceptions of computation
(such as natural computation and nanocomputation), and what sorts of models do we need for them?
(iii) How can we fundamentally incorporate error, uncertainty, imperfection, and reversibility into
computational models? (iv) How can we systematically exploit new physical processes (molecular,
biological, optical, quantum) for computation? (MacLennan, 2006).
Integral biomathics is a new approach towards answering these questions and towards shifting the computation paradigm closer to the domains of quantum physics and biology (Baianu, 1980, 1983, 2004), with the ultimate objective of creating artificial life systems that evolve harmoniously with natural ones. This new discipline is particularly interested in four essential HOW 34 questions:
• how life and life-like properties and structures emerge from inorganic components.
• how abstract ontological categories and semantic entailments emerge in living systems.
• how cognitive processes emerge and evolve in natural systems.
• how life-related information is transferred in space and time.
34 Of course, these questions imply also the Aristotelian WHY.
A starting point in this quest is the definition of the theoretical framework, in which the understanding of autopoiesis plays a central role. Kawamoto's extended definition of autopoiesis can be described as follows 35 (Kawamoto, 2000):
"An autopoietic system is organized as a network of processes of production of elements. Then,
(i) the elements of the system become the components only when they re-activate the network
that produces these elements,
(ii) when the sequence of the components construct a closed domain, it constitutes the system
as a distinguishable unity in the domain in which they exist."
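To make condition (i) concrete, here is a minimal sketch under our own, hypothetical encoding of a production network; it illustrates the reading of organizational closure given above and is not Kawamoto's or Nomura's formalism.

def reactivates_network(components: set[str],
                        productions: list[tuple[frozenset[str], str]]) -> bool:
    """productions: (required elements, produced element) pairs, a hypothetical encoding.
    Condition (i): the elements count as components only if the network they activate
    produces every one of them again."""
    producible = {out for ins, out in productions if ins <= components}
    return components <= producible

# Toy example: A and B produce each other, so {A, B} is closed; C is never regenerated.
prods = [(frozenset({"A"}), "B"), (frozenset({"B"}), "A"), (frozenset({"A", "B"}), "D")]
print(reactivates_network({"A", "B"}, prods))       # True
print(reactivates_network({"A", "B", "C"}, prods))  # False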
The main difference of Kawamoto's definition of autopoiesis from the original one by Maturana and Varela lies in the second part of the definition (ii). It distinguishes the living system itself 36 (German: "sich"), constituted as the network of productions, from the self 37 (German: "Selbst") as the closed domain in space. According to Kawamoto, this extension makes it possible to represent the aspect that the entity Selbst (self, syntax) changes while sich (itself, semantics), which ultimately represents its self-awareness, is maintained, as in schizophrenia 38. Kawamoto argues that this distinction is ambiguous in Maturana's and Varela's original definition and that it causes misunderstanding of autopoiesis. Nomura's formal model of autopoiesis (Nomura, 2006) is indeed affected by this aspect: the organization is closed and maintained in a specific category, while the structure is open and dynamic in a state space.
Indeed, the first part of Kawamoto's definition of autopoiesis is more precise than the original one of Maturana and Varela. It provides an initial condition which can be regarded as the "birth" of the living system. The second part of the definition also identifies a more distinct characteristic than the original one (Maturana & Varela, 1980). To be more formal, we could also add the gerund form of the verb here, because the construction of the closed system takes place permanently, i.e. at every single moment, so that the system does not die and then revive again and again. Thus, the distinction from the environment is always present and includes the processes of metabolism and repair which maintain the development of the living entity and its equilibrium/exchange with the environment (homeostasis).
Living systems are open in physical space, yet autopoiesis requires closedness of organization in living systems. This implies that openness and closedness (enfolding and unfolding, or syntax and semantics) lie both at the observers' physical perspective level and at another level beyond it, as described in Nomura's two-level model illustrated by the relations in Figure 3. Indeed, the layering of perspectives also has the dimensions of Maturana, Varela and Luhmann, who defined the categories of first, second and third order autopoiesis, starting from molecules and cells and moving up through organic systems and individual beings to species and social organisations. Here we could ask ourselves whether Nomura's model represents an orthogonal view to the classical model of autopoiesis while containing subsets or overlays of the three orders of autopoiesis defined there.
The above problem is not explicitly dealt with in Nomura's paper (Nomura, 2006). However, the existence of an isomorphism between operands and operators, the necessary condition of completely closed systems, is implied by the orthogonal view mentioned in (Soto-Andrade & Varela, 1984). This perspective also appears when we reconsider Rosen's idea (Rosen, 1991). Nevertheless, some hardline philosophers, including Kawamoto, argue that since the view of the relation between inputs and outputs in the system is that of an external observer, it does not clarify the organisation or the operation of the productions in the system itself (Nomura, 2007).
35 Personal correspondence with Tatsuya Nomura, November 2006.
36 Or the "self"-part of the process definition.
37 Or the physical entity of the system as epistemological distinction or even awareness (e.g. of a social group).
38 Kawamoto developed this extension from a psychiatric perspective.
Consequently, any description of this level is impossible with the current mathematics at hand. Nomura reckons 39 that this impossibility implies the difference between the perspectives of quantum physics and autopoiesis regarding the role of the observer (Toschek & Wunderlich, 2001). Finally, each perspective has its own rules and frame of relevance, as MacLennan states, so that we should rather ask ourselves at which level we define autopoiesis in the classical way. At the cellular level that could be a good model, but at the molecular level we begin to encounter quantum effects such as non-locality and entanglement (Aspect et al., 1982). Since Rosen claimed that a material system is an organism if and only if it is closed to efficient causation, the above evidence leaves open the question at which level a system can be defined as open or closed and whether a strict separation of these concepts can be provided. The circle of questions around these definitions is not yet complete, and their formulation and answering will require further studies.
The most interesting results of the research surveys presented in this paper are the parallels in the discussions about the formalizations of life (♣), thought (♣♣) and computation (♣♣♣), which we regard as the fundamental questions of a new scientific discipline. Integral biomathics is envisioned to provide a generalized epistemological framework and ecology for symbiotic research in life, physical and engineering sciences. It is going to be another challenging mountaineering experience in intellectual development. Yet, we remain optimistic, for the history of science knows other unusual discovery pathways which proved to be successful in the long run (Crick, 1988).
_________________
Acknowledgements: The author deeply appreciates the valuable help of Prof. Tatsuya Nomura from
Ryukoku University (Japan) for discussing questions on the formalization of autopoiesis using
category theoretical models.
8. References
Adami, C. (1998). Introduction to artificial life. New York: Springer-Verlag.
-----. (2004) Information theory in molecular biology. arXiv:q-bio.BM/0405004, v1, 5 May 2004.
URL: http://www.arxiv.org/PS_cache/q-bio/pdf/0405/0405004.pdf.
Allwein, G., Moskowitz, I. S., Chang, L. W. (2004). A new framework for Shannon information
theory. NRL Memorandum Report: NRL/MR/5540-04-8748, January 30, 2004. Center for High
Assurance Computer Systems (CHACS), Naval Res. laboratory, Washington D.C., USA. URL:
http://chacs.nrl.navy.mil/publications/CHACS/2004/2004allwein-techmemo5540-048-8748.pdf.
Alon, U. (2007). An introduction to systems biology. Chapman & Hall/CRC. ISBN-13: 978-1-58488-642-6.
Amos, M. (2005). Theoretical and Experimental DNA Computation. Springer-Verlag. ISBN: 978-3-540-65773-6.
Andras, P. (2005). Pattern computation in neural communication systems. Biol. Cybern. Springer-Verlag, 17 May 2005. DOI 10.1007/s00422-005-0572-0.
39 Personal correspondence with Tatsuya Nomura, December 2006.
Arbib, M. A. (1966). A Simple Self-Reproducing Universal Automaton, Information and Control.
-----. (1974). The Likelihood of the Evolution of Communicating Intelligences on Other Planets. In: C. Ponnamperuma, A. G. W. Cameron (Eds.), Boston: Houghton Mifflin Company, 59-78.
Aspect, A., Grangier, P., Roger, G. (1982). Experimental realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A new violation of Bell's inequalities. Physical Review Letters 49, 91-94.
Baianu, I. C. (1980). Natural Transformations of Organismic Structures. Bull. Math. Biology, 42:431-446.
-----. (1983). Natural Transformations Models in Molecular Biology. SIAM Natl. Meeting, Denver,
CO, USA.
-----. (2004). Quantum Genetics, Quantum Automata and Computation.
URL: http://cogprints.org/3676/01/QuantumAutnu2%5FICB.pdf.
Ballard, D.H. (1997). An Introduction to Natural Computation. The MIT Press, Cambridge. ISBN:
0262024209; reprinted by The MIT Press (1999): ISBN 0262522586.
Barrett, C. L., Kim, T. Y., Kim, H. U., Palsson, B. Ø., Lee, S. Y. (2006). Systems biology as a
foundation for genome-scale synthetic biology. Current Opinion in Biotechnology, Volume 17, Issue
5, October 2006, 488-492. Elsevier. Science Direct. URL: www.sciencedirect.com; also URL:
http://mbel.kaist.ac.kr/publication/int182.pdf.
Bateson, G. (1972). Steps to an ecology of mind. The University of Chicago Press. Chicago, IL, USA.
ISBN 0226-03905-6.
Bertalanffy, L. von (1928). Kritische Theorie der Formbildung. Berlin 1928 (Modern Theories of Development: An Introduction to Theoretical Biology, Oxford 1933, New York 1962).
-----. (1950a). An outline of general systems theory. Philosophy of Science, Vol. 1, No. 2.
-----. (1950b). The theory of open systems in physics and biology, Science, 111:23-29.
-----. (1962). General system theory - a critical review, General Systems, 7:1-20.
-----. (1968). General System Theory: Foundations, Development, Applications. New York: George
Braziller.
-----. (1972). The model of open systems: Beyond molecular biology, in: Biology, History and Natural
Philosophy, A. D. Breck and W. Yourgrau (Eds), 17-30, New York.
Benacerraf, P. (1967). God, the Devil, and Gödel, The Monist 51, 9-32.
Baum, R. (2003). Nanotechnology: Drexler and Smalley make the case for and against molecular
assemblers. Chemical & Engineering News, December 1, 2003. Volume 81, Number 48, CENEAR 81
48. 37-42, ISSN 0009-2347. URL: http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html.
Bedau, M., McCaskill, J., Packard, P., Rasmussen, S., Green, D., Ikegami, T., Kaneko, K., & Ray, T.
(2000). Open problems in artificial life. Artificial Life, 6(4), Sept. 2000, 363–376, ISSN:1064-5462.
Bedau, M. et al. (2005). Evolutionary design of self-assembling chemical systems: models and
experiments. In: N. Krasnogor, S. Gustafson, D. Pelta and J. L. Verdegay (Eds.), Systems Self-Assembly: Multidisciplinary Snapshots, Elsevier.
Bedau, M. (2006). Automated design of artificial life forms from scratch. In: S. Artmann, P. Dittrich
(Eds.), Proc. of 7th German Workshop on Artificial Life (GWAL-7), July 26-28, 2006, Jena Germany,
IOS Press, ISBN 1-58603-644-0.
Boccara, M. (2004). Modeling complex systems. Springer-Verlag. ISBN 0-387-404462-7.
Bohm, D. (1980). Wholeness and the implicate order. Routledge, London, England. ISBN 0-415-11966-9.
Boolos, B. et al. (1990). An Open Peer Commentary on “The Emperor's New Mind“. Behavioral and
Brain Sciences 13 (4) (1990) 655.
Briant, L. T., Just, L. C. Pawlicki, G. S. (1960). Introduction to electronic analogue computing.
Argonne National Laboratory Report. July 1960.
URL: http://dcoward.best.vwh.net/analog/argonne.pdf.
Bromley, A. G. (1990). Analog computing devices. in: Aspray, W. (Ed.), Computing before
computation. 156-199. Iowa State University Press, Ames, Iowa, USA. ISBN 0-8138-0047-1, URL:
http://ed-thelen.org/comp-hist/CBC.html.
Burbeck, S., Jordan, K. (2004). An assessment of systems biology. IBM Life Sciences, January 2004.
URL: http://www-1.ibm.com/industries/healthcare/doc/content/bin/AssessmentofSys.pdf.
Bush, V. (1931). The differential analyser. A new machine for solving differential equations. J.
Franklin Inst. 212, 447-488.
Capra, F. (1997). The web of life - a new scientific understanding of living systems. Anchor. ISBN-13:
978-0385476768.
Cardon, A., Lesage, F. (1998). Toward Adaptive Information Systems: considering concern and
intentionality. Proc. of the Eleventh Workshop on Knowledge Acquisition, Modeling and Management,
Banff, Alberta, Canada, 18-23 April, 1998. http://ksi.cpsc.ucalgary.ca/KAW/KAW98/cardon/.
Chu, D. and Ho, W. K. 2006. A Category Theoretical Argument against the Possibility of Artificial
Life: Robert Rosen's Central Proof Revisited. Artif. Life 12, 1 (Jan. 2006), 117-134. DOI=
http://dx.doi.org/10.1162/106454606775186392.
Church, A. (1932). ‘A set of Postulates for the Foundation of Logic’. Annals of Mathematics, second
series, 33, 346-366.
-----. (1936a). ‘An Unsolvable Problem of Elementary Number Theory’. American Journal of
Mathematics, 58, 345-363.
-----. (1936b). ‘A Note on the Entscheidungsproblem’. Journal of Symbolic Logic, 1, 40-41.
-----. (1937a). Review of Turing 1936. Journal of Symbolic Logic, 2, 42-43.
-----. (1937b). Review of Post 1936. Journal of Symbolic Logic, 2, 43.
-----. (1941). The Calculi of Lambda-Conversion. Princeton: Princeton University Press.
Codd, E. F. (1968). Cellular Automata, Academic Press, New York, ISBN: 0121788504.
Copeland, J. (2004), Hypercomputation: philosophical issues, Theoretical Computer Science, Vol. 317
Nr. 1-3, 251-267, June 4, 2004.
Corliss, W. R. (1988). Morphic resonance in silicon chips. Science Frontiers Online, No. 57: May-Jun.
1988. URL: http://www.science-frontiers.com/sf057/sf057g17.htm.
Crick, F. (1988). What mad pursuit; a personal view of scientific discovery. Basic Books, New York,
N.Y. Reprint edition (June 1990). ISBN-10: 0465091385, ISBN-13: 978-0465091386.
-----. (1994). The astonishing hypothesis: The scientific search for the soul. Touchstone: Simon &
Shuster. New York. ISBN 0-684-80158-2.
de Bono, E. (1967). The use of lateral thinking. Penguin Books. London. England.
di Fenicio, P. S., Dittrich, P. (2002). Artificial chemistry's global dynamic. Movements in the lattice of
organization. In: Journal of Three Dimensional Images, 16(4):160-163.
URL: http://www.informatik.uni-jena.de/~dittrich//p/SD2002.pdf.
Dobson, S., Denazis, S., Fernandez, A., Gaiti, D., Gelenbe, E. (2006). A Survey of autonomous
communications. ACM Trans. on Autonomous and Adaptive Systems, Vol. 1, No. 2, Dec. 2006, 223-259. URL: http://www.simondobson.org/files/personal/dict/softcopy/ac-survey-06.pdf.
Drexler K. E. (1986). Engines of Creation, Reading. Anchor Books, New York. ISBN 0-385-19973-2.
URL: http://www.e-drexler.com/d/06/00/EOC/EOC_Table_of_Contents.html.
-----. (1992). Nanosystems: Molecular Machinery, Manufacturing and Computation. ISBN 0-471-57518-6. URL: http://e-drexler.com/d/06/00/Nanosystems/toc.html.
Dyer, M. G. (1995). Towards synthesizing artificial neural networks that exhibit cooperative
intelligent behaviour: some open issues in artificial life. In: C. Langton (Ed.), Artificial Life, 111-134,
MIT Press, ISBN 0-262-12189-1.
Dyson, F. J. (1970). The twenty-first century, Vanuxem Lecture delivered at Princeton University, 26
February 1970.
-----. (1979) Disturbing the Universe. Reading. Harper & Row, New York N.Y., ISBN 0-06-011108-9.
Eberbach E. (2000). Expressiveness of $-Calculus: What Matters?, in: M. Klopotek, M. Michalewicz,
S. T. Wierzchon (Eds.). Advances in Soft Computing, Proc. of the 9th Intern. Symp. on Intelligent
Information Systems IIS'2000, Bystra, Poland, Physica-Verlag, 2000, 145-157.
URL: http://www.cis.umassd.edu/~eeberbach/papers/iis2000.ps.
-----. (2001). $-Calculus Bounded Rationality = Process Algebra + Anytime Algorithms, in: J. C.
Misra (Ed.) Applicable Mathematics: Its Perspectives and Challenges, Narosa Publishing House, New
Delhi, Mumbai, Calcutta, 2001, 213-220.
URL: http://www.cis.umassd.edu/~eeberbach/papers/ebericrams.ps.
-----. (2005). Toward a Theory of Evolutionary Computation, BioSystems, vol.82, no.1, 2005, 1-19.
http://www.cis.umassd.edu/~eeberbach/papers/TowardTheoryEC.pdf.
Eberbach E., Wegner P. (2003). Beyond Turing Machines, Bulletin of the European Association for
Theoretical Computer Science (EATCS Bulletin), 81, Oct. 2003, 279-304. URL:
http://www.cis.umassd.edu/~eeberbach/papers/BeyondTM.pdf.
Easton, T. A. (2006). Beyond the algorithmization of sciences. Comm. ACM, 31-33. Vol. 49. No. 5.
May 2006.
Edelman G., Tononi G., (2000) A universe of consciousness: how matter becomes imagination. Basic
Books. ISBN 0-465-01377-5.
Etesi, G., Németi, I. (2002) Non-Turing computations via Malament-Hogarth space-times. Int.
J.Theor. Phys. 41, 341-370. URL: http://xxx.lanl.gov/, arXiv:gr-qc/0104023, 2002.
Feynman, R. P. (1982), Simulating physics with computers. Int. J. Theor. Phys., 21(6&7): 467-488.
-----. (1985). Quantum mechanical computers. Optics news 11, 11-20; also in Foundations of Physics,
16(6), 507-531, 1986.
Finkelstein, A., Hetherington, J., Li, L., Margoninski, O., Saffrey, P., Warner, A. (2004).
Computational challenges of systems biology. IEEE Computer, 26-33. May, 2004.
Fogel, L. J., Owens, A. J., Walsh, M. J. (1966). Artificial Intelligence through Simulated Evolution,
John Wiley. ISBN: 0471265160.
Franck, O. (1949). Die Bahn der Gestalt: im Lichte des qualitativen Dynamismus. Emil Schmidt
Söhne, Flensburg.
François, C. (1999). Systemics and cybernetics in a historical perspective. Systems Research and Behavioral Science, Syst. Res. 16, 203-219. URL: http://wwwu.uniklu.ac.at/gossimit/ifsr/francois/papers/systemics_and_cybernetics_in_a_historical_perspective.pdf.
François, C. (2006). High tech from Ancient Greece. Nature 444: 551-552. DOI: 10.1038/444551a.
Freitas, R. A. and Merkle, R. C. (2004). Kinematic Self-Replicating Machines. Reading. Georgetown,
Texas: Landes Bioscience, 7. ISBN 1-57059-690-5.
Gardner, M. (1970). The fantastic combinations of John Conway's new solitaire game 'Life'. Scientific
American, October 1970.
-----. (1971). On cellular automata, self-reproduction, the Garden of Eden and the game 'Life'.
Scientific American, February 1971.
Garzon, M., Franklin, S. (1989). Neural computability II, Proc. 3rd Int. Joint Conf. on Neural
Networks: I, 631-637.
Gödel, K. (1931). Über unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I.
Monatshefte für Mathematik und Physik 38: 173-198; also as: Gödel, K. (1934). On Undecidable
Propositions of Formal Mathematical Systems, lecture notes by Kleene and Rosser at the Institute for
Advanced Study, reprinted in Davis, M. (ed.) 1965, The Undecidable, New York: Raven Press.
Goldin, D. Q., Wegner, P. (1999). Behavior and expressiveness of persistent Turing machines.
Technical Report: CS-99-14. Brown University Providence, RI, USA.
URL: http://citeseer.ist.psu.edu/goldin99behavior.html.
Goldin D. Q. (2000). Persistent Turing machines as a model of interactive computation. Proc. of the
First International Symposium on Foundations of Information and Knowledge Systems, (FoIKS 2000),
Burg, Germany, February 14-17, 2000, 116-135.
Graca, D. S. (2004). Some recent developments on Shannon's General Purpose Analog Computer.
URL: http://citeseer.ist.psu.edu/graca04some.html.
Greene, B. (1999). The elegant universe: superstrings, hidden dimensions and the quest for the
ultimate theory. New York, W. W. Norton & Company Inc., p. 146; Reprint Vintage Books. March
2000. ISBN 0-375-70811-1.
Guthrie, K. S. (1987) The Pythagorean sourcebook and library. Phanes Press. Michigan, USA. ISBN
0-933999-51-8.
Hameroff, S. (1998). Quantum computation in brain microtubules? The Penrose-Hameroff ‘Orch OR’
model of consciousness. Philosophical Transactions Royal Society London (A). 356:1869-1896.
http://www.quantumconsciousness.org/penrose-hameroff/quantumcomputation.html.
Hegel, G. W. F. (1807). System der Wissenschaft. Erster Theil, die Phänomenologie des Geistes.
Verlag Joseph Anton Goebhardt. Bamberg/Würzburg; also in English as: Phenomenology of Spirit.
Oxford University Press, USA, February 1, 1979. ISBN-10: 0198245971; ISBN-13: 978-0198245971.
Hirsh, H. (1999). A Quantum Leap for AI, IEEE Intelligent Systems, vol. 14, no. 4, 9-16, Jul/Aug,
1999. http://doi.ieeecomputersociety.org/10.1109/MIS.1999.10014.
Hogarth, M. (1994). Non-Turing Computers and Non-Turing Computability, in D. Hull, M. Forbes,
and R. M. Burian (eds), Proc. of the Biennial Meeting of the Philosophy of Science Association (PSA),
Vol. 1. East Lansing: Philosophy of Science Association, 126-138.
URL: http://www.hypercomputation.net/download/1994a_hogarth.pdf.
Hutton, T. J. (2002). Evolvable self-replicating molecules in an artificial chemistry. Artif. Life 8, 4
(Sep. 2002), 341-356. DOI= http://dx.doi.org/10.1162/106454602321202417.
IBM (2001). Autonomic computing manifesto. URL: http://www.research.ibm.com/autonomic/.
Kafri, O. (2006). Information Theory and Thermodynamics. arXiv.org. cs.IT/0602023, 7 Feb. 2006.
URL: http://arxiv.org/ftp/cs/papers/0602/0602023.pdf.
-----. (2007). The Second Law and Informatics. arXiv.org. cs.IT/0701016, 3 Jan 2007. URL:
http://arxiv.org/ftp/cs/papers/0701/0701016.pdf.
Kahn, S., Makkena, R., McGeary, F., Decker, K., Gillis, W., Schmidt, C. (2003). A Multi-agent
system for the quantitative simulation of biological networks. In Proc. of the Second Annual
Conference on Autonomous Agents and Multi-agent Systems (AAMAS 03), Melbourne, Australia.
Kawamoto, H. (1995). Autopoiesis: The Third Generation System. Seido-sha Publishers. (in Japanese).
-----. (2000). The Extension of Autopoiesis. Seido-sha Publishers. (in Japanese). ISBN4-7917-5807-2.
-----. (2003). The Mystery of Nagi’s Ryoanji: Arakawa and Gins and Autopoiesis. INTERFACES
journal, no 21/22, vol. 1. Worcester MA and Paris: Holy Cross College and Université of Paris 7: 185101.
Keane, J. (2005). Practice-as-research and the "Realization of Living", In: Proc. of the 2005 SPIN
conference ("Speculation and Innovation: applying practice led research in the creative industries),
URL: www.speculation2005.qut.edu.au/papers/Keane.pdf.
Kephart, J. O., Chess, D. M. (2003). The vision of autonomic computing. IEEE Computer, 41-50,
January, 2003.
Kieu, T. D. (2002). Computing the Noncomputable. CoRR, quant-ph/0203034; also in: Contemporary
Physics 44 (2003), 51-77.
Kitano, H. (2002). Systems biology – a brief overview. Science 295: 1662-1664.
Kneer, G., Nassehi, A. (1993). Niklas Luhmanns Theorie Sozialer Systeme. Wilhelm Fink Verlag.
Koestler, A. (1967). The ghost in the machine. New York, NY. Macmillan Co. ISBN-10: 0090838807;
Penguin, Reprint edition (June 5, 1990). ISBN-13: 978-0140191929.
LaForte, G., Hayes, P. J., Ford, K. M. (1998). Why Gödel's Theorem Cannot Refute
Computationalism. Artificial Intelligence, vol. 104, no. 1-2, 265-286.
URL: http://citeseer.ist.psu.edu/laforte98why.html.
Letelier, J.C., Soto-Andrade, J., Abarzua, F. G., Cornish-Bowden, A., Cardenas, M. L. (2004).
Metabolic closure in (M,R) systems. In Proc. 9th Int. Conf. Simulation and Synthesis of Living Systems
(ALIFE9), 450–461.
Lloyd, S., Braunstein, S. L. (1999). Quantum computation over continuous variables. Phys. Rev.
Letters, 1784-1787, 22. Feb. 1999.
Lucas, J. R. (1961). Minds, machines, and Gödel, Philosophy 36 (1961) 120-124.
MacLennan, B. J. (1990). Field Computation: a theoretical framework for massively parallel analog
computation. Technical Report. UMI Order Number: UT-CS-90-100., University of Tennessee. URL:
http://citeseer.ist.psu.edu/maclennan91field.html.
-----. (1994). Continuous computation and the emergence of the discrete. Technical Report. UMI
Order Number: UT-CS-94-227, University of Tennessee.
-----. (1999). Field computation in natural and artificial intelligence.
URL: http://citeseer.ist.psu.edu/35986.html.
-----. (2000). An overview of field computation. URL: http://citeseer.ist.psu.edu/258064.html.
-----. (2003a), Transcending Turing computability, Minds and Machines 13: 3-22.
URL: http://citeseer.ist.psu.edu/maclennan01transcending.html.
-----. (2003b), Continuous information representation and processing in natural and artificial neural
networks, Tech. Report. UT-CS-03-508, Dept. Comp. Sci., Univ. Tennessee, Knoxville.
http://www.cs.utk.edu/~mclennan.
-----. (2004). Natural computation and non-Turing models of computation. Theor. Comput. Sci. 317, issues 1-3 (4 Jun. 2004), 115-145. DOI= http://dx.doi.org/10.1016/j.tcs.2003.12.008; also as Technical Report UT-CS-03-509, University of Tennessee. URL: http://www.cs.utk.edu/~mclennan/papers.html.
-----. (2005). The Nature of Computation — Computation in Nature. Invited paper. Workshop on
Natural Processes and New Models of Computation, University of Bologna, Italy, June 2005.
-----. (2006). Super-Turing or Non-Turing? Invited presentation. Workshop Future Trends in
Hypercomputation (Trends’06), Sheffield, UK, 11-13 Sept. 2006.
Maes, P. (1995). Modeling adaptive autonomous agents. In: C. Langton (Ed.), Artificial Life, 135-162,
MIT Press, ISBN 0-262-12189-1.
Matsumaru, N., di Fenizio, P. S., Centler, F., Dittrich, P. (2006). On the evolution of chemical
organizations. In: S. Artmann, P. Dittrich (Eds.), Proc. of 7th German Workshop on Artificial Life
(GWAL-7), 135-146, July 26-28, 2006, Jena Germany, IOS Press, ISBN 1-58603-644-0.
Maturana, H., Varela, F. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel Publishing, The Netherlands.
Matveev, A. S., Savkin, A. V. (2004). An analogue of Shannon information theory for networked
control systems: state estimation via a noisy discrete channel. Proc. of 43rd IEEE Conference on
Decision and Control, Dec. 14-17, 2004, Atlantis, Bahamas. 4485-4490.
URL: http://ieeexplore.ieee.org/iel5/9774/30838/01429457.pdf.
McCulloch, W., Pitts, W. (1943). A logical Calculus of Ideas Immanent in Nervous Activity. Bull. Math. Biophys., 5, 115-133.
McMullin, B. (1997a). SCL: an artificial chemistry in swarm. Technical Report: bmcm9702. Dublin
City University,School of Electronic Engineering; Working Paper 97-01-002, Santa Fe Institute, URL:
http://www.eeng.dcu.ie/~alife/bmcm9702/.
-----. (1997b). Computational Autopoiesis: The Original Algorithm. Working paper 97-01-001, Santa Fe Institute, Santa Fe, NM 87501, USA. URL: http://www.santafe.edu/sfi/publications/wpabstract/199701001.
-----. (2000a). John von Neumann and the evolutionary growth of complexity: Looking backwards,
looking forwards. Artificial Life, 6(4), 347–361.
-----. (2000b). Some remarks on autocatalysis and autopoiesis. Annals of the New York Academy of
Sciences, 901, 163–174. http://www.eeng.dcu.ie/~alife/bmcm9901/.
-----. (2004). Thirty years of computational autopoiesis: a review. Artificial Life 10, 3 (Jun. 2004), 277-295. DOI= http://dx.doi.org/10.1162/1064546041255548.
McMullin, B., Varela, F. (1997). Rediscovering computational autopoiesis. In: P. Husbands, I.
Harvey, (Eds.), Proc. of the Fourth European Conference on Artificial Life (ECAL-97), Brighton, UK,
July 1997. Cambridge, MA: MIT Press. URL: http://www.eeng.dcu.ie/~alife/bmcm-ecal97/.
McMullin, B., Groß, D. (2001). Towards the implementation of evolving autopoietic artificial agents.
Proc. of the 6th European Conference on Advances in Artificial Life, 440–443. New York: Springer-Verlag. URL: http://www.eeng.dcu.ie/~alife/bmcm-ecal-2001/bmcm-ecal-2001.pdf.
Mendes, P. (1993) GEPASI: a software package for modelling the dynamics, steady states and control
of biochemical and other systems. Comput. Appl. Biosci. 9, 563-571, 1993.
Mesarovic, M. D., Sreenath S.W., Keene, J. D. (2004). Search for organizing principles: understanding
in systems biology. Systems Biology 1: 19-27.
Miller, J. G. (1978). Living systems. McGraw-Hill, New York, N.Y. ISBN-13: 0-07-042015-7.
Milner, R (1991). The Polyadic pi-Calculus: a Tutorial. URL: http://citeseer.ist.psu.edu/19489.html.
Miorandi, D. (2006). BIONETS: From pervasive computing environments to the Internet of the future.
FET Workshop/Brainstorming on Internet of the Future. IIT, Montreal, 29 Sept. 2006.
URL: http://www.iitelecom.com/fileadmin/files/miorandi-BIONETS.pdf.
Milner, R. (2004). Communicating and Mobile Systems: the Pi-Calculus. Cambridge University Press; Nov. 17, 2004. 1st Edition, ISBN-13: 978-0521643207.
Mitchell, B. (1965). Theory of categories. Academic Press; ISBN 0124992501.
Moore, E. F. (1956a). Artificial Living Plants, Scientific American 195 (Oct 1956):118-126.
-----. (1956b). Gedanken-experiments on Sequential Machines, Automata Studies, Annals of
Mathematical Studies, no. 34, Princeton University Press, Princeton, N. J., 129-153.
-----. (1962). Machine models of self-reproduction, Proc. of Symposia in Applied Mathematics, vol.
14, 17-33. The American Mathematical Society.
Morita, K., Imai, K. (1995). Self-reproduction in a reversible cellular space, Proc. of Int.
Workshop on Machines and Computation, Paris, March 29-31, 1995.
Natschläger, T., Maass, W. (1999). Fast analog computation in networks of spiking neurons using
unreliable synapses. Proc. of European Symposium on Artificial Neural Networks (ESANN'99),
Bruges, Belgium, 21-23 April, 1999. 417-422. ISBN 2-600049-9-X.
URL: www.dice.ucl.ac.be/Proceedings/esann/esannpdf/es1999-252.pdf.
Nielsen M., Chuang, I. (2000). Quantum Computation and Quantum Information. Cambridge
University Press. ISBN 0-521-63503-9.
Nomura, T. (1997). An attempt for description of quasi-autopoietic systems using metabolism-repair
systems, http://citeseer.ist.psu.edu/nomura97attempt.html.
-----. (2002). Formal description of autopoiesis for analytic models of life and social systems. Proc. of
the Eighth International Conference on Artificial Life, 15–18. Cambridge, MA: MIT Press.
-----. (2006). Category theoretical formalization of autopoiesis from perspective of distinction between
organization and structure. Proc. of 7th German Workshop on Artificial Life (GWAL-7), July 26-28,
2006, Jena Germany, IOS Press, ISBN 1-58603-644-0.
-----. (2007). Category theoretical distinction between autopoiesis and (M,R) systems. Proc. of AB’07
(Second International Conference on Algebraic Biology), RISC, Castle of Hagenberg, July 2-4,
Austria (submitted).
Nowak, M. A. (2006). Evolutionary dynamics. The Belknap Press of Harvard University Press. Cambridge, MA and London, England. ISBN-13: 978-0-674-02338-3. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve\&db=pubmed\&dopt=Abstract\&list_uids=16299766.
O'Malley, M. A., Dupré, J. (2005). Fundamental issues in systems biology. BioEssays, Vol. 27, Nr.
12, 1270-1276. Wiley-Liss, New York, NY. ISSN 0265-9247.
Oltvai, Z. N., Barabasi, A.-L. (2002). Life’s complexity pyramid. Science. 298: 763-764.
Ono, N., Ikegami, T. (1999). Model of self-replicating cell capable of self-maintenance. In D.
Floreano, J. Nicoud, & F. Mondada, (Eds.), Proc. of the 5th European Conference on Artificial Life
(ECAL'99), 399–406.
-----. (2000). Self-maintenance and self-reproduction in an abstract cell model. Journal of Theoretical
Biology, 206, 243–253.
Ord, T. (2002). Hypercomputation: computing more than the Turing machine. (2002-09-25)
oai:arXiv.org:math/0209332.
Oraevsky, A. N. (2000). On quantum computers. Quantum Electronics, 30(5), 457-458. DOI
10.1070/QE2000v030n05ABEH001742.
Penrose, R. (1989). The emperor's new mind. Oxford University Press, New York, 1989.
-----. (1996). Shadows of the mind, a search for the missing science of consciousness. Oxford
University Press, New York, 1994; also in Vintage, 1995. ISBN 0 09 958211 2.
-----. (2004). The road to reality. Vintage Books, London, 2004, ISBN 0-099-44068-7.
Pitts, W. (1943). The Linear Theory of Neuron Networks. Bull. Math. Biophys., 5, 23-31.
Putnam, H. (1995). Review of “Shadows of the Mind“. Bulletin of the American Mathematical Society
32. 370-373.
Rashevsky, N. (1940). Advances and applications of mathematical biology. Univ. of Chicago Press.
-----. (1948). Mathematical Biophysics. The University of Chicago Press, Chicago, IL, USA. ISBN
0486605752.
-----. (1954). Topology and life: In search of general mathematical principles in biology and sociology,
Bull. Math. Biophys. 16: 317-348. Springer-Verlag. ISSN: 0092-8240 (Print) 1522-9602 (Online).
-----. (1960). Mathematical Biophysics: Physico-Mathematical Foundations of Biology, Volume 2,
(3rd ed.) Dover Publications; ISBN 0486605752.
-----. (1961). Mathematical Principles in Biology and Their Applications. Charles C. Thomas.
Springfield, IL, USA. ISBN 039801552X.
-----. (1965). Models and mathematical principles in biology. In: Waterman/Morowitz (Eds.),
Theoretical and mathematical biology. 36-53. New York, Blaisdell.
Rasmussen, S., Baas, N., Mayer, B., Nilsson, M., and Oleson, M. (2001). Ansatz for dynamical
hierarchies. Artificial Life, 7(4), 329–354.
Ray, T. S. (1991). An approach to the synthesis of life. In: C. Langton, C. Taylor, J. D. Farmer, S.
Rasmussen (Eds.), Artificial life II, Santa Fe Institute studies in the sciences of complexity, Vol. X,
371-408, Addison-Wesley.
-----. (1995). An evolutionary approach to synthetic biology. Zen and the art of creating life. In: C.
Langton (Ed.), Artificial Life, 179-209, MIT Press, ISBN 0-262-12189-1.
Rosen, R. (1958a). A relational theory of biological systems. Bull. Math. Biophys., 20, 245-260.
-----. (1958b). The representation of biological systems from the standpoint of the theory of categories.
Bull. Math. Biophys., 20, 317-341.
-----. (1959). A relational theory of biological systems II, Bull. Math. Biophys. 21:109-128.
-----. (1961). A relational theory of the structural changes induced in biological systems by alterations
in environment, Bull. Math. Biophys. 23:165-171.
-----. (1964). Abstract Biological Systems as Sequential Machines, Bull. Math. Biophys., 26: 103-111;
239-246; 27: 11-14; 28: 141-148.
-----. (1968). On Analogous Systems. Bull. Math. Biophys., 30: 481-492.
-----. (1972). Some Relational Cell Models: The Metabolism-Repair Systems, In: Foundations of
mathematical biology, Vol. 2. Academic Press, Ch. 4:217-253.
-----. (1978). Fundamentals of measurement and representation of natural systems. New York:
Elsevier North-Holland. ISBN-13: 978-0444002617.
-----. (1991). Life itself. New York: Columbia University Press, ISBN: 0-231-07565-0.
-----. (1999). Essays on life itself. New York: Columbia University Press, ISBN: 0-231-10510-X.
Ruiz-del-Solar, J., Köppen, M. (1999). Autopoiesis and Image Processing II: Autopoietic-agents for
Texture Analysis. URL: http://citeseer.ist.psu.edu/ruiz-del-solar99autopoiesis.html.
Sacks, O. (1995). Scotoma: forgetting and neglect in science. In: Silvers, R.B. (Ed.), Hidden histories
of science. The New York Review of Books. ISBN 0-94030322-03-X.
Schrödinger, E. (1944). What is life? The physical aspect of the living cell. Cambridge University
Press, Cambridge. ISBN-10: 0521427088; ISBN-13: 978-0521427081.
Schuster, P. (1995). Extended molecular evolutionary biology: artificial life bridging the gap between
chemistry and biology. In: C. Langton (Ed.), Artificial Life, 39-60, MIT Press, ISBN 0-262-12189-1.
Shadboldt, N. (2004). Nature-Inspired Computing, Editorial. IEEE Intelligent Systems. Jan./Feb. 2004,
2-3, URL: www.computer.org/intelligent.
Shagrir, O., Pitowsky, I. (2003). Physical hypercomputation and the Church-Turing thesis. Minds and
Machines, 13, 87-101. Kluwer. URL: http://edelstein.huji.ac.il/staff/pitowsky/papers/Paper%2036.pdf.
Shannon, C. E. (1941). Mathematical theory of the differential analyser. J. Math. Phys. 20, MIT, 337-354.
-----. (1948). A mathematical theory of communication. Bell System Technical Journal, vol. 27, 379-423 and 623-656, July and October 1948.
URL: http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf.
-----. (1956). A universal Turing machine with two internal states. In: Automata Studies. McCarthy, J.,
Shannon, C. E., (Eds.). Princeton University Press. ISBN: 0691079161.
Sheldrake, R. (1981). A new science of life: the hypothesis of morphic resonance. Blond & Briggs,
London, 1981. ISBN 0-89281-535-3.
Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457.
URL: http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html.
Siegelmann, H. T. (1993). Foundations of Recurrent Neural Networks. Ph.D. Thesis.
URL: http://citeseer.ist.psu.edu/173827.html.
-----. (1995). Computation Beyond the Turing Limit. Science, 268, 545-548.
-----. (1996a). The Simple Dynamics of Super Turing Theories. Theoretical Computer Science, 168 (2, part special issue on UMC), 20 Nov. 1996, 461-472. URL: http://dx.doi.org/doi:10.1016/S0304-3975(96)00087-4.
-----. (1996b). Analog Computational Power, Science, 271(19), January 1996, 373.
-----. (1997). Neural networks and analog computation: beyond the Turing limit, Birkhauser Boston
Inc., Cambridge, MA. ISBN: 978-0-8176-3949-5.
-----. (1999) Stochastic analog networks and computational complexity. Journal of Complexity, 15(4),
451-475.
Siegelmann, H. T., Ben-Hur, A., Fishman, S. (1999). Computational Complexity for Continuous Time
Dynamics. Physical Review Letters, 83(7), 1463-1466.
-----. (2002). Neural automata and analog computational complexity. In: M.A. Arbib (Ed.), The
Handbook of Brain Theory and Neural Networks, Cambridge, MA, The MIT Press, 2nd edition, 2002.
ISBN: 0262011972.
-----. (2003). Neural and Super-Turing Computing. Minds and Machines, 13(1), 103-114.
Siegelmann, H. T., Fishman, S. (1998). Computation by Dynamical Systems. Physica D 120, 214-235.
Siegelmann, H. T., Sontag, E. D. (1992). On The Computational Power of Neural Nets, Proc. 5th
Annual ACM Workshop on Computational Learning Theory, 440-449, Pittsburgh, July 1992; also in:
Journal of Computer and System Sciences, 50 (1), 132-150, Feb. 1995. URL:
http://citeseer.ist.psu.edu/siegelmann91computational.html.
-----. (1994). Analog computation via neural networks, Theoretical Computer Science, vol. 131, Nr.2,
331-360, Sept. 12, 1994.
Simeonov, P. L. (1998). The SmartNet Architecture or Pushing Networks beyond Intelligence, Proc.
of ICIN'98 (5th International Conference on Intelligent Networks), 12-15 May, Bordeaux, France.
-----. (1999a). Towards a Vivid Nomadic Intelligence with the SmartNet Media Architecture, Proc. of
IC-AI'99 (1999 International Conference on Artificial Intelligence), June 28 - July 1, 1999, Las Vegas,
Nevada, USA, CSREA Press, Vol. I, 214-219, ISBN: 1-892512-16-5.
-----. (1999b). On Using Nomadic Services for Distributed Intelligence, Proc. of ICCCN'99 (Eighth
IEEE International Conference on Computer Communications and Networks), October 11-13,
1999, Boston, MA, USA, IEEE Press, 228-231, ISBN: 0-7803-5794-9; also in: Microprocessors and
Microsystems, Vol. 24, No. 6, 15 October 2000, Elsevier, 291-297.
-----. (1999c). The Wandering Logic of Intelligence: Or Yet Another View on Nomadic
Communications, Proc. of SMARTNET'99, 22-26 November 1999, AIT, Pathumthani, Thailand,
Kluwer Academic Publishers, The Netherlands, 293-306, ISBN: 0-7923-8691-4.
URL: http://portal.acm.org/citation.cfm?id=647018.713598.
-----. (2002a). The Viator Approach: About Four Principles of Autopoietic Growth On the Way to
Hyperactive Network Architectures, Proc. of FTPDS’02 at the 2002 IEEE Int. Symposium on Parallel
& Distributed Processing (IPDPS’02), April 15-19, 2002, Ft. Lauderdale, FL, USA, IEEE Computer
Society, Washington, DC, USA, 320 - 327, ISBN:0-7695-1573-8.
URL: http://ieeexplore.ieee.org/iel5/7926/21854/01016528.pdf.
-----. (2002b). WARAAN: A Higher-Order Adaptive Routing Algorithm for Wireless Multimedia in
Wandering Networks, 5th IEEE International Symposium on Wireless Personal Multimedia
Communications (WPMC'2002), Oct. 27-30, 2002, Honolulu, Hawaii, USA, 1385-1389. URL:
http://www.db-thueringen.de/servlets/DerivateServlet/Derivate-6633/WPMC2002_S.pdf
-----. (2002c). The Wandering Logic Intelligence, A Hyperactive Approach to Network Evolution and
Its Application to Adaptive Mobile Multimedia Communications, Ph.D. Thesis, Technische Universität
Ilmenau, Faculty for Computer Science and Automation, Dec. 2002.
URL: http://www.db-thueringen.de/servlets/DerivateServlet/Derivate-2005/ilm1-2002000030.pdf.
Die Deutsche Bibliothek. URL: http://deposit.ddb.de/cgi-bin/dokserv?idn=974936766.
Smirnov, M. (2005). Autonomic Communication: towards network ecology. 3rd International
Workshop on Self-Adaptive and Autonomic Computing Systems (SAACS 05), Copenhagen, Denmark,
24 August 2005. URL: http://cms1.gre.ac.uk/conferences/saacs-2005/; http://www.autonomiccommunication.org/publications/.
Sornette, D. (2004). Critical phenomena in natural sciences. Chaos, fractals, selforganization and
disorder: concepts and tools. Springer-Verlag, ISBN 3-540-40754-5.
Soto-Andrade, J. and Varela, F. J. (1984). Self-reference and fixed points: a discussion and an
extension of Lawvere’s theorem. Acta Applicandae Mathematicae, 2: 1-19.
Stalbaum, B. Toward Autopoietic Database.
URL: http://www.c5corp.com/research/autopoieticdatabase.shtml.
Stannett, M. (1991). An introduction to post-Newtonian and non-Turing computation. Tech. Report CS
91-02, Dept. of Computer Science, Sheffield University, UK.
Stocker, A., Douglas, R. (1999). Computation of smooth optical flow in a feedback connected analog
network. Advances in Neural Information Processing Systems (NIPS) 11, Denver, Dec. 1998, MIT Press, 706-712.
URL: www.cns.nyu.edu/~alan/publications/conferences/NIPS98/Stocker_Douglas99.pdf.
Thom, R. (1989). Structural stability and morphogenesis. Perseus Publishing; ISBN 0201406853.
Thompson, D. W. (1917). On Growth and Form. Cambridge Univ. Press. ISBN 0521437768.
Toschek, P. E. and Wunderlich, Ch. (2001). What does an observed quantum system reveal to its
observer? Eur. Phys. Jour. D 14, 387-396.
Turing, A.M. (1937). On Computable Numbers, with an Application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society, series 2, 42 (1936-37), 230-265.
-----. (1946). Proposal for Development in the Mathematics Division of an Automatic Computing
Engine (ACE). In: Carpenter, B.E., Doran, R.W. (Eds.) 1986. A.M. Turing's ACE Report of 1946 and
Other Papers. The MIT Press, Cambridge, Mass.
-----. (1947). ‘Lecture to the London Mathematical Society on 20 February 1947’. In Carpenter, B.E.,
Doran, R.W. (eds) 1986. A.M. Turing's ACE Report of 1946 and Other Papers. The MIT Press.
Cambridge, Mass.
-----. (1948). Intelligent Machinery. National Physical Laboratory Report. In: Meltzer, B., Michie, D.
(Eds.) 1969. Machine Intelligence 5. Edinburgh: Edinburgh University Press. (Digital facsimile
viewable at http://www.AlanTuring.net/intelligent_machinery.).
-----. (1950a). Computing Machinery and Intelligence. Mind, 59, 433-460.
-----. (1950b). Programmers' Handbook for Manchester Electronic Computer. University of
Manchester Computing Laboratory. (Digital facsimile viewable at
http://www.AlanTuring.net/programmers_handbook.).
-----. (1951a). Can Digital Computers Think?. In: Copeland, B.J. (Ed.) 1999. A Lecture and Two Radio
Broadcasts on Machine Intelligence by Alan Turing; also in: Furukawa, K., Michie, D., Muggleton, S.
(Eds.) 1999. Machine Intelligence 15. Oxford University Press. Oxford.
-----. (1951b) (circa). Intelligent Machinery, A Heretical Theory. In: Copeland, B.J. (Ed.) 1999. A
Lecture and Two Radio Broadcasts on Machine Intelligence by Alan Turing; also in: Furukawa, K.,
Michie, D., Muggleton, S. (Eds.) 1999. Machine Intelligence 15. Oxford University Press, Oxford.
van Leeuwen, J., Wiedermann, J. (2000a). The Turing Machine Paradigm in Contemporary
Computing. Techn. Report UU-CS-2000-33. Dept of Computer Science, Utrecht University, 2000;
also in: Engquist, B. and Schmid, W. (Eds.), Mathematics Unlimited -- 2001 and Beyond, Springer-Verlag, 2001. URL: http://citeseer.ist.psu.edu/vanleeuwen00turing.html.
-----. (2000b). Breaking the Turing barrier: the case of the Internet. Tech. Report, Inst. of Computer
Science, Academy of Sciences of the Czech Rep., Prague, 2000.
Varela, F., Maturana, H., Uribe, R. (1974). Autopoiesis: The organisation of living systems, its
characterization and a model. BioSystems 5, 187–196.
Varela, F., Letelier, J. C. (1988). Morphic resonance in silicon chips: an experimental test of the
hypothesis of formative causation. Skeptical Inquirer, 12:298-300.
von Neumann, John (1966). A. Burks (Ed.). The Theory of Self-reproducing Automata. Urbana, IL,
Univ. of Illinois Press.
Wagner, A. (2005). Robustness and Evolvability in Living Systems. Princeton Studies in
Complexity. Princeton University Press (July 5, 2005). ISBN-13: 978-0691122403.
Webb, K. and White, T. (2004). Combining analysis and synthesis in a model of a biological cell.
Proc. of 2004 ACM Symposium on Applied Computing, 185-190.
Wegner, P. (1997). Why interaction is more powerful than algorithms. Communications of the ACM.
40 (5), 80-91. May 1997, ISSN:0001-0782.
Wegner, P. (1998). Interactive foundations of computing. Theor. Comput. Sci. 192(2): 315-351.
Wepiwé, G., Simeonov, P. L. (2005). A communication network, a method of routing data packets in
such communication network and a method of locating and securing data of a desired resource in such
communication network. (A logical overlay network construction in large-scale dynamic distributed
systems based on concentric multi-ring topology and resource lookup with shortest path length). Int.
Patent WO/2007/014745 (PCT/EP2006/007582). Priority Date: 28.07.2005.
URL: http://www.wipo.int/pctdb/en/wo.jsp?wo=2007014745.
Wepiwé, G., Simeonov, P. L. (2006). HiPeer: A Highly Reliable P2P System, IEICE Trans.
Fundamentals, Special Issue on Information and Systems on Parallel and Distributed Computing and
Networking, Vol. E89-D, No. 2, Feb. 2006, 570-580, doi:10.1093/ietisy/e89-d.2.570,
URL: http://ietisy.oxfordjournals.org/cgi/content/refs/E89-D/2/570.
Westerhoff, H. V., Palsson, B.O. (2004). The evolution of molecular biology into systems biology.
Nature Biotechnology 22: 1249-1252.
Whitehead, A. N. (1929). Process and Reality: An Essay in Cosmology. New York, Free Press;
Corrected edition (July 1, 1979). The Free Press. ISBN-13: 978-0029345702.
Whitehead, A. N. (1933). Adventures of Ideas, New York, Free Press. 1st Free Press Pbk. Ed edition
(January 1, 1967). ISBN-13: 978-0029351703.
Wichert A. (2000). Associative computation. PhD thesis, Faculty of Computer Science, University of
Ulm, Germany, URL: http://vts.uni-ulm.de/query/longview.meta.asp?document_id=533.
Winograd, T. (1970). A simple algorithm for self-replication. A.I. Memo 197, Project MAC,
MIT.
Wolfram, S. (2002). A new kind of science. Wolfram Media. ISBN 1-57955-008-8.
Wolkenhauer, O. (2001). Systems biology: The reincarnation of systems theory applied in biology?,
Briefings in Bioinformatics, 2(3):258-270; doi:10.1093/bib/2.3.258.
URL: http://bib.oxfordjournals.org/cgi/reprint/2/3/258.
Woods, D. (2005). Computational Complexity of an Optical Model of Computation. URL:
http://citeseer.ist.psu.edu/woods05computational.html.
Zeleny, M., Pierre, N. A. (1975). Simulation models of autopoietic systems. In: Proc. of the Summer
Computer Simulation Conference, 831-842, July 21-23 1975, San Francisco.
Zeleny, M. (1978). APL-AUTOPOIESIS: Experiments in self-organization of complexity. In:
Progress in cybernetics and systems research, Vol. III, 65-84. Hemisphere, Washington, DC.
-----. (1980). Autopoiesis: A paradigm lost. In: M. Zeleny (Ed.), Autopoiesis, dissipative structures and
spontaneous social orders, 3-43. Westview Press. AAAS Selected Symposium 55. Boulder, CO, USA.
Zomaya, A. Y., Ed. (2006). Handbook of nature-inspired and innovative computing. Springer-Verlag,
ISBN-10: 0-387-40532-1, 2006.