International Journal of Astrobiology 7 (3 & 4): 293–300 (2008) Printed in the United Kingdom
doi:10.1017/S1473550408004308 © 2008 Cambridge University Press
Why did life emerge?
Arto Annila1 and Erkki Annila2
1 Department of Physics, Institute of Biotechnology and Department of Biosciences, POB 64, University of Helsinki, Finland
e-mail: [email protected]
2 Finnish Forest Research Institute, Finland
Abstract: Many mechanisms, functions and structures of life have been unraveled. However, the
fundamental driving force that propelled chemical evolution and led to life has remained obscure. The
second law of thermodynamics, written as an equation of motion, reveals that elemental abiotic matter
evolves from the equilibrium via chemical reactions that couple to external energy towards complex
biotic non-equilibrium systems. Each time a new mechanism of energy transduction emerges, e.g., by
random variation in syntheses, evolution proceeds by punctuation and settles to a stasis when the
accessed free energy has been consumed. The evolutionary course towards an increasingly larger energy
transduction system accumulates a diversity of energy transduction mechanisms, i.e. species. The rate of
entropy increase is identified as the fitness criterion among the diverse mechanisms, which places the
theory of evolution by natural selection on the fundamental thermodynamic principle with no
demarcation line between inanimate and animate.
Received 6 April 2008, accepted 26 September 2008
Key words: energy transduction, entropy, evolution, fitness criterion, free energy, natural selection.
Introduction
The theory of evolution by natural selection (Darwin
1859) pictures how biodiversity (Wilson 1992) has accumulated.
Fossil records and similarity among biological macromolecules are rationalized by projecting back in time from
the contemporary branches of life along paths that merge
over and over again into common ancestors (Woese 1998).
When descending to the epoch of chemical evolution (Oparin
1952; Miller 1953), devoid of genetic material and apparent
mechanisms of replication, it is unclear how natural selection
operates (Orgel 1998; Fry 2000; Gould 2002) on matter and
yields functional structures and hierarchical organizations
that are characteristics of life.
The basic question, why matter evolved from inanimate
to animate, is addressed in this study using the theory of
evolution by natural selection that was recently formulated in
thermodynamic terms (Sharma & Annila 2007). In nature
many phenomena follow the second law of thermodynamics,
also known as the principle of increasing entropy (Alonso &
Finn 1983). The law, as it was given by Carnot, is simple: an energy difference is a motive force (Carnot 1977). For example,
heat flows from hot to cold and molecules diffuse from high
to low concentration. Energy also flows in chemical reactions
that transform compounds to other compounds to diminish
chemical potential energy differences. Eventually a stationary
state without energy gradients is reached. For example, the
chemical equilibrium (Gibbs 1993–1994 ; Atkins 1998) corresponds to the most probable distribution of reactants and
products. In general, all processes that level potential energy
gradients are referred to as natural processes (Kondepudi &
Prigogine 1998).
According to thermodynamics, evolution in its entirety is
also a natural process driven by the universal tendency to
diminish differences among energy densities. Although the
quest for higher entropy has for a long time been understood
as the primus motor of evolution and as the emergent motive
for orderly mechanisms and hierarchical organizations
(Lotka 1925; Salthe 1985 ; Brooks & Wiley 1986; Ulanowicz
& Hannon 1987 ; Weber et al. 1988; Schneider & Kay 1994;
Chaisson 1998; Swenson 1998 ; Lorenz 2002 ; Dewar 2003;
Salthe 2004 ; Lineweaver 2005), it nevertheless seems that the
second law has not acquired unanimous recognition as the
profound principle that also governs processes that we refer
to as living. The physical basis of the entropy law was recently
strengthened when it was derived from probability considerations and formulated as an equation of motion (Sharma
& Annila 2007). Now it is possible to deduce unmistakably
where a system under an influx of external energy is on its
way. In particular, it can be understood what is happening
when external energy from the Sun couples to numerous
chemical reactions that distribute matter on Earth.
The recently derived equation of evolution (Sharma &
Annila 2007) has already been used to account for the
emergence of chirality consensus and other standards of life
(Jaakkola et al. 2008b), as well as to tackle the puzzle of large
amounts of non-expressed DNA in eukaryotes (Jaakkola
et al. 2008a). Furthermore, skewed population distributions
are ubiquitous characteristics of plant and animal populations in the same way as gene lengths and their cumulative
curves, e.g., species–area relationships have been shown to be
consequences of the second law (Grönholm & Annila 2007;
Würtz & Annila 2008). The global homeostatic characteristics that were articulated by the Gaia theory (Lovelock
1988) have also been placed on the same thermodynamic
foundation (Karnani & Annila 2008). Moreover, the ubiquitous imperative to disperse energy has been associated with
the principle of least action to describe flows of energy. The
flows are directed down along the steepest gradients, equivalent to the shortest paths, and flatten the manifold of energy
densities (Kaila & Annila 2008).
In this study, evolution, on all length scales and at all times,
is considered to display the ubiquitous principle of energy
dispersal. The subsequent thermodynamic analysis does
not bring forward essentially novel thoughts, but communicates the simple physical basis that underlies the earlier
reasoning about the emergence of life, the rise of complexity
and courses to hierarchical organizations. It is emphasized
that the study does not aim to expose any particular locus
or moment in time or precise primordial conditions from
which life sprang up. In fact, thermodynamics gives no special attributes to living systems but describes all matter alike as compounds, i.e. heterogeneous substances (Gibbs 1993–1994), and larger entities. To recognize energy gradients as evolutionary forces paves the way for understanding why life
emerged.
On the entropy concept
The adopted view of entropy, i.e. entropy increases when
energy gradients diminish, is briefly contrasted with other
notions associated with the entropy concept. The standpoint
is traditional thermodynamics, because an energy gradient is
understood as a motive force but the equation of motion has
been obtained from the statistical probability calculation. In
contrast, the informational entropy defined mathematically
by Shannon (1948) does not explicitly recognize probability
as a physical motive (Martin 2006). Even without explicit
energetic terms it is possible to deduce mathematically, e.g.,
using Lagrange multipliers, the maximum entropy state,
because by definition at the stationary state the energy
differences, i.e. the driving forces, have vanished. However,
when using informational entropy the evolutionary course
itself that arrives at the stationary state remains unclear.
The maximum entropy principle formulated by Jaynes
(1957) builds on the abstract informational entropy but aims
at finding the paths that lead to increasingly more probable
states. These optimal paths are associated with the steepest ascents and are found by imposing constraints. The
resulting principle of maximum entropy production for non-equilibrium stationary states (Dewar 2003) parallels the
thinking in this study. However, the imposed constraints are
not a substitute for the adopted formalism that describes
mutually interdependent entities in energetic terms. The
diminishing energy density differences will, without further
guidance, direct the course along the shortest paths that are
equivalent to the steepest descents in the energy landscape
(Kaila & Annila 2008). Furthermore, it is important to realize
that the driving forces keep changing due to the motion that,
in turn, affects the forces. In other words, the trajectory of
evolution is non-deterministic. The course of a system is not
predetermined by the initial conditions or constraints because
the system is changing irreversibly either by acquiring or
losing energy.
The adopted standpoint makes no principal distinction
between the concepts of non-equilibrium and equilibrium.
Typically systems that grow in their energy density are referred to as animate whereas those that shrink are regarded
mostly as inanimate. However, in both cases the principle
of diminishing gradients is the same. Both animate and
inanimate systems aim at stationary states governed by
the high-energy and low-energy surroundings, respectively.
Customarily the resulting high-energy animate state is referred to as the non-equilibrium whereas the low-energy
inanimate state is referred to as the equilibrium state. Here
the stationary state concept is preferred for both systems to
denote the state when there is an energy balance between the
system and its surroundings, irrespective of whether the surroundings are high or low in energy density. It is, of course,
somewhat of a subjective decision as to how one wishes to
label some entities as being parts of the system and others as
being parts of the surroundings. However, the choice is of no
consequence when using the adopted formalism. Entropy of
the system, just as entropy of its surroundings, will increase as
mutual differences in energy are levelling off.
Finally, it is emphasized that the adopted standpoint does
not associate high entropy with high disorder (Schrödinger
1948). Certainly many animate processes are driven to orderly functional structures to attain stationary states in their
high-energy surroundings just as many inanimate processes
are driven to disintegrate to disordered aggregates to attain
stationary states in their low-energy surroundings. However,
order or disorder is a consequence of energy dispersal, not an
end in itself or a motive force.
Evolution as a probable process
The consequences of thermodynamics on the emergence of
life are perhaps best exemplified by considering a primordial
pool (Darwin 1859; Miller 1953) that contains some basic
compounds. The compounds make a chemical system by reacting with each other and coupling to an external source
of energy, e.g., to high-energy radiation from the Sun. The
system is an energy transduction network that disperses
energy influx via chemical reactions among all compounds.
Obviously the particular compounds that happen to be in the
pool are very important for conceivable chemistry, but to
elucidate the general driving force that propels evolution no
presumptions are made about the ingredients. In other words,
the important mechanistic questions of how life came about
are not addressed in this study but the driving force, i.e. the
cause of why life emerged is clarified.
It is perhaps a common thought but a misconception that
chemical reactions would be random without any preferred
direction. Reactions do take the direction of decreasing free
energy, which is equivalent to increasing entropy, i.e. the
basic maxim of chemical thermodynamics. This is also the
natural direction taken during chemical evolution. The
motion down along energy gradients can be pictured as a
sequence of steps where the system moves via chemical reactions from one distribution of primordial compounds to
another in the quest for attaining a stationary state in the
high-energy influx. To learn about the probable direction of
motion, the plausible states, i.e. distributions of compounds
(entities) in numbers N_j are compared by entropy (Sharma & Annila 2007)

$$S = R \ln P = \frac{1}{T} \sum_{j=1} N_j \Big[ \sum_k \big( \mu_k + \Delta Q_{jk} - \mu_j \big) + RT \Big], \qquad (1)$$
where μ_k/RT = ln[N_k exp(G_k/RT)] denotes the chemical potential of substrates and μ_j that of products. The average energy RT is a meaningful concept when the system is sufficiently statistical (Kullback 1959). According to Eq. (1), entropy S is a logarithmic probability measure of the energy dispersal. When energy ΔQ_jk from the surroundings couples to a reaction, it adds to the substrate chemical potential μ_k and raises it by ΔQ_jk, turning the energy flow from the excited substrate potential μ_k + ΔQ_jk downhill towards the product potential μ_j and powering the endoergic reaction (μ_k + ΔQ_jk > μ_j). Without the external energy the flow would be from μ_j to μ_k, thus in the opposite, exergonic direction but also then downhill. The thermal excess of energy produced by the reaction is ultimately dissipated from the system to the cold space. Alternatively, reactions may be powered by an influx of high-μ matter (e.g., food) that is consumed in coupled exoergic reactions to drive endoergic reactions. The resulting low-μ matter excess (e.g., excrement) is discarded from the system. Thus the thermodynamic formula (Eq. (1)) speaks about mundane matters in terms of physical chemistry. The value of the general expression of entropy is that it serves to describe concisely diverse energy transduction systems at various levels of hierarchy. For a particular system, detailed knowledge of the constituents, e.g., concentrations N_j, Gibbs free energies G_j, influxes ΔQ_jk and possible jk-reactions, can be given to calculate entropy using Eq. (1).
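For illustration only, a minimal Python sketch of this bookkeeping is given below; the numbers N_j, the free energies G_j, the couplings ΔQ_jk and the choice of substrates are invented for the example and are not taken from the article.

import math

R, T = 8.314, 298.0   # gas constant (J/(mol K)) and temperature (K), assumed for the example
RT = R * T

# Invented toy system: numbers N_j, Gibbs free energies G_j (J/mol) and external
# couplings dQ[j][k] (J/mol) powering the synthesis of product j from substrate k.
N = {1: 1.0e6, 2: 5.0e4, 3: 1.0e3}
G = {1: 0.0, 2: 2.0e4, 3: 5.0e4}
dQ = {2: {1: 3.0e4}, 3: {2: 4.0e4}}

def mu(j):
    # chemical potential mu_j = RT ln[N_j exp(G_j/RT)]
    return RT * (math.log(N[j]) + G[j] / RT)

def entropy():
    # Eq. (1), restricted to products j that have a jk-reaction in this toy example:
    # S = (1/T) sum_j N_j [ sum_k (mu_k + dQ_jk - mu_j) + RT ]
    S = 0.0
    for j, substrates in dQ.items():
        drive = sum(mu(k) + q - mu(j) for k, q in substrates.items())
        S += N[j] * (drive + RT)
    return S / T

print("S =", round(entropy(), 1))

Evaluating the same function for two alternative distributions of the N_j then tells which of the two is the more probable state in the sense of Eq. (1).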
The fitness criterion
The primordial pool contains at any given moment a distribution of compounds. A reaction that turns N_k to N_j (or vice versa) will alter the distribution. The resulting distribution can be compared with the initial one in Eq. (1) to deduce whether the particular reaction changed the distribution to a more probable one. Thus, for any given initial state, it can be deduced where the chemical system is most likely to be on its way via chemical reactions. To infer the probable course of evolution, the time derivative of Eq. (1) gives the second law of thermodynamics as an equation of motion (Sharma & Annila 2007)
$$\frac{dS}{dt} = \sum_{j=1} \frac{dS}{dN_j}\frac{dN_j}{dt} = \frac{1}{T} \sum_{j=1} \frac{dN_j}{dt} \Big[ \sum_k \big( \mu_k + \Delta Q_{jk} - \mu_j \big) \Big] = \frac{1}{T} \sum_{j=1} v_j A_j \ge 0, \qquad (2)$$
where the velocity of a reaction is v_j = dN_j/dt. The notation is concise but it includes numerous chemical reactions that eventually result in biological functions. The potential energy difference that drives the reaction is also known as free energy, exergy or affinity (Kondepudi & Prigogine 1998), A_j = Σ_k(μ_k + ΔQ_jk) − μ_j. Importantly, A_j also includes the energy influx. When A_j > 0, there is free energy to increase the concentration (or population) N_j of molecular (or plant and animal) species j. When A_j < 0, then N_j is too high in relation to the other ingredients N_k of the system. Then the population N_j is bound to decrease one way or another. As long as
there are energy density differences among the constituents of
the system or energy density differences with respect to the
surroundings, the system will evolve to decrease free energy,
i.e. to increase entropy via diverse processes.
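As a purely illustrative numerical check (the numbers are invented, not from the article), consider a single step with

$$\mu_k = 50~\mathrm{kJ\,mol^{-1}}, \qquad \Delta Q_{jk} = 30~\mathrm{kJ\,mol^{-1}}, \qquad \mu_j = 70~\mathrm{kJ\,mol^{-1}} .$$

Then A_j = μ_k + ΔQ_jk − μ_j = 10 kJ mol⁻¹ > 0, so there is free energy to increase N_j; without the influx the same step has μ_k − μ_j = −20 kJ mol⁻¹ < 0 and the flow runs in the opposite, exoergic direction, exactly as described above.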
Obviously, the mere thermodynamic driving force does not
result in evolution but it also takes mechanisms to conduct
energy. Eq. (2) contains the vital kinetics that are understood
by many models of chemical evolution to be important for life
to emerge (Kacser 1960; Eigen & Schuster 1979; Peacocke
1996). The kinetic rates (Sharma & Annila 2007)
$$\frac{dN_j}{dt} = r_j \frac{A_j}{RT} = -\sum_k \frac{dN_k}{dt}, \qquad (3)$$
are proportional to the thermodynamic driving forces to
satisfy the balance equation. In other words, energy and
momentum are conserved in the reactions (Kaila & Annila
2008). The coefficient r_j > 0 depends on the mechanisms that yield N_j. According to the self-similar thermodynamic description each mechanism is a system in itself. For example, an enzyme is a catalytic mechanism that has resulted from a folding process preceded by a chemical synthesis, both evolutionary courses in themselves. The coefficient is a constant as long as the mechanism is stationary, i.e. not evolving itself further. When Eq. (3) is inserted into Eq. (2), the quadratic form makes it apparent that dS/dt ≥ 0, as spelled out after this paragraph. The familiar approximations of the kinetic equation (Eq. (3)) are the mass-action law (Waage & Guldberg 1864) and logistic equations (Verhulst 1845) that picture concentrations N_j as motive
forces and muddle energetics in variable reaction rates. As a
result of using these approximate models that do not spell out
free energy as the driving force, kinetics and thermodynamics
appear inconsistent with each other. Consequently, thermodynamics seem insufficient for outlining evolutionary courses
and various kinetic scenarios acquire additional emphasis
(Pross 2003, 2005).
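The quadratic form referred to above follows in one line when Eq. (3) is substituted into Eq. (2) (a completing step written out here for clarity):

$$\frac{dS}{dt} = \frac{1}{T}\sum_{j=1}\frac{dN_j}{dt}\,A_j = \frac{1}{T}\sum_{j=1} r_j\,\frac{A_j^{2}}{RT} \ge 0 ,$$

which is non-negative because every r_j > 0 and each affinity enters squared; dS/dt vanishes only when all A_j = 0, i.e. at the stationary state.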
The thermodynamic value of an energy transduction
mechanism is only in its ability to attain and maintain high-entropy states by energy conduction. The thermodynamic
theory is unarmed to say specifically which mechanisms
might appear but once some have emerged, their contribution
to the reduction of free energy is evaluated according to
Eq. (2). Under the energy influx from the surroundings the
rates of reactions r_j in Eq. (3) are very important because the high-entropy non-equilibrium concentrations of compounds and populations of species are constantly replenished by dissipative regeneration. Even a small advantage will accumulate
rapidly as an increased flow acts to further increase the
population of the superior transduction mechanism. This is
also known as the constructal law (Bejan 1997).
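A back-of-the-envelope caricature (invented numbers, not from the article) shows how quickly a small advantage compounds: if in each cycle of dissipative regeneration the numbers of two competing mechanisms multiply by factors proportional to their rate coefficients, then after n cycles

$$\frac{N_{\mathrm{fast}}}{N_{\mathrm{slow}}} \sim \left(\frac{r_{\mathrm{fast}}}{r_{\mathrm{slow}}}\right)^{\!n} = 1.05^{50} \approx 11.5$$

for a mere 5% rate advantage sustained over fifty cycles.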
When some novel compounds happened to appear in the
primordial system due to random variation in chemical
syntheses, some of them may have possessed some elementary
catalytic activity. Even slightly higher rates r_j provided by the emerging catalytic activity were very important to attain more probable non-equilibrium states. They allowed the energy difference between the chemical system and its high-energy surroundings (e.g., due to sunlight) to diminish faster. The dS/dt rate criterion will naturally select faster and
faster mechanisms as well as those mechanisms that recruit
more and more matter and energy from the surroundings to
the natural process. Therefore, any primordial energy transduction mechanism that was just slightly faster than its predecessor gained ground. The primitive chemical evolution
took the direction of dS/dt>0, just as the sophisticated
evolution does today. Indeed, contemporary catalysed reactions contribute to entropy by rapidly producing diverse
entities that then interact with each other within their lifetimes, i.e. they act as catalysts themselves.
According to the thermodynamics of open systems, every
entity, simple or sophisticated, is considered as a catalyst to
increase entropy, i.e. to diminish free energy. Catalysis calls
for structures. Therefore the spontaneous rise of structural
diversity is inevitably biased towards functional complexity
to attain and maintain high-entropy states. This quest to level
differences in energy by transduction underlies the notion
that evolution is progress. Once all differences in energy
densities (Σ_k(μ_k + ΔQ_jk) − μ_j) have been abolished, the system has reached a stationary state S_max and evolution dS/dt = 0
has come to its end. At this maximum-entropy stationary
state, entities keep interacting with each other but there
are no net flows of energy among them and no net fluxes
from the surroundings to the system or vice versa. Frequent
mutual interactions maintain the most probable state by
quickly abolishing emerging potential differences. The system
is stable against internal fluctuations according to the
Lyapunov stability criterion (Kondepudi & Prigogine 1998;
Strogatz 2000); however, when there are changes in the
surrounding densities-in-energy, the system has no choice
but to adapt to them, i.e. to move by abolishing the newly
appeared gradients.
Steps towards life
The primordial pool, the simple chemical system having some
abiotic substances in equilibrium numbers N_1, began to evolve when a reaction pathway that coupled external energy opened up and products N_{j>1} began to form. Then the high surrounding potential began to drain into the system as substrates transformed to products. This raised the overall chemical potential of the system towards that of the high-energy radiation. Free energy kept diminishing and entropy
continued to increase when reactions yielded more and more
products from the substrates. During the natural process the
system was lifted from the initial equilibrium state to the
non-equilibrium state by the energy influx. Nevertheless,
it is important to keep in mind that all flows of energy were
downward and still are from high-energy sources to the repositories lower in energy. According to thermodynamics,
evolution from the equilibrium to the non-equilibrium was a
likely sequence of events, not a miraculous singular event. It
is the coupling of external energy that made the evolutionary
course probable.
The reasoning that the probable course is governed by
conditions is in agreement with Le Chatelier’s principle, i.e.
the conditions determine the stationary state of a reaction.
When the external energy coupled to the reactions, the conditions were in favour of the non-equilibrium stationary state
over the equilibrium state. Conversely, when the external
energy was reduced (e.g., during night or winter), the non-equilibrium state became improbable. Then the system took a
course towards the equilibrium, e.g., by consuming established stocks and even disintegrating prior mechanisms of
energy transduction during a prolonged starvation.
Remarkably, Eq. (1) has not been known explicitly until
recently. Importantly, it shows that the non-equilibrium
state, supported by the external energy, has higher entropy
than the equilibrium state. Thus all systems attempt to move
towards a more probable state by coupling to sources of
external energy. The attempt is successful when there are
abundant and versatile ingredients to capture the energy influx. To this end carbon chemistry by its impressive number
of combinatorial choices was and still is the treasure trove.
It allowed numerous mechanisms to emerge, e.g., due to a
random variation in the flows, and to increase energy transduction further by channelling more external energy into the
system and dispersing it further within the system. Thus the
second law of thermodynamics provides the intrinsic bias
for the emergence of functional structures to conduct energy.
The primordial systems, even without genetic material and
mechanisms of replication, were subject to evolutionary
forces, i.e. directional energy gradients. In the quest to level
differences in energy the primordial energy transduction networks expanded and eventually integrated in the global energy transduction system. Thus, it is accurate to say that there
is not only life on Earth but the planet is living (Lovelock
1988 ; Karnani & Annila 2008).
The thermodynamic formalism is self-similar. It is applicable to diverse levels of hierarchy including complex biological systems that are results of chemical reactions. Thus the
thermodynamic description not only outlines the primordial
course of chemical evolution but also reveals the characteristics of contemporary processes. The question of why life emerged and the question of what life is are thus tied together. The natural process that accumulated early functional chemical compounds is the same one that today involves complex entities (species). The scale is different and the mechanisms are versatile and more effective, but the principle is the same.

Fig. 1. Evolution of a chemical system obtained from a simulation. The simulation was programmed as steps of random syntheses in a for-loop. External energy couples to steps of assembly N_1 + N_{j−1} ⇌ N_j according to Eq. (3) and energy dissipates in degradations N_j ⇌ jN_1. Initially, the system contains only basic constituents in numbers N_1. At time t = t_1 a synthesis pathway opens up. Entropy S increases rapidly (black) when matter flows from N_1 to new compounds (j > 1) in increasing numbers Σ_j jN_j (blue). The growth curve is representative of non-catalysed reactions. At time t = t_2 a second but faster (4×) pathway opens up (green). New kinds of products quickly prompt the system, but soon the system accumulates them more gradually as the energy in the new products becomes comparable to that of the original but diminishing substrate compounds. The system prompts again when a third pathway punctuates open at time t = t_3, yielding catalytic products (yellow) whose activity increases with j. Later the evolution settles to a new stasis. The form of an autocatalytic growth curve depends on the specific mechanisms. At time t = t_4 a fourth pathway opens up (red), yielding products that are capable of slowly recruiting more matter (N_1) from outside and maintaining it in the system. As a result the new pathway, even though it is slow, gains ground in the overall entropy production. With the help of the newest pathway the previously emerged fast catalytic pathway will also have more matter to yield even better catalysts and attain higher states of entropy, whereas the relative contribution of the older, slower pathways continues to diminish, eventually facing extinction.
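To make the kind of simulation described in the caption of Fig. 1 concrete, a minimal Python sketch is given below. It is not the authors' original program; the chain length, free energies, couplings and rate coefficients are invented, and only the overall logic is kept: assembly steps N_1 + N_{j−1} → N_j driven according to Eq. (3) by a fixed external coupling ΔQ, with the entropy of Eq. (1) recorded as the distribution evolves.

import math

R, T = 8.314, 298.0                    # gas constant (J/(mol K)) and temperature (K), assumed
RT = R * T
JMAX = 4                               # longest assembly product in this toy model
G = [None, 0.0, 1.5e4, 3.5e4, 6.0e4]   # invented Gibbs free energies G_j (J/mol), index 1..JMAX
dQ = 3.0e4                             # invented external energy coupled to every assembly step (J/mol)
r = [None, None, 0.5, 1.0, 2.0]        # invented rate coefficients r_j for the step producing N_j
N = [None, 1.0e6, 0.0, 0.0, 0.0]       # initially only basic constituents N_1

def mu(j):
    # chemical potential mu_j = RT ln[N_j exp(G_j/RT)]; call only with N_j > 0
    return RT * (math.log(N[j]) + G[j] / RT)

def affinity(j):
    # A_j = mu_1 + mu_{j-1} + dQ - mu_j for the assembly step N_1 + N_{j-1} -> N_j
    a = mu(1) + mu(j - 1) + dQ
    return a - mu(j) if N[j] > 0 else a

def entropy():
    # Eq. (1), restricted to the assembly steps of this toy model
    return sum(N[j] * (affinity(j) + RT) for j in range(2, JMAX + 1) if N[j] > 0) / T

dt = 0.01
for step in range(20001):              # the for-loop of synthesis steps
    for j in range(2, JMAX + 1):
        if N[1] <= 0 or N[j - 1] <= 0:
            continue
        v = r[j] * affinity(j) / RT                       # Eq. (3): flow proportional to free energy
        cap = 0.25 * N[1] if j == 2 else 0.5 * min(N[1], N[j - 1])
        dNj = max(-0.5 * N[j], min(v * dt, cap))          # never empty any pool completely
        N[j] += dNj
        N[1] -= dNj
        N[j - 1] -= dNj
    if step % 5000 == 0:
        print(f"t = {step * dt:7.1f}   S = {entropy():12.1f}   N = {[round(n, 1) for n in N[1:]]}")

Opening additional or faster pathways at later times, as in Fig. 1, corresponds to enlarging r and G (or switching entries on) during the run; the punctuated rises in S followed by stases are the qualitative feature of interest, not the invented numbers.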
All organisms assemble via numerous chemical reactions.
The increase in numbers is, in the case of complex entities,
referred to as proliferation (Fig. 1). According to Eq. (2),
entropy also increases when different kinds of products
appear until the stationary state is attained. In the case of
complex entities this process is usually referred to as differentiation, which gives rise to biodiversity. In the case of a
single organism the process is called developmental differentiation, which results in maturity (Prigogine & Wiame 1946),
i.e. the stable maximum entropy state. Eq. (2) reveals that
entropy increases further when more external energy couples
to the reactions. This process corresponds to an energy intake, e.g., by photo- and chemosynthesis. Entropy will also
increase when the system acquires more matter. It has of
course been known for a long time that entropy of a larger
system is higher than that for a smaller, but otherwise similar,
system. When the energy intake involves complex entities it is
usually referred to as metabolism that powers natural processes such as growth and expansion.
The aforementioned processes from the elementary level of
chemical compounds to complex biological entities at higher
and higher levels of hierarchical organization are strikingly
similar to those that we recognize as the basic biological
processes. Yet they were exposed simply by considering
probabilities of states accessible for an open system undergoing chemical reactions (Sharma & Annila 2007). Thus it is
concluded that life is a natural process. It is a consequence of
increasing entropy, the quest to diminish free energy with no
demarcation between inanimate and animate. According to
thermodynamics there was no striking moment nor a single specific locus for life to originate, but the natural process has
been advancing by a long sequence of steps via numerous
mechanisms so far reaching a specific meaning – life.
The outlined course of evolution is understood by thermodynamics as a probable scenario. This statement may be
interpreted erroneously to imply that life should exist everywhere but apparently does not. Considering the cosmic
background spectrum where the appropriate energy range for
the processes referred to as biological spans only a minute
band, life is undoubtedly rare but not unnatural. The probability is not an abstract concept but inherently associated
with energy (also in the form of matter) as is obvious when
S in Eq. (1) is multiplied by T to give the overall kinetic
energy within the system (Kaila & Annila 2008). Free energy
drives evolution so that kinetic energy balances potential
energy and the energy in radiation. Probabilities are not
invariants but keep changing. When there is little energy or
when there are no mechanisms to couple to external energy
or few ingredients to make energy transduction machinery,
evolution will not advance very far. The very same laws of
thermodynamics that worked in the primordial world are
still working today. For example, when a biological system is
deprived of energy, e.g., an animal is deprived of food, its
existence becomes improbable. Thermodynamics is common
sense.
The equation of evolution
Considering the explanatory power of thermodynamics, it is
perhaps surprising that the probable course of evolution
cannot be solved and predicted in detail. The fundamental
reason is exposed by rewriting Eq. (2) for the probability
using the definition S = R ln P

$$\frac{dP}{dt} = LP \ge 0; \qquad L = \sum_{j=1} \frac{dN_j}{dt} \frac{A_j}{RT}. \qquad (4)$$
The equation of motion cannot be solved analytically
(Sharma & Annila 2007) because the driving forces L keep
changing with changing flows. The non-conserved system,
summarized by the probability P, is changing because its
energy content is either increasing or decreasing. Chemical
reactions are endo- or exoergic, i.e. it is impossible for the system to change its state without acquiring or losing a
quantum. In other words, there are no invariants of motion,
which is the fundamental reason for the unpredictable
courses of evolution. New mechanisms accessing new potentials are in turn transformed into new mechanisms that redirect the flows of energy and so on. Even small perturbations
in the initial conditions affect the overall course and evolution
is by definition chaotic (Strogatz 2000).
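The point can be made explicit with a step that is not in the original text: formally, Eq. (4) integrates to

$$P(t) = P(0)\,\exp\!\left(\int_0^t L(t')\,dt'\right),$$

but L depends on the numbers N_j and on the couplings ΔQ_jk, which themselves change according to Eq. (3) as the flows proceed, so the integrand is not known in advance; there is no invariant of motion that would close the integral, and the trajectory can only be followed step by step.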
Despite evolution being non-deterministic its main
characteristics are revealed by the equation of motion.
Notably when new means appear to conduct energy from
plentiful potentials, the probability will increase rapidly. Then
evolution punctuates because suddenly there is much to draw
from and thus, according to Eq. (3), the rate dNj/dt is fast.
When the supplies narrow, the process slows down. Finally,
when the net resources have become exhausted, the system
settles to a stasis. This characteristic course of punctuations
and stases (Eldredge & Gould 1972) covers both complex
animate and simple inanimate systems (Bak 1996) (Fig. 1).
For the large global ecosystem the evolutionary course has
taken eons whereas a simple and small system will quickly
settle to a stasis.
The maximum-entropy steady-state distributions of energy
transduction mechanisms, e.g., populations Nj of species that
result from natural processes, are characteristically skewed
(Grönholm & Annila 2007 ; Jaakkola et al. 2008a ; Würtz &
Annila 2008). The distribution contains relatively few of the
most expensive mechanisms at the top of the energy transduction chain, i.e. food chain. They are thermodynamically
expensive hence rare but highly effective in energy transduction. The numerous mechanisms at the intermediate levels
are not particularly expensive but altogether conduct most of
the energy. The most inexpensive entities do not have many
mechanisms and thus they will not contribute much to the
overall energy transduction either.
The propagator L in Eq. (4) denotes the energy landscape
by tangential vectors that keep changing as energy flows
(Kaila & Annila 2008). A coordinate on the manifold of
energy densities is distinguished from another coordinate by
energy, thereby expressing the concept of identity in terms
of energy. Therefore evolution as an energy transduction
process can be viewed as an energy landscape in a flattening motion. The thermodynamic analysis reveals that the manifold is not preset, i.e. deterministic. It is non-Euclidean because the ‘distances’ in free energy are directional (thus not proper distances) and because the ‘distance’ between two
energy densities will change when a third density of energy
comes within interaction range (thus the triangle inequality
need not be satisfied).
Discussion
To understand the origins and evolutions of complex systems,
thermodynamics calls our attention to not discarding the
principle of decreasing free energy, which is equivalent to the
principle of increasing entropy. Often the universal thermodynamic principle and the natural selection in the theory of
evolution are viewed as opposing forces. This is a misconception. The driving force due to external energy has remained obscure because the equation for the rate of entropy
increase (Eq. (2)) has been deduced but not derived from
the first-principles probability calculation (Sharma & Annila
2007). Furthermore, when the entropy concept was formulated by statistical physics, free energy was not recognized as
the evolutionary force because it is absent at the equilibrium
that was determined mathematically using Lagrange multipliers rather than following the course directed by fading
forces. Consequently, the concepts of entropy and order have
become mixed with each other. Owing to the confusion it has
become the norm to say that living systems would export
entropy to maintain their internal high degree of order.
The objective is not to maintain order but to employ orderly
energy transduction machinery to diminish energy gradients.
The vital orderly mechanisms of energy transduction are not
low in entropy, i.e. improbable, when being parts of an external energy-powered system. It is emphasized that entropy
increases when differences in energy diminish, whereas disorder, or more precisely decoherence, increases during isergonic processes due to the stochastic exchange of quanta.
Indeed the pedagogical cliché of equating entropy with disorder is unnecessarily confusing and ultimately wrong (Sagan
2007). The common misconception that entropy of a living
system could possibly decrease at the expense of entropy
increasing in its surroundings does in fact violate the conservation of energy. It is possible, although statistically unlikely, that entropy of a system and its surroundings would
both decrease. This means that energy would transiently
flow upwards from a low to a high density. Thus the second
law of thermodynamics and the theory of evolution by
natural selection are not opposing but one and the same
imperative. There is no demarcation line between animate
and inanimate.
Natural selection by the rate of entropy increase among alternative ways, i.e. mechanisms, to conduct energy is the self-consistent and universal criterion of fitness. In the primordial world any mechanism, irrespective of how simple
or elementary, did move towards more probable states.
Primordial catalysts, perhaps yielding only minute rate enhancements, could just have been the compounds themselves.
Later, when other, faster, ways opened up they were employed to reach states that were even higher in entropy. Thus
evolution is tinkering (Jacob 1977), and there might be only
very few clues left to track down specific chemical reactions
that began to increase the energy content of matter on Earth
by coupling to the high-energy flux from the Sun. Nevertheless, the
emergence of systems with increasingly higher degrees of
standards such as chirality in biological macromolecules
and common genetic code can be recognized as signposts of
evolution. We see nothing of these slow changes in progress,
until the hand of time has marked the long lapses of ages
(Darwin 1859).
When a system cannot access more matter or energy, the
rates of energy transduction may still continue to improve to
reach higher states of entropy. The rates of entropy increase
are relative to one another. When ingredients are intrinsically
difficult to recruit to the natural process, even a slow process
is better than nothing. The dS/dt rate is a blind but highly
functional criterion. Over the eons rates have improved over
and over again to result in, e.g., efficient cellular metabolism
and an ecosystem food web. Today catalysed kinetics is so
ubiquitously characteristic of life that it is easily regarded as a
profound cause rather than being a consequence of the principle of increasing entropy by decreasing gradients in energy.
The dS/dt rate criterion guarantees that only those among the
diverse entities that are capable of contributing to entropy are
maintained in the system, i.e. will survive. The rate of entropy
increase as the selection criterion resolves the circular argument : fitness marks survival – survival means fitness. Natural
selection by the entropy increase rate may at first appear
merely as a conceptual abstraction or an oversimplification of
reality. Indeed it may be difficult to recognize the increase
of entropy, equivalent to the decrease of free energy, as the
common motive among many and intricate contemporary
mechanisms of life. However, intricacies and complexities
are in the machinery, not to be confused with the universal
objective.
The principle of increasing entropy explains why matter
organizes in functional structures and hierarchies. The order
and complexity in biological systems has no value as such.
Mechanisms and structures are warranted only by their energy transduction, i.e. the ability to attain and maintain high-entropy states. A system cannot become larger than the one
where its entities still reach to interact with each other. For
example, molecules that are the results of endoergic, external-energy-powered reactions are bound to break down and thus
they may take part only in the reactions that they will reach
within their lifetimes. Further entropy increase may take
place when systems themselves become entities of a large
system at a higher hierarchical level with a larger range of
interactions. For example, molecules are entities of systems
known as cells that are entities of organisms and so on. The
principle dS>0 is also the universal condition of integration.
An organization will form when entropy increases more than could be achieved by the entities, as systems in themselves, interacting with their surroundings independently. Some organisms, e.g., yeast,
exemplify the thermodynamic principle by switching between
uni- and multicellular modes of organization depending on
surrounding supplies (the potential energy gradients). Thus a
hierarchical organization is just a mechanism among many
others to conduct energy.
According to thermodynamics, mechanisms are consequences of the natural process, not conditions for life to
emerge. There is no requirement for an autocatalytic self-replicating molecule to be assembled by a fortuitous event and to be susceptible to mutations for natural selection to
operate on it. This is in agreement with the notion ‘metabolism first’, but without the incentive to discover a specific,
vital mechanism. There is no problem in evolution taking its
direction. It is always down along the energy gradients.
The role of heredity and information is not overlooked by the thermodynamic formalism either. It is incorporated in the evolutionary processes as mechanisms. The physical view of information gives insight, e.g., into its dispersal in genomes.
The unifying view of thermodynamics captures courses and
distributions of matter with no demarcation line between
living beings and inanimate objects. Stochastic processes act
on all matter and put it in motion towards increasing entropy.
The result is evolution, i.e. a series of steps from one state to
another to lower potential energy differences. Earth, our
home, is in between the huge potential energy difference due
to the hot Sun and the cold space. Biota emerged integrated
in processes of the atmosphere and geosphere to diminish the
energy differences by transduction. The theory of evolution
by natural selection formulated in thermodynamics roots
biology via chemistry to physics to widen contemporary discourse on the fundamentals of evolution and the emergence
of life.
Acknowledgments
We thank Christian Donner, Sedeer El-Showk, Mikael
Fortelius, Carl Gahmberg, Salla Jaakkola, Kari Keinänen,
Liisa Laakkonen, Martti Louhivuori, Kaj Stenberg, Mårten
Wikström and Peter Würtz for enlightening discussions and
valuable comments.
References
Alonso, M. & Finn, E.J. (1983). Fundamental University Physics Quantum
and Statistical Physics. Addison-Wesley, London.
Atkins, P.W. (1998). Physical chemistry. Oxford University Press, New
York.
Bak, P. (1996). How Nature Works : The Science of Self-Organized
Criticality. Copernicus, New York.
Bejan, A. (1997). Advanced Engineering Thermodynamics. Wiley, New
York.
Brooks, D.R. & Wiley, E.O. (1986). Evolution as Entropy : Toward a Unified
Theory of Biology. The University of Chicago Press, Chicago.
Chaisson, E.J. (1998). The cosmic environment for the growth of complexity. Biosystems 46, 13–19.
Darwin, C. (1859). On the Origin of Species. John Murray, London.
Dewar, R.C. (2003). Information theory explanation of the fluctuation
theorem, maximum entropy production, and self-organized criticality in
non-equilibrium stationary states. J. Phys. A : Math. Gen. 36, 631–641.
Eigen, M. & Schuster, P. (1979). The Hypercycle : A Principle of Natural
Self-organization. Springer, Berlin.
Eldredge, N. & Gould, S.J. (1972). Models in Paleobiology, ed. Schopf,
T.J.M., pp. 82–115. Freeman, Cooper, San Francisco.
Fry, I. (2000). The Emergence of Life on Earth. Rutgers University Press,
New Jersey.
Gibbs, J.W. (1993–1994). The Scientific Papers of J. Willard Gibbs. Ox Bow
Press, Woodbridge, CT.
Gould, S.J. (2002). The Structure of Evolutionary Theory. Harvard
University Press, Cambridge, MA.
Grönholm, T. & Annila, A. (2007). Natural distribution. Math. Biosci. 210,
659–667.
Jaakkola, S., El-Showk, S. & Annila, A. (2008a). The driving force behind
genomic diversity. Biophys. Chem. 134, 232–238. arXiv:0807.0892.
Jaakkola, S., Sharma, V. & Annila, A. (2008b). Cause of chirality consensus. Curr. Chem. Biol. 2, 53–58.
Jacob, F. (1977). Evolution and tinkering. Science 196, 1161–1166.
Jaynes, E.T. (1957). Information theory and statistical mechanics. Phys.
Rev. 106, 620–630.
Kacser, H. (1960). Kinetic models of development and heredity. Models
and analogues in biology. Symp. Soc. Exptl. Biol. 14, 13–27. Cambridge
University Press, Cambridge.
Kaila, V.R.I. & Annila, A. (2008). Natural selection for least action. Proc.
R. Soc. A 464, 3055–3070.
Karnani, M. & Annila, A. (2008). Gaia again. Biosystems doi:10.1016/j.biosystems.2008.07.003.
Kondepudi, D. & Prigogine, I. (1998). Modern Thermodynamics. Wiley,
New York.
Kullback, S. (1959). Information Theory and Statistics. Wiley, New York.
Lineweaver, C.H. (2005). Cosmological and biological reproducibility:
limits of the maximum entropy production principle. In Non-equilibrium
Thermodynamics and the Production of Entropy: Life, Earth and Beyond,
eds Kleidon, A. & Lorenz, R.D. Springer, Heidelberg.
Lorenz, R.D. (2002). Planets life and the production of entropy. Int. J.
Astrobiology 1, 3–13.
Lotka, A.J. (1925). Elements of Mathematical Biology. Dover, New York.
Lovelock, J.E. (1988). The Ages of Gaia. Oxford University Press,
Oxford.
Martin, P. (2006). Beginning research on the quantification of spatial order.
In 18th Annual Colloquium of the Spatial Information Research Centre
(SIRC 2006 : Interactions and Spatial Processes), 6–7 November,
Dunedin, New Zealand, pp. 109–125.
Mendoza, E. (ed.) (1960). Reflections on the Motive Power of Fire by Sadi
Carnot and other Papers on the Second Law of Thermodynamics by
E. Clapeyron and R. Clausius. Dover, New York.
Miller, S.L. (1953). Production of amino acids under possible primitive
earth conditions. Science 117, 528.
Oparin, A.I. (1952). The Origin of Life. Dover, New York.
Orgel, L.E. (1998). The origin of life – a review of facts and speculations.
TIBS 23, 491–495.
Peacocke, A.R. (1996). A Pioneer of the Kinetic Approach. J. Theor. Biol.
182, 219–222.
Prigogine, I. & Wiame, J.M. (1946). Biologie et thermodynamique des
phenomenes irreversibles, Experientia 2, 451–453.
Pross, A. (2003). The driving force for life’s emergence: kinetic and
thermodynamic considerations. J. Theor. Biol. 220, 393–406.
Pross, A. (2005). On the emergence of biological complexity : life as a kinetic
state of matter. Orig. Life Evol. Biosph. 35, 151–166.
Sagan, D. (2007). From producing entropy to reducing gradients to
spreading energy : Understanding the essence of the second law. In AIP
Conference Proceedings (vol. 1033). Meeting the Entropy Challenge : An
International Thermodynamics Symposium in Honor and Memory of Professor Joseph H. Keenan, 4–5 Oct 2007, Cambridge, MA, USA. (eds.)
Beretta, G.P., Ghoniem, A. & Hatsopoulos, G., pp. 194–197. American
Institute of Physics, New York.
Salthe, S.N. (1985). Evolving hierarchal system. Columbia University Press,
New York.
Salthe, S.N. (2004). The origin of new levels in dynamical hierarchies.
Entropy 6, 327–343.
Schneider, E.D. & Kay J.J. (1994). Life as a manifestation of the 2nd law of
thermodynamics. Math. Comp. Model. 19, 25–48.
Schrödinger, E. (1948). What is Life? The physical aspects of the living cell.
Cambridge University Press, Cambridge.
Shannon, C.E. (1948). A mathematical theory of communication. Bell
System Technical Journal 27, 379–423 & 623–656.
Sharma, V. & Annila, A. (2007). Natural process – natural selection.
Biophys. Chem. 127, 123–128.
Strogatz, S.H. (2000). Nonlinear Dynamics and Chaos with Applications To
Physics, Biology, Chemistry and Engineering. Westview, Cambridge, MA.
Swenson, R. (1998). Spontaneous order, evolution, and autocatakinetics :
the nomological basis for the emergence of meaning. In van de Vijver, G.,
Salthe, S. & Delpos, M. (eds) Evolutionary Systems : Biological and
Epistemological Perspectives on Selection and Self-organization, pp. 155–
180. Kluwer, Dordrecht.
Ulanowicz, R.E. & Hannon, B.M. (1987). Life and the production of
entropy. Proc. R. Soc. London B. 232, 181–192.
Verhulst, P.F. (1845). Recherches mathématiques sur la loi d’accroissement
de la population. Nouv. Mém. Acad. Roy. Sci. Belles-Lett. Bruxelles 18,
1–38.
Waage, P. & Guldberg, C.M. (1864). Forhandlinger, vol. 35 (Videnskabs-Selskabet i Christiania).
Weber, B.H., Depew, D.J. & Smith, J.D. (eds) (1988). Entropy, Information
and Evolution. Cambridge, MA, MIT Press.
Wilson, E.O. (1992). The Diversity of Life. Norton, New York.
Woese, C.R. (1998). The universal ancestor. Proc. Natl. Acad. Sci. 95,
6854–6859.
Würtz, P. & Annila, A. (2008). Roots of diversity relations. J. Biophys.
(in press).