BioSystems 60 (2001) 59–83
www.elsevier.com/locate/biosystems
Symbols and dynamics in the brain
Peter Cariani
Eaton Peabody Laboratory for Auditory Physiology, Massachusetts Eye and Ear Infirmary, 243 Charles St., Boston,
MA 02114, USA
Abstract
The work of physicist and theoretical biologist Howard Pattee has focused on the roles that symbols and dynamics
play in biological systems. Symbols, as discrete functional switching-states, are seen at the heart of all biological
systems in the form of genetic codes, and at the core of all neural systems in the form of informational mechanisms
that switch behavior. They also appear in one form or another in all epistemic systems, from informational processes
embedded in primitive organisms to individual human beings to public scientific models. Over its course, Pattee’s
work has explored (1) the physical basis of informational functions (dynamical vs. rule-based descriptions, switching
mechanisms, memory, symbols), (2) the functional organization of the observer (measurement, computation), (3) the
means by which information can be embedded in biological organisms for purposes of self-construction and
representation (as codes, modeling relations, memory, symbols), and (4) the processes by which new structures and
functions can emerge over time. We discuss how these concepts can be applied to a high-level understanding of the
brain. Biological organisms constantly reproduce themselves as well as their relations with their environs. The brain
similarly can be seen as a self-producing, self-regenerating neural signaling system and as an adaptive informational
system that interacts with its surrounds in order to steer behavior. © 2001 Elsevier Science Ireland Ltd. All rights
reserved.
Keywords: Adaptive systems; Biological cybernetics; Biological semiotics; Dynamical systems; Emergence; Epistemology; Evolutionary robotics; Genetic code; Neural code; Neurocomputation; Self-organization; Symbols
1. Symbols in self-production and in
percept-action loops
Theoretical biology has long attempted to answer fundamental questions concerning the nature
of life itself, its origins, and its evolution. Over
four decades, Howard Pattee has articulated a
series of questions that concern the origins and
evolutions of structural stability, hierarchical organization, functional autonomy, informational
process, and epistemic relation. These go to the
heart of how cognitive systems are grounded in
their material, biological substrates.
Organisms are dynamic material systems that
constantly reproduce their own material organization. In order to persist, organisms must maintain
both internal and external balance. They must
simultaneously create a stable, internal milieu
through self-production and establish stable, sustainable relations with their surrounds. Symbols
play fundamental roles in each of these realms.
DNA sequences constrain self-production and reproduction. In percept-action loops, nervous systems continuously engage in informational transactions with their external environments to
adaptively steer behavior.
As a physicist, Pattee has always been deeply
interested in what differentiates organisms from
other material systems. How do we distinguish
living from non-living systems? Are systems ‘‘living’’ by virtue of special parts and/or relations
(e.g. DNA, RNA, proteins) or by virtue of coherent organization of their constituent processes? In
physics, the discovery of universal, natural laws in
organizationally simple systems is paramount,
while the more complex organisms of biology are
most intelligible in terms of special constraints
that capture the essential organizational and informational relations that make an organism a
living system. A physics of biology must therefore
grapple with questions of organization, information, and function.
Pattee has been deeply interested in the role of
physically embodied symbols in the ongoing self-production of the organism (Pattee, 1961). Informational function in a biological system involves
the switching of states by configurational rather
than energetic means. While two different strands
of DNA may have essentially the same energetics,
large differences in cellular and organismic behavior can arise purely from the different sequences
of symbols that they carry. The central role of
discrete, genetic coding mechanisms in biological
organisms prompted Pattee to pose a series of
fundamental questions. What does it mean to say
that there is a ‘‘code’’ in a natural system? What
distinguishes a non-informational process from an
informational process? How do the latter evolve
from the former or, in Pattee’s (1969) words,
‘‘how does a molecule become a message?’’ Must
all life depend upon a genetic code? If so, must
the informational vehicles be discrete tokens, or
might simple analog, metabolic self-production
suffice?
In addition to their internal role in self-production, informational processes play critical roles in
interactions with external environments. These
processes form the basis of biological epistemology, i.e. a ‘‘cognitive biology.’’ Organisms sense
their surrounds, anticipate what actions are ap-
propriate, and act accordingly. In perception, internal informational patterns are contingent upon
the interactions of sensory receptors with an external environment. These sensory ‘‘representations’’ inform anticipatory predictions that
determine which actions are likely to lead to
outcomes that fulfill biological system-goals (e.g.
homeostasis, nutrition, reproduction). The predictive decision process switches between the different alternative behavioral responses that are
available to the organism. Actions are thus coordinated with percepts in a manner that facilitates
effective, survival-enhancing behavior.
The operations of perception, coordination-anticipation, and action in the organism become the
measurements, predictive computations, and actions of generalized observer-actors. The stimulus-contingent actions of sensory organs resemble
measurements, while reliable couplings of inputs
to outputs, in the form of percept-action mappings, resemble computations. Thus, to the extent
that organisms react differentially to different environmental conditions, ‘‘modeling relations’’ and
‘‘percept-action cycles’’ are embedded in biological systems. At their core, then, almost all biological organisms can be seen as primitive epistemic
systems in their own right. Organisms, cognitive
systems, and scientific models thus share a common basic functional organization (Rosen, 1978,
1985, 2000; Pattee, 1982, 1985, 1995, 1996; Cariani, 1989, 1998b; Kampis, 1991a; Etxeberria,
1998; Umerez, 1998). Further, these systems to
varying degrees are adaptive systems that continually modify their internal structure in response to
experience. To the extent that an adaptive epistemic system constructs itself and determines the
nature of its own informational transactions with
its environs, that system achieves a degree of
epistemic autonomy relative to its surrounds (Cariani, 1992a,b, 1998a).
Like the organism as a whole, nervous systems
are self-constructing biological systems that are in
constant adaptive interaction with their environments. It is not surprising, then, that parallel
questions related to information and organization
arise in the study of the brain. How are the
informational functions of neurons to be distinguished from their non-informational functions?
How is the informational identity of a nervous
system maintained over the life of the organism?
What kinds of neural pulse codes subserve the
representation of information? What is the relationship between analog and discrete information
processing in the brain? What does it mean to say
that neurons perform ‘‘computations’’ or ‘‘measurements’’ or that ‘‘symbols’’ exist in the brain?
How should we think about the semiotics of such
symbols?
Nervous systems are biological systems that
reproduce their internal organizations over time,
they are information-processing systems that use
sensory data to steer effective action, they are
epistemic systems that assimilate the correlational
structure of their environments, and in addition,
they are also material systems that support conscious awareness. In this paper, we will discuss
these various aspects of nervous systems with
many of Pattee’s probing questions and organizing concepts in mind.
2. Regeneration of internal relations: organisms
as self-producing systems
The fundamental concept of a self-production
system is common to an organizational view of
both life and mind. A self-production system reproduces its own parts and regenerates its own
functional states. Both the material organization
that characterizes life and the informational order
that characterizes mind therefore necessarily involve regenerative processes at their cores. Regenerative ‘‘circular-causal’’ processes that renew
energy flows, material parts, and functional relations continually recreate stable, ongoing system identities. Regenerations of parts and relations
between parts permit self-construction, self-repair,
and self-reproduction that allow energetically
open organizations to continually reproduce their
internal relations (Kampis, 1991b). The ensuing
dynamic orders of organisms and brains are more
flame-like than crystalline (Piatelli-Palmarini,
1980, introduction).
Thus far, our best theories of living organization all involve self-production networks, but differ in the role that symbols play in these networks (Fig. 1). In his logical requisites for a self-reproducing automaton, von Neumann (1951) drew an
explicit functional dichotomy between plans
(genome) and the apparatus that interprets them
to construct a body (Fig. 1A). In metabolism-repair systems (Rosen, 1971, 1991) and symbol-matter systems (Pattee, 1982, 1995), a similar
complementarity exists between symbols (plans)
and physical dynamics (rate-dependent chemical
reactions).
However, metabolic descriptions that de-emphasize and eliminate the role of biological symbols have also been proposed (Fig. 1B). These
include autopoietic models (Varela, 1979; Maturana, 1981; Mingers, 1995), reaction networks,
hypercycles (Eigen, 1974), and autocatalytic networks (Kauffman, 1993). In these models, organizational stability comes from the dynamics of
rate-dependent chemical reactions rather than
from the stability of genetic sequences. Here, organizational memory is analog and implicit in the
dynamics, rather than discrete, explicit and insulated from them.
Roles for symbolic constraint and dynamically
based structural stability need not be mutually
exclusive. A reconciliation of the two views is to
see the cell in terms of analog biochemical kinetics
that are channeled by the regulatory actions of
discrete genetic switches (Fig. 1C). Biochemical
reactions are described in terms of rate-dependent
processes that critically depend on the passage of
time, while switches are described in terms of
states that are largely indifferent to time. Pattee
distinguishes rate-independent genetic information storage and retrieval operations from rate-dependent processes that are involved in
construction, metabolism, and action (Pattee,
1979). The time-indifferent processes utilize independent, discrete, inheritable genetic ''symbols'', while time-critical processes depend on rate-dependent chemical dynamics. There is thus a way
of recognizing in natural systems those physical
mechanisms that can function as ‘‘symbols’’ or
‘‘records’’, i.e. the physical substrates of the semiotic. If we examine the workings of a digital
computer, we see that the behavior of the material
system can be described not only in terms of
rate-dependent dynamics (e.g. as differential equations that embody the laws of classical physics), but also in terms of rule-governed switchings between macroscopic operational states (e.g. as a finite state automaton).
Fig. 1. Three conceptions of the role of symbols in biological self-production. (A) John von Neumann's (1951) mixed digital–analog scheme for a self-producing automaton. Inheritable plans direct the construction of the plans themselves and the universal construction apparatus. Once plans and constructor can reproduce themselves, then byproducts can be produced that need not themselves be directly a part of the reproductive loop. (B) Non-symbolic self-production network in which there is no division between plans and material parts. (C) Symbolically constrained self-production network in which genetic expression sets boundary conditions for metabolic reaction cycles through catalytic control points (concentric circles).
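The dual description of a switch can be made concrete with a minimal simulation (an illustrative sketch, not Pattee's own formalism; all parameters are arbitrary). The same bistable element is described dynamically, as a rate-dependent differential equation with two attractors, and symbolically, as a two-state device whose discrete states are the attractor basins:

```python
# Illustrative sketch: one physical switch, two modes of description.
# Dynamical mode: rate-dependent equation dx/dt = x - x**3 (+ input),
# with attractors at x = -1 and x = +1.
# Symbolic mode: a two-state automaton whose states are the basins.

def simulate_switch(x0, pulse=0.0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = x - x**3 + pulse."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + pulse)
    return x

def symbolic_state(x):
    """Collapse the continuum of states onto discrete basin labels."""
    return "ON" if x > 0 else "OFF"

x = simulate_switch(-1.0)                  # rests in the OFF basin
print(symbolic_state(x))                   # OFF
x = simulate_switch(x, pulse=1.5)          # a transient input tips the state
print(symbolic_state(simulate_switch(x)))  # ON, after relaxing to the attractor
```

Which description is appropriate depends, as Pattee (1974) argues, on the purposes of the describer: predicting the trajectory requires the dynamical mode, while specifying the switching function requires only the symbolic one.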
Some processes lend themselves better to symbolic description, others to dynamical description.
In von Neumann’s scheme (Fig. 1A), the different
processes can be described in terms of symbols
(plans, genetic strings), material parts (phenotype,
body), and construction mechanisms (von Neumann’s universal constructor, transcription-translation) that transform symbols into material
structures. The latter interpret symbols to direct
the construction of organized structures from basic parts. In this view, the organism interprets its
own symbols in order to continually construct its
own body1. Pattee has called this mixture of symbolic and non-symbolic action ‘‘semantic closure’’
(Pattee, 1982).
1
A concrete example involves the tRNA molecules that map particular tri-nucleotide codons to particular amino acids in translation. These tRNA molecules that implement the interpretation of the genetic code are also themselves produced by
the cell, so that alternative, and even multiple interpretations
of the same nucleotide sequence would be possible (though
unlikely to be functionally meaningful). The cell fabricates the
means of interpreting its own plans.
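The closure described above can be caricatured in a few lines of code (a hypothetical toy, not a biochemical model: the three-codon table and all names are invented). The point is structural: the table that interprets the genome is itself expressed from the genome, so the system produces the means of interpreting its own plans:

```python
# Toy illustration (hypothetical, not the paper's formalism) of semantic closure:
# the lookup table that interprets the genome is itself expressed from the genome.

genome = {
    "code_table": {"AAA": "Lys", "GGC": "Gly", "UUU": "Phe"},  # plans for the adaptors
    "structural": ["AAA", "GGC", "UUU", "GGC"],                # plans for the body
}

def express_adaptors(genome):
    """Build the codon->amino-acid adaptors (the tRNA-like interpreters)."""
    return dict(genome["code_table"])  # the interpreters are themselves products

def construct_body(genome, adaptors):
    """Use the self-produced adaptors to translate structural plans into parts."""
    return [adaptors[codon] for codon in genome["structural"]]

adaptors = express_adaptors(genome)      # the cell makes its own interpreters
print(construct_body(genome, adaptors))  # ['Lys', 'Gly', 'Phe', 'Gly']
```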
Many different kinds of closure are possible2.
To the extent that material structures and functional organizations are continually regenerated
by internal mechanisms, some degree of material
and functional closure is achieved. This closure,
or internal causation, in turn creates domains of
partial structural and functional autonomy. Structure is created from within rather than imposed
from without. Closure thus creates a boundary
between an interior self-produced realm and an
exterior milieu that is beyond the control of the
self-production loop. For biological organisms,
closure and autonomy are always partial and
provisional because these systems depend on continuous material and informational exchange with
their environments.
3. Regeneration of informational pattern in the
nervous system
If organisms can be seen in terms of regenerations of material parts, minds can be seen in terms
of regenerations of informational orders. Organizational conceptions of both life and mind came
together early in Western natural philosophy, in
the form of Aristotle’s concept of psyche ( Hall,
1969; Modrak, 1987). Living organisms, nervous
systems, and societies of organisms are cooperative networks of active, but interdependent, semiautonomous elements. It is therefore not
surprising that conceptions of the coherent functional organization of nervous systems have developed in parallel with those for biological
organisms.
Anatomically, the nervous system consists of a
huge multiplicity of transmission loops: recurrent
multisynaptic connectivities, reciprocal innervations, and re-entrant paths (McCulloch, 1947;
Lorente de Nó and Fulton, 1949; Mesulam,
1998). Virtually every neuron in the system is part
of a signaling cycle, providing inputs to and receiving inputs from other elements in the network.
2
Many more aspects of closure are discussed elsewhere in
greater depth (Maturana, 1970, 1981; Varela, 1979; Pask,
1981; von Foerster, 1984a; von Glasersfeld, 1987; Chandler
and Van de Vijver, 2000).
These signaling cycles manifest themselves physiologically in terms of reciprocal activations, reverberations, and more complex, history-dependent
modes of activity (Gerard, 1959; Thatcher and
John, 1977). Theoretical neuroscientists have generally believed that this recurrent organization is
essential to the operation of the nervous system as
an informational system, on both macroscopic
and microscopic levels. Within individual neurons, a host of regenerative action–recovery cycles subserve synaptic action as well as the
generation and transmission of action potentials.
Thus, many of the first formal models of neural
networks dealt with the stability properties of
closed cycles of excitation and inhibition (Rashevsky, 1960), of pulse-coded ‘‘nets with circles’’
(McCulloch and Pitts, 1943; McCulloch, 1969a),
and assemblies of oscillators (Greene, 1962). At a
few junctures, formal relations between metabolic
networks and recurrent neural networks were also
considered (Rashevsky, 1960; Cowan, 1965; Maturana, 1970, 1981; Katchalsky et al., 1972; Varela,
1979; Haken, 1983; Minch, 1987; Kauffman,
1993).
Psychology in the mid-20th century was accordingly formulated in terms of switching between
reverberant signaling loops (McCulloch and Pitts,
1943; Hebb, 1949; Rashevsky, 1960; Greene,
1962; Hebb, 1966) (Fig. 2A). In these frameworks,
mental states could be seen as alternative eigenstates of a large, dynamical system (von Foerster,
1984a,b; Rocha, 1996, 1998). Different stimuli
would switch the resonant states of the system in
different ways, such that different motor response
patterns would be produced (Fig. 2B). Linkages
between particular stimulus classes and appropriate responses could then be implemented by
means of adjusting synaptic efficacies and/or
firing thresholds of excitatory and inhibitory elements so as to create mutually exclusive behavioral alternatives.
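A toy simulation can illustrate this switching scheme (illustrative parameters only, not a biophysical model). Two self-exciting, mutually inhibitory units form two reverberant states; a brief stimulus determines which state the network settles into, and the settled state selects the motor response:

```python
# Sketch of Fig. 2's idea: two mutually exclusive reverberant states,
# selected by a transient stimulus (all parameters are illustrative).

def step(a, b, input_a, input_b, dt=0.1):
    """Self-excitation plus mutual inhibition; activities clipped to [0, 1]."""
    da = -a + max(0.0, 2.0 * a - 3.0 * b + input_a)
    db = -b + max(0.0, 2.0 * b - 3.0 * a + input_b)
    return min(1.0, max(0.0, a + dt * da)), min(1.0, max(0.0, b + dt * db))

def run(stimulus):
    a = b = 0.1
    for t in range(400):
        ia = 0.5 if (stimulus == "A" and t < 50) else 0.0  # brief input pulse
        ib = 0.5 if (stimulus == "B" and t < 50) else 0.0
        a, b = step(a, b, ia, ib)                          # reverberation persists
    return "response-A" if a > b else "response-B"         # percept-action mapping

print(run("A"), run("B"))  # the same network yields different motor responses
```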
In the subsequent decades that saw the ascendance of the digital electronic computer, cybernetics-inspired notions of the brain as a set of tuned,
reverberant analog feedback circuits were replaced with accounts that relied on neural mechanisms of a more discrete sort: feature detectors,
decision trees, sequential-hierarchical processing, and high-level rule-systems. In the 1960s and 1970s, funding for research in information-processing shifted from neural networks towards the more symbolically oriented, logic-based approaches of symbolic artificial intelligence, cognitive psychology, and linguistics. Strong conceptions of minds as rule-governed symbol-processing systems emerged from this movement. The rise of the term ''genetic program'' reflected the diffusion of the computer metaphor into purely biological realms.
Fig. 2. Stimulus-contingent switching between reverberant states. (A) Hebb's conception of percept-action mappings using reverberant loops. (B) Simplified state-transition diagram for this process. Depending upon the stimulus and the resulting neural activity pattern, the network enters one of two resonant states (pattern-resonances), which subsequently produce different motor responses. Resonant states at this level of description become the functional primitive (symbolic) states of higher-level descriptions. The epistemic cut for this system lies at the point of contingency, where stimuli A and B cause different system-trajectories.
4. Symbols and dynamics in the brain
In this historical context, one could discuss the
competing paradigms of analog and digital computation in terms of their respective descriptions:
dynamical networks vs. symbolic computation
(Pattee, 1990). These two paradigms defined the
poles of the ‘‘symbol-matter’’ problem as it related to the description of the brain.
In the mid-1980s, neural network research was
revived under the rubric of ‘‘parallel distributed
processing’’, and neural network information-processing models reappeared in significant numbers
in the neurosciences. Currently, most neuroscientists who work on informational aspects of the
brain assume that the brain is a parallel, distributed connectionist network of one sort or
another. The great diversity of current neurocomputational approaches makes the core assumptions and boundaries of this paradigm hard to delineate clearly, such that it cannot readily be fit within the categories of the symbol-matter dichotomy (Pattee, 1990; Cariani, 1997a).
How brain function is conceptualized thus depends heavily on which technological examples
are available, especially in the absence of strong
theories and decisive empirical data. The current
situation in the neurosciences regarding the neural
code is not unlike the situation in molecular biology before the elucidation of the genetic code.
Biologists understood that there had to be molecular mechanisms for heredity in the chromosomes, but did not have a specific understanding
of which aspects of chromosomal structure were
responsible for the transmission of genetic information. We understand that all of the information
necessary for perception, cognition, and action
must be embedded in the discharge activities of
neurons, but we do not yet have firm understanding or agreement as to which specific aspects of
neural discharge convey which specific kinds of
information.
Table 1
Global paradigms for brain function

Aspect | Symbol processing | Dynamical systems | Neurocomputation
Explanatory mode | Functionalism: symbolic computation | Mass behavior: system trajectories | Functionalism: neural codes and info. processing
Change | Rules | Physical laws | Neural mechanisms
View of cells | Genetic programs; switching systems | Metabolic cycles; autopoiesis | Adaptive computing elements; neural architectonics
Brains | Discrete-state computer | Analog computer | Mixed analog–digital device
Neural primitives | Feature detectors; channel-activations | Neural mass-statistics; interneural correlations | Neural representations: rate-profiles and temporal patterns
Symbols | Functional atoms | Attractor basins | Mutually exclusive patterns
Representation | Explicit mappings onto symbol-states | Non-representational; implicate embeddings | Analog and discrete modes; general and special-purpose
Information processing | Sequential hierarchical decision processes; iterated computation; functional modules | Resonance processes; mass dynamics; controllable dynamics; chaos | Pattern-resonance and elaboration; feature-detection, correlations; hierarchical and heterarchical; sequential and (a)synchronous
Many strategies for cracking the neural code
are being pursued. Some clues may be provided
by studying the parts of neural systems on molecular and cellular levels, but structural knowledge
by itself may not generate the functional heuristics
needed to reverse-engineer these systems. One can
have in hand a circuit diagram of an unknown
information-processing device, but still not understand what it is for, how it works, or what general
functional principles are employed in its design.
System-pathologies provide other clues to function: what functional deficits are associated with
damage to particular parts. One strives to identify
those parts that are essential for a given function
and those that are redundant or non-essential.
These strategies are presently limited by the relatively coarse character of physical lesions and the
systemic nature of genetic and pharmacological
interventions that do not readily yield much insight into the details of neural representations and
computations. Electrophysiological experiments
do provide these details, but the sheer complexity
of neural responses makes their meaningful interpretation difficult at best. Neurocomputational
approaches attempt to understand how the brain
works by developing functional models of neural
systems that have information-processing capabilities similar to those of nervous systems, simultaneously searching for existing neural structures
that might implement such mechanisms. It is in
the realm of neurocomputational theory that the
concepts of symbols and dynamics have their
greatest relevance.
Amongst global theories of how the brain functions as an informational system, there are currently three broad schools: the dynamical
approach, the symbolic approach, and the neural
information processing (neurocomputational) approach (Table 1). Although symbolic and dynamical approaches are quite disjoint, considerable
overlap exists between each of these and portions
of the neurocomputational view.
The dynamical approach has been adopted by
research traditions that seek to understand the
brain in terms of analog, rate-dependent processes
and physics-style models: early formulations of
neural network dynamics (Beurle, 1956; Rashevsky, 1960; Greene, 1962), Gestalt psychology
(Köhler, 1951), Gibsonian ecological psychology
(Carello et al., 1984), EEG modeling (Basar, 1989;
Nunez, 1995), and dynamical systems theory
(Freeman, 1975, 1995, 1999; Haken, 1983, 1991;
Kugler, 1987; Kelso, 1995; van Gelder and Port,
1995). For dynamicists, the brain is considered as
a large and complex continuous-time physical system that is described in terms of the dynamics of
neural excitation and inhibition. The behavior of
large numbers of microscopic neural elements creates discrete basins of attraction for the system
that can be switched. These contingently stable
dynamical macro-states form the substrates for
mental and behavioral states. Some dynamics-oriented traditions have formulated analog alternatives to discrete computations with the aim of
explaining perceptual and behavioral functions
(Michaels and Carello, 1981; Carello et al., 1984),
while others are more concerned with the mass
dynamics of neural systems that account for their
observed exogenous and endogenous electromagnetic response patterns3.
In the neural and cognitive sciences, the symbol-based approach has been adopted by research
traditions whose subject matter lends itself to
orderly, rule-governed successions of discrete
functional states: symbolic artificial intelligence,
symbolically oriented cognitive science, and linguistics. Perception is seen in terms of microcomputations by discrete feature-detection elements,
while mental operations are conceptualized in
terms of computations on discrete, functional
symbolic states that are thought to be largely
autonomous of the underlying neural microdynamics4.
3
The failure to find intelligible neural representations for
sensory qualities has led some theorists (e.g. Freeman, 1995;
Hardcastle, 1999) to propose that explicit representations do
not exist as such, at least on the level of the cerebral cortex,
and are therefore implicitly embedded in the mass-dynamics in
a more covert way.
4
Thus the belief in a ‘‘symbol level’’ of processing. The
model of vision laid out in Trehub (1991) is a good example of
the microcomputational approach to perception, while
Pylyshyn (1984) epitomizes the symbol-based approach to
cognition.
The brain may be best conceptualized in terms
of mixed analog–digital devices, since strong examples of both analog and discrete modes of
representation can be found there (von Neumann,
1958). Clearly, most sensory representations that
subserve sensory qualia such as pitch, timbre,
color, visual form, smell, and taste convey continuous
ranges of qualities, and most actions involve continuous ranges of possible movements. On the
other hand, cognitive representations, such as
those that subserve speech, language, thought,
planning, and playing music, by necessity involve
discrete functional states that must be organized
and combined in highly specific ways.
The neurocomputational approach includes a
variety of neurophysiological and neurocomputational perspectives that seek to understand on a
detailed level how neural populations process information (Licklider, 1959; McCulloch, 1965; Arbib, 1989; Marr, 1991; Churchland and
Sejnowski, 1992; Rieke et al., 1997). In the brain,
these alternatives are often conceptualized in
terms of analog and digital processes operating at
many different levels of neural organization: subcellular, cellular, systems level, continuous vs. discrete percepts and behaviors. On the subcellular
level, continuously graded dendritic potentials
influence the state-switchings of individual ion
channels whose statistical mechanics determine
the production of discrete action potentials
(‘‘spikes’’). Most information in the brain appears
to be conveyed by trains of spikes, but the way in
which various kinds of information are encoded
in such spike trains is not yet well understood.
Central to the neurocomputational view is the
neural coding problem — the identification of
which aspects of neural activity convey information (Mountcastle, 1967; Perkell and Bullock,
1968; Uttal, 1973; Cariani, 1995; Rieke et al.,
1997; Cariani, 1999).
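The analog-discrete interface at the cellular level can be sketched with a leaky integrate-and-fire model (a standard textbook abstraction with illustrative parameters; it is not proposed here as a model of any particular neuron). Graded ''dendritic'' input is integrated in a continuous membrane variable, which emits discrete, all-or-none spikes:

```python
import random

# Minimal leaky integrate-and-fire sketch of the analog/digital interface:
# continuous integration of graded input, discrete spike output.
# All parameters are illustrative.

def lif(inputs, tau=20.0, threshold=1.0, dt=1.0, noise=0.05):
    v, spikes = 0.0, []
    for t, i_syn in enumerate(inputs):
        v += dt * (-v / tau + i_syn + random.gauss(0.0, noise))  # analog integration
        if v >= threshold:        # discrete, configurational switching event
            spikes.append(t)      # the spike time is what travels down the axon
            v = 0.0               # reset after the all-or-none event
    return spikes

analog_drive = [0.08] * 200       # steady graded input current
print(lif(analog_drive))          # irregular train of discrete spike times
```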
Neurocomputational approaches presume that
ensembles of neurons are organized into functional ‘‘neural assemblies’’ (Hebb, 1949) and processing architectures that represent and analyze
information in various ways. The functional states
of a neural code can form highly discrete alternatives or continuously graded values. A simple
‘‘doorbell’’, code in which a designated neuron
P. Cariani / BioSystems 60 (2001) 59–83
either fires or does not (on/off), is an example of
the former, while an interspike interval code in
which different periodicities are encoded in the
time durations between spikes is an example of
the latter. The nature of a code depends upon
how a receiver interprets particular signals; in the
case of neural codes, receivers are neural assemblies that interpret spike trains. Thus, a given
spike train can be interpreted in multiple ways by
different sets of neurons that receive it.
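The contrast between the two codes, and the receiver-dependence of interpretation, can be made concrete (toy spike train, hypothetical readers). The same spike train yields an all-or-none reading to one receiver and a graded periodicity reading to another:

```python
from collections import Counter

# Toy spike train (times in ms): one train, two hypothetical "receivers".
spike_times = [3, 13, 23, 33, 43, 53]  # spikes every 10 ms

def doorbell_reader(spikes):
    """Discrete on/off code: the designated neuron either fired or it did not."""
    return "ON" if spikes else "OFF"

def interval_reader(spikes):
    """Graded temporal code: read periodicity from interspike intervals."""
    intervals = [b - a for a, b in zip(spikes, spikes[1:])]
    period_ms, _ = Counter(intervals).most_common(1)[0]
    return 1000.0 / period_ms          # implied periodicity in Hz

print(doorbell_reader(spike_times))    # ON    (all-or-none reading)
print(interval_reader(spike_times))    # 100.0 (periodicity reading)
```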
The nature of the neural codes that represent
information determines the kinds of neural processing architectures that must be employed to
make effective use of them. If neural representations are based on across-neuron profiles of average firing rate, then neural architectures must be
organized accordingly. If information is contained
in temporal patterns of spikes, then neural architectures must be organized to distinguish different
time patterns (e.g. using time delays). The many
possible feedforward and recurrent neural net architectures range from traditional feedforward
connectionist networks to recurrent, adaptive resonance networks (Grossberg, 1988) to time-delay
networks (MacKay, 1962; Tank and Hopfield,
1987) to timing nets (Longuet-Higgins, 1989; Cariani, 2001). A given neurocomputational mechanism may be a special-purpose adaptation to a
specific ecological context, or it may be a general-purpose computational strategy common to many
different ecological contexts and information processing tasks5.
5
Von Bekesy identified a number of striking localization
mechanisms in different sensory modalities that appear to
involve computation of temporal cross-correlation between
receptors at different places on the body surface. This suggests
the possibility of a phylogenetically primitive ‘‘computational
Bauplan’’ for information-processing strategies analogous to
the archetypal anatomical –developmental body plan of vertebrates and many invertebrates. One expects special-purpose
evolutionary specializations for those percept-action loops
whose associated structures are under the control of the same
sets of genes. Intraspecies communication systems, particularly
pheromone systems, are prime examples. Here, members of the
same species have common genes that can specify dedicated
structures for the production and reception of signals. The
signals are always the same, so that dedicated receptors and
labeled line codes can be used. One expects the evolution of
general-purpose perceptual mechanisms for those tasks that involve detection and recognition of variable parts of the environment over which a species has no effective control, such as the recognition of predators under highly variable contexts (e.g. lighting, acoustics, wind, chemical clutter). In this case, the system must be set up to detect properties, such as form, that remain invariant over a wide range of conditions.
Each general theoretical approach has strengths
and weaknesses. Symbol-processing models couple directly to input–output functions and are
interpretable in functional terms that we readily
understand: formal systems, finite automata, and
digital computers. Dynamical approaches, while
further removed from functional states, directly
address how neural systems behave given the
structural properties of their elements. Neurocomputational, information-processing approaches at
their best provide bridges between structural and
functional descriptive modes by concentrating on
those aspects of structure that are essential for
function.
A general weakness of symbolic ‘‘black box’’
approaches lies in their assumption of discrete
perceptual and cognitive atoms. Symbolic primitives are then processed in various ways to realize
particular informational functions. However, in
abstracting away the neural underpinnings of
their primitives, these approaches may miss underlying invariant aspects of neural codes that
give rise to their cognitive equivalence classes6.
Historically, logical atomist and black box approaches have also ignored problems related to how new symbolic primitives can be created (Piatelli-Palmarini, 1980; Carello et al., 1984; Cariani,
1989, 1997a; Schyns et al., 1998). This problem in
psychology of forming new perceptual and conceptual primitives is related to more general problems of how qualitatively new structures and
levels of organization can emerge. Pattee and
Rosen originally addressed this problem of emergence in the context of evolution of new levels of cellular control (Pattee, 1973b; Rosen, 1973a), but subsequently extended their ideas to the emergence of new epistemic functions (Rosen, 1985; Pattee, 1995).
6
Strong physiological evidence exists for interspike interval coding of periodicity pitch in the auditory system (Meddis and Hewitt, 1991; Cariani and Delgutte, 1996; Cariani, 1999). Interspike intervals form autocorrelation-like, iconic representations of stimulus periodicities from which pitch equivalences, pitch similarities, and other harmonic relations are simply derived. These relations require complex cognitive analysis if a spectrographic frequency-time representation is taken as primitive. Here is a potential example of cognitive structures that arise out of the structure of underlying neural codes.
Underlying these ideas are notions
of systems that increase their effective dimensionalities over time (Pask, 1960; Carello et al., 1984;
Cariani, 1989, 1993, 1997a; Kugler and Shaw,
1990; Conrad, 1998)7. Purely symbolic systems
self-complexify by multiplying logical combinations of existing symbol primitives, not by creating new ones. Because their state sets are much
more finely grained and include continuous,
analog processes, dynamical and neurocomputational models leave more room for new and subtle
factors to come into play in the formation of new
primitives. Dynamical and neurocomputational
substrates arguably have more potential for self-organization than their purely symbol-based
counterparts.
In the case of neural signaling systems as well
as in the cell, there are also means of reconciling dynamical models with symbolic ones — attractor basins formed by the dynamics of the
interactions of neural signals become the state
symbol alternatives of the higher-level symbol-processing description8.
7
For example, fitness landscapes increase in effective dimensionality as organisms evolve new epistemic functions. More modes of sensing and effecting result in more modes of interaction between organisms.
8
Complementarity between different modes of description has been an abiding part of Pattee's thinking. Pattee (1979) explicated the complementarity between universal laws and local rules, and outlined how organized material systems can be understood in either ''dynamic'' or ''linguistic'' mode, depending upon the organization of the system and the purposes of the describer. The dynamic mode describes the behavior of the system in terms of a continuum of states traversed by the action of rate-dependent physical laws, while the linguistic mode describes the behavior of the system in terms of rule-governed transitions between discrete functional states. A simple switch can be described in either way, as a continuous, dynamical system with two basins of attraction or as a discrete system with two alternative states (Pattee, 1974). The attractor basins of the dynamical system are the sign-primitives of the symbol system. How the switch should be described is a matter of the purposes to which the description is to be put, whether the describer is interested in predicting the state-trajectory behavior of the system or in outlining the functional primitives it affords to some larger system.
Even with these interpretational heuristics, there remain classical
problems of inferring functions from structures
and phase-space trajectories (Rosen, 1973b,
1986, 2000). While detailed state trajectories often yield insights into the workings of a system,
by themselves, they may not address functional
questions of how neurons must be organized in
order to realize particular system-goals. Much of
what we want to understand by studying biological systems concerns principles of effective design, i.e.
how they realize particular functions, rather
than whether these systems are governed by
known physical laws (we assume that they are),
or whether their state-transition behavior can be
predicted. Though they provide useful clues, neither parts lists, wiring diagrams, nor input–output mappings by themselves translate directly
into these principles of design. One can have in
hand complete descriptions of the activities of
all of the neurons in the brain, but without
some guiding ideas of how the brain represents
and processes information, this knowledge alone
does not lead inevitably to an understanding of
how the system works.
5. Symbols and dynamics in epistemic systems
Brains are more than simply physical systems,
symbol-processing systems, and neural information-processing architectures. They are also epistemic systems that observe and interact with their
environs. How biological systems come to be epistemic systems has been a primary focus of Pattee’s
theoretical biology. In addition to internalist roles
that symbols play in biological self-construction,
there are also externalist roles in epistemic operations: how symbols retain information related to
interactions with the environment. These interactions involve neural information processes for
sensing, deliberating, and acting (Figs. 3–6).
These operations have very obvious and direct
analogies with the functionalities of the idealized
observer-actor: measurement, computation, prediction, evaluation, and action (''modeling relations'').
Fig. 3. Operational and semiotic structure of scientific models. (A) Hertzian commutation diagram illustrating the operations involved in making a prediction and testing it empirically. (B) Operational state transition structure for measurement, prediction, and evaluation. Preparation of the measuring apparatus (reference state R1), the contingent nature of the measurement transition (R1 transits to A, but could have registered B instead), computation of a prediction (A transits to PA by way of intermediate computational states), and comparison with outcome of the second measurement (A vs. C). Epistemic cuts demarcate boundaries between operationally contingent, extrinsically caused events and operationally determinate, internally caused sequences of events.
In order to provide an account of how modeling relations might be embedded in biological systems, essential functionalities of observer-actors (measurement, computation, evaluation,
action) must be distinguished and clarified, and
then located in biological organisms. The latter
task requires a theory of the physical substrates of
these operations, such that they can be recognized
wherever they occur in nature. One needs to
describe in physical terms the essential operations
of observers, such as measurement, computation,
and evaluation. Once measurement and computation can be grounded in operational and physical
terms, they can be simultaneously seen as very
primitive, essential semiotic operations that are
present at all levels of biological organization and
as highly elaborated and refined externalized end-products of human biological and social evolution. This epistemically oriented biology then
provides explanations for how physical systems
can evolve to become observing systems. It also
provides an orienting framework for addressing
the epistemic functions of the brain.
One of the hallmarks of Pattee’s work has been
a self-conscious attitude toward the nature of
physical descriptions and the symbols themselves.
Traditionally, our concepts regarding symbols,
signals, and information have been developed in
the contexts of human perceptions, representations, coordinations, actions, and communications and their artificial counterparts. The clearest
cases are usually artificial devices simply because
people explicitly designed them to fulfill particular
purposes — there is no problem of second-guessing
or reverse-engineering their internal mechanisms,
functional states, and system-goals. In the realm of
epistemology — how information informs effective
prediction and action — the clearest examples have
come from the analysis of the operational structure
of scientific models.
In the late 19th and early 20th century, physics
was compelled to adopt a rigorously self-conscious
and epistemologically based attitude towards its
methods and its descriptions (Hertz, 1894; Bridgman, 1936; Weyl, 1949a; Murdoch, 1987). The
situation in physics paralleled a self-consciousness
about the operation of formal procedures in mathematics. Heinrich Hertz (1894) explicated the operational structure of the predictive scientific model
(Fig. 3A), in which an observer makes a measurement that results in symbols that become the initial
conditions of a formal model. The observer then
computes the predicted state of a second observable
and compares this to the outcome of the corresponding second measurement. When the two
agree, ‘‘the image of the consequent’’ is congruent
with the ‘‘consequence of the image’’, and the
model has made a successful prediction.
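Hertz's commutation diagram can be rendered as a short runnable sketch (the falling-body physics is merely illustrative). Measurement encodes a world state into a symbol, computation manipulates symbols only, and a second measurement tests whether the ''image of the consequent'' is congruent with the ''consequence of the image'':

```python
# Runnable sketch of Hertz's commutation diagram (illustrative physics).

def world(height, t, g=9.8):
    """The material system: a rate-dependent physical law."""
    return height - 0.5 * g * t**2

def measure(world_state, resolution=0.1):
    """Measurement: contingent encoding of a world state into a symbol."""
    return round(world_state / resolution) * resolution

def compute_prediction(symbol, t, g=9.8):
    """Formal model: rule-governed manipulation of symbols only."""
    return symbol - 0.5 * g * t**2

h0 = 100.0
s1 = measure(world(h0, 0.0))             # first measurement -> initial conditions
predicted = compute_prediction(s1, 2.0)  # "image of the consequent"
s2 = measure(world(h0, 2.0))             # second measurement of the world
print(abs(predicted - s2) < 0.1)         # congruent -> successful prediction
```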
The operational description of a scientific experiment includes the building of measuring devices,
the preparation of the measurement, the measurements themselves, and the formal procedures that
are used to generate predictions and compare
predictions with observed outcomes. When one
examines this entire context, one finds material
causation on one side of the measuring devices and
rule-governed symbol manipulation on the other9.
9
But the symbols themselves are also material objects that obey physical laws. As Hermann Weyl remarked:
… we need signs, real signs, as written with chalk on the blackboard or with pen on paper. We must understand what it means to place one stroke after the other. It would be putting matters upside down to reduce this naively and grossly misunderstood ordering of signs in space to some purified spatial conception and structure, such as that expressed in Euclidean geometry. Rather, we must support ourselves here on the natural understanding in handling things in our natural world around us. Not pure ideas in pure consciousness, but concrete signs lie at the base, signs which are for us recognizable and reproducible despite small variations in detailed execution, signs which by and large we know how to handle.
As scientists we might be tempted to argue thus: 'As we know' the chalk mark on the blackboard consists of molecules, and these are made up of charged and uncharged elementary particles, electrons, neutrons, etc. But when we analyzed what theoretical physics means by such terms, we saw that these physical things dissolve into a symbolism that can be handled according to some rules. The symbols, however, are in the end again concrete signs, written with chalk on the blackboard. You notice the ridiculous circle. (Weyl, 1949b)
If one were watching this predictive process from without, there would be sequences of different operational symbol states that we would observe as measurements, computations, and comparisons were made (Fig. 3B). Operationally, measurement involves contingent state transitions that involve the actualization of one outcome amongst two or more possible ones. The observer sees this transition from many potential alternatives to one observed outcome as a reduction of uncertainty, i.e. gaining information about the interaction of sensor and environment. In contrast to measurements, computations involve reliable, determinate mappings of symbol states to other symbol states.
Charles Morris was the first to explicitly distinguish syntactic, semantic, and pragmatic aspects of symbols (Morris, 1946; Nöth, 1990), and modeling relations can be analyzed in these terms. In Hertz's framework, measuring devices are responsible for linking particular symbol states to particular world states (or more precisely, particular interactions between the measuring apparatus and the world). Thus the measuring devices determine the external semantics of the symbol states in the model. Computations link symbol states to other symbol states, and hence determine syntactic relations between symbols10.
10
Operationally, we are justified in describing a material system as performing a ''computation'' when we can put the observed state transitions of a material system under a well-specified set of observables into a 1:1 correspondence with the state-transitions of a finite-length formal procedure, e.g. the states of a deterministic finite-state automaton. This is a more restrictive, operationally defined use of the word ''computation'' than the more common, looser sense of any orderly informational process. Relationships between the operations of the observer (Fig. 3A) and the functional states of the predictive process (Fig. 3B) are discussed more fully in Cariani (1989).
Finally, there are linkages between the symbol states and the purposes
of the observer that reflect what aspects of the
world the observer wishes to predict to what
benefit. The choice of measuring devices and their
concomitant observables thus is an arbitrary
choice of the observer that is dependent upon his
or her desires and an evaluative process that
compares outcomes to goals. Constituted in this
way, the three semiotic aspects (syntactics, semantics, and pragmatics) and their corresponding operations (computation, measurement, evaluation)
are irreducible and complementary. One cannot replace semantics with syntactics, semantics with pragmatics, or syntactics with pragmatics11.
The measurement problem, among other
things, involved arguments over where one draws
the boundaries between the observer and the observed world — the epistemic cut (Pattee, 2001,
this issue). Equivalently, this is the boundary
where formal description and formal causation
begin and where the material world and material
causation end (von Neumann’s cut). If the observer can arbitrarily change what is measured,
then the cut is ill defined. However, once measuring devices along with their operational states are
specified, then the cut can be operationally
defined. The cut can be drawn in the state-transition structure of the observer’s symbol states,
where contingent state transitions end and determinate transitions begin (Fig. 3B). These correspond to empirical, contingent measurement
operations and analytic, logically necessary formal operations (‘‘computations’’).
11
John von Neumann showed in the 1930s that attempts to
incorporate the measuring devices (semantics) into the formal,
computational part of the modeling process (syntactics) result
in indefinite regresses, since one then needs other measuring
devices to determine the initial conditions of the devices one
has just subsumed into the formal model (von Neumann,
1955). Unfortunately, this did not prevent others in the following decades from conflating these semiotic categories and
reducing semantics and pragmatics to logical syntax.
6. Epistemic transactions with the external world
How are we to think about how such modeling
relations might be embedded in the brain? In
addition to organizational closures maintained
through self-sustained, internally generated endogenous activity, nervous systems are also informationally open systems that interact with their
environments through sensory inputs and motor
outputs (Fig. 4). Together, these internal and
external linkages form percept-action loops that
extend through both organism and environment
(Uexküll, 1926) (Fig. 4A). Thus, both the internal
structure of the nervous system and the structure
of its transactions with the environment involve
‘‘circular-causal’’ loops (McCulloch, 1946; Ashby,
1960). The central metaphor of cybernetics was
inspired by this cyclic image of brain and environment, where internal sets of feedback loops themselves have feedback connections to the
environment, and are completed through it (de
Latil, 1956; McCulloch, 1965, 1969b, 1989; Powers, 1973). Thus, McCulloch speaks of ‘‘the environmental portion of the path’’ (Fig. 4B) and
Powers, emphasizing the external portion of the
loop, talks in terms of ‘‘behavior, the control of
perception’’ rather than the reverse (Powers,
1973). Clearly, both halves of the circle are necessary for a full account of behavior and adaptivity:
the nervous half and the environmental half.
In these frameworks, sensory receptors are in
constant interaction with the environment and
switch their state contingent upon their interactions. Effectors, such as muscles, act on the world
to alter its state. Mediating between sensors and
effectors is the nervous system, which determines
which actions will be taken, given particular percepts. The function of the nervous system, at its
most basic, is to realize those percept-action mappings that permit the organism to survive and
reproduce. Adaptive robotic devices (Fig. 4C) can
also be readily seen in these terms (Cariani, 1989,
1998a,b) if one replaces percept-action coordinations that are realized by nervous systems with
explicit percept-action mappings that are realized
through computations. These adaptive robotic
devices then have a great deal in common with the
formal, operational structure of scientific models
discussed above. In such adaptive devices (Fig. 4C), there is, in addition to the percept-action loop, a pragmatic, feedback-to-structure loop that evaluates performance and alters sensing, computing, and effector actions in order to improve measured performance. Evaluations are operations that are similar to measurements made by sensors, except that their effect is to trigger a change in system structure rather than simply triggering a change in system state.
Fig. 4. Percept-action loops in organisms and devices. (A) Cycles of actions and percepts and the formation of sensorimotor interactions (Uexküll, 1926). (B) Completion of a neural feedback loop through environmental linkages (McCulloch, 1946). (C) Adaptive control of percept-action loops in artificial devices, showing the three semiotic axes (Cariani, 1989, 1997a,b, 1998a,b). Evaluative mechanisms adaptively modify sensing and effector functionalities as well as steering percept-action mappings.
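A minimal sketch of such a device follows (all names, the reward rule, and the learning scheme are hypothetical). Percepts switch system state, while evaluations rewire the percept-action mapping itself, which is the feedback-to-structure loop of Fig. 4C:

```python
import random

# Hedged sketch of an adaptive percept-action device (details hypothetical).

actions = ["approach", "avoid"]
policy = {"light": "avoid", "dark": "avoid"}  # initial percept-action mapping

def environment(percept, action):
    """Hidden task: approaching light is good, approaching dark is bad."""
    return 1.0 if (percept == "light") == (action == "approach") else -1.0

for trial in range(200):
    percept = random.choice(["light", "dark"])
    action = policy[percept]
    if random.random() < 0.1:                 # occasional exploratory variation
        action = random.choice(actions)
    reward = environment(percept, action)     # evaluation of performance
    if reward > 0:
        policy[percept] = action              # evaluation alters system STRUCTURE
        # (contrast: percepts alter system state; evaluations rewire the mapping)

print(policy)  # typically converges to {'light': 'approach', 'dark': 'avoid'}
```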
What follows is a hypothetical account of the
brain as both a self-production network and an
epistemic system. On a very high level of abstraction, the nervous system can be seen in terms of
many interconnected recurrent pathways that create sets of neural signals that regenerate themselves to form stable mental states (Fig. 5). These
can be thought of as neural ‘‘resonances’’ because
some patterns of neural activity are self-reinforcing, while others are self-extinguishing. Sensory
information comes into the system through
modality-specific sensory pathways. Neural sensory representations are built up through basic
informational operations that integrate information in time by establishing circulating patterns,
which are continuously cross-correlated with incoming ones (i.e. bottom-up/top-down interactions). When subsequent sensory patterns are
similar to previous patterns, these patterns are
built up, and inputs are integrated over time.
When subsequent patterns diverge from previous
patterns, new dynamically created ‘‘templates’’
are formed from the difference between expectation and input. The result is a pattern resonance.
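A toy version of this build-up loop (assumed thresholds and gains; not a model of any particular circuit) shows the two regimes: input that matches the circulating pattern reinforces it, while divergent input seeds a new template from the difference between expectation and input:

```python
# Toy pattern-resonance loop: cross-correlate circulating and incoming
# patterns; build up on match, form a new template on mismatch.
# All thresholds and gains are assumed.

def normalize(v):
    s = sum(abs(x) for x in v) or 1.0
    return [x / s for x in v]

def resonate(circulating, incoming, gain=0.5, match_threshold=0.2):
    match = sum(c * i for c, i in zip(circulating, incoming))  # cross-correlation
    if match > match_threshold:  # input confirms expectation: build pattern up
        new = [c + gain * i for c, i in zip(circulating, incoming)]
    else:                        # expectation violated: template from difference
        new = [i - c for c, i in zip(circulating, incoming)]
    return normalize(new), match

pattern = normalize([1.0, 0.0, 1.0, 0.0])
for stimulus in ([1.0, 0.0, 1.0, 0.0],   # similar: integrates over time
                 [1.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0]):  # divergent: a new template forms
    pattern, match = resonate(pattern, normalize(stimulus))
    print([round(x, 2) for x in pattern], round(match, 3))
```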
Fig. 5. The brain as a set of resonant loops that interact with an external environment. The loops represent functionalities
implemented by means of pattern-resonances in recurrent networks.
Tuned neural assemblies can provide top-down
facilitation of particular patterns by adding them
to circulating signals. The overall framework is
close to the account elaborated by Freeman (1999), with its circular-causal reafferences, resonant mass dynamics, and intentional dimensions.
The neural networks that subserve these ‘‘adaptive resonances’’ have been elaborated in great
depth by Grossberg and colleagues (Grossberg,
1988, 1995) whose models qualitatively account
for a wide range of perceptual and cognitive
phenomena. Various attempts have been made to
locate neural resonances in particular re-entrant
pathways, such as thalamocortical and corticocortical loops (Edelman, 1987; Mumford, 1994).
For the most part, neural resonance models
have assumed that the underlying neural representations of sensory information utilize channel-
coded input features and neural networks with
specific, adaptively modifiable connection weights.
However, a considerable body of psychophysical
and neurophysiological evidence exists for many
other kinds of neural pulse codes in which temporal patterns and relative latencies between spikes
appear to subserve different perceptual qualities
(Perkell and Bullock, 1968; Cariani, 1995, 1997b).
For example, patterns of interspike intervals correspond closely with pitch perception in audition
(Cariani and Delgutte, 1996) and vibration perception in somatoception (Mountcastle, 1993).
Neural resonances can also be implemented in the
time domain using temporally coded sensory information, recurrent delay lines, and coincidence
detectors (Thatcher and John, 1977; Cariani,
2001). In addition to stimulus-driven temporal
patterns, stimulus-triggered endogenous patterns
can be evoked by conditioned neural assemblies
(Morrell, 1967). Networks of cognitive timing
nodes that have characteristic time courses of
activation and recovery time have been proposed
as mechanisms for sequencing and timing of percepts and actions (MacKay, 1987). Coherent temporal, spatially distributed and statistical orders
(‘‘hyperneurons’’) consisting of stimulus-driven
and stimulus-triggered patterns have been proposed as neural substrates for global mental states
(John, 1967, 1972, 1976, 1988, 1990; John and
Schwartz, 1978).
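A time-domain resonance of the kind cited above can be sketched as a recurrent delay loop feeding a coincidence detector (illustrative parameters; a drastic simplification of the timing nets of Cariani, 2001). Input whose periodicity matches the loop delay builds up; mismatched periodicities die away:

```python
# Toy timing net: a recurrent delay line plus multiplicative coincidence
# detection. Activity builds up only when input period matches loop delay.

def timing_net(spike_train, loop_delay, decay=0.8, ceiling=5.0):
    buffer = [0.0] * loop_delay                # recurrent delay loop
    total_coincidence = 0.0
    for t, spike in enumerate(spike_train):
        delayed = buffer[t % loop_delay]       # signal after one loop transit
        coincidence = spike * delayed          # coincidence detection
        total_coincidence += coincidence
        buffer[t % loop_delay] = min(ceiling, decay * delayed + spike + coincidence)
    return total_coincidence

period_10 = [1 if t % 10 == 0 else 0 for t in range(200)]
print(timing_net(period_10, loop_delay=10))    # strong build-up: period matches loop
print(timing_net(period_10, loop_delay=7))     # weak response: period mismatch
```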
In the present scheme, build-up loops and their
associated resonance processes are iterated as one
proceeds more centrally into successive cortical
stations. Once sensory representations are built up
in modality-specific circuits (e.g. perceptual resonances in thalamic and primary sensory cortical
areas), they become available to the rest of the
system, such that they activate still other neural
assemblies that operate on correlations between
sensory modalities (e.g. higher-order semantic resonances in association cortex). Subsequent buildup processes then implement correlational
categories further and further removed from sensory specifics. These resonances also involve the
limbic system and its interconnections, which then add affective and evaluative components to circulating sets of neural signal-patterns (pragmatic evaluations). Similarly, circulating patterns activate
associated long-term memories, which in turn facilitate and/or suppress activation of other
assemblies.
Long-term memory is essential to stable mental
organization. Pattee has asserted that ‘‘life depends
upon records.’’ Analogously, we can assert that
mind depends upon memory. Like DNA in the cell,
long-term memory serves as an organizational
anchor that supplies stable informational constraints for ongoing processes. Do brain and cell
have similar organizational requirements for stability? Must this storage mechanism be discrete in
character? Like the cell, the nervous system is an
adaptive system that is constantly rebuilding itself
in response to internal and external pressures. As
von Neumann pointed out, purely analog systems
are vulnerable to the build-up of perturbations over
time, while digital systems (based as they are on
functional states formed by basins of attraction)
continually damp them out (von Neumann, 1951).
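The contrast can be illustrated numerically with a toy relay chain (stage count and noise level chosen arbitrarily): an analog chain lets small perturbations accumulate as a random walk, whereas a digital chain snaps the value back to the nearest discrete functional state at every stage, damping perturbations out.

    import random
    random.seed(1)

    def relay(value, stages=1000, noise=0.02, digital=False):
        # Pass a value through a chain of noisy stages. The digital chain
        # re-quantizes to the nearest discrete state (0 or 1) at each
        # stage -- its basin of attraction -- so errors cannot accumulate.
        for _ in range(stages):
            value += random.gauss(0.0, noise)
            if digital:
                value = 1.0 if value >= 0.5 else 0.0
        return value

    print(relay(1.0))                  # analog: drifts by ~ noise * sqrt(stages)
    print(relay(1.0, digital=True))    # digital: still exactly 1.0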
Memory is surprisingly long-lived. We are intimately familiar with the extraordinary lengths of
time that memories can persist, from minutes,
hours, years, and decades to an entire lifetime.
Long-term memories survive daily stretches of
sleep, transient exposures to general anesthesia,
and even extended periods of coma. These are
brain states in which patterns of central activity are
qualitatively different from the normal waking
state in which memories are formed. What is even
more remarkable is the persistence of memory
traces in the face of constant molecular turnover
and neural reorganization.
The persistence of memory raises the fundamental
question of whether long-term memory must be
‘‘hard-coded’’ in some fashion, perhaps in molecular form, for the same reasons that genetic information is hard-coded in DNA (see John, 1967;
Squire, 1987 for discussions). DNA is the most
stable macromolecule in the cell. Autoradiographic
evidence suggests that no class of macromolecule
in the brain save DNA appears to remain intact for
more than a couple of weeks. These and other
considerations drove neuroscientists who study
memory to concentrate almost exclusively on
synaptic rather than molecular mechanisms
(Squire, 1987). While enormous progress has been
made in understanding various molecular and
synaptic correlates of memory, crucial links in the
chain of explanation are still missing. These involve the nature and form of the information being
stored, as well as how the neural organizations
would make use of this information. Currently, the
most formidable gap between structure and function lies in our primitive state of understanding of
neural codes and neural computation mechanisms.
Consequently, we cannot yet readily and confidently interpret the empirical structural data that
have been amassed in terms directly linked to
informational function. Presently, we can only
hypothesize how the contents of long-term memories might be stored given alternative neural coding
schemes.
By far the prevailing view in the neurosciences is that central brain structures are primarily connectionist systems that operate on across-neuron average rate patterns. Neurons are seen as rate-integrators with long integration times, which mandates that functionally relevant information must
be stored and read out through the adjustment of
inter-element connectivities. Learning and memory
are consequently thought to require the adjustment
of synaptic efficacies. Some of the difficulties associated with such associationist neural ‘‘switchboards’’ (e.g. problems of the regulation of highly
specific connectivities and transmission paths, of
the stability of old patterns in the face of new ones,
problems of coping with multidimensional, multimodal information) have been raised in the past
(John, 1967, 1972; Thatcher and John, 1977; Lashley, 1998), but these difficulties on the systems
integration level have been largely ignored in the
rush to explore the details of synaptic behavior. As
Squire (1987) makes clear, the predominant, conventional view has been that molecular hard coding
of memory traces is inherently incompatible with
connectionistic mechanisms that depend on synaptic efficacies.
Alternately, neurocomputations in central brain
structures might be realized by neural networks
that operate on the relative timings of spikes
(Licklider, 1951, 1959; Braitenberg, 1967; Abeles,
1990; Cariani, 1995, 1997a, 1999, 2000, 2001).
Neurons are then seen as coincidence detectors with
short time windows that analyze relative arrival
times of their respective inputs (Abeles, 1982; Carr,
1993). Although the first effective neurocomputational models for perception were time-delay networks that analyzed temporal correlations by
means of coincidence detectors and delay lines
(Jeffress, 1948; Licklider, 1951), few temporal neurocomputational models for memory have been
proposed (MacKay, 1962; Longuet-Higgins, 1987,
1989; Cariani, 2001).
The dearth of temporal models notwithstanding,
animals do appear to possess generalized capabilities for retaining the time course of events. Conditioning experiments suggest that the temporal
structure of both rewarded and unrewarded events
that occur during conditioning is explicitly stored,
such that clear temporal expectations are formed
(Miller and Barnet, 1993). Neural mechanisms are
capable of storing and retrieving temporal patterns
either by tuning dendritic and axonal time delays
to favor particular temporal combinations of in-
puts or by selecting for existing delays by adjusting
synaptic efficacies. By tuning or choosing delays
and connection weights, neural assemblies can be
constructed that are differentially sensitive to particular time patterns in their inputs. Assemblies can
also be formed that emit particular temporal patterns when activated (John and Schwartz, 1978). A
primary advantage of temporal pattern codes over
those that depend on dedicated lines is that the
information conveyed is no longer tied to particular
neural transmission lines, connections, and processing elements. Further, temporal codes permit
multiple kinds of information to be transmitted and
processed by the same neural elements (multiplexing) in a distributed, holograph-like fashion (Pribram, 1971).
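A small sketch makes the multiplexing point concrete (the spike times, delay templates, and tolerance below are invented for illustration): two temporal patterns share a single line, and each coincidence detector, tuned by its own set of input delays, responds only to the pattern whose intervals match its template.

    def pattern_detector(spike_times, delay_template, tolerance=0.5):
        # Fires when the train contains a spike followed by spikes at
        # each of the template's delays (within `tolerance` ms) -- i.e.
        # when all delayed copies arrive in coincidence.
        hits = 0
        for t0 in spike_times:
            if all(any(abs((t0 + d) - s) <= tolerance for s in spike_times)
                   for d in delay_template):
                hits += 1
        return hits

    pattern_a = [0.0, 3.0, 8.0]        # intervals of 3 ms then 5 ms
    pattern_b = [20.0, 24.0, 31.0]     # intervals of 4 ms then 7 ms
    line = sorted(pattern_a + pattern_b)   # both patterns on the same line

    print(pattern_detector(line, [3.0, 8.0]))    # -> 1: responds to pattern A
    print(pattern_detector(line, [4.0, 11.0]))   # -> 1: responds to pattern B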
Because information is distributed and not localized in particular synapses, such temporal
codes are potentially compatible with molecular
coding mechanisms (John, 1967). Polymer-based
molecular mechanisms for storing and retrieving
temporal patterns can also be envisioned in which
time patterns are transformed to linear distances
along polymer chains. A possible molecular mechanism would involve polymer-reading enzymes
that scan RNA or DNA molecules at a constant
rate (e.g. hundreds to thousands of bases/sec),
catalyzing bindings of discrete molecular markers
(e.g. methylations) whenever intracellular ionic
changes related to action potentials occurred.
Time patterns would thus be encoded in spatial
patterns of the markers. Readout would be accomplished by the reverse, where polymer-reading
enzymes encountering markers would trigger a
cascade of molecular events that would transiently
facilitate initiation of action potentials. Cell populations would then possess an increased capacity
to asynchronously regenerate temporal sequences
to which they have been previously exposed.12

12. See John (1967, 1972), John et al. (1973), Thatcher and John (1977), and Hendrickson and Hendrickson (1998) for longer discussions of alternative temporal mechanisms. Pattee’s polymer-based feedback shift register model of information storage (Pattee, 1961) was part of the inspiration for this mechanism. DNA methylation might be a candidate marker, since this mechanism is utilized in many other similar molecular contexts and there is an unexplained overabundance of DNA methyltransferase in brains relative to other tissues (Brooks et al., 1996).
Molecular memory mechanisms that were based
on DNA would be structurally stable, ubiquitous, superabundant, and might support genetically inheritable predispositions for particular
sensory patterns, such as species-specific bird
songs (Dudai, 1989).
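Such a constant-rate scanning mechanism can be caricatured in a few lines (the 1000 bases/s rate falls within the range mentioned above; the list-of-markers representation is purely illustrative): writing converts each spike time into a marker at a time-proportional position along the strand, and reading scans at the same rate to regenerate the stored time pattern.

    SCAN_RATE = 1000.0   # bases scanned per second (illustrative value)

    def encode(spike_times, length=5000):
        # "Write" enzyme: scans at a constant rate and deposits a discrete
        # marker (e.g. a methylated base) at the position reached when
        # each action-potential-related ionic transient occurs.
        strand = [0] * length
        for t in spike_times:
            pos = round(t * SCAN_RATE)   # time -> linear distance
            if pos < length:
                strand[pos] = 1
        return strand

    def decode(strand):
        # "Read" enzyme: scans at the same rate; each marker encountered
        # triggers a transient facilitation of firing, regenerating the
        # originally stored time pattern.
        return [pos / SCAN_RATE for pos, m in enumerate(strand) if m]

    print(decode(encode([0.005, 0.012, 0.030])))   # -> [0.005, 0.012, 0.03]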
Signal multiplexing and nonlocal storage of
information, whether through connectionist or
temporal mechanisms, permit broadcast strategies of neural integration. The global interconnectedness of cortical and subcortical structures
permits widespread sharing of information that
has attained some minimal threshold of global
relevance, in effect creating a ‘‘global
workspace’’ (Baars, 1988). The contents of such
a global workspace would become successively
elaborated, with successive sets of neurons contributing correlational annotations to the circulating pattern in the form of characteristic
pattern-triggered signal-tags. Such tags could
then be added on to the evolving global pattern
as indicators of higher-order associations and
form new primitives in their own right (Cariani,
1997a).
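Tag elaboration in such a workspace can be sketched as follows (the tags and trigger sets are invented for illustration): each assembly appends its characteristic tag whenever its trigger pattern is present in the circulating set, and newly added tags can themselves trigger higher-order assemblies, so the circulating pattern is annotated rather than reduced.

    # Each assembly: (trigger tags, characteristic tag it adds when fired).
    assemblies = [
        ({"red", "round"}, "apple-like"),
        ({"apple-like", "hungry"}, "approach-food"),   # higher-order assembly
    ]

    def elaborate(pattern, assemblies):
        pattern = set(pattern)
        changed = True
        while changed:            # iterate until no assembly adds a new tag
            changed = False
            for triggers, tag in assemblies:
                if triggers <= pattern and tag not in pattern:
                    pattern.add(tag)   # annotate the circulating pattern
                    changed = True
        return pattern

    print(sorted(elaborate({"red", "round", "hungry"}, assemblies)))
    # -> ['apple-like', 'approach-food', 'hungry', 'red', 'round']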
Traditionally, the brain has been conceived in
terms of sequential hierarchies of decision processes, where signals represent successively more
abstract aspects of a situation. As one moves to
higher and higher centers, information about
low-level properties is presumed to be discarded.
A tag system, however, elaborates, rather than
reduces, continually adding additional annotative dimensions. Depending upon attentional
and motivational factors, such a system would
distribute relevant information over wider and
wider neural populations. Rather than a feedforward hierarchy of feature-detections and narrowing decision-trees, a system based on
signal-tags would more resemble a heterarchy of
correlational pattern-amplifiers in which neural
signals are competitively facilitated, stabilized,
and broadcast to produce one dominant, elaborated pattern that ultimately steers the behavior
of the whole. There would then be bi-directional
influence between emergent global population-statistical patterns and those of local neural
populations. This comes very close to Pattee’s
concept of ‘‘statistical closure’’ (Pattee, 1973a),
which entails ‘‘the harnessing of the lower level
by the collective upper level’’. In terms of neural
signaling, local and global activity patterns interact, but the global patterns control the behavior of the organism as a unified whole.
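The competitive facilitation step can be sketched in the same spirit (gain, step count, and starting activations are arbitrary): candidate patterns are repeatedly self-facilitated and then normalized against one another, a shared-resource competition from which a single dominant pattern emerges for broadcast.

    def compete(activations, gain=2.0, steps=20):
        # Repeated self-facilitation (raising to a power > 1) plus global
        # normalization: small initial advantages are amplified until one
        # pattern dominates and the rest are suppressed.
        a = dict(activations)
        for _ in range(steps):
            a = {k: v ** gain for k, v in a.items()}       # facilitation
            total = sum(a.values())
            a = {k: v / total for k, v in a.items()}       # competition
        return a

    print(compete({"reach": 0.5, "withdraw": 0.4, "ignore": 0.1}))
    # -> 'reach' ~ 1.0; 'withdraw' and 'ignore' driven toward 0.0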
7. Semiotic and phenomenal aspects of neural
activity
Pattee’s ideas have many far-ranging implications for general theories of symbolic function.
His description of symbols as rate-independent,
non-holonomic constraints grounds semiotics in
physics. His mapping of the operations of the
observer to operations of the cell produces a
biosemiotic ‘‘cognitive biology.’’ Pattee’s concept
of ‘‘semantic closure’’ involves the means by
which an organism selects the interpretation of
its own symbols (Pattee, 1985). The high-level
semiotics of mental symbols, conceived in terms
of neural pattern resonances in the brain, can be
similarly outlined to explain how brains construct their own meanings (Pribram, 1971;
Thatcher and John, 1977; Freeman, 1995, 1999;
Cariani, in press). Such neurally based theories
of meaning imply constructivist psychology and
conceptual semantics (Lakoff, 1987; von
Glasersfeld, 1987, 1995).
Within the tripartite semiotic of Morris (1946),
one wants to account for relations of symbols to
other symbols (syntactics), relations of symbols to
the external world (semantics), and relations of
symbols to system-purposes (pragmatics) (Fig. 6).
Neural signal tags characteristic of a given neural
assembly in effect serve as markers of symbol type
that can be analyzed and sequenced without regard for their sensory origins or motor implications. The appearance of such characteristic tags
in neural signals would simply signify that a particular assembly had been activated. These tags
could be purely syntactic forms shorn of any semantic or pragmatic content. Other tags characteristic of particular kinds of sensory information could bear sensory-oriented semantic content.

Fig. 6. Semiotics of brain states. (A) Basic semiotic relations between symbol, world, and purpose: syntactics, semantics, and pragmatics. (B) Semiotic aspects of brain states. Semiotic functional division of labor via different sets of overlaid circuits. Neural assemblies in sensory and motor systems provide semantic linkages between central brain states and the external world. Assemblies that integrate and sequence internal representations for prediction, planning, and coordination implement syntactic linkages. Those that add evaluative components to neural signals (e.g. limbic system) implement pragmatic linkages. Phenomenal correlates of these semiotic aspects are sensations, thoughts, and motivational states (hungers, pains, drives, desires, emotions).
Tags characteristic of neural assemblies for planning and motor executions would bear action-oriented semantic content. Tags produced by neural
populations in the limbic system would indicate
hedonic, motivational, and emotive valences such
that these neural signal patterns would bear pragmatic content. These various kinds of neural signal tags that are characteristic of sensory, motor,
and limbic population responses would be added
through connections of central neural assemblies
to those populations. All of these different kinds
of neural signals would be multiplexed together, interacting on both local and global levels to
produce pattern resonances. Thus, in a multiplexed system, there can be divisions of labor
between neural populations, but the various neural signals that are produced need not be constantly kept separate on dedicated lines.
Characteristic differences between tags could be
based on different latencies of response, different
temporal patterns, differential activation of particular sets of inputs, or even differential use of
particular kinds of neurotransmitters.
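This division of labor can be pictured as a simple data structure (the field names and example tags are invented): a single multiplexed signal carries syntactic, semantic, and pragmatic tags together rather than segregating them on dedicated lines.

    from dataclasses import dataclass, field

    @dataclass
    class NeuralSignal:
        # One multiplexed signal pattern carrying tags of the three
        # semiotic kinds; all circulate together on shared lines.
        syntactic: set = field(default_factory=set)   # assembly-identity tags
        semantic: set = field(default_factory=set)    # sensory/motor-linked tags
        pragmatic: set = field(default_factory=set)   # limbic/evaluative tags

    sig = NeuralSignal()
    sig.syntactic.add("assembly-117")    # marks which assembly was activated
    sig.semantic.add("sweet-taste")      # sensory-oriented content
    sig.pragmatic.add("reward(+0.8)")    # hedonic valence added by limbic tags
    print(sig)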
Which role a particular kind of tag would play
would depend on its functional position within the
system. Linkages between particular sensory patterns and motivational evaluations could be
formed that add tags related to previous reward
or punishment history, thereby adding to a sensory pattern a hedonic marker. In this way, pragmatic meanings (‘‘intentionality’’) could be
conferred on sensory representations (‘‘intensionality’’).13 Pragmatic meanings could be similarly
attached to representations involved in motor
planning and execution. Such emotive, motivational factors play a predominant role in steering
everyday behavior (Hardcastle, 1999). Neural signal tags with different characteristics could thus
differentiate patterns that encode the syntactic,
semantic, and pragmatic aspects of an elaborated neural activity pattern. In the wake of an action that had hedonic salience, associations between all such co-occurring tags would then be stored in memory. The system would thus build up learned expectations of the manifold hedonic consequences of percepts and actions. When similar circumstances presented themselves, memory traces containing all of the hedonic consequences would be read out to facilitate or inhibit particular action alternatives, depending upon whether percept-action sequences in past experience had resulted in pleasure or pain. Such a system, which computes conditional probabilities weighted by hedonic relevance, is capable of one-shot learning.

13. What we call here semantics and pragmatics are often called the ‘‘intensional’’ and ‘‘intentional’’ aspects of symbols (Nöth, 1990). Semantics and pragmatics have often been conflated, with injury to both concepts. Freeman (1999) argues that we should also separate intent (forthcoming, directed action) from motive (purpose). Many realist and model-theoretic frameworks that have dominated the philosophy of language and mind for the last half century ignore the limited, situated, purpose-laden nature of the observer (Bickhard and Terveen, 1995). Realist philosophers, e.g. Fodor (1987), have defined ‘‘meaning’’ in such a way that it precludes any notion that is brain-bound and therefore admits of individual psychological differences and constructive capacities [cf. Lakoff’s (1987) critique of ‘‘objectivism’’]. Contra Fodor and Putnam, meaning can and does lie in the head. The neglect of the self-constructing and expansive nature of the observer’s categories has impeded the development of systems that are thoroughly imbued with purpose, directly connected to their environs, and capable of creating their own conceptual primitives (Cariani, 1989; Bickhard and Terveen, 1995).
A system so organized creates its own concepts
and meanings that are thoroughly imbued with
purpose. Formation of new neural assemblies is
thus a means by which the brain can construct
adaptively what are in effect new measuring
devices that make new distinctions on an internal
milieu that is richly coupled to the external world
(Cariani, 1998a).
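One-shot hedonic learning of this kind can be sketched directly (the tags, overlap-weighting rule, and hedonic values are invented): a single experience stores the co-occurring tags together with their hedonic outcome, and later contexts sharing those tags read the stored value back out to facilitate or inhibit the associated action.

    memory = []   # (context tags, action, hedonic outcome) -- one per episode

    def experience(context, action, hedonic_value):
        # One-shot storage: a single episode suffices to lay down a trace.
        memory.append((frozenset(context), action, hedonic_value))

    def evaluate(context, action):
        # Read out stored hedonic consequences, weighted by how much of
        # each trace's context recurs in the present one.
        context = set(context)
        score = 0.0
        for tags, act, value in memory:
            if act == action and tags & context:
                score += value * len(tags & context) / len(tags)
        return score

    experience({"flame", "bright"}, "touch", -1.0)   # burned once
    print(evaluate({"flame", "flicker"}, "touch"))   # -> -0.5: inhibits touching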
Finally, we know first hand that brains are
material systems capable of supporting conscious
awareness.14 These classes of linkages between
neural patterns produced by sensory inputs (external semantics), those produced by internal coordinations (syntactics), and those produced by
intrinsic goal states may have correspondences in
the structure of experience. Those neural signal
patterns that are produced by processes that are
contingent relative to the internal self-production of signals resemble measurement processes, and
these are experienced as sensations. Ordered sequences of neural signal patterns generated from
within the system would have the character of
successions of mental symbols, and these would
be experienced as thoughts. Those internal patterns that were related to goal-states have the
character of system imperatives to adjust behavior, and these would be experienced as desires and
pains. Actions would be experienced through their effects on perceptions, exteroceptive and proprioceptive, sensory and hedonic.

14. We discuss elsewhere whether activation of particular neurons is sufficient for conscious awareness or whether this depends instead on coherent organizations of neural activity (Cariani, 2000).
As in the case of a scientific model, an epistemic
cut could be drawn at the point of contingency,
where the control of the nervous system ends and
sensory inputs become dependent at least in part
on the environment. This might then explain why,
when wielding a stick, the boundaries of one’s
body appear to move outward to the end of the
stick, as well as why we cease to experience as
sensations those processes that become reliably
controlled from within. This raises the possibility
that the structure of awareness is isomorphic to
the functional organization of informational process in the brain and, on a more abstract level, to
the operational structure of the ideal observer.
8. Conclusions
Using concepts developed and elaborated by
Howard Pattee, we have outlined common, fundamental roles that symbols might play in life and
mind. The organism produces and reproduces itself using genetic codes, while the mind continually regenerates its own organization through
neural codes. We then considered commonalities
between epistemic processes of organisms and
brains and the operational structure of scientific
models. The various roles of symbolic, dynamics-based, and neurocomputational descriptions were
then evaluated in terms of the different aspects of
brain function that they illuminate. We then took
up the problem of neural coding and asked
whether brains require memory mechanisms that
perform organizational functions analogous to
those of genetic information in cells. A high-level
conception of the brain that combines self-production of neural signals and percept-action loops
was proposed, and the semiotic relations in such
systems were discussed. Finally, we briefly examined high-level similarities between the structure
of awareness and the operational structure of the
observer, and pondered whether self-regenerative
organization is essential to life, mind, and even
conscious awareness itself. The deep insights of
Howard Pattee into the essentials of biological
organization have proven invaluable in our
difficult but rewarding quest to understand how
brains work such that they can construct their
own meanings.
Acknowledgements
I owe a profound intellectual debt to Howard
Pattee, who introduced me to the world of symbols. I could not have asked for a more intellectually engaged and engaging mentor. The most
enduring lesson Howard taught me is the necessity of continuing to ask fundamental questions in
the face of a world obsessed with the accumulation of little facts. In the early stages of this
paper, I was much provoked by discussions with
the late Alan Hendrickson, who was searching for
molecular mechanisms for encoding time patterns.
Our conversations and his unpublished manuscript on the engram prompted me to think
about the stabilization of organization that memory provides and to consider possible molecular
storage mechanisms. This work was supported by
grant DC3054 from the National Institute on Deafness and Other Communication Disorders of the
National Institutes of Health.
References
Abeles, M., 1982. Role of the cortical neuron: integrator or
coincidence detector. Isr. J. Med. Sci. 18, 83 – 92.
Abeles, M., 1990. Corticonics. Cambridge University Press,
Cambridge.
Arbib, M.A., 1989. The Metaphorical Brain 2: Neural Nets
and Beyond. John Wiley, New York.
Ashby, W.R., 1960. Design for a Brain. Chapman & Hall,
London.
Baars, B.J., 1988. A Cognitive Theory of Consciousness. Cambridge University Press, Cambridge.
Basar, E., 1989. Brain natural frequencies are causal factors
for resonances and induced rhythms. In: Basar, E., Bullock, T.H. (Eds.), Brain Dynamics. Springer-Verlag, Berlin,
pp. 425 – 457.
Beurle, R.L., 1956. Properties of a mass of cells capable of
regenerating pulses. Phil. Trans. Roy. Soc. Lond. B 240,
55 – 94.
Bickhard, M.H., Terveen, L., 1995. Foundational Issues in
Artificial Intelligence and Cognitive Science: Impasse and
Solution. Elsevier, New York.
Braitenberg, V., 1967. Is the cerebellar cortex a biological
clock in the millisecond range? Prog. Brain Res. 25, 334 –
346.
Bridgman, P.W., 1936. The Nature of Physical Theory. Princeton University Press, Princeton, NJ.
Brooks, P.J., Marietta, C., Goldman, D., 1996. DNA mismatch repair and DNA methylation in adult brain neurons. J. Neurosci. 16, 939 – 945.
Carello, C., Turvey, M.T., Kugler, P.N., Shaw, R.E., 1984.
Inadequacies of the computer metaphor. In: Gazzaniga,
M.S. (Ed.), Handbook of Cognitive Neuroscience. Plenum
Press, New York, pp. 229 – 248.
Cariani, P.A., 1989. On the Design of Devices with Emergent
Semantic Functions. Ph.D. thesis, State University of New
York at Binghamton. University Microfilms, Ann Arbor,
MI.
Cariani, P., 1992a. Emergence and artificial life. In: Langton,
C.G., Taylor, C., Farmer, J.D., Rasmussen, S. (Eds.),
Artificial Life II. Santa Fe Institute Studies in the Science
of Complexity, vol. 10. Addison-Wesley, Redwood City,
CA, pp. 775 – 798.
Cariani, P., 1992b. Some epistemological implications of
devices which construct their own sensors and effectors. In:
Varela, F., Bourgine, P. (Eds.), Towards a Practice of
Autonomous Systems. MIT Press, Cambridge, MA, pp.
484 –493.
Cariani, P., 1993. To evolve an ear: epistemological implications of Gordon Pask’s electrochemical devices. Syst. Res.
10 (3), 19 – 33.
Cariani, P., 1995. As if time really mattered: temporal strategies for neural coding of sensory information. Communication and Cognition –Artificial Intelligence (CC – AI) 12
(1–2), 161–229. Reprinted in: K. Pribram (Ed.), Origins:
Brain and Self-Organization, Lawrence Erlbaum, Hillsdale,
NJ, 1994, pp. 208 – 252.
Cariani, P., 1997a. Emergence of new signal-primitives in
neural networks. Intellectica 1997 (2), 95 – 143.
Cariani, P., 1997b. Temporal coding of sensory information.
In: Bower, J.M. (Ed.), Computational Neuroscience:
Trends in Research. Plenum, New York, pp. 591 –598.
Cariani, P., 1998a. Epistemic autonomy through adaptive
sensing. In: Proceedings of the 1998 IEEE International Symposium on Intelligent Control (ISIC), held jointly with the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA) and the Intelligent Systems and Semiotics (ISAS) Conference, A Joint Conference on the Science and Technology of Intelligent Systems,
Sept. 14 – 17, 1998. National Institute of Standards and
Technology, Gaithersburg, MD, pp. 718 – 723.
Cariani, P., 1998b. Towards an evolutionary semiotics: the
emergence of new sign-functions in organisms and devices.
In: Van de Vijver, G., Salthe, S., Delpos, M. (Eds.),
Evolutionary Systems. Kluwer, Dordrecht, pp. 359 – 377.
Cariani, P., 1999. Temporal coding of periodicity pitch in the
auditory system: an overview. Neural Plasticity 6 (4), 147 –
172.
Cariani, P., 2000. Anesthesia, neural information processing,
and conscious awareness. Consciousness and Cognition 9
(3), 387–395.
Cariani, P., 2001. Neural timing nets for auditory computation. In: Greenberg, S., Slaney, M. (Eds.), Computational Models of Auditory Function. IOS Press, Amsterdam, pp. 139–152.
Cariani, P., in press. Cybernetics and the Semiotics of Translation, in: Tra segni. Athanor. Semiotica, Filosofia, Arte,
Letterature, XI, 2, 200.
Cariani, P.A., Delgutte, B., 1996. Neural correlates of the
pitch of complex tones. I. Pitch and pitch salience. II. Pitch
shift, pitch ambiguity, phase-invariance, pitch circularity,
and the dominance region for pitch. J. Neurophysiol. 76
(3), 1698 – 1734.
Carr, C.E., 1993. Processing of temporal information in the
brain. Annu. Rev. Neurosci. 16, 223 – 243.
Chandler, J.L.R., Van de Vijver, G., 2000. Closure: Emergent
Organizations and their Dynamics. Vol. 94, Annals of the
New York Academy of Sciences, New York.
Churchland, P.S., Sejnowski, T.J., 1992. The Computational
Brain. MIT Press, Cambridge.
Conrad, M., 1998. Towards high evolvability dynamics. In:
Van de Vijver, G., Salthe, S., Delpos, M. (Eds.), Evolutionary Systems. Kluwer, Dordrecht, pp. 33 – 43.
Cowan, J.D., 1965. The problem of organismic reliability. In:
Wiener, N., Schade, J.P. (Eds.), Cybernetics of the Nervous System. Elsevier, Amsterdam, pp. 9 – 63.
de Latil, P., 1956. Thinking by Machine. Houghton Mifflin,
Boston, MA.
Dudai, Y., 1989. The Neurobiology of Memory. Oxford University Press, Oxford.
Edelman, G.M., 1987. Neural Darwinism: The Theory of
Neuronal Group Selection. Basic Books, New York.
Eigen, M., 1974. Molecules, information, and memory: from
molecular to neural networks. In: Schmitt, F.O., Worden,
F.G. (Eds.), The Neurosciences: A Third Study Program.
MIT Press, Cambridge, MA, pp. 1 – 10.
Etxeberria, A., 1998. Embodiment of natural and artificial
agents. In: van de Vijver, G., Salthe, S., Delpos, M. (Eds.),
Evolutionary Systems. Kluwer, Dordrecht.
Fodor, J., 1987. Psychosemantics. The Problem of Meaning in
the Philosophy of Mind. MIT Press, Cambridge, MA.
Freeman, W.J., 1975. Mass Action in the Nervous System.
Academic Press, New York.
Freeman, W.J., 1995. Societies of Brains. A Study in the
Neuroscience of Love and Hate. Lawrence Erlbaum, New
York.
Freeman, W.J., 1999. Consciousness, intentionality, and causality. In: Núñez, R., Freeman, W.J. (Eds.), Reclaiming Cognition. Imprint Academic, Thorverton, UK, pp. 143–172. Reprint of J. Conscious. Stud. 6 (11–12).
Gerard, R.W., 1959. Neurophysiology: an integration
(molecules, neurons, and behavior). In: Field, J., Magoun,
H.W., Hall, V.E. (Eds.), Handbook of Physiology: Neurophysiology, vol. II. American Physiological Society, Washington, DC, pp. 1919 – 1965.
Greene, P.H., 1962. On looking for neural networks and cell
assemblies that underlie behavior. I. Mathematical model.
II. Neural realization of a mathematical model. Bull.
Math. Biophys. 24, 247 – 275, 395 –411.
Grossberg, S., 1988. The Adaptive Brain, vols. I – II. Elsevier,
New York.
Grossberg, S., 1995. Neural dynamics of motion perception,
recognition learning, and spatial attention. In: Port, R.F.,
van Gelder, T. (Eds.), Mind as Motion: Explorations in the
Dynamics of Cognition. MIT Press, Cambridge, MA, pp.
449 –490.
Haken, H., 1983. Synopsis and introduction. In: Basar, E.,
Flohr, H., Haken, H., Mandell, A.J. (Eds.), Synergetics of
the Brain. Springer-Verlag, Berlin, pp. 3 – 27.
Haken, H., 1991. Synergetic Computers and Cognition.
Springer-Verlag, Berlin.
Hall, T.S., 1969. Ideas of Life and Matter: Studies in the History of General Physiology, 600 B.C.–1900 A.D., 2 vols. University of Chicago, Chicago, IL.
Hardcastle, V.G., 1999. It’s O.K. to be complicated: the case of emotion. In: Núñez, R., Freeman, W.J. (Eds.), Reclaiming Cognition. Imprint Academic, Thorverton, UK, pp. 237–249. Reprint of J. Conscious. Stud. 6 (11–12).
Hebb, D.O., 1949. The Organization of Behavior. Simon and
Schuster, New York.
Hebb, D.O., 1966. A Textbook of Psychology, second ed. W.B.
Saunders, Philadelphia, PA.
Hendrickson, A.E., Hendrickson, D.E., 1998. The Engram: The Neural Code and the Molecular and Cellular Basis of Learning and Memory. Unpublished manuscript, Verbier, Switzerland.
Hertz, H., 1894. Principles of Mechanics. Dover, New York, 1956 reprint.
Jeffress, L.A., 1948. A place theory of sound localization. J.
Comp. Physiol. Psychol. 41, 35 – 39.
John, E.R., 1967. Mechanisms of Memory. Wiley, New York.
John, E.R., 1972. Switchboard vs. statistical theories of learning and memory. Science 177, 850 – 864.
John, E.R., 1976. A model of consciousness. In: Schwartz, G.E., Shapiro, D. (Eds.), Consciousness and Self-regulation, vol. 1. Plenum, New York, pp. 1–50.
John, E.R., 1988. Resonating fields in the brain and the
hyperneuron. In: Basar, E. (Ed.), Dynamics of Sensory and
Cognitive Processing by the Brain. Springer-Verlag, Berlin,
pp. 368 – 377.
John, E.R., 1990. Representation of information in the brain.
In: John, E.R. (Ed.), Machinery of the Mind. Birkhauser,
Boston, MA, pp. 27–56.
John, E.R., Bartlett, F., Shimokochi, M., Kleinman, D., 1973.
Neural readout from memory. J. Neurophysiol. 36 (5),
893 – 924.
John, E.R., Schwartz, E.L., 1978. The neurophysiology of
information processing and cognition. Ann. Rev. Psychol.
29, 1 – 29.
Kampis, G., 1991a. Emergent computations, life, and cognition. World Futures 32 (2-3), 95 – 110.
Kampis, G., 1991b. Self-Modifying Systems in Biology and
Cognitive Science. Pergamon Press, Oxford.
Katchalsky, A.K., Rowland, V., Blumenthal, R., 1972. Dynamic Patterns of Brain Cell Assemblies. Neurosciences Research Program Bulletin 12 (1), 1–187.
Kauffman, S., 1993. The Origins of Order. Oxford University
Press, New York.
Kelso, J.A.S., 1995. Dynamic Patterns: The Self-Organization
of Brain and Behavior. MIT Press, Cambridge, MA.
Köhler, W., 1951. Relational determination in perception. In:
Jeffress, L.A. (Ed.), Cerebral Mechanisms in Behavior: The
Hixon Symposium. Wiley, New York, pp. 200 – 243.
Kugler, P.N., Shaw, R., 1990. On the role of symmetry and
symmetry-breaking in thermodynamics and epistemic engines. In: Haken, H. (Ed.), Synergetics of Cognition.
Springer Verlag, Heidelberg, pp. 296 – 331.
Kugler, P.N., Turvey, M.T., 1987. Information, Natural Law,
and the Self-assembly of Rhythmic Movement. Lawrence
Erlbaum Associates, Hillsdale, NJ.
Lakoff, G., 1987. Women, Fire, and Dangerous Things: What
Categories Reveal about the Mind. University of Chicago,
Chicago, IL.
Lashley, K.S., 1998. The problem of cerebral organization in
vision. In: Orbach, J. (Ed.), The Neuropsychological Theories of Lashley and Hebb. University Press of America,
Lanham, MD, pp. 159–176. Reprinted from: Biol. Symp. 7 (1942), 301–322.
Licklider, J.C.R., 1951. A duplex theory of pitch perception.
Experientia VII (4), 128 – 134.
Licklider, J.C.R., 1959. Three auditory theories. In: Koch, S.
(Ed.), Psychology: A Study of a Science. Study I. Conceptual and Systematic. McGraw-Hill, New York, pp. 41 –
144.
Longuet-Higgins, H.C., 1987. Mental Processes: Studies in
Cognitive Science. The MIT Press, Cambridge, MA.
Longuet-Higgins, H.C., 1989. A mechanism for the storage of
temporal correlations. In: Durbin, R., Miall, C.,
Mitchison, G. (Eds.), The Computing Neuron. AddisonWesley, Wokingham, UK, pp. 99 – 104.
Lorente de Nó, R., Fulton, J.F., 1949. Cerebral cortex: architecture, intracortical connections, motor projections
(1933). In: Fulton, J.F. (Ed.), Physiology of the Nervous
System. Oxford University Press, New York, pp. 288 – 330.
MacKay, D.G., 1987. The Organization of Perception and
Action. Springer-Verlag, New York.
MacKay, D.M., 1962. Self-organization in the time domain.
In: Yovitts, M.C., Jacobi, G.T., Goldstein, G.D. (Eds.),
Self-Organizing Systems. Spartan Books, Washington, DC,
pp. 37– 48.
Marr, D., 1991. From the Retina to the Neocortex: Selected
Papers of David Marr. Birkhäuser, Boston, MA.
Maturana, H., 1970. The biology of cognition. In: Maturana,
H., Varela, F. (Eds.), Autopoiesis and Cognition. D.
Reidel, Dordrecht.
Maturana, H.R., 1981. Autopoiesis. In: Zeleny, M. (Ed.),
Autopoiesis: A Theory of the Living. North Holland, New
York.
McCulloch, R., 1989. Collected Works of Warren McCulloch,
vols. 1 – 4. Intersystems Publications, Salinas, CA.
McCulloch, W.S., 1946. A heterarchy of values determined by
the topology of nervous nets. Bull. Math. Biophys. 7 (2),
89 – 93.
McCulloch, W.S., 1947. Modes of functional organization of
the cerebral cortex. Fed. Proc. 6, 448 – 452.
McCulloch, W.S., 1965. Embodiments of Mind. MIT Press,
Cambridge, MA.
McCulloch, W.S., 1969a. Of digital oscillators. In: Leibovic,
K.N. (Ed.), Information Processing in the Nervous System.
Springer Verlag, New York, pp. 293 – 296.
McCulloch, W.S., 1969b. Regenerative loops. J. Nerv. Ment.
Dis. 149 (1), 54 – 58.
McCulloch, W.S., Pitts, W.H., 1943. A logical calculus of the
ideas immanent in nervous activity. In: McCulloch, W.S.
(Ed.), Embodiments of Mind. MIT Press, Cambridge, MA,
pp. 19 –39.
Meddis, R., Hewitt, M.J., 1991. Virtual pitch and phase
sensitivity of a computer model of the auditory periphery.
J. Acoust. Soc. Am. 89 (6), 2866 – 2894.
Mesulam, M.-M., 1998. From sensation to perception. Brain
121, 1013 – 1052.
Michaels, C.E., Carello, C., 1981. Direct Perception. Prentice-Hall, Englewood Cliffs, NJ.
Miller, R.R., Barnet, R.C., 1993. The role of time in elementary associations. Curr. Direct. Psychol. Sci. 2 (4), 106 –
111.
Minch, E., 1987. The Representation of Hierarchical Structure in Evolving Networks. State University of New York at Binghamton.
Mingers, J., 1995. Self-Producing Systems. Plenum Press, New
York.
Modrak, D.K., 1987. Aristotle: The Power of Perception.
University of Chicago, Chicago, IL.
Morrell, F., 1967. Electrical signs of sensory coding. In: Quarton, G.C., Melnechuk, T., Schmitt, F.O. (Eds.), The
Neurosciences: A Study Program. Rockefeller University
Press, New York, pp. 452 – 469.
Morris, C., 1946. Signs, Language, and Behavior. George
Braziller, New York.
Mountcastle, V., 1967. The problem of sensing and the neural
coding of sensory events. In: Quarton, G.C., Melnechuk,
T., Schmitt, F.O. (Eds.), The Neurosciences: A Study
Program. Rockefeller University Press, New York.
Mountcastle, V., 1993. Temporal order determinants in a
somatosthetic frequency discrimination: sequential order
coding. Ann. NY Acad. Sci. 682, 151 – 170.
Mumford, D., 1994. Neuronal architectures for pattern-theoretic problems. In: Koch, C., Davis, J.L. (Eds.), LargeScale Neuronal Theories of the Brain. MIT Press,
Cambridge, MA, pp. 125 – 152.
Murdoch, D., 1987. Niels Bohr’s Philosophy of Physics. Cambridge University Press, Cambridge.
Nöth, W., 1990. Handbook of Semiotics. Indiana University Press, Indianapolis, IN.
Nunez, P.L., 1995. Towards a physics of neocortex. In: Nunez,
P.L. (Ed.), Neocortical Dynamics and Human EEG
Rhythms. Oxford University Press, New York, pp. 68 –
132.
Pask, G., 1960. The natural history of networks. In: Yovits,
M.C., Cameron, S. (Eds.), Self-organizing Systems. Pergamon Press, New York, pp. 232 – 263.
Pask, G., 1981. Organizational closure of potentially conscious
systems. In: Zeleny, M. (Ed.), Autopoiesis: A Theory of
Living Organization. North Holland, New York, pp. 265 –
308.
Pattee, H.H., 1961. On the origin of macromolecular sequences. Biophys. J. 1, 683 – 709.
Pattee, H.H., 1969. How does a molecule become a message?
Dev. Biol. 3 (Suppl.), 1 – 16.
Pattee, H.H., 1973a. The physical basis of the origin of
hierarchical control. In: Pattee, H. (Ed.), Hierarchy Theory: The Challenge of Complex Systems. George Braziller,
New York.
Pattee, H.H., 1973b. Physical problems in the origin of natural
controls. In: Locker, A. (Ed.), Biogenesis, Homeostasis,
Evolution. Pergamon Press, New York.
Pattee, H.H., 1974. Discrete and continuous processes in computers and brains. In: Guttinger, W., Conrad, M., Dal Cin,
M. (Eds.), The Physics and Mathematics of the Nervous
System. Springer-Verlag, New York.
Pattee, H.H., 1979. The complementarity principle and the
origin of macromolecular information. Biosystems 11,
217 – 226.
Pattee, H.H., 1982. Cell psychology: an evolutionary view of
the symbol-matter problem. Cogn. Brain Theory 5, 325 –
341.
Pattee, H.H., 1985. Universal principles of measurement and
language functions in evolving systems. In: Casti, J.L.,
Karlqvist, A. (Eds.), Complexity, Language, and Life:
Mathematical Approaches. Springer-Verlag, Berlin, pp.
268 – 281.
Pattee, H.H., 1990. The measurement problem in physics,
computation, and brain theories. In: Cavallo, M.E. (Ed.),
Nature, Cognition, and System. Kluwer, Winschoten.
Pattee, H.H., 1995. Evolving self-reference: matter, symbols,
and semantic closure. Commun. Cogn.– Artif. Intell. (CC –
AI) 12 (1-2), 9 – 27.
Pattee, H.H., 1996. The problem of observables in models of
biological organizations. In: Khalil, E.L., Boulding, K.E.
(Eds.), Evolution, Order, and Complexity. Routledge, London, pp. 249 – 264.
Pattee, H.H., 2001. The physics of symbols: bridging the
epistemic cut. Biosystems, 60, 5 – 21.
Perkel, D.H., Bullock, T.H., 1968. Neural coding. Neurosci.
Res. Prog. Bull. 6 (3), 221 –348.
Piatelli-Palmarini, M., 1980. Language and Learning. The
Debate between Jean Piaget and Noam Chomsky. Harvard
University Press, Cambridge, MA.
Powers, W., 1973. Behavior: The Control of Perception.
Aldine, New York.
Pribram, K.H., 1971. Languages of the Brain: Experimental
Paradoxes and Principles in Neurophysiology. Prentice-Hall, New York.
Pylyshyn, Z., 1984. Computation and Cognition. MIT Press,
Cambridge, MA.
Rashevsky, N., 1960. Mathematical Biophysics: Physico-Mathematical Foundations of Biology, vols. I–II. Dover,
New York.
Rieke, F., Warland, D., de Ruyter van Steveninck, R., Bialek,
W., 1997. Spikes: Exploring the Neural Code. MIT Press,
Cambridge, MA.
Rocha, L., 1996. Eigen-states and symbols. Syst. Res. 13 (3),
371 – 384.
Rocha, L., 1998. Selected self-organization and the semiotics
of evolutionary systems. In: Van de Vijver, G., Salthe, S.,
Delpos, M. (Eds.), Evolutionary Systems. Kluwer, Dordrecht, pp. 341 – 358.
Rosen, R., 1971. Some realizations of (M,R) systems and their
interpretation. J. Math. Biophys. 33, 303 – 319.
Rosen, R., 1973a. On the generation of metabolic novelties in
evolution. In: Locker, A. (Ed.), Biogenesis, Homeostasis,
Evolution. Pergamon Press, New York.
Rosen, R., 1973b. On the relation between structural and
functional descriptions of biological systems. In: Conrad,
M., Magar, E.M. (Eds.), The Physical Principles of Neuronal and Organismic Behavior. Gordon and Breach, London, pp. 227 – 232.
Rosen, R., 1978. Fundamentals of Measurement and Representation of Natural Systems. North-Holland, New York.
Rosen, R., 1985. Anticipatory Systems. Pergamon Press,
Oxford.
Rosen, R., 1986. Causal structures in brains and machines.
Int. J. Gen. Syst. 12, 107 – 126.
Rosen, R., 1991. Life Itself. Columbia University Press, New
York.
Rosen, R., 2000. Essays on Life Itself. Columbia University
Press, New York.
Schyns, P.G., Goldstone, R.L., Thibaut, J.-P., 1998. The development of features in object concepts. Behav. Brain Sci.
21 (1), 1 –54.
Squire, L.R., 1987. Memory and Brain. Oxford University
Press, New York.
Tank, D.W., Hopfield, J.J., 1987. Neural computation by
concentrating information in time. Proc. Natl. Acad. Sci.
USA 84, 1896 – 1900.
Thatcher, R.W., John, E.R., 1977. Foundations of Cognitive
Processes. Functional Neuroscience, vol. I. Lawrence
Erlbaum, Hillsdale, NJ.
Trehub, A., 1991. The Cognitive Brain. MIT Press,
Cambridge.
Umerez, J., 1998. The evolution of the symbolic domain in
living systems and artificial life. In: van de Vijver, G.,
Salthe, S., Delpos, M. (Eds.), Evolutionary Systems.
Kluwer, Dordrecht.
Uttal, W.R., 1973. The Psychobiology of Sensory Coding.
Harper and Row, New York.
van Gelder, T., Port, R.F., 1995. It’s about time: an overview
of the dynamical approach. In: Port, R.F., van Gelder, T.
(Eds.), Mind as Motion: Explorations in the Dynamics of
Cognition. MIT Press, Cambridge, MA, pp. 1 – 44.
Varela, F., 1979. Principles of Biological Autonomy. North
Holland, New York.
von Foerster, H., 1984a. Observing Systems. Intersystems
Press, Seaside, CA.
von Foerster, H., 1984b. On constructing a reality. In: Watzlawick, P. (Ed.), The Invented Reality. W.W. Norton,
New York, pp. 41– 62.
von Glasersfeld, E., 1987. The Construction of Knowledge:
Contributions to Conceptual Semantics. Intersystems
Press, Salinas, CA.
von Glasersfeld, E., 1995. Radical Constructivism: A Way of
Knowing and Learning. The Falmer Press, London.
von Neumann, J., 1951. The general and logical theory of
automata. In: Jeffress, L.A. (Ed.), Cerebral Mechanisms of
Behavior (the Hixon Symposium). Wiley, New York, pp.
1 –41.
von Neumann, J., 1955. Mathematical Foundations of Quantum Mechanics. Princeton University Press, Princeton, NJ.
von Neumann, J., 1958. The Computer and the Brain. Yale
University Press, New Haven.
von Uexküll, J., 1926. Theoretical Biology. Harcourt, Brace
and Co, New York.
Weyl, H., 1949a. Philosophy of Mathematics and Natural
Science. Princeton University Press, Princeton, NJ.
Weyl, H., 1949b. Wissenschaft als symbolische Konstruktion des Menschen [Science as symbolic construction of man]. In: Eranos Jahrbuch, pp. 427–428, as quoted in: Holton, G., 1988. Thematic Origins of Scientific Thought. Harvard University Press, Cambridge, MA.