
Biosystems special issue on “Physics and evolution of symbols and codes”, in press (2001)

Symbols and dynamics in the brain

Peter Cariani
Eaton Peabody Laboratory for Auditory Physiology, Massachusetts Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114 USA

Keywords: symbols, dynamical systems, neurocomputation, emergence, self-organization, adaptive systems, epistemology, biological cybernetics, genetic code, neural code, biological semiotics, evolutionary robotics

1. Abstract

The work of physicist and theoretical biologist Howard Pattee has focused on the roles that symbols and dynamics play in biological systems. Symbols, as discrete functional switching-states, are seen at the heart of all biological systems in the form of genetic codes, and at the core of all neural systems in the form of informational mechanisms that switch behavior. They also appear in one form or another in all epistemic systems, from informational processes embedded in primitive organisms to individual human beings to public scientific models. Over its course, Pattee’s work has explored 1) the physical basis of informational functions (dynamical vs. rule-based descriptions, switching mechanisms, memory, symbols), 2) the functional organization of the observer (measurement, computation), 3) the means by which information can be embedded in biological organisms for purposes of self-construction and representation (as codes, modeling relations, memory, symbols), and 4) the processes by which new structures and functions can emerge over time. We discuss how these concepts can be applied to a high-level understanding of the brain. Biological organisms constantly reproduce themselves as well as their relations with their environs. The brain similarly can be seen as a self-producing, self-regenerating neural signaling system and as an adaptive informational system that interacts with its surrounds in order to steer behavior.

2. Symbols in self-production and in percept-action loops

Theoretical biology has long attempted to answer fundamental questions concerning the nature of life itself, its origins, and its evolution. Over four decades, Howard Pattee has articulated a series of questions that concern the origins and evolutions of structural stability, hierarchical organization, functional autonomy, informational process, and epistemic relation. These go to the heart of how cognitive systems are grounded in their material, biological substrates.

Organisms are dynamic material systems that constantly reproduce their own material organization. In order to persist, organisms must maintain both internal and external balance. They must simultaneously create a stable, internal milieu through self-production and establish stable, sustainable relations with their surrounds. Symbols play fundamental roles in each of these realms. DNA sequences constrain self-production and reproduction. In percept-action loops, nervous systems continuously engage in informational transactions with their external environments to adaptively steer behavior.

As a physicist, Pattee has always been deeply interested in what differentiates organisms from other material systems. How do we distinguish living from nonliving systems? Are systems “living” by virtue of special parts and/or relations (e.g. DNA, RNA, proteins) or by virtue of the coherent organization of their constituent processes?
In physics, the discovery of universal, natural laws in organizationally-simple systems is paramount, while the more complex organisms of biology are most intelligible in terms of special constraints that capture the essential organizational and informational relations that make an organism a living system. A physics of biology must therefore grapple with questions of organization, information, and function.

Pattee has been deeply interested in the role of physically-embodied symbols in the ongoing self-production of the organism (Pattee 1961). Informational function in a biological system involves the switching of states by configurational rather than energetic means. While two different strands of DNA may have essentially the same energetics, large differences in cellular and organismic behavior can arise purely from the different sequences of symbols that they carry. The central role of discrete, genetic coding mechanisms in biological organisms prompted Pattee to pose a series of fundamental questions. What does it mean to say that there is a “code” in a natural system? What distinguishes a non-informational process from an informational one? How do the latter evolve from the former, or in Pattee’s (1969) words, “how does a molecule become a message?” Must all life depend upon a genetic code? If so, must the informational vehicles be discrete tokens, or might simple analog, metabolic self-production suffice?

In addition to their internal role in self-production, informational processes play critical roles in interactions with external environments. These processes form the basis of biological epistemology, i.e. a “cognitive biology.” Organisms sense their surrounds, anticipate what actions are appropriate, and act accordingly. In perception, internal informational patterns are contingent upon the interactions of sensory receptors with an external environment. These sensory “representations” inform anticipatory predictions that determine which actions are likely to lead to outcomes that fulfill biological system-goals (e.g. homeostasis, nutrition, reproduction). The predictive decision process switches between the different alternative behavioral responses that are available to the organism. Actions are thus coordinated with percepts in a manner that facilitates effective, survival-enhancing behavior.

The operations of perception, coordination-anticipation, and action in the organism become the measurements, predictive computations, and actions of generalized observer-actors. The stimulus-contingent actions of sensory organs resemble measurements, while reliable couplings of inputs to outputs, in the form of percept-action mappings, resemble computations. Thus to the extent that organisms react differentially to different environmental conditions, “modeling relations” and “percept-action cycles” are embedded in biological systems. At their core, then, almost all biological organisms can be seen as primitive epistemic systems in their own right. Organisms, cognitive systems, and scientific models thus share a common basic functional organization (Cariani 1989; Cariani 1998b; Etxeberria 1998; Kampis 1991a; Pattee 1982; Pattee 1985; Pattee 1995; Pattee 1996; Rosen 1978; Rosen 1985; Rosen 2000; Umerez 1998). Further, these systems to varying degrees are adaptive systems that continually modify their internal structure in response to experience.
To the extent that an adaptive epistemic system constructs itself and determines the nature of its own informational transactions with its environs, that system achieves a degree of epistemic autonomy relative to its surrounds (Cariani 1992ab; Cariani 1998a).

Like the organism as a whole, nervous systems are self-constructing biological systems that are in constant adaptive interaction with their environments. It is not surprising, then, that parallel questions related to information and organization arise in the study of the brain. How are the informational functions of neurons to be distinguished from their non-informational ones? How is the informational identity of a nervous system maintained over the life of the organism? What kinds of neural pulse codes subserve the representation of information? What is the relationship between analog and discrete information processing in the brain? What does it mean to say that neurons perform “computations” or “measurements” or that “symbols” exist in the brain? How should we think about the semiotics of such symbols? Nervous systems are biological systems that reproduce their internal organizations over time; they are information-processing systems that use sensory data to steer effective action; they are epistemic systems that assimilate the correlational structure of their environments; and, in addition, they are material systems that support conscious awareness. In this paper we will discuss these various aspects of nervous systems with many of Pattee’s probing questions and organizing concepts in mind.

3. Regeneration of internal relations: organisms as self-producing systems

The fundamental concept of a self-production system is common to an organizational view of both life and mind. A self-production system reproduces its own parts and regenerates its own functional states. Both the material organization that characterizes life and the informational order that characterizes mind therefore necessarily involve regenerative processes at their cores. Regenerative “circular-causal” processes that renew energy flows, material parts, and functional relations continually recreate stable, ongoing system-identities. Regenerations of parts and of relations between parts permit self-construction, self-repair, and self-reproduction, allowing energetically-open organizations to continually reproduce their internal relations (Kampis 1991b). The ensuing dynamic orders of organisms and brains are more flame-like than crystalline (Piatelli-Palmarini 1980, introduction).

Thus far our best theories of living organization all involve self-production networks, but differ on the role that symbols play in these networks (Figure 1). In his logical requisites for a self-reproducing automaton, von Neumann (1951) drew an explicit functional dichotomy between plans (genome) and the apparatus that interprets them to construct a body (phenome) (Figure 1A). In metabolism-repair systems (Rosen 1971; Rosen 1991) and symbol-matter systems (Pattee 1982; Pattee 1995), a similar complementarity exists between symbols (plans) and physical dynamics (rate-dependent chemical reactions). On the other hand, metabolic descriptions that de-emphasize or eliminate the role of biological symbols have also been proposed (Figure 1B). These include autopoietic models (Maturana 1981; Mingers 1995; Varela 1979), reaction networks, hypercycles (Eigen 1974), and autocatalytic networks (Kauffman 1993).
In these models, organizational stability comes from the dynamics of rate-dependent chemical reactions rather than from the stability of genetic sequences. Here organizational memory is analog and implicit in the dynamics, rather than discrete, explicit, and insulated from them.

Roles for symbolic constraint and dynamically-based structural stability need not be mutually exclusive. A reconciliation of the two views is to see the cell in terms of analog biochemical kinetics that are channeled by the regulatory actions of discrete genetic switches (Figure 1C). Biochemical reactions are described in terms of rate-dependent processes that critically depend on the passage of time, while switches are described in terms of states that are largely indifferent to time. Pattee distinguishes rate-independent genetic information storage and retrieval operations from rate-dependent processes that are involved in construction, metabolism, and action (Pattee 1979). The time-indifferent processes utilize independent, discrete, inheritable genetic “symbols” while time-critical ones depend on rate-dependent chemical dynamics. There is thus a way of recognizing in natural systems those physical mechanisms that can function as “symbols” or “records,” i.e. the physical substrates of the semiotic. If we examine the workings of a digital computer, we see that the behavior of the material system can be described not only in terms of rate-dependent dynamics (e.g. as differential equations that embody the laws of classical physics), but also in terms of rule-governed switchings between macroscopic operational states (e.g. as a finite state automaton). Some processes lend themselves better to symbolic description, others to dynamical description.

In von Neumann’s scheme (Figure 1A), the different processes can be described in terms of symbols (plans, genetic strings), material parts (phenotype, body), and construction mechanisms (von Neumann’s universal constructor, transcription-translation) that transform symbols into material structures. The latter interpret symbols to direct the construction of organized structures from basic parts. In this view, the organism interprets its own symbols in order to continually construct its own body.1 Pattee has called this mixture of symbolic and nonsymbolic action “semantic closure” (Pattee 1982).

Many different kinds of closure are possible.2 To the extent that material structures and functional organizations are continually regenerated by internal mechanisms, some degree of material and functional closure is achieved. This closure, or internal causation, in turn creates domains of partial structural and functional autonomy. Structure is created from within rather than imposed from without. Closure thus creates a boundary, based on mode of causation, between an interior self-produced realm and an exterior milieu that is beyond the control of the self-production loop. For biological organisms, closure and autonomy are always partial and provisional because these systems depend on continuous material and informational exchange with their environments.
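The reconciliation pictured in Figure 1C can be caricatured in a toy simulation. The sketch below is my own illustration, not from the paper; the rate constants and thresholds are invented. It couples a rate-dependent analog process, integrated step by step in time, to a rate-independent switch whose state depends only on configurational conditions (threshold crossings):

```python
# Minimal sketch of the Figure 1C reconciliation: analog, rate-dependent
# kinetics channeled by a discrete, rate-independent "genetic" switch.
# All names and constants are illustrative, not drawn from the paper.

def simulate(steps=2000, dt=0.01):
    substrate = 1.0          # analog state: a concentration evolving in time
    product = 0.0
    gene_on = False          # discrete state: indifferent to rates and time
    history = []
    for _ in range(steps):
        # Rate-independent part: the switch flips on configurational
        # conditions (threshold crossings), not on reaction energetics.
        if product < 0.2:
            gene_on = True   # derepress synthesis
        elif product > 0.8:
            gene_on = False  # repress synthesis
        # Rate-dependent part: which kinetics run depends on the switch.
        synthesis = 0.5 * substrate if gene_on else 0.0
        decay = 0.3 * product
        product += (synthesis - decay) * dt
        history.append((gene_on, product))
    return history

if __name__ == "__main__":
    for gene_on, product in simulate()[::200]:
        print(f"gene_on={gene_on!s:5}  product={product:.3f}")
```

Doubling the rate constants changes how fast the trajectory runs, but the switch's rule, when it flips on and off, is stated without any reference to rates; this is the sense in which the symbolic constraint is rate-independent.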
4. Regeneration of informational pattern in the nervous system

If organisms can be seen in terms of regenerations of material parts, minds can be seen in terms of regenerations of informational orders. Organizational conceptions of both life and mind came together early in Western natural philosophy, in the form of Aristotle’s concept of psyche (Hall 1969; Modrak 1987). Living organisms, nervous systems, and societies of organisms are cooperative networks of active, but interdependent, semi-autonomous elements. It is therefore not surprising that conceptions of the coherent functional organization of nervous systems have developed in parallel with those for biological organisms.

Anatomically, the nervous system consists of a huge multiplicity of transmission loops: recurrent multisynaptic connectivities, reciprocal innervations, and re-entrant paths (Lorente de Nó and Fulton 1949; McCulloch 1947; Mesulam 1998). Virtually every neuron in the system is part of a signaling cycle, providing inputs to and receiving inputs from other elements in the network. These signaling cycles manifest themselves physiologically in terms of reciprocal activations, reverberations, and more complex, history-dependent modes of activity (Gerard 1959; Thatcher and John 1977). Theoretical neuroscientists have generally believed that this recurrent organization is essential to the operation of the nervous system as an informational system, on both macroscopic and microscopic levels. Within individual neurons, a host of regenerative action-recovery cycles subserve synaptic action as well as the generation and transmission of action potentials. Thus, many of the first formal models of neural networks dealt with the stability properties of closed cycles of excitation and inhibition (Rashevsky 1960), of pulse-coded “nets with circles” (McCulloch 1969a; McCulloch and Pitts 1943), and of assemblies of oscillators (Greene 1962). At a few junctures, formal relations between metabolic networks and recurrent neural networks were also considered (Cowan 1965; Haken 1983; Katchalsky et al. 1972; Kauffman 1993; Maturana 1970; Maturana 1981; Minch 1987; Rashevsky 1960; Varela 1979).

Psychology in the mid-20th century was accordingly formulated in terms of switching between reverberant signaling loops (Greene 1962; Hebb 1949; Hebb 1966; McCulloch and Pitts 1943; Rashevsky 1960) (Figure 2A). In these frameworks, mental states could be seen as alternative eigenstates of a large, dynamical system (Rocha 1996; Rocha 1998; von Foerster 1984a; von Foerster 1984b). Different stimuli would switch the resonant states of the system in different ways, such that different motor response patterns would be produced (Figure 2B). Linkages between particular stimulus-classes and appropriate responses could then be implemented by adjusting synaptic efficacies and/or firing thresholds of excitatory and inhibitory elements so as to create mutually-exclusive behavioral alternatives.
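A minimal sketch of this switching-between-reverberations picture (my construction; the patterns, Hebbian weight rule, and network size are illustrative): a small recurrent network stores two mutually exclusive activity patterns as fixed points, and different stimuli switch it into different resonant states.

```python
import numpy as np

# Toy sketch (not from the paper): a small recurrent net whose two stored
# activity patterns act as mutually exclusive "eigenstates". Different
# input patterns switch the network into different resonances.

patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [-1, -1, -1, 1, 1, 1]])
# Hebbian weights make each stored pattern a fixed point (reverberant state).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def settle(stimulus, steps=10):
    state = np.sign(stimulus).astype(float)
    for _ in range(steps):                 # recurrent signal flow
        state = np.sign(W @ state)         # threshold elements
    return state

noisy_a = np.array([1, 1, -1, -1, -1, -1])  # degraded version of pattern 0
print(settle(noisy_a))                       # settles into pattern 0
```

Presenting a degraded version of either pattern drives the network into the corresponding stable state; the settled pattern, not the raw stimulus, is what downstream elements would see.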
In the subsequent decades, which saw the ascendance of the digital electronic computer, cybernetics-inspired notions of the brain as a set of tuned, reverberant analog feedback circuits were replaced with accounts that relied on neural mechanisms of a more discrete sort: feature detectors, decision trees, sequential-hierarchical processing, and high-level rule-systems. In the 1960s and 1970s, funding for research in information-processing shifted from neural networks towards the more symbolically-oriented, logic-based approaches of symbolic artificial intelligence, cognitive psychology, and linguistics. Strong conceptions of minds as rule-governed symbol-processing systems emerged from this movement. The rise of the term “genetic program” reflected the diffusion of the computer metaphor into purely biological realms.

5. Symbols and dynamics in the brain

In this historical context, one could discuss the competing paradigms of analog and digital computation in terms of their respective descriptions: dynamical networks vs. symbolic computation (Pattee 1990). These two paradigms defined the poles of the “symbol-matter” problem as it related to the description of the brain. In the mid-1980s, neural network research was revived under the rubric of “parallel distributed processing”, and neural network information-processing models reappeared in significant numbers in the neurosciences. Currently most neuroscientists who work on informational aspects of the brain assume that the brain is a parallel, distributed connectionist network of one sort or another. The great diversity of current neurocomputational approaches makes the core assumptions and boundaries of this paradigm hard to delineate clearly in a way that fits within the categories of the symbol-matter dichotomy (Cariani 1997a; Pattee 1990).

How brain function is conceptualized thus depends heavily on which technological examples are available, especially in the absence of strong theories and decisive empirical data. The current situation in the neurosciences regarding the neural code is not unlike the situation in molecular biology before the elucidation of the genetic code. Biologists understood that there had to be molecular mechanisms for heredity in the chromosomes, but did not have a specific understanding of which aspects of chromosomal structure were responsible for the transmission of genetic information. We understand that all of the information necessary for perception, cognition, and action must be embedded in the discharge activities of neurons, but we do not yet have firm understanding or agreement as to which specific aspects of neural discharge convey which specific kinds of information.

TABLE I. Three broad approaches to brain function

Symbol-processing
  Explanatory mode: functionalism (symbolic computation)
  View of cells: genetic programs, switching systems
  View of brains: discrete-state computer
  Neural primitives: feature detectors, channel-activations
  Symbols: functional atoms
  Representation: explicit mappings onto symbol-states
  Information processing: sequential hierarchical decision processes; iterated computation; functional modules
  Change: rules

Dynamical-systems
  Explanatory mode: mass behavior (system trajectories)
  View of cells: physical laws, metabolic cycles, autopoiesis
  View of brains: analog computer
  Neural primitives: neural mass-statistics, interneural correlations
  Symbols: attractor basins
  Representation: nonrepresentational, implicate embeddings
  Information processing: resonance processes; mass dynamics
  Change: controllable dynamics, chaos

Neurocomputation
  Explanatory mode: functionalism (neural codes and information processing)
  View of cells: neural mechanisms
  View of brains: mixed analog-digital device, neural architectonics
  Neural primitives: adaptive computing elements
  Symbols: mutually exclusive patterns; analog & discrete modes
  Representation: neural representations (rate-profiles & temporal patterns)
  Information processing: pattern-resonance & elaboration; feature-detection and correlations; hierarchical & heterarchical; sequential & (a)synchronous; general and special-purpose

Many strategies for cracking the neural code are being pursued. Some clues may be provided by studying the parts of neural systems on molecular and cellular levels, but structural knowledge by itself may not generate the functional heuristics needed to reverse-engineer them. One can have in hand a circuit diagram of an unknown information-processing device, but still not understand what it is for, how it works, or what general functional principles are employed in its design.
System-pathologies provide other clues for function: what functional deficits are associated with damage to particular parts? One strives to identify those parts that are essential for a given function and those that are redundant or non-essential. These strategies are also presently limited by the relatively coarse character of physical lesions and the systemic nature of molecular interventions, which do not readily yield much insight into the details of neural representations and computations. Electrophysiological experiments do provide some of these details, but the sheer complexity of neural responses makes their meaningful interpretation difficult at best. Neurocomputational approaches attempt to understand how the brain works by developing functional models of neural systems that have information-processing capabilities similar to those of nervous systems, while simultaneously searching for existing neural structures that might implement such mechanisms. It is in the realm of neurocomputational theory that the concepts of symbols and dynamics have their greatest relevance.

Amongst global theories of how the brain functions as an informational system, there are currently three broad schools: the dynamical approach, the symbolic approach, and the neural information processing (neurocomputational) approach (Table I). While symbolic and dynamical approaches are quite disjoint, considerable overlap exists between each of these and portions of the neurocomputational view.

The dynamical approach has been adopted by a diversity of research traditions that seek to understand the brain in terms of analog, rate-dependent processes and physics-style models: early formulations of neural network dynamics (Beurle 1956; Greene 1962; Rashevsky 1960), Gestalt psychology (Köhler 1951), Gibsonian ecological psychology (Carello et al. 1984), EEG modeling (Basar 1989; Nunez 1995), and dynamical systems theory (Freeman 1975; Freeman 1995; Freeman 1999; Haken 1983; Haken 1991; Kelso 1995; Kugler 1987; van Gelder and Port 1995). For dynamicists, the brain is a large and complex continuous-time physical system that is described in terms of the dynamics of neural excitation and inhibition. The behavior of large numbers of microscopic neural elements creates discrete basins of attraction for the system that can be switched. These contingently-stable dynamical macro-states form the substrates for mental and behavioral states. Some dynamics-oriented traditions have formulated analog alternatives to discrete computations with the aim of explaining perceptual and behavioral functions (Carello et al. 1984; Michaels and Carello 1981), while others are more concerned with the mass-dynamics of neural systems that account for their observed exogenous and endogenous electromagnetic response patterns.3

In the neural and cognitive sciences, the symbol-based approach has been adopted by research traditions whose subject matter lends itself to orderly, rule-governed successions of discrete functional states: symbolic artificial intelligence, symbolically-oriented cognitive science, and linguistics.
Perception is seen in terms of microcomputations by discrete feature-detection elements, while mental operations are conceptualized in terms of computations on discrete, functional symbolic states that are thought to be largely autonomous of the underlying neural microdynamics.4

The brain may be best conceptualized in terms of mixed analog-digital devices, since strong examples of both analog and discrete modes of representation can be found there (von Neumann 1958). Clearly, most sensory representations that subserve sensory qualia such as pitch, timbre, color, visual form, smell, and taste convey continuous ranges of qualities, and most actions involve continuous ranges of possible movements. On the other hand, cognitive representations, such as those that subserve speech, language, thought, planning, and playing music, by necessity involve discrete functional states that must be organized and combined in highly specific ways.

The neurocomputational approach includes a variety of neurophysiological and neurocomputational perspectives that seek to understand on a detailed level how neural populations process information (Arbib 1989; Churchland and Sejnowski 1992; Licklider 1959; Marr 1991; McCulloch 1965; Rieke et al. 1997). In the brain these alternatives are often conceptualized in terms of analog and digital processes operating at many different levels of neural organization: subcellular, cellular, and systems levels (assemblies), and continuous vs. discrete percepts and behaviors. On the subcellular level, continuously graded dendritic potentials influence the state-switchings of individual ion channels whose statistical mechanics determine the production of discrete action potentials (“spikes”). Most information in the brain appears to be conveyed by trains of spikes, but how various kinds of information are encoded in such spike trains is not yet well understood. Central to the neurocomputational view is the neural coding problem – the identification of which aspects of neural activity convey information (Cariani 1995; Cariani 1999; Mountcastle 1967; Perkell and Bullock 1968; Rieke et al. 1997; Uttal 1973). Neurocomputational approaches presume that ensembles of neurons are organized into functional “neural assemblies” (Hebb 1949) and processing architectures that represent and analyze information in various ways.

The functional states of a neural code can form highly discrete alternatives or continuously graded values. A simple “doorbell” code, in which a designated neuron either fires or does not (on/off), is an example of the former, while an interspike interval code, in which different periodicities are encoded in the time durations between spikes (intervals of 10.0 ms vs. 10.5 ms signal different periodicities), is an example of the latter. The nature of a code depends upon how a receiver interprets particular signals; in the case of neural codes, receivers are neural assemblies that interpret spike trains. Thus, a given spike train can be interpreted in multiple ways by different sets of neurons that receive it.

The nature of the neural codes that represent information determines the kinds of neural processing architectures that must be employed to make effective use of them. If neural representations are based on across-neuron profiles of average firing rate, then neural architectures must be organized accordingly. If information is contained in temporal patterns of spikes, then neural architectures must be organized to distinguish different time patterns (e.g. using time delays). These coding alternatives are illustrated in the sketch below.
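As a toy illustration of the receiver-dependence of neural codes (my own sketch; the spike train and both decoders are invented for the purpose): the same hypothetical spike train can be read as a discrete "doorbell" code or as a graded interspike interval code.

```python
# Sketch of the two coding extremes described above (illustrative only).
# The same spike train can be read by different "receivers":
# a doorbell decoder asks only whether the designated neuron fired;
# an interval decoder reads periodicity from interspike intervals.

spike_times_ms = [12.0, 22.0, 32.5, 42.5, 53.0]  # hypothetical spike train

def doorbell_decode(spikes):
    """Discrete, on/off alternative: did the labeled line fire at all?"""
    return "on" if spikes else "off"

def interval_decode(spikes):
    """Graded alternative: periodicity from the mean interspike interval."""
    intervals = [b - a for a, b in zip(spikes, spikes[1:])]
    mean_ms = sum(intervals) / len(intervals)
    return 1000.0 / mean_ms          # implied periodicity in Hz

print(doorbell_decode(spike_times_ms))               # 'on'
print(f"{interval_decode(spike_times_ms):.1f} Hz")   # ~97.6 Hz
```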
The many possible feedforward and recurrent neural net architectures range from traditional feedforward connectionist networks to recurrent, adaptive resonance networks (Grossberg 1988) to time-delay networks (MacKay 1962; Tank and Hopfield 1987) to timing nets (Cariani in press-b; Longuet-Higgins 1989). A given neurocomputational mechanism may be a special-purpose adaptation to a specific ecological context or it may be a general-purpose computational strategy common to many different ecological contexts and information processing tasks.5

Each general theoretical approach has strengths and weaknesses. Symbol-processing models couple directly to input-output functions and are interpretable in functional terms that we readily understand: formal systems, finite automata, and digital computers. Dynamical approaches, while further removed from functional states, directly address how neural systems behave given the structural properties of their elements. Neurocomputational, information-processing approaches at their best provide bridges between structural and functional descriptive modes by concentrating on those aspects of structure that are essential for function.

A general weakness of symbolic “black box” approaches lies in the assumption of discrete perceptual and/or higher-level representational atoms. Symbolic primitives are then processed in various ways to realize particular informational functions. However, in abstracting away the neural underpinnings of their primitives, these approaches may consequently miss underlying invariant aspects of neural codes that give rise to cognitive equivalence classes.6 Historically, logical atomist and black box approaches have ignored problems related to how new symbolic primitives can be created (Carello et al. 1984; Cariani 1989; Cariani 1997a; Piatelli-Palmarini 1980; Schyns et al. 1998). This problem in psychology of forming new perceptual and conceptual primitives is related to more general problems of how qualitatively new structures and levels of organization can emerge. Pattee and Rosen originally addressed the problem of emergence in the context of the evolution of new levels of cellular control (Pattee 1973b; Rosen 1973a), but subsequently extended their ideas to the emergence of new epistemic functions (Pattee 1995; Rosen 1985). Underlying these ideas are notions of systems that increase their effective dimensionalities over time (Carello et al. 1984; Cariani 1989; Cariani 1993; Cariani 1997a; Conrad 1998; Kugler and Shaw 1990; Pask 1960).7 Purely symbolic systems self-complexify by multiplying logical combinations of existing symbol-primitives, not by creating new ones. Because their state sets are much finer grained and include continuous, analog processes, dynamical and neurocomputational models leave more room for new and subtle factors to come into play in the formation of new primitives.

Dynamical and neurocomputational substrates arguably have more potential for self-organization than their purely symbol-based counterparts. In the case of neural signaling systems, as in the cell, there are also means of reconciling dynamical models with symbolic ones – the attractor basins formed by the dynamics of interacting neural signals become the symbol-state alternatives of the higher-level symbol-processing description.8 Even with these interpretational heuristics, there remain classical problems of inferring functions from structures and phase-space trajectories (Rosen 1973b; Rosen 1986; Rosen 2000).
While detailed state-trajectories often yield insights into the workings of a system, by themselves they may not address functional questions of how neurons must be organized in order to realize particular system-goals. Much of what we want to understand by studying biological systems concerns principles of effective design, i.e. how they realize particular functions, rather than whether these systems are governed by known physical laws (we assume that they are), or whether their state-transition behavior can be predicted. Though they provide useful clues, neither parts-lists, wiring-diagrams, nor input-output mappings by themselves translate directly into these principles of design. One can have in hand complete descriptions of the activities of all of the neurons in the brain, but without some guiding ideas of how the brain represents and processes information, this knowledge alone does not lead inevitably to an understanding of how the system works.

6. Symbols and dynamics in epistemic systems

Brains are more than simply physical systems, symbol-processing systems, and neural information processing architectures. They are epistemic systems that observe and interact with their environs. How biological systems become epistemic systems has been a primary focus of Pattee’s theoretical biology. In addition to the internalist roles that symbols play in biological self-construction, there are also externalist roles in epistemic operations: how symbols retain information related to interactions with the environment. These interactions involve neural information processes for sensing, deliberating, and acting (Figures 4-6). These operations have very obvious and direct analogies with the functionalities of the idealized observer-actor: measurement, computation, prediction, evaluation, and action (“modeling relations”). In order to provide an account of how modeling relations might be embedded in biological systems, the essential functionalities of observer-actors (measurement, computation, evaluation, action) must be distinguished and clarified, and then located in biological organisms. The latter task requires a theory of the physical substrates of these operations, such that they can be recognized wherever they occur in nature. One needs to describe in physical terms the essential operations of observers, such as measurement, computation, and evaluation. Once measurement and computation can be grounded in operational and physical terms, they can be seen simultaneously as very primitive, essential semiotic operations that are present at all levels of biological organization and as highly elaborated and refined externalized end-products of human biological and social evolution. This epistemically-oriented biology then provides explanations for how physical systems can evolve to become observing systems. It also provides an orienting framework for addressing the epistemic functions of the brain.

One of the hallmarks of Pattee’s work has been a self-conscious attitude toward the nature of physical descriptions and the symbols themselves. Traditionally our concepts regarding symbols, signals, and information have been developed in the contexts of human perceptions, representations, coordinations, actions, and communications and their artificial counterparts. The clearest cases are usually artificial devices, simply because people explicitly designed them to fulfill particular purposes – there is no problem of second-guessing or reverse-engineering their internal mechanisms, functional states, and system-goals.
In the realm of epistemology – how information informs effective prediction and action – the clearest examples have come from the analysis of the operational structure of scientific models. In the late 19th and early 20th centuries, physics was compelled to adopt a rigorously self-conscious and epistemologically-based attitude towards its methods and its descriptions (Bridgman 1936; Hertz 1894; Murdoch 1987; Weyl 1949a). The situation in physics paralleled a self-consciousness about the operation of formal procedures in mathematics. Heinrich Hertz (1894) explicated the operational structure of the predictive scientific model (Figure 4A), in which an observer makes a measurement that results in symbols that become the initial conditions of a formal model. The observer then computes the predicted state of a second observable and compares this to the outcome of the corresponding second measurement. When the two agree, “the image of the consequent” is congruent with the “consequence of the image”, and the model has made a successful prediction.

The operational description of a scientific experiment includes the building of measuring devices, the preparation of the measurement, the measurements themselves, and the formal procedures that are used to generate predictions and compare predictions with observed outcomes. When one examines this entire context, one finds material causation on one side of the measuring devices and rule-governed symbol-manipulation on the other.9 If one were watching this predictive process from without, there would be sequences of different operational symbol-states that we would observe as measurements, computations, and comparisons being made (Figure 4B). Operationally, measurement involves contingent state-transitions that involve the actualization of one outcome amongst two or more possible ones. The observer sees this transition from many potential alternatives to one observed outcome as a reduction of uncertainty, i.e. as gaining information about the interaction of sensor and environment. Computations, on the other hand, involve reliable, determinate mappings of symbol-states to other symbol-states.

Charles Morris was the first to explicitly distinguish syntactic, semantic, and pragmatic aspects of symbols, and modeling relations can be analyzed in these terms (Morris 1946; Nöth 1990). In Hertz’s framework, measuring devices are responsible for linking particular symbol-states to particular world-states (or, more precisely, particular interactions between the measuring apparatus and the world). Thus the measuring devices determine the external semantics of the symbol-states in the model. Computations link symbol-states to other symbol-states, and hence determine syntactic relations between symbols.10 Finally, there are linkages between the symbol-states and the purposes of the observer that reflect what aspects of the world the observer wishes to predict, and to what benefit. The choice of measuring devices and their concomitant observables is thus an arbitrary choice of the observer that is dependent upon his or her desires and an evaluative process that compares outcomes to goals. Constituted in this way, the three semiotic aspects (syntactics, semantics, and pragmatics) and their corresponding operations (computation, measurement, evaluation) are irreducible and complementary. One cannot replace semantics with syntactics, semantics with pragmatics, or syntactics with semantics.11
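Hertz's commutation condition can be made concrete in a few lines of code. In the sketch below (entirely my own construction; the falling-ball "world", the measurement resolution, and the agreement criterion are invented for illustration), measurement is a contingent mapping from a material state to a finite-resolution symbol, computation is a determinate symbol-to-symbol mapping, and the model succeeds when the computed "consequence of the image" matches the measured "image of the consequent":

```python
import random

# Hedged sketch of Hertz's commutation diagram: measurement maps a world
# state into symbols, computation advances the symbols, and a second
# measurement checks whether prediction and outcome agree.

g = 9.8

def world(t):                       # material dynamics (not directly accessible)
    return 0.5 * g * t**2 + random.gauss(0.0, 0.005)

def measure(position):              # contingent state-transition -> symbol
    return round(position, 1)       # finite-resolution pointer reading

def compute_prediction(symbol, t0, t1):   # determinate symbol-to-symbol map
    return round(symbol + 0.5 * g * (t1**2 - t0**2), 1)

s0 = measure(world(1.0))                  # first measurement
predicted = compute_prediction(s0, 1.0, 2.0)
observed = measure(world(2.0))            # second measurement
print("prediction holds:", abs(predicted - observed) <= 0.1)
```

Note where the epistemic cut falls in this toy: `measure` is contingent on the world's state, while `compute_prediction` is a purely formal, determinate mapping.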
The measurement problem, among other things, involved arguments over where one draws the boundaries between the observer and the observed world – the epistemic cut (Pattee 2001, this issue). Equivalently, this is the boundary where the formal description and formal causation begin and where the material world and material causation end (von Neumann’s cut). If the observer can arbitrarily change what is measured, then the cut is ill-defined. However, once measuring devices along with their operational states are specified, then the cut can be operationally defined. The cut can be drawn in the state-transition structure of the observer’s symbol-states, where contingent state-transitions end and determinate ones begin (Figure 4B). These correspond, respectively, to empirical, contingent measurement operations and to analytic, logically-necessary formal operations (“computations”).

7. Epistemic transactions with the external world

How are we to think about how such modeling relations might be embedded in the brain? In addition to organizational closures maintained through self-sustained, internally-generated endogenous activity, nervous systems are also informationally-open systems that interact with their environments through sensory inputs and motor outputs (Figure 4). Together these internal and external linkages form percept-action loops that extend through both organism and environment (Uexküll 1926) (Figure 4A). Thus both the internal structure of the nervous system and the structure of its transactions with the environment involve “circular-causal” loops (Ashby 1960; McCulloch 1946). The central metaphor of cybernetics was inspired by this cyclic image of brain and environment, where internal sets of feedback loops themselves have feedback connections to the environment, and are completed through it (de Latil 1956; McCulloch 1965; McCulloch 1969b; McCulloch 1989; Powers 1973). Thus McCulloch speaks of “the environmental portion of the path” (Figure 4B) and Powers, emphasizing the external portion of the loop, talks in terms of “behavior, the control of perception” rather than the reverse (Powers 1973). Clearly both halves of the circle are necessary for a full account of behavior and adaptivity: the nervous half and the environmental half.

In these frameworks, sensory receptors are in constant interaction with the environment and switch their states contingent upon their interactions. Effectors, such as muscles, act on the world to alter its state. Mediating between sensors and effectors is the nervous system, which determines which actions will be taken given particular percepts. The function of the nervous system, at its most basic, is to realize those percept-action mappings that permit the organism to survive and reproduce.

Adaptive robotic devices (Figure 4C) can also be readily seen in these terms (Cariani 1989; Cariani 1998a; Cariani 1998b) if one replaces the percept-action coordinations that are realized by nervous systems with explicit percept-action mappings that are realized through computations. These adaptive robotic devices then have a great deal in common with the formal, operational structure of scientific models discussed above. In such adaptive devices (Figure 4C), there is, in addition to the percept-action loop, a pragmatic, feedback-to-structure loop that evaluates performance and alters sensing, computing, and effector-actions in order to improve measured performance. Evaluations are operations similar to the measurements made by sensors, except that their effect is to trigger a change in system-structure rather than simply a change in system-state.
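A minimal sketch of such a device (my own toy example; the two-percept environment, reward rule, and mutation probability are invented): the percept-action mapping is an explicit lookup table, and a pragmatic evaluation loop rewrites the table itself – a change of structure, not merely of state – whenever outcomes fail to meet the system's goal:

```python
import random

# Toy sketch of the Figure 4C organization described above: a percept-action
# mapping realized as an explicit lookup table, plus a pragmatic feedback
# loop that evaluates outcomes and rewrites the table to improve performance.

actions = ["approach", "avoid"]
policy = {"light": "avoid", "dark": "avoid"}   # initial percept->action map

def environment(percept, action):
    """Hidden contingency: approaching in the light is rewarded."""
    return 1.0 if (percept, action) == ("light", "approach") else 0.0

for _ in range(200):
    percept = random.choice(["light", "dark"])
    action = policy[percept]                   # percept-action loop
    reward = environment(percept, action)      # evaluation (pragmatics)
    if reward == 0.0 and random.random() < 0.1:
        # feedback-to-structure: alter the mapping itself, not just the state
        policy[percept] = random.choice(actions)

print(policy)   # tends toward {'light': 'approach', 'dark': <either>}
```

The 'dark' entry keeps drifting because nothing the device does in the dark is ever rewarded; only mappings that improve measured performance stabilize.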
What follows is a hypothetical account of the brain as both a self-production network and an epistemic system. On a very high level of abstraction, the nervous system can be seen in terms of many interconnected recurrent pathways that create sets of neural signals that regenerate themselves to form stable mental states (Figure 5). These can be thought of as neural “resonances” because some patterns of neural activity are self-reinforcing, while others are self-extinguishing. Sensory information comes into the system through modality-specific sensory pathways. Neural sensory representations are built up through basic informational operations that integrate information in time by establishing circulating patterns which are continuously cross-correlated with incoming ones (i.e. bottom-up/top-down interactions). When subsequent sensory patterns are similar to previous ones, these patterns are built up and inputs are integrated over time. When subsequent patterns diverge from previous ones, new dynamically-created “templates” are formed from the difference between expectation and input. The result is a pattern-resonance. Tuned neural assemblies can provide top-down facilitation of particular patterns by adding them to circulating signals. The overall framework is close to the account elaborated by Freeman (1999), with its circular-causal reafferences, resonant mass dynamics, and intentional dimensions. The neural networks that subserve these “adaptive resonances” have been elaborated in great depth by Grossberg and colleagues (Grossberg 1988; Grossberg 1995), whose models qualitatively account for a wide range of perceptual and cognitive phenomena. Various attempts have been made to locate neural resonances in particular re-entrant pathways, such as thalamocortical and cortico-cortical loops (Edelman 1987; Mumford 1994).

For the most part, neural resonance models have assumed that the underlying neural representations of sensory information utilize channel-coded input features and neural networks with specific, adaptively modifiable connection weights. However, a considerable body of psychophysical and neurophysiological evidence exists for many other kinds of neural pulse codes, in which temporal patterns and relative latencies between spikes appear to subserve different perceptual qualities (Cariani 1995; Cariani 1997b; Perkell and Bullock 1968). For example, patterns of interspike intervals correspond closely with pitch perception in audition (Cariani and Delgutte 1996a) and vibration perception in somatoception (Mountcastle 1993). Neural resonances can also be implemented in the time domain using temporally-coded sensory information, recurrent delay lines, and coincidence detectors (Cariani in press-b; Thatcher and John 1977). In addition to stimulus-driven temporal patterns, stimulus-triggered endogenous patterns can be evoked by conditioned neural assemblies (Morrell 1967). Networks of cognitive timing nodes that have characteristic time-courses of activation and recovery have been proposed as mechanisms for the sequencing and timing of percepts and actions (MacKay 1987).
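The build-up loop described above can be caricatured in a few lines (my sketch; the similarity threshold, gain, and pattern dimensionality are arbitrary choices): incoming patterns that correlate with the circulating pattern are integrated into it, while divergent inputs seed a new template from the expectation/input difference:

```python
import numpy as np

# Minimal sketch (assumptions mine) of the build-up loop described above:
# a circulating pattern is cross-correlated with each incoming pattern;
# similar inputs are integrated into the loop, while divergent inputs
# spawn a new template from the expectation/input difference.

def build_up(inputs, threshold=0.6, gain=0.3):
    circulating = inputs[0].astype(float)
    for incoming in inputs[1:]:
        c = circulating / (np.linalg.norm(circulating) + 1e-9)
        i = incoming / (np.linalg.norm(incoming) + 1e-9)
        similarity = float(c @ i)            # normalized cross-correlation
        if similarity > threshold:
            circulating += gain * incoming   # reinforce: pattern builds up
        else:
            circulating = incoming - gain * circulating  # new template
    return circulating

rng = np.random.default_rng(0)
base = rng.normal(size=16)
stream = [base + 0.1 * rng.normal(size=16) for _ in range(5)]
print(np.round(build_up(np.array(stream)), 2))
```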
Coherent temporal, spatially-distributed and statistical orders (“hyperneurons”) consisting of stimulus-driven and stimulus-triggered patterns have been proposed as neural substrates for global mental states (John 1967; John 1972; John 1976; John 1988; John 1990; John and Schwartz 1978). In the present scheme, build-up loops and their associated resonance-processes would be iterated as one proceeds more centrally into successive cortical stations. Once sensory representations are built up in modality-specific circuits (e.g. perceptual resonances in thalamic and primary sensory cortical areas), they would become available to the rest of the system, such that they could activate still other neural assemblies that operate on correlations between sensory modalities (e.g. higher-order semantic resonances in association cortex). Subsequent build-up processes would then implement correlational categories further and further removed from sensory specifics. These resonances would then also involve the limbic system and its interconnections, which could add affective and evaluative components to circulating sets of neural signal-patterns (pragmatic evaluations). Similarly, circulating patterns could activate associated long-term memories, which would in turn facilitate and/or suppress activation of other assemblies.

Long-term memory is essential to stable mental organization. Pattee has asserted that “life depends upon records.” Analogously, we can assert that mind depends upon memory. Like DNA in the cell, long-term memory serves as an organizational anchor that supplies stable informational constraints for ongoing processes. Do brain and cell have similar organizational requirements for stability? Must this storage mechanism be discrete in character? Like the cell, the nervous system is an adaptive system that is constantly rebuilding itself in response to internal and external pressures. As von Neumann pointed out, purely analog systems are vulnerable to the build-up of perturbations over time, while digital systems (based as they are on functional states formed by basins of attraction) continually damp them out (von Neumann 1951).

Memory is surprisingly long-lived. We are intimately familiar with the extraordinary lengths of time that memories can persist, from minutes, hours, years, and decades to an entire lifetime. Long-term memories survive daily stretches of sleep, transient exposures to general anesthesia, and even extended periods of coma. These are brain states in which patterns of central activity are qualitatively different from those of the normal waking state in which the memories were initially formed. What is even more remarkable is the persistence of memory traces in the face of constant molecular turnover and neural reorganization. The persistence of memory raises the fundamental question of whether long-term memory must be “hard-coded” in some fashion, perhaps in molecular form, for the same reasons that genetic information is hard-coded in DNA (see John 1967; Squire 1987 for discussions). DNA is the most stable macromolecule in the cell. Autoradiographic evidence suggests that no class of macromolecule in the brain save DNA appears to remain intact for more than a couple of weeks. These and other considerations drove neuroscientists who study memory to concentrate almost exclusively on synaptic rather than molecular mechanisms (Squire 1987).
While enormous progress has been made in understanding various molecular and synaptic correlates of memory, crucial links in the chain of explanation are still missing. These involve the nature and form of the information being stored, as well as how neural organizations would make use of this information. Currently, the most formidable gap between structure and function lies in our primitive state of understanding of neural codes and neural computation mechanisms. Consequently, we cannot yet readily and confidently interpret the empirical structural data that has been amassed in terms directly linked to informational function. Presently, we can only hypothesize how the contents of long-term memories might be stored given alternative neural coding schemes.

By far the prevailing view in the neurosciences is that central brain structures are primarily connectionist systems that operate on across-neuron average rate patterns. Neurons are seen as rate-integrators with long integration times, which mandates that functionally-relevant information must be stored and read out through the adjustment of inter-element connectivities. Learning and memory are consequently thought to require the adjustment of synaptic efficacies. Some of the difficulties associated with such associationist neural “switchboards” (e.g. problems of the regulation of highly specific connectivities and transmission paths, of the stability of old patterns in the face of new ones, and of coping with multidimensional, multimodal information) have been raised in the past (John 1967; John 1972; Lashley 1998; Thatcher and John 1977), but these difficulties on the systems-integration level are largely ignored in the rush to explore the details of synaptic behavior. As Squire (1987) makes clear, the predominant, conventional view has been that molecular hard-coding of memory traces is inherently incompatible with connectionistic mechanisms that depend on synaptic efficacies.

Alternatively, neurocomputations in central brain structures might be realized by neural networks that operate on the relative timings of spikes (Abeles 1990; Braitenberg 1967; Cariani 1995, 1997a, 1999, in press; Licklider 1951, 1959). Neurons are then seen as coincidence detectors with short time windows that analyze the relative arrival times of their respective inputs (Abeles 1982; Carr 1993). Although the first effective neurocomputational models for perception were time-delay networks that analyzed temporal correlations by means of coincidence detectors and delay lines (Jeffress 1948; Licklider 1951), relatively few temporal neurocomputational models for memory have been proposed (Cariani in press-b; Longuet-Higgins 1987; Longuet-Higgins 1989; MacKay 1962). The dearth of models notwithstanding, animals do appear to possess generalized capabilities for retaining the time course of events. Conditioning experiments suggest that the temporal structure of both rewarded and unrewarded events that occur during conditioning is explicitly stored, such that clear temporal expectations are formed (Miller and Barnet 1993). Neural mechanisms are capable of storing and retrieving temporal patterns by tuning dendritic and axonal time delays to favor particular temporal combinations of inputs or by selecting for existing delays by adjusting synaptic efficacies. By tuning or choosing delays and connection weights, neural assemblies can be constructed that are differentially sensitive to particular time patterns in their inputs.
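A toy version of this delay-tuning scheme (mine, with invented spike times and window width): delays are chosen so that spikes belonging to the stored pattern converge simultaneously on a coincidence detector with a short acceptance window.

```python
# Sketch of the delay-tuning idea above (parameters invented): a neural
# assembly becomes selective for a particular temporal input pattern by
# delaying each input line so that spikes from the target pattern arrive
# in coincidence at a short-window detector.

target_pattern_ms = [0.0, 7.0, 12.0]          # spike times on three inputs
delays_ms = [max(target_pattern_ms) - t for t in target_pattern_ms]

def coincidence_response(input_times_ms, window_ms=1.0):
    """Fire only if all delayed inputs arrive within the window."""
    arrivals = [t + d for t, d in zip(input_times_ms, delays_ms)]
    return (max(arrivals) - min(arrivals)) <= window_ms

print(coincidence_response([0.0, 7.0, 12.0]))   # True: the stored pattern
print(coincidence_response([0.0, 3.0, 12.0]))   # False: a different pattern
```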
Assemblies can also be formed that emit particular temporal patterns when activated (John and Schwartz 1978). A primary advantage of temporal pattern codes over those that depend on dedicated lines is that the information conveyed is no longer tied to particular neural transmission lines, connections, and processing elements. Further, temporal codes permit multiple kinds of information to be transmitted and processed by the same neural elements (multiplexing) in a distributed, holograph-like fashion (Pribram 1971). Because information is distributed rather than localized in particular synapses, such temporal codes are potentially compatible with molecular coding mechanisms (John 1967).

Polymer-based molecular mechanisms for storing and retrieving temporal patterns can also be envisioned, in which time patterns are transformed into linear distances along polymer chains. A possible molecular mechanism would involve polymer-reading enzymes that scan RNA or DNA molecules at a constant rate (e.g. hundreds to thousands of bases/sec), catalyzing bindings of discrete molecular markers (e.g. methylations) whenever intracellular ionic changes related to action potentials occurred. Time patterns would thus be encoded in spatial patterns of the markers. Readout would be accomplished by the reverse process: polymer-reading enzymes encountering markers would trigger a cascade of molecular events that would transiently facilitate the initiation of action potentials. Cell populations would then possess an increased capacity to asynchronously regenerate temporal sequences to which they have been previously exposed.12 Molecular memory mechanisms based on DNA would be structurally stable, ubiquitous, superabundant, and might support genetically-inheritable predispositions for particular sensory patterns, such as species-specific bird songs (Dudai 1989).
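The time-to-space logic of this hypothesized mechanism reduces to a pair of inverse mappings. In the sketch below (my illustration; the scan rate and the integer-base marker representation are assumptions, not data), writing converts spike times into marker positions at a constant scan rate, and readout rescans the chain at the same rate to regenerate the temporal pattern:

```python
# Illustrative sketch of the hypothesized polymer mechanism above: a
# reading enzyme scanning at a constant rate converts spike times into
# marker positions along the chain, and readout inverts the mapping.

SCAN_RATE_BASES_PER_S = 1000.0     # constant scan rate (assumed)

def write_markers(spike_times_s):
    """Encode a temporal pattern as distances (in bases) along a polymer."""
    return [round(t * SCAN_RATE_BASES_PER_S) for t in spike_times_s]

def read_markers(marker_positions):
    """Readout: rescanning at the same rate regenerates the time pattern."""
    return [pos / SCAN_RATE_BASES_PER_S for pos in marker_positions]

times = [0.010, 0.017, 0.029]              # spike times in seconds
positions = write_markers(times)           # [10, 17, 29] bases
print(positions, read_markers(positions))
```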
Signal multiplexing and nonlocal storage of information, whether through connectionist or temporal mechanisms, permit broadcast strategies of neural integration. The global interconnectedness of cortical and subcortical structures permits widespread sharing of information that has built up to some minimal threshold of global relevance, in effect creating a “global workspace” (Baars 1988). The contents of such a global workspace would become successively elaborated, with successive sets of neurons contributing correlational annotations to the circulating pattern in the form of characteristic pattern-triggered signal-tags. Such tags could then be added on to the evolving global pattern as indicators of higher-order associations and form new primitives in their own right (Cariani 1997a).

Traditionally, the brain has been conceived in terms of sequential hierarchies of decision processes, where signals represent successively more abstract aspects of a situation. As one moves to higher and higher centers, information about low-level properties is presumed to be discarded. A tag system, on the other hand, elaborates rather than reduces, continually adding additional annotative dimensions. Depending upon attentional and motivational factors, such a system would distribute relevant information over wider and wider neural populations. Rather than a feed-forward hierarchy of feature-detections and narrowing decision-trees, a system based on signal-tags would more resemble a heterarchy of correlational pattern-amplifiers in which neural signals are competitively facilitated, stabilized, and broadcast to produce one dominant, elaborated pattern that ultimately steers the behavior of the whole. There would then be bidirectional influence between emergent global population-statistical patterns and those of local neural populations. This comes very close to Pattee’s concept of “statistical closure” (Pattee 1973a), which entails “the harnessing of the lower level by the collective upper level.” In terms of neural signaling, local and global activity patterns interact, but the global patterns control the behavior of the organism as a unified whole.

8. Semiotic and phenomenal aspects of neural activity

Pattee’s ideas have many far-ranging implications for general theories of symbolic function. His description of symbols as rate-independent, nonholonomic constraints grounds semiotic theory in physics. His mapping of the operations of the observer onto the operations of the cell produces a biosemiotic “cognitive biology.” Pattee’s concept of “semantic closure” involves the means by which an organism selects the interpretation of its own symbols (Pattee 1985). The high-level semiotics of mental symbols, conceived in terms of neural pattern-resonances in the brain, can similarly be outlined to explain how brains construct their own meanings (Cariani in press-c; Freeman 1995; Freeman 1999; Pribram 1971; Thatcher and John 1977). Such neurally-based theories of meaning imply a constructivist psychology and conceptual semantics (Lakoff 1987; von Glasersfeld 1987; von Glasersfeld 1995).

Within the tripartite semiotic of Morris (1946), one wants to account for the relations of symbols to other symbols (syntactics), relations of symbols to the external world (semantics), and relations of symbols to system-purposes (pragmatics) (Figure 6). Neural signal tags characteristic of a given neural assembly in effect serve as markers of symbol type that can be analyzed and sequenced without regard for their sensory origins or motor implications. The appearance of such characteristic tags in neural signals would simply signify that a particular assembly had been activated. These tags would be purely syntactic forms shorn of any semantic or pragmatic content. Other tags characteristic of particular kinds of sensory information would bear sensory-oriented semantic content. Tags characteristic of neural assemblies for planning and motor execution would bear action-oriented semantic content. Tags produced by neural populations in the limbic system would indicate hedonic, motivational, and emotive valences, such that these neural signal patterns would bear pragmatic content. These various kinds of neural signal tags that are characteristic of sensory, motor, and limbic population responses would be added through connections of central neural assemblies to those populations. All of these different kinds of neural signals would be multiplexed together, interacting on both local and global levels to produce pattern resonances. Thus, in a multiplexed system there can be divisions of labor between neural populations, but the various neural signals that are produced need not constantly be kept separate on dedicated lines.
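One way to picture such multiplexed tagging is as a shared, circulating record that successive assemblies annotate. The data-structure sketch below is purely illustrative; the class and field names and the example tags are my inventions, not the paper's proposal:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the multiplexed tag scheme above: a circulating
# signal carries tags from different populations, so syntactic, semantic,
# and pragmatic content ride the same "line" rather than dedicated ones.

@dataclass
class Tag:
    source: str        # which neural assembly emitted the tag
    kind: str          # 'syntactic' | 'semantic' | 'pragmatic'
    content: str

@dataclass
class CirculatingPattern:
    tags: list = field(default_factory=list)

    def annotate(self, tag: Tag):
        """Each assembly adds its characteristic tag to the shared pattern."""
        self.tags.append(tag)

p = CirculatingPattern()
p.annotate(Tag("auditory cortex", "semantic", "rising pitch"))
p.annotate(Tag("limbic system", "pragmatic", "positive valence"))
p.annotate(Tag("assembly A17", "syntactic", "assembly activated"))
print([f"{t.source}: {t.kind}={t.content}" for t in p.tags])
```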
Characteristic differences between tags could be based on different latencies of response, different temporal patterns, differential activation of particular sets of inputs, or even differential use of particular kinds of neurotransmitters. The role that a particular kind of tag plays would depend on its function within the system. Linkages between particular sensory patterns and motivational evaluations could be formed that add tags related to previous reward or punishment history, thereby attaching a hedonic marker to a sensory pattern. In this way, pragmatic meanings ("intentionality") could be conferred on sensory representations ("intensionality").13 Pragmatic meanings could similarly be attached to representations involved in motor planning and execution. Such emotive, motivational factors play a predominant role in steering everyday behavior (Hardcastle 1999). Neural signal tags with different characteristics could thus differentiate patterns that encode the syntactic, semantic, and pragmatic aspects of an elaborated neural activity pattern.

In the wake of an action that had hedonic salience, associations between all such co-occurring tags would then be stored in memory. The system would thus build up learned expectations of the manifold hedonic consequences of percepts and actions. When similar circumstances presented themselves, memory traces containing all of the hedonic consequences would be read out to facilitate or inhibit particular action alternatives, depending upon whether percept-action sequences in past experience had resulted in pleasure or pain. Such a system, which computes conditional probabilities weighted by hedonic relevance, is capable of one-shot learning. A system so organized creates its own concepts and meanings, thoroughly imbued with purpose. Formation of new neural assemblies is thus a means by which the brain can adaptively construct what are in effect new measuring devices that make new distinctions on an internal milieu that is richly coupled to the external world (Cariani 1998a).

Finally, we know firsthand that brains are material systems capable of supporting conscious awareness.14 These classes of linkages between neural patterns produced by sensory inputs (external semantics), those produced by internal coordinations (syntactics), and those produced by intrinsic goal-states may have correspondences in the structure of experience. Those neural signal patterns produced by processes that are contingent relative to the internal set of signal self-productions resemble measurement processes, and these are experienced as sensations. Ordered sequences of neural signal patterns generated from within the system would have the character of successions of mental symbols, and these would be experienced as thoughts. Those internal patterns related to goal-states have the character of system imperatives to adjust behavior, and these would be experienced as desires and pains. Actions would be experienced through their effects on perceptions, exteroceptive and proprioceptive, sensory and hedonic. As in the case of a scientific model, an epistemic cut can be drawn at the point of contingency, where the control of the nervous system ends and sensory inputs become dependent, at least in part, on the environment.
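One-shot, hedonically weighted learning of this kind can be sketched in a few lines (again an illustrative toy under assumed names; the overlap-weighting rule is one simple stand-in for the conditional probabilities mentioned above). Each hedonically salient episode is stored in a single step, and later readout sums the hedonic values of overlapping memory traces to facilitate or inhibit a candidate action:

memory = []   # stored traces: (co-occurring percept tags, action, hedonic value)

def store(percept_tags, action, hedonic_value):
    """One-shot storage of a hedonically salient episode."""
    memory.append((frozenset(percept_tags), action, hedonic_value))

def evaluate(percept_tags, action):
    """Read out traces that overlap the current percept, weighting each
    trace's hedonic value by its fraction of matching tags."""
    p = set(percept_tags)
    return sum(h * len(p & tags) / len(tags)
               for tags, a, h in memory if a == action)

store({'striped', 'large', 'moving'}, 'approach', -1.0)   # pain on approach
store({'sweet-smell', 'red'}, 'approach', +0.8)           # pleasure on approach

print(evaluate({'striped', 'large'}, 'approach'))   # negative -> inhibit
print(evaluate({'red', 'small'}, 'approach'))       # positive -> facilitate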
This might then explain why, when wielding a stick, the boundaries of one's body appear to move outward to the end of the stick, as well as why we cease to experience as sensations those processes that become reliably controlled from within. This raises the possibility that the structure of awareness is isomorphic to the functional organization of informational process in the brain and, on a more abstract level, to the operational structure of the ideal observer.

9. Conclusions

Using concepts developed and elaborated by Howard Pattee, we have outlined common, fundamental roles that symbols might play in life and mind. The organism produces and reproduces itself using genetic codes, while the mind continually regenerates its own organization through neural codes. We then considered commonalities between the epistemic processes of organisms and brains and the operational structure of scientific models. The various roles of symbolic, dynamics-based, and neurocomputational descriptions were then evaluated in terms of the different aspects of brain function that they illuminate and neglect. We then took up the problem of neural coding and asked whether brains require memory mechanisms that perform organizational functions analogous to those of genetic information in cells. A high-level conception of the brain that combines self-production of neural signals and percept-action loops was proposed, and the semiotic relations in such systems were discussed. Finally, we briefly examined high-level similarities between the structure of awareness and the operational structure of the observer, and pondered whether self-regenerative organization is essential to life, mind, and even conscious awareness itself. The deep insights of Howard Pattee into the essentials of biological organization have proven invaluable in our difficult but rewarding quest to understand how brains work such that they can construct their own meanings.

10. Acknowledgments

I owe a profound intellectual debt to Howard Pattee, who introduced me to the world of symbols. I could not have asked for a more intellectually-engaged and engaging mentor. The most important lesson I learned from Howard is the necessity of continuing to ask fundamental questions in the face of a world obsessed with the accumulation of little facts. In the early stages of this paper, I was much provoked by discussions with the late Alan Hendrickson, who was searching for molecular mechanisms for encoding time patterns. Our conversations and his unpublished manuscript on the engram prompted me to think about the stabilization of organization that memory provides and to consider possible molecular storage mechanisms. This work was supported by grant DC3054 from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health.

11. Notes

1 A concrete example involves the tRNA molecules that map particular tri-nucleotide codons to particular amino acids in translation. These tRNA molecules that implement the interpretation of the genetic code are also themselves produced by the cell, so that alternative, and even multiple, interpretations of the same nucleotide sequence would be possible (though unlikely to be functionally meaningful). The cell fabricates the means of interpreting its own plans.

2 Many more aspects of closure are discussed elsewhere in greater depth (Chandler and Van de Vijver 2000; Maturana 1970; Maturana 1981; Pask 1981; Varela 1979; von Foerster 1984a; von Glasersfeld 1987).
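To make note 1's point concrete, here is a deliberately crude toy in Python (the two-letter "codons", the products, and the representation of the code table as a dictionary are all inventions for illustration). The same genome yields different products under different interpretations, and among the products is the machinery, standing in for the tRNAs, that realizes the interpretation itself:

def translate(genome, code_table):
    """Read the genome two letters at a time through the current code table."""
    codons = [genome[i:i+2] for i in range(0, len(genome), 2)]
    return [code_table[c] for c in codons]

# The table's maker is itself among the genome's products (semantic closure):
code_table = {'AA': 'enzyme-1', 'AG': 'enzyme-2', 'GA': 'table-maker'}
genome = 'AAGAAG'
print(translate(genome, code_table))  # -> ['enzyme-1', 'table-maker', 'enzyme-2']

# An alternative interpretation of the *same* nucleotide sequence:
alt_table = {'AA': 'enzyme-2', 'AG': 'enzyme-1', 'GA': 'table-maker'}
print(translate(genome, alt_table))   # -> ['enzyme-2', 'table-maker', 'enzyme-1']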
3 The failure to find intelligible neural representations for sensory qualities has led some theorists, e.g. (Freeman 1995; Hardcastle 1999), to propose that explicit representations do not exist as such, at least on the level of the cerebral cortex, and that such qualities are instead implicitly embedded in the mass-dynamics in a more covert way.

4 Thus the belief in a "symbol level" of processing. The model of vision laid out in (Trehub 1991) is a good example of the microcomputational approach to perception, while (Pylyshyn 1984) epitomizes the symbol-based approach to cognition.

5 Von Békésy identified a number of striking localization mechanisms in different sensory modalities that appear to involve computation of temporal cross-correlation between receptors at different places on the body surface. This suggests the possibility of a phylogenetically-primitive "computational Bauplan" for information-processing strategies, analogous to the archetypal anatomical-developmental body plan of vertebrates and many invertebrates. One expects special-purpose evolutionary specializations for those percept-action loops whose associated structures are under the control of the same sets of genes. Intraspecies communication systems, particularly pheromone systems, are prime examples. Here members of the same species have common genes that can specify dedicated structures for the production and reception of signals. The signals are always the same, so that dedicated receptors and labeled-line codes can be used. One expects the evolution of general-purpose perceptual mechanisms for those tasks that involve detection and recognition of variable parts of the environment over which a species has no effective control, such as the recognition of predators under highly variable contexts (e.g. lighting, acoustics, wind, chemical clutter). In this case the system must be set up to detect properties, such as form, that remain invariant over a wide range of conditions.

6 Strong physiological evidence exists for interspike interval coding of periodicity pitch in the auditory system (Cariani 1999; Cariani and Delgutte 1996b; Meddis and Hewitt 1991). Interspike intervals form autocorrelation-like, iconic representations of stimulus periodicities from which pitch-equivalences, pitch-similarities, and other harmonic relations are simply derived. These relations require complex cognitive analysis if a spectrographic frequency-time representation is taken as primitive. Here is a potential example of cognitive structures that arise out of the structure of underlying neural codes.
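As a toy illustration of the interval representation in note 6 (the spike times, bin width, and analysis window are invented for the example; real analyses pool intervals over large ensembles of auditory nerve fibers), the following sketch computes all-order interspike intervals from a spike train locked to a 200 Hz tone and reads the most common interval as the stimulus period:

from collections import Counter

def all_order_intervals(spike_times, max_interval=0.02):
    """Histogram of all positive spike-pair differences up to max_interval (s),
    binned at 0.1 ms; this forms an autocorrelation-like representation."""
    hist = Counter()
    for i, t1 in enumerate(spike_times):
        for t2 in spike_times[i+1:]:
            d = round(t2 - t1, 4)
            if d <= max_interval:
                hist[d] += 1
    return hist

spikes = [0.000, 0.005, 0.010, 0.015, 0.020, 0.025]  # locked to 200 Hz (5 ms)
period, count = all_order_intervals(spikes).most_common(1)[0]
print(f"dominant interval = {period*1000:.1f} ms -> {1/period:.0f} Hz")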
7 For example, fitness landscapes increase in effective dimensionality as organisms evolve new epistemic functions. More modes of sensing and effecting result in more modes of interaction between organisms.

8 Complementarity between different modes of description has been an abiding part of Pattee's thinking. Pattee (1979) explicates the complementarity between universal laws and local rules, and outlines how organized material systems can be understood in either a "dynamic" or a "linguistic" mode, depending upon the organization of the system and the purposes of the describer. The dynamic mode describes the behavior of the system in terms of a continuum of states traversed by the action of rate-dependent physical laws, while the linguistic mode describes the behavior of the system in terms of rule-governed transitions between discrete functional states. A simple switch can be described in either of these terms, as a continuous dynamical system with two basins of attraction or as a discrete system with two alternative states (Pattee 1974). The attractor basins of the dynamical system are the sign-primitives of the symbol-system. How the switch should be described is a matter of the purposes to which the description is to be put, whether the describer is interested in predicting the state-trajectory behavior of the system or in outlining the functional primitives it affords to some larger system.
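The switch of note 8 can be simulated in a few lines. In this sketch (the double-well flow dx/dt = x - x**3 is a standard textbook stand-in chosen here for illustration, not taken from Pattee's papers), the same element is described dynamically, by integrating a continuous flow into one of two attractor basins, and linguistically, by reading out only which basin, i.e. which discrete functional state, it occupies:

def settle(x, steps=1000, dt=0.01):
    """Dynamic mode: Euler-integrate dx/dt = x - x**3, whose attractors
    lie at x = -1 and x = +1, separated by a repeller at x = 0."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

def read_symbol(x):
    """Linguistic mode: report only which basin the state occupies."""
    return '1' if x > 0 else '0'

for x0 in (-0.9, -0.1, 0.2, 1.5):
    xf = settle(x0)
    print(f"x0 = {x0:+.1f} settles near {xf:+.2f}, read as '{read_symbol(xf)}'")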
9 But the symbols themselves are also material objects that obey physical laws. As Hermann Weyl remarked:

"… we need signs, real signs, as written with chalk on the blackboard or with pen on paper. We must understand what it means to place one stroke after the other. It would be putting matters upside down to reduce this naively and grossly misunderstood ordering of signs in space to some purified spatial conception and structure, such as that expressed in Euclidean geometry. Rather, we must support ourselves here on the natural understanding in handling things in our natural world around us. Not pure ideas in pure consciousness, but concrete signs lie at the base, signs which are for us recognizable and reproducible despite small variations in detailed execution, signs which by and large we know how to handle.

As scientists we might be tempted to argue thus: 'As we know' the chalk mark on the blackboard consists of molecules, and these are made up of charged and uncharged elementary particles, electrons, neutrons, etc. But when we analyzed what theoretical physics means by such terms, we saw that these physical things dissolve into a symbolism that can be handled according to some rules. The symbols, however, are in the end again concrete signs, written with chalk on the blackboard. You notice the ridiculous circle." (Weyl 1949b)

10 Operationally, we are justified in describing a material system as performing a "computation" when we can put its observed state-transitions, under a well-specified set of observables, into a 1:1 correspondence with the state-transitions of a finite-length formal procedure, e.g. the states of a deterministic finite-state automaton. This is a more restrictive, operationally-defined use of the word "computation" than the more common, looser sense of any orderly informational process. Relationships between the operations of the observer (Figure 4A) and the functional states of the predictive process (Figure 4B) are discussed more fully in (Cariani 1989).

11 John von Neumann showed in the 1930's that attempts to incorporate the measuring devices (semantics) into the formal, computational part of the modeling process (syntactics) result in indefinite regresses, since one then needs other measuring devices to determine the initial conditions of the devices one has just subsumed into the formal model (von Neumann 1955). Unfortunately, this did not prevent others in the following decades from conflating these semiotic categories and reducing semantics and pragmatics to logical syntax.

12 See (Hendrickson and Hendrickson 1998; John 1967; John 1972; John et al. 1973; Thatcher and John 1977) for longer discussions of alternative temporal mechanisms. Pattee's polymer-based feedback shift register model of information storage (Pattee 1961) was part of the inspiration for this mechanism. DNA methylation might be a candidate marker, since this mechanism is utilized in many other similar molecular contexts and there is an unexplained overabundance of DNA methyltransferase in brains relative to other tissues (Brooks et al. 1996).

13 What we call here semantics and pragmatics are often called the "intensional" and "intentional" aspects of symbols (Nöth 1990). Semantics and pragmatics have often been conflated, with injury to both concepts. (Freeman 1999) argues that we should also separate intent (forthcoming, directed action) from motive (purpose). Many realist and model-theoretic frameworks that have dominated the philosophy of language and mind for the last half century ignore the limited, situated, purpose-laden nature of the observer (Bickhard and Terveen 1995). Realist philosophers, e.g. (Fodor 1987), have defined "meaning" in such a way that it precludes any notion that is brain-bound and therefore admits of individual psychological differences and constructive capacities (cf. Lakoff's (1987) critique of "objectivism"). Contra Fodor and Putnam, meaning can and does lie in the head. The neglect of the self-constructing and expansive nature of the observer's categories has impeded the development of systems that are thoroughly imbued with purpose, directly connected to their environs, and capable of creating their own conceptual primitives (Bickhard and Terveen 1995; Cariani 1989).

14 We discuss elsewhere whether activation of particular neurons is sufficient for conscious awareness or whether it depends instead on coherent organizations of neural activity (Cariani in press-a).

FIGURE CAPTIONS

Figure 1. Three conceptions of the role of symbols in biological self-production. A. John von Neumann's (1951) mixed digital-analog scheme for a self-producing automaton. Inheritable plans direct the construction of the plans themselves and the universal construction apparatus. Once plans and constructor can reproduce themselves, then byproducts can be produced that need not themselves be directly a part of the reproductive loop. B. A nonsymbolic self-production network in which there is no division between plans and material parts. C. A symbolically-constrained self-production network in which genetic expression sets boundary conditions for metabolic reaction cycles through catalytic control points (concentric circles).

Figure 2. Stimulus-contingent switching between reverberant states. A. Hebb's conception of percept-action mappings using reverberant loops. B. Simplified state-transition diagram for this process. Depending upon the stimulus and the resulting neural activity pattern, the network enters one of two resonant states (pattern-resonances), which subsequently produce different motor responses. Resonant states at this level of description become the functional primitive (symbolic) states of higher-level descriptions. The epistemic cut for this system lies at the point of contingency, where stimuli A and B cause different system-trajectories.

Figure 3. Operational and semiotic structure of scientific models. A. Hertzian commutation diagram illustrating the operations involved in making a prediction and testing it empirically. B. Operational state transition structure for measurement, prediction, and evaluation.
Preparation of the measuring apparatus (reference state R1), the contingent nature of the measurement transition (R1 transits to A, but could have registered B instead), computation of a prediction (A transits to PA by way of intermediate computational states), and comparison with the outcome of the second measurement (A vs. C). Epistemic cuts demarcate boundaries between operationally-contingent, extrinsically-caused events and operationally-determinate, internally-caused sequences of events.

Figure 4. Percept-action loops in organisms and devices. A. Cycles of actions and percepts and the formation of sensorimotor interactions (von Uexküll, 1926). B. The completion of a neural feedback loop through environmental linkages (McCulloch, 1946). C. Adaptive control of percept-action loops in artificial devices, showing the three semiotic axes (Cariani, 1989, 1997, 1998). Evaluative mechanisms adaptively modify sensing and effector functionalities as well as steering percept-action mappings.

Figure 5. The brain as a set of resonant loops that interact with an external environment. The loops represent functionalities implemented by means of pattern-resonances in recurrent networks.

Figure 6. Semiotics of brain states. A. Basic semiotic relations between symbol, world, and purpose: syntactics, semantics, and pragmatics. B. Semiotic aspects of brain states. Semiotic functional division of labor via different sets of overlaid circuits. Neural assemblies in sensory and motor systems provide semantic linkages between central brain states and the external world. Assemblies that integrate and sequence internal representations for prediction, planning, and coordination implement syntactic linkages. Those that add evaluative components to neural signals (e.g. limbic system) implement pragmatic linkages. Phenomenal correlates of these semiotic aspects are sensations, thoughts, and motivational states (hungers, pains, drives, desires, emotions).

12. References

Abeles, M. 1982, "Role of the cortical neuron: integrator or coincidence detector" Israel Journal of Medical Sciences 18, 83-92.
Abeles, M. 1990, Corticonics (Cambridge University Press, Cambridge).
Arbib, M. A. 1989, The Metaphorical Brain 2: Neural Nets and Beyond (John Wiley, New York).
Ashby, W. R. 1960, Design for a Brain (Chapman and Hall, London).
Baars, B. J. 1988, A Cognitive Theory of Consciousness (Cambridge University Press, Cambridge).
Basar, E. 1989, "Brain natural frequencies are causal factors for resonances and induced rhythms", in: Brain Dynamics, E. Basar and T.H. Bullock (eds.)(Springer-Verlag, Berlin), 425-57.
Beurle, R. L. 1956, "Properties of a mass of cells capable of regenerating pulses" Phil. Trans. Roy. Soc. London B240, 55-94.
Bickhard, M. H. & Terveen, L. 1995, Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution (Elsevier, New York).
Braitenberg, V. 1967, "Is the cerebellar cortex a biological clock in the millisecond range?" Prog. Brain Res. 25, 334-46.
Bridgman, P. W. 1936, The Nature of Physical Theory (Princeton University Press, Princeton, NJ).
Brooks, P. J., Marietta, C. & Goldman, D. 1996, "DNA mismatch repair and DNA methylation in adult brain neurons" J. Neurosci. 16, 939-45.
Carello, C., Turvey, M. T., Kugler, P. N. & Shaw, R. E. 1984, "Inadequacies of the computer metaphor", in: Handbook of Cognitive Neuroscience, M.S. Gazzaniga (eds.)(Plenum Press, New York), 229-48.
Cariani, P. A. 1989, On the Design of Devices with Emergent Semantic Functions. Ph.D. thesis, State University of New York at Binghamton (Ann Arbor: University Microfilms).
Cariani, P. 1992a, "Emergence and artificial life", in: Artificial Life II, Santa Fe Institute Studies in the Science of Complexity, Vol. X, C.G. Langton, C. Taylor, J.D. Farmer and S. Rasmussen (eds.)(Addison-Wesley, Redwood City, CA), 775-98.
Cariani, P. 1992b, "Some epistemological implications of devices which construct their own sensors and effectors", in: Towards a Practice of Autonomous Systems, F. Varela and P. Bourgine (eds.)(MIT Press, Cambridge, MA), 484-93.
Cariani, P. 1993, "To evolve an ear: epistemological implications of Gordon Pask's electrochemical devices" Systems Research 10(3), 19-33.
Cariani, P. 1995, "As if time really mattered: temporal strategies for neural coding of sensory information" Communication and Cognition - Artificial Intelligence (CC-AI) 12(1-2), 161-229. Reprinted in: K. Pribram (ed.), Origins: Brain and Self-Organization (Lawrence Erlbaum, Hillsdale, NJ, 1994), 208-52.
Cariani, P. 1997a, "Emergence of new signal-primitives in neural networks" Intellectica 1997(2), 95-143.
Cariani, P. 1997b, "Temporal coding of sensory information", in: Computational Neuroscience: Trends in Research, 1997, J.M. Bower (eds.)(Plenum, New York), 591-8.
Cariani, P. 1998a, "Epistemic autonomy through adaptive sensing", Proceedings of the 1998 IEEE International Symposium on Intelligent Control (ISIC), held jointly with the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA) and the Intelligent Systems and Semiotics (ISAS) symposium, A Joint Conference on the Science and Technology of Intelligent Systems, Sept. 14-17, 1998, National Institute of Standards and Technology, Gaithersburg, MD, 718-23.
Cariani, P. 1998b, "Towards an evolutionary semiotics: the emergence of new sign-functions in organisms and devices", in: Evolutionary Systems, G. Van de Vijver, S. Salthe and M. Delpos (eds.)(Kluwer, Dordrecht, Holland), 359-77.
Cariani, P. 1999, "Temporal coding of periodicity pitch in the auditory system: an overview" Neural Plasticity 6(4), 147-72.
Cariani, P. in press-a, "Anesthesia, neural information processing, and conscious awareness" Consciousness and Cognition.
Cariani, P. in press-b, "Neural timing nets for auditory computation", in: Computational Models of Auditory Function, S. Greenberg and M. Slaney (eds.)(IOS Press, Amsterdam), 1-16.
Cariani, P. in press-c, "Cybernetics and the semiotics of translation", in: Tra segni. Athanor. Semiotica, Filosofia, Arte, Letterature, XI, 2, 200.
Cariani, P. A. & Delgutte, B. 1996a, "Neural correlates of the pitch of complex tones. I. Pitch and pitch salience" J. Neurophysiol. 76(3), 1698-716.
Cariani, P. A. & Delgutte, B. 1996b, "Neural correlates of the pitch of complex tones. II. Pitch shift, pitch ambiguity, phase-invariance, pitch circularity, and the dominance region for pitch" J. Neurophysiol. 76(3), 1717-34.
Carr, C. E. 1993, "Processing of temporal information in the brain" Annu. Rev. Neurosci. 16, 223-43.
Chandler, J. L. R. & Van de Vijver, G. 2000, Closure: Emergent Organizations and their Dynamics (Annals of the New York Academy of Sciences, New York).
Churchland, P. S. & Sejnowski, T. J. 1992, The Computational Brain (MIT Press, Cambridge).
Conrad, M. 1998, "Towards high evolvability dynamics", in: Evolutionary Systems, G. Van de Vijver, S. Salthe and M. Delpos (eds.)(Kluwer, Dordrecht, Holland), 33-43.
1965, "The problem of organismic reliability", in: Cybernetics of the Nervous System, N. Wiener and J.P. Schade (eds.)(Elsevier, Amsterdam), 9-63. de Latil, P. 1956, Thinking by Machine (Houghton Mifflin, Boston). Dudai, Y. 1989, The Neurobiology of Memory (Oxford University Press, Oxford). Edelman, G. M. 1987, Neural Darwinism: The Theory of Neuronal Group Selection (Basic Books, New York). Eigen, M. 1974, "Molecules, information, and memory: from molecular to neural networks", in: The Neurosciencs: A Third Study Program, F.O. Schmitt and F.G. Worden (eds.)(MIT Press, Cambridge), 1-10. Etxeberria, A. 1998, " Embodiment of natural and artificial agents", in: Evolutionary Systems, G. van de Vijver, S. Salthe and M. Delpos (eds.)(Kluwer, Dordrecht, Holland), Fodor, J. 1987, Psychosemantics. The Problem of Meaning in the Philosophy of Mind (MIT Press, Cambridge). Freeman, W. J. 1975, Mass Action in the Nervous System (Academic Press, New York). Freeman, W. J. 1995, Societies of Brains. A Study in the Neuroscience of Love and Hate. (Lawrence Erlbaum, New York). Freeman, W. J. 1999, "Consciousness, intentionality, and causality", in: Reclaiming Cognition (Reprint of J. Consciousness Studies, Vol. 6, Nos.11-12)., W.J. Freeman and R. Núñez (eds.)(Imprint Academic, Thorverton, UK), 143-72. Gerard, R. W. 1959, "Neurophysiology: an integration (molecules, neurons, and behavior)", in: Handbook of Physiology: Neurophysiology. Volume II., J. Field, H.W. Magoun and V.E. Hall (eds.)(American Physiological Society, Washington, D.C.), 1919-65. Greene, P. H. 1962, "On looking for neural networks and "cell assemblies" that underlie behavior. I. Mathematical model. II. Neural realization of a mathematical model." Bull. Math. Biophys. 24, 247-75, 395-411. 35 Grossberg, S. 1988, The Adaptive Brain, Vols I. and II (Elsevier, New York). Grossberg, S. 1995, "Neural dynamics of motion perception, recognition learning, and spatial attention", in: Mind as Motion: Explorations in the Dynamics of Cognition, R.F. Port and T. van Gelder (eds.)(MIT Press, Cambridge), 449-90. Haken, H. 1983, "Synopsis and introduction", in: Synergetics of the Brain, E. Basar, H. Flohr, H. Haken and A.J. Mandell (eds.)(Springer-Verlag, Berlin), 3-27. Haken, H. 1991, Synergetic Computers and Cognition (Springer-Verlag, Berlin). Hall, T. S. 1969, Ideas of Life and Matter: Studies in the History of General Physiology, 600 B.C. - 1900 A.D., 2 vols. (University of Chicago, Chicago). Hardcastle, V. G. 1999, "It's O.K. to be complicated: the case of emotion", in: Reclaiming Cognition (Reprint of J. Consciousness Studies, Vol. 6, Nos.11-12)., W.J. Freeman and R. Núñez (eds.)(Imprint Academic, Thorverton, UK), 237-49. Hebb, D. O. 1949, The Organization of Behavior (Simon & Schuster, New York). Hebb, D. O. 1966, A Textbook of Psychology, 2nd ed. (W.B Saunders, Philadelphia). Hendrickson, A. E. & Hendrickson, D. E. 1998, The Engram: The Neural Code and the Molecular and Cellular Basis of Learning and Memory (Unpublished manuscript, Verbier, Switzerland). Hertz, H. 1894, Principles of Mechanics (1956 reprint) (Dover, New York). Jeffress, L. A. 1948, "A place theory of sound localization" J. Comp. Physiol. Psychol. 41, 35-9. John, E. R. 1967, Mechanisms of Memory (Wiley, New York). John, E. R. 1972, "Switchboard vs. statistical theories of learning and memory" Science 177, 850-64. John, E. R. 1976, "A model of consciousness", in: Consciousness and Self-Regulation, Vol 1, G.E. Scwartz and D. Shapiro (eds.)(Plenum, New York), 1-50. John, E. R. 
1988, "Resonating fields in the brain and the hyperneuron", in: Dynamics of Sensory and Cognitive Processing by the Brain, E. Basar (eds.)(Springer-Verlag, Berlin), 368-77. John, E. R. 1990, "Representation of information in the brain", in: Machinery of the Mind, E.R. John (eds.)(Birkhauser, Boston), 27-56. John, E. R., Bartlett, F., Shimokochi, M. & Kleinman, D. 1973, "Neural readout from memory" J. Neurophysiol. 36(5), 893-924. John, E. R. & Schwartz, E. L. 1978, "The neurophysiology of information processing and cognition" Ann. Rev. Psychol. 29, 1-29. 36 Kampis, G. 1991a, "Emergent computations, life, and cognition" World Futures 32(2-3), 95-110. Kampis, G. 1991b, Self-Modifying Systems in Biology and Cognitive Science (Pergamon Press, Oxford). Katchalsky, A. K., Rowland, V. & Blumenthal, R. Dynamic Patterns of Brain Cell Assemblies: Neurosciences Research Program, 1972. Kauffman, S. 1993, The Origins of Order (Oxford University Press, New York). Kelso, J. A. S. 1995, Dynamic Patterns: The Self-Organization of Brain and Behavior (MIT Press, Cambridge, MA). Köhler, W. 1951, "Relational determination in perception", in: Cerebral Mechanisms in Behavior: The Hixon Symposium, L.A. Jeffress (eds.)(Wiley, New York), 200-43. Kugler, P. N. & Shaw, R. 1990, "On the role of symmetry and symmetry-breaking in thermodynamics and epistemic engines", in: Synergetics of Cognition, H. Haken (eds.)(Springer Verlag, Heidelberg), 296-331. Kugler, P. N. a. T., M.T. 1987, Information, Natural Law, and the Self-assembly of Rhythmic Movement (Lawrence Erlbaum Associates, Hillsdale, New Jersey). Lakoff, G. 1987, Women, Fire, and Dangerous Things: What Categories Reveal about the Mind (University of Chicago, Chicago). Lashley, K. S. 1998, "The problem of cerebral organization in vision. (Biol. Symp. 1942; 7:301-322)", in: The Neuropsychological Theories of Lashley and Hebb, J. Orbach (eds.)(University Press of America, Lanham, MD), 159-76. Licklider, J. C. R. 1951, "A duplex theory of pitch perception" Experientia VII(4), 12834. Licklider, J. C. R. 1959, "Three auditory theories", in: Psychology: A Study of a Science. Study I. Conceptual and Systematic, S. Koch (eds.)(McGraw-Hill, New York), 41144. Longuet-Higgins, H. C. 1987, Mental Processes: Studies in Cognitive Science (The MIT Press, Cambridge, Mass.). Longuet-Higgins, H. C. 1989, "A mechanism for the storage of temporal correlations", in: The Computing Neuron, R. Durbin, C. Miall and G. Mitchison (eds.)(AddisonWesley, Wokingham, England), 99-104. Lorente de Nó, R. & Fulton, J. F. 1949, "Cerebral cortex: architecture, intracortical connections, motor projections (1933)", in: Physiology of the Nervous System, J.F. Fulton (eds.)(Oxford University Press, New York), 288-330. 37 MacKay, D. G. 1987, The Organization of Perception and Action (Springer-Verlag, New York). MacKay, D. M. 1962, "Self-organization in the time domain", in: Self-Organizing Systems 1962, M.C. Yovitts, G.T. Jacobi and G.D. Goldstein (eds.)(Spartan Books, Washington, D.C.), 37-48. Marr, D., 1991, From the Retina to the Neocortex: Selected Papers of David Marr (Birkhäuser, Boston). Maturana, H. 1970, "The biology of cognition", in: Autopoiesis and Cognition, H. Maturana and F. Varela (eds.)(D. Reidel, Dordrecht, Holland), Maturana, H. R. 1981, "Autopoiesis", in: Autopoiesis: A Theory of the Living, M. Zeleny (eds.)(North Holland, New York), McCulloch, R., 1989, Collected Works of Warren McCulloch, Vols 1-4 (Intersystems Publications, Salinas, CA). McCulloch, W. S. 
1946, "A heterarchy of values determined by the topology of nervous nets" Bull. Math. Biophys. 7(2), 89-93. McCulloch, W. S. 1947, "Modes of functional organization of the cerebral cortex" Federation Proceedings 6, 448-52. McCulloch, W. S. 1965, Embodiments of Mind (MIT Press, Cambridge). McCulloch, W. S. 1969a, "Of digital oscillators", in: Information Processing in the Nervous System, K.N. Leibovic (eds.)(Springer Verlag, New York), 293-6. McCulloch, W. S. 1969b, "Regenerative loops" J. Nervous and Mental Disease 149(1), 54-8. McCulloch, W. S. & Pitts, W. H. 1943, "A logical calculus of the ideas immanent in nervous activity", in: Embodiments of Mind (1965), W.S. McCulloch (eds.)(MIT Press, Cambridge, MA), 19-39. Meddis, R. & Hewitt, M. J. 1991, "Virtual pitch and phase sensitivity of a computer model of the auditory periphery. II. Phase sensitivity" J. Acoust. Soc. Am. 89(6), 2883-94. Mesulam, M.-M. 1998, "From sensation to perception" Brain 121, 1013-52. Michaels, C. E. & Carello, C. 1981, Direct Perception (Prentice-Hall, Englewood Cliffs, NJ). Miller, R. R. & Barnet, R. C. 1993, "The role of time in elementary associations" Current Directions in Psychological Science 2(4), 106-11. Minch, E. The Representation of Hierarchical Structure in Evolving Networks: State University of New York at Binghamton, 1987. 38 Mingers, J. 1995, Self-Producing Systems (Plenum Press, New York). Modrak, D. K. 1987, Aristotle: The Power of Perception (University of Chicago, Chicago). Morrell, F. 1967, "Electrical signs of sensory coding", in: The Neurosciences: A Study Program, G.C. Quarton, T. Melnechuck and F.O. Schmitt (eds.)(Rockefeller University Press, New York), 452-69. Morris, C. 1946, Signs, Language, and Behavior (George Braziller, New York). Mountcastle, V. 1967, "The problem of sensing and the neural coding of sensory events", in: The Neurosciences: A Study Program, G.C. Quarton, T. Melnechuk and F.O. Schmitt (eds.)(Rockefeller University Press, New York), Mountcastle, V. 1993, "Temporal order determinants in a somatosthetic frequency discrimination: sequential order coding" Annals New York Acad. Sci. 682, 151-70. Mumford, D. 1994, "Neuronal architectures for pattern-theoretic problems", in: LargeScale Neuronal Theories of the Brain, C. Koch and J.L. Davis (eds.)(MIT Press, Cambridge), 125-52. Murdoch, D. 1987, Niels Bohr's Philosophy of Physics (Cambridge University Press, Cambridge). Nöth, W. 1990, Handbook of Semiotics (Indiana University Press, Indianapolis). Nunez, P. L. 1995, "Towards a physics of neocortex", in: Neocortical Dynamics and Human EEG Rhythms, P.L. Nunez (eds.)(Oxford University Press, New York), 68132. Pask, G. 1960, "The natural history of networks", in: Self-Organzing Systems, M.C. Yovits and S. Cameron (eds.)(Pergamon Press, New York), 232-63. Pask, G. 1981, "Organizational closure of potentially conscious systems", in: Autopoiesis: A Theory of Living Organization, M. Zeleny (eds.)(North Holland, New York), 265-308. Pattee, H. H. 1961, "On the origin of macromolecular sequences" Biophysical Journal 1, 683-709. Pattee, H. H. 1969, "How does a molecule become a message?" Developmental Biology Supplement 3, 1-16. Pattee, H. H. 1973a, "The physical basis of the origin of hierarchical control", in: Hierarchy Theory: The Challenge of Complex Systems, H. Pattee (eds.)(George Braziller, New York), Pattee, H. H. 1973b, "Physical problems in the origin of natural controls", in: Biogenesis, Homeostasis, Evolution, A. Locker (eds.)(Pergamon Press, New York), 39 Pattee, H. H. 
1974, "Discrete and continuous processes in computers and brains", in: The Physics and Mathematics of the Nervous System, W. Guttinger, M. Conrad and M. Dal Cin (eds.)(Springer-Verlag, New York), Pattee, H. H. 1979, "The complementarity principle and the origin of macromolecular information" Biosystems 11, 217-26. Pattee, H. H. 1982, "Cell psychology: an evolutionary view of the symbol-matter problem." Cognition and Brain Theory 5, 325-41. Pattee, H. H. 1985, "Universal principles of measurement and language functions in evolving systems", in: Complexity, Language, and Life: Mathematical Approaches, J.L. Casti and A. Karlqvist (eds.)(Springer-Verlag, Berlin), 268-81. Pattee, H. H. 1990, "The measurement problem in physics, computation, and brain theories.", in: Nature, Cognition, and System, M.E. Cavallo (eds.)(Kluwer, Winschoten, Holland), Pattee, H. H. 1995, "Evolving self-reference: matter, symbols, and semantic closure" Communication and Cognition – Artificial Intelligence (CC-AI) 12(1-2), 9-27. Pattee, H. H. 1996, "The problem of observables in models of biological organizations", in: Evolution, Order, and Complexity, E.L. Khalil and K.E. Boulding (eds.)(Routledge, London), 249-64. Pattee, H. H. 2001, The physics of symbols: bridging the epistemic cut. Biosystems (this issue). Perkell, D. H. & Bullock, T. H. 1968, "Neural Coding" Neurosciences Research Program Bulletin 6(3), 221-348. Piatelli-Palmarini, M., 1980, Language and Learning. The Debate between Jean Piaget and Noam Chomsky (Harvard University Press, Cambridge, MA). Powers, W. 1973, Behavior: The Control of Perception (Aldine, New York). Pribram, K. H. 1971, Languages of the Brain: Experimental Paradoxes and Principles in Neurophysiology (Prentice-Hall, New York). Pylyshyn, Z. 1984, Computation and Cognition (MIT Press, Cambridge). Rashevsky, N. 1960, Mathematical Biophysics: Physico-Mathematical Foundations of Biology, Vols. I & II (Dover, New York). Rieke, F., Warland, D., de Ruyter van Steveninck, R. & Bialek, W. 1997, Spikes: Exploring the Neural Code (MIT Press, Cambridge). Rocha, L. 1996, "Eigen-states and symbols." Systems Research 13(3), 371-84. 40 Rocha, L. 1998, "Selected self-organization and the semiotics of evolutionary systems", in: Evolutionary Systems, G. Van de Vijver, S. Salthe and M. Delpos (eds.)(Kluwer, Dordrecht, Holland), 341-58. Rosen, R. 1971, "Some realizations of (M,R) systems and their interpretation" J. Math. Biophys. 33, 303-19. Rosen, R. 1973a, "On the generation of metabolic novelties in evolution", in: Biogenesis, Homeostasis, Evolution, A. Locker (eds.)(Pergamon Press, New York), Rosen, R. 1973b, "On the relation between structural and functional descriptions of biological systems", in: The Physical Principles of Neuronal and Organismic Behavior, M. Conrad and E.M. Magar (eds.)(Gordon & Breach, London), 227-32. Rosen, R. 1978, Fundamentals of Measurement and Representation of Natural Systems (North-Holland, New York). Rosen, R. 1985, Anticipatory Systems (Pergamon Press, Oxford). Rosen, R. 1986, "Causal structures in brains and machines" International Journal of General Systems 12, 107-26. Rosen, R. 1991, Life Itself (Columbia University Press, New York). Rosen, R. 2000, Essays on Life Itself (Columbia University Press, New York). Schyns, P. G., Goldstone, R. L. & Thibaut, J.-P. 1998, "The development of features in object concepts" Behavioral and Brain Sciences 21(1), 1-54. Squire, L. R. 1987, Memory and Brain (Oxford Unversity Press, New York). Tank, D. W. & Hopfield, J. J. 
1987, "Neural computation by concentrating information in time" Proc. Natl. Acad. Sci. USA 84, 1896-900. Thatcher, R. W. & John, E. R. 1977, Functional Neuroscience, Vol. I. Foundations of Cognitive Processes (Lawrence Erlbaum, Hillsdale, NJ). Trehub, A. 1991, The Cognitive Brain (MIT Press, Cambridge). Uexküll, J. v. 1926, Theoretical Biology (Harcourt, Brace & Co, New York). Umerez, J. 1998, " The evolution of the symbolic domain in living systems and artificial life", in: Evolutionary Systems, G. van de Vijver, S. Salthe and M. Delpos (eds.)(Kluwer, Dordrecht, Holland), Uttal, W. R. 1973, The Psychobiology of Sensory Coding (Harper and Row, New York). van Gelder, T. & Port, R. F. 1995, "It's about time: an overview of the dynamical approach", in: Mind as Motion: Explorations in the Dynamics of Cognition, R.F. Port and T. van Gelder (eds.)(MIT Press, Cambridge), 1-44. Varela, F. 1979, Principles of Biological Autonomy (North Holland, New York). von Foerster, H. 1984a, Observing Systems (Intersystems Press, Seaside, CA). 41 von Foerster, H. 1984b, "On constructing a reality", in: The Invented Reality, P. Watzlawick (eds.)(W.W. Norton, New York), 41-62. von Glasersfeld, E. 1987, The Construction of Knowledge: Contributions to Conceptual Semantics (Intersystems Press, Salinas, CA). von Glasersfeld, E. 1995, Radical Constructivism: A Way of Knowing and Learning (The Falmer Press, London). von Neumann, J. 1951, "The general and logical theory of automata", in: Cerebral Mechanisms of Behavior (the Hixon Symposium), L.A. Jeffress (eds.)(Wiley, New York), 1-41. von Neumann, J. 1955, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton). von Neumann, J. 1958, The Computer and the Brain (Yale University Press, New Haven). Weyl, H. 1949a, Philosophy of Mathematics and Natural Science (Princeton University Press, Princeton). Weyl, H. 1949b, "Wissenschaft als symbolische Konstruction des Menchens" Eranos Jahrbuch , 427-8, as quoted in: Holton. 1988, Thematic Origins of Scientific Thought (Harvard University Press, Cambridge). 42