Journal of Medical …, 2009
Learning is a process that operates on multiple timescales. Biologically, changes lasting from milliseconds to seconds mediate the formation of short-term memory, while changes lasting from hours to days mediate long-term memory. Memory formation is neither static nor restricted to a single phase of life: every step we take, whether it succeeds or fails, teaches us something and yields knowledge that makes similar events easier to handle in the future. Continuous learning in a dynamic environment is therefore a prerequisite for research into phenomena such as addiction, stress, and noise in such settings. This research proposes a new approach to modelling the nervous system with the aim of implementing learning in a dynamic environment.
1988
An improved learning paradigm that offers a significant reduction in computation time during the supervised learning phase is described. It is based on extending the role that the neuron plays in artificial neural systems. Prior work has regarded the neuron as a strictly passive, non-linear processing element, and the synapse as the primary source of information processing and knowledge retention. In this work, the role of the neuron is extended by allowing its parameters to participate adaptively in the learning phase. The temperature of the sigmoid function is an example of such a parameter. During learning, both the synaptic interconnection weights w_ij^m and the neuronal temperatures T_i^m are optimized so as to capture the knowledge contained within the training set. The method allows each neuron to possess and update its own characteristic local temperature. This algorithm has been applied to logic-type problems such as the XOR or parity problem, resul...
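A minimal sketch of the idea described above, assuming a standard gradient-descent formulation: each neuron computes a sigmoid f(a) = 1 / (1 + exp(-a / T)) with its own temperature T, and both the weights and the per-neuron temperatures are adjusted from the error gradient, illustrated on XOR. The network size, learning rates, and clipping bounds are illustrative assumptions, not the paper's settings.

```python
# Sketch (not the paper's code): gradient descent on both synaptic weights and
# per-neuron sigmoid temperatures, demonstrated on the XOR problem.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

# 2-2-1 network: weights, biases, and one temperature per neuron
W1, b1, T1 = rng.normal(0, 1, (2, 2)), np.zeros(2), np.ones(2)
W2, b2, T2 = rng.normal(0, 1, 2), 0.0, 1.0
eta_w, eta_T = 0.5, 0.1            # separate learning rates for weights and temperatures

for epoch in range(20000):
    for x, t in zip(X, Y):
        # forward pass with temperature-scaled pre-activations
        a1 = W1 @ x + b1
        h = sigmoid(a1 / T1)
        a2 = W2 @ h + b2
        y = sigmoid(a2 / T2)

        # backward pass: error terms w.r.t. the scaled pre-activations
        dz2 = (y - t) * y * (1 - y)
        da2 = dz2 / T2
        dz1 = (W2 * da2) * h * (1 - h)
        da1 = dz1 / T1

        # update synaptic weights ...
        W2 -= eta_w * da2 * h;  b2 -= eta_w * da2
        W1 -= eta_w * np.outer(da1, x);  b1 -= eta_w * da1
        # ... and each neuron's own local temperature
        T2 -= eta_T * dz2 * (-a2 / T2**2)
        T1 -= eta_T * dz1 * (-a1 / T1**2)
        # keep temperatures positive and bounded (assumed constraint, for stability)
        T1 = np.clip(T1, 0.2, 5.0)
        T2 = float(np.clip(T2, 0.2, 5.0))

print("XOR outputs:", [float(sigmoid((W2 @ sigmoid((W1 @ x + b1) / T1) + b2) / T2)) for x in X])
print("learned temperatures:", T1, T2)
```

The temperature gradient follows from the chain rule, d f(a/T) / dT = -(a / T^2) f'(a/T), so the same backward pass that yields the weight updates also yields the temperature updates at negligible extra cost.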
Artificial intelligence has been the inspiration and goal of computing since the discipline was first conceived by Alan Turing. Our understanding of the brain has increased in parallel with the development of computers capable of modelling its functions. While the human brain is vastly complex, too much so for the computational abilities of modern supercomputers, interesting results have been obtained by modelling the nervous systems of smaller creatures such as the salamander [3].
Artificial neural networks (ANNs) are usually homogeneous with respect to the learning algorithms they use. On the other hand, recent physiological observations suggest that in biological neurons synapses undergo changes according to local learning rules. In this study we present a biophysically motivated learning rule which is influenced by the shape of the correlated signals and results in a learning characteristic that depends on the dendritic site. We investigate this rule in a biophysical model as well as in the equivalent artificial neural network model. As a consequence of our local rule we observe that transitions from differential Hebbian to plain Hebbian learning can coexist at the same neuron. Thus, such a rule could be used in an ANN to create synapses with entirely different learning properties at the same network unit in a controlled way.
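To make the distinction concrete, the sketch below contrasts plain Hebbian learning (presynaptic activity u correlated with postsynaptic activity v) and differential Hebbian learning (u correlated with dv/dt), blended per synapse by a site-dependent parameter alpha. The linear interpolation and all parameter values are assumptions for illustration only, not the rule derived in the paper.

```python
# Illustrative blend of plain Hebbian and differential Hebbian updates,
# with the blend set per synapse by a site-dependent factor alpha in [0, 1].
import numpy as np

def local_update(u, v, dt, alpha, mu=0.01):
    """Weight change for one synapse over a trial.

    u, v  : 1-D arrays of pre- and postsynaptic activity over time
    alpha : 0 -> plain Hebbian, 1 -> differential Hebbian (site-dependent)
    """
    dv_dt = np.gradient(v, dt)
    drive = (1.0 - alpha) * v + alpha * dv_dt
    return mu * np.sum(u * drive) * dt

# Two synapses on the same model neuron, identical signals, different sites
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
u = np.exp(-((t - 0.40) / 0.05) ** 2)   # presynaptic activity burst
v = np.exp(-((t - 0.45) / 0.05) ** 2)   # postsynaptic response, slightly later

proximal = local_update(u, v, dt, alpha=0.1)   # behaves nearly plain Hebbian
distal   = local_update(u, v, dt, alpha=0.9)   # behaves nearly differential Hebbian
print(f"proximal dw = {proximal:+.4e}, distal dw = {distal:+.4e}")
```

With identical input signals the two synapses change by different amounts and, for other timing relations, can even change in opposite directions, which is the sense in which different learning properties coexist at the same unit.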
The first few pages of any good introductory book on neurocomputing contain a cursory description of neurophysiology and how it has been abstracted to form the basis of artificial neural networks as we know them today. In particular, artificial neurons simplify considerably the behavior of their biological counterparts. It is our view that in order to gain a better understanding of how biological systems learn and remember it is necessary to have accurate models on which to base computerized experimentation. In this paper we describe an artificial neuron that is more realistic than most other models used currently. The model is based on conventional artificial neural networks (and is easily computerized) and is currently being used in our investigations into learning and memory.
Computational Intelligence, 1987
This paper presents an overview and analysis of learning in Artificial Neural Systems (ANSs). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANSs is then described and compared with classical machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analysed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomised, and where possible, the limitations inherent to specific classes of rules are outlined.
Physica A: Statistical Mechanics and its Applications, 2004
A feed-forward neural net with adaptable synaptic weights and fixed, zero or non-zero threshold potentials is studied, in the presence of a global feedback signal that can only have two values, depending on whether the output of the network in reaction to its input is right or wrong. It is found, on the basis of four biologically motivated assumptions, that only two forms of learning are possible, Hebbian and Anti-Hebbian learning. Hebbian learning should take place when the output is right, while there should be Anti-Hebbian learning when the output is wrong. For the Anti-Hebbian part of the learning rule a particular choice is made, which guarantees an adequate average neuronal activity without the need of introducing, by hand, control mechanisms like extremal dynamics. A network with realistic, i.e., non-zero threshold potentials is shown to perform its task of realizing the desired input-output relations best if it is sufficiently diluted, i.e. if only a relatively low fraction of all possible synaptic connections is realized.
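The sketch below illustrates the two-valued-feedback rule described above: a diluted feed-forward layer of binary threshold units is updated Hebbian-fashion when the global feedback says the output is right and anti-Hebbian-fashion when it is wrong. The centred activities standing in for the paper's particular anti-Hebbian choice, and all sizes and rates, are simplifying assumptions for illustration.

```python
# Reward-gated Hebbian / anti-Hebbian learning in a diluted feed-forward layer.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, dilution = 16, 4, 0.5           # keep only ~50% of possible synapses
mask = rng.random((n_out, n_in)) < dilution
W = rng.normal(0.0, 0.1, (n_out, n_in)) * mask
theta = np.zeros(n_out)                      # fixed threshold potentials
eta = 0.05

def forward(x):
    return (W @ x - theta > 0).astype(float)  # binary output neurons

# A random target input-output mapping the network should realize
patterns = (rng.random((10, n_in)) < 0.5).astype(float)
targets = (rng.random((10, n_out)) < 0.5).astype(float)

for _ in range(500):
    for x, t in zip(patterns, targets):
        y = forward(x)
        right = np.array_equal(y, t)           # global two-valued feedback signal
        sign = +1.0 if right else -1.0         # Hebbian if right, anti-Hebbian if wrong
        # local pre*post correlation gated by the global signal; centring is an
        # assumed stand-in for the paper's activity-preserving anti-Hebbian choice,
        # and the dilution mask keeps absent synapses absent
        W += sign * eta * np.outer(y - y.mean(), x - x.mean()) * mask

realized = sum(np.array_equal(forward(x), t) for x, t in zip(patterns, targets))
print(f"input-output relations realized after training: {realized}/{len(patterns)}")
```

Note that the update is entirely local (a product of pre- and postsynaptic terms) except for the single global right/wrong bit, which only sets its sign.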
Technological Forecasting and Social Change, 1991
In our book “Neural Engineering: Representation, Transformations and Dynamics”, MIT Press 2003, Chris Eliasmith and I present a unified framework that describes the function of neurobiological systems through the application of the quantitative tools of systems engineering. Our approach is not revolutionary, but more evolutionary in nature, building on many current and generally disparate approaches to neuronal modeling. The basic premise is that the principles of information processing apply to neurobiological systems.
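As a toy illustration (not taken from the book) of the kind of quantitative treatment this framework applies to neural representation, the sketch below encodes a scalar with a population of rate neurons and recovers it with linear least-squares decoders; the rectified-linear tuning curves and all names are our simplifying assumptions.

```python
# Population encoding of a scalar x and linear decoding by least squares.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], n_neurons)   # preferred direction of each neuron
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def rates(x):
    """Population firing rates for stimulus x (rectified-linear tuning)."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Solve for decoders d minimizing ||A d - x||^2 over sample stimuli
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])            # (samples, neurons) activity matrix
d, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = A @ d
print("max decoding error over [-1, 1]:", float(np.max(np.abs(x_hat - xs))))
```

The point of the exercise is that representation, and the error it incurs, becomes something one can compute with standard linear-algebra tools rather than describe only qualitatively.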