Papers by Hava Siegelmann
Frontiers in Neuroinformatics, 2018
The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends, e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples of using BindsNET in practice.
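The abstract emphasizes concise network construction on top of PyTorch. The following is a minimal sketch of what assembling and running a small two-layer network might look like with BindsNET's documented building blocks (Network, Input, LIFNodes, Connection, Monitor, Poisson encoding); exact argument names (e.g., inputs vs. inpts in network.run) vary across versions, so treat this as an illustration rather than a verbatim recipe.

```python
# Minimal sketch: a two-layer spiking network driven by Poisson-encoded input.
# Assumes BindsNET's documented building blocks; names/signatures may vary by version.
import torch

from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.network.monitors import Monitor
from bindsnet.encoding import poisson

time = 250  # simulation time in ms

network = Network(dt=1.0)
source = Input(n=100)       # input layer carrying spike trains
target = LIFNodes(n=50)     # leaky integrate-and-fire layer
network.add_layer(source, name="X")
network.add_layer(target, name="Y")
network.add_connection(
    Connection(source=source, target=target, w=0.05 * torch.rand(100, 50)),
    source="X", target="Y",
)
network.add_monitor(Monitor(target, state_vars=("s", "v"), time=time), name="Y")

# Encode a random intensity vector as Poisson spike trains and run the simulation.
spikes = poisson(datum=torch.rand(100), time=time)
network.run(inputs={"X": spikes}, time=time)

voltages = network.monitors["Y"].get("v")  # recorded membrane voltages over time
```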
IEEE transactions on neural networks and learning systems, 2023
We propose a new learning framework, signal propagation (sigprop), for propagating a learning sig... more We propose a new learning framework, signal propagation (sigprop), for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation (BP). In sigprop, there is only the forward path for inference and learning. So, there are no structural or computational constraints necessary for learning to take place, beyond the inference model itself, such as feedback connectivity, weight transport, or a backward pass, which exist under BP-based approaches. That is, sigprop enables global supervised learning with only a forward path. This is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal. In hardware, this provides an approach for global supervised learning without backward connectivity. Sigprop by construction has compatibility with models of learning in the brain and in hardware than BP, including alternative approaches relaxing learning constraints. We also demonstrate that sigprop is more efficient in time and memory than they are. To further explain the behavior of sigprop, we provide evidence that sigprop provides useful learning signals in context to BP. To further support relevance to biological and hardware learning, we use sigprop to train continuous time neural networks with the Hebbian updates and train spiking neural networks (SNNs) with only the voltage or with biologically and hardware-compatible surrogate functions.
Quantum reports, Oct 3, 2022
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Applied Sciences
In this paper, we propose a bioinformatics-based method, which introduces thermodynamic measures and topological characteristics, aimed at identifying potential drug targets for pharmaco-resistant epileptic patients. We apply Gibbs homology analysis to the protein–protein interaction network characteristic of temporal lobe epilepsy. With the identification of key proteins involved in the disease, particularly a number of ribosomal proteins, an assessment of their inhibitors is the next logical step. The results of our work offer a direction for the future development of prospective therapeutic solutions for epilepsy patients, especially those who are not responding to the current standard of care.
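The abstract does not give the energy formula, but related network-thermodynamics work commonly defines a network Gibbs free energy as G = Σ_i c_i ln(c_i / Σ_{j∈N(i)∪{i}} c_j), where c_i is the normalized expression of protein i and N(i) its interaction partners. The sketch below illustrates that quantity on a toy protein–protein interaction graph under this assumption; it is not the authors' exact pipeline, and the proteins and expression values are hypothetical.

```python
# Illustrative sketch (assumed formula, not the authors' exact pipeline): a network
# "Gibbs free energy" G = sum_i c_i * ln(c_i / sum_{j in N(i) ∪ {i}} c_j), where c_i
# is the normalized expression of protein i on a protein–protein interaction graph.
import math
import networkx as nx

def network_gibbs_energy(graph: nx.Graph, expression: dict) -> float:
    """graph: PPI network; expression: raw expression value per protein (node)."""
    total = sum(expression[n] for n in graph.nodes)
    c = {n: expression[n] / total for n in graph.nodes}   # normalize to a distribution
    energy = 0.0
    for i in graph.nodes:
        neighborhood = list(graph.neighbors(i)) + [i]
        denom = sum(c[j] for j in neighborhood)
        if c[i] > 0 and denom > 0:
            energy += c[i] * math.log(c[i] / denom)
        # per-node contributions could also be ranked to flag candidate drug targets
    return energy

# Toy example: a 4-protein network with hypothetical expression values.
g = nx.Graph([("RPL5", "RPL11"), ("RPL11", "TP53"), ("TP53", "MDM2")])
expr = {"RPL5": 8.0, "RPL11": 12.0, "TP53": 5.0, "MDM2": 3.0}
print(network_gibbs_energy(g, expr))
```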
BioMedInformatics, 2021
Modern network science has been used to reveal new and often fundamental aspects of brain network organization in physiological as well as pathological conditions. As a consequence, these discoveries, which relate to network hierarchy, hubs, and network interactions, have begun to change the paradigms of neurodegenerative disorders. In this paper, we explore the use of thermodynamics for protein–protein network interactions in Alzheimer's disease (AD), Parkinson's disease (PD), multiple sclerosis (MS), traumatic brain injury, and epilepsy. To assess the validity of using network interactions in neurological diseases, we investigated the relationship between network thermodynamics and molecular systems biology for these neurological disorders. In order to uncover whether there was a correlation between network organization and biological outcomes, we used publicly available RNA transcription data from individual patients with these neurological conditions, and correlated these molecular…
ArXiv, 2018
We introduce Error Forward-Propagation, a biologically plausible mechanism to propagate error feedback forward through the network. Architectural constraints on connectivity are virtually eliminated for error feedback in the brain; systematic backward connectivity is not used or needed to deliver error feedback. Feedback, as a means of assigning credit to neurons earlier in the forward pathway for their contribution to the final output, is thought to be used in learning in the brain, yet how the brain solves the credit assignment problem remains unclear. In machine learning, error backpropagation is a highly successful mechanism for credit assignment in deep multilayered networks. Backpropagation requires symmetric reciprocal connectivity for every neuron. From a biological perspective, there is no evidence of such an architectural constraint, which makes backpropagation implausible for learning in the brain. This architectural constraint is reduced with the use of random feedback weights. Mod…
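The abstract mentions that the weight-symmetry requirement of backpropagation can be relaxed with random feedback weights. The sketch below illustrates that general idea (often called feedback alignment), not the paper's Error Forward-Propagation mechanism itself; the layer sizes and names are illustrative.

```python
# Illustrative sketch of the random-feedback-weights idea mentioned in the abstract
# (feedback alignment), not the paper's Error Forward-Propagation mechanism: the
# hidden-layer error signal is delivered through a fixed random matrix B instead of
# the transpose of the forward weights, removing the weight-symmetry requirement.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (784, 256))
W2 = rng.normal(0, 0.1, (256, 10))
B = rng.normal(0, 0.1, (10, 256))   # fixed random feedback weights (never trained)
lr = 0.01

def train_step(x, y_onehot):
    # Forward pass
    h = np.tanh(x @ W1)              # hidden activations, shape (batch, 256)
    y_hat = h @ W2                   # outputs, shape (batch, 10)
    e = y_hat - y_onehot             # output error
    # Error delivered to the hidden layer through B rather than W2.T
    dh = (e @ B) * (1.0 - h**2)      # tanh derivative
    # Layer-wise weight updates
    W2[:] -= lr * h.T @ e / len(x)
    W1[:] -= lr * x.T @ dh / len(x)
    return float((e**2).mean())

# Toy usage with random data:
x = rng.normal(size=(32, 784))
y = np.eye(10)[rng.integers(0, 10, 32)]
print(train_step(x, y))
```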
International Journal of Molecular Sciences, 2020
We propose to use a Gibbs free energy function as a measure of human brain development and apply this approach to the development of the human brain over the lifespan, from the prenatal stage to advanced age. We used proteomic expression data together with the Gibbs free energy to quantify the human brain's protein–protein interaction networks. The data, obtained from BioGRID, comprised tissue samples from the 16 main brain areas, at different ages, of 57 post-mortem human brains. We found a consistent functional dependence of the Gibbs free energies on age for most of the areas and for both sexes. A significant upward trend in the Gibbs function was found during the fetal stages, followed by a sharp drop at birth, a subsequent period of relative stability, and a final upward trend toward advanced age. We interpret these data in terms of structure formation followed by its stabilization and eventual deterioration. Furthermore, gender data analysis has uncovered the existence of f…
Nature Machine Intelligence
Biological organisms learn from interactions with their environment throughout their lifetime. For artificial systems to successfully act and adapt in the real world, it is desirable that they similarly be able to learn on a continual basis. This challenge is known as lifelong learning, and it remains to a large extent unsolved. In this perspective article, we identify a set of key capabilities that artificial systems will need to achieve lifelong learning. We describe a number of biological mechanisms, both neuronal and non-neuronal, that help explain how organisms solve these challenges, and present examples of biologically inspired models and biologically plausible mechanisms that have been applied to artificial intelligence systems in the quest towards the development of lifelong learning machines. We discuss opportunities to further our understanding and advance the state of the art in lifelong learning, aiming to bridge the gap between natural and artificial intelligence.
arXiv (Cornell University), Apr 4, 2022
We propose a new learning framework, signal propagation (sigprop), for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation. In sigprop, there is only the forward path for inference and learning, so no structural or computational constraints are necessary for learning to take place beyond the inference model itself, such as the feedback connectivity, weight transport, or backward pass required under backpropagation-based approaches. That is, sigprop enables global supervised learning with only a forward path, which is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal. In hardware, this provides an approach for global supervised learning without backward connectivity. By construction, sigprop is more compatible with models of learning in the brain and in hardware than backpropagation, including alternative approaches that relax learning constraints, and we demonstrate that sigprop is more efficient in time and memory than these alternatives. To further explain the behavior of sigprop, we provide evidence that sigprop provides useful learning signals relative to backpropagation. To further support its relevance to biological and hardware learning, we use sigprop to train continuous-time neural networks with Hebbian updates and to train spiking neural networks using only the voltage or biologically and hardware-compatible surrogate functions.
bioRxiv (Cold Spring Harbor Laboratory), Mar 12, 2020
The prefrontal cortex encodes and stores numerous, often disparate, schemas and flexibly switches between them. Recent research on artificial neural networks trained by reinforcement learning has made it possible to model fundamental processes underlying schema encoding and storage. Yet how the brain is able to create new schemas while preserving and utilizing old schemas remains unclear. Here we propose a simple neural network framework based on a modification of the mixture of experts architecture to model the prefrontal cortex's ability to flexibly encode and use multiple disparate schemas. We show how incorporation of gating naturally leads to transfer learning and robust memory savings. We then show how phenotypic impairments observed in patients with prefrontal damage are mimicked by lesions of our network. Our architecture, which we call DynaMoE, provides a fundamental framework for how the prefrontal cortex may handle the abundance of schemas necessary to navigate the real world.
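To make the architectural idea concrete, here is a minimal sketch of a gated mixture-of-experts layer, in which a gating network softly routes each input among expert sub-networks; this is a generic illustration, not the authors' DynaMoE implementation, and the class and parameter names are hypothetical.

```python
# Minimal sketch of a gated mixture-of-experts (the architectural idea described
# above), not the authors' DynaMoE implementation. A gating network softly routes
# each input among expert sub-networks; new experts could be appended for new schemas.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, n_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(in_dim, n_experts)   # decides which expert handles the input

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)             # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], -1)   # (batch, out_dim, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(-1)           # gate-weighted combination

# Toy usage: map 8 random observations to values for 4 actions.
moe = MixtureOfExperts(in_dim=16, out_dim=4)
print(moe(torch.randn(8, 16)).shape)  # torch.Size([8, 4])
```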
CRC Press eBooks, Mar 13, 2019
We design and implement a small neural network, composed of 52 fixed-precision neurons, that is computationally equivalent to a bounded-memory Universal Turing Machine; this design is an order of magnitude smaller than the smallest known universal neural nets. The network is the core of a practical universal neural computer; all neurons have fixed precision and a small set of simple weights. External memory will be used, or additional neurons dynamically recruited, for more memory-intensive calculations or input.
Birkhäuser Boston eBooks, 1999
The full neural network model with real weights will be analyzed in the next chapter. Here, we discuss the model with the weights constrained to the set of rationals. In contrast to the case described in the previous chapter, where we dealt only with integer weights, and each neuron could assume two values only, here a neuron can take on countably infinite different values. The analysis of networks with rational weights is a prerequisite for the proofs of the real weight model in the next chapter. It also sheds light on the role of different types of weights in determining the computational capabilities of the model.
Birkhäuser Boston eBooks, 1999
2022 International Joint Conference on Neural Networks (IJCNN), Jul 18, 2022
Sparsity learning aims to decrease the computational and memory costs of large deep neural networks (DNNs) by pruning neural connections while simultaneously retaining high accuracy. A large body of work has developed sparsity learning approaches, with recent large-scale experiments showing that two main methods, magnitude pruning and Variational Dropout (VD), achieve similar state-of-the-art results for classification tasks. We propose Adaptive Neural Connections (ANC), a method for explicitly parameterizing fine-grained neuron-to-neuron connections via adjacency matrices at each layer that are learned through backpropagation. Explicitly parameterizing neuron-to-neuron connections confers two primary advantages: 1. Sparsity can be explicitly optimized for via norm-based regularization on the adjacency matrices; and 2. When combined with VD (which we term ANC-VD), the adjacencies can be interpreted as learned weight importance parameters, which we hypothesize leads to improved convergence for VD. Experiments with ResNet18 show that architectures augmented with ANC outperform their vanilla counterparts.
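The following is a sketch, in the spirit of the abstract rather than the paper's exact formulation, of a linear layer whose neuron-to-neuron connections are gated by a learned adjacency matrix, with a norm-based penalty that can be added to the task loss to encourage sparsity; the class name and hyperparameters are hypothetical.

```python
# Sketch (not the paper's exact formulation): a linear layer whose neuron-to-neuron
# connections are parameterized by a learned adjacency matrix A; sigmoid(A) gates
# each connection, and a norm penalty on the gates encourages sparsity.
import torch
import torch.nn as nn

class AdaptiveConnectionLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Learned adjacency logits: sigmoid(adjacency) in [0, 1] gates each connection.
        self.adjacency = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x):
        gated_weight = self.weight * torch.sigmoid(self.adjacency)
        return nn.functional.linear(x, gated_weight, self.bias)

    def sparsity_penalty(self):
        # L1-style penalty on the (nonnegative) gates, to be added to the task loss.
        return torch.sigmoid(self.adjacency).sum()

# Toy usage: task loss plus a weighted sparsity penalty.
layer = AdaptiveConnectionLinear(32, 10)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
loss = nn.functional.cross_entropy(layer(x), y) + 1e-4 * layer.sparsity_penalty()
loss.backward()
```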
Springer eBooks, 2001
In the early nineties, Hava Siegelmann proposed a new computational model, the Artificial Recurrent Neural Network (ARNN), and proved that it could perform hypercomputation. She also established the equivalence between the ARNN and other analog systems that support hypercomputation, laying the foundations of an alternative computational theory. In this paper we contribute to this alternative theory by exploring the use of formal methods in the verification of temporal properties of ARNNs. Based on the work of Bradfield on the verification of temporal properties of infinite systems, we simplify his tableau system, keeping its expressive power, and show that it is suitable for the verification of temporal properties of ARNNs.
Journal of the Association for Information Science and Technology, 2005
When a client interacts with an expert, e.g., a doctor, it falls upon the expert to ask questions that steer the process towards fulfilling the client's needs. This is most efficient given that the expert has more knowledge and a broader view of possible illnesses and treatments. On the other hand, when faced with an information retrieval (IR) task, most IR systems leave to the client the task of coming up with queries. We propose an information retrieval framework that assumes the responsibility of leading the users to the information, thus increasing efficiency and satisfaction.