In spite of intensive efforts, it has remained an open problem to what extent current Artificial Intelligence (AI) methods that employ Deep Neural Networks (DNNs) can be implemented more energy-efficiently on spike-based neuromorphic hardware. This holds in particular for AI methods that solve sequence processing tasks, a primary application target for spike-based neuromorphic hardware. One difficulty is that DNNs for such tasks typically employ Long Short-Term Memory (LSTM) units, and an efficient emulation of these units in spike-based hardware has been missing. We present a biologically inspired solution to this problem. It enables us to implement a major class of DNNs for sequence processing tasks, such as time series classification and question answering, with substantial energy savings on neuromorphic hardware. In fact, the Relational Network for reasoning about relations between objects that we use for question answering is the first example of a large DNN that carries out a sequence processing task with substantial energy savings on neuromorphic hardware. Energy consumption is a major impediment to more widespread applications of new AI methods that use DNNs, especially in edge devices. Spike-based neuromorphic hardware is one direction that promises to alleviate this problem. This research direction is partially motivated by the way brains run even more complex and larger neural networks than the DNNs used in current AI, with a total energy consumption of just 20 W: neurons in the brain emit spikes only rarely, and it is mostly spikes that trigger energy consumption in neurons and synapses. But it has remained an open problem how the DNNs needed for modern AI solutions could be implemented in neuromorphic hardware in such a sparse firing mode.
Another open problem is how the LSTM units of such DNNs, which provide a working memory for sequence processing tasks, could be implemented in spike-based neuromorphic hardware. We present a biologically inspired solution to the second problem that simultaneously provides a step towards solving the first, since it reduces the firing activity of neurons that hold working memory content. We combine this method with a brain-inspired technique called membrane voltage regularization, which enforces sparse firing activity during the training of the DNN. We have tested the impact of these two innovations on computational performance and energy consumption for two benchmark tasks in an implementation on a representative spike-based chip: Intel's neuromorphic research chip Loihi [Davies et al., 2018]. We find significant reductions in the energy-delay product (EDP). In contrast to power, the EDP accounts for the true energy and time cost per task, workload, or computation. Simultaneously, these implementations demonstrate that two hallmarks of cognitive computation, in brains and in machine intelligence alike, working memory and reasoning about relations between concepts or objects, can in fact be implemented more efficiently in spike-based neuromorphic hardware than on GPUs, the standard computing hardware for DNNs.

Implementing a long short-term memory in spike-based neuromorphic hardware

Working memory is maintained in an LSTM unit in a special memory cell, to which read and write access is gated by trained neurons with a sigmoidal activation function [Hochreiter and Schmidhuber, 1997]. Such an LSTM unit is difficult to realize efficiently in spike-based hardware. However, it turns out that by simply adding a standard feature of some biological neurons, slow after-hyperpolarizing (AHP) currents, a spiking neural network (SNN) acquires working memory capabilities similar to those of LSTM units over the time scale of the AHP currents.
These AHP currents lower the membrane potential of a neuron after each of its spikes (see Fig. 1). Furthermore, these AHP currents can easily be implemented on Loihi, with the desirable side benefit of reducing firing activity, and therefore energy consumption.
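The mechanism described above can be sketched in a few lines: a leaky integrate-and-fire neuron whose membrane is pulled down by a slowly decaying AHP variable that grows with every spike. This is a minimal illustrative sketch, not the Loihi implementation; all parameter names and values (tau_mem, tau_ahp, ahp_jump, etc.) are assumptions chosen for clarity.

```python
import numpy as np

def simulate_neuron(input_current, dt=1.0, tau_mem=20.0, tau_ahp=200.0,
                    threshold=1.0, ahp_jump=0.3):
    """Return the spike train of a single LIF neuron with a slow AHP current.

    Hypothetical sketch: parameters are illustrative, not hardware settings.
    """
    v, ahp = 0.0, 0.0
    spikes = []
    for i_in in input_current:
        ahp *= np.exp(-dt / tau_ahp)                 # AHP decays slowly (tau_ahp >> tau_mem)
        v = v * np.exp(-dt / tau_mem) + i_in - ahp   # leaky integration; AHP pulls v down
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after spike
            ahp += ahp_jump      # each spike deepens the after-hyperpolarization
        else:
            spikes.append(0)
    return spikes
```

Because the ahp variable decays on the slow time scale tau_ahp, it acts as a hidden state that outlasts the membrane time constant, which is what gives the SNN its LSTM-like memory; setting ahp_jump=0 recovers a plain LIF neuron, which fires more often under the same input drive.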
Sensory information is not processed by the brain in a simple feedforward fashion. Rather, bottom-up inputs are combined in pyramidal cells of sensory cortices with top-down information from higher brain areas that arrives through synapses on apical dendrites. The exact functional role of these top-down inputs has remained unknown. A promising abstract model posits that they provide probabilistic priors for bottom-up sensory inputs. We show that this hypothesis is consistent with a large number of experimental findings about synaptic plasticity in apical dendrites, in particular with the prominent role of NMDA spikes. We identify conditions under which this synaptic plasticity could approximate the gold standard for self-supervised learning of probabilistic priors: logistic regression. Furthermore, this perspective suggests an additional functional role for the complex structure of the dendritic arborization: it enables the neuron to learn substantially more complex landscapes of proba...
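The "gold standard" the abstract refers to, online logistic regression, can be sketched as follows: a single unit learns p(bottom-up event | top-down context) with a weight update whose form, prediction error times presynaptic input, resembles an error-modulated Hebbian plasticity rule. This is an illustrative sketch under assumed names and synthetic data, not the authors' dendritic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic data: 2-dimensional top-down context X, and a binary bottom-up
# event y whose probability is governed by a hidden "true" weight vector.
X = rng.normal(size=(1000, 2))
true_w = np.array([2.0, -1.0])
y = (rng.random(1000) < sigmoid(X @ true_w)).astype(float)

# Online logistic regression: w += lr * (y - p) * x.
# (y - p) is the prediction error; x is the presynaptic activity.
w = np.zeros(2)
lr = 0.1
for x_i, y_i in zip(X, y):
    p = sigmoid(x_i @ w)          # current estimate of the prior p(y=1 | x)
    w += lr * (y_i - p) * x_i     # error-modulated, input-gated weight change

# After training, w approximates true_w, i.e. the unit has learned the prior.
```

The learning signal here is self-supervised in the sense the abstract uses: the target y is simply whether the bottom-up event occurred, so no external teacher is needed.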
Online Mendelian Inheritance in Animals (OMIA) is a comprehensive, annotated catalogue of inherited disorders and other familial traits in animals other than humans and mice. Structured as a comparative biology resource, OMIA provides phenotypic information on heritable animal traits and genes in a strongly comparative context, relating traits to genes where possible. OMIA is modelled on, and complementary to, Online Mendelian Inheritance in Man (OMIM). OMIA has been moved to a MySQL database at the Australian National Genomic Information Service (ANGIS) and can be accessed at http://omia.angis.org.au/. It has also been integrated into the Entrez search interface at the National Center for Biotechnology Information (NCBI; http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=omia). Curation of OMIA data by researchers working on particular species and disorders has also been enabled.
Reporting of laboratory critical values has become an issue of national attention, as illustrated by recent guidelines described in the National Patient Safety Goals of the Joint Commission on Accreditation of Healthcare Organizations. Herein, we report the results of an analysis of 37,503 consecutive laboratory critical values at our institution, a large urban academic medical center. We evaluated critical value reporting by test, laboratory specialty, patient type, clinical care area, time of day, and critical value limits. We identify factors leading to delays in critical value reporting and describe approaches to improving this important operational and patient safety issue.
Papers by Arjun rao