Self - Notes - Neuromorphic Computing
Neuromorphic Engineering
Neuromorphic Architectures
Neuromorphic Processor
next-generation semiconductors; transistors; accelerators
Intel Labs' Loihi 2 supports up to one million neurons and 120 million synapses per chip. It is optimized for SNNs. The original Loihi was programmed via the NxSDK; Loihi 2 uses the open-source Lava framework.
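A minimal sketch of a two-population LIF network in Lava, run here on the CPU simulator rather than Loihi hardware; parameter values are arbitrary and the exact API can differ between Lava versions:

```python
import numpy as np
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Two LIF populations connected by a dense weight matrix
lif1 = LIF(shape=(4,), bias_mant=3, vth=10)  # constant bias makes lif1 spike
dense = Dense(weights=5 * np.eye(4))         # one-to-one connections
lif2 = LIF(shape=(4,), vth=10)

lif1.s_out.connect(dense.s_in)  # spikes out of lif1 ...
dense.a_out.connect(lif2.a_in)  # ... arrive as weighted input to lif2

lif2.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
v = lif2.v.get()                # read back membrane voltages
lif2.stop()
```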
IBM's TrueNorth chip has about 1 million neurons and 256 million synapses. It is reported to be 10,000 times more energy-efficient than conventional microprocessors and consumes power only when necessary.
Neuromorphic computing is a relatively new concept. A typical human brain contains roughly 86 billion neurons and on the order of 10^14 to 10^15 synapses.
The TrueNorth chip has 4,096 cores; each core contains 256 neurons (about 1 million neurons and 256 million synapses in total).
Learning Rules
Neuroscience modelling
Loihi: binary-valued spike messages; Loihi 2 adds integer-valued payloads (graded spikes)
SNN model
SpiNNaker
A lot of previous work in what was known as neuromorphic computing was based
upon the use of analogue electronic circuits to map neural and synapse equations
directly into the circuit function. This was then combined with a digital
communications approach to convey neural ‘spikes’ (action potentials) between
neurons.
Neurons vs. Axons
This had been resolved using Address Event Representation (AER) [152],
where each spike source (neuron) is given a unique code or ‘Address’, and this
address is then sent down a shared bus.
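An illustrative toy model of AER (names here are hypothetical, not a real driver API): every spike becomes one address transaction, time-multiplexed onto a shared serialized bus:

```python
# The "bus" is a serialized stream of (time, address) events.
bus = []

def emit_spike(t, neuron_address):
    bus.append((t, neuron_address))  # one bus transaction per spike

emit_spike(0.001, 42)  # neuron 42 fires at t = 1 ms
emit_spike(0.001, 17)  # simultaneous in biology, still serialized here
```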
In biology spikes can happen at the same time, but in electronic systems they need to be serialized…
AER is a fine solution to spike communication up to the point where the shared
bus begins to saturate, but it isn’t scalable beyond that point.
AER suggested a starting point but, as noted above, bus-based AER has
limited scalability.
The first key insight into the fundamental innovation in SpiNNaker was to transform AER from a broadcast bus-based communication system into a packet-switched fabric.
Now (in 2002) we had all the essential properties of SpiNNaker’s unique communications
infrastructure: an AER-based multicast packet-switching system based
upon a router on each chip that uses TCAM lookup for efficient population-based
routeing and default routeing for long, straight connections.
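A hypothetical Python sketch of the TCAM idea: each table entry is a key/mask pair, a packet key matches an entry where all masked bits agree, and unmatched packets fall through to default routeing:

```python
# (key, mask, set of output links); values here are made up
table = [
    (0b1010_0000, 0b1111_0000, {"north", "core3"}),  # one population's spikes
]

def route(packet_key, arrival_link_default):
    for key, mask, outputs in table:
        if packet_key & mask == key & mask:   # ternary match on masked bits
            return outputs                    # multicast to all matched links
    return {arrival_link_default}             # default route: straight through
```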
This, then, was the thinking that went into defining the architecture of the
SpiNNaker system – a processing chip with as many small ARM cores as would
fit, each with local code and data memory, and a shared SDRAM chip to hold the
large synaptic data structures. Each processor subsystem would have a spike packet
transmitter and receiver and a Direct Memory Access (DMA) engine to transfer
synaptic data between the SDRAM and the local data RAM, thereby hiding the
variable SDRAM latency. Each chip would have a multicast AER packet router
using TCAM associative lookup and links to convey packets to and from neighbouring
chips.
There are also a few shared resources on each chip, to facilitate operation
as a computer component. These provide features such as the clock generators;
interrupt and watchdog reset control and communications; multiprocessor interlock
support; a small, shared SRAM for inter-processor messaging; an Ethernet
interface (as a host link) and, inevitably, some general purpose I/O bits for operations
such as run-time configuration and status indication. A boot ROM containing
some preliminary self-test and configuration software completes the set of shared
resources.
2.2.3 Router
The router is the key specialised unit in SpiNNaker. Each router has 24 network
input and output pairs, one to each of the 18 processor subsystems and 6 to connect
to neighbouring chips. Largely the links are identical, the only difference being that
off-chip links (only) are notionally paired, so that there is a default output associated
with each input which is used in some cases if no other routeing information is
found.
All router packets are short. They comprise an 8-bit header field, a 32-bit data
field and an optional 32-bit payload. Much of the network is (partially) serialised,
so omitting the payload when not required reduces the demand on bandwidth and
saves some energy.
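A sketch of how such a packet might be packed, assuming a simple little-endian layout (the real wire format is more involved):

```python
import struct

def pack_packet(header, key, payload=None):
    """8-bit header + 32-bit routing key (+ optional 32-bit payload)."""
    if payload is None:
        return struct.pack("<BI", header, key)        # 5 bytes on the wire
    return struct.pack("<BII", header, key, payload)  # 9 bytes with payload
```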
At the top of the left-hand side stack in Figure 4.14, users create a PyNN script
defining an SNN. The SpiNNaker back-end is specified, which translates the SNN
into a form suitable for execution on a SpiNNaker machine. This process includes
mapping of the SNN into an application graph, partitioning into a machine graph,
generation of the required routeing information and loading of data and applications
to a SpiNNaker machine. Once loading is complete, all core applications are
instructed to begin execution and run for a predefined period. On simulation completion,
requested output data are extracted from the machine and made accessible
through the PyNN API.
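A minimal PyNN script of the kind described above, using the sPyNNaker back-end; population sizes, weights, and the initial-value bounds are arbitrary placeholders:

```python
import pyNN.spiNNaker as sim            # sPyNNaker back-end
from pyNN.random import RandomDistribution

sim.setup(timestep=1.0)                 # 1 ms simulation time step

stim = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
exc = sim.Population(100, sim.IF_curr_exp())  # current-based LIF cells

# Initialise membrane potentials uniformly between rest and threshold
exc.initialize(v=RandomDistribution("uniform", low=-65.0, high=-50.0))

sim.Projection(stim, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.5, delay=1.0))

exc.record("spikes")
sim.run(1000)                           # run for one simulated second

spikes = exc.get_data("spikes")         # results back through the PyNN API
sim.end()
```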
The
neuronal populations consist of current-based Leaky Integrate and Fire (LIF) neurons,
with the membrane potential of each neuron in the excitatory population
initialised via a uniform distribution bounded by the threshold and resting potentials.
The sPyNNaker API first interprets the PyNN defined network to construct
an application graph: a vertices and edges view of the neural network, where each
edge corresponds to a projection carrying synapses, and each vertex corresponds to
a population of neurons.
SpiNNaker runs SNNs – spikes are inputs and outputs are also spikes.
SpiNNaker performs hybrid simulation of applications in the cores – i.e., time-driven neuron updates and event-driven synapse updates (sketch below).
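A toy sketch of this hybrid scheme (illustrative, not sPyNNaker's actual kernel): neuron state is integrated on every time step, while synaptic input is applied only when a presynaptic spike event arrives:

```python
import numpy as np

def step(v, i_syn, spike_events, weights, dt=1.0, tau_m=20.0, v_th=1.0):
    """One simulation time step over a population of LIF neurons."""
    i_syn *= np.exp(-dt / 5.0)        # synaptic current decay (tau_syn = 5 ms)
    # Event-driven: touch synapses only for neurons that actually fired
    for pre in spike_events:
        i_syn += weights[pre]         # deliver one row of the weight matrix
    # Time-driven: every neuron integrates its membrane equation each dt
    v += (dt / tau_m) * (-v + i_syn)
    fired = np.flatnonzero(v >= v_th)
    v[fired] = 0.0                    # reset after a spike
    return v, i_syn, fired
```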
PyNN defines a number of standard cell models, such as the LIF neuron and
the Izhikevich neuron.
The Poisson spike source application emits packets according to a Poisson process at a given mean rate.
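A quick way to approximate such a source, assuming the spike probability in each small time step is rate × dt:

```python
import numpy as np

rate = 10.0    # target mean rate (Hz)
dt = 0.001     # time step (s)
steps = 10_000

spikes = np.random.rand(steps) < rate * dt   # Bernoulli approximation
print(spikes.sum() / (steps * dt), "Hz observed")
```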
FUNDAMENTALS OF NEUROSCIENCE
Properties of Neurons
Neurons are highly specialized for generating electrical signals in response
to chemical and other inputs, and transmitting them to other cells. Some
important morphological specializations, seen in figure 1.1, are the dendrites
that receive inputs from other neurons and the axon that carries
the neuronal output to other cells.
ACTION POTENTIAL
For a longer interval known as the relative refractory period, lasting up to tens of milliseconds
after a spike, it is more difficult to evoke an action potential.
Neurons typically
respond by producing complex spike sequences that reflect both the intrinsic
dynamics of the neuron and the temporal characteristics of the stimulus.
Spiking probability and statistics – measured by firing rate and spike-train correlation functions
Spike-triggered averaging – relates action potentials to the stimulus that evoked them (sketch after this list)
Reverse-correlation methods – construct estimates of firing rates in response to time-varying stimuli
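A minimal numpy sketch of spike-triggered averaging; the names and the assumption that the stimulus is sampled on the same time grid as the spike indices are mine:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_indices, window=100):
    """Average the stimulus segment preceding each spike."""
    segments = [stimulus[i - window:i]
                for i in spike_indices if i >= window]
    return np.mean(segments, axis=0)
```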
Firing rate – spike-count rate – time average of the neural response function over the course of the trial….
We define the time-dependent firing rate as the average number of spikes (averaged over trials) appearing during a short interval between times t and t + Δt, divided by the duration of the interval.
trial average ⟨ ⟩
firing rate r(t) – time-dependent – average over multiple trials
The number of spikes occurring between times t and t + Δt on a single trial is the integral of the neural response function over that time interval. The average number of spikes during this interval is the integral of the trial-averaged neural response function. We use angle brackets, ⟨ ⟩, to denote averages over trials that use the same stimulus, so that ⟨z⟩ for any quantity z is the sum of the values of z obtained from many trials involving the same stimulus, divided by the number of trials. The trial-averaged neural response function is denoted by ⟨ρ(t)⟩, and the time-dependent firing rate is given by

$$ r(t) = \frac{1}{\Delta t} \int_t^{t+\Delta t} \langle \rho(\tau) \rangle \, d\tau $$
For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t + Δt over multiple trials. The average number of spikes over a longer time interval is given by the integral of r(t) over that interval. If Δt is small, there will never be more than one spike within the interval between t and t + Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval. This probabilistic interpretation provides a formal definition of the time-dependent firing rate: r(t)Δt is the probability of a spike occurring during a short interval of duration Δt around the time t.
The term “firing rate” is commonly used for all three quantities, r(t), r, and ⟨r⟩. Whenever possible, we use the terms “firing rate”, “spike-count rate”, and “average firing rate” for r(t), r, and ⟨r⟩, respectively, but when this becomes too cumbersome, the different mathematical notations serve to distinguish them.
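A simple trial-averaged (PSTH-style) estimate of r(t), assuming spike times are given in seconds, one array per trial:

```python
import numpy as np

def firing_rate(spike_times_per_trial, t_max, bin_width=0.01):
    """r(t) estimate: spikes per trial per bin, divided by the bin width."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts = sum(np.histogram(times, edges)[0]
                 for times in spike_times_per_trial)
    return counts / (len(spike_times_per_trial) * bin_width)
```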
Stimulus s
In this chapter, we characterize responses of neurons as functions of just one of the stimulus
attributes to which they may be sensitive.
The value of this single attribute is denoted by s.
Fechner's law
For perceived stimulus intensities: integrating Weber's law, the perceived intensity of a stimulus of absolute intensity s varies as log s. This is known as Fechner's law.
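The one-line derivation, writing p for perceived intensity: Weber's law says the just-noticeable increment Δs is proportional to s, so

$$ dp = k\,\frac{ds}{s} \quad\Rightarrow\quad p = k \log s + \text{const.} $$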
String together all the stimuli, treat them as a single time-dependent stimulus sequence, and average over time….
Periodic Stimulus…..
Just as a correlation function provides information about the temporal relationship between two quantities, an autocorrelation function tells us how a quantity at one time is related to itself evaluated at another time.
Point process
A stochastic process that generates a sequence of events, such as action potentials, is called a point process.
Renewal Process
Fano factor – variance-to-mean ratio of the spike count
Coefficient of variation (C_V) – standard deviation of the inter-spike intervals divided by their mean (sketch below)
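Minimal sketches of both statistics (each equals 1 for a homogeneous Poisson process):

```python
import numpy as np

def fano_factor(spike_counts):
    """Variance-to-mean ratio of spike counts across trials."""
    return np.var(spike_counts) / np.mean(spike_counts)

def cv_isi(spike_times):
    """Coefficient of variation of the inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    return np.std(isi) / np.mean(isi)
```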
We present a simple
but nevertheless useful model neuron, the integrate-and-fire model,
in a basic version and with added membrane and synaptic conductances.
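In its basic form the model integrates the membrane equation below, emits a spike when V reaches threshold, and then resets:

$$ \tau_m \frac{dV}{dt} = E_L - V + R_m I_e, \qquad V \ge V_{\text{th}} \;\Rightarrow\; \text{spike and } V \to V_{\text{reset}} $$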
Longitudinal Current
Longitudinal Resistance
Equilibrium Potential
Goldman Equation
Shunting conductance – due to Cl⁻
Inhibitory Synapses – synapse reversal potential < threshold for action potential
Excitatory Synapses - synapse reversal potential > threshold for action potential
Driving force = V – Ei
Membrane current
Leakage current
Resting Potential
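For reference, the standard equations behind these terms (ion concentrations measured outside/inside the cell, z the ionic charge, P the membrane permeabilities):

$$ E = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}} \qquad \text{(equilibrium / Nernst potential)} $$

$$ V_{\text{rest}} = \frac{RT}{F}\,\ln\frac{P_{K}[\mathrm{K}^+]_{\text{out}} + P_{Na}[\mathrm{Na}^+]_{\text{out}} + P_{Cl}[\mathrm{Cl}^-]_{\text{in}}}{P_{K}[\mathrm{K}^+]_{\text{in}} + P_{Na}[\mathrm{Na}^+]_{\text{in}} + P_{Cl}[\mathrm{Cl}^-]_{\text{out}}} \qquad \text{(Goldman equation)} $$

$$ i_m = \sum_i g_i\,(V - E_i), \qquad i_L = \bar g_L\,(V - E_L) \qquad \text{(membrane and leakage currents; } V - E_i \text{ is the driving force)} $$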