Self-Notes - Neuromorphic Computing


Neuromorphic Computing

Neuromorphic Engineering

Neuromorphic Architectures

Neuromorphic Processor

Neurons – chemicals and electronic impulses

Neurons and Synapses

Neuromorphic technology is expected to be used in the following ways:

 deep learning applications;

 next-generation semiconductors;

 transistors;

 accelerators; and

 autonomous systems, such as robotics, drones, self-driving cars and
artificial general intelligence (AGI).

AGI refers to an AI computer that understands and learns like a human.

How does neuromorphic computing work?


Neuromorphic computing uses hardware based on the structures,
processes and capacities of neurons and synapses in biological brains.
The most common form of neuromorphic hardware is the spiking neural
network (SNN). In this hardware, nodes -- or spiking neurons -- process
and hold data like biological neurons.

Artificial synaptic devices connect spiking neurons. These devices use analog circuitry to transfer electrical signals that mimic brain signals. Instead of encoding data through a binary system like most standard computers, spiking neurons measure and encode the discrete analog signal changes themselves.
The high-performance computing architecture and functionality used in
neuromorphic computers is different from the standard computer hardware
of most modern computers, which are also known as von Neumann
computers.

Examples of the computational approach

 Intel Lab's Loihi 2 has two million synapses and over one million
neurons per chip. It is optimized for SNNs and is programmed via the NxSDK and the Lava framework.

 Intel Lab's Pohoiki Beach computer features 8.3 million neurons. It
delivers 1,000 times better performance and 10,000 times more efficiency than comparable GPUs.

 IBM's TrueNorth chip has over 1 million neurons and over 256 million
synapses. It is 10,000 times more energy-efficient than conventional
microprocessors and only uses power when necessary.

 NeuRRAM is a neuromorphic chip designed to let AI systems run
disconnected from the cloud.

Examples of the neuroscience approach


 The Tianjic chip, developed by Chinese scientists, is used to power a
self-driving bike capable of following a person, navigating obstacles and
responding to voice commands. It has 40,000 neurons, 10 million
synapses and performs 160 times better and 120,000 times more
efficiently than a comparable GPU.

 Human Brain Project (HBP) is a research project created by
neuroscientist Henry Markram with funding from the European Union that is attempting to simulate a human brain. It uses two neuromorphic supercomputers, SpiNNaker and BrainScaleS, collaboratively designed by various universities. More than 140 universities across Europe are working on the project.

 BrainScaleS was created by Heidelberg University in collaboration with
other universities. It uses neuromorphic hybrid systems that combine biological experimentation with computational analysis to study brain information processing.
Key advantages of neuromorphic computing compared to traditional approaches are energy
efficiency, execution speed, robustness against local failures and the ability to learn.

Now, this is a relatively new concept of computing. A typical human brain contains between 86 and 87 billion neurons and on the order of 10^15 synapses.

The TrueNorth chip has 4,096 cores; each core contains 256 neurons (about 1 million neurons and over 250 million synapses in total).

Neuromorphic computers, which use neuromorphic computing, are directly modeled after the human brain. They use a special artificial neural network methodology called Spiking Neural Networks (SNN). This is not to be confused with software-based algorithms such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), or Generative Adversarial Networks (GAN).

To keep up, a new type of non-von Neumann architecture will be needed: a neuromorphic architecture. Quantum computing and neuromorphic systems have both been claimed as the solution, and it's neuromorphic computing, brain-inspired computing, that's likely to be commercialised sooner.

While von Neumann systems are largely serial, brains use massively parallel computing. Brains are also more fault-tolerant than computers -- both advantages researchers are hoping to model within neuromorphic systems.

An action potential can be triggered by either lots of inputs at once (spatial), or input that builds up over time (temporal). These techniques, plus the huge interconnectivity of synapses -- one neuron might be connected to 10,000 others -- mean the brain can transfer information quickly and efficiently.

Neuromorphic computing models the way the brain works through spiking neural networks. Conventional computing is based on transistors that are either on or off, one or zero. Spiking neural networks can convey information in the same temporal and spatial ways as the brain and so can produce more than one of two outputs. Neuromorphic systems can be either digital or analogue, with the part of synapses played by either software or memristors.

Memristors could also come in handy in modelling another useful element of the brain: synapses' ability to store information as well as transmit it. Memristors can store a range of values, rather than just the traditional one and zero, allowing them to mimic the way the strength of a connection between two synapses can vary. Changing those weights in artificial synapses in neuromorphic computing is one way to allow the brain-based systems to learn.

Along with memristive technologies, including phase change memory, resistive RAM, spin-transfer torque magnetic RAM, and conductive bridge RAM, researchers are also looking for other new ways to model the brain's synapse, such as using quantum dots and graphene.
Current generation AI tends to be heavily rules-based,
trained on datasets until it learns to generate a particular
outcome. But that's not how the human brain works: our
grey matter is much more comfortable with ambiguity and
flexibility.

It's hoped that the next generation of artificial intelligence could deal with a few more brain-like problems, including constraint satisfaction, where a system has to find the optimum solution to a problem with a lot of restrictions.

Neuromorphic systems are also likely to help develop better AIs, as they're more comfortable with other types of problems like probabilistic computing, where systems have to cope with noisy and uncertain data. There are also others, such as causality and non-linear thinking, which are relatively immature in neuromorphic computing systems, but once they're more established, they could vastly expand the uses AIs could be put to.

The HBP has led to two major neuromorphic initiatives, SpiNNaker and BrainScaleS. In 2018, a million-core SpiNNaker system went live, the largest neuromorphic supercomputer at the time, and the university hopes to eventually scale it up to model one billion neurons. BrainScaleS has similar aims to SpiNNaker, and its architecture is now on its second generation, BrainScaleS-2.

Fully programmable neuron models with graded spikes….

Biological neural computations

Intelligent information processing like brains…

Learn on the fly through Neuron Firing Rules


Asynchronous Event Based Spikes

Parallel Sparse Compute

Self-learning capabilities – continuous adaptation

Novel neuron models

Learning Rules

Async spike-based communication

Neuroscience modelling

Loihi:
binary valued spike messages; integer valued payloads

Generally spike Messages

Event Based Messaging

Sparse and Time Coded Communication

SNN model

Programmable pipeline in each neuromorphic core – common arithmetic, comparison, and program control flow instructions….

Two factor learning rules on synapses

Third modulatory term for non-localized reward broadcasts – the third term is for specific synapses

Supports latest neuro inspired learning algorithms

Async circuits, lowest levels of pipeline sequencing….

Chip wide time steps….


3D mesh; 6 scalability ports per chip

Loihi – rate-coded SNN

Loihi 2 – sigma-delta neural network (SDNN) – graded activation values; only significant changes are communicated, in a sparse, event-driven manner…
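A small sketch of this change-based communication idea (my own illustration; the threshold and names are assumptions): a unit only emits an event when its value has moved by more than a threshold since the last event it sent, so slowly varying signals produce very few, sparse messages.

    def delta_encode(signal, threshold=0.1):
        """Emit (index, change) events only when the value moves more than
        `threshold` away from the last transmitted reference value."""
        events = []
        reference = 0.0          # the receiver reconstructs the signal from these events
        for i, value in enumerate(signal):
            change = value - reference
            if abs(change) >= threshold:
                events.append((i, change))   # sparse, event-driven message
                reference = value            # update the shared reference
        return events

    signal = [0.0, 0.02, 0.03, 0.5, 0.52, 0.51, 0.1]
    print(delta_encode(signal))   # only the large jumps produce events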

Each Loihi chip – what it has inside:

8 microprocessor cores – optimized for spike-based communication; run C code; handle data I/O, network configuration, management, monitoring and control

128 fully async neuron cores – each holds a group of spiking neurons and the synapses connecting them, a programmable neuron model and learning, 128 KB of synaptic memory and 8,192 neurons; the neuron cores are asynchronous in design

The neuron model is a program (a short sequence of microcode instructions drawn from RMW, RDC, MOV, SEL, AND, OR, SHL, ADD, NEG, MIN, MUL-SHR, LT, GE, EQ, SKP_C, JMP_C, SPIKE, PROBE) that represents a single neuron; MUL is fixed precision; spike messages carry a 32-bit integer payload; weights are 8 bits???

Common Neuron model – Leaky-Integrate-and-Fire
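A minimal discrete-time sketch of the leaky integrate-and-fire model (my own illustration; the parameter values are assumptions): the membrane potential leaks toward rest, integrates input current, and emits a spike followed by a reset when it crosses threshold.

    def lif_run(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
        """Simulate a leaky integrate-and-fire neuron; returns spike times (ms)."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # Euler update of dv/dt = (-(v - v_rest) + r_m * i_in) / tau
            v += dt * (-(v - v_rest) + r_m * i_in) / tau
            if v >= v_thresh:           # threshold crossing -> emit a spike event
                spikes.append(step * dt)
                v = v_reset             # reset after firing
        return spikes

    # 200 ms of constant input produces a regular spike train.
    print(lif_run([2.0] * 200))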

Sparse and Dense Synapse Encoding

Convolutional – strided, dilated, and sparse kernels; factorized and stochastic connections

Address Event Representation (AER) async protocol for inter-chip communication…

Neuron state – 0 to 4096 bytes…


Timestep synchronization….

Neurons are approximate pattern-matching devices…

Associative memories that are tolerant of input noise….

What are the functions of the neurons and synapses…..

SPINNAKER

A lot of previous work in what was known as neuromorphic computing was based
upon the use of analogue electronic circuits to map neural and synapse equations
directly into the circuit function. This was then combined with a digital
communications approach to convey neural ‘spikes’ (action potentials) between
neurons.

Neurons Vs Axons

Biological neurons communicate principally (though not exclusively) through the transmission of action potentials which, because of the way they are electrochemically regenerated as they propagate along the neuron's axon (the biological wire that conveys the output of one neuron to the inputs of the next neuron), carry no information in the size or shape of the spike. Thus, the spike can be viewed as a pure asynchronous event, ideal for digital propagation – digital circuits have a similar ability to regenerate pure signals as they propagate.

This had been resolved using Address Event Representation (AER) [152],
where each spike source (neuron) is given a unique code or ‘Address’, and this
address is then sent down a shared bus.

Spikes happen at the same time in biology and need to be serialized in electrical systems…

AER is a fine solution to spike communication up to the point where the shared
bus begins to saturate, but it isn’t scalable beyond that point.

Digital implementation of neuron and synapse equations…..

Synaptic weights in memories - memristor

A population of neurons with inputs from multiple other populations could use one such matrix memory for each population input.

AER suggested a starting point but, as noted above, bus-based AER has
limited scalability.

The first key insight into the fundamental innovation in SpiNNaker was to transform AER from a broadcast bus-based communication system into a packet-switched fabric.

Now (in 2002) we had all the essential properties of SpiNNaker’s unique communications
infrastructure: an AER-based multicast packet-switching system based
upon a router on each chip that uses TCAM lookup for efficient population-based
routeing and default routeing for long, straight connections.

This, then, was the thinking that went into defining the architecture of the
SpiNNaker system – a processing chip with as many small ARM cores as would
fit, each with local code and data memory, and a shared SDRAM chip to hold the
large synaptic data structures. Each processor subsystem would have a spike packet
transmitter and receiver and a Direct Memory Access (DMA) engine to transfer
synaptic data between the SDRAM and the local data RAM, thereby hiding the
variable SDRAM latency. Each chip would have a multicast AER packet router
using TCAM associative lookup and links to convey packets to and from neighbouring
chips.

Issues that have come to the fore are the importance of modelling axonal delays, the importance of the sparse connectivity of biological neurons, the cost issues relating to the use of very large on-chip memories, and the need to keep as many decisions open for as long as possible.

On-chip neural functions through parallel programmable processors of a fairly conventional nature.

It is therefore the specialist communications network, designed to support the specific spiking neural network applications, that differentiates SpiNNaker from most other multiprocessor systems.

SpiNNaker communicates with short packets. In neural operation, each packet represents a particular neuron firing. A packet is identified using AER [152].

Some form of MESH topology is suitable…

2D mesh…. Triangular instead of Cartesian

Each ASIC connected to SIX neighbours


This can be post-rationalised into 16 neuron processors, a monitor processor to manage the chip as a computer component, plus a spare, but the constraint was primarily physical. The processor count does have some impact on the router since, when multicasting packets, it is necessary to specify whether each of the 24 destinations – 6 chip-to-chip connections plus 18 local processors – is used; 24 bits is a reasonably convenient size to pack into the RAM tables, so this is a bonus.

There are also a few shared resources on each chip, to facilitate operation as a computer component. These provide features such as the clock generators; interrupt and watchdog reset control and communications; multiprocessor interlock support; a small, shared SRAM for inter-processor messaging; an Ethernet interface (as a host link) and, inevitably, some general purpose I/O bits for operations such as run-time configuration and status indication. A boot ROM containing some preliminary self-test and configuration software completes the set of shared resources.

The ARM968 is an integer-only processor. It has no cache either… NO FPU… but it does support Tightly-Coupled Memory (TCM) on both its instruction and data buses. The TCM is a static RAM with single-cycle access; the processor can perform parallel instruction and data accesses. It acts like a cache memory except it is under software control. Although the processor can address shared memories directly – a boot ROM, an on-chip (32 KByte) SRAM and the SDRAM – these are very slow in comparison, so all applications code is kept in the 32 KByte Instruction Tightly-Coupled Memory (ITCM) and the data working set and stack in the Data Tightly-Coupled Memory (DTCM).

The only peripheral of particular note is the communications controller. This provides bidirectional on-chip communication with the router. The input interconnection is blocking, so it is important to read arriving packets with low latency; the ARM's Fast Interrupt Request (FIQ) is typically used for this. Failure to read packets will cause the appropriate network buffers to fill and, ultimately, stall the on-chip router. Similarly, the outgoing link is blocking, but the back-pressure may partially rely on software checking availability.

2.2.3 Router
The router is the key specialised unit in SpiNNaker. Each router has 24 network input and output pairs, one to each of the 18 processor subsystems and 6 to connect to neighbouring chips. Largely the links are identical, the only difference being that off-chip links (only) are notionally paired, so that there is a default output associated with each input which is used in some cases if no other routeing information is found.
All router packets are short. They comprise an 8-bit header field, a 32-bit data
field and an optional 32-bit payload. Much of the network is (partially) serialised,
so omitting the payload when not required reduces the demand on bandwidth and
saves some energy.

There are four types of packet:


• Multicast (MC) packets are intended to support neural spike communications.
• Point-to-Point (P2P) packets are for chip-to-chip messages and are intended
for machine management.
• Nearest Neighbour (NN) packets primarily support the machine boot and
debugging functions.
• Fixed Route packets contain no key information and are always routed the
same way: they can provide facilities such as carrying extra status data to a
host.
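A rough sketch of the packet shape described above (my own illustration, not the exact SpiNNaker bit layout): an 8-bit header, a 32-bit key/data field (the AER identity of the firing neuron for multicast packets) and an optional 32-bit payload.

    from dataclasses import dataclass
    from typing import Optional

    # Assumed field names; the real header also encodes parity, timestamp, etc.
    MC, P2P, NN, FR = range(4)   # the four packet types listed above

    @dataclass
    class SpiNNakerPacket:
        packet_type: int                 # part of the 8-bit header in the real hardware
        key: int                         # 32-bit data field; for MC packets, the AER key of the firing neuron
        payload: Optional[int] = None    # optional 32-bit payload; omitted to save bandwidth and energy

        def size_bits(self) -> int:
            return 8 + 32 + (32 if self.payload is not None else 0)

    spike = SpiNNakerPacket(packet_type=MC, key=0x00012345)   # a plain neuron-fired event
    print(spike.size_bits())   # 40 bits: no payload needed for a bare spike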

1,000 neurons per core – basically, neurons are realized in s/w

18 cores (ARM Async processor) per chip

48 chips per board

24 boards per rack…

The neuronal populations consist of current-based Leaky Integrate and Fire (LIF) neurons, with the membrane potential of each neuron in the excitatory population initialised via a uniform distribution bounded by the threshold and resting potentials.

Neuron variables such as spike trains, total synaptic conductances and neuron membrane potential are accessible from population objects, while synaptic weights and delays are extracted from projections. These data can be subsequently saved or visualised using the built-in plotting functionality.

At the top of the left-hand side stack in Figure 4.14, users create a PyNN script
defining an SNN. The SpiNNaker back-end is specified, which translates the SNN
into a form suitable for execution on a SpiNNaker machine. This process includes
mapping of the SNN into an application graph, partitioning into a machine graph,
generation of the required routeing information and loading of data and applications
to a SpiNNaker machine. Once loading is complete, all core applications are
instructed to begin execution and run for a predefined period. On simulation completion,
requested output data are extracted from the machine and made accessible
through the PyNN API.
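A minimal PyNN-style script of the kind this paragraph describes (a sketch assuming the standard PyNN 0.9 API with the sPyNNaker back-end; the population sizes, rates, weights and connectivity are illustrative, not taken from the notes).

    # Assumes the sPyNNaker back-end is installed; with another back-end,
    # only the import line changes.
    import pyNN.spiNNaker as sim

    sim.setup(timestep=1.0)                      # 1 ms simulation timestep

    # Two populations: a Poisson spike source driving a population of LIF neurons.
    source = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0), label="input")
    excite = sim.Population(100, sim.IF_curr_exp(), label="excitatory")

    # Project the source onto the LIF population with 10% connection probability.
    sim.Projection(source, excite,
                   sim.FixedProbabilityConnector(p_connect=0.1),
                   synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

    excite.record(["spikes", "v"])               # record spike trains and membrane potential
    sim.run(1000)                                # run for 1000 ms on the SpiNNaker machine

    data = excite.get_data()                     # results come back through the PyNN API
    sim.end()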

The sPyNNaker API first interprets the PyNN defined network to construct
an application graph: a vertices and edges view of the neural network, where each
edge corresponds to a projection carrying synapses, and each vertex corresponds to
a population of neurons.

Vertices – population / sub-population of neurons

Edges – projection carrying synapses…

Each vertex mapped into a single core….

Connection between the neurons in a population – within each vertex / core

Connections among populations of neurons – connections between vertices / cores – connection probabilities – possible synaptic connections……

These tables are subsequently compressed and loaded into router memory (as described in the previous chapter). The Python software stack from Figure 4.14 then generates the core-specific neuron and synapse data structures and loads them onto the SpiNNaker machine using the SpiNNTools software.

Core-specific neuron data – in DTCM; core-specific synapse data – in SDRAM regions specific to that core

Programs for execution of applications - ITCM in the core

SpiNNaker is an SNN – SPIKES ARE INPUTS AND OUTPUTS ARE ALSO SPIKES

SpiNNaker – hybrid simulation of applications in the cores / synapses – i.e. time-driven neuron updates and event-driven synapse updates

Spike transmission between neurons is AER-based

A sPyNNaker simulation typically contains multiple cores, each simulating a different population of neurons (see Figure 4.15(b)). Each core updates the states of its neurons in time via an explicit update scheme with fixed simulation timestep Δt. When a neuron is deemed to have fired, packets are delivered to all cores that neuron projects to and processed in real time by the post-synaptic core to evaluate the resulting synaptic contribution.

All cores will therefore initiate a timer event and execute a timer_callback to advance the state of their neurons approximately in parallel, although the system is asynchronous as there is no hardware or software mechanism to synchronise cores.
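A schematic sketch of this hybrid scheme (my own illustration; only the callback names echo the text, everything else is assumed): neuron state advances on a fixed timer tick, while synaptic input is accumulated as spike packets arrive.

    from collections import defaultdict

    DT = 1.0                                  # fixed simulation timestep in ms (assumed)
    input_buffer = defaultdict(float)         # synaptic input accumulated per neuron per tick

    class ToyNeuron:
        """Crude stand-in for the LIF state held on a core."""
        def __init__(self):
            self.v = 0.0
        def update(self, dt, i_in):
            self.v += dt * (i_in - 0.1 * self.v)    # leaky integration
            if self.v >= 1.0:                       # threshold crossing
                self.v = 0.0
                return True
            return False

    def spike_packet_callback(source_key, synapse_table):
        """Event-driven: on packet arrival, add the source neuron's weights to the buffer."""
        for target, weight in synapse_table.get(source_key, []):
            input_buffer[target] += weight

    def timer_callback(neurons):
        """Time-driven: advance every neuron one timestep; return indices that fired."""
        return [i for i, n in enumerate(neurons)
                if n.update(DT, input_buffer.pop(i, 0.0))]

    neurons = [ToyNeuron() for _ in range(4)]
    synapses = {42: [(0, 0.6), (2, 0.6)]}          # source key 42 projects to neurons 0 and 2
    for _ in range(5):
        spike_packet_callback(42, synapses)        # pretend a packet arrives each tick
        print(timer_callback(neurons))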

4.9.5 Neural Modelling

At the heart of a sPyNNaker application is the solution of a series of mathematical models governing neural dynamics. It is these models which determine how incoming spikes affect a neuron and when a neuron itself reaches threshold. While the preceding section described the underlying event-based operating system facilitating simulation and interaction of neurons, this section focuses on the solution of equations governing neural state and how they are structured in software.

PyNN defines a number of standard cell models, such as the LIF neuron and
the Izhikevich neuron.

– synapse_type, defining how synapse state evolves between pre-synaptic spikes and how contributions from new spikes are added to the model. A fundamental requirement is that multiple synaptic inputs can be summed and shaped linearly, such as the α-kernel [49].
– neuron_model, implementing the sub-threshold update scheme and refractory dynamics.
– input_type, governing the process of converting synaptic input into neuron input current. Examples include current-based and conductance-based formulations [45].
– threshold_type, defining a system against which a neuron membrane potential is compared to adjudge whether a neuron has emitted a spike.
– additional_input_type, offering a flexible framework to model intrinsic currents dependent on the instantaneous membrane potential and potentially responding discontinuously on neuron firing (such as the Ca²⁺-activated K⁺ current described by Liu and Wang [149]).
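A sketch of how these components could compose in software (my own Python illustration of the structure described above; sPyNNaker itself implements them in C on the cores, and the class and parameter names here are assumptions).

    import math

    class ExpSynapseType:
        """synapse_type: exponentially decaying synaptic input, summed linearly."""
        def __init__(self, tau=5.0):
            self.tau, self.i_syn = tau, 0.0
        def add_spike(self, weight):
            self.i_syn += weight
        def step(self, dt):
            self.i_syn *= math.exp(-dt / self.tau)
            return self.i_syn

    class CurrentInputType:
        """input_type: current-based formulation, synaptic input used directly."""
        def convert(self, i_syn, v):
            return i_syn

    class StaticThresholdType:
        """threshold_type: fixed threshold the membrane potential is compared against."""
        def __init__(self, v_thresh=-50.0):
            self.v_thresh = v_thresh
        def crossed(self, v):
            return v >= self.v_thresh

    class LIFNeuronModel:
        """neuron_model: sub-threshold LIF update plus reset (refractory period omitted)."""
        def __init__(self, tau_m=20.0, v_rest=-65.0, v_reset=-65.0):
            self.tau_m, self.v_rest, self.v_reset = tau_m, v_rest, v_reset
            self.v = v_rest
        def step(self, dt, i_in):
            self.v += dt * (-(self.v - self.v_rest) + i_in) / self.tau_m
        def reset(self):
            self.v = self.v_reset

    # Wiring the pieces together for a few timesteps of one neuron:
    syn, inp, thr, nrn = ExpSynapseType(), CurrentInputType(), StaticThresholdType(), LIFNeuronModel()
    syn.add_spike(weight=200.0)
    for _ in range(50):
        nrn.step(1.0, inp.convert(syn.step(1.0), nrn.v))
        if thr.crossed(nrn.v):
            nrn.reset()
            print("spike")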

The use of intermediate variables not only enables compiler optimisations improving speed and code size but also helps prevent over/underflow of the accum data type during intermediate calculations [101].

The spike source array application contains a population of neuron-like units which emit spikes at specific times (see Listing 4.2).

The Poisson spike source application emits packets according to a Poisson distribution about a given frequency.

FUNDAMENTALS OF NEUROSCIENCE

Properties of Neurons
Neurons are highly specialized for generating electrical signals in response
to chemical and other inputs, and transmitting them to other cells. Some
important morphological specializations, seen in figure 1.1, are the dendrites
that receive inputs from other neurons and the axon that carries
the neuronal output to other cells.

Membrane potential – about -70 mV inside relative to 0 V outside – polarized state

Hyperpolarization – more negative membrane potential

Depolarization – less negative or even positive membrane potential…

ACTION POTENTIAL

If a neuron is depolarized sufficiently to raise the membrane potential above a threshold level, a positive feedback process is initiated, and the neuron generates an action potential. An action potential is a roughly 100 mV fluctuation in the electrical potential across the cell membrane that lasts for about 1 ms (figure 1.2A).

REFRACTORY PERIOD – ABSOLUTE

Action potential generation also depends on the recent history of cell firing. For a few milliseconds just after an action potential has been fired, it may be virtually impossible to initiate another spike. This is called the absolute refractory period.

RELATIVE REFRACTORY PERIOD

For a longer interval known as the relative refractory period, lasting up to tens of milliseconds
after a spike, it is more difficult to evoke an action potential.

ACTION POTENTIAL VS SUBTHRESHOLD POTENTIAL


Action potentials are of great importance because they are the only form of membrane potential fluctuation that can propagate over large distances. Subthreshold potential fluctuations are severely attenuated over distances of 1 mm or less. Action potentials, on the other hand, are regenerated actively along axon processes and can travel rapidly over large distances without attenuation.

Depending on the nature of the ion flow, the synapses can have either an excitatory, depolarizing, or an inhibitory, typically hyperpolarizing, effect on the postsynaptic neuron.

Signal originating – pre-synaptic side of the synapse

Signal receiving – post-synaptic side of the synapse

The subthreshold membrane potential waveform, apparent in the soma recording, is completely absent on the axon due to attenuation, but the action potential sequence in the two recordings is the same. This illustrates the important point that spikes, but not subthreshold potentials, propagate regeneratively down axons.

Neurons typically
respond by producing complex spike sequences that reflect both the intrinsic
dynamics of the neuron and the temporal characteristics of the stimulus.

In this chapter, we introduce the firing rate and spike-train correlation functions, which are basic measures of spiking probability and statistics. We also discuss spike-triggered averaging, a method for relating action potentials to the stimulus that evoked them. Finally, we present basic stochastic descriptions of spike generation, the homogeneous and inhomogeneous Poisson models, and discuss a simple model of neural responses to which they lead.

Spiking probability and statistics – measured by firing rate and spike-train correlation functions

Spike-triggered averaging – relates action potentials to the stimulus that evoked them.

Stochastic Description of spike generation

Homogeneous Poisson models

Non-homogeneous Poisson models

Reverse correlation methods – to construct the estimates of firing rates in response to time-varying stimuli


Action potential – can vary in duration (typically 1 ms), amplitude and shape

Neural response function – expressed as a sum of Dirac delta functions

Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by the firing rates rather than by the specific spike sequences…
Firing rate – spike count rate – time average of the neural response function during the course of the trial….

We define the time-dependent firing rate as the average number of spikes (averaged over trials) appearing during a short interval between times t and t + Δt, divided by the duration of the interval.

Trial average ⟨ ⟩; firing rate r(t) – time dependent – averaged over multiple trials.

The number of spikes occurring between times t and t + Δt on a single trial is the integral of the neural response function over that time interval. The average number of spikes during this interval is the integral of the trial-averaged neural response function. We use angle brackets, ⟨ ⟩, to denote averages over trials that use the same stimulus, so that ⟨z⟩ for any quantity z is the sum of the values of z obtained from many trials involving the same stimulus, divided by the number of trials. The trial-averaged neural response function is denoted by ⟨ρ(t)⟩, and the time-dependent firing rate is given by the expression below.
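The definition the paragraph above is building toward, in the standard textbook notation (the equation itself is not present in the notes):

    r(t) = \frac{1}{\Delta t} \int_{t}^{t+\Delta t} \langle \rho(\tau) \rangle \, d\tau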


Spiking probability - what is the definition……….

For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t + Δt over multiple trials. The average number of spikes over a longer time interval is given by the integral of r(t) over that interval. If Δt is small, there will never be more than one spike within the interval between t and t + Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval. This probabilistic interpretation provides a formal definition of the time-dependent firing rate; r(t)Δt is the probability of a spike occurring during a short interval of duration Δt around the time t.

Trial-averaged neural response function

Firing rate – trial averaged

Trial averaged density of spikes…

Trial averaged spike-count firing rate

The term "firing rate" is commonly used for all three quantities: r(t), r, and ⟨r⟩. Whenever possible, we use the terms "firing rate", "spike-count rate", and "average firing rate" for r(t), r, and ⟨r⟩, respectively, but when this becomes too cumbersome, the different mathematical notations serve to distinguish them.

In particular, we distinguish the spike-count rate r from the time-dependent firing rate r(t) by using a different font and by including the time argument in the latter expression (unless r(t) is independent of time).

Linear filter and kernel

Stimulus s

In this chapter, we characterize responses of neurons as functions of just one of the stimulus
attributes to which they may be sensitive.
The value of this single attribute is denoted by s.

Response tuning curve f(s)

The average firing rate written as a function of s, ⟨r⟩ = f(s), is called the neural response tuning curve. The functional form of a tuning curve depends on the parameter s used to describe the stimulus.

Because tuning curves correspond to firing rates, they are measured in units of spikes per second or Hz.

Gaussian Tuning Curve

Cosine Tuning Curve

Sigmoidal Tuning Curve
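Standard textbook forms of the three tuning curves listed above (quoted from the usual treatment rather than from these notes; r_max, s_max, σ_f, r_0, s_{1/2} and Δ_s are the customary parameters):

    f(s) = r_{max} \exp\left(-\tfrac{1}{2}\left(\frac{s - s_{max}}{\sigma_f}\right)^{2}\right)    (Gaussian)

    f(s) = r_0 + (r_{max} - r_0)\cos(s - s_{max})    (cosine)

    f(s) = \frac{r_{max}}{1 + \exp\left((s_{1/2} - s)/\Delta_s\right)}    (sigmoidal)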

The trial-to-trial deviation of r from f(s) is considered to be noise, and such models are often called noise models. The standard deviation for the noise distribution either can be independent of f(s), in which case the variability is called additive noise, or it can depend on f(s). Multiplicative noise corresponds to having the standard deviation proportional to f(s).

Response tuning curves characterize the average response of a neuron to a given stimulus.

Weber measured how different the intensity of two stimuli had to be for them to be reliably discriminated, the "just noticeable" difference Δs. He found that, for a given stimulus, Δs is proportional to the magnitude of the stimulus s, so that Δs/s is constant. This relationship is called Weber's law. Fechner suggested that noticeable differences set the scale for perceived stimulus intensities. Integrating Weber's law, this means that the perceived intensity of a stimulus of absolute intensity s varies as log s. This is known as Fechner's law.
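A one-line sketch of the integration step (my own wording, not from the notes): if the just-noticeable difference satisfies Δs/s = k, and each noticeable step is counted as one unit of perceived intensity p, then dp = ds/(k s), which integrates to

    p(s) = \frac{1}{k} \ln s + \text{constant}

i.e. perceived intensity grows as log s, which is Fechner's law.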

Our analysis of neural encoding involves two different types of averages: averages over repeated trials that employ the same stimulus, which we denote by angle brackets, and averages over different stimuli.

Stimulus averages and time averages

String together all the stimuli and consider single time dependent stimulus sequence and average over time….

Periodic Stimulus…..

Equation 1.20 allows us to relate the spike-triggered average to the correlation function of the firing rate and the stimulus.

Spike triggered average stimulus……

Firing rate - stimulus correlation function

Reverse correlation function

The spike-triggered average stimulus is widely used to study and characterize neural responses.

The defining characteristic of a white-noise stimulus is that its value at any one time is uncorrelated with its value at any other time.

Stimulus auto correlation function

Just as a correlation function provides information about the temporal relationship between two quantities, so an autocorrelation function tells us about how a quantity at one time is related to itself evaluated at another time.

An approximation to white noise can be generated by choosing each s_m independently from a probability distribution with mean 0 and variance σ_s²/Δt. Any reasonable probability distribution satisfying these two conditions can be used to generate the stimulus values within each time bin. A special class of white-noise stimuli, Gaussian white noise, results when the probability distribution used to generate the s_m values is a Gaussian function. The factor of 1/Δt in the variance indicates that the variability must be increased as the time bins get smaller.
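A small sketch generating such an approximate Gaussian white-noise stimulus (my own illustration; the bin size and σ_s are arbitrary):

    import numpy as np

    def gaussian_white_noise(duration_ms, dt_ms, sigma_s=1.0, seed=0):
        """One stimulus value per time bin, mean 0, variance sigma_s**2 / dt."""
        rng = np.random.default_rng(seed)
        n_bins = int(duration_ms / dt_ms)
        return rng.normal(0.0, sigma_s / np.sqrt(dt_ms), size=n_bins)

    s = gaussian_white_noise(duration_ms=1000.0, dt_ms=2.0)
    print(s.var() * 2.0)   # close to sigma_s**2 = 1.0, since the variance scales with 1/dt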

If, however, the probability of generating an action potential is independent of the presence or timing of other spikes (i.e., if the spikes are statistically independent), the firing rate is all that is needed to compute the probabilities for all possible action potential sequences.

Point process

A stochastic process that generates a sequence of events, such as action potentials, is called a point process.

Renewal Process

In general, the probability of an event occurring at any given time could depend on the entire history of preceding events. If this dependence extends only to the immediately preceding event, so that the intervals between successive events are independent, the point process is called a renewal process.

Poisson Process – homogeneous and non-homogeneous

If there is no dependence at all on preceding events, so that the events themselves are statistically independent, we have a Poisson process. The Poisson process provides an extremely useful approximation of stochastic neuronal firing. To make the presentation easier to follow, we separate two cases: the homogeneous Poisson process, for which the firing rate is constant over time, and the inhomogeneous Poisson process, which involves a time-dependent firing rate.
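A sketch of generating spike trains for both cases (my own illustration; rates and durations are arbitrary). The homogeneous case draws exponential inter-spike intervals; the inhomogeneous case uses the standard thinning (rejection) trick against a constant ceiling rate r_max.

    import numpy as np

    def homogeneous_poisson(rate_hz, duration_s, seed=0):
        """Spike times from a constant-rate Poisson process (exponential ISIs)."""
        rng = np.random.default_rng(seed)
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / rate_hz)
            if t > duration_s:
                return np.array(times)
            times.append(t)

    def inhomogeneous_poisson(rate_fn, r_max, duration_s, seed=0):
        """Time-dependent rate via thinning: keep a candidate spike at time t
        with probability rate_fn(t) / r_max (rate_fn must never exceed r_max)."""
        rng = np.random.default_rng(seed)
        candidates = homogeneous_poisson(r_max, duration_s, seed=seed + 1)
        keep = rng.random(len(candidates)) < np.array([rate_fn(t) for t in candidates]) / r_max
        return candidates[keep]

    spikes = homogeneous_poisson(rate_hz=20.0, duration_s=10.0)
    modulated = inhomogeneous_poisson(lambda t: 10.0 + 10.0 * np.sin(2 * np.pi * t), 20.0, 10.0)
    print(len(spikes), len(modulated))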

Fano Factor

Interspike Interval Distribution

Coefficient of variation – Cv

Spike Train Auto correlation function
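A short sketch computing these statistics from a spike train (my own illustration; for a homogeneous Poisson train both the coefficient of variation and the Fano factor should come out close to 1).

    import numpy as np

    def isi_cv(spike_times):
        """Coefficient of variation of the inter-spike intervals."""
        isi = np.diff(spike_times)
        return isi.std() / isi.mean()

    def fano_factor(spike_times, duration_s, window_s=0.1):
        """Variance / mean of spike counts in non-overlapping windows."""
        edges = np.arange(0.0, duration_s + window_s, window_s)
        counts, _ = np.histogram(spike_times, bins=edges)
        return counts.var() / counts.mean()

    rng = np.random.default_rng(1)
    isis = rng.exponential(1.0 / 20.0, size=5000)   # ISIs of a 20 Hz homogeneous Poisson train
    train = np.cumsum(isis)
    print(round(isi_cv(train), 2), round(fano_factor(train, train[-1]), 2))   # both ≈ 1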

We present a simple but nevertheless useful model neuron, the integrate-and-fire model, in a basic version and with added membrane and synaptic conductances.

Hodgkin-Huxley model, which describes the conductances responsible for generating action potentials.

Longitudinal Current
Longitudinal Resistance

Intracellular Resistivity – 1 to 3 kΩ·mm

Single Channel conductance

Membrane Capacitance – 0.1 to 1 nF

Specific membrane capacitance

Specific Membrane Resistance

Membrane Resistance – 10 to 100 MΩ

Membrane time constant – 10 to 100 msec

Equilibrium Potential

Goldman Equation
Shunting conductance – due to Cl⁻
Inhibitory Synapses – synapse reversal potential < threshold for action potential

Excitatory Synapses - synapse reversal potential > threshold for action potential

Membrane current per unit area

Driving force = V – Ei

Specific conductance of channel – gi

Membrane current

Leakage current

Resting Potential
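Pulling the last few items together with the standard textbook relations (stated from memory, not from the notes): the membrane current per unit area is the sum over channel types of each specific conductance times its driving force, and the membrane time constant is the product of the specific membrane resistance and capacitance:

    i_m = \sum_i g_i (V - E_i), \qquad \tau_m = r_m c_m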
