Self Organizing Maps


Self Organizing Maps (SOM)
SOM- Introduction
• Networks are based on competitive learning
• Neurons are placed at nodes of lattice
• Nonlinear
• Characterized by the formation of topographic maps
• Motivated by the cerebral cortex (Kaas et al., 1983)
• It follows an unsupervised learning approach
• SOM has two layers: an input layer and an output layer.
• The architecture of a Self Organizing Map with two clusters and n input
features per sample is shown below:
• How does SOM work?
• The input data is of size (m, n): m training examples with n features each.
• First, initialize the weights of size (n, C), where C is the number of clusters.
• Then, iterating over the input data, for each training example, update the
winning vector.
• The weight update rule is given by:
wij(new) = wij(old) + α(t) * (xik – wij(old))

• where α(t) is the learning rate at time t,
• j denotes the winning vector,
• i denotes the ith feature of the training example, and
• k denotes the kth training example from the input data.
After training the SOM network, trained weights are used for clustering new
examples. A new example falls in the cluster of winning vectors.
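The update rule and the clustering of new examples described above can be sketched as follows. This is a minimal illustration, not code from the slides; the function names, the per-epoch halving of α, and the epoch count are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the SOM weight update and clustering step.
# train_som / assign_cluster are illustrative names, not from the slides.

def train_som(X, C, alpha=0.5, epochs=6, seed=0):
    """Train weights of shape (n, C) on input data X of shape (m, n)."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n, C))                        # initialize weights randomly
    for t in range(epochs):
        for x in X:                               # iterate over training examples
            D = ((W - x[:, None]) ** 2).sum(axis=0)   # squared distance to each cluster
            J = int(np.argmin(D))                     # winning vector: smallest distance
            W[:, J] += alpha * (x - W[:, J])          # wij += alpha * (xik - wij)
        alpha *= 0.5                              # decay the learning rate (assumed schedule)
    return W

def assign_cluster(W, x):
    """A new example falls in the cluster of the winning vector."""
    D = ((W - x[:, None]) ** 2).sum(axis=0)
    return int(np.argmin(D))
```

Note that this plain competitive-learning sketch updates only the winner; the full Kohonen algorithm in the next slides also updates the winner's lattice neighborhood.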
SOM Algorithm

Kohonen Self-Organizing Feature Map Architecture


KSOFM
SOM Algorithm
Training:
• Step 1: Initialize the weights wij (random values may be assumed). Initialize
the learning rate α.
• Step 2: For each output unit j, calculate the squared Euclidean distance:
• D(j) = Σ (wij – xi)^2, where i = 1 to n and j = 1 to m
• Step 3: Find the index J for which D(j) is minimum; J is the winning index.
• Step 4: For each unit j within a specified neighborhood of J, and for all i,
calculate the new weight:
• wij(new) = wij(old) + α[xi – wij(old)]
• Step 5: Update the learning rate using:
• α(t+1) = 0.5 * α(t)
• Step 6: Test the stopping condition.
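The six training steps above can be sketched for a 1-D lattice of m output units as follows. The lattice shape, the neighborhood radius R, and the stopping threshold on α are assumptions for illustration; the slides leave them open.

```python
import numpy as np

# Hedged sketch of Steps 1-6 on a 1-D lattice of m units.
# R (neighborhood radius) and min_alpha (stopping condition) are
# illustrative choices, not fixed by the slides.

def kohonen_train(X, m, alpha=0.5, R=1, min_alpha=0.01, seed=0):
    n = X.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.random((n, m))                    # Step 1: random weights, initial alpha
    while alpha > min_alpha:                  # Step 6: stop once alpha is small
        for x in X:
            D = ((W - x[:, None]) ** 2).sum(axis=0)   # Step 2: D(j) = sum_i (wij - xi)^2
            J = int(np.argmin(D))                     # Step 3: winning index J
            lo, hi = max(0, J - R), min(m, J + R + 1)
            for j in range(lo, hi):                   # Step 4: update J's neighborhood
                W[:, j] += alpha * (x - W[:, j])
        alpha *= 0.5                          # Step 5: alpha(t+1) = 0.5 * alpha(t)
    return W
```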
Kohonen Algorithm
Self Organizing Maps
• Self-organizing maps are based on competitive learning; the output neurons
of the network compete among themselves to be activated or fired, with the
result that only one output neuron, or one neuron per group, is on at any one
time.
• An output neuron that wins the competition is called a winner-takes-all
neuron, or simply a winning neuron.
• The self-organizing map is inherently nonlinear.
• As a neural model, the self-organizing map provides a bridge between two
levels of adaptation:
• adaptation rules formulated at the microscopic level of a single neuron;
• formation of experientially better and physically accessible patterns of
feature selectivity at the macroscopic level of neural layers.
Types of Self Organizing Maps
SOM Process
Three essential processes involved in the formation of the self-organizing map.

1. Competition. For each input pattern, the neurons in the network compute their respective values
of a discriminant function. This discriminant function provides the basis for competition among the
neurons. The particular neuron with the largest value of the discriminant function is declared the
winner of the competition.

2. Cooperation. The winning neuron determines the spatial location of a topological neighborhood
of excited neurons, thereby providing the basis for cooperation among such neighboring neurons.

3. Synaptic Adaptation. This last mechanism enables the excited neurons to increase their individual
values of the discriminant function in relation to the input pattern through suitable adjustments
applied to their synaptic weights.
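One SOM iteration combining the three processes might look like the sketch below. The 1-D lattice, the Gaussian neighborhood, and the parameter values are assumptions made for illustration only.

```python
import numpy as np

# One SOM iteration illustrating competition, cooperation, and adaptation.
# The 1-D lattice, Gaussian neighborhood, and eta/sigma values are assumed.

def som_step(W, x, positions, eta=0.1, sigma=1.0):
    # 1. Competition: using (negative) squared distance as the discriminant
    #    function, the winner is the unit whose weight vector is closest to x.
    D = ((W - x[:, None]) ** 2).sum(axis=0)
    winner = int(np.argmin(D))
    # 2. Cooperation: a topological neighborhood centered on the winner,
    #    measured by lattice distance, not input-space distance.
    lattice_dist_sq = (positions - positions[winner]) ** 2
    h = np.exp(-lattice_dist_sq / (2.0 * sigma ** 2))
    # 3. Synaptic adaptation: every excited neuron moves toward x,
    #    scaled by its neighborhood value h.
    W += eta * h[None, :] * (x[:, None] - W)
    return winner

W = np.random.default_rng(0).random((2, 5))   # 5 units on a line, 2-D inputs
winner = som_step(W, np.array([0.5, 0.5]), np.arange(5.0))
```

Because adaptation moves the winner toward the input, a repeated input makes its winner an ever-better match, which is exactly the "increase their individual values of the discriminant function" behavior described above.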
SOM Process
Two phases of the adaptive process:
1. Self-organizing or ordering phase: It is during this first phase of the
adaptive process that the topological ordering of the weight vectors takes
place.
2. Convergence phase: This second phase of the adaptive process is needed
to fine tune the feature map and therefore provide an accurate statistical
quantification of the input space.
Essence of Kohonen’s Algorithm
• The essential ingredients and parameters of the algorithm are as
follows:
• a continuous input space of activation patterns that are generated in
accordance with a certain probability distribution;
• a topology of the network in the form of a lattice of neurons, which
defines a discrete output space;
• a time-varying neighborhood function h_j,i(x)(n) that is defined around a
winning neuron i(x);
• a learning-rate parameter η(n) that starts at an initial value η0 and then
decreases gradually with time n, but never goes to zero.
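The two time-varying ingredients can be sketched as below. The Gaussian shape for the neighborhood and the exponential-decay time constants (tau1, tau2) are common choices, assumed here for illustration rather than mandated by the slides.

```python
import numpy as np

# Illustrative forms of the time-varying neighborhood function and
# learning rate; sigma0, eta0, tau1, and tau2 are assumed values.

def neighborhood(dist_sq, n, sigma0=2.0, tau1=1000.0):
    """h_j,i(x)(n): shrinks with lattice distance to the winner and with time n."""
    sigma = sigma0 * np.exp(-n / tau1)        # neighborhood width decays over time
    return np.exp(-dist_sq / (2.0 * sigma ** 2))

def learning_rate(n, eta0=0.1, tau2=1000.0):
    """eta(n): starts at eta0 and decays gradually, but never reaches zero."""
    return eta0 * np.exp(-n / tau2)
```

A wide neighborhood and larger η early on correspond to the ordering phase; as both shrink, training enters the convergence (fine-tuning) phase described earlier.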
Learning Algorithm
• The algorithm is summarized as follows:
Learning Algorithm
PROPERTIES OF THE FEATURE MAP
• Once the SOM algorithm has converged, the feature map computed by the
algorithm displays important statistical characteristics of the input space.

• Property 1. Approximation of the Input Space


• Property 2. Topological Ordering
