Self-Organizing Maps (SOM)
SOM: Introduction
• SOM networks are based on competitive learning
• Neurons are placed at the nodes of a lattice
• Nonlinear
• Characterized by the formation of topographic maps
• Motivated by the topographic maps of the cerebral cortex (Kaas et al., 1983)
• It follows an unsupervised learning approach
• SOM has two layers: an Input layer and an Output layer.
• The architecture of a Self-Organizing Map with two clusters and n input
features per sample is given below:
• How does a SOM work?
• Input data of size (m, n)
• First, it initializes the weights of size (n, C), where C is the number of clusters.
• Then, iterating over the input data, it updates the winning vector for each
training example.
• The weight-update rule is given by:
wij(new) = wij(old) + α(t) * (xik - wij(old))
where α(t) is the learning rate at time t and xik is the i-th feature of the
k-th training example.
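As a minimal sketch of this update step (assuming NumPy; names such as
n_features, n_clusters, and alpha are illustrative, not from the slides),
the winner-take-all update might look like this:

import numpy as np

rng = np.random.default_rng(0)
n_features, n_clusters = 4, 2             # n input features, C clusters
W = rng.random((n_features, n_clusters))  # weights of size (n, C)
alpha = 0.5                               # learning rate alpha(t)

def train_step(x, W, alpha):
    # Competition: the column (cluster) whose weight vector is
    # closest to the input x wins.
    winner = np.argmin(np.linalg.norm(W - x[:, None], axis=0))
    # Update only the winning vector with the rule
    # wij(new) = wij(old) + alpha(t) * (xik - wij(old))
    W[:, winner] += alpha * (x - W[:, winner])
    return W

x = rng.random(n_features)                # one training example
W = train_step(x, W, alpha)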
1. Competition. For each input pattern, the neurons in the network compute their respective values
of a discriminant function. This discriminant function provides the basis for competition among the
neurons. The particular neuron with the largest value of the discriminant function is declared the
winner of the competition.
2. Cooperation. The winning neuron determines the spatial location of a topological neighborhood
of excited neurons, thereby providing the basis for cooperation among such neighbouring neurons.
3. Synaptic Adaptation. This last mechanism enables the excited neurons to increase their individual
values of the discriminant function in relation to the input pattern through suitable adjustments
applied to their synaptic weights.
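A short sketch of the cooperation mechanism, assuming a 1-D lattice and a
Gaussian neighborhood (a common choice; the lattice size and sigma below
are illustrative, not from the slides):

import numpy as np

def neighborhood(lattice, winner, sigma):
    # Excitation decays with lattice distance from the winning neuron,
    # so the winner's neighbors also learn, just more weakly.
    d2 = (lattice - lattice[winner]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

lattice = np.arange(10)   # positions of 10 neurons on a 1-D lattice
h = neighborhood(lattice, winner=4, sigma=2.0)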
SOM Process
Two phases of the adaptive process:
1. Self-organizing or ordering phase: It is during this first phase of the
adaptive process that the topological ordering of the weight vectors takes
place.
2. Convergence phase: This second phase of the adaptive process is needed
to fine-tune the feature map and thereby provide an accurate statistical
quantification of the input space.
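In practice the two phases are often realized with decaying schedules: a
relatively large, shrinking learning rate and neighborhood width during
ordering, then small near-constant values during convergence. A sketch
(all constants here are illustrative assumptions):

import numpy as np

eta0, sigma0 = 0.1, 5.0               # initial rate and width
tau_sigma = 1000.0 / np.log(sigma0)   # width time constant
tau_eta = 1000.0                      # rate time constant

def schedules(n):
    # Ordering phase: both quantities decay exponentially with step n.
    eta = eta0 * np.exp(-n / tau_eta)
    sigma = sigma0 * np.exp(-n / tau_sigma)
    # Convergence phase: keep eta small but non-zero, and let sigma
    # shrink until (roughly) only the winner is updated.
    return max(eta, 0.01), max(sigma, 0.5)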
Essence of Kohonen’s Algorithm
• The essential ingredients and parameters of the algorithm are as
follows:
• a continuous input space of activation patterns that are generated in
accordance with a certain probability distribution;
• a topology of the network in the form of a lattice of neurons, which
defines a discrete output space;
• a time-varying neighborhood function hj,i(x)(n) that is defined around a
winning neuron i(x);
• a learning-rate parameter η(n) that starts at an initial value η0 and then
decreases gradually with time n, but never goes to zero.
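A common concrete choice for these ingredients (not spelled out on this
slide) is a Gaussian neighborhood with exponentially decaying width and
learning rate:

hj,i(x)(n) = exp( -d²j,i / (2σ²(n)) ),  σ(n) = σ0 * exp(-n / τ1)
η(n) = η0 * exp(-n / τ2)

where dj,i is the lattice distance between neuron j and the winner i(x),
and τ1, τ2 are time constants.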
Learning Algorithm
• The algorithm is summarized as follows:
1. Initialization: Choose random values for the initial weight vectors wj(0).
2. Sampling: Draw a sample x from the input space with a certain probability.
3. Similarity matching: Find the best-matching (winning) neuron i(x) at time
step n, using the minimum Euclidean distance criterion.
4. Updating: Adjust the synaptic weight vectors of all excited neurons using
the learning rate η(n) and the neighborhood function hj,i(x)(n).
5. Continuation: Continue with step 2 until no noticeable changes in the
feature map are observed.
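Putting the steps together, a self-contained sketch on a 1-D lattice
(all names and constants are illustrative, not from the slides):

import numpy as np

def train_som(X, n_neurons=20, n_steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialization: random weight vectors, one per lattice node.
    W = rng.random((n_neurons, X.shape[1]))
    lattice = np.arange(n_neurons)
    eta0, sigma0 = 0.1, n_neurons / 2.0
    tau = n_steps / np.log(sigma0)
    for n in range(n_steps):
        # 2. Sampling: draw one input pattern at random.
        x = X[rng.integers(len(X))]
        # 3. Similarity matching: minimum Euclidean distance.
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        # 4. Updating: move excited neurons toward x, weighted by the
        # Gaussian neighborhood and the decayed learning rate.
        eta = max(eta0 * np.exp(-n / tau), 0.01)
        sigma = max(sigma0 * np.exp(-n / tau), 0.5)
        h = np.exp(-((lattice - winner) ** 2) / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)
        # 5. Continuation: here we simply loop for a fixed number of steps.
    return W

# Usage: map 2-D points drawn uniformly from a square onto the lattice.
X = np.random.default_rng(1).random((1000, 2))
W = train_som(X)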
PROPERTIES OF THE FEATURE MAP
• Once the SOM algorithm has converged, the feature map computed by the
algorithm displays important statistical characteristics of the input space,
such as approximating the input distribution and preserving its topology.