AIT Important Questions
1) Explain the concept of biological neuron model with the help of a neat
diagram?
Answer:
The biological neural network consists of nerve cells (neurons), as shown in the figure above, which are interconnected as in the figure below. The cell body of the neuron, which includes the neuron's nucleus, is where most of the neural computation takes place.
Neural activity passes from one neuron to another in the form of electrical triggers that travel from one cell to the other down the neuron's axon, by means of an electro-chemical process of voltage-gated ion exchange along the axon and of diffusion of neurotransmitter molecules across the membrane over the synaptic gap.
2) Name the different learning methods and explain any one method of
supervised learning?
Answer:
1. Error-correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
5. Boltzmann learning
1. Error-Correction Learning
The output signal of neuron k, denoted y_k(n), is compared to a desired response or target output, denoted d_k(n); their difference defines the error signal e_k(n) = d_k(n) − y_k(n).
The corrective adjustments are designed to make the output signal y_k(n) come closer to the desired response d_k(n) in a step-by-step manner.
E(n) = (1/2) e_k²(n)
where E(n) is the instantaneous value of the error energy.
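As an illustrative sketch only (not part of the original notes), the Python fragment below applies an error-correction update of the delta-rule form Δw_kj(n) = η e_k(n) x_j(n) to a single linear neuron; the learning rate, toy data, and the function name error_correction_train are assumptions made for this example.

# Minimal sketch of error-correction (delta-rule) learning for one linear neuron.
# The data, learning rate, and number of epochs are illustrative assumptions.
import numpy as np

def error_correction_train(X, d, eta=0.1, epochs=50):
    """X: (N, m) inputs, d: (N,) desired responses."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_n, d_n in zip(X, d):
            y_n = np.dot(w, x_n)          # neuron output y_k(n)
            e_n = d_n - y_n               # error signal e_k(n) = d_k(n) - y_k(n)
            w += eta * e_n * x_n          # delta rule: w_kj <- w_kj + eta * e_k * x_j
    return w

# Example: learn d = 2*x1 - x2 from random samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
d = 2 * X[:, 0] - X[:, 1]
print(error_correction_train(X, d))       # approaches [2, -1]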
2. Memory-Based Learning
In memory-based learning, all (or most) of the past experiences are explicitly stored in a large memory of correctly classified input-output examples. All memory-based learning algorithms involve two essential ingredients:
1. The criterion used for defining the local neighborhood of the test vector x_test.
2. The learning rule applied to the training examples in the local neighborhood of x_test.
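A minimal sketch of the memory-based idea, assuming the simplest nearest-neighbor rule (Euclidean distance as the neighborhood criterion, assign-the-closest-label as the learning rule); the data and function name are illustrative assumptions, not the text's own algorithm.

# Minimal sketch of memory-based learning: a nearest-neighbor classifier.
import numpy as np

def nearest_neighbor_classify(X_train, labels, x_test):
    # Neighborhood criterion: Euclidean distance to the stored examples.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    # Learning rule: assign x_test the class of its closest stored example.
    return labels[np.argmin(dists)]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
print(nearest_neighbor_classify(X_train, labels, np.array([0.8, 0.9])))  # -> 1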
3. Hebbian Learning
Hebb's postulate of learning is the oldest and most famous of all learning rules; it is named in honor of the neuropsychologist Hebb (1949). According to Hebb's hypothesis, the change applied to the synaptic weight w_kj at time step n is
Δw_kj(n) = η y_k(n) x_j(n)
where η is a positive learning-rate parameter, x_j(n) is the presynaptic (input) signal, and y_k(n) is the postsynaptic (output) signal.
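The following sketch, under assumed toy values, applies the Hebbian activity-product update above; the learning rate and the constant offset used to keep the postsynaptic activity positive are assumptions made only for illustration.

# Minimal sketch of Hebb's hypothesis (activity-product rule); values are assumptions.
import numpy as np

def hebbian_update(w, x, y, eta=0.05):
    # Delta w_kj = eta * y_k * x_j : strengthen weights whose input and output are co-active.
    return w + eta * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])        # presynaptic activities x_j
for _ in range(10):
    y = np.dot(w, x) + 1.0           # postsynaptic activity y_k (offset keeps it active)
    w = hebbian_update(w, x, y)
print(w)                             # weights grow only on the co-active inputs (indices 0 and 2)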
4. Competitive Learning
There are three basic elements of a competitive learning rule:
1. A set of neurons that are all the same except for some randomly distributed synaptic weights, and which therefore respond differently to a given set of input patterns.
2. A limit imposed on the "strength" of each neuron.
3. A mechanism that permits the neurons to compete for the right to respond to a given subset of inputs, such that only one output neuron, or only one neuron per group, is active (i.e., "on") at a time. The neuron that wins the competition is called a winner-takes-all neuron.
In the simplest form of competitive learning, the neural network has a single layer of output neurons, each of which is fully connected to the input nodes. The network may include feedback connections among the neurons, as indicated in the figure below. In the network architecture described here, the feedback connections perform lateral inhibition, with each neuron tending to inhibit the neuron to which it is laterally connected. In contrast, the feedforward synaptic connections in the network shown in the figure are all excitatory.
For a neuron k to be the winning neuron, its induced local field v_k for a specified input pattern x must be the largest among all the neurons in the network. The output signal y_k of the winning neuron k is set equal to one; the output signals of all the neurons that lose the competition are set equal to zero. That is,
y_k = 1 if v_k > v_j for all j, j ≠ k
y_k = 0 otherwise
where the induced local field v_k represents the combined action of all the forward and feedback inputs to neuron k.
Let w_kj denote the synaptic weight connecting input node j to neuron k. Suppose that each neuron is allotted a fixed amount of synaptic weight (i.e., all synaptic weights are positive), which is distributed among its input nodes; that is,
Σ_j w_kj = 1   for all k
A neuron then learns by shifting synaptic weights from its inactive to active
input nodes. If a neuron does not respond to a particular input pattern, no learning
takes place in that neuron. If a particular neuron wins the competition, each input
node of that neuron relinquishes some proportion of its synaptic weight, and the
weight relinquished is then distributed equally among the active input nodes.
According to the standard competitive learning rule, the change Δw_kj applied to synaptic weight w_kj is defined by
Δw_kj = η (x_j − w_kj)   if neuron k wins the competition
Δw_kj = 0                 if neuron k loses the competition
where η is the learning-rate parameter. This rule has the overall effect of moving the synaptic weight vector w_k of the winning neuron k toward the input pattern x.
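A hedged sketch of the standard competitive (winner-takes-all) rule described above; the number of neurons, learning rate, epochs, and toy input clusters are assumptions for the example.

# Minimal sketch of competitive (winner-takes-all) learning; values are illustrative.
import numpy as np

def competitive_learning(X, n_neurons=2, eta=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((n_neurons, X.shape[1]))
    W /= W.sum(axis=1, keepdims=True)        # each neuron starts with a fixed total weight
    for _ in range(epochs):
        for x in X:
            k = np.argmax(W @ x)             # winner: largest induced local field v_k
            W[k] += eta * (x - W[k])         # move only the winner's weights toward x
    return W

X = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
print(competitive_learning(X))               # rows end up near the two input clusters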
5. Boltzmann Learning
The Boltzmann machine is characterized by an energy function whose value is determined by the states of the individual neurons,
E = −(1/2) Σ_j Σ_k w_kj x_k x_j,   j ≠ k
where x_j is the state of neuron j, and w_kj is the synaptic weight connecting neuron j to neuron k. The fact that j ≠ k means simply that none of the neurons in the machine has self-feedback. The machine operates by choosing a neuron at random, say neuron k, at some step of the learning process, and then flipping the state of neuron k from x_k to −x_k at some temperature T with probability
P(x_k → −x_k) = 1 / (1 + exp(−ΔE_k / T))
where ΔE_k is the energy change (i.e., the change in the energy function of the machine) resulting from such a flip. Notice that T is not a physical temperature, but rather a pseudo-temperature, as explained in Chapter 1. If this rule is applied repeatedly, the machine will reach thermal equilibrium.
The machine operates in one of two modes:
Clamped condition, in which the visible neurons are all clamped onto specific states determined by the environment.
Free-running condition, in which all the neurons (visible and hidden) are allowed to operate freely.
According to the Boltzmann learning rule, the change Δw_kj applied to the synaptic weight w_kj from neuron j to neuron k is defined by
Δw_kj = η (ρ_kj⁺ − ρ_kj⁻),   j ≠ k
where η is a learning-rate parameter, ρ_kj⁺ is the correlation between the states of neurons j and k in the clamped condition, and ρ_kj⁻ is the corresponding correlation in the free-running condition. Note that both ρ_kj⁺ and ρ_kj⁻ range in value from −1 to +1.
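As a rough illustration of the Boltzmann state-flip rule (not code from the notes), the sketch below flips randomly chosen bipolar states with a sigmoidal acceptance probability; the weights, temperature, sweep count, and the sign convention adopted for ΔE_k are assumptions.

# Minimal sketch of the Boltzmann state-flip rule; all numerical choices are assumptions.
import numpy as np

def boltzmann_sweeps(W, x, T=1.0, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(sweeps):
        k = rng.integers(len(x))                      # choose a neuron at random
        # Assumed sign convention: delta_E is the reduction in energy caused by flipping x_k -> -x_k,
        # so flips that lower the energy are accepted with probability greater than 1/2.
        delta_E = -2.0 * x[k] * np.dot(W[k], x)
        if rng.random() < 1.0 / (1.0 + np.exp(-delta_E / T)):
            x[k] = -x[k]                              # accept the flip with the stated probability
    return x

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])                            # symmetric weights, no self-feedback
x = np.array([1, -1])
print(boltzmann_sweeps(W, x, T=0.2))                  # at low T the two states tend to align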
Answer:
The AND function gives the response "true" if both input values are "true"; otherwise the response is "false." If we represent "true" by the value 1 and "false" by 0, this gives the following four training input, target output pairs:
(1, 1) → 1;  (1, 0) → 0;  (0, 1) → 0;  (0, 0) → 0
The OR function gives the response "true" if either of the input values is "true"; otherwise the response is "false." This is the "inclusive or," since both input values may be "true" and the response is still "true." Representing "true" as 1 and "false" as 0, we have the following four training input, target output pairs:
(1, 1) → 1;  (1, 0) → 1;  (0, 1) → 1;  (0, 0) → 0
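As a small self-check (illustrative only, not from the notes), the training pairs for AND and inclusive OR can be written down and verified directly in Python; the variable names are assumptions.

# Training pairs for the AND and OR functions with "true" = 1 and "false" = 0.
AND_PAIRS = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
OR_PAIRS  = [((1, 1), 1), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]

for (x1, x2), t in AND_PAIRS:
    assert (x1 and x2) == t        # AND is true only when both inputs are true
for (x1, x2), t in OR_PAIRS:
    assert (x1 or x2) == t         # inclusive OR is true when either input is true
print("AND/OR training pairs are consistent")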
Answer:
2. Piecewise-linear Function
6. Gaussian functions
Answer:
According to the flow of the signals within an ANN, we can divide the architectures into feedforward networks, if the signals flow only from input to output, or recurrent networks, if loops are allowed. Another possible classification depends on the existence of hidden neurons, i.e., neurons which are neither input nor output neurons. If there are hidden neurons, we denote the network as a multilayer NN; otherwise the network can be called a single-layer NN. Finally, if every neuron in one layer is connected with the layer immediately above, the network is called fully connected. If not, we speak of a partially connected network.
1. Single-layer networks
The simplest form of an ANN is represented in the figure below. On the left there is the input layer, which is nothing but a buffer and therefore does not implement any processing. The signals flow to the right through the synapses, or weights, arriving at the output layer, where the computation is performed.
2. Multilayer networks
In this case there are one or more hidden layers. The output of each layer constitutes the input to the layer immediately above. For instance, an ANN [5, 4, 4, 1] has 5 neurons in the input layer, two hidden layers with 4 neurons each, and one neuron in the output layer.
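A minimal sketch, assuming tanh activations and random weights, of a fully connected feedforward network with the layer sizes [5, 4, 4, 1] mentioned above; none of the numerical choices come from the notes.

# Minimal sketch of a fully connected feedforward ANN with layer sizes [5, 4, 4, 1].
import numpy as np

layer_sizes = [5, 4, 4, 1]           # input layer, two hidden layers, output layer
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n))   # weight matrix from a layer of n neurons to m neurons
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # The input layer is only a buffer; each following layer computes tanh(W @ activations).
    a = x
    for W in weights:
        a = np.tanh(W @ a)
    return a

print(forward(np.ones(5)))           # a single scalar output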
3. Recurrent networks
Recurrent networks are those in which there are feedback loops. Notice that any feedforward network can be transformed into a recurrent network simply by introducing a delay and feeding the delayed signal back to one of the inputs, as represented in the figure.
Answer:
The Perceptron model is the simplest type of neural network, developed by Frank Rosenblatt in 1962. This type of simple network is rarely used now, but it is significant in terms of its historical contribution to neural networks. A very simple form of the Perceptron model is shown in the figure below. It is very similar to the MCP model discussed in the previous section. It has more than one input connected to a node that sums the linear combination of these inputs. The resulting sum then goes through a hard limiter, which produces an output of +1 if its input is positive and an output of -1 if its input is negative. The Perceptron was first developed to classify a set of externally applied inputs into one of two classes, C1 or C2, with an output of +1 signifying class C1 and an output of -1 signifying class C2.
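An illustrative sketch of the Perceptron computation just described (weighted sum followed by a hard limiter); the weights, bias, and inputs below are invented for the example.

# Minimal sketch of the Perceptron's forward computation; values are assumptions.
import numpy as np

def perceptron_output(x, w, b):
    v = np.dot(w, x) + b             # linear combination of the inputs
    return 1 if v > 0 else -1        # hard limiter: +1 -> class C1, -1 -> class C2

w = np.array([0.5, -0.4])
b = 0.1
print(perceptron_output(np.array([1.0, 0.0]), w, b))   # -> +1 (class C1)
print(perceptron_output(np.array([0.0, 1.0]), w, b))   # -> -1 (class C2)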
Answer:
Machine learning tasks are typically classified into three broad categories,
depending on the nature of the learning "signal" or "feedback" available to a
learning system.
They are
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
Supervised Learning
The machine is presented with example inputs and their desired outputs, given by a "teacher," and the goal of learning is to learn a general rule that maps inputs to outputs.
Unsupervised Learning
No labels are given to the learning system; the learning system has to find structure in its input on its own. Unsupervised learning can be a goal in itself or a means towards an end.
Reinforcement Learning
The machine interacts with a dynamic environment in which it must achieve a certain goal (such as driving a vehicle), without a teacher explicitly telling it whether it has come close to that goal; learning is driven by reward-like feedback instead.
Answer:
A useful attribute of sets and the universes on which they are defined is a metric
known as the cardinality, or the cardinal number.
The total number of elements in a universe X is called its cardinal number, denoted n_X, where x again is a label for individual elements in the universe.
Discrete universes that are composed of a countably finite collection of elements will have a finite cardinal number; continuous universes comprised of an infinite collection of elements will have an infinite cardinality.
Collections of elements within a universe are called sets, and collections of
elements within sets are called subsets. Sets and subsets are terms that are often used
synonymously, since any set is also a subset of the universal set X.
The collection of all possible sets in the universe is called the power set.
Example
For crisp sets A and B consisting of collections of some elements in X, the following
notation is defined:
x ∈ X    ⟹  x belongs to X
x ∈ A    ⟹  x belongs to A
x ∉ A    ⟹  x does not belong to A
A ⊂ B    ⟹  A is fully contained in B (if x ∈ A, then x ∈ B)
A ⊆ B    ⟹  A is contained in or is equivalent to B
(A ↔ B)  ⟹  A ⊆ B and B ⊆ A (A is equivalent to B)
Null set, , as the set containing no elements, and the whole set, X, as the set of all
elements in the universe.
All possible sets of X constitute a special set called the power set, denoted P(X).
The cardinality of the power set, denoted n_P(X), is found as n_P(X) = 2^(n_X).
If the cardinality of the universe is infinite, then the cardinality of the power set is also infinite.
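As a quick illustration (with an assumed three-element universe), the power set and its cardinality n_P(X) = 2^(n_X) can be enumerated in Python:

# Illustrative sketch: the power set of a small universe and its cardinality.
from itertools import chain, combinations

X = {"a", "b", "c"}                                   # a universe with n_X = 3
power_set = list(chain.from_iterable(combinations(X, r) for r in range(len(X) + 1)))
print(len(power_set))                                  # 8 = 2**3, matching n_P(X) = 2^(n_X)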
Operations on Classical Sets
Complement of set A: Ā = {x | x ∉ A, x ∈ X}.
Difference operation: A | B = {x | x ∈ A and x ∉ B}.
The most appropriate properties for defining classical sets and showing their similarity
to fuzzy sets are:
Commutativity     A ∪ B = B ∪ A
                  A ∩ B = B ∩ A
Associativity     A ∪ (B ∪ C) = (A ∪ B) ∪ C
                  A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributivity    A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
                  A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Idempotency       A ∪ A = A
                  A ∩ A = A
Identity          A ∪ ∅ = A
                  A ∩ X = A
                  A ∩ ∅ = ∅
                  A ∪ X = X
Transitivity      If A ⊆ B and B ⊆ C, then A ⊆ C
Involution        The complement of Ā is A (the double complement returns the original set)
Two special properties of set operations are known as the excluded middle axioms and De Morgan's principles. These properties are enumerated here for two sets A and B.
The excluded middle axioms are very important because these are the only set
operations that are not valid for both classical sets and fuzzy sets.
The first, called the axiom of the excluded middle, deals with the union of a set A and
its complement;
A ∪ Ā = X
The second, called the axiom of contradiction, represents the intersection of a set A and
its complement.
A ∩ Ā = ∅
De Morgan's principles:
The complement of (A ∩ B) equals Ā ∪ B̄.
The complement of (A ∪ B) equals Ā ∩ B̄.
De Morgan's principles can also be stated in a general form for any number of sets:
The complement of (A₁ ∩ A₂ ∩ ... ∩ Aₙ) equals Ā₁ ∪ Ā₂ ∪ ... ∪ Āₙ.
The complement of (A₁ ∪ A₂ ∪ ... ∪ Aₙ) equals Ā₁ ∩ Ā₂ ∩ ... ∩ Āₙ.
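The excluded middle axioms and De Morgan's principles for crisp sets can be checked on a small example; the universe and the sets A and B below are assumptions chosen for illustration.

# Illustrative sketch: checking the classical set identities on small crisp sets.
X = {1, 2, 3, 4, 5}
A = {1, 2}
B = {2, 3, 4}

A_c = X - A                                   # complement of A
assert A | A_c == X                           # axiom of the excluded middle: A union A-complement = X
assert A & A_c == set()                       # axiom of contradiction: A intersect A-complement = empty set
assert X - (A & B) == (X - A) | (X - B)       # De Morgan: complement of (A and B) = union of complements
assert X - (A | B) == (X - A) & (X - B)       # De Morgan: complement of (A or B) = intersection of complements
print("all classical set identities hold")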
Mapping of Classical Sets to Functions
Every classical set A can be mapped to a characteristic (indicator) function χ_A on the universe:
χ_A(x) = 1 if x ∈ A, and χ_A(x) = 0 if x ∉ A
where χ_A expresses membership in set A for the element x of the universe.
Fuzzy Sets
In classical, or crisp, sets the transition for an element in the universe between membership and non-membership in a given set is abrupt and well defined (said to be "crisp").
A fuzzy set, is a set containing elements that have varying degrees of membership in
the set.
Elements of a fuzzy set are mapped to a universe of membership values using a
function-theoretic form.
Fuzzy sets are denoted in this text by a set symbol with a tilde understrike; here Ã denotes the fuzzy set A. A membership function maps elements of the fuzzy set to real-numbered values on the interval [0, 1].
If an element in the universe, say x, is a member of fuzzy set Ã, then this mapping is given by
μ_Ã(x) ∈ [0, 1]
When the universe, X, is continuous and infinite, the fuzzy set Ã is denoted as
Ã = { ∫ μ_Ã(x) / x }
The numerator in each term is the membership value in set Ã associated with the element of the universe indicated in the denominator.
Union:
μ_(Ã ∪ B̃)(x) = μ_Ã(x) ∨ μ_B̃(x), where ∨ indicates the maximum operator.
Intersection:
μ_(Ã ∩ B̃)(x) = μ_Ã(x) ∧ μ_B̃(x); here ∧ indicates the minimum operator.
Complement:
μ_Ā(x) = 1 − μ_Ã(x), where Ā denotes the complement of Ã.
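A minimal sketch of these fuzzy operations on a discrete universe; the membership grades below are invented for the example.

# Illustrative sketch of fuzzy union (max), intersection (min), and complement.
A = {"x1": 0.25, "x2": 0.75, "x3": 1.0}
B = {"x1": 0.5,  "x2": 0.25, "x3": 0.5}

union        = {x: max(A[x], B[x]) for x in A}      # mu_union(x) = max(mu_A(x), mu_B(x))
intersection = {x: min(A[x], B[x]) for x in A}      # mu_intersection(x) = min(mu_A(x), mu_B(x))
complement_A = {x: 1 - A[x] for x in A}             # mu_complement(x) = 1 - mu_A(x)

print(union)         # {'x1': 0.5, 'x2': 0.75, 'x3': 1.0}
print(intersection)  # {'x1': 0.25, 'x2': 0.25, 'x3': 0.5}
print(complement_A)  # {'x1': 0.75, 'x2': 0.25, 'x3': 0.0}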
Fuzzy sets follow the same properties as crisp sets. Because of this fact and because the
membership values of a crisp set are a subset of the interval [0, 1], classical sets can be
thought of as a special case of fuzzy sets.
The properties of fuzzy sets are
Classical Relations and Fuzzy Relations
Relations represent the mapping of the sets. Relations are intimately involved in logic,
approximate reasoning, classification, rule-based systems, pattern recognition, and control.
In the case of crisp relations there are only two degrees of relationship between the elements of the sets: completely related or not related.
A crisp relation represents the presence or absence of association, interaction, or
interconnectedness between the elements of two or more sets.
Fuzzy relations allow an infinite number of degrees of relationship between the extremes of "completely related" and "not related" for the elements of two or more sets.
Degrees of association can be represented by membership grades in a fuzzy relation in the same way as degrees of set membership are represented in a fuzzy set.
Crisp set can be viewed as a restricted case of the more general fuzzy set concept.
Let Ã be a fuzzy set on universe X and B̃ be a fuzzy set on universe Y; then the Cartesian product between the fuzzy sets Ã and B̃ will result in a fuzzy relation R̃ = Ã × B̃, which is contained within the full Cartesian product space X × Y, with membership μ_R̃(x, y) = min(μ_Ã(x), μ_B̃(y)).
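An illustrative sketch of the fuzzy Cartesian product using the min operator, with assumed fuzzy sets Ã and B̃ on small universes:

# Illustrative sketch of the fuzzy Cartesian product R = A x B with min membership.
A = {"x1": 0.2, "x2": 1.0}            # fuzzy set on universe X
B = {"y1": 0.5, "y2": 0.75}           # fuzzy set on universe Y

R = {(x, y): min(mu_x, mu_y)          # fuzzy relation on X x Y
     for x, mu_x in A.items()
     for y, mu_y in B.items()}
print(R)   # {('x1','y1'): 0.2, ('x1','y2'): 0.2, ('x2','y1'): 0.5, ('x2','y2'): 0.75}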
Membership function describes the information contained in a fuzzy set and is useful to
develop a lexicon of terms to describe various special features of this function.
For purposes of simplicity, the functions shown in the figures will all be continuous, but
the terms apply equally for both discrete and continuous fuzzy sets.
The sketch of a membership function consists of 3 regions
1. Core
2. Support
3. Boundary
Core
The core of a membership function for some fuzzy set Ã is defined as that region of the universe that is characterized by complete and full membership in the set Ã. That is, the core comprises those elements x of the universe such that μ_Ã(x) = 1.
Support
The support of a membership function for some fuzzy set Ã is defined as that region of the universe that is characterized by nonzero membership in the set Ã. That is, the support comprises those elements x of the universe such that μ_Ã(x) > 0.
Boundaries
The boundaries of a membership function for some fuzzy set Ã are defined as that region of the universe containing elements that have a nonzero membership but not complete membership. The boundaries comprise those elements x of the universe such that 0 < μ_Ã(x) < 1. These elements of the universe are those with some degree of fuzziness, or only partial membership, in the fuzzy set Ã.
Crossover point
The crossover points of a membership function are defined as the elements in the universe for which the fuzzy set Ã has membership values equal to 0.5, i.e., those x for which μ_Ã(x) = 0.5.
The height of the fuzzy set Ã is the maximum value of the membership function, hgt(Ã) = max_x {μ_Ã(x)}.
The membership functions can be symmetrical or asymmetrical. Membership values always lie between 0 and 1.
Based on the membership functions, fuzzy sets are classified into two categories:
Normal fuzzy set: If the membership function has at least one element in the universe whose membership value is equal to 1, then the set is called a normal fuzzy set.
Subnormal fuzzy set: If all the membership values are less than 1, then the set is called a subnormal fuzzy set.
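As an illustration (with an assumed triangular membership function and a discrete universe), the core, support, and boundary regions can be computed directly:

# Illustrative sketch: core, support, and boundary of a triangular membership function.
def mu(x, a=2.0, b=5.0, c=8.0):
    # Triangular membership: 0 outside [a, c], rising to 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

universe = [i * 0.5 for i in range(0, 21)]            # 0.0, 0.5, ..., 10.0
core     = [x for x in universe if mu(x) == 1.0]      # complete membership
support  = [x for x in universe if mu(x) > 0.0]       # nonzero membership
boundary = [x for x in universe if 0.0 < mu(x) < 1.0] # partial membership
print(core)       # [5.0]
print(support)    # 2.5 ... 7.5
print(boundary)   # the support minus the core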
Fuzzification: Fuzzification is the process where the crisp quantities are converted to fuzzy
(crisp to fuzzy). By identifying some of the uncertainties present in the crisp values, fuzzy
values are formed.
The conversion to fuzzy values is represented by the membership functions. Thus the fuzzification process involves assigning membership values to the given crisp quantities.
In the real world, hardware such as a digital voltmeter generates crisp data, but these
data are subject to experimental error. The representation of imprecise data as fuzzy sets is a
useful but not mandatory step when those data are used in fuzzy systems.
To represent this kind of data, the data are first treated as crisp, and the crisp data are then compared with a fuzzy set.
There are various methods to assign the membership values or the membership
functions to fuzzy variables.
Intuition
Inference,
Rank ordering,
Angular fuzzy sets,
Neural networks,
Genetic algorithms, and
Inductive reasoning
Intuition
Intuition is based on the human's own intelligence and understanding to develop the membership functions. Thorough knowledge of the problem has to be known, and knowledge regarding the linguistic variables should also be known.
For example, consider the speed of a DC motor. The shape of the universe of speed, given in rpm, is shown in the figure. The curves represent membership functions corresponding to various fuzzy variables. The range of speed is split into low, medium, and high. The curves differentiate the ranges, as judged by humans. The placement of the curves is approximate over the universe of discourse; the number of curves and the overlapping of the curves are important criteria to be considered while defining membership functions.
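A hedged sketch of intuition-style membership functions for motor speed; the trapezoidal shapes and rpm breakpoints for "low," "medium," and "high" are assumptions, not values from the figure.

# Illustrative trapezoidal membership functions for motor speed in rpm (assumed breakpoints).
def trapezoid(x, a, b, c, d):
    # 0 before a, rises to 1 on [b, c], falls back to 0 at d.
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

speed_mfs = {
    "low":    lambda x: trapezoid(x, -1, 0, 300, 800),
    "medium": lambda x: trapezoid(x, 300, 800, 1200, 1700),
    "high":   lambda x: trapezoid(x, 1200, 1700, 2000, 2001),
}
rpm = 600.0
print({name: round(mf(rpm), 2) for name, mf in speed_mfs.items()})
# overlapping curves: at 600 rpm the speed is partly "low" (0.4) and partly "medium" (0.6)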
Inference
This method involves the knowledge to perform deductive reasoning. The membership
function is formed from the facts known and knowledge.
Rank Ordering
The polling concept is used to assign membership values by a rank-ordering process. Preferences are determined by pairwise comparisons, and from these the ordering of the membership values is done.
The angular fuzzy sets are different from the standard fuzzy sets in their coordinate
description. These sets are defined on the universe of angles, hence are repeating shapes
every 2π cycles. Angular fuzzy sets are applied to the quantitative description of linguistic variables known as truth-values. When a membership value of 1 is true and that of 0 is false, the membership values in between 0 and 1 are partially true or partially false.
Neural Networks
The fuzzy membership function may be created for fuzzy classes of an input data set.
For a given problem the number of input data values is selected. Then the data is divided into
a training data set and a testing data set. The training data set is used to train the network. After the full training and testing process is completed, the neural network is ready, and it can be used to determine the membership values of any input data in the different regions.
Genetic Algorithm
Genetic algorithms (GA) use the concept of Darwin's theory of evolution, which is based on the rule of "survival of the fittest."
Inductive Reasoning
Formation of Rules
For any linguistic variable, there are three general forms in which the canonical rules
can be formed.
(1) Assignment statements
(2) Conditional statements
(3) Unconditional statements
Assignment statements
These statements are those in which a variable is assigned a value. The variable and the assigned value are combined by the assignment operator "=". The assignment statements are necessary in forming fuzzy rules. The value to be assigned may be a linguistic term.
The assignment statement is found to restrict the value of a variable to a specific equality.
Conditional statements
In these statements, some specific conditions are mentioned; if the conditions are
satisfied then it enters the following statements, called as restrictions.
Unconditional statements
There is no specific condition that has to be satisfied in this form of statements.
In the field of artificial intelligence (machine intelligence), there are various ways to
represent knowledge. Perhaps the most common way to represent human knowledge is to
form it into natural language expressions of the type
IF premise (antecedent), THEN conclusion (consequent).
The form of the expression given in the statement above is commonly referred to as the IF-THEN rule-based form; this form is generally referred to as the deductive form. It typically expresses an inference such that if we know a fact (premise, hypothesis, antecedent), then we can infer, or derive, another fact called a conclusion (consequent).
This form of knowledge representation, characterized as shallow knowledge, is quite
appropriate in the context of linguistics because it expresses human empirical and heuristic
knowledge in our own language of communication.
It does not, however, capture the deeper forms of knowledge usually associated with intuition, structure, function, and behavior of the objects around us, simply because these latter forms of knowledge are not readily reduced to linguistic phrases or representations.
The fuzzy rule-based system is most useful in modeling some complex systems that can be observed by humans, because it makes use of linguistic variables as its antecedents and consequents; as described here, these linguistic variables can be naturally represented by fuzzy sets and logical connectives of these sets.
Defuzzification
Defuzzification means the fuzzy-to-crisp conversion. The fuzzy results generated cannot be used as such in the applications; hence it is necessary to convert the fuzzy quantities into crisp quantities for further processing.
This can be achieved by using the defuzzification process. Defuzzification has the capability to reduce a fuzzy quantity to a crisp single-valued quantity or to a set, or to convert it to the form in which the fuzzy quantity is present.
Defuzzification can also be called "rounding off." It reduces the collection of membership function values into a single scalar quantity.
Defuzzification of fuzzy sets is done with the help of Lambda Cuts for Fuzzy Sets.
Consider a fuzzy set Ã; then the lambda (λ) cut set can be denoted by A_λ, where λ ranges between 0 and 1 (0 ≤ λ ≤ 1). The set A_λ is a crisp set. This crisp set is called the lambda cut set of the fuzzy set Ã, where
A_λ = {x | μ_Ã(x) ≥ λ}
That is, the lambda cut set A_λ contains the elements x whose membership values are greater than or equal to the specified value of λ. This set can also be called an alpha (α) cut set. Lambda ranges in the interval [0, 1].
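An illustrative lambda-cut on an assumed discrete fuzzy set; the membership grades and the value of λ are invented for the example.

# Illustrative sketch of a lambda-cut: the crisp set of elements with membership >= lambda.
A = {"x1": 0.1, "x2": 0.4, "x3": 0.7, "x4": 1.0}   # discrete fuzzy set mu_A(x)

def lambda_cut(fuzzy_set, lam):
    return {x for x, mu in fuzzy_set.items() if mu >= lam}   # A_lambda = {x | mu_A(x) >= lambda}

print(lambda_cut(A, 0.5))   # -> elements x3 and x4
print(lambda_cut(A, 0.1))   # -> all four elements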
Defuzzification methods
There are seven methods used for defuzzifying the fuzzy output functions.
They are:
1. Max-membership principle,
2. Centroid method,
3. Weighted average method,
4. Mean max membership,
5. Centre of sums,
6. Centre of largest area, and
7. First of maxima or last of maxima
Max-membership principle
Also known as the height method, this scheme is limited to peaked output functions; the defuzzified value z* is taken at the point where the membership function reaches its maximum, i.e., μ_C̃(z*) ≥ μ_C̃(z) for all z.
Centroid method
This is the most widely used method. It can also be called the center-of-gravity or center-of-area method. It is defined by the algebraic expression
z* = ∫ μ_C̃(z) · z dz / ∫ μ_C̃(z) dz
where ∫ denotes algebraic integration.
Weighted average method
This method is formed by weighting each membership function in the obtained output by its largest membership value. This method cannot be used for asymmetrical output membership functions; it can be used only for symmetrical output membership functions. The evaluation expression for this method is
z* = Σ μ_C̃(z̄) · z̄ / Σ μ_C̃(z̄)
where z̄ is the maximum (center) of each symmetrical membership function.
Mean max-membership
This method is related to the max-membership principle, but the location of the maximum membership need not be unique; i.e., the maximum membership need not occur at a single point, it can be a range. This method is also called the middle-of-maxima method, and the expression is given as
z* = (a + b) / 2
where a and b are the end points of the range of maximum membership.
Centre of sums
It involves the algebraic sum of the individual output fuzzy sets. The intersecting areas of the fuzzy sets are added twice. The defuzzified value z* is given as
z* = ∫ z · Σ_k μ_C̃k(z) dz / ∫ Σ_k μ_C̃k(z) dz
Centre of largest area
If the output fuzzy set has at least two convex subregions, then the center of gravity of the convex subregion with the largest area can be used to calculate the defuzzified value. The equation is given as
z* = ∫ μ_C̃m(z) · z dz / ∫ μ_C̃m(z) dz
where C̃m is the convex subregion with the largest area. When the overall output set is convex, the value z* is the same as the value obtained by the centroid method; the method can be used even for non-convex regions.
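A small sketch of centroid defuzzification on a sampled output membership function; the triangular shape and the sample grid are assumptions made for the example.

# Illustrative sketch of centroid (center-of-gravity) defuzzification on a discretized universe.
import numpy as np

z = np.linspace(0.0, 10.0, 201)                          # discretized output universe
mu = np.clip(1.0 - np.abs(z - 6.0) / 3.0, 0.0, 1.0)      # triangular output fuzzy set peaked at z = 6

z_star = np.sum(mu * z) / np.sum(mu)                     # discrete form of z* = sum(mu*z) / sum(mu)
print(round(z_star, 2))                                  # close to 6.0 for this symmetric set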
1. Write the mathematical expression of the membership function and sketch the membership function.
Membership function describes the information contained in a fuzzy set and is useful to
develop a lexicon of terms to describe various special features of this function.
For purposes of simplicity, the functions shown in the figures will all be continuous, but
the terms apply equally for both discrete and continuous fuzzy sets.
The sketch of a membership function consists of 3 regions
1. Core
2. Support
3. Boundary
Core
The core of a membership function for some fuzzy set Ã is defined as that region of the universe that is characterized by complete and full membership in the set Ã. That is, the core comprises those elements x of the universe such that μ_Ã(x) = 1.
Support
The support of a membership function for some fuzzy set Ã is defined as that region of the universe that is characterized by nonzero membership in the set Ã. That is, the support comprises those elements x of the universe such that μ_Ã(x) > 0.
Boundaries
The boundaries of a membership function for some fuzzy set Ã are defined as that region of the universe containing elements that have a nonzero membership but not complete membership. The boundaries comprise those elements x of the universe such that 0 < μ_Ã(x) < 1. These elements of the universe are those with some degree of fuzziness, or only partial membership, in the fuzzy set Ã.
Fuzzy sets follow the same properties as crisp sets. Because of this fact and because the
membership values of a crisp set are a subset of the interval [0, 1], classical sets can be
thought of as a special case of fuzzy sets.
The properties of fuzzy sets are
3. Give and explain the properties of crisp sets
The most appropriate properties for defining classical sets and showing their similarity
to fuzzy sets are:
Commutativity     A ∪ B = B ∪ A
                  A ∩ B = B ∩ A
Associativity     A ∪ (B ∪ C) = (A ∪ B) ∪ C
                  A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributivity    A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
                  A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Idempotency       A ∪ A = A
                  A ∩ A = A
Identity          A ∪ ∅ = A
                  A ∩ X = A
                  A ∩ ∅ = ∅
                  A ∪ X = X
Transitivity      If A ⊆ B and B ⊆ C, then A ⊆ C
Involution        The complement of Ā is A (the double complement returns the original set)
Two special properties of set operations are known as the excluded middle axioms and De Morgan's principles. These properties are enumerated here for two sets A and B.
The excluded middle axioms are very important because these are the only set
operations that are not valid for both classical sets and fuzzy sets.
The first, called the axiom of the excluded middle, deals with the union of a set A and
its complement;
A ∪ Ā = X
The second, called the axiom of contradiction, represents the intersection of a set A and
its complement.
A ∩ Ā = ∅
Classical (crisp) relations structures are the relations or structures that represent the
presence or absence of correlation, interaction, or propinquity between the elements of two or
more crisp sets.
There are only two degrees of relationship between elements of the sets in a crisp relation: the relationships "completely related" and "not related."
Fuzzy relations are then developed by allowing the relationship between elements of two or more sets to take on an infinite number of degrees of relationship between the extremes of "completely related" and "not related."
Fuzzy relations are to crisp relations as fuzzy sets are to crisp sets; crisp sets and
relations are constrained realizations of fuzzy sets and relations.
Operations, properties, and cardinality of fuzzy relations, as well as Cartesian products and compositions of fuzzy relations, are considered here.
Let R be a relation that relates, or maps, elements from universe X to universe Y, and
let S be a relation that relates, or maps, elements from universe Y to universe Z. A useful
question we seek to answer is whether we can find a relation, T, that relates the same
elements in universe X that R contains to the same elements in universe Z that S contains. It
turns out that we can find such a relation using an operation known as composition.
From the Sagittal diagram in the figure, the only "path" between relation R and relation S is the two routes that start at x1 and end at z2 (i.e., x1–y1–z2 and x1–y3–z2). Hence, we wish to find a relation T that relates the ordered pair (x1, z2); that is, (x1, z2) ∈ T.
In this example, R ={(x1, y1), (x1, y3), (x2, y4)} . S ={(y1, z2), (y3, z2)} .
The max-min composition, T = R ∘ S, is defined by the set-theoretic and membership function-theoretic expressions
T = R ∘ S
χ_T(x, z) = max over y of [ min( χ_R(x, y), χ_S(y, z) ) ]
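An illustrative computation of the max-min composition for the crisp relations R and S listed in the example above; encoding the relations as characteristic-function matrices is an assumption of this sketch.

# Illustrative sketch of max-min composition T = R o S for crisp relations.
import numpy as np

X, Y, Z = ["x1", "x2"], ["y1", "y2", "y3", "y4"], ["z1", "z2"]
R = np.array([[1, 0, 1, 0],      # chi_R(x, y): R = {(x1,y1), (x1,y3), (x2,y4)}
              [0, 0, 0, 1]])
S = np.array([[0, 1],            # chi_S(y, z): S = {(y1,z2), (y3,z2)}
              [0, 0],
              [0, 1],
              [0, 0]])

# chi_T(x, z) = max over y of min(chi_R(x, y), chi_S(y, z))
T = np.array([[np.max(np.minimum(R[i, :], S[:, k])) for k in range(len(Z))]
              for i in range(len(X))])
print(T)                         # [[0 1] [0 0]] -> T = {(x1, z2)}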
Uncertainty