QB Btech DP Sem Viii 21-22
A) Feature Vector
B) Linear Vector
C) Collection
D) Lambda function
Q3. Which of the following is a boundary descriptor?
A) diameter
B) area
C) perimeter
D) compactness
A)
B)
C) P(A | B) = P(B)/P(A)
D) P(A | B) = P(A)/P(B,A)
Q17. If the covariance matrix is not the same for all classes, then the discriminant function is
A) linear
B) Not linear
C) circle
D) heart shape
Q18. If the covariance matrix is the same for all classes and is of identity form, then the discriminant function is
A) linear
B) Not linear
C) circle
D) heart shape
Q19. The solution region is nothing but the region bounded by the
A) Solution vector
B) Feature vector
C) sigma value
D) covariance matrix
Q20. Machine learning is the subset of
A) Deep learning
B) AI
C) Both A and B
D) None of the mentioned
Q21. The Element Difference Moment is given by
A)
B)
C)
D)
Q22. The Maximum Probability is given by
A)
B)
C)
D)
Q23. The Inverse Element Difference Moment is given by
A)
B)
C)
D)
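The formula options for Q21-Q23 were lost when this question bank was exported. As a hedged sketch, the standard textbook definitions of these three gray-level co-occurrence matrix (GLCM) descriptors can be computed as follows; the sample matrix P below is made up for illustration:

```python
# Three GLCM texture descriptors (Q21-Q23), computed from a small
# normalized gray-level co-occurrence matrix P (entries sum to 1).

def element_difference_moment(P, k=2):
    # sum over (i, j) of (i - j)^k * P(i, j); k = 2 is the contrast
    return sum((i - j) ** k * p
               for i, row in enumerate(P)
               for j, p in enumerate(row))

def inverse_element_difference_moment(P):
    # sum over (i, j) of P(i, j) / (1 + (i - j)^2), a.k.a. homogeneity
    return sum(p / (1 + (i - j) ** 2)
               for i, row in enumerate(P)
               for j, p in enumerate(row))

def maximum_probability(P):
    # the single largest entry of the co-occurrence matrix
    return max(p for row in P for p in row)

P = [[0.2, 0.1, 0.0],
     [0.1, 0.3, 0.1],
     [0.0, 0.1, 0.1]]   # illustrative normalized GLCM

print(element_difference_moment(P))          # contrast (k = 2)
print(inverse_element_difference_moment(P))
print(maximum_probability(P))                # 0.3
```
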
Q24. Which of these statements about mini-batch gradient descent do you agree with?
A) You should implement mini-batch gradient descent without an explicit for-loop over different mini-batches, so that the algorithm processes all mini-batches at the same time (vectorization).
B) Training one epoch (one pass through the training set) using mini-batch gradient descent is faster than training one epoch using batch gradient descent.
C) One iteration of mini-batch gradient descent (computing on a single mini-batch) is faster than one
iteration of batch gradient descent.
D) All of the above
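Q24's options can be grounded with a minimal sketch of mini-batch gradient descent, here fitting y = w * x by least squares on made-up noiseless data. The explicit for-loop over mini-batches is the point: batches are processed one after another, which is why vectorizing *across* all mini-batches at once (option A) is not possible; vectorization applies only *within* a single mini-batch.

```python
# Mini-batch gradient descent sketch: fit y = w * x, true weight 3.0.

xs = [(i - 32) / 32 for i in range(64)]    # 64 synthetic inputs in [-1, 1)
ys = [3.0 * x for x in xs]                 # noiseless targets

w, lr, batch_size = 0.0, 0.5, 16
for epoch in range(50):                    # one epoch = one pass over all batches
    for start in range(0, len(xs), batch_size):   # sequential loop over mini-batches
        xb = xs[start:start + batch_size]
        yb = ys[start:start + batch_size]
        # gradient of the mean squared error on this mini-batch only
        grad = sum(2 * (w * x - y) * x for x, y in zip(xb, yb)) / len(xb)
        w -= lr * grad

print(w)   # should end up very close to 3.0
```
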
Q26. _____ processes all the training examples for each iteration of gradient descent.
A) 4
B) 3
C) 2
D) 1
B) Dendrites
C) Soma
D) Axon
Q31. In a multilayer neural network, at every layer the neurons compute a ------------- function.
A) non-linear
B) linear
C) Both of the above
D) None of the above
Q33. The fundamental unit of a neural network is the
A) brain
B) nucleus
C) neuron
D) axon
Q34. What are dendrites?
A) fibers of nerves
B) nuclear projections
C) other name for nucleus
D) none of the mentioned
Q35. What is the purpose of an axon?
A) receptors
B) transmitter
C) transmission
D) none of the mentioned
Q36. What kind of operations can neural networks perform?
A) serial
B) parallel
C) serial or parallel
D) none of the mentioned
Q37. What are feedforward networks used for?
A) pattern mapping
B) pattern association
C) pattern classification
D) all of the mentioned
Q38. What is the objective of the backpropagation algorithm?
A) to develop learning algorithm for multilayer feedforward neural network
B) to develop learning algorithm for single layer feedforward neural network
C) to develop learning algorithm for multilayer feedforward neural network, so that network can be
trained to capture the mapping implicitly
D) none of the mentioned
Q39. What are the general limitations of the backpropagation rule?
A) local minima problem
B) slow convergence
C) scaling
D) all of the mentioned
Q40. Is backpropagation learning based on gradient descent along the error surface?
A) yes
B) no
C) cannot be said
D) it depends on gradient descent but not error surface
Q41. What is the activation value associated with?
A) Potential at synapses
B) cell membrane potential
C) all of the mentioned
D) none of the mentioned
Q43. What is the actual reason behind the boundedness of the output function in activation dynamics?
A) limited neural fluid
B) limited fan in capacity of inputs
C) both limited neural fluid & fan in capacity
D) none of the mentioned
Q51. Which of these robot applications can be implemented with a single-layer feedforward network?
A) wall following
B) wall climbing
C) gesture control
D) rotating arm and legs
Q.2 For the given set of feature vectors, calculate the covariance matrices and prove that the decision boundary is linear.
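The exam's actual feature vectors are not reproduced in this question bank, so the sketch below uses made-up classes chosen to have equal covariance matrices. When the covariances are equal, the quadratic term x^T S^{-1} x appears identically in both Gaussian discriminants and cancels, so the boundary reduces to the linear form w . (x - x0) = 0 with w = S^{-1}(mu1 - mu2) and x0 = (mu1 + mu2)/2:

```python
# Illustrative feature vectors (not the exam's) with equal covariances.

def mean(vectors):
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def covariance(vectors):
    m, n, d = mean(vectors), len(vectors), len(vectors[0])
    return [[sum((v[i] - m[i]) * (v[j] - m[j]) for v in vectors) / n
             for j in range(d)] for i in range(d)]

class1 = [[2, 6], [3, 4], [3, 8], [4, 6]]
class2 = [[3, 0], [2, -2], [4, -2], [3, -4]]

m1, m2 = mean(class1), mean(class2)
S1, S2 = covariance(class1), covariance(class2)
print(S1, S2)            # equal, diagonal covariance matrices

# S is diagonal here, so its inverse is elementwise
Sinv = [1.0 / S1[0][0], 1.0 / S1[1][1]]
w = [Sinv[k] * (m1[k] - m2[k]) for k in range(2)]     # boundary normal
x0 = [(m1[k] + m2[k]) / 2 for k in range(2)]          # point on the boundary
print(w, x0)             # the boundary is the line w . (x - x0) = 0
```
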
Q.3 Describe the Support Vector Machine (SVM).
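An answer to Q.3 can be grounded with a small sketch: a linear SVM trained by sub-gradient descent on the hinge loss (a Pegasos-style approximation, not a full quadratic-programming solver). The 1-D data points and hyperparameters below are made up for illustration; the bias is folded into the weight vector via a constant 1.0 feature.

```python
# Linear SVM via hinge-loss sub-gradient descent (illustrative data).

data = [([+2.0, 1.0], +1), ([+3.0, 1.0], +1), ([+4.0, 1.0], +1),
        ([-2.0, 1.0], -1), ([-3.0, 1.0], -1), ([-4.0, 1.0], -1)]

w = [0.0, 0.0]
lam, lr = 0.01, 0.1      # regularization strength and learning rate
for epoch in range(100):
    for x, y in data:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        # sub-gradient of (lam/2) * ||w||^2 + max(0, 1 - margin)
        if margin < 1:
            w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
        else:
            w = [wi - lr * lam * wi for wi in w]

predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
               for x, _ in data]
print(w, predictions)    # all six training points should be classified correctly
```
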
Q.4 List and describe optimization techniques.
Q.5 Explain the natural neural network (the human brain).
Q.6 Describe back propagation learning in detail.
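A hedged sketch of the procedure Q.6 asks about: an assumed 2-2-1 sigmoid network with made-up initial weights, trained here on the OR function with squared error. The backward pass propagates the output error to the hidden layer and updates all weights by gradient descent along the error surface (cf. Q40).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# training set: the OR function
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# hidden layer: 2 neurons, each with 2 input weights + bias;
# output neuron: 2 hidden weights + bias (arbitrary initial values)
W1 = [[0.5, -0.4, 0.1], [-0.3, 0.6, -0.2]]
W2 = [0.4, -0.5, 0.2]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    o = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)

loss_before = loss()
for epoch in range(5000):
    for x, t in samples:
        h, o = forward(x)
        # backward pass: error deltas at the output, then the hidden layer
        delta_o = (o - t) * o * (1 - o)
        delta_h = [delta_o * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # weight updates: one gradient-descent step per training sample
        W2 = [W2[0] - lr * delta_o * h[0],
              W2[1] - lr * delta_o * h[1],
              W2[2] - lr * delta_o]
        for j in range(2):
            W1[j] = [W1[j][0] - lr * delta_h[j] * x[0],
                     W1[j][1] - lr * delta_h[j] * x[1],
                     W1[j][2] - lr * delta_h[j]]

print(loss_before, loss())   # the loss should drop substantially
```
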