QB B.Tech DP Sem VIII 21-22


Department of Computer Engineering

MSE (Question Bank)


B.Tech Computer AY: 2021-22 Sem-VIII

Deep Learning (BTCOE801(A))


Multiple Choice Questions (MCQ)
1 Mark each Unit: I, II & III

Q1. ---------- is the ability to comprehend.


A) Intelligence
B) Machine Learning
C) Deep Learning
D) Social Learning
Q2. --------- are used to represent numeric or symbolic characteristics.

A) Feature Vector
B) Linear Vector
C) Collection
D) Lambda function
Q3. Among the following, which is a boundary descriptor?
A) diameter
B) area
C) perimeter
D) compactness

Q4. Among the following, which is a regional descriptor?


A) diameter
B) area
C) perimeter
D) compactness
Q5. A plot of the distance of different boundary points from the centroid of the shape, taken in various directions or orientations, is known as a
A) Signature
B) Vector
C) Set
D) Discriminant function
Q6. Among the following, which is a region descriptor?
A) Intensity
B) Area
C) Perimeter
D) Instance
Q7. Among the following, which is a region descriptor?
A) Texture
B) Area
C) Perimeter
D) Instance
Q8. Among the following, which is a region descriptor?
A) Color
B) Area
C) Perimeter
D) Instance

Q9. In the k-nearest neighbor (k-NN) algorithm, how do we classify an unknown object?


A) Assigning the label which is most frequent among the k nearest training samples
B) Assigning the unknown object to the class of its nearest neighbor among training
sample
C) Assigning the label which is most frequent among all training samples except the
k farthest neighbors
D) None of these
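The classification rule in option A can be sketched in a few lines of Python; the training data, query points, and k value below are illustrative, not from the question bank:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training samples (Euclidean distance), as in option A."""
    # Sort training samples by distance to the query point, keep k nearest.
    neighbors = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    # Return the most frequent label among those k neighbors.
    labels = [label for _, label in neighbors]
    return Counter(labels).most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_classify(train, (1, 1), k=3))  # "A"
print(knn_classify(train, (5, 6), k=3))  # "B"
```

With k = 1 this reduces to option B, the nearest-neighbor rule.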
Q10. For a minimum-distance classifier, which of the following must be satisfied?
A) All classes should have identical covariance matrix and diagonal matrix
B) All classes should have identical covariance matrix but otherwise arbitrary
C) All classes should have equal class probability
D) None of above
Q11. Which of the following classifiers can be replaced by a linear SVM?
A) Logistic Regression
B) Neural Networks
C) Decision Trees
D) None of the above
Q12. Uniformity is given by
A)
B)
C)
D)
Q13. The likelihood of an event occurring when there is a finite number of outcomes, each
equally likely to occur, is called
A) A priori probability
B) Sigma Probability
C) Pool’s Conditional Probability
D) Pooja effect

Q14. Bayes Theorem is given by


A) P(A | B) = P(B|A)P(A)/P(B)
B) P(A | B) = P(A)/P(B)
C) P(A | B) = P(B)/P(A)
D) P(A | B) = P(A)/P(B,A)
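The correct formula (option A) can be verified numerically; the probabilities below are illustrative values for a hypothetical event A and evidence B, not from the question bank:

```python
# Numeric check of option A: P(A|B) = P(B|A) * P(A) / P(B),
# using illustrative probabilities for a hypothetical event A.
p_a = 0.01              # P(A): prior probability of A
p_b_given_a = 0.90      # P(B|A): likelihood of B given A
p_b_given_not_a = 0.05  # P(B|~A): likelihood of B given not-A

# Total probability: P(B) = P(B|A) P(A) + P(B|~A) P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b  # Bayes' theorem (option A)
print(round(p_a_given_b, 4))  # 0.1538
```

Note how a strong likelihood (0.90) combined with a low prior (0.01) still yields a modest posterior, which is the usual point of such examples.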

Q15. Bayes minimum risk classifier is given by

A)

B)
C) P(A | B) = P(B)/P(A)
D) P(A | B) = P(A)/P(B,A)

Q16. Bayes minimum error classifier is given by

A)

B)
C) P(A | B) = P(B)/P(A)
D) P(A | B) = P(A)/P(B,A)

Q17. If the covariance matrices are not the same for all classes, then the discriminant function is of the form
A) linear
B) Not linear
C) circle
D) heart shape
Q18. If the covariance matrix is the same for all classes and has identity form, then the discriminant function is
of the form
A) linear
B) Not linear
C) circle
D) heart shape
Q19. The solution region is nothing but the limit between
A) Solution vector
B) Feature vector
C) sigma value
D) covariance matrix
Q20. Machine learning is a subset of
A) Deep learning
B) AI
C) Both A and B
D) None of the Mentioned
Q. 21 Element Difference Moment is given by

A)
B)
C)
D)
Q22. Maximum Probability is given by
A)
B)
C)
D)
Q23. Inverse Element Difference Moment is given by
A)
B)
C)
D)

Q.24 Which of these statements about mini-batch gradient descent do you agree with?
A) You should implement mini-batch gradient descent without an explicit for-loop over different mini-
batches, so that the algorithm processes all mini-batches at the same time (vectorization).
B) Training one epoch (one pass through the training set) using mini-batch gradient descent is faster
than training one epoch using batch gradient descent.
C) One iteration of mini-batch gradient descent (computing on a single mini-batch) is faster than one
iteration of batch gradient descent.
D) All of the above

Q. 25 Gradient Descent is an optimization algorithm used for

A) Certain Changes in algorithm


B) minimizing the cost function in various machine learning algorithms
C) maximizing the cost function in various machine learning algorithms
D) keeping the cost function the same in various machine learning algorithms

Q. 26) _____processes all the training examples for each iteration of gradient descent.

A) Stochastic Gradient Descent


B) Batch Gradient Descent
C) Mini Batch gradient descent
D) None of the above

Q. 27) How many types of Gradient Descent are there?

A) 4
B) 3
C) 2
D) 1

Q. 28) Which is the fastest gradient descent?

A) Batch Gradient Descent


B) Stochastic Gradient Descent
C) Mini Batch gradient descent
D) none of these
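Gradient descent (Q.25) minimizes a cost by repeatedly stepping opposite the gradient; the batch, stochastic, and mini-batch variants of Q.26-Q.28 differ only in how much training data each gradient estimate uses. A minimal sketch on a toy cost, with the learning rate and step count chosen purely for illustration:

```python
# Minimize the toy cost J(w) = (w - 3)**2, whose gradient is 2*(w - 3).
def gradient_descent(w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)   # gradient of J at the current w
        w -= lr * grad       # step opposite the gradient (minimization)
    return w

print(round(gradient_descent(), 4))  # 3.0, the minimizer of J
```

In the stochastic and mini-batch variants, `grad` would instead be estimated from one sample or one mini-batch per update rather than from the full training set.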
Q. 29) The connectors in a neural network are known as ---------
A) Synapse
B) Dendrites
C) Soma
D) Axon
Q. 30) In a neuron, processing is done in a part of the cell known as the ----
A) Synapse
B) Dendrites
C) Soma
D) Axon
Q. 31) In a multilayer neural network, at every layer the neurons compute a ------------- function.
A) non-linear
B) linear
C) Both of above
D) None of above
Q. 33) The fundamental unit of a neural network is
A) brain
B) nucleus
C) neuron
D) axon
Q. 34 What are dendrites?
A) fibers of nerves
B) nuclear projections
C) other name for nucleus
D) none of the mentioned
Q. 35 What is the purpose of the axon?
A) receptors
B) transmitter
C) transmission
D) none of the mentioned
Q. 36) What kind of operations can neural networks perform?
A) serial
B) parallel
C) serial or parallel
D) none of the mentioned
Q. 37) What are feedforward networks used for?
A) pattern mapping
B) pattern association
C) pattern classification
D) all of the mentioned
Q. 38) What is the objective of backpropagation algorithm?
A) to develop learning algorithm for multilayer feedforward neural network
B) to develop learning algorithm for single layer feedforward neural network
C) to develop learning algorithm for multilayer feedforward neural network, so that network can be
trained to capture the mapping implicitly
D) none of the mentioned
Q. 39) What are the general limitations of the backpropagation rule?
A) local minima problem
B) slow convergence
C) scaling
D) all of the mentioned
Q. 40) Is backpropagation learning based on gradient descent along the error surface?
A) yes
B) no
C) cannot be said
D) it depends on gradient descent but not error surface
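Backpropagation (Q.38-Q.40) is gradient descent on the error surface: the chain rule is applied backward from the error to each weight. A minimal sketch on a 1-1-1 sigmoid network, with illustrative weights and input, checked against a numerical gradient:

```python
import math

# Tiny 1-1-1 network: y = s(w2 * s(w1 * x)) with sigmoid s,
# squared error E = 0.5 * (y - t)**2.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop(x, t, w1, w2):
    # forward pass
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # backward pass (chain rule); note sigmoid'(z) = s(z) * (1 - s(z))
    delta_out = (y - t) * y * (1 - y)          # dE/d(net input of output)
    dE_dw2 = delta_out * h
    dE_dw1 = delta_out * w2 * h * (1 - h) * x
    return dE_dw1, dE_dw2

def error(x, t, w1, w2):
    return 0.5 * (sigmoid(w2 * sigmoid(w1 * x)) - t) ** 2

# Sanity check: analytic gradient vs. central-difference numerical gradient.
g1, _ = backprop(1.0, 1.0, 0.5, -0.3)
eps = 1e-6
num_g1 = (error(1.0, 1.0, 0.5 + eps, -0.3)
          - error(1.0, 1.0, 0.5 - eps, -0.3)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)  # True: backprop reproduces the gradient
```

Training then updates each weight by `w -= lr * dE_dw`, which is exactly the gradient descent of Q.40.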
Q. 41) Activation value is associated with?
A) Potential at synapses
B) cell membrane potential
C) all of the mentioned
D) none of the mentioned

Q. 42) In activation dynamics, is the output function bounded?


A) Yes
B) No

Q. 43) What’s the actual reason behind the boundedness of the output function in activation
dynamics?
A) limited neural fluid
B) limited fan in capacity of inputs
C) both limited neural fluid & fan in capacity
D) none of the mentioned

Q. 44) What is the noise saturation dilemma?


A) at saturation state neuron will stop working, while biologically it’s not feasible
B) how can a neuron with limited operating range be made sensitive to nearly unlimited range of
inputs
C) can be either way
D) none of the mentioned

Q. 45) What is structural stability?


A) when both synaptic & activation dynamics are simultaneously used & are in equilibrium
B) when only synaptic dynamics is in equilibrium
C) when only synaptic & activation dynamics are used
D) none of the mentioned

Q. 46) What is global stability?


A) when both synaptic & activation dynamics are simultaneously used & are in equilibrium
B) when only synaptic dynamics is in equilibrium
C) when only synaptic & activation dynamics are used
D) none of the mentioned

Q. 47) Which models belong to the main subcategories of activation models?


A) additive & subtractive activation models
B) additive & shunting activation models
C) subtractive & shunting activation models
D) all of the mentioned

Q. 48) Who proposed the shunting activation model?


A) Rosenblatt
B) Hopfield
C) Perkel
D) Grossberg
Q. 49) What was the goal of shunting activation model?
A) to make system dynamic
B) to keep operating range of activation value to a specified range
C) to make system static
D) can be either for dynamic or static, depending on inputs

Q. 50) Activation models are?


A) Static
B) Dynamic
C) Deterministic
D) None of the Mentioned

Q. 51) Which of these robot applications can be built with a single-layer feedforward network?
A) wall following
B) wall climbing
C) gesture control
D) rotating arm and legs

Q. 52) Which is the most direct application of neural networks?


A) vector quantization
B) pattern mapping
C) pattern classification
D) rotating arm and legs

Q. 53) What are pros of neural networks over computers?


A) they have the ability to learn by examples
B) they have real time high computational rates
C) they have more tolerance
D) all of the mentioned

Q. 54) What is true about single layer associative neural networks?


A) neural networks are artificial copy of the human brain
B) neural networks have higher computational rates than conventional computers
C) neural networks learn by examples
D) performs pattern recognition

Q. 55) Which of the following is false?


A) neural networks are artificial copy of the human brain
B) neural networks have higher computational rates than conventional computers
C) neural networks learn by examples
D) none of the mentioned

Q. 56) What is the use of MLFFNN?


A) to realize the structure of MLP
B) to solve pattern classification problem
C) to solve pattern mapping problem
D) to realize an approximation to a MLP

Q. 57) Pattern recall takes more time for which of the following?


A) MLFFNN
B) Basis function
C) Equal for both MLFFNN and basis function
D) None of the mentioned

Q. 58) In which type of network is training completely avoided?


A) GRNN
B) PNN
C) GRNN and PNN
D) None of the mentioned

Q. 59) What does GRNN do?


A) function approximation task
B) pattern classification task
C) function approximation and pattern classification task
D) None of the mentioned

Q. 60) What does a basic counterpropagation network consist of?


A) a feedforward network only
B) a feedforward network with hidden layer
C) two feedforward networks with hidden layers
D) None of the mentioned

Short Answer Questions (3 Marks each)


Unit: I, II & III
Q.1 Describe Bayesian learning.
Q.2 Explain linear machine.
Q.3 Write a short note on Gradient Descent.
Q.4 Discuss Multilayer perceptron.
Q.5 Analyze building blocks of CNN.
Q.6 Elaborate the term transfer learning.
Q.7 Write a note on CNN.
Q.8 Write a note on Autoencoders.
Q.9 Explain the working of the AND function using a neural network.
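For Q.9 above, a single threshold neuron suffices for AND. The weights and threshold below are one common textbook choice, not the only valid one:

```python
# A single threshold neuron computing logical AND. With w1 = w2 = 1 and
# threshold 1.5, the weighted sum exceeds the threshold only when both
# inputs are 1.
def and_neuron(x1, x2, w1=1.0, w2=1.0, threshold=1.5):
    activation = w1 * x1 + w2 * x2             # weighted sum of inputs
    return 1 if activation > threshold else 0  # step activation function

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", and_neuron(x1, x2))  # only (1, 1) gives 1
```

OR works the same way with a lower threshold (e.g. 0.5); XOR, by contrast, is not linearly separable and needs more than one layer.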

Long Answer Questions (8 Marks each)

Unit: I, II & III


Q.1 Explain deep learning with applications.

Q.2 For the given set of feature vectors, calculate the covariance matrices and prove that the decision boundary is linear.
Q.3 Describe SVM.
Q.4 Enlist and describe optimization techniques.
Q.5 Explain the natural neural network (human brain).
Q.6 Describe back propagation learning in detail.
