
MODEL TEST PAPER – 1

(ANSWERS)

SECTION A – OBJECTIVE TYPE QUESTIONS


1. Answer any 4 out of the given 6 questions on Employability Skills.
A. (c) B. (c) C. (b) D. (c) E. (a), (b) and (d) F. (a)
2. Answer any 5 out of the given 6 questions.
A. (c) B. (b) C. (b) D. (d) E. (c) F. (c)
3. Answer any 5 out of the given 6 questions.
A. (a) B. (b) C. (d) D. (b) E. (b) F. (b)
4. Answer any 5 out of the given 6 questions.
A. (a) B. (b) C. (a) D. (b) E. (b) F. (d)
5. Answer any 5 out of the given 6 questions.
A. (b) B. (a) C. (b) D. (b) E. (a) F. (d)

SECTION B – SUBJECTIVE TYPE QUESTIONS


Answer any 3 out of the given 5 questions on Employability Skills.
6. The 7Cs of communication ensure effective communication, both written and oral. These are:
(a) Completeness
(b) Conciseness
(c) Consideration
(d) Clarity
(e) Concreteness
(f) Courtesy
(g) Correctness
7. All the input and output devices that are connected directly to the computer but are not part of the key
processing components are called peripheral devices, for example, Keyboard, Mouse, Speaker, Printer,
Joystick, Monitor, etc.
8. Disk management tools are utility software that perform all the necessary disk-related functions to ensure
smooth functioning of the system, for example, disk defragmentation, disk partitioning, disk formatting, disk
cleaning, etc.
9. Self-management is the ability to control and manage our emotions, thoughts and behaviour according to the
situation. It helps in:
(a) Handling ourselves and working towards achieving our goals.
(b) Dealing with difference of opinion and handling unpleasant situations.
10. Ecological imbalance is caused when natural or man-made disturbances affect the natural balance of an ecosystem.
Some of the factors that cause ecological imbalance are:
(a) Deforestation
(b) Soil Erosion
(c) Overexploitation of Resources
(d) Environmental pollution
(e) Irregular land use
(f) E-waste
Ways to manage ecological balance include:
(a) Planting more trees
(b) Recycling waste products
(c) Careful use of natural resources

Model Test Papers (Answers) 1


Answer any 4 out of the given 6 questions in 20-30 words each.
11. Stemming: Stemming is the process of removing affixes from words to reduce them to their stems or base
forms. For example,
Word      Affix     Stem
Caring    -ing      Car
Lemmatization: Lemmatization is the process of extracting a ‘lemma’, which is a meaningful root word, after
affix removal. For example,
Word      Affix     Lemma
Caring    -ing      Care
Stemming and Lemmatization are similar in nature as both remove affixes from words, but the difference
between them is that lemmatization always produces a meaningful word, unlike stemming. Stemming is
faster because it does not check whether the stemmed word is meaningful.
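The difference can be sketched in a few lines of Python; the affix list and the lemma dictionary below are illustrative stand-ins, not a real stemmer or lemmatizer library:

```python
def simple_stem(word, affixes=("ing", "ed", "ly", "es", "s")):
    """Naive stemmer: strip the first matching affix and keep whatever
    remains, meaningful or not (this is what makes stemming fast but crude)."""
    word = word.lower()
    for affix in affixes:
        if word.endswith(affix) and len(word) > len(affix) + 1:
            return word[: -len(affix)]
    return word

# A toy lemmatizer needs a dictionary of known lemmas to return real words.
LEMMAS = {"caring": "care", "studies": "study", "better": "good"}

def simple_lemma(word):
    return LEMMAS.get(word.lower(), simple_stem(word))

print(simple_stem("caring"))   # car  (not a meaningful word)
print(simple_lemma("caring"))  # care (a meaningful word)
```

A real system would use library stemmers and lemmatizers, but the contrast is the same: the stemmer blindly strips the affix, while the lemmatizer looks the word up.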
12. Accuracy: Accuracy is the proportion of correctness in a classification model. It is defined as the percentage
of correct predictions out of all the observations. Accuracy explains how often, overall, the model makes a
correct prediction.
     Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision: Precision is the positive predictive value or the fraction of the positive predictions that are actually
positive.
     Precision = TP / (TP + FP)
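The two formulas can be checked with a short Python sketch (the counts below are illustrative):

```python
# Accuracy and Precision computed from confusion-matrix counts.
def accuracy(tp, tn, fp, fn):
    # fraction of all predictions that were correct
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # fraction of positive predictions that were actually positive
    return tp / (tp + fp)

# e.g. 40 true positives, 30 true negatives, 20 false positives, 10 false negatives:
print(accuracy(40, 30, 20, 10))   # 0.7
print(round(precision(40, 20), 3))  # 0.667
```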
13. Chatbots are AI applications that use NLP to assist humans and communicate through voice or text or both.
The two types of chatbots are:
Simple chatbots: Simple chatbots are task-specific and have limited capabilities. They work on pre-written
keywords and each command is coded as a set of rules by the developer. They are easy to build.
Smart chatbots: Smart chatbots are AI-enabled chatbots. They are not fully pre-programmed but learn over
time by picking up keywords, and help users by providing the most relevant answers to their queries. They are
much harder to build and implement and need a lot of data to learn from, for example, Virtual Assistants.
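A simple chatbot of the first kind can be sketched as a set of keyword rules; the keywords and replies below are invented for illustration:

```python
# Simple (rule-based) chatbot: every command is a pre-written keyword rule
# coded by the developer. The rules here are illustrative.
RULES = {
    "hello": "Hi! How can I help you?",
    "hours": "We are open 9 am to 5 pm, Monday to Friday.",
    "bye": "Goodbye!",
}

def reply(message):
    msg = message.lower()
    for keyword, answer in RULES.items():
        if keyword in msg:          # first matching keyword wins
            return answer
    return "Sorry, I did not understand that."

print(reply("Hello there"))          # Hi! How can I help you?
print(reply("What are your hours?")) # We are open 9 am to 5 pm, Monday to Friday.
```

A smart chatbot would instead learn these associations from data rather than from hand-written rules.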
14. HSL and HSV are useful for identifying contrast in images represented in an RGB model. HSL stands for Hue,
Saturation and Lightness while HSV stands for Hue, Saturation and Value, where Hue is the actual colour,
Saturation is the purity of the colour (how much grey is mixed into it), and Value is the brightness of the pixel.
In HSL, a colour with maximum lightness is pure white, but in HSV, maximum value or brightness is like shining
a bright white light on a coloured object, so the colour itself remains visible.
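The two models can be compared with Python's standard colorsys module; note how pure red has full value in HSV but only half lightness in HSL:

```python
import colorsys

# colorsys converts between colour models; RGB components are in the range 0..1.
r, g, b = 1.0, 0.0, 0.0                   # pure red
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)                            # 0.0 1.0 1.0 -> red hue, fully saturated, full brightness
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)  # note: HLS order is hue, lightness, saturation
print(l)                                  # 0.5 -> in HSL pure red sits at lightness 0.5; 1.0 would be white
```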
15. The key differences between Supervised learning and Unsupervised learning are:
(a) Supervised learning deals with the labelled data where the output data patterns are known to the
machine. Unsupervised learning works with unlabelled data and a machine has to come up with the
underlying relationship between input and output through pattern recognition.
(b) Supervised learning is less complicated than Unsupervised learning and requires less processing.
(c) The outcome of Supervised learning is more accurate and reliable as compared to Unsupervised learning.
16. Python command to write an image is: cv2.imwrite()
Python command to read an image is: cv2.imread()
Answer any 3 out of the given 5 questions in 50–80 words each.
17. AI can help solve the issue of world hunger in the following ways:
(a) AI systems could be deployed for smarter and more efficient food production. For example, AI can help in
finding the right seeds that generate the highest yields and process the growth, genetic and environmental
data.
(b) AI can help segregate the various types of food. For example, TOMRA sorting is a sensor-based solution
that uses AI to separate food into ‘good’ and ‘bad’ and evaluates the best way to reduce waste.

Essentials of Artificial Intelligence – X


AI can help in gender equality in the following ways:
(a) AI can help reduce discrimination based on gender, ethnicity, disability, etc.
(b) AI using Machine Learning algorithms can be used to analyze data based on academics and experience
to find the causes behind gender inequality.
18. The 4Ws problem canvas helps in identifying the key elements related to problems. Here, we are creating 4Ws
problem canvas on the basis of given scenario:
A. WHO: Who is having the problem?
(i) Who are the stakeholders?
Ans. Here, the IT Company Top Management, investors, customers, employees and customer support
executives are the stakeholders.
(ii) What do you know about them?
Ans. They are members of a project team, project managers, executives, customer support executives and
customers, who would be benefited from the solution.
B. WHAT: What is the nature of the problem?
(i) What is the problem?
Ans. The problem is with the classification of support tickets in an IT company. The support ticket refers
to the interaction between customers and support representatives regarding any issue.
(ii) How do you know it is a problem?
Ans. Here, the problem is regarding classification of support tickets according to priority like Urgent,
Business and Important. The tickets are not routed to the right team thus leading to delay in response
time.
C. WHERE: Where does the problem arise?
(i) What is the context/situation in which the stakeholders experience the problem?
Ans. The problem arises when customers look for assistance according to their support tickets like Urgent,
Important and Business and representatives fail to route those tickets to the concerned departments
leading to delayed response time.
D. WHY: Why do you believe it is a problem worth solving?
(i) What would be the key value to the stakeholders?
Ans. Slow response time leaves a bad impression about any business. So, in order to function smoothly
and continue achieving outstanding customer satisfaction levels, we need automated support ticket
classifier and router. In other words, we can use Chatbots to handle the transaction. Chatbots simplify
the process by intelligently categorizing tickets and routing to the agent group based on the type of
issue. This is also known as Robotic Process Automation (RPA).
(ii) How would it improve their situation?
Ans. The benefits of using chatbots in support ticket system are:
(a) It improves overall response time for users’ queries and provides 24×7 availability.
(b) A faster ticket resolution rate leads to improved customer satisfaction.
19. The Confusion Matrix is used to record the result of comparison between Prediction and Reality. It helps to
understand the prediction results and their interpretation. The possible situations in the given scenario are:
Case – 1

Are the employees arriving in office on time?
Prediction: Yes Reality: Yes
Result: True Positive

Here, the model predicts a Yes which means that the employees arrive in office on time. The Prediction
matches with the Reality. Hence, this condition is termed as True Positive.



Case – 2

Are the employees arriving in office on time?
Prediction: No Reality: No
Result: True Negative

Here, the model predicts a No which means that the employees do not arrive in office on time. The prediction
matches with the Reality. Hence, this condition is termed as True Negative.
Case – 3

Are the employees arriving in office on time?
Prediction: Yes Reality: No
Result: False Positive

Here, the Reality is that the employees do not arrive in office on time but the machine has incorrectly predicted
that the employees arrive on time. Hence, this condition is termed as False Positive.
Case – 4

Are the employees arriving in office on time?
Prediction: No Reality: Yes
Result: False Negative

Here, the Reality is that the employees arrive in office on time but the machine has incorrectly predicted it as
a No. Hence, this condition is termed as False Negative.
The Confusion Matrix
                         Reality: Yes             Reality: No
Prediction: Yes          True Positive (TP)       False Positive (FP)
Prediction: No           False Negative (FN)      True Negative (TN)
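Tallying the four conditions from lists of predictions and realities can be sketched as follows; the yes/no lists are illustrative, not taken from the question:

```python
# Count TP, TN, FP, FN by comparing each prediction with reality.
prediction = ["Yes", "No", "Yes", "No", "Yes"]
reality    = ["Yes", "No", "No",  "Yes", "Yes"]

counts = {"TP": 0, "TN": 0, "FP": 0, "FN": 0}
for p, r in zip(prediction, reality):
    if p == "Yes" and r == "Yes":
        counts["TP"] += 1          # prediction matches a real Yes
    elif p == "No" and r == "No":
        counts["TN"] += 1          # prediction matches a real No
    elif p == "Yes" and r == "No":
        counts["FP"] += 1          # machine wrongly predicted Yes
    else:
        counts["FN"] += 1          # machine wrongly predicted No

print(counts)                      # {'TP': 2, 'TN': 1, 'FP': 1, 'FN': 1}
```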
20. Bag of Words is an NLP model that helps in extracting features from the corpus. It simply counts the
occurrences of each word to build a vocabulary for the corpus.
According to the sample text given in the question, the Bag of Words can be calculated using the following
steps:
Step 1: Collecting data and pre-processing: In this step, we convert text to lower case and remove all
punctuations and stopwords. After text normalization, the text becomes:
[every, image, made, up, pixels]
[pixels, smallest, units, information, usually, round, square]
[word, pixel, simply, means, picture, element]
[just, like, wall, made, up, bricks, stacked, next, each, other, coloured, differently, similarly, image,
made, up, small, components, called, pixels, each, pixel, takes, up, particular, part, image]
[more, pixels, we, have, more, closely, image, resembles, original, considered, better, sharper]
[we, can, see, example, actual, picture, pixelated, version, image]
Step 2: Creating dictionary: In this step, repeated words in the document are written once after removing all
the stopwords.
every image made up pixel small unit information usual
round square word simple mean picture element just like
stack next wall brick each other colour different similar
component called take particular part more we have close
resemble original consider better sharp see example actual version



Step 3: Creating document vector: In this step, we go through each line of the document and put a 1 against
every dictionary word it contains, incrementing the value by 1 each time the same word appears again.
If a word does not occur in the line, we put a 0 for it.
Going through the six lines of the document in this way, we get the following document vectors (one row per
line, with the total occurrences of each word across the corpus in the last row):

         every  image  made  up  pixel  small  unit  information  usual
Line 1     1      1      1    1    1      0      0        0         0
Line 2     0      0      0    0    1      1      1        1         1
Line 3     0      0      0    0    1      0      0        0         0
Line 4     0      2      2    3    2      1      0        0         0
Line 5     0      1      0    0    1      0      0        0         0
Line 6     0      1      0    0    0      0      0        0         0
Total      1      5      3    4    6      2      1        1         1

         round  square  word  simple  mean  picture  element  just  like
Line 1     0      0      0      0      0       0        0       0     0
Line 2     1      1      0      0      0       0        0       0     0
Line 3     0      0      1      1      1       1        1       0     0
Line 4     0      0      0      0      0       0        0       1     1
Line 5     0      0      0      0      0       0        0       0     0
Line 6     0      0      0      0      0       1        0       0     0
Total      1      1      1      1      1       2        1       1     1

         stack  next  wall  brick  each  other  colour  different  similar
Line 1     0     0     0     0      0     0       0        0          0
Line 2     0     0     0     0      0     0       0        0          0
Line 3     0     0     0     0      0     0       0        0          0
Line 4     1     1     1     1      2     1       1        1          1
Line 5     0     0     0     0      0     0       0        0          0
Line 6     0     0     0     0      0     0       0        0          0
Total      1     1     1     1      2     1       1        1          1

         component  called  take  particular  part  more  we  have  close
Line 1       0        0      0       0         0     0    0    0      0
Line 2       0        0      0       0         0     0    0    0      0
Line 3       0        0      0       0         0     0    0    0      0
Line 4       1        1      1       1         1     0    0    0      0
Line 5       0        0      0       0         0     2    1    1      1
Line 6       0        0      0       0         0     0    1    0      0
Total        1        1      1       1         1     2    2    1      1

         resemble  original  consider  better  sharp  see  example  actual  version
Line 1       0        0         0        0       0     0      0       0        0
Line 2       0        0         0        0       0     0      0       0        0
Line 3       0        0         0        0       0     0      0       0        0
Line 4       0        0         0        0       0     0      0       0        0
Line 5       1        1         1        1       1     0      0       0        0
Line 6       0        0         0        0       0     1      1       1        1
Total        1        1         1        1       1     1      1       1        1
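The steps above can be sketched in Python; the mini-corpus below uses just the first two normalized lines so the output stays small:

```python
from collections import Counter

# Bag of Words over a two-line mini-corpus (tokens from the first two
# normalized lines, with plurals already reduced to dictionary form).
corpus = [
    ["every", "image", "made", "up", "pixel"],
    ["pixel", "small", "unit", "information"],
]

vocabulary = sorted({w for line in corpus for w in line})
vectors = [[Counter(line)[w] for w in vocabulary] for line in corpus]
totals = [sum(col) for col in zip(*vectors)]

print(vocabulary)  # ['every', 'image', 'information', 'made', 'pixel', 'small', 'unit', 'up']
print(vectors)     # [[1, 1, 0, 1, 1, 0, 0, 1], [0, 0, 1, 0, 1, 1, 1, 0]]
print(totals)      # [1, 1, 1, 1, 2, 1, 1, 1]
```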
TF-IDF is an information retrieval technique that weighs a term’s frequency (TF) against its inverse document
frequency (IDF). Each word in the document has its respective TF and IDF score, and their product gives the
TF-IDF score. The higher a word’s TF-IDF score, the more important the word is to the document: it occurs
frequently in this document but is rare elsewhere in the corpus.
To calculate TF, the formula is:
     TF(t, d) = (number of occurrences of t in document d) / (total number of words in document d)



Here, we take some of the words from the Bag of Words, from higher to lower frequency.

Word      TF              IDF                  TF × IDF
pixel     6/101 = 0.059   log (101/6) = 1.23   0.073
image     5/101 = 0.050   log (101/5) = 1.31   0.065
small     2/101 = 0.020   log (101/2) = 1.70   0.034
each      2/101 = 0.020   log (101/2) = 1.70   0.034
sharp     1/101 = 0.010   log (101/1) = 2.00   0.020
The differences between Bag of Words and TF-IDF are:
(i) Bag of Words creates a set of vectors containing the count of each word’s occurrences in the document,
while TF-IDF additionally weighs each word, separating the more important words from the less important
ones.
(ii) Bag of Words is easy to implement, while TF-IDF involves more complex calculations.
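A sketch of the standard TF-IDF computation over a tiny illustrative corpus, using log base 10 as in the worked table (the three documents below are invented for illustration):

```python
import math

# Each document is a list of normalized tokens.
docs = [
    ["image", "made", "of", "pixel"],
    ["pixel", "small", "unit"],
    ["image", "pixel", "element"],
]

def tf(term, doc):
    # term frequency within one document
    return doc.count(term) / len(doc)

def idf(term, docs):
    # inverse document frequency across the corpus
    df = sum(1 for d in docs if term in d)   # documents containing the term
    return math.log10(len(docs) / df)

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

# 'pixel' appears in every document, so its IDF (and hence TF-IDF) is 0:
print(round(tf_idf("pixel", docs[0], docs), 3))   # 0.0
# 'small' appears in only one document, so it scores higher:
print(round(tf_idf("small", docs[1], docs), 3))   # 0.159
```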
21. A Convolutional Neural Network is an algorithm which takes an input image, assigns learnable weights and
biases to various objects in the image and identifies them. A CNN consists of the following layers:

[Figure: CNN architecture. Input → Convolution + ReLU → Pooling → Convolution + ReLU → Pooling →
Flatten → Fully Connected → Softmax. The convolution and pooling stages perform feature learning; the
remaining stages perform classification into labels such as Car, Truck, Van and Bicycle.]


(i) Convolutional Layer: In this layer, we extract the high-level features such as edges, colour, gradient
orientation, etc., from the input image. There could be more than one convolutional layer as well. The
output of this layer is called feature map or activation map, through which we can reduce the image size
or focus only on the features of the image for efficient processing.
(ii) Rectified Linear Unit (ReLU): This layer replaces all negative values with zero and keeps positive values
unchanged, using the function max(0, x). The advantage of using ReLU is that it does not activate all neurons
at the same time.
(iii) Pooling Layer: It is responsible for reducing the spatial size of the feature map produced by the
convolutional layer while retaining its important features. The pooling layer makes the representation smaller
and more resistant to small transformations, distortions and translations in the input image. The two types of
pooling are:
• Max Pooling: It returns the maximum value from the region of the image covered by the kernel. It
gives the most prominent features of the feature map.
• Average Pooling: It returns the average value from the region of the image covered by the kernel.
(iv) Fully Connected Layer: It takes the results of the convolution/pooling process and uses them to classify
the image into a label. In this layer, the neurons have a complete connection to all the activations from
the previous layers. The output of this layer is a single vector of values, each representing a probability
that a certain feature belongs to a label.
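Max pooling and average pooling can be sketched in a few lines of pure Python, with stride 2 over a small illustrative feature map:

```python
# 2x2 pooling with stride 2: apply reduce_fn to each non-overlapping window.
def pool(feature_map, reduce_fn, size=2):
    rows, cols = len(feature_map), len(feature_map[0])
    return [
        [
            reduce_fn(
                feature_map[r + dr][c + dc]
                for dr in range(size)
                for dc in range(size)
            )
            for c in range(0, cols, size)
        ]
        for r in range(0, rows, size)
    ]

fmap = [
    [1, 3, 2, 1],
    [4, 6, 5, 0],
    [7, 2, 9, 8],
    [1, 0, 3, 4],
]
print(pool(fmap, max))                     # max pooling: [[6, 5], [7, 9]]
print(pool(fmap, lambda xs: sum(xs) / 4))  # average pooling: [[3.5, 2.0], [2.5, 6.0]]
```

Note how each 2×2 window of the 4×4 feature map collapses to one value, halving each spatial dimension.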
