DATA SCIENCE INTERVIEW QUESTIONS
Data analytics tools include data mining, data modelling, database management and data analysis. Machine Learning, Hadoop, Java, Python, software development, etc., are the tools of Data Science.
Here's a list of the most popular technical data science interview questions you can expect to face, along with how to frame your answers.
Logistic regression measures the relationship between the dependent variable (our
label of what we want to predict) and one or more independent variables (our features)
by estimating probability using its underlying logistic function (sigmoid).
The formula for the sigmoid function, whose graph is the familiar S-shaped curve, is shown below.
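The sigmoid function is S(z) = 1 / (1 + e^(-z)); a minimal Python sketch (the input values below are only illustrative):

import math

def sigmoid(z):
    # Maps any real-valued input to a probability between 0 and 1
    return 1 / (1 + math.exp(-z))

print(sigmoid(0))    # 0.5
print(sigmoid(2))    # about 0.88
print(sigmoid(-2))   # about 0.12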
3. Explain the steps in making a decision tree.
1. Take the entire data set as input.
2. Calculate the entropy of the target variable, as well as the predictor attributes.
3. Calculate the information gain of all attributes (we gain information on sorting different objects from each other).
4. Choose the attribute with the highest information gain as the root node.
5. Repeat the same procedure on every branch until the decision node of each branch is finalized.
For example, let's say you want to build a decision tree to decide whether you should accept or decline a job offer; the tree would be grown by splitting on the attributes of the offer in exactly this way.
A random forest is built up of a number of decision trees. If you split the data into
different packages and make a decision tree in each of the different groups of data, the
random forest brings all those trees together.
1. Randomly select 'k' features from a total of 'm' features where k << m
2. Among the 'k' features, calculate the node D using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build forest by repeating steps one to four for 'n' times to create 'n' number of trees
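As a minimal sketch of this procedure in practice, using scikit-learn's implementation rather than coding the steps by hand (the data below is hypothetical):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(100, 5)                 # hypothetical features (m = 5)
y = np.random.randint(0, 2, size=100)      # hypothetical binary labels

# n_estimators is the number of trees ('n'); max_features limits the
# features considered at each split ('k' out of 'm')
model = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
model.fit(X, y)
print(model.predict(X[:5]))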
Overfitting refers to a model that fits only a very small amount of data (including its noise) and ignores the bigger picture. There are three main methods to avoid overfitting:
1. Keep the model simple: take fewer variables into account, thereby removing some of the noise in the training data.
2. Use cross-validation techniques, such as k-fold cross-validation.
3. Use regularization techniques, such as LASSO, that penalize model parameters likely to cause overfitting.
Univariate
Univariate data contains only one variable. The purpose of the univariate analysis is to
describe the data and find patterns that exist within it.
Example of univariate data (a single column of values, e.g., heights in cm): 164, 167.3, 170, 174.2, 178, 180
The patterns can be studied by drawing conclusions using mean, median, mode,
dispersion or range, minimum, maximum, etc.
Bivariate
Bivariate data involves two different variables. The analysis of this type of data deals
with causes and relationships and the analysis is done to determine the relationship
between the two variables.
Temperature    Sales
20             2,000
25             2,100
26             2,300
28             2,400
30             2,600
36             3,100
Here, the relationship is visible from the table: temperature and sales are directly proportional to each other. The hotter the temperature, the higher the sales.
Multivariate
Multivariate data involves three or more variables; for example, a house-price data set where each row lists several attributes of a house along with its price:
2    0    900      $400,000
3    2    1,100    $600,000
The patterns can be studied by drawing conclusions using mean, median, and mode,
dispersion or range, minimum, maximum, etc. You can start describing the data and
using it to guess what the price of the house will be.
7. What are the feature selection methods used to select the right variables?
There are two main methods for feature selection: filter methods and wrapper methods.
Filter Methods
This involves:
ANOVA
Chi-Square
The best analogy for selecting features is "bad data in, bad answer out." When we're
limiting or selecting the features, it's all about cleaning up the data coming in.
Wrapper Methods
This involves:
Forward Selection: We test one feature at a time and keep adding them until we get
a good fit
Backward Selection: We test all the features and start removing them to see what
works better
Recursive Feature Elimination: Recursively looks through all the different features
and how they pair together
Wrapper methods are very labor-intensive, and high-end computers are needed if a lot
of data analysis is performed with the wrapper method.
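As a minimal sketch of one wrapper method, recursive feature elimination, with scikit-learn (the data set and estimator choice here are hypothetical):

import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 8)                 # hypothetical features
y = np.random.randint(0, 2, size=200)      # hypothetical binary labels

# Recursively drop the weakest features until only three remain
selector = RFE(estimator=LogisticRegression(), n_features_to_select=3)
selector.fit(X, y)
print(selector.support_)                   # boolean mask of the selected features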
8. In your choice of language, write a program that prints the numbers ranging
from one to 50.
But for multiples of three, print "Fizz" instead of the number, and for the multiples of five,
print "Buzz." For numbers which are multiples of both three and five, print "FizzBuzz"
Note that range(1, 51) produces the numbers one through 50; using range(51) would start from zero, which is not what the question asks for. That is why the code above uses range(1, 51).
If the data set is large, we can just simply remove the rows with missing data values. It
is the quickest way; we use the rest of the data to predict the values.
For smaller data sets, we can substitute missing values with the mean of the rest of the data using a pandas DataFrame in Python, for example with df.fillna(df.mean()).
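A minimal pandas sketch of this substitution (the DataFrame below is hypothetical):

import pandas as pd
import numpy as np

df = pd.DataFrame({"age": [25, np.nan, 30, 28], "salary": [50000, 62000, np.nan, 58000]})
df_filled = df.fillna(df.mean())       # replace missing values with each column's mean
print(df_filled)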
10. For the given points, how will you calculate the Euclidean distance in
Python?
plot1 = [1,3]
plot2 = [2,5]
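One way to compute it in Python, using the math module:

import math

plot1 = [1, 3]
plot2 = [2, 5]
euclidean_distance = math.sqrt((plot1[0] - plot2[0]) ** 2 + (plot1[1] - plot2[1]) ** 2)
print(euclidean_distance)    # sqrt(5), about 2.24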
Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) that still conveys similar information concisely.
This reduction helps in compressing data and reducing storage space. It also reduces
computation time as fewer dimensions lead to less computing. It removes redundant
features; for example, there's no point in storing a value in two different units (meters
and inches).
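As a minimal sketch of one common dimensionality reduction technique, principal component analysis (PCA), with scikit-learn (the data here is hypothetical):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 10)            # hypothetical data with 10 dimensions
pca = PCA(n_components=3)              # keep the 3 components that explain the most variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                 # (100, 3)
print(pca.explained_variance_ratio_)   # share of variance captured by each component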
12. How will you calculate eigenvalues and eigenvectors of the following 3x3
matrix?
-2 -4 2
-2 1 2
4 2 5
The characteristic equation is det(A − λI) = 0. Expanding the determinant gives:
−λ³ + 4λ² + 27λ − 90 = 0, i.e. λ³ − 4λ² − 27λ + 90 = 0
Checking λ = 3: 3³ − 4(3²) − 27(3) + 90 = 27 − 36 − 81 + 90 = 0
Hence, (λ − 3) is a factor:
λ³ − 4λ² − 27λ + 90 = (λ − 3)(λ² − λ − 30) = (λ − 3)(λ − 6)(λ + 5)
so the eigenvalues are λ = 3, 6, and −5.
To find the eigenvector for λ = 3, solve (A − 3I)v = 0 for v = (X, Y, Z). Setting X = 1:
−5 − 4Y + 2Z = 0
−2 − 2Y + 2Z = 0
Subtracting the second equation from the first gives Y = −(3/2), and substituting back gives Z = −(1/2), so an eigenvector for λ = 3 is (1, −3/2, −1/2).
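As a quick numerical cross-check, a NumPy sketch:

import numpy as np

A = np.array([[-2, -4, 2],
              [-2, 1, 2],
              [4, 2, 5]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)      # approximately 3, -5 and 6 (order may vary)
print(eigenvectors)     # columns are the corresponding unit-length eigenvectors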
13. How should you maintain a deployed model?
The steps to maintain a deployed model are:
Monitor
Constant monitoring of all models is needed to determine their performance accuracy.
Evaluate
Evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
Compare
The new models are compared to each other to determine which model performs the
best.
Rebuild
The best-performing model is re-built on the current state of data.
14. What are recommender systems?
A recommender system predicts how a user would rate a specific product based on their preferences. It can be split into two different areas:
Collaborative Filtering
As an example, Last.fm recommends tracks that other users with similar interests play
often. This is also commonly seen on Amazon after making a purchase; customers may
notice the following message accompanied by product recommendations: "Users who
bought this also bought…"
Content-based Filtering
As an example: Pandora uses the properties of a song to recommend music with similar
properties. Here, we look at content, instead of looking at who else is listening to music.
15. How do you find RMSE and MSE in a linear regression model?
RMSE and MSE are two of the most common measures of accuracy for a linear
regression model.
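MSE is the mean of the squared differences between the predicted and actual values, and RMSE is its square root. A minimal NumPy sketch (the values below are hypothetical):

import numpy as np

y_actual = np.array([3.0, 5.0, 7.5, 10.0])     # hypothetical actual values
y_predicted = np.array([2.8, 5.3, 7.2, 10.4])  # hypothetical model predictions

mse = np.mean((y_actual - y_predicted) ** 2)
rmse = np.sqrt(mse)
print(mse, rmse)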
16. How can you select k for k-means clustering?
We use the elbow method to select k for k-means clustering. The idea of the elbow method is to run k-means clustering on the data set for a range of values of k (the number of clusters) and, for each value, compute the within-cluster sum of squares (WSS): the sum of the squared distances between each member of a cluster and its centroid. The value of k at which the decrease in WSS levels off (the 'elbow') is chosen.
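A minimal sketch of the elbow method with scikit-learn (the data set here is hypothetical):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)                 # hypothetical two-dimensional data
wss = []
for k in range(1, 10):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wss.append(model.inertia_)             # inertia_ is the within-cluster sum of squares
print(wss)                                 # look for the 'elbow' where the decrease levels off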
17. What is the significance of p-value?
A p-value ≤ 0.05 indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
A p-value > 0.05 indicates weak evidence against the null hypothesis, so you fail to reject (i.e., you accept) the null hypothesis.
Example: height of an adult = abc ft. This cannot be true, as the height cannot be a
string value. In this case, outliers can be removed.
If the outliers have extreme values, they can be removed. For example, if all the data
points are clustered between zero to 10, but one point lies at 100, then we can remove
this point.
Try a different model. Data detected as outliers by linear models can be fit by
nonlinear models. Therefore, be sure you are choosing the correct model.
Try normalizing the data. This way, the extreme data points are pulled to a similar
range.
You can use algorithms that are less affected by outliers; an example would
be random forests.
A time series is stationary when its variance and mean are constant over time.
In the first graph, the variance is constant with time. Here, X is the time factor and Y is
the variable. The value of Y goes through the same points all the time; in other words, it
is stationary.
In the second graph, the waves get bigger, which means it is non-stationary and the
variance is changing with time.
From the confusion matrix, you can read the totals, actual values, and predicted values, and compute the accuracy:
Accuracy = (True Positives + True Negatives) / Total Observations = 0.93
21. Write the equation and calculate the precision and recall rate.
Precision = True Positives / (True Positives + False Positives) = 262 / 277 = 0.94
Recall = True Positives / (True Positives + False Negatives) = 262 / 288 = 0.90
For example, a sales page shows that a certain number of people buy a new phone and
also buy tempered glass at the same time. Next time, when a person buys a phone, he
or she may see a recommendation to buy tempered glass as well.
23. Write a basic SQL query that lists all orders with customer information.
Usually, we have order tables and customer tables that contain the following columns:
Order Table
Orderid
customerId
OrderNumber
TotalAmount
Customer Table
Id
FirstName
LastName
City
Country
SELECT OrderNumber, TotalAmount, FirstName, LastName, City, Country
FROM Order
JOIN Customer
ON Order.CustomerId = Customer.Id
24. You are given a dataset on cancer detection. You have built a
classification model and achieved an accuracy of 96 percent. Why shouldn't
you be happy with your model performance? What can you do about it?
Cancer detection data is highly imbalanced: if, say, only 4 percent of the cases are cancerous, a model that always predicts "no cancer" already reaches 96 percent accuracy while missing every true case. Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), and the F measure to determine the class-wise performance of the classifier.
25. Which of the following machine learning algorithms can be used for
inputting missing values of both categorical and continuous variables?
K-means clustering
Linear regression
K-NN (k-nearest neighbor)
Decision trees
The K nearest neighbor algorithm can be used because it can compute the nearest
neighbor and if it doesn't have a value, it just computes the nearest neighbor based on
all the other features.
When you're dealing with K-means clustering or linear regression, you need to handle the missing values in your pre-processing, otherwise those algorithms will fail. Decision trees have the same problem, although there is some variance.
26. Below are the eight actual values of the target variable in the train file.
What is the entropy of the target variable?
[0, 0, 0, 1, 1, 1, 1, 1]
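The target variable has five ones and three zeros out of eight values, so the entropy is −(5/8)·log2(5/8) − (3/8)·log2(3/8), which is approximately 0.95. A quick Python check:

import math

target = [0, 0, 0, 1, 1, 1, 1, 1]
p1 = sum(target) / len(target)     # probability of class 1 (5/8)
p0 = 1 - p1                        # probability of class 0 (3/8)
entropy = -(p1 * math.log2(p1) + p0 * math.log2(p0))
print(entropy)                     # about 0.954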
27. We want to predict the probability of death from heart disease based on
three risk factors: age, gender, and blood cholesterol level. What is the most
appropriate algorithm for this case?
1. Logistic Regression
2. Linear Regression
3. K-means clustering
4. Apriori algorithm
The most appropriate algorithm here is logistic regression (option 1), because we are predicting the probability of a binary outcome (death from heart disease) from a combination of predictor variables.
28. You have identified four specific types of individuals that are valuable to your study, and you want to find all users who are most similar to each type. Which algorithm is most appropriate for this study?
1. K-means clustering
2. Linear regression
3. Association rules
4. Decision trees
As we are looking to group people by four specific types of similarity, the number four indicates the value of k. Therefore, K-means clustering (option 1) is the most appropriate algorithm for this study.
29. You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true?
{grape, apple} must be a frequent itemset: each of the two rules implies that an itemset containing both apple and grape meets the minimum support, and every subset of a frequent itemset is itself frequent.
30. Your organization has a website where visitors randomly receive one of
two coupons. It is also possible that visitors to the website will not receive a
coupon. You have been asked to determine if offering a coupon to website
visitors has any impact on their purchase decisions. Which analysis method
should you use?
1. One-way ANOVA
2. K-means clustering
3. Association rules
4. Student's t-test
One-way ANOVA (option 1) is the appropriate method: there are three independent groups (coupon one, coupon two, and no coupon), and we want to test whether the average purchase outcome differs across more than two groups, which a t-test cannot handle.
31. What do you understand about true positive rate and false-positive rate?
The True Positive Rate (TPR) defines the probability that an actual positive will turn
out to be positive.
The True Positive Rate (TPR) is calculated as the ratio of True Positives (TP) to the sum of True Positives (TP) and False Negatives (FN):
TPR = TP / (TP + FN)
The False Positive Rate (FPR) defines the probability that an actual negative result
will be shown as a positive one i.e the probability that a model will generate a false
alarm.
The False Positive Rate (FPR) is calculated as the ratio of False Positives (FP) to the sum of True Negatives (TN) and False Positives (FP), i.e., the total number of actual negatives:
FPR = FP / (TN + FP)
The graph between the True Positive Rate on the y-axis and the False Positive Rate on
the x-axis is called the ROC curve and is used in binary classification.
The False Positive Rate (FPR) is calculated by taking the ratio between False Positives
and the total number of negative samples, and the True Positive Rate (TPR) is
calculated by taking the ratio between True Positives and the total number of positive
samples.
To construct the ROC curve, the TPR and FPR values are plotted at multiple threshold values. The area under the ROC curve (AUC) ranges between 0 and 1. A completely random model, represented by the diagonal straight line, has an AUC of 0.5. The amount by which a model's curve deviates from this straight line denotes the efficiency of the model.
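A minimal sketch of computing the points of an ROC curve and its AUC with scikit-learn (the labels and scores below are hypothetical):

from sklearn.metrics import roc_curve, auc

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                     # hypothetical actual labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5]   # hypothetical predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)     # FPR and TPR at each threshold
print(auc(fpr, tpr))                                  # area under the ROC curve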
34. How is Data Science different from traditional application programming?
The primary and vital difference between Data Science and traditional application
programming is that in traditional programming, one has to create rules to translate the
input to output. In Data Science, the rules are automatically produced from the data.
36. What is the difference between the long format data and wide format
data?
LONG FORMAT DATA: It contains values that repeat in the first column. In this format,
each row is a one-time point per subject.
WIDE FORMAT DATA: In the Wide Format Data, the data’s repeated responses will be
in a single row, and each response can be recorded in separate columns.
Example (wide format):
NAME    HEIGHT
RAMA    182
SITA    160
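A minimal pandas sketch of converting between the two formats, using a hypothetical DataFrame that mirrors the example above:

import pandas as pd

wide = pd.DataFrame({"NAME": ["RAMA", "SITA"], "HEIGHT": [182, 160]})
long = wide.melt(id_vars="NAME", var_name="ATTRIBUTE", value_name="VALUE")       # wide -> long
back_to_wide = long.pivot(index="NAME", columns="ATTRIBUTE", values="VALUE")     # long -> wide
print(long)
print(back_to_wide)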
37. Mention some techniques used for sampling. What is the main advantage of sampling?
Common sampling techniques include probability sampling methods such as simple random, systematic, cluster, and stratified sampling, and non-probability methods such as convenience, quota, and snowball sampling. The main advantage of sampling is that it lets you draw conclusions about a large population from a smaller, manageable subset of the data, saving time and resources.
Data Scientists and technical analysts must turn a huge amount of raw data into usable, clean data. Data cleaning includes removing malformed or corrupted records, outliers, inconsistent values, redundant formatting, etc. Pandas and Matplotlib are among the Python libraries most used for data cleaning and exploration.
TensorFlow
Pandas
NumPy
SciPy
Scrapy
Librosa
Matplotlib
Variance describes how the individual values in a data set spread around the mean; it captures the difference of each value from the mean value. Data Scientists use variance to understand the distribution of a data set.
Information gain is the expected reduction in entropy and determines how the decision tree is built: at each node, the attribute with the highest information gain is chosen for the split. For a parent node R and a set E of K training examples, information gain is calculated as the difference between the entropy before and after the split.
K-fold cross-validation is a procedure used to estimate a model's skill on new data. In k-fold cross-validation, every observation from the original dataset appears in both the training and the testing sets across the different folds. K-fold cross-validation estimates the accuracy but does not by itself improve it.
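A minimal sketch of k-fold cross-validation with scikit-learn (both the data and the choice of model here are hypothetical):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 3)                  # hypothetical features
y = np.random.randint(0, 2, size=100)       # hypothetical binary labels

scores = cross_val_score(LogisticRegression(), X, y, cv=5)   # 5-fold cross-validation
print(scores.mean())                        # average accuracy across the folds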
Normal Distribution is also known as the Gaussian Distribution. In a normal distribution, the data is concentrated around the mean, and the frequency falls off symmetrically on either side; when represented graphically, it appears as a bell curve. The distribution is described by its mean and standard deviation, and its median and mode coincide with the mean.
Deep Learning is one of the essential areas of Data Science and builds on statistics. Its algorithms are loosely modeled on the human brain: in Deep Learning, multiple layers are stacked between the raw input and the output to extract high-level features from the data.
An RNN (recurrent neural network) is an algorithm that works on sequential data. RNNs are used in language translation, voice recognition, image captioning, etc. There are different types of RNN architectures, such as one-to-one, one-to-many, many-to-one, and many-to-many. RNNs are used in Google's Voice Search and Apple's Siri.
The steps in making a decision tree can also be summarized as follows:
1. Take the entire data set as input.
2. Look for a split that maximizes the separation of the classes. A split is any test that divides the data into two sets.
3. Apply the split to the input data (divide step).
4. Re-apply steps one and two to the divided data.
5. Stop when you meet a stopping criterion.
6. This step is called pruning. Clean up the tree if you went too far doing splits.
Root cause analysis was initially developed to analyze industrial accidents but is now
widely used in other areas. It is a problem-solving technique used for isolating the root
causes of faults or problems. A factor is called a root cause if its removal from the problem-fault sequence prevents the final undesirable event from recurring.
Logistic regression is also known as the logit model. It is a technique used to forecast
the binary outcome from a linear combination of predictor variables.
Recommender systems are a subclass of information filtering systems that are meant to
predict the preferences or ratings that a user would give to a product.
53. Explain cross-validation.
The goal of cross-validation is to set aside part of the data to test the model during the training phase (i.e., a validation data set) in order to limit problems like overfitting and gain insight into how the model will generalize to an independent data set.
Most recommender systems use this filtering process to find patterns and information by
collaborating perspectives, numerous data sources, and several agents.
Gradient descent methods do not always converge to the same point, because in some cases they reach a local minimum or a local optimum point rather than the global optimum. This is governed by the data and the starting conditions.
This is statistical hypothesis testing for randomized experiments with two variables, A and B. The objective of A/B testing is to identify changes to a web page that maximize or improve the outcome of a strategy.
This describes the Law of Large Numbers: a theorem about the result of performing the same experiment a large number of times. It forms the basis of frequency-style thinking and states that the sample mean, sample variance, and sample standard deviation converge to the quantities they are trying to estimate.
These are extraneous variables in a statistical model that correlates directly or inversely
with both the dependent and the independent variable. The estimate fails to account for
the confounding factor.
It is a traditional database schema with a central table. Satellite tables map IDs to
physical names or descriptions and can be connected to the central fact table using the
ID fields; these tables are known as lookup tables and are principally useful in real-time
applications, as they save a lot of memory. Sometimes, star schemas involve several
layers of summarization to recover information faster.
Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing, or stretching, and the corresponding eigenvalues are the factors by which that stretching or compression occurs.
65. What are the types of biases that can occur during sampling?
1. Selection bias
2. Undercoverage bias
3. Survivorship bias
Survivorship bias is the logical error of focusing on aspects that support surviving a
process and casually overlooking those that did not because of their lack of
prominence. This can lead to wrong conclusions in numerous ways.
The underlying principle of this technique is that several weak learners combine to provide a strong learner. The steps involved are:
1. Build one model on a subset of the training data.
2. Use this model to make predictions on the entire data set.
3. Calculate the errors of these predictions.
4. Assign a higher weight to the observations that were predicted incorrectly.
5. Build the next model on the re-weighted data and repeat until the error stops decreasing.
This exhaustive list is sure to strengthen your preparation for data science interview
questions.
68. What is a bias-variance trade-off?
Bias is the error introduced by approximating a real-world problem with a model that is too simple, while variance is the error introduced by the model's sensitivity to small fluctuations in the training data. Some of the popular machine learning algorithms which are low on the bias scale are Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Decision Trees.
While trying to get over bias in our model, we try to increase the complexity of the
machine learning algorithm. Though it helps in reducing the bias, after a certain point, it
generates an overfitting effect on the model hence resulting in hyper-sensitivity and high
variance.
Bias-Variance trade-off: To achieve the best performance, the main target of a
supervised machine learning algorithm is to have low variance and bias.
The following things are observed regarding some of the popular machine learning
algorithms -
The Support Vector Machine algorithm (SVM) has high variance and low bias. To change the trade-off, we can decrease the parameter C: a smaller C allows more margin violations in the training data, which increases the bias and decreases the variance.
Similarly, the K-Nearest Neighbors (KNN) algorithm has high variance and low bias. To change the trade-off, we can increase the value of K, so that more neighbors influence each prediction, which increases the model bias and reduces the variance.
A Markov chain is a type of stochastic process in which a state's future probability depends only on its current state.
A good example of a Markov chain is a word-recommendation system. Here, the model recognizes and recommends the next word based only on the immediately preceding word and not on anything before that. The Markov chain is trained on earlier, similar text (the training data) and generates recommendations for the current text based solely on the previous word.
70. Why is R used in Data Visualization?
R has multiple visualization libraries such as lattice, ggplot2, and leaflet, as well as many built-in plotting functions.
Both box plots and histograms visually represent the frequency of a feature's values.
Boxplots are more often used for comparing several datasets; compared to histograms, they take less space and contain less detail. Histograms are used to understand the probability distribution underlying a dataset.
72. What does NLP stand for?
NLP is short for Natural Language Processing. It deals with the study of how computers
learn a massive amount of textual data through programming. A few popular examples of NLP are stemming, sentiment analysis, tokenization, removal of stop words, etc.
The difference between a residual error and an error is defined below:
A residual error is used to show how the sample population data and the observed data differ from each other, whereas an error is how the actual population data and the observed data differ from each other.
Standardization vs. Normalization
Standardization formula: X' = (X − μ) / σ
Here, μ is the feature's mean and σ is its standard deviation.
Normalization formula: X' = (X − Xmin) / (Xmax − Xmin)
Here, Xmin is the feature's minimum value and Xmax is its maximum value.
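A minimal NumPy sketch of both transforms (the feature values below are hypothetical):

import numpy as np

X = np.array([2.0, 4.0, 6.0, 8.0, 10.0])             # hypothetical feature values
standardized = (X - X.mean()) / X.std()              # zero mean, unit variance
normalized = (X - X.min()) / (X.max() - X.min())     # rescaled to the [0, 1] range
print(standardized)
print(normalized)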
To conclude, bias and variance are inversely related: an increase in bias results in a decrease in variance, and an increase in variance results in a decrease in bias.