Compiled Notes - Final


Analytics and AI for

Decision Making
Arnab Kumar Laha
Indian Institute of Management Ahmedabad
AI – as it emerged

 Statistics → Machine Learning → AI


 Problem Solving in the real-world and
delivering value
 Robotics and the Phygital Future
 Challenges - Job loss, AI-induced bias, and
ethical concerns.
Statistics

 Quantifying uncertainty
 Decision making under uncertainty
 Use cases:
Statistical Quality Control (SQC),
Design of Experiments (DoE) in agriculture and
product design
Total Quality Management
Six-sigma
Business Intelligence

 3rd Industrial Revolution – widespread


adoption of information technology
 Availability of large volume of data to
support informed decision making
 Efficient storage and retrieval of data from
multiple databases
 Data warehouse
 Data Lakes
Descriptive Analytics
 Rapid answers to questions such as
- Which is the best performing stock in the market today?
- How many financial transactions were done in our branches
in this month? How many of them are by our loan customers?
- How many pieces of a particular winter wear were sold at
our stores all over the country?
- What are the stock levels of the various SKUs in our
warehouse in Kolkata currently?
 Analysis involves mostly numerical and graphical data
summaries
Big Data

 Volume
 Variety
 Velocity
 Veracity
 Value
Predictive Analytics
 What will happen next ?
 Will the interest rate rise by 2% in the current
financial year?
 Will this customer default in payment if given a
loan ?
 Will this product sell at least 2000 units in the
next 3 months?
 What is the expected yield of rice in a certain
region based on the prevailing weather
conditions?
Statistical Learning for Predictive
Analytics

 Regression models relate the variable of interest


(response variable) with the predictor variables.
 The relationship between the response variable and the
predictor variable may be linear or non-linear
 Classification models are used to classify a categorical
response variable into one of several categories.
 Time series analysis is used to examine the behavior of a
variable over time and use the same for predicting the
value of the variable in the future.
Machine Learning

 A set of tools based on Artificial Neural Networks (ANN) to solve


complex real life problems
 Y= Response, X1, …, Xk are predictors. If Y=f(X1, X2, …, Xk) for some
unknown f is it possible to estimate f based on data?
 A mathematical result known as the Universal Approximation
Theorem (related to Kolmogorov's superposition theorem of the 1950s, with
the neural-network versions proved in the late 1980s) showed the possibility of
creating tools that can be used to estimate f.
 Advances in algorithms and computational capabilities over the years
have paved the way for estimating f efficiently when enough data is
available
 Huge number of real-life applications have followed since then.
Machine Learning for Predictive
Analytics

 One major benefit of using Machine Learning (as compared to Statistical


Learning) that has emerged over the years is its ability to handle unstructured
data problems.
 This has enabled applications in
- image recognition
- Chatbots (natural language processing)
- speech recognition and speech-to-text solutions
- Object detection in video feeds
- Autonomous vehicles
Learning Concepts – Supervised Learning
 It uses “labeled datasets” to train algorithms that classify data or
predict outcomes.
 The training dataset includes inputs and correct outputs (“labels”),
which allow the model to learn.
 A test dataset is typically used to understand the effectiveness of the
learning and the trained model’s potential to generalize to unknown
datasets
 Supervised learning tasks can be separated as:
- Classification when the response variable is categorical
- Regression when the response variable is numerical
 Linear / Logistic Regression, ANN, Support vector machines (SVM), k-
nearest neighbor (k-NN), and Random forest are some commonly used
supervised learning algorithms
Learning Concepts – Unsupervised
Learning
 Unsupervised learning algorithms attempt to discover “hidden
patterns” without the need for human intervention. It works on
“unlabeled” data.
 Clustering algorithms are used to process data with the objective of
putting the data in groups such that the observations placed in a
group are “similar” and observations in different groups are
“dissimilar”.
 In “hard” or “exclusive” clustering a data point can exist only in one
cluster (e.g. K-means, sketched below), while in “soft” clustering a data point may
belong to different clusters with varying probabilities (mixture
models).
 Association rules help to discover relationships amongst variables.
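A minimal sketch of hard clustering with K-means using scikit-learn; the two-dimensional data and the choice of three clusters are illustrative assumptions, not from the notes:

# Hard (exclusive) clustering with K-means: each point gets exactly one cluster label.
# The data and the choice of k=3 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "customer" data: two features, three loose groups
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", np.bincount(kmeans.labels_))
print("Cluster centers:\n", kmeans.cluster_centers_)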
Learning Concepts – Reinforcement
Learning
 Reinforcement Learning (RL) is a type of machine learning where an
“agent” interacts with an “environment” to learn how to perform
tasks by taking actions and receiving feedback.
 RL uses a trial-and-error method. The agent takes actions in the
environment, and after each action, it receives feedback that helps it
determine whether the choice it made was correct, neutral, or
incorrect. This feedback is often in the form of rewards (positive
feedback) or penalties (negative feedback).
 By designing an effective reward scheme, RL algorithms learn on their
own without further human intervention.
AI in our daily life

 Email categorization
 Tele calling for securing loan / credit card business
 Chatbots
 Navigation applications such as Maps
 Transportation solutions such as Uber
 Food delivery solutions such as Swiggy or Zomato
 Biometric solutions such as face detection and finger print
recognition
Problem Solving with AI - Healthcare

 AI is revolutionizing healthcare by enabling personalized approaches,


improving patient outcomes, and reducing side effects.
 AI techniques being used for disease diagnosis, drug discovery, and
patient risk identification.
 AI can sift through enormous volumes of health data—from health
records and clinical studies to genetic information - to provide
accurate evaluations. This would help in reducing physician workload,
decrease diagnostic errors particularly for rare diseases.
 Personalized medicine tailors treatments to individual patients based
on their medical history, lifestyle factors, and unique genetic makeup
to enhance treatment efficacy.
Problem Solving with AI - Healthcare

 AI in medical imaging can provide accurate readings of X-ray and MRI images,
helping in reliable diagnosis.
 AI enabled ECG systems embedded in wearable devices can provide early warning
for serious cardiac events such as a heart attack which can be prevented by early
medical intervention.
 Healthcare workers spend a lot of time doing paperwork and other administrative
tasks. AI and automation can help perform many of those mundane tasks, freeing
up employee time for other activities and giving them more face-to-face time with
patients.
 AI virtual nurse assistants are AI-powered chatbots, apps or other interfaces that
can help answer questions about medications, forward reports to doctors or
surgeons and help patients schedule a visit with a physician.
 AI mediated healthcare market is projected to be large (worth $187 billion in 2030
by some estimates).
Problem Solving with AI – Employee
Recruitment

 AI can automate the resume screening process, saving recruiters time


and improving efficiency.
 AI algorithms can parse resumes and applications, extracting
information such as work experience, education, and skills, and match
them against job descriptions.
 AI can be used to assess candidates during interviews.
 AI can help reduce unconscious bias in the hiring process by relying on
data and predefined criteria.
 AI can provide recruiters with data-driven insights; e.g., AI can
analyze a candidate's resume and interview responses and predict
their potential success in a role.
Problem Solving with AI - Education

 AI can adapt the content to each student's needs, providing


personalized learning experience.
 It can analyze a student's strengths and weaknesses and tailor
educational content accordingly.
 AI is automating tasks such as grading assignments and providing
feedback.
 AI can be used to provide personalized tutoring to students, helping
them understand complex concepts and providing additional support
outside of classroom hours.
 AI can be used to analyze student data to predict their performance
and identify those who may need additional help.
Problem Solving with AI - Education

 AI can help make education more accessible; e.g. speech-to-text


technologies can help students with hearing impairments, while text-
to-speech technologies can assist those with visual impairments.

 AI can create virtual learning environments that simulate real-world


scenarios. This is particularly useful in fields like medicine or
engineering, where students can practice skills in a safe environment.

 Educators who can quickly adapt to AI can provide great value to


students and their institutions.
Problem Solving with AI – Financial Services

 AI systems monitor incoming data and stop fraud threats before they
materialize. Machine learning algorithms can analyze data to spot
deviation from a customer’s expected behavior and raise alarm. For
example, an unusually high transaction amount, an unusual location, or
unusually short time intervals between transactions can lead the
system to raise an alarm.
 AI can analyze a variety of data, including social media activity and
other online behavior, to assess customers’ creditworthiness and make
more accurate credit decisions.
 AI can automate repetitive and time-consuming tasks, allowing
financial institutions to process large amounts of data faster and more
accurately.
 AI can automate monitoring and reporting requirements to ensure
regulatory compliance.
Problem Solving with AI – Transportation

 AI-powered route optimization is transforming logistics through


analysis of massive quantities of data to provide efficient delivery
routes. Route optimization minimizes travel time, fuel consumption,
and overall transportation costs while maximizing the number of
deliveries and freight capacity.
 Autonomous vehicles are equipped with multiple sensors, which help
them better understand their surroundings and assist in path planning.
AI systems in these vehicles collect data from sensors, analyze, and
make accurate decisions while on the road.
 AI in transportation can solve many problems like safety, traffic
congestion, fuel usage and parking shortages.
Robotics
 Robotics is an interdisciplinary field that involves the design,
construction, operation, and use of robots. It's a blend of science and
engineering that enables the machines to sense their surroundings,
make decisions, and perform tasks autonomously.
 Robots can perform tasks faster and more accurately than humans,
especially for repetitive tasks.
 Robots can operate in environments that are hazardous to humans,
such as bomb detection or space exploration.
 Robots can perform tasks that humans are unable to do, or tasks that
are difficult for humans such as deep sea exploration, radioactive
waste disposal etc.
 Robots can gather information that humans can't get, such as in
interplanetary missions.
Phygital Future
 The term "Phygital" is a blend of the words "physical" and "digital" and
it refers to the merging of offline and online environments to deliver
an enhanced customer experience.
 Some examples are:
- In retail, customers might browse products in a physical store, then
use an app to order online
- In education, students attend a physical class but use digital tools to
interact with the material and complete assignments.
- In healthcare, patients might have a physical consultation with a
doctor but use a digital platform to access test results and treatment
plans.
 The goal of the "Phygital Future" is to create immersive engagements
that are seamless and easy.
Phygital future…
 In manufacturing, AI and robotics are used for tasks such as predictive
maintenance, quality control, and assembly. This leads to improved
speed, precision, and better quality leading to savings in operational
costs.
 In precision agriculture, AI-powered agricultural bots help farmers
find more efficient ways to monitor the health of the crops, protect
their crops from weeds, suggest the correct pesticide and accurate
spraying etc.
 In the realm of home automation AI-powered virtual assistants can
control home appliances, security systems, and even suggest optimal
energy usage patterns. Robots can also perform household chores,
such as cleaning.
Impact on Us

 Increased efficiency, productivity, and cost savings for businesses


through automation of repetitive tasks, providing insightful data
analysis, leading to improved customer service.
 On a societal level, while there are concerns about job displacement
due to automation, new jobs are also being created in fields related
to AI and robotics. The potential benefits in terms of improved public
services, healthcare, education etc. gives us hope of a better future.
 AI is helping to make our daily lives more convenient by making
service delivery smoother and hassle free by providing the service as
and when needed.
 AI is a dynamic field that is attracting huge attention globally and
would require thoughtful policy-making and careful implementation.
AI challenges and concerns…
 AI solutions often rely on large volumes of data, some of which can be
sensitive and personal in nature. This raises security and privacy
vulnerabilities.
 AI system development and deployment requires highly trained
professionals. However, the talent pool is limited.
 Effective AI systems depend on huge amounts of multi-type data.
This requires access to specialized compute and storage facilities,
which are limited in supply.
 There is a strong need for policy that guides how the
development and use of AI technology evolves, to avoid decision
making that is biased.
 The growing dependence on third-party AI tools and the rapid
adoption of generative AI exposes companies to new commercial,
legal, and reputational risks.

 .
Thank you
[email protected]
From Analytics to AI
(Thomas H. Davenport)

1. Three eras of analytics


2. Analytics 4.0: the era of artificial intelligence
3. Analytical and Non-analytical AI
3.1 Deep Learning
3.2 Statistical Natural Language Processing (NLP)
3.3 Semantic NLP
3.4 Natural Language Generation (NLG)
3.5 Robotics Process Automation
3.6 Rule Based Systems
4. From analytical application to AI application
4.1 Developing and enhancing products and services
4.2 Optimizing internal business processes
4.3 Optimizing external business processes
4.4 Accelerating analytics capability
5. Analytical capabilities that help with AI
6. Create an AI organization development plan
i. Start a centre of excellence
ii. Up-train existing analytics personnel
iii. Develop a recruitment strategy
7. Develop an AI strategy
i. SWOT
ii. AI targets
iii. Ambition levels
iv. Realistic timelines
v. Partner strategy
8. Conclusion
Why is data visualisation important and what is important in it?
(Antony Unwin)

1. The what and why of data visualisation


2. A picture is worth a thousand words
3. Presentation and exploratory graphics
4. Data visualisation has become more important
5. Research in data visualisation
6. Examples and sources
7. What happens now

A/B Testing
(Amy Gallo)

1. What is A/B testing?


2. How does A/B testing work?
• Randomised controlled experiments
• Statistical significance
• Blocking
• Multivariate
3. How do you interpret the results of A/B test?
• Test variation
• Test version
• Control version
• Lift
4. How do companies use A/B testing?
5. What mistakes do people make when doing A/B test?
• Let the tests run their course….. don’t abort in middle
• Don’t look at too many metrics (spurious correlation)
• Enough retesting
Chapter – 7: Predictive Modelling
(Logistic regression and Decision trees (CHAID))

1. Logistic regression
• Binary and Non-binary events
• P = exp(Logit(p)) / (1 + exp(Logit(p)))
• Variables – Independent (Explanatory) and Dependent (Response)
• True Negative, False Positive, True Positive, False Negative
• This always holds good:
o True Negative Rate + False Positive Rate = 1
o True Positive Rate + False Negative Rate = 1
2. Decision Tree: CHAID (Chi-Square Automatic Interaction Detection)

Chapter – 2: The Forecaster’s Toolbox


1. Graphics
2. Time plots
3. Time series patterns
4. Seasonal plots
5. Seasonal sub-series plots
6. Scatterplots and scatterplots matrices
7. Numerical data summaries
8. Bivariate statistics
9. Autocorrelation
10. Simple forecasting methods
• Average method
• Naïve method
• Seasonal naïve method
• Drift method
11. Transformations and adjustments
• Mathematical transformations
• Features of power transformations
• Calendar adjustments
• Population adjustments
• Inflation adjustments
12. Evaluating forecast accuracy
• Forecast accuracy measures
o Mean Absolute Error (MAE)
o Root Mean Squared Error (RMSE)
o Mean Absolute Percentage Error (MAPE)
o Mean Absolute Scaled Error
13. Training and test sets
14. Cross validation
15. Residual diagnostics
16. Portmanteau test for autocorrelation
17. Prediction intervals
Chapter – 3: Judgmental Forecasts

1. Introduction
2. Beware of limitations
3. Key principles
• Set the forecasting task clearly and concisely
• Implement a systematic approach
• Document and justify
• Systematically evaluate forecasts
• Segregate forecasters and users
• Example: Pharmaceutical Benefits Scheme
• The Delphi Method
o Experts and anonymity
o Setting the forecasting task in a Delphi
o Feedback
o Iteration
o Final forecasts
o Limitations and variations
o The facilitator
4. Forecasting by analogy: Example Designing a high school curriculum
• A structured analogy
5. Scenario forecasting
6. New product forecasting
7. Sales force composite
8. Executive opinion
9. Customer intentions
10. Judgmental adjustments
• Use adjustments sparingly
Apply a structured approach

Chapter – 4: Data Visualisation


SMPBL11 - Analytics for Business – Prof. Laha – Class Notes – Ruchir Agarwal

If you torture the data long enough, it will confess. - Ronald Coase.
In God we trust; all others must bring data. - W. Edwards Deming
• Statistics (1890-1900) –> SQC (1930) -> TQM (1940-50) -> 6 Sigma (1980) -> Machine Learning (2000-
15) -> AI (2015)
• 1940 -> 1970 (SQL/DB2/Oracle) -> 1990 (Analytics) -> Machine Learning
• AI helps with Problem Solving in the real world and delivering value.
• Robotics and the Phygital (Physical + Digital) Future is here in the next 5 years.
• The theory of AI is still evolving.
• Challenges of AI would be Job loss, AI-induced bias & Ethical concerns.
• In the medium term: semi-skilled & skilled jobs would be impacted by AI. A lot of these jobs will be
transformed, and reskilling will be required. (the other groups are Super Skilled and Unskilled)
• Statistics is the only subject that is known to mankind to read the MIND OF GOD.
• A subject similar to Statistics is Fuzzy Sets, which unfortunately lost out in the race.
• Statistics:
o Quantifying uncertainty
o Decision-making under uncertainty.
o Use Cases:
 Statistical Quality Control (SQC)
 Design of Experiments (DoE) in agriculture and product design
 Total Quality Management
 Six-Sigma

• Control Charts (Shewhart, Walter Andrew Shewhart was an American physicist, engineer and
statistician, sometimes known as the father of statistical quality control and also related to the
Shewhart cycle) – Running Process -> Small no. of Items (called as rational subgroup, meaning this
subgroup is representing the entire process) -> Inspected & measurements are Obtained -> If within
control then no action -> If out of control then corrective action


In a stable process:
68.3% of the data points should fall between ± 1 sigma.
95.5% of the data points should fall between ± 2 sigma.
99.7% of the data points should fall between the UCL and LCL.
How do you calculate control limits?
1. First calculate the Center Line. The Center Line equals either the average or
median of your data.
2. Second calculate sigma. The formula for sigma varies depending on the
type of data you have.
3. Third, calculate the sigma lines. These are simply ± 1 sigma, ± 2 sigma
and ± 3 sigma from the center line.

+3 sigma = Upper Control Limit (UCL)
−3 sigma = Lower Control Limit (LCL)
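A small sketch of these control-limit calculations, assuming illustrative measurements and using the overall sample standard deviation as sigma (in practice sigma is often estimated from rational subgroups, e.g. via R-bar):

# Center line and control limits from sample data (illustrative; sigma estimated
# from the overall sample standard deviation rather than from subgroup ranges).
import numpy as np

measurements = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.4, 10.0])

center_line = measurements.mean()
sigma = measurements.std(ddof=1)

ucl = center_line + 3 * sigma   # Upper Control Limit
lcl = center_line - 3 * sigma   # Lower Control Limit

print(f"CL = {center_line:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("Out-of-control points:", measurements[(measurements > ucl) | (measurements < lcl)])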

• Shewhart reported that bringing a process into a state of statistical control—where there is only
chance-cause (common-cause) variation—and keeping it in control was needed to reduce waste and
improve quality.
• If the points are between the upper control limit (UCL) & lower control limit (LCL) then the process is in
statistical control, which means the variation is only impacted by random variables and variation is not
due to an assigned cause. If it goes out of control, then the chance of creating a defective product
increases (the process could lead to defects), then the process should be stopped and investigated,
perform the RCA (root cause analysis), the cause should be removed and the process should be
brought back to control.
• As a rule of thumb, 5 samples per rational subgroup taken at a regular interval (e.g. every 0.5 or 1 hour) are good enough.


• The propensity of the defect increases if the process goes above UCL or below LCL and hence action
should be taken as the rejection rate might increase.
• 99.73% of production will lie within UCL–LCL when the process is in control, i.e. within 3 standard deviations on each side.

• Robust design of experiments by Taguchi method (https://en.wikipedia.org/wiki/Taguchi_methods) is a


field of study in the Design of Experiments. They use orthogonal arrays developed by C. R. Rao.

• Descriptive Analytics is must-do before proceeding to more advanced techniques. It deals with past
data and helps answer what happened. This is accomplished by using dashboards, and graphical or
numerical analysis. Story telling is also used to describe/present descriptive analysis.

• Variety and Velocity contribute the most value in Big Data.

• Data lake – supports both structured and unstructured data (NoSQL)
• It is difficult to store streaming data. Predictive maintenance is possible due to velocity of data.
• Data veracity is related to Accuracy which is related to Fake news and data manipulation, etc.
• Turing test – (https://www.investopedia.com/terms/t/turing-test.asp) The Turing Test is a deceptively
simple method of determining whether a machine can demonstrate human intelligence: If a machine
can engage in a conversation with a human without being detected as a machine, it has demonstrated
human intelligence.

• Package pricing works when there is low variability in expense, i.e. fixed duration, standardised
procedures, and a high volume.
• When you do package pricing, you are doing stabilization of cost for the patient.
• Hence, high volumes are required for the averages to converge.


• An ethical issue with package pricing is that healthier patients would cross-subsidise sicker
patients.
• Storytelling (https://www.youtube.com/watch?v=r5_34YnCmMY,
https://www.youtube.com/watch?v=jbkSRLYSojo) with data using the following construct: Set up ->
Conflict -> Resolution
• It creates emotional connect to the data that you are showing. It requires thinking sometimes as it is
not quite intuitive.
• Bar Chart is used for nominal or ordinal data.
• A line chart is used for continuous variables, and its variation is studied.
• Don’t make a pie chart when there are too many categories or some categories are too small.
• A pie chart represents percentages of the data, with the angle defining each share. It shows compositional data, i.e. parts of a whole. The data has
to be categorical.
• Age/weight/BP are examples of continuous variables and go on the X axis of a histogram, with the Y
axis showing frequency. Histogram bars do not have spaces between them.

• Predictive Analytics: what happens next?


Statistical Learning for Predictive Analytics are:
o Regression Models – Linear & Non Linear
o Classification Models
o Time Series Analysis

• Use Case: Credit Card Customers


• Type of customers:
o Transactional Users: Low/Medium
o Revolvers: Who do not pay full amount (High value customers)
o Defaulters: Run away with the money
• Revolvers & Defaulters both do not have enough funds to cover the expense but the difference is in
their intention to pay back.
• Revolvers usually spend on education, health, home, self development etc. while defaulters usually
spend on alcohol, races, gambling, cash withdrawals etc.
• For revolvers EMI schemes were established.

Credit card users – Predicted vs Actual:

Predicted \ Actual      Defaulter       Revolver
Defaulter               Yes             Type 2
Revolver                Type 1          Yes

Concentrix Corporation – Predicted vs Actual:

Predicted \ Actual        Will Pay On-Time                          Will Not Pay On-Time
Will Pay On-Time          Yes                                       Type 1 (not called erroneously)
Will Not Pay On-Time      Type 2 (additional non-useful spend)      Yes
• Type 1 errors are generally costlier than Type 2 errors.

• Older methods of Predictive Analysis are:


o Linear Discriminant Analysis -> Bankruptcy Prediction (Altman Z-Score, 1973)
o Logistic Regression Analysis -> Probability of Default (Late Payment) -> Assigns probability
score for late payment -> Sort it in descending order -> Take calling action. P = 1 / (1 + e^(−(β0 + β1x1 + … + βkxk)))

• Persistency of data is a big problem across the globe and more so in India.

• New methods of Predictive Analysis are: Support Vector Machines (SVM), Decision Trees -> Random
Forests are machine learning techniques
• If we have clear decision boundaries then linear and logistic regressions work well but if we have
different and difficult decision boundaries then SVM works well. It is used in Image Processing.
• One very good use case of AI is to detect TB from X-ray. Good tissues are surrounded by bad tissues
and hence decision boundaries are difficult.
• In decision trees, the last nodes are called leaves.
• The decision tree algorithm gives a set of rules that helps you classify. It is unstable, as the rules can change
substantially with a minor change in the data, and hence Random Forests were developed by Breiman.
• A Random Forest is an ensemble of decision trees. It is a very strong classifier and the state of the art.
• Classification is predicting the class of a new observation, while clustering is segmenting the population.
• Random Forest (Breiman) is the state of the art in classification problems – an ensemble of decision trees and a very
strong classifier, though computationally heavy. Examples are cross-selling, customer churn, dealer churn,
market-basket analysis, and attrition prediction (see the sketch below).
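A minimal Random Forest classification sketch with scikit-learn; the synthetic dataset and hyperparameters below are illustrative assumptions standing in for a churn / default problem:

# Random Forest = an ensemble of decision trees (Breiman); strong but computationally heavy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a churn / default dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))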
• Cross sell, Upsell (https://www.youtube.com/watch?v=VMavY0pBo2o), Customer Churn
• (https://www.youtube.com/watch?v=Akjgml43hzU) where predictive analysis is used in marketing use
cases. Vendor: Angoss
• There are two types of learning: Supervised & Unsupervised
• Unsupervised learning is segmentation, where we do not have a response variable to check against.

• Time series analysis can further be divided into:


o Short term forecasting -> Few units of time ahead. Upto 5 units. Most methods are
available and quite a few are effective.
o Medium term forecasting -> 5-20 Units of time ahead. A few methods are known but
doesn’t always do well.
o Long term forecasting -> 20-30 units of time ahead. Not many good methods are
known. Only educated guess work available as Scenario Planning.
• Business forecasts are difficult to evaluate and rarely come true (in the medium/long term), as
knowledge is less and the environment is dynamic compared to physical forecasts.
• In business forecast, you have agency -> they can work and take steps to make forecast come true or
make it false.
• Business forecast fails as it is taken as Guidance -> based on as is condition.
• Revision of forecast is quite prevalent in business forecasts to accommodate the changed reality.
• Shocks are very difficult to handle.


• Physical forecasts are more accurate – fact that knowledge is more, environment is well known, and it
is constant.
• Financial forecasting should not be done by simple methods. Efficient market hypothesis is underlying
assumption for any market analysis.
• In the Agarwal Automobiles case, first predict demand and then check supply.
• In time series forecasting, most recent data is most useful and old data is of not that much of use.
• ARIMA is used for short term (2-3 units) time series forecasting. It is widely used as they can handle
trend change.
• Double exponential smoothing also helps in analysis of the trend change.
• MAPE is the Mean Absolute Percentage Error = (1/n) Σ ( |actual − forecast| / actual ) × 100
• It is the average of the absolute percentage errors. It is used in forecasting demand, but the series has to have
positive values; it cannot be applied to 0 or negative values.
• MAPE: 0-4% Good 4-10% Satisfactory >10% Depends on Industry
• In Agarwal Case, MAPE is 11.85%

If the series is not positive, then the measure applied is RMSE, i.e. Root Mean Squared Error. It
can be applied to all kinds of series. RMSE = sqrt( (1/n) Σ (actual − forecast)² )

• RMSE is not directly interpretable and hence its usage is limited.
• In the Agarwal case, the diesel MAPE is 25.92%, while for high-speed diesel it is 44.835%, larger than
the other two. So high-speed diesel demand is the most fluctuating, followed by diesel, which
fluctuates more than petrol. A small sketch of these accuracy measures follows.
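A small sketch of the two accuracy measures as plain functions; the actual and forecast numbers are made up and are not the Agarwal case figures:

# MAPE needs strictly positive actuals; RMSE works for any series but is harder to interpret.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast) / actual) * 100

def rmse(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

actual = [3200, 3400, 3300, 3500]      # illustrative demand figures
forecast = [3100, 3450, 3250, 3600]
print(f"MAPE = {mape(actual, forecast):.2f}%, RMSE = {rmse(actual, forecast):.1f}")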
• Ensemble forecasting is the use of multiple forecasts. The earlier paradigm was to use a single forecast, but now
multiple forecasting methods are combined using weights derived from their respective MAPEs to obtain
the ensemble forecast.

Forecast type   Forecast value   MAPE   1/MAPE   Weight                   Ensembled value
I               200              10%    1/10     (1/10) / (3/20) = 2/3    200 × 2/3 + 300 × 1/3
II              300              20%    1/20     (1/20) / (3/20) = 1/3    = 700/3 = 233
Sum                                     1/10 + 1/20 = 3/20

• Ensemble forecasts help in arriving at a consensus. They are extensively used in the fashion industry and
other industries where there is a lot of volatility and divergent views (see the sketch below).
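A sketch of the inverse-MAPE weighting from the table above (forecasts of 200 and 300 with MAPEs of 10% and 20%), reproducing the 700/3 ≈ 233 result:

# Weight each forecast by 1/MAPE, normalise the weights, and combine.
forecasts = [200.0, 300.0]
mapes = [10.0, 20.0]          # in percent

inv = [1.0 / m for m in mapes]            # 1/10 and 1/20
weights = [v / sum(inv) for v in inv]     # 2/3 and 1/3
ensemble = sum(w * f for w, f in zip(weights, forecasts))

print("Weights:", [round(w, 3) for w in weights])   # [0.667, 0.333]
print("Ensemble forecast:", round(ensemble, 2))     # 233.33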

• Prescriptive Analytics and Cognitive Analytics lead to Business Experiments <- PDCA
• The Uber case is an example of a business experiment.
• Prescriptive Analytics: What is best? Is 2 min waiting better than 5 min waiting? Is Pool preferred over
Express by the customers?
• To prescribe something, we must base our argument on data. So, we should create a hypothesis and
then validate it.
• The output matrix which is predicted will depend on the context. Ex: revenue, etc.
• Optimization tools also come under prescriptive analytics. Ex: Google Maps. It is widely used in plants.


• The underlying design-of-experiments ideas were introduced by Sir R. A. Fisher in the 1930s, implemented in engineering in the 1940s,
and adopted by Taguchi as Taguchi designs in the 1970s.
• In simulation, you change variables to create scenarios. For example, if factors A, B and C have 5 levels each, there
would be 5 × 5 × 5 = 125 scenarios. We check the yield for all 125 and choose the best combination of
settings for the 3 factors.
• If we do all 125 experiments, it is called a Full Factorial Experiment (a small enumeration sketch follows). It is time-consuming and costly. So
we do fewer experiments, and Taguchi used Orthogonal designs or Fractional Factorials. When using
Fractional Factorials we make some assumptions.
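A sketch of enumerating the full factorial design for three factors with five levels each; the yield function below is an arbitrary stand-in for the real experimental response:

# Full factorial: evaluate all 5 x 5 x 5 = 125 combinations and pick the best one.
from itertools import product

levels = range(1, 6)  # 5 levels per factor

def yield_of(a, b, c):
    # Placeholder response surface; in practice this comes from experiments or a simulator.
    return -(a - 3) ** 2 - (b - 4) ** 2 - (c - 2) ** 2 + 100

scenarios = list(product(levels, levels, levels))
print("Number of scenarios:", len(scenarios))        # 125

best = max(scenarios, key=lambda s: yield_of(*s))
print("Best setting (A, B, C):", best, "yield:", yield_of(*best))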
• Recently, companies have been using Digital Twins. It has made experiments easier in silico. It is an
up-to-date representation of a real asset in operation.
• In Uber Case, core issue is increasing revenue per ride but still enhance customer experience.
• Business Experiment is useful when there is uncertainty and there is trade-off between which you
need to choose.
• There are effects of contamination of data and spill over effects which we need to be careful about.
• Phenomenon of confounding is that multiple effects are affecting together the outcome. You can not
differentiate the output of one from another. Such variables are called Confounders or Lurking
variables.
• Ex: Reduction in attrition after the Learning & Development initiative could also be due to market
scenario.
• To treat confounding, you need to create 2 groups namely control and treatment of similar types of
markets. Then measure the difference in treatment and control group to check if the treatment has
worked.
• In the Uber example, you take 2 similar markets say SF & LA and induce 5 min in one market and 2 min
waiting in other and then check.
• Optimizing towards your assumptions is foolish.
• You should first check random samples through A/A testing before doing A/B testing.
• An A/A test is done to check if there are lurking factors that you are not aware of and that need control.
• In Synthetic Control Experiments, we choose one variable and then figure out which is close to it and
club it as group. Then we choose the 3rd variable and figure out its partner and so on.
• However, these experiments work well only when effect sizes are large, i.e. > 5%.
• There are also switchback designs where you want to get maximum information without
contamination and spillovers. Ex: 2 min 5 min 2 min 5 min every 160 mins for 2 weeks to get samples at
each time slot for both types.
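A sketch of comparing two variants of an A/B test on a conversion-type metric using a two-proportion z-test; the counts are invented, and in the Uber case the metric would be revenue per ride or satisfaction rather than a simple conversion:

# Two-proportion z-test for an A/B experiment (illustrative counts).
import numpy as np
from scipy.stats import norm

conv_a, n_a = 480, 10000   # conversions and users in variant A (e.g. 2-min wait)
conv_b, n_b = 530, 10000   # conversions and users in variant B (e.g. 5-min wait)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))   # two-sided

print(f"lift = {(p_b - p_a) / p_a:.1%}, z = {z:.2f}, p-value = {p_value:.3f}")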
• Avoid optimization while in the experiment as it might lead to chain effects.
• If you are building a data-based culture in the organization, then you have to let the data do the talking and
not force your hunch into the decision-making process, even though it might initially give higher errors.


True Positives (TP): The cases in which the model correctly predicts the positive class.
True Negatives (TN): The cases in which the model correctly predicts the negative class.
False Positives (FP): The cases in which the model incorrectly predicts the positive class (also known
as a Type I error).
False Negatives (FN): The cases in which the model incorrectly predicts the negative class (also known
as a Type II error).

                Actual (0)    Actual (1)
Predicted (0)   TN (4)        FN (3)
Predicted (1)   FP (2)        TP (1)
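A sketch that reproduces this confusion matrix and the associated rates with scikit-learn; the label vectors are chosen only to match the counts in the table (TN=4, FN=3, FP=2, TP=1):

# Confusion matrix and the error rates; labels chosen to reproduce TN=4, FP=2, FN=3, TP=1.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 0, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)   # True Positive Rate (sensitivity)
fnr = fn / (tp + fn)   # False Negative Rate -> TPR + FNR = 1
tnr = tn / (tn + fp)   # True Negative Rate (specificity)
fpr = fp / (tn + fp)   # False Positive Rate -> TNR + FPR = 1

print(f"TN={tn}, FP={fp}, FN={fn}, TP={tp}")
print(f"TPR={tpr:.2f}, FNR={fnr:.2f}, TNR={tnr:.2f}, FPR={fpr:.2f}")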

BA MBA papers:

1. What do you mean by Measures of Central Tendency


• statistical measures used to describe the center or typical value of a set of data.
• provide a single representative value around which the entire data set tends to
cluster.
• The three main measures of central tendency are the mean, median, and mode.
• Mean – Average, Mode - value that occurs most frequently and Median – middle
value
2. Measures of Dispersion –
• the degree of variability or spread within the data.
• Measures –
i. Range,
ii. IQR (Q3 – Q1),
iii. Variance
iv. Standard deviation (σ) = square root of variance
v. Coefficient of variation = (std dev / mean) × 100
vi. MAD (Mean absolute deviation) – this is also known as Mean Deviation
3. What is Standard Deviation
• amount of variation or dispersion in a data set
• how spread out the individual values in a dataset are from the mean (average) of that
dataset.
• A low standard deviation - closer to the mean, while a high standard deviation -
values are more spread out.
4. What is Correlation - degree of association or relationship between two variables (ranges
from -1 to 1)
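A small sketch computing these summary measures on made-up data:

# Central tendency (mean, median, mode), dispersion (range, IQR, variance, SD, CV, MAD)
# and correlation, on illustrative data.
import numpy as np
from statistics import mode

x = np.array([12, 15, 15, 18, 20, 22, 25, 30, 15, 19], dtype=float)
y = np.array([1.1, 1.4, 1.3, 1.9, 2.0, 2.4, 2.3, 3.1, 1.5, 1.8])

mean, median = x.mean(), np.median(x)
data_range = x.max() - x.min()
q1, q3 = np.percentile(x, [25, 75])
var, sd = x.var(ddof=1), x.std(ddof=1)
cv = sd / mean * 100
mad = np.mean(np.abs(x - mean))
corr = np.corrcoef(x, y)[0, 1]

print(f"mean={mean:.1f}, median={median:.1f}, mode={mode(x)}")
print(f"range={data_range}, IQR={q3 - q1}, variance={var:.1f}, SD={sd:.2f}, CV={cv:.1f}%, MAD={mad:.2f}")
print(f"correlation(x, y) = {corr:.2f}")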

Data analytics: it is a wider term that incorporates all techniques and approaches for data collection, mining,
processing, analysis and visualization. It converts raw data (structured/unstructured) into meaningful information.

Business Analytics: it is the part of data analytics that focuses on business requirements, addressing business
challenges and business-related decision making. It is a subset of Data Analytics.

Descriptive analytics, predictive analytics, prescriptive analytics, and cognitive analytics are different
stages / types of analytics:

Stage / Summary / Example

Descriptive Analytics – Business intelligence, data management, reporting
Summary: Focus on analysing historical data to identify trends, patterns, gaps and KPIs. This can help the
business uncover hidden facts by looking at historical trends – a retrospective view of the data.
Example: Sales trend by region, customer behaviours, fraudulent transactions.

Predictive Analytics – Big data
Summary: Focus shifts towards discovering future trends – predictions / forecasting. This stage of analytics
can also be supported by descriptive analytics. Forecasting tools: statistical modelling, ML algorithms.
Example: Using your past sales trends, how can you predict future sales.

Prescriptive Analytics – Big data and analytics, business model transformations
Summary: Optimization of processes and decision making using predictive analysis.
Example: What offers and discounts can be provided to customers; product placement – seasonal, on shelf.

Cognitive Analytics – AI / ML (Note – depending upon geography, adoption of 20 to 30% across large
enterprises in 2016)
Summary: Natural language processing, pattern recognition, and machine learning algorithms are applied
to understand, learn, and make decisions in a way that mimics human cognitive functions.
Example: ChatGPT, bots.

Structured data – formatted, fixed-layout data sets, e.g. database tables, flat files

Semi-structured data – no fixed layout, e.g. JSON files, PDFs, office documents

Unstructured data – not in a fixed text format – emails, logs, images, videos

• ML - Basic machine learning is predictive analytics. It uses “supervised learning” – the creation
of a statistical model based on data for which the values of the outcome variable are known.
• Then, once a model is found that explains the variance in the training data and predicts well, it
is deployed to predict or classify new data for which the outcome variable isn’t known –
sometimes called a scoring process.
• Deep learning – Complex form of neural network “train” networks that are then used
to recognize and characterize situations based on input data.
o It is inspired by the structure and functioning of the human brain, where neural
networks are composed of interconnected nodes, also known as neurons.
o its ability to automatically learn hierarchical representations from data without
the need for explicit feature engineering
o These networks are typically organized into an input layer, one or more hidden
layers, and an output layer.
• deep learning include: Neural Networks, Deep Neural Networks (DNNs),

• Training and Backpropagation:


• Deep learning models are trained using large datasets through a
process called backpropagation.
• Backpropagation involves adjusting the weights of the neural network
based on the error between predicted and actual outputs
• Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs):
o CNNs are particularly effective for image and spatial data, utilizing
convolutional layers to capture spatial hierarchies and patterns.
o RNNs are designed for sequential data and have memory mechanisms, making
them suitable for tasks like natural language processing.
• Applications
o image and speech recognition, natural language processing, machine
translation, autonomous vehicles, and healthcare.
• Transfer Learning: Transfer learning is a technique in deep learning where a pre-
trained model on a large dataset is fine-tuned for a specific task, leveraging the learned
features.
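A minimal feed-forward neural network sketch using scikit-learn's MLPClassifier; the dataset and the single hidden layer size are illustrative assumptions, and backpropagation is handled internally by the library:

# A small fully connected network: input layer -> one hidden layer -> output layer.
# Training uses backpropagation internally (weights adjusted from the prediction error).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), activation="relu", max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print("Test accuracy:", net.score(X_test, y_test))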

Regression
A statistical technique and part of predictive analytics.

It models the relationship between one or more predictor variables (independent variables)
and a response variable (dependent variable).

Key components:

Dependent Variable (response variable): the variable to predict, denotes with Y

independent variable (predictor variables) , denotes with X

For a simple linear regression with one independent variable

y=β0+β1x+ε

For multiple linear regression with more than one independent variable:

y = β0 + β1x1 + β2x2 + β3x3 + … + ε

Coefficients (β) - impact of each independent variable on the dependent variable.


β0 is the y-intercept (constant term),
β1 is the slope of the line,
Residuals - ε represents the error term.

• Common metrics for evaluating regression models include Mean Squared Error (MSE),
Mean Absolute Error (MAE), and R-squared.
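A sketch of fitting a simple linear regression and computing MSE, MAE and R-squared; the data is synthetic, generated from y = β0 + β1x + ε with assumed coefficients:

# Simple linear regression y = b0 + b1*x + error, with the usual evaluation metrics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200).reshape(-1, 1)
y = 2.0 + 3.0 * x.ravel() + rng.normal(0, 1.0, size=200)   # true b0=2, b1=3

model = LinearRegression().fit(x, y)
y_hat = model.predict(x)

print(f"b0={model.intercept_:.2f}, b1={model.coef_[0]:.2f}")
print(f"MSE={mean_squared_error(y, y_hat):.2f}, "
      f"MAE={mean_absolute_error(y, y_hat):.2f}, R2={r2_score(y, y_hat):.3f}")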
• Logistic Regression : relationship between one or more predictor variables and a binary
response variable. The response variable (dependent variable) is binary – it can only take
on two values.

P(Y=1) = 1 / (1 + e^(−(β0 + β1X1 + β2X2 + ... + βnXn))), where e is the base of the natural logarithm.
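A sketch applying the logistic (sigmoid) formula above to an assumed set of coefficients, turning a linear score into P(Y=1); the coefficients and inputs are illustrative:

# P(Y=1) = 1 / (1 + exp(-(b0 + b1*X1 + ... + bn*Xn))); coefficients here are assumptions.
import numpy as np

beta = np.array([-3.0, 0.8, 1.5])          # b0, b1, b2 (illustrative)
X = np.array([[1.0, 2.0, 0.5],             # each row: [1 (intercept), X1, X2]
              [1.0, 4.0, 1.0]])

linear_score = X @ beta
prob = 1.0 / (1.0 + np.exp(-linear_score))
print("P(Y=1):", np.round(prob, 3))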

Forecasting –

Qualitative methods – used when sufficient data is not available

Quantitative methods – data-based analysis
Descriptive statistics vs inferential statistics

Descriptive statistics

1. Measures of Central Tendency:


• Mean, median, mode.
2. Measures of Variability:
• Range, variance, standard deviation.
3. Measures of Shape:
• Skewness, kurtosis.
4. Graphical Representation:
• Histograms, box plots, pie charts, bar charts.
5. Summary Statistics:
• Mean, median, mode, standard deviation, percentiles.

 Location parameter: Mean value, median, mode, sum (central tendency


of the data set)
 Dispersion parameter: Standard deviation, variance, range
 Tables: Absolute, relative and cumulative Frequencies
 Charts: Histograms, bar charts, box plots, scatter charts, matrix plots

inferential statistics - predictions


1. Hypothesis Testing:
• Testing hypotheses about population parameters based on sample
data.
2. Estimation:
• Estimating population parameters based on sample statistics.
3. Regression Analysis:
• Modeling relationships between variables and making predictions.
4. Analysis of Variance (ANOVA):
• Comparing means of different groups to determine if they are
significantly different.
5. Confidence Intervals:
• Providing a range of values within which a population parameter is
likely to fall.

Visualization:
Bar - arranged horizontally or vertically. Numerical value presentation –
length of the bar

Histogram

A representation of the frequency distribution. The data must be grouped or
divided into classes (bins).

Scatter plots

correlation between the two variables

Box Plot:
What is the p-value?
The p-value is the probability that the observed result, or an even more extreme result, will occur
if the null hypothesis is true.
The p-value is used to decide whether the null hypothesis is rejected or not. If the p-value is
smaller than the defined significance level (often 5%), the null hypothesis is rejected; otherwise
it is not rejected.
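A small hypothesis-testing sketch: a two-sample t-test on made-up data, with the p-value compared against a 5% significance level:

# Two-sample t-test; reject H0 (equal means) if the p-value is below the significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=100, scale=10, size=30)   # e.g. sales under treatment
group_b = rng.normal(loc=95, scale=10, size=30)    # e.g. sales under control

t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05
print(f"t = {t_stat:.2f}, p-value = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Do not reject H0")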
Note – Forecasting and Inventory management. Agarwal Pump and Concentrix Company Case

Predictive Analytics:

Credit card case: Type of customers

1. Transactional – pay in full. Low margin for credit card company


2. Revolver – don’t pay in full. High margin for credit card company
3. Defaulter – don’t pay and default. Loss for credit company

How to identify these set of customers from the data.

The difference between a Revolver and a Defaulter is in their intention.

>> Revolver Intention is good and intend to payback, they have good credit history.

>> Defaulter intention is bad; they intend to run away

Features one can look for

- Revolvers – Education, health, development of home, may go to EMI options so can be


offered EMI options. Revolvers >>> EMI >> earning for credit card agency
- Defaulters – Alcohol, Races , Gambling , Cash withdrawals,

Predict ↓/ Actual→ Default Revolver


Default Yes Type 2 error
Revolver Type 1 error Yes

Case 1: Concentrix Corporation

Q : if customer is likely to pay premium on time or not on due date. If person going to pay on
time no need of call. If person not going to pay then give call for renewal.

Predict ↓/ Actual→ Will pay on time Will not pay on time


Will pay on time Yes Type 2 error (not called
erroneously)
Will not pay on time Type 1 error (Additional Yes
spend incurred to call)

Prediction models:

1. Linear Discriminant Analysis: Used for Bankruptcy prediction (Altman Z-score)


2. Logistic Regression Analysis: It predicts probability of default (in this case late payment).
Good in linear boundaries. Most prevalent one
- Each policy will have probability of late payment in premium > sort the probability in
descending order > take calling action in that order
3. Support Vector Machine (SVMs): Strong in non-linear boundaries. Example Image
processing
4. Decision Tree : Random Forest > Create Decision Rules i.e.
X1<= 110 , X2<=20 = Classify as 0


X1<=110, X2>20 = Classify as 1


Random Forest (Breiman) is the state of the art in classification problems – an ensemble of decision
trees and a very strong classifier, though computationally heavy.
Examples are cross-selling, customer churn, dealer churn, market-basket analysis, and attrition
prediction.

- Supervised learning are used for prediction


- Un-supervised learning is used for clustering. Clustering algorithms are used for clustering
and classification

Statistical Learning for predictive Analytics

- Regression models relate the variable of interest (response variable) with the predictor variables.
Short-term forecasting: likely to be accurate; a few units of time ahead (not beyond 5–7 time units; if
monthly data, short term is up to ~6 months). Most of the available methods are for this horizon.

Medium-term forecasting: likely to be moderately accurate (if monthly data, medium term is up to 5
years). A few methods are known, but not many good ones.

Long-term forecasting: guess work, for guidance purposes (beyond 30 time units; if monthly, beyond 5
years ~ 60 months is long term). Not many good methods are known; educated guess work and
scenario planning.

Business forecasts are very difficult to evaluate: in the medium to long term they rarely come true. For
medium/long-term forecasts, below are the differences between physical and business forecasts:

Physical Forecast (e.g. process outcomes) vs Business Forecast (e.g. sales / revenue etc.):
- Knowledge is more | Knowledge is less
- Environment is more well known and constant | Environment is dynamic (non-constant)
- No agency effect | Agency is critical in business forecasts (people can work / take steps to make the
forecast come true or not; the forecast can be turned in one's favour)
- Example: Halley's comet – astronomers know exactly when the comet will come into the solar system;
predicting the yield of a plant, which is fairly predictable | Business forecasts rarely come true because
the actual outcome can be turned in someone's favour
- With optimum parameters we will get a new forecast which will largely come true | Used as guidance;
forecasts are revised from time to time


Agarwal Pump case : Predict the demand and match the orders which are delivered.

ARIMA method for prediction / forecasting tool for short term forecasting (An autoregressive
integrated moving average, or ARIMA): it is not used for long term forecasting.

Next 2 days' forecast from ARIMA – 3379 and 3315. Depending on the inventory / space available,
action will be taken on whether or not to order (see the sketch below).
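A sketch of a short-term ARIMA forecast with statsmodels; the demand series and the (1,1,1) order are illustrative assumptions, not the Agarwal case data:

# Fit an ARIMA(1,1,1) to a short daily demand series and forecast the next 2 days.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

demand = pd.Series([3100, 3220, 3180, 3300, 3280, 3350, 3320, 3400,
                    3370, 3450, 3420, 3500])   # illustrative daily sales

model = ARIMA(demand, order=(1, 1, 1))
result = model.fit()

print(result.forecast(steps=2))   # next 2 days' forecast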

How to know forecasting is working well or not :

MAPE: Mean absolute % error = (1/n) Σ |Actual – Forecast| / Actual × 100. To be applied to series
with positive values; not for series that can take negative values, like profits.

0-4% : good, 4-10% : Satisfactory , > 10% ~~~ depends on industry.

RMSE: Root Mean Squared Error = sqrt( (1/n) Σ (Actual – Forecast)² ). It can be used for series with
negative values as well, but is difficult to interpret.

Next 2 days' diesel forecast: 9400 litres. The MAPE is much higher, indicating a more fluctuating series.
The RMSE is also higher but does not indicate much.


Use of multiple forecasts (Ensemble forecast) : not just statistical but other expert forecast can be
used

Forecast   Value   MAPE   1/MAPE   Weight                   Ensemble forecast
I          200     10     1/10     (1/10) / (3/20) = 2/3    200 × 2/3 + 300 × 1/3 = 233.33
II         300     20     1/20     (1/20) / (3/20) = 1/3
Sum                       3/20

Group assignments: Choose one industry depending on your interest. Indian company.

- What kind of analytics and AI adoption use cases and trends


- Submit a report
- Word count - , pages-
- Date -



Note – Uber Case , A/B Testing

 Descriptive Analytics – Dashboards , Visualization and Storytelling with data


 Predictive Analytics – Forecast (out of time), Predict
 Prescriptive Analytics and Cognitive Analytics – Business Experiments - PDCA

What does prescriptive analytics involve?

- Prescribing what is best; if the best is not possible, then what is better. For example, is '2 min'
better than '5 min'? Is 'Pool' preferred over 'Express' by customers?
- To do above comparison – one need hypothesis.
- In prescriptive one identify the metric for business benefits , data required for analysis and
hypothesis
- One class of tools that mostly comes under prescriptive analytics is optimisation tools, e.g. Google
Maps, Llamasoft, CPLEX solver. Optimisation tools are the core tools used in prescriptive analytics.

Simulation :

3 factors A, B, C affect the yield of the product; each can have five different settings (5 levels), so 5 ×
5 × 5 = 125 scenarios.

Find yield for all 125 setting. The maximum yield will be at certain combination of A, B and C.

Scenarios 1 to 125:

Scenario   Yield   A, B, C settings
1          Y1      A=1, B=1, C=1
2          Y2      A=1, B=0, C=0
3          Y3
4          Y4
5          Y5
6          Y6
…

When all 125 scenarios are considered, it is called a full factorial experiment.

An alternate option is fractional factorial experiments, which Taguchi adopted (using orthogonal designs).

PA – in 1930 by R. A. Fisher >>> 1940 >>> 1970

Digital Twins

Digital Twins help in reducing time for experiments:

For every asset we need to create unique digital twin

Failure detection will need one digital twin for a particular asset.


For fault classification another model can be used by comparing various pumps under different
conditions and their performance

Digital twin helps in history of asset and future performance and planning spares , repairs , PM.

- Every individual asset has a unique digital twin

Uber Case Study :

Core Issues – Increase revenue, improve customer satisfaction (not to lose customer).

Where is Business experiment useful

- Uncertainty about whether the offer / solution will lead to the result, i.e. in the Uber case, whether
a 2-minute or a 5-minute wait time is better
- Competition will offer solution and grab the opportunity
- Trade-off in choices

What will affect the experiment?

- Geography difference - cold , summer


- Reliability – Reliability of the ride availability with 5 mins / 2mins
- Language – customer is in language friendly regions
- Running business – impact on already running business
- Preference to status quo (prospect theory) –
- Contamination of the data and spill-over effects – lurking variables, confounders – multiple
factors together affect the outcome, e.g. attrition is not just influenced by Learning and
Development. One cannot differentiate the effect of one factor from another.
- Treatment – Similar markets (units) , one unit is put in treatment , other unit is not put in
treatment (called controlled unit) and the one can compare the 2 units to see impact of
treatment.
o For example – introduce 5 mins waiting time in San Francisco (Treatment market)
and not introduce in LA (Control market). Both market are similar in terms of similar
rate of growth , user preference , profiles , demographic so experiment can be done
for these markets. Compare revenue and customer satisfaction for treatment and
control market
o Another example – introduce L&D in one unit (treatment) and not introduce in
another unit (control). Then compare the attrition in both units and revenue in both
units. But this is very risky because perception of one unit can get influenced by bias
and perform worst then the treatment unit because of this bias

Trade-off between a 5-min and a 2-min wait: enhanced revenue vs customer dissatisfaction.

Chain Effects in Uber Case:

- If a data-based culture is to be built, then let the data speak. So if the decision is taken to go without
the experiment, then the data-based culture will be affected.


- Also it will affect customers – many customers may not like it.

Page 9 of the case: "Optimising toward your assumptions is foolish" – if some part of an assumption is wrong,
then the entire optimisation may be wrong. Check all your assumptions; any wrong assumption can make
the optimisation redundant.

3 Kinds of experiment:

1. User-level A/B testing: to compare 2 options; the 2 options are presented to comparable groups and
we see which option does well. User-level A/A testing: to rule out lurking effects, i.e. to know if there
are factors one needs to control.
2. Synthetic Control Experiments: Take cities 1 to 12. From these choose city 1 and figure out which
city is closest to it; if it is 10 then we pick 1 and 10 taken together. So there will be groups such as
(1, 10), (4, 7), etc. Unless the effect of a factor is large, it will not be picked up.
3. Switchback: every 160 mins switch back from 2 mins to 5 mins, back to 2 mins, to 5 mins. It is
done for 4 weeks. The switchback helps to capture data from peak hours on various
days. This reduces data contamination and spillover effects.



Analytics for Decision making

Background Concepts – class notes

AI- as it emerged:

Statistics (SQC-1930, TQM-1940, Six Sigma-1980) → Machine Learning → AI

 1890-1900 Statistics → 1930 (SQC) → 1940-50 (TQM) → 1980 (Six Sigma); 1940s
(Machine Learning) → 1970 (SQL, DB2) → 1990 Analytics → 1990 onwards emergence
of AI. Major adoption started from 2000 onwards.
 Problem solving in real-world and delivering value : Business value from Analytics and AI
 Robotics and the phygital future : dramatic change in interaction with virtual technologies
 Challenges – job loss, AI-induced bias, and ethical concerns.

Skills affected by AI / Analytics


Super Skilled Will not be affected
Skilled Will be affected
Semi-skilled Will be affected
Unskilled Will not be affected

Statistics:
 Quantifying uncertainty (reference book – Lies, Damn Lies and Statistics) – Statistics is the
only subject that is known to human kind to read the mind of GOD
 Decision making under uncertainty
 Use Cases :
o Statistical quality control (SQC)
o Design of Experiments (DoE) in agriculture and product design
o TQM
o Six-Sigma


Control charts: Running process → small number of items (rational sub-group) → inspected and
measurements are obtained → if within control then NO ACTION → if out of control then TAKE
CORRECTIVE ACTION → do Root Cause Analysis (RCA) → remove the cause → bring the process
back to control

- In TOYOTA production system (TPS) process is stopped for identifying root cause and
correcting the process (Andon systems)
- Measurement interval can be minutes

- Design of Experiments (DoE) was invented by Sir R. A. Fisher. It was adopted by P. C. Mahalanobis.
- Is Method A better than Method B ??
- E = EVM voting (new method), P = Ballot polling (control group): take 6 random polling
stations and use the conventional and new methods. Compare the treatment and control groups to see
if the final output is the same.
Vote
Method E P E P P E


Business Intelligence

- 3rd industrial revolution – widespread adoption of information technology


- Availability of large volume of data to support informed decision making
- Efficient storage and retrieval of data from multiple databases

Descriptive Analytics (What happened, deals with past, historical data)

- Rapid answer to questions such as


o Which is the best performing stock in the market today? How many transactions
were done in our branch?
- Analysis involves mostly numerical and graphical data summaries
- Graphical data analysis can also be done for storytelling
- It can pose questions that can be taken up with other tools to find solutions or support decision
making, e.g. buying a stock, increasing transactions.

Big Data : first 3 are benefits , 4th is challenge and 5th is value from Big data

1. Volume - large volume of data. Terabytes, Exabytes. Large data need processing capabilities
, hardware infrastructure , technical skills etc. to manage this data
2. Variety – data of multiple types - Structured , unstructured , numerical, images, text, sound,
videos, log files
3. Velocity – Flowing data / streaming data
4. Veracity – accuracy becomes a big problem and leads to data manipulation, fake news
5. Value

Case Study :

- When is fixed package pricing better? When is à la carte pricing better?
- Fixed package pricing is useful when there is low variability in expenses (cataract, gall bladder, CABG)
>> standardized procedures, fixed duration, high volume.
o In fixed pricing, the price of treatment for the customer gets stabilized. The patient is told upfront
the price of treatment – for example 100k – which becomes the reference point for the patient, and the
patient will perceive the loss/gain as per prospect theory.


o Fixed prices are prone to ethical challenges, e.g. a healthier patient gets discharged earlier
and the hospital gains while the patient loses. Sometimes the hospital may be in a hurry to
discharge a patient early, even though recovery is not complete, to save costs. In the example
below, the other four patients effectively paid extra for the one patient whose cost was far
higher than the average. A hospital with a large volume of patients will benefit because not
every patient incurs a cost of 200, and the inflow of patients costing less than 62 averages out
the cost incurred and keeps the actual average cost at or below 62 (see the sketch after the table)

Actual cost                          20    30    40    200    20
Average package price
(paid by patient)                    62    62    62     62    62
Extra paid                           42    32    22   -138    42
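A small sketch of the arithmetic behind the table above, showing how the package price of 62 averages out across patients with different actual costs. The cost figures are the illustrative ones from the table.

```python
# Fixed package price vs actual treatment cost, following the illustrative table above.
actual_costs  = [20, 30, 40, 200, 20]
package_price = sum(actual_costs) / len(actual_costs)     # 62: the break-even package price

for cost in actual_costs:
    extra = package_price - cost                           # positive: patient paid more than their own cost
    print(f"actual cost {cost:>3}: patient pays {package_price:.0f}, extra paid = {extra:+.0f}")

# With a large patient volume, the occasional very expensive case (200) is absorbed
# as long as the average cost of the patient mix stays at or below the package price.
avg_cost = sum(actual_costs) / len(actual_costs)
print(f"average actual cost = {avg_cost:.0f} vs package price = {package_price:.0f}")
```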

Types of Charts

- Bar charts: categorical data (nominal/ordinal); use when there are only a few categories

- Line charts: for showing how a variable moves over time (trends)

- Horizontal bar chart: Y = f(X), Y = dependent, X = independent. Example – comparing projects
A1, A2 and A3 on their timelines.
- Vertical bar chart: when the sales of 3 products P1, P2 and P3 are to be shown, i.e. when there is
no Y = f(X) relationship


- Pie charts: percentages of a whole, compositional data (parts of a whole). Use for 5-6 categories
at most, else the chart gets cluttered. Don't use for very small categories.

[Pie chart example: Sales split across four categories – 30, 20, 10 and 5]
- Histogram: for showing the frequency distribution of a numerical variable across bins
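A brief matplotlib sketch of the chart choices above (bar for a few categories, line for a trend, pie for parts of a whole, histogram for a distribution). All data values are made up for illustration and matplotlib is assumed to be installed.

```python
# Illustrative chart-type choices (hypothetical data). Requires matplotlib.
import random
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2, 2, figsize=(8, 6))

# Bar chart: a few categories (e.g. sales of products P1-P3).
ax[0, 0].bar(["P1", "P2", "P3"], [120, 95, 140])
ax[0, 0].set_title("Bar: few categories")

# Line chart: a variable over time (trend).
ax[0, 1].plot(range(1, 13), [100 + 5 * m + random.uniform(-8, 8) for m in range(12)])
ax[0, 1].set_title("Line: trend over months")

# Pie chart: compositional data, parts of a whole (5-6 slices at most).
ax[1, 0].pie([30, 20, 10, 5], labels=["A", "B", "C", "D"], autopct="%1.0f%%")
ax[1, 0].set_title("Pie: parts of a whole")

# Histogram: distribution of a numerical variable.
ax[1, 1].hist([random.gauss(50, 10) for _ in range(500)], bins=20)
ax[1, 1].set_title("Histogram: distribution")

plt.tight_layout()
plt.show()
```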

Telling Data Story

- Prospect > Conflict > Presentation of data around prospect and conflict
- Question for the assignment: focus of the presentation – what packages would you offer at the
hospital? Cover this as storytelling in the PPT.

ANALYTICS
Senior Management Program (Batch 10)

Indian Institute of Management, Ahmedabad

Created by:
Neeraj Sinha ([email protected])
Indian Institute of Management Ahmedabad - Senior Management Program (BL 10)

Contents
Introduction
  Analytics
  Online Videos Library
  Regression
  Binary Logistic Regression
  Prospect Theory and Value Function
  Design of Experiments and Level of Significance
  Case Studies
Descriptive Analytics
  Examples of Descriptive Analytics
    ❖ Traffic and Engagement Reports
    ❖ Financial Statement Analysis
    ❖ Demand Trends
    ❖ Aggregated Survey Results
    ❖ Progress to Goals
Predictive Analytics
  Model Development and Deployment Framework
  Exponential Smoothing
  Net Present Value (NPV)
    ❖ NPV Formula
    ❖ Significance of NPV Value
Prescriptive Analytics
  Examples of Prescriptive Analytics
    ❖ Venture Capital: Investment Decisions
    ❖ Sales: Lead Scoring
    ❖ Content Curation: Algorithmic Recommendations
    ❖ Banking: Fraud Detection
    ❖ Product Management: Development and Improvement
    ❖ Marketing: Email Automation
Appendix


Introduction
Analytics

Analytics is a body of knowledge consisting of statistical, mathematical, and operations


research techniques; artificial intelligence techniques such as machine learning and deep
learning algorithms; data collection and storage; data management processes such as data
extraction, transformation, and loading (ETL); and computing and big data technologies such
as Hadoop, Spark, and Hive that create value by developing actionable items from data. The
primary macro-level objectives of analytics are problem solving and decision making.
"Analytics help organizations to create value by solving problems effectively and assisting in
decision making."
"In God we trust; all others must bring Data" - Edwards Deming
Business Analytics (BA) refers to the tools, techniques, and processes for continuous
exploration and investigation of past data to gain insights and help in decision making.
Business Analytics is an integration between science, technology and business context that
assists data driven decision making. Many organizations, whether small or big, have a large
number of low-hanging fruits that can be targeted with simple analytics tools. In this section,
we will discuss the framework of building a center of analytics excellence.

The following are the different components of analytics.

a) Descriptive Analytics focuses on what happened in the past.


b) Predictive Analytics focuses on what will happen in the future.
c) Prescriptive Analytics focuses on what action to take.


Components of Business Analytics


DESCRIPTIVE ANALYTICS
• Communicates the hidden facts and trends in the data.
• Simple analysis of data can lead to business practices that result in financial rewards.

PREDICTIVE ANALYTICS
• Predicts the probability of occurrence of a future event.
• Helps organizations to plan their future course of action.
• Most frequently used type of analytics across several industries.

PRESCRIPTIVE ANALYTICS
• Assists users in finding the optimal solution to a problem.
• In most cases, provides an optimal solution/decision to the problem.
• Inventory management is one of the problems that are most frequently addressed.


Online Videos Library

Analytics (IIM Bangalore - https://swayam.gov.in/) 2:01:14


What is Analytics? 0:04:27
Why do we need Analytics ? 0:04:10
Game Changers and Innovators 0:04:55
Descriptive Analytics 0:05:51
Predictive Analysis 0:03:19
Case Study : Package Pricing at DAD Hospital 0:04:02
Introduction to Simple Linear Regression (SLR) 0:04:34
Importance of Simple Linear Regression 0:03:33
Types of Regression 0:05:32
SLR Model Building 0:03:15
SLR: OLS Estimation 0:03:45
SLR Beta Coefficient Interpretation 0:03:12
Simple Linear Regression Model Variation 0:02:35
Regression R-Square 0:04:37
Regression Standard Error Estimation 0:02:27
Regression T-Test 0:04:42
Demo: Excel Analysis 0:09:16
Demo: SPSS Analysis 0:07:43
Introduction to Multiple Linear Regression (MLR) 0:03:45
Interpretation of Multiple Regression Co-efficients 0:07:34
MLR Model Diagnostics 0:06:11
MLR: Tests for Model Significance 0:04:23
Dummy Variables 0:06:34
Interaction Variables 0:05:36
MLR Model Deployment 0:05:16
Exponential Smoothing 0:05:36


During the early period of the 20th century, many companies were taking business decisions
based on 'opinions' rather than on proper data analysis (which probably
acted as a trigger for Deming's quote). Opinion-based decision making can be very risky and
often leads to incorrect decisions. One of the primary objectives of business analytics is to
improve the quality of decision-making using data analysis and this makes the following
considerations very critical:
• The foundations of analytics and how it is becoming a competitive strategy for many
organizations.
• The importance of analytics in decision making and problem solving.
• How different organizations are using analytics to gain insights and add value.
• How organizations are using analytics to generate solutions and products.
• Different types of analytical models.
• Framework for analytics model development and deployment.

Regression

Regression is one of the most important techniques in predictive analytics since many
prediction problems are modelled using regression. It is one of the supervised learning
algorithms, that is, a regression model requires the knowledge of both the dependent and
the independent variables in the training data set.
Simple Linear Regression (SLR) is a statistical model in which there is only one
independent variable and the functional relationship between the dependent variable and
the independent variable is linear in the regression coefficients.

• The fundamentals of Simple Linear Regression and its application in predictive


analytics.
• The difference between causal and association relationship.
• Various stages in regression model building, and the underlying assumptions of
regression models.
• Method of Ordinary-Least-Squares (OLS) for estimation of regression parameters.
• Interpretation of regression parameters under different functional forms.
• How to carry out regression model diagnostics and validate the model.
• SLR model deployment.
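A minimal sketch of OLS estimation for simple linear regression, using a small made-up dataset. The closed-form slope and intercept below are the standard OLS estimates; the variable meanings in the comments are only illustrative.

```python
# Simple linear regression fitted by ordinary least squares (illustrative data).
from statistics import mean

x = [1, 2, 3, 4, 5, 6]                 # predictor (e.g. area, in '000 sq ft)
y = [15, 22, 28, 39, 44, 53]           # response  (e.g. price, in lakhs)

x_bar, y_bar = mean(x), mean(y)
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar                 # intercept

y_hat = [b0 + b1 * xi for xi in x]
ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
ss_tot = sum((yi - y_bar) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot                # R-square: share of variation explained by the model

print(f"y_hat = {b0:.2f} + {b1:.2f} x,  R-square = {r2:.3f}")
```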


Binary Logistic Regression

Binary logistic regression models the relationship between a set of independent variables
and a binary dependent variable. It is useful when the dependent variable is dichotomous
(taking two distinct values, e.g. yes/no) in nature, like death or survival, absence or
presence, and so on. Independent variables can be categorical or continuous, for example,
gender, age, income, or geographical region. Binary logistic regression models the dependent
variable through the logit of p, where p is the probability that the dependent variable takes
the value 1.
Binary logistic regression models are used across many domains and sectors. For example, it
can be used in marketing analytics to identify potential buyers of a product, in human
resources management to identify employees who are likely to leave a company, in risk
management to predict loan defaulters, or in insurance, where the objective is to predict
policy lapse. All of these objectives are based on information such as age, gender, occupation,
premium amount, purchase frequency, and so on. In all these objectives, the dependent
variable is binary, whereas independent variables are categorical or continuous.
Why can't we use linear regression for binary dependent variables? One reason is that the
distribution of Y is binary and not normal, as is assumed in linear regression. Also, the left-
hand side and the right-hand side of the model will not be comparable if we use linear
regression for a binary dependent variable.

Linear regression is suitable for predicting a continuous value such as the price of property
based on area in square feet. In such a case the regression line is a straight line. Logistic
regression on the other hand is used for classification problems, which predict a probability
that a dependent variable Y takes a value of 1, given the values of predictors. In binary logistic
regression, the regression curve is a sigmoid curve.


Statistical Model for Binary Logistic Regression

The statistical model for binary logistic regression with a single predictor can be written as
ln(p / (1 − p)) = b0 + b1X. The small p is the probability that the dependent variable 'Y' will take
the value 1, given the value of 'X', where X is the independent variable. The natural log of "p
divided by one minus p" is called the logit or link function. The right-hand side is a linear
function of X, very much like a linear regression model.

The logistic regression model for a single predictor can be extended to a model with
k predictors: ln(p / (1 − p)) = b0 + b1X1 + … + bkXk. In this equation, p is the probability that Y
equals one given the X's, where Y is the dependent variable and the X's are independent variables.
b0 to bk are the parameters of the model; they are estimated using the maximum likelihood
method, which we'll discuss shortly.

Note that the LHS of the model can lie between −∞ and +∞.

Article: Binary Logistic Regression – An introduction


Prospect Theory and Value Function


Prospect theory is a theory of behavioral economics and behavioral finance that was
developed by Daniel Kahneman and Amos Tversky in 1979. Based on results from controlled
studies, it describes how individuals assess their loss and gain perspectives in an asymmetric
manner. For example, for some individuals, the pain from losing $1,000 could only be
compensated by the pleasure of earning $2,000. Thus, contrary to the expected utility
theory (which models the decision that perfectly rational agents would make), prospect
theory aims to describe the actual behavior of people.
In the original formulation of the theory, the term prospect referred to the predictable results
of a lottery. However, prospect theory can also be applied to the prediction of other forms of
behaviors and decisions.
Prospect theory postulates the existence of a value function V(z) that depends on the
deviation from a reference point (determined in terms of wealth), and a function π(p) that
transforms probabilities into the decision weights applied in decision-making.
In Figure A, the function π(p), defined on the interval [0, 1], transforms the probabilities into
decision weights. It is monotonically increasing and intersects the 45-degree line near the
origin, so that the weights used in decision-making are smaller than the probabilities, except
for very low probabilities.

The value function V(z), which is shown in Figure B, depends on changes in wealth
relative to a reference situation and has an S-shape: it is concave for gains (risk
aversion) and convex for losses (risk seeking). Its slope changes abruptly at the origin,
being steeper for losses than for gains.

[Figure A: probability weighting function π(p). Figure B: value function V(z)]


This theory helps us understand why the same individual can be risk-averse in some situations
and risk-seeking in others. It explains behaviours observed in the economy such as the
disposition effect, or why the same person may buy both a lottery ticket and an insurance
policy.

Value Function

z = deviation from the reference point

V(z) = z^0.88                 if z ≥ 0 (when looking at a gain)

V(z) = −2.25 · (−z)^0.88      if z < 0 (when looking at a loss)

π(p) = p^0.65 / [p^0.65 + (1 − p)^0.65]^(1/0.65)
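A small sketch evaluating the value function V(z) and the probability weighting function π(p) given above; the example gain/loss amounts and probabilities are arbitrary.

```python
# Prospect-theory value and probability-weighting functions from the notes above.

def value(z: float) -> float:
    """V(z) = z^0.88 for gains, -2.25 * (-z)^0.88 for losses."""
    return z ** 0.88 if z >= 0 else -2.25 * (-z) ** 0.88

def weight(p: float, gamma: float = 0.65) -> float:
    """pi(p) = p^g / [p^g + (1-p)^g]^(1/g)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Losses loom larger than gains of the same size (roughly 437 vs -982 here):
print(value(1000), value(-1000))

# Small probabilities are over-weighted; moderate and large ones are under-weighted:
for p in (0.01, 0.10, 0.50, 0.90):
    print(f"p = {p:.2f} -> decision weight = {weight(p):.3f}")
```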


Design of Experiments and Level of Significance

The phrase statistically significant indicates that a result is different enough that it is unlikely
to be the result of random chance. In comparing the average height of students in the class to
the average height of students in the entire school, a statistically significant result indicates
that the class mean height is different enough from the school mean height that the observed
difference is not likely to be the result of random chance.

In statistical tests, statistical significance is determined by citing an alpha level, or the


probability of rejecting the null hypothesis when the null hypothesis is true. For this example,
alpha, or significance level, is set to 0.05 (5%).

Lady Tasting Tea - Inferential Statistics and Experimental Design


If the probability is below a certain threshold (known as the level of significance) then we
can say that the observed result is unlikely to be due to chance alone.

Case Studies
Sessions | Topics | Case Studies & Reading Material
Session 1 | Analytics and AI in Action | From Analytics to Artificial Intelligence; Package Pricing at Mission Hospital
Session 2 | Descriptive Analytics | Why is Data Visualization Important, What is Important in Data; Forecasting Demand for Food at Apollo Hospitals
Session 3 | Predictive Analytics - I | The Marketing Mix (Chapter 8 of How to Make the Right Decision)
Session 4 | Predictive Analytics - II | Concentrix Corporation: Improving Customer Persistency for an Indian Insurance Company; How to Make the Right Decision, Ch. 7
Session 5 | Prescriptive Analytics | Innovation at Uber: The Launch of Express Pool; A Refresher on A/B Testing


Descriptive Analytics
Descriptive analytics is the process of using current and historical data to identify trends and
relationships. It’s sometimes called the simplest form of data analysis because it describes
trends and relationships but doesn’t dig deeper.
Descriptive analytics is relatively accessible and likely something your organization uses
daily. Basic statistical software, such as Microsoft Excel or data visualization tools, such as
Google Charts and Tableau, can help parse data, identify trends and relationships between
variables, and visually display information.

Descriptive analytics is especially useful for communicating change over time and uses
trends as a springboard for further analysis to drive decision-making.
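A tiny sketch of the kind of descriptive summary such tools produce; the column names and values are invented for illustration, and pandas is assumed to be available.

```python
# Descriptive analytics: summarise what happened, using simple aggregates.
import pandas as pd

sales = pd.DataFrame({
    "region": ["East", "East", "West", "West", "North", "North"],
    "month":  ["Jan", "Feb", "Jan", "Feb", "Jan", "Feb"],
    "units":  [120, 135, 90, 88, 150, 161],
})

print(sales["units"].describe())                   # mean, spread, min/max of units sold
print(sales.groupby("region")["units"].sum())      # totals by region
print(sales.groupby("month")["units"].mean())      # average units per month (change over time)
```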

Examples of Descriptive Analytics

❖ Traffic and Engagement Reports

❖ Financial Statement Analysis

❖ Demand Trends

❖ Aggregated Survey Results

❖ Progress to Goals

HBS Article: WHAT IS DESCRIPTIVE ANALYTICS? 5 EXAMPLES


Predictive Analytics
In the analytics capability maturity model (ACMM), predictive analytics comes after
descriptive analytics and is the most important analytics capability. It aims to predict the
probability of occurrence of a future event such as forecasting demand for products/services,
customer churn, employee attrition, loan defaults, fraudulent transactions, insurance claim,
and stock market fluctuations.
Anecdotal evidence suggests that predictive analytics is the most frequently used type of
analytics across several industries. The reason for this is that almost every organization
would like to forecast the demand for the products that they sell, prices of the materials used
by them, and so on.
The use of predictive analytics can reveal relationships that were previously unknown and are
not intuitive.
It is critical to understand:
• Framework for predictive analytics model development and deployment.
• Problems that companies try to solve using predictive analytics.
• Different classification of algorithms used in Machine Learning.


Model Development and Deployment Framework

Problem or Opportunity Identification


• Domain knowledge is very important at this stage of the analytics project.
• This will be a major challenge for many companies who do not know the capabilities of
analytics.

Collection of Relevant Data


• Once the problem is identified clearly, the project team should identify and collect data.
• This may be an iterative process since "relevant data" may not be known in advance for many
analytics projects.
• The existence of ERP systems will be very useful at this stage.

Data Pre-processing
• Data preparation and data pre-processing form a significant part of any analytics project.
• This will include data imputation and the creation of additional variables such as interaction
variables and dummy variables in the case of predictive analytics projects.

Model Development
• Analytics model building is an iterative process that aims to find the best model.
• Several analytics tools and solution procedures are used to find the best model in this stage.

Communication
• The communication of the analytics output to the top management and clients plays a crucial
role.
• Innovative data visualization techniques may be used in this stage to present the output.


Exponential Smoothing
Exponential smoothing was first suggested in the statistical literature, without reference to
previous work, by Robert Goodell Brown in 1956 and then expanded by Charles C. Holt in
1957. Exponential smoothing is a widely used technique for smoothing time series data
using the exponential window function. The controlling input of the exponential smoothing
calculation is the smoothing factor, or smoothing constant.

Whereas in the simple moving average the past observations are weighted
equally, exponential smoothing assigns exponentially decreasing weights over
time. It is an easily learned and easily applied method for making forecasts based
on prior assumptions by the user, such as seasonality. Exponential smoothing is generally
used for the analysis of time-series data.

Video: Exponential Smoothing Methods - Christian Hofer, Walton College


Forecasting

Exponential smoothing is generally used to make short-term forecasts; longer-term
forecasts using this technique can be quite unreliable. More recent observations are given larger
weights by exponential smoothing methods, and the weights decrease exponentially as the
observations become more distant. These methods are most effective when the parameters
describing the time series are changing slowly over time.

Exponential Smoothing Methods


There are three main methods to estimate exponential smoothing. They are:

• Simple or single exponential smoothing


• Double exponential smoothing (Out of SMP Syllabus)
• Triple exponential smoothing (Out of SMP Syllabus)


Simple or Single exponential smoothing


If the data has no trend and no seasonal pattern, then this method of forecasting the time
series is typically used. This method uses weighted moving averages with exponentially
decreasing weights.

The single exponential smoothing formula is given by:

Ft+1 = forecast for the future time period t+1

yt = actual demand observation in the immediately preceding time period t

Ft = forecast for the immediately preceding time period t

λ = smoothing constant (0 < λ < 1); (1 − λ) acts as the dampening factor

Ft+1 = λ yt + (1 − λ) Ft

     = λ yt + (1 − λ)[λ yt−1 + (1 − λ) Ft−1]

     = λ yt + λ(1 − λ) yt−1 + (1 − λ)² Ft−1

     = λ yt + λ(1 − λ) yt−1 + (1 − λ)²[λ yt−2 + (1 − λ) Ft−2]

     = λ yt + λ(1 − λ) yt−1 + λ(1 − λ)² yt−2 + (1 − λ)³ Ft−2

Ft+1 = λ yt + λ(1 − λ) yt−1 + λ(1 − λ)² yt−2 + λ(1 − λ)³ yt−3 + … + (1 − λ)ⁿ Ft−n+1
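A short sketch of the single exponential smoothing recursion above, applied to an invented demand series with λ = 0.3. Initialising the first forecast with the first observation is one common convention, assumed here.

```python
# Single exponential smoothing: F[t+1] = lam * y[t] + (1 - lam) * F[t]
def exp_smooth_forecasts(y, lam=0.3):
    forecasts = [y[0]]                     # initialise F1 with the first observation (a common choice)
    for obs in y:                          # each pass produces the forecast for the next period
        forecasts.append(lam * obs + (1 - lam) * forecasts[-1])
    return forecasts                       # forecasts[i] is the forecast for period i+1

demand = [200, 212, 205, 220, 228, 218, 225]          # hypothetical demand series
f = exp_smooth_forecasts(demand, lam=0.3)
for t, (obs, fc) in enumerate(zip(demand, f), start=1):
    print(f"period {t}: actual = {obs}, forecast = {fc:.1f}")
print(f"forecast for the next (unobserved) period: {f[-1]:.1f}")
```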


Net Present Value (NPV)


Net present value (NPV) is the difference between the present value of cash inflows and the
present value of cash outflows over a period of time. NPV is used in capital budgeting and
investment planning to analyze the profitability of a projected investment or project.

❖ NPV Formula
If there’s one cash flow from a project that will be paid one year from now, then the
calculation for the NPV of the project is as follows:

NPV=Today’s value of the expected cash flows−Today’s value of invested cash

NPV = Future Value / (1 + r)^n − Initial Investment

where:

r=Required return or discount rate,


n=Number of time periods

❖ Significance of NPV Value


NPV accounts for the time value of money and can be used to compare the rates of return of
different projects or to compare a projected rate of return with the hurdle rate required to
approve an investment. The time value of money is represented in the NPV formula by the
discount rate, which might be a hurdle rate for a project based on a company’s cost of capital.
No matter how the discount rate is determined, a negative NPV shows that the expected rate
of return will fall short of it, meaning that the project will not create value.

Key Takeaways

• Net present value (NPV) is used to calculate the current value of a future stream of
payments from a company, project, or investment.
• To calculate NPV, you need to estimate the timing and amount of future cash flows
and pick a discount rate equal to the minimum acceptable rate of return.
• The discount rate may reflect your cost of capital or the returns available on
alternative investments of comparable risk.
• If the NPV of a project or investment is positive, it means its rate of return will be
above the discount rate.
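A quick sketch of the NPV calculation described above; the cash flows and the 10% discount rate are made-up numbers.

```python
# Net Present Value: discount each future cash flow back to today and subtract the investment.
def npv(rate, cashflows, initial_investment):
    return sum(cf / (1 + rate) ** n for n, cf in enumerate(cashflows, start=1)) - initial_investment

# Hypothetical project: invest 1000 today, receive 400 at the end of each of the next 3 years.
value = npv(rate=0.10, cashflows=[400, 400, 400], initial_investment=1000)
print(f"NPV = {value:.2f}")   # about -5.3: slightly negative, so the project just fails the 10% hurdle
```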

Video: Net Present Value (NPV) - Edspira


Prescriptive Analytics
Prescriptive analytics is the process of using data to determine an optimal course of action.
By considering all relevant factors, this type of analysis yields recommendations for next
steps. Because of this, prescriptive analytics is a valuable tool for data-driven decision-
making.
Machine-learning algorithms are often used in prescriptive analytics to parse through large
amounts of data faster—and often more efficiently—than humans can. Using “if” and “else”
statements, algorithms comb through data and make recommendations based on a specific
combination of requirements. For instance, if at least 50 percent of customers in a dataset
selected that they were “very unsatisfied” with your customer service team, the algorithm
may recommend additional training.
It’s important to note: While algorithms can provide data-informed recommendations, they
can’t replace human discernment. Prescriptive analytics is a tool to inform decisions and
strategies and should be treated as such. Your judgment is valuable and necessary to provide
context and guard rails to algorithmic outputs. At your company, you can use prescriptive
analytics to conduct manual analyses, develop proprietary algorithms, or use third-party
analytics tools with built-in algorithms.
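A toy sketch of the "if/else" style of recommendation described above. The survey data, the 50 percent threshold and the recommendation text mirror the example in the paragraph and are purely illustrative.

```python
# Rule-based prescriptive step: recommend an action from aggregated survey responses.
responses = ["very unsatisfied", "satisfied", "very unsatisfied", "neutral",
             "very unsatisfied", "unsatisfied", "very unsatisfied", "very unsatisfied"]

share_very_unsatisfied = responses.count("very unsatisfied") / len(responses)

if share_very_unsatisfied >= 0.50:
    print("Recommendation: schedule additional training for the customer service team.")
else:
    print("Recommendation: no action; keep monitoring satisfaction scores.")
```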

Examples of Prescriptive Analytics

❖ Venture Capital: Investment Decisions

❖ Sales: Lead Scoring

❖ Content Curation: Algorithmic Recommendations

❖ Banking: Fraud Detection

❖ Product Management: Development and Improvement

❖ Marketing: Email Automation

HBS Article: WHAT IS PRESCRIPTIVE ANALYTICS? 6 EXAMPLES


Appendix:
Additional Content 1:01:55
Analytics in Finance 0:07:15
Analytics in Manufacturing 0:03:05
Analytics in Healthcare 0:02:50
Analytics in Telecommunication 0:01:00
Analytics in Supply Chain 0:01:17
Digital Analytics 0:05:17
Analytics in IT 0:02:01
Partial and Part Correlation 0:04:37
Partial F-Test and Variable Selection Method 0:03:34
Multi-Collinearity 0:04:56
MLR Model Building 0:03:48
Demo: Data Description 0:03:59
Demo: Data Model Development 0:13:23
Demo: Model Validation 0:04:53



Compiled by: Amit Goel & Neeraj Sinha
Current State of Mind (Unstructured)

[Mind-map slide: exam-preparation items – open/closed book, Jamovi, theory, core concepts,
numericals, internet access, readings, case studies, laptop/e-notes, books, MCQ, exam pattern,
paper notes/cheat sheet, Excel sheet, grades (S/S+), calculators – sorted into what lies within the
span of control and what lies outside the control span]

Agenda | Duration | Presenter
Analytics – Introduction: Descriptive, Predictive, Prescriptive; Regression, Logistic Regression;
Exponential Smoothing; NPV (Net Present Value); Design of Experiments & Level of Significance | 15 mins | Neeraj Sinha
Readings Summary – A Refresher on A/B Testing (done); From Analytics to Artificial Intelligence;
Chapter 7: Predictive Modelling; Chapter 8: The Marketing Mix; Why is Data Visualization Important,
What is Important in Data | 15 mins | Amit Goel
Case Studies Summary – Concentrix Corporation (Indian Insurance Company); Forecasting Demand for
Food at Apollo Hospitals; Innovation at Uber: The Launch of Express POOL; Package Pricing at
Mission Hospital | 15 mins | Neeraj Sinha
Numericals | 20 mins
Format of the Exams + Multiple Choice Questions (MCQs) | 15 mins
Questions | 10 mins | Open House
Analytics – Neeraj Sinha

Business Analytics (BA) refers to the tools, techniques, and processes for continuous
exploration and investigation of past data to gain insights and help in decision making.

Descriptive Analytics
• Communicates the hidden facts and trends in the data.
• Simple analysis of data can lead to business practices that result in financial rewards.

Predictive Analytics
• Predicts the probability of occurrence of a future event.
• Helps organizations to plan their future course of action.
• Most frequently used type of analytics across several industries.

Prescriptive Analytics
• Assists users in finding the optimal solution to a problem.
• In most cases, provides an optimal solution/decision to the problem.
• Inventory management is one of the problems that are most frequently addressed.

Key techniques: Descriptive – Mean, Median, Mode, Linear Regression, Coefficient of
determination (R²), Multiple Regression; Predictive – Auto-regression, Exponential Smoothing,
Binary Logistic Regression, Multi-collinearity, Gradient Descent; Prescriptive – A/B Testing

Regression
Regression is one of the most important techniques in predictive analytics since many prediction
problems are modelled using regression. It is one of the supervised learning algorithms, that is, a
regression model requires the knowledge of both the dependent and the independent variables in
the training data set.

Binary Logistic Regression
Binary logistic regression models the relationship between a set of independent variables and a
binary dependent variable. It is useful when the dependent variable is dichotomous (taking two
distinct values, e.g. yes/no) in nature, like death or survival, absence or presence, and so on.

Statistically Significant
The phrase statistically significant indicates that a result is different enough that it is unlikely to be
the result of random chance. If the probability is below a certain threshold (known as the level of
significance), then we can say that the result is unlikely to be due to chance.

Exponential Smoothing
Exponential smoothing is generally used for the analysis of time-series data over the short term. If
the data has no trend and no seasonal pattern, then this method of forecasting the time series is
typically used. This method uses weighted moving averages with exponentially decreasing weights.

Net Present Value (NPV)
Net present value (NPV) is the difference between the present value of cash inflows and the present
value of cash outflows over a period of time. NPV is used in capital budgeting and investment
planning to analyze the profitability of a projected investment or project.
Readings Summary
Amit Goel
Data Visualization, What is important
• Simply means drawing graphical displays to show the data
• The importance of data visualization lies in analyzing complex data, identifying patterns, and
extracting valuable insights. Simplifying complex information and presenting it visually enables
decision-makers to make informed and effective decisions quickly and accurately.

• Presentation graphics: basic graphical presentation of the data; it is meant for the masses and
explanatory text accompanies the data. Examples: newspapers, TV, advertisements.
• Exploratory graphics: multiple graphs for a limited audience; the intent is detailed analysis with
bar charts, histograms or other detailing. The graphs provide more insight. Examples: statistical or
business modelling graphs, trend views.
From Analytics to Artificial Intelligence
• Analytics is the process of discovering, interpreting and communicating significant patterns in data

• Three eras of analytics


1. Analytics 1.0 –era of artisanal analytics (business intelligence) – descriptive and diagnostic analytics
• Sales, customer data, interactions and process data were collected, aggregated and then analysed.
• Used for internal insights and business decision support.
2. Analytics 2.0 – the era of big data analytics – Included Predictive analytics
• Massive amounts of data, ranging from gigabytes to petabytes, were collected.
• Goal shifted from internal decision support to data products. Built for customer use.
3. Analytics 3.0 – the era of data economy analytics (connected devices) – included prescriptive analytics.
• Combination of traditional business intelligence, big data and IoT distributed through network.
• Companies to transform their business models and cultures with extensive use of analytics.
• Large-scale companies create data and analytics-based products, and analytical activities are increasingly "industrialized," often with thousands of
machine-learning models.
4. Analytics 4.0 –the era of artificial intelligence
• Analytics is embedded and automated; massive amounts of data with robotic processes in place.
• Aims at conscious, human-like intellectual thinking.
• Capability to predict and prescribe solutions, helping large companies take corrective actions. Examples: predictive maintenance, finding patterns and
abnormalities, supply chain management, etc.
Refresher on AB testing
• Part of prescriptive analytics

• A/B testing in its simplest sense is an experiment on two variants to see which performs better based on a
given metric.

• Typically, two consumer groups are exposed to two different versions of the same thing to see if there is a
significant difference in metrics like sessions, click-through rate, and/or conversions.

• Considered the most basic kind of randomized controlled experiment.

• Example: two different ads and the response to each, or button design for website subscription
• Red vs blue
• Small vs big
• Arial font vs another font

• Avoid mistakes
• Make sure tests run their full course and are not cut short.
• Avoid too many metrics – that leads to spurious correlations.
• Do enough retesting to eliminate outliers.
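A compact sketch of comparing two ad variants with a two-proportion z-test. The visitor counts and conversion numbers are invented, and the 1.96 cut-off corresponds to a 5% two-sided significance level.

```python
# A/B test: did variant B convert better than variant A? (two-proportion z-test)
import math

visitors_a, conversions_a = 5000, 250      # variant A: 5.0% conversion
visitors_b, conversions_b = 5000, 300      # variant B: 6.0% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

print(f"A = {p_a:.1%}, B = {p_b:.1%}, z = {z:.2f}")
if abs(z) > 1.96:                           # 5% two-sided significance level
    print("Difference is statistically significant: prefer the better-performing variant.")
else:
    print("No significant difference detected: keep the test running or collect more data.")
```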
Chapter 7 (Predictive Modelling)
How to make the Right Decision – Arnab Laha

• A regression is a statistical technique that relates a dependent variable to one or more independent (explanatory)
variables. Used in finance, investing and other disciplines
• Regression analysis is used for one of two purposes: predicting the value of the dependent variable when information
about the independent variables is known or predicting the effect of an independent variable on the dependent variable
• Different Types of Regression Models
1. Linear Regression
• Relationship between two variables, expressed as a linear equation
• One dependent variable and one independent variable. Example – weight and height
• Y = c + b·X (Y = dependent, c = constant, b = regression coefficient, X = independent)
2. Logistic Regression
• Estimates the probability of an event occurring in future. Binary outcome – yes /no, pass/ fail, true/ false etc.
• Represented by sigmoid curve.
• Example, political results, college admissions, bacterial populations
(p = probability of response = 1)
Chapter 7 (Predictive Modelling) contd/--
How to make the Right Decision – Arnab Laha

• True Positive (TP): the truth is positive, and the test predicts a positive. The person is sick, and the test accurately reports this.

• True Negative (TN): the truth is negative, and the test predicts a negative. The person is not sick, and the test accurately reports this.

• False Negative (FN): the truth is positive, but the test predicts a negative. The person is sick, but the test inaccurately reports that they are not.

• False Positive (FP): the truth is negative, but the test predicts a positive. The person is not sick, but the test inaccurately reports that they are.

• The false positive rate is calculated as FP/(FP+TN), where FP is the number of false positives and TN is the number of true negatives (FP+TN
being the total number of negatives). It's the probability that a false alarm will be raised: that a positive result will be given when the true value is
negative.

• The false negative rate — also called the miss rate — is the probability that a true positive will be missed by the test. It's calculated as
FN/(FN+TP), where FN is the number of false negatives and TP is the number of true positives (FN+TP being the total number of positives).

• The true positive rate (TPR, also called sensitivity) is calculated as TP/(TP+FN). TPR is the probability that an actual positive will test positive.

• The true negative rate (also called specificity) is the probability that an actual negative will test negative. It is calculated as TN/(TN+FP). (A small
sketch computing these four rates appears after this list.)

• CHAID (Chi-Square Automatic Interaction Detection) – examines the relationship between the dependent and independent variables.

• Example – TV show popularity vs length, star cast, show timing, etc.

• Uses splits into groups that are homogeneous within themselves and heterogeneous from the others.
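A short sketch computing the four rates defined above from a hypothetical confusion matrix.

```python
# Confusion-matrix rates from hypothetical test results.
tp, fn = 80, 20      # 100 actual positives: 80 caught, 20 missed
tn, fp = 870, 30     # 900 actual negatives: 870 correctly cleared, 30 false alarms

tpr = tp / (tp + fn)   # sensitivity / true positive rate
tnr = tn / (tn + fp)   # specificity / true negative rate
fpr = fp / (fp + tn)   # false positive rate (false alarms)
fnr = fn / (fn + tp)   # false negative rate (miss rate)

print(f"TPR (sensitivity) = {tpr:.2f}, TNR (specificity) = {tnr:.2f}")
print(f"FPR = {fpr:.3f}, FNR = {fnr:.2f}")
```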
Chapter 8 (The Marketing Mix)
How to make the Right Decision – Arnab Laha

• Marketing mix includes four traditional P’s of marketing strategy (Product, pricing,
promotion, and place)
• Seven key patterns of response to advertisement (Current, shape, competitive, carryover,
dynamic, content and media effects)
• Current effect – change in sales caused by an ad at the time it is released. Relatively small
impact
• Carryover effect – change in sales due to past ads. Can have a short or long duration
• Shape effect – change in sales in response to the level of intensity of ads in the same time period.
• Competitive effect – the effect of competitors responding to a new innovation or advertisement by
entering the market.
• Dynamic Effect – Effect of advertisement that change with time. (wear-in/ increases,
wear-out / fades away)
• Content Effect – Changes in sales due to changes in content of advertisement.
• Media effect – Changes in sales due to type of media used (TV, radio, magazine,
internet…)
• Linear regression is used to calculate sales
• Gross Rating Points (GRP) = Reach x Frequency of a campaign.
• Elasticity = (Average Price/ Average Sales) x Coefficient
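A tiny sketch of the two formulas in the last bullets (GRP and price elasticity from a regression coefficient). The reach, frequency, price, sales and coefficient values are invented.

```python
# GRP = Reach x Frequency; price elasticity = (average price / average sales) x price coefficient.
reach, frequency = 0.40, 5            # campaign reached 40% of the target audience, 5 times on average
grp = reach * 100 * frequency         # Gross Rating Points (reach expressed in percentage points)

avg_price, avg_sales = 50.0, 2000.0   # hypothetical averages
price_coefficient = -12.0             # slope of sales on price from a linear regression (assumed)
elasticity = (avg_price / avg_sales) * price_coefficient

print(f"GRP = {grp:.0f}")                        # 200 GRPs
print(f"Price elasticity = {elasticity:.2f}")    # -0.30: a 1% price rise cuts sales by about 0.3%
```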
Case Studies Summary
Neeraj Sinha
Package Pricing at Mission Hospital
• Case summary: The Mission Hospital case study focuses on devising optimal package pricing
strategies that balance affordability and quality healthcare. By analyzing market trends and cost
structures, the study aims to offer transparent and competitive pricing, enhancing patient value and
hospital sustainability.
• Analytics type: Descriptive Analytics
• Key statistical concepts used: Mean, Median, Mode, Linear Regression, Coefficient of
determination (R²), Multiple Regression, Mean Absolute Error (MAE), Mean Squared Error (MSE),
Root Mean Squared Error (RMSE)

Forecasting Demand for Food at Apollo Hospitals
• Case summary: Dr. Ananth Rao's study on forecasting demand for food at Apollo Hospitals employs
data-driven models to enhance inventory management and patient satisfaction by accurately
predicting food demand based on historical consumption patterns and external factors. This research
informs efficient resource allocation and demand projection strategies within healthcare catering
services.
• Analytics type: Predictive Analytics
• Key statistical concepts used: Auto-regression, Exponential Smoothing

Concentrix Corporation – Indian Insurance Co.
• Case summary: The Concentrix Corporation case study showcases their successful partnership with
an Indian insurance company, highlighting how Concentrix's tailored customer service solutions led
to improved customer experiences, streamlined processes, and heightened operational efficiency.
Through advanced technology and strategic insights, the collaboration underscores the potential for
transformative impacts in the insurance sector.
• Analytics type: Predictive Analytics
• Key statistical concepts used: Binary Logistic Regression, Multi-collinearity, Gradient Descent

Innovation at Uber: The Launch of Express POOL
• Case summary: The "Innovation at Uber: Launch of Express POOL" case study illustrates Uber's
pioneering approach to transportation with the introduction of Express POOL, a shared ride service
that optimizes routes and reduces costs for passengers. This innovation exemplifies Uber's
commitment to redefining urban mobility through technology-driven solutions.
• Analytics type: Prescriptive Analytics
• Key statistical concepts used: A/B Testing
Numericals
Neeraj Sinha
FORMULA 1: REGRESSION (SLR & MLR)

logit(p) = log[p / (1 − p)] = b0 + b1X1 + … + bnXn

FORMULA 2: BINARY LOGISTIC REGRESSION

p / (1 − p) = e^logit(p)
1 + p / (1 − p) = 1 + e^logit(p)
1 / (1 − p) = 1 + e^logit(p)
1 − p = 1 / (1 + e^logit(p))
p = 1 − 1 / (1 + e^logit(p))
p = e^logit(p) / (1 + e^logit(p)) = e^(b0 + b1X1 + … + bnXn) / (1 + e^(b0 + b1X1 + … + bnXn))

where:
p = probability that the response = 1
X1, X2, …, Xn = independent variables
b0, b1, …, bn = respective coefficients as determined by the regression
c = chosen probability cut-off
If p ≥ c, predicted response = Yes; if p < c, predicted response = No
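A worked sketch of Formula 2: given assumed coefficients b0, b1, b2, predictor values and a cut-off c, compute p and the predicted response. All numbers are hypothetical.

```python
# Binary logistic regression prediction using the formulas above (hypothetical coefficients).
import math

b0, b1, b2 = -4.0, 0.8, 1.5        # assumed coefficients from a fitted model
x1, x2 = 3.0, 1.0                  # predictor values for one case
c = 0.5                            # chosen probability cut-off

logit_p = b0 + b1 * x1 + b2 * x2                   # logit(p) = b0 + b1*X1 + b2*X2
p = math.exp(logit_p) / (1 + math.exp(logit_p))    # p = e^logit / (1 + e^logit)

prediction = "Yes" if p >= c else "No"
print(f"logit(p) = {logit_p:.2f}, p = {p:.3f}, predicted response = {prediction}")
```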
FORMULA 3: EXPONENTIAL SMOOTHING

Ft+1 = λ yt + (1 − λ) Ft

FORMULA 4: NET PRESENT VALUE (NPV)

NPV = s / (1 + r)^n − Initial Investment, where s is the future cash flow received after n periods and r is the discount rate

QUESTIONS
