Lean
• If the ratio of Performance (P) to Expectations (E) (P/E) is less than 1, it indicates a lower-
quality product, which can lead to customer attrition.
• At P/E < 1, customers may start exploring other options due to dissatisfaction.
• Customer loyalty is obtained when P/E > 1, leading to repeat sales for the organization.
• For example, during World War II, all products were required to go through mandatory
inspections before use to ensure they were "fit for use."
• ISO plays a crucial role in developing and publishing international standards for quality and
other areas.
Cost Implications:
• In Lean Six Sigma, the focus is on reducing defects to minimize rework and associated costs.
These notes summarize the key concepts related to quality, customer satisfaction, delight, and the
importance of meeting expectations in Lean Six Sigma. Quality is not just about meeting basic
functionality but exceeding customer expectations to achieve delight and loyalty. Additionally,
adhering to standards and minimizing rework costs are critical aspects of quality management.
• Dr. Joseph M. Juran: A prominent quality management expert who developed a model
known as "Juran on Quality Improvement." He stressed the importance of understanding
customer needs, product features, and the process for quality improvement.
• W. Edwards Deming: A renowned statistician and quality management expert, best known
for the Plan-Do-Study-Act (PDSA) cycle, also known as the Deming Cycle. He played a
crucial role in promoting quality management in Japan.
• The Japanese invited these scientists to improve the quality of their manufacturing
processes.
• The focus in Japan was on defect-free manufacturing and achieving total quality.
• DMAIC (Define, Measure, Analyze, Improve, Control) is a structured process used in Lean Six
Sigma for improving existing processes.
• DMADV (Define, Measure, Analyze, Design, Verify) is another structured process used for
designing and developing new products or improving the performance of existing products.
• Dr. Juran's Quality Triangle: An approach that focuses on quality, cost, and time as key factors
in achieving quality improvements.
These notes summarize the key figures, models, and concepts related to quality and process
improvement, with a particular emphasis on the contributions of Walter A. Shewhart, Joseph M.
Juran, and W. Edwards Deming, as well as the quality improvement initiatives in Japan.
• Dr. Kaoru Ishikawa: Known for the development of the Fishbone Diagram (Ishikawa diagram)
used for root cause analysis in quality management.
• Philip B. Crosby: Known for the "Absolutes of Quality Management," a set of principles that
emphasize prevention of defects and the importance of conformance to requirements.
Quality Control and Customer Requirements:
• Quality is often represented as the relationship between a dependent variable (Y) and
independent variables (X).
• Y represents the key process output variable (kPOV) and is governed by customer
specification limits, including the Upper Specification Limit (USL) and Lower Specification
Limit (LSL).
• X represents key process input variables (kPIV) and is governed by process control limits,
including the Upper Control Limit (UCL) and Lower Control Limit (LCL).
• A scenario is described where the goal is to produce 500 cakes per day with an initial defect
rate of 30 defective cakes out of 500.
• The cost per cake is calculated, resulting in a total cost of INR 7,950. The selling price (SP) per
cake is INR 20, and the total revenue is INR 10,000, leading to a profit of INR 2,050.
• After process improvement, the defect rate is reduced to 10 defective cakes out of 500,
resulting in a higher profit of INR 2,350.
• Further process improvement leads to zero defective cakes and a profit of INR 2,500,
highlighting that fewer defects lead to higher profitability.
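The profit arithmetic above can be sketched in code. The cost model used here (a base cost of INR 15 per cake plus roughly INR 15 of rework per defective cake) is inferred from the quoted profits, not stated explicitly in the notes:

```python
# Profit model inferred from the cake example. Assumptions: INR 15 base cost
# per cake, INR 15 rework cost per defective cake, INR 20 selling price.
def daily_profit(cakes=500, defective=0, base_cost=15, rework_cost=15, price=20):
    revenue = cakes * price                              # INR 10,000 for 500 cakes
    cost = cakes * base_cost + defective * rework_cost   # rework inflates total cost
    return revenue - cost

print(daily_profit(defective=30))  # 2050
print(daily_profit(defective=10))  # 2350
print(daily_profit(defective=0))   # 2500
```

Each defect avoided adds its rework cost straight back to profit, which is the point the notes make.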
These notes summarize the key concepts related to quality management, customer requirements,
process improvement, and the impact of defect reduction on profitability. They also mention Dr.
Kaoru Ishikawa's Fishbone Diagram and Philip B. Crosby's quality management principles.
• The primary goal is to minimize variations and improve the quality of products or services.
• Reducing defects involves identifying factors (X1, X2, X3) that affect the key process output
variable (Y).
• In a baking scenario, factors like Baking Temperature, Baking Time, and Quantity of Baking
Soda can influence the quality of the cake.
• By improving efficiency and reducing waste, the Lean approach complements Six Sigma's
focus on reducing defects.
• Six Sigma aims for improved process effectiveness, reducing variations and defects.
• It is often measured by the standard deviation and expressed as defects per million
opportunities (DPMO).
• A high Sigma level (e.g., 6 Sigma) indicates very high process performance: a yield of 99.99966%, or about 3.4 defects per million opportunities.
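The sigma-level-to-DPMO relationship can be checked numerically. This sketch assumes the conventional 1.5 sigma long-term shift used in standard Six Sigma conversion tables:

```python
from math import erfc, sqrt

def dpmo_from_sigma(sigma_level, shift=1.5):
    """Defects per million opportunities for a short-term sigma level,
    applying the conventional 1.5-sigma long-term shift (one-sided tail)."""
    tail = 0.5 * erfc((sigma_level - shift) / sqrt(2))  # P(Z > sigma_level - shift)
    return tail * 1_000_000

print(round(dpmo_from_sigma(6), 1))  # 3.4 DPMO, i.e. 99.99966% yield
print(round(dpmo_from_sigma(3)))     # 66807 DPMO at 3 sigma
```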
Industry Applications:
• The application of Six Sigma varies across industries, with some industries requiring higher
criticality.
• Industries like airlines, healthcare, and space agencies have critical processes that benefit
from Six Sigma.
Historical Context:
• In 1987, Motorola officially launched its Six Sigma program, making significant strides in
quality management.
• The term "Six Sigma" was coined by Bill Smith, an engineer at Motorola.
• General Electric (GE) promoted Six Sigma across its business verticals, expanding its
adoption.
• Ford's assembly-line car manufacturing pioneered ideas of flow, efficiency, and waste
reduction that were later formalized as Lean in the Toyota Production System.
These notes provide an overview of Six Sigma, its aim to reduce defects, factors affecting defect
reduction, the Lean approach, effectiveness, industry applications, and the historical development of
Six Sigma.
• Apex Council: Analyzes and reviews projects, allocates budgets and resources, and provides
oversight.
• Project Champion/Sponsors: Senior members who support projects, such as the CFO who
helps justify project importance.
• Project Owner: Responsible for driving high-quality output from the process.
• Master Black Belt (MBB): Develops Six Sigma strategy and is an expert in Six Sigma
methodologies.
• Black Belt (BB): Drives projects and is responsible for process improvement; may have a
team of Green Belts.
• Green Belts (GB): Lead projects and are part of the project team.
• "Stars" are individuals with high skill and a positive attitude (high will), making them strong candidates for project teams.
• Process mapping techniques and creating a project charter are commonly used in the Define
phase.
• There are two types of customers in Six Sigma: internal (within the organization) and external
(actual product or service users).
• Understanding and satisfying the Voice of the Customer (VOC) is vital to the success of a Six Sigma project.
These notes provide an overview of key roles in a Six Sigma organization, the selection of team
members, and the activities involved in the Define phase of a Six Sigma project, with a focus on
understanding customer needs and the critical-to-quality elements.
• VOC represents the feedback, complaints, and expectations that customers have, which an
organization may struggle to meet.
• Understanding VOC is essential for improving products, services, and customer satisfaction.
• Common sources of VOC include surveys such as Customer Satisfaction (CSAT) and Net Promoter Score (NPS).
• It's important to work with subject matter experts and sales teams to convert VOC into
specific, actionable statements.
• Grouping VOC into categories or buckets can help identify common themes.
• The Kano Model, introduced by Dr. Noriaki Kano, helps prioritize VOC based on its criticality
and impact on customer satisfaction.
1. Basic Needs: VOC that, when fulfilled, prevent dissatisfaction. More of these doesn't
significantly increase satisfaction.
2. Performance Needs (One-Dimensional): More is better and helps the product compete with
others.
3. Delighters: Fulfilling these needs delights the customer, but if not met, the customer may not
be aware of their absence.
These notes provide insights into understanding VOC, its sources, challenges in handling VOC, and
how the Kano Model can be used to prioritize VOC based on customer criticality and satisfaction.
• Customer satisfaction can be classified into three categories based on the Kano Model:
1. Basic Needs (Must Be): These are critical and expected by customers. Failure to meet
these needs results in dissatisfaction. Examples are safe arrival and an accurate
booking system with reliable uptime.
2. Performance Needs (More the Better): Fulfilling these needs leads to increased
satisfaction and competitiveness but does not cause dissatisfaction if not fully met.
Examples include free upgrades.
3. Delighters: Meeting these needs delights customers, but they may not be aware of
their absence. They provide a competitive advantage. An example is extra seat comfort.
• CTQ (Critical to Quality) represents key measurable characteristics of selected VOC that are critical to the quality
of a project.
• For example, if the VOC is "App crashes 4 out of 5 times," the CTQ may involve metrics like
the number of incidents reported for app crashes or the total number of times the app
crashed and the total number of logins.
• Process mapping involves visualizing and understanding the current state (as-is) of a process.
Various techniques can be used, including flowcharts and swim lanes.
These notes provide an overview of customer satisfaction, the Kano Model, CTQ aspects, and
techniques for process mapping, emphasizing their importance in understanding and improving
quality and processes.
Process Mapping:
• Process mapping is a technique used to visualize and understand the flow of a process. It
helps in identifying suppliers, inputs, processes, and customers (SIPOC) in a process.
• Process mapping often includes various elements such as activities, decision points, data
recording, and terminals.
• There are different ways to represent process mapping, including flowcharts, swim lanes,
and other visual diagrams.
• This example illustrates a pizza making process with suppliers (e.g., flour and water), inputs
(e.g., dough and pizza base), and various process steps (e.g., adding sauce, toppings, cheese,
baking, etc.).
• The final product (pizza) is delivered to the customer, in this case, named Rahul.
Project Charter:
• A project charter is a document that describes the objectives, timelines, and the people or
teams involved in a project.
• It is used by the Apex Council to make decisions about projects, allocate resources, and
review project progress.
These notes provide an overview of process mapping techniques, illustrated with a pizza making
process, and the purpose and use of a project charter in project management.
Project Charter Elements:
1. Minimum Requirements: The project charter should include essential components, such as
the project title, business case, problem statement, project goals, project scope, project
timeline, and project team members.
2. Business Case:
• The business case in the project charter should address the urgency, impact, and
potential savings of the project. It helps justify why the project is necessary.
3. Problem Statement:
• The problem statement in the charter defines the pain point, including what it is,
where it's occurring, its duration, magnitude, and how it was identified.
4. Project Goals:
• The project goals should be specific, measurable, achievable, relevant, and time-
bound (SMART). They describe the current state and the target state for critical-to-
quality (CTQ) aspects.
5. Project Scope:
• The project scope sets the boundaries and limits of the project, defining what will
and won't be addressed.
6. Project Timeline:
• The project charter should clearly define the start and end dates for each phase of
the project, especially in the context of the DMAIC methodology.
7. Project Team Members:
• Identify the individuals who will be part of the project team, designating their roles
according to Six Sigma roles for assigning responsibilities.
ORACI Chart:
• The ORACI chart is used to define roles and responsibilities within the project team. It stands
for Owner, Responsible, Accountable, Consulted, and Informed.
Project Activities:
• Activities such as creating the charter, collecting data, conducting measurement system
analysis, and other tasks should be outlined to guide the project's progression.
These notes provide an overview of the key elements found in a project charter, which is a crucial
document for initiating and managing Six Sigma projects, ensuring that they are well-defined, well-
organized, and have clearly defined roles and responsibilities.
APMI Chart:
Project Charter:
• After forming the project charter, it needs to be approved or signed off to conclude the
Define phase.
Measure Phase:
• The Measure phase in Six Sigma is the second phase in the DMAIC methodology, following
the Define phase.
• It involves defining the operational definition of Critical to Quality (CTQ) characteristics and
standardizing parameters.
1. Data Types & Data Collection Plan: Determining what types of data (e.g.,
continuous/variable or discrete/attribute) will be collected and how the data will be
gathered.
2. Samples & Sampling Strategies: Planning how samples will be selected and the
strategy for collecting data.
3. Measurement System Analysis (MSA): Verifying that the measurement system produces reliable, consistent data.
4. Process Stability & Capability: Evaluating the stability and capability of the process.
Data Types:
• Two primary types of data are continuous/variable data (measurable characteristics like
height or weight) and discrete/attribute data (categorical data like yes/no or pass/fail).
These notes provide an overview of APMI charts, the transition from the project charter to the
Measure phase, and the key elements in the Measure phase, including data types and measurement
system analysis. Understanding data types and ensuring a reliable measurement system are crucial in
Six Sigma projects.
Types of Data:
1. Ordinal Data: Data that is arranged in a specific order, such as ratings, rankings, or grades.
2. Nominal Data: Categorical data with no inherent order, such as counts by category
(the number of bags, students, or colleges).
• Data can be classified as continuous or discrete based on the numerator of the ratio.
1. Why Collect Data: It's important to define the purpose and reasons for collecting data.
2. What Data Needs to Be Collected: Determine which specific data points are relevant to the
project.
3. How to Collect Data: Develop a data collection plan, which may involve a pilot data
collection plan to ensure effectiveness.
4. Ensure Consistency and Stability: It's critical to maintain data consistency and stability
throughout the collection process.
Sampling Techniques:
1. Simple Random Sampling: Every unit has an equal opportunity to get selected as a
sample.
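Simple random sampling can be illustrated in a few lines. The population here is a hypothetical list of 100 unit IDs:

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible
population = list(range(1, 101))          # hypothetical unit IDs 1..100
sample = random.sample(population, k=10)  # each unit equally likely; no repeats
print(sample)
```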
These notes provide an overview of different data types, the data collection process in Six Sigma, and
key sampling techniques used to collect data effectively. Understanding data types and employing
appropriate sampling methods are crucial for accurate data collection in Six Sigma projects.
Sampling Techniques:
• Stratified Sampling: The population is divided into strata or groups, and samples are collected from each
stratum.
Factors Affecting Sample Size:
1. Confidence Level: Decided based on the criticality of the process. Higher criticality
typically requires a larger sample size for a higher confidence level.
2. Error Margin/Precision: A smaller error margin indicates high precision, and a larger
margin indicates lower precision.
3. Standard Deviation of the Population (σ): Reflects how far the values in the
population are from their mean value.
• n = [(Z * σ) / Δ]^2
• Where:
• Z represents the critical value for a specific confidence level (e.g., 1.96 for a
95% confidence level).
• Δ represents the desired error margin (precision).
• The formula is used to determine the appropriate sample size based on the specified
confidence level, standard deviation, and precision.
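The continuous-data sample-size formula n = [(Z * σ) / Δ]^2 can be sketched as follows; the Z, σ, and Δ values below are illustrative, not from the notes:

```python
from math import ceil

def sample_size_continuous(z, sigma, delta):
    """n = ((Z * sigma) / delta)^2, rounded up to a whole unit."""
    return ceil(((z * sigma) / delta) ** 2)

# 95% confidence (Z = 1.96), population sigma = 10, error margin = 2
print(sample_size_continuous(1.96, 10, 2))  # 97
```

Tightening the error margin or raising the confidence level both drive the required sample size up.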
These notes provide an overview of different sampling techniques used in Six Sigma, as well as the
key factors that influence sample size determination. Understanding the relationship between
sample size and confidence level, error margin, and standard deviation is crucial for effective data
collection and analysis in Six Sigma projects.
Discrete Data:
• It is used for data that can be counted and cannot take on an infinite number of values.
• The sample size (n) for discrete data can be calculated using the formula:
• n = (Z / Δ)^2 × p × (1 - p)
• Where:
• Z is the critical value for the desired confidence level.
• p is the estimated proportion (e.g., defect rate) in the population.
• Δ is the desired error margin.
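A sketch of the discrete-data sample-size calculation, using the standard proportion-based formula n = (Z / Δ)^2 × p(1 - p); p = 0.5 gives the most conservative (largest) size:

```python
from math import ceil

def sample_size_discrete(z, delta, p=0.5):
    """n = (Z / delta)^2 * p * (1 - p); p = 0.5 is the worst case."""
    return ceil((z / delta) ** 2 * p * (1 - p))

# 95% confidence (Z = 1.96), 5% error margin, unknown proportion
print(sample_size_discrete(1.96, 0.05))  # 385
```

The result of 385 is the familiar "survey of about 400 people" figure seen in polling.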
Data Distribution:
• Describes how data values are spread across their range.
Central Tendency:
• Measured by the mean, median, and mode, which describe the center of the data.
Dispersion/Variation of Data:
• Measured by the range, variance, and standard deviation, which describe the spread of the data.
Graphical Tools:
• Graphical tools like histograms and box plots are used to visualize data distributions.
• Right Skewed (Positively Skewed): Data values cluster on the left side with a tail to
the right.
• Left Skewed (Negatively Skewed): Data values cluster on the right side with a tail to
the left.
• The normal distribution curve, or Gaussian distribution curve, is bell-shaped and represents a
symmetrical distribution of data.
• Mean, median, and mode are equal and located at the center of the curve.
• About 68% of data falls within one standard deviation of the mean.
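The empirical-rule percentages can be verified directly from the normal distribution function:

```python
from math import erf, sqrt

def within_k_sigma(k):
    """Fraction of a normal distribution within k standard deviations of the mean."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(k, round(within_k_sigma(k) * 100, 2))  # ~68.27, 95.45, 99.73
```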
These notes provide an overview of discrete data, sample size calculation for discrete data, data
distribution, measures of central tendency and variation, graphical tools, and different shapes of data
sets, emphasizing the significance of the normal distribution curve. Understanding data distribution
and its characteristics is essential for data analysis in Six Sigma projects.
Normality Test:
• Normality tests are used to check if a dataset follows a normal distribution, which is a bell-
shaped distribution.
Anderson-Darling Test:
• In the Anderson-Darling test, the null hypothesis is that the data is normally distributed.
• If the p-value is greater than 0.05 (the typical significance level), there is no evidence
against normality, so the data is treated as normally distributed.
• If the p-value is less than or equal to 0.05, the null hypothesis is rejected, indicating that the data is non-normal.
Outliers:
• Outliers are data points that are significantly different from the rest of the data and may
distort statistical analyses.
Boxplot:
• A boxplot is a graphical tool used to identify outliers and visualize the distribution of data.
• It consists of:
• Upper Adjacent Value: A value marking the upper limit of potential outliers.
• Interquartile Range (IQR): The range between the upper quartile (Q3) and lower
quartile (Q1).
• Lower Adjacent Value: A value marking the lower limit of potential outliers.
Use of IQR:
• The Interquartile Range (IQR) is calculated as Q3 - Q1, where Q1 and Q3 are the lower and
upper quartiles, respectively.
• Comparing data to these limits helps in assessing if the process is within the required quality
parameters.
• It can assist in quick comparisons, identifying outliers, and detecting skewness in data.
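The IQR outlier fences can be sketched with the standard 1.5 × IQR rule; the data values below are made up for illustration:

```python
import statistics

data = [10, 12, 11, 13, 12, 11, 14, 12, 13, 40]  # hypothetical readings; 40 looks suspect

q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr   # below this: potential outlier
upper_fence = q3 + 1.5 * iqr   # above this: potential outlier
outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(outliers)  # [40]
```

Points beyond the fences are exactly what a boxplot draws as individual dots beyond the whiskers.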
These notes provide an overview of normality tests, the Anderson-Darling test, dealing with outliers
using boxplots and IQR, and the importance of USL and LSL in quality control. Understanding the
normality of data and addressing outliers are essential steps in data analysis in Six Sigma projects.
Measurement System Analysis (MSA):
• MSA is a process used to assess the adequacy and reliability of a measurement system.
• The goal is to statistically verify that the measurement system provides unbiased results with
minimal variability and accurately measures the factor being examined.
1. Process: The process refers to the method or procedure used to collect measurements. It
should be consistent and standardized.
2. People/Operators: The individuals taking the measurements; differences in operator
technique can introduce variation.
3. Gauge/Measuring Instrument: The quality and calibration of the measuring instrument are
critical for accurate measurements.
• Measurement system error refers to the difference between the observed reading and the
actual reading. It represents the variation introduced by the measurement system itself.
1. Overall Variation: The total variation in measurements, including both the variation from the
measurement system and the variation from the part being measured.
Reproducibility:
• If measurements are reproducible, different operators using the same instrument on the
same items should produce consistent results.
Repeatability:
• If measurements are repeatable, the same operator and instrument should produce
consistent results on repeated measurements.
Sources of Variation:
• High variation can be observed due to factors like operator methods, different operators, or
problems with samples.
Conclusion: Measurement System Analysis is crucial in Six Sigma to ensure that measurement
systems are reliable and produce consistent and accurate results. It helps identify and reduce
measurement errors to improve data quality and decision-making in process improvement projects.
Gauge R&R (for Continuous Data):
• Gauge R&R is used to assess the performance of a measurement system for continuous data.
Calculation:
• Gauge R&R helps evaluate the contribution of variation in the overall system.
• If the variation is less than 10%, the measurement system is generally accepted.
• If the variation is between 10% and 30%, acceptance is a business decision based on criticality and cost.
• If the variation is greater than 30%, the measurement system is usually rejected.
Attribute Agreement Analysis (for Discrete/Attribute Data):
• If the Kappa value is 0.9 or above, the measurement system is typically accepted.
• If the Kappa value is between 0.8 and 0.9, it becomes a business call.
• If the Kappa value is below 0.8 (e.g., 0.2), the measurement system is often rejected.
• These factors are part of the Attribute Agreement Analysis and are used to assess the
agreement and consistency of measurements among different appraisers (operators).
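The acceptance rules above can be captured as a small decision helper. The threshold bands follow the notes; the 10-30% "business call" band for Gauge R&R is the common industry convention, assumed here:

```python
def assess_gauge_rr(pct_variation):
    """Rule-of-thumb verdict for Gauge R&R % contribution (continuous data)."""
    if pct_variation < 10:
        return "accept"
    if pct_variation <= 30:
        return "business call"
    return "reject"

def assess_kappa(kappa):
    """Rule-of-thumb verdict for Kappa (attribute agreement analysis)."""
    if kappa >= 0.9:
        return "accept"
    if kappa >= 0.8:
        return "business call"
    return "reject"

print(assess_gauge_rr(8), assess_kappa(0.2))  # accept reject
```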
These notes provide an overview of the evaluation of measurement systems, considering both
continuous data (Gauge R&R) and discrete/attribute data (Attribute Agreement Analysis). The criteria
for acceptance or rejection are based on the level of variation and the Kappa value. Measurement
system analysis is crucial to ensure reliable and consistent measurements in Six Sigma projects.
Process Stability and Control Charts: Process stability, often assessed using control charts, is a
fundamental concept in statistical process control. Control charts are tools developed by Dr. Walter
A. Shewhart to monitor and maintain the stability and predictability of a process. Here are key points
related to process stability and control charts:
Stable Process:
1. In a stable process, all variations in the process are expected to fall within control limits,
specifically the Upper Control Limit (UCL) and the Lower Control Limit (LCL).
2. These variations are due to common causes, also known as common cause variation.
Common Cause Variation:
1. Common cause variations are inherent to the process and are part of its natural variation.
2. They are typically consistent and predictable over time, and they are referred to as chronic in
nature.
3. Common cause variations are variations that can be expected in day-to-day operations and
do not indicate any specific problems or anomalies.
Special Cause Variation:
1. Special cause variations are variations that are not part of the natural, common cause
variation in the process.
2. They are typically sporadic and unpredictable and are often referred to as assignable causes.
3. Special cause variations indicate that something unusual or abnormal has occurred in the
process.
Process Improvement and Control Charts:
1. In the context of Six Sigma projects, one of the objectives is to eliminate special cause
variations so that the process remains stable, and then to reduce common cause variation
through process improvement.
2. When special cause variations are identified, an analysis is conducted to identify their root
causes and take corrective actions to address them.
Process Capability:
• The capability of a process can be assessed using various statistical measures and is a critical
component of Six Sigma quality improvement efforts.
Example: Let's consider a production process in a manufacturing facility. The goal is to produce a
specific component with consistent dimensions to meet customer specifications. To assess process
stability, a control chart is used. If the control chart shows that the process is within control limits
(UCL and LCL) and common cause variations are the only source of variation, it indicates a stable
process. However, if special cause variations are observed, an analysis is performed to determine the
root causes and take corrective actions to maintain process stability and improve product quality.
Process capability studies are also conducted to ensure that the process consistently meets customer
requirements for component dimensions.
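The control-limit check described above can be sketched as a simplified individuals chart, with limits at the mean plus or minus three standard deviations. The readings are hypothetical; a production implementation would typically estimate variation from moving ranges instead:

```python
import statistics

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]  # hypothetical dimensions

mean = statistics.fmean(readings)
sigma = statistics.stdev(readings)   # sample standard deviation (simplification)
ucl = mean + 3 * sigma               # Upper Control Limit
lcl = mean - 3 * sigma               # Lower Control Limit
out_of_control = [x for x in readings if x > ucl or x < lcl]
print(out_of_control)  # [] means no special-cause signals; process looks stable
```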
Process Capability for Continuous Data: Process capability is a measure of how well a process can
consistently produce items that meet customer specifications. It assesses whether a process is
"capable" of producing products within the defined specification limits. The capability of a process is
typically determined using statistical measures.
1. LSL (Lower Specification Limit): The lowest acceptable value or limit set by the customer or
quality standards for a particular characteristic. Anything below this limit is considered
unacceptable.
2. USL (Upper Specification Limit): The highest acceptable value or limit set by the customer or
quality standards for a particular characteristic. Anything above this limit is considered
unacceptable.
3. CP (Process Capability Index): A measure that quantifies how well a process can produce
items within specification limits. It is calculated as (USL - LSL) divided by (6σ), where σ
represents the standard deviation of the process. The larger the CP value, the more capable
the process.
4. Voice of Customer (VOC): The customer's requirements and expectations for a specific
product or service. VOC includes the defined specification limits (LSL and USL).
5. Voice of Process (VOP): The actual performance of the process in terms of the characteristic
being measured. It is determined by calculating process parameters, such as the mean and
standard deviation.
• Capable Process: If the process is capable of producing items within specification limits
(between LSL and USL), it is considered a capable process. No major changes are needed to
meet customer requirements.
• Not Capable Process: If the process is not capable of consistently producing items within
specification limits, it is considered not capable. This may lead to rejections or defects.
Process improvements are required to meet customer requirements.
• Just Capable Process: A process whose spread (6σ) exactly matches the specification width,
so that its tails coincide with the LSL and USL, is referred to as just capable (Cp = 1). It can
produce items within specification, but with no margin for drift.
• Highly Capable Process: A highly capable process has a large margin between the process's
spread (6σ) and the specification limits (LSL and USL). It can consistently produce items well
within specification limits, providing a high level of confidence in meeting customer
requirements.
Example: Imagine a manufacturing process that produces pens. The customer's requirement is that
the pens should have a length between 10 cm (LSL) and 10.2 cm (USL). The process's average pen
length (Voice of Process) is measured to be 10 cm, and the process's standard deviation (σ) is
determined to be 0.1 cm.
To assess the process capability (CP), you can use the formula: CP = (USL - LSL) / (6σ) = (10.2 - 10) / (6
* 0.1) = 0.33.
In this case, CP is relatively low, indicating that the process is not highly capable of consistently
producing pens within the specified length range. Process improvements may be necessary to
increase the CP value and meet customer requirements with higher confidence.
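The Cp arithmetic for the pen example can be checked in code:

```python
def cp(usl, lsl, sigma):
    """Process capability index: specification width over process spread (6 sigma)."""
    return (usl - lsl) / (6 * sigma)

# Pen example from the notes: LSL = 10 cm, USL = 10.2 cm, sigma = 0.1 cm
print(round(cp(10.2, 10, 0.1), 2))  # 0.33
```

Halving the standard deviation to 0.05 cm would double Cp to about 0.67, showing why variation reduction is the lever for capability.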
Cpk (Process Capability Index) Calculation: Cpk is a measure used to assess how well a process can
produce items within customer specifications. It takes into account both the mean (average) and
standard deviation of the process and is a more comprehensive indicator of process capability
compared to Cp.
1. Identify the lower specification limit (LSL) and upper specification limit (USL) based on
customer requirements. For example, LSL = 10 and USL = 13.
2. Calculate the process mean (average). In this example, the process mean is 12.
3. Calculate the process standard deviation (σ).
4. Compute Cpk for each limit: Cpk(USL) = (USL - Mean) / (3σ) and Cpk(LSL) = (Mean - LSL) / (3σ).
5. Take the minimum of the two values as Cpk.
Example Calculation: With LSL = 10, USL = 13, Mean = 12, and σ ≈ 0.444 (so 3σ ≈ 1.333),
Cpk(USL) = (13 - 12) / 1.333 = 0.75 and Cpk(LSL) = (12 - 10) / 1.333 = 1.5. The minimum of
the two is 0.75.
Therefore, Cpk = 0.75, indicating that the process is not capable of consistently meeting customer
specifications. The process capability is below the acceptable level (Cpk < 1), and improvements are
needed to meet customer requirements more reliably.
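The minimum-of-two-indices calculation can be sketched generically; the values below are illustrative, not from the notes:

```python
def cpk(usl, lsl, mean, sigma):
    """Cpk = min((USL - mean) / 3*sigma, (mean - LSL) / 3*sigma)."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Illustrative values: an off-center process is penalized on its nearer limit
print(round(cpk(usl=13, lsl=10, mean=12, sigma=1.0), 2))  # 0.33
```

Because the mean sits closer to the USL, the upper one-sided index dominates and pulls Cpk down, which Cp alone would not reveal.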
Sigma level, often denoted as "σ level," is a measure that indicates how well a process can
consistently meet customer specifications. It is related to process capability indices like Cp and Cpk.
The relationship between these terms is as follows:
1. Cp (Process Capability Index): Considers only the spread of the process relative to the
specification width. It is calculated as (USL - LSL) / (6σ).
2. Cpk (Process Capability Index with Respect to the Mean): Cpk also considers the mean of
the process in addition to the spread. It is calculated as the minimum of two values: (USL -
Mean) / (3σ) and (Mean - LSL) / (3σ). Cpk evaluates how well the process center aligns with
the specification limits.
3. Sigma Level (σ level): Sigma level is a measure of process performance expressed in standard
deviations. It indicates how many standard deviations the process mean is away from the
nearest specification limit. A higher sigma level indicates a more capable process. Sigma level
is often used in Six Sigma methodologies.
Example Calculation: Suppose the process mean lies exactly on a specification limit. The
one-sided index for that limit is then 0; for instance, Cpk(USL) = (USL - Mean) / (3σ) = 0 when
the mean equals the USL. Therefore Cpk = Min[Cpk(USL), Cpk(LSL)] = 0.
In this case, the Sigma Level is 0 (short-term sigma level is approximately 3 × Cpk), indicating
that the process is not capable of consistently meeting customer specifications, as the process
mean coincides with one of the specification limits. A higher Sigma Level signifies a more
capable process whose mean sits further from the nearest specification limit.
In the context of attribute data, you are examining characteristics like defects or non-conformities.
These characteristics are binary in nature, meaning they are either present (defective) or absent
(non-defective) in a unit.
• Defect Opportunity: The number of distinct ways a defect can occur within a single unit,
i.e., the opportunities for defects per unit.
Calculations:
1. For a sample of 5,000 units, 329 were found to be defective. The Defects per Unit (DPU) is
calculated as:
DPU = 329 / 5,000 = 0.0658
2. The Parts Per Million (PPM) for this data is:
PPM = DPU × 1,000,000 = 65,800
This means there are approximately 65,800 defects per million units.
3. Using a Sigma & DPMO (Defects Per Million Opportunities) reference table, a DPMO of
65,800 corresponds to a Sigma level of approximately 3.01.
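The attribute-data metrics compute directly; the 5,000-unit example reproduces the 65,800 PPM figure:

```python
def dpu(defectives, units):
    """Defectives per unit for a sample."""
    return defectives / units

def ppm(defectives, units):
    """Parts per million defective: defectives scaled to a million units."""
    return defectives * 1_000_000 / units

print(dpu(329, 5000))  # 0.0658
print(ppm(329, 5000))  # 65800.0
```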
4. For a second data set, the DPMO can be calculated using the formula:
DPMO = (Number of Defects × 1,000,000) / (Number of Units × Opportunities per Unit)
5. Using the DPMO, the corresponding Sigma level is found to be 4.04.
6. For a further data set (the number of Defective Forms is not specified), the same sequence
applies: calculate Defectives per Unit, then PPM, then look up the Sigma level.
These calculations indicate the level of process performance and quality based on the attributes and
defects associated with the data. Higher Sigma levels are indicative of better process quality and
reliability.
Analyze Phase:
The Analyze Phase is a crucial step in the Lean Six Sigma methodology, aimed at identifying the root
causes of problems and understanding the factors contributing to variability in a process. This phase
involves various steps and tools to help achieve process improvement.
1. Process Door Approach: In this step, you take a closer look at the process itself to
understand its flow, inputs, and outputs. It's important to identify where potential issues
might occur.
2. Data Door Approach: Data analysis is a key part of the Analyze Phase. You collect and
analyze relevant data to identify trends, patterns, and potential causes of problems.
3. Segregation of Causes: The causes of problems in a process can be classified into different
categories. This step involves organizing and segregating these causes for further analysis.
4. Validation of Causes: To ensure that the identified causes are indeed contributing to the
problem, a validation process is carried out. This may involve conducting experiments or
additional data analysis.
Types of Causes:
During the Analyze Phase, it's important to distinguish between different types of causes:
1. Non-Controllable Causes: These are factors that cannot be easily controlled or changed
within the scope of the project. They may be external factors that influence the process but
are not directly manageable.
2. Direct Improvement Causes: These are the causes that, when addressed or improved, have a
direct impact on the problem and can be controlled or influenced by process changes.
The Analyze Phase is an essential part of the Lean Six Sigma approach as it guides teams to identify
the causes behind process issues and lays the groundwork for effective solutions. The Cause & Effect
Diagram is a valuable tool for organizing and visualizing these causes, making it easier to prioritize
and address them.
1. Why-Why Analysis: This is a systematic approach used to identify the root causes of
problems. You ask "why" multiple times, often five times, to dig deeper into the underlying
issues. The goal is to not only find the immediate or surface-level cause but to identify the
fundamental reasons for a problem. This approach helps in taking corrective and preventive
actions.
2. Pareto Chart/Diagram: This is a graphical tool used to prioritize and focus efforts on the
most significant causes of a problem. The Pareto principle, often known as the 80/20 rule,
suggests that roughly 80% of effects come from 20% of the causes. In problem-solving, it
means that a majority of problems are caused by a small number of issues. By using a Pareto
chart, you can visualize and prioritize the vital few causes that need attention while leaving
aside the trivial many.
• The Pareto Chart is named after Vilfredo Pareto, an Italian economist. Dr. Joseph M. Juran
introduced this principle into problem-solving approaches, emphasizing that roughly 80% of
problems result from 20% of the causes.
• The chart is a bar graph with causes on the x-axis and their frequency or impact on the y-
axis.
• By analyzing the Pareto Chart, you can identify the most critical issues, focus your efforts on
addressing them, and achieve a more significant impact in problem-solving.
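The Pareto prioritization can be sketched numerically: sort causes by frequency and accumulate percentages until the vital few (roughly 80% of the effect) are covered. The defect categories and counts below are purely illustrative.

```python
# Hypothetical defect counts by cause (illustrative numbers only).
causes = {"Wrong address": 42, "Missing signature": 23, "Late entry": 11,
          "Torn label": 5, "Other": 4}

total = sum(causes.values())
cumulative = 0.0
# Sort causes from most to least frequent and print cumulative percentages.
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:20s} {count:3d}  cum. {100 * cumulative / total:5.1f}%")
```

Here the top two causes account for roughly 77% of all defects, so they are the "vital few" to attack first.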
Lean Principles:
Lean principles are a fundamental part of Lean Six Sigma. They aim to streamline processes,
eliminate waste, and enhance efficiency. The term was popularized by James P. Womack and
Daniel T. Jones, who founded the Lean Enterprise Institute in 1997. Some core principles include:
• Value: Identifying what adds value from the customer's perspective and eliminating what
doesn't.
• Value Stream: Mapping the entire process from start to finish, identifying areas for
improvement.
• Pull: Allowing customers to "pull" products or services as needed rather than pushing excess
inventory.
Lean principles complement Lean Six Sigma by emphasizing efficiency and waste reduction, which
aligns with the Six Sigma focus on reducing defects and variability in processes.
Principles of Lean:
1. Identify Values: The first principle of Lean involves recognizing the value of a product or
service from the customer's perspective. It's about understanding what the customer truly
values and is willing to pay for.
2. Map the Value Stream: Value Stream Mapping is a critical step in Lean. It involves visually
mapping out the entire process, from start to finish, to identify how value flows and where
waste occurs. This helps in understanding the current state of the process.
3. Create Flow: Once you've identified the value stream and potential areas of waste, the next
step is to streamline the process to create a smooth and continuous flow of work. This aims
to eliminate interruptions and delays.
4. Establish Pull: Lean operates on a "pull" system where work or products are produced based
on customer demand. This means you only produce what the customer needs when they
need it, reducing excess inventory and waste.
Value-Related Concepts:
• Value-Adding (VA) Activities: These are activities that directly contribute to the product or
service and are considered valuable from the customer's perspective.
• Non-Value-Adding (NVA) Activities: These are activities that do not add value to the product
or service. Identifying and reducing NVA activities is a key goal of Lean.
• Value-Enabling Activities (VE): These activities, while not directly adding value, are
necessary to complete value-adding activities. They support the value stream.
• Lead Time (LT): The total time from the start of a process to its completion, including both
value-adding and non-value-adding activities.
• Takt Time (TT): The rate at which one must complete an operation to meet customer
requirements. It's calculated by dividing the available time by the customer demand.
For example, if the goal is to produce 1,000 units in 10 days, the Takt Time would be 0.01 days per
unit, which is equivalent to 14.4 minutes per unit. This means you need to complete one unit every
14.4 minutes to meet customer demand.
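The Takt Time example above is a one-line calculation; a small helper makes the unit conversion explicit (assuming, as the example does, that the full 24-hour day is available):

```python
def takt_time_minutes(available_days: float, demand_units: int,
                      minutes_per_day: float = 24 * 60) -> float:
    """Takt Time = available time / customer demand, in minutes per unit."""
    return available_days * minutes_per_day / demand_units

# Example from the notes: 1,000 units in 10 days.
print(takt_time_minutes(10, 1000))  # 14.4 minutes per unit
```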
Process Cycle Efficiency (PCE):
• PCE is a metric that measures the efficiency of a process. It's calculated as the ratio of the
total value time to the total lead time. The value time is the time involved in value-adding
activities, and the lead time is the total time from start to finish.
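The PCE ratio can be sketched directly; the 30-minute and 120-minute figures below are hypothetical, chosen only to illustrate the formula:

```python
def process_cycle_efficiency(value_added_time: float, lead_time: float) -> float:
    """PCE = value-added time / total lead time, expressed as a percentage."""
    return 100 * value_added_time / lead_time

# Hypothetical example: 30 minutes of value-adding work
# inside a 120-minute total lead time.
print(process_cycle_efficiency(30, 120))  # 25.0 (%)
```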
Lean Principles:
• Flow: Creating a smooth and continuous flow of work where value-adding steps occur in
sequence, ensuring that the product or service moves efficiently towards the customer.
• Pull: A "pull" system means that products or services are produced or delivered only when
there's a demand from the customer. This helps in reducing excess inventory and waste.
• Just in Time (JIT): JIT is a Lean approach that focuses on delivering the right quantity at the
right time to meet customer demand, minimizing inventory and waste.
• Perfection: Lean is a continuous improvement process that involves striving for perfection. It
means always looking for ways to eliminate waste and improve processes.
Wastes of Lean:
2. Inventory: Excess inventory that ties up resources and can lead to waste.
3. Motion: Unnecessary physical or mental actions by employees that don't add value to the
process.
4. Human Intellect: Failing to tap the full potential of employees and their ideas.
5. Waiting: Idle time during the process when work is not being done.
7. Overprocessing: Doing more work on a product or service than what is required by the
customer.
Impact-Effort Matrix:
This matrix helps in prioritizing and deciding which changes or improvements to make in a process. It
considers the impact and difficulty of each change, categorizing them as high or low impact and easy
or difficult to implement. Based on this, actions can be prioritized as "Just do it," "Target it now," or
"Strategize."
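The matrix can be expressed as a small lookup table. Note that the mapping of cells to the three action labels is an assumption (the notes name the labels but not which cell each belongs to), and the fourth cell's "Deprioritize" label is added here for completeness:

```python
def prioritize(impact: str, effort: str) -> str:
    """Map a proposed change onto the impact/effort matrix.
    Cell-to-label mapping is an assumption; the notes only name the labels."""
    table = {
        ("high", "easy"): "Just do it",
        ("high", "difficult"): "Target it now",
        ("low", "easy"): "Strategize",
        ("low", "difficult"): "Deprioritize",  # label invented for the fourth cell
    }
    return table[(impact, effort)]

print(prioritize("high", "easy"))  # Just do it
```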
Hypothesis Testing:
Hypothesis testing is a statistical method used to assess the validity of a claim or statement about a
population parameter, such as the mean or variance. The process typically involves the following
steps:
3. Collect Data:
• Data is collected to test the hypothesis. The data can be continuous (normal data) or
discrete, depending on the nature of the analysis.
• Based on the data type and the nature of the hypothesis, an appropriate statistical
test is selected. For example:
It's important to note that the null hypothesis (H0) and the alternate hypothesis (Ha) are mutually
exclusive and complementary events. H0 assumes equality, while Ha assumes a difference, and the
choice between them depends on the specific research question.
• The confidence level is the complement of the level of significance. A 95% confidence level
corresponds to a level of significance (α) of 0.05. It indicates the level of confidence that the
results are not due to chance.
Non-parametric tests are used when the data does not meet the assumptions of normality required
for parametric tests. These tests do not rely on the specific distribution of the data. Here are some
common non-parametric tests and their applications:
1. Sign Test:
• For example, testing if the median delivery time is equal to a certain benchmark.
2. Wilcoxon Signed-Rank Test:
• For example, testing if there is a significant difference in delivery times before and
after implementing a new process.
3. Mann-Whitney U Test:
• For example, comparing the delivery times of two different delivery modes (Mode A
and Mode B).
4. Kruskal-Wallis Test:
• For example, comparing the delivery times of multiple delivery modes (Mode A,
Mode B, Mode C, etc.).
5. Chi-Squared Test:
• For example, testing whether there is a significant relationship between the number
of errors in purchase orders and the supplier.
6. Fisher's Exact Test:
• Similar to the Chi-Squared test, it is used for 2x2 contingency tables with small
sample sizes.
• For example, assessing the association between the presence of errors in purchase
orders and the supplier's status (approved or not approved).
7. 1-Proportion Test:
• For example, testing whether the proportion of purchase orders with errors is
different from a known industry average.
8. 2-Proportion Test:
• For example, comparing the proportion of errors in purchase orders between two
different time periods.
These non-parametric tests are valuable tools when dealing with data that does not follow a normal
distribution, and they provide reliable statistical results in such cases. The choice of test depends on
the specific research question and the characteristics of the data being analyzed.
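As an illustration of the 1-Proportion Test listed above, here is a minimal two-sided z-test using the normal approximation. The sample numbers (35 errors in 500 purchase orders, against a known 5% industry average) are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def one_proportion_ztest(successes: int, n: int, p0: float) -> tuple[float, float]:
    """Two-sided 1-proportion z-test (normal approximation).
    Returns (z statistic, p-value)."""
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 35 of 500 purchase orders have errors; industry average 5%.
z, p = one_proportion_ztest(35, 500, 0.05)
print(round(z, 2), round(p, 3))  # z = 2.05, p = 0.04 -> reject H0 at alpha = 0.05
```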
DPMO Calculation Example:
To calculate DPMO (Defects Per Million Opportunities), you need the following information:
1. Number of Defects: 20
2. Total Opportunities: 500 samples * 400 possible defects per sample = 500 * 400 = 200,000
DPMO = (Number of Defects / Total Opportunities) × 1,000,000
DPMO = (20 / 200,000) × 1,000,000
DPMO = 0.0001 × 1,000,000 = 100
So, the DPMO in this case is 100, which means there are 100 defects per one million opportunities.
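The same arithmetic as a reusable helper, checked against the numbers above:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Numbers from the notes: 20 defects, 500 samples, 400 opportunities each.
print(dpmo(20, 500, 400))  # 100.0
```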
Hypothesis Testing:
• Data is collected, and an appropriate hypothesis test is selected based on the data type and
research question.
• Type 1 Error (α) occurs when we reject the null hypothesis when it's actually true, leading to
a false positive.
• Type 2 Error (β) occurs when we fail to reject the null hypothesis when it's actually false,
leading to a false negative.
Correlation Analysis:
• Correlation analysis is used to measure the relationship between two variables, typically
denoted as X and Y.
Correlation Coefficient:
• An absolute value close to 1 indicates a strong relationship. For example, 0.907 represents a
strong positive relationship.
Regression Analysis:
• Linear regression is used when the relationship between variables is linear, especially when
one variable (y) is continuous.
• In simple linear regression, there's one predictor variable, while in multiple linear regression,
there are multiple predictor variables.
• Logistic regression is used when the dependent variable is discrete or binary (e.g., yes/no).
• R² measures how well the regression equation can predict future values.
• It ranges from 0 to 1.
• R² shows the proportion of the total variation in the dependent variable (y) explained by the
independent variable (x).
• R² (pred) measures how well the model can predict new, unseen data.
Interpretation of R²:
These values help assess the accuracy and strength of the regression model.
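A simple linear regression and its R² can be computed by hand with the least-squares formulas. The five data points below are purely illustrative:

```python
def linear_regression(xs: list[float], ys: list[float]) -> tuple[float, float, float]:
    """Least-squares fit y = b0 + b1*x; returns (intercept, slope, R^2)."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    b1 = sxy / sxx                 # slope
    b0 = y_bar - b1 * x_bar        # intercept
    # R^2 = 1 - (residual sum of squares / total sum of squares)
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - y_bar) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot

# Illustrative data (not from the notes):
b0, b1, r2 = linear_regression([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(round(b0, 1), round(b1, 1), round(r2, 2))  # 2.2 0.6 0.6
```

An R² of 0.6 means 60% of the variation in y is explained by x; the remaining 40% is unexplained scatter.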
Improve Phase Steps:
1. Generation of Solid Solutions: Brainstorm and create a pool of potential solutions to the
problem.
2. Prioritize and Prove Solutions: Evaluate the solutions generated to identify the most
promising ones.
3. Test Solutions: Conduct small-scale tests or pilots to assess the feasibility and effectiveness
of the selected solutions.
4. Justify Solutions: Provide clear and data-backed reasoning for selecting particular solutions
over others.
Brainstorming Techniques:
• Structured Brainstorming: Ideas are collected in a fixed order (for example, round-robin), so
every participant contributes.
• Unstructured Brainstorming: Allowing for a more open and creative exchange of ideas. A
combination of both structured and unstructured brainstorming can be effective.
Brainstorming Principles:
• Antisolution: Sometimes, thinking about what not to do or the opposite can help identify
the right path.
• Analogy and Similar-Looking Processes: Draw parallels from processes that are similar or
have faced similar challenges.
• Brainwriting: Encourage participants to write down their ideas on paper for shared
discussion.
Selecting Solutions:
• Strategies for selecting solutions can include voting, pay-off matrix, and screening against
specific criteria.
• Screening Against Must Be: Ensure solutions align with essential criteria like compliance,
regulations, and customer Critical to Quality (CTQ).
Criteria-Based Matrix:
• Use a matrix that includes various criteria and evaluate each solution against them.
• Criteria can include measures of compliance, effort required, and alignment with
project goals.
Voting:
• Allocate votes to participants and have them vote for the solutions they find most favorable.
Nominal Group Technique (NGT): In NGT, participants rank or rate each solution based on various
parameters or implementation factors. The solutions with the highest cumulative ratings are
selected. It's a structured method for group decision-making and encourages active participation
from team members.
Kaizen: Kaizen is a Japanese term that means "change for better" or "continuous improvement." It is
a process of making small, incremental improvements in processes, products, or systems. The
primary goal of Kaizen is to eliminate waste (Muda) and create a culture of continuous improvement
within an organization.
Kaizen Blitz: A Kaizen Blitz, also known as a Kaizen Event, is a focused and intensive approach to
implement radical improvements in a specific process or area in a short period of time. It involves
concentrated efforts and can lead to significant, immediate enhancements.
Kaizen Cycle: The Kaizen Cycle, often represented as the PDCA (Plan-Do-Check-Act) cycle, is a
systematic approach to improvement. It involves documenting the current process, identifying areas
of waste, planning countermeasures, implementing changes, verifying results, and then repeating
the cycle. Continuous improvement is the key focus.
Kanban: Kanban is a visual system that uses cards or signboards to signal the need for certain
actions, such as restocking inventory or initiating a specific task. It is widely used in Lean and "Just in
Time" production systems. Kanban helps optimize processes and reduce waste by ensuring materials
or tasks are pulled only when needed.
Types of Kanban:
• Red Kanban: Signals that inventory has fallen to a critically low level, indicating an urgent
need to replenish.
• Yellow Kanban: Indicates that inventory has reached the reorder level, and it's advisable to
start preparations to restock.
• Seiton (Set in Order): One of the 5S practices; organizing items in a way that makes them
easily accessible.
These Lean and Kaizen principles help organizations become more efficient, reduce waste, and foster
a culture of continuous improvement and quality.
Roles for Implementation:
• Top Management: Provides leadership and support for the implementation of Lean and
Kaizen principles. They set the vision, objectives, and strategic direction for the organization.
• Middle/Line Management: Middle managers play a critical role in facilitating and driving the
implementation process within their specific areas. Line managers are responsible for day-to-
day operations.
• Employees: All employees contribute to the success of Lean and Kaizen by actively
participating in continuous improvement efforts, suggesting improvements, and ensuring
adherence to standards and processes.
Poka Yoke (Mistake Proofing): Poka Yoke is a method for mistake avoidance and error prevention. It
involves identifying potential errors or defects in processes and implementing mechanisms to
prevent them.
FMEA (Failure Modes and Effects Analysis): FMEA is a structured approach for identifying potential
failure modes in a process, assessing their impact, and prioritizing them based on a Risk Priority
Number (RPN). The goal is to address high-priority failure modes to reduce the risk of process
failures.
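The RPN prioritization in FMEA is a simple product of the three scores, conventionally rated 1-10 each. The failure modes and scores below are hypothetical, used only to show the ranking step:

```python
# Hypothetical failure modes with (Severity, Occurrence, Detection)
# scored on the conventional 1-10 scales.
failure_modes = [
    ("Wrong part shipped", 8, 4, 3),
    ("Label misprint", 5, 6, 2),
    ("Late delivery", 6, 7, 5),
]

# RPN = Severity x Occurrence x Detection; address the highest first.
ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes), reverse=True)
for rpn, name in ranked:
    print(f"RPN {rpn:3d}  {name}")
```

Here "Late delivery" (RPN 210) would be addressed before the other two failure modes.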
Testing and Piloting: Testing and piloting involve verifying the effectiveness of proposed solutions,
often through simulation or pilot projects to ensure that changes will produce the desired results.
Solution Justification: Before implementing changes, solutions should be justified and presented to
an apex team for approval. This involves demonstrating how the solution aligns with the
organization's goals and objectives and outlining the expected benefits, both tangible and intangible.
Control Phase: In the control phase, it is essential to establish and maintain control over the
processes and monitor them for any deviations. This includes defining control limits, documenting
procedures, and preparing for potential failures by creating response plans.
Control Plan: A control plan is a document that outlines the process control measures to ensure that
the improvements made during the implementation phase are maintained over time. It includes a
response plan for addressing any deviations or potential failures.
Document Changes: Document changes and updates should be thoroughly documented. Ensure that
all relevant documents, including procedures, checklists, templates, and manuals, are reviewed and
updated to reflect the new processes and standards. Any changes to document formats or content
should be clearly communicated to all relevant personnel. Additionally, establish a system for version
control to keep track of document revisions.
Statistical Process Control (SPC): Statistical Process Control (SPC) is a critical tool in maintaining and
improving process quality. It involves the use of control charts to monitor processes and detect
variations. The following are some key points related to control charts:
• Control Charts: Control charts are graphical tools used to monitor and control processes.
They typically display process data over time and include control limits to identify when a
process is out of control.
• Test 1: One of the common rules for identifying when a process is out of control is when a
single data point falls beyond the control limits (either the Upper Control Limit, UCL, or the
Lower Control Limit, LCL).
• Run Tests: Several related rules look for a run of consecutive points on the same side of the
centerline, which can indicate a sustained shift or trend in the process. Conventions differ on
the trigger: nine consecutive points on one side is a widely used rule, and ten-point and
thirteen-point variants are also used.
• Types of Control Charts: There are various types of control charts, each suitable for different
types of data and purposes. Common control charts include the X-bar and R-chart for
continuous data, and p-charts and c-charts for attribute data.
Overall, it is essential to implement effective SPC practices to monitor processes continuously and
ensure that any deviations or variations are addressed promptly to maintain process stability and
product or service quality.
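As a sketch of how control limits are set for attribute data, here is a p-chart (fraction defective) calculation using the standard 3-sigma limit formula; the sample values (average fraction defective of 0.04, subgroups of 200) are hypothetical:

```python
from math import sqrt

def p_chart_limits(p_bar: float, n: int) -> tuple[float, float]:
    """3-sigma control limits for a p-chart (fraction defective).
    The lower limit is floored at zero, since a proportion cannot be negative."""
    half_width = 3 * sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - half_width), p_bar + half_width

# Hypothetical: average fraction defective 0.04, subgroup size 200.
lcl, ucl = p_chart_limits(0.04, 200)
print(round(lcl, 4), round(ucl, 4))  # LCL floored at 0.0; UCL about 0.0816
```

A subgroup whose fraction defective falls above the UCL would trigger Test 1 and prompt investigation of the process.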