ISE 250 Leading The Six Sigma Improvement Project Canvas
CANVAS
https://sjsu.instructure.com/
Dr. Jacob Tsao
Industrial and Systems Engineering
San Jose State University
https://www.sjsu.edu/people/jacob.tsao
Synopsis: Week 4
A Quick Review of Lecture 3
A Quick Expanded Study on σ and μ as well
Ch. 6: Approaching the Problem
Ch. 21: Hypothesis Testing, with Minitab and
Intuition (Cont’d)
Ch. 22: Sample Size, with Minitab and Intuition
(Cont’d)
Synopsis: Week 3 (Review)
A Quick Review of Lecture 2
Ch. 2: Six Sigma Applications (Cont’d)
• Use of a Spectrum of Real-world Cases to Achieve Objectives
(1) – (6).
More Cases before Addressing Methods Next: Qualitative &
Quantitative Methods in Parallel
• Gerald Smith’s taxonomy of Quality Problems.
• Classification of the Real-world Cases according to Smith’s
taxonomy.
• Identification of Appropriate Solution Methods for Each Case.
• Building the Problem-Type-to-Solution-Method
Mapping/Matrix.
Gerald Smith’s Taxonomy (02-08)
Building the Problem-Type-to-Solution-Method
Mapping/Matrix (in Excel)
↓Method \ Problem Type→   Problem Type 1   …   Problem Type j   …   Problem Type m
Method 1                  Initial
Method 2                  Intermediate         Intermediate
Method i                  Terminal
Method i+1                Intermediate         Terminal
Method n                  Terminal
σ and μ, An Expanded Study
Six Sigma presumes knowledge of the standard
deviation (σ) and the mean (μ) of the random
variable or the distribution of the quality characteristic.
• Is this realistic?
σ and μ must be accurately estimated
A standard method, as a minimum
• Make sure the process is good, as designed
• Take at least 30 samples, with 5 measurements each
• Calculate sample standard deviation and sample mean
for each sample
• Average the 30 estimates
150 measurements (data points); “effectively” 120, since each subgroup’s s uses only n − 1 = 4 degrees of freedom (30 × 4 = 120)
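The standard estimation procedure above can be sketched in Python; the true mean of 10.0 and true σ of 0.2 below are assumed purely for illustration:

```python
import random
import statistics

random.seed(0)

# Simulate 30 subgroups ("samples") of 5 measurements each,
# i.e., 150 data points in total. True mu = 10.0, sigma = 0.2 (assumed).
subgroups = [[random.gauss(10.0, 0.2) for _ in range(5)] for _ in range(30)]

xbars = [statistics.mean(g) for g in subgroups]   # 30 sample means
sds = [statistics.stdev(g) for g in subgroups]    # 30 sample std devs

x_doublebar = statistics.mean(xbars)  # estimate of mu
s_bar = statistics.mean(sds)          # estimate of sigma (slightly biased low)

# Each subgroup's s uses n - 1 = 4 degrees of freedom, so the 150
# measurements contribute "effectively" 30 * 4 = 120 to estimating sigma.
```

Averaging the 30 subgroup statistics, rather than pooling all 150 points into one calculation, is what makes the estimate robust to slow drifts in the process mean between subgroups.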
Standard Deviation (σ), An Expanded Study
Week 1: What is Six Sigma?
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC;
To Build a Clearer Big Picture, for your Project
DMAIC vs. DMADV, throughout the five phases
Phase 1: Define:
• Identify the problem, define requirements and set goals for
success.
• Use DMADV in cases where the current process needs to be
completely replaced or redesigned, or for a newly conceived
opportunity
• DMADV: DMAIC + Change Management + Customer
Requirements
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
Phase 2: Measure
• The team [identify data to collect; collect and] use data to
verify their assumptions about the process and problem.
Might revisit the problem statements, goals and other
process-related definitions, etc., based on the result of
assumption verification.
• DMADV: DMAIC + “activities more targeted,” “collect data
and measurements that help them define the performance
requirements of the new process.”
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
Phase 3: Analyze
• Develop hypotheses about the causal relationships between
inputs and outputs, narrow the causation down to the vital few
(using methods such as the Pareto principle), and use
statistical analysis and data to validate the hypotheses and
assumptions made so far.
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
Phase 3: Analyze
• DMADV: … team also identify cause and effect relationships
but they are usually more concerned about identifying best
practices and benchmarks by which to measure and design
the new process. The team might also begin process design
work by identifying value-added and non-value-added
activities, locating areas where bottlenecks and errors are
likely, refining requirements to better meet the needs and
goals of the project.
• (What is the difference between a hypothesis and an
assumption?)
CH. 6: APPROACHING THE PROBLEM (Book)
Expanded: Definitions of DMAIC, for your Project
Phase 4: Improve (or Design)
• “… teams start developing ideas that began in the Analyze
phase during the Improve phase of the project.” “They use
statistics and real-world observation to test their
hypotheses and solutions.” “Hypothesis testing actually
begins in the Analyze phase but is continued in the Improve
phase as teams select solutions and begin to implement
them. Teams also work to standardize solutions in
preparation for rolling out the improved process to daily production
and non-team employees. Teams also start measuring
results and lay the foundation for controls that will be built
in the last phase.”
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
Phase 4: Improve (or Design)
• DMADV projects diverge substantially from DMAIC
projects at this phase. The team actually works to design a new process,
which does involve some of the solutions testing mentioned
above but also involves mapping, workflow principles,
actively building new infrastructures. This might mean
putting new equipment in place, hiring and training new
employees or developing new software tools. Teams also
start to implement the new systems and process.
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
Phase 5: Control (or Verify)
• The Control or Verify phase is where loose ends are tied and
the project is transitioned to a daily work environment.
Controls and standards are established so that the
improvements can be maintained, but the responsibility for
those improvements is transitioned to the process owner.
During the transition, the Six Sigma team might work with
the process owner and his or her team to troubleshoot any
problems with the improvement.
CH. 6: APPROACHING THE PROBLEM;
Expanded: FMEA, RCA, Fault Tree, Logic Tree;
To Build a Clearer Big Picture, for your Project
Root Cause Analysis (RCA): Failure Mode, Effect and
Criticality Analysis (FMECA), Brief Introduction to
Fault Tree (FT) Analysis, 5 Why’s for Root Causes,
Logic Tree (Beyond Fault Tree).
Connection to ISE 235 (Quality Assurance and
Reliability)
Methods to Remove or Alleviate Root Causes
• Process Improvement
• Poka Yoke
Case: Fast Food Cooking (Cont’d; Brief)
Hamburger again, continuing the textbook and my
example
• Textbook: 5 Whys
• Looking further into the possibility of within-piece variability
• Poka Yoke: Two-surface (or George Foreman) grill properly
labeled now.
Case: PCB Manufacturing: (Cont’d)
PCB “stuffing”: Manufacturing electronic products
• From FMEA to root causes: e.g.,
Insufficient solder (Montgomery et al., Example 6.4; 02-06)
Uneven solder: within-piece variability
• Possible process improvements; Poka Yoke?
Take a closer look at the two soldering videos
• Use of stencil to apply solder paste, uniformly on a circuit board (1:12)
https://www.youtube.com/watch?v=n0GaNvrPsJ8
• PCB Assembly, with Surface Mount Devices (8:21)
https://www.youtube.com/watch?v=2qk5vxWY46A
Orientation of the board during soldering: production line design;
production-line design for manufacturability
Needs for solder vary on the same board: circuit layout design;
product design for manufacturability
(Slanting the PCB upward during soldering? To take advantage of gravity?)
Example 6.4 (Cause-and-Effect or Fishbone diagram; as input to FMEA; then as input to Root Cause Analysis)
Example 6.4 (Root Cause Analysis producing: solder insufficient; solder cold joint; … ; solder short; raw card damaged)
Case: Implementation of Poka-Yoke System
to Prevent Human Error in Material Prep
“Implementation of Poka-Yoke System to Prevent Human Error
in Material Preparation for Industry” (04-02) *
• FMEA to root causes to Poka Yoke
• A focus on Missing Part and Wrong Part
• FMEA
• Root Cause: Human Errors
• Poka Yoke: Scanner, Database and Software Control, with UI
* International Journal of Productivity and Quality Management, Vol. 4, No. 5/6, 2009
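The scanner-plus-database poka-yoke idea can be sketched as follows; the part numbers and kit contents are hypothetical, not from the paper. Each scanned part is checked against the kit’s bill of materials, so a Wrong Part or Missing Part is flagged before the kit is released:

```python
# Hypothetical part numbers and kit contents -- the paper's actual
# system uses a barcode scanner, a database, and software control with a UI.
required_parts = {"R-1001", "C-2040", "U-7733"}   # assumed BOM for one kit

def check_kit(scanned_parts):
    """Compare scanned barcodes against the bill of materials."""
    scanned = set(scanned_parts)
    return {
        "wrong": sorted(scanned - required_parts),     # Wrong Part error
        "missing": sorted(required_parts - scanned),   # Missing Part error
        "ok": scanned == required_parts,
    }

# Operator scanned a wrong capacitor and forgot the IC:
result = check_kit(["R-1001", "C-2040", "C-9999"])
# result["wrong"] -> ["C-9999"]; result["missing"] -> ["U-7733"]
```

The design choice is the essence of poka-yoke: the software check removes the human memory step (the identified root cause) rather than merely reminding the operator.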
Case: Root Cause thru “Logic Tree”
A Practical Example for Searching for Root Causes - Case:
“Eastman Chemical’s Success Story” (04-04; HW)
“Unlike a fault tree analysis, which is traditionally used for
mapping out what could go wrong, a logic tree helps determine
what did go wrong. Patience and discipline are stressed.”
Note:
• GE website about Logic Tree:
https://www.ge.com/digital/documentation/meridium/Help/V43050/Def
ault/Subsystems/RootCauseAnalysis/Content/LogicTree.htm
Root Cause Analysis,
After Failures Have Occurred
RCA is conducted after a quality problem has occurred in the
FIELD and to a CUSTOMER (or during proactive product testing
and design).
Process definition, FMEA, and FAULT TREE should always be
done at the DESIGN STAGE.
They should always precede RCA.
So, always start with the current process, whether it is well
defined or optimized or not.
If the process was not followed, it is a conformance problem.
If it was followed and a problem occurred, improve the process.
In either case, root causes need to be found and removed or
alleviated.
Root Cause Analysis
Fault Tree Analysis has been a well established methodology to
anticipate failures
In fault tree analysis, “secondary events” are placeholders for
possible further development. If what did go wrong was not
identified in the fault tree developed during the design stage,
the failure can be used to further develop the fault tree.
There could be too many possible failure modes!
“Failure of Imagination,” as a conclusion of a Congressional
Hearing after the Apollo 1 fire killed three astronauts on the
test pad (on the ground)
• Congressional hearings ongoing about the January 6, 2021 riot; some
tentatively concluded “Failure of Imagination”
Fault Tree Exercise (04-supp; by John Thomas,
CERN)
Hazard: Toxic chemical released
Design:
Tank includes a relief valve opened by an operator to
protect against over-pressurization. A secondary valve is
installed as backup in case the primary valve fails. The
operator must know if the primary valve does not open so
the backup valve can be activated.
Operator console contains both a primary valve position
indicator and a primary valve open indicator light.
Draw a fault tree for this hazard and system design.
© Copyright 2014 John Thomas
Fault Tree Exercise
Example of an actual incident
System Design: Same
Events: The open position indicator and the open indicator light both
illuminated. However, the primary valve was NOT open, and the system
exploded.
Causal Factors: Post-accident examination discovered the indicator light circuit
was wired to indicate presence of power at the valve, but it did not indicate
valve position. Thus, the indicator showed only that the activation button had
been pushed, not that the valve had opened. An extensive quantitative safety
analysis of this design had assumed a low probability of simultaneous failure for
the two relief valves, but ignored the possibility of design error in the electrical
wiring; the probability of design error was not quantifiable. No safety evaluation
of the electrical wiring was made; instead, confidence was established on the
basis of the low probability of coincident failure of the two relief valves.
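The flaw in that quantitative argument can be shown with toy fault-tree arithmetic; all failure probabilities below are assumed for illustration only. Under the independence assumption the AND-gated top event looks extremely unlikely, but a single common-cause design error dominates it:

```python
# Toy fault-tree arithmetic for the relief-valve incident.
# All probabilities are assumed for illustration only.
p_primary = 1e-3   # primary relief valve fails to open on demand
p_backup = 1e-3    # backup relief valve fails to open on demand

# AND gate with independent basic events: both valves must fail.
p_top_independent = p_primary * p_backup   # 1e-6: looks very safe

# A common-cause design error (e.g., the miswired indicator circuit)
# defeats the backup path entirely: the operator never learns the
# primary valve failed, so the backup is never activated.
p_design_error = 1e-2   # assumed; in the incident it was "not quantifiable"
p_top_with_common_cause = (
    p_design_error + (1 - p_design_error) * p_top_independent
)
# The common-cause term is four orders of magnitude larger than the
# "coincident failure" term the safety analysis relied on.
```

This is exactly why confidence built solely on the low probability of coincident valve failure was misplaced: the unmodeled common cause, not the modeled basic events, controlled the risk.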
Ch. 21: Hypothesis Testing, with Intuition
Hypothesis Testing (Revisited)
• The probability of this observation or worse (i.e., farther out)
is called “the p-value”, where the “p” stands for “(tail)
probability”. (See a minor note * below.)
• For any hypothesis test and for most practical purposes, the
“p-value” summarizes almost the entirety of the result of the
hypothesis test; in other words, it is the “bottom line”.
• Sometimes, the concern is on both sides; some other times,
the concern is only on one side: 2-sided vs. 1-sided test
Two-sided: the sum of the two tail probabilities, corresponding to the
tail on the side of the test statistic and its symmetric counterpart.
One-sided: Just the probability of observing what has been observed
or worse (or further on the same tail).
*Note: This is NOT the percentage non-conforming involved in the binary outcome of a
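A minimal sketch of one-sided vs. two-sided p-values, using a z-test for a mean with known σ rather than Minitab; all the numbers are assumed for illustration:

```python
from statistics import NormalDist

# Hypothetical two-sided z-test for a mean: H0: mu = 10.0 with known
# sigma = 0.2, n = 25 measurements, observed sample mean 10.09.
mu0, sigma, n, xbar = 10.0, 0.2, 25, 10.09

z = (xbar - mu0) / (sigma / n ** 0.5)   # standardized test statistic: 2.25
tail = 1.0 - NormalDist().cdf(abs(z))   # P(Z >= |z|): one tail probability

p_one_sided = tail        # concern on one side only
p_two_sided = 2.0 * tail  # both tails, by symmetry of the normal distribution
```

The two-sided p-value is the sum of the tail on the side of the test statistic and its symmetric counterpart, which for a symmetric distribution is simply twice the one-sided tail.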
INTUITION AND EXAMPLES, WITH MINITAB:
TWO-SIDED
Your observed value WILL BE DIFFERENT from your
hypothesized value, e.g., sample p or sample mean
So, the question is
• How far is too far? Or,
• How different is too different?
So, you MUST DO (two-sided) hypothesis testing
regardless of what you observed.
INTUITION AND EXAMPLES, WITH MINITAB:
ONE-SIDED
Consider the case: H0: p = 0.85 vs. H1: p < 0.85
(Low), with any given sample size
Consider two possibilities about your observed value.
• Sample p = 0.835: Do you need to test H0? (Intuitively)
• Sample p = 0.865: Do you need to test H0 against H1?
(Intuitively)
Minitab will give the same answer as your intuitive answer.
Because the question is the same: how far is too far?
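The intuition can be checked with an exact binomial tail calculation; a sample size of n = 200 is assumed so that the two sample proportions correspond to whole counts:

```python
from math import comb

def binom_cdf(x, n, p):
    # P(X <= x): exact lower-tail binomial probability under H0.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

n, p0 = 200, 0.85   # assumed sample size; hypothesized proportion (H0)

# Sample p = 0.835 -> 167 successes: BELOW p0, so testing against
# H1: p < 0.85 is meaningful; the one-sided p-value is the lower tail.
p_low = binom_cdf(167, n, p0)

# Sample p = 0.865 -> 173 successes: ABOVE p0, so the lower-tail
# p-value exceeds 0.5 and H0 clearly cannot be rejected against
# H1: p < 0.85 -- matching the intuitive answer (and Minitab's).
p_high = binom_cdf(173, n, p0)
```

When the observed proportion falls on the opposite side of the hypothesized value from the alternative, the one-sided test can never reject, which is exactly what intuition says.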