Six Sigma Tools


Six Sigma Reference Tool

rev. 2.0b   Author: R. Chapin

Definition: 1-Sample sign test. Tests the probability of the sample median being equal to a hypothesized value.

Tool to use: 1-Way ANOVA

What does it do?


ANOVA tests to see if the difference between the means of each level is significantly more than the variation within each level. 1-way ANOVA is used when two or more means (a single factor with three or more levels) must be compared with each other.

Why use it?


One-way ANOVA is useful for identifying a statistically significant difference between means of three or more levels of a factor.

When to use?
Use 1-way ANOVA when you need to compare three or more means (a single factor with three or more levels) and determine how much of the total observed variation can be explained by the factor.


Data Type:
Continuous Y, Discrete Xs

P < .05 Indicates:

At least one group of data is different than at least one other group.
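As a quick illustration of how such a comparison can be run outside Minitab, here is a minimal Python sketch of a one-way ANOVA using scipy.stats.f_oneway; the fixture measurements and group names are made up for the example.

```python
# Hedged sketch: one-way ANOVA on three made-up groups (e.g., three fixtures).
from scipy import stats

fixture_a = [8.2, 8.5, 8.1, 8.4, 8.3]
fixture_b = [8.9, 9.1, 8.8, 9.0, 8.7]
fixture_c = [8.3, 8.2, 8.4, 8.1, 8.5]

f_stat, p_value = stats.f_oneway(fixture_a, fixture_b, fixture_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05: at least one group mean differs from at least one other group.
```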

Six Sigma 12 Step Process


Step 0 - Project Selection. Deliverable: identify project CTQ's, develop team charter, define high-level process map.

Step 1 - Select CTQ Characteristics (focus: Y). Deliverable: identify and measure customer CTQ's. Sample tools: Customer, QFD, FMEA.

Step 2 - Define Performance Standards (focus: Y). Deliverable: define and confirm specifications for the Y. Sample tools: Customer, blueprints.

Step 3 - Measurement System Analysis (focus: Y). Deliverable: measurement system is adequate to measure Y. Sample tools: Continuous Gage R&R, Test/Retest, Attribute R&R.

Step 4 - Establish Process Capability (focus: Y). Deliverable: baseline current process; normality test. Sample tools: capability indices.

Step 5 - Define Performance Objectives (focus: Y). Deliverable: statistically define goal of project. Sample tools: team, benchmarking.

Step 6 - Identify Variation Sources (focus: X). Deliverable: list of statistically significant X's based on analysis of historical data. Sample tools: process analysis, graphical analysis, hypothesis testing.

Step 7 - Screen Potential Causes (focus: X). Deliverable: determine vital few X's that cause changes to your Y. Sample tools: DOE-screening.

Step 8 - Discover Variable Relationships (focus: X). Deliverable: determine transfer function between Y and vital few X's; determine optimal settings for vital few X's; perform confirmation runs. Sample tools: factorial designs.

Step 9 - Establish Operating Tolerances (focus: Y, X). Deliverable: specify tolerances on the vital few X's. Sample tools: simulation.

Step 10 - Define and Validate Measurement System on X's in Actual Application (focus: Y, X). Deliverable: measurement system is adequate to measure X's. Sample tools: Continuous Gage R&R, Test/Retest, Attribute R&R.

Step 11 - Determine Process Capability (focus: Y, X). Deliverable: determine post-improvement capability and performance. Sample tools: capability indices.

Step 12 - Implement Process Control (focus: X). Deliverable: develop and implement process control plan. Sample tools: control charts, mistake proofing, FMEA.


Each tool below is listed with: use when, an example, the Minitab format (menu path), the required data format, the data type, and what p < 0.05 indicates.

ANOVA
Use when: determine if the average of one group of data is different than the average of other (multiple) groups of data. Example: compare multiple fixtures to determine if one or more performs differently. Minitab: Stat > ANOVA > Oneway. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Data type: variable Y, attribute X. p < 0.05 indicates: at least one group of data is different than at least one other group.

Box & Whisker Plot
Use when: compare median and variation between groups of data; also identifies outliers. Example: compare turbine blade weights using different scales. Minitab: Graph > Boxplot. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Data type: variable Y, attribute X. p < 0.05 indicates: N/A.

Cause & Effect Diagram / Fishbone
Use when: brainstorming possible sources of variation for a particular effect. Example: potential sources of variation in gage R&R. Minitab: Stat > Quality Tools > Cause and Effect; input ideas in the proper column heading for the main branches of the fishbone, and type the effect in the pulldown window. Data type: all. p < 0.05 indicates: N/A.

Chi-Square
Use when: determine if one set of defectives data is different than other sets of defectives. Example: compare DPUs between GE90 and CF6. Minitab: Stat > Tables > Chisquare Test. Data format: input two columns, one containing the number of non-defective and the other containing the number of defective. Data type: discrete Y, discrete X. p < 0.05 indicates: at least one group is statistically different.

Dot Plot
Use when: quick graphical comparison of two or more processes' variation or spread. Example: compare length of service of GE90 technicians to CF6 technicians. Minitab: Graph > Character Graphs > Dotplot. Data format: input multiple columns of data of equal length. Data type: variable Y, attribute X. p < 0.05 indicates: N/A.
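For the Chi-Square entry above, the same kind of test of association can be sketched in Python with scipy.stats.chi2_contingency; the defective counts for the two programs below are invented for illustration.

```python
# Hedged sketch: chi-square test of association on made-up defect counts.
from scipy.stats import chi2_contingency

#           non-defective  defective
observed = [[480, 20],   # program 1
            [450, 50]]   # program 2

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# p < 0.05: at least one group's defective rate is statistically different.
```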

General Linear Models
Use when: determine if a difference in categorical data between groups is real when taking into account other variable X's. Example: determine if height and weight are significant variables between two groups when looking at pay. Minitab: Stat > ANOVA > General Linear Model. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column; other variables must be stacked in separate columns. Data type: variable Y, attribute/variable X. p < 0.05 indicates: at least one group of data is different than at least one other group.

Histogram
Use when: view the distribution of data (spread, mean, mode, outliers, etc.). Example: view the distribution of Y. Minitab: Graph > Histogram, or Stat > Quality Tools > Process Capability. Data format: input one column of data. Data type: variable Y, attribute X. p < 0.05 indicates: N/A.

Homogeneity of Variance
Use when: determine if the variation in one group of data is different than the variation in other (multiple) groups of data. Example: compare the variation between teams. Minitab: Stat > ANOVA > Homogeneity of Variance. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Data type: variable Y, attribute X. p < 0.05 indicates (use Levene's test): at least one group of data is different than at least one other group.
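A minimal Python sketch of the homogeneity-of-variance comparison above, using Levene's test from SciPy; the team measurements are made up for the example.

```python
# Hedged sketch: Levene's test for equal variances across made-up teams.
from scipy import stats

team_a = [12.1, 11.8, 12.4, 12.0, 11.9]
team_b = [12.6, 10.9, 13.4, 11.2, 12.9]
team_c = [12.2, 12.1, 12.3, 12.0, 12.2]

stat, p = stats.levene(team_a, team_b, team_c)
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
# p < 0.05: at least one group's variation differs from at least one other group.
```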

Kruskal-Wallis Test
Use when: determine if the means of non-normal data are different. Example: compare the means of cycle time for different delivery methods. Minitab: Stat > Nonparametrics > Kruskal-Wallis. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Data type: variable Y, attribute X. p < 0.05 indicates: at least one mean is different.

Multi Vari Analysis (see also Run Chart / Time Series Plot)
Use when: helps identify the most important types or families of variation. Example: compare within-piece, piece-to-piece, or time-to-time variation in airfoil leading edge thickness. Minitab: Graph > Interval Plot. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column, in time order. Data type: variable Y, attribute X. p < 0.05 indicates: N/A.

Notched Box Plot
Use when: compare the median of a given confidence interval and variation between groups of data. Example: compare different hole drilling patterns to see if the median and spread of the diameters are the same. Minitab: Graph > Character Graphs > Boxplot. Data format: response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Data type: variable Y, attribute X. p < 0.05 indicates: N/A.
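For the Kruskal-Wallis entry above, a minimal Python sketch using scipy.stats.kruskal; the cycle times for the three delivery methods are invented for illustration.

```python
# Hedged sketch: Kruskal-Wallis test on made-up, non-normal cycle times
# for three delivery methods.
from scipy import stats

courier = [2.1, 2.4, 8.9, 2.2, 2.6]
ground  = [4.5, 5.1, 4.8, 12.0, 5.0]
air     = [1.2, 1.5, 1.1, 1.4, 6.3]

h_stat, p = stats.kruskal(courier, ground, air)
print(f"H = {h_stat:.2f}, p = {p:.4f}")
# p < 0.05: at least one group's central tendency differs from the others.
```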

One-sample t-test
Use when: determine if the average of a group of data is statistically equal to a specific target. Example: a manufacturer claims the average number of cookies in a 1 lb. package is 250; you sample 10 packages and find that the average is 235; use this test to disprove the manufacturer's claim. Minitab: Stat > Basic Statistics > 1 Sample t. Data format: input one column of data. Data type: variable Y. p < 0.05 indicates: not equal.

Pareto
Use when: compare how frequently different causes occur. Example: determine which defect occurs most often for a particular engine program. Minitab: Stat > Quality Tools > Pareto Chart. Data format: input two columns of equal length. Data type: variable Y, attribute X. p < 0.05 indicates: N/A.

Process Mapping
Use when: create a visual aid of each step in the process being evaluated. Example: map engine horizontal area with all rework loops and inspection points. Minitab: N/A; use rectangles for process steps and diamonds for decision points. p < 0.05 indicates: N/A.

Regression
Use when: determine if a group of data incrementally changes with another group. Example: determine if runout changes with temperature. Minitab: Stat > Regression > Regression. Data format: input two columns of equal length. Data type: variable Y, variable X. p < 0.05 indicates: a correlation is detected.

Run Chart / Time Series Plot
Use when: look for trends, outliers, oscillations, etc. Example: view runout values over time. Minitab: Stat > Quality Tools > Run Chart, or Graph > Time Series Plot. Data format: input one column of data; must also input a subgroup size (1 will show all points). Data type: variable Y. p < 0.05 indicates: N/A.
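The Regression entry above can be sketched in Python with scipy.stats.linregress; the temperature and runout values below are made up for the example.

```python
# Hedged sketch: simple regression of made-up runout values on temperature.
from scipy import stats

temperature = [20, 25, 30, 35, 40, 45, 50]
runout      = [0.11, 0.13, 0.12, 0.15, 0.17, 0.16, 0.19]

result = stats.linregress(temperature, runout)
print(f"slope = {result.slope:.4f}, intercept = {result.intercept:.4f}")
print(f"r = {result.rvalue:.3f}, p = {result.pvalue:.4f}")
# p < 0.05: a correlation between the two columns is detected.
```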

Scatter Plot
Use when: look for correlations between groups of variable data. Example: determine if rotor blade length varies with home position. Minitab: Graph > Plot, Graph > Marginal Plot, or Graph > Matrix Plot (for multiples). Data format: input two or more groups of data of equal length. Data type: variable Y, variable X. p < 0.05 indicates: N/A.

Two-sample t-test
Use when: determine if the average of one group of data is greater than (or less than) the average of another group of data. Example: determine if the average radius produced by one grinder is different than the average radius produced by another grinder. Minitab: Stat > Basic Statistics > 2 Sample t. Data format: input two columns of equal length. Data type: variable Y, variable X. p < 0.05 indicates: there is a difference in the means.
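A minimal Python sketch of the two-sample t-test above; the radii from the two grinders are invented, and Welch's form (unequal variances) is used as a conservative default.

```python
# Hedged sketch: two-sample t-test on made-up radii from two grinders.
from scipy import stats

grinder_1 = [5.01, 5.03, 4.99, 5.02, 5.00, 5.04]
grinder_2 = [5.06, 5.08, 5.05, 5.09, 5.07, 5.06]

t_stat, p = stats.ttest_ind(grinder_1, grinder_2, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p:.4f}")
# p < 0.05: there is a difference in the means of the two groups.
```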

Definitions

Term
1-Sample sign test Accuracy

Definition
Tests the probability of sample median being equal to hypothesized value. Accuracy refers to the variation between a measurement and what actually exists. It is the difference between an individual's average measurements and that of a known standard, or accepted "truth." Alpha risk is defined as the risk of accepting the alternate hypothesis when, in fact, the null hypothesis is true; in other words, stating a difference exists where actually there is none. Alpha risk is stated in terms of probability (such as 0.05 or 5%). The acceptable level of alpha risk is determined by an organization or individual and is based on the nature of the decision being made. For decisions with high consequences (such as those involving risk to human life), an alpha risk of less than 1% would be expected. If the decision involves minimal time or money, an alpha risk of 10% may be appropriate. In general, an alpha risk of 5% is considered the norm in decision making. Sometimes alpha risk is expressed as its inverse, which is confidence level. In other words, an alpha risk of 5% also could be expressed as a 95% confidence level. The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not due to chance or sampling error. The alternate hypothesis is the opposite of the null hypothesis (P < 0.05). A dependency exists between two or more factors Analysis of variance is a statistical technique for analyzing data that tests for a difference between two or more means. See the tool 1-Way ANOVA. P-value < 0.05 = not normal. see discrete data A bar chart is a graphical comparison of several quantities in which the lengths of the horizontal or vertical bars represent the relative magnitude of the values. Benchmarking is an improvement tool whereby a company measures its performance or process against other companies' best practices, determines how those companies achieved their performance levels, and uses the information to improve its own performance. See the tool Benchmarking. Beta risk is defined as the risk of accepting the null hypothesis when, in fact, the alternate hypothesis is true. In other words, stating no difference exists when there is an actual difference. A statistical test should be capable of detecting differences that are important to you, and beta risk is the probability (such as 0.10 or 10%) that it will not. Beta risk is determined by an organization or individual and is based on the nature of the decision being made. Beta risk depends on the magnitude of the difference between sample means and is managed by increasing test sample size. In general, a beta risk of 10% is considered acceptable in decision making. Bias in a sample is the presence or influence of any factor that causes the population or process being sampled to appear different from what it actually is. Bias is introduced into a sample when data is collected without regard to key factors that may influence the population or process. Blocking neutralizes background variables that can not be eliminated by randomizing. It does so by spreading them across the experiment A box plot, also known as a box and whisker diagram, is a basic graphing tool that displays centering, spread, and distribution of a continuous data set CAP Includes/Excludes is a tool that can help your team define the boundaries of your project, facilitate discussion about issues related to your project scope, and challenge you to agree on what is included and excluded within the scope of your work. 
See the tool CAP Includes/Excludes. CAP Stakeholder Analysis is a tool to identify and enlist support from stakeholders. It provides a visual means of identifying stakeholder support so that you can develop an action plan for your project. See the tool CAP Stakeholder Analysis. Capability analysis is a Minitab tool that visually compares actual process performance to the performance standards. See the tool Capability Analysis. A factor (X) that has an impact on a response variable (Y); a source of variation in a process or product. A cause and effect diagram is a visual tool used to logically organize possible causes for a specific problem or effect by graphically displaying them in increasing detail. It helps to identify root causes and ensures common understanding of the causes that lead to the problem. Because of its fishbone shape, it is sometimes called a "fishbone diagram." See the tool Cause and Effect Diagram. The center of a process is the average value of its data. It is equivalent to the mean and is one measure of the central tendency. A center point is a run performed with all factors set halfway between their low and high levels. Each factor must be continuous to have a logical halfway point. For example, there are no logical center points for the factors vendor, machine, or location (such as city); however, there are logical center points for the factors temperature, speed, and length. The central limit theorem states that given a distribution with a mean μ and variance σ², the sampling distribution of the mean approaches a normal distribution with mean μ and variance σ²/N as N, the sample size, increases. A characteristic is a definable or measurable feature of a process, product, or variable.

Training Link

Alpha risk

Alternative hypothesis (Ha) Analysis of variance (ANOVA) Anderson-Darling Normality Test Attribute Data Bar chart Benchmarking

Beta risk

Bias Blocking Boxplot CAP Includes/Excludes CAP Stakeholder Analysis Capability Analysis Cause Cause and Effect Diagram Center Center points Central Limit Theorem Characteristic
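The central limit theorem defined above can be seen numerically with a short simulation; this is a hedged sketch on made-up data, showing that averages of larger samples from a skewed population spread roughly as sigma divided by the square root of N.

```python
# Hedged sketch of the central limit theorem: averages of larger samples drawn
# from a skewed (exponential) population look increasingly normal, with the
# spread of the averages shrinking roughly as sigma / sqrt(N).
import numpy as np

rng = np.random.default_rng(seed=1)
population = rng.exponential(scale=2.0, size=100_000)
sigma = population.std()

for n in (2, 10, 50):
    sample_means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    print(f"N={n:3d}  std of sample means={sample_means.std():.3f}  "
          f"sigma/sqrt(N)={sigma / np.sqrt(n):.3f}")
```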

Term
Chi Square test

Definition
A chi square test, also called "test of association," is a statistical test of association between discrete variables. It is based on a mathematical comparison of the number of observed counts with the number of expected counts to determine if there is a difference in output counts based on the input category. See the tool Chi Square-Test of Independence. Used with Defects data (counts) & defectives data (how many good or bad). Critical Chi-Square is Chi-squared value where p=.05. Common cause variability is a source of variation caused by unknown factors that result in a steady but random distribution of output around the average of the data. Common cause variation is a measure of the process's potential, or how well the process can perform when special cause variation is removed. Therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, noncontrollable variation, within-group variation, or inherent variation. Example: many X's with a small impact. Measurement of the certainty of the shape of the fitted regression line. A 95% confidence band implies a 95% chance that the true regression line fits within the confidence bands. Measurement of certainty. Factors or interactions are said to be confounded when the effect of one factor is combined with that of another. In other words, their effects can not be analyzed independently. Concluding something is bad when it is actually good (TYPE II Error) Continuous data is information that can be measured on a continuum or scale. Continuous data can have almost any numeric value and can be meaningfully subdivided into finer and finer increments, depending upon the precision of the measurement system. Examples of continuous data include measurements of time, temperature, weight, and size. For example, time can be measured in days, hours, minutes, seconds, and in even smaller units. Continuous data is also called quantitative data. Control limits define the area three standard deviations on either side of the centerline, or mean, of data plotted on a control chart. Do not confuse control limits with specification limits. Control limits reflect the expected variation in the data and are based on the distribution of the data points. Minitab calculates control limits using collected data. Specification limits are established based on customer or regulatory requirements. Specification limits change only if the customer or regulatory body so requests.

Training Link

3.1

Common cause variability

Step 12 p.103

Confidence band (or interval) Confounding Consumers Risk Continuous Data

Control limits

Correlation Correlation coefficient (r) Critical element

Correlation is the degree or extent of the relationship between two variables. If the value of one variable increases when the value of the other increases, they are said to be positively correlated. If the value of one variable decreases when the value of the other decreases, they are said to be negatively correlated. The degree of linear association between two variables is quantified by the correlation coefficient The correlation coefficient quantifies the degree of linear association between two variables. It is typically denoted by r and will have a value ranging between negative 1 and positive 1. A critical element is an X that does not necessarily have different levels of a specific scale but can be configured according to a variety of independent alternatives. For example, a critical element may be the routing path for an incoming call or an item request form in an order-taking process. In these cases the critical element must be specified correctly before you can create a viable solution; however, numerous alternatives may be considered as possible solutions. CTQs (stands for Critical to Quality) are the key measurable characteristics of a product or process whose performance standards, or specification limits, must be met in order to satisfy the customer. They align improvement or design efforts with critical issues that affect customer satisfaction. CTQs are defined early in any Six Sigma project, based on Voice of the Customer (VOC) data. Cycle time is the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work waits to be processed. A dashboard is a tool used for collecting and reporting information about vital customer requirements and your business's performance for key customers. Dashboards provide a quick summary of process performance. Data is factual information used as a basis for reasoning, discussion, or calculation; often this term refers to quantitative information A defect is any nonconformity in a product or process; it is any event that does not meet the performance standards of a Y. The word defective describes an entire unit that fails to meet acceptance criteria, regardless of the number of defects within the unit. A unit may be defective because of one or more defects. Descriptive statistics is a method of statistical analysis of numeric data, discrete or continuous, that provides information about centering, spread, and normality. Results of the analysis can be in tabular or graphic format. A design risk assessment is the act of determining potential risk in a design process, either in a concept design or a detailed design. It provides a broader evaluation of your design beyond just CTQs, and will enable you to eliminate possible failures and reduce the impact of potential failures. This ensures a rigorous, systematic examination in the reliability of the design and allows you to capture system-level risk

CTQ Cycle time Dashboard Data Defect Defective Descriptive statistics Design Risk Assessment
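The correlation coefficient (r) defined above can be computed with a short Python sketch; the x and y values are invented for illustration.

```python
# Hedged sketch: Pearson correlation coefficient (r) for two made-up variables.
from scipy import stats

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.3]

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f} (r ranges between -1 and +1), p = {p:.4f}")
```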

Detectable Effect Size DF (degrees of freedom)

When you are deciding what factors and interactions you want to get information about, you also need to determine the smallest effect you will consider significant enough to improve your process. This minimum size is known as the detectable effect size, or DES. Large effects are easier to detect than small effects. A design of experiment compares the total variability in the experiment to the variation caused by a factor. The smaller the effect you are interested in, the more runs you will need to overcome the variability in your experimentation. Equal to: (#rows - 1)(#cols - 1)

Term
Discrete Data Distribution DMADV DMAIC DOE DPMO DPO DPU Dunnett's(1-way ANOVA): Effect Entitlement Error Error (type I) Error (type II) Factor Failure Mode and Effect Analysis Fisher's (1-way ANOVA): Fits Fitted value

Definition
Discrete data is information that can be categorized into a classification. Discrete data is based on counts. Only a finite number of values is possible, and the values cannot be subdivided meaningfully. For example, the number of parts damaged in shipment produces discrete data because parts are either damaged or not damaged. Distribution refers to the behavior of a process described by plotting the number of times a variable displays a specific value or range of values rather than by plotting the value itself. DMADV is GE Company's data-driven quality strategy for designing products and processes, and it is an integral part of GE's Six Sigma Quality Initiative. DMADV consists of five interconnected phases: Define, Measure, Analyze, Design, and Verify. DMAIC refers to General Electric's data-driven quality strategy for improving processes, and is an integral part of the company's Six Sigma Quality Initiative. DMAIC is an acronym for five interconnected phases: Define, Measure, Analyze, Improve, and Control. A design of experiment is a structured, organized method for determining the relationship between factors (Xs) affecting a process and the output of that process. Defects per million opportunities (DPMO) is the number of defects observed during a standard production run divided by the number of opportunities to make a defect during that run, multiplied by one million. Defects per opportunity (DPO) represents total defects divided by total opportunities. DPO is a preliminary calculation to help you calculate DPMO (defects per million opportunities). Multiply DPO by one million to calculate DPMO. Defects per unit (DPU) represents the number of defects divided by the number of products. Check to obtain a two-sided confidence interval for the difference between each treatment mean and a control mean. Specify a family error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05. An effect is that which is produced by a cause; the impact a factor (X) has on a response variable (Y). As good as a process can get without capital investment Error, also called residual error, refers to variation in observations made under identical test conditions, or the amount of variation that can not be attributed to the variables included in the experiment. Error that concludes that someone is guilty, when in fact, they really are not. (Ho true, but I rejected it--concluded Ha) ALPHA Error that concludes that someone is not guilty, when in fact, they really are. (Ha true, but I concluded Ho). BETA A factor is an independent variable; an X. Failure mode and effects analysis (FMEA) is a disciplined approach used to identify possible failures of a product or service and then determine the frequency and impact of the failure. See the tool Failure Mode and Effects Analysis. Check to obtain confidence intervals for all pairwise differences between level means using Fisher's LSD procedure. Specify an individual rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05. Predicted values of "Y" calculated using the regression equation for each value of "X" A fitted value is the Y output value that is predicted by a regression equation.
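The DPU, DPO, and DPMO definitions above reduce to simple arithmetic; here is a hedged sketch with made-up counts.

```python
# Hedged arithmetic sketch of DPU, DPO, and DPMO as defined above
# (counts are made up for illustration).
defects = 30
units = 500
opportunities_per_unit = 12

dpu = defects / units                               # defects per unit
dpo = defects / (units * opportunities_per_unit)    # defects per opportunity
dpmo = dpo * 1_000_000                              # defects per million opportunities
print(f"DPU = {dpu:.3f}, DPO = {dpo:.5f}, DPMO = {dpmo:.0f}")
```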

Training Link

Fractional factorial DOE

A fractional factorial design of experiment (DOE) includes selected combinations of factors and levels. It is a carefully prescribed and representative subset of a full factorial design. A fractional factorial DOE is useful when the number of potential factors is relatively large because they reduce the total number of runs required. By reducing the number of runs, a fractional factorial DOE will not be able to evaluate the impact of some of the factors independently. In general, higher-order interactions are confounded with main effects or lower-order interactions. Because higher order interactions are rare, usually you can assume that their effect is minimal and that the observed effect is caused by the main effect or lower-level interaction. A frequency plot is a graphical display of how often data values occur. A full factorial design of experiment (DOE) measures the response of every possible combination of factors and factor levels. These responses are analyzed to provide information about every main effect and every interaction effect. A full factorial DOE is practical when fewer than five factors are being investigated. Testing all combinations of factor levels becomes too expensive and time-consuming with five or more factors. Measurement of distance between individual distributions. As F goes up, P goes down (i.e., more confidence in there being a difference between two means). To calculate: (Mean Square of X / Mean Square of Error) Gage R&R, which stands for gage repeatability and reproducibility, is a statistical tool that measures the amount of variation in the measurement system arising from the measurement device and the people taking the measurement. See Gage R&R tools. A Gantt chart is a visual project planning device used for production scheduling. A Gantt chart graphically displays time needed to complete tasks. Term used to describe % variation explained by X GRPI stands for four critical and interrelated aspects of teamwork: goals, roles, processes, and interpersonal relationships, and it is a tool used to assess them. See the tool GRPI. A histogram is a basic graphing tool that displays the relative frequency or occurrence of continuous data values showing which values occur most and least frequently. A histogram illustrates the shape, centering, and spread of data distribution and indicates whether there are any outliers. See the tool Histogram. Homogeneity of variance is a test used to determine if the variances of two or more samples are different. See the tool Homogeneity of Variance.

C:\Six Sigma\CD Training\04B_analysis_010199.pps - 7

Frequency plot Full factorial DOE F-value (ANOVA) Gage R&R Gantt Chart Goodman-Kruskal Gamma GRPI Histogram Homogeneity of variance

Term

Definition
Hypothesis testing refers to the process of using statistical analysis to determine if the observed differences between two or more samples are due to random chance (as stated in the null hypothesis) or to true differences in the samples (as stated in the alternate hypothesis). A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not the result of chance or an error in sampling. Hypothesis testing is the process of using a variety of statistical tools to analyze data and, ultimately, to accept or reject the null hypothesis. From a practical point of view, finding statistical evidence that the null hypothesis is false allows you to reject the null hypothesis and accept the alternate hypothesis. An I-MR chart, or individual and moving range chart, is a graphical tool that displays process variation over time. It signals when a process may be going out of control and shows where to look for sources of special cause variation. See the tool I-MR Control. In control refers to a process unaffected by special causes. A process that is in control is affected only by common causes. A process that is out of control is affected by special causes in addition to the common causes affecting the mean and/or variance of a process. An independent variable is an input or process variable (X) that can be set directly to achieve a desired output Intangible benefits, also called soft benefits, are the gains attributable to your improvement project that are not reportable for formal accounting purposes. These benefits are not included in the financial calculations because they are nonmonetary or are difficult to attribute directly to quality. Examples of intangible benefits include cost avoidance, customer satisfaction and retention, and increased employee morale. An interaction occurs when the response achieved by one factor depends on the level of the other factor. On interaction plot, when lines are not parallel, there's an interaction. An interrelationship digraph is a visual display that maps out the cause and effect links among complex, multivariable problems or desired outcomes. Intraquartile range (from box plot) representing range between 25th and 75th quartile. Kano analysis is a quality measurement used to prioritize customer requirements. Kruskal-Wallis performs a hypothesis test of the equality of population medians for a one-way design (two or more populations). This test is a generalization of the procedure used by the Mann-Whitney test and, like Moods median test, offers a nonparametric alternative to the one-way analysis of variance. The Kruskal-Wallis test looks for differences among the populations medians. The Kruskal-Wallis test is more powerful (the confidence interval is narrower, on average) than Moods median test for analyzing data from many distributions, including data from the normal distribution, but is less robust against outliers. Kurtosis is a measure of how peaked or flat a curve's distribution is. An L1 spreadsheet calculates defects per million opportunities (DPMO) and a process Z value for discrete data. An L2 spreadsheet calculates the short-term and long-term Z values for continuous data sets. 
A leptokurtic distribution is symmetrical in shape, similar to a normal distribution, but the center peak is much higher; that is, there is a higher frequency of values near the mean. In addition, a leptokurtic distribution has a higher frequency of data in the tail area. Levels are the different settings a factor can have. For example, if you are trying to determine how the response (speed of data transmittal) is affected by the factor (connection type), you would need to set the factor at different levels (modem and LAN) then measure the change in response. Linearity is the variation between a known standard, or "truth," across the low and high end of the gage. It is the difference between an individual's measurements and that of a known standard or truth over the full range of expected values. A lower specification limit is a value above which performance of a product or process is acceptable. This is also known as a lower spec limit or LSL. A lurking variable is an unknown, uncontrolled variable that influences the output of an experiment. A main effect is a measurement of the average change in the output when a factor is changed from its low level to its high level. It is calculated as the average output when a factor is at its high level minus the average output when the factor is at its low level. Statistic within Regression-->Best Fits which is used as a measure of bias (i.e., when predicted is different than truth). Should equal (#vars + 1) Mann-Whitney performs a hypothesis test of the equality of two population medians and calculates the corresponding point estimate and confidence interval. Use this test as a nonparametric alternative to the two-sample t-test. The mean is the average data point value within a data set. To calculate the mean, add all of the individual data points then divide that figure by the total number of data points. Measurement system analysis is a mathematical method of determining how much the variation within the measurement process contributes to overall process variability. The median is the middle point of a data set; 50% of the values are below this point, and 50% are above this point. The most often occurring value in the data set

Training Link

Hypothesis testing

I-MR Chart In control Independent variable Intangible benefits Interaction Interrelationship digraph IQR Kano Analysis Kruskal-Wallis Kurtosis L1 Spreadsheet L2 Spreadsheet Leptokurtic Distribution Levels Linearity LSL Lurking variable Main Effect Mallows Statistic (C-p) Mann-Whitney Mean Measurement system analysis Median Mode
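The Mann-Whitney test described above can be sketched in Python with scipy.stats.mannwhitneyu; the two data sets below are made up for the example.

```python
# Hedged sketch: Mann-Whitney test of two medians on made-up data,
# a nonparametric alternative to the two-sample t-test.
from scipy import stats

line_a = [3.1, 2.8, 3.4, 9.9, 3.0, 3.2]
line_b = [4.0, 4.4, 3.9, 4.2, 11.5, 4.1]

u_stat, p = stats.mannwhitneyu(line_a, line_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p:.4f}")
# p < 0.05: the two population medians are likely different.
```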

C:\Six Sigma\CD Training\04A_efficient_022499.pps - 13

Term

Definition
Moods median test can be used to test the equality of medians from two or more populations and, like the Kruskal-Wallis Test, provides an nonparametric alternative to the one-way analysis of variance. Moods median test is sometimes called a median test or sign scores test. Moods Median Test tests: H0: the population medians are all equal versus H1: the medians are not all equal An assumption of Moods median test is that the data from each population are independent random samples and the population distributions have the same shape. Moods median test is robust against outliers and errors in data and is particularly appropriate in the preliminary stages of analysis. Moods Median test is more robust than is the Kruskal-Wallis test against outliers, but is less powerful for data from many distributions, including the normal. Multicolinearity is the degree of correlation between Xs. It is an important consideration when using multiple regression on data that has been collected without the aid of a design of experiment (DOE). A high degree of multicolinearity may lead to regression coefficients that are too large or are headed in the wrong direction from that you had expected based on your knowledge of the process. High correlations between Xs also may result in a large p-value for an X that changes when the intercorrelated X is dropped from the equation. The variance inflation factor provides a measure of the degree of multicolinearity. Multiple regression is a method of determining the relationship between a continuous process output (Y) and several factors (Xs). A multi-vari chart is a tool that graphically displays patterns of variation. It is used to identify possible Xs or families of variation, such as variation within a subgroup, between subgroups, or over time. See the tool Multi-Vari Chart. Process input that consistently causes variation in the output measurement that is random and expected and, therefore, not controlled is called noise. Noise also is referred to as white noise, random variation, common cause variation, noncontrollable variation, and within-group variation. It refers to the value that you estimate in a design process that approximate your real CTQ (Y) target value based on the design element capacity. Nominals are usually referred to as point estimate and related to y-hat model. Set of tools that avoids assuming a particular distribution. Normal distribution is the spread of information (such as product performance or demographics) where the most frequently occurring value is in the middle of the range and other probabilities tail off symmetrically in both directions. Normal distribution is graphically categorized by a bell-shaped curve, also known as a Gaussian distribution. For normally distributed data, the mean and median are very close and may be identical. Used to check whether observations follow a normal distribution. P > 0.05 = data is normal A normality test is a statistical process used to determine if a sample or any group of data fits a standard normal distribution. A normality test can be performed mathematically or graphically. See the tool Normality Test. A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. According to the null hypothesis, any observed difference in samples is due to chance or sampling error. It is written mathematically as follows: H0: m1 = m2 H0: s1 = s2. Defines what you expect to observe. (e.g., all means are same or independent). 
(P > 0.05) An opportunity is anything that you inspect, measure, or test on a unit that provides a chance of allowing a defect. An outlier is a data point that is located far from the rest of the data. Given a mean and standard deviation, a statistical distribution expects data points to fall within a specific range. Those that do not are called outliers and should be investigated to ensure that the data is correct. If the data is correct, you have witnessed a rare event or your process has changed. In either case, you need to understand what caused the outliers to occur. Percent of tolerance is calculated by taking the measurement error of interest, such as repeatability and/or reproducibility, dividing by the total tolerance range, then multiplying the result by 100 to express the result as a percentage. A platykurtic distribution is one in which most of the values share about the same frequency of occurrence. As a result, the curve is very flat, or plateau-like. Uniform distributions are platykurtic. Pooled standard deviation is the standard deviation remaining after removing the effect of special cause variation-such as geographic location or time of year. It is the average variation of your subgroups. Measurement of the certainty of the scatter about a certain regression line. A 95% prediction band indicates that, in general, 95% of the points will be contained within the bands. Probability refers to the chance of something happening, or the fraction of occurrences over a large number of trials. Probability can range from 0 (no chance) to 1 (full certainty). Probability of defect is the statistical chance that a product or process will not meet performance specifications or lie within the defined upper and lower specification limits. It is the ratio of expected defects to the total output and is expressed as p(d). Process capability can be determined from the probability of defect. Process capability refers to the ability of a process to produce a defect-free product or service. Various indicators are used-some address overall performance, some address potential performance. Concluding something is good when it is actually bad (TYPE I Error)

Training Link

Moods Median

Multicolinearity Multiple regression Multi-vari chart Noise Nominal Non-parametric Normal Distribution Normal probability Normality test Null Hypothesis (Ho) Opportunity Outlier Percent of tolerance Platykurtic Distribution Pooled Standard Deviation Prediction Band (or interval) Probability Probability of Defect Process Capability Producers Risk
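A quick normality check corresponding to the normality test definition above can be sketched in Python. SciPy's Anderson-Darling routine reports critical values rather than a p-value, so the Shapiro-Wilk test is used here as a stand-in; the interpretation (p > 0.05 suggests the data are consistent with a normal distribution) is the same, and the data are generated for illustration.

```python
# Hedged sketch: normality check on made-up data using the Shapiro-Wilk test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
data = rng.normal(loc=10.0, scale=2.0, size=60)

w_stat, p = stats.shapiro(data)
print(f"W = {w_stat:.3f}, p = {p:.4f}")
# p > 0.05: no evidence against normality; p < 0.05: data look non-normal.
```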

Term
p-value

Definition
The p-value represents the probability of concluding (incorrectly) that there is a difference in your samples when no true difference exists. It is a statistic calculated by comparing the distribution of given sample data and an expected distribution (normal, F, t, etc.) and is dependent upon the statistical test being performed. For example, if two samples are being compared in a t-test, a p-value of 0.05 means that there is only a 5% chance of arriving at the calculated t value if the samples were not different (from the same population). In other words, a p-value of 0.05 means there is only a 5% chance that you would be wrong in concluding the populations are different. P-value < 0.05 = safe to conclude there's a difference. P-value = risk of wasting time investigating further. 25th percentile (from box plot) 75th percentile (from box plot) Discrete data Quality function deployment (QFD) is a structured methodology used to identify customers' requirements and translate them into key process deliverables. In Six Sigma, QFD helps you focus on ways to improve your process or product to meet customers' expectations. See the tool Quality Function Deployment. Continuous data A radar chart is a graphical display of the differences between actual and ideal performance. It is useful for defining performance and identifying strengths and weaknesses. Running experiments in a random order, not the standard order in the test layout. Helps to eliminate the effect of "lurking variables," uncontrolled factors which might vary over the length of the experiment. A rational subgroup is a subset of data defined by a specific factor such as a stratifying factor or a time period. Rational subgrouping identifies and separates special cause variation (variation between subgroups caused by specific, identifiable factors) from common cause variation (unexplained, random variation caused by factors that cannot be pinpointed or controlled). A rational subgroup should exhibit only common cause variation.

Training Link

Q1 Q3 Qualitative data Quality Function Deployment Quantitative data Radar Chart Randomization Rational Subgroup

Regression analysis

Regression analysis is a method of analysis that enables you to quantify the relationship between two or more variables (X) and (Y) by fitting a line or plane through all the points such that they are evenly distributed about the line or plane. Visually, the best-fit line is represented on a scatter plot by a line or plane. Mathematically, the line or plane is represented by a formula that is referred to as the regression equation. The regression equation is used to model process performance (Y) based on a given value or values of the process variable (X). Repeatability is the variation in measurements obtained when one person takes multiple measurements using the same techniques on the same parts or items. Number of times you ran each corner. Ex. 2 replicates means you ran one corner twice. Replication occurs when an experimental treatment is set up and conducted more than once. If you collect two data points at each treatment, you have two replications. In general, plan on making between two and five replications for each treatment. Replicating an experiment allows you to estimate the residual or experimental error. This is the variation from sources other than the changes in factor levels. A replication is not two measurements of the same data point but a measurement of two data points under the same treatment conditions. For example, to make a replication, you would not have two persons time the response of a call from the northeast region during the night shift. Instead, you would time two calls into the northeast region's help desk during the night shift. Reproducibility is the variation in average measurements obtained when two or more people measure the same parts or items using the same measuring technique. A residual is the difference between the actual Y output value and the Y output value predicted by the regression equation. The residuals in a regression model can be analyzed to reveal inadequacies in the model. Also called "errors" Resolution is a measure of the degree of confounding among effects. Roman numerals are used to denote resolution. The resolution of your design defines the amount of information that can be provided by the design of experiment. As with a computer screen, the higher the resolution of your design, the more detailed the information you will see. The lowest resolution you can have is resolution III. A robust process is one that is operating at 6 sigma and is therefore resistant to defects. Robust processes exhibit very good short-term process capability (high short-term Z values) and a small Z shift value. In a robust process, the critical elements usually have been designed to prevent or eliminate opportunities for defects; this effort ensures sustainability of the process. Continual monitoring of robust processes is not usually needed, although you may wish to set up periodic audits as a safeguard. Rolled throughput yield is the probability that a single unit can pass through a series of process steps free of defects. A mathematical term describing how much variation is being explained by the X. FORMULA: R-sq = SS(regression) / SS(total) Answers question of how much of total variation is explained by X. Caution: R-sq increases as number of data points increases. Pg. 13 analyze Unlike R-squared, R-squared adjusted takes into account the number of X's and the number of data points. 
FORMULA: R-sq (adj) = 1 - [(SS(error)/DF(error)) / (SS(total)/DF(total))]. Takes into account the number of X's and the number of data points...also answers: how much of total variation is explained by X. A portion or subset of units taken from the population whose characteristics are actually measured. The sample size calculator is a spreadsheet tool used to determine the number of data points, or sample size, needed to estimate the properties of a population. See the tool Sample Size Calculator.

Repeatability Replicates

Replication

Reproducibility Residual Resolution

Robust Process Rolled Throughput Yield R-squared R-Squared R-squared (adj) R-Squared adjusted Sample Sample Size Calc.
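The R-squared and adjusted R-squared formulas above can be worked through numerically; this is a hedged sketch on made-up data for a simple one-X regression.

```python
# Hedged sketch: R-squared and adjusted R-squared for a made-up simple
# regression, computed from the sums of squares defined above.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.3, 3.1, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9])

slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

ss_total = np.sum((y - y.mean()) ** 2)
ss_error = np.sum((y - y_hat) ** 2)
n, p = len(y), 1                     # p = number of X's (predictors)

r_sq = 1 - ss_error / ss_total
r_sq_adj = 1 - (ss_error / (n - p - 1)) / (ss_total / (n - 1))
print(f"R-sq = {r_sq:.3f}, R-sq(adj) = {r_sq_adj:.3f}")
```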

Term
Sampling scatter plot Scorecard Screening DOE Segmentation S-hat Model Sigma Simple Linear Regression SIPOC

Definition
Sampling is the practice of gathering a subset of the total data available from a process or a population. A scatter plot, also called a scatter diagram or a scattergram, is a basic graphic tool that illustrates the relationship between two variables. The dots on the scatter plot represent data points. See the tool Scatter Plot. A scorecard is an evaluation device, usually in the form of a questionnaire, that specifies the criteria your customers will use to rate your business's performance in satisfying their requirements. A screening design of experiment (DOE) is a specific type of a fractional factorial DOE. A screening design is a resolution III design, which minimizes the number of runs required in an experiment. A screening DOE is practical when you can assume that all interactions are negligible compared to main effects. Use a screening DOE when your experiment contains five or more factors. Once you have screened out the unimportant factors, you may want to perform a fractional or full-fractional DOE. Segmentation is a process used to divide a large group into smaller, logical categories for analysis. Some commonly segmented entities are customers, data sets, or markets. It describes the relationship between output variance and input nominals The Greek letter (sigma) refers to the standard deviation of a population. Sigma, or standard deviation, is used as a scaling factor to convert upper and lower specification limits to Z. Therefore, a process with three standard deviations between its mean and a spec limit would have a Z value of 3 and commonly would be referred to as a 3 sigma process. Simple linear regression is a method that enables you to determine the relationship between a continuous process output (Y) and one factor (X). The relationship is typically expressed in terms of a mathematical equation such as Y = b + mX SIPOC stands for suppliers, inputs, process, output, and customers. You obtain inputs from suppliers, add value through your process, and provide an output that meets or exceeds your customer's requirements. Most often, the median is used as a measure of central tendency when data sets are skewed. The metric that indicates the degree of asymmetry is called, simply, skewness. Skewness often results in situations when a natural boundary is present. Normal distributions will have a skewness value of approximately zero. Right-skewed distributions will have a positive skewness value; left-skewed distributions will have a negative skewness value. Typically, the skewness value will range from negative 3 to positive 3. Two examples of skewed data sets are salaries within an organization and monthly prices of homes for sale in a particular area. A measure of variation for "S-shaped" fulfillment Y's Unlike common cause variability, special cause variation is caused by known factors that result in a non-random distribution of output. Also referred to as "exceptional" or "assignable" variation. Example: Few X's with big impact. The spread of a process represents how far data points are distributed away from the mean, or center. Standard deviation is a measure of spread. The Six Sigma process report is a Minitab tool that calculates process capability and provides visuals of process performance. See the tool Six Sigma Process Report. The Six Sigma product report is a Minitab tool that calculates the DPMO and short-term capability of your process. See the tool Six Sigma Product Report. Stability represents variation due to elapsed time. 
It is the difference between an individual's measurements taken of the same parts after an extended period of time using the same techniques. Standard deviation is a measure of the spread of data in relation to the mean. It is the most common measure of the variability of a set of data. If the standard deviation is based on a sampling, it is referred to as "s." If the entire data population is used, standard deviation is represented by the Greek letter sigma (σ). The standard deviation (together with the mean) is used to measure the degree to which the product or process falls within specifications. The lower the standard deviation, the more likely the product or service falls within spec. When the standard deviation is calculated in relation to the mean of all the data points, the result is an overall standard deviation. When the standard deviation is calculated in relation to the means of subgroups, the result is a pooled standard deviation. Together with the mean, both overall and pooled standard deviations can help you determine your degree of control over the product or process.

Training Link

Skewness Span Special cause variability Spread SS Process Report SS Product Report Stability
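The sample versus population standard deviation distinction above comes down to the denominator used; here is a hedged sketch on made-up data.

```python
# Hedged sketch: sample standard deviation "s" (n - 1 in the denominator)
# versus population standard deviation sigma (n in the denominator),
# on made-up data.
import numpy as np

data = np.array([9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3])

s = data.std(ddof=1)       # sample estimate, s
sigma = data.std(ddof=0)   # population value, sigma (entire population measured)
print(f"mean = {data.mean():.3f}, s = {s:.3f}, sigma = {sigma:.3f}")
```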

Step 12 p.103

Standard Deviation (s)

Standard Order Statistic Statistical Process Control (SPC) Stratification Subgrouping Tolerance Range Total Observed Variation Total Prob of Defect

Design of experiment (DOE) treatments often are presented in a standard order. In a standard order, the first factor alternates between the low and high setting for each treatment. The second factor alternates between low and high settings every two treatments. The third factor alternates between low and high settings every four treatments. Note that each time a factor is added, the design doubles in size to provide all combinations for each level of the new factor. Any number calculated from sample data, describes a sample characteristic Statistical process control is the application of statistical methods to analyze and control the variation of a process. A stratifying factor, also referred to as stratification or a stratifier, is a factor that can be used to separate data into subgroups. This is done to investigate whether that factor is a significant special cause factor. Measurement of where you can get. Tolerance range is the difference between the upper specification limit and the lower specification limit. Total observed variation is the combined variation from all sources, including the process and the measurement system. The total probability of defect is equal to the sum of the probability of defect above the upper spec limit-p(d), upper-and the probability of defect below the lower spec limit-p(d), lower.

Term
Transfer function Transformations Trivial many T-test Tukey's (1-wayANOVA): Unexplained Variation (S) Unit USL Variation Variation (common cause) Variation (special cause) Whisker Yield Z Z bench Z lt Z shift Z st

Definition
A transfer function describes the relationship between lower level requirements and higher level requirements. If it describes the relationship between the nominal values, then it is called a y-hat model. If it describes the relationship between the variations, then it is called an s-hat model. Used to make non-normal data look more normal. The trivial many refers to the variables that are least likely responsible for variation in a process, product, or service. A t-test is a statistical tool used to determine whether a significant difference exists between the means of two distributions or the mean of one distribution and a target value. See the t-test tools. Check to obtain confidence intervals for all pairwise differences between level means using Tukey's method (also called Tukey's HSD or TukeyKramer method). Specify a family error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05. Regression statistical output that shows the unexplained variation in the data. Se = sqrt((sum(yi-y_bar)^2)/(n-1)) A unit is any item that is produced or processed. An upper specification limit, also known as an upper spec limit, or USL, is a value below which performance of a product or process is acceptable. Variation is the fluctuation in process output. It is quantified by standard deviation, a measure of the average spread of the data around the mean. Variation is sometimes called noise. Variance is squared standard deviation. Common cause variation is fluctuation caused by unknown factors resulting in a steady but random distribution of output around the average of the data. It is a measure of the process potential, or how well the process can perform when special cause variation is removed; therefore, it is a measure of the process's technology. Also called, inherent variation Special cause variation is a shift in output caused by a specific factor such as environmental conditions or process input parameters. It can be accounted for directly and potentially removed and is a measure of process control, or how well the process is performing compared to its potential. Also called non-random variation. From box plot...displays minimum and maximum observations within 1.5 IQR (75th-25th percentile span) from either 25th or 75th percentile. Outlier are those that fall outside of the 1.5 range. Yield is the percentage of a process that is free of defects. A Z value is a data point's position between the mean and another location as measured by the number of standard deviations. Z is a universal measurement because it can be applied to any unit of measure. Z is a measure of process capability and corresponds to the process sigma value that is reported by the businesses. For example, a 3 sigma process means that three standard deviations lie between the mean and the nearest specification limit. Three is the Z value. Z bench is the Z value that corresponds to the total probability of a defect Z long term (ZLT) is the Z bench calculated from the overall standard deviation and the average output of the current process. Used with continuous data, ZLT represents the overall process capability and can be used to determine the probability of making out-of-spec parts within the current process. Z shift is the difference between ZST and ZLT. The larger the Z shift, the more you are able to improve the control of the special factors identified in the subgroups. 
ZST represents the process capability when special factors are removed and the process is properly centered. ZST is the metric by which processes are compared.
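The link between Z values and the probability of defect described above can be sketched with the standard normal distribution in Python; the numbers are illustrative only.

```python
# Hedged sketch: relating Z to the probability of defect with the standard
# normal distribution (made-up numbers). A Z of about 3 on one tail
# corresponds to roughly 1,350 defects per million opportunities.
from scipy.stats import norm

z_bench = 3.0
p_defect = norm.sf(z_bench)            # tail area beyond the spec limit
print(f"Z = {z_bench:.1f} -> p(defect) = {p_defect:.6f} "
      f"({p_defect * 1_000_000:.0f} DPMO)")

# Going the other way: the Z value implied by an observed defect probability.
print(f"p(defect) = 0.0010 -> Z = {norm.isf(0.0010):.2f}")
```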

Training Link

GEAE CD (Control)

Tool

What does it do?

Why use?

When use?

Data Type

P < .05 indicates

Picture

1-Sample t-Test

Compares mean to target

The 1-sample t-test is useful in identifying a significant difference between a sample mean and a specified value when the difference is not readily apparent from graphical tools. Using the 1-sample t-test to compare data gathered before process improvements and after is a way to prove that the mean has actually shifted.

The 1-sample t-test is used with continuous data any time you need to compare a sample mean to a specified value. This is useful when you need to make judgments about a process based on a sample output from that process.

Continuous X & Y

Not equal
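As a rough illustration of the 1-sample t-test outside Minitab, here is a minimal Python sketch using scipy with hypothetical cycle-time data and a target of 12 days; a p-value below 0.05 suggests the sample mean differs from the target.

from scipy import stats

# Hypothetical cycle times (days) after a process change; target mean is 12 days
sample = [11.2, 12.8, 10.9, 11.5, 12.1, 11.7, 10.8, 11.9, 12.3, 11.1]
target = 12.0

t_stat, p_value = stats.ttest_1samp(sample, popmean=target)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Evidence that the mean differs from the target.")
else:
    print("No evidence of a difference from the target.")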

1-Way ANOVA

ANOVA tests to see if the difference between the means of each level is significantly more than the variation within each level. 1-way ANOVA is used when two or more means (a single factor with three or more levels) must be compared with each other.

One-way ANOVA is useful for identifying a statistically significant difference between means of three or more levels of a factor.

Use 1-way ANOVA when you need to compare three or more means (a single factor with three or more levels) and determine how much of the total observed variation can be explained by the factor.

Continuous Y, Discrete Xs

At least one group of data is different than at least one other group.
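A minimal Python sketch of a one-way ANOVA on hypothetical data from three fixtures, using scipy.stats.f_oneway; a p-value below 0.05 says at least one fixture mean differs from at least one other.

from scipy import stats

# Hypothetical output from three fixtures (a single factor with three levels)
fixture_a = [12.1, 11.8, 12.4, 12.0, 11.9]
fixture_b = [12.6, 12.9, 12.7, 13.1, 12.8]
fixture_c = [12.2, 12.0, 12.3, 12.1, 12.4]

f_stat, p_value = stats.f_oneway(fixture_a, fixture_b, fixture_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05: at least one fixture mean differs from at least one other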

2-Sample t-Test

A statistical test used to detect differences between means of two populations.

The 2-sample t-test is useful for identifying a significant difference between means of two levels (subgroups) of a factor. It is also extremely useful for identifying important Xs for a project Y.

When you have two samples of continuous data, and you need to know if they both come from the same population or if they represent two different populations

Continuous X & Y

There is a difference in the means
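A minimal scipy sketch comparing two hypothetical samples of continuous data; Welch's form of the test is used here so equal variances are not assumed.

from scipy import stats

# Hypothetical before/after samples of the same continuous measurement
before = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7]
after  = [4.6, 4.4, 4.8, 4.5, 4.7, 4.3, 4.6, 4.5]

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05: there is a difference in the means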

ANOVA GLM

ANOVA General Linear Model (GLM) is a statistical tool used to test for differences in means. ANOVA tests to see if the difference between the means of each level is significantly more than the variation within each level. ANOVA GLM is used to test the effect of two or more factors with multiple levels, alone and in combination, on a dependent variable.

The General Linear Model allows you to learn one form of ANOVA that can be used for all tests of mean differences involving two or more factors or levels. Because ANOVA GLM is useful for identifying the effect of two or more factors (independent variables) on a dependent variable, it is also extremely useful for identifying important Xs for a project Y. ANOVA GLM also yields a percent contribution that quantifies the variation in the response (dependent variable) due to the individual factors and combinations of factors.

You can use ANOVA GLM any time you need to identify a statistically significant difference in the mean of the dependent variable due to two or more factors with multiple levels, alone and in combination. ANOVA GLM also can be used to quantify the amount of variation in the response that can be attributed to a specific factor in a designed experiment.

Continuous Y & all X's

At least one group of data is different than at least one other group.
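A minimal sketch of a two-factor general linear model using the statsmodels library on hypothetical data (factor names machine and shift are made up); anova_lm reports a p-value for each factor and for their interaction.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: response y with two discrete factors (machine and shift)
df = pd.DataFrame({
    "machine": ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B", "A", "B"],
    "shift":   ["1", "2", "1", "2", "1", "2", "2", "1", "1", "2", "2", "1"],
    "y":       [10.2, 11.1, 10.4, 12.0, 11.5, 12.3, 11.0, 10.1, 11.7, 12.1, 10.9, 11.6],
})

# Fit a linear model with both factors and their interaction, then build the ANOVA table
model = smf.ols("y ~ C(machine) * C(shift)", data=df).fit()
print(anova_lm(model, typ=2))   # p < 0.05 on a row: that factor (or interaction) matters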

Benchmarking

Benchmarking is an improvement tool whereby a company: measures its performance or process against other companies' best-in-class practices, determines how those companies achieved their performance levels, and uses the information to improve its own performance.

Benchmarking is an important tool in the improvement of your process for several reasons. First, it allows you to compare your relative position for this product or service against industry leaders or other companies outside your industry who perform similar functions. Second, it helps you identify potential Xs by comparing your process to the benchmarked process. Third, it may encourage innovative or direct applications of solutions from other businesses to your product or process. And finally, benchmarking can help to build acceptance for your project's results when they are compared to benchmark data obtained from industry leaders.

Benchmarking can be done at any point in the Six Sigma process when you need to develop a new process or improve an existing one.

all

N/A

Best Subsets

Tells you the best X to use when you're comparing multiple X's in regression assessment.

Best Subsets is an efficient way to select a group of "best subsets" for further analysis by selecting the smallest subset that fulfills certain statistical criteria. The subset model may actually estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors.

Typically used before or after a multiple-regression analysis. Particularly useful in determining which X combination yields the best R-sq value.

Continuous X & Y

N/A

Binary Logistic Regression

Binary logistic regression can be used to model the relationship between a discrete binary Y and discrete and/or continuous Xs. The predicted values will be probabilities p(d) of an event such as success or failure, not an event count. The predicted values will be bounded between zero and one (because they are probabilities).

Binary logistic regression is useful in two important applications: analyzing the differences among discrete Xs and modeling the relationship between a discrete binary Y and discrete and/or continuous Xs.

Generally speaking, logistic regression is used when the Ys are discrete and the Xs are continuous.

Defectives Y / Continuous & Discrete X

The goodness-of-fit tests, with p-values ranging from 0.312 to 0.724, indicate that there is insufficient evidence that the model does not fit the data adequately. If a goodness-of-fit p-value is less than your accepted alpha level, the test indicates sufficient evidence to conclude an inadequate fit.
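A minimal sketch of a binary logistic fit using the statsmodels library on hypothetical pass/fail data against a single continuous X (the column name temp is made up); the fitted model returns defect probabilities bounded between zero and one.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: outcome (1 = defective) vs. a continuous X (temperature)
df = pd.DataFrame({
    "defective": [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0],
    "temp":      [48, 50, 52, 61, 49, 63, 55, 51, 58, 66, 47, 53],
})

# Logit model: predicted values are probabilities of the event, between 0 and 1
model = smf.logit("defective ~ temp", data=df).fit(disp=False)
print(model.summary())
print(model.predict(pd.DataFrame({"temp": [50, 60]})))  # predicted defect probabilities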

Box Plot

A box plot is a basic graphing tool that displays the centering, spread, and distribution of a continuous data set. In simplified terms, it is made up of a box and whiskers (and occasional outliers) that correspond to each fourth, or quartile, of the data set. The box represents the second and third quartiles of data. The line that bisects the box is the median of the entire data set-50% of the data points fall below this line and 50% fall above it. The first and fourth quartiles are represented by "whiskers," or lines that extend from both ends of the box.

A box plot can help you visualize the centering, spread, and distribution of your data quickly. It is especially useful to view more than one box plot simultaneously to compare the performance of several processes such as the price quote cycle between offices or the accuracy of component placement across several production lines. A box plot can help identify candidates for the causes behind your list of potential Xs. It also is useful in tracking process improvement by comparing successive plots generated over time.

You can use a box plot throughout an improvement project, although it is most useful in the Analyze phase. In the Measure phase you can use a box plot to begin to understand the nature of a problem. In the Analyze phase a box plot can help you identify potential Xs that should be investigated further. It also can help eliminate potential Xs. In the Improve phase you can use a box plot to validate potential improvements

Continuous X & Y

N/A

Box-Cox Transformation

Used to find the mathematical function needed to translate a continuous but nonnormal distribution into a normal distribution. After you have entered your data, Minitab tells you what mathematical function can be applied to each of your data points to bring your data closer to a normal distribution.

Many tools require that data be normally distributed to produce accurate results. If the data set is not normal, this may reduce significantly the confidence in the results obtained.

If your data is not normally distributed, you may encounter problems: in calculating Z values with continuous data, you could calculate an inaccurate representation of your process capability; in constructing control charts, your process may appear more or less in control than it really is; and in hypothesis testing, as your data becomes less normal, the results of your tests may not be valid.

Continuous X & Y

N/A
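A minimal sketch of the transformation using scipy.stats.boxcox on hypothetical right-skewed data; the function also returns the lambda it selected, much as Minitab reports the chosen power.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical right-skewed data (e.g., repair times); Box-Cox requires positive values
skewed = rng.lognormal(mean=1.0, sigma=0.6, size=200)

transformed, lam = stats.boxcox(skewed)
w_before, p_before = stats.shapiro(skewed)
w_after, p_after = stats.shapiro(transformed)
print(f"Estimated lambda: {lam:.2f}")
print(f"Normality p-value before: {p_before:.4f}, after: {p_after:.4f}")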

Brainstorming

Brainstorming is a tool that allows for open and creative thinking. It encourages all team members to participate and to build on each other's creativity.

Brainstorming is helpful because it allows your team to generate many ideas on a topic creatively and efficiently without criticism or judgment.

Brainstorming can be used any time you and your team need to creatively generate numerous ideas on any topic. You will use brainstorming many times throughout your project whenever you feel it is appropriate. You also may incorporate brainstorming into other tools, such as QFD, tree diagrams, process mapping, or FMEA.

all

N/A

c Chart

A graphical tool that allows you to view the actual number of defects in each subgroup. Unlike continuous data control charts, discrete data control charts can monitor many product quality characteristics simultaneously. For example, you could use a c chart to monitor many types of defects in a call center process (like hang ups, incorrect information given, disconnections) on a single chart when the subgroup size is constant.

The c chart is a tool that will help you determine if your process is in control by determining whether special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring your process into control.

Use a c chart in the Control phase to verify that your process remains in control after the sources of special cause variation have been removed. The c chart is used for processes that generate discrete data. The c chart monitors the number of defects per sample taken from a process. You should record between 5 and 10 readings, and the sample size must be constant. The c chart can be used in both low- and high-volume environments.

Continuous X, Attribute Y

N/A
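Because the c chart formulas are simple, the centerline and control limits can be sketched directly, as below (hypothetical defect counts, constant subgroup size); Minitab produces the same limits along with the plotted chart.

import numpy as np

# Hypothetical defect counts per subgroup (constant subgroup size), e.g. per day of calls
defects = np.array([7, 5, 9, 6, 4, 8, 7, 10, 5, 6, 8, 7, 6, 9, 5])

c_bar = defects.mean()                      # centerline
ucl = c_bar + 3 * np.sqrt(c_bar)            # upper control limit
lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))  # lower control limit (cannot go below zero)

print(f"CL = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
out_of_control = np.where((defects > ucl) | (defects < lcl))[0]
print("Out-of-control subgroups (0-based index):", out_of_control)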

CAP Includes/Excludes

A group exercise used to establish scope and facilitate discussion. Effort focuses on delineating project boundaries.

Encourages group participation. Increases individual involvement and understanding of team efforts. Prevents errant team efforts in later project stages (waste). Helps to orient new team members.

Define

all

N/A

CAP Stakeholder Analysis

Confirms management or stakeholder acceptance and prioritization of project and team efforts.

Helps to eliminate low priority projects. Ensures management support and compatibility with business goals.

Define

all

N/A

Capability Analysis

Capability analysis is a MinitabTM tool that visually compares actual process performance to the performance standards. The capability analysis output includes an illustration of the data and several performance statistics. The plot is a histogram with the performance standards for the process expressed as upper and lower specification limits (USL and LSL). A normal distribution curve is calculated from the process mean and standard deviation; this curve is overlaid on the histogram. Beneath this graphic is a table listing several key process parameters such as mean, standard deviation, capability indexes, and parts per million (ppm) above and below the specification limits.

When describing a process, it is important to identify sources of variation as well as process segments that do not meet performance standards. Capability analysis is a useful tool because it illustrates the centering and spread of your data in relation to the performance standards and provides a statistical summary of process performance. Capability analysis will help you describe the problem and evaluate the proposed solution in statistical terms.

Capability analysis is used with continuous data whenever you need to compare actual process performance to the performance standards. You can use this tool in the Measure phase to describe process performance in statistical terms. In the Improve phase, you can use capability analysis when you optimize and confirm your proposed solution. In the Control phase, capability analysis will help you compare the actual improvement of your process to the performance standards.

Continuous X & Y

N/A
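For continuous data with known spec limits, the core numbers in a capability analysis can be approximated as in the Python sketch below (hypothetical data and limits); Minitab's report adds within-subgroup indices, a fitted normal curve, and expected ppm figures.

import numpy as np

# Hypothetical process data and performance standards (spec limits)
rng = np.random.default_rng(7)
data = rng.normal(100.2, 1.1, size=250)
lsl, usl = 97.0, 103.0

mu, sigma = data.mean(), data.std(ddof=1)   # overall (long-term) estimates
pp  = (usl - lsl) / (6 * sigma)             # overall capability index
ppk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
ppm_above = (data > usl).mean() * 1e6       # observed ppm beyond each spec limit
ppm_below = (data < lsl).mean() * 1e6

print(f"mean = {mu:.2f}, sd = {sigma:.2f}, Pp = {pp:.2f}, Ppk = {ppk:.2f}")
print(f"observed ppm > USL: {ppm_above:.0f}, ppm < LSL: {ppm_below:.0f}")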

Cause and Effect Diagram

A cause and effect diagram is a visual tool that logically organizes possible causes for a specific problem or effect by graphically displaying them in increasing detail. It is sometimes called a fishbone diagram because of its fishbone shape. This shape allows the team to see how each cause relates to the effect. It then allows you to determine a classification related to the impact and ease of addressing each cause.

A cause and effect diagram allows your team to explore, identify, and display all of the possible causes related to a specific problem. The diagram can increase in detail as necessary to identify the true root cause of the problem. Proper use of the tool helps the team organize thinking so that all the possible causes of the problem, not just those from one person's viewpoint, are captured. Therefore, the cause and effect diagram reflects the perspective of the team as a whole and helps foster consensus in the results because each team member can view all the inputs

You can use the cause and effect diagram whenever you need to break an effect down into its root causes. It is especially useful in the Measure, Analyze, and Improve phases of the DMAIC process

all

N/A

Chi Square--Test of Independence

The chi square-test of independence is a test of association (nonindependence) between discrete variables. It is also referred to as the test of association. It is based on a mathematical comparison of the number of observed counts against the expected number of counts to determine if there is a difference in output counts based on the input category. Example: The number of units failing inspection on the first shift is greater than the number of units failing inspection on the second shift. Example: There are fewer defects on the revised application form than there were on the previous application form.

The chi square-test of independence is useful for identifying a significant difference between count data for two or more levels of a discrete variable Many statistical problem statements and performance improvement goals are written in terms of reducing DPMO/DPU. The chi square-test of independence applied to before and after data is a way to prove that the DPMO/DPU have actually been reduced.

When you have discrete Y and X data (nominal data in a table-of-total-counts format, shown in fig. 1) and need to know if the Y output counts differ for two or more subgroup categories (Xs), use the chi square test. If you have raw data (untotaled), you need to form the contingency table. Use Stat > Tables > Cross Tabulation and check the Chisquare analysis box.

discrete (category or count)

At least one group is statistically different.
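A minimal sketch of the test on a hypothetical two-shift contingency table using scipy.stats.chi2_contingency; if your raw data were untotaled you would first build this table of counts, as the entry above describes for Minitab.

from scipy.stats import chi2_contingency

# Hypothetical contingency table of counts: rows = shift, columns = pass/fail
table = [[180, 20],    # first shift:  180 passed, 20 failed
         [165, 35]]    # second shift: 165 passed, 35 failed

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# p < 0.05: failure counts depend on shift (at least one group differs)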

Control Charts

Control charts are time-ordered graphical displays of data that plot process variation over time. Control charts are the major tools used to monitor processes to ensure they remain stable. Control charts are characterized by a centerline, which represents the process average, or the middle point about which plotted measures are expected to vary randomly, and upper and lower control limits, which define the area three standard deviations on either side of the centerline. Control limits reflect the expected range of variation for that process. Control charts determine whether a process is in control or out of control. A process is said to be in control when only common causes of variation are present. This is represented on the control chart by data points fluctuating randomly within the control limits. Data points outside the control limits and those displaying nonrandom patterns indicate special cause variation. When special cause variation is present, the process is said to be out of control. Control charts identify when special cause is acting on the process but do not identify what the special cause is. There are two categories of control charts, characterized by the type of data you are working with: continuous data control charts and discrete data control charts.

Control charts serve as a tool for the ongoing control of a process and provide a common language for discussing process performance. They help you understand variation and use that knowledge to control and improve your process. In addition, control charts function as a monitoring system that alerts you to the need to respond to special cause variation so you can put in place an immediate remedy to contain any damage.

In the Measure phase, use control charts to understand the performance of your process as it exists before process improvements. In the Analyze phase, control charts serve as a troubleshooting guide that can help you identify sources of variation (Xs). In the Control phase, use control charts to : 1. Make sure the vital few Xs remain in control to sustain the solution - 2. Show process performance after full-scale implementation of your solution. You can compare the control chart created in the Control phase with that from the Measure phase to show process improvement -3. Verify that the process remains in control after the sources of special cause variation have been removed

all

N/A


Data Collection Plan

Failing to establish a data collection plan can be an expensive mistake in a project. Without a plan, data collection may be haphazard, resulting in insufficient, unnecessary, or inaccurate information. This is often called "bad" data. A data collection plan provides a basic strategy for collecting accurate data efficiently.

Any time data is needed, you should draft a data collection plan before beginning to collect it.

all

N/A

Design Analysis Spreadsheet

The design analysis spreadsheet is an MS-Excel workbook that has been designed to perform partial derivative analysis and root sum of squares analysis. The design analysis spreadsheet provides a quick way to predict the mean and standard deviation of an output measure (Y), given the means and standard deviations of the inputs (Xs). This will help you develop a statistical model of your product or process, which in turn will help you improve that product or process. The partial derivative of Y with respect to X is called the sensitivity of Y with respect to X or the sensitivity coefficient of X. For this reason, partial derivative analysis is sometimes called sensitivity analysis.

The design analysis spreadsheet can help you improve, revise, and optimize your design. It can also: improve a product or process by identifying the Xs which have the most impact on the response; identify the factors whose variability has the highest influence on the response and target their improvement by adjusting tolerances; identify the factors that have low influence and can be allowed to vary over a wider range; be used with the Solver** optimization routine for complex functions (Y equations) with many constraints (** note that you must unprotect the worksheet before using Solver); and be used with process simulation to visualize the response given a set of constrained Xs.

Partial derivative analysis is widely used in product design, manufacturing, process improvement, and commercial services during the concept design, capability assessment, and creation of the detailed design.When the Xs are known to be highly non-normal (and especially if the Xs have skewed distributions), Monte Carlo analysis may be a better choice than partial derivative analysis.Unlike root sum of squares (RSS) analysis, partial derivative analysis can be used with nonlinear transfer functions.Use partial derivative analysis when you want to predict the mean and standard deviation of a system response (Y), given the means and standard deviations of the inputs (Xs), when the transfer function Y=f(X1, X2, ., Xn) is known. However, the inputs (Xs) must be independent of one another (i.e., not correlated).

Continuous X & Y

N/A

Design of Experiment (DOE)

Design of experiment (DOE) is a tool that allows you to obtain information about how factors (Xs), alone and in combination, affect a process and its output (Y). Traditional experiments generate data by changing one factor at a time, usually by trial and error. This approach often requires a great many runs and cannot capture the effect of combined factors on the output. By allowing you to test more than one factor at a time, as well as different settings for each factor, DOE is able to identify all factors and combinations of factors that affect the process Y.

DOE uses an efficient, cost-effective, and methodical approach to collecting and analyzing data related to a process output and the factors that affect it. By testing more than one factor at a time, DOE is able to identify all factors and combinations of factors that affect the process Y

In general, use DOE when you want to: identify and quantify the impact of the vital few Xs on your process output, describe the relationship between Xs and a Y with a mathematical model, and determine the best configuration.

Continuous Y & all X's

N/A
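The run matrix behind a simple two-level design can be sketched without dedicated DOE software. The Python snippet below (hypothetical factors and settings) just enumerates a full factorial design; analysis of the resulting responses would still use regression or ANOVA as described above.

from itertools import product

# Hypothetical 2-level factors for a small full factorial design
factors = {
    "temperature": [150, 180],   # low / high settings
    "pressure":    [30, 40],
    "time":        [10, 20],
}

runs = list(product(*factors.values()))
print(f"{len(runs)} runs for a full {len(factors)}-factor, 2-level design:")
for i, run in enumerate(runs, 1):
    settings = dict(zip(factors.keys(), run))
    print(i, settings)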

Design Scorecards

Design scorecards are a means for gathering data, predicting final quality, analyzing drivers of poor quality, and modifying design elements before a product is built. This makes proactive corrective action possible, rather than initiating reactive quality efforts during preproduction. Design scorecards are an MS-Excel workbook that has been designed to automatically calculate Z values for a product based on user-provided inputs for all the sub-processes and parts that make up the product. Design scorecards have six basic components: 1. Top-level scorecard, used to report the rolled-up ZST prediction; 2. Performance worksheet, used to estimate defects caused by lack of design margin; 3. Process worksheet, used to estimate defects in process as a result of the design configuration; 4. Parts worksheet, used to estimate defects due to incoming materials; 5. Software worksheet, used to estimate defects in software; 6. Reliability worksheet, used to estimate defects due to reliability.

Design scorecards can be used anytime that a product or process is being designed or modified and it is necessary to predict defect levels before implementing a process. They can be used in either the DMADV or DMAIC processes.

all

N/A


Discrete Data Analysis (DDA) Method

The Discrete Data Analysis (DDA) method is a tool used to assess the variation in a measurement system due to reproducibility, repeatability, and/or accuracy. This tool applies to discrete data only.

The DDA method is an important tool because it provides a method to independently assess the most common types of measurement variation: repeatability, reproducibility, and/or accuracy. Completing the DDA method will help you to determine whether the variation from repeatability, reproducibility, and/or accuracy in your measurement system is an acceptably small portion of the total observed variation.

Use the DDA method after the project data collection plan is formulated or modified and before the project data collection plan is finalized and data is collected. Choose the DDA method when you have discrete data and you want to determine if the measurement variation due to repeatability, reproducibility, and/or accuracy is an acceptably small portion of the total observed variation

discrete (category or count)

N/A

Discrete Event Simulation (ProcessModelTM)

Discrete event simulation is conducted for processes that are dictated by events at distinct points in time; each occurrence of an event impacts the current state of the process. Examples of discrete events are arrivals of phone calls at a call center. Timing in a discrete event model increases incrementally based on the arrival and departure of the inputs or resources.

ProcessModelTM is a process modeling and analysis tool that accelerates the process improvement effort. It combines a simple flowcharting function with a simulation process to produce a quick and easy tool for documenting, analyzing, and improving business processes.

Discrete event simulation is used in the Analyze phase of a DMAIC project to understand the behavior of important process variables. In the Improve phase of a DMAIC project, discrete event simulation is used to predict the performance of an existing process under different conditions and to test new process ideas or alternatives in an isolated environment. Use ProcessModelTM when you reach step 4, Implement, of the 10-step simulation process.

Continuous Y, Discrete Xs

N/A

Dot Plot

Quick graphical comparison of two or more processes' variation or spread.

Quick graphical comparison of two or more processes' variation or spread.

Comparing two or more processes' variation or spread.

Continuous Y, Discrete Xs

N/A

Failure Mode and Effects Analysis (FMEA)

A means / method to identify ways a process can fail, estimate the risks of those failures, evaluate a control plan, and prioritize actions related to the process.

Complex or new processes. Customers are involved.

all

N/A

Gage R & R--ANOVA Method

Gage R&R-ANOVA method is a tool used to assess the variation in a measurement system due to reproducibility and/or repeatability. An advantage of this tool is that it can separate the individual effects of repeatability and reproducibility and then break down reproducibility into the components "operator" and "operator by part." This tool applies to continuous data only.

Gage R&R-ANOVA method is an important tool because it provides a method to independently assess the most common types of measurement variation repeatability and reproducibility. This tool will help you to determine whether the variation from repeatability and/or reproducibility in your measurement system is an acceptably small portion of the total observed variation.

Measure -Use Gage R&R-ANOVA method after the project data collection plan is formulated or modified and before the project data collection plan is finalized and data is collected. Choose the ANOVA method when you have continuous data and you want to determine if the measurement variation due to repeatability and/or reproducibility is an acceptably small portion of the total observed variation.

Continuous X & Y

Gage R & R--Short Method

Gage R&R-Short Method is a tool used to assess the variation in a measurement system due to the combined effect of reproducibility and repeatability. An advantage of this tool is that it requires only two operators and five samples to complete the analysis. A disadvantage of this tool is that the individual effects of repeatability and reproducibility cannot be separated. This tool applies to continuous data only.

Gage R&R-Short Method is an important tool because it provides a quick method of assessing the most common types of measurement variation using only five parts and two operators. Completing the Gage R&R-Short Method will help you determine whether the combined variation from repeatability and reproducibility in your measurement system is an acceptably small portion of the total observed variation.

Use Gage R&R-Short Method after the project data collection plan is formulated or modified and before the project data collection plan is finalized and data is collected. Choose the Gage R&R-Short Method when you have continuous data and you believe the total measurement variation due to repeatability and reproducibility is an acceptably small portion of the total observed variation, but you need to confirm this belief. For example, you may want to verify that no changes occurred since a previous Gage R&R study. Gage R&R-Short Method can also be used in cases where sample size is limited.

Continuous X & Y

GRPI

GRPI is an excellent tool for organizing newly formed teams. It is valuable in helping a group of individuals work as an effective team-one of the key ingredients to success in a DMAIC project

GRPI is an excellent team-building tool and, as such, should be initiated at one of the first team meetings. In the DMAIC process, this generally happens in the Define phase, where you create your charter and form your team. Continue to update your GRPI checklist throughout the DMAIC process as your project unfolds and as your team develops

all

N/A


Histogram

A histogram is a basic graphing tool that displays the relative frequency or occurrence of data values-or which data values occur most and least frequently. A histogram illustrates the shape, centering, and spread of data distribution and indicates whether there are any outliers. The frequency of occurrence is displayed on the y-axis, where the height of each bar indicates the number of occurrences for that interval (or class) of data, such as 1 to 3 days, 4 to 6 days, and so on. Classes of data are displayed on the x-axis. The grouping of data into classes is the distinguishing feature of a histogram

It is important to identify and control all sources of variation. Histograms allow you to visualize large quantities of data that would otherwise be difficult to interpret. They give you a way to quickly assess the distribution of your data and the variation that exists in your process. The shape of a histogram offers clues that can lead you to possible Xs. For example, when a histogram has two distinct peaks, or is bimodal, you would look for a cause for the difference in peaks.

Histograms can be used throughout an improvement project. In the Measure phase, you can use histograms to begin to understand the statistical nature of the problem. In the Analyze phase, histograms can help you identify potential Xs that should be investigated further. They can also help eliminate potential Xs. In the Improve phase, you can use histograms to characterize and confirm your solution. In the Control phase, histograms give you a visual reference to help track and maintain your improvements.

Continuous Y & all X's

N/A

Homogeneity of Variance

Homogeneity of variance is a test used to determine if the variances of two or more samples are different, or not homogeneous. The homogeneity of variance test is a comparison of the variances (sigma, or standard deviations) of two or more distributions.

While large differences in variance between a small number of samples are detectable with graphical tools, the homogeneity of variance test is a quick way to reliably detect small differences in variance between large numbers of samples.

There are two main reasons for using the homogeneity of variance test: 1. A basic assumption of many statistical tests is that the variances of the different samples are equal. Some statistical procedures, such as the 2-sample t-test, gain additional test power if the variances of the two samples can be considered equal. 2. Many statistical problem statements and performance improvement goals are written in terms of "reducing the variance." Homogeneity of variance tests can be performed on before and after data, as a way to prove that the variance has been reduced.

Continuous Y, Discrete Xs

(Use Levene's Test) At least one group of data is different than at least one other group
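A quick sketch of a homogeneity of variance check using Levene's test in scipy (hypothetical samples from three machines); a p-value below 0.05 suggests at least one group's variance differs.

from scipy import stats

# Hypothetical samples from three machines; question: are the variances equal?
machine_1 = [4.9, 5.1, 5.0, 5.2, 4.8, 5.1, 5.0]
machine_2 = [5.4, 4.6, 5.8, 4.3, 5.6, 4.5, 5.2]
machine_3 = [5.0, 5.1, 4.9, 5.0, 5.2, 4.9, 5.1]

stat, p_value = stats.levene(machine_1, machine_2, machine_3)
print(f"Levene statistic = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05: at least one group's variance differs from at least one other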

I-MR Chart

The I-MR chart is a tool to help you determine if your process is in control by seeing if special causes are present.

The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring your process into control

Use an I-MR chart in the Measure phase to separate common causes of variation from special causes, in the Analyze and Improve phases to ensure process stability before completing a hypothesis test, and in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed.

Continuous X & Y

N/A

Kano Analysis

Kano analysis is a customer research method for classifying customer needs into four categories; it relies on a questionnaire filled out by or with the customer. It helps you understand the relationship between the fulfillment or nonfulfillment of a need and the satisfaction or dissatisfaction experienced by the customer. The four categories are 1. delighters, 2. must-be elements, 3. one-dimensionals, and 4. indifferent elements. There are two additional categories into which customer responses can fall: reverse elements and questionable results. The categories in Kano analysis represent a point in time, and needs are constantly evolving. Often what is a delighter today can become simply a must-be over time.

Kano analysis provides a systematic, data-based method for gaining deeper understanding of customer needs by classifying them according to the Kano survey results.

Use Kano analysis after a list of potential needs that have to be satisfied is generated (through, for example, interviews, focus groups, or observations). Kano analysis is useful when you need to collect data on customer needs and prioritize them to focus your efforts.

all

N/A

Kruskal-Wallis Test

Compare two or more means when the underlying distributions are unknown (a nonparametric alternative to 1-way ANOVA).

non-parametric (measurement or count)

At least one mean is different
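A minimal scipy sketch on hypothetical samples; the test works on ranks, so it tolerates outliers and unknown distributions.

from scipy import stats

# Hypothetical samples with unknown / non-normal distributions
group_a = [12, 15, 14, 11, 39, 13]   # note the outlier; ranks are used, not raw values
group_b = [18, 21, 17, 22, 19, 20]
group_c = [14, 13, 15, 16, 12, 14]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# p < 0.05: at least one group tends to differ from at least one other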

Matrix Plot

Tool used for a high-level look at relationships between several parameters. Matrix plots are often a first step at determining which X's contribute most to your Y.

Matrix plots can save time by allowing you to drill down into data and determine which parameters best relate to your Y.

You should use matrix plots early in your Analyze phase.

Continuous Y & all X's

N/A

Mistake Proofing

Mistake-proofing devices prevent defects by preventing errors or by predicting when errors could occur.

Mistake proofing is an important tool because it allows you to take a proactive approach to eliminating errors at their source before they become defects.

You should use mistake proofing in the Measure phase when you are developing your data collection plan, in the Improve phase when you are developing your proposed solution, and in the Control phase when developing the control plan. Mistake proofing is appropriate when there are: 1. process steps where human intervention is required, 2. repetitive tasks where physical manipulation of objects is required, 3. steps where errors are known to occur, and 4. opportunities for predictable errors to occur.

all

N/A


Monte Carlo Analysis

Monte Carlo analysis is a decision-making and problemsolving tool used to evaluate a large number of possible scenarios of a process. Each scenario represents one possible set of values for each of the variables of the process and the calculation of those variables using the transfer function to produce an outcome Y. By repeating this method many times, you can develop a distribution for the overall process performance. Monte Carlo can be used in such broad areas as finance, commercial quality, engineering design, manufacturing, and process design and improvement. Monte Carlo can be used with any type of distribution; its value comes from the increased knowledge we gain in terms of variation of the output

Performing a Monte Carlo analysis is one way to understand the variation that naturally exists in your process. One of the ways to reduce defects is to decrease the output variation. Monte Carlo focuses on understanding what variations exist in the input Xs in order to reduce the variation in output Y.

Continuous Y & all X's

N/A
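As a minimal illustration of the idea (not the Crystal Ball workflow named elsewhere in this document), the Python sketch below pushes assumed input distributions through a hypothetical transfer function and summarizes the resulting variation in Y.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical transfer function Y = f(X1, X2) with assumed input distributions
x1 = rng.normal(10.0, 0.3, n)      # e.g., a dimension in mm
x2 = rng.normal(5.0, 0.2, n)       # e.g., another dimension in mm
y = x1 * x2                        # hypothetical (nonlinear) transfer function

usl = 52.0                         # hypothetical upper spec limit on Y
print(f"Y mean = {y.mean():.2f}, Y std = {y.std(ddof=1):.2f}")
print(f"Estimated probability Y > USL: {(y > usl).mean():.4f}")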

Multi-Generational Product/Process Planning

Multigenerational product/process planning (MGPP) is a procedure that helps you create, upgrade, leverage, and maintain a product or process in a way that can reduce production costs and increase market share. A key element of MGPP is its ability to help you follow up product/process introduction with improved, derivative versions of the original product.

Most products or processes, once introduced, tend to remain unchanged for many years. Yet, competitors, technology, and the marketplace-as personified by the ever more demanding consumer-change constantly. Therefore, it makes good business sense to incorporate into product/process design a method for anticipating and taking advantage of these changes.

You should follow an MGPP in conjunction with your business's overall marketing strategy. The market process applied to MGPP usually takes place over three or more generations. These generations cover the first three to five years of product/process development and introduction.

all

N/A

Multiple Regression

A method that enables you to determine the relationship between a continuous process output (Y) and several factors (Xs).

Multiple regression will help you to understand the relationship between the process output (Y) and several factors (Xs) that may affect the Y. Understanding this relationship allows you to: 1. identify important Xs, 2. identify the amount of variation explained by the model, 3. reduce the number of Xs prior to design of experiment (DOE), 4. predict Y based on combinations of X values, and 5. identify possible nonlinear relationships such as a quadratic (X1²) or an interaction (X1X2). The output of a multiple regression analysis may demonstrate the need for designed experiments that establish a cause and effect relationship or identify ways to further improve the process.

You can use multiple regression during the Analyze phase to help identify important Xs and during the Improve phase to define the optimized solution. Multiple regression can be used with both continuous and discrete Xs. If you have only discrete Xs, use ANOVA-GLM. Typically you would use multiple regression on existing data. If you need to collect new data, it may be more efficient to use a DOE.

Continuous X & Y

A correlation is detected
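A minimal sketch using ordinary least squares from the statsmodels library on hypothetical data with two Xs (the column names x1, x2, y are made up); the summary lists coefficient p-values and the R-squared, the fraction of variation explained by the model.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical historical data: output y and two candidate Xs
df = pd.DataFrame({
    "x1": [1.0, 2.1, 2.9, 4.2, 5.1, 6.0, 6.8, 8.1, 9.0, 10.2],
    "x2": [3.1, 2.8, 3.5, 3.0, 2.9, 3.6, 3.2, 2.7, 3.4, 3.1],
    "y":  [5.2, 7.0, 8.4, 10.9, 12.1, 14.2, 15.0, 17.3, 19.1, 21.0],
})

model = smf.ols("y ~ x1 + x2", data=df).fit()
print(model.summary())                       # coefficient p-values < 0.05 flag important Xs
print(f"R-squared: {model.rsquared:.3f}")    # variation explained by the model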

Multi-Vari Chart

A multi-vari chart is a tool that graphically displays patterns of variation. It is used to identify possible Xs or families of variation, such as variation within a subgroup, between subgroups, or over time

A multi-vari chart enables you to see the effect multiple variables have on a Y. It also helps you see variation within subgroups, between subgroups, and over time. By looking at the patterns of variation, you can identify or eliminate possible Xs

Continuous Y & all X's

N/A

Normal Probability Plot

Allows you to determine the normality of your data.

To determine the normality of data. To see if multiple X's exist in your data.

cont (measurement)

Data does not follow a normal distribution

Normality Test

A normality test is a statistical process used to determine if a sample, or any group of data, fits a standard normal distribution. A normality test can be done mathematically or graphically.

Many statistical tests (tests of means and tests of variances) assume that the data being tested is normally distributed. A normality test is used to determine if that assumption is valid.

There are two occasions when you should use a normality test: 1. When you are first trying to characterize raw data, normality testing is used in conjunction with graphical tools such as histograms and box plots. 2. When you are analyzing your data, and you need to calculate basic statistics such as Z values or employ statistical tests that assume normality, such as t-test and ANOVA.

cont (measurement)

not normal
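A minimal sketch of a mathematical normality test (Shapiro-Wilk, via scipy) on hypothetical data; graphical checks such as a histogram or normal probability plot would normally accompany it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.normal(50.0, 2.0, size=60)   # hypothetical continuous measurements

stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p_value:.3f}")
# p < 0.05: the data do not follow a normal distribution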


np Chart

A graphical tool that allows you to view the actual number of defectives and detect the presence of special causes.

The np chart is a tool that will help you determine if your process is in control by seeing if special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring your process into control.

You will use an np chart in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. The np chart is used for processes that generate discrete data. The np chart is used to graph the actual number of defectives in a sample. The sample size for the np chart is constant, with between 5 and 10 defectives per sample on the average.

Defectives Y / Continuous & Discrete X

N/A

Out-of-the-Box Thinking

Out-of-the-box thinking is an approach to creativity based on overcoming the subconscious patterns of thinking that we all develop.

Many businesses are successful for a brief time due to a single innovation, while continued success is dependent upon continued innovation.

Root cause analysis and new product / process development

all

N/A

p Chart

A graphical tool that allows you to view the proportion of defectives and detect the presence of special causes. The p chart is used to understand the ratio of nonconforming units to the total number of units in a sample.

The p chart is a tool that will help you determine if your process is in control by determining whether special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring your process into control.

You will use a p chart in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. The p chart is used for processes that generate discrete data. The sample size for the p chart can vary but usually consists of 100 or more.

Defectives Y / Continuous & Discrete X

N/A
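The p chart limits can be computed directly from the counts, as in the Python sketch below (hypothetical defective counts and varying subgroup sizes); the limits widen or narrow with each subgroup's size.

import numpy as np

# Hypothetical inspection results: defectives found and units inspected per subgroup
defectives = np.array([8, 6, 10, 7, 9, 12, 5, 8, 7, 11])
n_sampled  = np.array([120, 115, 130, 118, 125, 122, 119, 121, 124, 120])

p = defectives / n_sampled
p_bar = defectives.sum() / n_sampled.sum()          # centerline
sigma_p = np.sqrt(p_bar * (1 - p_bar) / n_sampled)  # limits vary with subgroup size
ucl = p_bar + 3 * sigma_p
lcl = np.clip(p_bar - 3 * sigma_p, 0, None)

for i, (pi, u, l) in enumerate(zip(p, ucl, lcl), start=1):
    flag = "OUT" if (pi > u or pi < l) else "ok"
    print(f"subgroup {i}: p = {pi:.3f}, LCL = {l:.3f}, UCL = {u:.3f}, {flag}")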

Pareto Chart

A Pareto chart is a graphing tool that prioritizes a list of variables or factors based on impact or frequency of occurrence. This chart is based on the Pareto principle, which states that typically 80% of the defects in a process or product are caused by only 20% of the possible causes

It is easy to interpret, which makes it a convenient communication tool for use by individuals not familiar with the project. The Pareto chart will not detect small differences between categories; more advanced statistical tools are required in such cases.

In the Define phase to stratify Voice of the Customer data; in the Measure phase to stratify data collected on the project Y; and in the Analyze phase to assess the relative impact or frequency of different factors, or Xs.

all

N/A

Process Mapping

Process mapping is a tool that provides structure for defining a process in a simplified, visual manner by displaying the steps, events, and operations (in chronological order) that make up a process

As you examine your process in greater detail, your map will evolve from the process you "think" exists to what "actually" exists. Your process map will evolve again to reflect what "should" exist: the process after improvements are made.

In the Define phase, you create a high-level process map to get an overview of the steps, events, and operations that make up the process. This will help you understand the process and verify the scope you defined in your charter. It is particularly important that your high-level map reflects the process as it actually is, since it serves as the basis for more detailed maps. In the Measure and Analyze phases, you create a detailed process map to help you identify problems in the process. Your improvement project will focus on addressing these problems. In the Improve phase, you can use process mapping to develop solutions by creating maps of how the process "should be."

all

N/A

Pugh Matrix

the tool used to facilitate a disciplined, team-based process for concept selection and generation. Several concepts are evaluated according to their strengths and weaknesses against a reference concept called the datum. The datum is the best current concept at each iteration of the matrix. The Pugh matrix encourages comparison of several different concepts against a base concept, creating stronger concepts and eliminating weaker ones until an optimal concept finally is reached

provides an objective process for reviewing, assessing, and enhancing design concepts the team has generated with reference to the project's CTQs. Because it employs agreed-upon criteria for assessing each concept, it becomes difficult for one team member to promote his or her own concept for irrational reasons.

The Pugh matrix is the recommended method for selecting the most promising concepts in the Analyze phase of the DMADV process. It is used when the team already has developed several alternative concepts that potentially can meet the CTQs developed during the Measure phase and must choose the one or two concepts that will best meet the performance requirements for further development in the Design phase

all

N/A


Quality Function Deployment

A methodology that provides a flowdown process for CTQs from the highest to the lowest level. The flowdown process begins with the results of the customer needs mapping (VOC) as input. From that point we cascade through a series of four Houses of Quality to arrive at the internal controllable factors. QFD is a prioritization tool used to show the relative importance of factors rather than as a transfer function.

QFD drives a cross-functional discussion to define what is important. It provides a vehicle for asking how products/services will be measured and what are the critical variables to control processes. The QFD process highlights trade-offs between conflicting properties and forces the team to consider each trade-off in light of the customer's requirements for the product/service. Also, it points out areas for improvement by giving special attention to the most important customer wants and systematically flowing them down through the QFD process.

QFD produces the greatest results in situations where: 1. customer requirements have not been clearly defined, 2. there must be trade-offs between the elements of the business, and 3. there are significant investments in resources required.

all

N/A

Regression

see Multiple Regression

Continuous X & Y

A correlation is detected

Risk Assessment

The risk-management process is a methodology used to identify risks, analyze risks, plan, communicate, and implement abatement actions, and track resolution of abatement actions.

Any time you make a change in a process, there is potential for unforeseen failure or unintended consequences. Performing a risk assessment allows you to identify potential risks associated with planned process changes and develop abatement actions to minimize the probability of their occurrence. The risk-assessment process also determines the ownership and completion date for each abatement action.

In DMAIC, risk assessment is used in the Improve phase before you make changes in the process (before running a DOE, piloting, or testing solutions) and in the Control phase to develop the control plan. In DMADV, risk assessment is used in all phases of design, especially in the Analyze and Verify phases where you analyze and verify your concept design.

all

N/A

Root Sum of Squares

Root sum of squares (RSS) is a statistical tolerance analysis method used to estimate the variation of a system output Y from variations in each of the system's inputs Xs.

RSS analysis is a quick method for estimating the variation in system output given the variation in system component inputs, provided the system behavior can be modeled using a linear transfer function with unit (±1) coefficients. RSS can quickly tell you the probability that the output (Y) will be outside its upper or lower specification limits. Based on this information, you can decide whether some or all of your inputs need to be modified to meet the specifications on system output, and/or if the specifications on system output need to be changed.

Use RSS when you need to quantify the variation in the output given the variation in inputs. However, the following conditions must be met in order to perform RSS analysis: 1. The inputs (Xs) are independent. 2. The transfer function is linear with coefficients of +1 and/or -1. 3. In addition, you will need to know (or have estimates of) the means and standard deviations of each X.

Continuous X & Y

N/A
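A minimal Python sketch of the RSS calculation, assuming a hypothetical linear stack-up Y = X1 + X2 - X3 with independent inputs and made-up means, standard deviations, and spec limits.

import numpy as np

# Hypothetical stack-up: Y = X1 + X2 - X3 (linear transfer function, coefficients +/-1)
means  = np.array([25.0, 10.0, 5.0])
sigmas = np.array([0.20, 0.15, 0.10])
signs  = np.array([+1, +1, -1])

y_mean  = np.sum(signs * means)
y_sigma = np.sqrt(np.sum(sigmas ** 2))   # root sum of squares of the input sigmas

usl, lsl = 30.8, 29.2                    # hypothetical spec limits on Y
z_upper = (usl - y_mean) / y_sigma
z_lower = (y_mean - lsl) / y_sigma
print(f"Y mean = {y_mean:.2f}, Y sigma = {y_sigma:.3f}")
print(f"Z to USL = {z_upper:.2f}, Z to LSL = {z_lower:.2f}")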

Run Chart

A run chart is a graphical tool that allows you to view the variation of your process over time. The patterns in the run chart can help identify the presence of special cause variation.

The patterns in the run chart allow you to see if special causes are influencing your process. This will help you to identify Xs affecting your process.

A run chart is used in many phases of the DMAIC process. Consider using a run chart to: 1. look for possible time-related Xs in the Measure phase, 2. ensure process stability before completing a hypothesis test, and 3. look at variation within a subgroup and compare subgroup-to-subgroup variation.

cont (measurement)

N/A

Sample Size Calculator

The sample size calculator simplifies the use of the sample size formula and provides you with a statistical basis for determining the required sample size for given levels of alpha and beta risks.

The calculation helps link allowable risk with cost. If your sample size is statistically sound, you can have more confidence in your data and greater assurance that resources spent on data collection efforts and/or planned improvements will not be wasted

all

N/A
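One common form of the sample size formula, for detecting a shift of size delta in a mean with stated alpha and beta risks and an assumed standard deviation, can be evaluated as below (all planning values hypothetical; different formulas apply to proportions or to estimation rather than hypothesis testing).

import math
from scipy.stats import norm

# Hypothetical planning values for detecting a mean shift of size delta
sigma = 2.5      # assumed process standard deviation
delta = 1.0      # smallest shift in the mean worth detecting
alpha = 0.05     # Type I risk
beta  = 0.10     # Type II risk (power = 90%)

z_alpha = norm.ppf(1 - alpha / 2)
z_beta  = norm.ppf(1 - beta)
n = ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"Required sample size: {math.ceil(n)}")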

Scatter Plot

a basic graphic tool that illustrates the relationship between two variables.The variables may be a process output (Y) and a factor affecting it (X), two factors affecting a Y (two Xs), or two related process outputs (two Ys).

Useful in determining whether trends exist between two or more sets of data.

Scatter plots are used with continuous and discrete data and are especially useful in the Measure, Analyze, and Improve phases of DMAIC projects.

all

N/A

Simple Linear Regression

Simple linear regression is a method that enables you to determine the relationship between a continuous process output (Y) and one factor (X). The relationship is typically expressed in terms of a mathematical equation, such as Y = b + mX, where Y is the process output, b is a constant, m is a coefficient, and X is the process input or factor.

Simple linear regression will help you to understand the relationship between the process output (Y) and any factor that may affect it (X). Understanding this relationship will allow you to predict the Y, given a value of X. This is especially useful when the Y variable of interest is difficult or expensive to measure.

You can use simple linear regression during the Analyze phase to help identify important Xs and during the Improve phase to define the settings needed to achieve the desired output.

Continuous X & Y

Coefficient p-values below your accepted alpha level indicate that there is sufficient evidence that the coefficients are not zero for likely Type I error rates (alpha levels). SEE MINITAB.
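A minimal sketch of a simple linear regression fit outside Minitab, using scipy.stats.linregress on hypothetical paired data; the reported p-value tests whether the slope is zero.

from scipy import stats

# Hypothetical paired observations of a factor X and a continuous output Y
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1, 13.9, 16.2]

result = stats.linregress(x, y)
print(f"Y = {result.intercept:.2f} + {result.slope:.2f} * X")
print(f"R-squared = {result.rvalue**2:.3f}, p-value for slope = {result.pvalue:.2e}")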

Simulation

Simulation is a powerful analysis tool used to experiment with a detailed process model to determine how the process output Y will respond to changes in its structure, inputs, or surroundings Xs. Simulation model is a computer model that describes relationships and interactions among inputs and process activities. It is used to evaluate process output under a range of different conditions. Different process situations need different types of simulation models. Discrete event simulation is conducted for processes that are dictated by events at distinct points in time; each occurrence of an event impacts the current state of the process. ProcessModel is GE Company's standard software tool for running discrete event models.Continuous simulation is used for processes whose variables or parameters do not experience distinct start and end points. CrystalBall is GE's standard software tool for running continuous models

Simulation can help you: 1. Identify interactions and specific problems in an existing or proposed process 2. Develop a realistic model for a process 3. Predict the behavior of the process under different conditions 4. Optimize process performance

Simulation is used in the Analyze phase of a DMAIC project to understand the behavior of important process variables. In the Improve phase of a DMAIC project, simulation is used to predict the performance of an existing process under different conditions and to test new process ideas or alternatives in an isolated environment

all

N/A

Six Sigma Process Report

A Six Sigma process report is a Minitab tool that provides a baseline for measuring improvement of your product or process.

It helps you compare the performance of your process or product to the performance standard and determine if technology or control is the problem

A Six Sigma process report, used with continuous data, helps you determine process capability for your project Y. Process capability is calculated after you have gathered your data and have determined your performance standards.

Continuous Y & all X's

N/A

Six Sigma Product Report

Calculates DPMO and process short term capability.

It helps you compare the performance of your process or product to the performance standard and determine if technology or control is the problem.

A Six Sigma product report, used with discrete data, helps you determine process capability for your project Y. You would calculate process capability after you have gathered your data and determined your performance standards.

Continuous Y, Discrete Xs

N/A

Stepwise Regression

Regression tool that filters out unwanted X's based on specified criteria.

Continuous X & Y

N/A

Tree Diagram

A tree diagram is a tool that is used to break any concept (such as a goal, idea, objective, issue, or CTQ) into subcomponents, or lower levels of detail.

Useful in organizing information into logical categories. See "When use?" section for more detail.

A tree diagram is helpful when you want to 1. relate a CTQ to subprocess elements (Project CTQs), 2. determine the project Y (Project Y), 3. select the appropriate Xs (Prioritized List of All Xs), or 4. determine task-level detail for a solution to be implemented (Optimized Solution).

all

N/A

u Chart

A u chart, shown in figure 1, is a graphical tool that allows you to view the number of defects per unit sampled and detect the presence of special causes

The u chart is a tool that will help you determine if your process is in control by determining whether special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring your process into control.

You will use a u chart in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. The u chart is used for processes that generate discrete data. The u chart monitors the number of defects per unit taken from a process. You should record between 20 and 30 readings, and the sample size may be variable.

N/A

Voice of the Customer

The following tools are commonly used to collect VOC data: Dashboard ,Focus group, Interview, Scorecard, and Survey.. Tools used to develop specific CTQs and associated priorities.

Each VOC tool provides the team with an organized method for gathering information from customers. Without the use of structured tools, the data collected may be incomplete or biased. Key groups may be inadvertently omitted from the process, information may not be gathered to the required level of detail, or the VOC data collection effort may be biased because of your viewpoint.

You can use VOC tools at the start of a project to determine what key issues are important to the customers, understand why they are important, and subsequently gather detailed information about each issue. VOC tools can also be used whenever you need additional customer input such as ideas and suggestions for improvement or feedback on new solutions.

all

N/A


Worst Case Analysis

A worst case analysis is a nonstatistical tolerance analysis tool used to identify whether combinations of inputs (Xs) at their upper and lower specification limits always produce an acceptable output measure (Y).

Worst case analysis tells you the minimum and maximum limits within which your total product or process will vary. You can then compare these limits with the required specification limits to see if they are acceptable. By testing these limits in advance, you can modify any incorrect tolerance settings before actually beginning production of the product or process.

You should use worst case analysis : To analyze safety-critical Ys, and when no process data is available and only the tolerances on Xs are known. Worst case analysis should be used sparingly because it does not take into account the probabilistic nature (that is, the likelihood of variance from the specified values) of the inputs.

all

N/A
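A minimal sketch of the corner-combination idea, assuming a hypothetical transfer function and tolerance limits; every X is held at its upper or lower specification limit and Y is evaluated at each combination.

```python
from itertools import product

# Hypothetical transfer function: Y is a simple stack-up of three Xs.
def transfer(x1, x2, x3):
    return x1 + x2 - x3          # replace with your own Y = f(X1..Xn)

limits = {                        # (lower spec, upper spec) for each X (assumed)
    "x1": (9.8, 10.2),
    "x2": (4.9, 5.1),
    "x3": (2.95, 3.05),
}

# Evaluate Y at every combination of the Xs held at their spec limits.
corners = [transfer(*combo) for combo in product(*limits.values())]
y_min, y_max = min(corners), max(corners)

print(f"Worst case Y range: {y_min:.3f} to {y_max:.3f}")
print("Compare this range against the required specification limits on Y.")
```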

Xbar-R Chart

The Xbar-R chart is a tool to help you decide if your process is in control by determining whether special causes are present.

The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring your process into control

Xbar-R charts can be used in many phases of the DMAIC process when you have continuous data broken into subgroups. Consider using an Xbar-R chart in the Measure phase to separate common causes of variation from special causes, in the Analyze and Improve phases to ensure process stability before completing a hypothesis test, or in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed.

Continuous X & Y

N/A
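A minimal sketch of the Xbar-R limit calculations, assuming synthetic subgroup data and the published Shewhart constants for a subgroup size of five.

```python
import numpy as np

# Hypothetical data: 20 subgroups of 5 continuous measurements each.
rng = np.random.default_rng(7)
data = rng.normal(loc=10.0, scale=0.2, size=(20, 5))

xbar = data.mean(axis=1)                   # subgroup means (the Xbar portion)
r = data.max(axis=1) - data.min(axis=1)    # subgroup ranges (the R portion)

xbarbar, rbar = xbar.mean(), r.mean()

# Shewhart constants for subgroup size n = 5 (from standard tables).
A2, D3, D4 = 0.577, 0.0, 2.114

print(f"Xbar chart: CL={xbarbar:.3f}  UCL={xbarbar + A2 * rbar:.3f}  LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:    CL={rbar:.3f}  UCL={D4 * rbar:.3f}  LCL={D3 * rbar:.3f}")
```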

Xbar-S Chart

An Xbar-S chart, or mean and standard deviation chart, is a graphical tool that allows you to view the variation in your process over time. An Xbar-S chart lets you perform statistical tests that signal when a process may be going out of control. A process that is out of control has been affected by special causes as well as common causes. The chart can also show you where to look for sources of special cause variation. The X portion of the chart contains the mean of the subgroups distributed over time. The S portion of the chart represents the standard deviation of data points in a subgroup

The Xbar-S chart is a tool to help you determine if your process is in control by seeing if special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process. Eliminating the influence of these factors will improve the performance of your process and bring it into control

An Xbar-S chart can be used in many phases of the DMAIC process when you have continuous data. Consider using an Xbar-S chart in the Measure phase to separate common causes of variation from special causes, in the Analyze and Improve phases to ensure process stability before completing a hypothesis test, or in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. NOTE - Use Xbar-R if the sample size is small.

Continuous X & Y

N/A
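A minimal sketch of the Xbar-S limit calculations on synthetic data; here the chart constants are derived from c4 rather than read from a table, which gives the same values for any subgroup size.

```python
import numpy as np
from scipy.special import gamma

# Hypothetical data: 20 subgroups of 10 continuous measurements each
# (Xbar-S is usually preferred over Xbar-R for larger subgroups).
rng = np.random.default_rng(11)
n_sub, n = 20, 10
data = rng.normal(loc=50.0, scale=1.5, size=(n_sub, n))

xbar = data.mean(axis=1)
s = data.std(axis=1, ddof=1)
xbarbar, sbar = xbar.mean(), s.mean()

# Control chart constants computed from c4 instead of looked up in a table.
c4 = np.sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
A3 = 3.0 / (c4 * np.sqrt(n))
B3 = max(0.0, 1.0 - 3.0 * np.sqrt(1.0 - c4**2) / c4)
B4 = 1.0 + 3.0 * np.sqrt(1.0 - c4**2) / c4

print(f"Xbar chart: CL={xbarbar:.2f}  UCL={xbarbar + A3 * sbar:.2f}  LCL={xbarbar - A3 * sbar:.2f}")
print(f"S chart:    CL={sbar:.2f}  UCL={B4 * sbar:.2f}  LCL={B3 * sbar:.2f}")
```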

Tool Summary

Y's
Continuous Data
Regression Time series plots General Linear model Multi-Vari plot Histogram DOE Best Subsets ImR Scatter plot Matrix Plot Fitted line Stepwise Regression

Attribute Data
Logistic regression Time series plot C chart P chart N chart NP chart

X's

Continuous Data

X-bar R ANOVA Kruskal-Wallis Box plots T-test Dot plots MV plot Histogram DOE Homogeneity of variance General linear model Matrix plot

Attribute Data

Chi Square Pareto Logistic Regression

A
Affinitize Affinity Diagram Affinity Diagrams Alternate Hypotheses Analyze phase ANOVA Assets Assignable Cause Assignable Variations Attribute Data Average

B
Background Variables Bar Chart Base Cost Benchmarking Best Practices Best Subsets Beta Risk Beta Version Binomial Distribution

Black Belt

Blocking Variables Box and Whisker Plot Brainstorming Breakthrough Improvement Breakthrough Strategy Business Process Reengineering Business Quality Council Business Y's

C
C Charts CAP Capability Capability Indices Causal Effect Causality Cause Cause and Effect Diagram Champion Change Acceleration Process (CAP) Change Agent Change Sponsor

Characteristic Check Sheet Chi-square Cluster Projects Common Cause Confidence Level Confidence Limits Confounding Continuous Data Continuous Random Variable Contribution Margin Control Control Chart Control Limits Control Phase Control Vs. Technology Plot Correlation Correlation Coefficient Cost / Benefit Analysis Cost Avoidance Cost of Failure

Cost of Poor Quality Cp Cpk Cpl Cpu Critical Path Mapping Critical-to-quality (CTQ) Customer expectations Customer needs, expectations Cutoff Point Cycle time

D
Dashboards Data Data Collection Plan Data Transformation Defect 1 Defect 2 Defect Opportunity Defective Defectives Data Defects Data

Defects per Million Opportunities (DPMO) Defects per Unit (DPU) Define Phase Degrees of Freedom Delay Time Density Function Dependent Variable Deployment Flowchart Depreciation Design For Manufacturability (DFM) Design for Six Sigma (DFSS) Design of Experiments (DOE) DF DFSS Direct Costs Disconnects Discrete Data Discrete Random Variable

Distribution Curve Distributions DMAIC DOE Dot Plots DPMO DPO DPU

E
Effect Effectiveness Measures Efficiency Measures Entitlement Experiment Experimental Error Experimental error Exponentially Weighted Moving Average

F
Factors Factory processes Failure Mode and Effect Analysis (FMEA) First Time Yield Fiscal Year Fishbone Diagram

Five Ms Fixed Cost Flowchart Fluctuations Force Field Analysis Four Ps Frequency Distribution F-test FTY

G
Gage Repeatability and Reproducibility (Gage R&R) Gantt chart GB Green Belt (GB) GRPI

H
Ha Hardware Platform Hidden Factory

Histogram Ho Homogeneity of Variance House of Quality

Hypothesis

I
Improve Phase Independent Variable Indirect Costs Input Measures Inputs Instability Interaction Interrelationship Diagram (ID) Interval

K
Kano Key Process Input Variables Kurtosis

L
LCL Leveraging Line Charts Long-term Process Performance Lower Control Limit Lower Specification Limit (LSL) LSL

M
M/A Master Black Belt Matrix Diagram MBB Mean Measure Phase Measurement Error Measurement System Analysis Median MiniTab Mistake Proofing MSA Multicollinearity Multi-Variance Analysis

N
NIH Nominal Group Technique (NGT) Nonconformity Non-parametric Test Non-value-added Work Normal distribution

Normalized Yield NPI Null Hypothesis

O
One-sided Alternative Open-Book Management Out of Control Outlier Output Output Measures Overall Yield

P
p(d) Parameter Pareto diagram Pareto principle Pert Diagram Pie Chart Pilot Poisson Distribution Population Power of a Test Pp Ppk

Primary control variables Prioritization Matrices Probability Probability of Defect p(d) Procedure Process Process Capability Process Control Process Control Chart Process Documentation Process Map Process Measure Process Owner Process Spread Process Time Process Worker Project Bounding Project Leader Proportion Defective p(d) Pull Leveraging

Push Leveraging p-value

Q
QFD Qualitative Data Quality Functional Deployment (QFD) Quantified Financial Opportunity Quantitative Data Quartile Queue Time

R
R Chart R&R R2 Radar Chart Random Random Cause Random cause Random Sample Random variation Range Regression analysis

Regression for tolerancing Response Surface Module (RSM) Robust Rolled Throughput Yield Rolled yield ROM Root Cause Analysis RSM R-squared R-squared Adjusted Run Run Chart

S
S Sample Sample Size Sampling Scatter Diagram Scatter diagram Scatter Sheet Scope Scoping Projects Scorecard Screening DOE

Shift Short Term

Sigma Sigma Level Sign-off SIPOC Six Sigma quality Skewed Distribution Span SPC Special Cause Special Cause Sponsor SS Stable process Standard deviation Statistical Control Statistical control Statistical Process Control (SPC) Stratification Stratification Factors

T
Target of Change Team

Team Members Teamwork and Group Process Technology Total Quality Management (TQM) Touch Time Transaction Process Tree Diagram Trivial Many t-statistic t-test

U
UCL Upper Control Limit (UCL) Upper Specification Limit (USL) USL

V
Value-added Work Value-enabling Work Variable Variable Cost Variable data

Variance Variation VCP Vital Few Vital X

W
Working Capital

X
X&R Charts Xs

Y
Yield Ys

Z
Z Z bench Zlt Zst

A method of collecting and categorizing brainstormed ideas. Used when compiling qualitative data from customers, as in prepa

A diagram allowing teams to organize and classify a large number of ideas to help solve complex problems. Visual tool where brainstormed ideas are grouped by topic. Unlike the basic tools for improvement that deal primarily with collecting quantitative data, this tool focuses on issues and ideas (qualitative data).

Part of statistical hypothesis testing, generally set at two or more samples representing populations that are not equal. Third phase of the DMAIC process. Examines data collected on the variables in the process that affect the output; generates a list of potential sources of variation. A statistical test comparing two or more samples of variable data for the purpose of determining the probability that differences are random or due to statistically significant differences between the processes. Items providing service or use potential to the owning business. A process input variable that can be identified and that contributes in an observable manner to non-random shifts in process mean or standard deviation.

Quantifiable differences in data which can be attributed to specific causes. Categorical data that can be separated into different categories distinguished by some non-numeric characteristic (i.e., cold/hot, like/dislike). Also known as Discrete Data. Sum of measures in a distribution divided by the number of measurements in the distribution. Same as MEAN. The mean of a sample is denoted as x̄ (read as x-bar). The mean of a population is denoted as μ (read as mu).

Quantifiable differences which are of no experimental interest and are not held constant. Their effects are often assumed insig are randomized to ensure that contamination of the primary response does not occur. Also referred to as environmental variab variables. A graphical comparison of several quantities where the lengths of the horizontal or vertical bars represent the relative magnitud comparisons of quantities by the relative length of the bars. Costs related to the basic operations of the business mostly independent of the changes in volume of goods and services prod building, electricity for the offices. Also known as Fixed Costs. An improvement process whereby a company, group, or department measures its performance against the best-in-class, dete achieved their performance levels, and uses this information to improve its own performance.

Techniques established and verified using data applicable to similar processes and organizations determined to be best-in-cla Used in multiple regression analysis. All regression equations are fit using 1 X, then using 2 Xs, 3 Xs, and so on. The outputs a and the user can evaluate the regression models using some criterion such as R-squared adjusted. The probability of failing to detect a difference between two samples when a difference really does exist. Determined by using Pilot for a software solution, commonly called a beta site or beta test. Generally the last evaluation prior to full launch.

A mathematical expression of a distribution of categorized (Pass/Fail) data defined by the yield of the process. Models the Def

Fully-trained Six Sigma experts deployed in full-time positions in one of two types of roles, 1) primary responsibility is to train a secondarily leading improvement teams, working projects across the business, and infusing Six Sigma into the culture; or, 2) fu leader who is deployed on strategic projects leading teams who are responsible for measuring, analyzing, improving and contr influence customer satisfaction and/or productivity growth.

A relatively homogenous set of conditions within which different conditions of the primary variables are compared. Used to ens variables do not contaminate the evaluation of primary variables.

A pictorial representation of groups of variable data highlighting the median, quartiles, range, and outliers.

A process that allows generation of a high volume of ideas quickly in an atmosphere free of criticism and judgment. A process of many ideas within a team. There are typically two major ways to brainstorm: structured and unstructured. Structured brainst member giving ideas in turn. Unstructured brainstorming occurs when participants present ideas as they come to mind.

Modification to a process that results in a dramatic increase in the quality of a process's output.

The data driven, Six Sigma process improvement strategy involving five phases: Define, Measure, Analyze, Improve and Cont

Describes a Commercial Quality effort or project where either no process exists or the process is irrevocably broken and the g new process. Typically requiring the use of the DFSS (Design for Six Sigma) methodology and tools.

Group that oversees and coordinates the quality process, identifies champion, approves team charter, recommended solutions The targeted areas for improvements by the business to which all work should focus.

Graphics which display the number of defects per sample. Used where sample size is constant See Change Acceleration Process. The inherent ability of a process to yield units satisfying the customers' requirements, usually expressed in quantitative specific A mathematical calculation used to compare the process variation to a specification. Examples are Cp, Cpk, Pp, PpK, Zst, and Zst and Zlt as the common communication language on process capability. Used in regression. Sometimes mathematical relationships can exist between variables which have no meaning. One variable to behave in a certain manner this is considered a causal effect. The principle that every change implies the operation of a cause. That which produces an effect or brings about a change.

A schematic sketch, usually resembling a fishbone, which illustrates the main causes and sub-causes leading to an effect (sym graphically display and recognize, in detail, possible causes to a problem and identify potential solutions. Also known as Fishb Champions provide direction, resources and support to the Six Sigma effort and approve and review projects. Typically, these manager or a customers leader (manager). Also known as a sponsor.

A GE created management model delineating the sequence of considerations and actions necessary to institutionalize a majo changes in people's behaviors, actions, and attitudes.

Individual, usually a project leader, tasked to identify and implement changes to a process Individual(s) requesting an improvement effort, likely to also be a project sponsor, and expected to provide guidance and resou change effort. Probably the recipient of any benefits created from the process change.

A definable or measurable feature of a process, product, or variable. A tool to methodically compile and record data to detect patterns and trends. Check sheets can greatly facilitate the identificati data. A statistical test used to determine if two nominal variables are different (independent); such as checking for differences in def between sub-groups. Describes a recognized statistical distribution. A set of projects addressed concurrently to improve a larger process or Y. Also called random cause. A source of variation that cannot be pinpointed or controlled. The probability that a randomly distributed variable "x" lies within a defined interval of a normal curve. The two values that define the confidence interval. Allowing two or more variables to vary together so that it is impossible to separate their unique effects.

Also called quantitative or measurements data. Information that can be measured on a continuum or scale. Ex: length, weight,

A random variable which can take any value continuously within some specified interval.

The difference between revenues and the variable costs associated with producing those goods and services. The state of stability, normal variation, and predictability in a process or system. A method of regulating and guiding processes quantitative data. A graphical rendition of a characteristics performance across time in relation to its natural limits and central tendency. Aids in t and their causes in order to improve process performance. Apply to both range or standard deviation and subgroup average (X) portions of process control charts and are used to determ control. Control limits are derived statistically and are not related to engineering specification limits in any way. Fifth phase of the DMAIC process during which the project leader creates control plans and institutionalizes, through business improvements identified in the project. Also includes leveraging the Best Practices

A simple graph plotting Z short term (Technology) on the horizontal axis and Z shift (Control) on the vertical. Used to identify th improvement needed. The determination of the effect of one variable upon another in a dependent situation.

A measure of the degree to which two variables are linearly related.

Process that determines whether a project has clear financial payback; quantifies the financial aspects of a projects quality im reductions.

Estimate of costs that will not have to be paid due to actions taken to prevent the errors which would have led to the failure of t product, or process. Correct category when costs have never been accrued due to the problem.

The costs of activities resulting from the failure of a process or system (e. g., scrap or warranties). One of the contributors to C

Costs related to the results of poorly performing processes: scrapped materials, repeating work on a good or service, repairing defective units. Includes the total amount of warranty and concessions paid to customers. A widely used capability index for process capability studies. It may range in value from zero to infinity, with a larger value indicating a more capable process. Six Sigma represents a Cp of 2.0. An index combining Cp and k (difference between the process mean and the specification mean) to determine whether the process will produce units within the tolerance. Cpk is always less than or equal to Cp. When the process is centered at nominal, Cpk is equal to Cp. Lower Spec Limit Process Capability: similar to Cpk, but addresses only the impact of the lower specification limit. Upper Spec Limit Process Capability: similar to Cpk, but addresses only the impact of the upper specification limit. The linkage of steps in the process map defining the shortest amount of time the unit can pass through the process steps; if any of those steps is detained, the time to output will increase.
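A minimal sketch of the Cp, Cpk, Cpl and Cpu formulas described above, using hypothetical data and specification limits.

```python
import numpy as np

# Hypothetical process data and specification limits.
data = np.random.default_rng(3).normal(loc=100.2, scale=1.1, size=200)
lsl, usl = 97.0, 103.0

mean, sigma = data.mean(), data.std(ddof=1)

cp  = (usl - lsl) / (6 * sigma)      # potential capability (centred process)
cpu = (usl - mean) / (3 * sigma)     # upper-spec capability
cpl = (mean - lsl) / (3 * sigma)     # lower-spec capability
cpk = min(cpu, cpl)                  # actual capability, penalised for off-centre

print(f"Cp={cp:.2f}  Cpu={cpu:.2f}  Cpl={cpl:.2f}  Cpk={cpk:.2f}")
```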

The select few, measurable key characteristics of a specific process/part/drawing/specification where reduced variation will ha customer; formerly known as a key quality characteristic (KQC). Elements of a process or practice directly effecting its perceive Sigma as Y

Needs, as defined by customers, that meet their basic requirements and standards; a reference against which a team can eva output measure.

CTQs, as defined by customers, which meet their basic requirements and standards. Represented in Six Sigma as Y.

The point which partitions the acceptance region from the reject region. The total time from the point in which the customer orders the good or service until the good or service is delivered to the custo and delay time.

Tool for collecting and reporting vital customer requirements. They identify the customer CTQs and help focus Six Sigma proje important items. Factual information used as a basis for reasoning, discussion or calculation.

An organized strategy for gathering information for your project.

A mathematical technique used to create a near normally distributed data set out of a non-normal (skewed) data set. A failure to meet an imposed requirement on a single quality characteristic or a single instance of nonconformance to the spec Source of customer irritation. Defects are costly to both customers and to manufacturers or service providers and eliminating th both. Defects are measurable.

A measurable event that provides a chance of not meeting a customer requirement. A unit which fails to meet the standards required of it. A unit may simply be categorized as defective, or may be defective beca defects, or because a measured characteristic fails to conform to specification. Projects with attribute data such that units are determined to be defective simply failing to meet the standard pass/fail, win/lose this data are based on the Binomial Distribution. Projects with attribute data such that units are determined to be defective due to having one or more things wrong or missing. counting the defects on each unit. Calculations for this data are based on the Poisson Distribution.

The number of defects counted during a standard production run, divided by the actual number of opportunities to make a defe multiplied by one million. A direct measure of sigma level.

The number of defects measured in a sample divided by the number of products or characteristics in the sample. A process of defects as an initial step toward Six Sigma quality. The Six Sigma project phase in which one: prioritizes between project opportunities and justifies, via Dashboards, CTQs, and F executing a project; creates a shared need; and ensures that resources are available for project.

The number of independent measurements available for estimating a population parameter. See Queue Time The function which yields the probability that a particular random variable takes on any one of its possible values.

A Response Variable; e.g., Y is the dependent or "Response" variable where Y = f(X1 . . . XN) process input variables.

A visual tool that combines the sequence of steps in a process with responsibility for each step, clearly showing who does wha Particularly helpful in processes with many hand-offs, with information or material passed back and forth.

The accounting procedure used to allocate the cost of an asset, which will benefit the business enterprise for more than a year

A concept in which products are designed within the current manufacturing process capability to ensure that engineering requi production.

Creating a component, system, or process such that its capability approaches entitlement upon initiation.

Statistical experimental designs to economically improve product and process quality. A major tool used during the Improve Ph methodology. Degree of Freedom, statistical term equal to the number of terms, less 1, in the evaluation. Design For Six Sigma Costs associated with the creation of individual units of goods or services as the volume output increases, those costs which in called "Cash" or "Hard" savings in quantification terminology.

Points in the process where work can be disrupted or delayed, or defects can be created; reflect causes of quality problems. Information that can be categorized or counted, such as male/female, yes/no, A/B/C (categorized) and numbers of typos, scra Also called attribute data.

A random variable which can take values only from a definite number of discrete values.

Theoretical patterns of data applied to project data sets such that conclusions can be drawn during the course of a Six Sigma distributions are the Normal, Poisson, and Binomial.

Tendency of large numbers of observations to group themselves around some central value with a certain amount of variation on either side. GE's data-driven quality strategy for improving processes; an integral part of GE's Six Sigma Quality Initiative. Consists of five phases: Define, Measure, Analyze, Improve and Control. See Design of Experiments. A graphical technique (MiniTab tool) comparing the range and distribution of two or more sets of variable data. See Defects Per Million Opportunities. See Defects Per Opportunity. See Defects per Unit. That which was produced by a cause.

The degree to which customer needs and requirements are met or exceeded. The amount of resources allocated toward meeting or exceeding customer requirements. The expected performance level of a process when the major sources of variation are identified and controlled.

A test under defined conditions to determine an unknown effect, to illustrate or verify a known law, or to test or establish a hyp Variation in observations made under identical test conditions. The amount of variation in an experiment that cannot be attribu called residual error. Variation in observations made under identical test conditions. Also called residual error. The amount of variation which canno variables included in the experiment.

A control charting method where the most current data point is weighted on an exponential basis such that older data points ca average. This charting technique is used to detect small shifts in process average. Independent variables.

For Six Sigma purposes, defined as design, manufacturing, assembly or test processes which directly impact hardware (see a

A disciplined approach for identifying and classifying the type, severity and detectability of all modes of failure of a product or p prioritize parts of the product or process that need improvement.

Output of a process with no reworks or repairs. Usually expressed as a percentage.

Period of time for calculation of income. Always 12 months. In GE the fiscal year runs from January through December. A schematic sketch, usually resembling a fishbone, which illustrates the main causes and sub-causes leading to an effect (sym Effect Diagram.

Major sources of variation: manpower, machine, method, material and measurement. Additionally, "environment" is often considered a sixth source. These sources of variation are typically the primary categories on the Fishbone Diagram. Costs associated with running a business that do not change with modest changes in volume of units of goods and services produced; for example, rent for the office space. A graphical display of the logical sequence of events in a process: used by teams to identify inefficiencies and redundancies. See also process map. Variances in data caused by a large number of minute variations or differences.

A statistical technique used to identify factors supporting and opposing a potential solution so that positive factors can be reinf can be eliminated. Major sources of variation for a business process: People, Procedures, Plants, Policies

The pattern or shape formed by the group of measurements in a distribution; used with data that has a sequence (i. e., weight, A statistical test to determine if a difference exists between two variances; compares the variance of two distributions. See "First Time Yield"

A measurement system evaluation to determine equipment variation and appraiser variation. This study is critical to ensure tha accurate and to assess how much of the total process variation is due to measurement.

A visual project planning technique used for scheduling; a graphical display of time (bars) needed to complete tasks (milestone See Green Belt Six Sigma project leader who follows a training and application program and successfully completes Six Sigma projects while c his / her current position. A project management tool encouraging teams to assure Goals, Roles, Procedures and Interpersonal Relationships as they in See Alternative Hypothesis Computer systems (hardware) such as Sun Servers, VAXes, Wangs, HP's, Mainframes Re- loops (rework, repair, re-enter, re-check, etc.) in a process map usually indicating cost and cycle reduction opportunities.

Vertical display of a population distribution in terms of frequencies; a formal method of plotting a frequency distribution. A grap Graphs) to assess the ranges and distribution of a set of variable data. A histogram is useful for determining variation and help performance. See Null Hypothesis

The variances of the data groups being contrasted are equal (as defined by a statistical test of significant difference). The matrices in the QFD.

When used as a statistical term, it is a theory proposed or postulated for comparing means and standard deviations of two or m hypothesis states that the data sets are from the same statistical population, while the "alternate" hypothesis states that the da statistical population.

The Six Sigma project phase in which the leader introduces changes to the vital Xs to improve overall process variation or targ

A controlled variable; a variable whose value is independent of the value of another variable. An X. Also known as operating expense costs related to the operation of a business, generally do not change when the volume of go changes.

Measures that contribute to the process that are transformed and turned into value to the customer. Materials, resources, and data needed to carry out a process. Unnaturally large fluctuations in a process input or output characteristic. Occurs when the effects of a factor A are not the same at all levels of another factor B. For example, the performance of snow on the condition of the road.

A visual display of cause/effect relationships between key issues, representing ideas in a scattered rather than linear form. Allo cause-effect relationships between key issues. Team members are forced to think in many directions as opposed to linearly. Numeric categories with equal units of measure but no absolute zero point. An analysis differentiating customer needs based on Satisfiers, Dissatisfies, and Must Haves

The vital few input variables, called "x's" (normally 2-6), that drive 80% of the observed variations in the process output charac A condition in which the normal distribution is more peaked or less peaked than expected. Can influence the validity of using A See Lower Control Limit. Act of applying learnings from one project or area to another. Charts used to track the performance without relationship to process capability or control limits.

Performance of a process during which all sources of variation (Xs) have had an opportunity to influence process output (Y).

The statistically derived minimum expected value as determined by the long-term performance of a controlled process. Used in performance and predict when a process might go out of control. Most commonly identified on a control chart as a horizontal d lower process limit acceptable for a process. Different value than Spec Limit.

Set by design as related to the required level which must be guaranteed to assure product performance / customer satisfaction See Lower Specification Limit.

Measure and Analyze Phases, Combined

An expert in Six Sigma quality techniques deployed to advise leaders, scope projects, mentor quality teams, and perform Six S

A graphical display that identifies relationships between two or more sets of data; allows quick identification of patterns. See Master Black Belt. See Average. Second phase of the DMAIC process during which the team identifies the defect(s) in the product, gathers baseline information a Sigma (Z) value, identifies short-term capability, and establishes improvement goals.

The net effect of all sources of measurement variability that cause an observed value to deviate from the true value.

A mathematical method of determining how much the measurement process contributes to overall process variability. Of a series of variable (measurement) data points arranged in rank order, the middle data point. If there is an even number of values, the median is the average of the two middle data points. (E.g., the median of the data set 1, 2, 4, 4, 7, 9, 11 is 4.) A computer application available through the Software Store designed for use by people with moderate statistical backgrounds to perform data and statistical evaluations. Proactive technique used to positively prevent errors from occurring. See "Measurement System Analysis". Regression term. Situations in which the correlation amongst Xs in the model is high. If this situation exists, the estimates of the coefficients may be in question.

A simple picture of logical subgroupings of data so that you can easily see variation within sub-groups and between them.

Not Invented Here, a colloquial term describing an individual's or organization's hesitation to incorporate a Best Practice from a

Process whereby each team member ranks issues, problems, or solutions, allowing the team to reach quick consensus and fo

A condition within a unit which does not conform to some specification, standard, and/or requirement; often referred to as a de nonconforming unit can have the potential for more than one nonconformity. Nonconformities can often be reworked (with add inspections) to fully conforming condition.

A non-parametric test is used for data for which a normal distribution (or some other distribution) cannot be assumed.

Results of poor capability processes; steps that do not contribute to the improved meeting of the customer's needs and require delivery of a product or service. Examples include rework, preparation, control/inspection.

A continuous, symmetrical density function characterized by a bell-shaped curve, (e.g., distribution of sampling averages); a m distribution of variable data (measurements) defined by its central point (mean or average) and dispersion (standard deviation) characterized by a bell-shaped curve.

Figure used for benchmarking; measures the average yield per process step by calculating one minus the ratio of total defects opportunities; a weighted average that shows the chance of an opportunity for defect. New Product Introduction a detailed management tool utilizing checklists, tollgates, and a disciplined approach to the introduct configurations or processes.

Part of statistical hypothesis testing, generally set at two or more samples representing populations that are equal.

The value of a parameter which has an upper bound or a lower bound, but not both.

The regular sharing of business and financial information with all Associates in the organization to show their individual roles a company's profitability, and then to greater involve them in the business process. Condition which applies to statistical process control chart where plot points fall outside of the control limits or fail an establishe which indicate that an assignable cause is present in the process. A data point in a data set that is not representative of the process. Possibly a Special Cause Variation or error in data recordin The tangible evidence that is a result of a process. Measures indicating how well customer needs and requirements are being met or exceeded. Combine yield of all processes involved in the production of a good or Probability of a defect, or proportion (%) defection A constant defining a particular property of the density function of a variable.

A graphical technique (MiniTab QC tools) to represent counts of different categories. Allows one to focus efforts on the problems with the greatest potential for improvement by showing relative frequency and/or size in a descending bar graph. Based on the proven Pareto principle that 20% of the sources cause 80% of any problem. Idea that 20% of the sources cause 80% of any problem.

Project Evaluation and Review Technique a project management and scheduling tool used to show activities, their precedence Graphical display of how proportions impact the overall situation; shows the relationships among quantities by dividing the who A test of all or part of a proposed solution on a small scale. Used to better understand solution's effects and to learn about how implementation more effective.

A mathematical expression of a distribution of counts of defects which are assumed to be random and unrelated, defined by its (DPU) All data, including information on units that will be output in the future, describing the performance of a process. The population unknowable, but is estimated using samples. Equals 1-B where B = the probability of statistics failing to reject the null hypothesis when it should. For instance, allowing a pr in an unsatisfactory manner. As B becomes smaller, the power of the test increases. Performance capability of a process, presuming the process can be centered. See Cp. Overall performance capability of a process, see Cpk.

The major independent variables used in the experiment.

Visual tool used to identify priorities by systematically eliminating options by applying criteria and demanding team consensus The chance of something happening; the percent or number of occurrences over a large number of trials.

A statistical term. The same as proportion defective, and different than dpu.

The documented sequence of steps and other instructions necessary to carry out an activity. A series of formal or informal steps that converts input materials and/or information into a finished product or service; a combin that lead to the production of some result (output).

The relative ability of any process to produce consistent results centered on a desired target value when measured over time. A system of activities whose purpose is to maintain process performance at a level that satisfies customers' needs, and drives process performance to ensure a consistent level of service to customers.

Any of a number of various types of graphs upon which data are plotted against specific control limits. Also known as Statistica charts.

Written account of the flow of the process and standard procedures for operating the process; ensures consistent operating pr process variation. The graphic display of steps, events, and operations that make up a process; a structure for defining a process in a simplified, overall view of the entire process. Taken at critical points in the process for assessing the performance of the process which is correlated to the pertinent output The individual accountable for all aspects of process performance.

The range of values that a given process characteristic displays; this particular term most often applies to the range but may a the spread may be based on a set of data collected at a specific point in time or may reflect the variability across a given amou See Touch Time. Any individual whose job falls within the scope of the process.

To identify a suitable Six Sigma project from all information available in a given area generally requiring increasingly finer focus The individual (usually a Black Belt or Green Belt) who leads the team effort by coordinating team activities in all phases of the a link to the sponsor and the Master Black Belt, and representing the team to the organization.

Of all units output from a process, that fraction or percent which are defective by failing to meet the requirements and/or have

Seeking best practices from others to incorporate into own area.

Transition of project learning to other similar areas. The output of a statistical hypothesis test; literally, the probability that the two samples in question were drawn from the same population. A p-value below 0.05 is the GEAE guideline to reject the null hypothesis and conclude statistical difference. See Quality Function Deployment.

Inputs which are measured in abstract terms as opposed to numerically represented. Qualitative data can be gathered through WorkOut!, a survey, and observation. Qualitative data can be very useful in helping one discover potential areas for quality imp develop a measurement for quantitative data.

A tool based on a structured methodology to identify and translate customer needs and wants (Y's) into technical requirements characteristics (x's). Also called the House of Quality.

A refined estimate of the financial return that may be realized by improving a process.

Input represented numerically, either continuous (e. g., length, weight, temperature) or discrete (e. g., on/off, yes/no, true/false Of a series of variable (measurement) data points arranged in rank order the first 25% and the last 25% data point. For examp set 1, 2, 2, 2, 2, 4, 4, 5, 5, 5, 7, 9, 10, 11, 11, 17 are 2 and 10. Total cycle-time work is waiting to be processed. Plot of the difference between the highest and lowest in a sample. Repeatability and Reproducibility Math term to describe variation caused by X

A graphical display of the differences between actual and ideal performance; useful in defining performance and in identifying Selecting a sample so each item in the population has an equal chance of being selected; lack of predictability; without pattern Also called common cause. A source of variation cannot be pinpointed or controlled; an inherent natural source of variation.

A source of variation which is random; a change in the source ("trivial many" variables) will not produce a highly predictable ch (dependent variable), (e.g., a correlation does not exist); any individual source of variation that cannot be pinpointed or control source of.

A collection of data selected without pattern or predictability so that each item in the population has an equal chance of being c

Variations in data that result from causes which cannot be pinpointed or controlled. Of a series of variable (measurement) data points, the difference between the maximum value and the minimum value. E.g., the range of a data set whose maximum value is 17 and minimum value is 2 is 15 (= 17 - 2).

A statistical technique for determining the relationship between one response and one or more independent variables.

Used to determine tolerances on the "vital few" after ANOVA is complete. (Clarify)

A technique used to improve a process. Usually includes more complex terms such as interactions and quadratic terms in the model (see also Screening DOE). State of a process design such that naturally occurring variation in inputs does not create unacceptable levels of variation in the output; a condition in which a response parameter exhibits hermeticity to external causes of a nonrandom nature; i.e., impervious to perturbing influences.

See Rolled Yield

The combined resulting quality level, stated as a percent acceptable, that occurs when several processes known to produce d combined to produce a product. For example, a product that requires 100 steps, each of which produces a yield of 98.78% wil that is, no acceptable products. Rough Order of Magnitude, an estimate, usually financial in nature
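As a worked illustration of the rolled throughput yield arithmetic, multiplying 100 per-step yields of 98.78% gives an overall yield of roughly 29%. A minimal sketch with hypothetical step yields:

```python
# Hypothetical per-step first-time yields; RTY is their product.
step_yields = [0.9878] * 100          # 100 process steps, 98.78% yield each

rty = 1.0
for y in step_yields:
    rty *= y

print(f"Rolled throughput yield: {rty:.1%}")   # roughly 29% for this example
```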

A methodology used to determine the true cause of a problem. Response Surface Module, an output of a DOE evaluating multiple factors Primarily used in regression; a measure of the amount of reduction in the variability of Y obtained by the Xs. It ranges from 0 to percentage of variation in the Y or response attributable to the change in X.

A variation of R-squared used for comparison to other R-squared values to account for the fact that the number of terms added R-squared term. A single point or series of consecutive points on a run chart where no point crosses the median. Facilitates the identification of trends and useful in gauging performance over time. Reliable for measuring performance prior to implementation of a solution. A graphical technique (MiniTab QC Tools) showing the time-ordered relationship of the data poin

Sample Standard Deviation, a statistical term expressing a variation in a set of data. Sometimes represented by the lower case A set of data drawn from a population of data. Quantity of data points included in a sample. Larger sample sizes enable the reliable detection of smaller process improvemen time or cost to collect. The technique by which data is drawn from a process with care taken that data points in the sample are chosen randomly. A graphical display of variable data that shows the relationships between predictor variables (X's) and response variables (Y's) potential root causes. A diagram that displays the relationships between two variables.

See scatter diagram See project scope A Six Sigma project, often the first in an area or process, whose intention is to define additional projects for the area to underta have a modest return to the business.

A management technique, similar to dashboards, that shows an organization's or process' performance over time, and possibl

Approach used in the early stages of experimentation to isolate the "vital few" from the "trivial many." See design of experimen

A measure of process performance that over the long term, a process' performance is made up of a series of smaller, more tig processes. The magnitude of the difference (expressed in units of standard deviation) is shift. A rational subgroup of historic process data representative of the inherent capability of the process to yield consistent output.

Standard deviation; an empirical measure based on the analysis of random variation in a standard distribution of values; a uniform measure of dispersion about the mean or average value such that 68.26% of all values are within 1 sigma on either side of the mean, 95.44% are within 2 sigma, 99.73% are within 3 sigma, 99.99% are within 4 sigma, and so forth.

A statistical measure (Z value) of process variation (number of defects) that any process will produce, equivalent to defects per million opportunities; the distribution or spread about the mean (average) of any process or procedure. The higher the sigma, the fewer the defects. Endorsement of a project by all potential stakeholders. Suppliers, Inputs, Process, Outputs, Customers. A process definition tool helpful in the preliminary stages of process mapping.

A combination of verified customer requirements reflected in robust process or product designs and matched to the capability o creates products with fewer than 3.4 defects per million opportunities to make a defect. World-class quality. A collection of tool quality to world-class levels.

A non-symmetrical distribution having a tail in either a positive or a negative direction. Span is the metric GE uses to measure the variation around customer requests; calculated by ordering the data from highest t and 5th percentiles and averaging the remaining 80%. Statistical Process Control Variation resulting from a specific, identifiable source.

Defect root causes are assignable. A business leader who provides overall strategic direction for a Six Sigma project team. This individual serves as a liaison betw project team; facilitates the acquisition of resources and support for the project. Sum of Squares a statistical term which assess relative magnitudes of variation.

A process which is free of assignable causes, (e.g., in statistical control). A statistical index of variability which describes the spread in a Normal distribution. The most common measure of the variabili root of the variance. A quantitative condition which describes a process that is free of assignable/special causes of variation, e.g., variation in the c Such a condition is most often evidenced on a control chart. A quantitative condition which describes a process that is free of assignable/special causes of variation, e.g., variation in the c Such a condition is most often evidenced on a control chart.

The application of statistical methods and procedures relative to a process and a given set of standards used to monitor proce predicts ,prior to creation of units, the output data before unit level rework or repair. A data analysis technique where a set of sample data is separated by a particular factor (data tag) for statistical analysis, for e employees' arrival times at work by days of the week, Monday, Tuesday, etc.

Typical tags than can be associates with each data point identifying the who, what, where, when, and why's of the process upo identify sources of variation.

Individual whose activities, tasks, job scope, authority etc. changes as result of the process improvement effort. Likely does no created from the process change. The individuals who work together on a Six Sigma project. Every project has a core team consisting of the project team leader core team is supported by one or more sponsors, a Master Black Belt and stakeholders, who advise the team

Individuals who comprise the improvement team-usually a combination of process workers, process experts, and other individu organizations that will be affected by changes in the process.

Techniques emphasizing the importance of creating a shared need for a change and managing the people side of the process In the Control vs. Technology chart the value represented by Z short term; the predicted capability of the process when identifi controlled.

A philosophy that the strategy for a successful and efficient business operation depends on management vision and leadership control process, expectations of participation, recognition and rewards, common language, and training; requiring inputs from e The total time taken for a unit of work to have something done to it; does not include time due to delays or waiting.

A business process that contributes to customer satisfaction or impacts operating efficiency and which is designated as a focu Also known as Commercial (commerce) Quality. A graphical display of a broad goal divided into different levels of detailed actions; encourages team members to expand their solutions.

Based on the Pareto principle--the combination of variables that are least likely to be responsible for variation in a process, pro A distribution with heavier tails than the normal distribution. Primarily used for testing means from sample distributions when th unknown. A statistical test used to compare the means of two distributions and determine whether a significant difference exists between standard deviation is not known. See Upper control limit
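A minimal sketch of a two-sample t-test on hypothetical data, using Welch's version (which does not assume equal variances) from scipy; the 0.05 cutoff follows the p-value guideline used elsewhere in this reference.

```python
import numpy as np
from scipy import stats

# Hypothetical cycle-time samples from two fixtures.
rng = np.random.default_rng(5)
a = rng.normal(loc=12.0, scale=1.0, size=30)
b = rng.normal(loc=12.8, scale=1.0, size=30)

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the means differ significantly.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")
```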

The statistically derived higher value as determined by the long-term performance of a controlled process. Used in Statistical P process performance and predict when the process might go out of control. Different from Spec Limit. A horizontal line on a co representing the upper limits of process capability.

Set by design as related to the required level which must be guaranteed to assure product performance / customer satisfaction See Upper specification limit.

Steps in the production and delivery of a product or service that are considered essential to meeting customer needs and requ

Tasks that do not directly add value to the product or service, but are important to key value-adding activity. An observable characteristic that can be described according to some well-defined classification or measurement scheme. Costs associated with the production of goods and services which change directly as the volumes of goods and services produ materials. Continuous data; a numerical measurement made at the interval or ratio level.

A measure of data variation; the mean of the squared deviation scores about the means of a distribution. Any quantifiable difference between individual measurements; such differences can be classified as being due to common cau causes (assignable). Variable Cost Productivity Based on the Pareto principle--the variables that are most likely responsible for variation in a process, product, or service. One of the few causal factors in a process statistically verified to have an effect on the process output.

Specific category of accounts on the balance sheet. Inventory, Receivables, Progress Payments, Contract Engineering, and A current (short term) assets & liabilities.

A control chart representing process capability over time; displays the variability in the process average and range across time Designation in Six Sigma terminology for those variables which are independent, root causes; as opposed to "Ys" which are de process. Six Sigma focuses on measuring and improving Xs, to see subsequent improvement in Ys. Algebraically represented

Output of a process that is shippable to the customer. Usually include units that were reworked, repaired, reinspected, or requ that did not increase value for the customer. See First Time Yield. Designation in Six Sigma terminology for those variables which are dependent outputs of a process, as opposed to "Xs" which causes. Algebraically represented as Y = f{x}.

A statistical term describing the number of standard deviation units a spec limit is away from the center point of the data distribution. Project leaders calculate Z values from all data distributions. Total of Z usl + Z lsl. Z value based on long term or overall data. Z value based on short term or Best-in-Class data.
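A minimal sketch of the Z calculations, assuming hypothetical data and specification limits. Z bench is derived here from the combined defect probability beyond both limits, a common convention, and the 1.5-sigma shift between long-term and short-term values is the usual reporting convention.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical long-term process data and specification limits.
data = np.random.default_rng(9).normal(loc=100.3, scale=1.2, size=500)
lsl, usl = 97.0, 103.0

mean, sigma = data.mean(), data.std(ddof=1)

z_usl = (usl - mean) / sigma        # distance to upper spec in standard deviations
z_lsl = (mean - lsl) / sigma        # distance to lower spec in standard deviations

# Z bench combines the defect probability beyond both spec limits.
p_defect = norm.sf(z_usl) + norm.sf(z_lsl)
z_bench_lt = norm.isf(p_defect)     # long-term Z from overall data
z_bench_st = z_bench_lt + 1.5       # conventional 1.5-sigma shift to short-term

print(f"Z.usl={z_usl:.2f}  Z.lsl={z_lsl:.2f}")
print(f"Z.bench (lt)={z_bench_lt:.2f}  Z.bench (st, shifted)={z_bench_st:.2f}")
```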

Continuous data (aka quantitative data) is measured in units; discrete data (aka qualitative/categorical/attribute data) is expressed as ordinal, nominal, or binary categories. Examples:

Measurement | Continuous units (example) | Ordinal (example) | Nominal (example) | Binary (example)
Time of day | Hours, minutes, seconds | 1, 2, 3, etc. | N/A | a.m./p.m.
Date | Month, date, year | Jan., Feb., Mar., etc. | N/A | Before/after
Cycle time | Hours, minutes, seconds, month, date, year | 10, 20, 30, etc. | N/A | Before/after
Speed | Miles per hour/centimeters per second | 10, 20, 30, etc. | N/A | Fast/slow
Brightness | Lumens | Light, medium, dark | N/A | On/off
Temperature | Degrees C or F | 10, 20, 30, etc. | N/A | Hot/cold
<Count data> | Number of things (hospital beds) | 10, 20, 30, etc. | N/A | Large/small hospital
Test scores | Percent, number correct | F, D, C, B, A | N/A | Pass/Fail
Defects | N/A | Number of cracks | N/A | Good/bad
Defects | N/A | N/A | Cracked, burned, missing | Good/bad
Color | N/A | N/A | Red, blue, green, yellow | N/A
Location | N/A | N/A | Site A, site B, site C | Domestic/international
Groups | N/A | N/A | HR, legal, IT, engineering | Exempt/nonexempt
Anything | Percent | 10, 20, 30, etc. | N/A | Above/below
