This document outlines eight key elements for a successful design of experiments (DOE): 1) set clear objectives, 2) use quantitative measures, 3) replicate experiments to account for noise, 4) randomize the run order, 5) block out known sources of variation, 6) understand which effects may be aliased, 7) conduct experiments sequentially to build on results, and 8) always confirm critical findings. Successful DOE depends on implementing these fundamental steps to efficiently identify important variables and optimize quality characteristics.
DESIGN OF EXPERIMENTS: ELEMENTS OF SUCCESS
By Larry Scott
In the last issue of The Quality Herald, we outlined the strategy of experimental design. Highlights included the four strategic phases: Discovery, Breakthrough, Optimization, and Validation; the resolution of the different array designs; and the overall iterative approach to DOE based on the user's requirements. In this final segment, we review eight key elements necessary to perform a successful DOE. Successful application depends on the user's ability to effectively implement these fundamental steps.
1. Set good objectives
To design an effective experiment, one must clearly define the experimental objective(s). Objectives may include screening to identify the critical variables, identifying critical system interactions, or optimizing one or more quality characteristics at several levels. Failure to set good objectives may lead to excessive experimentation, failure to identify meaningful quality characteristics, loss of valuable time and resources, unclear results, and poorly defined prediction equations.
To set good objectives, consider: 1) overall project funding and timing; 2) the number of quality characteristics to be monitored; and 3) the array design, which clarifies whether basic main effects and interaction effects are sufficient or whether a more detailed quadratic model, requiring factors at multiple levels, is needed.
2. Measure responses quantitatively
Metrics are a major aspect of creating a successful DOE. As all quality professionals and engineers know, measurement data is classified as either quantitative or qualitative. A quantitative measure is a numerical value independent of observer judgment, for example a reading from a calibrated temperature gauge or a car's digital speedometer. A qualitative measure, on the other hand, is a subjective measure that depends on the judgment of an observer, for example a typical pass/fail inspection, in which each observation reflects observer bias.
3. Replicate to dampen uncontrollable variation (noise)
Replicating an experiment refers to the number of times each combination of input variables is set up and run independently. In other words, the defining operations of the experiment must be set up from scratch for each replicate, rather than simply taking repeated measurements of the same setup.
The more times you replicate a given set of conditions, the more precisely you can estimate the response. Replication improves the chance of detecting a statistically significant effect (the signal) in the midst of natural process variation (the noise). The noise of unstable processes can drown out the process signal. Before doing a DOE, it helps to assess the signal-to-noise ratio. The signal-to-noise ratio defines the power of the experiment, allowing the researcher to determine how many replicates will be required for the DOE. Designs reflecting low power require more replicates.
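As a rough illustration of this trade-off, the Python sketch below uses a Monte Carlo simulation to estimate how many replicates per factor level would be needed to detect an assumed signal against an assumed noise level with roughly 80 percent power. The effect size, noise standard deviation, and power target are illustrative assumptions, not values from any particular process.

```python
# Minimal sketch: estimate the replicates needed to detect a signal in noise.
# The signal, noise, and power target below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

signal = 2.0         # assumed shift in the response between factor levels
noise = 3.0          # assumed process standard deviation (the "noise")
alpha = 0.05         # significance level
target_power = 0.80  # assumed power target
n_sims = 2000        # simulated experiments per replicate count

for n_reps in range(2, 21):
    detections = 0
    for _ in range(n_sims):
        low = rng.normal(0.0, noise, n_reps)       # replicates at the low setting
        high = rng.normal(signal, noise, n_reps)   # replicates at the high setting
        _, p_value = stats.ttest_ind(low, high)
        detections += p_value < alpha
    power = detections / n_sims
    if power >= target_power:
        print(f"about {n_reps} replicates per level give estimated power {power:.2f}")
        break
```

Shrinking the assumed signal or inflating the assumed noise in this sketch quickly drives the required replicate count up, which is exactly the low-power situation described above.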
4. Randomize the run order
The order in which you run the experiments should be randomized to avoid influence by uncontrolled variables such as tool wear, ambient temperature and changes in raw material. These changes, which often are time-related, can significantly influence the response. If you don't randomize the run order, the DOE may indicate factor effects that are really due to uncontrolled variables that just happened to change at the same time. For example, let's assume that you run an experiment to keep your copier from jamming so often during summer months. During the day-long DOE, you first run all the low levels of a setting (factor "A"), and then you run the high levels. Meanwhile, the humidity increases by 50 percent, creating a significant change in the response. (The physical properties of paper are very dependent on humidity.) In the analysis stage, factor A then appears to be significant, but it's actually the change in humidity that caused the effect. Randomization would have prevented this confusion.
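A minimal sketch of this practice, assuming a two-level, three-factor full factorial with placeholder factor names: the standard-order run list is built first, then shuffled to produce the order actually executed.

```python
# Sketch: build a 2^3 full factorial and randomize its run order so that
# time-related lurking variables (tool wear, humidity, ...) are not
# confounded with any factor. Factor names are placeholders.
import itertools
import random

factors = ["A", "B", "C"]
levels = [-1, +1]

# Standard order: every combination of low/high settings
standard_order = list(itertools.product(levels, repeat=len(factors)))

# Randomized run order for actual execution
run_order = standard_order.copy()
random.shuffle(run_order)

for run, settings in enumerate(run_order, start=1):
    print(f"Run {run}: " + ", ".join(f"{f}={s:+d}" for f, s in zip(factors, settings)))
```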
5. Block out known sources of variation
Blocking screens out noise caused by known sources of variation, such as raw material batch, shift changes or machine differences. By dividing your experimental runs into homogeneous blocks and then arithmetically removing the block-to-block differences, you increase the sensitivity of your DOE.
Don't block anything that you want to study. For example, if you want to measure the difference between two raw material suppliers, include them as factors to study in your DOE.
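The sketch below illustrates the "arithmetically removing the difference" step in the simplest possible way, assuming raw-material batch as the blocking variable; the two batches and the response values are made up for illustration.

```python
# Sketch of removing a known block effect: subtract each block's mean from its
# responses before comparing treatments. The two "batches" and the response
# values are invented illustrations.
import numpy as np

# response measurements grouped by raw-material batch (the blocking variable)
blocks = {
    "batch_1": np.array([10.2, 11.1, 9.8, 10.6]),
    "batch_2": np.array([12.9, 13.4, 12.5, 13.1]),  # batch 2 runs uniformly higher
}

adjusted = {}
for name, values in blocks.items():
    adjusted[name] = values - values.mean()  # remove the block-to-block shift

# after adjustment, the remaining variation reflects the factors under study
pooled = np.concatenate(list(adjusted.values()))
print("block-adjusted responses:", np.round(pooled, 2))
print("residual spread (std dev):", round(pooled.std(ddof=1), 2))
```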
6. Know which effects (if any) will be aliased
An alias indicates that you are trying to evaluate too many factors in too few experiments. Most experimenters know better than to do this deliberately, yet aliasing is a critical and often overlooked aspect of fractional factorial designs. For example, if you try to study three factors in only four runs--a half-fraction--the main effects become aliased with the two-factor interactions. If you're lucky, only the main effects will be active, but more likely there will be at least one active interaction.
Aliasing can be avoided by using full factorials or high-resolution fractional designs, but that isn't always practical. Low-resolution designs can generate misleading results, so always perform a design evaluation to see what's aliased in your factorial. Good DOE software will give you these necessary details, even if runs are deleted or levels changed. Then, if any effects are significant, you will know whether to rely on the results or do further verification.
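To make the half-fraction example above concrete, the sketch below builds a 2^(3-1) design using the common generator C = AB (the defining relation I = ABC, an assumption of this illustration) and shows that each main-effect column is identical to a two-factor-interaction column, which is exactly what aliasing means.

```python
# Sketch: alias structure of a 2^(3-1) half-fraction with generator C = AB
# (defining relation I = ABC, assumed here). Each main-effect contrast column
# equals a two-factor-interaction column, so the effects cannot be separated.
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=2)))  # full 2^2 in A and B
A, B = base[:, 0], base[:, 1]
C = A * B  # generator C = AB

print("A  column:", A)
print("BC column:", B * C)  # identical to A -> A aliased with BC
print("B  column:", B)
print("AC column:", A * C)  # identical to B -> B aliased with AC
print("C  column:", C)
print("AB column:", A * B)  # identical to C -> C aliased with AB
```

With only these four runs, a significant A contrast could just as well be a BC interaction, which is why further verification may be needed.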
7. Do a sequential series of experiments
Designed experiments should be executed in an iterative manner so that information learned in one experiment can be applied to the next. For example, rather than running a very large experiment with many factors and using up the majority of your resources, consider starting with a smaller experiment and then building upon the results. A typical series of experiments consists of a screening design (fractional factorial) to identify the significant factors, a full factorial or response surface design to fully characterize or model the effects, followed up with confirmation runs to verify your results. If you make a mistake in the selection of your factor ranges or responses in a very large experiment, it can be very costly. Plan for a series of sequential experiments so you can remain flexible. A good guideline is not to invest more than 25 percent of your budget in the first DOE.
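As a rough sketch of such a series, with hypothetical factor names and a hypothetical screening outcome, the snippet below lays out a small half-fraction for screening four candidate factors, followed by a full factorial on the two factors assumed to survive screening; confirmation runs would follow as the final phase.

```python
# Sketch of a sequential DOE series: screen many factors cheaply, then
# characterize the few that matter. Factor names and the "surviving" factors
# are illustrative assumptions.
import itertools

# Phase 1 - screening: a 2^(4-1) half-fraction, 8 runs for 4 candidate factors
screen_factors = ["A", "B", "C", "D"]
base = list(itertools.product([-1, 1], repeat=3))            # columns for A, B, C
screening_runs = [(a, b, c, a * b * c) for a, b, c in base]  # generator D = ABC
print(f"Phase 1 screening design: {len(screening_runs)} runs over {screen_factors}")

# Phase 2 - characterization: suppose screening flags only A and C as significant
survivors = ["A", "C"]
full_factorial = list(itertools.product([-1, 1], repeat=len(survivors)))
print(f"Phase 2 full factorial: {len(full_factorial)} runs over {survivors}")

# Phase 3 - confirmation runs at the predicted best settings close the series.
```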
8. Always confirm critical findings
After all the effort that goes into planning, running and analyzing a designed experiment, it's very exciting to get the results of your work. There is a tendency to eagerly grab the results, rush out to production and say, "We have the answer!" Before doing that, you need to take the time to do a confirmation run and verify the outcome. Good software packages will provide a prediction interval against which to compare the confirmation results at a stated level of confidence. Remember, in statistics you never deal with absolutes--there is always uncertainty in your recommendations. Be sure to double-check your results.
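As one simple illustration of what such an interval looks like, the sketch below computes a textbook t-based prediction interval for a single new confirmation run from replicates assumed to have been collected at the recommended settings. The response values are invented, and real DOE packages derive the interval from the fitted model rather than from raw replicates at one point.

```python
# Sketch: t-based prediction interval for one new confirmation run, built from
# replicates at the recommended settings. The response values are invented.
import numpy as np
from scipy import stats

replicates = np.array([74.8, 75.5, 74.2, 75.1, 74.9])  # illustrative responses
n = len(replicates)
mean, s = replicates.mean(), replicates.std(ddof=1)

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
half_width = t_crit * s * np.sqrt(1 + 1 / n)  # interval for one future observation

print(f"predicted response: {mean:.2f}")
print(f"95% prediction interval: ({mean - half_width:.2f}, {mean + half_width:.2f})")
print("a confirmation run falling inside this interval supports the model")
```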
In Conclusion
Design of experiments is a very powerful tool that can be utilized in manufacturing or any industry that can identify input variables and metrics. The methodology was long neglected because of the sheer volume of calculations and statistical complexity required. In the last two decades, however, that has changed. DOE has been successfully integrated into user-friendly software, making this method a convenient and beneficial tool for quantifying performance at the component, system or process level.
Larry Scott is an ASQ Six Sigma Black Belt and Principal at Process Technologies LLC, specializing in DOE. Larry has 25+ years of experience applying DOE and has worked in the automotive, electronics and financial industries as a DOE trainer and consultant since 1996.
1. This segment is condensed from Eight Keys to a Successful DOE, with permission from the authors M. Anderson and S. Kraber of Stat-Ease Inc. To review this article in its entirety, visit: www.statease.com.