A SEMINAR REPORT
ON
TO STUDY TAGUCHI'S STRATEGY FOR DESIGN OF EXPERIMENTS
ACKNOWLEDGEMENT
It is my profound privilege to express my deep sense of regard, sincere gratitude &
indebtedness to Prof. --------------------, Professor, Mechanical Engineering Department,
---------------------------, for his expert guidance, valuable suggestions, critical views &
constant encouragement throughout the course of this project work.
In the preparation of this report a number of books, papers & journals have been freely
used, for which I want to express my sincere thanks & obligation to the authors.
I take this opportunity to thank my friends & colleagues for their useful suggestions & moral
support.
DECLARATION
I hereby declare that the work presented in this report, entitled To Study
Taguchi's Strategy for Design of Experiments, in partial fulfillment of the requirements for the
award of the degree of Master of Technology, submitted to the Department of Mechanical
Engineering, --------------------------, is an authentic record of my own work carried out under the
supervision of Prof. ---------------- of the Department of Mechanical Engineering,
-----------------------------------------.
The matter embodied in this report has not been submitted by me for the award of any other
degree or diploma.
Dated
CONTENTS
Acknowledgement
Abstract
Chapter-1
1. Introduction
1.1 Definition of Quality.
1.2 Cost of Quality.
1.3 Parameters of Quality.
1.4 Basic Concepts of Quality.
1.5 Types of Quality Control
1.6 Taguchi Method: Basic steps involved.
Chapter-2
2. Taguchi method of off-line Quality Control
2.1 Historical Background
2.2 Goal post philosophy.
2.3 Taguchi's Philosophy.
2.4 Performance variation.
2.5 Counter measures to performance variation.
(i) Concept (Preliminary Design).
(ii) Parameter (Secondary Design).
Case study: Heat Treatment
Case study: Electronic Circuit.
(iii) Allowance or Tolerance Design
Chapter-3
3. Taguchi's Experimental Design & Analysis
3.1 Experimental Design Strategy.
3.2 Loss Function, Signal to Noise ratio & their Interrelationship.
3.3 Steps in Experimental Design & Analysis.
3.4 Confirmation Experiment.
REFERENCES
ABSTRACT
In a developed economy, the market for any product is generally highly competitive. To
be successful in such conditions, a producer has to offer output that has a distinctive
advantage over others: for example, innovative design & ideas, or extremely good quality
in both aesthetic & functional aspects. Quality is a good tool for competition. To meet the
customer's goals, all these properties have to be built into the product through a systems
approach, & that is the main objective of total quality control.
To achieve quality, we have to start right from the inception stage of the product &
continue till the product is in service. In the past, most of the attention was given to
controlling quality at the manufacturing-process stage & to checking incoming & outgoing
materials in the form of inspections. The approach that is emerging is based on zero-defect
systems, quality-control circles & off-line quality control. Of these, Taguchi's
method of off-line quality control is the most comprehensive & effective system. It yields a
product design which requires very little on-line control. Taguchi's approach has
both philosophical & mathematical content. The methodologies developed to implement
his ideas are known as Taguchi methods.
Chapter 1
1. INTRODUCTION
1.1 Definition of Quality
Quality is a relative term that is generally used with reference to end use of product. It
depends on the perception of the person in a given situation. The situation can be user
oriented, cost oriented or supplier oriented. The quality has a variety of meanings like;
1. Fitness for purpose
2. Conformance to requirements
3. Grade
4. Degree of performance
5. Degree of excellence
6. Measure of fulfillment of promise.
Cost of internal failure: The cost associated with defective products &
components that fail to meet quality requirements & result in losses within the
manufacturing process is called the cost of internal failure.
Quality has one true evaluator: the customer. A quality circle that describes this situation
is shown in Fig. The customer is judge, jury & executor in this model. Customers vote
with their wallets for the product which meets their requirements, including price &
performance. The birth of a product, if you will, is when the designer takes information
from the customer (market) to define the customer's wants, needs & expectations from
a particular product. Sometimes a new idea (new technology) creates its own market, but
once a competitor duplicates the product, the technological advantage is lost.
Garvin observed that Japan is famous for reliability & conformance, while Western industrial
countries are generally more capable of providing the other dimensions of quality. From this
viewpoint, the new strategic question is: what is the best way to serve the customer? The
customer values the producer who can provide high quality on all eight dimensions,
plus low cost, short lead time & high flexibility to change quantities, model mix &
design.
Serving the customer on these dimensions requires the philosophy that quality is to be
built in, acceptance of the view that good quality lowers cost, & use of advanced techniques
like JIT.
[Figure: Sources of variation (environmental, manufacturing variables & deterioration variables) acting across the development stages: product design, process design & manufacturing.]
Step 1: State the problem to be solved. The problem statement should be specific &, if
multiple responses are involved, that should be noted.
Step 2: Determine the objective of the experiment. This includes the identification of the
performance characteristic & the level of performance needed.
Step 3: Determine the measurement method(s). Develop an understanding of how the
performance characteristic(s) will be assessed after the experiment has been conducted.
Step 4: Identify the factors that are supposed to influence the performance characteristic.
Step 5: Separate the factors into control & noise factors.
Step 6: Determine the number of levels & the values of all the factors. For initial screening
experiments, the number of levels should be kept as low as possible.
Step 7: Select the design of Experiment.
Step 8: Conduct the experiments. Randomization strategies should be considered during
the Experiment.
Step 9: Analyze the results.
Step 10: Interpret the results. Determine which factors are influential & which are not
influential to the performance characteristic of interest.
Step 11: Select optimum levels of most influential control factors & predict expected
results.
Step 12: Run a confirmatory experiment. This demonstrates that the factors & levels
chosen for the influential factors do provide the expected results. If the results do not turn
out as expected, then some important factors may have been left out of the
experiment & more screening has to be done.
Step 13: Return to step 4, if objective is not met & further optimization is possible with
confirmed factors.
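The thirteen-step procedure above can be sketched end to end in code. This is an illustrative sketch only: the factor names (speed, feed, depth), their levels & the simulated measurement are hypothetical assumptions, not data from this report; a standard L4(2³) orthogonal array stands in for the design chosen at step 7.

```python
# Hypothetical 3-factor, 2-level screening study using an L4 orthogonal array.
import random

# Steps 4-6: factors believed to influence the response, each at 2 levels.
factors = {"speed": [100, 200], "feed": [0.1, 0.2], "depth": [0.5, 1.0]}

# Step 7: standard L4 array (rows = trials, entries = level index 0/1).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def run_trial(speed, feed, depth):
    """Step 8: stand-in for the real measurement of step 3."""
    random.seed(speed + feed + depth)            # repeatable fake noise
    return 50 - 0.05 * speed + 80 * feed + 4 * depth + random.gauss(0, 0.5)

# Steps 8-9: run each trial & record the response.
results = []
for row in L4:
    levels = {name: factors[name][idx] for name, idx in zip(factors, row)}
    results.append(run_trial(**levels))

# Step 10: main effect = mean response at level 2 minus mean at level 1
# (valid because every column of the array is balanced).
effects = {}
for col, name in enumerate(factors):
    low = [r for t, r in zip(L4, results) if t[col] == 0]
    high = [r for t, r in zip(L4, results) if t[col] == 1]
    effects[name] = sum(high) / len(high) - sum(low) / len(low)

# Step 11: for a smaller-is-better response, pick the level with lower mean.
optimum = {name: (1 if effects[name] < 0 else 0) for name in factors}
print(effects, optimum)
```

Steps 12-13 (confirmation run at the predicted optimum, then iteration) would follow with the same machinery.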
Chapter 2
2.1 Historical Background
After World War II, the Allied forces found that the quality of the Japanese telephone
system was extremely poor & totally unsuitable for long-distance communication.
To improve the system, the Allied command recommended that Japan
establish research facilities similar to the Bell Laboratories in the United States in order to
develop state-of-the-art communication systems. The Japanese founded the Electrical
Communication Laboratories (ECL) with Dr. Genichi Taguchi in charge of improving
R&D productivity & enhancing product quality. He observed that a great deal of money &
time was expended in engineering experimentation & testing. Dr. Taguchi started to
develop new methods to optimize the process of engineering experimentation. He
developed techniques which are now known as Taguchi methods. His greatest
contribution lies not in the mathematical formulation of design of experiments but in his
accompanying philosophy. His approach is more than a method to lay out experiments;
it is a concept that has produced a unique & powerful quality-improvement discipline
that differs from traditional methods. Despite some controversies, Taguchi's technique
is considered one of the greatest innovations in quality-control history.
2.2 Goal post Philosophy
Today in America it is quite popular to take a very strict view of what constitutes quality.
In his book Crosby supports the position that a product made according to print
specifications, with-in permitted tolerance, is of high quality. The strict view point
embraces only the designers & makers. This is the goal post syndrome. What is missing
from this philophy is the customers requirements. A product may meet print
specifications but if the print does not meet customer requirements, then true quality
cannot be present. For example, customers buy TVs with best pictures, not ones that meet
the specifications.
Another example showing how the goal-post syndrome contradicts the customer's desires is as
follows:
Batteries supply a voltage to the light bulb in a flashlight. There is some nominal
voltage, say 3 V; a higher voltage will provide a brighter light but will burn out the bulb prematurely.
Customers want the voltage to be as close to the nominal voltage as possible, but a battery
manufacturer may be using a wider tolerance than allowed by the battery specification. As
a result, either the flashlight will burn dimly or some bulbs will burn out prematurely.
Customers want the product close to nominal all the time & manufacturers want
a wide tolerance, so the question is how these seemingly opposite ideas can
be brought into harmony.
2.3 Taguchi Philosophy
Taguchi espoused an excellent philosophy for quality control in the manufacturing
industries. Indeed, his doctrine is creating an entirely different breed of engineers who
think, breathe & live quality. His philosophy has far-reaching consequences, yet it is
founded on three simple & fundamental concepts.
(a) Quality should be designed into the product & not inspected into it. No amount of
inspection can put quality back into the product.
(b) Quality is best achieved by minimizing the deviation from a target. The product
should be so designed that it is immune to uncontrollable environmental factors.
(c) The cost of quality should be measured as a function of the deviation from the
standard, & the losses should be measured system-wide.
The above three concepts are becoming the guiding principles of today's quality-control
activities. Taguchi builds both his conceptual framework & his specific methodology for
implementation from these precepts. He recommended the following three-stage process:
1. System design: This is the primary design stage, in which engineering & scientific
knowledge is used to produce the basic product or process design. It is a very important
stage, but we cannot afford to research all possible concepts; therefore, research is
limited to a few concepts, selected on the basis of past experience or an informed guess.
2. Parameter design:
under normal conditions but may fail at high temperature, high humidity or other
conditions of outer noise.
increases both product manufacturing & lifetime cost. For example, consider an electric
circuit: suppose the performance characteristic of interest is the output voltage of the
circuit & the target is y0. Assume that the output voltage is largely determined by
the gain of a transistor x in the circuit, & that the circuit designer is at liberty to choose the
nominal value of this transistor gain.
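The parameter-design idea in this circuit example can be illustrated numerically. The saturating gain-to-voltage curve below is a hypothetical assumption (the report does not give the circuit equation); the point it demonstrates is only that a nominal gain chosen on the flat part of a nonlinear curve transmits far less of the gain's variation into the output voltage.

```python
# Sketch: choosing a nominal transistor gain so that gain variation (noise)
# barely affects the output voltage. The curve & numbers are illustrative.
import math

def output_voltage(gain):
    # Assumed saturating characteristic, not from the report.
    return 150 * (1 - math.exp(-gain / 200))

def spread(nominal_gain, tol=0.20):
    """Output-voltage spread when the gain varies +/-tol about its nominal."""
    low = output_voltage(nominal_gain * (1 - tol))
    high = output_voltage(nominal_gain * (1 + tol))
    return high - low

steep = spread(100)   # nominal on the steep part of the curve
flat = spread(900)    # nominal on the flat part of the curve
print(steep, flat)    # the flat region transmits far less variation
```

Having moved the nominal to the flat region, the designer would then restore the target voltage with a separate scaling component, at no loss of robustness.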
Increasing the amount of ammonia per batch did increase the cost: ammonia costs
approximately $0.0025 per ft³ ($0.0833 per m³), or approximately 1 cent per
batch. The quality of the process was vastly improved. The tolerance-design approach of
improving the ammonia-measuring device would have entailed a much greater cost than 1
cent per batch. The reduced inspection cost & scrap rate subsequently saved
approximately $20,000 annually for the 1-cent-per-batch investment.
Tolerance design: After the system has been designed & the nominal values of its
parameters determined, the next step is to set the tolerances of the parameters.
Environmental factors must be considered along with the variation of the factors around the
mid-values of the output characteristics. The methodology is, of course, different from that of
parameter design. Narrow tolerances can be given to the noise factors with the greatest
influence. The attempt is to control the error factors & keep them within narrow
tolerances, but this drives the cost up. Narrow tolerances should be the weapon of last
resort, to be used only when parameter design gives insufficient results & never without
careful evaluation of the loss due to variability. Cost calculation reveals the several
advantages gained by the parameter-design approach. First, a larger variation of transistor gain
is not passed on to the variation in output voltage; this allows lower-quality components to be
used without lowering the quality of the overall circuit. If the transistor gain changes slightly
with operating temperature (an outer noise), this will not be passed on to the variation in
output voltage. If the transistor gain changes slightly with age (an inner noise), it too will
not be passed on.
2.6 Design of Production Process
The results of system, parameter & tolerance design by the design department are passed to the
production department in the specifications. The production department then designs a
manufacturing process that will adequately satisfy these specifications. Process design is
also done in three steps:
System Design: In which the manufacturing process is selected from knowledge of the
pertinent technology, which may include automatic control.
Parameter Design: In which the optimum working conditions for each of the component
processes are selected, including the optimum materials & parts to be purchased. The
purpose of this step is to improve process capability by reducing the influence of harmful
factors.
Tolerance Design: In which the tolerances on the process conditions & sources of
variability are set. This is a means of suppressing quality variation by directly removing its
cause.
The efficiency of steps (2) & (3) can frequently be raised by means of experimental
design. Step (2) is more important than step (3).
Design stage    Goal            Tools used
System          Set concept     Engineering
Parameter       Set target      Engineering; statistical design; experimental design
Allowance       Set tolerance   Engineering; statistical tolerance; sensitivity analysis; experimental design
Because B is usually much smaller than A, the manufacturing tolerance interval will be
narrower than the customer's tolerance limit. So the loss function is to be minimized
with an optimal parameter setting in the parameter-design stage. The loss function
encourages the production of parts & components targeted to cluster tightly around an
optimal setting, as opposed to merely staying within tolerances.
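A minimal sketch of how a manufacturing tolerance narrower than the customer's follows from the quadratic loss function, assuming the standard Taguchi relation k = A/(Δ0)², where A is the customer's loss at the customer tolerance Δ0 (the dollar values & tolerance below are illustrative, not from the report):

```python
# Factory tolerance from the quadratic loss function. A = customer's loss
# when the characteristic drifts to the customer tolerance limit delta0;
# B = the (much smaller) cost of fixing the part in the factory.
# Setting k * delta**2 = B with k = A / delta0**2 gives the factory tolerance.
import math

def manufacturing_tolerance(delta0, A, B):
    k = A / delta0 ** 2        # loss coefficient fitted to customer data
    return math.sqrt(B / k)    # deviation at which the loss equals B

# Illustrative: customer tolerance +/-0.008 in, customer loss $50,
# factory rework cost $2.
delta = manufacturing_tolerance(0.008, 50.0, 2.0)
print(round(delta, 4))
```

Because the factory cost B is far below the customer loss A, the derived factory tolerance comes out well inside the customer tolerance, exactly as the text states.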
2.8 Comparison of Philosophies
Lets look at comparison of goal post loss function philosophy with an example. When
the hood of a typical automobile is opened, a mechanism may be in place, which
automatically holds the hood in position. The required force to close the hood from this
position is important to the customer. If the force is too high, then a weaker individual
may have difficulty in closing the hood & ask for mechanism to be adjusted. If the force
is too low then it may come down with a gust of wind & the customer will again ask for it
to be adjusted. The engineering specifications, details & assembly drawings call out a
particular range of force values for the hood assembly. A range must be used, since all
hoods cannot be exactly the same; a lower limit (LL) & an upper limit (UL) are specified.
If the force is little high or low, the customer may be somewhat dissatisfied but may not
ask for the adjustment to the hood. The goal post view of this situation is as shown in
figure.
The goal-post philosophy says that as long as the closing force is within the zone shown
as the customer's tolerance, it is satisfactory & there is no problem at all. If the closing
force is smaller than the lower limit or higher than the upper limit of the customer's tolerance,
then the hood has to be adjusted at some expense, say $50, borne by the
manufacturer.
As a customer, the closer the closing force is to the nominal value, the happier you are. If the
force is a little low or a little high, you sense some loss. If the force is even lower or higher,
you experience a greater loss: the hood comes down frequently or is extremely
hard to close. When the force reaches the customer tolerance limit, the typical customer
complains about the hood. But what is the real difference between the closing forces
indicated by points A & B on the goal-post graph? From the producer's viewpoint
there appears to be very little difference in a hood that falls down just a little bit more
easily. A better model of cost versus closing force is shown in the figure.
In this curve, the loss function more nearly describes the real situation. If the closing
force is near the nominal value, there is no cost, or a very low cost, associated with the hood.
The farther the force gets from the nominal value, the greater the cost associated with that
force, until the customer's limit is reached, where the cost equals the adjustment cost. This
model quantifies the slight difference in cost associated with the hood closing force
between force A & force B.
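The difference between the two philosophies can be made concrete with a sketch. The $50 adjustment cost comes from the text above; the nominal force, the tolerance half-width & the points A and B are hypothetical numbers:

```python
# Goal-post vs quadratic loss for the hood closing force. The $50 adjustment
# cost is from the text; force units & values are illustrative.
def goalpost_loss(y, nominal, half_width, adjust_cost=50.0):
    """Zero loss inside the tolerance, full adjustment cost outside."""
    return 0.0 if abs(y - nominal) <= half_width else adjust_cost

def quadratic_loss(y, nominal, half_width, adjust_cost=50.0):
    """Loss grows with squared deviation; reaches $50 at the tolerance limit."""
    k = adjust_cost / half_width ** 2
    return k * (y - nominal) ** 2

nominal, half = 20.0, 5.0            # hypothetical force units
a, b = 24.9, 25.1                    # point A just inside, point B just outside

# Goal post: A costs nothing, B costs the full $50...
print(goalpost_loss(a, nominal, half), goalpost_loss(b, nominal, half))
# ...while the quadratic loss sees almost no difference between them.
print(quadratic_loss(a, nominal, half), quadratic_loss(b, nominal, half))
```

The quadratic model assigns nearly identical losses to A and B, matching the argument that nearly identical hoods should not be treated as perfect and defective.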
Case Study: Polyethylene film
A supplier in Japan made a polyethylene film with a nominal thickness of 0.039 in.
(1 mm) that is used for greenhouse coverings. The customers want the film to be thick
enough to resist wind damage but not so thick as to prevent the passage of light. The
producers want the film to be thinner, to be able to produce more square feet of material at
the same cost. The plot of these contradictory desires is shown in the figure. At the time,
the national specification for the film thickness stated that it should be 0.039
± 0.008 in. A company that made this film could control the thickness to ±0.008
in. consistently. The company made an economic decision to reduce the nominal
thickness to 0.032 in., relying on its ability to produce film within specification. This would
reduce manufacturing cost & increase profits.
However, that same year strong typhoon winds caused a large number of greenhouses to
be destroyed. The cost of replacing the film had to be paid by the customers, & these costs
were much higher than expected. Both of these cost situations can be seen in the figure. What
the film producer had not considered was that the customer's cost was rising
while the producer's cost was falling. The loss function, the loss to society, is the upper curve,
which is the sum of the customer's & producer's curves. The curve shows the proper film
thickness to minimize the loss to society, & that is where the nominal value of 0.039 in. is
located.
Looking at the loss function, one can easily see that as the film gets thicker than the nominal
thickness of 0.039 in., the producer loses money, & when the film gets
thinner, the customer loses money.
The producer is obliged, being part of society, to fabricate film with a nominal
thickness of 0.039 in. & to reduce the variation of that thickness to a low amount. In addition,
it will save money for society to further tighten the manufacturer's capability beyond
±0.008 in. (losses are lower closer to the nominal value).
If the producer does not attempt to hold the nominal thickness at 0.039 in. & causes
additional loss to society, it is worse than stealing from the customer's pocket. If
someone steals $10, the net loss to society is zero: someone has a $10 loss
& the thief has a $10 gain. If, however, the producer causes an additional loss to
society, everyone in the society suffers some loss. A producer who saves less money
than a customer spends on repair has done something worse than stealing from the
customer. After this experience the national specification was changed to make the
average thickness produced 0.039 in.; the tolerance was left unchanged at ±0.008 in.
2.9 Other Loss Functions
The loss function can also be applied to product characteristics other than the situation
where the nominal value is the best value: where a smaller value is the best value & where a
higher value is the best value. For instance, a good example of a lower-is-better characteristic
is the waiting time for your order at a fast-food joint. If the attendant tells you
that it will be a moment for your hamburger to come up, you sense some loss; the
longer you have to wait, the larger the loss. The micro-finish of a machined surface, friction
loss & wear are other examples of lower-is-better characteristics. Efficiency, ultimate
strength & fuel economy are examples of larger-is-better characteristics.
The loss function for a lower-is-better characteristic is shown in the figure. The cost constant
k can be calculated in the same way as in the nominal-is-best situation: there is a loss associated
with a particular value of Y, & the loss can then be calculated for any value of Y based on
that value of k. This loss function is identical to the nominal-is-best situation when m = 0,
which is the best value for a lower-is-better characteristic (no negative values). The average
loss per unit takes the form:
L(Y) = k [S² + Ȳ²]
where Ȳ is the average value of Y & S is the standard deviation of Y.
The loss function for a higher-is-better characteristic is also shown in the figure. Again, the
cost constant can be calculated from the loss associated with a particular value of
Y; subsequently, any value of Y will have a loss associated with it. The average loss per unit may
be determined by finding the average value of 1 / Y².
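A short sketch of the two average-loss formulas just described, with k = 1 and illustrative data values (the numbers are assumptions, not measurements from the report):

```python
# Average loss for lower-is-better (LB) & higher-is-better (HB) characteristics.
# LB: average loss = k * (S**2 + ybar**2); HB: average loss = k * mean(1/y**2).
from statistics import mean, pstdev

def avg_loss_lb(ys, k=1.0):
    """LB average loss from the mean & population standard deviation."""
    return k * (pstdev(ys) ** 2 + mean(ys) ** 2)

def avg_loss_hb(ys, k=1.0):
    """HB average loss: the average value of 1/Y**2, scaled by k."""
    return k * mean(1 / y ** 2 for y in ys)

surface_finish = [2.1, 1.8, 2.4, 2.0]   # lower is better (hypothetical data)
strength = [410.0, 395.0, 430.0]        # higher is better (hypothetical data)
print(avg_loss_lb(surface_finish), avg_loss_hb(strength))
```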
Randomization
The order of performing the tests of the various trials should include some form of
randomization. A randomized trial order protects the experimenter from unknown
& uncontrolled factors that vary during the experiment & may influence the
results.
Randomization can take many forms, but only three approaches are discussed here:
1. Complete randomization.
2. Simple repetition.
3. Complete randomization within blocks.
Complete randomization: Complete randomization means any trial has an equal
chance of being selected for the next test. To determine which trial to test next, a random-number
generator or simply drawing numbers from a hat will suffice. However, even
complete randomization may have a strategy applied to it. For instance, if several repetitions of
each trial are necessary, each trial may be randomly selected until all trials have one
test completed; then each trial is randomly selected in a different order until all trials
have two tests completed.
Simple Repetition: Simple repetition means that any trial has an equal opportunity of
being selected for the first test, but once a trial is selected, all the repetitions are tested
for that trial. This method is used if test set-ups are very difficult or expensive to change.
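Two of the randomization strategies above can be sketched as follows (the trial labels & repetition count are illustrative):

```python
# Complete randomization vs simple repetition for 4 trials, 2 repetitions each.
import random

def complete_randomization(trials, reps, rng):
    """Every repetition of every trial drawn in one fully random order."""
    order = [t for t in trials for _ in range(reps)]
    rng.shuffle(order)
    return order

def simple_repetition(trials, reps, rng):
    """Trials in random order, but all repetitions of a trial run back to
    back: useful when set-ups are expensive to change."""
    order = trials[:]
    rng.shuffle(order)
    return [t for t in order for _ in range(reps)]

rng = random.Random(42)              # fixed seed for a repeatable example
print(complete_randomization([1, 2, 3, 4], 2, rng))
print(simple_repetition([1, 2, 3, 4], 2, rng))
```

Complete randomization within blocks would combine the two: shuffle the full set of trials inside each block of repetitions, as described for the screening strategy above.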
Engineers & scientists are most often confronted with product- or process-development
situations. The terms product & process can be used interchangeably in the following
discussion, because the same approaches apply whether one is developing a product or a
process. One development situation is to find a parameter setting that will improve the
performance characteristic to an acceptable or optimum value. A second situation is to
find a less expensive alternate design, material or method that will provide equivalent
performance. Depending upon which situation the experimenter is facing, different
strategies may be used. The first problem, improving performance, is the most typical one.
When searching for an improved or equivalent design, the experimenter typically runs tests,
observes some of the product & makes a decision to use or reject the new design. It is the
quality of this decision that can be improved when proper test strategies are utilized; in other
words, avoid the mistake of using an inferior design or of not using an acceptable design.
Before a formal discussion of orthogonal arrays (OAs), it would be wise to review some
often-used test strategies. Not being aware of efficient, proper test strategies,
experimenters resort to the following approaches:
The most common test plan is to evaluate the effect of one parameter at a time on the product
performance. A typical progression of this approach, when the first parameter chosen does
not work, is to evaluate the effect of several parameters on the product performance at
the same time.
The simplest case of testing the effect of one parameter on performance would be to run
tests at two different conditions of that parameter. For example, two cutting
speeds could be used & the resultant micro-finish measured to determine which cutting
speed gave the more satisfactory results. If 1 symbolizes the first cutting
speed & 2 symbolizes the second cutting speed, then the experimental conditions will appear as
in the table:
Trial No.   Factor level   Test results
1           1              ***
2           2              ***
scientific approach to experimentation that is usually taught in today's high-school &
college chemistry & physics classes.
A third & more urgent situation finds the person grasping at straws & changing several
things all at the same time, in the hope that at least one of the changes will improve the
situation sufficiently. Again, one can see in the table that the first trial represents the baseline
situation.
The average of the data under trial 1 may be compared to the average of the data under trial 2 to
determine the combined effect of all the factors.
[Tables: trial conditions (factors & factor levels) & the corresponding test results when several factors are changed at the same time.]
The Taguchi method for identifying settings of design parameters that maximize a
performance characteristic is summarized as below:
1. Formulate the problem.
2. Plan the experiment.
3. Run the experiment.
4. Analyze the results.
be done to suit any other experimental situation. Common orthogonal arrays are shown
in the table. The figure shows an L8 inner array & an L4 outer orthogonal array.
Orthogonal array     No. of levels
L8 (2^3)             2
L12 (3 x 4)          3, 4
L16 (2^4)            2
L32 (2^5)            2
L9 (3^2)             3
L18 (2^1 x 3^2)      2, 3
L27 (3^3)            3
L64 (4^3)            4
Table 10: Orthogonal L8 array

Experiment              Column
number           1   2   3   4   5   6   7
1                1   1   1   1   1   1   1
2                1   1   1   2   2   2   2
3                1   2   2   1   1   2   2
4                1   2   2   2   2   1   1
5                2   1   2   1   2   1   2
6                2   1   2   2   1   2   1
7                2   2   1   1   2   2   1
8                2   2   1   2   1   1   2

Table 11: Interaction between two columns in an L8 array

Column    1     2     3     4     5     6     7
(1)       -     3     2     5     4     7     6
(2)             -     1     6     7     4     5
(3)                   -     7     6     5     4
(4)                         -     1     2     3
(5)                               -     3     2
(6)                                     -     1
(7)                                           -
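As a check on the L8 array of Table 10, the sketch below verifies its balance property: in every pair of columns, each of the four level combinations (1,1), (1,2), (2,1), (2,2) appears exactly twice, which is what makes the main-effect comparisons fair.

```python
# Verify the pairwise balance (orthogonality) of the standard L8 array.
from itertools import combinations
from collections import Counter

L8 = [
    (1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2),
    (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2),
    (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1),
    (2, 2, 1, 2, 1, 1, 2),
]

for c1, c2 in combinations(range(7), 2):
    counts = Counter((row[c1], row[c2]) for row in L8)
    # Every level combination occurs exactly twice in every column pair.
    assert all(counts[pair] == 2 for pair in [(1, 1), (1, 2), (2, 1), (2, 2)])
print("all 21 column pairs balanced")
```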
A successful confirmation experiment alleviates concern about possibly improper
assumptions underlying the model. If the increase in the performance statistic is not large
enough, another iteration of parameter design may be necessary.
[Figure: Experiment-design flow: assign factors to columns as appropriate (modify columns, assign factors requiring level modifications, assign interacting factors, then assign all other factors). Type of analysis: without repetition, standard analysis; with repetition, analysis of the signal-to-noise ratio (nominal is best, smaller is better, larger is better).]
2.14 Areas of Application
Apart from product & process design in the field of quality control, the technique can be
used in facility design, repair & maintenance, service, inventory-control policy &
production scheduling. In a recent paper, the use of the method in optimizing a robot's
process capability is described.
2.15 Advantages
1. A robust design is obtained.
2. The inspection required is minimal.
3. Arbitrary specification limits are eliminated.
4. Continuous improvement of quality & process development.
5. Money as a measure is readily acceptable to management.
6. Concern for the loss to society.
[Fig. 14: Comparison between current practice & the Taguchi approach. The current approach is a series process: some thinking, design of experiment, analysis of results. The Taguchi approach is a parallel process: more thinking up front (what is the quality characteristic? what are the design parameters?), the experiment, analysis of results & a confirmation test.]
Chapter 3
force the noise variation into the experiment, i.e. it is intentionally introduced into the
experiment. However, processes are often subjected to many noise factors that in
combination strongly influence the variation of the response. For such an extremely noisy
system it is generally not necessary to identify specific noise factors & to
deliberately control them during experimentation; it is sufficient to generate repetitions at
each experimental condition of the controllable parameters & to analyze them using the
appropriate S/N ratio.
In the present investigation, both raw-data analysis & S/N data analysis have been
used. The effects of the selected turning-process parameters on the selected quality
characteristics have been investigated through the plots of main effects based on raw data.
The optimum condition for each of the quality characteristics has been established through
S/N data analysis aided by raw-data analysis. No outer array has been used;
instead, experiments have been repeated three times at each experimental condition.
3.2 Relationship between Loss Function & Signal to Noise Ratio
Loss Function:
The heart of the Taguchi method is the definition of the nebulous & elusive term quality as
the characteristic that avoids loss to society from the time the product is shipped.
Loss is measured in monetary units & is related to quantifiable product
characteristics.
Taguchi defines quality in terms of the loss incurred by society & measures it by the loss
function. He links financial loss to the functional characteristic through a
quadratic relationship that comes from a Taylor-series expansion; the quadratic takes the
form of a parabola. Taguchi defines the loss function as a quantity proportional to the
square of the deviation from the nominal quality characteristic. He found the following
quadratic form to be a practical, workable function:

L(y) = k (y - m)²

where
L = loss in monetary units
k = cost constant (constant of proportionality)
m = value at which the characteristic should be set, i.e. the target value
y = actual value of the characteristic
[Figure: The quadratic loss function versus the goal-post (no-loss zone) view for an LB characteristic, L = kY². Fig. 18: Loss function for an HB characteristic.]
(a) Lower the Better (LB)

(S/N)LB = -10 log [ (1/R) Σ yi² ],  summed over i = 1 to R    -------------------- (2)

where
yi = value of the characteristic at observation i
R = number of repetitions in a trial
Alternately,
(S/N)LB = -10 log (MSDLB)
where
MSDLB = (y1² + y2² + y3² + … + yR²)/R
Here the target value (m) = 0.

(b) Higher the Better (HB)

(S/N)HB = -10 log [ (1/R) Σ (1/yi²) ],  summed over i = 1 to R    -------------------- (3)

where
yi = value of the characteristic at observation i
R = number of repetitions in a trial
Alternately,
(S/N)HB = -10 log (MSDHB)
where
MSDHB = (1/y1² + 1/y2² + 1/y3² + … + 1/yR²)/R
Here the target value (m) = 0.

(c) Nominal the Best (NB)
Performance characteristics whose values are preferred at a nominal value are handled with
this approach; dimensions are one example. The following equation is
used to calculate the S/N ratio for NB-type characteristics:

(S/N)NB = -10 log [ (1/R) Σ (yi - y0)² ],  summed over i = 1 to R    -------------------- (4)

where
yi = value of the characteristic at observation i
R = number of repetitions in a trial
y0 = nominal value of the characteristic
Alternately,
(S/N)NB = -10 log (MSDNB)
where
MSDNB = [(y1 - y0)² + (y2 - y0)² + (y3 - y0)² + … + (yR - y0)²]/R
Here the target value is m = y0.
The mean square deviation (MSD) is a statistical quantity that reflects the deviation from
the target value. The expressions for MSD differ for the different quality characteristics.
For nominal the best (NB), the standard definition of MSD is used, while for the others a
slightly modified definition is used. For lower the better (LB), the unstated target value is zero.
For higher the better (HB), the inverse of each large value becomes a small value & again
the unstated target value is zero. Thus, for all three expressions, the smallest magnitude of
MSD is being sought.
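The three S/N ratios of equations (2)-(4) can be sketched directly from their MSD forms (the data values below are illustrative assumptions, not measurements from the report):

```python
# S/N ratios computed via the MSD expressions of equations (2)-(4).
import math

def sn_lb(ys):
    """Lower the better: MSD = mean of y**2."""
    msd = sum(y ** 2 for y in ys) / len(ys)
    return -10 * math.log10(msd)

def sn_hb(ys):
    """Higher the better: MSD = mean of 1/y**2."""
    msd = sum(1 / y ** 2 for y in ys) / len(ys)
    return -10 * math.log10(msd)

def sn_nb(ys, y0):
    """Nominal the best with target y0: MSD = mean of (y - y0)**2."""
    msd = sum((y - y0) ** 2 for y in ys) / len(ys)
    return -10 * math.log10(msd)

roughness = [1.2, 1.4, 1.1]                      # LB, e.g. surface roughness
print(sn_lb(roughness))
print(sn_hb([9.8, 10.1, 10.3]))                  # HB, e.g. strength
print(sn_nb([10.2, 9.9, 10.1], 10.0))            # NB, target 10.0
```

In every case a larger S/N ratio corresponds to a smaller MSD, so maximizing the S/N ratio minimizes the deviation from the target.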
Relationship between S/N ratio & Loss-Function.
Fig. shows a single sided quadratic loss function with minimum loss at the zero value of
desired characteristic. As the value of y increases, the loss grows. Since loss is to be
minimized, the target in this situation for y is zero.
If m = 0, the basic loss function equation becomes
L(y) = k (y) 2
The loss may be generalized by using k = 1, & the expected value of loss may be found
by summing the losses over a population & dividing by the number of samples (R) taken
from this population. This gives the following expression:
Expected loss (EL) = (Σ yi²)/R -------------------- (5)
The above expression is a figure of demerit. The negative of this demerit expression
produces a positive quality function. This is the thought process that goes into the
creation of the S/N ratio from the basic quadratic loss function. Taguchi adds the final
touch to this transformed loss function by taking the logarithm (base 10) of the expected
loss, negating it, & multiplying by 10 to put the metric into decibel terminology. The final
expression for the lower-the-better S/N ratio takes the form of equation (2). The same
thought process follows in the creation of the other S/N ratios.
[Figure: process flow diagram for the casting — casting → cool in mould → shake out →
air cool → shot blasting → good casting. Factors identified at the process steps: cool in
mould, air cool, shot blast.]
Cause-effect diagram:
This is perhaps the most comprehensive tool that enables one to systematically speculate
about, record, & clarify the potential causes that might lead to performance deviation or
poor quality. The development of the cause-effect diagram begins with a statement of the
basic effect of interest. It is a systematic listing of causes in the form of a fishbone, with
a trunk & branches, the causes written on the left-hand side of the diagram & the effect
written on the right-hand side.
Cause-effect diagrams can also be drawn by thinking about the broad categories of
causes: material, machinery & equipment, operating methods, operator action & the
environment. The participants should add factors, based on their knowledge, by
repeatedly asking the question "why?" about the effect, until the diagram appears to
include all causes that one could regard as possible root causes.
2. Selection of number of levels for the factors
One can select two or three levels of the factors under study. Three levels of the factors
are considered more appropriate since the non-linear relationship between the process
variables & the performance characteristics can only be revealed if more than two levels
of the process parameters are taken. In the present case study, three levels of the process
variables have been selected.
3. Selection of the orthogonal array
The selection of which orthogonal array to use depends upon these items:
1. The number of factors & interactions of interest.
2. The number of levels for the factors of interest.
These items determine the total degree of freedom required for the entire experiment. The
degree of freedom for each factor is equal to the number of levels minus one:
fA = kA − 1 (where kA is the number of levels of factor A)
The degree of freedom for an interaction is the product of degree of freedom of the
interacting factors.
fA×B = (fA) × (fB)
The total degree of freedom in the experiment is the sum of all the factors & interactions
degree of freedom.
The basic kinds of OAs developed by Taguchi are two-level arrays & three-level arrays.
The standard two- & three-level arrays are:
(a) Two-level arrays: L4, L8, L16, L32
(b) Three-level arrays: L9, L27, L81
When a particular OA is selected for an experiment, the following inequality must be
satisfied.
fLN ≥ Total degree of freedom required for parameters & interactions
Where
fLN = Total degree of freedom of the OA (equal to N − 1 for an LN array)
(Depending upon the number of levels of the parameters & the total degree of freedom
required for the experiment, a suitable orthogonal array is selected.)
The number of levels of the factors should be used to select either a two-level or a
three-level orthogonal array. If the factors are two-level, then an array from Appendix B
should be chosen, & if the factors are three-level, then an array from Appendix C should
be chosen. If some factors are two-level & some three-level, then whichever is
predominant should indicate which kind of orthogonal array is selected. Once the
decision is made between a two-level & a three-level OA, the number of trials for that
kind of array must provide an adequate total degree of freedom. Many times the required
degree of freedom will fall between the degrees of freedom provided by two of the
orthogonal arrays; the next larger orthogonal array must then be chosen.
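The degree-of-freedom bookkeeping above can be sketched in a few lines. The helper names are illustrative, and the candidate array sizes are the assumed three-level family:

```python
def factor_dof(levels):
    # fA = kA - 1
    return levels - 1

def total_dof(factor_levels, interactions=()):
    """factor_levels: dict of factor name -> number of levels.
    interactions: pairs of factor names; fAxB = fA * fB."""
    dof = sum(factor_dof(k) for k in factor_levels.values())
    for a, b in interactions:
        dof += factor_dof(factor_levels[a]) * factor_dof(factor_levels[b])
    return dof

def smallest_adequate_oa(dof, array_sizes):
    """An LN array provides N - 1 degrees of freedom; pick the smallest N
    satisfying fLN >= required DOF."""
    for n in sorted(array_sizes):
        if n - 1 >= dof:
            return "L{}".format(n)
    raise ValueError("no array in the list is large enough")

# Four three-level factors plus one interaction: 4*2 + 2*2 = 12 DOF,
# so L9 (8 DOF) is too small and L27 (26 DOF) must be chosen.
levels = {"A": 3, "B": 3, "C": 3, "D": 3}
print(smallest_adequate_oa(total_dof(levels, [("A", "B")]), [9, 27, 81]))
```

This reproduces the rule stated above: when the required DOF falls between two arrays, the next larger array is selected.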
4. Assignment of Parameters and Interactions to the orthogonal array.
The Orthogonal arrays have several columns available for assignment of parameters &
some columns subsequently can estimate the effect of interactions of these parameters.
Taguchi has provided two tools to aid in the assignment of parameters & interactions to
arrays.
(a) Linear graph
(b) Triangular tables.
Each orthogonal array has a particular set of linear graphs & a triangular table associated
with it. The linear graphs indicate the various columns to which parameters may be
assigned & the columns subsequently used for evaluating the interactions of these
parameters. The triangular table contains all the possible interactions between
parameters. Using the linear graph and/or the triangular table of the selected orthogonal
array, the parameters & interactions are assigned to the columns of the orthogonal array.
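The balance property that makes these column assignments valid can be checked directly: in an orthogonal array, every pair of columns contains each combination of levels equally often. A sketch using the standard L4 (2³) array:

```python
from itertools import combinations, product
from collections import Counter

# Standard L4 (2^3) orthogonal array: 4 trials, 3 two-level columns.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(array, levels=(1, 2)):
    """Check that every pair of columns shows each level combination
    the same number of times (the defining property of an OA)."""
    n_cols = len(array[0])
    expected = len(array) // (len(levels) ** 2)
    for i, j in combinations(range(n_cols), 2):
        counts = Counter((row[i], row[j]) for row in array)
        if any(counts[c] != expected for c in product(levels, repeat=2)):
            return False
    return True

print(is_orthogonal(L4))
```

In L4 every column pair contains (1,1), (1,2), (2,1) & (2,2) exactly once, which is why the effect of each assigned parameter can be estimated independently.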
SELECTION OF OUTER ARRAY:
Taguchi separates the factors (parameters) into two main groups:
Controllable factor
Uncontrollable factor
Controllable factors are those that can easily be controlled whereas uncontrollable factors,
also known as noise factors, are the nuisance variables that are either difficult or
impossible or expensive to control. The noise factors are responsible for the performance
variation of the process. Taguchi recommends the use of outer array for noise factors &
inner array for the controllable factors. If an outer array is used, the noise variation is
forced into the experiment; alternatively, the experiments against the trial conditions of
the inner array may simply be repeated, in which case the noise variation is unforced.
The outer array, if used, need not be as complex as the inner array, because the outer
array contains only noise factors, which are controlled only during the experiment.
The experiment is conducted against each of the trial conditions of the inner array. Each
experiment at a trial condition is either simply repeated or conducted according to the
outer array used.
5. Randomization of the trial order
The randomized trial order protects the experimenter from any unknown & uncontrolled
factors that may vary during the entire experiment & which may influence the results.
Randomization can take many forms, but the three most widely used approaches are
given below:
1. Complete randomization
2. Simple repetition
3. Complete randomization within blocks.
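Two of the three schemes above can be sketched as follows (a minimal illustration; the trial and repetition counts are assumed, and seeding is only for reproducibility):

```python
import random

def complete_randomization(n_trials, repetitions, seed=None):
    """Complete randomization: every repetition of every trial condition
    is run in one fully random order."""
    runs = [(trial, rep) for trial in range(1, n_trials + 1)
            for rep in range(1, repetitions + 1)]
    rng = random.Random(seed)
    rng.shuffle(runs)
    return runs

def randomization_within_blocks(n_trials, repetitions, seed=None):
    """Complete randomization within blocks: the trial order is
    re-randomized inside each repetition block, but the blocks
    themselves are run in sequence."""
    rng = random.Random(seed)
    order = []
    for rep in range(1, repetitions + 1):
        block = list(range(1, n_trials + 1))
        rng.shuffle(block)
        order.extend((trial, rep) for trial in block)
    return order
```

Simple repetition, by contrast, runs all repetitions of one randomly chosen trial condition before moving to the next, which is cheaper when changing levels is expensive.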
6. Data Analysis
There are a number of methods suggested for analyzing the experimental data like
observation method, ranking method, column effect method and interaction graphs etc.
However in the present case the following methods have been used:
1. Plot of average response curves
2. ANOVA for raw data.
3. ANOVA for S/N data.
The plot of average responses at each level of a parameter indicates the trend. It is a
pictorial representation of the effect of a parameter on the response. The change in the
response characteristic with the change in the levels of parameters can easily be
visualized from these curves.
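The quantity plotted in these curves is simply the mean response at each level of each parameter. A sketch with hypothetical L4 data (the responses are assumed values for illustration):

```python
# Inner array: standard L4, three two-level columns (factors A, B, C).
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]
response = [20.0, 24.0, 30.0, 26.0]  # one (averaged) result per trial

def average_responses(array, y):
    """Return {column: {level: mean response at that level}}."""
    out = {}
    for col in range(len(array[0])):
        by_level = {}
        for row, yi in zip(array, y):
            by_level.setdefault(row[col], []).append(yi)
        out[col] = {lvl: sum(v) / len(v) for lvl, v in by_level.items()}
    return out

effects = average_responses(L4, response)
# Factor in column 0: level 1 average = (20 + 24)/2 = 22,
#                     level 2 average = (30 + 26)/2 = 28.
```

Plotting these per-level averages against the levels gives the response curves; a steep line indicates a strong parameter effect, a flat line a weak one.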
The S/N ratio is treated as a response of the experiment; it is a measure of the variation
within a trial when the noise factors are present. A standard ANOVA can be conducted
on the S/N ratios, which will identify the parameters significant with respect to both the
mean & the variation.
The estimate of the mean (μ) is only a point estimate based on the average of the results
obtained from the experiment. Statistically this provides a 50% chance of the true average
being greater than μ & a 50% chance of the true average being less than μ. It is therefore
customary to represent the values of the statistical parameters as a range within which it is
likely to fall for a given level of confidence. This range is termed as confidence interval.
In other words, the confidence interval is a maximum & minimum value between which
the true average should fall at some stated percentage of confidence.
The following two types of confidence intervals are suggested by Taguchi in regard to the
estimated mean of the optimal treatment condition.
1. Around the estimated average of the treatment condition predicted from the experiment.
This type of confidence interval is designated as CIPOP (confidence interval for the
population).
2. Around the estimated average of a treatment condition used in a confirmation experiment
to verify the prediction. This type of confidence interval is designated as CICE (confidence
interval for the sample group).
The difference between CIPOP & CICE is that CIPOP is for entire population i.e. all parts ever
made under the specified conditions & CICE is only a sample group made under the
specified conditions. Because of the smaller size in confirmation experiment relative to
entire population CICE must be slightly wider. The expressions for computing the
confidence intervals are given below (Ross1996, Roy1990)
CIPOP = sqrt[ Fα(1, fe) Ve / neff ]
CICE = sqrt[ Fα(1, fe) Ve (1/neff + 1/R) ]
Where
Fα(1, fe) = the F-ratio at a confidence level of (1 − α) against DOF 1 & the error DOF fe
Ve = error variance (from ANOVA)
neff = N / (1 + total DOF associated with items used in the estimate)
N = total number of results
R = sample size for the confirmation experiment
As R → ∞, 1/R → 0 & CICE → CIPOP;
as R → 1, CICE becomes wider.
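The two interval expressions can be sketched directly; the F-ratio is assumed to be read from a table (or computed separately) rather than derived here, and the numeric inputs are illustrative:

```python
import math

def ci_pop(f_alpha, ve, n_eff):
    """Confidence interval around the predicted mean of the optimum
    condition, for the entire population."""
    return math.sqrt(f_alpha * ve / n_eff)

def ci_ce(f_alpha, ve, n_eff, r):
    """Confidence interval for a confirmation-experiment sample of
    size R; wider than CI_POP, approaching it as R grows."""
    return math.sqrt(f_alpha * ve * (1.0 / n_eff + 1.0 / r))

# f_alpha: tabulated F(1, fe) at the chosen confidence level;
# ve: error variance from ANOVA; n_eff = N / (1 + DOF used in estimate).
half_width = ci_ce(f_alpha=4.0, ve=2.0, n_eff=5.0, r=10)
```

The predicted mean is then reported as μ ± the computed half-width at the stated confidence level.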
7. Confirmation experiment
The confirmation experiment is the final step in verifying the conclusions drawn from the
previous round of experimentation. The optimum condition is set for the significant
parameters & a selected number of tests is run under constant specified conditions. The
average of the confirmation experiment results is compared with the anticipated average
based on the parameters & levels tested. The confirmation experiment is a crucial step &
is highly recommended to verify the experimental conclusions.
REFERENCES
1. Kackar, R.N., "Off-line Quality Control, Parameter Design & the Taguchi Method",
Journal of Quality Technology, Vol. 17, No. 4, Oct. 1985, pp. 176-188.
2. Schonberger, R.J., "The Quality Concept: Still Evolving", National Productivity
Review, Vol. 6, 1986-87.
3. Ostle, B., "Industrial Use of Statistical Test Design", Industrial Quality Control,
Vol. 24, No. 1, July 1967, pp. 24-34.
4. Crosby, P.B., "Quality Control from A to Z", Industrial Quality Control, Vol. 20,
Jan. 1964, pp. 4-16.
5. Roy, Ranjit K., A Primer on the Taguchi Method, Van Nostrand Reinhold, New
York, 1990.
6. Freund, R.A., "Definitions & Basic Quality Concepts", Journal of Quality
Technology, Vol. 17, No. 1, Jan. 1985, pp. 50-56.
7. Ross, Philip J., Taguchi Techniques for Quality Engineering, McGraw-Hill, New
York, 2nd Ed., 1996.
8. Barker, T.B., "Quality Engineering by Design: Taguchi's Philosophy", Quality
Progress, Dec. 1986, pp. 33-42.
9. Byrne, D.M. & Taguchi, S., "The Taguchi Approach to Parameter Design", Quality
Progress, Dec. 1987, pp. 19-26.