
A

SEMINAR REPORT
ON

TO STUDY TAGUCHI'S STRATEGY FOR DESIGN OF EXPERIMENTS
Submitted in partial fulfillment of award of degree of
Master of Technology
In
Machine Design

Department of Mechanical Engineering

ACKNOWLEDGEMENT
It is my profound privilege to express my deep sense of regard, sincere gratitude &
indebtedness to Prof. --------------------, Professor, Mechanical Engineering Department,
--------------------------- for his expert guidance, valuable suggestions, critical views &
constant encouragement throughout the course of this project work.
In the preparation of this project a number of books, papers & journals have been freely
used, for which I want to express my sincere thanks & obligation to the authors.
I take this opportunity to thank my friends & colleagues for their useful suggestions & moral
support.

DECLARATION
I hereby declare that the work presented in this report entitled To Study
Taguchi's Strategy for Design of Experiments, in partial fulfillment of the requirements for the
award of the degree of Master of Technology, submitted in the Department of Mechanical
Engineering, --------------------------, is an authentic record of my own work under the
supervision of Prof. ---------------- of the Department of Mechanical Engineering,
-----------------------------------------.
The matter embodied in this report has not been submitted by me for the award of any other
degree/diploma.

Dated

CONTENTS

Acknowledgement
Abstract
Chapter-1
1. Introduction
1.1 Definition of Quality.
1.2 Cost of Quality.
1.3 Parameters of Quality.
1.4 Basic Concepts of Quality.
1.5 Types of Quality Control
1.6 Taguchi Method: Basic steps involved.
Chapter-2
2. Taguchi method of off-line Quality Control
2.1 Historical Background
2.2 Goal post philosophy.
2.3 Taguchi's Philosophy.
2.4 Performance variation.
2.5 Counter measures to performance variation.
(i) Concept (Preliminary Design).
(ii) Parameter (Secondary Design).
Case study: Heat Treatment
Case study: Electronic Circuit.
(iii) Allowance or Tolerance Design

2.6 Design of Production Process


2.7 Taguchi's Loss Function.
2.8 Comparison of Philosophies
Case Study: Polythene film
2.9 Other Loss Functions
2.10 Randomization
2.11 Analysis of variance: Basic need.
2.12 Introduction to Orthogonal arrays.
2.13 Taguchi's Method.
2.14 Areas of Application.
2.15 Advantages & Limitation.

Chapter-3
3. Taguchi's Experimental Design & Analysis
3.1 Experimental Design Strategy.
3.2 Loss Function, Signal to Noise ratio & their Interrelationship.
3.3 Steps in Experimental Design & Analysis.
3.4 Confirmation Experiment.
REFERENCES

ABSTRACT

In a developed economy, the market for any product is generally highly competitive. To
be successful in such conditions, the producer has to offer a product with a distinctive
advantage over others, for example innovative design & ideas, or extremely good quality
in both aesthetic & functional aspects. Quality is a good tool for competition. To meet
the customer's goals, all these properties have to be built into the product through a
systems approach, & that is the main objective of total quality control.
To achieve quality, we have to start right from the inception stage of the product &
continue till the product is in service. In the past, much consideration was given to
controlling quality at the manufacturing process stage & to checking incoming & outgoing
materials in the form of inspections. The approach that is emerging is based on zero
defect systems, quality control circles & off-line quality control. Of these, Taguchi's
method of off-line quality control is the most comprehensive & effective system. It gives a
product design which will require very little on-line control. Taguchi's approach has
both philosophical & mathematical content. The methodologies developed to implement
his ideas are known as Taguchi's Methods.

Chapter 1
1. INTRODUCTION
1.1 Definition of Quality
Quality is a relative term that is generally used with reference to end use of product. It
depends on the perception of the person in a given situation. The situation can be user
oriented, cost oriented or supplier oriented. Quality has a variety of meanings, such as:
1. Fitness for purpose

2. Conformance to requirements
3. Grade
4. Degree of performance
5. Degree of excellence
6. Measure of fulfillment of promise.

1.2 Cost of Quality


A quality cost committee of the American Society for Quality Control has recommended that
quality costs can be defined in four categories:
1. Cost of prevention: These costs are associated with personnel engaged in
designing, implementing & maintaining the quality system. They are
incurred to keep the failure & appraisal costs to a minimum.
2. Cost of appraisal: These costs are associated with measuring, evaluating & auditing
products, components & purchased materials to assure conformance to quality
standards & performance requirements.
3. Cost of internal failure: The costs associated with defective products &
components that fail to meet quality requirements & are caught within the
manufacturing process are called costs of internal failure.
4. Cost of external failure: The costs generated by defective products after shipment
to the customer, such as warranty charges, complaint adjustments & returned
material, are called costs of external failure.

Quality has one true evaluator, the customer. A quality circle that describes this situation
is shown in Fig. 1. The customer is judge, jury & executioner in this model. Customers vote
with their wallets for the product which meets their requirements, including price &
performance. The birth of a product, if you will, is when the designer takes information
from the customer (market) to define the customer's wants, needs & expectations from
a particular product. Sometimes a new idea (new technology) creates its own market, but
once a competitor duplicates the product, the technological advantage is lost.

Fig. 1 Quality Circle


The designer must take the customer's desires, needs & expectations into consideration
& translate them into product specifications, which include drawings, dimensions,
tolerances, materials, processes, tooling & gauging. The makers use this information,
along with the prescribed machinery, to fabricate the product. The product is then delivered
via marketing channels to the customer. To satisfy the customer, the product must arrive
in the right quantity, at the right time, at the right place & provide the right functions
for the right period of time. All of this must be available to the customer at the right
price too. This is a tough order to fill. But the simplest definition of high quality is a
happy customer. The customer becomes more endeared to the product the more it is used. The
feedback from the customer to the designer comes in terms of the number of products sold &
the warranty, repair & complaint rates. Increasing sales volume & market share with low
warranty, repair & complaint rates translate to happy customers.

1.3 Parameters of quality


Professor David Garvin of Harvard Business School identifies eight different ways in
which the user might look at quality
1. Performance
2. Features
3. Reliability
4. Conformance
5. Durability
6. Serviceability
7. Aesthetics
8. Perceived quality

Garvin observed that Japan is famous for reliability & conformance. Western industrial
countries are generally more capable of providing the other dimensions of quality. From this
viewpoint, the new strategic question is: what is the best way to serve the customer? The
customer values the producer who can provide high quality on all eight dimensions, plus
low cost, short lead times & high flexibility to change quantities, model mix &
design.
Serving the customer on these dimensions requires a philosophy in which quality is
built in, acceptance of the view that good quality lowers cost & the use of advanced
techniques like JIT.

1.4 Basic concepts of quality control.


Quality control is defined by the American Society for Quality Control as "the operational
techniques & activities which sustain a quality of product or service that will satisfy
given needs".
Quality can be measured in terms of quality characteristics, which may be variables or
attributes, & all the products must have nearly the same measurements. This "nearly the
same" generally means within specification limits. In the beginning of industry, when after
taking all possible measures the products were somehow not just the same, the
method to ensure quality was to sort out the products to satisfy the customer, & this
sorting out was made official with the term inspection. So inspection was the first method
of quality control.
Next, the control activities were extended to the manufacturing process. For the next 60 years,
these two fields developed greatly. From the beginning, statistical techniques were used in
solving the problems of quality, & statistics was effectively a prerequisite of a quality control
method. Due to this involvement of statistics, quality control was once known as
Statistical Quality Control (SQC). But the purpose of SQC was not really
to judge the product quality but to see if the process was stable.

Fig. 2 Quality Evolution.


Statistical techniques like control charts can tell when to hunt for causes of variation, but
hunting for those causes & eliminating them is a tough engineering job. This fact is
responsible for H. F. Dodge's frequently quoted statement that "Statistical Quality Control
is 90% engineering & only 10% statistics". But as the tenth link, statistics is equally as
important as the other nine links in the quality control chain as far as its strength is
concerned. In time, the quality control community changed the name to Statistical Process
Control (SPC).
Problem solving techniques such as Pareto & fishbone analysis are first used to reveal
where to use SPC. Control charts are the main components of SPC, along with many other
techniques like process flow charts, histograms, stratification, checklists, run diagrams,
pre-control & scatter diagrams.
Implementation of these quality control methods is the responsibility of the managerial group.
This is due to the fact that no matter how good a method is, everyone in the
industry needs motivation. This requires new managerial techniques like Total Quality Control,
self inspection & quality circles.
To this point, it has been discussed that quality control activities were in the fields of
inspection & manufacturing. Nowadays, quality control activities are mostly related to the
prevention & improvement aspects of quality, & as a result quality control activities have
been extended to the design field, as shown in the figure given by Kaoru Ishikawa. This
evolution is due to the fact that the ultimate source of high product quality & high
efficiency is good design. If the product is not made right the first time, product quality
is very difficult or impossible to improve at subsequent stages, i.e. product quality can be
built in only at the inception stage by controlling the various parameters related to the
product characteristics. Thus off-line quality control was introduced by Taguchi, & it is
definitely the most important tool in controlling quality & reducing cost to make Philip
Crosby's statement "Quality is Free" a reality.

                          Sources of Variation
Development stage     Environmental    Deterioration    Manufacturing
                      variables        variables        variables
Product design        Yes              Yes              Yes
Process design        No               No               Yes
Manufacturing         No               No               Yes

Table 1: Product development stages at which countermeasures against various
sources of variation can be built into the product

1.5 Types of Quality Control.


Taguchi addresses quality in two main areas: off-line & on-line quality control. Both of
these areas are very cost sensitive in the decisions that are made with respect to the
activities in each. Off-line quality control refers to the improvement of the quality of
the product & process at the development stage. On-line quality control refers to
monitoring the current manufacturing process to verify the quality levels produced.
Traditional quality control methods such as cause-effect diagrams, process capability
studies, process quality control, control charts & empirical Bayes charts concentrate almost
exclusively on manufacturing. These quality control activities at the manufacturing stage,
which are conducted to keep the manufacturing process in statistical control & to reduce
manufacturing imperfections in the product, are on-line quality control methods.
Off-line quality control methods are quality & cost control activities conducted at the
product & process design stages in the product development cycle. The overall aim of
off-line quality control activities is to enhance manufacturability & reliability & to reduce
product development & lifetime costs. There are a number of such quality control methods:
design reviews, sensitivity analysis, prototype tests, accelerated life tests &
reliability studies.
Industry needs a well researched off-line quality control method that reduces both the degree
of performance variation & manufacturing costs. Taguchi's methods are off-line quality
control methods.

1.6 Taguchi's Method: Basic Steps Involved.


Step 1: State the problem to be solved. A clear understanding of the problem by those
involved with the experiment is necessary to be able to structure the experiments. The
problem statement should be specific &, if multiple responses are involved, that should be
noted.
Step 2: Determine the objective of the experiment. This includes the identification of the
performance characteristic(s) & the level of performance needed.
Step 3: Determine the measurement method(s). An understanding is needed of how the
performance characteristic(s) will be assessed after the experiment has been conducted.
Step 4: Identify the factors that are supposed to influence the performance characteristic.
Step 5: Separate the factors into signal & noise factors.
Step 6: Determine the number of levels & the values of all the factors. For initial screening
experiments, the number of levels should be kept as low as possible.
Step 7: Select the design of Experiment.
Step 8: Conduct the experiments. Randomization strategies should be considered during
the experiment.
Step 9: Analyze the results.
Step 10: Interpret the results. Determine which factors are influential & which are not
influential to the performance characteristic of interest.
Step 11: Select optimum levels of most influential control factors & predict expected
results.
Step 12: Run a confirmation experiment. This is to demonstrate that the factors & levels
chosen for the influential factors do provide the expected results. If the results do not turn
out as expected, then some important factors may have been left out of the experiment &
more screening has to be done.
Step 13: Return to step 4 if the objective is not met & further optimization is possible with
the confirmed factors.


Chapter 2
2. Taguchi Method of Off-line Quality Control
2.1 Historical Background
After World War II, the Allied forces found that the quality of the Japanese telephone
system was extremely poor & totally unsuitable for long distance communication
purposes. To improve the system, the Allied command recommended that Japan
establish research facilities similar to the Bell Laboratories in the United States in order to
develop state-of-the-art communication systems. The Japanese founded the Electrical
Communication Laboratories (ECL) with Dr. Genichi Taguchi in charge of improving
R&D productivity & enhancing product quality. He observed that a great deal of money &
time was expended in engineering experimentation & testing. Dr. Taguchi started to
develop new methods to optimize the process of engineering experimentation. He
developed techniques which are now known as Taguchi methods. His greatest
contribution lies not in the mathematical formulation of design of experiments but in his
accompanying philosophy. His approach is more than a method to lay out experiments;
it is a concept that has produced a unique & powerful quality improvement discipline
that differs from traditional methods. Despite some controversies, Taguchi's technique
is considered one of the greatest innovations in quality control history.
2.2 Goal post Philosophy
Today in America it is quite popular to take a very strict view of what constitutes quality.
In his book, Crosby supports the position that a product made according to print
specifications, within permitted tolerances, is of high quality. This strict viewpoint
embraces only the designers & makers. This is the goal post syndrome. What is missing
from this philosophy is the customer's requirements. A product may meet print
specifications, but if the print does not meet customer requirements, then true quality
cannot be present. For example, customers buy TVs with the best pictures, not ones that
merely meet the specifications.
Another example showing how the goal post syndrome contradicts the customer's desires is
given below:
Batteries supply a voltage to the light bulb in a flashlight. There is some nominal
voltage, say 3 V, that will provide the brightest light without burning out the bulb
prematurely. Customers want the voltage to be as close to the nominal voltage as possible,
but the battery manufacturer may be using a wider tolerance than allowed by the battery
specification. As a result, either the flashlight will burn dimly or some bulbs will burn out
prematurely. Customers want the product to be close to nominal at all times &
manufacturers want a wide range of tolerance, so the question is how these seemingly
opposite ideas can be brought into harmony.
2.3 Taguchi's Philosophy
Taguchi espoused an excellent philosophy for quality control in the manufacturing
industries. Indeed, his doctrine is creating an entirely different breed of engineers who
think, breathe & live quality. His philosophy has far reaching consequences, yet it is
founded on three simple & fundamental concepts:
(a) Quality should be designed into the product & not inspected into it. No amount of
inspection can put quality back into the product.
(b) Quality is best achieved by minimizing the deviation from a target. The product
should be so designed that it is immune to uncontrollable environmental factors.
(c) The cost of quality should be measured as a function of deviation from the
standard & the losses should be measured system wide.
The above three concepts are becoming the guiding principles of today's quality control
activities. Taguchi builds both his conceptual framework & his specific methodology for
implementation from these precepts. He recommended the following three stage process:
1. System design: This is the primary design stage, in which engineering & scientific
knowledge is used to produce the basic product or process design. It is a very important
stage, but we cannot afford to research all the concepts. Therefore, research is
limited to a few concepts, selected on the basis of past experience or informed guess.
2. Parameter design: This is the secondary design stage, in which an investigation is
conducted to identify settings that minimize the performance variations.
3. Tolerance design: This is the tertiary design stage, in which the tolerances of the process
conditions & the sources of variability are set. This is a means of suppressing
quality variations by directly removing their causes.

2.4 Performance Variation


Quality has been designated as the totality of features & characteristics of a product
or service that bear on its ability to satisfy given needs. Performance characteristics are
the final characteristics of a product that determine the product's performance in satisfying
the customer's needs. The sharpness of the picture on a TV set is a good example of a
performance characteristic. Most products have many performance characteristics of
interest to the user.
In order to determine the degree of satisfaction with a performance characteristic, the ideal
state of the performance characteristic from the customer's viewpoint must be known. This
ideal state is called the target value.


Fig. 3 Loss Function
All target specifications of performance characteristics should be stated in terms of
nominal levels & tolerances around those nominal levels. The degree of performance
variation, i.e. the amount by which a manufactured product's performance deviates from the
target value during the product's life under different operating conditions & across different
units of the product, is an important aspect of product quality. Taguchi has incorporated
this particular aspect into his philosophy as the loss function (Fig. 3), in contrast to the
traditional goal post philosophy, which states the target values of performance
characteristics in terms of interval specifications only. The traditional approach erroneously
conveys the idea that a user remains equally satisfied for all values of the performance
characteristic falling within the specification interval & suddenly becomes dissatisfied the
moment the performance value slips out of that interval, whereas Taguchi's loss function
approach states that the smaller the performance variation from the target value, the better
the quality.
The primary causes of performance variation of a product are environmental variables,
product deterioration & manufacturing imperfections. Taguchi defined the variables
causing a product's performance variation as noise factors.
An example is the brightness of a fluorescent lamp, which varies with input voltage. Noise
variables are classified into the following three types:
1. Outer noise, such as temperature, humidity, voltage, dust or individual human
differences. Operating or environmental conditions that affect the function of a
product are called outer noises.
2. Inner noise, or deterioration noise: changes that occur within the product itself as it
deteriorates in storage or wears out in use.
3. Variational noise, or between-product noise: primarily, variations among products
manufactured to the same specifications.
For example, consider a series of specifications or drawings prepared for the manufacture
of a product with a certain function. Among the manufactured products, some attain the
target value while others do not; this is a functional or performance variation caused by
between-product noise. These products may perform well at first & then be functionally
damaged by deterioration, an inner noise. A product may function well under normal
conditions but fail at high temperature, high humidity or in other conditions of outer
noise.

Fig. 4 Goal Post Philosophy


2.5 Countermeasures to Performance Variation
In order to minimize the effects of the three noise sources, some countermeasures may be
taken. Of course, the most important is the countermeasure by design, an off-line
countermeasure. It consists of the following three steps:
1. Concept (primary) design.
2. Parameter (secondary) design.
3. Tolerance (tertiary) design.
Concept design: In the concept design stage, engineering & scientific knowledge is
used to produce the basic product & process design. For example, in the case of electronic
equipment, the type of circuit used to convert alternating current to direct current is
researched; or, in the design of a job-shop machining center, the design engineer may
determine, at the system/concept design stage, the need for a machining cell
consisting of lathes, a machining cell consisting of drilling machines & one or more
automated guided vehicles (AGVs) to transport parts from one machining cell to the other.
In the case of electronics, an automatic control system may be a choice for the power
circuit, but that will increase cost. Although conceptual design is very important, we cannot
afford to research all concepts. Therefore, research is limited to a few concepts, selected
on the basis of past experience or informed guess.
Parameter design: Parameter design is an investigation conducted to identify settings
that minimize (or at least reduce) the performance variation. A product or process can
perform its intended function at many settings of its design characteristics. However, the
variation in the performance characteristic may change with different settings. This variation
increases both product manufacturing & lifetime costs. For example, consider an electric
circuit. Suppose the performance characteristic of interest is the output voltage of the
circuit & the target is y0. Assume that the output voltage of the circuit is largely determined
by the gain of a transistor x in the circuit & that the circuit designer is at liberty to choose
the nominal value of this transistor.

Fig. 5 Parameter Design


Suppose also that the effect of transistor gain on the voltage is nonlinear, as shown in the
figure. In order to obtain an output y0, the product designer can select the nominal value of
the transistor gain to be x0. If the actual transistor gain deviates from the nominal value x0,
the output voltage will deviate from y0. The transistor gain can deviate from x0 because of
manufacturing imperfections in the transistor, because of deterioration during the circuit's
life span & because of environmental variables. The variation of the actual transistor gain
around its nominal value is an internal noise factor in the circuit. If the distribution of
this internal noise factor is as shown in the figure, the output voltage will have a mean value
equal to y0, but with a large variation. On the other hand, if the circuit designer selects the
nominal value of the transistor gain to be x1, then the output voltage will have a much smaller
variance.
But the mean value of the output voltage y1 associated with the nominal value x1 is far from
the target value y0. Now suppose there is another component in the circuit that has a linear
effect on the output voltage & that the circuit designer is at liberty to choose the nominal
value of this component to move the mean output voltage from y1 to the target value y0.
Adjustment of the mean value of the performance characteristic to its target value is
usually much easier than reduction of the performance variation. When the goal is to
design a product with high stability, parameter design is the most important step. This is
the step in which we exploit non-linearity, as in factor x: we find the combination of
parameter levels that reduces the effect of not just internal noise but all noises, while
maintaining a constant output voltage. It is the central step in design research.
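
The benefit of choosing the nominal value on the flat part of a nonlinear response curve can
be illustrated numerically. The following is a minimal sketch in Python, assuming a
hypothetical saturating gain-to-voltage curve; the function, its constants & the noise levels
are invented for illustration & are not taken from the report.

import math
import random

def output_voltage(gain):
    # Hypothetical nonlinear transfer curve standing in for the
    # transistor-gain-to-voltage relationship of Fig. 5.
    return 140.0 * (1.0 - math.exp(-gain / 30.0))

def simulate(nominal_gain, rel_noise=0.10, n=10000):
    # The gain varies around its nominal value (an internal noise);
    # return the mean & standard deviation of the resulting voltage.
    volts = [output_voltage(random.gauss(nominal_gain, rel_noise * nominal_gain))
             for _ in range(n)]
    mean = sum(volts) / n
    sd = (sum((v - mean) ** 2 for v in volts) / (n - 1)) ** 0.5
    return mean, sd

print(simulate(20.0))   # steep region: output spread is large
print(simulate(80.0))   # flat region: spread shrinks, mean moves off target

In this sketch the flat-region setting trades a mean shift for a much smaller spread; the mean
is then moved back to the target with a separate linear adjustment factor, exactly as described
for the second circuit component above.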

CASE STUDY: HEAT TREATMENT


This case study involves the use of the non-linear characteristics of a process. A heat
treatment process, carburizing, was primarily used to create a wear resistant surface on an
engine component. Unfortunately, a side effect of the heat treatment process caused a
slight growth in the height of the components. The unpredictability of this growth had been
causing a scrap problem after the heat treatment operation. An experiment was designed
to investigate various heat treatment process factors.

Fig. 6 Ammonia Content vs Growth


Out of all those studied, the testers screened out one key factor, which had a non-linear
tendency with respect to the change in height (growth) during heat treatment.
Three levels of the amount of ammonia (NH3) introduced into the heat treatment atmosphere
during a batch were evaluated, with the results shown in the figure. Production had been
running at a value of 5 ft3 (0.1415 m3). The device which measured the amount of ammonia
introduced into the heat treatment atmosphere had a precision of ±1 ft3
(0.0283 m3). The variation in the amount of ammonia caused a substantial batch-to-batch
variation in the change in height.
The parameter design approach was utilized & a level of 8.75 ft3 (0.2476 m3) was chosen for
production conditions. Any value above 10 ft3 (0.2830 m3) caused adverse chemical
reactions. At the new level, the measuring device does not have to be replaced or improved
to obtain an improvement in the heat treatment process. Only the average change in height
had to be accommodated: the machining adjustment was changed to handle the increased
amount of growth. The parts would have to be machined to a slightly smaller value than in
the past, but the resultant parts after heat treatment would be much more consistent batch
to batch.


Increasing the amount of ammonia per batch did increase cost. Ammonia costs
approximately $0.0025 per ft3 ($0.0883 per m3), or approximately 1 cent per
batch. The quality of this process was vastly improved. The tolerance design approach of
improving the ammonia measuring device would have entailed a much greater cost than 1
cent per batch. The reduced inspection cost & scrap rate subsequently saved
approximately $20,000 annually for the 1 cent per batch investment.
Tolerance design: After the system has been designed & the nominal values of its
parameters determined, the next step is to set the tolerances of the parameters.
Environmental factors must be considered along with the variation of the factors around the
mid values of the output characteristics. The methodology is, of course, different from that
of parameter design. Narrow tolerances can be given to the noise factors with the greatest
influence. The attempt is to control the error factors & keep them within narrow
tolerances, but this drives the cost up. Narrow tolerances should be the weapon of last
resort, to be used only when parameter design gives insufficient results & never without
careful evaluation of the loss due to variability. Cost calculations demonstrate the several
advantages gained by the parameter design approach. First, a larger variation of transistor
gain is not passed on to the output voltage; this allows lower quality, cheaper components
to be used without lowering the quality of the overall circuit. If the transistor gain changes
slightly with operating temperature, an outer noise, this will not be passed on to the output
voltage. If the transistor gain changes slightly with age, an inner noise, this too will not be
passed on.
2.6 Design of the Production Process
The results of the system, parameter & tolerance design by the design department are passed
to the production department in the specifications. The production department then designs a
manufacturing process that will adequately satisfy these specifications. Process design is
also done in three steps:
System design: In which the manufacturing process is selected from knowledge of the
pertinent technology, which may include automatic control.
Parameter design: In which the optimum working conditions for each of the component
processes are selected, including the optimum materials & parts to be purchased. The
purpose of this step is to improve process capability by reducing the influence of harmful
factors.

Tolerance design: In which the tolerances of the process conditions & the sources of
variability are set. This is a means of suppressing quality variation by directly removing its
cause.
The efficiency of steps (2) & (3) can frequently be raised by means of experimental
design. Step (2) is more important than step (3).

System (Concept)        Parameter                 Allowance (Tolerance)
Set concept             Set target                Set tolerance
Use: Engineering        Use: Engineering,         Use: Engineering,
                        statistical design,       statistical tolerancing,
                        experimental design       sensitivity analysis

Fig. 7 Three Phases of the Design Cycle

2.7 Taguchi's Loss Function

The Taguchi loss function recognizes the customer's desire to have products that are
consistent, part to part, & the producer's desire to make a low cost product. The loss to
society is composed of the costs incurred in the production process as well as the costs
encountered during use by the customer (repair, lost business etc.). To minimize the loss to
society is the strategy that will encourage uniform products & reduce costs at the point
of production & at the point of consumption.
The concept of the loss function employed by Dr. Taguchi has forced engineers & cost
accountants to take a serious look at the quality control practices of the past. The concept
is simple but effective. He defines quality as the total loss imparted to the society from the
time the product is shipped. The loss is the expected value of the monetary loss any user of
the product is likely to suffer at an arbitrary point of time during the life span of the product
due to performance variation.
There may be other losses incurred as a result of harmful effects to society, for
example pollution.
Let Y be the value of the performance characteristic of interest & suppose that the target
value of Y is m. The value of Y can deviate from m both during the product's life span &
across different units of the product. In statistical terms, Y is a random variable with some
probability distribution. Variation in Y causes losses to the user of the product. Let L(Y)
represent the monetary loss suffered by an arbitrary user of the product at an arbitrary time
during the life span of the product due to the deviation of Y from m. Usually it is difficult to
determine the actual form of L(Y), so the loss function is approximated by the quadratic

L(Y) = k (Y - m)^2

where k is some constant. The unknown constant k can be determined if L(Y) is known
for any one value of Y. Suppose (m - Δ, m + Δ) is the customer's tolerance interval. If the
performance of the product is unsatisfactory, Y slips out of this interval; if the
customer's cost of repairing or discarding the product is then A, it follows from the above
equation that

A = k Δ^2, or k = A / Δ^2

The manufacturer's tolerance interval (m - δ, m + δ) can also be obtained from the loss
function. Suppose the cost to the manufacturer of repairing, before the product is shipped,
an item that exceeds the customer's tolerance limits is B. Then

B = k δ^2 = (A / Δ^2) δ^2, or δ = Δ √(B / A)
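
These relations are easy to exercise numerically. The following minimal Python sketch uses
illustrative numbers that are not from the report: a customer tolerance of ±0.5 units around
the target, a field repair cost A of $50 & a factory repair cost B of $2.

import math

A, B, tol_customer = 50.0, 2.0, 0.5    # assumed costs & customer tolerance

k = A / tol_customer ** 2              # k = A / Delta^2  ->  $200 per unit^2

def loss(y, m):
    # L(Y) = k (Y - m)^2
    return k * (y - m) ** 2

tol_factory = tol_customer * math.sqrt(B / A)   # delta = Delta * sqrt(B / A)

print(loss(10.3, 10.0))   # a unit 0.3 from target carries an $18 loss
print(tol_factory)        # factory tolerance = 0.1 units, tighter than 0.5

Note how the manufacturer's tolerance comes out five times tighter than the customer's,
because repairing before shipment is twenty-five times cheaper than repairing in the field.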
Case study: Electronic Circuit
A power supply circuit is required to provide a certain output voltage to another electronic
circuit. The voltage is a function of transistor component A, which was evaluated at three
levels of gain, & resistance component B, which was evaluated at two levels. The curves
labeled B1 & B2 are generated from the average voltages obtained under the six possible
treatment conditions of factor A & factor B. Based on this performance, a typical designer
might specify a transistor gain of Ax to provide the target average voltage. However, since
the transistor gain varies from one part to another, the output voltage will vary from
assembly to assembly due to transistor gain variation, as indicated by distribution I. A
parameter design approach would instead be to specify the transistor gain at a value where
the response curve is flatter. At this value the variation of transistor gain could be even
greater, but the variation of the output voltage would be substantially reduced, as indicated
by distribution II. The problem here is that the average voltage produced is greater than the
target value, so an adjustment factor is required. The resistance may be altered to reduce
the average voltage to the target value.


Because B is usually much smaller than A, the manufacturer's tolerance interval will be
narrower than the customer's tolerance limits. So the loss function is to be minimized
with an optimal parameter setting in the parameter design stage. The loss function
encourages the production of parts & components targeted to cluster tightly around an
optimal setting, as opposed to merely staying within tolerances.
2.8 Comparison of Philosophies
Let's compare the goal post & loss function philosophies with an example. When
the hood of a typical automobile is opened, a mechanism may be in place which
automatically holds the hood in position. The force required to close the hood from this
position is important to the customer. If the force is too high, then a weaker individual
may have difficulty in closing the hood & ask for the mechanism to be adjusted. If the force
is too low, then the hood may come down with a gust of wind & the customer will again ask
for it to be adjusted. The engineering specifications, details & assembly drawings call out a
particular range of force values for the hood assembly. A range must be used, since all
hoods cannot be exactly the same; a lower limit (LL) & an upper limit (UL) are specified.
If the force is a little high or low, the customer may be somewhat dissatisfied but may not
ask for an adjustment to the hood. The goal post view of this situation is shown in the
figure.
The goal post philosophy says that as long as the closing force is within the zone shown
as the customer's tolerance, this is satisfactory & no problem at all. If the closing
force is smaller than the lower limit or higher than the upper limit of the customer's
tolerance, then the hood has to be adjusted at some expense, say $50, to be borne by the
manufacturer.
As a customer, the closer the closing force is to the nominal value, the happier you are. If
the force is a little low or a little high, you sense some loss. If the force is even greater or
lower, you experience a greater loss: the hood comes down frequently or is extremely
hard to close. When the force reaches the customer tolerance limit, the typical customer
complains about the hood. But what is the real difference between the closing forces
indicated by points A & B on the goal post graph? From the customer's viewpoint there is
very little difference: the hood falls down just a little bit more easily. A better model of
the cost versus closing force is shown in the figure.


In this curve, the loss function more nearly describes the real situation. If the closing
force is near the nominal value, there is no cost or a very low cost associated with the hood.
The farther the force gets from the nominal value, the greater the cost associated with that
force, until the customer's limit is reached, where the cost equals the adjustment cost. This
model quantifies the slight difference in cost associated with the hood closing force
between force A & force B.
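
For illustration, with numbers that are assumed rather than taken from the report: if the
customer's tolerance on the closing force is ±10 N & the adjustment cost at that limit is $50,
the loss function gives k = 50 / 10^2 = $0.50 per N^2. A hood 4 N from nominal then carries
an expected loss of 0.50 × 4^2 = $8, while the goal post model counts it as costless; & the
hoods at points A & B carry nearly the same real loss, even though the goal post model
charges $0 for one & $50 for the other.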
Case Study: Polyethylene film
A supplier in Japan made a polyethylene film with a nominal thickness of 0.039 in.
(1 mm) that is used for greenhouse coverings. The customers want the film to be thick
enough to resist wind damage but not so thick as to prevent the passage of light. The
producers want the film to be thinner, to be able to produce more square feet of material at
the same cost. The plot of these contradictory desires is shown in the figure. At the time,
the national specification for the film thickness stated that it should be
0.039 ± 0.008 in. A company that made this film could control the film thickness to
±0.008 in. consistently. The company made an economic decision to reduce the nominal
thickness to 0.032 in., relying on its ability to produce film within specification. This would
reduce manufacturing cost & increase profits.
However, the same year strong typhoon winds caused a large number of greenhouses to
be destroyed. The cost of replacing the film had to be paid by the customers, & these costs
were much higher than expected. Both of these cost situations can be seen in the figure. What
the film producer had not considered was the fact that the customers' cost was rising
while the producer's cost was falling. The loss to society is the upper curve,
which is the sum of the customers' & producer's curves. The curve does show the proper film
thickness to minimize the loss to society, & its minimum is where the nominal value of
0.039 in. is located.
Looking at the loss function, one can easily see that as the film gets thicker than the nominal
thickness of 0.039 in., the producer loses money, & when the film thickness gets
thinner, the customer loses money.
The producer is obliged, being part of society, to fabricate film with a nominal
thickness of 0.039 in. & to reduce the variation of that thickness to a low amount. In addition,
it will save money for society to reduce the manufacturer's variability even further below
±0.008 in. (losses are lower closer to the nominal value).
If the producer does not attempt to hold the nominal thickness at 0.039 in. & causes
additional loss to society, then this is worse than stealing from the customer's pocket. If
someone steals $10, the net loss to society is zero: someone has a $10 loss
& the thief has a $10 gain. If, however, the producer causes an additional loss to the
society, everyone in the society suffers some loss. A producer who saves less money
than the customer spends on repair has done something worse than stealing from the
customer. After this experience, the national specification was changed to make the
average thickness produced 0.039 in.; the tolerance was left unchanged at ±0.008 in.
2.9 Other Loss Functions
The loss function can also be applied to product characteristics other than the situation
where a nominal value is the best value: the situations where a smaller value is best &
where a higher value is best. A good example of a lower is better characteristic is the
waiting time for your order at a fast food joint. If the attendant tells you
that it will be a moment before your hamburger comes up, then you sense some loss; the
longer you have to wait, the larger the loss. The micro finish of a machined surface, friction
loss & wear are other examples of lower is better characteristics. Efficiency, ultimate
strength & fuel economy are examples of larger is better characteristics.
The loss function of a lower is better characteristic is shown in the figure. The cost constant
k can be calculated in the same way as in the nominal is best situation, from the loss
associated with a particular value of Y; the loss can then be calculated for any value of Y
based on that value of k. This loss function is identical to the nominal is best situation with
m = 0, which is the best value for a lower is better characteristic (no negative values). The
average loss per unit takes the form

L = k (S^2 + Ȳ^2)

where Ȳ is the average value of Y & S^2 is its variance. The loss function for a higher is
better characteristic is also shown in the figure. Again, the cost constant can be calculated
based upon the loss associated with a particular value of Y; subsequently any value of Y
will have an associated loss. The average loss per unit may be determined by finding the
average value of 1 / Y^2.

Fig. 10(a) Other loss functions: lower is better, L = k Y^2

Fig. 10(b) Other loss functions: higher is better, L = k (1 / Y^2)
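
A minimal Python sketch of the corresponding average-loss computations is given below;
the data & the cost constant k are invented for illustration, & the nominal is best case is
included for comparison.

ys = [1.2, 0.8, 1.5, 1.1, 0.9]     # measured characteristic on 5 units

n = len(ys)
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / (n - 1)   # S^2

k, target = 10.0, 1.0              # assumed cost constant & target m

loss_nominal = k * (var + (mean - target) ** 2)        # nominal is best
loss_smaller = k * (var + mean ** 2)                   # lower is better (m = 0)
loss_larger = k * sum(1.0 / y ** 2 for y in ys) / n    # higher is better

print(loss_nominal, loss_smaller, loss_larger)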


2.10 Randomization

The order of performing the tests of the various trials should include some form of
randomization. A randomized trial order protects the experimenter from any unknown
& uncontrolled factors that vary during the entire experiment & which may influence the
results.
Randomization can take many forms, but only three approaches are discussed:
1. Complete randomization.
2. Simple repetition.
3. Complete randomization within blocks.
Complete randomization: Complete randomization means any trial has an equal
chance of being selected for the next test. To determine which trial to test next, a random
number generator or simply drawing numbers from a hat will suffice. However, even
complete randomization may have a strategy applied to it. For instance, several repetitions
of each trial may be necessary; then each trial is randomly selected until all trials have one
test completed, & then each trial is randomly selected in a different order until all trials
have two tests completed, & so on.
Simple repetition: Simple repetition means that any trial has an equal opportunity of
being selected for the first test, but once that trial is selected, all the repetitions are tested
for that trial. This method is used if the test set-up is very difficult or expensive to change.
Complete randomization within blocks: Complete randomization within blocks is used
where the test set-up is very difficult or expensive to change for one factor, but very easy
for the others.
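
The three strategies can be sketched in a few lines of Python; the trial numbers, repetition
count & the blocking of a hard-to-change factor A below are invented for illustration.

import random

trials = [1, 2, 3, 4]      # trial numbers from the experimental design
reps = 2                   # repetitions required per trial

# Complete randomization: each repetition pass is run in a fresh random order.
for rep in range(reps):
    order = trials[:]
    random.shuffle(order)
    print("pass", rep + 1, ":", order)

# Simple repetition: trials are picked in random order, but all repetitions
# of a trial run back to back (useful when the set-up is costly to change).
order = trials[:]
random.shuffle(order)
print([(t, r + 1) for t in order for r in range(reps)])

# Complete randomization within blocks: the hard-to-change factor A is held
# fixed within each block & the remaining trials are randomized inside it.
blocks = {"A at level 1": [1, 2], "A at level 2": [3, 4]}
for label, group in blocks.items():
    random.shuffle(group)
    print(label, ":", group)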
2.11 Analysis of Variance (ANOVA): Basic Need

The purpose of product & process development is to improve the performance
characteristics of the product or process relative to customer needs & expectations. The
purpose of experimentation should be to reduce & control variation in a product or
process; subsequently, decisions must be made regarding which parameters affect the
performance of the product or process. The loss function quantifies the need to understand
which design factors influence the average & the variation of a performance characteristic
of a product or process. By properly adjusting the average & reducing the variation, the
product or process losses are minimized.
Because variation plays a large part in any discussion of quality, a statistical method like
analysis of variance (ANOVA) will be used to interpret the experimental data & make the
necessary decisions. This method was developed by Sir Ronald Fisher in the 1930s as a way
to interpret the results of agricultural experiments. ANOVA is not a complicated method
& has a lot of mathematical beauty associated with it. ANOVA is a statistical tool for
detecting any differences in the average performance of groups of items tested. The decision
takes variation into account.
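
As an illustration of the basic idea (not the full Taguchi analysis described in later sections),
a one-way ANOVA on two groups of readings can be run with a standard statistics library.
The data below are invented & SciPy is assumed to be available.

from scipy import stats

# Hypothetical micro finish readings at two cutting speeds.
speed_1 = [22.0, 24.5, 23.1, 25.0]
speed_2 = [18.2, 19.5, 17.8, 20.1]

f_stat, p_value = stats.f_oneway(speed_1, speed_2)
print(f_stat, p_value)
# A small p-value indicates that the difference between the group averages
# is large relative to the variation within the groups.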
2.12 Introduction to Orthogonal Arrays

Engineers & scientists frequently encounter product or process development
situations. The terms product & process can be used interchangeably in the following
discussion, because the same approaches apply whether developing a product or a
process. One development situation is to find a parameter that will improve a
performance characteristic to an acceptable or optimum value. A second situation is to
find a less expensive alternative design, material or method that will provide equivalent
performance. Depending upon which situation the experimenter is facing, different
strategies may be used. The first problem, improving performance, is the most typical one.
When searching for an improved or equivalent design, the experimenter typically runs tests,
observes some of the product & makes a decision to use or reject the new design. It is the
quality of this decision that can be improved when proper test strategies are utilized; in
other words, the mistake of using an inferior design or not using an acceptable design is
avoided.

Before a formal discussion of orthogonal arrays (OAs), it is worth reviewing some
often used test strategies. Not being aware of efficient, proper test strategies,
experimenters resort to the following approaches:
The most common test plan is to evaluate the effect of one parameter on product
performance. A typical progression of this approach, when the first parameter chosen does
not work, is to evaluate the effects of several parameters on the product performance at
the same time.
The simplest case of testing the effect of one parameter on performance is to run a
test at two different conditions of that parameter. For example, two cutting
speeds could be used & the resultant micro finish measured to determine which cutting
speed gives the most satisfactory results. If 1 symbolizes the first cutting
speed & 2 the second, the experimental conditions will appear as in the
table:
Trial No.    Factor Level    Test Results
1            1               ***
2            2               ***

Table 6: One Factor Experiment


The * symbolizes the value of micro finish that would be obtained on different test
samples. Sample2 for example, could be averaged only under trial 1 & compared to the
average value of sample 2 under trial 2 to estimate the effect of cutting speed. To do this
in satisfactory & proper manner, a valid number of samples under trial 1 & 2 would have
to be made; two under conditions may not be adequate. Most engineers are not familiar
with the statistical method of determining the proper sample size but there are statisticians
that can aid in that determination.
If the first factor chosen fails to produce the hope for result, the person usually resorts to
testing some other factor & the resultant test program would appear as below. In generic
term, lets assume the experimenter has looked at four different factors labeled as A, B, C,
D etc. each evaluated one at a time. One can see in table that the first trial is at the
baseline condition. The results of trial 2 can be compared that of trial 1 to estimate the
effect of factor A on the performances of the product. The results of trial 3 can be
compared to trial 1 to estimate the effect of factor B on product performance & so on.
Each factor level is changed one at a time, holding others constant. This is the traditional
28

scientific approach to experimentation that is usually taught in todays high schools &
college chemistry & physics classes.
The third & most urgent situation finds the person grasping at straws & changing several
things all at the same time, in the hope that at least one of the changes will improve the
situation sufficiently. Again, one can see in the table that the first trial represents the baseline
situation.
The average of the data under trial 1 may be compared to the average of the data under trial 2
to determine the combined effect of all the factors.
Trial       Factor & Factor Levels       Test Results
Number      A    B    C    D
1           1    1    1    1             ***
2           2    1    1    1             ***
3           1    2    1    1             ***
4           1    1    2    1             ***
5           1    1    1    2             ***

Table 7: Several Factors One at a Time

Trial       Factor & Factor Levels       Test Results
Number      A    B    C    D
1           1    1    1    1             ***
2           2    2    2    2             ***

Table 8: Several Factors All at the Same Time


2.13 The Taguchi Method

The Taguchi method for identifying the settings of design parameters that maximize a
performance characteristic is summarized below:
1. Formulate the problem.
2. Plan the experiment.
3. Run the experiment.
4. Analyze the results.
5. Confirm the results.
These steps are now discussed in detail.
1. Formulate the Problem
In this step, engineering knowledge is applied to identify control factors & noise variables &
to make tentative decisions about the non-linearities of factors that should be allowed for in
the experiment. These control & noise factors correspond to the performance
characteristic of interest. Possible interactions are considered.
2. Plan the Experiment
Experiments are to be carried out on the physical system or on a simulation model, with
factors at known levels, so that appropriate inferences can be made from the analysis of the
experimental results. The experiments are also to be performed in a statistically designed way.
The Englishman Sir R. A. Fisher, in the 1920s, first proposed the technique of laying out the
conditions of an experiment involving multiple factors that vary simultaneously. The method
is popularly known as the factorial design of experiments. It was originally used in the field of
agriculture.
Peach (1946) was one of the first statisticians to bring the attention of quality control
practitioners to the subject of design of experiments. Next, Sedar (1948) called the
attention of production minded individuals to the advantages of applying statistical
design of experiments to fact finding investigations.
A full factorial design will identify all possible combinations for a given set of factors.
Since most industrial experiments involve a significant number of factors, a full
factorial design results in a large number of experimental runs. For example, 7 factors at 2
levels each will require 2^7 = 128 experimental settings to find the optimal setting. This
number is prohibitively large considering the time & money involved. In such cases, fractional
factorial designs of experiments can be used. But there are no general guidelines for their
application & the results of two such experiments will not always coincide. Of course, the
method of analysis is simple.
Taguchi developed a set of standard orthogonal arrays to design inner arrays for control
factors & outer arrays for noise factors. The standard arrays, in tabular form, can be
used for a wide range of experimental conditions, & modifications to the standard arrays can
be made to suit other experimental situations. Common orthogonal arrays are shown
in the table. The figure shows an L8 inner array & an L4 outer orthogonal array.

Orthogonal Array      No. of Factors      No. of Levels
L4 (2^3)              3                   2
L8 (2^7)              7                   2
L9 (3^4)              4                   3
L12 (2^11)            11                  2
L16 (2^15)            15                  2
L18 (2^1, 3^7)        8                   2, 3
L27 (3^13)            13                  3
L32 (2^31)            31                  2
L64 (4^21)            21                  4

Table 9: Common Orthogonal Arrays


For assigning factors with interactions, linear graphs & triangular tables are used. The figure
shows a linear graph for the L8 design & the table shows a triangular table for the L4 & L8
designs. Taguchi tried to determine only the main effects & a few selected two-factor
interactions, to keep the number of experimental runs to a minimum.
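
The balancing property that makes these arrays work can be checked directly. The following
Python sketch verifies that in the standard L8 array (reproduced in Table 10 below), every
pair of columns contains each combination of levels equally often; this is what lets 8 runs
stand in for the 2^7 = 128 runs of a full factorial.

from itertools import combinations

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

# Orthogonality: for every pair of columns, each of the four level pairs
# (1,1), (1,2), (2,1), (2,2) appears the same number of times (here, twice).
for c1, c2 in combinations(range(7), 2):
    counts = {}
    for row in L8:
        pair = (row[c1], row[c2])
        counts[pair] = counts.get(pair, 0) + 1
    assert set(counts.values()) == {2}, (c1, c2, counts)

print("all 21 column pairs are balanced")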
3. Run the Experiment
Experiments are carried out with different settings of the factors according to the design
of experiments, & the response in each run is recorded as a result. Whenever possible, the
trial conditions should be run in a random order to avoid any influence of the
experimental set-up. For each run at least one reading is to be taken; for multiple readings,
a repetition or replication procedure can be used.
4. Analyze the Results
The objective of Taguchi's experiment is primarily to seek answers to the following questions:
(a) What are the optimum conditions?
(b) Which of the factors contribute to the results, & by how much?
(c) What will be the expected result at the optimum conditions?
The results of the Taguchi experiment are analyzed in a standard series of phases,
keeping the following three quality characteristic criteria in mind:
(a) Target (nominal) value is best (N-type).
(b) Smaller is better (S-type).
(c) Larger is better (L-type).
First, the factorial effects are evaluated & the influences of the factors are determined in
qualitative terms. The optimum conditions & the performance at the optimum conditions
are also determined from the factorial effects. A graphical representation of the main effects
is shown in the figure. The performance at the optimum conditions can be calculated from
the relation:
Expected result = grand average + contributions of the factors at their respective optimum levels.
Interaction effects can also be shown graphically. If there is an interaction, the graphs will
cross each other; otherwise they will be parallel to each other.
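
A minimal Python sketch of this calculation for a single 2-level factor, with invented trial
results, is shown below; with several influential factors, the contributions (level average
minus grand average) of the chosen levels are summed.

levels = [1, 1, 2, 2]               # level of factor A in each trial
results = [42.0, 45.0, 55.0, 52.0]  # observed response of each trial

grand_avg = sum(results) / len(results)
avg_A1 = sum(r for l, r in zip(levels, results) if l == 1) / 2
avg_A2 = sum(r for l, r in zip(levels, results) if l == 2) / 2

# Expected result = grand average + contribution at the chosen level,
# where the contribution is (level average - grand average).
best_avg = max(avg_A1, avg_A2)      # larger is better characteristic assumed
expected = grand_avg + (best_avg - grand_avg)

print(grand_avg, avg_A1, avg_A2, expected)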
In the next phase, ANOVA (analysis of variance) is performed on the results. The ANOVA
study identifies the relative influence of the factors in discrete terms. When the
experiments include multiple runs & the results are measured in quantitative terms, Taguchi
recommends the use of signal to noise (S/N) ratio analysis. In the S/N analysis, the
multiple results of the different settings are first transformed into S/N ratios & then
analyzed. The signal to noise ratio expresses the scatter around a target value: the larger
the ratio, the smaller the scatter. So we take the setting with the largest signal to noise ratio.
The signal to noise ratio is given by the relation:
S/N = -10 log10 (MSD)
where MSD = mean square deviation.
The figures below show the flow diagrams of the design of experiments & the analysis of the
results in Taguchi's method. Next, an example of a cake baking experiment is described,
which involves the following steps.
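
The mean square deviation takes a different form for each of the three quality characteristic
criteria listed earlier. A minimal Python sketch, using the standard Taguchi forms of the
MSD & invented repetition data, is given below.

import math

def sn_ratio(msd):
    # S/N = -10 log10(MSD); a larger S/N means less scatter about the target.
    return -10.0 * math.log10(msd)

def msd_nominal(ys, m):
    return sum((y - m) ** 2 for y in ys) / len(ys)    # N-type

def msd_smaller(ys):
    return sum(y ** 2 for y in ys) / len(ys)          # S-type

def msd_larger(ys):
    return sum(1.0 / y ** 2 for y in ys) / len(ys)    # L-type

ys = [9.8, 10.1, 10.3]    # three repetitions of one trial condition
print(sn_ratio(msd_nominal(ys, 10.0)))
print(sn_ratio(msd_smaller(ys)))
print(sn_ratio(msd_larger(ys)))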
5. Confirm the Improvement
In order to know whether the statistical model underlying the design & analysis of the
experiment is true, it is necessary to run a confirmation (follow-up) experiment to check that
the new settings improve the performance statistic over its value at the initial settings.

Experiment      Control Factors (Column Number)            Results
Number          1    2    3    4    5    6    7
1               1    1    1    1    1    1    1            ***
2               1    1    1    2    2    2    2            ***
3               1    2    2    1    1    2    2            ***
4               1    2    2    2    2    1    1            ***
5               2    1    2    1    2    1    2            ***
6               2    1    2    2    1    2    1            ***
7               2    2    1    1    2    2    1            ***
8               2    2    1    2    1    1    2            ***

Table 10: The L8 Orthogonal Array
Column    1     2     3     4     5     6     7
(1)       -     3     2     5     4     7     6
(2)             -     1     6     7     4     5
(3)                   -     7     6     5     4
(4)                         -     1     2     3
(5)                               -     3     2
(6)                                     -     1
(7)                                           -

Table 11: Interactions between two columns in an L8 array
A successful confirmation experiment alleviates concerns about possible improper
assumptions underlying the model. If the increase in the performance statistic is large
enough, another iteration of parameter design may be necessary.


Fig. 12: Experimental design flow chart. For a simple design using a standard array, factors
are assigned to columns as appropriate. For a design with mixed levels & interactions, the
columns are modified: factors requiring level modifications are assigned first, then the
interacting factors, then all other factors. Noise factors are then considered: the noise
conditions are determined using an outer array, along with the number of repetitions.
Finally, the experiments are run in random order when possible.


Fig. 13: Analysis flow diagram. Without repetition, the standard analysis is used; with
repeated results, the signal to noise ratio is analyzed, chosen according to the quality
characteristic: nominal is best, smaller is better or larger is better.

2.14 Areas of Application

Apart from product & process design in the field of quality control, the technique can be
used in facility design, repair & maintenance, service, inventory control policy &
production scheduling. In a recent paper, the use of the method in optimizing robot
process capability is described.
2.15 Advantages & Limitations

Advantages:
1. A robust design is obtained.
2. The inspection required is minimal.
3. Arbitrary specification limits are eliminated.
4. Continuous improvement of quality & process development.
5. Money as a measure is readily acceptable to management.
6. Concern for the loss to society.
7. A team approach to problem solving.
8. Consistency in problem solving & analysis.
9. Long term overall benefit.
Limitations:
1. The Taguchi method needs knowledge of advanced statistics & training.
2. In the short term, there is no cost benefit.
3. There is doubt about the statistical purity of the method.
The comparison between current practice & the Taguchi approach is shown in the figure.


Fig. 14: Comparison between current practice & the Taguchi approach. Current practice is a
series approach: some thinking, "let's try this", an experiment, analysis of the results, more
thinking, "let's try that", more experiments; the same process is repeated again & again in a
random manner, sometimes without any meaningful result. The Taguchi approach is a
parallel process: brainstorming (what is the quality characteristic? what are the design
factors?), a designed experiment, analysis of the results & a confirmation test.

Chapter 3
Taguchi Experimental Design & Analysis

3.1 Strategy for Design of Experiments
Taguchi recommends orthogonal arrays (OAs) for conducting experiments; these arrays are generalizations of Graeco-Latin squares. To design an experiment is to select the most suitable OA & to assign the parameters & interactions of interest to the appropriate columns. The use of the linear graphs & triangular tables suggested by Taguchi makes the assignment of parameters simple, & the array forces all experimenters to design almost identical experiments. In the Taguchi method, the results of experiments are analyzed to achieve one or more of the following objectives:
1. To establish the best condition of a product or process.
2. To estimate the contribution of individual parameters & interactions.
3. To estimate the response under the optimum conditions.
The optimum conditions are identified by studying the main effects of each of the parameters. The main effects indicate the general trend of influence of each parameter. Knowledge of the contribution of individual parameters is key in deciding the nature of control to be established on a production process. Analysis of variance (ANOVA) is the statistical treatment commonly applied to the results of the experiments to determine the percentage contribution of each factor against a stated level of confidence. Study of the ANOVA table for a given analysis helps determine which of the parameters need control.
Taguchi suggests two different routes for carrying out the complete analysis. In the first, the standard approach, the results of a single run, or the average of repetitive runs, are processed through main-effect & ANOVA analyses. In the second approach, which Taguchi strongly recommends for multiple runs, the signal-to-noise ratio is used for the same steps in the analysis. The S/N ratio is a concurrent quality metric linked to the loss function: by maximizing the S/N ratio, the associated loss can be minimized. The S/N ratio determines the most robust set of operating conditions, i.e. the one that minimizes variation in the results, & is considered the response of the experiment. Taguchi recommends the use of an outer OA to force the noise variation into the experiment, i.e. noise is intentionally introduced into the experiment. However, processes are often subjected to many noise factors that in combination strongly influence the variation of the response. For such extremely noisy systems it is generally not necessary to identify specific noise factors & to deliberately control them during experimentation; it is sufficient to generate repetitions at each experimental condition of the controllable parameters & to analyze them using an appropriate S/N ratio.
In the present investigation, both raw-data analysis & S/N data analysis have been used. The effects of the selected turning-process parameters on the selected quality characteristics have been investigated through plots of the main effects based on raw data. The optimum condition for each quality characteristic has been established through S/N data analysis aided by raw-data analysis. No outer array has been used; instead, experiments have been repeated three times at each experimental condition.
3.2 Relationship between Loss Function & Signal-to-Noise Ratio
Loss Function:
At the heart of the Taguchi method is the definition of the nebulous & elusive term "quality" as the characteristic that avoids loss to society from the time the product is shipped. Loss is measured in monetary units & is related to quantifiable product characteristics. Taguchi thus defines quality in terms of the loss incurred by society, measured by a loss function. He unites the financial loss with the functional characteristics through a quadratic relationship that comes from a Taylor series expansion; the quadratic takes the form of a parabola. Taguchi defines the loss function as a quantity proportional to the square of the deviation from the nominal quality characteristic, & found the following quadratic form to be a practical, workable function:
L(y) = k (y - m)^2
where
L = loss in monetary units
m = value at which the characteristic should be set, i.e. the target value
y = actual value of the characteristic
k = a constant depending upon the magnitude of the characteristic & the monetary unit involved.
The characteristics of the loss function are:
1. The further the product characteristic deviates from the target value, the greater the loss. The loss is zero when the quality characteristic of a product meets its target value.
2. The loss is a continuous function & not a sudden step as in the case of the traditional approach. This continuity illustrates the point that merely making a product within the specification limits does not necessarily mean that the product is of good quality.
The difference between the Taguchi loss function & the traditional quality-control approach is shown graphically below.

Fig. 15: Taguchi loss function (a continuous quadratic loss about the target value)

Fig. 16: Traditional goal-post approach (no loss anywhere within the specification limits)
Average Loss Function for a Product Population
In a mass-production process the average loss per unit is expressed as:
L(y) = (1/n) [k (y1 - m)^2 + k (y2 - m)^2 + k (y3 - m)^2 + ... + k (yn - m)^2]
where y1, y2, y3, ..., yn are the actual values of the characteristic for units 1, 2, 3, ..., n;
n = number of units in the given sample;
k = a constant depending upon the magnitude of the characteristic & the monetary unit involved;
m = target value at which the characteristic should be set.
The equation can be simplified as
L(y) = (k/n) [(y1 - m)^2 + (y2 - m)^2 + (y3 - m)^2 + ... + (yn - m)^2]
or L(y) = k (MSDNB)
where MSDNB is the mean square deviation, i.e. the average of the squares of all deviations about the target (nominal) value, & NB stands for "nominal is best".
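As a minimal sketch (in Python, with purely illustrative values for k, m & the sample), the per-unit & average losses can be computed as follows:

    # Quadratic loss for one unit: L(y) = k * (y - m)^2
    def loss(y, k, m):
        return k * (y - m) ** 2

    # Average loss for a sample: L = k * MSD_NB
    def average_loss(ys, k, m):
        msd_nb = sum((y - m) ** 2 for y in ys) / len(ys)
        return k * msd_nb

    # Illustrative values only: k = 0.5 (monetary units), target m = 10.0
    sample = [9.7, 10.2, 10.1, 9.9]
    print(average_loss(sample, k=0.5, m=10.0))  # average loss per unit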
Other Loss Functions
The loss function can also be applied to product characteristics other than the situation where the nominal value is best. For the smaller-the-better case the target is m = 0, which is the best value for such a characteristic (Fig. 17). For the larger-the-better characteristic (Fig. 18) the unstated target is infinitely large; equivalently, the reciprocal 1/y has a target of zero.

CHARACTERISTIC LB
L = kY2

Fig. 17: Loss function for LB


CHARACTERISTIC HB
L = k (1/Y2)

Fig. 18:
Loss
Function for HB

Signal-to-Noise (S/N) Ratio
The loss function discussed in the previous section is an effective figure of merit for making engineering design decisions. However, establishing an appropriate loss function, with its k value, to use as a figure of merit is not always easy or cost-effective. Recognizing this dilemma, Taguchi created a transform of the loss function, called the signal-to-noise (S/N) ratio. The S/N ratio is a concurrent statistic: one that is able to look simultaneously at two characteristics of a distribution & roll them into a single number or figure of merit. The S/N ratio combines both parameters (the mean level of the quality characteristic & the variance around this mean) into a single metric. A high value of S/N implies that the signal is much higher than the random effects of the noise factors. Process operation consistent with the highest S/N ratio always yields optimum quality with minimum variation.
(a) Lower the better (LB)
Performance characteristics whose values are preferred to be low, such as surface roughness & cutting forces, are evaluated using this approach. The following equation is used to calculate the S/N ratio for an LB-type characteristic:
(S/N)LB = -10 log [(1/R) Σ yi^2]  (sum over i = 1 to R)  -------------------- (2)
where
yi = value of the characteristic at observation i
R = number of repetitions in a trial
Alternately,
(S/N)LB = -10 log (MSDLB)
where
MSDLB = (y1^2 + y2^2 + y3^2 + ... + yR^2)/R
Here the target value m = 0.
(b) Higher the better (HB)
Performance characteristics whose values are preferred to be high, such as tool life & material removal rate, are evaluated using this approach. The following equation is used to calculate the S/N ratio for an HB-type characteristic:
(S/N)HB = -10 log [(1/R) Σ (1/yi^2)]  (sum over i = 1 to R)  -------------------- (3)
where
yi = value of the characteristic at observation i
R = number of repetitions in a trial
Alternately,
(S/N)HB = -10 log (MSDHB)
where
MSDHB = (1/y1^2 + 1/y2^2 + 1/y3^2 + ... + 1/yR^2)/R
Here the unstated target value of 1/y is zero.
(c) Nominal the best (NB)
Performance characteristics whose values are preferred to be at a nominal value, such as dimensions, are evaluated using this approach. The following equation is used to calculate the S/N ratio for an NB-type characteristic:
(S/N)NB = -10 log [(1/R) Σ (yi - y0)^2]  (sum over i = 1 to R)  -------------------- (4)
where
yi = value of the characteristic at observation i
R = number of repetitions in a trial
y0 = nominal value of the characteristic
Alternately,
(S/N)NB = -10 log (MSDNB)
where
MSDNB = [(y1 - y0)^2 + (y2 - y0)^2 + (y3 - y0)^2 + ... + (yR - y0)^2]/R
Here the target value is m = y0.
The mean square deviation (MSD) is a statistical quantity that reflects the deviation from the target value. The expressions for MSD differ for the different quality characteristics. For nominal the best (NB), the standard definition of MSD is used, while for the others a slightly modified definition is used: for lower the better (LB), the unstated target value is zero, & for higher the better (HB) the inverse of each large value becomes a small value, so again the unstated target value is zero. Thus for all three expressions, the smallest magnitude of MSD is sought. A minimal computational sketch of all three ratios follows.
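As a minimal sketch (in Python, with made-up repetition data), the three MSD forms & the corresponding S/N ratios of equations (2)-(4) can be computed as follows:

    import math

    def sn_lb(ys):
        # Lower the better: MSD about an unstated target of zero
        msd = sum(y ** 2 for y in ys) / len(ys)
        return -10 * math.log10(msd)

    def sn_hb(ys):
        # Higher the better: MSD of the reciprocals about zero
        msd = sum(1 / y ** 2 for y in ys) / len(ys)
        return -10 * math.log10(msd)

    def sn_nb(ys, y0):
        # Nominal the best: MSD about the nominal value y0
        msd = sum((y - y0) ** 2 for y in ys) / len(ys)
        return -10 * math.log10(msd)

    # Made-up repetitions from one trial condition (illustrative only)
    roughness = [1.6, 1.4, 1.5]        # LB, e.g. surface roughness
    tool_life = [42.0, 45.0, 40.0]     # HB, e.g. tool life
    diameter = [25.02, 24.98, 25.01]   # NB, e.g. a dimension with y0 = 25.0
    print(sn_lb(roughness), sn_hb(tool_life), sn_nb(diameter, 25.0))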
Relationship between S/N Ratio & Loss Function
A single-sided quadratic loss function has its minimum loss at the zero value of the desired characteristic: as the value of y increases, the loss grows, & since loss is to be minimized, the target for y in this situation is zero. If m = 0, the basic loss-function equation becomes
L(y) = k y^2
The loss may be generalized by taking k = 1, & the expected value of the loss may be found by summing the losses over all R samples taken from the population & dividing by R. This gives the following expression:
Expected loss (EL) = (Σ yi^2)/R  (sum over i = 1 to R)  -------------------- (5)
The above expression is a figure of demerit; negating it produces a positive quality (merit) function. This is the thought process that goes into the creation of the S/N ratio from the basic quadratic loss function. Taguchi adds the final touch to this transformed loss function by taking the logarithm (base 10) of the expected loss & multiplying by -10 to put the metric into decibel terminology. The final expression for the lower-the-better S/N ratio then takes the form of equation (2). The same thought process is followed in the creation of the other S/N ratios.
3.3 Steps in Experimental Design & Analysis
The Taguchi experimental design & analysis flow diagrams are shown in Figs. 12 & 13. The important steps are discussed below:
1. Selection of factors and/or interactions to be evaluated.
2. Selection of the number of levels for the factors.
3. Selection of the appropriate orthogonal array.
4. Assignment of factors and/or interactions to columns.
5. Conduct tests.
6. Analyze results.
7. Confirmation experiment.
1. Selection of factors and/or interactions to be evaluated
The decision about which parameters to investigate depends upon the product or process performance characteristics or responses of interest. Taguchi suggests several methods for determining which parameters to include in the experiment:
Brainstorming:
This is a process that brings together the people associated with the product/process or its performance problems, with the objective of soliciting suggestions or ideas, for instance on which factors should be studied to improve performance. Before brainstorming begins, the leader must ensure that the participants understand that the objective is the identification of potentially influential factors rather than solving the problem.
Flowcharting:
This is the next useful approach, particularly for determining factors that might influence a process. A flow chart adds structure to the thought process, thus avoiding the possible omission of potentially significant factors.
Consider, for instance, the casting process:

Fig. 19: Flow chart for the casting process — pour molten metal → cool in mould → shake out casting → air cool → shot blasting → good casting.

Process Step           Factors Identified
Pour molten metal      Temperature of metal, speed of pouring, chemistry of metal
Cool in mould          Time in mould, ambient temperature of mould
Shake out casting      Intensity of vibration, time of vibration
Air cool               Ambient temperature, rate of air flow
Shot blast             Intensity of shot blast, time of shot blast
Cause-effect diagram:
This is perhaps the most comprehensive tool, enabling one to speculate systematically about, record & clarify the potential causes that might lead to performance deviation or poor quality. The development of the cause-effect diagram begins with a statement of the basic effect of interest. It is a systematic listing of causes in the form of a spine with branches (a fish-bone), with the causes written on the left-hand side of the diagram & the effect written on the right-hand side. The cause-effect diagram can also be drawn by thinking about the broad categories of causes: material, machinery & equipment, operating methods, operator actions & the environment. The participants should add factors, based on their knowledge, by repeatedly asking "why" about the effect until the diagram appears to include all causes that could be regarded as possible root causes.
2. Selection of the number of levels for the factors
One can select two or three levels for the factors under study. Three levels are considered more appropriate, since a non-linear relationship between the process variables & the performance characteristics can only be revealed if more than two levels of the process parameters are taken. In the present case study, three levels of the process variables have been selected.
3. Selection of the orthogonal array (OA)
The selection of which orthogonal array to use depends upon:
1. The number of factors & interactions of interest.
2. The number of levels for the factors of interest.
These items determine the total degrees of freedom required for the entire experiment. The degrees of freedom for a factor equal its number of levels minus one:
fA = kA - 1
where kA is the number of levels of factor A. The degrees of freedom for an interaction are the product of the degrees of freedom of the interacting factors:
fAxB = fA x fB
The total degrees of freedom in the experiment are the sum of the degrees of freedom of all the factors & interactions.
The basic kinds of OAs developed by Taguchi are two-level & three-level arrays. The standard arrays are:
(a) Two-level arrays: L4, L8, L16, L32
(b) Three-level arrays: L9, L27
When a particular OA is selected for an experiment, the following inequality must be satisfied:
fLN ≥ total degrees of freedom required for the parameters & interactions
where fLN is the total degrees of freedom of the OA (an LN array, having N trials, provides N - 1 degrees of freedom). Depending upon the number of levels of the parameters & the total degrees of freedom required for the experiment, a suitable orthogonal array is selected.
The number of levels of the factors determines whether a two-level or a three-level orthogonal array should be selected. If the factors have two levels, an array from Appendix B should be chosen, & if the factors have three levels, an array from Appendix C should be chosen. If some factors have two levels & some three, whichever is predominant indicates which kind of orthogonal array to select. Once the decision is made between a two-level & a three-level OA, the number of trials for that array must provide an adequate total number of degrees of freedom. Many times the required degrees of freedom will fall between the degrees of freedom provided by two of the orthogonal arrays; the next larger orthogonal array must then be chosen. A small computational sketch of this selection follows.
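As a minimal sketch (in Python, for a hypothetical experiment with four two-level factors & two interactions of interest), the required degrees of freedom & the smallest adequate standard array can be found as follows:

    # Hypothetical factors (name -> number of levels) & interactions
    factor_levels = {"A": 2, "B": 2, "C": 2, "D": 2}
    interactions = [("A", "B"), ("A", "C")]

    # f_A = k_A - 1 per factor; f_AxB = f_A * f_B per interaction
    dof = {f: k - 1 for f, k in factor_levels.items()}
    dof_total = sum(dof.values())
    dof_total += sum(dof[a] * dof[b] for a, b in interactions)

    # A standard two-level L_N array provides N - 1 degrees of freedom
    standard_arrays = {"L4": 4, "L8": 8, "L16": 16, "L32": 32}
    chosen = next(n for n, t in standard_arrays.items() if t - 1 >= dof_total)
    print(f"Required DOF = {dof_total}, smallest adequate array = {chosen}")

Here the required total is 6 degrees of freedom, so the L8 array (7 DOF) is chosen.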
4. Assignment of parameters & interactions to the orthogonal array
The orthogonal arrays have several columns available for the assignment of parameters, & some columns can subsequently estimate the effects of interactions of these parameters. Taguchi has provided two tools to aid in the assignment of parameters & interactions to arrays:
(a) Linear graphs
(b) Triangular tables
Each orthogonal array has a particular set of linear graphs & a triangular table associated with it. The linear graphs indicate the various columns to which parameters may be assigned & the columns which subsequently evaluate the interactions of these parameters. The triangular table contains all the possible interactions between pairs of columns. Using the linear graphs and/or the triangular table of the selected orthogonal array, the parameters & interactions are assigned to the columns of the array. A small sketch of such a lookup follows.
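As a small sketch (in Python), the triangular table of Table 11 can be encoded so that the interaction column for any pair of assigned L8 columns is looked up automatically; the factor-to-column assignment used at the end is only an example:

    # Interaction column for each pair of L8 columns (from Table 11)
    L8_INTERACTION = {
        (1, 2): 3, (1, 3): 2, (1, 4): 5, (1, 5): 4, (1, 6): 7, (1, 7): 6,
        (2, 3): 1, (2, 4): 6, (2, 5): 7, (2, 6): 4, (2, 7): 5,
        (3, 4): 7, (3, 5): 6, (3, 6): 5, (3, 7): 4,
        (4, 5): 1, (4, 6): 2, (4, 7): 3,
        (5, 6): 3, (5, 7): 2,
        (6, 7): 1,
    }

    def interaction_column(i, j):
        return L8_INTERACTION[(min(i, j), max(i, j))]

    # Example: factor A -> column 1, B -> column 2, C -> column 4,
    # so A x B lands in column 3 & A x C in column 5
    print(interaction_column(1, 2), interaction_column(1, 4))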
SELECTION OF OUTER ARRAY:
Taguchi separates the factors (parameters) into two main groups:
1. Controllable factors
2. Uncontrollable factors
Controllable factors are those that can easily be controlled, whereas uncontrollable factors, also known as noise factors, are nuisance variables that are difficult, impossible or expensive to control. The noise factors are responsible for the performance variation of the process. Taguchi recommends the use of an outer array for the noise factors & an inner array for the controllable factors. If an outer array is used, the noise variation is forced into the experiment. Alternatively, the experiments at each trial condition of the inner array may simply be repeated, in which case the noise variation is unforced. The outer array, if used, need not be as complex as the inner array, because the outer array contains noise factors only, which are controlled only during the experiment.
5. Experiment & data collection
The experiment is conducted at each of the trial conditions of the inner array. Each experiment at a trial condition is either simply repeated or conducted according to the outer array used. A randomized trial order protects the experimenter from unknown & uncontrolled factors that may vary during the entire experiment & influence the results. Randomization can take many forms, but the three most widely used approaches (the first of which is sketched below) are:
1. Complete randomization
2. Simple repetition
3. Complete randomization within blocks
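As a minimal sketch (in Python), a completely randomized run order for the eight trials of an L8 inner array, each repeated three times, might be generated like this:

    import random

    trials = list(range(1, 9))   # trial numbers of an L8 inner array
    repetitions = 3

    # Complete randomization: all (trial, repetition) runs shuffled together
    runs = [(t, r) for t in trials for r in range(1, repetitions + 1)]
    random.shuffle(runs)
    for trial, rep in runs:
        print(f"Run trial {trial}, repetition {rep}")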
6. Data analysis
A number of methods have been suggested for analyzing the experimental data, such as the observation method, the ranking method, the column-effect method & interaction graphs. In the present case, however, the following methods have been used:
1. Plots of the average response curves
2. ANOVA for raw data
3. ANOVA for S/N data
The plot of the average responses at each level of a parameter indicates the trend; it is a pictorial representation of the effect of the parameter on the response. The change in the response characteristic with a change in the level of a parameter can easily be visualized from these curves (a computational sketch follows below). The S/N ratio is treated as a response of the experiment; it is a measure of the variation within a trial when the noise factors are present. A standard ANOVA can be conducted on the S/N ratio, which will identify the parameters significant for both mean & variation.
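As a minimal sketch (in Python, with the standard L8 matrix & made-up trial responses), the average response at each level of a column can be computed as follows:

    # Standard L8 array (rows = trials, columns = factors at levels 1/2)
    L8 = [
        [1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 2, 2, 2, 2],
        [1, 2, 2, 1, 1, 2, 2],
        [1, 2, 2, 2, 2, 1, 1],
        [2, 1, 2, 1, 2, 1, 2],
        [2, 1, 2, 2, 1, 2, 1],
        [2, 2, 1, 1, 2, 2, 1],
        [2, 2, 1, 2, 1, 1, 2],
    ]
    # Made-up mean responses of the eight trials (illustrative only)
    response = [4.2, 4.8, 5.1, 4.9, 6.0, 5.7, 5.2, 5.5]

    def average_response(column, level):
        # Average of the trials in which `column` is held at `level`
        ys = [response[i] for i, row in enumerate(L8) if row[column - 1] == level]
        return sum(ys) / len(ys)

    for col in (1, 2, 3):
        print(col, average_response(col, 1), average_response(col, 2))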
PARAMETER DESIGN STRATEGY
Parameter classification & selection of optimum levels:
When the ANOVA on the raw data (which identifies the control parameters that affect the average value) & on the S/N data (which identifies the control parameters that affect variation) are completed, the control parameters are classified into four classes:
Class 1: Parameters that affect both the average value & the variation (significant in both the raw-data ANOVA & the S/N ANOVA).
Class 2: Parameters that affect variation only (significant in the S/N ANOVA only).
Class 3: Parameters that affect the average value only (significant in the raw-data ANOVA only).
Class 4: Parameters that affect nothing (not significant in either ANOVA).
The parameter design strategy is to select the proper levels of the Class 1 & Class 2 parameters to reduce variation, & of the Class 3 parameters to adjust the average to the target value. Class 4 parameters may be set at the most economical levels, since nothing is affected.
PREDICTION OF MEAN (μ)
After determination of the optimum condition, the mean of the response (μ) at the optimum condition is predicted. This mean is estimated only from the significant parameters, which the ANOVA identifies. Suppose parameters A & B are significant & that A1B2 (A at its first level, B at its second level) is the optimum treatment condition. Then the mean at the optimum condition is estimated as:
μ = U + (A1 - U) + (B2 - U)
where
U = overall mean of the response
A1 = average value of the response at the first level of parameter A
B2 = average value of the response at the second level of parameter B
It may also happen that the prescribed combination of parameter levels is identical to one of those in the experiment. If this situation exists, then the most direct way to estimate the mean for that treatment condition is to average all the results of the trials that were set at those particular levels.
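As a minimal sketch (in Python, with made-up averages for the illustration above), the predicted mean is simply:

    # mu = U + (A1 - U) + (B2 - U), with illustrative numbers only:
    # U = overall mean; A1, B2 = level averages of the significant
    # parameters at their optimum levels
    U, A1, B2 = 5.18, 4.75, 5.40
    mu = U + (A1 - U) + (B2 - U)
    print(f"Predicted mean at the optimum condition = {mu:.2f}")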
DETERMINATION OF THE CONFIDENCE INTERVAL
The estimate of the mean (μ) is only a point estimate based on the average of the results obtained from the experiment. Statistically this provides a 50% chance of the true average being greater than μ & a 50% chance of it being less than μ. It is therefore customary to represent the value of a statistical parameter as a range within which it is likely to fall for a given level of confidence; this range is termed the confidence interval. In other words, the confidence interval is a maximum & minimum value between which the true average should fall at some stated percentage of confidence.
The following two types of confidence interval are suggested by Taguchi with regard to the estimated mean of the optimal treatment condition:
1. Around the estimated average of a treatment condition predicted from the experiment; this type is designated CIPOP (confidence interval for the population).
2. Around the estimated average of a treatment condition used in a confirmation experiment to verify the prediction; this type is designated CICE (confidence interval for the sample group).
The difference between CIPOP & CICE is that CIPOP is for the entire population, i.e. all parts ever made under the specified conditions, whereas CICE is only for a sample group made under the specified conditions. Because of the smaller size of the confirmation experiment relative to the entire population, CICE must be slightly wider. The expressions for computing the confidence intervals are (Ross 1996, Roy 1990):
CIPOP = √[Fα(1, fe) Ve / neff]
CICE = √[Fα(1, fe) Ve (1/neff + 1/R)]
where
Fα(1, fe) = the F-ratio at a confidence level of (1 - α) against DOF 1 & error DOF fe
Ve = error variance (from the ANOVA)
neff = N / (1 + total DOF associated with the items used in the estimate of the mean)
N = total number of results
R = sample size for the confirmation experiment
As R → ∞, 1/R → 0 & CICE = CIPOP; as R → 1, CICE becomes wider.
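As a minimal sketch (in Python with SciPy, using made-up ANOVA quantities), the two confidence intervals can be evaluated as follows:

    import math
    from scipy.stats import f

    # Made-up ANOVA quantities (illustrative only)
    alpha = 0.05        # risk; confidence level = 1 - alpha
    f_error = 18        # error degrees of freedom
    Ve = 0.042          # error variance from the ANOVA
    N = 27              # total number of results
    dof_estimate = 4    # total DOF of items used in estimating the mean
    R = 3               # confirmation-experiment sample size

    F = f.ppf(1 - alpha, 1, f_error)   # F-ratio at (1 - alpha), DOF (1, f_error)
    n_eff = N / (1 + dof_estimate)

    ci_pop = math.sqrt(F * Ve / n_eff)
    ci_ce = math.sqrt(F * Ve * (1 / n_eff + 1 / R))
    print(f"CI_POP = +/-{ci_pop:.3f}, CI_CE = +/-{ci_ce:.3f}")

As R grows, 1/R vanishes & CICE approaches CIPOP, as noted above.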
7. Confirmation experiment
The confirmation experiment is the final step in verifying the conclusions drawn from the previous round of experimentation. The optimum condition is set for the significant parameters & a selected number of tests is run under constant, specified conditions. The average of the confirmation-experiment results is compared with the average anticipated for the parameters & levels tested. The confirmation experiment is a crucial step & is highly recommended for verifying the experimental conclusions.
REFERENCES

1. Kackar, R.N., "Off-line Quality Control, Parameter Design & the Taguchi Method", Journal of Quality Technology, Vol. 17, No. 4, Oct. 1985, pp. 176-188.
2. Schonberger, R.J., "The Quality Concept: Still Evolving", National Productivity Review, Vol. 6, 1986-87.
3. Ostle, B., "Industrial Use of Statistical Test Design", Industrial Quality Control, Vol. 24, No. 1, July 1967, pp. 24-34.
4. Crosby, P.B., "Quality Control from A to Z", Industrial Quality Control, Vol. 20, Jan. 1964, pp. 4-16.
5. Roy, Ranjit K., A Primer on the Taguchi Method, Van Nostrand Reinhold, New York, 1990.
6. Freund, R.A., "Definitions & Basic Quality Concepts", Journal of Quality Technology, Vol. 17, No. 1, Jan. 1985, pp. 50-56.
7. Ross, Philip J., Taguchi Techniques for Quality Engineering, McGraw-Hill, New York, 1996.
8. Barker, T.B., "Quality Engineering by Design: Taguchi's Philosophy", Quality Progress, Dec. 1986, pp. 33-42.
9. Byrne, D.M. & Taguchi, S., "The Taguchi Approach to Parameter Design", Quality Progress, Dec. 1987, pp. 19-26.