Metrology & Quality Control


A Course Material on

METROLOGY AND QUALITY CONTROL


CODE – BMEF185T60

By

R.ELLAPPAN
ASSISTANT PROFESSOR
DEPARTMENT OF MECHANICAL ENGINEERING
SCSVMV Deemed to be University
KANCHIPURAM– 631 561

Course Code: BMEF185T60
Course Title: METROLOGY AND QUALITY CONTROL
L T P C: 3 0 0 3
UNIT - I BASICS OF METROLOGY 9
Definition of metrology - Objective of metrology - Precision and Accuracy - Sources of errors - Concept of
Repeatability, Sensitivity, Readability and Reliability – Linear measurements – types – Vernier caliper –
Micrometer – types - Vernier height gauges – depth gauges – Slip gauges – Angular measurements – Types -
Vernier and optical Bevel protractor - Sine Principle and Sine Bar - Optical Instruments for angular
measurement – Autocollimator - Angle Gauge.
UNIT – II COMPARATIVE MEASUREMENT 9
Comparators – Introduction – Characteristics and uses – types – mechanical – Optical – profile projector –
Electrical – pneumatic – Testing of straightness – Flatness – parallelism and circularity- Limit gauges – types-
Taylors principle - Snap gauges – plain plug gauges – progressive plug gauges - Ring gauges – Thread pitch
gauges - feeler gauges – radius gauges – engineers square and parallel – dial gauges – types – plunger type –
needle type – Magnetic V block.
UNIT - III CALIBRATION AND MEASURING MACHINES 9
Introduction – sensitivity – Range – standards – Traceability - Calibration of Vernier caliper – Micrometer –
Dial gauges – Measurement using surface roughness tester – Co-ordinate measuring machine – Types - Tool
makers microscope - Gear measurement – Gear tooth caliper – Circular pitch measuring machine – Parkinson
gear tester – shore hardness tester – Surface plates – bore gauges – Machine tool metrology for Lathe.
UNIT - IV QUALITY CONTROL FOR VARIABLES 9
Introduction, definition of quality – Facets of quality - basic concept of quality, definition of SQC, benefits and
limitation of SQC, Quality assurance - Concepts of Quality control - Quality cost-Variation in process- factors
– process capability – process capability studies and simple problems – Theory of control chart- uses of control
chart – Control chart for variables – X chart, R chart and (P & C) chart – six sigma concept – Elements of quality
costs.
UNIT - V PROCESS CONTROL FOR ATTRIBUTES 9
Control chart for proportion or fraction defectives – p chart and np chart – control chart for defects – C and U
charts, State of control and process out of control identification in charts – Acceptance sampling plan – Types
- O.C. curves – producer’s Risk and consumer’s Risk. AQL, LTPD, AOQL concepts-standard sampling plans
for AQL and LTPD- uses of standard sampling plans.
TEXT BOOK
1. R.K. JAIN, "Engineering Metrology", Khanna Publishers, 21st edition, 1984.
2. GRANT, EUGENE L., "Statistical Quality Control", Tata McGraw-Hill, 7th edition, 2005.
REFERENCES
1. MONOHAR MAHAJAN, “Statistical Quality Control”, Dhanpat Rai & Sons, 2001.
2. R.C.GUPTA, “Statistical Quality control”, Khanna Publishers, 9th edition, 1998.
3. BESTERFIELD D.H., “Quality Control”, Prentice Hall, 7th edition, 2003.
4. SHARMA S.C., “Inspection Quality Control and Reliability”, Khanna Publishers, 2002.

Sri Chandrasekharendra Saraswathi Viswa Mahavidyalaya University
(SCSVMV)
DEPARTMENT OF MECHANICAL ENGINEERING

METROLOGY AND QUALITY CONTROL


UNIT – I - BASICS OF METROLOGY

Introduction to Metrology
The word metrology is derived from two Greek words: 'metron', meaning measurement, and 'logos', meaning science. Metrology is the science of precision measurement. The engineer can
say it is the science of measurement of lengths and angles and all related quantities like width, depth,
diameter and straightness with high accuracy. Metrology demands pure knowledge of certain basic
mathematical and physical principles. The development of the industry largely depends on the
engineering metrology. Metrology is concerned with the establishment, reproduction and
conservation and transfer of units of measurements and their standards. Irrespective of the branch of
engineering, all engineers should know about various instruments and techniques.

Introduction to Measurement
Measurement is defined as the process of numerical evaluation of a dimension or the process of
comparison with standard measuring instruments. The elements of measuring system include the
instrumentation, calibration standards, environmental influence, human operator limitations and features
of the work-piece. The basic aim of measurement in industries is to check whether a component has
been manufactured to the requirement of a specification or not.

Types of Metrology
Legal Metrology
'Legal metrology' is that part of metrology which deals with units of measurement, methods of
measurement and measuring instruments, in relation to statutory technical and legal requirements. The
activities of the service of 'Legal Metrology' are:
(i) Control of measuring instruments;
(ii) Testing of prototypes/models of measuring instruments;
(iii) Examination of a measuring instrument to verify its conformity to the statutory requirements
etc.

Dynamic Metrology
'Dynamic metrology' is the technique of measuring small variations of a continuous nature.
The technique has proved very valuable, and a record of continuous measurement, over a surface, for
instance, has obvious advantages over individual measurements of an isolated character.

Deterministic metrology
Deterministic metrology is a new philosophy in which part measurement is replaced by process
measurement. The new techniques such as 3D error compensation by CNC (Computer Numerical
Control) systems and expert systems are applied, leading to fully adaptive control. This technology is
used for very high precision manufacturing machinery and control systems to achieve micro technology
and nanotechnology accuracies.
OBJECTIVES OF METROLOGY
Although the basic objective of a measurement is to provide the required accuracy at a minimum
cost, metrology in a modern engineering plant has further objectives, which are:
1. Complete evaluation of newly developed products.
2. Determination of the process capabilities and ensure that these are better than the relevant
component tolerances.
3. Determination of the measuring instrument capabilities and ensure that they are quite sufficient
for their respective measurements.
4. Minimizing the cost of inspection by effective and efficient use of available facilities.
5. Reducing the cost of rejects and rework through application of Statistical Quality Control
Techniques.
6. To standardize the measuring methods
7. To maintain the accuracies of measurement.
8. To prepare designs for all gauges and special inspection fixtures.
Need and Importance of Metrology
 The importance of the science of measurement as a tool for scientific research (by which accurate
and reliable information can be obtained) was emphasized by Galileo and Goethe. This is
essential for solving almost all technical problems in the field of engineering in general, and in
production engineering and experimental design in particular. The design engineer should not
only check his design from the point of view of strength or economical production, but he
should also keep in mind how the dimensions specified can be checked or measured.
Unfortunately, a considerable amount of engineering work is still being executed without
realizing the importance of inspection and quality control for improving the function of product
and achieving the economical production.
 Higher productivity and accuracy are called for by present manufacturing techniques. This
cannot be achieved unless the science of metrology is understood, introduced and applied in
industries.
 Improving the quality of production necessitates proportional improvement of the measuring
accuracy, and marking out of components before machining and the in-process and post process
control of the dimensional and geometrical accuracies of the product. Proper gauges should be
designed and used for rapid and effective inspection. Also automation and automatic control,
which are the modern trends for future developments, are based on measurement.

METHODS OF MEASUREMENTS
These are the methods of comparison used in measurement process. In precision measurement
various methods of measurement are adopted depending upon the accuracy required and the amount of
permissible error.

The methods of measurement can be classified as:
1. Direct method
2. Indirect method
3. Absolute or Fundamental method
4. Comparative method
5. Transposition method
6. Coincidence method
7. Deflection method
8. Complementary method
9. Contact method
10. Contactless method

Direct method of measurement:


This is a simple method of measurement, in which the value of the quantity to be measured is
obtained directly, without any calculation. Examples include measurements using scales, vernier callipers,
micrometers, bevel protractors, etc. This method is most widely used in production. It is not
very accurate, because it depends on the limitations of human judgement.

Indirect method of measurement:


In indirect method the value of quantity to be measured is obtained by measuring other quantities
which are functionally related to the required value. E.g. Angle measurement by sine bar, measurement
of screw pitch diameter by three wire method etc.
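As a sketch of the sine-bar case mentioned above (assuming a bar whose roller-centre distance L is known and a slip-gauge stack of height h under one roller), the angle follows indirectly from sin θ = h/L:

```python
import math

def sine_bar_angle(gauge_height_mm, centre_distance_mm):
    """Indirectly measured angle (degrees) from a sine bar: sin(theta) = h / L."""
    return math.degrees(math.asin(gauge_height_mm / centre_distance_mm))

# A 100 mm sine bar with a 50 mm slip-gauge stack sets a 30 degree angle.
angle_deg = sine_bar_angle(50.0, 100.0)
```

The quantity actually measured is the gauge height; the angle is computed from it, which is what makes the method indirect.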

Absolute or Fundamental method:


It is based on the measurement of the base quantities used to define the quantity. For example,
measuring a quantity directly in accordance with the definition of that quantity, or measuring a quantity
indirectly by direct measurement of the quantities linked with the definition of the quantity to be
measured.
Comparative method:
In this method the value of the quantity to be measured is compared with known value of the
same quantity or other quantity practically related to it. So, in this method only the deviations from a
master gauge are determined, e.g., dial indicators, or other comparators.

Transposition method:
It is a method of measurement by direct comparison in which the value of the quantity measured
is first balanced by an initial known value A of the same quantity, and then the value of the quantity
measured is put in place of this known value and is balanced again by another known value B. If
the position of the element indicating equilibrium is the same in both cases, the value of the quantity
to be measured is √(A·B). For example, determination of a mass by means of a balance and known weights,
using the Gauss method of double weighing.
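A numerical sketch of the transposition idea (the readings below are hypothetical): if known weights A and B balance the unknown mass in the two pan positions, any unequal-arm error cancels in the geometric mean √(A·B):

```python
import math

def gauss_double_weighing(weight_a, weight_b):
    """Mass by transposition (Gauss double weighing): sqrt(A * B)."""
    return math.sqrt(weight_a * weight_b)

# With unequal balance arms the two readings straddle the true mass;
# the geometric mean removes the arm-length error.
mass_g = gauss_double_weighing(100.4, 99.6)  # hypothetical readings in grams
```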

Coincidence method:
It is a differential method of measurement in which a very small difference between the
value of the quantity to be measured and the reference is determined by the observation of the
coincidence of certain lines or signals. For example, measurement with a vernier calliper or micrometer.
Deflection method:
In this method the value of the quantity to be measured is directly indicated by a deflection of a
pointer on a calibrated scale.
Complementary method:
In this method the value of the quantity to be measured is combined with a known value of the
same quantity. The combination is so adjusted that the sum of these two values is equal to predetermined
comparison value. For example, determination of the volume of a solid by liquid displacement.
Method of measurement by substitution:
It is a method of direct comparison in which the value of a quantity to be measured is
replaced by a known value of the same quantity, so selected that the effects produced in the indicating
device by these two values are the same.
Method of null measurement:
It is a method of differential measurement. In this method the difference between the value of the
quantity to be measured and the known value of the same quantity with which it is compared is brought
to zero.

GENERALIZED MEASUREMENT SYSTEM


A measuring system exists to provide information about the physical value of some variable
being measured. In simple cases, the system can consist of only a single unit that gives an output
reading or signal according to the magnitude of the unknown variable applied to it. However, in more
complex measurement situations, a measuring system consists of several separate elements, as shown in
Figure 1.1.

Table 1.1 Physical quantities and their units

Standards
The term standard is used to denote universally accepted specifications for devices,
components or processes which ensure conformity and interchangeability throughout a particular
industry. A standard provides a reference for assigning a numerical value to a measured quantity.
Each basic measurable quantity has associated with it an ultimate standard. Working standards are those
used in conjunction with the various measurement-making instruments.
The National Institute of Standards and Technology (NIST), formerly called the National Bureau
of Standards (NBS), was established by an act of Congress in 1901; the need for such a body had
been noted by the founders of the Constitution. In order to maintain accuracy, standards in a vast industrial
complex must be traceable to a single source, which may be the national standards.
The following is the generalization of echelons of standards in the national measurement system.
1. Calibration standards
2. Metrology standards
3. National standards
Calibration standards: Working standards of industrial or governmental laboratories.
Metrology standards: Reference standards of industrial or Governmental laboratories.
National standards: These include the prototypes and natural phenomena of SI (Système International),
the worldwide system of weights and measures standards. The application of precise measurement has
increased so much that a single national laboratory cannot perform directly all the calibrations and
standardization required by a large country with high technical development. This has led to the
establishment of a considerable number of standardizing laboratories in industry and in various other
areas.

Classification of Standards
To maintain accuracy and interchangeability it is necessary that standards be traceable to a single
source, usually the National Standards of the country, which are further linked to International
Standards. The accuracy of National Standards is transferred to working standards through a chain
of intermediate standards, in the manner given below.
 National Standards
 National Reference Standards
 Working Standards
 Plant Laboratory Reference Standards
 Plant Laboratory Working Standards
 Shop Floor Standards
Evidently, there is degradation of accuracy in passing from the defining standards to the shop floor
standards. The accuracy of a particular standard depends on a combination of the number of times it has
been compared with a standard in a higher echelon, the frequency of such comparisons, the care with
which they were done, and the stability of the particular standard itself.
Accuracy of Measurements
The purpose of measurement is to determine the true dimensions of a part. But no measurement
can be made absolutely accurate. There is always some error.
The amount of error depends upon the following factors:
 The accuracy and design of the measuring instrument
 The skill of the operator
 Method adopted for measurement
 Temperature variations
 Elastic deformation of the part or instrument etc.
Thus, the true dimension of the part cannot be determined exactly, but can only be approximated.
The agreement of the measured value with the true value of the measured quantity is called accuracy. If
the measurement of dimensions of a part approximates very closely to the true value of that dimension,
it is said to be accurate. Thus the term accuracy denotes the closeness of the measured value with
the true value. The difference between the measured value and the true value is the error of measurement.
The smaller the error, the greater the accuracy.
Precision
The terms precision and accuracy are used in connection with the performance of the instrument.
Precision is the repeatability of the measuring process. It refers to the group of measurements for the
same characteristics taken under identical conditions. It indicates to what extent the identically
performed measurements agree with each other. If the instrument is not precise it will give different
(widely varying) results for the same dimension when measured again and again. The set of observations
will scatter about the mean. The scatter of these measurements is designated σ, the standard deviation,
and is used as an index of precision: the smaller the scatter, and hence the lower the value of σ, the
more precise the instrument.
Accuracy
Accuracy is the degree to which the measured value of the quality characteristic
agrees with the true value. The difference between the true value and the measured value is known as
error of measurement. It is practically difficult to measure exactly the true value and therefore a set of
observations is made whose mean value is taken as the true value of the quality measured.

Distinction between Precision and Accuracy
Accuracy is very often confused with precision though much different. The distinction between
the precision and accuracy will become clear by the following example. Several measurements are made
on a component by different types of instruments (A, B and C respectively) and the results are plotted.
In any set of measurements, the individual measurements are scattered about the mean, and the precision
signifies how well the various measurements performed by same instrument on the same quality
characteristic agree with each other. The difference between the mean of set of readings on the same
quality characteristic and the true value is called as error. Less the error more accurate is the instrument.
The figure shows that instrument A is precise, since the results of a number of measurements are close
to the average value. However, there is a large difference (error) between the true value and the average
value, hence it is not accurate. The readings taken by instrument B are scattered widely about the
average value, and hence it is not precise; but it is accurate, as there is only a small difference between
the average value and the true value.
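The distinction can be checked numerically. The sketch below (with hypothetical readings) uses the standard deviation σ as the index of precision, and the difference between the mean reading and the true value as the error that governs accuracy:

```python
import statistics

TRUE_VALUE = 25.000  # mm, assumed true dimension

# Hypothetical repeated readings from two instruments
readings_a = [25.08, 25.09, 25.07, 25.08, 25.09]  # tightly grouped but offset
readings_b = [24.95, 25.06, 24.98, 25.04, 24.97]  # scattered but centred

def precision_sigma(readings):
    """Index of precision: sample standard deviation of the readings."""
    return statistics.stdev(readings)

def mean_error(readings, true_value):
    """Error: difference between the mean reading and the true value."""
    return statistics.mean(readings) - true_value

# Instrument A is precise (small sigma) but not accurate (large mean error);
# instrument B is accurate (small mean error) but not precise (large sigma).
```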

Factors affecting the accuracy of the Measuring System


The basic components of an accuracy evaluation are the five elements of a measuring system such as:
• Factors affecting the calibration standards.
• Factors affecting the work piece.
• Factors affecting the inherent characteristics of the instrument.

• Factors affecting the person who carries out the measurements.


• Factors affecting the environment.

1. Factors affecting the Standard: It may be affected by:
 Coefficient of thermal expansion
 Calibration interval
 Stability with time
 Elastic properties
 Geometric compatibility

2. Factors affecting the Work piece: These are:


 Cleanliness
 Surface finish, waviness, scratch, surface defects etc.,
 Hidden geometry
 Elastic properties; adequate datum on the work piece
 Arrangement of supporting work piece
 Thermal equalization etc.
3. Factors affecting the inherent characteristics of Instrument:
 Adequate amplification for accuracy objective
 Scale error
 Effect of friction, backlash, hysteresis, zero drift error
 Deformation in handling or use, when heavy work pieces are measured
 Calibration errors
 Mechanical parts (slides, guide ways or moving elements)
 Repeatability and readability
 Contact geometry for both work piece and standard.
4. Factors affecting person:
 Training, skill
 Sense of precision appreciation
 Ability to select measuring instruments and standards
 Sensible appreciation of measuring cost
 Attitude towards personal accuracy achievements
 Planning measurement techniques for minimum cost, consistent with precision requirements
etc.

5. Factors affecting Environment:
 Temperature, humidity etc.
 Clean surrounding and minimum vibration enhance precision
 Adequate illumination
 Temperature equalization between standard, work piece, and instrument
 Thermal expansion effects due to heat radiation from lights
 Heating elements, sunlight and people
 Manual handling may also introduce thermal expansion.
Higher accuracy can be achieved only if all the sources of error due to the above five elements in
the measuring system are analyzed and steps are taken to eliminate them. The five basic
metrology elements can be remembered, for convenient reference, by the acronym SWIPE:

S – Standard, W – Workpiece, I – Instrument, P – Person, E – Environment

SENSITIVITY

Sensitivity may be defined as the rate of displacement of the indicating device of an instrument,
with respect to the measured quantity. In other words, sensitivity of an instrument is the ratio of the
scale spacing to the scale division value. For example, if on a dial indicator, the scale spacing is 1.0 mm
and the scale division value is 0.01 mm, then sensitivity is 100. It is also called as amplification factor
or gearing ratio. If we now consider sensitivity over the full range of instrument readings with respect
to the measured quantities, as shown in Figure, the sensitivity at any value of y is dx/dy, where dx and
dy are increments of x and y taken over the full instrument scale; that is, the sensitivity is the slope of
the curve at any value of y.

The sensitivity may be constant or variable along the scale. In the first case we get linear
transmission, and in the second, non-linear transmission. Sensitivity refers to the ability of a
measuring device to detect small differences in the quantity being measured. Highly sensitive
instruments may drift due to thermal or other effects, and their indications may be less repeatable
or less precise than those of an instrument of lower sensitivity.
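The dial-indicator example above can be written directly as the ratio of scale spacing to scale division value:

```python
def sensitivity(scale_spacing_mm, division_value_mm):
    """Sensitivity (amplification factor) = scale spacing / division value."""
    return scale_spacing_mm / division_value_mm

# Dial indicator from the text: 1.0 mm scale spacing, 0.01 mm per division,
# giving a sensitivity (amplification factor) of 100.
k = sensitivity(1.0, 0.01)
```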
Readability
Readability refers to the ease with which the readings of a measuring instrument can be read. It
is the susceptibility of a measuring device to have its indications converted into a meaningful number.
Fine and widely spaced graduation lines ordinarily improve readability. If the graduation lines are very
finely spaced, the scale will be readable with a microscope, but readability with the naked eye will be
poor. To make micrometers more readable they are provided with a vernier scale. Readability can also
be improved by using magnifying devices.
Calibration
The calibration of any measuring instrument is necessary to measure the quantity in terms of
standard unit. It is the process of framing the scale of the instrument by applying some standardized
signals. Calibration is a pre-measurement process, generally carried out by manufacturers. It is carried
out by making adjustments such that the read out device produces zero output for zero measured input.
Similarly, it should display an output equivalent to the known measured input near the full scale
input value. The accuracy of the instrument depends upon the calibration. Constant use of instruments
affects their accuracy. If the accuracy is to be maintained, the instruments must be checked and
recalibrated if necessary. The schedule of such calibration depends upon the severity of use,
environmental conditions, accuracy of measurement required, etc. As far as possible, calibration should
be performed under environmental conditions which are very close to the conditions under which
actual measurements are carried out. If the output of a measuring system is linear and repeatable, it
can be easily calibrated.
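The zero and full-scale adjustment described above amounts to a two-point linear calibration. A minimal sketch, assuming a linear read-out (the raw values below are hypothetical):

```python
def two_point_calibration(zero_raw, fullscale_raw, fullscale_true):
    """Return a function mapping raw read-out values to calibrated values,
    forcing zero output for zero input and the known value at full scale."""
    gain = fullscale_true / (fullscale_raw - zero_raw)
    def calibrated(raw):
        return (raw - zero_raw) * gain
    return calibrated

# Hypothetical read-out with a 0.2-unit offset; it shows 10.2 raw units at a
# known full-scale input of 25.0 mm.
cal = two_point_calibration(0.2, 10.2, 25.0)
```

Once the zero and span are fixed, any intermediate raw reading maps linearly onto the calibrated scale.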
Repeatability
It is the ability of the measuring instrument to repeat the same results when measurements of the
same quantity are carried out by the same observer, with the same instrument, under the same
conditions, without any change in location, without change in the method of measurement, and
within short intervals of time. It may be expressed quantitatively in terms of the dispersion of
the results.

Reproducibility
Reproducibility is the consistency of pattern of variation in measurement i.e. closeness of the
agreement between the results of measurements of the same quantity, when individual measurements
are carried out:
 by different observers
 by different methods
 using different instruments
 Under different conditions, locations, times etc.
STATIC AND DYNAMIC RESPONSE

The static characteristics of measuring instruments are concerned only with the steady-state
reading that the instrument settles down to, such as accuracy of the reading.

The dynamic characteristics of a measuring instrument describe its behavior between the time a
measured quantity changes value and the time when the instrument output attains a steady value in
response. As with static characteristics, any values for dynamic characteristics quoted in instrument data
sheets only apply when the instrument is used under specified environmental conditions. Outside these
calibration conditions, some variation in the dynamic parameters can be expected.

In any linear, time-invariant measuring system, the following general relation can be written
between input and output for time t > 0:

    an(d^n qo/dt^n) + ... + a1(dqo/dt) + a0 qo = bm(d^m qi/dt^m) + ... + b1(dqi/dt) + b0 qi      (2)

where qi is the measured quantity, qo is the output reading, and a0 ... an, b0 ... bm are constants. If
we limit consideration to step changes in the measured quantity only, all derivatives of qi vanish and
Equation (2) reduces to

    an(d^n qo/dt^n) + ... + a1(dqo/dt) + a0 qo = b0 qi

Zero-Order Instrument

If all the coefficients a1 ... an other than a0 in Equation (2) are assumed zero, then

    a0 qo = b0 qi,   i.e.   qo = (b0/a0) qi = K qi      (3)

where K is a constant known as the instrument sensitivity as defined earlier. Any instrument that
behaves according to Equation (3) is said to be of a zero-order type.

Following a step change in the measured quantity at time t, the instrument output moves
immediately to a new value at the same time instant t, as shown in Figure. A potentiometer, which
measures motion, is a good example of such an instrument: the output voltage changes
instantaneously as the slider is displaced along the potentiometer track.

First-Order Instrument

If all the coefficients a2 ... an except for a0 and a1 are assumed zero in Equation (2), then

    a1(dqo/dt) + a0 qo = b0 qi      (4)

Any instrument that behaves according to Equation (4) is known as a first-order instrument. If d/dt is
replaced by the D operator in Equation (4), we get

    (a1 D + a0) qo = b0 qi,   i.e.   qo = (b0/a0) qi / (1 + (a1/a0) D)      (5)

Defining K = b0/a0 as the static sensitivity and τ = a1/a0 as the time constant of the system, Equation (5)
becomes

    qo = K qi / (1 + τD)      (6)
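Solving Equation (6) for a step input gives the familiar exponential rise qo(t) = K·qi·(1 − e^(−t/τ)). A small numerical sketch (the thermometer values are hypothetical):

```python
import math

def first_order_step(K, tau, qi, t):
    """Step response of a first-order instrument: K*qi*(1 - exp(-t/tau))."""
    return K * qi * (1.0 - math.exp(-t / tau))

# Hypothetical thermometer (K = 1, time constant 2 s) plunged into a 100 degC
# bath: after one time constant it indicates about 63.2% of the step.
reading_at_tau = first_order_step(1.0, 2.0, 100.0, 2.0)
```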
Second-Order Instrument

If all coefficients a3 ... an other than a0, a1, and a2 in Equation (2) are assumed zero, then we get

    a2(d²qo/dt²) + a1(dqo/dt) + a0 qo = b0 qi      (7)

Applying the D operator gives (a2 D² + a1 D + a0) qo = b0 qi, i.e.

    qo/qi = b0 / (a0 + a1 D + a2 D²)      (8)

Defining K = b0/a0 as the static sensitivity, ω = √(a0/a2) as the undamped natural frequency and
ξ = a1/(2√(a0·a2)) as the damping ratio, Equation (8) can be written as

    qo/qi = K / (D²/ω² + 2ξD/ω + 1)      (9)

This is the standard equation for a second-order system, and any instrument whose response can
be described by it is known as a second-order instrument. If Equation (9) is solved analytically, the shape
of the step response obtained depends on the value of the damping ratio ξ. The output
responses of a second-order instrument for various values of ξ following a step change in the value of
the measured quantity at time t are shown in Figure. Commercial second-order instruments, of which
the accelerometer is a common example, are generally designed to have a damping ratio ξ somewhere
in the range of 0.6–0.8.
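For an underdamped instrument (ξ < 1) the step response of Equation (9) has a closed form, which makes the effect of the damping ratio easy to explore numerically. A sketch with assumed values (natural frequency 10 rad/s, ξ = 0.7, as for a typical accelerometer):

```python
import math

def second_order_step(K, wn, zeta, qi, t):
    """Underdamped (zeta < 1) step response of a second-order instrument."""
    wd = wn * math.sqrt(1.0 - zeta * zeta)      # damped natural frequency
    decay = math.exp(-zeta * wn * t)
    return K * qi * (1.0 - decay * (math.cos(wd * t)
                     + zeta / math.sqrt(1.0 - zeta * zeta) * math.sin(wd * t)))

# Sample the response over 5 s: with zeta = 0.7 the overshoot is only a few
# percent and the output settles quickly to the final value K*qi.
response = [second_order_step(1.0, 10.0, 0.7, 1.0, n / 100.0) for n in range(500)]
```

Re-running with ξ well below 0.6 shows pronounced ringing, which is why commercial designs favour the 0.6–0.8 range.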

ERRORS IN MEASUREMENTS
It is never possible to measure the true value of a dimension; there is always some error. The
error in measurement is the difference between the measured value and the true value of the measured
dimension.

Error in measurement = Measured value - True value


The error in measurement may be expressed or evaluated either as an absolute error or as a relative
error.

Absolute Error
True absolute error:
It is the algebraic difference between the result of measurement and the conventional true value of the
quantity measured.
Apparent absolute error:
If a series of measurements is made, then the algebraic difference between one of the results of
measurement and the arithmetical mean is known as the apparent absolute error.
Relative Error:
It is the quotient of the absolute error and the value of comparison used for the calculation of that
absolute error. This value of comparison may be the true value, the conventional true value or the
arithmetic mean of a series of measurements.
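These definitions are straightforward to express numerically (the gauge-block values below are hypothetical):

```python
def absolute_error(measured, true_value):
    """Absolute error: measured value minus the (conventional) true value."""
    return measured - true_value

def relative_error(measured, true_value):
    """Relative error: absolute error divided by the value of comparison."""
    return (measured - true_value) / true_value

# A 50.000 mm gauge block measured as 50.002 mm
abs_err = absolute_error(50.002, 50.000)   # +0.002 mm
rel_err = relative_error(50.002, 50.000)   # 4e-5, i.e. 0.004 %
```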

The accuracy of measurement, and hence the error depends upon so many factors, such as:
 calibration standard
 Work piece
 Instrument
 Person
 Environment etc
Types of Errors
1. Systematic Error
These errors include calibration errors, errors due to variation in atmospheric conditions,
variation in contact pressure, etc. If properly analyzed, these errors can be determined and reduced or
even eliminated; hence they are also called controllable errors. All systematic errors except personal
error can be controlled in magnitude and sense.
These errors result from a procedure that is consistent in action: they are repetitive in nature
and of constant and similar form.
2. Random Error
These errors are caused by variation in the position of the setting standard and the work-piece,
displacement of lever joints of instruments, backlash and friction. The specific cause, magnitude and
sense of these errors cannot be determined from knowledge of the measuring system or the conditions
of measurement. These errors are non-consistent, hence the name random errors.
3. Environmental Error
These errors are caused by the effect of surrounding temperature, pressure and humidity on the
measuring instrument. External factors like nuclear radiation, vibrations and magnetic fields also lead
to errors. Temperature plays an important role where high precision is required: for example, slip
gauges handled with bare hands pick up heat from the body while the work remains at 20°C, and a
300 mm length can then be in error by 5 microns, which is quite a considerable error. To avoid errors of
this kind, metrology laboratories and standard rooms worldwide are maintained at 20°C.
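The size of such temperature errors follows from the linear-expansion relation ΔL = α·L·ΔT. A sketch assuming steel slip gauges (α ≈ 11.5 × 10⁻⁶ per °C is an assumed handbook value); note that a rise of only about 1.5 °C on a 300 mm length already gives roughly 5 µm:

```python
def thermal_error_mm(length_mm, delta_t_degc, alpha_per_degc=11.5e-6):
    """Length error from thermal expansion: dL = alpha * L * dT.
    Default alpha is an assumed value for steel (~11.5e-6 per degC)."""
    return alpha_per_degc * length_mm * delta_t_degc

# A 300 mm steel gauge warmed ~1.5 degC above the 20 degC reference:
err_mm = thermal_error_mm(300.0, 1.5)   # about 0.005 mm (5 microns)
```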

Calibration
It is essential to calibrate an instrument so as to maintain its accuracy. When the instrument
cannot be calibrated as a whole, the error-producing properties of each component have to be taken
into account. Calibration is usually carried out by making adjustments such that when the instrument
has zero measured input it reads zero, and when it measures some dimension it reads that dimension
to its closest accurate value. It is important that calibration of any measuring system be performed
under environmental conditions as close as possible to those under which the actual measurements
are to be taken. In cases where the measuring and the sensing systems are different, it is very
difficult to calibrate the system.

Calibration is the process of checking the dimension and tolerances of a gauge, or the accuracy
of a measurement instrument by comparing it to the instrument/gauge that has been certified as a
standard of known accuracy. Calibration of an instrument is done over a period of time, which is decided
depending upon the usage of the instrument or on the materials of the parts from which it is made. The
dimensions and the tolerances of the instrument/gauge are checked so that we can come to whether the
instrument can be used again by calibrating it or is it wear out or deteriorated above the limit value.
If it is so then it is thrown out or it is scrapped. If the gauge or the instrument is frequently used, then
it will require more maintenance and frequent calibration. Calibration of instrument is done prior to its
use and afterwards to verify that it is within the tolerance limit or not. Certification is given by making
comparison between the instrument/gauge with the reference standard whose calibration is traceable
to accepted National standard.

INTRODUCTION TO DIMENSIONAL AND GEOMETRIC TOLERANCE


General Aspects
In the design and manufacture of engineering products a great deal of attention has to be paid
to the mating, assembly and fitting of various components. In the early days of mechanical
engineering during the nineteenth century, the majority of such components were actually mated
together, their dimensions being adjusted until the required type of fit was obtained. These methods
demanded craftsmanship of a high order and a great deal of very fine work was produced. Present
day standards of quantity production, interchangeability, and continuous assembly of many complex
components, could not exist under such a system, nor could many of the exacting design requirements
of modern machines be fulfilled without the knowledge that certain dimensions can be reproduced with
precision on any number of components.
Modern mechanical production engineering is based on a system of limits and fits, which while
not only itself ensuring the necessary accuracies of manufacture, forms a schedule or specifications to
which manufacturers can adhere.
In order that a system of limits and fits may be successful, following conditions must be fulfilled:
1. The range of sizes covered by the system must be sufficient for most purposes.
2. It must be based on some standards; so that everybody understands alike and a given dimension
has the same meaning at all places.
3. For any basic size it must be possible to select from a carefully designed range of fit the most
suitable one for a given application.
4. Each basic size of hole and shaft must have a range of tolerance values for each of the different
fits.
5. The system must provide for both unilateral and bilateral methods of applying the tolerance.
6. It must be possible for a manufacturer to use the system to apply either a hole- based or a shaft-
based system as his manufacturing requirements may need.
7. The system should cover work from high class tool and gauge work down to rough work where
wide limits of size are permissible.
Nominal Size and Basic Dimensions

Nominal size: A 'nominal size' is the size which is used for the purpose of general identification. Thus the
nominal size of a hole and shaft assembly is 60 mm, even though the basic size of the hole may be 60
mm and the basic size of the shaft 59.5 mm.
Basic dimension: A 'basic dimension' is the dimension arrived at purely from design considerations.
Since the ideal conditions for producing the basic dimension do not exist, it can be treated as the
theoretical or nominal size that can only be approximated. A study of the function of a machine part
reveals that it is unnecessary to attain perfection, because small variations in dimension can be
tolerated in the sizes of various parts. It is therefore general practice to specify a basic dimension
and to indicate by tolerances how much variation in the basic dimension can be tolerated without
affecting the functioning of the assembly into which the part will be fitted.

Definitions
The definitions given below are based on those given in IS: 919
Shaft: The term 'shaft' refers not only to the diameter of a circular shaft but to any external dimension
on a component.

Hole: This term refers not only to the diameter of a circular hole but to any internal dimension on a
component.

Basics of Fit
A fit or limit system consists of a series of tolerances arranged to suit a specific range of sizes and
functions, so that limits of size may be selected and given to mating components to ensure specific
classes of fit. This system may be arranged on the following basis:
1. Hole basis system
2. Shaft basis system.

Hole basis system:
'Hole basis system' is one in which the limits on the hole are kept constant and the variations necessary
to obtain the classes of fit are arranged by varying those on the shaft.

Shaft basis system:


'Shaft basis system' is one in which the limits on the shaft are kept constant and the variations necessary
to obtain the classes of fit are arranged by varying the limits on the holes. In present-day industrial
practice the hole basis system is preferred, because many holes are produced by standard tooling, for
example reamers and drills, whose size is not adjustable, whereas shaft sizes are readily varied about
the basic size by turning or grinding operations. The hole basis system therefore requires far fewer
reamers and other precision tools than a shaft basis system, in which, owing to the non-adjustable
nature of reamers, drills, etc., a great variety of tool sizes would be needed to produce the different
classes of holes for one class of shaft to obtain the different fits.
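The reasoning above can be illustrated with a short sketch: keeping the hole limits fixed (hole basis) and varying only the shaft limits changes the class of fit. The type of fit follows from the extreme clearances; all limit values below are hypothetical.

```python
# Classifying a fit from limit sizes (hypothetical hole-basis values, in mm).
def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    """Return 'clearance', 'interference' or 'transition' for the mating pair."""
    max_clearance = hole_max - shaft_min   # loosest possible assembly
    min_clearance = hole_min - shaft_max   # tightest possible assembly
    if min_clearance >= 0:
        return "clearance"      # even the tightest pair has a gap
    if max_clearance <= 0:
        return "interference"   # even the loosest pair interferes
    return "transition"         # may be either, depending on actual sizes

# Hole 40.000-40.025 kept constant (hole basis); only the shaft limits vary.
print(classify_fit(40.000, 40.025, 39.950, 39.975))  # clearance
print(classify_fit(40.000, 40.025, 40.040, 40.060))  # interference
print(classify_fit(40.000, 40.025, 39.990, 40.010))  # transition
```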

Systems of Specifying Tolerances

The tolerance or the error permitted in manufacturing a particular dimension may be allowed to vary
either on one side of the basic size or on either side of the basic size. Accordingly two systems of
specifying tolerances exist.
1. Unilateral system
2. Bilateral system.
In the unilateral system, tolerance is applied only in one direction, for example:
40.0 +0.04 / +0.02 or 40.0 −0.02 / −0.04
In the bilateral system of writing tolerances, a dimension is permitted to vary in two directions, for
example:
40.0 +0.02 / −0.04
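The limits of size implied by a toleranced dimension can be computed with a small helper; the values below reuse the 40.0 mm examples from the text.

```python
# Limits of size from a basic size and two deviations (values in mm).
def limits(basic, upper_dev, lower_dev):
    """Return (low limit, high limit) for a basic size and its two deviations."""
    lo = basic + min(upper_dev, lower_dev)
    hi = basic + max(upper_dev, lower_dev)
    return round(lo, 3), round(hi, 3)

print(limits(40.0, +0.04, +0.02))  # unilateral, both positive -> (40.02, 40.04)
print(limits(40.0, -0.02, -0.04))  # unilateral, both negative -> (39.96, 39.98)
print(limits(40.0, +0.02, -0.04))  # bilateral -> (39.96, 40.02)
```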
INTERCHANGEABILITY

Interchangeability is the principle applied to mating parts or components: parts are picked at random
and still comply with the stipulated specifications and functional requirements of the assembly. When
only a few assemblies are to be made, correct fits between parts are obtained by controlling sizes
during machining and by matching parts with their mates. The actual sizes may then vary from
assembly to assembly to such an extent that a given part fits only in its own assembly. Such a method
of manufacture takes more time and therefore increases cost, and it also creates problems when parts
need to be replaced. Modern production is based on the concept of
interchangeability. When one component assembles properly with any mating component, both being
chosen at random, then this is interchangeable manufacture. It is the uniformity of size of the
components produced which ensures interchangeability.

The advantages of interchangeability are as follows:

 The assembly of mating parts is easier, since any component picked from its lot will assemble
with any mating part from another lot without additional fitting or machining.
 It enhances the production rate.
 It facilitates the standardization of machine parts and manufacturing methods.
 It brings down the assembling cost drastically.
 Repairing of existing machines or products is simplified because component parts can be easily
replaced.
 Replacement of worn out parts is easy.

UNIT II - COMPARATIVE MEASUREMENTS

LINEAR MEASURING INSTRUMENTS

Linear measurement applies to measurement of lengths, diameter, heights and thickness


including external and internal measurements. Line measuring instruments have a series of accurately
spaced lines marked on them, e.g. a scale. The dimensions to be measured are aligned with the graduations
of the scale. Linear measuring instruments are designed either for line measurements or end
measurements. In end measuring instruments, the measurement is taken between two end surfaces as
in micrometers, slip gauges etc.
The instruments used for linear measurements can be classified as:
1. Direct measuring instruments
2. Indirect measuring instruments
The Direct measuring instruments are of two types:
1. Graduated
2. Non Graduated
The graduated instruments include rules, vernier calipers, vernier height gauges, vernier depth
gauges, micrometers, dial indicators etc.
The non graduated instruments include calipers, trammels, telescopic gauges, surface gauges,
straight edges, wire gauges, screw pitch gauges, radius gauges, thickness gauges, slip gauges etc.

They can also be classified as


1. Non precision instruments such as steel rule, calipers etc.,
2. Precision measuring instruments, such as vernier instruments, micrometers, dial gauges etc.

SCALES
 The most common tool for crude measurements is the scale (also known as a rule or ruler).
 Although plastic, wood and other materials are used for common scales, precision scales use
tempered steel alloys, with graduations scribed onto the surface.
 These are limited by the human eye. Basically they are used to compare two dimensions.
 The metric scales use decimal divisions, and the imperial scales use fractional divisions.
 Some scales only use the fine scale divisions at one end of the scale. It is advised that the end of
the scale not be used for measurement. This is because as they become worn with use, the end
of the scale will no longer be at a `zero' position.
 Instead the internal divisions of the scale should be used. Parallax error can be a factor when
making measurements with a scale.

CALIPERS
A caliper is an instrument used for measuring the distance between or over surfaces, or for comparing
dimensions of work pieces with standards such as plug gauges, graduated rules, etc. Calipers can be
difficult to use, and they require the operator to follow a few basic rules: do not force them, since they
bend easily and this invalidates the measurement. If measurements are made using calipers for
comparison, one operator should make all of the measurements (this keeps the 'feel' factor a minimal
error source). These instruments are very useful for hard-to-reach locations that normal measuring
instruments cannot reach. Obviously the added step in the measurement will significantly decrease
the accuracy.

VERNIER CALIPERS
The vernier instruments generally used in workshop and engineering metrology have
comparatively low accuracy. The line of measurement of such instruments does not coincide with the
line of scale. The accuracy therefore depends upon the straightness of the beam and the squareness of
the sliding jaw with respect to the beam. To ensure the squareness, the sliding jaw must be clamped
before taking the reading. The zero error must also be taken into consideration. Instruments are now
available with a measuring range up to one meter with a scale value of 0.1 or 0.2 mm.

Types of Vernier Calipers


According to Indian Standard IS: 3651-1974, three types of vernier calipers have been specified
to make external and internal measurements and are shown in figures respectively. All the three types
are made with one scale on the front of the beam for direct reading.
Type A: Vernier has jaws on both sides for external and internal measurements and a blade for depth
measurement

Type B: It is provided with jaws on one side for external and internal measurements.
Type C: It has jaws on both sides for making the measurement and for marking operations

Errors in Calipers
The degree of accuracy obtained in measurement depends greatly upon the condition of the jaws of the
calipers, so special attention is needed before proceeding with a measurement. Natural wear and warping
of the vernier caliper jaws should be checked frequently by closing the jaws tightly together and
verifying that the main and vernier scales read at the 0-0 point.

MICROMETERS
There are two types in it.
(i) Outside micrometer — to measure external dimensions.
(ii) Inside micrometer — to measure internal dimensions.
An outside micrometer is shown. It consists of two scales, main scale and thimble scale. While
the pitch of barrel screw is 0.5 mm the thimble has graduation of 0.01 mm. The least count of this
micrometer is 0.01 mm.
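The least-count arithmetic just stated can be written out directly; `micrometer_reading` is a hypothetical helper using the 0.5 mm pitch and 0.01 mm least count quoted above.

```python
# Micrometer reading sketch: 0.5 mm pitch, thimble with 50 divisions,
# giving the 0.01 mm least count quoted in the text.
PITCH = 0.5                      # mm of spindle travel per thimble revolution
DIVISIONS = 50                   # graduations on the thimble
LEAST_COUNT = PITCH / DIVISIONS  # 0.01 mm

def micrometer_reading(main_scale_mm, thimble_division):
    """Main-scale reading (including any visible half-mm) plus thimble divisions."""
    return round(main_scale_mm + thimble_division * LEAST_COUNT, 3)

print(LEAST_COUNT)                  # 0.01
print(micrometer_reading(7.5, 22))  # 7.5 + 22 * 0.01 = 7.72 mm
```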

The micrometer requires the use of an accurate screw thread as a means of obtaining a
measurement. The screw is attached to a spindle and is turned by movement of a thimble or ratchet at
the end. The barrel, which is attached to the frame, acts as a nut to engage the screw threads, which are
accurately made with a pitch of 0.5 mm. Each revolution of the thimble therefore advances the screw 0.5 mm.
On the barrel a datum line is graduated with two sets of division marks.

SLIP GAUGES
These may be used as reference standards for transferring the dimension of the unit of length
from the primary standard to gauge blocks of lower accuracy and for the verification and graduation
of measuring apparatus. They are rectangular blocks of high-carbon steel, hardened, ground and lapped,
with a cross-section of about 30 mm × 10 mm. Their opposite faces are flat, parallel and accurately the
stated distance apart. The opposite
faces are of such a high degree of surface finish, that when the blocks are pressed together with a slight
twist by hand, they will wring together. They will remain firmly attached to each other. They are
supplied in sets ranging from 112 pieces down to 32 pieces. Owing to the wringing property of slip
gauges, they can be built up into combinations giving sizes varying in steps as small as 0.01 mm, with
an overall accuracy of the order of 0.00025 mm. Slip gauges are commonly found in three basic forms:
rectangular, square with
center hole, and square without center hole.

Wringing is the process of sliding and combining the faces of slip gauges one over the other. Owing to
the adhesion resulting from the very high degree of surface finish of the measuring faces, the gauges
stick together.
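Building a dimension by wringing can be sketched as a simple selection procedure: pick a gauge that eliminates the smallest decimal place first, then work upward. The set below is a simplified, hypothetical subset of a metric slip-gauge set (real sets such as the 112-piece set have many more pieces), and the routine assumes targets of roughly 3 mm or more.

```python
# Sketch of slip-gauge stack selection: eliminate the smallest decimal
# place first, as done in practice. AVAILABLE is a hypothetical subset.
AVAILABLE = (
    [round(1.001 + 0.001 * i, 3) for i in range(9)]    # 1.001 .. 1.009
    + [round(1.01 + 0.01 * i, 3) for i in range(49)]   # 1.01 .. 1.49
    + [round(0.5 * i, 1) for i in range(1, 20)]        # 0.5 .. 9.5
    + [10.0 * i for i in range(1, 11)]                 # 10 .. 100
)

def build_stack(target_mm):
    """Return a list of gauge sizes that wring together to target_mm."""
    remaining = round(target_mm, 3)
    stack = []
    # 1) kill the thousandths digit with a 1.001-1.009 gauge
    th = round(remaining * 1000) % 10
    if th:
        g = round(1.0 + th / 1000, 3)
        stack.append(g)
        remaining = round(remaining - g, 3)
    # 2) kill the hundredths (and odd tenths) with a 1.01-1.49 gauge
    hund = round(remaining * 100) % 100
    if hund % 50:                               # not reachable by 0.5 mm steps
        digit = hund if hund < 50 else hund - 50
        g = round(1.0 + digit / 100, 2)
        stack.append(g)
        remaining = round(remaining - g, 2)
    # 3) 0.5 mm steps up to 9.5, then 10 mm steps
    if remaining % 10:
        g = remaining % 10
        stack.append(g)
        remaining = round(remaining - g, 1)
    if remaining:
        stack.append(remaining)
    return stack

print(build_stack(41.125))  # e.g. [1.005, 1.12, 9.0, 30.0]
```

The classic rule of using as few gauges as possible falls out naturally: each step removes one decimal place with a single gauge.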

Classification of Slip Gauges


Slip gauges are classified into various types according to their use as follows:
1) Grade 2
2) Grade 1
3) Grade 0
4) Grade 00
5) Calibration grade.
1) Grade 2:
It is a workshop grade slip gauges used for setting tools, cutters and checking dimensions roughly.
2) Grade 1:
Grade 1 is used for precise work in tool rooms.

3) Grade 0:
It is used as inspection grade of slip gauges mainly by inspection department.

4) Grade 00:
Grade 00 is mainly used for high precision work, such as error detection in instruments.

5) Calibration grade:
The actual size of the slip gauge is recorded on a calibration chart supplied by the manufacturer.
Manufacture of Slip Gauges

The following additional operations are carried out to obtain the necessary qualities in slip gauges
during manufacture.

i. First the blocks are brought to approximate size by preliminary machining operations.
ii. The blocks are hardened and wear resistant by a special heat treatment process.

iii. To stabilize the blocks for their whole life, a seasoning process is carried out.

iv. The blocks are brought close to the required dimension by a final grinding process.

v. To get the exact size of slip gauges, lapping operation is done.

vi. Comparison is made with grand master sets.

Slip Gauges accessories


The range of application of slip gauges can be increased by providing accessories. The various
accessories are
 Measuring jaw
 Scriber and Centre point.
 Holder and base
1. Measuring jaw:
It is available in two designs specially made for internal and external features.
2. Scriber and Centre point:
It is mainly used for marking purposes.
3. Holder and base:
The holder is a holding device used to hold a combination of slip gauges. The base is designed for
mounting the holder rigidly on its top surface.

INTERFEROMETERS

They are optical instruments used for measuring flatness and determining the length of the
slip gauges by direct reference to the wavelength of light. It overcomes the drawbacks of optical flats
used in ordinary daylight. In these instruments the lay of the optical flat can be controlled and fringes
can be oriented as per the requirement. An arrangement is made to view the fringes directly from the
top and avoid any distortion due to incorrect viewing.

Optical Flat and Calibration
1. Optical flat are flat lenses, made from quartz, having a very accurate surface to transmit light.
2. They are used in interferometers, for testing plane surfaces.
3. The diameter of an optical flat varies from 50 to 250 mm and the thickness varies from 12 to 25 mm.
4. Optical flats are made in a range of sizes and shapes.
5. The flats are available with a coated surface.
6. The coating is a thin film, usually titanium oxide, applied on the surface to reduce the light lost
by reflection.
7. The coating is so thin that it does not affect the position of the fringe bands, but it improves the
visibility of the fringes. The supporting surface on which the optical flat measurements are made
must provide a clean, rigid platform.
Optical flats are cylindrical in form with flat working surfaces, and are of two types:
i) Type A,
ii) Type B.
i) Type A:
It has only one surface flat and is used for testing flatness of precision measuring surfaces of
flats, slip gauges and measuring tables. The tolerance on flatness for Type A is 0.05 µm.
ii) Type B:
It has both surfaces flat and parallel to each other. They are used for testing measuring surfaces
of micrometers, Measuring anvils and similar length of measuring devices for testing flatness and
parallelism. For these instruments, their thickness and grades are important. The tolerances on flatness,
parallelism and thickness should be 0.05 µm.
Interference Bands by Optical Flat
Optical flats are blocks of glass finished to within 0.05 micron for flatness. When an optical flat
is placed on a surface which is not perfectly flat, the optical flat will not coincide with it exactly, but
will make a small angle with the surface, as shown in Figure 2.8.
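The fringe pattern produced by this air wedge can be used numerically: each fringe corresponds to a change of half a wavelength in the air gap, so counting fringes across the surface gives the height of the wedge. A small sketch, assuming monochromatic sodium light:

```python
# Each interference fringe corresponds to an air-gap change of half a
# wavelength, so N fringes across the contact area imply a wedge height
# of N * lambda / 2 (monochromatic light assumed).
WAVELENGTH_UM = 0.5893  # sodium light wavelength in micrometres

def wedge_height_um(fringe_count, wavelength_um=WAVELENGTH_UM):
    """Height difference across the surface under the optical flat, in um."""
    return fringe_count * wavelength_um / 2

print(wedge_height_um(8))  # 8 fringes -> about 2.36 um out of flat
```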

LIMIT GAUGES

 A limit gauge is not a measuring gauge. Just they are used as inspecting gauges.
 The limit gauges are used in inspection by methods of attributes.
 This gives the information about the products which may be either within the prescribed limit or not.
 From limit gauge inspection reports, control charts (p charts and c charts) are drawn to control the
variation of the products.
 This procedure is mostly performed by the quality control department of each and every industry.
 Limit gauges are mainly used for checking cylindrical holes of identical components produced in
large numbers in mass production.

Purpose of using limit gauges


 Components are manufactured as per the specified tolerance limits, upper limit and lower limit.
The dimension of each component should be within this upper and lower limit.
 If the dimensions are outside these limits, the components will be rejected.
 If we use measuring instruments to check these dimensions, the process consumes more time, and
we are usually not interested in knowing the exact amount of error in the dimensions.
 It is just enough whether the size of the component is within the prescribed limits or not. For this
purpose, we can make use of gauges known as limit gauges.

The common types are as follows:


1) Plug gauges.
2) Ring gauges.
3) Snap gauges.

PLUG GAUGES
 The ends are hardened and accurately finished by grinding. One end is the GO end and the other end
is NOGO end.
 Usually, the GO end will be equal to the lower limit size of the hole and the NOGO end will be equal
to the upper limit size of the hole.
 If the size of the hole is within the limits, the GO end should go inside the hole and NOGO end
should not go.
 If the GO end does not go, the hole is undersize, and if the NOGO end goes, the hole is oversize.
Hence, the component is rejected in both cases.
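The accept/reject logic of a GO/NOGO plug gauge can be written out directly; the hole limits below are hypothetical.

```python
# GO/NOGO decision for a hole, as a plug gauge makes it (sketch).
# GO end = lower limit of the hole, NOGO end = upper limit of the hole.
def inspect_hole(hole_dia, low_limit, high_limit):
    """Return 'accept', 'undersize' or 'oversize', as a plug gauge would decide."""
    go_enters = hole_dia >= low_limit     # GO end should enter the hole
    nogo_enters = hole_dia > high_limit   # NOGO end should NOT enter
    if not go_enters:
        return "undersize"   # GO did not enter -> reject
    if nogo_enters:
        return "oversize"    # NOGO entered -> reject
    return "accept"

print(inspect_hole(25.010, 25.000, 25.021))  # accept
print(inspect_hole(24.995, 25.000, 25.021))  # undersize
print(inspect_hole(25.030, 25.000, 25.021))  # oversize
```

Note that no numerical error value is produced, matching the attribute-inspection idea above: the gauge only answers "within limits or not".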

1. Double ended plug gauges
In this type, the GO end and NOGO end are arranged on both the ends of the plug. This
type has the advantage of easy handling.

2. Progressive type of plug gauges


In this type both the GO end and NOGO end are arranged on the same side of the plug. The two
gauging ends are used progressively, one after the other, while checking the hole, which saves time.
Generally, the GO end is made longer than the NOGO end in plug gauges.

TAPER PLUG GAUGE


Taper plug gauges are used to check tapered holes. It has two check lines. One is a GO line and
another is a NOGO line. During the checking of work, NOGO line remains outside the hole and GO line
remains inside the hole.

Various types of taper plug gauges are available, as shown in fig., such as:
1) Taper plug gauge — plain
2) Taper plug gauge — tanged.
3) Taper ring gauge plain
4) Taper ring gauge — tanged.

RING GAUGES
 Ring gauges are mainly used for checking the diameter of shafts. The central hole is accurately
finished by grinding and lapping after the hardening process.
 The periphery of the ring is knurled to give more grips while handling the gauges. We have to
make two ring gauges separately to check the shaft such as GO ring gauge and NOGO ring
gauge.
 The hole of the GO ring gauge is made to the upper limit size of the shaft and that of the NOGO
gauge to the lower limit.
 While checking the shaft, the GO ring gauge should pass over the shaft and the NOGO gauge
should not pass.
 To identify the NOGO ring gauge easily, a red mark or a small groove is cut on its periphery.

SNAP GAUGE
Snap gauges are used for checking external dimensions. They are also called gap gauges. The
different types of snap gauges are:

1. Double Ended Snap Gauge


This gauge has two ends in the form of anvils. Here, the GO anvil is made to the upper limit
and the NOGO anvil to the lower limit of the shaft. It is also known as a solid snap gauge.

2. Progressive Snap Gauge

This type of snap gauge is also called a caliper gauge. It is mainly used for checking large
diameters up to 100 mm. Both the GO and NOGO anvils are at the same end, with the GO anvil at the
front and the NOGO anvil at the rear, so the diameter of the shaft is checked progressively by these
two ends. This type of gauge is made of a horseshoe-shaped frame with an I-section to reduce the
weight of the gauge.

3. Adjustable Snap Gauge


Adjustable snap gauges are used for checking large shafts. They are made with a horseshoe-shaped
frame of I-section and have one fixed anvil and two small adjustable anvils. The distance between the
anvils is set by means of setscrews; this adjustment can be made with the help of slip gauges for the
specified limits of size.

4. Combined Limit Gauges

A spherical projection is provided, with the GO and NOGO dimensions marked on a single gauge. While
using the GO gauge the handle is parallel to the axis of the hole, and it is normal to the axis for the
NOGO gauge.

5. Position Gauge

It is designed for checking the position of features in relation to another surface. Other types of gauges
are also available such as contour gauges, receiver gauges, profile gauges etc.

TAYLOR'S PRINCIPLE
 It states that the GO gauge should check all related dimensions (size and form) simultaneously,
whereas the NOGO gauge should check only one dimension at a time.

Maximum metal condition


It refers to the condition of a hole or shaft when the maximum material is left on, i.e. the high limit of
the shaft and the low limit of the hole.

Minimum metal condition


It refers to the condition of a hole or shaft when the minimum material is left on, i.e. the low limit of
the shaft and the high limit of the hole.

Applications of Limit Gauges


1. Thread gauges
2. Form gauges
3. Screw pitch gauges
4. Radius and fillet gauges
5. Feeler gauges
6. Plate gauge and Wire gauge

COMPARATORS

Comparators are a form of linear measurement device which is quick and convenient for checking
a large number of identical dimensions. A comparator normally does not show the actual dimension
of the work piece; it shows only the deviation in size, i.e. during measurement a comparator gives
the deviation of the dimension from a set dimension. It cannot be used as an absolute measuring
device, but can only compare two dimensions. Comparators
are designed in several types to meet various conditions. Comparators of every type incorporate some
kind of magnifying device. The magnifying device magnifies how much dimension deviates, plus or
minus, from the standard size.

The comparators are classified according to the principles used for obtaining magnification. The
common types are:

1) Mechanical comparators
2) Electrical comparators
3) Optical comparators
4) Pneumatic comparators
MECHANICAL COMPARATORS

A mechanical comparator employs mechanical means for magnifying small deviations. In all
mechanical comparators the small movement of the plunger is magnified by means of levers, gear
trains or a combination of these elements. Mechanical comparators are available with magnifications
from 300:1 to 5000:1. They are mostly used for inspection of small parts
machined to close limits.

1. Dial indicator
A dial indicator or dial gauge is used as a mechanical comparator. The essential parts of the
instrument are like a small clock with a plunger projecting at the bottom, as shown in fig. A very slight
upward movement of the plunger is indicated by the dial pointer. The dial is graduated into 100
divisions. A full revolution of the pointer corresponds to 1 mm of plunger travel; thus a movement of
the pointer by one scale division represents a plunger travel of
0.01mm.
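The dial relation just stated (100 divisions, 1 mm per revolution) can be expressed as a one-line conversion; `plunger_travel_mm` is a hypothetical helper for illustration.

```python
# Dial indicator relation from the text: 100 divisions per revolution,
# one revolution = 1 mm of plunger travel, so 1 division = 0.01 mm.
DIVS_PER_REV = 100
MM_PER_REV = 1.0

def plunger_travel_mm(revolutions, divisions):
    """Plunger travel for whole pointer revolutions plus extra divisions."""
    return revolutions * MM_PER_REV + divisions * (MM_PER_REV / DIVS_PER_REV)

print(plunger_travel_mm(0, 1))   # 0.01 mm per division
print(plunger_travel_mm(2, 35))  # 2.35 mm
```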

Experimental setup
The whole setup consists of a worktable, a dial indicator and a vertical post. The dial indicator is
fitted to the vertical post by an adjusting screw, as shown in fig. The vertical post is fitted on the
worktable, whose top surface is finely finished. The dial gauge can be adjusted vertically and
locked in position by a screw.

Procedure
Let us assume that the required height of the component is 32.5mm. Initially this height is built
up with slip gauges. The slip gauge blocks are placed under the stem of the dial gauge. The pointer in
the dial gauge is adjusted to zero. The slip gauges are removed. Now the component to be checked is
introduced under the stem of the dial gauge. If there is any deviation in the height of the component, it
will be indicated by the pointer.

Mechanism
The stem has rack teeth. A set of gears engages with the rack. The pointer is connected to a small
pinion which is independently hinged, i.e. it is not connected to the stem. The vertical movement of
the stem is transmitted to the pointer through the gear train. A spring applies a constant downward
pressure to the stem.

2. Reed type mechanical comparator

In this type of comparator, the linear movement of the plunger is magnified by means of a reed
mechanism, illustrated in fig. A spring-loaded pointer is pivoted. Initially, the comparator is set with
the help of a known dimension, e.g. a set of slip gauges, as shown in fig., and the indicator reading
is adjusted to zero. When the part to be measured is placed under the pointer, the comparator
displays the deviation of the dimension on the plus or minus side of the set dimension.

Advantages

1) It is usually robust, compact and easy to handle.


2) No external supply, such as electricity or air, is required.
3) It has very simple mechanism and is cheaper when compared to other types.
4) It is suitable for ordinary workshop and also easily portable.

Disadvantages

1) Accuracy of the comparator mainly depends on the accuracy of the rack and pinion arrangement.
Any slackness will reduce accuracy.
2) It has more moving parts and hence friction is more and accuracy is less.
3) The range of the instrument is limited since pointer is moving over a fixed scale.

ELECTRICAL COMPARATOR:
An electrical comparator consists of the following three major parts:
1) Transducer
2) Display device as meter
3) Amplifier

Transducer

An iron armature is held between two coils by a leaf spring at one end; the other end is supported
against a plunger. The two coils act as two arms of an A.C. Wheatstone bridge circuit.

Amplifier
The amplifier is a device which magnifies the input signal to a level suitable for
display.
Display device or meter
The amplified input signal is displayed on some terminal stage instruments. Here, the terminal
instrument is a meter.
Working principle

If the armature is centrally located between the coils, the inductances of the two coils are equal but
act in opposition, so the A.C. Wheatstone bridge circuit is balanced and the meter reads zero. In
practice, however, the armature is lifted up or lowered down by the plunger during the measurement.
This would
upset the balance of the Wheatstone bridge circuit, and a corresponding change in current or potential
is induced. The meter then indicates a value proportional to the displacement, which may correspond
to either a larger or a smaller component. As the induced current is very small, it is suitably amplified
before being displayed on the meter.
Checking of accuracy
To check the accuracy of a given specimen or work, first a standard specimen is placed under
the plunger. After this, the resistance of the Wheatstone bridge is adjusted so that the scale reading shows
zero. Then the specimen is removed. Now, the work is introduced under the plunger. If height variation
of work presents, it will move the plunger up or down. The corresponding movement of the plunger is
first amplified by the amplifier then it is transmitted to the meter to show the variations. The least count
of this electrical comparator is 0.001mm (one micron).
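The behaviour described above can be caricatured with an idealised linear model (an assumption for illustration, not a circuit simulation): a balanced bridge reads zero, and a plunger displacement produces a proportional meter deflection in steps of the 0.001 mm least count quoted in the text. The gain value is hypothetical.

```python
# Idealised linear sketch of the comparator: meter deflection proportional
# to plunger displacement, resolved in 0.001 mm (one micron) steps.
SENSITIVITY = 1.0       # meter units per micron (hypothetical gain)
LEAST_COUNT_MM = 0.001  # one micron, as quoted in the text

def meter_reading(displacement_mm, gain=SENSITIVITY):
    """Meter indication for a given plunger displacement from the set zero."""
    microns = displacement_mm / LEAST_COUNT_MM
    return round(gain * microns, 3)

print(meter_reading(0.0))     # balanced bridge -> reads zero
print(meter_reading(0.005))   # component 5 microns high
print(meter_reading(-0.002))  # component 2 microns low
```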

ELECTRONIC COMPARATOR
In an electronic comparator, transducer induction or the principle of frequency modulation (radio
oscillation) is applied.

Construction details
In the electronic comparator, the following components are arranged:
 Transducer
 Oscillator
 Amplifier
 Demodulator
 Meter
(i) Transducer
It converts the movement of the plunger into an electrical signal. It is connected with oscillator.

(ii) Oscillator
The oscillator receives the electrical signal from the transducer and raises the amplitude of the
frequency wave by adding a carrier frequency; this is called modulation.

(iii) Amplifier
An amplifier is connected between the oscillator and the demodulator. The signal coming out of
the oscillator is amplified to the required level.

(iv) Demodulator
The demodulator is a device which cuts off the external carrier wave frequency, i.e. it converts the
modulated wave back into the original electrical signal.

(v) Meter
This is nothing but a display device from which the output can be obtained as a linear
measurement.

Principle of operation

The work to be measured is placed under the plunger of the electronic comparator. Both
work and comparator are made to rest on the surface plate. The linear movement of the plunger is
converted into an electrical signal by a suitable transducer. The signal is then sent to an oscillator,
which modulates it by adding a carrier frequency. The modulated signal is amplified and sent to the
demodulator, in which the carrier wave is cut off. Finally, the demodulated signal is passed to the
meter, which converts the probe tip movement into a linear measurement as the output. A separate
D.C. electrical supply is provided to actuate the meter.

Advantages of Electrical and Electronic comparator
 It has fewer moving parts.
 Magnification obtained is very high.
 Two or more magnifications are provided in the same instrument to use various ranges.
 The pointer is made very light so that it is less susceptible to vibration.
 The instrument is very compact.

Disadvantages of Electrical and Electronic comparator


 An external power supply is required for actuation of the meter.
 Variation of voltage or frequency may affect the accuracy of output.
 Due to heating coils, the accuracy decreases.
 It is more expensive than mechanical comparator.

SINE BAR

Sine bars are always used along with slip gauges as a device for the measurement of angles very
precisely. They are used to
 Measure angles very accurately.
 Locate the work piece to a given angle with very high precision.
Generally, sine bars are made from high-carbon, high-chromium, corrosion-resistant steel, which is
hardened, ground and stabilized. In a sine bar, two cylinders of equal diameter are attached at the ends
with their axes mutually parallel and at an equal distance from the upper surface of the bar. The
distance between the axes of the two cylinders is usually 100 mm, 200 mm or 300 mm. The working
surfaces of the rollers are finished to a 0.2 µm Ra value.

The cylindrical holes are provided to reduce the weight of the sine bar.

Working principle of sine bar

The working of the sine bar is based on the trigonometric principle. To measure the angle of a given specimen, one roller of the sine bar is placed on the surface plate and the other roller is placed on the surface of the slip gauges. If 'h' is the height of the slip gauges and 'L' is the distance between the roller centres, then the angle θ is calculated as

sin θ = h / L, i.e. θ = sin⁻¹(h / L)
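The sine relation sin θ = h/L can be used both ways: finding an angle from a measured gauge height, or finding the gauge height needed to set a known angle. A short sketch, assuming a 200 mm roller-centre distance:

```python
import math

def sine_bar_angle(h_mm, L_mm=200.0):
    # unknown angle (degrees) from slip gauge height h and roller-centre distance L
    return math.degrees(math.asin(h_mm / L_mm))

def slip_gauge_height(angle_deg, L_mm=200.0):
    # slip gauge height needed to set the sine bar to a given angle
    return L_mm * math.sin(math.radians(angle_deg))

h = slip_gauge_height(30.0)   # 100.0 mm for a 200 mm sine bar
angle = sine_bar_angle(h)     # back to 30.0 degrees
```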

Use of Sine Bar: Locating a work piece to a given angle

 Before checking the unknown angle of the specimen, the angle (θ) of the given specimen is first found approximately with a bevel protractor.
 The sine bar is then set at the angle θ and clamped on the angle plate.
 Now the work is placed on the sine bar, and the dial indicator set at one end of the work is moved across the work piece and the deviation is noted.
 The slip gauges are adjusted so that the dial indicator reads zero throughout the work surface.

Limitations of sine bars


1) Sine bars are fairly reliable only for angles less than 15°; errors increase rapidly at larger angles.
2) It is physically difficult to hold in position.
3) Slight errors in sine bar cause larger angular errors.
4) A difference of deformation occurs at the point of roller contact with the surface plate and to the
gauge blocks.
5) The size of parts to be inspected by sine bar is limited.
Sources of error in sine bars
The different sources of errors are listed below:
1) Error in distance between roller centers.
2) Error in slip gauge combination.
3) Error in checking of parallelism.
4) Error in parallelism of roller axes with each other.
5) Error in flatness of the upper surface of sine bar.

BEVEL PROTRACTORS
Bevel protractors are angular measuring instruments.
Types of bevel protractors:
The different types of bevel protractors used are:

1) Vernier bevel protractor


2) Universal protractor
3) Optical protractor

VERNIER BEVEL PROTRACTOR:

Working principle
A vernier bevel protractor is fitted with an acute angle attachment. The body is designed so that its back is flat, with no projections beyond it. The base plate is attached to the main body, and an adjustable blade is attached to the circular plate carrying the vernier scale. The main scale is graduated in degrees from 0° to 90° in both directions. The adjustable blade can rotate freely about the centre of the main scale and can be locked at any position. For measuring acute angles, a special attachment is provided. The base plate is made flat for measuring angles, and the blade can be moved throughout its length. The ends of the blade are bevelled at angles of 45° and 60°. One main scale division is 1°, and the vernier is graduated into 12 divisions on each side of zero. Therefore the least count is calculated as

Least count = 1 main scale division / number of vernier divisions = 1°/12 = 5 minutes

Thus, the bevel protractor can be used to measure to an accuracy of 5 minutes.
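The least count arithmetic is simple enough to verify directly:

```python
main_scale_div_deg = 1.0    # one main scale division = 1 degree
vernier_divisions = 12      # vernier graduated into 12 divisions

# least count = one main scale division / number of vernier divisions
least_count_deg = main_scale_div_deg / vernier_divisions
least_count_min = least_count_deg * 60.0   # = 5 minutes of arc
```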

Applications of bevel protractor
The bevel protractor can be used in the following applications.

1. For checking a 'V' block.

AUTO- COLLIMATOR
Auto-collimator is an optical instrument used for the measurement of small angular
differences, changes or deflection, plane surface inspection etc. For small angular measurements,
autocollimator provides a very sensitive and accurate approach. An auto - collimator is essentially an
infinity telescope and a collimator combined into one instrument.

Basic principle

If a light source is placed at the focus of a collimating lens, it is projected as a parallel beam of light. If this beam strikes a plane reflector kept normal to the optical axis, it is reflected back along its own path and brought to the same focus. If the reflector is tilted through a small angle θ, the parallel beam is deflected through twice that angle (2θ) and is brought to focus in the same plane as the light source, displaced from it.
The displacement of the image from the source is given by

x = 2θ·f

where f is the focal length of the collimating lens and θ is in radians.
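The relation x = 2θf is easily inverted to read the reflector tilt off the measured image displacement. The 500 mm focal length below is an assumed figure for illustration:

```python
import math

def tilt_rad(x_mm, focal_length_mm):
    # reflector tilt (radians) from image displacement x in the focal plane
    return x_mm / (2.0 * focal_length_mm)

def to_arcsec(theta_rad):
    # convert radians to seconds of arc
    return math.degrees(theta_rad) * 3600.0

theta = tilt_rad(0.1, 500.0)   # 0.1 mm displacement, 500 mm focal length
# to_arcsec(theta) is roughly 20.6 arc-seconds
```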

WORKING OF AUTO-COLLIMATOR:
There are three main parts in auto-collimator.
1. Micrometer microscope.
2. Lighting unit and
3. Collimating lens.
Figure shows a line diagram of a modern auto-collimator. A target graticule is positioned perpendicular to the optical axis. When the target graticule is illuminated by a lamp, rays of light diverging from the intersection point reach the objective lens via a beam splitter. From the objective, the light rays are projected as parallel rays to the reflector.

A flat reflector placed in front of the objective and exactly normal to the optical axis reflects the parallel rays of light back along their original paths. They are then brought back to the target graticule and exactly coincide with its intersection. A portion of the returned light passes through the beam splitter and is visible through the eyepiece. If the reflector is tilted through a small angle, the reflected beam changes its path by twice that angle.

The image is still brought to the target graticule but is linearly displaced from the actual target by the amount 2θ × f. The linear displacement of the graticule image in the plane of the eyepiece is thus directly proportional to the tilt of the reflector, and can be measured by an optical micrometer. The photoelectric auto-collimator is particularly suitable for calibrating polygons, for checking angular indexing and for checking small linear displacements.

APPLICATIONS OF AUTO-COLLIMATOR
Auto-collimators are used for
 Measuring the difference in height of length standards.
 Checking the flatness and straightness of surfaces.
 Checking squareness of two surfaces.
 Precise angular indexing in conjunction with polygons.
 Checking alignment or parallelism.
 Comparative measurement using master angles.
 Measurement of small linear dimensions.
 For machine tool adjustment testing.
ANGLE DEKKOR
This is also a type of auto-collimator. There is an illuminated scale in the focal plane of the collimating lens. This illuminated scale is projected as a parallel beam by the collimating lens and, after striking a reflector below the instrument, is refocused by the lens in the field of view of the eyepiece. In the field of view there is another datum scale, fixed across the centre of the screen. The reflected image of the illuminated scale is received at right angles to the fixed scale, as shown in the figure. Thus changes in the angular position of the reflector in two planes are indicated by changes in the point of intersection of the two scales. One division on the scale is calibrated to read 1 minute.

Uses of Angle Dekkor
(i) Measuring angle of a component
The angle dekkor is capable of measuring small variations in angular setting, i.e. determining angular tilt. It is used in combination with angle gauges. First, the angle gauge combination is set up to the nearest known angle of the component. The angle dekkor is then set to zero reading on the illuminated scale. The angle gauge build-up is removed and replaced by the component under test, usually with a straight edge used to ensure that there is no change in lateral position. The new position of the reflected scale with respect to the fixed scale gives the angular tilt of the component from the set angle.
(ii) Checking the slope angle of a V-block
Figure shows the set up for checking the sloping angle of V block. Initially, a polished reflector or
slip gauge is attached in close contact with the work surface. By using angle gauge zero reading is
obtained in the angle dekkor. Then the angle may be calculated by comparing the reading
obtained from the angle dekkor and angle gauge.

(iii) To measure the angle of cone or Taper gauge


Initially, the angle dekkor is set for the nominal angle of cone by using angle gauge or sine
bar. The cone is then placed in position with its base resting on the surface plate. A slip gauge or reflector
is attached on the cone since no reflection can be obtained from the curved surface. Any deviation from
the set angle will be noted by the angle dekkor in the eyepiece and indicated by the shifting of the image
of illuminated scale.

UNIT-III – CALIBRATION AND MEASURING MACHINES
Laser Equipment for Alignment Testing
This testing is particularly suitable in aircraft production, shipbuilding etc., where a number of components spaced long distances apart have to be checked against a predetermined straight line. Other uses of laser equipment are testing the flatness of machined surfaces, checking squareness with the help of an optical square etc. The equipment consists of a laser tube, which produces a cylindrical laser beam of about 10 mm diameter, and an auto reflector of a high degree of accuracy. The laser tube consists of a helium-neon plasma tube in an aluminium cylindrical housing. The laser beam comes out of the housing from its centre, parallel to the housing within 10" of arc, and the alignment stability is of the order of 0.2" of arc per hour. The auto reflector consists of a detector head and a read-out unit. A number of photocells are arranged to compare the laser beam in each half, horizontally and vertically. The detector head is mounted on a stand which has two adjustments to translate the detector in its two orthogonal measuring directions perpendicular to the laser beam. The device detects the alignment of flat surfaces perpendicular to a reference line of sight.

MACHINE TOOL TESTING


The accuracy of manufactured parts depends on the accuracy of machine tools. The quality of the work piece depends on:
 Rigidity and stiffness of the machine tool and its components.
 Alignment of the various components in relation to one another.
 Quality and accuracy of the driving mechanisms and control devices.

It can be classified into


 Static tests
 Dynamic tests.

Static tests
If the alignment of the components of the machine tool is checked under static conditions, the tests are called static tests.

Dynamic tests
If the alignment tests are carried out under dynamic loading conditions, they are called dynamic tests. The accuracy of machine tools which cut metal by removing chips is tested by two types of tests, namely
 Geometrical tests
 Practical tests

Geometrical tests
In this test, the dimensions of components, the position of components and the displacement of components relative to one another are checked.

Practical tests
In these tests, test pieces are machined on the machine. The test pieces must be appropriate to the fundamental purpose for which the machine has been designed.
Purpose of Machine Tool Testing
The dimensions of any work piece, its surface finish and its geometry depend on the accuracy of the machine tool used for its manufacture. In mass production, the components produced should be of high accuracy so that they can be assembled on a non-selective basis. The increasing demand for accurately machined components has led to improvement of the geometric accuracy of machine tools. For this purpose, various checks on the different components of the machine tool are carried out.

Type of Geometrical Checks on Machine Tools.


Different types of geometrical tests conducted on machine tools are as follows:
 Straightness.
 Flatness.
 Parallelism, equi-distance and coincidence.
 Rectilinear movements or squareness of straight line and plane.
 Rotations.
Main spindle is to be tested for
 Out of round.
 Eccentricity
 Radial-throw of an axis.
 Run out
 Periodical axial slip
 Camming

Various tests conducted on any Machine Tools


 Test for level of installation of machine tool in horizontal and vertical planes.
 Test for flatness of machine bed and for straightness and parallelism of bed ways on bearing
surface.
 Test for perpendicularity of guide ways to other guide ways.
 Test for true running of the main spindle and its axial movements.
 Test for parallelism of spindle axis to guide ways or bearing surfaces.
 Test for line of movement of various members like spindle and table cross slides etc.
Use of Laser for Alignment Testing

 The alignment tests can be carried out over greater distances and to a greater degree of accuracy
using laser equipment.

 Laser equipment produces a real straight line, whereas an alignment telescope provides an imaginary line that cannot be seen in space.
 This is important when it is necessary to check a number of components against a predetermined straight line, particularly if they are spaced relatively long distances apart, as in aircraft production and in shipbuilding.
 Laser equipment can also be used for checking the flatness of machined surfaces by direct displacement. By using an optical square in conjunction with the laser equipment, squareness can be checked with reference to the laser base line.
CO-ORDINATE MEASURING MACHINES
Measuring machines are used for measurement of length over the outer surfaces of a length bar or any other long member. The member may be either round, or flat and parallel. Measuring machines are more useful and advantageous than vernier calipers, micrometers, screw gauges etc.; they are generally of a universal character and can be used for work of varied nature. The co-ordinate measuring machine (CMM) is used for contact inspection of parts. When used for computer-integrated manufacturing, these machines are controlled by computer numerical control. Software is provided for reverse engineering complex shaped objects: the component is digitized using a CNC CMM, and the digitized data is converted into a computer model which gives the true surface of the component. Recent advances include automatic work part alignment on the table. Inspection on a CMM requires only 5 to 10 percent of the time needed by manual inspection methods.
Types of Measuring Machines
1. Length bar measuring machine.
2. Newall measuring machine.
3. Universal measuring machine.
4. Co-ordinate measuring machine.
5. Computer controlled co-ordinate measuring machine.

Constructions of CMM
Co-ordinate measuring machines are very useful for three-dimensional measurements. These machines have movements in the X, Y and Z co-ordinates, controlled and measured easily by using touch probes. Measurements can be made by positioning the probe by hand, or automatically in more expensive machines. Reasonable accuracies are 5 micro-inches or 1 micrometre. These machines work by measuring the position of the probe using linear position sensors, which are based on moiré fringe patterns (also used in other systems). A transducer is provided in all three directions to give a digital display and to sense positive and negative directions.

Types of CMM

 Cantilever type
The cantilever type is very easy to load and unload, but mechanical errors take place because of sag or deflection in the Y-axis.
 Bridge type
Bridge type is more difficult to load but less sensitive to mechanical errors.

 Horizontal boring Mill type

This is best suited for large heavy work pieces.

Fig 4.12 Types of CMM

49
Working Principle
A CMM can be used, for example, for measuring the distance between two holes. The work piece is clamped to the worktable and aligned with the three measuring slides x, y and z. The measuring head carries a taper probe tip which is seated in the first datum hole, and the position of the probe digital read-out is set to zero. The probe is then moved to successive holes; the read-out represents the co-ordinate part-print hole location with respect to the datum hole. Automatic recording and data processing units are provided to carry out complex geometric and statistical analysis. Special co-ordinate measuring machines are provided with both linear and rotary axes; these can measure various features of parts like cones, cylinders and hemispheres. The prime advantages of the co-ordinate measuring machine are quicker inspection and accurate measurement.
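The distance-between-holes measurement described above reduces to the Euclidean distance between the two probed co-ordinate read-outs. A minimal sketch with illustrative co-ordinates:

```python
import math

def hole_distance(p1, p2):
    # straight-line distance between two probed hole centres (x, y, z) in mm
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# datum hole at the zeroed read-out, second hole probed at (30, 40, 0)
d = hole_distance((0.0, 0.0, 0.0), (30.0, 40.0, 0.0))   # 50.0 mm
```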

Fig 4.13 Schematic Diagram

Causes of Errors in CMM


1) The table and probes may be in imperfect alignment. The probes may have a degree of run-out, and movement up and down in the Z-axis may cause perpendicularity errors. The CMM should therefore be calibrated with master plates before using the machine.

2) The dimensional errors of a CMM are influenced by


 Straightness and perpendicularity of the guide ways.
 Scale division and adjustment.
 Probe length.
 Probe system calibration, repeatability, zero point setting and reversal error.
 Error due to digitization.
 Environment

3) Other errors can be controlled by the manufacturer and minimized by the measuring software. The length of the probe should be kept to a minimum to reduce deflection.
4) The weight of the work piece may change the geometry of the guide ways; therefore, the work piece must not exceed the maximum permitted weight.
5) Variations in the temperature of the CMM, the specimen and the measuring lab influence the uncertainty of measurements.
6) Translation errors occur from errors in the scale division and errors in straightness perpendicular to the corresponding axis direction.
7) Perpendicularity errors occur if the three axes are not orthogonal.

Calibration of Three Co-Ordinate Measuring Machine

The optical set-up for the calibration is shown in the figure.

The laser head is mounted on a tripod stand and its height is adjusted to match the working table of the CMM. The interferometer contains a polarizing beam splitter which reflects the F1 component of the laser beam and passes the F2 component through.

Fig 4.14 Optical setup

The retro-reflector is a polished trihedral glass prism. It reflects the laser beam back along a line parallel to the original beam. For distance measurement, the F1 and F2 beams that leave the laser head are aimed at the interferometer, which splits F1 and F2 via the polarizing beam splitter. Component F1 becomes the fixed-distance path, and F2 is sent to a target which reflects it back to the interferometer. Relative motion between the interferometer and the remote retro-reflector causes a Doppler shift in the returned frequency. Therefore the laser head sees a frequency difference given by F1 − F2 ± ΔF2. The F1 − F2 ± ΔF2 signal that is returned from the external interferometer is compared in the measurement display unit with the reference signal; the difference ΔF2 is related to the velocity. The longitudinal micrometer microscope of the CMM is set at zero and the laser display unit is also set at zero. The CMM microscope is then set at the following points and the display readings are noted: 1 to 10 mm in steps of 1 mm, and 10 to 200 mm in steps of 10 mm. The accuracy of linear measurements is affected by changes in air temperature, pressure and humidity.
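Comparing the CMM read-outs against the laser display at each calibration point amounts to tabulating the deviations. A sketch with made-up readings (not real calibration data):

```python
def calibration_errors(cmm_readings_mm, laser_readings_mm):
    # deviation of the machine scale from the laser reference at each point
    return [c - l for c, l in zip(cmm_readings_mm, laser_readings_mm)]

# illustrative readings at three of the 10 mm calibration steps
cmm =   [10.000, 20.001, 30.003]
laser = [10.000, 20.000, 30.001]
errors = calibration_errors(cmm, laser)   # scale error at each step (mm)
```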

Performance of CMM
 Geometrical accuracies such as positioning accuracy, straightness and squareness.
 Total measuring accuracy in terms of axial length measuring accuracy, volumetric length measuring accuracy and length measuring repeatability; i.e., the co-ordinate measuring machine has to be tested as a complete system.
 Since environmental effects have a great influence on accuracy, testing must take thermal parameters, vibrations and relative humidity into account.

APPLICATIONS
 Co-ordinate measuring machines find applications in automobile, machine tool, electronics, space
and many other large companies.
 These machines are best suited for the test and inspection of test equipment, gauges and tools.
 For aircraft and space vehicles, hundred percent inspection is carried out by using a CMM. The CMM can be used for determining the dimensional accuracy of components.
 These are ideal for determination of shape and position, maximum metal condition, linkage of results etc., which cannot be done on conventional machines.
 A CMM can also be used for sorting tasks to achieve optimum pairing of components within tolerance limits.
 CMMs are also best for ensuring the economic viability of NC machines by reducing their downtime for inspection. They also help in reducing inspection and rework costs when used at the appropriate time.

Advantages
 The inspection rate is increased.
 Accuracy is higher.
 Operator error is minimized.
 The skill requirement of the operator is reduced.
 Reduced inspection fixturing and maintenance cost.
 Reduction in calculating and recording time.
 Reduction in set-up time.
 No need for separate go / no-go gauges for each feature.
 Reduction of scrap and of good-part rejection.
 Reduction in off-line analysis time.
 Simplification of inspection procedures, and the possibility of reducing total inspection time through use of statistical and data analysis techniques.

Disadvantages
 The table and probe may not be in perfect alignment.
 The probe may have run-out.
 The probe moving in the Z-axis may have some perpendicularity errors.
 The probe, while moving in the X and Y directions, may not be square to each axis.
 There may be errors in the digital system.

COMPUTER CONTROLLED CO-ORDINATE MEASURING MACHINE


 The measurement and inspection of parts for dimension, form, surface characteristics and position of geometrical elements are done at the same time.
 The mechanical system can be divided into four basic types; the selection depends on the application.
1. Column type.
2. Bridge type.
3. Cantilever type.
4. Gantry type.

All these machines use probes which may be trigger type or measuring type. This is connected to the
spindle in Z direction. The main features of this system are shown in figure

Fig 4.15 Column Type Fig 4.16 Bridge Type

Trigger type probe system

Fig 4.17 Trigger Type Probe System

54
The buckling mechanism is a three-point bearing whose contacts are arranged at 120° around the circumference. These contacts act as electrical micro-switches.
 When the probe touches in any direction, one of the contacts is lifted off and the current is broken, thus generating a pulse. When the circuit is opened, the co-ordinate positions are read and stored.
 After probing, the spring ensures the perfect zero position of the three-point bearing.
The probing force is determined by the pre-stressed force of the spring. With this probe system, data acquisition is always dynamic, and therefore the measuring time is shorter than with the static principle.

Measuring type probe system

 It is a very small co-ordinate measuring system in which the buckling mechanism consists of parallel guide ways; when probing, the spring parallelograms are deflected from their initial position.
 Since the entire system is free from torsion and friction, the displacement can be measured easily.
 The mathematical model of the mechanical system is shown in the figure. If the components of the CMM are assumed to be rigid bodies, the deviations of a carriage can be described by three displacement deviations.

Fig 4.18 Buckling Mechanism

 These are parallel to the axes 1, 2 and 3, with three rotational deviations about the axes 4, 5 and 6. Similarly, deviations 7-12 occur for the Y carriage and 13-18 for the Z carriage, and the three squareness deviations 19, 20 and 21 are to be measured and treated in the mathematical model.
 Moving the probe stylus in the Y direction, the co-ordinate line L is not a straight line but a curved one, due to errors in the guide.
 If moving on measuring line L, further corrections are required in the X, Y and Z co-ordinates due to the offsets X and Z from curve L, resulting from the pitch angle 5, the roll angle 4 and the yaw angle 6.
 Similarly, the deviations of all three carriages and the squareness errors can be taken into account. The effect of error correction can be tested by means of calibrated step gauges.

The following test items are carried out for CMM.


(i) Measurement accuracy
a. Axial length measuring accuracy
b.Volumetric length measuring accuracy

(ii) Axial motion accuracy


a. Linear displacement accuracy
b. Straightness
c. Perpendicularity
d. Pitch, Yaw and roll.
The axial length measuring accuracy is tested at the lowest position of the Z-axis. The lengths tested are approximately 1/10, 1/5, 2/5, 3/5 and 4/5 of the measuring range of each axis of the CMM. The test is repeated five times for each measuring length, the results are plotted, and the value of the measuring accuracy is derived.
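The test positions named above can be generated directly from the axis range (the 500 mm range below is an assumed example):

```python
def axial_test_lengths(axis_range_mm):
    # test lengths at 1/10, 1/5, 2/5, 3/5 and 4/5 of the measuring range
    fractions = (1/10, 1/5, 2/5, 3/5, 4/5)
    return [axis_range_mm * f for f in fractions]

lengths = axial_test_lengths(500.0)   # [50.0, 100.0, 200.0, 300.0, 400.0]
```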
CMM Construction
The main features of a CNC-CMM are shown in the figure. The stationary granite measuring table, the length measuring system, the air bearings, the control unit and the software are the important parts of a CNC CMM.

Fig 4.19 CNC - CMM

Stationary granite measuring table


The granite table provides a stable reference plane for locating parts to be measured. It is provided with a grid of threaded holes defining clamping locations and facilitating part mounting. The table has a high load-carrying capacity and is accessible from three sides, so it can be easily integrated into the material flow system of CIM.

Length measuring system


A 3- axis CMM is provided with digital incremental length measuring system for each axis.

Air Bearing
The bridge, cross beam and spindle of the CMM are supported on air bearings.

Control unit
The control unit allows manual measurement and programmed operation. It is microprocessor controlled.

Software
The CMM, the computer and the software represent one system; the efficiency and cost effectiveness
depend on the software.

Features of CMM Software


 Measurement of diameter, centre distance, length.
 Measurement of plane and spatial curves.
 Minimum CNC programming.
 Data communications.
 Digital input and output commands.
 Programs for the measurement of spur, helical, bevel and hypoid gears.
 Interface to CAD software.
Software is also available for reverse engineering complex shaped objects. The component is digitized using a CNC CMM, and the digitized data is converted into a computer model which represents the true surface of the component. Recent advances include automatic work part alignment to orient the co-ordinate system. By using a CMM, only 5 to 10% of the manual inspection time is needed.

COMPUTER AIDED INSPECTION USING ROBOTS


Robots can be used to carry out inspection or testing operations for mechanical dimensions, physical characteristics and product performance. Checking robot, programmable robot and co-ordinate robot are some of the names given to multi-axis measuring machines. These machines automatically perform all the basic routines of a CNC co-ordinate measuring machine, but at a faster rate than a CMM. They are not as accurate as a CMM, but they can check to accuracies of 5 micrometres. The co-ordinate robot can take successive readings at high speed and evaluate the results using a computer-graphics-based real-time statistical analysis system.

Integration of CAD/CAM with Inspection System


A product is designed, manufactured and inspected in one automatic process. One of the critical factors in manufacturing is quality assurance, and the co-ordinate measuring machine assists in the quality assurance function. Productivity can be improved by interfacing the CMM with a CAD/CAM system. This eliminates labour, reduces preparation time and increases the availability of the CMM for inspection. Generally, the CAD/CAM-CMM interface consists of a number of modules, as given below.

(1) CMM interface

This interface allows the user to interact with the CAD/CAM database to generate a source file that can be converted to a CMM control data file. During source file creation, CMM probe path motions are simulated and displayed on the CAD/CAM workstation for visual verification. A set of CMM commands allows the CMM interface to take advantage of most of the CMM's functional capabilities.

Fig 4.20 CMM Interface

These command statements include set-up, part datum control, feature construction, geometric relations, tolerances, output control and feature measurements, such as measurements of lines, points, arcs, circles, splines, conics, planes and analytic surfaces.

(2) Pre- processor


The pre-CMM processor converts the language source file generated by the CMM interface into the language of the specified co-ordinate measuring machine.
(3) Post-CMM processor
This creates a wire-frame surface model from the CMM ASCII output file. Commands are inserted into the ASCII CMM output file to control the creation of CAD/CAM entities, which include points, lines, arcs, circles, conics, splines and analytic surfaces.

Flexible Inspection System
The block diagram of the flexible inspection system is shown in the figure. This system has been developed and used for inspection at several places in industry. It helps to improve product performance, improve inspection and increase productivity. The FIS uses a real-time processor to handle part dimensional data and a multi-programming system to perform manufacturing process control. The input devices used with this system are CMMs,

Fig 4.21 Flexible Inspection System

microprocessor-based gauges and other inspection devices. The terminals provide interactive communication with personal computers where the programmes are stored. The data from the CMMs and other terminals are fed into the main computer for analysis and feedback control. The quality control data and inspection data from each station are fed through the terminals to the main computer, and may be communicated over telephone lines. A flexible inspection system involves more than one inspection station. The objective of the flexible inspection system is to provide an on-line, multi-station automated dimensional verification system, to increase the production rate, reduce inspection time, and maintain inspection accuracy and data processing integrity.
Machine Vision
A vision system can be defined as a system for the automatic acquisition and analysis of images to obtain desired data for interpreting or controlling an activity. It is a technique which allows a sensor to view a scene and derive a numerical or logical decision without further human intervention. Machine vision can be defined as a means of simulating the image recognition and analysis capabilities of the human vision system with electronic and electro-mechanical techniques. Machine vision systems are nowadays used to provide accurate and inexpensive 100% inspection of work pieces. They are used for functions like gauging of dimensions, identification of shapes, measurement of distances, determining orientation of parts, quantifying motion and detecting surface shading. Machine vision is best suited for high production, and these systems function without fatigue. It is well suited, for example, for inspecting the masks used in the production of micro-electronic devices. A standoff distance of up to one metre is possible.

Vision System

The schematic diagram of a typical vision system is shown. The system involves image acquisition and image processing. Acquisition requires appropriate lighting, a camera, and a means of storing the digital image. Image processing involves manipulating the digital image to simplify it and reduce the number of data points. Measurements can be carried out at any angle along the three reference axes x, y and z without contacting the part. The measured values are then compared with the specified tolerances, which are stored in the memory of the computer.

Fig 4.22 Machine Vision

The main advantages of a vision system are reduction of tooling and fixture costs, elimination of the need for precise part location when handling with robots, and integrated automation of dimensional verification and defect detection.
Principle
There are four stages in a machine vision system; the schematic arrangement is shown.
 Image formation.
 Processing of the image in a form suitable for analysis by computer.
 Defining and analysing the characteristics of the image.
 Interpretation of the image and decision-making.

Fig 4.23 Schematic arrangement of Machine Vision

Fig 4.24 Image Formation

For the formation of an image, a suitable light source is required; this may be an incandescent lamp, fluorescent tube, fibre-optic bundle or arc lamp. A laser beam is used in triangulation systems for measuring distance, and ultraviolet light is used to reduce glare or increase contrast. Proper illumination (back lighting, front lighting or structured light) is required: back lighting is used to obtain maximum image contrast, front lighting is used when the surface of the object is to be inspected, and structured lighting is required for inspecting three-dimensional features. An image sensor such as a vidicon camera or CCD camera is used to generate the electronic signal representing the image. The image sensor collects light from the scene through a lens and, using a photosensitive target, converts it into an electronic signal.

Vidicon camera
Image is formed by focusing the incoming light through a series of lenses onto the
photoconductive faceplate of the vidicon tube. The electron beam scans the photoconductive surface
and produces an analog voltage proportional to the variation in light intensity for each scan line of the
original scene.

Solid-state camera
The image sensors change coupled device (CCD) contain matrix of small array, photosensitive
elements accurately spaced and fabricated on silicon chips using integrated circuit technology. Each
detector converts in to analog signal corresponding to light intensity through the camera lens.

Image processor
A camera may form an image 30 times per sec at 33 m sec intervals. At each time interval the
entire image frozen by an image processor for processing. An analog to digital converter is used to
convert analog voltage of each detector in to digital value. If voltage level for each pixel is given by
either 0 or I depending on threshold value. It is called binary system on the other hand grey scale
system assigns upto 256 different values depending on intensity to each pixel. Grey scale system
requires higher degree of image refinement, huge storage processing capability.
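The binary quantisation described above can be sketched in a few lines of Python. This is a minimal illustration; the 0-255 intensity range and the threshold of 128 are assumed values, not figures given in the text.

```python
def binarize(frame, threshold=128):
    """Binary system: each pixel is reduced to 0 or 1 about a threshold.
    A grey-scale system would instead keep the raw 0-255 value per pixel."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

# A tiny 2 x 3 "frame" of intensity values (illustrative only).
frame = [[10, 200, 130],
         [90, 255, 40]]

print(binarize(frame))  # -> [[0, 1, 1], [0, 1, 0]]
```

The grey-scale alternative trades 8 bits of storage per pixel for the ability to distinguish 256 intensity levels instead of two.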

For analysis, a 256 x 256 pixel image with up to 256 different pixel values requires about 65,000
eight-bit storage locations per frame, delivered at a speed of 30 images per second. Techniques such as
windowing and image restoration are used to manage this data.

Windowing
Processing is restricted to the desired area of interest, ignoring the uninteresting parts of the image.
Image restoration
Preparation of the image during pre-processing by removing degradation. Blurring of lines, poor
contrast between images and the presence of noise are typical forms of degradation.
The quality may be improved
 By improving the contrast by brightness addition.
 By increasing the relative contrast between high and low intensity elements.
 By Fourier domain processing.
 Other data reduction techniques include edge detection and run length encoding.

Image Analysis
The digital image of the object is analyzed in the central processing unit of the system.
Three important tasks performed by a machine vision system are measuring the distance of an object
from the camera, determining object orientation and defining object position. The distance
of an object from a vision system camera can be determined by the triangulation technique.

Function of Machine Vision

 Lighting and presentation of the object to be evaluated have a great impact on repeatability,
reliability and accuracy.
 The lighting source and projection should be chosen to give sharp contrast.
 The image sensor (TV camera) may be vidicon or solid state.
 For simple processing, an analog comparator and a computer controller are used to convert the
video information into a binary image.
 The data compactor employs a high-speed array processor to provide high-speed processing of the
input image data.
 The system control computer communicates with the operator and makes decisions about the part
being inspected.
 The output and peripheral devices operate under the control of the system. The output enables the
vision system to either control a process or provide position and orientation information to
a robot, etc.
 These devices operate under the control of the system control computer.

FORM MEASUREMENT

INTRODUCTION
Threads are of prime importance; they are used as fasteners. A thread is a helical groove used to transmit
force and motion. In a plain shaft and hole assembly, the object of dimensional control is to ensure a
certain consistency of fit. The performance of screw threads during their assembly with a nut depends
upon a number of parameters, such as the condition of the machine tool used for screw cutting, the work
material and the tool.
Form measurement includes
 Screw thread measurement
 Gear measurement
 Radius measurement
 Surface Finish measurement
 Straightness measurement
 Flatness and roundness measurements
Screw Thread Measurement
Screw threads are used to transmit power and motion, and also to fasten two components
with the help of nuts, bolts and studs. There is a large variety of screw threads, varying in their form,
included angle, helix angle, etc. Screw threads are mainly classified into 1) external
threads and 2) internal threads.

Fig 3.1 External Thread Fig 3.2 Internal Thread

Screw Thread Terminology

Fig 3.3 Screw Thread


Pitch
It is the distance measured parallel to the screw thread axis between corresponding points on
two adjacent threads in the same axial plane. The basic pitch is equal to the lead divided by the number
of thread starts.
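The pitch-lead relation just stated can be checked numerically. A small sketch (the 1.5 mm and 3 mm values are illustrative, not from the text):

```python
def basic_pitch(lead, starts):
    # Basic pitch = lead / number of thread starts, as defined above.
    return lead / starts

# Single-start thread: lead equals pitch.
print(basic_pitch(1.5, 1))  # -> 1.5
# Double-start thread advancing 3.0 mm per revolution has the same pitch.
print(basic_pitch(3.0, 2))  # -> 1.5
```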
Minor diameter:
It is the diameter of an imaginary co-axial cylinder which touches the roots of an external thread and the
crests of an internal thread.
Major diameter:
It is the diameter of an imaginary co-axial cylinder which touches the crests of an external thread and the
root of an internal thread.
Lead:
The axial distance advanced by the screw in one revolution is the lead.
Pitch diameter:
It is the diameter of an imaginary co-axial cylinder on which the width of the thread and the width of the
space between threads are equal, each being half the pitch.

Helix angle:
It is the angle made by the helix of the thread at the pitch line with the axis. The angle is measured in an
axial plane.

Flank angle:
It is the angle between the flank and a line normal to the axis passing through the apex of the thread.

Height of thread:
It is the distance measured radially between the major and minor diameters.

Addendum:
Radial distance between the major and pitch cylinders for external thread. Radial distance between the
minor and pitch cylinder for internal thread.

Dedendum:
It is the radial distance between the pitch and minor cylinders for external thread. Also radial distance
between the major and pitch cylinders for internal thread.

Error in Thread
The errors in screw threads may arise during the manufacture or storage of threads. The errors
may occur in the following six main elements of the thread:
 Major diameter error
 Minor diameter error
 Effective diameter error
 Pitch error
 Flank angles error
 Crest and root error
1) Major diameter error
It may cause reduction in the flank contact and interference with the matching threads.
2) Minor diameter error
It may cause interference, reduction of flank contact.
3) Effective diameter error
If the effective diameter is small the threads will be thin on the external screw and thick on an internal
screw.

4) Pitch errors
If error in pitch, the total length of thread engaged will be either too high or too small.
The various pitch errors may classified into
 Progressive error
 Periodic error
 Drunken error

 Irregular error
1) Progressive error
The pitch of the thread is uniform but is longer or shorter than its nominal value; this is called
progressive error.
Causes of progressive error:
 Incorrect linear and angular velocity ratio.
 Incorrect gear train and lead screw.
 Saddle fault.
 Variation in length due to hardening.

Fig 3.4 Progressive Error


2) Periodic error
These errors repeat themselves at regular intervals along the thread.
Causes of periodic error:
 Non-uniform tool-work velocity ratio.
 Teeth error in gears.
 Lead screw error.
 Eccentric mounting of the gears.
3) Drunken error
Drunken errors repeat once per turn of the thread. In a drunken thread the pitch measured
parallel to the thread axis is correct, but the thread is not cut to a true helix; this produces the
drunken thread error.

Fig 3.5 Drunken Error

4) Irregular errors
These errors vary in an irregular manner along the length of the thread.
Irregular error causes:
1. Machine fault.
2. Non-uniformity in the material.
3. Cutting action is not correct.
4. Machining disturbances.

Effect of pitch errors


 Increase the effective diameter of the bolt and decrease the effective diameter of the nut.
 The functional diameter of the nut will be less.
 Reduce the clearance.
 Increase the interference between mating threads.

Measurement of various elements of Thread


To find out the accuracy of a screw thread it will be necessary to measure the following:
1. Major diameter.
2. Minor diameter.
3. Effective or Pitch diameter.
4. Pitch
5. Thread angle and form
1. Measurement of major diameter:
The instruments which are used to find the major diameter are by
 Ordinary micrometer
 Bench micrometer.

Ordinary micrometer
The ordinary micrometer is quite suitable for measuring the external major diameter. It is first set on a
calibrated setting cylinder (S) having approximately the same diameter; this process is known as
'gauge setting'. After taking this reading R1, the micrometer is set on the major diameter of the thread,
and the new reading R2 is taken.
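Assuming the usual gauge-setting correction (the text gives the readings but stops before the working relation), the major diameter follows as S plus the difference of the two readings:

```python
def major_diameter(S, R1, R2):
    # Gauge-setting correction: the micrometer error found on the setting
    # cylinder S (reading R1) is applied to the reading R2 taken on the thread.
    # Standard relation, assumed: D = S + (R2 - R1).
    return S + (R2 - R1)

# Illustrative readings in mm (not from the text).
print(round(major_diameter(S=20.000, R1=20.002, R2=20.512), 3))  # -> 20.51
```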

Bench micrometer
For greater accuracy the bench micrometer is used for measuring the major diameter.
In this process variations in measuring pressure and pitch errors are minimized. The instrument
has a micrometer head with a vernier scale reading to an accuracy of 0.002 mm. A calibrated setting
cylinder having the same diameter as the major diameter of the thread to be measured is used as the
setting standard. After setting the standard, the setting cylinder is held between the anvils and the
reading is taken. Then the cylinder is replaced by the threaded workpiece and the new reading is taken.

Fig 3.6 Bench Micrometer

Measurement of the major diameter of an Internal thread


The major diameter of an internal thread is usually measured by a thread comparator fitted with ball-ended styli.
First the instrument is set on a cylindrical reference having the same diameter as the major diameter
of the internal thread, and the reading is taken. Then the floating head is retracted so that the tips
of the styli engage the roots of the thread under spring pressure, and the new reading is taken.

2. Measurement of Minor diameter


The minor diameter is measured by a comparative method using a floating carriage
diameter measuring machine and small V-pieces which make contact with the roots of the thread.
These V-pieces are made in several sizes, having suitable radii at the edges, and are made of
hardened steel. The floating carriage diameter measuring machine is a bench micrometer mounted on a
carriage.

Fig 3.7 Measurement of Minor diameter

Measurement process
The threaded workpiece is mounted between the centres of the instrument, the V-pieces are placed
one on each side of the workpiece, and the reading is noted. The workpiece is then replaced by a
standard reference cylindrical setting gauge and a second reading is taken; the difference between the
two readings gives the error in minor diameter.

Measurement of Minor diameter of Internal threads


The Minor diameter of internal threads are measured by
o Using taper parallels
o Using Rollers.
 Using taper parallels
For diameters less than 200 mm the use of taper parallels and a micrometer is very common. The taper
parallels are pairs of wedges having reduced and parallel outer edges. The diameter across their
outer edges can be changed by sliding them over each other.

Fig 3.8 Taper parallels

Using rollers
For diameters greater than 200 mm this method is used. Precision rollers are inserted inside the thread and
a proper slip gauge is inserted between the rollers. The minor diameter is then the length of the slip gauges
plus twice the diameter of the rollers.
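The roller relation above is a one-line calculation. A sketch with illustrative numbers (the 18.4 mm slip-gauge stack and 3 mm rollers are assumed, not from the text):

```python
def minor_diameter(slip_gauge_length, roller_diameter):
    # Minor diameter = length of slip gauges + twice the roller diameter,
    # as stated for the roller method above.
    return slip_gauge_length + 2 * roller_diameter

print(minor_diameter(18.4, 3.0))  # -> 24.4
```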

3. Measurement of effective diameter

Fig 3.9 Roller gauge

Effective diameter measurement is carried out by following methods.


1. One wire,
2. Two wires, or
3. Three wires method.
4. Micrometer method.
a) One wire method
Only one wire is used in this method. The wire is placed between two threads on one side, and on the
other side the anvil of the measuring micrometer contacts the crests. First the micrometer reading d1 is
noted on a standard gauge whose dimension is approximately the same as that to be obtained by this method.

Fig 3.10 One wire method

b) Two wire method
The two-wire method of measuring the effective diameter of a screw thread is given below. In this method
wires of suitable size are placed between the standard and the micrometer anvils. First the micrometer
reading is taken; let it be R1. Then the standard is replaced by the screw thread to be measured and the
new reading R2 is taken.

Fig 3.11 Two Wire Method
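The working formula is not reproduced in the text; the standard two-wire relations for a symmetric thread are E = T + P, with T = M - 2d (dimension under the wires) and P = (p/2)cot(x/2) - d(cosec(x/2) - 1). A sketch under those assumptions, with illustrative M20 x 2.5 numbers:

```python
import math

def effective_diameter(M, wire_dia, pitch, thread_angle_deg=60.0):
    """Two-wire method (standard relations assumed, not quoted in the text).

    M         : corrected measurement over the wires
    wire_dia  : wire diameter d
    pitch     : thread pitch p
    """
    half = math.radians(thread_angle_deg) / 2
    T = M - 2 * wire_dia                          # dimension under the wires
    P = (pitch / 2) / math.tan(half) - wire_dia * (1 / math.sin(half) - 1)
    return T + P                                  # effective diameter E = T + P

# M20 x 2.5 thread with best-size wires d = 0.577 * p (illustrative values).
print(round(effective_diameter(20.539, 0.577 * 2.5, 2.5), 3))  # -> 18.377
```

For a 60 degree thread the correction collapses to the familiar E = M - 3d + 0.866p.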

c) Three-Wire method
The three-wire method is the most accurate method. In this method three wires of equal and precise
diameter are placed in the grooves at opposite sides of the screw: one wire on one side and two
on the other side. The wires may either be held by hand or hung from a stand. This method
ensures the alignment of the micrometer anvil faces parallel to the thread axis.

Fig 3.12 Three-Wire Method

BEST WIRE SIZE-DEVIATION


The best wire diameter is the one that makes contact with the flanks of the thread on the pitch line,
as shown in the figure.
Hence the best wire diameter is d = (p/2) sec(x/2), where p is the pitch and x the included thread angle.
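The best-size wire for a symmetric thread is commonly taken as d = (p/2) sec(x/2); this standard relation is assumed here, since the formula itself is not reproduced above. A sketch:

```python
import math

def best_wire_diameter(pitch, thread_angle_deg=60.0):
    # Best-size wire touches the flanks exactly at the pitch line:
    # d = (p / 2) * sec(x / 2) for included thread angle x.
    return (pitch / 2) / math.cos(math.radians(thread_angle_deg) / 2)

print(round(best_wire_diameter(2.0), 4))        # ISO metric 60 deg -> 1.1547
print(round(best_wire_diameter(2.0, 55.0), 4))  # Whitworth form, 55 deg
```

For the 60 degree metric form this reduces to the familiar d = 0.577p.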

4. Pitch measurement
The most commonly used methods for measuring the pitch are
1. Pitch measuring machine
2. Tool maker’s microscope
3. Screw pitch gauge
Pitch measuring machine
The principle of the method is to move the stylus along the screw, parallel to its axis,
from one thread space to the next. The pitch measuring machine provides a relatively simple and accurate
method of measuring the pitch. Initially, with the micrometer reading near zero on the scale, the indicator
is moved along to bring the stylus into a thread space; the indicator is then adjusted radially until the
stylus engages between the thread flanks and the pointer K is opposite the line L. A small movement of
the micrometer brings T opposite its index mark, and the reading is taken.

Fig 3.13 Pitch Measuring Machine

The stylus is moved along into the next space by rotation of the micrometer and a second
reading is taken. The difference between these two readings is the pitch of the
thread.

Tool makers microscope

Fig 3.14 Tool Makers Microscope

Working
The worktable is mounted on the base of the instrument. The optical head is mounted on
a vertical column and can be moved up and down. The workpiece is mounted on a glass plate.
A light source provides a horizontal beam of light which is reflected from a mirror by 90°
upwards towards the table. The image of the outline contour of the workpiece passes through the objective
of the optical head and is projected by a system of three prisms to a ground-glass screen. The
measurements are made by means of cross lines engraved on the ground-glass screen. The screen can
be rotated through 360°. Different types of graduated screens and eyepieces are used.

Applications
 Linear measurements.
 Measurement of pitch of the screw.
 Measurement of pitch diameter.
 Measurement of thread angle.
 Comparing thread forms.
 Centre to center distance measurement.
 Thread form and flank angle measurement

Thread form and flank angle measurement


Optical projection is used to check the thread form and the angles of the thread. The projector is
equipped with workholding fixtures, a lamp and lenses. The light rays are directed into the
cabinet by prisms and mirrors, and an enlarged image of the thread is drawn. The ideal and actual forms
are compared for the measurement.

GEAR MEASUREMENT
Introduction


A gear is a mechanical drive which transmits power through toothed wheels. In a gear drive, the
driving wheel is in direct contact with the driven wheel. The accuracy of gearing is a very important
factor when gears are manufactured. The transmission efficiency of gears is almost 99%, so it is very
important to test and measure gears precisely. For proper inspection of a gear, it is very important to
concentrate on the raw material used to manufacture it, and also to check the machining of the blanks,
the heat treatment and the finishing of the teeth. Gear blanks should be tested for dimensional accuracy,
and the teeth for thickness and form.
The most commonly used forms of gear teeth are
1. Involute
2. Cycloidal
Involute gears are also called straight-tooth or spur gears. Cycloidal gears are used for heavy
and impact loads. The involute rack has straight teeth. The pressure angle for involute gears is either 20° or
14.5°.

Types of gears
1. Spur gear
A cylindrical gear whose tooth trace is a straight line. These are used for transmitting power between
parallel shafts.

2. Spiral gear
A gear whose tooth trace is a curved line.

3. Helical gears
These gears are used to transmit power between parallel shafts as well as non-parallel, non-
intersecting shafts. It is a cylindrical gear whose tooth trace is a helix.

4. Bevel gears:
The tooth traces are straight-line generators of a cone. The teeth are cut on a conical surface and
are used to connect shafts at right angles.

5. Worm and Worm wheel:


It is used to connect the shafts whose axes are non-parallel and non-intersecting.

6. Rack and Pinion:


Rack gears are straight spur gears with infinite radius.

Gear terminology
1. Tooth profile
It is the shape of any side of gear tooth in its cross section.
2. Base circle
It is the circle of the gear from which the involute profile is derived.
Base circle diameter = Pitch circle diameter x cosine of the pressure angle of the gear.

3. Pitch circle diameter (PCD)


The diameter of a circle which will produce the same motion as the toothed gear wheel.

4. Pitch circle
It is the imaginary circle of the gear that rolls without slipping over the pitch circle of its mating gear.

5. Addendum circle
The circle coincides with the crests (or) tops of teeth.

6. Dedendum circle (or) Root circle

This circle coincides with the roots (or) bottom on teeth.


7. Pressure angle (α)
It is the angle made by the line of action with the common tangent to the pitch circles of the mating
gears.

8. Module(m)
It is the ratio of the pitch circle diameter to the total number of teeth: m = d/n, where d = pitch circle
diameter and n = number of teeth.

9. Circular pitch
It is the distance along the pitch circle between corresponding points of adjacent teeth.

10. Addendum
Radial distance between the tip circle and the pitch circle. Addendum value = 1 module.
11. Dedendum
Radial distance between the pitch circle and the root circle. Dedendum value = 1.25 module.

12. Clearance (C)


The radial distance between the tip of one gear tooth and the root of the mating gear tooth. Clearance =
difference between dedendum and addendum values.

13. Blank diameter:


The diameter of the blank from which the gear is cut. Blank diameter = PCD + 2m.
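The relations in the terminology above (module system) can be collected into one small helper. The 40-tooth, 2.5 mm module example is illustrative:

```python
import math

def gear_dimensions(teeth, module):
    """Relations from the terminology above: PCD = m * n, addendum = m,
    dedendum = 1.25 m, clearance = dedendum - addendum = 0.25 m,
    blank diameter = PCD + 2m, circular pitch = pi * m."""
    pcd = teeth * module
    return {
        "pcd": pcd,
        "addendum": module,
        "dedendum": 1.25 * module,
        "clearance": 0.25 * module,
        "blank_diameter": pcd + 2 * module,
        "circular_pitch": math.pi * module,
    }

dims = gear_dimensions(teeth=40, module=2.5)
print(dims["pcd"], dims["blank_diameter"])  # -> 100.0 105.0
```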
14. Face:
Part of the tooth in the axial plane lying between tip circle and pitch circle.
15. Flank:
Part of the tooth lying between pitch circle and root circle.
16. Top land:
Top surface of a tooth.
17. Lead angle
The angle between the tangent to the helix and plane perpendicular to the axis of cylinder.

76
18. Backlash:
The difference between the width of the tooth space and the thickness of the tooth with which it meshes.
If the tooth thickness is t1 and the width of the space is t2, then backlash = t2 - t1.

Fig 3.15 Gear Profile

Gear errors

 Profile error: - The maximum distance of any point on the tooth profile from the design
profile.
 Pitch error: - Difference between the actual and design pitch.
 Cyclic error: - Error occurring in each revolution of the gear.
 Run out: - Total range of reading of a fixed indicator with the contact point applied to a surface
rotated, without axial movement, about a fixed axis.
 Eccentricity: - Half the radial run out.
 Wobble: - Run out measured parallel to the axis of rotation at a specified distance from the axis.
 Radial run out: - Run out measured along a perpendicular to the axis of rotation.
 Undulation: - Periodical departure of the actual tooth surface from the design surface.
 Axial run out: - Run out measured parallel to the axis of rotation, at a specified distance from the axis.
 Periodic error: - Error occurring at regular intervals.

Gear Measurement
The inspection of gears consists of determining the following elements, in which manufacturing errors
may be present.
1. Runout.
2. Pitch
3. Profile
4. Lead
5. Back lash
6. Tooth thickness
7. Concentricity
8. Alignment
1. Runout:
Runout means eccentricity in the pitch circle. It gives periodic vibration during each revolution of the
gear and can lead to tooth failure. Run out is measured by means of eccentricity testers.
In the test the gear is placed on a mandrel; the dial indicator of the tester carries a special tip, chosen
according to the module of the gear, which is inserted between the tooth spaces. The gear is rotated
tooth by tooth and the variation is noted from the dial indicator.
2. Pitch measurement:
There are two ways for measuring the pitch.
1. Point to point measurement (i.e. one tooth point to the next tooth point)
2. Direct angular measurement
1. Tooth to Tooth measurement

Fig 3.16 Tooth to tooth measurement

The instrument has three tips. One is a fixed measuring tip; the second is a sensitive tip whose
position can be adjusted by a screw; and the third tip is an adjustable guide stop. The distance
between the fixed and sensitive tips is set equal to the base pitch of the gear. All three tips
contact the teeth when the instrument is set, and the reading on the dial indicator is the error in the base
pitch.

2. Direct Angular Measurement


It is the simplest method for measuring the error, using a dial gauge set against a tooth. In this
method the position of a suitable point on a tooth is measured after the gear has been indexed through a
suitable angle. If the gear is not indexed through exactly the angular pitch, the reading differs from the
original reading; the difference between these readings is the cumulative pitch error.

3. Profile checking
The methods used for profile checking is
 Optical projection method.
 Involute measuring machine.

1. Optical projection method:


The profile of the gear is projected on a screen by an optical lens and the projected profile is compared
with a master profile.

2. Involute measuring machine:

Fig 3.17 Involute Measuring Machine

In this method the gear is held on a mandrel, and a circular disc of the same diameter as the base circle
of the gear is fixed on the mandrel. After fixing the gear on the mandrel, the straight
edge of the instrument is brought into contact with the base circle disc. Now the gear and disc are
rotated and the edge moves over the disc without slip. A stylus moves over the tooth profile and
the error is indicated on the dial gauge.

4. Lead checking:
Lead is checked by lead checking instruments. Lead is the axial advance of a helix in
one complete turn. A lead checking instrument advances a probe along the tooth surface, parallel
to the axis, as the gear rotates.

5. Backlash checking:
Backlash is the distance through which a gear can be rotated to bring its nonworking flank in
contact with the teeth of mating gear. Numerical values of backlash are measured at the tightest point of
mesh on the pitch circle.

There are two types of backlash


1. Circumferential backlash
2. Normal backlash

To determine backlash, one of the two gears of the pair is locked while the other is rotated
forward and backward, and the maximum displacement is measured with a comparator. When the
displacement is measured tangentially to a reference cylinder near the pitch circle, it is called the
circumferential backlash.

6. Tooth thickness measurement:


Tooth thickness is generally measured at the pitch circle; in most cases the chordal thickness is
measured, i.e. along the chord joining the intersections of the tooth profile with the
pitch circle.
The methods which are used for measuring the gear tooth thickness is

a) Gear tooth vernier caliper method (Chordal thickness method)

b) Base tangent method.

c) Constant chord method.

d) Measurement over pins or balls.

a) Gear tooth vernier method


In the gear tooth vernier method the thickness is measured at the pitch line. Gear tooth thickness varies
from the tip to the base circle of the tooth, and the instrument is capable of measuring the thickness at a
specified position on the tooth. The gear tooth vernier caliper consists of a vernier scale and two
perpendicular arms; one arm is used to measure the thickness and the other to measure the depth.
The horizontal vernier scale reading gives the chordal thickness (W) and the vertical vernier scale gives
the chordal addendum (d). Finally the measured values are compared with the calculated ones.
The theoretical values of W and d can be found by considering one tooth of the gear, and they can
then be verified. In the figure, note that W is the chord ADB, while the tooth thickness is specified by the
arc AEB. The distance d is set on the instrument; it is slightly greater than the addendum CE.
In the vernier method the chordal thickness and chordal addendum depend upon the number of
teeth, so when measuring a large number of gears a separate calculation must be made for each
gear. These difficulties are avoided by the constant chord method.
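The theoretical caliper settings are commonly computed as W = N·m·sin(90°/N) and d = (N·m/2)(1 + 2/N - cos(90°/N)); these standard relations are assumed from general practice, since the text does not reproduce them. A sketch:

```python
import math

def chordal_dimensions(teeth, module):
    """Settings for a gear tooth vernier caliper (standard relations, assumed).
    Returns (W, d): chordal thickness and chordal addendum."""
    theta = math.radians(90.0 / teeth)  # half the angle subtended by one tooth
    W = teeth * module * math.sin(theta)
    d = (teeth * module / 2) * (1 + 2 / teeth - math.cos(theta))
    return W, d

# Illustrative gear: 40 teeth, module 2.5 mm.
W, d = chordal_dimensions(teeth=40, module=2.5)
print(round(W, 3), round(d, 3))  # -> 3.926 2.539
```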

b) Measurement over rollers or balls
A very good and convenient method for measuring the thickness of gear teeth. In this method two or three
rollers of different sizes are used to check the variation at several places on the tooth.
7. Measurement of concentricity
In setting gears, the centre about which the gear is mounted should coincide with the centre from
which the gear was generated. The concentricity is easily checked by mounting the gear between centres
and measuring the variation in height of a roller placed between successive teeth. The variation in
reading is a function of the eccentricity present.

8. Alignment checking
It is done by placing a parallel bar between the gear teeth and the gear being mounted between
centres. Finally the readings are taken at the two ends of the bar and difference in reading is the
misalignment.

Parkinson Gear Tester


Working principle
The master gear is fixed on a vertical spindle, and the gear to be tested is fixed on a similar spindle
mounted on a carriage. The carriage can slide on the base, and the two gears are maintained in mesh
by spring pressure. When the gears are rotated, the movement of the sliding carriage is indicated by a dial
indicator, and these variations are a measure of any irregularities. The variation is recorded on a waxed
circular chart fitted in a recorder. The gears are fitted on mandrels and are free to rotate without
clearance; the left mandrel is fixed along the table and the right mandrel moves with the spring-loaded
carriage.

Fig 3.18 Parkinson Gear Tester

The two spindles can be adjusted so that their centre distance can be set; a scale is attached to one side
and a vernier to the other, enabling the centre distance to be measured to within 0.025 mm. Any errors
in tooth form, pitch or concentricity of the pitch line will, when the gears are in close mesh, cause a
variation in centre distance; this movement of the carriage, as indicated on the dial gauge, shows the
errors in the gear under test. A recorder can also be fitted, in the form of a circular or rectangular chart,
so that the errors are recorded.
Limitations of Parkinson gear tester:
1. Accuracy is limited to ±0.001 mm.
2. Maximum gear diameter is 300 mm.
3. Errors are not clearly identified.
4. Measurement depends upon the accuracy of the master gear.
5. Low friction is needed in the movement of the floating carriage.

RADIUS MEASUREMENT
In radius measurement two methods are considered:
1. Radius of a circle
2. Radius of a concave surface
1. Radius of circle

Fig 3.19 Radius Measurement

This radius measurement requires the use of a vernier caliper, a C-clamp, a surface plate and two balls.
The method is very useful for measuring a bearing cap. The job is first fixed on the surface plate with the
C-clamp so that the flat face of the circular part touches the surface plate. Next the two balls are placed
one on each side of the work, and readings are taken with the vernier caliper.

Let R = radius of the job, l = the reading between the two balls.

2. Radius of concave surface
Here there are two methods
 Edges are well defined.
 Edges are rounded up

Edges are well defined


In this method the radius is calculated using a surface plate, height gauge, angle plate, C-clamp and slip
gauges. First the job is placed on the surface plate and the depth h of the cavity is measured with a
depth micrometer. Next the part is set so that the cavity rests against the angle plate and is clamped in
this position. Using a height gauge the edge-to-edge size of the hole is measured; this is the
diameter d.

Fig 3.20 Radius of Concave surface


2) Edges are rounded up


When the cavity edges are rounded, the radius is measured with a depth micrometer and slip gauges.
First the width d of the micrometer base is measured with slip gauges. The micrometer is then placed in
the cavity and the measuring tip is lowered until it touches the base of the cavity. In this position the
reading h is noted, and the radius is calculated from d and h.
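The formula itself is not reproduced in the text; from the chord-depth geometry (a chord of width d whose midpoint lies at depth h below the surface), the radius follows as r = d²/(8h) + h/2. A sketch with illustrative numbers:

```python
def concave_radius(d, h):
    # Chord-sagitta relation: a chord of length d at depth h lies on a
    # circle of radius r = d^2 / (8 * h) + h / 2.
    return d * d / (8 * h) + h / 2

# e.g. measured width d = 40 mm and depth h = 5 mm (illustrative values)
print(concave_radius(40.0, 5.0))  # -> 42.5
```

The same relation applies to the well-defined-edge case, with d taken as the edge-to-edge hole diameter.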

Fig 3.21 Edges round up

SURFACE FINISH MEASUREMENT

Introduction

When components are produced by the various manufacturing processes it is not possible to
produce a perfectly smooth surface; some irregularities are always formed. These irregularities
cause serious difficulties in using the components, so it is very important to control the surfaces
before use. The factors affecting surface roughness are
 Workpiece material
 Vibrations
 Machining type
 Tool and fixtures
The geometrical irregularities can be classified as
 First order
 Second order
 Third order
 Fourth order

1. First order irregularities
These are caused by lack of straightness of guide ways on which tool must move.

2. Second order irregularities


These are caused by vibrations
3. Third order irregularities
These are caused by machining.
4. Fourth order irregularities
These are caused by improper handling of machines and equipment.

Elements of surface texture

 Profile: - Contour of any section through a surface.
 Lay: - Direction of the predominant surface pattern.
 Flaws: - Surface irregularities or imperfections which occur at infrequent intervals.
 Actual surface: - Surface of a part which is actually obtained.
 Roughness: - Finely spaced irregularities. It is also called primary texture.
 Sampling length: - Length of profile necessary for the evaluation of the irregularities.
 Waviness: - Surface irregularities which are of greater spacing than roughness.
 Roughness height: - Rated as the arithmetical average deviation.

Fig 3.22 Surface Texture

 Roughness width: - Distance parallel to the nominal surface between successive peaks.
 Mean line of profile: - Line dividing the effective profile such that, within the sampling length, the
sum of the squares of the ordinates from it is a minimum.
 Centre line of profile: - Line dividing the effective profile such that the areas embraced by the
profile above and below the line are equal.

Analysis of surface finish
The analysis of surface finish is carried out by
 The average roughness method.
 The peak-to-valley height method.
 The form factor method.
1. Average roughness measurement
The assessment of average roughness is carried out by
 Centre line average (CLA).
 Root mean square (RMS).
 Ten point method.
a. C.L.A. method
The surface roughness is measured as the average deviation from the nominal surface.

b. R.M.S. method
The roughness is measured as the root mean square deviation from the nominal surface. Let h1, h2, ...
be the heights of the ordinates and L the sampling length.

c. Ten point height method


The average difference between the five highest peaks and the five lowest valleys of the surface within the sampling
length is taken, the irregularity value being (sum of the five highest peaks minus sum of the five lowest valleys) divided by five.
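As an illustration (not from the original text), the three average-roughness measures above can be computed from a set of profile ordinates measured from the mean line. The ordinate values below are hypothetical:

```python
import math

def cla(heights):
    """Centre Line Average (Ra): mean of the absolute deviations."""
    return sum(abs(h) for h in heights) / len(heights)

def rms(heights):
    """Root Mean Square (Rq): square root of the mean of the squares."""
    return math.sqrt(sum(h * h for h in heights) / len(heights))

def ten_point(heights):
    """Ten-point height (Rz): average of the five highest peaks minus
    the average of the five lowest valleys."""
    s = sorted(heights)
    return (sum(s[-5:]) - sum(s[:5])) / 5

# hypothetical ordinates (micrometres) over one sampling length
h = [2.0, -1.5, 3.0, -2.5, 1.0, -0.5, 2.5, -3.0, 1.5, -1.0]
print(cla(h), rms(h), ten_point(h))
```

Because the R.M.S. value squares each ordinate, it weights the larger deviations more heavily, so Rq is always at least as large as Ra for the same profile.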

2. Peak to valley height method
Peak to valley height measures the maximum depth of the surface irregularities over a given sample
length and largest value of the depth is accepted for the measurement.

 The maximum peak to valley height is first found in each sampling length
 R = maximum peak to valley height
 V = valley
 P = peak
Here, R is the maximum peak to valley height within the assessment length. The disadvantage of R is that
its value comes from a single peak or valley, which may not give a true picture of the actual profile of
the surface.
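This rule can be sketched numerically (data are hypothetical; the sampling length is taken here as five ordinates):

```python
def peak_to_valley(profile, samples_per_length):
    """Maximum peak-to-valley height: split the profile into sampling
    lengths, take (max - min) in each, and accept the largest value."""
    depths = []
    for i in range(0, len(profile), samples_per_length):
        chunk = profile[i:i + samples_per_length]
        depths.append(max(chunk) - min(chunk))
    return max(depths)

# hypothetical profile spanning two sampling lengths of 5 points each
p = [1.0, -2.0, 0.5, 1.5, -0.5, 2.0, -1.0, 0.0, 3.0, -2.5]
print(peak_to_valley(p, 5))
```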

3. Form factor
It is obtained by measuring the area of material above the arbitrarily chosen base line in the section and
the area of the enveloping rectangle.

Fig 3.23 Form factor
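The ratio described above can be estimated numerically, e.g. by trapezoidal integration of the profile heights above the chosen base line. The heights and spacing in this sketch are made up:

```python
def form_factor(heights, dx):
    """Form factor: area of material above the base line divided by
    the area of the enveloping rectangle."""
    # trapezoidal estimate of the material area above the base line
    material = sum((heights[i] + heights[i + 1]) / 2 * dx
                   for i in range(len(heights) - 1))
    # enveloping rectangle: profile length x maximum height
    envelope = (len(heights) - 1) * dx * max(heights)
    return material / envelope

h = [1.0, 2.0, 1.5, 2.5, 2.0]   # hypothetical heights above the base line
print(form_factor(h, 0.1))
```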

Methods of measuring surface finish


The methods used for measuring surface finish are classified into
1. Inspection by comparison
2. Direct instrument measurements
1. Inspection by comparison methods:
In these methods the surface texture is assessed by observation of the surface. The surface to be tested is
compared with a specimen of known roughness value, finished by a similar machining process.

The various methods which are used for comparison are


 Touch Inspection.
 Visual Inspection.
 Microscopic Inspection.
 Scratch Inspection.
 Micro Interferometer.
 Surface photographs.
 Reflected Light Intensity.
 Wallace surface Dynamometer.

Touch Inspection
It is used when the surface roughness is very high. In this method the fingertip is moved along
the surface at a speed of 25 mm/second, and irregularities as small as 0.0125 mm can be detected.
Visual Inspection
In this method the surface is inspected by the naked eye; this measurement is limited to rough
surfaces.

Microscopic Inspection
In this method a finished surface specimen is placed under the microscope and compared with the surface
under inspection. A light beam may also be used to check the finished surface, by projecting the light at about 60°
to the work.

Scratch Inspection:
Materials like lead or plastic are rubbed on the surface to be inspected. The impression
of the scratches so produced is then visualized.

Micro-Interferometer
Optical flat is placed on the surface to be inspected and illuminated by a monochromatic source
of light.

Surface Photographs
Magnified photographs of the surface are taken with different types of illumination. Defects
such as irregularities appear as dark spots, while flat portions of the surface appear bright.

Reflected light Intensity


A beam of light is projected on the surface to be inspected, and the variation of light intensity reflected from the
surface is measured by a photocell; this measured value is calibrated in terms of surface roughness.

Wallace surface Dynamometer:


It consists of a pendulum in which the testing shoes are clamped to a bearing surface, and a
predetermined spring pressure can be applied. The pendulum is lifted to its initial starting position
and allowed to swing over the surface to be tested.

2. Direct instrument measurements


Direct methods enable a numerical value of the surface finish of any surface to be determined. These
methods are quantitative analysis methods, and the output is used to operate a recording or indicating
instrument. Direct instruments are operated on electrical principles and are classified into two types
according to the operating principle: one type operates on the carrier-modulating principle and the
other on the voltage-generating principle; in both types the output is amplified.
Some of the direct measurement instruments are
1. Stylus probe instruments.
2. Tomlinson surface meter.
3. Profilometer.
4. Taylor-Hobson Talysurf
1. Stylus probe type instrument
Principle
When the stylus is moved over the surface to be measured, the irregularities in the surface
texture are traced, and this record is used to assess the surface finish of the workpiece.

Working
The stylus type instruments consist of a skid, a stylus, an amplifying device and a recording device. The
skid is slowly moved over the surface by hand or by motor drive; it follows the general form of
the surface, and the stylus moves along with it. The stylus moves vertically up and down over the
irregularities, and its movements are magnified, amplified and recorded to produce a trace, which is
then analysed by an automatic device.
Advantage
Any desired roughness parameter can be recorded.
Disadvantages
1. Fragile material cannot be measured.
2. High Initial cost.
3. Skilled operators are needed to operate.
2. Tomlinson Surface meter
This instrument uses mechanical-cum-optical means for magnification.
Construction
In this instrument the diamond stylus on the surface finish recorder is held by spring pressure against the
surface of a lapped cylinder. The lapped cylinder is supported on one side by a probe and on the other side by rollers.
The stylus is also attached to the body of the instrument by a leaf spring, and its height is adjustable
to enable the diamond to be positioned. A light spring steel arm is attached to the lapped cylinder;
the arm carries a diamond scriber at its end, and a smoked glass rests on the arm.
Fig 3.24 Tomlinson Surface meter

Working

When measuring surface finish, the body of the instrument is moved across the surface by a screw
rotation. The vertical movement of the probe caused by the surface irregularities makes the horizontal
lapped cylinder roll, and this rolling causes the movement of the arm. The arm movement drives the
diamond scriber over the smoked glass, so that the scriber movement, together with the horizontal
movement, produces a trace on the smoked glass plate; this trace is magnified by an optical projector.

3. Profilometer
It is an indicating and recording instrument used to measure roughness in microns. The main parts of
the instrument are the tracer and an amplifier. The stylus is mounted in the pickup, which consists of an
induction coil located in the field of a permanent magnet. When the stylus is moved over the surface to be
tested, it is displaced up and down due to the irregularities in the surface.
This movement makes the induction coil move in the field of the permanent magnet and
produces a voltage. This is amplified and recorded.

Fig 3.25 Profilometer


4. Taylor-Hobson Talysurf
It works on the carrier-modulating principle and is a more accurate method than the others.
The main parts of this instrument are a diamond stylus (0.002 mm tip radius) and a skid.

Principle
The irregularities of the surface are traced by the stylus and the movement of the stylus is converted
into changes in electric current.
Fig 3.26 Taylor-Hobson Instrument

Working
On two legs of the E-shaped stamping there are coils carrying an A.C. current, and these coils
form part of an oscillator. As the armature is pivoted about the central leg, the movement of the stylus causes the
air gap to vary and thus the amplitude is modulated. This modulation is then demodulated to recover the vertical
displacement of the stylus, and the demodulated output moves the pen recorder to produce a numerical
record and to make a direct numerical assessment.

STRAIGHTNESS MEASUREMENT
A line is said to be straight over a given length if the variation of the distance of its points from two
planes perpendicular to each other and parallel to the general direction of the line remains within the
specified tolerance limits. The tolerance on the straightness of a line is defined as the maximum deviation
in relation to the reference straight line joining the two extremities of the line to be checked.

Fig 3.27 Straightness Measurement


Straight edge
A straight edge is a measuring tool which consists of a length of steel of narrow and
deep section, in order to provide resistance to bending in the plane of measurement without excessive
weight. For checking the straightness of any surface, the straight edge is placed over the surface and the
two are viewed against the light, which clearly indicates the straightness. The gap between the straight
edge and the surface will be negligibly small for perfect surfaces. Straightness is measured by observing the
colour of light, produced by diffraction, while it passes through the small gap. If the colour of the light is red, it indicates
a gap of 0.0012 to 0.0075 mm. A more accurate method of finding straightness with a straight edge is to
place it on two equal slip gauges at the correct points for minimum deflection, and to measure the uniformity
of the space under the straight edge with slip gauges.
Test for straightness by using spirit level and Autocollimator
The straightness of any surface can be determined by either of these instruments by measuring
the relative angular positions of a number of adjacent sections of the surface to be tested. First a straight line
is drawn on the surface; it is then divided into a number of sections, the length of each section being
equal to the length of the spirit level base, or of the plane reflector's base in the case of the autocollimator. The
base of the spirit level block or reflector is fitted with two feet, so that only the feet have line contact
with the surface and the surface of the base does not touch the surface to be tested. The angular reading
obtained is between the specified two points, so the length of each section must be equal to the distance between
the centre lines of the two feet. The spirit level can be used only for the measurement of straightness of
horizontal surfaces, while the autocollimator can be used on surfaces in any plane. In the case of the spirit level,
the block is moved along the line in steps equal to the pitch distance between the centre lines of the feet, and the
angular variation of the direction of the block is noted. The angular variation can be converted into a
difference of height between two points by knowing the least count of the level and the length of the base.
Fig 3.28 Straightness using Auto-Collimator
In the case of the autocollimator, the instrument is placed at a distance of 0.5 to 0.75 m from the surface
to be tested. The parallel beam from the instrument is projected along the length of the surface to be
tested. A block fixed on two feet and fitted with a plane vertical reflector is placed on the surface with
the reflector face towards the instrument. The image of the cross wires of the collimator appears near
the centre of the field, and for a perfectly straight surface the image of the cross wires remains in the
field of the eyepiece for the complete movement of the reflector along the surface. The reflector is then moved
along the surface in steps equal to the centre distance between the feet, and the tilt of the reflector is noted
down in seconds from the eyepiece.
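The conversion from tilt readings to surface heights described above can be sketched as follows. The readings and the 100 mm reflector base length are hypothetical; each tilt reading (in seconds of arc) is converted to a rise over one step by the small-angle relation rise = base × angle, and the rises are accumulated:

```python
import math

def cumulative_heights(tilts_arcsec, base_mm):
    """Convert autocollimator tilt readings (seconds of arc) per step
    into height differences and accumulate them along the surface."""
    heights = [0.0]  # datum at the first reflector position
    for t in tilts_arcsec:
        # small-angle approximation: rise = base length x angle (radians)
        rise = base_mm * math.radians(t / 3600.0)
        heights.append(heights[-1] + rise)
    return heights  # mm, relative to the starting point

# hypothetical readings for a 100 mm reflector base
readings = [2.0, -1.0, 3.0, 0.5]  # seconds of arc per step
for h in cumulative_heights(readings, 100.0):
    print(f"{h * 1000:.2f} um")
```

A tilt of 2 seconds over a 100 mm base corresponds to a rise of roughly 1 micrometre, which shows the sensitivity of the method.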

FLATNESS TESTING
Flatness testing is possible by comparing the surface with an accurate surface; this method
is suitable for small plates but not for large surfaces. Mathematically, the flatness error of a surface
is the departure from flatness, defined as the minimum separation of a pair of parallel planes which
will contain all points on the surface. A surface can be considered to be
composed of an infinitely large number of lines. The surface will be flat only if all the lines are straight
and they lie in the same plane. In the case of a rectangular table, all the lines may be straight and parallel to the
sides of the rectangle in both perpendicular directions, and yet the surface may not be flat, being concave or convex
along the two diagonals.

For verification, it is essential to measure the straightness of diagonals in addition to the lines
parallel to the sides.
Thus the whole of the surface is divided by straight lines, as shown in the figure. The end lines AB, AD,
etc. are drawn away from the edges, as the edges of the
surface are not flat but get worn out by use and can fall off a little in accuracy.

The straightness of all these lines is determined and then those lines are related with each
other in order to verify whether they lie in the same plane or not.

Procedure for determining flatness

The fig. shows the flatness testing procedure.


o Carry out the straightness test and tabulate the reading up to the cumulative error column.

o Ends of lines AB, AD and BD are corrected to zero and thus the height of the points A, B and D are
zero.
Fig 3.29 Flatness Testing

The height of the point I is determined relative to the arbitrary plane ABD = 000. Point C is now fixed
relative to this arbitrary plane; since points B and D are set at zero, all intermediate points on BC and DC
can be corrected accordingly. The positions of H and G, and of E and F, are known, so it is now possible to fit in
lines HG and EF. This also provides a check on the previous evaluations, since the mid-points of these lines
should coincide with the position of mid-point I.

In this way, the height of all the points on the surface relative to the arbitrary plane ABD is known.
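One way to express the corrected readings on a common datum is to pass a plane through A, B and D and compute every point's height relative to it. This is an illustrative sketch only; the coordinates and heights are made up:

```python
def plane_through(p1, p2, p3):
    """Return coefficients (a, b, c) of z = a + b*x + c*y passing
    through three points (x, y, z) -- here the corners A, B, D."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # solve the 2x2 system for the slopes b and c by Cramer's rule
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    b = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
    c = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
    a = z1 - b * x1 - c * y1
    return a, b, c

# hypothetical corner positions, with A, B, D corrected to zero height
A, B, D = (0, 0, 0.0), (100, 0, 0.0), (0, 100, 0.0)
a, b, c = plane_through(A, B, D)

def height_above_plane(x, y, z):
    """Height of a surface point relative to the arbitrary plane ABD."""
    return z - (a + b * x + c * y)

print(height_above_plane(50, 50, 0.012))  # e.g. the centre point I
```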

ROUNDNESS MEASUREMENTS
Roundness is defined as a condition of a surface of revolution where all points of the surface
intersected by any plane perpendicular to a common axis (in the case of a cylinder or cone) are equidistant from that axis.

Devices used for measurement of roundness


 Diametral gauge.
 Circumferential confining gauge: a shaft is confined in a ring gauge and rotated
against a set indicator probe.
 Rotating on center
 V-Block
 Three-point probe.
 Accurate spindle.

1. Diametral method
The measuring plungers are located 180° apart and the diameter is measured at several places.
This method is suitable only when the specimen is elliptical or has an even number of lobes. A diametral
check does not necessarily disclose effective size or roundness, so this method is unreliable in
determining roundness.

2. Circumferential confining gauge


Fig. shows the principle of this method. It is useful for inspection of roundness in production.
This method requires highly accurate master for each size part to be measured. The clearance between
part and gauge is critical to reliability. This technique does not allow for the measurement of other related
geometric characteristics, such as concentricity, flatness of shoulders etc.
Fig 3.30 Confining Gauge

3. Rotating on centers
The shaft is inspected for roundness while mounted on centres. In this case, reliability depends
on many factors such as the angle of the centres, alignment of the centres, roundness and surface condition of the
centres and centre holes, and run-out of the piece. Out-of-straightness of the part will cause a doubled run-out
effect and appear as a roundness error.

4. V-Block
The set-up employed for assessing the circularity error by using a V-block is shown in
fig.

Fig 3.31 V-Block

The V-block is placed on a surface plate and the work to be checked is placed upon it.

A dial indicator is fixed in a stand and its feeler is made to rest against the surface of the work.
The work is rotated to measure the rise and fall of the workpiece.

For determining the number of lobes on the workpiece, the workpiece is first tested in a 60° V-block
and then in a 90° V-block. The number of lobes is then equal to the number of times the indicator pointer
deflects during a 360° rotation of the workpiece.
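The lobe-counting rule above can be sketched by counting the indicator deflections (local maxima) over one revolution. The readings below simulate a hypothetical 3-lobed part; the data and sampling step are made up:

```python
import math

def count_lobes(readings):
    """Count indicator deflections (local maxima) over one revolution;
    by the rule above this equals the number of lobes."""
    n = len(readings)
    peaks = 0
    for i in range(n):
        # compare with circular neighbours (the profile wraps around)
        prev, nxt = readings[i - 1], readings[(i + 1) % n]
        if readings[i] > prev and readings[i] > nxt:
            peaks += 1
    return peaks

# hypothetical readings for a 3-lobed part: deviation ~ cos(3*theta)
readings = [math.cos(3 * math.radians(t)) for t in range(0, 360, 5)]
print(count_lobes(readings))
```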

Limitations
a) The circularity error is greatly affected by the following factors:
o Depending on the magnitude of the circularity error, it is possible that the indicator shows no variation.
o The position of the instrument, i.e. whether it measures from the top or the bottom.
o The number of lobes on the rotating part.
b) The instrument position should be in the same vertical plane as the point of contact of the part with the
V-block.
c) A leaf spring should always be kept between the indicator plunger and the surface of the part.
5. Three point probe
The fig. shows three probes with 120° spacing. This arrangement is very useful for determining effective size,
as the probes perform like a 60° V-block. A 60° V-block will show no error for 5- and 7-lobed parts,
will magnify the error for 3-lobed parts, and will show partial error for randomly spaced lobes.
Fig 3.32 Three Point Probe

Roundness measuring spindle


The following two types of spindles are used.
1. Overhead spindle
The part is fixed on a staging platform and the overhead spindle carrying the comparator rotates separately
from the part. It can determine roundness as well as camming (circular flatness). The height of the
workpiece is limited by the location of the overhead spindle. Concentricity can be checked by extending
the indicator from the spindle, and thus the range of this check is limited.

2. Rotating table
The spindle is integral with the table and rotates along with it. The part is placed over the spindle and rotates
past a fixed comparator.

Fig 3.32 Rotating Table

Roundness measuring machine


Roundness is the property of a surface of revolution where all points on the surface are equidistant from
the axis. The roundness of any profile can be specified only when some centre is found from which to
make the measurements. Diameter and roundness are measured by different methods and
instruments: diameter is measured statically, whereas measuring roundness
always requires rotation.

Roundness measuring instruments are of two types.


 Rotating pick-up type.
 Turntable type.
These give accurate, fast and reliable measurements. In the rotating pick-up type the workpiece is
stationary and the pick-up revolves; in the turntable type the workpiece is rotated and the pick-up is stationary.
In the rotating type, the spindle is designed to carry only the light load of the pick-up, since the workpiece
remains stationary, which makes the spindle easier to make. In the turntable type the pick-up is not associated with the
spindle, which makes roundness easier to measure; repositioning the pick-up has no effect on the reference axis.
The pick-up converts the movement of the stylus into an electrical signal, which is processed,
amplified and fed to a polar recorder. A microcomputer is incorporated, with an integral visual
display unit, and the system is controlled from a compact keyboard, which increases the system's versatility,
scope and speed of analysis. The system is programmed to assess the roundness of the workpiece with
respect to any of the four internationally recognized reference circles. A visual display of the workpiece
profile can be obtained. The workpiece can be assessed over a full circumference, or over an undercut or
interrupted surface; with sufficient data the reference circle can be fitted to the profile. The
program also provides functions like auto centering, auto ranging, auto calibration and concentricity.

Modern Roundness Measuring Instruments


These are based on the use of a microprocessor to provide measurements of roundness quickly and in a
simple way, without manual assessment of out-of-roundness. The machine can do centering automatically,
calculate roundness, concentricity and straightness, and provide visual and digital displays.

A computer is used to speed up calculations and provide the standard reference circles.

(i) Least square circle


This is the circle for which the sum of the squares of a sufficient number of equally spaced radial ordinates, measured from
the circle to the profile, has a minimum value. The centre of such a circle is referred to as the least squares
centre. Out-of-roundness is then defined as the radial distance of the maximum peak (P) from this circle plus
the radial distance of the maximum valley (V) from this circle.

(ii) Minimum zone or Minimum radial separation circle


These are two concentric circles between which the profile is contained. The value of the out-of-roundness is the radial distance between
the two circles, and the centre of such circles is termed the minimum zone centre. These circles can be
found by using a template.
(iii) Maximum inscribed circle
This is the largest circle that can be inscribed in the profile. Its centre and radius can be found by trial and error, by compass, by
template or by computer. Since V = 0, there are no valleys inside the circle.

(iv) Minimum circumscribed circles

This is the smallest circle that can circumscribe the profile. Its centre and radius can be found by the previous methods. Since
P = 0, there are no peaks outside the circle. The radial distance between the minimum circumscribing circle
and the maximum inscribing circle is the measure of the error of circularity. The fig shows the trace
produced by a recording instrument.

This trace is used to draw concentric circles on the polar graph which pass through the maximum and
minimum points, in such a way that the radial distance between the minimum circumscribing circle containing the
trace and the maximum inscribing circle which can be fitted into the trace is a minimum. This minimum radial distance
between the outer and inner circles is taken as the circularity error. Assessment of
roundness can also be done by templates. The out-of-roundness is defined as the radial distance of the
maximum peak (P) from the least squares circle plus the distance of the maximum valley (V) from the
least squares circle. All roundness analysis can be performed by harmonic and slope analysis.

Unit 4 &5 Control charts

Quality control: Meaning, process control, SQC control charts, single, double
and sequential sampling, Introduction to TQM.
1. QUALITY CONTROL

DEFINITION OF QUALITY:
 The meaning of “Quality” is closely allied to cost and customer needs. “Quality” may simply be
defined as fitness for purpose at lowest cost.
 The component is said to possess good quality if it works well in the equipment for which it is
meant. Quality is thus defined as fitness for purpose.
 Quality is the ‘totality of features and characteristics’ both for the products and services that can
satisfy both the explicit and implicit needs of the customers.
 “Quality” of any product is regarded as the degree to which it fulfills the requirements of the
customer.
 “Quality” means degree of perfection. Quality is not absolute but can only be judged or realized
by comparing with standards. It can be determined by some characteristics, namely design, size,
material, chemical composition, mechanical functioning, workmanship, finish and other
properties.

Control is a system for measuring and checking (inspecting) a phenomenon. It suggests when to
inspect, how often to inspect and how much to inspect. In addition, it incorporates a feedback mechanism
which explores the causes of poor quality and takes corrective action.

Control differs from ‘inspection’: inspection ascertains the quality characteristics of an item, compares them
with prescribed quality standards and separates defective items from non-defective ones, but
does not involve any mechanism to take corrective action.

MEANING OF QUALITY CONTROL


Quality Control is a systematic control of various factors that affect the quality of the product. The
various factors include material, tools, machines, type of labour, working conditions, measuring
instruments, etc.

Quality Control can be defined as the entire collection of activities which ensures that the operation
will produce the optimum quality products at minimum cost.
As per A.V. Feigenbaum, Total Quality Control is: “An effective system for integrating the quality
development, quality maintenance and quality improvement efforts of the various groups in an
organization, so as to enable production and services at the most economical levels which allow full customer satisfaction”
In the words of Alford and Beatty, “Quality Control” may be broadly defined as “that industrial
management technique by means of which products of uniform acceptable quality are manufactured.”
Quality Control is concerned with making things right rather than discovering and rejecting those made
wrong.
In short, we can say that quality control is a technique of management for achieving required standards
of products.

FACTORS AFFECTING QUALITY

In addition to men, materials, machines and manufacturing conditions, there are some other
factors which affect the product quality. These are:
 Market Research i.e. an in-depth study of the demands of the purchaser.
 Money i.e. capability to invest.
 Management i.e. Management policies for quality level.
 Production methods and product design.
Modern quality control begins with an evaluation of the customer’s requirements and has a part to
play at every stage, from the goods being manufactured right through to sales to a customer who remains satisfied.

OBJECTIVES OF QUALITY CONTROL


To decide about the standard of quality of a product that is easily acceptable to the customer and at the
same time is economical to maintain.
 To take different measures to improve the standard of quality of the product.
 To take various steps to correct any kind of deviations in the quality of the product
during manufacturing.
 Only the products of uniform and standard quality are allowed to be sold.
FUNCTIONS OF QUALITY CONTROL DEPARTMENT
 To suggest method and ways to prevent the manufacturing difficulties.
 To reject the defective goods so that products of poor quality may not reach the
customers.
 To find out the points where the control is breaking down and to investigate the causes of it.
 To correct the rejected goods, if possible. This procedure is known as
rehabilitation of defective goods.

ADVANTAGES OF QUALITY CONTROL


 Quality of product is improved which in turn increases sales.
 Scrap, rejection and rework are minimized, thus reducing wastage, so the cost of
manufacturing reduces.
 Good quality product improves reputation.
 Inspection cost reduces to a great extent.
 Uniformity in quality can be achieved.
 Improvement in manufacturer and consumer relations.
STATISTICAL QUALITY CONTROL (S.Q.C):
 Statistics: Statistics means data, a good amount of data to obtain reliable results. The science
of statistics handles this data in order to draw certain conclusions.
 S.Q.C: This is a quality control system employing statistical techniques to control
quality by performing inspection, testing and analysis, to conclude whether the quality of
the product is as per the laid-down quality standards.

Using statistical techniques, S.Q.C. collects and analyses data in assessing and controlling product
quality. The technique of S.Q.C., though developed in 1924 by Dr. Walter A. Shewhart, an
American scientist, got recognition in industry only during the Second World War. The technique permits a
more fundamental control.

“Statistical quality control can be simply defined as an economic & effective system of
maintaining & improving the quality of outputs throughout the whole operating process of
specification, production & inspection based on continuous testing with random samples.”
-YA LUN CHOU
“Statistical quality control should be viewed as a kit of tools which may influence decisions related to
the functions of specification, production or inspection.” -EUGENE L. GRANT
The fundamental basis of S.Q.C. is the theory of probability. According to the theories of
probability, the dimensions of the components made on the same machine and in one batch (if
measured accurately) vary from component to component. This may be due to inherent machine
characteristics or the environmental conditions. The chance or condition that a sample will
represent the entire batch or population is developed from the theory of probability.
Relying itself on the probability theory, S.Q.C. evaluates batch quality and controls the quality of
processes and products. S.Q.C. uses three scientific techniques, namely;
 Sampling inspection
 Analysis of the data, and
 Control charting

ADVANTAGES OF S.Q.C
S.Q.C. is one of the tools of scientific management, and has the following main advantages over 100
percent inspection:
 Reduction in cost: Since only a fractional output is inspected, hence cost of inspection is greatly
reduced.
 Greater efficiency: It requires less time and boredom as compared to 100 percent
inspection, and hence efficiency increases.
 Easy to apply: Once the S.Q.C plan is established, it is easy to apply even by a man who does not
have extensive specialized training.
 Accurate prediction: Specifications can easily be predicted for the future, which is not possible
even with 100 percent inspection.
 Can be used where inspection needs destruction of items: In cases where destruction of the
product is necessary for inspecting it, 100 percent inspection is not possible (it would spoil all
the products), so sampling inspection is resorted to.
 Early detection of faults: The moment a sample point falls outside the control limits, it is taken
as a danger signal and necessary corrective measures are taken, whereas in 100 percent
inspection, unwanted variations in quality may be detected only after a large number of defective items
have already been produced. Thus, by using control charts, we can see from a graphic picture
how the production is proceeding, and where corrective action is required and where it is not
required.

PROCESS CONTROL

Under this the quality of the products is controlled while the products are in the process of production.

Process control is secured with the technique of control charts. Control charts are also used in
the fields of advertising, packaging, etc. They ensure that the products conform to the specified
quality standard.

Process Control consists of the systems and tools used to ensure that processes are well defined,
performed correctly, and maintained so that the completed product conforms to established
requirements. Process Control is an essential element of managing risk to ensure the safety and
reliability of the Space Shuttle Program. It is recognized that strict process control practices will aid in
the prevention of process escapes that may result in or contribute to in-flight anomalies, mishaps,
incidents and non-conformances.

1. The five elements of a process are:
 People – skilled individuals who understand the importance of process and change control
 Methods/Instructions – documented techniques used to define and perform a process
 Equipment – tools, fixtures, facilities required to make products that meet requirements
 Material – both product and process materials used to manufacture and test products
 Environment – environmental conditions required to properly manufacture and test products

2. PROCESS CONTROL SYSTEMS FORMS

Process control systems can be characterized as one or more of the following forms:

 Discrete – Found in many manufacturing, motion and packaging applications. Robotic assembly,
such as that found in automotive production, can be characterized as discrete process control.
 Most discrete manufacturing involves the production of discrete pieces of product, such as metal
stamping.
 Batch – Some applications require that specific quantities of raw materials be combined in
specific ways for particular durations to produce an intermediate or end result. One example is
the production of adhesives and glues, which normally require the mixing of raw materials in a
heated vessel for a period of time to form a quantity of end product. Other important examples
are the production of food, beverages and medicine. Batch processes are generally used to
produce a relatively low to intermediate quantity of product per year (a few pounds to millions
of pounds).
 Continuous – Often, a physical system is represented through variables that are smooth and
uninterrupted in time. The control of the water temperature in a heating jacket, for example, is
an example of continuous process control. Some important continuous processes are the
production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to
produce very large quantities of product per year (millions to billions of pounds).

STATISTICAL PROCESS CONTROL (SPC)

SPC is an effective method of monitoring a process through the use of control charts. Much of
its power lies in the ability to monitor both process center and its variation about that center. By
collecting data from samples at various points within the process, variations in the process that may
affect the quality of the end product or service can be detected and corrected, thus reducing waste as
well as the likelihood that problems will be passed on to the customer. It has an emphasis on early
detection and prevention of problems.

CONTROL CHARTS
Since variations in manufacturing process are unavoidable, the control chart tells when to leave
a process alone and thus prevent unnecessary frequent adjustments. Control charts are graphical
representation and are based on statistical sampling theory, according to which an adequate sized
random sample is drawn from each lot. Control charts detect variations in the processing and warn if
there is any departure from the specified tolerance limits. These control charts immediately reveal
undesired variations and help in detecting and removing their cause.

In control charts, where both upper and lower values are specified for a quality characteristic, as soon
as some products show variation outside the tolerances, the situation is reviewed and corrective
action is taken immediately.

If analysis of the control chart indicates that the process is currently under control (i.e. is stable, with
variation only coming from sources common to the process) then data from the process can be used to
predict the future performance of the process. If the chart indicates that the process being monitored is
not in control, analysis of the chart can help determine the sources of variation, which can then be
eliminated to bring the process back into control. A control chart is a specific kind of run chart that
allows significant change to be differentiated from the natural variability of the process.

The control chart can be seen as part of an objective and disciplined approach that enables correct
decisions regarding control of the process, including whether or not to change process control
parameters. Process parameters should never be adjusted for a process that is in control, as this will
result in degraded process performance.

In other words, a control chart is:


 A device which specifies the state of statistical control,
 A device for attaining statistical control,
 A device to judge whether statistical control has been attained or not.
PURPOSE AND ADVANTAGES:

A control chart indicates whether the process is in control or out of control.


1. It determines process variability and detects unusual variations taking place in a process.
2. It ensures product quality level.
3. It warns in time, and if the process is rectified at that time, scrap or percentage rejection
can be reduced.
4. It provides information about the selection of process and setting of tolerance limits.
5. Control charts build up the reputation of the organization through customer’s satisfaction.

A control chart consists of:

 Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality
characteristic in samples taken from the process at different times [the data]
 The mean of this statistic using all the samples is calculated (e.g., the mean of the means,
mean of the ranges, mean of the proportions)
 A center line is drawn at the value of the mean of the statistic
 The standard error (e.g., standard deviation/sqrt(n) for the mean) of the statistic is also
calculated using all the samples
 Upper and lower control limits (sometimes called "natural process limits") that indicate the
threshold at which the process output is considered statistically 'unlikely' are drawn
typically at 3 standard errors from the center line

The chart may have other optional features, including:

 Upper and lower warning limits, drawn as separate lines, typically two standard errors
above and below the center line
 Division into zones, with the addition of rules governing frequencies of observations in each
zone
 Annotation with events of interest, as determined by the Quality Engineer in charge of the
process's quality
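The construction steps above can be sketched in a few lines of Python. The sample data below is purely illustrative, and the pooled standard deviation is one common way to estimate the standard error mentioned above:

```python
import math

# Illustrative samples: 5 samples of n = 4 measurements each (hypothetical data)
samples = [
    [10.2, 9.8, 10.1, 9.9],
    [10.0, 10.3, 9.7, 10.1],
    [9.9, 10.0, 10.2, 9.8],
    [10.1, 9.9, 10.0, 10.2],
    [9.8, 10.1, 10.0, 9.9],
]

n = len(samples[0])                          # observations per sample
means = [sum(s) / n for s in samples]        # the statistic for each sample
center = sum(means) / len(means)             # center line: mean of sample means

# Pooled standard deviation of all observations (one common estimate)
all_obs = [x for s in samples for x in s]
grand = sum(all_obs) / len(all_obs)
sigma = math.sqrt(sum((x - grand) ** 2 for x in all_obs) / (len(all_obs) - 1))

std_err = sigma / math.sqrt(n)               # standard error of the sample mean
ucl = center + 3 * std_err                   # upper control limit (3 standard errors)
lcl = center - 3 * std_err                   # lower control limit

print(f"center line = {center:.3f}")
print(f"UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```

A plotted point falling outside [LCL, UCL] would be flagged as statistically unlikely, as the text describes.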

TYPES OF CONTROL CHARTS

Control charts fall into two families:

 Variables or Measurement charts – X-bar chart and R chart
 Attribute charts – p chart, np chart and C chart

Control charts can be used to measure any characteristic of a product, such as the weight of a cereal
box, the number of chocolates in a box, or the volume of bottled water. The different characteristics
that can be measured by control charts can be divided into two groups: variables and attributes.
 A control chart for variables is used to monitor characteristics that can be measured and have a
continuum of values, such as height, weight, or volume. A soft drink bottling operation is an
example of a variable measure, since the amount of liquid in the bottles is measured and can take
on a number of different values. Other examples are the weight of a bag of sugar, the temperature
of a baking oven, or the diameter of plastic tubing.
 A control chart for attributes, on the other hand, is used to monitor characteristics that have
discrete values and can be counted. Often they can be evaluated with a simple yes or no decision.
Examples include color, taste, or smell. The monitoring of attributes usually takes less time than
that of variables because a variable needs to be measured (e.g., the bottle of soft drink contains
15.9 ounces of liquid). An attribute requires only a single decision, such as yes or no, good or bad,
acceptable or unacceptable (e.g., the apple is good or rotten, the meat is good or stale, the shoes
have a defect or do not have a defect, the lightbulb works or it does not work) or counting the
number of defects (e.g., the number of broken cookies in the box, the number of dents in the car,
the number of barnacles on the bottom of a boat).

CONTROL CHARTS FOR VARIABLES VS. CHARTS FOR ATTRIBUTES


A comparison of variable control charts and attribute control charts is given below:
 Variables charts involve the measurement of the job dimensions, and an item is accepted or
rejected if its dimensions are within or beyond the fixed tolerance limits; whereas an attribute
chart only differentiates between a defective item and a non-defective item without going into
the measurement of its dimensions.
 Variables charts are more detailed and contain more information as compared to attribute charts.
 Attribute charts, being based upon go and no-go data (which is less effective as
compared to measured values), require a comparatively bigger sample size.
 Variables charts are relatively expensive because of the greater cost of collecting measured data.
 Attribute charts are the only way to control quality in those cases where measurement of
quality characteristics is either not possible or is very complicated and costly to do—as
in the case of checking the colour or finish of a product, or determining whether a casting
contains cracks or not. In such cases the answer is either yes or no.
ADVANTAGES OF ATTRIBUTE CONTROL CHARTS
Attribute control charts have the advantage of allowing for quick summaries of various aspects
of the quality of a product, that is, the engineer may simply classify products as acceptable or
unacceptable, based on various quality criteria. Thus, attribute charts sometimes bypass the need for
expensive, precise devices and time-consuming measurement procedures. Also, this type of chart tends
to be more easily understood by managers unfamiliar with quality control procedures; therefore, it may
provide more persuasive (to management) evidence of quality problems.

ADVANTAGES OF VARIABLE CONTROL CHARTS


Variable control charts are more sensitive than attribute control charts. Therefore, variable
control charts may alert us to quality problems before any actual "unacceptables" (as detected by the
attribute chart) will occur. Montgomery (1985) calls the variable control charts leading indicators of
trouble that will sound an alarm before the number of rejects (scrap) increases in the production
process.

COMMONLY USED CHARTS

1. (X-Bar) and R charts, for process control.


2. P chart, for analysis of fraction defectives
3. C chart, for control of number of defects per unit.

Mean (x̄) Charts


A mean control chart is often referred to as an x-bar chart. It is used to monitor changes in the
mean of a process. To construct a mean chart we first need to construct the center line of the chart. To
do this we take multiple samples and compute their means. Usually these samples are small,
with about four or five observations. Each sample has its own mean. The center line of the chart is
then computed as the mean of all sample means, x̿ = (x̄1 + x̄2 + … + x̄k)/k, where k is the number of samples.

1. It shows changes in process average and is affected by changes in process variability.


2. It is a chart for the measure of central tendency.
3. It shows erratic or cyclic shifts in the process.
4. It detects steady progress changes, like tool wear.
5. It is the most commonly used variables chart.
6. When used along with R chart:
a. It tells when to leave the process alone and when to chase and go for the causes
leading to variation;
b. It secures information in establishing or modifying processes, specifications or
inspection procedures;
c. It controls the quality of incoming material.
7. X-Bar and R charts when used together form a powerful instrument for diagnosing quality
problems.

Range (R) charts


These are another type of control chart for variables. Whereas x-bar charts measure shift in the
central tendency of the process, range charts monitor the dispersion or variability of the process. The
method for developing and using R-charts is the same as that for x-bar charts. The center line of the
control chart is the average range, and the upper and lower control limits are computed. The R chart is
used to monitor process variability when sample sizes are small (n<10), or to simplify the calculations
made by process operators.
This chart is called the R chart because the statistic being plotted is the sample range.

1. It controls general variability of the process and is affected by changes in process variability.
2. It is a chart for measure of spread.
3. It is generally used along with X-bar chart.
Plotting of x̄ and R charts:
A number of samples of components coming out of the process are taken over a period of time. Each
sample must be taken at random, and the size of sample is generally kept as 5, but 10 to 15 units can be
taken for sensitive control charts. For each sample, the average value x̄ of all the measurements and
the range R are calculated. The grand average x̿ (equal to the average of all the sample averages x̄)
and R̄ (equal to the average of all the ranges R) are found, and from these we can calculate
the control limits for the x̄ and R charts. Therefore,

UCL (x̄ chart) = x̿ + A2 R̄
LCL (x̄ chart) = x̿ − A2 R̄
UCL (R chart) = D4 R̄
LCL (R chart) = D3 R̄

Here the factors A2, D3 and D4 depend on the number of units per sample. The larger the number, the
closer the limits. The values of the factors A2, D3 and D4 can be obtained from S.Q.C tables. However, for
ready reference these are given below in tabular form:

Notation:
n or m = sample size
Example:
Pistons for automotive engines are produced by a forging process. We wish to establish statistical control
of the inside diameter of the ring manufactured by this process using x̄ and R charts.
Twenty-five samples, each of size five, have been taken when we think the process is in control. The
inside diameter measurement data from these samples are shown in the table.

x̿ = 74.001

R̄ = 0.023
From S.Q.C tables (Fig.3) for sample size 5

A2=0.58, D4=2.11 and D3= 0


UCL (x̄ chart) = x̿ + A2 R̄ = 74.001 + 0.58(0.023) = 74.01434

LCL (x̄ chart) = x̿ − A2 R̄ = 74.001 − 0.58(0.023) = 73.98766

UCL (R chart) = D4 R̄ = 2.11 × 0.023 = 0.04853

LCL (R chart) = D3 R̄ = 0 × 0.023 = 0
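The limit calculations above can be checked with a short script; the constants A2 = 0.58, D4 = 2.11 and D3 = 0 for n = 5 are the S.Q.C. table values quoted in the example:

```python
# Control limits for x-bar and R charts (sample size n = 5),
# using the grand average and average range from the worked example.
x_double_bar = 74.001          # grand average of sample means
r_bar = 0.023                  # average range
A2, D4, D3 = 0.58, 2.11, 0.0   # table constants for n = 5

ucl_x = x_double_bar + A2 * r_bar   # ≈ 74.01434
lcl_x = x_double_bar - A2 * r_bar   # ≈ 73.98766
ucl_r = D4 * r_bar                  # ≈ 0.04853
lcl_r = D3 * r_bar                  # 0.0

print(ucl_x, lcl_x, ucl_r, lcl_r)
```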

Now the x̄ and R charts are plotted as shown in Fig.1 and Fig.2

Fig.1: x̄ Chart

Fig.2: R Chart

Inference:

In the x̄ chart, the plotted points representing the sample averages all lie well within the control limits;
but if some samples fall outside the control limits, it means something has probably gone wrong, or is
about to go wrong, with the process, and a check is needed to prevent the appearance of defective
products.

PROCESS OUT OF CONTROL

After computing the control limits, the next step is to determine whether the process is in statistical
control or not. If not, it means there is an external cause that throws the process out of control. This
cause must be traced and removed so that the process may return to operate under stable statistical
conditions. The various reasons for the process being out of control may be:
1. Faulty tools
2. Sudden significant change in properties of new materials in a new consignment
3. Breakdown of the lubrication system
4. Faults in timing of speed mechanisms

PROCESS IN CONTROL
If the process is found to be in statistical control, a comparison between the required specifications and
the process capability may be carried out to determine whether the two are compatible.

Conclusions:
When the process is not in control, the points fall outside the control limits on either the x̄ or R
charts. It means assignable causes (human-controlled causes) are present in the process. When all the
points are inside the control limits, even then we cannot definitely say that no assignable cause is present,
but it is not economical to trace the cause. No statistical test can be applied. Even in the best
manufacturing process, certain errors may develop that constitute assignable causes, but no
statistical action can be taken.
CONTROL CHARTS FOR ATTRIBUTES

Control charts for attributes are used to measure quality characteristics that are counted rather than
measured. Attributes are discrete in nature and entail simple yes-or-no decisions. For example, this
could be the number of nonfunctioning lightbulbs, the proportion of broken eggs in a carton, the
number of rotten apples, the number of scratches on a tile, or the number of complaints issued.
Two of the most commontypes of control charts for attributes are p-charts and c-charts.

P-charts are used to measure the proportion of items in a sample that are defective. Examples are
the proportion of broken cookies in a batch and the proportion of cars produced with a
misaligned fender. P-charts are appropriate when both the number of defectives measured and
the size of the total sample can be counted. A proportion can then be computed and used as the
statistic of measurement.
1. It can be a fraction defective chart.
2. Each item is classified as good (non-defective) or bad (defective).
3. This chart is used to control the general quality of the component parts, and it checks
whether the fluctuations in product quality (level) are due to chance alone.

Plotting of P-charts: first calculate the fraction defective, and then the control limits.
The process is said to be in control if fraction defective values fall within the control limits. In case the
process is out of control an investigation to hunt for the cause becomes necessary.

Usually the Z value is equal to 3 (as was used in the x̄ and R charts), since variations within three
standard deviations are considered natural variations. However, the choice of the value of Z depends
on the environment in which the chart is being used, and on managerial judgment.
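As a sketch, the p-chart limits can be computed with the standard formula p̄ ± Z·sqrt(p̄(1 − p̄)/n), with Z = 3 as described above; the defect counts below are hypothetical:

```python
import math

# Hypothetical inspection data: number of defectives in each sample of size n
defectives = [4, 2, 5, 3, 6, 2, 3, 5]
n = 100                                          # items inspected per sample

p_bar = sum(defectives) / (len(defectives) * n)  # average fraction defective
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)     # std. deviation of the proportion

z = 3                                            # 3-sigma (natural variation) limits
ucl = p_bar + z * sigma_p
lcl = max(0.0, p_bar - z * sigma_p)              # a proportion cannot be negative

print(f"p-bar = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
```

A sample whose fraction defective falls outside [LCL, UCL] signals that the process may be out of control.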

C-charts count the actual number of defects. For example, we can count the number of complaints
from customers in a month, the number of bacteria on a petri dish, or the number of barnacles on the
bottom of a boat. However, we cannot compute the proportion of complaints from customers, the
proportion of bacteria on a petri dish, or the proportion of barnacles on the bottom of a boat.

Defective items vs individual defects


The literature differentiates between defect and defective, which is the same as differentiating between
nonconformity and nonconforming units. This may sound like splitting hairs, but in the interest of
clarity let's try to unravel this man-made mystery.

Consider a wafer with a number of chips on it. The wafer is referred to as an "item of a product". The
chip may be referred to as "a specific point". There exist certain specifications for the wafers. When a
particular wafer (e.g., the item of the product) does not meet at least one of the specifications, it is
classified as a nonconforming item. Furthermore, each chip, (e.g., the specific point) at which a
specification is not met becomes a defect or nonconformity.
So, a nonconforming or defective item contains at least one defect or nonconformity. It should be
pointed out that a wafer can contain several defects but still be classified as conforming. For example,
the defects may be located at noncritical positions on the wafer. If, on the other hand, the number
of the so-called "unimportant" defects becomes alarmingly large, an investigation of the production
of these wafers is warranted.

Control charts involving counts can be either for the total number of nonconformities (defects)
for the sample of inspected units, or for the average number of defects per inspection unit.
Defect vs. Defective
• ‘Defect’ – a single nonconforming quality characteristic.

• ‘Defective’ – items having one or more defects.

C charts can be plotted by using the following formulas:

UCL = c̄ + 3 √c̄

LCL = c̄ − 3 √c̄
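These formulas can be applied directly; the defect counts per inspection unit below are hypothetical:

```python
import math

# Hypothetical counts of defects per inspection unit (e.g., scratches per tile)
defects = [3, 5, 2, 4, 6, 3, 2, 5, 4, 6]

c_bar = sum(defects) / len(defects)            # average number of defects per unit
ucl = c_bar + 3 * math.sqrt(c_bar)             # UCL = c-bar + 3*sqrt(c-bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))   # LCL floored at zero (a count cannot be negative)

print(f"c-bar = {c_bar}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```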
THE PRIMARY DIFFERENCE BETWEEN USING A P-CHART AND A C-CHART IS
AS FOLLOWS.

A P-chart is used when both the total sample size and the number of defects can be computed.
A C-chart is used when we can compute only the number of defects but cannot compute the proportion
that is defective.
ACCEPTANCE SAMPLING
“Acceptance Sampling is concerned with the decision to accept a mass of manufactured items as
conforming to standards of quality or to reject the mass as non-conforming to quality. The decision is
reached through sampling.” - SIMPSON AND KAFKA
Acceptance sampling uses statistical sampling to determine whether to accept or reject a
production lot of material. It has been a common quality control technique used in industry and
particularly the military for contracts and procurement. It is usually done as products leave the factory,
or in some cases even within the factory. Most often a producer supplies a consumer a number of items,
and the decision to accept or reject the lot is made by determining the number of defective items in a
sample from the lot. The lot is accepted if the number of defectives falls below the acceptance number;
otherwise the lot is rejected.

For the purpose of acceptance, inspection is carried out at many stages in the process of
manufacturing. These stages may be: inspection of incoming materials and parts, process inspection at
various points in the manufacturing operations, final inspection by a manufacturer of his own product,
and finally inspection of the finished product by the purchaser.
Inspection for acceptance is generally carried out on a sampling basis. The use of sampling inspection
to decide whether or not to accept the lot is known as Acceptance Sampling. A sample from the
inspection lot is inspected, and if the number of defective items is more than the stated number, known
as the acceptance number, the whole lot is rejected.
Acceptance Sampling is, therefore, a method used to make a decision as to whether to
accept or to reject lots based on inspection of sample(s).

The flow of acceptance sampling: an incoming lot is inspected; an accepted lot passes on as outgoing
quality, while a rejected lot is subjected to cent percent (100%) inspection, in which substandard items
are replaced by good ones and individual defective items are rejected.
Acceptance sampling is the process of randomly inspecting a sample of goods and deciding whether
to accept the entire lot based on the results. Acceptance sampling determines whether a batch of goods
should be accepted or rejected.

Acceptance Sampling is very widely used in practice due to the following merits:

Acceptance Sampling is much less expensive than 100 percent inspection.


1. It is general experience that 100 percent inspection removes only 82 to 95 percent of defective
material. Very good 100 percent inspection may remove at most 99 percent of the
defectives, but still cannot reach the level of 100 percent. Due to the inspection fatigue
involved in 100 percent inspection, a good sampling plan may actually give better results than
those achieved by 100 percent inspection.
2. Because of its economy, it is possible to carry out sample inspection at various stages.

Inspection provides a means for monitoring quality. For example, inspection may be performed on
incoming raw material, to decide whether to keep it or return it to the vendor if the quality level is not
what was agreed on. Similarly, inspection can also be done on finished goods before deciding whether
to make the shipment to the customer or not. However, performing 100% inspection is generally not
economical or practical; therefore, sampling is used instead.
Acceptance Sampling is therefore a method used to make a decision as to whether to accept or to
reject lots based on inspection of sample(s). The objective is not to control or estimate the quality of
lots, only to pass a judgment on lots.
Using sampling rather than 100% inspection of the lots brings some risks both to the consumer and
to the producer, which are called the consumer's and the producer's risks, respectively. We encounter
making decisions on sampling in our daily affairs.

Operating Characteristic Curve


The Operating Characteristic Curve (OC Curve) shows the probability that you will accept
lots with various levels of quality. It is the working plan of acceptance sampling.
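Under the usual binomial model, a point on the OC curve for a single sampling plan with sample size n and acceptance number c is P(accept) = Σ C(n, d)·p^d·(1 − p)^(n−d), summed for d = 0 to c, where p is the lot's true fraction defective. A sketch with an illustrative plan (n = 50, c = 2 are assumptions, not values from the text):

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot with fraction defective p
    under a single sampling plan (sample size n, acceptance number c)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Illustrative plan: sample 50 items, accept if 2 or fewer are defective
n, c = 50, 2
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}: P(accept) = {prob_accept(n, c, p):.3f}")
```

Evaluating P(accept) over a range of p values traces out the OC curve; good lots (small p) are accepted with high probability, and the curve falls as quality worsens.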
AQL – Acceptance Quality Level

The AQL (Acceptance Quality Level) is the maximum percent defective that can be considered
satisfactory as a process average for sampling inspection.
RQL – Rejectable Quality Level

The RQL (Rejectable Quality Level) is the percent defective that is considered unacceptable. It is also
known as the Lot Tolerance Percent Defective (LTPD).
LTPD – Lot Tolerance Percent Defective

The LTPD of a sampling plan is a level of quality routinely rejected by the sampling plan. It is
generally defined as that level of quality (percent defective, defects per hundred units, etc.) which
the sampling plan will accept 10% of the time.
Risks in Acceptance sampling

1. Producer’s risk: Sometimes, in spite of good quality, the sample taken may show defective
units; as such the lot will be rejected. Such a risk is known as producer’s risk.
2. Consumer’s risk: Sometimes the quality of the lot is not good but the sample results show
good quality units; as such the consumer has to accept a defective lot. Such a risk is known as
consumer’s risk.
ACCEPTANCE SAMPLING PLANS
A sampling plan is a plan for acceptance sampling that precisely specifies the parameters of the
sampling process and the acceptance/rejection criteria. The variables to be specified include the size of
the lot (N), the size of the sample inspected from the lot (n), the number of defects above which a lot
is rejected (c), and the number of samples that will be taken.

There are different types of sampling plans.


 Single Sampling (Inference made on the basis of only one sample)
 Double Sampling (Inference made on the basis of one or two samples)
 Sequential Sampling (Additional samples are drawn until an inference can be made) etc.
Single Sampling Plan
In a single sampling plan, the decision regarding acceptance or rejection is made after drawing one
sample from a bigger lot. Inspection is done, and if the number of defectives exceeds the acceptance
number the lot is rejected; otherwise, the lot is accepted.
Double Sampling Plan
In this, a small sample is first drawn. If the number of defectives is less than or equal to the first
acceptance number (C1), the lot is accepted. If the number of defectives is more than a second
acceptance number (C2), which is higher than C1, the lot is rejected. If the number of defectives lies
between C1 and C2, a second sample is drawn. The entire lot is accepted or rejected on the basis
of the outcome of the second inspection.
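The double sampling decision rule above can be turned into an acceptance-probability calculation under a binomial model; the plan parameters n1, n2, C1 and C2 below are illustrative assumptions:

```python
from math import comb

def binom_pmf(n, k, p):
    """P(exactly k defectives in a sample of n), binomial model."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_accept_double(n1, n2, c1, c2, p):
    """Acceptance probability for a double sampling plan:
    accept if the first sample has <= c1 defectives; reject if > c2;
    otherwise draw a second sample and accept if the combined
    number of defectives is <= c2."""
    # Accepted outright on the first sample
    p_accept = sum(binom_pmf(n1, d, p) for d in range(c1 + 1))
    # Borderline first samples (c1 < d1 <= c2) lead to a second sample
    for d1 in range(c1 + 1, c2 + 1):
        p_first = binom_pmf(n1, d1, p)
        p_second = sum(binom_pmf(n2, d2, p) for d2 in range(c2 - d1 + 1))
        p_accept += p_first * p_second
    return p_accept

# Illustrative plan: n1 = 30, C1 = 1; n2 = 30, C2 = 3
print(prob_accept_double(30, 30, 1, 3, 0.02))
```

Note that setting C2 equal to C1 makes the second stage unreachable, so the plan collapses to a single sampling plan.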
Sequential Sampling Plan
Sequential sampling plan is used when three or more samples of stated size are permitted and when
the decision on acceptance or rejection must be reached after a stated number of samples.

A first sample of n1 is drawn, the lot is accepted if there are no more than c1 defectives, the lot
is rejected if there are more than r1 defectives. Otherwise a second sample of n2 is drawn. The lot is
accepted if there are no more than c2 defectives in the combined sample of n1 + n2. The lot is rejected
if there are more than r2 defectives in the combined sample of n1 + n2. The procedure is continued in
accordance with the table below.

If by the end of fourth sample, the lot is neither accepted nor rejected, a sample n5 is drawn. The
lot is accepted if the number of defectives in the combined sample of n1 + n2 + n3 + n4 + n5 does not
exceed c5. Otherwise the lot is rejected.

A sequential sampling plan involves higher administrative costs and requires the use of experienced inspectors.

AN INTRODUCTION TO TOTAL QUALITY MANAGEMENT (TQM)

At its core, Total Quality Management (TQM) is a management approach to long-term success
through customer satisfaction.

In a TQM effort, all members of an organization participate in improving processes, products,


services and the culture in which they work.

Total Quality Management (TQM) is an approach that seeks to improve quality and performance
so as to meet or exceed customer expectations. This can be achieved by integrating all quality-related
functions and processes throughout the company. TQM looks at the overall quality measures
used by a company, including managing quality design and development, quality control and
maintenance, quality improvement, and quality assurance. TQM takes into account all quality measures
taken at all levels and involving all company employees.

TQM can be defined as the management of initiatives and procedures that are aimed at
achieving the delivery of quality products and services.

PRINCIPLES OF TQM

A number of key principles can be identified in defining TQM, including:

 Executive Management – Top management should act as the main driver for TQM and
create an environment that ensures its success.
 Training – Employees should receive regular training on the methods and concepts of quality.
 Customer Focus – Improvements in quality should improve customer satisfaction.
 Decision Making – Quality decisions should be made based on measurements.
 Methodology and Tools – Use of appropriate methodology and tools ensures that
non-conformances are identified, measured and responded to consistently.
 Continuous Improvement – Companies should continuously work towards improving
manufacturing and quality procedures.
 Company Culture – The culture of the company should aim at developing employees’ ability
to work together to improve quality.
 Employee Involvement – Employees should be encouraged to be pro-active in
identifying and addressing quality-related problems.

A core concept in implementing TQM is Deming’s 14 points, a set of management practices to
help companies increase their quality and productivity:

1. Create constancy of purpose for improving products and services.


2. Adopt the new philosophy.
3. Cease dependence on inspection to achieve quality.
4. End the practice of awarding business on price alone; instead, minimize total cost by working
with a single supplier.
5. Improve constantly and forever every process for planning, production and service.
6. Institute training on the job.
7. Adopt and institute leadership.
8. Drive out fear.
9. Break down barriers between staff areas.
10. Eliminate slogans, exhortations and targets for the workforce.
11. Eliminate numerical quotas for the workforce and numerical goals for management.
12. Remove barriers that rob people of pride of workmanship, and eliminate the annual rating
or merit system.
13. Institute a vigorous program of education and self-improvement for everyone.
14. Put everybody in the company to work accomplishing the transformation.
TEAM APPROACH

TQM stresses that quality is an organizational effort. To facilitate the solving of quality
problems, it places great emphasis on teamwork. The use of teams is based on the old adage that “two
heads are better than one.” Using techniques such as brainstorming, discussion, and quality control
tools, teams work regularly to correct problems. The contributions of teams are considered vital to the
success of the company. For this reason, companies set aside time in the workday for team meetings.
Teams vary in their degree of structure and formality, and different types of teams solve different types
of problems. One of the most common types of teams is the quality circle, a team of volunteer
production employees and their supervisors whose purpose is to solve quality problems. The circle is
usually composed of eight to ten members, and decisions are made through group consensus. The teams
usually meet weekly during work hours in a place designated for this purpose. They follow a preset
process for analyzing and solving quality problems. Open discussion is promoted, and criticism is not
allowed. Although the functioning of quality circles is friendly and casual, it is serious business.
Quality circles are not mere “gab sessions.” Rather, they do important work for the company and have
been very successful in many firms.

THE SEVEN TOOLS OF QUALITY CONTROL

1. Cause and effect analysis


2. Flowcharts
3. Checklists
4. Control techniques including Statistical quality control and control charts.
5. Scatter diagram
6. Pareto analysis, which means identification of the vital few from many at a glance. This is used
for fixing the priorities in tackling a problem.
7. Histograms.

Cause-and-Effect Diagrams

Cause-and-effect diagrams are charts that identify potential causes for particular quality
problems. They are often called fishbone diagrams because they look like the bones of a fish. A general
cause-and-effect diagram is shown in Figure 5-8. The “head” of the fish is the quality problem, such
as damaged zippers on a garment or broken valves on a tire. The diagram is drawn so that the “spine”
of the fish connects the “head” to the possible cause of the problem. These causes could be related to
the machines, workers, measurement, suppliers, materials, and many other aspects of the production
process. Each of these possible causes can then have smaller “bones” that address specific issues that
relate to each cause. For example, a problem with machines could be due to a need for adjustment, old
equipment, or tooling problems. Similarly, a problem with workers could be related to lack of training,
poor supervision, or fatigue.
Cause-and-effect diagrams are problem-solving tools commonly used by quality control teams.
Specific causes of problems can be explored through brainstorming.
The development of a cause-and-effect diagram requires the team to think through all the possible
causes of poor quality.
Flowcharts

A flowchart is a schematic diagram of the sequence of steps involved in an operation or process.


It provides a visual tool that is easy to use and understand.
By seeing the steps involved in an operation or process, everyone develops a clear picture of
how the operation works and where problems could arise.
Checklists

A checklist is a list of common defects and the number of observed occurrences of these
defects. It is a simple yet effective fact-finding tool that allows the worker to collect specific
information regarding the defects observed. The checklist in Figure 5-7 shows four defects and the
number of times they have been observed.
It is clear that the biggest problem is ripped material. This means that the plant needs to focus
on this specific problem—for example, by going to the source of supply or seeing whether the material
rips during a particular production process.
A checklist can also be used to focus on other dimensions, such as location or time.
For example, if a defect is being observed frequently, a checklist can be developed that measures
the number of occurrences per shift, per machine, or per operator. In this fashion we can isolate the
location of the particular defect and then focus on correcting the problem.

Control Charts

Control charts are a very important quality control tool. We will study the use of control charts at
great length in the next chapter. These charts are used to evaluate whether a process is operating within
expectations relative to some measured value such as weight, width, or volume. For example, we could
measure the weight of a sack of flour, the width of a tire, or the volume of a bottle of soft drink. When
the production process is operating within expectations, we say that it is “in control.”

To evaluate whether or not a process is in control, we regularly measure the variable of interest
and plot it on a control chart. The chart has a line down the center representing the average value of the
variable we are measuring. Above and below the center line are two lines, called the upper control
limit (UCL) and the lower control limit (LCL). As long as the observed values fall within the upper
and lower control limits, the process is in control and there is no problem with quality. When a
measured observation falls outside of these limits, there is a problem.
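As a sketch of this idea, the following fragment computes a center line and control limits from hypothetical sample weights. The text does not fix the width of the limits, so the conventional three-sigma choice is an assumption here:

```python
from statistics import mean, stdev

# Hypothetical weights (kg) of sampled flour sacks.
weights = [10.02, 9.98, 10.05, 9.97, 10.01, 10.00, 9.99, 10.03]

center = mean(weights)          # center line of the chart
sigma = stdev(weights)          # sample standard deviation
ucl = center + 3 * sigma        # upper control limit (3-sigma, by convention)
lcl = center - 3 * sigma        # lower control limit

def in_control(value):
    """A point is in control if it falls inside the control limits."""
    return lcl <= value <= ucl

status = [in_control(w) for w in weights]
```

In practice the limits are computed from samples taken while the process is known to be stable, and later observations are plotted against them.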
Scatter Diagrams

Scatter diagrams are graphs that show how two variables are related to one another. They are
particularly useful in detecting the amount of correlation, or the degree of linear relationship, between
two variables. For example, increased production speed and number of defects could be correlated
positively; as production speed increases, so does the number of defects. Two variables could also be
correlated negatively, so that an increase in one of the variables is associated with a decrease in the
other. For example, increased worker training might be associated with a decrease in the number of
defects observed.
The greater the degree of correlation, the more linear are the observations in the scatter diagram.
On the other hand, the more scattered the observations in the diagram, the less correlation exists
between the variables. Of course, other types of relationships can also be observed on a scatter diagram,
such as an inverted U. This may be the case when one is observing the relationship between two variables
such as oven temperature and number of defects, since temperatures below and above the ideal could
lead to defects.
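The degree of linear relationship can be quantified with the Pearson correlation coefficient. The sketch below implements it directly and applies it to hypothetical paired observations, one positively and one negatively correlated:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: production speed vs. defects (positive correlation)
speed    = [10, 12, 14, 16, 18, 20]
defects  = [ 3,  4,  6,  7,  9, 11]
# Hypothetical data: worker training hours vs. defects (negative correlation)
training = [ 1,  2,  3,  4,  5,  6]
defects2 = [ 9,  8,  6,  5,  3,  2]

r_speed = pearson(speed, defects)       # close to +1
r_train = pearson(training, defects2)   # close to -1
```

A coefficient near +1 or -1 means the scatter diagram is nearly a straight line; a coefficient near 0 means the points are widely scattered. Note that an inverted-U relationship, as in the oven-temperature example, can show a near-zero linear correlation even though the variables are clearly related.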

Pareto Analysis

Pareto analysis is a technique used to identify quality problems based on their degree of importance.
The logic behind Pareto analysis is that only a few quality problems are important, whereas many
others are not critical. The technique was named after Vilfredo Pareto, a nineteenth-century Italian
economist who determined that only a small percentage of people controlled most of the wealth. This
concept has often been called the 80–20 rule and has been extended to many areas. In quality
management the logic behind Pareto’s principle is that most quality problems are a result of only a few
causes. The trick is to identify these causes.

One way to use Pareto analysis is to develop a chart that ranks the causes of poor quality in
decreasing order based on the percentage of defects each has caused. For example, a tally can be made
of the number of defects that result from different causes, such as operator error, defective parts, or
inaccurate machine calibrations. Percentages of defects can be computed from the tally and placed in
a chart like those shown in Figure 5-7. We generally tend to find that a few causes account for most of
the defects.
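The ranking step can be sketched as follows. The causes and counts are hypothetical; the resulting table lists each cause with its count, its percentage share, and the cumulative percentage that a Pareto chart plots:

```python
# Hypothetical defect tally by cause (counts are illustrative).
defect_counts = {
    "operator error": 12,
    "defective parts": 58,
    "inaccurate calibration": 21,
    "other": 9,
}

total = sum(defect_counts.values())

# Rank causes in decreasing order of defects, as a Pareto chart does.
ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
pareto_table = []
for cause, count in ranked:
    share = 100.0 * count / total
    cumulative += share
    pareto_table.append((cause, count, round(share, 1), round(cumulative, 1)))
```

In this illustrative data the top cause alone accounts for well over half the defects, which is exactly the pattern Pareto analysis is designed to expose.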

Histograms

A histogram is a chart that shows the frequency distribution of observed values of a variable. We can
see from the plot what type of distribution a particular variable displays, such as whether it has a normal
distribution and whether the distribution is symmetrical.
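A minimal frequency table, which is the data behind a histogram, can be built as follows; the bottle-volume measurements are hypothetical:

```python
# Hypothetical bottle-volume measurements (ml).
volumes = [498, 499, 500, 500, 500, 501, 501, 502, 500, 499, 503, 497]

# Build the frequency distribution: value -> number of occurrences.
bins = {}
for v in volumes:
    bins[v] = bins.get(v, 0) + 1

# The mode of the distribution is the tallest bar of the histogram.
mode_value = max(bins, key=bins.get)
```

Plotting `bins` as bars shows at a glance whether the distribution looks roughly normal and whether it is symmetrical about its mode; real data would normally be grouped into wider class intervals.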

In the food service industry the use of quality control tools is important in identifying quality problems.
Grocery store chains, such as Kroger and Meijer, must record and monitor the quality of incoming
produce, such as tomatoes and lettuce. Quality tools can be used to evaluate the acceptability of product
quality and to monitor product quality from individual suppliers. They can also be used to evaluate
causes of quality problems, such as long transit time or poor refrigeration.
Similarly, restaurants use quality control tools to evaluate and monitor the quality of delivered goods,
such as meats, produce, or baked goods.
Techniques of TQM

ISO 9000 Standard

Increases in international trade during the 1980s created a need for the development of
universal standards of quality. Universal standards were seen as necessary in order for companies to
be able to objectively document their quality practices around the world. In 1987 the International
Organization for Standardization (ISO), an international body whose purpose is to establish
agreement on international quality standards, published its first set of standards for quality
management, called ISO 9000. The ISO currently has members from 91 countries, including the
United States. ISO 9000 consists of a set of standards and a certification process for companies. By
receiving ISO 9000 certification, companies demonstrate that they have met the standards specified
by the ISO.
The standards are applicable to all types of companies and have gained global acceptance. In
many industries ISO certification has become a requirement for doing business. Also, ISO 9000
standards have been adopted by the European Community as a standard for companies doing
business in Europe.
In December 2000 the first major changes to ISO 9000 were made, introducing the following
three new standards:
• ISO 9000:2000–Quality Management Systems–Fundamentals and Standards: Provides the
terminology and definitions used in the standards. It is the starting point for understanding the system
of standards.
• ISO 9001:2000–Quality Management Systems–Requirements: This is the standard used for the
certification of a firm’s quality management system. It is used to demonstrate the conformity of
quality management systems to meet customer requirements.
• ISO 9004:2000–Quality Management Systems–Guidelines for Performance: Provides guidelines
for establishing a quality management system. It focuses not only on meeting customer requirements
but also on improving performance.

These three standards are the most widely used and apply to the majority of companies.
However, ten more published standards and guidelines exist as part of the ISO 9000 family of
standards.
To receive ISO certification, a company must provide extensive documentation of its quality
processes. This includes methods used to monitor quality, methods and frequency of worker training,
job descriptions, inspection programs, and statistical process-control tools used. High-quality
documentation of all processes is critical.
The company is then audited by an ISO 9000 registrar who visits the facility to make sure the
company has a well-documented quality management system and that the process meets the
standards. If the registrar finds that all is in order, certification is received.

Once a company is certified, it is registered in an ISO directory that lists certified companies. The
entire process can take 18 to 24 months and can cost anywhere from $10,000 to $30,000. Companies
have to be recertified by ISO every three years.

One of the shortcomings of ISO certification is that it focuses only on the process used and
conformance to specifications. In contrast to the Baldrige criteria, ISO certification does not address
questions about the product itself and whether it meets customer and market requirements. Today
there are over 40,000 companies that are ISO certified. In fact, certification has become a
requirement for conducting business in many industries.

ISO 14000 Standards


The need for standardization of quality created an impetus for the development of other standards.
In 1996 the International Organization for Standardization introduced standards for evaluating a company’s
environmental responsibility. These standards, termed ISO 14000, focus on three major areas:
 Management systems standards measure systems development and integration of
environmental responsibility into the overall business.
 Operations standards include the measurement of consumption of natural resources and energy.
 Environmental systems standards measure emissions, effluents, and other waste systems.
With greater interest in green manufacturing and more awareness of environmental concerns, ISO
14000 may become an important set of standards for promoting environmental responsibility.

Benchmarking
Benchmarking is the process of comparing one's business processes and performance metrics
to industry bests or best practices from other industries. Dimensions typically measured are quality,
time and cost. In the process of best practice benchmarking, management identifies the best firms in
their industry, or in another industry where similar processes exist, and compares the results and
processes of those studied (the "targets") to one's own results and processes. In this way, they learn
how well the targets perform and, more importantly, the business processes that explain why these
firms are successful.
Benchmarking is used to measure performance using a specific indicator (cost per unit of
measure, productivity per unit of measure, cycle time of x per unit of measure or defects per unit of
measure), resulting in a metric of performance that is then compared to others.
Also referred to as "best practice benchmarking" or "process benchmarking", this process is
used in management and particularly strategic management, in which organizations evaluate various
aspects of their processes in relation to best practice companies' processes, usually within a peer
group defined for the purposes of comparison. This then allows organizations to develop plans on
how to make improvements or adapt specific best practices, usually with the aim of increasing some
aspect of performance. Benchmarking may be a one-off event, but is often treated as a continuous
process in which organizations continually seek to improve their practices.

Six Sigma
Six Sigma is a set of tools and strategies for process improvement originally developed
by Motorola in 1985. Six Sigma became well known after Jack Welch made it a central focus
of his business strategy at General Electric in 1995, and today it is used in different sectors of
industry.
Six Sigma seeks to improve the quality of process outputs by identifying and removing the causes
of defects (errors) and minimizing variability in manufacturing and business processes. It uses a set
of quality management methods, including statistical methods, and creates a special infrastructure
of people within the organization ("Champions", "Black Belts", "Green Belts", "Orange Belts", etc.)
who are experts in these very complex methods.

Each Six Sigma project carried out within an organization follows a defined sequence of
steps and has quantified value targets, for example: process cycle time reduction, customer
satisfaction improvement, reduction in pollution, cost reduction, and/or profit increase. The term Six Sigma
originated from terminology associated with manufacturing, specifically terms associated with
statistical modeling of manufacturing processes.
The maturity of a manufacturing process can be described by a sigma rating indicating
its yield or the percentage of defect-free products it creates.
A six sigma process is one in which 99.99966% of the products manufactured are statistically
expected to be free of defects (3.4 defects per million), although this defect
level corresponds to only a 4.5 sigma level. Motorola set a goal of "six sigma" for all of its
manufacturing operations, and this goal became a byword for the management and engineering
practices used to achieve it.
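The 3.4-defects-per-million figure can be reproduced from the normal distribution. The sketch below applies the conventional 1.5-sigma long-term shift, so a "six sigma" process is evaluated at a 4.5-sigma one-sided tail:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a one-sided normal tail,
    after subtracting the conventional long-term sigma shift."""
    tail = 1.0 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000

# A "six sigma" process: 6.0 - 1.5 = 4.5 sigma tail, about 3.4 DPMO.
six_sigma_dpmo = dpmo(6.0)
```

This is why the text notes that the 3.4-defects-per-million level actually corresponds to a 4.5 sigma tail: the extra 1.5 sigma is an allowance for long-term drift of the process mean.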

Methods
Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-
Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC
and DMADV.
 DMAIC is used for projects aimed at improving an existing business process.
 DMADV is used for projects aimed at creating new product or process designs.
DMAIC
The DMAIC project methodology has five phases:
 Define the problem, the voice of the customer, and the project goals, specifically.
 Measure key aspects of the current process and collect relevant data.
 Analyze the data to investigate and verify cause-and-effect relationships. Determine what
the relationships are, and attempt to ensure that all factors have been considered. Seek out
root cause of the defect under investigation.

 Improve or optimize the current process based upon data analysis using techniques such as
design of experiments, poka-yoke or mistake proofing, and standard work to create a new,
future-state process. Set up pilot runs to establish process capability.
 Control the future state process to ensure that any deviations from target are corrected
before they result in defects. Implement control systems such as statistical process control,
production boards, and visual workplaces, and continuously monitor the process.
Some organizations add a Recognize step at the beginning, which is to recognize the right problem
to work on, thus yielding an RDMAIC methodology.

DMADV or DFSS
The DMADV project methodology, also known as DFSS ("Design For Six Sigma"), features five phases:
 Define design goals that are consistent with customer demands and the enterprise strategy.
 Measure and identify CTQs (characteristics that are Critical To Quality), product
capabilities, production process capability, and risks.
 Analyze to develop and design alternatives.
 Design an improved alternative, best suited per the analysis in the previous step.
 Verify the design, set up pilot runs, implement the production process and hand it over to
the process owner(s).

Quality circle
A quality circle is a volunteer group composed of workers (or even students), usually under
the leadership of their supervisor (or an elected team leader), who are trained to identify, analyze
and solve work-related problems and present their solutions to management in order to improve
the performance of the organization, and motivate and enrich the work of employees. When
matured, true quality circles become self-managing, having gained the confidence of management.
Quality circles are an alternative to the rigid concept of division of labor, where workers operate
in a narrow scope with compartmentalized functions. Typical topics are improving
occupational safety and health, improving product design, and improvement in the workplace and
manufacturing processes. The term quality circles derives from the concept of PDCA (Plan, Do,
Check, Act) circles developed by Dr. W. Edwards Deming.
Quality circles are typically more formal groups. They meet regularly on company time and
are trained by competent persons (usually designated as facilitators) who may be personnel and
industrial relations specialists trained in human factors and the basic skills of problem identification,
information gathering and analysis, basic statistics, and solution generation. Quality circles are
generally free to select any topic they wish (other than those related to salary and terms and
conditions of work, as there are other channels through which these issues are usually considered).
Quality circles have the advantage of continuity; the circle remains intact from project to project.
Note: Study the inspection method for quality control from the textbook.
