Ultrasonic Examination Part 2
Job Knowledge
The previous article (127) explained the basic principles of ultrasonic examination. As with any measuring equipment, it is necessary to calibrate the ultrasonic examination system in order to determine accurately the size and position of a feature.
The type of calibration block to be used (there are varying shapes and sizes) depends on the application and on the form and shape of the subject being tested. The calibration block should be made of the same material as that being inspected, and the artificially induced flaw should closely resemble the actual flaw of concern. The best calibration block for calibrating ultrasonic testing equipment is one in the same grade of material and heat treatment condition as the production items, with a weld containing genuine flaws such as slag entrapment, porosity, lack of fusion and cracks. Techniques have been developed that enable flaws of known size to be introduced into a welded joint. Such calibration blocks can be produced to validate the ultrasonic test method, but they are expensive and tend to be used only in applications such as nuclear vessel manufacture and critical offshore/process plant fabrication.
A number of standard calibration blocks are available, with the shape and dimensions specified in international standards such as ISO 2400, ISO 7963, ASME V and ASTM E164. Calibration of a compression wave probe used to measure thickness is simple and is carried out using a stepped wedge calibration block. These calibration blocks have smooth, machined features and are therefore not truly representative of flaws in a welded component.
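The principle behind such a thickness measurement is simple pulse-echo timing: thickness equals sound velocity multiplied by half of the round-trip transit time. A minimal sketch of the calculation follows, assuming a typical textbook compression wave velocity for steel; the velocity and transit time are illustrative values, not figures from this article.

```python
# Minimal pulse-echo thickness sketch: thickness = velocity * transit_time / 2,
# the factor of 2 accounting for the out-and-back beam path. The steel
# compression wave velocity is a typical textbook value, assumed here.

STEEL_COMPRESSION_VELOCITY = 5.92  # mm/us (~5920 m/s, assumed)

def thickness_mm(transit_time_us: float,
                 velocity_mm_per_us: float = STEEL_COMPRESSION_VELOCITY) -> float:
    """Thickness from the round-trip transit time of the back-wall echo."""
    return velocity_mm_per_us * transit_time_us / 2.0

# A back-wall echo returning after about 8.45 us suggests roughly 25mm of
# steel - for example, one step of a stepped wedge calibration block.
print(f"{thickness_mm(8.45):.1f} mm")
```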
For calibrating equipment to be used to interrogate welded joints, the calibration block needs to be more complex than a simple step wedge; probably the two most common types are illustrated in Fig. 1, the ISO 2400 Number 1 block and the ISO 7963 Number 2 block. These are machined from steel to very closely controlled tolerances and contain a number of features that can be used to calibrate the ultrasonic equipment. The standard Number 1 block is 300mm long and 25 or 50mm thick, with a 100mm radius machined on one end. The test block also contains two drilled holes, 50mm and 1.5mm in diameter, and a flat-bottomed machined notch.
Weld discontinuity acceptance criteria are initially based on the height of the signal displayed on the oscilloscope screen. This is not as simple as it may appear, since the ultrasonic beam is influenced by the microstructure of the metal through which it is propagating, becoming scattered and diffused – similar to car headlights in fog! As a general rule, the larger the grain size, the greater the scattering effect; the reflected beam becomes attenuated, or decreased in strength, the further the reflector is from the ultrasonic probe. This must be taken into account when accepting or rejecting flaws within the weld – a 4MHz signal would lose some 0.02–0.03dB per mm in steel. Fig. 2 illustrates this decrease in amplitude, or signal height, with distance.
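To give a feel for the scale of this attenuation, the short sketch below simply applies the quoted 0.02–0.03dB per mm figure over an assumed beam path; the path length is an invented example, not a value from the article.

```python
# Illustrative only: apply the quoted attenuation figure for a 4MHz beam
# in steel (0.02-0.03 dB per mm) over an assumed beam path length.

def attenuation_loss_db(path_mm: float, coeff_db_per_mm: float) -> float:
    """Total attenuation loss in dB over a given beam path."""
    return path_mm * coeff_db_per_mm

path = 250.0  # mm, assumed beam path for illustration
for coeff in (0.02, 0.03):
    loss = attenuation_loss_db(path, coeff)
    print(f"{coeff:.2f} dB/mm over {path:.0f} mm -> {loss:.1f} dB")
# 5.0-7.5 dB over this path - a substantial drop in signal height,
# hence the need for a distance amplitude correction.
```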
Before calibrating, the operator must select the frequency of the transducer, as this determines the wavelength of the sound. The frequency has a significant effect on the ability to detect a flaw – a rule of thumb is that a flaw must be larger than one half of the wavelength to be readily detectable.
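The wavelength follows from dividing the sound velocity by the frequency. The sketch below works this through for some common probe frequencies, assuming typical textbook sound velocities for steel; the velocities and frequencies are illustrative, not specified in this article.

```python
# Sketch of wavelength = velocity / frequency and the half-wavelength
# rule of thumb. Velocities are typical textbook values for steel,
# assumed here for illustration.

STEEL_VELOCITY_MM_PER_US = {
    "compression": 5.92,  # ~5920 m/s
    "shear": 3.24,        # ~3240 m/s
}

def wavelength_mm(velocity_mm_per_us: float, frequency_mhz: float) -> float:
    """Wavelength in mm: velocity (mm/us) divided by frequency (MHz)."""
    return velocity_mm_per_us / frequency_mhz

for mode, velocity in STEEL_VELOCITY_MM_PER_US.items():
    for frequency in (2.0, 4.0, 5.0):
        lam = wavelength_mm(velocity, frequency)
        # Rule of thumb: a flaw smaller than half a wavelength is
        # unlikely to be readily detectable.
        print(f"{mode:11s} {frequency:.0f} MHz: wavelength {lam:.2f} mm, "
              f"minimum detectable flaw ~{lam / 2:.2f} mm")
```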
The ultrasonic operator will select a calibration block with some feature of known dimensions, often a 3mm diameter flat-bottomed hole, and the appropriate ultrasonic probe, these generally being specified in the relevant application code or standard. The height of the reflection at known distances from the probe is determined, and from these data a distance amplitude correction (DAC) curve is drawn by joining the tips of the signals, as can be seen in Fig. 2. This provides a means of establishing a 'reference level sensitivity' as a function of distance from the ultrasonic probe and allows the signals from similar reflectors to be evaluated.
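A minimal sketch of the idea, assuming a handful of recorded peak amplitudes at known beam paths (the calibration readings below are invented for illustration), is to interpolate between the recorded points so that a reference amplitude can be looked up at any distance:

```python
# Minimal DAC-curve sketch: linear interpolation between peak amplitudes
# recorded from the same reference reflector at known beam paths.
# The calibration readings below are invented for illustration.

import bisect

class DacCurve:
    def __init__(self, points):
        """points: list of (beam_path_mm, amplitude_percent_fsh) pairs."""
        self.points = sorted(points)
        self.paths = [p for p, _ in self.points]

    def reference_amplitude(self, path_mm: float) -> float:
        """Reference amplitude (% full screen height) at a given beam path."""
        i = bisect.bisect_left(self.paths, path_mm)
        if i == 0:
            return self.points[0][1]
        if i == len(self.points):
            return self.points[-1][1]
        (x0, y0), (x1, y1) = self.points[i - 1], self.points[i]
        return y0 + (y1 - y0) * (path_mm - x0) / (x1 - x0)

# Assumed calibration readings from a 3mm flat-bottomed hole.
dac = DacCurve([(25, 80.0), (50, 62.0), (100, 38.0), (200, 15.0)])
print(f"Reference level at 75 mm: {dac.reference_amplitude(75):.1f} % FSH")
```

In practice the curve is drawn on, or stored by, the flaw detector itself; the sketch simply shows the look-up that the curve provides.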
The characteristics of an ultrasonic probe vary according to the size of the piezo-electric transducer and its frequency. It is therefore essential that each probe used to examine a welded component is individually calibrated and that a DAC curve is established for each different situation.
The contract specification, application code or acceptance standard specifies the relevant ultrasonic acceptance criteria – the height, length, position etc of the reflector. It is unwise to refer to a visual or radiographic acceptance standard in the absence of a relevant ultrasonic acceptance standard. An ultrasonic acceptance standard will state which reflectors are acceptable or unacceptable based on the amplitude of the signal compared with a DAC curve, or on other ultrasonic-specific acceptance criteria. One such specification that refers to the DAC curve is ISO 11666 'NDT of welds – Ultrasonic testing – Acceptance levels', which defines four levels:
- the reference level, ie the amplitude of the DAC curve at the relevant distance
- the evaluation level, ie the amplitude at which the reflector must be examined more closely to determine the through-thickness height and length of the discontinuity
- the recording level, ie the amplitude at which the size and position of the discontinuity must be recorded
- the acceptance level, above which the discontinuity must be rejected – this may be above or below the DAC curve

Any reflector with a signal below the evaluation level would be ignored.
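As a hedged sketch of how these levels might be applied, the fragment below classifies a signal by its amplitude in dB relative to the DAC curve; the level offsets used are assumptions chosen for illustration, since the standard sets the actual values according to the testing level.

```python
# Hedged sketch of applying ISO 11666-style levels to a measured signal.
# The level offsets relative to the DAC reference are assumptions chosen
# for illustration - the standard itself sets them per testing level.

def classify(signal_db_rel_dac: float,
             evaluation_db: float = -10.0,
             recording_db: float = -6.0,
             acceptance_db: float = -4.0) -> str:
    """Classify a signal by its amplitude in dB relative to the DAC curve."""
    if signal_db_rel_dac < evaluation_db:
        return "ignore"                      # below evaluation level
    if signal_db_rel_dac < recording_db:
        return "evaluate"                    # examine more closely
    if signal_db_rel_dac < acceptance_db:
        return "record"                      # size and position recorded
    return "reject"                          # above acceptance level

for s in (-14.0, -8.0, -5.0, 2.0):
    print(f"{s:+.1f} dB relative to DAC -> {classify(s)}")
```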
Fig. 2 The reduction in amplitude with distance
If, as the ultrasonic testing (U/T) probe is scanned across the surface of the component, the amplitude of the signal exceeds the specified evaluation level, the U/T operator will need to investigate the reflector in detail to determine its size, orientation and position within the component. If the probe is moved transverse and parallel to the weld and rotated slightly, a skilled and experienced operator can often identify the flaw type – crack, lack of fusion, etc – by observing the changes in the shape of the pulse-echo on the oscilloscope screen.
To enable the operator to identify the position of a flaw, it must be possible to visualise the path and width of the beam. Accurately dimensioned diagrams of the weld cross-section, superimposed on what would be the beam path, are required. This may be unnecessary in many situations, but it provides additional confidence in critical applications and may be a mandatory part of a written U/T procedure.
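The geometry behind such beam path diagrams is standard angle-probe trigonometry. A minimal sketch follows, assuming an example probe angle and plate thickness, both invented for illustration.

```python
# Sketch of standard angle-probe beam-path geometry used when plotting
# the beam over a weld cross-section. Probe angle and plate thickness
# are assumed example values.

import math

def skip_distances(thickness_mm: float, probe_angle_deg: float):
    """Surface stand-off and beam path for half-skip and full-skip."""
    theta = math.radians(probe_angle_deg)
    half_skip_standoff = thickness_mm * math.tan(theta)
    half_skip_path = thickness_mm / math.cos(theta)
    return {
        "half_skip": (half_skip_standoff, half_skip_path),
        "full_skip": (2 * half_skip_standoff, 2 * half_skip_path),
    }

# Assumed: 70 degree shear-wave probe on 20mm plate.
for skip, (standoff, path) in skip_distances(20.0, 70.0).items():
    print(f"{skip}: stand-off {standoff:.1f} mm, beam path {path:.1f} mm")
```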
The size of a reflector is generally determined by the '6dB drop method', as illustrated in Fig. 3. The operator moves the probe backwards and forwards at right angles to the axis of the reflector until the maximum amplitude response is found. This point is noted and the scanning continued until the amplitude of the signal has dropped by 6dB, this point also being recorded. From these points the length or height of the reflector can be determined (Fig. 3). If above the recording level, this would be recorded on the U/T report before being compared with the acceptance standard for either acceptance or rejection.
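A hedged sketch of the arithmetic follows, using an invented scan of probe positions and echo amplitudes: the reflector dimension is estimated as the distance between the two points at which the response falls 6dB below the peak.

```python
# Hedged sketch of 6 dB drop sizing: given probe positions and signal
# amplitudes from a scan (invented figures), find where the response
# falls 6 dB below the peak on each side; the distance between those
# crossing points estimates the reflector dimension.

def six_db_drop_size(positions_mm, amplitudes_db):
    """Estimate reflector size from a scan of (position, amplitude in dB)."""
    peak = max(amplitudes_db)
    threshold = peak - 6.0
    crossings = []
    for (x0, a0), (x1, a1) in zip(
            zip(positions_mm, amplitudes_db),
            zip(positions_mm[1:], amplitudes_db[1:])):
        # Linear interpolation where the trace crosses the -6 dB threshold.
        if (a0 - threshold) * (a1 - threshold) < 0:
            crossings.append(x0 + (threshold - a0) * (x1 - x0) / (a1 - a0))
    return crossings[-1] - crossings[0] if len(crossings) >= 2 else None

# Invented scan data: probe position along the weld vs echo amplitude.
pos = [0, 2, 4, 6, 8, 10, 12, 14, 16]
amp = [-20, -12, -5, -1, 0, -2, -7, -15, -22]
print(f"Estimated reflector length: {six_db_drop_size(pos, amp):.1f} mm")
```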
It is impossible to measure accurately the size of a reflector using a manual scanning technique, for a number of reasons. The speed of sound within the component may vary due to changes in the microstructure and cleanliness of the parent metal; the probe will be made to within dimensional tolerances, as will the calibration block, and these will affect the accuracy of calibration; the beam width may vary; the couplant and the surface condition of the component will affect the coupling and hence the sound transmission; the surfaces of flaws within the weld are generally not flat, smooth, reflective surfaces oriented at 90 degrees to the beam; and the probe movement is measured manually with a rule or tape measure. The most important factors in achieving accurate, consistent and reproducible results are the skill, competence and integrity of the operator.