
Pattern Recognition, Vol. 19, No. 1, pp. 41-47, 1986.
0031-3203/86 $3.00 + .00
Printed in Great Britain. Pergamon Press Ltd.
© 1986 Pattern Recognition Society

MINIMUM ERROR THRESHOLDING
J. KITTLER and J. ILLINGWORTH
SERC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, U.K.

(Received 15 August 1984; in final form 13 June 1985; received for publication 8 July 1985)

Abstract: A computationally efficient solution to the problem of minimum error thresholding is derived under the assumption of object and pixel grey level values being normally distributed. The method is applicable in multithreshold selection.

Thresholding Minimum error decision rule Classification error Dynamic clustering

1. INTRODUCTION

Thresholding is a popular tool for segmenting grey level images. The approach is based on the assumption that object and background pixels in the image can be distinguished by their grey level values. By judiciously choosing a grey level threshold between the dominant values of object and background intensities, the original grey level image can be transformed into a binary form so that the image points associated with the objects and the background will assume values one and zero, respectively. Although the method appears to be simplistic, it is very important and fundamental, with wide applicability, as it is relevant not only for segmenting the original sensor data but also for segmenting its linear and non-linear image-to-image transforms.

Apart from the recently proposed direct threshold selection method,(1,2) determination of a suitable threshold involves the computation of the histogram or other function of the grey level intensity and its subsequent analysis. For a more detailed review of various approaches to thresholding the reader is referred to Kittler et al.(3)

An effective approach is to consider thresholding as a classification problem. If the grey level distributions of object and background pixels are known or can be estimated, then the optimal, minimum error threshold can be obtained using the results of statistical decision theory.(6)

It is often realistic to assume that the respective populations are distributed normally with distinct means and standard deviations. Under this assumption the population parameters can be inferred from the grey level histogram by fitting, as advocated by Nagawa and Rosenfeld,(5) and then the corresponding optimal threshold can be determined. However, their approach is computationally involved. In this paper we propose an alternative solution which is more efficient. The principal idea behind the method is to optimise a criterion function related to the average pixel classification error rate. The approach is developed in Section 2 and its properties discussed in Section 3. The method can be easily extended to cope with the problem of multiple threshold selection, as will be shown in Section 4. Finally, an iterative implementation of the method is described in Section 5.

2. METHOD OF THRESHOLD SELECTION

Let us consider an image whose pixels assume grey level values, g, from the interval [0, n]. It is convenient to summarise the distribution of the grey levels in the image in the form of a histogram h(g) which gives the frequency of occurrence of each grey level in the image. The histogram can be viewed as an estimate of the probability density function p(g) of the mixture population comprising grey levels of object and background pixels. In the following we shall assume that each of the two components p(g|i) of the mixture is normally distributed with mean μ_i, standard deviation σ_i and a priori probability P_i, i.e.

    p(g) = Σ_{i=1}^{2} P_i p(g|i),    (1)

where

    p(g|i) = [1/√(2π σ_i²)] exp[-(g - μ_i)²/(2σ_i²)].    (2)

For given p(g|i) and P_i there exists a grey level τ for which the grey levels g satisfy (for example Devijver and Kittler(6))

    P_1 p(g|1) ≥ P_2 p(g|2) for g ≤ τ, and P_1 p(g|1) < P_2 p(g|2) for g > τ.    (3)

τ is the Bayes minimum error threshold at which the image should be binarised. Taking the logarithm of both sides in (3), this condition can be re-expressed as

    (g - μ_1)²/σ_1² + 2 log σ_1 - 2 log P_1 ≤ (g - μ_2)²/σ_2² + 2 log σ_2 - 2 log P_2 for g ≤ τ,    (4)

with the inequality reversed for g > τ.


The problem of minimum error threshold selection is to determine the threshold level τ.

The minimum error threshold can be obtained by solving the quadratic equation defined by equating the left and right hand sides of (4). However, the parameters μ_i, σ_i and P_i of the mixture density p(g) associated with an image to be thresholded will not usually be known. Nevertheless, these parameters can be estimated from the grey level histogram h(g) using fitting techniques. This approach has been advocated by Nagawa and Rosenfeld.(5) Computationally their method is very involved, as it requires optimisation of a goodness-of-fit criterion function by a hill climbing procedure.

In this paper we derive a much simpler technique for finding the optimum threshold τ. Suppose that we threshold the grey level data at some arbitrary level T and model each of the two resulting pixel populations by a normal density h(g|i, T) with parameters μ_i(T), σ_i(T) and a priori probability P_i(T) given, respectively, as

    P_i(T) = Σ_{g=a}^{b} h(g),    (5)

    μ_i(T) = [Σ_{g=a}^{b} h(g) g] / P_i(T)    (6)

and

    σ_i²(T) = [Σ_{g=a}^{b} {g - μ_i(T)}² h(g)] / P_i(T),    (7)

where

    a = 0 for i = 1 and a = T + 1 for i = 2,    (8)

and

    b = T for i = 1 and b = n for i = 2.    (9)

Now using the models h(g|i, T), i = 1, 2, the conditional probability e(g, T) of grey level g being replaced in the image by a correct binary value is given by

    e(g, T) = h(g|i, T) · P_i(T)/h(g), where i = 1 for g ≤ T and i = 2 for g > T.    (10)

As h(g) is independent of both i and T, the denominator in (10) will for the moment be ignored in our analysis. Taking the logarithm of the numerator in (10) and multiplying the result by -2 we get the quantity

    ε(g, T) = [(g - μ_i(T))/σ_i(T)]² + 2 log σ_i(T) - 2 log P_i(T), where i = 1 for g ≤ T and i = 2 for g > T,    (11)

which can be considered as an alternative index of correct classification performance. The average performance figure for the whole image can then be characterised by the criterion function

    J(T) = Σ_g h(g) · ε(g, T).    (12)

The role of this criterion function in finding the (approximate) minimum error threshold is rather subtle. For a given threshold T the criterion reflects indirectly the amount of overlap between the Gaussian models of the object and background populations. Note that the density functions defining the models will overlap, even though, as a result of the histogram partitioning at threshold T, the tails of the distributions are ignored in their estimates.

As threshold T is varied the models of the population distributions change. The better the fit between the data and the models, the smaller the overlap between the density functions and therefore the smaller the classification error. This behaviour is illustrated in Fig. 1 for two different values of threshold T. The corresponding overlaps of the density functions are indicated by hatching. The value of threshold T yielding the lowest value of criterion J(T) will give the best fit model and therefore the minimum error threshold.

In summary, instead of determining τ indirectly as in Nagawa and Rosenfeld,(5) the problem of minimum error threshold selection can be formulated as one of minimising criterion J(T), i.e.

    J(τ) = min_T J(T).    (13)

It should be noted that even at the optimum value of T, the models h(g|i, T), i = 1, 2 will represent biased estimates of the true components of the mixture of normal probability distributions, due to the truncation of the tails of these distributions by histogram partitioning. However, we shall assume that the effect of this bias is small and can be ignored.

Let us express criterion J(T) as

    J(T) = Σ_{g=0}^{T} h(g) {[(g - μ_1(T))/σ_1(T)]² + 2 log σ_1(T) - 2 log P_1(T)}
         + Σ_{g=T+1}^{n} h(g) {[(g - μ_2(T))/σ_2(T)]² + 2 log σ_2(T) - 2 log P_2(T)}.    (14)

Substituting (5) through (7) into J(T) we find

    J(T) = 1 + 2[P_1(T) log σ_1(T) + P_2(T) log σ_2(T)] - 2[P_1(T) log P_1(T) + P_2(T) log P_2(T)].    (15)

The criterion function J(T) in (15) can be computed easily and finding its minima is a relatively simple task, as the function is smooth.
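Criterion (15) is cheap enough to evaluate that the exhaustive search over T fits in a few lines. The sketch below is our own illustration of equations (5)-(7) and (15), not the authors' code; names are invented, and the histogram is normalised internally so raw counts can be passed in.

```python
import math

def criterion_J(h, T):
    """Evaluate criterion (15) for histogram h (a list of counts) and
    threshold T, using the partition statistics of equations (5)-(7)."""
    total = float(sum(h))
    P1 = sum(h[:T + 1]) / total
    P2 = 1.0 - P1
    if P1 <= 0.0 or P2 <= 0.0:
        return float("inf")  # empty class: partition is degenerate
    mu1 = sum(g * h[g] for g in range(T + 1)) / (P1 * total)
    mu2 = sum(g * h[g] for g in range(T + 1, len(h))) / (P2 * total)
    v1 = sum((g - mu1) ** 2 * h[g] for g in range(T + 1)) / (P1 * total)
    v2 = sum((g - mu2) ** 2 * h[g] for g in range(T + 1, len(h))) / (P2 * total)
    if v1 <= 0.0 or v2 <= 0.0:
        return float("inf")  # a single occupied level has no spread
    s1, s2 = math.sqrt(v1), math.sqrt(v2)
    return (1.0 + 2.0 * (P1 * math.log(s1) + P2 * math.log(s2))
                - 2.0 * (P1 * math.log(P1) + P2 * math.log(P2)))

def min_error_threshold(h):
    """Exhaustive search for the threshold minimising J(T)."""
    return min(range(1, len(h) - 1), key=lambda T: criterion_J(h, T))
```

On a synthetic bimodal histogram built from two equal Gaussian modes at 50 and 150 the minimum lies near T = 100, as in Fig. 1(a).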

[Figure 1 panel annotations: (a) P1 = 0.5, μ1 = 50, σ1 = 15; P2 = 0.5, μ2 = 150, σ2 = 15. (b) T = 50: P1 = 0.25, μ1 = 38, σ1 = 9; P2 = 0.75, μ2 = 121, σ2 = 44. (c) T = 70: P1 = 0.45, μ1 = 47.]

Fig. 1. (a) Grey level histogram defined by two equipopulous Gaussian distributions. Optimal threshold is at T = 100. (b, c) Population models for trial thresholds T = 50 and T = 70. Binarisation error is represented by the hatched overlapping areas of the model population density functions.

Fig. 2. Bimodal histogram. P1 = 0.5, μ1 = 50, σ1 = 4; P2 = 0.5, μ2 = 150, σ2 = 30. Experimentally determined minimum error threshold τ = 64. Threshold selected by standard algorithms(7-9) T0 = 103.

Fig. 3. Criterion function J(T) for histogram of Fig. 2.

Fig. 4. 50 × 50 pixel square in a 512 × 512 image. P1 = 0.99, μ1 = 90, σ1 = 10; P2 = 0.01, μ2 = 170, σ2 = 10.

Fig. 5. Grey level histogram of Fig. 4.

3. PROPERTIES OF THE CRITERION FUNCTION

The criterion function may have local minima at the boundaries of the interval of grey levels. However, these minima are not of interest. A unique internal minimum implies histogram bimodality and it corresponds to the optimum (minimum error) threshold. This has been verified experimentally on the histogram shown in Fig. 2, containing object and background grey level modes of unequal variances. Figure 3 gives the corresponding criterion function. Note that the popular threshold selection methods of Otsu(7) and Ridler,(8,9) which use a less sophisticated model for the two modes, give a biased threshold indicated by T0 in Fig. 2. The reasons why the bias occurs are discussed in Kittler and Illingworth.(10)

Criterion (15) is also more appropriate for thresholding images with highly unequal population sizes, exceeding the ratio of 1:10² or more, where the methods of Otsu(7) and Ridler(8,9) tend to split the larger mode into two halves.(10) Figure 4 is such an image and Fig. 5 shows the corresponding histogram of grey level values. The minimum of the criterion function, shown in Fig. 6, gives a good threshold value for segmenting the square from the background. Note that at the peak of the larger mode the criterion function attains a maximum.

The absence of an internal minimum is indicative of a unimodal histogram, which would correspond to a homogeneous image. Binarisation of such an image would be inappropriate, and this case is illustrated in Figs 7 and 8, which show the histogram and the corresponding criterion function for a homogeneous image. The advocated method is again superior to those of Otsu(7) and Ridler,(8,9) which would split the only mode in the middle and result in a 'pepper and salt' binary image.

The threshold selection approach has been applied to an image of an industrial part (Fig. 9). As the illumination was non-uniform, the image had to be binarised using a variable thresholding technique. The 512 × 512 image was divided into subimages of 32 × 32 pixels and an optimal threshold selected for each thresholdable window (i.e. one containing both object and background pixels). A bilinear interpolation of the determined thresholds defined a threshold value at every pixel of the image. The resulting binary image is shown in Fig. 10.

Fig. 6. Criterion function J(T) for histogram of Fig. 5.
Fig. 7. Unimodal grey level histogram.
Fig. 8. J(T) for Fig. 7. No internal minima exist.
Fig. 9. Poorly illuminated image of an industrial part.
Fig. 10. Binary image derived using minimum error thresholding.
Fig. 11. Trimodal histogram. P1 = P2 = P3; μ1 = 50, μ2 = 100, μ3 = 150; σ1 = σ2 = σ3 = 10.

4. MULTIPLE THRESHOLD SELECTION

In the presence of more than one internal minimum the grey level histogram will be at least trimodal. Although the points of minima of J(T) could be used as thresholds, they are likely to be biased. For example, Fig. 11 shows a grey level histogram consisting of 3 equipopulous modes. The means of the 3 distributions are at 50, 100 and 150 and they all have a variance of 100 grey levels. In the corresponding J(T), minima are present at grey level values of 70 and 130. A possible way of eliminating this bias is to partition the histogram at the first biased threshold level given by J(T). The second of the two resulting grey level histograms can then be subjected to histogram analysis by the above method to determine the unbiased threshold level. This procedure is then repeated for the histogram obtained by partitioning the original histogram at the above unbiased threshold.
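The two-threshold case can also be attacked head on by evaluating criterion (16) over pairs (T_1, T_2) and keeping the pair with the lowest value, which is how the criterion space of Fig. 12 is produced. The Python sketch below is our illustration, not the authors' code; names are invented and a coarse search step keeps the scan cheap.

```python
import math

def J_multi(h, thresholds):
    """Criterion (16) for the classes produced by a sorted tuple of
    thresholds; T_0 = -1 and T_m = n are implied, as in equation (20)."""
    cuts = [-1] + list(thresholds) + [len(h) - 1]
    total = float(sum(h))
    J = 1.0
    for a, b in zip(cuts[:-1], cuts[1:]):
        levels = range(a + 1, b + 1)
        P = sum(h[g] for g in levels) / total
        if P <= 0.0:
            return float("inf")  # empty class
        mu = sum(g * h[g] for g in levels) / (P * total)
        var = sum((g - mu) ** 2 * h[g] for g in levels) / (P * total)
        if var <= 0.0:
            return float("inf")  # single occupied level
        # P*log(var) equals the 2*P*log(sigma) term of (16)
        J += P * (math.log(var) - 2.0 * math.log(P))
    return J

def best_pair(h, step=5):
    """Coarse exhaustive search over pairs (T1, T2) with T1 < T2."""
    grid = range(step, len(h) - step, step)
    return min(((t1, t2) for t1 in grid for t2 in grid if t1 < t2),
               key=lambda pair: J_multi(h, pair))
```

On a synthetic version of the Fig. 11 histogram the minimising pair should come out close to the (75, 125) reported in the text.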

Alternatively, and generally, we can use a multithreshold extension of the proposed method. Suppose the histogram contains m modes, i.e. it is a mixture of m normal densities. By analogy, the corresponding criterion to be optimised is

    J(T_1, ..., T_{m-1}) = 1 + 2 Σ_{i=1}^{m} {P_i(T_i) [log σ_i(T_i) - log P_i(T_i)]},    (16)

where

    P_i(T_i) = Σ_{g=T_{i-1}+1}^{T_i} h(g),    (17)

    μ_i(T_i) = [1/P_i(T_i)] Σ_{g=T_{i-1}+1}^{T_i} g h(g),    (18)

    σ_i²(T_i) = [1/P_i(T_i)] Σ_{g=T_{i-1}+1}^{T_i} [g - μ_i(T_i)]² h(g)    (19)

and

    T_m = n,  T_0 = -1.    (20)

Here the number of possible sets of candidate thresholds to be evaluated is considerably greater. Specifically, it is given by (n + 1)!/[(n + 2 - m)! (m - 1)!]. Thus, for instance, when m = 3 and n = 255 the number of points for which the criterion function must be computed is 32,640.

The application of the multithreshold selection procedure to the trimodal histogram of Fig. 11 yielded the criterion function space J(T_1, T_2) shown in Fig. 12, with bright grey level values corresponding to low values of J(T_1, T_2). The only internal minimum lies in the central bright area. The coordinates of the minimum correspond to the correct minimum error thresholds T_1 = 75, T_2 = 125.

Fig. 12. J(T_1, T_2) for trimodal histogram of Fig. 11.

5. ITERATIVE IMPLEMENTATION

The criterion (15) [and for that matter (16)] can be minimised iteratively using the dynamic clustering algorithm.(6) It is possible to show that for any partition of the grey level histogram using the threshold T the criterion function value can be decreased by assigning the grey level values to the classes of object and background pixels according to the Bayes minimum error rule defined in terms of parameters μ_i(T), σ_i(T) and P_i(T), i.e. g is assigned to class 1 if

    [(g - μ_1(T))/σ_1(T)]² + 2[log σ_1(T) - log P_1(T)] ≤ [(g - μ_2(T))/σ_2(T)]² + 2[log σ_2(T) - log P_2(T)],    (21)

and to class 2 otherwise.

This decision rule effectively defines a new threshold which can be obtained by solving the following quadratic equation

    g² [1/σ_1²(T) - 1/σ_2²(T)] - 2g [μ_1(T)/σ_1²(T) - μ_2(T)/σ_2²(T)] + μ_1²(T)/σ_1²(T) - μ_2²(T)/σ_2²(T)
    + 2[log σ_1(T) - log σ_2(T)] - 2[log P_1(T) - log P_2(T)] = 0.    (22)

The procedure can be repeated for this new value of threshold, thus reducing the criterion function value even further. The algorithm is terminated when the threshold becomes stable. The algorithm can be formally stated as follows.

Step 1. Choose an arbitrary initial threshold T;
Step 2. Compute μ_i(T), σ_i(T), P_i(T), i = 1, 2;
Step 3. Compute the updated threshold by solving equation (22);
Step 4. If the new threshold equals the old one, terminate the algorithm; else go to Step 2.

The threshold selection algorithm is very fast, but the user must be aware of various pitfalls that could result in a nonsensical thresholding. A suitable strategy should be adopted to counteract them. For instance, the algorithm may converge to the boundary points of the grey level range. The convergence to an internal minimum of function J(T) does not guarantee that it is a unique minimum and therefore a good threshold. The algorithm may hang at some threshold value after just one or a few iterations because of the coarse quantisation of the image intensity values. The general strategy is to run the threshold selection algorithm starting from several initial thresholds and then compare the results.
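Steps 1-4 can be realised directly: fit the two-class model (5)-(7) for the current T, form the coefficients of quadratic (22), and take the root lying between the class means as the updated threshold. The Python sketch below is our own rendering of the algorithm (names invented); it raises an error on imaginary roots, the case where the text prescribes restarting from a different initial threshold.

```python
import math

def iterate_threshold(h, T0, max_iter=100):
    """Dynamic clustering iteration (Steps 1-4): refit the two-class model
    for the current threshold T, then solve quadratic (22) for the update."""
    n = len(h) - 1
    total = float(sum(h))

    def stats(a, b):
        # a priori probability, mean and variance of levels a..b, eqs (5)-(7)
        P = sum(h[a:b + 1]) / total
        mu = sum(g * h[g] for g in range(a, b + 1)) / (P * total)
        var = sum((g - mu) ** 2 * h[g] for g in range(a, b + 1)) / (P * total)
        return P, mu, var

    T = T0
    for _ in range(max_iter):
        P1, mu1, v1 = stats(0, T)          # Step 2: class 1 parameters
        P2, mu2, v2 = stats(T + 1, n)      # Step 2: class 2 parameters
        # Step 3: coefficients of the quadratic (22) in g
        A = 1.0 / v1 - 1.0 / v2
        B = -2.0 * (mu1 / v1 - mu2 / v2)
        C = (mu1 ** 2 / v1 - mu2 ** 2 / v2
             + math.log(v1 / v2) - 2.0 * math.log(P1 / P2))
        if abs(A) < 1e-12:                 # equal variances: (22) is linear
            root = -C / B
        else:
            disc = B * B - 4.0 * A * C
            if disc < 0.0:
                raise ValueError("imaginary roots: restart from another T")
            r = math.sqrt(disc)
            # keep the root lying between the two class means
            root = min([(-B - r) / (2.0 * A), (-B + r) / (2.0 * A)],
                       key=lambda x: abs(x - 0.5 * (mu1 + mu2)))
        T_new = max(0, min(n - 1, int(round(root))))
        if T_new == T:                     # Step 4: threshold is stable
            return T
        T = T_new
    return T
```

Starting from T = 128 on a symmetric bimodal histogram with modes at 50 and 150, the iteration settles near 100 in a few steps.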

Another point to note is that at some values of T the product of the a priori probability and the conditional density of one class exceeds that of the other over the entire grey level range. Hence the quadratic equation in (22) will have imaginary roots. If such a situation occurs the algorithm must be initialised at a new starting point.

6. CONCLUSIONS

A computationally efficient solution to the problem of minimum error thresholding has been derived under the assumption of object and pixel grey level values being normally distributed. The principal idea behind the method is to optimise the average pixel classification error rate directly, using either an exhaustive search or an iterative algorithm. The method is applicable in multithreshold selection.

REFERENCES

1. J. Kittler, J. Illingworth, J. Föglein and K. Paler, An automatic thresholding algorithm and its performance, Proc. 7th Int. Conf. on Pattern Recognition, Montreal, pp. 287-289 (1984).
2. J. Kittler, J. Illingworth, J. Föglein and K. Paler, An automatic thresholding method for waveform segmentation, Proc. Int. Conf. on Digital Signal Processing, Florence, pp. 727-732 (1984).
3. J. Kittler, J. Illingworth and J. Föglein, Threshold selection based on a simple image statistic, Comput. Vision Graphics Image Process. 30, 125-147 (1985).
4. A. Rosenfeld and A. C. Kak, Digital Picture Processing. Academic Press, New York (1976).
5. Y. Nagawa and A. Rosenfeld, Some experiments on variable thresholding, Pattern Recognition 11, 191-204 (1979).
6. P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach. Prentice/Hall, Englewood Cliffs, NJ (1982).
7. N. Otsu, A threshold selection method from grey level histograms, IEEE Trans. Syst. Man Cybernet. SMC-9, 62-66 (1979).
8. T. Ridler and S. Calvard, Picture thresholding using an iterative selection method, IEEE Trans. Syst. Man Cybernet. SMC-8, 630-632 (1978).
9. H. J. Trussel, Comments on picture thresholding using an iterative selection method, IEEE Trans. Syst. Man Cybernet. SMC-9, 311 (1979).
10. J. Kittler and J. Illingworth, On threshold selection using clustering criteria, IEEE Trans. Syst. Man Cybernet. SMC-15 (1985).

About the Author--JOSEF KITTLER was awarded a Ph.D. degree in Pattern Recognition in 1974 and since then has published a number of papers and a book (Pattern Recognition: A Statistical Approach, Prentice Hall, 1982) on topics in pattern recognition and image processing. Since 1980 he has been with the Rutherford Appleton Laboratory, where he is in charge of a research team working on projects in computer vision and remote sensing. He is the SERC Coordinator for Pattern Analysis.
Dr Kittler is an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, and is on the editorial board of Pattern Recognition, Pattern Recognition Letters, and Image and Vision Computing. He has been serving as a member of the BPRA Committee for several years.

About the Author--JOHN ILLINGWORTH received B.Sc. and D.Phil. degrees in Physics from the Universities of Birmingham and Oxford in 1978 and 1983, respectively. He is now employed as a Senior Research Associate at the Rutherford Appleton Laboratory, undertaking research in computer vision algorithms.
