Automatic Detection of Checkerboards on Blurred and Distorted Images

Martin Rufli, Davide Scaramuzza, and Roland Siegwart


Autonomous System Lab, ETH Zurich, Switzerland

Abstract— Most of the existing camera calibration toolboxes require the observation of a checkerboard shown by the user at different positions and orientations. This paper presents an algorithm for the automatic detection of checkerboards, described by the position and the arrangement of their corners, in blurred and heavily distorted images. The method can be applied to both perspective and omnidirectional cameras. An existing corner detection method is evaluated and its strengths and shortcomings in detecting corners on blurred and distorted test image sets are analyzed. Starting from the results of this analysis, several improvements are proposed, implemented, and tested. We show that the proposed algorithm is able to consistently identify 80% of the corners on omnidirectional images of as low as VGA resolution and approaches 100% correct corner extraction at higher resolutions, outperforming the existing implementation significantly. The performance of the proposed method is demonstrated on several test image sets of various resolution, distortion, and blur, which are exemplary for different kinds of camera-mirror setups in use.

Fig. 1. Left: hyperbolic mirror placed on a video camera. Top right: Philips SPC 300. Bottom right: Philips ToUCam Fun.
I. INTRODUCTION

Cameras can appear either with a limited field of view (i.e. perspective cameras) or with a wide field of view. Wide field of view cameras can be built by using fisheye lenses (e.g. Nikon or Sigma) [1] or by combining a standard perspective camera with a shaped mirror (i.e. catadioptric omnidirectional cameras, Fig. 1) [2].

Accurate camera calibration is necessary for any computer vision task that requires extracting metric information of the environment from 2D images, such as ego-motion estimation and structure from motion. All works on camera calibration can be classified into two categories. The first one includes methods which exploit prior knowledge about the scene, such as the presence of calibration patterns (e.g. see [3], [4], [5], [6], [7]) or lines ([8], or [5], [9] for an overview). The second group covers techniques that do not use this knowledge; this includes calibration methods from pure rotation or planar motion of the camera [10], and self-calibration procedures, which are performed from point correspondences and the epipolar constraint through minimizing an objective function [11], [12], [13].

In the last decade, several toolboxes have been implemented which allow any user to easily calibrate a camera (perspective, fisheye, or omnidirectional) by using a planar checkerboard as a calibration pattern (for perspective cameras see as an example [14], for fisheye and omnidirectional cameras see [15], [16]). These toolboxes require the user to take several pictures of the pattern shown at a few different positions and orientations. Then, the user is asked to identify the corner points of the checkerboard, which are used as the only input to the calibration routine. The toolboxes given in [14] and [16] require the user to identify the four external corners of the checkerboard in every test image by manually clicking on them. The approximate locations of the remaining corners are then simply interpolated, and a Harris corner finder is employed in the vicinity to refine their position. The toolbox given in [15], which is designed for fisheye and catadioptric central omnidirectional cameras, conversely requires the user to click on all corners of the checkerboard. Indeed, this toolbox makes no assumption about the shape of the lens or the mirror, thus the positions of the remaining corners cannot be interpolated from four points. The positions of the clicked points are also refined by a Harris corner finder.

In the light of the above, it was decided to design an automatic checkerboard extractor which could easily be incorporated into any of the above mentioned toolboxes. Such an add-on dramatically decreases calibration time and improves the user experience, while at the same time preserving high calibration accuracy and the correspondence between the same corners over all test images. In order to be useful, such an extraction algorithm needs to work with images from low resolution cameras (at least down to VGA, still widely in use), with high distortion (as introduced by omnidirectional cameras), and with blur, stemming from the fact that for catadioptric cameras usually not the whole mirror can be made to lie in focus. For this purpose, the checkerboard extraction algorithm by Vezhnevets [17], although developed for planar cameras, was found to be a good starting point.

(This work was supported by the European grant FP6-IST-1-045350 Robots@Home.)
A. Contribution and Outline
The main contribution of this paper is a novel heuristic to detect checkerboards in blurred and highly distorted images. In particular, we show that through this heuristic the detection rate of a standard checkerboard detection algorithm [17] increases from 20% up to 80%, reaching almost 100% using high quality cameras. Furthermore, the code is freely available online, both in its source form and incorporated into our camera calibration toolbox. To our knowledge this is the only such implementation readily available [15].

Fig. 2. Left: After adaptive thresholding and one erosion step (run one). Right: After adaptive thresholding and two erosion steps (run two).
This document is organized as follows. In Section II, the important steps of an existing corner extraction algorithm [17] are described and its strengths and shortcomings concerning the given task are elaborated. In Section III we propose and discuss several steps for increasing the code's performance. Section IV compares the performance of the improved algorithm against both the performance of the existing implementation and manual selection of corners.
II. THE ALGORITHM BY VEZHNEVETS

Fig. 3. Left: All found quadrangles after run one. Right: All found quadrangles after run two.

OpenCV [18] is an open source computer vision library initially developed by Intel. It features algorithms for many vision applications and is in particular equipped with a checkerboard corner extraction functionality developed by Vladimir Vezhnevets [17]. The function identifies single black checkers of a checkerboard and then tries to merge them back into the original pattern. As a region based method, it has the advantage of being much more robust to noise and blur than a line based method would be. What follows is a step-by-step analysis of the important parts of this algorithm. In Section III, we will adapt it to our needs.
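For orientation, this is a minimal sketch of how the detector discussed in this section is typically invoked from OpenCV's Python bindings; the image name and flag selection are illustrative assumptions, and the paper itself works with the underlying C implementation:

```python
import cv2

# Hypothetical input image of a checkerboard with 6x7 inner corners.
img = cv2.imread("calibration_image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# findChessboardCorners wraps the detector analyzed in this section.
found, corners = cv2.findChessboardCorners(
    gray, (7, 6),
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)

if found:
    # Optional sub-pixel refinement, analogous to the Harris-based
    # refinement used by the toolboxes cited in the introduction.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
```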
A. The Steps of the Algorithm

1) Algorithm Input: Input to the algorithm is an image containing a black-and-white checkerboard of a given size. If a color image is provided, it is first converted to greyscale. Then, the algorithm continues with a thresholding step.
2) Adaptive Threshold: Binary thresholding is well suited to separate black from white checkers under most circumstances. The algorithm supports adaptive thresholding, which binarizes the image locally according to a given mask size and method and generally delivers better segmentation results for non-uniformly lit images. Two kernel implementations are available: “mean” and “Gaussian”. In the original approach “mean” is used, which requires considerably less computational power and is thus well suited for checkerboard detection from a video stream, where execution time is critical. The checkers in the thresholded black-and-white image tend to be grown together due to blur, noise, and/or too coarse sampling. For correct identification they need to be separated, so an erosion step is applied.
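As an illustration, this step could be sketched with OpenCV's Python bindings as follows; the block size and offset are assumptions chosen for illustration, not the values of the original implementation:

```python
import cv2

def adaptive_binarize(gray, block_size=51, offset=10):
    """Locally threshold a greyscale image with a 'mean' kernel,
    as in the original approach; block_size must be odd."""
    return cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_MEAN_C,   # swap for ADAPTIVE_THRESH_GAUSSIAN_C
        cv2.THRESH_BINARY,
        block_size, offset)
```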
3) Erosion: The inclusion of an erosion step (by using a 3x3 “rect” kernel, see Fig. 5 right) is the main ingenious idea behind Vezhnevets' implementation. In this way it is possible to separate the checkerboard at the corners and obtain a set of black quadrangles (four-sided polygons). The contours of these quadrangles are then easily found with a binary contour finder. If no pattern is found during the next steps, it can be assumed that the checkers are still grown together. Therefore erosion is gradually increased and the following steps repeated. Fig. 2 illustrates how the checkers shrink and then separate from their neighbors.
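A minimal sketch of this step (assuming the thresholded image has black checkers on a white background; the image is inverted only so that cv2.erode shrinks the checkers):

```python
import cv2

def shrink_checkers(binary, iterations=1):
    """Separate touching black checkers by eroding them with a 3x3 'rect'
    kernel. The checkers are black in the thresholded image, so it is
    inverted first to make them the foreground."""
    checkers = cv2.bitwise_not(binary)          # black checkers -> white blobs
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.erode(checkers, kernel, iterations=iterations)
```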
4) Quadrangle Generation: A binary contour finder tries to find closed contours and, upon success, tries to fit a quadrangle onto each of them by gradually approximating a polygon. Notice how after the first erosion run (Fig. 3 left) only two checkers are properly separated and hence only two quadrangles are found. After two erosion steps (Fig. 3 right) the majority of quadrangles, but not all of them, are found. By applying even more erosion steps, the pattern starts to partially dissolve, resulting in non-detection of some (small) checkers.
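A sketch of this step in OpenCV's Python bindings (the deviation threshold is expressed as a fraction of the contour perimeter and its value is illustrative; the two-value return of findContours assumes OpenCV 4):

```python
import cv2

def find_quadrangles(eroded_checkers, eps_fraction=0.05):
    """Fit quadrangles to the closed contours of the separated checkers."""
    contours, _ = cv2.findContours(
        eroded_checkers, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        eps = eps_fraction * cv2.arcLength(c, True)
        poly = cv2.approxPolyDP(c, eps, True)
        # Keep only convex four-sided polygons of reasonable area.
        if len(poly) == 4 and cv2.isContourConvex(poly) and cv2.contourArea(poly) > 25:
            quads.append(poly.reshape(4, 2))
    return quads
```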
5) Quadrangle Linking: Quadrangles are then linked according to the following heuristic (a code sketch follows the list):

• For every corner of every found quadrangle, compute the distance to every corner of every other quadrangle. Store the smallest such distance and the respective corner and quadrangle ID.

• Check whether this distance is smaller than the smallest edge length of both quadrangles. This is intended to make sure that no quadrangle gets linked to quadrangles too far away.

• If these tests are passed, then the two corners are linked and the extracted corner position is set to the arithmetic mean of their former positions.

The extracted corners finally form a pattern described through their position and neighborhood relation with respect to the other corners.
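The original linking heuristic might be sketched as follows (plain NumPy over the 4x2 corner arrays produced in the previous sketch; function and variable names are illustrative):

```python
import numpy as np

def edge_lengths(quad):
    return [np.linalg.norm(quad[i] - quad[(i + 1) % 4]) for i in range(4)]

def link_corners(quads):
    """Merge nearby corners of different quadrangles (original heuristic)."""
    links = []
    for a, qa in enumerate(quads):
        for i, ca in enumerate(qa):
            # Closest corner of any other quadrangle.
            best = min(((np.linalg.norm(ca - cb), b, j)
                        for b, qb in enumerate(quads) if b != a
                        for j, cb in enumerate(qb)), default=None)
            if best is None:
                continue
            dist, b, j = best
            # Accept only if closer than the smallest edge of both quads.
            if dist < min(min(edge_lengths(qa)), min(edge_lengths(quads[b]))):
                merged = (ca + quads[b][j]) / 2.0   # arithmetic mean position
                links.append((a, i, b, j, merged))
    return links
```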
6) Further Steps: From all erosion runs, the algorithm then selects the corner pattern with the highest number of found corners. No information exchange between different erosion runs is performed. It is thus assumed that in a single given run every checker is theoretically identifiable. In case the largest pattern features too many corners (e.g. due to an erroneously identified checker caused by glare), the ones which result in the smallest convex hull are selected.
Fig. 5. Left: 3x3 “Cross” kernel. Right: 3x3 “Rect” kernel.

Fig. 4. All found corners after run two.

Fig. 6. New heuristic for corner linking: if the two candidate corners (red dots) lie on the same side of each of the four straights (i.e. inside the semitransparent yellow area), they are successfully matched.

B. Limitations

The OpenCV corner finding algorithm was designed for real-time calibration of regular cameras. Focus was laid on fast execution times, hence the use of a “mean” instead of a “Gaussian” mask during the adaptive threshold step. Furthermore, the algorithm only returns a pattern if the complete checkerboard was successfully detected, ignoring the fact that for calibration purposes it is often enough to correctly identify a significant portion of the corners. As will be shown in Section IV, the algorithm ceases to function properly with any combination of low resolution (VGA), blurred, and distorted images. Therefore it is of limited use for omnidirectional camera calibration, and thus for implementation into such toolboxes.
III. IMPROVEMENTS TO THE CODE

A. Adaptation of Erosion Kernels

For features of large size in comparison to the kernel used, erosion appears to affect all border pixels uniformly. Upon closer inspection, however, corners tend to get rounded, the exact amount depending on the orientation of the checker and the type of kernel used. This starts to have a significant effect on the checkers once they become of comparable size to the kernels themselves; a condition which is often fulfilled for omni-images taken at VGA resolution. Even though the smallest possible symmetric erosion kernel (a 3x3 maximum filter) was used in the original implementation, some improvements can nonetheless be achieved: the kernel size cannot be made smaller than 3x3, but its shape may be altered. For a symmetric 3x3 kernel it is possible to construct two shapes, namely “cross” and “rect”, as depicted in Fig. 5. Alternating between the two has the effect of preserving the aspect ratio of (small) checkers independent of their orientation, i.e. it allows for uniform “shrinking”.
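A sketch of the alternating erosion (reusing the inverted-image convention from the Section II sketch, i.e. checkers are white foreground; the number of runs is supplied by the caller):

```python
import cv2

RECT = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
CROSS = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

def erode_alternating(checkers, runs):
    """Erode `runs` times, alternating 'rect' and 'cross' kernels so that
    small checkers shrink uniformly regardless of their orientation."""
    out = checkers.copy()
    for i in range(runs):
        out = cv2.erode(out, RECT if i % 2 == 0 else CROSS)
    return out
```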
B. New Heuristic for Quadrangle Linking

In the original implementation, correctly identified black checkers are connected over their corners according to the heuristic described in Section II-A.5. It was found to work well for high resolution and mostly undistorted images of checkerboards. For distortions as introduced by omnidirectional cameras, however, not necessarily the closest corner should be matched to a given corner, as Fig. 6 illustrates. Correct corner matching is of utmost importance; mismatches disturb the structure of the extracted pattern and therefore invalidate all further steps. Our proposed solution to this issue is an enhanced heuristic which can be geometrically verified to work even under severe distortions (a code sketch follows the list):

• For every corner of every found quadrangle, compute the distance to every corner of every other quadrangle and check whether the distance is shorter than the shortest edge length of the two involved quadrangles. If true, accept the two corners as a candidate neighbor pair.

• For each candidate pair, focus on the quadrangles they belong to and draw two straight lines passing through the midsections of the respective quadrangle edges (see Fig. 6).

• If the candidate corner and the source corner lie on the same side of each of the four straight lines drawn this way (this corresponds to the yellow shaded area in Fig. 6), then the corners are successfully matched.
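A geometric sketch of this test (illustrative NumPy code; quadrangles are 4x2 corner arrays with corners in consecutive order):

```python
import numpy as np

def cross2(u, v):
    """z-component of the 2D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def same_side(p, q, a, b):
    """True if points p and q lie strictly on the same side of line (a, b)."""
    ab = b - a
    return cross2(ab, p - a) * cross2(ab, q - a) > 0

def midlines(quad):
    """The two lines through the midpoints of opposite edges of a 4x2 quad."""
    m = [(quad[i] + quad[(i + 1) % 4]) / 2.0 for i in range(4)]
    return [(m[0], m[2]), (m[1], m[3])]

def corners_match(corner_a, quad_a, corner_b, quad_b):
    """New linking test: both candidate corners must lie on the same side
    of all four mid-lines of the two involved quadrangles (cf. Fig. 6)."""
    return all(same_side(corner_a, corner_b, a, b)
               for a, b in midlines(quad_a) + midlines(quad_b))
```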
C. Adaptive Quadrangle Linking Distance

As mentioned in Section II-A.5, quadrangles only get linked if their corners are less than a certain distance apart. In the original implementation, the shortest edge length of the two involved quadrangles was, somewhat inaccurately, chosen as this distance limit. If the checkers are large with respect to the erosion, the error introduced is small. But for low resolution images, erosion has a large effect on the overall size of the quadrangle, which may result in a drastic reduction of the smallest edge length. Therefore the distance measure was adapted to incorporate the effect of erosion:

    d_limit = shortest edge length + 2 · erosion,    (1)

where the factor two is due to the erosion acting on both quadrangles.
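In code, Eq. (1) amounts to a small helper (sketch; `erosion_steps` counts how many erosion iterations were applied in the current run):

```python
import numpy as np

def shortest_edge(quad):
    return min(np.linalg.norm(quad[i] - quad[(i + 1) % 4]) for i in range(4))

def link_distance_limit(quad_a, quad_b, erosion_steps):
    # Eq. (1): the erosion acts on both quadrangles, hence the factor two.
    return min(shortest_edge(quad_a), shortest_edge(quad_b)) + 2 * erosion_steps
```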
Fig. 7. Visualization of the “matching over different dilation runs” procedure. Top: reference pattern (light green). Clearly the bottom checkers have not been identified. Middle: red quadrangles indicate candidate checkers found in another erosion run. Bottom: addition of some of these candidates to the reference pattern (bold red quadrangles).

D. Linking of Quadrangles over Multiple Erosion Runs

Through the mirror of an omnidirectional camera, blur is spread radially unevenly: depending on the focal distance of the camera, either points toward the center or points toward the border of the image tend to be more blurred. Because of this anisotropy, not all quadrangles are separated during the same erosion run. Some of them may even only start to separate when smaller ones have already completely disappeared. Therefore the problem may be encountered that, even though many quadrangles are successfully identified spread over multiple iterations, not all of them appear in a single one. We therefore tried to match patterns of found quadrangles over different erosion runs, combining partial results into complete ones. The algorithm was thus expanded as follows: the pattern where most quadrangles have been found is selected as the “reference pattern”. In a second (new) part, an attempt is made to match all previously found quadrangles of all erosion runs to the border of the reference pattern defined above. Upon a successful match, the reference pattern is updated to include the new quadrangle, and the whole process is repeated until no more additions are reported. Fig. 7 visualizes this second part in a sequence of images.
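The second part of the algorithm can be sketched at a high level as follows (illustrative Python; `links_to_border` stands for a predicate built on the corner matching test of Section III-B, and duplicate handling is omitted for brevity):

```python
def grow_reference_pattern(quads_per_run, links_to_border):
    """Combine partial detections from several erosion runs into one pattern.

    quads_per_run  : list of quadrangle lists, one per erosion run
                     (most strongly eroded run first, see Section III-E).
    links_to_border: callable(quad, reference_quads) -> bool, assumed to be
                     built on the corner matching test of Section III-B.
    """
    # Start from the run in which most quadrangles were found.
    best = max(range(len(quads_per_run)), key=lambda i: len(quads_per_run[i]))
    reference = list(quads_per_run[best])
    candidates = [q for i, run in enumerate(quads_per_run)
                  if i != best for q in run]

    added = True
    while added:                           # repeat until no more additions
        added = False
        remaining = []
        for quad in candidates:
            if links_to_border(quad, reference):
                reference.append(quad)     # grow the reference pattern
                added = True
            else:
                remaining.append(quad)
        candidates = remaining
    return reference
```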
E. Adaptation of the Polygonal Approximation Level

As described in Section II-A.4, extracted contours are sent to a polygonal approximator, which tries to fit quadrangles onto them. Depending on how much the approximated polygon is allowed to deviate from the true contour (deviation threshold), checkers that are connected due to blur are sometimes mistakenly approximated as a single quadrangle, which again disturbs the resulting pattern. Decreasing the deviation threshold leads to the identification of a substantially smaller number of quadrangles. At the same time, false positive detections are reduced as well. Therefore we decided to restrict the approximation of contours to a conservative level (i.e. select a low deviation threshold) in the first part of the algorithm, practically guaranteeing the extraction of correct quadrangles at the price of the number of found objects. The now smaller reference pattern is then introduced into the new part two of the algorithm (see Section III-D), where the polygonal approximation threshold is again increased. The idea is then to try matching quadrangles found during the most strongly eroded run to the reference pattern first (i.e. introducing runs in reverse order), as there the chance of separated checkers is highest. Addition of heavily eroded quadrangles to the reference structure decreases corner localization accuracy, however. With this adaptation, correct pattern extraction is therefore favored over corner accuracy.
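A sketch of the two approximation levels (the epsilon fractions are illustrative assumptions, not the values of the released implementation):

```python
import cv2

def fit_quadrangle(contour, eps_fraction):
    """Try to approximate a contour by a quadrangle; eps_fraction is the
    allowed deviation as a fraction of the contour perimeter."""
    eps = eps_fraction * cv2.arcLength(contour, True)
    poly = cv2.approxPolyDP(contour, eps, True)
    if len(poly) == 4 and cv2.isContourConvex(poly):
        return poly.reshape(4, 2)
    return None

# Part one of the algorithm: conservative deviation threshold, so that
# blurred, merged checkers are not accepted as a single quadrangle.
# Part two (matching against the reference pattern, Section III-D):
# relaxed threshold, accepting more candidate quadrangles.
EPS_CONSERVATIVE = 0.02   # illustrative values only
EPS_RELAXED = 0.08
```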
Fig. 8. A 7x6 inner corner checkerboard with a white border of exactly one checker width.

Fig. 9. Calibration images which best reflect the average performance of the algorithm for test set 1. Left: OpenCV. Right: our approach.

F. Relative Importance

The adaptation of the erosion kernels and especially the introduction of the new linking heuristic were found to be the most important enhancements. They both deal with the changes to the checker pattern as introduced by omnidirectional camera distortions, while at the same time preserving the detection rate of the original implementation for regular images. The other improvements only start having a significant effect on very low resolution and blurred images (see Section IV-B).

IV. TEST IMAGE ANALYSIS

In this section, 6 test image sets containing 10 images each are analyzed. Typical camera-mirror setups of various quality have been considered. The number of found corners per image and the corner localization accuracy are compared between the original OpenCV implementation and our proposed method. First, however, prerequisites for successful corner extraction are discussed.

A. Prerequisites

Corner extraction using both OpenCV and our method depends on a black and white checkerboard of any reasonable size (sizes of 5x6 and 6x7 inner corners have been shown to work well), with a white border around it of at least one checker width (see Fig. 8). If you plan on using the algorithm in cases of extreme back light or overhead lighting, consider using a checkerboard with an even wider white border. Additionally, use a camera with as high a resolution as possible, try to minimize overall blur, especially around small checkers, and make sure that none of the checkers touch the image border or are occluded.

TABLE I
TEST IMAGE SETS

Img. set   Resolution   Blur   Brightness     Camera-mirror shape
Set 1      1280x960     no     daylight       hyperbolic, central
Set 2      1280x960     no     reduced iris   hyperbolic, central
Set 3      1280x960     yes    daylight       hyperbolic, central
Set 4      640x480      no     daylight       Christmas ball, non-central
Set 5      640x480      no     daylight       spherical, central
Set 6      640x480      yes    daylight       spherical, central

TABLE IV
TEST IMAGE SET 3

Method                       OpenCV        Our method
Number of found corners
  Mean                       11.4 of 30    29.7 of 30
  Min                        3             28
  Max                        21            30
Corner inaccuracy [pxl]
  Mean                       1.48          0.62
  Variance                   0.23          0.63

B. Results

For an overview of the test image sets chosen, refer to Table I. Sets no. 1-3 have been taken with a Sony XCD-SX910 camera (high resolution) combined with a hyperbolic mirror; sets no. 4 and 5 with a Philips ToUCam Fun camera (low resolution, large depth of field, Fig. 1 bottom) combined with a Christmas ball and a spherical mirror, respectively; set no. 6 with a Philips SPC 300 camera (low resolution, narrow depth of field, Fig. 1 top) combined with a spherical mirror. For set no. 4, we used a Christmas ball in order to show that our method also works for other concave mirrors.

For sets no. 1, 2, 4, and 5 (no blur) corner inaccuracy is measured with respect to a reference extraction (manual preselection followed by a Harris corner extraction in the selected area). For sets no. 3 and 6 (blur), manual corner selection alone is defined as reference. Images displaying the average number of found corners for test sets no. 1 and 6 are depicted in order to convey a feeling for the relative performance between the two implementations at different test conditions (Fig. 9 and 10).

C. Discussion

The results show that our approach consistently outperforms OpenCV, except on high-resolution and nearly planar images (test set no. 1), where the two are on an equal footing. Our algorithm notably also works well in conjunction with non-hyperbolic mirrors (test sets no. 4, 5, and 6). Furthermore, corner localization is shown to have an average error of less than one pixel compared to the reference. If not much blur is present in the calibration images, a Harris corner finder as implemented in most toolboxes is able to negate this error.

TABLE II
TEST IMAGE SET 1

Method                       OpenCV        Our method
Number of found corners
  Mean                       37.2 of 42    42 of 42
  Min                        18            42
  Max                        42            42
Corner inaccuracy [pxl]
  Mean                       1.46          0.68
  Variance                   0.13          0.17

TABLE III
TEST IMAGE SET 2

Method                       OpenCV        Our method
Number of found corners
  Mean                       37.4 of 42    41.5 of 42
  Min                        30            37
  Max                        42            42
Corner inaccuracy [pxl]
  Mean                       1.43          0.69
  Variance                   0.15          0.20

TABLE V
TEST IMAGE SET 4

Method                       OpenCV        Our method
Number of found corners
  Mean                       28 of 42      41.7 of 42
  Min                        14            40
  Max                        39            42
Corner inaccuracy [pxl]
  Mean                       1.77          0.97
  Variance                   0.55          0.84

TABLE VI
TEST IMAGE SET 5

Method                       OpenCV        Our method
Number of found corners
  Mean                       26.8 of 42    41.4 of 42
  Min                        10            36
  Max                        36            42
Corner inaccuracy [pxl]
  Mean                       1.05          0.79
  Variance                   0.51          0.31
TABLE VII
TEST IMAGE SET 6

Method                       OpenCV        Our method
Number of found corners
  Mean                       8.3 of 42     33.4 of 42
  Min                        3             23
  Max                        15            42
Corner inaccuracy [pxl]
  Mean                       1.08          1.05
  Variance                   0.35          0.90
Fig. 10. Calibration images which best reflect the average performance of the algorithm for test set 6. Left: OpenCV. Right: our approach.
D. Issues with Our Approach

The following two examples are intended to give the reader an understanding of issues which could arise during the checkerboard pattern extraction.
1) Importance of a Wide Border around the Checkerboard: When taking pictures against bright light sources, the adaptive threshold is disturbed into believing that the white checkerboard border is actually black. We stress the importance of a wide enough white border.
2) Small Checkers in Low Resolution Images: Figure 11 belongs to test image set no. 5. Close inspection of the matching process shows that during one erosion run the bottom right checkers are too small to be recognized as quadrangles; during the next erosion run, however, they are already grown together with their neighbor checker. In such cases, which happen only for very small checkers in low-resolution images, corner extraction for the involved checkers fails.

Fig. 11. Bottom checkers of the board are not recognized: during one erosion run they are too small for recognition (left image); during the next, however, they are already grown together with their neighbor checker (right image).
V. CONCLUSION

In this paper an existing method for identifying checkerboards on calibration images was analyzed. This method then served as the starting point for the adapted and improved method described in Section III. The enhancements to the code proved to dramatically increase the corner output for low resolution and blurred images, consistently returning 80% and more of the corners present, compared to as low as 20% for the existing method. On higher resolution images, nearly 100% corner recognition was obtained. False positive detection was found to be very low, provided the image acquisition prerequisites from Section IV-A are fulfilled. This shows the strength of the algorithm: it builds on top of and enhances the OpenCV implementation using the refinements described earlier, and thus works just as well on non-distorted images as the original approach, but consistently outperforms it on low resolution, highly distorted, and/or blurred images. We therefore believe it to be well suited for implementation into a wide range of camera calibration toolboxes, which could notably benefit from automatic calibration routines.

A standalone executable of the algorithm was then generated. Via a Matlab based interface it may be easily integrated into any of the available camera calibration toolboxes. The complete source code, the executable, and a sample interface are available online [15].

REFERENCES

[1] B. Micusik, Two View Geometry of Omnidirectional Cameras. PhD thesis, Center for Machine Perception, Czech Technical University in Prague, 2004.
[2] R. Benosman and S. B. Kang, editors. Panoramic Vision: Sensors, Theory, and Applications. Monographs in Computer Science. Springer Verlag, New York, 2001.
[3] R. Y. Tsai, "An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, pp. 364-374, 1986.
[4] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, November 2000.
[5] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN 0521540518, second ed., 2004.
[6] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A Toolbox for Easily Calibrating Omnidirectional Cameras," in Proc. of IROS'06, pp. 5695-5701, Beijing, China, October 2006.
[7] C. Mei and P. Rives, "Single view point omnidirectional camera calibration from planar grids," in IEEE International Conference on Robotics and Automation, April 2007.
[8] C. Geyer and K. Daniilidis, "Paracatadioptric camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 687-695, May 2002.
[9] Y. Ma, S. Soatto, J. Kosecka, and S. Sastry, An Invitation to 3-D Vision: From Images to Geometric Models. Springer Verlag, ISBN 0-387-00893-4, 2003.
[10] J. Gluckman and S. Nayar, "Ego-motion and omnidirectional cameras," in 6th International Conference on Computer Vision, pp. 999-1005, 1998.
[11] S. Kang, "Catadioptric self-calibration," in IEEE International Conference on Computer Vision and Pattern Recognition, pp. 201-207, 2000.
[12] B. Micusik and T. Pajdla, "Estimation of omnidirectional camera model from epipolar geometry," in Proc. of CVPR, Madison, US, June 2003. IEEE Computer Society, ISBN 0-7695-1900-8.
[13] S. Bougnoux, "From projective to euclidean space under any practical situation, a criticism of self-calibration," in 6th International Conference on Computer Vision, pp. 790-796, 1998.
[14] J.-Y. Bouguet, Camera Calibration Toolbox for Matlab: http://www.vision.caltech.edu/bouguetj/calib_doc
[15] D. Scaramuzza, Omnidirectional Camera Calibration Toolbox for Matlab: Google for "ocamcalib", or go directly to http://asl.epfl.ch/~scaramuz/research/Davide_Scaramuzza_files/Research/OcamCalib_Tutorial.htm
[16] C. Mei, Omnidirectional Camera Calibration Toolbox for Matlab: http://www.robots.ox.ac.uk/~cmei/Toolbox.html
[17] V. Vezhnevets, OpenCV Calibration Object Detection: http://graphics.cs.msu.ru/en/research/calibration/opencv.html
[18] Intel Corporation, Open Source Computer Vision Library: http://www.intel.com/technology/computing/opencv
