Hough Transform based Correction of Mobile Robot Orientation
Lejla Banjanović-Mehmedović
Ivan Petrović, Edouard Ivanjko
University of Tuzla, Faculty of Electrical Engineering,
Franjevačka 2, BH-75000 Tuzla, Bosnia and Herzegovina
[email protected]
University of Zagreb, Faculty of Electrical Engineering
and Computing, Department of Control and Computer
Engineering in Automation,
Unska 3, HR-10000 Zagreb, Croatia
[email protected],
[email protected]
Abstract- In most applications, a mobile robot must be
able to determine its position and orientation in the
environment using only its own sensors. The problem of
pose tracking can be seen as a constituent part of the
more general navigation problem. Our proposed
approach is able to track the mobile robot pose without
an environment model. It is based on combining
histograms and Hough transform (HHT). While
histograms for position tracking (x and y histograms)
are extracted directly from local occupancy grid maps,
angle histogram is obtained indirectly via Hough
transformation combined with a non-iterative
algorithm for determination of end points and length of
straight-line parts contained in obtained histograms.
Histograms obtained at the actual mobile robot pose
are compared to histograms saved at previous mobile
robot poses to compute position displacement and
orientation correction. Orientation estimation accuracy
greatly influences the position estimation accuracy and
is crucial for a reliable mobile robot pose tracking.
Sensors used for local occupancy grid generation are
sonars, but other exteroceptive sensors such as a laser
range finder can also be used. Test results with a
Pioneer 2DX mobile robot simulator demonstrate the
capability of this method.
Key words: mobile robot, pose tracking, multiple
hypotheses tracking, histogram, Hough transform
1. Introduction
The mobile robot's ability to find or track its pose (position
and orientation) in an unknown environment is a crucial
feature needed for performing complex tasks over a long
period of time [1]. The most common solution for this
problem is to rely on dead reckoning methods (odometry)
for a short period of time and then to apply additional
sensors to update/correct the mobile robot pose [2, 3]. This
method assumes that the starting mobile robot pose is known
and all displacements are calculated relative to this known
pose.
Dead reckoning approaches provide good results
only for a short period of time due to significant error
influence from wheel slippage, floor roughness, etc.
Especially the orientation estimation is prone to significant
errors. A lot of research has been done to improve the
orientation estimation. For example, better odometry
models have been developed in [4], and additional sensors
have been used [3, 5]. Additional sensors can be used to
compute a correction to the actual mobile robot pose or as
additional measurement information in an extended
odometry model.
The most commonly used sensors for additional measurement
information in an extended odometry model are a compass
and a gyro. An electronic compass is sensitive to magnetic
noise that comes from ferromagnetic objects in the robot
environment. Such objects are often present in man-made
environments, including the mobile robot body and the
noise produced by its drive system. A compass was used in
[5] with the purpose to ensure that the robot environment is
scanned with the same robot orientation angle at each place.
In this way the complexity of the mobile robot pose
tracking problem was greatly reduced because collected
environment scans differ only in the x and y position. The
problem is that this approach cannot be used in environments
with significant magnetic noise or with mobile robots
that cannot turn on the spot.
To overcome the above-mentioned problem another
characteristic of man-made environments can be used.
Objects in such environments tend to lie in straight lines.
Good examples are walls and doorways. In such
environments it is possible to use line segments for the
correction of the estimated mobile robot pose [6]. The
Hough transform is widely used in computer vision for
edge detection. An efficient algorithm [7] is used to
determine the coordinates of the extracted line end points,
the line length and the normal parameters of a straight line
using the Hough transform. Angles between the line
segments and the positive x-axis, weighted with the line
segment length, form an angle histogram. Comparison of
angle histograms and their use for mobile robot orientation
correction is the topic of our research. The angle histogram
of the current mobile robot pose is convolved with the angle
histogram from the previous mobile robot pose, and all
hypothetic robot orientations with an equal minimal matching
score obtained by the angle histogram convolution
(orientation hypotheses) are used to determine the best
orientation, i.e. the one with minimal distance to the
predicted orientation. The mobile robot orientation is
predicted using calibrated odometry [4].
2. Position tracking
In an occupancy grid map, a regular grid represents the mobile
robot environment, with each cell holding a certainty value
that a particular area of space is occupied or empty [8].
The certainty value is based only on sensor readings. Each
occupancy grid cell in our approach represents an area of
10 [cm] x 10 [cm] and is considered to be in one of
three possible states: occupied (O: P(cxy) > 0.5), empty (E:
P(cxy) < 0.5) and unknown (U: P(cxy) = 0.5), depending on
the corresponding probability of occupancy for that cell.
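The three-state cell classification above can be sketched as follows (the helper name is ours, not from the paper):

```python
def cell_state(p_occupied):
    """Classify an occupancy grid cell by its probability of occupancy:
    occupied (O: P > 0.5), empty (E: P < 0.5) or unknown (U: P = 0.5)."""
    if p_occupied > 0.5:
        return "O"
    if p_occupied < 0.5:
        return "E"
    return "U"
```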
To begin the pose tracking process, the robot takes a new
sonar scan at its current pose and constructs a local
occupancy grid consisting of 60 x 60 cells. When a local
occupancy grid map is constructed, the mobile robot is
always in the center of the local occupancy grid map, as in
[5]. The scans are converted to histograms before matching
is performed to reduce computation complexity. On top
of the constructed local occupancy grid we thus get three types
of histograms: the x, y and angle histogram. Both the x and y
histograms consist of three one-dimensional arrays,
which are obtained by adding up the total number of
occupied, empty and unknown cells in each of the 60 rows
or columns, respectively (x or y histogram). An example of
x-histograms for the actual and previous mobile robot
environment scan is presented in Fig. 1.
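A minimal sketch of the x- and y-histogram extraction just described, assuming the local grid is given as a square list of cell states 'O', 'E' and 'U' (function names are our own):

```python
def xy_histograms(grid):
    """Build x- and y-histograms from a square occupancy grid of cell
    states 'O', 'E', 'U'.  Each histogram consists of three arrays with
    per-row (x) or per-column (y) counts of occupied, empty and
    unknown cells."""
    n = len(grid)

    def counts(lines):
        # one array per cell state, one entry per row/column
        return {s: [line.count(s) for line in lines] for s in "OEU"}

    rows = grid                                                # x-histogram
    cols = [[grid[r][c] for r in range(n)] for c in range(n)]  # y-histogram
    return counts(rows), counts(cols)
```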
Fig. 1: Example of obtained x-histograms.

Matching scores of the stored histogram (previous place) and
the translated histogram of the current place [5] are
calculated as:

M(Hi−1, Hi) = SCAL ⋅ ∑j [ min(Oji−1, Oji) + min(Eji−1, Eji) + min(Uji−1, Uji) ] ,   (1)

where Oj, Ej, Uj refer to the number of occupied, empty
and unknown cells contained in the j-th element of
histogram H; SCAL = 1/DIMENSION² is a normalizing
constant (in our approach, the DIMENSION of the certainty
grid is 60). The same equation is used for the x- and y-histograms.

The current scan is compared with the stored previous scans
(x, y position) and in this way a set of hypotheses
h0, ..., hi−1 is constructed. For each of these hypotheses, the
likelihood L(S|hi) is calculated as the strength of the match
between the current and stored histograms for each place
hypothesis hi:

L(S|hi) ∝ Mxi* × Myi* ,   (2)

where Mx is the matching score of the x-histogram, My the
matching score of the y-histogram, and Mxi* and Myi* are the
best match scores, produced by the best matching alignment
between the histogram of the chosen previous hypothesis hi−1
and the current translated histogram for place i (within a
range of 5 cells due to the moving step of 0.5 [m]).

The coordinates of each hypothetic place xMi and yMi are
calculated from the updated coordinates of the previous place
(xi, yi) and the offset values ∆sx and ∆sy, which are obtained
from the above described histogram matching process as the
displacement of the robot from the previous hypothesis with
the best match (maximum likelihood L(S|hi)):

xMi = xi + ∆sx ,   (3)

yMi = yi + ∆sy ,   (4)

xi = { xref ,       place = 1
     { xupd(i−1) ,  place > 1 ,   (5)

yi = { yref ,       place = 1
     { yupd(i−1) ,  place > 1 .   (6)

3. Hough transform
The Hough transform is a robust method for detecting
discontinuous patterns in noisy images. The basic idea of
this technique is to find curves that can be parameterized
like straight lines, circles, ellipses, etc., in a suitable
parameter space. Our application considers the detection of
straight-line segments in sonar data. Several variants of
standard Hough Transform have been proposed in the
literature to reduce the time and space complexity. When it
is applied to the detection of a straight line represented by
its normal parameters, the transform provides only the length
of the normal and the angle it makes with the x-axis. The
transform gives no information about the length or the end
points of the line. Because of that, an efficient non-iterative
algorithm is used to determine the coordinates of the end
points, the length and the normal parameters of a straight
line using the Hough transform, and then those line
segments form the angle histogram [7].
A straight line, represented by the normal
parameterization, is expressed as (Fig. 2):

ρ = x cos θ + y sin θ ,   (7)
where ρ is the length of the normal to the line from the
origin and θ is the angle of the normal with the positive
x-direction. We assume that the origin of the x-y coordinate
system is in the center of the input space. In the θ-ρ
parameter plane (HT space), the line is mapped to a single
point. Collinear points (xi, yi) in the input space, with i = 1,
..., N, constitute sinusoidal curves in the (θ, ρ) space,
which intersect in the point (θ, ρ) (Fig. 3), given by:

ρ = xi cos θ + yi sin θ .   (8)
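Eq. (8) can be illustrated with a discrete accumulator array; the following is a generic standard Hough transform sketch, not the authors' implementation, and the resolution parameters are our own choices:

```python
import math

def hough_accumulator(points, n_theta=180, rho_max=50.0, n_rho=100):
    """Vote each input point (x, y) into a (theta, rho) accumulator:
    every point contributes one sinusoid rho = x*cos(theta) + y*sin(theta),
    so collinear points produce a common peak (eq. (8))."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            # map rho in [-rho_max, rho_max] to a bin index
            r = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
            if 0 <= r < n_rho:
                acc[t][r] += 1
    return acc
```

For example, 21 points on the horizontal line y = 5 all vote into the cell near θ = 90° and ρ = 5, producing a peak of height 21.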
Fig. 2: (x, y) points in Cartesian space before applying the
Hough transformation.

Fig. 3: (x, y) points from Cartesian space become
sinusoidal curves in Hough space after applying the Hough
transform.

3.1. Line segment detection and self-orientation using
Hough transform

The parameters of a line along with its length and the
coordinates of its end points are sometimes referred to as a
complete line segment description [9]. The algorithm used for
detection of those characteristics [7] is independent of the
accuracy with which the peak in the accumulator array for
collinearity detection is determined, because an accurate
detection of the peak in the accumulator array is a non-trivial
task. For this reason, the θ value of the peak (θp)
is only used to determine two columns Cq and Cr.

The two columns Cq and Cr whose cells correspond to the two
sets of parallel bars have their normals inclined at angles θq
and θr, respectively, with the positive x-axis (Fig. 4). The
lengths of the normals ρ1q, ρ2q, ρ1r, ρ2r to the bars can be
determined from the accumulator array. The lengths of the
normals to the bars correspond to the first and last non-zero
elements in columns Cq and Cr. These normals can be
expressed as:

ρ1q = x1 cos θq + y1 sin θq ,   (9)

ρ1r = x1 cos θr + y1 sin θr ,   (10)

ρ2q = x2 cos θq + y2 sin θq ,   (11)

ρ2r = x2 cos θr + y2 sin θr ,   (12)

where Cq and Cr are the q-th and r-th columns in the
accumulator array, respectively. ρ1q and ρ1r are the lengths
of the normals to the bar (in the image plane) corresponding
to the first non-zero cell in Cq and Cr, respectively (which
corresponds to the bar bi,k in the image plane containing the
end point (x1, y1)). ρ2q and ρ2r are the lengths of the normals
to the bar (in the image plane) corresponding to the last
non-zero cell in Cq and Cr, respectively (which corresponds
to the bar bi,k in the image plane containing the end point
(x2, y2)).

Fig. 4: Computation of the end points independent of θp.
We can express the coordinates of the end points (x1, y1) and
(x2, y2) as:

x1 = (ρ1q sin θr − ρ1r sin θq) / sin(θr − θq) ,   (13)

y1 = (ρ1r cos θq − ρ1q cos θr) / sin(θr − θq) ,   (14)

x2 = (ρ2q sin θr − ρ2r sin θq) / sin(θr − θq) ,   (15)

y2 = (ρ2r cos θq − ρ2q cos θr) / sin(θr − θq) .   (16)
The line length lc is obtained from the end points by

lc = √((x1 − x2)² + (y1 − y2)²) ,   (17)
and the parameters of the normal (ρc, θc – line parameters
calculated from the end points of the line by using the
method proposed in [8]) are obtained as

ρc = (x2 y1 − x1 y2) / √((x1 − x2)² + (y1 − y2)²) ,   (18)

θc = arctan((y2 − y1)/(x2 − x1)) − 90° = atan2(y2 − y1, x2 − x1) − 90° .   (19)
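Eqs. (13)-(19) translate directly into code; the following sketch assumes the four normal lengths and the two column angles have already been read from the accumulator array (variable names are ours):

```python
import math

def line_segment(rho1q, rho1r, rho2q, rho2r, theta_q, theta_r):
    """Recover a complete line segment description from the Hough
    accumulator data of columns Cq and Cr (angles in radians)."""
    d = math.sin(theta_r - theta_q)
    # End points, eqs. (13)-(16)
    x1 = (rho1q * math.sin(theta_r) - rho1r * math.sin(theta_q)) / d
    y1 = (rho1r * math.cos(theta_q) - rho1q * math.cos(theta_r)) / d
    x2 = (rho2q * math.sin(theta_r) - rho2r * math.sin(theta_q)) / d
    y2 = (rho2r * math.cos(theta_q) - rho2q * math.cos(theta_r)) / d
    # Line length, eq. (17)
    lc = math.hypot(x1 - x2, y1 - y2)
    # Normal parameters from the end points, eqs. (18)-(19)
    rho_c = (x2 * y1 - x1 * y2) / lc
    theta_c = math.atan2(y2 - y1, x2 - x1) - math.pi / 2
    return (x1, y1), (x2, y2), lc, rho_c, theta_c
```

Note that (ρc, θc) is recovered up to the usual (ρ, θ) ↔ (−ρ, θ ± 180°) ambiguity, which depends on the ordering of the end points.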
3.2. Angle histograms comparison

Line segments obtained by the Hough transformation with
equal angle are used to calculate the angle histogram. This
histogram directly represents the sums of the lengths of all
edges with equal orientations. An example of angle histograms
for the actual and previous mobile robot environment scan is
presented in Fig. 5. To remove small line segments from the
angle histogram, each length is compared to a threshold. The
threshold value is calculated for every sensor scan separately.
Any line segment in the angle histogram whose length is less
than the threshold for a certain scan is removed. In this way,
the comparison of angle histograms gives better matching
results.

Fig. 5: An example of the current and previous angle
histograms.

The analysis of measurements for comparing angle
histograms [10] is important, since the "intersection
measurement" gives different results for matching
histograms. The angle histogram intersection measurement has
been introduced for the comparison of color histograms
[10]. In our approach, the χTH² calculation is used, because
it gives the best results in mobile robot orientation
tracking:

χTH²(Hi, Hi−1) = ∑j (Hi(j) − Hi−1(j))² / (Hi(j) + Hi−1(j)) ,   (20)

where Hi(j) and Hi−1(j) are the current and previous angle
histograms, respectively.

The angle histogram of the current place is convolved with the
histogram from the previous place, and all hypothetic
orientations θj with an equal minimum matching score from the
angle histogram (orientation hypotheses) are used to
determine the best orientation:

θMi = θj ,   (21)

with minimum difference:

DIFFj = min(|θPi − θj|) , j = 0, ..., n − 1 ,   (22)

where n is the number of orientations with equal minimal
matching scores. The comparison of all hypothetic
orientations θj with the predicted orientation is thus used
to obtain the orientation correction value θMi.

4. Histograms and Hough transform based
mobile robot pose tracking

The block schema of the proposed pose tracking algorithm is
given in Fig. 6.

Fig. 6: Block schema of the proposed pose tracking algorithm.
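Before turning to the tracking phases, the angle-histogram comparison of Section 3.2 (eqs. (20)-(22)) can be sketched as follows; the bin width and the cyclic-shift search are our own illustrative choices:

```python
def chi2_distance(h_cur, h_prev):
    """Chi-square style distance between two angle histograms
    (eq. (20)); bins where both histograms are zero are skipped."""
    total = 0.0
    for a, b in zip(h_cur, h_prev):
        if a + b > 0:
            total += (a - b) ** 2 / (a + b)
    return total

def best_orientation(h_cur, h_prev, theta_pred, bin_deg=5):
    """Shift the current histogram against the previous one, collect all
    shifts with the minimal matching score (orientation hypotheses,
    eq. (21)) and return the hypothesis closest to the predicted
    orientation (eq. (22))."""
    n = len(h_cur)
    scores = []
    for shift in range(n):
        rotated = h_cur[shift:] + h_cur[:shift]  # cyclic shift of angle bins
        scores.append(chi2_distance(rotated, h_prev))
    best = min(scores)
    hypotheses = [shift * bin_deg for shift, s in enumerate(scores) if s == best]
    return min(hypotheses, key=lambda th: abs(theta_pred - th))
```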
The proposed pose tracking algorithm consists of three
phases:

A) Predict phase – where the mobile robot position is
updated according to the measured displacement in the x
and y directions:

xPi = xUPD(i−1) + ∆x ,   (23)

yPi = yUPD(i−1) + ∆y ,   (24)

θPi = θUPD(i−1) + ∆θ ,   (25)

where ∆x, ∆y and ∆θ refer to the robot's own
displacement in Cartesian space since the previous
iteration, obtained from its on-line odometry. The exception
is the first new place, where xUPD(i−1) and yUPD(i−1) are the
start position in the mobile robot environment.
B) Matching phase – where a matching process is
performed between the previous hypotheses of the mobile
robot pose hi, i = 0, ..., j−1, and the predicted hypothesis of
the mobile robot pose hj:

L(hj | hi) ∝ exp(−η ‖(xUPD(i−1), yUPD(i−1), θUPD(i−1)) − (xPj, yPj, θPj)‖) ⋅ p(hi) ,   (26)

where the Gaussian function is used to model the
noise in the robot's pose estimates and the prior
probability p(hi) is used to take the influence of a
particular prior hypothesis into account. The pose
(xUPD(i−1), yUPD(i−1), θUPD(i−1)) denotes the updated previous
hypothesis and (xPj, yPj, θPj) the predicted pose of
hypothesis hj. The constant η = 0.25 was used in order
to determine the relative weighting of exteroceptive
and proprioceptive sensory information in the localization
algorithm [5]. The best matching prior hypothesis hi*
for each hj is defined as:

∀j : ∀i ≠ i* : L(hj | hi*) > L(hj | hi) .   (27)
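A sketch of the matching phase of eqs. (26)-(27); the Euclidean norm over the pose difference and the hypothesis data layout are our assumptions:

```python
import math

def match_hypotheses(prev_hyps, predicted, eta=0.25):
    """For a predicted pose, score every previous hypothesis with the
    exponential likelihood of eq. (26) and return the best match
    (eq. (27)).  Each hypothesis is a tuple (x, y, theta, prior)."""
    def likelihood(h):
        x, y, th, prior = h
        xp, yp, thp = predicted
        # norm of the pose difference between hypothesis and prediction
        dist = math.sqrt((x - xp) ** 2 + (y - yp) ** 2 + (th - thp) ** 2)
        return math.exp(-eta * dist) * prior
    return max(prev_hyps, key=likelihood)
```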
C) Update phase – where the predicted mobile robot pose
is updated according to the values obtained in the
matching phase using the following equations:

xUPD(j) = xMj + K1 × (xPj − xMj) ,   (28)

yUPD(j) = yMj + K2 × (yPj − yMj) ,   (29)

where the correction values are computed from the
following equations:

K1 = cos(θupd(j)) ,   (30)

K2 = sin(θupd(j)) .   (31)

The update of the θ coordinate is as follows:

θUPD(j) = θMj + D ⋅ (θPj − θMj) ,   (32)

where 0 < D < 1 is a coefficient.

The prior probability pprior(hj) for each place j is
calculated using the likelihood values L(hj | hi*)
produced by the matching phase:

pprior(hj) = L(hj | hi*) / ∑k=0..j−1 L(hj | hk*) ,   (33)

and the posterior probability pposterior(hj) for each place
is effectively a combination of the sensor model L(S | hj)
and the motion model L(hj | hi*) for each place j:

pposterior(hj) = L(S | hj) pprior(hj) / ∑k=0..j−1 L(S | hk) pprior(hk) .   (34)

5. Test results

The described pose tracking algorithm was tested using a
Pioneer 2DX mobile robot simulator. The experiment scenario
included several orientation changes. The experiment was
carried out in a corridor (Fig. 7), where the robot moved with
different orientations and entered other rooms (first moving
with the reference orientation angle of 0 [º], then entering
another room at 90 [º], then coming back at 270 [º], then
moving at 180 [º], then at 0 [º] and at the end at 20 [º]).
All path segments with equal orientation were 1350 [mm]
long. The obtained results were compared to calibrated
odometry based pose tracking. Fig. 8 presents the obtained
orientation tracking results with calibrated odometry and
with the proposed HHT algorithm. The actual robot
orientations are also depicted in the figure. Calibrated
odometry has an average relative error of 6 percent, while
our proposed algorithm has an average relative error of 2
percent.

Fig. 7: Simulation model for our experiments.
Fig. 8: Orientation tracking in our experiments.
6. Conclusions and future work
Mobile robot orientation correction technique using
Histograms and Hough transform has been implemented
and compared to calibrated odometry using a mobile robot
simulator. It is shown that the Hough transform in combination
with histograms, used here for orientation correction,
gives better results than orientation tracking based on
calibrated odometry.
Our method of mobile robot orientation correction relies
on the detection of straight-line features in the sonar sensor
readings. The Hough transform is widely used in
computer vision for edge detection, so it is a good solution
for detecting objects in man-made environments, which
tend to lie in straight lines. The Hough transform has a
number of properties that are useful for self-localisation;
for example, it is very robust to noisy sonar data and to
occlusions of the lines. We used the correlation technique
for orientation correction rather than the product of
likelihoods. In this way, misleading sensor readings
caused by multiple reflections are filtered out.
The proposed method for mobile robot orientation
correction is a worthy alternative to the use of a magnetic
compass, particularly in environments with magnetic
noise. Future work on this topic will include
implementation of the Hough transform in the position
tracking part as well.
References
[1]
G. Grisetti, L. Iocchi, D. Nardi, “Global Hough
Localization for Mobile Robots in Polygonal
Environments”, In Proc. of International Conference on
Robotics and Automation (ICRA02), Washington DC,
USA, 2002, vol. 1, pp. 353-358.
[2]
E. Ivanjko, I. Petrović, “Extended Kalman Filter
based Mobile Robot Pose Tracking using Occupancy Grid
Maps”, Proc. of The 12th IEEE Mediterranean
Electrotechnical Conference–Melecon 2004, Croatia,
2004, pp.311-314.
[3]
P. Goel, S. I. Roumeliotis, G. S. Sukhatme, “Robust
Localization Using Relative and Absolute Position
Estimates”, in Proc. of the IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS),
1999.
[4]
E. Ivanjko, I. Petrović, N. Perić, “An Approach to
Odometry Calibration of Differential Drive Mobile
Robots”, in Proceedings of Electrical Drives and Power
Electronics International Conference EDPE'03, Slovakia,
2003, pp. 519-523.
[5]
T. Duckett, Concurrent map building and self-localization
for mobile robot navigation, PhD Thesis,
University of Manchester, 2000.
[6]
B. Schiele, J. L. Crowley, “Comparison of Position
Estimation Techniques Using Occupancy Grids”, in
Robotics and Autonomous Systems, 12 (3-4), 1994, pp.
163-172.
[7]
M. Atiquzzaman, M.W. Akhtar, “A Robust Hough
Transform Technique for Complete Line Segment
Description”, in Real Time Imaging, vol. 1, 1995, pp. 419-426.
[8]
A. Elfes, “Using Occupancy Grids for Mobile
Robot Perception and Navigation”, in Proceedings of IEEE
International Conference on Robotics and Automation,
Vol. 2, 1988, pp. 727-733.
[9]
M. Atiquzzaman, M.W. Akhtar, “Determination of
end points and length of a straight line using the Hough
Transform”, IAPR Workshop on Machine Vision
Application, Japan, 1994, pp. 247-250.
[10] B. Schiele, J. Crowley, “Object Recognition using
Multidimensional Receptive Field Histograms”, in
ECCV'96, Fourth European Conference on Computer
Vision, Cambridge, UK, 1996.