Thrun Map3d PDF
Figure 4: Result obtained using the simplest incremental approach, which is common in the literature. This approach fails to close the cycle and leads to inconsistent maps.

work, various researchers have found that algorithms that maintain a posterior estimation, instead of a single maximum likelihood guess, are typically more robust and hence lead to more scalable solutions.
The algorithm for computing the posterior is identical to the Markov localization algorithm [16]. It is incremental. Initially, at time t = 0, the belief Bel(s0) is centered on a (fictitious) origin of the coordinate system ⟨x = 0, y = 0, θ = 0⟩. Thus, Bel(s0) is initialized by a Dirac distribution (point distribution). Suppose at time t we know the previous belief Bel(st−1), which is a distribution over poses st−1 at time t − 1, and we just executed action at−1 and observed ot. Then the new belief is obtained through:

Bel(st) = η P(ot | st, mt−1) ∫ P(st | at−1, st−1) Bel(st−1) dst−1     (9)

where η is a (different) normalizer, and mt−1 is the best available map.
Update equation (9) is derived using the common Markov localization approach (see also [16, 3]), assuming a static map:

Bel(st) = P(st | dt, mt−1)
        = P(st | dt−1, at−1, ot, mt−1)
        = η P(ot | st, dt−1, at−1, mt−1) P(st | dt−1, at−1, mt−1)
        = η P(ot | st, mt−1) ∫ P(st | dt−1, at−1, st−1, mt−1) P(st−1 | dt−1, at−1, mt−1) dst−1     (10)

After computing the posterior, the new map is generated by maximizing a slightly different expression for pose estimation (cf. ŝt in Equation (6)):

s̄t = argmax_st Bel(st)     (11)

which leads to the new map

mt+1 = mt ∪ {⟨ot, s̄t⟩}     (12)

Just as in (6) and (7), the map is grown by adding a scan at location s̄t. However, here we use the entire posterior Bel(st) for determining the most likely pose, not just the most recent sensor scan and its pose (as in (6)). As a result, the increasing diameter of uncertainty is modeled so that, when closing the loop, the correct uncertainty can be taken into account: the larger the loop, the wider the margin of uncertainty. This difference is important when mapping large cyclic environments, where a robot needs to know where to search when closing a loop.
Our approach uses samples to approximate the posterior. Figure 5 shows an example, for a sequence of robot poses along a U-shaped trajectory. Here each of the sample sets is an approximation of densities (of the type shown in Figure 2). The idea of using samples goes back to Rubin's importance sampler [14] and, in the context of temporal posterior estimation, is known as particle filters [12]. It has, with great success, been applied for tracking in computer vision [7] and mobile robot localization [2, 3]. As argued in the statistical literature, this representation can approximate almost arbitrary posteriors at a convergence rate of 1/√N [17]. It is convenient for robotics, since it is easy to
Figure 6: Incremental algorithm for concurrent mapping and localization. The dots, centered around the robot, indicate the posterior belief which grows
over time (a and b). When the cycle is closed as in (c), the posterior becomes small again.
implement, and both more efficient and more general than most alternatives [3].
The sample-based representation directly facilitates the optimization of (11) using gradient descent. Our approach performs gradient descent using each sample as a starting point, then computes the goodness of the result using the obvious likelihood function. If the samples are spaced reasonably densely (which is easily done with only a few dozen samples), one can guarantee that the global maximum of the likelihood function can be found. This differs from the simple-minded approach above, where only a single starting pose is used for hill-climbing search, and which hence might fail to produce the global maximum (and hence the best map).

2.4 Backwards Correction

As argued above, when closing cycles it is imperative that maps are adjusted backwards in time. The amount of backwards correction is given by the difference (denoted ∆st):

∆st = s̄t − ŝt     (13)

where s̄t is computed according to Equation (11) and ŝt is obtained through Equation (6). This expression is the difference between the incremental best guess (cf. our baseline approach) and the best guess using the full posterior. If ∆st = 0, which is typically the case when not closing a loop, no backwards correction has to take place. When ∆st ≠ 0, however, a shift occurred due to reconnection with a previously mapped area, and poses have to be revised backwards in time.
Our approach does this in three steps:

1. First, the size of the loop is determined by identifying the scan in the map which led to the adjustment (this is a trivial side-result of the posterior computation).

2. Second, the error ∆st is distributed proportionally among all poses in the loop. This computation does not yield a maximum likelihood match; however, it places the intermediate poses in a good starting position for subsequent gradient descent search.

3. Finally, gradient descent search is applied iteratively for all poses inside the loop, until the map is maximally consistent (maximizes likelihood) under this new constraint arising from the cycle.

These three steps implement an efficient approximation to the maximum likelihood estimator for the entire loop. Our approach has been found to be extremely robust in practice (we never obtained a single wrong result) and also extremely fast (the entire maximization can be carried out between two sensor measurements for all experiments reported in this paper).

2.5 Multi-Robot Mapping

The posterior estimation component of our approach makes it straightforward to generate maps with multiple robots. Here we assume that the initial pose of the robots relative to each other is unknown; however, we make the important restrictive assumption that each robot starts within the map of a specific robot, called the team leader. To generate a single, unified map, each robot must localize itself in the map of the team leader.
The insight for multi-robot mapping is closely tied to the notion of posterior estimation. As the reader might have noticed, our posterior computation is equivalent to Monte Carlo localization [2, 3], a version of Markov localization capable of performing global localization. Thus, to localize a robot in the map of the team leader, its initial samples are distributed uniformly across the team leader's map, as shown in Figure 7a. The posterior estimation then quickly localizes the robot (see [2, 3]), which then enables both robots to build a single, unified map.
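The three-step backwards correction of Section 2.4 centers on step 2, the proportional distribution of the loop error ∆st. The following is a minimal sketch of that step only; the pose representation and function name are illustrative assumptions, not the paper's implementation:

```python
def distribute_loop_error(loop_poses, delta):
    """Step 2 of the backwards correction: spread the loop-closing error
    delta = s_bar - s_hat proportionally over all poses in the loop, so that
    the first pose is left untouched and the last absorbs the full correction.
    Poses and delta are (x, y, theta) tuples.  Step 3 (iterative gradient
    descent over all loop poses) would start from the result."""
    n = len(loop_poses)
    corrected = []
    for i, (x, y, theta) in enumerate(loop_poses):
        frac = i / (n - 1) if n > 1 else 1.0   # proportional share of delta
        corrected.append((x + frac * delta[0],
                          y + frac * delta[1],
                          theta + frac * delta[2]))
    return corrected
```

The linear interpolation deliberately leaves the anchor end of the loop fixed; it is only a good initialization, which is why the paper follows it with gradient descent.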
Figure 7: A second robot localizes itself in the map of the first, then contributes to building a single unified map. In (a), the initial uncertainty of the relative pose is expressed by uniformly distributed samples in the existing map. The robot on the left has found its pose in (b), and maintains a sense of location in (c).
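As noted in Section 2.5, the posterior computation is equivalent to Monte Carlo localization, so the progression in Figure 7 amounts to a uniform sample initialization followed by repeated applications of the update in Equation (9). The sketch below illustrates this under simplified assumptions; the function names and the `motion_model` / `measurement_likelihood` callables are stand-ins, not the authors' code:

```python
import math
import random

def uniform_pose_samples(bounds, n):
    """Initial belief for global localization: n poses (x, y, theta) drawn
    uniformly over the team leader's map (cf. Figure 7a).  `bounds` is a
    hypothetical bounding box (x_min, x_max, y_min, y_max)."""
    x0, x1, y0, y1 = bounds
    return [(random.uniform(x0, x1), random.uniform(y0, y1),
             random.uniform(-math.pi, math.pi)) for _ in range(n)]

def posterior_update(samples, action, observation,
                     motion_model, measurement_likelihood):
    """One step of Equation (9): predict each sample through the motion
    model, weight it by the measurement likelihood P(o | s, m), and
    resample.  The normalizer eta is implicit in the resampling step."""
    predicted = [motion_model(s, action) for s in samples]
    weights = [measurement_likelihood(observation, s) for s in predicted]
    if sum(weights) == 0.0:   # degenerate case: keep the prediction as-is
        return predicted
    return random.choices(predicted, weights=weights, k=len(samples))
```

Iterating `posterior_update` collapses the initially uniform sample set onto the robot's pose, mirroring the progression from Figure 7a to 7c.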
2.6 3D Mapping

Finally, a key goal of this research is to generate accurate 3D maps. With accurate localization in place, this extension is obtained using a robot equipped with two laser range finders, such as the one shown in Figure 1b. Here the forward-looking laser is used for concurrent mapping and localization in 2D, and the upward-pointed laser is used to build a 3D map of the environment.
At first glance, one might simply generate 3D maps by connecting nearby laser measurements into small polygons. However, this approach has two disadvantages: First, it is prone to noise, and second, the number of measurements is excessively large, and the resulting maps are therefore overly complex.
Our approach filters out outliers based on distance; if two measurements are further than a factor of 2 apart from the expected measurement under the robot motion (and assuming the robot faces a straight wall), this measurement does not become part of a polygon. We also apply a simplification algorithm [4, 5], previously developed to simplify polygonal models for real-time rendering in computer graphics.¹ In a nutshell, this approach iteratively simplifies multi-polygon surface models by fusing polygons that look similar when rendered. The result is a model with much reduced complexity, which nevertheless is similarly accurate and looks about as good as the original when rendered. The simplification uses only a small fraction of the available time, hence is easily applied in real-time.

¹ We gratefully acknowledge Michael Garland's assistance in using the software.

3 Results

3.1 Mapping A Cycle

In our experiments, scans were only appended to the map when the robot moved a prespecified distance (2 meters); all scans, however, were used in localization. This kept the complexity of maps manageable (a few hundred scans, instead of several thousand). Also, to make the mapping problem more challenging, we occasionally introduced random error into the odometry (30 degrees or 1 meter shifts).
Figure 6 shows results obtained using the same data set as in Figure 4. Here the robot traverses the cycle, but it also keeps track of its posterior belief Bel(s) represented by samples, as shown by the dots centered around the maximum likelihood pose in Figure 6. When the cycle is closed (Figure 6b), the robot's error is significant; however, the "true" pose is well within its posterior at this time. Our approach quickly identifies the true pose, corrects past beliefs, and reduces the robot's uncertainty accordingly. The final map is shown in Figure 6c. As can be seen there, the map is highly accurate and the error in the cycle has been eliminated. All these results have been obtained in real-time on a low-end PC.

Figure 8: Mapping without odometry. Left: Raw data; right: map, generated on-line in real-time.

3.2 Mapping Without Odometry

To test the robustness of the approach, we attempted to build the same map, but in the absence of odometry data. The raw data, stripped of the odometry information, are shown in Figure 8a. Obviously, these data are difficult to interpret (we are not aware of an algorithm for mapping
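The distance-based outlier test of Section 2.6 can be sketched as follows. Interpreting "a factor 2 apart" as a ratio test is an assumption on my part, as are the function name and the way the expected range would be computed from the motion and straight-wall model:

```python
def accept_measurement(measured_range, expected_range, factor=2.0):
    """A range reading only becomes part of a 3D polygon if it lies within
    a factor of `factor` of the range expected under the robot's motion,
    assuming the robot faces a straight wall (Section 2.6).  Readings more
    than that factor away in either direction are rejected as outliers."""
    if expected_range <= 0.0:
        return False
    ratio = measured_range / expected_range
    return 1.0 / factor <= ratio <= factor
```

For example, `accept_measurement(3.0, 3.2)` keeps the reading, while `accept_measurement(10.0, 3.0)` rejects it as an outlier.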
that works well without odometry information). Figure 8b shows the resulting map. This map has been generated using identical software settings. When traversing the cycle, this map is less accurate than the one generated with odometry data. However, when closing the cycle, these errors are identified and eliminated, and the final result is optically equivalent to the one with odometry. The reader should notice, however, that the odometry-free results are only possible because the environment possesses sufficient variation. In a long, featureless corridor, our approach would clearly fail in the absence of odometry data. Nevertheless, these results illustrate the robustness of our approach.

3.3 Autonomous Exploration

Figure 9: Autonomous exploration and mapping using the urban robot: Raw data and final map, generated in real-time during exploration.

Figure 9 shows results of an autonomously exploring robot, obtained at a recent DARPA demonstration. These results were obtained using the Urban Robot shown in Figure 1c. This robot is able to traverse extremely rugged terrain. However, its skid-steering mechanism using tracks is extremely erroneous, and odometry errors are often as large as 100%, depending on the conditions of the floor.
Figure 9 shows the raw data (left diagram) and the map (right diagram) along with the robot's trajectory. This is the worst map ever generated using our approach: some of the walls are not aligned perfectly, but instead are rotated by approximately 2 degrees. However, given the extremely poor odometry of the robot, and the fact that our approach does not resort to assumptions like recto-orthogonal walls, these results are actually surprisingly accurate. The map is clearly sufficient for navigation. We suspect that if the environment possessed a loop, the final result would be more accurate.

3.4 Multi-Robot Mapping

Figure 7 illustrates our approach to multi-robot mapping. Here a second robot localizes itself in the map built by the team leader (left diagram), as explained above. After a short motion segment, the posterior is focused on a single location (center diagram) and the incoming sensor data is now used to further build the map. The right diagram shows the situation after a few more seconds. Here the second robot has progressed through the map of the team leader. It still knows its position with high accuracy.

3.5 3D Mapping

Figure 10: 3D Map.

Results for 3D mapping are shown in Figures 10 and 11. Figure 10 shows a short corridor segment. The free-flying surfaces are ceiling regions inside offices, which the robot's lasers sensed while moving through the corridor.
Figure 11 shows a sequence of rendered views of a larger (cyclic) map, which is approximately 60 meters long. The rendering algorithm is a standard virtual reality tool (VRweb), which enables the user to remotely inspect the building by "flying through the map." The top row in Figure 11 has been generated from raw laser data; this model contains 82,899 polygons. The bottom row is the simplified polygonal model, which contains only 8,289 polygons. The appearance of both is similar; however, rendering the more compact one is an order of magnitude faster.

4 Discussion

This paper presented a new, online method for robust mapping and localization in indoor environments. The approach combines ideas from incremental mapping (maximum likelihood, incremental map construction) with ideas of more powerful, non-incremental approaches (posterior estimation, backwards correction). The result is a fast and robust algorithm for real-time mapping of indoor environments, which extends to multi-robot mapping and mapping in 3D. A fast algorithm was employed to generate compact 3D models of indoor environments.
Experimental results illustrated that large-scale environments can be mapped in real-time. The resulting maps were highly accurate. We also demonstrated that our approach can generate 3D maps, and it can fuse information collected through multiple robot platforms.
When compared to EM, the ability to generate maps in real
Figure 11: Views of the 3D map, for the high-res model (top row) and the lower resolution model (bottom row).
time comes at a price of increased brittleness. EM is a principled approach to finding the best map, which simultaneously revises beliefs arbitrarily far in the past, which makes it necessarily inapplicable in real-time. However, our approach inherits some of the power of EM by using posterior estimation and a fast mechanism for backwards revision, specifically tuned for mapping cyclic environments. As a result, our approach can handle a large range of environments in real-time.
Nevertheless, our approach surpasses previous incremental approaches in robustness, specifically in environments with cycles. Our results make us confident that the approach is very robust to errors, in particular odometry errors.

Acknowledgments

We thank Michael Garland for his assistance with the polygon fusing software, Todd Pack and RWI/ISR for their superb support with the robot hardware, and the members of CMU's Robot Learning Lab for many fruitful discussions. The idea to build 3D maps was brought to our attention by our DARPA-PM LTC John Blitch, which we gratefully acknowledge.
This research is sponsored in part by DARPA via TACOM (contract number DAAE07-98-C-L032) and Rome Labs (contract number F30602-98-2-0137), and by the National Science Foundation (regular grant number IIS-9877033 and CAREER grant number IIS-9876136), which is gratefully acknowledged.

References

[1] J. Borenstein, B. Everett, and L. Feng. Navigating Mobile Robots: Systems and Techniques. A. K. Peters, 1996.
[2] F. Dellaert, W. Burgard, D. Fox, and S. Thrun. Using the condensation algorithm for robust, vision-based mobile robot localization. In CVPR-99.
[3] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo localization: Efficient position estimation for mobile robots. In AAAI-99.
[4] M. Garland and P. Heckbert. Surface simplification using quadric error metrics. In SIGGRAPH-97.
[5] M. Garland and P. Heckbert. Simplifying surfaces with color and texture using quadric error metrics. In IEEE Visualization-98.
[6] J.-S. Gutmann and C. Schlegel. AMOS: Comparison of scan matching approaches for self-localization in indoor environments. In EUROMICRO-96.
[7] M. Isard and A. Blake. Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 1998.
[8] J.J. Leonard, H.F. Durrant-Whyte, and I.J. Cox. Dynamic map building for an autonomous mobile robot. International Journal of Robotics Research, 11(4), 1992.
[9] F. Lu and E. Milios. Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems, 1998.
[10] H.P. Moravec and A. Elfes. High resolution maps from wide angle sonar. In ICRA-85.
[11] H.P. Moravec and M.C. Martin. Robot navigation by 3D spatial evidence grids. Internal Report, CMU, 1994.
[12] M. Pitt and N. Shephard. Filtering via simulation: auxiliary particle filter. Journal of the American Statistical Association, 1999.
[13] W.D. Rencken. Concurrent localisation and map building for mobile robots using ultrasonic sensors. In IROS-93.
[14] D.B. Rubin. Using the SIR algorithm to simulate posterior distributions. In Bayesian Statistics 3, 1988.
[15] H. Shatkay. Learning Models for Robot Navigation. PhD thesis, CSD, Brown University, 1998.
[16] R. Simmons and S. Koenig. Probabilistic robot navigation in partially observable environments. In IJCAI-95.
[17] M.A. Tanner. Tools for Statistical Inference. Springer, 1993.
[18] S. Thrun, D. Fox, and W. Burgard. A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning, 31, 1998.
[19] B. Yamauchi and R. Beer. Spatial learning for navigation in dynamic environments. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 1996.
[20] B. Yamauchi, P. Langley, A.C. Schultz, J. Grefenstette, and W. Adams. Magellan: An integrated adaptive architecture for mobile robots. TR 98-2, ISLE, Palo Alto, 1998.