
Mobile robots participate in meaningful and intelligent interactions with other entities (inanimate objects, human users, or other robots) through sensing and perception. Sensing capabilities are tightly linked to the ability to perceive, without which sensor data would be only a collection of meaningless figures. Sensors are crucial to the operation of autonomous mobile robots in unknown and dynamic environments, where it is impossible to give the robots complete a priori information before operation. In biological systems, visual sensing offers individuals a rich source of information, which they in turn use for navigation, deliberation, and planning. The same may be said of autonomous mobile robotic systems, where vision has become a standard sensory tool. This is especially so with the advancement of image processing techniques, which facilitate the extraction of ever more useful information from images captured by mounted still or moving cameras. The first chapter of this part, therefore, focuses on the use of visual sensors for the guidance and navigation of unmanned vehicles. The chapter starts with an analysis of the various requirements that the use of unmanned vehicles poses on the visual guidance equipment. This is followed by an analysis of the characteristics and limitations of visual perception hardware, providing readers with an understanding of the physical constraints that must be considered in the design of guidance systems. Various techniques currently in use for road and vehicle following, and for obstacle detection, are then reviewed. With the wealth of information afforded by various visual sensors, sensor fusion techniques play an important role in exploiting the available information

to further improve the perceptual capabilities of such systems. This issue is discussed, with examples on the fusion of image data with LADAR information. The chapter concludes with a discussion of the open problems and challenges in the area of visual perception.

Where visual sensing is insufficient, other sensors serve as additional sources of information and are equally important in improving the navigational and perceptual capabilities of autonomous robots. The use of millimeter wave RADAR for feature detection and navigation is treated in detail in the second chapter of this part. Millimeter wave RADAR can provide high-fidelity range information where vision sensors fail under poor visibility conditions, and is therefore a useful tool for robots to perceive their environment. The chapter first deals with the analysis and characterization of the noise affecting millimeter wave RADAR measurements. A method is then proposed for the accurate prediction of range spectra. This is followed by the description of a robust algorithm, based on target presence probability, that improves feature detection in highly cluttered environments.

Aside from providing robots with a view of the environment they are immersed in, certain sensors also give robots the ability to analyze and evaluate their own state, namely, their position. Augmenting such information with that garnered from environmental perception gives robots a clearer picture of the condition of their environment and their own role within it. While visual perception may be used for localization, internal and external sensors, such as an Inertial Navigation System (INS) and the Global Positioning System (GPS), allow refinement of the estimated values. The third chapter of this part treats, in detail, the use of both INS and GPS for position estimation. It first provides a comprehensive review of the Extended Kalman Filter (EKF), as well as the basics of GPS and INS. A detailed treatment of the use of the EKF in fusing measurements from GPS and INS follows, together with a discussion of various approaches that have been proposed for GPS/INS fusion.

In addition to internal and external explicit measurements, landmarks in the environment may also be used by robots to get a sense of where they are. This may be done through triangulation techniques, which are described in the final chapter of this part. Recognition of landmarks may be performed by the visual sensors, and localization is achieved through the association of landmarks with those in internal maps, thereby providing position estimates.

The chapter provides descriptions and experimental results of several different techniques for landmark-based position estimation. Different landmarks are used, ranging from laser beacons, to visually distinct landmarks, to movable landmarks mounted on robots for multi-robot localization. This part of the book aims to provide readers with an understanding of the theoretical and practical issues involved in the use of sensors, and of the important role sensors play in determining (and limiting) the degree of autonomy mobile robots can achieve.
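To make the flavor of such GPS/INS fusion concrete, here is a minimal sketch of a Kalman filter on a one-dimensional constant-velocity model, with INS acceleration driving the prediction step and lower-rate GPS position fixes driving the update step. The model, noise levels, rates, and all names are illustrative assumptions, not the implementation treated in the chapter.

```python
import numpy as np

# Minimal 1D illustration of GPS/INS fusion with a Kalman filter.
# State: [position, velocity]. INS acceleration drives the prediction
# step; noisy GPS position fixes drive the update step.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input model
H = np.array([[1.0, 0.0]])              # GPS observes position only
Q = 0.01 * np.eye(2)                    # process (INS drift) noise
R = np.array([[4.0]])                   # GPS measurement noise (m^2)

x = np.zeros((2, 1))                    # state estimate
P = np.eye(2)                           # estimate covariance

def predict(x, P, a_ins):
    """Propagate the state with the INS acceleration measurement."""
    x = F @ x + B * a_ins
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z_gps):
    """Correct the prediction with a GPS position fix."""
    y = z_gps - H @ x                   # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(0)
for k in range(100):
    x, P = predict(x, P, a_ins=1.0 + rng.normal(0, 0.1))
    if k % 10 == 0:                     # GPS arrives at a lower rate
        truth = 0.5 * (k * dt) ** 2
        x, P = update(x, P, np.array([[truth + rng.normal(0, 2.0)]]))
```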

The covariance $P^B_{F+E}$ of the joined map is obtained from the linearization of Equation (9.23), and is given by:

$$
P^B_{F+E} = J_F P^B_F J_F^T + J_E P^{E_j}_E J_E^T
= \begin{bmatrix} P^B_F & P^B_F J_1^T \\ J_1 P^B_F & J_1 P^B_F J_1^T \end{bmatrix}
+ \begin{bmatrix} 0 & 0 \\ 0 & J_2 P^{E_j}_E J_2^T \end{bmatrix} \qquad (9.24)
$$

where

$$
J_F = \left.\frac{\partial x^B_{F+E}}{\partial x^B_F}\right|_{(\hat{x}^B_F,\,\hat{x}^{E_j}_E)} = \begin{bmatrix} I \\ J_1 \end{bmatrix}
\qquad
J_E = \left.\frac{\partial x^B_{F+E}}{\partial x^{E_j}_E}\right|_{(\hat{x}^B_F,\,\hat{x}^{E_j}_E)} = \begin{bmatrix} 0 \\ J_2 \end{bmatrix}
$$

$$
J_1 = \begin{bmatrix}
0 & \cdots & J_{1\oplus}\{x^B_{F_i}, x^{E_j}_{E_0}\} & \cdots & 0 \\
\vdots & & \vdots & & \vdots \\
0 & \cdots & J_{1\oplus}\{x^B_{F_i}, x^{E_j}_{E_m}\} & \cdots & 0
\end{bmatrix}
\qquad
J_2 = \begin{bmatrix}
J_{2\oplus}\{x^B_{F_i}, x^{E_j}_{E_0}\} & & 0 \\
& \ddots & \\
0 & & J_{2\oplus}\{x^B_{F_i}, x^{E_j}_{E_m}\}
\end{bmatrix}
$$

Obtaining vector $x^B_{F+E}$ with Equation (9.23) is an $O(m)$ operation. Given that the number of nonzero elements in $J_1$ and $J_2$ is $O(m)$, obtaining matrix $P^B_{F+E}$ with Equation (9.24) is an $O(nm + m^2)$ operation. Thus, when $n \gg m$, map joining is linear in $n$.
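The block structure of Equation (9.24) maps directly onto code. Below is a minimal dense-matrix sketch (the function name and layout are our own, not the chapter's implementation); the $O(nm + m^2)$ cost quoted above additionally assumes that the $O(m)$ nonzero blocks of $J_1$ and $J_2$ are exploited, e.g., with sparse matrices, which this plain numpy version does not do.

```python
import numpy as np

def join_map_covariance(P_F, P_E, J1, J2):
    """Covariance of the joined map, following Equation (9.24).

    P_F : (n, n) covariance of the first map x^B_F
    P_E : (m, m) covariance of the second map x^{Ej}_E
    J1  : (m, n) Jacobian of the composed features w.r.t. x^B_F
          (nonzero only in the columns of the linking feature F_i)
    J2  : (m, m) Jacobian w.r.t. x^{Ej}_E (block diagonal)
    """
    top_right = P_F @ J1.T                      # P^B_F J1^T
    bottom_right = J1 @ top_right + J2 @ P_E @ J2.T
    # Joined covariance: [[P_F,      P_F J1^T        ],
    #                     [J1 P_F,   J1 P_F J1^T + J2 P_E J2^T]]
    return np.block([[P_F, top_right],
                     [top_right.T, bottom_right]])
```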

9.4.3 Matching and Fusion after Map Joining


The map resulting from map joining may contain features that, coming from different local maps, correspond to the same environment feature. To eliminate such duplications and obtain a more precise map, we need a data association algorithm to determine correspondences, and a feature fusion mechanism to update the global map. For determining correspondences we use the JCBB algorithm described in Section 9.3.2. Feature fusion is performed by a modified version of the EKF update equations, which consider a nonlinear measurement equation:

$$z_{ij} = h_{ij}(x) = 0 \qquad (9.25)$$

with a null noise covariance matrix, which constrains the relative location between the duplicates $F_i$ and $F_j$ of an environment feature. Once the matching constraints have been applied, the corresponding matching features become fully correlated, with the same estimation and covariance. Thus, one of them can be eliminated. The whole process of local map joining, matching, and fusion can be seen in the example of Figure 9.9.
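A minimal sketch of such a constraint update follows: it is the standard EKF update specialized to the measurement $z_{ij} = h_{ij}(x) = 0$ with a null noise covariance, not the chapter's exact modified equations, and the function name and signature are illustrative assumptions. After this update, the constraint holds to first order, so one of the duplicates can be dropped from the state.

```python
import numpy as np

def fuse_constraint(x, P, h, H):
    """EKF update enforcing the constraint h(x) = 0 (Equation 9.25).

    x : (n,) state vector of the joined map
    P : (n, n) state covariance
    h : (k,) value of the constraint at the current estimate, e.g.,
        the relative location between duplicates Fi and Fj
    H : (k, n) Jacobian of h at the current estimate
    """
    S = H @ P @ H.T                     # innovation covariance; R = 0
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x - K @ h                   # innovation is z - h(x) = -h(x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```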

9.4.4 Closing a Large Loop

To compare map joining with full EKF-SLAM we performed a map building experiment, using a robotized wheelchair equipped with a SICK laser scanner. The vehicle was hand-driven along a loop of about 250 m in a populated indoor/outdoor environment in the Ada Byron building of our campus. The laser scans were segmented into lines using the RANSAC technique. The global map obtained using the classical EKF-SLAM algorithm is shown in Figure 9.10a. At this point, the vehicle was very close to its initial starting position, closing the loop. The figure shows that the estimated vehicle location has an error of some 10 m, while the corresponding 95% uncertainty ellipses are ridiculously small, giving an inconsistent estimation. Due to these small ellipsoids, the JCBB data association algorithm was unable to properly detect the loop closure. This corroborates the results obtained with simulations in Reference 9: in large environments, the map obtained by EKF-SLAM quickly becomes inconsistent due to linearization errors. The same dataset was processed to obtain independent local maps at fixed intervals of about 10 m. The local maps were joined and fused, obtaining the global map shown in Figure 9.10b. In this case the loop was correctly detected by JCBB, and the map obtained appears to be consistent. Furthermore, the computation time was about 50 times shorter than with the standard EKF approach.
[Figure 9.9: panels (a) at t = 33.700 sec and (b) at t = 48.492 sec; plot data removed.]

FIGURE 9.9 Example of local map joining. (a) Local map $M^{B_1}_{F_1}$ with four features, P1, P2, S3, and a segment S4, with respect to reference B1; (b) local map $M^{B_2}_{F_2}$ with two features, S1 and S2, with respect to reference B2; (c) both maps are joined to obtain $M^{B_1}_{F_1+F_2}$; (d) map $M^{B_1}_{F_{1:2}}$ after updating by fusing S3 with S5, and S4 with S6. (Reprinted with permission from Tardós, J. D. et al., International Journal of Robotics Research, 21: 311–330, 2002.)

[Figure 9.9 continued: panels (c) and (d), both at t = 48.492 sec; plot data removed.]

FIGURE 9.9 Continued.

[Figure 9.10: panels (a) and (b); plot data removed.]

FIGURE 9.10 Global maps obtained using the standard EKF-SLAM algorithm (a) and local map joining (b).

[Figure 9.11: panels (a), (b), and (c) showing Robot 1 and Robot 2; plot data removed.]

FIGURE 9.11 Maps built by two independent robots (a, b) and the global map obtained by joining them (c).

9.4.5 Multi-robot SLAM


The techniques explained above can be applied to obtain global maps of large environments using several mobile robots. Figure 9.11a and b shows the maps built by two independent robots that have traversed a common area. In this case, the relative location between the robots is unknown. The process for obtaining a common global map is as follows: Choose at random one feature in the first map, pick its set of covisible features, and search for matches in the second map using the RS relocation algorithm. Repeat the process, up to a fixed maximum number of tries, until a good match is found.

When a match is found, choose a common reference in both maps. In this case the reference is built at the intersection of two nonparallel walls that have been matched by RS. This gives the relative location between both maps. Change the base of the second map to the common reference using the technique detailed in Reference 10. Join both maps (Section 9.4.2), search for more matches using JCBB, and fuse both maps into a global map (Section 9.4.3) that contains the location of all features and both robots, relative to the base of the first map. The global map obtained is shown in Figure 9.11c; the bold lines are the covisibility set used to match both maps. After that point, both robots can continue exploring the environment, building new independent local maps that can be joined and fused with the global map.
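The sketch below illustrates one way such a common reference might be constructed: it intersects two matched nonparallel walls, places a frame at the intersection, and composes the two frame locations (using the 2D operators defined in the Appendix) to obtain the relative location of the two map bases. The wall parameterization and function names are illustrative assumptions; the actual matching is done by the RS relocation algorithm.

```python
import numpy as np

def wall_intersection_frame(p1, d1, p2, d2):
    """Reference frame [x, y, theta] at the intersection of two walls.

    Each wall is a line given by a point p and a unit direction d,
    expressed in the map's own base reference. The frame's origin is
    the intersection; its x-axis is aligned with the first wall.
    (Wall directions must be oriented consistently in both maps.)
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    # Solve p1 + t1*d1 = p2 + t2*d2 (walls must be nonparallel).
    t = np.linalg.solve(np.column_stack((d1, -d2)), p2 - p1)
    origin = p1 + t[0] * d1
    return np.array([origin[0], origin[1], np.arctan2(d1[1], d1[0])])

def relative_location(frame_in_map1, frame_in_map2):
    """x^{B1}_{B2} = x^{B1}_R (+) ((-) x^{B2}_R): location of map 2's
    base relative to map 1's base, via the shared reference R."""
    x, y, th = frame_in_map2                      # inversion (-)
    inv = np.array([-x*np.cos(th) - y*np.sin(th),
                     x*np.sin(th) - y*np.cos(th), -th])
    x1, y1, th1 = frame_in_map1                   # composition (+)
    return np.array([x1 + inv[0]*np.cos(th1) - inv[1]*np.sin(th1),
                     y1 + inv[0]*np.sin(th1) + inv[1]*np.cos(th1),
                     th1 + inv[2]])
```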

9.5 CONCLUSIONS

The EKF approach to SLAM dates back to the seminal work reported in Reference 2, where the idea of representing the structure of the navigation area in a discrete-time state-space framework was originally presented. Nowadays, the basic properties and limitations of this approach are quite well understood. Three important convergence properties were proven in Reference 26: (1) the determinant of any submatrix of the map covariance matrix decreases monotonically as observations are successively made; (2) in the limit, as the number of observations increases, the landmark estimates become fully correlated; and (3) in the limit, the covariance associated with any single landmark location estimate reaches a lower bound determined only by the initial covariance in the vehicle location estimate at the time of the first sighting of the first landmark. It is important to note that these theoretical results only refer to the evolution of the covariance matrices computed by the EKF in the ideal linear case. They overlook the fact that, since SLAM is a nonlinear problem, there is no guarantee that the computed covariances will match the actual estimation errors, which is the true SLAM consistency issue first pointed out in Reference 38. In a recent paper [9], we showed with simulations that linearization errors lead to inconsistent estimates well before the computational problems arise. In Section 9.4 we have presented experimental evidence that methods like map joining, based on building independent local maps, effectively reduce linearization errors, improving the estimator consistency.

The main open challenges in SLAM include efficient mapping of large environments, modeling complex and dynamic environments, multi-vehicle SLAM, and full 3D SLAM. Most of these challenges will require scalable representations, robust data association algorithms, consistent estimation techniques, and different sensor modalities. In particular, solving SLAM with monocular or stereo vision is a crucial research goal for addressing many real-life applications.

APPENDIX: TRANSFORMATIONS IN 2D
Two basic operations used in stochastic mapping are transformation inversion and composition, which were represented in Reference 2 using the operators $\ominus$ and $\oplus$:

$$x^B_A = \ominus x^A_B \qquad x^A_C = x^A_B \oplus x^B_C$$

In this chapter, we generalize the $\oplus$ operator to also represent the composition of transformations with feature location vectors, which results in the change of base reference of the feature. The Jacobians of these operations are defined as:

$$
J_\ominus\{x^A_B\} = \left.\frac{\partial(\ominus x^A_B)}{\partial x^A_B}\right|_{\hat{x}^A_B}
$$

$$
J_{1\oplus}\{x^A_B, x^B_C\} = \left.\frac{\partial(x^A_B \oplus x^B_C)}{\partial x^A_B}\right|_{(\hat{x}^A_B,\,\hat{x}^B_C)}
\qquad
J_{2\oplus}\{x^A_B, x^B_C\} = \left.\frac{\partial(x^A_B \oplus x^B_C)}{\partial x^B_C}\right|_{(\hat{x}^A_B,\,\hat{x}^B_C)}
$$

In 2D, the location of a reference B relative to a reference A (or the transformation from A to B) can be expressed by a vector with three d.o.f.: $x^A_B = [x_1, y_1, \theta_1]^T$. The location of A relative to B is computed using the inversion operation:

$$
x^B_A = \ominus x^A_B = \begin{bmatrix} -x_1\cos\theta_1 - y_1\sin\theta_1 \\ x_1\sin\theta_1 - y_1\cos\theta_1 \\ -\theta_1 \end{bmatrix}
$$

The Jacobian of transformation inversion is:

$$
J_\ominus\{x^A_B\} = \begin{bmatrix}
-\cos\theta_1 & -\sin\theta_1 & x_1\sin\theta_1 - y_1\cos\theta_1 \\
\sin\theta_1 & -\cos\theta_1 & x_1\cos\theta_1 + y_1\sin\theta_1 \\
0 & 0 & -1
\end{bmatrix}
$$



Let $x^B_C = [x_2, y_2, \theta_2]^T$ be a second transformation. The location of reference C relative to A is obtained by the composition of the transformations $x^A_B$ and $x^B_C$:

$$
x^A_C = x^A_B \oplus x^B_C = \begin{bmatrix} x_1 + x_2\cos\theta_1 - y_2\sin\theta_1 \\ y_1 + x_2\sin\theta_1 + y_2\cos\theta_1 \\ \theta_1 + \theta_2 \end{bmatrix}
$$

The Jacobians of transformation composition are:

$$
J_{1\oplus}\{x^A_B, x^B_C\} = \begin{bmatrix}
1 & 0 & -x_2\sin\theta_1 - y_2\cos\theta_1 \\
0 & 1 & x_2\cos\theta_1 - y_2\sin\theta_1 \\
0 & 0 & 1
\end{bmatrix}
\qquad
J_{2\oplus}\{x^A_B, x^B_C\} = \begin{bmatrix}
\cos\theta_1 & -\sin\theta_1 & 0 \\
\sin\theta_1 & \cos\theta_1 & 0 \\
0 & 0 & 1
\end{bmatrix}
$$
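These operations translate directly into code. The sketch below (with our own function names) implements the inversion and composition operators and their Jacobians exactly as defined above, and checks the identity $x^A_B \oplus (\ominus x^A_B) = 0$.

```python
import numpy as np

def inv(t):
    """Inversion of a 2D transformation t = [x, y, theta]."""
    x, y, th = t
    c, s = np.cos(th), np.sin(th)
    return np.array([-x*c - y*s, x*s - y*c, -th])

def compose(ta, tb):
    """Composition x^A_C = x^A_B (+) x^B_C."""
    x1, y1, th1 = ta
    x2, y2, th2 = tb
    c, s = np.cos(th1), np.sin(th1)
    return np.array([x1 + x2*c - y2*s, y1 + x2*s + y2*c, th1 + th2])

def J_inv(t):
    """Jacobian of inversion, evaluated at t."""
    x, y, th = t
    c, s = np.cos(th), np.sin(th)
    return np.array([[-c, -s, x*s - y*c],
                     [ s, -c, x*c + y*s],
                     [ 0,  0, -1]])

def J1_compose(ta, tb):
    """Jacobian of composition w.r.t. the first transformation."""
    x2, y2, _ = tb
    c, s = np.cos(ta[2]), np.sin(ta[2])
    return np.array([[1, 0, -x2*s - y2*c],
                     [0, 1,  x2*c - y2*s],
                     [0, 0,  1]])

def J2_compose(ta, tb):
    """Jacobian of composition w.r.t. the second transformation."""
    c, s = np.cos(ta[2]), np.sin(ta[2])
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

# Sanity check: composing a transformation with its inverse
# recovers the identity transformation.
t = np.array([1.0, 2.0, 0.3])
assert np.allclose(compose(t, inv(t)), np.zeros(3))
```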

ACKNOWLEDGMENT
This research has been funded in part by the Dirección General de Investigación of Spain under project DPI2003-07986.

REFERENCES

1. J. A. Castellanos, J. M. M. Montiel, J. Neira, and J. D. Tardós. The SPmap: a probabilistic framework for simultaneous localization and map building. IEEE Transactions on Robotics and Automation, 15: 948–953, 1999.
2. R. Smith, M. Self, and P. Cheeseman. A stochastic map for uncertain spatial relationships. In O. Faugeras and G. Giralt (eds), Robotics Research, The Fourth International Symposium, pp. 467–474. The MIT Press, Cambridge, MA, 1988.
3. J. E. Guivant and E. M. Nebot. Optimization of the simultaneous localization and map-building algorithm for real-time implementation. IEEE Transactions on Robotics and Automation, 17: 242–257, 2001.
4. J. J. Leonard, R. Carpenter, and H. J. S. Feder. Stochastic mapping using forward-look sonar. Robotica, 19: 467–480, 2001.
5. J. H. Kim and S. Sukkarieh. Airborne simultaneous localisation and map building. In IEEE International Conference on Robotics and Automation, Taipei, Taiwan, September 2003.
6. J. Knight, A. Davison, and I. Reid. Towards constant time SLAM using postponement. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 406–412, Maui, Hawaii, 2001.
7. S. J. Julier and J. K. Uhlmann. Building a million beacon map. In P. S. Schenker and G. T. McKee (eds), SPIE Int. Soc. Opt. Eng., vol. 4571, pp. 10–21, SPIE, Washington DC, 2001.
8. Y. Liu and S. Thrun. Results for outdoor-SLAM using sparse extended information filters. In IEEE International Conference on Robotics and Automation, pp. 1227–1233, Taipei, Taiwan, 2003.
9. J. A. Castellanos, J. Neira, and J. D. Tardós. Limits to the consistency of EKF-based SLAM. In 5th IFAC Symposium on Intelligent Autonomous Vehicles, Lisbon, Portugal, 2004.
10. J. D. Tardós, J. Neira, P. Newman, and J. Leonard. Robust mapping and localization in indoor environments using sonar data. International Journal of Robotics Research, 21: 311–330, 2002.
11. S. B. Williams, G. Dissanayake, and H. F. Durrant-Whyte. An efficient approach to the simultaneous localisation and mapping problem. In IEEE International Conference on Robotics and Automation, vol. 1, pp. 406–411, Washington DC, 2002.
12. S. B. Williams. Efficient Solutions to Autonomous Mapping and Navigation Problems. Australian Centre for Field Robotics, University of Sydney, September 2001. http://www.acfr.usyd.edu.au/
13. M. Bosse, P. Newman, J. Leonard, M. Soika, W. Feiten, and S. Teller. An Atlas framework for scalable mapping. In IEEE International Conference on Robotics and Automation, pp. 1899–1906, Taipei, Taiwan, 2003.
14. T. Bailey. Mobile Robot Localisation and Mapping in Extensive Outdoor Environments. Australian Centre for Field Robotics, University of Sydney, August 2002. http://www.acfr.usyd.edu.au/
15. J. J. Leonard and P. M. Newman. Consistent, convergent and constant-time SLAM. In International Joint Conference on Artificial Intelligence, Acapulco, Mexico, August 2003.
16. C. Estrada, J. Neira, and J. D. Tardós. Hierarchical SLAM: real-time accurate mapping of large environments. IEEE Transactions on Robotics and Automation, 21(4): 588–596, 2005.
17. M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM: a factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002.
18. J. S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pp. 318–325, Monterey, California, 1999.
19. F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 4: 333–349, 1997.
20. S. Thrun, D. Hähnel, D. Ferguson, M. Montemerlo, R. Triebel, W. Burgard, C. Baker, Z. Omohundro, S. Thayer, and W. Whittaker. A system for volumetric robotic mapping of abandoned mines. In IEEE International Conference on Robotics and Automation, pp. 4270–4275, Taipei, Taiwan, 2003.
21. J. A. Castellanos and J. D. Tardós. Mobile Robot Localization and Map Building: A Multisensor Fusion Approach. Kluwer Academic Publishers, Boston, MA, 1999.
22. P. Newman, J. Leonard, J. D. Tardós, and J. Neira. Explore and return: experimental validation of real-time concurrent mapping and localization. In IEEE International Conference on Robotics and Automation, pp. 1802–1809, Washington DC, 2002.
23. J. A. Castellanos, J. M. M. Montiel, J. Neira, and J. D. Tardós. Sensor influence in the performance of simultaneous mobile robot localization and map building. In P. Corke and J. Trevelyan (eds), Experimental Robotics VI. Lecture Notes in Control and Information Sciences, vol. 250, pp. 287–296. Springer-Verlag, Heidelberg, 2000.
24. A. J. Davison. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the International Conference on Computer Vision, Nice, October 2003.
25. J. A. Castellanos, J. Neira, and J. D. Tardós. Multisensor fusion for simultaneous localization and map building. IEEE Transactions on Robotics and Automation, 17: 908–914, 2001.
26. M. W. M. G. Dissanayake, P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17: 229–241, 2001.
27. A. H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York, 1970.
28. Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Academic Press, New York, 1988.
29. Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. John Wiley & Sons, New York, 2001.
30. W. E. L. Grimson. Object Recognition by Computer: The Role of Geometric Constraints. The MIT Press, Cambridge, MA, 1990.
31. J. M. M. Montiel and L. Montano. Efficient validation of matching hypotheses using Mahalanobis distance. Engineering Applications of Artificial Intelligence, 11: 439–448, 1998.
32. J. Neira and J. D. Tardós. Data association in stochastic mapping using the joint compatibility test. IEEE Transactions on Robotics and Automation, 17: 890–897, 2001.
33. J. Neira, J. D. Tardós, and J. A. Castellanos. Linear time vehicle relocation in SLAM. In IEEE International Conference on Robotics and Automation, pp. 427–433, Taipei, Taiwan, September 2003.
34. T. Bailey, E. M. Nebot, J. K. Rosenblatt, and H. F. Durrant-Whyte. Data association for mobile robot navigation: a graph theoretic approach. In IEEE International Conference on Robotics and Automation, pp. 2512–2517, San Francisco, California, 2000.
35. M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24: 381–395, 1981.
36. R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, U.K., 2000.
37. S. Thrun, Y. Liu, D. Koller, A. Y. Ng, Z. Ghahramani, and H. F. Durrant-Whyte. Simultaneous localization and mapping with sparse extended information filters. The International Journal of Robotics Research, 23: 693–716, 2004.
38. S. J. Julier and J. K. Uhlmann. A counter example to the theory of simultaneous localization and map building. In IEEE International Conference on Robotics and Automation, pp. 4238–4243, Seoul, Korea, 2001.

BIOGRAPHIES

José A. Castellanos was born in Zaragoza, Spain, in 1969. He earned the M.S. and Ph.D. degrees in industrial-electrical engineering from the University of Zaragoza, Spain, in 1994 and 1998, respectively. He is an associate professor with the Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, where he is in charge of courses in SLAM, automatic control systems, and computer modelling and simulation. His current research interests include multisensor fusion and integration, Bayesian estimation in nonlinear systems, and simultaneous localization and mapping.

José Neira was born in Bogotá, Colombia, in 1963. He earned the M.S. degree in computer science from the Universidad de los Andes, Colombia, in 1986, and the Ph.D. degree in computer science from the University of Zaragoza, Spain, in 1993. He is an associate professor with the Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, where he is in charge of courses in compiler theory, computer vision, and mobile robotics. His current research interests include autonomous robots, data association, and environment modeling.

Juan D. Tardós was born in Huesca, Spain, in 1961. He earned the M.S. and Ph.D. degrees in industrial-electrical engineering from the University of Zaragoza, Spain, in 1985 and 1991, respectively. He is an associate professor with the Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, where he is in charge of courses in real-time systems, computer vision, and artificial intelligence. His current research interests include perception and mobile robotics.

10

Motion Planning: Recent Developments

Héctor H. González-Baños, David Hsu, and Jean-Claude Latombe

CONTENTS

10.1 Introduction ... 374
10.2 Path Planning ... 375
  10.2.1 Configuration Space ... 376
  10.2.2 Early Approaches ... 377
    10.2.2.1 Roadmap ... 378
    10.2.2.2 Cell decomposition ... 379
    10.2.2.3 Potential field ... 380
  10.2.3 Random Sampling ... 381
    10.2.3.1 Multi-query planning ... 382
    10.2.3.2 Single-query planning ... 383
    10.2.3.3 Probabilistic completeness ... 385
    10.2.3.4 Advantages of random sampling ... 386
10.3 Motion Planning under Kinematic and Dynamic Constraints ... 386
  10.3.1 Kinematic and Dynamic Constraints ... 386
  10.3.2 General Approaches ... 389
  10.3.3 Random Sampling ... 390
  10.3.4 Case Studies on Real Robotic Systems ... 390
    10.3.4.1 Motion planning of trailer-trucks for transporting Airbus A380 components ... 391
    10.3.4.2 A space robotics test-bed ... 392
10.4 Motion Planning under Visibility Constraints ... 393
  10.4.1 Sensor Placement ... 394
    10.4.1.1 Sampling ... 395
    10.4.1.2 Near-optimal set covers ... 397
    10.4.1.3 Extensions ... 397
  10.4.2 Indoor Exploration ... 398
    10.4.2.1 Constraints on the NBV ... 398
    10.4.2.2 Safe regions ... 399
    10.4.2.3 Image registration ... 400
    10.4.2.4 Evaluating next views ... 401
    10.4.2.5 Computing the NBV ... 401
    10.4.2.6 Extensions ... 402
  10.4.3 Target Tracking ... 403
    10.4.3.1 State transition equations ... 403
    10.4.3.2 Visibility constraints ... 404
    10.4.3.3 Tracking strategies ... 404
    10.4.3.4 Backchaining and dynamic programming ... 407
    10.4.3.5 Escape-time approximations ... 407
    10.4.3.6 Robot localization ... 408
    10.4.3.7 Other results and extensions ... 408
10.5 Other Important Issues ... 409
10.6 Conclusion ... 410
References ... 411
Biographies ... 416

10.1 INTRODUCTION
A key trait of an autonomous robot is the ability to plan its own motion in order to accomplish specified tasks. Often, the objective of motion planning is to change the state of the world by computing a sequence of admissible motions for the robot. For example, in the path planning problem, we compute a collision-free path for a robot to go from an initial position to a goal position among static obstacles. This is the simplest type of motion planning problem, yet it is a provably hard computational problem [1]. Sometimes, instead of changing the state of the world, our objective is to maintain a set of constraints on the state of the world (e.g., following a target and keeping it in view), or to achieve a certain state of knowledge about the world (e.g., exploring and mapping an unknown environment).

Ideally, the robot achieves its objectives despite the many possible motion constraints, internal or external to the robot. Traditionally, motion planning emphasizes a single external constraint: physical obstacles in the environment. This is actually the only constraint considered in path planning. However, real robots have inherent mechanical limitations, such as the nonholonomic constraints that prevent wheeled robots from moving sideways. Robots may also be constrained by sensor limitations, such as obstacles blocking the views of cameras. These internal constraints are important, but taking them into account further complicates motion planning. In recent years, random sampling has emerged as a powerful approach for motion planning. It is computationally efficient and relatively simple to implement. Its development was originally driven by the need to plan

motions for robots with many degrees of freedom (dof), such as cooperating manipulator arms. However, we will downplay this aspect in this chapter. Instead, our main goal is to show how random sampling, combined with geometric and physical insights, can effectively handle the motion constraints resulting from robots' mechanical and sensor limitations. We start with an overview of path planning and proceed to the random-sampling approach to path planning (Section 10.2). Next, we focus on motion planning under two types of internal constraints: kinematic and dynamic constraints (Section 10.3) and visibility constraints (Section 10.4). We also briefly touch on the effect of uncertainty on motion planning (Section 10.5).

10.2 PATH PLANNING

In path planning, we are given a complete description of the geometry of a robot and of a static environment populated with obstacles. Our goal is to find a collision-free path for the robot to move from an initial position and orientation to a goal position and orientation. Although path planning algorithms differ greatly in their details, most of them follow a common framework (Figure 10.1). The first step is to map the robot, which may have a complex geometric shape, to a point in a new, abstract space called the configuration space [2]. This mapping transforms the original problem into that of path planning for a moving point. Next, we discretize the continuous configuration space and construct a graph that represents the connectivity of the space. Finally, we search this graph to find a path for the robot to reach the goal. If no path is found, we may sometimes repeat the process by refining the discretization and searching again. An important consideration for path planning algorithms is completeness. A path planning algorithm is complete if it finds a path whenever one exists
[Figure 10.1 diagram: problem formulation maps the geometry of a robot and obstacles to a configuration space; discretization builds a connectivity graph; graph search yields a solution path.]

FIGURE 10.1 A common framework for path planning.



and reports that none exists otherwise. However, achieving completeness is often computationally intractable. In practice, we have to trade off some amount of completeness for increased computational efficiency. In this section, we first present the concept of configuration space (Section 10.2.1). Next, we briefly describe some early approaches to path planning (Section 10.2.2), before focusing on how the random-sampling approach works in this relatively simple setting (Section 10.2.3).

10.2.1 Configuration Space

The configuration of a robot is a set of parameters that uniquely determines the position of every point in the robot. For example, the configuration of a mobile robot is usually its position (x, y) and orientation $\theta \in [-\pi, \pi)$.

The configuration of an articulated robot manipulator is usually a list of joint angles $(\theta_1, \theta_2, \ldots)$. Suppose that the configuration of a robot consists of d parameters. It can then be regarded as a point in a d-dimensional space C, called the configuration space. A configuration q is free if the robot placed at q does not collide with the obstacles or with itself. We define the free space F to be the subset of all free configurations in C, and the obstacle space B to be the complement of F: $B = C \setminus F$. See Figure 10.2b for an illustration. For a robot that only translates in the plane, we can construct C explicitly by computing the Minkowski difference of the robot and the obstacles. Intuitively, we can think of the computation as growing the obstacles by the shape of the robot and shrinking the robot to a point (Figure 10.2). In general, a mobile robot not only translates, but also rotates. In this case, we compute slices of C with the robot in various fixed orientations and then stack and stitch these

FIGURE 10.2 A robot translating in the plane. (a) The triangular robot moves in an environment with a single rectangular obstacle. (b) The configuration space of the robot. The configuration of the robot is represented by the position (x, y) of a reference point in the robot.

slices together. Computing C exactly is also possible, though somewhat more complicated [3]. For high-dimensional configuration spaces, explicitly constructing C is difficult. Instead, we represent C implicitly by a function CLEARANCE: $C \to \mathbb{R}$, which maps a configuration $q \in C$ to the distance between a robot at q and the obstacles. If CLEARANCE(q) returns 0, then q is in collision. An efficient implementation of this function can be achieved with hierarchical collision detection or distance computation algorithms [4]. Whether represented explicitly or implicitly, the configuration space encodes the key information of whether a robot at a particular configuration is in collision with the obstacles or not. We can thus state the path planning problem formally in the configuration space as follows.

Problem 10.2.1 (path planning) Given an initial configuration $q_{init}$ and a goal configuration $q_{goal}$, find a path in the free space F between $q_{init}$ and $q_{goal}$.

In essence, the robot becomes a point in C, and the path planning problem for the robot becomes that of finding a path for a moving point in F. This transformation does not change the problem in any way, but it often makes the problem conceptually simpler to work with.
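As a toy illustration of such an implicit representation, the sketch below implements a CLEARANCE function for a point robot among circular obstacles and uses it to test whether a straight segment in C stays inside the free space F. The obstacle model, step size, and all names are illustrative assumptions; practical systems rely on hierarchical collision detection or distance computation algorithms [4].

```python
import numpy as np

# Circular obstacles as (center, radius) pairs; a point robot, so
# the configuration space C is simply the plane.
OBSTACLES = [((2.0, 2.0), 1.0), ((5.0, 1.0), 0.5)]

def clearance(q):
    """Distance from configuration q = (x, y) to the nearest obstacle.
    Returns 0 if q is in collision (inside an obstacle)."""
    d = min(np.hypot(q[0] - cx, q[1] - cy) - r
            for (cx, cy), r in OBSTACLES)
    return max(d, 0.0)

def segment_is_free(q0, q1, step=0.05):
    """Check a straight segment in C by sampling it densely.
    (A conservative variant would use the clearance value itself
    to bound how far the next sample may be.)"""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    n = max(int(np.linalg.norm(q1 - q0) / step), 1)
    return all(clearance(q0 + t * (q1 - q0)) > 0.0
               for t in np.linspace(0.0, 1.0, n + 1))

print(clearance((0.0, 0.0)))             # a free configuration
print(segment_is_free((0, 0), (6, 3)))   # False: crosses an obstacle
```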
