
Towards a Fully Autonomous Indoor Helicopter

Slawomir Grzonka, Samir Bouabdallah, Giorgio Grisetti, Wolfram Burgard and Roland Siegwart
{grzonka, grisetti, burgard}@informatik.uni-freiburg.de, [email protected], [email protected]

Despite the significant progress in micro and information technologies and the strong interest of the scientific community in Micro Aerial Vehicles (MAVs), fully autonomous micro-helicopters of the size of a small bird are still not available. The Mesicopter group at Stanford University [1] studied the feasibility of a centimeter-scale quadrotor. The group of Prof. Nonami at Chiba University [2] achieved a 13 g semi-autonomous coaxial helicopter which is able to fly for three minutes. Unfortunately, none of these developments combines reasonable endurance with autonomous navigation in narrow environments. The European project muFly [3], a STREP project under the Sixth Framework Programme of the European Commission (contract No. FP6-2005-IST-5-call 2.5.2 Micro/Nano Based Sub-Systems, FP6-IST-034120), was born in this context; it targets the development and implementation of a fully autonomous micro-helicopter with a maximum span of 20 cm and a mass of 50 g. The consortium is composed of six partners, each of which provides a subsystem of the entire helicopter.

One of the objectives of the muFly project is to introduce low processing-power localization algorithms for micro-helicopters. However, compact and lightweight indoor localization sensors do not yet exist (unlike GPS for outdoor use). Thus, the CSEM research center (Switzerland), which is involved in muFly, is presently designing a miniature omni-directional camera [4] to be coupled with a laser source and used as a 360° triangulation-based range finder. This sensor and the muFly platform will be available for testing in a couple of months. Until then, the algorithms have to be developed and validated with another sensor and on a different flying platform.

The main scope of application for the helicopter is indoor environments, which raise constraints that are not present when flying outdoors. Due to the absence of GPS information, the robot has to rely on other on-board sensors. Furthermore, the accuracy of the positioning system is an essential requirement for indoor operation, which is characterized by a limited safety margin (e.g., the robot crossing a doorway).

In the last decade, navigation systems for autonomous flying vehicles have received increasing attention from the research community. Ng and colleagues [5] developed effective control algorithms for outdoor helicopters using reinforcement learning techniques. Haehnel et al. [6] proposed a 3D mapping technique for outdoor environments. For indoor navigation, Tournier et al. [7] used monocular vision to estimate and control the current pose of a quadrotor. Roberts et al. [8] utilized ultrasound sensors to control a quadrotor in an indoor environment. Recently, He et al. [9] described planning in the information space for a quadrotor helicopter navigating in a GPS-denied environment.

Fig. 1. Our quadrotor: 1) Mikrokopter platform, 2) Hokuyo laser range finder, 3) XSens IMU, 4) Gumstix computer.

In this paper, we present the setup and the algorithms for estimating the pose of a flying vehicle within a known environment. For validating the algorithms we use a modified Mikrokopter [10] quadrotor, illustrated in Figure 1, which we equipped with a Hokuyo URG laser range scanner and a low-cost XSens MTi IMU. The laser range finder measures distances of up to 5.6 m with an angular resolution of approximately 0.35°. To measure the altitude of the vehicle with respect to the ground, we deflect several laser beams towards the ground with a mirror; the remaining beams are used for 2D localization. The XSens provides orientation angles with a dynamic accuracy of 2°. The on-board computation is performed by a PXA-based embedded computer (Gumstix Verdex) running at 600 MHz.

This combination of laser scanner and IMU allows us to simplify the localization problem by reducing the state space from 6 to 4 dimensions, since accurate roll and pitch angles are available from the IMU. Partitioning the remaining 4 DOF into (x, y, θ) and z makes it possible to use the broad range of existing algorithms for 2D (x, y, θ) localization developed for wheeled mobile robots.
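The following Python sketch illustrates this projection and split; it is not the implementation from the paper. It projects a raw scan into a gravity-aligned frame using the IMU roll and pitch and separates the mirror-deflected altitude beams from the localization beams. The mirror geometry (beams deflected straight down in the body frame) and the index set mirror_idx are assumptions made for illustration.

import numpy as np

def rotation_rp(roll, pitch):
    # Rotation from the body frame into a gravity-aligned frame,
    # compensating the IMU roll and pitch (yaw stays in the filter state).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y
    return Ry @ Rx

def split_and_project(ranges, angles, roll, pitch, mirror_idx):
    # End points of all beams in the planar sensor frame.
    mirror_idx = np.asarray(mirror_idx)
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    # Assumption: the mirror deflects these beams straight down.
    pts[mirror_idx] = 0.0
    pts[mirror_idx, 2] = -ranges[mirror_idx]
    pts = pts @ rotation_rp(roll, pitch).T  # undo roll and pitch
    down = np.zeros(len(ranges), dtype=bool)
    down[mirror_idx] = True
    height = -np.median(pts[down, 2])       # altitude above the ground
    loc_xy = pts[~down, :2]                 # planar points for localization
    return height, loc_xy

In this sketch, the height estimate would feed an altitude controller, while loc_xy provides the planar points available for scan matching and for the measurement update of the particle filter described below.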
To estimate the (x, y, θ) pose of the vehicle we apply a particle filter [11]. In contrast to other filtering techniques such as the Kalman filter, particle filters can deal with highly non-linear systems and can approximate arbitrarily complex density functions. This includes multi-modal pose estimation as well as global localization, i.e., the situation in which the starting pose of the vehicle is not known in advance. The key idea of Monte Carlo localization is to estimate the possible robot locations using a sample-based representation. Formally, the task consists in estimating the posterior $p(x_t \mid z_{1:t}, u_{1:t})$ of the current robot pose $x_t$, given a known map of the environment, the odometry measurements $u_{1:t} = \langle u_1, \ldots, u_t \rangle$, and the observations $z_{1:t} = \langle z_1, \ldots, z_t \rangle$ made so far. In the particle filter framework, the probability distribution over the pose of the robot at time step $t$ is represented by a set of weighted samples $\{x_t^{[j]}\}$. The robustness and efficiency of this procedure strongly depend on the proposal distribution that is used to sample the new state hypotheses.

Since our flying vehicle does not provide reliable odometry measurements, we apply an incremental scan-matching procedure to estimate the inter-frame motion of the vehicle. Our algorithm can be described as follows. In a first step, we project the laser beams based on the latest roll and pitch estimates from the IMU. The projected beams are then divided into two parts, namely the beams for height estimation and the beams for 2D (x, y, θ) localization. We then perform incremental scan matching on the localization beams. In this way, we obtain an estimate of the inter-frame motion, which is used in the prediction step of the particle filter. The measurement update utilizes the current (projected) laser beams and a likelihood-field map of the environment to calculate the individual weights of the particles.
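To make the filter concrete, the following is a minimal Monte Carlo localization step in Python; it is a sketch under our assumptions, not the authors' code. The scan-matching result delta, the motion noise levels, and the likelihood-field object lfield with its prob(x, y) method are hypothetical placeholders.

import numpy as np

def mcl_step(particles, weights, delta, beams, lfield,
             motion_noise=(0.02, 0.02, 0.01)):
    """One particle filter update for the (x, y, theta) subspace.

    particles: (N, 3) array of pose hypotheses [x, y, theta]
    delta:     inter-frame motion (dx, dy, dtheta) from scan matching
    beams:     (M, 2) projected 2D beam end points in the sensor frame
    lfield:    likelihood-field map with a (hypothetical) prob(x, y) method
    """
    N = len(particles)
    # Prediction: apply the scan-matching motion in each particle's frame,
    # perturbed by sampled noise (the proposal distribution).
    noise = np.random.randn(N, 3) * motion_noise
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * delta[0] - s * delta[1] + noise[:, 0]
    particles[:, 1] += s * delta[0] + c * delta[1] + noise[:, 1]
    particles[:, 2] += delta[2] + noise[:, 2]

    # Measurement update: weight each particle by the likelihood field
    # evaluated at the beam end points transformed into the map frame.
    # (In practice one would subsample beams or work in log space.)
    for j in range(N):
        x, y, th = particles[j]
        ct, st = np.cos(th), np.sin(th)
        wx = x + ct * beams[:, 0] - st * beams[:, 1]
        wy = y + st * beams[:, 0] + ct * beams[:, 1]
        weights[j] *= np.prod([lfield.prob(px, py)
                               for px, py in zip(wx, wy)])
    weights /= np.sum(weights)

    # Low-variance resampling when the effective sample size drops.
    if 1.0 / np.sum(weights ** 2) < N / 2.0:
        targets = (np.arange(N) + np.random.rand()) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), targets), N - 1)
        particles, weights = particles[idx].copy(), np.full(N, 1.0 / N)
    return particles, weights

For global localization, the particles would be initialized uniformly over the free space of the map and the same update applied at every scan; the filter then converges as the vehicle moves, as in the experiment described next.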
We tested our algorithms by remotely controlling the quadrotor while flying through our building, as shown in Figure 2. We implemented an autonomous height stabilization control in order to test the system at different height levels. The localization result of one experiment, performed at a flying height of 50 cm with 5000 particles for global localization, is depicted in Figure 3. The top image shows the initial situation, in which the current pose of the quadrotor is unknown. After a few iterations (i.e., after about 1 m of flight), the localization algorithm starts to focus on relatively few possible poses only (middle image). After about 5 m of flight, the particles are highly focused around the true pose of the helicopter (see the bottom image of Figure 3). Note that we highlighted the maximum a posteriori pose estimate in all three snapshots.

Fig. 2. In all our experiments the quadrotor autonomously kept a previously defined height.

Fig. 3. Global localization of our quadrotor. Top: initial situation, with uniformly drawn random poses. Middle: after about 1 m of flight, the particles start to focus on the true pose. Bottom: after approximately 5 m of flight the particle set has focused around the true pose of the helicopter. The blue circle highlights the current best estimate of the particle filter. The quadrotor was able to autonomously maintain its height of 50 cm during this experiment.

REFERENCES

[1] I. Kroo et al., "The Mesicopter: A miniature rotorcraft concept, phase 2 interim report," Stanford University, USA, 2000.
[2] W. Wang et al., "Autonomous control for micro-flying robot and small wireless helicopter X.R.B," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, 2006.
[3] muFly, http://www.mufly.org/.
[4] C. Gimkiewicz, "Ultra-miniature catadioptrical system for an omnidirectional camera," in Micro-Optics 2008, Strasbourg, France, 2008.
[5] A. Coates, P. Abbeel, and A. Y. Ng, "Learning for control from multiple demonstrations," in Proc. of the International Conference on Machine Learning (ICML), 2008.
[6] S. Thrun, M. Diel, and D. Haehnel, "Scan alignment and 3-D surface modeling with a helicopter platform," in Field and Service Robotics, ser. Springer Tracts in Advanced Robotics, vol. 24, pp. 287–297, 2006.
[7] G. Tournier, M. Valenti, J. How, and E. Feron, "Estimation and control of a quadrotor vehicle using monocular vision and moire patterns," in Proc. of the AIAA Guidance, Navigation and Control Conference and Exhibit, pp. 21–24, 2006.
[8] J. Roberts, T. Stirling, J. Zufferey, and D. Floreano, "Quadrotor using minimal sensing for autonomous indoor flight."
[9] R. He, S. Prentice, and N. Roy, "Planning in information space for a quadrotor helicopter in a GPS-denied environment," in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), pp. 1814–1820, 2008.
[10] Mikrokopter, http://www.mikrokopter.de/.
[11] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.