Towards High-Performance Solid-State-LiDAR-Inertial Odometry and Mapping

Kailai Li, Meng Li, and Uwe D. Hanebeck
correlations between LiDAR and IMU measurements might be largely discarded. To guarantee high odometry accuracy, the system requires nine-axis IMU readings of high frequency (500 Hz as used in [17]) to de-skew the point cloud and initialize LiDAR odometry. Fusion with additional sensor modalities

[Fig. 2 (system overview) residue: Livox Horizon, preprocessing, filtered point cloud, deskew point cloud, IMU pre-integration, keyframe selection, sliding-window optimization, optimize local factor graph, build global pose graph, detect loop closure, global map.]

[Scan-pattern figure panels: (A) 0–25 ms, (B) 25–50 ms; Fig. 4 panels A, B, C.]
Fig. 4: Illustration of proposed feature extraction method.
they are extracted as edge features (Fig. 4-C and Alg. 1, lines 16–17). Otherwise, no feature is extracted from the current patch (Fig. 4-B).
We show an example of extracted features in one frame of a Livox Horizon scan in Fig. 5. Note that the proposed feature extraction algorithm is performed purely in the time domain of every sweep, given the time stamps of the perceived point array. This is radically different from the approach in [19], which computes the local smoothness of point candidates selected through spatial retrieval. Also, for traditional spinning LiDARs, common point cloud segmentation or feature extraction methods [10], [11], [20] are performed in (transformed) spatial domains, e.g., extracting features w.r.t. the horizontal scan angles. For the Livox Horizon, the number of extracted plane features is usually much larger than that of edge features due to its uniform scan coverage (statistics are given in Sec. V-D). We further associate each feature point with its corresponding edge's direction vector or plane's normal vector to represent the local geometry. It will be further exploited for weighting the LiDAR residual term in the backend fusion module. Here, we also have $\mathbf{p}^{w} = \mathbf{R}(\mathbf{q})\,\mathbf{p}^{l} + \mathbf{t}$.

Fig. 5: Feature extraction for a point cloud (blue) from the Horizon. (A) Extracted plane features (red). (B) Extracted edge (green) and plane (red) features.
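To make the time-domain processing described above concrete, the following is a minimal, hypothetical C++ sketch of patch-wise classification over the time-ordered point array. It is not the paper's Alg. 1: the patch size, the smoothness test (mean deviation from the patch chord), the kink test (angle between the two half-patch directions), and all thresholds are illustrative assumptions only.

// Hypothetical sketch of time-domain, patch-wise feature classification.
// Patch size and thresholds are placeholders, not the values of Alg. 1.
#include <Eigen/Dense>
#include <vector>

struct TimedPoint { Eigen::Vector3d p; double t; };  // 3D point + time stamp
enum class FeatureType { kNone, kPlane, kEdge };

FeatureType ClassifyPatch(const std::vector<TimedPoint>& patch) {
  const size_t n = patch.size();
  if (n < 3) return FeatureType::kNone;

  // Chord connecting the first and last point of the temporally consecutive patch.
  const Eigen::Vector3d chord = patch.back().p - patch.front().p;
  if (chord.norm() < 1e-6) return FeatureType::kNone;
  const Eigen::Vector3d dir = chord.normalized();

  // Mean distance of the interior points to the chord: small -> locally smooth patch.
  double mean_dev = 0.0;
  for (size_t i = 1; i + 1 < n; ++i) {
    const Eigen::Vector3d d = patch[i].p - patch.front().p;
    mean_dev += (d - d.dot(dir) * dir).norm();
  }
  mean_dev /= static_cast<double>(n - 2);

  // Angle between the two half-patch directions: a sharp kink indicates an edge.
  const Eigen::Vector3d front_dir = (patch[n / 2].p - patch.front().p).normalized();
  const Eigen::Vector3d back_dir  = (patch.back().p - patch[n / 2].p).normalized();
  const double cos_angle = front_dir.dot(back_dir);

  if (mean_dev < 0.02) return FeatureType::kPlane;  // placeholder threshold
  if (cos_angle < 0.7) return FeatureType::kEdge;   // placeholder threshold
  return FeatureType::kNone;
}

In a full pipeline, each accepted feature would additionally carry the associated plane normal or edge direction that, as described above, is later used for weighting the LiDAR residuals.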
inserted into a global pose graph with only keyframe features, to find keyframe nodes that are spatially close but with enough temporal distance (e.g., 20 keyframes). An ICP is performed between the current feature scan and the candidate feature map, from which a fitting score is computed for loop closure detection. Once confirmed, a global pose graph optimization is invoked by imposing the constraint from the ICP. As depicted in Fig. 2, the keyframe local map M̆w at the backend is then updated by the corrected poses to further incorporate the LiDAR constraint.
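As a rough illustration of this loop-closure step, the sketch below uses GTSAM (which the system employs for pose-graph optimization, cf. Sec. V-A) to impose the ICP result as a between-factor and re-optimize the keyframe pose graph. The keys, noise magnitudes, and the choice of a batch Levenberg-Marquardt solver are assumptions made for illustration; they are not taken from the LiLi-OM implementation.

// Hypothetical loop-closure update of a keyframe pose graph with GTSAM.
// Keys, noise values, and the optimizer choice are illustrative only.
#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

gtsam::Values CloseLoop(gtsam::NonlinearFactorGraph graph,  // existing odometry factors
                        gtsam::Values initial,              // current keyframe poses
                        size_t loop_old, size_t loop_new,   // matched keyframe indices
                        const gtsam::Pose3& icp_relative) { // ICP result: old -> new
  // Anchor the first keyframe so that the graph is not gauge-free.
  auto prior_noise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 1e-4, 1e-4, 1e-4, 1e-4, 1e-4, 1e-4).finished());
  graph.add(gtsam::PriorFactor<gtsam::Pose3>(0, initial.at<gtsam::Pose3>(0), prior_noise));

  // Impose the loop-closure constraint obtained from the ICP fit.
  auto loop_noise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 0.05, 0.05, 0.05, 0.1, 0.1, 0.1).finished());
  graph.add(gtsam::BetweenFactor<gtsam::Pose3>(loop_old, loop_new, icp_relative, loop_noise));

  // Re-optimize; the corrected poses then update the keyframe local map at the backend.
  return gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
}

An incremental solver such as iSAM2 [18] could equally be used here to avoid re-solving the whole graph at every detected loop closure.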
V. EVALUATION

A. Implementation and evaluation setup

We implement the proposed LiDAR-inertial odometry and mapping system in C++ using ROS [23]. The three modules shown in Fig. 2 are structured as three individual nodes. The nonlinear optimization problem in (2) is solved using the Ceres Solver [24]. We use GTSAM [25] to perform factor graph optimization for rectifying the global pose graph at loop closures. Our system is developed for the Livox Horizon under the name LiLi-OM. It is, however, also applicable to conventional spinning LiDARs thanks to its generic backend fusion. Thus, two versions of the system are evaluated: (1) the original LiLi-OM for the Livox Horizon with the proposed feature extraction approach, and (2) its variant LiLi-OM* using the preprocessing module of [10] for spinning LiDARs. Evaluations are conducted on public data sets (recorded using conventional LiDARs) and on our own experiments (including data sets from the Livox Horizon). All LiDAR frame rates are 10 Hz.
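As a concrete, hypothetical example of one LiDAR residual term entering the sliding-window problem solved with Ceres, the functor below implements a point-to-plane residual based on the transform p^w = R(q) p^l + t and the plane normal associated with each plane feature as described above. The state layout (Hamilton quaternion plus translation), the robust loss, and the parameterization shown in the usage comment are assumptions for illustration, not the exact cost formulation of (2).

// Hypothetical point-to-plane residual for the sliding-window optimization,
// using Ceres automatic differentiation (Ceres 1.x/2.0 API). The quaternion
// follows the Ceres (w, x, y, z) convention.
#include <ceres/ceres.h>
#include <ceres/rotation.h>

struct PlaneFactor {
  PlaneFactor(const double* p_l, const double* n_w, double d_w)
      : p_l_{p_l[0], p_l[1], p_l[2]}, n_w_{n_w[0], n_w[1], n_w[2]}, d_w_(d_w) {}

  template <typename T>
  bool operator()(const T* q, const T* t, T* residual) const {
    // p_w = R(q) * p_l + t: transform the feature point into the world frame.
    T p_l[3] = {T(p_l_[0]), T(p_l_[1]), T(p_l_[2])};
    T p_w[3];
    ceres::QuaternionRotatePoint(q, p_l, p_w);
    for (int i = 0; i < 3; ++i) p_w[i] += t[i];
    // Signed distance to the associated plane (unit normal n_w, offset d_w).
    residual[0] = T(n_w_[0]) * p_w[0] + T(n_w_[1]) * p_w[1] +
                  T(n_w_[2]) * p_w[2] + T(d_w_);
    return true;
  }

  double p_l_[3], n_w_[3], d_w_;
};

// Usage sketch: one residual block per plane-feature correspondence.
//   ceres::Problem problem;
//   problem.AddParameterBlock(q, 4, new ceres::QuaternionParameterization());
//   problem.AddResidualBlock(
//       new ceres::AutoDiffCostFunction<PlaneFactor, 1, 4, 3>(
//           new PlaneFactor(p_l, n_w, d_w)),
//       new ceres::HuberLoss(0.1), q, t);
//   ceres::Solver::Options options;
//   ceres::Solver::Summary summary;
//   ceres::Solve(options, &problem, &summary);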
B. Public data set

We deploy LiLi-OM* to compare with competing state-of-the-art systems. These include works on (1) LiDAR odometry: A-LOAM⁶ (the open-source version of LOAM [10]) and LeGO-LOAM [11] (shortened as LeGO), and (2) LiDAR-inertial odometry: LIO-mapping (shortened as LIOM) [15], LINS [16], and LIO-SAM [17]. For evaluation, we use the EU long-term data set (UTBM), which provides two long urban navigation sequences recorded by a Velodyne HDL-32E and a six-axis IMU (100 Hz) [26]. LIO-SAM requires nine-axis IMU measurements. Thus, we additionally include the UrbanLoco and UrbanNav data sets [5], recorded using an HDL-32E and an Xsens MTi-10 IMU (nine-axis, 100 Hz). The RMSE of the absolute position error (APE) is computed for the final estimated trajectory against the ground truth using the script in [27].
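For reference, the APE (RMSE) metric used below is the root-mean-square of the translational error over the aligned trajectory, a common formulation consistent with the evaluation tooling in [27]:

\mathrm{APE}_{\mathrm{RMSE}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \bigl\lVert \mathbf{t}_{\mathrm{est},i} - \mathbf{t}_{\mathrm{gt},i} \bigr\rVert^{2}} ,

where \mathbf{t}_{\mathrm{est},i} and \mathbf{t}_{\mathrm{gt},i} denote the estimated and ground-truth positions of the i-th pose after trajectory alignment.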
Shown in Tab. I⁷, the proposed LiLi-OM* achieves the best tracking accuracy (bold) for all sequences in real time. LIO-SAM [17] requires nine-axis IMU readings for de-skewing and frontend odometry, and is thereby not applicable to UTBM. For the remaining sequences, LIO-SAM still shows worse tracking accuracy than LiLi-OM, though it additionally exploits orientation measurements from a magnetometer. This mainly results from the unified fusion scheme of LiLi-OM, where LiDAR and inertial measurements are directly fused. LIOM fails on the UrbanNav data sets (denoted as ✗) and shows large drift on UTBM-1. It also cannot run in real time with the recommended configurations. LOAM delivers large tracking errors on UTBM as its implementation limits the iteration number in scan-matching for real-time performance.

TABLE I: APE (RMSE) in meters on public data sets (–: not applicable; ✗: failure)

dataset   LOAM     LeGO    LIOM     LINS    LIO-SAM   LiLi-OM*
UTBM-1    479.51   17.12   468.75   16.90   –         8.61
UTBM-2    819.95   6.46    12.95    9.31    –         6.45
UL-1      2.39     2.22    2.53     2.27    2.54      1.59
UL-2      2.58     2.30    2.00     2.99    2.50      1.20
UN-1      11.20    2.70    ✗        2.19    2.28      1.08
UN-2      12.70    4.15    ✗        4.80    5.31      3.24

⁶ https://github.com/HKUST-Aerial-Robotics/A-LOAM
⁷ Data set abbreviations: UTBM-1: UTBM-20180719, UTBM-2: UTBM-20180418-RA, UL-1: UrbanLoco-HK-20190426-1, UL-2: UrbanLoco-HK-20190426-2, UN-1: UrbanNav-HK-20190314, UN-2: UrbanNav-HK-20190428.

C. Experiment

To further test LiLi-OM in real-world scenarios, we set up a sensor suite composed of a Livox Horizon and an Xsens MTi-670 IMU. The total cost is about 1700 Euros (Q1, 2020), which is much less than conventional LiDAR-inertial setups.

1) FR-IOSB data set: Shown in Fig. 7-(A), a mobile platform is instrumented with the proposed Livox-Xsens suite. For comparison with high-end mechanical spinning LiDARs, we set up a Velodyne HDL-64E onboard and synchronize it with an Xsens MTi-G-700 IMU (six-axis, 150 Hz). Three sequences were recorded at the Fraunhofer IOSB campus shown in Fig. 7-(B): (1) Short for a short path in structured scenes, (2) Tree recorded in bushes, and (3) Long for a long trajectory.

[Fig. 7: Experimental setup for FR-IOSB data set. Panels (A) and (B); figure labels: Livox-Xsens suite, Xsens MTi-670.]

Both LiLi-OM and LiLi-OM* are tested. For comparison, we run LOAM, LeGO, and Livox-Horizon-LOAM (shortened as LiHo)⁸, a LOAM variant adapted to the Livox Horizon with point clouds de-skewed by the IMU. Tab. II shows the superior tracking accuracy (bold) of the proposed systems, with our low-cost hardware setup performing equally well as the high-end one in the same scenario. We show (partial) reconstructed maps of sequence Long in Fig. 8, where LiLi-OM* delivers superior mapping quality using the proposed sensor fusion scheme.

TABLE II: End-to-end position error in meters on FR-IOSB (LOAM, LeGO, and LiLi-OM* run on the Velodyne HDL-64E; LiHo and LiLi-OM run on the Livox Horizon)

dataset   length    speed      LOAM   LeGO    LiLi-OM*   LiHo   LiLi-OM
Short     0.49 km   2.15 m/s   0.78   0.25    0.34       5.04   0.25
Tree      0.36 km   1.12 m/s   0.21   78.22   < 0.1      0.13   < 0.1
Long      1.10 km   1.71 m/s   0.43   0.82    < 0.1      3.91   0.34

⁸ https://github.com/Livox-SDK/livox_horizon_loam
window optimization. Given the optimized keyframe states, regular-frame poses are obtained via factor graph optimization.
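One plausible way to realize this step, sketched below with GTSAM, is to anchor the optimized keyframe poses with tight priors and to link the regular frames in between via relative-motion factors (e.g., obtained from IMU propagation). The factor design and noise values here are assumptions for illustration and not necessarily LiLi-OM's exact formulation.

// Hypothetical recovery of regular-frame poses between two optimized keyframes:
// keyframe poses enter as tight priors, consecutive frames are linked by
// relative-motion factors. All noise values are placeholders.
#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>
#include <vector>

gtsam::Values RecoverRegularFrames(
    const gtsam::Pose3& keyframe_a, const gtsam::Pose3& keyframe_b,
    const std::vector<gtsam::Pose3>& relative_motion) {  // motion frame i -> i+1
  gtsam::NonlinearFactorGraph graph;
  gtsam::Values initial;
  const size_t last = relative_motion.size();  // frame 0 = keyframe_a, frame last = keyframe_b

  auto tight = gtsam::noiseModel::Isotropic::Sigma(6, 1e-4);
  auto loose = gtsam::noiseModel::Isotropic::Sigma(6, 1e-1);
  graph.add(gtsam::PriorFactor<gtsam::Pose3>(0, keyframe_a, tight));
  graph.add(gtsam::PriorFactor<gtsam::Pose3>(last, keyframe_b, tight));

  gtsam::Pose3 guess = keyframe_a;
  initial.insert(0, guess);
  for (size_t i = 0; i < last; ++i) {
    graph.add(gtsam::BetweenFactor<gtsam::Pose3>(i, i + 1, relative_motion[i], loose));
    guess = guess.compose(relative_motion[i]);
    initial.insert(i + 1, guess);
  }
  return gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
}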
The proposed LiDAR-inertial odometry and mapping system is universally applicable to both conventional LiDARs and solid-state LiDARs of small FoV. For the latter use case, a novel feature extraction method is designed for the irregular and unique scan pattern of the Livox Horizon, a newly released spinning-free, solid-state LiDAR with a much lower price than conventional 3D LiDARs. We conduct evaluations on both public data sets of conventional LiDARs and experiments using the Livox Horizon. Results show that the proposed system is real-time capable and delivers superior tracking and mapping accuracy over state-of-the-art LiDAR/LiDAR-inertial odometry systems. The proposed system, LiLi-OM, is featured as a cost-effective solution for high-performance LiDAR-inertial odometry and mapping using solid-state LiDAR.

There is still much potential to exploit for the proposed system. The deployed Livox-Xsens suite is lightweight, and LiLi-OM is developed for universal egomotion estimation (not only for planar motion as in [11]). Thus, it should, for instance, be tested onboard unmanned aerial vehicles in applications such as autonomous earth observation or environmental modeling coping with aggressive six-DoF egomotion. For large-scale odometry and mapping with limited computational resources, advanced map representations can be employed to improve memory as well as runtime efficiency. Potential options include volumetric mapping using TSDFs (Truncated Signed Distance Fields) [3] or mapping with geometric primitives (especially in man-made environments [28]).
ACKNOWLEDGMENT

We would like to thank Thomas Emter from Fraunhofer IOSB for providing the robot platform and assistance in recording the FR-IOSB data set.
REFERENCES

[1] J. Engel, J. Sturm, and D. Cremers, "Camera-Based Navigation of a Low-Cost Quadrocopter," in Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), Vilamoura, Portugal, Oct. 2012, pp. 2815–2821.
[2] S. Bultmann, K. Li, and U. D. Hanebeck, "Stereo Visual SLAM Based on Unscented Dual Quaternion Filtering," in Proceedings of the 22nd International Conference on Information Fusion (Fusion 2019), Ottawa, Canada, July 2019, pp. 1–8.
[3] V. Reijgwart, A. Millane, H. Oleynikova, R. Siegwart, C. Cadena, and J. Nieto, "Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps," IEEE Robotics and Automation Letters, vol. 5, no. 1, pp. 227–234, 2020.
[4] O. Kähler, V. A. Prisacariu, J. P. C. Valentin, and D. W. Murray, "Hierarchical Voxel Block Hashing for Efficient Integration of Depth Images," IEEE Robotics and Automation Letters, vol. 1, no. 1, pp. 192–197, 2016.
[5] W. Wen, Y. Zhou, G. Zhang, S. Fahandezh-Saadi, X. Bai, W. Zhan, M. Tomizuka, and L.-T. Hsu, "UrbanLoco: A Full Sensor Suite Dataset for Mapping and Localization in Urban Scenes," in Proceedings of the 2020 International Conference on Robotics and Automation (ICRA 2020), Paris, France, May 2020, pp. 2310–2316.
[6] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision Meets Robotics: The KITTI Dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[7] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, "Comparing ICP Variants on Real-World Data Sets," Autonomous Robots, vol. 34, no. 3, pp. 133–148, 2013.
[8] A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," in Proceedings of the 2009 Robotics: Science and Systems (RSS 2009), vol. 2, no. 4, Edinburgh, UK, June 2009.
[9] Y. Chen and G. Medioni, "Object Modelling by Registration of Multiple Range Images," Image and Vision Computing, vol. 10, no. 3, pp. 145–155, 1992.
[10] J. Zhang and S. Singh, "LOAM: Lidar Odometry and Mapping in Real-Time," in Proceedings of the 2014 Robotics: Science and Systems (RSS 2014), vol. 2, no. 9, Berkeley, California, USA, July 2014.
[11] T. Shan and B. Englot, "LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain," in Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, Oct. 2018, pp. 4758–4765.
[12] I. Bogoslavskyi and C. Stachniss, "Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation," in Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Daejeon, Korea, Oct. 2016, pp. 163–169.
[13] J. Tang, Y. Chen, X. Niu, L. Wang, L. Chen, J. Liu, C. Shi, and J. Hyyppä, "LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments," Sensors, vol. 15, no. 7, pp. 16710–16728, 2015.
[14] C. Le Gentil, T. Vidal-Calleja, and S. Huang, "IN2LAAMA: Inertial Lidar Localization Autocalibration and Mapping," IEEE Transactions on Robotics, vol. 37, no. 1, pp. 275–290, 2021.
[15] H. Ye, Y. Chen, and M. Liu, "Tightly Coupled 3D Lidar Inertial Odometry and Mapping," in Proceedings of the 2019 International Conference on Robotics and Automation (ICRA 2019), Montreal, Canada, May 2019, pp. 3144–3150.
[16] C. Qin, H. Ye, C. E. Pranata, J. Han, S. Zhang, and M. Liu, "LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation," in Proceedings of the 2020 International Conference on Robotics and Automation (ICRA 2020), Paris, France, May 2020, pp. 8899–8906.
[17] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping," in Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), Las Vegas, Nevada, USA, Oct. 2020, pp. 5135–5142.
[18] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. J. Leonard, and F. Dellaert, "iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree," The International Journal of Robotics Research, vol. 31, no. 2, pp. 216–235, 2012.
[19] J. Lin and F. Zhang, "Loam Livox: A Fast, Robust, High-Precision LiDAR Odometry and Mapping Package for LiDARs of Small FoV," in Proceedings of the 2020 International Conference on Robotics and Automation (ICRA 2020), Paris, France, May 2020, pp. 3126–3131.
[20] B. Wu, A. Wan, X. Yue, and K. Keutzer, "SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud," in Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 2018, pp. 1887–1893.
[21] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, "Keyframe-Based Visual–Inertial Odometry Using Nonlinear Optimization," The International Journal of Robotics Research, vol. 34, no. 3, pp. 314–334, 2015.
[22] T. Qin, P. Li, and S. Shen, "VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
[23] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, "ROS: An Open-source Robot Operating System," ICRA Workshop on Open Source Software, vol. 3, no. 3.2, pp. 5–10, 2009.
[24] S. Agarwal, K. Mierle, and Others, "Ceres Solver," http://ceres-solver.org, 2016.
[25] F. Dellaert, "Factor Graphs and GTSAM: A Hands-on Introduction," Georgia Institute of Technology, Atlanta, GA, USA, Tech. Rep. GT-RIM-CP&R-2012-002, Sep. 2012.
[26] Z. Yan, L. Sun, T. Krajnik, and Y. Ruichek, "EU Long-term Dataset with Multiple Sensors for Autonomous Driving," in Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), Las Vegas, Nevada, USA, Oct. 2020, pp. 10697–10704.
[27] M. Grupp, "evo: Python Package for the Evaluation of Odometry and SLAM," https://github.com/MichaelGrupp/evo, 2017.
[28] H. Möls, K. Li, and U. D. Hanebeck, "Highly Parallelizable Plane Extraction for Organized Point Clouds Using Spherical Convex Hulls," in Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA 2020), Paris, France, May 2020, pp. 7920–7926.