
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

October 25-29, 2020, Las Vegas, NV, USA (Virtual)

LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping

Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, and Daniela Rus

Abstract— We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real-time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments.

I. INTRODUCTION

State estimation, localization and mapping are fundamental prerequisites for a successful intelligent mobile robot, required for feedback control, obstacle avoidance, and planning, among many other capabilities. Using vision-based and lidar-based sensing, great efforts have been devoted to achieving high-performance, real-time simultaneous localization and mapping (SLAM) that can support a mobile robot's six degree-of-freedom state estimation. Vision-based methods typically use a monocular or stereo camera and triangulate features across successive images to determine the camera motion. Although vision-based methods are especially suitable for place recognition, their sensitivity to initialization, illumination, and range makes them unreliable when they alone are used to support an autonomous navigation system. On the other hand, lidar-based methods are largely invariant to illumination change. Especially with the recent availability of long-range, high-resolution 3D lidar, such as the Velodyne VLS-128 and Ouster OS1-128, lidar becomes more suitable for directly capturing the fine details of an environment in 3D space. Therefore, this paper focuses on lidar-based state estimation and mapping methods.

Many lidar-based state estimation and mapping methods have been proposed in the last two decades. Among them, the lidar odometry and mapping (LOAM) method proposed in [1] for low-drift and real-time state estimation and mapping is among the most widely used. LOAM, which uses a lidar and an inertial measurement unit (IMU), achieves state-of-the-art performance and has been ranked as the top lidar-based method since its release on the KITTI odometry benchmark site [2]. Despite its success, LOAM presents some limitations: by saving its data in a global voxel map, it is often difficult to perform loop closure detection and incorporate other absolute measurements, e.g., GPS, for pose correction. Its online optimization process becomes less efficient when this voxel map becomes dense in a feature-rich environment. LOAM also suffers from drift in large-scale tests, as it is a scan-matching based method at its core.

In this paper, we propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, to address the aforementioned problems. We assume a nonlinear motion model for point cloud de-skew, estimating the sensor motion during a lidar scan using raw IMU measurements. In addition to de-skewing point clouds, the estimated motion serves as an initial guess for lidar odometry optimization. The obtained lidar odometry solution is then used to estimate the bias of the IMU in the factor graph. By introducing a global factor graph for robot trajectory estimation, we can efficiently perform sensor fusion using lidar and IMU measurements, incorporate place recognition among robot poses, and introduce absolute measurements, such as GPS positioning and compass heading, when they are available. This collection of factors from various sources is used for joint optimization of the graph. Additionally, we marginalize old lidar scans for pose optimization, rather than matching scans to a global map like LOAM. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The main contributions of our work can be summarized as follows:

• A tightly-coupled lidar inertial odometry framework built atop a factor graph, that is suitable for multi-sensor fusion and global optimization.
• An efficient, local sliding window-based scan-matching approach that enables real-time performance by registering selectively chosen new keyframes to a fixed-size set of prior sub-keyframes.
• The proposed framework is extensively validated with tests across various scales, vehicles, and environments.

T. Shan, D. Meyers, W. Wang, and C. Ratti are with the Department of Urban Studies and Planning, Massachusetts Institute of Technology, USA, {shant, drewm, wweiwang, ratti}@mit.edu.
B. Englot is with the Department of Mechanical Engineering, Stevens Institute of Technology, USA, [email protected].
T. Shan, W. Wang, and D. Rus are with the Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA, {shant, wweiwang, rus}@mit.edu.


Fig. 1: The system structure of LIO-SAM. The system receives input from a 3D lidar, an IMU and optionally a GPS. Four types of factors are introduced to construct the factor graph: (a) IMU preintegration factor, (b) lidar odometry factor, (c) GPS factor, and (d) loop closure factor. The generation of these factors is discussed in Section III.

II. RELATED WORK

Lidar odometry is typically performed by finding the relative transformation between two consecutive frames using scan-matching methods such as ICP [3] and GICP [4]. Instead of matching a full point cloud, feature-based matching methods have become a popular alternative due to their computational efficiency. For example, in [5], a plane-based registration approach is proposed for real-time lidar odometry. Assuming operations in a structured environment, it extracts planes from the point clouds and matches them by solving a least-squares problem. A collar line-based method is proposed in [6] for odometry estimation. In this method, line segments are randomly generated from the original point cloud and used later for registration. However, a scan's point cloud is often skewed because of the rotation mechanism of modern 3D lidar, and sensor motion. Solely using lidar for pose estimation is not ideal since registration using skewed point clouds or features will eventually cause large drift.

Therefore, lidar is typically used in conjunction with other sensors, such as IMU and GPS, for state estimation and mapping. Such a design scheme, utilizing sensor fusion, can typically be grouped into two categories: loosely-coupled fusion and tightly-coupled fusion. In LOAM [1], IMU is introduced to de-skew the lidar scan and give a motion prior for scan-matching. However, the IMU is not involved in the optimization process of the algorithm. Thus LOAM can be classified as a loosely-coupled method. A lightweight and ground-optimized lidar odometry and mapping (LeGO-LOAM) method is proposed in [7] for ground vehicle mapping tasks [8]. Its fusion of IMU measurements is the same as LOAM. A more popular approach for loosely-coupled fusion is using extended Kalman filters (EKF). For example, [9]-[13] integrate measurements from lidar, IMU, and optionally GPS using an EKF in the optimization stage for robot state estimation.

Tightly-coupled systems usually offer improved accuracy and are presently a major focus of ongoing research [14]. In [15], preintegrated IMU measurements are exploited for de-skewing point clouds. A robocentric lidar-inertial state estimator, LINS, is presented in [16]. Specifically designed for ground vehicles, LINS uses an error-state Kalman filter to correct a robot's state estimate recursively in a tightly-coupled manner. A tightly-coupled lidar inertial odometry and mapping framework, LIOM, is introduced in [17]. LIOM, which is the abbreviation for LIO-mapping, jointly optimizes measurements from lidar and IMU and achieves similar or better accuracy when compared with LOAM. Since LIOM is designed to process all the sensor measurements, real-time performance is not achieved - it runs at about 0.6× real-time in our tests.
odometry. Assuming operations in a structured environment,
it extracts planes from the point clouds and matches them by III. L IDAR I NERTIAL O DOMETRY VIA
solving a least-squares problem. A collar line-based method S MOOTHING AND M APPING
is proposed in [6] for odometry estimation. In this method,
line segments are randomly generated from the original point A. System Overview
cloud and used later for registration. However, a scan’s point We first define frames and notation that we use throughout
cloud is often skewed because of the rotation mechanism of the paper. We denote the world frame as W and the robot
modern 3D lidar, and sensor motion. Solely using lidar for body frame as B. We also assume the IMU frame coincides
pose estimation is not ideal since registration using skewed with the robot body frame for convenience. The robot state
point clouds or features will eventually cause large drift. x can be written as:
Therefore, lidar is typically used in conjunction with other T
x = [ RT , pT , vT , bT ] , (1)
sensors, such as IMU and GPS, for state estimation and
mapping. Such a design scheme, utilizing sensor fusion, can where R ∈ SO(3) is the rotation matrix, p ∈ R3 is the
typically be grouped into two categories: loosely-coupled position vector, v is the speed, and b is the IMU bias. The
fusion and tightly-coupled fusion. In LOAM [1], IMU is transformation T ∈ SE(3) from B to W is represented as
introduced to de-skew the lidar scan and give a motion T = [R | p].
prior for scan-matching. However, the IMU is not involved An overview of the proposed system is shown in Figure 1.
in the optimization process of the algorithm. Thus LOAM The system receives sensor data from a 3D lidar, an IMU and
can be classified as a loosely-coupled method. A lightweight optionally a GPS. We seek to estimate the state of the robot
and ground-optimized lidar odometry and mapping (LeGO- and its trajectory using the observations of these sensors. This
LOAM) method is proposed in [7] for ground vehicle map- state estimation problem can be formulated as a maximum a
ping tasks [8]. Its fusion of IMU measurements is the same as posteriori (MAP) problem. We use a factor graph to model
LOAM. A more popular approach for loosely-coupled fusion this problem, as it is better suited to perform inference
is using extended Kalman filters (EKF). For example, [9]- when compared with Bayes nets. With the assumption of a
[13] integrate measurements from lidar, IMU, and optionally Gaussian noise model, the MAP inference for our problem is
GPS using an EKF in the optimization stage for robot state equivalent to solving a nonlinear least-squares problem [18].
estimation. Note that without loss of generality, the proposed system can
Tightly-coupled systems usually offer improved accuracy also incorporate measurements from other sensors, such as
and are presently a major focus of ongoing research [14]. elevation from an altimeter or heading from a compass.
In [15], preintegrated IMU measurements are exploited for We introduce four types of f actors along with one
de-skewing point clouds. A robocentric lidar-inertial state variable type for factor graph construction. This variable,
estimator, LINS, is presented in [16]. Specifically designed representing the robot’s state at a specific time, is attributed
for ground vehicles, LINS uses an error-state Kalman filter to the nodes of the graph. The four types of factors are:
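To make this construction concrete, the following is a minimal sketch of how such a graph could be assembled and optimized incrementally, assuming the GTSAM library's Python bindings (which provide the iSAM2 solver used here). The keys, noise magnitudes, and measurement values are illustrative placeholders, not values from the implementation described in this paper.

```python
# Illustrative sketch (not the authors' implementation): assembling a LIO-SAM-style
# factor graph with GTSAM's Python bindings and updating it incrementally with iSAM2.
# Poses, noise magnitudes, and the GPS reading below are made-up placeholder values.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
isam = gtsam.ISAM2()

# Robot state nodes x_i; only the pose part of the state is shown for brevity.
x0, x1 = gtsam.symbol('x', 0), gtsam.symbol('x', 1)

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-2] * 6))       # rad, m
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-2] * 6))
gps_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([2.0, 2.0, 5.0]))    # m

# Anchor the first state.
graph.add(gtsam.PriorFactorPose3(x0, gtsam.Pose3(), prior_noise))
values.insert(x0, gtsam.Pose3())

# Lidar odometry factor: relative transform between consecutive keyframes,
# obtained from scan-matching (Sec. III-C).
delta = gtsam.Pose3(gtsam.Rot3.Yaw(0.05), gtsam.Point3(1.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(x0, x1, delta, odom_noise))
values.insert(x1, delta)

# GPS factor: absolute position measurement in the local Cartesian frame (Sec. III-D).
graph.add(gtsam.GPSFactor(x1, gtsam.Point3(1.1, -0.1, 0.0), gps_noise))

# A loop closure factor would be another BetweenFactorPose3 linking distant states,
# and an IMU preintegration factor (gtsam.ImuFactor) would additionally constrain
# the velocity and bias variables of consecutive states.

isam.update(graph, values)              # incremental optimization (iSAM2 [19])
estimate = isam.calculateEstimate()
print(estimate.atPose3(x1))
```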


A new robot state node x is added to the graph when the change in robot pose exceeds a user-defined threshold. The factor graph is optimized upon the insertion of a new node using incremental smoothing and mapping with the Bayes tree (iSAM2) [19]. The process for generating these factors is described in the following sections.

B. IMU Preintegration Factor

The measurements of angular velocity and acceleration from an IMU are defined using Eqs. 2 and 3:

ω̂_t = ω_t + b_t^ω + n_t^ω,    (2)
â_t = R_t^BW (a_t − g) + b_t^a + n_t^a,    (3)

where ω̂_t and â_t are the raw IMU measurements in B at time t. ω̂_t and â_t are affected by a slowly varying bias b_t and white noise n_t. R_t^BW is the rotation matrix from W to B. g is the constant gravity vector in W.

We can now use the measurements from the IMU to infer the motion of the robot. The velocity, position and rotation of the robot at time t + ∆t can be computed as follows:

v_{t+∆t} = v_t + g∆t + R_t(â_t − b_t^a − n_t^a)∆t,    (4)
p_{t+∆t} = p_t + v_t∆t + (1/2)g∆t^2 + (1/2)R_t(â_t − b_t^a − n_t^a)∆t^2,    (5)
R_{t+∆t} = R_t exp((ω̂_t − b_t^ω − n_t^ω)∆t),    (6)

where R_t = R_t^WB = (R_t^BW)^T. Here we assume that the angular velocity and the acceleration of B remain constant during the above integration.

We then apply the IMU preintegration method proposed in [20] to obtain the relative body motion between two timesteps. The preintegrated measurements ∆v_ij, ∆p_ij, and ∆R_ij between time i and j can be computed using:

∆v_ij = R_i^T (v_j − v_i − g∆t_ij),    (7)
∆p_ij = R_i^T (p_j − p_i − v_i∆t_ij − (1/2)g∆t_ij^2),    (8)
∆R_ij = R_i^T R_j.    (9)

Due to space limitations, we refer the reader to the description from [20] for the detailed derivation of Eqs. 7 - 9. Besides its efficiency, applying IMU preintegration also naturally gives us one type of constraint for the factor graph - IMU preintegration factors. The IMU bias is jointly optimized alongside the lidar odometry factors in the graph.
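The following numpy sketch illustrates Eqs. 4-9 under the stated constant-rate assumption. The sample data, bias values, and function names are illustrative, and a practical implementation would follow the on-manifold preintegration of [20] (for example, as provided by a factor graph library) rather than this simplified forward integration.

```python
# Illustrative numpy sketch of Eqs. 4-9 (not the authors' code, which follows [20]).
# Inputs are assumed: gyro/accel samples at a fixed rate, and current bias estimates.
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])            # gravity g expressed in W

def integrate_imu(R0, p0, v0, gyro, accel, dt, bg, ba):
    """Forward-integrate raw IMU samples (Eqs. 4-6), assuming piecewise-constant
    angular velocity and acceleration over each interval dt."""
    R, p, v = R0.copy(), p0.copy(), v0.copy()
    for w_hat, a_hat in zip(gyro, accel):
        a_w = R @ (a_hat - ba)                    # bias-corrected specific force in W
        v_new = v + GRAVITY * dt + a_w * dt
        p_new = p + v * dt + 0.5 * GRAVITY * dt**2 + 0.5 * a_w * dt**2
        R = R @ Rotation.from_rotvec((w_hat - bg) * dt).as_matrix()   # exponential map
        p, v = p_new, v_new
    return R, p, v

def preintegrated_deltas(Ri, pi, vi, Rj, pj, vj, dt_ij):
    """Relative motion between times i and j expressed in frame i (Eqs. 7-9)."""
    dv = Ri.T @ (vj - vi - GRAVITY * dt_ij)
    dp = Ri.T @ (pj - pi - vi * dt_ij - 0.5 * GRAVITY * dt_ij**2)
    dR = Ri.T @ Rj
    return dR, dp, dv

# Toy usage with made-up values: 100 samples at 500 Hz, zero biases.
gyro = np.tile([0.0, 0.0, 0.2], (100, 1))        # rad/s
accel = np.tile([0.1, 0.0, 9.81], (100, 1))      # m/s^2 (includes the gravity reaction)
R0, p0, v0 = np.eye(3), np.zeros(3), np.zeros(3)
Rj, pj, vj = integrate_imu(R0, p0, v0, gyro, accel, 0.002, np.zeros(3), np.zeros(3))
print(preintegrated_deltas(R0, p0, v0, Rj, pj, vj, dt_ij=100 * 0.002))
```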
C. Lidar Odometry Factor

When a new lidar scan arrives, we first perform feature extraction. Edge and planar features are extracted by evaluating the roughness of points over a local region. Points with a large roughness value are classified as edge features. Similarly, a planar feature is categorized by a small roughness value. We denote the extracted edge and planar features from a lidar scan at time i as F_i^e and F_i^p respectively. All the features extracted at time i compose a lidar frame F_i, where F_i = {F_i^e, F_i^p}. Note that a lidar frame F is represented in B. A more detailed description of the feature extraction process can be found in [1], or [7] if a range image is used.

Using every lidar frame for computing and adding factors to the graph is computationally intractable, so we adopt the concept of keyframe selection, which is widely used in the visual SLAM field. Using a simple but effective heuristic approach, we select a lidar frame F_{i+1} as a keyframe when the change in robot pose exceeds a user-defined threshold when compared with the previous state x_i. The newly saved keyframe, F_{i+1}, is associated with a new robot state node, x_{i+1}, in the factor graph. The lidar frames between two keyframes are discarded. Adding keyframes in this way not only achieves a balance between map density and memory consumption but also helps maintain a relatively sparse factor graph, which is suitable for real-time nonlinear optimization. In our work, the position and rotation change thresholds for adding a new keyframe are chosen to be 1m and 10°.

Let us assume we wish to add a new state node x_{i+1} to the factor graph. The lidar keyframe that is associated with this state is F_{i+1}. The generation of a lidar odometry factor is described in the following steps:

1) Sub-keyframes for voxel map: We implement a sliding window approach to create a point cloud map containing a fixed number of recent lidar scans. Instead of optimizing the transformation between two consecutive lidar scans, we extract the n most recent keyframes, which we call the sub-keyframes, for estimation. The set of sub-keyframes {F_{i−n}, ..., F_i} is then transformed into frame W using the transformations {T_{i−n}, ..., T_i} associated with them. The transformed sub-keyframes are merged together into a voxel map M_i. Since we extract two types of features in the previous feature extraction step, M_i is composed of two sub-voxel maps that are denoted M_i^e, the edge feature voxel map, and M_i^p, the planar feature voxel map. The lidar frames and voxel maps are related to each other as follows:

M_i = {M_i^e, M_i^p},
where: M_i^e = ′F_i^e ∪ ′F_{i−1}^e ∪ ... ∪ ′F_{i−n}^e,
       M_i^p = ′F_i^p ∪ ′F_{i−1}^p ∪ ... ∪ ′F_{i−n}^p.

′F_i^e and ′F_i^p are the transformed edge and planar features in W. M_i^e and M_i^p are then downsampled to eliminate the duplicated features that fall in the same voxel cell. In this paper, n is chosen to be 25. The downsample resolutions for M_i^e and M_i^p are 0.2m and 0.4m, respectively.


2) Scan-matching: We match a newly obtained lidar frame F_{i+1}, which is also {F_{i+1}^e, F_{i+1}^p}, to M_i via scan-matching. Various scan-matching methods, such as [3] and [4], can be utilized for this purpose. Here we opt to use the method proposed in [1] due to its computational efficiency and robustness in various challenging environments.

We first transform {F_{i+1}^e, F_{i+1}^p} from B to W and obtain {′F_{i+1}^e, ′F_{i+1}^p}. This initial transformation is obtained by using the predicted robot motion, T̃_{i+1}, from the IMU. For each feature in ′F_{i+1}^e or ′F_{i+1}^p, we then find its edge or planar correspondence in M_i^e or M_i^p. For the sake of brevity, the detailed procedures for finding these correspondences are omitted here, but are described thoroughly in [1].

3) Relative transformation: The distance between a feature and its edge or planar patch correspondence can be computed using the following equations:

d_k^e = |(p_{i+1,k}^e − p_{i,u}^e) × (p_{i+1,k}^e − p_{i,v}^e)| / |p_{i,u}^e − p_{i,v}^e|,    (10)

d_k^p = |(p_{i+1,k}^p − p_{i,u}^p) · ((p_{i,u}^p − p_{i,v}^p) × (p_{i,u}^p − p_{i,w}^p))| / |(p_{i,u}^p − p_{i,v}^p) × (p_{i,u}^p − p_{i,w}^p)|,    (11)

where k, u, v, and w are the feature indices in their corresponding sets. For an edge feature p_{i+1,k}^e in ′F_{i+1}^e, p_{i,u}^e and p_{i,v}^e are the points that form the corresponding edge line in M_i^e. For a planar feature p_{i+1,k}^p in ′F_{i+1}^p, p_{i,u}^p, p_{i,v}^p, and p_{i,w}^p form the corresponding planar patch in M_i^p. The Gauss–Newton method is then used to solve for the optimal transformation by minimizing:

min_{T_{i+1}}  Σ_{p_{i+1,k}^e ∈ ′F_{i+1}^e} d_k^e  +  Σ_{p_{i+1,k}^p ∈ ′F_{i+1}^p} d_k^p.

At last, we can obtain the relative transformation ∆T_{i,i+1} between x_i and x_{i+1}, which is the lidar odometry factor linking these two poses:

∆T_{i,i+1} = T_i^T T_{i+1}.    (12)

We note that an alternative approach to obtain ∆T_{i,i+1} is to transform sub-keyframes into the frame of x_i. In other words, we match F_{i+1} to the voxel map that is represented in the frame of x_i. In this way, the real relative transformation ∆T_{i,i+1} can be directly obtained. Because the transformed features ′F_i^e and ′F_i^p can be reused multiple times, we instead opt to use the approach described in Sec. III-C.1 for its computational efficiency.
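The residuals in Eqs. 10 and 11 and the factor composition in Eq. 12 can be sketched as follows. The correspondence search and the Gauss–Newton solve of [1] are omitted, and the function names and toy points are illustrative.

```python
# Numpy sketch of the residuals in Eqs. 10-11 and the factor in Eq. 12
# (illustrative only; the correspondence search and Gauss-Newton solve of [1] are omitted).
import numpy as np

def edge_distance(p, pu, pv):
    """Eq. 10: distance from feature p to the edge line through map points pu, pv."""
    return np.linalg.norm(np.cross(p - pu, p - pv)) / np.linalg.norm(pu - pv)

def planar_distance(p, pu, pv, pw):
    """Eq. 11: distance from feature p to the plane through map points pu, pv, pw."""
    normal = np.cross(pu - pv, pu - pw)
    return abs(np.dot(p - pu, normal)) / np.linalg.norm(normal)

def lidar_odometry_factor(T_i, T_i1):
    """Eq. 12: relative transform delta_T linking x_i and x_{i+1}; for T in SE(3) the
    matrix inverse plays the role that the transpose plays for pure rotations."""
    return np.linalg.inv(T_i) @ T_i1

# Toy usage with made-up points.
print(edge_distance(np.array([0., 1., 0.]), np.zeros(3), np.array([1., 0., 0.])))
print(planar_distance(np.array([0., 0., 2.]), np.zeros(3),
                      np.array([1., 0., 0.]), np.array([0., 1., 0.])))
```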
D. GPS Factor

Though we can obtain reliable state estimation and mapping by utilizing only IMU preintegration and lidar odometry factors, the system still suffers from drift during long-duration navigation tasks. To solve this problem, we can introduce sensors that offer absolute measurements for eliminating drift. Such sensors include an altimeter, compass, and GPS. For the purposes of illustration here, we discuss GPS, as it is widely used in real-world navigation systems.

When we receive GPS measurements, we first transform them to the local Cartesian coordinate frame using the method proposed in [21]. Upon the addition of a new node to the factor graph, we then associate a new GPS factor with this node. If the GPS signal is not hardware-synchronized with the lidar frame, we interpolate among GPS measurements linearly based on the timestamp of the lidar frame.

We note that adding GPS factors constantly when GPS reception is available is not necessary, because the drift of lidar inertial odometry grows very slowly. In practice, we only add a GPS factor when the estimated position covariance is larger than the received GPS position covariance.
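The timestamp interpolation and covariance gating described above can be sketched as follows. The trace-based covariance comparison, the data layout, and the helper names are illustrative assumptions rather than the exact rule used in the implementation.

```python
# Sketch of GPS-factor gating and timestamp interpolation (Sec. III-D); the helper
# names and the trace-based covariance comparison are illustrative assumptions.
import numpy as np

def interpolate_gps(t, gps_a, gps_b):
    """Linearly interpolate two (timestamp, position, covariance) GPS samples
    to the lidar keyframe timestamp t."""
    ta, pa, ca = gps_a
    tb, pb, cb = gps_b
    w = (t - ta) / (tb - ta)
    return (1 - w) * pa + w * pb, (1 - w) * ca + w * cb

def should_add_gps_factor(pose_position_cov, gps_position_cov):
    """Only add a GPS factor when the estimated position is less certain than the
    GPS fix; here uncertainty is summarized by the covariance trace."""
    return np.trace(pose_position_cov) > np.trace(gps_position_cov)

# Toy usage with made-up numbers.
pos, cov = interpolate_gps(
    10.05,
    (10.0, np.array([5.0, 2.0, 0.5]), np.eye(3) * 4.0),
    (10.2, np.array([5.3, 2.1, 0.5]), np.eye(3) * 4.0))
print(pos, should_add_gps_factor(np.eye(3) * 9.0, cov))
```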
E. Loop Closure Factor

Thanks to the utilization of a factor graph, loop closures can also be seamlessly incorporated into the proposed system, as opposed to LOAM and LIOM. For the purposes of illustration, we describe and implement a naive but effective Euclidean distance-based loop closure detection approach. We also note that our proposed framework is compatible with other methods for loop closure detection, for example, [22] and [23], which generate a point cloud descriptor and use it for place recognition.

When a new state x_{i+1} is added to the factor graph, we first search the graph and find the prior states that are close to x_{i+1} in Euclidean space. As is shown in Fig. 1, for example, x_3 is one of the returned candidates. We then try to match F_{i+1} to the sub-keyframes {F_{3−m}, ..., F_3, ..., F_{3+m}} using scan-matching. Note that F_{i+1} and the past sub-keyframes are first transformed into W before scan-matching. We obtain the relative transformation ∆T_{3,i+1} and add it as a loop closure factor to the graph. Throughout this paper, we choose the index m to be 12, and the search distance for loop closures is set to be 15m from a new state x_{i+1}.

In practice, we find adding loop closure factors is especially useful for correcting the drift in a robot's altitude, when GPS is the only absolute sensor available. This is because the elevation measurement from GPS is very inaccurate - giving rise to altitude errors approaching 100m in our tests, in the absence of loop closures.
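A sketch of the Euclidean-distance candidate search and submap matching follows. The KD-tree search, the exclusion of recent keyframes, and the scan_match callback are illustrative assumptions built around the procedure described above; the paper's defaults (15 m search radius, m = 12) are used.

```python
# Sketch of the Euclidean-distance loop closure search described above.
# The KD-tree search, the exclusion of recent keyframes, and the scan_match()
# placeholder are assumptions; the paper's defaults (15 m radius, m = 12) are used.
import numpy as np
from scipy.spatial import cKDTree

def find_loop_candidate(keyframe_positions, i_new, radius=15.0, min_gap=50):
    """Return the index of an old keyframe within `radius` meters of the new one,
    skipping the `min_gap` most recent keyframes to avoid trivial matches."""
    old = keyframe_positions[: max(0, i_new - min_gap)]
    if len(old) == 0:
        return None
    idx = cKDTree(old).query_ball_point(keyframe_positions[i_new], r=radius)
    return min(idx) if idx else None

def loop_closure_factor(keyframes, poses, i_new, j_old, scan_match, m=12):
    """Match keyframe i_new against the submap {F_{j-m}, ..., F_{j+m}} expressed in W
    and return (j_old, i_new, delta_T) for insertion as a loop closure factor."""
    lo, hi = max(0, j_old - m), min(len(keyframes), j_old + m + 1)
    submap = np.vstack([kf @ T[:3, :3].T + T[:3, 3]       # transform each frame to W
                        for kf, T in zip(keyframes[lo:hi], poses[lo:hi])])
    delta_T = scan_match(keyframes[i_new], submap, initial_guess=poses[i_new])
    return j_old, i_new, delta_T
```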


IV. EXPERIMENTS

We now describe a series of experiments to qualitatively and quantitatively analyze our proposed framework. The sensor suite used in this paper includes a Velodyne VLP-16 lidar, a MicroStrain 3DM-GX5-25 IMU, and a Reach M GPS. For validation, we collected 5 different datasets across various scales, platforms and environments. These datasets are referred to as Rotation, Walking, Campus, Park and Amsterdam, respectively. The sensor mounting platforms are shown in Fig. 2. The first three datasets were collected using a custom-built handheld device on the MIT campus. The Park dataset was collected in a park covered by vegetation, using an unmanned ground vehicle (UGV) - the Clearpath Jackal. The last dataset, Amsterdam, was collected by mounting the sensors on a boat and cruising in the canals of Amsterdam. The details of these datasets are shown in Table I.

Fig. 2: Datasets are collected on 3 platforms: (a) a custom-built handheld device, (b) an unmanned ground vehicle - Clearpath Jackal, (c) an electric boat - Duffy 21.

TABLE I: Dataset details

Dataset      Scans     Elevation change (m)   Trajectory length (m)   Max rotation speed (°/s)
Rotation     582       0                      0                       213.9
Walking      6502      0.3                    801                     133.7
Campus       9865      1.0                    1437                    124.8
Park         24691     19.0                   2898                    217.4
Amsterdam    107656    0                      19065                   17.2

We compare the proposed LIO-SAM framework with LOAM and LIOM. In all the experiments, LOAM and LIO-SAM are forced to run in real-time. LIOM, on the other hand, is given infinite time to process every sensor measurement. All the methods are implemented in C++ and executed on a laptop equipped with an Intel i7-10710U CPU using the robot operating system (ROS) [24] in Ubuntu Linux. We note that only the CPU is used for computation, without parallel computing enabled. Our implementation of LIO-SAM is freely available on Github1. Supplementary details of the experiments performed, including complete visualizations of all tests, can be found at the link below2.

1 https://github.com/TixiaoShan/LIO-SAM
2 https://youtu.be/A0H8CoORZJU

Fig. 3: Mapping results of LOAM and LIO-SAM in the Rotation test: (a) test environment, (b) LOAM, (c) LIO-SAM. LIOM fails to produce meaningful results.

A. Rotation Dataset

In this test, we focus on evaluating the robustness of our framework when only IMU preintegration and lidar odometry factors are added to the factor graph. The Rotation dataset is collected by a user holding the sensor suite and performing a series of aggressive rotational maneuvers while standing still. The maximum rotational speed encountered in this test is 133.7 °/s. The test environment, which is populated with structures, is shown in Fig. 3(a). The maps obtained from LOAM and LIO-SAM are shown in Figs. 3(b) and (c) respectively. Because LIOM uses the same initialization pipeline from [25], it inherits the same initialization sensitivity of visual-inertial SLAM and is not able to initialize properly using this dataset. Due to its failure to produce meaningful results, the map of LIOM is not shown. As is shown, the map of LIO-SAM preserves more fine structural details of the environment compared with the map of LOAM. This is because LIO-SAM is able to register each lidar frame precisely in SO(3), even when the robot undergoes rapid rotation.

B. Walking Dataset

This test is designed to evaluate the performance of our method when the system undergoes aggressive translations and rotations in SE(3). The maximum translational and rotational speed encountered in this dataset is 1.8 m/s and 213.9 °/s respectively. During the data gathering, the user holds the sensor suite shown in Fig. 2(a) and walks quickly across the MIT campus (Fig. 4(a)). In this test, the map of LOAM, shown in Fig. 4(b), diverges at multiple locations when aggressive rotation is encountered. LIOM outperforms LOAM in this test. However, its map, shown in Fig. 4(c), still diverges slightly in various locations and consists of numerous blurry structures. Because LIOM is designed to process all sensor measurements, it only runs at 0.56× real-time while other methods are running in real-time. Finally, LIO-SAM outperforms both methods and produces a map that is consistent with the available Google Earth imagery.

Fig. 4: Mapping results of LOAM, LIOM, and LIO-SAM using the Walking dataset: (a) Google Earth imagery, (b) LOAM, (c) LIOM, (d) LIO-SAM. The map of LOAM in (b) diverges multiple times when aggressive rotation is encountered. LIOM outperforms LOAM. However, its map shows numerous blurry structures due to inaccurate point cloud registration. LIO-SAM produces a map that is consistent with the Google Earth imagery, without using GPS.

C. Campus Dataset

TABLE II: End-to-end translation error (meters)

Dataset      LOAM     LIOM    LIO-odom   LIO-GPS   LIO-SAM
Campus       192.43   Fail    9.44       6.87      0.12
Park         121.74   34.60   36.36      2.93      0.04
Amsterdam    Fail     Fail    Fail       1.21      0.17

This test is designed to show the benefits of introducing GPS and loop closure factors. In order to do this, we purposely disable the insertion of GPS and loop closure factors into the graph. When both GPS and loop closure factors are disabled, our method is referred to as LIO-odom, which only utilizes IMU preintegration and lidar odometry factors. When GPS factors are used, our method is referred to as LIO-GPS, which uses IMU preintegration, lidar odometry, and GPS factors for graph construction. LIO-SAM uses all factors when they are available.

To gather this dataset, the user walks around the MIT campus using the handheld device and returns to the same position. Because of the numerous buildings and trees in the mapping area, GPS reception is rarely available and inaccurate most of the time. After filtering out the inconsistent GPS measurements, the regions where GPS is available are shown


in Fig. 5(a) as green segments. These regions correspond to the few areas that are not surrounded by buildings or trees.

The estimated trajectories of LOAM, LIO-odom, LIO-GPS, and LIO-SAM are shown in Fig. 5(a). The results of LIOM are not shown due to its failure to initialize properly and produce meaningful results. As is shown, the trajectory of LOAM drifts significantly when compared with all other methods. Without the correction of GPS data, the trajectory of LIO-odom begins to visibly drift at the lower right corner of the map. With the help of GPS data, LIO-GPS can correct the drift when it is available. However, GPS data is not available in the later part of the dataset. As a result, LIO-GPS is unable to close the loop when the robot returns to the start position due to drift. On the other hand, LIO-SAM can eliminate the drift by adding loop closure factors to the graph. The map of LIO-SAM is well-aligned with Google Earth imagery and shown in Fig. 5(b). The relative translational error of all methods when the robot returns to the start is shown in Table II.

Fig. 5: Results of various methods using the Campus dataset that is gathered on the MIT campus: (a) trajectory comparison, (b) LIO-SAM map aligned with Google Earth. The red dot indicates the start and end location. The trajectory direction is clockwise. LIOM is not shown because it fails to produce meaningful results.

D. Park Dataset

In this test, we mount the sensors on a UGV and drive the vehicle along a forested hiking trail. The robot returns to its initial position after 40 minutes of driving. The UGV is driven on three road surfaces: asphalt, ground covered by grass, and dirt-covered trails. Due to its lack of suspension, the robot undergoes low-amplitude but high-frequency vibrations when driven on non-asphalt roads.

To mimic a challenging mapping scenario, we only use GPS measurements when the robot is in widely open areas, which is indicated by the green segments in Fig. 6(a). Such a mapping scenario is representative of a task in which a robot must map multiple GPS-denied regions and periodically return to regions with GPS availability to correct the drift.

Similar to the results in the previous tests, LOAM, LIOM, and LIO-odom suffer from significant drift, since no absolute correction data is available. Additionally, LIOM only runs at 0.67× real-time, while the other methods run in real-time. Though the trajectories of LIO-GPS and LIO-SAM coincide in the horizontal plane, their relative translational errors are different (Table II). Because no reliable absolute elevation measurements are available, LIO-GPS suffers from drift in altitude and is unable to close the loop when returning to the robot's initial position. LIO-SAM has no such problem, as it utilizes loop closure factors to eliminate the drift.

Fig. 6: Results of various methods using the Park dataset that is gathered in Pleasant Valley Park, New Jersey: (a) trajectory comparison, (b) LIO-SAM map aligned with Google Earth. The red dot indicates the start and end location. The trajectory direction is clockwise.

E. Amsterdam Dataset

Finally, we mounted the sensor suite on a boat and cruised along the canals of Amsterdam for 3 hours. Although the movement of the sensors is relatively smooth in this test, mapping the canals is still challenging for several reasons. Many bridges over the canals pose degenerate scenarios, as there are few useful features when the boat is under them, similar to moving through a long, featureless corridor. The number of planar features is also significantly less, as the ground is not present. We observe many false detections from the lidar when direct sunlight is in the sensor field-of-view, which occurs about 20% of the time during data gathering. We also only obtain intermittent GPS reception due to the presence of bridges and city buildings overhead.

Due to these challenges, LOAM, LIOM, and LIO-odom all fail to produce meaningful results in this test. Similar to the problems encountered in the Park dataset, LIO-GPS is unable to close the loop when returning to the robot's initial position because of the drift in altitude, which further motivates our usage of loop closure factors in LIO-SAM.

F. Benchmarking Results

TABLE III: RMSE translation error w.r.t. GPS

Dataset   LOAM    LIOM    LIO-odom   LIO-GPS   LIO-SAM
Park      47.31   28.96   23.96      1.09      0.96

Since full GPS coverage is only available in the Park dataset, we show the root mean square error (RMSE) results w.r.t. the GPS measurement history, which is treated as ground truth. This RMSE error does not take the error along the z axis into account. As is shown in Table III, LIO-GPS


and LIO-SAM achieve similar RMSE error with respect to the GPS ground truth. Note that we could further reduce the error of these two methods by at least an order of magnitude by giving them full access to all GPS measurements. However, full GPS access is not always available in many mapping settings. Our intention is to design a robust system that can operate in a variety of challenging environments.

The average runtime for the three competing methods to register one lidar frame across all five datasets is shown in Table IV. Throughout all tests, LOAM and LIO-SAM are forced to run in real-time. In other words, some lidar frames are dropped if the runtime takes more than 100ms when the lidar rotation rate is 10Hz. LIOM is given infinite time to process every lidar frame. As is shown, LIO-SAM uses significantly less runtime than the other two methods, which makes it more suitable to be deployed on low-power embedded systems.

TABLE IV: Runtime of mapping for processing one scan (ms)

Dataset     LOAM    LIOM    LIO-SAM   Stress test
Rotation    83.6    Fail    41.9      13×
Walking     253.6   339.8   58.4      13×
Campus      244.9   Fail    97.8      10×
Park        266.4   245.2   100.5     9×
Amsterdam   Fail    Fail    79.3      11×

We also perform stress tests on LIO-SAM by feeding it the data faster than real-time. The maximum data playback speed at which LIO-SAM still achieves performance similar to the 1× real-time playback results, without failure, is recorded in the last column of Table IV. As is shown, LIO-SAM is able to process data faster than real-time, up to 13×.

We note that the runtime of LIO-SAM is more significantly influenced by the density of the feature map, and less affected by the number of nodes and factors in the factor graph. For instance, the Park dataset is collected in a feature-rich environment where the vegetation results in a large quantity of features, whereas the Amsterdam dataset yields a sparser feature map. While the factor graph of the Park test consists of 4,573 nodes and 9,365 factors, the graph in the Amsterdam test has 23,304 nodes and 49,617 factors. Despite this, LIO-SAM uses less time in the Amsterdam test as opposed to the runtime in the Park test.


Fig. 7: Map of LIO-SAM aligned with Google Earth.

V. CONCLUSIONS AND DISCUSSION

We have proposed LIO-SAM, a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, for performing real-time state estimation and mapping in complex environments. By formulating lidar-inertial odometry atop a factor graph, LIO-SAM is especially suitable for multi-sensor fusion. Additional sensor measurements can easily be incorporated into the framework as new factors. Sensors that provide absolute measurements, such as a GPS, compass, or altimeter, can be used to eliminate the drift of lidar inertial odometry that accumulates over long durations, or in feature-poor environments. Place recognition can also be easily incorporated into the system. To improve the real-time performance of the system, we propose a sliding window approach that marginalizes old lidar frames for scan-matching. Keyframes are selectively added to the factor graph, and new keyframes are registered only to a fixed-size set of sub-keyframes when both lidar odometry and loop closure factors are generated. This scan-matching at a local scale rather than a global scale facilitates the real-time performance of the LIO-SAM framework. The proposed method is thoroughly evaluated on datasets gathered on three platforms across a variety of environments. The results show that LIO-SAM can achieve similar or better accuracy when compared with LOAM and LIOM. Future work involves testing the proposed system on unmanned aerial vehicles.

REFERENCES

[1] J. Zhang and S. Singh, "Low-drift and Real-time Lidar Odometry and Mapping," Autonomous Robots, vol. 41(2): 401-416, 2017.
[2] A. Geiger, P. Lenz, and R. Urtasun, "Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite," IEEE International Conference on Computer Vision and Pattern Recognition, pp. 3354-3361, 2012.
[3] P.J. Besl and N.D. McKay, "A Method for Registration of 3D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14(2): 239-256, 1992.
[4] A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," Proceedings of Robotics: Science and Systems, 2009.
[5] W.S. Grant, R.C. Voorhies, and L. Itti, "Finding Planes in LiDAR Point Clouds for Real-time Registration," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4347-4354, 2013.
[6] M. Velas, M. Spanel, and A. Herout, "Collar Line Segments for Fast Odometry Estimation from Velodyne Point Clouds," IEEE International Conference on Robotics and Automation, pp. 4486-4495, 2016.
[7] T. Shan and B. Englot, "LeGO-LOAM: Lightweight and Ground-optimized Lidar Odometry and Mapping on Variable Terrain," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4758-4765, 2018.
[8] T. Shan, J. Wang, K. Doherty, and B. Englot, "Bayesian Generalized Kernel Inference for Terrain Traversability Mapping," Conference on Robot Learning, pp. 829-838, 2018.
[9] S. Lynen, M.W. Achtelik, S. Weiss, M. Chli, and R. Siegwart, "A Robust and Modular Multi-sensor Fusion Approach Applied to MAV Navigation," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923-3929, 2013.
[10] S. Yang, X. Zhu, X. Nian, L. Feng, X. Qu, and T. Mal, "A Robust Pose Graph Approach for City Scale LiDAR Mapping," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1175-1182, 2018.
[11] M. Demir and K. Fujimura, "Robust Localization with Low-Mounted Multiple LiDARs in Urban Environments," IEEE Intelligent Transportation Systems Conference, pp. 3288-3293, 2019.
[12] Y. Gao, S. Liu, M. Atia, and A. Noureldin, "INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments using Hybrid Scan Matching Algorithm," Sensors, vol. 15(9): 23286-23302, 2015.
[13] S. Hening, C.A. Ippolito, K.S. Krishnakumar, V. Stepanyan, and M. Teodorescu, "3D LiDAR SLAM Integration with GPS/INS for UAVs in Urban GPS-degraded Environments," AIAA Infotech@Aerospace Conference, pp. 448-457, 2017.
[14] C. Chen, H. Zhu, M. Li, and S. You, "A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives," Robotics, vol. 7(3): 45, 2018.
[15] C. Le Gentil, T. Vidal-Calleja, and S. Huang, "IN2LAMA: Inertial Lidar Localisation and Mapping," IEEE International Conference on Robotics and Automation, pp. 6388-6394, 2019.
[16] C. Qin, H. Ye, C.E. Pranata, J. Han, S. Zhang, and M. Liu, "LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation," arXiv:1907.02233, 2019.
[17] H. Ye, Y. Chen, and M. Liu, "Tightly Coupled 3D Lidar Inertial Odometry and Mapping," IEEE International Conference on Robotics and Automation, pp. 3144-3150, 2019.
[18] F. Dellaert and M. Kaess, "Factor Graphs for Robot Perception," Foundations and Trends in Robotics, vol. 6(1-2): 1-139, 2017.
[19] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J.J. Leonard, and F. Dellaert, "iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree," The International Journal of Robotics Research, vol. 31(2): 216-235, 2012.
[20] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, "On-Manifold Preintegration for Real-Time Visual-Inertial Odometry," IEEE Transactions on Robotics, vol. 33(1): 1-21, 2016.
[21] T. Moore and D. Stouch, "A Generalized Extended Kalman Filter Implementation for the Robot Operating System," Intelligent Autonomous Systems, vol. 13: 335-348, 2016.
[22] G. Kim and A. Kim, "Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4802-4809, 2018.
[23] J. Guo, P.V.K. Borges, C. Park, and A. Gawel, "Local Descriptor for Robust Place Recognition using Lidar Intensity," IEEE Robotics and Automation Letters, vol. 4(2): 1470-1477, 2019.
[24] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A.Y. Ng, "ROS: An Open-source Robot Operating System," IEEE ICRA Workshop on Open Source Software, 2009.
[25] T. Qin, P. Li, and S. Shen, "VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator," IEEE Transactions on Robotics, vol. 34(4): 1004-1020, 2018.


