
2017 14th Workshop on Positioning, Navigation and Communications (WPNC)

Comparison of ROS-based Visual SLAM methods in homogeneous indoor environment
Ilmir Z. Ibragimov and Ilya M. Afanasyev
Institute of Robotics, Innopolis University
Universitetskaya str. 1, 420500 Innopolis, Russia
E-mail: {i.ibragimov, i.afanasyev}@innopolis.ru

Abstract — This paper presents an investigation of various ROS-based visual SLAM methods and analyzes their feasibility for mobile robot applications in a homogeneous indoor environment. We compare trajectories obtained by processing different sensor data (conventional camera, LIDAR, ZED stereo camera and Kinect depth sensor) during an experiment with a UGV prototype in motion. The trajectories were computed by monocular ORB-SLAM, monocular DPPTAM, stereo ZEDfu (based on ZED camera data) and RTAB-Map (based on MS Kinect 2.0 depth sensor data), and verified against LIDAR-based Hector SLAM and a tape measure as the ground truth.

Keywords — Visual SLAM, visual odometry, ORB-SLAM, DPPTAM, RTAB-Map, Kinect, LIDAR, ZED camera, ROS
I. INTRODUCTION
One of the most important challenges for mobile robotics applications is the ability to navigate a mobile robot in an unknown, complex environment. Having a map and an exact location on that map allows the robot to navigate the environment and to plan a path to a target. Neither localization nor mapping poses any difficulty on its own if the onboard sensors are precise enough; the difficulty is to solve both problems at the same time. The challenge typically arises when a robot knows neither the map nor its own location, forming the so-called Simultaneous Localization and Mapping (SLAM) problem: constructing a map of an unknown environment while simultaneously keeping track of the robot location within it. Depending on the applied algorithm, the robot can use various sensors: LIDAR, SONAR, an IMU, an altimeter, a depth or conventional RGB camera, etc. Over the last 10 years, SLAM methods based on computer vision algorithms have been actively developed [1]-[13]. Autonomous navigation and mapping require fast, robust and precise real-time SLAM algorithms. Robot navigation with LIDAR is one of the most effective approaches, but it is quite expensive. However, modern computer vision algorithms make it possible to use an RGB camera [3], a stereo camera [5], [8] or an RGB-D Kinect depth sensor [3], [8]. The main challenge is to apply computer vision-based algorithms to indoor navigation within an office-style environment, where walls are painted in homogeneous colors and may contain glass or mirrors.

Since the earliest real-time methods like MonoSLAM [2] and Parallel Tracking and Mapping (PTAM, [7]), the number of monocular SLAM-related methods has grown with the appearance of Dense Tracking and Mapping (DTAM, [10]), Large-Scale Direct Monocular SLAM (LSD-SLAM, [3]), Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM, [9]), Dense Piecewise Planar Tracking and Mapping (DPPTAM, [1]), ElasticFusion [14], Google Cartographer [11], Direct Sparse Odometry (DSO, [4]), etc. To expand the qualitative comparison of monocular SLAM performed in [15], we present in this paper a quantitative analysis of different SLAM methods based on ROS1 and on sensor data from LIDAR, an RGB-D sensor, and monocular and stereo cameras. Our new measurements were verified with a tape measure as the ground truth.

The paper is structured as follows. Section II introduces the software and hardware configuration. Section III describes the work zone. Section IV considers tests with different visual SLAM methods. Section V provides a comparative analysis of the Unmanned Ground Vehicle (UGV) trajectories obtained by processing the various sensor data. Finally, we generalize our results, conclude about visual SLAM applicability to homogeneous indoor environments, and describe future plans.

II. SYSTEM SETUP

For our experiments we created a human-operated UGV prototype equipped with a computing system and a set of sensors (Fig. 1). To analyze ROS-based visual SLAM-related methods, we recorded raw video data streams from an industrial Basler acA2000-50gc GigE camera2 (Fig. 2b) mounted on the UGV prototype. The UGV computational platform is based on an Intel Core i3 processor and a GeForce GT740M graphics card, which supports CUDA technology for on-board parallel computation. Table I describes the UGV hardware and software configuration.

Figure 1. Human-operated prototype of Unmanned Ground Vehicle (UGV).

1 Robot Operating System (ROS) is a middleware that contains a set of software libraries and tools for robot applications, www.ros.org
2 Area Scan Camera from Basler AG Company (Fig. 2b), www.baslerweb.com/en/products/cameras/




Figure 2. Sensors of the UGV prototype: a) Hokuyo UTM-30LX laser rangefinder; b) Basler acA2000-50gc camera; c) Stereolabs ZED camera; d) Microsoft Kinect 2.0 depth sensor. Courtesy of the sensor manufacturers.

TABLE I. THE UGV PROTOTYPE SYSTEM CONFIGURATION

Parameters      | Configuration
UGV hardware
Processor       | Intel Core i3-416 CPU @ 3.60GHz x 4
GPU             | GeForce GT 740M
RAM             | 8 GB
Sensors
LIDAR           | Hokuyo UTM-30LX3
Camera          | Basler acA2000-50gc GigE
Stereo camera   | Stereolabs ZED camera4
RGB-D sensor    | Microsoft Kinect 2.05
Software
OS              | Ubuntu 14.04
ROS             | Indigo Igloo

III. DESCRIPTION OF THE EXPERIMENTAL ENVIRONMENT

We marked the planned UGV trajectory with a white line inside a typical office-style indoor environment, which contains monochrome, homogeneously painted walls and glass fences (Fig. 3). Since not all visual SLAM methods work well with sharp turns, we planned a smooth trajectory and moved the UGV prototype under human operator control strictly along the marked white line. Therefore, we know all the parameters (metric scale and turning angles) of the real UGV motion. Before the start of the experiment, we switched on all onboard sensors of the UGV prototype and recorded their data, together with the raw video stream ros_image_raw, into a rosbag6 file (a sketch of inspecting such a bag offline is given after Fig. 3). At the same time, the ROS-based LIDAR package hector_slam7 was running. After a successful check, the video stream was processed by the monocular SLAM algorithms. We verified the LIDAR trajectory (which typically demonstrates high precision and robustness to various robot motion deviations [16]) against the real UGV trajectory along the white line. After that, we verified the trajectories of the monocular SLAM algorithms against the LIDAR trajectory and the real UGV trajectory. We performed the same experiments with the ZED stereo camera and the RGB-D Kinect sensor, and we repeated these tests several times to collect statistics. The results are discussed in the next sections.

Figure 3. Workspace for UGV prototype motion along the white line.
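As a minimal illustration of the recording step described above, the following Python sketch opens a recorded bag offline and counts the messages per sensor topic; the bag file name and the topic names are illustrative assumptions, not the exact ones used in our setup.

```python
# Sketch: offline inspection of the recorded rosbag (ROS Indigo, Python 2).
# The bag name and topic names below are illustrative assumptions.
import rosbag

bag = rosbag.Bag('ugv_run.bag')
counts = {}
# Iterate over the recorded sensor streams and count messages per topic
# to verify that every sensor was actually recording during the run.
for topic, msg, t in bag.read_messages(
        topics=['/camera/image_raw', '/scan', '/kinect2/qhd/image_color']):
    counts[topic] = counts.get(topic, 0) + 1
bag.close()
print(counts)
```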
IV. TESTS OF SLAM METHODS

A. Lidar SLAM method

Hector SLAM [17] is based on data obtained from a 2D LIDAR. It was created in 2011 by PhD students of Darmstadt University while designing their own UGV prototype, and it is one of the most popular SLAM methods, used in many projects related to mobile robots. Hector SLAM requires only 2D LIDAR data in the LaserScan format. The algorithm builds a 2D map and performs localization at the same frequency at which the LIDAR scans; IMU or odometry data are not necessary. The LIDAR can be mounted with some offset and at an angle relative to the surface on which the robot moves; to build a correct map, the scans must be transformed from the LIDAR coordinate system into the coordinate system of the movement surface.
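In ROS, this transformation is typically provided as a tf frame between the laser and the robot base. A minimal sketch is given below; the frame names and the mounting offsets are illustrative assumptions for a prototype like ours, not measured values.

```python
# Sketch: publishing an assumed static LIDAR mounting offset so that
# scans can be transformed into the robot base frame (ROS Indigo, Python 2).
import rospy
import tf

rospy.init_node('lidar_mount_tf')
br = tf.TransformBroadcaster()
rate = rospy.Rate(50)
while not rospy.is_shutdown():
    # Illustrative mounting: 20 cm forward, 30 cm up, slight downward pitch.
    br.sendTransform((0.20, 0.0, 0.30),
                     tf.transformations.quaternion_from_euler(0.0, 0.05, 0.0),
                     rospy.Time.now(),
                     'laser',       # child frame (the scanner)
                     'base_link')   # parent frame (the robot base)
    rate.sleep()
```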
Hector SLAM builds a 2D occupancy grid map (Fig. 4), which it publishes in the ROS topic map in the nav_msgs/OccupancyGrid format. Each cell of the map is painted a definite color according to its occupation probability: black means the cell is occupied, light grey that it is free, and dark grey that it was not scanned. The green line shows the UGV trajectory on the map (Fig. 4).
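The occupancy values behind these colors can be read directly from the map topic. The sketch below subscribes to the standard /map topic and summarizes the cell states; in nav_msgs/OccupancyGrid, -1 marks an unscanned cell and 0-100 is the occupancy probability.

```python
# Sketch: reading the hector_slam occupancy grid from the /map topic.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    # grid.data holds int8 cells: -1 = not scanned, 0 = free, 100 = occupied.
    unknown = sum(1 for c in grid.data if c == -1)
    occupied = sum(1 for c in grid.data if c >= 50)
    rospy.loginfo('%dx%d cells at %.3f m/cell: %d occupied, %d unscanned',
                  grid.info.width, grid.info.height,
                  grid.info.resolution, occupied, unknown)

rospy.init_node('map_reader')
rospy.Subscriber('/map', OccupancyGrid, on_map)
rospy.spin()
```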
The obtained 2D LIDAR trajectory is presented in Fig. 5 as the red line; the black line, estimated with a tape measure, is the ground truth with a length of 11.5 m. It is important to note that the deviation from the ground truth is small enough; therefore, the LIDAR measurements can be trusted.

3 Hokuyo Automatic Co. 2D LIDAR (Fig. 2a), www.hokuyo-aut.jp
4 ZED camera from the Stereolabs company (Fig. 2c), https://www.stereolabs.com
5 MS Kinect 2 depth sensor (Fig. 2d), www.xbox.com/xbox-one/accessories/kinect
6 Rosbag is a set of tools for recording and playing back ROS topics, http://wiki.ros.org/rosbag
7 The ROS package hector_slam is available at https://github.com/tu-darmstadt-ros-pkg/hector_slam

Figure 4. 2D occupancy grid map by Hector SLAM.

Figure 5. Trajectory comparison (red – Hector SLAM, black – marked line). The black curve is the ground truth. The units of measure for the XY axes are meters.
B. Monocular ORB-SLAM method


ORB-SLAM8 is a feature-based real-time SLAM library for monocular, stereo and RGB-D cameras [9]. Using the Bundle Adjustment algorithm, features from different images are placed in 3D space; ORB-SLAM calculates a camera trajectory and recovers a sparse 3D scene. The algorithm uses the fast ORB (Oriented FAST and Rotated BRIEF) feature detector, which allows it to work in real time. ORB-SLAM applies advanced approaches for loop closing, keyframe selection and per-frame localization. To process live monocular streams, the library is used with a ROS node. The library provides the required calculations and a graphical user interface (GUI). Visualization of the ORB-SLAM feature detector is presented in Fig. 6a. The Map Viewer visualizes the 3D map (Fig. 6b) as a red point cloud, shows the trajectory as a green line, and shows the plane in which the camera shoots at a particular point in time as a blue rectangle.
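To give a feel for the detector this method builds on, the following sketch runs the ORB detector on a single grayscale frame with OpenCV 3 (the frame file name is an illustrative assumption); ORB-SLAM itself embeds the detector in its tracking pipeline rather than calling OpenCV this way.

```python
# Sketch: the ORB (Oriented FAST and Rotated BRIEF) detector that
# ORB-SLAM builds on, applied to a single frame with OpenCV 3.
import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)  # illustrative file name
orb = cv2.ORB_create(nfeatures=1000)  # cap the number of keypoints per frame
keypoints, descriptors = orb.detectAndCompute(img, None)
print('%d keypoints, 32-byte binary descriptors: %s'
      % (len(keypoints), descriptors.shape))
```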
Figure 6. ORB-SLAM features and point cloud visualization: (a) extracted features; (b) point cloud and trajectory.

As the result of the experiment, we obtained the ORB-SLAM trajectory in relative units (Fig. 7a). The scaled trajectory is shown in Fig. 7b.

Figure 7. ORB-SLAM trajectories (in red) and the marked line (in black): (a) trajectory in relative units; (b) scaled trajectory in meters.

C. Monocular DPPTAM method

Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence (DPPTAM) is one of the newest direct real-time visual SLAM methods, which adopts positive ideas of previous SLAM algorithms [1]. DPPTAM estimates a dense 3D reconstruction of the scene and saves the trajectory as a sequence of points in a point cloud. It works under the assumption that homogeneous and monochrome regions belong to approximately planar areas. The DPPTAM implementation in ROS is a real-time ROS node without a graphical user interface (GUI); therefore, we visualized the extracted features with the standard ROS package RViz9 (Fig. 8a) and the point clouds in MeshLab10 (Fig. 8b).
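Since MeshLab expects a file rather than a live topic, the published cloud has to be dumped to disk first. A minimal sketch of such an export is shown below, assuming the node publishes a sensor_msgs/PointCloud2 message (the topic name /dpptam/map is an assumption).

```python
# Sketch: dumping one ROS PointCloud2 message to an ASCII PLY file
# that MeshLab can open. The topic name is an illustrative assumption.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def save_ply(cloud, path='dpptam_cloud.ply'):
    pts = list(pc2.read_points(cloud, field_names=('x', 'y', 'z'),
                               skip_nans=True))
    with open(path, 'w') as f:
        f.write('ply\nformat ascii 1.0\nelement vertex %d\n'
                'property float x\nproperty float y\nproperty float z\n'
                'end_header\n' % len(pts))
        for x, y, z in pts:
            f.write('%f %f %f\n' % (x, y, z))

rospy.init_node('ply_dumper')
rospy.Subscriber('/dpptam/map', PointCloud2, save_ply)
rospy.spin()
```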
Figure 8. DPPTAM features and point cloud visualization: (a) extracted features; (b) point cloud and trajectory.

Finally, we estimated the DPPTAM trajectory in relative units (Fig. 9a) and scaled the trajectory according to the marked white line (Fig. 9b). Since these trajectories do not coincide, visual odometry with DPPTAM failed in this environment.

Figure 9. DPPTAM trajectories (in red) and the marked line (in black): (a) trajectory in relative units; (b) scaled trajectory in meters.

D. Stereo SLAM method with ZED camera

8 ORB-SLAM2 package, github.com/raulmur/ORB_SLAM2
9 RViz is a 3D visualization tool for ROS, wiki.ros.org/rviz
10 MeshLab is 3D mesh processing, viewing and editing software, www.meshlab.net
11 ZED SDK, www.stereolabs.com/developers/documentation/API/

The Stereolabs developers provide the ZED camera with various software tools and interfaces, of which we used the ZED SDK11, the ZED Calibration Tool and the ZEDfu software for real-time 3D mapping and 3D scene generation in the form of a point cloud. The ZED SDK integrates with ROS through the zed-ros-wrapper12 package. After the experiment with the UGV prototype, a point cloud recorded in the SVO format was processed by the ZEDfu tool (Fig. 10). We can see from Fig. 10 that the map is dense and that all the important obstacles, such as chairs and tables, are displayed. To build the UGV trajectory from the ZED camera visual odometry data, we used the ROS package and the existing Position Tracking visualization tool in an OpenGL window. The calculated trajectory is displayed in Fig. 11a. The comparison of the ZED camera visual odometry trajectory with the marked line is shown in Fig. 11b, where the maximal trajectory mismatch error is about 10 cm.
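For reference, the following sketch shows one way to log the XY trajectory from the wrapper's visual odometry stream; the /zed/odom topic name is an assumption about the zed-ros-wrapper configuration.

```python
# Sketch: logging the XY trajectory from ZED visual odometry.
# The odometry topic name is an illustrative assumption.
import rospy
from nav_msgs.msg import Odometry

path = []

def on_odom(msg):
    # nav_msgs/Odometry carries the current pose estimate of the camera.
    p = msg.pose.pose.position
    path.append((p.x, p.y))

rospy.init_node('zed_path_logger')
rospy.Subscriber('/zed/odom', Odometry, on_odom)
rospy.spin()
```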
Figure 10. ZEDfu 3D map.

Figure 11. ZED camera trajectories (in red) and the marked line (in black): (a) path in the ROS Position Tracking tool; (b) scaled trajectory in meters.

E. RTAB-Map SLAM with Kinect RGB-D camera

Real-Time Appearance-Based Mapping (RTAB-Map, [8]) is a visual graph SLAM algorithm based on loop closure detection. The detector uses a bag-of-words algorithm to determine how similar a new camera image is to images of already visited locations. Each such closure adds a new edge to the camera pose graph, after which the graph is optimized. RTAB-Map13 provides a graphical interface that visualizes the output of the loop closure detector (Fig. 12a), the visual odometry and the point cloud (Fig. 12b). It also allows filtering and saving the point cloud in the PLY or PCD formats. RTAB-Map supports operation with the Kinect RGB-D camera, the ZED stereo camera, a conventional stereo pair, etc. Integration with ROS is implemented via the rtabmap_ros14 package. The algorithm outputs a 3D map in the form of a point cloud, visual odometry, and a 2D occupancy grid map in which each cell represents the probability of space occupancy. These data were visualized with the standard RViz package.
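As a toy illustration of appearance-based loop closure scoring, the sketch below rates the similarity of a new frame against a stored keyframe by the fraction of good binary feature matches; RTAB-Map itself uses an incremental bag-of-words vocabulary rather than this direct pairwise test.

```python
# Toy sketch: appearance similarity between a new frame and a keyframe
# via ORB descriptor matching (a stand-in for RTAB-Map's bag-of-words).
import cv2

def similarity(img_a, img_b):
    orb = cv2.ORB_create()
    _, da = orb.detectAndCompute(img_a, None)
    _, db = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(da, db)
    good = [m for m in matches if m.distance < 40]  # Hamming distance cut-off
    return float(len(good)) / max(len(matches), 1)
```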
Figure 12. RTAB-Map features and point cloud visualization: (a) loop closure detection; (b) point cloud in RTAB-Map.

Thus, as the result of the experiments, we built a map (Fig. 12b) in the form of a dense point cloud and obtained visual odometry data, which were used to compute the UGV prototype trajectory in the XY coordinate plane (Fig. 13). Fig. 13 shows deviations from the marked trajectory at the beginning of the UGV prototype motion, but afterwards the Kinect RGB-D camera-based RTAB-Map odometry gives results that are close to the ground truth trajectory.

Figure 13. Kinect RGB-D camera-based RTAB-Map trajectories (in red) and the marked line (in black). The trajectories are scaled in meters.

12 ROS package for ZED, github.com/stereolabs/zed-ros-wrapper
13 RTAB-Map package, http://introlab.github.io/rtabmap/
14 ROS package for RTAB-Map, http://wiki.ros.org/rtabmap_ros

V. COMPARATIVE ANALYSIS OF SLAM METHODS

A. Maps analysis

ORB-SLAM is a feature-based method that builds a sparse map from a point cloud (Fig. 14a); the map has too little detail to recognize individual objects, but the method also produces fewer outliers.

DPPTAM is a direct method that, unlike ORB-SLAM, builds a dense map from a point cloud (Fig. 14b), trying to detect monochrome flat objects (walls). The number of outliers is acceptable. Some obstacles were not detected during the tests, leaving empty spaces instead of walls.

ZEDfu builds a dense map (Fig. 14c) from a point cloud, triangulating the area and overlaying colored surfaces. The number of outliers on the map is minimal thanks to good visual odometry and point cloud filtering. This method visualizes all the main obstacles and walls, showing them on the 3D map.

RTAB-Map builds a dense map from a point cloud (Fig. 14d). The number of outliers is small because of filtering. Some surfaces are duplicated and appear dispersed because of odometry noise, but the effect is insignificant. For navigation of a real robot in a closed space, OctoMap15 can be a valuable tool for translating a qualitative dense point cloud into an obstacle map [18].

Figure 14. 3D point clouds: a) ORB-SLAM; b) DPPTAM; c) ZEDfu; d) RTAB-Map.

15 OctoMap is a 3D occupancy mapping tool, http://octomap.github.io/
B. Odometry comparative analysis

In this section, we provide a comparative analysis of all trajectories obtained by processing the different sensor data (conventional camera, LIDAR, stereo ZED camera and Kinect depth sensor) during the experiment with the UGV prototype moving in the indoor environment (Fig. 15). For the monocular SLAM methods (ORB-SLAM and DPPTAM) the computed trajectories were properly scaled.
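Monocular SLAM yields trajectories only up to an unknown scale factor, so scaling here means matching the estimated path length to the tape-measured one. A minimal sketch of this normalization, under the assumption that the scale is recovered from the total path length (11.5 m in our case):

```python
# Sketch: recovering absolute scale for a monocular SLAM trajectory
# by matching its total path length to the tape-measured ground truth.
import numpy as np

def scale_trajectory(xy, ground_truth_length=11.5):
    """xy: (N, 2) array of trajectory points in arbitrary SLAM units."""
    steps = np.diff(xy, axis=0)                         # per-segment vectors
    est_length = np.sum(np.linalg.norm(steps, axis=1))  # estimated path length
    return xy * (ground_truth_length / est_length)      # rescaled to meters
```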
Figure 15. Comparison of UGV prototype trajectories (in meters) computed by monocular ORB-SLAM (yellow curve), monocular DPPTAM (cyan curve), LIDAR data (Hector SLAM, red curve), stereo ZED camera (green curve), and Kinect depth sensor (RTAB-Map, blue curve). The black curve is the ground truth. The units of measure for the XY axes are meters.

The error estimation was calculated as follows: the marked line and the movement trajectory were split into a uniform grid with a uniform step. To compute the maximum and average deviations we used the corresponding formulae:

$$\text{max. deviation} = \max_{i} \sqrt{(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2}, \quad (1)$$

$$\text{aver. deviation} = \frac{1}{N} \sum_{i=1}^{N} \sqrt{(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2}, \quad (2)$$

where $\hat{x}_i$, $\hat{y}_i$ are the marked line coordinates, $x_i$, $y_i$ are the computed trajectory coordinates, and $N$ is the number of grid nodes.
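A direct sketch of formulae (1)-(2), assuming both curves have already been resampled at the same N grid nodes:

```python
# Sketch of formulae (1)-(2): per-node Euclidean deviations between the
# computed trajectory and the marked line, resampled at the same N nodes.
import numpy as np

def deviations(traj_xy, line_xy):
    d = np.linalg.norm(traj_xy - line_xy, axis=1)  # distance at each grid node
    return d.max(), d.mean()                       # (1) maximal, (2) average
```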

Table II contains the trajectory deviations obtained by the tested visual SLAM methods.

TABLE II. TRAJECTORY DEVIATIONS

SLAM method | Sensors                 | Average deviation, m | Maximal deviation, m
Hector SLAM | 2D LIDAR                | 0.11 | 0.18
ORB-SLAM    | Monocular camera        | 0.19 | 0.43
DPPTAM      | Monocular camera        | 2.05 | 4.26
ZEDfu       | Stereo ZED camera       | 0.14 | 0.32
RTAB-Map    | Kinect 2.0 depth sensor | 0.42 | 0.67

As we mentioned, LIDAR is one of the most precise and fast sensors, and Hector SLAM odometry has shown the smallest error. ORB-SLAM has also shown a good result: despite the small number of features, its odometry was robust. The lack of feature points adversely affected the DPPTAM method, whose camera rotation angles were not precisely determined. The ZED camera odometry has also shown good results, with errors close to the LIDAR data; however, it loses tracking on sharp turns. RTAB-Map with the Kinect camera is exposed to small noises (see the noisy trajectories in Fig. 13 and Fig. 15), but the algorithm is good at determining camera rotation angles and the absolute scale of the movement trajectory. It should be noted that in such a test, with forward movement along a straight line, all SLAM methods have shown good visual odometry results; the problems begin on turns, when the scanned area changes quickly. Table III compares the visual SLAM methods in terms of their major parameters.

TABLE III. COMPARISON OF VISUAL SLAM METHODS

Parameter        | ORB-SLAM | DPPTAM   | ZEDfu    | RTAB-Map
Visualization    | Built-in | ROS/RViz | Built-in | Built-in
CUDA             | No       | No       | Yes      | No
Odometry         | No       | No       | Yes      | Yes
Trajectory       | Yes      | Yes      | No       | No
Noise level      | Low      | Low      | Low      | Middle
3D map quality   | Low      | Average  | Good     | Good
3D map type      | Sparse   | Dense    | Dense    | Dense
Odometry quality (max. deviation) | Good (0.43 m) | Poor (4.26 m) | Good (0.32 m) | Good (0.67 m)

VI. CONCLUSION AND FUTURE PLANS

In this paper we investigated various SLAM methods, providing a comparative analysis of all trajectories obtained by processing different sensor data (conventional camera, LIDAR, stereo ZED camera and Kinect depth sensor) during the experiment with the UGV prototype moving in an indoor environment. We compared the ROS-based visual SLAM methods monocular ORB-SLAM, monocular DPPTAM, stereo ZEDfu (based on ZED camera data) and RTAB-Map (based on MS Kinect 2.0 depth sensor data). To monitor the UGV prototype motion we used LIDAR data and the ROS-based Hector SLAM trajectory, which was also used as the ground truth (along with the white tape marked on the floor).

The main conclusions from our measurements are: (1) There is no monocular SLAM algorithm that can estimate the absolute scale of the resulting map and, therefore, of the localization; it is necessary to use a ground truth to verify the size of map objects or to estimate the absolute value of camera displacement. (2) Visual odometry based on ORB-SLAM is more robust to a homogeneous indoor environment than DPPTAM, although DPPTAM creates a denser 3D map.

To answer the question about the applicability of visual SLAM methods, we provide the following recommendations: (1) The ORB-SLAM method should be used if there is a high performance requirement and the environment does not contain flat monochrome objects. (2) The DPPTAM method should be used if a dense area map is required to build an obstacle map, or if the environment does not contain a sufficient number of features and the hardware has enough computing power. (3) Stereo cameras or RGB-D sensors give good results and build a map and localization in absolute values; there is no significant difference between them in terms of resource consumption. (4) If it is necessary to build a map with large depth, then a stereo camera is preferable. (5) If there are large monochrome walls, stained glass or mirrors, then an RGB-D sensor is the better option.

Our future plans deal with the realization of other visual SLAM and odometry methods (such as Google Cartographer, DSO, LSD-SLAM and ElasticFusion), paying attention to building a quality dense point cloud, improving the experimental technique and providing tests in more complex environments.

ACKNOWLEDGMENT

This research has been supported by the Russian Ministry of Education and Science within the Federal Target Program grant #14.609.21.0100, and by Innopolis University.

REFERENCES

[1] A. Concha and J. Civera, "DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence", in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2015, pp. 5686-5693.
[2] A.J. Davison, I.D. Reid, N.D. Molton, and O. Stasse, "MonoSLAM: Real-time single camera SLAM", IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(6), 2007, pp. 1052-1067.
[3] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM", in Proc. European Conf. on Computer Vision (ECCV), Zurich, Switzerland, 2014, pp. 834-849.
[4] J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry", IEEE Trans. on Pattern Analysis and Machine Intelligence, 2017.
[5] J. Engel, J. Stuckler, and D. Cremers, "Large-scale direct SLAM with stereo cameras", in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2015, pp. 1935-1942.
[6] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry", in IEEE Int. Conf. on Robotics and Automation (ICRA), 2014.
[7] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces", in Proc. IEEE/ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), Nara, Japan, 2007.
[8] M. Labbe and F. Michaud, "Online Global Loop Closure Detection for Large-Scale Multi-Session Graph-Based SLAM", in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2014, pp. 2661-2666.
[9] R. Mur-Artal, J.M.M. Montiel, and J.D. Tardos, "ORB-SLAM: A Versatile and Accurate Monocular SLAM System", IEEE Trans. on Robotics, 31(5), 2015, pp. 1147-1163.
[10] R.A. Newcombe, S.J. Lovegrove, and A.J. Davison, "DTAM: Dense Tracking and Mapping in Real-time", in IEEE Int. Conf. on Computer Vision (ICCV), 2011, pp. 2320-2327.
[11] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-Time Loop Closure in 2D LIDAR SLAM", in IEEE Int. Conf. on Robotics and Automation (ICRA), 2016, pp. 1271-1278.
[12] T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison, "ElasticFusion: Dense SLAM without a pose graph", in Robotics: Science and Systems, 2015.
[13] B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, and J. Tardos, "A comparison of loop closing techniques in monocular SLAM", Robotics and Autonomous Systems, 57(12), 2009, pp. 1188-1197.
[14] T. Whelan, R.F. Salas-Moreno, B. Glocker, A.J. Davison, and S. Leutenegger, "ElasticFusion", Int. J. Rob. Res., 35(14), 2016, pp. 1697-1716.
[15] A. Buyval, I. Afanasyev, and E. Magid, "Comparative analysis of ROS-based monocular SLAM methods for indoor navigation", in Proc. SPIE 10341, Int. Conf. on Machine Vision (ICMV), Nice, France, 2016.
[16] M. Sokolov, O. Bulichev, and I. Afanasyev, "Analysis of ROS-based Visual and Lidar Odometry for a Teleoperated Crawler-type Robot in indoor environment", in Proc. Int. Conf. on Informatics in Control, Automation and Robotics (ICINCO), Madrid, Spain, 2017.
[17] S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation", in Proc. IEEE Int. Symp. on Safety, Security and Rescue Robotics (SSRR), 2011.
[18] A. Hornung, K.M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, "OctoMap: An efficient probabilistic 3D mapping framework based on octrees", Autonomous Robots, 2013.
