Abstract—In this paper, we examine the flexibility of a SLAM-based mobile robot that maps and navigates an indoor environment. The system is based on the Robot Operating System (ROS) framework. The robot model is built with the Gazebo package and simulated in Rviz. Mapping is performed with the open-source GMapping algorithm. The aim of the paper is to evaluate the mapping, localization, and navigation of a mobile robot model in an unknown environment.

Keywords—Gazebo; ROS; Rviz; Gmapping; laser scan; navigation; SLAM; robot model; packages.

I. INTRODUCTION

In the modern world, the need for machines is increasing because robots are less likely to make mistakes. The research and applications of robotics range from healthcare to artificial intelligence. A robot cannot understand its surroundings unless it is given some sensing capability. Sensors such as LIDAR, RGB-D cameras, IMUs (inertial measurement units), and sonar can provide that capability. Using sensors and mapping algorithms, a robot can create a map of its surroundings and locate itself inside that map. The robot then continuously checks the environment for dynamic changes.

Our aim was to build an autonomous navigation platform for indoor applications. In this paper, we evaluate the efficiency of a SLAM (Simultaneous Localization and Mapping) based robot model implemented in ROS (Robot Operating System) by measuring the travel time taken by the robot model to reach its destination. The test is performed in a virtual environment created in Rviz; the travel time is measured while different dynamic obstacles are placed along the routes to different destinations in the map.

II. MOTIVATION

Working with robots requires many sensors, and every process needs to be handled in real time. To use sensors and actuators that must be updated every 10-50 milliseconds, we need an operating system that grants this kind of privilege. The Robot Operating System (ROS) provides us with the architecture to achieve this. ROS is open source, and a great deal of code from good research institutes is available that one can readily use and implement in one's own projects. Furthermore, robotics engineers previously lacked a common platform for collaboration and communication, which delayed the adoption of robotic butlers and other related developments. Robotic innovation has picked up pace over the last decade with the advent of ROS, which lets engineers build robotic apps and programs. Robot navigation is a very broad topic on which many researchers in the field of robotics are concentrating. For a mobile robot system to be autonomous, it has to analyze data from different sensors and make decisions in order to navigate an unknown environment. ROS helps us solve different problems related to mobile robot navigation, and the techniques are not restricted to a particular robot but are reusable across different development projects in robotics.

III. RELATED WORKS

In research paper [1], the authors use ROS with the Gmapping algorithm to localize and navigate. The Gmapping algorithm uses laser scan data from the LIDAR sensor to build the map. The map is continuously monitored with OpenCV face detection and a Corobot to identify humans and navigate through the working environment. The authors of research paper [2] describe two cooperative robots that work based on ROS, mapping, and localization. These robots are self-driving and operate in unknown areas, and this project also uses SLAM. The main tasks of the robots are to pick up three block pieces and arrange them in a predetermined manner, and the robots for this purpose were built with the help of ROS. In research paper [3], the authors created a simulation of a manipulator and illustrated methods to implement robot control in a short time. Using ROS and the Gazebo package, they built a pick-and-place robot model with 7 DOF and obtained a robot controller that takes less time. Research paper [5] compares three SLAM algorithms, Core SLAM, Gmapping, and Hector SLAM, using
simulation. The best algorithm is then used to test unmanned ground vehicles (UGVs) in different terrains for defense missions. Using simulation experiments, they compared the performance of the different algorithms and built a robotic platform that performs localization and mapping. The authors of research paper [6] built a navigation platform using an automated vision and navigation framework. With ROS, the open-source GMapping bundle was used for Simultaneous Localization and Mapping (SLAM), and the TurtleBot 2 was implemented with this setup and Rviz; using a Kinect sensor in place of a laser range finder reduced the cost. Journal [9] deals with indoor navigation based on sensors found in smartphones, where the smartphone is used as both a measurement platform and a user interface. The author of journal [10] implemented a 6-degree-of-freedom (DOF) pose estimation (PE) method and an indoor wayfinding system for the visually impaired. The floor plane is extracted from the 3-D camera's point cloud and added as a landmark node into the graph for 6-DOF SLAM to reduce errors; roll, pitch, yaw, X, Y, and Z are the six axes, and the user interface is through sound. Journal [11] explains why the indoor environment is difficult for an autonomous quadcopter: since the experiment is done indoors, GPS cannot be used, so a combination of a laser range finder, an XSens IMU, and a laser mirror is used to build a 3-D map and localize within it, and the quadcopter navigates using a SLAM algorithm. In paper [12], the authors describe a fixed-path algorithm and the characteristics of a wheelchair that uses it, with the help of simulation techniques. The authors of paper [13] explain an auto-navigation platform built on Arduino and the use of an I2C protocol to interface components such as a digital compass and a rotation encoder to calculate the distance. In paper [14], the authors created an autonomous mobile robot using the Fuzzy toolbox in Matlab and used it for path planning; 24 fuzzy rules are implemented on the robot. The authors of paper [15] create an object-level map of an indoor space using RFID ultra-high-frequency passive tags and readers; they state that the method can map a large indoor area in a cost-effective manner.
A. ROS

ROS nodes exchange data by passing messages over topics, ensuring that the threads are not actually trying to read and write shared resources but are rather just publishing and subscribing to messages. ROS also helps us create a virtual environment, generate a robot model, implement the algorithms, and visualize everything in the virtual world rather than implementing the whole system directly in hardware. The system can therefore be refined iteratively, which gives a better result when it is finally implemented in hardware.
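To make this publish/subscribe model concrete, the following minimal sketch (illustrative code, not taken from the paper; the node and topic names are assumptions) shows two rospy nodes that exchange messages over a topic instead of touching shared memory. Each function would run as its own node.

```python
#!/usr/bin/env python
# Minimal ROS publish/subscribe sketch (illustrative names).
# The two nodes never share memory; they only exchange String
# messages over the "chatter" topic.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from talker'))
        rate.sleep()

def listener():
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String,
                     lambda msg: rospy.loginfo('heard: %s', msg.data))
    rospy.spin()  # block and let the callback run on each message
```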
B. Gazebo

Gazebo is a robot simulator. It enables a user to create complex environments and to simulate a robot inside the environment that was created. In Gazebo, the user can build the robot model and incorporate sensors in a three-dimensional space. For the environment, the user can create a platform and assign obstacles to it. For the robot model, the user can write a URDF file and define the robot's links, through which the degree of movement of each part of the robot is specified. The robot model created for this research is a differential-drive robot with two wheels, a laser, and a camera, as shown in Fig. 1. A sample environment was created in Gazebo for the robot to move around and map; the sample map is shown in Fig. 2. Several objects were placed randomly in this environment, and they appear in the created map as static obstacles.
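As an illustration of how such a differential-drive model is driven in simulation, the sketch below publishes velocity commands from a rospy node. It is not code from the paper; it assumes the robot's Gazebo differential-drive plugin is configured to listen on the conventional /cmd_vel topic, and the speed values are arbitrary.

```python
#!/usr/bin/env python
# Drive the differential-drive model by publishing velocity commands.
# Assumption: the Gazebo diff-drive plugin in the robot description
# subscribes to /cmd_vel (the topic name is set in the plugin config).
import rospy
from geometry_msgs.msg import Twist

def drive():
    rospy.init_node('simple_driver')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)   # send commands at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.2      # forward speed in m/s (example value)
    cmd.angular.z = 0.1     # gentle turn in rad/s (example value)
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        drive()
    except rospy.ROSInterruptException:
        pass
```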
C. SLAM

Autonomous robots should be capable of safely exploring their surroundings without colliding with people or slamming into objects. Simultaneous localization and mapping (SLAM) enables the robot to achieve this by learning what the surroundings look like (mapping) and where it is with respect to those surroundings (localization). SLAM can be implemented using different types of 1D, 2D, and 3D sensors, such as acoustic sensors, laser range sensors, stereo vision sensors, and RGB-D sensors. ROS can be used to implement different SLAM algorithms such as Gmapping, Hector SLAM, KartoSLAM, Core SLAM, and Lago SLAM.
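For reference, SLAM is commonly posed as estimating the joint posterior over the robot trajectory x_{1:t} and the map m given the observations z_{1:t} and the odometry or control inputs u_{1:t}. Gmapping, a Rao-Blackwellized particle filter, exploits the standard factorization shown below; this is the textbook formulation, not an equation reproduced from this paper.

```latex
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
  = p(m \mid x_{1:t}, z_{1:t}) \; p(x_{1:t} \mid z_{1:t}, u_{1:t})
```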
D. Rviz

Rviz is the visualization tool of ROS; among other data, it can display the point cloud and the depth image. In Rviz, coordinate systems are known as frames. Many displays can be selected for viewing in Rviz, each showing data from a different sensor; clicking the Add button adds any data type to be displayed. The Grid display gives the ground reference, the Laser Scan display shows the output of the laser scanners as messages of type sensor_msgs/LaserScan, the Point Cloud display shows the positions provided by the program, and the Axes display gives the reference point.
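The same laser data that the Laser Scan display renders can also be read directly by a node. The sketch below is illustrative rather than code from the paper; it subscribes to sensor_msgs/LaserScan messages, and the /scan topic name matches the Hokuyo sensor used later in this work.

```python
#!/usr/bin/env python
# Read the sensor_msgs/LaserScan messages that Rviz's Laser Scan
# display visualizes and report the closest valid range.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    valid = [r for r in scan.ranges
             if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo('%d beams, closest obstacle %.2f m',
                      len(scan.ranges), min(valid))

rospy.init_node('scan_reader')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()
```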
V. IMPLEMENTATION

The environment in which the robot model performs navigation is created in Gazebo, and the robot model described earlier is imported into it. The robot model consists of two wheels, two caster wheels for ease of movement, and a camera attached to the model. The Hokuyo laser is then added to the robot, and the corresponding plugins are incorporated into the Gazebo files; the Hokuyo laser provides the laser data used for creating the map. Using the Gmapping packages, a map is created in Rviz by adding the different parameters that are necessary. Fig. 3 shows the initial state of the map when the system is launched. Initially, the robot model is driven to every corner of the environment with the "teleop_key" package, which controls the robot from the keyboard, until a full map is created. As shown in Fig. 4, the final map generated in Rviz is very similar to the environment created in Gazebo. For visualization in Rviz, the necessary topics were selected and added: the Hokuyo laser sensor used in this robot model publishes its laser data on the "/scan" topic, which is selected as the Laser Scan topic in Rviz, and in a similar way the "/map" topic is added for the map. The generated map is saved using the map_server package available in ROS. Once the map is generated and saved, the robot is ready for the incorporation of the navigation stack packages.

Fig. 4. The final map of environment-1 created in Rviz.
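As a rough sketch of this step (not taken from the paper), the snippet below watches the occupancy grid that Gmapping publishes on /map. The map itself is normally written to disk with map_server's saver tool; the map file name shown in the comment is only an example.

```python
#!/usr/bin/env python
# Monitor the nav_msgs/OccupancyGrid published on /map while mapping.
# Saving is usually done from the shell with map_server, e.g.
#   rosrun map_server map_saver -f my_map   (file name is an example)
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    info = grid.info
    rospy.loginfo('map is %dx%d cells at %.3f m/cell',
                  info.width, info.height, info.resolution)

rospy.init_node('map_monitor')
rospy.Subscriber('/map', OccupancyGrid, on_map)
rospy.spin()
```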
It is very important to note that the robot cannot be navigated without feeding the map to it. The navigation stack packages were used together with amcl, which provides a probabilistic localization system for a robot moving in 2D. The robot is now ready to navigate anywhere in the created map. The destination for the robot can be given using the 2D Nav Goal option in Rviz, which essentially provides the robot
with a goal. The user has to click on the desired area of the map and also indicate the orientation that the robot should have when it arrives. The blue line is the path that the robot has to follow to reach the destination. The robot may not follow this exact path because of some of the parameters, but it always tries to follow it by constantly rerouting itself.
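The goal set with the 2D Nav Goal button can also be sent programmatically through the standard move_base action interface, as in the sketch below. This is illustrative code rather than code from the paper; the frame name and the target coordinates are assumptions.

```python
#!/usr/bin/env python
# Send a navigation goal to move_base, mimicking Rviz's "2D Nav Goal".
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'      # assumed map frame name
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0        # example coordinates
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0     # face along +x

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('goal finished with state %d', client.get_state())
```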
The node graph shown in Fig. 5 indicates the different topics that are published and subscribed to by the different nodes. The /move_base node is connected to several topics, such as the odometry, the velocity commands, the map, and the goal; these topics carry the data that the base of the robot needs to navigate the environment.
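As a final illustration (not code from the paper), the sketch below reads one of those inputs, the odometry, assuming the differential-drive plugin publishes nav_msgs/Odometry on the conventional /odom topic.

```python
#!/usr/bin/env python
# Read the odometry that move_base relies on. Assumes the diff-drive
# plugin publishes nav_msgs/Odometry on /odom (conventional name).
import rospy
from nav_msgs.msg import Odometry

def on_odom(odom):
    p = odom.pose.pose.position
    rospy.loginfo('robot pose: x=%.2f y=%.2f', p.x, p.y)

rospy.init_node('odom_reader')
rospy.Subscriber('/odom', Odometry, on_odom)
rospy.spin()
```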