TOBIAS EDWARDS
JACOB SÖRME
Abstract
Household robotics is on the rise, with robotic vacuum cleaners taking the lead. One of the challenges in creating great robotic vacuum cleaners is path planning for coverage of the room. For instance, the robot might not know how the environment looks, or where in the room it is.
Three path planning algorithms are tested: the random path algorithm, the snaking algorithm and the spiral algorithm. In our simulations the random algorithm was initially faster than the snaking algorithm. However, the random algorithm’s performance worsened and it performed similarly to the snaking algorithm towards the end. The spiral algorithm performed worse than the other tested algorithms. The random algorithm showed a more stable performance than both the snaking and spiral algorithms. However, we believe that the spiral algorithm could be complemented with other algorithms. Errors in the results could possibly stem from the method used in the software to detect wall collisions. Future research could improve the software and also cover collisions with objects in rooms, as well as rooms of other shapes.
Sammanfattning
The number of robots in the home is increasing worldwide, and the robotic vacuum cleaner is a popular choice. Good path planning algorithms are one of the challenges in making good robotic vacuum cleaners. For instance, the robot does not necessarily know its position in a room, or even what the room looks like; how should developers proceed?
1 Introduction 1
1.1 Aim & Goal . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Path Planning 4
2.1 Random Walk Based Algorithm . . . . . . . . . . . . . . . 4
2.2 Spiral Algorithm . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Snaking and Wall Follow Algorithms . . . . . . . . . . . . 6
2.4 Path Transforms . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5 Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . 8
3 Methodology 10
3.1 Simulation, Testing & Software . . . . . . . . . . . . . . . 10
3.1.1 Used techniques . . . . . . . . . . . . . . . . . . . 10
3.1.2 Assumptions . . . . . . . . . . . . . . . . . . . . . 11
3.1.3 Robot . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1.4 Movement of robot & Updates . . . . . . . . . . . 11
3.1.5 Trail behind robot . . . . . . . . . . . . . . . . . . . 12
3.1.6 Rooms . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.7 Algorithms Implemented . . . . . . . . . . . . . . 12
3.1.8 Runs . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.9 Displaying & Calculating Data . . . . . . . . . . . 13
4 Hypotheses 15
5 Results 16
6 Discussion 22
6.1 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.2 Future Research & Improvements . . . . . . . . . . . . . . 26
7 Conclusion 27
Bibliography 28
Chapter 1
Introduction
The use of robotics within everyday life has grown as new research
and innovative ideas lead to new application possibilities. These ap-
plications often simplify and automate previously tedious work that
needed to be performed by humans, e.g. heavy lifting, lawn mowing
and warehouse logistics [2].
The robotics industry suffers from some of the problems the personal
computer industry suffered from 30 years ago. In 2007, the author
of [3] pointed out that lack of standards often resulted in designers
having to start from scratch. However, the robotics industry can take
advantage of the development of computers. Consider that sensors
and computing power are becoming more convenient and cheaper.
Furthermore, the increase in connectivity can push workloads from
smaller units to larger computers or servers [3].
In 2008 toy robots largely outsold utilitarian robots [8]. In 2016 vacuum and floor cleaning robots made up the largest share of the robotic space. Developers must also decide to what extent the robot should
be able to localise itself within a room using sensors. The problem
of covering every accessible area in the entire room is known as path
planning of coverage region, PPCR. An efficient PPCR can be formu-
lated as one that should be short in distance and contain a minimum
number of turning angles and revisited areas [11].
To achieve this goal, this paper presents a short literature study and a simulation in which some of the algorithms discussed in the paper are tested.
Chapter 2
Path Planning
Path planning for autonomous cleaning robots is the act of using algo-
rithms to decide how and where a robot should move in an environ-
ment [7]. The problem is often to decide how to completely cover a
room. Covering a room can for example be done by following a pre-
computed path from start to finish, or by following certain rules along
the way. All path planning algorithms rely on some type of knowl-
edge of the operating space. In the case of an algorithm that uses a
precomputed map, the knowledge needs to be extensive. A simpler algorithm, however, may rely only on the knowledge of having hit an object. In the latter case, the robot must be equipped with at minimum a
simple bumper sensor that signals when the robot hits an object. With
more sophisticated positioning equipment like IR sensors and GPS, al-
gorithms that allow for more efficient path planning in terms of time
and redundant (repeated) surface coverage can be developed [7].
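As a concrete illustration of the simpler end of this spectrum, the sketch below (our own TypeScript illustration, not code from [7] or from our simulation software; all names and constants are assumptions) shows a single update step of a bumper-only policy: the robot drives straight ahead and picks a new random heading whenever the bumper reports a collision.

// A minimal sketch of a bumper-only policy: drive straight ahead and pick a
// new random heading whenever the bumper reports a collision.

interface RobotState {
  x: number;        // position in pixels
  y: number;
  heading: number;  // direction of travel in radians
}

const STEP = 3; // pixels moved per update (an illustrative value)

function bumperOnlyStep(state: RobotState, bumperHit: boolean): RobotState {
  // On a collision the only available information is "we hit something",
  // so the policy simply chooses a new random direction.
  const heading = bumperHit ? Math.random() * 2 * Math.PI : state.heading;
  return {
    x: state.x + STEP * Math.cos(heading),
    y: state.y + STEP * Math.sin(heading),
    heading,
  };
}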
A consequence of today’s robots being unable to run endlessly (they have to run out of power at some point) is that random-based approaches have no guarantee of cleaning performance within a given amount of time [7].
A drawback of the snaking pattern is that the robot has to perform lots of stopping and rotating. Consider that wheels may not be perfectly aligned or that sensors are cheap and imprecise; over time, errors will accumulate [7]. For this reason the authors of [7] suggest using the snaking pattern only for local, small areas that are cleared quickly, in order to prevent positional errors from accumulating. The authors of [7] combine the snaking path with a random walk path, so that the random walk explores the room and the snaking pattern is used for the actual area coverage. They suggest this is more efficient than running either the random or the snaking path separately.
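To make the local snaking pass concrete, the following sketch (our own hypothetical TypeScript, not the implementation from [7]; the cell representation and the function names are assumptions) generates a boustrophedon path over a small rectangular cell, with lanes separated by the robot width so that adjacent lanes just touch.

// Sketch of a snaking (boustrophedon) pass over a local rectangular cell.

interface Point { x: number; y: number; }

function snakingWaypoints(
  cell: { x: number; y: number; width: number; height: number },
  robotWidth: number
): Point[] {
  const waypoints: Point[] = [];
  let goingRight = true;
  // Sweep lane by lane from the top to the bottom of the cell.
  for (let y = cell.y; y <= cell.y + cell.height; y += robotWidth) {
    const left = { x: cell.x, y };
    const right = { x: cell.x + cell.width, y };
    // Alternate the lane direction to form the snake.
    waypoints.push(goingRight ? left : right);
    waypoints.push(goingRight ? right : left);
    goingRight = !goingRight;
  }
  return waypoints;
}

// Example: a 60 x 36 pixel cell covered by a robot that is 12 pixels wide.
const localPath = snakingWaypoints({ x: 0, y: 0, width: 60, height: 36 }, 12);
console.log(localPath.length, "waypoints");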
Wall follow paths are mainly used to allow the robot to gain knowl-
edge of a room’s perimeter. As presented by the authors of [4], the
robot moves forwards until it collides with an obstacle. Thereafter it
performs turns until the robot is no longer colliding and can continue
along the edge of the obstacle. Wall following is particularly useful
for clearing tricky spots such as corners. The authors of [4] suggest
combining it with other techniques to achieve good coverage. Refer to
Figure 2.4.
Figure 2.4: Picture of wall following robot with its trail in a room [4].
Each time the robot hits the wall it rotates clockwise until it can go
forwards again.
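The rule in the caption can be expressed as a small step function. The sketch below is our own TypeScript illustration of that rule, not the robot from [4]; the collision test is abstracted into a callback, and the step size and turn increment are assumed values.

// Sketch of the wall-following rule: drive forwards until a collision is
// detected, then rotate clockwise in small increments until the next forward
// step would be collision free.

type CollisionTest = (x: number, y: number) => boolean;

const FORWARD_STEP = 3;              // forward movement per update, in pixels
const TURN_INCREMENT = Math.PI / 18; // rotate 10 degrees at a time

function wallFollowStep(
  robot: { x: number; y: number; heading: number },
  collides: CollisionTest
): { x: number; y: number; heading: number } {
  const { x, y } = robot;
  let heading = robot.heading;
  let attempts = 0;
  // Rotate until the position one step ahead is free (at most a full circle).
  while (
    collides(x + FORWARD_STEP * Math.cos(heading), y + FORWARD_STEP * Math.sin(heading)) &&
    attempts < 36
  ) {
    heading += TURN_INCREMENT; // clockwise on a canvas where y grows downwards
    attempts++;
  }
  // Take the forward step along the (possibly new) heading.
  return {
    x: x + FORWARD_STEP * Math.cos(heading),
    y: y + FORWARD_STEP * Math.sin(heading),
    heading,
  };
}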
The path transform approach is supposed to not only make a robot follow the circular contour patterns which radiate from the goal point, but also follow the shape profile of obstacles in the environment, thereby creating an end result with fewer turns. A robot using this method needs great accuracy and cannot rely solely on dead reckoning: a sensor is needed that sights landmarks, which are compared with a precomputed map.
The genetic algorithm approach uses a fitness function to optimise the robot’s path. The fitness function (similar to natural selection) combines different solutions to create a new path. Every cell in the map gets a number and becomes a gene, and a partial path consisting of the neighbouring cells within reach of a sensor becomes a chromosome.
The fitness function evaluates the partial paths possible to take from every position of the robot. It takes into consideration: the total length of the partial path, the number of unvisited cells it visits, and the total X-axis distance of every visited cell to the current position of the robot. The latter is supposed to make the evolution favour partial paths that travel as little as possible along the X-axis, meaning the complete path will most likely travel along the Y-axis, turn, and then travel back along the Y-axis a bit farther away on the X-axis.
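As an illustration of how these three criteria could be combined, the sketch below shows a possible fitness function in TypeScript. The weights, the cell representation and the sign conventions are our own assumptions; the cited work does not specify them in this form.

// Sketch of a fitness function combining the three criteria described above:
// partial-path length, number of previously unvisited cells, and the total
// X-axis distance between the visited cells and the robot's current position.

interface Cell { x: number; y: number; }

function fitness(
  partialPath: Cell[],
  visited: Set<string>,   // keys like "x,y" for already covered cells
  robot: Cell,
  weights = { length: 1, unvisited: 5, xDistance: 2 }
): number {
  const length = partialPath.length;
  const unvisited = partialPath.filter(c => !visited.has(`${c.x},${c.y}`)).length;
  const xDistance = partialPath.reduce((sum, c) => sum + Math.abs(c.x - robot.x), 0);
  // Reward new cells; penalise long paths and sideways (X-axis) drift, so the
  // surviving chromosomes tend to sweep along the Y-axis.
  return weights.unvisited * unvisited
       - weights.length * length
       - weights.xDistance * xDistance;
}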
Chapter 3
Methodology
3.1.2 Assumptions
In order to create a simulation environment within the given time frame, certain limitations and assumptions were made. The impact that these limitations have on our test results will be discussed in section 6. The assumptions made for the simulation and testing are described in the following subsections.
3.1.3 Robot
The robot is represented by a circle with a diameter of 12 pixels. It has the ability to move anywhere in the room. A wall collision is detected if the centre point of the robot is less than 6 pixels (one radius) away from a virtual wall.
Since the robot moves 3 pixels in every update, it covers 36 pixels (a 12 × 3 rectangle) per update. The trail does not include the circular shape of the robot, since the actual vacuum part of the robot is a line on its underside [6], and this is what leaves the trail.
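The collision rule and the per-update coverage can be summarised in a few lines. The sketch below uses the values stated above (12 pixel diameter, 3 pixel step); the function and type names are our own, and this is not the actual source code of the simulation.

// Per-update bookkeeping: a 12 pixel diameter robot that moves 3 pixels per
// update and leaves a 12 x 3 pixel (36 pixel) trail.

const RADIUS = 6;                            // pixels, half of the 12 pixel diameter
const STEP_LENGTH = 3;                       // pixels moved per update
const TRAIL_AREA = 2 * RADIUS * STEP_LENGTH; // 36 pixels covered per update

interface Room { width: number; height: number; }

// A wall collision is registered when the centre point of the robot is less
// than one radius away from a virtual wall.
function hitsWall(x: number, y: number, room: Room): boolean {
  return x < RADIUS || y < RADIUS || x > room.width - RADIUS || y > room.height - RADIUS;
}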
3.1.6 Rooms
The square room has a side length of 200 pixels. The rectangular room has a width of 333 pixels and a height of 120 pixels. Both rooms have a total area of roughly 40,000 square pixels. For example, if a real robotic vacuum cleaner has a 30 cm diameter, then with our proportions the square room would have sides of 5 metres.
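The scale implied by these proportions can be checked with a short calculation; the snippet below is only a worked example of the numbers already given above.

// 12 pixels correspond to a 30 cm robot, so one pixel is 2.5 cm; the 200 pixel
// square room is then 5 metres wide and the 333 pixel room about 8.3 metres.
const CM_PER_PIXEL = 30 / 12;                              // 2.5 cm per pixel
const squareRoomSideMetres = 200 * CM_PER_PIXEL / 100;     // = 5
const elongatedRoomWidthMetres = 333 * CM_PER_PIXEL / 100; // ≈ 8.3
console.log(squareRoomSideMetres, elongatedRoomWidthMetres);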
3.1.8 Runs
Measuring the time during a run is done by counting the number of
updates of the canvas. An update of the canvas is equivalent to cal-
culating the new position of the robot and drawing its trail. For each
algorithm 2000 runs are performed in each room. The number of up-
dates and turns will be stored at each percentage of coverage. The per-
centages of coverage examined are 1%, 2% ... 95%. One of the reasons
we do not examine percentages greater than 95% is that we think that
• After each turn the turned radians are calculated. The robot will
rotate in the direction that results in the shortest turn to get to the
new direction.
• When the robot reaches 95% coverage the data is stored, then a
new run is initiated.
• Before the canvas is cleared and a new run begins, the whole trail of the run is analysed and data for a heat map is accumulated: the opacity of each pixel is added to an average of that pixel from previous runs (see the sketch after this list).
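The list above can be summarised in two small routines. The sketch below (our own TypeScript illustration with assumed names, not the project’s actual source code) records the update and turn counters the first time each whole percentage of coverage is reached, and folds each run’s pixel opacities into a running average for the heat map.

// Data collection during and after a run.

interface RunData {
  updatesAtPercent: number[]; // indexed 1..95
  turnsAtPercent: number[];
}

function recordCoverage(
  data: RunData,
  coveredFraction: number, // covered area divided by room area, 0..1
  updates: number,
  turns: number
): void {
  const percent = Math.floor(coveredFraction * 100);
  // Store the counters the first time this percentage (1% .. 95%) is reached.
  if (percent >= 1 && percent <= 95 && data.updatesAtPercent[percent] === undefined) {
    data.updatesAtPercent[percent] = updates;
    data.turnsAtPercent[percent] = turns;
  }
}

// Running average of pixel opacities across runs, used for the heat map.
function accumulateHeatMap(
  average: Float32Array,
  trailOpacity: Float32Array,
  runsSoFar: number
): void {
  for (let i = 0; i < average.length; i++) {
    average[i] = (average[i] * runsSoFar + trailOpacity[i]) / (runsSoFar + 1);
  }
}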
Chapter 4
Hypotheses
Our first hypothesis is that the random algorithm and the spiral al-
gorithm will have similar performance, as stated in [10]. We believe
that the spiral algorithm might have difficulties reaching the corners
of the room.
Our second hypothesis is that the snaking algorithm will perform better than both the random and spiral algorithms. Finally, we also believe that all algorithms will perform better in the beginning of a run than at the end. We believe that this is because, as the covered area grows, the probability that the robot finds uncovered area decreases.
Chapter 5
Results
Figures 5.1 and 5.2 show heat maps of the robot after 2000 runs in
square and elongated rooms, respectively. A red colour indicates that
the area was frequently visited by the robot whereas a blue colour in-
dicates the opposite.
Figure 5.1: Figures showing the heat maps for the three tested algo-
rithms after 2000 runs in the square room. A strong red colour indi-
cates frequent visits, while a strong blue colour indicates few visits.
Figure 5.2: Figures showing the heat maps for the three tested algo-
rithms after 2000 runs in the elongated room. A red colour indicates
frequent visits, while a blue colour indicates few visits.
Figure 5.3: Number of updates plotted against % of area coverage.
The spiral algorithm needed the most updates and turns to complete
any percentage of coverage. Refer to Figure 5.3 and Figure 5.4. The
spiral showed an increase in number of updates required for cover-
age after around 25% of the area was already covered in the elongated
room, see Figure 5.3. A similar increase for the spiral algorithm in the
square room happened at around 40% of coverage. The random al-
gorithm was shown to need fewer turns than the snaking algorithm. The largest difference in turning was in the elongated room, where the random algorithm needed around ten fewer complete turns to reach 70% coverage. See Figure 5.4.
Figure 5.4: Number of turns plotted against % of area coverage.
Chapter 6
Discussion
We will now discuss the results. But what kind of results are desirable? It might actually be desirable to have repeated coverage to some extent, in order to achieve a clean surface. In our case this means that fewer required updates might not always mean a better result. Does the same go for turns? Probably not, since turning does not contribute to covering or cleaning a room and is only time consuming [7]. Hence a great run would consist of as few turns as possible and few updates, but not too few.
As initially believed in our first hypothesis (see Section 4) the spiral al-
gorithm did not reach the corners of the rooms efficiently. We can con-
clude from the heat maps that the spiral algorithm showed a predom-
inance of coverage in the central regions of the room. It made many
spirals until it by chance bounced into a corner and covered that area.
The behaviour of the spiral algorithm resulted in many visits to the
central parts of the rooms. Refer to the heat maps for spiral algorithm
in Figure 5.1 and Figure 5.2. We can conclude from our results that
the spiral algorithm is not the best algorithm of choice. However, it
would be interesting to examine the spiral algorithm in realistic rooms
filled with objects. It might also perform better if combined with a wall
following algorithm, in order to cover the corners efficiently. It might
suffer more from its lacking ability to cover corners in a completely
square room than in a realistic room filled with objects. As the authors
of [6] mentioned, algorithms similar to the spiral algorithm are actually used in industry.
Our belief that all of the tested algorithms would perform worse towards the end of the runs is supported by our results. We believe this is because the growing covered area increases the probability of revisiting areas. Specifically, the spiral algorithm proved to be especially vulnerable to having large portions of the room already covered. This can be linked to the fact that the spiral algorithm had difficulties reaching the corners. The snaking and the random algorithms also suffered towards the end of the runs, but to a lesser degree.
The heat maps for both the random and the snaking algorithm showed
a uniform coverage of the room. Most parts of the room had the same
colour. For the snaking algorithm the colour was more blue, indicat-
ing fewer visits to the same spot. The part of the room the snaking
algorithm visited the most was along the walls. See figures 5.1 and 5.2
in section 5.
The authors of [10] stated regarding the random algorithm and the spi-
ral algorithm: "... the manner in which the robot is moving depends strongly
on its surroundings". However, it can be seen in Results (section 5) that
the random algorithm had an almost identical average performance in
a square and a rectangular room of the same size. On the other hand,
both the spiral algorithm and the snaking algorithm performed better
in the square room than in the elongated room. Refer to Figure 5.3 and
Figure 5.4, section 5. However, the authors of [10] examined rooms
with objects in them, which is something we have not tested. Hence
we cannot discuss how a room with objects affects the performance of
an algorithm.
The claim by the authors of [7] that the snaking algorithm performs
a lot of turning is supported by our simulation results if we compare
snaking with the random algorithm. A lot of turning means longer
running times if the robot must stop and rotate. Because we assumed
perfect motion, it is difficult to discuss the claim by the authors of [7]
that errors accumulate fast over time for the snaking algorithm. It is
however reasonable to consider that, in general, a robot with cheap
sensors and poor build quality will generate larger errors as the robot
turns. Also, if the path algorithm applies a sophisticated heuristic or
relies on positional data, then errors will most likely have a negative
impact on the robot’s performance. Our tested algorithms rely on ran-
domness to some degree, not on the robot’s position. Since errors will likely affect the robot in random ways, the randomness already used in the decision making will not be greatly affected.
6.1 Limitations
The drawings on the HTML canvas only have so many pixels. This means that the robot and the trace behind it cannot be drawn precisely. Movement in every direction cannot be represented exactly on the canvas, since the pixels are square. In each update the robot had a theoretical maximum number of pixels it could cover. This maximum was sometimes exceeded due to imprecision in the drawing of the trace.
The spiral algorithm in our simulations did not have a slower speed
while turning. Depending on the implementation it is likely that turn-
ing will reduce speed in reality, since motors have to work on turning.
When the robot hits a wall, a new angle is calculated according to the algorithm being tested. Since hitting a wall in our software really means passing a virtual line, some problems do occur. See Figure 6.1. If the robot is inside the room at the beginning of one update and outside a wall after it, it is moved into the room again in the next update, the update that detects that the robot bumped into a wall. When the robot is moved into the room from outside a wall, its position is changed. For simplicity, only the position in one dimension is changed. For example, if the robot passes the right wall, the x-coordinate is changed to be just inside that wall. The y-coordinate is not changed, meaning that the position of the collision will be slightly off from where the real collision should have happened. We judge this error to be minimal, but the speed of the robot is kept relatively low to be on the safe side.
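The correction described above can be sketched as follows. This is our own TypeScript illustration of the idea, with assumed names, not the actual code of the simulation: only the coordinate along the axis of the wall that was passed is pushed back inside, which is what causes the small positional error discussed above.

// Move the robot back inside the room after an update left it outside a wall.
function correctAfterWallHit(
  x: number,
  y: number,
  room: { width: number; height: number },
  radius: number
): { x: number; y: number } {
  let cx = x;
  let cy = y;
  // Push back along the x-axis only if a vertical wall was passed.
  if (cx < radius) cx = radius;
  if (cx > room.width - radius) cx = room.width - radius;
  // Push back along the y-axis only if a horizontal wall was passed.
  if (cy < radius) cy = radius;
  if (cy > room.height - radius) cy = room.height - radius;
  return { x: cx, y: cy };
}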
Chapter 7
Conclusion
Our study and our results show that the random algorithm performed
best in our simulations. The random algorithm and the snaking algo-
rithm showed similar performance. The snaking algorithm required
more turns, while the random algorithm required more updates. Since
repeated coverage is sometimes desirable but extra turns are not, the
random algorithm is better. The spiral algorithm did not perform well
in comparison. However, it would be interesting to combine the spiral
algorithm with an algorithm made to cover corners and walls, such as
the wall following algorithm. We believe that other solutions for path planning for robotic vacuum cleaners that have not been tested in this paper may perform well in theory but might not be applicable in practice, due to the assumptions they make about the environment and the capabilities of the robot.
Bibliography