Usability Aspects On Industrial ABB Robot Calibration With A Focus On TCP and Work Object Calibration
Dilip Kota
April 1, 2007
Master’s Thesis in Computing Science, 20 credits
Supervisor at ABB Corporate Research: Ivan Lundberg
Supervisor at Umeå University: Anders Broberg
Examiner: Per Lindström
Umeå University
Department of Computing Science
SE-901 87 UMEÅ
SWEDEN
Abstract
ABB robots are world leading when it comes to the different techniques used in robot
programming. These advanced techniques can be huge time savers when programming
robots; however, the complexity of these techniques makes them hard for non-experts
to understand and perform. This leads to end users not using the very techniques they
wanted and bought the robot for in the first place. Since humans program and handle
robots, human errors are introduced. The outcome of these problems is a loss of
functionality and performance, leading to unwanted financial and time losses.
Implementing usability into robot applications is therefore very important, both to lessen
the knowledge gap for end users and to diminish human errors.
Intelligent software agents are a way out of these problems; they help and assist end
users by guiding them step by step through a process. The agent acts like a personal
tutor that gives instant feedback and recommendations and draws the user's attention to
where the next step will occur. The best part, however, is that this personal tutor is
available at any time, any day of the week, and never gets tired of teaching.
This thesis concerns usability aspects of industrial ABB robot calibration. Everyday
problems that users have with robot calibration are discussed, and solutions are
presented for every problem. From the combined results of the author's work and feedback
from the field studies, a mock-up has been generated in which usability aspects are taken
into consideration. The calibration interface has been restructured to appear more
intuitive, and new features, like viewing text on the FlexPendant, have been introduced.
The most important new feature implemented in the mock-up is an intelligent software
agent that assists the users. With the prototype implemented, even beginners understand
and can perform some of the calibration techniques.
Acknowledgements
First and foremost I would like to thank Ivan Lundberg, Development Engineer and
Supervisor at ABB CRC, Västerås, for allowing me to do this thesis. His guidance, valuable
suggestions, advice, and patience were the main reasons for completing this project.
I would like to thank Tommy Svensson, Senior System Engineer, and Bryan Johnson,
Chief Software Engineer, for making me feel welcome on my trip to Fort Collins,
Colorado, USA, and for discussing robot calibration with me. I would also like to thank
all employees at Wolf Robotics, Fort Collins, for allowing me to interview them.
I would like to thank Douglas Hixon, Automotive Application Specialist, for taking
care of me in Auburn Hills, and for explaining certain parts of robot calibration. I would
also like to thank all employees at ABB, Auburn Hills, for allowing me to interview them.
Contents

1 Introduction
  1.1 Thesis Outline
2 Background
  2.1 Problem Statement
  2.2 Purpose
  2.3 Goals
  2.4 Methods
3 The Robot System
4 Intelligent Interface Agents
5 Mock-up
  5.1 The Implementation
  5.2 Original GUI
  5.3 Restructuring
  5.4 Viewing text on the FlexPendant
  5.5 The Guide
6 Field Studies
  6.1 The Mock-up
  6.2 Calibration problems
7 Analysis
  7.1 How do we help robot users understand different calibration techniques?
  7.2 Why aren't users defining work objects?
  7.3 Why work object points should be saved
  7.4 Why the work object should be picture illustrated
  7.5 Why the TCP points and calibration movements should be saved
  7.6 Problems encountered on installation
  7.7 Why pictures are necessary on revolution counters update
  7.8 Miscellaneous problems and thoughts about calibration
8 Results
  8.1 Reorganisation
  8.2 Text viewer
  8.3 The Guide
9 Discussion
10 Conclusions
  10.1 Limitations
  10.2 Future work
References
Chapter 1
Introduction
Good performance on a robot implies precision and accuracy. To obtain these qualities
a robot needs to be correctly calibrated. Without accurate calibration, robot programs
will be off target, and inaccuracy can also lead to unnecessary crashes. Some of the
techniques used in robot calibration are performed manually by humans, and when humans
are involved, human errors are introduced: errors like performing methods wrongly, or
not performing methods that may not be vital but are highly recommended. These errors
are important to diminish.
This thesis investigates, with a focus on usability aspects, the different problems with
some of the calibration techniques on an ABB robot.
ABB is one of the world's leading engineering companies; it helps its customers use
electrical power effectively and increase industrial productivity in a sustainable way.
ABB is a global leader in power and automation technologies that enable utility and
industry customers to improve their performance while lowering environmental impact.
ABB operates in more than 100 countries and has offices in 87 of them to give its global
and local customers the support they need to develop and conduct their business
successfully.
1.1 Thesis Outline
Chapter 2 explains the problem statement of the project; the goal, the purpose, and
the methods used in the thesis are also presented.
Chapter 3 describes the robot system: what an ABB robot looks like and the components
needed to execute a robot program. This chapter also explains some of the robot's
coordinate systems and the relations between them. How to perform certain
calibration techniques is also explained.
Chapter 4 presents the in-depth study of the thesis; it explains and discusses the topic
of intelligent interface agents: why we need agents, what they can do for the user,
the challenges with them, and comparisons between different types of agents.
Chapter 5 describes how the graphical user interface (GUI) on the FlexPendant looks
today, and also presents the first mock-up implemented. Differences between the
mock-up and the original GUI are pointed out.
Chapter 6 summarises the field studies: what users in the field had problems with
when performing robot calibration, and what they thought about the mock-up.
Chapter 7 analyses the major problems discussed in the field studies; real-life examples
explaining the problems are presented, and solutions are given along with everyday
examples of why the solutions would help and assist the users.
Chapter 8 presents the results of this thesis. It shows and discusses the improvements
made on the mock-up based on the data collected from the field studies.
Chapter 9 discusses the project: what the project is about, but more importantly what
it is not about. Which focus usability took in this project, and why some things
were left out, is also presented.
Chapter 10 draws conclusions; it explains the limitations of the mock-up and discusses
what the author believes ABB should look into in the future concerning robot
calibration with a focus on usability.
Chapter 2
Background
This chapter explains the problem statement, the goal, and the purpose of this thesis.
The methods used in the project are also discussed.
2.2 Purpose
The purpose of this thesis is to get end users to realise that calibration is important and
necessary for getting great performance out of ABB robots. The second part of the thesis
is to develop a vision for how the calibrations on ABB robots can be reshaped to become
more usable, more uniform, and of higher quality for end users.
2.3 Goals
The goal is to find out why calibration techniques such as work objects (they are ex-
plained later in this paper) are not widely used, apparently only 10 out of 500 customers
in France are using work objects in their robot programs. When the problem cause is
found a mock-up is to be implemented to present a solution to this problem.
2.4 Methods
The project consisted of four parts; in the beginning an inventory of different current
calibration offers was made. It included an understanding of some current calibration
techniques and laboratory work with a robot; it also included discussions with key
persons for a better understanding of a robot and its calibration.
The second part consisted of literature studies in the area of HRI (Human-Robot
Interaction) and intelligent interface agents. This was conducted to get a theoretical
background in usability and robots. The second part also included a mock-up
implementation; the mock-up was implemented because some ideas needed to be shown
to end users in the forthcoming field study.
The third part was the field studies, which were made with a company called Wolf
Robotics in Fort Collins, Colorado, USA, and at ABB, located in Auburn Hills, Michigan,
USA.
The fourth and final phase of the thesis involved an analysis of the field studies, and
a final mock-up implementation illustrating how workflows could be improved when
dealing with robot calibration.
Chapter 3
The Robot System
This chapter describes the robot system, covering the mechanical unit, the controller,
and the FlexPendant. Different types of coordinate systems and the relations between
them are discussed, and how to perform a work object calibration and a tool center point
calibration is explained.
A robot system (fig 3.1) comprises manipulator(s), controller(s), and all equipment
that is controlled by the controller (such as tools, sensors, etc.). It also includes the
hardware and software required to operate the manipulator [4]. The manipulator is the
mechanical unit which performs the work. The controller is the ”brain” of the manipulator;
it consists of a processor, memory, and all other electronic devices that are required
to handle a manipulator. Finally there is a unit called the FlexPendant, which is a touch-
screen device that controls the manipulator.
A The Manipulator
B The Controller
C The FlexPendant
The manipulator (fig 3.2), or robot, which ABB manufactures has six axes. The figure
shows the different axes and how they move.
The FlexPendant (fig 3.3), sometimes also called the TPU (Teach Pendant Unit), is a
touch-screen device for handling many of the functions involved in operating the robot
system, for instance running and writing programs, jogging (moving, running) the
mechanical unit, and producing and editing application programs [3].
A Connector
B Touch screen
C Emergency stop button
D Enabling device
E Joystick
F Program buttons
Keywords [4]:
– Manipulator A Manipulator is a generic term for mechanical units that are used
to move objects, tools, etc. The term also includes robots as well as positioners.
– Robot A robot is a mechanical unit with a tool center point (TCP). The term
robot does not include the controller.
– Positioner The positioner is a mechanical unit used to move a work object. It can
have one or more axes, normally no more than three axes. A positioner normally
does not have a TCP.
– Robot cell A robot cell comprises all parts needed for production.
For the robot to be accurate and precise, calibration is very important. If the robot or
any external axis is not calibrated correctly, the outcome is bad positioning in one of the
coordinate systems, and this has a negative effect on the agility of the robot [1].
The most important coordinate systems to understand for this thesis are the base,
world, tool, user, and object frames.
Figure 3.5: The world frame coordinate system of the robot [4]
Another situation where the world frame coordinate system is used is when different
robots work together within the same working space. If the robot cell contains several
robots that need to communicate with each other, then the same world frame coordinate
system is usually defined for all the robots.
Figure 3.6: Center of the platter of axis six on the robot [4]
All tools must be defined with a tool center point (TCP) (fig 3.7). The TCP is the point
that reaches the programmed point, or the point which moves when jogging the robot.
The robot can have a number of TCPs, but only one can be active at a time.
Figure 3.7: The tool frame coordinate system of the robot [4]
If a tool is replaced, torn, worn, or changed, the tool frame has to be redefined, but
the program does normally not have to be altered [1].
– The majority of all applications deal with moving TCPs; a moving TCP moves in
space along with the manipulator. A typical moving TCP would for instance be
the tip of an arc welding gun (fig 3.7).
– When a stationary TCP is used, the manipulator moves while the TCP stays still;
in those applications the TCP is defined in relation to the stationary equipment
and not the moving manipulator.
A robot can have several work objects, either for having different work pieces or for
having the same work piece in different locations.
Suppose that a robot is working on a table (fixture), and suppose further that the table
has to be moved for some reason. If the robot program is not built up of work objects,
then the whole program is useless, because all the robot targets are in relation to the
base frame of the robot. Hence, if the fixture moves, the points in space are no longer
in the same relation as before. On the other hand, if the robot program was built up of
work objects, then the programmed robot points are in relation to the work object,
which in turn is in relation to the base frame. Since the work object is definable by the
user, the only thing that needs to be done is to redefine the work object, and the relation
between the work object and the base frame will automatically be updated. Once the
work object is redefined, the programmed points move with it; the robot can now continue
the same work on the fixture as before, even though the fixture has been moved.
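The scenario above can be condensed into a small sketch: targets are stored relative to
the work object, so moving the fixture only requires redefining the work object pose.
The translation-only frames and all the numbers below are invented for illustration; a
real work object also carries an orientation.

```python
def to_base(wobj, target):
    """Express a target (defined in the work-object frame) in the base frame.
    Translation-only for brevity; a real frame also has a rotation part."""
    return (wobj[0] + target[0], wobj[1] + target[1])

# Programmed points, stored once, relative to the work object.
targets = [(0.10, 0.00), (0.10, 0.05)]

wobj = (1.00, 0.50)                        # work object as first defined
before = [to_base(wobj, t) for t in targets]

wobj = (1.20, 0.65)                        # fixture moved: redefine the wobj only
after = [to_base(wobj, t) for t in targets]
```

The target list itself is untouched; only the single work-object definition changes, and
every programmed point follows automatically.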
This technique is well suited for offline programming, since work objects can first be
defined in a virtual world and then updated rather easily in the physical world. This is
often needed, because the virtual world rarely coincides exactly with the physical world.
This figure (fig 3.9) summarises the different coordinate systems and the relation be-
tween them.
Figure 3.9: Different frames of the robot and their relation [4]
To define a tool using the robot as a measuring tool, a world-fixed tip is needed within
the robot's working range. The robot must then be jogged to at least four different
positions, each as close as possible to the world-fixed tip. These positions are called
approach points (fig 3.10) [1].
To define a more accurate tool, more than four points can be used. To specify the
tool's orientation, elongator points are used; the elongator points are defined by
pointing out the z axis and the x axis from the world-fixed tip.
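The idea behind the approach points can be sketched with a simplified, two-dimensional
version of the problem: each flange pose places the same unknown tool offset on the same
world-fixed tip, which yields a linear system in the offset. The sketch below uses only
two poses and exact 2D rotations; the real procedure works in 3D with at least four poses
and solves an overdetermined system. All numbers are fabricated for illustration.

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def solve2(a, b):
    """Solve the 2x2 system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return ((b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (b[1] * a[0][0] - b[0] * a[1][0]) / det)

# True (unknown) tool offset in the flange frame, used only to fabricate test data.
true_tcp = (0.05, 0.02)
tip = (1.0, 0.8)  # world-fixed tip

def flange_pos(theta):
    """Flange position that puts the TCP exactly on the tip: p = tip - R @ tcp."""
    r = rot(theta)
    return (tip[0] - (r[0][0] * true_tcp[0] + r[0][1] * true_tcp[1]),
            tip[1] - (r[1][0] * true_tcp[0] + r[1][1] * true_tcp[1]))

# Two approach poses with different orientations.
t1, t2 = 0.0, math.pi / 2
r1, r2 = rot(t1), rot(t2)
p1, p2 = flange_pos(t1), flange_pos(t2)

# R1 @ tcp + p1 = R2 @ tcp + p2  =>  (R1 - R2) @ tcp = p2 - p1
a = tuple(tuple(r1[i][j] - r2[i][j] for j in range(2)) for i in range(2))
b = (p2[0] - p1[0], p2[1] - p1[1])
tcp = solve2(a, b)  # recovers the tool offset from the approach poses alone
```

The recovered `tcp` equals the true offset; the calibration never needs to measure the
tool directly, only the flange poses at the fixed tip.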
Figure 3.11: Defining work objects with the robot as a measuring tool
Chapter 4
Intelligent Interface Agents
This chapter presents the in-depth study of the thesis; it focuses on intelligent interface
agents. What agents can do for the user and the challenges with them are discussed;
differences between wizards and guides are mentioned, and direct manipulation and
autonomous agents are compared.
Applications nowadays are getting more and more complex, largely due to the huge
number of features included in them. Jensen Harris, the Lead Program Manager for
the Office User Experience at Microsoft, says of Microsoft Word that [10] ”what was
once a simple structure to visualize is now a more complicated, branching structure.
Browsing for features is now less like looking at a shopping list and more like traversing
a complex data structure”. Microsoft Word 2003 has over 1500 components, and to
support this massive toolset the application contains 31 toolbars along with 19 task
panes and a number of smart tags; the smart tags were designed to help people discover
the functionality [21].
On top of already having far more components than a non-expert could handle, there
are no signs of a decrease in the number of components in the future. The figure below
(fig 4.1) shows the rapid increase in menu items for every new release of Microsoft Word.
Applications like Microsoft Word are bloated, and the question is how we solve this
complexity problem. How do we help end users use the features and methods in an
application, and more importantly, how do we help end users understand the features
and perform them? Even graphical user interfaces (GUIs) add to the learning complexity
for inexperienced users [22], which leads to unnecessary failures. These problems are
highly noticeable in the ABB robotics world too; the application on the FlexPendant
offers great methods, but they are sometimes hard to grasp and perform.
Intelligent interface agents are a way of alleviating these problems. Imagine having
a teacher who could tutor you at any time of the day, who was always there to guide and
inform you. Or imagine that you had a supervisor whom you could ask any question,
dumb or smart, any time and any day of the week. Imagine now that the tutor or
supervisor is not human, but rather a computer program called an intelligent agent.
Intelligent interface agents could guide and inform users so that they can solve a
problem. Agents can teach users the interface by guiding them step by step through
an entire process, and can also prevent users from making mistakes [7]. Intelligent
agents are computer programs that simulate a human relationship by doing something
that another person could otherwise do for you [18].
In the 1990s, drawing from traditional research within artificial intelligence and
human-computer interaction, a new paradigm was born: the software agent [15]. Kay
[12] argues that the interface agent has the ability to revolutionize computing science
as we know it today; agents will allow users to go from direct manipulation to delegated
interaction. Agents will act as experts, so users will not always need the specific
”know-how” to accomplish certain tasks. Regular people can then accomplish things
that previously required experts.
The word agent is often used to describe people who assist or help others to achieve a
certain goal. Agents could for instance be travel agents, personal assistants, secretaries,
etc. The work an agent does is to assist someone in the best possible way.
For the relationship between the agent and the user to work, the interaction between
them must be flawless. This is also the case with virtual computer agents: if the
interaction between the computer agent and the user is good, the advice or help from the
agent will most likely be of great interest to the user.
4.2.1 Intelligence
The word ”intelligence” is often associated with human intelligence, the kind of
intelligence that doesn't exist in today's computer technology. But the word can also mean
expertise and knowledge of how to accomplish certain tasks; this is the kind of
intelligence that this paper refers to and focuses on.
4.2.2 Interface
For an agent to be considered an ”interface agent”, the agent is required to communicate
through the input and output of the user interface. An agent can observe actions taken
by the user in the interface and act according to its findings. The agent should also be
able to manipulate the interface, for example by adding graphics or animation to it [11].
4.2.3 Agent
Defining the word agent is not easy; almost every researcher in the human-computer
interaction field has their own definition. Lieberman [13] defines an agent as ”any
program that can be considered by the user to be acting as an assistant or helper, rather
than as a tool in the manner of a conventional direct-manipulation tool”.
Maes [14] describes agents as ”Instead of user-initiated interaction via commands and/or
direct manipulation, the user is engaged in a co-operative process in which human and
computer agents both initiate communication, monitor events and perform tasks. The
metaphor used is that of a personal assistant who is collaborating with the user in the
same work environment.”
The most dominant interaction between users and computers today is direct
manipulation, which Shneiderman [19] describes as requiring the user to initiate all tasks and
monitor all events. This interaction needs to change if untrained users are to perform
effectively and efficiently with an application.
For non-professional users to perform well on computers, help is needed: help that they
can get at any time of day and for as long as they need, help that never gets tired of
helping and that returns feedback at electronic speed. This help could come from a
software agent; this agent would be an electronic advisor and assistant. Maes [14]
discusses four main things that an agent can do for the user:
At any given time the interface agent must at least have an idea of what the user
is trying to do in order to give effective assistance without being annoying. Another
problem can occur when users deal with multiple tasks: the agent must know when
the user stops one task and starts another [15].
Middleton [15] says that there is currently very little research on how users can best
be helped. He argues that real user trials are needed to demonstrate and evaluate the
effectiveness and usefulness of agents. If an agent doesn't reduce the user's workload
in a real working environment, it does more harm than good.
Guides work completely differently from wizards; guides provide assistance by first
observing the interaction between the user and the system, and then presenting
information according to the findings. A guide often lies ”on top” of the original GUI,
and communicates how to perform the next step in a task while drawing a person's
attention to where the next step will occur [7].
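A guide of this kind can be reduced to a very small sketch: the guide observes UI
events, keeps track of the current step, and returns both a message for its textbox and
the name of the widget to highlight. The step names and widget identifiers below are
invented for illustration and do not correspond to the actual FlexPendant interface.

```python
class Guide:
    """Minimal guide sketch: it observes events without altering the underlying
    GUI, and tells the user what to do next and where it will happen."""

    # (expected event, textbox message, widget to highlight) -- all hypothetical
    STEPS = [
        ("select_tool",  "Select the tool to calibrate.",        "tool_list"),
        ("jog_to_point", "Jog the robot close to the fixed tip.", "jog_panel"),
        ("save_point",   "Save the approach point.",              "save_button"),
    ]

    def __init__(self):
        self.index = 0

    def on_event(self, event):
        """Called by the UI layer for every user action; the guide only observes."""
        expected, _, _ = self.STEPS[self.index]
        if event == expected and self.index < len(self.STEPS) - 1:
            self.index += 1
        return self.hint()

    def hint(self):
        _, message, highlight = self.STEPS[self.index]
        return message, highlight  # textbox text plus the widget to mark

guide = Guide()
msg, widget = guide.hint()                  # initial instruction and highlight
msg, widget = guide.on_event("select_tool") # advances to the jogging step
```

Because the original interface stays visible underneath, the user performs the real
steps themselves and learns the workflow while being guided.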
There are some major differences between guides and wizards, but when is one of them
better than the other?
Because wizards break down a task into subtasks, they are best suited when the
problems are linear and can be solved algorithmically. Since wizards alter the original
GUI, the user doesn't learn that much; wizards are at their best when users do not use
a particular method often or do not care about how to perform it. When tasks are
performed very infrequently it is less important that users learn them; however, if it
is very important that the task is performed successfully, wizards are the best choice.
Guides, on the other hand, assist the user without altering the GUI; this means that
there are more opportunities to make mistakes than with wizards. But since guides lie
on top of the original GUI, the user learns more. When tasks need to be performed
quite often, and/or it is necessary that users be educated about them, guides are often
the choice to implement.
Autonomous agents take a slightly different approach; here the idea is that the agent
may need to interact with the interface simultaneously with the user [13]. Autonomy
means always running and self-controlled, and autonomous agents are precisely that:
they operate in parallel with the user, and when the agent discovers a situation that
needs to be viewed by the user, it shows the information to them.
Assistants that need constant supervision and specific instructions are not very helpful
or time-saving; on the other hand, if assistants were allowed to act independently based
on previous knowledge or delegation, they would be of tremendous help. Some
autonomous agents operate outside the user interface; for instance, there are programs
that send e-mail to clients to notify them that a webpage of great interest to that
specific user has been updated [13].
For an agent to be considered both an interface agent and autonomous, there must be
some part of the interface that the agent operates by itself. The user must also be
able to see autonomous actions taken by the agent, and the agent must be able to see
actions taken by the user in the interface [13].
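The parallel operation described above can be sketched as a background thread that
watches for a condition of interest and posts a notification for the interface to display.
The watch callback and the notification queue are illustrative inventions, not part of
any real agent framework.

```python
import queue
import threading
import time

class AutonomousAgent:
    """Sketch of an autonomous agent: it runs in parallel with the user and
    posts a notification as soon as its watch callback reports something."""

    def __init__(self, watch, interval=0.01):
        self.watch = watch                  # callable returning a message or None
        self.interval = interval
        self.notifications = queue.Queue()  # the UI layer would drain this queue
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        # The agent observes on its own, without the user initiating anything.
        while not self._stop.is_set():
            message = self.watch()
            if message is not None:
                self.notifications.put(message)
            time.sleep(self.interval)

# A toy "webpage updated" watcher, echoing the e-mail notification example above.
state = {"page_updated": True}
agent = AutonomousAgent(
    lambda: "Page updated!" if state.pop("page_updated", None) else None)
agent.start()
note = agent.notifications.get(timeout=1)  # the agent noticed it by itself
agent.stop()
```

The user never asked for the notification; the agent discovered the situation on its own
and merely surfaced it, which is exactly the division of labour described above.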
Lieberman [13] says that autonomous interface agents work best in situations where
their decisions are not critical. People are afraid of letting go of control, since that
could lead to bad decisions made by the agent without the user's consent, and this
fear is justified. There are, however, many scenarios where the absolute best choice is
not needed for the agent to be useful; sometimes the ”good enough guess” is highly
appreciated.
The aim of direct manipulation is to let users directly manipulate the objects presented
to them. To help users learn and use the interface, intuitive touches such as real-world
metaphors for actions help. Instant feedback helps users reduce their errors, because
with direct manipulation users see the results of their actions before completing the
procedure. A great example of direct manipulation is the Windows Paint program: to
resize a graphical object, such as a rectangle, the user grabs one of its edges and drags
it with the mouse. While the user holds down the mouse button and drags, the interface
shows how the object is being reshaped.
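The resize example can be reduced to a few lines: the essence of direct manipulation is
that every intermediate input event updates the object immediately, so the user sees
the result before committing. The Rectangle class below is a deliberately minimal
stand-in for a real drawing canvas.

```python
class Rectangle:
    """Minimal stand-in for a drawable object on a canvas."""

    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def drag_right_edge(self, dx):
        """Direct manipulation: every mouse-move event updates the shape
        immediately, so the user sees the result before releasing the button."""
        self.w = max(1, self.w + dx)
        return self.w  # the canvas would redraw with this width right away

r = Rectangle(0, 0, 100, 50)
for dx in (5, 5, -2):          # three mouse-move events during one drag
    r.drag_right_edge(dx)      # instant feedback after each event, not at the end
# the final width reflects every intermediate drag step
```

The contrast with a command-based interface is that there the width would only change
once, after the user typed and confirmed a value.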
Even if some researchers argue that direct manipulation is more intuitive and natural,
with Ben Shneiderman arguing that direct manipulation is the only way to go, Pattie
Maes [20] does not see eye to eye with them. She says that users need software agents
because ”our current computer environment is getting more and more complex, and the
users are becoming more and more naive; the number of tasks to take care of, and the
number of issues to keep track of, are continuously increasing”. In the Shneiderman and
Maes debate at the Intelligent User Interfaces conference [20] she said:
”As we know from other domains, whenever workload or information load gets too
high, there is a point where a person has to delegate. There is no other solution than to
delegate. For example, many of you may have students that you delegate certain tasks
to, or you may have personal assistants that you delegate certain tasks to, not because
you can’t deal with those tasks yourself, but because you are overloaded with work and
information. I think the same will happen with our computer environments: that they
become just so complex and we use them for so many different things that we need to be
able to delegate. We need to be able to delegate to what could be thought of as ”extra
eyes or extra ears” that are on the lookout for things that you may be interested in. We
also need ”extra hands or extra brains,” so to speak, because there will be tasks that
we just cannot deal with because of our limited attention span or limited time, and we
need other entities to be able to represent us and act on our behalf.”
Even if we let agents act on our behalf, we still need to feel in control of them; we need
to be able to bypass them if we want to, and we need to understand what they are doing
[20]. This is especially vital in the robotics world: if bad decisions are made without
human consent, there could be both time and financial losses.
There are, however, some issues left to discuss in the future. For instance: should there
be more than one agent? Should agents use personification, such as facial expressions?
What is the best metaphor for an agent? Should a user be responsible for actions made
by their agent?
Nevertheless, just as with human communication, the nature of the relationship between
the two intelligent agents, human and machine, is paramount to success.
Chapter 5
Mock-up
This chapter presents the first mock-up made. It explains the technical parts of the
implementation, such as which code version was used. It compares the original interface
against the mock-up, and it presents two new features: a guide and a text viewer.
One of the problems discovered at the beginning of the project was that work objects
were not easy to perform or understand. The manuals explaining the different techniques
were not written for first-time users, and the structure around calibration was not
intuitive.
Since these problems were discovered before the field studies, a mock-up was generated
to present solutions to some of the basic problems. The mock-up was also to be used as
something for the ”experts” in the field to remark on, and to compare against the
original graphical user interface (GUI).
The mock-up restructured the calibration offerings so that they would seem more
intuitive to the user, and it also included an intelligent software agent (guide) that in a
simple way guided the user through a tool center point calibration. The guide consisted
of a textbox that gave information and recommendations to the user while calibrating a
tool center point. The guide also drew the user's attention to where the next step would
occur by marking it on the GUI.
The code was built on ABB code version 5.08.0144. The language was C#, and the
editor used was Visual Studio 2005 Professional Edition. The mock-up was continuously
tested on the Virtual Control Test Application, which is the virtual substitute for a real
robot controller. To test on a real controller, the code has to be downloaded to the
robot's controller; to do so, the computer's network connection has to be connected to
the controller's service port.
The final mock-up, mentioned in chapter 8, was tested on actual controllers. The code
works on the IRC5 controller with a 5.08.0144 system. The FlexPendant, the device
that controls the robot and on which the application interface runs, has Windows CE as
its operating system. The code was successfully tested on a single system with an IRB
1400 robot, on a single system with an IRB 6600 robot, and on a MultiMove system with
two IRB 1400 robots.
The window for calibrating a tool center point is shown in figure 5.4.
5.3 Restructuring
Since both the tool center point and the work object methods are called calibration techniques, and since there is a menu called Calibration, it is more intuitive to place the two methods there (fig 5.5).
After clicking the calibration label, the user has to choose which robot to calibrate (fig 5.6).
In the calibration window there are three different techniques to choose from (fig 5.7): the tool center point, the work object, and the mechanical unit calibration.
If the tool center point button is clicked, three options are presented (fig 5.8). The first option is a text viewer, which is explained in more detail in the next section; the second is the guide mode, where a guide helps the user perform a tool center point (TCP) calibration; and the last option is for the ”experts” who want to perform a TCP calibration without help.
5.5 The Guide
The guide is basically built around a textbox that shows information and recommendations to the user depending on what they are doing. The first step in the guide mode is to decide whether a new tool is to be created and then calibrated, or whether an old tool is to be recalibrated (fig 5.11). Besides giving information, the guide draws the user’s attention to where the next step will occur (fig 5.11 and 5.12).
Chapter 6
Field Studies
This chapter summarises the field studies: the problems users face in the field are described, as well as what users and experts thought about the mock-up.
The field studies took place in Fort Collins, Colorado, USA and Auburn Hills, Michigan, USA. The company visited in Fort Collins was Wolf Robotics; they build robot cells and do robot installations using ABB robots. The company visited in Auburn Hills was ABB; they do a lot of installations for welding customers, and they also have training facilities where employees from different companies go for robot training. The people interviewed in the States were those who deal with the FlexPendant on a daily basis: installation technicians, customer support staff, software engineers, and so on.
All the employees interviewed were asked a set of questions (listed in Appendix A), were shown the mock-up, and then the robot calibration topic was discussed.
This is a summary of what the interviewed employees thought about the mock-up:
– The textbox is a great idea, because the users now have their own teacher with
them all the time. With the textbox the end users know how to do certain tasks
but most importantly they can read and understand why they need to do them.
– It’s good that the ”guide mode” has default values so that the end users know
what the default values (the recommended values) are when calibrating a robot.
– The idea that the ”guide mode” has the same background as the original graphical user interface (GUI) was appreciated, because it means that end users, after some time, learn how to perform the method.
– The terminology in the guide must be formal and easy to understand. One of the
employees at Wolf Robotics said ”text should be in a formal language, nobody
cares about the math or the technique behind the method, what the customers do
care about is what the technique can do for them, the guide should present some
pros and cons with every choice”.
– Another employee at the same firm said that ”it would be better that the guide
presents text that compares the default option to the selected one instead of just
explaining what the selected value is, this would lead to better knowledge for the
customer”.
– The biggest reason why end users don’t use work objects is that they don’t know why they need them or what the technique can do for them.
– When a tool center point (TCP) is calibrated, people often don’t know what an elongator point is; they also don’t know why and when they should use more than four points to calibrate a TCP.
– When calibrating a TCP, it would save time to know which of the points is furthest off, i.e. which point causes the biggest discrepancy.
– The feedback the application gives about the precision of the tool after a TCP calibration is very poor.
– It is difficult to figure out how the work object is placed, how the work object
slopes and where the z axis points (fig 6.1).
– The work object points should be saved; the benefits are explained in the analysis
chapter.
– When a robot program is created offline, the installation technician needs to step through every point during installation to test whether the program actually runs correctly. Since the installation technician has no knowledge of the program, he or she does not know where the next point is placed in space. This is a major problem and can easily lead to robot crashes, tool crashes or damage to the fixture.
– There are benefits for saving TCP calibration points and they are explained in the
analysis chapter.
– It is difficult to know where all the markings are on a specific robot model when trying to do a revolution counter update.
– The FlexPendant uses the metric system, which is hard to grasp for people who don’t use millimetres on a daily basis; Americans, for instance, are more familiar with inches.
Chapter 7
Analysis
This chapter analyses the problems mentioned in the previous chapter. Real-life examples illustrating each problem are presented, as well as solutions to each and every problem.
Another appreciated part of the mock-up, which would assist people performing robot calibration, is the guide. The guide acts like a tutor: it assists the user step by step through the calibration process and explains why and how one should perform a work object or a TCP calibration. The guide also gives instant feedback and recommendations.
For users who do not program offline, that is, who write the robot program on the FlexPendant rather than in RobotStudio, there could be some sort of warning (a pop-up) notifying them that programming without work objects may not be a good idea.
The reasons for not knowing or learning about work objects are many. The major reason is time constraints: the people whose job it is to set up and program a robot cell are pressured by both management and deadlines to get the cell working as soon as possible. Because of the heavy workload, and because it initially takes longer to set up a robot cell with work objects than without, they often don’t use them.
Other reasons for not having work objects are that sometimes the robot has a big tool mounted on it, which makes it difficult to perform the calibration, and sometimes the tool lacks the pointing device necessary for defining work objects. If the robot lacks a pointing device, the installation technician has to unmount the existing tool, mount a pointing device, perform the tool center point calibration, define the work objects, unmount the pointing device, and remount the original tool. Even though work objects might take more time in the beginning, the technique is very powerful and time saving in the end, because of its ability to move whole programs to any location in space simply by redefining the work object.
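The mechanics behind this are straightforward: program targets are stored in the work object’s coordinate system, so re-teaching the frame’s defining points moves every target with it. The sketch below illustrates the idea with a three-point frame definition; the function names and the NumPy formulation are illustrative, not ABB’s actual implementation.

```python
import numpy as np

def frame_from_points(origin, x_point, xy_point):
    """Build a work object frame from three taught points: the frame
    origin, a point along the +x axis, and a point in the xy plane
    (a common three-point definition)."""
    x = x_point - origin
    x = x / np.linalg.norm(x)
    y = xy_point - origin
    y = y - (y @ x) * x              # make y orthogonal to x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)               # right-handed z axis
    return np.column_stack([x, y, z]), origin

def to_world(frame, local_target):
    """Map a target stored in work object coordinates to world
    coordinates: p_world = R @ p_local + origin."""
    rotation, origin = frame
    return rotation @ local_target + origin
```

Because every programmed target is expressed in the local frame, re-running `frame_from_points` on the three re-taught points after the cell has moved updates the position of the whole program at once.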
When customers want to double their productivity, the robot cell is often copied and set up at a different location. When a robot cell is copied, the new cell and the old one rarely coincide exactly, so saved work object points would be of great interest in this example as well.
Two more scenarios where saved work object points come to use are when the robot cell is built in one place and then shipped to another location, or when the fixture that the robot is working on has somehow moved.
Let’s assume a robot is programmed without RobotStudio, and let’s also assume that the robot or tool crashes, or that the fixture somehow moves, two years after the initial setup. Whoever recalibrates the robot and its work objects needs to find out where the work object was originally placed, where the z axis pointed, and whether there were any particular reasons why the work object was placed that specific way. For the operator, this information is nowhere to be found on the FlexPendant, which becomes a huge problem in the field; often the whole robot cell needs to be rebuilt from scratch.
Pictures of the work object, and text explaining it, would be really helpful if they could be viewed on the FlexPendant. The picture could be taken of the actual fixture and downloaded to the controller; the text could be typed in on the FlexPendant.
7.5 Why the TCP points and calibration movements should be saved
Let’s say that the TCP is bent or broken somehow and that there exists a similar tool that could replace the broken one. Because the replacement tool will not have exactly the same geometry as the original, the new tool has to be calibrated to make sure that the TCP is accurate. If the TCP points and movements from the original calibration were saved, the installer could press a button and the robot could automatically move to each point; then, if a point is off, the installer could modify the position instead of jogging the robot to each and every position. This would be a huge time saver and would also lead to a more accurate calibration of the TCP.
Another benefit of saving the TCP points is that the system could compare the old tool against the new one. The user could get information about how accurate the new tool is compared to the old one, and also about which of the points in the new tool causes the biggest discrepancy compared to the old tool.
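Both the more-than-four-points question and the per-point discrepancy reporting can be illustrated with a standard least-squares formulation of the TCP problem: each of the N flange poses (R_i, p_i) touches the same fixed reference point c with an unknown tool offset t, so R_i t + p_i = c for every pose. The following is a NumPy sketch of that general idea with hypothetical function names, not ABB’s actual algorithm:

```python
import numpy as np

def estimate_tcp(rotations, positions):
    """Least-squares TCP estimate from N >= 4 flange poses that all
    touch the same fixed reference point.  Each pose satisfies
    R_i @ t + p_i = c, with t the tool offset in the flange frame and
    c the fixed point in world coordinates; the equations are stacked
    as A @ [t; c] = b and solved for both unknowns at once."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(rotations, positions)):
        A[3 * i:3 * i + 3, 0:3] = R
        A[3 * i:3 * i + 3, 3:6] = -np.eye(3)
        b[3 * i:3 * i + 3] = -p
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    t, c = x[:3], x[3:]
    # Per-point residual: how far each pose's tool tip lands from c.
    # The largest residual identifies the point with the biggest
    # discrepancy -- the feedback the interviewees asked for.
    residuals = [float(np.linalg.norm(R @ t + p - c))
                 for R, p in zip(rotations, positions)]
    return t, c, residuals
```

In this formulation, using more than four poses simply makes the system overdetermined, which averages out pointing errors, and the residual list gives the which-point-is-most-off feedback essentially for free.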
There are some ideas that could make the setup of a robot cell safer. One thing that could be done is to have some sort of ”pre modify position” where the robot stops before reaching the target point. When the robot stops at its ”pre modify position”, the technician could go to the robot and ensure that no obstacles are in the way.
Something that might work better is to have a picture of every point the robot is going to; this way the technician would know where the robot is heading and could take precautions accordingly. Even though this approach would be helpful for the technician, it would mean that a picture of every point on the fixture must be taken. It is, however, not reasonable to assume that the programmer is going to copy and paste every point from RobotStudio; it would take a lot of time and would be very tedious.
Another solution, which would solve this problem without burdening the programmer, would be a counter: when the installation technician presses the button that makes the robot go to the next robot target, the counter counts down the distance to the target in mm or cm. The distance could be shown in x, y and z. This information would be more than enough, and it would be especially important when robot targets are near a fixture, so that the operator knows how close to the fixture the robot is actually going.
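The countdown itself is trivial to compute once the controller exposes the current TCP position and the next target; a minimal sketch (the function name is hypothetical, and positions are assumed to be millimetre tuples):

```python
import math

def distance_to_target(current, target):
    """Remaining distance from the current TCP position to the next
    robot target, reported per axis and as a straight-line total.
    Both positions are (x, y, z) tuples in millimetres."""
    dx, dy, dz = (t - c for c, t in zip(current, target))
    return {"dx": dx, "dy": dy, "dz": dz,
            "total": math.sqrt(dx * dx + dy * dy + dz * dz)}
```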
After the revolution counters are updated, the robot should go to its sync position so that the technician can see if something is wrong with the robot. There should also be some sort of pop-up telling the technician to actually go to the robot and check its sync position. In real workplaces there are often fences around the robot, and technicians frequently program and calibrate the robot from outside the fence, which in the end does not give good accuracy.
Suppose that axis six is badly calibrated, and suppose further that a user defines a work object while turning axis six. The work object would get an offset unknown to the user. Suppose instead that the user did not turn axis six when defining the work object; the work object would still get an offset, but the offset would be the same for every point. It is better to have the same offset on every point than a different offset on each point, so it is recommended to use as few axes as possible when defining work objects.
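The difference between a uniform and a varying offset can be made concrete with a small numeric sketch (the coordinates are illustrative): a constant error translates the whole work object but preserves its internal geometry, while per-point errors distort the frame itself.

```python
import math

def pairwise_distances(points):
    """All pairwise distances between a set of 3D points -- a simple
    proxy for the internal geometry of a taught frame."""
    return [math.dist(p, q)
            for i, p in enumerate(points) for q in points[i + 1:]]

# Three points that would define a work object frame.
frame_pts = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 50.0, 0.0)]

# Axis six not moved: every point gets the same offset, so the frame
# is merely translated and its geometry is preserved.
constant = [(x + 2.0, y + 1.0, z) for x, y, z in frame_pts]

# Axis six turned between points: each point gets a different error,
# so the frame itself is distorted.
varying = [(x + e, y, z)
           for (x, y, z), e in zip(frame_pts, (2.0, -1.5, 0.7))]
```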
7.8 Miscellaneous problems and thoughts about calibration
There is a need for warnings/notifications when renewing tool center points or work objects. When users recalibrate an old tool, there should be some sort of pop-up that mentions which modules and/or routines will be affected by the change; if a user frame is modified, it should mention which object frames will be affected, and so on. This also helps the users’ learning process: by being warned, users come to understand the relations between the different coordinate systems.
Chapter 8
Results
This chapter presents the results of the thesis. With the data collected from the field studies, the first mock-up could be improved; this chapter discusses and explains the improvements.
As mentioned earlier, problems exist within the area of robot calibration, one of the biggest being the users’ lack of knowledge about how and why certain procedures should be done. To reduce the gap, the calibration parts of the application have been restructured to be more intuitive. Features such as text viewing on the FlexPendant have been added to the mock-up. Most importantly, a guide has been implemented to assist and inform end users throughout the calibration procedure.
Since the feedback on the mock-up collected from the field experts was very positive, big changes to the first prototype were not needed. However, the graphical appearance has changed, and the prototype is now fully functional for robots running on the IRC5 controller.
8.1 Reorganisation
The experts consulted in the field agreed that both work objects and TCPs are calibration methods; therefore the two techniques are still placed in the calibration menu (fig 8.1).
However, the calibration menu has been reorganized a bit compared to the original mock-up (fig 8.2).
The order of the buttons is now mechanical unit calibration, tool center point calibration and then work object calibration, since this is the order in which the calibration methods are supposed to be performed. If the mechanical unit is not properly calibrated, the tool center point and the work objects will be off; and if the mechanical unit is calibrated but the tool center point is not accurate, the work objects will be off.
If the user clicks and holds the yellow area on top, the guide can be moved; the guide can also be hidden by clicking the grey button in the upper left corner. To scroll the text, the user clicks the yellow arrows pointing up or down behind the text. The guide has a bright yellow colour because it needs to stand out from the rest of the interface. On the right side of the guide there is a blue rectangular object; it indicates how much text there is to read.
If the user is well aware of how to calibrate a tool, he or she does not have to use the guide; clicking ”TCP Calibration” (fig 8.2) leads directly to the TCP calibration window.
The teaching guide, named Sam, follows the user, gives instant feedback and recommendations, and marks the interface so that the user knows where the next step will occur. Marking is done by colouring the object green. When the user chooses a certain value, the guide gives instant feedback on what the user chose and what that value means for the tool. If the user clicks on any clickable object in the interface, the guide explains what it is and what it can do for the user. When users are prompted for a value and insert a wrong one, the guide warns them and explains why it is incorrect. When users have multiple choices, the guide recommends a certain value. Some screenshots of the guide in action are shown below (fig 8.5 and fig 8.6).
Chapter 9
Discussion
This chapter discusses the thesis: what it is about, but more importantly what it is not about.
This thesis discusses the different problems within robot calibration, with a focus on usability. The prototype restructures the calibration interface so that it is more intuitive to the user. The prototype also includes a guide whose main purpose is to help the user perform a certain type of calibration, the tool center point calibration, as well as text viewing so that the user does not need to consult paper manuals.
Even though some ideas have been implemented, far from every idea presented in this thesis for improving usability has been implemented. The ideas not implemented are, however, discussed, and solutions are presented.
The focus on usability in this project is taken from a larger perspective, meaning that details such as colour, contrast and icons are not taken into consideration. Even though, for instance, buttons are placed in the right ”window” and in the order the corresponding operations should be performed, it has not been considered exactly where in the ”window” they should be placed.
These details have not been considered because the author believes that, when working with usability in an application, there are more important things to consider (structure, feedback, consistency, etc.) than what colour to use, what the icons look like, or exactly where the buttons are placed. These things are also important, but not in the initial stage of a prototype.
Chapter 10
Conclusions
This chapter draws the conclusions of the thesis, presents limitations of the mock-up, and discusses the future work that the author believes ABB should look into.
The two main calibration problems dealt with are the tool center point calibration
and the work object calibration.
The problems concerning tool center point calibration are how to perform a TCP calibration and how to reduce the time it takes, how to make end users realize why they need an accurate TCP, and how to give better feedback to the user when a TCP calibration is performed. The calibration time could be reduced by saving the TCP points and the robot’s movements from the initial installation and then reusing them in subsequent calibrations; by saving the points, better feedback could also be given to the user. The text viewer implemented in the mock-up would help end users realize that an accurate TCP is crucial.
The other main calibration problem discussed in this thesis is work objects; the biggest problem is the users’ lack of knowledge about how or why they should use the technique. Users who recalibrate work objects have trouble knowing where the objects are placed and why they are positioned in a certain way. The guide and the text viewer would address this lack of knowledge: the guide teaches the user how to perform a work object calibration, and the text viewer explains why work objects are needed. Saving the work object points from the initial setup would help users who recalibrate work objects, since the robot can then show them where the work object was originally placed. Including text and pictures with every work object would answer the questions of why and how they are placed.
By adding the text viewer, users can easily read about the different techniques and gain some knowledge about why and how one should calibrate. The guide, on the other hand, acts like a tutor that never gets bored and helps the user at any time of the day; it assists, gives instant feedback and makes recommendations.
10.1 Limitations
The prototype guide is not implemented to work for a work object calibration.
Appendix A
– Tell me a little about yourself, your background and your experience with ABB robots and calibration.
– Why is it that users don’t use work objects? (Apparently only 10 out of 500 users in France know about work objects and their functionality.)
• Hard to perform?
• Hard to understand?
– What can ABB do to make the technique easier to use?
– What kind of problems can be encountered when calibrating a tool center point?
– What can be done to make the tool center point calibration easier to use?
– What kind of problems does an installation technician encounter when doing a
robot installation? (Focus on the robot calibration)
– How can ABB best help/assist the technician?
– Are wizards, guides and the like a good way to ease the ”knowledge load” on the
user?
– Which methods/techniques need to be developed by ABB to make calibration
easier, faster and more usable in the future?