Industrial Robotics Theory Modelling and Control
Edited by
Sam Cubero
Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained inside. After this work has been published by Advanced Robotic Systems International, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.
Printed in Croatia
A catalog record for this book is available from the German Library.
Contents
Preface .......................................................................................................................................................... IX
1. Robotic Body-Mind Integration: Next Grand Challenge in Robotics .................. 1
K. Kawamura, S. M. Gordon and P. Ratanaswasd
Preface
Robotics is the applied science of motion control for multi-axis manipulators and is a large
subset of the field of "mechatronics" (Mechanical, Electronic and Software engineering for
product or systems development, particularly for motion control applications). Mechatronics
is a more general term that includes robotic arms, positioning systems, sensors and machines
that are controlled by electronics and/or software, such as automated machinery, mobile ro-
bots and even your computer controlled washing machine and DVD movie player. Most of the
information taught in mechatronic engineering courses around the world stems from indus-
trial robotics research, since most of the earliest actuator and sensor technologies were first
developed and designed for indoor factory applications.
Robotics, sensors, actuators and controller technologies continue to improve and evolve at an
amazing rate. Automation systems and robots today are performing motion control and real-
time decision making tasks that were considered impossible just 40 years ago. It can truly be
said that we are now living in a time where almost any form of physical work that a human
being can do can be replicated or performed faster, more accurately, cheaper and more consis-
tently using computer controlled robots and mechanisms. Many highly skilled jobs are now
completely automated. Manufacturing jobs such as metal milling, lathe turning, pattern mak-
ing and welding are now being performed more easily, cheaper and faster using CNC ma-
chines and industrial robots controlled by easy-to-use 3D CAD/CAM software. Designs for
mechanical components can be quickly created on a computer screen and converted to real-
world solid material prototypes in under one hour, thus saving a great deal of time and costly
material that would normally be wasted due to human error. Industrial robots and machines
are being used to assemble, manufacture or paint most of the products we take for granted
and use on a daily basis, such as computer motherboards and peripheral hardware, automo-
biles, household appliances and all kinds of useful whitegoods found in a modern home. In
the 20th century, engineers have mastered almost all forms of motion control and have proven
that robots and machines can perform almost any job that is considered too heavy, too tiring,
too boring or too dangerous and harmful for human beings.
Human decision making tasks are now being automated using advanced sensor technologies
such as machine vision, 3D scanning and a large variety of non-contact proximity sensors. The
areas of technology relating to sensors and control are still at a fairly primitive stage of devel-
opment and a great deal of work is required to get artificial sensors to perform as well as human senses (vision, hearing, touch/tactile, pressure and temperature) and to make quick visual and auditory recognitions and decisions like the human brain. Almost all machine controllers are
very limited in their capabilities and still need to be programmed or taught what to do using
an esoteric programming language or a limited set of commands that are understood only by highly trained technicians or engineers with years of experience. Most machines and robots today are still relatively "dumb" copiers of human intelligence, unable to
learn and think for themselves due to the procedural nature of most software control code.
In essence, almost all robots today require a great deal of human guidance in the form of soft-
ware code that is played back over and over again. The majority of machine vision and object
recognition applications today apply some form of mechanistic or deterministic property-
matching, edge detection or colour scanning approach for identifying and distinguishing dif-
ferent objects in a field of view. In reality, machine vision systems today can mimic human vi-
sion, perception and identification to a rather crude degree of complexity depending on the
human instructions provided in the software code, however, almost all vision systems today
are slow and are quite poor at identification, recognition, learning and adapting to bad images
and errors, compared to the human brain. Also, most vision systems require objects to have a
colour that provides a strong contrast with a background colour in order to detect edges reliably. In summary, today's procedural-software-driven computer controllers are limited by the amount of programming and decision-making "intelligence" passed on to them by a human programmer or engineer, usually in the form of a single-threaded application or a complex list of step-by-step instructions executed in a continuous loop or triggered by sensor or communication "interrupts". This method of control is suitable for most repetitive applications; however, new types of computer architecture based on how the human brain works are an uncharted research area that needs exploration, modelling and experimentation in order to speed up shape and object recognition and to minimize the large amount of human effort currently required to program, set up and commission "intelligent" machines capable of learning new tasks and responding to errors or emergencies as competently as a human being.
The biggest challenge for the 21st century is to make robots and machines "intelligent" enough
to learn how to perform tasks automatically and adapt to unforeseen operating conditions or
errors in a robust and predictable manner, without the need for human guidance, instructions
or programming. In other words: "Create robot controllers that are fast learners, able to learn
and perform new tasks as easily and competently as a human being just by showing it how to
do something only once. It should also learn from its own experiences, just like a young child
learning and trying new skills." Note that a new-born baby knows practically nothing but is
able to learn so many new things automatically, such as sounds, language, objects and names.
This is a "tall order" and sounds very much like what you would expect to see in a "Star Wars"
or "Star Trek" science fiction film, but who would have thought, 40 years ago, that most people
could be instantly contacted from almost anywhere with portable mobile phones, or that you
could send photos and letters to friends and family members instantly to almost anywhere in
the world, or that programmable computers would be smaller than your fingernails? Who would have thought that a robot could automatically perform cochlear surgery and detect minuscule force and torque changes as a robotic drill makes contact with a thin soft-tissue membrane which must not be penetrated (a task that even the best human surgeons cannot achieve consistently with manual drilling tools)? Who would have imagined, 40 years ago, that robots would be assembling and creating most of the products we use every day? At the current accelerating rate of knowledge growth in the areas of robotics and mechatronics, it is not
unreasonable to believe that "the best is yet to come" and that robotics technology will keep on
improving to the point where almost all physical jobs will be completely automated and at
very low cost. Mobile or "field" robotics is also a rapidly growing area of research, as more
applications for robotic and mechatronic engineering technology are found outside the well-
structured and carefully controlled environments of indoor factories and production lines.
Technological development is now at the stage where robots can be programmed to automati-
cally plant and harvest food at low cost to end world hunger, engage in cooperative construc-
tion work to erect buildings and low-cost modular homes to house the poor, perform remote
surveying and video surveillance (land, sea, air & on other planets), automatically build space
stations or bases on the Moon or on Mars, perform fully automated mining operations deep
underground, safely transport people in flying aerial vehicles to avoid slow road traffic, mow
your lawn and recharge itself, guide blind people to their destinations using GPS or machine
vision and save human beings from the strain and boredom of highly repetitive production
work in factories. In fact, there is no limit to where practical robotic technologies may be used
to improve how people work and live. Rather than destroying factory and production jobs, ro-
bots are providing more opportunities for people to upgrade their skills to become technicians
or robot operators who are spared the difficulties of strenuous, repetitive and often boring
manual labour. We are not yet at the level of robotic automation depicted in films like "I, Robot"
or cartoons like "The Jetsons", where humanoid robots roam the streets freely, however, mod-
ern society appears to be headed in that direction and robots of all types could play an increas-
ingly important role in our daily lives, perhaps improving the way we work, shop and play.
The one truth that faces us all is that life is short and it is important to do as much "good" as
possible in the limited time that we are alive. It is important to leave behind a better world for
future generations to inherit and enjoy so that they do not suffer unnecessary burdens, physi-
cal hardships, expensive education, poor employment opportunities or very high costs of liv-
ing that leave them with little or no savings or financial incentives to work. Robotic and
mechatronic engineers, researchers and educators are in an excellent position to help leaders in
education, business and politics to understand and realize the benefits of promoting robotic
applications. All it takes is the desire to do good for others and the kind of burning enthusi-
asm and zeal that makes it difficult to sleep at night! Unfortunately, most Universities do not
teach engineers how to be effective at developing, selling, promoting and commercializing
new technologies, good ideas and useful inventions that could change the world. Many educa-
tion systems today still value "rote learning" and memorization skills over "Problem Based
Learning" projects or design-and-build activities that promote creativity. It is this kind of "in-
ventor's mindset" and "entrepreneurial spirit" which motivated the great inventors and scien-
tists of the past to keep tinkering, exploring and experimenting with new ideas and concepts
which showed good potential for being useful and practical in the real world. In the "spirit of
discovery", robotic and mechatronic engineers and researchers around the world are working
hard, relentlessly pursuing their research goals in order to discover, develop and test a new
great idea or a new technological breakthrough that could make a significant impact or im-
provement to the world of robotics and mechatronics. Sometimes this work is arduous and
difficult, requiring a great deal of patience and perseverance, especially when dealing with
many failures. In fact, good results cannot always be guaranteed in new "cutting edge" re-
search work.
Despite much frustration, the veteran researcher becomes adept at learning from past mis-
takes, viewing each failure as a necessary, vital "learning experience" and an opportunity to
make progress towards different goals which may present more interesting questions. This
kind of research and investigative work brings great joy when things are going well as
planned. I have laughed many times when very conservative research engineers jump and
even yell with joy when their experiments finally work for the first time after many failures.
The truth is, robotics and mechatronic engineering is very addictive and enjoyable because
continuous learning and solving challenging problems with a variety of intelligent people
makes every day different, unpredictable and fun. Is technological change happening too fast?
Advances in tools and devices are now happening at such a rapid pace that often, by the time students learn a particular type of software or piece of hardware, something new and better has already replaced it. Today, it is virtually impossible for an engineer to be an expert in all areas of robotics and mechatronics engineering; however, it is possible to grasp the fundamentals and become an effective system integrator, able
to bring together many different forms of technology to solve problems, and you will see
plenty of evidence of this type of problem solving in this book. Mechatronic and robotic auto-
mation engineers are becoming increasingly dependent on using "off the shelf" devices, com-
ponents and controllers. Using such commercially available components saves a great deal of
development time and cost, allowing system developers to focus on accomplishing the tasks of
designing, building and testing complete automation systems or manipulators customized for
specific applications. Perhaps the most important learning skill for a mechatronic or robotics
engineer is the ability to ask the right questions which could lead to the right answers.
This book covers a wide range of topics relating to advanced industrial robotics, sensors and
automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest "cutting edge" technologies and advancements in industrial robotics technology. Topics include networking, properties of manipulators, forward and inverse robot arm kinematics, motion path planning, machine vision and many other practical subjects too numerous to list here. The authors and editors
of this book wish to inspire people, especially young ones, to get involved with robotic and
mechatronic engineering technology and to develop new and exciting practical applications,
perhaps using the ideas and concepts presented herein. On behalf of all the authors and edi-
tors who have displayed a great deal of passion, tenacity and unyielding diligence to have this
book completed on time, I wish you all the best in your endeavours and hope that you find
this book helpful and useful in your research and development activities.
Please feel free to contact the publishers and let us know your thoughts.
Editor
Dr. Sam Cubero
Head & Course Coordinator
Mechatronic Engineering
Curtin University of Technology
Australia
1. Introduction
During the last thirty years, the fields of robotics, cognitive science and neuroscience have made steady progress fairly independently of each other. However,
in a quest to understand human cognition and to develop embedded cognitive
artifacts like humanoid robots, we now realize that all three fields will benefit
immensely by collaboration. For example, recent efforts to develop so-called
intelligent robots by integrating robotic body, sensors and AI software led to
many robots exhibiting sensorimotor skills in routine task execution. However,
most robots still lack robustness. What, then, would be the next challenge for
the robotics community? In order to shed light on this question, let's briefly review the recent history of robotic development from a design philosophy point of view.
In recent years, design philosophies in the field of robotics have followed the
classic dialectic. Initial efforts to build machines capable of perceiving and in-
teracting with the world around them involved explicit knowledge representa-
tion schemes and formal techniques for manipulating internal representations.
Tractability issues gave rise to antithetical approaches, in which deliberation
was eschewed in favor of dynamic interactions between primitive reactive
processes and the world [Arkin, 1998] [Brooks, 1991].
Many studies have shown the need for both, motivating work towards hybrid
architectures [Gat, 1998]. The success of hybrid architecture-based robot con-
trol led to wide-ranging commercial applications of robotics technologies. In
1996, a panel discussion was held at the IEEE International Conference on Robotics and Automation (ICRA) to identify the grand research challenges for the Robotics and Automation Society for the next decade.
Figure 1 shows three grand challenges identified by the panel and the progress
made in the last decade in each area.
Such an integration of robotic body, sensor and AI software led to a wide vari-
ety of robotic systems. For example, Sony’s QRIO (see Figure 1) can dance and
play a trumpet. The da Vinci robotic surgical system by Intuitive Surgical Inc.
(www.intuitivesurgical.com) can assist surgeons in laparoscopic (abdominal)
surgery.
Such robots are fluent in routine operations and capable of adjusting behavior
in similar situations. We hypothesize, however, that robustly and flexibly responding to the full range of contingencies often present in complex task environments will require something more than the combination of these design approaches. Specifically, we believe that human perceptual and cognitive flexibility and adaptability should be incorporated in the next generation of intelligent robots. We call this "robotic body-mind integration" in this paper. Thus, a
fully cognitive robot should be able to recognize situations in which its reac-
tive and reasoning abilities fall short of meeting task demands, and it should
be able to make modifications to those abilities in hopes of improving the
situation. These robots can be classified as cognitive robots.
Recently several national and international research programs were initiated
to focus on “cognitive agents” [EU, 2004; DARPA, 2005; Asada, et al., 2006]. At
ICAR2003 in Coimbra, Portugal, we proposed a cognitive robotic system
framework (Figure 2) [Kawamura, et al, 2003a].
In this chapter, we will give details of our cognitive robot architecture with its three distinctive memory systems: short-term memory, long-term memory, and an adaptive working memory system. Short-term and long-term memories are used primarily for routine task execution. A working memory system (WMS) allows the robot to focus attention on the most relevant features of the current task and provides robust operation in the presence of distracting or irrelevant events.
[Figure 2: cognitive robotic system framework — attention and emotion components over a reactive process linking perception and action with the external environment.]
2. Cognitive Architectures
The field of cognitive science has been interested in modeling human cognition for
some time. Cognitive scientists study human cognition by building models
that help explain brain functions in psychological and neuroscience studies.
Over the last decades, many different cognitive architectures and systems have
been developed by US cognitive scientists to better understand human cogni-
tion. In the following, we will briefly describe three of them. The first two
were chosen for their popularity in the US and their generality. The third was
chosen as an exemplary system to incorporate human perceptual and motor
aspects in more specific ways to analyze complex cognitive tasks such as air-
craft cockpit operation. All three have inspired our work.
2.1 ACT-R
Figure 3. (a) ACT-R architecture; (b) how ACT-R works [ACT-R, 2006].
2.2 SOAR
formed into a new production rule. This new rule can then be applied when-
ever Soar encounters the situation again. The process of encoding and storing
the newly learned rules is called “chunking”. However, Soar’s chunking is dif-
ferent from the term “chunk” used by cognitive neuroscientists when referring
to human working memory. Soar's chunking is a learning method used to
process information already present in the working memory for storage in the
long-term memory. In our architecture, on the other hand, as in human working memory, chunks refer to arbitrary pieces of information stored in long-term memory (see Section 5.3.2 for details).
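Soar's chunking can be pictured as caching the result of deliberate sub-goal problem solving as a new production rule, so the same situation is handled immediately next time. The following toy sketch is our own illustration of that idea, not Soar code; the class, situations and actions are invented.

```python
# Toy illustration of Soar-style chunking: when no production rule matches a
# situation, solve it deliberately (a stand-in here), then cache the solution
# as a new rule so the same situation fires a rule directly in the future.

class ChunkingAgent:
    def __init__(self):
        self.rules = {}           # situation -> action (the "productions")
        self.deliberations = 0    # counts how often slow problem solving ran

    def deliberate(self, situation):
        # Stand-in for sub-goal problem solving; here just a fixed mapping.
        self.deliberations += 1
        return "grasp" if "object" in situation else "wait"

    def act(self, situation):
        if situation not in self.rules:                      # impasse: no rule fires
            self.rules[situation] = self.deliberate(situation)   # chunking
        return self.rules[situation]

agent = ChunkingAgent()
agent.act("object ahead")   # slow path, creates a chunk
agent.act("object ahead")   # fast path, reuses the chunked rule
```

Note that, as the text says, this use of "chunk" (a cached rule) differs from the working-memory sense of chunk used later in the chapter.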
2.3 EPIC
3. Multiagent Systems
[Figure: ISAC multiagent architecture — stimuli from sensors are encoded as percepts; the Self Agent, Attention Network, Working Memory System, STM (SES) and LTM (PM, SM, EM) connect to behaviors (Behavior 1 … Behavior N). Legend: SES = Sensory EgoSphere; PM = Procedural Memory; SM = Semantic Memory; EM = Episodic Memory; CEA = Central Executive Agent.]
track of robot internal variables that will be used during action selection,
attention and learning. The Activator Agent invokes atomic agents to handle
temporal integration for the selected actions. The Central Executive Agent (CEA), working closely with the Working Memory System and the other SA agents, provides cognitive control functions for ISAC. The CEA is described in detail in Section 6.2.
[Figure: structure of the Self Agent, comprising the Description Agent, Intention Agent, Anomaly Detection Agent, Activator Agent, Mental Experiment Agent, Emotion Agent, First-order Response Agent and associated atomic agents, linked to the Central Executive Agent, Working Memory System, STM (SES), LTM (SM, PM, EM) and behaviors (Behavior 1 … Behavior N). Legend: SES = Sensory EgoSphere; PM = Procedural Memory; SM = Semantic Memory; EM = Episodic Memory; CEA = Central Executive Agent.]
The Human Agent (HA) comprises a set of agents that detect and keep track of
human features and estimate the intentions of a person within the current task
context. It estimates the current state of people interacting with the robot based
on observations and from explicit interactions (Figure 10 a and b) [Rogers,
2004]. The HA receives input from various atomic agents that detect physical
aspects of a human (e.g., the location and identity of a face). The HA receives
procedural information about interactions from the SA that employs a rule set
for social interaction. The HA integrates the physical and social information
with certain inferred aspects of the cognitive states of interacting humans, such
as a person’s current intention.
The HA processes two types of human intentions. An expressed intention is
derived from speech directed toward ISAC, e.g., greetings and requests from a
human. Inferred intentions are derived through reasoning about the actions of
a person. For example, if a person leaves the room, ISAC assumes it means
that the person no longer intends to interact, therefore, it can reset its internal
expectations.
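The two intention types above can be sketched as a pair of simple rule-based estimators. This is a hypothetical illustration only: the rules, keywords and event names are invented, not the HA's actual rule set.

```python
# Hypothetical sketch of the Human Agent's two intention types: expressed
# intentions come from speech directed at the robot; inferred intentions come
# from reasoning about observed actions. All rules here are invented.

def expressed_intention(utterance):
    u = utterance.lower()
    if any(g in u for g in ("hello", "greetings")):
        return "greet"
    if "please" in u or u.endswith("?"):
        return "request"
    return "unknown"

def inferred_intention(observed_event):
    # Reasoning about actions rather than speech.
    if observed_event == "person_left_room":
        return "no_interaction"      # robot can reset its expectations
    if observed_event == "person_approached":
        return "wants_interaction"
    return "unknown"

expressed_intention("Hello ISAC")          # an expressed intention
inferred_intention("person_left_room")     # an inferred intention
```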
The Human Agent’s assessment of how to interact is passed on to the SA. The
SA interprets the context of its own current state, e.g. current intention, status,
tasks, etc. This processing guides ISAC in the selection of socially appropriate
behaviors that lead towards the ultimate goal of completing tasks with (or for)
humans.
Figure 10. (a) ISAC interacting with humans and (b) Human Agent and associated
atomic agents.
5. Memory Structure
Currently, we are using a structure called the Sensory EgoSphere (SES) to hold
STM data. The SES is a data structure inspired by the egosphere concept as de-
fined by Albus [Albus, 1991] and serves as a spatio-temporal short-term mem-
ory for a robot [Peters, et al, 2001; Hambuchen, 2004]. The SES is structured as
a geodesic sphere that is centered at a robot's origin and is indexed by azimuth
and elevation.
The objective of the SES is to temporarily store exteroceptive sensory informa-
tion produced by the sensory processing modules operating on the robot. Each
vertex of the geodesic sphere can contain a database node detailing a detected
stimulus at the corresponding angle (Figure 11).
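The SES idea can be sketched as a store of stimuli indexed by direction. The sketch below substitutes a coarse azimuth/elevation grid for the true geodesic-sphere vertices, and all class and method names are our own invention, not the actual SES implementation.

```python
import time

class SensoryEgoSphere:
    """Toy SES: short-term store of stimuli indexed by azimuth/elevation.
    A coarse angular grid stands in for the geodesic sphere's vertices."""

    def __init__(self, resolution_deg=15):
        self.res = resolution_deg
        self.nodes = {}   # (az_bin, el_bin) -> list of stimulus records

    def _bin(self, azimuth, elevation):
        # Snap an angle pair to the nearest grid vertex.
        q = lambda a: int(round(a / self.res)) * self.res
        return (q(azimuth) % 360, q(elevation))

    def post(self, azimuth, elevation, stimulus):
        # Sensory processing modules post exteroceptive data here.
        rec = {"stimulus": stimulus, "time": time.time()}
        self.nodes.setdefault(self._bin(azimuth, elevation), []).append(rec)

    def query(self, azimuth, elevation):
        # Retrieve whatever was detected near this direction.
        return [r["stimulus"]
                for r in self.nodes.get(self._bin(azimuth, elevation), [])]

ses = SensoryEgoSphere()
ses.post(44.0, 12.0, "red bean bag")
ses.query(45.0, 15.0)   # both angle pairs snap to the same vertex
```

Timestamping each record reflects the SES's role as a spatio-temporal short-term memory: old entries can be aged out.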
Figure 11. Mapping of the Sensory EgoSphere and topological mapping of object loca-
tions.
Figure 12. (a) ISAC showing the SES screen; (b) NASA's Robonaut.
LTM is divided into three types: Procedural Memory, Semantic Memory, and Episodic Memory, representing information such as skills, facts learned and experiences gained (i.e., episodes) for future retrieval.
The part of the LTM called the Procedural Memory (PM) holds behavior information. Behaviors are stored in one of two ways: as motion primitives used
to construct behaviors or as full behavior exemplars used to derive variant mo-
tions.
Using the first method, stored behaviors are derived using the spatio-temporal
Isomap method proposed by Jenkins and Mataric [Jenkins, et al, 2003]. With
this technique motion data are collected from the teleoperation of ISAC. The
motion streams collected are then segmented into a set of motion primitives.
The central idea in the derivation of behaviors from motion segments is to dis-
cover the spatio-temporal structure of a motion stream. This structure can be
estimated by extending a nonlinear dimension reduction method called
Isomap [Tenenbaum, 2000] to handle motion data. Spatio-temporal Isomap
dimension reduction, clustering and interpolation methods are applied to the
motion segments to produce Motion Primitives (Figure 13a). Behaviors are
formed by further application of the spatio-temporal Isomap method and link-
ing Motion Primitives with transition probabilities [Erol, et al, 2003].
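The first step of this pipeline, cutting a teleoperated motion stream into candidate segments, can be sketched as follows. This is only an illustration of the segmentation stage under an invented pause-detection rule; the spatio-temporal Isomap reduction and clustering themselves are not reproduced here, and the data are made up.

```python
# Sketch of the segmentation step that precedes primitive derivation: a
# teleoperated motion stream is cut wherever joint speed drops near zero,
# yielding candidate motion segments for later dimension reduction and
# clustering.

def segment_motion(stream, speed_eps=1e-3):
    """stream: list of joint-angle samples (floats, for a 1-DOF sketch).
    Returns a list of segments split at near-zero-velocity samples."""
    segments, current = [], [stream[0]]
    for prev, cur in zip(stream, stream[1:]):
        if abs(cur - prev) < speed_eps and len(current) > 1:
            segments.append(current)      # pause detected: close the segment
            current = [cur]
        else:
            current.append(cur)
    segments.append(current)
    return segments

# Two movements separated by a pause (repeated sample = zero velocity).
stream = [0.0, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.0]
segment_motion(stream)
```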
Motions recorded using spatio-temporal Isomap are stored in a separate man-
ner as shown in Figure 13(b). At the top of this structure, behavior descriptions
will be stored which will allow us to identify what each behavior can contrib-
ute to solving a given motor task. Each entry in the behavior table will contain
pointers to the underlying motion primitives.
[Figure 13: (a) raw motion data segmented into motion primitives, from which behaviors are formed; (b) storage structure for behaviors and motion primitives.]
The latter method stores behaviors using the Verbs and Adverbs technique
developed in [Rose, et al, 1998]. In this technique, exemplar behavior motions
are used to construct verbs while parameters of the motions are termed adverbs.
An important aspect in storing and re-using a motion for a verb is the identifi-
cation of the keytimes [Spratley, 2006; Rose, et al, 1998] of the motion. The
keytimes represent significant structural breaks in the particular motion. For
the Verbs and Adverbs technique to function properly individual motions for
the same verb must have the same number of keytimes and each keytime must
have the same significance across each motion. Figure 14(a) shows keytimes
for three example motions. The example motions are recordings of the same motion, performed three times. This information is used to create the verb handshake. The keytimes in this example are derived by analyzing the motions using a technique called Kinematic Centroid [Jenkins, et al, 2003]. The x-axis
represents the normalized point index for each motion. The y-axis represents
the Euclidian distance of the kinematic centroid of the arm from the base of the
arm.
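A minimal version of this analysis can be sketched as computing the centroid-to-base distance curve and marking its local extrema as keytimes. This is our own simplified reading of the Kinematic Centroid idea, not the published algorithm, and the 2-D joint positions below are invented.

```python
# Minimal keytime extraction in the spirit of the Kinematic Centroid idea:
# compute the distance of the arm's point-mass centroid from the base at each
# frame, then mark local extrema of that curve (plus the endpoints) as
# keytimes, i.e. structural breaks in the motion.

def centroid_distance(frames):
    """frames: list of frames; each frame is a list of (x, y) joint positions
    relative to the arm base. Returns the centroid-to-base distance curve."""
    curve = []
    for joints in frames:
        cx = sum(p[0] for p in joints) / len(joints)
        cy = sum(p[1] for p in joints) / len(joints)
        curve.append((cx * cx + cy * cy) ** 0.5)
    return curve

def keytimes(curve):
    """Indices of local minima/maxima of the distance curve, with endpoints."""
    ks = [0]
    for i in range(1, len(curve) - 1):
        if (curve[i] - curve[i - 1]) * (curve[i + 1] - curve[i]) < 0:
            ks.append(i)     # slope changes sign: structural break
    ks.append(len(curve) - 1)
    return ks
```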
Figure 14. (a) Example motions and their keytimes [Spratley, 2006]; (b) structure of PM data representation for Verbs and Adverbs.
Each verb can have any number of adverbs, each of which relates to a particular
space of the motion. For example, the verb reach could have two adverbs: the
first related to the direction of the reach and the second related to the distance
from ISAC’s origin that the particular motion is to extend. Extending this ex-
ample, adverbs could be added to include features from any other conceivable
space of the motion, such as the strength of the motion or the speed of the mo-
tion. Stored in the LTM are the verb exemplars and the adverb parameters for
each verb. New motions such as reaching, or handshaking are interpolated by
ISAC at run time using the new (desired) adverb values.
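Run-time generation from a verb's exemplars can be sketched as blending stored trajectories according to the desired adverb value. Rose et al. use radial basis function interpolation; the sketch below substitutes simple inverse-distance weighting, assumes trajectories already aligned at shared keytimes, and uses invented 1-D data.

```python
# Sketch of run-time motion generation from a verb's exemplars: each exemplar
# pairs an adverb value (e.g. reach distance) with a stored trajectory, and a
# new motion is blended for a desired adverb value. (Inverse-distance
# weighting stands in for the RBF interpolation of Verbs and Adverbs.)

def blend_motion(exemplars, adverb, eps=1e-9):
    """exemplars: list of (adverb_value, trajectory) pairs; trajectories have
    equal length (aligned at shared keytimes). Returns a blended trajectory."""
    for a, traj in exemplars:
        if abs(a - adverb) < eps:          # exact exemplar requested
            return list(traj)
    weights = [1.0 / abs(a - adverb) for a, _ in exemplars]
    total = sum(weights)
    n = len(exemplars[0][1])
    return [sum(w * traj[i] for w, (_, traj) in zip(weights, exemplars)) / total
            for i in range(n)]

reach = [(0.0, [0.0, 0.1, 0.2]),    # exemplar: reach distance 0.0
         (1.0, [0.0, 0.5, 1.0])]    # exemplar: reach distance 1.0
blend_motion(reach, 0.5)            # blend halfway between the exemplars
```

The equal-length, matched-keytime requirement in the code mirrors the text's constraint that motions for the same verb share the same number and meaning of keytimes.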
Figure 14(b) depicts the manner in which behaviors are stored in LTM using
Verbs and Adverbs. For each entry in PM, the motion and storage types are re-
corded. The next entry holds pointers to the verb information and the final en-
tries hold the adverb values.
5.3.1 Attention
Figure 15. The attention network's assignment of the focus of attention (FOA) at the center node among events registered in a common area on the SES [Hambuchen, 2004].
There is much evidence for the existence of working memory in primates [Fu-
nahashi, et al, 1994; Miller, et al, 1996]. Such a memory system is closely tied to
the learning and execution of tasks, as it contributes to attention, learning and
decision-making capabilities by focusing on task-related information and by
discarding distractions [O’Reilly, et al, 1999; Baddeley, 1986; Baddeley, 1990].
The working memory in humans is considered to hold a small number of
“chunks” of information needed to perform a task such as retaining a phone
number during dialing.
Inspired by the working memory models developed in cognitive science and
neuroscience, the Working Memory System (WMS) in robots was designed to
provide the embodiment necessary for robust task learning and execution by
allowing ISAC to focus attention on the most relevant features of the current
task [Gordon & Hall, 2006].
WMS in our cognitive architecture was implemented using the Working
Memory Toolkit (WMtk) based on the computational neuroscience model of
working memory [Phillips, 2005]. This toolkit models the function of dopa-
mine cells in human brains using a neural net-based temporal difference (TD)
learning algorithm [Sutton, 1988]. The toolkit has a function to learn to select
and hold on to “chunks” of information that are relevant to the current task
based on future expected reward from processing these chunks. These chunks
include behaviors, current percepts, and past episodes. Figure 16 illustrates the
current WMS structure and associated system components. A simulated delayed saccade task using WMtk was reported by Phillips and Noelle [Phillips, 2006]. Section 7.1 in this chapter details working memory training and task learning conducted on ISAC.
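The gating-and-reward loop described above can be sketched with a plain value table in place of WMtk's neural network: each candidate chunk carries a learned usefulness estimate, the highest-valued chunks are gated into working memory, and a TD-style update moves their values toward the reward observed after execution. The class, chunk names and parameters are invented for illustration; this is not WMtk's API.

```python
# Toy working-memory gating in the spirit of WMtk: select the highest-valued
# candidate chunks into working memory, then nudge their value estimates
# toward the received reward (a TD-style update; no exploration is modeled).

class WorkingMemoryGate:
    def __init__(self, capacity=2, lr=0.5):
        self.capacity = capacity
        self.lr = lr
        self.value = {}                 # chunk -> estimated usefulness

    def select(self, candidates):
        ranked = sorted(candidates, key=lambda c: self.value.get(c, 0.0),
                        reverse=True)
        self.held = ranked[:self.capacity]   # chunks gated into WM
        return self.held

    def learn(self, reward):
        # Move each held chunk's value toward the observed reward.
        for c in self.held:
            v = self.value.get(c, 0.0)
            self.value[c] = v + self.lr * (reward - v)

gate = WorkingMemoryGate()
for _ in range(20):                          # training episodes
    held = gate.select(["reach", "red bean bag", "distractor"])
    gate.learn(1.0 if "reach" in held and "red bean bag" in held else 0.0)
```

After training, the task-relevant chunks outrank the distractor, mirroring how WMS learns to discard irrelevant information.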
[Figure 16: WMS structure — percepts on the SES and memory chunks from LTM form a candidate chunk list; learned network weights determine which chunks are gated into working memory.]
Cognitive robots, then, should have the ability to handle unexpected situations
and learn to perform new tasks. Cognitive control is also expected to give the robot flexibility to reason and act according to stimuli under conflicting goals in a dynamic environment. Realization of cognitive control functions for
ISAC is currently pursued through the maintenance of task-related informa-
tion in the working memory system through gating of information that could
be passed into the decision-making mechanism called the Central Executive
Agent discussed in Section 6.2.
Attention and emotions are known to play an important role in human decision-making and task execution [Arbib, 2004]. Therefore, we are now adding attention and emotion networks to realize cognitive control for robots, as shown in Figure 17 (modified from [Miller, 2003]).
Figure 17. Cognitive control and executive functions (modified from [Miller, 2003]): goal-related information and experience interact with working memory (central executive, phonological loop, visuo-spatial sketch pad); the CEA links goal intention, action mapping, emotion intervention, execution results, and updates to past episodes in the Episodic Memory.
Sensory inputs, stimuli and/or task commands, are encoded into percepts and
posted on the SES. Only those percepts that have a high emotional salience will
receive attention from the Attention Network and be passed to WMS. These
percepts, unless intervened upon, will cause corresponding actions to be
selected according to the embedded stimulus-response mapping. On the other hand,
CEA selects actions using the combination of two paradigms. CEA will re-
trieve past episodes that are relevant to these percepts from the Episodic
Memory. These past episodes contain actions used in past execution and re-
sults. The results of invoked actions from stimuli-response mapping could be a
part of these episodes. CEA determines if the action to be executed is likely to
produce a negative result. If so, CEA will intervene by suggesting a different
action based on the current state of ISAC, current percepts, and action. Once
the action is executed, CEA will update the stimulus-response mapping ac-
cording to the execution result and the current task is then stored as a new epi-
sode in the Episodic Memory.
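The decision flow described in this paragraph can be sketched as follows. All names are hypothetical; the actual CEA is an agent within ISAC's multiagent architecture, not a pair of functions.

```python
# Hypothetical sketch: CEA retrieves relevant past episodes, predicts whether
# the default stimulus-response action is likely to fail, and intervenes with
# an alternative if so. Data structures are illustrative assumptions.

def cea_step(percepts, sr_mapping, episodic_memory, alternative_action):
    # 1. Default action from the embedded stimulus-response mapping.
    action = sr_mapping.get(percepts)
    # 2. Retrieve past episodes relevant to the current percepts.
    episodes = [e for e in episodic_memory if e["percepts"] == percepts]
    # 3. Intervene if past episodes suggest a negative outcome for this action.
    bad = [e for e in episodes if e["action"] == action and e["result"] < 0]
    if bad:
        action = alternative_action
    return action

def cea_learn(percepts, action, result, sr_mapping, episodic_memory):
    # 4. After execution, update the mapping and store the task as a new episode.
    if result > 0:
        sr_mapping[percepts] = action
    episodic_memory.append(
        {"percepts": percepts, "action": action, "result": result})
```

The key point the sketch captures is that the stimulus-response path provides a fast default, while episodic retrieval lets CEA veto actions with a history of negative results.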
7. Experimental Results
The working memory system (WMS) is used to manage ISAC's memory focus
during task execution. For simple tasks, WMS holds a small number of chunks
of information related to the task. Typically on ISAC, the number of chunks
loaded into WMS ranges from 2 to 4. For example, if ISAC were to be asked to
“reach to the red bean bag”, WMS would be responsible for loading two
chunks of information: one chunk for the reach behavior and another chunk for
the red bean bag percept. For more complex tasks (i.e. those that require more
than 4 chunks of information) the tasks must be broken into simpler tasks and
WMS is responsible for handling each simple task in turn as well as maintain-
ing ISAC's focus on the long-term goal, the completion of the complex task.
WMS is not the tool that decomposes complex tasks into simple tasks. In the
future, another agent such as CEA (section 6.2) must do this job. WMS, given
the current state of ISAC, solely determines which chunks from LTM and STM
to load into the system, in essence focusing ISAC on those pieces of informa-
tion. Experiments utilizing WMS in this manner have already been conducted
[Gordon, et al, 2006].
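The chunk-loading step for a simple command such as "reach to the red bean bag" can be sketched as below; the word-matching scheme and the four-chunk capacity check are illustrative assumptions, not the implemented system.

```python
# Illustrative sketch: for "reach to the red bean bag", WMS holds one
# behavior chunk from procedural LTM and one percept chunk from the SES.

def load_chunks(command, ltm_behaviors, ses_percepts, capacity=4):
    words = command.lower().split()
    # Behavior chunks: known behaviors named in the command.
    behaviors = [b for b in ltm_behaviors if b in words]
    # Percept chunks: SES percepts whose every word appears in the command.
    percepts = [p for p in ses_percepts
                if all(w in words for w in p.split())]
    chunks = behaviors + percepts
    # Tasks needing more than `capacity` chunks must first be decomposed
    # into simple tasks by another agent (e.g. CEA).
    assert len(chunks) <= capacity, "task must be decomposed first"
    return chunks
```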
Current work with ISAC’s WMS is centered on training a variety of different
WMS for different types of tasks, such as: WM1, trained to enable ISAC to
interact with simple objects; WM2, trained for interaction with people; and
WM3, trained to respond to sound stimuli. Figure 20 shows the architecture
used to train each of these WMS.
During training, a reward rule is used to inform WMS how well it is perform-
ing. The reward rule captures whether or not the current chunks could be used
to accomplish the task and how well the task has been accomplished.
Using the architecture shown in Figure 20, an initial experiment was designed
to test object interaction using working memory. The steps for this experiment
are as follows:
1. ISAC is given certain initial knowledge (i.e. embedded ability and/or in-
formation)
a) ISAC’s perceptual system is trained to recognize specific objects. The in-
formation is stored in the semantic memory section of the LTM.
b) Using the Verbs and Adverbs algorithm, ISAC is taught a small set of
motion behaviors including how to reach, wave, and handshake. This
information is stored in the procedural memory section of the LTM.
Figure 21 shows ISAC performing these behaviors.
2. Two bean bags are placed on a table as shown next in Figure 22.a.
3. ISAC is asked to “reach to the bean bag”, although a specific bean bag is
not specified.
4. ISAC’s perceptual system will recognize the bean bags and post the in-
formation to SES.
5. WMS will focus on “chunks” of information necessary for accomplishing
the task.
6. A reward is given based upon how well the action is completed.
7. Over time, ISAC learns the appropriate chunks to focus on from the SES
and LTM.
8. Once ISAC has demonstrated that it has learned the most appropriate
chunks to load into WMS (Figure 22.a), bean bags are rearranged (Figure
22.b) and ISAC is given the command “reach to the bean bag”.
9. Real-time experiments were conducted after initial simulation trials
(Figure 22.c).
When the bean bags are rearranged, ISAC should not necessarily reach to the
same bean bag as before but should choose the bean bag percept from the SES
that is the most appropriate. For this task the most appropriate bean bag is the
nearest one. The combination of percept and behavior, or “chunks”, will be
loaded into the working memory and used to execute the action.
The reward rule was based on three criteria:
1. What is the degree of success for the behavior WMS chose to load?
2. How well did the object chosen by WMS meet the task criteria? (e.g.,
focusing on any bean bag vs. focusing on another object)
3. How well is ISAC able to act upon the object? (e.g., in this experiment,
could ISAC reach the bean bag?)
In order to measure Reward Criterion #3, the reward was made inversely
proportional to the distance from ISAC’s hand to the object. Reward Criteria
#1 and #2 gave a discrete positive-valued reward if the system chose
appropriately. No preference (i.e., a reward of 0) was the result if the system did
not choose correctly. The values for the overall reward typically fell in the
range of 0–400. Since it was desired to give a negative reward to the system
when it did not act appropriately, a constant offset of –200 was added to the
final reward to “tilt” the low values into the negative range.
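A sketch of this reward rule follows, assuming illustrative magnitudes for the discrete rewards and the distance term; the text specifies only the overall 0–400 range and the –200 offset, so the individual constants below are assumptions.

```python
# Illustrative reward rule: criteria 1 and 2 give a discrete positive reward
# for correct choices (0 otherwise), criterion 3 is inversely proportional
# to the hand-to-object distance, and a -200 offset shifts low totals into
# the negative range. Constants are assumptions for illustration.

def reward(correct_behavior, correct_object, distance_m):
    r1 = 100.0 if correct_behavior else 0.0        # criterion 1
    r2 = 100.0 if correct_object else 0.0          # criterion 2
    r3 = min(200.0, 20.0 / max(distance_m, 0.01))  # criterion 3, capped
    return r1 + r2 + r3 - 200.0                    # tilt into negative range
```

Note the caveat discussed below: criterion 3 alone can reward a wrong behavior (e.g. a handshake) that happens to bring the hand close to the right object.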
Note that when using these reward criteria, it is possible to incorrectly reward
the system for performing the task in less than an optimal manner. For exam-
ple, if the system performs the behavior handshake or wave while focusing on
the appropriate bean bag and if this action happens to bring the hand very
close to the bean bag, then the system would receive a positive reward. In or-
der to avoid this undesirable situation, more rules or knowledge are needed.
Initial trials for this experiment were performed off-line, in simulation, to
speed-up the initial testing phase of the system. This simulation was set-up to
remove the time-bottleneck of generating and performing motions. For the
simulation, when ISAC needed to act on an object within the workspace, the
motion was assumed to have been performed properly (Reward Criterion 3).
The action taken by ISAC was determined by what WMS currently believed
was the best choice; in other words, the action that WMS believed would yield
the greatest reward. The system also contained an exploration percentage,
specified as a part of the initial knowledge, which determined the percentage
of trials in which WMS chose a new or different action. This enabled WMS to
continue learning and exploring at all times.
During the initial research trials, the simulation was not allowed to choose the
same action more than twice. This constraint enabled much quicker simulation
times. Once the system finished exploration, the system was restarted with the
learned information and given the task to “reach to the bean bag”. For each ar-
rangement (Figures 22a,b) the system chose appropriately to reach towards the
correct bean bag, i.e. the nearest one. Table 1 shows the contents of ISAC’s
short-term and long-term memory systems during the training portion of the
simulation.
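The exploration scheme described above can be sketched as follows, combining the exploration percentage with the twice-per-action training constraint; the function name and default values are assumptions.

```python
import random

# Illustrative action selection: with probability `explore_pct` the system
# tries an untried or less-tried action, otherwise it exploits the action
# currently believed best; during training no action may be chosen more
# than `max_repeats` times.

def choose_action(values, counts, explore_pct=0.2, max_repeats=2):
    allowed = [a for a in values if counts.get(a, 0) < max_repeats]
    if not allowed:
        return None  # exploration finished for this arrangement
    if random.random() < explore_pct:
        untried = [a for a in allowed if counts.get(a, 0) == 0]
        return random.choice(untried or allowed)
    return max(allowed, key=lambda a: values[a])
```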
SES                                                LTM
1. bean bag: location = (Figure 22.b), type = A    1. reach
2. bean bag: location = (Figure 22.a), type = B    2. handshake
3. blank                                           3. wave
Table 1. Memory contents during simulation training.
In these trials, WMS was allowed to choose two “chunks” from the short- and
long-term memory systems to accomplish the task. However, the working
memory was not restricted to choosing exactly one object and one behavior. If
the working memory chose to focus on two objects or two behaviors, then a
behavior or an object, respectively, was chosen at random. This ensured that
an action was still performed. The reasoning behind this was that the system
should not learn to simply choose combinations that lead to no reward, a
situation that could be preferred if WMS consistently received negative
rewards for its choices. Table 2 shows samples of the contents of the working
memory in these trials.
To evaluate system performance further, a third task was developed. For this
task, ISAC was again given the command to “reach to the red bag”; however,
this time the reach behavior was deleted from the initial knowledge, limiting
the behavior choices to handshake and wave. The working memory had to
choose the next best behavior. For each of the arrangements shown previously
(Figures 22.a and 22.b), WMS chose to perform the handshake behavior. This
behavior was chosen because it allowed the arm to get closest (Reward
Criterion 3) to the bean bag (Reward Criterion 2) and thus best accomplished
the goal task.
After the initial training and experimentation, ISAC was allowed to perform
the generated motions (Figure 22.c). Two new objects (a green Lego toy and a
purple Barney doll) were added to the table at random positions. ISAC’s vision
system was trained (Step 1) to recognize each new object and recorded the
type of each object as well as some simple descriptive information (color =
green, purple; toy type = Barney doll, Lego). ISAC was given tasks (Step 3) such as
“reach to the bean bag” or “reach to the toy”. Each of these tasks did not specify
to which bean bag or toy ISAC was to reach. ISAC recognized the objects (Step
4). WMS focused on “chunks” of information from the SES and LTM in order
to accomplish the task (Step 5). ISAC was allowed to explore the space of pos-
sible actions receiving reward each time (Steps 6 and 7). After this was accom-
plished, the objects were rearranged in a variety of different positions (Step 8)
and ISAC was given a command. The results (set of 20 commands) were that
ISAC successfully performed the correct action on the nearest (easiest to reach)
requested object.
For this system to properly choose the correct set of chunks to focus on, the
system currently has to explore all the possibilities during training. Figure 23
shows an example learning curve for this system for the reach command. The
graph shows the number of times each of the trained behaviors was chosen
during each ten-trial segment. When the system first begins training, it is
required to explore each of the possible behaviors as well as try different
percept/behavior combinations. As can be seen from this graph, it took
approximately 20 trials before the system determined that the reach behavior
was the best.
Attempting to explore all possibilities in the future will lead to a combinatorial
explosion if a large number of behaviors or percepts are added to the system.
In order for this system to continue to operate properly in the future, im-
provements need to be made to the representational structures for behaviors
and percepts used by the system. One method of improving this representa-
tional structure that we are considering is to store intentionality along with
percepts (i.e. chairs are for sitting, tables are for placing, and bean bags are for
reaching and grabbing). This, along with the method of pre-filtering chunks
using Episodic Memory discussed in Section 7.1.3, will help WMS perform
quick and accurate chunk selection and retrieval.
Figure 23. Learning curve for the reach command: the number of selections of each behavior (reach, handshake, wave) per ten-trial segment over 100 trials.
ISAC will be responsible for poring through all stored, memorable information
to complete the task. As ISAC’s experience grows, however, the chunk list
will begin to shrink as ISAC learns which types of information are most
relevant. Once ISAC learns this information, an episode can be created and
used to filter chunks for future scenarios.
The system components used are the Central Executive Agent, the Attention
Network, and the Emotion Agent. Sound stimuli (i.e., music and alarm) are
captured through a microphone and processed in Matlab. Mozart’s Symphony
No. 40 is used for “music,” and a recorded sound of an actual fire alarm is
used for “alarm.” ISAC’s initial emotional state is set to dislike the alarm
sound and to like the music. This is accomplished through the emotion vectors
shown in Table 3. ISAC is also trained to perform three actions: performing
the requested task to fixate on the Barney doll, yelling “Alarm!”, and
performing a free-style dance.
Figure 25. Overview of the experiment: music evokes pleasure and the alarm evokes annoyance, while the Barney doll is tied to the task command; ISAC must choose whether to perform the task, dance, react to the alarm, or ignore the stimulus.
Stimulus/Task-        Emotion Vector                                   Emotional
response              happy-sad   love-hate   alert-uninterested       salience
Alarm sound             0.1         -0.8           -0.6                  1.01
Musical piece           0.7          0.5            0.1                  0.86
Task command            0.1          0              0.6                  0.61
Other human words       0.1          0.1            0.1                  0.17
Table 3. Emotion vectors.
Figure 26. Focus of attention using the Attention Network: perceptual encodings of stimuli posted on the SES are gated, based on emotional salience and the task command, into chunks passed to WMS.
If two or more percepts and/or commands are given to ISAC at the same time,
ISAC must resolve the conflict. The Central Executive Agent (CEA) described
in Section 6.2 is responsible for conflict resolution. For example, if a percept
with a high emotional salience is detected while a task is being executed, CEA
must decide how to respond to the newly acquired percept. The current
situation is used by CEA for decision-making. For this experiment, “a
situation” can be translated from the change in perceptual information as
follows: let the set of all percepts in the Focus of Attention (FOA) at a given time
be denoted by X. Members of X then comprise a combination of some known
percepts from LTM. In a perceptual event, either a percept disappears or a
new percept attracts the robot’s attention, and the original set of percepts in
FOA will change. For this experiment, “a situation” was considered to be any
change of specific members of X as illustrated in Figure 24.
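This situation representation can be sketched as a small probabilistic model; the data structures and the frequency-based selection rule below are illustrative assumptions, not the implemented agent.

```python
import random
from collections import defaultdict

class SituationModel:
    """Toy model: situations are changes of the FOA percept set X, and
    teaching accumulates counts that define selection probabilities."""

    def __init__(self):
        # counts[situation][action] -> times the teacher chose that action
        self.counts = defaultdict(lambda: defaultdict(int))

    @staticmethod
    def situation(foa_before, foa_after):
        # A situation is the change of members of X: which percepts
        # appeared in and disappeared from the Focus of Attention.
        return (frozenset(foa_after - foa_before),
                frozenset(foa_before - foa_after))

    def teach(self, situation, action):
        # Teaching phase: a human teacher provides the appropriate action.
        self.counts[situation][action] += 1

    def select(self, situation):
        # Execution phase: sample an action proportionally to its count.
        actions = self.counts[situation]
        r = random.random() * sum(actions.values())
        for action, c in actions.items():
            r -= c
            if r <= 0:
                return action
```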
Figure 24. Situations as changes in the Focus of Attention (FOA): a perceptual event changes the set of percepts (P1, P2, P3) in the FOA, producing situations S1, S2, ..., Si; during the teaching phase a human teacher provides the appropriate action for each situation, and during the execution phase the learned probabilities P[A(i,j)] over actions A1, A2, ... are updated and used to select the action.
In the first situation, only the musical piece and the alarm sound caused the
Emotion Agent to create emotion vectors with a high emotional salience.
Because no task (goal) was given, CEA selected the action based on the initial
emotional state. This situation demonstrated the system’s ability to focus
attention on those percepts that cause a high emotional salience.
In the second situation, a task to fixate on the Barney doll was given to the sys-
tem prior to the presence of other stimuli. The changes in FOA then created
two situations, i.e. “Music was heard during the task execution” and “Alarm
was heard during the task execution”. Using the probabilistic model of the
situation as discussed above, CEA decided if it should pay attention to the
stimuli or keep focusing on the task based on prior knowledge of the stimuli
and situation. “Situation 2 (before learning)” in Table 4 summarizes the system
responses.
FOA      Situation 1         Situation 2 (before learning)    Situation 2 (after learning)
Music    “Dancing”           Ignored the music                Ignored the music
Alarm    Yelled “Alarm!”     Yelled “Alarm!”                  Ignored the alarm
Table 4. Stimuli Response Results.
Finally, the model was taught to respond to Situation 2 differently from the
initial knowledge. That is, the model entered the teaching phase again to learn
a new appropriate response, which in this case was to ignore the alarm in
Situation 2. One hundred trials of teaching were performed and the results from
learning are shown in Figure 26. This figure shows the number of times the
model chose to ignore the alarm for every ten trials. In the beginning, the
model did not ignore the alarm right away because of the strong association
between the percepts and actions initially embedded in the model. After
about 20 trials, the supervised learning had changed the associated
probabilities in the model enough that the model started to ignore the alarm.
With increasing trials, the system learned to select the correct response.
However, as the selection was performed using a probabilistic method, it was
still possible for the system to select an incorrect action occasionally, as seen
in the graph. This allows the system to explore other possible actions in
dynamic situations. Because the probabilistic model was updated on every
teaching trial, the system was more likely to select the correct action as the
number of trials increased. If this number reached infinity, the system would
select the correct action 100% of the time.
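The teaching-phase dynamics described here can be illustrated with a toy simulation; the update rule and the learning rate are assumptions, chosen only so that ignoring the alarm gradually comes to dominate despite a strong initial bias, as in the reported learning curve.

```python
import random

# Toy simulation: the model starts with a strong embedded association
# (yell at the alarm); each teaching trial nudges the probability of the
# new response ("ignore") upward, and selection stays probabilistic so
# occasional incorrect actions remain possible.
random.seed(2)

prob_ignore = 0.05            # strong initial bias toward yelling
correct_per_segment = []      # correct selections per 10-trial segment
correct = 0
for trial in range(1, 101):
    choice = "ignore" if random.random() < prob_ignore else "yell"
    correct += (choice == "ignore")
    prob_ignore += 0.05 * (1.0 - prob_ignore)  # teacher reinforces "ignore"
    if trial % 10 == 0:
        correct_per_segment.append(correct)
        correct = 0
```

As the trial count grows, the probability of the correct response approaches 1, mirroring the claim that with infinitely many trials the correct action would always be selected.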
Figure 26. Learning curve for the response to the alarm in Situation 2.
This simple experiment was conducted to verify that the system did learn to
select the appropriate action under supervised learning [Mitchell, 1997] using
attention and a set of “snapshot” states of emotion. As the next step, we are
now working to develop a more realistic, dynamic model of emotion which
will reflect the change in ISAC’s internal states over time. The details of how
this time-varying, event-based model of emotion will influence the
action-selection process will be described in Section 8.
Any cognitive robot should be able to use both external and internal stimuli to
consciously organize its behaviors, such as action selection, attention and
learning. Accordingly, emotion could be one of the main factors mediating the
decision-making process. In order to make the attention- and emotion-based
action-selection process more realistic, we are now working to develop a
time-varying, event-based model of emotion reflecting the change in ISAC’s
internal states over time. In this type of action-selection process, the system
does not necessarily perform the same action every time for the same set of
external stimuli.
In ISAC’s cognitive architecture, the Self Agent is responsible for
meta-management of its internal states, similar to the scheme proposed by
Sloman [Sloman, et al., 2005] and shown in Figure 30. We used a fixed,
embedded emotion level as a part of the Self Agent in the experiment above;
the Emotion Agent will be responsible for the time-varying model of emotion.
The incorporation of both internal and external stimuli in the architecture
enables the system to be as dynamic as possible, gearing responses so that they
are not a function of the external inputs alone. This creates a robot that can
respond differently to the same situation based solely on the internal state of the
robot. The internal stimulus that will be used for this experiment is the level of
excitement of the robot. The excitement level will be a product of both ISAC’s
external environment and ISAC’s other internal states (such as the presence of
a command, joint activity, etc.).
It is important that ISAC’s excitement or arousal in response to a given
situation not be a static function, but rather a dynamic function of time. For
the time being, ISAC’s level of excitement is calculated using a first-order
exponential decay function:

Excitement = α(S) · e^{−β(S)·t}
The terms α(S) and β(S) are functions of the state, S, of ISAC and are designed
in such a way that they can be learned or modified over time using standard
reinforcement learning techniques. Therefore, a particular situation (or a
change in state) Si which may initially be embedded in ISAC as “very exciting”
(i.e. α(Si) returns a high value and β(Si) returns a low value) can, over time, ad-
just to reflect ISAC’s experience with that particular state. Conversely, states
initially embedded as “not exciting” can, based on experience, become exciting
states. One final point to add is that the decay nature of the excitement func-
tion ensures that no state continues to excite ISAC indefinitely (i.e. ISAC will
eventually get bored with even the most exciting event).
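The excitement function can be written directly; the α and β values per state below are illustrative placeholders that, in the architecture, would be adjusted over time by reinforcement learning.

```python
import math

# Illustrative alpha(S) and beta(S) tables: alpha sets the initial arousal
# for a state, beta its decay rate. Values are assumptions for illustration
# (e.g., faces embedded as more exciting than bean bags).
ALPHA = {"face": 1.0, "bean_bag": 0.3}
BETA = {"face": 0.1, "bean_bag": 0.5}

def excitement(state, t):
    # Excitement = alpha(S) * exp(-beta(S) * t): decays with time, so no
    # state continues to excite ISAC indefinitely.
    return ALPHA[state] * math.exp(-BETA[state] * t)
```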
As ISAC’s other cognitive processes learn, these processes in turn will utilize
the current state of excitement when making decisions. This utilization will be
a function of the excitement level as well as the internal and external states that
have caused the current excitement level. As the stimuli that excite ISAC
change over time, ISAC’s decision-making process should reflect this change
and, accordingly, ISAC should make different choices. The experiment is
designed to teach ISAC this ability and then put ISAC in a situation in which
multiple possibilities exist, forcing the robot to make a decision. It is hoped
that ISAC’s cognitive architecture will allow it to make this decision.
To demonstrate the use and effectiveness of utilizing both internal and exter-
nal stimuli during action selection and task switching, an experiment has been
designed that requires the principles of cognitive robotics discussed in this
chapter. During this experiment, ISAC will be presented with a range of dif-
ferent scenarios and be forced to decide whether to continue with the present
task or switch to another task. Close monitoring of ISAC’s internal level of
excitement or arousal will be the mechanism that aids in making this decision.
Through habituation and learning, ISAC will develop an association between
excitement levels and different percepts or tasks. In other words, based on ex-
perience, certain percepts will excite ISAC more than other percepts, and cer-
tain tasks will excite ISAC more than other tasks. These associations will begin
as embedded knowledge, based on novelty, within ISAC. Over time and
through experience and habituation, these correlations will change and ISAC
will begin to develop its own sense of excitement/boredom.
The experiment steps are as follows:
1. Embed ISAC with knowledge that certain percepts and tasks are more ex-
citing than others (i.e. faces are more exciting than bean bags, dancing is
more exciting than reaching, etc.)
2. Train a small set of WMS to react to certain situations (see WM1 and WM2
from section 7.1)
a) WM1 is trained to enable ISAC to interact with simple objects.
b) WM2 is trained for interaction with people.
c) WM3 is trained to enable ISAC to respond appropriately to sound
stimuli.
3. Have a person enter the room and give ISAC a task.
4. Repeat step 3 several times in order to cause a change in ISAC’s embedded
excitement function (Section 8.1)
5. Have a person enter the room and give ISAC a task. During the task exe-
cution have music begin playing in the room.
6. Continue playing the music for several minutes.
Steps 1 and 2 of this experiment are the initial embedding of knowledge into
the system. When a person enters the room and gives ISAC a command, this
interaction should excite ISAC causing it to desire to engage with the person
and complete the task. Through repetition of Step 3, this excitement level
should continue to decrease with each repeated command. Over time, the ex-
citement level associated with Step 3 should degrade to such an extent that
ISAC essentially becomes unmotivated to perform the task. At this point,
when ISAC hears music during the execution of the task (Step 5), the robot
should choose to ignore the person and pay attention to the music instead. Af-
ter the music plays for several minutes (Step 6), ISAC should eventually be-
come bored with this as well (as discussed in section 8.1.). Once bored with the
music, ISAC should transition back to the commanded task.
9. Conclusions
In the last forty years, industrial robots have progressed from the
Plan-Sense-Act paradigm to more robust, adaptive and intelligent control
paradigms [Kawamura, 2006]. In particular, the integration of body, sensors
and AI-based software has produced not only advanced industrial robots, but
also non-industrial robots ranging from entertainment and home robots to a
variety of health-related robots, and we expect this trend to continue. This
chapter introduced the next grand
challenge in robotics, i.e. the integration of body and mind. In particular, the
chapter described our efforts towards this challenge through the realization of
a cognitive robot using cognitive control, attention, emotion, and an adaptive
working memory system. In the last forty years, the field of industrial robotics
and automation has also seen many innovations. As manufacturing becomes
more distributed and sophisticated, realization of human-like robotic cowork-
ers with cognitive skills will be a challenge not only to academia, but to manu-
facturing engineers as well.
10. References
1. Introduction
2. System Architecture
ured to form robots with various structures and degrees of freedom. The ro-
bots, together with other peripheral devices, will form a complete robotic
workcell to execute a specific manufacturing task or process. The
corresponding intelligent control and simulation software components are
then reconfigured according to the change of the workcell configuration. The
maintenance and upgrade of the system are simplified by replacing
malfunctioning or outdated components. Converting a manufacturing line
from one product to another can be very fast, keeping up with the rapidly
changing marketplace.
Component browser – for viewing and editing the components available in the
component database;
The system kernel, which is hidden from the user, provides the automated
model-generation functions and the configuration-optimization function (a
component database is also associated with it).
The information passed from the component database to the modeling
functions goes through the assembly incidence matrix (AIM). Robot
geometries (serial,
branch, or hybrid) and detailed connection information, such as the connecting
orientation and the types of adjacent modules, are all indicated in the matrix.
This matrix is then passed to the geometry-independent functions for model
generation.
In such a system, the need to maintain a huge library of robot models is elimi-
nated; instead, we maintain a small selection of the component-database and
kernel functions for automated model generation, reducing the overall foot-
print of the system software.
A modular robot joint module is an “active” joint, which allows the generation
of a prescribed motion between connected links. Two types of joint modules,
revolute joints (rotary motion) and prismatic joints (linear or translational
motion), are considered. Rotary and linear actuators must reside
in the modules to produce the required motions and maintain the modularity
of the system. Multi-DOF motions can be synthesized with several 1-DOF
joints. Joint modules are attached to link modules through standardized
connecting interfaces for mechanical, power, and control connections.
The place on a link module where the joint is connected is called a connecting
port. Without loss of generality, we assume that a link module is capable of
multiple joint connections, and the link module has symmetrical geometry.
Such a design allows modules to be attached in various orientations, and the
robot geometry to be altered by simple reassembling. The modular robot com-
ponents developed in our university are shown in Figure 3. This design fol-
lows the building-block principle whereby modules can be stacked together in
various orientations through connecting points on all six faces of the cubes.
Definition 1. (Graph)
A graph G = (V, E) consists of a vertex set, V = {v_0, ..., v_n}, and an edge set,
E = {e_0, ..., e_m}, such that every edge in E is associated with a pair of
vertices, i.e., e_i = (v_j, v_k).
Figure 4. (a) A branching modular robot; (b) kinematic graphs of the robot
A(G) = \begin{bmatrix}
1 & 3 & 5 & 0 & 0 & 0 & 0 & B \\
6 & 0 & 0 & 0 & 0 & 0 & 0 & C1 \\
0 & 1 & 0 & 6 & 0 & 0 & 0 & C1 \\
0 & 0 & 2 & 0 & 6 & 0 & 0 & C1 \\
0 & 0 & 0 & 5 & 0 & 2 & 0 & C2 \\
0 & 0 & 0 & 0 & 5 & 0 & 3 & C2 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & C2 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & C2 \\
P & R & R & R & R & P & P & 0
\end{bmatrix}.    (1)
Note that there are three types of link modules in the robot: the base (B), the
large cubic module (C1), and the small cubic module (C2). Cubic modules
have six connecting interfaces labeled 1–6, i.e., port = {1, ..., 6}, which follows
the labeling scheme on dice. The revolute joints and prismatic joints are
denoted by R and P respectively.
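The AIM of eq. (1) can be transcribed directly as a data structure; the helper below is an illustrative function, not part of the described system, that reads off the two link modules each joint connects.

```python
# Assembly incidence matrix of the robot in Figure 4, transcribed from
# eq. (1): rows are link modules (module type in the last column), columns
# are joints (joint type, R or P, in the last row), and a nonzero entry is
# the connecting port (1-6, labeled like the faces of a die).
AIM = [
    [1, 3, 5, 0, 0, 0, 0, "B"],
    [6, 0, 0, 0, 0, 0, 0, "C1"],
    [0, 1, 0, 6, 0, 0, 0, "C1"],
    [0, 0, 2, 0, 6, 0, 0, "C1"],
    [0, 0, 0, 5, 0, 2, 0, "C2"],
    [0, 0, 0, 0, 5, 0, 3, "C2"],
    [0, 0, 0, 0, 0, 1, 0, "C2"],
    [0, 0, 0, 0, 0, 0, 2, "C2"],
    ["P", "R", "R", "R", "R", "P", "P", 0],
]

def joint_endpoints(aim, joint_col):
    # Each joint column has exactly two nonzero port entries: the indices
    # of the two link modules (rows) that joint connects.
    return [i for i, row in enumerate(aim[:-1]) if row[joint_col] != 0]
```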
Two matrices, namely the accessibility matrix and the path matrix, derived
from a given AIM are defined in this section to provide the accessibility
information from the base module to every pendant module in a branch-type
modular robot. The accessibility information enables us to formulate the
kinematics and dynamics of a general branch-type robot in a uniform way.
The links and joints of a serial-type robot can follow a natural order from the
base to the tip. A branch-type robot has more than one tip and no loops.
Therefore, the order of the links of a branch-type robot depends on the graph
traversing algorithm (Cormen et al., 1990). Let G = (V, E) represent the
kinematic graph of a branch-type modular robot with n + 1 link modules,
where V = {v_0, v_1, ..., v_n} represents the set of modules. The fixed base
module is denoted by v_0 and is always the starting point for the traversing
algorithm. The remaining modules are labeled by their traversing order i. The
traversing orders of
the links in the robot of Figure 4(a) are indicated by the numbers on the verti-
ces of the graph of Figure 4(b). This order is obtained by the depth-first-search
algorithm. Note that the farther the module is away from the base, the larger
its traversing order.
The accessibility matrix can be derived from the AIM once the traversing
order on the link modules is determined. For example, the accessibility matrix
of the graph G in Figure 4(b) is
A^*(G) = \begin{bmatrix}
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},    (2)

with both rows and columns indexed by v_0, v_1, ..., v_7.
From A^*(G), we can obtain the shortest route from the base to a pendant
link. This route is called a path. The pendant links correspond to the rows of
A^*(G) containing all 0s. The number of paths in a branching robot is equal
to the number of pendant links. Let link v_i be a pendant link. All link
modules on the path from the base to v_i are indicated by the nonzero entries
of column i of (A^*(G) + I_{(n+1)×(n+1)})^T. Collecting all the paths, we
obtain the path matrix:
For instance, the robot of Figure 4(a) contains three branches (paths). The three
paths can be represented as a 3 × 8 path matrix:
P(G) = \begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 & 1 & 1
\end{bmatrix},    (3)

with columns indexed by v_0, v_1, ..., v_7.
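The construction of eqs. (2) and (3) can be reproduced from the tree structure of Figure 4(b); the parent-list encoding below is an assumption for illustration.

```python
# Parent list for the kinematic tree of Figure 4(b), in depth-first
# traversal order (v0 is the fixed base).
parent = [None, 0, 1, 2, 0, 0, 5, 6]
n = len(parent)

# Accessibility matrix A* of eq. (2): A[i][j] = 1 iff v_j is reachable
# from v_i moving away from the base.
A = [[0] * n for _ in range(n)]
for j in range(1, n):
    i = parent[j]
    while i is not None:
        A[i][j] = 1
        i = parent[i]

# Pendant links are the rows of A* containing all zeros.
pendant = [i for i in range(n) if not any(A[i])]

# Path matrix of eq. (3): one row per pendant link p, marking every link
# on the base-to-p path (the nonzero entries of column p of (A* + I)^T).
P = [[1 if (A[i][p] or i == p) else 0 for i in range(n)] for p in pendant]
```

Running this reproduces the three paths of the Figure 4(a) robot, with pendant links v_3, v_4, and v_7.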
4. Geometry-Independent Models
T_{ij}(q_j) = T_{ij}(0) e^{\hat{s}_j q_j},    (4)

where \hat{s}_j ∈ se(3) is the twist of joint e_j expressed in frame j, and
T_{ij}(q_j), T_{ij}(0) ∈ SE(3). T_{ij}(0) is the initial pose of frame j relative
to frame i. Note that in the following context, the pose of a coordinate frame
refers to the 4 × 4 homogeneous matrix of the orientation and position of the
frame:

T_{ij}(0) = \begin{bmatrix} R_{ij}(0) & d_{ij}(0) \\ 0 & 1 \end{bmatrix},    (5)
where R ij (0) ∈ SO (3) and d ij (0) ∈ R 3 are the initial orientation and position of
link frame j relative to frame i respectively. The twist sˆ j of link assembly j
is the skew-symmetric matrix representation of the 6-vector line coordinate of
the joint axis, s j = (q j , p j ) ; p j , q j ∈ R 3 . p j = ( p jx , p jy , p jz ) is the unit-directional
vector of the joint axis relative to frame j , and q j = ( q jx , q jy , q jz ) = p j × r j , where
r j is the position vector of a point along the joint axis relative to frame j . For
revolute joints, s j = (0, p j ) ,and for prismatic joints, s j = (q j ,0) .
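A minimal numerical sketch of the exponential map in eq. (4), restricted to a revolute joint whose axis (unit direction $p$, point $r$ on the axis) is given in frame $j$. It uses Rodrigues' formula and plain numpy; the initial pose and axis data in the usage lines are illustrative, not taken from the chapter.

```python
# Exponential of a revolute twist, built as "rotate about an axis through
# a point": T maps x to R(x - r) + r, so points on the axis stay fixed.
import numpy as np

def hat(p):
    """3-vector -> 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def exp_revolute(p_axis, r_point, theta):
    """4x4 pose change of a revolute joint (Rodrigues' formula)."""
    ph = hat(p_axis)
    R = np.eye(3) + np.sin(theta) * ph + (1.0 - np.cos(theta)) * (ph @ ph)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = (np.eye(3) - R) @ r_point  # zero when the axis passes the origin
    return T

# Forward kinematics of one dyad, eq. (4): T_ij(q) = T_ij(0) e^{s^ q}.
T_ij0 = np.eye(4)  # illustrative initial pose
T_ij = T_ij0 @ exp_revolute(np.array([0.0, 0.0, 1.0]), np.zeros(3), np.pi / 2)
```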
A tree-type robot consists of several paths that give the shortest routes from the base to the respective pendant links. Each path can be considered as a serially connected submanipulator, so that the forward transformation can be derived as for a conventional industrial manipulator. The sequence of connected modules in a path is indicated in a row of the path matrix $P(G)$. Let $a = \{a_0, a_1, a_2, \ldots, a_n\}$ represent the links of path $k$. The base is $a_0 \equiv 0$ and the number of links in path $k$ is defined to be $|a| = n + 1$. For instance, path 1 of the robot in Figure 4(a) is $a = \{0, 1, 2, 3\}$. The forward kinematics from the base to the pendant link $a_n$ of path $k$ is given by

$$
T_{a_0 a_n}(q_{a_1}, \ldots, q_{a_n}) = \prod_{i=1}^{n} \left( T_{a_{i-1} a_i}(0)\, e^{\hat{s}_{a_i} q_{a_i}} \right). \quad (6)
$$

For a branch-type modular robot with several paths, the forward kinematics is

$$
T(q_1, q_2, \ldots, q_n) =
\begin{bmatrix} T_{a_0 a_n} \\ T_{b_0 b_m} \\ \vdots \end{bmatrix} =
\begin{bmatrix}
\prod_{i=1}^{n} \left( T_{a_{i-1} a_i}(0)\, e^{\hat{s}_{a_i} q_{a_i}} \right) \\
\prod_{i=1}^{m} \left( T_{b_{i-1} b_i}(0)\, e^{\hat{s}_{b_i} q_{b_i}} \right) \\
\vdots
\end{bmatrix}. \quad (7)
$$
Let $T_{a_0 a_n}$ be the forward transformation of path $k$ as indicated in eq. (6). The differential change in the position of the end-link $a_n$ is given by

$$
dT_{a_0 a_n} = \sum_{i=1}^{|a|-1} \frac{\partial T_{a_0 a_n}}{\partial q_{a_i}}\, dq_{a_i}
= \sum_{i=1}^{|a|-1} \left[ T_{a_0 a_{i-1}} \frac{\partial \left( T_{a_{i-1} a_i}(0)\, e^{\hat{s}_{a_i} q_{a_i}} \right)}{\partial q_{a_i}}\, T_{a_i a_n} \right] dq_{a_i}
= \sum_{i=1}^{|a|-1} \left[ T_{a_0 a_i} \hat{s}_{a_i} T_{a_i a_n} \right] dq_{a_i}, \quad (8)
$$

$$
T_{a_0 a_n}^{-1}\, dT_{a_0 a_n} = \sum_{i=1}^{|a|-1} T_{a_i a_n}^{-1} \hat{s}_{a_i} T_{a_i a_n}\, dq_{a_i}. \quad (9)
$$

Equation (9) is the differential kinematic equation of a path. Let $T_{a_0 a_n}^d$ denote the desired position of the end-effector. When it is in the neighborhood of a nominal position of the end-effector $T_{a_0 a_n}$, we have
$$
\log\left( T_{a_0 a_n}^{-1} T_{a_0 a_n}^d \right)
= \left( T_{a_0 a_n}^{-1} T_{a_0 a_n}^d - I \right)
- \frac{\left( T_{a_0 a_n}^{-1} T_{a_0 a_n}^d - I \right)^2}{2}
+ \frac{\left( T_{a_0 a_n}^{-1} T_{a_0 a_n}^d - I \right)^3}{3} - \cdots \quad (11)
$$

$$
\log\left( T_{a_0 a_n}^{-1} T_{a_0 a_n}^d \right) = \sum_{i=1}^{|a|-1} T_{a_i a_n}^{-1} \hat{s}_{a_i} T_{a_i a_n}\, dq_{a_i}. \quad (13)
$$

Explicit formulae for calculating the logarithm of elements of $SO(3)$ and $SE(3)$ were derived by Park and Bobrow (1994). Note that $\log(T_{a_0 a_n}^{-1} T_{a_0 a_n}^d)$ is an element of $se(3)$, so it can be identified with a $6 \times 1$ vector denoted by $\log(T_{a_0 a_n}^{-1} T_{a_0 a_n}^d)^\vee$, in which the first three and last three elements represent the positional and orientational differences between $T_{a_0 a_n}$ and $T_{a_0 a_n}^d$, respectively. Converting eq. (13) into the adjoint representation, we get

$$
\log\left( T_{a_0 a_n}^{-1} T_{a_0 a_n}^d \right)^\vee
= \mathrm{Ad}_{T_{a_0 a_n}^{-1}} \sum_{i=1}^{|a|-1} \mathrm{Ad}_{T_{a_0 a_i}}\, s_{a_i}\, dq_{a_i}, \quad (14)
$$

which can be written compactly as

$$
DT_k = J_k\, dq_k, \quad (15)
$$
where
$DT_k = \log(T_{a_0 a_n}^{-1} T_{a_0 a_n}^d)^\vee \in \mathbb{R}^{6\times1}$ is referred to as the pose difference vector for path $k$;
$J_k = A_k B_k S_k \in \mathbb{R}^{6\times(|a|-1)}$ is termed the body manipulator Jacobian matrix (Murray et al. 1994);
$A_k = \mathrm{Ad}_{T_{a_0 a_n}^{-1}} \in \mathbb{R}^{6\times6}$;
Automatic Modeling for Modular Reconfigurable Robotic Systems – Theory and Practice 59
$B_k = \mathrm{row}[\mathrm{Ad}_{T_{a_0 a_1}}, \mathrm{Ad}_{T_{a_0 a_2}}, \ldots, \mathrm{Ad}_{T_{a_0 a_n}}] \in \mathbb{R}^{6\times6(|a|-1)}$;
$S_k = \mathrm{diag}[s_{a_1}, s_{a_2}, \ldots, s_{a_n}] \in \mathbb{R}^{6(|a|-1)\times(|a|-1)}$; and
$dq_k = \mathrm{column}[dq_{a_1}, dq_{a_2}, \ldots, dq_{a_n}] \in \mathbb{R}^{(|a|-1)\times1}$.
Equation (15) defines the differential kinematics for path k . It can be utilized
in the Newton-Raphson iteration to obtain an inverse kinematics solution for a
given pose.
$$
DT = J\, dq, \quad (16)
$$

where
$DT = \mathrm{column}[DT_1, DT_2, \ldots, DT_m] \in \mathbb{R}^{6m\times1}$ is termed the generalized pose difference vector;
$J = ABS \in \mathbb{R}^{6m\times n}$ is termed the generalized body manipulator Jacobian matrix;
$A = \mathrm{diag}[A_1, A_2, \ldots, A_m] \in \mathbb{R}^{6m\times6m}$;
the coefficient $p_{ij}$ $(i = 1, 2, \ldots, m;\ j = 0, 1, 2, \ldots, n)$ is entry $(i, j)$ of the path matrix $P(G)$, and $m$ is the total number of paths;
$S = \mathrm{diag}[s_1, s_2, \ldots, s_n] \in \mathbb{R}^{6n\times n}$; and $dq = \mathrm{column}[dq_1, dq_2, \ldots, dq_n] \in \mathbb{R}^{n\times1}$.

The Newton-Raphson iteration proceeds as

$$
dq^{i+1} = J^* DT, \quad (17)
$$
$$
q^{i+1} = q^i + dq^{i+1}. \quad (18)
$$
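The iteration of eqs. (17)-(18) can be illustrated on a deliberately simple stand-in: a planar 2R arm with unit links and a position-only error in place of the full SE(3) pose difference vector. Here $J^*$ is taken to be the Moore-Penrose pseudoinverse; the target and initial guess are hypothetical.

```python
# Newton-Raphson inverse kinematics on a planar 2R arm (toy example).
import numpy as np

def fk(q):
    """End-point position of a 2R arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jac(q):
    """2x2 position Jacobian of the same arm."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

target = np.array([1.2, 0.8])     # reachable desired position
q = np.array([0.3, 0.5])          # initial guess
for _ in range(50):
    DT = target - fk(q)                    # pose-difference vector
    dq = np.linalg.pinv(jac(q)) @ DT       # eq. (17)
    q = q + dq                             # eq. (18)

print(np.round(fk(q), 6))  # converged end-effector position
```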
$$
dT_{0n} = \frac{\partial f}{\partial T_0}\, dT_0 + \frac{\partial f}{\partial \hat{s}}\, d\hat{s} + \frac{\partial f}{\partial q}\, dq. \quad (22)
$$
The differential dT0n can be interpreted as the difference between the nominal
position and the measured position.
described in eq. (4). Two assumptions are made for the dyad of links $v_{i-1}$ and $v_i$ of a modular robot chain: first, small geometric errors exist only in the initial position $T_{i-1,i}(0)$; second, the twist and the joint angle $q_i$ assume their nominal values throughout the calibration analysis. Hence, instead of identifying the modules' actual initial positions, joint twists and angle offsets, we look for a new set of local initial positions (local frames, called calibrated initial positions) in the calibration model, such that the twist of the joint retains its nominal value. In other words, the errors in a dyad are lumped into the initial position. Therefore, $d\hat{s}$ and $dq$ can be set to 0. Because $SE(3)$ has dimension six (three for positions and three for orientations), there can be only six independent quantities in $T_{i-1,i}(0)$, and hence six independent error parameters in a dyad. Denote the small error in the initial position of dyad $(v_{i-1}, v_i)$ as $dT_{i-1,i}(0)$; then

$$
dT_{i-1,i}(0) = T_{i-1,i}(0)\, \hat{\Delta}_i,
\qquad
\hat{\Delta}_i =
\begin{bmatrix}
0 & -\delta z_i & \delta y_i & dx_i \\
\delta z_i & 0 & -\delta x_i & dy_i \\
-\delta y_i & \delta x_i & 0 & dz_i \\
0 & 0 & 0 & 0
\end{bmatrix}, \quad (23)
$$

where $dx_i$, $dy_i$, and $dz_i$ are infinitesimal displacements along the $x$-, $y$-, and $z$-axes of link frame $i$, respectively, and $\delta x_i$, $\delta y_i$ and $\delta z_i$ are infinitesimal rotations about the $x$-, $y$-, and $z$-axes of link frame $i$, respectively.
Similar to the error model of a dyad, the gross geometric error $dT_{0n}$ between the actual end-effector position and the nominal position can be described as

$$
dT_{0n} = \hat{\Delta}_{0n} T_{0n} \quad (24)
$$

and

$$
\hat{\Delta}_{0n} = dT_{0n} T_{0n}^{-1} \quad (25)
$$
$$
= \begin{bmatrix}
0 & -\delta z_{0n} & \delta y_{0n} & dx_{0n} \\
\delta z_{0n} & 0 & -\delta x_{0n} & dy_{0n} \\
-\delta y_{0n} & \delta x_{0n} & 0 & dz_{0n} \\
0 & 0 & 0 & 0
\end{bmatrix}, \quad (26)
$$

where $\delta x_{0n}$, $\delta y_{0n}$, $\delta z_{0n}$ are the rotations about the axes of the base frame, and $dx_{0n}$, $dy_{0n}$, and $dz_{0n}$ are the displacements along the axes of the base frame, respectively. Note that the gross error $dT_{0n}$ is expressed in the base frame. Equation (25) follows the left multiplicative differential transformation of $T_{0n}$. The calibrated position of the end-effector becomes

$$
T_{0n}' = T_{0n} + dT_{0n}. \quad (27)
$$

Based on the assumptions, the errors in the dyads contribute to the gross error in the end-effector's position $dT_{0n}$. Since the geometric errors are all very small, the principle of linear superposition can be applied. We assume that the gross error $dT_{0n}$ is a linear combination of the errors in the dyads $dT_{i-1,i}(0)$, $(i = 1, 2, \ldots, n)$; then
$$
dT_{0n} = \sum_{i=1}^{n} T_{0,i-1}\, dT_{i-1,i}(0)\, e^{\hat{s}_i q_i}\, T_{i,n}. \quad (28)
$$

Equation (28) converts and sums the initial position errors of the dyads in base-frame coordinates. The forward kinematics of link frame $j$ relative to link frame $i$ ($i \le j$) is represented by $T_{ij}$; in particular, $T_{ij} = I_{4\times4}$ when $i = j$. Substituting eq. (23) into eq. (28) and right-multiplying by $T_{0n}^{-1}$,

$$
dT_{0n} T_{0n}^{-1} = \hat{\Delta}_{0n} \quad (29)
$$
$$
= \sum_{i=1}^{n} T_{0,i-1}\, T_{i-1,i}(0)\, \hat{\Delta}_i\, T_{i-1,i}(0)^{-1}\, T_{0,i-1}^{-1}, \quad (30)
$$

$$
\hat{\Delta}_{0n} = dT_{0n} T_{0n}^{-1} \approx \log(T_{0n}' T_{0n}^{-1}), \quad (31)
$$

$$
\log^\vee(T_{0n}' T_{0n}^{-1}) = \sum_{i=1}^{n} \mathrm{Ad}_{T_{0,i-1}}\left( \mathrm{Ad}_{T_{i-1,i}(0)}(\Delta_i) \right), \quad (32)
$$

$$
y = Ax, \quad (33)
$$
where

$y = \log^\vee(T_{0n}' T_{0n}^{-1}) \in \mathbb{R}^{6\times1}$;
$x = \mathrm{column}[\Delta_1, \Delta_2, \ldots, \Delta_n] \in \mathbb{R}^{6n\times1}$; and
$A = \mathrm{row}[\mathrm{Ad}_{T_{0,1}(0)},\ \mathrm{Ad}_{T_{0,1}}(\mathrm{Ad}_{T_{1,2}(0)}),\ \ldots,\ \mathrm{Ad}_{T_{0,n-1}}(\mathrm{Ad}_{T_{n-1,n}(0)})] \in \mathbb{R}^{6\times6n}$.

Stacking the measurement equations of all calibration configurations gives

$$
\bar{Y} = \bar{A} x, \quad (34)
$$

whose least-squares solution is obtained with the pseudoinverse $\bar{A}^\dagger$:

$$
x = \bar{A}^\dagger \bar{Y}. \quad (35)
$$
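The identification step of eqs. (33)-(35) amounts to an overdetermined linear least-squares problem. A minimal sketch with synthetic random data (the matrices stand in for the adjoint-built rows; no real calibration data is used):

```python
# Stack one y = A x equation per measured configuration and recover the
# lumped dyad error parameters x with the pseudoinverse, eq. (35).
import numpy as np

rng = np.random.default_rng(0)
n_dyads, n_meas = 4, 10
x_true = 1e-3 * rng.standard_normal(6 * n_dyads)   # small error parameters

A_stack = rng.standard_normal((6 * n_meas, 6 * n_dyads))  # stand-in for the
Y_stack = A_stack @ x_true                                # stacked A matrices

x_hat = np.linalg.pinv(A_stack) @ Y_stack   # x = A† Y
print(np.allclose(x_hat, x_true))           # exact recovery (noiseless data)
```

With noisy measurements the same expression returns the least-squares estimate; in practice more configurations than unknowns are measured so that `A_stack` has full column rank.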
Figure 8. (a) Calibration algorithm for modular robots; (b) dynamic model generation
4.4 Dynamics
Assume that the mass center of link assembly $j$ is coincident with the origin of the link module frame $j$. The Newton-Euler equation of this rigid link assembly with respect to frame $j$ is (Murray et al. 1994)

$$
F_j = \begin{bmatrix} f_j \\ \tau_j \end{bmatrix}
= \begin{bmatrix} m_j I & 0 \\ 0 & J_j \end{bmatrix}
\begin{bmatrix} \dot{v}_j \\ \dot{w}_j \end{bmatrix}
+ \begin{bmatrix} w_j \times m_j v_j \\ w_j \times J_j w_j \end{bmatrix}, \quad (36)
$$
where $F_j \in \mathbb{R}^{6\times1}$ is the resultant wrench applied at the center of mass relative to frame $j$. The total mass of link assembly $j$ is $m_j$ (equal to the sum of the masses of link $v_j$ and joint $e_j$). The inertia tensor of the link assembly about frame $j$ is $J_j$. Transforming eq. (36) into the adjoint representation, we have

$$
F_j = M_j \dot{V}_j - \mathrm{ad}_{V_j}^T (M_j V_j), \quad (37)
$$

where

• $M_j = \begin{bmatrix} m_j I & 0 \\ 0 & J_j \end{bmatrix} \in \mathbb{R}^{6\times6}$ is the generalized mass matrix;

• $V_j = \begin{bmatrix} v_j \\ w_j \end{bmatrix} \in \mathbb{R}^{6\times1}$ is the generalized body velocity, where $v_j$ and $w_j$ are $3\times1$ vectors defining the body translational velocity, $v_j = (v_x, v_y, v_z)^T$, and the angular velocity, $w_j = (w_x, w_y, w_z)^T$, respectively;

• $\mathrm{ad}_{V_j}^T \in \mathbb{R}^{6\times6}$ is the transpose of the adjoint matrix $\mathrm{ad}_{V_j}$ related to $V_j$:

$$
\mathrm{ad}_{V_j}^T = (\mathrm{ad}_{V_j})^T
= \begin{bmatrix} \hat{w}_j & \hat{v}_j \\ 0 & \hat{w}_j \end{bmatrix}^T
= \begin{bmatrix} -\hat{w}_j & 0 \\ -\hat{v}_j & -\hat{w}_j \end{bmatrix}; \quad (38)
$$

• $\hat{v}_j$ and $\hat{w}_j \in \mathbb{R}^{3\times3}$ are the skew-symmetric matrices related to $v_j$ and $w_j$, respectively; and

• $\dot{V}_j = \begin{bmatrix} \dot{v}_j \\ \dot{w}_j \end{bmatrix} \in \mathbb{R}^{6\times1}$ is the generalized body acceleration.
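The adjoint operators $\mathrm{Ad}_T$ and $\mathrm{ad}_V$ used throughout this section can be sketched in numpy for the $(v, w)$ twist ordering adopted in the text (conventions as in Murray et al. 1994); the sample transform $T$ below is illustrative, and a sanity check follows.

```python
# Adjoint of a homogeneous transform and Lie-bracket matrix of a twist.
import numpy as np

def hat(p):
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def Ad(T):
    """6x6 adjoint of T = [[R, d], [0, 1]] acting on (v, w) twists."""
    R, d = T[:3, :3], T[:3, 3]
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[:3, 3:] = hat(d) @ R
    A[3:, 3:] = R
    return A

def ad(V):
    """6x6 matrix ad_V of eq. (38) (before transposition), V = (v, w)."""
    a = np.zeros((6, 6))
    a[:3, :3] = hat(V[3:])
    a[:3, 3:] = hat(V[:3])
    a[3:, 3:] = hat(V[3:])
    return a

c, s = np.cos(0.7), np.sin(0.7)
T = np.array([[c, -s, 0.0, 0.2],
              [s,  c, 0.0, -0.1],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
# Ad is a group homomorphism, so Ad(T^-1) equals Ad(T)^-1:
print(np.allclose(Ad(np.linalg.inv(T)), np.linalg.inv(Ad(T))))  # True
```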
FORWARD ITERATION

The generalized velocity and acceleration of the base link are given initially:

$$
V_b = V_0 = (0, 0, 0, 0, 0, 0)^T, \quad (39)
$$
$$
\dot{V}_b = \dot{V}_0 = (0, 0, g, 0, 0, 0)^T, \quad (40)
$$

where $V_b$ and $\dot{V}_b$ are expressed in the base frame 0. We assume that the base frame coincides with the spatial reference frame. The generalized acceleration (eq. (40)) is initialized with the gravitational acceleration $g$ to compensate for the effect of gravity. Referring to Figure 6, the recursive body velocity and acceleration equations can be written as

$$
V_j = \mathrm{Ad}_{T_{ij}^{-1}}(V_i) + s_j \dot{q}_j, \quad (41)
$$

where all quantities, if not otherwise specified, are expressed in link frame $j$;

• $T_{ij}$ is the forward transformation of frame $j$ relative to frame $i$ with joint angle $q_j$, and $\mathrm{Ad}_{T_{ij}^{-1}} = (\mathrm{Ad}_{T_{ij}})^{-1}$; and

• $s_j \in \mathbb{R}^{6\times1}$ is the twist coordinate vector of joint $e_j$.
BACKWARD ITERATION

The backward iteration of the branch-type robot starts simultaneously from all the pendant link assemblies. Let $V_{PD} \subset V$ be the set of pendant links of the branch-type robot. Starting from every pendant link assembly $v_{d_i}$ ($v_{d_i} \in V_{PD}$), the wrench on link assembly $i$ is obtained recursively as

$$
F_i = \sum_{j \in V_{Hi}} \mathrm{Ad}_{T_{ij}^{-1}}^T (F_j) - F_i^e + M_i \dot{V}_i - \mathrm{ad}_{V_i}^T (M_i V_i). \quad (44)
$$
The applied torque/force to link assembly $i$ by the actuator at its input joint $e_i$ can be calculated by

$$
\tau_i = s_i^T F_i. \quad (45)
$$
Assembling the link equations over the whole structure gives the matrix form

$$
V = G S \dot{q}, \quad (46)
$$
$$
\dot{V} = G_{T_0} \dot{V}_0 + G S \ddot{q} + G A_1 V, \quad (47)
$$
$$
F = G^T F^E + G^T M \dot{V} + G^T A_2 M V, \quad (48)
$$
$$
\tau = S^T F, \quad (49)
$$

where

- $V = \mathrm{column}[V_1, V_2, \ldots, V_n] \in \mathbb{R}^{6n\times1}$ is the generalized body-velocity vector;
- $\dot{V} = \mathrm{column}[\dot{V}_1, \dot{V}_2, \ldots, \dot{V}_n] \in \mathbb{R}^{6n\times1}$ is the generalized body-acceleration vector;
- $G_{T_0} = \mathrm{column}[\mathrm{Ad}_{T_{01}^{-1}}, \mathrm{Ad}_{T_{02}^{-1}}, \ldots, \mathrm{Ad}_{T_{0n}^{-1}}] \in \mathbb{R}^{6n\times6}$; and

$$
G = \begin{bmatrix}
I_{6\times6} & 0 & 0 & \cdots & 0 \\
r_{12} \mathrm{Ad}_{T_{12}^{-1}} & I_{6\times6} & 0 & \cdots & 0 \\
r_{13} \mathrm{Ad}_{T_{13}^{-1}} & r_{23} \mathrm{Ad}_{T_{23}^{-1}} & I_{6\times6} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{1n} \mathrm{Ad}_{T_{1n}^{-1}} & r_{2n} \mathrm{Ad}_{T_{2n}^{-1}} & r_{3n} \mathrm{Ad}_{T_{3n}^{-1}} & \cdots & I_{6\times6}
\end{bmatrix} \in \mathbb{R}^{6n\times6n}.
$$

Note that $\Gamma^*(G) = [r_{ij}] \in \mathbb{R}^{(n+1)\times(n+1)}$ is the accessibility matrix. The matrix $G$ is called a transmission matrix. Substituting eqs. (46)-(48) into eq. (49), we obtain the closed-form equation of motion for a branch-type modular robot with $n + 1$ modules (including the base module):

$$
\tau = M(q)\, \ddot{q} + C(q, \dot{q})\, \dot{q} + N(q), \quad (50)
$$
where
$$
M(q) = S^T G^T M G S, \quad (51)
$$
$$
C(q, \dot{q}) = S^T G^T (M G A_1 + A_2 M) G S, \quad (52)
$$
$$
N(q) = S^T G^T M G_{T_0} \dot{V}_0 + S^T G^T F^E. \quad (53)
$$
The mass matrix is $M(q)$; $C(q, \dot{q})$ represents the centrifugal and Coriolis accelerations; $N(q)$ represents the gravitational and external forces. The procedure for obtaining the closed-form equation of motion (eq. (50)) is summarized in Figure 8(b). It has been successfully implemented in Mathematica code.
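A structural sanity check of eq. (51): for any block-lower-triangular transmission matrix $G$, joint-twist matrix $S$, and positive-definite link inertias $M$, the assembled mass matrix $S^T G^T M G S$ is symmetric positive semi-definite. The sketch below uses random stand-in blocks purely to illustrate how the factors are put together; it is not the chapter's Mathematica implementation.

```python
# Assemble M(q) = S^T G^T M G S from block factors and verify its structure.
import numpy as np

rng = np.random.default_rng(1)
n = 3  # three joint modules

# M = diag(M_1..M_n), each M_j symmetric positive definite (6x6)
M = np.zeros((6 * n, 6 * n))
for j in range(n):
    B = rng.standard_normal((6, 6))
    M[6*j:6*j+6, 6*j:6*j+6] = B @ B.T + 6 * np.eye(6)

# G: identity blocks on the diagonal, arbitrary coupling blocks below
G = np.eye(6 * n)
for i in range(1, n):
    for j in range(i):
        G[6*i:6*i+6, 6*j:6*j+6] = rng.standard_normal((6, 6))

# S = diag(s_1..s_n), one 6x1 twist per joint
S = np.zeros((6 * n, n))
for j in range(n):
    S[6*j:6*j+6, j] = rng.standard_normal(6)

Mq = S.T @ G.T @ M @ G @ S
print(np.allclose(Mq, Mq.T))                       # symmetric
print(bool(np.all(np.linalg.eigvalsh(Mq) >= -1e-9)))  # positive semi-definite
```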
5. Configuration Optimization
Note that all the dimensions of the modules have been designed and fixed at the selection stage. With a given set of modules, the number of possible robot-assembly configurations is always finite. Therefore, the parameter space for the optimization is discrete, and combinatorial optimization methods can be applied. Exhaustive search algorithms can be
used to find the exact optimal solution, but the exponential growth of the data
set impedes the efficient implementation of such an algorithm. Random-search
techniques such as genetic algorithms (GA) (Chen 1994) and simulated anneal-
ing (SA) (Paredis & Khosla 1995) are more suitable for such problems. Transi-
tion rules for data points required in GA and SA can be easily implemented
based on a data-representation scheme such as AIM.
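A minimal genetic-algorithm loop over a discrete design space, as a stand-in for searching assembly configurations encoded by AIMs. The bit-string encoding and the fitness function are toy placeholders (the real fitness would be the ACEF discussed below), shown only to make the selection/crossover/mutation transition rules concrete.

```python
# Toy GA: truncation selection, one-point crossover, point mutation.
import random

random.seed(0)
L, POP, GENS = 12, 20, 40

def fitness(bits):            # placeholder for an AIM evaluation function
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                 # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, L)         # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(L)              # point mutation
        child[i] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # approaches the maximum L as generations pass
```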
The assembly-configuration evaluation function (ACEF) maps an AIM to a real number. An AIM with a large ACEF value represents a good assembly configuration. The ACEF consists of two parts: task and structure evaluations.
Task evaluation is performed according to the given task specifications: the task points (the positions of the end-effector) and a designated criterion measure, such as dexterity or manipulability. A workspace check on the task points is executed before computing the measures, in order to filter out inaccessible points. Structure evaluation assesses the kinematic constraints (joint
singularity and redundancy, link interference) and environmental constraints
(workspace obstacles) imposed on the robot in accomplishing the assigned
task. The proposed ACEF assumes the modular robot is operated in a struc-
tured environment, and that there are no obstacles in the workspace. An auxil-
iary function, termed the module-assembly preference (MAP) is defined on the
AIM to exclude undesirable kinematic features. Detailed implementation of
the task and structure evaluation can be obtained from Chen (1996).
The evolution of the fitness values is shown in Figure 13(b). As can be seen, the evolutionary algorithm does increase the fitness values generation by generation. Although the best solution may not be guaranteed, a suboptimal solution can always be found and, in return, the efficiency of finding a solution is increased.
Figure 13. (a) Optimal assembly configuration; (b) average and maximum fitness in each generation
(two are active and one is passive) and a passive spherical joint. Once the ge-
ometry is determined, the workspace analysis is performed. From the result of
this analysis, the lengths of the rigid links and connectors are determined. Be-
cause of the modular design, the actuator modules can be freely located at the
nine revolute joints. The workspace of the robot changes according to the loca-
tions of the actuator modules. A disk-shaped moving platform is attached to
the three branches. An end-mill tool actuated by an intelligent motor is
mounted at the center of the platform. This motor uses the same control interface as the standard actuator modules. Because of the lack of a force sensor, the task is only carried out in a simulated manner, i.e., the end-mill tool goes through the milling path without touching the surface of the workpiece.
The 1-DOF linear motion stage uses two standard modules: one rotary module
to drive the linear slide and one gripper module to hold the workpiece, to en-
sure uniformity in the workcell control. The specifications of the robots and the
motion stage are listed in Table 1.
…to complete the entire workcell set-up, excluding the time spent on the preliminary design stage.
Light-machining Workcell
7-DOF Redundant Serial Robot
Work envelope Approx. sphere, SR = 1200mm
Max speed 750 mm/s
Repeatability +/- 0.10 mm
Max Payload 5 Kg (excluding end-effector)
Weight 16 Kg (excluding base)
6-DOF RRRS Articulated Parallel Robot
Work envelope Approx. hemisphere, SR = 500mm
Max speed 500 mm/s
Repeatability +/- 0.05mm
Max Payload 25 Kg (excluding end-effector)
Weight 30 Kg (excluding base)
1-DOF Linear Motion Stage
Effective stroke L = 1500mm
Max speed 500 mm/s
Repeatability +/- 0.025mm
Max Payload 45 Kg (excluding fixture)
Weight 35 Kg
Table 1. Specifications of the light-machining workcell
8. Conclusion
Task Sequence
1 Robot A picks up workpiece from fixture
2 Robot A places workpiece on the motion stage
3 Motion stage moves workpiece under Robot B
4 Robot B performs milling task
5 Robot A shifts locations of un-processed workpieces
6 Robot B finishes milling task
7 Motion stage moves processed workpiece back
8 Robot A picks up processed workpiece from motion stage
9 Robot A places processed workpiece to the fixture
*Robot A: 7-DOF serial robot, Robot B: 6-DOF parallel robot
Table 2. Task Sequence
9. References
Kinematic Design and Description of Industrial Robotic Chains

Peter Mitrouchev
1. Introduction
Today, industrial robots can replace humans in carrying out various types of operations. They can serve machine tools as well as carry out tasks such as welding, handling, painting, assembling, dismantling, founding, forging, packaging and palletizing, in the mechanical, automotive, aerospace, electronics and other industries. However, the complexity of the industrial process poses difficult problems for the insertion and generation of the movements of a robot: the working environment of the robot is often complex and varied (for example, the presence of obstacles during the execution of a task).
One of the objectives in the computer-aided design (CAD) of robots is the validation of their topological structures. The robot-design engineer puts forward assumptions regarding the arrangement of the links and joints of the mechanical articulated system. A first validation of this choice is of a geometrical nature (Merlet, 1996). At first sight the design of a mechanical architecture for a robot appears rather simple, and yet it presents a very complex basic difficulty, as it must take into account not only the possibilities of mechanical realization but also the possibilities of control development, which requires the generation of a mathematical model. The latter strongly affects the mechanical design if good performance is sought. Many
methods for mechanism description appeared with the creation of CAD systems (Warnecke, 1977; Coiffet, 1992). The existing methods may be separated into two categories:
the current industrial robots with planar chains have a structure created by the kinematic graphs of MMT (Manolescu et al., 1987; Ma & Angeles, 1991).
The morphological (topological) synthesis of kinematic chains has, for a long
time, been the subject of many papers. There are different methods for number
synthesis of planar kinematic chains with simple revolute joints, with different
degrees of mobility and different numbers of links and joints. These chains are
usually called “planar pin-joined” chains in MMT. While number synthesis
originates from kinematics of mechanisms, all methods entail operations on
graphs, which in one way or another, represent kinematic chains.
There exist different methods for the kinematic synthesis of planar chains with simple joints (Tischler et al., 1995; Belfiore, 2000; Rao & Deshmukh, 2001): intuition and inspection (Crossley, 1964), and graph theory (Dobrjanskyi and Freudenstein, 1967; Woo, 1967). Other methods are based on the transformation of binary chains (Mruthyunjaya, 1979; Mruthyunjaya, 1984-a; Mruthyunjaya, 1984-b; Mruthyunjaya, 1984-c), on the concept of Assur groups (Manolescu et al., 1987; Manolescu, 1964; Manolescu, 1979; Manolescu, 1987), or on Franke's notation (Davies & Crossley, 1966; Crossley, 1966). More recently, methods based on genetic algorithms or neural networks have also been used (Tejomurtula & Kak, 1999; Abo-Hamour et al., 2002; Cabrera et al., 2002; Laribi et al., 2004).
The analysis of existing methods shows that several of them are directed at developing a mathematical model for the control design of the robot. However, concerning the topological description of chains and robots, only Roth-Pieper's method (Roth, 1976; Pieper & Roth, 1969) tends towards mechanism description with a view to classifying robots. Generally speaking, the problem of mechanism synthesis falls into three sub-problems.
This chapter relates in particular to the second sub-problem. Its principal goal is to present an overview of the chronology of the design of an industrial robot kinematic chain. The chapter starts with a brief reminder of the theory of Modular Structural Groups (MSG) and of the connectivity and mobility laws of MMT, presented in § 2. Afterwards, a new method for the structural synthesis of planar link chains in robotics is presented in § 3. It is based on the notion of logical equations. Various levels of abstraction are studied concerning the complexity of the structure. This permits the synthesis of planar chains with various degrees of complexity, expressed by the number of links, joints and the
Kinematic design and description of industrial robotic chains 85
degree of mobility. The logical equations allow the association of MSGs of type A and closed chains of type G. The rules for the association of groups are also presented. The aim is to execute all the possible combinations to join or transform links in order to obtain as many structures as possible, while avoiding those which are isomorphic. The association of two groups allows the elaboration of different closed chains of upper level. However, there are some defective structures which do not respect the connectivity and mobility laws, and great care has been taken to avoid them. The problem of degenerated structures is central in their synthesis; it especially concerns chains with two and more degrees of mobility. The problems of defect, degeneration and isomorphism are then addressed. Later, a method for the description of chains by contours and molecules is proposed in § 4. It makes it possible to reduce the number of topological structures of the robots concerning frame and end-effector position by comparing their respective molecules. This method is then applied in § 5 to describe the structures thus obtained (by the logical-equations approach) and the topology of the principal structure of industrial robots in the initial phase of their design. Finally, a classification of industrial robot structures by different levels of complexity is presented in § 6.
Table 1. Planar link types (notation: N2, N3, N4, …)
…in order to better appraise the topology of the structure. Nevertheless, this presentation is difficult to manipulate. Any kinematic structure may be transformed into Crossley's inverse graph (Crossley, 1964) by replacing every link (binary, ternary, …) by a point; lines linking the points concerned represent the joints themselves.
- robots with simple (open) structure: one can traverse all the kinematic joints by making an open chain. They are noted Ai,j, where i and j are respectively the degree of mobility and the number of links of the structure (cf. § 3.1).
- robots with complex (arborescent or closed) structure: one can traverse all the kinematic joints by making a closed loop. They are noted Gk,l, where k and l are respectively the degree of mobility and the number of links of the structure (cf. § 3.1).
- the main structure which generates the main motion of the robot and upon
which is situated the rest of the MS,
- the regional structure which is composed of the arm and of the forearm
(mechanical arm),
- the local structure which is the wrist of the robot often with three degrees
of freedom in rotation.
$$
M = 6N^* - \sum_{k=1}^{5} k\, C_k \quad (1)
$$
If the kinematic chain only contains simple rotary joints of class 5 (one degree
of freedom joint), instead of equation (1), the Tchebychev-Grübler equation is
used:
M = 3N* − 2C5 (2)
In this chapter only planar structures with simple rotary joints of class 5 shall
be considered. If the value of M in Tchebychev-Grübler’s equation (2) is M=0 it
becomes:
3N* = 2C5 (3)
Some of the most characteristic MSGs (called Assur groups) resulting from
equation (3) are presented in Table 2 by their kinematic graphs or structural
diagrams (Manolescu, 1964).
Group   N*   C5
A0,2    2    3
A0,4    4    6
A0,6    6    9
…       …    …

Table 2. Modular Structural Groups (MSGs) with zero degree of mobility
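The Tchebychev-Grübler count of eq. (2) is easy to verify for the groups of Table 2; a two-line check (link and joint counts taken from the table):

```python
# Mobility M = 3 N* - 2 C5 for planar chains with class-5 rotary joints.
def mobility(n_star, c5):
    return 3 * n_star - 2 * c5

groups = {"A0,2": (2, 3), "A0,4": (4, 6), "A0,6": (6, 9)}  # (N*, C5)
print({name: mobility(*nc) for name, nc in groups.items()})
# every MSG in Table 2 satisfies M = 0
```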
3N* − 1 = 2C5 (4)
Group   N*   C5
A1,3    3    4
A1,5    5    7
…       …    …

Table 3. MSGs with one degree of mobility
Finally for the MSGs adding two degrees of mobility to the structures (M=2),
one obtains:
3N* − 2 = 2C5 (5)
Group   N*   C5
A2,4    4    5
A2,6    6    8
…       …    …

Table 4. MSGs with two degrees of mobility
C5 = I + N (6)
For the planar kinematic structures the general law of mobility giving the rela-
tionship between the number of degrees of mobility and the joints is:
C5 = M + 3I + 3 (7)

N = M + 2I + 3 (8)
Those two latter equations (7 and 8) lead to the first manner of calculating the
number of joints C5 and the number of links N for planar kinematic structures
with different degrees of relative mobility M. The general law of mobility giving
the relationship between the number of degrees of mobility and the joints is
(Manolescu, 1987):
$$
M + (1 + I)\,H = \sum_{k=F+1}^{5} (6 - k)\, C_k \quad (9)
$$

with H = 6 − F, where F is the number of imposed constraints in a chain. For planar kinematic structures F = 3, hence H = 3. Equation (9) becomes:

M + 3(1 + I) = C5 + 2C4 (10)
This latter equation allows the second manner of calculating the number of links
and joints for planar kinematic structures with different degrees of mobility.
3.1.1 Notations
We note by:
- Ai,j a MSG (Assur group or its derivative) which may be open or closed.
There are always some exterior (free) joints which only become active
when the group is in connection with a joint shared with a chain or a
mechanism (Manolescu, 1964),
- Gk,l a closed group (structure) without exterior joints.
The first mark represents the degree of mobility of the group and the second
one represents the number of its links (cf. Table 5).
Table 5. Structural diagrams and types of joints
It can be noted that the association of two closed structures is not possible.
The association of two groups may be done by two operations: direct junction
and transformation of links. They may be executed separately or together. The
aim is to use all possible combinations to join or transform the links in order to obtain as many structures as possible (for given N and C5), while avoiding those which are isomorphic. Some examples are shown in Table 6: association by direct junction, and association by transformation of links (a binary link becomes ternary, a ternary link becomes quaternary, a binary link becomes quaternary, and so on).

Table 6. Rules for associations of groups
C5 = 4 + 3I (11)

and

N = 4 + 2I (12)

For I=0, equations (11) and (12) give C5=4 and N=4. Therefore only one closed planar structure of type G1,4 can be synthesised, from the following logical equation:

A0,2 + A0,2 = G1,4 (13)

Equation (6) gives C5=N. For C5=N=4 there is a mono-contour mobile structure called a group of type G4. For M=1, equation (10) gives:

C5 + 2C4 = 4 (14)

The possible solutions of this equation are: a) C4=0, C5=4; b) C4=1, C5=2; and c) C4=2, C5=0. From solution a), only one closed planar structure of type G4, noted G1,4, can be synthesised by the logical equation (13) above.
For I=1, equations (11) and (12) give C5=7 and N=6. The law for generating a group of elementary modules allows the calculation of the number of links which are ternary and higher (quaternary, quintary and so on) in a closed chain (Manolescu et al., 1972):
$$
2I = \sum_{j=3}^{R} (j - 2)\, N_j \quad (15)
$$
For I=1, equation (15) gives N3=2. From equation (10) one deduces:

7 = C5 + 2C4 (16)

The possible solutions of this equation are: a) C4=0, C5=7; b) C4=1, C5=5; c) C4=2, C5=3; and d) C4=3, C5=1. The number of links is calculated by equation (6). It becomes:

N = C5 − I = 6 (17)

N2 = N − N3 (18)
The second-level planar structures of type G1,6 are obtained from the following logical equations:

G1,4 + A0,2 = G1,6 (19)

A0,4 + A0,2 = G1,6 (20)

Both equations (19) and (20) give two solutions: Watt's and Stephenson's structures. The latter are described by their kinematic graphs (structural diagrams) in Table 7.
Table 7. Watt's and Stephenson's structures of type G1,6, obtained by junction and by junction and transformation of links (kinematic graphs)
It may be noted that there are three isomorphic structures, which restricts their number to two. Watt's and Stephenson's structures are generated from a closed immobile structure composed of two ternary links, by adding to it four binary links as presented in Figure 4.
The closed immobile structures have no degree of mobility. They are hyperstatic systems and thus invariant with respect to the mobility parameter. The theory of closed structures is applied just as much in robotics as in civil engineering (bridges, roadways, seismic structures, etc.), and it applies equally to planar and spatial structures.
According to expressions (11) and (12), for I=2 one obtains C5=10 and N=8; in other words, the structure has 8 links and 10 joints. According to equation (15), the number of ternary and quaternary links satisfies:

N3 + 2N4 = 4 (21)

The possible solutions of this equation and the associated closed structures are given in Table 8.

Table 8. Solutions of equation (21) and their associated closed structures
For the case of a planar structure, the number of binary links is:

$$
N_2 = N_0 + \sum_{j=4}^{R} (j - 3)\, N_j, \qquad R = I + 2, \quad (22)
$$

$$
N_2 = 4 + \sum_{j=4}^{R} (j - 3)\, N_j. \quad (23)
$$
For I=0 and I=1, equation (23) has no physical meaning. For j≤3 the number of binary links is N2=4; thus, to have a mobile mechanism one needs four binary links. Hence, for j=4:

N2 = 4 + N4 (24)
The possible solutions of this equation are: a) N2=4, N4=0, b) N2=5, N4=1, c)
N2=6, N4=2 and d) N2=7, N4=3. The three solutions of (24) which respect the
solutions of (21) are a), b) and c).
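The admissible link-type counts can be enumerated mechanically: the solutions of eq. (21) in non-negative integers, combined with the binary-link count of eq. (24). A short sketch:

```python
# Non-negative solutions of N3 + 2*N4 = 4 (eq. (21)), with N2 = 4 + N4
# (eq. (24)); each tuple is (N2, N3, N4).
def solutions():
    out = []
    for n4 in range(3):           # 2*N4 <= 4
        n3 = 4 - 2 * n4
        out.append((4 + n4, n3, n4))
    return out

print(solutions())  # → [(4, 4, 0), (5, 2, 1), (6, 0, 2)]
```

These are exactly the three compatible solutions a), b) and c) named in the text; d) (N2=7, N4=3) violates eq. (21).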
The third-level planar structures of type G1,8 are obtained from the following logical equations:

G1,6 + A0,2 = G1,8 (25)

G1,4 + A0,4 = G1,8 (26)

A0,4 + A0,4 = G1,8 (27)

A0,2 + A0,6 = G1,8 (28)
Equation (25) gives 12 structures; equation (26) gives two structures and one double (isomorphic); equation (27) gives one structure, one defect and one double. Finally, equation (28) gives one structure and two doubles. Table 14 (cf. § 4) recapitulates the sixteen structures that are solutions of these four logical equations.
For example, A0,4 + A0,4 = G1,8 is obtained by direct junction.
According to expressions (11) and (12), for I=3 one obtains C5=13 and N=10. The number of links which are ternary or greater is calculated from equation (15). It becomes:

6 = N3 + 2N4 + 3N5 (29)
The possible solutions of this equation and their associated closed structures are given in Table 10.

Table 10. Solutions of equation (29) and their associated closed structures

N2 = 4 + N4 + 2N5 (30)

The possible solutions are: a) N2=4, N4=0, N5=0; b) N2=5, N4=1, N5=0; c) N2=6, N4=2, N5=0; d) N2=6, N4=0, N5=1; e) N2=7, N4=3, N5=0; f) N2=7, N4=1, N5=1; g) N2=8, N4=0, N5=2; and h) N2=8, N4=4, N5=0.
The solutions of (30) which respect the solutions of (29) are a), b), c), d), e), f) and g). Thus the fourth-level planar structures of type G1,10 are obtained from the following logical equations:

A0,2 + A0,8 = G1,10 (34)

A0,6 + A0,4 = G1,10 (35)

Equation (31) gives 50 structures, (32) gives 95 structures, (33) gives 57 structures, (34) gives 18 structures and (35) gives 10 structures. In total, 230 structures are obtained, as also found by Woo (Woo, 1967) using graph theory.
For M=2, equations (7) and (8) give:

C5 = 5 + 3I (36)

and

N = 5 + 2I (37)

For I=0 the solutions of equations (36) and (37) are C5=5 and N=5. The only
kinematic structure of type G2,5 is provided by the following schematic:

A0,2 + A1,3 = G2,5
For I=1, expressions (36) and (37) give C5=8 and N=7. The four second level
planar structures of type G2,7 are shown in Table 11:
Kinematic design and description of industrial robotic chains 101
Structures 1 and 3 are partial; structure 2 is total and structure 4 is
fractionated (graphs not reproduced).

Table 11. Planar structures of type G2,7
For I=2, the solutions of equations (36) and (37) are respectively C5=11 and
N=9. The third level planar structures of type G2,9 are obtained from the
following logical equations:

G2,7 + A0,2 = G2,9 (38)
G2,5 + A0,4 = G2,9 (39)

The forty structures produced by these four equations were also obtained by
Manolescu (Manolescu, 1964) using the method of Assur groups.
For M=3, equations (7) and (8) give C5 = 6 + 3I and N = 6 + 2I. For I=0 one
obtains C5=6 and N=6. The only kinematic structure of type G3,6 is provided
by the following schematic:

A1,3 + A1,3 = G3,6
For I=1, one obtains C5=9 and N=8. The six second level planar structures of
type G3,8 are presented in Table 12:
G3,6 + A0,2 = G3,8 and G2,5 + A1,3 = G3,8

Structure 1 is partial, structures 2, 3, 4 and 6 are total, and structure 5
is fractionated (graphs not reproduced).

Table 12. Planar structures of type G3,8
One can notice that structures 2 and 6 are isomorphic, which restricts their
number to five.
For I=2, one obtains C5=12 and N=10. The third level planar structures of type
G3,10 are obtained from the following logical equations:

G3,8 + A0,2 = G3,10 (42)
G3,6 + A0,4 = G3,10 (43)
G2,7 + A1,3 = G3,10 (44)
G2,5 + A1,5 = G3,10 (45)
G1,6 + A2,4 = G3,10 (46)
G1,4 + A2,6 = G3,10 (47)
The ninety-seven structures produced from these six relations were also ob-
tained by T.S. Mruthyunjaya (Mruthyunjaya, 1984-a; Mruthyunjaya, 1984-b;
Mruthyunjaya, 1984-c) using the method of transformation of binary chains.
These mechanisms with three degrees of mobility are often present in the de-
sign of industrial robots.
Apart from the obvious way of describing a structure by its kinematic graph,
the following two ways of describing the planar structures thus obtained are
proposed.
notation:  α  β  γ  δ  ε  …
C5:        4  5  6  7  8  …

Table 13. Notation of contours
Table 15 represents the sixteen structures with one degree of mobility
obtained by the logical equations method presented in § 3.2.3. Their contours
and molecules are also shown.
(The sixteen structures G1-81 to G1-816 are listed with their contour
notations — for example β–α–α–Δ for G1-82, γ–α–α–Γ for G1-83 and α–β–α–Δ for
G1-810 — together with their molecules; the kinematic graphs are not
reproduced here.)

Table 15. Closed structures of type G1,8 and their molecules (cf. § 3.2.3)
The presented method for describing mechanisms is applied below to the
description and classification of the main structures of industrial robots
with different degrees of mobility.
The simplest structures applied in robotic design are the open-loop simple
chains. In this case the definitions of contours and molecules have no
physical sense. Such a structure is shown in Figure 5.
For example, the Stäubli RX90L industrial robot (average carrier) has a
simple (open) kinematic chain. With a payload capacity of 6 kg and an
operating range of 1185 mm, it is used in different industrial fields such as
painting, handling, serving of machine tools, etc.
(source: http://www.staubli.com/web/web_fr/robot/division.nsf).
5.2.1 Robot with a main structure having one degree of mobility and I=0
For M=1 and I=0, the simplest structure is the four-link mechanism (cf. §
3.2.1). This structure has only one closed loop, to which correspond two
identical contours α and Α. An end-effector is attached to one of its links,
allowing it to manipulate objects. The result of this operation is the
mechanism corresponding to the main structure of the level 1 Pick and Place
industrial robot. The structure is shown in Figure 6 (molecule: α−Α).

This is the simplest robot with a closed structure. Its task consists in
moving light parts from one place to another, which explains its name "Pick
and Place". This robot is essentially used in the clock and pharmaceutical
industries.
5.2.2 Robot with a main structure having two degrees of mobility and I=0
For M=2 and I=0, there is only one possible kinematic chain, obtained by the
schematic of § 3.3. The kinematic graph of this robot's main structure is
shown in Figure 7 (molecule: β−Β). In order for the robot to be able to
manipulate objects, this planar structure is connected to a frame by
transforming one of its binary links into a ternary one (the choice is
arbitrary). Finally, an end-effector is added by the same process. The result
of these operations is the mechanism corresponding to the main structure of
the level 1 HPR Hitachi industrial robot.
5.2.3 Robot with a main structure having two degrees of mobility and I=1
In this case there are many possible structures generated from the four
structures of Table 11, according to the choice of frame position. Let us
consider solution number one of the first column (cf. Table 11, § 3.3). By
the same process as before, linking this structure to a frame and adding an
end-effector to it, a structure is obtained corresponding to the main
structure (cf. fig. 3) of the level 2 AKR-3000 robot (molecule: β−α−Δ). Being
able to manipulate a weight up to 15 daN, this robot is essentially used for
paintwork.
5.2.4 Robot with a main structure having one degree of mobility and I=2
For M=1 and I=2 there are many possible structures (cf. Table 15). Structure
G1-810 has been chosen here. By linking this structure to a frame and adding
an end-effector to it, a structure is obtained corresponding to the main
level 3 structure of the Mitsubishi AS50VS robot presented in Figure 9
(molecule: α−β−α−Δ).
5.2.5 Robot with a main structure having two degrees of mobility and I=2
The starting point for generating the kinematic structure is the first
logical equation of Table 7 (Watt's structure). For the desired robot, the
G1,6 structure thus obtained lacks one degree of mobility and five links. The
following operations allow its completion (cf. fig. 10). Adding a frame and
an end-effector to this structure, the resulting mechanism corresponds to the
main structure of the level 4 HPR Andromat robot.
Figure 10. Evolution of the generation of the Andromat robot topological chain
(source: http://arbejdsmiljoweb.dk/upload/rap_1_doc.doc)
Among the one hundred and ten available structures of type G2,11, a robot
manufacturer has implemented the solution above in order to design the main
structure of the Andromat robot. According to the rules defined in § 3.1.2,
the frame, initially a quaternary link, was transformed into a quinary one,
and the binary link to which the end-effector was attached into a ternary
one.
This robot is equipped with a pantographic system with a working range of
2.5 m and a payload range from 250 kg up to 2000 kg. The Andromat is a
world-renowned manipulator, widely and successfully used in the foundry and
forging industries, enabling operators to lift and manipulate heavy and
awkward components in hostile and dangerous environments (source:
http://www.pearsonpanke.co.uk/).
During the initial design of the MS of robots, the validation of their
topological structures may be done by studying the kinematic graphs of their
main structures. The molecule representation reduces the usual structural
diagram of the mechanism to a form that is easier to visualise. This allows
the classification of their structures and their assignment to different
classes of structures, taking into account their complexity as expressed by
the number of closed loops. These points are the subject of the next
paragraph.
The structures of robots with simple kinematic chains may be represented by
open kinematic structures of type A. We call these open structures level 0
(zero) structures. Many industrial robots are of this type, for example: MA
23 Arm, SCARA carrier, AID-5V, Seiko 700, Versatran Vertical 80, Puma 500,
Kawasaki Js-2, Toshiba SR-854HSP and Yamaha robots (Ferreti, 1981; Rob-Aut,
1996).
The main structures of robots with closed kinematic chains may be represented
by closed kinematic chains of type G derived from MMT. The Pick and Place
robot, for instance, has only one closed chain. This is a level 1 (one) robot
(cf. § 5.2.1). There are other industrial robots of the same level, for
example: Tokico, Pana-Robo by Panasonic, SK 16 and SK 120 by Yaskawa, SC 35
by Nachi, etc. (Rob-Aut, 1996).
The main structure of the AKR-3000 robot is composed of two closed loops
represented by two internal contours in its molecule. This is a level 2 (two) ro-
bot. The main structure of Moise-Pelecudi robot (Manolescu et al, 1987) is
composed of three closed chains defining a level 3 (three) robot. The main
structure of the Andromat robot is composed of four closed chains. This is a
level 4 (four) robot etc. Hence the level n of a robot is defined by the number n
of internal contours in its molecule. Table 16 completes this classification of
certain robots presented by Ferreti in (Ferreti, 1981):
The first two aspects above are currently the subjects of our research. The
problem is how to choose among the possible structures provided by MMT
according to the position of the frame and the end-effector. As there may be
a large number of these mechanisms, it is usually difficult to make a choice
among the available structures in the initial design phase of the robot
chain. In fact, taking the symmetries into account, it can be noticed that
there is a significant number of isomorphic structures according to the
position of the frame and of the end-effector of the robot. Our future
objectives are:
8. References
Abo-Hammour, Z.S.; Mirza, N.M.; Mirza, S.A. & Arif, M. (2002). Cartesian
path generation of robot manipulators using continuous genetic algorithms,
Robotics and Autonomous Systems, 41 (4), pp. 179-223.
Artobolevski, I. (1977). Théorie des mécanismes, Mir, Moscou.
Belfiore, N.P. (2000). Distributed databases for the development of mecha-
nisms topology, Mechanism and Machine Theory Vol. 35, pp. 1727-1744.
Borel, P. (1979). Modèle de comportement des manipulateurs. Application à l’analyse
de leurs performances et à leur commande automatique, PhD Thesis, Mont-
pellier.
Cabrera, J.A.; Simon, A. & Prado, M. (2002). Optimal synthesis of mechanisms
with genetic algorithms, Mechanism and Machine Theory, Vol. 37, pp.
1165-1177.
Coiffet, P. (1992). La robotique. Principes et applications, Hermès, Paris.
Crossley, F.R.E. (1964). A contribution to Grübler's theory in the number
synthesis of plane mechanisms, Transactions of the ASME, Journal of
Engineering for Industry, 1-8.
Crossley, F.R.E. (1966). On an unpublished work of Alt, Journal of Mecha-
nisms, 1, 165-170.
Davies, T. & Crossley, F.R.E. (1966). Structural analysis of plane linkages
by Franke's condensed notation, Journal of Mechanisms, Vol. 1, 171-183.
Dobrjanskyi, L. & Freudenstein, F. (1967). Some applications of graph theory
to the structural analysis of mechanisms, Transactions of the ASME, Journal
of Engineering for Industry, 153-158.
Erdman, A.G. & Sandor, G.N. (1991). Mechanism Design: Analysis and Synthesis,
Second edition.
Ferreti, M. (1981). Panorama de 150 manipulateurs et robots industriels, Le
Nouvel Automatisme, Vol., 26, Novembre-Décembre, 56-77.
Gonzalez-Palacios, M. & Angeles, J. (1996). USYCAMS: A Software Package for
the Interactive Synthesis of Cam Mechanisms, Proc. 1st Integrated Design and
Manufacturing in Mechanical Engineering I.D.M.M.E.'96, Nantes, France, 1,
485-494.
Hervé, J.M. (1994). The mathematical group structure of the set of
displacements, Mech. and Mach. Theory, Vol. 29, No. 1, 73-81.
Hwang, W-M. & Hwang, Y-W. (1992). Computer-aided structural synthesis of
planar kinematic chains with simple joints, Mechanism and Machine Theory,
27, No. 2, 189-199.
Karouia, M. & Hervé, J.M. (2005). Asymmetrical 3-dof spherical parallel
mechanisms, European Journal of Mechanics (A/Solids), No. 24, pp. 47-57.
Robot Kinematics:
Forward and Inverse Kinematics
1. Introduction
problem more difficult to solve. Hence, only a very small class of
kinematically simple manipulators (those with an Euler wrist) has complete
analytical solutions (Kucuk & Bingul, 2004). The relationship between forward
and inverse kinematics is illustrated in Figure 1.
Figure 1. Schematic representation of forward and inverse kinematics:
forward kinematics maps the joint space (θ1, θ2, …, θn) to the Cartesian
space (0nT), and inverse kinematics maps the Cartesian space back to the
joint space.
Two main solution techniques for the inverse kinematics problem are
analytical and numerical methods. In the first type, the joint variables are
solved analytically according to given configuration data. In the second
type, the joint variables are obtained using numerical techniques. In this
chapter, the analytical solution of manipulators is examined rather than the
numerical solution.
There are two approaches to the analytical method: geometric and algebraic
solutions. The geometric approach is applied to simple robot structures,
such as a 2-DOF planar manipulator, or manipulators with few DOF and
parallel joint axes. For manipulators with more links, whose arms extend
into three dimensions, the geometry becomes much more tedious. In this case,
the algebraic approach is more beneficial for the inverse kinematics
solution.
There are some difficulties in solving the inverse kinematics problem when
the kinematics equations are coupled and when multiple solutions and
singularities exist. Mathematical solutions to the inverse kinematics
problem may not always correspond to physical solutions, and the method of
solution depends on the robot structure.
This chapter is organized in the following manner. In the first section, the
forward and inverse kinematics transformations for an open kinematic chain
are described based on the homogeneous transformation. Secondly, geometric
and algebraic approaches are given with explanatory examples. Thirdly, the
problems in inverse kinematics are explained with illustrative examples.
Finally, the forward and inverse kinematics transformations are derived
based on the quaternion modeling convention.
A manipulator is composed of serial links affixed to each other by revolute
or prismatic joints, from the base frame through the end-effector.
Calculating the position and orientation of the end-effector in terms of the
joint variables is called forward kinematics. In order to derive the forward
kinematics of a robot mechanism in a systematic manner, one should use a
suitable kinematics model. The Denavit-Hartenberg method, which uses four
parameters, is the most common method for describing the robot kinematics.
These parameters ai-1, αi-1, di and θi are the link length, link twist, link
offset and joint angle, respectively. A coordinate frame is attached to each
joint to determine the DH parameters. The Zi axis of the coordinate frame
points along the rotary or sliding direction of the joint. Figure 2 shows
the coordinate frame assignment for a general manipulator.
Figure 2. Coordinate frame assignment for a general manipulator (frames i−1,
i and i+1; link lengths ai−1 and ai, twist αi−1, offset di and joint angle
θi).
As shown in Figure 2, the distance from Zi-1 to Zi measured along Xi-1 is
assigned as ai-1, the angle between Zi-1 and Zi measured about Xi-1 is
assigned as αi-1, the distance from Xi-1 to Xi measured along Zi is assigned
as di, and the angle between Xi-1 and Xi measured about Zi is assigned as θi
(Craig, 1989).
The general transformation matrix i −1iT for a single link can be obtained as fol-
lows.
$$
{}^{i-1}_{\ \ i}T = R_x(\alpha_{i-1})\,D_x(a_{i-1})\,R_z(\theta_i)\,Q_i(d_i)
$$
$$
= \begin{bmatrix}1&0&0&0\\0&c\alpha_{i-1}&-s\alpha_{i-1}&0\\0&s\alpha_{i-1}&c\alpha_{i-1}&0\\0&0&0&1\end{bmatrix}
\begin{bmatrix}1&0&0&a_{i-1}\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}
\begin{bmatrix}c\theta_i&-s\theta_i&0&0\\s\theta_i&c\theta_i&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}
\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&d_i\\0&0&0&1\end{bmatrix}
$$
$$
= \begin{bmatrix}
c\theta_i & -s\theta_i & 0 & a_{i-1}\\
s\theta_i c\alpha_{i-1} & c\theta_i c\alpha_{i-1} & -s\alpha_{i-1} & -s\alpha_{i-1} d_i\\
s\theta_i s\alpha_{i-1} & c\theta_i s\alpha_{i-1} & c\alpha_{i-1} & c\alpha_{i-1} d_i\\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (1)
$$
where Rx and Rz denote rotations, Dx and Qi denote translations, and cθi and
sθi are shorthand for cosθi and sinθi, respectively. The forward kinematics
of the end-effector with respect to the base frame is determined by
multiplying all of the link transformation matrices:

$$
{}^{base}_{end\_effector}T = {}^0_1T\,{}^1_2T\,\cdots\,{}^{n-1}_{\ \ n}T \qquad (2)
$$

An alternative representation of ${}^{base}_{end\_effector}T$ can be written
as

$$
{}^0_6T = {}^0_1T(q_1)\,{}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6) \qquad (4)
$$
where qi is the joint variable (revolute or prismatic) for joint i
(i = 1, 2, …, 6).
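Equations (1) and (2) translate directly into code. The sketch below (pure Python; the helper names are illustrative, not from the text) builds the single-link transform and chains an arbitrary DH table:

```python
import math

def dh_transform(alpha, a, theta, d):
    """Single-link transform of equation (1):
    Rx(alpha_{i-1}) Dx(a_{i-1}) Rz(theta_i) Q(d_i), modified DH convention."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the link transforms as in equation (2)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for alpha, a, theta, d in dh_rows:
        T = mat_mul(T, dh_transform(alpha, a, theta, d))
    return T

# Example: a two-link planar arm (alpha = d = 0 throughout),
# with assumed link lengths l1 = 10 and l2 = 5:
rows = [(0.0, 0.0, math.radians(30), 0.0),
        (0.0, 10.0, math.radians(-60), 0.0),
        (0.0, 5.0, 0.0, 0.0)]
T = forward_kinematics(rows)
print(T[0][3], T[1][3])   # end-effector x and y in the base frame
```

The same function handles spatial manipulators such as the Stanford arm by supplying nonzero α and d entries from its DH table.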
Example 1.
As an example, consider the 6-DOF Stanford Manipulator, whose rigid body and
coordinate frame assignment are illustrated in Figure 3. Note that the
manipulator has an Euler wrist whose three axes intersect at a common point.
The first three joints are RRP, and the last three (RRR) form the wrist. P
and R denote prismatic and revolute joints, respectively. The DH parameters
corresponding to this manipulator are shown in Table 1.
Figure 3. Rigid body and coordinate frame assignment for the Stanford Manipulator.
i   θi   αi-1   ai-1   di
1   θ1    0      0     h1
2   θ2    90     0     d2
3   0    -90     0     d3
4   θ4    0      0     0
5   θ5    90     0     0
6   θ6   -90     0     0

Table 1. DH parameters for the Stanford Manipulator
$$
{}^0_1T = \begin{bmatrix}c\theta_1&-s\theta_1&0&0\\s\theta_1&c\theta_1&0&0\\0&0&1&h_1\\0&0&0&1\end{bmatrix} \quad(5)
\qquad
{}^1_2T = \begin{bmatrix}c\theta_2&-s\theta_2&0&0\\0&0&-1&-d_2\\s\theta_2&c\theta_2&0&0\\0&0&0&1\end{bmatrix} \quad(6)
$$
$$
{}^2_3T = \begin{bmatrix}1&0&0&0\\0&0&1&d_3\\0&-1&0&0\\0&0&0&1\end{bmatrix} \quad(7)
\qquad
{}^3_4T = \begin{bmatrix}c\theta_4&-s\theta_4&0&0\\s\theta_4&c\theta_4&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix} \quad(8)
$$
$$
{}^4_5T = \begin{bmatrix}c\theta_5&-s\theta_5&0&0\\0&0&-1&0\\s\theta_5&c\theta_5&0&0\\0&0&0&1\end{bmatrix} \quad(9)
\qquad
{}^5_6T = \begin{bmatrix}c\theta_6&-s\theta_6&0&0\\0&0&1&0\\-s\theta_6&-c\theta_6&0&0\\0&0&0&1\end{bmatrix} \quad(10)
$$
In order to check the accuracy of the mathematical model of the Stanford
Manipulator shown in Figure 3, the following steps should be taken: the
general position vector in equation 11 should be compared with the zero
position vector in Figure 4.
Figure 4. Zero position of the Stanford Manipulator, showing the link
parameters d2, d3 and h1 relative to the base frame (x0, y0, z0).
$$
\begin{bmatrix}p_x\\p_y\\p_z\end{bmatrix} =
\begin{bmatrix}d_2 s\theta_1 - d_3 c\theta_1 s\theta_2\\ -d_2 c\theta_1 - d_3 s\theta_1 s\theta_2\\ h_1 + d_3 c\theta_2\end{bmatrix} \qquad(12)
$$
In order to obtain the zero position in terms of link parameters, let us set
θ1=θ2=0° in equation 12.

All of the coordinate frames in Figure 3 are removed except the base, which
is the reference coordinate frame for determining the link parameters in the
zero position, as in Figure 4. Since there are no link parameters in the +x0
or -x0 direction in Figure 4, px=0. There is only the d2 parameter in the
-y0 direction, so py equals -d2. The parameters h1 and d3 are in the +z0
direction, so pz equals h1+d3. In this case, the zero position vector of the
Stanford Manipulator is obtained as follows:
Robot Kinematics: Forward and Inverse Kinematics 125
$$
\begin{bmatrix}p_x\\p_y\\p_z\end{bmatrix} =
\begin{bmatrix}0\\-d_2\\h_1+d_3\end{bmatrix} \qquad(14)
$$
The results of the position vector in equation 13 are identical to those
obtained by equation 14. Hence, it can be said that the mathematical model
of the Stanford Manipulator is derived correctly.
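The zero-position check just described is easy to automate. A minimal sketch of equation (12) follows; the numeric link values are assumed for illustration only, not taken from the text:

```python
import math

def stanford_wrist_position(theta1, theta2, h1, d2, d3):
    """Position vector of equation (12) for the Stanford Manipulator."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s2, c2 = math.sin(theta2), math.cos(theta2)
    px = d2 * s1 - d3 * c1 * s2
    py = -d2 * c1 - d3 * s1 * s2
    pz = h1 + d3 * c2
    return px, py, pz

# Zero position (theta1 = theta2 = 0) must reduce to equation (14):
# (px, py, pz) = (0, -d2, h1 + d3).
h1, d2, d3 = 0.4, 0.15, 0.3   # assumed link values, in metres
print(stanford_wrist_position(0.0, 0.0, h1, d2, d3))
```

Any pose can be checked the same way against a full forward-kinematics chain built from the matrices (5)-(10).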
The inverse kinematics problem of serial manipulators has been studied for
many decades. It is needed in the control of manipulators. Solving the
inverse kinematics is computationally expensive and generally takes a very
long time in the real-time control of manipulators. Tasks to be performed by
a manipulator are in the Cartesian space, whereas actuators work in joint
space. Cartesian space includes the orientation matrix and position vector,
whereas joint space is represented by joint angles. The conversion of the
position and orientation of a manipulator end-effector from Cartesian space
to joint space is called the inverse kinematics problem. There are two
solution approaches, geometric and algebraic, used for deriving the inverse
kinematics solution analytically. Let us start with the geometric approach.
The components of the point P (px and py) are determined as follows.
Figure 5. (a) Two-link planar manipulator with joint angles θ1, θ2 and link
lengths l1, l2; (b) its geometric decomposition: the end point P has
coordinates px = l1cθ1 + l2c(θ1+θ2) and py = l1sθ1 + l2s(θ1+θ2).
where cθ12 = cθ1cθ2 − sθ1sθ2 and sθ12 = sθ1cθ2 + cθ1sθ2. The solution of θ2
can be computed from the sum of the squares of equations 15 and 16,
and so

$$
c\theta_2 = \frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2} \qquad(17)
$$
$$
s\theta_2 = \pm\sqrt{1-\left(\frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2}\right)^2} \qquad(18)
$$
$$
\theta_2 = \mathrm{Atan2}\!\left(\pm\sqrt{1-\left(\frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2}\right)^2},\ \frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2}\right) \qquad(19)
$$
Let us first multiply each side of equation 15 by cθ1 and equation 16 by sθ1
and add the resulting equations in order to find the solution of θ1 in terms
of the link parameters and the known variable θ2. Next, multiplying both
sides of equation 15 by −sθ1 and equation 16 by cθ1 and adding the resulting
equations produces
and so

$$
c\theta_1 = \frac{p_x(l_1+l_2c\theta_2)+p_yl_2s\theta_2}{p_x^2+p_y^2} \qquad(22)
$$

sθ1 is obtained as

$$
s\theta_1 = \pm\sqrt{1-\left(\frac{p_x(l_1+l_2c\theta_2)+p_yl_2s\theta_2}{p_x^2+p_y^2}\right)^2} \qquad(23)
$$
$$
\theta_1 = \mathrm{Atan2}\!\left(\pm\sqrt{1-\left(\frac{p_x(l_1+l_2c\theta_2)+p_yl_2s\theta_2}{p_x^2+p_y^2}\right)^2},\ \frac{p_x(l_1+l_2c\theta_2)+p_yl_2s\theta_2}{p_x^2+p_y^2}\right) \qquad(24)
$$
Although the planar manipulator has a very simple structure, as can be seen,
its inverse kinematics solution based on geometric approach is very cumber-
some.
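Despite the cumbersome algebra, the final solution is compact in code. The sketch below (illustrative helper names, not from the text) implements equations (17)-(19); for θ1, instead of enumerating the ± branches of equation (23), each θ2 branch is paired with the single consistent θ1 via the algebraically equivalent form θ1 = Atan2(py, px) − Atan2(k2, k1) with k1 = l1 + l2·cθ2 and k2 = l2·sθ2:

```python
import math

def planar_2dof_ik(px, py, l1, l2):
    """Both (theta1, theta2) solution sets for the two-link planar arm."""
    c2 = (px * px + py * py - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # eq. (17)
    solutions = []
    for sign in (1.0, -1.0):
        s2 = sign * math.sqrt(max(0.0, 1.0 - c2 * c2))               # eq. (18)
        theta2 = math.atan2(s2, c2)                                  # eq. (19)
        # theta1 consistent with this theta2 branch:
        k1, k2 = l1 + l2 * c2, l2 * s2
        theta1 = math.atan2(py, px) - math.atan2(k2, k1)
        solutions.append((theta1, theta2))
    return solutions

# Example with assumed lengths l1 = 10, l2 = 5:
for t1, t2 in planar_2dof_ik(12.99, 2.5, 10.0, 5.0):
    print(math.degrees(t1), math.degrees(t2))
```

Each returned pair can be verified by substituting back into the forward-kinematics expressions px = l1cθ1 + l2c(θ1+θ2), py = l1sθ1 + l2s(θ1+θ2).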
For manipulators with more links, whose arms extend into three dimensions,
the geometry gets much more tedious. Hence, the algebraic approach is chosen
for the inverse kinematics solution. Recall equation 4 to find the inverse
kinematics solution for a six-axis manipulator.
To find the inverse kinematics solution for the first joint (q1) as a
function of the known elements of ${}^{base}_{end\_effector}T$, the link
transformation inverses are premultiplied as follows:

$$
[{}^0_1T(q_1)]^{-1}\,{}^0_6T = [{}^0_1T(q_1)]^{-1}\,{}^0_1T(q_1)\,{}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6)
$$

where $[{}^0_1T(q_1)]^{-1}\,{}^0_1T(q_1) = I$, the identity matrix. In this
case the above equation becomes

$$
[{}^0_1T(q_1)]^{-1}\,{}^0_6T = {}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6) \qquad(25)
$$
To find the other joint variables, the following equations are obtained in a
similar manner:

$$
[{}^0_1T\,{}^1_2T]^{-1}\,{}^0_6T = {}^2_3T\,{}^3_4T\,{}^4_5T\,{}^5_6T \qquad(26)
$$
$$
[{}^0_1T\,{}^1_2T\,{}^2_3T]^{-1}\,{}^0_6T = {}^3_4T\,{}^4_5T\,{}^5_6T \qquad(27)
$$
$$
[{}^0_1T\,{}^1_2T\,{}^2_3T\,{}^3_4T]^{-1}\,{}^0_6T = {}^4_5T\,{}^5_6T \qquad(28)
$$
$$
[{}^0_1T\,{}^1_2T\,{}^2_3T\,{}^3_4T\,{}^4_5T]^{-1}\,{}^0_6T = {}^5_6T \qquad(29)
$$
From equation 25, q1 can be solved as a function of r11, r12, … r33, px, py,
pz and the fixed link parameters. Once q1 is found, the other joint
variables are solved in the same way. There is no necessity that the first
equation produce q1 and the second q2, etc. To find a suitable equation for
the solution of the inverse kinematics problem, any of the equations above
(25-29) can be used arbitrarily. Some trigonometric equations used in the
solution of the inverse kinematics problem are given in Table 2.
Equations                     | Solutions
1. a sinθ + b cosθ = c        | θ = Atan2(a, b) ∓ Atan2(√(a²+b²−c²), c)
2. a sinθ + b cosθ = 0        | θ = Atan2(−b, a) or θ = Atan2(b, −a)

Table 2. Some trigonometric equations and their solutions
Example 2.
As an example to describe the algebraic solution approach, reconsider the
inverse kinematics of the planar manipulator. The coordinate frame
assignment is depicted in Figure 6 and the DH parameters are given in
Table 3.
i   θi   αi-1   ai-1   di
1   θ1    0      0     0
2   θ2    0      l1    0
3   0     0      l2    0

Table 3. DH parameters for the planar manipulator
Figure 6. Coordinate frame assignment for the two-link planar manipulator
(frames 0,1 at the base, frame 2 at the second joint, frame 3 at the tip).
$$
{}^0_1T = \begin{bmatrix}c\theta_1&-s\theta_1&0&0\\s\theta_1&c\theta_1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix} \quad(30)
\qquad
{}^1_2T = \begin{bmatrix}c\theta_2&-s\theta_2&0&l_1\\s\theta_2&c\theta_2&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix} \quad(31)
$$
$$
{}^2_3T = \begin{bmatrix}1&0&0&l_2\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix} \quad(32)
$$
Let us use equation 4 to solve the inverse kinematics of the 2-DOF
manipulator:

$$
{}^0_1T^{-1}\,{}^0_3T = {}^0_1T^{-1}\,{}^0_1T\,{}^1_2T\,{}^2_3T \qquad(34)
$$

where

$$
{}^0_1T^{-1} = \begin{bmatrix}{}^0_1R^T & -{}^0_1R^T\,{}^0P_1\\ 0\ \ 0\ \ 0 & 1\end{bmatrix} \qquad(35)
$$

In equation 35, ${}^0_1R^T$ and ${}^0P_1$ denote the transpose of the
rotation matrix and the position vector of ${}^0_1T$, respectively. Since
${}^0_1T^{-1}\,{}^0_1T = I$, equation 34 can be rewritten as follows:

$$
{}^0_1T^{-1}\,{}^0_3T = {}^1_2T\,{}^2_3T \qquad(36)
$$
$$
\begin{bmatrix}\cdot&\cdot&\cdot&c\theta_1p_x+s\theta_1p_y\\ \cdot&\cdot&\cdot&-s\theta_1p_x+c\theta_1p_y\\ \cdot&\cdot&\cdot&p_z\\ 0&0&0&1\end{bmatrix} =
\begin{bmatrix}\cdot&\cdot&\cdot&l_2c\theta_2+l_1\\ \cdot&\cdot&\cdot&l_2s\theta_2\\ \cdot&\cdot&\cdot&0\\ 0&0&0&1\end{bmatrix} \qquad(37)
$$
Squaring the (1,4) and (2,4) matrix elements of each side in equation 37 and
adding them gives

$$
p_x^2(c^2\theta_1+s^2\theta_1) + p_y^2(s^2\theta_1+c^2\theta_1) = l_2^2(c^2\theta_2+s^2\theta_2) + 2l_1l_2c\theta_2 + l_1^2
$$
$$
p_x^2 + p_y^2 = l_2^2 + 2l_1l_2c\theta_2 + l_1^2
$$
$$
c\theta_2 = \frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2}
$$
Finally, two possible solutions for θ 2 are computed as follows using the fourth
trigonometric equation in Table 2.
$$
\theta_2 = \mathrm{Atan2}\!\left(\mp\sqrt{1-\left[\frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2}\right]^2},\ \frac{p_x^2+p_y^2-l_1^2-l_2^2}{2l_1l_2}\right) \qquad(38)
$$
Now the second joint variable θ2 is known. The first joint variable θ1 can
be determined by equating the (1,4) elements of each side in equation 37.
Using the first trigonometric equation in Table 2 then produces two
potential solutions.
Example 3.
As another example for algebraic solution approach, consider the six-axis Stan-
ford Manipulator again. The link transformation matrices were previously de-
veloped. Equation 26 can be employed in order to develop equation 41. The
inverse kinematics problem can be decoupled into inverse position and orien-
tation kinematics. The inboard joint variables (first three joints) can be solved
using the position vectors of both sides in equation 41.
$$
[{}^0_1T\,{}^1_2T]^{-1}\,{}^0_6T = {}^2_3T\,{}^3_4T\,{}^4_5T\,{}^5_6T \qquad(41)
$$
The revolute joint variables θ1 and θ 2 are obtained equating the (3,4) and (1,4)
elements of each side in equation 41 and using the first and second trigono-
metric equations in Table 2, respectively.
The prismatic joint variable d 3 is extracted from the (2,4) elements of each side
in equation 41 as follows.
The last three joint variables may be found using the elements of rotation ma-
trices of each side in equation 41. The rotation matrices are given by
where d = r31cθ2 − r11cθ1sθ2 − r21sθ1sθ2 and e = r32cθ2 − r12cθ1sθ2 −
r22sθ1sθ2. The revolute joint variable θ5 is determined by equating the
(2,3) elements of both sides in equation 45 and using the fourth
trigonometric equation in Table 2, as follows:
$$
\theta_5 = \mathrm{Atan2}\!\left(\pm\sqrt{1-(r_{33}c\theta_2-r_{13}c\theta_1s\theta_2-r_{23}s\theta_1s\theta_2)^2},\ r_{33}c\theta_2-r_{13}c\theta_1s\theta_2-r_{23}s\theta_1s\theta_2\right) \qquad(46)
$$
Extracting cosθ4 and sinθ4 from the (1,3) and (3,3) elements, and cosθ6 and
sinθ6 from the (2,1) and (2,2) elements of each side in equation 45, and
using the third trigonometric equation in Table 2, the revolute joint
variables θ4 and θ6 can be computed, respectively:

$$
\theta_4 = \mathrm{Atan2}\!\left(\frac{r_{13}s\theta_1 - r_{23}c\theta_1}{s\theta_5},\ -\frac{r_{33}s\theta_2 + r_{13}c\theta_1c\theta_2 + r_{23}c\theta_2s\theta_1}{s\theta_5}\right) \qquad(47)
$$
$$
\theta_6 = \mathrm{Atan2}\!\left(-\frac{r_{32}c\theta_2 - r_{12}c\theta_1s\theta_2 - r_{22}s\theta_1s\theta_2}{s\theta_5},\ \frac{r_{31}c\theta_2 - r_{11}c\theta_1s\theta_2 - r_{21}s\theta_1s\theta_2}{s\theta_5}\right) \qquad(48)
$$
2.2.3 Some Drawbacks for the Solution of the Inverse Kinematics Problem
$$
\theta_2 = \mathrm{Atan2}\!\left(\mp\sqrt{1-(0.4999)^2},\ 0.4999\right) = \mathrm{Atan2}(\mp 0.866,\ 0.4999) = \mp 60° \qquad(49)
$$
As can be seen from equation 49, θ2 has two solutions, corresponding to the
positive (+60°) and negative (−60°) sign choices. Since cos(θ) = cos(−θ),
either of the two solutions (here θ2 = 60°) can be employed to find the
numeric values of the first joint as follows.
As a result, these four sets of link angle values given by equations 51 through
54 solve the inverse kinematics problem for the planar manipulator. Figure 7
illustrates the particular positions for the planar manipulator in each of above
solutions.
Figure 7. The four inverse kinematics solution sets for the planar
manipulator: (a) θ1=30°, θ2=60°; (b) θ1=30°, θ2=−60°; (c) θ1=−8.2°, θ2=60°;
(d) θ1=−8.2°, θ2=−60°. Only (b) and (c) place the end-effector at
(12.99, 2.5).
Although there are four different inverse kinematics solutions for the
planar manipulator, only two of them (Figures 7b and 7c) bring the
end-effector to the desired position of (px, py) = (12.99, 2.5).
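This check can be reproduced numerically. The sketch below assumes link lengths l1 = 10 and l2 = 5, which are consistent with the value cθ2 = 0.4999 in equation 49 for the target (12.99, 2.5); it forward-checks all four candidate sets:

```python
import math

def reaches(theta1_deg, theta2_deg, l1, l2, target, tol=0.05):
    """Forward-check one candidate solution against the target point."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return math.hypot(x - target[0], y - target[1]) < tol

l1, l2, target = 10.0, 5.0, (12.99, 2.5)   # assumed link lengths
candidates = [(30.0, 60.0), (30.0, -60.0), (-8.2, 60.0), (-8.2, -60.0)]
flags = [reaches(t1, t2, l1, l2, target) for t1, t2 in candidates]
for (t1, t2), ok in zip(candidates, flags):
    print(f"theta1={t1:6.1f}  theta2={t2:6.1f}  reaches target: {ok}")
```

Under these assumed lengths, only (30°, −60°) and (−8.2°, 60°) reach the target, matching Figures 7b and 7c.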
Mathematical solutions to the inverse kinematics problem may not always
correspond to physical solutions. In other words, there are physical link
restrictions for any real manipulator. Therefore, each set of link angle
values should be checked in order to determine whether or not it respects
the physical link limits. Suppose θ2=180°: the second link is folded
completely back onto the first link, as shown in Figure 8. One can readily
verify that this joint value is not physically attainable by the planar
manipulator.
Figure 8. The second link is folded completely back onto the first link when θ 2 =180°.
Formulating a suitable mathematical model and deriving an efficient
algorithm for a robot kinematics mechanism are very crucial for analyzing
the behavior of serial manipulators. Generally, homogeneous transformation
based on 4x4 real matrices is used for the robot kinematics. Although such
matrices are readily applied to robot kinematics, they include redundant
elements (such matrices are composed of 16 elements, of which four are
completely trivial) that cause numerical problems in robot kinematics and
also increase the cost of algorithms (Funda et al., 1990). Quaternion-vector
pairs are used as an alternative method for deriving the robot kinematics of
serial manipulators. The successive screw displacements in this method
provide a very compact formulation for the kinematics equations and also
reduce the number of equations obtained in each goal position, compared with
their matrix counterparts.
Since Hamilton's introduction of quaternions (Hamilton, 2004), they have
been used in many applications, such as classical and quantum mechanics,
aerospace, geometric analysis, and robotics. While (Salamin, 1979) presented
the advantages of quaternions and matrices as rotational operators, the
first application of the former in kinematics is due to (Kotelnikov, 1895).
Later, general properties of quaternions as rotational operators were
studied by (Pervin &
Webb, 1982) who also presented quaternion formulation of moving geometric
objects. (Gu & Luh, 1987) used quaternions for computing the Jacobians for ro-
bot kinematics and dynamics. (Funda et al., 1990) compared quaternions with
homogenous transforms in terms of computational efficiency. (Kim & Kumar,
1990) used quaternions for the solution of direct and inverse kinematics of a 6-
DOF manipulator. (Caccavale & Siciliano, 2001) used quaternions for kine-
matic control of a redundant space manipulator mounted on a free-floating spacecraft. (Rueda et al., 2002) presented a new technique for robot calibration based on quaternion-vector pairs.
A quaternion q is the sum of a scalar (s) and a three-dimensional vector (v). In other words, it is a quadrinomial expression, with a real angle θ and an axis of rotation n = ix + jy + kz, where i, j and k are imaginary units. It may be expressed as a quadruple q = (θ, x, y, z) or as a scalar and a vector q = (θ, u), where u = <x, y, z>. In this chapter it will be denoted as q = [s, v]. The rotation matrix corresponding to a unit quaternion q = [s, <x, y, z>] is

        | 1 − 2y² − 2z²    2xy − 2sz        2xz + 2sy     |
  R  =  | 2xy + 2sz        1 − 2x² − 2z²    2yz − 2sx     |   (56)
        | 2xz − 2sy        2yz + 2sx        1 − 2x² − 2y² |
y = (r13 − r31) / (4s)   (60)

z = (r21 − r12) / (4s)   (61)
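Equations (56) and (60)-(61) can be checked with a short script. The following sketch (an illustration added here, not part of the original derivation) converts a unit quaternion to its rotation matrix and recovers the vector components back from the matrix:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion q = (s, x, y, z), as in Eq. (56)."""
    s, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*s*z,     2*x*z + 2*s*y],
        [2*x*y + 2*s*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*s*x],
        [2*x*z - 2*s*y,     2*y*z + 2*s*x,     1 - 2*x*x - 2*y*y],
    ])

def rot_to_quat(R):
    """Recover (s, x, y, z) from R; mirrors Eqs. (60)-(61) for the y and z components."""
    s = 0.5 * np.sqrt(max(1.0 + R[0, 0] + R[1, 1] + R[2, 2], 0.0))
    x = (R[2, 1] - R[1, 2]) / (4 * s)
    y = (R[0, 2] - R[2, 0]) / (4 * s)   # Eq. (60)
    z = (R[1, 0] - R[0, 1]) / (4 * s)   # Eq. (61)
    return np.array([s, x, y, z])
```

For a 90° rotation about the z-axis, quat_to_rot yields the familiar axis-permuting matrix, and rot_to_quat recovers the original quaternion.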
Although unit quaternions are very suitable for defining the orientation of a
rigid body, they do not contain any information about its position in the 3D
space. The way to represent both rotation and translation in a single transfor-
mation vector is to use dual quaternions. The transformation vector using dual
quaternions for a revolute joint is

Q = (q, p)   (62)
where the unit quaternion q represents appropriate rotation and the vector
p=<px, py, pz> encodes corresponding translational displacement. In the case
of prismatic joints, the displacement is represented by a quaternion-vector pair as follows:

Q = ([1, <0, 0, 0>], p)   (63)

where [1, <0, 0, 0>] represents the unit identity quaternion. Quaternion multiplication is vital for combining rotations. Let q1 = [s1, v1] and q2 = [s2, v2] denote two unit quaternions. In this case, the multiplication process is shown as
q1 ∗ q2 = [s1 s2 − v1 · v2, s1 v2 + s2 v1 + v1 × v2]   (64)

where (·), (×) and (∗) are the dot product, cross product and quaternion multiplication, respectively. In the same manner, the quaternion multiplication of two point-vector transformations Q1 = (q1, p1) and Q2 = (q2, p2) is denoted as

Q1 ∗ Q2 = (q1 ∗ q2, q1 ∗ p2 ∗ q1⁻¹ + p1)   (65)

and the inverse of a transformation Q = (q, p) is

Q⁻¹ = (q⁻¹, −q⁻¹ ∗ p ∗ q)   (67)
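The quaternion product of Equation (64) and the transformation inverse of Equation (67) can be sketched as follows. The composition rule used in pair_compose is the standard one for quaternion-vector pairs and is stated here as an assumption, since the intermediate equations are abbreviated in the text:

```python
import numpy as np

def qmul(q1, q2):
    """Quaternion product of Eq. (64): [s1*s2 - v1.v2, s1*v2 + s2*v1 + v1 x v2]."""
    s1, v1 = q1[0], q1[1:]
    s2, v2 = q2[0], q2[1:]
    return np.concatenate(([s1 * s2 - v1 @ v2], s1 * v2 + s2 * v1 + np.cross(v1, v2)))

def qconj(q):
    """The inverse of a unit quaternion is its conjugate."""
    return np.concatenate(([q[0]], -q[1:]))

def rotate(q, p):
    """Rotate the point p by the unit quaternion q: q * [0, p] * q^-1."""
    return qmul(qmul(q, np.concatenate(([0.0], p))), qconj(q))[1:]

def pair_inverse(Q):
    """Inverse of a quaternion-vector pair (q, p), as in Eq. (67)."""
    q, p = Q
    return (qconj(q), -rotate(qconj(q), p))

def pair_compose(Q1, Q2):
    # Standard composition (assumed; consistent with Eq. (67)): apply Q2, then Q1.
    q1, p1 = Q1
    q2, p2 = Q2
    return (qmul(q1, q2), rotate(q1, p2) + p1)
```

Composing a transformation with its inverse returns the identity pair, which is a quick sanity check on the conventions.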
where θ̄1 = θ1/2 is the half angle of the first joint variable. As shown in Figure 9, the unit line vector of the second joint is in the opposite direction of the y-axis of the reference coordinate frame; in this case, the orientation of the second joint is given by
Figure 9. The coordinate frame and unit line vectors for the Stanford Manipulator.
Because the third joint is prismatic, there is only a unit identity quaternion, which can be denoted as
Orientations of the last three joints are determined as follows using the same
approach described above.
The position vectors are assigned in terms of the reference coordinate frame as follows. When the first joint is rotated in the anticlockwise direction around the z axis of the reference coordinate frame by an angle θ1, the link d2 traces a circle parallel to the xy plane of the reference coordinate frame, as given in Figure 10a. Any point on the circle can be determined using the vector
If the second joint is rotated as in Figure 10b, the rotation will be in the xz-plane with respect to the reference coordinate frame. The position vector of the second quaternion can be written as
Figure 10. a) The link d2 traces a circle on the xy-plane; b) The link d3 traces a circle on
the xz-plane.
Since the rotations of the last four joints do not create any displacement for the successive joints, the position vectors are denoted as
where s = M11 and v = <M12, M13, M14> are given by Equation 98.
The quaternion vector products Mi and the quaternion vector pairs Ni+1 are defined as

Mi = Qi Mi+1   (85)

Ni+1 = Qi⁻¹ Ni   (86)
where,
M41 = c5 c(4+6),
M42 = −s5 s(4−6),
M43 = s5 c(4−6) and
M44 = c5 s(6+4).
where,
M32 = M42,
M33 = M43 and
M34 = M44.
where,
M21 = c2 M41 − s2 M43,
M22 = c2 M42 − s2 M44,
M23 = c2 M43 − s2 M41 and
M24 = c2 M44 + s2 M42.
M1 = Q1 M2 = ([M11, <M12, M13, M14>], <M15, M16, M17>)   (98)
where,
M11 = c1 M21 − s1 (c2 M44 − s2 M42),
M12 = c1 M22 − s1 M23,
M13 = c1 M23 + s1 M22,
M14 = s1 M21 + c1 M24,
M15 = d2 sθ1 − d3 cθ1 sθ2,
M16 = −d2 cθ1 − d3 sθ1 sθ2 and
M17 = h1 + d3 cθ2.
where,
N21 = w c1 + c s1,
N22 = a c1 + b s1,
N23 = b c1 − a s1,
N24 = c c1 − w s1,
N25 = px c1 + py s1 and
N26 = −px s1 + py c1 − d2.
where,
N41 = N31,
N42 = N32,
N43 = N33 and
N44 = N34.
The first joint variable θ1 can be determined by equating the second terms of M2 and N2 as follows. The joint variables θ2 and d3 are computed by equating the first and third elements of M3 and N3, respectively.
θ5 = arctan2(±√(N42² + N43²), √(N41² + N44²))   (106)

θ4 = arctan(N44 / N41) + arctan(−N42 / N43)   (107)

θ6 = arctan(N44 / N41) − arctan(−N42 / N43)   (108)
1. Introduction
is named as the Parametrized Joint Variable (PJV) method. In these solutions, the
singularities and the multiple configurations of the manipulators indicated by
sign options can be determined easily. Using these solutions, the inverse
kinematics can also be computerized by means of short and fast algorithms.
Owing to the properties of the exponential rotation matrices, the derived sim-
ple and compact equations are easy to implement for computer programming
of the inverse kinematic solutions. Besides, the singularities and the multiple
configurations together with the working space limitations of the manipulator
can be detected readily before the programming stage, which enables the pro-
grammer to take the necessary actions while developing the program. Thus,
during the inverse kinematic solution, it becomes possible to control the mo-
tion of the manipulator in the desired configuration by selecting the sign op-
tions properly. In this approach, although the derived equations are manipula-
tor dependent, for a newly encountered manipulator or for a manipulator to be
newly designed, there will be no need to follow the complete derivation pro-
cedure starting from the beginning for most of the cases; only a few modifica-
tions will be sufficient. These modifications can be addition or deletion of a
term, or simply changing a subscript of a link length or offset. Even if the
manipulator under consideration happens to generate a new main group, the
equations can still be derived without much difficulty by using the procedure
described here, since the approach is systematic and its starting point is the
application of the Denavit-Hartenberg convention by identifying the twist an-
gles and the other kinematic parameters. In this context, see (Özgören, 2002)
for an exhaustive study that covers all kinds of six-joint serial manipulators.
The presented method is applicable not only for the serial manipulators but
also for the hybrid manipulators with closed chains. This is demonstrated by
applying the method to an ABB IRB2000 industrial robot, which has a four-bar
mechanism for the actuation of its third link. Thus, alongside the serial
manipulators, this particular hybrid manipulator also appears in this chapter
with its compact forward kinematic equations and their inversion for the joint
variables. Finally, the chapter is closed by giving the solutions to some typical
trigonometric equations encountered during the inverse kinematic solutions.
For the solution of the inverse kinematics problem, the forward kinematic equations are required. There are three methods for the inverse kinematic solution; namely,
analytical, semi-analytical, and fully numerical. Presently, analytical methods
can be used only for certain manipulators with specific kinematic parameter
combinations such as PUMA 560. For a general case where the manipulator
does not have specific kinematic parameter combinations, it becomes impossi-
ble to obtain analytical solutions. So, either semi-analytical or fully numerical
methods have been developed. Since the present general semi-analytical
methods are rather cumbersome to use (Raghavan & Roth, 1993; Manseur &
Doty, 1996), fully numerical methods are mostly preferred. However, if the
forward kinematic equations can be simplified, it becomes feasible to use semi-analytical and even analytical methods for a large number of present industrial
robot types. On the other hand, although the fully numerical methods can de-
tect the singularities by checking the determinant of the Jacobian matrix, they
have to do this continuously during the solution, which slows down the proc-
ess. However, the type of the singularity may not be distinguished. Also, in
case of multiple solutions, the desired configurations of the manipulator can
not be specified during the solution. Thus, in order to clarify the singularities
and the multiple configurations, it becomes necessary to make use of semi-
analytical or analytical methods. Furthermore, the analytical or semi-analytical
methods would be of practical use if they lead to compact and simple equa-
tions to facilitate the detection of singularities and multiple configurations. The
methodology presented in this chapter provides such simple and compact
equations by making use of various properties of the exponential rotation ma-
trices, and the simplification tools derived by using these properties (Özgören,
1987-2002). Since different manipulator types with different kinematic parame-
ters lead to different sets of simplified equations, it becomes necessary to clas-
sify the industrial robotic manipulators for a systematic treatment. For such a
classification, one hundred currently used industrial robots are surveyed (Bal-
kan et al., 1999, 2001).
The kinematics of robotic manipulators can be dealt with more effectively and
faster by perceiving their particular properties rather than resorting to general-
ity (Hunt, 1986). After the classification, it is found that most of the recent,
well-known robotic manipulators are within a specific main group, which
means that, instead of general solutions and approaches, manipulator depend-
ent solutions and approaches that will lead to easy specific solutions are more
reasonable. The usage of exponential rotation matrices provides important ad-
vantages so that simplifications can be carried out in a systematic manner with
a small number of symbolic matrix manipulations and the resulting kinematic
equations become much simpler especially when the twist angles are either 0°
or ± 90°, which is the case with the common industrial robots.
For serial manipulators, the forward kinematics problem, that is, determina-
tion of the end-effector position and orientation in the Cartesian space for
given joint variables, can easily be solved in closed-form. Unfortunately, the
inverse kinematics problem of determining each joint variable by using the
Cartesian space data does not guarantee a closed-form solution. If a closed-
form solution can not be obtained, then there are different types of approaches
for the solution of this problem. The most common one is to use a completely
numerical solution technique such as the Newton-Raphson algorithm. Another
frequently used numerical method is the “resolved motion rate control” which
uses the inverse of the Jacobian matrix to determine the rates of the joint vari-
ables and then integrates them numerically with a suitable method (Wu &
Paul, 1982). Runge-Kutta of order four is a common approach used for this
purpose. As an analytical approach, it is possible to convert the forward kine-
152 Industrial Robotics: Theory, Modelling and Control
matic equations into a set of polynomial equations. Then, they can be reduced
to a high-order single polynomial equation through some complicated alge-
braic manipulations. Finally, the resulting high-order equation is solved nu-
merically. However, requiring a lot of polynomial manipulations, this ap-
proach is quite cumbersome (Wampler & Morgan, 1991; Raghavan & Roth,
1993).
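As an illustration of the resolved motion rate control idea, the sketch below drives a toy planar two-link arm (unit link lengths; an example added here, not one of the manipulators discussed in this chapter) toward a Cartesian target by repeatedly solving q̇ = J⁻¹ẋ and integrating with simple Euler steps:

```python
import numpy as np

# Toy planar 2R arm with unit link lengths (an illustration, not a chapter example).
def fkine(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

def resolved_rate(q, target, steps=200, gain=0.1):
    """Integrate q_dot = J^-1 x_dot toward `target` with explicit Euler steps."""
    for _ in range(steps):
        # in practice a singular Jacobian (det ~ 0) must be detected here
        q = q + gain * np.linalg.solve(jacobian(q), target - fkine(q))
    return q

q = resolved_rate(np.array([0.3, 0.5]), np.array([1.0, 1.0]))
```

A Runge-Kutta scheme of order four, as mentioned above, would simply replace the Euler update inside the loop.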
On the other hand, the approach presented in this chapter aims at obtaining
the inverse kinematic solutions analytically by manipulating the trigonometric
equations directly without converting them necessarily into polynomial equa-
tions. In a case, where an analytical solution cannot be obtained this way, then
a semi-analytical solution is aimed at by using the method described below.
As explained before, the PJV method is a semi-analytical inverse kinematics
solution method which can be applied to different kinematic classes of six-joint
manipulators which have no closed-form solutions. In most of the cases, it is
based on choosing one of the joint variables as a parameter and determining
the remaining joint variables analytically in terms of this parametrized joint
variable. Parametrizing a suitable joint variable leads to a single univariate
equation in terms of the parametrized joint variable only. Then, this equation
is solved using a simple numerical technique and as the final step remaining
five joint variables are easily computed by substituting the parametrized joint
variable in their analytical expressions. However, for certain kinematic struc-
tures and kinematic parameters two and even three equations in three un-
knowns may arise (Özgören, 2002). Any initial value is suitable for the solution
and computational time is very small even for an initial condition far from the
solution. The PJV method can also handle the singular configurations and mul-
tiple solutions. However, it is manipulator dependent and equations are dif-
ferent for different classes of manipulators. PJV works well also for non-
spherical wrists with any structural kinematic parameter combination.
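The mechanics of the PJV method can be illustrated on a toy example. The planar 3R arm below (unit links; chosen for brevity, not taken from the surveyed robots, and it even admits a closed-form solution) is solved in the PJV style: θ1 is parametrized, the remaining constraints collapse to a single univariate equation f(θ1) = 0 solved numerically by bisection, and θ2, θ3 then follow analytically:

```python
import numpy as np

def pjv_solve(x, y, phi):
    """Toy PJV-style solution for a planar 3R arm with unit links reaching (x, y, phi)."""
    wx, wy = x - np.cos(phi), y - np.sin(phi)   # wrist point (strip the last link)

    def f(t1):
        # univariate residual: link 2 must bridge the link-1 tip and the wrist point
        return (wx - np.cos(t1)) ** 2 + (wy - np.sin(t1)) ** 2 - 1.0

    ts = np.linspace(-np.pi, np.pi, 721)        # coarse scan for a sign change
    vals = f(ts)
    k = np.where(vals[:-1] * vals[1:] <= 0)[0][0]
    a, b = ts[k], ts[k + 1]
    for _ in range(60):                         # bisection on the bracketed root
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    t1 = 0.5 * (a + b)                          # the parametrized joint variable
    t12 = np.arctan2(wy - np.sin(t1), wx - np.cos(t1))
    return t1, t12 - t1, phi - t12              # theta1, theta2, theta3
```

Once the parametrized joint variable is found, the remaining joint variables are recovered by substitution, mirroring the procedure described above.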
In this chapter, four different subgroups are selected for the demonstration of
the inverse kinematic solution method. Two of these subgroups are examples
to closed-form and semi-analytic inverse kinematic solutions for the most fre-
quently seen kinematic structures among the industrial robots surveyed in
(Balkan et al., 1999, 2001). Since the manipulators in these two subgroups have
revolute joints only, the inverse kinematic solution of subgroup 4.4 which in-
cludes Unimate 4000 industrial robot is also given to demonstrate the method
on manipulators with prismatic joints. The inverse kinematic solution for this
class of manipulators happens to be either closed-form or needs the PJV
method depending on the selection of one of its parameters. In addition, the
inverse kinematic solution for ABB IRB2000 industrial robot, which has a
closed chain, is obtained to show the applicability of the method to such ma-
nipulators.
Structure Based Classification and Kinematic Analysis of … 153
Figure: Two successive links Lk−1 and Lk with the joints Jk and Jk+1, the frame origins Ok−1 and Ok, the auxiliary point Ak, and the kinematic parameters ak, dk, θk and αk.
Jk: Joint k.
Lk: Link k.
Ok: Origin of the reference frame Fk attached to Lk.
Ak: Auxiliary point between Lk−1 and Lk.
ui(k): ith unit basis vector of Fk; i = 1, 2, 3.
ak: Effective length AkOk of Lk along u1(k).
dk: Distance Ok−1Ak of Lk from Lk−1 along u3(k−1). It is a constant parameter, called offset, if Jk is revolute. It is the kth joint variable if Jk is prismatic. It is then denoted as sk.
θk: Rotation angle of Lk with respect to Lk−1 about u3(k−1). It is the kth joint variable if Jk is revolute. If Jk is prismatic, it is a constant parameter which is either 0° or ±90° for common industrial robot manipulators.
αk: Twist angle of Jk+1 with respect to Jk about u1(k). For common industrial robot manipulators, it is either 0° or ±90°.
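For implementation purposes, the four quantities listed above can be carried per joint in a small record; the field names below are illustrative, not part of the chapter's notation:

```python
from dataclasses import dataclass

@dataclass
class DHLink:
    """Denavit-Hartenberg parameters of one joint-link pair (field names are illustrative)."""
    a: float        # effective length a_k along u1(k)
    d: float        # offset d_k along u3(k-1); the joint variable s_k if the joint is prismatic
    theta: float    # rotation angle theta_k; the joint variable if the joint is revolute
    alpha: float    # twist angle alpha_k; typically 0 or +/-90 deg for industrial robots
    revolute: bool = True
```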
Among the industrial robots surveyed in this chapter, there is no industrial ro-
bot whose last joint is prismatic. Thus, the wrist point, which is defined as the
origin of F6 is chosen to be coincident with the origin of F5. That is, O5 = O6. The
other features of the hand frame F6 are defined as described below.
u3(6) = u3(5)   (1)
a6 = 0, d6 = 0, α6 = 0 (2)
The end-effector is fixed in F6 and, assuming that its tip point P is on the axis along the approach vector u3(6), its location can be described as dp = O6P.
The relationship between the representations of the same vector in two differ-
ent frames can be written as shown below.
n(a) = Ĉ(a,b) n(b)   (3)

Here, n(a) and n(b) are the column representations of the vector n in the frames Fa and Fb, while Ĉ(a,b) is the transformation matrix between these two frames.
In order to make the kinematic features of the manipulators directly visible and to carry out the due simplifications easily, the hand-to-base transformation matrix Ĉ(0,6) and the wrist point position vector r(0), or the tip point position vector p(0), are expressed separately, rather than concealing the kinematic features in the overcompact homogeneous transformation matrices, which are also unsuitable for symbolic manipulations. The wrist and tip point position vectors are related as follows:
p(0) = r(0) + dp Ĉ(0,6) u3   (4)
Here, r (0) and p(0) are the column matrix representations of the position vec-
tors in the base frame F0 whereas u 3 is the column matrix representation of the
approach vector in the hand frame F6.
The overall relative displacement from Fk−1 to Fk consists of two rotations and two translations, which are sequenced as a translation of sk along u3(k−1), a rotation of θk about u3(k−1), a translation of ak along u1(k), and a rotation of αk about u1(k).
Ĉ(0,6) = Ĉ(0,1) Ĉ(1,2) Ĉ(2,3) Ĉ(3,4) Ĉ(4,5) Ĉ(5,6)   (5)
According to the D-H convention, the transformation matrix between two suc-
cessive link frames can be expressed using exponential rotation matrices
(Özgören, 1987-2002). That is,
Here, Î is the identity matrix and ñ is the skew-symmetric matrix generated from the column matrix n = n(a). This generation can be described as follows:

      | n1 |          |  0   −n3   n2 |
  n = | n2 |  →   ñ = |  n3   0   −n1 |   (8)
      | n3 |          | −n2   n1   0  |
Furthermore, if n = uk(a), where uk(a) is the kth basis vector of the frame Fa, then ñ = ũk and

Ĉ(a,b) = e^(ũk θ)   (9)
Here,
u1 = [1, 0, 0]^T,  u2 = [0, 1, 0]^T,  u3 = [0, 0, 1]^T   (10)
Ĉ = Ĉ(0,6) = e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) e^(u3θ3) e^(u1α3) e^(u3θ4) e^(u1α4) e^(u3θ5) e^(u1α5) e^(u3θ6)   (11)
On the other hand, the wrist point position vector can be expressed as
r = r01 + r12 + r23 + r34 + r45 + r56   (12)

Here, rij is the vector from the origin Oi to the origin Oj.
Using the column matrix representations of the vectors in the base frame F0,
Equation (12) can be written as
r = r(0) = d1 u3 + a1 Ĉ(0,1) u1 + d2 Ĉ(0,1) u3 + a2 Ĉ(0,2) u1 + d3 Ĉ(0,2) u3 + a3 Ĉ(0,3) u1
  + d4 Ĉ(0,3) u3 + a4 Ĉ(0,4) u1 + d5 Ĉ(0,4) u3 + a5 Ĉ(0,5) u1   (13)
r = d1 u3 + a1 e^(u3θ1) u1 + d2 e^(u3θ1) e^(u1α1) u3 + a2 e^(u3θ1) e^(u1α1) e^(u3θ2) u1
  + d3 e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) u3 + a3 e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) e^(u3θ3) u1
  + d4 e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) e^(u3θ3) e^(u1α3) u3
  + a4 e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) e^(u3θ3) e^(u1α3) e^(u3θ4) u1
  + d5 e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) e^(u3θ3) e^(u1α3) e^(u3θ4) e^(u1α4) u3
  + a5 e^(u3θ1) e^(u1α1) e^(u3θ2) e^(u1α2) e^(u3θ3) e^(u1α3) e^(u3θ4) e^(u1α4) e^(u3θ5) u1   (14)
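Equations (8)-(11) and (13)-(14) translate directly into code. The sketch below (an illustrative implementation added here, with exp_rot standing in for the exponential rotation matrix via Rodrigues' formula) evaluates both the hand-to-base rotation matrix and the wrist point position for given joint angles, twist angles, link lengths and offsets:

```python
import numpy as np

def skew(n):
    """The skew-symmetric matrix n~ of Eq. (8)."""
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

def exp_rot(u, theta):
    """The exponential rotation matrix e^(u~ theta) of Eq. (9), via Rodrigues' formula."""
    K = skew(u)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

U1 = np.array([1.0, 0.0, 0.0])   # Eq. (10)
U3 = np.array([0.0, 0.0, 1.0])

def forward_kinematics(theta, alpha, a, d):
    """Hand-to-base rotation C(0,6) of Eq. (11) and wrist position r of Eqs. (13)-(14).

    theta: six joint angles; alpha: five twist angles; a, d: a1..a5 and d1..d5
    (a6 = d6 = alpha6 = 0 by Eq. (2))."""
    C = np.eye(3)
    r = d[0] * U3
    for k in range(5):
        C = C @ exp_rot(U3, theta[k])
        r = r + a[k] * (C @ U1)           # a_k term; e^(u1 alpha_k) leaves u1 fixed
        C = C @ exp_rot(U1, alpha[k])
        if k < 4:
            r = r + d[k + 1] * (C @ U3)   # d_{k+1} term of Eq. (14)
    return C @ exp_rot(U3, theta[5]), r
```

With all twist angles zero and only a1 = a2 = 1 non-zero, the formula reduces to the familiar planar two-link expressions, which is a convenient spot check.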
As noticed in Equations (11) and (14), the general r expression contains five
joint variables and the general Ĉ expression includes all of the angular joint
variables. On the other hand, it is an observed fact that in the six-joint indus-
trial robots, many of the structural length parameters (ak and dk) are zero (Bal-
kan et al., 1999, 2001). Due to this reason, there is no need to handle the inverse
kinematics problem in a general manner. Instead, the zero values of ak and dk
of these robots can be used to achieve further simplifications in Equations (11)
and (14). In order to categorize and handle the simplified equations in a sys-
tematic manner, the industrial robots are grouped using a two step classifica-
tion scheme according to their structural parameters ak, αk, and dk for revolute
joints or θk for prismatic joints. The primary classification is based on the twist angles (αk) and gives the main groups, whereas the secondary classification is based on the other structural parameters (ak and dk or θk) and gives the subgroups under each main group.
Structure Based Classification and Kinematic Analysis of … 157
In the main groups, the simplified r and Ĉ expressions are obtained using the fact that the twist angles are either 0° or ±90°. The Ĉ expression for each main group is the same, because the rotation angles (θk) are not yet distinguished at this level as to whether they are constant or not. At the level of the subgroups, the values of the twist and constant rotation angles are substituted into the r and Ĉ expressions, together with the other parameters. Then, the properties of the
exponential rotation matrices are used in order to obtain simplified equations
with reduced number of terms, which can be used with convenience for the
inverse kinematic solutions. The main groups with their twist angles and the
number of robots in each main group are given in Table 1 considering the in-
dustrial robots surveyed here. The subgroups are used for finer classification
using the other structural parameters. For the manipulators in this classifica-
tion, the r expressions are simplified to a large extent especially when zeros
are substituted for the vanishing structural parameters.
Substituting all the nine sets of the twist angle values given in Table 1 into
Equations (11) and (14), the main group equations are obtained. The terms of
r involving ak and dk are denoted as T(ak) and T(dk) as described below.
T(a 2 ) = a 2 e ( θ1 , θ 2 , α 1 ) u 1 = a 2 e u 3 θ1 e u 1 α 1 e u 3 θ2 u 1 (17)
T(d 2 ) = d 2 e ( θ1 , α 1 ) u 3 = d 2 e u 3 θ1 e u 1 α 1 u 3 (18)
Here, the derivation of the equations is given only for the main group 1, but the equations of the other groups can be obtained in a similar systematic manner by applying the exponential rotation matrix simplification tools given in Appendix A. The numbers (E.#) of the tools of Appendix A employed during the derivation of the Ĉ matrices and the terms T(ak) and T(dk) are shown in Table 2.
Table 2. Exponential Rotation Matrix Simplification Tool Numbers (E.#) Applied for Deriva-
tion of Ĉ Matrices and Terms (ak) and (dk) in Main Group Equations
Using the exponential rotation matrix simplification tools E.4 and E.6, the rota-
tion matrix for the main group 1, i.e. Ĉ 1 , can be obtained as follows.
Ĉ 1 = e u 3 θ1 e u 2 θ23 e u 3 θ 4 e u 2 θ5 e u 3 θ6 (21)
The simplifications can be made for the terms T(ak) and T(dk) of Equation (22)
as shown in Table 3 using the indicated simplification tools given in Appendix
A.
Replacing the terms T(ak) and T(dk) in Equation (22) with their simplified
forms given in Table 3, the wrist point location for the main group 1, i.e. r1 , can
be obtained as follows:
r1 = d 1u 3 + a 1e u 3 θ1 u 1 + d 2 e u 3 θ1 u 2 + a 2 e u 3 θ1 e u 2 θ2 u 1 + d 3e u 3 θ1 u 2 + a 3e u 3 θ1 e u 2 θ23 u 1
+d 4 e u 3θ1 e u 2θ23 u 3 + a 4 e u 3θ1 e u 2 θ23 e u 3θ 4 u 1 + d 5 e u 3θ1 e u 2 θ23 e u 3θ4 u 2
+ a 5 e u 3θ1 e u 2 θ23 e u 3θ 4 e u 2 θ5 u 1 (23)
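The collapse from the general Equation (11) to the compact Ĉ1 of Equation (21) can be verified numerically. Since Table 1 is not reproduced above, the twist-angle set used below, α = (−90°, 0°, 90°, −90°, 90°), is an assumed choice consistent with Equation (21):

```python
import numpy as np

def rot(axis, t):
    """Exponential rotation matrix e^(u~ t) about a coordinate axis (axis = 0, 1, 2 for u1, u2, u3)."""
    K = np.zeros((3, 3))
    i, j = [(1, 2), (2, 0), (0, 1)][axis]
    K[i, j], K[j, i] = -1.0, 1.0
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

th = [0.3, 0.5, -0.4, 0.7, 0.2, -0.6]             # arbitrary joint angles
al = np.radians([-90.0, 0.0, 90.0, -90.0, 90.0])  # assumed twist angles for main group 1

C = np.eye(3)                                     # general product of Eq. (11)
for k in range(5):
    C = C @ rot(2, th[k]) @ rot(0, al[k])
C = C @ rot(2, th[5])

C1 = (rot(2, th[0]) @ rot(1, th[1] + th[2]) @     # compact form of Eq. (21)
      rot(2, th[3]) @ rot(1, th[4]) @ rot(2, th[5]))
```

Any other twist-angle set reproducing Equation (21) would pass the same check, and the r1 expression of Equation (23) can be verified in the same fashion.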
The simplified equation pairs for Ĉ and r pertaining to the other main groups
can be obtained as shown below by using the procedure applied to the main
group 1 and the appropriate simplification tools given in Appendix A. The
subscripts indicate the main groups in the following equations. In these equa-
tions, dij denotes di+dj. Note that, if Jk is prismatic, then the offset dk is to be
replaced with the joint variable sk as done in obtaining the subgroup equations
in Subsection 3.2.
r2 = d 1u 3 + a 1e u 3 θ1 u 1 + d 234 e u 3 θ1 u 2 + a 2 e u 3 θ1 e u 2 θ2 u 1 + a 3e u 3 θ1 e u 2 θ23 u 1
+ a 4 e u 3θ1 e u 2 θ234 u 1 + d 5 e u 3θ1 e u 2 θ234 u 3 + a 5 e u 3θ1 e u 2 θ234 e u 3θ5 u 1 (25)
Ĉ 3 = e u 3 θ1 e u 2 θ2 e u 3 θ34 e u 2 θ5 e u 3 θ6 (26)
r3 = d 1u 3 + a 1e u 3 θ1 u 1 + d 2 e u 3 θ1 u 2 + a 2 e u 3 θ1 e u 2 θ2 u 1 + d 34 e u 3 θ1 e u 2 θ2 u 3
+ a 3 e u 3θ1 e u 2 θ2 e u 3θ3 u 1 + a 4 e u 3θ1 e u 2 θ2 e u 3θ34 u 1 + d 5 e u 3θ1 e u 2 θ2 e u 3θ34 u 2
+ a 5 e u 3θ1 e u 2 θ2 e u 3θ34 e u 2 θ5 u 1 (27)
Ĉ 4 = e u 3 θ1 e u 2 θ2 e u 3 θ3 e u 2 θ 4 e u 3 θ5 e u 2 θ6 e -u 1 π /2 (28)
r4 = d 1u 3 + a 1e u 3 θ1 u 1 + d 2 e u 3 θ1 u 2 + a 2 e u 3 θ1 e u 2 θ2 u 1 + d 3e u 3 θ1 e u 2 θ2 u 3
+ a 3 e u 3θ1 e u 2 θ2 e u 3θ3 u 1 + d 4 e u 3θ1 e u 2 θ2 e u 3θ3 u 2 + a 4 e u 3θ1 e u 2 θ2 e u 3θ3 e u 2 θ4 u 1
+d 5 e u 3θ1 e u 2 θ2 e u 3θ3 e u 2 θ 4 u 3 + a 5 e u 3θ1 e u 2 θ2 e u 3θ3 e u 2 θ4 e u 3θ5 u 1 (29)
Ĉ 5 = e u 3 θ1234 e u 2 θ5 e u 3 θ6 (30)
r6 = d 1u 3 + a 1e u 3 θ1 u 1 + d 2 e u 3 θ1 u 2 + a 2 e u 3 θ1 e u 2 θ2 u 1 + d 345e u 3 θ1 e u 2 θ2 u 3
+ a 3 e u 3θ1 e u 2 θ2 e u 3θ3 u 1 + a 4 e u 3θ1 e u 2 θ2 e u 3θ34 u 1 + a 5 e u 3θ1 e u 2 θ2 e u 3θ345 u 1 (33)
Ĉ 8 = e u 3 θ12 e u 2 θ3 e u 3 θ 4 e u 2 θ5 e u 3 θ6 (36)
r9 = d 1u 3 + a 1e u 3 θ1 u 1 + d 23e u 3 θ1 u 2 + a 2 e u 3 θ1 e u 2 θ2 u 1 + a 3e u 3 θ1 e u 2 θ23 u 1
+d 45 e u 3θ1 e u 2 θ23 u 3 + a 4 e u 3θ1 e u 2 θ23 e u 3θ4 u 1 + a 5 e u 3θ1 e u 2 θ23 e u 3θ 45 u 1 (39)
The list of the subgroups of the nine main groups is given in Table 4 with the
non-zero link parameters and the number of industrial robots surveyed in this
study. In the table, the first digit of the subgroup designation indicates the un-
derlying main group and the second non-zero digit indicates the subgroup of
that main group (e.g., subgroup 2.6 indicates the subgroup 6 of the main group
2). The second zero digit indicates the main group itself. The brand names and
the models of the surveyed industrial robots are given in Appendix B with
their subgroups and non-zero link parameters. If the joint Jk of a manipulator
happens to be prismatic, the offset dk becomes a joint variable, which is then
denoted by sk. In the column titled “Solution Type”, CF denotes that a closed-
form inverse kinematic solution can be obtained analytically and PJV denotes
that the inverse kinematic solution can only be obtained semi-analytically us-
ing the so called parametrized joint variable method. The details of these two
types of inverse kinematic solutions can be seen in Section 4.
Using the information about the link lengths and the offsets, the simplified
subgroup equations are obtained for the wrist locations as shown below by us-
ing again the exponential rotation matrix simplification tools given in Appen-
dix A. In these equations, the first and second subscripts associated with the
wrist locations indicate the related main groups and subgroups. For all sub-
groups of the main groups 1 and 2, the rotation matrix is as given in the main
group equations.
The constant joint angles associated with the prismatic joints are as follows for the subgroups of the main group 3: For the subgroups 3.1 and 3.2, having s1, s2, and s3 as the variable offsets, the joint angles are θ1 = 0°, θ2 = 90°, θ3 = 0° or 90°. For the subgroups 3.3 and 3.4, having s3 as the only variable offset, θ3 is either 0° or 90°. This leads to the following equations:
Ĉ31 = Ĉ32 = e^(u1θ34) e^(u2θ′5) e^(u3θ6) = { e^(u1θ4) e^(u2θ′5) e^(u3θ6)    for θ3 = 0°
                                            { e^(u1θ′4) e^(u2θ′5) e^(u3θ6)   for θ3 = 90°   (56)

r31 = s3 u1 + s2 u2 + s1 u3   (57)
Ĉ33 = Ĉ34 = { e^(u3θ1) e^(u2θ2) e^(u3θ4) e^(u2θ5) e^(u3θ6)    for θ3 = 0°
             { e^(u3θ1) e^(u2θ2) e^(u3θ′4) e^(u2θ5) e^(u3θ6)   for θ3 = 90°   (59)
r33 = d2 e^(u3θ1) u2 + s3 e^(u3θ1) e^(u2θ2) u3   (60)

r34 = a2 e^(u3θ1) e^(u2θ2) u1 + s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(u3θ34) u2
    = { a2 e^(u3θ1) e^(u2θ2) u1 + s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(u3θ4) u2    for θ3 = 0°
      { a2 e^(u3θ1) e^(u2θ2) u1 + s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(u3θ′4) u2   for θ3 = 90°   (61)
The constant joint angles associated with the prismatic joints for the subgroups of the main group 4 are as follows: For the subgroup 4.1, having s1, s2, and s3 as the variable offsets, the joint angles are θ1 = 0°, θ2 = 90°, θ3 = 0° or 90°. For the subgroup 4.2, having s2 as the only variable offset, θ2 is either 0° or 90°. For the subgroups 4.3 and 4.4, having s3 as the only variable offset, θ3 is either 0° or 90°. This leads to the following equations:
Ĉ42 = { e^(u3θ13) e^(u2θ4) e^(u3θ5) e^(u2θ6) e^(−u1π/2)            for θ2 = 0°
       { e^(u3θ1) e^(u1θ3) e^(u2θ′4) e^(u3θ5) e^(u2θ6) e^(−u1π/2)   for θ2 = 90°   (64)

r42 = s2 e^(u3θ1) u2 + d4 e^(u3θ1) e^(u2θ2) e^(u3θ3) u2 = { s2 e^(u3θ1) u2 + d4 e^(u3θ13) u2            for θ2 = 0°
                                                          { s2 e^(u3θ1) u2 + d4 e^(u3θ1) e^(u1θ3) u2   for θ2 = 90°   (65)
r43 = s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(u3θ3) e^(u2θ4) u3
    = { s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ24) u3             for θ3 = 0°
      { s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(−u1θ4) u3   for θ3 = 90°   (67)

r44 = a2 e^(u3θ1) e^(u2θ2) u1 + s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(u3θ3) e^(u2θ4) u3
    = { a2 e^(u3θ1) e^(u2θ2) u1 + s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ24) u3             for θ3 = 0°
      { a2 e^(u3θ1) e^(u2θ2) u1 + s3 e^(u3θ1) e^(u2θ2) u3 + d5 e^(u3θ1) e^(u2θ2) e^(−u1θ4) u3   for θ3 = 90°   (68)
The constant joint angle θ3 associated with the prismatic joint J3 for the subgroup of the main group 5 is either 0° or 90°. This leads to the following equations:
Ĉ51 = e^(u3θ1234) e^(u2θ5) e^(u3θ6) = { e^(u3θ124) e^(u2θ5) e^(u3θ6)    for θ3 = 0°
                                       { e^(u3θ′124) e^(u2θ5) e^(u3θ6)   for θ3 = 90°   (69)
For the subgroup of the main group 6, the rotation matrix is as given in the
main group equations and the wrist point location is expressed as
The constant joint angles associated with the prismatic joints for the subgroup of the main group 7 are as follows: The joint angle θ2 is 90° for the prismatic joint J2 and the joint angle θ3 is either 0° or 90° for the prismatic joint J3. This leads to the following equations:
Ĉ71 = e^(u3θ′1) e^(u2θ34) e^(u3θ5) e^(u2θ6) e^(−u1π/2) = { e^(u3θ′1) e^(u2θ4) e^(u3θ5) e^(u2θ6) e^(−u1π/2)    for θ3 = 0°
                                                          { e^(u3θ′1) e^(u2θ′4) e^(u3θ5) e^(u2θ6) e^(−u1π/2)   for θ3 = 90°   (72)

r71 = s2 u3 + a2 e^(u3θ1) u2 − s3 e^(u3θ1) u1   (73)
For the subgroups of the main group 8, the rotation matrix is as given in the
main group equations and the wrist point locations are expressed as
For the subgroup of the main group 9, the rotation matrix is as given in the
main group equations and the wrist point location is expressed as
In the inverse kinematics problem, the elements of Ĉ and r are available and
it is desired to obtain the six joint variables. For this purpose, the required
elements of the r and Ĉ matrices can be extracted as follows:
ri = ui^T r   and   cij = ui^T Ĉ uj   (77)
For most of the manipulators, which are called separable, the wrist point posi-
tion vector contains only three joint variables. The most typical samples of
such manipulators are those with spherical wrists (Pieper & Roth, 1969).
Therefore, for this large class of manipulators, Equation (14) is first used to ob-
tain the arm joint variables, and then Equation (11) is used to determine the re-
maining three of the wrist joint variables contained in the Ĉ matrix. After ob-
taining the arm joint variables, Ĉ equation is arranged in such a way that the
arm joint variables are collected at one side of the equation leaving the remain-
ing joint variables to be found at the other side within a new matrix M̂ , which
is called modified orientation matrix. The three arguments of M̂ happen to be the
wrist joint variables and they appear similarly as an Euler Angle sequence of
three successive rotations. After this preparation, the solution of the modified
orientation equation directly gives the wrist joint variables. The most com-
monly encountered sequences are given in Table 5 with their solutions and
singularities. In the table, σ = ±1 indicates the alternative solutions. When the
sequence becomes singular, the angles φ1 and φ3 cannot be determined and the
mobility of the wrist becomes restricted. For a more detailed singularity analy-
sis, see (Özgören, 1999 and 2002).
However, there may also be other kinds of separable manipulators for which it
is the Ĉ matrix that contains only three joint variables. In such a case, the solu-
tion is started naturally with the Ĉ equation and then the remaining three
joint variables are found from the r equation. There are also inseparable manipulators, for which both the Ĉ and r equations contain more than three joint variables. The most typical example of this group is the Cincinnati Milacron-T3 (CM-T3 566 or 856) robot. It has four unknown variables in
each of its Ĉ and r equations. For such manipulators, since the Ĉ and r
equations are not separable, they have to be solved jointly and therefore a
closed-form inverse kinematic solution cannot be obtained in general. Never-
theless, for some special forms of such manipulators, Cincinnati Milacron-T3
being one of them, it becomes possible to obtain a closed-form inverse kine-
matic solution. For a more detailed analysis and discussion of inverse kinemat-
ics covering all possible six-joint serial manipulators, see (Özgören, 2002).
For all the groups of six-joint manipulators considered in this chapter, there
are two types of inverse kinematic solution, namely the closed-form (CF) solu-
tion and the parametrized joint variable (PJV) solution where one of the joint
variables is temporarily treated as if it is a known parameter. For many of the
six-joint manipulators, a closed-form solution can be obtained if the wrist point
location equation and the end-effector orientation equation are separable, i.e. if
it is possible to write the wrist point equation as
r = r (q 1 ,q 2 ,q 3 ) (78)
where qk denotes the kth joint variable, which is either θk or sk. Since there are
three unknowns in the three scalar equations contained in Equation (78), the
unknowns q1, q2, q3 can be directly obtained from those equations. The end-
effector orientation variables q4, q5, q6 are then obtained by using the equation
for Ĉ .
In general, the necessity for a PJV solution arises when a six-joint manipulator
has a non-zero value for one or more of the structural length parameters a4, a5,
and d5. However, in all of the manipulators that are considered here, only d5
exists as an offset at the wrist. In this case, r will be a function of four vari-
ables as
r = r (q 1 ,q 2 ,q 3 ,q 4 ) (79)
Since there are now more unknowns than the three scalar equations contained in Equation (79), the variables q1, q2, q3 and q4 cannot be obtained directly. So,
one of the joint variables is parametrized and the remaining five joint variables
are obtained as functions of this parametrized variable from five of the six sca-
lar equations contained in the Ĉ and r equations. Then, the remaining sixth
scalar equation is solved for the parametrized variable using a suitable nu-
merical method. Finally, by substituting the numerically found value of this
joint variable into the previously obtained expressions of the remaining joint
variables, the complete solution is obtained.
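To illustrate the PJV idea on a small scale, consider a toy planar two-link positioning problem (the example, its link lengths and the function name are illustrative, not taken from the chapter): θ2 is treated as the parametrized variable, the remaining variable θ1 follows from it in closed form, and the single leftover scalar equation is solved numerically by bisection.

```python
import math

def pjv_solve(x, y, l1, l2):
    """Toy PJV sketch for a planar 2R chain reaching the point (x, y).

    theta2 is the parametrized joint variable; the leftover scalar
    equation r^2 = l1^2 + l2^2 + 2*l1*l2*cos(theta2) is solved by
    bisection, then theta1 follows in closed form.
    """
    r_sq = x * x + y * y
    if not ((l1 - l2) ** 2 <= r_sq <= (l1 + l2) ** 2):
        raise ValueError("target out of reach")

    def residual(th2):
        # remaining scalar equation, written as residual(th2) = 0
        return r_sq - (l1 * l1 + l2 * l2 + 2.0 * l1 * l2 * math.cos(th2))

    # residual is monotonically increasing on (0, pi), so bisection works
    lo, hi = 1e-9, math.pi - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    th2 = 0.5 * (lo + hi)
    # closed-form back-substitution for the non-parametrized variable
    th1 = math.atan2(y, x) - math.atan2(l2 * math.sin(th2),
                                        l1 + l2 * math.cos(th2))
    return th1, th2
```

The same pattern scales to the six-joint case: five joint variables are expressed in closed form as functions of the parametrized one, and a single scalar equation is left for the numerical search.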
168 Industrial Robotics: Theory, Modelling and Control
There may also be a situation in which a six-joint manipulator has non-zero values for the structural parameters d5 and a5, so that
r = r (q 1 ,q 2 ,q 3 ,q 4 ,q 5 ) (80)
In this case, two joint variables can be chosen as the parametrized joint vari-
ables and the remaining four joint variables are obtained as functions of these
two parametrized variables. Then, using a suitable numerical method, the re-
maining two equations are solved for the parametrized joint variables. After-
wards, the inverse kinematic solution is completed similarly as described
above. However, if desired, it may also be possible to reduce the remaining
two equations to a single but rather complicated univariate polynomial equa-
tion by using methods similar to those in (Raghavan & Roth, 1993; Manseur &
Doty, 1996; Lee et al, 1991).
Although the analytical or semi-analytical solution methods are necessarily
dependent on the specific features of the manipulator of concern, the proce-
dure outlined below can be used as a general course to follow for most of the
manipulators considered here.
2. The three scalar equations are worked on in order to cast them into the
forms of the trigonometric equations considered in Appendix C, if they are
not so already.
4. If such an equation does not exist or cannot be generated, then the PJV
method is used. Thus, except the parametrized joint variable, there will
again be a single equation with a single unknown to start the solution.
5. The two remaining scalar equations pertaining to the wrist location are
then used to determine the remaining two of the arm joint variables again
by using the appropriate ones of the trigonometric equation solutions
given in Appendix C.
6. Once the arm joint variables are obtained, the solution of the orientation
equation for the three wrist joint variables is straightforward since it will
be the same as the solution pattern of one of the commonly encountered
rotation sequences, such as 1-2-3, 3-2-3, etc, which are shown in Table 5.
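Most of the steps above reduce to the basic trigonometric equation a sin θ + b cos θ = c treated in Appendix C. A minimal sketch of its closed-form solution (the function name and the branch selector σ = ±1 are illustrative conventions, not the chapter's notation):

```python
import math

def solve_trig(a, b, c, sigma=1):
    """Solve a*sin(theta) + b*cos(theta) = c for theta.

    Writing a = rho*cos(phi), b = rho*sin(phi) turns the equation into
    rho*sin(theta + phi) = c; sigma = +1/-1 selects one of the two
    solution branches.
    """
    disc = a * a + b * b - c * c
    if disc < 0.0:
        raise ValueError("no real solution: |c| > sqrt(a^2 + b^2)")
    return math.atan2(c, sigma * math.sqrt(disc)) - math.atan2(b, a)
```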
When the manipulators in Table 4 are considered from the viewpoint of the so-
lution procedure described above, they are accompanied by the designations
CF (having inverse kinematic solution in closed-form) and PJV (having inverse
kinematic solution using a parametrized joint variable). As noted, almost all
the manipulators in Table 4 are designated exclusively either with CF or PJV.
Exceptionally, however, the subgroups 4.3 and 4.4 have both of the designa-
tions. This is because the solution type can be either CF or PJV depending on
whether θ3 = 0° or θ3 = 90°, respectively.
In this section, the inverse kinematic solutions for the subgroups 1.1 (e.g.
KUKA IR 662/10), 1.7 (e.g. GMF S-3 L or R) and 4.4 (e.g. Unimate 4000) are
given in order to demonstrate the solution procedure described above. As in-
dicated in Table 4, the subgroup 1.1 can have the inverse kinematic solution in
closed-form, whereas the subgroup 1.7 necessitates a PJV solution. On the
other hand, for the subgroup 4.4, which has a prismatic joint, the inverse ki-
nematic solution can be obtained either in closed-form or by using the PJV
method depending on whether the structural parameter θ3 associated with the prismatic joint is 0° or 90°. Although θ3 = 0° for the enlisted industrial robot of this subgroup, the solution for θ3 = 90° is also considered here for the sake of dem-
onstrating the application of the PJV method to a robot with a prismatic joint
as well. It should be noted that the subgroups 1.1 and 1.7 are examples of robots with only revolute joints, and the subgroup 4.4 is an example of robots with revolute and prismatic joints. The subgroups 1.1 and 1.7 are considered
particularly because the number of industrial robots is high within these cate-
gories. As an additional example, the ABB IRB2000 industrial robot is also con-
sidered to demonstrate the applicability of the method to manipulators con-
taining closed kinematic chains. However, the solutions for the other
subgroups or a new manipulator with a different kinematic structure can be
obtained easily by using the same systematic approach.
For all the subgroups of the main group 1, the orientation matrix is
Ĉ1 = e^(ũ3 θ1) e^(ũ2 θ23) e^(ũ3 θ4) e^(ũ2 θ5) e^(ũ3 θ6)        (81)
Since all the subgroups have the same Ĉ 1 matrix, they will have identical
equations for θ4, θ5 and θ6. In other words, these variables can always be determined from the following equation, after the other variables have been found from the wrist location equations of the subgroups:
e^(ũ3 θ4) e^(ũ2 θ5) e^(ũ3 θ6) = M̂1        (82)
θ4 = atan2(σ5 m23, σ5 m13)        (83)
θ5 = atan2(σ5 √(1 − m33²), m33)        (84)
θ6 = atan2(σ5 m32, −σ5 m31)        (85)
Here, σ5 = ±1 and mij = ui^T M̂1 uj.
Note that this 3-2-3 sequence becomes singular if θ5 = 0 or θ5 = ±180°, but the latter case is not physically possible. This is the first kind of singularity of the manipulator, which is called wrist singularity. In this singularity with θ5 = 0, the
axes of the fourth and sixth joints become aligned and Equation (82) degener-
ates into
e^(ũ3 θ4) e^(ũ3 θ6) = e^(ũ3 (θ4 + θ6)) = e^(ũ3 θ46) = M̂1        (86)
This equation implies that, in the singularity, θ4 and θ6 become arbitrary and they cannot be determined separately, although their combination θ46 = θ4 + θ6 can still be determined as θ46 = atan2(m21, m11). This means that one of the fourth and sixth joints becomes redundant in orienting the end-effector, which in turn cannot be driven about the axis normal to the axes of the fifth and sixth joints.
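Equations (83)-(85) can be checked numerically by composing a 3-2-3 rotation sequence and extracting the angles back. The following pure-Python sketch (helper names are illustrative) uses the same convention mij = ui^T M̂1 uj, i.e. 1-based row/column indices:

```python
import math

def rotz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def roty(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_323(M, sigma=1):
    """Extract (theta4, theta5, theta6) from M = Rz(t4)*Ry(t5)*Rz(t6)."""
    m = lambda i, j: M[i - 1][j - 1]   # 1-based indexing, as in the text
    t4 = math.atan2(sigma * m(2, 3), sigma * m(1, 3))
    t5 = math.atan2(sigma * math.sqrt(max(0.0, 1.0 - m(3, 3) ** 2)), m(3, 3))
    t6 = math.atan2(sigma * m(3, 2), -sigma * m(3, 1))
    return t4, t5, t6
```

When m33 = ±1 the sequence is singular, exactly as discussed above: m13, m23, m31 and m32 all vanish and the atan2 calls for t4 and t6 lose their meaning.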
The wrist point position vector of this subgroup given in Equation (40) can be
written again as follows by transposing the leading exponential matrix on the
right hand side to the left hand side:
Here, r1, r2 and r3 are the base frame components of the wrist position vector r11.
From Equation (89), θ1 can be obtained as follows by using the trigonometric equation T1 in Appendix C, provided that r2² + r1² ≠ 0:
If r2² + r1² = 0, i.e. if r2 = r1 = 0, i.e. if the wrist point is located on the axis of the
first joint, the second kind of singularity occurs, which is called shoulder singu-
larity. In this singularity, Equation (89) degenerates into 0 = 0 and therefore θ1
cannot be determined. In other words, the first joint becomes ineffective in positioning the wrist point, which in turn cannot be driven in the direction normal to the arm plane (i.e. the plane formed by the links 2 and 3).
Here,
ρ2 = ((ρ1² + r3²) − (a2² + d4²)) / (2 a2 d4)        (95)
Note that the constraint on ρ 2 implies a working space limitation on the ma-
nipulator, which can be expressed more explicitly as
(a2 − d4)² ≤ ρ1² + r3² ≤ (a2 + d4)²        (96)
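The constraints in Equations (95) and (96) translate directly into a reachability test: |ρ2| ≤ 1 holds exactly when Equation (96) is satisfied. A small sketch (function names are illustrative):

```python
def rho2(rho1, r3, a2, d4):
    # Equation (95)
    return ((rho1 ** 2 + r3 ** 2) - (a2 ** 2 + d4 ** 2)) / (2.0 * a2 * d4)

def reachable(rho1, r3, a2, d4):
    # Equation (96): workspace limits imposed by the two arm links
    rr = rho1 ** 2 + r3 ** 2
    return (a2 - d4) ** 2 <= rr <= (a2 + d4) ** 2
```

The equivalence follows by expanding the squares in Equation (96): the lower and upper bounds are a2² + d4² ∓ 2 a2 d4, i.e. exactly ρ2 = ∓1.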
Expanding sin θ23 and cos θ23 in Equations (90) and (93) and rearranging the terms as coefficients of sin θ2 and cos θ2, the following equations can be obtained.
Here,
ρ3 = a2 + d4 sin θ3        (99)
ρ4 = d4 cos θ3        (100)
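The equations referred to above are linear in sin θ2 and cos θ2 (their explicit forms are not reproduced in this extract). A generic sketch of the resulting solve, with illustrative coefficient names a11, …, b2 standing for whatever the rearrangement produces:

```python
import math

def theta_from_sc_system(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ [sin(t), cos(t)] = [b1, b2]
    by Cramer's rule, then recover t with atan2."""
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("singular system: theta is undetermined")
    s = (b1 * a22 - b2 * a12) / det
    c = (a11 * b2 - a21 * b1) / det
    return math.atan2(s, c)
```

If the recovered s and c fail to satisfy s² + c² ≈ 1, the prescribed pose is inconsistent with the manipulator geometry, which is a useful runtime check.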
The wrist point position vector of this subgroup is given in Equation (46).
From that equation, the following scalar equations can be obtained as done
previously for the subgroup 1.1:
Here, r1, r2 and r3 are the components of the wrist position vector r17. Note that Equations (102)-(104) contain four unknowns (θ1, θ2, θ3, θ4). Therefore, it now becomes necessary to use the PJV method. That is, one of these four unknowns must be parametrized. On the other hand, Equation (103) is the simplest one of the three equations. Therefore, it will be reasonable to parametrize either θ1 or θ4. As shown in (Balkan et al., 1997, 2000), the solutions obtained by parametrizing θ1 and θ4 expose different amounts of explicit information about the multiple and singular configurations of the manipulators belonging to this subgroup. The rest of the information is concealed within the equation to be solved numerically. It happens that the solution obtained by parametrizing θ4 reveals more information, so that the critical shoulder singularity of the manipulator can be seen explicitly in the relevant equations, whereas the solution obtained by parametrizing θ1 conceals it. Therefore, θ4 is chosen as
the parametrized joint variable in the solution presented below. As the starting
step, θ1 can be obtained from Equation (103) as follows by using T3 in Appendix C, provided that r2² + r1² > 0 and r2² + r1² ≥ ρ5²:
Here,
ρ 5 = d 5 cos θ 4 (106)
On the other hand, the inequality constraint r2² + r1² ≥ ρ5² indicates a working space limitation on the manipulator.
Here,
ρ 6 = d 5 sin θ 4 (109)
Here,
ρ7 = ((ρ1² + r3²) − (a2² + d4² + ρ6²)) / (2 a2)        (111)
Expanding sin θ23 and cos θ23 in Equations (107) and (108) and collecting the relevant terms as coefficients of sin θ2 and cos θ2, the following equations can be obtained.
Here,
If ρ8² + ρ9² = 0, the elbow singularity occurs. Then, θ2 becomes arbitrary with the same consequences as those of the subgroup 1.1.
The inverse kinematic solution for the subgroup 4.4 is obtained in a similar
manner, and the related equations are given in Table 6, indicating the multiple solutions by σi = ±1. The orientation matrix Ĉ4 is simplified using the kinematic properties of this subgroup and denoted as Ĉ44. Actually, the Unimate 4000 manipulator of this subgroup does not have two versions with θ3 = 0° and θ3 = 90° as given below. It has simply θ3 = 0°, and the other configuration is a
fictitious one. However, aside from constituting an additional example for the
PJV method, this fictitious manipulator also gives a design hint for choosing
the structural parameters so that the advantage of having a closed-form in-
verse kinematic solution is not lost.
The method of inverse kinematics presented here is not limited to the serial
manipulators only. It can also be applied to robots with a main open kinematic
chain supplemented with auxiliary closed kinematic chains for the purpose of
motion transmission from the actuators kept close to the base. As a typical ex-
ample, the ABB IRB2000 industrial robot is selected here in order to demon-
strate the application of the method to such manipulators. The kinematic
sketch of this manipulator with its four-link transmission mechanism is shown
in Figure 2. It can be seen from the kinematic sketch that the four-link mecha-
nism can be considered in a sense as a satellite of the manipulator’s main open
kinematic chain. In other words, its relative position with respect to the main
chain is determined completely by the angle θ3. Once θ3 is found by the inverse kinematic solution, the angular position of the third joint actuator φ3 can be determined in terms of θ3 as follows by considering the kinematics of the four-
link mechanism:
φ3 = ψ 2 + θ2 (118)
Here,
a = a2 + b4 sin θ3,   b = b4 cos θ3,   c = (a2² + b2² + b4² − b3²) / (2 b2) − (a2 b4 / b2) sin θ3        (120)
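Equation (119) is not reproduced in this extract. Assuming, purely for illustration, that it has the standard loop-closure form a sin ψ2 + b cos ψ2 = c, the actuator position φ3 can be sketched from Equations (118) and (120) as follows (the function name and any link values used are likewise illustrative):

```python
import math

def irb2000_phi3(theta2, theta3, a2, b2, b3, b4, sigma=1):
    """Sketch of Equations (118)-(120); assumes the elided Equation (119)
    reads a*sin(psi2) + b*cos(psi2) = c (a hypothetical form)."""
    # Equation (120)
    a = a2 + b4 * math.sin(theta3)
    b = b4 * math.cos(theta3)
    c = (a2 ** 2 + b2 ** 2 + b4 ** 2 - b3 ** 2) / (2.0 * b2) \
        - (a2 * b4 / b2) * math.sin(theta3)
    disc = a * a + b * b - c * c
    if disc < 0.0:
        raise ValueError("four-bar loop cannot close for this theta3")
    psi2 = math.atan2(c, sigma * math.sqrt(disc)) - math.atan2(b, a)
    return psi2 + theta2          # Equation (118)
```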
Figure 2. Kinematic sketch of the ABB IRB2000 manipulator: a) complete kinematic structure, b) closed kinematic chain details.
The inverse kinematic solutions of all the subgroups given in Table 4 are obtained. In main group 1, subgroup 1.2 has the parameter d2 in excess when compared to subgroup 1.1. This influences only the solution of θ1; the remaining joint variable solutions are the same. Similarly, subgroup 1.3 has the parameter a1 and subgroup 1.5 has d23 (d2 + d3) in excess when compared to subgroup 1.1. Considering subgroup 1.3, only the solutions of θ2 and θ3 differ from those of subgroup 1.1, whereas the solution of subgroup 1.5 is identical to that of subgroup 1.2 except that d23 is used in the formulas instead of d2. Subgroup 1.6 has the parameter a1 in excess compared to subgroup 1.4; thus, the solutions of θ2 and θ3 differ from those of subgroup 1.4. Subgroup 1.8 has the parameter d2 and subgroup 1.9 has a1 and a3 in excess when compared to subgroup 1.7. Considering subgroup 1.8, only the solution of θ1 is different. For subgroup 1.9, a1 and a3 cause minor changes in the parameters defined in the solutions of θ2 and θ3. The last subgroup, subgroup 1.10, has the parameters a1, a3 and d3 in excess when compared to subgroup 1.7; θ1, θ2 and θ3 have the same form as in the solution of subgroup 1.7, except for minor changes in the parameters defined in the solutions. It can be concluded that d2 affects the solution of θ1, and that a1 affects the solutions of θ2 and θ3, through minor changes in the parameters defined in the solution. In main group 2, subgroup 2.4 has the parameter d2 in excess when compared to subgroup 2.3, and thus the solution of θ1 has minor changes. Subgroup 2.5 has the parameter a1 in excess when compared to subgroup 2.4, and the solutions of θ2 and θ3 have minor changes. Subgroup 2.6 has the parameter d4 in excess, and the term including it is identical to the term including d2 in subgroup 2.5, except that θ1 includes d4 instead of d2. In main group 8, subgroup 8.2 has the parameter a2 in excess compared to subgroup 8.1. This leads to a minor change in the solutions of θ1 and θ2 through the parameters defined in the solution.

For main group 1, if θ1 is obtained analytically, θ4, θ5 and θ6 can be solved in closed-form. Any six-joint manipulator belonging to main group 2 can be solved in closed-form provided that θ1 is obtained in closed-form. Using θ1, θ234 can easily be determined using the orientation matrix equation. Since θ4 appears in the terms including a4, d5 and a5 as θ234, this leads to a complete
5. Conclusion
6. References
Appendix A
E.10 : e^(−ũi θk) e^(ũi θk) = e^(ũi 0) = Î
Appendix B
Appendix C
Inverse Position Procedure for Manipulators with Rotary Joints
Ibrahim A. Sultan
1. Introduction
2. Literature Survey
which were solved numerically to obtain different possible solutions to the in-
verse position problem; it could therefore be concluded that the maximum
number of meaningful solutions to the inverse position problem of a general
robotic structure is 16 (Tsai and Morgan 1985), rather than 32 as had previ-
ously been suggested (Duffy and Crane, 1980). However, it has been pointed out that a manipulator with 16 different real inverse position solutions can seldom be found in real life (Manseur and Doty, 1989). In reality, most manipulators are designed to possess up to 8 solutions, of which only one or two can be physically attained.
It is possible to express the inverse position problem of robots in terms of a 16th-degree polynomial in the tan-half-angle of a joint-displacement (Lee and Liang 1988a & 1988b; Raghavan and Roth 1989). However, it has been argued that the coefficients of such a polynomial are likely to contain too many terms, which may render such a tack impractical to use (Smith and Lipkin 1990).
Also, these high order polynomials are obtained by evaluating the eliminants
of hyper-intricate determinants which may be impossible to handle symboli-
cally in the first place. This may have motivated some researchers (Manocha
and Canny 1992; Kohli and Osvatic 1993) to reformulate the solutions in terms
of eigenvalue models in order to simplify the analysis and avoid numerical
complications. However, a numerical technique has been introduced to obtain
the inverse solutions without having to expand the system characteristic de-
terminant (Sultan, 2002).
The procedure introduced here for the inverse position analysis of robot ma-
nipulators is described in the rest of this paper.
3. Rotation of Vectors
x̂i · ẑi = 0        (1)
where ‖x̂i‖ = 1.
Then
ŷi = ẑi × x̂i        (2)
The original vector, vio, and the rotated vector, vir, can both be expressed with respect to the x̂i ŷi ẑi-frame in terms of local coordinates n, m and l as follows,
Inverse Position Procedure for Manipulators with Rotary Joints 189
Figure 1. Rotation of a vector about a joint axis: the original vector vio is rotated through θi about ẑi into vir (base coordinates shown).
The inverse of this problem is encountered when zˆ i , vio and vir are all known
and it is required to obtain the corresponding value of θ i . With the values of
the local coordinates known, θi could be obtained as follows,
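Equation (6) itself is not reproduced in this extract; one standard closed form recovers θi as atan2(no mr − mo nr, no nr + mo mr) from the local coordinates, and it can be verified against the forward (Rodrigues) rotation. A sketch with illustrative helper names:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def rotate_about(v, z, theta):
    """Rodrigues rotation of v about the unit axis z by theta."""
    c, s = math.cos(theta), math.sin(theta)
    zxv = cross(z, v)
    zv = dot(z, v)
    return [v[i] * c + zxv[i] * s + z[i] * zv * (1.0 - c) for i in range(3)]

def recover_theta(vo, vr, z, x):
    """Recover the rotation angle about z from the original and rotated
    vectors, using local coordinates in the frame (x, y = z x x, z)."""
    y = cross(z, x)
    no, mo = dot(vo, x), dot(vo, y)
    nr, mr = dot(vr, x), dot(vr, y)
    return math.atan2(no * mr - mo * nr, no * nr + mo * mr)
```

The recovery is well defined as long as the vector is not parallel to the joint axis (no² + mo² > 0), which mirrors the singular cases discussed later.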
The arm, which is the largest kinematic part of the manipulator, consists of
three revolute joints connected through rigid links. Each joint, as shown in
Figure (2), is represented by the spatial pose of its axis. The first joint-axis has
a fixed location and orientation in space as it represents the connection be-
tween the whole manipulator and the fixed frame. Any other joint-axis num-
ber i can float in space as it rotates about the joint-axis number i–1.
In the current context, the main function of the arm is to displace a certain spa-
tial point from an initial known location to a required final position. In spheri-
cal-wrist manipulators, this point is at the intersection of the wrist axes. In a
calibrated (non-spherical-wrist) manipulator, it may represent a point on the
sixth axis as close as possible to the fifth joint-axis. In Figure (2), the arm is re-
quired to displace point pi to a final position pf. The position vectors, pi b and
pfb respectively, of these two points are known with respect to the base coor-
dinate system.
As per Appendix A, any joint-axis zˆ i is related to the successive axis, zˆ i +1 ,
through a common normal, xˆ i +1 . This common normal is used to construct a
local frame at the axis zˆ i +1 using the relation, yˆ i + 1 = zˆ i + 1 × xˆ i + 1 . The shortest dis-
tance, ai+1, between the axes ẑi and ẑi+1, is measured along x̂i+1, which intersects ẑi at the point pi and ẑi+1 at the point pi(i+1).
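The common-normal construction just described can be sketched numerically; the following assumes non-parallel axes, each given by a point on the axis and a unit direction vector (function names are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def common_normal(z_i, z_next, p_i, p_next):
    """Unit common normal x and shortest distance a between the axis
    through p_i along z_i and the axis through p_next along z_next."""
    n = cross(z_i, z_next)
    norm = math.sqrt(dot(n, n))
    if norm < 1e-12:
        raise ValueError("parallel axes: common normal not unique")
    x = [c / norm for c in n]
    d = [p_next[k] - p_i[k] for k in range(3)]
    return x, abs(dot(d, x))
```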
At the zero initial position which is shown in Figure (2), the axis x̂1 is chosen
to coincide with x̂2. In this figure, the position vectors, pi3o and pf1r, of points pi and pf respectively, with respect to the frames x̂3ŷ3ẑ3 and x̂1ŷ1ẑ1, may be numerically calculated as follows,
pf1r = pfb − p1
pi3o = pib − p32        (7)
where p1 and p32 are the position vectors of the axes-attached points p1 and p32, respectively, as measured from the origin of the base coordinates. According to the concepts in (4) and (5), pf1r can be described with respect to the
x̂1ŷ1ẑ1-frame in terms of known local coordinates (n1r, m1r and l1r). Also, pi3o can be described with respect to the x̂3ŷ3ẑ3-frame in terms of known local coordinates (n3o, m3o and l3o).
Figure 2. The arm substructure at its zero initial position: joint axes ẑ1, ẑ2 and ẑ3, the structural parameters d2, a2 and a3, and the points pi and pf (base coordinates shown).
It is understood that the vector pf1r resulted from rotating another vector pf1o about the ẑ1 axis by an angle θ1 (i.e. a θ1ẑ1-type rotation). The original vector, pf1o, can be expressed with respect to the x̂1ŷ1ẑ1-frame in terms of local coordinates (n1o, m1o and l1o). Also, during the positioning process the vector pi3o will perform a θ3ẑ3-type rotation to evolve into pi3r, which can be expressed with respect to the x̂3ŷ3ẑ3-frame in terms of local coordinates (n3r, m3r and l3r). Therefore the two vectors, pf1o and pi3r, can be written as follows;
pf1o = n1o x̂1 + m1o ŷ1 + l1o ẑ1
pi3r = n3r x̂3 + m3r ŷ3 + l3r ẑ3        (8)
where the two equations above have four unknowns that need to be deter-
mined. These four unknowns are n1o , m1o , n3r and m3r . The numerical values
of the l-type local coordinates are calculated as follows;
l3r = pi3o · ẑ3
l1o = pf1r · ẑ1        (9)
In fact the value of l3r is calculated, and stored in a data file, once the manipu-
lator has been calibrated and an initial position has been nominated; however,
l1o has to be calculated for every new desired end-effector position. Moreover,
the end-effector positions which are defined by the vectors pf1o and pi 3r can
be used to study the rotation about the middle joint-axis, ẑ 2 . These same posi-
tions can be expressed relative to a point, p 2 , attached to ẑ 2 , using the two re-
spective vectors, p2r and p2o, as follows;
p2r = pf1o + d2 ẑ2 − a2 x̂2
p2o = pi3r + a3 x̂3        (10)
where d2 = (p21 − p2) · ẑ2
It may be noted that p2r and p2o are separated by a single rotation, θ2ẑ2. The properties of this rotation may be utilised to show that,
p2o · ẑ2 = p2r · ẑ2        (11)
and
p2o · p2o = p2r · p2r        (12)
Equations (7) to (12) may then be manipulated to obtain the following two lin-
ear equations,
m1o ŷ1 · ẑ2 = m3r ŷ3 · ẑ2 + l3r ẑ3 · ẑ2 − l1o ẑ1 · ẑ2 − d2        (13)
and
m1o d2 ŷ1 · ẑ2 − a2 n1o = a3 n3r + (1/2)(pi3o · pi3o − pf1r · pf1r + a3² − a2² − d2² − 2 l1o d2 ẑ1 · ẑ2)        (14)
The concept in equations (3) and (5) may be employed to express the x2-, y2- and z2-components of a rotated vector pr2o which results from performing a θ2ẑ2 rotation on p2o. Then the coincidence of pr2o and p2r may be described by,
pr2o · x̂2 = p2r · x̂2        (15)
and
pr2o · ŷ2 = p2r · ŷ2        (16)
n1o − a2 = (x̂3 · x̂2 c2 − x̂3 · ŷ2 s2) n3r + (ŷ3 · x̂2 c2 − ŷ3 · ŷ2 s2) m3r + (l3r ẑ3 · x̂2 + a3 x̂3 · x̂2) c2 − (l3r ẑ3 · ŷ2 + a3 x̂3 · ŷ2) s2        (17)
and
m1o ŷ1 · ŷ2 + l1o ẑ1 · ŷ2 = (x̂3 · x̂2 s2 + x̂3 · ŷ2 c2) n3r + (ŷ3 · x̂2 s2 + ŷ3 · ŷ2 c2) m3r + (l3r ẑ3 · ŷ2 + a3 x̂3 · ŷ2) c2 + (l3r ẑ3 · x̂2 + a3 x̂3 · x̂2) s2        (18)
The four linear equations, (13), (14), (17) and (18) represent the mathematical
core of the kinematic model introduced in the present work for the inverse po-
sition analysis of the arm module. A symbolic solution for these equations can
be obtained such that, n3r and m3r are expressed in the following forms,
n3r = f1 / f        (19)
and
m3r = f2 / f        (20)
f1² + f2² = f² (pi3o · pi3o − l3r²)        (21)
where the coefficients are calculated symbolically from the model presented
above. The parameters which constitute these coefficients reflect the kinematic
relations between the successive arm axes and they can be calculated and
stored for run-time use once the arm is calibrated. These parameters are all
constant for a given arm except pf1r · pf1r and l1o, which depend on the desired final location of the end-effector as described above. The fact that only two parameters need to be calculated highlights the computational efficiency of the described approach.
In (22), t is the tangent of half θ 2 and it is used to replace c 2 and s2 as follows,
c2 = (1 − t²) / (1 + t²)
s2 = 2t / (1 + t²)        (23)
Equation (22), which is a fourth-degree polynomial, can be solved using a systematic non-iterative technique (Tranter, 1980). The resulting roots can successively be substituted back into equations (23) to work out the corresponding values of c2 and s2. These values are then employed to obtain the joint-displacement θ2 using the atan2 function referred to above. The values of the local coordinates, n3r and m3r, may be calculated by using equations (19) and (20).
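The root extraction and back-substitution into equations (23) can be sketched with numpy (the coefficient ordering, highest power first, follows numpy.roots; the function name is illustrative):

```python
import math
import numpy as np

def theta2_candidates(coeffs, real_tol=1e-9):
    """Solve the quartic (22) in t = tan(theta2/2), then map each real
    root back through equations (23) to a candidate theta2."""
    thetas = []
    for t in np.roots(coeffs):
        if abs(t.imag) > real_tol:
            continue                       # complex roots: no configuration
        t = t.real
        c2 = (1.0 - t * t) / (1.0 + t * t)
        s2 = 2.0 * t / (1.0 + t * t)
        thetas.append(math.atan2(s2, c2))  # equals 2*atan(t)
    return thetas
```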
A numerically stable method to obtain m1o and n1o is to use equation (17) for
n1o and then obtain m1o from the following equation,
m1o = (pr2o − d2 ẑ2) · ŷ1        (24)
Finally, n3r, m3r, n3o and m3o are employed in equation (6) to obtain the corresponding values of θ3. Similarly, n1r, m1r, n1o and m1o are used to obtain the corresponding values of θ1.
As revealed by the polynomial in (22), the maximum number of arm configu-
rations, N arm , which correspond to a given end-effector position is four. In
some cases, however, the geometrical relationships between the consecutive
axes as well as the required position of pf allow for the inverse position problem to be solved through the use of quadratic, rather than quartic or higher, polynomials. Arms which exhibit this sort of simplification are said to have simple
structures. Some of these cases are outlined in the next section.
Most industrial robots are designed to have their successive axes either paral-
lel or perpendicular to make a simplified closed form inverse position solution
achievable. Researchers have repeatedly assigned the term, simple structure, to
these robotic arms. The word "simple" usually implies that a non-iterative so-
lution can be obtained for the inverse position problem of this particular struc-
ture. However, as the discussion in the previous section reveals, a non-
iterative solution can still be obtained even for arms with arbitrarily positioned
and directed joint-axes. A definition has been proposed for this term in the
light of the conics theory (Smith and Lipkin, 1990). In the present section, a
consistent simplified geometrical definition is introduced.
To gain understanding of the results obtainable from the fourth-degree poly-
nomial equation (22), equations (13) and (14) along with the following two
equations may be considered,
The four equations, (13), (14), (25) and (26), together are useful in studying the
kinematic behaviour of the arm mechanism.
Essentially, the inverse position problem of the arm structure may be depicted
as shown in Figure (3). In the figure, points pf and pi assume their local circu-
lar paths about the rotary axes, ẑ 1 and ẑ 3 , creating two spatial circles Cz1 and
Cz3, respectively, in two planes perpendicular to ẑ 1 and ẑ 3 with their centres
located on the axes. Thus, a solution exists if a circle, Cz2, that intersects both
Cz1 and Cz3 simultaneously, can be drawn in a plane normal to ẑ 2 with its
centre located along it. As the analysis given in the previous section suggests,
if the three axes are located and directed arbitrarily in space, a maximum of
four different circles can be drawn about ẑ 2 to satisfy this condition. Each cir-
cle intersects each of Cz1 and Cz3 at one point and hence, the four possible so-
lutions.
Figure 3. Geometrical depiction of the arm inverse position problem: the points pf and pi trace the circles Cz1 and Cz3 about ẑ1 and ẑ3, intersected by a circle Cz2 drawn about ẑ2 (base coordinates shown).
- i. use equations (13) and (14) to express n3r and m3r as functions of m1o,
- ii. use these functions in equation (25) to obtain the two roots of m1o: m1o1 and m1o2,
- iii. use equation (26) to obtain the four corresponding values of n1o: n1o1, −n1o1, n1o2 and −n1o2,
a2 = 0,   ŷ1 · ẑ2 = −1   and   ŷ3 · ẑ2 = 0
This makes it possible to obtain the solution for a non-calibrated PUMA arm
substructure using the following procedure,
Thus, the four possible configurations of the arm are given by the following
root combinations,
(n1o, m1o, n3r, m3r), (n1o, m1o, n3r, −m3r), (−n1o, m1o, n3r, m3r) and (−n1o, m1o, n3r, −m3r).
Based on the above discussion, it can be concluded that the middle axis ẑ2 must lie in one plane with either ẑ1 or ẑ3 for a simplified mathematical procedure to be realisable. Once this condition is satisfied, the four equations (13), (14), (25) and (26) can be readily employed to obtain the inverse solution, and therefore the arm structure can be described as simple.
The next section explains the procedure presented for the inverse position analysis of the wrist substructure.
In the current context, the main task of the first two wrist joints (namely the
fourth and fifth joints on the manipulator structure) is to displace the axis of
the last joint (i.e. the sixth joint) from a given known orientation to a new de-
sired direction in space.
Figure (4) depicts an arrangement of two revolute joints with their axes ẑ4 and ẑ5 pointing in the directions calculated using any suitable direct kinematic procedure featuring three consecutive rotations, θ3ẑ3, θ2ẑ2 and θ1ẑ1. At this specific pose, the axis of the sixth joint, ẑ6i(5o), is also calculated using the same consecutive rotations, and it is now required to be orientated in the direction of ẑ6f. In the figure, the common normal x̂5 is directed from ẑ4 to ẑ5 (where x̂5 = ẑ4 × ẑ5). At zero position, x̂4 is selected to coincide with x̂5 such that two Cartesian coordinate systems x̂4ŷ4ẑ4 and x̂5ŷ5ẑ5 can be established. According to the concepts in (4) and (5), ẑ6f can be described with respect to the x̂4ŷ4ẑ4-frame in terms of local coordinates (n4r, m4r and l4r). Also, ẑ6i(5o) can be described with respect to the x̂5ŷ5ẑ5-frame in terms of known local coordinates (n5o, m5o and l5o).
Figure 4. The wrist substructure: the joint axes ẑ4 and ẑ5 with rotations θ4 and θ5, the common normals x̂4 and x̂5, the initial sixth-axis direction ẑ6i(5o) and its desired direction ẑ6f, shown with respect to the base coordinates.
It is understood that the vector ẑ6f results from rotating another vector, ẑ6f(4o), about the ẑ4 axis by an angle θ4 (i.e. a θ4ẑ4-type rotation). The original vector, ẑ6f(4o), can be expressed with respect to the x̂4ŷ4ẑ4-frame in terms of local coordinates (n4o, m4o and l4o), where n4o and m4o are unknowns to be worked out and l4o is numerically obtained from l4o = l4r = ẑ6f ⋅ ẑ4.
The vector ẑ6i(5r), which results from rotating ẑ6i(5o) by an angle θ5 about ẑ5, may be expressed in the following form:

ẑ6i(5r) = (n5o c5 − m5o s5) x̂5 + (n5o s5 + m5o c5) ŷ5 + l5o ẑ5   (27)
Since the subsequent rotation about ẑ4 does not change the ẑ4-component of the vector, ẑ6i(5r) ⋅ ẑ4 must equal l4o; taking the dot product of equation (27) with ẑ4 (noting that x̂5 ⋅ ẑ4 = 0) gives:

l4o = (n5o s5 + m5o c5) ŷ5 ⋅ ẑ4 + l5o ẑ5 ⋅ ẑ4   (28)
This last expression, (28), is a linear equation in s5 and c5 and may be re-expressed in polynomial form as follows:

b2 t² + b1 t + b0 = 0   (29)

where t is the tangent of half θ5 and bj is the coefficient of the jth power term. It can be concluded from equation (29), which is a second-degree polynomial, that the number of wrist configurations, Nwrist, which correspond to the required orientation of ẑ6f is ≤ 2.
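The reduction from the linear expression (28) to the quadratic (29) is the standard tangent-half-angle substitution; it can be sketched generically as follows (the function name and interface are ours, not from the chapter):

```python
import math

def solve_sin_cos(a, b, d, eps=1e-12):
    """Solve a*sin(t) + b*cos(t) = d for t.

    With u = tan(t/2), sin(t) = 2u/(1+u**2) and cos(t) = (1-u**2)/(1+u**2),
    the linear equation becomes the quadratic
        (d + b)*u**2 - 2*a*u + (d - b) = 0,
    so there are at most two solutions, as for theta_5 in the text."""
    b2, b1, b0 = d + b, -2.0 * a, d - b
    if abs(b2) < eps:                 # degenerate case: the quadratic is linear
        return [] if abs(b1) < eps else [2.0 * math.atan(-b0 / b1)]
    disc = b1 * b1 - 4.0 * b2 * b0
    if disc < 0.0:                    # no real root: orientation unreachable
        return []
    r = math.sqrt(disc)
    return [2.0 * math.atan(u) for u in ((-b1 + r) / (2.0 * b2),
                                         (-b1 - r) / (2.0 * b2))]
```

Each returned angle satisfies the original equation exactly, since the substitution is algebraically reversible away from t = π.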
Once θ5 is obtained, n4o and m4o can be worked out as follows:

n4o = n5o c5 − m5o s5
m4o = (n5o s5 + m5o c5) ŷ5 ⋅ ŷ4 + l5o ẑ5 ⋅ ŷ4   (30)
Finally, n4r , m4r , n4o and m4o are employed in equation (6) to obtain the corre-
sponding values of θ 4 .
From the analysis presented in this and the previous sections, it can be concluded that the maximum number of configurations of a spherical-wrist manipulator structure which correspond to any given position and orientation of the end-effector is eight. The actual number of configurations, N, is calculated by:

N = Narm × Nwrist   (31)

where Narm (≤ 4) is the number of arm configurations and Nwrist (≤ 2) is the number of wrist configurations.
Once the first five joints on the manipulator structure have performed con-
secutive rotations ( θ i zˆ i , where i = 1, 2,...5 ) to place the sixth joint axis at its
desired position and orientation, one final rotation ( θ 6 ẑ 6 ), will be performed to
align any side axis on the end-effector with its desired direction. The term
“side axis” here refers to any axis, on the end-effector Cartesian frame, whose
direction is influenced by rotations performed about ẑ 6 . This final part of the
inverse kinematic procedure is a straightforward application of the model presented in equations (3) to (6) to calculate the angle of rotation. However, it is worth noting here that this final step of the analysis is preceded by a direct kinematic procedure to calculate the updated direction of the side axis after the five consecutive rotations, θ5ẑ5, θ4ẑ4, θ3ẑ3, θ2ẑ2 and θ1ẑ1, have been performed.
Figure (5) depicts a flow chart that has been designed to explain the procedure
proposed here for the inverse position analysis of manipulators. For spherical-
wrist manipulators, the procedure produces eight sets of solutions in a non-
iterative fashion. However, for calibrated robotic structures, the eight solu-
tions are obtained in a simple iterative approach which does not involve any
Jacobian matrix computations. By virtue of the concepts presented, the vari-
ous solutions may be calculated simultaneously if parallel computing facilities
are available.
In the present approach, the arm is assigned the task of positioning any point
on the sixth joint-axis at its required spatial location. The closest point on the
sixth-joint axis to the fifth joint-axis may be conveniently selected for this pur-
pose. This point will be referred to in the following discussion as pi0 . The four
joint-displacement solutions which correspond to this positioning task are
therefore obtained using the models presented above and saved in four three-
element vectors, v j , where j=1,2,3 and 4.
At arm configuration number j, the wrist joints align the sixth joint-axis with its required final orientation, as previously described, and the two corresponding solutions are accordingly obtained and saved in a pair of two-element vectors, wjk, where k may assume the values 1 or 2. At this point, a set of eight joint-displacement solutions has been obtained. If the robot were of the spherical-wrist type, these solutions would accurately represent the required joint-displacements and no iterations would be required.
Calibrated robots, however, are not likely to have their last three joint-axes intersecting at a common point (i.e. the spherical-wrist property is lost). In this case, the motions performed by the wrist joints will displace the point which was previously positioned by the arm to eight new locations, p0jk, corresponding to the wrist solutions obtained.
At location number jk, the instantaneous position vector, p0jk, of the displaced point may be calculated using a suitable direct kinematic procedure and compared to the required position vector, p0n, where the net radial error, ejk, is calculated as follows:

ejk = | p0jk − p0n |   (32)
Figure 5. Flow chart of the proposed inverse position procedure: for each solution branch jk, the position p0jk is obtained and the error e is checked against the allowable tolerance ε; while e > ε, the joint displacements are updated (θm = θm−1 + θnew, m = m + 1) and the position recomputed, until e ≤ ε, after which θ6 is calculated and the procedure stops.
If the calculated value for ejk does not fall within an allowable error zone, the calculations proceed such that, at iteration number m, the arm sets out from the most updated configuration number jk(m−1) to position point p(m−1)jk in the required location, p0n. The four correction solutions obtained at each iteration may be stored in four three-element vectors, v_m^jk. The vector v_jk^n, which corresponds to the jk-solution of the arm after M iterations, is calculated as:

v_jk^n = v_j + Σ_{m=1}^{M} v_m^jk   (33)
The vector w_jk^n, which corresponds to the jk-solution of the wrist, is calculated as:

w_jk^n = w_jk + Σ_{m=1}^{M} w_m^jk   (34)
Once the jk-solution for the first five joint-displacements has been obtained,
the corresponding displacement of the last joint may simply be calculated.
The iterative technique presented here utilises the physical kinematic behaviour of the manipulator joints, and therefore fast, singularity-proof convergence may be assured. The technique does not require initial guesses to be introduced into the model.
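The accumulation in equations (33) and (34) can be illustrated with a deliberately simplified toy model (entirely ours, not the manipulator model): an "arm" that positions a point exactly, and a "wrist" that then displaces it by a position-dependent offset; each pass adds a correction term to the running sum.

```python
def iterate_arm_wrist(target, wrist_offset, tol=1e-9, max_iter=50):
    """Toy version of the corrective iteration: the arm places a point
    exactly at its commanded location, the wrist then displaces it by
    wrist_offset(v); each pass adds a correction, so the final arm
    solution is an accumulated sum, as in equation (33)."""
    v = list(target)                                        # initial arm solution
    for m in range(max_iter):
        p = [vi + oi for vi, oi in zip(v, wrist_offset(v))]  # point after wrist
        err = sum((pi - ti) ** 2 for pi, ti in zip(p, target)) ** 0.5
        if err <= tol:
            return v, m                                     # converged solution
        # add the correction Delta-v_m that re-aims the displaced point
        v = [vi + (ti - pi) for vi, pi, ti in zip(v, p, target)]
    return v, max_iter

# Example: a wrist that adds a 10% position-dependent offset; the loop
# converges geometrically without any Jacobian computation.
v, iters = iterate_arm_wrist([4.0, 3.0], lambda v: [0.1 * x for x in v])
```

The geometric error reduction mirrors the behaviour reported for the numerical example below, where only a handful of iterations were needed.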
In the next section a numerical example is given where the inverse position so-
lutions will be produced for a PUMA-type robot of both calibrated and ideal
structures.
9. Numerical Example
A PUMA-like manipulator with six revolute joints is selected for the example.
The dimensions of the spherical-wrist structure of the manipulator are given in
Table (1). The dimensions of the non-spherical-wrist version of the same manipulator are similar to those given in Table (1), except for the locations of the fourth and fifth joint-axes, which were displaced to (−128.0, 818.51, 205.04) mm and (−130.5, 802.0, 180.4) mm respectively.
In both cases, the initial and final locations of the Tool Centre Point (TCP) of the end-effector are given with respect to the base coordinates as (−120.54, 1208.36, 175.095) mm and (−400.0, −400.0, 1009.0) mm respectively. The initial and final orientations of the end-effector are given in terms of an Euler ZYZ system as (88.5733, 89.9604, 89.722) degrees and (120.0, −20.0, 150.0) degrees respectively.
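For reference, the ZYZ Euler convention used for these orientations composes three elementary rotations, R = Rz(φ) Ry(θ) Rz(ψ); a minimal sketch (the helper function is ours, not from the chapter):

```python
import math

def rot_zyz(phi, theta, psi):
    """Rotation matrix R = Rz(phi) * Ry(theta) * Rz(psi), with the
    angles given in degrees (Euler ZYZ convention)."""
    def rz(a):
        c, s = math.cos(a), math.sin(a)
        return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    def ry(a):
        c, s = math.cos(a), math.sin(a)
        return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    p, t, s = (math.radians(x) for x in (phi, theta, psi))
    return mul(mul(rz(p), ry(t)), rz(s))

# e.g. the final orientation of the numerical example:
R_final = rot_zyz(120.0, -20.0, 150.0)
```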
The models proposed in this paper were used to calculate the inverse position
solutions for both the spherical-wrist and general manipulator structures and
the results are displayed in Tables (2) and (3) respectively. The angular dis-
placements given in these tables are in degrees.
            θ1       θ2       θ3       θ4       θ5       θ6
Sol. No. 1  -34.45  -163.09   64.67    86.12   -36.06  -130.97
Sol. No. 2  -34.45  -163.09   64.67   -85.75    31.73    58.63
Sol. No. 3  -47.14  -104.39  -72.40    19.01   -84.81   154.74
Sol. No. 4  -47.14  -104.39  -72.40  -160.03    80.47   -20.04
Sol. No. 5  104.39   -72.90   64.57    24.26    85.22     2.38
Sol. No. 6  104.39   -72.90   64.57  -156.27   -89.56   177.24
Sol. No. 7  117.07   -11.05  -72.30    68.50    28.82    68.23
Sol. No. 8  117.07   -11.05  -72.30  -120.58   -33.16  -122.18

Table 2. Inverse Position Solutions for the Spherical-wrist Robot.
            θ1       θ2       θ3       θ4       θ5       θ6
Sol. No. 1  -39.20  -162.93   66.79    86.50   -36.82  -125.19
Sol. No. 2  -30.78  -162.92   70.95   -73.54    32.47    68.51
Sol. No. 3  -48.12  -106.90  -68.18    19.18   -83.65   156.37
Sol. No. 4  -46.46  -103.38  -65.35  -158.64    72.85   -17.25
Sol. No. 5  105.11   -73.14   67.49    24.37    87.28     0.42
Sol. No. 6  103.60   -70.98   72.08  -154.89   -98.19   173.67
Sol. No. 7  120.24   -11.00  -68.57    65.31    29.99    60.59
Sol. No. 8  114.00   -11.34  -62.96  -133.30   -37.94  -134.91
Table 3. Inverse Position Solutions for the Non-spherical-wrist Robot.
The solutions obtained for the spherical-wrist manipulator did not involve iterations at all, whereas a maximum of 4 iterations was needed for the non-spherical-wrist manipulator. In most cases the number of iterations was 2, except for the second and first solutions, where it was 3 and 4 respectively. This demonstrates the numerical efficiency of the proposed models.
10. Conclusions
The work presented in this paper introduces a technique for inverse position
analysis of revolute-joint manipulators. The analysis developed results in simplified solutions for both the arm and the wrist subassemblies. These solutions are obtained in the form of polynomials whose coefficients can be simply calculated for a given manipulator structure. The technique can be used to obtain inverse kinematic solutions for both spherical-wrist and calibrated manipulator structures.
The technique yields multiple sets of joint-motor displacements which correspond to a given pose of the end-effector. This enables the trajectory designer to select the joint-trajectory which best fits a desired manipulator task.
Appendix A
xi+1 = ẑi × ẑi+1   (A.1)
If ẑi × ẑi+1 = 0, then  xi+1 = (p0(i+1) − p0i) − ẑi [ (p0(i+1) − p0i) ⋅ ẑi ]   (A.2)
x̂i+1 = xi+1 / | xi+1 |   (A.3)
[Figure: two consecutive joint axes ẑi and ẑi+1, their common normal x̂i+1, the offset ai+1, and the position vectors p0i, p0(i+1), pi and p(i+1)i, shown with respect to the base coordinates x̂0 ŷ0 ẑ0.]
ai+1 = (p0(i+1) − p0i) ⋅ x̂i+1   (A.4)
pi = p0i + bi ẑi   (A.5)

where

bi = [ (ẑi+1 × x̂i+1) ⋅ (p0(i+1) − p0i) ] / [ (ẑi+1 × x̂i+1) ⋅ ẑi ]   (A.6)
pi(i+1) = pi + ai+1 x̂i+1   (A.8)
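Equations (A.1)-(A.4) translate directly into code. The sketch below (function names are ours) computes the common-normal direction and the offset ai+1, including the parallel-axis fallback of (A.2); axis directions are assumed to be unit vectors:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a): return math.sqrt(dot(a, a))

def common_normal(z_i, z_j, p_i, p_j, eps=1e-9):
    """Common-normal direction between two joint axes, per (A.1)-(A.3):
    the cross product of the (unit) axis directions, or, for parallel
    axes, the component of p0(i+1) - p0i perpendicular to the axis."""
    x = cross(z_i, z_j)                                  # (A.1)
    if norm(x) < eps:                                    # parallel axes: (A.2)
        d = sub(p_j, p_i)
        x = sub(d, tuple(dot(d, z_i) * c for c in z_i))
    n = norm(x)
    return tuple(c / n for c in x)                       # (A.3)

def link_length(z_i, z_j, p_i, p_j):
    """Common-normal offset a(i+1), as in equation (A.4)."""
    return dot(sub(p_j, p_i), common_normal(z_i, z_j, p_i, p_j))
```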
Appendix B
Claim:
Any two spatial circles intersect at two points if, and only if, their axes lie in
one and the same plane.
Proof:
Figure (B) depicts two spatial circles, C1 and C2, and their axes, ẑ1 and ẑ 2 re-
spectively. The circles intersect one another at two points, s1 and s2 . To prove
that ẑ1 and ẑ 2 must lie in one and the same plane, the centres of the two cir-
cles, Pc1 and Pc2, are connected to the point, s3 , which divides the line s1s2 into
two equal parts.
Figure B. Two spatial circles, C1 and C2, with centres Pc1 and Pc2 and axes ẑ1 and ẑ2, intersecting at the points s1 and s2; s3 is the midpoint of s1s2.

• Since Pc1s1 and Pc1s2 are both radii of C1, the triangle Pc1s1s2 is isosceles and its median Pc1s3 is perpendicular to s1s2. Also, ẑ1 is normal to the plane of C1, which contains s1s2, and hence ẑ1 is perpendicular to s1s2.
• Therefore s1s2 is perpendicular to the plane which contains the two intersecting lines, ẑ1 and Pc1s3. Let this plane be referred to as PN1.
• Similarly, it can be established that s1s2 is also perpendicular to the plane which contains the two intersecting lines, ẑ2 and Pc2s3. This plane may be referred to as PN2.
• Since PN1 and PN2 are both perpendicular to the same line, s1s2, it may be concluded that PN1 is parallel to PN2.
• However, PN1 and PN2 share one common point, s3 .
• Therefore, the two planes coincide and ẑ 1 , Pc1s3 , ẑ 2 and Pc 2 s3 must all lie
in one and the same plane.
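The coplanarity of the two axes can also be checked numerically via a scalar triple product (a small illustration; the vectors and names are ours, not from the chapter):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axes_coplanar(pc1, z1, pc2, z2, tol=1e-9):
    """Two lines (here, circle axes through the centres Pc1 and Pc2 with
    directions z1 and z2) lie in one plane iff the scalar triple product
    of the two directions and the vector joining the centres vanishes."""
    d = tuple(q - p for p, q in zip(pc1, pc2))
    return abs(dot(d, cross(z1, z2))) < tol
```

For instance, two circles of radius √2 that intersect at s1 = (0, 1, 0) and s2 = (0, −1, 0), with centres (−1, 0, 0) and (0, 0, 1) and axes (0, 0, 1) and (1, 0, 0), pass this test, consistent with the claim.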
11. References
Her, M.-G, Yen, C.-Y., Hung, Y.-C. and Karkoub, M. (2002), Approximating a
Robot Inverse Kinematics Solution Using Fuzzy Logic Tuned by Genetic
Algorithms, Int. J. Adv. Manuf. Technol, Vol. 20, pp. 372-380.
Kohli, D. and Osvatic, M. (1993), Inverse Kinematics of General 6R and 5R,P
Spatial Manipulators, ASME J. Mech. Des., Vol. 115, Dec. 1993, pp. 922-
930.
Lee, H.-Y. and Liang, C. G. (1988a), A New Vector Theory for the Analysis of
Spatial Mechanisms, Mech. Mach. Theory, Vol. 23, No. 13, pp. 209-217.
Lee, H.-Y. and Liang, C. G. (1988b), Displacement Analysis of the General Spa-
tial 7-Link 7R Mechanism, Mech. Mach. Theory, Vol. 23, No. 13, pp. 219-
226.
Lee, H.-Y., Reinholtz, C. F. (1996), Inverse Kinematics of Serial-Chain Manipu-
lators, J. Mech. Des.-Trans. ASME, Vol. 118, Sept. 1996, pp. 396-404.
Mahalingam, S. and Sharan (1987), A., The Nonlinear Displacement Analysis
of Robotic Manipulators Using the Complex Optimisation Method, Mech.
Mach. Theory, Vol. 22, No. 1, pp. 89-95.
Manocha, D. and Canny, J. F. (1992), Real Time Inverse Kinematics for General
6R Manipulators, Proceedings of IEEE Conf. on Robotics and Automation, pp.
383-389, Nice-France, May 1992.
Manseur, R. and Doty, K. (1992a), A Complete Kinematic Analysis of Four-
Revolute-Axis Robot Manipulators, Mech. Mach. Theory, Vol. 27, No. 5, pp.
575-586.
Manseur, R. and Doty, K. (1992b), Fast Inverse Kinematics of Five-Revolute-
Axis Robot Manipulators, Mech. Mach. Theory, Vol. 27, No. 5, pp. 587-597.
Manseur, R. and Doty, K. (1989), A Robot Manipulator with 16 Real Inverse
Kinematic Solution Sets, Int. J. Robot Res, Vol. 8, No. 5, pp. 75-79.
Manseur, R. and Doty, K. (1988), Fast Algorithm for Inverse Kinematic Analy-
sis of Robot Manipulators, Int. J. Robot Res, Vol. 7, No. 3, pp. 52-63.
Manseur, R. and Doty, K. (1996), Structural Kinematics of 6-Revolute-Axis Ro-
bot Manipulators, Mech. Mach. Theory, Vol. 31, No. 5, pp. 647-657.
Pieper, D. L. and Roth, B. (1969), The Kinematics of Manipulators Under Com-
puter Control, Proceedings of 2nd Int. Congress on the Theory of Machines and
Mechanisms, Vol. 2, Zakopane, Poland, pp. 159–168.
Poon, J. K. and Lawrence, P. D. (1988), Manipulator Inverse Kinematics Based
on Joint Functions, Proceedings of IEEE Conf. on Robotics and Automation,
Philadelphia, pp. 669-674.
Pradeep, A. K., Yoder, P. J. and Mukundan, R. (1989), On the Use of Dual-
Matrix Exponentials in Robotic Kinematic, Int. J. Robot Res, Vol. 8, No. 54,
pp. 57-66.
Raghavan, M. and Roth, B. (1989), Kinematic Analysis of the 6R Manipulator
of General Geometry, Proceedings of the 5th Int. Symposium on Robotics Re-
search, Tokyo, pp. 263-269.
1. Introduction
Cable-based robots build upon mechanisms that not only use rigid links in their structures but also utilize unilateral force elements, such as cables, to deliver the desired motion. Cables may be connected to active winches to provide a variable length and hence actuate the mechanism, or may serve only to provide a kinematic constraint that eliminates an undesired motion of the end-effector. Manipulators in which the cables have variable lengths are usually called cable-driven or wire-driven manipulators.
Cable-based manipulators possess several advantages over conventional serial/parallel link manipulators, including:
1. Large workspace: An active winch can provide a large range of length
change on the cables at a low cost. This facilitates building manipulators
for very large working spaces which cannot be obtained by other robots.
2. Low inertia: Materials provide their highest strength-to-mass ratio when they are under tensile loading. Using cables, which can only be in tension, maximizes the use of material strength and therefore reduces the mass and inertia of the manipulator. Low inertia is desirable in many applications, including high speed/acceleration robotics.
3. Simplicity in structure: Cables simplify the robot structure by utilizing
bending flexibility as kinematic joints and reducing the fabrication cost by
minimizing the machining process.
4. Reconfigurability and transportability: Winch assemblies can be simply re-
located to reconfigure and adjust the workspace of a cable-driven manipu-
lator. The ease of assembly/disassembly of these manipulators also facili-
tates their transportation and quick setup.
5. Fully remote actuation: Using a fully cable-driven manipulator, all the ac-
tuators and sensitive parts are located away from the end-effector and the
actual working area. Such manipulators best suit harsh or hazardous envi-
ronments.
It should be also noted that using cable structures in robot manipulators is ac-
companied by theoretical and technical difficulties. The unilateral force of ca-
bles complicates the workspace, kinematics and dynamics analysis. The con-
straint of tensile force in all cables should be incorporated into the design and
control procedure otherwise, the manipulator will collapse. Also, the low stiff-
ness of the cables compared to rigid links may result in undesired vibrations
requiring compensation by a proper control scheme.
As mentioned before, maintaining positive tension (tensile force) in all
the cables is an essential requirement for the rigidity of a cable-based manipu-
lator and hence, this property should be studied thoroughly before the cable-
based manipulator can be used in any real application. In other words, a cable-
based manipulator can be treated as a rigid link manipulator only if all the ca-
bles are in tension. As a result, most of the researchers’ efforts on this category
of robot manipulators have been spent on analyzing and proving the rigidity
of the cable-based structures.
The general problem of rigidity in cable-based manipulators has been studied
in the literature using different approaches and terminologies such as control-
lable workspace (Verhoeven & Hiller, 2000), dynamic workspace (Barette &
Gosselin, 2005), wrench closure (Gouttefarde & Gosselin, 2006), manipulability
(Gallina & Rosati, 2002), fully constrained configuration (Roberts et al., 1998) and
tensionability (Landsberger & Shanmugasundram, 1992). General formulation
of this problem can be found in the works by (Ming & Higuchi, 1994), (Tadokoro et al., 1996), and (Verhoeven et al., 1998). They showed that for the rigid-
ity of a cable-based manipulator, it is necessary but not sufficient to have either
actuation redundancy or separate external loading sources to produce tension
in all cables. Ming (Ming & Higuchi 1994a,b) calls the first group Completely
Restrained Positioning Mechanisms, CRPM, in which all the cables can be
made taut with no external load while in an IRPM (Incompletely Restrained
Positioning Mechanism), the manipulator cannot maintain its own rigidity and
hence needs external load to make all cables taut.
The useful workspace of a cable-based manipulator is a subset of its geometri-
cal workspace in which the manipulator can be rigidified (either by actuation
redundancy or external loading). Determination of this workspace is the most
essential step in the design and operation of a cable-based manipulator and is
usually done by numerical search after the synthesis of the manipulator is
done. Examples of this approach can be found in (Kawamura et al., 1995; Fer-
raresi, 2004; Ogahara, 2003; So-Ryeok et al., 2005a,b; Pusey et al., 2004). In this
approach, if the workspace found through the search does not satisfy the de-
sign requirements, the synthesis of the manipulator and the workspace deter-
mination should be repeated. As a result and in order to avoid trial and error
in the design, it is desired to have cable-based manipulators that can be rigidi-
fied everywhere in their geometrical workspace or at least their workspace can
be analytically expressed. In this regard, a geometrical explanation for the
Cable-based Robot Manipulators with Translational Degrees of Freedom
1. Base: The fixed part of the manipulator, to which the global coordinate system OXYZ is attached.
2. End-effector: The moving body which carries the moving frame O ′X ′Y ′Z ′ .
3. Cables: The flexible tendon elements with negligible mass and diameter
connected from one end to the end-effector at points Pi (i = 1,2,..., m) and
pulled from the other end at Qi. The pulling actuator produces tension τi
inside the cable and can be simply a winch which pulls and winds the
cable or a separate mechanism that moves the cable's end (Qi) without
changing its length. Unit vectors uˆ i (i = 1,2,..., m) determine the direction of
the cables and point towards the base. Depending on the structure of the
manipulator, there may be some extra pulleys to guide the cables. The
number of cables, m, is equal to the dimension of the motion space of the
end-effector. Therefore, m is three and six for planar and spatial mecha-
nisms, respectively.
4. Spine: The element that produces a force between the base and the end-effector in order to keep all the cables in tension. The spine can be an active element which generates a desired force. It can also be a passive element, such as a cylinder energized by compressed air or a compression spring designed to provide the force required to maintain tension in the cables. The direction of the spine is shown by the unit vector ŵ pointing towards the end-effector.
It is important to note that both rigidity and tensionability deal with the exis-
tence of the static equilibrium condition for the manipulator in which all the
cables are in tension and hence, the manipulator does not collapse. However,
they do not explain the nature of the equilibrium. Considering the stiffness of
the manipulator, it may be rigid (meaning that it is in static equilibrium with
all cables in tension) although the equilibrium might be an unstable one which
implies that any small disturbance on the end-effector results in the collapse of
the manipulator. It is known that the stability of the manipulator from the
stiffness point of view is not specific to cable-based manipulators; however, it
is shown in (Behzadipour & Khajepour, 2006) that the cable tensions may have
a significant effect on the stiffness and even destabilization of the manipulator.
3. Tensionability
The goal of this section is to introduce an approach for the evaluation of ten-
sionability in a cable-based manipulator. According to the definition, the ten-
sionability of a manipulator must be evaluated for any arbitrary external load.
In the following, a theorem is introduced which gives a sufficient condition for
the manipulator to be tensionable.
The core idea of this theorem is to show that if positive tension (tensile force) can be generated in all the cables to any desired extent while static equilibrium is satisfied in the absence of external loads, then the manipulator can be rigidified under any arbitrary external load by having enough pretension in the cables.
Theorem 1.
A kinematically non-singular configuration of a cable-based manipulator is tensionable if, for an arbitrary positive spine force Fs (compressive force), the static equilibrium equations of the manipulator have a solution with all cable tensions τi positive:

Σi τi ûi + Fs = 0   (1)

Σi ri × τi ûi = 0   (2)
This theorem simply states that if the manipulator can stay rigid and statically
balanced under an arbitrary compressive spine force, it is tensionable and thus
can stay rigid for any external force and torque by choosing a large enough
spine force.
Proof:
For the proof, it will be shown that such a manipulator can be made rigid for
any arbitrary external load. The balance of forces for an arbitrary external force
Fe applied at O ′ and moment Me is:
Σi τi ûi + Fs + Fe = 0   (3)

Σi ri × τi ûi + Me = 0   (4)
The above equations have a set of nontrivial solutions for the τi's since the manipulator is assumed to be kinematically non-singular. Since the above set of equations is linear w.r.t. the τi's, superposition can be applied to obtain the following two sets of equations:

Σi τi^e ûi = −Fe ,   Σi ri × τi^e ûi = −Me   (5)

Σi τi^s ûi = −Fs ,   Σi ri × τi^s ûi = 0   (6)
where τi^s + τi^e = τi for i = 1, 2, ..., m. In this formulation, the τi^s's are the cable forces that balance the spine force and are positive by assumption, while the τi^e's are the forces in the cables (positive or negative) due to the external force Fe and moment Me. If all the τi^e's are positive, then the τi's will be positive too and the cable-based manipulator is rigid. Otherwise, let −α² = mini(τi^e), i.e. the most negative tension in the cables produced by the external load. Using the linearity of the static equilibrium equations in Eq. (6), the cable tensions τi^s can be increased by increasing fs such that mini(τi^s) > α². As a result we have:

τi = τi^s + τi^e ≥ mini(τi^s) − α² > 0   (7)
Therefore, by increasing the spine force, the rigidity can be obtained and
hence, the manipulator is tensionable.
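The condition in Theorem 1 can be checked numerically. The planar (m = 3) sketch below, using a toy geometry of our own rather than a design from the chapter, solves equations (1) and (2) for the cable tensions under a given spine force and tests their signs:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for c in range(k, 4):
                M[r][c] -= f * M[k][c]
    x = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        x[k] = (M[k][3] - sum(M[k][c] * x[c] for c in range(k + 1, 3))) / M[k][k]
    return x

def cable_tensions(u, r, f_spine):
    """Planar form of equilibrium equations (1) and (2): force balance
    in x and y plus the z-component of the moment balance about O'.
    An all-positive solution indicates a tensionable pose."""
    A = [[u[i][0] for i in range(3)],                                # x forces
         [u[i][1] for i in range(3)],                                # y forces
         [r[i][0] * u[i][1] - r[i][1] * u[i][0] for i in range(3)]]  # moments
    return solve3(A, [-f_spine[0], -f_spine[1], 0.0])
```

With three cable directions ûi pointing towards the base and a compressive spine force, positive tensions confirm the sufficient condition of the theorem for that pose.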
The above theorem gives a sufficient condition for tensionability, meaning that there might be situations in which the spine force cannot produce tension in all cables but the manipulator can still be rigidified. In those cases, sources other than the spine may be used to generate tension in the cables. An example of such cases will be studied in Section 5.1.
As a result from theorem 1, to evaluate the tensionability, instead of dealing
with external load on the end-effector, we only need to show that the static
equilibrium of the end-effector for an arbitrary spine force can be obtained by
tensile forces in all of the cables. This will ensure that the manipulator is ten-
sionable and thus can theoretically stand any external force and moment at the
end-effector. By “theoretically” we mean that the required spine force and ca-
ble tensions are finite, although these forces may not be feasible due to the
practical constraints. The above approach is used later in this paper to evaluate
the tensionability of the new cable-based manipulators.
In the rest of this paper, some new designs of reduced-DoF (Degree of Freedom) cable-based manipulators are introduced. The target application of these manipulators is high-speed pick-and-place operations in which small objects (less than 1 kg) are moved at high speeds (more than 100 cycles per minute). High speed and acceleration require low inertia, which makes cable-based manipulators potential candidates. However, most current spatial cable-based manipulators have 6 DoF, while in pick-and-place applications three translational axes of motion, with a possible rotational DoF for reorienting the object, are sufficient. In the designs presented in this work, cables are used to constrain the rotational motion of the end-effector in order to provide a pure translational motion. A more complete set of these designs can be found in (Khajepour et al., 2003). One of these designs, DeltaBot, has been prototyped at the University of Waterloo (Dekker et al., 2006). It can perform up to 150 cycles/minute of standard pick-and-place on small objects (less than 500 g).
4. BetaBot
In BetaBot, shown in Fig. 2, the upper triangle is the end-effector and the bot-
tom one is the base. Three pairs of parallel cables are attached to the end-
effector and wound by three winches after passing through guide holes on the
winch frames. The winches are attached to the base. Each pair of cables forms
a parallelogram, such as ABCD, as shown in the same figure. It is known (Clavel, 1991) that three parallelograms can eliminate the rotational motion of the
end-effector. The spine is connected to the end-effector and base using two
ball-and-sockets or one universal joint and one ball-and-socket. Therefore, the
spine imposes no kinematic constraint on the end-effector.
The equivalent rigid link manipulator for BetaBot is obtained by replacing each
cable with a slider with two ball-and-sockets at the ends. In this equivalent
manipulator, there are 13 bodies, 12 ball-and-socket and 6 prismatic joints. The
Gruebler equation gives 13×6 − 12×3 − 6×5 = 12 degrees of freedom. There are
6 trivial DoF's due to the twist of the sliders and there is also one constraint on
each pair of sliders which forces their displacements to be the same (because
each pair of cables is wound using a single winch). Therefore, the end-effector
has 12 − 6 − 3 = 3 DoF's which are translational.
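The mobility count above can be reproduced with a one-line Gruebler helper (a sketch; the function name is ours):

```python
def gruebler_spatial(moving_bodies, joint_freedoms):
    """Spatial Gruebler/Kutzbach count: 6 DoF per moving body minus
    the (6 - f) constraints removed by each joint of freedom f."""
    return 6 * moving_bodies - sum(6 - f for f in joint_freedoms)

# BetaBot's rigid-link equivalent: 13 moving bodies, 12 ball-and-socket
# joints (f = 3) and 6 prismatic sliders (f = 1), as in the text.
raw = gruebler_spatial(13, [3] * 12 + [1] * 6)

# Subtract the 6 trivial slider-twist DoFs and the 3 equal-length
# couplings between paired sliders (each pair shares one winch):
effective = raw - 6 - 3
```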
Since the size of the end-effector plays no role in the kinematics of BetaBot, the
end-effector can be shrunk to a point with three winches moving towards the
center of the base accordingly. As a result, the kinematics of BetaBot becomes
identical to that of a tripod (Mianowski & Nazarczuk, 1990), or the Tsai ma-
nipulator (Tsai, 1996).
The geometrical workspace of BetaBot is only limited by the maximum and
minimum lengths of the spine assuming that there are no limitations on the
cables' lengths and therefore, the shape of the workspace is a half sphere above
the base whose radius is determined by the maximum length of the spine. It is
clear that there is no possibility of any interference between the cables because
of the non-rotating motion of the end-effector.
Cable-based Robot Manipulators with Translational Degrees of Freedom 219
Condition 2. The guide holes on the winch frames all lie in one plane and form a triangle called the base triangle. This triangle is similar (and parallel) to the end-effector triangle but larger (the ratios of their distances from the triangle vertices are the same). As a result, the end-effector along with the cables forms a convex region, or polyhedron.
Condition 3. No two cables ever become in-line.
To prove the tensionability, we use the theorem given in the last section.
Therefore, for BetaBot to be tensionable, an arbitrary positive spine force
should be balanced by positive cable tensions. For the proof, a geometrical ap-
proach is employed which shows that the solution of static equilibrium equa-
tions consists of a set of positive tensions. Since the proof is lengthy, it is pre-
sented by four lemmas and one theorem.
Lemma 1.
If BetaBot meets the above-mentioned conditions, then the three planes, each of which is formed by one pair of parallel cables, intersect at one point, denoted R (see Fig. 4).
Proof.
In Fig. 4, consider the plane that includes P1, P2, B1 and B2: if B1P1 and B2P2 are extended, they meet at a point called R and form the triangle ΔB1RB2 (otherwise they would be parallel, implying that the end-effector and base triangles are equal, contradicting Condition 2). In this triangle we have:

P1P2 ∥ B1B2  ⟹  RP2 / RB2 = P1P2 / B1B2   (8)
Cable-based Robot Manipulators with Translational Degrees of Freedom 221
Figure 4. The three planes formed by parallel cables and the spine meet at a point
called R.
Now, consider the second plane that includes P2, P3, B2 and B3: if B2P2 and B3P3 are extended, they meet at a point called R′. Note that R′ and R are both on the line of P2B2. In triangle ΔB2R′B3 we have:

P2P3 ∥ B2B3  ⟹  R′P2 / R′B2 = P2P3 / B2B3   (9)
Also, we know from Condition 2 that ΔP1P2P3 is similar to ΔB1B2B3 and hence:

P1P2 / B1B2 = P2P3 / B2B3   (10)

Comparing equations (8), (9) and (10) then gives:

R′P2 / R′B2 = RP2 / RB2   (11)
Considering that R and R′ lie on the same line, Eq. (11) states that R and R′ coincide, and similar reasoning shows that the third plane also passes through R.
Lemma 2.
In BetaBot, the direction of the spine passes through point R as found in
Lemma 1.
Proof.
Assume point R, as given in Fig. 4, is connected to O′ and the line is extended to intersect the base at a point O″ (not shown in the figure). It must be shown that O″ coincides with O. This follows from the relations below:
In ΔRO″B1, since O′P1 ∥ O″B1:   O′P1/O″B1 = RP1/RB1        (12)

In ΔRB1B2, since P1P2 ∥ B1B2:   RP1/RB1 = RP2/RB2        (13)

In ΔRO″B2, since O′P2 ∥ O″B2:   O′P2/O″B2 = RP2/RB2        (14)
As a result, O′P1/O″B1 = O′P2/O″B2, and a similar relation holds for the third vertex; hence O″ coincides with O, i.e. the spine direction passes through R.

Figure 5. a) The polyhedron formed by the cables in BetaBot b) The cone of the cable forces
Lemma 3.
According to Fig. 5a, the half line Rl which starts from R in the direction of û1
intersects line segment P1P2 at E.
Proof.
It is obvious that half-line Rl intersects the line of P1P2; otherwise they would be parallel, which would imply that the first pair of cables is in-line, contradicting Condition 3. The main effort of this proof is to show that the intersection occurs between P1 and P2, i.e. E ∈ P1P2. It is sufficient to show that E belongs to the polyhedron of Fig. 5a. Let E be the position vector of E with respect to the OXYZ reference frame. It is known that:
E = R + α û1        (19)

where R is the position vector of R and α is a positive real number. The polyhedron of Fig. 5a is represented as a set of points in Cartesian space by:

Φ = { φ ∈ R³ | n̂ij · φ ≤ dij,  i, j = 1, 2, 3,  i ≤ j }        (20)
Therefore, to show E ∈ Φ, it must be verified that:

n̂ij · E ≤ dij,  i, j = 1, 2, 3,  i ≤ j        (21)

For i = j = 1:

n̂11 · E = n̂11 · (R + α û1) = n̂11 · R + α n̂11 · û1        (22)

For i = j = 2:

n̂22 · E = n̂22 · (R + α û1) = n̂22 · R + α n̂22 · û1        (23)
which requires α n̂22 · û1 ≤ 0. To show this, an auxiliary point with position vector P2 + α û1 is considered. This point lies on cable 2 according to Fig. 5a and hence belongs to the polyhedron Φ, so in particular it satisfies:

n̂22 · (P2 + α û1) ≤ d22        (24)

P2 is a vertex of the end-effector triangle and thus P2 ∈ S22, which implies that n̂22 · P2 = d22; substituting this into inequality (24) gives α n̂22 · û1 ≤ 0, as required. A similar argument applies for i = j = 3, taking P1 + α û1 as the auxiliary point.
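The half-space description of Eq. (20) lends itself to a direct numerical membership test. The Python sketch below checks a point against a list of (normal, offset) face pairs; the unit-cube faces are hypothetical stand-in values, not the actual BetaBot geometry.

```python
def in_polyhedron(phi, faces, tol=1e-9):
    """Membership test for a convex polyhedron given as half-spaces
    (Eq. 20): phi is inside iff n_ij . phi <= d_ij for every face."""
    return all(sum(n[k] * phi[k] for k in range(3)) <= d + tol
               for n, d in faces)

# Hypothetical stand-in polyhedron: the unit cube 0 <= x, y, z <= 1
faces = [((1, 0, 0), 1), ((-1, 0, 0), 0),
         ((0, 1, 0), 1), ((0, -1, 0), 0),
         ((0, 0, 1), 1), ((0, 0, -1), 0)]
```

A point such as (0.5, 0.5, 0.5) passes the test, while (1.5, 0.5, 0.5) violates the first half-space.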
For i = 1 and j = 2, the inequality to be satisfied is:

n̂12 · E ≤ d12, or n̂12 · (R + α û1) ≤ d12, or n̂12 · R + α n̂12 · û1 ≤ d12        (27)

Similarly, for i = 2 and j = 3:

n̂23 · E = n̂23 · (R + α û1) ≤ d23        (28)

n̂23 · (W + α û1) ≤ d23        (29)
Lemma 4.
According to the force diagram of Fig. 5b, the spine force is balanced by positive σi, i = 1, 2, 3.
Proof.
We first show that the direction of the spine force lies inside the cone formed by the cable direction vectors û1, û2 and û3. For this purpose, we use Condition 4, which states that the spine never intersects any face of the polyhedron of cables (Fig. 5a). Considering that the faces of the cone of Fig. 5b are parallel to S12, S13 and S23, respectively, it follows that Fs never touches the faces of the cone and hence lies inside the cone. Based on the definition of a convex cone (Yamaguchi, 2002), the cone of Fig. 5b can be expressed as:
Ψ = { ψ ∈ R³ | ψ = σ1 û1 + σ2 û2 + σ3 û3,  σ1, σ2, σ3 ≥ 0 }        (31)
Now, since Fs always lies inside the cone, it can be regarded as the position vector of a point inside the cone. As a result, the definition of the cone given in Eq. (31) states that:

−Fs = σ1 û1 + σ2 û2 + σ3 û3,  σ1, σ2, σ3 ≥ 0        (32)
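Eq. (32) can be checked numerically: decomposing −Fs in the basis of the three cable directions and inspecting the signs of the resulting σi tells whether the spine force lies inside the cone. The following Python sketch uses a hypothetical symmetric cable layout; the tilt angle and force values are illustrative assumptions, not data from the chapter.

```python
import math

def det3(M):
    # Determinant of a 3x3 matrix (expansion along the first row)
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cone_coefficients(u1, u2, u3, Fs):
    """Solve -Fs = sigma1*u1 + sigma2*u2 + sigma3*u3 (Eq. 32) by
    Cramer's rule; Fs is inside the cone iff all sigmas are >= 0."""
    A = [[u1[i], u2[i], u3[i]] for i in range(3)]
    b = [-Fs[i] for i in range(3)]
    d = det3(A)
    sigmas = []
    for k in range(3):
        Ak = [[b[i] if j == k else A[i][j] for j in range(3)]
              for i in range(3)]
        sigmas.append(det3(Ak) / d)
    return sigmas

# Hypothetical layout: three unit cable vectors tilted 0.4 rad from -Z,
# evenly spaced; the spine pushes the end-effector along +Z.
u = [(math.sin(0.4) * math.cos(a), math.sin(0.4) * math.sin(a),
      -math.cos(0.4)) for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
Fs = (0.0, 0.0, 10.0)
sigmas = cone_coefficients(u[0], u[1], u[2], Fs)
```

By symmetry all three σi come out equal and positive, confirming that this Fs is balanced by non-negative cable coefficients.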
Theorem 2.
If BetaBot satisfies Conditions 1-4 as explained before, it is tensionable every-
where in its workspace.
Proof.
For the proof, we show that a positive spine force is statically balanced by positive tensions in the cables. First, consider the force equilibrium of the end-effector (see Fig. 5b). According to Lemma 4, force equilibrium of the end-effector is met by positive σ1, σ2 and σ3, where, for instance, σ1 is the total tension of cables 1 (τ1) and 2 (τ2). Now, let:
τ1 = (P2E/P1P2) σ1,   τ2 = (P1E/P1P2) σ1        (33)
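The distribution of Eq. (33) can be verified with a few lines of Python: the two tensions sum to σ1, and the inverse-distance weighting makes their moments about E cancel. The numbers are arbitrary illustrative values.

```python
def split_tension(sigma1, p1e, p2e):
    """Distribute sigma1 between two parallel cables through P1 and P2
    (Eq. 33): tau1 = (P2E/P1P2) sigma1, tau2 = (P1E/P1P2) sigma1,
    where p1e = P1E and p2e = P2E with E on segment P1P2."""
    p1p2 = p1e + p2e
    return sigma1 * p2e / p1p2, sigma1 * p1e / p1p2

tau1, tau2 = split_tension(12.0, 1.0, 3.0)
```

Here τ1 = 9 and τ2 = 3: they sum to σ1 = 12, and τ1·P1E = τ2·P2E = 9, so the moments about E cancel.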
5. DishBot
Figure 6. a) DishBot: a 3 DoF translational cable-based manipulator with two inde-
pendent sets of cables. b) The configuration of the drive cables c) The configuration of
the passive cables
Figure 7. Assuming the lengths of the two cables ab and cd are equal, the end-effector
always remains parallel to the base
ΣF = τ1d û1 + τ2d û2 + τ3d û3 + (fs − τ1p − τ2p − τ3p) ŵ = 0        (34)

ΣM = −τ1p (r1 × ŵ) − τ2p (r2 × ŵ) − τ3p (r3 × ŵ) = 0        (35)

where the τid ûi are the drive-cable forces (i = 1, 2, 3), the τjp ŵ are the passive-cable forces, the rj are the position vectors of the anchor points of the passive cables (j = 1, 2, 3), and fs ŵ is the spine force.
A quick inspection of the above equations shows that Eq. (35), the moment equilibrium, is a set of homogeneous equations independent of the spine force, which in general results in zero tension in the passive cables. The tensions of the drive cables are found from Eq. (34). Note that the drive cables form a cone which contains the spine; hence, using Lemma 4, the drive-cable tensions are positive as long as the equivalent spine force, fs − τ1p − τ2p − τ3p, is positive (compressive). In conclusion, tension in the drive cables can be raised to any desired level by choosing a large enough spine force, but this does not affect the tension in the passive cables.
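Eq. (34) is a 3×3 linear system for the drive-cable tensions once the cable directions and the equivalent spine force are fixed. The Python sketch below solves it by Cramer's rule for a hypothetical symmetric geometry; all numeric values are assumptions for illustration.

```python
import math

def det3(M):
    # 3x3 determinant, expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def drive_tensions(u_hats, w_hat, f_s, tau_p):
    """Solve Eq. (34): sum_i tau_i^d u_i + (f_s - sum_j tau_j^p) w = 0.
    Returns the three drive-cable tensions tau_i^d."""
    f_eq = f_s - sum(tau_p)              # equivalent spine force
    A = [[u_hats[j][i] for j in range(3)] for i in range(3)]
    b = [-f_eq * w_hat[i] for i in range(3)]
    d = det3(A)
    return [det3([[b[i] if c == k else A[i][c] for c in range(3)]
                  for i in range(3)]) / d for k in range(3)]

# Hypothetical symmetric drive cables tilted 0.5 rad from -Z; the spine
# direction w points from the base up to the end-effector (+Z).
u = [(math.sin(0.5) * math.cos(a), math.sin(0.5) * math.sin(a),
      -math.cos(0.5)) for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
taus = drive_tensions(u, (0.0, 0.0, 1.0), 30.0, (2.0, 2.0, 2.0))
```

With f_s = 30 exceeding the total passive tension of 6, the equivalent spine force is positive and all three drive tensions come out positive, as Lemma 4 predicts.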
In order for DishBot to be tensionable, we also need to show that the tension in
the passive cables can be increased to any desired level. For this purpose, note
that, in Fig. 8, if O′ is the geometrical centre of the three anchor points of the passive cables (P1, P2 and P3), then Eq. (35) has non-zero solutions for the passive-cable tensions. In this case the solution is τ1p = τ2p = τ3p = τp, where τp is an arbitrary real value and thus can be positive. Such a geometrical centre coincides with the centroid of triangle ΔP1P2P3. As a result, if O′ is the centroid of triangle ΔP1P2P3, positive equal tensions in the passive cables are determined solely by the pre-tensioning springs.
6. Planar Manipulators

Planar manipulators with translational motion (in the XY plane) are sufficient for many industrial pick-and-place applications such as packaging and material handling. The simplicity of these manipulators compared to spatial ones further facilitates their application where two-axis motion is sufficient (Chan, 2005).
Two new designs of planar cable-based manipulators with translational motion are studied here that are tensionable everywhere in their workspace.
Schematic diagrams of these manipulators are shown in Fig. 9. The spine is
connected to the base and end-effector by revolute joints. The end-effector is
constrained by three cables. Two of the cables form a parallelogram which
eliminates the rotation of the end-effector as long as the cables are taut. As a
result, the end-effector can only move in X and Y directions.
In the first design (Fig. 9a), the parallelogram is maintained by two winches
with a common shaft which makes them move simultaneously and hence,
keep the cable lengths equal. Similar to BetaBot and DishBot, the workspace of
this manipulator is only limited by the minimum and maximum lengths of the
spine, and hence it can theoretically span a half-circle above the base. In the second design (Fig. 9b), a pair of synchronously rotating arms preserves the parallelogram without changing the length of the cables; this design therefore has a smaller workspace. The synchronization can be obtained by a pair of pulleys and a timing belt or a simple four-bar parallelogram, as seen in Fig. 9b.
The kinematics of these manipulators consists of a single planar cone and hence is easy to formulate for both the direct and inverse solutions. In this chapter, however, our main focus is on their tensionability and rigidity, which is presented in the following.
Figure 10. a) The configuration of the cables and spine in planar manipulators, b) The
free body diagram of the end-effector
Condition 1. Cable 1 (Fig. 10a) is always on the right and Cable 3 is always on the left side of the spine. This is obtained if the spine is hinged to the base at a proper point between the two sets of cables.
Condition 2. On the end-effector, the spine and Cable 3 are concurrent at
point E which is located somewhere between Cables 1 and 2.
Due to Condition 1, the direction of the spine, ŵ, is located between û1 and û2 (the cable directions). Therefore, the projections of −fs ŵ on û1 and û2 are positive and hence σ1, τ3 > 0. Now, let:

τ1 = (P2E/P1P2) σ1   and   τ2 = (P1E/P1P2) σ1        (37)

It is clear that τ1 + τ2 = σ1. Since σ1 > 0 and due to the distribution given in Eq. (37), the moment of τ1û1 about E cancels that of τ2û1, and hence these two forces can be replaced by σ1û1 without violating static equilibrium. Finally, since all three forces on the end-effector are concurrent at E, the moment equilibrium is also met, which completes the proof.
7. Conclusion
The tensionability of the proposed cable-based manipulators was analyzed using line geometry and static equilibrium in vector form. For each of these manipulators, it was shown that as long as certain conditions are met by the geometry of the manipulator, the tensionable workspace, in which the manipulator can be rigidified, is identical to the geometrical workspace found from the kinematic analysis.
BetaBot and the planar manipulators are tensionable everywhere and can be
rigidified by a sufficiently large spine force alone. In DishBot, in addition to the geometrical conditions, a relation between the spine force and the pre-tensioning springs of the passive cables must also be satisfied to maintain the rigidity of the manipulator.
8. References
Khajepour, A.; Behzadipour, S.; Dekker, R. & Chan, E. (2003). Light weight parallel manipulators using active/passive cables, US provisional patent application No. 10/615,595
Landsberger, S.E. & Sheridan, T.B. (1985). A new design for parallel link manipulators, Proceedings of the International Conference on Cybernetics and Society, pp. 81-88, Tucson, AZ, 1985
Landsberger, S.E. & Shanmugasundram, P. (1992). Workspace of a parallel link crane, Proceedings of the IMACS/SICE International Symposium on Robotics, Mechatronics and Manufacturing Systems, pp. 479-486, 1992
Ming, A. & Higuchi, T. (1994a). Study on multiple degree-of-freedom positioning mechanism using wires (Part 1), International Journal of the Japan Society of Precision Engineering, Vol. 28, No. 2, pp. 131-138
Ming, A. & Higuchi, T. (1994b). Study on multiple degree-of-freedom positioning mechanism using wires (Part 2), International Journal of the Japan Society of Precision Engineering, Vol. 28, No. 3, pp. 235-242
Ogahara, Y.; Kawato, Y.; Takemura, K. & Maeno, T. (2003). A wire-driven miniature five-fingered robot hand using elastic elements as joints, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2672-2677, Las Vegas, Nevada, 2003
Oh, S.; Mankala, K.K. & Agrawal, S. (2005a). Dynamic modeling and robust controller design of a two-stage parallel cable robot, Multibody System Dynamics, Vol. 13, No. 4, pp. 385-399
Oh, S.; Mankala, K.K.; Agrawal, S. & Albus, J.S. (2005b). A dual-stage planar cable robot: Dynamic modeling and design of a robust controller with positive inputs, ASME Journal of Mechanical Design, Vol. 127, No. 4, pp. 612-620
Pusey, J.; Fattah, A.; Agrawal, S. & Messina, E. (2004). Design and workspace analysis of a 6-6 cable-suspended parallel robot, Mechanism and Machine Theory, Vol. 39, No. 7, pp. 761-778
Roberts, R.G.; Graham, T. & Lippitt, T. (1998). On the inverse kinematics, statics, and fault tolerance of cable-suspended robots, Journal of Robotic Systems, Vol. 15, No. 10, pp. 581-597
Stump, E. & Kumar, V. (2006). Workspace of cable-actuated parallel manipulators, ASME Journal of Mechanical Design, Vol. 128, No. 1, pp. 159-167
Tadokoro, S.; Nishioka, S. & Kimura, T. (1996). On fundamental design of wire configurations of wire-driven parallel manipulators with redundancy, ASME Proceedings of the Japan/USA Symposium on Flexible Automation, pp. 151-158
Tsai, L.-W. (1996). Kinematics of a three-DOF platform with three extensible limbs, In: Recent Advances in Robot Kinematics, Lenarcic, J. & Parenti-Castelli, V. (Eds.), pp. 401-410, Kluwer Academic, Netherlands
1. Introduction
Podhorodeski and Pittens (1992, 1994) and Podhorodeski (1992) defined a kinematically-simple (KS) layout as a manipulator layout that incorporates a spherical group of joints at the wrist with a main-arm comprised of successively parallel or perpendicular joints, with no unnecessary offsets or link lengths between joints. Having a spherical group of joints within the layouts ensures, as demonstrated by Pieper (1968), that a closed-form solution for the inverse displacement problem exists.
Using the notation of possible joint axes directions shown in Figure 1 and ar-
guments of kinematic equivalency and mobility of the layouts, Podhorodeski
and Pittens (1992, 1994) showed that there are only five unique, revolute-only,
main-arm joint layouts representative of all layouts belonging to the KS family.
These layouts have joint directions CBE, CAE, BCE, BEF, and AEF and are de-
noted KS 1 to 5 in Figure 2.
Figure 2. KS Family of Joint Layouts
Yang et al. (2001) used the concepts developed by Podhorodeski and Pittens
(1992, 1994) to attempt to generate all unique KS layouts comprised of two
revolute joints and one prismatic joint. The authors identified eight layouts.
Of these eight layouts, five layouts (the layouts they denote CAE, CAF, CBF,
CFE, and CCE) are not kinematically simple, as defined in this chapter, in that
they incorporate unnecessary offsets and one layout (the layout they denote
CBE) is not capable of spatial motion.
The purpose of this chapter is to clarify which joint layouts comprised of a
combination of revolute and/or prismatic joints belong to the KS family. The
chapter first identifies all layouts belonging to the KS family. Zero-
displacement diagrams and Denavit and Hartenberg (D&H) parameters (1955)
used to model the layouts are presented. The complete forward and inverse
displacement solutions for the KS family of layouts are shown. The applica-
tion of the KS family of joint layouts and the application of the presented for-
ward and inverse displacement solutions to both serial and parallel manipula-
tors is discussed.
The possible layouts can be divided into four groups: layouts with three revo-
lute joints; layouts with two revolute joints and one prismatic joint; layouts
with one revolute joint and two prismatic joints; and layouts with three pris-
matic joints.
2.2 Layouts with Two Revolute Joints and One Prismatic Joint
Layouts consisting of two revolute joints and one prismatic joint can take on
three forms: prismatic-revolute-revolute; revolute-revolute-prismatic; and
revolute-prismatic-revolute.
For the case where the two revolute joints are parallel to one another, the axes
of the revolute joints must also be parallel to the axis of the prismatic joint. In
addition, a necessary link length must exist between the two revolute joints.
The axes for this layout can be represented with the directions ADD. Note that
for this configuration, the layout is fundamentally degenerate, unless an addi-
tional link length is added between joints two and three, since without the ad-
ditional link length, the axis of the second revolute joint would always pass
through the centre of the spherical joint group (see Figure 4a). Figure 4b illus-
trates the non-degenerate KS layout with an additional link length between the
second revolute joint and the prismatic joint. However, the layout of Figure 4b
is kinematically equivalent to KS 7 and therefore is not counted as a unique KS
layout.
2.3 Layouts with One Revolute Joint and Two Prismatic Joints
Layouts consisting of one revolute joint and two prismatic joints can take on
three forms: prismatic-revolute-prismatic; prismatic-prismatic-revolute; and
revolute-prismatic-prismatic.
For such a layout to be kinematically simple, the axis of the revolute joint must be parallel to the axis of
one of the prismatic joints. The feasible layout of joint directions can be repre-
sented by the axes directions CFD and is illustrated as KS 11 in Figure 2.
To achieve spatial motion with three prismatic joints and belong to the KS
class, the joint directions must be mutually orthogonal. A representative lay-
out of joint directions is CED. This layout is illustrated as KS 13 in Figure 2.
The layouts above represent the 13 layouts with unique kinematics belonging
to the KS family. However, additional layouts that have unique joint struc-
tures can provide motion that is kinematically equivalent to one of the KS lay-
outs. For branches where the axes of a prismatic and revolute joint are collin-
ear, there are two possible layouts to achieve the same motion. Four layouts,
KS 6, 7, 11, and 12, have a prismatic joint followed by a collinear revolute joint.
The order of these joints could be reversed, i.e., the revolute joint could come
first followed by the prismatic joint. The order of the joints has no bearing on
the kinematics of the layout, but would be very relevant in the physical design
of a manipulator. Note that the dj and θj elements of the corresponding rows in the D&H tables (see Section 3.2) would need to be interchanged accordingly.
A Complete Family of Kinematically-Simple Joint Layouts: Layout Models… 243
The zero-displacement diagrams (θi = 0 for all revolute joints i) for the KS family of layouts, using Craig's (1989) convention of frame assignment, are presented in Figures 5 to 7. Note that the KS layouts in Figure 2 are not necessarily shown in zero-displacement. The rotations necessary to put each of the KS layouts from the zero-displacement configuration into the configuration illustrated in Figure 2 are outlined in Table 1.
KS   Rotations                   KS   Rotations
1    θ2 = π/2                    2    θ2 = π/2
3    θ3 = π/2                    4    θ2 = π/2 & θ3 = π/2
5    θ2 = −π/2 & θ3 = π/2        6    θ3 = −π/2
7    None                        8    θ2 = −π/2
9    θ3 = π/2                    10   None
11   None                        12   θ3 = π/2
13   None

Table 1. Rotations required to move each KS layout from zero-displacement to the configuration of Figure 2
Table 2 shows the D&H parameters for the kinematically-simple family of joint
layouts. The D&H parameters follow Craig's frame assignment convention
(Craig, 1989) and correspond to the link transformations:
^{j−1}_{j}T = Rot_{x̂j−1}(αj−1) Trans_{x̂j−1}(aj−1) Trans_{ẑj}(dj) Rot_{ẑj}(θj)

    ⎡ cos(θj)             −sin(θj)              0            aj−1         ⎤
  = ⎢ sin(θj)cos(αj−1)     cos(θj)cos(αj−1)    −sin(αj−1)   −sin(αj−1) dj ⎥        (1)
    ⎢ sin(θj)sin(αj−1)     cos(θj)sin(αj−1)     cos(αj−1)    cos(αj−1) dj ⎥
    ⎣ 0                    0                    0            1            ⎦

where ^{j−1}_{j}T is a homogeneous transformation describing the location and orientation of link-frame Fj with respect to link-frame Fj−1, Rot_{x̂j−1}(αj−1) denotes a rotation about the x̂j−1 axis by αj−1, Trans_{x̂j−1}(aj−1) denotes a translation along the x̂j−1 axis by aj−1, Trans_{ẑj}(dj) denotes a translation along the ẑj axis by dj, and Rot_{ẑj}(θj) denotes a rotation about the ẑj axis by θj.
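Equation (1) translates directly into code. The Python sketch below builds the link transform for Craig's convention; it is a generic helper, not code from the chapter.

```python
import math

def dh_transform(alpha, a, d, theta):
    """Link transform of Eq. (1) for Craig's modified D&H convention:
    Rot_x(alpha_{j-1}) Trans_x(a_{j-1}) Trans_z(d_j) Rot_z(theta_j)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [[ct,      -st,      0.0,  a],
            [st * ca,  ct * ca, -sa, -sa * d],
            [st * sa,  ct * sa,  ca,  ca * d],
            [0.0,      0.0,      0.0, 1.0]]

def matmul4(A, B):
    """Product of two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Chaining `dh_transform` calls with the parameter rows of Table 2 yields the forward displacement transform of equation (3).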
^{j−1}_{j}T = ⎡ ^{j−1}_{j}R   ^{j−1}p_{oj−1→oj} ⎤        (2)
              ⎣ 0 0 0          1                 ⎦

where ^{j−1}_{j}R is a 3×3 orthogonal rotation matrix describing the orientation of frame Fj with respect to frame Fj−1 and ^{j−1}p_{oj−1→oj} is the vector from the origin of frame Fj−1 to the origin of frame Fj.
Figure 5. Zero-Displacement Diagrams for Layouts with Three Revolute Joints (KS 1
to 5)
Figure 6. Zero-Displacement Diagrams for Layouts with Two Revolute Joints and One
Prismatic Joint (KS 6 to 10)
Figure 7. Zero-Displacement Diagrams for Layouts with One Revolute Joint and Two Prismatic Joints (KS 11 and 12) or Three Prismatic Joints (KS 13)
The position and orientation of the spherical wrist frame Fsph with respect to
the base frame F0 is found from:
^{0}_{sph}T = ^{0}_{1}T ^{1}_{2}T ^{2}_{3}T ^{3}_{sph}T = ⎡ ^{0}_{sph}R   ^{0}p_{o0→osph} ⎤        (3)
                                                          ⎣ 0 0 0          1               ⎦
where the homogeneous transformation ^{j−1}_{j}T is defined in equation (1). The transformation ^{0}_{sph}T is the solution to the forward displacement problem: ^{0}_{sph}R is the change in orientation due to the first three joints and ^{0}p_{o0→osph} is the location of the spherical wrist centre. The homogeneous transformations ^{0}_{sph}T for the KS family of layouts can be found in Tables 3 and 4. Note that in Tables 3 and 4, ci and si denote cos(θi) and sin(θi), respectively.
Each entry below gives, for the transformation from frame Fj−1 to Fj, the parameters (αj−1, aj−1, dj, θj):

KS 1:  F0(0, 0, 0, θ1)F1;  F1(π/2, 0, 0, θ2)F2;  F2(0, f, 0, θ3)F3;  F3(π/2, 0, g, 0)Fsph
KS 2:  F0(0, 0, 0, θ1)F1;  F1(π/2, 0, 0, θ2)F2;  F2(−π/2, f, 0, θ3)F3;  F3(π/2, 0, g, 0)Fsph
KS 3:  F0(0, 0, 0, θ1)F1;  F1(π/2, 0, f, θ2)F2;  F2(−π/2, 0, 0, θ3)F3;  F3(π/2, 0, g, 0)Fsph
KS 4:  F0(0, 0, 0, θ1)F1;  F1(0, f, 0, θ2)F2;  F2(π/2, 0, 0, θ3)F3;  F3(−π/2, 0, g, 0)Fsph
KS 5:  F0(0, 0, 0, θ1)F1;  F1(π/2, f, 0, θ2)F2;  F2(−π/2, 0, 0, θ3)F3;  F3(π/2, 0, g, 0)Fsph
KS 6:  F0(0, 0, d1, 0)F1;  F1(0, 0, 0, θ2)F2;  F2(π/2, 0, 0, θ3)F3;  F3(−π/2, 0, g, 0)Fsph
KS 7:  F0(0, 0, d1, 0)F1;  F1(0, 0, 0, θ2)F2;  F2(0, f, 0, θ3)F3;  F3(π/2, 0, g, 0)Fsph
KS 8:  F0(0, 0, 0, θ1)F1;  F1(π/2, 0, 0, θ2)F2;  F2(−π/2, 0, d3, 0)F3;  F3(0, 0, g, 0)Fsph
KS 9:  F0(0, 0, 0, θ1)F1;  F1(π/2, 0, d2, π/2)F2;  F2(π/2, 0, 0, θ3)F3;  F3(−π/2, 0, g, 0)Fsph
KS 10: F0(0, 0, 0, θ1)F1;  F1(π/2, 0, d2, 0)F2;  F2(0, 0, 0, θ3)F3;  F3(−π/2, 0, g, 0)Fsph
KS 11: F0(0, 0, d1, 0)F1;  F1(0, 0, 0, θ2)F2;  F2(π/2, 0, d3, 0)F3;  F3(0, 0, g, 0)Fsph
KS 12: F0(0, 0, d1, 0)F1;  F1(π/2, 0, d2, 0)F2;  F2(0, 0, 0, θ3)F3;  F3(−π/2, 0, g, 0)Fsph
KS 13: F0(0, 0, d1, 0)F1;  F1(π/2, 0, d2, π/2)F2;  F2(π/2, 0, d3, 0)F3;  F3(0, 0, g, 0)Fsph

Table 2. D&H Parameters for the KS Layouts
Table 3. Forward Displacement Solutions for KS 1 to 5
For the inverse displacement solution, the location of the spherical wrist centre with respect to the base, ^{0}p_{o0→osph}, is known:

^{0}p_{o0→osph} = { px  py  pz }ᵀ        (4)
The forward displacement transformations ^{0}_{sph}T for KS 6 to 13 are:

KS 6:
  ⎡ c2c3   −s2   −c2s3   −c2s3 g    ⎤
  ⎢ s2c3    c2   −s2s3   −s2s3 g    ⎥
  ⎢ s3      0     c3      c3 g + d1 ⎥
  ⎣ 0       0     0       1         ⎦

KS 7:
  ⎡ c23    0    s23    c2 f + s23 g ⎤
  ⎢ s23    0   −c23    s2 f − c23 g ⎥
  ⎢ 0      1    0      d1           ⎥
  ⎣ 0      0    0      1            ⎦

KS 8:
  ⎡ c1c2   −s1   −c1s2   −c1s2 (d3 + g) ⎤
  ⎢ s1c2    c1   −s1s2   −s1s2 (d3 + g) ⎥
  ⎢ s2      0     c2      c2 (d3 + g)   ⎥
  ⎣ 0       0     0       1             ⎦

KS 9:
  ⎡ s1s3    −c1    s1c3    s1 (c3 g + d2) ⎤
  ⎢ −c1s3   −s1   −c1c3   −c1 (c3 g + d2) ⎥
  ⎢ c3       0    −s3     −s3 g           ⎥
  ⎣ 0        0     0       1              ⎦

KS 10:
  ⎡ c1c3   −s1   −c1s3   −c1s3 g + s1 d2 ⎤
  ⎢ s1c3    c1   −s1s3   −s1s3 g − c1 d2 ⎥
  ⎢ s3      0     c3      c3 g           ⎥
  ⎣ 0       0     0       1              ⎦

KS 11:
  ⎡ c2    0    s2    s2 (g + d3)  ⎤
  ⎢ s2    0   −c2   −c2 (g + d3)  ⎥
  ⎢ 0     1    0     d1           ⎥
  ⎣ 0     0    0     1            ⎦

KS 12:
  ⎡ c3    0   −s3   −s3 g       ⎤
  ⎢ 0     1    0    −d2         ⎥
  ⎢ s3    0    c3    c3 g + d1  ⎥
  ⎣ 0     0    0     1          ⎦

KS 13:
  ⎡ 0    0    1    g + d3 ⎤
  ⎢ 0   −1    0   −d2     ⎥
  ⎢ 1    0    0    d1     ⎥
  ⎣ 0    0    0    1      ⎦

where c23 and s23 denote cos(θ2 + θ3) and sin(θ2 + θ3); the entries for KS 6 to 9 follow from the D&H parameters of Table 2.

Table 4. Forward Displacement Solutions for KS 6 to 13
From the forward displacement solution presented in Table 3 for KS 1, the following relation exists:

{ c1(c2 f + s23 g)   s1(c2 f + s23 g)   s2 f − c23 g   1 }ᵀ = { px  py  pz  1 }ᵀ        (5)

where the left-hand side of equation (5) is the last column of ^{0}_{sph}T. Premultiplying both sides of equation (5) by ^{1}_{0}T = (^{0}_{1}T)⁻¹ isolates θ1 on the right-hand side of the expression:
{ c2 f + s23 g   0   s2 f − c23 g   1 }ᵀ = { c1 px + s1 py   −s1 px + c1 py   pz   1 }ᵀ        (6)
From the second row of equation (6), a solution for θ1 can be found as:

θ1 = atan2(py, px)  or  atan2(−py, −px)        (7)

Summing the squares of the elements of equation (6) and solving for s3, and then solving the first and third rows for s2 and c2, gives:

θ3 = atan2(s3, ±√(1 − s3²)),  where s3 = (px² + py² + pz² − f² − g²)/(2 f g)        (8)

θ2 = atan2(s2, c2)        (9)

where

s2 = [c3 g (c1 px + s1 py) + (f + s3 g) pz]/(f² + g² + 2 s3 f g)
c2 = [(f + s3 g)(c1 px + s1 py) − c3 g pz]/(f² + g² + 2 s3 f g)
A similar procedure can be followed for the other KS layouts. Inverse dis-
placement solutions for all 13 of the KS layouts are summarized in Tables 5
and 6.
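As a concrete check of the KS 1 solution, the closed-form position relations behind Eq. (6) can be inverted with Eqs. (7)-(9) and verified by a round trip. This Python sketch assumes the solution branch with c3 > 0; the joint values and link lengths are arbitrary illustrative numbers.

```python
import math

def ks1_forward(t1, t2, t3, f, g):
    """Wrist-centre position of KS 1, from the relations behind Eq. (6)."""
    r = f * math.cos(t2) + g * math.sin(t2 + t3)
    return (math.cos(t1) * r, math.sin(t1) * r,
            f * math.sin(t2) - g * math.cos(t2 + t3))

def ks1_inverse(px, py, pz, f, g, branch=1.0):
    """One branch of the KS 1 inverse solution (Eqs. 7-9 / Table 5)."""
    t1 = math.atan2(py, px)                      # Eq. (7)
    s3 = (px * px + py * py + pz * pz - f * f - g * g) / (2 * f * g)
    t3 = math.atan2(s3, branch * math.sqrt(max(0.0, 1 - s3 * s3)))
    c3 = math.cos(t3)
    u = math.cos(t1) * px + math.sin(t1) * py    # c1*px + s1*py
    den = f * f + g * g + 2 * s3 * f * g
    s2 = (c3 * g * u + (f + s3 * g) * pz) / den
    c2 = ((f + s3 * g) * u - c3 * g * pz) / den
    return t1, math.atan2(s2, c2), t3

p = ks1_forward(0.3, 0.7, 0.4, 1.0, 0.8)
angles = ks1_inverse(*p, 1.0, 0.8)
```

The recovered angles match the originals here because θ3 = 0.4 lies on the c3 > 0 branch.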
1   θ3 = atan2(s3, ±√(1 − s3²)),  where s3 = (px² + py² + pz² − f² − g²)/(2 f g)
    θ2 = atan2(s2, c2),
      where s2 = [c3 g (c1 px + s1 py) + (f + s3 g) pz]/(f² + g² + 2 s3 f g)
      and  c2 = [(f + s3 g)(c1 px + s1 py) − c3 g pz]/(f² + g² + 2 s3 f g)

2   θ3 = atan2(s3, ±√(1 − s3²)),  where s3 = (px² + py² + pz² − f² − g²)/(2 f g)
    θ2 = atan2(s2, ±√(1 − s2²)),  where s2 = pz/(f + s3 g)

4   θ2 = atan2(±√(1 − c2²), c2),  where c2 = (f² + s3² g² − px² − py²)/(2 s3 f g)
    θ1 = atan2(s1, c1),
      where s1 = [s2 s3 g px + (f − c2 s3 g) py]/(px² + py²)
      and  c1 = [(f − c2 s3 g) px − s2 s3 g py]/(px² + py²)
    with c2 s3 = (f² + g² − px² − py² − pz²)/(2 f g)

5   θ3 = atan2(±√(1 − c3²), c3),  where c3 = (s1 px − c1 py)/g
    θ2 = atan2(s2, c2),  where s2 = pz/(s3 g)  and  c2 = (c1 px + s1 py − f)/(s3 g)

Table 5. Inverse Displacement Solutions for KS 1 to 5
7   d1 = pz
    θ3 = atan2(s3, ±√(1 − s3²)),  where s3 = (px² + py² − f² − g²)/(2 f g)
    θ2 = atan2(s2, c2),
      where s2 = [c3 g px + (f + s3 g) py]/[(f + s3 g)² + c3² g²]
      and  c2 = [(f + s3 g) px − c3 g py]/[(f + s3 g)² + c3² g²]

8   θ1 = atan2(py, px) or atan2(−py, −px)
    d3 = ±√(px² + py² + pz²) − g
    θ2 = atan2(s2, c2),  where s2 = −(c1 px + s1 py)/(d3 + g)  and  c2 = pz/(d3 + g)

9   θ1 = atan2(px, −py) or atan2(−px, py)
    d2 = s1 px − c1 py ± √(g² − pz²)
    θ3 = atan2(s3, c3),  where s3 = −pz/g  and  c3 = (s1 px − c1 py − d2)/g

10  θ3 = atan2(±√(1 − c3²), c3),  where c3 = pz/g
    d2 = ±√(px² + py² − s3² g²)
    θ1 = atan2(s1, c1),
      where s1 = (−s3 g py + d2 px)/(px² + py²)  and  c1 = (−s3 g px − d2 py)/(px² + py²)

11  d1 = pz
    d3 = ±√(px² + py²) − g
    θ2 = atan2(s2, c2),  where s2 = px/(d3 + g)  and  c2 = −py/(d3 + g)

12  d2 = −py
    d1 = pz ± √(g² − px²)
    θ3 = atan2(s3, c3),  where s3 = −px/g  and  c3 = (pz − d1)/g

13  d3 = px − g
    d2 = −py
    d1 = pz
Table 6. Inverse Displacement Solutions for KS 6 to 13
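For the purely prismatic KS 13, the forward solution of Table 4 and the inverse of Table 6 invert each other directly, as the short Python check below illustrates (joint values and g are arbitrary).

```python
def ks13_forward(d1, d2, d3, g):
    """Wrist-centre position for KS 13 (last column of its Table 4 entry)."""
    return (g + d3, -d2, d1)

def ks13_inverse(px, py, pz, g):
    """KS 13 inverse solution from Table 6: d1 = pz, d2 = -py, d3 = px - g."""
    return (pz, -py, px - g)

joints = ks13_inverse(*ks13_forward(0.2, 0.5, 1.1, 0.3), 0.3)
```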
5. Discussion
Fj−1   αj−1   aj−1   dj   θj   Fj
Fsph   0      0      0    θ4   F4
F4     −π/2   0      0    θ5   F5
F5     π/2    0      0    θ6   F6

Table 8. D&H Parameters for the Roll-Pitch-Roll Spherical Wrist
For the KS family of layouts with a spherical wrist, the forward displacement solution is:

^{0}_{ee}T = (^{0}_{1}T ^{1}_{2}T ^{2}_{3}T ^{3}_{sph}T)(^{sph}_{4}T ^{4}_{5}T ^{5}_{6}T ^{6}_{ee}T) = ^{0}_{sph}T ^{sph}_{6}T ^{6}_{ee}T        (10)

where ^{6}_{ee}T is the homogeneous transformation describing the end-effector frame Fee with respect to frame F6 and depends on the type of tool attached, ^{0}_{sph}T is defined in equation (3), and ^{sph}_{6}T is:
^{sph}_{6}T = ⎡ c4c5c6 − s4s6   −c4c5s6 − s4c6   c4s5   0 ⎤
              ⎢ s4c5c6 + c4s6   −s4c5s6 + c4c6   s4s5   0 ⎥        (11)
              ⎢ −s5c6            s5s6             c5    0 ⎥
              ⎣ 0                0                0     1 ⎦
For a 6-joint serial manipulator, Pieper (1968) demonstrated that for a manipu-
lator with three axes intersecting, a closed-form solution to the inverse dis-
placement problem can be found. As demonstrated by Paul (1981), for a 6-
joint manipulator with a spherical wrist, the solutions for the main-arm and
wrist displacements can be solved separately. Therefore, the presented inverse
displacement solutions for the KS family of layouts (see Section 4.2) can be
used to solve for the main-arm joint displacements for serial manipulators that
use KS layouts as their main-arm and have a spherical wrist.
For the inverse displacement solution of the main-arm joints, the location (^{0}p_{o0→o6}) and orientation (^{0}_{6}R) of frame F6 with respect to the base frame, in terms of the known value ^{0}_{ee}T, can be found from:

^{0}_{6}T = ^{0}_{ee}T (^{6}_{ee}T)⁻¹ = ⎡ ^{0}_{6}R   ^{0}p_{o0→o6} ⎤        (12)
                                        ⎣ 0 0 0       1             ⎦
where ^{6}_{ee}T is constant and known. Since the manipulator has a spherical wrist:

^{0}p_{o0→osph} = ^{0}p_{o0→o6} = ^{0}p_{o0→o5} = ^{0}p_{o0→o4} = { px  py  pz }ᵀ        (13)
where p x , p y , and p z are found from equation (12). The inverse displacement
solutions for the KS family of layouts discussed in Section 4.2 can now be used
to solve for the main-arm joint displacements.
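Equations (12)-(13) amount to one homogeneous-transform inverse and one product. The Python sketch below implements them generically; the tool offset and end-effector pose are hypothetical values for illustration.

```python
def h_mul(A, B):
    """Product of two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def h_inv(T):
    """Inverse of a homogeneous transform: [R^T, -R^T p; 0 0 0 1]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    p = [T[i][3] for i in range(3)]
    pt = [-sum(Rt[i][k] * p[k] for k in range(3)) for i in range(3)]
    return [Rt[0] + [pt[0]], Rt[1] + [pt[1]], Rt[2] + [pt[2]],
            [0.0, 0.0, 0.0, 1.0]]

def wrist_centre(T0ee, T6ee):
    """Eqs. (12)-(13): the translation of T06 = T0ee (T6ee)^-1 is the
    spherical wrist centre (px, py, pz)."""
    T06 = h_mul(T0ee, h_inv(T6ee))
    return T06[0][3], T06[1][3], T06[2][3]

# Hypothetical tool 0.1 units along z6; end-effector at (1, 2, 3)
T6ee = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0.1], [0, 0, 0, 1]]
T0ee = [[1, 0, 0, 1.0], [0, 1, 0, 2.0], [0, 0, 1, 3.0], [0, 0, 0, 1]]
pw = wrist_centre(T0ee, T6ee)
```

Here the wrist centre is simply the end-effector position pulled back along the tool axis.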
For the inverse displacement solution of the spherical wrist joints, in terms of the known value ^{0}_{ee}T, the orientation of F6 with respect to the base frame, ^{0}_{6}R, was defined in equation (12). Note that:

^{3}_{6}R = (^{0}_{3}R)ᵀ ^{0}_{6}R = [rij]        (14)

Since the main-arm joint displacements were solved above, the elements of matrix ^{0}_{3}R are known and thus the right-hand side of equation (14) is known, i.e., rij, i = 1 to 3 and j = 1 to 3, are known values.
Substituting the elements of the rotation matrix ^{3}_{6}R = ^{3}_{sph}R ^{sph}_{6}R into equation (14) yields:

^{3}_{sph}R ^{sph}_{6}R = [rij]        (15)

where ^{3}_{sph}R depends on the D&H parameter α3 for the manipulator, i.e.:
             ⎡ 1   0   0 ⎤
^{3}_{sph}R = ⎢ 0   1   0 ⎥,  if α3 = 0
             ⎣ 0   0   1 ⎦

             ⎡ 1   0   0 ⎤
^{3}_{sph}R = ⎢ 0   0  −1 ⎥,  if α3 = π/2        (16)
             ⎣ 0   1   0 ⎦

             ⎡ 1   0   0 ⎤
^{3}_{sph}R = ⎢ 0   0   1 ⎥,  if α3 = −π/2
             ⎣ 0  −1   0 ⎦
Equation (15) can be used to derive expressions for the wrist joint displacements θ4, θ5 and θ6. For example, if α3 = π/2, equation (15) becomes:

⎡ c4c5c6 − s4s6   −c4c5s6 − s4c6   c4s5 ⎤
⎢ s5c6            −s5s6            −c5  ⎥ = [rij]        (17)
⎣ s4c5c6 + c4s6   −s4c5s6 + c4c6   s4s5 ⎦

Using element (2, 3) of equation (17), θ5 can be solved as:

θ5 = atan2(±√(1 − c5²), c5),  where c5 = −r23        (18)

Using elements (1, 3) and (3, 3) of equation (17) allows θ4 to be solved as:

θ4 = atan2(s4, c4),  where s4 = r33/s5  and  c4 = r13/s5        (19)

Using elements (3, 1) and (3, 2) of equation (17) allows θ6 to be solved as:

θ6 = atan2(s6, c6)        (20)
where

s6 = (−c4 r31 + s4 c5 r32)/(−s4² c5² − c4²)
c6 = (−s4 c5 r31 − c4 r32)/(−s4² c5² − c4²)
Note that if s5 = 0 , joint axes 4 and 6 are collinear and the solutions for θ 4 and
θ 6 are not unique. In this case, θ 4 can be chosen arbitrarily and θ 6 can be
solved for.
Similar solutions can be found for the cases where α 3 equals 0 and − π 2 .
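For the α3 = π/2 case, Eqs. (17)-(20) can be exercised with a round trip: build the matrix of Eq. (17) from known wrist angles, then recover them. A Python sketch, assuming the s5 > 0 branch of Eq. (18) and arbitrary test angles:

```python
import math

def wrist_matrix(t4, t5, t6):
    """Matrix of Eq. (17): 3_sph R (alpha3 = pi/2) times sph_6 R of Eq. (11)."""
    c4, s4 = math.cos(t4), math.sin(t4)
    c5, s5 = math.cos(t5), math.sin(t5)
    c6, s6 = math.cos(t6), math.sin(t6)
    W = [[c4*c5*c6 - s4*s6, -c4*c5*s6 - s4*c6, c4*s5],
         [s4*c5*c6 + c4*s6, -s4*c5*s6 + c4*c6, s4*s5],
         [-s5*c6,            s5*s6,             c5]]
    P = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]      # 3_sph R for alpha3 = pi/2
    return [[sum(P[i][k] * W[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def wrist_angles(r):
    """Eqs. (18)-(20), taking the positive square root for theta5."""
    c5 = -r[1][2]
    t5 = math.atan2(math.sqrt(max(0.0, 1 - c5 * c5)), c5)
    s5 = math.sin(t5)
    t4 = math.atan2(r[2][2] / s5, r[0][2] / s5)
    s4, c4 = math.sin(t4), math.cos(t4)
    den = -s4 * s4 * c5 * c5 - c4 * c4
    s6 = (-c4 * r[2][0] + s4 * c5 * r[2][1]) / den
    c6 = (-s4 * c5 * r[2][0] - c4 * r[2][1]) / den
    return t4, t5, math.atan2(s6, c6)

recovered = wrist_angles(wrist_matrix(0.5, 1.0, -0.3))
```

The recovery fails only at the s5 = 0 singularity noted above, where θ4 and θ6 are not unique.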
Two frames common to the branches of the parallel manipulator are established, one frame attached to the base (Fbase) and the other attached to the platform (Fplat). The homogeneous transformations from the base frame Fbase to the base frame of each of the m branches, F0i, are denoted ^{base}_{0i}T, i = 1 to m. The homogeneous transformations from the platform frame Fplat to the m passive spherical group frames Fsphi are denoted ^{plat}_{sphi}T, i = 1 to m. Note that for a given parallel manipulator all ^{base}_{0i}T and ^{plat}_{sphi}T are known and constant.
For the inverse displacement solution of the ith branch, the location and orientation of the spherical wrist frame Fsphi with respect to the base frame of the branch, F0i, in terms of the known value ^{base}_{plat}T, can be found from:

^{0i}_{sphi}T = (^{base}_{0i}T)⁻¹ ^{base}_{plat}T ^{plat}_{sphi}T = ⎡ ^{0i}_{sphi}R   ^{0i}p_{o0i→osphi} ⎤        (21)
                                                                    ⎣ 0 0 0           1                  ⎦

where ^{0i}_{sphi}R is the orientation of Fsphi with respect to F0i and ^{0i}p_{o0i→osphi} is the position vector from the origin of F0i to the origin of Fsphi, expressed in F0i. The position vector is defined as:

^{0i}p_{o0i→osphi} = { pxi  pyi  pzi }ᵀ        (22)
(1993) showed that for the general 6-6¹ Stewart-Gough platform up to 40 solutions could exist to the forward displacement problem.
Innocenti and Parenti-Castelli (1990) showed that for a class of parallel ma-
nipulators that have the branches connected at three distinct points on the end-
effector platform (e.g., 6-3 and 3-3 layouts), the forward displacement problem
can be reduced to a 16th order polynomial of one variable leading to a maxi-
mum of 16 possible solutions to the forward displacement problem.
Numerous researchers (e.g., Inoue, et al. (1986); Waldron, et al. (1989); Cheok,
et al. (1993); Merlet (1993); Notash and Podhorodeski (1994 and 1995); and
Baron and Angeles (2000)) have shown that utilizing redundant sensing in
parallel manipulators is necessary to achieve a unique solution to the forward
displacement problem. For the purposes of the solutions presented here, it is
assumed that the manipulator is redundantly sensed and that the positions of
the passive spherical groups ( p i , i = 1 to m) would be known.
Noting that:

^{base}_{plat}T = ⎡ ^{base}_{plat}R   ^{base}p_{obase→oplat} ⎤        (23)
                  ⎣ 0 0 0              1                     ⎦
For 6-3 and 3-3 layouts, the origin of Fplat can be found as:

^{base}p_{obase→oplat} = (p1 + p2 + p3)/3        (24)

where p1, p2 and p3 are the positions of the passive spherical groups. The positions of the passive spherical groups can be found using the solutions presented in Tables 3 and 4.
The orientation of the platform frame can be found as:

$$^{base}_{plat}R = {}^{base}[\mathbf{n}\;\;\mathbf{o}\;\;\mathbf{a}] \tag{25}$$
where
A Complete Family of Kinematically-Simple Joint Layouts: Layout Models… 261
$$^{base}\mathbf{n} = \frac{\mathbf{p}_3 - \mathbf{p}_2}{\left\|\mathbf{p}_3 - \mathbf{p}_2\right\|}, \qquad ^{base}\mathbf{a} = \frac{^{base}\mathbf{n} \times {}^{base}\mathbf{c}}{\left\|^{base}\mathbf{n} \times {}^{base}\mathbf{c}\right\|}, \qquad ^{base}\mathbf{o} = {}^{base}\mathbf{a} \times {}^{base}\mathbf{n}$$

with

$$^{base}\mathbf{c} = \frac{\mathbf{p}_1 - \mathbf{p}_2}{\left\|\mathbf{p}_1 - \mathbf{p}_2\right\|}$$
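As a numerical illustration of Eqs. (24)-(25), the following sketch (using NumPy; the function name `platform_pose` is ours, not the chapter's) recovers the platform pose from three spherical-joint positions:

```python
import numpy as np

def platform_pose(p1, p2, p3):
    """Platform pose from the three passive spherical-joint positions
    (6-3 / 3-3 layouts), following Eqs. (24)-(25)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    origin = (p1 + p2 + p3) / 3.0                  # Eq. (24): centroid
    n = (p3 - p2) / np.linalg.norm(p3 - p2)
    c = (p1 - p2) / np.linalg.norm(p1 - p2)
    a = np.cross(n, c)
    a /= np.linalg.norm(a)                         # a = (n x c) / ||n x c||
    o = np.cross(a, n)                             # o = a x n
    R = np.column_stack((n, o, a))                 # Eq. (25): R = [n o a]
    return origin, R
```

For three joint positions forming a triangle, the routine returns the centroid of the triangle and a proper (orthonormal, determinant +1) rotation matrix.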
For 6-6 and 3-6 layouts, the origin of $F_{plat}$ can be found as:

$$^{base}\mathbf{p}_{o_{base}\to o_{plat}} = \frac{^{base}(\mathbf{p}_1 + \mathbf{p}_2 + \mathbf{p}_3 + \mathbf{p}_4 + \mathbf{p}_5 + \mathbf{p}_6)}{6} \tag{26}$$
where $\mathbf{p}_1$ to $\mathbf{p}_6$ are the positions of the passive spherical groups. Note that it is assumed that the passive spherical groups are symmetrically distributed about the platform. The positions of the passive spherical groups can be found using the solutions presented in Tables 3 and 4.
The orientation of the platform frame can be found as:

$$^{base}_{plat}R = {}^{base}[\mathbf{n}\;\;\mathbf{o}\;\;\mathbf{a}] \tag{27}$$

where

$$^{base}\mathbf{n} = \frac{\mathbf{p}_c - \mathbf{p}_b}{\left\|\mathbf{p}_c - \mathbf{p}_b\right\|}, \qquad ^{base}\mathbf{a} = \frac{^{base}\mathbf{n} \times {}^{base}\mathbf{c}}{\left\|^{base}\mathbf{n} \times {}^{base}\mathbf{c}\right\|}, \qquad ^{base}\mathbf{o} = {}^{base}\mathbf{a} \times {}^{base}\mathbf{n}$$

with

$$^{base}\mathbf{c} = \frac{\mathbf{p}_a - \mathbf{p}_b}{\left\|\mathbf{p}_a - \mathbf{p}_b\right\|}, \qquad \mathbf{p}_a = \frac{\mathbf{p}_1 + \mathbf{p}_2}{2}, \quad \mathbf{p}_b = \frac{\mathbf{p}_3 + \mathbf{p}_4}{2}, \quad \mathbf{p}_c = \frac{\mathbf{p}_5 + \mathbf{p}_6}{2}$$
262 Industrial Robotics: Theory, Modelling and Control
6. Conclusions
Acknowledgements
The authors wish to thank the Natural Sciences and Engineering Research
Council (NSERC) of Canada for providing funding for this research.
7. References
1. Introduction
The conceptual design of parallel robots dates back to the time when Gough established the basic principles of a device with a closed-loop kinematic structure (Gough 1956) that can generate a specified position and orientation of a moving platform so as to test tire wear. Based on this principle, Stewart designed a platform for use as an aircraft simulator in 1965 (Stewart 1965). In 1978, Hunt (1978) made a systematic study of robots with parallel kinematics, in which the 3-RPS (R: revolute joint, P: prismatic joint, S: spherical joint) parallel robot is a typical one. Since then, parallel robots have been studied extensively by numerous researchers.
The most studied parallel robots are those with 6 DOFs. These parallel robots possess the advantages of high stiffness, low inertia, and large payload capacity. However, they suffer from the problems of a relatively small useful workspace and design difficulties (Merlet 2000). Furthermore, their direct kinematics is a very difficult problem, whereas the same problem for parallel robots with 2 and 3 DOFs can be described in closed form (Liu 2001). As is well known,
there are three kinds of singularities in parallel robots (Gosselin and Angeles
1990). Moreover, not all singularities of a 6-DOF parallel robot can be found
easily. But for a parallel robot with 2 and 3 DOFs, the singularities can always
be identified readily. For such reasons, parallel robots with fewer than 6 DOFs, especially 2 and 3 DOFs, have attracted more and more researchers' attention with respect to industrial applications (Tsai & Stamper 1996;
Ceccarelli 1997; Tonshoff et al 1999; Siciliano 1999; Liu et al. 2001; Liu et al.
2005; Liu & Kim 2003). Among these designs, parallel robots with three translational DOFs have played important roles in industrial applications (Tsai & Stamper 1996; Clavel 1988; Hervé 1992; Kim & Tsai 2002; Zhao & Huang 2000; Carricato & Parenti-Castelli 2001; Kong & Gosselin 2002; Liu et al. 2003), especially the DELTA robot (Clavel 1988), whose importance is evident from the fact that its design is covered by a family of 36 patents (Bonev 2001).
Tsai’s robot (Tsai & Stamper 1996), in which each of the three legs consists of a
parallelogram, is the first design to solve the problem of UU chain. A 3-
malization can guarantee that a dimensional robot and its corresponding normalized robot are similar not only in size but also in performance. The dimensional robot is defined as a similarity robot (SR), and the normalized robot is referred to as a basic similarity robot (BSR). A design space which embodies all
kinds of BSRs will be established. The space can be used not only in analysis
but also in the optimal design of the parallel robot. Within the design space,
the performance atlas that illustrates the relationship between a performance
index and the BSRs can be plotted. The optimal kinematic design can be im-
plemented with respect to the performance atlases. Design examples will be
finally given in the chapter. Compared with the traditional design methods,
the proposed optimal design methodology has the following advantages: (a) one performance index corresponds to one atlas; (b) owing to (a), the fact that some performance indices are antagonistic is no longer a problem in the design; (c) the optimal design process can consider multiple objective functions or indices and still guarantee an optimal result; and finally, (d) the design method provides a set of possible solutions, ideally all the design solutions.
The novel 2-DOF translational parallel robot proposed here is shown in Fig.
1(a). A schematic of the robot is shown in Fig. 1(b). The end-effector of the ro-
bot is connected to the base by two kinematic legs, 1 and 2. Leg 1 consists of three revolute joints, and leg 2 consists of two revolute joints and one cylindrical joint, or of three revolute joints and one prismatic joint. In each leg, the revolute joints are
parallel to each other. The axes of the revolute joints in leg 1 are normal to
those of the joints in leg 2. The two joints attached to the end-effector are placed on adjacent sides of a square. The kinematic chain of the robot is denoted as RRR-RRC (C: cylindrical joint) or RRR-RRRP.
2.2 Capability
Here, a Plücker-coordinate-like array $j = (x, y, z; x̂, ŷ, ẑ) is used to describe the capability of an object j. In $j, Trj = (x, y, z) and Roj = (x̂, ŷ, ẑ) express the translation and rotation capabilities of the object, respectively. If an element in $ is equal to 0, there is no such translation or rotation; if it is equal to 1, the capability exists. For example, x = 0 means that the object has no translation along the x-axis; ŷ = 1 indicates that the object can rotate about the y-axis.
Observing leg 1 alone, the Plücker coordinate of the end-effector in that leg can be written as $1 = (1, 1, 0; 0, 0, 1). Considering leg 2 alone, the Plücker coordinate of the end-effector can be expressed as $2 = (1, 1, 1; 1, 0, 0). Then, the intersection of the two Plücker coordinates $1 and $2 is

$ = $1 ∩ $2 = (1, 1, 0; 0, 0, 0)

which describes the capability of the robot, i.e., the translations of the end-effector along the x and y axes. That means the end-effector has two purely translational degrees of freedom with respect to the base.
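The intersection of the two capability coordinates can be computed element-wise (1 where both legs permit the motion, 0 otherwise); a minimal sketch:

```python
# Capability coordinates (x, y, z; rotations about x, y, z); 1 = capability present.
leg1 = (1, 1, 0, 0, 0, 1)            # $1: end-effector seen through leg 1
leg2 = (1, 1, 1, 1, 0, 0)            # $2: end-effector seen through leg 2
robot = tuple(a & b for a, b in zip(leg1, leg2))   # element-wise intersection
```

The result, (1, 1, 0; 0, 0, 0), confirms the two pure translations along x and y.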
Figure 1. The 2-DOF translational parallel robot: (a) the CAD model; (b) the schematic
On the Analysis and Kinematic Design of a Novel 2-DOF Translational Parallel Robot 269
Observing the robot shown in Fig. 1, it is not difficult to conclude that if the axes of the joints in leg 1 are normal to those of the joints in leg 2, the robot will have two translational DOFs. Based on this idea, some
topological architectures are shown in Fig. 2. It is noteworthy that the leg 2
shown in Fig. 1 and Fig. 2 can be also the kinematic chain RRR(Pa) shown in
Fig. 3, where Pa means planar parallelogram.
3. Kinematics Analysis
Although the robot has several possible topologies, this chapter considers only the architecture shown in Fig. 1. In this section, the inverse and forward kinematics of the robot will be given.
$$^{\Re}\mathbf{c} = (x\;\;y)^T \tag{2}$$
where $\theta$ is the actuated input for leg 1. Vector $^{\Re}\mathbf{p}_1$ in the fixed frame $\Re$ can be written as

$$^{\Re}\mathbf{p}_1 = (-R_3\;\;0)^T + {}^{\Re}\mathbf{c} = (x - R_3\;\;y)^T \tag{4}$$

The inverse kinematics problem of leg 1 can be solved by writing the following constraint equation

that is
Then, there is

$$\theta = 2\tan^{-1}(m) \tag{7}$$

where

$$m = \frac{-b + \sigma\sqrt{b^2 - 4ac}}{2a}, \qquad \sigma = 1 \text{ or } -1 \tag{8}$$

$$a = (x - R_3)^2 + y^2 + R_1^2 - R_2^2 + 2(x - R_3)R_1$$
$$b = -4yR_1$$
$$c = (x - R_3)^2 + y^2 + R_1^2 - R_2^2 - 2(x - R_3)R_1$$
$$s = x \tag{9}$$
in which s is the input of leg 2. From Eqs. (8) and (9), we can see that there are two solutions for the inverse kinematics of the robot. Hence, for a given robot and for prescribed values of the position of the end-effector, the required actuated inputs can be computed directly from Eqs. (7) and (9). To obtain the configuration shown in Fig. 1, the parameter σ in Eq. (8) should be 1. This configuration is called the "+" working mode. When σ = −1, the corresponding configuration is referred to as the "−" working mode.
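Eqs. (7)-(9) translate directly into code. The sketch below assumes the leg-1 constraint (x − R3 − R1 cos θ)² + (y − R1 sin θ)² = R2², which is consistent with the coefficients a, b, c of Eq. (8); the function name is illustrative:

```python
import math

def inverse_kinematics(x, y, R1, R2, R3, sigma=1):
    """Actuated inputs (theta, s) for an end-effector position (x, y),
    following Eqs. (7)-(9); sigma = +1 selects the "+" working mode."""
    K = (x - R3) ** 2 + y ** 2 + R1 ** 2 - R2 ** 2
    a = K + 2.0 * (x - R3) * R1
    b = -4.0 * y * R1
    c = K - 2.0 * (x - R3) * R1
    m = (-b + sigma * math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # Eq. (8)
    theta = 2.0 * math.atan(m)                                      # Eq. (7)
    s = x                                                           # Eq. (9)
    return theta, s
```

A quick self-check is to substitute the returned θ back into the constraint and verify that the residual vanishes.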
The forward kinematic problem is to obtain the output with respect to a set of given inputs. From Eqs. (6) and (9), one obtains

$$y = e + \sigma\sqrt{f} \tag{11}$$

and

$$x = s \tag{12}$$
Figure 4. Two kinds of working modes: (a) "+" working mode; (b) "−" working mode
4. Singularity Analysis
Equations (6) and (9) can be differentiated with respect to time to obtain the velocity equations. This leads to

$$\dot{s} = \dot{x} \tag{13}$$

where $\dot{\mathbf{q}} = (\dot{s}\;\;\dot{\theta})^T$ and $\dot{\mathbf{p}} = (\dot{x}\;\;\dot{y})^T$ are the joint and Cartesian space velocity vectors, respectively, and $\mathbf{A}$ and $\mathbf{B}$ are, respectively, $2 \times 2$ matrices that can be expressed as

$$\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 0 & R_1 y\cos\theta - R_1(x - R_3)\sin\theta \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 1 & 0 \\ x - R_3 - R_1\cos\theta & y - R_1\sin\theta \end{bmatrix} \tag{16}$$

$$\mathbf{J} = \mathbf{A}^{-1}\mathbf{B} = \begin{bmatrix} 1 & 0 \\ \dfrac{x - R_3 - R_1\cos\theta}{R_1 y\cos\theta - R_1(x - R_3)\sin\theta} & \dfrac{y - R_1\sin\theta}{R_1 y\cos\theta - R_1(x - R_3)\sin\theta} \end{bmatrix} \tag{17}$$
from which one can see that no parameter $L_n$ (n = 1, 2, 3) appears in this matrix.
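A direct transcription of Eq. (17) can serve as a numerical check (the function name is ours). Its common denominator is det(A), so the stationary-singularity condition det(A) = 0 shows up here as a division by zero:

```python
import math

def jacobian(x, y, theta, R1, R3):
    """2x2 Jacobian of Eq. (17), mapping Cartesian velocities to joint rates.
    The denominator d is det(A); d = 0 is the stationary singularity."""
    d = R1 * y * math.cos(theta) - R1 * (x - R3) * math.sin(theta)
    return [[1.0, 0.0],
            [(x - R3 - R1 * math.cos(theta)) / d,
             (y - R1 * math.sin(theta)) / d]]
```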
4.2 Singularity
For the above example, if R3 = 0.5 mm, the loci of point P are shown in Fig. 7. Note that R1 = 0 leads to det(A) = 0 as well. Therefore, R1 = 0 also results in this kind of singularity.
The uncertainty singularity, occurring only in closed kinematic chains, arises when B becomes singular but A remains invertible. det(B) = 0 results in y = R1 sin θ. Physically, this corresponds to the configuration in which link B1P1 is parallel to the x-axis. Two such configurations are shown in Fig. 8. In such a singularity, the loci of point P can be written as

$$C_{sec\_r}:\ (x - R_3 - R_2)^2 + y^2 = R_1^2 \tag{20}$$

and

$$C_{sec\_l}:\ (x - R_3 + R_2)^2 + y^2 = R_1^2 \tag{21}$$
Figure 7. Singular loci of point P when the robot is in the stationary singularity
Figure 8. Two configurations of the uncertainty singularity: (a) point P1 is to the right of point B1; (b) point P1 is to the left of point B1
Figure 9. Singular loci of point P when the robot is in the uncertainty singularity: (a) R1 ≥ R2; (b) R1 < R2
5. Workspace Analysis
One of the most important issues in the design process of a robot is its workspace. For parallel robots, this issue may be even more critical, since parallel robots often have a rather limited workspace.
The theoretical workspace of the studied robot is defined as the region that the output point can reach if θ changes from 0 to 2π and s between −∞ and ∞, without consideration of interference between links or of singularity. From Eq. (6), one can see that if θ is specified, the workspace of leg 1 is a circle centered at the point (R1 cos θ + R3, R1 sin θ) with radius R2. This circle is denoted as C11. If θ changes from 0 to 2π, the center point moves along a circle centered at the point (R3, 0) with radius R1, denoted as C12. Then, the workspace of the leg is the region enveloped by the circle C11 as its center moves along the circle C12. This enveloping region is an annulus bounded by the two circles C_fir_o and C_fir_i given in Eqs. (18) and (19), respectively. In particular, when R1 = R2 the workspace is the whole region bounded by the circle C_fir_o.
Considering the architecture of the studied parallel robot, we can see that the workspace of leg 1 is limited by the parameters R1 and R2, whereas the workspace of leg 2 extends along the x-axis: it could even be infinite if the input s were not limited, although in practice this cannot occur. However, since enlarging the workspace is our objective, we can always find a design in which the workspace of leg 1 is embodied by that of leg 2. In this sense, the workspace of the robot is that of leg 1, and the workspace of leg 1 is therefore our research objective.
For example, the theoretical workspace of leg 1 of the robot with parameters R1 = 1.2 mm, R2 = 0.8 mm and R3 = 0.5 mm is shown as the shaded region in Fig. 10. The theoretical workspace, and any other type of workspace of the robot, can be made to embody the corresponding workspace of leg 1 by assigning appropriate values to the parameters Ln (n = 1, 2, 3), which will be described in detail in Section 7.2. Therefore, in this chapter, the workspace of leg 1 is regarded as the workspace of the parallel robot. The theoretical workspace is actually bounded by the stationary singularity loci C_fir_i and C_fir_o. Its area can be calculated by

From Fig. 9, we can see that stationary singularity exists within the theoretical workspace.
As there exist singular loci inside the theoretical workspace, a robot moving from one point to another may have to pass through a singular configuration, that is, change from one working mode to another. In practice, changing the working mode during operation is impossible. Therefore, we should find a working space without singularity.
Figure 11. The uncertainty singular loci of a robot with different working modes
What we are concerned with here is the robot with the "+" working mode. Fig. 12 shows all the singular loci of such robots.
The theoretical workspace is divided into two parts by the singular loci shown in Fig. 12, which can be used to identify the usable workspaces of the robots with the "+" working mode and, at the same time, the down-configuration. In order to reduce the occupied space, the lower region shown in Fig. 12 is referred to as the usable workspace of the parallel robot. It is shown as the shaded region in Fig. 13. The usable workspace is actually half of the theoretical workspace. Its area can be calculated by

$$S_{uw} = \frac{\pi}{2}\left[(R_1 + R_2)^2 - (R_1 - R_2)^2\right] = 2\pi R_1 R_2 \tag{22}$$
Figure 12. Singular loci of the robot with the “+” working mode: (a) R1 ≥ R2 ; (b)
R1 < R2
Figure 13. Usable workspace of the robot with both the “+” working mode and down-
configuration: (a) R1 ≥ R2 ; (b) R1 < R2
The Jacobian matrix maps the velocity of the end-effector to the vector of actuated joint rates. This matrix is the most important quantity in the field: almost all performance indices depend on it. Therefore, based on the Jacobian matrix, we can identify which geometrical parameters should be involved in the analysis and kinematic design.
For the parallel robot considered here, there are three parameters in the Jacobian matrix (see Eq. (17)): R1, R2 and R3. Theoretically, each of the parameters R1, R2 and R3 can take any value between zero and infinity. This is the main difficulty in developing a design space that can embody all robots (with different link lengths) within a finite space. For this reason, we must eliminate the physical link size of the robots.
Let

$$D = \frac{R_1 + R_2 + R_3}{3} \tag{23}$$

and

$$r_1 = \frac{R_1}{D}, \quad r_2 = \frac{R_2}{D}, \quad r_3 = \frac{R_3}{D} \tag{24}$$
$$r_1 + r_2 + r_3 = 3 \tag{25}$$
Based on Eqs. (25) and (26), one can establish a design space as shown in Fig. 14(a), in which the triangle ABC is actually the design space of the parallel robot. In Fig. 14(a), the triangle ABC is bounded by r1, r2 and r3. Therefore it can be drawn in another form, shown in Fig. 14(b), which is referred to as the planar-closed configuration of the design space. In this design space, each point corresponds to a robot with specified values of r1, r2 and r3.
For convenience, two orthogonal coordinates r and t are utilized to express r1, r2 and r3. Thus, by using

$$r = \frac{2r_1}{\sqrt{3}} + \frac{r_3}{\sqrt{3}}, \qquad t = r_3 \tag{27}$$
the coordinates r1, r2 and r3 can be transformed into r and t. Eq. (27) is useful for constructing a performance atlas. From the analysis of singularity and workspace, we can see that the singular loci and workspace shape of a robot with r1 > r2 differ from those of a robot with r1 < r2. For convenience of analysis, the line r1 = r2 is used to divide the design space into two regions, as shown in Fig. 14(b).
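The normalization of Eqs. (23)-(25) and the planar mapping of Eq. (27) (as reconstructed here, with the √3 factors assumed) can be sketched as follows; the function names are illustrative:

```python
def normalize(R1, R2, R3):
    """Eqs. (23)-(24): non-dimensional parameters of the basic similarity robot."""
    D = (R1 + R2 + R3) / 3.0
    return D, [R / D for R in (R1, R2, R3)]      # r1 + r2 + r3 = 3 (Eq. (25))

def design_space_coords(r1, r3):
    """Eq. (27) as reconstructed: planar (r, t) coordinates in the design space."""
    return 2.0 * r1 / 3 ** 0.5 + r3 / 3 ** 0.5, r3
```

For example, the robot R1 = 1.2 mm, R2 = 0.8 mm, R3 = 0.5 mm used earlier normalizes to (r1, r2, r3) = (1.44, 0.96, 0.6).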
Figure 14. Design space of the 2-DOF translational parallel robot
Using the normalization technique in Eqs. (23) and (24), the dimensional parameters R1, R2 and R3 were changed to the non-dimensional ones r1, r2 and r3. The kinematic, singularity and workspace analysis results can be obtained by replacing Rn (n = 1, 2, 3) with rn (n = 1, 2, 3) in Eqs. (2)-(22). Then, using Eq. (21), we can calculate the theoretical workspace area of each robot in the design space shown in Fig. 14(b). As a result, the atlas of the workspace can be plotted as shown in Fig. 15. To plot the atlas, one should first calculate the theoretical workspace area of each non-dimensional robot with r1, r2 and r3 included in the design space. Using Eq. (27), one can then obtain the relationship between the area and the two orthogonal coordinates r and t (see Fig. 14(b)). This relationship is then used to plot the atlas in the planar system with r and t. The subsequent atlases are plotted using the same method. Fig. 15 shows not only the relationship between the workspace area and the two orthogonal coordinates but also that between the area and the three non-dimensional parameters. What we are really most concerned with is the latter relationship. For this reason, r and t do not appear in the figure. From Fig. 15, one can see that
Since the usable workspace area is half of the theoretical workspace area, the atlas of the usable workspace is identical to that of Fig. 15 in distribution but differs in area value. From Figs. 10 and 15, we can see that the theoretical workspaces of the robots with (r1 = u, r2 = w) and (r1 = w, r2 = u) are identical not only in area but also in shape. It is noteworthy that, although the usable workspace area atlas is also symmetric about the line r1 = r2, the usable workspace shape of the robot with r1 = u and r2 = w is no longer the same as that of the robot with r1 = w and r2 = u. This result follows readily from Fig. 13.
From Fig. 15, one can know the workspace performance of a non-dimensional parallel robot. Our objective, however, is usually a dimensional robot. If the workspace performance of a robot with parameters rn (n = 1, 2, 3) is known, one should be able to infer the corresponding performance of the robot with parameters Rn (n = 1, 2, 3). Otherwise, the normalization of geometric parameters and the developed design space would be meaningless. Comparing Eqs. (21) and (22), it is not difficult to reach the following relationship

$$S_{tw} = D^2 S'_{tw}, \qquad S_{uw} = D^2 S'_{uw} \tag{28}$$

where S′tw and S′uw are the theoretical and usable workspace areas, respectively, of a non-dimensional robot. Eq. (28) indicates that the workspace of a dimensional robot is D² times that of a non-dimensional robot. That means, from Fig. 15, one can also know the workspace performance of a dimensional robot.
Therefore, the robot with normalized parameters rn (n = 1, 2, 3) has a generalized significance. The workspace performance of such a robot indicates not only its own performance but also that of the robots with parameters Drn, i.e. Rn. Here, the robots with parameters Drn are defined as similarity robots, and the robot with parameters rn is referred to as the basic similarity robot. The analysis in the subsequent sections will show that the similarity robots are similar not only in workspace performance but also in other performances, such as the conditioning index and stiffness. For these reasons, the normalization of the geometric parameters can reasonably be applied to the optimal design of the robot, and it also simplifies the optimal design process.
From Section 5, one knows the characteristics of the workspace, especially the usable workspace, of a robot with given rn or Rn (n = 1, 2, 3). Usually, in the design process and in the global evaluation of a performance, some kind of workspace is inevitable. Unfortunately, due to the singularity, neither the theoretical workspace nor the usable workspace can be used for these purposes. Therefore, we should define a workspace in which each configuration of the robot is far from singularity. As is well known, the condition number of the Jacobian matrix is an index measuring the distance of a configuration from singularity. The local conditioning index, the reciprocal of the condition number, will therefore be used to define some good-condition indices in this section.
$$\kappa = \|\mathbf{J}\|\,\|\mathbf{J}^{-1}\| \tag{29}$$

$$\|\mathbf{J}\| = \sqrt{\mathrm{tr}\left(\mathbf{J}^T \mathbf{W} \mathbf{J}\right)}, \qquad \mathbf{W} = \frac{1}{n}\mathbf{I} \tag{30}$$

in which n is the dimension of the Jacobian matrix and I the n × n identity matrix. Moreover, one has
$$1 \le \kappa \le \infty \tag{31}$$
and hence the reciprocal of the condition number, 1/κ, is usually defined as the local conditioning index (LCI) to evaluate the control accuracy, dexterity and isotropy of a robot. This number should be kept as large as possible. If it equals unity, the matrix is isotropic and the robot is in an isotropic configuration.
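Eqs. (29)-(31) are easy to evaluate numerically; a sketch with NumPy, using the weighted Frobenius norm of Eq. (30) (the function name is ours):

```python
import numpy as np

def lci(J):
    """Local conditioning index 1/kappa of Eqs. (29)-(31), with the weighted
    Frobenius norm ||J|| = sqrt(tr(J^T W J)), W = I/n (Eq. (30))."""
    n = J.shape[0]
    W = np.eye(n) / n
    norm = lambda M: np.sqrt(np.trace(M.T @ W @ M))
    return 1.0 / (norm(J) * norm(np.linalg.inv(J)))
```

As a sanity check, an isotropic (identity) Jacobian gives LCI = 1, while any non-isotropic matrix gives a value strictly between 0 and 1.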
Let us first examine the LCI at every point in the workspace of the similarity robot with parameters R1 = 1.2 mm, R2 = 0.8 mm and R3 = 0.5 mm. Its usable workspace is shown in Fig. 13(a). Fig. 16 shows the distribution of the LCI in the workspace.
From Fig. 16 one can see that, in the usable workspace, there exist some points where the LCI is zero or very small. At these points the control accuracy of the robot will be very poor; they will not be used in practice and should be excluded in the design process. The remaining workspace, which will be used in practice, is referred to as the good-condition workspace (GCW), bounded by a specified LCI value 1/κ. The GCW is thus defined as the set of points where the LCI is greater than or equal to (GE) a specified LCI. Using a numerical method, letting the minimum LCI be 0.3, the GCW area of each basic similarity robot in the design space shown in Fig. 14(b) can be calculated.
The corresponding atlas can be then plotted as shown in Fig. 17, from which
one can see that
Since there is no singularity within the whole GCW, it can be used as a refer-
ence in the definition of a global index, e.g. global conditioning index.
The Jacobian matrix is pose-dependent (see Eq. (17)), so the LCI depends on the pose as well. This indicates that the LCI at one point may differ from that at another, i.e., the LCI is a local index. In order to evaluate the global behaviour of a robot over a workspace, a global index can be defined as (Gosselin & Angeles, 1989)
$$\eta_J = \frac{\displaystyle\int_W \frac{1}{\kappa_J}\, dW}{\displaystyle\int_W dW} \tag{32}$$
which is the global conditioning index (GCI). In Eq. (32), W is the workspace.
In particular, a large value of the index ensures that a robot can be precisely
controlled.
For the robot studied here, the workspace W in Eq. (32) can be taken as the GCW with LCI ≥ 0.3. The relationship between the GCI and the three normalized parameters rn (n = 1, 2, 3) can be studied in the design space. The corresponding atlas is shown in Fig. 18, from which one can see that the robots near r1 = 1.2 have a large GCI. Some of these robots have a very large GCW, some a very small one.
$$\dot{\mathbf{q}} = \mathbf{J}\,\dot{\mathbf{p}} \tag{33}$$
On the other hand, by virtue of what is called the duality of kinematics and statics (Waldron & Hunt, 1988), the forces and moments applied at the end-effector under static conditions are related to the forces or moments required at the actuators to maintain equilibrium by the transpose of the Jacobian matrix J. We can write

$$\boldsymbol{\tau} = \mathbf{J}^T \mathbf{f} \tag{34}$$
$$\mathbf{f} = \mathbf{K}_p\, \Delta\mathbf{q} \tag{35}$$

with

$$\mathbf{K}_p = \begin{bmatrix} k_{p1} & \\ & k_{p2} \end{bmatrix} \tag{36}$$

in which $k_{pi}$ is a scalar representing the stiffness of each actuator.
In the operational coordinate space, we define a stiffness matrix K which relates the external force vector τ to the output displacement vector D of the end-effector according to

$$\boldsymbol{\tau} = \mathbf{K}\,\mathbf{D} \tag{37}$$

Eq. (33) also describes the relationship between the joint displacement vector Δq and the output displacement vector D, i.e.,

$$\Delta\mathbf{q} = \mathbf{J}\,\mathbf{D} \tag{38}$$

so that

$$\boldsymbol{\tau} = \mathbf{J}^T \mathbf{K}_p \mathbf{J}\,\mathbf{D} \tag{39}$$

$$\mathbf{K} = \mathbf{J}^T \mathbf{K}_p \mathbf{J} \tag{40}$$
Then, we have

$$\mathbf{D} = \mathbf{K}^{-1}\boldsymbol{\tau} \tag{41}$$

$$\mathbf{D}^T\mathbf{D} = \boldsymbol{\tau}^T\left(\mathbf{K}^{-1}\right)^T\mathbf{K}^{-1}\boldsymbol{\tau} \tag{42}$$

$$\|\boldsymbol{\tau}\|^2 = \boldsymbol{\tau}^T\boldsymbol{\tau} = 1 \tag{43}$$
Under the condition (43), one can derive the extremum of the norm of vector D. In order to obtain this conditional extremum, using the Lagrange multiplier $\lambda_D$, one can construct the Lagrange equation as follows:

$$L_D = \boldsymbol{\tau}^T\left(\mathbf{K}^{-1}\right)^T\mathbf{K}^{-1}\boldsymbol{\tau} - \lambda_D\left(\boldsymbol{\tau}^T\boldsymbol{\tau} - 1\right) \tag{44}$$

$$\frac{\partial L_D}{\partial \lambda_D} = 0:\ \boldsymbol{\tau}^T\boldsymbol{\tau} - 1 = 0, \qquad \frac{\partial L_D}{\partial \boldsymbol{\tau}} = 0:\ \left(\mathbf{K}^{-1}\right)^T\mathbf{K}^{-1}\boldsymbol{\tau} - \lambda_D\boldsymbol{\tau} = \mathbf{0} \tag{45}$$

from which one can see that the Lagrange multiplier $\lambda_D$ is actually an eigenvalue of the matrix $(\mathbf{K}^{-1})^T\mathbf{K}^{-1}$. Then, the norm of vector D can be written as

$$\|\mathbf{D}\|^2 = \mathbf{D}^T\mathbf{D} = \boldsymbol{\tau}^T\left(\mathbf{K}^{-1}\right)^T\mathbf{K}^{-1}\boldsymbol{\tau} = \boldsymbol{\tau}^T\lambda_D\boldsymbol{\tau} = \lambda_D \tag{46}$$
2
Therefore, the extremum of D is the extremum of the eigenvalues of the ma-
trix (K )K
−1 T −1
. Then, if k p 1 = k p 2 = 1 and τ
2
= 1 , the maximum and minimum
deformations on the end-effector can be described as
( )
Dmax = max λD i and Dmin = ( )
min λDi (47)
Dmin are actually the maximum and minimum deformations on the end-
effector when both the external force vector and the matrix K p are unity. The
maximum and minimum deformations form a deformation ellipsoid, whose
axes lie in the directions of the eigenvectors of the matrix (K −1 ) K −1 . Its magni-
T
tudes are the maximum and minimum deformations given by Eq. (47). The
maximum deformation Dmax , which can be used to evaluate the stiffness of
the robot, is defined as the local stiffness index (LSI). The smaller the deforma-
tion is, the better the stiffness is.
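Eqs. (40)-(47) can be checked numerically; the sketch below (NumPy; the function name is illustrative) builds K = JᵀKpJ and extracts the deformation extrema:

```python
import numpy as np

def deformation_extrema(J, kp=(1.0, 1.0)):
    """Maximum and minimum end-effector deformations under a unit external
    force, Eqs. (40)-(47): K = J^T Kp J; the extrema are the square roots
    of the extreme eigenvalues of (K^-1)^T K^-1."""
    K = J.T @ np.diag(kp) @ J                    # Eq. (40)
    Kinv = np.linalg.inv(K)
    lam = np.linalg.eigvalsh(Kinv.T @ Kinv)      # symmetric, so eigvalsh is safe
    return np.sqrt(lam.max()), np.sqrt(lam.min())    # Eq. (47)
```

For an identity Jacobian both extrema equal 1, i.e., the deformation ellipsoid degenerates to a unit circle.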
Similarly, based on Eq. (47), the global stiffness index (GSI), which evaluates the stiffness of a robot over the workspace, is defined as

$$\eta_{D\max} = \frac{\displaystyle\int_W D_{\max}\, dW}{\displaystyle\int_W dW} \tag{48}$$
where, for the robot studied here, W is the GCW with LCI ≥ 0.3. Usually, $\eta_{D\max}$ can be used as the criterion to design the robot with respect to its stiffness. Normally, we expect the index value to be as small as possible. Figure 19 shows the atlas of $\eta_{D\max}$, from which one can see that the larger the parameter r3, the smaller the deformation; that is, the stiffness improves as r3 increases.
In this section, a method for the optimal kinematic design of the parallel robot will be proposed based on the results of the preceding sections. The relationships between the performance indices and the link lengths of the 2-DOF translational parallel robot have been studied, and the results have been illustrated by atlases, from which one can see at a glance which kinds of robot have better performance and which do not. This is very important for finding a globally optimal robot for a specified application. In this section, the optimum region will be shown first with respect to the possible performances.
In almost all designs, the workspace and GCI are usually considered. From the atlas of the GCW (see Fig. 17), we can see that the workspace is larger for robots with r1 near 1.5 and small r3. From the atlas of the GCI (Fig. 18), we know that robots near r1 = 1.2 have a better GCI. If the GCW area, denoted as S′GCW, is required to be greater than 6 (S′GCW > 6) and the GCI greater than 0.54, the optimum region in the design space can be obtained as the shaded region in Fig. 20(a). This region, with its performance restrictions, is denoted as

$$\Omega_{GCW-GCI} = \left\{ (r_1, r_2, r_3)\ \middle|\ S'_{GCW} > 6 \text{ and } \eta_J > 0.54 \right\}$$

One can also obtain an optimum region with better workspace and GCI, for example the region Ω′GCW−GCI where S′GCW > 7 and ηJ > 0.57, as shown in Fig. 20(b). In order to get a better result, one can shrink the optimum region by imposing stricter restrictions. Such a region contains a number of basic similarity robots, which are all possible optimal results.
Figure 20. Two optimum region examples with respect to both GCI and GCW per-
formance restrictions
After the optimum region is identified, there are two ways to achieve the optimal design result with non-dimensional parameters. One is to search for the most optimal result within the region Ω_GCW−GCI or Ω′GCW−GCI using a classical search algorithm based on an established objective function; this method yields a unique solution and is not the subject of this chapter. The other is to select a robot within the obtained optimum region. For example, the basic similarity robot with r1 = 1.2, r2 = 1.65 and r3 = 0.15 can be selected as the candidate if only workspace and GCI are involved in the design. Its GCW area and GCI value are 7.2879 and 0.5737, respectively. The robot with only rn (n = 1, 2, 3) parameters and its GCW are shown in Fig. 21.
Figure 21. The robot with parameters r1 = 1.2 , r2 = 1.65 and r3 = 0.15 in the
Ω′GCW −GCI region and its GCW when LCI ≥ 0.3
For example, the values of the GCW, GCI and GSI of the basic similarity robot with parameters r1 = 1.12, r2 = 1.68 and r3 = 0.2 in the optimum region are S′GCW = 6.8648, ηJ = 0.5753 and ηDmax = 6.5482. Fig. 23 shows the robot and its GCW when the LCI is GE 0.3.
Figure 22. One optimum region example with respect to the GCI, GCW and GSI per-
formance restrictions
Figure 23. The robot with parameters r1 = 1.12 , r2 = 1.68 and r3 = 0.2 in the
Ω GCW −GCI −GSI region and its GCW when LCI ≥ 0.3
The final objective of optimum design is to determine the link lengths of a robot, i.e. of the similarity robot. In the last section, some optimum regions were presented as examples. These regions consist of basic similarity robots with non-dimensional parameters. The selected optimal basic similarity robots are comparative results, not final results: their workspaces may be too small to be used in practice. In this section, the dimensions of an optimal robot will be determined with respect to a desired workspace.
As an example of presenting how to determine the similarity robot with respect
to the optimal basic similarity robot obtained in section 7.1, we consider the ro-
bot with parameters r1 = 1.12 , r2 = 1.68 and r3 = 0.2 selected in section 7.1.2.
The robot is from the optimum region Ω_GCW−GCI−GSI, where the workspace, GCI and stiffness are all involved in the design objective. To improve the GCI and GSI performances of the robot, requiring LCI ≥ 0.5, the values of the GCW, GCI and GSI of the robot with parameters r1 = 1.12, r2 = 1.68 and r3 = 0.2 become S′_GCW = 4.0735, η_J = 0.6977 and η_Dmax = 2.5373. Fig. 24 shows the revised GCW.
Comparing Figs. 23 and 24, it is obvious that the improvement in the GCI and GSI performances comes at the cost of workspace area.
Figure 24. GCW of the robot with parameters r1 = 1.12 , r2 = 1.68 and r3 = 0.2 when
LCI ≥ 0.5
The process to find the dimensions with respect to a desired practical workspace can be summarized as follows:
Step 1: Investigate the distribution of LCI and LSI on the GCW of the basic
similarity robot. For the aforementioned example, the distribution is
On the Analysis and Kinematic Design of a Novel 2-DOF Translational Parallel Robot 295
shown in Fig. 25 (a) and (b), respectively, from which one can see the distribution characteristics of the two performances. This investigation helps determine whether the GCW needs to be adjusted. For example, if the stiffness in the worst region of the GCW cannot satisfy the stiffness specification, one can increase the specified LCI value to reduce the GCW. Conversely, if the stiffness is permissible, one can decrease the specified LCI value to enlarge the GCW.
(a)
(b)
Figure 25. Distribution of LCI and LSI in the GCW of the basic similarity robot when
LCI ≥ 0.5: (a) LCI; (b) LSI
Step 2: Determine the factor D, which is used to normalize the parameters of a dimensional robot to non-dimensional ones. The GCW area of the selected basic similarity robot when LCI ≥ 0.5 is S′_GCW = 4.0735. If the desired workspace area S_GCW of a dimensional robot is given by the design specification, the factor D can be obtained as D = √(S_GCW / S′_GCW), which is identical to the relationship in Eq. (28). For example, if the desired workspace shape is similar to the GCW shown in Fig. 24 and its area is S_GCW = 500 mm², then D = √(S_GCW / S′_GCW) = √(500 / 4.0735) ≈ 11.08 mm.
Step 3: Obtain the corresponding similarity robot by means of the dimensional factor D. As given in Eq. (24), the relationship between a dimensional parameter and a non-dimensional one is Rn = D·rn (n = 1, 2, 3). Then, if D
is determined, Rn can be obtained. For the above example, we obtain R1 = 12.41 mm, R2 = 18.61 mm and R3 = 2.22 mm. In this step, one can
also check the performances of the similarity robot. For example, Fig. 26
(a) shows the distribution of LCI on the desired workspace, from
which one can see that the distribution is the same as that shown in
Fig. 25 (a) of the basic similarity robot. The GCI is still equal to 0.6977.
Fig. 26 (b) illustrates the distribution of LSI on the workspace. Compar-
ing Fig. 26 (b) with Fig. 25 (b), one can see that the distributions of LSI
are the same. The GSI value is still equal to 2.5373. Thus, the factor D changes neither the GCI and GSI nor the distributions of LCI and LSI on the workspace. For this reason, if a basic similarity robot is optimal, any of its similarity robots is also optimal.
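As a concrete check of Steps 2 and 3, the scaling can be sketched in a few lines of Python; the function name and structure are ours, not the chapter's, and the numeric values come from the worked example in the text:

```python
import math

def scale_similarity_robot(s_gcw_desired, s_gcw_basic, r):
    """Scale a basic similarity robot (non-dimensional parameters r)
    to a desired workspace area. Areas scale with D**2, so
    D = sqrt(S_GCW / S'_GCW), while lengths scale linearly, R_n = D * r_n."""
    d = math.sqrt(s_gcw_desired / s_gcw_basic)
    return d, [d * rn for rn in r]

# Values from the worked example in the text:
D, R = scale_similarity_robot(500.0, 4.0735, [1.12, 1.68, 0.2])
print(round(D, 2))               # 11.08 (mm)
print([round(x, 2) for x in R])  # [12.41, 18.61, 2.22] (mm)
```

Because D only rescales lengths, the non-dimensional indices (GCI, GSI) are unchanged, which is exactly the property the text relies on.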
Step 4: Determine the parameters Ln (n = 1, 2, 3) that relate to leg 2. Since these parameters do not appear in the Jacobian matrix, they are not objects of the optimization; they can be determined with respect to the desired workspace. Strictly speaking, the workspace analyzed in the former sections is that of leg 1. As mentioned in Section 5.1, to maximize the workspace of the 2-DOF translational parallel robot and, at the same time, to reduce cost, the parameters Ln (n = 1, 2, 3) should be chosen such that the workspace of leg 2 just encloses the workspace of leg 1. To this end, the parameters should satisfy the following equations
Ymax = L1 + L2 − L3 + R3 (49)
Ymin = L1 − L2 − L3 + R3 (50)
in which Ymax and Ymin are y-coordinates of the topmost and lowest
points of the desired workspace. For the desired GCW shown in Fig.
26, there are Ymax = -3.32mm and Ymin = -29.92mm . Substituting them
On the Analysis and Kinematic Design of a Novel 2-DOF Translational Parallel Robot 297
Then, the parameters of the optimal robot with respect to the desired work-
space SGCW = 500mm are R1 = 12.41mm , R2 = 18.61mm , R3 = 2.22 mm ,
L1 = L2 = 13.30mm , L3 = 32.14mm , θ ∈ [- 83.3040°, 81.7649°] and
s ∈ [- 6.10mm, 25.49mm ] . It is noteworthy that this result is only one of all
possible solutions. If the designer picks another basic similarity robot from the optimum region, the final result will be different. This is actually one of the advantages of this optimal design method: the designer can adjust the final result to fit his design conditions. It is also worth noting that, in practice, the desired workspace shape will usually not be that shown in Fig. 26; it is usually a regular shape, for example a circle, a square or a rectangle. In this case, a corresponding similar workspace should first be identified within the GCW of the basic similarity robot in Step 2. This workspace, a subset of the GCW, is normally just enclosed by the GCW. The identified workspace area is then used in Step 2 to determine the factor D with respect to the desired workspace area.
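The back-substitution of Step 4 can be sketched as follows. This is an illustrative Python fragment that assumes the symmetric choice L1 = L2 used in the chapter's example:

```python
def leg2_parameters(y_max, y_min, r3):
    """Solve Eqs. (49)-(50) for the leg-2 link lengths, assuming L1 = L2:
         Ymax = L1 + L2 - L3 + R3
         Ymin = L1 - L2 - L3 + R3
    Subtracting (50) from (49) gives L2; back-substitution gives L3."""
    l2 = (y_max - y_min) / 2.0   # from (49) - (50): Ymax - Ymin = 2*L2
    l1 = l2                      # symmetric design assumption
    l3 = l1 + l2 + r3 - y_max    # back-substitute into (49)
    return l1, l2, l3

# Values from the worked example in the text:
l1, l2, l3 = leg2_parameters(-3.32, -29.92, 2.22)
print(round(l1, 2), round(l2, 2), round(l3, 2))  # 13.3 13.3 32.14
```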
(a) (b)
Figure 26. Distribution of LCI and LSI in the desired workspace of the obtained simi-
larity robot: (a) LCI; (b) LSI
10
1. Introduction
derived by the authors (Moustris & Tzafestas, 2005; Zavlangas & Tzafestas, 2000; 2001; 2002). To help the reader appreciate the importance of the techniques presented in this chapter, a short review is first included of the general robot motion planning problem, along with the basic concepts and some recent results. A promising practical approach is to use fuzzy logic along the lines of behaviour-based system design, which employs Brooks' subsumption architecture (Brooks, 1986; Izumi & Watanabe, 2000; Topalov & Tzafestas, 2001; Watanabe et al., 1996; Watanabe et al., 2005).
Section 2 provides an overview of robot motion planning for industrial and mobile robots. Section 3 presents the authors' technique for fuzzy path planning and navigation of industrial manipulators. Section 4 extends this technique to mobile robots and discusses the integration of global and local path planning
and navigation. Global path planning uses topological maps for representing
the robot’s environment at the global level, in conjunction with the potential
field method. Section 5 presents a representative set of experimental results for
the SCARA Adept 1 robotic manipulator and the Robuter III mobile robot
(which can be equipped with a robotic arm). Results for both local and global
path planning / navigation are included for the Robuter III robot. Finally, a
general discussion on the results of the present technique is provided, along
with some indications for future directions of research.
Robot motion planning techniques have received a great deal of attention over the last twenty years. They can roughly be divided into two categories: global and local. Most of the research on global techniques has focused on off-line planning in static environments; a plan is then computed as a geometric path. An important concept developed by this research is the configuration space, or C-space, of a robot (Latombe, 1991; Lozano-Perez, 1983).
The global techniques, such as road map (Latombe, 1991), cell decomposition (La-
tombe, 1991) and potential fields methods (Khatib, 1986), generally assume that
a complete model of the robot’s environment is available.
The roadmap approach to path planning consists of capturing the connectivity of
the robot’s free space in a network of one-dimensional curves (called the
roadmap), lying in the free space. Once the roadmap has been constructed, it is
used as a set of standardized paths. Path planning is thus reduced to connect-
ing the initial and goal configuration to points in the roadmap and searching it
for a path between these points (Latombe, 1991).
Industrial and Mobile Robot Collision–Free Motion Planning…. 303
Cell decomposition methods are perhaps the motion planning methods that have
been most extensively studied so far (Latombe, 1991). They consist of decom-
posing the robot’s free space into simple regions, called cells, such that a path
between any two configurations in a cell can be easily generated. A nondi-
rected graph representing the adjacency relation between the cells is then con-
structed and searched. This graph is called the connectivity graph. Its nodes are
the cells extracted from the free space, and two nodes are connected by a link if and only if the corresponding cells are adjacent. The outcome of the search is a sequence of cells called a channel; a continuous free path can be computed from this sequence (Latombe, 1991). A straightforward approach to motion planning is to discretize the configuration space into a fine regular grid of configurations and to search this grid for a free path. This approach requires powerful
heuristics to guide the search. Several types of heuristics have been proposed.
The most successful ones take the form of functions that are interpreted as po-
tential fields (Latombe, 1991). The robot is represented as a point in configura-
tion space, moving under the influence of an artificial potential produced by
the goal configuration and the C-obstacles. Typically, the goal configuration
generates an “attractive potential” which pulls the robot towards the goal, and
the C-obstacles produce a “repulsive potential” which pushes the robot away
from them. The generated gradient of the total potential is treated as an artifi-
cial force applied to the robot. At every configuration, the direction of this
force is considered to be the most promising direction of motion.
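As an illustration only, a minimal 2-D version of such a potential field step might look as follows; the gains, influence distance and step size are arbitrary choices of ours, not values from the literature:

```python
import math

def potential_field_step(pos, goal, obstacles, step=0.05,
                         k_att=1.0, k_rep=0.2, rho0=1.0):
    """One gradient step of a classic attractive/repulsive potential field.
    The goal exerts an attractive force; each obstacle within influence
    distance rho0 exerts a repulsive one. All constants are illustrative."""
    # attractive force pulls towards the goal
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 0 < rho < rho0:
            # repulsive force grows as the obstacle is approached
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 2
            fx += mag * dx / rho
            fy += mag * dy / rho
    norm = math.hypot(fx, fy) or 1.0
    # move a fixed step along the force direction (steepest descent)
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos = (0.0, 0.0)
for _ in range(200):
    pos = potential_field_step(pos, goal=(2.0, 0.0), obstacles=[(1.0, 0.3)])
```

Following the steepest gradient, the point is deflected below the obstacle and ends up near the goal; as the text notes, such local descent can still get trapped in local minima for less benign obstacle layouts.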
The advantage of global approaches lies in the fact that a complete trajectory
from the starting point to the target point can be computed off-line. However,
global approaches are not appropriate for fast obstacle avoidance. Their
strength is global path planning. Additionally, these methods prove problematic when the global world model is inaccurate or simply not available, as is typically the case in densely populated environments. Some researchers
have shown how to update global world models based on sensory inputs, us-
ing probabilistic representations. A second disadvantage of global methods is
their low speed due to the inherent complexity of robot motion planning. This
is particularly the case if the underlying world model changes with time, be-
cause of the resulting requirement for repeated adjustments of the global plan.
In such cases, planning using a global model is usually too expensive to be
done repeatedly.
Local approaches, on the other hand, use only a small fraction of the world
model to generate robot control. This comes at the obvious disadvantage that
they cannot produce optimal solutions. Local approaches are easily trapped at
local minima. However, the key advantage of local techniques over global ones
lies in their low computational complexity, which is particularly important
when the world model is updated frequently based on sensor information. For
example, potential field methods determine the next step by assuming that obstacles exert repulsive forces on the robot while the target location exerts an attractive force. These methods are extremely fast, and they typically consider
only a small subset of obstacles close to the robot. However, such methods
have often failed to find trajectories between closely spaced obstacles; they also
can produce oscillatory behaviour in narrow spaces.
To be useful in the real world, mobile robots need to move safely in unstruc-
tured environments and achieve their given goals despite unexpected changes
in their surroundings. The environments of real robots are rarely predictable
or perfectly known so it does not make sense to make precise plans before
moving. The robot navigation problem can be decomposed into the following
two problems (Ratering & Gini, 1995) :
• Getting to the goal. This is a global problem because short paths to the goal
generally cannot be found using only local information. The topology of
the space is important in finding good routes to the goal.
• Avoiding obstacles. This can often be solved using only local information,
but for an unpredictable environment it cannot be solved in advance be-
cause the robot needs to sense the obstacles before it can be expected to
avoid them.
Over the years, robot collision avoidance has been a component of high-level
controls in hierarchical robot systems. Collision avoidance has been treated as
a planning problem, and research in this area was focused on the development
of collision-free path planning algorithms. These algorithms aim at providing
the low-level control with a path that will enable the robot to accomplish its
assigned task free from any risk of collision. However, this places limits on the
robot’s real-time capabilities for precise, fast, and highly interactive operations
in a cluttered and evolving environment. Collision avoidance at the low-level
control is not intended to replace high-level functions or to solve planning
problems. The purpose is to make better use of low-level control capabilities in
performing real-time operations. A number of different architectures for
autonomous robot navigation have been proposed in the last twenty years (Latombe, 1991; Fujimura, 1991; Tzafestas, 1999). These include hierarchical architectures that partition the robot's functionalities into high-level (model and
plan) and low-level (sense and execute) layers; behaviour – based architectures
that achieve complex behaviour by combining several simple behaviour-
producing units; and hybrid architectures that combine a layered organization
with a behaviour-based decomposition of the execution layer (see e.g., (Izumi
& Watanabe, 2000; Watanabe et al., 1996; Topalov & Tzafestas, 2001; Watanabe
et al., 2005; Lozano-Perez, 1983; Khatib, 1986; Ratering & Gini, 1995; Erdmann
& Lozano-Perez, 1987; Griswold & Elan, 1990; Gil de Lamadrid & Gini, 1990;
Firby, 1987; Gat, 1991; Sugeno & Nishida, 1985; Yen & Pfluger, 1992)). While
the use of hybrid architectures is gaining increasing consensus in the field, a
number of technological gaps still remain.
As mentioned in Section 2.1, the classical approaches can be mainly divided
into two categories : global path planning and local navigation (Latombe, 1991).
The global approaches presuppose that a complete representation of the configuration space has been computed before looking for a path. They are complete in the sense that if a path exists it will be found. Unfortunately, computing the complete configuration space is very time consuming; worse still, the complexity of this task grows exponentially as the number of degrees of freedom increases. Consequently, most robot path planners today are used
off-line. The planner is equipped with a model of the environment and pro-
duces a path which is passed to the robot controller for execution. In general,
the time necessary to achieve this, is not short enough to allow the robot to
move in dynamic environments. The local approaches need only partial
knowledge of the robot’s workspace. The decisions to move the robot are
taken using local criteria and heuristics to choose the most promising direction. Consequently, the local methods are much faster. Unfortunately, they are not complete: it may happen that a solution exists yet cannot be found. The local approaches treat planning as an optimization problem, where finding a path to the goal configuration corresponds to the optimization of some given function. As optimization techniques, the local approaches are prone to getting trapped in local minima, where a path to the goal has not been found and from which it is impossible, or at least very difficult, to escape.
From the above, it is very clear that both global and local techniques have many advantages, as well as important disadvantages. The output of a global
path planner is a continuous path along which the robot will not collide with
obstacles. However, any model of the real world will be incomplete and inac-
curate, thus collisions may still occur if the robot moves blindly along such a
path. One conventional application is for the robot to track the global path.
More recently, work has been done on increasing the level of competence, by
including real-time collision avoidance capabilities. Such local or reactive be-
haviours operate in real time but cannot solve the global problem of moving to
an arbitrary goal. It is very clear that to build a complete system the above approaches must be combined. A path planner must provide the robot with a
global path to the goal. A local controller then moves the robot along the
global path while handling small changes in the environment and unexpected
or moving obstacles.
Some researchers have solved the navigation problem by solving these two
sub-problems one after the other. A path is first found from the robot’s initial
position to the goal and then the robot approximates this path as it avoids ob-
stacles. This method is restrictive in that the robot is required to stay fairly
close to or perhaps on a given path. This would not work well if the path goes
1. the distance between the link and the nearest obstacle, d_j, and
2. the difference between the current link configuration and the target configuration, Δθ_j = θ_j − θ_{j,target}.
The output variable of each unit is the motor command τ j . All these variables
can be positive or negative, i.e., they inform both about the magnitude and the
sign of displacement relative to the link – left or right. The motor command
which can be interpreted as an actuation for the link motor is fed to the ma-
nipulator at each iteration (Figure 1). For the calculation of the distance, the
only obstacles considered are those which fall into a bounded area surround-
ing each link and move along with it. In this implementation, each such area is
chosen to be a cylindrical volume around each link. The area is as long as the
link and reaches up to a predefined horizon. This area can be seen as a simpli-
fied model for the space scanned by ranging sensors (for example, ultra-sonic
sensors) attached to the sides of a link (Pedrycz, 1995). Of course, other shapes
to describe the 3-dimensional scan areas are conceivable. It is, for example, ad-
visable to deal with the blind zones near the joints when the magnitude of the
angles is large so as to assure that small moving obstacles are not missed by
the algorithm (Pedrycz, 1995).
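A minimal 2-D sketch of such a scan-volume test (segment plus radius, our simplification of the cylindrical volume around a link) could be:

```python
import math

def in_scan_volume(link_start, link_end, obstacle, horizon):
    """Return True if a point obstacle falls inside the scan volume
    around a link: as long as the link, with radius equal to the
    predefined horizon. Names and the 2-D simplification are ours."""
    ax, ay = link_start
    bx, by = link_end
    px, py = obstacle
    abx, aby = bx - ax, by - ay
    length_sq = abx * abx + aby * aby
    # project the obstacle onto the link axis
    t = ((px - ax) * abx + (py - ay) * aby) / length_sq
    if not 0.0 <= t <= 1.0:
        return False  # beyond the link's ends: outside the cylinder
    cx, cy = ax + t * abx, ay + t * aby
    # inside if the perpendicular distance is within the horizon
    return math.hypot(px - cx, py - cy) <= horizon

print(in_scan_volume((0, 0), (1, 0), (0.5, 0.2), horizon=0.3))  # True
print(in_scan_volume((0, 0), (1, 0), (0.5, 0.5), horizon=0.3))  # False
```

Only obstacles passing this test would be handed to the fuzzy unit; everything else is reported as "far away".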
Each fuzzy unit receives via an input the difference between target and actual con-
figuration, and, via a second input, two values in a sequential way represent-
ing the distance between the corresponding link and the nearest obstacle on
the left and on the right of this link. If no obstacle is detected inside the scan
area, the fuzzy unit is informed of an obstacle in the far distance. Additionally,
proximal units are informed about obstacles in the vicinity of more distal links.
Besides an input from ultrasonic sensors, a camera can be used to acquire the
environment. Either a stationary camera or a set of cameras which oversee the
entire workspace can be utilised (Pedrycz, 1995; Jaitly & Fraser, 1996). The task
of each fuzzy unit is to provide a control function which produces an appropriate motor command from the given inputs. In broad terms, the control function can be described as follows: on the one hand, the function has to lead the corresponding link to its attracting end-position; on the other hand, it has to force the link to back up when approaching an obstacle, which conveys a repelling influence. The fuzzy rule base (which represents the control function in each fuzzy unit) is built up from common-sense rules.
In our particular implementation, at each iteration the distances of the nearest obstacle on the left (d_{j,left}) and on the right (d_{j,right}) of link l_j are fed sequentially into the fuzzy unit. This process could also be carried out in a parallel
fashion where two equivalent fuzzy controllers compute the response for the
left and the right obstacle separately. The resulting two motor commands are
superimposed, hence, both obstacles influence the final motor command
which is applied to the link. The use of this method guarantees that the repul-
sion caused by one obstacle on one side of the link does not result in a collision
with a nearby obstacle on the opposite side. Only those obstacles are consid-
ered which are the nearest on the left and right.
In addition, fuzzy units of distal links communicate information about the dis-
tance to their nearest obstacles on the left and right to units of more proximal
links. Once sent to fuzzy units of more proximal links, this information can be
used by the decision process of those units to slow down or even reverse the motion of the more proximal links. Without this propagation of information
the control strategy might fail in situations where one obstacle is “known”
only to a fuzzy unit of a distal link, while proximal links continue their motion
based on their local environment dictating an adverse motion for the rest of
the arm. This is especially important, since the same change of angle occurring
at a proximal link and at a distal link produces a different velocity at the ma-
nipulator’s tip. Thus, the motion of a proximal link might not be sufficiently
compensated by an adverse motion at a more distal link. Fuzzy units are only
fed with the distance values of those obstacles which are inside the scan range.
If no obstacle is detected inside a scan range, the fuzzy unit is informed of an
obstacle which is far left or far right, respectively.
The first input of each fuzzy unit is the difference between the actual angle and the target angle, Δθ_j = θ_j − θ_{j,target} ∈ Θ_j, j = 1, …, n (n is the number of links). The value Δθ_j is positive if the target is on the right, and negative if the target is on the left. The second input receives values describing the distance between link l_j and the nearest obstacles on the left and right in the "scanned" region, d_j ∈ D_j. An obstacle on the left produces a negative distance value, while an obstacle on the right produces a positive one. The single output is the motor command τ_j ∈ T_j. A positive motor command moves the link to the left and a negative one to the right (Althoefer, 1996; Althoefer & Fraser, 1996).
Each universe of discourse D_j is partitioned by fuzzy sets \mu_1^{(j)}, \ldots, \mu_{\bar{p}_j}^{(j)}. Each of the sets \mu_{p_j}^{(j)}, p_j = 1, \ldots, \bar{p}_j, represents a mapping \mu_{p_j}^{(j)}(d_j): D_j \to [0, 1] by which d_j is associated with a number in the interval [0, 1] indicating to what degree d_j is a member of the fuzzy set. Since d_j is a signed value, "close_left", for example, may be considered as a particular fuzzy value of the variable distance, and each d_j is assigned a number \mu_{close\_left}(d_j) \in [0, 1] which indicates the extent to which d_j is considered to be close_left (Mamdani & Assilian, 1981). In an equivalent way, fuzzy sets \nu_1^{(j)}, \ldots, \nu_{\bar{q}_j}^{(j)} can be defined over the universe of discourse \Theta_j. The Fuzzy Navigator is based on the Mamdani fuzzy model (Mamdani, 1974) and has an output universe which is partitioned into fuzzy sets \tau_{r_j}.
There is a variety of functions that can be employed to represent fuzzy sets (Mamdani & Assilian, 1981). In the proposed controller, asymmetrical trapezoidal functions were employed. The parameters ml^{(j)} and mr^{(j)}, the x-coordinates of the left and right zero crossings, respectively, and mcl^{(j)} and mcr^{(j)}, the x-coordinates (left and right) where the fuzzy set becomes 1, define the following trapezoidal functions:

\mu_{p_j}^{(j)}(d_j) = \begin{cases} \dfrac{d_j - ml_{p_j}^{(j)}}{mcl_{p_j}^{(j)} - ml_{p_j}^{(j)}} & \text{if } ml_{p_j}^{(j)} < d_j \le mcl_{p_j}^{(j)}, \\ 1 & \text{if } mcl_{p_j}^{(j)} < d_j \le mcr_{p_j}^{(j)}, \\ \dfrac{d_j - mr_{p_j}^{(j)}}{mcr_{p_j}^{(j)} - mr_{p_j}^{(j)}} & \text{if } mcr_{p_j}^{(j)} < d_j \le mr_{p_j}^{(j)}, \\ 0 & \text{otherwise}, \end{cases}    (1)

for the interior fuzzy sets,

\mu_1^{(j)}(d_j) = \begin{cases} 1 & \text{if } ml_1^{(j)} < d_j \le mcr_1^{(j)}, \\ \dfrac{d_j - mr_1^{(j)}}{mcr_1^{(j)} - mr_1^{(j)}} & \text{if } mcr_1^{(j)} < d_j \le mr_1^{(j)}, \\ 0 & \text{otherwise}, \end{cases}    (2)

for the leftmost fuzzy set, and

\mu_{\bar{p}_j}^{(j)}(d_j) = \begin{cases} \dfrac{d_j - ml_{\bar{p}_j}^{(j)}}{mcl_{\bar{p}_j}^{(j)} - ml_{\bar{p}_j}^{(j)}} & \text{if } ml_{\bar{p}_j}^{(j)} < d_j \le mcl_{\bar{p}_j}^{(j)}, \\ 1 & \text{if } mcl_{\bar{p}_j}^{(j)} < d_j \le mr_{\bar{p}_j}^{(j)}, \\ 0 & \text{otherwise}, \end{cases}    (3)

for the rightmost fuzzy set.
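A direct transcription of the interior trapezoid of Eq. (1) into Python might read as follows; the parameter names follow the text, while the example "close_left" set on the signed distance universe is invented for illustration:

```python
def trapezoid(d, ml, mcl, mcr, mr):
    """Asymmetrical trapezoidal membership function of Eq. (1):
    rises linearly from ml to mcl, equals 1 between mcl and mcr,
    and falls linearly to 0 at mr; 0 outside (ml, mr]."""
    if ml < d <= mcl:
        return (d - ml) / (mcl - ml)    # rising edge
    if mcl < d <= mcr:
        return 1.0                      # plateau
    if mcr < d <= mr:
        return (d - mr) / (mcr - mr)    # falling edge
    return 0.0

# e.g. a hypothetical "close_left" set on the signed distance universe:
print(trapezoid(-0.5, -2.0, -1.0, -0.4, 0.0))  # 1.0 (fully close_left)
print(trapezoid(-1.5, -2.0, -1.0, -0.4, 0.0))  # 0.5 (partial membership)
```

The one-sided sets of Eqs. (2) and (3) follow by replacing the rising or falling edge, respectively, with the constant 1.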
Note that the two fuzzy sets of Equations (2) and (3) are one-sided ("shoulder") sets at the left and right boundaries of the universe of discourse D_j.
The fuzzy sets for \Delta\theta_j can be defined in the same way. The fuzzy sets \mu_1^{(j)}, \ldots, \mu_{\bar{p}}^{(j)} and \nu_1^{(j)}, \ldots, \nu_{\bar{q}}^{(j)} for link 2, as functions of the two inputs d_j and \Delta\theta_j, are shown in Figure 2, together with the output fuzzy set.
The first diagram (a) is the fuzzy set of the distance between the link and the
obstacle, and the second one (b) is the fuzzy set of the difference between the
actual and target configuration. The output fuzzy set is shown in the third dia-
gram (c).
In addition to the two inputs d_j and \Delta\theta_j, each fuzzy unit (apart from the most distal one) uses the distance fuzzy sets of the more distal links, \mu_{p_{j+1}}^{(j+1)}, \ldots, \mu_{p_n}^{(n)}, for decision making, to ensure that proximal links are slowed down in case a more distal link is about to collide with an obstacle. Thus, each unit uses, besides its own sets \mu_{p_j}^{(j)} and \nu_{q_j}^{(j)}, the fuzzy sets \mu_{p_k}^{(k)}, k = j + 1, \ldots, n. Each of the fuzzy sets \mu_{p_k}^{(k)} and \nu_{q_j}^{(j)} is associated with a linguistic term A_{p_k}^{(k)} or B_{q_j}^{(j)}, respectively. Thus, for link l_j the linguistic control rules R_1^{(j)}, \ldots, R_{\bar{r}_j}^{(j)}, which constitute the rule base, take the form

R_{r_j}^{(j)}: \text{IF } d_j \text{ is } A_{p_j}^{(j)} \text{ AND } \ldots \text{ AND } d_n \text{ is } A_{p_n}^{(n)} \text{ AND } \Delta\theta_j \text{ is } B_{q_j}^{(j)} \text{ THEN } \tau_j = \tau_{r_j},

where r_j = 1, \ldots, \bar{r}_j, \bar{r}_j is the number of rules for the fuzzy unit of link l_j, and \tau_{r_j} is a numerical entry in the rule base used in the defuzzification process
(Equation (5)). The most popular methods to calculate the fuzzy intersection
(fuzzy – AND) are the minimum and product operators (Tzafestas & Venet-
sanopoulos, 1994; Zadeh, 1965; 1973). If the minimum operator is used, the
minimum of the inputs is chosen. If the product operator is chosen, the inputs
are multiplied with each other. While the result of the first approach contains
only one piece of information, the second approach produces results which are
influenced by all inputs (Brown, 1994).
Here, the fuzzy intersection is calculated by using the product operator “∗” :
\sigma_{r_j} = \mu_{p_j, r_j}^{(j)}(d_j) \wedge \cdots \wedge \mu_{p_n, r_j}^{(n)}(d_n) \wedge \nu_{q_j, r_j}^{(j)}(\Delta\theta_j) = \mu_{p_j, r_j}^{(j)}(d_j) * \cdots * \mu_{p_n, r_j}^{(n)}(d_n) * \nu_{q_j, r_j}^{(j)}(\Delta\theta_j)    (4)
The output of the unit is given by the centroid defuzzification over all rules (Kosko, 1992):

\tau_j = \frac{\sum_{r_j=1}^{\bar{r}_j} \tau_{r_j} \, \sigma_{r_j}}{\sum_{r_j=1}^{\bar{r}_j} \sigma_{r_j}}    (5)
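Product inference and centroid defuzzification, as in Eqs. (4) and (5), can be sketched as follows; the rule labels, membership values and function name are invented for illustration:

```python
def fuzzy_unit_output(rules, memberships):
    """Mamdani-style inference: each rule's activation sigma is the
    product of its antecedent membership degrees (product AND), and the
    crisp output is the activation-weighted average of the rule outputs
    tau_r (centroid defuzzification)."""
    num = den = 0.0
    for labels, tau_r in rules:
        sigma = 1.0
        for label in labels:          # product over all antecedent sets
            sigma *= memberships[label]
        num += tau_r * sigma
        den += sigma
    return num / den if den else 0.0

# Invented membership degrees for the current crisp inputs:
mu = {"target_left": 0.8, "target_right": 0.2,
      "obstacle_far": 0.9, "obstacle_close_left": 0.1}
rules = [(("target_left", "obstacle_far"), -1.0),        # head for target
         (("target_right", "obstacle_far"), 1.0),
         (("target_left", "obstacle_close_left"), 1.0)]  # back off
print(round(fuzzy_unit_output(rules, mu), 3))  # -0.469
```

Because every rule contributes in proportion to its activation, the obstacle-avoidance rule softens, rather than overrides, the target-seeking command.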
Basically, the technique to be described for mobile robot fuzzy path planning and navigation is the same as that described in Section 3 for industrial manipulators, i.e. it is based on Khatib's potential field method (Khatib, 1986) and on Brooks' subsumption architecture (Brooks, 1986).
Khatib computes an artificial potential field that has a strong repelling force in
the vicinity of obstacles and an attracting force produced by the target location.
The superposition of the two forces creates a potential field, which incorpo-
rates information about the environment. Following the steepest gradient from
a start position, a path can be found that guides the robot to the target position
avoiding obstacles. In our approach, the amount of computation required is reduced by using only the nearest obstacles to determine the direction of motion. The main idea of Brooks is a collection of modules, which are
interconnected on different layers with different hierarchies. These modules
are for example wall following, obstacle avoidance, goal reaching, etc. Depending
on sensory input, a module becomes active and generates a command for the
robot. While Brooks’ system resembles an expert system where for any input
signal one specific reaction module or a specific combination of modules is ac-
tive, our fuzzy approach is a parallel processing strategy where each input
contributes to the final decision (see Section 3.2).
The technique is based on two fuzzy-based controllers, one for steering control and the other for velocity control. The steering controller has three principal inputs and the velocity controller two; both units are described below.
(a) (b)
Figure 3. (a) The Robosoft Robuter III mobile robot of IRAL / NTUA connected to the
corresponding fuzzy-based obstacle avoidance unit.
The steering control unit receives via one input the angle between the robot's direction and the straight line connecting the current position of the robot and the goal configuration, θ_j = α_j − β_j, and via further inputs the distance and the angle to the nearest obstacle, (d_j, γ_j) (see (b)). If no obstacle is detected inside the scan area, the
fuzzy unit is informed of an obstacle in the far distance. The output variable of
this unit is the required change of angle Δθ j , and can be considered as a com-
mand for the robot’s steering actuators. The velocity control unit has two in-
puts : the distance between the robot and the nearest obstacle d j , and the dis-
tance between the robot and the goal configuration d g j . The output variable of
this unit is an acceleration command Δv j and can be considered as a command
for the robot’s driving actuators.
The obstacles considered are those that fall into a confined area surrounding
the robot and moving along with it. Here, this area is chosen to be a cylindrical
volume around the mobile platform. This area is regarded as a simplified
model for the space scanned by ranging sensors (for example ultrasonic sen-
sors) attached to the sides of the robot (Pedrycz, 1995). Besides an input from
ultrasonic sensors, a camera can also be used to acquire the environment.
Mobile robots are usually equipped with a pan/tilt platform where a camera is
mounted. This camera can be utilized as shown in (Pedrycz, 1995; Jaitly et al.,
1996; Kamon & Rivlin, 1997). If no obstacle is detected inside the scan area, the
fuzzy units are informed of an obstacle in the far distance. The task of the
fuzzy units is to provide a control function, that produces appropriate motor
commands from the given inputs. This control function on the one hand has to
lead the mobile robot to its attracting goal–position, and on the other hand it
has to force the robot to back up when approaching an obstacle which conveys
a repelling influence. The fuzzy rule-base (which represents the control function
in the fuzzy unit) is constructed using common-sense rules or by a neural
network training algorithm (see, e.g., (Tzafestas & Zavlangas, 1999)). An
alternative fuzzy motion control scheme for mobile robots, which employs the
sliding-mode control principle, can be found in (Rigatos & Tzafestas, 2000).
value of the variable distance, and each $d_j$ is associated with, e.g., a number in
the interval $[0,1]$ indicating to what degree $d_j$ is a member of the fuzzy set.
Since $d_j$ is a measure of distance, for the fuzzy set “very_close” it may be
assigned a number $\mu_{\mathrm{very\_close}}(d_j) \in [0,1]$ which indicates the extent to which this
particular $d_j$ is considered to be “very_close” (Mamdani & Assilian, 1981).
Figure 4. Fuzzy sets for the mobile robot: (a) distance to obstacle, (b) angle between
robot and goal position, (c) angle between robot and obstacle, and (d) steering motor
command. Note that the output is not partitioned into fuzzy sets, but consists of crisp
values.
Similarly, fuzzy sets $v_1^{(j)}, \ldots, v_{\bar q_j}^{(j)}$ can be defined over the universe of discourse
$\Theta_j$, which represents the angle between the robot’s direction and the straight
line connecting the current position of the robot and the goal configuration:
$\theta_j \in \Theta_j$. Finally, fuzzy sets $u_1^{(j)}, \ldots, u_{\bar g_j}^{(j)}$ can be defined over the universe of
discourse $\Gamma_j$, which represents the angle to the nearest obstacle: $\gamma_j \in \Gamma_j$. In
contrast to Mamdani’s controller, Sugeno’s controller (see (Sugeno, 1985; Sugeno &
Murakami, 1984; Tzafestas & Zikidis, 2001)), of which ours is an example, has
an output set which is not partitioned into fuzzy sets (see Fig. 2). Thus, the rule
conclusions merely consist of scalars $\Delta\theta_{r_j}$, $r_j = 1, \ldots, \bar r_j$.
The fuzzy sets $\mu_{p_j}^{(j)}$, $p_j = 1, \ldots, \bar p_j$, are described by asymmetrical triangular and
trapezoidal functions. Defining the parameters $ml_{p_j}^{(j)}$ and $mr_{p_j}^{(j)}$ as the
x-coordinates of the left and right zero crossings, respectively, and $mcl_{p_j}^{(j)}$ and
$mcr_{p_j}^{(j)}$ as the x-coordinates of the left and right ends of the trapezoid’s plateau,
the trapezoidal functions can be written as:

$$
\mu_{p_j}^{(j)}(d_j) =
\begin{cases}
\max\!\big((d_j - ml_{p_j}^{(j)})/(mcl_{p_j}^{(j)} - ml_{p_j}^{(j)}),\, 0\big) & \text{if } d_j < mcl_{p_j}^{(j)} \\
1 & \text{if } mcl_{p_j}^{(j)} \le d_j \le mcr_{p_j}^{(j)} \\
\max\!\big((d_j - mr_{p_j}^{(j)})/(mcr_{p_j}^{(j)} - mr_{p_j}^{(j)}),\, 0\big) & \text{if } d_j > mcr_{p_j}^{(j)}
\end{cases}
\quad (6)
$$
At the left and right sides of the interval, the functions are continued as
constant values of magnitude one, i.e.:

$$
\mu_1^{(j)}(d_j) =
\begin{cases}
1 & \text{if } d_j \le mcr_1^{(j)} \\
\max\!\big((d_j - mr_1^{(j)})/(mcr_1^{(j)} - mr_1^{(j)}),\, 0\big) & \text{if } d_j > mcr_1^{(j)}
\end{cases}
\quad (7)
$$

and

$$
\mu_{\bar p_j}^{(j)}(d_j) =
\begin{cases}
\max\!\big((d_j - ml_{\bar p_j}^{(j)})/(mcl_{\bar p_j}^{(j)} - ml_{\bar p_j}^{(j)}),\, 0\big) & \text{if } d_j \le mcl_{\bar p_j}^{(j)} \\
1 & \text{if } d_j > mcl_{\bar p_j}^{(j)}
\end{cases}
\quad (8)
$$
The fuzzy sets for $\theta_j$ and $\gamma_j$ are defined analogously. Figure 4 shows the fuzzy
sets $\mu_1^{(j)}, \ldots, \mu_{\bar p_j}^{(j)}$, $v_1^{(j)}, \ldots, v_{\bar q_j}^{(j)}$ and $u_1^{(j)}, \ldots, u_{\bar g_j}^{(j)}$.
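The trapezoid of Eq. (6), together with the constant continuations of Eqs. (7) and (8) at the ends of the interval, can be sketched as a single function; the breakpoint values used in the usage note are invented.

```python
def trapezoid(d, ml, mcl, mcr, mr):
    """Asymmetrical trapezoidal membership function, as in Eq. (6).

    ml/mr are the left/right zero crossings and mcl/mcr the left/right
    edges of the plateau.  Passing ml = mcl = float('-inf') extends the
    plateau to the left, reproducing the constant left side of Eq. (7)
    (and symmetrically for Eq. (8) on the right).
    """
    if d < mcl:
        return max((d - ml) / (mcl - ml), 0.0)   # rising edge
    if d <= mcr:
        return 1.0                               # plateau
    return max((d - mr) / (mcr - mr), 0.0)       # falling edge
```

For example, with breakpoints (0, 1, 2, 3) the membership is 0.5 halfway up either slope and 1.0 on the plateau.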
Every fuzzy set $\mu_{p_j}^{(j)}$, $v_{q_j}^{(j)}$ and $u_{g_j}^{(j)}$ is associated with a linguistic term $A_{p_j}^{(j)}$, $B_{q_j}^{(j)}$
and $C_{g_j}^{(j)}$, respectively. Thus, for the mobile robot, the firing strength of each
linguistic control rule is computed with the product t-norm as the fuzzy
intersection:

$$
\sigma_{r_j} = \mu_{p_j,r_j}^{(j)}(d_j) \cap v_{q_j,r_j}^{(j)}(\theta_j) \cap u_{g_j,r_j}^{(j)}(\gamma_j)
= \mu_{p_j,r_j}^{(j)}(d_j) \cdot v_{q_j,r_j}^{(j)}(\theta_j) \cdot u_{g_j,r_j}^{(j)}(\gamma_j)
\quad (10)
$$
Finally, the output of the unit is given by a weighted average over all rules (see
Fig. 2 and (Topalov & Tzafestas, 2001)):

$$
\Delta\theta_j = \frac{\sum_{r_j=1}^{\bar r_j} \sigma_{r_j}\, \Delta\theta_{r_j}}{\sum_{r_j=1}^{\bar r_j} \sigma_{r_j}}
\quad (11)
$$
Eq. (9) together with Eqs. (10) and (11) define how to translate the intuitive
knowledge reflected in Table 2 into a fuzzy rule-base. The details of this
translation can be modified by changing the number of fuzzy sets, the shape of
the sets (by choosing the parameters $ml_{p_j}^{(j)}$, $mr_{p_j}^{(j)}$, $mcl_{p_j}^{(j)}$ and $mcr_{p_j}^{(j)}$), as well as
the value $\Delta\theta_{r_j}$ of each of the rules in Eq. (11). A technique for the automated
selection of the above parameters is provided in (Tzafestas & Zikidis, 2001). As
an example, the control rules for the particular mobile robot are shown in
Table 2. In this application, the numbers of fuzzy sets which fuzzify the obstacle
distance $d_j$, the angle to the goal $\theta_j$ and the angle to the nearest obstacle $\gamma_j$
are chosen to be four, nine and seven, respectively. All the other parameters
were refined by trial and error.
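The inference of Eqs. (10) and (11) can be sketched as a zero-order Sugeno evaluation. The rule list below stands in for Table 2 and is purely illustrative; each rule carries one membership function per input and a crisp conclusion.

```python
def sugeno_steering(d, theta, gamma, rules):
    """Zero-order Sugeno inference for the steering unit.

    Each rule is (mu_d, mu_theta, mu_gamma, dtheta): three membership
    functions, one per input, and a crisp conclusion.  The firing
    strength uses the product t-norm (Eq. (10)) and the output is the
    weighted average of the conclusions (Eq. (11)).
    """
    num = den = 0.0
    for mu_d, mu_t, mu_g, dtheta in rules:
        sigma = mu_d(d) * mu_t(theta) * mu_g(gamma)  # Eq. (10)
        num += sigma * dtheta
        den += sigma
    return num / den if den > 0.0 else 0.0           # Eq. (11)
```

A single fully-firing rule simply returns its own conclusion; with several partially-firing rules the output is their confidence-weighted blend.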
Figure 5. Fuzzy sets for the velocity control unit of the mobile robot: (a) the distance
between the robot and the nearest obstacle, and (b) the distance between the robot
and the goal configuration. Note that the output is an acceleration command and is
not partitioned into fuzzy sets, but consists of crisp values.
For the velocity controller, each input space is partitioned by fuzzy sets as
shown in Figure 5. Asymmetrical triangular and trapezoidal functions have
also been used to describe each fuzzy set, which allow a fast computation, es-
sential under real-time conditions (see Eqs. (12) – (13)).
To calculate the fuzzy intersection, the product operator is employed (see Eq.
(16)). The final output of the unit is given by a weighted average over all rules
(see Eq. (17)).
Intuitively, the rules for the velocity controller can be written as sentences with
two antecedents and one conclusion. This structure is represented by Table 3
which provides the prior knowledge of the problem domain. The tools of
fuzzy logic allow us to translate this intuitive knowledge into a control system.
Table 3. Rule base of the velocity controller, with antecedents $\mu(d_j)$ (distance to the
nearest obstacle) and $v(d_{g_j})$ (distance to the goal), using the linguistic terms very
close, close, far and very far.
The fuzzy sets $\mu_{p_j}^{(j)}$, $p_j = 1, \ldots, \bar p_j$, are again described by asymmetrical
triangular and trapezoidal functions. Defining the parameters $ml_{p_j}^{(j)}$ and $mr_{p_j}^{(j)}$
as the x-coordinates of the left and right zero crossings, respectively, and
$mcl_{p_j}^{(j)}$ and $mcr_{p_j}^{(j)}$ as the x-coordinates of the left and right ends of the
trapezoid’s plateau, the trapezoidal functions can be written as:

$$
\mu_{p_j}^{(j)}(d_j) =
\begin{cases}
\max\!\big((d_j - ml_{p_j}^{(j)})/(mcl_{p_j}^{(j)} - ml_{p_j}^{(j)}),\, 0\big) & \text{if } d_j < mcl_{p_j}^{(j)} \\
1 & \text{if } mcl_{p_j}^{(j)} \le d_j \le mcr_{p_j}^{(j)} \\
\max\!\big((d_j - mr_{p_j}^{(j)})/(mcr_{p_j}^{(j)} - mr_{p_j}^{(j)}),\, 0\big) & \text{if } d_j > mcr_{p_j}^{(j)}
\end{cases}
\quad (12)
$$

At the left and right sides of the interval the functions are continued as
constant values of magnitude one:

$$
\mu_1^{(j)}(d_j) =
\begin{cases}
1 & \text{if } d_j \le mcr_1^{(j)} \\
\max\!\big((d_j - mr_1^{(j)})/(mcr_1^{(j)} - mr_1^{(j)}),\, 0\big) & \text{if } d_j > mcr_1^{(j)}
\end{cases}
\quad (13)
$$

and

$$
\mu_{\bar p_j}^{(j)}(d_j) =
\begin{cases}
\max\!\big((d_j - ml_{\bar p_j}^{(j)})/(mcl_{\bar p_j}^{(j)} - ml_{\bar p_j}^{(j)}),\, 0\big) & \text{if } d_j \le mcl_{\bar p_j}^{(j)} \\
1 & \text{if } d_j > mcl_{\bar p_j}^{(j)}
\end{cases}
\quad (14)
$$
The fuzzy sets for $d_{g_j}$ are defined analogously. Figure 5 shows the fuzzy sets
$\mu_1^{(j)}, \ldots, \mu_{\bar p_j}^{(j)}$ and $v_1^{(j)}, \ldots, v_{\bar q_j}^{(j)}$.
The linguistic control rules $R_1^{(j)}, \ldots, R_{\bar r_j}^{(j)}$, which constitute the rule base, are
defined as in Eq. (15), where the AND operations use the t-norm product
operator, defined as:

$$
\sigma_{r_j} = \mu_{p_j,r_j}^{(j)}(d_j) \cap v_{q_j,r_j}^{(j)}(d_{g_j})
= \mu_{p_j,r_j}^{(j)}(d_j) \cdot v_{q_j,r_j}^{(j)}(d_{g_j})
\quad (16)
$$
Finally, the output of the velocity control unit is given by a weighted average
over all rules:

$$
\Delta v_j = \frac{\sum_{r_j=1}^{\bar r_j} \sigma_{r_j}\, \Delta v_{r_j}}{\sum_{r_j=1}^{\bar r_j} \sigma_{r_j}}
\quad (17)
$$
The path planner and navigator presented in Sections 4.1 and 4.2 are of the
reactive type and are used at the local level. The full path-following problem
also involves a global path planner at a higher level (called the geometric level),
as shown in Fig. 6. The authors have derived and implemented a global path
planner based on the potential field method, which was selected for its
mathematical elegance and simplicity. Evidently, however, more complete
global path planning techniques can be embedded into our system.
Global path planning strategies are more easily planned using a topological
map, where the planner can decide the sequence of rooms and corridors to be
traversed (Dudek et al., 1991; Ryu & Yang, 1999; Kortenkamp & Weymouth,
1994; Thrun, 1999; Kuipers & Levitt, 1988).
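Planning on such a topological map reduces to shortest-path search on a small graph. A minimal sketch, assuming a hypothetical sector graph in the spirit of the rooms-and-corridors example:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's shortest path on a topological map.

    `graph` maps a sector name to a dict {neighbour: traversal cost}.
    Sector names such as 'C1' or 'R2' echo the chapter's example but the
    graph itself is made up.  Returns the sequence of sectors to traverse,
    or None if the goal is unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None
```

The returned sector sequence plays the role of the abstract plan; each hop is then handed to the local metric planner for geometric execution.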
The topological maps (networks) are placed at the top level of the hierarchical
structure. The authors have conducted a number of experiments where both
the topological and local metric maps were given to the robot a priori. The
enormous compactness of topological maps when compared to the underlying
grid-based map, increases substantially the efficiency of global planning. The
paths are planned using the abstract topological map of the robot’s environ-
ment (Fig. 7). Shortest paths on the topological maps can be easily found using
one of the standard graph search algorithms, such as Dijkstra’s or the
Floyd–Warshall shortest path algorithm, the A* algorithm, or dynamic
programming. In our case we have used Dijkstra’s shortest path algorithm.
This algorithm finds the best (shortest) path on the topological map going
from the sector that contains the current location of the robot to the one which
contains the goal. For example, a plan to get from the “start” position in Fig. 7
to an office desk in room R2 may be:
Figure 7. (a) The topological map used for the experiments, and (b) the corresponding
network of the local metric maps.
5. Experimental Results
To test the functionality and performance of the basic algorithms, the fuzzy
navigator was applied to a simulated 3-dof industrial manipulator (Adept 1).
All experiments were carried out on a personal computer (Pentium II, 350
MHz).
Figure 8. The proposed fuzzy navigator was successfully tested for the path planning /
obstacle avoidance problem in three working scenarios with different obstacle
constellations, (a), (b) and (c), each time providing the manipulator with a
collision-free trajectory.
The fuzzy navigator was developed using Microsoft C; the Workspace 4
software package was used for the visualization and animation of the robot’s
path.
The performance of the fuzzy navigator for the Adept 1 industrial manipulator
was tested in a variety of environments, with different obstacle constellations
and working scenarios, even in dynamic environments with moving obstacles.
In all cases, the fuzzy navigator provided the system with a collision free mo-
tion. Simulation results obtained in three different working scenarios (with dif-
ferent obstacle constellations) are presented in Figure 8.
The experimental results were first derived for the simulated omnidirectional
mobile robot of the IRAL Laboratory (Robosoft Robuter III) using a Personal
Computer (Pentium IV, 1.3 GHz) and the Matlab 5.2 software package. The
fuzzy navigator was tested in a variety of environment scenarios (with both
static and moving obstacles). In all cases a collision-free motion was obtained.
A representative set of results is shown in Fig. 9 for four distinct working
scenarios.
Figure 9. Simulation results using the Robosoft Robuter III mobile robot.
The proposed strategy was successfully tested in four working scenarios with
different obstacle constellations, each time providing the mobile robot with a
collision-free movement. In (a) the path of the robot with only steering control
is shown, and in (b) the path with both the steering and velocity controllers is
shown.
The topological – map based global path planning technique was applied to
the simulated Robosoft Robuter III mobile robot using the environment de-
picted in Fig. 7.
In experiment 1, the mobile robot was in corridor C1 and was instructed to go
to an office desk in room R2. The output sequence / plan of the topological

6. Conclusions

In this chapter the problem of collision-free path planning and navigation
was studied for both robotic manipulators and omnidirectional mobile robots.
A short review of the problem itself was provided along with a number of
fundamental and recent references. The methodology followed was based on
fuzzy logic and reasoning and a promising fuzzy path planner and navigator
was developed, that can be applied to both industrial and mobile robots. This
path planner / navigator has been tested by simulation in a variety of working
scenarios with different obstacle constellations, both static and dynamic, pro-
viding each time a collision-free trajectory for the robotic manipulator. This lo-
cal planning method has shown a robust and stable performance and the ex-
perimental results were very satisfactory. The difference between the
presented approach and the existing obstacle avoidance techniques is that the
proposed navigator considers only the nearest obstacle to decide upon the ro-
bot’s next move. This clearly leads to a large reduction in the required remote
sensing and computation. A drawback is that this reduction in information
about the robot’s environment leads to an increased possibility of getting
trapped in local minima: there may be routes leading to the goal that avoid the
obstacles which our navigator will not be able to find.
Concerning the mobile robots, both local and global path planners were devel-
oped and tested. The local geometric information was encoded in a set of local
sectors, whereas the global topological information was encoded in a network
connecting these sectors. Each sector is a Cartesian representation, with its
own reference system, that covers a limited area of the environment, like a
room, a hall, or a corridor, and includes an approximate geometric description
of the boundaries of the objects in the environment. The environment at the lo-
cal level was represented by local metric maps connected to the topological
network. In this way one can use maps that are not metrically consistent on the
global scale although they are metrically consistent locally. This structure al-
lows also the combination of abstract global reasoning and precise local geo-
metric computations. To navigate in the environment, the robot uses, at the
higher level of abstraction, the topological information to plan a sequence of
sectors and gateways to traverse and, at the lower level, uses the metric in-
formation in each sector to perform geometric path planning and navigation,
and to locally move within the sector and to the next one. The system has
shown a very stable and robust performance, providing each time the mobile
robot with a collision free movement. The proposed fuzzy navigator is very
fast and can be used in real time.
Future research may be devoted to the application of neural learning mecha-
nisms applied on the fuzzy navigator providing an adaptive neurofuzzy plan-
ner (Topalov & Tzafestas, 2001; Brown, 1994; Tzafestas & Zavlangas, 1999;
Tzafestas & Zikidis, 2001; Wang, 1994; Stamou & Tzafestas, 2000). The authors’
group is also investigating the motion planning and control problem of mobile
robots equipped with manipulator arms, i.e., mobile manipulators (Tzafestas
& Tzafestas, 2001; Bayle et al., 2003; Tzafestas et al., 2000; Erden et al., 2004).
Other potential directions for future research needed in both industrial and
non-industrial (e.g., service, medical) applications include multirobotic sys-
tems involving industrial manipulators, mobile robots and mobile manipula-
tors, and the application of behavior-based techniques to both single robotic
and multiple cooperating robots (Erden et al., 2004; Hsu & Liu, 2005; Arai et
al., 1999; Balch & Arkin, 1998; Yamaguchi et al., 2001; Wang, 1991).
7. References
Dudek, G.; Jenkin, M.; Milios, E. & Wilkes, D. (1991). Robotic exploration as
graph construction, IEEE Trans. on Robotics and Automation, 7(6), 859-865.
Hajjaji, A. El; Ciocan, A. & Hamad, D. (2004). Four wheel steering control by
fuzzy approach, J. Intell. and Robotic Systems, 41(2-3), 141-156.
Erden, M. S.; Leblebicioglu, K. & Halici, U. (2004). Multi – agent system –
based fuzzy controller design with genetic tuning for a mobile manipula-
tor robot in the hand over task, J. Intell. and Robotic Systems, 39(3), 287-306.
Erdmann, M. & Lozano-Perez, T. (1987). On Multiple Moving Obstacles, Algo-
rithmica 2(4), 477-521.
Firby, R. J. (1987). An Investigation into Reactive Planning in Complex Do-
mains, Proc. AAAI Conference.
Fujimura, K. (1991). Motion Planning in Dynamic Environments, Springer, To-
kyo.
Gat, E. (1991). Reliable Goal-Directed Reactive Control for Real – World
Autonomous Mobile Robots, PhD Dissertation, Virginia Polytechnic Insti-
tute and State University.
Gil de Lamadrid, J. & Gini, M. (1990). Path Tracking through Uncharted Mov-
ing Obstacles, IEEE Transactions on System, Man, and Cybernetics, 20(6),
1408-1422.
Griswold, N. C. & Elan, J. (1990). Control of Mobile Robots in the Presence of
Moving Obstacles, IEEE Transactions on Robotics and Automation 6(2).
Hsu, H. C.-H. & Liu, A. (2005). Multiagent – based multi-team formation con-
trol for mobile robots, J. Intell. and Robotic Systems, 42(4), 337-360.
Ishikawa, S. (1995). A method of autonomous mobile robot navigation by us-
ing fuzzy control, Advanced Robotics, 9(1), 29-52.
Izumi, K. & Watanabe, K. (2000). Fuzzy behavior – based control trained by
module learning to acquire the adaptive behaviors of mobile robots,
Mathem. and Computers in Simulation, 51(3/4), 233-243.
Jaitly, R.; Althoefer, K. & Fraser, D. (1996). From vision to path planning : A
neural-based implementation, Proc. 2nd international conference on engineer-
ing applications of neural networks, 209-212, London, UK.
Jaitly, R. & Fraser, D. A. (1996). Automated 3D object recognition and library
entry system, Neural Network world, 6(2), 173-183.
Kamon, I. & Rivlin, E. (1997). Sensory-based motion planning with global
proofs, IEEE Trans. Robotics and Automation, 13(6).
Khatib, O. (1986). Real – time obstacle avoidance for manipulators and mobile
robots, Internat. J. Robotics Res. 5(1), 90-98.
Kortenkamp, D. & Weymouth, T. (1994). Topological mapping for mobile ro-
bots using combination of sonar and vision sensing, Proc. AAAI Conf.,
Menlo Park, CA, USA, 979-984.
Kosko, B. (1992). Neural Networks and Fuzzy Systems. A Dynamical Systems Ap-
proach to Machine Intelligence, Prentice – Hall, Englewood Cliffs, NJ, 2-3,
314-316.
Kuipers, B. & Levitt, T. (1988). Navigation and mapping in large scale space, AI
Magaz., 9, 25-43.
Latombe, J. C. (1991). Robot Motion Planning, Kluwer, Boston.
Lozano-Perez, T. (1983). Spatial planning : A configuration space approach,
IEEE Trans. Comput. 32(2), 108-120.
Mamdani, E. H. (1974). Applications of fuzzy algorithms for simple dynamic
plant, Proc. IEE 121(12), 1585-1588.
Mamdani, E. H. & Assilian, S. (1981). An experiment in linguistic synthesis
with a fuzzy logic controller, in : Fuzzy Reasoning and its Applications, Aca-
demic Press, New York, 311-323.
Martinez, A.; Tunstel, E. & Jamshidi, M. (1994). Fuzzy – logic based collision –
avoidance for a mobile robot, Robotica, 12 (6), 521-527.
Moustris, G. & Tzafestas, S. G. (2005). A robust fuzzy logic path tracker for
non-holonomic mobile robots, Intl. J. Artif. Intelligence Tools, 14(6), 935-
965.
Parhi, D. R. (2005). Navigation of mobile robots using fuzzy logic, J. Intell. and
Robotic Systems, 42(3), 253-273.
Pedrycz, W. (1995). Fuzzy Sets Engineering, CRC Press, Boca Raton, FL, 3-7.
Ratering, S. & Gini, M. (1995). Robot Navigation in a Known Environment
with Unknown Obstacles, Autonomous Robots, 149-165.
Rigatos, G. G.; Tzafestas, C. S. & Tzafestas, S. G. (2000). Mobile robot motion
control in partially unknown environments using a sliding mode fuzzy
controller, Robotics and Autonomous Systems, 33(1), 1-11.
Ryu, B. S. & Yang, H. S. (1999). Integration of reactive behaviors and enhanced
topological map for robust mobile robot navigation, IEEE Trans. on Sys-
tems, Man and Cybernetics, 29(5), 474-485.
Seraji, H. (1998). A unified approach to motion control of mobile manipulators,
Intl. J. Robotics Research, 17(2), 107-118.
Seraji H. & Howard, A. (2002). Behavior – based robot navigation on challeng-
ing terrain : A fuzzy logic approach, IEEE Trans. Robotics and Automation,
18(3), 308-321.
Stamou G. B. & Tzafestas, S. G. (2000). Neural fuzzy relational systems with a
new learning algorithm, Mathem. and Computers in Simulation, 51(3-4), 301-
314.
Sugeno, M. (1985). An Introductory Survey of Fuzzy Logic, Information Science,
36, 59.
Sugeno, M. & Murakami, K. (1984). Fuzzy Parking Control of Model Car, Proc.
23rd IEEE Conf. on Decision and Control, 902, Las Vegas, USA.
Sugeno, M. & Nishida, M. (1985). Fuzzy Control of a Model Car, Fuzzy Sets and
Systems, 16, 103-113.
Thrun, S. (1999). Learning metric-topological maps for indoor mobile robot
navigation, Artificial Intelligence, 1, 21-71.
1. Introduction
1997), which slightly alters the corners of the trajectory so as to ensure that the
end-effector actually traces the trajectory even in the presence of the delay
dynamics. The new trajectory planning algorithm together with the feed-
forward compensator appears as a single front-end block in the control system,
and it can be easily accommodated in existing industrial manipulator systems
without taking the risk and time of hardware reconfigurations.
A trajectory planning algorithm can generate position, velocity, and accelera-
tion profiles for all of the joints of the manipulator. In most industrial manipu-
lators, the system input is the joint position data, which are widely known in
the industry as taught data. Paul (Paul, 1979) described how homogeneous
transformations (Mittal & Nagrath, 2003) can be used in representing position
and orientation of a serial link manipulator in order to control it through a
Cartesian trajectory. The work by Shin et al. (Shin et al., 1985) looks similar to
ours; however, it is difficult to implement in industrial systems, as it requires
knowledge of many link/joint parameters of the manipulator. In industrial
manipulator systems, most of these parameters are not precisely known.
In our previous works we have addressed acceleration and velocity constraints
for 2-space trajectory planning (Munasinghe, 2001), and in this work we ex-
tend it to 3-space, while also considering error tolerance of the trajectory. The
proposed method has been tested on a Performer MK-3s industrial manipula-
tor, and its effectiveness has been experimentally verified.
With the taught data and position feedback, the reference input generator
determines the control commands for each joint and sends those commands to
the servo controller, which actuates the joint motors accordingly. Referring to
Fig. 1, the differential kinematics of the manipulator is given by

$$
\begin{bmatrix} \dot x \\ \dot y \\ \dot z \end{bmatrix}
= J(\boldsymbol{\theta})
\begin{bmatrix} \dot\theta_1 \\ \dot\theta_2 \\ \dot\theta_3 \end{bmatrix}
\quad (2)
$$
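Eq. (2) can be illustrated with a planar two-link arm, a standard textbook case used here only as a stand-in for the chapter's 3-dof manipulator (whose link parameters are not given); link lengths and angles below are arbitrary.

```python
import math

def jacobian_2r(l1, l2, t1, t2):
    """Jacobian of a planar 2R arm (illustrative case, not the chapter's
    manipulator); the 3-dof arm of the text would add a third row/column."""
    return [[-l1*math.sin(t1) - l2*math.sin(t1+t2), -l2*math.sin(t1+t2)],
            [ l1*math.cos(t1) + l2*math.cos(t1+t2),  l2*math.cos(t1+t2)]]

def tip_velocity(J, qdot):
    # v = J(theta) * thetadot, as in Eq. (2).
    return [sum(Jr[i] * qdot[i] for i in range(len(qdot))) for Jr in J]
```

With both links stretched along the x-axis, a unit rate on the base joint sweeps the tip straight upward, as the Jacobian predicts.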
Industrial robot manipulators are designed to meet the performance level
required by the application, such as welding, cutting, part handling, etc. The
specifications in general are limited only to a certain degree of accuracy,
velocity, and complexity. Therefore, most industrial robot manipulators are
designed with linear proportional-integral-derivative (PID) servo controllers
with current-limiting power amplifiers. This saturating current determines the
acceleration limit of the joint. Furthermore, joints are independently
controlled, whereas unknown inertia torques, Coriolis and centrifugal torques,
and torques due to friction and gravity are treated as disturbances to be
rejected by the controller. To support this assumption, manipulator links are
designed with low inertia, and joints are driven through high gear reductions
(Sage et al., 1999). These controllers are simple, and also provide sufficient
robustness. Figure 2 illustrates the three-degree-of-freedom decoupled joint
dynamic model of an industrial manipulator.
Figure 2. Three degree of freedom joint dynamic model of an industrial robot manipu-
lator
This model also includes the power amplifier saturation of the joint actuators.
$K_j^p$ and $K_j^v$ are the servo controller gains in the position loop and velocity loop
of joint $j$, and these gains are periodically tuned by trained operators to
maintain the level of performance. As only two tuning parameters are
involved, the controller tuning process is quite simple. Within the linear region
of joint acceleration,

$$
\frac{\Theta_j(s)}{U_j(s)} = \frac{K_j^p K_j^v}{s^2 + 2K_j^v s + K_j^p K_j^v}
\quad (3)
$$
where

$$
\mathrm{sat}(z) =
\begin{cases}
\ddot\theta_j^{\max} & z > \ddot\theta_j^{\max} \\
z & |z| \le \ddot\theta_j^{\max} \\
-\ddot\theta_j^{\max} & z < -\ddot\theta_j^{\max}
\end{cases}
$$

in which $\ddot\theta_j^{\max}$ is the maximum acceleration of joint $j$. In this view, the
objective of the trajectory planning is to make the best use of the joint
acceleration capability, while avoiding saturation.
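A minimal discrete-time sketch of a decoupled joint in the spirit of Fig. 2, with a cascaded position/velocity loop and the commanded acceleration clipped by the amplifier's current limit; the gains, step size and horizon are illustrative choices, not a claimed exact realization of Eq. (3).

```python
def simulate_joint(u_ref, kp, kv, acc_max, dt=0.001, steps=2000):
    """Euler simulation of one decoupled joint under cascade control.

    kp is the position-loop gain and kv the velocity-loop gain; the
    commanded acceleration is saturated at acc_max, modelling the
    current limit of the power amplifier (the sat(z) function above).
    """
    theta = omega = 0.0
    for _ in range(steps):
        acc = kv * (kp * (u_ref - theta) - omega)   # cascade control law
        acc = max(-acc_max, min(acc, acc_max))      # sat(z)
        omega += acc * dt
        theta += omega * dt
    return theta
```

With a generous acceleration limit the joint settles at the reference; with a tight limit the motion is acceleration-bounded, which is exactly the regime the trajectory planner must respect.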
In this work, we consider the following three major issues, which are
practically applicable in industrial robot manipulator applications:

where $v$, $v_r$, and $v_t^{\max}$ are the end-effector velocity, the rated velocity, and the
maximum tangential velocity (at a rounded corner), respectively, and $e$ and $\rho$
are the trajectory error and the error tolerance. Constraint (5) describes the
linear region for joint acceleration, within which the linear dynamics (3) are
maintained. A violation of this constraint results in the nonlinear joint
dynamics (4), which causes the end-effector to deviate from the planned
trajectory. Constraint (6) specifies the velocity limit while the end-effector
moves along straight lines and through corners. The rated velocity of the joint,
$\omega_r$, is given by $\omega_r = 2\pi\,\mathrm{RPM}_r/(60\,N_G)$, where $\mathrm{RPM}_r$ is the rated RPM
(revolutions per minute) of the joint and $N_G$ is the gear reduction ratio. Then,
the rated velocity is $v_r = \omega_r L$, where $L$ is the link length.
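The conversion from motor data-sheet values to rated velocities follows directly from these two formulas; the numbers used in the check are made up.

```python
import math

def rated_velocities(rpm_rated, gear_ratio, link_length):
    """Rated joint velocity omega_r = 2*pi*RPM_r / (60*N_G) [rad/s] and
    rated end-effector velocity v_r = omega_r * L [m/s], as in the text."""
    omega_r = 2.0 * math.pi * rpm_rated / (60.0 * gear_ratio)
    return omega_r, omega_r * link_length
```

For instance, a 3000 RPM motor behind a 100:1 reduction yields a joint rate of pi rad/s, and half that in m/s at the tip of a 0.5 m link.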
3. Trajectory Planning
Figure 4(a) illustrates a sharp corner of the objective trajectory O(s) with the er-
ror tolerance (dashed line). Error tolerances are quite common in industrial
applications, and it can be used to effectively plan the realizable trajectory P(s).
It is required however, to make sure that the realizable trajectory is contained
within the error tolerance. Referring to Fig. 4, the largest possible circular arc
should pass through point R, and it should be tangential to the section of the
tolerance cylinder on the plane of ΔABC . In order to construct this curve,
points A′, B ′, and C ′ are determined from A, B, and C according to the following
procedure:
In $\Delta ABC$, $AB = \sqrt{(x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2}$,
Figure 4(b) illustrates the circular arc constructed at a corner, whose radius $r$
is given by
Figure 4 (a) A sharp corner of the objective trajectory, and (b) A planned corner of the
realizable trajectory
D and E are the terminal points of the circular arc, and they are located on
$A'B'$ and $B'C'$ such that $DB' = EB' = \{r + 2\rho/\sin(\beta/2)\}\cos(\beta/2)$. Along the
circular arc from D to E, the trajectory is sampled at each $\delta$, as marked by M.
The sampling angle is given by

$$\delta = v_t^{\max} t_s / r \quad (9)$$
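Eq. (9) can be used to generate the sampling angles along the corner arc. In the sketch below the sweep angle between D and E is taken as pi minus the corner angle, which is an assumption for the example rather than a value from the text.

```python
import math

def corner_samples(r, beta, v_t_max, t_s):
    """Sampling angles along the corner arc of Fig. 4(b).

    delta = v_t_max * t_s / r (Eq. (9)) is the angle travelled per
    sampling period t_s at the tangential velocity limit.  The sweep
    angle (pi - beta) between the tangent points D and E is an assumed
    geometry for this illustration.
    """
    delta = v_t_max * t_s / r        # Eq. (9)
    sweep = math.pi - beta           # assumed arc extent between D and E
    n = int(sweep / delta)
    return [k * delta for k in range(n + 1)]
```

Each returned angle corresponds to one sample point M on the arc, spaced so that the end-effector never exceeds the tangential velocity limit through the corner.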
Using $t_{\min}(k)$ for planning the trajectory between the two via points
guarantees minimum-time motion, and the resulting joint accelerations are

$$
\ddot\theta_j(k) = \frac{2\big(\Delta\theta_j(k) - \dot\theta_j(k)\, t_{\min}(k)\big)}{\{t_{\min}(k)\}^2}
\quad (12)
$$
This algorithm continues as the via point advances, k = 0, 1, 2, …, and at the
same time the end-effector velocity is calculated using (2) together with (13)
and (14). When the end-effector velocity reaches the rated velocity, the
algorithm terminates (b1 in Fig. 3). As illustrated in Fig. 5, the end-effector
reaches the rated velocity at $P_F$ in the forward direction, and at $P_R$ in the
reverse direction.
Referring to Fig. 5, $P_F P_R$ is the middle segment of the straight line. This
segment is planned in Cartesian space by maintaining the rated velocity $v_r$ as
follows:

$$
\begin{bmatrix} x(t) \\ y(t) \\ z(t) \end{bmatrix}
= \begin{bmatrix} v_r^x \\ v_r^y \\ v_r^z \end{bmatrix} t
+ \begin{bmatrix} x(P_F) \\ y(P_F) \\ z(P_F) \end{bmatrix}
\quad (15)
$$

where $v_r^x$, $v_r^y$, and $v_r^z$ are the velocity components of $v_r$ along the major axes,
and $x(P_F)$, $y(P_F)$ and $z(P_F)$ are the Cartesian position coordinates of $P_F$. The
middle segment is transformed into joint coordinates using inverse kinematics.
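The middle segment of Eq. (15) is a constant-velocity straight line; a one-line sketch, with the start point and velocity vector chosen arbitrarily for the check:

```python
def middle_segment(p_f, v_r_vec, t):
    """Point on the middle segment at time t (Eq. (15)): the end-effector
    moves from P_F along the constant rated-velocity vector v_r."""
    return [p + v * t for p, v in zip(p_f, v_r_vec)]
```

Sampling this expression at the servo period and passing each point through the inverse kinematics yields the joint-space taught data for the segment.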
Planned corners and straight lines are merged to form the realizable trajectory.
As shown in Fig. 3, the taught data is obtained by compensating the realizable
trajectory for delay dynamics. In (Goto et al., 1997), pole placement with linear
state feedback was used to develop a feed-forward delay compensator as
described by

$$
F_j(s) = -\frac{a_3 s^3 + a_2 s^2 + a_1 s + a_0}{(s - \mu_1)(s - \mu_2)(s - \gamma)}
\quad (16)
$$

where

$$
\begin{aligned}
a_0 &= -\mu_1 \mu_2 \gamma \\
a_1 &= (K_j^v + \gamma)(\mu_1 + \mu_2) + (K_j^v)^2 + \mu_1\mu_2 + K_j^v\gamma - \mu_1\mu_2\gamma/K_j^p \\
a_2 &= \frac{1}{K_j^p}\Big\{ (K_j^v + \gamma)(\mu_1 + \mu_2) + (K_j^v)^2 + \mu_1\mu_2 + K_j^v\gamma - \frac{\mu_1\mu_2\gamma}{K_j^p K_j^v} \Big\} \\
a_3 &= \frac{1}{K_j^p K_j^v}\Big\{ (K_j^v + \gamma)(\mu_1 + \mu_2) + (K_j^v)^2 + \mu_1\mu_2 + K_j^v\gamma \Big\}
\end{aligned}
$$

in which $\mu_1$ and $\mu_2$ are the regulator poles, and $\gamma$ is the observer pole. These
poles can be tentatively tuned for better performance. A theoretical
determination of the compensator poles can be found in (Munasinghe &
Nakamura, 2003).
The objective trajectory was set as follows: start (0.35, 0, 0.1) [m], first corner
(0.41, 0.1, 0.15) [m], second corner (0.28, -0.1, 0.3) [m], and end (0.35, 0,
0.35) [m]. The rated velocity and maximum tangential velocity were set to
$v_r = 0.15$ [m/s] and $v_t^{\max} = 0.02$ [m/s]. The maximum joint acceleration for all
joints was set to $\ddot\theta_j^{\max} = 0.72$ [rad/s²]. The trajectory error tolerance was set to
$\rho = 0.001$ [m]. The servo controller gains were set to $K_j^p = 15$ [1/s] and
$K_j^v = 15$ [1/s]. In the delay compensator, the regulator poles were set to
$\mu_1 = \mu_2 = -60$, and the observer pole was set to $\gamma = -200$. In order to compare
the performance of the new method, we simulated a conventional trajectory
planning algorithm in which a uniform end-effector velocity of 0.05 [m/s] was
used to plan the trajectory in Cartesian space through the objective trajectory
given above.
Figure 6. End-effector velocity and joint acceleration profiles under the control of con-
ventional and proposed methods.
4.3 Discussion
The proposed trajectory planning algorithm takes the crude objective trajectory and the constraints on velocity, acceleration and error tolerance, and plans the realizable trajectory. The realizable trajectory is then compensated for delay dynamics. The proposed method achieves the best possible performance, as it always keeps at least one of the given constraints (5), (6), or (7) active during the entire motion.
Figure 7. Profiles of the end-effector in 3-space, (a) objective trajectory, (b) resulting
motion under conventional method (simulation), and (c) resulting motion under pro-
posed method.
5. Conclusion
This chapter presented a new trajectory planning algorithm for industrial robot manipulators. The algorithm considers the joint acceleration constraint, the rated end-effector velocity, and the trajectory error tolerance, and plans the realizable trajectory so that all these constraints are maintained in the best possible manner during the entire motion. A feed-forward compensator is also used to compensate the realizable trajectory for the delay dynamics of the joints. The method was successfully tested on the Performer MK-3s industrial robot manipulator using a complex three-dimensional trajectory, in which very accurate motion was realized without violating any of the constraints. The proposed method appears as a single feed-forward block in the control system; therefore, it can be conveniently incorporated into existing industrial manipulators without significant cost or risk.
6. References
Goto, S.; Nakamura, M. & Kyura, N. (1997). Modified taught data method for industrial mechatronic servo-controller to achieve accurate contour control performance, Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 525B, June 1997
Mittal, R. K. & Nagrath I. J. (2003). Chapter 2, In: Robotics and Control, 35-69,
Tata McGraw Hill, 0-07-048293-4, New Delhi
Munasinghe, S. R. & Nakamura, M. (2001). Optimum contouring of industrial
robot arms under assigned velocity and torque constraints. IEEE Transac-
tions on Systems, Man, and Cybernetics-Part C, Vol. 31, No. 2., (May, 2001)
159-167
Munasinghe, S. R. & Nakamura, M. (2002). Determination of maximum tangential velocity at trajectory corners in robot manipulator operation under torque constraint, Proceedings of the Annual Conference of the Society of Instrumentation and Control Engineers (SICE), MA17-2, 1170-1175, August, 2002
Munasinghe, S. R.; Nakamura, M.; Goto, S. & Kyura, N. (2003). Pole selection of feedforward compensators considering bounded control input of industrial mechatronic systems. IEEE Transactions on Industrial Electronics, Vol. 50, No. 6, (December, 2003) 1199-1206
Nakamura, M.; Munasinghe, S. R.; Goto, S. & Kyura, N. (2000). Enhanced contour control of SCARA robot under torque saturation constraint. IEEE/ASME Transactions on Mechatronics, Vol. 5, No. 4, (December, 2000) 437-440
Paul, R. (1979). Manipulator Cartesian path control. IEEE Transactions on Sys-
tems, Man, and Cybernetics, Vol. 9, No. 11., (Nov., 1979) 702-711
Sage, H. G., De Mathelin M. F., & Ostertag F., (1999). Robust control of indus-
trial manipulators: a survey. International Journal of Control, Vol. 72, No.
16, (Nov., 1999) 1498-1522
Shin, K. G., & Mckay N. D., (1985). Minimum-time control of robotic manipu-
lators with geometric path constraints. IEEE Transactions on Automatic
Control, Vol. 30, No. 6, (Jun., 1985) 531-541
12

Collision free Path Planning for Multi-DoF Manipulators
1. Introduction
Path planning is a very important issue in robotics and has been widely studied over the last decades. The subject brings together three fields that were once quite distinct: robotics, artificial intelligence and control. The general problem of path planning consists of searching for a collision-free trajectory that drives a robot from an initial location (position and orientation of the end effector) to a goal location. The problem is very broad and has many variants, such as planning for mobile robots, planning for multiple robots, planning for closed kinematic chains and planning under differential constraints. It also includes time-varying problems and molecular modeling; see (LaValle, 2006) for a complete review. In this study we focus on the case of multi-Degree-of-Freedom (DoF) serial manipulators.
The first works on serial manipulators path planning began in the seventies
with Udupa (Udupa, 1977), then with Lozano-Pérez and Wesley (Lozano-
Pérez & Wesley, 1979) who proposed solving the problem using the robot's
configuration space (CSpace). Since then, most of the important path planning work has been carried out in the CSpace. There are two kinds of path planning methods: global methods and local methods. Global methods (Paden et al., 1989; Lengyel et al., 1990; Kondo, 1991) generally act in two stages. The first stage, usually done off-line, consists of building a representation of the free configuration space (CSFree). Many representations have been proposed for this: the octree, the Voronoï diagram, grid discretization and probabilistic roadmaps. For each chosen representation, an adapted method is used in order to
construct the CSFree, see (Tournassoud, 1992; LaValle, 2006). The representa-
tion built in the first stage is used in the second one to find the path. This is not
very complicated since the CSFree is known in advance. Global methods give
a good result when the number of degrees of freedom (DoF) is low, but diffi-
culties appear when the number of DoF increases. Moreover, these methods
are not suitable for dynamic environments, since the CSFree must be recom-
puted as the environment changes. Local methods are suitable for robots with
a high number of DoF and thus they are used in real-time applications. The
potential field method proposed by Khatib (Khatib, 1986) is the most popular
local method. It assumes that the robot evolves in a potential field attracting
the robot to the desired position and pushing its parts away from obstacles.
Because of their local behavior, these methods do not know the robot's whole environment and can easily fall into local minima, where the robot gets stuck and cannot evolve towards its goal. Constructing a potential field with a single minimum located at the goal position is very hard, and seems impossible when there are many obstacles in the environment.
Faverjon and Tournassoud proposed the constraint method (Faverjon &
Touranssoud, 1987), which is a local method acting like the potential field
method in order to attract the end effector to its goal and dealing with the ob-
stacles as constraints. Although it yields remarkable results with high DoF ro-
bots, this method suffers from the local minima problem.
Probabilistic methods were introduced by Kavraki et al. (Kavraki et al., 1996)
in order to reduce the configuration free space complexity. These methods
generate nodes in the CSFree and connect them by feasible paths in order to
create a graph. Initial and goal positions are added to the graph, and a path is
found between them. This method is not adapted for dynamic environments
since a change in the environment causes the reconstruction of the whole
graph. Several variants of these methods were proposed: Visibility based PRM
(Siméon et al., 2000), Medial axis PRM (Wilmarth et al., 1999) and Lazy PRM
(Bohlin & Kavraki, 2000).
Mediavilla et al. (Mediavilla et al., 2002) proposed a path planning method for
many robots cooperating together in a dynamic environment. This method
acts in two stages. The first stage chooses, off-line, a motion strategy among many strategies generated randomly, where a strategy is a way of moving a
robot. The second stage is the on-line path planning process, which makes
each robot evolve towards its goal using the strategy chosen off-line to avoid
obstacles that might block its way.
Helguera et al. (Helguera & Zeghloul, 2000) used a local method to plan paths for manipulator robots and solved the local minima problem by searching a graph describing the local environment with an A* algorithm until the local minimum is avoided.
Yang (Yang 2003) used a neural network method based on biology principles.
The dynamic environment is represented by a neural activity landscape of a
topologically organized neural network, where each neuron is characterized
by a shunting equation. This method is practical in the case of a 2-DoF robot evolving in a dynamic environment, and it yields the shortest path. However, the number of neurons increases exponentially with the number of DoF of the robot, which makes the method infeasible for realistic robots.
Here, we propose two methods to solve the path planning problem. The first
method (Lahouar et al., 2005a ; Lahouar et al., 2005b) can be qualified as a
Collision free Path Planning for Multi-DoF Manipulators 351
3. Non-collision constraints
$$\dot{d} \ge -\xi\,\frac{d - d_s}{d_i - d_s} \quad \text{if } d \le d_i \qquad (1)$$

where $d$ is the minimal distance between the robot and the object, $\dot{d}$ is the variation of $d$ with respect to time, $d_i$ is the influence distance from which objects are considered in the optimization process, $d_s$ is the security distance, and $\xi$ is a positive value used to adjust the convergence rate.
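The effect of constraint (1) can be seen in a small numeric sketch (our own illustration; the function name and the sample values are ours): the allowed approach speed shrinks linearly as d falls from the influence distance toward the security distance, and vanishes at the security distance itself.

```python
# Sketch (ours, not from the chapter) of the non-collision constraint (1):
# the distance rate must satisfy d_dot >= -xi * (d - d_s) / (d_i - d_s)
# whenever d <= d_i; outside the influence distance it is unconstrained.

def velocity_bound(d, d_i, d_s, xi):
    """Lower bound on d_dot imposed by constraint (1); None if inactive."""
    if d > d_i:                          # obstacle outside influence distance
        return None
    return -xi * (d - d_s) / (d_i - d_s)

# Halfway between d_i and d_s, only half the full approach rate is allowed;
# at d = d_s the bound is zero, so the robot may not get any closer.
print(velocity_bound(d=0.75, d_i=1.0, d_s=0.5, xi=1.0))    # -0.5
```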
Figure 2. Two objects evolving together
[Figure: time evolution of the distance d for ξ = 1, 2 and 3; starting above the influence distance d_i, d decreases and converges to the security distance d_s]
where $V_{(x_i \in R_i/R_0)}$ is the velocity vector evaluated at the point $x_i$ of object $i$ having the minimal distance to the other object, and $n$ is the unit vector along the line of the minimal distance.
The non-collision constraints, taking into account the velocities of the objects, are written as:
$$V_{(x_1 \in R_1/R_0)}^T\,n - V_{(x_2 \in R_2/R_0)}^T\,n \le \xi\,\frac{d - d_s}{d_i - d_s} \qquad (3)$$

$$V_{(x_1 \in R_1/R_0)}^T\,n \le \xi\,\frac{d - d_s}{d_i - d_s} \qquad (4)$$

$$n^T J_{x_1}(q)\,\Delta q \le \xi\,\frac{d - d_s}{d_i - d_s} \qquad (5)$$

$$[a_1 \;\cdots\; a_N]\,[\Delta q_1 \;\cdots\; \Delta q_N]^T \le b \qquad (6)$$
with $[a_1 \;\cdots\; a_N] = n^T J$, $\Delta q = [\Delta q_1 \;\cdots\; \Delta q_N]^T$ and $b = \xi\,\dfrac{d - d_s}{d_i - d_s}$.
Figure 5 shows two PUMA robots evolving together. We consider that each robot is controlled separately; in that manner, each robot is considered a moving obstacle by the other one. The motion of the two robots must satisfy the following conditions:
$$V_{(x_1 \in R_1/R_0)}^T\,n \le \xi'\,\frac{d - d_s}{d_i - d_s} \quad \text{and} \quad -V_{(x_2 \in R_2/R_0)}^T\,n \le \xi'\,\frac{d - d_s}{d_i - d_s} \qquad (7)$$

where $\xi' = \frac{1}{2}\xi$. Adding the two conditions of equation (7), we notice that
the non-collision constraint defined by (3) is satisfied. So with a suitable choice
of the parameters ξ , d i and d s , it is possible to use only condition (5) to avoid
collisions with all objects in the environment.
In the next section, we propose an approach that does not construct the whole grid representing the CSpace; only the cells necessary to find the path to the goal position are checked for collision.
The planner we propose uses two modes. The first one makes the robot evolve towards its goal position when there is no obstacle obstructing its way; the second mode is active near obstacles and enables the robot to find the best way to avoid them. The latter mode is the most important, as it needs to generate all the cells near the obstacle until it is avoided. For this reason, we do not have to store all the cells, but just the ones near the obstacles, which are sufficient to describe the CSFree.
4.1 Definitions
A Cell
The algorithm we propose is based on a “Cell” class, in the sense of object-oriented programming. The vicinity of a cell $c_i$ at configuration $q^1$ is given by:
$$V_{c_i}(q^1) = \left\{\, q = \left[\, q_1^1 + s_1\Delta q \;\;\cdots\;\; q_N^1 + s_N\Delta q \,\right]^T ;\; (s_1, \ldots, s_N) \in \{-1, 0, 1\}^N \setminus (0, \ldots, 0) \,\right\} \qquad (8)$$
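Eq. (8) enumerates the 3^N − 1 grid neighbours of a configuration, obtained by stepping each joint by −Δq, 0 or +Δq and excluding the cell itself. A minimal sketch (ours, not the authors' implementation):

```python
# Sketch (our own, following Eq. (8)) of the vicinity of a cell: all 3^N - 1
# neighbouring configurations on the grid of step dq.
from itertools import product

def vicinity(q, dq):
    """All grid neighbours of configuration q, excluding q itself."""
    n = len(q)
    return [tuple(qi + si * dq for qi, si in zip(q, s))
            for s in product((-1, 0, 1), repeat=n)
            if any(s)]                       # drop (0, ..., 0)

neighbours = vicinity((0.0, 0.0), dq=0.1)
print(len(neighbours))    # 8 neighbours for a 2-DoF robot (3^2 - 1)
```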
Queue
Another important item in our approach is the queue Q, defined as an ordered set of cells. The first cell in the queue is called the head and denoted h(Q), while the last cell is the tail, denoted t(Q). If the queue is empty we write h(Q) = t(Q) = ∅.
In order to handle the queue Q, we use the following operators:
h+(Q, c1) adds the cell c1 to the head of Q.
t+(Q, c1) adds c1 to the tail of Q.
h−(Q) removes the head cell from Q.
t−(Q) removes the tail cell from Q.
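The four queue operators map directly onto a double-ended queue; a small sketch (our illustration, with made-up cell names) in Python:

```python
# Sketch (ours) of the queue operators on a double-ended queue:
# h+/t+ add at the head/tail, h-/t- remove from the head/tail.
from collections import deque

Q = deque()
Q.appendleft("c1")         # h+(Q, c1)
Q.append("c2")             # t+(Q, c2)
head, tail = Q[0], Q[-1]   # h(Q) and t(Q)
Q.popleft()                # h-(Q)
Q.pop()                    # t-(Q); Q is now empty
print(head, tail, len(Q))  # c1 c2 0
```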
Stop Condition
We define the stop condition as the condition under which we judge that the goal position has been found. We write this condition as follows:
where qgoal is the goal configuration, q is the configuration of the cell verifying
the stop condition and Δq is the step of the grid.
If the algorithm can no longer evolve and the stop condition is not satisfied, it
means that there is no possible solution for the given density of the grid.
4.2 Algorithm
The algorithm outlined in Fig. 6 starts by constructing the initial cell in step 1. It sets the parent pointer to NULL and evaluates the distance to the goal. The algorithm uses a variable c representing the cell currently examined by the algorithm. ℵ is the set of explored cells and ℵ1 is the set of unexplored cells in the vicinity of cell c.
Step 6 computes non-collision constraints using distances between obstacles
and robot parts evaluated in the posture defined by cell c. Steps 8 to 13 con-
struct unexplored cells in the vicinity of cell c. For each cell the parent pointer
is set to c, the distance to goal is evaluated and the non-collision constraints are
checked. A cell is considered in collision if it does not satisfy the constraints given by equation (3).
Step 15 determines the nearest cell to the goal in the vicinity of c, using the dis-
tance to goal already evaluated. If that cell is not an obstacle, it is placed in the
head of the queue Q at step 17. This makes the planner perform a depth search
since there is no obstacle bothering the robot.
However, if the cell computed by step 15 is in collision, all non-collision cells in the vicinity of c that are close to collision cells are placed at the tail of the queue Q by step 22. This makes the planner perform a width search until the obstacle is bypassed.
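The two search modes can be sketched on a plain 2D grid. The code below is a loose reconstruction of ours, not the authors' implementation (the grid size, wall and helper names are made up): the best free neighbour is pushed to the head of the queue while the way toward the goal is clear, and the free neighbours spread to the tail once the best cell is blocked.

```python
# Two-mode exploration sketch (our reconstruction): depth search toward the
# goal via the queue head, width search around obstacles via the queue tail.
from collections import deque

def plan(start, goal, blocked, size):
    """Find a path on a size x size 8-connected grid avoiding blocked cells."""
    def neighbours(c):
        x, y = c
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx or dy) and 0 <= x + dx < size and 0 <= y + dy < size:
                    yield (x + dx, y + dy)

    def dist(c):  # Chebyshev distance to the goal
        return max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))

    parent, Q = {start: None}, deque([start])
    while Q:
        c = Q.popleft()                          # h(Q), then h-(Q)
        if c == goal:                            # rebuild the path via parents
            path = [c]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        cand = [n for n in neighbours(c) if n not in parent and n not in blocked]
        for n in cand:
            parent[n] = c
        if not cand:
            continue
        best = min(neighbours(c), key=dist)      # nearest cell to the goal
        if best in cand:
            Q.appendleft(best)                   # depth mode: follow it
            Q.extend(n for n in cand if n != best)
        else:                                    # best cell blocked or seen:
            Q.extend(cand)                       # width mode around the obstacle
    return None

wall = {(5, y) for y in range(9)}                # vertical wall, gap at y = 9
path = plan((0, 0), (9, 9), wall, size=10)
print(len(path))
```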
Figure 8. Path planning consists of moving the robot from the start position to the
goal position while avoiding obstacles
Obstacle   O1   O2   O3
x          16    4   10
y          12   10    4
Table 1. Positions of the obstacles
Table 1 gives the positions of the three point obstacles. The lengths of the arms of the robot are set to l1 = l2 = 10.
Fig. 9 shows the CSpace of the robot, the dark regions correspond to CSpace
obstacles.
[Figure 9: CSpace of the robot, showing the start and goal configurations]
The construction order of the cells is shown in Fig. 10. The algorithm evolves towards its goal using the depth-search mode while no obstacle is bothering it. When an obstacle is detected, the algorithm switches to the width-search mode and spreads over the obstacle in order to find the best direction to bypass it. When the obstacle is avoided, the depth-search mode is resumed. The algorithm gives the best way to go around the C obstacle (the portion of the CSpace corresponding to a collision with an obstacle).
Figure 12. Simulation results for the PUMA robot
The result of the simulation is shown in Fig. 11. Moreover, out of 5329 cells,
which corresponds to 73 points on each axis, only 375 cells were computed.
This represents less than 10% of the whole workspace.
The method described above is useful in the case of cluttered static environ-
ments. It can be used offline to generate repetitive tasks. In many cases robots
evolve in dynamic environments, which are unknown in advance. That is why
we propose to solve the path planning problem for many manipulator robots
evolving in a dynamic environment using a real-time local method. This ap-
proach is based on the constraints method coupled with a procedure to avoid
local minima by bypassing obstacles using a boundary following strategy.
$$\Delta q_{goal} = \Delta q_{max}\,\frac{q_{goal} - q}{\left\| q_{goal} - q \right\|} \quad \text{if } \left\| q_{goal} - q \right\| > \Delta q_{max} \qquad (10)$$
where q goal is the goal configuration of the robot, q is the current configura-
tion of the robot and Δqmax is the maximum variation of each articulation of the
robot. If there are obstacles in the environment, we add constraints (defined in
paragraph 3) to the motion of the robot in order to avoid collisions. Path plan-
ning becomes a minimization under constraints problem formulated as:
where Δq is the change of the robot joints at each step. We can formulate then
the planning problem as follows:
$$\text{Minimize } \left\| \Delta q - \Delta q_{goal} \right\| \text{ under the linear constraints } n^T J\,\Delta q \le \xi\,\frac{d - d_s}{d_i - d_s} \qquad (13)$$
Before explaining the method in the general case of an n-DoF robot, we present
it for the 2D case. The proposed approach to escape from the deadlock situa-
tion is based on an obstacle boundary following strategy.
The 2D case
This method was first used in path planning of mobile robots (Skewis & Lu-
melsky, 1992; Ramirez & Zeghloul, 2000).
When the local planner gets trapped in a local minimum (see Fig. 13), it becomes unable to drive the robot farther. At this point the boundary following method takes over, and the robot is driven along the boundary of the obstacle until it gets around it. The robot in this case has the choice between two directions on the line tangent to the obstacle boundary, i.e. on the line orthogonal to the vector to the goal (Fig. 13): it can go right or left of the obstacle. Since the environment is dynamic and unknown in advance, we have no idea whether going left or going right is better, so the choice of the direction is made randomly. Once the obstacle is avoided, the robot resumes the local method and moves ahead towards the goal configuration.
Figure 13. Two possible directions to bypass the obstacle in the case of a 2DoF robot
Figure 14. The case where there is no feasible path to the goal
If the boundary following method drives the robot back to the original deadlock position, one can conclude that there is no feasible path to reach the goal (Fig. 14) and the process is stopped.
Fig. 15 shows the two feasible paths leading the robot to the goal position.
Each path corresponds to one choice of the direction of the motion to follow
the boundary of the obstacle. Since the environment can be dynamic, the choice of the direction (left or right) is made once and stays the same until the goal is reached. This unique choice guarantees a feasible path in all situations whenever a deadlock position is found by the local planner (even if in certain cases the choice seems non-optimal, as is the case for path 2 using the left direction in Fig. 15).
Figure 15. If a solution exists any chosen direction will give a valid path
[Figure: deadlock configuration q_lock and goal configuration q_goal around the C obstacle, with a random direction chosen to bypass it]
This method is applicable only to the primary 3-DoF case when the 3D graphi-
cal model can be visualized. Also, the user can choose paths using only the
primary DoF, which eliminates other possibilities using the full DoF of the ro-
bot. Moreover, this method cannot be applied in real-time applications.
One possible strategy is to make a random choice of the direction to be followed by the robot in the TCplane. This strategy can lead to zigzagged paths and therefore should be avoided. In our case, whenever the robot is in a deadlock position, we make it evolve towards its upper or lower joint limits, defined by the vector q_lim. This strategy allowed us to find a consistent way to get out of the deadlock position. The chosen direction is defined by the intersection of the TCplane and the bypassing plane (P) containing the three points q_lim, q_lock and q_goal (Fig. 17).
In the general case of robots with many DoF, the TCplane becomes a hyperplane normal to the vector pointing from q_lock to q_goal and containing q_lock. The direction chosen to go around the obstacle is defined by the intersection of the TCplane and the plane (P) defined by the three points q_lim, q_lock and q_goal. New constraints, reducing the motion of the robot to the plane (P), are defined with respect to the non-collision constraints.
The boundary following method will follow these constraints until the obstacle is avoided. The plane (P) is characterized by two vectors U1 and U2, where U1 is the vector common to all possible subspaces, pointing at the goal configuration. Vector U1 is given by:
$$U_1 = \frac{q_{goal} - q}{\left\| q_{goal} - q \right\|} \qquad (14)$$
U2 is the vector that defines the direction used by the robot in order to avoid the obstacle; it is defined by the intersection of plane (P) and the TCplane. It is not the only possible direction: any random direction can define a bypassing plane usable in the boundary following method, but the systematic use of q_lim in the definition of U2 avoids the problem of zigzagged paths. As the robot evolves in a dynamic environment, it has no prior knowledge of the obstacles and their motion, and it cannot compute the best direction to bypass obstacles. In order to define U2 we use the vector V given by:
$$V = \frac{q_{lim} - q}{\left\| q_{lim} - q \right\|} \qquad (15)$$
where $q_{lim} = q_{inf}$ if the chosen strategy makes the robot move towards the lower limits of its joints, and $q_{lim} = q_{sup}$ if it makes the robot move towards the upper limits of its joints. U2 is the unit vector orthogonal to U1 and located in the plane (U1, V). Vector U2 is given by:
$$U_2 = \frac{V - \left(U_1^T V\right) U_1}{\left\| V - \left(U_1^T V\right) U_1 \right\|} \qquad (16)$$
While avoiding the obstacle, the robot moves in the defined subspace (P), and $\Delta q$ can be written as

$$\Delta q = \Delta u_1 U_1 + \Delta u_2 U_2$$

where $\Delta u_1$ is the motion along the $U_1$ direction and $\Delta u_2$ is the motion along the $U_2$ direction.
Whenever an object is detected by the robot, which means that the distance between the robot and the obstacle is less than the influence distance, a constraint is created according to equation (6). Constraints are numbered such that the i-th constraint is written as:
Let
$$A_i^T\,\Delta u \le b_i \qquad (22)$$

with

$$A_i = [a_{u_i 1} \;\; a_{u_i 2}]^T \qquad (23)$$

$$\Delta u = [\Delta u_1 \;\; \Delta u_2]^T \qquad (24)$$
[Figure 19: the motion Δu in the (U_1, U_2) plane following the projected constraint A_lock, and the case where a second projected constraint A_block blocks the motion]
$$V(q) = \left\| q - q_{goal} \right\| \qquad (25)$$

which is the distance from the current position of the robot to the goal position. The value of the distance function is strictly decreasing while the robot evolves towards its goal using the local planner. When a deadlock is detected, we define $d_{lock} = \left\| q_{lock} - q_{goal} \right\|$ as the value of the distance function in the deadlock configuration. While the robot is going around the obstacles using the boundary following method, the distance function V(q) is continuously computed and compared to $d_{lock}$. When the value of the distance function becomes lower than $d_{lock}$, the robot has found a point on the C obstacle boundary that is closer to the goal than the deadlock point. At this moment, the robot quits the boundary following method and continues to move towards the goal using the local planner.
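The exit test of the boundary-following mode can be sketched as follows (a toy illustration of ours; the goal, deadlock and sample points are arbitrary):

```python
# Sketch (ours) of the mode switch: record d_lock at the deadlock, then resume
# the local planner as soon as the distance function V(q) drops below it.
import math

def distance(q, q_goal):                      # V(q) = ||q - q_goal||, Eq. (25)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, q_goal)))

q_goal = [0.0, 0.0]
d_lock = distance([3.0, 4.0], q_goal)         # deadlock at distance 5
boundary_samples = [[3.5, 4.0], [4.0, 3.0], [2.0, 2.0]]   # points along the boundary
mode = "boundary"
for q in boundary_samples:
    if distance(q, q_goal) < d_lock:
        mode = "local"                        # obstacle bypassed: resume local planner
        break
print(mode)    # local: (2, 2) is closer to the goal than the deadlock point
```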
The vector of the followed constraint is named A lock . It corresponds to the vec-
tor of the projected constraint blocking the robot. The boundary following
method can be stated as follows:
where
At each step, the algorithm tracks the evolution of the followed constraint among the set of projected constraints; the tracked constraint is the one maximizing the dot product with A_lock. In certain cases, the resultant vector Δu is null because another projected constraint is blocking the robot (Fig. 19). This is the case at point B in Fig. 20. In this case, the robot switches the followed constraint and uses the blocking constraint to escape from the deadlock.
Fig. 20 shows the case of a point robot moving from point S to point q goal . The
robot moves from point S to point qlock1 using the local planner.
[Figure 20: a point robot moving from S to the goal, alternating between the local method and the boundary following module around two obstacles]
The point qlock1 corresponds to a deadlock position where the robot can no
longer move towards the obstacle while respecting the security distance, d s ,
from the obstacle. This point corresponds also to a local minimum of the dis-
tance function, V (q ) = d lock1 . At this point, the robot starts to follow the bound-
ary of the blocking obstacle and the distance function V (q ) is continuously
compared to $d_{lock1}$. At point B, another obstacle prevents the robot from following the first one; in that case, the boundary following module changes the path of the robot to follow the new obstacle. At point C, the distance to the goal has decreased back to $d_{lock1}$, which means that the robot has bypassed the obstacle, and the local planner is resumed. When reaching point $q_{lock2}$, a second deadlock occurs; therefore, the boundary following module is activated again until point D is reached, which corresponds to a distance from the goal equal to $d_{lock2}$. At this point the local method is resumed to drive the robot to its goal position.
Figure 21. Results using two 5-DoF robots
6. Conclusion
In this chapter, we presented two collision-free path planning methods. The first one is based on lazy grid methods: it searches for a path without using a heuristic function. It bridges the gap between classic grid methods, where all the grid cells must be computed before searching for a path, and lazy grid methods, where the grid is computed while searching for the path. The proposed planner is very general and is guaranteed to find a path, if one exists, at a given resolution. However, the algorithm depends on the resolution of the grid: the higher the resolution, the closer the robot can squeeze between obstacles. The method reduces the number of computed cells and gives the best direction to go around a C obstacle. It can be combined with quasi-random methods, replacing the A* search module, since quasi-random sampling of the CSpace appears to offer performance improvements in path planning; see for instance (Branicky et al., 2001).
The second part of this work was concerned with a novel method for path
planning suitable for dynamic environments and multi-DoF robots. This
method is a combination of the classical local method and the boundary fol-
lowing method needed to get the robot out from deadlock positions in which
the local method gets trapped. The local path planner is based on non-collision constraints and consists of an optimization process under linear constraints. When a deadlock, corresponding to a local minimum of the local method, is detected, a boundary following method is launched. A similar method exists for the 2D case, and we showed in this work how it can be applied to the case of multi-DoF robots. When the robot is stuck in a deadlock position, we define the direction of motion of the robot, in the configuration space, as the intersection of a hyperplane, called the TCplane, and a plane defined by the vector to its goal and a vector to its joint limits. This direction of motion allows the robot to avoid the obstacle by following its boundary until it finds a path to the goal which does not interfere with the obstacle. From this point, the classical local planner takes over to drive the robot to its goal position. This method is fast and easy to implement, and it is also suitable for several cooperating robots evolving in dynamic environments.
7. References
Bohlin, R. & Kavraki, L. (2000). Path planning using lazy prm, Proceedings of
IEEE International Conference on Robotics and Automation, pp. 521–528, San
Francisco, CA, April 2000.
Branicky, M.; LaValle, S.; Olson, K. & Yang, L. (2001). Quasi-randomized path
planning, Proceedings of IEEE International Conference on Robotics and Auto-
mation, pp. 1481-1487, Seoul, Korea, May 2001.
Faverjon, B. & Touranssoud, P. (1987). A local based approach for path plan-
ning of manipulators with a high number of degrees of freedom, Proceed-
ings of IEEE International Conference on Robotics and Automation, pp. 1152–
1159, Raleigh, March 1987.
Helguera, C. & Zeghloul, S. (2000). A local-based method for manipulators
path planning in heavy cluttered environments, Proceedings of IEEE Inter-
national Conference on Robotics and Automation, pp. 3467–3472, San Fran-
cisco, CA, April 2000.
Kavraki, L.; Svestka, P.; Latombe, J-C. & Overmars, M. (1996). Probabilistic roadmaps for path planning in high dimensional configuration spaces. IEEE Transactions on Robotics and Automation, Vol. 12, No. 4, (August 1996), pp. 566–580, ISSN 1042-296X.
Khatib, O. (1986). Real-time obstacle avoidance for manipulators and mobile
robots. The International Journal of Robotics Research, Vol. 5, No. 1, (1986),
pp. 90–99.
Kondo, K. (1991). Motion planning with six degrees of freedom by multistrate-
gic bidirectional heuristic free-space enumeration, Proceedings of IEEE In-
ternational Conference on Robotics and Automation, pp. 267–277, Sacramento,
CA, April 1991.
Lahouar, S.; Zeghloul, S.; Romdhane, L. (2005a). Path planning for manipula-
tor robots in cluttered environments. WSEAS Transactions On Systems,
Vol. 4, No. 5, (May 2005) pp. 555-560. ISSN 1109-2777.
Lahouar, S.; Zeghloul, S.; Romdhane, L. (2005b). Path planning for manipula-
tor robots in cluttered environments. International Design Engineering
Technical Conferences & Computers and Information In Engineering Confer-
ence, DETC2005-84993. Long Beach, CA, September 2005.
Lahouar, S.; Zeghloul, S.; Romdhane, L. (2006). Real-time path planning for
multi-DoF robot manipulators in dynamic environment. International
Journal of Advanced Robotic Systems, Vol. 3, No. 2, (June 2006) pp. 125-132.
ISSN 1729-8806.
LaValle, S. (2006). Planning Algorithms, Cambridge University Press, ISBN
0521862051.
Lengyel, J.; Reichert, M.; Donald, B. & Greenberg D. (1990). Real-time robot
motion planning using rasterizing computer graphics hardware. Com-
puter Graphics, Vol. 24, No. 4, (August 1990), pp. 327–335.
Determination of Location and Path Planning Algorithms for Industrial Robots

1. Introduction
$$R = (R_A - O) \cap (R_B - O) \qquad (1)$$
where "O" represents the obstruction area of one of the segmented rectangular obstacles, and "R" represents the inspected region in which the robot can be located for the working points A and B. By checking each rectangular shape in sequence, the possible location areas are searched, and the robot location is determined.
Similarly, the passable area considering all of the segmented obstacles (O1, O2, …, On) is defined as

$$R = \left[(R_A - O_1) \cap (R_B - O_1)\right] \cap \left[(R_A - O_2) \cap (R_B - O_2)\right] \cap \cdots \cap \left[(R_A - O_n) \cap (R_B - O_n)\right] \qquad (2)$$
Considering all of the desired working passing points (A, B, C, D, …), the possible location region R is inspected with the same process as in (2). If the intersection area is empty, that is, R = ∅, then there may be no suitable robot
Determination of Location and Path Planning Algorithms for Industrial Robots 381
location for the subsequent path planning. On the other hand, large space of R
may provide better solution for searching a suitable path.
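The region computation of Eqs. (1)-(2) can be sketched on a discretized plane with boolean masks. This is only an illustration, not the chapter's implementation: the annulus model of the stretch-out/withdraw radii and all names below are assumptions.

```python
import numpy as np

def reachable_annulus(grid_xy, point, r_min, r_max):
    """Base positions from which `point` is reachable: the base must lie
    between the withdraw radius r_min and the stretch-out radius r_max."""
    d = np.linalg.norm(grid_xy - np.asarray(point, dtype=float), axis=-1)
    return (d >= r_min) & (d <= r_max)

def feasible_base_region(grid_xy, points, obstacles, r_min, r_max):
    """Eq. (2) on a discretized plane: intersect, over all working points,
    the reachable regions minus every rectangular obstruction area."""
    free = np.ones(grid_xy.shape[:-1], dtype=bool)
    for x0, y0, x1, y1 in obstacles:            # axis-aligned rectangles
        inside = ((grid_xy[..., 0] >= x0) & (grid_xy[..., 0] <= x1) &
                  (grid_xy[..., 1] >= y0) & (grid_xy[..., 1] <= y1))
        free &= ~inside                         # R − O_i (set difference)
    region = free.copy()
    for p in points:                            # working points A, B, C, ...
        region &= reachable_annulus(grid_xy, p, r_min, r_max)
    return region
```

Any cell left True in the returned mask is a candidate robot location; the point E of Figure 1 would be picked from this set by a further numerical criterion.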
As shown in Figure 1, each of the four bigger circles represents the possible location area when the robot stretches out to pass through the points A, B, C, D, respectively. Similarly, the four smaller circles represent the possible location areas when the robot withdraws to pass through those points. Similar tests are also carried out in the Y-Z and Z-X planes, which are not addressed in detail here. The shaded area is thus the possible region for the robot location. Via numerical analysis, the point E in Figure 1 is selected as the robot location.
Figure 1. Possible location region for the robot: the bigger circles R_A–R_D (stretch out) and the smaller circles (withdraw) for the working points A, B, C, D.
3. Collision Inspection
The robot and the obstacle need to be defined in the geometric space. Since the last three joints possess only a limited manipulation space in most six-DOF industrial robots, the robot structure can be lumped and expressed by the first three arms of the elbow (Ting et al., 2002).
The robot and the obstacle can be described in a discretized space by the distance-map method. In this 2D map, perpendicular grid lines intersect at the nodes and divide the space into numerous equal squares (Pobil et al., 1992).
4. Path Planning
Figure 2. Distance map of the discretized space: each passable node is marked with its wave-expansion number; obstructed nodes are left unmarked.
When searching for a suitable path, it is suggested to start from the final-position node, which bears the largest marked number, since starting from the initial position gives no guarantee of a solution. All of the passable nodes within one unit distance of the start node (at most 8 nodes in two dimensions and 26 nodes in three dimensions) are checked to see whether at least one passable neighboring node exists. If the marked number of the start node is n, the neighboring node with the smallest number, for example n−1 or n−2, is taken as the next passable destination. From that node, the search continues until the marked number of the desired passable node is 1. Joining these searched nodes then determines the path. If the chosen passable node does not eventually lead to a successful path, the search switches to another node with a smaller number and continues from there.
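The wave-expansion marking and the neighboring search described above can be sketched as follows; the grid representation and function names are hypothetical, not the chapter's code.

```python
from collections import deque

def wave_expansion(free, start):
    """Mark every passable cell with its wave-expansion number, starting
    from 1 at `start` and expanding over the 8-neighbourhood (2D case;
    3D would use the 26-neighbourhood)."""
    rows, cols = len(free), len(free[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 1
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if ((dr or dc) and 0 <= rr < rows and 0 <= cc < cols
                        and free[rr][cc] and dist[rr][cc] is None):
                    dist[rr][cc] = dist[r][c] + 1
                    frontier.append((rr, cc))
    return dist

def descend(dist, goal):
    """Neighbouring search: start at the goal node (largest mark) and step
    to the neighbour with the smallest smaller mark until a node marked 1
    is reached; return the path ordered from initial to final position."""
    path, (r, c) = [goal], goal
    while dist[r][c] > 1:
        nbrs = [(rr, cc)
                for rr in range(max(r - 1, 0), min(r + 2, len(dist)))
                for cc in range(max(c - 1, 0), min(c + 2, len(dist[0])))
                if (rr, cc) != (r, c) and dist[rr][cc] is not None
                and dist[rr][cc] < dist[r][c]]
        if not nbrs:
            return None              # dead end: restart from another node
        r, c = min(nbrs, key=lambda n: dist[n[0]][n[1]])
        path.append((r, c))
    return path[::-1]
```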
The depth-first method derives from the data-structure concepts of computer science. It starts at the final-position node with the largest marked integer n and searches for a neighboring node whose marked number is exactly one smaller (n−1). The searched path in the tree structure is illustrated in Figure 3. The white nodes are nodes that may be passed through, and the black nodes indicate the obstructed or robot-unreachable area. The start node is the initial position at which wave expansion begins, and the end node is the final position at which the marked numbering begins. Null indicates termination of the search with no solution.
Figure 3. Tree structure of the depth-first path search: nodes M1–M10 with marked numbers 4 (End) down to 1.
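The depth-first variant can be sketched as a recursion that only descends to a neighbor marked exactly n−1 and backtracks (returning Null) when none exists. The grid is assumed to hold wave-expansion marks, with None for obstructed nodes; the encoding is an assumption, not the chapter's.

```python
def depth_first(dist, node, path=()):
    """Depth-first search of Figure 3: from a node marked n, recurse only
    into neighbours marked exactly n - 1; return None (Null) on failure."""
    path = list(path) + [node]
    r, c = node
    if dist[r][c] == 1:
        return path
    for rr in range(max(r - 1, 0), min(r + 2, len(dist))):
        for cc in range(max(c - 1, 0), min(c + 2, len(dist[0]))):
            if (rr, cc) != (r, c) and dist[rr][cc] == dist[r][c] - 1:
                found = depth_first(dist, (rr, cc), path)
                if found is not None:
                    return found
    return None   # Null: termination of search with no solution
```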
Extensive experimental testing showed that the neighboring search method does not obtain as good a path as the depth-first search method. It is interesting to ask why the former, which searches 26 neighboring nodes along various directions, cannot obtain a better outcome than the latter, which searches only 6 neighboring nodes. An extensile neighboring search method that outperforms the depth-first search method is therefore developed by remedying the defects of the neighboring search method, as described below.
When using the wave-expansion method, the start and end nodes are selected as either the initial or the final position. Once the initial position is chosen, wave expansion begins at that nodal point. An investigation was carried out to examine whether a different wave-expansion solution results from exchanging the initial and final destinations. As illustrated in Figure 4, two paths, the solid and the dotted lines, are obtained by interchanging the initial and final destinations. The bold line represents the passable region in both cases. It is obvious that the two searched paths are not the same.
Figure 4. Two searched paths by exchanging the initial with the final positions
Determination of Location and Path Planning Algorithms for Industrial Robots 385
Therefore, the selection of the start node for wave expansion and the end node for backward numbering is important and may seriously affect the search result. This situation is even more pronounced in the 3D environment, where the searched neighboring nodes increase from 8 (2D) to 26 directions. It is therefore necessary to double-check the searched path by interchanging the initial and final positions.
In Figure 4, the solid line is the path searched when the leftmost node is selected as the initial position for the wave-expansion process and the rightmost node is the final position at which the path search begins. Conversely, the dotted path is obtained by exchanging the initial and final positions. The difference between the two searched paths appears as two parallelogram areas, inside which it is certain that no collision occurs. Thus, the two searched paths can be merged into one, as shown in Figure 5.
Figure 5. The merged path obtained from the two searched paths in the (q1, q2) plane.
When using the neighboring search method, the robot path must pass through grid nodes of the same square, so a sawtooth type of path is likely to be obtained. This limits the searched path no matter how fine the grid is made. The consequence is that the searched path is inflexible and the manipulation time is lengthened because more nodes are traversed. This phenomenon is illustrated in Figure 6.
Figure 6. Path reduction: (a) the 2D case with nodes A, B, C, D, E, F; (b) the 3D case with path X → Y in the (q1, q2, q3) space.
Assuming that the path A → B → C shown in Figure 6(a) is the originally searched path, it can be reduced to A → C. This reduction is not obtainable by the neighboring search method, since its searched path must move along the grid nodes within a unit square. The nodes D and E on the unit square ABED are considered for collision inspection (Ting et al., 2002). Since the triangle ABE contains the dotted line crossing the parallelogram, only node E needs to be checked. Extending to the 3D environment shown in Figure 6(b), the path X → Y is desired, and only the area enveloped by the grid nodes containing the dotted line needs to be checked. This method is useful for shortening the path. For example, once the circled grid nodes shown in Figure 7 are confirmed collision-free, the path is defined as P → Q, which saves many manipulation steps.
Figure 7. Further reduction of searched path by checking the circled nodes
It is quite difficult to choose intuitively the grid nodes to check in the 3D space. A recursive program is therefore developed to systematically inspect for collision the nodes around the unit square that the considered path crosses. The algorithm is described below.
Recursive Algorithm:
Assume the kth grid node is located at (a, b, c) and the (k+2)th grid node at (x, y, z). Suppose the distances along the X, Y, Z directions satisfy |x−a| ≥ |y−b| ≥ |z−c|, and let the grid unit distance be d degrees. Define the grid number N = |x−a|/d, the checking increments (p, q, r) = (x−a, y−b, z−c)/N, and the signs of the increments (u, v, w) = sgn(p, q, r). With these definitions, three conditions are defined as below.
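The setup quantities of the recursive algorithm can be sketched directly; the three conditions themselves are not reproduced in the text, so only this setup step is shown, and the function name is hypothetical.

```python
def check_node_setup(k_node, k2_node, d):
    """Setup quantities of the recursive algorithm: grid number N, the
    per-step increments (p, q, r), and their signs (u, v, w). The caller
    is assumed to have ordered the axes so that |x-a| >= |y-b| >= |z-c|."""
    (a, b, c), (x, y, z) = k_node, k2_node
    N = abs(x - a) / d                       # N = |x - a| / d
    p, q, r = (x - a) / N, (y - b) / N, (z - c) / N
    sgn = lambda v: (v > 0) - (v < 0)        # (u, v, w) = sgn(p, q, r)
    return N, (p, q, r), (sgn(p), sgn(q), sgn(r))
```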
Figure 8. Wave-expansion distance map in the (q1, q2) joint space, with the start node (initial position) and end node (final position) marked.
It can be seen that there is no subsequent node to continue with when the path search reaches grid number 11. That is, the depth-first search method fails whenever no node marked n−1 (number 10 in this example) neighbors the current searched node. The extensile neighboring search method, on the other hand, also checks the surrounding nodes marked n−2 (number 9 in this example), and so it is able to find a path.
5. Manipulation Time
In practice, the manipulation time is of greater concern than the number of manipulation steps. In general, any type of motion profile can be planned by the motion programming method (Tesar & Matthew, 1976). In this study, for example, following the driving motors of the ABB IRB-1400 industrial robot (ABB, 1996), the motion profile is expressed in a polynomial form (3).
Two cases of motion profiles are presented in Figure 9. In Figure 9(a), the initial time is t0. The interval from t0 to t1 is the time needed to reach the maximum speed, the interval from t1 to t2 is the time of travel at the maximum speed, and the interval from t2 to t3 is the time needed to reduce the speed to zero. If the motor does not reach its maximum speed because the manipulation step is too short, the motion profile is as illustrated in Figure 9(b). Once the coefficients in (3) are defined, the manipulation time following the designed motion profile can be computed for both cases. Note that the lengths of the straight-line pieces formed in joint space by the passable nodes may differ; for example, several passable nodes may be connected to form one straight line, which entails more manipulation steps. The motion profile planned for each straight-line piece is therefore different, and the entire manipulation time is the sum of the manipulation times of all the straight-line pieces traversed.
Figure 9. Velocity profiles: (a) the maximum speed Vmax is reached (phases S1, S2, S3 over t0, t1, t2, t3); (b) the maximum speed is not reached (phases S1, S3 over t0, t1, t3).
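Under the simplifying assumption of constant acceleration (standing in for the polynomial profile of Eq. (3), whose coefficients are not reproduced here), the manipulation time of the two profile cases of Figure 9 can be sketched as:

```python
import math

def segment_time(distance, v_max, accel):
    """Manipulation time of one straight-line segment for a symmetric
    velocity profile: trapezoidal if v_max is reached (Fig. 9a),
    triangular otherwise (Fig. 9b)."""
    d_ramp = v_max ** 2 / accel          # distance consumed by speed-up plus slow-down
    if distance >= d_ramp:               # Fig. 9(a): phases t0-t1, t1-t2, t2-t3
        t_cruise = (distance - d_ramp) / v_max
        return 2.0 * v_max / accel + t_cruise
    # Fig. 9(b): peak speed sqrt(accel * distance) < v_max, no cruise phase
    return 2.0 * math.sqrt(distance / accel)

def total_time(segment_lengths, v_max, accel):
    """Entire manipulation time: sum over the piecewise-linear joint-space path."""
    return sum(segment_time(s, v_max, accel) for s in segment_lengths)
```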
7. Conclusion
Acknowledgement
This research was supported by NSC90-2212-E-033-009.
9. References
Luis T. Aguilar
1. Introduction
Motivated by the above-mentioned features, and taking into account that only incomplete and imperfect state measurements are available, the main objective of this paper is to introduce an output-feedback homogeneous controller for tracking the trajectories of robot manipulators. The control design proposed here is inspired by the passivity-based approach and consists of an observer part, a precomputed reference trajectory, and a controller part, but it is augmented with a relay part in both the controller and the observer, which yields a certain degree of robustness under disturbances and finite-time stability of the closed-loop system. We note that the passivity-based approach, represented by the Slotine and Li controller plus observer, provides semi-global stability, and the addition of the relay terms ensures robustness in the presence of the observer without destroying closed-loop stability.
This paper is organized as follows: In Section 2 homogeneous systems are de-
fined. Section 3 states the problem and introduces the Euler-Lagrange repre-
sentation of the robot manipulator, along with some fundamental properties of
the dynamic model. Section 4 presents the homogeneous controller and its sta-
bility analysis. Section 5 provides a simulation study for a 2-DOF robot ma-
nipulator using the controller described in Section 4 as well as performance of
the controller for external perturbations. Section 6 establishes final conclusions.
The following notation will be used throughout the paper. The norm ‖x‖₂, with x ∈ R^n, denotes the Euclidean norm, and ‖x‖₁ = |x₁| + ⋯ + |xₙ| stands for the sum norm. The minimum and maximum eigenvalues of a matrix A ∈ R^{n×n} are denoted by λmin{A} and λmax{A}, respectively. The vector sign(x) is given by sign(x) = [sign(x₁), …, sign(xₙ)]ᵀ, where the signum function is defined as

sign(y) = { 1 if y > 0;  [−1, 1] if y = 0;  −1 if y < 0 },  for all y ∈ R.    (1)
2. Basic definitions
ẋ = f(x, t)    (2)

ẋ ∈ F(x, t)    (3)

almost everywhere on I.

f_i(c^{r_1}x₁, …, c^{r_n}xₙ, c^{−η}t) = c^{η + r_i} f_i(x).
T(t₀, x₀) ≤ τ(x₀, E_L) + (δL^{−1})^η s(δ) / (1 − 2^{−η})

where x(t₀) = x₀,

s(δ) = sup over x₀ ∈ E_δ of τ(x₀, E_{δ/2}),

and

E_L = { x ∈ R^n : Σ_{i=1}^{n} (x_i / L^{r_i})² ≤ 1 }.
M(q)q̈ + C(q, q̇)q̇ + g(q) = U + w    (4)

where q is the n×1 vector of joint positions and is considered to be the only information available for feedback; U is the n×1 vector of applied joint torques; w is the n×1 unknown perturbation vector; M(q) is the n×n symmetric positive-definite inertia matrix; C(q, q̇)q̇ is the n×1 vector of centripetal and Coriolis forces; and g(q) is the n×1 vector of gravitational torques. The dynamic equation (4) has the following properties, which will be used in the closed-loop stability analysis (Kelly et al., 2005):
• The inertia matrix M(q) is bounded above and below for all q ∈ R^n; that is, m₁I ≤ M(q) ≤ m₂I, where m₁ and m₂ are positive scalars and I is the identity matrix.
• The matrix C(q, q̇) is chosen such that the relation q̇ᵀ[Ṁ(q) − 2C(q, q̇)]q̇ = 0 holds for all q, q̇ ∈ R^n.
• The vector C(q, x)y satisfies C(q, x)y = C(q, y)x and C(q, z + αx)y = C(q, z)y + αC(q, x)y for all q, x, y, z ∈ R^n and α ∈ R.
• The matrix C(q, x) satisfies ‖C(q, x)‖ ≤ k_c‖x‖ for all x, q ∈ R^n, where k_c is a positive constant.
We further assume:
• A known constant W > 0 is an upper bound for the perturbation vector w; that is, ‖w‖ ≤ W.
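The skew-symmetry property q̇ᵀ[Ṁ(q) − 2C(q, q̇)]q̇ = 0 can be checked numerically for a standard two-link arm model. All parameter values below are hypothetical illustrations, not taken from the chapter.

```python
import numpy as np

# Two-link arm model used only to illustrate the properties above;
# the parameter values (masses, lengths, inertias) are hypothetical.
m1, m2, l1, lc1, lc2, I1, I2 = 1.0, 1.0, 1.0, 0.5, 0.5, 0.1, 0.1

def M(q):
    """Symmetric, positive-definite inertia matrix of the two-link arm."""
    c2 = np.cos(q[1])
    m11 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2)
    m12 = I2 + m2 * (lc2**2 + l1 * lc2 * c2)
    return np.array([[m11, m12], [m12, I2 + m2 * lc2**2]])

def C(q, qd):
    """Coriolis/centripetal matrix in the Christoffel form, for which
    Mdot - 2C is skew-symmetric (the second property above)."""
    h = -m2 * l1 * lc2 * np.sin(q[1])
    return np.array([[h * qd[1], h * (qd[0] + qd[1])],
                     [-h * qd[0], 0.0]])

def Mdot(q, qd, eps=1e-6):
    """Finite-difference approximation of dM/dt = (dM/dq) qdot."""
    return (M(q + eps * qd) - M(q)) / eps

q, qd = np.array([0.3, -1.2]), np.array([0.7, 0.4])
val = qd @ (Mdot(q, qd) - 2 * C(q, qd)) @ qd   # vanishes up to FD error
```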
Homogeneous Approach for Output Feedback Tracking Control of Robot Manipulators 397
lim_{t→∞} [q_d(t) − q(t)] = 0.    (5)
4. Homogeneous controller
Proposition 1. Consider the robot equation (4) along with the following control law:
U = M(q)q̈_r + C(q, q̇₀)q̇_r + g(q) − K_D(q̇₀ − q̇_r) − K_P e − K_α sign(e) − K_β sign(q̇₀ − q̇_r)
q̇_r = q̇_d − Λ(q̂ − q_d)    (6)
q̇₀ = dq̂/dt − Λz

dq̂/dt = p + (Λ + l_d I)z
ṗ = q̈_r + l_d Λz + M^{−1}(q)K_P[z − e + γ sign(z)] − M^{−1}(q)[K_α sign(e) + 2K_β sign(q̇₀ − q̇_r)]    (7)
where e = q − q_d is the n×1 tracking error vector; q̂ is the n×1 estimated position vector; z = q − q̂ is the n×1 observation error vector; Λ, K_P, K_D, K_α, K_β, γ are n×n diagonal positive-definite matrices; and l_d is a positive constant. Then, for any initial condition, sufficiently large controller gains in (6), (7) always exist such that (5) holds.
Proof. First, the equations of the closed-loop system [(4), (6), (7)] must be written in terms of the tracking and observation errors, which are given by

d/dt [e; z; s; r] =
[ ė;
  ż;
  M^{−1}(q)[K_D r − C(q, r)q̇_r − C(q, q̇)s − K_P e + w − K_D s − K_α sign(e) − K_β sign(s − r)];
  M^{−1}(q)[K_D r − C(q, r)q̇_r − C(q, q̇)s − K_D s − K_P z + w − l_d M(q)r + K_β sign(s − r) − γ sign(z)] ]    (8)

where s = q̇ − q̇_r = ė + Λ(e − z) and r = q̇ − q̇₀ = ż + Λz.
Consider the following Lyapunov function candidate:

V(x, t) = (1/2)eᵀK_P e + (1/2)sᵀM(q)s + (1/2)zᵀK_P z + (1/2)rᵀM(q)r + Σ_{i=1}^{n} K_{αi}|e_i| + Σ_{i=1}^{n} γ_i|z_i|    (9)

where K_{αi} and γ_i are the elements of the main diagonals of K_α and γ, respectively. The time derivative of V(x, t) along the solutions of (8) yields

V̇(x, t) = eᵀK_P ė + sᵀM(q)ṡ + (1/2)sᵀṀ(q)s + zᵀK_P ż + rᵀM(q)ṙ + (1/2)rᵀṀ(q)r + ėᵀK_α sign(e) + żᵀγ sign(z).
V̇(x, t) = −eᵀΛK_P e + eᵀΛK_P z − zᵀΛK_P z − sᵀK_D s − rᵀ[M(q)l_d − K_D]r + sᵀC(q, r)[s − q̇] + rᵀC(q, s)[r − q̇] − eᵀΛK_α sign(e) − (s − r)ᵀK_β sign(s − r) − zᵀΛγ sign(z) + zᵀΛK_α sign(e) + (s + r)ᵀw.    (10)
Using properties H1, H2, and H5, and employing the well-known inequality

2‖g‖‖h‖ ≤ ‖g‖² + ‖h‖²,  g, h ∈ R^n,    (11)

the following bound is obtained:
V̇(x, t) ≤ −(1/2)[‖e‖ ‖z‖] Q₁ [‖e‖; ‖z‖] − Q₂‖z‖ − [‖s‖ ‖r‖] Q₃ [‖s‖; ‖r‖] − λmin{ΛK_α}‖e‖ − λmin{K_β}‖s − r‖ + W‖s + r‖    (12)

where

Q₁ = [ λmin{ΛK_P}  0 ;  0  λmin{ΛK_P} ],
Q₂ = λmin{Λγ} − λmax{ΛK_α},
Q₃ = [ λmin{K_D} − k_c‖r‖   k_c‖q̇‖ ;  k_c‖q̇‖   m₁l_d − λmax{K_D} − k_c‖s‖ ].
In particular, it is required that

‖x‖ < (1/k_c) λmin{K_D}.
Thus it is always possible to find some controller gains to ensure that all the
above inequalities hold. Therefore, (12) is locally negative definite almost eve-
rywhere and the equilibrium point is exponentially stable.
Finally, we define the domain of attraction and prove that it can be enlarged
by increasing the controller gains. For this, we first find some positive constant
α1, α2 such that
α₁‖x‖₂² ≤ V(x, t) ≤ α₂‖x‖₂².    (13)

Indeed,

V ≥ (1/2)[ λmin{K_P}‖e‖² + m₁‖s‖² + λmin{K_P}‖z‖² + m₁‖r‖² ]
so we define α₁ as α₁ = (1/2) min{λmin{K_P}, m₁}.
Figure 1. Schematic of the 2-DOF manipulator: link lengths l₁, l₂, centers of mass at distances l_{c1}, l_{c2}, and joint angles q₁, q₂.
Similarly,

V ≤ (1/2)[ (λmax{K_P} + 2nλmax{K_α})‖e‖² + m₂‖s‖² + m₂‖r‖² + ⋯ ]

so we define α₂ accordingly. The estimate of the region of attraction is then

‖x‖ ≤ (1/k_c) λmin{K_D} √(α₁/α₂).    (14)
5. Simulation results
The desired trajectory was chosen as

q_{di} = π + (π/2)cos(2πft),  i = 1, 2,

where f = 10 Hz. The position and velocity initial conditions were set to q(0) = q̇(0) = 0 ∈ R². The motion of the 2-DOF manipulator with rotational joints, depicted in Figure 1, was governed by (4), with the model matrices taken from (Berghuis & Nijmeijer, 1993a). In the simulation, the control gains were selected as follows:
K_P = diag{10, 10}, K_D = diag{5, 5}, K_α = diag{5, 5}, K_β = diag{2, 2}, γ = diag{12, 12}.
The resulting position and observation errors of the closed-loop system (8) for
the unperturbed and perturbed case are depicted in Figure 2. This figure also
shows the control input. The figure demonstrates that the homogeneous con-
troller asymptotically stabilizes the manipulator in the desired trajectory, thus
satisfying the control objective (5). Figure 3 shows chattering due to the relay
part of the control law (last two terms of (6)):
u_h = −K_α sign(e) − K_β sign(q̇₀ − q̇_r).
For the sake of comparison, both the controller (6)-(7) and the controller without a relay part (K_α = K_β = γ = 0) proposed by Berghuis and Nijmeijer (1993b) were simulated under a permanent disturbance (w = 10). In contrast to the proposed controller, the simulation results depicted in Figure 4 show that the continuous controller drives the position of each joint of the manipulator to a steady-state error of 0.3 [rad]. We repeated the last simulations for a reference trajectory consisting of a circle in task space with radius r = 0.25 [m] and center (1, 1/2), which is given by
x_d = 1 + (1/4)cos(0.1t)
y_d = 1/2 + (1/4)sin(0.1t)
where xd and yd are coordinates in the Cartesian space and the initial position
in task space is fixed at x(0) = 2 [m] and y(0) = 0 [m]. The corresponding trajec-
tory in the joint space is
q_{d1} = tan^{−1}(x_d / y_d) − tan^{−1}( l₂ sin(q_{d2}) / (l₁ + l₂ cos(q_{d2})) ),
q_{d2} = cos^{−1}( (x_d² + y_d² − l₁² − l₂²) / (2 l₁ l₂) )
where l₁ and l₂ are the lengths of links 1 and 2, respectively. Figure 5 illustrates the effectiveness of the controller.
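The joint-space reference above can be sketched and verified by a forward-kinematics round trip. The link lengths l₁ = l₂ = 1 m are an assumption (consistent with the stated initial task-space position (2, 0)), as is the angle convention implied by the tan^{−1}(x_d/y_d) term.

```python
import math

# Assumed link lengths: the chapter keeps l1, l2 symbolic, but
# l1 = l2 = 1 m matches the initial task-space position (2, 0).
l1 = l2 = 1.0

def circle_reference(t):
    """Desired Cartesian trajectory: circle of radius 0.25 m, centre (1, 1/2)."""
    return 1.0 + 0.25 * math.cos(0.1 * t), 0.5 + 0.25 * math.sin(0.1 * t)

def inverse_kinematics(xd, yd):
    """Joint-space reference per the chapter's formulas for q_d1, q_d2."""
    qd2 = math.acos((xd**2 + yd**2 - l1**2 - l2**2) / (2.0 * l1 * l2))
    qd1 = math.atan2(xd, yd) - math.atan2(l2 * math.sin(qd2),
                                          l1 + l2 * math.cos(qd2))
    return qd1, qd2

def forward_kinematics(q1, q2):
    """Round-trip check; angles are measured from the y-axis, the
    convention implied by the chapter's tan^-1(x_d / y_d) term."""
    return (l1 * math.sin(q1) + l2 * math.sin(q1 + q2),
            l1 * math.cos(q1) + l2 * math.cos(q1 + q2))
```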
Figure 2. Joint position errors, observation errors, and control input for the unper-
turbed case (left column) and perturbed case (right column).
Figure 4. Joint position errors for the continuous controller (i.e., Kα=Kβ=γ=0): The perturbed
case.
Figure 5. Motion of the end effector of the manipulator following a circular path (x) using the homogeneous controller (a) and the continuous controller (b).
6. Conclusions
7. References
Bacciotti, A. & Rosier, L. (2005). Liapunov Functions and Stability in Control Theory: 2nd edition, Springer, 978-3-540-21332-1, Berlin.
Bartolini, G., Orani, N.; Pisano, A. & Usai, E. (2006). Higher-order sliding
mode approaches for control and estimation in electrical drives, In: Ad-
vances in variable structure and sliding mode control, C. Edwards, E. Fossas
& L. Fridman, (Ed.), 423-445, Springer, 3-5403-2800-9, London.
Berghuis, H. & Nijmeijer, H. (1993a). Global regulation of robots using only
position measurements. Systems and Control Letters, Vol. 21, 289-293,
0167-6911.
Berghuis, H. & Nijmeijer, H. (1993b). A passivity approach to controller-observer design for robots. IEEE Transactions on Robotics and Automation, Vol. 9, No. 6, 740-754.
Bhat, S. & Bernstein, D. (1997). Finite time stability of homogeneous systems, Proceedings of the 1997 American Control Conference, pp. 2513-2514, 0-7803-3832-4, Albuquerque, USA, June 1997, IEEE.
Ferrara, A.; Giacomini, L. & Vecchio, C. (2006). Control of nonholonomic sys-
tems with uncertainties via second order sliding modes, Proceedings of
the 2006 American Control Conference, pp. 5384-5389, 1-4244-0210-7, Min-
neapolis, USA, June 2006, IEEE.
Filippov, A.F. (1988). Differential equations with discontinuous right-hand sides,
Kluwer academic, 9-0277-2699-X, Dordrecht.
Fridman, L. & Levant, A. (1996). Higher order sliding modes as a natural phenomenon in control theory, In: Robust Control via Variable Structure and Lyapunov Techniques, Garofalo & Glielmo (Ed.), Lecture Notes in Control and Information Sciences, 217, 107-133, Springer, 3-5407-6067-9, Berlin.
Fridman, L. & Levant, A. (2002). Higher order sliding modes, In Sliding mode
control in engineering, W. Perruquetti and J.P. Barbot, (Ed.), 53-102, Mar-
cel Dekker, 0-8247-0671-4, New York.
Hermes, H. (1995). Homogeneous feedback control for homogeneous systems.
Systems and Control Letters, Vol. 24, 7-11, 0167-6911.
Hong, Y.; Huang, J. & Xu, Y. (2001). On an output feedback finite-time stabili-
zation problem. IEEE Transactions on Automatic Control, Vol. 46, No. 2,
305-309, 0018-9286.
Kelly, R.; Santibañez, V. & Loría, A. (2005). Control of robot manipulators in joint
space, Springer, 1-8523-3994-2, London.
Lebastard, V.; Aoustin, Y.; Plestan, F. & Fridman, L. (2006). Absolute orientation estimation based on high order sliding mode observer for a five link walking biped robot. Proceedings of the International Workshop on Variable Structure Systems, pp. 373-378, 1-4244-0208-5, Alghero, Italy.
Levant, A. (2005a). Homogeneity approach to high-order sliding mode design.
Automatica, Vol. 41, 823-830, 0005-1098.
1. Introduction
investigate which design strategy leads to the best control performance under
various robot conditions. Section 5 concludes this chapter.
The basic structure of the fuzzy logic controller (FLC) most commonly found
in the literature is presented in Fig. 1 (Lee, 1990a). The basic configuration of a
fuzzy system is composed of a fuzzification interface, a knowledge base, a
fuzzy inference machine and a defuzzification interface as illustrated in the
upper section of Fig. 1. The measured values of the crisp input variables are
mapped into the corresponding linguistic values or the fuzzy set universe of
discourse at the fuzzification interface. The knowledge base comprises both
the fuzzy data and the fuzzy control rules. The fuzzy database contains all the necessary definitions used in defining the fuzzy sets and linguistic control rules, whereas the fuzzy control rule base includes the necessary control goals and control policy, as defined by experts, in the form of a set of linguistic rules. The fuzzy inference engine emulates human decision-making skills by employing fuzzy concepts and inferring fuzzy control actions from the rules of inference associated with fuzzy logic. In contrast to the fuzzification stage, the defuzzification interface converts the values of the fuzzy output variables back onto the corresponding universe of discourse, yielding a non-fuzzy control action from the inferred fuzzy control action.
In general, for a regulation control task, the fuzzy logic controller maps the significant and observable variables to the manipulated variable(s) through the chosen fuzzy relationships. The feedback from the process output is normally returned as a crisp input to the fuzzification interface. The crisp, non-fuzzy inputs illustrated in Fig. 1 would normally include both the error and the change in error, and these are mapped to their fuzzy counterparts at the fuzzification stage. These latter variables are the inputs to the compositional rules of inference, from which the fuzzy manipulated variable is obtained. At the output of the defuzzification process, a crisp manipulated variable is available for input to the process. In conclusion, to design a fuzzy logic controller, six essential stages must be completed:
In any fuzzy logic control system, the observed input must be fuzzified before it is introduced to the control algorithm. The most commonly used antecedents at this fuzzification stage are the state variables error and rate of change in error. For the case of positioning a joint within a robot arm, the first variable is the difference (error) between the desired and the current joint position. The value of the second state variable is the numerical difference between two successive values of error (change in error). These two state variables give a good indication of the instantaneous performance of the system, and both are quantifiable by fuzzy sets. In this project, error (E) and change in error (CE) are defined as the input fuzzy sets and the control action (CU) as the output fuzzy set. The error and the change in error at sample interval k are calculated as follows:
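A minimal sketch of the two state variables, assuming the standard definitions stated above (the chapter's exact expressions are not reproduced, and the function name is hypothetical):

```python
def fuzzy_inputs(desired, measured, previous_error):
    """Sketch of the two FLC state variables at sample k: error E(k) is
    the difference between the desired and current joint positions, and
    change in error CE(k) is the difference between two successive
    error values."""
    error = desired - measured               # E(k), e.g. in encoder counts
    change = error - previous_error          # CE(k) = E(k) - E(k-1)
    return error, change
```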
According to Lee (1990a), there are two methods for defining a fuzzy set: numerical and functional. So that a control action can be inferred for every state of the process, the union of the support sets on which the primary fuzzy sets are defined should cover the associated universe of discourse in relation to some value ε. This property is referred to as "ε-completeness" by Lee (1990a). To ensure that a dominant rule always exists, the recommended value of ε at the crossover point of two overlapping fuzzy sets is 0.5; at this value of ε, two dominant rules will be fired. To define the input fuzzy sets, error (E) and change in error (CE), the following procedure is adopted. For the former, the maximum range of error for a particular joint actuator is calculated. For example, a robot waist joint with an encoder resolution of 0.025 degrees per count and a maximum allowable rotation of 300.0 degrees would result in a maximum positional error of 12000 counts. A typical schematic representation of the error fuzzy-set universe of discourse is illustrated in Fig. 2. The linguistic terms used to describe the fuzzy sets in Fig. 2 are:
To determine the domain size for the change-in-error variable in this project, an open-loop test was conducted. In this test, a whole range of voltages (from the minimum to the maximum) was applied to each of the robot joint actuators, and the respective change in angular motion error was recorded at every sample interval. From this information, the fuzzy sets illustrated in Fig. 3 for the change in error were initially estimated. Although the open-loop response of the system differs from the closed-loop response, it gives a good initial guide to the size of the domain appropriate for the fuzzy logic controller.
It should be noted that the choice of sampling interval is very important, because it affects the maximum change-in-error value recorded. It was found that using a very high sampling rate caused the recorded maximum change in angular motion error to be close to zero, making it impossible to define the location of each fuzzy set in the domain of discourse. For example, a sampling period of 0.001 seconds results in a maximum change in waist positional error of 2 counts, a value found experimentally. The control-variable output fuzzy sets were selected in a similar manner. In this particular case, however, the dimensionality of the space is determined by the resolution of the available D/A converters. The adopted D/A converters are of an 8-bit type, which yields 256 resolution levels, as indicated on the horizontal axis in Fig. 4(a). Again, the universe of discourse was partitioned into 7 fuzzy-set zones, as depicted in Fig. 4(b).
It should be noted that the fuzzy set labelled Zero is defined across the dead
zone of the dc-servo motor in order to compensate for the static characteristics
of the motor in this region. The initial sizes and distribution of the fuzzy sets
are tuned during operation to improve the closed loop performance of the sys-
tem.
(3)
where the gradients slope.ab and slope.cd are calculated from expressions (4) and (5).
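A trapezoidal membership evaluation consistent with the rising and falling gradients slope.ab and slope.cd can be sketched as follows. The corner-point parameterisation a < b ≤ c < d is an assumption, since Eqs. (3)-(5) themselves are not reproduced in the text.

```python
def trapezoid_membership(x, a, b, c, d):
    """Membership grade of crisp input x in a trapezoidal fuzzy set with
    corner points a < b <= c < d. The gradients of the rising (a-b) and
    falling (c-d) edges play the role of slope.ab and slope.cd."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        slope_ab = 1.0 / (b - a)             # rising-edge gradient
        return slope_ab * (x - a)
    slope_cd = -1.0 / (d - c)                # falling-edge gradient
    return 1.0 + slope_cd * (x - c)
```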
The fuzzy rule base employed in the FLC contains fuzzy conditional statements which are currently chosen by the practitioner from a detailed knowledge of the operational characteristics of the process to be controlled. The fuzzy rule base can be derived by adopting a combination of four practical approaches, which are not mutually exclusive but together are the most likely to provide an effective rule base. These can be summarised as follows (Lee, 1990a):
1. Expert experience and control engineering knowledge. By nature, most human decision-making is based on linguistic rather than numerical descriptions. From this point of view, fuzzy control rules provide a natural framework for characterising human behaviour and decision-making through fuzzy conditional statements and an inference mechanism.
2. Operational experience. The process performance that a human operator can achieve when controlling a complex process is remarkable, because his reactions are mostly instinctive. Through conscious or subconscious conditional statements, an operator derives an effective control strategy. These rules can be deduced from observations of the actions of the human controller in terms of the input and output operating data.
3. Fuzzy model of the process. The linguistic description of the dynamic characteristics of a controlled process may be viewed as a fuzzy model of the process. Based on this fuzzy model, a set of fuzzy control rules can be generated to attain optimal performance from the dynamic system.
4. Learning. Emulation of the human learning ability can be carried out through the automatic generation and modification of fuzzy control rules from the experience gained.
The rule-base strategy adopted in this work
(6)

involves evaluating the fuzzy conditional statements and searching for appropriate
rules to form the output action. As mentioned earlier, the universes of discourse
of the input and output variables (error, change in error and control action),
UE, UCE and UCU respectively, are all chosen to be discrete and finite, and are
of the form:
(7)
(8)
At sample interval k, the jth fuzzy control rule, equation (8), can be expressed
as;
(9)
where e(k), ce(k) and cu(k) denote the error, change in error and manipulated
control variable respectively. The jth fuzzy subsets Ej , CEj and CUj are defined
as;
(10)
Alternatively, Equation (9) can be evaluated through the use of the composi-
tional rule of inference. If the minimum operator is utilised, the resulting
membership function can be expressed as;
(11)
(12)
(13)
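The min-operator inference described above can be sketched as follows. The triangular membership shape and the rule structure are illustrative assumptions for the sketch, not the chapter's actual set definitions.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rule_firing_strength(e, ce, e_set, ce_set):
    """Min operator combines the two antecedent membership grades."""
    return min(tri(e, *e_set), tri(ce, *ce_set))

def clipped_output(strength, cu_set, z):
    """Output membership clipped at the rule firing strength."""
    return min(strength, tri(z, *cu_set))
```

A rule whose antecedents have zero membership grade fires with strength zero, which is why, as noted later in the chapter, only inputs with non-zero grades need to be evaluated on-line.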
Design and Implementation of Fuzzy Control for Industrial Robot 419
Several approaches (Lee, 1990b) have been proposed to map the fuzzy control
action to a crisp value for input to the process. Basically, all have the same
aim, that is, how best to represent the distribution of an inferred fuzzy
control action as a crisp value. The defuzzification strategies most frequently
found in the literature are the maximum method and the centre of area method:
1. The maximum method. This strategy produces as its crisp output the value at
which the inferred possibility distribution of the control action attains its
maximum:

(14)

In the case when there is more than one maximum membership grade in W, the
value of $z_0$ is determined by averaging all local maxima in W. This approach,
known as the mean of maximum method (MOM), is expressed as:

(15)

where $z_i = \max_{z \in W} \mu_{CU}(z)$ and $n$ is the number of times the
maximum membership grade is attained.
2. The centre of area method (COA). The centre of area method, sometimes called
the centroid method, produces the centre of gravity of the possibility
distribution of a control action. This technique finds the balance point in the
output domain of the universe of discourse. For a discrete universe of
discourse with m quantisation levels in the output, the COA method produces:

(16)
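The two defuzzification strategies can be sketched for a discrete universe of discourse as below; the sample distribution is illustrative.

```python
def mean_of_maximum(z, mu):
    """MOM: average the support points whose membership grade
    equals the maximum of the inferred distribution."""
    m = max(mu)
    peaks = [zi for zi, g in zip(z, mu) if g == m]
    return sum(peaks) / len(peaks)

def centre_of_area(z, mu):
    """COA: centroid of the discrete possibility distribution
    over the m quantisation levels of the output."""
    return sum(zi * g for zi, g in zip(z, mu)) / sum(mu)
```

For a symmetric distribution the two methods agree; they diverge when the inferred distribution is skewed, which is one reason COA is usually preferred for smooth control surfaces.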
3. Experimental Setup
The robot control system is composed of the host computer, the transputer
network, and the interface system to a small industrial robot. The schematic
representation of the control structure is presented in Fig. 6.
The controller structure is hierarchically arranged. At the top level of the sys-
tem hierarchy is a desktop computer which has a supervisory role for support-
ing the transputer network and providing the necessary user interface and disc
storage facilities. The Transputer Development System acts as an operating
system with Occam II as the programming language. At the lower level are the
INMOS transputers; in this application one T800 host transputer is resident on
the COMET board and mounted in one of the expansion slots of the desktop
computer.
Figure 7. System hardware and interfacing. (a) host computer, (b) B004 transputer
system, (c) GESPIA card, (d) DAC cards, (e) counter cards and (f) power amplifier.
This industrial robot is a five-degree-of-freedom robot with a vertical
multi-joint configuration. The robot actuators are all direct-current servo
motors, but of different powers. At the end of each joint, a sensor is provided
to limit the angular movement. The length of each link and its associated
maximum angular motion are listed in Table 2. Fig. 9(a) and 9(b) illustrate the
details of the robot dimensions and its working envelope. The maximum
permissible handling weight capacity is 1.2 kg including the weight of the end
effector.

Table 2. The Mitsubishi RM-501 Move Master II geometry.

Figure 9(a). Range of movement of waist joint and robot dimensions (all
dimensions are measured in millimetres).

Figure 9(b). Robot dimensions and range of movement when hand is not attached.
4. Experimental Studies
A program code for the development of the FLC was written in the Occam
language and executed in a transputer environment. This approach enables the
evaluation of the robustness of the proposed controller design as applied to
the first three joints of the RM-501 Mitsubishi industrial robot. A T800
transputer is assigned to position each joint of the robot independently. To
determine the effect of changing different controller parameters on controller
performance, only one joint is actuated while the other two are locked. In the
first experiment, the impact of changes in sample interval on overall robot
performance was assessed. This was followed by an investigation into how best
to tune the controller algorithm and whether guidelines can be identified for
future use. The problem is to overcome the effect of changing robot arm
configuration together with varying payload conditions.
Inputs (error and change in error) to the fuzzy logic control algorithm that
have zero membership grades will cause the membership grades of the output
fuzzy sets to be zero. For each sample period, the on-line evaluation of the al-
gorithm with 49 control rules has been found by experiment to be 0.4 millisec-
onds or less. Hence, to shorten the run time, only inputs with non-zero mem-
bership grades are evaluated. For times of this magnitude, real-time control is
possible for the three major joint controllers proposed. It has been cited in the
literature that it is appropriate to use a 0.016 seconds sampling period (60
Hertz) because of its general availability and because the mechanical resonant
frequency of most manipulators is around 5 to 10 Hz (Fu et al., 1987).
Experiments have been carried out to determine how much improvement can be
achieved by shortening the sampling period from 0.02 seconds to 0.01 seconds. In
the first experiment, the waist joint is subjected to a 60.0 degree (1.047 radian
or 2400 counter count) step disturbance with all other joints in a temporary
state of rest. The results shown in Fig. 10 suggest that very little improvement
in transient behaviour will be achieved by employing the shorter sampling
period. The only benefit gained is a reduction of 0.4 seconds in the time to
reach the steady state. In a second test, the waist joint is commanded to start from its
zero position and to reach a position of 5 degree (0.0087 radian or 20 counter
count) in 2 seconds; it remains at this position for an interval of 1 second after
which it is required to return to its home position in 2 seconds as shown in
Fig. 11. Again the benefit is only very marginal and of no significance for most
industrial applications. Despite these results, it was decided that the higher of
the two sampling rates would generally ensure better transient behaviour,
hence the 0.01 seconds sampling period is used throughout this project.
Figure 10. Waist response to a step input for different sampling periods.
Tuning of a FLC may be carried out in a number of ways; for instance modify-
ing the control rules, adjusting the support of the fuzzy sets defining magni-
tude and changing the quantisation levels. The objective is always to minimise
the difference between the desired and the actual response of the system. Be-
cause of the large number of possible combinations available and the different
operational specifications that exist, no formal procedure exists for tuning the
parameters of a FLC. In practice, trial and observation is the procedure most
commonly employed. This can be tedious and time consuming, may not result in
the selection of the most suitable parameters, and in many practical situations
requires safeguards to prevent excessive output variations and subsequent plant
damage. To establish a rule base for a FLC, it is necessary to
select an initial set of rules either intuitively or by the combination of methods
described in Figure 2.3 for the process to be controlled. Rule modifications can
be studied by monitoring the response of the closed-loop system to an aperiodic
disturbance in a phase plane for error and change in error in this case. This
trial and observation procedure is then repeated as often as required until an
acceptable response is produced. In this project, three different ways of
tuning the controller have been investigated. The initial control rules were
selected by modifying the rule base initiated by Daley (1984); the
modifications were necessary to ensure a faster transient response with
minimum overshoot and steady-state error for the robot arm. The rule base
listed in Table 1 was found to be the most appropriate for the robotic control
process. A second procedure for tuning a FLC is
investigated in this section and the final method will be presented in the next
section. The use of two inputs and one output means that there is a three di-
mensional space in which to select the optimal solution. In most reported
cases, scaling factors or gains were introduced to quantify these three
universes of discourse, making it possible to tune a controller by varying the
values of these gain terms. However, in this project, the functional forms of
the fuzzy sets were utilised, and these were mapped directly to the
corresponding universe of discourse. Tuning is thereby carried out by adjusting
or redefining each fuzzy set location in the universe of discourse. The
strategy developed in this project is to show the effect of changing each of
the input and output definitions, to establish the impact on the overall
performance of the robot. The initial estimates for the fuzzy sets employed in the three domains of
discourse were made off-line as detailed in Figure 2.2.1. Fig. 12 shows the es-
timates for the error fuzzy set definitions. The corresponding estimates for the
change in error and controlled action fuzzy set definitions are plotted in Fig. 13
and Fig. 14, respectively. Tuning of the error fuzzy sets is made by gradually
moving the fuzzy set locations in the universe of discourse closer to the zero
value of error. A similar procedure is adopted to tune the output fuzzy sets,
however, the initial selection positioned these fuzzy sets as close as possible to
the equilibrium point. For these sets, tuning is executed by gradually moving
the fuzzy sets away from the equilibrium point until an acceptable close loop
response is found. To demonstrate the effect of changing the error fuzzy sets
definition, three choices of error fuzzy set definition were made; these are
plotted in Fig. 12(a) - 12(c) and named Cases 1, 2 and 3. The other two fuzzy
set definitions remained unchanged and are shown in Fig. 13 and 14. A step disturbance
of 30.0 degree (1200 counter counts or 0.523 radian) was introduced at the
waist joint and the other joints, shoulder and elbow, were locked at 10 degrees
and 90 degrees respectively2. This robot arm configuration was chosen to ex-
clude variations in inertia, gravitational force and cross coupling resulting
from the movement of these two joints. The impact from the use of an inap-
propriate definition for the output fuzzy sets is significant when variations in
these forces are present, and this is studied in later experiments. The joint
response to a 30.0 degree step disturbance is shown in Fig. 15(a) and the
associated control signals are shown in Fig. 15(b). Notice in Figure 15(a)
that, at about 1.0 second, the joint response for error fuzzy set definition
Case 1 starts to deviate from the other two cases because of the larger spread
of its fuzzy sets, as depicted in Fig. 12(a). There is no
noticeable difference in transient response between Cases 2 and 3 because the
fuzzy set definitions are close to the equilibrium point. However, differences
start to emerge as the two responses approach the chosen set point; the Case 3
response is more oscillatory. This corresponds to the excessive voltage variations
that can be observed in Fig. 15(b). This phenomenon occurred because of the
use of a very tight definition for error (close to zero) in Case 3 (Fig. 12(c))
which leads to a large overshoot and hunting around the set point. The use of
the fuzzy set definition, Case 2 gives a good accuracy throughout the whole
envelope of operation for the waist joint with this particular robot arm
configuration. The maximum steady-state error is found to be 0.0013 radian
(3 counter counts), as a result of the coarseness of the fuzzy sets used; the
fuzzy set labelled ZERO is defined between −5 and +5 counts.
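The trial-and-observation tuning loop described above can be sketched as below. The contraction factor and the acceptance test are hypothetical stand-ins for the experimenter's judgement of the closed-loop response; they are not values taken from the chapter.

```python
def tune_by_trial(centres, acceptable, contract=0.9, max_iters=50):
    """Trial-and-observation tuning: contract the error fuzzy set
    centres towards zero error until the observed closed-loop
    response is judged acceptable (or the iteration budget runs out)."""
    for _ in range(max_iters):
        if acceptable(centres):
            return centres
        centres = [contract * c for c in centres]
    return centres
```

The same loop with a factor greater than one expands the output set centres away from the equilibrium point, mirroring the opposite direction used for the output fuzzy sets.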
Figure 15(a). Waist response for different error fuzzy set definitions.
Figure 15(b). Control signals for different error fuzzy set definitions.
The drawback of using the smaller fuzzy set combinations is obvious when the
waist is subjected to a large step disturbance, for example at 30.0 degrees (1200
counter count or 0.523 radian) with the arm in the second configuration. The
subsequent waist response is plotted in Fig. 20 together with the response for
the second definition of the fuzzy set combinations. The waist responses when
using the smaller fuzzy set combinations, Fig. 16 and Fig. 20, show that the
controller could position the waist with a smaller overshoot of 0.025 degree (1
counter count) and zero steady state error, however, the penalty to be paid is
an increase in rise time. The zero steady state error is achieved because of the
use of a fuzzy singleton definition in the error fuzzy set labelled ZERO, i.e. the
error fuzzy set is defined between ± 1 in the universe of discourse as is de-
picted in Fig. 17. Although the Case 2 fuzzy set combinations can provide a
faster response (about 1.2 seconds quicker for a 30.0 degree step input), the
overshoot (0.25 degree) and steady state error (0.075 degree) are both greater,
Fig. 20. These results deteriorate when the controller operates under gravita-
tional force and variable payload. A further comparison to evaluate the per-
formance between the smaller fuzzy set combinations and the Case 2 fuzzy set
combinations is conducted by subjecting the waist to a sinusoidal signal
disturbance of 30.0 degree amplitude and a 4.0 second time period. Fig. 21(a)
shows clearly that trajectory following with the Case 2 fuzzy set combinations
is by far the better result. Fig. 21(b) illustrates that the use of a smaller
range of voltage in the output fuzzy set definition could not generate an
adequate controlled signal. These results suggest that tuning can never
simultaneously optimise speed of response and accuracy. If fuzzy logic is to be
used successfully in industrial process control, a method which can provide a
fast transient response with minimum overshoot and steady-state error must be found. One
way to achieve this is to partition the problem into coarse and fine control,
an approach suggested by Li and Lau (1989).
Figure 20. Waist response for different tuned fuzzy sets at second robot arm configu-
ration.
Figure 21(a). Response of the smaller and Case 2 fuzzy set combinations to sinusoidal
tracking.
Having investigated the problems associated with the control of the waist
joint, the investigation was extended to the more difficult upper-arm link, the
shoulder joint. The control of this joint is difficult because of the
gravitational force acting on it. For example, when the elbow is fully
stretched and the shoulder is at the 30.0 degree location of its working
envelope, a load of 0.4 kg is enough to drag the shoulder joint downwards with
0 voltage (127 DAC input value) applied to the actuator. The use of a single
output fuzzy set definition was found to be suitable only for a limited range
of operation, and not applicable to the robot employed in this study. To
illustrate this limitation, Fig. 22 shows the
effect of using a single output definition in 4 different operational regions
when the elbow is fully stretched. To compensate for the gravitational loading
effect, 4 operational regions were identified, and each was assigned a different
output fuzzy set. The switches 1, 2, 3 and 4 would control the choice of the
output fuzzy set in the range of 0 to 10 degrees, 10 to 30 degrees, 30 to 90 de-
grees and 90 to 130 degrees of the shoulder joint working envelop, respec-
tively. The four switched output fuzzy sets are presented in Fig. 23(a) - 23(d)
and these have been tuned as previously discussed. In all four modes of opera-
tion, the input fuzzy set combinations of Case 2 were utilised. From Fig. 23(a) -
23(d), it is obvious that the fuzzy sets labelled ZERO is moving towards the
right of the plot from the left as the region of operation moves from 1 to 4. This
is to compensate for the gravitational load which forces the joint to overshoot
when moving downwards as can be seen in Fig. 20. It should be noted that the
use of the switches in selecting the output fuzzy set definition is just a coarse
estimate, and as a result can give a maximum steady-state error of up to 0.125
degrees (5 counter counts) over the shoulder joint working envelope. If more
accurate positional control is needed, it will be necessary to increase the
number of switching regions, or alternatively a different method will have to
be found.
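The four-region switching scheme can be sketched as a simple lookup. The boundary values follow the ranges quoted above, while the set objects themselves are placeholders.

```python
# Region boundaries in degrees, as quoted for switches 1 to 4.
REGIONS = [(0.0, 10.0), (10.0, 30.0), (30.0, 90.0), (90.0, 130.0)]

def select_output_set(theta_deg, output_sets):
    """Pick the output fuzzy set definition for the current shoulder
    angle; saturate at the last region at the end of the envelope."""
    for (lo, hi), s in zip(REGIONS, output_sets):
        if lo <= theta_deg < hi:
            return s
    return output_sets[-1]
```

This is a coarse gain-scheduling device: increasing the number of regions in `REGIONS` is the direct way to reduce the residual steady-state error mentioned above.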
Figure 23(a). Switch 1; control action fuzzy set definitions for shoulder.
Figure 23(b). Switch 2; control action fuzzy set definitions for shoulder.
Figure 23(c). Switch 3; control action fuzzy set definitions for shoulder.
Figure 23(d). Switch 4; control action fuzzy set definitions for shoulder.
5. Conclusion
In this chapter, a methodology for the application of fuzzy logic theory to the
development of a fuzzy logic controller has been presented and successfully
implemented. The developed algorithm has been shown to be of simple design and
implementable for the real-time operation of a three-joint industrial robot
using joint space variables for control. A methodology to estimate the initial
fuzzy sets has been presented. The use of functional-form templates to
represent the fuzzy sets provides a way to map these fuzzy sets directly into
the corresponding universe of discourse. Unfortunately, this design could only
be arrived at by a trial and observation procedure, which suggests that a more
formal procedure must be developed for industrial applications. Furthermore,
design by trial and observation cannot be guaranteed to yield the best result.
In conclusion, it has been shown that a FLC can be designed to match specific
process dynamics without the use of a process model within the control loop.
Therefore, if automatic tuning can be introduced into the FLC design, a very
robust control approach will result, and this could be directly applied to any
poorly defined non-linear process.
6. References
Lee, C.S.G., Chung, M.J., Turney, J.L. & Mudge, T.N. (1982). On the control of
mechanical manipulators, Proceedings of the Sixth IFAC Conference on
Estimation and Parameter Identification, pp. 1454-1459, Washington DC,
June 1982.
Li, Y.F. & Lau, C.C. (1989). Development of fuzzy algorithms for servo
systems, IEEE Control Systems Magazine, pp. 65-71, April 1989.
Luh, J.Y.S., Walker, M.W. & Paul, R. (1980). Resolved acceleration control of
mechanical manipulators, IEEE Transactions on Automatic Control, Vol.
AC-25, No. 3, pp. 468-474.
Sugeno, M. (1985). An introductory survey of fuzzy control, Information
Sciences, Vol. 36, pp. 59-83, 1985.
Ying, H., Siler, W. & Buckley, J.J. (1990). Fuzzy control theory: A nonlinear
case, Automatica, Vol. 26, No. 3, pp. 513-520, 1990.
Lee, C.C. (1990a). Fuzzy logic in control systems: Fuzzy logic controller - Part
I, IEEE Transactions on Systems, Man & Cybernetics, Vol. 20, No. 2,
pp. 404-418, March/April 1990.
Lee, C.C. (1990b). Fuzzy logic in control systems: Fuzzy logic controller - Part
II, IEEE Transactions on Systems, Man & Cybernetics, Vol. 20, No. 2,
pp. 419-435, March/April 1990.
Fu, K.S., Gonzalez, R.C. & Lee, C.S.G. (1987). Robotics: Control, Sensing,
Vision and Intelligence, McGraw-Hill International Edition, New York, 1987.
Daley, S. (1984). Analysis of fuzzy logic control algorithms and their
application to engineering systems, Ph.D. thesis, University of Leeds, UK, 1984.
16

Recep Burkan

1. Introduction
In pure adaptive control laws, parameters are updated in time and there is no
additional control input. In robust control laws, by contrast, the parameters
are not adaptive, and a fixed (or adaptive) uncertainty bound is used as an
additional control input. The studies (Burkan, 2002; Uzmay & Burkan, 2002;
Burkan & Uzmay, 2003a; Burkan & Uzmay, 2006) adapt previous results on both
robust and adaptive control techniques for robot manipulators in a unified
scheme, so an adaptive-robust control law is proposed. As distinct from
previous studies, variable functions are used in the derivation, and the
parameter and bound estimation laws are updated using exponential and
logarithmic functions depending on the robot link parameters and tracking
error.
rected desired velocity and acceleration vectors for nonlinearities and
decoupling effects are proposed as:

$\dot{q}_r = \dot{q}_d + \Lambda\tilde{q}, \qquad \ddot{q}_r = \ddot{q}_d + \Lambda\dot{\tilde{q}}$  (3)

where $\Lambda$ is a positive definite matrix. Then the following control law is
considered:

$\tau = M(q)\ddot{q}_r + C(q,\dot{q})\dot{q}_r + G(q) + K\sigma$  (4)

where $\sigma = \dot{q}_r - \dot{q} = \dot{\tilde{q}} + \Lambda\tilde{q}$ is a corrected velocity error and $K\sigma$ is the
vector of PD action.
Suppose that the computational model has the same structure as that of the
manipulator dynamic model, but its parameters are not known exactly. The
control law (4) is then modified into:

$\tau = \hat{M}(q)\ddot{q}_r + \hat{C}(q,\dot{q})\dot{q}_r + \hat{G}(q) + K\sigma$  (5)

Substituting (5) into the manipulator dynamics yields the closed-loop error
equation:

$M(q)\dot{\sigma} + C(q,\dot{q})\sigma + K\sigma = -\tilde{M}(q)\ddot{q}_r - \tilde{C}(q,\dot{q})\dot{q}_r - \tilde{G} = -Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\tilde{\pi}$  (6)

where

$\tilde{M} = \hat{M} - M, \qquad \tilde{C} = \hat{C} - C, \qquad \tilde{G} = \hat{G} - G$  (7)

Consider the Lyapunov function candidate

$V(\sigma, \tilde{q}, \tilde{\pi}) = \frac{1}{2}\sigma^{T}M(q)\sigma + \frac{1}{2}\tilde{q}^{T}B\tilde{q} + \frac{1}{2}\tilde{\pi}^{T}K_{\pi}\tilde{\pi} > 0$  (8)

whose time derivative along the closed-loop system is

$\dot{V} = -\dot{\tilde{q}}^{T}K\dot{\tilde{q}} - \tilde{q}^{T}\Lambda K\Lambda\tilde{q} + \tilde{\pi}^{T}\left(K_{\pi}\dot{\tilde{\pi}} - Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma\right)$  (9)

Choosing the parameter estimation law

$\dot{\hat{\pi}} = K_{\pi}^{-1}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma$  (10)

cancels the last term of (9), leaving

$\dot{V} = -\dot{\tilde{q}}^{T}K\dot{\tilde{q}} - \tilde{q}^{T}\Lambda K\Lambda\tilde{q}$  (11)
So $\dot{V}$ is negative semidefinite and the system (6) is stable. It should be noted
that $\dot{\tilde{\pi}} = \dot{\hat{\pi}}$ ($\pi$ is constant) (Sciavicco & Siciliano, 1996). The parameter
estimation law (10) can also be written as

$\hat{\pi}(t) = \hat{\pi}(0) + \int_0^t K_{\pi}^{-1}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma \, dt$  (12)

where $\hat{\pi}(0)$ is the initial estimate of the parameters. The resulting block
diagram of the adaptive control law is given in Fig. 1 (Sciavicco & Siciliano, 1996).

Figure 1. Implementation of the adaptive control law (10) (Sciavicco & Siciliano, 1996).
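A discrete-time sketch of the estimation law above, integrating $\dot{\hat{\pi}} = K_\pi^{-1}Y^{T}\sigma$ with a forward-Euler step; the gain matrix and time step below are illustrative values, not taken from the chapter.

```python
import numpy as np

def adapt_step(pi_hat, Y, sigma, K_pi_inv, dt):
    """One Euler step of the gradient-type parameter update
    pi_hat_dot = K_pi^-1 Y^T sigma."""
    return pi_hat + dt * (K_pi_inv @ Y.T @ sigma)
```

In a controller loop the returned estimate is fed back into the regressor-based torque computation at the next sample.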
Consider the nominal control vector for the model system described by
Equations (1) and (2):

$\tau_0 = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\pi_0 + K\sigma$  (13)

The definition of the nominal control law $\tau_0$ is based on the adaptive
algorithm of Slotine and Li (1987). It is important to understand that the
nominal control vector $\tau_0$ in Equation (13) is defined in terms of fixed
parameters which are not changed or updated in time, as they would be in an
adaptive control strategy. The control input $\tau$ can be defined in terms of the
nominal control vector $\tau_0$ and a compensation vector for parameter
variations as:

$\tau = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)(\pi_0 + u(t)) + K\sigma$  (14)

where

$\tilde{q} = q - q_d; \qquad \dot{q}_r = \dot{q}_d - \Lambda\tilde{q}; \qquad \ddot{q}_r = \ddot{q}_d - \Lambda\dot{\tilde{q}}$  (15)

$\tilde{\pi} = \pi - \pi_0, \qquad \|\tilde{\pi}\| \le \rho$  (16)
Let $\varepsilon > 0$ and define the additional control vector, as given by Spong (1992), as:

$u(t) = \begin{cases} -\rho \dfrac{Y^{T}\sigma}{\|Y^{T}\sigma\|} & \text{if } \|Y^{T}\sigma\| > \varepsilon \\[2mm] -\dfrac{\rho}{\varepsilon}\, Y^{T}\sigma & \text{if } \|Y^{T}\sigma\| \le \varepsilon \end{cases}$  (17)
Considering the adaptive control law (Sciavicco & Siciliano, 1996), the block
diagram of the pure robust controller is given in Fig. 2.

Figure 2. Block diagram of the robust control law (Burkan & Uzmay, 2003c).

Since the controller defined by Equation (17) consists of two different inputs
depending on $\varepsilon$, the matrices A and A' are introduced to select the
appropriate control input. The A matrix is diagonal with ones and zeros on the
diagonal. When $\|Y^{T}\sigma\| - \varepsilon > 0$, a one is present in A, a zero is present in A',
and the first additional control input is in effect. When $\|Y^{T}\sigma\| - \varepsilon \le 0$, a
zero is present in A, a one is present in A', and so the second additional
control input is in effect. Hence, the matrices A and A' are simple switches
which set the mode of additional control input to be used (Burkan & Uzmay, 2003c).
As a measure of the parameter uncertainty on which the additional control
input is based, $\rho$ can be defined as

$\rho = \left( \sum_{i=1}^{p} \rho_i^{2} \right)^{1/2}$  (18)
Let $\upsilon_i$ denote the ith component of the vector $Y^{T}\sigma$, let $\varepsilon_i$, $i = 1, 2, \ldots, p$,
represent the ith component of $\varepsilon$, and define the ith component of the
control input as (Spong, 1992):

$u(t)_i = \begin{cases} -\rho_i\, \upsilon_i / |\upsilon_i| & \text{if } |\upsilon_i| > \varepsilon_i \\ -(\rho_i/\varepsilon_i)\,\upsilon_i & \text{if } |\upsilon_i| \le \varepsilon_i \end{cases}$  (20)
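The componentwise form of this additional control input can be sketched as below; the values for the bounds and boundary-layer widths in the usage are illustrative.

```python
def robust_input(upsilon, rho, eps):
    """Componentwise additional control input: a unit-vector term
    outside the boundary layer eps_i, a linear term inside it."""
    u = []
    for ui, ri, ei in zip(upsilon, rho, eps):
        if abs(ui) > ei:
            u.append(-ri * ui / abs(ui))
        else:
            u.append(-(ri / ei) * ui)
    return u
```

The boundary layer replaces the discontinuous switching of a pure sliding term with a linear region, which is what limits chattering in practice.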
The adaptive-robust control parameters, such as $\sigma$, $q_r$, $\Lambda$ and $K$, are
identical to those of the adaptive control law in the known-parameter case. It
is assumed that the parameter error is unknown, such that

$\tilde{\pi} = \hat{\pi} - \pi = \rho(t)$  (23)
Modelling of Parameter and Bound Estimation Laws …………….. 445
where $\hat{\pi}$ is the estimate of the available parameters and is updated in time.
The upper bounding function $\hat{\rho}(t)$ is assumed to be unknown, and should be
determined using the estimation law to control the system properly. Finally,
the error $\tilde{\rho}(t)$ shows the difference between the parameter error and the
upper bounding function:

$\tilde{\rho}(t) = \rho(t) - \hat{\rho}(t)$  (24)
If the parameter and bound estimation laws are chosen as

$\hat{\pi} = -\frac{2}{\alpha} e^{-\alpha t} Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \pi; \qquad \delta(t) = \rho\, e^{-\frac{\alpha}{2}t}$  (25)

and substituted in the control input (21) for the trajectory control of the
model manipulator, then the tracking errors $\tilde{q}$ and $\dot{\tilde{q}}$ will converge to zero.
Proof:
By taking into account the above parameters and control algorithm, the
Lyapunov function candidate is defined as

$V(\sigma, \tilde{q}, \tilde{\rho}(t)) = \frac{1}{2}\sigma^{T}M(q)\sigma + \frac{1}{2}\tilde{q}^{T}B\tilde{q} + \frac{1}{2}\tilde{\rho}(t)^{T}\Gamma\tilde{\rho}(t)$  (26)

Unlike in similar studies, $\Gamma$ is a positive definite diagonal matrix that
changes with time. The time derivative of Equation (26) is written as

$\dot{V} = \sigma^{T}M(q)\dot{\sigma} + \frac{1}{2}\sigma^{T}\dot{M}(q)\sigma + \tilde{q}^{T}B\dot{\tilde{q}} + \frac{1}{2}\tilde{\rho}(t)^{T}\dot{\Gamma}\tilde{\rho}(t) + \tilde{\rho}(t)^{T}\Gamma\dot{\tilde{\rho}}(t)$  (27)

where

$\tilde{\rho}(t) = \rho(t) - \hat{\rho}(t) = \hat{\pi} - \pi - \hat{\rho}(t); \qquad \dot{\tilde{\rho}}(t) = \dot{\rho}(t) - \dot{\hat{\rho}}(t)$  (28)
Let $B = 2\Lambda K$ and use the property $\sigma^{T}[\dot{M}(q) - 2C(q,\dot{q})]\sigma = 0, \ \forall\sigma \in R^{n}$; the
time derivative of V along the system (22) is

$\dot{V} = -\dot{\tilde{q}}^{T}K\dot{\tilde{q}} - \tilde{q}^{T}\Lambda K\Lambda\tilde{q} - \sigma^{T}Y\delta(t) - \sigma^{T}Y\rho(t) + \frac{1}{2}\tilde{\rho}(t)^{T}\dot{\Gamma}\tilde{\rho}(t) + \tilde{\rho}(t)^{T}\Gamma\dot{\tilde{\rho}}(t)$  (29)
Since $K > 0$ and $\Lambda > 0$, the first two terms of Equation (29) are less than or
equal to zero, that is:

$-\dot{\tilde{q}}^{T}K\dot{\tilde{q}} - \tilde{q}^{T}\Lambda K\Lambda\tilde{q} \le 0$  (30)
So, in order to find conditions that make $\dot{V} \le 0$, we concentrate on the
remaining terms of the equation. If the rest of Equation (29) is equal to or
less than zero, the system will be stable. Substituting Equation (24) into the
remaining terms of Equation (29), the following equation is obtained:

$-\sigma^{T}Y\delta(t) - \sigma^{T}Y\rho(t) + \frac{1}{2}[\rho(t)-\hat{\rho}(t)]^{T}\dot{\Gamma}[\rho(t)-\hat{\rho}(t)] + [\rho(t)-\hat{\rho}(t)]^{T}\Gamma[\dot{\rho}(t)-\dot{\hat{\rho}}(t)] = 0$  (31)

With $\delta(t) = -\hat{\rho}(t)$, this becomes

$\sigma^{T}Y\hat{\rho}(t) - \sigma^{T}Y\rho(t) + \frac{1}{2}[\rho(t)-\hat{\rho}(t)]^{T}\dot{\Gamma}[\rho(t)-\hat{\rho}(t)] + [\rho(t)-\hat{\rho}(t)]^{T}\Gamma[\dot{\rho}(t)-\dot{\hat{\rho}}(t)] = 0$  (32)

$[\rho(t)-\hat{\rho}(t)]^{T}\left(-Y^{T}\sigma + \frac{1}{2}\dot{\Gamma}[\rho(t)-\hat{\rho}(t)] + \Gamma[\dot{\rho}(t)-\dot{\hat{\rho}}(t)]\right) = 0$  (33)

A nontrivial solution requires the bracketed term to vanish; using $\rho(t) = \hat{\pi} - \pi$
and $\dot{\rho}(t) = \dot{\hat{\pi}}$, this gives

$-Y^{T}\sigma + \frac{1}{2}\dot{\Gamma}[\hat{\pi} - \pi - \hat{\rho}(t)] + \Gamma[\dot{\hat{\pi}} - \dot{\hat{\rho}}(t)] = 0$  (35)
Then

$-Y^{T}\sigma + \frac{1}{2}\dot{\Gamma}(\hat{\pi} - \pi) + \Gamma\dot{\hat{\pi}} - \left[\frac{1}{2}\dot{\Gamma}\hat{\rho}(t) + \Gamma\dot{\hat{\rho}}(t)\right] = 0$  (36)

A solution for Equation (36) can be derived if it is divided into two
equations as:

$-Y^{T}\sigma + \frac{1}{2}\dot{\Gamma}(\hat{\pi} - \pi) + \Gamma\dot{\hat{\pi}} = 0$  (37)

$-\left(\frac{1}{2}\dot{\Gamma}\hat{\rho}(t) + \Gamma\dot{\hat{\rho}}(t)\right) = 0$  (38)

From (37),

$\frac{1}{2}\dot{\Gamma}(\hat{\pi} - \pi) + \Gamma\dot{\hat{\pi}} = Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma$  (39)
For the proposed approach, $\Gamma$ and its time derivative are chosen as positive
definite diagonal matrices of the form

$\Gamma = e^{\alpha t}I, \qquad \dot{\Gamma} = \alpha e^{\alpha t}I$  (40)

so that (39) becomes

$e^{\alpha t}\dot{\hat{\pi}} + \frac{1}{2}\alpha e^{\alpha t}(\hat{\pi} - \pi) = Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma$  (41)

Dividing Equation (41) by the factor $e^{\frac{\alpha}{2}t}$ results in the following
expression:

$e^{\frac{\alpha}{2}t}\dot{\hat{\pi}} + \frac{1}{2}\alpha e^{\frac{\alpha}{2}t}\hat{\pi} = e^{-\frac{\alpha}{2}t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \frac{1}{2}\alpha e^{\frac{\alpha}{2}t}\pi$  (42)

$\frac{d}{dt}\left(e^{\frac{\alpha}{2}t}\hat{\pi}\right) = e^{-\frac{\alpha}{2}t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \frac{1}{2}\alpha e^{\frac{\alpha}{2}t}\pi$  (43)

$e^{\frac{\alpha}{2}t}\hat{\pi} = \int\left(e^{-\frac{\alpha}{2}t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \frac{1}{2}\alpha e^{\frac{\alpha}{2}t}\pi\right)dt = -\frac{2}{\alpha}e^{-\frac{\alpha}{2}t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + e^{\frac{\alpha}{2}t}\pi + C$  (44)
If Equation (44) is divided by $e^{\frac{\alpha}{2}t}$, the result is

$\hat{\pi} = -\frac{2}{\alpha}e^{-\alpha t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \pi + Ce^{-\frac{\alpha}{2}t}$  (45)

If the initial condition is $\hat{\pi}(0) = \pi$, the constant C becomes zero. So, the
parameter adaptation algorithm is derived as

$\hat{\pi} = -\frac{2}{\alpha}e^{-\alpha t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \pi$  (46)
Similarly, substituting (40) into Equation (38) gives

$-\left(\frac{1}{2}\alpha e^{\alpha t}\hat{\rho}(t) + e^{\alpha t}\dot{\hat{\rho}}(t)\right) = 0$  (47)

Dividing Equation (47) by the factor $e^{\frac{\alpha}{2}t}$, the following expression is found:

$-\left(e^{\frac{\alpha}{2}t}\dot{\hat{\rho}}(t) + \frac{1}{2}\alpha e^{\frac{\alpha}{2}t}\hat{\rho}(t)\right) = 0$  (48)

that is,

$-\frac{d}{dt}\left(e^{\frac{\alpha}{2}t}\hat{\rho}(t)\right) = 0$  (49)

Integration yields

$-\left(e^{\frac{\alpha}{2}t}\hat{\rho}(t)\right) = C \quad\Rightarrow\quad \hat{\rho}(t) = -Ce^{-\frac{\alpha}{2}t}$  (50)

Taking $C = \rho$, the bound estimation law is

$\hat{\rho}(t) = -\rho\, e^{-\frac{\alpha}{2}t}$  (51)
The control input is then

$\tau = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\left[-\frac{2}{\alpha}e^{-\alpha t}Y^{T}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\sigma + \pi + \rho\, e^{-\frac{\alpha}{2}t}\right] + K\sigma$  (52)

Figure 3. Block diagram of the adaptive-robust control law (52) (Burkan & Uzmay, 2003a).

If Equations (46) and (51) are substituted in Equation (29), it becomes a
negative semidefinite function of the form of Equation (30). So, the system
(22) will be stable under the conditions assumed in the theorem.
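The exponential estimation laws and the resulting torque can be sketched numerically as below; the dimensions, gains, and signal values in the usage are illustrative, and the regressor Y and velocity error sigma are assumed to be supplied by the surrounding controller.

```python
import numpy as np

def torque_52(Y, sigma, pi, rho, K, alpha, t):
    """Adaptive-robust torque: tau = Y [pi_hat + delta(t)] + K sigma,
    with pi_hat = -(2/alpha) e^{-alpha t} Y^T sigma + pi and
    delta(t) = rho e^{-alpha t / 2}."""
    pi_hat = -(2.0 / alpha) * np.exp(-alpha * t) * (Y.T @ sigma) + pi
    delta = rho * np.exp(-alpha * t / 2.0)
    return Y @ (pi_hat + delta) + K @ sigma
```

As t grows, both the exponential correction and the bound term decay, so the torque tends to the fixed-parameter regressor law plus PD action.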
At this point, it is very important to choose the variable function $\Gamma$ in order
to solve Equations (38) and (39), and there is no definite rule for the
selection of $\Gamma$ for such systems. We use system state parameters and
mathematical insight to search for an appropriate function $\Gamma$ as a solution of
the first-order differential equations (38) and (39). For the second
derivation, we choose the variable function $\Gamma$ and its derivative such that
(Uzmay & Burkan, 2002):
$\Gamma = e^{\int Y^{T}\sigma\, dt}, \qquad \dot{\Gamma} = Y^{T}\sigma\, e^{\int Y^{T}\sigma\, dt}$  (53)

Substituting (53) into (39) gives

$e^{\int Y^{T}\sigma\, dt}\,\dot{\hat{\pi}} + \frac{1}{2}Y^{T}\sigma\, e^{\int Y^{T}\sigma\, dt}(\hat{\pi} - \pi) = Y^{T}\sigma$  (54)
Remembering that $\pi$ is constant, so $\dot{\tilde{\pi}} = \dot{\hat{\pi}}$, dividing Equation (54) by
$e^{\int Y^{T}\sigma\, dt}$ yields

$\dot{\hat{\pi}} + \frac{1}{2}Y^{T}\sigma\,\hat{\pi} = e^{-\int Y^{T}\sigma\, dt}\,Y^{T}\sigma + \frac{1}{2}Y^{T}\sigma\,\pi$  (55)

Multiplying Equation (55) by the factor $e^{\frac{1}{2}\int Y^{T}\sigma\, dt}$ results in

$e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\dot{\hat{\pi}} + \frac{1}{2}Y^{T}\sigma\, e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\pi} = e^{-\frac{1}{2}\int Y^{T}\sigma\, dt}\,Y^{T}\sigma + \frac{1}{2}Y^{T}\sigma\, e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\pi$  (56)

$\frac{d}{dt}\left(e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\pi}\right) = e^{-\frac{1}{2}\int Y^{T}\sigma\, dt}\,Y^{T}\sigma + \frac{1}{2}Y^{T}\sigma\, e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\pi$  (57)

$e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\pi} = \int e^{-\frac{1}{2}\int Y^{T}\sigma\, dt}\,Y^{T}\sigma\, dt + \pi \int \frac{1}{2}Y^{T}\sigma\, e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\, dt$  (58)

$e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\pi} = -2e^{-\frac{1}{2}\int Y^{T}\sigma\, dt} + \pi\, e^{\frac{1}{2}\int Y^{T}\sigma\, dt} + C$  (59)

By dividing both sides of Equation (59) by $e^{\frac{1}{2}\int Y^{T}\sigma\, dt}$, the following
result is obtained:

$\hat{\pi} = -2e^{-\int Y^{T}\sigma\, dt} + \pi + Ce^{-\frac{1}{2}\int Y^{T}\sigma\, dt}$  (60)

With the initial condition $\hat{\pi}(0) = \pi$, the constant is $C = 2$, so

$\hat{\pi} = -2e^{-\int Y^{T}\sigma\, dt} + \pi + 2e^{-\frac{1}{2}\int Y^{T}\sigma\, dt} = 2\left(e^{-\frac{1}{2}\int Y^{T}\sigma\, dt} - e^{-\int Y^{T}\sigma\, dt}\right) + \pi$  (61)
Similarly, substituting (53) into Equation (38) gives

$-\left(e^{\int Y^{T}\sigma\, dt}\,\dot{\hat{\rho}}(t) + \frac{1}{2}Y^{T}\sigma\, e^{\int Y^{T}\sigma\, dt}\,\hat{\rho}(t)\right) = 0$  (62)

Dividing Equation (62) by $e^{\frac{1}{2}\int Y^{T}\sigma\, dt}$, the following expression is found:

$-\left(e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\dot{\hat{\rho}}(t) + \frac{1}{2}Y^{T}\sigma\, e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\rho}(t)\right) = 0$  (63)

$-\frac{d}{dt}\left(e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\rho}(t)\right) = 0$  (64)

$e^{\frac{1}{2}\int Y^{T}\sigma\, dt}\,\hat{\rho}(t) = -C$  (65)

Then

$\hat{\rho}(t) = -Ce^{-\frac{1}{2}\int Y^{T}\sigma\, dt}$  (66)

Taking $C = \rho$ gives $\hat{\rho}(t) = -\rho\, e^{-\frac{1}{2}\int Y^{T}\sigma\, dt}$, and the control input becomes

$\tau = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\left[2\left(e^{-\frac{1}{2}\int Y^{T}\sigma\, dt} - e^{-\int Y^{T}\sigma\, dt}\right) + \pi + \rho\, e^{-\frac{1}{2}\int Y^{T}\sigma\, dt}\right] + K\sigma$  (68)
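This control law can be sketched with the running integral of $Y^{T}\sigma$ maintained by the caller; the elementwise exponentials follow the chapter's vector convention, and the dimensions, gains, and signal values in the usage are illustrative.

```python
import numpy as np

def torque_68(Y, sigma, pi, rho, K, I):
    """Torque with integral-driven estimation laws; I is the running
    integral of Y^T sigma dt, accumulated elementwise by the caller,
    e.g. I += Y.T @ sigma * dt each sample."""
    pi_hat = 2.0 * (np.exp(-0.5 * I) - np.exp(-I)) + pi
    delta = rho * np.exp(-0.5 * I)
    return Y @ (pi_hat + delta) + K @ sigma
```

At I = 0 (start-up) the exponential correction vanishes and the law reduces to the nominal regressor term plus PD action, consistent with the initial condition used in the derivation.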
\hat{\pi} = \frac{1}{\alpha}\begin{bmatrix} \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_1 + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \vdots \\ \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_p + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} + \begin{bmatrix} \pi_1 \\ \vdots \\ \pi_p \end{bmatrix}; \quad
\hat{\rho}(t) = -\begin{bmatrix} \dfrac{\rho_1}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \vdots \\ \dfrac{\rho_p}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix}; \quad
\delta(t) = \begin{bmatrix} \dfrac{\rho_1}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \vdots \\ \dfrac{\rho_p}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} \qquad (69)
where \delta(t) = -\hat{\rho}(t). Substituting \hat{\pi} and \delta(t) into the control input (21) for the trajectory control of the model manipulator, the tracking errors \tilde{q} and \dot{\tilde{q}} will converge to zero.

Proof:

In the previous approaches, it is difficult to derive another parameter and bound estimation law because the selection of an appropriate variable function \Gamma and the solution of the differential equation are not simple. However, the selection of \Gamma and the solution of the differential equation are simplified in the studies (Burkan, 2005; Burkan & Uzmay, 2006). In order to simplify the selection of the variable function \Gamma and the solution of the differential equation, the following Lyapunov function is developed (Burkan & Uzmay, 2006).
V(\sigma, \tilde{q}, \tilde{\rho}(t)) = \frac{1}{2}\sigma^{T} M(q)\sigma + \frac{1}{2}\tilde{q}^{T} B \tilde{q} + \frac{1}{2}\tilde{\rho}(t)^{T}\Gamma^{2}\tilde{\rho}(t) \qquad (70)

where \Gamma is a p \times p dimensional diagonal matrix that changes in time. The time derivative of Equation (70) is written as

\dot{V} = \sigma^{T} M(q)\dot{\sigma} + \frac{1}{2}\sigma^{T}\dot{M}(q)\sigma + \tilde{q}^{T} B \dot{\tilde{q}} + \tilde{\rho}(t)^{T}\dot{\Gamma}\Gamma\tilde{\rho}(t) + \tilde{\rho}(t)^{T}\Gamma^{2}\dot{\tilde{\rho}}(t) \qquad (71)
Let B = 2\Lambda K and use the property \sigma^{T}[\dot{M}(q) - 2C(q,\dot{q})]\sigma = 0, \forall\sigma \in \mathbb{R}^{n}; the time derivative of V along the system (22) is determined as
\dot{V} = -\dot{\tilde{q}}^{T} K \dot{\tilde{q}} - \tilde{q}^{T}\Lambda K\Lambda\tilde{q} - \sigma^{T} Y\delta(t) - \sigma^{T} Y\rho(t) + \tilde{\rho}(t)^{T}\dot{\Gamma}\Gamma\tilde{\rho}(t) + \tilde{\rho}(t)^{T}\Gamma^{2}\dot{\tilde{\rho}}(t) \qquad (72)

Substituting Equation (24) into Equation (72) yields the following equation:

-\sigma^{T} Y\delta(t) - \sigma^{T} Y\rho(t) + [\rho(t) - \hat{\rho}(t)]^{T}\dot{\Gamma}\Gamma[\rho(t) - \hat{\rho}(t)] + [\rho(t) - \hat{\rho}(t)]^{T}\Gamma^{2}[\dot{\rho}(t) - \dot{\hat{\rho}}(t)] = 0 \qquad (73)

Now, let us consider \delta(t) = -\hat{\rho}(t); then Equation (73) is written as:

\sigma^{T} Y\hat{\rho}(t) - \sigma^{T} Y\rho(t) + [\rho(t) - \hat{\rho}(t)]^{T}\dot{\Gamma}\Gamma[\rho(t) - \hat{\rho}(t)] + [\rho(t) - \hat{\rho}(t)]^{T}\Gamma^{2}[\dot{\rho}(t) - \dot{\hat{\rho}}(t)] = 0 \qquad (74)
Then

-Y^{T}\sigma + \dot{\Gamma}\Gamma(\hat{\pi} - \pi) + \Gamma^{2}\dot{\hat{\pi}} - [\dot{\Gamma}\Gamma\hat{\rho}(t) + \Gamma^{2}\dot{\hat{\rho}}(t)] = 0 \qquad (77)

As a result, two different equations can be obtained from Equation (77), as follows:

\frac{d}{dt}(\Gamma\hat{\pi}) = \Gamma^{-1} Y^{T}\sigma + \dot{\Gamma}\pi \qquad (81)
In Equation (83), \hat{\pi} and \Gamma are unknown, and in order to derive \hat{\pi}, \Gamma must be defined. There is no definite rule for the definition of \Gamma for such systems. We use system state parameters and mathematical insight to search for an appropriate function \Gamma for the derivation of \hat{\pi}. For the third derivation, we choose \Gamma and \Gamma^{-1} such that (Burkan & Uzmay, 2006)
\Gamma = \begin{bmatrix} (\alpha\int Y^{T}\sigma\,dt)_1 + 1 & 0 & \cdots & 0 \\ 0 & (\alpha\int Y^{T}\sigma\,dt)_2 + 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & (\alpha\int Y^{T}\sigma\,dt)_p + 1 \end{bmatrix};

\Gamma^{-1} = \begin{bmatrix} \dfrac{1}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} & 0 & \cdots & 0 \\ 0 & \dfrac{1}{(\alpha\int Y^{T}\sigma\,dt)_2 + 1} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \dfrac{1}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} \qquad (84)
where \Gamma and \Gamma^{-1} are p \times p dimensional diagonal matrices. Substitution of Equation (84) into Equation (83) yields
\begin{bmatrix} (\alpha\int Y^{T}\sigma\,dt)_1 + 1 & & \\ & \ddots & \\ & & (\alpha\int Y^{T}\sigma\,dt)_p + 1 \end{bmatrix}\begin{bmatrix} \hat{\pi}_1 \\ \vdots \\ \hat{\pi}_p \end{bmatrix} = \int \begin{bmatrix} \dfrac{(Y^{T}\sigma)_1}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \vdots \\ \dfrac{(Y^{T}\sigma)_p}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} dt + \begin{bmatrix} (\alpha\int Y^{T}\sigma\,dt)_1 + 1 & & \\ & \ddots & \\ & & (\alpha\int Y^{T}\sigma\,dt)_p + 1 \end{bmatrix}\begin{bmatrix} \pi_1 \\ \vdots \\ \pi_p \end{bmatrix} + C\begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} \qquad (85)
Multiplying both sides of Equation (85) by \Gamma^{-1} and taking the initial condition as \hat{\pi}(0) = \pi, the constant C becomes zero. Hence, the parameter adaptation law is derived as
\hat{\pi} = \frac{1}{\alpha}\begin{bmatrix} \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_1 + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_2 + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_2 + 1} \\ \vdots \\ \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_p + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} + \begin{bmatrix} \pi_1 \\ \pi_2 \\ \vdots \\ \pi_p \end{bmatrix} \qquad (87)
\hat{\rho}(t) = -\Gamma^{-1}\rho = -\begin{bmatrix} \dfrac{\rho_1}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \dfrac{\rho_2}{(\alpha\int Y^{T}\sigma\,dt)_2 + 1} \\ \vdots \\ \dfrac{\rho_p}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} \qquad (91)
\tau = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\left[\frac{1}{\alpha}\begin{bmatrix} \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_1 + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \vdots \\ \dfrac{\ln\left((\alpha\int Y^{T}\sigma\,dt)_p + 1\right)}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix} + \begin{bmatrix} \pi_1 \\ \pi_2 \\ \vdots \\ \pi_p \end{bmatrix} + \begin{bmatrix} \dfrac{\rho_1}{(\alpha\int Y^{T}\sigma\,dt)_1 + 1} \\ \vdots \\ \dfrac{\rho_p}{(\alpha\int Y^{T}\sigma\,dt)_p + 1} \end{bmatrix}\right] + K\sigma \qquad (92)
Figure 5. Implementation of the adaptive-robust control law (92) (Burkan & Uzmay,
2006).
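A sketch of how the third-derivation laws (87) and (91) might be evaluated element-wise; w denotes the accumulated vector with components (\alpha\int Y^{T}\sigma\,dt)_i (assumed computed elsewhere), and the nominal values are illustrative.

```python
import numpy as np

# Logarithmic parameter law (87) and bound law (91), element-wise.
def pi_hat(w, alpha, pi0):
    # (1/alpha) * ln(w_i + 1) / (w_i + 1) + pi0_i
    return np.log(w + 1.0) / (alpha * (w + 1.0)) + pi0

def rho_hat(w, rho):
    # -rho_i / (w_i + 1)
    return -rho / (w + 1.0)

pi0 = np.array([8.33, 1.67, 2.5, 5.0, 5.0, 2.5])   # Table 2 values
est = pi_hat(np.zeros(6), 0.5, pi0)                 # no adaptation yet
```

At w = 0 the logarithmic correction is ln(1) = 0, so \hat{\pi}(0) = \pi, matching the initial condition used in the derivation.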
For the fourth derivation, Γ and Γ-1 are chosen such that
\Gamma = \begin{bmatrix} \dfrac{1}{\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1} & 0 & \cdots & 0 \\ 0 & \dfrac{1}{\alpha_2(\int Y^{T}\sigma\,dt)_2 + \beta_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \dfrac{1}{\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p} \end{bmatrix};

\Gamma^{-1} = \begin{bmatrix} \alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1 & 0 & \cdots & 0 \\ 0 & \alpha_2(\int Y^{T}\sigma\,dt)_2 + \beta_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p \end{bmatrix} \qquad (93)
\begin{bmatrix} \dfrac{1}{\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1} & & \\ & \ddots & \\ & & \dfrac{1}{\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p} \end{bmatrix}\begin{bmatrix} \hat{\pi}_1 \\ \vdots \\ \hat{\pi}_p \end{bmatrix} = \int \begin{bmatrix} \left(\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1\right)(Y^{T}\sigma)_1 \\ \vdots \\ \left(\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p\right)(Y^{T}\sigma)_p \end{bmatrix} dt + \begin{bmatrix} \dfrac{1}{\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1} & & \\ & \ddots & \\ & & \dfrac{1}{\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p} \end{bmatrix}\begin{bmatrix} \pi_1 \\ \vdots \\ \pi_p \end{bmatrix} + C\begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} \qquad (94)
\begin{bmatrix} \dfrac{1}{\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1} & & \\ & \ddots & \\ & & \dfrac{1}{\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p} \end{bmatrix}\begin{bmatrix} \hat{\pi}_1 \\ \vdots \\ \hat{\pi}_p \end{bmatrix} = \begin{bmatrix} 0.5\,\alpha_1(\int Y^{T}\sigma\,dt)_1^2 + \beta_1(\int Y^{T}\sigma\,dt)_1 \\ \vdots \\ 0.5\,\alpha_p(\int Y^{T}\sigma\,dt)_p^2 + \beta_p(\int Y^{T}\sigma\,dt)_p \end{bmatrix} + \begin{bmatrix} \dfrac{1}{\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1} & & \\ & \ddots & \\ & & \dfrac{1}{\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p} \end{bmatrix}\begin{bmatrix} \pi_1 \\ \vdots \\ \pi_p \end{bmatrix} + C\begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} \qquad (95)
Multiplying both sides of Equation (95) by \Gamma^{-1} and taking the initial condition as \hat{\pi}(0) = \pi, the constant C becomes zero. Hence, the parameter adaptation law is derived as
\hat{\rho}(t) = -\begin{bmatrix} \left(\alpha_1(\int Y^{T}\sigma\,dt)_1 + \beta_1\right)\rho_1 \\ \left(\alpha_2(\int Y^{T}\sigma\,dt)_2 + \beta_2\right)\rho_2 \\ \vdots \\ \left(\alpha_p(\int Y^{T}\sigma\,dt)_p + \beta_p\right)\rho_p \end{bmatrix} \qquad (97)
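Similarly, the fourth-derivation laws can be sketched element-wise. The parameter law below is the one implied by multiplying (95) by \Gamma^{-1} (the intermediate equation is not shown in this excerpt); \alpha, \beta and the nominal vectors are illustrative values, and w_i stands for \int(Y^{T}\sigma)_i\,dt.

```python
import numpy as np

# Fourth derivation: parameter law implied by Eq. (95) and bound law (97).
def pi_hat4(w, alpha, beta, pi0):
    # Gamma^{-1} * (0.5*alpha*w^2 + beta*w) + pi0, element-wise
    return (alpha * w + beta) * (0.5 * alpha * w**2 + beta * w) + pi0

def rho_hat4(w, alpha, beta, rho):
    # Eq. (97), element-wise
    return -(alpha * w + beta) * rho

w0 = np.zeros(3)   # before any adaptation has taken place
```

At w = 0 the parameter estimate again equals the nominal \pi, while the bound estimate starts from -\beta_i\rho_i.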
[Figure: two-link planar manipulator with joint angles q_1, q_2, link lengths l_1, l_2, centre-of-mass distances l_{c1}, l_{c2}, masses m_1, m_2 and inertias I_1, I_2.]
With this parameterization, the dynamic model in Equation (1) can be written as

y_{14} = g_c\cos(q_1);
y_{15} = g_c\cos(q_1);
y_{16} = g_c\cos(q_1 + q_2); \qquad (101)
y_{21} = 0;
y_{22} = \ddot{q}_1 + \ddot{q}_2;
y_{23} = \cos(q_2)\,\ddot{q}_1 + \sin(q_2)\,\dot{q}_1^2;
y_{24} = 0;
y_{25} = 0;
y_{26} = g_c\cos(q_1 + q_2).
y_{11} = \ddot{q}_{r1};
y_{12} = \ddot{q}_{r1} + \ddot{q}_{r2};
y_{13} = \cos(q_2)(2\ddot{q}_{r1} + \ddot{q}_{r2}) - \sin(q_2)(\dot{q}_2\dot{q}_{r1} + \dot{q}_1\dot{q}_{r2} + \dot{q}_2\dot{q}_{r2});
y_{14} = g_c\cos(q_1);
y_{15} = g_c\cos(q_1);
y_{16} = g_c\cos(q_1 + q_2); \qquad (102)
y_{21} = 0;
y_{22} = \ddot{q}_{r1} + \ddot{q}_{r2};
y_{23} = \cos(q_2)\,\ddot{q}_{r1} + \sin(q_2)\,\dot{q}_1\dot{q}_{r1};
y_{24} = 0;
y_{25} = 0;
y_{26} = g_c\cos(q_1 + q_2).
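The regressor (102) can be sketched as code, assuming the standard Slotine–Li form for the velocity products in y_{13}; all argument names below are hypothetical, and g_c is the gravitational constant.

```python
import numpy as np

# 2x6 regressor matrix of Eq. (102) for the two-link arm.
# qd*: measured joint velocities; qrd*/qrdd*: reference velocities/accelerations.
def regressor(q1, q2, qd1, qd2, qrd1, qrd2, qrdd1, qrdd2, gc=9.81):
    c2, s2 = np.cos(q2), np.sin(q2)
    y11 = qrdd1
    y12 = qrdd1 + qrdd2
    y13 = c2 * (2*qrdd1 + qrdd2) - s2 * (qd2*qrd1 + qd1*qrd2 + qd2*qrd2)
    y14 = gc * np.cos(q1)
    y15 = gc * np.cos(q1)
    y16 = gc * np.cos(q1 + q2)
    y22 = qrdd1 + qrdd2
    y23 = c2 * qrdd1 + s2 * qd1 * qrd1
    y26 = gc * np.cos(q1 + q2)
    return np.array([[y11, y12, y13, y14, y15, y16],
                     [0.0, y22, y23, 0.0, 0.0, y26]])

Y = regressor(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)  # unit reference accel.
```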
For illustrative purposes, let us assume that the parameters of the unloaded manipulator are known and that the chosen values of the link parameters are given in Table 1. Using the values in Table 1, the ith components of \pi obtained by means of Equation (99) are given in Table 2. These parametric values also represent the lower bounds, i.e. the unloaded robot parameters.
m1 m2 l1 l2 lc1 lc2 I1 I2
10 5 1 1 0.5 0.5 10/12 5/12
Table 1. Parameters of the unloaded arm (Spong, 1992)
π1 π2 π3 π4 π5 π6
8.33 1.67 2.5 5 5 2.5
Table 2. πi for the unloaded arm (Spong, 1992)
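Table 2 can be reproduced from Table 1 with the standard two-link parameterization (assumed here to be the one behind Equation (99), which is not shown in this excerpt):

```python
# Link parameters from Table 1 (Spong, 1992).
m1, m2, l1, l2, lc1, lc2, I1, I2 = 10, 5, 1, 1, 0.5, 0.5, 10/12, 5/12

# Assumed parameterization: the values below reproduce Table 2.
pi1 = m1*lc1**2 + m2*l1**2 + I1   # -> 8.33
pi2 = m2*lc2**2 + I2              # -> 1.67
pi3 = m2*l1*lc2                   # -> 2.5
pi4 = m1*lc1                      # -> 5
pi5 = m2*l1                       # -> 5
pi6 = m2*lc2                      # -> 2.5
```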
If an unknown load carried by the robot is regarded as part of the second link, then the parameters m2, lc2, and I2 will change to m2+Δm2, lc2+Δlc2 and I2+ΔI2, respectively. A controller will be designed that provides robustness in the intervals
0 \le \Delta m_2 \le 10; \quad 0 \le \Delta l_{c2} \le 0.5; \quad 0 \le \Delta I_2 \le \frac{15}{12} \qquad (103)
\pi_0 is chosen as the vector of nominal parameters; it contains the loaded arm parameters at their upper bounds. The computed values for the ith components of \pi_0 are given in Table 3.

With this choice of the nominal parameter vector \pi_0 and the uncertainty range given by (103), it is an easy matter to calculate the uncertainty bound \rho as follows:
\|\tilde{\pi}\|^2 = \sum_{i=1}^{6}(\pi_{i0} - \pi_i)^2 \le 181.26 \qquad (104)

and thus \rho = \sqrt{181.26} = 13.46. Since the extended algorithm (20) is used, the uncertainty bounds for each parameter separately are shown in Table 4. The uncertainty bounds \rho_i in Table 4 are simply the differences between the values given in Table 3 and Table 2, and the value of \rho is the Euclidean norm of the vector with components \rho_i (Spong, 1992).
ρ1 ρ2 ρ3 ρ4 ρ5 ρ6
5 7.29 6.25 0 5 6.25
Table 4. Uncertainty bounds (Spong, 1992)
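The norm computation in (104) can be checked directly from Table 4:

```python
import math

# rho is the Euclidean norm of the per-parameter bounds rho_i (Table 4).
rho_i = [5, 7.29, 6.25, 0, 5, 6.25]
rho = math.sqrt(sum(r * r for r in rho_i))  # sum of squares ~= 181.26
```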
7. Conclusion
In the studies (Burkan, 2002; Uzmay & Burkan, 2002; Burkan & Uzmay, 2003a), it is very difficult to use different variable functions for other derivations, and the derivation of parameter and bound estimation laws is also not simple. However, in the recent studies (Burkan, 2005; Burkan & Uzmay, 2006), a new method is developed in order to derive new parameter and bound estimation laws based on a Lyapunov function that guarantees stability of the uncertain system; the studies (Burkan, 2002; Uzmay & Burkan, 2002; Burkan & Uzmay, 2003a) provide the basis of this work. With this new method, derivation of the parameter and bound estimation laws is simplified, and it becomes possible to derive not just a single parameter and bound estimation law but various such laws using different variable functions.
Parameter and bound estimation laws can be derived depending on the variable function \Gamma, and if another appropriate variable function \Gamma is chosen, it will be possible to derive other adaptive-robust control laws. Other integration techniques can also be used in the derivation of new parameter and bound estimation laws.
\hat{\pi} and \hat{\rho}(t) are error-driven estimation rules that act as compensators; that is, they estimate the most appropriate parameters and upper bounding function to reduce the tracking error. The aim of this approach is to find a control law that ensures bounded tracking error, not to determine the actual parameters and upper bounding function. \hat{\pi} is considered as an adaptive compensator, \hat{\rho}(t) is implemented as a robust controller, and both are employed during the control process. This has the advantage that the adaptive controller increases learning, while the robust controller offers the ability to reject disturbances and ensures the desired transient behaviour. This improvement is achieved by computation of the upper bounding function.
8. References
Abdallah, C.; Dawson, D.; Dorato, P. & Jamshidi, M. (1991). Survey of robust control for rigid robots, IEEE Control Systems Magazine, Vol.11, No.2, 24-30, ISSN: 1066-033X
Burkan, R. (2002). New approaches in controlling robot manipulators with pa-
rametric uncertainty, Ph.D. Thesis, Erciyes University, Institute of Science,
Turkey.
Burkan, R. & Uzmay, İ. (2005). A model of parameter adaptive law with time varying function for robot control, Applied Mathematical Modelling, Vol.29, 361-371, ISSN: 0307-904X
Burkan, R. (2005). Design of an adaptive control law using trigonometric func-
tions for robot manipulators, Robotica, Vol.23, 93-99, ISSN:0263-5747.
Burkan, R. & Uzmay, İ. (2003 a). Variable upper bounding approach for adaptive-robust control in robot control, Journal of Intelligent & Robotic Systems, Vol.37, No.4, 427-442, ISSN: 0921-0296.
Burkan, R. & Uzmay, İ. (2003 b). Upper bounding estimation for robustness to the parameter uncertainty in trajectory control of robot arm, Robotics and Autonomous Systems, Vol.45, 99-110, ISSN: 0921-8890
Burkan, R. & Uzmay, İ. (2003 c). A Comparison of Different Control Laws in Trajectory Control for a Revolute-Jointed Manipulator, Turkish Journal of
Kelly, R.; Carelli, R. & Ortega, R. (1989). Adaptive motion control design of ro-
bot manipulators: an input output approach, International Journal of Con-
trol, Vol.50, No.6, 2563-2581. ISSN: 0020-7179
Leitmann, G. (1981). On the efficacy of nonlinear control in uncertain linear systems, Journal of Dynamic Systems Measurement and Control, Vol.102, 95-102, ISSN: 0022-0434.
Liu, G. & Goldenberg, A. A. (1996a). Uncertainty decomposition-based robust
control of robot manipulators, IEEE Transactions on Control Systems Tech-
nology, Vol.4, 384-393, ISSN: 1063-6536.
Liu, G. & Goldenberg, A. A. (1996b). Comparative study of robust saturation-
based control of robot manipulators: Analysis and experiments, Interna-
tional journal of Robotics Research, Vol.15, 474-491, ISSN: 0278-3649.
Liu, G. & Goldenberg, A. A. (1997). Robust control of robot manipulators
based on dynamics decomposition, IEEE Transactions on Robotics and
Automation, Vol.13, 783-789, ISSN: 1552-3098.
Middleton, R. H. & Goodwin, G. C. (1988). Adaptive computed torque control
for rigid link manipulators, System Control Letters, Vol.10, 9-16,
ISSN:0167-6911.
Ortega, R. & Spong, M. W. (1989). Adaptive motion control of rigid robots: A tutorial, Automatica, Vol.25, No.6, 877-888, ISSN: 0005-1098.
Sage, H. G.; De Mathelin, M. F. & Ostretag, E. (1999). Robust control of robot
manipulators: A survey, International Journal of Control, Vol.72, No.16,
1498-1522, ISSN: 0020-7179
Sciavicco, L. & Siciliano, B. (1996). Modelling and Control of Robot Manipulators,
The McGraw-Hill Companies, ISBN:0-07-057217-8, New York.
Spong, M. W. & Ortega, R. (1990). On adaptive inverse dynamics control of
rigid robots, IEEE Transactions on Automatic Control, Vol.35, No.1, 92-95,
ISSN: 0018-9286.
Spong, M. W. & Vidyasagar, M. (1989). Robot Dynamics and Control, Willey,
New York, 1989, ISBN:0-471-50352-5.
Slotine, J. J. & Li, W.(1987). On the adaptive control of robotic manipulator,
The International Journal of Robotics Research, Vol.6, No.3, 49-59,ISSN:0278-
3649.
Slotine, J. J. & Li, W. (1988). Adaptive manipulator control: A case study, IEEE
Transactions on Automatic Control, Vol.33, No.11, 994-1003, ISSN: 0018-
9286.
Spong, M. W; Ortega, R & Kelley, R. (1990). Comment on adaptive manipula-
tor control: A case study, IEEE Transactions on Automatic Control, Vol.35,
No.6, 761-762, ISSN: 0018-9286.
Spong, M. W. (1992). On the robust control of robot manipulators, IEEE Trans-
actions on Automatic Control, Vol.37, 1782-1786, ISSN: 0018-9286.
Uzmay, İ. & Burkan, R. (2002). Parameter estimation and upper bounding adaptation in adaptive-robust control approaches for trajectory control of robots, Robotica, Vol.20, 653-660, ISSN: 0263-5747
Yaz, E. (1993). Comments on "On the robust control of robot manipulators", IEEE Transactions on Automatic Control, Vol.38, 511-512, ISSN: 0018-9286.
Zenieh, M. & Corless, M. A. (1997). A simple robust r-α tracking controller for uncertain fully-actuated mechanical systems, Journal of Dynamic Systems, Measurement and Control, Vol.119, 821-825, ISSN: 0022-0434.
17

Soft Computing Based Mobile Manipulator Controller Design
1. Introduction
During the last decades, numerous papers have been written on how to apply neural networks, fuzzy (multi-valued) logic, genetic algorithms and related ideas of learning from data and embedding structured human knowledge. These concepts and associated algorithms form the field of soft computing. They have been recognized as attractive alternatives to the standard, well-established hard computing (conventional) paradigms. Traditional hard computing methods are often too cumbersome for today's problems: they always require a precisely stated analytical model and often a lot of computation time.
Soft computing techniques, which emphasize gains in understanding system behaviour in exchange for unnecessary accuracy, have proved to be important practical tools for many real-world problems. Because they are universal approximators of any multivariate function, neural networks and fuzzy logic are of particular interest for modelling highly nonlinear, unknown or partially known complex systems. Due to their strong learning and cognitive ability and good tolerance of uncertainty and imprecision, soft computing techniques have found wide application in robotic systems control. According to Zadeh (Zadeh, 1994), the basic premises of soft computing are
• The real world is pervasively imprecise and uncertain.
• Precision and certainty carry a cost.
and the guiding principle of soft computing, which follows from these premises, is: exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and low solution costs.
Both the premises and the guiding principle differ strongly from those of classical hard computing, which requires precision, certainty, and rigor. However, since precision and certainty carry a cost, the soft computing approach to computation, reasoning, and decision making should exploit the tolerance for imprecision (inherent in human reasoning) when necessary. A long-standing tradition in science gives more respect to theories that are quantitative, formal, and precise than to those that are qualitative, informal, and approximate. Many
Industrial robotics involves mechanical systems that are highly non-linear, ill-defined and subject to a variety of unknown disturbances. The control of such systems faces challenges in meeting requirements that can be of different natures. A lot of effort has been devoted to capitalizing on the advances in mathematical control theory, and several techniques have appeared to tackle this kind of mechanical system. Navigational planning for a mobile robot is a search problem, in which the robot has to plan a path from a given initial position to a goal position. The robot must move without hitting an obstacle in its environment, so the obstacles in the robot workspace act as constraints on the navigational planning problem. A genetic algorithm can solve the problem by choosing an appropriate fitness function that takes into account the distance of the planned path segments from the obstacles, the length of the planned path, and the linearity of the path as far as practicable. Furthermore, the learning process is constrained by three mutually compromising factors: complexity of the task, number of training examples, and prior knowledge. Optimisation of one or two of these objectives often results in a sacrifice of the third. Learning a complex behaviour in an unstructured environment without prior knowledge requires a long exploration and training phase and therefore creates a serious problem for robotic applications. Today's robots are faced with imprecise, uncertain, and randomly changing environments. The desire to deal with these environments leads to the basic premises and the guiding principles of soft computing.
Robot control is predominantly motion control using classical servomechanism control theory. Due to the nonlinearity of manipulator motion, a wide variety of control schemes have been derived. Classical schemes include computed torque, resolved motion, PID decoupled model control, reference adaptive and resolved motion adaptive control (Whitney, 1969), (Bejczy, 1974), (Dubowsky & DesForges, 1979). These schemes can be very complicated and require intensive computer resources. For instance, the computed torque technique uses the Lagrange-Euler or Newton-Euler equations of motion of the manipulator to determine the torque required to servo each joint in real time so as to track the desired trajectory as closely as possible. However, since there are always uncertainties in the robot dynamic model, the ideal error response cannot be achieved and the performance can be severely degraded. This problem led people to use adaptive control approaches to solve these problems, and relatively good results were obtained (Craig et al., 1987), (Spong & Ortega, 1988). The problem is complicated if we wish to enlarge the workspace of the manipulator by mounting it on a mobile platform, resulting in a new system called a mobile manipulator. Researchers investigating the capabilities of mobile platforms with onboard manipulators are devoting considerable effort to coming up with solutions for this complicated system (Yamamoto & Yun, 1994).
Since the first control application of Mamdani and his team (Mamdani & Assilian, 1974), a lot of effort has been devoted to capitalizing on the advances of fuzzy logic theory, and many fuzzy control approaches have appeared. In fact, fuzzy logic provides tools that are of potential interest to control systems. Fuzzy controllers are a convenient choice when an analytical model of the system to be controlled cannot be obtained. They have shown a good degree of robustness in the face of large variability and uncertainty in the parameters, and they lend themselves to efficient implementations, including hardware solutions. These characteristics fit well the needs of precision motion control of mobile manipulators. However, the main difficulty in designing a fuzzy logic controller is the efficient formulation of the fuzzy If-Then rules. It is well known that it is easy to produce the antecedent parts of fuzzy control rules, but it is very difficult to produce the consequent parts without expert knowledge. The derivation of such rules is often based on the experience of skilled operators or on heuristic thinking (Zadeh, 1973), (Mamdani, 1974). In recent years, due to the availability of powerful computer platforms, the theory of evolutionary algorithms has become popular for the problem of parameter optimization. A genetic algorithm, as one approach to the implementation of evolutionary algorithms, was used by Karr (Karr, 1991) to generate the rules of a cart-pole balancing fuzzy logic controller. In this work, we investigate the problem of the motion control of a mobile manipulator using fuzzy control schemes. The mechanical system is split into two subsystems, the mobile platform and the manipulator. Appropriate fuzzy controllers are used to control each of these two subsystems. A genetic algorithm generates the rules of the fuzzy controllers, letting the system settle around an optimal solution. The motion of the platform and that of the manipulator are coordinated by a neural-like network, which is a sort of adaptive graph of operations designed from the kinematic model of the system. A learning paradigm is used to produce the required reference variables for each of the mobile platform and the robot manipulator for an overall coordinated behaviour.
3. Robot Model

The matrix T_P^W is determined by a certain A(q) matrix, T_B^W is a fixed matrix and T_E^B is determined by the joint variable vector \theta = [\theta_1, \theta_2, \ldots, \theta_{n_m}]^T. The position vector of the end effector x_E^W is a non-linear function of the configuration vector q = [p^T, \theta^T]^T \in \mathbb{R}^n, (n = 3 + n_m). The joint coordinates of the manipulator are \theta = [\theta_1, \theta_2, \theta_3]^T (thus n_m = 3). Therefore the generalized
Such that,
θ = θ1 + ϕ (2)
where \varphi is the heading angle of the mobile platform, and l_1, l_2 and l_3 are the lengths of the three links composing the manipulator arm. x_B^W, y_B^W, z_B^W are the coordinates of the point B located at the front of the mobile platform with respect to the world frame \{W\}. In the sequel, we take z_B^W equal to zero for simplicity. The goal is to find the generalized trajectory q(t) for a given task-space trajectory X_E(t) such that F(q(t)) = X_E(t) is satisfied. Since the system is redundant, the number of solutions is expected to be infinite. To realize a generalized task of the mechanical system, one has to derive the set of the \lambda generalized coordinates. In this context, an approach is suggested to investigate and solve this problem when the complete motion of the end effector results from a combined operation of the two subsystems working in a coordinated manner.
Most feedback control laws, where PD or PID controllers are used, are based on simplified dynamic equations. However, this approach works well only at slow speeds of movement. At high speeds of movement the Coriolis and centrifugal forces are major components of the dynamic equations, and consequently the error cannot be corrected. A lot of effort has been devoted to capitalizing on the advances in mathematical control theory, and several techniques have appeared to tackle this kind of mechanical system. Perhaps the most famous, considered the basic approach and now very popular, is the model-based computed torque method. However, since there are always uncertainties in the robot dynamic model, and disturbances possibly arise from the actual running of the actuators or other causes, the ideal error response cannot be achieved and the performance can be severely degraded. Now, since the reliability of the PID controller has been field-proven, and given the success of fuzzy logic theory in process control, we propose in the next section a combination of the two to make a robust controller for robot manipulators.
\dot{y}_A - \dot{x}_A \tan\varphi = 0 \qquad (4)
φ is the heading angle of the vehicle from the X-axis of the world coordinates
as it is depicted in Figure 2.
Writing the classical relationships between the velocity of point O and those of points O_l and O_r, we can easily determine the linear velocity \vec{v}_o and the instantaneous angular velocity \vec{\omega} of the mobile robot:

\vec{v}_o = \vec{v}_{o_r} + \vec{OO_r} \wedge \vec{\omega} \qquad (5)

\vec{v}_o = \vec{v}_{o_l} + \vec{OO_l} \wedge \vec{\omega} \qquad (6)

\vec{\omega} = \dot{\varphi}\,\hat{k} \qquad (7)

\hat{k} is the unit vector along the Z_A axis, and \vec{v}_{o_l} and \vec{v}_{o_r} are the linear velocities of the mobile robot points O_l and O_r respectively. Projecting expressions (5) and (6) on the X-axis and the Z-axis, we get the expressions of v_o and \dot{\varphi} as follows:

v_o = \frac{r}{2}(\omega_r + \omega_l) \qquad (8)

\dot{\varphi} = \frac{r}{2R}(\omega_r - \omega_l) \qquad (9)
where r and R are respectively the radius of the wheels and half the width of the vehicle (the track width being 2R), as shown in Figure 3. It has been proven by Samson and Ait-Abderrahim (Samson & Ait-Abderrahim, 1990) that the vehicle converges better to its reference when controlling a point located in front of the rear wheel axis.
Figure 3. Geometry of the mobile platform (mid-axle point O, wheel diameter 2r, track width 2R).
x_B = x_o + d\cos\varphi \qquad (10)
y_B = y_o + d\sin\varphi \qquad (11)

where:

x_o(t+1) = x_o(t) + \Delta D\cos\left(\varphi + \frac{\Delta\varphi}{2}\right) \qquad (12)
y_o(t+1) = y_o(t) + \Delta D\sin\left(\varphi + \frac{\Delta\varphi}{2}\right) \qquad (13)

such that:

\Delta D = \frac{r}{2}(\Delta q_r + \Delta q_l) \qquad (14)
\Delta\varphi = \frac{r}{2R}(\Delta q_r - \Delta q_l) \qquad (15)
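Equations (12)–(15) amount to a standard discrete odometry update followed by the control-point offset of (10)–(11); a minimal sketch follows, with the geometric constants r, R, d chosen as hypothetical values.

```python
import math

# One odometry step: dqr, dql are the wheel rotation increments.
# r: wheel radius, R: half the track width, d: offset of point B.
def odometry_step(x, y, phi, dqr, dql, r=0.1, R=0.25, d=0.3):
    dD = 0.5 * r * (dqr + dql)            # Eq. (14)
    dphi = 0.5 * r * (dqr - dql) / R      # Eq. (15)
    x += dD * math.cos(phi + 0.5 * dphi)  # Eq. (12)
    y += dD * math.sin(phi + 0.5 * dphi)  # Eq. (13)
    phi += dphi
    xB = x + d * math.cos(phi)            # Eq. (10)
    yB = y + d * math.sin(phi)            # Eq. (11)
    return x, y, phi, xB, yB
```

With equal wheel increments the heading is unchanged and the platform advances by \Delta D along its current heading, as expected.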
4. Robot Control

The control strategy combines the mobile base behaviour and the manipulator behaviour to produce an integrated system that performs coordinated motion and manipulation. We propose in this section the two-layer robot controller and the genetic algorithm that determines the solution giving the optimum rule base for a precompensator associated with the PID controller (Abdessemed, F. & Benmahammed, K. 2001).

Control inputs to the joints are composed of both feedback PID control and precompensator subsystem components (Fig. 4). The output of the precom-
where:

e_i'(k) = \theta_i^c(k) - \theta_i(k) \qquad (16)

Note that the desired angular position is not directly compared to the measured one, but first passes through the precompensator to be transformed into a new reference angular value for the PID-plant system. Thus, one writes:
where e(k) and \Delta e(k) are inputs to the map F, and m_i(k) is the output for the i-th joint, such that:

where e(t) = \theta_d - \theta, and \theta_d and \theta are the desired reference and the measured trajectories respectively. The required controller parameters are found on the basis of simplified dynamic equations; i.e., the manipulator is supposed to evolve at slow speeds of movement. Consequently, the Coriolis and centrifugal terms of the dynamic equations are neglected; thus
\tau = M(\theta)\ddot{\theta} \qquad (23)
Considering equations (22) and (23), the following transfer function is obtained:

\frac{\theta_i}{\theta_i^d} = \frac{k_{D_i} s^2 + k_{P_i} s + k_{I_i}}{J_i s^3 + k_{D_i} s^2 + k_{P_i} s + k_{I_i}} \qquad (24)
\Delta(s) = s^3 + \frac{k_{D_i}}{J_i}s^2 + \frac{k_{P_i}}{J_i}s + \frac{k_{I_i}}{J_i} = 0 \qquad (25)
J_i is taken to be the fixed term of the J_{ii} element of the inertia matrix J. In robotic control, the gains are typically chosen so that two of the poles are equal, real and negative. This gives a damping coefficient of one and consequently a fast response without oscillations. The remaining pole is then placed on the real axis, far away from the first two. Thus:
\Delta(s) = (s+\beta)^2(s+n\beta) \qquad (26)

k_{P_i} = m_i(2n+1)\beta^2, \quad k_{D_i} = m_i(2+n)\beta, \quad k_{I_i} = m_i n\beta^3 \qquad (27)
with \beta > 0, n > 1. The choice of \beta depends on the sampling frequency as well as on possible saturation effects.
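The pole-placement step in (26)–(27) can be checked numerically: expanding (s+\beta)^2(s+n\beta) must reproduce the coefficients of the characteristic polynomial (25). The values of \beta, n and the inertia term below are arbitrary illustrative choices (J plays the role of m_i).

```python
import numpy as np

# Expand (s + beta)^2 (s + n*beta) and compare against Eq. (25)'s
# coefficients k_D/J, k_P/J, k_I/J computed from the formulas of Eq. (27).
beta, n, J = 4.0, 5.0, 1.0
poly = np.poly([-beta, -beta, -n * beta])   # [1, a2, a1, a0]
kD = J * (2 + n) * beta
kP = J * (2 * n + 1) * beta**2
kI = J * n * beta**3
match = np.allclose(poly[1:], [kD / J, kP / J, kI / J])
```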
The precompensator is a fuzzy controller. Its main goal is to enter into action whenever it is needed to reinforce the conventional controller, providing the necessary commands that allow the end effector to track the desired trajectory with minimum error. A fuzzy controller is a system which uses rule-based expert knowledge in the form of If-Then statements. The input state with respect to a certain universe of discourse constitutes the premise of a rule, whereas the output state constitutes its consequence. Among the steps of rule treatment we can distinguish the following procedures: fuzzification, fuzzy inference and defuzzification. The main difficulty in designing a fuzzy logic controller is the efficient formulation of the fuzzy If-Then rules. It is well known that it is easy to produce the antecedent parts of fuzzy control rules, but it is very difficult to produce the consequent parts without expert knowledge. In this work, the fuzzy rule base of the precompensator is found by using a genetic algorithm that searches for the solution giving the optimum rule base for the precompensator. If we assume that the error and its derivative are partitioned into K and L subsets, then the number of possible combinations is L x K, which represents the number of rules per output. A single rule is defined as:
Rule R_{ij}: if e is A_i^e and \Delta e is A_j^{\Delta e} then m_{ij} is A_p^m, i = 1, 2, ..., 7; j = 1, 2, ..., 7; p = 1, 2, ..., 7.

R_{ij} is the label of the fuzzy if-then rule, A_i^e and A_j^{\Delta e} are fuzzy subsets on the interval [-1, 1], and A_p^m is the consequent fuzzy subset. The aim is to sort out the appropriate optimised rules adequate for the system by employing an evolving genetic algorithm. The system being evolved is encoded into a long valued string called a chromosome. Initially, a random population of these strings is generated. Each string is then evaluated according to a given performance criterion and assigned a fitness score. The strings with the best scores are used in the reproduction phase to give the next generation. Here, the genetic algorithm is presented as a seven-tuple entity and is abstracted in the following encapsulated form:
where:

P(t) = \{M_1(t), M_2(t), \ldots, M_D(t)\} \qquad (30)

where m_{ij} \in S_m \subset \mathbb{R} are the j-th consequent parts of the fuzzy rules of the i-th individual. Each takes its value from the set S_m = \{-1, -0.66, -0.33, 0.0, 0.33, 0.66, 1\}. These values correspond to the projections of the peaks of the membership functions on the normalized universe of discourse of the action.
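The encoding just described can be sketched as follows; the 7x7 rule table and the consequent set S_m come from the text, while the helper names and the population size are hypothetical.

```python
import random

# Each chromosome holds one consequent per (e, delta_e) premise pair,
# drawn from the set of membership-function peaks Sm.
Sm = [-1, -0.66, -0.33, 0.0, 0.33, 0.66, 1]
K = L = 7   # partitions of e and delta_e -> K*L rules per output

def random_chromosome():
    return [random.choice(Sm) for _ in range(K * L)]

pop = [random_chromosome() for _ in range(20)]  # initial random population P(t)
```

Selection would then score each chromosome with the chosen fitness function and keep the best strings for reproduction, as described above.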
If we consider the vehicle moving in a free obstacle environment, then the op-
timal trajectory from its current position to its end configuration is naturally a
line joining these two extreme points as it is shown for instance by Figure 5,
where θ is the angle between the symmetric axis of robot and the line that joins
the control point of the robot to its final point.
If we link the points with segments, then the goal is to control the driving
point A of the autonomous robot with respect to these segments and to come
as close as possible to the end point. The distance ρ becomes zero when the
vehicle stabilizes at its final configuration. Figure 6 gives a schematic block
diagram of this architecture. From this figure one can notice that the inputs to
the fuzzy controller are ρ and θ, and its output is the steering angle γ (Abdessemed & Benmahammed, 2004).
Figure 6. Block diagram of the controlled system: a block computing ρ and θ feeds the fuzzy logic controller, whose output γ drives the robot; odometry feeds back x_A, y_A, φ
The best fuzzy system is implemented with five and eight triangular membership
functions for the controller input variables ρ and θ, respectively. The second
step in designing an FLC is the fuzzy inference mechanism. For instance,
the knowledge base of the system consists of rules of the form:
Rule R_ij: IF ρ is S AND θ is NM THEN γ is n_p, with i = 1, 2, …, N; j = 1, 2, …, K; p = 1, 2, …, L.
Such that:
γ(k)=F[ρ(k),θ(k)] (31)
ρ(k) and θ(k) are inputs to the map F, and the output γ(k) denotes a numerical
value within the interval [-90°, +90°], characterizing the relative variation in
the direction that should be taken by the vehicle to reach the final point. The
rules could be defined using a human-like description of the vehicle's
movement behaviour. However, this approximate human reasoning may lead to
certain unsatisfactory rules. Furthermore, the global behaviour of the vehicle may
result from the combination of several basic behaviours, for instance trajectory
tracking, vehicle speed regulation, and obstacle avoidance. As a matter of
fact, it is not obvious how to define the respective rules. Therefore, following
the same approach described in the last section, we propose an evolutionary
algorithm as an efficient solution for the extraction of the consequent part of
the rules. A (μ+λ)-evolutionary programming is described by an octuple entity
defined by the following format:
where I_i designates the i-th individual, in which the components a_j describe the
consequent parts of the rules and the standard deviations. In this application,
we have chosen floating-point encoding rather than binary encoding.
The algorithm seeks many local optima and increases the likelihood of finding
the global optimum, representing the problem goal.
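A minimal sketch of evaluating such a rule base for γ(k) = F[ρ(k), θ(k)], assuming triangular membership functions and a weighted-average (zero-order Takagi-Sugeno style) defuzzification. The peak placements and the crisp-consequent scheme are illustrative assumptions, not the chapter's actual design:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical peak placements: 5 sets for rho (metres), 8 sets for theta (rad)
RHO_PEAKS = [0.0, 1.0, 2.0, 3.0, 4.0]
THETA_PEAKS = [-1.5 + i * (3.0 / 7) for i in range(8)]

def steering_angle(rho, theta, consequents):
    """Weighted-average defuzzification of the steering angle gamma.

    `consequents[i][j]` is the crisp steering consequent of rule (i, j),
    e.g. as extracted by the evolutionary algorithm described above."""
    num = den = 0.0
    for i, pr in enumerate(RHO_PEAKS):
        for j, pt in enumerate(THETA_PEAKS):
            w = tri(rho, pr - 1.0, pr, pr + 1.0) * tri(theta, pt - 0.5, pt, pt + 0.5)
            num += w * consequents[i][j]
            den += w
    return num / den if den > 0 else 0.0
```

With all consequents equal, the defuzzified output reduces to that common value, which is a quick sanity check on the rule-firing logic.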
The control strategy combines the mobile base behaviour and the manipulator
behaviour to produce an integrated system that performs coordinated motion
and manipulation. If we refer to the arm manipulator as agent1 and the mobile
platform as agent2, then the architecture shown in Figure 7 illustrates the
actions on the environment by the two defined agents.
Figure 7. Architecture of the two agents: vehicle and arm perception modules driving reflex actions on the task environment
For convenience, we define x_ij as the outputs of the network, where x_31, x_32, x_33
designate the Cartesian coordinates of the task variable E (x_EW, y_EW, z_EW). Let
q(k) be the input vector, such that q^T(k) = [θ, θ_2, θ_3, x_B, y_B], let ^m X_EW be the
measured output vector, such that ^m X_EW(k) = [^m x_EW, ^m y_EW, ^m z_EW]^T, and let W be the
weighting vector, such that W^T(k) = [w_11, w_22, w_33]. If we let the criterion E_p be
the tracking error given by equation (35), then the control objective is to design a
control law which guarantees that E_p goes to zero as k goes to infinity, where k is
the running time.
E_p(k) = (1/2) Σ_{i=1}^{N} (r_i − x_{3,i})²   (35)

where
^d X_EW = (r_1, r_2, r_3)^T = (^d x_EW, ^d y_EW, ^d z_EW)^T represents the desired operational
coordinates and ^m X_EW = (x_31, x_32, x_33)^T = (^m x_EW, ^m y_EW, ^m z_EW)^T the measured
operational coordinates in the world frame. The effect of adjusting the weighting
vector W on the error E_p is determined by the ordered derivatives ∂⁺E_p/∂W(k)
(Werbos, 1974). Now, we apply the back-propagation learning rule to generate
the appropriate parameter-weighting vector W(k) (Rumelhart et al, 1986). Once
determined, the weights are used to update the input vector q. The elements of
this vector will serve as inputs to the low level controllers of the two agents as
illustrated by the block diagram of Fig. 9. The reference states of the plant at
time k+1 are functions of the reference states at time k and the computed
weights at time k+1, and can be expressed symbolically as
Figure 9. Configuration of the controlled system including the adaptive graph of op-
erations
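The steepest-descent weight update of eq. (46) together with the reference-state scaling described above (e.g. θ(k+1) = w_11(k+1)·θ(k)) can be sketched as follows; the gradient values in the usage example are hypothetical:

```python
def sgd_step(weights, grads, mu=0.05):
    """One steepest-descent update w(k+1) = w(k) - mu * dE/dw (cf. eq. (46))."""
    return [w - mu * g for w, g in zip(weights, grads)]

def update_reference_states(states, weights):
    """Reference states at k+1 scale the states at k by the new weights,
    mirroring e.g. theta(k+1) = w11(k+1) * theta(k)."""
    return [w * s for w, s in zip(weights, states)]

w = [1.0, 1.0, 1.0]           # all weights initialised to one, as in the text
grads = [0.2, -0.1, 0.0]      # hypothetical ordered derivatives dE_p/dw
w = sgd_step(w, grads)
states = update_reference_states([0.5, 0.3, 0.1], w)
```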
The error signal for the j-th output node can be calculated directly:
ε_{3,i} = ∂⁺E_p/∂x_{3,i} = ∂E_p/∂x_{3,i}   (37)
Therefore
The error signals of these internal nodes at the j-th position are calculated us-
ing the following equation
ε_{l,i} = Σ_{m=1}^{N(l+1)} ε_{l+1,m} ∂f_{l+1,m}/∂x_{l,i}   (39)

ε_{2,j} = Σ_{m=1}^{2} ε_{3,m} ∂f_{3,m}/∂x_{2,j},   j = 1, 2.   (40)
Therefore the error signals of the nodes at the internal layer are as follows:
ε_{2,1} = ε_{3,2}·x_{2,2},   ε_{2,2} = ε_{3,1}·x_{1,1} + ε_{3,2}·x_{2,1},   ε_{2,3} = −ε_{3,3},   ε_{2,4} = −ε_{3,3}
The first layer contains four neurons arranged in the way presented by Figure
8. The general form for the error signal is given by the following equation:
ε_{1,i} = Σ_{m=1}^{4} ε_{2,m} ∂f_{2,m}/∂x_{1,i}   (41)

ε_{1,1} = −ε_{2,1} x_{1,1}/(1 − x_{1,1}²) + ε_{3,1}·x_{2,2}   (42)
w_{ij}^l(k+1) = w_{ij}^l(k) − μ (∂⁺E_p/∂w_{ij}^l)|_{w_{ij}^l(k)}   (46)

where

∂⁺E_p/∂w = (∂⁺E_p/∂x_{l,i})(∂⁺f_{l,i}/∂w) = ε_{l,i} ∂f_{l,i}/∂w   (47)

Therefore the weights are updated according to the following resulting equations:
where the last two equations represent the updates of the biases b_1 and b_2.
Nevertheless, the steepest-descent algorithm is slow for on-line applications.
For that reason, it is advisable to use the Levenberg-Marquardt algorithm
instead, which has proved to be an effective way to accelerate the convergence
rate. Its principal advantage is that it exploits first-derivative information to
approximate second-order behaviour without computing the Hessian matrix.
x_{k+1} = x_k − [J^T J + μI]^{-1} J^T ε   (53)
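A minimal sketch of one Levenberg-Marquardt step as given by eq. (53); the toy residual used in the usage line is an assumption for illustration:

```python
import numpy as np

def lm_step(x, jac, err, mu):
    """One Levenberg-Marquardt update x_{k+1} = x_k - (J^T J + mu I)^{-1} J^T e.

    `jac` is the Jacobian of the residual vector `err` with respect to x;
    mu blends between gradient descent (large mu) and Gauss-Newton (small mu)."""
    n = x.size
    A = jac.T @ jac + mu * np.eye(n)
    return x - np.linalg.solve(A, jac.T @ err)

# toy residual e(x) = x with J = I: the step contracts x halfway towards 0
x = np.array([1.0, -2.0])
x_new = lm_step(x, np.eye(2), x, mu=1.0)
```

Note the sketch solves the damped normal equations rather than forming the inverse explicitly, which is the numerically preferred route.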
The weights are updated on each iteration by the following expressions:

w_11(k+1) = w_11(k) − (j_11/(j_11² + μ)) ε_{1,1}
w_22(k+1) = w_22(k) − (j_22/(j_22² + μ)) ε_{1,2}
w_33(k+1) = w_33(k) − (j_33/(j_33² + μ)) ε_{1,4}   (54)
b_1(k+1) = b_1(k) − (j_44/(j_44² + μ)) ε_{3,1}
b_2(k+1) = b_2(k) − (j_55/(j_55² + μ)) ε_{3,2}
where I is the identity matrix. At this final stage, we give the reference state
variables at time k+1, which are represented by the following expressions:
θ(k+1) = w_11(k+1) θ(k)
θ_2(k+1) = w_22(k+1) θ_2(k)
θ_3(k+1) = w_33(k+1) θ_3(k)
θ_1 = θ − ϕ   (56)
x_B(k+1) = b_1(k+1) x_B(k)
y_B(k+1) = b_2(k+1) y_B(k)
7. Simulation Results
Simulation examples are carried out in order to evaluate the developed
approach. It is desired to move the end effector from its initial position P1 (1, 1,
0.2) to its final position P2 (5, 5, 0.5), instantaneously tracking a specified linear
trajectory of the end effector generated by a uniform Cartesian movement.
The neural-like network learns the desired values presented and adjusts the
weights appropriately in order to present to the system the corresponding
reference state variables. The results of the simulation are shown in Figures 12 to
15 and indicate how closely the Cartesian coordinates of the end-effector
track their corresponding reference values. We notice that the
small departures from the reference trajectories are due to the accumulated
tolerable errors from the learning process. The learning algorithm was run
using a learning rate μ = 0.05 for a lapse of time not exceeding real-time control. All
the weights were initialised to the value of one. At each step, the learning
rate is updated depending on the behaviour obtained. If the overall error is
improved, then the learning rate is increased as μ = μ·μ_inc; otherwise,
it is decreased as μ = μ·μ_dec, where μ_inc and μ_dec take
the values of 1.05 and 0.95, respectively. Figures 16 to 19 show the plots of the
manipulator angular values as well as the orientation of the mobile platform,
and Figure 20 clearly shows the trajectories of the end effector and the mobile
platform in the xyz space. Figure 21 depicts a 3D perspective of the simulation
environment.
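The adaptive learning-rate schedule described above (grow μ by μ_inc = 1.05 on improvement, shrink by μ_dec = 0.95 otherwise) can be sketched directly:

```python
def adapt_rate(mu, err, prev_err, mu_inc=1.05, mu_dec=0.95):
    """Learning-rate schedule from the text: grow mu when the overall error
    improves, shrink it otherwise."""
    return mu * mu_inc if err < prev_err else mu * mu_dec

mu = 0.05                                     # initial learning rate, as in the text
mu = adapt_rate(mu, err=0.8, prev_err=1.0)    # improved -> mu grows to 0.0525
mu = adapt_rate(mu, err=0.9, prev_err=0.8)    # worsened -> mu shrinks
```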
Figure 12. Desired and measured x-trajectory plots and the error resulted
Figure 13. Desired and measured y-trajectory plots and the error resulted
Figure 14. Desired and measured z-trajectory plots and the error resulted.
Figure 15. X-Y plots of the end-effector and the mobile platform trajectories
Figure 20. X-Y-Z plots of the end-effector and the mobile platform trajectories
8. Conclusion
9. References
Werbos, P. (1974). Beyond Regression: New tools for Prediction and Analysis
in the Behavioral Sciences, Ph.D. Dissertation, Harvard Univ., 1974.
Whitney, D. E. (1969). Resolved Motion Rate Control of Manipulators and
Human Prostheses, IEEE Transactions on Man-Machine Systems, pp. 47-53,
Vol. MMS-10, No. 2, June 1969.
Yamamoto, Y. & Yun, X. (1994). Coordinating locomotion and manipulation of
a mobile manipulator, IEEE Transactions on Automatic Control, pp. 1326-
1332, 39(6), 1994.
Zadeh, L. (1994). Fuzzy Logic, Neural Networks, and Soft Computing,
Communications of the ACM, pp. 77-84, Vol. 37, No. 3, 1994.
Zadeh, L.A. (1973). Outline of a New Approach to the Analysis of Complex
Systems and Decision Processes, IEEE Trans. SMC, pp. 28-44, Vol. 3, 1973.
18

Control of Redundant Robotic Manipulators with State Constraints

Mirosław Galicki

1. Introduction
hand, the controller proposed in our work adaptively varies both the kinematic
parameters of the Jacobian matrix and the dynamic ones, in such a way that the
end-effector stably follows a geometric path. The paper is organized as
follows. Section 2 formulates the robotic task to be accomplished in terms of a
control problem. Section 3 describes how to employ the Lyapunov stability
theory to determine controls (if they exist). Section 4 provides us with a com-
puter example of generating the robot controls in a two dimensional task space
for a planar redundant manipulator comprising three revolute kinematic pairs.
Finally, some conclusions are drawn.
where M(q) denotes the n × n inertia matrix; C(q, q̇)q̇ is the n-dimensional
vector representing centrifugal and Coriolis forces; G(q) stands for the n-
dimensional vector of gravity forces; u = (u_1, …, u_n)^T is the vector of controls
p ( q ) − Θ( s ) = 0 (2)
path; s is the current parameter of the path (e.g. its length); s ∈ [0, s_max]; s_max is
the maximal path length. The mapping Θ is assumed to be bounded together
with its first and second derivatives and not degenerate, i.e. ||dΘ/ds|| > 0.
p ( q 0 ) − Θ ( 0) = 0 (3)
s (T ) − s max = 0 (4)
Furthermore, at the initial and the final time moment, the manipulator and
path velocities equal zero, i.e.
q (0) = q (T ) = 0 (5)
and
∀ v, q, q̇ ∈ R^n:  ⟨v, (Ṁ(q)/2 − C(q, q̇)) v⟩ = 0   (8)
3. The dynamic equations (1) are linear with respect to an ordered set of physi-
cal parameters X = ( X 1 ,..., X d )T , i.e.
4. The right-hand side of (7) is linear with respect to Y. Consequently, equation
(7) can be expressed as follows.
Our aim is to control the manipulator such that the end-effector fulfils (2)-(6).
Therefore, we propose adaptive Jacobian path-following controllers for robots
with both uncertain kinematics and dynamics. In our approach, exact
knowledge of the robot dynamic equations and of the Jacobian matrix is not
required for updating the uncertain parameters.
In the presence of kinematic uncertainty, the parameters of the Jacobian matrix
are uncertain and hence equality (10) can be expressed as follows
where Jˆ = J( q,Yˆ ) ∈ R m×n is an adaptive Jacobian matrix and Ŷ stands for the
vector of estimated kinematic parameters. In order to show the stability of the
path following control system in the presence of both kinematic and dynamic
uncertainties, we define an adaptive joint-space sliding vector z as
z = λĴ^T e + q̇   (13)
Based on equations (1) and (13)-(14), we can easily express the robot dynamic
equations by means of the vector z and its time derivative, as

where q̇_r = (q̇_{r,1}, …, q̇_{r,n})^T = −λĴ^T e and q̈_r = (q̈_{r,1}, …, q̈_{r,n})^T = −λ(dĴ^T/dt)e − λĴ^T ė.
Moreover, we know from property 3., that the last three terms of equation (15)
become linear with respect to vector X and hence they can be written as fol-
lows
Inserting the right hand side of (16) into (15), robot dynamic equations (15)
take the following form
Based on (13), (14) and (17), let us propose the following adaptive Jacobian
controller
where X̂ stands for the estimated physical parameter vector defined below;
k and k p are some positive scalars which could be replaced by diagonal ma-
trices of positive constants without affecting the stability results obtained fur-
ther on (this should lead to improved performance). Here, scalar constants are
chosen for simplicity of presentation. In order to measure error e , path param-
eterization s = s(t ) is required which is computed by solving the following
scalar differential equation
s̈ = −k_s ṡ − k_P ( ⟨e, −dΘ(s)/ds⟩ + γ(s − 1) + (1/2)(dγ/ds)(s − 1)² )   (19)
s ≤ 1 + √(γ₀/inf{γ}), where γ₀ = γ(0) (as will be seen further on, γ₀ determines
the upper bound on the accuracy of the path following and may be specified
by the user).
The choice of the function γ is crucial for the computational effectiveness of the
control scheme (18), (19). One possibility is γ = γ(s) with dγ/ds > 0. An
alternative choice is any function γ fulfilling

γ + (1/2)(dγ/ds)(s − 1) ≠ 0   (Assumption 1)
As will be seen further on, Assumption 1. results in an asymptotic convergence
of s to 1 .
Let us note that the first two terms of (18) present an adaptive transpose-Jacobian
controller with adaptively varying kinematic parameter vector Ŷ. The last term
in (18) is an estimated control based on equation (16). The estimated kinematic
parameters Ŷ of the adaptive Jacobian matrix Ĵ = J(q, Ŷ) are updated according
to the following laws:

dŶ/dt = w_k k_P K^T(q, q̇) e   (20)

dX̂/dt = −w_d D^T(q, q̇, q̇_r, q̈_r) z   (21)
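A discrete-time sketch of the adaptation laws (20)-(21), integrated with one Euler step; the regressor matrices K and D are assumed given, and the identity matrices and gain values in the usage lines are purely illustrative:

```python
import numpy as np

def update_estimates(Y_hat, X_hat, K, D, e, z, wk, wd, kP, dt):
    """One Euler step of the adaptation laws (20)-(21):
    dY_hat/dt = wk * kP * K^T e   and   dX_hat/dt = -wd * D^T z."""
    Y_next = Y_hat + dt * wk * kP * (K.T @ e)
    X_next = X_hat - dt * wd * (D.T @ z)
    return Y_next, X_next

# toy 2-D illustration with identity regressor matrices (hypothetical values)
Y_hat, X_hat = update_estimates(np.zeros(2), np.zeros(2),
                                K=np.eye(2), D=np.eye(2),
                                e=np.array([1.0, 0.0]), z=np.array([1.0, 1.0]),
                                wk=1.0, wd=1.0, kP=2.0, dt=0.1)
```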
Let us note that the setpoint controllers proposed in (Cheah et al., 1999;
Yazarel & Cheah, 2002; Cheah et al., 2003), which are computationally
somewhat simpler, are not applicable to our task. The reason is that the
approximation error in those works is bounded by a constant, and the
approximate Jacobian matrix does not include parameters to be adapted. Due to
the adaptation law (20), one cannot guarantee in our task that the assumption
regarding the approximation error made in (Cheah et al., 1999; Yazarel &
Cheah, 2002; Cheah et al., 2003) is satisfied.
The closed-loop error dynamics is obtained by inserting the right hand side of
equation (18) into equation (17)
M(q)ż = −C(q, q̇)z + D(q, q̇, q̇_r, q̈_r)X̃ − kz − k_P Ĵ^T e

ė = J(q, Y)q̇ − (dΘ(s)/ds) ė_{m+1}

ë_{m+1} = −k_s ė_{m+1} − k_P ( ⟨e, −dΘ(s)/ds⟩ + γ e_{m+1} + (1/2)(dγ/ds) e_{m+1}² )   (22)

dỸ/dt = w_k k_P K^T(q, q̇) e

dX̃/dt = −w_d D^T(q, q̇, q̇_r, q̈_r) z

where Ỹ = Ŷ − Y; X̃ = X̂ − X. Applying the Lyapunov stability theory, we
derive the following result.
Theorem 1. If there exists a solution to the problem (1)-(6), the adaptive
Jacobian matrix Ĵ is non-singular along the end-effector path (2), and the function
γ fulfils Assumption 1, then the control scheme (18) generates a manipulator
trajectory whose limit point (q̇(∞), ė_{m+1}(∞), e(∞), e_{m+1}(∞)) = (0, 0, 0, 0), i.e.
satisfying state constraints (2)-(6), is asymptotically stable.

V = (1/2) ( ⟨Mz, z⟩ + ||X̃||²/w_d + ||Ỹ||²/w_k + ė_{m+1}² + k_P γ e_{m+1}² + k_P ||e||² ).   (23)
V̇ = ⟨ż, Mz⟩ + (1/2)⟨z, Ṁz⟩ + k_P⟨J^T e, q̇⟩ + ⟨X̃, dX̃/dt⟩/w_d + ⟨Ỹ, dỸ/dt⟩/w_k +
ė_{m+1} ë_{m+1} + k_P⟨e, −dΘ/ds⟩ ė_{m+1} + k_P γ e_{m+1} ė_{m+1} + (1/2) k_P (dγ/ds) e_{m+1}² ė_{m+1}.   (24)

Substituting Mż, ė, ë_{m+1}, dỸ/dt and dX̃/dt in V̇ by the right-hand sides of the
closed-loop error dynamics (22), and using the skew-symmetric property of the
matrix Ṁ/2 − C [property 2, eqn. (8)], we obtain after simple calculations that
V̇ ≤ 0.
Since V̇ ≤ 0, the function V is bounded. Therefore, z, X̃ and Ỹ are bounded
vectors. This implies that X̂ and Ŷ are bounded, too. Consequently, it follows
from (13) that q̇ is also bounded. Moreover, ṡ and s̈ are bounded, too. As can
be seen, V̇ is negative for all (z, e, ė_{m+1}) ≠ 0 and is zero only when
(z, e, ė_{m+1}) = 0, which implies (using the LaSalle-Yoshizawa invariance
theorem (Krstic et al., 1995)) that (z, e, ė_{m+1}) tends asymptotically to zero, i.e.
z(T) → 0, e(T) → 0, and ė_{m+1} → 0 as T → ∞. By differentiating
ë_{m+1} in (22) with respect to time, it is also easy to see that d³e_{m+1}/dt³ is a
bounded function, by the assumptions regarding Θ and γ. This means that ë_{m+1}
is uniformly continuous. Hence, ë_{m+1}(T) → 0 as T → ∞, too. The convergence
of the path velocity and
acceleration yields the following equation
γ e_{m+1}(∞) + (1/2)(dγ/ds) e_{m+1}²(∞) = 0.   (25)
1 dDŽ
Consequently, em+1 ( ∞ ) = 0 or DŽ + em+1 ( ∞ ) = 0 . On account of Assumption
2 ds
1, the second equality is not fulfilled. Thus, em+1 ( ∞ ) = 0 (or equivalently
s ( ∞ ) = 1 ). On account of (13), q(T
) → 0 , as T → ∞ , too. Consequently,
boundary conditions (4)-(6) are (asymptotically) fulfilled and limit point
( q ( ∞ ) ,e ( ∞ ) ,e ( ∞ ) ,e ( ∞ ) )
m +1 m +1 = ( 0 , 0 , 0 , 0 ) is asymptotically stable. Finally, it
should be emphasized, that the chosen Lyapunov function does not guarantee
convergence of parameter estimations X̂ and Ŷ to their true values.
Control of Redundant Robotic Manipulators with State Constraints 509
On account of (3)-(6), we have V_{t=0} = X̃²_{t=0}/(2w_d) + Ỹ²_{t=0}/(2w_k) + k_P γ₀/2.
For sufficiently large w_d and w_k, the first two terms in this equality may be
omitted. Hence, we obtain V_{t=0} ≅ k_P γ₀/2.

Since V̇ is not positive, the function V fulfils the inequality V ≤ k_P γ₀/2.
Consequently, the following bound on ||e|| may easily be obtained, based on
(23) and the last dependence:

||e|| ≤ √γ₀   (26)
An important remark may be derived from the proof carried out. Namely,
inequality (26) presents a (path-independent) upper bound on the accuracy of
path following by the end-effector under the control law (18). Let us note that
this estimation of the upper bound on the path following error (26) is very
conservative. Consequently, the control gains w_d and w_k do not require large
values to achieve a good path following accuracy, as the numerical simulations
(given in the next section) show.
Moreover, several observations can be made regarding the control strategy
(18). First, note that the proposed control law requires, in fact, no information
concerning the robot kinematic and dynamic equations. Second, the choice of
the controller parameters k, k_p, k_s, w_d and w_k according to dependencies (18)-(21)
guarantees asymptotic stability of the closed-loop error dynamics (22) during
the manipulator movement.
Moreover, the transpose of Ĵ (instead of a pseudoinverse) in control scheme
(18) does not result in numerical instabilities due to (possible) kinematic
singularities encountered along the robot trajectory. Nevertheless, (18) has been
derived under the assumption of a full-rank adaptive Jacobian matrix along the
path. Furthermore, controller (18) does not require knowledge of the task-space
velocity.
Due to the conservative estimation of the path-following accuracy, control
algorithm (18) achieves a better path-following accuracy than the upper bound
given by (26), as the numerical computations carried out in the next section
show. In order to prevent control (torque) oscillations at the very beginning of
the time histories (caused e.g. by a non-zero initial path-following error), k_s from
(19) should be a bounded, quickly decreasing time-dependent function as t → ∞
(see the next section).
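One simple way to realise such a damping gain is an exponential decay towards a small steady-state value; the exponential form and all numeric values below are assumptions for illustration, not the function used in the chapter's simulations:

```python
import math

def k_s(t, ks0=10.0, ks_inf=1.0, lam=5.0):
    """Hypothetical damping gain: bounded, quickly decaying from ks0 at t=0
    towards ks_inf as t -> infinity, suppressing initial torque oscillations."""
    return ks_inf + (ks0 - ks_inf) * math.exp(-lam * t)
```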
Due to the real-time nature of the robot controller (18), we shall estimate the
number of arithmetic operations required to implement the algorithm presented
in this section. The dimension of the robot task space is assumed in this
estimation to be constant. Operations required for the computation of the sin,
cos and Θ(·) functions are not taken into account. Furthermore, the matrices Ĵ, K
and D are assumed in the estimation to be given, and the estimates hold at any
time instant of the robot task accomplishment. It follows from (13) and (18) that
the terms kz and k_p Ĵ^T e require O(n) operations. Computation of the right-hand
sides of (19) and (20) involves O(1) and O(n) operations, respectively, assuming
that k = O(n). The computational complexity of the right-hand side of (21)
equals O(n²), by the assumption that d = O(n). Computation of the estimated
control D(q, q̇, q̇_r, q̈_r)X̂ also requires the same order of complexity, i.e. O(n²)
operations. Finally, the computational complexity of the whole robot controller
(18) is of the order O(n²).
4. A numerical example
The aim of this section is to illustrate the performance of the proposed adaptive
control algorithm using a dynamic three-joint (n = 3) direct-drive arm of a
SCARA-type robotic manipulator operating in a two-dimensional (m = 2) task
space. The kinematic scheme of this manipulator and the task to be accomplished
are shown in Fig. 1. In the simulations, SI units are used. The components of the
dynamic equations of this manipulator are as follows (Spong & Vidyasagar, 1989):
M = [M_ij]_{1 ≤ i,j ≤ 3}
Figure 1. Kinematic scheme of the manipulator (links l_1, l_2, l_3; joint angles q_1, q_2, q_3) and the geometric path in the p_1–p_2 plane [m]
C = [C_ij]_{1 ≤ i,j ≤ 3}
where
C_11 = −(X_4 s_2 + X_5 s_12)q̇_2 − (X_5 s_13 + X_6 s_13)q̇_3;
C_12 = −(X_4 s_2 + X_5 s_12)(q̇_1 + q̇_2) − (X_5 s_12 + X_6 s_12)q̇_3;
C_13 = (X_5 s_12 + X_6 s_12)(−q̇_1 + q̇_2 + q̇_3);
C_21 = (X_5 s_2 + X_5 s_12)q̇_1 + X_6 s_3 q̇_3;
C_22 = −(X_5 s_12 + X_6 s_12)q̇_3;
C_23 = −X_6 s_3(3q̇_1 + q̇_2 + q̇_3);
C_31 = (X_4 s_2 + X_5 s_12)q̇_1 − X_6 s_3 q̇_2;
C_32 = X_6 s_3(q̇_1 + q̇_2);
C_33 = 0.
G = (G_1, G_2, G_3)^T

where

G_1 = X_7 c_1 + X_8 c_12 + X_9 c_123;   G_2 = X_8 c_12 + X_9 c_123;   G_3 = X_9 c_123.
Parameters X i , i = 1 : 9 take the following nominal values:
X 1 = 1.1956 ,
X 2 = 0.3946 ,
X 3 = 0.0512 ,
X 4 = 0.4752 ,
X 5 = 0.1280 ,
X 6 = 0.1152 ,
X 7 = (m1lc1 + m2l1 + m3l1 )g ,
X 8 = (m2lc 2 + m3l2 )g ,
X 9 = m3lc 3 g ,
where g stands for the gravity acceleration; m_i, l_i and l_ci, i = 1:3, denote link
mass, length and location of the mass centre, respectively. The kinematic
parameters are Y_i = l_i, i = 1:3;
(e_1, e_2, e_3)^T = (p_1 − Θ_1, p_2 − Θ_2, s − 1)^T

Θ_1(s) = 0.36 + 0.24 s
Θ_2(s) = −0.7 + 1.2 s
Figure 2. Path following error e1 vs. time
Figure 3. Path following error e2 vs. time
Figure 4. Path following error e3 vs. time
Figure 5. Input torque u1 vs. time
Figure 6. Input torque u2 vs. time
Figure 7. Input torque u3 vs. time
[Figure: trajectory in the p1–p2 plane; axes p1, p2 in metres]
As might be expected, the path following errors in Figs 2-3 are much smaller
than those obtained from the conservative dependence (26). Moreover, as
one can observe from Figs 2-7, the time-dependent damping function k_s
decreases (eliminates) error and torque oscillations at the very beginning of the
time histories. Furthermore, as seen from Figs 8-19, the estimates X̂, Ŷ do not
converge to their real (nominal) values.
5. Conclusion
This study has presented an adaptive robot controller for path following
by the end-effector. The control generation scheme has been derived using the
Lyapunov stability theory. An advantage of the proposed control law (18) is
that it requires, in fact, no information regarding the parameters of the robot
dynamic equations. The control strategy (18) is shown to be asymptotically
stable (under practically reasonable assumptions). The proposed robot
controller has been applied to a planar redundant manipulator with three
revolute kinematic pairs operating in a two-dimensional task space. Numerical
simulations have shown that the results obtained are in accordance with the
theoretical analysis. The novelty of the proposed strategy lies in its relative
simplicity in design, program code and real-time implementation. The approach
presented here will in future also be directly applicable to cooperating
kinematically redundant manipulators.
Acknowledgement. This work was supported by the DFG Ga 652/1--1,2.
6. References
Arimoto, S. (1990). Design of robot control systems, Adv. Robot., vol. 4, no. 1,
pp. 79-97.
Arimoto, S. (1996). Control theory of non-linear mechanical systems, Oxford,
U. K., Clarendon.
Berghuis, H.; Ortega, R. & Nijmeijer, H. (1993). A robust adaptive robot
controller, IEEE Trans. on Robotics and Automation, vol. 9, no. 6, pp. 825-830.
Canudas de Wit, C.; Siciliano, B. & Bastin, G. (1996). Theory of robot control,
New York: Springer-Verlag.
Cheah, C. C.; Kawamura, S. & Arimoto, S. (1999). Feedback control for robotic
manipulators with an uncertain Jacobian matrix, J. Robot. Syst., vol. 16,
no. 2, pp. 119-134.
1. Introduction
Nowadays, there are still two major challenges for industrial robotics in
automated production: enhancing manufacturing precision and reducing
cycle times. Beside the advances made in classic robotics over the last decades,
new technologies are emerging in the industrial field aiming at more flexible,
high-speed and accurate manufacturing. Robots with parallel structures
are attracting the attention of the automation industry as an innovative product
with high dynamic potential. Such robots, like the Tricept, are nowadays
integrated by BMW, Volvo or Airbus in their manufacturing lines. Compared
to each other, the classic serial (or open) chain robots and the parallel (or
closed) chain robots have their specific benefits and suffer from their own
drawbacks. The proposed chapter gives a comparison of the two types with
regard to their suitability for solving modern problems in industrial robotics.
Additionally, appropriate approaches are proposed to remove drawbacks of
classic industrial control solutions. Hereby, the focus is on model-based
strategies for improving control accuracy at high dynamics and thereby
exposing solutions towards high-speed automation.
One of the main purposes of the proposed chapter is to contribute to extending
the state of the art in industrial robotics by the innovative class of parallel
robots. Furthermore, classic and advanced model-based control approaches are
discussed for both robot types, and uniform methodologies for both classes are
given. The focus is on crucial issues for practical application in the industrial
field.
The first aspect is surely the modelling of kinematics (see section 2) and
dynamics (see section 3) for serial and parallel robots. Here, an opposite duality
in formalism is shown. By an appropriate choice of minimal coordinates and
velocities, the inverse dynamics of the two robot classes can be derived by the
principle of virtual power. This yields computationally highly efficient models
that are well suited for real-time applications. The success of such feedforward
control depends on the estimation quality of the model parameters.
2. Kinematic Analysis
To enable uniform approaches for serial and parallel robots, elementary
assumptions and definitions at the formal level have to be revised. As mentioned
in the introduction, we will concentrate on the case of industrially relevant
robotic systems, i.e. n = 6-DOF non-redundant mechanisms. Both mechanisms
are supposed to have n_a actuated joints grouped in the vector q_a, which defines
the actuation space A. Additionally, passive joints are denoted by q_p. Both
vectors can be grouped in the joint vector q = [q_a^T  q_p^T]^T, which corresponds
consequently to the joint space Q. The operational or work-space W of an in-
dustrial robot is defined by the 6-dimensional pose vector x containing the po-
sition and orientation of the end-effector (EE) with respect to the inertial frame.
Let the vector z now denote the generalized (or minimal) coordinates, i.e. the
independent coordinates that are necessary to uniquely describe the system. Its
dimension coincides therefore with the number of DOFs (Meirovitch, 1970;
Bremer, 1988), and it defines the configuration space C.
Already at this formal level, important differences between serial open-chain
robots and parallel closed-chain robots must be considered. For classic
industrial robots, the case is quite simple and well known. Such systems do
Model-Based Control for Industrial Robots: Uniform Approaches… 525
not have passive joints, and the actuated joints correspond to the minimal
coordinates, which yields the coincidence of almost all coordinate spaces:
q = q_a, hence Q ≡ A, and z = q_a = q, hence C ≡ Q ≡ A.
The case of a 6-DOF parallel robot is more complicated. The pose vector x
defines uniquely the configuration of the system, but the robot additionally
contains passive joints (Merlet, 2000):
z = x, hence C ≡ W, while q ≠ q_a and q_a ≠ z, hence C ≠ Q ≠ A.
In robotics, the motion of each link is described with respect to one or more
frames. It is thus necessary to define conventions to transform kinematic
quantities (positions, velocities and accelerations). Homogeneous transformations
are state of the art in robotics. Besides the fixed inertial frame (KS)_0 and
the end-effector frame (KS)_E, each link i is associated with a body-fixed frame
(KS)_i. For an efficient formulation and calculation of the homogeneous
transformations (given by T_i^{i−1}) between two adjacent links i − 1 and i, it is
recommended to use the modified DENAVIT-HARTENBERG notation (or MDH),
which yields a unified formalism for open and closed-chain systems (Khalil &
Kleinfinger, 1986; Khalil & Dombre, 2002). We obtain:
T_i^{i−1} = [ R_i^{i−1}   (i−1)r_i^{i−1} ; 0 0 0 1 ] =

[  cϑ_i          −sϑ_i          0        a_i        ]
[  sϑ_i cα_i      cϑ_i cα_i     −sα_i    −d_i sα_i  ]
[  sϑ_i sα_i      cϑ_i sα_i      cα_i     d_i cα_i  ]   (1)
[  0              0              0        1         ]
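The MDH transformation of eq. (1) can be assembled in a few lines; the parameter values in the usage line are arbitrary illustrative numbers:

```python
import numpy as np

def mdh_transform(alpha, a, theta, d):
    """Homogeneous transform T_i^{i-1} in the modified DH (Khalil) convention,
    matching eq. (1)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa,  -d * sa],
        [st * sa,  ct * sa,  ca,   d * ca],
        [0.0,      0.0,      0.0,  1.0],
    ])

# arbitrary illustrative joint parameters
T = mdh_transform(alpha=0.0, a=0.3, theta=np.pi / 2, d=0.1)
```

Chaining such matrices over the links yields the forward kinematics of an open chain; for the closed chains discussed below, the same convention is applied leg by leg.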
(i)v_i = (i)v_{i−1} + (i)ω̃_{i−1} (i)r_{i−1}^i + e_z ḋ_i   (2)

(i)v̇_i = (i)v̇_{i−1} + (i)ω̃̇_{i−1} (i)r_{i−1}^i + (i)ω̃_{i−1} (i)ω̃_{i−1} (i)r_{i−1}^i + d̈_i e_z + 2 ḋ_i (i)ω̃_{i−1} e_z   (3)

where the tilde denotes the skew-symmetric (cross-product) operator, ãb = a × b.
f: A → W,  q_a ↦ x = f(q_a)

g: W → A,  x ↦ q_a = g(x)
We mentioned above that only the minimal coordinates describe the system
uniquely. Consequently, only the transformations whose argument set is the
configuration space can be computed or given in a closed form. This
fact explains why the solution of the inverse problem is quite simple and
available analytically for parallel robots (C ≡ W), whereas the solution of the
forward kinematics can generally be obtained only numerically (Tsai, 1999;
Merlet, 2000). In contrast, the forward kinematics is easily obtained for
serial-chain robots (C ≡ A), whereas the inverse problem is generally
cumbersome to solve. As discussed in the following sections, such system-
inherent properties have an important impact on the practical implementation
of control. E.g. the well-known computed-torque feedback approach is not
suitable for parallel robots, since the minimal coordinates z = x cannot be
measured.
x = [x y z α β γ]^T,

T_E^0(x) =
[  cβcγ               −cβsγ              sβ      x  ]
[  cαsγ + sαsβcγ       cαcγ − sαsβsγ    −sαcβ    y  ]
[  sαsγ − cαsβcγ       sαcγ + cαsβsγ     cαcβ    z  ]
[  0                   0                 0       1  ]
The differential kinematics maps the velocity of the end-effector into the velocity of the actuated joints $\dot q_a$ and vice versa. It is necessary to relate a desired motion in the task space to the necessary motion of the actuated joints. This is achieved by the Jacobian matrix
$$ \dot x = \begin{bmatrix} \dfrac{\partial f_1}{\partial q_{a,1}} & \cdots & \dfrac{\partial f_1}{\partial q_{a,n_a}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_n}{\partial q_{a,1}} & \cdots & \dfrac{\partial f_n}{\partial q_{a,n_a}} \end{bmatrix} \dot q_a \qquad (7) $$
or simply
$$ \dot x = J_A\,\dot q_a. \qquad (8) $$
The analytical Jacobian $J_A$ relates the time derivative of the pose vector to the articulated velocities. Since the orientation vector $\pi$ is composed of pseudo-coordinates, whose time derivatives have no physical meaning (Bremer, 1988; Meirovitch, 1970), it is convenient to define the rotational velocity of the end-effector with respect to the fixed frame, $\omega_E = [\,\omega_x\;\;\omega_y\;\;\omega_z\,]^T$, such that
$$ \omega_E = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & 0 & s\beta \\ 0 & c\alpha & -s\alpha\,c\beta \\ 0 & s\alpha & c\alpha\,c\beta \end{bmatrix}}_{R_K(\alpha,\beta)} \begin{bmatrix} \dot\alpha \\ \dot\beta \\ \dot\gamma \end{bmatrix} \qquad (9) $$
$$ v_E = \begin{bmatrix} \dot r_E \\ \omega_E \end{bmatrix} = J\,\dot q_a, \quad \text{with } J = \begin{bmatrix} I & 0 \\ 0 & R_K \end{bmatrix} J_A \qquad (10) $$
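The rate-mapping matrix $R_K(\alpha,\beta)$ of eq. (9) is easy to evaluate numerically; a small sketch (function name hypothetical):

```python
import numpy as np

def euler_rate_matrix(alpha, beta):
    """R_K(alpha, beta) of eq. (9): maps the Euler-angle rates
    [alpha_dot, beta_dot, gamma_dot] onto the angular velocity omega_E."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([[1.0, 0.0,  sb],
                     [0.0,  ca, -sa * cb],
                     [0.0,  sa,  ca * cb]])
```

Its determinant equals $\cos\beta$, so the representation becomes singular at $\beta = \pm\pi/2$, which is the usual limitation of such minimal orientation parametrizations.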
By regarding eq. (7) it is obvious that the analytic derivation of the Jacobian is only available when the direct kinematic solution $f(q_a)$ is given in closed form. This is the case for classic open-chain robots, whereas for parallel robots the inverse Jacobian $J^{-1}$ is available (Merlet, 2000). For such mechanisms, the Jacobian is obtained by numerical inversion of its analytically available inverse (Merlet, 2000; Abdellatif et al., 2005a). The mobility of robots depends on the structure of the related Jacobian, which describes the velocity and also the force transmission between the operational space and the actuation space. It is well known that singularities occur at configurations where the Jacobian loses its rank $(\det(J) = 0)$. The study of singularities is omitted here; the interested reader is referred to the standard literature in this area (Gosselin & Angeles, 1990; Sciavicco & Siciliano, 2000; Merlet, 2000; Bonev, 2002).
It is now necessary to define further quantities to describe the motion of robotic manipulators. In analogy to the generalized coordinates, the generalized velocities are introduced (Meirovitch, 1970; Bremer, 1988) and are denoted by $s$. They always represent a linear combination of the time derivatives of the generalized coordinates $z$. The simplest case is when this combination corresponds to the identity:
$$ s = I\,\dot z \;\Longleftrightarrow\; s = \dot z $$
With the translational and rotational Jacobians of link $i$,
$$ J_{Ti} = \frac{\partial\,{}^{(i)}v_i}{\partial s} \quad \text{and} \quad J_{Ri} = \frac{\partial\,{}^{(i)}\omega_i}{\partial s}, $$
the recursions
$$ J_{Ti} = R_i^{i-1}\left( J_{T,i-1} - {}^{(i-1)}\tilde r_i^{\,i-1}\,J_{R,i-1} \right) + e_z\,\frac{\partial \dot d_i}{\partial s} \qquad (11) $$
$$ J_{Ri} = R_i^{i-1}\,J_{R,i-1} + e_z\,\frac{\partial \dot\vartheta_i}{\partial s} \qquad (12) $$
hold.
The next subsection demonstrates the efficiency and uniformity of the pro-
posed method for deriving the kinematics of a serial and a parallel industrial
robot.
By composing the transformations (1) along the chain we obtain the pose vector $x$. The Jacobian is also simple to obtain joint-wise:
$$ J(q) = \begin{bmatrix} J_1 & J_2 & \cdots & J_n \end{bmatrix} \quad \text{with} \quad J_i = \begin{bmatrix} {}^{(0)}e_{z_i} \times {}^{(0)}r_{E_i} \\ {}^{(0)}e_{z_i} \end{bmatrix} \qquad (13) $$
This can be deduced by using the MDH notation and the recursive formulae given above. Although the solution of the inverse kinematics is generally hard to obtain for open-chain mechanisms, industrial robots are characterized by simple geometry, such that a closed-form solution exists. This is the case here, where the last three revolute joint axes intersect at a common point (spherical wrist) (Sciavicco & Siciliano, 2000).
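Assembling the Jacobian columns of eq. (13) is mechanical once the joint axes and positions are expressed in the base frame; a sketch for revolute joints (names hypothetical):

```python
import numpy as np

def jacobian_column(z_i, r_i_to_ee):
    """One column of the geometric Jacobian (eq. 13) for a revolute joint:
    z_i is the joint axis, r_i_to_ee the vector from frame i to the
    end-effector, both expressed in the base frame."""
    return np.concatenate([np.cross(z_i, r_i_to_ee), z_i])

def geometric_jacobian(axes, vectors_to_ee):
    """Stack the per-joint columns into the 6 x n Jacobian J(q)."""
    return np.column_stack([jacobian_column(z, r)
                            for z, r in zip(axes, vectors_to_ee)])
```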
$$ {}^{(A_j)}r_{B_j} = [\,x_j\;\; y_j\;\; z_j\,]^T = -\,r_{A_j}^0 + r_E^0 + R_E^0\,{}^{(E)}r_{B_j}^E. \qquad (14) $$
$$ q_{a_j} = l_j = \sqrt{x_j^2 + y_j^2 + z_j^2} \qquad (15) $$
$$ \alpha_j = \arctan\!\left( \frac{x_j}{-z_j} \right) \qquad (16) $$
$$ \beta_j = \arctan\!\left( \frac{y_j}{r_j} \right), \qquad (17) $$
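Eqs. (15)-(17) implement the per-leg inverse kinematics in a few lines. A sketch (names hypothetical; $r_j$ is assumed here to be the projection $\sqrt{x_j^2 + z_j^2}$, which is not stated explicitly in the text):

```python
import numpy as np

def leg_inverse_kinematics(x, y, z):
    """Strut length l_j (eq. 15) and the two passive angles
    alpha_j (eq. 16) and beta_j (eq. 17) of leg j."""
    l = np.sqrt(x * x + y * y + z * z)
    alpha = np.arctan2(x, -z)                      # eq. (16)
    beta = np.arctan2(y, np.sqrt(x * x + z * z))   # eq. (17), r_j assumed
    return l, alpha, beta
```

Using `arctan2` rather than `arctan` keeps the quadrants unambiguous over the whole workspace.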
which are quite simple equations. The differential kinematics can be deduced analytically for the inverse problem by means of the inverse Jacobian:

Figure 1. Definition of the MDH coordinate systems and parameters for the KUKA KR15
Many methods are proposed in the literature for calculating the inverse Jacobian. We propose here the most straightforward way for our case. Every single chain $j$ contributes the $j$-th row of the inverse Jacobian:
$$ J_j^{-1} = \frac{\partial \dot q_{a_j}}{\partial v_{B_j}}\,\frac{\partial v_{B_j}}{\partial s} = \frac{\partial \dot q_{a_j}}{\partial v_{B_j}} \begin{bmatrix} \dfrac{\partial v_{B_j}}{\partial \dot r_E} & \dfrac{\partial v_{B_j}}{\partial \omega_E} \end{bmatrix} = \frac{\partial \dot q_{a_j}}{\partial v_{B_j}} \begin{bmatrix} I & -\tilde r_{B_j}^E \end{bmatrix} \qquad (19) $$
$$ v_{B_j} = \dot r_E + \tilde\omega_E\,r_{B_j}^E = \dot r_E - \tilde r_{B_j}^E\,\omega_E \qquad (20) $$
By using the recursive laws given by eqs. (3)-(5), the complete inverse kinematics of the subchains can be solved, yielding the velocities and accelerations of each limb and, moreover, the functional relationship between $\dot q_{a_j}$ and $v_{B_j}$ that is needed for (19).
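For the common case of a strut actuated along its own axis, the factor $\partial \dot q_{a_j}/\partial v_{B_j}$ reduces to the transposed unit vector $u_j$ of the strut, and row $j$ of $J^{-1}$ follows directly from eq. (19). A sketch under that assumption (names hypothetical):

```python
import numpy as np

def inverse_jacobian_row(u_j, r_EB_j):
    """Row j of J^-1 per eq. (19), assuming dq_a_j/dv_Bj = u_j^T
    (prismatic strut along unit vector u_j):  u_j^T [I, -tilde(r_EBj)].
    Because tilde(r) is skew-symmetric, u^T(-tilde(r)) = (r x u)^T."""
    return np.concatenate([u_j, np.cross(r_EB_j, u_j)])
```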
$$ \begin{array}{c|cccc} i & \vartheta_i & d_i & a_i & \alpha_i \\ \hline 1 & \alpha_j - \pi/2 & 0 & 0 & \pi/2 \\ 2 & \beta_j - \pi/2 & 0 & 0 & -\pi/2 \\ 3 & 0 & l_j & 0 & -\pi/2 \end{array} $$
In conclusion, we retain that the formal differences between parallel and serial robots have to be taken into account. A unified approach can be given if the notions of minimal coordinates and velocities are kept in mind. The MDH notation provides the same procedure for solving the kinematics of both robot types. For parallel robots it is sufficient to formulate the constraint equations; thereafter the mechanism is separated into serial subchains that can be treated exactly like any other open-chain manipulator.
$$ Q_a = M_a(z)\,\ddot q_a + N(z, s) \qquad (22) $$
$$ Q_a = M_a(z)\,\ddot q_a + c_a(z, s) + g_a(z) \qquad (23) $$
The suggested approach is the Jourdainian principle of virtual power, which postulates the balance of power with respect to the forces in different coordinate spaces (Bremer, 1988). For instance, a power balance equation is obtained as
$$ \partial s^T \tau = \partial \dot q_a^T\,Q_a \;\Longleftrightarrow\; \tau = \left( \frac{\partial \dot q_a}{\partial s} \right)^{T} Q_a \qquad (25) $$
where $\tau$ is the vector of the generalized forces. Equation (25) means that the virtual power resulting in the space of generalized velocities is equal to the actuation power. The power balance can be applied to the rigid-body forces:
$$ Q_{a,\mathrm{rb}} = \left( \frac{\partial \dot q_a}{\partial s} \right)^{-T} \tau_{\mathrm{rb}} \qquad (26) $$
The generalized forces are computed as the summation of the contributions of all $N_K$ limbs:
$$ \tau_{\mathrm{rb}} = \sum_{i=1}^{N_K} \left[ \left( \frac{\partial v_{S_i}}{\partial s} \right)^{T} m_i\,a_{S_i} + \left( \frac{\partial \omega_i}{\partial s} \right)^{T} \left( I_i^{(S_i)}\dot\omega_i + \tilde\omega_i\,( I_i^{(S_i)}\omega_i ) \right) \right] \qquad (27) $$
Expressing all quantities in the body-fixed frames and using the first moments $s_i$ yields
$$ \tau_{\mathrm{rb}} = \sum_{i=1}^{N_K} \Bigg[ \underbrace{\left( \frac{\partial\,{}^{(i)}v_i}{\partial s} \right)^{T}}_{J_{Ti}^T} \left( m_i\,{}^{(i)}a_i + {}^{(i)}\dot{\tilde\omega}_i\,{}^{(i)}s_i + {}^{(i)}\tilde\omega_i\,{}^{(i)}\tilde\omega_i\,{}^{(i)}s_i \right) + \underbrace{\left( \frac{\partial\,{}^{(i)}\omega_i}{\partial s} \right)^{T}}_{J_{Ri}^T} \left( {}^{(i)}I_i^{(i)}\,{}^{(i)}\dot\omega_i + {}^{(i)}\tilde\omega_i\,( {}^{(i)}I_i^{(i)}\,{}^{(i)}\omega_i ) + {}^{(i)}\tilde s_i\,{}^{(i)}a_i \right) \Bigg] \qquad (28) $$
For serial robots ($s = \dot q_a$) this projection reduces to the identity,
$$ \left( \frac{\partial \dot q_a}{\partial s} \right)^{-T} = \left( \frac{\partial \dot q_a}{\partial \dot q_a} \right)^{-T} = I, $$
whereas in the general case
$$ Q_{a,\mathrm{rb}} = \left( \frac{\partial \dot q_a}{\partial s} \right)^{-T} \tau_{\mathrm{rb}} = J^{T}\tau_{\mathrm{rb}}. $$
By transforming the dynamics into form (24), two main advantages result. On the one hand, regrouping the parameters further reduces the calculation burden; on the other hand, one obtains the set of identifiable parameters of the robotic system. We focus now on the dynamic parameters represented by $m_i$, $s_i$ and $I_i$. To regroup such parameters, the definition of two new operators $(.)^{*}$ and $(.)^{\diamond}$ is required first:
$$ \omega_i^{*}\,I_i^{\diamond} := {}^{(i)}I_i^{(i)}\,{}^{(i)}\omega_i \qquad (29) $$
with
$$ \omega_i^{*} := \begin{bmatrix} \omega_{i_x} & \omega_{i_y} & \omega_{i_z} & 0 & 0 & 0 \\ 0 & \omega_{i_x} & 0 & \omega_{i_y} & \omega_{i_z} & 0 \\ 0 & 0 & \omega_{i_x} & 0 & \omega_{i_y} & \omega_{i_z} \end{bmatrix} \quad \text{and} \quad I_i^{\diamond} = \begin{bmatrix} I_{i_{xx}} & I_{i_{xy}} & I_{i_{xz}} & I_{i_{yy}} & I_{i_{yz}} & I_{i_{zz}} \end{bmatrix}^{T} $$
$$ \tau_{\mathrm{rb}} = \sum_{i=1}^{N_K} \underbrace{\begin{bmatrix} J_{Ti}^T & J_{Ri}^T \end{bmatrix} \begin{bmatrix} 0 & {}^{(i)}\dot{\tilde\omega}_i + {}^{(i)}\tilde\omega_i\,{}^{(i)}\tilde\omega_i & {}^{(i)}a_i \\ {}^{(i)}\dot\omega_i^{*} + {}^{(i)}\tilde\omega_i\,{}^{(i)}\omega_i^{*} & -\,{}^{(i)}\tilde a_i & 0 \end{bmatrix}}_{H_i} \underbrace{\begin{bmatrix} I_i^{\diamond} \\ s_i \\ m_i \end{bmatrix}}_{p_{\mathrm{rb},i}} = \begin{bmatrix} H_1 & H_2 & \cdots & H_{N_K} \end{bmatrix} \begin{bmatrix} p_1^T & p_2^T & \cdots & p_{N_K}^T \end{bmatrix}^{T} \qquad (30) $$
which is now linear with respect to the parameter set $p_{\mathrm{rb}}$ that groups all physical parameters of all limbs of the robot. Since each limb contributes 1 mass parameter, 3 first-moment parameters and 6 inertia-tensor elements, we obtain $10\,N_K$ parameters in total. By exploiting the chain structure, a recursive computation
$$ H_i = H_{i-1}\,L_i + K_i \qquad (31) $$
can be achieved. The matrices $L_i$ and $K_i$ are given in (Khalil & Dombre, 2002; Grotjahn & Heimann, 2000). The first step consists in eliminating all parameters $p_{\mathrm{rb},j}$ that correspond to a zero row $h_j$ of $H$, since they do not contribute to the dynamics. The remaining parameters are then regrouped to eliminate all linear dependencies by investigating $H$. If the contribution of a parameter $p_{\mathrm{rb},j}$ depends linearly on the contributions of some other parameters $p_{\mathrm{rb},1j}, \ldots, p_{\mathrm{rb},kj}$, i.e.
$$ h_j = \sum_{l=1}^{k} a_{lj}\,h_{lj}, \qquad (32) $$
then $p_{\mathrm{rb},j}$ can be set to zero and the regrouped parameters $p_{\mathrm{rb},lj,\mathrm{new}}^{*}$ can be obtained by
$$ p_{\mathrm{rb},lj,\mathrm{new}}^{*} = p_{\mathrm{rb},lj} + a_{lj}\,p_{\mathrm{rb},j} \qquad (33) $$
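A numerical counterpart of this symbolic regrouping is often used as a cross-check: regressor matrices evaluated at random samples are stacked, parameters with zero contribution are detected as zero columns, and the number of identifiable base parameters follows from the rank. A sketch (names hypothetical):

```python
import numpy as np

def base_parameter_info(H_samples, tol=1e-9):
    """Stack regressor matrices H evaluated at sampled states, report
    zero columns (parameters without dynamic effect) and the number of
    identifiable base parameters (rank of the stacked regressor)."""
    H = np.vstack(H_samples)
    zero_cols = [j for j in range(H.shape[1])
                 if np.linalg.norm(H[:, j]) < tol]
    n_base = np.linalg.matrix_rank(H, tol)
    return zero_cols, n_base
```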
The recursive relationship given in (31) can be used for parameter reduction. If one column or a linear combination of columns of $L_i$ is constant with respect to the joint variable and the corresponding columns of $K_i$ are zero columns, the parameters can be regrouped. This leads to the rules formulated in (Gautier & Khalil, 1990; Khalil & Dombre, 2002) and in (Grotjahn & Heimann, 2000). The use of the MDH notation is a benefit for applying the reduction rules in an analytical and straightforward manner. For revolute joints with variable $\vartheta_i$, the other MDH parameters are constant. This means that the 9th, the 10th and the sum of the 1st and 4th columns of $L_i$ and $K_i$ comply with the mentioned conditions. Thus, the corresponding parameters $I_{i_{yy}}$, $s_{i_x}$ and $m_i$ can be grouped with the parameters of the antecedent joint $i-1$. For prismatic joints, however, the moments of inertia can be added to the carrying antecedent joint, because the orientation between both links remains constant. For a detailed insight, it is recommended to consult (Khalil & Dombre, 2002) and (Grotjahn & Heimann, 2000).
In the case of parallel robots, where the end-effector platform closes the kinematic loops, further parameter reduction is possible. The velocities of the platform joint points $B_j$ and those of the terminal MDH frames of the respective legs are the same. The masses can therefore be grouped into the inertial parameters of the EE-platform according to Steiner's theorem (Khalil & Guegan, 2004; Abdellatif et al., 2005a).
For further accuracy enhancement of the inverse dynamics models, the effects of friction and motor inertia should be considered. Especially the first category is important for control applications (Grotjahn & Heimann, 2002; Armstrong-Hélouvry, 1991; Bona & Indri, 2005). The general case is considered, where friction is modeled in all active as well as in all passive joints. The friction is given in the joint space $Q$, usually as a nonlinear characteristic $Q_{f_i}(\dot q_i) = f(\dot q_i)$ with respect to the joint velocity, e.g. a Coulomb-viscous characteristic
$$ Q_{f_i}(\dot q_i) = r_{0,i}\,\mathrm{sign}(\dot q_i) + r_{1,i}\,\dot q_i \qquad (34) $$
The joint losses have to be mapped into the actuation (or control) space. Uniformly with the rigid-body dynamics, the Jourdainian principle of virtual power is recommended. The power balance for friction can be derived as
$$ Q_{a,f} = \left( \frac{\partial \dot q}{\partial \dot q_a} \right)^{T} Q_f = J^{T} \left( \frac{\partial \dot q}{\partial s} \right)^{T} Q_f \qquad (35) $$
That means: the friction dissipation power in all joints (passive and active) has to be overcome by an equivalent counteracting actuation power. From the latter equation it is clear that classic open-chain robots are the restrictive special case in which the joint Jacobian $\partial \dot q / \partial \dot q_a$ is equal to the identity matrix. In the more general case of parallel mechanisms, friction in the passive joints should not be neglected, as is almost always assumed in control applications for such robots (Ting et al., 2004; Cheng et al., 2003). The compensation of friction is simpler and more accurate for serial robots, since it can be achieved directly in all actuated joints. For the parallel counterpart, the compensation of the physical friction $Q_f$ is only possible indirectly via the projected forces $Q_{a,f}$ to account for the passive joints. Since the latter are usually not equipped with any sensors, friction compensation in parallel robots is less accurate, which is one of the typical drawbacks of such robotic systems.
By using friction models that are linear with respect to the friction coefficients, such as (34), it is straightforward to derive a linear form of (35). The following relationship results:
$$ Q_{a,f} = A_f(z, s)\,p_f \qquad (36) $$
Analogously to the rigid-body case, the friction coefficients can be regrouped to identifiable ones (Gautier & Khalil, 1990). The following section discusses the experimental identification of the parameters and the implementation of the identified inverse models in control.
$$ \begin{array}{c|c|c} & \text{KUKA KR15} & \text{PaLiDA} \\ \hline p_9 & s_{3x} - l_3 (m_4 + m_5 + m_6) & s_{E_z} + m_3 \sum_{j=1}^{6} r_{B_j z} \\ p_{10} & s_{3y} - s_{4z} - l_4 (m_4 + m_5 + m_6) & m_E + 6 m_3 \\ p_{11} & I_{4xx} - I_{4yy} + I_{5yy} & - \\ p_{12} & I_{4zz} + I_{5yy} & - \\ p_{13} & s_{4y} & - \\ p_{14} & I_{5xx} - I_{5yy} + I_{6xx} & - \\ p_{15} & I_{5zz} + I_{6xx} & - \\ p_{16} & s_{5y} - s_{6z} & - \\ p_{17} & I_{6zz} & - \\ p_{18} & I_{M_3} & - \\ p_{19} & I_{M_4} & - \\ p_{20} & I_{M_5} & - \\ p_{21} & I_{M_6} & - \end{array} $$
Table 1. Minimal rigid-body parameter set for the 6-DOF robots KUKA KR15 and PaLiDA.
A main result retained from the last section is that the inverse dynamics of robotic manipulators can be written as
$$ Q_a = A(z, s, \dot s)\,p $$
with the vector $p$ containing the minimal rigid-body parameters and friction coefficients. With such an LP (linear in the parameters) structure, computationally efficient linear estimators can be applied for identification. The formulation of the related problem is derived for an experiment producing $N$ data vectors as follows:
$$ \underbrace{\begin{bmatrix} Q_{a_1} \\ \vdots \\ Q_{a_N} \end{bmatrix}}_{\Gamma} = \underbrace{\begin{bmatrix} A(z_1, s_1, \dot s_1) \\ \vdots \\ A(z_N, s_N, \dot s_N) \end{bmatrix}}_{\Psi} p + \underbrace{\begin{bmatrix} e_1 \\ \vdots \\ e_N \end{bmatrix}}_{\eta} \qquad (37) $$
The weighted and the ordinary least-squares estimates are then
$$ \hat p_{\mathrm{WLS}} = \left( \Psi^T W^{-1} \Psi \right)^{-1} \Psi^T W^{-1} \Gamma \qquad (38) $$
$$ \hat p_{\mathrm{LS}} = \left( \Psi^T \Psi \right)^{-1} \Psi^T \Gamma \qquad (39) $$
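Eqs. (38) and (39) are one-liners with a linear-algebra library; a sketch (function name hypothetical, NumPy assumed):

```python
import numpy as np

def wls_estimate(Psi, Gamma, W=None):
    """Eqs. (38)/(39): (weighted) least-squares parameter estimate.
    W is the measurement-noise covariance; W = None gives ordinary LS."""
    if W is None:
        return np.linalg.lstsq(Psi, Gamma, rcond=None)[0]
    Wi = np.linalg.inv(W)
    return np.linalg.solve(Psi.T @ Wi @ Psi, Psi.T @ Wi @ Gamma)
```

For the unweighted case, `lstsq` is preferable to forming the normal equations $\Psi^T\Psi$ explicitly, since it is numerically better conditioned.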
The quality of the estimation results depends on the so-called experiment design, which defines how and which data have to be collected to guarantee good estimates. Here, two main categories exist: the direct and the indirect approach. The first one suggests collecting the data along one single trajectory that excites all parameters. The second approach proposes collecting separate measurements at single configurations within the workspace. Each approach has characteristic advantages and drawbacks that also depend on the regarded robot type. It can be stated generally that the identification procedure for parallel robots is more difficult, because the necessary information about the minimal coordinates, velocities and accelerations cannot be measured directly (Abdellatif et al., 2005c). A systematic comparison of both approaches is given in Table 2. The recommended approach depends on the real setup and equipment. If the available control system allows the execution of arbitrarily complex trajectories, it is recommended to measure optimized trajectories. Otherwise, long measuring processes have to be accepted if the indirect approach is chosen. E.g. the identification of the friction models for the KUKA KR15 and for PaLiDA required 45 min and 120 min of measurement time, respectively.
The validation of the identified dynamics models is achieved by considering reference trajectories that were not used for identification. For the industrial KUKA KR15 robot, the ISO-9283 standard trajectory is used for validation. The measured torques are compared with those calculated by the identified model. The results are illustrated in Figure 3.

Figure 3. Accuracy validation of the identified model of the KUKA KR15: torques for the ISO-9283 trajectory

Unfortunately, standard benchmarks are not yet defined for parallel robots, and the ISO-9283 trajectory violates the workspace of the robot studied here. Thus, an inclined circular motion in the middle of the workspace is chosen as a benchmark test for validating the accuracy of the identified model for PaLiDA. The results are shown in Figure 4.
For both robot types, very good agreement between model output and experiment is noticeable. Of course, some deviations remain, since complete agreement between model and reality is practically impossible. Nevertheless, these results are an excellent basis for the compensation of the nonlinearities of both robots. For reasons of space, the values of the identified parameters of the studied systems are not listed here. We refer instead to former publications with deeper insight and more discussion of the dynamics identification of robots (Abdellatif et al., 2005b; Abdellatif et al., 2005d; Grotjahn et al., 2004; Grotjahn et al., 2001).
The computed force/torque feedforward control is one of the most classic approaches in robotics (Sciavicco & Siciliano, 2000; Khalil & Dombre, 2002). A uniform scheme can be given according to the formalism defined in the previous sections and is depicted in Figure 5. The global input consists of the minimal coordinates $z$ with an explicit or implicit dependency on time $t$, which enables the derivation of the velocities and accelerations $s$ and $\dot s$. Only the nonlinear block $(z \to q_{a,d})$ depends on the robot's type. It consists of the trivial identity transformation for serial robots and of the inverse kinematics (14) for parallel manipulators. For both systems, the desired forces (or torques) $Q_{a,d}$ can be derived solely from $z$, $s$ and $\dot s$. Of course, such an approach depends on the presence of a motor-current interface to achieve the direct feedforward of the calculated forces or torques.
Figure 8. Path deviations of the KUKA KR15: (a) ISO-9283 trajectory; (b) single motion of the second joint.
Although the linear models disregard nonlinearities, they can be used for their compensation by taking actual path deviations into account. The advantage is that arbitrary effects can be compensated, even effects that are not included in the complex nonlinear model. This is done by an 'iterative learning' algorithm that is presented in the following. Alternatively, the 'training' of a feedforward joint controller is explained. Both approaches are based on the actuation variables. Thus, their implementation is similar for parallel and serial robots. Learning control is based on the idea that measured path deviations can be used to adjust the trajectory transmitted to the controller, so that the actual trajectory comes closer to the originally desired one (Moore, 1999; Longman, 2000). Thereby, the path accuracy is improved by iterative correction (see Figure 9).
$$ x(k+1) = A(k)\,x(k) + B(k)\,u(k) + w_1(k) $$
$$ y(k) = C(k)\,x(k) + w_2(k) \qquad (40) $$
with $u$ being the input and $y$ the output. It is assumed that $w_1$ represents a deterministic disturbance that reappears in every repetition and that $w_2$ is the measurement disturbance. The aim of learning is to change the command input in every trial $j$ using the learning control law
$$ u_{j+1}(k) = f_L\!\left( u_j(k),\, y_j(k),\, y_d(k) \right) \qquad (41) $$
$$ u_{j+1} = u_j + L\,e_j \quad \Longleftrightarrow \quad q_{a,j+1} = q_{a,j} + L\left( q_{a,j} - q_{a,d} \right) \qquad (42) $$
The design of the learning gain matrix $L$ has to achieve the desired properties. It is simple to derive the iterative error dynamics as
$$ e_{j+1} = (I - P L)\,e_j \qquad (43) $$
where $P$ is the lower-triangular Toeplitz matrix built from the samples $g(k)$:
$$ P = \begin{bmatrix} g(1) & & & \\ g(2) & g(1) & & \\ \vdots & \vdots & \ddots & \\ g(N) & g(N-1) & \cdots & g(1) \end{bmatrix} \qquad (44) $$
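The learning law (42) and the contraction (43) can be exercised on a small lifted-system model. A sketch (names hypothetical; the sign convention $e_j = y_d - P u_j$ is assumed here):

```python
import numpy as np

def ilc(P, L, y_d, n_trials=25):
    """Iterative learning per eqs. (42)-(43): u_{j+1} = u_j + L e_j.
    With e_j = y_d - P u_j the error obeys e_{j+1} = (I - P L) e_j,
    so it contracts whenever the spectral radius of (I - P L) is < 1."""
    u = np.zeros_like(y_d)
    for _ in range(n_trials):
        e = y_d - P @ u
        u = u + L @ e
    return u, y_d - P @ u
```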
$$ q_a(k) = q_{a,d}(k) + \sum_{l=m_t}^{m} r(l)\left( q_{a,d}(k+l) - q_{a,d}(k) \right) \qquad (45) $$
$$ q_a(k) = q_{a,d}(k) + \sum_{l=m_t}^{m} r(l)\left( q_{a,d}(k+l) - q_{a,d}(k) \right) + \sum_{l=p_t}^{p} s(l)\,\mathrm{sign}\!\left( q_{a,d}(k+l) - q_{a,d}(k) \right) \qquad (46) $$
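The linear part of the trained correction (45) is a non-causal weighted sum over future and past desired samples. A discrete sketch (names and the end-of-trajectory clamping are hypothetical implementation choices):

```python
import numpy as np

def trained_feedforward(q_d, r, offsets):
    """Eq. (45): correct the commanded trajectory with weighted
    differences to desired samples at the given offsets l; offsets l > 0
    use future samples, which makes the filter non-causal."""
    q_cmd = np.array(q_d, dtype=float)
    N = len(q_d)
    for w, l in zip(r, offsets):
        for k in range(N):
            kl = min(max(k + l, 0), N - 1)  # clamp at the trajectory ends
            q_cmd[k] += w * (q_d[kl] - q_d[k])
    return q_cmd
```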
Figure 11. Behaviour of the first axis of the manutec-r15 for a vertical circle: (a) learned and trained corrections; (b) comparison of resulting tracking errors.
Although the path velocity is only 0.02 m/s, the joint velocities and accelerations are quite high. Therefore, the couplings between the different links have a strong impact. These effects cannot be compensated by the decoupled 'trained' feedforward joint controllers.
Figure 12. Comparison of Cartesian path errors for a vertical circle of the manutec-r15

In analogy to the case of serial robots, the compensation techniques were implemented on the parallel manipulator PaLiDA. It is important to note that the feedforward compensation techniques are all based on the actuation variables, for serial as well as for parallel mechanisms. The procedure of experimental investigation and evaluation is very similar to that of the previous section. Figure 14 shows the improvement of the tracking for the robot considered here. We compare the performances of simple single-joint control, feedforward control and learning control for a circular and a quadratic motion. The range of investigated velocities is here much higher than in the case of serial robots. The average end-effector velocity is equal to 1.5 m/s and 2 m/s for the circular and the quadratic motion, respectively.

Figure 14. Comparison of Cartesian path errors for the parallel robot PaLiDA. Left: circular motion; right: quadratic motion.
6. Conclusions
7. References
Abdellatif, H.; Grotjahn, M. & Heimann, B. (2005a). High efficient dynamics calcula-
tion approach for computed-force control of robots with parallel structures, Pro-
ceedings of 44th IEEE Conference on Decision and Control and the 2005 European
Control Conference (CDC-ECC05), pp. 2024-2029, Seville, 2005.
Abdellatif, H.; Heimann, B. & Grotjahn, M. (2005b). Statistical approach for bias-free
identification of a parallel manipulator affected with high measurement noise,
Model-Based Control for Industrial Robots: Uniform Approaches… 555
Proceedings of the 44th IEEE Conference on Decision and Control (CDC-ECC05), pp.
3357-3362, Seville, 2005.
Abdellatif, H.; Heimann, B. & Holz, C. (2005c). Time-effective direct dynamics identi-
fication of parallel manipulators for model-based feedforward control, Proceed-
ings of the 2005 IEEE/ASME International Conference on Advanced Intelligent Mecha-
tronics (AIM2005), pp. 777-782, Monterey, 2005.
Abdellatif, H.; Heimann, B. & Holz, C. (2005d). Applying efficient computation of the
mass matrix for decoupling control of complex parallel manipulators, Preprints
of the 16th IFAC World Congress, Prague, 2005.
Angeles, J. (2003). Fundamentals of Robotic Mechanical Systems, Springer, New York,
Berlin, Heidelberg, 2003.
Armstrong-Hélouvry, B. (1991). Control of Machines with Friction, Kluwer Academic
Publishers, Boston.
Bremer, H. (1988). Dynamik und Regelung mechanischer Systeme, Teubner, Stuttgart.
Bona, B. & Indri, M. (2005). Friction compensation in robotics: an overview, Proceed-
ings of the 44th IEEE Conference on Decision and Control and the 2005 European Con-
trol Conference (CDC-ECC05), pp. 4360-4367, Seville, 2005.
Bonev, I. (2002). Geometric analysis of parallel mechanisms, Faculté des études supérieu-
res de l’Université Laval.
Cheng, H.; Kuen, Y. & Li, Z. (2003). Dynamics and control of redundantly actuated
parallel manipulators. IEEE/ASME Transactions on Mechatronics, 8, 4, (2003) 483-
491.
Fisette, P.; Raucent, B. & Samin, J. C. (1996). Minimal Dynamic Characterization of
Tree-Like Multibody Systems. Nonlinear Dynamics, 9, 1-2 (1996) 165-184.
Gautier, M. & Khalil, W. (1990). Direct calculation of minimum set of inertial parame-
ters of serial robots. IEEE Transactions on Robotics and Automation, 6, 3 (1990) 368-
373.
Gosselin, C. & Angeles, J. (1990). Singularity analysis of closed-loop kinematic chains.
IEEE Transactions on Robotics and Automation, 6, 3, (1990) 281-290.
Grotjahn, M. & Heimann, B. (2000). Determination of dynamic parameters of robots
by base sensor measurements, Proceedings of the sixth IFAC Symposium on Robot
Control (SYROCO), Vienna, 2000.
Grotjahn, M.; Daemi, M. & Heimann, B. (2001). Friction and rigid body identification
of robot dynamics. International Journal of Solids and Structures, 38, (2001) 1889-
1902.
Grotjahn, M. & Heimann, B. (2002). Model-based feedforward control in industrial
robotics. International Journal of Robotics Research, 21, 1, (2002) 99-114.
Grotjahn, M.; Heimann, B. & Abdellatif, H. (2004). Identification of friction and rigid-
body dynamics of parallel kinematic structures for model-based control. Multi-
body System Dynamics, 11, 3, (2004) 273-294.
Khalil, W. & Kleinfinger, J. (1986). A new geometric notation for open and closed-loop
robots, Proceedings of the 1986 IEEE International Conference on Robotics and Auto-
mation, pp. 1174-1179, San Francisco, 1986.
Khalil, W. & Kleinfinger, J.-F. (1987). Minimum Operations and Minimum Parameters
of the Dynamic Models of Tree Structure Robots. IEEE Transactions on Robotics
and Automation, 3, 6 (1987) 517-526.
Khalil, W. & Dombre, E. (2002). Modelling, Identification and Control of Robots, Hermes,
London
Khalil, W. & Guegan, S. D. (2004). Inverse and direct dynamics modeling of Gough-
Stewart robots. IEEE Transactions on Robotics, 20, 4, (2004) 754-762.
Lange, F. & Hirzinger, G. (1994). Learning to improve the path accuracy of position
controlled robots, Proceedings of the Conference on Intelligent Robots and Systems,
pp. 494-501, Munich, 1994.
Lange, F. & Hirzinger, G. (1996). Learning of a controller for non-recurring fast
movements. Advanced Robotics, 10, 2 (1996) 229-244.
Ljung, L. (1999). System Identification: Theory for the User, Prentice-Hall, New Jersey.
Longman, R. W. (2000). Iterative learning control and repetitive learning control for
engineering practice. International Journal of Control, 73, 10, (2000) 930-954.
Meirovitch, L. (1970). Methods of Analytical Dynamics, McGraw-Hill, New York.
Merlet, J.-P. (2000). Parallel Robots, Kluwer Academic Publishers, Dordrecht.
Moore, K. L. (1999). Iterative learning control: An expository overview. Applied and
Computational Control, Signals and Circuits, 1, (1999) 151-214.
Norrlöf, M. & Gunnarsson, S. (2002). Experimental Results of Some Classical Iterative
Learning Control Algorithms. IEEE Transactions on Robotics and Automation, 18,
4 (2002) 636-641.
Sciavicco, L. & Siciliano, B. (2000). Modeling and Control of Robot Manipulators,
Springer, London.
Swevers, J.; Ganseman, C.; Tükel, D.; Schutter, J. d. & Brussel, H. v. (1997). Optimal
robot excitation and identification. IEEE Transactions on Robotics and Automation,
13, 5, (1997) 730-740.
Ting, Y.; Chen, Y.-S. & Jar, H.-C. (2004). Modeling and control for a Gough-Stewart
platform CNC machine. Journal of Robotic Systems, 21, 11, (2004) 609-623.
Tsai, L.-W. (1999). Robot Analysis, Wiley-Interscience, New York.
Walter, E. & Pronzato, L. (1997). Identification of Parametric Models from Experimental
Data, Springer, London.
20
Parallel Manipulators with Lower Mobility
Raffaele Di Gregorio
1. Introduction
558 Parallel Manipulators with Lower Mobility
(a.1) linear translation subgroup, {T(u)}, that collects all the translations
parallel to the unit vector u. As many {T(u)} as unit vectors, u, can be
defined. The identity subgroup {E} is included in all the {T(u)}. A pris-
matic pair (hereafter denoted with P) with sliding direction parallel to u
physically generates the motions of {T(u)}.
(a.2) revolute subgroup, {R(O, u)}, that collects all the rotations around an
axis (rotation axis) passing through point O and parallel to the unit vec-
tor u. As many {R(O, u)} as rotation axes, (O, u), can be defined. The
identity subgroup {E} is included in all the {R(O, u)}. A revolute pair
(hereafter denoted with R) with rotation axis (O, u) physically generates
the motions of {R(O, u)}.
(a.3) helical subgroup, {H(O, u, p)}, that collects all the helical motions
with axis (O, u) and finite pitch p that is different from zero and con-
stant. As many {H(O, u, p)} as sets of helix parameters, (O, u, p), can be
defined. The identity subgroup {E} is included in all the {H(O, u, p)}. A
helical pair (hereafter denoted with H) with helix parameters (O, u, p)
physically generates the motions of {H(O, u, p)}.
(b.1) planar translation subgroup, {T(u1, u2)}, that collects all the translations parallel to a plane perpendicular to u1×u2, where u1 and u2 are two orthogonal unit vectors. As many {T(u1, u2)} as unit vectors u1×u2 can be defined. The identity subgroup {E} and all the linear translation subgroups {T(v)} with v = au1 + √(1−a²) u2 are included in {T(u1, u2)}. Two prismatic pairs in series, whose sliding directions are respectively parallel to two independent vectors that are linear combinations of u1 and u2, physically generate the motions of {T(u1, u2)}.
(b.2) cylindrical subgroup, {C(O, u)}, that collects all the motions obtained
by combining a rotation around a rotation axis (O, u) and a translation
parallel to the unit vector u. As many {C(O, u)} as (O, u) axes can be de-
fined. The subgroups {E}, {T(u)}, {R(O, u)} and {H(O, u, p)} are all in-
cluded in {C(O, u)}. A cylindrical pair (hereafter denoted with C) or,
which is the same, a revolute pair with rotation axis (O, u) in series with
a prismatic pair with sliding direction parallel to u physically generate
the motions of {C(O, u)}.
2. Classification of the Parallel Manipulators with Lower Mobility 559
(c.1) spatial translation subgroup, {T}, that collects all the spatial transla-
tions. Only one subgroup {T} can be defined. The identity subgroup {E},
all the {T(u)} subgroups and all the {T(u1, u2)} subgroups are included in
{T}. Three prismatic pairs in series whose sliding directions are respec-
tively parallel to three independent unit vectors, u1, u2 and u3, physi-
cally generate the motions of {T}.
(c.2) spherical subgroup, {S(O)}, that collects all the spherical motions with
center O. As many {S(O)} as O points can be defined. The identity sub-
group {E} and all the {R(O, u)} subgroups are included in {S(O)}. A
spherical pair (hereafter denoted with S) or, which is the same, three
revolute pairs in series with rotation axes that intersect one another in O
physically generate the motions of {S(O)}.
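The closure that makes {S(O)} a subgroup can be illustrated numerically: composing two rotations whose axes pass through O yields another proper rotation, i.e. again a spherical motion about O. A small sketch (Rodrigues formula, names hypothetical):

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis through O (Rodrigues formula)."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# composing two elements of {S(O)} gives another element of SO(3),
# i.e. the product is again a rotation about some axis through O
R = rot([1.0, 0.0, 0.0], 0.7) @ rot([0.0, 1.0, 1.0], -1.2)
```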
(c.3) planar subgroup, {G(u1, u2)}, that collects all the planar motions with motion plane perpendicular to u1×u2, where u1 and u2 are two orthogonal unit vectors. As many {G(u1, u2)} as unit vectors u1×u2 can be defined. The subgroups {E}, {T(u1, u2)}, {R(O, u1×u2)} and {T(v)} with v = au1 + √(1−a²) u2 are included in {G(u1, u2)}. A PPR kinematic chain, where the sliding directions of the two prismatic pairs are respectively parallel to two independent vectors that are linear combinations of u1 and u2 and the revolute-pair axis is orthogonal both to u1 and to u2, physically generates the motions of {G(u1, u2)}.
(c.4) pseudo-planar subgroup, {Y(u1, u2, p)}, that collects all the motions obtained by combining a planar translation belonging to {T(u1, u2)} with a helical motion belonging to {H(O, u1×u2, p)}. As many {Y(u1, u2, p)} as sets of parameters (u1×u2, p) can be defined. The subgroups {E}, {T(u1, u2)}, {H(O, u1×u2, p)} and {T(v)} with v = au1 + √(1−a²) u2 are included in {Y(u1, u2, p)}. A RRH kinematic chain where the axes of the two revolute pairs and the helical pair's axis are all parallel to one another and orthogonal both to u1 and to u2 physically generates the motions of {Y(u1, u2, p)}.
(d.4) Schoenflies subgroup, {X(u1, u2)}, that collects all the motions obtained by combining a planar translation belonging to {T(u1, u2)} with a cylindrical motion belonging to {C(O, u1×u2)}. As many {X(u1, u2)} as unit vectors u1×u2 can be defined. The subgroups {E}, {T}, {G(u1, u2)}, {Y(u1, u2, p)}, {T(u1, u2)}, {C(O, u1×u2)}, {H(O, u1×u2, p)} and {T(v)} with v = au1 + √(1−a²) u2 are included in {X(u1, u2)}. A RRC kinematic chain where the axes of the two revolute pairs and the cylindrical pair's axis are all parallel to one another and orthogonal both to u1 and to u2 physically generates the motions of {X(u1, u2)}.
According to this (Rico et al. 2006), the set of the LM-Ms can be separated into
two subsets: (i) the subset of the pure-motion LM-Ms and (ii) the subset of the
mixed-motion LM-Ms. The first subset collects all the LM-Ms whose end effec-
tor exhibits motions that belong to only one out of the ten motion subgroups of
{D}, whereas the second one collects all the other LM-Ms.
The pure-motion LM-Ms can be further divided into ten pure-motion subsets:
one for each pure motion identified by the ten subgroups of {D}. In (Hervé
1978, 1999), a kinematic chain is called mechanical bond when it connects one
rigid body (end effector) to another (frame) so that the relative motion between
end effector and frame is constrained. A mechanical bond is called mechanical
generator when all the allowed relative motions between end effector and
frame belong to only one of the ten subgroups of {D}.
The nature of an LM-M can be identified by analysing its workspace, {W} (the
workspace is the connected set of poses (positions and orientations) the end ef-
fector can assume without disassembling the LM-M). In fact, if any couple of
poses belonging to {W} defines an end-effector motion that belongs to the same
motion subgroup of {D}, then the LM-M is a pure-motion LM-M, otherwise it
is a mixed-motion LM-M. Hereafter, if a set of motions, {M}, only collects the
motions identified by all the couples of poses that belong to the same con-
nected set of poses, {P}, then it will be said that “{P} corresponds to {M} and
vice versa” (it is worth noting that different set of poses may correspond to the
same set of motions).
When serial manipulators with lower mobility (LM-SMs) are considered, the
end-effector motion is obtained by composing the motions of all the manipula-
tor’s joints (Hervé 1978). Therefore, a pure-motion LM-SM can be obtained
only by using joints whose motions belong to the same motion subgroup.
Moreover, the sum of the degrees of freedom (dofs) of the joints must be equal
to the dimension of that motion subgroup.
When a parallel manipulator with lower mobility (LM-PM) is considered, the
identification of the set of motions, {L}, the end effector can perform is a bit
more complex. From a structural point of view, a parallel manipulator is a ma-
chine where the end effector is connected to the frame through a number, n, of
kinematic chains (limbs) acting in parallel. Therefore, in an LM-PM with n
limbs, both {L} and {W} are subsets of the common intersection of the n sets,
respectively, of motions, {Lj}, j=1,…,n, and of poses, {Wj}, j=1,…,n, the j-th limb
would allow the end effector if it were the only kinematic chain joining end
effector and frame. If all the {Wj} are restricted to the end effector poses that
can be reached without disassembling the LM-PM and all the corresponding
{Lj} are accordingly restricted (hereafter, if it is not differently specified, this re-
striction will be implicitly assumed), then the following relationships hold:
{W} = ∩_{j=1,…,n} {Wj}    (1)

{L} = ∩_{j=1,…,n} {Lj}    (2)
This fact discloses a wide scenario of possible machine architectures even for
pure-motion LM-PMs since conditions (1) and (2), where {L} is a subset of a
motion subgroup, can be satisfied by using, in the limbs, joints of any type (not
necessarily belonging to the same motion subgroup), and, as limbs, kinematic chains with a number of dofs (limb connectivity) greater than the manipulator's dofs.
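Conditions (1) and (2) can be checked numerically by representing each limb's allowed instantaneous motions as the column space of a 6×k twist matrix and intersecting those subspaces. A minimal sketch (the (ω; v) twist ordering and the two limb bases below are illustrative assumptions, not taken from the text):

```python
import numpy as np

def subspace_basis(T, tol=1e-9):
    """Orthonormal basis of the column space of a 6 x k twist matrix T."""
    U, s, _ = np.linalg.svd(np.asarray(T, float))
    return U[:, :np.sum(s > tol)]

def intersect_twist_spaces(limb_twists, tol=1e-9):
    """Common twist space of all limbs: null space of the stacked
    projectors onto each limb's orthogonal complement."""
    rows = []
    for T in limb_twists:
        B = subspace_basis(T, tol)
        rows.append(np.eye(6) - B @ B.T)   # projector onto the complement
    _, s, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[s <= tol].T                  # columns span the intersection

# Twists ordered as (omega; v). Hypothetical example: a Schoenflies-type limb
# (rot z + three translations) and a limb allowing rot x + three translations;
# their intersection is the 3-dof pure-translation subgroup {T}.
rot_x = [1, 0, 0, 0, 0, 0]
rot_z = [0, 0, 1, 0, 0, 0]
t_x   = [0, 0, 0, 1, 0, 0]
t_y   = [0, 0, 0, 0, 1, 0]
t_z   = [0, 0, 0, 0, 0, 1]
limb1 = np.array([rot_z, t_x, t_y, t_z]).T
limb2 = np.array([rot_x, t_x, t_y, t_z]).T
common = intersect_twist_spaces([limb1, limb2])
```

The resulting basis has three columns with zero angular part, i.e. the end effector retains only translations.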
Each subset of LM-PMs can be further separated into two classes: the class of
the overconstrained manipulators and the class of the non-overconstrained
manipulators. Overconstrained manipulators are machines whose dof number
is higher than the one computed as if the constraints due to the joints were in-
dependent. An overconstrained LM-PM can be obtained by using, as limbs, a number, n, of serial kinematic chains with the same number of dofs as the LM-PM, provided that n is equal to the LM-PM's dofs and the limbs are arranged so that they guarantee a non-empty intersection, {W}, among the n sets, {Wj}, j=1,…,n. This principle has guided the design of many overconstrained LM-PMs, among which the best known is certainly the 3RRR wrist (Gosselin & Angeles 1989), which uses, as limbs, three serial wrists of type RRR with the same spherical-motion center (see Fig. 1). The advantage of a pure-motion LM-PM obtained through this principle, with respect to a serial manipulator that has the same topology as the LM-PM's limbs, is that the LM-PM has all the actuators located on the frame, which allows fast manipulators to be manufactured. This advantage is paid for with a reduced workspace and with an architecture that needs high manufacturing precision to avoid jamming of the joints and high internal loads in the links.
Figure 1. The 3RRR wrist: schematic showing the frame, the end effector and the revolute (R) pairs of the limbs.
Conditions (1) and (2) guide the determination of the limb topologies suitable for making the end effector of an LM-PM perform a given type of motion. In the literature, the analyses that lead to the identification of such topologies have been mainly
addressed through three different approaches: (i) group theory (Hervé 1978,
1995, 1999, 2004; Hervé & Sparacino 1991; Karouia & Hervé 2000, 2002; Huynh
& Hervé 2005; Lee & Hervé 2006; Rico et al. 2006), (ii) screw theory (Tsai 1999;
Fang & Tsai 2002; Frisoli et al. 2000; Kong & Gosselin 2002, 2004a, 2004b, 2005;
Huang & Li 2002, 2003; Carricato 2005) and (iii) velocity-loop equations (Di
Gregorio & Parenti-Castelli 1998; Di Gregorio 2001a, 2001b, 2002, 2003; Carri-
cato & Parenti-Castelli 2002, 2003).
The first approach (group theory) determines the generic {Lj} by composing the sets of motions generated by each joint of the kinematic chain that is a candidate limb and then searches for the geometric conditions the potential limb must satisfy in order to make a subset of an assigned motion subgroup of {D} be a subset of {Lj}. The result of this type of analysis is the determination
(topology plus geometric conditions) of all the generators of a given motion
subgroup. Each generator of a given motion subgroup of {D} can be used as
limb in an LM-PM that must bound the end-effector motions to a subset of that
motion subgroup.
The second approach (screw theory) determines the screw (twist), $j, which represents the generic element of {Lj}, as a linear combination of the twists of all the joints of the kinematic chain that is a candidate limb. Then, the
screw (wrench), ξj, reciprocal to $j, which represents the system of forces ex-
erted on the end effector by the j-th limb, is computed. Finally, the wrench sys-
tem obtained as linear combination of the ξj is considered, and the geometric
conditions that make it coincide with the wrench system reciprocal to all the
elements of the motion subgroup, which {L} must be a subset of, are deduced.
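The reciprocal-screw step can be sketched numerically: writing twists as $ = (ω; v) and wrenches as ξ = (f; m), reciprocity means ω·m + v·f = 0, so a limb's wrench system is the null space of TᵀΠ, where Π swaps the two 3-vector blocks. A hedged sketch (the two joint twists below are an illustrative limb, not one from the text):

```python
import numpy as np

# Block-swap matrix: reciprocal product = twist^T @ PI @ wrench
PI = np.block([[np.zeros((3, 3)), np.eye(3)],
               [np.eye(3), np.zeros((3, 3))]])

def reciprocal_wrenches(joint_twists, tol=1e-9):
    """Basis of the wrenches xi = (f; m) reciprocal to every joint twist
    (omega; v) of a limb: null space of T^T @ PI."""
    T = np.asarray(joint_twists, float)      # 6 x k, twists as columns
    _, s, Vt = np.linalg.svd(T.T @ PI)       # k x 6
    s = np.concatenate([s, np.zeros(6 - s.size)])
    return Vt[s <= tol].T

# Hypothetical limb: a revolute joint about z through the origin and a
# prismatic joint along x -> reciprocal wrenches satisfy m_z = 0 and f_x = 0.
T = np.array([[0, 0, 1, 0, 0, 0],     # rot z, as (omega; v)
              [0, 0, 0, 1, 0, 0]]).T  # trans x
W = reciprocal_wrenches(T)
```

Each column of W annihilates every joint twist under the reciprocal product, which is exactly the constraint system the limb exerts on the end effector.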
The third approach (velocity-loop equations) consists in writing n times both the end-effector angular velocity, ω, and the velocity, Ṗ, of an end-effector point by exploiting the kinematic properties of the n limbs of the generic LM-PM topology under analysis. In so doing, n expressions of the pair of vectors (ω, Ṗ) are obtained, where the j-th expression, j=1,…,n, is a linear combination of the joint rates of the j-th limb. The analysis of these (ω, Ṗ) expressions is sufficient to determine the geometric conditions that each limb has to satisfy in order to make (I) all the n expressions compatible, and (II) the end-effector's motion characteristics (ω, Ṗ) respect the conditions which guarantee that all the end-effector motions belong to an assigned motion subgroup of {D}. Since this approach deduces geometric conditions by analysing the instantaneous end-effector motion, the characteristics of the finite end-effector motion are stated by demonstrating that those conditions are sufficient to guarantee an infinite
4. Singularity Analysis
{C} = ∩_{j=1,…,n} {Cj}    (3)
must be a subset of {W}. If {C} is a non-empty set and {W} contains subsets that belong to different m-dimensional motion subgroups (or to m-dimensional subsets of mixed motions together with subsets of an m-dimensional motion subgroup), then there is a non-empty subset, {S}, of {C} whose elements are end-effector poses from which the LM-PM can move the end effector by making it perform motions that belong to different m-dimensional motion subgroups of {D} (or that belong to either an m-dimensional motion subgroup of {D} or a mixed-motion subset of {W}). The end-effector's possibility of leaving a pose by performing motions that belong to disjoint m-dimensional subsets of {D} implies that the end effector locally has at least one additional dof (i.e. m+h dofs with h≥1) when it assumes that pose. Therefore, when the end effector assumes that pose, its motion characteristics, (ω, Ṗ), are not determined even though the m actuated-joint rates are assigned (i.e. the LM-PM's configuration with the end effector located at that pose is a particular type-(II) singularity).
A three-dof LM-PM with topology 3URU (i.e. with three limbs of type URU, where U stands for universal joint), named DYMO (Fig. 2), was presented in (Zlatanov et al. 2001); by crossing constraint singularities, it can become either a TPM, an SPM, a PPM or a three-dof mixed-motion LM-PM.
Figure 2. The DYMO 3URU manipulator (Zlatanov et al. 2001): schematic showing the end effector and the frame.

A·[Ṗᵀ, ωᵀ]ᵀ = B·q̇    (4)
where A and B are a 6×6 matrix and a 6×m matrix, respectively, and both, in general, depend on the m-dimensional vector q which collects the m actuated-joint variables, qp, p=1,…,m (i.e. they depend on the manipulator configuration). Since the LM-PM has m dofs, 6−m equations of system (4) simply state that ω and Ṗ cannot be arbitrarily chosen.
A non-singular configuration is characterised by rank(A)=6 and rank(B)=m. A
type-(I) singularity is characterised by rank(A)=6 and rank(B)<m. A type-(II)
singularity is characterised by rank(A)<6 (i.e. det(A)=0) and rank(B)=m. A
type-(III) singularity is characterised by rank(A)<6 and rank(B)<m.
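The rank tests above translate directly into code; a small sketch (the matrices passed in are placeholders, not a specific manipulator's A and B):

```python
import numpy as np

def classify(A, B, tol=1e-9):
    """Classify a configuration from rank(A) (6x6) and rank(B) (6xm),
    following the type-(I)/(II)/(III) scheme above."""
    m = B.shape[1]
    rA = np.linalg.matrix_rank(A, tol)
    rB = np.linalg.matrix_rank(B, tol)
    if rA == 6 and rB == m:
        return "non-singular"
    if rA == 6:
        return "type-(I)"   # rank(B) < m
    if rB == m:
        return "type-(II)"  # det(A) = 0
    return "type-(III)"     # both ranks deficient
```

For instance, a full-rank A with a rank-deficient B is reported as a type-(I) singularity, matching the definitions above.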
In order to find the type-(II) singularities, the values of q that solve the equation

det(A) = 0    (5)

must be determined. The condition number of A measures how close a configuration is to type-(II) singularity conditions (Gosselin & Angeles 1991): the nearer to one the condition number is, the farther from type-(II) singularity conditions that configuration is (the
condition number ranges from 1 to infinity). The configurations where the
condition number of A is equal to one are the farthest from type-(II) singular-
ity conditions. Such configurations are called isotropic configurations. In an
isotropic configuration, the matrix ATA is proportional to the 6×6 identity ma-
trix, I6, or, which is the same, the singular values of A are all equal.
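Both definitions follow from the singular values of A and can be checked in a few lines (a generic sketch, applicable to any 6×6 matrix A):

```python
import numpy as np

def condition_number(A):
    """Ratio of the largest to the smallest singular value of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.inf if s[-1] == 0 else s[0] / s[-1]

def is_isotropic(A, tol=1e-9):
    """Isotropic configuration: all singular values of A equal
    (equivalently, A^T A proportional to the identity)."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s[0] - s[-1]) <= tol * s[0]
```

Any nonzero multiple of the identity is isotropic (condition number 1), while stretching a single axis immediately raises the condition number.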
If an LM-PM has a matrix A that is constant (i.e. it does not depend on q) and non-singular, then all the manipulator configurations have the same condition
number and are not singular. Such a manipulator will be called constant-
isotropy LM-PM. In addition, if, in a constant-isotropy LM-PM, the constant
value of the condition number of A is one, then all the manipulator configura-
tions are isotropic and the manipulator is called fully isotropic.
The appealing properties of constant-isotropy or fully isotropic LM-PMs
pushed researchers to determine their topologies (Kong & Gosselin 2002a; Car-
ricato & Parenti-Castelli 2002; Carricato 2005; Gogu 2004). Among all the proposed fully-isotropic architectures, the Cartesian 3PRRR (Kong & Gosselin 2002b; Di Gregorio 2002b; Kim & Tsai 2003) is certainly the best known. With
reference to Fig. 4, the Cartesian 3PRRR has three limbs of type PRRR where
the prismatic pair is actuated. In the j-th limb, j=1, 2, 3, of type PRRR, the three
revolute-pair axes and the prismatic-pair sliding direction are all parallel. Fi-
nally, the sliding directions of the three prismatic pairs are mutually orthogo-
nal.
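Why the mutually orthogonal sliding directions matter can be seen in a toy model: if each actuated joint rate equals the platform-velocity component along its limb's sliding direction d_j (a simplifying assumption, not the full 3PRRR kinematics), then Dᵀ·Ṗ = q̇ and the translational Jacobian is (Dᵀ)⁻¹, whose condition number is 1 exactly when the three directions are orthonormal:

```python
import numpy as np

def cond_translational(directions):
    """Condition number of J = inv(D^T) for a toy purely translational PM
    whose actuated sliding directions are the rows of `directions`
    (each limb assumed to transmit q_dot_j = d_j . P_dot)."""
    D = np.column_stack([np.asarray(d, float) / np.linalg.norm(d)
                         for d in directions])
    return np.linalg.cond(np.linalg.inv(D.T))

orthogonal = cond_translational(np.eye(3))   # Cartesian-like: exactly 1
skewed = cond_translational([[1, 0, 0],
                             [1, 1, 0],
                             [0, 0, 1]])     # non-orthogonal: > 1
```

Skewing any pair of directions away from orthogonality degrades the isotropy, which is why the Cartesian arrangement is singled out.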
Figure 4. Cartesian 3PRRR (Kong & Gosselin 2002b; Di Gregorio 2002b; Kim & Tsai
2003)
5. Conclusions
The functional (kinetostatic) design of an LM-PM starts from the analysis of the
manipulation task to accomplish, continues with the identification of the limb
topologies and finishes with the determination of the machine architecture
passing through the singularity analysis.
In the literature, the methodologies for identifying the limb topologies suitable for a given manipulation task have been well described. How to combine the limbs in order to avoid singularities has been widely discussed, at least for the most popular architectures. Nevertheless, comparison criteria among different architectures that perform the same manipulation task are not well established yet. As a result, even though it is quite easy to find long lists of limb topologies that are suitable for a given manipulation task (the works reported in the references are just a sample of the vast literature on this subject), stating which one is the best is still an open problem.
Some authors (Tsai & Joshi 2001) proposed the use of the “global condition number” (defined as the average value, over the workspace, of the inverse of the condition number) as an index for evaluating or optimising a machine, but the same authors had to recognise that the comparison among different LM-PMs must also take into account the inertia properties of the machines. In general, it can be said that machines which exhibit good kinetostatic properties do not necessarily provide good dynamic performance.
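The global condition number of Tsai & Joshi can be approximated by sampling the workspace; a sketch (the Jacobian function and sample set are placeholder arguments):

```python
import numpy as np

def global_condition_index(jacobian_at, q_samples):
    """Average over sampled configurations of 1/cond(A(q)); equal to 1 for
    a fully isotropic machine and smaller the closer the sampled poses
    come to type-(II) singularities."""
    vals = []
    for q in q_samples:
        s = np.linalg.svd(jacobian_at(q), compute_uv=False)
        vals.append(s[-1] / s[0] if s[0] > 0 else 0.0)
    return float(np.mean(vals))

# A constant, isotropic Jacobian yields an index of exactly 1.
gci = global_condition_index(lambda q: np.eye(6), np.zeros((10, 3)))
```

As the text notes, this index captures only kinetostatics; it says nothing about the inertia properties that also separate competing architectures.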
6. References
Kim, H.S. & Tsai, L.-W., (2003). Design optimization of a Cartesian parallel
manipulator. ASME Journal of Mechanical Design, Vol. 125, No. 1, (2003),
pp. 43-51
Kong, X. & Gosselin, C.M. (2002a). Type synthesis of linear translational paral-
lel manipulators, In: Advances in Robot Kinematics, Lenarcic, J. & Thomas,
F., (Ed.), pp. 411-420, Kluwer Academic Publishers, Dordrecht (Nether-
lands)
Kong, X. & Gosselin, C.M. (2002b). Kinematics and singularity analysis of a
novel type of 3-CRR 3-dof translational parallel manipulator. The Inter-
national Journal of Robotics Research, Vol. 21, No. 9, (2002), pp. 791-798
Kong, X. & Gosselin, C.M. (2004a). Type synthesis of 3-DOF translational par-
allel manipulators based on screw theory. ASME Journal of Mechanical
Design, Vol. 126, No. 1, (2004), pp. 83-92
Kong, X. & Gosselin, C.M. (2004b). Type synthesis of 3-DOF spherical parallel
manipulators based on screw theory. ASME Journal of Mechanical Design,
Vol. 126, No. 1, (2004), pp. 101-108
Kong, X. & Gosselin, C.M. (2005). Type synthesis of 3-DOF PPR-equivalent
parallel manipulators based on screw theory and the concept of virtual
chain. ASME Journal of Mechanical Design, Vol. 127, No. 6, (2005), pp.
1113-1121
Lee, C.-C. & Hervé, J.M. (2006). Pseudo-planar motion generators, In: Advances
in Robot Kinematics, Lenarcic, J. & Roth, B., (Ed.), pp. 435-444, Kluwer
Academic Publishers, Dordrecht (Netherlands)
Ma, O., & Angeles, J. (1991). Architecture singularities of platform manipula-
tors, Proceedings of the 1991 IEEE International Conference on Robotics and Automation, pp. 1542-1547, Sacramento (CA, USA), April 1991
Rico, J.M.; Cervantes-Sancez, J.J.; Tadeo-Chavez, A.; Perez-Soto, G.I. & Rocha-
Chavaria, J. (2006). A comprehensive theory of type synthesis of fully
parallel platforms, Proceedings of the ASME 2006 International Design En-
gineering Technical Conferences & Computers and Information in Engineering
Conference, paper No DETC2006-99070, Philadelphia (USA), September
2006
Tsai, L.-W., (1999). The enumeration of a class of three-dof parallel manipula-
tors, Proceedings of the 10th World Congress on the Theory of Machines and Mechanisms, pp. 1121-1126, Oulu (Finland), June 1999
Tsai, L.-W. & Joshi, S. (2001). Comparison study of architectures of four 3 de-
gree-of-freedom translational parallel manipulators, Proceedings of the
2001 IEEE International Conference on Robotics and Automation, pp.
1283-1286, Seoul (Korea), May 2001
Zlatanov, D.; Fenton, R.G. & Benhabib, B. (1995). A unifying framework for
classification and interpretation of mechanism singularities. ASME Jour-
nal of Mechanical Design, Vol. 117, No. 4, (1995), pp. 566-572.
1. Introduction
574 Industrial Robotics: Theory, Modelling and Control
Manufacturing tolerances, installation errors and link offsets cause devia-
tions with respect to the nominal kinematic parameters of the robot system. As
a result, if the nominal values of these parameters are used within the robot
system control software, the resulting pose of the system will be inaccurate.
Accuracy of a robot is the closeness with which its actual pose matches the
pose predicted by its controller. A robot normally designed for repeated work
such as spray painting, pick and place, etc., has high repeatability but low ac-
curacy. An accurate robot is required in applications where off-line program-
ming is involved. To a large extent, robot inaccuracy is induced by the propa-
gation of geometric errors, compliance errors and time-variant thermal errors. The
geometric errors of a robot come from manufacturing imperfections, mis-
alignments or joint wear. Compliance errors are due to the flexibility of robot
joints and link deflection under self-gravity and external load. The compliance
errors also depend on the robot’s changing position. Thermal errors result
from thermal distortion and expansions of robot components due to internal
and external heat sources such as motors, bearings and ambient temperature
change.
Link and joint flexibility has a significant impact on robot performance and
stability. Link gravity and external payload cause the deflection of links and
flexible joints, and therefore degrade the robot performance. Link compliance
effects are represented by six differential component changes: three transla-
tional and three rotational changes. This paper presents a systematic method-
ology for estimating the compliance characteristics of serial and parallel ma-
nipulators due to external concentrated load/deflection. In related
experiments, special measurement tools and sensors are necessary to identify
the stiffness of driving joints.
This paper also presents a general methodology to calibrate and compensate for robot compliance errors and thermal errors in addition to geometric errors. An error synthesis model based on the Denavit-Hartenberg (D-H)
convention is derived for simultaneously considering geometric errors, com-
pliance errors and thermal errors. Based on this model a general methodology
is developed to calibrate geometric errors, compliance errors and thermal er-
rors. Experimental results show that the accuracy of the robot is improved by
an order of magnitude after the compensation.
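The effect of small D-H parameter deviations on the end-effector pose can be illustrated with a forward-kinematics sketch (the two-link parameters and perturbation sizes below are hypothetical, not the robot or error model of this chapter):

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform for one D-H link (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,  sa,       ca,      d],
                     [0.,  0.,       0.,      1.]])

def fk(params):
    """Chain the per-link D-H transforms into the end-effector pose."""
    T = np.eye(4)
    for p in params:
        T = T @ dh(*p)
    return T

nominal = [(0.0, 0.3, 0.5, 0.0), (np.pi / 4, 0.0, 0.4, np.pi / 2)]
# perturb each joint angle by 1 mrad and each link length by 0.2 mm
actual = [(th + 1e-3, d, a + 2e-4, al) for th, d, a, al in nominal]
pos_error = np.linalg.norm(fk(actual)[:3, 3] - fk(nominal)[:3, 3])
```

Milliradian-level angle errors already produce millimetre-scale tip deviations at metre-scale reach, which is why calibration against such a model pays off.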
ang et al., 1988]. Pose measurement of the robotic end-effector has been the focus of much of this work [Ziegert and Datseries, 1990; Zhu and Cui, 2001, 2003].
The precision booster (Figure 3), a 6-DOF piezoelectric ultraprecision positioning drive, has been developed to provide industrial robots with 6-DOF fine positioning capability. It is designed to be mounted at the end of the forearm of a robot, before its end-effector. With the added fine positioning capability, the accuracy of industrial robots can be greatly enhanced. Working with more accurate feedback sensors or calibration processes, the booster enables an industrial robot to reach micrometer accuracy, one or two orders of magnitude higher than that of conventional serial robots. The accuracy of the precision booster can be designed in the sub-micrometer or micrometer range over a stroke of millimeters, enough to cover the sub-millimeter positioning resolution offered by existing industrial robots. The booster features monolithic flexure construction, and
the flexure structure functions as a spatial motion mechanism. This monolithic
motion mechanism is backlash free and stick-slip free. High strength and high
stiffness piezoelectric actuators are used to power the booster to perform fine
positioning.
TAU robot:
• Work area: 1.5 m × 3 m
• Repeatability: 15 μm
• Path accuracy: 30 μm
• Acceleration: 5 g
• Maximum positioning speed: 180 m/min
• Excitation frequency: > 40 Hz
• Cost: < 250 kUSD
The Tricept robot, shown in Figure 4 and logically derived from the Tetrabot (Thornton, 1988), has a 3-DOF (degree of freedom) parallel configuration to execute translational motions and a 3-DOF spherical wrist to execute rotational motions (Neumann and Neos Robotics, 1998). Its workspace is relatively large compared to the size of the robot. In order to further enlarge the workspace, the addition of a revolute joint at the fixed base has been envisaged, introducing kinematic redundancy into the robotic manipulator. Its translational part can be thought of as a reduced Stewart
Error Modeling and Accuracy of Parallel Industrial Robots 579
platform with only three limbs. Like the Stewart platform, its kinematics has not been completely solved: the inverse kinematics problem admits an analytical solution, whereas the direct kinematics problem may require the use of iterative algorithms (Siciliano, 1999).
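The iterative direct-kinematics solution mentioned above can be sketched generically: given an inverse-kinematics map ik(P) (here a hypothetical three-leg-length model, not the actual Tricept kinematics), Newton iteration with a finite-difference Jacobian recovers the pose from measured joint values:

```python
import numpy as np

def solve_direct(ik, q_meas, P0, iters=50, tol=1e-12):
    """Solve ik(P) = q_meas for the pose P by Newton iteration with a
    finite-difference Jacobian (generic sketch)."""
    P = np.asarray(P0, float)
    for _ in range(iters):
        r = ik(P) - q_meas
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, P.size))
        h = 1e-7
        for k in range(P.size):
            Ph = P.copy()
            Ph[k] += h
            J[:, k] = (ik(Ph) - ik(P)) / h
        P = P - np.linalg.solve(J, r)
    return P

# Toy inverse kinematics: leg lengths from three base points to the
# platform point P (purely translational model, hypothetical geometry).
base = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
ik = lambda P: np.linalg.norm(base - P, axis=1)
P_true = np.array([0.3, 0.4, 0.8])
P_est = solve_direct(ik, ik(P_true), P0=[0.2, 0.3, 0.9])
```

Starting from a nearby guess, the iteration converges to the pose that reproduces the measured leg lengths.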
The Delta robot, patented in the U.S. in 1990 (Clavel, 1990), is shown in Figure 5. The basic idea behind the Delta parallel robotic design is the use of parallelograms. A parallelogram allows an output link to remain at a fixed orientation with respect to an input link. The use of three such parallelograms completely constrains the orientation of the mobile platform, which retains only three purely translational degrees of freedom. The input links of the three parallelograms are mounted on rotating levers via revolute joints. The revolute joints of the rotating levers are actuated in two different ways: with rotational (DC or AC servo) motors or with linear actuators. Finally, a fourth leg is used to transmit rotary motion from the base to an end-effector mounted on the mobile platform.
The use of base-mounted actuators and low-mass links allows the mobile platform to achieve accelerations of up to 50 g in experimental environments and 12 g in industrial applications. This makes the Delta robot a perfect candidate for pick-and-place operations on light objects. The Delta design has been applied in industrial robots for several years. Its kinematics and dynamics have also been developed (Hunt 1973 and Codourey 1998).
Figure 3. Piezo-Driven Flexure-Based Hexapod (Zhu and Cui, 2001)
Figure 7. Typical Arms and Wrist Configuration of Industrial Robots (Handbook of Industrial Robotics)
1. All the actuators are mounted on a fixed platform, which minimizes the
mass of the moving arm system.
2. The links connected to the actuated platform are two-force members transmitting only compressive and tensile forces and do not carry bending or twisting loads, which makes it easy to achieve a moving arm system with high stiffness and low mass.
3. The joints can be implemented as ball and socket bearings, which makes
it possible to obtain high precision in addition to high stiffness and low
mass for the joint arrangement.
4. The actuated platform is positioned with 3 translational DOFs in a par-
allel fashion without angular displacement.
Systematic clustering of the links connected to the actuated robot platform has
been studied. Based on this design approach new parallel arm structures have
been identified and some new robot concepts have been found (Brogardh, T,
2000).
Figure 8 shows schematically the basic components needed to achieve the
Delta parallel arm robot with the kinematic features listed above. The actuated
platform is connected to 6 links of type A by means of ball-and-socket joints that each have 3 DOFs. Type A means that the links are designed to be stiff only for forces along their axial direction in the structure. This force loading characteristic of the type A links is guaranteed because a ball-and-socket joint cannot transmit a bending moment or twisting torque to the link it is connected to.
The actuators in Figure 8 are mounted on the fixed platform, and the moving part of each actuator is connected to the links of type A via links of type B. The type B links are designed to be stiff also against bending moments and twisting torques. Not all the links of type B need to be connected to actuators, but 3 of them must be; otherwise the actuated platform cannot be manipulated in 3 DOFs.
Figure 8. Components for the Design of Structures with the Same Features as the
Delta Robot. (Courtesy of Brogardh, T, 2000).
Each of the links of type B (Figure 8) can be connected to one or more of the
links of type A. One could say that each link of type B can be connected to a
cluster of links of type A and it is possible to introduce a simple clustering
scheme, where for example 2/2/2 means that the links of type A are clustered
with 2 links to each of the 3 links of type B. To achieve parallel movements of
the actuated platform (to preserve the tilt angles), type A links belonging to the
same cluster must be parallel and have the same length. Moreover, to avoid a
Figure 9. Useful Clustering Strategies When the Links of Type A Are Attached to the
Actuator Platform in a 2-D Pattern.
Figure 10. Useful 2/2/2 Clustering Strategies when the Links of Type A are Attached
to the Actuator Platform in a 3-D pattern. (Courtesy of Brogardh, T, 2000).
A new class of parallel robot, the TAU robot, has been created based on the 3/2/1 configuration. It combines the performance advantages of a parallel arm mechanism (e.g., high stiffness, high accuracy) with the large workspace of a serial robot.
As shown in Figure 11, the primary design of the TAU prototype robot has
three actuators mounted on the base fixture and arranged in a line, which is
called an I-configuration TAU. From bottom to top, actuators and upper arms
(type B link) are numbered as 1, 2 and 3, and connected with 3, 2 and 1 lower
arm(s) (type A link) respectively. This configuration basically performs a 3-
DOF motion in its workspace. The 3-DOF parallel robot has a small footprint
but with an enhanced stiffness.
The six links (lower arms) connected to the tool plate are driven by the three upper arms rotating around the Z-axis. This structure has 3 DOFs in its workspace. With its geometric constraint, the DOF of a TAU robot is given by (Tsai, 1999)

DOF = λ·(n − j − 1) + Σ_{i=1..j} f_i

λ: degree of freedom of the space in which the mechanism is intended to function
n: number of links; j: number of joints; f_i: degrees of freedom of joint i
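The mobility formula can be evaluated in a few lines; the four-bar and Gough-Stewart checks below are standard textbook examples, not taken from this chapter:

```python
def mobility(num_links, joint_dofs, space_dof=6):
    """DOF = lambda*(n - j - 1) + sum(f_i), with n links (frame included),
    j joints and f_i dofs for joint i (Tsai, 1999)."""
    j = len(joint_dofs)
    return space_dof * (num_links - j - 1) + sum(joint_dofs)

# Planar four-bar linkage: lambda = 3, four links, four revolute joints.
assert mobility(4, [1, 1, 1, 1], space_dof=3) == 1
# Gough-Stewart platform: 14 links, 6 spherical + 6 universal + 6 prismatic.
assert mobility(14, [3] * 6 + [2] * 6 + [1] * 6) == 6
```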
Joints between the fixture and the upper arms are 1-DOF rotational joints. Joints connecting upper and lower arms are 2-DOF universal joints. 3-DOF spherical joints connect the lower arms and the moving plate.
The parallel robotic configuration for translational motion has a higher stiffness than the serial robotic configuration. It also has the following features: a large workspace (360 degrees around its base axis, like a serial robot), an analytical kinematic solution and an analytical rigid-body dynamic solution.
Potential Applications
Laser cutting or welding, as a non-contact process, requires an accuracy/repeatability better than 100 μm. The payload, which is the laser gun and accessories, is usually less than 50 kg. The required speed is not high in such applications.
The coordinate measuring function is typically performed by a CMM. It has a strict accuracy requirement of better than 50 μm for both static and path-following measurements at low speed.
Fine material removal is a precision machining application now dominated by CNC machines. It requires the highest stiffness and system accuracy.
Design Objectives
The mechanism design is application oriented. Three typical future applications were selected and studied in the design phase: 2-D laser cutting, CMM for automotive vehicles and material removal applications. Each of them represents a typical application with certain requirements. Accuracy is a dominating factor reflecting the level of performance of any measurement system, and the accuracy of current articulated robot arms is low. Material removal applications require high stiffness, while current serially configured CNC machines or parallel-configured machines have a limited workspace.
Table 2 shows the performance comparison between the TAU robot and the gantry robot currently used in laser cutting applications, which indicates where the TAU robot could be used instead of a linear gantry robot. The performance of the TAU robot covers all the advantages of the linear motor gantry.
Figure 12. Single Arm Test Platform for Drive Motor Error Analysis
This chapter gives the nominal (error-free) kinematics of the TAU robot. It is a general solution for this type of 3-DOF parallel-serial robot. For the two-arm test platform, a simple kinematic solution can be obtained based on its double-SCARA configuration, and it is not included in this chapter. The two-arm test platform kinematics was used in friction model identification and kinematic error calibration of the two-arm test platform; it will be introduced as needed in the related chapters.
To solve the kinematics of this 3-DOF TAU robot, three independent equations are needed. The three lower arm links, connected between the moving plate and upper arm 1, are designed to be parallel to each other and to have the same length. Similarly, the two lower arms of upper arm 2 are also parallel and equal in length, which gives another length equation. The third equation comes from lower arm 3.
The formulation of all three equations starts from point P in Figure 3.1, where the three kinematic chains meet. The three basic equations of the kinematic problem are:
Figure 3.1. Kinematic scheme of the TAU robot, showing the joint angles θ1, θ2, θ3, the link parameters a11, a12, a21, a22, a31, a32, d11, d21, d31, the points D1, D2, D3, the kinematic calculation point KCP (Px, Py, Pz), the angle ϕ, the origin O, the X and Z axes, and the universal and spherical joints.
Basic equations:

a12² = (D1x − Px)² + (D1y − Py)² + (D1z − Pz)²    (1)

a22² = (D2x − Px)² + (D2y − Py)² + (D2z − Pz)²    (2)

a32² = (D3x − Px)² + (D3y − Py)² + (D3z − Pz)²    (3)

θ1 = cos⁻¹[(a21² − a22² + Px² + Py² + (D2z − Pz)²) / (2·a21·√(Px² + Py²))] + tan⁻¹(Py/Px)    (4)
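Equation (4) can be implemented directly; the numeric check below uses hypothetical link values (a21 = a22 = 1), not the real TAU dimensions:

```python
import numpy as np

def theta1(Px, Py, Pz, D2z, a21, a22):
    """First joint angle from Equation (4); atan2 gives the quadrant-safe
    arctangent and the cosine argument is clipped for numerical robustness."""
    r = np.hypot(Px, Py)
    c = (a21**2 - a22**2 + Px**2 + Py**2 + (D2z - Pz)**2) / (2.0 * a21 * r)
    return np.arccos(np.clip(c, -1.0, 1.0)) + np.arctan2(Py, Px)
```

With a21 = a22 = 1, D2z = Pz and P on the x-axis at unit distance, the cosine argument is 1/2, so θ1 = π/3.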
Therefore:

θ2 = cos⁻¹[(a31² − a32² + C3x² + C3y² + (D3z − C3z)²) / (2·a31·√(C3x² + C3y²))] + tan⁻¹(C3y/C3x)    (5)

where
C3x = Px + a33·cos(120° + θ1)
C3y = Py + a33·sin(120° + θ1)
C3z = Pz
Equations (4), (5) and (6) therefore constitute the inverse kinematics, ending at point P on the moving platform. Note that point P is the kinematic calculation point; an additional inverse kinematic transformation is needed to go from the TCP (Tool Center Point) to point P when a tool or wrist assembly is attached to the moving platform.
Thus, subtracting Equation (1) from Equation (2) (with dj = Djx² + Djy² + Djz²) gives:

(D1x − D2x)·Px + (D1y − D2y)·Py + (D1z − D2z)·Pz = (a22² − a12² + d1 − d2)/2

Let
a1 = D1x − D2x
b1 = D1y − D2y
c1 = D1z − D2z
e1 = (a22² − a12² + d1 − d2)/2

so that
a1·Px + b1·Py + c1·Pz = e1
Similarly, define
a2 = D1x − D3x
b2 = D1y − D3y
c2 = D1z − D3z
e2 = (a32² − a12² + d1 − d3)/2

Then Equation (11) follows as:
a2·Px + b2·Py + c2·Pz = e2
The two linear equations

a1·Px + b1·Py = e1 − c1·Pz
a2·Px + b2·Py = e2 − c2·Pz

can be written in matrix form as

[a1 b1; a2 b2]·[Px; Py] = [e1 − c1·Pz; e2 − c2·Pz]

Define Δ = a1·b2 − a2·b1.

For case 1, Δ ≠ 0:

Δx = (e1 − c1·Pz)·b2 − (e2 − c2·Pz)·b1 = (b2·e1 − b1·e2) + (b1·c2 − b2·c1)·Pz
Δy = (e2 − c2·Pz)·a1 − (e1 − c1·Pz)·a2 = (a1·e2 − a2·e1) + (a2·c1 − a1·c2)·Pz

Px = Δx/Δ = (b2·e1 − b1·e2)/Δ + ((b1·c2 − b2·c1)/Δ)·Pz
Py = Δy/Δ = (a1·e2 − a2·e1)/Δ + ((a2·c1 − a1·c2)/Δ)·Pz
Define:
f1 = (b2·e1 − b1·e2)/Δ
f2 = (a1·e2 − a2·e1)/Δ
fx = (b1·c2 − b2·c1)/Δ
fy = (a2·c1 − a1·c2)/Δ

Thus:
Px = f1 + fx·Pz
Py = f2 + fy·Pz    (12)
2
a32 = ( D3 x − f 1 − f x Pz ) 2 + ( D3 y − f 2 − f y Py ) 2 + ( D3 z − Pz ) 2
Where,

$$f_{11} = f_1 - D_{3x}, \qquad f_{22} = f_2 - D_{3y}$$

Then, let

$$A = 1 + f_x^2 + f_y^2, \qquad B = 2(f_{11} f_x + f_{22} f_y - D_{3z}), \qquad C = f_{11}^2 + f_{22}^2 + D_{3z}^2 - a_{32}^2$$

so that

$$P_z = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A} \qquad (13)$$
$$P_x = f_1 + f_x P_z \qquad (14)$$
$$P_y = f_2 + f_y P_z \qquad (15)$$
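The closed-form solution of Equations (12)-(15) for the case $\Delta \neq 0$ can be sketched as follows. The plane coefficients, the joint center D3 and the arm length a32 in the usage example are synthetic values chosen so that the true platform point is known; the '+' branch of the quadratic is selected arbitrarily.

```python
import numpy as np

def solve_platform_point(a1, b1, c1, e1, a2, b2, c2, e2, D3, a32):
    """Closed-form forward kinematics, case Delta != 0 (Equations (12)-(15)).

    The two plane equations give Px, Py as linear functions of Pz;
    substituting into the sphere constraint |D3 - P| = a32 yields a
    quadratic in Pz.
    """
    delta = a1 * b2 - a2 * b1
    f1 = (b2 * e1 - b1 * e2) / delta
    f2 = (a1 * e2 - a2 * e1) / delta
    fx = (b1 * c2 - b2 * c1) / delta
    fy = (a2 * c1 - a1 * c2) / delta
    f11, f22 = f1 - D3[0], f2 - D3[1]
    A = 1 + fx**2 + fy**2
    B = 2 * (f11 * fx + f22 * fy - D3[2])
    C = f11**2 + f22**2 + D3[2]**2 - a32**2
    Pz = (-B + np.sqrt(B**2 - 4 * A * C)) / (2 * A)  # '+' branch chosen
    return f1 + fx * Pz, f2 + fy * Pz, Pz

# Synthetic case whose true platform point is (0.1, 0.2, 0.9)
Px, Py, Pz = solve_platform_point(1.0, 0.0, 0.0, 0.1,
                                  0.0, 1.0, 0.0, 0.2,
                                  D3=(0.1, 0.2, 0.0), a32=0.9)
```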
For case 2, $\Delta = a_1 b_2 - a_2 b_1 = 0$ (with $b_1 \neq 0$): from

$$a_1 P_x + b_1 P_y = e_1 - c_1 P_z$$

it follows that

$$P_y = (e_1 - c_1 P_z - a_1 P_x)/b_1 = f_x P_x + f_1 P_z + f_2$$
Where:

$$f_1 = -c_1/b_1, \qquad f_2 = e_1/b_1, \qquad f_x = -a_1/b_1$$

Substituting $P_x$, $P_y$ into Equation (3) and rearranging, with $f_{11} = f_1 P_z + f_2 - D_{3y}$, let

$$A = 1 + f_x^2, \qquad B = 2(f_{11} f_x - D_{3x}), \qquad C = f_{11}^2 + D_{3x}^2 - a_{32}^2$$

Then

$$P_x = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}, \qquad P_y = f_x P_x + f_1 P_z + f_2$$
With $P_z = D_{3z}$,

$$P_x = (e_1 - c_1 P_z)/a_1$$

and $P_y$ can be solved from one of the basic equations, for example Equation (1):

$$a_1 P_x + b_1 P_y = e_1 - c_1 P_z \;\Rightarrow\; P_y = (e_1 - c_1 P_z - a_1 P_x)/b_1$$
Thus the forward kinematics is solved based on geometric constraints. As with the inverse kinematics, additional mathematical work is needed for the kinematic chain from point P to the final TCP, depending on the configuration details of the tool or wrist.
The purpose of error analysis is to minimize the error of the robot system through assembly, based on a comprehensive system error model. The rationale is that every error source has either a negative or a positive influence on the system error, so the sources can be arranged such that cancellation, or at least error reduction, occurs. The methodology is characterized as follows:
• The direction and degree of influence of an error source on the system error vary over the whole workspace.
• Random errors cannot be dealt with effectively.
• Effective fixturing and measurement are important.
• The methodology reduces the robotic system error and opens the door for more accurate error compensation.
For the TAU robot, error analysis is essential: an error limit or range must be assigned to every component in order to achieve a given robotic system accuracy. This procedure is the so-called error budget. Before the error budget can be made, the Jacobian matrix must be established and analyzed: the Jacobian is needed before errors can be assigned to the components, the final accuracy can be obtained once the Jacobian is known, and the Jacobian also reveals which components are more important than others. Table 3 lists all the design variables for the TAU robot.
There are six kinematic chains from the base to the end-effector as:
Where each transformation matrix has the standard D-H form with additive error terms $\Delta\theta_i$, $\Delta\alpha_i$, $\Delta a_i$, $\Delta d_i$:

$$M_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

with the parameters

M1: θ = joint1 + Δθ1,                    α = Δα1,         a = 700 + Δa1,  d = 750 + Δd1
M2: θ = joint1 + Δθ2,                    α = Δα2,         a = 900 + Δa2,  d = 750 + Δd2
M4: θ = joint2 + Δθ4,                    α = Δα4,         a = 900 + Δa4,  d = 950 + Δd4
M6: θ = ½·joint1 + ½·joint2 + Δθ6,       α = −90° + Δα6,  a = Δa6,        d = 1700 + Δd6
M7: θ = joint3 + Δθ7,                    α = Δα7,         a = 900 + Δa7,  d = Δd7
So, the six length equations can be obtained from the matrices above.
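Since the five matrices share one D-H pattern, a single helper can build any of them. This is an illustrative sketch: the function name and the chosen joint angle are assumptions, while the nominal values 700 and 750 for M1 follow the chapter.

```python
import numpy as np

def dh_with_errors(theta, d, a, alpha, d_theta=0.0, dd=0.0, da=0.0, d_alpha=0.0):
    """4x4 D-H transform with additive error terms, matching the pattern
    of M1..M7 above. Angles are in degrees."""
    th = np.radians(theta + d_theta)
    al = np.radians(alpha + d_alpha)
    return np.array([
        [np.cos(th), -np.sin(th) * np.cos(al),  np.sin(th) * np.sin(al), (a + da) * np.cos(th)],
        [np.sin(th),  np.cos(th) * np.cos(al), -np.cos(th) * np.sin(al), (a + da) * np.sin(th)],
        [0.0,         np.sin(al),               np.cos(al),              d + dd],
        [0.0,         0.0,                      0.0,                     1.0],
    ])

# M1 for joint1 = 30 degrees with zero errors (nominal 700, 750 as above)
M1 = dh_with_errors(theta=30.0, d=750.0, a=700.0, alpha=0.0)
```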
$$dx = \sum_{i=1}^{N} \frac{\partial x}{\partial l_i}\,dg_i \qquad (16)$$
$$dy = \sum_{i=1}^{N} \frac{\partial y}{\partial l_i}\,dg_i \qquad (17)$$
$$dz = \sum_{i=1}^{N} \frac{\partial z}{\partial l_i}\,dg_i \qquad (18)$$
The error model is actually a 6-DOF model, since all error sources have been considered: it includes the position variables $x, y, z$ as well as the rotational angles $\alpha, \beta, \gamma$. From the six kinematic chains, the equations established from the D-H models are

$$f_1 = f_1(x, y, z, \alpha, \beta, \gamma, g) = 0$$
$$f_2 = f_2(x, y, z, \alpha, \beta, \gamma, g) = 0$$
$$\vdots$$
$$f_6 = f_6(x, y, z, \alpha, \beta, \gamma, g) = 0$$
$$\frac{\partial f_i}{\partial x}dx + \frac{\partial f_i}{\partial y}dy + \frac{\partial f_i}{\partial z}dz + \frac{\partial f_i}{\partial \alpha}d\alpha + \frac{\partial f_i}{\partial \beta}d\beta + \frac{\partial f_i}{\partial \gamma}d\gamma + \sum_j \frac{\partial f_i}{\partial g_j}dg_j = 0 \qquad (19)$$
Rewrite it in matrix as
$$J_1\,dX = dG \qquad (21)$$

Where

$$dG = \begin{bmatrix} -\sum_j \frac{\partial f_1}{\partial g_j}dg_j \\ \vdots \\ -\sum_j \frac{\partial f_6}{\partial g_j}dg_j \end{bmatrix} = -\begin{bmatrix} \frac{\partial f_1}{\partial g_1} & \cdots & \frac{\partial f_1}{\partial g_N} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_6}{\partial g_1} & \cdots & \frac{\partial f_6}{\partial g_N} \end{bmatrix}_{6\times N} \begin{bmatrix} dg_1 \\ \vdots \\ dg_N \end{bmatrix}_{N\times 1} \qquad (22)$$

$$dG = J_2\,dg \qquad (23)$$
$$J_1\,dX = J_2\,dg \qquad (24)$$
$$dX = (J_1^{-1} J_2)\,dg \qquad (25)$$

The Jacobian matrix is obtained as $J_1^{-1} J_2$:
$$J = J_1^{-1} J_2 = \begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} & \frac{\partial f_1}{\partial z} & \frac{\partial f_1}{\partial \alpha} & \frac{\partial f_1}{\partial \beta} & \frac{\partial f_1}{\partial \gamma} \\ \vdots & & & & & \vdots \\ \frac{\partial f_6}{\partial x} & \frac{\partial f_6}{\partial y} & \frac{\partial f_6}{\partial z} & \frac{\partial f_6}{\partial \alpha} & \frac{\partial f_6}{\partial \beta} & \frac{\partial f_6}{\partial \gamma} \end{bmatrix}^{-1} \cdot \left(-\begin{bmatrix} \frac{\partial f_1}{\partial g_1} & \cdots & \frac{\partial f_1}{\partial g_N} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_6}{\partial g_1} & \cdots & \frac{\partial f_6}{\partial g_N} \end{bmatrix}\right) \qquad (26)$$
For a prototype of the TAU robotic design, the dimension of the Jacobian matrix is 6 by 71. An analytical solution can be obtained and is used in the error analysis. With the six chain equations obtained before, this relation is used later to solve the forward kinematic problem, and it is also compared with the method described in the next section.
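The construction of $J = J_1^{-1} J_2$ can be sketched numerically when analytic derivatives are not at hand. The chain equations `f` below are a toy leg-length model, a stand-in for the real TAU chain equations, and finite differences replace the analytical derivatives.

```python
import numpy as np

def error_jacobian(f, X0, g0, h=1e-6):
    """J1 = df/dX (6x6) and J2 = -df/dg (6xN) by central differences;
    returns J = J1^{-1} J2 as in Equations (21)-(26).
    f(X, g) must return a length-6 residual vector."""
    X0, g0 = np.asarray(X0, float), np.asarray(g0, float)

    def col(fun, v, j):
        e = np.zeros_like(v)
        e[j] = h
        return (fun(v + e) - fun(v - e)) / (2.0 * h)

    J1 = np.column_stack([col(lambda X: f(X, g0), X0, j) for j in range(X0.size)])
    J2 = -np.column_stack([col(lambda g: f(X0, g), g0, j) for j in range(g0.size)])
    return np.linalg.solve(J1, J2)

# Toy stand-in: leg-length residuals of a platform at pose
# X = (x, y, z, alpha, beta, gamma) with leg-length parameters g.
rng = np.random.default_rng(0)
S = rng.normal(size=(6, 3))          # platform attachment points
B = 3.0 * rng.normal(size=(6, 3))    # base attachment points

def f(X, g):
    x, y, z, al, be, ga = X
    R = np.array([[1.0, -ga, be], [ga, 1.0, -al], [-be, al, 1.0]])  # small angles
    p = (R @ S.T).T + np.array([x, y, z])
    return np.linalg.norm(p - B, axis=1) - g

X0 = np.zeros(6)
g0 = np.linalg.norm(S - B, axis=1)   # exact leg lengths at the nominal pose
J = error_jacobian(f, X0, g0)        # 6x6 error-sensitivity Jacobian
```

To first order, the pose correction $dX = J\,dg$ keeps the chain residuals zero when the parameters move by $dg$.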
A quick and efficient analytical solution is still necessary, even though an accurate result has been obtained by the N-R method. The N-R result is produced by numerical iteration, not from an analytical closed-form solution; the N-R method is too slow to be used in on-line real-time control, and convergence to a solution is not guaranteed. The Jacobian approximation method is therefore established. Using this method, error analysis, calibration, compensation, and an on-line control model can in turn be established. As the TAU robot is based on a 3-DOF configuration, instead of a general Stewart platform, the Jacobian approximate modification can be obtained from the error-free 3-DOF analytical solution. The mathematical description of the Jacobian approximation method is as follows.
$$X = F(\theta, \varepsilon) \approx F(\theta, 0) + J_{FORWARD}\,d\varepsilon \qquad (29)$$

where $J_{FORWARD} = F'(\theta, \varepsilon)$ and $\varepsilon$ represents the error. Thus the analytical solutions $F(\theta, 0)$ and $F(X, 0)$ are obtained, and the Jacobian approximation, as an analytical solution, is used to solve the nonlinear equations instead of the N-R method.
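Equation (29) can be sketched with a toy forward map: a planar 2-DOF arm whose link lengths carry the error ε stands in for the TAU forward kinematics, and J_FORWARD is obtained here by finite differences. All numeric values are placeholders.

```python
import numpy as np

L1, L2 = 0.7, 0.9  # nominal link lengths (placeholders)

def F(theta, eps):
    """Toy forward map X = F(theta, eps); eps perturbs the link lengths."""
    l1, l2 = L1 + eps[0], L2 + eps[1]
    return np.array([l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1]),
                     l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])])

def F_approx(theta, eps, h=1e-7):
    """Equation (29): X ~= F(theta, 0) + J_forward @ eps, with
    J_forward = dF/d(eps) built by forward differences."""
    X0 = F(theta, np.zeros_like(eps))
    J = np.column_stack([(F(theta, h * np.eye(len(eps))[j]) - X0) / h
                         for j in range(len(eps))])
    return X0 + J @ eps

theta = np.array([0.3, 0.5])
eps = np.array([1e-3, -5e-4])
err = np.linalg.norm(F_approx(theta, eps) - F(theta, eps))
```

Because this toy map is linear in ε, the approximation error `err` is essentially rounding noise; for the real nonlinear kinematics it would be second order in ε.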
A real tool is attached to the wrist of the robot whenever the robot is used for an application; here, the probe plays the role of such a tool.
From the six kinematic chains, the equations established from the D-H models are

$$f_1 = f_1(x, y, z, \alpha, \beta, \gamma, g) = 0$$
$$f_2 = f_2(x, y, z, \alpha, \beta, \gamma, g) = 0 \qquad (30)$$
$$\vdots$$
$$f_6 = f_6(x, y, z, \alpha, \beta, \gamma, g) = 0$$
Differentiating all the equations with respect to the variables $x, y, z, \alpha, \beta, \gamma$ and $g$, where $g$ is a vector containing all geometric design parameters,
again yields Equations (19) and (21)–(26): $J_1\,dX = dG = J_2\,dg$, and hence $dX = (J_1^{-1}J_2)\,dg$ with $J = J_1^{-1}J_2$.
$$\begin{bmatrix} dX_P \\ dY_P \\ dZ_P \end{bmatrix} = DR\begin{bmatrix} x_L \\ y_L \\ z_L \end{bmatrix} + R\begin{bmatrix} dx_L \\ dy_L \\ dz_L \end{bmatrix} + \begin{bmatrix} dx \\ dy \\ dz \end{bmatrix} \qquad (28)$$

Where

$$\begin{bmatrix} dX_P \\ dY_P \\ dZ_P \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & M_{11} & M_{12} & M_{13} \\ 0 & 1 & 0 & M_{21} & M_{22} & M_{23} \\ 0 & 0 & 1 & M_{31} & M_{32} & M_{33} \end{bmatrix} \begin{bmatrix} dx \\ dy \\ dz \\ d\alpha \\ d\beta \\ d\gamma \end{bmatrix} + R\begin{bmatrix} dx_L \\ dy_L \\ dz_L \end{bmatrix} \qquad (32)$$

where the $M_{ij}$ come from $DR\,[x_L, y_L, z_L]^T$. Substituting $dX = (J_1^{-1}J_2)\,dL$ into Equation (32),
Finally

$$\begin{bmatrix} dX_P \\ dY_P \\ dZ_P \end{bmatrix} = \left[\begin{bmatrix} 1 & 0 & 0 & M_{11} & M_{12} & M_{13} \\ 0 & 1 & 0 & M_{21} & M_{22} & M_{23} \\ 0 & 0 & 1 & M_{31} & M_{32} & M_{33} \end{bmatrix} \cdot J \quad\; R\right] \begin{bmatrix} dL_1 \\ dL_2 \\ \vdots \\ dL_N \\ dx_L \\ dy_L \\ dz_L \end{bmatrix}$$
The final Jacobian matrix with a probe is

$$\left[\begin{bmatrix} 1 & 0 & 0 & M_{11} & M_{12} & M_{13} \\ 0 & 1 & 0 & M_{21} & M_{22} & M_{23} \\ 0 & 0 & 1 & M_{31} & M_{32} & M_{33} \end{bmatrix} \cdot J \quad\; R\right]$$
$$f_1 = f_1(\theta_1, \theta_2, \theta_3, x, y, z, \alpha, \beta, \gamma, g_1) = 0$$
$$f_2 = f_2(\theta_1, \theta_2, \theta_3, x, y, z, \alpha, \beta, \gamma, g_1) = 0 \qquad (33)$$
$$\vdots$$
$$f_6 = f_6(\theta_1, \theta_2, \theta_3, x, y, z, \alpha, \beta, \gamma, g_1) = 0$$

$$f_7 = f_7(x, y, z, \alpha, \beta, \gamma, g_2) = 0$$
$$f_8 = f_8(x, y, z, \alpha, \beta, \gamma, g_2) = 0$$
$$f_9 = f_9(x, y, z, \alpha, \beta, \gamma, g_2) = 0$$

from

$$\begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix} = \begin{bmatrix} R & \begin{matrix} x \\ y \\ z \end{matrix} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_L \\ y_L \\ z_L \\ 1 \end{bmatrix} \qquad (34)$$
$$df_i = 0, \text{ i.e.}$$

$$\frac{\partial f_i}{\partial \theta_1}d\theta_1 + \frac{\partial f_i}{\partial \theta_2}d\theta_2 + \frac{\partial f_i}{\partial \theta_3}d\theta_3 + \frac{\partial f_i}{\partial x}dx + \frac{\partial f_i}{\partial y}dy + \frac{\partial f_i}{\partial z}dz + \frac{\partial f_i}{\partial \alpha}d\alpha + \frac{\partial f_i}{\partial \beta}d\beta + \frac{\partial f_i}{\partial \gamma}d\gamma + \frac{\partial f_i}{\partial g_1}dg_1 = 0 \qquad (35)$$
$$-\partial G_1 = \begin{bmatrix} -\frac{\partial f_1}{\partial g_1}dg_1 \\ \vdots \\ -\frac{\partial f_6}{\partial g_1}dg_1 \end{bmatrix} \qquad (39)$$
Where

$$J_3 = \begin{bmatrix} \frac{\partial f_7}{\partial x} & \frac{\partial f_7}{\partial y} & \frac{\partial f_7}{\partial z} & \frac{\partial f_7}{\partial \alpha} & \frac{\partial f_7}{\partial \beta} & \frac{\partial f_7}{\partial \gamma} \\ \frac{\partial f_8}{\partial x} & \frac{\partial f_8}{\partial y} & \frac{\partial f_8}{\partial z} & \frac{\partial f_8}{\partial \alpha} & \frac{\partial f_8}{\partial \beta} & \frac{\partial f_8}{\partial \gamma} \\ \frac{\partial f_9}{\partial x} & \frac{\partial f_9}{\partial y} & \frac{\partial f_9}{\partial z} & \frac{\partial f_9}{\partial \alpha} & \frac{\partial f_9}{\partial \beta} & \frac{\partial f_9}{\partial \gamma} \end{bmatrix} \qquad (41)$$
$$-\partial G_2 = \begin{bmatrix} -\frac{\partial f_7}{\partial g_2}dg_2 \\ -\frac{\partial f_8}{\partial g_2}dg_2 \\ -\frac{\partial f_9}{\partial g_2}dg_2 \end{bmatrix} \qquad (42)$$
From Equation (36),

$$dX = -J_2^{-1}\,\partial G_1 - J_2^{-1} J_1\,d\theta \qquad (43)$$

$$J_3 J_2^{-1} J_1\,d\theta = J_5\begin{bmatrix} dx_L \\ dy_L \\ dz_L \end{bmatrix} - J_3 J_2^{-1} J_4\begin{bmatrix} dL_1 \\ dL_2 \\ \vdots \\ dL_N \end{bmatrix} \qquad (44)$$
Where

$$J_4 = \begin{bmatrix} \frac{\partial f_1}{\partial L_1} & \cdots & \frac{\partial f_1}{\partial L_N} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_6}{\partial L_1} & \cdots & \frac{\partial f_6}{\partial L_N} \end{bmatrix} \qquad (45)$$

and

$$J_5 = \begin{bmatrix} \frac{\partial f_7}{\partial x_L} & \frac{\partial f_7}{\partial y_L} & \frac{\partial f_7}{\partial z_L} \\ \frac{\partial f_8}{\partial x_L} & \frac{\partial f_8}{\partial y_L} & \frac{\partial f_8}{\partial z_L} \\ \frac{\partial f_9}{\partial x_L} & \frac{\partial f_9}{\partial y_L} & \frac{\partial f_9}{\partial z_L} \end{bmatrix} \qquad (46)$$
Finally

$$d\theta = [J_3 J_2^{-1} J_1]^{-1} \cdot \left[-J_3 J_2^{-1} J_4 \;\middle|\; J_5\right] \cdot \begin{bmatrix} dL_1 \\ \vdots \\ dL_N \\ dx_L \\ dy_L \\ dz_L \end{bmatrix} \qquad (47)$$
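Equation (47) can be exercised at the shape level. The Jacobian blocks below are random stand-ins with the dimensions used in the chapter (six chain equations, three drive angles, N link parameters); only the assembly of the formula is illustrated, not real TAU sensitivities.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                               # number of link parameters (assumed)
J1 = rng.normal(size=(6, 3))        # df/d(theta): 6 equations x 3 drives
J2 = rng.normal(size=(6, 6))        # df/dX
J3 = rng.normal(size=(3, 6))        # d(f7..f9)/dX
J4 = rng.normal(size=(6, N))        # df/dL
J5 = rng.normal(size=(3, 3))        # d(f7..f9)/d(xL, yL, zL)

dL = rng.normal(size=N) * 1e-4      # link-length perturbations
dxL = rng.normal(size=3) * 1e-4     # probe-point perturbations

# Equation (47): dtheta = [J3 J2^-1 J1]^-1 [-J3 J2^-1 J4 | J5] [dL; dxL]
A = J3 @ np.linalg.inv(J2) @ J1                       # 3x3
rhs = np.hstack([-J3 @ np.linalg.inv(J2) @ J4, J5])   # 3x(N+3)
dtheta = np.linalg.solve(A, rhs @ np.hstack([dL, dxL]))
```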
Given that all the parts of a robot have manufacturing and misalignment errors, as well as thermal errors, errors must be considered for every component in order to model the accuracy of the robotic system correctly. An error budget is carried out in this study, and the error sensitivity of the robot kinematics with respect to any of the parameters can be obtained from the error model; this is realized through the established Jacobian matrix.
To find the parameters in the error model that are linearly dependent, and those that are difficult to observe, the Jacobian matrix is analyzed using the SVD (Singular Value Decomposition) method. A methodical way of determining which parameters are redundant is to investigate the singular vectors: an inspection of the last column of V reveals that some elements are dominant in order of magnitude, which implies that the corresponding columns of the Jacobian matrix are linearly dependent. The work of reducing the number of error parameters must continue until no singularities remain and the condition number has reached an acceptable value.
A total of 40 redundant design variables of the 71 design parameters are eliminated by inspecting the numerical Jacobian matrix, leaving 31 independent parameters. Table 7 in Section 6 lists the remaining calibration parameters.
When the SVD is completed and a linearly independent set of error model parameters has been determined, the error budget can be computed. The mathematical description of the error budget is as follows:
$$J = U S V^T$$
$$dX = J\,dg = U S V^T\,dg \qquad (48)$$
$$U^T\,dX = S V^T\,dg$$
$$dg = V\,(U^T\,dX)/S_{ii} \qquad (49)$$
Thus, if dX is given as the required accuracy of the TAU robot, the error budget dg can be determined.
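Equations (48)-(49) amount to a pseudo-inverse via the SVD. A sketch follows, with a random stand-in for the reduced 6 × 31 Jacobian and an assumed allowable TCP error; the real TAU Jacobian would replace `J`.

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(6, 31))        # stand-in for the reduced Jacobian

U, s, Vt = np.linalg.svd(J, full_matrices=False)   # J = U @ diag(s) @ Vt
dX = np.full(6, 0.05)               # assumed allowable TCP error components
dg = Vt.T @ ((U.T @ dX) / s)        # Equation (49): dg = V (U^T dX) / S_ii
```

`dg` is the minimum-norm parameter-error vector that reproduces the given TCP error `dX`.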
Given the D-H parameters for all three upper arms and the main column, the locations of the joints on each of the three upper arms are known accurately. The six chain equations are created for the six link lengths, as follows, where

$$Upperarm\_point = f(\varepsilon)$$
$$S_i = R P_i^h \qquad (51)$$

Here $P_i^h$ denotes the position of the joint center on the end plate in local coordinates, and $R$ is the rotation matrix of the end plate.

$$L_i = P_h + S_i - P_i^b \qquad (52)$$

$P_h$ is the position coordinate of the center of the end plate. For the end-plate velocities,

$$\dot{L} = J\dot{X} \qquad (53)$$

where $\dot{L}$ is the vector of link velocities and $\dot{X} = [\dot{P}_h^T, \omega^T]^T$ is the velocity vector.
(Figure: the vectors $S_i$, $P_i^h$, $P_h$, $P_i^b$ and $L_i$ of one kinematic chain.)
$$J = \begin{bmatrix} z_1^T & (s_1 \times z_1)^T \\ \vdots & \vdots \\ z_6^T & (s_6 \times z_6)^T \end{bmatrix} \qquad (56)$$

$$A = \frac{\lambda_{max}(J^{-1})}{\lambda_{min}(J^{-1})} \qquad (57)$$
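The index of Equation (57) is the 2-norm condition number of the Jacobian and can be computed from singular values. In this sketch `J` is a random stand-in for the 6×6 manipulator Jacobian of Equation (56).

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(6, 6))                          # stand-in Jacobian

# Equation (57): ratio of largest to smallest singular value of J^{-1}
s = np.linalg.svd(np.linalg.inv(J), compute_uv=False)
A = s.max() / s.min()   # equals the 2-norm condition number of J itself
```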
5. System Stiffness
The stiffness of the robot is a very important performance characteristic, with a significant influence on robotic applications such as cutting, milling and grinding. In this chapter, general formulations for the stiffness of the robotic system and the stiffness measurement results are presented; the TCP stiffness is calculated from theoretical analysis and modeling. In the stiffness analysis, the stiffness of each individual component in the relevant directions is the output of the stiffness model.
Based on the designed robot with given component errors, error modeling is used to map the robot error over its workspace. A thermal model is also established, and deflection under load is part of the modeling as well. This comprehensive error model is the basis for error analysis and robotic product design; it is also used, fully or in part, for error compensation. For error compensation, however, suitable sensors have to be used.
As far as measurement is concerned, it is important to choose a suitable performance evaluation standard; the type of sensor is then selected based on the evaluation method. In selecting sensors, resolution, repeatability and accuracy under the given environment are the key considerations, while price and user-friendliness are also weighted heavily. The measurement procedure is carefully generated, and measurement is performed using certified metrology equipment only, to ensure trustworthy results.
$$l_i = g_i(R, d) \qquad (58)$$

where $d = [x, y, z]^T$ is the position vector of the platform coordinate system's origin in the base coordinate system, $l_i$ is the length of the i-th leg, and $g_i$ is a function of R and d only, for constant geometric parameters of the i-th leg.
$$R = \begin{bmatrix} \cos\phi\cos\theta & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & \cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi \\ \sin\phi\cos\theta & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi \\ -\sin\theta & \cos\theta\sin\psi & \cos\theta\cos\psi \end{bmatrix} \qquad (59)$$
Above is the rotation matrix relating the platform's coordinate system to the base coordinate system. Here R is constructed using Roll-Pitch-Yaw (RPY) angle rotations, where roll = φ around the z axis, pitch = θ around the y axis, and yaw = ψ around the x axis. Thus, R is a rotation of ψ about the x axis, followed by a rotation of θ around the y axis, ending with a rotation of φ around the z axis.
Equation (58) represents the inverse kinematic solution: for a given R and d, the i-th leg length $l_i$ can be easily calculated.
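Equations (58)-(59) can be sketched directly: build R from the RPY angles and compute one leg length for illustrative attachment points. All numeric values and the helper names are placeholders.

```python
import numpy as np

def rpy(phi, theta, psi):
    """Equation (59): R = Rz(phi) @ Ry(theta) @ Rx(psi), roll-pitch-yaw."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(phi), -s(phi), 0], [s(phi), c(phi), 0], [0, 0, 1]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rx = np.array([[1, 0, 0], [0, c(psi), -s(psi)], [0, s(psi), c(psi)]])
    return Rz @ Ry @ Rx

def leg_length(R, d, p_plat, p_base):
    """Equation (58): leg length for attachment points p_plat (platform
    frame) and p_base (base frame)."""
    return np.linalg.norm(R @ p_plat + d - p_base)

R = rpy(0.1, 0.2, 0.3)
li = leg_length(R, np.array([0.0, 0.0, 1.0]),
                np.array([0.2, 0.1, 0.0]), np.array([0.3, -0.2, 0.0]))
```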
If Equation (58) is expanded in a Taylor series and only the first-order term is kept, the twist vector Δp is related to the change in leg length Δl_i through the vector J_i:

$$\Delta p = J_i\,\Delta l_i \qquad (60)$$

where

$$J_i = \left[\frac{\partial x}{\partial g_i}, \frac{\partial y}{\partial g_i}, \frac{\partial z}{\partial g_i}, \frac{\partial \psi}{\partial g_i}, \frac{\partial \theta}{\partial g_i}, \frac{\partial \phi}{\partial g_i}\right]^T \qquad (61)$$
Assembling all legs,

$$\Delta p = J\,\Delta q \qquad (63)$$

From the principle of duality between the force/torque and velocity fields, or what is more commonly known as contragradience,

$$f = J^T \tau \qquad (64)$$

where $\tau = [F_x, F_y, F_z, M_x, M_y, M_z]^T$ is the wrench, $f = [f_1, f_2, f_3, f_4, f_5, f_6]^T$ is the vector of forces experienced by the legs, and $J^T$ is the transpose of the Jacobian J described earlier.
As previously mentioned, the static stiffness (or rigidity) of the mechanism can be a primary consideration in the design of a parallel link manipulator for certain applications, specifically those involving large forces and high accuracy. The static stiffness of the PLM is a function of several factors. The rigidity of the platform and the base is much greater than that of the legs and can therefore be considered infinite (in general, the manipulator's joints are the least stiff elements in the structure and hence dictate the manipulator stiffness). If $k_i$ is the axial or arm stiffness, then for the i-th leg or arm

$$f_i = k_i\,\Delta l_i \qquad (65)$$

where $f_i$ is the force needed to cause a change $\Delta l_i$ of the i-th leg length. Assembling the equations for all the legs, Equation (65) becomes

$$f = k\,\Delta q \qquad (66)$$

Substituting for $\Delta q$ from Equation (63),

$$f = k J^{-1}\,\Delta p \qquad (67)$$
Multiplying both sides of Equation (67) by $J^{-T}$ and substituting $f$ with $J^T\tau$ from Equation (64) gives

$$\tau = J^{-T} k J^{-1}\,\Delta p \qquad (68)$$

Equation (68) can be interpreted as follows: $\tau$ is the wrench required to cause the platform to experience a twist of $\Delta p$. The stiffness is therefore

$$J^{-T} k J^{-1} \qquad (69)$$

From Equation (69), the stiffness of the robot can be obtained, including the component or joint stiffnesses $K_i$. In order to obtain the total stiffness of the robot, the joint stiffness has to be measured.
From Equation (68), the following Equation (70) is obtained by inverting the matrix $J^{-T} k J^{-1}$:

$$\Delta p = J K_i^{-1} J^T \tau \qquad (70)$$

Equation (70) is very important for measuring the joint stiffness: many different equations can be obtained by applying forces $\tau$ in different directions and measuring the resulting deflections $\Delta p$. The least squares method is then applied to solve Equation (70). With the $1/k_i$ as the unknowns, Equation (70) reduces to a set of linear equations, since $K_i = [1/k_i]$ is a diagonal matrix.
Fx     Fy     Fz    dx        dy        dz
-180   0      0     -0.4561   0.1767    -0.1211
-360   0      0     -0.9232   0.2812    -0.2723
-360   0      0     -0.9604   0.2825    -0.2452
-180   0      0     -0.4822   0.1983    -0.0943
-180   0      0     -0.5359   0.2062    -0.1103
-360   0      0     -0.9775   0.3464    -0.2344
-180   0      0     -0.7276   0.0201    -0.4238
-360   0      0     -1.423    0.0073    -0.8206
-360   0      0     -1.4246   -0.0099   -0.7893
-180   0      0     -0.768    0.0184    -0.44
-180   0      0     -0.7194   0.0518    -0.4242
-360   0      0     -1.4357   0.0577    -0.7922
0      -275   25    0.0061    -0.8927   0.0336
0      -275   25    -0.0004   -0.9184   -0.0111
-40    -295   10    0.134     -1.1826   -0.0926
-40    -295   10    0.1308    -1.2146   -0.1407
-360   0      0     -0.9344   0.2758    -0.2987
Fig. 5.6 also gives the standard deviation of the measurement data. Based on the results, the measurement data can be trusted; the standard deviation of the residual error is 0.042 mm. Verification of the solved stiffness also agrees well. The stiffness model can provide a method of position compensation to reach a high level of accuracy: with a force sensor measuring the process force in real time, the impact on position deformation can be estimated and compensated.
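The least-squares identification of the joint stiffnesses described above can be sketched as follows: since Δp = J diag(1/k_i) Jᵀ τ in Equation (70) is linear in the compliances 1/k_i, each applied wrench contributes a block of linear equations. Here J, the "true" stiffnesses and the load cases are all synthetic stand-ins, not measured TAU data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
J = rng.normal(size=(6, n))                 # stand-in Jacobian
k_true = rng.uniform(1e3, 1e4, size=n)      # synthetic "true" joint stiffnesses

taus = rng.normal(size=(20, 6)) * 100       # 20 applied wrenches
rows, obs = [], []
for tau in taus:
    w = J.T @ tau                           # joint loads J^T tau
    # dp = sum_i (1/k_i) * J[:, i] * w[i]  ->  (J * w) is a 6 x n block
    rows.append(J * w)
    obs.append(J @ (w / k_true))            # "measured" deflection dp
A = np.vstack(rows)                         # 120 x n design matrix
b = np.concatenate(obs)                     # 120 observations
inv_k, *_ = np.linalg.lstsq(A, b, rcond=None)
k_est = 1.0 / inv_k                         # recovered joint stiffnesses
```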
The same procedure can be applied to the TAU robot. The stiffness at the TCP was measured by applying a load at the TCP and measuring the resulting displacements, see Figure 24; the results of the measurements are shown in Figure 25.
First, the surface quality of the aluminum block is recorded while cutting without position compensation; the block is then cut again with on-line compensation and the surface quality is measured with the laser. See Fig. 5.9 for the robotic milling setup. Based on the results shown in Figs. 5.10 and 5.11, the compensation procedure is effective, reducing the error to less than 0.1 mm compared with the original error of 0.5 mm.
Conclusions:
Verification of the solved stiffness agrees very well. The stiffness model provides a method to model and test robot stiffness: with a force sensor measuring the process force in real time, the impact on position deformation can be estimated and compensated.
Figure 28. Surface Quality with Compensation, Mean Error < 0.1 mm
The validation of the analytical model has been carried out, as well as the realization of the control scheme. Besides the analytical results and data, additional results used in this chapter come from three sources: simulation results from the ADAMS simulation software (see Figure 29 for details); test results from the two-arm test platform (see Table 11); and test results from the TAU prototype.
The Jacobian matrix and the N-R method need to be verified to guarantee their correctness. These verifications are made with ADAMS (commercial simulation software), see Figure 29.
The effect of the robot configurations was considered: the verification points are spread over the whole workspace, with completely different configurations. Figures 30, 6.3 and 6.4 show the position differences between the N-R method and the ADAMS simulation, indicating that results accurate to 0.06 μm relative to the ADAMS simulation have been obtained. These results confirm the correctness of the Jacobian matrix and the N-R method. Based on the simulation results, the N-R method with the analytical Jacobian matrix can be used in error modeling, error budgeting and off-line calibration.
Like most of the methods in this chapter, this method suffers from a drawback: it cannot be used for on-line position compensation and on-line control, because it is an iterative method, even with an analytical, full-size Jacobian matrix. The next section focuses on the Jacobian Approximation Method (JAM), which can deal with the on-line compensation and on-line control problems.
Figure 29. Using Adams to Verify the Analytical and Error Model
Figure 30. Position Error between the N-R Method and ADAMS Simulation
(Plots: TCP displacements in the X, Y and Z directions at twelve positions in the workspace, comparing the N-R method with the ADAMS simulation.)
(Plot: rotation angle about the Z axis at twelve positions in the workspace.)
(Plot: differences in x, y, z, α, β, γ — in μm and arcsec — at various positions in the workspace.)
Figure 32. TCP Difference between ADAMS Simulation and N-R Method
Based on the D-H model of the TAU robot with all error parameters, inverse and forward kinematic models have been established. From the mathematical point of view, the TAU kinematic problem is to solve six nonlinear equations using the Newton-Raphson method, with the Jacobian matrix as the search direction; accurate results have been obtained, within 0.06 μm of the ADAMS simulation results.
It can be observed from Figure 33 (for detailed data, see Table 6) that the JAM (Jacobian Approximation Method) is effective, with an accuracy of 1.53 μm for an input error of 1 mm (link 1 of lower arm 1); this was verified against the ADAMS simulation results. Results from the N-R method match the ADAMS simulation with a difference of only 0.06 μm.
The JAM can be used for on-line control and position compensation of the robot. For the TAU robot, a closed-form solution of the forward kinematics problem is thus reached with high accuracy, instead of the N-R numerical solution; the results are almost identical to those from ADAMS.
A series of results has been presented for the error analysis. Figure 34 shows the results of the SVD calibration, which indicate that the number of independent design variables is reduced from 71 to 31. A sudden drop can be observed in Figure 34, indicating that the parameters beyond variable #31 are unnecessary and their effect on the error model can be neglected. From Table 7, a total of 40 redundant variables are removed; Table 10 gives the result of the error budget. Tables 8 and 9 give the actuator (drive motor) error and the thermal error, which indicate that the temperature change should be controlled within ±5 °C, otherwise the system accuracy of 50 μm cannot be reached; the resolution of the drive motor should be better than 10 arcsec (1 arcsec = 1/3600 degree).
SVD calibration is carried out for the three parameters that contribute most to the final position error, see Tables 11 and 12: the Arm3 length, the link13 length and the link12 length. The calibration process is completed in only one iteration. Based on Table 12, the calibration accuracy is 4 μm for link12 and below 1 μm for the others, which indicates that the calibration method and the error model are correct.
Figure 33. Position Error between Jacobian Approximation Method and ADAMS
Drive angles                              Pose   Jacobian     Newton-Raphson   Error between J and N-R
joint1 = 0, joint2 = 0, joint3 = 0        X      0.00E+00     1.53E-03         0.001531339
                                          Y      -1.81E+00    -1.81E+00        -0.0049559
                                          Z      -1.61E-16    -9.20E-04        -0.000919889
                                          afa    5.01E-03     5.01E-03         2.634E-07
                                          bta    -9.32E-19    -9.33E-19        -1.00679E-21
                                          gma    -9.32E-19    -9.32E-19        -1.5976E-22
joint1 = 3.75, joint2 = 3.75, joint3 = 2  X      1.19E-01     1.20E-01         0.00119916
                                          Y      -1.81E+00    -1.81E+00        -0.0009736
                                          Z      -2.09E-16    -9.45E-04        -0.000945048
                                          afa    5.01E-03     5.01E-03         2.7566E-06
                                          bta    0.00E+00     9.46E-16         9.45683E-16
                                          gma    0.00E+00     -4.84E-16        -4.84153E-16
joint1 = 7.5, joint2 = 7.5, joint3 = 4    X      2.37E-01     2.38E-01         0.00135537
                                          Y      -1.80E+00    -1.80E+00        0.0007562
                                          Z      -1.79E-16    -9.69E-04        -0.000968876
                                          afa    5.02E-03     5.02E-03         3.547E-07
                                          bta    0.00E+00     3.15E-16         3.14853E-16
                                          gma    0.00E+00     -4.82E-16        -4.82129E-16
joint1 = 11.25, joint2 = 11.25, joint3 = 6  X    3.54E-01     3.55E-01         0.00149511
                                          Y      -1.78E+00    -1.78E+00        0.0001837
                                          Z      -1.79E-16    -9.91E-04        -0.000991397
                                          afa    5.03E-03     5.03E-03         3.263E-06
                                          bta    0.00E+00     -3.10E-18        -3.10077E-18
                                          gma    -9.32E-19    1.15E-18         2.0782E-18
joint1 = 15, joint2 = 15, joint3 = 8      X      4.70E-01     4.71E-01         0.00111796
                                          Y      -1.75E+00    -1.75E+00        -0.0027737
                                          Z      -5.96E-17    -1.01E-03        -0.001012624
                                          afa    5.05E-03     5.05E-03         1.7286E-06
                                          bta    0.00E+00     0.00E+00         0
                                          gma    0.00E+00     0.00E+00         0
joint1 = 18.75, joint2 = 18.75, joint3 = 10  X   5.83E-01     5.85E-01         0.00173003
                                          Y      -1.72E+00    -1.72E+00        0.0017688
                                          Z      -5.96E-17    -1.03E-03        -0.001032565
                                          afa    5.07E-03     5.08E-03         6.0465E-06
                                          bta    4.66E-19     -6.39E-16        -6.39425E-16
                                          gma    -9.32E-19    9.59E-16         9.6015E-16
joint1 = 22.5, joint2 = 22.5, joint3 = 12  X     6.94E-01     6.96E-01         0.00184612
                                          Y      -1.68E+00    -1.68E+00        0.0036642
                                          Z      2.09E-16     -1.05E-03        -0.00105122
                                          afa    5.11E-03     5.11E-03         -3.4323E-06
                                          bta    0.00E+00     -8.47E-22        -8.47033E-22
                                          gma    0.00E+00     8.47E-22         8.47033E-22
joint1 = 26.25, joint2 = 26.25, joint3 = 14  X   8.03E-01     8.04E-01         0.00099179
                                          Y      -1.63E+00    -1.63E+00        0.002734
                                          Z      0.00E+00     -1.07E-03        -0.001068582
                                          afa    5.14E-03     5.14E-03         3.7091E-06
                                          bta    0.00E+00     3.26E-16         3.25672E-16
                                          gma    0.00E+00     -4.78E-16        -4.77901E-16
joint1 = 30, joint2 = 30, joint3 = 16     X      9.07E-01     9.09E-01         0.00170544
                                          Y      -1.57E+00    -1.57E+00        -0.0012306
                                          Z      -2.09E-16    -1.08E-03        -0.001084643
                                          afa    5.19E-03     5.19E-03         -2.0346E-06
                                          bta    0.00E+00     8.47E-22         8.47033E-22
                                          gma    0.00E+00     0.00E+00         0
joint1 = 33.75, joint2 = 33.75, joint3 = 18  X   1.01E+00     1.01E+00         -0.0004597
                                          Y      -1.51E+00    -1.51E+00        0.0015319
                                          Z      1.49E-16     -1.10E-03        -0.001099391
                                          afa    5.24E-03     5.24E-03         -7.54E-08
                                          bta    0.00E+00     -6.75E-16        -6.74923E-16
                                          gma    0.00E+00     4.55E-18         4.54772E-18
joint1 = 37.5, joint2 = 37.5, joint3 = 18  X     1.10E+00     1.11E+00         0.0060663
                                          Y      -1.44E+00    -1.44E+00        0.0007547
                                          Z      2.98E-17     -1.11E-03        -0.001112819
                                          afa    5.30E-03     5.30E-03         2.869E-07
                                          bta    0.00E+00     0.00E+00         0
                                          gma    0.00E+00     0.00E+00         0
joint1 = 41.25, joint2 = 41.25, joint3 = 22  X   1.20E+00     1.20E+00         -0.002128
                                          Y      -1.36E+00    -1.36E+00        -0.0038563
                                          Z      -2.98E-17    -1.12E-03        -0.001124931
                                          afa    5.37E-03     5.37E-03         -1.1E-07
                                          bta    0.00E+00     0.00E+00         0
                                          gma    0.00E+00     0.00E+00         0
Table 6. Comparison between the Results of JAM and N-R Method
Error Modeling and Accuracy of Parallel Industrial Robots 635
Parameter Number   Parameter Definition     Parameter
16                 height of the TCP        a
22                 joint 3                  a6
23                 arm 3                    a7
24                 joint 1 & arm 1          d1
25                 short arm 1              d3
28                 joint 3                  d6
31                 joint_link11_arm1        y1
34                 joint_link21_arm1        y2
37                 joint_link31_arm1        y3
40                 joint_link12_arm2        y4
43                 joint_link22_arm2        y5
46                 joint_link13_arm3        y6
48                 joint_link11_platform    x11
49                 joint_link11_platform    y11
51                 joint_link31_platform    x22
52                 joint_link31_platform    y22
54                 joint_link21_platform    x33
55                 joint_link21_platform    y33
56                 joint_link21_platform    z33
57                 joint_link12_platform    x44
58                 joint_link12_platform    y44
59                 joint_link12_platform    z44
60                 joint_link22_platform    x55
61                 joint_link22_platform    y55
62                 joint_link22_platform    z55
63                 joint_link13_platform    x66
64                 joint_link13_platform    y66
67                 link 11                  L1
68                 link 31                  L2
69                 link 21                  L3
70                 link 22                  L4
Error Budget

No.  Description              Variable Name  Budget
1    drive 1                  Joint 1        32 arcsec
2    drive 2                  Joint 2        ar6 arcsec
3    drive 3                  Joint 3        1.2 arcsec
17   joint 1 and arm 1        a1             1.62 um
24   joint 1 and arm 1        d1             363 um
4    joint 1 and arm 1        sit1           10.4 arcsec
10   joint 1 and arm 1        afa1           110 arcsec
18   joint_link11_arm1        a2             373 um
19   short arm 1              a3             174 um
25   short arm 1              d3             449 um
5    short arm 1              sit3           9.24 arcsec
11   short arm 1              afa3           9.45 arcsec
20   joint 2 and arm 2        a4             1.9 mm
26   joint 2 and arm 2        d4             485 um
6    joint 2 and arm 2        sit4           1.22 arcsec
12   joint 2 and arm 2        afa4           38.5 arcsec
21   short arm 2              a5             430 um
27   short arm 2              d5             D
7    short arm 2              sit5           11.2 arcsec
13   short arm 2              afa5           D
22   joint 3                  a6             0
28   joint 3                  d6             D
8    joint 3                  sit6           4.64 arcsec
14   joint 3                  afa6           D
23   arm 3                    a7             0
29   arm 3                    d7             D
9    arm 3                    sit7           6.14 arcsec
15   arm 3                    afa7           D
30   joint_link11_arm1        x1             D
31   joint_link11_arm1        y1             43 um
32   joint_link11_arm1        z1             123 um
33   joint_link21_arm1        x2             D
34   joint_link21_arm1        y2             49.4 um
35   joint_link21_arm1        z2             D
36   joint_link31_arm1        x3             115 um
37   joint_link31_arm1        y3             108 um
38   joint_link31_arm1        z3             D
39   joint_link12_arm2        x4             D
40   joint_link12_arm2        y4             1.28 mm
41   joint_link12_arm2        z4             D
42   joint_link22_arm2        x5             2.6 mm
43   joint_link22_arm2        y5             68.2 um
44   joint_link22_arm2        z5             D
45   joint_link13_arm3        x6             D
46   joint_link13_arm3        y6             21.6 um
47   joint_link13_arm3        z6             213 um
48   joint_link11_platform    x11            50 um
49   joint_link11_platform    y11            50 um
50   joint_link11_platform    z11            D
51   joint_link31_platform    x22            50 um
52   joint_link31_platform    y22            50 um
53   joint_link31_platform    z22            D
54   joint_link21_platform    x33            50 um
55   joint_link21_platform    y33            50 um
56   joint_link21_platform    z33            13.3 um
57   joint_link12_platform    x44            50 um
58   joint_link12_platform    y44            50 um
59   joint_link12_platform    z44            37.9 um
60   joint_link22_platform    x55            50 um
61   joint_link22_platform    y55            50 um
62   joint_link22_platform    z55            398 um
63   joint_link13_platform    x66            50 um
64   joint_link13_platform    y66            50 um
65   joint_link13_platform    z66            50 um
16   height of the TCP        a              436 um
66   link 13                  L0             0
67   link 11                  L1             88 um
68   link 31                  L2             151 um
69   link 21                  L3             54.3 um
70   link 22                  L4             213 um
71   link 12                  L5             1.47 mm
Method                            LM-SVD        Gauss-Newton   Nonlinear optimization
Average Absolute Accuracy (mm)    0.11718325    0.11395309     0.11395309
Average Standard Deviation (mm)   0.04774522    0.04849159     0.04849159
The results discussed above indicate that a closed-form forward kinematic solution can be computed much faster than with the conventional iterative algorithms. The closed-form solution is difficult to obtain because the problem is highly nonlinear. The N-R (Newton-Raphson) iterative method gives accurate results, but it typically requires an average of 4290 multiplications and 630 sine evaluations. The polynomial-based method must solve at least a 16th-order polynomial equation, which is slow and whose solution contains spurious roots. The proposed JAM algorithm eliminates these drawbacks: it provides an effective closed-form solution with an accuracy of 1.53 um.
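For readers unfamiliar with the iterative baseline that the JAM replaces, the sketch below shows a minimal Newton-Raphson iteration on a scalar equation; this is purely illustrative (the chapter's forward-kinematics version iterates on a constraint vector with its Jacobian, not a scalar), and the function names are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Minimal Newton-Raphson iteration: repeatedly linearize f at the current
// estimate and solve the linearized equation for the next estimate.
// For forward kinematics, f and df become the constraint vector and its
// Jacobian matrix; here they are scalar functions for illustration only.
double newtonRaphson(const std::function<double(double)>& f,
                     const std::function<double(double)>& df,
                     double x0, double tol = 1e-12, int maxIter = 50) {
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double step = f(x) / df(x);   // solve the linearized equation
        x -= step;
        if (std::fabs(step) < tol) break;   // converged
    }
    return x;
}
```

Each iteration costs one function and one derivative evaluation; in the multi-dimensional kinematic case this becomes a Jacobian build and a linear solve, which is where the operation counts quoted above come from.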
Table 13 below summarizes the features of the methods proposed by the author for solving the parallel-robotics problems involved. The methodology and approach have also been used in other robotics applications to improve system modeling, control and process accuracy.
The TAU robot represents a new configuration of parallel robot. This configuration is well suited to operating with high precision and high stiffness within a large working space compared with a serial robot, combining the advantages of both parallel and serial robots.
In this study, the kinematic model and error model of the robot are established with all error sources considered, using the Jacobian matrix method. Meanwhile, a very effective Jacobian Approximation Method is introduced to solve the forward kinematic problem in place of the Newton-Raphson iteration method; this means that a closed-form solution can be obtained instead of a numerical iterative one. A full-size Jacobian matrix is used in carrying out the error analysis, error budget, and model parameter estimation and identification. Simulation results indicate that both the Jacobian matrix and the Jacobian Approximation Method are correct, with micron-level accuracy. ADAMS simulation results are used to verify the established models. Experimental results obtained on both the laboratory prototype and the industrial prototype show that the established models enable high precision to be realized for this new class of robots.
The established models are also used in the development of other precision robotic systems. Precision robotic machining processes using existing serial robots have been realized successfully with industry partners. These precision processes include robotic milling of aluminum engine blocks and belt grinding of complicated curved-surface parts such as engine blades and human knee-joint replacements.
Based on the analytical Jacobian matrix solution, SVD calibration is carried out for three parameters that contribute to the final position error; the accuracy of calibration is within 4 um for individual components.
In the engine-block milling application, the position compensation procedure is validated, reducing the error to less than 0.1 mm from the original error of 0.5 mm.
For the TAU robot, the global error function is a function not only of the component sizes but also of the robot position, and the global error has three components (X, Y and Z). This multi-objective optimization problem can therefore be transformed into a single-objective one, and the combined global error function can be written as the objective function

F(X_i, θ_i) = ω_x F_x + ω_y F_y + ω_z F_z    (71)

where

F_x = (C_x + Σ_i S_ix X_i)²
F_y = (C_y + Σ_i S_iy X_i)²    (72)
F_z = (C_z + Σ_i S_iz X_i)²

Introducing the constraints g_j(X_i) through Lagrange multipliers λ_j gives

F(X_i, θ_i) = ω_x F_x + ω_y F_y + ω_z F_z + Σ_{j=1..k} λ_j g_j(X_i)    (73)

with the optimality conditions

∇F(X*, θ*) + Σ_{j=1..k} λ_j ∇g_j(X*) = 0
λ_j g_j(X*) = 0    (j = 1, 2, ..., k)    (74)
λ_j ≥ 0    (j = 1, 2, ..., k)
Since, as noted above, the objective function must hold over the whole workspace, the most critical configuration (position) of the robot has to be found so that the objective function can be optimized for that situation. Because the relationship between the objective function and the position variables is not explicit, another method is needed to find it. Here the pattern search method is adopted to search for the most critical robot position, i.e. the position that maximizes the objective function. Once the most critical position is known, the final optimal result can be obtained using the Lagrange multiplier method.
The pattern search method consists of two kinds of moves: the exploratory move and the pattern (module) move. The former finds a useful search direction by evaluating variations of the objective function; the latter replaces the old point with a better new point along that direction, which is similar to the gradient direction.
Description of the Exploratory Move and Pattern Move:
Given an initial point X(0), step lengths α = (α1, α2, ..., αn)^T, a reduction factor 0 < β < 1, and a tolerance ε:
1. k ← 0.
2. Exploratory move:
   2.1 i ← 0, X̂(k,0) ← X(k).
   2.2 Evaluate the trial points X̂(k,i) + α_{i+1} e_{i+1} and X̂(k,i) − α_{i+1} e_{i+1}.
   2.3 Set X̂(k,i+1) to whichever of the three points (the two trial points and X̂(k,i)) has the smallest objective value.
   2.4 i ← i + 1.
   2.5 If i < n, go to 2.2; if i = n, go to 3.
3. If X̂(k,n) ≠ X̂(k,0), set X̂(k+1) ← X̂(k,n) and go to 5; if X̂(k,n) = X̂(k,0), go to 4.
4. If α ≤ ε, stop: the optimal solution is X* = X̂(k,n). If α > ε, set α ← βα and go to 2.1.
5. Pattern move: X̄(k+1) ← 2X̂(k+1) − X̂(k); obtain ŷ(k+1) from X̄(k+1) by an exploratory move.
6. If f(ŷ(k+1)) < f(X̂(k+1)), set X(k+1) ← ŷ(k+1), k ← k + 1, and go to 2;
   if f(ŷ(k+1)) ≥ f(X̂(k+1)), set X(k+1) ← X̂(k+1), k ← k + 1, α ← βα, and go to 2.1.
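The exploratory/pattern-move scheme above can be sketched as a Hooke-Jeeves style search. The code below is an illustrative minimal implementation, not the author's; the function names and the demo objective in the self-check are hypothetical, and it minimizes (a maximization, as used for the most critical position, just negates the objective).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
using Objective = std::function<double(const Vec&)>;

// One exploratory move: probe +/- step along each coordinate axis and keep
// whichever of the three candidate points has the smallest objective value.
static Vec exploratoryMove(const Objective& f, Vec x, const Vec& alpha) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        double base = f(x);
        Vec xp = x; xp[i] += alpha[i];
        Vec xm = x; xm[i] -= alpha[i];
        if (f(xp) < base) x = xp;
        else if (f(xm) < base) x = xm;
    }
    return x;
}

// Full pattern search: when an exploratory move improves the point, make the
// pattern (module) move 2*xNew - xOld; when no move improves it, shrink the
// step lengths by beta until they fall below eps.
Vec patternSearch(const Objective& f, Vec x, Vec alpha,
                  double beta = 0.5, double eps = 1e-8) {
    while (true) {
        Vec xNew = exploratoryMove(f, x, alpha);
        if (f(xNew) < f(x)) {
            Vec xPat(x.size());
            for (std::size_t i = 0; i < x.size(); ++i)
                xPat[i] = 2.0 * xNew[i] - x[i];    // pattern move
            Vec y = exploratoryMove(f, xPat, alpha);
            x = (f(y) < f(xNew)) ? y : xNew;
        } else {
            double maxStep = 0.0;
            for (double a : alpha) maxStep = std::max(maxStep, a);
            if (maxStep <= eps) return x;          // converged: X* found
            for (double& a : alpha) a *= beta;     // alpha <- beta * alpha
        }
    }
}
```

No gradient is evaluated anywhere, which is exactly why the method suits an objective whose dependence on the position variables is not explicit.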
(Flowchart of the pattern search method: exploratory moves with step reduction α ← βα until α ≤ ε, pattern moves X̂(k+1) = 2X̂(k+1) − X̂(k), terminating with X* = X̂(k+1).)
9. References

22

Networking Multiple Robots for Cooperative Manipulation

M. Moallem

1. Introduction
With the advent of new computing, sensor, and actuator technologies, the ap-
plication of robotic systems has been growing rapidly in the past decade. Ro-
botic systems were originally developed due to their capability to increase
productivity and operate in hazardous environments. In recent years, robotics
has found its way to a completely new range of real-world applications such as
training, manufacturing, surgery, and health care (Bernard et al., 1999; Craig,
1997; Goldberg et al., 2000; Taylor and Stoianovici, 2003). From the advanced
manufacturing domain to daily life applications, Internet-based telerobotic
systems have the potential to provide significant benefits in terms of tele-
presence, wider reachability, cost effectiveness and maximal resource utiliza-
tion. Challenging problems with regard to Internet-based telerobotic systems
include such issues as uncertain Internet time delays (Luo and Chen, 2000),
system reliability, interaction capability (Schulz et al., 2000), and augmented
Human-Machine interfaces. Due to the emergence of new areas in the field of
robotics, there is a growing need for applications that go beyond classical ones
such as simple pick-and-place operations involving a single robot.
Many conventional robot manipulators are closed-architecture, meaning that the user does not have direct access to the robot's sensory data and actuator inputs. To
operate a robot, the user is usually confined to a Robot Programming Language
(RPL) that is specific to the robotic system being used (Craig, 1997). This is re-
strictive in many cases, including the robotic applications requiring coordina-
tion of such robots. For example, in developing robotic work-cells that require
interaction of two or more robots at a time, there is a growing need for robots
to share and exchange information through a network. Use of an RPL is re-
strictive in such cases due to the limited capability of the programming environment. In this work we present a laboratory setup that can be utilized to perform multi-robot scheduling and cooperation tasks using industry-grade manipulators. The objective is to create a flexible software environment so that the robot programmer can perform robotic tasks using a programming language such as C/C++ and a real-time operating system.
The local networking and communication platform utilizes two types of mechanisms, as shown in Figure 2: serial ports, which connect the target machines to the robot controllers, and an Ethernet network, which links together all
the target and host machines. Many commercial robots use serial communica-
tion between the host computers and the robot controllers. In this setup, we
use the RS232 serial lines to transmit control commands and sensory informa-
tion between the target machines and robot controllers. The robot motion
commands are issued by a target machine and are sent to the robot controllers.
The robot controllers also transmit sensory information such as gripper dis-
tances, force sensor readings, and the status of executed commands, back to
the target machines. Similarly, the target machines transmit sensory and com-
mand data through the network to other machines. The robot controllers are
embedded computer systems without network connectivity and standard op-
erating system support. Lack of network connectivity is a main drawback of
many conventional robot controllers. In Figure 2, the three target machines run
under the VxWorks real-time operating system. However, any other operating
system that supports networking tools and inter-task synchronization and
communication primitives such as semaphores, message queues, and signals,
can be used to do the same job.
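To make the serial side of this topology concrete, the sketch below opens and configures a raw RS-232 link of the kind used between a target machine and a robot controller. It uses the portable POSIX termios API rather than the VxWorks serial driver of the original setup, and the function name and baud-rate choice are assumptions for illustration.

```cpp
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

// Open a serial device for raw 8-bit command/response traffic.
// Returns an open file descriptor, or -1 on any failure.
int openSerialPort(const char* device, speed_t baud) {
    // O_NONBLOCK avoids blocking on modem control lines during open().
    int fd = open(device, O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0) return -1;

    termios tio{};
    if (tcgetattr(fd, &tio) != 0) { close(fd); return -1; }
    cfmakeraw(&tio);                   // raw transfer: no line editing or echo
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    tio.c_cflag |= (CLOCAL | CREAD);   // ignore modem lines, enable receiver
    if (tcsetattr(fd, TCSANOW, &tio) != 0) { close(fd); return -1; }
    return fd;
}
```

A target task would then write() motion command strings to the returned descriptor and read() back the controller's status strings.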
(Figure: workcell robots — wall-track robot "T2" and stationary robot "S3".)
Real-time operating systems have emerged in the past decade to become one
of the basic building blocks of embedded computer systems, including com-
plex robotic systems. A modular approach to software development for time
critical embedded systems calls for decomposition of applications into multi-
ple tasks and use of operating system primitives. A real-time operating system
can be used to run on the supervisory computers such as the Pentium com-
puters shown in Figure 2. We have used the Tornado development environ-
ment which provides a graphical user interface and tools for developing real
time multitasking applications under VxWorks (www.wrs.com). However,
any other real-time operating system can be used for this purpose. In Figure 2,
once the programs are compiled and linked, the tasks can be downloaded into
the memory of the PC workstations running VxWorks. These computers are
used as supervisory controllers that enable communication between robots
through the network communication ports.
The starting point for implementing modular software for robotic applications
is representing the robot as a class consisting of private data attributes and
member functions as shown in Figure 3.
In the class diagram of Figure 3, the robot identification, position, speed, tor-
que, and other variables are defined as the attributes of the robot object. The
commands sent to the robots are string variables stored in the robotCommand[
] array. Similarly, the status received for each robot after executing a command
is stored in the robotResponse[ ] array. The two-way communication between the Pentium PCs in Figure 2 and the robot controllers is performed through the RS-232 serial interface. The serialPort attribute in Figure 3 is used to identify which serial port each robot object is using for communication. The
member functions of the Robot Class shown in Figure 3 are used to initialize
the robot object using InitializeRobot(), move the robot to a calibrated position
using Calrdy(), move the robot to a particular point using moveToPoint(), send a
command using SendRobotCommand(), and to open or close a serial port using
openPort() and closePort(), respectively. If needed, the above class can be modi-
fied or other classes can be inherited from it.
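A minimal C++ sketch of such a Robot class might look as follows. Attribute and member function names follow the description of Figure 3, but the method bodies and command strings are illustrative stubs (the real implementation exchanges strings with the controller over the serial port, and openPort()/closePort() are omitted here).

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the Robot class of Figure 3; bodies are stubs for illustration.
class Robot {
public:
    Robot(int id, std::string port) : robotId(id), serialPort(std::move(port)) {}

    bool InitializeRobot() { return SendRobotCommand("init"); }    // placeholder command
    bool Calrdy()          { return SendRobotCommand("calrdy"); }  // go to calibrated position
    bool moveToPoint(const std::string& pointName) {               // previously taught point
        return SendRobotCommand("movep " + pointName);             // placeholder syntax
    }
    bool SendRobotCommand(const std::string& cmd) {
        // Real version: write cmd to serialPort, then read the status string.
        robotCommand.push_back(cmd);
        robotResponse.push_back("OK");    // stubbed controller status
        return true;
    }
    const std::string& port() const { return serialPort; }
    std::size_t commandsSent() const { return robotCommand.size(); }

private:
    int robotId;                                  // robot identification
    std::string serialPort;                       // serial port used by this object
    std::vector<double> position, speed, torque;  // state attributes from Figure 3
    std::vector<std::string> robotCommand;        // commands sent to the robot
    std::vector<std::string> robotResponse;       // status strings received back
};
```

Derived classes can then override SendRobotCommand() for a different controller protocol without touching the task-level coordination code.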
One benefit of modular software development is the convenience of developing the modules one by one. After each part is finished, it can be tested and debugged before the remaining parts are implemented. As a result, the final integration and testing can be done without much difficulty. In the following we discuss some of the projects that have been performed using this environment.
Before running the system under VxWorks, the robot programming language robcom is used to send commands to the robot from the application shell, which is similar to an MS-DOS or UNIX prompt. For example, by issuing the command "joint 5, 20", the robot's fifth joint will rotate 20 degrees clockwise. This allows the commands sent to the robot to be tested at the command-prompt level before executing them from a program. The second step involves sending the commands through the serial ports using a high-level program running under VxWorks instead of the application shell. For example, the function interface SendRobotCommand() in Figure 3 is written to send commands through the serial port, and the moveToPoint() function moves the robot to a previously taught position in the robot's workspace.
In this demonstration, one robot catches an object and passes it to a second ro-
bot. The goal is to develop code for coordination of motion of two robots using
synchronization mechanisms such as semaphores provided by the operating system. A semaphore is a token which, if available, will allow the calling task to continue and, if not available, will block the calling task until the token becomes
available. At the beginning of the program, two robot objects are declared and
initialized. The first robot is programmed to move and fetch the object from a
known pick-up point that has been previously taught to it. Meanwhile, the se-
cond robot moves to the delivery position and waits for the first robot to de-
liver the object. A waiting mechanism is implemented using an empty sema-
phore by issuing a semTake() command in VxWorks which causes the task to
block until the semaphore token becomes available. When the first robot has reached the delivery point and is ready to deliver the object, it releases the empty semaphore. The task running the second robot then unblocks, the gripper closes on the object, and the transfer is completed. The second robot then moves toward the release point, where it allows the other robot to move to its final destination. At the end, both robots move to their calibration positions.
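The handoff logic described above can be sketched with a binary-semaphore token. Here the VxWorks semTake()/semGive() calls are emulated with portable std::mutex/std::condition_variable primitives, and the logged event strings are purely illustrative.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Portable stand-in for a VxWorks binary semaphore.
class BinarySemaphore {
public:
    void give() {                          // semGive(): make the token available
        { std::lock_guard<std::mutex> lk(m); token = true; }
        cv.notify_one();
    }
    void take() {                          // semTake(): block until available
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return token; });
        token = false;
    }
private:
    std::mutex m;
    std::condition_variable cv;
    bool token = false;
};

// Run the two-robot handoff and return the event log for inspection.
std::vector<std::string> runHandoff() {
    std::vector<std::string> log;
    std::mutex logMutex;
    auto record = [&](const char* s) {
        std::lock_guard<std::mutex> lk(logMutex);
        log.emplace_back(s);
    };
    BinarySemaphore atDeliveryPoint;       // starts empty, so take() blocks

    std::thread robot1([&] {
        record("robot1: fetch object from pick-up point");
        record("robot1: reach delivery point");
        atDeliveryPoint.give();            // signal readiness to hand over
    });
    std::thread robot2([&] {
        atDeliveryPoint.take();            // wait until robot1 is in position
        record("robot2: grasp object and move to destination");
    });
    robot1.join();
    robot2.join();
    return log;
}
```

Because robot2 records its event only after take() returns, its entry is guaranteed to follow robot1's two entries regardless of thread scheduling.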
There are many situations in which robots must coordinate their operations with external devices. For example, in a smart manufacturing workcell, robots have
to grab parts from devices such as indexers, or deliver parts to them in a syn-
chronized manner. A rotary indexing table is a simple example that simulates
part of a manufacturing process. The indexing table can rotate by means of a
stepping motor. The motor controller is interfaced with a VxWorks station and
a digital I/O card. The indexer and robot can be connected to the same com-
puter or to separate computers on a shared network. The situation is similar to
the previous example in part E. The program that controls the indexing table
operation is spawned as a separate task which can coordinate its operation
with other robots, for example by using semaphores or a TCP/IP client-server
mechanism on a network.
the network to a host computer that may be located in a control room. A visu-
alization program written in VRML is running on the host computer. This
computer obtains real-time sensory data from supervisory robot computers on
the network and presents to the operator a visualization of the robotic work-
cell. A main advantage of using this scheme over sending data through a cam-
era vision system is the small bandwidth required for sending sensory data, as
opposed to a relatively large bandwidth required for transmitting picture
frames when a vision system is used.
The environment has also been used in a distributed robotic system with In-
ternet connectivity where robot operations can be monitored and operated
from the Internet (Wang et al., 2003). The scheduling of robotic work-cells used
in manufacturing has been another topic, where issues such as checking for
deadlock situations, or scheduling the operation of multiple robotic systems
with different timing requirements have been addressed (Yuan et al., 2004).
5. Conclusion
6. References
Bernard, C., Kang, H., Singh, S.K. and Wen, J.T. (1999), “Robotic system for col-
laborative control in minimally invasive surgery”, Industrial Robot: An in-
ternational Journal, Vol. 26, No. 6, pp. 476-484.
Craig, J.J. (1989), Introduction to Robotics: Mechanics and Control, Addison-
Wesley, Boston, MA.
Goldberg, K., Gentner, S., Sutter, C. and Wiegley, J. (2000), “The mercury pro-
ject: A feasibility study for Internet robots” IEEE Robotics & Auto. Maga-
zine, Vol. 7, No.1, pp. 35-40.
Pressman, R. (1997), Software Engineering: A Practitioner's Approach, McGraw-
Hill, New York, NY.
DeviceNet Vendors Association (1997), DeviceNet Specifications, 2nd. ed. Boca
Raton, FL.
Raji, R.S. (1994), “Smart Networks for Control,” IEEE Spectrum, Vol. 31, pp. 49-
55, June 1994.
ControlNet International (1998), ControlNet Specifications, 2nd ed. Boca Raton,
FL.
Tanenbaum, A.S. (1996), Computer Networks, 3rd ed., Prentice Hall, Upper Saddle River, NJ.
Lian, F.-L., Moyne, J.R. and Tilbury, D.M. (2001), “Performance evaluation of control networks: Ethernet, ControlNet, and DeviceNet,” IEEE Control Systems Magazine, Vol. 21, No. 1, pp. 66-83.
Hung, S.H., (2000), “Experimental performance evaluation of Profibus-FMS,”
IEEE Robotics & Automation Magazine, Vol. 7, No. 4, pp. 64-72.
Taylor, R.H. and Stoianovici, D. (2003), “Medical Robotics in Computer Inte-
grated Surgery,” IEEE Transactions on Robotics and Automation, vol. 19, pp.
765–781.
Luo, R.C., and Chen, T.M. (2000), “Development of a Multibehavior-based
Mobile Robot for Remote Supervisory Control through the Internet”,
IEEE/ASME Trans. on Mechatronics, Vol.5, No.4, pp. 376-385.
Schulz, D., Burgard, W., Fox, D., Thrun, S. and Cremers, A.B., (2000), “Web
Interfaces for Mobile Robots in Public Places”, IEEE Robotics & Auto.
Magazine, Vol. 7, No.1, pp. 48-56.
Wang, X-G., Moallem, M. and Patel, R.V., (2003), “An Internet-Based Distrib-
uted Multiple-Telerobot System,” IEEE Transactions on Systems, Man, and
Cybernetics, Part A, Vol. 33, No. 5, pp. 627- 634.
Yuan, P., Moallem, M. and Patel, R.V. (2004), “A Real-Time Task-Oriented
Scheduling Algorithm for Distributed Multi-Robot System," IEEE Interna-
tional Conference on Robotics and Automation, New Orleans, LA.
23

Web-Based Remote Manipulation of Parallel Robot in Advanced Manufacturing Systems

1. Introduction
During the last decade, the Web has gained widespread acceptance in both
academic and business areas. The Web is used by many as a medium of shar-
ing data, information, and knowledge. Today, it is widely used for develop-
ment of collaborative applications to support dispersed working groups and
organizations because of its platform, network and operating system transpar-
ency, and its easy-to-use user interface – Web browser. In addition to the Web
technology, Java has brought about a fundamental change in the way that ap-
plications are designed and deployed. Java’s “write once, run anywhere”
model has reduced the complexity and cost traditionally associated with pro-
ducing software on multiple distinct hardware platforms. With Java, the
browser paradigm has emerged as a compelling way to produce applications
for collaboration over the Internet. As business grows increasingly diversified,
the potential of this application is huge. Targeting distributed, real-time moni-
toring and control in manufacturing sectors, a framework with high efficiency
for cyber collaboration is carefully examined.
The objective of this research is to develop a Web-based digital shop floor
framework called Wise-ShopFloor (Web-based integrated sensor-driven e-
ShopFloor) for distant shop floor monitoring and control. The Wise-ShopFloor,
with an appropriate architecture for effective data communication among a
dispersed engineering team, can serve real-time data from bottom up and can
function as a constituent component of e-manufacturing. The framework is de-
signed to use the popular client-server architecture and VCM (view-control-
model) design pattern with secured session control. The proposed solutions
for meeting both the user requirements demanding rich data sharing and the
real-time constraints are: (1) using interactive Java 3D models instead of
bandwidth-consuming camera images for visualization; (2) transmitting only
the sensor data and control commands between models and device controllers
for monitoring and control; (3) providing users with thin-client graphical inter-
face for navigation; and (4) deploying control logic in an application server. A
by real-time interactions with quick responses, but also can obtain more flexi-
ble control of the real world. In the near future, open-architecture devices (such as OpenPLCs and Open-CNC controllers) will have web servers and Java virtual machines embedded. This will make the proposed Wise-
ShopFloor framework more efficient for real-time monitoring and control.
3. Research Background
local CPU and can work on behalf of its remote counterpart showing real be-
haviour for visualization at a client side. It remains alive by connecting with
the physical device through low-volume message passing (sensor data and
user control commands). The 3D model provides users with increased flexibil-
ity for visualization from various perspectives, such as walk-through and fly-
around that are not possible by using stationary optical cameras; whereas the
largely reduced network traffic makes real-time monitoring, remote control,
on-line inspection, and collaborative trouble-shooting practical for users on
relatively slow hook-ups (e.g. modem and low-end wireless connections)
through a shared Cyber Workspace [12].
By combining virtual reality models with real devices through synchronized
real-time data communications, the Wise-ShopFloor allows engineers and shop
floor managers to assure normal shop floor operations and enables web-based
trouble-shooting – particularly useful when they are off-site.
Figure 1 shows how it is linked to a real shop floor. Although the Wise-ShopFloor framework is designed as an alternative to camera-based monitoring systems, an off-the-shelf web-ready camera can easily be switched on remotely
to capture unpredictable real scenes for diagnostic purposes, whenever it is
needed. In addition to real-time monitoring and control, the framework can
also be extended and applied to design verification, remote diagnostics, virtual
machining, and augmented virtuality in construction. It is well suited to hostile, invisible or non-accessible environments (e.g. the inside of a nuclear reactor or the outside of a space station).
(Figure 1: the Wise-ShopFloor — Web-based Integrated Sensor-driven e-ShopFloor — linking managers, operators and engineers in the cyber world to the real world shop floor through monitoring and control.)
4. Architecture Design
(Figure: Wise-ShopFloor architecture — client-side modules (CyberController, StatusMonitor, ChatRoom) connected through secured access over the Internet to server-side modules (SessionManager, Registrar, SignalCollector, Commander as Servlet, StatusMonitor support as Pushlet, DataAccessor, knowledge database), linked through the factory network to the real machines PKM 1–3.)
The mid-tier application server handles major security concerns, such as ses-
sion control, viewer registration, data collection/distribution, and real device
manipulation, etc. A central SessionManager is designed to look after the issues
of user authentication, session control, session synchronization, and sensitive
data logging. All initial transactions need to pass through the SessionManager
for access authorization. In a multi-client environment – the Wise-ShopFloor,
different clients may require different sets of sensor data for different models.
Constrained by network security, a Java 3D model residing in an applet is not
allowed to communicate directly with a real device through socket communi-
cation. It is also not efficient to have multiple clients who share the same
model talking with the same device at the same time. The publish-subscribe
design pattern is adopted to collect and distribute sensor data at the right time
to the right client efficiently. As a server-side module, the SignalCollector is re-
sponsible for sensor data collection from networked physical devices. The col-
lected data are then passed to another server-side module SignalPublisher who
in turn multicasts the data to the registered subscribers (clients) through app-
let-servlet communication. A Registrar is designed to maintain a list of sub-
scribers with the requested sensor data. A Java 3D model thus can communi-
cate indirectly with sensors no matter where the client is, inside a firewall or
outside. The JMF (Java Media Framework) is chosen for the best combination
between applets and servlets. For the same security reasons, a physical device
is controllable only by the Commander that resides in the application server.
Another server-side component called DataAccessor is designed to separate the
logical and physical views of data. It encapsulates JDBC (Java Database Con-
nectivity) and SQL codes and provides standard methods for accessing data
(Java 3D models, knowledge base of the devices, or XML documents). The
knowledge base is found helpful for device trouble-shooting, while XML will
be used for high-level data communication in future extensions.
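The publish-subscribe pattern described above can be sketched as follows. Class, method and channel names are illustrative, and the in-process callbacks stand in for the applet-servlet hop over which the real framework multicasts sensor data.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch of the Registrar / SignalCollector duties: clients subscribe to named
// sensor channels, and freshly collected samples are multicast to every
// registered subscriber of that channel.
class Registrar {
public:
    using Callback = std::function<void(double)>;

    // A client registers interest in one sensor channel.
    void subscribe(const std::string& channel, Callback cb) {
        subscribers[channel].push_back(std::move(cb));
    }

    // The collector hands a fresh sample here; it is pushed to each
    // subscriber of the channel. Unsubscribed channels are simply dropped.
    void publish(const std::string& channel, double value) const {
        auto it = subscribers.find(channel);
        if (it == subscribers.end()) return;
        for (const auto& cb : it->second) cb(value);
    }

private:
    std::map<std::string, std::vector<Callback>> subscribers;
};
```

The key property is that the device is polled once per sample no matter how many clients share the model, which is what keeps the network traffic low.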
Although the global behaviours of Java 3D models are controlled by the server
based on real-time sensor signals, users still have the flexibility of monitoring
the models from different perspectives, such as selecting different 3D machine
models, changing viewpoint, and zooming, through J3DViewer at the client
side. Authorized users can submit control commands through CyberController
to the application server. The Commander at server-side then takes over the
control for real device manipulations. Another client-side module StatusMoni-
tor can provide end users with a view of run-time status of the controlled de-
vice. For the purpose of collaborative trouble-shooting, a ChatRoom is included
in the framework for synchronized messaging among connected users.
A proof-of-concept prototype is developed on top of the framework to demon-
strate its application on remote monitoring and control. Figure 3 shows one
snapshot of the web user interface of the prototype. A more detailed discus-
sion from device modelling to control is provided in Section 6 through a case
study of a Tripod test bed.
Web-Based Remote Manipulation of Parallel Robot in Advanced Manufacturing Systems 665
(Figure: the Wise-ShopFloor connected through the factory network to the Tripod robot and a milling machine.)
6. Implementation
This section describes how a physical device is modelled, monitored, and con-
trolled. The Tripod is a parallel kinematic machine developed at IMTI’s lab
[14]. Instead of camera images, the Tripod is modelled by using the scene
graph-based interactive Java 3D with behavioural control nodes embedded.
Once downloaded from the application server, it behaves in the same way as
its physical counterpart for remote monitoring and control at client-side, facili-
tated by the model-embedded kinematics and sensor signals of the real Tripod.
(Figure: scene graph of the Tripod model — transform nodes for the moving platform, end effector and kinematic-control behaviours, guideways GW-1 to GW-3, sliding-legs SL-1 to SL-3, Base-1 and X-Table.)
into its open-branched tree topology. In the case of our Tripod monitoring and
control, models of both constrained kinematics and inverse kinematics are
solved separately and embedded into the behaviour control nodes in a scene
graph to calculate the motions of respective components. Typically, constraints
can be expressed in a number of equations or inequalities that describe the re-
lationships among Tripod components. Based on sensor signals collected from
the real Tripod, both constrained kinematic model and inverse kinematic
model of the Tripod are needed to calculate the positions and orientations of
the three sliding-legs and moving platform for 3D Tripod model rendering.
For the purpose of mathematical formulation, a Tripod kinematic model is
shown upside-down in Figures 6 and 7. It is a 3-dof parallel mechanism with
linear motion component actuators, the type of linear motion actuated ma-
chines with fixed leg lengths and base joints movable on linear guideways (e.g.
HexaM, Eclipse, Hexaglide, GeorgV, Z3 Head). This mechanism consists of three kinematic chains, namely three fixed-length legs with identical topology, driven by ballscrews, which connect the fixed base to the moving platform. In this 3-dof parallel mechanism, the kinematic chain associated with each of the three identical legs consists, from base to platform, of an actuated linear motion component (a ballscrew in this case), a revolute joint (connected by a nut), a fixed-length moving link, and a spherical joint attached to the moving platform. The arrangement of the structure would be subject to bending in the di-
rection parallel to the axis of the revolute joint. The advantages of the structure
are: 1) with this basic structure of parallel mechanism, it can be easily extended
to 5-dof by adding two gantry type of guideways to realize the 5-dof machin-
ing; 2) with the fixed length legs, one can freely choose the variety of leg forms
and materials and use linear direct driver to improve the stiffness; and 3) due
to reduced heat sources, it is possible to keep the precision in a high level and
to maintain a stable stiffness if compared with variable legs.
The kinematic equation for the position of the ith spherical joint is given as

p_i = h + R p'_i   (1)

where p_i = [p_ix, p_iy, p_iz]^T is the vector representing the position of the ith joint in the global coordinate system O-xyz, p'_i is the vector representing the same point but in the local coordinates C-x'y'z', h = [x_c, y_c, z_c]^T is the vector representing the position of the moving platform, and R is the rotation matrix of the moving platform in terms of the rotation angles θ_x, θ_y, and θ_z about the x, y, and z axes, respectively.
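Eq. (1) can be sketched numerically as follows. Note that the chapter does not state the Euler-angle convention behind R, so the Z·Y·X composition used here is an assumption for illustration only:

```python
import numpy as np

def rotation_matrix(tx, ty, tz):
    """Rotation of the moving platform by angles about x, y, z.
    A Z*Y*X composition order is assumed for illustration."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def spherical_joint_position(h, R, p_local):
    """Eq. (1): p_i = h + R p'_i, mapping a platform-local joint point
    into the global frame O-xyz."""
    return h + R @ p_local
```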
670 Industrial Robotics: Theory, Modelling and Control
[Figure 6. Kinematic model of the Tripod (shown upside-down): the moving platform with local frame C-x'y'z' and spherical joints p_1, p_2, p_3; sliding-legs l_i connected through revolute joints to the guide-way displacements s_1, s_2, s_3; and the base with joints b_1, b_2, b_3 on guide-ways at angles α_1, α_2, α_3 in the global frame O-xyz.]
Among the six motion components of the moving platform, it is known that x_c, y_c, and θ_z are dependent variables. The constraint equations can be derived as

x_c = -(√3/3) L_p cos θ_y sin θ_z   (2a)

y_c = (√3/6) L_p (-cos θ_y cos θ_z + cos θ_x cos θ_z - sin θ_x sin θ_y sin θ_z)   (2b)

θ_z = arctan( -sin θ_x sin θ_y / (cos θ_x + cos θ_y) )   (2c)
p_i = b_i + s_i + l_i   (3)

where b_i is the vector representing the position of the lower end of the ith guide-way attached to the base, s_i is the vector representing the displacement along the ith guide-way, and l_i is the vector representing the ith sliding leg.
Web-Based Remote Manipulation of Parallel Robot in Advanced Manufacturing Systems 671
Since s_i = s_i u_is and l_i = l_i u_li, where u_is and u_li are the direction vectors of the ith guide-way and the ith leg, respectively, the actuator displacement s_i can be solved from the fact that the leg length is constant:

‖p_i − b_i − s_i u_is‖ = l_i   (4)
where l_i is the length of the ith sliding leg. For given θ_x, θ_y, and z_c, the dependent variables x_c, y_c, and θ_z can be determined by eqs. (2a)-(2c); then h and R are fully defined. With these, p_i can be determined by eq. (1), and subsequently s_i can be solved using eq. (4). Of the two roots of eq. (4), the true solution should be the one closer to the value of the previous step j−1, where j stands for the jth step. In practice, the initial value of s_i is provided by an encoder. For the sake of brevity, interested readers are referred to [18] for further details of the Tripod kinematics.
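Because ‖u_is‖ = 1, eq. (4) reduces to a quadratic in s_i: with d = p_i − b_i, it becomes s_i² − 2(d·u_is)s_i + (d·d − l_i²) = 0. A minimal numerical sketch (with illustrative geometry, not the real Tripod's dimensions) that picks the root closer to the previous value, as the text prescribes:

```python
import numpy as np

def actuator_displacement(p_i, b_i, u_is, leg_len, s_prev):
    """Solve ||p_i - b_i - s*u_is|| = leg_len for s (eq. 4) and return
    the root closest to the previous (e.g. encoder-provided) value s_prev."""
    d = p_i - b_i
    # ||d - s*u||^2 = l^2  ->  s^2 - 2(d.u)s + (d.d - l^2) = 0, since ||u|| = 1
    b = -2.0 * d.dot(u_is)
    c = d.dot(d) - leg_len ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        raise ValueError("pose unreachable: no real root of eq. (4)")
    r = np.sqrt(disc)
    roots = [(-b - r) / 2.0, (-b + r) / 2.0]
    # the true solution is the one closer to the previous step's value
    return min(roots, key=lambda s: abs(s - s_prev))
```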
Web-based remote device monitoring and control are conducted by using the
StatusMonitor and CyberController, which communicate indirectly with the de-
vice controller through an application server. In the case of Tripod monitoring
and control, they are further facilitated by the kinematic models, to reduce the
amount of data travelling between web browsers and the Tripod controller.
The required position and orientation of the moving platform are converted into the joint coordinates s_i (i = 1, 2, 3) by the inverse kinematics, for both Java 3D model rendering at the client side and device control at the server side. The three
sliding-legs of the Tripod are driven by three 24V DC servomotors combined
with three lead screws. Each actuator has a digital encoder (1.25 μm/count) for
position feedback. The position data s_i (i = 1, 2, 3) of the sliding-legs are multicast to the registered clients for remote monitoring, while only one user at a time is authorized to conduct remote control. A sampling rate of 1 kHz is used
for the case study. Figure 8 shows how the Tripod is manipulated from one
state to another within the proposed Wise-ShopFloor framework.
7. Conclusions
8. References
Cao, J., Li, M.L., Zhang, S.S. and Den, Q.N. (2004). Composing Web Services Based on Agent and Workflow. Grid and Cooperative Computing, Part 1, pp. 948-955. Springer-Verlag, Berlin.
Zeng, L., Benatallah, B., Ngu, A.H.H., Dumas, M., Kalagnanam, J. and Chang, H. (2004). QoS-Aware Middleware for Web Services Composition. IEEE Transactions on Software Engineering, 30(5): 311-327.
Pritschow, G. (1999). Research and Development in the Field of Parallel Kinematic Systems in Europe. In: Boër, C.R., Molinari-Tosatti, L. and Smith, K.S. (eds.), Parallel Kinematic Machines – Theoretical Aspects and Industrial Requirements, pp. 1-16. Springer-Verlag.
Tlusty, J., Ziegert, J. and Ridgeway, S. (1999). Fundamental Comparison of the Use of Serial and Parallel Kinematics for Machine Tools. Annals of the CIRP, 48(1): 351-356.
Honegger, M., Codourey, A. and Burdet, E. (1997). Adaptive Control of the Hexaglide, a 6 DOF Parallel Manipulator. Proceedings of the 1997 IEEE
1. Introduction
At present, industrial robot arms are widely adopted in many areas. Unfortunately, their operation is not easy for workers to master, owing to complex architectures as well as the various control patterns required for each situation. A user-friendly human-robot interface through which workers could operate industrial robot arms in familiar language would therefore greatly reduce the difficulty of using them. The aim of this research is to develop a new human-robot interaction control approach for industrial robot arms, by means of a software platform for agents and knowledge management, in order to construct a symbiotic human-robot system that can be adopted in industrial settings (Ueno, 2002).
Conventionally, industrial robot arms can only be operated by experts who possess sufficient knowledge of the features of industrial robot arms and are able to control their movement for performing a task. To improve the human-robot interface, researchers have already developed several software interfaces for the operation of industrial robots (Mizukawa et al, 2002) (Konukseven et al, 2004) (Sales et al, 2004) (Cengiz, 2003). Unfortunately, effective use of these interfaces still depends on technical training. This paper proposes a knowledge-based human-robot interaction control approach for industrial robot arms, in conjunction with the humanoid robot Robovie as a communication robot (Tao Zhang et al, 2005). With this method, an operator can easily interact with the autonomous communication robot in natural language. The communication robot transfers the operator's sentences to a sentence parser. The key words extracted by the sentence parser are then sent to a software platform called SPAK (Software Platform of Agents and Knowledge Management) (Ampornaramveth et al, 2004), which maintains sufficient knowledge of this human-robot system. According to the human-robot interaction control defined in SPAK, industrial robot arms can be correctly operated according to the operator's request. With the proposed method, a person is not required to be an expert on industrial robot arms; an ordinary operator can operate them like an expert.
Although there are many types of industrial robot arms to be operated, the knowledge about these robots can easily be defined in SPAK in a uniform manner using a frame-based knowledge representation schema. The knowledge is maintained as a key component of a communication robot, i.e. a dialogue robot, for the robot arms. Therefore, a person only needs to explain his requests to the communication robot. SPAK can assist the person in selecting appropriate robots and arranging their operations so as to satisfy the person's requests and achieve the tasks. In addition, SPAK can control different types of robots even if they use various kinds of operating systems. On the operators' side, there is no need to possess knowledge of the operation of different types of robots.
The remainder of this chapter is organized as follows. In section 2, the human-robot system as well as its interaction control process is modelled by frame-based knowledge representation. In section 3, the human-robot system is defined in SPAK according to its knowledge model using an XML format. Section 4 shows how, through human-robot interaction, an industrial robot arm is controlled by means of SPAK via a wireless network. Section 5 introduces an actual system comprised of a human, a humanoid robot (Robovie) and an industrial robot arm (MELFA), and its experimental results demonstrate the effectiveness of the proposed human-robot interaction control method.
Human-robot interaction control for an industrial robot arm is based on the in-
teraction between an operator of the robot arm and a communication robot.
The operator’s request is transferred to SPAK via wireless network and con-
verted into commands of the robot arm. SPAK can control the robot arm with
these commands and get the feedback signals from the robot. By converting
these feedback signals into the sentences of natural language and speaking out
these sentences by the communication robot, the operator can understand the
status of the robot arm and continue his operation successfully. In this operation process, the definition of the human-robot system in SPAK is one of the important components. In order to implement this definition, modelling of the human-robot system is necessary.
The modelling of the human-robot system is based on frame-based knowledge representation. It is well known that frame representation systems are currently the primary technology used for large-scale knowledge representation in Artificial Intelligence (AI) (Koller & Pfeffer, 1998). A frame is a data structure for representing a stereotyped situation (Minsky, 1974). Attached to each frame are several kinds of information, called knowledge. Collections of related frames are linked together into frame systems. The structure of a frame consists of several items, such as Frame name, Frame type, A-kind-of, Descendants, Slots, etc. (Tairyou, 1998). A frame consists of slots, each of which
Human-Robot Interaction Control for Industrial Robot Arm through Software… 679
has a different role in the description of knowledge. Tables 1 and 2 show the definition of a frame and of its slots.
Items Meanings
Frame name Identification for frame
Frame type Type of frame
A-kind-of Pointer to parent frame for expressing IS_A relation
Descendants Pointer list to children frame
Has-part Components of this frame
Semantic-link-from Links from other frames according to their semantic relation
Semantic-link-to Links to other frames according to their semantic relations
Slots Components of the frame
Table 1. Meanings of each item in a frame
Items Meanings
Slot name Identification for slot
Role Purpose of slot
From Source of slot
Data type Explain the attribute of information recorded into the value
Value Slot value
Condition Condition of slot
Argument Argument for slot
If-required If slot is required, check this item.
If-shared If slot can be shared with other frames, check this item.
Frame-related A frame related with slot
Frame-list-related Several frames related with slot
Default If the slot value is not determined, the default value can be
recorded. But the value and default value cannot be given at
the same time.
Table 2. Meanings of each item in a slot
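The items of Tables 1 and 2 map naturally onto a small data structure. The sketch below is illustrative only (the field names mirror the tables, but these are not SPAK's actual classes); it also enforces the rule from Table 2 that a slot's value and default cannot be given at the same time:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Slot:
    name: str                      # Slot name: identification for the slot
    role: str = ""                 # Role: purpose of the slot
    data_type: str = "STR"         # Data type of the recorded value
    value: Optional[str] = None    # Slot value
    default: Optional[str] = None  # Default, used only when value is unset
    required: bool = False         # If-required flag
    shared: bool = False           # If-shared flag

    def effective_value(self):
        # Table 2: the value and default value cannot be given at the same time.
        if self.value is not None and self.default is not None:
            raise ValueError("value and default are mutually exclusive")
        return self.value if self.value is not None else self.default

@dataclass
class Frame:
    name: str                            # Frame name
    frame_type: str = "class"            # Frame type (class or instance)
    a_kind_of: Optional["Frame"] = None  # IS-A pointer to the parent frame
    descendants: list = field(default_factory=list)  # children frames
    slots: dict = field(default_factory=dict)        # slot name -> Slot

    def add_child(self, child: "Frame"):
        """Link a child frame, maintaining the IS-A hierarchy both ways."""
        child.a_kind_of = self
        self.descendants.append(child)
```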
Using frames and their slots, a human-robot system can be modelled simply. This knowledge model is comprised of different frames representing various pieces of knowledge. For instance, frames for robots include the features of the communication robot and of the industrial robot arms, as well as their operation commands. In particular, the frames for the communication robot include knowledge about the human-robot interface. At present, a human-robot interface can be implemented by vision recognition, robot speech, physical input, etc. Meanwhile, frames for users include information about the users. All frames of the knowledge model are organized by their IS-A relations into a hierarchy. That is, a lower level frame is a subclass of its upper level frame, and the bottom frames are instances of the upper level frames.
3.1 SPAK
SPAK is designed to integrate existing modules over a TCP/IP network. The features of SPAK are "platform-independent", as existing robots and software modules often rely on different platforms or operating systems; "network-aware", as the modules must interact over a network; supporting "software agents"; and being "user friendly". SPAK is targeted to be the platform on which a group of coordinating robots (or their agents) operate on top of frame knowledge.
• GUI Interface:
A user-friendly graphical interface to the internal knowledge manager and
the inference engines. It provides the users direct access to the frame-based
knowledge.
• Knowledge Database and Knowledge Manager:
This is the SPAK core module, which maintains the frame systems as Java
class hierarchy, and performs knowledge conversion to/from XML format.
• Inference Engines:
Verify and process information from external modules, which may result in
instantiation or destruction of frame instances in the knowledge manager,
and execution of predefined actions.
• JavaScript Interpreter:
Interprets JavaScript code, which is used for defining condition and procedural slots in a frame. It also provides access to a rich set of standard Java class libraries that can be used for customizing SPAK for a specific application.
• Base Class for Software Agents:
Provides basic functionality for developing software agents that reside on networked robots.
• Network Gateway:
This is a daemon program allowing networked software agents to access knowledge stored in SPAK. All SPAK network traffic is processed here.
<FRAME>
<NAME>FaceDetect</NAME>
<ISA>Behavior</ISA>
<ISINSTANCE>FALSE</ISINSTANCE>
<SLOTLIST>
<SLOT>
<NAME>mSensor</NAME>
<TYPE>TYPE_INSTANCE</TYPE>
<CONDITION>COND_ANY</CONDITION>
<ARGUMENT>Bust</ARGUMENT>
<VALUE></VALUE>
<REQUIRED>TRUE</REQUIRED>
<SHARED>TRUE</SHARED>
</SLOT>
<SLOT>
<NAME>mMouth</NAME>
<TYPE>TYPE_INSTANCE</TYPE>
<CONDITION>COND_ANY</CONDITION>
<ARGUMENT>MouthIO</ARGUMENT>
<VALUE></VALUE>
<REQUIRED>TRUE</REQUIRED>
<SHARED>TRUE</SHARED>
</SLOT>
<SLOT>
<NAME>onInstantiate</NAME>
<TYPE>TYPE_STR</TYPE>
<CONDITION>COND_ANY</CONDITION>
<ARGUMENT></ARGUMENT>
<VALUE>rpc("recogFace", "");</VALUE>
<REQUIRED>FALSE</REQUIRED>
<SHARED>TRUE</SHARED>
</SLOT>
</SLOTLIST>
</FRAME>
Table 4. XML format in SPAK
With the XML format, a frame structure and the contents written in its slots can be defined easily. In particular, the frame system can be visualized in the SPAK Graphical User Interface (GUI). In addition, there is an interpreter that translates the XML-based specifications into the corresponding commands. The meaning of this format can be explained with reference to the definitions of a frame in Tables 1 and 2. The region between <FRAME> and </FRAME> defines a frame. NAME refers to the frame name, ISA to the "A-kind-of" item, and ISINSTANCE to the frame type. The region between <SLOTLIST> and </SLOTLIST> defines the slots. Each SLOT gives several contents, including NAME, TYPE, CONDITION, ARGUMENT, VALUE, REQUIRED, SHARED, etc. With this XML format, a frame and its slots can be described in detail. Therefore, the human-robot system can be defined in SPAK by means of the XML format.
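As a sketch of how such an XML specification might be read back into frames, the loader below uses Python's standard `xml.etree.ElementTree`. The element names follow Table 4, but this loader is illustrative, not SPAK's actual interpreter:

```python
import xml.etree.ElementTree as ET

def parse_frame(xml_text):
    """Parse one <FRAME> definition (Table 4 layout) into a plain dict."""
    root = ET.fromstring(xml_text)
    frame = {
        "name": root.findtext("NAME"),
        "isa": root.findtext("ISA"),              # the A-kind-of item
        "is_instance": root.findtext("ISINSTANCE") == "TRUE",
        "slots": [],
    }
    for slot in root.find("SLOTLIST").findall("SLOT"):
        frame["slots"].append({
            "name": slot.findtext("NAME"),
            "type": slot.findtext("TYPE"),
            "condition": slot.findtext("CONDITION"),
            "argument": slot.findtext("ARGUMENT"),
            "value": slot.findtext("VALUE"),
            "required": slot.findtext("REQUIRED") == "TRUE",
            "shared": slot.findtext("SHARED") == "TRUE",
        })
    return frame
```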
In section 2, the interaction control process for an industrial robot arm has been explained. To implement the human-robot interaction control, several technologies must be integrated and employed by SPAK. First, for the interaction between an operator and a communication robot, SPAK integrates several technologies implementing a human-robot interface, such as face detection, image recognition, speech recognition, etc. Concerning face detection, using the robot's "eyes", which are fitted with video cameras, the robot can obtain a video image of the human's face. With the face detection program stored in its embedded computer, the robot can recognize the human by computer vision technology. If it is the first time it has met a specific person, the robot stores the information about this person's face in its knowledge database using its learning function. When the robot sees this person's face again, it can recognize him at once and sends the recognition results to SPAK. The other technique the robot can use for the human-robot interface is speech. Since the robot system has a software program for speech recognition, SPAK just sends the recognized sentences to the communication robot, which speaks the sentences sent by SPAK out to the operator.
Another important technology is parsing natural sentences. In this human-robot system, an operator can give his requests to the robot arm in natural language. By means of voice recognition software such as IBM ViaVoice (http://www-306.ibm.com/software/voice/viavoice/), the sentences given by the operator can be recognized and stored in the computer. Since SPAK generates the commands for the robot arm from several key words, it is necessary to parse the sentences and extract the key words from them for SPAK. We have developed a simple sentence parser for SPAK using the technique of Case Grammar (Bruce, 1975), taking into account the features of the operations of a robot arm. With this software component, key words about the objective of control, the movement of the robot arm, etc., are picked out and sent to SPAK. If inconsistencies regarding the operation of the robot arm are detected in the sentences, the parser can also generate words describing the inconsistencies and send them to SPAK, and SPAK will then ask the operator to change his commands through the speech module of the communication robot.
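A minimal keyword extractor in the spirit of such a case-grammar parser might look as follows. The vocabulary and case labels here are invented for illustration; the actual SPAK parser is more elaborate:

```python
import re

# Hypothetical case vocabulary: maps surface words to the case roles that
# robot-arm commands care about (robot name, action, object).
CASES = {
    "melfa": ("robot", "MELFA"),
    "hold": ("action", "hold"),
    "release": ("action", "release"),
    "move": ("action", "move"),
    "workpiece": ("object", "workpiece"),
}

def parse_request(sentence):
    """Extract key words from an operator's sentence into case slots."""
    slots = {}
    for token in re.findall(r"[A-Za-z0-9]+", sentence.lower()):
        if token in CASES:
            role, value = CASES[token]
            slots.setdefault(role, []).append(value)
        elif re.fullmatch(r"p\d+", token):  # position names such as P0, P1
            slots.setdefault("position", []).append(token.upper())
    return slots
```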
Although the system hierarchy expresses the relations among frames, the execution of frame contents must be implemented in SPAK by inference engines. The inference engines used in SPAK perform forward and backward chaining. Forward chaining is usually used when a new fact is added to the knowledge base and we want to generate its consequences, which may add other new facts and trigger further inferences. Backward chaining starts with something we want to prove and finds implication facts that would allow us to conclude it; it is used for finding all answers to a question posed to the knowledge base.
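Forward chaining of this kind can be sketched in a few lines. The generic rule engine below (not SPAK's actual inference engine) shows the mechanism: adding a fact fires every rule whose conditions are now all satisfied, which may add further facts and trigger further rules:

```python
def forward_chain(facts, rules):
    """Fire rules until no new facts appear.
    `rules` is a list of (set_of_conditions, consequence) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, consequence in rules:
            if conditions <= facts and consequence not in facts:
                facts.add(consequence)  # a new fact may trigger further rules
                changed = True
    return facts
```

With hypothetical facts and rules, asserting a user greeting can cascade into the actuation of a dependent action frame.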
Besides, the control of the robots also requires their respective driving programs for their actions, as well as software agents to convert the signals between the robots and SPAK. In particular, for feedback control, the robot arm should be able to generate feedback signals about its status and send them to SPAK.
5. Experiment
[Figure: experimental system configuration — the human operator, Robovie, SPAK, and MELFA, connected via a wireless network.]
Robovie can be used in the fields of education, nursing, medical treatment, etc. Robovie and a human can interact through face detection, speech and user input, so that Robovie can obtain the operator's commands and transfer them to SPAK.
MELFA is an industrial articulated robot arm produced by Mitsubishi Electric Co., Japan. The robot arm used in this experiment has 6 DOFs generated by 6 axes, each driven by an AC servomotor. The servo controller of MELFA is the CR1-571, and its programming language is MELFA-BASIC IV. MELFA can move strictly along a trajectory in three-dimensional space, and the tip of the robot arm can hold or release a workpiece.
In this experimental system, Robovie, MELFA, SPAK and the other software components are connected via a wireless TCP/IP network. SPAK and the other software components (such as vision recognition, the sentence parser, software agents, etc.) run on different computers; the system is therefore a distributed system whose core is SPAK. Fig. 2 illustrates the environment of operator-Robovie interaction for the experiment.
5.2 Results
In Fig. 6, to aid understanding of the principle of executing frames in SPAK, the detailed contents and relations of the MELFA frames in SPAK are illustrated as an example. There are four frames for the four kinds of atomic actions of MELFA: "MELFAOn", "MELFAMove", "MELFAHome" and "MELFAOff". Each of them is related to conditions expressed by the frames "Greet", "Mouth", the users and the atomic actions. If the frames for the conditions are actuated at the same time, the frame with these condition frames will be actuated, and the command given in the special slot "onInstantiate" of this frame will be sent to the servo controller of MELFA, which then drives MELFA's movement. For instance, the actuation of the frame "MELFAMove" is based on the actuation of the frames "Greet", "Mouth" and "MELFAOn". With this system, we carried out an experiment according to the following scenario, which describes one episode of the experiment.
Robovie: (Robovie looks at the operator’s face and tries to recognize it.)
Robovie: Hi, how are you! I have never seen you before. What is your name?
Operator: (The operator types his name XXX.)
Robovie: Hi, XXX. Do you want to operate MELFA now?
Operator: Is MELFA ready?
Robovie: Yes. How do you want MELFA to move?
Operator: I want MELFA to hold workpiece A and move from P0 to position P1.
Robovie: O.K.
MELFA: (MELFA starts to perform the task given by the operator.)
Robovie: MELFA has finished its jobs. If you want it to perform another job,
please give your command.
...
The experiment shows that the operator's commands can be correctly sent to SPAK, and that SPAK can convert them into the corresponding MELFA commands according to the knowledge model of the system. MELFA can therefore perform the task correctly. Besides, to ensure the reliability of the operation, several safety measures are adopted in this system. First, a consistency check of the operator's sentences is installed in the sentence parser. If the operator's sentence is inconsistent with the operation of MELFA, the parser will ask the operator to give his sentence again. Second, SPAK provides some frames for confirmation. Before performing a movement of MELFA, SPAK instructs Robovie to repeat the task given by the operator. If the operator confirms its correctness, MELFA is commanded to move; if there are any problems, SPAK instructs Robovie to tell the operator by speech as soon as possible. Third, MELFA also has some measures to protect itself. With this system, error information generated by MELFA can also be transferred to SPAK and spoken out by Robovie.
6. Discussions
Owing to the increase in the knowledge required for a human-robot system, knowledge management and improvement become more and more important. This is our current research topic for improving the proposed method and realizing the goal of constructing a highly intelligent symbiotic human-robot system in the future. Knowledge management includes the organization and optimization of knowledge. Although a frame and its slots provide a strong ability to describe various types of objects, and the frame hierarchy can organize the frames into a system, mechanisms are still needed to classify the different types of frames, renew the contents of frames, delete useless frames, detect logical errors among frames, and so on. Currently, an expert manages the knowledge for a human-robot system manually; automatic support for this task clearly needs to be developed.
In order to realize the final goal of constructing a highly intelligent symbiotic human-robot system, many attractive and challenging issues concerning the proposed control method still need to be solved. As mentioned above, the management of the knowledge model adopted by the proposed method needs to be improved. We want to combine the proposed method with other kinds of control approaches, such as adaptive control, robust control, fuzzy control, etc., to improve the control performance of a symbiotic human-robot system. In addition, the reliability of the proposed human-robot interaction control needs to be improved continuously. We will apply this method to more typical tasks in wider areas. With the improvement of robot functions in the near future, the proposed method will become adoptable in more real applications.
7. Conclusions
A new human-robot interaction control approach for industrial robot arms was proposed. In this method, the operator's requests can be obtained directly by an autonomous communication robot through human-robot interaction and converted into control instructions for industrial robot arms by SPAK. In addition, the status of the robot arm can be obtained by SPAK and reported to the operator through the speech function of the communication robot. Based on this method, an experimental symbiotic human-robot system for industrial robot arms was constructed and evaluated, and the experimental results demonstrate the effectiveness of the proposed method. The proposed human-robot interaction control approach for industrial robot arms has strong flexibility and extensibility: it can be extended to construct many kinds of symbiotic human-robot systems including different kinds of robots, and knowledge management and improvement can be realized by learning functions in SPAK. In future research, more issues will be addressed and the control performance of the human-robot system will be further improved. We believe that a symbiotic human-robot system with human-robot interaction control has great potential for future human society.
8. References
1. Introduction
Industrial robots are known to execute given programs at high speed and, at the same time, with high repeatability. From non-industrial scenarios such as (Nakabo et al., 2005) or (Ginhoux et al., 2004) we know that cameras can be used to execute high-speed motion even in cases in which the desired path is not given a priori but sensed online. In general, such visual tracking tasks, in which a camera follows a randomly moving target, are not accurate enough for industrial applications. But in industry the work-piece will scarcely move randomly. So in this chapter we concentrate on another type of visual servoing, in which the path that has to be followed is fixed in advance, but not given by a robot program.
A robot-mounted camera is a universal sensor which is able to sense poses with high accuracy, typically better than a millimeter in the close-up range. With a sufficiently fast shutter the robot may at the same time move fast, e.g. at 1 m/s. In contrast to expensive high-speed cameras yielding a high frame rate of e.g. 1 kHz, as in (Nakabo et al., 2005), we restrict ourselves to a standard CCIR camera, to meet the requirements of cost-effective hardware. Off-the-shelf cameras are fundamental for acceptance in industry. So an important feature of our method is an appropriate control architecture that tolerates low sensor sampling rates. The camera is mounted at the robot arm and measures the work-piece pose (given by boundary lines) with respect to the tool center point (tcp) of the robot. More precisely, the camera is mounted laterally to provide enough space for a tool. So with a constant sense of motion we have a predictive sensor as in (Meta-Scout, 2006). With an alternating sense of motion the camera has to be tilted so that the corresponding nominal line point p_n comes approximately to the center of the image (see Fig. 1d). In this case segments of the lines might be occluded by the tool. A tilted mounting yields a priori unknown distances of the different line points and thus complicates the equations. Furthermore, the task frame defined by the tcp and the camera frame are then different.
In contrast to current research (Comport et al., 2005), we assume that problems
with image processing, feature detection and projections are solved, which
holds for our simple black and white scenario. In addition, the initial configu-
ration is supposed to be close to the target configuration, so that large rotations
as in (Tahri and Chaumette, 2005) do not appear.
Figure 1. Sample tasks with natural and artificial background and tilted mounted camera (panels a-d)
We show several planar and spatial sample tasks (Fig. 1). They are used to sense (curved) lines or edges which are coarsely parallel to the desired motion. This makes it possible to refine a coarsely programmed path at full speed.
According to the notation of this chapter the programmed path is called refer-
ence path and the corresponding lines of the nominal scenario are nominal lines.
In reality the sensed lines will differ from the nominal lines, thus defining the
desired path, where a path is given by trajectories of positions and orientations.
In our experiments we program a horizontal circular path which is modified online in the radial and vertical directions using image data. A possible application is the spraying of glue onto flexible or imprecisely positioned work-pieces. Other applications are laser cutting or soldering. In these cases the desired path is determined during motion, e.g. by surveying the boundary lines of the work-piece. For all these tasks, high accuracy at high speed is required.
Spatial Vision-Based Control of High-Speed Robot Arms 695
2. Control concept
Instead of directly using sensor data to control the robot, we use a predictive
architecture which separates position control from the sensing of the desired
path (see Fig. 2). Position control provides what we call a Cartesian ideal robot.
This means that for all sampling steps its arm position x a is identical to the de-
sired pose x d . The realization of such an ideal robot will be explained in the
sequel. Sensor data affect the system by a module that computes the desired
poses at the sampling steps. This is presented in section 4.
The fusion of the two parts, position control and the computation of the de-
sired path, yields control from image data.
This concept allows different sampling rates of the camera and the robot con-
troller. Our approach is to integrate image data in each case in the next posi-
tional sampling step. As in (Zhang et al., 2003), time delays owing to processor
load are tolerable as long as the instant of exposure is known.
3. Position control
The core of this chapter is not restricted to a special position control method. A
standard industrial position controller will do if the industrial interface allows
online modified references. Such an interface, usually called sensor interface, is
required since the desired positions x d are not assumed to be available long
before the motion.
A standard sensor interface such as (Pertin & Bonnet des Tuves, 2004) allows
sending desired values online and receiving present position values. For the
purpose of this chapter we refer to the desired values of the sensor interface as
a command vector q_c, to distinguish the input to the industrial controller from
the desired positions q_d (of the interface of the ideal robot). The actual
joint values are called the arm position q_a.
For curved paths to be executed at high speed we recommend increasing
accuracy using advanced control methods which process not only the current
desired pose but also the desired speed and acceleration or, as we do, the
desired poses of multiple future sampling steps, as in (Clarke et al., 1987) or
(Grotjahn & Heimann, 2002).
q_c(k) = q_d(k) + \sum_{i=0}^{n_d} K_{qi} \cdot (q_d(k+i) - q_d(k))        (1)
where K_{qi} stands for the adapted parameters. This controller can be seen as a
filter of the desired positions. The proposed controller works in joint space,
since there are substantial couplings when considering the robot dynamics in
Cartesian space. In joint space the estimation of the couplings can be restricted
to a few additional adaptive parameters.
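As an illustrative sketch, the command of equation (1) can be computed as below; the horizon length and the gain values K_qi are invented placeholders, not the adapted parameters of an actual controller.

```python
def feedforward_command(q_d_future, K):
    """Predictive feed-forward command of equation (1):
    q_c(k) = q_d(k) + sum_i K_qi * (q_d(k+i) - q_d(k)).
    q_d_future[i] is the desired joint position q_d(k+i); the gains K stand
    in for the adapted parameters K_qi (the values below are made up)."""
    q_dk = q_d_future[0]
    # weighted sum of deviations of future desired positions from the current one
    correction = sum(K[i] * (q_d_future[i] - q_dk) for i in range(len(K)))
    return q_dk + correction

# toy example: one joint ramping up over the prediction horizon
q_d_future = [0.0, 0.1, 0.2, 0.3]
K = [0.0, 0.5, 0.3, 0.2]          # hypothetical adapted parameters
q_c = feedforward_command(q_d_future, K)
```

The command thus anticipates the upcoming motion instead of only reacting to the current desired position.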
This feed-forward controller uses the special predictive interface from (Lange
& Hirzinger, 1996), defined either in joint space or in Cartesian coordinates: in
every sampling step the desired poses of the next n_d sampling steps are
required, with n_d corresponding to twice the time constant of the controlled
robot. This is a key aspect for the realization of an ideal robot.
Bold-face lines in Fig. 2 represent data for current and future time steps, i.e.
predictions of sensor data are required.
In the case of time-invariant environments, using predictions of the progression
of the desired path enables the position controller to compensate all dynamic
delays that would otherwise directly affect performance. The uncertainty becomes
small enough that the robot can track a continuous line with very little path
error.
The task of this computation is to determine the desired poses of the next n_d
sampling steps. It is advantageous to use a reference path with respect to which
nominal lines are defined. Then the task is to find the path which lies, with
respect to the real (sensed) lines, in the same way as the reference path lies
with respect to the nominal lines. This path is called the desired path and is
transferred to the ideal robot.
Thus, with a camera as sensor and the particular task of tracking lines, the
prediction of the next n_d sampling steps of the desired path reduces to
measuring the course of the contour and keeping the values that correspond to
future time steps.
For each time step we compute the transformation matrix {}^r T_d which describes
the frame of a point of the desired path, T_d, with respect to the frame of the
corresponding point of the reference path, T_r. The so-called sensor correction
{}^r T_d is computed from the postulate that the desired path is defined by the
actually sensed line points p_s in the same way as the reference path is given by
the nominal line points p_n. p is a point vector and, like T, expressed in
homogeneous coordinates. All these variables depend on time. The time index k is
omitted, however, in order to simplify the notation.
{}^d p_s = {}^r p_n        (2)

and then

{}^r p_s = {}^r T_d \cdot {}^d p_s = {}^r T_d \cdot {}^r p_n .        (3)
{}^r p_s = {}^r p_d + {}^r p_n .        (4)
If we have at least two lines and their nominal line positions {}^r p_{ni}, we can
compute the components of {}^r p_d, namely {}^r x_d and {}^r z_d, if y is
understood as the direction of motion. {}^r y_d is defined to be zero since we
evaluate line points which lie in the x-z-plane of the reference system. Because
of the neglected rotational sensor corrections this also means
{}^r y_s = 0 = {}^d y_s .        (5)
In the image the lines are detected by single points. Here we assume that the
camera is oriented such that lines are approximately vertical in the image (see
Fig. 3). We use a simple edge detection algorithm which evaluates a single image
row. Horizontal line searching gives the best accuracy since it allows
processing of single image fields. The detected line points are corrected by a
rectification algorithm using previously identified camera and lens distortion
parameters.
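A minimal sketch of a single-row edge detector in this spirit (a gradient-peak search with parabolic sub-pixel refinement; the chapter's actual detector and the rectification step are not reproduced here):

```python
import numpy as np

def detect_edge_in_row(row):
    """Locate the strongest intensity step in one image row, refined to
    sub-pixel accuracy by fitting a parabola around the gradient peak.
    Illustrative only; thresholds and robustness logic are omitted."""
    g = np.diff(np.asarray(row, dtype=float))   # discrete horizontal gradient
    j = int(np.argmax(np.abs(g)))               # column of the strongest step
    if 0 < j < len(g) - 1:
        a, b, c = abs(g[j - 1]), abs(g[j]), abs(g[j + 1])
        denom = a - 2.0 * b + c
        if denom != 0.0:
            j = j + 0.5 * (a - c) / denom       # parabola vertex offset
    return j + 0.5                              # gradient samples sit between pixels

row = [10, 11, 10, 12, 80, 200, 201, 199]       # synthetic intensity profile
edge_col = detect_edge_in_row(row)
```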
Figure 3. View from the robot mounted camera with 2 lines and 5 sensed points each.
The desired positions of the lines1 in the image are marked with big yellow blocks.
The windows for line search are shown by horizontal lines.
¹ The desired position of a line in the image is the image point of the current
nominal line point when the tcp pose of the exposure is the reference pose. If
these desired line positions coincide in the image with the sensed lines, the
actual pose complies with the desired path.
At least for curved paths the sensed line points will not share an image row
with the corresponding nominal image points. So a horizontal search starting at
the expected image position of a time step will give image points corresponding
to the references of different time instants. Therefore we represent the lines in
the image by polynomials. All future computations are then based on these line
polynomials
\xi = f(\eta)        (6)
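Such a polynomial line representation can be sketched as below; the quadratic degree and the sample coordinates are assumptions for illustration only.

```python
import numpy as np

# Sensed edge points in normalized image coordinates: eta along the line,
# xi across it.  The values are made up for this sketch.
eta = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
xi  = np.array([0.051, 0.012, -0.001, 0.013, 0.048])

# Represent the line by a polynomial xi = f(eta) as in equation (6);
# degree 2 is an assumed choice for gently curved contours.
coeffs = np.polyfit(eta, xi, deg=2)
f = np.poly1d(coeffs)

# The fitted polynomial can now be read at arbitrary eta, including image
# regions that correspond to future time steps of the motion.
xi_pred = f(0.15)
```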
With

{}^c p_s = (\xi \cdot {}^c z_s \quad \eta \cdot {}^c z_s \quad {}^c z_s \quad 1)^T ,        (7)

where {}^c z_s is the distance between the camera and the line point, we get the
pose vector {}^r p_s from the reference pose of the tcp to a sensed line point

{}^r p_s = ({}^r x_s \quad {}^r y_s \quad {}^r z_s \quad 1)^T = {}^r T_c \cdot (\xi \cdot {}^c z_s \quad \eta \cdot {}^c z_s \quad {}^c z_s \quad 1)^T .        (8)

Together with equation (5) this yields

({}^r T_{c10} \cdot \xi + {}^r T_{c11} \cdot \eta + {}^r T_{c12}) \cdot {}^c z_s + {}^r T_{c13} = {}^r y_s = 0 ,        (9)
where, for example, {}^r T_{c10} is the component (1,0) of the transformation
matrix {}^r T_c. This transformation matrix expresses the actual pose of the
camera with respect to the reference pose of the tcp. At least for curved
reference paths, or a camera which is tilted with respect to the tool or the
lines (see Fig. 1d), this transformation differs from identity.
The equations for {}^r x_s and {}^r z_s are a little more complicated to
evaluate. With T_r and T_d having the same orientation, using equations (4) and
(2) we can write

{}^r x_s = {}^r x_d + {}^d x_s = {}^r x_d + {}^r x_n ,        (10)
where {}^r x_n is the nominal distance of the line with respect to the reference
trajectory of the tcp. This distance is given, e.g., as the space between the
desired cutting edge and a visible edge on the work-piece. {}^r x_d is the wanted
path correction, which is the same for different lines:

{}^r x_d = ({}^r T_{c00} \cdot \xi + {}^r T_{c01} \cdot \eta + {}^r T_{c02}) \cdot {}^c z_s + {}^r T_{c03} - {}^r x_n .        (11)

Equating (11) for the two lines i = 0, 1 gives

({}^r T_{c00} \xi_0 + {}^r T_{c01} \eta_0 + {}^r T_{c02}) \, {}^c z_{s0} - {}^r x_{n0} = ({}^r T_{c00} \xi_1 + {}^r T_{c01} \eta_1 + {}^r T_{c02}) \, {}^c z_{s1} - {}^r x_{n1} .        (12)
Likewise we compute

{}^r z_d = ({}^r T_{c20} \cdot \xi + {}^r T_{c21} \cdot \eta + {}^r T_{c22}) \cdot {}^c z_s + {}^r T_{c23} - {}^r z_n        (13)

and thus

({}^r T_{c20} \xi_0 + {}^r T_{c21} \eta_0 + {}^r T_{c22}) \, {}^c z_{s0} - {}^r z_{n0} = ({}^r T_{c20} \xi_1 + {}^r T_{c21} \eta_1 + {}^r T_{c22}) \, {}^c z_{s1} - {}^r z_{n1} ,        (14)

where {}^r z_{n0} and {}^r z_{n1} are the nominal distances to the lines, e.g.
the distance between the laser and the work-piece. Usually the distances to the
two lines are the same.
In the case of two lines, with the two equations (9), equation (12), equation
(14), and the two equations (6), we have a total of six equations to determine
\xi_0, \xi_1, \eta_0, \eta_1, {}^c z_{s0}, and {}^c z_{s1}. Because of the
nonlinearity of equation (6), a numerical solution is required, which usually
converges within 2 iterations. The wanted components {}^r x_d and {}^r z_d of the
path correction are calculated by inserting the computed variables into
equations (11) and (13), using either of the lines.
If only one line is visible, we need a priori information, e.g. the distance of the
line or, more precisely, the plane in which the line lies. If this plane is parallel
to the x-y-plane of the tool frame we can use r zd = 0 . In this case the system of
equations is limited to equations (6), (9), and (13) to determine ξ ,η , and c zs
which are inserted into equation (11).
Strictly speaking, we use a priori information with two lines as well: the
assumption that the plane of the lines is parallel to the x-y-plane of the tool.
A varying distance can be interpreted as a non-zero roll angle. So the
specification is restricted to the pitch angle of the plane, i.e. the rotation
around the y-axis of the reference frame of the tcp. The computation of this
value needs at least three lines, see (Gangloff & de Mathelin, 2002).
For every capture we compute path corrections for multiple reference poses T_r
but a fixed camera pose T_c, which is the pose at the time instant of the
exposure. To avoid the computation for all n_d sampling steps (see section 3), we
represent the path corrections as polynomials as well, again using parameter
estimation methods. The resulting polynomials allow reading out the wanted path
modifications with minimal effort. Therefore the method is still suitable for
n_d ≈ 20. With an appropriate feed-forward control it yields minimal path
errors.
The indirect computation of the desired poses by using path modifications
with respect to a reference path is advantageous since curved paths with vary-
ing orientation are allowed. Solely the path correction itself is assumed to be
done without rotations.
A problem may occur with the presented method if a line has to be followed
that is not as continuous as the lines of Fig. 1. With the path of Fig. 4 the robot
tries to execute a velocity step at the vertices between straight edges without
considering acceleration limitations.
To prevent this case, we apply impedance-based control. A filter smoothes the
sensed edge so that the resulting contour can be tracked.
Regarding Fig. 5 this means that instead of the sensed edge (red points) a
smoothed edge (blue points) is used to compute the desired path for the inner
position control loop. In force control scenarios such a filter is well known and
is called impedance filter. We adopt this expression although in this case sen-
sor data do not represent forces.
Figure 5. View from the robot-mounted camera. Sensed edge points (red) and the
computed smoothed path (blue) are depicted. The yellow block visualizes the current
desired pose.
It is worth noting that this kind of impedance-based control does not affect the
control law but only the desired trajectory. The stiff position control is main-
tained. This is useful if different compliances are desired in the individual
components of the Cartesian pose vector. A typical example is a horizontal
motion where the height of the end-effector with respect to the floor has to be
accurately kept while the horizontal components are sensor controlled and
therefore have to be compliant with respect to vertices of the target line.
E \cdot {}^r x_d + D \cdot {}^r \dot{x}_d + M \cdot {}^r \ddot{x}_d = {}^r x_a + {}^a s - s_r = \Delta s_r        (15)
defines the desired trajectory {}^r x_d, where x_a is the actual pose of the tcp.
{}^a s are the sensed edge data, expressed with respect to the tcp. We here
assume that the transformation from the camera system to the tcp system is
known. s_r represents a reference sensor value, i.e. a specified distance
between tcp and edge. Fig. 6 illustrates this setup.
Note that within this section the index *_d represents filtered desired values,
which are usually different from {}^r T_d or {}^r p_d in equations like (3) or
(4). The latter will now be denoted by x_a + {}^a s - s_r, where x_a + {}^a s
corresponds to an object pose T_o and -s_r to {}^o T_d.
With E = I and D = M = 0, (15) represents explicit sensor-based control as in
section 4. If, in contrast, we specify M > 0, the resulting trajectory will
smooth the corners, since then the accelerations are weighted in addition to the
deviations from the trajectory of explicit sensor-based control. With M > 0 we
further specify D > 0 in order to avoid oscillations.
E \cdot {}^r x_d(k+i) + \frac{D}{2 T_0} \left( {}^r x_d(k+i+1) - {}^r x_d(k+i-1) \right) + \frac{M}{T_0^2} \left( {}^r x_d(k+i+1) - 2\,{}^r x_d(k+i) + {}^r x_d(k+i-1) \right) = {}^r x_a(k) + {}^a s(k+i/k) - s_r .        (16)
{}^a s(k+i/k) reflects the sensed edge position for time step (k+i), sensed in
the image at time step k. Equation (16) gives a system of equations for
{}^r x_d(k+i) with given k. So the solution computes the desired trajectory from
the actual position and the sensor values at time step k.
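The system of equations can be sketched for a scalar trajectory as follows; the clamped boundary handling and the sampling time T0 are invented simplifications standing in for the chapter's initial and end conditions, and the parameter values are illustrative.

```python
import numpy as np

def impedance_smooth(s, E=1.0, D=0.06, M=0.001, T0=0.012):
    """Solve the discretized impedance equation (16) for a scalar trajectory:

        E*x[i] + D/(2*T0)*(x[i+1] - x[i-1])
               + M/T0**2*(x[i+1] - 2*x[i] + x[i-1]) = s[i],

    with the first and last samples clamped to the sensed values.
    T0 (the sampling time) is an invented value for this sketch."""
    n = len(s)
    A = np.zeros((n, n))
    b = np.asarray(s, dtype=float).copy()
    A[0, 0] = A[-1, -1] = 1.0                 # clamped boundary conditions
    d1 = D / (2.0 * T0)
    m1 = M / T0 ** 2
    for i in range(1, n - 1):
        A[i, i - 1] = m1 - d1                 # coefficient of x[i-1]
        A[i, i] = E - 2.0 * m1                # coefficient of x[i]
        A[i, i + 1] = m1 + d1                 # coefficient of x[i+1]
    return np.linalg.solve(A, b)

# sensed edge with a sharp vertex halfway along the visible range
s = np.concatenate([np.zeros(15), np.linspace(0.0, 14.0, 15)])
x = impedance_smooth(s)                       # smoothed desired trajectory
```

With D = M = 0 the solve reproduces the sensed values exactly; increasing M spreads the vertex over neighbouring samples on both sides, matching the non-causal smoothing described in the text.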
The result can be seen by considering a vertex or a discontinuity between
{}^a s(k+i/k) and {}^a s(k+i+1/k). With D = M = 0, a vertex or a discontinuity of
{}^r x_d(k+i) would be reproduced. In contrast, M > 0 will smooth the trajectory.
Smoothing is not restricted by causality. Instead, the desired trajectory is
affected well before the discontinuity of {}^a s(k+i/k), and it is still affected
after the discontinuity as well.
Equation (15) defines the compliance between the reference trajectory and the
new desired position. The same compliance is valid for the more intuitive
relation between the sensed edge and the desired system, since it can be derived
in the same way as (15). The right-hand side expression is independent of the
resulting desired robot motion x_d:

E \cdot {}^d x_o + D \cdot {}^d \dot{x}_o + M \cdot {}^d \ddot{x}_o = s_r + (E - I) \cdot {}^r x_o + D \cdot {}^r \dot{x}_o + M \cdot {}^r \ddot{x}_o .        (17)
E \cdot {}^r x_d(k+i) = {}^r x_a(k) + {}^a s(k+i/k) - s_r        (18)

\frac{D}{2 T_0} \left( {}^r x_d(k+i+1) - {}^r x_d(k+i-1) \right) = 0        (19)

\frac{M}{T_0^2} \left( {}^r x_d(k+i+1) - 2\,{}^r x_d(k+i) + {}^r x_d(k+i-1) \right) = 0        (20)
5.3 Implementation
Instead of solving (16) or minimizing the mean errors of (18) to (20) in every
time step, a filter can be computed, since we have a linear system of equations.
This filter gives {}^r x_d(l) with k ≤ l ≤ k + n_i from the values of
{}^r x_a(k) + {}^a s(k+i/k) with 0 ≤ i ≤ n_i. This filter is called an impedance
filter. Its coefficients are found by a single optimization process in the
systems (16) or (18) to (20), respectively.
We now have to specify the number n_i of elements of this filter. To be able to
compute n_d time steps of {}^r x_d(l) we need n_i ≫ n_d. In practice, however,
n_i is usually limited by the visible range of the edge. n_i has been chosen
sufficiently large if {}^r x_d(l) proves to be time invariant. This requires that
the initial conditions {}^r x_d(l) with l < k have been computed in the same way.
The end conditions {}^r x_d(l) with l > k + n_i are set to
{}^r x_d(l) = \Delta s_r(l).
6. Experiments
When tracking only one boundary line (Fig. 1c), the tube lies horizontally on
the floor. In this case, in spite of big differences between the nominal and the
actual tube, a mean pixel error of less than 1 pixel is reached even without
filtering of the edge data.
With a non-planar layout (Fig. 1b) we evaluate both boundary lines to determine
a spatial path. The vertical accuracy is inferior because, on the one hand, the
measurement of the distance is ill-conditioned due to the geometrical setup and,
on the other hand, elastic oscillations of the robot are excited. Compliance in
the robot joints prevents accurate measurements of the camera pose, which
influences the detected line poses. Nevertheless the mean control error in both
degrees of freedom (dof) falls below one millimeter (Fig. 7b and Fig. 7c). The
path accuracy thus equals that of the complex method of (Lange & Hirzinger,
2003).
As a second experiment the robot has to follow, again at 0.7 m/s, a light cable
which lies on the ground (see Fig. 1a and Fig. 7a). This demonstrates that, in
spite of the structured floor, no artificial background is required.
Figure 7. Sensor induced path correction and control error (note the different scales)
Path corrections are horizontal only, and computed by different methods. Because
of bigger deviations from the reference path, the path errors (see Table 2) are
somewhat bigger than in the previous experiment. This is caused by approximation
errors between the polynomials and the real shape of the line. Besides,
uncertainties in the exposure instant cause additional errors.
The errors of the method of this chapter are listed in the right-hand column.
They are compared, first, with a method which, using the same control
architecture, works without the predictive interface of the position controller,
and therefore without feed-forward control. So with this method only the
standard feedback controller is used in the inner control loop. In contrast, the
computation of the desired path is unchanged. Depending on the shape of the
line, only a small increase of the path errors appears with this setup. This is
because, although the position control is not predictive, the vision system
still uses the upper and lower regions of the image, thus allowing predictions
of the desired path.
{}^r x_{cx}(k) = {}^r x_{cx}(k-1) + K_P \cdot {}^r e_x(k) + K_D \cdot \left( {}^r e_x(k) - {}^r e_x(k-1) \right)        (21)
Because of the tilted mounted camera (see Fig. 1d), a control error of

{}^r e_x = \left( {}^r z_n - {}^a T_{c23} \right) \cdot \frac{ {}^a T_{c00} \cdot \xi + {}^a T_{c01} \cdot \eta + {}^a T_{c02} }{ {}^a T_{c20} \cdot \xi + {}^a T_{c21} \cdot \eta + {}^a T_{c22} } + \left( {}^a T_{c03} - {}^r x_n \right)        (22)
is evaluated. {}^a T_{cij} are the elements of the (constant) transformation
matrix between tcp and camera. In this experiment the performance is poor, and
without limitation² the robot would leave the allowed range of accelerations.
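For comparison, the feedback law (21) can be sketched as a simple discrete PD update; the gains below are illustrative, not the values used in the experiments.

```python
def pd_feedback_step(x_prev, e, e_prev, K_P=0.5, K_D=0.1):
    """One update of the feedback law (21):
    x_c(k) = x_c(k-1) + K_P*e(k) + K_D*(e(k) - e(k-1)).
    The gains K_P, K_D are invented placeholders."""
    return x_prev + K_P * e + K_D * (e - e_prev)

# integrate a constant 2 mm control error over three sampling steps
x_c, e_prev = 0.0, 0.0
for _ in range(3):
    e = 2.0
    x_c = pd_feedback_step(x_c, e, e_prev)
    e_prev = e
```

Because the command is incremented as long as the error persists, such a purely feedback-based loop necessarily lags the line at high speed, which is the behaviour the comparison illustrates.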
For the non-continuous layout of the sheets in Fig. 4 we restrict ourselves to
the modified method of section 5.2, because this method outperforms the other
one except for special cases, as in (Lange et al., 2006), where both methods are
applicable. This time the programmed path is a straight line in y-direction,
executed back and forth at 0.7 m/s and with s_r = 0. In contrast to the previous
experiments, at 600 mm the distance between the camera and the edge is about
twice as large as before. This reduces the positional resolution but enlarges
the visible region to about n_i = 25 sampling steps of the controller, which is
sufficient with n_d = 15. Such an extension is useful to fulfil n_i ≫ n_d.
The results are displayed in Fig. 8, on the left-hand side with respect to the
reference path and on the right-hand side with respect to the sensed edge. With
explicit image-based control (Fig. 8a), or with small D as in Fig. 8b, we get
high accelerations at the vertices in the sheets. On the other hand, the edges
are tracked as closely as possible. This desired path is accurately executed by
the ideal position-controlled robot except for a couple of time steps in which
the acceleration limits of the robot are reached. Nevertheless a mean tracking
error of about 1 mm is reached.
By specifying M = 0.001 s² and D = 0.06 s we define a desired trajectory that
leaves the edges but shows smooth transitions (see Fig. 5). Bigger impedance
parameters, as in Fig. 8d, are not recommended, since then n_i limits the
smoothing. Note that the vertex halfway along the left-hand diagrams of Fig. 8
is not smoothed because this is the point of reversing.
Videos of the reported and additional experiments can be found in (Lange,
2006).
² For convenience, accelerations exceeding the limitations are scaled to the
feasible values.
Figure 8. Plots of {}^r x_d (left) and {}^o x_d (right) when tracking the sensed
edge (red), back and forth, using camera-based impedance control. Desired
(black) and actual (dashed green) paths are almost identical, except for the
experiment without smoothing. (Panel b: impedance-based control with D = 0.02 s
and M = 0.001 s², yielding a mean distance to the edge of 1.7 mm.)
7. Conclusion
This chapter shows that vision systems are also applicable to tracking tasks at
high robot speed. Even accurate control of the robot is possible. The crucial
element is the predictive image evaluation, possibly in combination with an
adaptive positional feed-forward control.
The results are reached by means of a cost effective method for which addi-
tional hardware is limited to standard components. In order to guarantee low
costs, image processing, computation of the desired path and dynamical con-
trol are executed using only the standard processor of an industrial robot con-
troller, in our case the KUKA KRC1.
For non-continuous or non-contiguous paths we propose smoothing similar to
impedance control. Then a relaxation of the position-accuracy requirements is
tolerated for specified components. The user has to specify parameters in order
to find a compromise between a smooth trajectory and minimum path deviations
with respect to the sensed edges. The modified impedance-based method causes the
robot to follow the sensed edge as accurately as possible, except for vertices
or discontinuities, where the path is smoothed. Of the two methods shown, the
latter is favourable if a predictive sensor is used.
The methods are demonstrated in tasks where the location and the shape of
planar or spatial lines are used to modify the robot path during motion. For
continuous lines the mean resulting path error is below 1 mm, whereas for lines
with vertices, exact tracking is not achievable with a robot. In that case the
amount of smoothing determines the deviation from the lines.
Future work will improve the measurements of the camera pose e.g. using an
observer, thus avoiding oscillations caused by joint elasticity.
Acknowledgements
This work has been partially supported by the Bayerische Forschungsstiftung.
8. References
CalLab. http://www.robotic.dlr.de/VISION/Projects/Calibration/CalLab.html, 1999.
Clarke, D. W. , C. Mohtadi, and P. S. Tuff. Generalized predictive control - part
I. the basic algorithm. Automatica, 23(2):137–148, 1987.
Gangloff, J. A. and M. F. de Mathelin. Visual servoing of a 6-dof manipulator
for unknown 3-d profile following. IEEE Trans. on Robotics and Automa-
tion, 18(4):511–520, August 2002.
Ginhoux, R. et al. Beating heart tracking in robotic surgery using 500 Hz vis-
ual servoing, model predictive control and an adaptive observer. In Proc.
2004 IEEE Int. Conf. on Robotics and Automation (ICRA), pages 274–279,
New Orleans, LA, April 2004.
Comport, A. I., D. Kragic, E. Marchand, and F. Chaumette. Robust real-time
visual tracking: Comparison, theoretical analysis and performance
evaluation. In Proc. 2005 IEEE Int. Conf. on Robotics and Automation
(ICRA), pages 2852–2857, Barcelona, Spain, April 2005.
Grotjahn, M. and B. Heimann. Model-based feedforward control in industrial
robotics. The International Journal on Robotics Research, 21(1):45–60, January
2002.
Lange, F. and G. Hirzinger. Learning of a controller for non-recurring fast
movements. Advanced Robotics, 10(2):229–244, April 1996.
Lange, F. and G. Hirzinger. Is vision the appropriate sensor for cost oriented
automation? In R. Bernhardt and H.-H. Erbe, editors, Cost Oriented Auto-
mation (Low Cost Automation 2001), Berlin, Germany, October 2001. Pub-
lished in IFAC Proceedings, Elsevier Science, 2002.
Lange, F. and G. Hirzinger. Predictive visual tracking of lines by industrial ro-
bots. The International Journal on Robotics Research, 22(10-11):889–903, Oct-
Nov 2003.
Lange, F. and G. Hirzinger. Calibration and synchronization of a robot-
mounted camera for fast sensor-based robot motion. In Proc. 2005 IEEE
Int. Conf. on Robotics and Automation (ICRA), pages 3911–3916, Barcelona,
Spain, April 2005.
Lange, F. Video clips. http://www.robotic.de/?id=43, 2006.
Lange, F., M. Frommberger, and G. Hirzinger. Is impedance-based control
suitable for trajectory smoothing? In Preprints 8th IFAC Symposium on Ro-
bot Control (SYROCO 2006), Bologna, Italy, Sept. 2006.
Meta-Scout GmbH. SCOUT joint tracking system. http://www.scout-sensor.com/index-engl.html.
Nakabo, Y., T. Mukai, N. Fujikawa, Y. Takeuchi, and N. Ohnishi. Cooperative
object tracking by high-speed binocular head. In Proc. 2005 IEEE Int. Conf.
on Robotics and Automation (ICRA), pages 1585–1590, Barcelona, Spain,
April 2005.
Pertin, F. and J.-M. Bonnet des Tuves. Real time robot controller abstraction
layer. In Proc. Int. Symposium on Robots (ISR), Paris, France, March 2004.
Rives, P. and J.-J. Borrelly. Real-time image processing for image-based visual
servoing. In M. Vincze and G. D. Hager, editors, Robust vision for vision-
based control of motion, pages 99–107. IEEE Press, 2000.
Tahri, O. and F. Chaumette. Complex objects pose estimation based on image
moments invariants. In Proc. 2005 IEEE Int. Conf. on Robotics and Automa-
tion (ICRA), pages 438–443, Barcelona, Spain, April 2005.
Zhang, J., R. Lumia, J. Wood, and G. Starr. Delay dependent stability-limits in
high performance real-time visual servoing systems. In Proc. IEEE/RSJ
Int. Conference on Intelligent Robots and Systems, pages 485–491, Las Vegas,
Nevada, Oct. 2003.
26

Visual Control System for Robotic Welding

1. Introduction
to difficulty, error and even failure of the processing of the welding seam
image. Intelligent recognition algorithms, such as those discussed in (Kim et
al., 1996; Wu et al., 1996), can effectively eliminate some of these effects.
Besides intelligent recognition algorithms, however, improving the performance
of the image processing itself is an effective way to increase recognition
correctness.
Visual control methods fall into three categories: position-based, image-based
and hybrid methods (Hager et al., 1996; Corke & Good, 1996; Chaumette & Malis,
2000). As early as 1994, Yoshimi and Allen presented a system to find and locate
an object with "active uncalibrated visual servoing" (Yoshimi & Allen, 1994).
Experimental results by Cervera et al. demonstrated that using pixel coordinates
is disadvantageous compared with 3D coordinates estimated from the same pixel
data (Cervera et al., 2002). On the other hand, although position-based visual
control methods such as (Corke & Good, 1993; 1996) have better stability, they
have lower accuracy than the former, because errors of the kinematics and the
camera influence their precision. Malis et al. proposed a hybrid method that
controls the translation in image space and the rotation in Cartesian space; it
combines the advantages of the two methods above (Malis et al., 1998; 1999;
Chaumette & Malis, 2000).
In this chapter, a calibration method for the laser plane is presented, which is
easy to realize and makes it possible to run the hand-eye system calibration
automatically. Second, image processing methods for the laser stripe of a
welding seam are investigated. Third, a novel hybrid visual servoing control
method is proposed for robotic arc welding with a general six-degrees-of-freedom
robot. The rest of this chapter is arranged as follows. The principle of a
structured light vision sensor is introduced in Section 2, where the robot
frames are also assigned. In Section 3, the laser plane equation of a structured
light visual sensor is deduced from a group of rotations in which the position
of the camera's optical centre is kept unchanged in the world frame. In Section
4, a method to extract feature points based on the second-order difference is
proposed for type V welding seams. A main characteristic line is obtained using
the Hotelling transform and the Hough transform. The feature points in the seam
are found according to its second difference. To overcome the reflex problem, an
improved method based on the geometric centre is presented for multi-pass
welding seams in Section 5. The profiles of welding seam grooves are obtained
according to the column intensity distribution of the laser stripe image. A
gravity centre detection method is provided to extract feature points on the
basis of the conventional corner detection method. In Section 6, a new hybrid
visual control method is presented. It consists of a position control inner loop
in Cartesian space and two outer loops. One outer loop is position-based visual
control in Cartesian space for moving in the direction of the welding seam, i.e.
welding seam tracking; the other is image-based visual control in image space
for adjustments to eliminate tracking errors. Finally, the chapter ends with
conclusions in Section 7.
Visual Control System for Robotic Welding 715
Generally, the camera has a small view angle, and its intrinsic parameters can
be described with the pinhole model, as given in (1). Its extrinsic parameters
are given in (2).
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & 0 & u_0 \\ 0 & k_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c/z_c \\ y_c/z_c \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_c/z_c \\ y_c/z_c \\ 1 \end{bmatrix}        (1)
where [u, v] are the coordinates of a point in an image, [u0, v0] denote the im-
age coordinates of the camera’s principal point, [xc, yc, zc] are the coordinates
of a point in the camera frame, Min is the intrinsic parameters matrix, and [kx,
ky] are the magnification coefficients from the imaging plane coordinates to the
image coordinates. In fact, [kx, ky] are formed with the focal length and the
magnification factor from the image size in mm to the imaging coordinates in
pixels.
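As a sketch of equation (1), with all numeric values invented for illustration:

```python
# Intrinsic parameters of equation (1); the numbers below are made-up
# examples, not a calibration result.
k_x, k_y = 800.0, 800.0      # magnification coefficients (pixels)
u0, v0 = 384.0, 288.0        # principal point (pixels)

def project(xc, yc, zc):
    """Pinhole projection of equation (1): map a camera-frame point
    [xc, yc, zc] to image coordinates [u, v]."""
    u = k_x * (xc / zc) + u0
    v = k_y * (yc / zc) + v0
    return u, v

u, v = project(0.05, -0.02, 0.5)   # a point 0.5 m in front of the camera
```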
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = {}^c M_w \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}        (2)
where [x_w, y_w, z_w] are the coordinates of a point in the object frame, and
{}^c M_w is the extrinsic parameter matrix of the camera, i.e. the transformation
from the camera frame C to the world frame W. In {}^c M_w,
\vec{n} = [n_x\ n_y\ n_z]^T is the direction vector of the x-axis,
\vec{o} = [o_x\ o_y\ o_z]^T that of the y-axis, and \vec{a} = [a_x\ a_y\ a_z]^T
that of the z-axis of the frame W expressed in the frame C, and
\vec{p} = [p_x\ p_y\ p_z]^T is the position vector.
Camera calibration is not a big problem today, but laser plane calibration is
still difficult. Therefore, the calibration of the structured light vision
sensor focuses on the laser plane rather than the camera. In the following
discussion (Xu & Tan, 2004), the parameters of the camera are supposed to be
well calibrated in advance.
ax + by + cz + 1 = 0 (3)
An arbitrary point P in the laser stripe must lie on the line formed by the lens
centre and the imaging point [x_{c1}, y_{c1}, 1]^T. Formula (4) shows the
equation of this line in frame C:

[x\ \ y\ \ z]^T = [x_{c1}\ \ y_{c1}\ \ 1]^T \, t        (4)
On the other hand, the imaging point [x_{c1}, y_{c1}, 1]^T can be calculated
from (1) as follows.
From (3) and (4), the coordinates of point P in frame C can be expressed as
functions of the parameters a, b, and c, as given in (6). Furthermore, its
coordinates [x_w, y_w, z_w] in frame W can be obtained as given in (7).
Let

{}^w T_e \, {}^e T_c = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \vec{n} & \vec{o} & \vec{a} & \vec{p} \\ 0 & 0 & 0 & 1 \end{bmatrix}        (8)

\begin{cases} x_w = n_x x + o_x y + a_x z + p_x \\ y_w = n_y x + o_y y + a_y z + p_y \\ z_w = n_z x + o_z y + a_z z + p_z \end{cases}        (9)
If the surface of the work-piece is a plane, the points in the laser stripe will
satisfy its plane equation (10):

A x_w + B y_w + C z_w + 1 = 0        (10)

in which A, B and C are the parameters of the work-piece plane in frame W.
Substituting (9) into (10) gives

A (n_x x + o_x y + a_x z) + B (n_y x + o_y y + a_y z) + C (n_z x + o_z y + a_z z) + A p_x + B p_y + C p_z + 1 = 0        (11)
Let D = A p_x + B p_y + C p_z + 1. The lens centre of the camera,
[p_x, p_y, p_z], is certainly not on the plane of the work-piece, therefore the
condition D ≠ 0 is satisfied. Equation (11) is rewritten as (12) by dividing it
by D and applying (6) to it (Xu & Tan, 2004).
If the optical axis of the camera is not parallel to the plane of the laser
light, then c ≠ 0 is satisfied. In fact, the camera must be mounted in some
direction other than parallel to the plane of the laser light in order to
capture the laser stripe at all. Dividing (12) by c then yields (13).
Provided that the lens centre [p_x, p_y, p_z] is kept unchanged in frame O, a
series of laser stripes in different directions is formed by changing the pose
of the vision sensor. Any point in each laser stripe on the same plane of a
work-piece satisfies (13). Because of the linear correlation, only two points
need be selected from each stripe to substitute into formula (13). They form a
group of linear equations, whose number is twice the number of stripes. If the
number of equations is greater than 5, they can be solved with the least-squares
method to get the parameters A_2, B_2, C_2, a_1, b_1.
Now the task of the laser calibration is to find the parameter c. The procedure
is very simple. It is well known that the distance between two points P_i and
P_j on the stripe is

d = \sqrt{d_x^2 + d_y^2 + d_z^2}        (14)

in which [x_{wi}, y_{wi}, z_{wi}] and [x_{wj}, y_{wj}, z_{wj}] are the
coordinates of the points P_i and P_j in the world frame, and
d_x = x_{wj} - x_{wi}, d_y = y_{wj} - y_{wi}, d_z = z_{wj} - z_{wi} are the
coordinate decomposition values of the distance d. Substituting (6) and (9)
into (14), then
$$
\begin{aligned}
d_x = \frac{1}{c}\Bigg[\; & n_x\left(\frac{x_{c1j}}{a_1 x_{c1j}+b_1 y_{c1j}+1}-\frac{x_{c1i}}{a_1 x_{c1i}+b_1 y_{c1i}+1}\right)\\
+{} & o_x\left(\frac{y_{c1j}}{a_1 x_{c1j}+b_1 y_{c1j}+1}-\frac{y_{c1i}}{a_1 x_{c1i}+b_1 y_{c1i}+1}\right)\\
+{} & a_x\left(\frac{1}{a_1 x_{c1j}+b_1 y_{c1j}+1}-\frac{1}{a_1 x_{c1i}+b_1 y_{c1i}+1}\right)\Bigg] = \frac{1}{c}\,d_{x1}
\end{aligned} \tag{15}
$$

$$
d = \pm\sqrt{d_x^2+d_y^2+d_z^2} = \pm\frac{1}{c}\sqrt{d_{x1}^2+d_{y1}^2+d_{z1}^2} = \pm\frac{d_1}{c}, \qquad c = \pm\frac{d_1}{d} \tag{16}
$$
where d1 is the distance between the two points on the stripe calculated with
parameters a1 and b1, and d is the distance measured with a ruler.
Parameters a and b can then be directly calculated from c as in (17). Applying
a, b, and c to (6), the sign of parameter c is determined by the constraint z>0.

$$
\begin{cases} a = a_1 c\\ b = b_1 c \end{cases} \tag{17}
$$
The camera in the vision sensor was calibrated in advance. Its intrinsic
parameter matrix Min and extrinsic matrix eTc were given as follows.
Feature points
A welding seam of type V was measured using the structured light vision
sensor to verify the hand-eye system. The measurements were conducted 15
times along the seam. Three points were selected from the laser stripe each
time: two edge points and one bottom point. Their coordinates in frame W
were computed via the method proposed above. The results are shown in
Table 1.
Figure 4. The data graph of vision measurement results of a type V welding seam: (a) 3D space
Visual Control System for Robotic Welding 723
The gray image of the laser stripe is captured by a camera with a light filter.
Generally, the image is large; for example, it can be as large as 576×768 pixels.
Therefore, simple and efficient image pre-processing is essential to improve
real-time visual control performance. The image pre-processing includes image
segmentation, image enhancement and binarization (Xu et al., 2004a; 2004b).
First, the background gray value of the image is estimated. Equally spaced
lines are drawn along the horizontal and vertical directions. The gray values of
all pixels on these lines are summed, and their average is taken as the
background gray value, as given in (18).
$$
\begin{cases} B = \dfrac{1}{n_w n_1 + n_h n_2}\left( \displaystyle\sum_{i=1}^{n_w}\sum_{j=1}^{n_1} I(i, 10j) + \sum_{i=1}^{n_2}\sum_{j=1}^{n_h} I(10i, j) \right)\\ n_1 = \mathrm{Int}(n_h/10), \quad n_2 = \mathrm{Int}(n_w/10) \end{cases} \tag{18}
$$
where nw and nh are the image width and height respectively, n1 is the number
of horizontal lines, n2 is the number of vertical lines, and I(x, y) is the gray
value of the pixel in coordinates (x, y).
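The background estimate of (18) can be sketched in NumPy as follows (an illustrative fragment, not the authors' implementation; the function name is an assumption, and rows/columns are sampled every 10 pixels as in the equation):

```python
import numpy as np

def background_gray(img):
    """Estimate the background gray value B as in (18): average the
    pixels lying on horizontal and vertical lines drawn every 10 pixels."""
    nh, nw = img.shape            # image height and width
    n1, n2 = nh // 10, nw // 10   # number of horizontal / vertical lines
    h_lines = img[9::10, :]       # n1 horizontal lines (every 10th row)
    v_lines = img[:, 9::10]       # n2 vertical lines (every 10th column)
    total = h_lines.sum(dtype=float) + v_lines.sum(dtype=float)
    return total / (nw * n1 + nh * n2)
```

For a uniform image the estimate reproduces the uniform gray value exactly.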
Usually, the laser stripe is brighter than the background. Along the lines
drawn above, all pixels with gray values greater than B+T1 are recorded,
where T1 is the gray threshold. The target area in the image is determined
from the maximum and minimum coordinates of the recorded pixels along the
horizontal and vertical directions, and is bounded by X1, X2, Y1 and Y2.
During welding, the structured light image suffers from arc light, spatter, and
sharply changing background brightness. The intensity of the arc light and
spatter changes rapidly, while the laser intensity stays stable. Based on this
fact, the effect of arc light and spatter can be partly eliminated by taking the
smaller gray value of sequential images as the new gray value, as (20):

$$
I(i, j) = \min\{\, I_k(i, j),\; I_{k-1}(i, j) \,\}, \qquad X_1 \le i \le X_2,\; Y_1 \le j \le Y_2 \tag{20}
$$

where Ik is the image captured at the k-th sampling and Ik-1 at the (k−1)-th.
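This frame-minimum operation is a one-liner in NumPy; the sketch below is illustrative (function name assumed) and applies it to a pair of consecutive frames:

```python
import numpy as np

def suppress_arc(prev_frame, cur_frame):
    """Per-pixel minimum of two consecutive frames: arc light and
    spatter flare briefly while the laser stripe stays stable, so the
    minimum keeps the stripe and darkens transient disturbances."""
    return np.minimum(prev_frame, cur_frame)
```

In practice this would be applied only inside the target area [X1, X2] × [Y1, Y2].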
The target area is divided into several parts, and its gray values are divided
into 25 levels. For every part, the appearance frequency of every gray level is
calculated, as given in (21).
$$
\begin{cases} F(k, h) = \displaystyle\sum_{i=X_1}^{X_2} \sum_{j=Y_1}^{Y_2} P(k, h)\\ P(k, h) = \begin{cases} 1 & k = \mathrm{Int}(i/5),\; h = \mathrm{Int}(I(i,j)/10)\\ 0 & \text{others} \end{cases} \end{cases} \tag{21}
$$
Taking into account the varying contrast between the laser stripe and the
background, the gray value of the higher level whose appearance reaches a
specified frequency is regarded as the image enhancement threshold T2(k).
$$
T_2(k) = 10K, \quad \text{iff}\;\left( \sum_{h=K}^{25} F(k, h) > S_1 \right) \vee \left( F(k, K) > S_2 \right) \tag{22}
$$
where S1 is the specified sum of the frequency with the higher gray level, S2 is
the specified frequency with higher level in one child area, and K is the gray
level, 1≤K≤25.
According to the threshold of every child area, high-pass filtering and image
enhancement are applied to the target area, followed by Gauss filtering and
binary thresholding. Fig. 5 shows the result of image segmentation of a welding seam. In
detail, Fig. 5(a) is the original image with inverse colour, Fig. 5(b) shows its
distribution of gray frequency, Fig. 5(c) is the image of the strengthened target
area, and Fig. 5(d) is the binary image. Fig. 5(e) and Fig. 5(f) are two frames of
original images with inverse colour in sequence during welding, and Fig. 5(g)
is the processing result via taking the least gray value in the target area with
(20). Fig. 5(h) is its binary image. It can be seen that the binary images of weld-
ing seams, obtained after image pre-processing with the proposed method, are
satisfactory.
Because the turning points of the laser stripe are the brim points of the weld-
ing seam, they are selected as the feature points. To adjust the pose of the weld
torch easily, some points on the weld plane are required. Therefore, the goal of
features extraction is to search such turning points and weld plane points from
the binary image.
To thin the binary image of the welding seam, the average location between
the upper edge and the lower edge detected from the binary image is regarded
as the middle line of the laser stripe. Fig. 6(a) shows the upper edge, lower
edge, and middle line of the laser stripe. Because of the roughness of the
binary laser stripe, the middle line contains high-frequency noise, as seen at
the bottom of Fig. 6(b).
The middle line, stored in a two-dimensional array, is transformed via the
Hotelling transformation to align its principal direction with the x-axis. The
Hotelling transformation is briefly described as follows.
First, the position vector of the average of all points on the middle line is
computed:

$$
m_d = \frac{1}{N} \sum_{i=1}^{N} m_d(i) \tag{23}
$$

where N is the number of points on the middle line, and md is the position
vector of the average, md = [md(1) md(2)]^T; md(i,1) is the x-coordinate of the
i-th point on the middle line.
Second, the position vector of each point on the middle line after the Hotelling
transformation is calculated, using the covariance matrix

$$
C_d = \frac{1}{N} \sum_{i=1}^{N} m_d(i)\, m_d(i)^T - m_d\, m_d^T \tag{24}
$$

where mdh(i) = [mdh(i,1) mdh(i,2)]^T is the position vector of the i-th point on
the middle line after the Hotelling transformation.
$$
\tilde{m}_{dh}(k,2) = \left. \sum_{h=-5}^{5} m_{dh}(k-h,2)\,\mu(h) \;\middle/\; \sum_{h=-5}^{5} \mu(h) \right. \tag{26}
$$

where $\tilde{m}_{dh}(k,2)$ is the y-coordinate of the k-th point on the filtered middle line.
$$
\mu(h) = \begin{cases} 1 & -3 \le h \le 3\\ 2 - |h|/3 & 3 < |h| \le 5\\ 0 & |h| > 5 \end{cases} \tag{27}
$$
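The weighted filter of (26) with the window (27) can be sketched as follows (illustrative NumPy code, not the authors' implementation; the boundary is handled by truncating the window, which is an assumption):

```python
import numpy as np

def weight(h):
    """Window weights mu(h) of (27)."""
    ah = abs(h)
    if ah <= 3:
        return 1.0
    if ah <= 5:
        return 2.0 - ah / 3.0
    return 0.0

def smooth_middle_line(y):
    """Weighted moving average of the middle-line y-coordinates, (26)."""
    w = [weight(h) for h in range(-5, 6)]
    n = len(y)
    out = np.empty(n)
    for k in range(n):
        num = den = 0.0
        for h in range(-5, 6):
            if 0 <= k - h < n:          # truncate window at the ends
                num += y[k - h] * w[h + 5]
                den += w[h + 5]
        out[k] = num / den
    return out
```

The window removes the high-frequency noise of the middle line while preserving the slow turning of the stripe.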
A line obtained by the Hough transform that is closest to the x-axis direction
after the Hotelling transformation is regarded as the main line. The locations
of points on the middle line are mapped into the parameter space A(p, q) of the
line function, as shown in (28), and the (p, q) with the maximum value of A
gives the parameters of the main line. All points on the middle line that satisfy
the main line function are feature points of the weld plane.
$$
\begin{cases} A(p, q) = \displaystyle\sum_{k=1}^{M} \sum_{p=p_{Min}}^{p_{Max}} B(p, q)\\ B(p, q) = \begin{cases} 1 & q = -p\,\tilde{m}_{dh}(k,1) + \tilde{m}_{dh}(k,2)\\ 0 & \text{others} \end{cases} \end{cases} \tag{28}
$$
The main line is rotated by an angle in order to make it parallel to the x-axis:

$$
m_{dr}(i) = V_1 \tilde{m}_{dh}(i) = \begin{bmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{bmatrix} \tilde{m}_{dh}(i) \tag{29}
$$

where θ = atan(p) is the inclination angle between the main line and the
x-axis, mdr(i) is the position vector of the i-th point on the rotated middle line,
and V1 is the rotation matrix formed from cosθ and sinθ.
The point with the maximum local second derivative is the turning point of the
middle line. After the inverse transform given in (30), the position of the
welding seam feature point in the original image is obtained, where mdrm(i) is
the position vector of the i-th turning point on the middle line, and mdm(i) is
the turning point position in the original image.
The curve at the top of Fig. 6(b) shows the middle line after filtering and
transformation. The second derivative of the middle line is shown in Fig. 6(c).
Two feature points of the welding seam in the original image can be read from
Fig. 6(d).
Fig. 7 shows two frames of laser images of a welding seam with a type V
groove: Fig. 7(a) is an original image before welding, and Fig. 7(b) is an image
with laser reflection on the surface of the welding seam after root pass
welding. The two images differ to a very large degree, so they should be dealt
with using different strategies. The method proposed in Section 4 has
difficulty with the image in Fig. 7(b). However, the two images share a
common property: the welding seam area is only part of the image. So the
welding seam area should be detected first to reduce the computation cost
(Li et al., 2005).
Figure 7. Images of welding seams (a) before and (b) after root pass welding
$$
\begin{cases} Y_2 = \min\{\, Y_p + h_w + m_w,\; n_h \,\}\\ Y_1 = \max\{\, Y_p - m_w,\; 0 \,\} \end{cases} \tag{31}
$$
In order to get high-quality profiles of seams, a method that combines
smoothing filtering, maximum detection and neighbourhood criteria is
proposed.
Figure 8. Intensity extraction of four columns
Firstly, a low-pass filter is designed to smooth the intensity curve of column i.
Usually, the pixels of the profile lie in the area of the main peak, and the peaks
caused by disturbances are lower or thinner. After smoothing the intensity
curve, the plateau of the main peak is reduced to a single maximum, and the
lower or thinner peaks are smoothed into hypo-peaks. Fig. 8 gives an example
of the intensity distributions of columns 300, 350, 400 and 450 of a welding
seam image. Fig. 8(a) shows the original image and the positions of the four
example columns; Fig. 8(b) shows their intensity distributions.
Then, according to an analysis of the image gray frequency, self-adaptive
thresholds of the images are calculated. Only pixels whose intensity exceeds
the threshold are regarded as valid. Thus the intensity curve is fragmented
into several peaks. By calculating the areas of the peaks, the main peak can be
obtained. In this way, only one point of maximum intensity remains on the
main peak of the intensity distribution curve; in other words, one point on the
profile is detected for each column. Thus an array of points indexed by column
is extracted from the welding image through the intensity distribution.
In order to extract the profile points more reliably, the neighbourhood
criterion is applied. Since the profile of a groove should be a continuous curve,
points that are apparently inconsistent with their neighbours are rejected.
When no pixel whose intensity exceeds the threshold can be found, there is no
record in the array for that column, and the data in the array are not
continuous. A linear interpolation algorithm is then used to fill up the curve
between broken points, and the discrete point array is transformed into a
continuous profile curve.
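The interpolation step can be sketched with NumPy's `np.interp` (an illustrative fragment; the function name and argument layout are assumptions):

```python
import numpy as np

def fill_profile(cols, vals, n_cols):
    """Linearly interpolate across columns where no valid peak was
    found, turning the discrete point array into a continuous profile.

    cols   : strictly increasing column indices with a detected point
    vals   : profile row value at each of those columns
    n_cols : total number of image columns
    """
    return np.interp(np.arange(n_cols), cols, vals)
```

`np.interp` requires the sample columns to be increasing, which holds here since points are indexed by column.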
In order to extract features of profiles for seam tracking, the first task is to
select the features. Usually the corner points of the profile are the brim points
of the welding groove, and they are often selected as features of welding
seams in single pass welding (Kim et al., 1996; Wu et al., 1996). The corner
detection method is only valid for images of single pass welding. In
multi-pass welding, there is distortion caused by the weld bead at the bottom
of the groove, and welding slag sometimes remains on the surface of the work
piece. As shown in Fig. 7(b), it is hard to judge the proper corner points by the
second derivative because of the distortion, so feature extraction by corner
point detection is not reliable in this case.
The centre of gravity of the groove area is selected as the feature instead,
because of its good stability relative to corner points.
Fig. 9(a) shows a profile of a groove extracted with the method of Section 5.1
from a welding seam after the root pass. Firstly, the profile is smoothed by a
Hanning filter to eliminate high-frequency noise, as shown in Fig. 9(b). In
order to get the figure of the groove area, the main vector of the profile is
required; it can be extracted by the Hough transform as described in Section 4.
Because the main vector is the asymptote of the profile, the main vector and
the profile approximately form a trapeziform figure. In the first step, the
bottom of the groove area is detected by template matching. Then, starting
from the bottom of the groove, the points on the profile are searched forward
and backward respectively.
Figure 9. Profile of the groove: (a) extracted profile; (b) profile after Hanning filtering
The two borders (b1, b2) of the figure are obtained where the distances
between the points on the profile and the main vector fall below a threshold
(5 pixels here). The trapeziform figure is defined by the border points, as
shown in Fig. 10. Finally, the gravity centre of the figure is extracted as the
feature by (32). A perpendicular to the main vector is drawn through the
gravity centre, and the intersection is taken as the feature point of the welding
seam.
$$
\begin{cases} F_u = \displaystyle\sum_{i=b_1}^{b_2} i\left[ y_p(i) - y_v(i) \right] \Big/ \sum_{i=b_1}^{b_2} \left[ y_p(i) - y_v(i) \right]\\[6pt] F_v = 0.5 \displaystyle\sum_{i=b_1}^{b_2} \left[ y_p(i)^2 - y_v(i)^2 \right] \Big/ \sum_{i=b_1}^{b_2} \left[ y_p(i) - y_v(i) \right] \end{cases} \tag{32}
$$
Figure 10. Main vector and border points on the groove profile
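The gravity-centre feature of (32) can be sketched as follows (illustrative NumPy code; the function and array names are assumptions):

```python
import numpy as np

def gravity_centre(i_idx, y_p, y_v):
    """Gravity centre (F_u, F_v) of the area enclosed between the
    groove profile y_p and the main vector y_v over the border
    columns, as in (32)."""
    diff = y_p - y_v                       # profile-to-main-vector gap
    s = diff.sum()
    f_u = (i_idx * diff).sum() / s         # horizontal centre
    f_v = 0.5 * (y_p**2 - y_v**2).sum() / s  # vertical centre
    return f_u, f_v
```

Because the centre integrates over the whole enclosed area, it is far less sensitive to local bead distortion than a single corner point.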
6.1 Jacobian matrix from the image space to the Cartesian space of the
end-effector
From (3) and (4), formula (33) is deduced (Xu et al., 2004b; 2005).
$$
\begin{bmatrix} x_c\\ y_c\\ z_c \end{bmatrix} = -\left( \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} x_{c1}\\ y_{c1}\\ 1 \end{bmatrix} \right)^{-1} \begin{bmatrix} x_{c1}\\ y_{c1}\\ 1 \end{bmatrix} \tag{33}
$$
$$
\begin{bmatrix} x_e\\ y_e\\ z_e\\ 1 \end{bmatrix} = {}^{e}M_c \begin{bmatrix} x_c\\ y_c\\ z_c\\ 1 \end{bmatrix} = \begin{bmatrix} {}^{e}m_{xc} & {}^{e}n_{xc} & {}^{e}o_{xc} & {}^{e}p_{xc}\\ {}^{e}m_{yc} & {}^{e}n_{yc} & {}^{e}o_{yc} & {}^{e}p_{yc}\\ {}^{e}m_{zc} & {}^{e}n_{zc} & {}^{e}o_{zc} & {}^{e}p_{zc}\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c\\ y_c\\ z_c\\ 1 \end{bmatrix} = \begin{bmatrix} {}^{e}R_c & {}^{e}p_c\\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_c\\ y_c\\ z_c\\ 1 \end{bmatrix} \tag{34}
$$

in which eMc is the extrinsic parameter matrix of the camera relative to the
end-effector of the robot, eRc is the rotation matrix, and epc is the position
vector.
$$
\begin{bmatrix} \dot{x}_e\\ \dot{y}_e\\ \dot{z}_e\\ 0 \end{bmatrix} = \begin{bmatrix} {}^{e}R_c & {}^{e}p_c\\ 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{x}_c\\ \dot{y}_c\\ \dot{z}_c\\ 0 \end{bmatrix}, \qquad \begin{bmatrix} \dot{x}_e\\ \dot{y}_e\\ \dot{z}_e \end{bmatrix} = {}^{e}R_c \begin{bmatrix} \dot{x}_c\\ \dot{y}_c\\ \dot{z}_c \end{bmatrix} \tag{35}
$$
Substituting (5) into (33) and taking the time derivative yields (36) (Xu et al.,
2005):

$$
\begin{bmatrix} \dot{x}_c\\ \dot{y}_c\\ \dot{z}_c \end{bmatrix} = J_c(u, v) \begin{bmatrix} \dot{u}\\ \dot{v} \end{bmatrix} \tag{36}
$$

in which Jc(u, v) is the Jacobian matrix from the image space to the Cartesian
space in the camera frame C, and D = a(u − u0)/kx + b(v − v0)/ky + c is a
constraint of the laser plane.
Substituting (36) into (35), the Jacobian matrix from the image space to the
Cartesian space in the end-effector frame E is obtained, as given in (37):

$$
\begin{bmatrix} \dot{x}_e\\ \dot{y}_e\\ \dot{z}_e \end{bmatrix} = J(u, v) \begin{bmatrix} \dot{u}\\ \dot{v} \end{bmatrix}, \qquad \begin{bmatrix} dx_e\\ dy_e\\ dz_e \end{bmatrix} = J(u, v) \begin{bmatrix} du\\ dv \end{bmatrix} \tag{37}
$$
$$
J(u,v) = {}^{e}R_c J_c(u,v) = \frac{1}{D^2}\, {}^{e}R_c \begin{bmatrix} -b(v - v_0)/(k_x k_y) - c/k_x & b(u - u_0)/(k_x k_y)\\ a(v - v_0)/(k_x k_y) & -a(u - u_0)/(k_x k_y) - c/k_y\\ a/k_x & b/k_y \end{bmatrix} \tag{38}
$$
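The Jacobian of (38) can be assembled directly from the calibration parameters; the sketch below is illustrative (function name and argument order are assumptions):

```python
import numpy as np

def image_jacobian(u, v, a, b, c, kx, ky, u0, v0, R_ec):
    """Jacobian J(u, v) of (38), mapping image increments [du, dv] to
    end-effector-frame increments, under the laser-plane constraint
    D = a(u - u0)/kx + b(v - v0)/ky + c."""
    D = a * (u - u0) / kx + b * (v - v0) / ky + c
    Jc = np.array([
        [-b * (v - v0) / (kx * ky) - c / kx,  b * (u - u0) / (kx * ky)],
        [ a * (v - v0) / (kx * ky),          -a * (u - u0) / (kx * ky) - c / ky],
        [ a / kx,                             b / ky],
    ])
    return R_ec @ Jc / D**2   # J = (1/D^2) * eRc * Jc
```

Note that J depends on the current image point (u, v), so it must be re-evaluated at every sampling instant.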
6.2.1 The model of hybrid visual servoing control for robotic arc welding
The scheme of the hybrid visual servoing control method proposed in this
chapter for robotic arc welding consists of four main parts: the control of
motion along the welding seam, the control of tracking adjustment, the
position control of the robot, and image feature extraction. The block diagram
is shown in Fig. 11. Position-based visual control in Cartesian space is
employed in the process of moving along the welding seam. From the image of the structured
light stripe at the i-th sampling, the image coordinates ui' and vi' of the feature
point Pi on the stripe are extracted. Then [xei, yei, zei], the coordinates of point
Pi in the end-effector frame, are computed with (5), (33) and (34). In addition,
the coordinates of point Pi-1 in the current end-effector frame, [xei-1, yei-1, zei-1],
can be obtained through transformation according to the movement Δi of the
end-effector at the previous step. The direction of the welding seam is then
determined from [xei-1, yei-1, zei-1] and [xei, yei, zei]. To reduce the influence of
random disturbances, the coordinates of the n+1 points Pi-n to Pi in the
end-effector frame can be used to calculate the direction of the welding seam
through fitting. The direction
vector of the welding seam is taken as the movement Δli of the end-effector
after multiplication by a proportional factor K. In the part controlling motion
along the welding seam, the measured direction above is taken as the desired
value for controlling the movement of the robot. It is inevitable that apparent
errors exist in the process of moving along the welding seam. Therefore the
second part, tracking adjustment with visual servoing control in image space,
is introduced. From the desired image coordinates [u, v] and the actual ones
[ui', vi'] of the feature point Pi, the errors [dui, dvi] of the image coordinates as
well as the estimated Jacobian matrix Ĵ(u, v) are calculated. Then
[dx̂e, dŷe, dẑe] is computed using (37) and taken as the position error of the
end-effector. The differential movement Δsi of the end-effector is generated by
a PID algorithm from these errors. Δi, the sum of Δsi and Δli, is taken as the
total movement of the end-effector. The third part, the position control of the
robot, controls the motion of the robot according to Δi. In detail, the position
and pose of the end-effector at the next step, in the world frame, are calculated
from the current ones and Δi. The joint angle of each robot joint is calculated
using inverse kinematics from that position and pose, and the position
controller for each joint then drives its motion according to the joint angle. The
position control of the robot is realized with the control device attached to the
robot.
Figure 11. The block diagram of hybrid visual servoing control for robotic arc welding
The other parts, namely the control of motion along the welding seam,
tracking adjustment and image feature extraction, are implemented on an
additional computer (Xu et al., 2005).
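One control cycle of this scheme can be sketched as follows (a much-simplified, illustrative fragment, not the authors' implementation: the PID controller is abstracted as a callable, and the seam direction is estimated from only two points rather than a fit):

```python
import numpy as np

def hybrid_servo_step(p_prev, p_cur, uv_err, J_hat, K, pid):
    """One cycle of the hybrid scheme: the seam-direction feedforward
    term (position-based, Cartesian) plus the image-space tracking
    correction (image-based, via the estimated Jacobian), summed into
    the commanded end-effector movement Delta_i.

    p_prev, p_cur : feature-point positions in the end-effector frame
    uv_err        : image-coordinate errors [du, dv]
    J_hat         : estimated Jacobian (3x2), eq. (37)
    K             : proportional factor along the seam direction
    pid           : callable implementing the PID law on the 3D error
    """
    direction = p_cur - p_prev
    direction = direction / np.linalg.norm(direction)
    delta_l = K * direction           # move-along-seam feedforward
    delta_s = pid(J_hat @ uv_err)     # image-space tracking correction
    return delta_l + delta_s          # total commanded movement
```

The returned movement would then be handed to the robot's own position controller, which executes it via inverse kinematics.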
In the hybrid visual servoing control system for robotic arc welding shown in
Fig. 11, the control of motion along the welding seam takes the direction of the
welding seam as the desired value to move the end-effector forward. Its
output Δli can be considered a disturbance ξ(t) for the image-based visual
servoing part. In the position control part, the motions of the robot are
controlled merely according to the desired movements of the end-effector and
the stated velocity. Under the condition that the movement velocity is low, the
position control of the end-effector movement can be treated as a first-order
inertia element. Therefore, the model of the hybrid visual servoing control
system can be simplified to the dynamic framework shown in Fig. 12.
Figure 12. The simplified block diagram of hybrid visual servoing control for robotic
arc welding
Although the laser stripe moves with the end-effector, the position [xe, ye, ze]
of the feature point P on the stripe, in the end-effector frame, varies with the
movement of the end-effector. The relation between the movement of the
end-effector and [xe, ye, ze] is denoted f(Δi'). The model of the camera and
image capture card is described as Min·eMc⁻¹.
6.2.3 The relation between the end-effector’s movement and the feature point position
In the end-effector frame, the equation of the laser plane can be expressed as
(39):

$$
a_e x + b_e y + c_e z + 1 - d_e = 0, \qquad \begin{cases} a_e = a m_x + b n_x + c o_x\\ b_e = a m_y + b n_y + c o_y\\ c_e = a m_z + b n_z + c o_z\\ d_e = a\,\vec{m} \cdot \vec{p} + b\,\vec{n} \cdot \vec{p} + c\,\vec{o} \cdot \vec{p} \end{cases} \tag{39}
$$

where $\vec{m} = [m_x\; m_y\; m_z]^T$, $\vec{n} = [n_x\; n_y\; n_z]^T$ and $\vec{o} = [o_x\; o_y\; o_z]^T$ are the orientation vectors of eMc as in (34).
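The coefficients of (39) follow mechanically from the columns of eMc; an illustrative sketch (function name assumed):

```python
import numpy as np

def laser_plane_in_effector(a, b, c, M_ec):
    """Coefficients of the laser plane expressed in the end-effector
    frame, eq. (39), from the extrinsic matrix eMc = [m n o p; 0 0 0 1]."""
    m, n, o, p = M_ec[:3, 0], M_ec[:3, 1], M_ec[:3, 2], M_ec[:3, 3]
    ae = a * m[0] + b * n[0] + c * o[0]
    be = a * m[1] + b * n[1] + c * o[1]
    ce = a * m[2] + b * n[2] + c * o[2]
    de = a * m.dot(p) + b * n.dot(p) + c * o.dot(p)
    return ae, be, ce, de
```

With eMc equal to the identity (camera frame coinciding with the end-effector frame) the plane coefficients are unchanged, as expected.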
Assume the equation of the welding seam in the end-effector frame at the i-th
sampling is given by (40):

$$
\begin{cases} x_{li} = x_{li0} + k_x t\\ y_{li} = y_{li0} + k_y t\\ z_{li} = z_{li0} + k_z t \end{cases} \tag{40}
$$
From (39) and (40), the coordinates of the feature point Pi on the stripe in the
end-effector frame at the i-th sampling are deduced, as seen in (41).
After the movement of the end-effector Δi' = [Δ'ix, Δ'iy, Δ'iz]^T, the equation of
the welding seam in the end-effector frame is obtained as (42).
Applying (42) to (39), the solution obtained as (43) gives the coordinates of the
feature point Pi+1 on the stripe in the end-effector frame at the (i+1)-th
sampling.
By comparing (41) and (43), the relation between the coordinates of Pi+1 and Pi
in the end-effector frame is derived. The state variables are selected as X1 = e,
X2 = Δi = [Δix, Δiy, Δiz]^T and X3 = Δi' = [Δ'ix, Δ'iy, Δ'iz]^T. It is easy to
establish the state equation of the system as (46) with reference to Fig. 12.
$$
\begin{cases} \dot{X}_1 = -M_{in}\, {}^{e}M_c^{-1} F X_3\\ \dot{X}_2 = K_i \hat{J} X_1 - \dfrac{1}{T_r} K_d \hat{J} M_{in}\, {}^{e}M_c^{-1} F X_2 + \left(\dfrac{K_d}{T_r} - K_p\right) \hat{J} M_{in}\, {}^{e}M_c^{-1} F X_3\\ \dot{X}_3 = \dfrac{1}{T_r} X_2 - \dfrac{1}{T_r} X_3 \end{cases} \tag{46}
$$
$$
V = \frac{1}{2} (\hat{J} X_1)^T \hat{J} X_1 + \frac{1}{2} X_2^T X_2 + \frac{1}{2} X_3^T X_3 \tag{47}
$$
$$
\begin{aligned}
\dot{V} ={}& -(\hat{J}X_1)^T \hat{J} M_{in}\,{}^{e}M_c^{-1} F X_3 + X_2^T K_i \hat{J} X_1 - \frac{1}{T_r} X_2^T K_d \hat{J} M_{in}\,{}^{e}M_c^{-1} F X_2\\
& + X_2^T \left[ \left(\frac{1}{T_r} K_d - K_p\right) \hat{J} M_{in}\,{}^{e}M_c^{-1} F + \frac{1}{T_r} \right] X_3 - \frac{1}{T_r} X_3^T X_3\\
={}& -\frac{1}{2} (\hat{J}X_1)^T \hat{J} M_{in}\,{}^{e}M_c^{-1} F \hat{J}X_1 + \frac{1}{2} (\hat{J}X_1 - X_3)^T \hat{J} M_{in}\,{}^{e}M_c^{-1} F (\hat{J}X_1 - X_3) - \frac{1}{2} X_3^T \hat{J} M_{in}\,{}^{e}M_c^{-1} F X_3\\
& + \frac{1}{2} (\hat{J}X_1)^T K_i \hat{J}X_1 - \frac{1}{2} (\hat{J}X_1 - X_2)^T K_i (\hat{J}X_1 - X_2) + \frac{1}{2} X_2^T K_i X_2 - \frac{1}{T_r} X_2^T K_d \hat{J} M_{in}\,{}^{e}M_c^{-1} F X_2\\
& + \frac{1}{2} X_2^T \left[ \left(\frac{1}{T_r} K_d - K_p\right) \hat{J} M_{in}\,{}^{e}M_c^{-1} F + \frac{1}{T_r} \right] X_2 + \frac{1}{2} X_3^T \left[ \left(\frac{1}{T_r} K_d - K_p\right) \hat{J} M_{in}\,{}^{e}M_c^{-1} F + \frac{1}{T_r} \right] X_3\\
& - \frac{1}{2} (X_2 - X_3)^T \left[ \left(\frac{1}{T_r} K_d - K_p\right) \hat{J} M_{in}\,{}^{e}M_c^{-1} F + \frac{1}{T_r} \right] (X_2 - X_3) - \frac{1}{T_r} X_3^T X_3\\
\le{}& -\frac{1}{2} (\hat{J}X_1)^T \left( \hat{J} M_{in}\,{}^{e}M_c^{-1} F - K_i \right) \hat{J}X_1 - X_2^T \left( K_p \hat{J} M_{in}\,{}^{e}M_c^{-1} F - \frac{I}{T_r} - \frac{1}{2} K_i \right) X_2\\
& - \frac{1}{2} X_3^T \left( I + K_p - \frac{1}{T_r} K_d \right) \hat{J} M_{in}\,{}^{e}M_c^{-1} F X_3 + o(\delta^2)
\end{aligned} \tag{48}
$$

where I is the identity matrix, and $o(\delta^2) = \frac{1}{2}(\hat{J}X_1 - X_3)^T \hat{J} M_{in}\,{}^{e}M_c^{-1} F (\hat{J}X_1 - X_3)$ is a second-order infinitesimal term that can be ignored.
Obviously, if condition (49) is satisfied, then $\dot{V} < 0$. According to
Lyapunov's stability theorem, the system is asymptotically stable.
Discussion: (1) In the ideal case, the parameters of the camera and the laser
plane are accurately calibrated, that is, Ĵ = J. It is easy to verify that
ĴMin eMc⁻¹F = I holds, which means the system degenerates to a linear system.
(2) When errors exist in Ĵ, if ĴMin eMc⁻¹F is positive definite, then PID
parameters satisfying condition (49) can make the system asymptotically
stable.
(3) If ĴMin eMc⁻¹F is negative definite, the stability of the system is not
ensured.
$$
K_p = \begin{bmatrix} 0.5 & 0 & 0\\ 0 & 0.5 & 0\\ 0 & 0 & 0.5 \end{bmatrix}, \qquad K_d = 0, \qquad K_i = \begin{bmatrix} 0.05 & 0 & 0\\ 0 & 0.01 & 0\\ 0 & 0 & 0.02 \end{bmatrix}
$$
The experimental welding seams were a type V groove welding seam and a
lap welding seam. The shielding gas was CO2, and the metal transfer mode
was short-circuit. The welding experiments for the type V groove seam and
the lap seam were conducted using the methods proposed in this chapter. The
results showed that the welding seam could be recognized and tracked well,
and the shape of the weld mark was good. Fig. 13 shows the results of the
welding experiment for a lap welding seam. The pixel coordinate u' of the
feature point during tracking and welding is shown in Fig. 13(a), and v' in
Fig. 13(b); their horizontal coordinates are sample times. The pixel coordinates
[u', v'] of the feature points during tracking and welding are shown in
Fig. 13(c). The weld mark after welding is shown in Fig. 13(d). Larger errors
exist near the end stage; they were caused by a small piece of scrap on the
welding seam, which resulted in large errors in the image coordinates of the
feature point.
7. Conclusions
A visual control system for robotic welding is introduced in this chapter. The
calibration of a vision sensor, the processing algorithms for laser stripe images,
and a hybrid visual control method are discussed in detail.
Based on the robot's movement, a method for calibrating the structured light
vision sensor is proposed. The laser plane can be calibrated from a group of
rotation movements. The experimental results show its effectiveness. It is easy
to realize and makes automatic hand-eye system calibration possible.
The image processing algorithms proposed in this chapter fall into two
categories: feature extraction based on the second-order difference, and
feature extraction based on the geometric centre. The former can recognize
welding seams of type V groove and find feature points precisely in the
presence of strong reflection, arc light and spatter disturbance. The latter can
robustly provide feature points for the welding seam after root pass welding.
A hybrid visual servoing method is proposed in this chapter, which consists of
a position control inner loop in Cartesian space and two outer loops: one is
position-based visual control in Cartesian space; the other is image-based
visual servoing control in image space. The former is employed for control of
motion along the direction of the welding seam; the latter for tracking
adjustment. The Jacobian matrix from the image space of the feature point on
the structured light stripe to Cartesian space varies with the movement of the
end-effector, but no singularity problem exists in the Jacobian matrix, so the
target can be kept in the field of view of the camera with the above control
method. Provided the errors between the estimated and real parameters of the
camera and the laser plane are not very large, the asymptotic stability of the
control system can be ensured through the selection of adequate PID
parameters.
The experimental results show the effectiveness of the hybrid visual servoing
control system for robotic arc welding. The current position and pose are not
necessary for the hybrid visual servoing control system, so it can be applied to
many kinds of robots that accept commands of movement relative to the
end-effector, to realize visual measurement and tracking.
In addition, the cleanliness of the work piece has an obvious influence on the
visual measurement results. An unclean surface sometimes results in gross
errors in welding seam tracking. Eliminating the influence of these gross
errors in image space is our work in the near future, as is the automatic
adjustment of the position and pose of the welding gun in the start stage.
Acknowledgement
The authors would like to thank the National High Technology Research and
Development Program of China for supporting this work under grant
No. 2002AA422160. We would also like to thank the National Key
Fundamental Research and Development Project of China (973,
No. 2002CB312200) for its support of this work.
8. References
Bakos, G. C.; Tsagas, N. F.; Lygouras, J. N.; Lucas J. (1993). Long distance non-
contact high precision measurements. International Journal of Electronics,
Vol. 75, No. 6, pp. 1269-1279, ISSN: 0020-7217.
Bolmsjo, G.; Olsson, M. & Cederberg, P. (2002). Robotic arc welding—trends
and developments for higher autonomy. The Industrial Robot, Vol. 29,
No. 2, pp. 98-104, ISSN: 0143-991X.
Cervera, E.; Berry, F. & Martinet, P. (2002). Image-based stereo visual servoing:
2D vs 3D features. 15th Triennial World Congress of the International Fed-
eration of Automatic Control, pp. 1630-1635, ISBN: 0-08-044295-1, Barce-
lona, Spain, July 2002, Elsevier Science Press.
Chaumette, F. & Malis, E. (2000). 2 1/2 D visual servoing: a possible solution to
improve image-based and position-based visual servoings. IEEE Interna-
tional Conference on Robotics and Automation, pp. 630-635, ISBN: 0-7803-
5889-9, San Francisco, USA, Apr. 2000, Institute of Electrical and Elec-
tronics Engineers Inc., Piscataway, USA
Corke, P. I. & Good, M. C. (1993). Controller design for high-performance vis-
ual servoing. Proceedings 12th World Congress International Federation of
Automatic Control, Vol. 9, pp. 395-398, ISBN: 0-08-042211-X, Sydney,
Australia, July 1993, Elsevier Science Press.
Corke, P. I. & Good, M. C. (1996). Dynamic effects in visual closed-loop sys-
tems. IEEE Transaction on Robotics and Automation, Vol. 12, No. 5, pp.
671-683, ISSN: 1042-296X.
Faugeras, O. D. & Toscani, G. (1986). The calibration problem for stereo. IEEE
Computer Society Conference on Computer Vision and Pattern Recognition,
pp. 15-20, ISBN: 0-8186-0721-1, Miami Beach, USA, 1986, IEEE Press,
New York, USA.
Hager, G. D.; Hutchinson, S. & Corke, P. I. (1996). A tutorial on visual servo
control. IEEE Transaction on Robotics and Automation, Vol. 12, No. 5, pp.
651-670, ISSN: 1042-296X.
Haug, K. & Pritschow, G. (1998). Robust laser stripe sensor for the automated
welding seam tracking in the shipbuilding industry. Proceedings of the
24th Annual Conference of the IEEE Industrial Electronics Society, pp. 1236-
1241, Aachen, Germany, Sep. 1998, IEEE Comp Soc, Los Alamitos, USA.
Kim, J. S.; Son, Y. T.; Cho, H. S.; Koh, K. I. (1996). A robust visual seam tracking
system for robotic arc welding. Mechatronics, Vol. 6, No. 2, pp. 141-163,
ISSN: 0957-4158.
Li, Y.; Xu, D. & Tan, M. (2005). Robust features extraction for structured light
images of welding seam in multi-pass submerged arc welding, 7th In-
ternational Conference on Electronic Measurement & Instruments, Vol. 6, pp.
1559-1563, ISBN: 7-5062-7443-4, Beijing, China, Aug. 2005, International
Academic Publishers, Beijing.
ISBN: 0-8186-5332-9, San Diego, USA, May 1994, IEEE Press, Piscata-
way, USA.
Zhang, J. & Djordjevich, A. (1999). Study on laser stripe sensor. Sensors and Ac-
tuators A: Physical, Vol. 72, No. 3, pp. 224-228, ISSN: 0924-4247.
Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pp.
1330-1334, ISSN 0162-8828.
Zhu, S. & Qiang X. (2000). Analysis of 3-D coordinate vision measuring meth-
ods with feature points on workpiece. Optics and Precision Engineering,
Vol. 8, No. 2, pp. 192-197, ISSN: 1004-924X.
Zou, D.; Ye, S. & Wang, C. (1995). Structured-lighting surface sensor and its
calibration. Optical Engineering, Vol. 34, No. 10, pp. 3040-3043, ISSN:
0091-3286.
27
Theodor Borangiu
1. Introduction
The chapter presents two methods and related motion control algorithms for
robots which are required to pick "on-the-fly" objects that move randomly on
conveyor belts; the instantaneous location of the moving objects is computed
by the vision system, acquiring images from a stationary, down-looking
camera. The algorithms for visual tracking of conveyor belts for moving object
access are partitioned into two stages: (i) visual planning of the instantaneous
destination of the robot, and (ii) dynamic re-planning of the robot's
destination while tracking the moving objects.
In the first method one assumes that conveyors are configured as external axes
of the robot, which allows their modelling by means of a special class of vari-
ables called belt variables. A belt variable is here considered as a relative trans-
formation (having a component variable in time) defining the location of a ref-
erence frame attached to the moving conveyor belt. By composing this
time-variable transformation (which reflects the contents of the belt's encoder)
with the time-invariant instantaneous object location (estimated by vision for
each object and compensated by the encoder offset value), the motion control
algorithm operates with a periodically updated destination; hence the robot
tracks the object moving on the belt.
In the second method the ensemble conveyor belt-actuator-sensor is config-
ured as a m ≤ 3 -axis Cartesian robot, leading thus to a problem of cooperation
between multiple robot manipulators subject to the multitasking control of a
computer. Conceptually, the problem is solved by defining a number of user
tasks which attach two types of "robots": the n – d.o.f. manipulator responsible
with grasping on-the-fly objects moving on the conveyor belt, and the m ≤ 3 -
axis robot emulating the conveyor belt under vision control. These user tasks
run concurrently with the internal system tasks of a multitasking robot con-
troller, mainly responsible for trajectory generation, axis servoing and system
resources management.
Both methods use the concept of Conveyor Belt Window to implement fast re-
action routines in response to emergency situations. The tracking algorithms
• scale_factor is the calibrating constant specifying the ratio between the ele-
mentary displacement of the belt and one pulse of the encoder.
Using such a belt variable, it becomes possible to describe the relationship be-
tween a belt encoder and the location and speed of the reference frame (con-
veniently chosen with respect to the manipulability domain of the robot ac-
cessing the belt) which maintains a fixed position and orientation relative to
the belt (Borangiu, 2004). The informational model of the conveyor belt is its
assigned belt variable, to which additional modelling data must be specified
for robot-vision compatibility:
• window parameters, defining the working area of the robot over the conveyor
belt;
• encoder offset, used to adjust the origin of the belt’s reference frame (e.g. rela-
tive to the moment when image acquisition is triggered).
The current orientation data in a belt variable is invariant in time, equal to that
expressed by nominal_trans. In order to evaluate the current location updated
by the same belt variable, the following real-time computation has to be per-
formed: multiplication of a unit vector in the direction of the X axis of
nominal_trans by belt_distance – a distance derived from the encoder's contents
(periodically read by the system) – and then addition of the result to the
position vector of nominal_trans. Symbolically, this computation reads:

belt_distance = (encoder_count − encoder_offset) * scale_factor    (1)

position(belt_variable) = position(nominal_trans) + belt_distance * vers(x_nominal_trans)    (2)
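The location update described above can be sketched in a general-purpose language as follows (a minimal planar illustration; the function name and the numeric values are hypothetical, while encoder_offset, scale_factor and nominal_trans mirror the text):

```python
import math

def belt_location(encoder_count, encoder_offset, scale_factor,
                  nominal_pos, nominal_angle):
    """Instantaneous location of the belt-attached reference frame:
    belt_distance = (encoder_count - encoder_offset) * scale_factor,
    position = position of nominal_trans + belt_distance * unit vector
    along the X axis of nominal_trans; orientation stays invariant."""
    belt_distance = (encoder_count - encoder_offset) * scale_factor
    ux, uy = math.cos(nominal_angle), math.sin(nominal_angle)
    x = nominal_pos[0] + belt_distance * ux
    y = nominal_pos[1] + belt_distance * uy
    return (x, y, nominal_angle)

# Belt aligned with the world X axis, 0.1 mm per encoder pulse:
loc = belt_location(encoder_count=2500, encoder_offset=500, scale_factor=0.1,
                    nominal_pos=(300.0, 150.0), nominal_angle=0.0)
# 2000 pulses * 0.1 mm = 200 mm downstream of nominal_trans
```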
Example 1:
A reaction routine continuously monitors a fast digital-input interrupt line
which detects the occurrence of an external event of the type: "an object has
completely entered the Conveyor Belt Window – and hence is completely visi-
ble". This event is detected by a photocell and triggers an image acquisition of
the moving object. The very moment the detected object is recognised as an
object of interest and successfully located, the position of the belt is reset, and
consequently the belt variable will encapsulate from now on the current
displacement of the belt relative to the belt's position in which the object was
successfully located. The V+ code is given below:
;locate an object, returning its name in $name and its location
;relative to the vision frame in vis.loc
VLOCATE (cam,0) $name,vis.loc
success = VFEATURE(1) ;evaluate the success of the locating op.
belt_offset = VFEATURE(8)
;The real-valued function VFEATURE(8) returns the contents of the
;belt's encoder when the strobe light for image acquisition was
;triggered.
RETURN
.END
The encoder's scale factor, represented by the constant parameter scale_factor,
can be evaluated by means of a belt calibration procedure.
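A common way to evaluate it is to jog the belt over a measured distance and divide by the accumulated encoder pulses; the sketch below illustrates the arithmetic (function name and readings are invented for the example):

```python
def calibrate_scale_factor(encoder_start, encoder_end, measured_displacement_mm):
    """scale_factor [mm/pulse]: elementary belt displacement per encoder pulse."""
    pulses = encoder_end - encoder_start
    if pulses == 0:
        raise ValueError("belt did not move during calibration")
    return measured_displacement_mm / pulses

# The belt moved 250.0 mm while the encoder advanced by 12500 pulses:
sf = calibrate_scale_factor(10000, 22500, 250.0)   # 0.02 mm per pulse
```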
2.2 The logical mechanism "Conveyor Belt Window" and emergency routines
A logical mechanism called Conveyor Belt Window (CBW) has been designed,
which allows the user to check the area of the belt in which the robot will
operate. A CBW defines a segment of the belt delimited by two planes
perpendicular to the direction of motion of the belt (the window is restricted
only in the direction of motion of the conveyor's belt) (Borangiu & Kopacek,
2004).
Hence, the returned value conforms to the WINDOW model shown in Fig. 2,
for which the value returned by the function WINDOW increases as the belt-
relative location moves downstream.
Figure 2. The WINDOW model: the value returned by WINDOW() increases as a belt-
relative location moves from the upstream limit towards the downstream limit, along
the belt's direction of motion
For robots tracking conveyor belts in order to pick moving objects recognised
and located by vision, the belt-relative transformation is %belt_var:part.loc
(variable in time), obtained by the composition of:
• the instantaneous transformation of the belt variable %belt_var, updated
every 8 milliseconds by the motion controller based on the current position
data read from the belt's encoder, until the robot's end-point completes the
necessary percentage of its motion segment towards the part's grasping
location;
• part.loc, the time-invariant object grasping location computed by vision.
If the time-variable destination lies outside the CBW, the system either:
• temporarily stops the robot, thus delaying the motion planning, until the
time-variable destination enters the belt window; or
• definitively stops the robot and immediately generates an error message.
The research on Guidance Vision for Robots (GVR) accessing moving targets
was directed to developing a convergent motion control algorithm which
visually plans the motion of the robot as a result of object detection, recogni-
tion and locating on a moving conveyor belt, and then tracks the object in
order to grasp it inside a conveniently defined belt window. The main idea
relies on dynamically changing the visually computed destination of the robot
end point by composing it with a belt-related transformation updated every
8 milliseconds from the encoder data.
If a stationary camera looks down at the conveyor belt, and supposing that its
field of view completely covers a conveyor belt window defined inside the
working area of the robot (after execution of a camera–robot calibration ses-
sion), then the image plane can be referenced by the time-invariant frame
(x_vis, y_vis), as represented in Fig. 3.
It is also assumed that the X axes of the reference frame of the robot (x_0), of the
conveyor's direction of motion (x_n) and of the image plane (x_vis) are parallel.
The conveyor belt is modelled by the belt variable %belt. Parts are circulating
on the belt randomly; their succession (current part type entering the CBW), dis-
tance from the central axis of the conveyor and orientation are unknown. The
"Look-and-Move" interlaced operating principle of the image processing sec-
tion and motion control section is used (Hutchinson, 1996), (Borangiu, 2001),
(West, 2001), (Adept, 2001). According to this principle, while an image of the
CBW is acquired and processed for object identification and locating, no mo-
tion command is issued and reciprocally, the camera will not snap images
while the robot tracks a previously located part in order to pick it "on-the-fly".
Figure 3. Frames and relative transformations for visual conveyor tracking: the robot
base frame (x_0, y_0, z_0), the vision frame (x_vis, y_vis), the belt frames nominal_trans
and instant_trans of %belt (between the upstream and downstream limits), the object-
attached frame (x_obj, y_obj) and the gripper-attached frame (x_g, y_g); the belt dis-
placement along x_n is (encoder_val − VFEATURE(8)) * scale_factor * vers(+x_n), and
the transformations to.cam[cam], vis.loc, part.loc, grip.trans and %belt:part.loc relate
these frames
The robot motion control algorithm for tracking the conveyor belt in order to
pick "on-the-fly", one by one, objects recognised and located by vision comput-
ing consists of the following basic steps:
1. Triggering the strobe light (synchronously or asynchronously relative to the
read cycle of the video camera) when image acquisition is requested from a
fast digital-input interrupt line connected to a photocell mounted at the
upstream limit of the CBW. The interrupt line signals that an object has
completely entered the belt window.
2. Recognising the single object that has just completely entered the belt window.
Object recognition is based in this approach exclusively on the match with
previously learned models of all objects of interest (Lindenbaum, 1997).
3. Locating the object which was recognised, by computing the coordinates of
its centre of mass and the angle between its minimum inertia axis (MIA)
and x_vis. As can be seen in Fig. 3, the object-attached frame (x_obj, y_obj) has
its abscissa aligned with the minimum inertia axis (MIA), and the current
location of the object in the image plane is computed by the vision section
and returned in vis.loc.
4. Planning the instantaneous destination of the robot. Once the object is recognised
as the instance of a model and located, the related grasping transformation
grip.trans is called. Assuming that the grasping style is such that the projec-
tion of the gripper's centre on the image plane coincides with the object's
centre of mass, the gripper-attached frame (x_g, y_g) will be offset relative to
the object-attached frame along z_0 by z_off millimetres and turned by
rz_off degrees about z_0. Now, using the relative transformation
to.cam[cam] (an output of the camera–robot calibration session) relating the
vision frame (x_vis, y_vis, z_vis) to the base frame of the robot (x_0, y_0, z_0), the cur-
rent destination of the robot (for a frozen conveyor belt) is computed from
the vision data as a composed transformation part.loc, expressing the grip-
per frame relative to the robot base frame:
part.loc = to.cam[cam]:vis.loc:grip.trans
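The composition in part.loc can be sketched with planar homogeneous transformations (a simplified 2-D illustration in a general-purpose language; the numeric frames are invented, only the transformation names mirror the text):

```python
import math

def trans(x, y, theta_deg):
    """Planar homogeneous transformation: rotation about z, translation in x, y."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a:b, i.e. apply b in the frame defined by a."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# to.cam[cam]: vision frame in robot base coordinates (from calibration)
to_cam = trans(100.0, 50.0, 0.0)
# vis.loc: object-attached frame in vision coordinates (from locating)
vis_loc = trans(30.0, 20.0, 90.0)
# grip.trans: gripper frame relative to the object-attached frame
grip_trans = trans(0.0, 0.0, -90.0)

# part.loc = to.cam[cam]:vis.loc:grip.trans
part_loc = compose(compose(to_cam, vis_loc), grip_trans)
gripper_xy = (part_loc[0][2], part_loc[1][2])
# gripper position in robot base coordinates: (130.0, 70.0)
```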
5. Synchronising the belt encoder with the motion of the object recognised in the
belt window. This operation consists in setting the offset of the conveyor
belt to a correct value. The SETBELT operation
establishes the point of interest of the conveyor belt modelled with %belt as
the point corresponding to the value encoder_val(strobe) of the en-
coder counter at the time the strobe light was triggered. This value is avail-
able immediately after object locating. Thus, as soon as one object is recog-
nised and located, the current belt position, identified by (x_i, y_i), is
reset so that it coincides with the nominal frame (x_n, y_n).
6. Tracking and picking the object moving on the belt. This requires issuing a lin-
ear motion command in the Cartesian space, relative to the belt. A com-
posed relative transformation %belt:part.loc, expressing the current com-
puted location of the gripper relative to the instantaneous moving frame
(x_i, y_i), is defined. Practically, the tracking procedure begins immediately
after the instantaneous position of the belt – expressed by the frame (x_i, y_i)
– has been initialised by the SETBELT operation, and consists in periodi-
cally updating the destination of the gripper by shifting it along the x_n axis
with the encoder counts accumulated during successive sampling periods
..., t_{k-1}, t_k, t_{k+1}, ..., with Δt = t_{k+1} − t_k = const:

%belt:part.loc ← SHIFT(%belt:part.loc BY encoder_count(Δt) * scale_factor, 0, 0)

7. Once the robot is commanded towards a destination relative to the belt, the
gripper will continuously track the belt until a new command is issued to
approach a location which is not relative to the belt.
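Step 6 can be sketched as a simulation: at every sampling period Δt the destination is shifted along the belt axis by the encoder counts accumulated in that period, while the robot closes in on the moving target (all names and numbers are hypothetical; the real controller performs this shift inside the trajectory generator):

```python
def track_belt(part_x, belt_counts_per_dt, scale_factor, robot_step_mm,
               robot_x=0.0, max_periods=1000):
    """Each period, SHIFT the destination along the belt axis by
    encoder_count(dt) * scale_factor, while the robot moves a bounded
    step towards it; success when the gripper reaches the destination."""
    dest_x = part_x
    for _ in range(max_periods):
        dest_x += belt_counts_per_dt * scale_factor   # SHIFT ... BY counts * scale
        error = dest_x - robot_x
        if abs(error) <= robot_step_mm:               # gripper reaches the part
            return dest_x, True
        robot_x += robot_step_mm if error > 0 else -robot_step_mm
    return dest_x, False                              # belt outran the robot

# Belt advances 50 counts (= 1 mm) per period; robot covers 3 mm per period:
final_dest, picked = track_belt(part_x=120.0, belt_counts_per_dt=50,
                                scale_factor=0.02, robot_step_mm=3.0)
```

The loop converges only while the robot's per-period step exceeds the belt's per-period displacement, which is exactly the practical limitation discussed next for belt-relative motions.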
For belt-relative motions, the destination changes continuously; depending on
the magnitude and the variations of the conveyor speed it is possible that the
robot will not be able to attain the final positions within the default error toler-
ance.
In such cases, the error tolerance must be augmented. In extreme cases it will
even be necessary to totally deactivate the test of final error tolerance. Fig. 4
presents the designed robot motion control algorithm for tracking the con-
veyor belt in order to pick "on-the-fly" an object recognised and located by vi-
sion computation inside the a priori defined belt window. A REACT mecha-
nism continuously monitors the fast digital-input interrupt line which signals
that an object has completely entered the belt window. The robot's motion
relative to the belt will be terminated when the grasping motion segment
completes or when the destination violates the belt window limits:
Example 2:
The following series of instructions will move the robot end-effector towards a
belt-relative location part_2 (the belt is modelled as %belt[1]), open the grip-
per, track the conveyor belt for 5 seconds (in fact the location part_2 on the
belt), close the gripper and finally leave the belt to move towards a fixed loca-
tion.
MOVES %belt[1]:part_2
OPENI
DELAY 5.0
CLOSEI
MOVES fixed_location
When defining the Conveyor Belt Window, a special high-priority routine can
be specified, which will be automatically invoked to correct any window viola-
tion during the process of tracking moving objects. In such situations the robot
will be accelerated (if possible) and the downstream limit temporarily shifted
in the direction of motion of the conveyor belt, within a timeout – depending
on the belt's speed – in which the error tolerance must be reached (Espiau,
1992), (Borangiu, 2002).
Figure 4. The robot motion algorithm for visual tracking of the conveyor belt
According to the second tracking method, the ensemble conveyor belt + actua-
tor + sensor is configured as an m ≤ 3-axis Cartesian robot, which leads to a
problem of cooperation between multiple robots subject to multitasking com-
puter control. The V+ structured programming environment is used for exem-
plifying the multitasking control of robots visually tracking moving objects on
multiple belts.
In this respect, at least three tasks must be defined for the tracking problem:
1. Task 1: Dynamically re-planning the destination location (grasping the
moving object) for the robot manipulator.
2. Task 2: Continuously moving (driving) the m ≤ 3-axis vision belt. In
the most general case, the belt moves along any 3D direction relative
to the robot base frame (x_0, y_0, z_0).
3. Task 3: Reading the belt's location: once, the very moment an object of
interest has been recognised, located and its grasping estimated as col-
lision-free; and then continuously, until the object is effectively picked.
A multitasking robot control system appears to execute all these program tasks
at the same time. However, this is actually achieved by rapidly switching be-
tween the tasks many times each second, each task receiving a fraction of the
total time available. This is referred to as concurrent execution (Zhuang, 1992),
(Borangiu, 2005).
The amount of time a particular program task receives is determined by two pa-
rameters: its assignment to the various time slices, and its priority within the
time slice. One assumes that, in the multitasking operating system, each system
cycle is divided into 16 time slices of one millisecond each, the slices being
numbered 0 through 15. A single occurrence of all 16 time slices is referred to
as a major cycle. For a robot each of these cycles corresponds to one output
from the trajectory generator to the servos.
A number of seven user tasks, numbered 0 to 6, will be used, with their configu-
ration tailored to suit the needs of specific applications. Each program task
configured for use requires dedicated memory, which is not available to user
programs. Therefore, the number of tasks available should be made no larger
than necessary, especially if memory space for user programs is critical.
When application programs are executed, their program tasks are normally as-
signed default time slices and priorities according to the current system con-
figuration. The defaults can be overridden temporarily for any user program
task, by specifying the desired time slice and priority parameters of the
EXECUTE initiating command.
Tasks are scheduled to run with a specified priority in one or more time slices.
Tasks may have priorities from −1 to 64, and the priorities may be different in
each time slice.
The V+ operating system has a number of internal (system) tasks that compete
with application (user) program tasks for time within each time slice:
• On motion systems, the V+ trajectory generator runs (as the highest-priority
task) in slice #0 and continues through as many time slices as necessary to
compute the next motion device set point.
• On motion systems, the CPU running servo code runs the servo task (at in-
terrupt level) every 1 or 2 milliseconds (according to the controller configu-
ration utility).
The remaining time is allocated to user tasks, by using the controller configu-
ration utility. For each time slice, you specify which tasks may run in the slice
and what priority each task has in that slice.
Vision-guided robot planning ("object recognition and locating") and dynamic
re-planning of the robot destination ("robot tracking the belt") should always
be configured on user tasks 0 or 1 in "Look-and-Move" interlaced robot motion
control, due to the continuous, high-priority assignment of these two tasks
over the first 13 time slices. However, vision guidance and motion re-planning
programs complete their computation in less than the 13 time slices (0–12).
Consequently, in order to give the conveyor-associated tasks ("drive" the
vision belt, "read" the current position of the vision belt) the chance to provide
the "robot tracking" programs with the necessary position update information
earlier than slice 13, and to let the high-priority trajectory generation system
task effectively use these updates, a WAIT instruction should be inserted in the
loop-type vision guidance and motion re-planning programs of tasks 0 and/or
1.
A WAIT condition instruction with no argument will then suspend, once per
loop execution, the motion re-planning program executing on user task 1, un-
til the start of the next major cycle (slice 0). At that time, the "vision processing
and belt tracking" task becomes runnable and will execute, due to its high-pri-
ority assignment.
Due to their reduced amount of computation, programs related to the man-
agement of the conveyor belt should always be assigned to tasks 2, 3, 5 or 6 if
the default priority scheme is maintained for user program tasks, leaving tasks
0 and 1 for the computation-intensive vision and robot motion control.
Whenever the current task becomes inactive, the multitasking OS searches for
a new task to run. The search begins with the highest-priority task in the cur-
rent time slice and proceeds in order of descending priority. If multiple
programs wait to run in the same task, they are run according to relative pro-
gram priorities. If a runnable task is not found, the next time slice is checked.
All time slices are checked, wrapping around from slice 15 to slice 0, until the
original slice is reached. Whenever a 1 ms interval expires, the system per-
forms a similar search of the next time slice; if this one does not contain a run-
nable task, the currently executing task continues.
If more than one task in the same time slice has the same priority, they become
part of a round-robin scheduling group. Whenever a member of a round-robin
group is selected by the normal slice searching, the group is scanned to find
the member that ran most recently; the member that follows the most recent
one is run instead of the one originally selected.
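The slice-scanning and round-robin selection described above can be sketched as follows (a simplified model for illustration, not the actual V+ scheduler; the priority tables and task numbers are invented, and a priority below 0 is treated here as "not runnable in that slice"):

```python
def pick_task(priorities, runnable, current_slice, last_run, n_slices=16):
    """Scan slices starting at current_slice; in each slice choose the
    highest-priority runnable task, breaking ties round-robin: run the
    member that follows the one that ran most recently."""
    for step in range(n_slices):
        s = (current_slice + step) % n_slices
        candidates = [t for t in runnable if priorities.get(t, {}).get(s, -1) >= 0]
        if not candidates:
            continue                       # wrap around to the next slice
        top = max(priorities[t][s] for t in candidates)
        group = sorted(t for t in candidates if priorities[t][s] == top)
        if len(group) == 1:
            return group[0]
        recent = max(group, key=lambda t: last_run.get(t, -1))
        return group[(group.index(recent) + 1) % len(group)]
    return None                            # no runnable task: run the null task

# Task 1 dominates slices 0-12; task 2 owns slice 13; tasks 2 and 3
# share slice 14 with equal priority (a round-robin group):
prio = {1: {s: 19 for s in range(13)},
        2: {13: 20, 14: 10},
        3: {14: 10}}
chosen = pick_task(prio, runnable={2, 3}, current_slice=14, last_run={2: 5})
# task 2 ran most recently, so its round-robin follower (task 3) is selected
```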
Fig. 5 shows the task scheduler algorithm which was designed for an n-d.o.f. ro-
bot tracking a continuously updated object grasping location, and picking the
object "on-the-fly" from a conveyor belt, when motion completes. The object is
recognized and located by vision, and updating of its position is provided by
encoder data reads from the conveyor belt modelled as a 1-d.o.f. robot. The
priority analysis and round-robin member selection are also indicated.
The problem of conveyor tracking with vision guiding for part identification
and locating required definition of three user tasks, to which programs were
associated:
1. Task 1: program "track" executes in this task, with robot 1 (the 4-d.o.f.
SCARA) selected. This program has two main functions, carried out in a
2-stage sequence:
2. Task 2: program "drive" executes in this task, with robot 2 (the 1-d.o.f. con-
veyor belt) selected. This program moves the belt in linear displacement in-
crements, at a sufficiently high rate to provide a jerk-free, continuous belt
motion. This program executes in both stages of the application, previously
defined.
3. Task 3: program "read" executes in this task, with robot 2 selected. This pro-
gram executes differently in the two stages of the application:
Figure 5. Task scheduler for multitasking robot control and conveyor tracking
STAGE 1: Executes a single time, upon receiving an input signal from vision
in task 1 confirming the recognition and locating of a part. In re-
sponse, "read" reads the instantaneous belt position, which from now
on will be used as an offset for the position updates.
For the three user tasks, the default priority assignment is maintained. This
leads to the following priority analysis for a major cycle:
• Task 1 has the highest priority in time slices 0 – 12 (inclusively), with val-
ues of 19, 21, 9 and 11.
• Task 2 has the highest priority (20) in a single time slice: 13.
• Task 3 never explicitly holds a position of highest priority with respect to
tasks 1 and 2.
The three tasks become part of round-robin groups as follows: tasks 2 and 3 in
slices 0 – 12 inclusively; tasks 1, 2 and 3 in slices 14 and 15. Because tasks 2 and
3 belong to more than one round-robin group in different slices, all three
tasks in the corresponding pairs of different slices appear to be in one big group.
As a result of the priority scan and scheduling, the programs in the three user
tasks execute as follows:
STAGE 1 – vision is processing, the robot is not moving and no WAIT is issued
by task 1 (Fig. 6):
− Task 1 runs: in slices 0 – 12 (where it holds the highest priority); in slice 14
(as the member of the round-robin group following task 2, which ran
most recently – in slice 13) only before generating the request for task 3
to compute the instantaneous offset belt position when vision located
the object; and in slice 15 (as the member of the round-robin group
following task 3, which ran most recently – in slice 14) only after
generating this request.
− Task 2 runs in slice 13 (where it holds the highest priority), and in slice 15
(as the member of the round-robin group following task 1, which ran
most recently – in slice 14) only before task 1 generates the request for
task 3 to compute the instantaneous offset belt position when vision
located the object.
− Task 3 runs in slice 14 (as the member of the round-robin group following
task 2, which ran most recently – in slice 13) only after receiving the
request from task 1 to compute the instantaneous offset belt position
when vision located the object.
Figure 6. Priority assignment and tasks running in STAGE 1 of vision guidance for
robot motion planning
STAGE 2 – vision is not processing, the SCARA robot is moving and WAIT com-
mands are issued in task 1 by the "track" program after each re-planning of the
end-effector's target destination within one major cycle of 16 milliseconds (Fig. 7):
Figure 7. Priority assignment and tasks running in STAGE 2 of robot motion re-plan
3.2 Dynamically altering belt locations for collision-free object picking on-the-fly
The three previously discussed user tasks, when runnable and selected by the
system's task scheduler, attach the robots as follows: task 1 (program "track")
attaches robot 1, the 4-d.o.f. SCARA manipulator, while tasks 2 and 3
(programs "drive" and "read") attach robot 2, the 1-d.o.f. conveyor belt.
In multiple-robot systems like the one for conveyor tracking, SELECT robot
operations select the robot with which the current task must communicate.
The SELECT operation thus specifies which robot receives motion instructions
(for example, DRIVE to move the vision belt in program "drive", or MOVES to
move the SCARA in program "track") and returns robot-related information (for
example, for the HERE function accessing the current vision belt location in
program "read").
Program "track" executing in task 1 has two distinct timing aspects, which cor-
respond to the partitioning of its related activities into STAGE 1 and STAGE 2.
Thus, during STAGE 1, "track" first waits for the occurrence of the on-off transi-
tion of the input signal generated by a photocell, indicating that an object
passed over the sensor and will enter the field of view of the camera. Then, af-
ter waiting for a period of time (set experimentally as a function of the belt's
speed), "track" commands the vision system to acquire an image, identify an
object of interest and locate it.
During STAGE 2, "track" continuously alters, once each major 16-millisecond
system cycle, the target location of the end-effector – part.loc (computed by vi-
sion) – by composing it with relative transformations that include grip.part,
the learned grasping transformation for the class of objects of interest. The
updating of the end-effector target location for grasping one moving object
uses the command ALTER() Dx,Dy,Dz,Rx,Ry,Rz, which specifies the magni-
tude of the real-time path modification that is to be applied to the robot path
during the next trajectory computation. This operation is executed by "track"
in task 1, which controls the SCARA robot in alter mode, enabled by the
ALTON command. When alter mode is enabled, this instruction should be
executed once during each trajectory cycle. If ALTER is executed more often,
only the last set of values defined during each cycle will be used. The argu-
ments have the following meanings:
• Dx,Dy,Dz: optional real values, variables or expressions that define the
translations along the X, Y, Z axes, respectively;
• Rx,Ry,Rz: optional real values, variables or expressions that define the rota-
tions about the X, Y, Z axes, respectively.
It is assumed that the axis of the vision belt is parallel to the Y0 robot axis in its
base. Also, it is considered that, following the belt calibrating procedure de-
scribed in Section 2.1, the coefficient pulse.to.mm, expressing the ratio between
one belt encoder pulse and one millimetre, is known.
The repeated updating of the end-effector location by altering the part.loc ob-
ject-grasping location proceeds in task 1 through the execution of "track", until
motion stops at the (dynamically re-)planned grasping location, when the
object is picked on-the-fly (Borangiu, 2006). This stopping decision is taken by
"track" by using the STATE(select) function, which returns information about
the state of robot 1, selected by the task 1 executing the ALTER loop. The
argument select defines the category of state information returned. For the pre-
sent tracking software, the data interpreted is "Motion stopped at planned lo-
cation", as in the example below:
Example 3:
The next example shows how the STATE function is used to stop the continu-
ous updating of the end-effector's target location, which alters every major
cycle the position along the Y axis. The altering loop is exited when motion has
stopped at the planned location, i.e. when the robot's gripper is in the desired
picking position relative to the moving part.
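The control flow of Example 3 – one ALTER-style correction per trajectory cycle along Y, exited when a STATE-like test reports that motion stopped at the planned location – can be sketched as follows (a hypothetical simulation in a general-purpose language, not actual V+ code; step limits and speeds are invented):

```python
def alter_tracking_loop(start_y, target_y, belt_speed_mm, max_step_mm=4.0,
                        max_cycles=500):
    """One correction per trajectory cycle along Y (the ALTER analogue),
    exited when the simulated STATE test 'motion stopped at planned
    location' becomes true."""
    gripper_y, planned_y = start_y, target_y
    for cycle in range(max_cycles):
        planned_y += belt_speed_mm                 # the part drifts with the belt
        dy = planned_y - gripper_y                 # ALTER (0, dy, 0, 0, 0, 0)
        gripper_y += min(abs(dy), max_step_mm) * (1 if dy >= 0 else -1)
        if abs(planned_y - gripper_y) < 1e-6:      # STATE: stopped at planned loc.
            return cycle, gripper_y                # exit the altering loop
    raise RuntimeError("belt faster than the robot: tolerance never reached")

# Part planned at Y = 100 mm, belt drifting 1 mm per cycle, gripper from Y = 0:
cycles, final_y = alter_tracking_loop(start_y=0.0, target_y=100.0,
                                      belt_speed_mm=1.0)
```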
After alter mode terminates, the robot is left at a final location that reflects both
the destination of the last robot motion and the total ALTER correction that
was applied.
Figure 8. Cooperation between the tasks of the belt tracking control problem
Random scene foregrounds, such as the conveyor belt, may need to be faced in ro-
botic tasks. Depending on the parts' shape and on their dimension along z+,
grasping models Gs_m are trained off line for prototypes representing object
classes. However, if there is the slightest uncertainty about the risk of collision
between the gripper and parts on the belt – touching or close to one an-
other – then extended grasping models EG_m = {Gs_m, FGP_m} must be created
by adding the gripper's fingerprint model FGP_m, so as to authorize part
access only after clear grip tests at run time.
Definition.
A multiple fingerprint model MFGP_m(G, O) = {FGP_m_1(G, O), ..., FGP_m_k(G, O)} for a
p-fingered gripper G and a class of objects O describes the shape, location and
interpretation of k sets of p projections of the gripper's fingerprints onto the
image plane (x_vis, y_vis), for the corresponding k grasping styles Gs_m_i, i = 1, ..., k
of O-class instances. A FGP_m_i(G, O) model has the following parameter structure:
• finger_shape(G) = {number, shape_i, size_i, i = 1, ..., p} expresses the shape of the
gripper in terms of its number p of fingers and the shape and dimensions of
each finger. Rectangular-shaped fingers are considered; their size is given
by "width" and "height".
• fingers_viewing(G, pose_context_i, i = 1, ..., p) indicates how "invisi-
ble" fingers are to be treated; fingers are "invisible" if they are outside the
field of view.
• grip = 1, ..., k indexes the k distinct gripper-object grasping models Gs_m(G, O)
trained a priori, as possible alternatives to face foreground context
situations at run time.
two gripper fingerprints. This data refers to the size, position and orientation of
the gripper's fingerprints, and is based on:
• the number and dimensions of the gripper's fingers: 2-parallel-fingered grippers
were considered, each finger having a rectangular shape of dimensions
wd_g, ht_g;
• the grasping location of the fingers relative to the class model of the objects of interest.
• relative transformations: to.cam[cam] describes, for the given camera, the
location of the vision frame with respect to the robot's base frame; vis.loc
describes the location of the object-attached frame with respect to the vision
frame; vis.obj describes the location of the object-attached frame with respect
to the vision frame; pt.rob describes the location of the gripper frame with
respect to the robot frame; pt.vis describes the location of the gripper frame
with respect to the vision frame.
As a result of this learning stage, which uses vision and the robot's joint encoders
as measuring devices, a grasping model
GP_m(G, "LA") = {d.cg, alpha, z_off, rz_off} is derived, relative to the object's
centre of mass C and minimum inertia axis MIA (C and MIA are also available
at run time):

d.cg = dist(C, G),  alpha = ∠(MIA, dir(C, G)),  z_off = dist(T, G),  rz_off = ∠(x_g, dir(C, G))
A clear grip test is executed at run time to check the collision-free grasping of a
recognized and located object: the gripper's fingerprints are projected onto the
image plane (x_vis, y_vis), and it is verified whether they "cover" only background
pixels, which means that no other object lies close to the area where the gripper's
fingers will be positioned by the current robot motion command. A negative
result of this test does not authorize the grasping of the object.
For the test, two WROIs are placed in the image plane, exactly over the
areas occupied by the projections of the gripper's fingerprints for the desired,
object-relative grasping location computed from GP_m(G, "LA"); the position (C)
and orientation (MIA) of the recognized object must be available. From the
invariant, part-related data alpha, rz.off, wd_LA, wd_g, ht_g and d.cg, the current
coordinates x_G, y_G of the point G and the current orientation angle.grasp of the
gripper slide axis relative to the vision frame are first computed at run time.
Figure 9. Frames and relative transformations used to teach the GP_m(G, "LA") parameters
The part's orientation angle.aim = ∠(MIA, x_vis) returned by vision is added to the
learned alpha, yielding beta = angle.aim + alpha.
Once the part is located, the coordinates x_C, y_C of its gravity centre C are
available from vision. Using them and beta, the coordinates x_G, y_G of G are
computed as follows:

x_G = x_C + d.cg · cos(beta),  y_G = y_C + d.cg · sin(beta)

Now, the value of angle.grasp = ∠(x_g, x_vis), for the object's current orientation
and accounting for rz.off from the desired, learned grasping model, is obtained
from angle.grasp = beta + rz.off.
Two image areas, corresponding to the projections of the two fingerprints on
the image plane, are next specified using two WROI operations. Using the
geometry data from Fig. 9, and denoting by dg the offset between the end-tip
point projection G and the fingerprint centres CW_i, i = 1, 2,
dg = wd_LA/2 + wd_g/2, the positions of the rectangular image areas "covered" by
the fingerprints projected on the image plane in the desired part-relative grasping
location are computed at run time according to (7). Their common orientation
in the image plane is given by angle.grasp.

x_cw1 = x_G − dg · cos(angle.grasp);  x_cw2 = x_G + dg · cos(angle.grasp)
y_cw1 = y_G − dg · sin(angle.grasp);  y_cw2 = y_G + dg · sin(angle.grasp)     (7)
The image statistics are returned as the total number of non-zero (background)
pixels found in each of the two windows, superposed onto the areas covered by
the fingerprint projections in the image plane, around the object. The clear grip
test checks these values, returned by the two WROI-generating operations and
corresponding to the number of background pixels not occupied by other objects
close to the current one (counted exactly in the gripper's fingerprint projection
areas), against the total number of pixels corresponding to the surfaces of the
rectangular fingerprints. If the difference between the compared values is less
than an imposed error err for both fingerprint windows, the grasping is authorized.
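The clear grip test above can be sketched as follows, assuming a binary image in which background pixels are 1 and object pixels are 0. The function names, the NumPy representation and the pixel-level window rasterization are illustrative assumptions, not the V+ implementation.

```python
import numpy as np

def fingerprint_window(center, wd_g, ht_g, angle):
    """Pixel coordinates covered by a rotated rectangular fingerprint
    window of size wd_g x ht_g centred at `center` (even sizes assumed)."""
    xc, yc = center
    c, s = np.cos(angle), np.sin(angle)
    gx, gy = np.meshgrid(np.arange(-wd_g / 2, wd_g / 2),
                         np.arange(-ht_g / 2, ht_g / 2))
    # Rotate the local grid by `angle` and translate to the window centre.
    px = np.rint(xc + gx * c - gy * s).astype(int)
    py = np.rint(yc + gx * s + gy * c).astype(int)
    return px, py

def clear_grip(background, G, dg, wd_g, ht_g, angle_grasp, err):
    """Authorize grasping only if both fingerprint projections cover
    (almost) nothing but background pixels (value 1 = background)."""
    total = wd_g * ht_g
    for sign in (-1.0, +1.0):   # the two fingers, offset +/- dg from G
        cx = G[0] + sign * dg * np.cos(angle_grasp)
        cy = G[1] + sign * dg * np.sin(angle_grasp)
        px, py = fingerprint_window((cx, cy), wd_g, ht_g, angle_grasp)
        n_background = int(background[py, px].sum())
        if total - n_background > err:
            return False        # a foreign object intrudes: no grasp
    return True
```

In use, `background` would be the thresholded camera image with the recognized object and its neighbours removed from the background class; the test then rejects any grasp whose finger windows intersect a neighbouring part.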
5. Conclusion
The robot motion control algorithms with guidance vision for tracking and
grasping objects moving on conveyor belts, modelled with belt variables and a
1-d.o.f. robotic device, have been tested on a robot-vision system composed
of a Cobra 600TT manipulator, a C40 robot controller equipped with the EVI
vision processor from Adept Technology, a parallel two-fingered RIP6.2 gripper
from CCMOP, a "large-format" stationary camera (1024x1024 pixels) looking
down at the conveyor belt, and a GEL-209 magnetic encoder with 1024
pulses per revolution from Lenord+Bauer. The encoder's output is fed to one
of the EJI cards of the robot controller, the belt conveyor being "seen" as an
external device.
Image acquisition used strobe light in synchronous mode to avoid the acquisi-
tion of blurred images for objects moving on the conveyor belt. The strobe
light is triggered each time an image acquisition and processing operation is
executed at runtime. Image acquisitions are synchronised with external events
of the type "a part has completely entered the belt window"; because these events
generate on-off photocell signals, they trigger the fast digital-interrupt line of the
robot controller to which the photocell is physically connected. Hence, the
VPICTURE operations always wait on interrupt signals, which significantly
improves the response time to external events. Because a fast line was used, the
most unfavourable delay between the triggering of this line and the request for
image acquisition is only 0.2 milliseconds.
The effects of this most unfavourable 0.2 milliseconds time delay upon the integrity
of object images have been analysed and tested for two modes of strobe light
triggering:
• Asynchronous triggering with respect to the read cycle of the video camera,
i.e. as soon as an image acquisition request appears. For a 51.2 cm width of
the image field and a line resolution of 512 pixels, the pixel width is 1 mm.
For a 2.5 m/sec high-speed motion of objects on the conveyor belt, the
most unfavourable delay of 0.2 milliseconds corresponds to a displacement
of less than one pixel (and hence at most one object-pixel might disappear
during the travel distance defined above), as:
dist = 2.5 m/sec × 0.2 msec = 0.5 mm ≤ 1 pixel.
• Synchronous triggering with respect to the read cycle of the camera, inducing
a variable time delay between the image acquisition request and the
strobe light triggering. The most unfavourable delay was in this case 16.7
milliseconds, which may cause, for the same image field and belt speed, a
potential disappearance of 41.75 pixels from the camera's field of view
(downstream of the dwnstr_lim limit of the belt window).
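The pixel-displacement figures quoted above follow directly from the chapter's numbers; the small script below reproduces them (plain arithmetic, no assumptions beyond the stated field width, resolution, belt speed and delays).

```python
field_width_mm = 512.0      # 51.2 cm image field
line_resolution = 512       # pixels per image line
pixel_width_mm = field_width_mm / line_resolution   # 1 mm per pixel

belt_speed_mm_s = 2500.0    # 2.5 m/sec belt speed

def pixels_lost(delay_s):
    """Object displacement during `delay_s`, expressed in pixels."""
    return belt_speed_mm_s * delay_s / pixel_width_mm

async_loss = pixels_lost(0.2e-3)    # fast interrupt line: 0.2 ms -> 0.5 pixel
sync_loss = pixels_lost(16.7e-3)    # camera read cycle: 16.7 ms -> 41.75 pixels
```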
776 Industrial Robotics: Theory, Modelling and Control
Consequently, the bigger the dimensions of the parts travelling on the conveyor
belt, the higher the risk of disappearance of pixels situated in downstream
areas. Fig. 10 shows statistics on the sum of:
• visual locating errors: errors in object locating relative to the image frame
(x_vis, y_vis); consequently, the request for motion planning is not
issued;
• motion planning errors: errors in the robot's destinations evaluated during
motion planning as lying downstream of downstr_lim, and hence not
authorised,
as a function of the object's dimension (length long_max.obj along the minimum
inertia axis) and of the belt speed (four high speed values have been considered:
0.5 m/sec, 1 m/sec, 2 m/sec and 3 m/sec).
As can be observed, at the very high motion speed of 3 m/sec and for parts longer
than 35 cm, more than 16% of the object locating attempts were unsuccessful and
more than 7% of the robot destinations planned for visually located parts were
missed (falling outside the CBW), out of a total of 250 experiments.
The clear grip check method presented above was implemented in the V+
programming environment with the AVI vision extension, and tested on the same
robot-vision platform containing an Adept Cobra 600TT SCARA-type manipulator,
a 3-belt flexible feeding system Adept FlexFeeder 250 and a stationary,
down-looking matrix camera Panasonic GP MF650 inspecting the vision belt.
The vision belt on which parts were travelling and presented to the camera was
positioned for convenient robot access within a window of 460 mm.
Experiments for collision-free part access on a randomly populated conveyor belt
have been carried out at several speed values of the transportation belt, in the
range from 5 to 180 mm/sec. Table 1 shows the correspondence between the
belt speeds and the maximum time intervals from the visual detection of a part
to its collision-free grasping upon checking [#] sets of pre-taught grasping
models Gs_m_i(G, O), i = 1, ..., #.
[Figure 10 plots: object locating errors [%] and planning errors [%] versus long_max.obj [cm] (5 to 45 cm), for belt speeds 0.5 m/sec, 1 m/sec, 2 m/sec and 3 m/sec]
Figure 10. Error statistics for visual object locating and robot motion planning
Table 1. Correspondence between belt speed and collision-free part grasping time
28. Learning-Based Visual Feedback Control of an Industrial Robot

1. Introduction
…with a CCD camera installed in its end-effecter. The main advantage of the
present approach is that it does not necessitate the tedious CCD camera
calibration and the complicated coordinate transformations.
[Figure 1 sketch: a robot with a CCD camera on its end-effecter moves a rigid tool along a curved line on a contact object, from the initial feature, through feature σ_i, to the goal feature, in the workspace frame Σ_O with axes x_o, y_o, z_o]
For the visual feedback control shown in Fig. 1, the trace error of the robot in
the image feature domain needs to be mapped to the joint angle domain of the
robot. That is, the end-effecter should trace the curved line according to
a_j, a_j+1 in the image domain of the features A_j, A_j+1 shown in Fig. 2.
Let ξ, ξ_d ∈ R^{6×1} be the image feature parameter vectors of a_j and a_j+1 in the
image feature domain shown in Fig. 2, respectively. The visual feedback control
shown in Fig. 1 can be expressed as

ξ = φ(p_tc),     (2)

δξ = J_f · δp_tc,     (3)
δp_tc = J_c · δθ,     (4)

δθ = (J_f J_c)^{-1} · δξ.     (5)
Because J_f and J_c are linearized in small neighbourhoods of p_tc and θ,
respectively, the motion of the robot is restricted to a small joint angle domain,
and Eq. (5) is not correct for large δθ. At the same time, the mapping is
sensitive to changes in (J_f J_c)^{-1}. In addition, it is very difficult to calculate
(J_f J_c)^{-1} on line during the trace operation. Therefore, a neural network (NN)
is introduced to learn this mapping.
Firstly, we consider the mapping from Δp_tc in O to Δξ in the image feature
domain. Δξ and Δp_tc are the increments of ξ and p_tc for the end-effecter to move
from A_j to A_j+1, respectively. As shown in Fig. 2, the mapping depends on p_tc,
and it can be expressed as

Δξ = φ_1(Δp_tc, p_tc),     (6)
Δp_tc = φ_3(Δξ, ξ),     (8)

Δp_c = φ'_3(Δθ, θ),     (9)
Since the relative position and orientation between the end-effecter and the
CCD camera are fixed, the mapping from Δp_c to Δp_tc is also one-to-one. We get

Δp_tc = φ_4(Δp_c),     (10)

Δθ = φ_5(Δp_tc, θ),     (11.b)

Δθ = φ_6(Δξ, ξ, θ),     (12)
g_qr^(j) = 0 (white pixel) or 1 (black pixel),     (13.a)

ξ_1^(j) = [Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j) · q] / [Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j)],     (13.b)
[Figure 3 sketch: three windows of width W along the curved line in the image plane (u, v), with image feature vectors ξ^(j−1), ξ^(j), ξ^(j+1) at positions j−1, j, j+1 in the image feature domain]
Figure 3. Definition of image feature parameter vector
ξ_2^(j) = [Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j) · r] / [Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j)],     (13.c)

ξ_3^(j) = Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j),     (13.d)
Learning-Based Visual Feedback Control of an Industrial Robot 785
(3) The lengths of the main and minor axes of the equivalent ellipse, where

λ_11^(j) = Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j) · [q − ξ_1^(j)][r − ξ_2^(j)],     (13.g)

λ_20^(j) = Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j) · [q − ξ_1^(j)]²,     (13.h)

λ_02^(j) = Σ_{q=1}^{L} Σ_{r=1}^{W} g_qr^(j) · [r − ξ_2^(j)]²,     (13.i)

ξ_6^(j) = (1/2) · tan^{-1}[λ_11^(j) / (λ_20^(j) − λ_02^(j))].     (13.j)
ξ = ξ^(j)(im),     (14.a)

Δξ = Δξ(im),     (14.b)

where ξ^(j)(im) and ξ^(j+1)(im) are the image feature parameter vectors in the
windows j and j+1 shown in Fig. 3. ξ^(0)(im), ξ^(1)(im) and ξ^(2)(im) can be calculated
for j = 0, 1, 2 with Eqs. (13.a)~(13.j).
In this paper, a CCD camera is used to detect the image feature parameters of
the curved line, which are used to generate the goal trajectory on line. The se…
…grabbed and processed, the image feature parameter vector ξ^(2)(m) shown in
Fig. 4(b) is computed. From t = 2mT to t = 3mT, the end-effecter is only moved by
Δξ(m) = ξ^(2)(0) − ξ^(1)(0). At time t = imT, the (i+1)-th image is grabbed and
processed, and the image feature parameter vector ξ^(2)(im) shown in Fig. 4(c) is
computed. From t = imT to t = (i+1)mT, the end-effecter is only moved by

Δξ(im) = ξ^(1)[(i − 1)m] − ξ^(0)[(i − 1)m].
[Figure 4 sketches: windows 0, 1, 2 of width W in the image plane (u, v), with feature vectors (a) ξ^(0)(0), ξ^(1)(0), ξ^(2)(0); (b) ξ^(0)(m), ξ^(1)(m), ξ^(2)(m); (c) ξ^(0)(im), ξ^(1)(im), ξ^(2)(im)]
[Figure 6 sketch (learning method of NN): the NN receives θ(k), ξ^(j)(k) and Δξ(k), k = 0, 1, 2, ..., S−1, and outputs Δθ_n(k); the training algorithm adjusts the NN weights using Δθ_r(k)]
x_l^B = Σ_{g=1}^{M} w_gl^{AB} · y_g^A + z_l^B,  x_j^C = Σ_{l=1}^{P} w_lj^{BC} · y_l^B + z_j^C,  x_i^D = Σ_{j=1}^{U} w_ji^{CD} · y_j^C + z_i^D,
y_g^A = x_g^A,  y_l^B = f(x_l^B),  y_j^C = f(x_j^C),  y_i^D = f(x_i^D),     (15)
where x_m^R, y_m^R and z_m^R (m = g, l, j, i; R = A, B, C, D) are the input, the output
and the bias of a neuron, respectively. w_gl^{AB}, w_lj^{BC} and w_ji^{CD} are the
weights between y_g^A and x_l^B, y_l^B and x_j^C, and y_j^C and x_i^D, respectively.
The sigmoid function of NN is defined by

f(x) = (1 − e^{−βx}) / (1 + e^{−βx}),     (16)
where β is a real number which specifies the characteristics of f(x). The learning
method of NN is shown in Fig. 6, and its evaluation function is defined as

E_f = (1/2S) Σ_{k=0}^{S−1} [Δθ_r(k) − Δθ_n(k)]^T [Δθ_r(k) − Δθ_n(k)],     (17)
where Δθ_r(k) and Δθ_n(k) ∈ R^{6×1} are the increments of the robot joint angle
vector and the output vector of the neural network, respectively. The positive
integer S is the number of learning samples θ(k), Δθ_r(k), ξ^(j)(k), Δξ(k) for the
end-effecter to move along the curved line from the initial position I to the goal
position G. θ(k), Δθ_r(k), ξ^(j)(k) and Δξ(k) are measured off line in advance.
θ(k), ξ^(j)(k) and Δξ(k) are divided by their maximums before being input to NN.
The weight w_ji^{CD} of NN is changed by

Δw_ji^{CD} = −η_f · ∂E_f / ∂w_ji^{CD},  (j = 1, 2, ..., U; i = 1, 2, ..., N),     (18.a)

∂E_f / ∂w_ji^{CD} = −[Δθ_ri(k) − Δθ_ni(k)] · f'(x_i^D) · y_j^C,  (j = 1, 2, ..., U; i = 1, 2, ..., N),     (18.b)
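A minimal sketch of the four-layer network of Eqs. (15)-(18) follows. The layer sizes (18 inputs for θ, ξ^(j) and Δξ; 20-neuron hidden layers), the learning rate and the random seed are assumed values, and only the output-layer update of Eqs. (18.a)-(18.b) is written out, the hidden layers being left to standard back-propagation as the text indicates.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, beta=1.0):
    """Bipolar sigmoid of Eq. (16)."""
    return (1 - np.exp(-beta * x)) / (1 + np.exp(-beta * x))

def f_prime(x, beta=1.0):
    """Derivative of Eq. (16): beta * (1 - f(x)**2) / 2."""
    y = f(x, beta)
    return beta * (1 - y * y) / 2

M, P, U, N = 18, 20, 20, 6                 # layer sizes A, B, C, D (assumed)
w_AB = rng.uniform(-0.5, 0.5, (M, P))      # initial weights in [-0.5, 0.5]
w_BC = rng.uniform(-0.5, 0.5, (P, U))
w_CD = rng.uniform(-0.5, 0.5, (U, N))
z_B, z_C, z_D = np.zeros(P), np.zeros(U), np.zeros(N)

def forward(y_A):
    """Layer propagation of Eq. (15)."""
    y_B = f(y_A @ w_AB + z_B)
    x_C = y_B @ w_BC + z_C
    y_C = f(x_C)
    x_D = y_C @ w_CD + z_D
    return y_C, x_D, f(x_D)

def train_output_layer(y_A, dtheta_r, eta_f=0.05):
    """One gradient step of Eqs. (18.a)-(18.b) on w_CD only."""
    global w_CD
    y_C, x_D, dtheta_n = forward(y_A)
    dE_dxD = -(dtheta_r - dtheta_n) * f_prime(x_D)   # Eq. (18.b) factor
    w_CD = w_CD - eta_f * np.outer(y_C, dE_dxD)      # Eq. (18.a)
    return dtheta_n
```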
The present learning control system based on NN is shown in Fig. 7. The initial
w_gl^{AB}, w_lj^{BC} and w_ji^{CD} of NN are given by random real numbers between
−0.5 and 0.5. When NN finishes learning, the reference joint angle θ_n(k+1) of the
robot is obtained by

θ_n(k+1) = θ_n(k) + Δθ_n(k),
θ_n(0) = θ(0),  Δθ_n(0) = d,  (k = 0, 1, ..., S−1),     (19)
where d ∈ R^{6×1} is the output of NN when θ(0), ξ^(0)(0) and Δξ(0) are used as its
initial inputs. The PID controller G_c(z) is used to control the joint angles of the
robot.
For the visual feedback control system, a multilayer neural network NNc is
introduced to compensate the nonlinear dynamics of the robot. The structure
and definition of NNc are the same as those of NN, and its evaluation function is
defined (consistently with the gradient in Eq. (21.b)) as

E_c(k) = (1/2) e(k)^T W e(k),     (20)
Δw_ji^{CD} = −η_c · ∂E_c / ∂w_ji^{CD},  (j = 1, 2, ..., U; i = 1, 2, ..., N),     (21.a)

∂E_c / ∂w_ji^{CD} = −e_i · w_i · f'(x_i^D) · y_j^C,  (j = 1, 2, ..., U; i = 1, 2, ..., N),     (21.b)
where e_i is the i-th element of e(k), and w_i is the i-th diagonal element of W.
w_gl^{AB} and w_lj^{BC} of NNc are changed by the error back-propagation
algorithm; the process is omitted here.
The initial w_gl^{AB}, w_lj^{BC} and w_ji^{CD} of NNc are given by random numbers
between −0.5 and 0.5. θ_n(k+1), Δθ_n(k) and Δθ_n(k−1) are divided by their
maximums before being input to NNc. K is a scalar constant which is specified
by experiment. While NNc is learning, the elements of e(k) become smaller and
smaller.
Figure 7. Block diagram of learning control system for a robot with visual feedback
In Fig. 7, I is a 6×6 unity matrix. Λ[θ(k)] ∈ R^{6×1} and J_r[θ(k)] = ∂Λ[θ(k)]/∂θ(k) ∈ R^{6×6}
are the forward kinematics and the Jacobian matrix of the end-effecter (or the
rigid tool), respectively. Let p_t(k) = [p_t1, p_t2, ..., p_t6]^T be a position/orientation
vector which is defined in O and corresponds to the tip of the rigid tool; we have

ṗ_t(k) = J_r[θ(k)] · θ̇(k).     (22.b)
G_c(z) = K_P + K_I · z/(z − 1) + K_D · (1 − z^{−1}),     (23)

where K_P, K_I and K_D ∈ R^{6×6} are diagonal matrices which are empirically
determined.
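A diagonal discrete PID of the form (23) can be sketched as below: the integral term z/(z−1) becomes a running sum and the derivative term (1 − z^{−1}) a first difference. The class shape and the vector handling of the diagonal gains are illustrative assumptions.

```python
import numpy as np

class DiscretePID:
    """Diagonal discrete PID of Eq. (23):
    u(k) = KP*e(k) + KI*sum_{n<=k} e(n) + KD*(e(k) - e(k-1)),
    with the diagonal gain matrices KP, KI, KD stored as vectors."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = (np.asarray(g, float) for g in (kp, ki, kd))
        self.integral = np.zeros_like(self.kp)
        self.prev_e = np.zeros_like(self.kp)

    def step(self, e):
        e = np.asarray(e, float)
        self.integral += e                       # z/(z-1) term
        u = (self.kp * e + self.ki * self.integral
             + self.kd * (e - self.prev_e))      # (1 - z^-1) term
        self.prev_e = e
        return u
```

The diagonal gains of Table 1 would be passed as the `kp`, `ki` and `kd` vectors.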
The part circled by the dotted line in Fig. 7 is the image processing system. The
work sequence of the image processing system is shown in Fig. 8. At time
t = imT, the CCD camera grabs a 256-level gray image of the curved line, the
image is binarized, and the left and right terminals of the curved line are
detected. Afterwards, the image parameters ξ^(0)(0), ξ^(1)(0) and ξ^(2)(0) (or ξ^(2)(im)) are
computed using Eqs. (13.a)~(13.j) in the windows 0, 1, 2 shown in Figs. 4(a)~(c).
Furthermore, in order to synchronize the image processing system and the
robot joint control system, the 2nd-order holder G_h2(z) of Section 5.4 is
introduced. ξ^(0)(im), ξ^(1)(im) and Δξ(im) are processed by G_h2(z), and their discrete
time values ξ^(0)(k), ξ^(1)(k) and Δξ(k) are obtained at time t = kT.
Generally, the sampling period of the image processing system is much longer
than that of the robot control system. Because the sampling period of
ξ^(0)(im), ξ^(1)(im) and Δξ(im) is m times the sampling period of θ(k), it is difficult to
synchronize ξ^(0)(im), ξ^(1)(im), Δξ(im) and θ(k) with a zero-order or 1st-order
holder; otherwise, the robot would drastically accelerate or decelerate during the
visual feedback control.
In this section, ξ^(0)(im) and ξ^(1)(im) are processed by the 2nd-order holder G_h2(z).
For instance, let ξ_l^(j) be the l-th element of ξ^(j); ξ_l^(j) is compensated by a
2nd-order curve from t = imT to t = (i+1)mT. At time t = kT, ξ^(j)(k) (j = 0, 1) is
calculated by

ξ^(j)(k) = ξ^(j)(im) + [2(k − im)²/m²] · {ξ^(j)[(i+1)m] − ξ^(j)(im)},  (0 ≤ k − im < m/2),     (24.a)

ξ^(j)(k) = ξ^(j)(im) + {1 − 2[k − (i+1)m]²/m²} · {ξ^(j)[(i+1)m] − ξ^(j)(im)},  (m/2 ≤ k − im < m).     (24.b)
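Eqs. (24.a)-(24.b) can be sketched as a small interpolation routine (scalar version; the vector case applies it elementwise):

```python
def second_order_hold(xi_im, xi_next, k, im, m):
    """2nd-order holder of Eqs. (24.a)-(24.b): interpolate the image
    feature between two vision samples xi(im) and xi[(i+1)m] so the
    joint controller receives a smooth value at every period kT."""
    d = k - im                        # 0 <= d < m
    if d < m / 2:                     # Eq. (24.a)
        w = 2.0 * d * d / (m * m)
    else:                             # Eq. (24.b); note k - (i+1)m = d - m
        w = 1.0 - 2.0 * (d - m) ** 2 / (m * m)
    return xi_im + w * (xi_next - xi_im)
```

The blending weight w runs from 0 at k = im to 1 at k = (i+1)mT, and the two branches meet at w = 1/2 when k − im = m/2, which is what prevents the acceleration jumps a zero- or 1st-order holder would cause.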
[Figure 8 sketch: work sequence of the image processing system, including image grabbing and binarization, linked to the robot by parallel communication]
[Figure 9 sketch: experimental setup with the Motoman K3S robot, hand-mounted CCD camera and rigid tool, hand controller, switches, work table and operation environment]
6. Experiments
In order to verify the effectiveness of the present approach, the 6 DOF industrial
robot is used to trace a curved line. In the experiment, the end-effecter (or the
rigid tool) does not contact the curved line. Figure 9 shows the experimental
setup. The PC-9821 Xv21 computer realizes the joint control of the robot, and the
Dell XPS R400 computer processes the images from the CCD camera. The robot
hand grasps the rigid tool, which is regarded as the end-effecter of the robot.
The diagonal elements of K_P, K_I and K_D are shown in Table 1, the control
parameters used in the experiments are listed in Table 2, and the weighting
matrix W is set to the unity matrix I.
Figures 10 and 11 show the position responses (or the learning results of NN
and NNc) p_t1(k), p_t2(k) and p_t3(k) in the directions of the x, y and z axes of O.
p_t1(k), p_t2(k) and p_t3(k) shown in Fig. 10 are teaching data. Figure 11 shows
the trace responses after the learning of NNc. Figure 12 shows the learning
processes of NN and NNc as well as the trace errors. E_f converges to 10^{-9} rad²;
the learning error E* shown in Fig. 12(b) is given by E* = [Σ_{k=0}^{N−1} E_c(k)]/N,
where N = 1000. After 10 trials (10000 iterations) using NNc, E* converges to
7.6×10^{-6} rad², and the end-effecter can correctly trace the curved line.
Figure 12(c) shows the trace errors of the end-effecter in the x, y, z axis
directions of O, and the maximum error is lower than 2 mm.
i    K_P [N·m/rad]    K_I [N·m/rad]    K_D [N·m/rad]
1    25473            0.000039         0.0235
2    8748             0.000114         0.0296
3    11759            0.000085         0.0235
4    228              0.004380         0.0157
5    2664             0.000250         0.0112
6    795              0.001260         0.0107
Table 1. Diagonal elements of K_P, K_I and K_D
[Figure 10 plots: position [m] versus time [s] (0 to 5 s) in the three axis directions]
Figure 10. The position responses p_t1(k), p_t2(k) and p_t3(k) (teaching data)
[Figure 11 plots: position [m] versus time [s] (0 to 5 s) in the three axis directions]
Figure 11. The trace responses after the learning of NNc (after 10 trials)
[Figure 12 plots: (a) E_f [rad²] versus iterations (up to about 30000); (b) E* [rad²] versus trial number (1 to 10); (c) trace error [m] (−0.003 to 0.002) versus time [s] (0 to 5 s)]
7. Conclusions
The effectiveness of the present approach is verified by tracing a curved line
using a 6 DOF industrial robot with a CCD camera installed on its end-effecter.
Through the above research, the following conclusions are obtained:
1. If the mapping relations between the image feature domain of the object
and the joint angle domain of the robot are satisfied, NN can learn the
mapping relations.
2. By computing the image feature parameters of the object, the goal trajec-
tory for the end-effecter to trace the object can be generated.
3. The approach does not necessitate the tedious CCD camera calibration
and the complicated coordinate transformations.
4. Using the 2nd-order holder and the disturbance observer, the synchroni-
zation problem and the influences of the disturbances can be solved.
29. Joystick Teaching System for Industrial Robots Using Fuzzy Compliance Control

1. Introduction
Industrial robots have been applied to several tasks, such as handling, assem-
bling, painting, deburring and so on (Ferretti et al., 2000), (Her & Kazerooni,
1991), (Liu, 1995), (Takeuchi et al., 1993), so that they have been spread to vari-
ous fields of manufacturing industries. However, as for the user interface of
the robots, conventional teaching systems using a teaching pendant are only
provided. For example, in the manufacturing industry of wooden furniture,
the operator has to manually input a large amount of teaching points when a
workpiece with a curved surface is sanded by a robot sander. This task is
complicated and time-consuming. To efficiently obtain a desired trajectory
along a curved surface, we have already considered a novel teaching method
assisted by a joystick (Nagata et al., 2000), (Nagata et al., 2001). In teaching
mode, the operator can directly control the orientation of the sanding tool
attached to the tip of the robot arm by using the joystick. In this case, since the
contact force and translational trajectory are controlled automatically, the
operator only has to instruct the orientation, with no anxiety about overload or
a non-contact state. However, it is not practical to acquire sequential teaching
points with normal directions while adjusting the tool's orientation only with
the operator's eyes.
When handy air-driven tools are used in robotic sanding, keeping contact with
the curved surface of the workpiece along the normal direction is very important
for obtaining a good surface quality. If the orientation of the sanding tool
deviates largely from the normal direction, the kinetic friction force tends to
become unstable; consequently, a smooth and uniform surface quality can't be
achieved. That is why a novel teaching system that assists the operator is
expected in the furniture manufacturing field.
In this paper, an impedance model following force control is first proposed for
an industrial robot with an open architecture servo controller. The control law
allows the robot to follow a desired contact force through an impedance model
in Cartesian space. In addition, a fuzzy compliance control is presented for an
advanced joystick teaching system, which can provide the friction force acting
between the sanding tool and the workpiece to the operator (Nagata et al., 2001).
The joystick has a virtual spring-damper system in which the stiffness
component is suitably varied according to the undesirable friction force, using a
simple fuzzy reasoning method. If an undesirable friction force occurs during the
teaching process, the joystick is controlled with low compliance. Thus, the
operator can feel the friction force through the variation of the joystick's
compliance and recover the orientation of the sanding tool. We apply joystick
teaching with the fuzzy compliance control to a task in which an industrial
robot FS-20 with an open architecture servo controller profiles the curved
surface of a wooden workpiece. Teaching experimental results demonstrate the
effectiveness and promise of the proposed teaching system.
More than two decades ago, two representative force control methods were
proposed (Raibert, 1981), (Hogan, 1985); controllers using such methods have
been advanced and further applied to various types of robots. However, in
order to realize a satisfactory robotic sanding system based on an industrial
robot, deeper considerations and novel designs are needed. Regarding the
force control, we use the impedance model following force control, which can be
easily applied to industrial robots with an open architecture servo controller
(Nagata et al., 2002). The desired impedance equation for the Cartesian-based
control of a robot manipulator is designed by

Ẍ = −M_d^{−1} B_d Ẋ + M_d^{−1} K_f (F − F_d),     (2)
Joystick Teaching System for Industrial Robots Using Fuzzy Compliance Control 801
X(t) = exp(−M_d^{−1} B_d t) X(0) + ∫_0^t exp{−M_d^{−1} B_d (t − τ)} M_d^{−1} K_f (F − F_d) dτ.     (3)
Here, we consider the form at discrete time k with a sampling width Δt. It is
assumed that F is constant within Δt(k − 1) ≤ t < Δt·k and that the diagonal
components of M_d, B_d, K_d and K_f are given constant values. Defining
X(k) = X(t)|_{t=Δt·k}, it follows that
X(k) = exp(−M_d^{−1} B_d Δt) X(k − 1) − {exp(−M_d^{−1} B_d Δt) − I} B_d^{−1} K_f {F(k) − F_d},     (4)

x(k) = exp(−M_d^{−1} B_d Δt) x(k − 1) − {exp(−M_d^{−1} B_d Δt) − I} B_d^{−1} K_f {F(k) − F_d},     (5)

v_f(k) = exp(−M_d^{−1} B_d Δt) x(k − 1) − {exp(−M_d^{−1} B_d Δt) − I} B_d^{−1} K_f {F(k) − F_d} + K_i Σ_{n=1}^{k} {F(n) − F_d}.     (6)

Figure 2. Relation among desired mass M_di, damping B_di and exp(−M_di^{−1} B_di Δt)
From Eq. (6), the following characteristics are seen. Among the impedance
parameters, the desired damping has much influence on the force control
response, as does the force feedback gain: the larger B_d becomes, the smaller
the effectiveness of the force feedback becomes. Figure 2 shows the relation
among M_di, B_di and the diagonal elements of the transition matrix
exp(−M_di^{−1} B_di Δt) in the case that Δt is set to 0.01 s; i denotes the i-th
(i = 1, 2, 3) diagonal element. As can be seen, if B_di is smaller than about 100,
for example, then the appropriate M_di is limited; M_di over 15 leads
exp(−M_di^{−1} B_di Δt) to almost 1. In selecting the impedance parameters, their
combinations should therefore be considered.
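For diagonal impedance matrices, the velocity command of Eq. (6) reduces to an elementwise update, sketched below. Treating M_d, B_d, K_f and K_i as vectors of diagonal entries, and the function shape itself, are implementation conveniences, not from the chapter.

```python
import numpy as np

def force_velocity(x_prev, F, Fd, F_int, Md, Bd, Kf, Ki, dt):
    """One step of the impedance-model-following force control, Eq. (6),
    for diagonal Md, Bd, Kf, Ki given as vectors of diagonal entries.
    Returns the velocity command v_f(k) and the updated force-error sum."""
    phi = np.exp(-Bd / Md * dt)          # diagonal of exp(-Md^-1 Bd dt)
    F_int = F_int + (F - Fd)             # running sum of force errors
    v_f = (phi * x_prev
           - (phi - 1.0) / Bd * Kf * (F - Fd)   # -(phi - I) Bd^-1 Kf term
           + Ki * F_int)                        # integral action
    return v_f, F_int
```

With F above the desired contact force F_d, the command pushes the end-effecter away from the surface, and the exp(−M_d^{−1}B_d Δt) factor filters the response according to the chosen mass and damping, which is exactly the trade-off Fig. 2 illustrates.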
In our proposed teaching system, the joystick is used to control the orientation
of the sanding tool attached to the top of the robot arm. The rotational velocity
of the orientation is generated based on the values of the encoder in x- and y-
rotational directions as shown in Fig. 3. Also, the compliance of the joystick is
varied according to the kinetic friction force acting between a sanding tool and
workpiece. As the friction force becomes large, the joint of the joystick is con-
trolled more stiffly. Therefore, the operator can feel the change of the friction
force through the joystick's compliance while performing teaching tasks.
The desired compliance equation for the joint-based control of a joystick is de-
signed by

$$\tau_J = B_J \dot{\theta}_J + \tilde{K}_J \theta_J \qquad (7)$$
Further, to adjust the compliance of the joystick according to the friction force,
K̃_J is defined as

$$\begin{pmatrix} \tilde{K}_{Jx} & 0 \\ 0 & \tilde{K}_{Jy} \end{pmatrix} = \begin{pmatrix} K_{Jx} & 0 \\ 0 & K_{Jy} \end{pmatrix} + \begin{pmatrix} \Delta K_{Jx} & 0 \\ 0 & \Delta K_{Jy} \end{pmatrix} \qquad (8)$$

where K_J = diag(K_Jx, K_Jy) is the base stiffness matrix and ΔK_J = diag(ΔK_Jx, ΔK_Jy)
is the compensated stiffness matrix whose diagonal elements are suitably
given by the following fuzzy reasoning part.
Rule 1: If |F_x| is Ã_x1 and |F_y| is Ã_y1, Then ΔK_Jx = B_x1 and ΔK_Jy = B_y1
Rule 2: If |F_x| is Ã_x2 and |F_y| is Ã_y2, Then ΔK_Jx = B_x2 and ΔK_Jy = B_y2
Rule 3: If |F_x| is Ã_x3 and |F_y| is Ã_y3, Then ΔK_Jx = B_x3 and ΔK_Jy = B_y3
...
Rule L: If |F_x| is Ã_xL and |F_y| is Ã_yL, Then ΔK_Jx = B_xL and ΔK_Jy = B_yL

where Ã_xi and Ã_yi are the i-th (i = 1,…,L) antecedent fuzzy sets for |F_x| and |F_y|,
respectively, L is the number of fuzzy rules, and B_xi and B_yi are the consequent
constant values which represent the i-th x- and y-rotational compensated
stiffness, respectively. In this case, the antecedent confidence ω_i calculated by the
i-th fuzzy rule is given by

$$\omega_i = \mu_{\tilde{A}_{xi}}(|F_x|)\, \mu_{\tilde{A}_{yi}}(|F_y|) \qquad (9)$$
where μ_X(·) is the Gaussian-type membership function for a fuzzy set, repre-
sented by

$$\mu_X(x) = \exp\left\{-\frac{(x - \alpha)^2 \beta^2}{2}\right\} \qquad (10)$$

where α and β are the center of the membership function and the reciprocal of
the standard deviation, respectively.
The compensated stiffness matrix ΔK_J is then obtained by the weighted average

$$\Delta K_J = \mathrm{diag}\left\{\frac{\sum_{i=1}^{L} B_{xi}\,\omega_i}{\sum_{i=1}^{L} \omega_i},\; \frac{\sum_{i=1}^{L} B_{yi}\,\omega_i}{\sum_{i=1}^{L} \omega_i}\right\} \qquad (11)$$
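As a concrete reading of Eqs. (8)–(11), the sketch below evaluates the Gaussian memberships, combines them by product inference, and defuzzifies by the weighted average. The rule encoding and function names are our own assumptions:

```python
import math

def gaussian_mf(x, alpha, beta):
    # Gaussian membership of Eq. (10): center alpha, beta = 1/(standard deviation)
    return math.exp(-((x - alpha) ** 2) * beta ** 2 / 2.0)

def delta_stiffness(fx, fy, rules):
    """Compensated stiffness (dK_Jx, dK_Jy) of Eq. (11).

    rules -- list of ((ax, bx), (ay, by), Bx, By): Gaussian parameters of the
             antecedent sets for |Fx| and |Fy|, and the consequent constants.
    """
    num_x = num_y = den = 0.0
    for (ax, bx), (ay, by), Bx, By in rules:
        w = gaussian_mf(abs(fx), ax, bx) * gaussian_mf(abs(fy), ay, by)
        num_x += Bx * w
        num_y += By * w
        den += w
    return num_x / den, num_y / den
```

With a single rule centered on the measured forces, the weighted average collapses to that rule's consequents, which is a quick sanity check on the defuzzification.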
806 Industrial Robotics: Theory, Modelling and Control
4. Teaching Experiment
Throughout the remainder of this paper, the effectiveness of the proposed tea-
ching method is demonstrated by teaching experiments.
Photo 1 shows the overview of the sanding robot used in the teaching experi-
ments. The base 6-DOF industrial robot with an open architecture servo
controller is the model FS-20 provided by Kawasaki Heavy Industries, whose
tip of the arm carries an air-driven sanding tool, as shown in Photo 2, via a 6-DOF
force/torque sensor 67M25A provided by Nitta Corporation. The permitted
weight of the workpiece is under 20 kgf. The size of the sanding tool is 60 × 100
mm2 and its paper roughness is #120. Since this type of tool tends to cause not
only high frequency but also large magnitude vibrations, we use the force sen-
sor's filter whose cutoff frequency is set to 30 Hz. Photo 3 shows the 2-DOF jo-
ystick Impulse Engine 2000 provided by Immersion Corporation. This joystick
can exert a maximum force of 8.9 N by controlling the joint torque in
2048 steps. In teaching experiments, we apply the fuzzy compliance control
given by Eq. (7) to this joystick.
Figure 5 shows the block diagram of the sanding robot in teaching mode. The
proposed teaching process is as follows: in the direction of position control, the
translational trajectory generator yields a base trajectory such as a zigzag path
and whirl path with a velocity command v p (k ) . In the direction of orientation
control, a rotational velocity v_o(k) is generated using the compensated angle
of inclination θ̃_J = [θ̃_Jx θ̃_Jy]^T with a velocity transformation gain K_v. θ̃_Ji (i = x,
y) is obtained by

$$\tilde{\theta}_{Ji} = \begin{cases} 0 & \text{if } -500 \le \theta_{Ji} \le 500 \\ \theta_{Ji} - 500 & \text{if } \theta_{Ji} > 500 \\ \theta_{Ji} + 500 & \text{if } \theta_{Ji} < -500 \end{cases} \qquad (12)$$
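Eq. (12) is a dead zone of ±500 encoder counts, so small unintended tilts of the stick produce no rotational command. A one-line sketch (the function name is ours):

```python
def dead_zone(theta, width=500):
    """Compensated inclination angle of Eq. (12): encoder counts inside
    [-width, width] are suppressed; larger tilts are shifted toward zero."""
    if theta > width:
        return theta - width
    if theta < -width:
        return theta + width
    return 0
```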
Note that θ_J = [θ_Jx θ_Jy]^T in Eqs. (12) and (7) is the same variable. In this case,
the teaching operator can conduct the teaching task while feeling, through the
compliance of the joystick, the friction force acting between the sanding tool
and workpiece. In the direction of force control, the impedance model following
force controller given by Eq. (6) yields v_f(k), which is added to the output
from the previously proposed fuzzy feedforward force controller (Nagata et al.,
1999) to generate ṽ_f(k). After being switched by S_p, S_o and S_f, each directional
velocity command is summed to compose a velocity vector v(k). v(k) is trans-
formed into a joint angle velocity q̇(k) with the inverse Jacobian, which is given
to the servo controller.
The base compliance of the joystick is set to K Jx = K Jy =0.167, BJx = BJy =0.5.
Table 2 shows the control parameters given in the experiment. After these pre-
parations, an experiment on the proposed joystick teaching was done. Photo 4
shows the teaching scene by using the proposed teaching system. Figures 7
and 8 show x-directional friction force f x and y-rotational component ΔK Jy of
the compensated stiffness matrix ΔK J , respectively.
It is observed from the result that the compliance of the joystick changes ac-
cording to the friction acting between the sanding tool and workpiece. Thus,
the operator could execute the teaching task while feeling the friction force
through the compliance of the joystick. During teaching, the time series data of
both the position and orientation were stored in the trajectory accumulator, as
shown in Fig. 5. Figure 9 shows the z-directional position obtained by this teaching.
Figure 10 shows the block diagram of the sanding robot in playback mode.
An experiment on a polishing task was carried out using the acquired trajectory.
In this case, although the tangent profiling velocity was set to 40 mm/s, which
was twice as fast as that in teaching mode, the polishing task could be
performed stably. The z-directional force control result is plotted in Fig. 11.
Figure 10. Block diagram of the sanding robot in playback mode using joystick-taught
data.
It has been observed that a desirable response is obtained in spite of the tool's
large vibrations. Furthermore, the surface accuracy of the workpiece was as
good as that polished by skilled workers. The measurements evaluated
by the arithmetical mean roughness method were less than 2 μm.
5. Conclusion
In this paper, a joystick teaching system using a fuzzy compliance control has
been proposed for industrial robots. We have applied the proposed teaching
system to a teaching task of a furniture sanding robot. Experimentally, it was
demonstrated that the operator could safely carry out the teaching task feeling
the friction force acting between a sanding tool and workpiece through the
compliance of the joystick.
The proposed teaching process is as follows: first, a zigzag path considered ac-
cording to both sizes of each work and sanding tool is prepared; next, the
sanding robot, in which an impedance model following force control method
is incorporated, profiles the surface of the workpiece along the zigzag path.
The operator has only to control the orientation of the sanding tool using the
fuzzy compliance controlled joystick so that the tool and workpiece can be kept
in contact with each other in the desired relation of position and orientation.
Since the force controller keeps the contact force at a desired value, the operator
need not pay attention to a sudden overload or non-contact state. The de-
sired trajectory is automatically obtained as data including continuous in-
formation of the position and orientation along the zigzag path on the work-
piece surface. In playback mode, the robot can finally achieve the sanding task
without any assistance from the operator by referring to the acquired trajectory.
6. References
Forcefree Control for Flexible Motion of Industrial Articulated Robot Arms

Satoru Goto
1. Introduction
Many industrial robot arms are operated in industry, and some robotic
applications, such as the pulling-out of products made by die casting, require
flexible motion excited by an external force. Here, flexible motion means
that the robot arm moves passively according to the external force. Industrial
robot arms, however, are difficult to move by an external force, because the
servo controller of an industrial robot arm controls the motion of the robot arm
according to an input signal that commands the motion. The torque generated
by the external force is a kind of disturbance for the robot control system, and
it is compensated by the servo controller.
Impedance control (Hogan 1985; Scivicco & Siciliano, 2000) and compliance
control (Mason, 1981; Micheal et al., 1982) were proposed in order to achieve
the flexible motion, and these methods have been applied to industrial robots
(Ciro et al., 2000). Most of these control methods impose desired dynamic
characteristics between an end-effector and an environment by setting inertia,
friction and stiffness. Usually an elastic spring behavior is introduced in order
to achieve the flexible motion of the robot arm. The potential force of the elastic
spring behavior is a conservative force, however, and the passive motion away
from the environment caused by the external force cannot be achieved by these
control methods.
In this research, the forcefree control, which achieves the passive motion of the
robot arm according to the external force, is proposed. Moreover, the forcefree
control is extended to the forcefree control with independent compensation,
the forcefree control with assigned locus and the position information based
forcefree control. The effectiveness of the proposed forcefree control is confirmed
by comparing experimental results with simulation results. A comparison
between the forcefree control and other force control methods, such as
impedance control, is also described. Finally, applications of the forcefree
control to pull-out work, direct teaching and a rehabilitation robot are
demonstrated.
2. Forcefree Control
Conventionally, flexible motions between the tip of the robot arm and the
environment are mostly achieved by attaching a flexible device to the tip of the
robot arm.
The forcefree control can achieve the flexible motion of the industrial robot
arms under virtual circumstances of non-gravity and non-friction without any
change of the built-in controller. By use of the forcefree control, the robot arm
moves passively according to the external force directly as if it were under the
circumstances of non-friction and non-gravity. The mathematical explanation
of the forcefree control is described below.
The dynamics of an articulated robot arm is expressed by

$$H(q)\ddot{q} + h(q,\dot{q}) + D\dot{q} + N_\mu\,\mathrm{sgn}(\dot{q}) + g(q) = \tau_s + \tau_f \qquad (1)$$

where H(q) is the inertia matrix, Dq̇ + N_μ sgn(q̇) is the friction term, h(q, q̇) is the
coupling nonlinear term, g(q) is the gravity term, q is the output joint
angle, τ_s is the torque input to the robot arm and τ_f is the joint torque
corresponding to the external force f acting on the tip of the robot arm (Fu et
al., 1987).
In industrial robot arms, a servo controller is adopted to control the motion
of the robot arm. The control loop of the servo controller is shown on the right
side of Fig. 1, where K p , K v and K IJ are position loop gain, velocity loop gain
and torque constant, respectively (Nakamura et al., 2004; Kyura, 1996). The
servo controller adopts P and PI type cascade control: the P controller is used
for the position loop control and the PI controller is used for the velocity loop
control. The velocity control loop has the role of the derivative action. The
servo controller generates the torque input to the robot arm as described by

$$\tau_s = K_\tau\left\{K_v\left(K_p(q_d - q) - \dot{q}\right)\right\} + \tau_d + \tau_g - \tau_f \qquad (2)$$
where q_d is the input joint angle, τ_d is the friction compensation torque and
τ_g is the gravity compensation torque. As expressed in (2), the servo controller
includes the friction compensation and the gravity compensation through the
integral action of the PI control. The friction and the gravity are assumed to be
exactly compensated by the servo controller as

$$\tau_d = D\dot{q} + N_\mu\,\mathrm{sgn}(\dot{q}) \qquad (3)$$

$$\tau_g = g(q). \qquad (4)$$
Forcefree control means that the influences of friction and gravity on the robot
arm motion are compensated. Under the forcefree control, the joint torque
corresponding to the external force is obtained from the servo controller as

$$\tau_f = -(\tau_s - \tau_d - \tau_g). \qquad (8)$$

Finally, the control law of the forcefree control is obtained by substituting (6)
and (8) into (5) and by solving for q_d as

$$q_d = K_p^{-1}\left(K_v^{-1} K_\tau^{-1}\left(-\tau_s + \tau_d + \tau_g\right) + \dot{q}\right) + q. \qquad (9)$$
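Since the gain matrices K_p, K_v and K_τ are diagonal in practice, Eq. (9) can be evaluated elementwise per joint. The sketch below is a minimal reading of that control law with names of our own choosing, not the authors' implementation:

```python
def forcefree_reference(tau_s, tau_d, tau_g, q, q_dot, Kp, Kv, Kt):
    """Position reference q_d of Eq. (9), evaluated elementwise for diagonal
    gain matrices (lists of per-joint gains):

        q_d = Kp^-1 ( Kv^-1 Kt^-1 (-tau_s + tau_d + tau_g) + q_dot ) + q
    """
    return [((-ts + td + tg) / (kt * kv) + qdot) / kp + qi
            for ts, td, tg, qi, qdot, kp, kv, kt
            in zip(tau_s, tau_d, tau_g, q, q_dot, Kp, Kv, Kt)]
```

With τ_s = τ_d + τ_g (no external torque), the reference reduces to q + q̇/K_p, i.e. the arm is commanded to coast; an external torque shifts the reference in its own direction, which is exactly the passive behaviour the forcefree control aims at.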
1. To cancel the effect of the gravity, the robot arm is set around its vertical
position;
2. Various constant velocity inputs are applied to each link of the robot arm;
3. Respective torque outputs corresponding to the applied velocities are mea-
sured by using the torque monitor;
4. The torque output vs. applied velocities are plotted;
5. Viscous friction coefficient D and magnitude of Coulomb friction N ȝ in
(3) are estimated by using the least squares method from the collected da-
ta.
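Step 5 amounts to a linear least-squares fit of the model τ = Dq̇ + N_μ sgn(q̇) per joint. A self-contained sketch solving the 2 × 2 normal equations directly (no claim that the authors' software works this way):

```python
def estimate_friction(velocities, torques):
    """Least-squares estimates (D, Nmu) of tau = D*v + Nmu*sign(v) from
    constant-velocity runs, via the 2x2 normal equations of the design
    matrix [v, sign(v)]."""
    s = lambda v: (v > 0) - (v < 0)          # sign function
    a11 = sum(v * v for v in velocities)
    a12 = sum(v * s(v) for v in velocities)
    a22 = sum(s(v) ** 2 for v in velocities)
    b1 = sum(v * t for v, t in zip(velocities, torques))
    b2 = sum(s(v) * t for v, t in zip(velocities, torques))
    det = a11 * a22 - a12 * a12              # needs runs in both directions
    D = (a22 * b1 - a12 * b2) / det
    Nmu = (a11 * b2 - a12 * b1) / det
    return D, Nmu
```

Note that velocities of both signs are needed; with one-sided data the two columns of the design matrix are collinear and the normal equations become singular.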
In order to smoothen the Coulomb friction effect, the sigmoid function is intro-
duced into the friction term as

$$\tau_d = D\dot{q} + N_\mu f_d(\dot{q}) \qquad (10)$$

where

$$f_d(\dot{q}) = \begin{pmatrix} \left(1 - e^{-\gamma \dot{q}_1}\right) / \left(1 + e^{-\gamma \dot{q}_1}\right) \\ \left(1 - e^{-\gamma \dot{q}_2}\right) / \left(1 + e^{-\gamma \dot{q}_2}\right) \\ \vdots \\ \left(1 - e^{-\gamma \dot{q}_n}\right) / \left(1 + e^{-\gamma \dot{q}_n}\right) \end{pmatrix}. \qquad (11)$$
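Each component of f_d in Eq. (11) is the logistic form of tanh(γq̇/2): an odd function that passes smoothly through zero and saturates at ±1, with γ controlling how sharply it rises. A minimal sketch:

```python
import math

def smooth_sign(v, gamma=120.0):
    """Sigmoid replacement for sgn(v) used in Eqs. (10)-(11); equal to
    tanh(gamma * v / 2). gamma = 120 matches the experiments below."""
    return (1.0 - math.exp(-gamma * v)) / (1.0 + math.exp(-gamma * v))
```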
The gravity term is a function of the robot arm position q as in (4). Here, the
gravity term is modelled by

$$g(q) = U(q)a + V(\dot{q})b \qquad (12)$$

$$V(\dot{q}) = \begin{pmatrix} e^{-\lambda \dot{q}_1^2} & 0 & \cdots & 0 \\ 0 & e^{-\lambda \dot{q}_2^2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & e^{-\lambda \dot{q}_n^2} \end{pmatrix} \qquad (13)$$
The gravity compensation torque τ_g is modelled by

$$\tau_g = \left(I - e^{At}\right) g(q) \qquad (14)$$

where

$$e^{At} = \begin{pmatrix} e^{-t/T_1} & 0 & \cdots & 0 \\ 0 & e^{-t/T_2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & e^{-t/T_n} \end{pmatrix}. \qquad (15)$$
The algorithm of the forcefree control is explained here. Initial setting of the
forcefree control is expressed in the following first 3 items.
2.4 Verification
Robot arm motion by using the forcefree control was verified by a simulation
study and experiments. The simulation study shows an ideal motion of the
forcefree control. An industrial articulated robot arm (Performer-MK3S,
YAHATA Electric Machinery Mfg., Co., Ltd) was used for the experiment on
the forcefree control with independent compensation. Two links of Performer-
MK3S were used for the experiment. The link lengths of the robot arm are
l1 = 0.25 [m], l 2 = 0.215 [m], and masses of the links are m1 = 2.86 [kg],
m2 = 2.19 [kg], respectively. The position loop gain was K p = diag (25,25) [1/s],
the velocity loop gain was K v = diag(150,150 ) [1/s], and the torque constant
was K IJ = diag(0.017426,0.036952 ) [ Nm / (rad / s 2 )].
Fig. 2 shows the experimental results of the estimation of the friction term. The
bold lines show the results of (10) for γ = 120 and the dotted lines show the
results of Dq̇ + N_μ sgn(q̇). The step change of Dq̇ + N_μ sgn(q̇) is
smoothened by using the sigmoid function.
Figure 2. Friction compensation torque of link 1 (a) and link 2 (b) versus time
Fig. 3 shows the experimental results of the estimation of the gravity term. The
bold lines show the results of (12) for λ = 150 and the dotted lines show the
results of U(q)a + b. The alternate long and short dashed lines show the result of
U(q)a. The estimated friction term approximately reproduces the static friction effect.
Figure 3. Gravity compensation torque of link 1 (a) and link 2 (b) versus time
Fig. 4 shows the simulation results and the experimental results, where the
external force f is (−88.9, 6.1) [N]. The dotted lines show the simulation results
and the bold lines show the experimental results. As shown in Fig. 4, the
experimental results and the theoretical responses are almost the same, which
shows that the exact forcefree control can be achieved in practice. The result
shows that the forcefree control was achieved with an actual industrial robot arm.
Forcefree Control for Flexible Motion of Industrial Articulated Robot Arms 821
Figure 4. Simulation and experimental results of the forcefree control: external forces,
joint positions, joint velocities, joint torques and (i) the locus of the tip
where C_f, C_d and C_g are the coefficients of the inertia, friction and gravity
terms, respectively. They can be tuned to adjust the effects of the inertia, friction
and gravity independently. For instance, C_f = E, C_d = 0 and C_g = 0
corresponds to the forcefree control, and C_f → ∞, C_d = C_g = 0 corresponds to
the perfect compensation of the inertia, friction and gravity. The control law is
then

$$q_d = K_p^{-1}\left(K_v^{-1} K_\tau^{-1}\left(C_f \tau_f - C_d \tau_d - C_g \tau_g\right) + \dot{q}\right) + q. \qquad (17)$$

Finally, the control law of the forcefree control with independent compensation
is obtained by substituting (8) into (17) as

$$q_d = K_p^{-1}\left(K_v^{-1} K_\tau^{-1}\left(-C_f \tau_s + (C_f - C_d)\tau_d + (C_f - C_g)\tau_g\right) + \dot{q}\right) + q \qquad (18)$$
3.1.2 Verification
A step input of 10 [N] in the X-axis direction was applied to the tip of the robot
arm. A force sensor was used to measure the value of the external force. The
initial end-effector position of the robot arm was at (0.3, 0.3) [m].
Experimental results of the forcefree control with independent compensation
are shown in Fig. 6, where the coefficients of compensation are C_f = 2E,
C_d = E, C_g = 0. In Fig. 6, the dotted lines show the theoretical responses
obtained through simulation and the bold lines show the experimental results.
As shown in Fig. 6, the experimental results and the theoretical responses are
almost the same, which shows that the exact forcefree control with independent
compensation can be achieved in practice. The result shows that the forcefree
control with independent compensation was realized with an actual industrial
robot arm.
Figure 6. Experimental results of the forcefree control with independent compensation:
external forces, joint positions and the locus of the tip
By use of the forcefree control, the robot arm moves according to the external
force, and the direction of the motion depends on the direction of the external
force. When the motion of the robot arm is restricted, the original forcefree
control cannot be applied. In such a case, flexible motion with an assigned
locus is required. In this section, the forcefree control with assigned locus is
introduced. The forcefree control with assigned locus makes the tip of the robot
arm follow the assigned locus, while the tip velocity depends on the external
force.
The forcefree control with assigned locus is based on mass point type forcefree
control. Mass point type forcefree control is constructed from a mass point
which is assumed to be the tip of the robot arm. Therefore, the motion of the
mass point and the tip of the robot arm are the same. The mass point moves
according to the velocity v caused by the external force f , when an external
force f is applied on the mass point.
The direction of the motion is the same as that of the external force f, and the
absolute value of the velocity v depends on the external force f. Therefore,
the mass point cannot, by itself, follow the assigned locus. In order to follow the
assigned locus, the direction of the generated velocity v has to be changed to
the tangential direction of the assigned locus. By continuing this process, the
direction of the velocity v of the mass point is always kept tangential to the
assigned locus. Hence, the tip of the robot arm follows the assigned locus with
a velocity determined by the external force f.
Fig. 7 shows the block diagram of the forcefree control with assigned locus.
Under the non-gravity condition, the equation of motion of the mass point
driven by the external force f is

$$m\ddot{r} + d\dot{r} = f \qquad (19)$$

where r is the position of the mass point, m is the mass of the mass point and
d is the friction coefficient. In order to realize the flexible motion with assigned
locus according to the external force f, the mass point obeys equation (19) only
for the component in (x, y, z) which gives the maximal amplitude of the external
force f. The other components of the position of the mass point are determined
by the assigned locus h(r_d) = 0. As a result, the position of the tip of the robot
arm r_d under the forcefree control with assigned locus is determined by

$$\begin{cases} m\ddot{r}_{di} + d\dot{r}_{di} = f_i, & i = \arg\max_{j=x,y,z} |f_j| \\ h(r_d) = 0 \end{cases} \qquad (20)$$
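To make Eq. (20) concrete, the sketch below Euler-integrates the dominant-axis dynamics and slaves the other coordinate to a planar locus given explicitly as y = locus(x). The restriction to an x-dominant force and the explicit locus form are simplifying assumptions of ours, not part of the chapter:

```python
def assigned_locus_sim(x0, locus, fx, m, d, dt, steps):
    """Planar sketch of Eq. (20): the x axis obeys m*x'' + d*x' = fx
    (assuming |fx| dominates the external force), and y is slaved to the
    assigned locus y = locus(x). Returns the simulated tip path."""
    x, vx = x0, 0.0
    path = []
    for _ in range(steps):
        vx += (fx - d * vx) / m * dt   # Euler step of the dominant-axis dynamics
        x += vx * dt
        path.append((x, locus(x)))    # the constraint h(r_d) = 0 fixes y
    return path
```

Pushing harder along x simply makes the tip traverse the same locus faster, which is the stated property of the forcefree control with assigned locus.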
3.2.3 Verification
Verification of the forcefree control with assigned locus was carried out by
simulation and experiment. Besides, the simulation and experiment were carried
out under non-friction conditions, which means the coefficient of friction d was
zero.
Simulation and experimental results are shown in Fig. 8. In Fig. 8, (a) and (b)
illustrate the components of the external force f along the direction of X-axis
and Y-axis, respectively, (c) and (d) show the joint trajectories of link1 and
link2, respectively, (e) and (f) show the velocity v of X-axis and Y-axis,
respectively, and (g) shows the locus of the tip of the robot arm. In Fig. 8, the
dotted line denotes the simulation result and the bold line denotes the
experimental results. In Fig. 8(g), the dashed line shows the assigned locus. It can be
verified that simulation and experimental results are comparable, and both
results have realized the exact assigned locus. This phenomenon illustrates the
realization of the proposed forcefree control with assigned locus.
Figure 8. Simulation and experimental results of forcefree control with assigned locus
Fig. 9 shows the block diagram of the position information based forcefree
control. As in Fig. 9, the velocity loop in the servo controller is a P controller,
and the friction compensation τ_d, the gravity compensation τ_g and the
external torque compensation τ_f are not included in the servo controller, in
contrast to the servo controller in Fig. 1. The control law of the position
information based forcefree control is

$$q_d = K_p^{-1}\left(K_v^{-1} K_\tau^{-1}\left(\tau_f + \tau_d + \tau_g\right) + \dot{q}\right) + q \qquad (21)$$
3.3.3 Verification
Verification of the position information based forcefree control was carried out
by a simulation study and an experiment. Simulation and experimental results
are shown in Fig. 10. In Fig. 10, (a) and (b) illustrate the components of the
external force f in the directions of X-axis and Y-axis, respectively, (c) and (d)
show the joint trajectories of link1 and link2, respectively, (e) and (f) show the
velocity v of X-axis and Y-axis, respectively, (g) and (h) show the estimated
torque by using the torque observer and the torque calculated by the force
sensor, respectively, and (i) shows the locus of the tip of the robot arm. In Fig.
10, the dotted line denotes the simulation result and the bold line shows the
experimental result. In (g) and (h), the estimated torque coincides with the
actual torque. It can be verified that the simulation and experimental results
are comparable, and that the forcefree control can be achieved by using only
the position information.
Figure 10. Simulation and experimental results of the position information based
forcefree control: external forces, joint positions, joint velocities, joint torques and the
locus of the tip
In order to illustrate the features of the forcefree control, the forcefree control
with independent compensation is compared with the impedance control
(Hogan, 1985; Scivicco & Siciliano, 2000). The impedance control is a typical
force control method, which regulates the contact force between the tip of the
robot arm and the object through assigned inertia, friction and stiffness. The
impedance characteristics are expressed by

$$M_d\ddot{r} + D_d(\dot{r} - \dot{r}_d) + K_d(r - r_d) = F \qquad (23)$$

where F is the assigned force between the tip of the robot arm and the object,
M_d, D_d and K_d are the assigned inertia, friction and stiffness, respectively,
and r_d, ṙ_d are the objective position and the objective velocity in working
coordinates, respectively. The dynamics of the robot arm, expressed in working
coordinates, is
$$H_r(q)\ddot{r} + h_r(q,\dot{q}) + D_r\dot{r} + N_{\mu r}\,\mathrm{sgn}(\dot{r}) + g_r(q) = \left(J^T\right)^{-1}\tau + F. \qquad (25)$$
By substituting (23) into (25), the torque input for the impedance control is
obtained as

$$\tau = J^T\left[H_r(q)M_d^{-1}\left\{-D_d(\dot{r} - \dot{r}_d) - K_d(r - r_d)\right\} + h_r(q,\dot{q}) + \left(H_r(q)M_d^{-1} - I\right)F + D_r\dot{r} + N_{\mu r}\,\mathrm{sgn}(\dot{r}) + g_r(q)\right]. \qquad (26)$$
By substituting (27) into (24), the torque input for the forcefree control with
independent compensation is obtained as

$$\tau = J^T\left[\left(C_f - I\right)F + \left(I - C_d\right)\left(D_r\dot{r} + N_{\mu r}\,\mathrm{sgn}(\dot{r})\right) + \left(I - C_g\right)g_r(q)\right]. \qquad (28)$$
By comparing the torque input of the impedance control (26) and that of the
forcefree control with independent compensation (28), the following
relationships are fulfilled:

$$M_d = C_f^{-1} H_r(q) \qquad (30)$$
$$D_d = O \qquad (31)$$
$$K_d = O \qquad (32)$$
$$h_r(q,\dot{q}) = 0 \qquad (33)$$
$$C_d = C_g = O \qquad (34)$$
4.2 Comparison between Forcefree Control with Assigned Locus and Impedance
Control
In the case of the forcefree control with assigned locus and the impedance
control, the tip motion of the robot arm is related to the joint motion, but the
joint coordinates need not actually be considered, because the generalized
coordinates are defined in working coordinates.
The impedance characteristics in working coordinates are expressed by

$$M_d\ddot{r} + D_d(\dot{r} - \dot{r}_d) + K_d(r - r_d) = f \qquad (35)$$

where f is the assigned force between the tip of the robot arm and the object,
M_d, D_d and K_d are the assigned inertia, friction and stiffness, and r_d, ṙ_d are
the objective position and the objective velocity in working coordinates,
respectively.
The mass point type forcefree control is expressed by

$$m\ddot{r} = f \qquad (36)$$

where m is the assigned mass of the mass point. By comparing (35) and (36),
the mass point type forcefree control is achieved by

$$M_d = m \qquad (37)$$
$$D_d = 0 \qquad (38)$$
$$K_d = 0 \qquad (39)$$
After achieving the mass point type forcefree control by the impedance control,
the forcefree control with assigned locus is accomplished in exactly the same
way explained in section 3.2.2.
Table 1 summarizes the comparison of the forcefree control with independent
compensation, the forcefree control with assigned locus and the impedance
control.
Pull-out work means that the workpiece is pulled out by the push-rod while
the workpiece is held by the robot arm; it is commonly used in aluminum
casting in industry. The operation follows the sequence: a) the hand of the
robot arm grasps the workpiece, b) the workpiece is pushed out by the push-
rod, and c) the workpiece is released from the force of the push-rod. The
motion of the robot arm requires flexibility in order to follow the pushed
workpiece.
Experimental results of pull-out work by the forcefree control are shown in Fig.
11. Fig. 11(a) and (b) show the torque monitor outputs of link 1 and link 2
caused by the push-rod, respectively, (c) and (d) show the positions of link 1
and link 2, respectively, and Fig. 11(e) shows the locus of the tip of the robot
arm.
This result confirms the realization of pull-out work with an industrial
articulated robot arm based on the forcefree control.
In general, the industrial robot arms carry out operations based on teaching-
playback method. The teaching-playback method is separated into two parts,
i.e., teaching part and playback part. In the teaching part, the robot arm is
taught the data of operational positions and velocities. In the playback part, the
robot arm carries out the operation according to the taught data.
The teaching of industrial articulated robot arms is categorized into two
methods, i.e., on-line teaching and off-line teaching. Off-line teaching requires
another space for teaching. Therefore, on-line teaching is used for industrial
articulated robot arms. On-line teaching is further categorized into remote
teaching and direct teaching. Here, remote teaching means that the
teaching is carried out by use of a teach-pendant, i.e., special equipment for
teaching, while direct teaching means that the robot arm is moved directly by
human force.
Usually, the teaching of industrial articulated robot arms is carried out by
remote teaching. Remote teaching by use of a teach-pendant, however, requires
human skill because there exists a difference between the operator coordinates
and the robot arm coordinates. Besides, the operation method of the teach-
pendant is not unique; it depends on the robot arm manufacturer.
Direct teaching is more convenient for industrial articulated robot arms than
remote teaching. The process of direct teaching is as follows: 1) the operator grasps the
tip of the robot arm, 2) the operator brings the tip of the robot arm to the
teaching points by his hands, directly, and 3) teaching points are stored in
memory. Operational velocities between teaching points are set after the
position teaching process. In other words, anyone can easily carry out teaching.
In direct teaching, operational positions of the industrial articulated robot arm
are taught by human hands directly. The proposed forcefree control can be
applied to realize the direct teaching of the industrial articulated robot arm.
Forcefree control can realize non-gravity and non-friction motion of the
industrial articulated robot arm under the given external force. In other words,
an industrial articulated robot arm is actuated by human hands, directly. Here,
position control of the tip of the robot arm is the important factor in direct
teaching. Position control of the tip of the robot arm is carried out by the
operator in direct teaching.
Direct teaching for teaching-playback type robot arms is an application of the
forcefree control with independent compensation, where the robot arm is
manually moved by the human operator's hand. Usually, teaching of industrial
articulated robot arms is carried out by using operational equipment, and
smooth teaching can be achieved if direct teaching is realized.
Fig. 12 shows the experimental result of direct teaching, where the
compensation coefficients are C_f = 0.5E, C_d = E, C_g = 0. As shown in Fig. 12,
teaching was successfully done by the direct use of the human hand. The
forcefree control with independent compensation does not use force sensors,
and any part of the robot arm can be grasped to move the robot arm.
Figure 11. Experimental result of pull-out work by using the forcefree control with
independent compensation (C_f = 0.2E, C_d = C_g = 0): torque monitor outputs, link
positions and the locus of the tip
Figure 12. Experimental result of direct teaching by using the forcefree control with
independent compensation (C_f = 0.5E, C_d = E, C_g = 0): joint torques, joint positions,
and the tip locus against the objective locus
The forcefree control with independent compensation uses the torque monitor
in order to detect the external force. Hence, each joint can be monitored for
unexpected torque deviation from the desired torque profile as a result of
unplanned circumstances, such as accidental contact with an object or a
human being. As a result, the forcefree control with independent compensation
can also improve the safety of work alongside a human operator. To utilize this
feature, the forcefree control with independent compensation is applied to
rehabilitation robots.
The forcefree control with independent compensation is applied to the control
of a meal assistance orthosis for disabled persons, both for direct teaching of
the plate position and mouth position and for safe operation against
unexpected human motion.
If the forcefree control with independent compensation is installed in such
systems, safety is improved because, when unexpected contact between the
operator and the robot occurs, an escape motion of the robot arm can be
invoked by the forcefree control method.
6. Conclusions
The proposed forcefree control realizes passive motion of the robot arm
according to the external force. Moreover, the forcefree control is extended to
the forcefree control with independent compensation, the forcefree control
with assigned locus and the position information based forcefree control.
Experiments on an actual industrial robot arm were successfully carried out
with the proposed methods. The comparison between the forcefree control and
other force control methods is presented and the features of the forcefree
control are clarified. The proposed method requires no change in the hardware
of the robot arm and is therefore readily applicable to many industrial
applications.
7. References

Predictive force control of robot manipulators in nonrigid environments

1. Introduction
controller generate the position and velocity references in the constrained di-
rection, to obtain a desired force profile acting on the environment. The main
advantage of this control strategy is to provide an easy inclusion of the envi-
ronment model in the controller design and thus to improve the global per-
formance of the control system.
Usually, impedance and environmental models are linear, mainly because the
solution of an unconstrained optimization procedure can be analytically ob-
tained with moderate computational burden. However, a nonrigid environ-
ment has in general a nonlinear behavior, and a nonlinear model for the con-
tact surface must be considered. Therefore, in this paper the linear
spring/damper parallel combination, often used as a model of the environ-
ment, is replaced by a nonlinear one, where the damping effect depends on the
penetration depth (Marhefka & Orin, 1996). Unfortunately, when a nonlinear
model of the environment is used, the resulting optimization problem to be
solved in the MPC scheme is nonconvex. This problem can be solved using
discrete search techniques, such as the branch-and-bound algorithm (Sousa et
al., 1997). This discretization, however, introduces a tradeoff between the
number of discrete actions and the performance. Moreover, the discrete ap-
proximation can introduce oscillations around non-varying references, usually
known as the chattering effect, and slow step responses depending on the se-
lected set of discrete solutions. These effects are highly undesirable, especially
in force control applications. A possible solution to this problem is a fuzzy
scaling machine, which is proposed in this paper. Fuzzy logic has been used in
several applications in robotics. In the specific field of robot force control, some
relevant references, such as (Liu, 1995; Corbet et al., 1996; Lin & Huang, 1997),
can be mentioned. However, these papers use fuzzy logic in the classic low
level form, while in this paper fuzzy logic is applied in a higher level. Here, the
fuzzy scaling machine alleviates the effects due to the discretization of the
nonconvex optimization problem to be solved in the model predictive algo-
rithm, which derives the virtual reference for the impedance controller consid-
ering a nonlinear environment. The fuzzy scaling machine proposed in this
paper uses an adaptive set of discrete alternatives, based on the fulfillment of
fuzzy criteria applied to force control. This approach has been used in predic-
tive control (Sousa & Setnes, 1999), and is generalized here for model predic-
tive algorithms. The adaptation is performed by a scaling factor multiplied by
a set of alternatives. By using this approach, the number of alternatives is kept
low, while performance is increased. Hence, the problems introduced by the
discretization of the control actions are highly diminished.
For the purpose of performance analysis, the proposed predictive force control
strategy with fuzzy scaling is compared with the impedance controller with
force tracking by simulation with a two-degree-of-freedom (2-DOF) manipula-
tor, considering a nonlinear model of the environment. The robustness of the
predictive control scheme is tested considering unmodeled friction and Coriolis
effects, as well as geometric and stiffness uncertainties on the contact surface.
The implementation and validation of advanced control algorithms, like the
one presented above, require a flexible structure in terms of hardware and
software. However, one of the major difficulties in testing advanced
force/position control algorithms lies in the lack of available commercial
open robot controllers. In fact, industrial robots are equipped with digital con-
trollers having fixed control laws, generally of PID type with no possibility of
modifying the control algorithms to improve their performance. Generally, ro-
bot controllers are programmed with specific languages with fixed pro-
grammed commands having internally defined path planners, trajectory inter-
polators and filters, among other functions. Moreover, in general those
controllers only deal with position and velocity control, which is insufficient
when it is necessary to obtain an accurate force/position tracking performance
(Baptista et al., 2001b). Considering these difficulties, in recent years several
open control architectures for robotic applications have been proposed. Gener-
ally, these solutions rely on digital signal processor techniques (Mandal &
Payandeh, 1995; Jaritz & Spong, 1996) or on expensive VME hardware running
under the VxWorks operating system (Kieffer & Yu, 1999). This fact has moti-
vated the development of an open PC-based software kernel for management,
supervision and control. The real-time software tool for the experimentation of
the algorithms proposed in this paper was developed considering require-
ments such as low cost, high flexibility and the possibility of incorporating
new hardware devices and software tools (Baptista, 2000a).
This article is organized as follows. Section 2 summarizes the manipulator and
the environment dynamic models. The impedance controller with force track-
ing is presented in section 3. Section 4 presents the model predictive algorithm
with fuzzy scaling applied to force control. The simulation results for a 2-DOF
robot manipulator are discussed in section 5. The experimental setup and the
force control algorithms implemented in real-time are presented in section 6.
The experimental results with a 2-DOF planar robot manipulator are presented
in section 7. Finally, some conclusions are drawn in section 8.
where q, q̇, q̈ ∈ R^(n×1) correspond to the joint position, velocity and acceleration
vectors, respectively, M(q) ∈ R^(n×n) is the symmetric positive definite inertia ma-
trix, C(q, q̇) ∈ R^(n×n) is the centripetal and Coriolis matrix, g(q) ∈ R^(n×1) contains the
gravitational terms and d(q̇)q̇ ∈ R^(n×1) accounts for the frictional terms. The vec-
tor τ ∈ R^(n×1) is the joint input torque vector and τ_e ∈ R^(n×1) denotes the generalized
vector of joint torques exerted by the environment on the end-effector. From
(1) it is possible to derive the robot dynamic model in the Cartesian space:
M_x(x) ẍ + C_x(x, ẋ) ẋ + g_x(x) + d_x(ẋ) ẋ = f − f_e    (2)
where x is the n-dimensional vector of the position and orientation of the ma-
nipulator's end-effector, f = J^(−T)(q) τ ∈ R^(n×1) is the robot's driving force,
f_e ∈ R^(n×1) is the contact force vector and J represents the Jacobian matrix.
The interaction force vector f_e = [f_n  f_t]^T is composed of the normal contact
force f_n and the tangential contact force f_t caused by friction contact between
the end-effector and the surface. An accurate modeling of the contact between
the manipulator and the environment is usually difficult to obtain due to the
complexity of the robot's end-effector interaction with the environment. In this
paper, the normal contact force f_n is modeled as a nonlinear spring-damper
mechanical system, according to (Marhefka & Orin, 1996):

f_n = k_e δx + ρ_e (δx) ẋ    (3)
where the terms k_e and ρ_e are the environment stiffness and damping coeffi-
cients, respectively, δx = x − x_e is the penetration depth, where x_e stands for the
distance between the surface and the base Cartesian frame. Notice that the
damping effect depends non-linearly on the penetration depth δx. The tangen-
tial contact force vector f_t due to surface friction is assumed to be given as
proposed by (Yao & Tomizuka, 1995):
proposed by (Yao & Tomizuka, 1995):
f_t = μ f_n sgn(ẋ_p)    (4)
where ẋ_p is the unconstrained or sliding velocity and μ is the dry friction coef-
ficient between the end-effector and the contact surface.
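As an illustration, the environment model of (3)–(4) can be sketched numerically as follows. This is a sketch only, not the chapter's code, and the parameter values are the illustrative ones used later in the simulation section:

```python
import math

def contact_forces(x, x_dot, xp_dot, xe, ke, rho_e, mu):
    """Nonlinear spring-damper contact model of (3)-(4).

    x       : end-effector position in the constrained direction
    x_dot   : velocity in the constrained direction
    xp_dot  : sliding (unconstrained) velocity
    xe      : location of the undeformed surface
    ke      : environment stiffness
    rho_e   : damping coefficient (damping grows with penetration)
    mu      : dry friction coefficient
    """
    delta_x = x - xe                      # penetration depth
    if delta_x <= 0.0:
        return 0.0, 0.0                   # no contact, no forces
    # (3): normal force with penetration-dependent damping
    fn = ke * delta_x + rho_e * delta_x * x_dot
    # (4): tangential friction force, opposing the sliding direction
    ft = mu * fn * math.copysign(1.0, xp_dot) if xp_dot != 0.0 else 0.0
    return fn, ft

fn, ft = contact_forces(x=0.512, x_dot=0.01, xp_dot=0.2,
                        xe=0.5, ke=1000.0, rho_e=45.0, mu=0.2)
```

Note how, unlike a linear spring/damper pair, the dissipative term vanishes together with the penetration depth, so no tensile force appears at separation.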
3. Impedance control
M_d ẍ − B_d (ẋ_d − ẋ) − K_d (x_d − x) = −f_e    (5)
where ẋ_d, x_d are the desired velocity and position defined in the Cartesian
space, respectively, and ẋ, x are the end-effector velocity and position in Car-
tesian space, respectively. The matrices M_d, B_d, K_d are the desired inertia,
damping and stiffness for the manipulator. The reference or target end-effector
acceleration u ≡ ẍ is then given by:
u = M_d^(−1) ( B_d ė + K_d e − f_e )    (6)
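A minimal sketch of the target-acceleration computation (6), assuming e = x_d − x and ė = ẋ_d − ẋ; the gain matrices below are illustrative values, not the chapter's tuning:

```python
import numpy as np

def target_acceleration(Md, Bd, Kd, xd, xd_dot, x, x_dot, fe):
    """Target end-effector acceleration of (6): u = Md^{-1}(Bd*edot + Kd*e - fe),
    with e = xd - x and edot = xd_dot - x_dot."""
    e = xd - x
    e_dot = xd_dot - x_dot
    return np.linalg.solve(Md, Bd @ e_dot + Kd @ e - fe)

# Illustrative 2-DOF example: a 1 cm position error along the first axis,
# no velocity error and no contact force.
Md = np.diag([2.5, 2.5])
Bd = np.diag([50.0, 50.0])
Kd = np.diag([250.0, 2500.0])
u = target_acceleration(Md, Bd, Kd,
                        xd=np.array([0.5, 0.1]), xd_dot=np.zeros(2),
                        x=np.array([0.49, 0.1]), x_dot=np.zeros(2),
                        fe=np.zeros(2))
```

Solving the linear system instead of inverting M_d explicitly is the usual numerically safer choice.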
The major drawback of the impedance control scheme presented above is re-
lated to its poor force tracking capability, especially in the presence of nonrigid
environments (Baptista et al., 2000b). However, from the conventional imped-
ance control scheme it is possible to obtain a force control scheme in a steady-
state contact condition with the surface. Considering the impedance control
scheme (6) in the constrained direction, the following holds:

m_d u_f − b_d (ẋ_v − ẋ) − k_d (x_v − x) = −f_n    (7)

where x_v, ẋ_v and u_f are the virtual position, velocity and target acceleration, re-
spectively, while m_d, b_d, k_d are the appropriate elements of the M_d, B_d, K_d
matrices defined in (5) in the constrained direction. The contact force f_n during
steady-state contact with the surface is given by:
f_n = k_d (x_v − x)    (8)

f_n = k_e δx    (9)

This leads to the following steady-state position and contact force (Singh &
Popa, 1995):

x_ss = (k_d x_v + k_e x_e) / (k_d + k_e)    (10)

f_n,ss = (k_d k_e / (k_d + k_e)) (x_v − x_e)    (11)

x_v = x_e + f_d (k_e + k_d) / (k_e k_d)    (12)

x_v = x_e + f_d / k_d                          if f_n = 0
x_v = x_e + f_d (k_d δx + f_n) / (k_d f_n)     if f_n ≠ 0    (13)
which is valid for both contact and non-contact conditions. This approach endows
the classical impedance controller, given by (6), with force tracking capability
without explicit knowledge of the environment stiffness. Notice that ẋ_v is usu-
ally assumed to be zero due to the noise always present in the force sensor
measurements.
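The reference trajectory rule (13) can be sketched as follows; a sketch only, with illustrative variable names:

```python
def virtual_position(xe, fd, kd, delta_x, fn):
    """Virtual reference xv of (13), valid in contact and non-contact conditions.

    xe      : surface location
    fd      : desired contact force
    kd      : desired stiffness in the constrained direction
    delta_x : measured penetration depth x - xe
    fn      : measured normal contact force
    """
    if fn == 0.0:                          # non-contact branch of (13)
        return xe + fd / kd
    # contact branch of (13): the environment stiffness enters implicitly
    # through the measured pair (delta_x, fn), so ke is never needed.
    return xe + fd * (kd * delta_x + fn) / (kd * fn)

xv_free = virtual_position(xe=0.5, fd=10.0, kd=250.0, delta_x=0.0, fn=0.0)
xv_contact = virtual_position(xe=0.5, fd=10.0, kd=250.0, delta_x=0.01, fn=10.0)
```

The contact branch is just (12) with k_e replaced by the measured ratio f_n/δx, which is why no explicit stiffness estimate appears.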
The impedance control with force tracking (ICFT) block diagram is presented
in fig.2.
Figure 2. Block diagram of the impedance control with force tracking (ICFT): the
Reference trajectory algorithm computes x_v from f_d and f_e; the Impedance
controller and the Inverse dynamics controller drive the Robot, with the Force
sensor/environment and Forward kinematics closing the feedback loops.
In this scheme, the virtual position x_v given by (13) is computed in the Refer-
ence trajectory algorithm block, while the target acceleration vector
u = [u_f  u_p]^T, with u_p obtained from (6) and u_f from (7), is computed in the
Impedance controller block. Moreover, the unconstrained target acceleration vec-
tor u_p is further compensated by a proportional-derivative (PD) controller,
which is given by:
u_pc = u_p + K_P e + K_D ė    (14)
u_q = J^(−1) ( u_c − J̇(q) q̇ )    (15)
Then, applying an inverse dynamics controller in the inner control loop, the
joint torques are given by:
τ = M̂(q) u_q + ĝ(q) + J(q)^T f_e    (16)
where M̂(q), ĝ(q) are estimates of M(q), g(q) in the robot dynamic model (1).
Notice that Coriolis and friction effects are neglected. The impedance control-
ler with force tracking (ICFT) presented above is a good control approach for
rigid environments, since the end-effector velocity in the constrained direction
is close to zero, which leads to a virtual position with an acceptable precision.
However, for nonrigid environments the constrained velocity can hardly be
zero, which limits the accuracy of the control system in tracking the desired force
profile (Baptista et al., 2001a). To overcome the drawbacks of the scheme pre-
sented above, this paper proposes an alternative force control methodology
based on a model predictive algorithm (MPA) which is presented in the next
section.
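The inner-loop computations (15)–(16) can be sketched as below; `M_hat` and `g_hat` stand for the estimated dynamic terms, and the routine is a sketch rather than the authors' implementation:

```python
import numpy as np

def inner_loop_torque(M_hat, g_hat, J, J_dot, q_dot, uc, fe):
    """Inner-loop inverse dynamics of (15)-(16):
        uq  = J^{-1} (uc - J_dot q_dot)        (15)
        tau = M_hat uq + g_hat + J^T fe        (16)
    Coriolis and friction effects are neglected, as in the text."""
    uq = np.linalg.solve(J, uc - J_dot @ q_dot)   # joint-space acceleration
    return M_hat @ uq + g_hat + J.T @ fe

# Trivial sanity case: identity Jacobian and inertia, robot at rest,
# no contact force, so tau reduces to the Cartesian target acceleration.
tau = inner_loop_torque(M_hat=np.eye(2), g_hat=np.zeros(2),
                        J=np.eye(2), J_dot=np.zeros((2, 2)),
                        q_dot=np.zeros(2), uc=np.array([1.0, 0.5]),
                        fe=np.zeros(2))
```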
4. Model predictive algorithm with fuzzy scaling
In this paper, an MPA is used to predict the target position xv to the impedance
control law in (7), such that a desired force profile is obtained. In general, a
predictive algorithm minimizes a cost function over a specified prediction ho-
rizon Hp. In order to reduce model-plant mismatch and disturbances in an ef-
fective way, the predictive algorithm is combined with an internal model con-
trol (IMC) structure (Economou et al., 1986) which increases the force tracking
performance. A filter is included in the feedback loop of the predictive struc-
ture to reduce the noise present in the force sensor data. This filter stabilizes
the loop by decreasing the gain, increasing the robustness of the force control
loop. The sequence of future target positions x_v(k), …, x_v(k + H_p − 1) over a speci-
fied prediction horizon, produced by the MPA, results in a new target accel-
eration through the impedance control law (6), which determines the force to apply
on the surface. Predictive algorithms need a prediction model to compute the
optimal input. In this paper, the model must predict the contact force fm based
on the measured position x and velocity x . This model must consider the dy-
namics of the environment given by (3). In order to minimize the number of
calculations during the nonlinear optimization procedure, only the virtual tra-
jectory is computed in an optimal way, and thus ẋ_v is assumed to be zero.
Therefore, the nonlinear prediction model in the constrained direction is given
by:
m_d u_f + b_d ẋ − k_d (x_v − x) = −f_m    (17)
Note that a discrete version of this model is required, predicting the future val-
ues f_m(k+i) based on the measured position x(k) and the measured velocity
ẋ(k). The modeling error between the measured and the predicted contact
force is then given by:
e_m(k) = f_n(k) − f_m(k)    (18)
The desired force profile fd is compensated by the filtered modeling error emf,
as shown in fig.4, resulting in the modified force reference fdc defined as:
f_dc(k) = f_d(k) − e_mf(k)    (19)
The cost function considered for the force control scheme is then given by:
J(x_v) = Σ_{i=1}^{H_p} ( f_dc(k+i) − f_m(k+i) )²    (20)
The process inputs and outputs, as well as state variables, can be subjected to
constraints, which must be incorporated in the optimization problem.
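A minimal sketch of the cost evaluation (18)–(20), where `predict_fm` stands for any discrete prediction model of the contact force (the chapter uses a discretized version of (17)); all names are illustrative:

```python
def mpa_cost(xv_seq, fd_seq, emf, predict_fm):
    """Cost (20) over the prediction horizon Hp = len(xv_seq), with the
    desired force compensated by the filtered modeling error emf, per (19).

    xv_seq     : candidate virtual references xv(k), ..., xv(k+Hp-1)
    fd_seq     : desired force profile over the horizon
    emf        : filtered modeling error (IMC feedback)
    predict_fm : callable (xv, i) -> predicted contact force fm(k+i)
    """
    cost = 0.0
    for i, xv in enumerate(xv_seq):
        fdc = fd_seq[i] - emf          # (19): modified force reference
        fm = predict_fm(xv, i)         # model-predicted contact force
        cost += (fdc - fm) ** 2        # (20)
    return cost

# Illustrative check with a linearized surface of stiffness 1000 N/m at 0.5 m:
# a virtual reference 1 cm inside the surface yields the desired 10 N.
cost = mpa_cost([0.51, 0.51], [10.0, 10.0], 0.0,
                lambda xv, i: 1000.0 * (xv - 0.5))
```

In the actual scheme this cost is not minimized analytically: the candidate sequences come from the discrete alternative set described below.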
Figure 3. Basic principle of the predictive strategy applied to robot force control
(horizons H_p and H_c; desired force f_d, measured force f_m, predicted force f̂_m,
virtual reference x_v).
The performance of the MPA depends largely on the accuracy of the process
model. The model must be able to accurately predict the future process outputs,
and at the same time be computationally attractive to meet real-time demands.
When both nonlinear models and constraints are present, the optimization
problem is nonconvex. Efficient optimization methods for nonconvex
optimization problems can be used when the solution space is discretized, and
techniques such as Branch-and-Bound - B&B (Sousa et al., 1997) can be ap-
plied. The B&B method can be used in a recursive way, demanding less com-
putation effort than other methods, and is used in this paper to solve the non-
convex optimization problem. Figure 3 presents the basic principle of a
predictive strategy applied to robot force control.
J^(i) = Σ_{l=1}^{i} ( f_dc(k+l) − f_m(k+l) )²    (21)
where i = 1, …, H_p denotes the level corresponding to the time step k+i. A par-
ticular branch j at level i is created when the cumulative cost J^(i) plus a
lower bound on the cost from level i to the terminal level H_p for branch j,
denoted J_L^j, is lower than an upper bound of the total cost, denoted J_U:

J^(i) + J_L^j < J_U    (22)
Let the total number of branches verifying this rule at level i be given by N. In
order to increase the efficiency of the B&B method, it is required that this num-
ber should be as low as possible, i.e. N ≪ M.
The major advantages of the B&B algorithm applied to MPA over other non-
convex optimization methods are the following: the global discrete minimum
containing the optimal solution is always found, guaranteeing good perform-
ance; and the B&B method implicitly deals with constraints. In fact, the pres-
ence of constraints improves the efficiency of bounding, restricting the search
space by eliminating non-feasible sub-problems.
The most serious drawbacks of B&B are the exponential increase of the compu-
tational time with the prediction horizon and the number of alternatives, and
the discretization of the possible inputs, which are the position references xv in
this paper. A solution to these problems is proposed in the next section.
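The pruning rule (22) can be sketched with a depth-first search over sequences of discrete alternatives; in this sketch the lower bound J_L^j is taken as zero for simplicity, which keeps the rule valid because the stage costs in (21) are non-negative:

```python
def bnb_minimize(alternatives, Hp, stage_cost):
    """Branch-and-bound over sequences of discrete inputs.

    A branch at level i is expanded only if its cumulative cost J^(i),
    plus a (here zero) lower bound on the remaining cost, stays below
    the best total cost found so far, J_U -- rule (22).

    stage_cost : callable (partial_sequence, level) -> incremental cost
    """
    best_cost, best_seq = float("inf"), None
    stack = [((), 0.0)]                       # (partial sequence, J^(i))
    while stack:
        seq, J = stack.pop()
        if len(seq) == Hp:                    # terminal level reached
            if J < best_cost:
                best_cost, best_seq = J, seq  # tighten the upper bound J_U
            continue
        i = len(seq)
        for a in alternatives:
            Ji = J + stage_cost(seq + (a,), i)
            if Ji < best_cost:                # (22) with J_L^j = 0
                stack.append((seq + (a,), Ji))
    return best_seq, best_cost

# Toy example: drive the running sum of the inputs to 1 over two steps.
best_seq, best_cost = bnb_minimize((-1, 0, 1), 2,
                                   lambda seq, i: (sum(seq) - 1) ** 2)
```

A tighter (still optimistic) lower bound prunes more branches; the zero bound shown here is the weakest admissible choice.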
Fuzzy predictive filters, as proposed in (Sousa & Setnes, 1999), select discrete
control actions by using an adaptive set of control alternatives multiplied by a
gain factor. This approach diminishes the problems introduced by the discreti-
zation of control actions in MPA. The predictive rules consider an error in or-
der to infer a scaling factor, or gain, γ ( k ) ∈ [0,1] for the discrete incremental in-
puts. For the robotic application considered in this paper this error is given by
em, as defined in (18). The gain γ ( k ) goes to the zero value when the system
tends to a steady-state situation, i.e., the force error and the change in this error
are both small. On the other hand, the gain increases when the force error or
the change in this error is high. When the gain γ ( k ) is small, the possible inputs
are made close to each other, diminishing to a great extent, or even nullifying,
oscillations of the output. When the system needs to change rapidly the gain is
increased and the interval of variation of the inputs is much larger, allowing
for a fast response of the system. The fuzzy scaling machine thus reduces the
main problem introduced by the discretization of the inputs, i.e. a possible
limit cycle due to the discrete inputs, while also keeping the number of necessary
input alternatives low, which significantly increases the speed of the optimiza-
tion algorithm. The design of the fuzzy scaling machine consists of three parts:
the choice of the discrete inputs, the construction of the fuzzy rules for the gain
filter, and the application of the B&B optimization. The first two parts are ex-
plained in the following.
Let the virtual position xv ( k − 1) ∈ X , which was described in (17), represent
the input reference at time instant k − 1 , where X = [ X − , X + ] is the domain of
this reference position. Upper and lower bounds must be defined for the pos-
sible changes in this reference signal at time k, which are respectively x_k^+ and
x_k^−. These values are then defined as the maximum changes allowed for the virtual
reference when it is increased or decreased, respectively. The adaptive set of
incremental input alternatives can now be defined as
Ω_k = γ(k) · Ω*_k    (24)
The scaling factor γ ( k ) must be chosen based on the predicted error between
the reference and the system's output, which is defined as
e(k + H_p) = f_dc(k + H_p) − f_n(k + H_p)    (25)
The fuzzy rules to be constructed have as antecedents the predicted error and
the change in the error, and as consequent a value for the scaling factor. Simple
heuristic rules can be constructed noticing the following. The system is close to
a steady-state situation when the error and the change in the error are both
small. In this situation, the discrete virtual references must be scaled down, al-
lowing smaller changes in the reference xv , which yield smaller variations in
the impedance controller, and γ ( k ) should tend to zero. On the other hand,
when the predicted error or the change in error is high, larger discrete refer-
ences must be considered, and γ(k) should tend to its maximum value, i.e. 1.
The trapezoidal and triangular membership functions μe ( e( k + H p )) and
μΔe ( Δe( k )) define the two following fuzzy criteria: “small predicted error” and
“small change in error”, respectively. The two criteria are aggregated using a
fuzzy intersection: the minimum operator (Klir, 1995). In this way, the mem-
bership degree of these criteria is μ_γ = min( μ_e(e(k + H_p)), μ_Δe(Δe(k)) ), and
the scaling factor is its complement:

γ(k) = 1 − μ_γ    (28)
Summarizing, the set of inputs Ω*k at time instant k, which are virtual refer-
ences in this paper, is defined in (23). These inputs are within the available in-
put space at time k. Further, the inputs are scaled by the factor γ ( k ) ∈ [0,1] to
create a set of adaptive alternatives Ω k , which are passed on to the optimiza-
tion routine. At a certain time k, the value of γ ( k ) is determined by simple
fuzzy criteria, regarding the predicted error of the system. Note that the pro-
posed fuzzy scaling machine has only the following design parameters: λl ,
and the membership functions μe and μΔe . Moreover, the tuning of these pa-
rameters is not a hard task, allowing the use of some heuristics to derive them.
Possible constraints on the input signal, which is the virtual trajectory in this
paper, are implemented by properly selecting the parameters λ_l.
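The gain inference of (28) can be sketched as follows; the membership breakpoints below are hypothetical, since the chapter does not give numeric values:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function (triangular when b == c)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def scaling_factor(e_pred, de, mf_e, mf_de):
    """Fuzzy scaling factor (28): gamma = 1 - min(mu_e, mu_de), where
    mu_e and mu_de grade 'small predicted error' and 'small change in error'."""
    mu = min(mf_e(e_pred), mf_de(de))      # fuzzy intersection (min operator)
    return 1.0 - mu

# Hypothetical membership shapes over the absolute error values.
mf_e  = lambda e:  trapezoid(abs(e),  -1.0, 0.0, 0.1, 0.5)
mf_de = lambda de: trapezoid(abs(de), -1.0, 0.0, 0.05, 0.2)

g_steady = scaling_factor(0.0, 0.0, mf_e, mf_de)       # near steady state
g_transient = scaling_factor(2.0, 1.0, mf_e, mf_de)    # large predicted error
```

The resulting γ(k) then scales the discrete alternative set as in (24), Ω_k = γ(k)·Ω*_k, shrinking the candidate references near steady state and widening them during transients.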
Figure 4. Block diagram of the proposed predictive force control algorithm with
fuzzy scaling machine (blocks: Fuzzy scaling machine, Internal controller and
robot, Environment, Model and Filter).
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a) with kind permission of Springer Science and Business
Media).
Figure 4 depicts the proposed predictive force control algorithm with fuzzy
scaling. The block Fuzzy scaling machine contains the model predictive algo-
rithm, the B&B optimization and the fuzzy scaling strategy. The block Internal
controller and robot implements the impedance and the inverse dynamics control
algorithms. The robot dynamic model equations are also computed in this
block. The block Environment contains the nonlinear model of the environ-
ment. In order to cope with disturbances and model-plant mismatches, an in-
ternal model controller is included in the control scheme. The block Filter be-
longs to the IMC structure (Baptista et al., 2001a).
5. Simulation results
The force control scheme introduced in this paper is applied to a robot through
computer simulation for an end-effector force/position task in the presence of
robot model uncertainties and inaccuracy in the environment location and the
correspondent stiffness characteristics. The robot model represents the links 2
and 3 of the PUMA 560 robot. In all the simulations, a constant time step of 1
ms is used. The overall force control scheme including the dynamic model of
the PUMA robot is simulated in the Matlab/Simulink environment. A nonrigid
friction contact surface is placed in the vertical plane of the robot workspace,
where it is assumed that the end-effector always maintains contact with
the surface during the complete task execution.
In order to analyze the force control scheme robustness to environment model-
ing uncertainties, a non rigid time-varying stiffness profile ke(t) is considered,
given by:
The damping coefficient and the coefficient of dry friction are set to ρ_e = 45
Ns/m² and μ = 0.2, respectively. Notice that the stiffness coefficient is consid-
ered to be constant (k_e = 1000 N/m) in the environment model used to predict
the contact force f_m. The matrices in the impedance model (6) are defined as Md
= diag[2.5 2.5] and Kd = diag[250 2500] to obtain an accurate force tracking in
the x-axis direction and an accurate position tracking performance in the y-axis
direction.
The matrix Bd is computed to obtain a critically damped system behavior. The
control scheme was tested considering a smooth step force profile of 10 N and
a desired position trajectory from p1 = [0.5 -0.2] m to p2 = [0.5 0.6] m.
Uncertainties in the location of the contact surface given by the final real posi-
tion of p2r=[0.512 0.6] m are considered in the simulations, as shown in fig.5.
The parameters of the predictive controller are Hp = Hc = 2 and the fuzzy scal-
ing machine is applied only during the constant path of the reference force tra-
jectory. This means that during the reference force transition periods, the fuzzy
scaling inference is switched off. The discrete alternatives Δxv for the fuzzy
scaling machine are given by:
In the inner loop controller (16), only the elements of the inertia matrix and the
gravitational terms with parameters 20% smaller than their exact values are
considered. The Coriolis and friction terms were neglected in the implementa-
tion of the algorithm but considered in the simulation of the robot dynamic
model. The proportional and derivative gains in (14) are set to KP =
diag[5000 5000] and KD = diag[500 500].
Simulations using the impedance controller with force tracking (ICFT) and the
control algorithm proposed in this paper are compared. The conventional im-
pedance controller uses the reference trajectory algorithm presented in (13)
considering the environment modeled as a linear spring with ke=1000 N/m.
The simulation results obtained with the ICFT are presented in fig.6, which
exhibit poor force tracking performance with relatively large force tracking er-
rors. However, the ICFT follows the desired position trajectory with high accu-
racy; in fact, it is not possible to distinguish the reference from the actual y-axis
position in fig.6.
The force control scheme uses the model predictive algorithm to compute the
virtual trajectory xv , the fuzzy scaling machine and the nonlinear environment
model, which furnishes the normal force described by (3). The force and position
results from the application of this controller are presented in fig.7. Comparing
this figure with fig.6, it becomes clear that the proposed force controller pre-
sents a remarkable performance improvement in terms of force tracking capa-
bility. In fact, it is not possible to distinguish the reference force from the actual
contact force. In terms of position control, similar performance is achieved.
The results for both controllers can be compared in Table 1, where the Euclidian
norm ‖·‖ of the position and force errors, as well as the absolute maximum values
of these errors, are presented. The table shows that the force control perform-
ance is clearly superior for the MPA with fuzzy scaling machine.
Force control algorithms                  ‖e_p‖ [m]   Max(e_p) [mm]   ‖e_f‖ [N]   Max(e_f) [N]
Impedance control with force tracking     0.041       0.836           60.426      4.420
MPA with fuzzy scaling machine            0.041       0.801           0.8021      0.064

Table 1. Euclidian norm of position and force errors, and absolute maximum errors.
Figure 6. Impedance control with force tracking: desired force (dashdot), normal force
(solid) and friction force (dashed) – top view; desired y-axis trajectory (dashdot) and
actual position trajectory (solid) – bottom view.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a) with kind permission of Springer Science and Business
Media).
Figure 7. MPA with fuzzy scaling: desired force (dashdot), normal force (solid) and
friction force (dashed) – top view; desired y-axis trajectory (dashdot) and actual posi-
tion trajectory (solid) – bottom view.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a) with kind permission of Springer Science and Business
Media).
Figure 8. MPA with fuzzy scaling: contact force errors (top view) and fuzzy scaling
factor γ(k) (bottom view).
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a) with kind permission of Springer Science and Business
Media).
Figure 9. MPA with fuzzy scaling: joint torque τ_1 without (top view) and with fuzzy
scaling machine (bottom view).
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a) with kind permission of Springer Science and Business
Media).
In order to study the proposed force control approach in more detail, fig. 8 presents
the contact force error and the fuzzy scaling factor γ(k) for the same trajectory.
The factor γ(k) exhibits a fast convergence to values around zero during the
constant reference force path, which reduces the chattering present in the tar-
get trajectory and in the joint torques.
The joint torque τ_1 using the predictive approach with and without the fuzzy
scaling machine is shown in fig. 9. The strategy without fuzzy scaling produces
undesirable oscillations in the virtual trajectory x_v, which have the same effect
on the joint torques τ. This effect is significantly reduced by the fuzzy scaling
machine, as shown in fig. 9 for the joint torque τ_1.
6. Experimental setup
Figure 10. Top view schematic representation of the robotic setup: the 2-DOF planar
arm (links l_1, l_2, masses m_1, m_2, inertias I_1, I_2, joint angles q_1, q_2) contacts a
spring-guided rigid plate instrumented with a force sensor at distance y_e. (Note: The
arrows represent the f_x, f_z components and indicate the negative directions of the
contact forces measured by the force sensor.)
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
The planar robot has two revolute joints driven by HARMONIC DRIVE actua-
tors (HDSH-14). Each actuator has an electric d.c. motor, a harmonic drive, an
encoder and a tachogenerator. The robot link parameters are given in Table 2,
where l_ci is the distance to the mass center of link i, I_zzi is the moment of
inertia about the z-axis and I_mi is the actuator's moment of inertia referred to the
output shaft. The contact surface used for force control experiments is based
on a steel plate with two springs in parallel guided by shafts with ball bearings
(Baptista, 2000a). The top view of the planar robot and the nonrigid mechani-
cal contact surface are shown in fig.10 and a picture of the robot and mechani-
cal environment is depicted in fig.11.
Figure 11. Top view of the 2-DOF robot and the nonrigid mechanical environment.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
[Figure: hardware architecture of the experimental setup – a Pentium 200 MHz
computer with a ServoToGo axis board and a JR3 DSP sensor board on the ISA bus,
driving the joint amplifiers (channels 0 and 1) and reading the JR3 force sensor.]
This ISR is redefined to execute the safety routine at twice the sampling
frequency. In order to avoid the accumulation of interrupt requests, the ISR 13
increments a flag at the beginning of the execution of the selected control
routine. This flag is decremented at the end of the procedure and is checked by
the safety routine, which aborts the real-time procedure if the flag value is
greater than one. In this situation, the failure is reported to the operator by
the interface program.
classical model-based impedance controller, derived from (2) and (5), repre-
sented by the following equation:

τ = JT ( Mx (u + Md−1 ( Bd ė + Kd e − fe )) + Cx(x, ẋ) ẋ + gx(x) + fe )      (31)
tor in the control law (32) allows a force tracking capability without explicit
knowledge of the environment stiffness coefficient. In order to improve the
force tracking accuracy, an integral control action can be included in (32),
which leads to:

τ = J(q)T ( Bd ė + Kd e + Ki ∫ ef dt ) + g(q)      (33)

where e = xt − x and ė = ẋt − ẋ are redefined as the virtual position and velocity
errors and ef = fd − fe is the force error. Taking the previous assumptions into
consideration, the impedance control algorithm with force tracking imple-
mented in real-time is given by (33), where the gravitational terms g(q) are null
since the robot is of planar type. The main steps of the controller implementa-
tion for each sampling period Ts are (Baptista et al., 2001b):
xv(k) = xv(k−1) − δfd(k)/kd                       if fn(k) ≥ 0
        xv(k−1) − (δfd(k) + κf ef(k))/k̂e          if fn(k) < 0      (34)

where

δfd(k) = fd(k) − fd(k−1)      (35)

ef(k) = fd(k) − fn(k)      (36)

xv(k) = xd(k)   in the x-axis direction
        xv(k)   in the y-axis direction      (37)

f(k) = −Bd ẋ(k) + Kd e(k) + Ki (Ts Ip(k))      (38)

where

Ip(k) = (e(k) + e(k−1))/2      (39)

τ(k) = J(q(k))T f(k)      (40)
Notice that in (34) the minus signs follow the sign convention used by the
force/torque sensor. In this algorithm, an estimate k̂e of the environment
stiffness is used to compute xv(k). The term κf ef(k) compensates for geometric
and stiffness uncertainties and other non-modeled terms of the environment.
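Steps (34)-(40) can be collected into a single per-sample routine. The sketch below is illustrative only: the state dictionary, the accumulation of the integral term Ip and the use of Kd as the denominator of the first branch of (34) are assumptions of this sketch, with the gain values taken from Table 3.

```python
import numpy as np

def impedance_step(state, fd_k, fd_prev, fn_k, xd_k, q_k, jacobian,
                   ke_hat=1636.0, kf=0.025, Ts=1e-3,
                   Bd=np.array([6.0, 2.0]), Kd=np.array([3000.0, 2000.0]),
                   Ki=np.array([4000.0, 3000.0])):
    """One sampling period of the impedance controller with force
    tracking, following Eqs. (34)-(40). Gains are the x/y values of
    Table 3; the layout of `state` (previous xv, error, integral,
    measured Cartesian position x and velocity xdot) is an assumption."""
    dfd = fd_k - fd_prev                 # (35) reference force increment
    ef = fd_k - fn_k                     # (36) force error
    # (34) virtual trajectory update in the constrained (y) direction
    if fn_k >= 0.0:                      # contact lost (sensor convention)
        xv = state['xv'] - dfd / Kd[1]
    else:                                # in contact
        xv = state['xv'] - (dfd + kf * ef) / ke_hat
    x_ref = np.array([xd_k, xv])         # (37) xd along x, xv along y
    e = x_ref - state['x']               # virtual position error
    Ip = state['Ip'] + (e + state['e_prev']) / 2.0   # (39), accumulated
    f = -Bd * state['xdot'] + Kd * e + Ki * (Ts * Ip)  # (38)
    tau = jacobian(q_k).T @ f            # (40) torques via transposed Jacobian
    state.update(xv=xv, e_prev=e, Ip=Ip)
    return tau
```

The measured quantities x and ẋ in `state` would be refreshed each period from the encoder data before calling the routine.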
For purposes of performance analysis, the classical hybrid position/force con-
trol algorithm was also implemented in real-time. This algorithm was proposed
by Raibert & Craig (1981) and uses a selection matrix S = diag(sj), j = 1, ..., m,
to separate the position and force subspaces by assigning "1" to the force
subspace and "0" to the position subspace. The matrix S represents the force
controlled direction and the matrix I − S the remaining position controlled
directions, where I is the identity matrix. In this algorithm, the control forces
are given by:
f(k) = S ff(k) + [I − S] fp(k)      (41)

The real-time implementation of the control laws for unconstrained space and
constrained space are respectively given by:

f(k) = fd(k) + KPf ef(k) + Ts KIf If(k) − KDf ẋ      force subspace
       KP e(k) + Ts KI Ip(k) − KD ẋ                  position subspace      (42)
where ef is defined in (36) and Ip follows the definition in (39) for the uncon-
strained direction. The coefficients KP, KI and KD are the parameters of the PID
position controller and KPf, KIf and KDf are the parameters of the PID force con-
troller. Notice that velocity damping is included in the control law for the con-
strained direction (Mandal & Payandeh, 1995).
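The hybrid law (41)-(42) can be sketched as follows. The selection matrix S picks the y-axis as the force-controlled direction; treating all quantities as 2-vectors and accumulating the integral terms trapezoidally are illustrative choices, with the gains mixing Table 4 values only for illustration.

```python
import numpy as np

def hybrid_control(state, fd, ef, e, xdot, Ts=1e-3,
                   S=np.diag([0.0, 1.0]),
                   Kp=3000.0, Ki=4000.0, Kd=6.0,
                   Kpf=570.0, Kif=2500.0, Kdf=2.0):
    """Cartesian control force of the hybrid position/force scheme,
    Eqs. (41)-(42). S selects the force-controlled (y) direction and
    I - S the position-controlled (x) direction."""
    # integral terms, accumulated with a (39)-style trapezoidal rule
    state['If'] = state['If'] + (ef + state['ef_prev']) / 2.0
    state['Ip'] = state['Ip'] + (e + state['e_prev']) / 2.0
    # (42): force PID with feedforward fd, and position PID, both with
    # velocity damping on the measured Cartesian velocity
    f_force = fd + Kpf * ef + Ts * Kif * state['If'] - Kdf * xdot
    f_pos = Kp * e + Ts * Ki * state['Ip'] - Kd * xdot
    # (41): combine the two subspaces through the selection matrix
    f = S @ f_force + (np.eye(2) - S) @ f_pos
    state.update(ef_prev=ef, e_prev=e)
    return f
```

With S = diag(0, 1), the x component of the output comes only from the position law and the y component only from the force law.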
Unfortunately, the control scheme presented in fig. 4 cannot be implemented
directly in real-time due to the high sampling rate used in the force control
loop (Fs = 1 kHz). One possible solution would be to use more powerful
hardware and software tools to implement the control scheme in a straight-
forward way. Due to computer hardware limitations, an alternative strategy
was used to implement the proposed force control algorithm in real-time. The
optimized virtual trajectory xv for the impedance controller is computed off-
line in Matlab/Simulink using a realistic simulator of the experimental robotic
setup. The optimized virtual trajectory calculated off-line, with an additional
term for compensation of uncertainties and modeling errors, is then furnished
as a reference signal (xvopt) to the impedance controller described in Section 3.
The virtual reference used in the simplified version of the MPA with fuzzy
scaling implemented in real-time is then given by:
xv(k) = xvopt(k) + κf ef(k)/k̂e      (43)
7. Experimental results
In this section, experimental results obtained with the predictive force algo-
rithm with fuzzy scaling proposed in this paper are presented. These results
are compared with the classical force control algorithms, namely the classical
impedance controller and the hybrid position/force controller. Note that the
predictive control algorithm uses a nonlinear model of the nonrigid environ-
ment where the stiffness coefficient varies with the penetration depth δx.
It is necessary to obtain an estimate of the environment stiffness in the con-
strained direction k̂e , which is used in (34) and (43). This estimate is obtained
experimentally by measuring the linear surface displacement when a force
profile is applied by the end-effector. The collected pairs of f n − δ x data are
then used to estimate the global stiffness coefficient k̂e =1636 N/m, using the
least-squares method. The results presented in fig. 13 exhibit a large
deformation of the surface when a 10 N smooth step force profile is applied on
the surface. From the plot it is possible to observe that the environment has a
nonlinear behavior of hysteresis type (Baptista et al., 2001b).
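For the scalar model fn = ke δx, the least-squares estimate of the global stiffness coefficient described above has a closed form; a minimal sketch:

```python
import numpy as np

def estimate_stiffness(fn, dx):
    """Least-squares fit of a global linear stiffness k_e to collected
    (f_n, delta_x) pairs, as used to obtain k_e = 1636 N/m: solves
    min_k sum_i (f_n_i - k * dx_i)^2 in closed form."""
    fn = np.asarray(fn, dtype=float)
    dx = np.asarray(dx, dtype=float)
    return float(dx @ fn / (dx @ dx))
```

With real hysteretic data such as that of fig. 13, the fit averages the loading and unloading branches into a single global coefficient.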
The control algorithms are implemented in the Cartesian space with a sampling
period of Ts = 1 ms. The implemented force/position task is a linear trajectory
of 100 mm length between xi = 229.5 mm and xf = 329.5 mm along the x-axis,
with simultaneous application of a smooth 10 N step force profile along the y-
axis, during 8 seconds. It is assumed that the arm is initially in contact with the
environment. Moreover, a geometric uncertainty of 5 mm in the contact surface
at the end of the trajectory is intentionally imposed on the environment. The fil-
tered force data components (fx, fz) are obtained from the JR3 DSP board con-
sidering a first-order low-pass digital filter with 500 Hz cut-off frequency in
order to attenuate the force sensor noise. The predictive force control algo-
rithm, which derives the virtual position for the impedance controller, uses the
parameters Hc = Hp = 2. As referred to in Section 4, the fuzzy scaling machine is
applied only during the constant path of the reference force profile.
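The first-order low-pass filtering of the force components can be sketched as below. The sample rate fs of the raw sensor data is an assumption (the filtering runs on the JR3 DSP board); the cut-off frequency is the 500 Hz quoted above.

```python
import math

def first_order_lowpass(samples, fc=500.0, fs=8000.0):
    """First-order low-pass digital filter used to attenuate force
    sensor noise. fc is the cut-off frequency; fs (an assumption of
    this sketch) is the raw sample rate. Backward-Euler discretisation
    of the continuous filter  tau*y' + y = u."""
    tau = 1.0 / (2.0 * math.pi * fc)
    ts = 1.0 / fs
    alpha = ts / (tau + ts)          # smoothing coefficient in (0, 1)
    y, out = 0.0, []
    for u in samples:
        y += alpha * (u - y)         # y[k] = y[k-1] + alpha*(u[k] - y[k-1])
        out.append(y)
    return out
```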
Predictive force control of robot manipulators in nonrigid environments 867
Figure 13. Estimated environment stiffness and contact forces. (Note: dashdot - esti-
mated contact force f̂n = k̂e δx; solid - actual force fn).
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
The maximum allowed increments Δxv at each time step are the following (in
mm):
The impedance control algorithm uses the parameters presented in Table 3 for
the real-time control experiments. The force compensation gain is set to
κf = 0.025, selected from experimental trials.
Axis Kd Ki Bd
x-axis 3000 4000 6
y-axis 2000 3000 2
Table 3. Impedance control parameters.
Figure 14 presents the end-effector’s position in the x-y axis coordinates using
the real-time implementation of the predictive force control algorithm with
fuzzy scaling. For the same controller, the optimized virtual position given by
(43) and the actual contact force are presented in fig.15. Figure 16 presents the
obtained position and force errors. The experimental results obtained with the
classical impedance controller are presented in fig. 17, where the position and
force errors are depicted. The parameters for the hybrid position/force control-
ler used in the real-time tests are presented in Table 4 and the position and
force errors are depicted in fig.18.
Axis KP KI KD
x-axis 3000 4000 6
y-axis 570 2500 2
Table 4. Parameters of the hybrid position/force controller.
The results presented for the predictive force control algorithm reveal accurate
force tracking when compared to the equivalent results obtained with the
impedance and hybrid control algorithms. The proposed controller exhibits
significantly better results, especially during the time-varying force reference
trajectory. The impedance controller with force tracking reveals some difficulty
in following the force reference path when it is not constant. The force control
performance of the hybrid controller drops significantly during the final part
of the trajectory due to the geometric uncertainty, which is not efficiently
compensated by the algorithm.
Figure 14. Predictive force control algorithm with fuzzy scaling: X-axis position coor-
dinate, Y-axis constrained position coordinate.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
Figure 15. Predictive force control algorithm with fuzzy scaling: Y-axis virtual trajec-
tory, desired and actual contact force (Note: solid-actual; dashdot-desired).
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
Figure 16. Predictive force control algorithm with fuzzy scaling: x-axis position errors
and y-axis contact force errors.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
Figure 17. Impedance control with force tracking: x-axis position errors and y-axis
contact force errors.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
Force controllers                              ||ep|| [mm]  Max(ep) [mm]  ||ef|| [N]  Max(ef) [N]
Hybrid position/force control                      8.1          0.43        63.89        2.76
Impedance control with force tracking              8.5          0.49        50.63        2.33
Predictive force control with fuzzy scaling        8.1          0.42        37.40        1.73
Table 5. Euclidian norm of position and force errors and absolute maximum errors.
From the results in Table 5, it is possible to conclude that the predictive force
control algorithm proposed in this paper reduces the norm of the force error by
26% when compared to the impedance controller with force tracking, which is
a quite significant value. The reduction in the norm of the force error is even
more significant when compared to the hybrid position/force controller results
(41%). The experimental results reveal the successful implementation of the
proposed predictive force control strategy in real-time, in the presence of non-
rigid environments with uncertainties.
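The 26% and 41% reductions quoted above can be checked directly from the error norms of Table 5:

```python
# Euclidian norms of the force error from Table 5 [N]
norm_hybrid = 63.89      # hybrid position/force control
norm_impedance = 50.63   # impedance control with force tracking
norm_predictive = 37.40  # predictive force control with fuzzy scaling

def reduction_percent(reference, new):
    """Relative reduction of the error norm, in percent."""
    return 100.0 * (reference - new) / reference

r_vs_impedance = reduction_percent(norm_impedance, norm_predictive)
r_vs_hybrid = reduction_percent(norm_hybrid, norm_predictive)
```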
Figure 18. Hybrid position/force control: x-axis position errors and y-axis contact
force errors.
(Reprinted from Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b) with permission of Elsevier).
8. Conclusion
ACKNOWLEDGMENTS
Part of the material in this chapter appeared in various journal and conference
publications. These publications are referred to in the relevant paragraphs, and
they are listed in the bibliography. The material, including figures and tables,
is reproduced with the kind permission of the respective copyright holders.
The list of the copyright holders is as follows: Springer Science and Business
Media: Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a). Elsevier Science: Bap-
tista, L.; Sousa, J. & Sá da Costa, J. (2001b).
9. References
Baptista, L. (2000a). Adaptive force and position control of robot manipulators, PhD
        Thesis, Technical University of Lisbon, Mechanical Engineering De-
        partment, Lisbon (in Portuguese)
Baptista, L.; Ayala Botto, M. & Sá da Costa, J. (2000b). A predictive force control
        approach of robot manipulators in nonrigid environments. International
        Journal of Robotics and Automation, 15(2), pp. 94-102, ISSN 0826-8185
Baptista, L.; Sousa, J. & Sá da Costa, J. (2001a). Force control of robot manipula-
        tors using a fuzzy predictive approach. Journal of Intelligent and Robotic
        Systems, 30(4), April 2001, pp. 359-376, ISSN 0921-0296
Baptista, L.; Sousa, J. & Sá da Costa, J. (2001b). Force predictive algorithms ap-
        plied to real-time force control. Control Engineering Practice, 9(4), pp. 411-
        423, ISSN 0967-0661
Corbet, T. ; Sepehri, N. & Lawrence, P. (1996). Fuzzy control of a class of hy-
draulically actuated industrial robots. IEEE Transactions Control Systems
Technology, 4(4), pp.419-426
De Witt, C. ; Siciliano, B. & Bastin, G. (1997). Theory of robot control. Springer-
Verlag, ISBN 3-540-76054-7, London, UK
De Schutter, J.; Bruyninckx, H. & Spong, M. (1997). Force control: a bird's eye
        view. Proceedings of IEEE CSS/RAS International Workshop on Control
        Problems in Robotics and Automation: Future Directions, San Diego,
        USA, December 1997
Economou, C. ; Morari, M. & Palsson, B. (1986). Internal model control. 5. Exten-
sion to nonlinear systems. Industrial Engineering Chemical Process De-
sign and Development, 25, American Chemical Society, pp.404-411
Hogan, N. (1985). Impedance control: An approach to manipulation: Part I-II-
III. Journal of Dynamic Systems, Measurement and Control, 107(1), pp.1-24
Jaritz, A. & Spong, M. (1996). An experimental comparison of robust control
algorithms on a direct drive manipulator. IEEE Transactions on Control
Systems Technology, 4, pp.614-626
1. Introduction
Nowadays robots in industrial settings are mainly used for repetitive tasks
where they act as programmable devices reproducing previously recorded
motions in a highly structured environment so that decision and initiative ca-
pability is rarely exploited. Contour tracking is, on the contrary, an example of
a complex task that requires the manipulator to continuously and autono-
mously modify its path, coping with the uncertainties typical of unstructured
environments (Siciliano & Villani, 1999). In many applications a robot is re-
quired to follow a contour while applying a normal force; these tasks include
grinding (Thomessen & Lien, 2000), deburring (Ferretti et al., 2000; Ziliani et
al., 2005), shape recovery (Ahmad & Lee, 1990), polishing and kinematic cali-
bration (Legnani et al., 2001). The problem of tracking (known and) unknown
contours has been studied by many researchers in the last two decades.
Hybrid force/velocity control (Raibert & Craig, 1981) appears to be suitable to
be adopted in this context, because it explicitly controls the end-effector force
in a selected direction and the end-effector velocity in the other complemen-
tary directions. Actually, two kinds of hybrid force/velocity control can be
implemented (Roy & Whitcomb, 2002): 1) explicit hybrid force/velocity control,
where the robot end-effector is controlled by directly imposing the joint
torques based on the measured force and position/velocity errors, and 2) im-
plicit hybrid force/velocity control, where the end-effector is controlled indirectly
by suitably modifying the reference trajectories of the joint position/velocity
inner control loops based on the measured force errors. A theoretical compari-
son (with experimental results) between these two approaches has been devel-
oped in (Volpe & Khosla, 1993), although the contour tracking task has not
been considered.
Indeed, it is a matter of fact that these methodologies are not widely employed
in industrial settings. This might be due to the lack of a characterisation of
these techniques from an industrial point of view, where the cost/benefit ratio
always has to be taken into account. In order to (partially) address this fact,
the implementation of an implicit and an explicit hybrid
2. Experimental Setup
The experimental set-up adopted for the experiments described in the follow-
ing sections is available in the Applied Mechanics Laboratory of the University
of Brescia and it consists of an industrial robot manipulator manufactured by
ICOMATIC (Gussago, Italy) with a standard SCARA architecture where the
vertical z axis has been blocked since a planar task is addressed. A detailed
dynamic model is described in (Visioli & Legnani, 2002).
Both links have the same length of 0.33 m. The two joints are actuated by
means of two DC motors that are driven by conventional PWM amplifiers and
position measurements are available by means of two incremental encoders
with 2000 pulses/rev. resolution. Harmonic Drive speed reducers are present
and the reduction rate is 1/100 for both joints. Velocity is estimated through
numerical differentiation, whose output is then processed by a second-order
low-pass Butterworth filter with a 100 Hz cut-off frequency and a 1.0 damping
ratio.
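The velocity estimation chain just described can be sketched as follows; the 1 ms sample time and the Tustin discretisation of the second-order low-pass filter are assumptions of this sketch, not stated in the text.

```python
import math

class VelocityEstimator:
    """Velocity estimate as described above: backward difference of the
    encoder position, then a second-order low-pass filter (100 Hz
    cut-off, 1.0 damping ratio) discretised with the Tustin transform."""
    def __init__(self, fc=100.0, zeta=1.0, ts=0.001):
        wn = 2.0 * math.pi * fc
        K = 2.0 / ts                       # Tustin: s -> K (z-1)/(z+1)
        # denominator of wn^2 / (s^2 + 2 zeta wn s + wn^2) after Tustin
        self.a0 = K * K + 2.0 * zeta * wn * K + wn * wn
        self.a1 = 2.0 * (wn * wn - K * K)
        self.a2 = K * K - 2.0 * zeta * wn * K + wn * wn
        self.wn2 = wn * wn                 # numerator gains: wn^2 (1 + 2z + z^2)
        self.ts = ts
        self.q_prev = None
        self.u = [0.0, 0.0]                # raw derivative history u[k-1], u[k-2]
        self.y = [0.0, 0.0]                # filtered output history y[k-1], y[k-2]

    def update(self, q):
        if self.q_prev is None:
            self.q_prev = q                # first sample: no velocity yet
        u = (q - self.q_prev) / self.ts    # backward-difference derivative
        self.q_prev = q
        y = (self.wn2 * (u + 2.0 * self.u[0] + self.u[1])
             - self.a1 * self.y[0] - self.a2 * self.y[1]) / self.a0
        self.u = [u, self.u[0]]
        self.y = [y, self.y[0]]
        return y
```

Feeding the estimator a position ramp makes the output settle on the slope, since the filter has unit DC gain.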
Figure 1. The SCARA robot during the contour tracking of a complex shape
Friction compensation in hybrid force/velocity control for contour tracking tasks 877
A sketch of the SCARA robot is shown in Figure 2. Frame (0) refers to the robot
base, while task frame (T) has its origin on the robot end-effector with its n and
t axes that are directed respectively along the normal and tangential direction
of the contour of the piece, whose geometry is assumed to be unknown; ϑ de-
notes the angle between n axis and x axis of frame (0).
Let Q = [q1, q2]T be the vector of the joint positions and Q̇ its first time deriva-
tive. Since a suitable belt transmission keeps the end-effector with constant
orientation with respect to the absolute frame, force measurements are directly
available in frame (0). Let F(0) = [Fx, Fy]T and F(T) = [Ft, Fn]T be the vectors of the
contact force in frame (0) and (T) respectively. They are related to each other
by the equation F(0) = M0T(ϑ) F(T), denoting with Mij the rotation matrix from
frame j to frame i. Note that
M0T(ϑ) = [  sin ϑ   cos ϑ ]
         [ −cos ϑ   sin ϑ ]      (1)
Vector V(T ) = [Vt , Vn ]T representing the Cartesian velocity in frame (T) can be
obtained from the relation
where J (Q) is the robot Jacobian. The aim of the contour tracking task is to
control the normal force and the tangential velocity of the robot probe along n
and t directions of task frame (T) respectively.
These directions can be easily estimated, assuming that the contact friction
force in the tangential direction is negligible with respect to the normal contact
force (this is achieved by adopting a suitable probe endowed with a ball
bearing, as described in Section 2), by estimating on-line the angle ϑ as:
ϑ = atan2(Fy, Fx) = arctan(Fy / Fx) ± π.      (3)
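Equations (1) and (3) translate directly into code; atan2 is the standard way to recover the quadrant information (the ± π term) lost by the plain arctangent:

```python
import math

def contour_angle(Fx, Fy):
    """Eq. (3): on-line estimate of the angle between the contour
    normal n and the x axis of frame (0), from the measured force
    components. atan2 resolves the quadrant ambiguity of arctan(Fy/Fx)."""
    return math.atan2(Fy, Fx)

def M_0T(theta):
    """Eq. (1): rotation matrix mapping task-frame quantities to frame (0)."""
    return [[math.sin(theta), math.cos(theta)],
            [-math.cos(theta), math.sin(theta)]]
```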
Figure 2. Sketch of the SCARA robot, with the base frame (0), the task frame (T)
with its n and t axes, the joint variables q1 and q2, and the angle ϑ.
where B(Q) is the inertia matrix, C(Q, Q̇) is the matrix of centrifugal and Cori-
olis terms, f = [f1(q̇1), f2(q̇2)]T is the vector of joint friction torques, G(Q) is the
gravity term (null for the adopted SCARA robot because it works in a
horizontal plane), τ = [τ1, τ2]T is the joint torque vector, and F(0) was defined
earlier as the vector of forces exerted from the robot on the environment. A
thorough theoretical and experimental investigation of the robot identified its
dynamics (Visioli & Legnani, 2002; Indri et al. 2002). It was found that, because
of the low velocities and accelerations involved in conventional contour track-
ing tasks, the effects of the inertial and Coriolis forces can be neglected with
respect to contact forces and friction terms. As a consequence of this, equation
τ = JT(Q) M0T (U(T) + KR R) + f̂      (6)

where R = [vt, fn]T is the vector of the tangential velocity and normal force ref-
erence values, KR = diag[kV,ff, kF,ff] is the diagonal matrix of feedforward
gains, f̂ = [f̂1(q̇1), f̂2(q̇2)]T is an available estimate of the joint friction torques
(see Section 4) and
where uPID,V is the tangential velocity PID output, uPI,F is the normal force
PI output, vn(t) is the velocity of the end-effector in the normal direction
(whose reference is null) and kv,fb is a proportional gain.
Note that the use of a normal force derivative term has been avoided in (6) (in-
deed, only the proportional and integral actions have been employed), as the
differentiation of such a signal is ill-conditioned (Craig, 1989). Conversely, the
adoption of a normal force velocity feedback loop has proven effective to
compensate for the large force oscillations due to the effects of link masses
(and joint elasticities) in a large portion of the workspace (Jatta et al., 2006).
Further, a gain scheduling approach has been adopted in order to take into
account the configuration-dependent dynamics of the manipulator during
constrained motion (Ziliani et al., 2006).
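A sketch of the explicit control law (6) follows. The composition of U(T) from the PID/PI outputs and the normal-velocity damping term, as well as the default feedforward gains, are a plausible reading of the text rather than the authors' exact formula.

```python
import numpy as np

def explicit_hybrid_torque(Q, theta, u_pid_v, u_pi_f, vn, vt_ref, fn_ref,
                           jacobian, friction_est,
                           kv_ff=0.0, kf_ff=1.0, kv_fb=1.0):
    """Explicit hybrid force/velocity control law, Eq. (6): joint torques
    from the task-frame control vector, feedforward of the reference
    values and friction compensation."""
    # task-frame control action: velocity PID along t, force PI along n,
    # with damping of the normal velocity (whose reference is null)
    U_T = np.array([u_pid_v, u_pi_f - kv_fb * vn])
    # feedforward of the reference tangential velocity and normal force
    K_R = np.diag([kv_ff, kf_ff])
    R = np.array([vt_ref, fn_ref])
    # rotation from task frame (T) to base frame (0), Eq. (1)
    M = np.array([[np.sin(theta), np.cos(theta)],
                  [-np.cos(theta), np.sin(theta)]])
    # Eq. (6): torque through the transposed Jacobian plus friction estimate
    return jacobian(Q).T @ (M @ (U_T + K_R @ R)) + friction_est(Q)
```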
Figure 4. The inner position control loop in the implicit hybrid force/velocity control
scheme
4.1 Generalities
° [p− − −
i 0 pi1 " pih ] if qi < 0
Pi := [ pi 0 pi1 " pih ] = ®
°̄ [p+ + +
i 0 pi1 " pih ] if qi > 0
(8)
and
the friction term can be modelled for the ith axis as f i (qi ) = Pi Ω i . If P̂i is an
available estimate of vector Pi the joint friction torque can be estimated as
f̂i(q̇ref,i) = P̂i Ωref,i      (10)

where

Ωref,i := [1 q̇ref,i q̇ref,i^2 ... q̇ref,i^h]T      (11)

and q̇ref,i is the ith joint velocity reference. Note that Ωref,i was used instead
of Ωi because better experimental performance was achieved (Whitcomb et
al., 1997). Since reference velocities are not directly available, they have to be
reconstructed from workspace references as
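The switched polynomial friction model of (8)-(11) can be sketched as follows; the example coefficients used in the test (a Coulomb-plus-viscous pair) are purely illustrative.

```python
def friction_torque(p_minus, p_plus, qdot):
    """Polynomial friction model of Eqs. (8)-(11): the coefficient
    vector P_i is switched on the sign of the joint velocity, and the
    torque is the inner product P_i . Omega_i, with
    Omega_i = [1, qdot, qdot^2, ..., qdot^h]^T."""
    if qdot == 0.0:
        return 0.0          # the model is not defined at zero velocity
    p = p_minus if qdot < 0.0 else p_plus
    omega = [qdot ** j for j in range(len(p))]
    return sum(pj * wj for pj, wj in zip(p, omega))
```

With h = 3, as used in the identification cited below, each direction of motion carries four coefficients.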
Two methods for the determination of P̂i are considered hereafter. They will
be referred to as Static model-based Friction Compensation (SFC) and Adaptive
Friction Compensation (AFC).
The first method that has been considered consists of performing suitable ex-
periments on the robot and then applying an off-line estimation procedure
based on a recursive least squares algorithm in order to estimate all the robot
parameters, including friction torques (h = 3 was used) (Indri et al., 2002). Once
the (static) friction model is estimated, it can be employed both in the explicit
and in the implicit hybrid force/velocity control law during
the contour tracking tasks. However, this technique is time consuming (as ad
hoc experiments have to be performed) and not always appropriate, as it is
known that friction torques may change over time and might not always be re-
liably predicted (Daemi & Heimann, 1996). Thus, control performance may de-
crease during robot operation and it is therefore useful to compensate for
the joint friction effect using an adaptive procedure.
The friction error signal can be determined suitably both for the explicit and
the implicit hybrid control law. In the first case, if the model expressed by (5)
represented a perfect prediction, the output of the PID controllers in (6)
would be equal to zero. Consequently the PID output can be regarded as a
joint friction prediction error signal. In other words, adopting kV,ff = 0 and
kF,ff = 1, equations (5) and (6) result in:
where
is the workspace output of PID regulators transformed into the joint space.
By following a similar reasoning, in case the implicit control law is employed,
it is
where
e = u*PID (18)
where equation (15) or (17) has to be considered if the explicit or the implicit
controller is adopted respectively.
Based on the value of the error signal (13), the polynomial coefficients of the
friction function can be updated every control step k with the following AFC
algorithm (i=1,2):
5. Experimental Results
Experimental results, aiming at verifying the effects of the joint friction com-
pensation, have been obtained by considering an iron disk with a diameter of
180 mm placed in two different positions (called `A’ and `B’) of the manipula-
tor workspace, as shown in Figure 7.
It is worth stressing again that the contour tracking task is performed by the
control algorithm without any knowledge of the geometry of the workpiece.
In both cases, a large set of different normal force set-points [20, 25, 30, 35, 40]
N, each at different tangential velocities [10, 20, 30, 40, 50] mm/s, has been se-
lected and the RMS force and velocity errors have been calculated for both the
explicit and implicit hybrid force/velocity control laws, where the use of the
SFC method and of the AFC method has been evaluated together with the
absence of joint friction compensation. The control system parameters, namely
the parameters of the PID controllers in the explicit and implicit control schemes,
have been selected after a careful trial-and-error procedure.
Figure 7. Positions A and B of the disk in the manipulator workspace.
It is worth noting that the implicit hybrid force/velocity control scheme re-
quires a more significant tuning effort than the explicit one because of the
presence of the additional position inner loop.
Results are shown in Figures 8-11. In order to understand the results better, the
plots of the obtained normal force and tangential velocity signals for the con-
sidered control schemes applied to disk A when the normal force set-point is
30 N and the tangential velocity set-point is 10 mm/s are shown in Figures 12-
13.
a)
b)
Figure 8. Force RMS error with no friction compensation and with SFC and AFC for
the disk in the A position. a) implicit control; b) explicit control.
a)
b)
Figure 9. Force RMS error with no friction compensation and with SFC and AFC for
the disk in the B position. a) implicit control; b) explicit control.
a)
b)
Figure 10. Velocity RMS error with no friction compensation and with SFC and AFC
for the disk in the A position. a) implicit control; b) explicit control.
a)
b)
Figure 11. Velocity RMS error with no friction compensation and with SFC and AFC
for the disk in the B position. a) implicit control; b) explicit control.
a)
b)
Figure 12. Normal force signals with no friction compensation and with SFC and AFC
for the disk in the A position. a) implicit control; b) explicit control.
a) b)
Figure 13. Tangential velocity signals with no friction compensation and with SFC
and AFC for the disk in the A position. a) implicit control; b) explicit control.
The mean values of the normal force and tangential velocity RMS errors for the
different experiments are reported in Table 1.
From the results presented it can be deduced that a friction compensation
strategy is indeed necessary, especially for the explicit control law. This is mo-
tivated by the fact that the inner joint position control loops in the implicit con-
trol law are somehow able to cope with the friction effects. However, it has to
be noted again that the implicit control law requires a greater tuning effort
than the explicit one (although, from another point of view, it has the advan-
tage that it can be applied to a pre-existing motion control architecture).
The Adaptive Friction Compensation strategy definitely provides the best re-
sults for the explicit control scheme, both in terms of normal force and tangen-
tial velocity, while for the implicit control law the performance obtained by the
Adaptive Friction Compensation scheme and by the Static Friction Compensa-
tion scheme is similar. In any case, the great advantage of AFC of being a
model-free scheme (i.e., no preliminary experiment is required to derive a fric-
tion model, and robustness to variation of the friction parameters is assured)
makes it more appealing in a practical context.
It is worth stressing that the AFC strategy is effective in reducing the normal
force and tangential velocity errors especially when the joint velocity sign
changes. This fact can be evaluated by considering the two joint velocities
that would be necessary in order to achieve the required tangential velocity of
10 mm/s (for disk A). They are reported in Figure 14. Comparing with Figures
12 and 13, for example at time t = 3.9 s, when the velocity of the first joint
changes its sign, it appears that the normal force and tangential velocity errors
increase significantly when no friction compensation is applied, especially for
the explicit control.
Further, the explicit hybrid force/velocity controller (with AFC) provides basi-
cally the same performance (in terms of both normal force and tangential ve-
locity errors) regardless of the different normal force and tangential velocity
set-points and the different position of the workpiece in the manipulator
workspace. This is indeed a remarkable feature, due to the higher bandwidth
provided by the explicit control with respect to the implicit one.
Figure 14. Required joint velocities for tracking disk A with the tangential velocity of
10 mm/s
6. Conclusions
8. References
1. Introduction
al., 1985). Conditions for the generalized stability of polynomials with the line-
arly dependent coefficients (polytopes) have been obtained (Bartlett et al.,
1987; Rantzer, 1992).
One of the most important stages in the design of dynamic systems with un-
certain parameters is ensuring robust quality. The qualitative characteristics
of the control process are defined by the location of the characteristic equation
roots in the complex plane (the plane of system fundamental frequencies). In this connec-
tion, three main groups of tasks being solved can be distinguished: determin-
ing the assured roots location domain (region) for the given system, finding
conditions of whether roots get into the given region or not (determination of
the Λ-stability conditions) and locating roots in the given domain (ensuring Λ-
stability).
The frequency stability criteria for the linear systems families and also the
method for finding the largest disturbance range of their characteristic equa-
tions coefficients, which guarantees the system asymptotic stability, are con-
sidered by B. T. Polyak and Y. Z. Tsypkin (Polyak & Tsypkin, 1990). The as-
sured domain of the interval polynomial roots location is found in (Soh et al.,
1985). The root locus theory is used in (Gaivoronsky, 2006) for this task solu-
tion. Conditions (Vicino, 1989; Shaw & Jayasuriya, 1993) for the interval poly-
nomial roots getting into the given domain of some convex shape are defined.
The parametric approach to robustness, based on the root locus theory (Rim-
sky, 1972; Rimsky & Taborovetz, 1978; Nesenchuk, 2002; Nesenchuk, 2005), is
considered in this chapter in application to the parametric design of the indus-
trial anthropomorphous robot control system. The developed techniques allow
setting the limits of the parameter variation intervals for the cases when the
stability verification has shown that the given system is unstable, and en-
suring robust system quality by locating the roots of the characteristic equation
family within the given quality domain.
Most industrial robots are used for transporting various items (parts), e.g. for installing parts and tools in cutting machines, for moving parts and units, etc. During robot operation, internal or external factors cause its parameters to vary, which in turn causes variation of the system characteristic equation coefficients. This variation can be rather substantial, and under such conditions the system has to be treated as an uncertain system.
Industrial robot control system parametric design on the base of methods for uncertain… 897
The industrial robot considered here operates as an integrated part of flexible industrial modules, including those for stamping, mechanical assembly, welding, machine cutting, casting production, etc. The industrial robot is shown in fig. 1. It comprises a manipulator 1 of anthropomorphous structure, a control block 2 including periphery equipment, and connecting cables 3. The manipulator has six units (1–8 in fig. 1) and correspondingly six degrees of freedom (see fig. 1): column 4 turn, shoulder 5 swing, arm 6 swing, and hand 7 swing, turn and rotation. The arm is connected with the joining element 8.
Control of robots of this type, which belong to the third generation, is based on the hierarchical principle and features distributed data processing: special control processors perform autonomous control of every degree of freedom (the lower, executive control level), while a central processor coordinates their operation (the higher, tactical control level).
2.2 Industrial robot manipulator unit control system, its structure and
mathematical model
W_p'(s) = W_p(s)·s,

W_p(s) = φ/U_g = 1 / [ (j_m + j_l)·(L_A/C_M)·s³ + (j_m + j_l)·(R_A/C_M)·s² + C_e·s ], (1)
Figure 2. Control system for the industrial robot manipulator shoulder unit
On the basis of (1), write the characteristic equation of the manipulator unit control system
s⁴ + (R_A/L_A)·s³ + [C_e·C_M/((j_m + j_l)·L_A)]·s² + [C_M·K_1·K_s/((j_m + j_l)·L_A·T)]·s + C_M·K_2·K_p·K_s/((j_m + j_l)·L_A·T) = 0
or as
a_0·s⁴ + a_1·s³ + a_2·s² + a_3·s + a_4 = 0, (2)
where
a_0 = 1; a_1 = R_A/L_A; a_2 = C_e·C_M/((j_m + j_l)·L_A); a_3 = C_M·K_1·K_s/((j_m + j_l)·L_A·T); a_4 = C_M·K_2·K_p·K_s/((j_m + j_l)·L_A·T);
The nominal parameter values are:
- R_A = 0.63 Ω;
- L_A = 0.0014 H;
- j_l = 2.04·10⁻⁵ kg·m²;
- j_m = 40.8·10⁻⁵ kg·m²;
- C_e = 0.16 V·s/rad;
- C_M = C_e;
- T = 0.23 s;
- K_1 = 66.7, K_2 = 250;
- K_s = 0.078, K_p = 2.5.
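As a quick cross-check, the coefficients of (2) can be computed directly from these nominal parameters. The short script below is an illustrative sketch; it assumes, as in the coefficient formulas above, that the total inertia is j_m + j_l:

```python
# Nominal drive parameters from the text (SI units; inertias in kg*m^2).
R_A, L_A = 0.63, 0.0014          # armature resistance and inductance
j_l, j_m = 2.04e-5, 40.8e-5      # load and motor inertia
C_e = 0.16                        # back-EMF constant, V*s/rad
C_M = C_e                         # torque constant, taken equal to C_e
T = 0.23                          # time constant, s
K_1, K_2 = 66.7, 250.0
K_s, K_p = 0.078, 2.5

j = j_m + j_l                     # total inertia used in the coefficient formulas

# Coefficients of a0*s^4 + a1*s^3 + a2*s^2 + a3*s + a4 = 0, eq. (2)
a0 = 1.0
a1 = R_A / L_A
a2 = C_e * C_M / (j * L_A)
a3 = C_M * K_1 * K_s / (j * L_A * T)
a4 = C_M * K_2 * K_p * K_s / (j * L_A * T)

print(a0, a1, a2, a3, a4)
```
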
Figure 3. Structure of the position control system loop for the manipulator shoulder
unit
After substitution of the nominal values into (2) rewrite the unit characteristic
equation as
The coefficients of (3) are nominal; during robot operation they often vary within rather wide intervals. For this reason, when designing the robot control system, it is necessary to take the parameter uncertainty into account and to ensure control system robustness.
A method is described for the synthesis of a stable family of characteristic polynomials of an interval dynamic system (IDS) from a given unstable one, based on the system model in the form of the free root locus portrait. This method makes it possible to adjust the given interval polynomial to ensure its stability in cases when the polynomial has been found unstable. The distance measured along the root locus portrait trajectories is taken as the adjustment criterion; in particular, the new polynomial can be selected as the nearest one to the given polynomial that satisfies the system quality requirements. The synthesis is carried out by calculating new boundaries of the variation interval of the polynomial constant term (the stability interval), which allows ensuring stability without modifying the configuration of the system root locus portrait.
While investigating uncertain control systems, in order to obtain a more complete representation of the processes occurring in them, it is useful to establish the correlation between the algebraic, frequency and root locus methods of investigation. Such a correlation exists and can be applied for finding the dependence between the values of the system characteristic equation coefficients (parameters) and its dynamic properties, so as to determine how, and which, coefficients should be changed to ensure stability. One way of establishing this correlation is the investigation of the system root locus portraits and of the root loci of Kharitonov's polynomials (Kharitonov, 1978).
Consider the IDS, described by the family of characteristic polynomials
P(s) = Σ_{j=0}^{n} a_j·s^(n−j) = 0, (4)
3.2 The interval system model in the form of the root locus portrait
Remark 2 is correct due to the root loci properties (Uderman, 1972) and because
real roots of equations with positive coefficients are always negative (see fig.
4).
The peculiarity of free root loci, which distinguishes them from other types of root loci, is that all their branches tend to infinity, approaching the corresponding asymptotes.
The investigation is carried out using the Teodorchik–Evans free root loci (TEFRL) (Rimsky, 1972); i.e., the term "root locus" within this section means the TEFRL, whose parameter is the constant term of the system characteristic equation.
To generate the IDS root locus portrait apply the family of the mapping func-
tions
where u(σ, ω) and v(σ, ω) are harmonic functions of two independent variables σ and ω; a_n is the root locus parameter; s = σ + iω. Analytical and graphical root loci are formed using mapping function (5). The root locus equation is as follows:

v(σ, ω) = 0. (6)
The fragmentary root locus portrait of a fourth-order IDS, made up of the free root loci of the four Kharitonov's polynomials, is shown in fig. 4. The Kharitonov's polynomials h1, h2, h3 and h4 are represented in this figure by points (roots) marked with circles, triangles, squares and filled squares correspondingly. The following designations are used: σ_hi, i = 1, 2, 3, 4, are the asymptote cross centers for the root loci of every polynomial hi; tl, l = 1, 2, 3, are the cross points of the root loci branches with the system asymptotic stability boundary, the axis iω. The root loci initial points, which represent zeroes of mapping function (5), are depicted by crosses. Because in fig. 4 all roots of the Kharitonov's polynomials are located entirely in the left half-plane, the given interval system is asymptotically stable (Kharitonov, 1978).
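The four Kharitonov's polynomials can be formed mechanically from the coefficient interval bounds, and each can then be tested for asymptotic stability, e.g. with a Routh array. The sketch below is illustrative only (pure Python, coefficients in ascending powers of s); the interval data are a made-up example, not the chapter's system:

```python
def kharitonov(lo, hi):
    """Return the four Kharitonov polynomials (coefficients in ascending
    powers of s) for the interval polynomial with c_i in [lo[i], hi[i]]."""
    pats = [(0, 0, 1, 1), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0)]  # 0=lo, 1=hi, period 4
    return [[(hi if p[i % 4] else lo)[i] for i in range(len(lo))] for p in pats]

def is_hurwitz(asc):
    """Routh test: True iff all roots lie strictly in the left half-plane.
    `asc` holds ascending coefficients with a positive leading coefficient."""
    b = [float(x) for x in asc[::-1]]          # descending powers
    r1, r2 = b[0::2], b[1::2]
    col = [r1[0]]
    while r2:
        if r2[0] == 0.0:
            return False                       # singular row: not strictly Hurwitz
        col.append(r2[0])
        width = max(len(r1), len(r2)) - 1
        nxt = [(r2[0] * (r1[i + 1] if i + 1 < len(r1) else 0.0)
                - r1[0] * (r2[i + 1] if i + 1 < len(r2) else 0.0)) / r2[0]
               for i in range(width)]
        r1, r2 = r2, nxt
    return all(x > 0 for x in col)

# Example: +/-10% intervals around (s+1)(s+2)(s+3)(s+4) = 24 + 50s + 35s^2 + 10s^3 + s^4
lo = [21.6, 45.0, 31.5, 9.0, 1.0]
hi = [26.4, 55.0, 38.5, 11.0, 1.0]
print([is_hurwitz(k) for k in kharitonov(lo, hi)])
```

If all four polynomials pass the test, the whole interval family is asymptotically stable by Kharitonov's theorem.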
Figure 4. Root loci of the Kharitonov's polynomials for the system of class [4;0]
The branches of the IDS root locus portrait, when crossing the stability boundary, generate on it a region (set) of cross points. Call this region the cross region and designate it Rω. According to the theory of functions of a complex variable (Lavrentyev & Shabat, 1987), and due to the continuity property of the complex mapping function (5), this region is many-sheeted and is composed of separate sheets, each sheet (continuous subregion) being formed by a separate branch as it moves in the complex plane following the parameter variation. The portion of the cross region generated only by the positive branches of the system root locus portrait is called the positive cross region and designated Rω+.
Define also the subregion rω+ (either continuous or discrete) of the cross region Rω+ (8), generated by the root loci branches of an arbitrary subfamily f of the interval system polynomial family (4), and call it the (positive) cross subregion; thus,

where Wr+ is the set (family) of the coordinates ωr i+ of the points of the cross subregion rω+ (9); Ar+ is the set (family) of the values ar i+ of the root locus parameter a_n at the points of the set Wr+.
Define the minimal positive value ar min+ of the root locus parameter within the cross subregion rω+:
Peculiarities of the location of the IDS root loci initial points make it possible to conclude that there exist variation intervals of its characteristic equation coefficients that ensure asymptotic stability of the given system.
Proof. The free root loci of the subfamily f generate at the system stability boundary the cross subregion rω+ (9), which is formed by the set (10) of cross point coordinates and the corresponding set (11) of parameter values. If the initial points are located as defined by the statement, then on every i-th branch of every polynomial's root loci there exists an interval ri = (σl i, 0) of root values (from the branch initial point with coordinate σl i to the point where the branch crosses the stability boundary, the axis iω of the complex plane) that is located entirely in the left half-plane. Therefore, there also exists an appropriate maximum possible common interval dm (common for all the branches) of the root locus parameter a_n values (from zero up to a definite maximum possible value a_n = ar m+), corresponding to root values within some interval rk = (σl k, 0), which ensures the system stability. Call this interval dm the dominating interval and define it as dm = (0, ar m+). Designate the interval of the root coordinate values σi located on every positive i-th branch of the family and corresponding to the dominating interval as rd = (σl i, σr i). Evidently, ar m+ is maximum possible at the stability boundary, i.e. at σr i = 0. Then ∀σr i [ar m+ = ar min+ → σr i ≤ 0], i.e. the dominating interval is dm = (0, ar min+), which is exactly the interval d (13). Hence the statement is correct.
Definition 7. An interval of values of the root loci parameter of polynomial (4) is called the polynomial stability interval by this parameter, or simply the polynomial stability interval, if the polynomial asymptotic stability property holds within this interval.
If some initial points are located at the stability boundary (excluding the point that is always located at the origin), and on the assumption that all the remaining points are located in the left half-plane, additional analysis is required to establish the existence of the stability interval. For this purpose it is necessary to determine the direction of the root loci branches at their departure from the initial points and to verify that either:
a) all the root loci branches with initial points located at the stability boundary are directed from these points into the left half-plane; or
b) all positive root loci branches with initial points located at the stability boundary are directed from these points into the left half-plane.
The conditions for the existence of stability intervals of the coefficients of the polynomial family (4) were formulated in the previous section. Here we define what the values of these intervals should be. For this purpose, consider the subfamily f of polynomials (4) consisting of the system Kharitonov's polynomials, and develop a procedure for synthesizing stable Kharitonov's polynomials from unstable ones, which depends on the location of the root loci initial points in relation to the asymptotic stability boundary. For the development of the synthesis procedure, apply the Kharitonov's polynomials free root loci. Consider the case when the initial points are located in the left half-plane. In this case the synthesis algorithm can be divided into the following stages.
Stage 1. Obtaining the Teodorchik–Evans free root loci equation (6) for each of the four Kharitonov's polynomials of the IDS.
As the Kharitonov's polynomials represent a subfamily of the IDS polynomial family, they generate on the stability boundary the cross subregion rω+ (9) described above, formed by the set (10) of cross point coordinates.
Before describing the next stage of the synthesis, formulate the following theorem.
Theorem. For robust stability of the polynomial family (4) it is necessary and sufficient that the upper limit of the variation interval of the constant term a_n satisfy the inequality

Proof. Let the coefficient a_n be the root locus parameter of polynomial (4). Under the theorem condition, the family (4) is stable at a_n = 0, i.e. the root loci initial points are located in the left half-plane. Therefore, in view of Statement 1, the theorem is valid.
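For a fourth-order polynomial with the constant term a4 as the root locus parameter, the value of a4 at which a branch reaches the stability boundary iω can be found in closed form by substituting s = iω and separating real and imaginary parts: the imaginary part gives ω² = a3/a1, and the real part then gives a4. The sketch below is an illustrative check with made-up coefficients, not the chapter's data:

```python
def a4_crossing(a1, a2, a3):
    """Value of the constant term a4 at which a root of
    s^4 + a1*s^3 + a2*s^2 + a3*s + a4 reaches the imaginary axis.
    From s = i*w: the imaginary part gives w^2 = a3/a1,
    the real part w^4 - a2*w^2 + a4 = 0 then gives a4."""
    w2 = a3 / a1                 # from -a1*w^3 + a3*w = 0 (w != 0)
    return a2 * w2 - w2 * w2     # from w^4 - a2*w^2 + a4 = 0

# Check: at the crossing value the polynomial really has a root on the axis i*omega.
a1, a2, a3 = 1.0, 3.0, 2.0
a4 = a4_crossing(a1, a2, a3)
w = (a3 / a1) ** 0.5
s = 1j * w
value = s**4 + a1 * s**3 + a2 * s**2 + a3 * s + a4
print(a4, abs(value))
```

Taking the smallest such crossing value over the four Kharitonov's polynomials gives the upper end of the stability interval of the constant term.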
Stage 4. Comparing the obtained stability interval (13) with the given variation interval a_n ∈ [a̲_n, ā_n] of the parameter a_n in correspondence with inequality (14).
If condition (14) is not satisfied, the upper limit ā_n of the parameter variation interval is adjusted in correspondence with this inequality.
When the power n of the polynomial is less than or equal to 3 (n ≤ 3), the above theorem is applied without any conditions, i.e. it is not required to sat-
where the real coefficients are: a0 = 1; 8.4 ≤ a1 ≤ 11.6; 24 ≤ a2 ≤ 48; 26.5 ≤ a3 ≤ 83.1; 8.99 ≤ a4 ≤ 50.3.
Let the coefficient a4 be the root locus parameter. Then define the mapping function:
Define the Kharitonov’s polynomials for the interval system with the initial
characteristic polynomial (15):
The root loci of these polynomials are represented in fig. 4, described above.
The number of asymptotes na (indicated in fig. 4 as s1, s2, …, s6) is the same for every Kharitonov's polynomial and equals
na = n − m = 4 − 0 = 4,
The asymptote centers are located on the axis σ and have the coordinates σh1 = −2.10; σh2 = −2.90; σh3 = −2.10; σh4 = −2.90 (see fig. 4). The asymptote center coordinates coincide in pairs: for the pair h1(s) and h3(s), and also for the pair h2(s) and h4(s).
The inclination angles of the asymptotes for the given root loci are correspondingly the following: φ1 = 0°; φ2 = 45°; φ3 = 135°; φ4 = 180°.
According to fig. 4, every pair of root loci tends to the same asymptotes, i.e. the pairs are formed by those root loci whose asymptote centers coincide, as indicated above.
To define the intervals of the coefficients of equation (15) that ensure the system stability, stability condition (14) is applied. The following values ar i+ of the set Ar+ have thus been defined:

ar min+ = ar 4+ = 54.89.

Because ā4 = 50.3 < 54.89, in correspondence with (14) the given interval system is asymptotically stable.
In this section the task of locating the uncertain system roots within a trapezoidal domain is solved. The method allows locating the roots of the family of the uncertain system characteristic equations within a given quality domain, thus ensuring the required system quality (generalized stability). The task is solved by inscribing the system circular root locus field into the given quality domain. A trapezoidal domain bounded by arbitrary algebraic curves is considered. The peculiarity of the method consists in the application of root locus fields.
Systems with parametric uncertainty are considered, described by the family of characteristic polynomials

where a1, ..., an are coefficients that depend linearly on some uncertain parameter k and can be either real or complex.
To select the uncertain parameter k, transform equation (16) and rewrite it in the following form:

φ(s) + kψ(s) = 0, (17)

where φ(s) and ψ(s) are some polynomials of the complex variable s, and k is the system uncertain parameter.
k = f(s) = −φ(s)/ψ(s) = u(σ, ω) + iv(σ, ω), (18)
where u(σ,ω), v(σ,ω) are harmonic functions of two independent real vari-
ables σ and ω.
Definition 8. The root locus field of the control system is the field with the complex potential

φ(s) = u(σ, ω) + iv(σ, ω),

that is defined at every point of the extended free parameter complex plane by setting the root locus image existence over the whole plane (Rimsky & Taborovetz, 1978).
Then set the root locus image by the real function h = h(u, v, t), where t is a constant value for every image. Call t the image parameter. Suppose the image is defined over the whole free parameter plane by setting the corresponding boundaries of the parameter t. Thus, using mapping function (18), define in the general form the scalar root locus field function

f* = f*(σ, ω). (19)
f *(σ,ω) = L, (20)
Define the quality domain Q (fig. 5) in the left complex half-plane of the system fundamental frequencies (the roots plane), bounding the location of the roots of equation (16) by the lines Lη′ and Lη″ of equal degree of stability (stability margin) and the lines L+β and L−β of constant damping; this is equivalent to setting permissible limits for the following system quality indicators: the degree of system stability η and the oscillation β. In fig. 5 the quality domain Q has the shape of a trapezoid.
The task consists in locating the roots of characteristic equation (16) within the domain Q, i.e. in determining such a domain D of the uncertain parameter k values that ensures the location of the equation roots (e.g., p1, p2, p3, p4 in fig. 5) within the given domain Q, bounded by the lines of equal degree of stability and constant damping, so that the system qualitative characteristics do not go beyond the preset limits for η and β, thus ensuring the system Q-stability and fulfillment of the condition

k ∈ D → si ∈ Q, (21)

where i = 1, 2, 3, …, n.
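Condition (21) can be checked pointwise: a root s lies in the trapezoidal domain Q when its real part lies between the two stability-margin lines and it lies between the constant damping lines. A minimal sketch (the membership test below is a simplification written for damping lines of the form ω = ±μσ; the sample roots are hypothetical):

```python
def in_quality_domain(s, eta1, eta2, mu):
    """True if the root s lies in the trapezoid Q bounded by the
    equal-degree-of-stability lines sigma = -eta1 and sigma = -eta2
    (0 < eta1 < eta2) and the constant damping lines omega = +/- mu*sigma."""
    sigma, omega = s.real, s.imag
    return (-eta2 <= sigma <= -eta1) and (abs(omega) <= mu * abs(sigma))

# Example with the boundaries used later in the chapter's numerical example:
# sigma = -1.2, sigma = -4.7 and the 45-degree damping lines (mu = 1).
roots = [-2.0 + 1.0j, -2.0 + 3.0j, -0.5 + 0.0j]
print([in_quality_domain(r, 1.2, 4.7, 1.0) for r in roots])
```
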
For solving the task, apply the root locus fields of the circular image (circular
root locus fields – CRLF) (Rimsky, 1972; Nesenchuk, 2005).
The field function (19) and the level lines equation (20) for the CRLF in the general
form:
f* = f*(σ, ω, a, b) (22)
where a and b are the image center coordinates by axes u and υ correspond-
ingly, a = const and b = const; ρ is the circular image radius.
The circular root locus fields for the systems of class [3;0] are represented in
fig. 6 and 7.
The CRLF location in the complex plane is to a great extent defined by the location of the given circular image center, which is mapped onto the complex plane by the field localization centers (see definition 2.4 in (Nesenchuk, 2005)).
The localization centers of the field described by the level lines L1 (L1′, L1″, L1‴), L2, L3, L4 are located at the points C1, C2, C3 (fig. 6, b). The level lines bound the corresponding domains (W1, W2, W3, W4 in fig. 6, b) in the plane s. Every such many-sheeted domain W represents the mapping of a root locus level line disk-image of a certain radius.
The given task is solved by inscribing the level line of the CRLF, previously oriented in a special way in the complex plane, into the given quality domain of the system. The image of this level line in the free parameter plane k, which is a circle of some radius r, will be the boundary of the required domain D (the required disk). Then, if the circular image center is located at the origin, the following condition should be satisfied: |k| ≤ r.
(a) (b)
Figure 6. Circular root locus field when setting the image center in the origin
of the variable parameter plane k
(a) (b)
Figure 7. Circular root locus field when shifting the image center in relation
to the origin of the variable parameter plane k
In the first case the circular image center will be located at the point C, where k = 0 (fig. 6, a). In the second case the field localization centers should be located on those segments of the TERL positive branches that lie entirely within the given quality domain. The coordinates u = a and v = b (fig. 7, a) of the corresponding image center are determined from formula (18).
Consider the first subtask. For its solution find the extreme points of contact of
the CRLF level line and the lines Lη', Lη'' of equal degree of stability (fig. 5).
Apply the formula for the gradient of the root locus field:
grad f* = (∂f*/∂σ)·ī + (∂f*/∂ω)·j̄, (24)

where f*(σ, ω) is the field function; ī, j̄ are the projections of the unit vector, directed along the normal to the field level line, onto the axes σ and ω correspondingly.
Because at the desired points of contact the projections of the gradient (24) onto the axis iω are equal to zero, determine these points' coordinates by composing two systems of equations:
∂f*(σ, ω)/∂ω = 0,  σ = ση′;  (25)

∂f*(σ, ω)/∂ω = 0,  σ = ση″,  (26)
where the first equation of each system represents the projection of the gradient onto the axis ω, and ση′ and ση″ are the coordinates of the cross points of the axis σ with the lines Lη′ and Lη″ correspondingly. From the first system of equations, the coordinate ω of the extreme contact point of the line Lη′, bounding the quality domain on the right side, with the CRLF level line is determined. The second system allows determining the coordinate ω of the extreme contact point (e.g., point t3 in fig. 8) of the line Lη″, bounding the domain Q on the left side, with the CRLF level line.
Turn now to the second subtask. For its solution it is necessary to find the extreme contact point (or points) of the CRLF level line and the line L+β or L−β (fig. 5) of constant damping. Only one line, L+β or L−β, is chosen because, when the image center is set on the axis u of the free parameter plane, the CRLF is symmetric in relation to the axis iω. The line L+β will be considered as a tangent to the CRLF level line.
Figure 8. The domain of roots location, inscribed into the given quality domain
Write the equation of a tangent to the scalar CRLF level line (a tangent to the
curve) in the general form:
(∂f*(σ, ω)/∂σ)·(Δ − σ) + (∂f*(σ, ω)/∂ω)·(Ω − ω) = 0, (27)
where Δ, Ω are current coordinates of a point on the tangent; σ, ω are the point
of contact coordinates.
As in this case the tangent to the level line passes through the origin, set coor-
dinates Δ and Ω to zero and rewrite (27) in the following form:
(∂f*(σ, ω)/∂σ)·(−σ) + (∂f*(σ, ω)/∂ω)·(−ω) = 0. (28)
The constant damping line has the equation ω = μσ, where μ is a real constant, μ = tan β (fig. 5), and β is the angle between the constant damping line and the axis ω. Composing, on the basis of the last two equations, the system
(∂f*(σ, ω)/∂σ)·(−σ) + (∂f*(σ, ω)/∂ω)·(−ω) = 0,
ω = μσ, (29)
and solving (29), obtain the coordinates σ and ω of the desired contact point.
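System (29) can be solved in closed form when the level line is modeled as an actual circle in the s-plane. This is a simplifying assumption made purely for illustration (CRLF level lines are in general not circles in s), and the center coordinates used are hypothetical:

```python
def contact_point(sigma0, omega0, mu):
    """Solve system (29) for a circle-shaped level line
    f*(sigma, omega) = (sigma - sigma0)^2 + (omega - omega0)^2 = const
    and the line omega = mu*sigma through the origin.
    The tangency condition (sigma - sigma0)*sigma + (omega - omega0)*omega = 0
    combined with omega = mu*sigma gives the nonzero solution below."""
    sigma = (sigma0 + mu * omega0) / (1.0 + mu * mu)
    return sigma, mu * sigma

# Hypothetical level-line center and a 45-degree damping line (mu = 1):
sigma, omega = contact_point(-3.0, 1.0, 1.0)
# Verify the tangency (gradient) condition at the contact point:
residual = (sigma - (-3.0)) * sigma + (omega - 1.0) * omega
print(sigma, omega, residual)
```
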
It is necessary to note that, when solving both the first and the second subtask, several points of contact with every quality domain boundary can be found. This is explained by the fact that contact points are determined for both the global and every local field level line. In this case the level line corresponding to the circular image of minimal radius is always chosen. Thus, of the three points t1, t2 and t3 (fig. 8) found by the method described above, the point t1, located on the level line L1 corresponding to the circular image of minimal radius, is chosen. This line represents the boundary of the desired domain D of the uncertain parameter k values that ensure the required system operational quality indicators.
Consider a numerical example. The system quality domain Q (see fig. 5) is bounded by the lines of equal degree of stability, described by the equations

σ = −1.2, σ = −4.7,

and by the constant damping lines

σ = ω, σ = −ω.
Set the characteristic equation describing the dynamic system of class [3;0] and select the polynomials φ(s) and ψ(s) (see (17)):

ψ(s) = 1. (31)
For determination of the CRLF level line, inscribed into the quality domain, the fol-
lowing systems of equations (25), (26) and (29) were solved:
The first equation of the first and of the second system represents the CRLF gradient value at the contact points of the field level line and the lines bounding the quality domain on the left and right (the lines of equal degree of stability); the second equation is the equation of the lines of equal degree of stability. The first equation of the third system represents the equation of a tangent to the CRLF level line passing through the origin. As a result of solving these equations, three points of contact of the CRLF level lines with the lines Lη′, Lη″ and L+β bounding the quality domain are defined. In fig. 8 these are the points t1, t2 for contact of the level lines L1″, L1′ correspondingly with the constant damping line L+β, and the point t3 for contact of the level line L2″ with the line Lη″ of equal degree of stability. It has been found that the point t2 belongs to the level line inscribed into the domain Q, while the lines L2′, L2″, which correspond to the contact point t3, and the level line L2‴ go beyond this domain (the lines L2′, L2″ and L2‴ represent mappings of a single circular image). Thus, three simply connected closed regions are formed (cross-hatched in fig. 8), bounded correspondingly by the three level lines L1′, L1″ and L1‴, representing three sheets of the three-sheeted domain defined by mapping the image disc onto the plane s using the three branches of the three-valued mapping function. This three-sheeted domain is the domain of the characteristic equation roots satisfying the required quality. The image of this domain boundary in the plane a_n is the circle of radius r = 2, bounding the desired closed domain D of the free parameter a_n values, which comply with the given conditions of the system stability.
The developed method for parametric synthesis of dynamic systems meeting the robust quality requirements is based on the application of circular root locus fields. It allows selecting regions of the system characteristic equation roots location belonging to the given quality domain, which defines the required quality indicator values (degree of stability and oscillation), and also defining the corresponding regions of the variable parameter values that keep the system quality characteristics within the boundaries set. The main advantage of the method is that it allows determining the system parameter values that ensure the required quality indicators in cases when the given system does not comply with the requirements, i.e. carrying out parametric synthesis of dynamic systems. The method can be generalized to tasks of roots location within domains of other shapes.
5. Parametric design of the industrial robot robust control system on the base of the method for interval control systems synthesis
5.1 Control system model for the case of operation in conditions of uncertainty
The robot load changes with variation of the weights of the items it carries, which causes variation of the load inertia moment jl; this moment enters the characteristic equation coefficients linearly (see (1) and (2)), generating their variation intervals. Currently, in the design procedure, the robot parameter values for cases of substantial parameter variation are obtained by trial and error. Let us conduct the parametric synthesis of the manipulator shoulder control system under parameter uncertainty using the analytical method described in section 3.
Let coefficients of the characteristic equation (3) for the manipulator shoulder
unit vary within the following limits:
Suppose one of the coefficients, e.g. a4, varies continuously along the real axis in the plane of system fundamental frequencies. Taking into account expression (1), the complex mapping function (5), which determines the root loci of the interval system relative to a4, is defined as

f(s) = −φ(s)/ψ(s) = −(s⁴ + a1 s³ + a2 s² + a3 s). (32)
φ(s) + a4 ψ(s) = s⁴ + a1 s³ + a2 s² + a3 s + a4 = 0. (33)
The limit values of the variation intervals of the coefficients of equation (33) are entered as input to the ANALRL package for computer-aided investigation of control systems with variable parameters. During the package operation, the Kharitonov's polynomials of the system characteristic equation are formed:

These four equations form the basis for generating the mathematical model of the robot interval control system in the form of the Kharitonov's polynomials root loci.
Considering the presence of substantial variations of the load inertia moment jl, it is required to find the coefficient variation intervals that ensure stability of the characteristic equation family.
For the task solution, apply the method described in section 3, which allows calculating the coefficient intervals of the characteristic equation family that ensure the system robust stability.
First, the zeroes of function (32) (the poles of the open-loop transfer function) are calculated for the above Kharitonov's polynomials and, if they are located in the left half of the plane s (see the statement given in subsection 3.3) or on the stability bound iω, the root loci of the Kharitonov's polynomials are generated on the basis of these functions.
As in our example one of the zeroes of function (32) is located on the stability bound (at the point s = 0), the direction of the corresponding root locus is verified. It is established that the positive branch is directed from this zero into the left half-plane, which, according to the statement given above (see 3.3), means the existence of positive stability intervals for the system investigated.
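The direction in which the branch leaves the zero at s = 0 can be checked from the root sensitivity: differentiating φ(s) + a4·ψ(s) = 0 with ψ(s) = 1 gives ds/da4 = −1/φ′(s), and at s = 0 we have φ′(0) = a3 > 0, so the root moves into the left half-plane as a4 grows from zero. A first-order numerical sketch with hypothetical coefficient values:

```python
def root_step_from_origin(a1, a2, a3, da4):
    """First-order displacement of the root starting at s = 0 when the
    constant term grows from 0 to da4: ds ~= -da4 / phi'(0) = -da4 / a3."""
    phi_prime_0 = a3   # d/ds (s^4 + a1*s^3 + a2*s^2 + a3*s) evaluated at s = 0
    return -da4 / phi_prime_0

# Hypothetical coefficients; a3 > 0, so the branch moves to the left half-plane.
ds = root_step_from_origin(10.0, 35.0, 50.0, 1.0)
print(ds)
```
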
After constructing the parameter functions according to the technique suggested above (see section 3.4), the variable coefficient values from the set Ar+ (11) at the cross points of the Kharitonov's polynomials root loci branches with the system stability bound iω are determined. For the given case the following values have been obtained:
ar 1+ = 0.334·10⁹ (polynomial h1);
ar 2+ = 0.336·10⁹ (polynomial h2);
ar 3+ = 0.414·10⁹ (polynomial h3);
ar 4+ = 0.280·10⁹ (polynomial h4).
According to the corresponding task algorithm and the obtained values, the minimal positive value ar min+ = 0.280·10⁹ is determined. The interval (13) of a4 variation that ensures the system asymptotic stability is calculated: d = (0; 0.280·10⁹). On the basis of the theorem formulated in 3.4, the following stability condition of the interval system is formed:

As the root locus parameter varies within the limits a4 = 0.56·10⁹ and a4 = 0.488·10⁹, the upper limit does not comply with the stability condition. To ensure stability of the considered interval control system, the upper limit should be set to ā4 = 0.280·10⁹. The limits of the acceptable interval of the coefficient a4 variation are the following:

ā4 = 0.280·10⁹, a̲4 = 0.
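The final adjustment step amounts to a few lines: take the minimum of the crossing values obtained for the four Kharitonov's polynomials and clip the given upper limit of a4 to it. A sketch using the crossing values listed above:

```python
# Crossing values of the root locus parameter for h1..h4 (from the text):
a_cross = [0.334e9, 0.336e9, 0.414e9, 0.280e9]

a_r_min = min(a_cross)       # upper end of the stability interval d = (0, a_r_min)
a4_upper_given = 0.56e9      # given upper limit of the a4 variation interval

# Clip the upper limit so that the whole interval lies inside d:
a4_upper = min(a4_upper_given, a_r_min)
print(a_r_min, a4_upper)
```
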
From the calculations described above it is evident that the developed method can be applied not only to investigating interval system stability, but also to calculating the intervals of its variable parameters when the initial system is not stable. It is worth emphasizing that the method makes it possible to ensure asymptotic stability of the system by adjusting only one coefficient of its characteristic equation.
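The one-coefficient stability search described above can be illustrated numerically. The sketch below sweeps a single free coefficient of a characteristic polynomial over a grid and tests asymptotic stability with the Routh-Hurwitz criterion; the fourth-order polynomial used here is hypothetical (the full coefficient set of the example system is not reproduced in this section), so the resulting interval is only illustrative of the procedure, not of the 0.280·10^9 bound above.

```python
# Sketch: find the interval of one characteristic-polynomial coefficient that
# preserves asymptotic stability (all roots in the open left half-plane),
# using the Routh-Hurwitz criterion. The polynomial is hypothetical.

def is_hurwitz(c):
    """Routh-Hurwitz test for p(s) = c[0]*s^n + c[1]*s^(n-1) + ... + c[n], c[0] > 0."""
    if any(v <= 0 for v in c):          # all coefficients positive is necessary
        return False
    n = len(c) - 1
    rows = [list(c[0::2]), list(c[1::2])]
    while len(rows) < n + 1:            # build the Routh array, n+1 rows total
        r1, r2 = rows[-2], rows[-1]
        if abs(r2[0]) < 1e-300:
            return False                # zero pivot: not strictly stable
        width = max(len(r1), len(r2))
        r1 = r1 + [0.0] * (width - len(r1))
        r2 = r2 + [0.0] * (width - len(r2))
        new = [(r2[0] * r1[k + 1] - r1[0] * r2[k + 1]) / r2[0]
               for k in range(width - 1)]
        rows.append(new if new else [0.0])
    return all(r[0] > 0 for r in rows)  # stable iff first column all positive

# Sweep the free coefficient a4 of s^4 + 10 s^3 + 35 s^2 + 50 s + a4 and
# report the admissible interval, mirroring the search for d = (0, a4_max).
lo, hi = None, None
for k in [x * 0.5 for x in range(1, 400)]:
    if is_hurwitz([1.0, 10.0, 35.0, 50.0, k]):
        lo = k if lo is None else lo
        hi = k
print("stable for a4 roughly in (%.1f, %.1f)" % (lo, hi))
```

For this particular polynomial the Routh array gives the exact bound 0 < a4 < 150, which the grid sweep recovers to within its step size.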
1. Introduction
[Figure: block diagram of the robot-safety system, where:]
R : Robot
SU : n identical safety units (one operating and n−1 on standby)
S : Switch for replacing a failed safety unit; the switch itself can also fail
The robot-safety system can fail in two ways:
• The robot fails with a normally working safety unit and switch; in addition, zero or more safety units are on standby.
• The robot fails with one or more safety units failed or considered failed, the switch being either working or failed.
The following assumptions are associated with this model:
• The robot-safety system is composed of one robot, n identical safety units (only one operates and the rest remain on standby) and a switch.
• The robot, the switch and one safety unit start operating simultaneously.
• The completely failed robot-safety system and its individually failed units (i.e. robot, switch and safety unit) can be repaired. Failure and repair rates of the robot, the switch and the safety units are constant.
• The failed robot-safety system repair rates can be constant or non-constant.
• All failures are statistically independent.
• A repaired safety unit, robot, switch or total robot-safety system is as good as new.
Stochastic Analysis of a System containing One Robot and…. 929
1.1 Notation
j : jth state of the robot-safety system:
  for j = 2n+2, the total robot-safety system has failed (i.e. the robot and one or more safety units have failed or are considered failed, and the switch is either working or failed);
  for j = 2n+3, the robot-safety system has failed (i.e. the robot has failed while a safety unit and the switch are working normally; in addition, zero or more safety units are on standby).
t : Time.
λ_s : Constant failure rate of a safety unit.
λ_r : Constant failure rate of the robot.
λ_w : Constant failure rate of the switch.
μ_s : Constant repair rate of a safety unit.
μ_w : Constant repair rate of the switch.
Δx : Finite repair time interval.
μ_j(x) : Time-dependent repair rate when the failed robot-safety system is in state j and has an elapsed repair time of x; for j = 2n+2, 2n+3.
P_j(x,t)Δx : Probability that at time t the failed robot-safety system is in state j and the elapsed repair time lies in the interval [x, x+Δx]; for j = 2n+2, 2n+3.
pdf : Probability density function.
w_j(x) : pdf of the repair time when the failed robot-safety system is in state j and has an elapsed repair time of x; for j = 2n+2, 2n+3.
P_j(t) : Probability that the robot-safety system is in state j at time t; for j = 2n+2, 2n+3.
P_i(t) : Probability that the robot-safety system is in state i at time t; for i = 0, 1, 2, …, 2n+1.
P_i : Steady-state probability that the robot-safety system is in state i; for i = 0, 1, …, 2n+1.
P_j : Steady-state probability that the robot-safety system is in state j; for j = 2n+2, 2n+3.
s : Laplace transform variable.
P_i(s) : Laplace transform of the probability that the robot-safety system is in state i; for i = 0, 1, 2, …, 2n+1.
P_j(s) : Laplace transform of the probability that the robot-safety system is in state j; for j = 2n+2, 2n+3.
$$\frac{dP_0(t)}{dt} + a_0 P_0(t) = \mu_s P_1(t) + \mu_w P_{n+1}(t) + \sum_{j=2n+2}^{2n+3} \int_0^{\infty} P_j(x,t)\,\mu_j(x)\,dx \qquad (1)$$

$$\frac{dP_i(t)}{dt} + a_i P_i(t) = \lambda_s P_{i-1}(t) + \mu_s P_{i+1}(t) + \mu_w P_{i+n+1}(t) \qquad (\text{for } i = 1,2,\ldots,n-1) \qquad (2)$$

$$\frac{dP_n(t)}{dt} + a_n P_n(t) = \lambda_s P_{n-1}(t) + \mu_w P_{2n+1}(t) \qquad (3)$$

$$\frac{dP_i(t)}{dt} + a_i P_i(t) = \lambda_w P_{i-n-1}(t) \qquad (\text{for } i = n+1,n+2,\ldots,2n) \qquad (4)$$

$$\frac{dP_{2n+1}(t)}{dt} + a_{2n+1} P_{2n+1}(t) = \lambda_s \sum_{i=n+1}^{2n} P_i(t) + \lambda_w P_n(t) \qquad (5)$$
where

$$a_0 = \lambda_s + \lambda_w + \lambda_r$$
$$a_i = \lambda_s + \lambda_w + \lambda_r + \mu_s \qquad (\text{for } i = 1,2,\ldots,n-1)$$
$$a_n = \lambda_w + \lambda_r + \mu_s$$
$$a_i = \lambda_s + \lambda_r + \mu_w \qquad (\text{for } i = n+1,n+2,\ldots,2n)$$
$$a_{2n+1} = \lambda_r + \mu_w$$

$$\frac{\partial P_j(x,t)}{\partial t} + \frac{\partial P_j(x,t)}{\partial x} + \mu_j(x)\, P_j(x,t) = 0 \qquad (\text{for } j = 2n+2, 2n+3) \qquad (6)$$
$$P_{2n+2}(0,t) = \lambda_r \sum_{i=n}^{2n+1} P_i(t) \qquad (7)$$

$$P_{2n+3}(0,t) = \lambda_r \sum_{i=0}^{n-1} P_i(t) \qquad (8)$$
At time t = 0, P_0(0) = 1, and all other initial state probabilities are equal to zero.
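For constant repair rates μ_j, Equations (1)-(8) define an ordinary finite-state Markov chain on the 2n+4 states. As a cross-check of the balance structure, the transition-rate (generator) matrix can be assembled directly from the equations. The Python sketch below does this for general n; the constant-repair assumption and the function name are ours, and the rate values used to exercise it are illustrative only:

```python
# Sketch: build the transition-rate matrix Q implied by Eqs. (1)-(8) when the
# failed-system repair rates are constant. Q[i][k] is the rate from state i to
# state k; diagonal entries make every row sum to zero.

def build_generator(n, lam_s, lam_w, lam_r, mu_s, mu_w, mu_2n2, mu_2n3):
    m = 2 * n + 4                       # states 0 .. 2n+3
    Q = [[0.0] * m for _ in range(m)]
    for i in range(0, n):               # safety-unit failure, switch working
        Q[i][i + 1] += lam_s
    for i in range(1, n + 1):           # safety-unit repair
        Q[i][i - 1] += mu_s
    for i in range(0, n + 1):           # switch failure
        Q[i][i + n + 1] += lam_w
    for i in range(n + 1, 2 * n + 2):   # switch repair
        Q[i][i - n - 1] += mu_w
    for i in range(n + 1, 2 * n + 1):   # safety failure, switch down (Eq. 5)
        Q[i][2 * n + 1] += lam_s
    for i in range(0, n):               # robot fails, protection intact (Eq. 8)
        Q[i][2 * n + 3] += lam_r
    for i in range(n, 2 * n + 2):       # robot fails, protection degraded (Eq. 7)
        Q[i][2 * n + 2] += lam_r
    Q[2 * n + 2][0] = mu_2n2            # failed-system repairs (Eq. 1)
    Q[2 * n + 3][0] = mu_2n3
    for i in range(m):                  # diagonal = minus the total outflow
        Q[i][i] = -sum(Q[i][k] for k in range(m) if k != i)
    return Q
```

By construction the diagonal of row i equals −a_i for the operating states, so the matrix reproduces the a_i definitions given below Equation (5).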
By solving Equations (1)-(8) with the Laplace transform method, we get the
following
$$P_0(s) = \left[ s\left(1 + \sum_{i=1}^{n} Y_i(s) + \frac{\lambda_w}{s+a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i(s) + \sum_{j=2n+2}^{2n+3} a_j(s)\,\frac{1-W_j(s)}{s}\right)\right]^{-1} = \frac{1}{G(s)} \qquad (9)$$

$$P_{n+1}(s) = \frac{\lambda_w}{s+a_{n+1}}\, P_0(s) \qquad (12)$$
$$P_j(s) = a_j(s)\,\frac{1-W_j(s)}{s}\, P_0(s) \qquad (\text{for } j = 2n+2, 2n+3) \qquad (13)$$
where

$$L_i(s) = (s+a_i) - \frac{\lambda_w \mu_w}{s+a_{i+n+1}} \qquad (\text{for } i = 1,2,\ldots,n)$$

$$D_1(s) = L_1(s)$$

$$D_i(s) = L_i(s) - \frac{\lambda_s \mu_s}{D_{i-1}(s)} \qquad (\text{for } i = 2,\ldots,n)$$

$$A_i(s) = \frac{\lambda_s^i}{\prod_{h=1}^{i} D_h(s)} \qquad (\text{for } i = 1,2,\ldots,n-1)$$

$$B_i(s) = \frac{\mu_s}{D_i(s)} \qquad (\text{for } i = 1,2,\ldots,n-1)$$

$$Y_i(s) = \sum_{h=i}^{n-1} A_h(s) \prod_{k=i}^{h-1} B_k(s) + \prod_{h=i}^{n-1} B_h(s)\, Y_n(s) \qquad (\text{for } i = 1,2,\ldots,n-1)$$

$$V_i(s) = \frac{\lambda_w}{s+a_i}\, Y_{i-n-1}(s) \qquad (\text{for } i = n+2,\ldots,2n)$$

$$V_{2n+1}(s) = \frac{\lambda_s \lambda_w}{(s+a_{n+1})(s+a_{2n+1})} + \frac{\lambda_s}{s+a_{2n+1}} \sum_{i=1}^{n-1} \frac{\lambda_w}{s+a_{i+n+1}}\, Y_i(s) + \frac{\lambda_w}{s+a_{2n+1}}\, Y_n(s)$$

$$Y_n(s) = \frac{\lambda_s A_{n-1}(s) + \dfrac{\lambda_s \lambda_w \mu_w}{(s+a_{n+1})(s+a_{2n+1})} + \dfrac{\lambda_s \mu_w}{s+a_{2n+1}} \displaystyle\sum_{i=1}^{n-1} \dfrac{\lambda_w}{s+a_{i+n+1}} \sum_{h=i}^{n-1} \left[ A_h(s) \prod_{k=i}^{h-1} B_k(s) \right]}{L_n(s) - \lambda_s B_{n-1}(s) - \dfrac{\lambda_s \mu_w}{s+a_{2n+1}} \displaystyle\sum_{i=1}^{n-1} \dfrac{\lambda_w}{s+a_{i+n+1}} \prod_{h=i}^{n-1} B_h(s)}$$

$$a_{2n+2}(s) = \lambda_r \left[ Y_n(s) + \frac{\lambda_w}{s+a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i(s) \right]$$
$$a_{2n+3}(s) = \lambda_r \left[ 1 + \sum_{i=1}^{n-1} Y_i(s) \right]$$

$$G(s) = s\left(1 + \sum_{i=1}^{n} Y_i(s) + \frac{\lambda_w}{s+a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i(s) + \sum_{j=2n+2}^{2n+3} a_j(s)\,\frac{1-W_j(s)}{s}\right) \qquad (14)$$
$$W_j(s) = \int_0^{\infty} e^{-sx}\, w_j(x)\, dx \qquad (\text{for } j = 2n+2, 2n+3) \qquad (15)$$

$$w_j(x) = \exp\left[-\int_0^x \mu_j(\delta)\, d\delta\right] \mu_j(x)$$

where w_j(x) is the failed robot-safety system repair time probability density function.
The Laplace transform of the robot-safety system availability with one normally working safety unit, the switch and the robot is given by:
$$AV_{rs}(s) = \sum_{i=0}^{n-1} P_i(s) + \sum_{i=n+1}^{2n} P_i(s) = \frac{1 + \displaystyle\sum_{i=1}^{n-1} Y_i(s) + \dfrac{\lambda_w}{s+a_{n+1}} + \displaystyle\sum_{i=n+2}^{2n} V_i(s)}{G(s)} \qquad (16)$$

$$AV_r(s) = \sum_{i=0}^{2n+1} P_i(s) = \frac{1 + \dfrac{\lambda_w}{s+a_{n+1}} + \displaystyle\sum_{i=1}^{n} Y_i(s) + \displaystyle\sum_{i=n+2}^{2n+1} V_i(s)}{G(s)} \qquad (17)$$
Taking the inverse Laplace transforms of the above equations, we can obtain the time-dependent state probabilities P_i(t) and P_j(t), and hence the robot-safety system availabilities.
3.1 Robot Safety System Time Dependent Analysis For A Special Case
For two safety units (i.e., one working and one on standby), substituting n = 2 into Equations (9)-(16) we get:
$$P_0(s) = \frac{1}{s\left[1 + \displaystyle\sum_{i=1}^{2} Y_i(s) + \dfrac{\lambda_w}{s+a_3} + \displaystyle\sum_{i=4}^{5} V_i(s) + \displaystyle\sum_{j=6}^{7} a_j(s)\,\dfrac{1-W_j(s)}{s}\right]} = \frac{1}{G(s)} \qquad (18)$$

$$P_3(s) = \frac{\lambda_w}{s+a_3}\, P_0(s) \qquad (20)$$

$$P_j(s) = a_j(s)\,\frac{1-W_j(s)}{s}\, P_0(s) \qquad (22)$$
where

$$Y_2(s) = \frac{\dfrac{\lambda_s^2}{L_1(s)} + \dfrac{\lambda_s \lambda_w \mu_w}{(s+a_3)(s+a_5)} + \dfrac{\lambda_s \lambda_w \mu_w \mu_s}{(s+a_4)(s+a_5)\, L_1(s)}}{L_2(s) - \dfrac{\lambda_s \mu_s}{L_1(s)} - \dfrac{\lambda_s \lambda_w \mu_w \mu_s}{(s+a_4)(s+a_5)\, L_1(s)}}$$

$$Y_1(s) = \frac{\lambda_s}{L_1(s)} + \frac{\mu_s}{L_1(s)}\, Y_2(s)$$

$$V_5(s) = \frac{\lambda_s \lambda_w}{(s+a_3)(s+a_5)} + \frac{\lambda_s \lambda_w}{(s+a_4)(s+a_5)}\, Y_1(s) + \frac{\lambda_w}{s+a_5}\, Y_2(s)$$

$$V_4(s) = \frac{\lambda_w}{s+a_4}\, Y_1(s)$$

$$a_6(s) = \lambda_r \left[ Y_2(s) + \frac{\lambda_w}{s+a_3} + \sum_{i=4}^{5} V_i(s) \right]$$

$$a_7(s) = \lambda_r \left[ 1 + Y_1(s) \right]$$

$$L_1(s) = (s+a_1) - \frac{\lambda_w \mu_w}{s+a_4}$$

$$L_2(s) = (s+a_2) - \frac{\lambda_w \mu_w}{s+a_5}$$
$$G(s) = s\left[1 + \sum_{i=1}^{2} Y_i(s) + \frac{\lambda_w}{s+a_3} + \sum_{i=4}^{5} V_i(s) + \sum_{j=6}^{7} a_j(s)\,\frac{1-W_j(s)}{s}\right] \qquad (23)$$
The Laplace transform of the robot-safety system availability with one normally working safety unit, the switch and the robot is given by:
$$AV_{rs}(s) = \sum_{i=0}^{1} P_i(s) + \sum_{i=3}^{4} P_i(s) = \frac{1 + Y_1(s) + \dfrac{\lambda_w}{s+a_3} + V_4(s)}{G(s)} \qquad (24)$$

$$AV_r(s) = \sum_{i=0}^{5} P_i(s) = \frac{1 + \displaystyle\sum_{i=1}^{2} Y_i(s) + \dfrac{\lambda_w}{s+a_3} + \displaystyle\sum_{i=4}^{5} V_i(s)}{G(s)} \qquad (25)$$
Taking the inverse Laplace transforms of the above equations, we can obtain the time-dependent state probabilities P_i(t) and P_j(t), and hence the robot-safety system availabilities.
Thus, when the failed robot-safety system repair time x is exponentially distributed, the probability density function is expressed by

$$w_j(x) = \mu_j e^{-\mu_j x} \qquad (\mu_j > 0,\; j = 6,7) \qquad (26)$$
where
x is the repair time variable and μ j is the constant repair rate of state j.
Substituting Equation (26) into Equation (15), we get

$$W_j(s) = \frac{\mu_j}{s+\mu_j} \qquad (\mu_j > 0,\; j = 6,7) \qquad (27)$$
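With the exponentially distributed repair times of Equation (26), the whole model becomes an ordinary finite-state Markov chain, so the time-dependent probabilities behind Figure 3 can also be approximated by direct numerical integration of the state equations instead of Laplace inversion. A Python sketch for n = 2 follows; the rate values are illustrative, not those used for the figure:

```python
# Sketch: Euler integration of the n = 2 state equations with constant
# repair rates m6, m7 (exponential repair, Eq. 26). States 0..7 follow the
# chapter's numbering; all rate values below are illustrative only.
ls, lw, lr = 0.0002, 0.001, 0.00009     # lambda_s, lambda_w, lambda_r
ms, mw = 0.005, 0.0003                  # mu_s, mu_w
m6, m7 = 0.004, 0.004                   # repair rates of the failed-system states

a = [ls + lw + lr,                      # a0
     ls + lw + lr + ms,                 # a1
     lw + lr + ms,                      # a2
     ls + lr + mw,                      # a3
     ls + lr + mw,                      # a4
     lr + mw]                           # a5

def deriv(P):
    return [
        -a[0]*P[0] + ms*P[1] + mw*P[3] + m6*P[6] + m7*P[7],   # Eq. (1)
        -a[1]*P[1] + ls*P[0] + ms*P[2] + mw*P[4],             # Eq. (2)
        -a[2]*P[2] + ls*P[1] + mw*P[5],                       # Eq. (3)
        -a[3]*P[3] + lw*P[0],                                 # Eq. (4), i = 3
        -a[4]*P[4] + lw*P[1],                                 # Eq. (4), i = 4
        -a[5]*P[5] + ls*(P[3] + P[4]) + lw*P[2],              # Eq. (5)
        -m6*P[6] + lr*(P[2] + P[3] + P[4] + P[5]),            # Eq. (7)
        -m7*P[7] + lr*(P[0] + P[1]),                          # Eq. (8)
    ]

P = [1.0] + [0.0] * 7                   # P0(0) = 1
dt, T = 0.5, 2000.0
t = 0.0
while t < T:
    P = [p + dt * dp for p, dp in zip(P, deriv(P))]
    t += dt

availability = P[0] + P[1] + P[3] + P[4]   # AVrs states, cf. Eq. (24)
print("P0(%g) = %.4f, AVrs = %.4f, sum = %.6f" % (T, P[0], availability, sum(P)))
```

Because every loss term in one equation reappears as a gain term in another, the derivatives sum to zero and the total probability stays at 1 throughout the integration, which is a useful sanity check on any transcription of the equations.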
Figure 3. Time-dependent probability plots for a robot-safety system with exponentially distributed failed-system repair time.
As time approaches infinity, all state probabilities reach their steady-state values. Thus:

$$a_i P_i = \lambda_s P_{i-1} + \mu_s P_{i+1} + \mu_w P_{i+n+1} \qquad (\text{for } i = 1,2,\ldots,n-1) \qquad (29)$$

$$a_n P_n = \lambda_s P_{n-1} + \mu_w P_{2n+1} \qquad (30)$$

$$a_i P_i = \lambda_w P_{i-n-1} \qquad (\text{for } i = n+1,n+2,\ldots,2n) \qquad (31)$$
$$a_{2n+1} P_{2n+1} = \lambda_s \sum_{i=n+1}^{2n} P_i + \lambda_w P_n \qquad (32)$$

where

$$a_0 = \lambda_s + \lambda_w + \lambda_r$$
$$a_i = \lambda_s + \lambda_w + \lambda_r + \mu_s \qquad (\text{for } i = 1,2,\ldots,n-1)$$
$$a_n = \lambda_w + \lambda_r + \mu_s$$
$$a_i = \lambda_s + \lambda_r + \mu_w \qquad (\text{for } i = n+1,n+2,\ldots,2n)$$
$$a_{2n+1} = \lambda_r + \mu_w$$

$$\frac{dP_j(x)}{dx} + \mu_j(x)\, P_j(x) = 0 \qquad (\text{for } j = 2n+2, 2n+3) \qquad (33)$$

$$P_{2n+2}(0) = \lambda_r \sum_{i=n}^{2n+1} P_i \qquad (34)$$

$$P_{2n+3}(0) = \lambda_r \sum_{i=0}^{n-1} P_i \qquad (35)$$

$$\sum_{i=0}^{2n+1} P_i + \sum_{j=2n+2}^{2n+3} P_j = 1 \qquad (36)$$
We get:

$$P_0 = \left( 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} a_j E_j[x] \right)^{-1} = \frac{1}{G} \qquad (37)$$

$$P_{n+1} = \frac{\lambda_w}{a_{n+1}}\, P_0 \qquad (40)$$

$$P_j = a_j E_j[x]\, P_0 \qquad (41)$$
where

$$D_1 = L_1$$

$$D_i = L_i - \frac{\lambda_s \mu_s}{D_{i-1}} \qquad (\text{for } i = 2,\ldots,n)$$

$$A_i = \frac{\lambda_s^i}{\prod_{h=1}^{i} D_h} \qquad (\text{for } i = 1,2,\ldots,n-1)$$

$$B_i = \frac{\mu_s}{D_i} \qquad (\text{for } i = 1,2,\ldots,n-1)$$

$$Y_i = \sum_{h=i}^{n-1} A_h \prod_{k=i}^{h-1} B_k + \prod_{h=i}^{n-1} B_h\, Y_n \qquad (\text{for } i = 1,2,\ldots,n-1)$$

$$V_i = \frac{\lambda_w}{a_i}\, Y_{i-n-1} \qquad (\text{for } i = n+2,\ldots,2n)$$

$$V_{2n+1} = \frac{\lambda_s \lambda_w}{a_{n+1} a_{2n+1}} + \frac{\lambda_s}{a_{2n+1}} \sum_{i=1}^{n-1} \frac{\lambda_w}{a_{i+n+1}}\, Y_i + \frac{\lambda_w}{a_{2n+1}}\, Y_n$$

$$Y_n = \frac{\lambda_s A_{n-1} + \dfrac{\lambda_s \lambda_w \mu_w}{a_{n+1} a_{2n+1}} + \dfrac{\lambda_s \mu_w}{a_{2n+1}} \displaystyle\sum_{i=1}^{n-1} \dfrac{\lambda_w}{a_{i+n+1}} \sum_{h=i}^{n-1} A_h \prod_{k=i}^{h-1} B_k}{L_n - \lambda_s B_{n-1} - \dfrac{\lambda_s \mu_w}{a_{2n+1}} \displaystyle\sum_{i=1}^{n-1} \dfrac{\lambda_w}{a_{i+n+1}} \prod_{h=i}^{n-1} B_h}$$

$$a_{2n+2} = \lambda_r \left( Y_n + \sum_{i=n+2}^{2n+1} V_i + \frac{\lambda_w}{a_{n+1}} \right)$$

$$a_{2n+3} = \lambda_r \left( 1 + \sum_{i=1}^{n-1} Y_i \right)$$
$$G = 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} a_j E_j[x] \qquad (42)$$

$$E_j[x] = \int_0^{\infty} \exp\left[-\int_0^x \mu_j(\delta)\, d\delta\right] dx = \int_0^{\infty} x\, w_j(x)\, dx \qquad (\text{for } j = 2n+2, 2n+3) \qquad (43)$$

where w_j(x) is the failed robot-safety system repair time probability density function, and E_j[x] is the mean time to robot-safety system repair when the failed robot-safety system is in state j.
The generalized steady-state availability of the robot-safety system with one normally working safety unit, the switch and the robot is given by

$$SSAV_{rs} = \sum_{i=0}^{n-1} P_i + \sum_{i=n+1}^{2n} P_i = \frac{1 + \displaystyle\sum_{i=1}^{n-1} Y_i + \dfrac{\lambda_w}{a_{n+1}} + \displaystyle\sum_{i=n+2}^{2n} V_i}{G} \qquad (44)$$

Similarly, the generalized steady-state availability of the robot-safety system with or without a working safety unit is

$$SSAV_r = \sum_{i=0}^{2n+1} P_i = \frac{1 + \displaystyle\sum_{i=1}^{n} Y_i + \dfrac{\lambda_w}{a_{n+1}} + \displaystyle\sum_{i=n+2}^{2n+1} V_i}{G} \qquad (45)$$
For different failed robot-safety system repair time distributions, we get different expressions for G, as follows:

1) For the failed robot-safety system Gamma-distributed repair time x, the probability density function is

$$w_j(x) = \frac{\mu_j^{\beta} x^{\beta-1} e^{-\mu_j x}}{\Gamma(\beta)} \qquad (\beta > 0,\; j = 2n+2, 2n+3) \qquad (46)$$
where x is the repair time variable, Γ(β) is the gamma function, μ_j is the scale parameter and β is the shape parameter.
Thus, the mean time to robot-safety system repair is given by

$$E_j[x] = \int_0^{\infty} x\, w_j(x)\, dx = \frac{\beta}{\mu_j} \qquad (\beta > 0,\; j = 2n+2, 2n+3) \qquad (47)$$

$$G = 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} a_j\, \frac{\beta}{\mu_j} \qquad (48)$$
2) For the failed robot-safety system Weibull-distributed repair time x, the probability density function is

$$w_j(x) = \mu_j \beta x^{\beta-1} e^{-\mu_j x^{\beta}} \qquad (\beta > 0,\; j = 2n+2, 2n+3) \qquad (49)$$

where x is the repair time variable, μ_j is the scale parameter and β is the shape parameter.
Thus, the mean time to robot-safety system repair is given by

$$E_j[x] = \int_0^{\infty} x\, w_j(x)\, dx = \left(\frac{1}{\mu_j}\right)^{1/\beta} \frac{1}{\beta}\, \Gamma\!\left(\frac{1}{\beta}\right) \qquad (\beta > 0,\; j = 2n+2, 2n+3) \qquad (50)$$

$$G = 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} a_j \left(\frac{1}{\mu_j}\right)^{1/\beta} \frac{1}{\beta}\, \Gamma\!\left(\frac{1}{\beta}\right) \qquad (51)$$
3) For the failed robot-safety system Rayleigh-distributed repair time x, the probability density function is

$$w_j(x) = \mu_j x\, e^{-\mu_j x^2/2} \qquad (\mu_j > 0,\; j = 2n+2, 2n+3) \qquad (52)$$

where x is the repair time variable and μ_j is the distribution parameter.
Thus, the mean time to robot-safety system repair is given by
$$E_j[x] = \int_0^{\infty} x\, w_j(x)\, dx = \sqrt{\frac{\pi}{2\mu_j}} \qquad (\mu_j > 0,\; j = 2n+2, 2n+3) \qquad (53)$$

$$G = 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} a_j \sqrt{\frac{\pi}{2\mu_j}} \qquad (54)$$
4) For the failed robot-safety system Lognormal-distributed repair time x, the probability density function is

$$w_j(x) = \frac{1}{\sqrt{2\pi}\, x\, \sigma_{y_j}}\, e^{-\frac{(\ln x - \mu_{y_j})^2}{2\sigma_{y_j}^2}} \qquad (\text{for } j = 2n+2, 2n+3) \qquad (55)$$

where x is the repair time variable and ln x is the natural logarithm of x. The parameters μ_{y_j} and σ_{y_j} are related to the mean μ_{x_j} and standard deviation σ_{x_j} of the repair time by:

$$\sigma_{y_j} = \sqrt{\ln\left[ 1 + \left(\frac{\sigma_{x_j}}{\mu_{x_j}}\right)^2 \right]} \qquad (56)$$

$$\mu_{y_j} = \ln \sqrt{\frac{\mu_{x_j}^4}{\mu_{x_j}^2 + \sigma_{x_j}^2}} \qquad (57)$$

Thus, the mean time to robot-safety system repair is given by

$$E_j[x] = e^{\mu_{y_j} + \frac{\sigma_{y_j}^2}{2}} \qquad (\text{for } j = 2n+2, 2n+3) \qquad (58)$$

$$G = 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} a_j\, e^{\mu_{y_j} + \frac{\sigma_{y_j}^2}{2}} \qquad (59)$$
5) For the failed robot-safety system exponentially distributed repair time x, the probability density function is

$$w_j(x) = \mu_j e^{-\mu_j x} \qquad (\mu_j > 0,\; j = 2n+2, 2n+3) \qquad (60)$$

where x is the repair time variable and μ_j is the constant repair rate of state j.

$$E_j[x] = \int_0^{\infty} x\, w_j(x)\, dx = \frac{1}{\mu_j} \qquad (\mu_j > 0,\; j = 2n+2, 2n+3) \qquad (61)$$
$$G = 1 + \sum_{i=1}^{n} Y_i + \frac{\lambda_w}{a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i + \sum_{j=2n+2}^{2n+3} \frac{a_j}{\mu_j} \qquad (62)$$
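The five closed-form mean repair times in Equations (47), (50), (53), (58) and (61) are easy to tabulate. A small Python sketch (parameter values illustrative; a useful consistency check is that the Weibull mean with β = 1 reduces to the exponential mean 1/μ_j):

```python
# Sketch: closed-form mean repair times E_j[x] for one failed-system state,
# from Eqs. (47), (50), (53), (58) and (61). Parameter values are illustrative.
import math

def mean_gamma(mu, beta):            # Eq. (47): beta / mu
    return beta / mu

def mean_weibull(mu, beta):          # Eq. (50): (1/mu)^(1/beta) * (1/beta) * Gamma(1/beta)
    return (1.0 / mu) ** (1.0 / beta) * (1.0 / beta) * math.gamma(1.0 / beta)

def mean_rayleigh(mu):               # Eq. (53): sqrt(pi / (2 mu))
    return math.sqrt(math.pi / (2.0 * mu))

def mean_lognormal(mu_y, sigma_y):   # Eq. (58): exp(mu_y + sigma_y^2 / 2)
    return math.exp(mu_y + sigma_y ** 2 / 2.0)

def mean_exponential(mu):            # Eq. (61): 1 / mu
    return 1.0 / mu

mu = 0.004
print("exponential     :", mean_exponential(mu))
print("gamma  (beta=2) :", mean_gamma(mu, 2.0))
print("weibull(beta=1) :", mean_weibull(mu, 1.0))   # reduces to 1/mu
```

Each of these values is what gets multiplied by a_j in the corresponding expression for G, so swapping the repair time distribution only changes the E_j[x] factor.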
4.1 The Robot-Safety System Steady State Analysis For A Special Case
$$P_0 = \frac{1}{1 + \displaystyle\sum_{i=1}^{2} Y_i + \dfrac{\lambda_w}{a_3} + \displaystyle\sum_{i=4}^{5} V_i + \displaystyle\sum_{j=6}^{7} a_j E_j[x]} \qquad (63)$$

$$P_3 = \frac{\lambda_w}{a_3}\, P_0 \qquad (65)$$

$$P_j = a_j E_j[x]\, P_0 \qquad (67)$$
where

$$Y_2 = \frac{\dfrac{\lambda_s^2}{L_1} + \dfrac{\lambda_s \lambda_w \mu_w}{a_3 a_5} + \dfrac{\lambda_s \lambda_w \mu_w \mu_s}{a_4 a_5 L_1}}{L_2 - \dfrac{\lambda_s \mu_s}{L_1} - \dfrac{\lambda_s \lambda_w \mu_w \mu_s}{a_4 a_5 L_1}}$$

$$Y_1 = \frac{\lambda_s}{L_1} + \frac{\mu_s}{L_1}\, Y_2$$

$$V_5 = \frac{\lambda_s \lambda_w}{a_3 a_5} + \frac{\lambda_s \lambda_w}{a_4 a_5}\, Y_1 + \frac{\lambda_w}{a_5}\, Y_2$$

$$V_4 = \frac{\lambda_w}{a_4}\, Y_1$$

$$a_6 = \lambda_r \left[ Y_2 + \frac{\lambda_w}{a_3} + \sum_{i=4}^{5} V_i \right]$$

$$a_7 = \lambda_r \left[ 1 + Y_1 \right]$$

$$L_1 = a_1 - \frac{\lambda_w \mu_w}{a_4}$$

$$L_2 = a_2 - \frac{\lambda_w \mu_w}{a_5}$$
$$G = 1 + \sum_{i=1}^{2} Y_i + \frac{\lambda_w}{a_3} + \sum_{i=4}^{5} V_i + \sum_{j=6}^{7} a_j E_j[x] \qquad (68)$$

$$SSAV_{rs} = \sum_{i=0}^{1} P_i + \sum_{i=3}^{4} P_i = \frac{1 + Y_1 + \dfrac{\lambda_w}{a_3} + V_4}{G} \qquad (69)$$

$$SSAV_r = \sum_{i=0}^{5} P_i = \frac{1 + \displaystyle\sum_{i=1}^{2} Y_i + \dfrac{\lambda_w}{a_3} + \displaystyle\sum_{i=4}^{5} V_i}{G} \qquad (70)$$
Using the MATLAB computer program [10], the plots of Figure 4 were obtained. The plots show, as expected, that SSAV_r is greater than SSAV_rs, and that both increase slightly with increasing values of the safety unit repair rate.
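For the constant-repair-rate case, the steady-state probabilities of the n = 2 system can also be obtained by solving the balance equations directly as a linear system, which gives a numerical cross-check of Equations (63)-(70). A Python sketch with illustrative rate values (the matrix rows are the steady-state balance equations; one row is replaced by the normalisation condition, Eq. (36)):

```python
# Sketch: steady-state probabilities of the n = 2 robot-safety system with
# constant repair rates everywhere (exponential repair of the failed system),
# solved by Gauss-Jordan elimination. Rate values are illustrative only.
ls, lw, lr = 0.0002, 0.001, 0.00009     # lambda_s, lambda_w, lambda_r
ms, mw = 0.005, 0.0003                  # mu_s, mu_w
m6, m7 = 0.004, 0.004                   # failed-system repair rates

a0 = ls + lw + lr
a1 = ls + lw + lr + ms
a2 = lw + lr + ms
a3 = a4 = ls + lr + mw
a5 = lr + mw

# Row i: coefficients of P0..P7 in the balance equation for state i;
# the last row enforces sum(P) = 1 (Eq. 36).
A = [
    [-a0,  ms, 0.0,  mw, 0.0, 0.0,  m6,  m7],
    [ ls, -a1,  ms, 0.0,  mw, 0.0, 0.0, 0.0],
    [0.0,  ls, -a2, 0.0, 0.0,  mw, 0.0, 0.0],
    [ lw, 0.0, 0.0, -a3, 0.0, 0.0, 0.0, 0.0],
    [0.0,  lw, 0.0, 0.0, -a4, 0.0, 0.0, 0.0],
    [0.0, 0.0,  lw,  ls,  ls, -a5, 0.0, 0.0],
    [0.0, 0.0,  lr,  lr,  lr,  lr, -m6, 0.0],
    [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
]
b = [0.0] * 7 + [1.0]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

pi = solve(A, b)
ssav_rs = pi[0] + pi[1] + pi[3] + pi[4]     # states of Eq. (69)
ssav_r = sum(pi[0:6])                       # states of Eq. (70)
print("SSAV_rs = %.4f, SSAV_r = %.4f" % (ssav_rs, ssav_r))
```

The P7 balance equation is dropped because the eight balance equations are linearly dependent (their rank is 7); the normalisation row takes its place.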
Setting μ_j = 0 (for j = 2n+2, 2n+3) in Figure 2 and using the Markov method [11], we write the following equations for the modified figure:

$$\frac{dP_0(t)}{dt} + a_0 P_0(t) = \mu_s P_1(t) + \mu_w P_{n+1}(t) \qquad (71)$$

$$\frac{dP_i(t)}{dt} + a_i P_i(t) = \lambda_s P_{i-1}(t) + \mu_s P_{i+1}(t) + \mu_w P_{i+n+1}(t) \qquad (\text{for } i = 1,2,\ldots,n-1) \qquad (72)$$

$$\frac{dP_n(t)}{dt} + a_n P_n(t) = \lambda_s P_{n-1}(t) + \mu_w P_{2n+1}(t) \qquad (73)$$

$$\frac{dP_i(t)}{dt} + a_i P_i(t) = \lambda_w P_{i-n-1}(t) \qquad (\text{for } i = n+1,n+2,\ldots,2n) \qquad (74)$$

$$\frac{dP_{2n+1}(t)}{dt} + a_{2n+1} P_{2n+1}(t) = \lambda_s \sum_{i=n+1}^{2n} P_i(t) + \lambda_w P_n(t) \qquad (75)$$

$$\frac{dP_{2n+2}(t)}{dt} = \lambda_r \sum_{i=n}^{2n+1} P_i(t) \qquad (76)$$

$$\frac{dP_{2n+3}(t)}{dt} = \lambda_r \sum_{i=0}^{n-1} P_i(t) \qquad (77)$$
where

$$a_0 = \lambda_s + \lambda_w + \lambda_r$$
$$a_i = \lambda_s + \lambda_w + \lambda_r + \mu_s \qquad (\text{for } i = 1,2,\ldots,n-1)$$
$$a_n = \lambda_w + \lambda_r + \mu_s$$
$$a_i = \lambda_s + \lambda_r + \mu_w \qquad (\text{for } i = n+1,n+2,\ldots,2n)$$
$$a_{2n+1} = \lambda_r + \mu_w$$

At time t = 0, P_0(0) = 1 and all other initial state probabilities are equal to zero.
By solving Equations (71)-(77) with the aid of Laplace transforms, we get:
$$P_0(s) = \left[ s\left(1 + \sum_{i=1}^{n} Y_i(s) + \frac{\lambda_w}{s+a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i(s) + \sum_{j=2n+2}^{2n+3} \frac{a_j(s)}{s}\right)\right]^{-1} = \frac{1}{G(s)} \qquad (78)$$

$$P_{n+1}(s) = \frac{\lambda_w}{s+a_{n+1}}\, P_0(s) \qquad (81)$$
$$P_j(s) = \frac{a_j(s)}{s}\, P_0(s) \qquad (\text{for } j = 2n+2, 2n+3) \qquad (82)$$

$$G(s) = s\left[1 + \sum_{i=1}^{n} Y_i(s) + \frac{\lambda_w}{s+a_{n+1}} + \sum_{i=n+2}^{2n+1} V_i(s) + \sum_{j=2n+2}^{2n+3} \frac{a_j(s)}{s}\right] \qquad (83)$$
The Laplace transform of the robot-safety system reliability with one normally
working safety unit, the switch and the robot is given by:
$$R_{rs}(s) = \sum_{i=0}^{n-1} P_i(s) + \sum_{i=n+1}^{2n} P_i(s) = \frac{1 + \displaystyle\sum_{i=1}^{n-1} Y_i(s) + \dfrac{\lambda_w}{s+a_{n+1}} + \displaystyle\sum_{i=n+2}^{2n} V_i(s)}{G(s)} \qquad (84)$$

Similarly, the Laplace transform of the robot-safety system reliability with or without a working safety unit is

$$R_r(s) = \sum_{i=0}^{2n+1} P_i(s) = \frac{1 + \dfrac{\lambda_w}{s+a_{n+1}} + \displaystyle\sum_{i=1}^{n} Y_i(s) + \displaystyle\sum_{i=n+2}^{2n+1} V_i(s)}{G(s)} \qquad (85)$$
Using Equation (84) and Reference [11], the robot-safety system mean time to failure with one normally working safety unit, the switch and the robot is given by

$$MTTF_{rs} = \lim_{s \to 0} R_{rs}(s) = \frac{1 + \displaystyle\sum_{i=1}^{n-1} Y_i + \dfrac{\lambda_w}{a_{n+1}} + \displaystyle\sum_{i=n+2}^{2n} V_i}{\displaystyle\sum_{j=2n+2}^{2n+3} a_j} \qquad (86)$$

Similarly, using Equation (85) and Reference [11], the robot-safety system mean time to failure with or without a working safety unit is

$$MTTF_r = \lim_{s \to 0} R_r(s) = \frac{1}{\lambda_r} \qquad (87)$$
For the special case n = 2:

$$MTTF_{rs} = \frac{1 + Y_1 + \dfrac{\lambda_w}{a_3} + V_4}{\displaystyle\sum_{j=6}^{7} a_j} \qquad (88)$$

$$MTTF_r = \frac{1}{\lambda_r} \qquad (89)$$
where

$$Y_2 = \frac{\dfrac{\lambda_s^2}{L_1} + \dfrac{\lambda_s \lambda_w \mu_w}{a_3 a_5} + \dfrac{\lambda_s \lambda_w \mu_w \mu_s}{a_4 a_5 L_1}}{L_2 - \dfrac{\lambda_s \mu_s}{L_1} - \dfrac{\lambda_s \lambda_w \mu_w \mu_s}{a_4 a_5 L_1}}$$

$$Y_1 = \frac{\lambda_s}{L_1} + \frac{\mu_s}{L_1}\, Y_2$$

$$V_5 = \frac{\lambda_s \lambda_w}{a_3 a_5} + \frac{\lambda_s \lambda_w}{a_4 a_5}\, Y_1 + \frac{\lambda_w}{a_5}\, Y_2$$

$$V_4 = \frac{\lambda_w}{a_4}\, Y_1$$

$$a_6 = \lambda_r \left[ Y_2 + \frac{\lambda_w}{a_3} + \sum_{i=4}^{5} V_i \right]$$

$$a_7 = \lambda_r \left[ 1 + Y_1 \right]$$

$$L_1 = a_1 - \frac{\lambda_w \mu_w}{a_4}$$

$$L_2 = a_2 - \frac{\lambda_w \mu_w}{a_5}$$
For λ_s = 0.0002, λ_w = 0.001, μ_w = 0.0003 and λ_r = 0.00009, using Equations (88)-(89) and the MATLAB computer program [10], the MTTF_rs and MTTF_r plots of Figure 5 were obtained.

Figure 5. The robot-safety system mean time to failure plots for increasing values of the safety unit repair rate (μ_s).

These plots demonstrate that MTTF_r is greater than MTTF_rs, but only MTTF_rs increases with increasing values of μ_s.
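The closed forms (88)-(89) are straightforward to evaluate at the quoted parameter values. The Python sketch below does so for one illustrative μ_s value (the μ_s used for any particular point of Figure 5 is an assumption on our part); it reproduces the qualitative finding that MTTF_r exceeds MTTF_rs:

```python
# Sketch: evaluate MTTF_rs and MTTF_r from Eqs. (88)-(89) at the parameter
# values quoted in the text, with an illustrative safety-unit repair rate mu_s.
ls, lw, mw, lr = 0.0002, 0.001, 0.0003, 0.00009
ms = 0.005                               # mu_s (the quantity swept in Figure 5)

a1 = ls + lw + lr + ms
a2 = lw + lr + ms
a3 = a4 = ls + lr + mw
a5 = lr + mw

L1 = a1 - lw * mw / a4
L2 = a2 - lw * mw / a5

Y2 = (ls**2 / L1 + ls*lw*mw / (a3*a5) + ls*lw*mw*ms / (a4*a5*L1)) / \
     (L2 - ls*ms / L1 - ls*lw*mw*ms / (a4*a5*L1))
Y1 = ls / L1 + ms / L1 * Y2
V4 = lw / a4 * Y1
V5 = ls*lw / (a3*a5) + ls*lw / (a4*a5) * Y1 + lw / a5 * Y2
a6 = lr * (Y2 + lw / a3 + V4 + V5)
a7 = lr * (1 + Y1)

mttf_rs = (1 + Y1 + lw / a3 + V4) / (a6 + a7)    # Eq. (88)
mttf_r = 1 / lr                                  # Eq. (89)
print("MTTF_rs = %.0f h, MTTF_r = %.0f h" % (mttf_rs, mttf_r))
```

Sweeping ms over a range and re-evaluating mttf_rs reproduces the rising MTTF_rs curve of Figure 5, while mttf_r stays fixed at 1/λ_r.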
6. References
1. Zeldman, M.I., What Every Engineer Should Know About Robots, Marcel Dekker, New York, 1984.
2. Rudall, B.H., Automation and Robotics Worldwide: Reports and Surveys, Robotica, Vol. 14, 1996, pp. 243-251.
3. Dhillon, B.S., Fashandi, A.R.M., Safety and Reliability Assessment Techniques in Robotics, Robotica, Vol. 15, 1997, pp. 701-708.
4. Dhillon, B.S., Fashandi, A.R.M., Liu, K.L., Robot Systems Reliability and Safety: A Review, Journal of Quality in Maintenance Engineering, Vol. 8, No. 3, 2002, pp. 170-212.
5. Nicolaisen, P., Safety Problems Related to Robots, Robotics, Vol. 3, 1987, pp. 205-211.
6. Nagamachi, M., Ten Fatal Accidents Due to Robots in Japan, in Ergonomics of Hybrid Automated Systems, edited by W. Karwowski and H.R. Parsaei, Elsevier, Amsterdam, 1998, pp. 391-396.
7. Dhillon, B.S., Robot Reliability and Safety, Springer, New York, 1991.
8. Gaver, D.P., Time to Failure and Availability of Paralleled Systems with Repair, IEEE Transactions on Reliability, Vol. 12, 1963, pp. 30-38.
9. Garg, R.C., Dependability of a Complex System Having Two Types of Components, IEEE Transactions on Reliability, Vol. 12, 1963, pp. 11-15.
10. Hahn, B.D., Essential MATLAB for Scientists and Engineers, Butterworth-Heinemann, Oxford, 2002.
11. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca Raton, Florida, 1999.
Abdessemed Foudil, University of Batna, Algeria
De Xu, Institute of Automation, Chinese Academy of Sciences, P. R. China