Advanced Techniques of Industrial Robot Programming
Frank Shaopeng Cheng
virtual robot points and programming virtual robot motions in an interactive and virtual 3D
design environment (Cheng, 2003; Connolly, 2006). By the time a robot simulation design is
completed, the simulation robot program is able to move the virtual robot and end-effector
to all desired virtual robot points for performing the specified operations on the virtual
workpiece without collisions in the simulated workcell. However, because of the inevitable
dimensional differences of the components between the real robot workcell and the
simulated robot workcell, the virtual robot points created in the simulated workcell must be
adjusted relative to the actual position of the components in the real robot workcell before
they can be downloaded to the real robot system. This task involves the techniques of
calibrating the position coordinates of the simulation Device models with respect to the
user-defined real robot points.
In this chapter, advanced techniques used in creating industrial robot points are discussed
with the applications of the FANUC robot system, Delmia IGRIP robot simulation software,
and Dynalog DynaCal robot calibration system. In Section 2, the operation and
programming of an industrial robot system are described. This includes the concepts of
robot frames, positions, kinematics, motion segments, and motion instructions. The
procedures for teaching robot frames and robot points online with the real robot system are
introduced. Programming techniques for maintaining the accuracy of the existing robot
points are also discussed. Section 3 introduces the setup and integration of a two-dimensional (2D) vision system for performing vision-guided robot operations. This
includes establishing integrated measuring functions in both robot and vision systems and
modifying existing robot points through vision measurements for vision-identified
workpieces. Section 4 discusses the robot simulation and offline programming techniques.
This includes the concepts and procedures related to creating virtual robot points and
enhancing their accuracy for a real robot system. Section 5 explores the techniques for
transferring industrial robot points between two identical robot systems and the methods
for enhancing the accuracy of the transferred robot points through robot system calibration.
A summary is then presented in Section 6.
$$
{}^{R}T_{Def\_TCP} = P[n]^{R}_{Def\_TCP} =
\begin{bmatrix}
n_x & o_x & a_x & p_x \\
n_y & o_y & a_y & p_y \\
n_z & o_z & a_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix}, \qquad (1)
$$
where the coordinates of vector p = (px, py, pz) represent the location of frame Def_TCP and
the coordinates of three unit directional vectors n, o, and a represent the orientation of frame
Def_TCP. The inverse of ^{R}T_{Def_TCP} or P[n]^{R}_{Def_TCP}, denoted as (^{R}T_{Def_TCP})^{-1} or (P[n]^{R}_{Def_TCP})^{-1},
represents the position of frame R relative to frame Def_TCP, which is equal to the frame transformation
^{Def_TCP}T_{R}. Generally, the definition of a frame transformation matrix or its inverse described
above can be applied for measuring the relative position between any two frames in the
robot system (Niku, 2001). The orientation coordinates of frame Def_TCP in Eq. (1) can be
determined by Eq. (2)
$$
\begin{bmatrix}
n_x & o_x & a_x \\
n_y & o_y & a_y \\
n_z & o_z & a_z
\end{bmatrix}
= Rot(z,\theta_z)\,Rot(y,\theta_y)\,Rot(x,\theta_x)
= \begin{bmatrix}
\cos\theta_z\cos\theta_y & \cos\theta_z\sin\theta_y\sin\theta_x - \sin\theta_z\cos\theta_x & \cos\theta_z\sin\theta_y\cos\theta_x + \sin\theta_z\sin\theta_x \\
\sin\theta_z\cos\theta_y & \sin\theta_z\sin\theta_y\sin\theta_x + \cos\theta_z\cos\theta_x & \sin\theta_z\sin\theta_y\cos\theta_x - \cos\theta_z\sin\theta_x \\
-\sin\theta_y & \cos\theta_y\sin\theta_x & \cos\theta_y\cos\theta_x
\end{bmatrix}, \qquad (2)
$$
where transformations Rot(x, θx), Rot(y, θy), and Rot(z, θz) are pure rotations of frame
Def_TCP about the x-, y-, and z-axes of frame R by the angles θx (yaw), θy (pitch), and θz
(roll), respectively. Thus, a robot point P[n]^{R}_{Def_TCP} can also be represented by the Cartesian
coordinates in Eq. (3)

$$
P[n]^{R}_{Def\_TCP} = (x, y, z, w, p, r). \qquad (3)
$$
It is obvious that the robot's joint movements change the position of frame Def_TCP.
For an n-joint robot, the geometric motion relationship between the Cartesian coordinates of
a robot point P[n]^{R}_{Def_TCP} in frame R (i.e. the robot world space) and the corresponding
displacements of its joint variables q = (q1, q2, ..., qn) in the robot joint frames (i.e. the robot joint
space) is mathematically modeled as the robot's kinematics equations in Eq. (4)
$$
\begin{bmatrix}
n_x & o_x & a_x & p_x \\
n_y & o_y & a_y & p_y \\
n_z & o_z & a_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix}
= {}^{0}A_{1}(q_1)\,{}^{1}A_{2}(q_2)\cdots{}^{n-1}A_{n}(q_n), \qquad (4)
$$

where each ^{i-1}A_{i}(q_i) is the homogeneous transformation of joint frame i relative to joint frame i-1 as a function of the joint variable q_i.
In robot programming, the robot programmer creates a robot point P[n]^{R}_{Def_TCP} by first declaring
it in a robot program and then defining its coordinates in the robot system. The conventional
method is to record a particular robot pose with the robot teach pendant (Rehg, 2003).
Under the teaching mode, the robot programmer jogs the robot's joints to position the robot's
end-effector relative to the workpiece. As joint k moves, the serial pulse coder of the joint
measures the joint displacement qk relative to the zero position of the joint frame. The robot
system substitutes all measured values of q = (q1, q2, ..., qn) into the robot forward kinematics
equations to determine the corresponding Cartesian coordinates of frame Def_TCP in Eq. (1) and
Eq. (3). After the robot programmer records a P[n]^{R}_{Def_TCP} with the teach pendant, its Cartesian
coordinates and the corresponding joint values are saved in the robot system. The robot
programmer may use the Representation softkey on the teach pendant to automatically
convert and display the joint values and Cartesian coordinates of a taught robot point
P[n]^{R}_{Def_TCP}. It is important to notice that the Cartesian coordinates in Eq. (3) are the standard
representation of a P[n]^{R}_{Def_TCP} in the industrial robot system, and its joint representation always
uniquely defines the position of frame Def_TCP (i.e. the robot pose) in frame R.
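As an illustration of this joint-to-Cartesian mapping, the short Python sketch below chains homogeneous link transformations in the spirit of Eq. (4) to obtain the pose of frame Def_TCP from a set of joint values. The planar joint layout and the link lengths are hypothetical and only meant to show the computation, not the kinematics of any particular FANUC model.

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]], dtype=float)

def trans(x, y, z):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(q, link_lengths):
    """Chain one rotation plus one link translation per joint, in the style of
    Eq. (4), to get the pose of frame Def_TCP relative to frame R."""
    T = np.eye(4)
    for qi, li in zip(q, link_lengths):
        T = T @ rot_z(qi) @ trans(li, 0.0, 0.0)
    return T

# Hypothetical 3-joint planar arm with 400/350/100 mm links.
q = np.radians([30.0, -45.0, 15.0])
T_def_tcp = forward_kinematics(q, [400.0, 350.0, 100.0])
print("p =", T_def_tcp[:3, 3])          # location (px, py, pz) of Def_TCP in frame R
print("n o a =\n", T_def_tcp[:3, :3])   # orientation vectors of Def_TCP
```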
In robot programming, the robot programmer defines a motion segment of frame Def_TCP by
using two taught robot points in a robot motion instruction. During the execution of a motion
instruction, the robot system utilizes the trajectory planning method called linear segment with
parabolic blends to control the joint motion and implement the actual trajectory of frame
Def_TCP through one of the two user-specified motion types. The joint motion type allows the
robot system to start and end the motion of all robot joints at the same time resulting in an
unpredictable, but repeatable trajectory for frame Def_TCP. The Cartesian motion type allows
the robot system to move frame Def_TCP along a user-specified Cartesian path such as a straight
line or a circular arc in frame R during the motion segment, which is implemented in three steps.
First, the robot system interpolates a number of intermediate points along the specified Cartesian
path in the motion segment. Then, the proper joint values for each interpolated robot point are
calculated by the robot inverse kinematics equations. Finally, the joint motion type is applied
to move the robot joints between two consecutive interpolated robot points.
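The interpolation step of the Cartesian motion type can be sketched as follows. The fragment generates intermediate locations along a straight line between two taught points and maps each one to joint values; `inverse_kinematics` is a stand-in for the robot system's own solver (not a real API), and orientation interpolation is omitted for brevity.

```python
import numpy as np

def interpolate_linear(p_start, p_end, step_mm=2.0):
    """Interpolate intermediate TCP locations along a straight line between
    two taught points (locations only; orientation handling omitted)."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    n_steps = max(int(np.ceil(np.linalg.norm(p_end - p_start) / step_mm)), 1)
    return [p_start + (p_end - p_start) * k / n_steps for k in range(n_steps + 1)]

def follow_linear_segment(p1, p2, inverse_kinematics):
    """Convert each interpolated point to joint values with the (placeholder)
    inverse-kinematics solver, yielding the joint-space trajectory."""
    return [inverse_kinematics(p) for p in interpolate_linear(p1, p2)]
```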
Different robot languages provide the robot systems with motion instructions in different formats.
The motion instruction of FANUC Teach Pendant Programming (TPP) language (Fanuc, 2007)
allows the robot programmer to define a motion segment in one statement that includes the
robot point P[n], motion type, speed, motion termination type, and associated motion options.
Table 1 shows two motion instructions used in a FANUC TP program.
1. J P[1] 50% FINE
   Moves the TCP frame to robot point P[1] with the Joint motion type (J) at 50% of the default maximum joint speed, and stops exactly at P[1] with a Fine motion termination type.
2. L P[2] 100 mm/sec FINE
   Utilizes the Linear motion type (L) to move the TCP frame along a straight line from P[1] to P[2] at a TCP speed of 100 mm/sec with a Fine motion termination type.
Table 1. Motion instructions of FANUC TPP language
The robot system also allows the robot programmer to define a user tool frame UT[k] for the actual tool-tip point of the end-effector and to record a robot point as the position of UT[k] in frame R, as shown in Eq. (5)

$$
P[n]^{R}_{UT[k]} = {}^{R}T_{UT[k]}. \qquad (5)
$$

It is obvious that a robot point P[n]^{R}_{Def_TCP} in Eq. (1) or Eq. (3) can be taught with different
UT[k], thus, represented in different Cartesian coordinates in the robot system as shown in
Eq. (6)

$$
P[n]^{R}_{UT[k]} = P[n]^{R}_{Def\_TCP}\,{}^{Def\_TCP}T_{UT[k]}. \qquad (6)
$$
1. UTOOL_NUM = 1
   Set UT[1] frame to be the current active UT.
Table 2. UT[k] frame selection instructions of FANUC TPP language
To define a UT[k] for an actual tool-tip point PT-Ref whose coordinates (x, y, z, w, p, r) in
frame Def_TCP are unknown, the robot programmer must follow the UT Frame Setup
procedure provided by the robot system and teach six robot points P[n]^{R}_{Def_TCP}
(for n = 1, 2, ..., 6) with respect to PT-Ref and a reference point PS-Ref on a tool-reachable
surface. The three-point method as shown in Eq. (7) and Eq. (8) utilizes the first three
taught robot points in the UT Frame Setup procedure to determine the UT[k] origin.
Suppose that the coordinates of vector ^{Def_TCP}p = [pn, po, pa]^T represent point PT-Ref in frame
Def_TCP. Then, it can be determined in Eq. (7)

$$
{}^{Def\_TCP}p = (T_1)^{-1}\,{}^{R}p, \qquad (7)
$$

where the coordinates of vector ^{R}p = [px, py, pz]^T represent point PT-Ref in frame R and T1
represents the first taught robot point P[1]^{R}_{Def_TCP} when point PT-Ref touches point PS-Ref. The
coordinates of vector ^{R}p = [px, py, pz]^T also represent point PS-Ref in frame R and can be
solved by the three linear equations in Eq. (8)
$$
(I - T_2 T_3^{-1})\,{}^{R}p = 0, \qquad (8)
$$

where transformations T2 and T3 represent the other two taught robot points P[2]^{R}_{Def_TCP} and
P[3]^{R}_{Def_TCP} in the UT Frame Setup procedure, respectively, when point PT-Ref is at point PS-Ref.
To ensure the UT[k] accuracy, these three robot points must be taught with point PT-Ref
touching point PS-Ref from three different approach orientations. Practically, P[2]^{R}_{Def_TCP} (or
P[3]^{R}_{Def_TCP}) can be taught by first rotating frame Def_TCP about its x-axis (or y-axis) by at
least 90 degrees (or 60 degrees) while the tool is at P[1]^{R}_{Def_TCP}, and then moving point PT-Ref
back to point PS-Ref. A UT[k] taught with the three-point method has the same orientation
as frame Def_TCP.
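The computation behind the three-point method can be sketched numerically. The helper below assumes the three recorded poses of frame Def_TCP are available as 4x4 homogeneous matrices and solves the touch constraints of Eqs. (7) and (8) in a least-squares form over all pose pairs (a common, numerically robust variant of the same idea; the vendor's own routine may differ).

```python
import numpy as np

def tool_tip_from_three_poses(T1, T2, T3):
    """Estimate the tool-tip location (the UT[k] origin) from three recorded
    poses of frame Def_TCP in which the tip touches the same fixed surface
    point PS-Ref. Solves the constraints Ti * d = p (same p for all i) in
    least squares for d (tip in frame Def_TCP) and returns d and p (tip in R)."""
    Ts = [np.asarray(T, dtype=float) for T in (T1, T2, T3)]
    rows, rhs = [], []
    for Ta, Tb in ((Ts[0], Ts[1]), (Ts[0], Ts[2]), (Ts[1], Ts[2])):
        # Ta*d = Tb*d  ->  (Ra - Rb) d = tb - ta
        rows.append(Ta[:3, :3] - Tb[:3, :3])
        rhs.append(Tb[:3, 3] - Ta[:3, 3])
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    d, *_ = np.linalg.lstsq(A, b, rcond=None)   # tip in frame Def_TCP
    p = Ts[0][:3, :3] @ d + Ts[0][:3, 3]        # tip in frame R (Eq. (7) rearranged)
    return d, p
```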
(Figure: the tool-tip reference point PT-Ref and the surface reference point PS-Ref used in the UT Frame Setup procedure.)
When the actual tool-tip point moves so that frame UT[k] becomes UT[k]', the change can be expressed through the frame offset ^{Def_TCP}T_{Def_TCP'} in Eq. (9)

$$
{}^{Def\_TCP}T_{UT[k]} = {}^{Def\_TCP}T_{Def\_TCP'}\,{}^{Def\_TCP'}T_{UT[k]'}, \qquad (9)
$$

where frame Def_TCP' represents the position of frame Def_TCP after frame UT[k] moves
to UT[k]'. In this case, the pre-taught robot point P[n]^{R}_{UT[k]} can be shifted into the
corresponding robot point P[n]'^{R}_{UT[k]'} through Eq. (10)

$$
{}^{Def\_TCP}T_{UT[k]'} = {}^{Def\_TCP}T_{Def\_TCP'}\,{}^{Def\_TCP'}T_{UT[k]'}. \qquad (10)
$$
The industrial robot system usually implements Eq. (9) and Eq. (10) as both a system utility
function and a program instruction. As a system utility function, the offset ^{Def_TCP}T_{Def_TCP'}
changes the position of frame Def_TCP in the robot system so that the robot programmer is
able to change the current UT[k] of a taught P[n] into a different UT[k]' while keeping the
same Cartesian coordinates of P[n] in frame R. As a program instruction, ^{Def_TCP}T_{Def_TCP'}
shifts the pre-taught robot point P[n]^{R}_{UT[k]} into the corresponding point P[n]'^{R}_{UT[k]'} without
changing the position of frame Def_TCP. Table 3 shows the UT[k] offset instructions of
FANUC TPP language for Eq. (10).
(Figure: the frame transformations ^{Def_TCP}T_{UT[k]}, ^{Def_TCP}T_{Def_TCP'}, ^{Def_TCP'}T_{UT[k]'}, and ^{Def_TCP}T_{UT[k]'} involved in shifting frame UT[k] to UT[k]'.)
1. Tool_Offset Condition PR[x], UTOOL[k]
   Offset value ^{Def_TCP}T_{Def_TCP'} is stored in a user-specified position register PR[x].
2. J P[n] 50% FINE Tool_Offset
   The Tool_Offset option in the motion instruction shifts the existing robot point P[n]^{R}_{UT[k]} into the corresponding point P[n]'^{R}_{UT[k]'}.
Table 3. UT[k] offset instructions of FANUC TPP language
In addition to UT[k], a robot point can be recorded relative to a user frame UF[i] defined in frame R, as shown in Eq. (11)

$$
P[n]^{UF[i]}_{UT[k]} = {}^{UF[i]}T_{UT[k]}. \qquad (11)
$$

It is obvious that a robot point P[n]^{R}_{Def_TCP} in Eq. (1) or Eq. (3) can be taught with different
UT[k] and UF[i], thus, represented in different Cartesian coordinates in the robot system as
shown in Eq. (12)

$$
P[n]^{UF[i]}_{UT[k]} = ({}^{R}T_{UF[i]})^{-1}\,P[n]^{R}_{Def\_TCP}\,{}^{Def\_TCP}T_{UT[k]}. \qquad (12)
$$

However, the joint representation of a P[n]^{R}_{Def_TCP} uniquely defines the robot pose.
The robot programmer can directly define a UF[i] with a known robot position measured in
frame R. Table 4 shows the UF[i] setup instructions of FANUC TPP language.
1. UFRAME[i] = PR[x]
   Assign the value of a robot position register PR[x] to UF[i].
2. UFRAME[i] = LPOS
   Assign the current coordinates of frame Def_TCP to UF[i].
3. UFRAME_NUM = i
   Set UF[i] to be active in the robot system.
Table 4. UF[i] setup instructions of FANUC TPP language
$$
c = a \times b, \qquad (13)
$$
where the coordinates of vectors a and b are determined by the location coordinates (x, y, z)
of robot points P[1] (i.e. the positive x-direction point), P[2] (i.e. the positive y-direction
point), and P[3] (i.e. the system orient-origin point) in R frame as shown in Fig. 3.
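The construction in Eq. (13) can be written out as a short sketch. The helper below builds a 4x4 transform for UF[i] from the three taught point locations, taking P[3] as the orient-origin, P[1] as the positive x-direction point, and P[2] as the positive y-direction point; the function name and the exact normalization steps are illustrative rather than the controller's internal routine.

```python
import numpy as np

def uf_from_three_points(p1, p2, p3):
    """Build frame UF[i] from three taught point locations (x, y, z):
    p3 = orient-origin, p1 = positive x-direction point, p2 = positive
    y-direction point. Returns the 4x4 homogeneous transform R_T_UF[i]."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    a = p1 - p3                     # vector toward the +x point
    b = p2 - p3                     # vector toward the +y point
    x = a / np.linalg.norm(a)       # unit x-axis of UF[i]
    c = np.cross(a, b)              # Eq. (13): c = a x b gives the z-axis direction
    z = c / np.linalg.norm(c)
    y = np.cross(z, x)              # y-axis completes the right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p3
    return T
```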
With a taught UF[i], the robot programmer is able to teach a group of robot points relative to
it and shift the taught points through its offset value. Fig. 4 shows the method for shifting a
taught robot point P[n]^{UF[i]}_{UT[k]} with the offset of UF[i].
$$
{}^{UF[i]}T_{UT[k]'} = {}^{UF[i]}T_{UF[i]'}\,{}^{UF[i]'}T_{UT[k]'} \quad\text{or}\quad P[n]^{UF[i]}_{UT[k]} = P[n]'^{\,UF[i]'}_{UT[k]'}, \qquad (14)
$$
where frame UF[i]' represents the position of frame UF[i] after P[n] becomes P[n]'. Also,
assume that transformation ^{UF[i]}T_{UF[i]'} represents the position change of UF[i]' relative to
UF[i]; thus, transformation ^{UF[i]}T_{UT[k]} (or robot point P[n]^{UF[i]}_{UT[k]}) can be converted (or shifted)
to ^{UF[i]}T_{UT[k]'} (or P[n]'^{UF[i]}_{UT[k]'}) as shown in Eq. (15)
$$
{}^{UF[i]}T_{UT[k]'} = {}^{UF[i]}T_{UF[i]'}\,{}^{UF[i]}T_{UT[k]} \quad\text{or}\quad P[n]'^{\,UF[i]}_{UT[k]'} = {}^{UF[i]}T_{UF[i]'}\,P[n]^{UF[i]}_{UT[k]}. \qquad (15)
$$
Usually, the industrial robot system implements Eq. (14) and Eq. (15) as both a system utility
function and a program instruction. As a system utility function, offset ^{UF[i]}T_{UF[i]'} changes the
current UF[i] of a taught robot point P[n] into a different UF[i]' without changing its
Cartesian coordinates in frame R. As a program instruction, ^{UF[i]}T_{UF[i]'} shifts a taught robot
point P[n]^{UF[i]}_{UT[k]} into the corresponding point P[n]'^{UF[i]}_{UT[k]'} without changing its original UF[i].
Table 5 shows the UF[i] offset instructions of FANUC TPP language for Eq. (15).
3. Offset Condition PR[x], UFRAME[i]
   Offset value ^{UF[i]}T_{UF[i]'} is stored in a user-specified position register PR[x].
4. J P[n] 50% FINE Offset
   The Offset option in the motion instruction shifts the existing robot point P[n]^{UF[i]}_{UT[k]} into the corresponding point P[n]'^{UF[i]}_{UT[k]'}.
Table 5. UF[i] offset instructions of FANUC TPP language
In the industrial robot system, a position register PR[x] functions to hold robot position
data such as a robot point P[n], the current value of frame Def_TCP (LPOS), or the value of a
user-defined robot frame. Different robot languages provide different instructions for
manipulating PR[x]. When a PR[x] is taught in a motion instruction, its Cartesian
coordinates are defined relative to the current active UT[k] and UF[i] in the robot system.
Unlike a taught robot point P[n]^{UF[i]}_{UT[k]}, whose UT[k] and UF[i] cannot be changed in a robot
program, the UT[k] and UF[i] of a taught PR[x] are always the current active ones in the
robot program. This feature allows the robot programmer to use the Cartesian coordinates
of a PR[x] as the offset of the current active UF[i] (i.e. ^{UF[i]}T_{UF[i]'}) in the robot program for
shifting the robot points as discussed above.
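To make the offset mechanism concrete, the sketch below converts Cartesian coordinates (x, y, z, w, p, r) to a 4x4 transform using the rotation order of Eq. (2) and then applies a UF[i] offset to taught points as in Eq. (15). The assumption that w, p, r are the yaw, pitch, and roll angles in degrees, and the helper names, are illustrative.

```python
import numpy as np

def pose_to_matrix(x, y, z, w, p, r):
    """Convert Cartesian coordinates (x, y, z, w, p, r) to a 4x4 transform,
    taking w, p, r as yaw/pitch/roll about x, y, z in degrees (Eq. (2))."""
    ax, ay, az = np.radians([w, p, r])
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def shift_points_by_uf_offset(T_offset, taught_points):
    """Eq. (15): each shifted point (relative to UF[i]) is the frame offset
    UF[i]_T_UF[i]' multiplied by the taught point (relative to UF[i])."""
    return [T_offset @ T for T in taught_points]

# Example: an offset of +5 mm in x and +2 degrees yaw applied to one taught point.
T_offset = pose_to_matrix(5.0, 0.0, 0.0, 2.0, 0.0, 0.0)
taught = [pose_to_matrix(350.0, 120.0, 40.0, 0.0, 0.0, 90.0)]
shifted = shift_points_by_uf_offset(T_offset, taught)
```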
With a vision-calibrated user frame UF[i]cal established for the object at position Obj[n], the robot points for the operation are taught relative to UF[i]cal as shown in Eq. (16)

$$
P[n]^{UF[i]cal}_{UT[k]} = {}^{UF[i]cal}T_{Obj[n]}\,{}^{Obj[n]}T_{UT[k]}. \qquad (16)
$$
perform the robot motions with the robot points that are taught via the same vision-identified object located at a different position Obj[m] (i.e. m ≠ n).
To reuse all pre-taught robot points in the robot program for the vision-identified object at a
different position, the robot programmer must set up the vision system so that it can
determine the position offset of frame UF[i]cal (i.e. ^{UF[i]cal}T_{UF[i]cal'}) with two vision
measurements ^{Vis[i]}T_{Obj[n]} and ^{Vis[i]}T_{Obj[m]} as shown in Fig. 6 and Eq. (17)

$$
{}^{UF[i]cal}T_{UF[i]cal'} = {}^{Vis[i]}T_{Obj[m]}\,({}^{Vis[i]'}T_{Obj[m]})^{-1} \quad\text{and}\quad {}^{Vis[i]'}T_{Obj[m]} = {}^{Vis[i]}T_{Obj[n]}, \qquad (17)
$$
where frames Vis[i]' and UF[i]cal' represent the positions of frames Vis[i] and UF[i]cal after
object position Obj[n] changes to Obj[m]. Usually, the vision system obtains ^{Vis[i]}T_{Obj[n]}
during the vision setup and acquires ^{Vis[i]}T_{Obj[m]} when the camera takes the actual view
picture for the object.
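For the fixed-camera case, where frame Vis[i] is set up to coincide with frame UF[i]cal, the offset in Eq. (17) amounts to composing the current object measurement with the inverse of the reference measurement. A minimal sketch, assuming both measurements are available as 4x4 matrices (function name illustrative):

```python
import numpy as np

def uf_offset_from_vision(T_vis_obj_ref, T_vis_obj_now):
    """Fixed-camera case: with frame Vis[i] coincident with frame UF[i]cal,
    the frame offset UF[i]cal_T_UF[i]cal' is the current object measurement
    Vis[i]_T_Obj[m] composed with the inverse of the reference measurement
    Vis[i]_T_Obj[n]."""
    return np.asarray(T_vis_obj_now) @ np.linalg.inv(np.asarray(T_vis_obj_ref))
```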
Fig. 6. Determining the offset of frame UF[i]cal through two vision measurements
In a mobile-camera vision application, the camera can be attached to the robot's wrist
faceplate and moved by the robot on the camera view plane. In this case, frames UF[i]cal and
Vis[i] are not coincident with each other when camera view position P[m]vie is not at P[n]cal. Thus,
vision measurement ^{Vis[i]}T_{Obj[m]} obtained at P[m]vie cannot be used for determining
^{UF[i]cal}T_{UF[i]cal'} in Eq. (17) directly. However, it is noticed that frame Vis[i] is fixed in frame
Def_TCP and its position coordinates can be determined in Eq. (18) as shown in Fig. 7
$$
{}^{Def\_TCP}T_{Vis[i]} = ({}^{R}T_{Def\_TCP})^{-1}\,{}^{R}T_{UF[i]cal}, \qquad (18)
$$
where transformations RTUF[i]cal and RTDef_TCP are uploaded from the robot system when the
robot-mounted camera is at P[n]cal during the vision setup. With vision-determined
Def_TCPTVis[i], vision measurement Vis[i]TObj[m] can be transformed into UF[i]TObj[m] for the robot
system in Eq. (19) if frame Def_TCP is used as frame UF[i]cal (i.e. UF[i]cal = Def_TCP) in the
robot program as shown in Fig. 7.
$$
{}^{UF[i]cal}T_{Obj[m]} = {}^{Def\_TCP}T_{Vis[i]}\,{}^{Vis[i]}T_{Obj[m]}. \qquad (19)
$$
By substituting Eq. (19) into Eq. (17), frame offset ^{UF[i]cal}T_{UF[i]cal'} can be determined in Eq. (20)

$$
{}^{UF[i]cal}T_{UF[i]cal'} = {}^{UF[i]cal}T_{Obj[m]}\,({}^{UF[i]cal'}T_{Obj[m]})^{-1} \quad\text{and}\quad {}^{UF[i]cal'}T_{Obj[m]} = {}^{UF[i]cal}T_{Obj[n]}, \qquad (20)
$$

where frames Vis[i]' and UF[i]cal' represent the positions of frames Vis[i] and UF[i]cal after object
position Obj[n] changes to Obj[m].
With the determined offset, the pre-taught robot points can be shifted for the object at position Obj[m] as shown in Eq. (21)

$$
P[n]'^{\,UF[i]cal}_{UT[k]} = {}^{UF[i]cal}T_{UF[i]cal'}\,P[n]^{UF[i]cal}_{UT[k]} \quad\text{and}\quad P[n]'^{\,UF[i]'cal}_{UT[k]} = P[n]^{UF[i]cal}_{UT[k]}. \qquad (21)
$$
Table 6 shows the FANUC TP program used in a fixed-camera FANUC vision application.
The program calculates the vision offset ^{UF[i]cal}T_{UF[i]cal'} in Eq. (17), sends it to a user-specified
robot position register PR[x], and transforms robot point P[n]^{UF[i]cal}_{UT[k]} in Eq. (21).
1: R[1] = 0;
   Clear robot register R[1], which is used as the indicator for the vision Snap & Find operation.
…
   Acquire ^{Vis}T_{Obj[m]} from the snapshot view picture 2d single, find the vision-measured offset ^{UF[i]cal}T_{UF[i]cal'}, and send it to robot position register PR[1].
…
   Wait until the VisLOC vision system sets R[1] to 1 for a successful vision Snap & Find operation.
…
   Jump out of the program if the vision system cannot set R[1] to 1.
…
   Apply ^{UF[i]cal}T_{UF[i]cal'} as the Offset Condition.
…
   Transform robot point P[n]^{UF[i]cal}_{UT[k]} by ^{UF[i]cal}T_{UF[i]cal'}.
Table 6. FANUC TP program for a fixed-camera vision application
Selection, Translation, Rotation, and Snap. During robot simulation, the motion instruction
in the robot simulation program is able to move frame Def_TCP (or UT[k]) of the robot
Device to coincide with a Tag[n] only if the tag is within the robot's workspace and not at a
robot singularity. The procedures for creating and manipulating tag points in IGRIP are:
Step 1. Create a tag path and attach it to frame B[i] of a selected Device.
Step 2. Create tag points Tag[n] (for n = 1, 2, ..., m) one at a time in the created path.
Step 3. Manipulate a Tag[n] in the Workcell. Besides the manipulation functions of selection,
translation, and/or rotation, the snap function allows the programmer to place a
Tag[n] at the vertex, edge, frame, curve, or surface of any Device in the Workcell.
Constraints and options can also be set up for a specific snap function. For example,
if the center option is chosen, a Tag[n] will be snapped to the center of geometric
entities such as a line, edge, polygon, etc. If a Tag[n] is required to snap on a
surface, the approach axis parameter must be set to determine which axis of the
Tag[n] will be aligned with the surface normal vector.
4.2 Accuracy Enhancement of Virtual Robot Points
It is obvious that inevitable differences exist between the real robot workcell and the
simulated robot Workcell because of the manufacturing tolerances and dimensional variations
of the corresponding components. Therefore, it is not feasible to directly download a tag point
Tag[n] to the actual robot controller for execution. Instead, the robot programmer must
apply the simulation calibration functions to adjust the tag points with respect to a number
of robot points uploaded from the real robot workcell. The two commonly used calibration
methods are calibrating frame UT[k] of a robot Device and calibrating frame B[i] of a Device
that attaches Tag[n]. The underlying principles of these methods are the same as the
design of robot UT and UF frames as introduced in sections 2.1 and 2.2. For example, assume
that the UT[k] of the robot end-effector Device is not exactly the same as the UT[k] of the
actual robot end-effector prior to UT[k] calibration. To determine and use the actual UT[k]
in the simulation Workcell, the programmer needs to teach three non-collinear robot points
through the UT Frame Setup procedure in the real robot system and upload them into the
simulation Workcell so that the simulation system is able to calculate the origin of UT[k]
with the three-point method as described in Eq. (7) and Eq. (8) in section 2.1. With the
calibrated UT[k] and the assumption that the robot Device is exactly the same as the real
robot, the UT[k] position relative to the R frame (RTUT[k]) of a robot Device in the simulation
Workcell is exactly the same as the corresponding one in the real robot workcell. Also, prior
to frame B[i] calibration, the Tag[n] position relative to frame R of a robot Device (RTTag[n])
may not be the same as the corresponding one in the real robot workcell. In this case, the
Device that attaches Tag[n] serves as a fixture Device. Thus, the programmer may define a
robot UF[i] frame by teaching (or creating) three or six robot points (or tag points) on the
features of the real fixture device (or fixture Device) in the real workcell (or the
simulation Workcell). Coinciding the created UF tag points in the simulation Workcell with
the corresponding uploaded real robot points results in calibrating the position of frame B[i]
of the fixture Device and the Tag[n] attached to it.
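The frame B[i] calibration described above is essentially a rigid registration of the created UF tag points onto the uploaded real robot points. A minimal sketch of that registration step (a standard least-squares point-set fit; Delmia IGRIP's own calibration function may use a different formulation, and the function name is illustrative):

```python
import numpy as np

def fit_frame(points_sim, points_real):
    """Rigid transform (rotation + translation) that maps the simulated UF
    tag point locations onto the corresponding uploaded real robot points
    in the least-squares sense."""
    P = np.asarray(points_sim, dtype=float)   # m x 3, created in the Workcell
    Q = np.asarray(points_real, dtype=float)  # m x 3, uploaded from the real cell
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cq - R @ cp
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                                  # corrected pose of frame B[i]
```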
In DynaCal UT[k] calibration, the programmer needs to specify at least three non-collinear
measurement points on the robot end-effector and input their locations relative to the
desired tool-tip point in the DynaCal system during the DynaCal robot calibration.
However, when only the UT[k] origin needs to be calibrated, one measurement point on the
end-effector suffices and choosing the measurement point at the desired tool-tip point
further simplifies the process because its location relative to the desired tool-tip point is then
simply zero. In DynaCal UF[i] calibration, the programmer needs to mount the DynaCal
measurement device at three (or four) non-collinear alignment points on a fixture during the
DynaCal robot calibration. The position of each alignment point relative to the robot R
frame is measured through the DynaCal cable and the TCP adaptor at the calibrated UT[k].
The DynaCal software uses the measurements to determine the transformation between the
UF[i]Fix on the fixture and the robot R frame, denoted as RTUF[i]Fix. With the identified values
of frames UT[k] and UF[i]Fix in the original robot workcell and the values of UT[k]' and
UF[i]'Fix in the identical robot workcell, the UF and UT offsets can be determined, and the
robot points P[n] used in the original robot cell can be converted into the corresponding
ones for the identical robot cell with the methods introduced in sections 2.1 and 2.2.
Fig. 8. Determining the offset of UF[i] in two identical robot workcells through robot
calibration system
The following frame transformation equations show the method for determining the robot
offset ^{UF[i]}T_{UF[i]'} in two identical robot workcells through the calibrated values of UF[i]Fix and
UF[i]'Fix as shown in Fig. 8. Given that the coincidence of UF[i]Fix and UF[i]'Fix represents a
commonly used calibration fixture in the two identical robot workcells, the transformation
between the two robot base frames R and R' can be calculated in Eq. (22)

$$
{}^{R'}T_{R} = {}^{R'}T_{UF[i]'Fix}\,({}^{R}T_{UF[i]Fix})^{-1}, \qquad (22)
$$
$$
{}^{R}T_{UF[i]'} = ({}^{R'}T_{R})^{-1}\,{}^{R'}T_{UF[i]'}, \qquad (23)
$$

where frames UF[i] and UF[i]' are used for recording robot points P[n] and P[n]' in the two
identical robot workcells, respectively. With Eq. (22) and Eq. (23), the robot offset ^{UF[i]}T_{UF[i]'}
can be calculated in Eq. (24)

$$
{}^{UF[i]}T_{UF[i]'} = ({}^{R}T_{UF[i]})^{-1}\,{}^{R}T_{UF[i]'}. \qquad (24)
$$
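A sketch of the chain in Eqs. (22)-(24), assuming the calibrated fixture frames and the recorded user frames are available as 4x4 matrices uploaded from the two cells (variable names are illustrative):

```python
import numpy as np

def uf_offset_between_cells(T_R_uf_fix, T_R2_uf_fix2, T_R_uf, T_R2_uf2):
    """Determine the offset UF[i]_T_UF[i]' between two identical workcells.
    T_R_uf_fix  : R_T_UF[i]Fix   (fixture frame in cell 1, from calibration)
    T_R2_uf_fix2: R'_T_UF[i]'Fix (same fixture in cell 2, from calibration)
    T_R_uf      : R_T_UF[i]      (user frame recorded in cell 1)
    T_R2_uf2    : R'_T_UF[i]'    (user frame recorded in cell 2)"""
    # Eq. (22): relate the two base frames through the shared fixture.
    T_R2_R = np.asarray(T_R2_uf_fix2) @ np.linalg.inv(np.asarray(T_R_uf_fix))
    # Eq. (23): express cell 2's user frame in cell 1's base frame R.
    T_R_uf2 = np.linalg.inv(T_R2_R) @ np.asarray(T_R2_uf2)
    # Eq. (24): offset between the two user frames.
    return np.linalg.inv(np.asarray(T_R_uf)) @ T_R_uf2
```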
6. Conclusion
Creating accurate robot points is an important task in robot programming. This chapter
discussed the advanced techniques used in creating robot points for improving robot
operation flexibility and reducing robot production downtime. The theory of robotics shows
that an industrial robot system represents a robot point in both Cartesian coordinates and
proper joint values. The concepts and procedures of designing accurate robot user tool
frame UT[k] and robot user frame UF[i] are essential in teaching robot points. Depending on
the selected UT[k] and UF[i], the Cartesian coordinates of a robot point may be different, but
the joint values of a robot point always uniquely define the robot pose. Through teaching
robot frames UT[k] and UF[i] and measuring their offsets, the robot programmer is able to
shift the originally taught robot points for dealing with the position variations of the robot's
end-effector and the workpiece. A similar method has also been successfully applied in
the robot vision system, the robot simulation, and the robot calibration system. In an
integrated robot vision system, the vision frame Vis[i] serves the role of frame UF[i]. The
vision measurements to the vision-identified object obtained in either fixed-camera or
mobile-camera applications are used for determining the offset of UF[i] for the robot system.
In robot simulation, the virtual robot points created in the simulation robot workcell must be
adjusted relative to the position of the robot in the real robot workcell. This task can be done
by attaching the created virtual robot points to the base frame B[i] of the simulation device
that serves the same role as UF[i]. With the uploaded real robot points, the virtual robot
points can be adjusted with respect to the determined true frame B[i]. In a robot calibration
system, the measuring device establishes frame UF[i] on a common fixture for the
workpiece, and the measurement of UF[i]' in the identical robot workcell is used to
determine the offset of UF[i].
7. References
Cheng, F. S. (2009). Programming Vision-Guided Industrial Robot Operations, Journal of
Engineering Technology, Vol. 26, No. 1, Spring 2009, pp. 10-15.
Cheng, F. S. (2007). The Method of Recovering TCP Positions in Industrial Robot Production
Programs, Proceedings of 2007 IEEE International Conference on Mechatronics and
Automation, August 2007, pp. 805-810.
Cheng, S. F. (2003). The Simulation Approach for Designing Robotic Workcells, Journal of
Engineering Technology, Vol. 20, No. 2, Fall 2003, pp. 42-48.
Connolly, C. (2008). Artificial Intelligence and Robotic Hand-Eye Coordination, Industrial
Robot: An International Journal, Vol. 35, No. 6, 2008, pp. 496-503.
Connolly, C. (2007). A New Integrated Robot Vision System from FANUC Robotics,
Industrial Robot: An International Journal, Vol. 34, No. 2, 2007, pp. 103-106.
Connolly, C. (2006). Delmia Robot Modeling Software Aids Nuclear and Other Industries,
Industrial Robot: An International Journal, Vol. 33, No. 4, 2006, pp. 259-264.
Fanuc Robotics (2007). Teaching Pendant Programming, R-30iA Mate LR HandlingTool
Software Documentation, Fanuc Robotics America, Inc.
Golnabi, H. & Asadpour, A. (2007). Design and application of industrial machine vision
systems, Robotics and Computer-Integrated Manufacturing, 23, pp. 630-637.
Motta, J.T.; de Carvalhob, G. C. & McMaster, R.S. (2001). Robot calibration using a 3D
vision-based measurement system with a single camera, Robotics and Computer-Integrated
Manufacturing, 17, 2001, pp. 487-497.
Nguyen, M. C. (2000), Vision-Based Intelligent Robots, In SPIE: Input/Output and Imaging
Technologies II, Vol. 4080, 2000, pp. 41-47.
Niku, S. B. (2001). Robot Kinematics, Introduction to Robotics: Analysis, Systems, Applications,
pp. 29-67, Prentice Hall. ISBN 0130613096, New Jersey, USA.
Pulkkinen, T.; Heikkilä, T.; Sallinen, M.; Kivikunnas, S. & Salmi, T. (2008). 2D CAD
based robot programming for processing metal profiles in short series, Proceedings
of the International Conference on Control, Automation and Systems 2008,
Oct. 14-17, 2008, COEX, Seoul, Korea, pp. 157-160.
Rehg, J. A. (2003). Path Control, Introduction to Robotics in CIM Systems, 5th Ed., pp. 102-108, Prentice Hall, ISBN 0130602434, New Jersey, USA.
Zhang, H.; Chen, H. & Xi, N. (2006). Automated robot programming based on sensor fusion,
Industrial Robot: An International Journal, Vol. 33, No. 6, 2006, pp. 451-459.