An Approach To Visual Servoing Based On Coded Light


Jordi Pagès, Christophe Collewet, François Chaumette, Joaquim Salvi

Institut d'Informàtica i Aplicacions, University of Girona, Girona, Spain
IRISA / INRIA Rennes, Campus de Beaulieu, Rennes, France

Abstract: Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, as it relies on the object appearance. Consequently, the positioning task is not possible in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation, which is inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object appearance.

I. INTRODUCTION
Visual servoing is a widely used technique for controlling robots by using data provided by visual sensors. The most typical configuration is eye-in-hand, which consists of attaching a camera to the end-effector of the robot. Then, typical tasks such as positioning the robot with respect to objects or target tracking are fulfilled by using a control loop based on visual features extracted from the images [1].

All visual servoing techniques assume that it is possible to extract visual measures from the object in order to perform a pose or partial pose estimation, or to use a given set of features in the control loop. Therefore, visual servoing does not bring any solution for positioning with respect to non-textured objects or objects for which extracting visual features is too complex or too time consuming. Note that the sampling rate in visual servoing must be high enough so as not to penalise the dynamics of the end-effector and the stability of the control scheme.
A possible solution to this problem is to project structured light onto the objects in order to obtain visual features. There are few works in this field, and they are mainly based on the use of laser pointers and laser planes [2]-[4]. Furthermore, they are usually designed for positioning with respect to planar objects or to specific non-planar objects such as spheres. In this paper, we propose the use of coded structured light [5]. This is a powerful technique based on the projection of coded light patterns which provide robust visual features. It has been widely used in shape acquisition applications based on triangulation, but it has never been used in a visual servoing framework. With the use of coded patterns, visual features are available independently of the object appearance, so that visual servoing techniques can tackle their limitation in front of non-textured objects.

However, in the case of moving objects it raises several problems, since the projected features do not remain static on the object surface. In a first attempt to combine coded structured light with visual servoing, this paper only considers static objects.

The paper is structured as follows. In Section II the coded light approach and its ability to provide visual features are presented. Then, the coded pattern used in this work is presented in Section III. Afterwards, Section IV reviews the formalism of a positioning task by using visual servoing and the control law based on the visual features provided by the coded pattern. Experiments validating the approach are shown in Section V. Finally, the end of the paper discusses conclusions and future work.
II. PROVIDING VISUAL FEATURES WITH A CODED PATTERN
Coded structured light is considered as an active stereovision technique [6]. It is said to be active because controlled illumination is used in order to simplify computer vision tasks. The typical configuration consists of an LCD projector and one (or two) camera(s). In both cases the LCD projector is used for projecting a light pattern onto the object. The advantage of using an LCD projector is that patterns of high resolution and with a large number of colours can be projected. Furthermore, high flexibility is obtained, as the projected pattern can be changed at no cost. When a single camera is used, the projector is considered as an inverse camera and correspondences between the perceived image and the projected pattern are easily found. The effectiveness of coded structured light relies on the coding strategy used for defining the patterns. Typically, the codification allows a set of pattern points or lines to be uniquely identified. Then, the decoding process consists in locating the encoded points or lines in the image provided by the camera while the pattern is being projected on the object. The typical application of coded structured light is shape acquisition [5]. In this case, the correspondences are triangulated, obtaining a 3D reconstruction of the object view. This is possible if the camera and the projector have been accurately calibrated beforehand. In any case, the aim of coded structured light is to provide robust, unambiguous and fast correspondences between the projected pattern and the image view.
A large number of different coded structured light techniques exist [5]. Among all of them, there are two main groups: time-multiplexing and one-shot techniques. Time-multiplexing techniques are based on projecting a sequence of binary or grey-scaled patterns. The advantage of these techniques is that, as the number of patterns is not restricted, a large resolution, i.e. number of correspondences, can be achieved. Furthermore, binary patterns are robust against the object's colour. On the other hand, their main constraint is that during the projection of the patterns the object, the projector and the camera must all remain static, which is incompatible with visual servoing. One-shot techniques project a unique pattern, so that a moving camera or projector can be considered. In order to concentrate the codification scheme in a unique pattern, each encoded point or line is uniquely identified by a local neighbourhood around it. Then, for correctly decoding the pattern in the image, the object surface is assumed to be locally smooth. Otherwise, an encoded neighbourhood can appear incomplete in the image, which can provoke decoding errors.
From the point of view of visual servoing, one-shot coded
structured light is a powerful solution for robustly providing
correspondences independently of the object appearance. By
projecting a coded pattern, correspondences are easily found
between the reference image and the initial and intermediate
images. In Fig. 1 several one-shot patterns projected on a horse
statue are shown.
In the first pattern, every coloured spot is uniquely encoded
by the window of 3 × 3 spots centred on it. In the second
pattern, every stripe is uniquely encoded by its colour and the
colour of the adjacent stripes. Finally, the third pattern uses
the same codification for both horizontal and vertical slits.
The choice of the coded pattern for visual servoing depends
on the object, the number of correspondences that we want
to get, the lighting conditions, the required decoding time,
etc. In this paper, the pattern chosen is a coloured array of
spots like the one shown in Fig. 1a. The main reason is that it
can be decoded very fast, fitting the control rate requirements.
Furthermore, it provides image point correspondences which
are useful for many of the existing visual servoing techniques.
In the following section the pattern used in the experiments is presented in more detail.
III. PATTERN BASED ON M-ARRAY CODIFICATION
Many patterns encoding points have been proposed in the literature [7]-[10]. Most of them use the theory of pseudo-random arrays, also known as M-arrays, for encoding the points. The main advantage of such a coding scheme is that high redundancy is included, increasing the robustness when decoding the pattern. Firstly, the formal definition of an M-array is briefly reviewed. Afterwards, the pattern design chosen for our application is presented. Then, an overview of the pattern decoding procedure is introduced.

Fig. 1. Several one-shot patterns: a) array of dots; b) multi-slit pattern; c) grid pattern.

A. Formal definition

Let M be a matrix of dimensions r × v where each element is taken from an alphabet of k elements {0, 1, ..., k-1}. If M has the window property, i.e. each different submatrix of M of dimensions n × m appears exactly once, then M is a perfect map. If M contains all submatrices of n × m except the one filled by 0s, then M is called an M-array or pseudo-random array [11]. This kind of array has been widely used in pattern codification because the window property allows every different submatrix to be associated with an absolute position in the array. An example of a 4 × 6 binary M-array with window property 2 × 2 is

$$
\begin{pmatrix}
0 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 0
\end{pmatrix} \qquad (1)
$$
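To make the window property concrete, here is a minimal Python sketch (our own illustrative helper, not part of the paper) that checks that every 2 × 2 submatrix of the example array (1) appears at most once:

```python
import numpy as np

def has_window_property(M, n, m):
    """Return True if every n x m submatrix of M appears at most once."""
    M = np.asarray(M)
    seen = set()
    for i in range(M.shape[0] - n + 1):
        for j in range(M.shape[1] - m + 1):
            window = tuple(M[i:i + n, j:j + m].ravel())
            if window in seen:        # duplicate window: property violated
                return False
            seen.add(window)
    return True

# Example array (1), window property 2 x 2
M = [[0, 1, 1, 1, 1, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 1, 0]]
print(has_window_property(M, 2, 2))  # True: all fifteen 2 x 2 windows are distinct
```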
This type of array can be constructed by folding a pseudo-random sequence [11], which is the one-dimensional variant of an M-array. In this case, however, the length of the pseudo-random sequence, the size of the resulting M-array and the size of its window are correlated. Therefore, a generic M-array of given dimensions with a given window property cannot always be constructed. In order to cope with this constraint, an alternative consists in generating a perfect submap. This type of array also has the window property, but not all the possible windows are included. Morano et al. [9] proposed a brute-force algorithm for generating perfect submaps; a sketch of it is given below. For example, in order to generate a 6 × 6 M-array with window property 3 × 3 using an alphabet of 3 elements, the procedure is as follows: firstly, a 3 × 3 subarray is randomly chosen and placed in the north-west corner of the M-array being built, as shown in Fig. 2a. Then, consecutive random columns are added rightwards, as shown in Fig. 2b-c. The random elements are only inserted if the window property of the global array is maintained, i.e. if no repeated 3 × 3 sub-arrays appear. Similarly, random rows are added downwards from the initial sub-array position, as shown in Fig. 2d-e. Afterwards, the remainder of the array is filled by completing rows and columns until the end. Note that certain combinations of array size, window size and alphabet length will not produce any result.
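The following Python sketch approximates this brute-force generation under simplifying assumptions: instead of the seed/columns/rows order of Fig. 2, it fills the array cell by cell in row-major order, but it enforces the same constraint of never repeating a 3 × 3 window, and it restarts when it reaches a dead end. Function and parameter names are ours, not taken from [9].

```python
import random

def completed_windows(M, i, j, win):
    """All win x win windows that are fully filled and contain cell (i, j)."""
    size = len(M)
    found = []
    for a in range(win):
        for b in range(win):
            r, c = i - a, j - b
            if 0 <= r <= size - win and 0 <= c <= size - win:
                cells = [M[r + x][c + y] for x in range(win) for y in range(win)]
                if None not in cells:
                    found.append(tuple(cells))
    return found

def generate_perfect_submap(size=6, win=3, alphabet=3, max_restarts=1000):
    """Generate a size x size array over `alphabet` symbols in which every
    win x win window appears at most once (a perfect submap)."""
    for _ in range(max_restarts):
        M = [[None] * size for _ in range(size)]
        windows = set()
        stuck = False
        for i in range(size):
            for j in range(size):
                symbols = list(range(alphabet))
                random.shuffle(symbols)
                for s in symbols:
                    M[i][j] = s
                    new = completed_windows(M, i, j, win)
                    if not (set(new) & windows):   # no repeated window
                        windows.update(new)
                        break
                    M[i][j] = None
                else:                              # no symbol fits: dead end
                    stuck = True
                    break
            if stuck:
                break
        if not stuck:
            return M
    raise RuntimeError("no perfect submap found for these parameters")

if __name__ == "__main__":
    for row in generate_perfect_submap():
        print(row)
```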
B. Pattern design

There are several ways of designing a pattern with the aid of an M-array [7]-[10]. In most cases, patterns containing an array of spots are used, like in [8], [9]. Every element of the alphabet is assigned to a grey level or a colour.

For our visual servoing purposes, a 20 × 20 M-array based on an alphabet of 3 symbols {0, 1, 2} and window property 3 × 3 has been generated according to the brute-force algorithm by Morano et al. [9] described above. The obtained pattern is shown in Fig. 3, where blue has been matched with 0, green with 1 and red with 2.

Fig. 2. M-array generation proposed by Morano et al.

Fig. 3. The coloured spot pattern encoded according to an M-array.
C. Pattern segmentation and decoding

When the pattern is projected on an unknown object, the camera provides an image of the pattern deformed according to the object shape. Firstly, it is necessary to segment the pattern in the image, i.e. to identify which parts of the image contain the projected pattern. This operation is referred to as pattern segmentation. One of the classic advantages of using coded light is that the image processing is greatly simplified. Usually, with an appropriate camera aperture it is possible to perceive only the projected pattern, removing the rest of the scene.

In our case, the pattern segmentation process consists in finding the visible coloured spots. Once the centre of gravity of every spot is located and its colour is identified, the decoding process can start. The steps are summarised hereafter (a sketch of the decoding search follows the list):

- Adjacency graph: for every spot, the four closest spots in the four main directions are searched. With this step the 4-neighbourhood of each spot is located. Then, the 8-neighbourhood of every spot can be completed.
- Graph consistency: in this step, the consistency of every 8-neighbourhood is tested. For example, given a spot, its north-west neighbour must be the west neighbour of its north neighbour and, at the same time, the north neighbour of its west neighbour. These consistency rules can be extrapolated to the rest of the neighbours corresponding to the corners of the 8-neighbourhood. Spots not respecting the consistency rules are removed from the 8-neighbourhood being considered.
- Spot decoding: for every spot having a complete 8-neighbourhood, its colour and the colours of its neighbours are used for identifying the spot in the original pattern. In order to speed up this search, a look-up table storing all the 3 × 3 windows included in the pattern is used.
- Decoding consistency: every spot can be identified by the 9 windows of 3 × 3 in which it takes part. Spots for which all the windows do not provide the same identification are removed.
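As an illustration of the spot decoding step, the following Python sketch (our own simplification; the function names and data layout are assumptions, not taken from the paper) builds the look-up table of 3 × 3 windows and identifies a spot from the symbols of its complete 8-neighbourhood. It reuses the `generate_perfect_submap` helper from the previous sketch.

```python
def build_window_lut(pattern, win=3):
    """Map each win x win window of symbols to the (row, col) of its centre spot."""
    rows, cols = len(pattern), len(pattern[0])
    lut = {}
    for i in range(rows - win + 1):
        for j in range(cols - win + 1):
            window = tuple(pattern[i + a][j + b] for a in range(win) for b in range(win))
            lut[window] = (i + win // 2, j + win // 2)   # centre of the window
    return lut

def decode_spot(centre_symbol, neighbourhood, lut):
    """Identify a spot from its symbol and its 8-neighbourhood symbols.

    `neighbourhood` lists the neighbour symbols as [nw, n, ne, w, e, sw, s, se].
    Returns the (row, col) of the spot in the original pattern, or None if the
    window does not appear in the look-up table.
    """
    nw, n, ne, w, e, sw, s, se = neighbourhood
    window = (nw, n, ne, w, centre_symbol, e, sw, s, se)
    return lut.get(window)

# Usage with a generated 20 x 20 pattern
pattern = generate_perfect_submap(size=20, win=3, alphabet=3)
lut = build_window_lut(pattern)
# Simulate a correctly observed spot at row 5, column 7 of the pattern
i, j = 5, 7
obs = [pattern[i + a][j + b] for a in (-1, 0, 1) for b in (-1, 0, 1)]
print(decode_spot(obs[4], obs[:4] + obs[5:], lut))  # -> (5, 7)
```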
Note that the decoding process is quite strict and does not allow inconsistencies or uncertainties. This can cause an important number of spots to be rejected. On the other hand, it ensures a high robustness, since erroneously decoded spots will rarely occur.
Examples of pattern decoding are shown in Fig. 4. The successfully decoded spots are indicated with an overprinted numeric mark. In the first two examples, the camera aperture has been adjusted in order to remove most of the scene, so that only the projected pattern is visible. In the first example, the object is a ham, on which most of the visible dots have been decoded. The other two examples show an object similar to an elliptic cylinder in different contexts. In Fig. 4b, the pattern is clearly visible and most of the points are identified. On the other hand, the scene and the image shown in Fig. 4c are noticeably more complex. In this case, the object texture is darker, which requires a larger camera aperture in order to perceive the pattern. As a consequence, the rest of the scene is also visible. Nevertheless, a large set of spots is still decoded, including some of those projected on the background wall.

In all the examples, the decoding time was lower than 40 ms, which is the typical acquisition period of a CCIR format camera.

The next section reviews the typical definition of a positioning task by using visual data. In our case, the data used are the decoded image points provided by the coded structured light technique.


IV. VISUAL SERVOING


As already said, a typical robotic task consists in positioning an eye-in-hand system with respect to an object by using visual features extracted from the camera. Visual servoing is based on the relationship between the camera motion and the consequent change of the visual features. This relationship is expressed by the well-known equation [12]

$$
\dot{s} = L_s \, v \qquad (2)
$$

where $s$ is a vector containing the visual feature values, $L_s$ is the so-called interaction matrix, and $v = (v_x, v_y, v_z, \omega_x, \omega_y, \omega_z)$ is the camera velocity screw.

Fig. 4. Examples of decoded patterns when projected on different objects.
The goal of visual servoing consists in moving the robot from an initial relative robot-object pose to a desired one, where a desired set of visual features $s^*$ is obtained. Most applications obtain the desired features $s^*$ by using the teaching-by-showing approach. In this case, the robot is first moved to the desired position, then an image is acquired and $s^*$ is computed. This is useful, for example, for robots having poor odometry, such as mobile robots. In this case, the exact goal position can be reached from its surroundings by using the visual servoing approach.
A robotic task can be described by a function which must be regulated to 0 [12]. Concretely, when the number of visual features is higher than the m degrees of freedom of the camera, the task function is defined as the following m-dimensional vector

$$
e = \widehat{L}_s^{+} (s - s^*) \qquad (3)
$$

where $s$ are the visual features corresponding to the current state and $\widehat{L}_s^{+}$ is the pseudoinverse of a model or an approximation of the interaction matrix. A typical control law for cancelling the task function, and therefore moving the robot to the desired position, is [12]

$$
v = -\lambda \, \widehat{L}_s^{+} (s - s^*) \qquad (4)
$$

with $\lambda$ a positive gain. It is well known that the local asymptotic stability of the control law is ensured if the model of the interaction matrix satisfies

$$
\widehat{L}_s^{+} L_s > 0 \qquad (5)
$$

As explained in the previous section, the coded pattern in Fig. 3 provides a large number of point correspondences in every image. Therefore, matching pattern points when viewing the object from different positions becomes straightforward. The normalised coordinates of these points, obtained after camera calibration, can be used as visual features in the control loop. Given a set of k matched image points between the current and desired images, the visual features are defined by

$$
s = (x_1, y_1, x_2, y_2, \ldots, x_k, y_k) \qquad (6)
$$
The interaction matrix of a normalised point is [12], [13]

$$
L_x = \begin{pmatrix}
-1/Z & 0 & x/Z & xy & -(1+x^2) & y \\
0 & -1/Z & y/Z & 1+y^2 & -xy & -x
\end{pmatrix} \qquad (7)
$$

where Z is the depth of the point. Then, $L_s$ has the form

$$
L_s = \begin{pmatrix}
-1/Z_1 & 0 & x_1/Z_1 & x_1 y_1 & -(1+x_1^2) & y_1 \\
0 & -1/Z_1 & y_1/Z_1 & 1+y_1^2 & -x_1 y_1 & -x_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
-1/Z_k & 0 & x_k/Z_k & x_k y_k & -(1+x_k^2) & y_k \\
0 & -1/Z_k & y_k/Z_k & 1+y_k^2 & -x_k y_k & -x_k
\end{pmatrix} \qquad (8)
$$
Note that the real interaction matrix depends on the depth distribution of the points. Nevertheless, the depth distribution is usually considered as unknown and a rough approximation is used in the model of the interaction matrix $\widehat{L}_s$. A typical choice for $\widehat{L}_s$, which is the one used in this paper, is the interaction matrix evaluated at the desired state, $L_{s^*}$. This is obtained by using the normalised coordinates from the desired image $(x_i^*, y_i^*)$ and the depths in the desired position $Z_i^*$. In our case, the depths in the desired position have been modelled by setting $Z_i^* = Z^*$, with $Z^* > 0$ an approximation of the average distance between the object and the camera. Note, however, that other types of interaction matrix models could be used. For example, if the camera and the projector were accurately calibrated, the depth of the points could be reconstructed by using triangulation. Then, a better estimation of the interaction matrix in the desired state, or even at each iteration, would be available. This solution is not considered here as we have not calibrated the system.
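To make the control computation concrete, here is a minimal Python sketch (our own illustration with made-up point values, not the authors' code) that builds the stacked interaction matrix (8) at the desired state under the constant-depth model $Z_i^* = Z^*$ and evaluates the control law (4):

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Interaction matrix (7) of a normalised image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def stacked_interaction_matrix(points, Z):
    """Stack (8): one 2x6 block per normalised point, constant depth model Z."""
    return np.vstack([interaction_matrix_point(x, y, Z) for (x, y) in points])

def control_law(points, desired_points, Z_star, gain=0.1):
    """Control law (4): v = -gain * pinv(L_s*) (s - s*), with the interaction
    matrix evaluated at the desired state (desired points, depth Z*)."""
    s = np.asarray(points, dtype=float).ravel()
    s_star = np.asarray(desired_points, dtype=float).ravel()
    L_star = stacked_interaction_matrix(desired_points, Z_star)
    return -gain * np.linalg.pinv(L_star) @ (s - s_star)

# Toy usage with made-up matched normalised points
current = [(0.12, -0.05), (-0.08, 0.10), (0.03, 0.20), (-0.15, -0.12)]
desired = [(0.10, -0.04), (-0.10, 0.08), (0.00, 0.18), (-0.17, -0.14)]
v = control_law(current, desired, Z_star=0.9)  # Z* of about 0.9 m, as in the planar experiment
print(v)  # (vx, vy, vz, wx, wy, wz)
```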
Another way to improve the system consists in considering
alternative visual features computed from the 2D points, like
image moments [14], for example.
V. EXPERIMENTAL RESULTS
Experiments have been carried out in order to validate the visual servoing approach based on coded light. A robotic cell with a six-degrees-of-freedom arm has been used. A colour camera has been attached to the end-effector of the robot, while an LCD projector has been positioned about 1 m to the side of the robot. The projector focus has been set so that the pattern remains acceptably focused in a range of distances between 1.6 and 1.8 m in front of the projector. This is the range of distances where the objects are placed during the experiments.

Fig. 5. a) First experiment: projection of the coded pattern onto a planar object. b) Elliptic cylinder used in the second experiment.

A. Planar object

The first experiment consists in positioning the robot with respect to a plane. Fig. 5a shows the robot manipulator and the plane with the encoded pattern projected on it. The desired position has been defined so that the camera is parallel to the plane at a distance of 90 cm. The reference image acquired in the desired position is shown in Fig. 6a. In this image, a total number of 370 coloured spots out of 400 have been successfully decoded. The initial position of the robot in the experiment has been defined from the desired position by moving the robot 5 cm along its X axis, 10 cm along Y and 20 cm along Z, and applying rotations of 15 degrees about X and 10 degrees about Y. The image perceived in this configuration is shown in Fig. 6b. In this case, the number of decoded points is 361. Matching points between the initial and the desired images is straightforward thanks to the decoding process of the pattern.

The goal is then to move the camera back to the desired position by using visual servoing. At each iteration, the visual feature set $s$ in (6) is filled with the matched points between the current and the desired image. The normalised coordinates of the points are obtained by using an approximation of the camera intrinsic parameters. The control law (4) is computed at each iteration with $\widehat{L}_s = L_{s^*}$. The result of the servoing is presented in Fig. 6c-d. Concretely, the camera velocities generated by the control law are plotted in Fig. 6c. Note that the norm of the task function decreases at each iteration, as shown in Fig. 6d. As can be seen, the behaviour of both the task function and the camera velocities is satisfactory, and the robot reaches the desired position with no problem, as for classical image-based visual servoing.
B. Non-planar object

In the second experiment a non-planar object has been used. Concretely, the elliptic cylinder shown in Fig. 5b has been positioned in the workspace. In this case, the desired position has been chosen so that the camera points towards the object's zone of maximum curvature with a certain angle, and the distance between both is about 60 cm. The desired image perceived in this configuration is shown in Fig. 6e. The number of successfully decoded points is 160. Then, the robot end-effector has been displaced 20 cm along X, 20 cm along Y and 30 cm along Z. Afterwards, rotations of 10 degrees about X, 15 degrees about Y and 5 degrees about Z have been applied. These motions define the initial position of the robot end-effector for this experiment. The image perceived in this configuration is shown in Fig. 6f. In this case, the number of decoded points is 148.

The results of the visual servoing are plotted in Fig. 6g-h. The desired image is reached again at the end of the servoing. Note that the model of the interaction matrix used in the control law assumes that all the points are coplanar at depth $Z^* = 60$ cm. Since the object has a strong curvature, the chosen model of the interaction matrix is then very coarse. This explains why the camera velocities generated by the control law are noisier and less monotonic than in the previous experiment. Furthermore, the convergence is slower. It has been proved that the depth distribution of a cloud of points used in classical image-based visual servoing plays an important role in the stability of the system [15]. Nevertheless, this experiment confirms that visual servoing is robust against modelling errors, since convergence is reached. In this experiment, approximated camera intrinsics are also used. Furthermore, during the robot motion, some of the pattern points were occluded by the robot arm. Therefore, the control law is also robust against occlusions.
VI. CONCLUSION
This paper has proposed an approach to visual servoing based on coded structured light for positioning eye-in-hand robots with respect to unknown objects. The projection of a coded pattern provides robust visual features independently of the object appearance. Furthermore, coded light makes it possible to deal with non-textured objects or objects for which extracting visual features is too complex or too costly. Our approach is based on placing an LCD projector beside the robot manipulator. The use of a coded pattern allows classic visual servoing to be directly applied, which is advantageous since the large number of existing visual servoing techniques can be exploited. A pattern containing an M-array of coloured spots has been used to illustrate this approach. The choice of this pattern has been made taking into account its easy segmentation and fast decoding, which fit the visual servoing sampling rate requirement (40 ms). To our knowledge, this is the first work using coded structured light in a visual servoing framework. Therefore, we consider this approach a first step which shows the potential of coded light in visual servoing applications.

A classic image-based approach based on points provided by the coded pattern has been used. Experiments have shown that good results are obtained when positioning the robot with respect to planar objects. Furthermore, thanks to the large number of correspondences provided by the coded pattern, the system has shown to be robust even in the presence of occlusions. On the other hand, the results when using non-planar objects show that the camera motion is noisier, slower and less monotonic. This is a well-known problem in classic 2D visual servoing when a rough estimation of the point depth distribution is included in the interaction matrix.

Fig. 6. First experiment: planar object. a) Desired image. b) Initial image. c) Camera velocities (m/s and rad/s) vs. time (in s). d) Norm of the task function vs. time (in s). Second experiment: elliptic cylinder. e) Desired image. f) Initial image. g) Camera velocities (m/s and rad/s) vs. time (in s). h) Norm of the task function vs. time (in s).

In order to improve the results, other existing image-based approaches could be tested, such as [14], [16], [17]. Furthermore, a better estimation of the depth distribution of the non-planar object would produce better results.
The main constraint of the current approach is that the pattern used is not rotation invariant. This means that, in order to be properly decoded, the pattern cannot appear too rotated about the camera optical axis. In order to remove this constraint, a rotation-invariant pattern should be used. Finally, we remark that structured light allows us to choose the visual features which will be used in the control law. An important future work is therefore to determine a suitable pattern design leading to a robust and optimised control law, as done in [4] for the case of planar objects and onboard structured light.
REFERENCES

[1] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Trans. on Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
[2] D. Khadraoui, G. Motyl, P. Martinet, J. Gallice, and F. Chaumette, "Visual servoing in robotics scheme using a camera/laser-stripe sensor," IEEE Trans. on Robotics and Automation, vol. 12, no. 5, pp. 743-750, 1996.
[3] N. Andreff, B. Espiau, and R. Horaud, "Visual servoing from lines," Int. Journal of Robotics Research, vol. 21, no. 8, pp. 679-700, August 2002.
[4] J. Pagès, C. Collewet, F. Chaumette, and J. Salvi, "Robust decoupled visual servoing based on structured light," in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, vol. 2, Edmonton, Canada, August 2005, pp. 2676-2681.
[5] F. Chen, G. Brown, and M. Song, "Overview of three-dimensional shape measurement using optical methods," Optical Engineering, vol. 39, no. 1, pp. 10-22, January 2000.
[6] R. A. Jarvis, "A perspective on range finding techniques for computer vision," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 5, no. 2, pp. 122-139, 1983.
[7] P. Griffin, L. Narasimhan, and S. Yee, "Generation of uniquely encoded light patterns for range data acquisition," Pattern Recognition, vol. 25, no. 6, pp. 609-616, 1992.
[8] C. J. Davies and M. S. Nixon, "A Hough transform for detecting the location and orientation of 3-dimensional surfaces via color encoded spots," IEEE Trans. on Systems, Man and Cybernetics, vol. 28, no. 1, pp. 90-95, February 1998.
[9] R. A. Morano, C. Ozturk, R. Conn, S. Dubin, S. Zietz, and J. Nissanov, "Structured light using pseudorandom codes," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 322-327, March 1998.
[10] H. J. W. Spoelder, F. M. Vos, E. M. Petriu, and F. C. A. Groen, "Some aspects of pseudo random binary array-based surface characterization," IEEE Trans. on Instrumentation and Measurement, vol. 49, no. 6, pp. 1331-1336, December 2000.
[11] F. J. MacWilliams and N. J. A. Sloane, "Pseudorandom sequences and arrays," Proceedings of the IEEE, vol. 64, no. 12, pp. 1715-1729, 1976.
[12] B. Espiau, F. Chaumette, and P. Rives, "A new approach to visual servoing in robotics," IEEE Trans. on Robotics and Automation, vol. 8, no. 3, pp. 313-326, June 1992.
[13] J. T. Feddema, C. S. G. Lee, and O. R. Mitchell, "Weighted selection of image features for resolved rate visual feedback control," IEEE Trans. on Robotics and Automation, vol. 7, no. 1, pp. 31-47, February 1991.
[14] O. Tahri and F. Chaumette, "Point-based and region-based image moments for visual servoing of planar objects," IEEE Trans. on Robotics, vol. 21, no. 6, 2005.
[15] E. Malis and P. Rives, "Robustness of image-based visual servoing with respect to depth distribution errors," in IEEE Int. Conf. on Robotics and Automation, vol. 1, Taipei, Taiwan, September 2003, pp. 1056-1061.
[16] F. Schramm, G. Morel, A. Micaelli, and A. Lottin, "Extended-2D visual servoing," in IEEE Int. Conf. on Robotics and Automation, New Orleans, USA, April 26-May 1, 2004, pp. 267-273.
[17] P. I. Corke and S. A. Hutchinson, "A new partitioned approach to image-based visual servo control," IEEE Trans. on Robotics and Automation, vol. 17, no. 4, pp. 507-515, August 2001.
