Seminar Report


Sensors on 3D Digitization

1. INTRODUCTION

Digital 3D imaging can benefit from advances in VLSI technology to accelerate its deployment in many fields, such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information is relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.


3. COLOUR 3D IMAGING TECHNOLOGY


Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structures of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available.

Passive vision attempts to analyze the structure of the scene under ambient light. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based and triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.


4. AUTOSYNCHRONIZED SCANNER

Figure 4.1 Schematic of the auto-synchronized scanner

The auto-synchronized scanner, depicted schematically in Figure 4.1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot over a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z coordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used to measure the colour map (reflectance map) of the scene.

Advantage

Triangulation is the most precise method of 3D measurement.

Limitation

Increasing the accuracy increases the triangulation distance: the larger the triangulation distance, the more shadows appear on the scanned object, and the larger the scanning head must be.


5. SENSORS FOR 3D IMAGING


The sensors used in the auto-synchronized scanner include the following.

5.1 SYNCHRONIZATION CIRCUIT BASED UPON DUAL PHOTOCELLS


This sensor ensures the stability and repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuit have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems.

5.2 LASER SPOT POSITION MEASUREMENT SENSORS


High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated.


6. POSITION SENSITIVE DETECTORS


Many devices have been built or considered in the past for measuring the position of a laser spot more efficiently. One method of position detection uses a video camera to capture an image of an object electronically; image processing techniques are then used to determine the location of the object. For situations requiring the location of a light source on a plane, a position sensitive detector (PSD) offers the potential for better resolution at a lower system cost.

The PSD is a precision semiconductor optical sensor which produces output currents related to the centre of mass of light incident on the surface of the device.

While several design variations exist for PSDs, the basic device can be described as a large-area silicon p-i-n photodetector. The detectors are available in single-axis and dual-axis models.


7. DUAL AXIS PSD


This particular PSD is a five-terminal device bounded by four collection surfaces; one terminal is connected to each collection surface, and one provides a common return. Photocurrent generated by light falling on the active area of the PSD is collected by these four perimeter electrodes.

Figure 7.1 A typical dual axis PSD


The amount of current flowing between each perimeter terminal and the common return is related to the proximity of the centre of mass of the incident light spot to each collection surface. The difference between the up current and the down current is proportional to the Y-axis position of the light spot. Similarly, the right current minus the left current gives the X-axis position. The designations up, down, right and left are arbitrary; the device may be operated in any relative spatial orientation.
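The current-to-position relation can be made concrete with a short numerical sketch (Python is used here only for illustration; the current values, the normalization by the current sums and the scale factor are assumptions, not data for a specific device):

    def psd_position(i_up, i_down, i_right, i_left, half_size_mm=2.0):
        # Normalized coordinates: current differences divided by current
        # sums, so the result is insensitive to the overall intensity.
        # half_size_mm is an assumed half-length of the active area.
        y = (i_up - i_down) / (i_up + i_down) * half_size_mm
        x = (i_right - i_left) / (i_right + i_left) * half_size_mm
        return x, y

    # Example: a spot above centre and to the right (currents in microamperes).
    x, y = psd_position(i_up=6.0, i_down=4.0, i_right=7.0, i_left=3.0)
    print(f"x = {x:+.2f} mm, y = {y:+.2f} mm")   # x = +0.80 mm, y = +0.40 mm

Because each coordinate is a ratio of currents, the estimate is, to first order, independent of the total light level.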


8. LASER SENSORS FOR 3D IMAGING


The state of the art in laser spot position sensing methods can be divided into two broad classes according to the way the spot position is sensed: continuous response position sensitive detectors (CRPSD) and discrete response position sensitive detectors (DRPSD).

8.1 CONTINUOUS RESPONSE POSITION SENSITIVE DETECTORS (CRPSD)


The CRPSD category includes the lateral effect photodiode.

Figure 8.1 Basic structure of a single-axis lateral effect photodiode

Figure 8.1 illustrates the basic structure of a p-n type single-axis lateral effect photodiode. Carriers produced by light impinging on the device are separated in the depletion region and distributed to the two sensing electrodes according to Ohm's law. Assuming equal impedances, the electrode that is farthest from the centroid of the light distribution collects the least current. The normalized position of the centroid, which is insensitive to fluctuations in the light intensity, is given by P = (I2 - I1)/(I1 + I2).


The actual position on the detector is found by multiplying P by d/2, where d is the distance between the two sensing electrodes [3]. A CRPSD provides the centroid of the light distribution with a very fast response time (on the order of 10 MHz). Theory predicts that a CRPSD provides a more precise measurement of the centroid than a DRPSD. By precision, we mean measurement uncertainty; it depends, among other things, on the signal-to-noise ratio and the quantization noise. In practice, precision is important, but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution.
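A worked example of these two formulas (the current values and electrode spacing are invented for illustration):

    def lateral_photodiode_position(i1, i2, d_mm):
        # P = (I2 - I1) / (I1 + I2): normalized centroid position.
        p = (i2 - i1) / (i1 + i2)
        # Multiplying by d/2 converts P to a position on the detector,
        # measured from its centre.
        return p * d_mm / 2.0

    # Assumed: electrodes 10 mm apart, I1 = 3 uA, I2 = 5 uA.
    pos = lateral_photodiode_position(i1=3.0, i2=5.0, d_mm=10.0)
    print(f"centroid at {pos:+.2f} mm from the detector centre")  # +1.25 mm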

8.2 DISCRETE RESPONSE POSITION SENSITIVE DETECTORS (DRPSD)


DRPSDs, on the other hand, comprise detectors such as charge-coupled devices (CCDs) and arrays of photodiodes equipped with a multiplexer for sequential reading. They are slower, because all the photo-detectors have to be read sequentially before the location of the peak of the light distribution can be measured [1]. DRPSDs are very accurate, because the full light distribution is known, but slow. Obviously, not all photo-sensors contribute to the computation of the peak; what is required for the measurement of the light distribution peak is only a small portion of the total array. Once the pertinent light distribution is available (after windowing around an estimate of the peak), the location of the desired peak can be computed very accurately.


Figure 8.2 In a monochrome system, the incoming beam is split into two components

Figure 8.3 Artistic view of a smart sensor with colour capabilities

Figure 8.3 shows schematically the new smart position sensor for light spot measurement in the context of 3D and colour measurement. In a monochrome range camera, a portion of the reflected radiation entering the system is split into two beams (Figure 8.2). One portion is directed to a CRPSD, which determines the location of the best window and sends that information to the DRPSD.

In order to measure colour information, a different optical element, e.g. a diffractive optical element, is used to split the returned beam into four components (Figure 8.3). The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity.


The centroid is computed on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor. The weighted centroid value is fed to a control unit that selects a subset (window) of contiguous photo-detectors on the DRPSD. That subset is located around the estimate of the centroid supplied by the CRPSD. Then the best algorithms for peak extraction can be applied to the portion of interest.
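The cooperation between the two detector types can be sketched as follows. The 32-pixel array and the 16-pixel window match the prototype described later; the synthetic light distribution, the noise level and the use of a windowed centroid as the peak-extraction algorithm are illustrative assumptions:

    import numpy as np

    N_PIXELS, WINDOW = 32, 16

    def select_window(centroid_estimate):
        # Centre the read-out window on the CRPSD centroid estimate,
        # clamped so that it stays inside the array.
        start = int(round(centroid_estimate)) - WINDOW // 2
        start = max(0, min(start, N_PIXELS - WINDOW))
        return start, start + WINDOW

    def peak_in_window(pixels, start, end):
        # Refine the spot position with a local centroid; any other
        # peak-extraction algorithm could be substituted here.
        idx = np.arange(start, end)
        return float(np.sum(idx * pixels[start:end]) / np.sum(pixels[start:end]))

    # Synthetic frame: a narrow spot near pixel 21.3 plus a little noise.
    rng = np.random.default_rng(0)
    pixels = np.exp(-0.5 * ((np.arange(N_PIXELS) - 21.3) / 1.2) ** 2)
    pixels += 0.01 * rng.random(N_PIXELS)

    start, end = select_window(20.0)           # coarse estimate from the CRPSD
    print(peak_in_window(pixels, start, end))  # refined answer, close to 21.3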


9. PROPOSED SENSOR: COLORANGE

9.1 ARCHITECTURE


Figure 9.1 shows the proposed architecture for the colour range chip.

Figure 9.1 The proposed architecture for the colour range chip

An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation entering the system is split into four components by a diffractive optical element, as shown in Figure 8.3. The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor.
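A sketch of how the three CRPSD channels could yield colour as well as position information; the electrode currents and calibration gains below are invented placeholders:

    def crpsd_reading(i1, i2):
        # One CRPSD: total intensity is I1 + I2, and the normalized
        # centroid is (I1 - I2) / (I1 + I2), the current-ratio method.
        total = i1 + i2
        return total, (i1 - i2) / total

    currents = {"R": (4.0, 2.0), "G": (3.0, 3.0), "B": (1.0, 1.5)}  # uA, assumed
    gains = {"R": 1.0, "G": 0.9, "B": 1.3}    # hypothetical calibration gains

    reflectance, centroids = {}, []
    for channel, (i1, i2) in currents.items():
        total, centroid = crpsd_reading(i1, i2)
        reflectance[channel] = gains[channel] * total
        centroids.append(centroid)

    print("RGB reflectance (a.u.):", reflectance)
    print("mean normalized centroid:", sum(centroids) / len(centroids))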


The weighted centroid value is fed to a control unit that selects a subset (window) of contiguous photo-detectors on the DRPSD. That subset is located around the estimate of the centroid supplied by the CRPSD. Then, the best algorithms for peak extraction can be applied to the portion of interest.

9.2 32-PIXEL PROTOTYPE CHIP


Figure 9.2 shows the architecture of a first prototype chip of a DRPSD with a selectable readout window, for which preliminary experimental results are available. This is the first block of a more complex chip that will include all the components illustrated in Figure 8.3.

Figure 9.2 Block diagram of the 32-pixel prototype array.


The prototype chip consists of an array of 32 pixels with related readout channels and has been fabricated in a 0.8 µm commercial CMOS process. The novelties implemented consist of a variable gain in the readout channels and a selectable readout window of 16 contiguous pixels. Both features are necessary to comply with the requirements of 3D single-laser-spot sensors, i.e., a linear dynamic range of at least 12 bits and a high 3D data throughput. In the prototype, many of the signals which, in the final system, are supposed to be generated by the CRPSDs are generated by means of external circuitry. The large dimensions of the pixel are required, on the one hand, to cope with speckle noise and, on the other, to facilitate system alignment. Each pixel is provided with its own readout channel for parallel reading. The channel contains a charge amplifier (CA) and a correlated double sampling (CDS) circuit. To span 12 bits of dynamic range, the integrating capacitor can assume five different values. In the prototype chip, the proper integrating capacitor value is selected by means of the external switches C0-C4. In the final sensor, however, the proper value will be set automatically by on-chip circuitry on the basis of the total light intensity as calculated by the CRPSDs.

During normal operation, all 32 pixels are first reset to their bias value and then left to integrate the light for a period of 12 µs. Within this time, the CRPSDs and an external processing unit estimate both the spot position and its total intensity, and those parameters are fed to the window selection logic. After that, 16 contiguous pixels, as addressed by the window selection logic, are read out in 52 µs, for a total frame time of 64 µs. Future sensors will operate at full speed, i.e., an order of magnitude faster. The window selection logic, LOGIC_A, receives the address of the central pixel of the 16-pixel window and calculates the addresses of the starting and ending pixels. The analog value at the output of each CA within the addressed window is sequentially put on the bit line by a decoding logic, DECODER, and read by the video amplifier. LOGIC_A also generates synchronization and end-of-frame signals, which are used by the external processing units. LOGIC_B, instead, is devoted to the generation of the logic signals that drive both the CA and the CDS blocks. To add flexibility, the integration time can also be changed by means of the external switches T0-T4. The chip has been tested, and its functionality has proven to be in agreement with specifications.
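The role of the five selectable integrating capacitor values can be illustrated with the following sketch. The capacitor values and the full-scale voltage are assumptions made for the example; the report states only that five values are available to span the 12-bit dynamic range:

    CAPS_PF = [0.1, 0.4, 1.6, 6.4, 25.6]   # hypothetical capacitor set (pF)
    V_FULL_SCALE = 2.0                     # assumed output swing of the CA (V)
    T_INT = 12e-6                          # integration time from the text (s)

    def select_capacitor(photocurrent_a):
        # Pick the smallest capacitor (highest gain) whose integrated
        # output V = I * T / C stays below full scale, mimicking what
        # the on-chip logic would do from the CRPSD intensity estimate.
        for c_pf in CAPS_PF:
            v_out = photocurrent_a * T_INT / (c_pf * 1e-12)
            if v_out <= V_FULL_SCALE:
                return c_pf, v_out
        return CAPS_PF[-1], V_FULL_SCALE   # saturated even on the largest C

    for i_photo in (1e-9, 1e-6):           # weak and strong return signals (A)
        c, v = select_capacitor(i_photo)
        print(f"I = {i_photo:.0e} A -> C = {c} pF, Vout = {v:.2f} V")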


11. ADVANTAGES

Reduced size and cost
Better resolution at a lower system cost
High reliability, as required for high-accuracy 3D vision systems
Generation of complete images of visible surfaces that are rather featureless to the human eye or a video camera

12. DISADVANTAGE

The elimination of all stray light in an optical system requires sophisticated techniques.


13. APPLICATIONS

Intelligent digitizers capable of measuring colour and 3D shape accurately and simultaneously
Development of hand-held 3D cameras
Multi-resolution random-access laser scanners for fast search and tracking of 3D features


14. CONCLUSION
The results obtained so far show that optical sensors have reached a level of development and reliability that suits them for high-accuracy 3D vision systems. The availability of standard fabrication technologies and the acquired know-how in design techniques allow the implementation of application-specific optical sensors: opto-ASICs. The trend shows that the use of low-cost CMOS technology leads to competitive optical sensors. Furthermore, post-processing modules, for example anti-reflection coating deposition and RGB filter deposition to enhance sensitivity and enable colour sensing, are at the final certification stage and will soon be available in standard fabrication technologies. The work on the Colorange is being finalized, and work has started on a new, improved architecture.


REFERENCES

[1] L. Gonzo, A. Simoni, M. Gottardi, D. Stoppa, J.-A. Beraldin, "Sensors optimized for 3D digitization", IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 3, pp. 903-908, June 2003.
[2] P. Schafer, R. D. Williams, G. K. Davis, R. A. Ross, "Accuracy of position detection using a position sensitive detector", IEEE Transactions on Instrumentation and Measurement, vol. 47, no. 4, pp. 914-918, August 1998.
[3] J.-A. Beraldin, "Design of Bessel type pre-amplifiers for lateral effect photodiodes", International Journal of Electronics, vol. 67, pp. 591-615, 1989.
[4] A. M. Dhake, Television and Video Engineering, Tata McGraw-Hill.
[5] X. Arreguit, F. A. van Schaik, F. V. Bauduin, M. Bidiville, E. Raeber, "A CMOS motion detector system for pointing devices", IEEE Journal of Solid-State Circuits, vol. 31, pp. 1916-1921, Dec. 1996.
[6] P. Aubert, H. J. Oguey, R. Vuilleumier, "Monolithic optical position encoder with on-chip photodiodes", IEEE Journal of Solid-State Circuits, vol. 23, pp. 465-473, April 1988.
[7] K. Thyagarajan, A. K. Ghatak, Lasers: Theory and Applications, Plenum Press.
[8] N. H. E. Weste, K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Low Price Edition.


CONTENTS

INTRODUCTION
COLOUR 3-D IMAGING TECHNOLOGY
LASER SENSORS FOR 3-D IMAGING
POSITION SENSITIVE DETECTORS
PROPOSED SENSOR: COLORANGE
ADVANTAGES
DISADVANTAGES
APPLICATIONS
FUTURE SCOPE
CONCLUSION
REFERENCES


ABSTRACT

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structures of these objects, either to recognize or to measure their dimension, two basic vision strategies are available.

The first strategy, known as passive vision, attempts to analyze the structure of the scene under ambient light. In contrast, the second, known as active vision, attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with 2-D imaging systems. Moreover, with laser-based approaches, the 3-D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Thus the task of processing 3-D data is greatly simplified.


ACKNOWLEDGEMENT

I extend my sincere gratitude to Prof. Sasikumar, Head of Department, for sharing his invaluable knowledge and wonderful technical guidance.

I express my thanks to our staff advisor, Mrs. Remola Joy, for her kind cooperation and guidance in preparing and presenting this seminar.

I also thank all the other faculty members of ECE department and my friends for their help and support.


Many devices have been built or considered in the past for measuring the position of a light beam. Among those, one finds continuous response position sensitive detectors (CRPSD) and discrete response position sensitive detectors (DRPSD) [9-20]. The category CRPSD includes lateral effect photodiodes and geometrically shaped photodiodes (wedges or segmented). DRPSDs, on the other hand, comprise detectors such as charge-coupled devices (CCD) and arrays (1-D or 2-D) of photodiodes equipped with a multiplexer for sequential reading. One cannot achieve the high measurement speed found with continuous response position sensitive detectors and keep the same accuracy as with discrete response position sensitive detectors. Two basic types of devices have been proposed in the literature.

A CRPSD provides the centroid of the light distribution with a very fast response time (on the order of 10 MHz) [20]. On the other hand, DRPSDs are slower, because all the photo-detectors have to be read sequentially before the location of the real peak of the light distribution can be measured [21]. People try to cope with the detector that they have chosen; in other words, they pick a detector for a particular application and accept the tradeoffs inherent in their choice of a position sensor. Consider the situation depicted in Figure 9.3: a CRPSD would provide A as an answer, but a DRPSD can provide B, the desired response. This situation occurs frequently in real applications. The elimination of all stray light in an optical system requires sophisticated techniques that increase the cost of a system. Also, in some applications, background illumination cannot be completely eliminated, even with optical light filters.

We propose to use the best of both worlds. Theory predicts that a CRPSD provides a very precise measurement of the centroid compared with a DRPSD (because of higher spatial resolution). By precision, we mean measurement uncertainty; it depends, among other things, on the signal-to-noise ratio, the quantization noise, and the sampling noise. In practice, precision is important, but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution. Accuracy is understood as closeness to the true value of a quantity. DRPSDs are very accurate (because of the knowledge of the distribution) but can be less precise (due to the spatial sampling of the light distribution).
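The A-versus-B situation of Figure 9.3 is easy to reproduce numerically. In the sketch below all shapes and amplitudes are invented: a broad stray-light pedestal pulls the full-array centroid (the CRPSD answer, A) away from the narrow true peak, while a windowed centroid (the DRPSD answer, B) recovers it:

    import numpy as np

    x = np.arange(256, dtype=float)
    peak = np.exp(-0.5 * ((x - 90.0) / 2.0) ** 2)            # narrow laser peak
    stray = 0.15 * np.exp(-0.5 * ((x - 180.0) / 40.0) ** 2)  # broad stray light
    profile = peak + stray

    def centroid(values, positions):
        return float(np.sum(values * positions) / np.sum(values))

    a = centroid(profile, x)                   # full-array centroid (CRPSD)
    b = centroid(profile[80:100], x[80:100])   # centroid in a window (DRPSD)

    print(f"A = {a:.1f} (pulled toward the stray light)")
    print(f"B = {b:.1f} (close to the true peak at 90)")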


Figure 9.3 A typical situation where stray light blurs the measurement of the real but much narrower peak. A CRPSD would provide A as an answer, but a DRPSD can provide B, the desired response.

Their slow speed is due to the fact that all the pixels of the array have to be read, although they don't all contribute to the computation of the peak. In fact, what is required for the measurement of the light distribution peak is only a small portion of the discrete detector; hence the new smart detector. Once the pertinent light distribution is available, one can compute the location of the desired peak very accurately. Figure 10.1 shows an example of the embodiment of the new smart position sensor for single light spot measurement in the context of 3D and colour measurement. An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation entering the system is split into four components by a diffractive optical element. The white zero-order component is directed onto the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip or off-chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor [20]. The centroid value is fed to a control unit that selects a subset of contiguous photo-detectors on the DRPSD. That subset is located around the estimate of the centroid supplied by the CRPSD. Then, the best algorithms for peak extraction can be applied to the light distribution of interest.

10. IMPLEMENTATION AND EXPERIMENTAL RESULTS


We present here the architecture and preliminary experimental results of a first prototype chip of a DRPSD with a selectable readout window. This is the first block of a more complex chip that will include all the components illustrated in Figure 10.1. The prototype chip consists of an array of 32 pixels with related readout channels and has been fabricated in a 0.8 µm commercial CMOS process [22]. The sensor architecture is typical of linear-array optical sensors [23] and is shown in Figure 9.2. The novelties implemented consist of a variable gain in the readout channels and a selectable readout window of 16 contiguous pixels. Both features are necessary to comply with the requirements of 3D single-spot vision systems: a linear dynamic range of at least 12 bits and a fast readout. Of course, in the prototype, many of the signals which, in the final system, are supposed to be generated by the CRPSDs are generated by means of external circuitry. As stated above, the photosensor array contains 32 pixels with a pitch of 50 µm, each pixel having a sensitive area of 48 x 500 µm2. The large dimensions of the pixel are required, on the one hand, to cope with speckle noise [8] and, on the other, to facilitate system alignment. Each pixel is provided with its own readout channel for parallel reading. The channel contains a charge amplifier (CA) and a correlated double sampling circuit (CDS). To span 12 bits of dynamic range, the integrating capacitor of the CA can assume five different values (CAP). In the prototype chip, the proper integrating capacitor value is externally selected by means of the switches C0-C4. In the final sensor, however, the proper capacitance value will be set automatically by on-chip circuitry, on the basis of the total light intensity as calculated by the CRPSDs. During normal operation, all 32 pixels are first reset to their bias value and then left to integrate the light for a period of 12 µs. Within this time, the CRPSDs and an external processing unit are supposed to estimate both the spot position and its total intensity and to feed those parameters to the window selection logic, LOGIC_A, and to the gain selection logic, LOGIC_B. After that, 16 contiguous pixels, as addressed by the window selection logic, are read out in 52 µs, for a total frame time of 64 µs.
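The correlated double sampling stage mentioned above can be illustrated with a toy model: the same unknown offset (reset level, amplifier offset) appears in both samples and therefore cancels in their difference. All voltage values below are invented:

    import random

    random.seed(1)

    def read_pixel_with_cds(signal_v, offset_v, noise_v=0.001):
        # Sample once just after reset and once after integration; the
        # pixel's fixed offset is common to both samples and drops out.
        reset_sample = offset_v + random.gauss(0.0, noise_v)
        signal_sample = offset_v + signal_v + random.gauss(0.0, noise_v)
        return signal_sample - reset_sample

    # Three pixels with the same true signal but different offsets.
    for offset in (0.10, 0.25, -0.05):
        out = read_pixel_with_cds(0.500, offset)
        print(f"offset {offset:+.2f} V -> CDS output {out:.3f} V")  # ~0.500 V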


Figure 10.1 Artistic view of a smart sensor for accurate, precise and rapid light position measurement

The window selection logic, LOGIC_A, receives the address of the central pixel of the 16-pixel window and calculates the addresses of the starting and ending pixels. The analog value at the output of each CA within the addressed window is sequentially put on the bit line by a decoding logic, DECODER, and read by the video amplifier. LOGIC_A also generates synchronization and end-of-frame signals, which are used by the external processing units. LOGIC_B, instead, is devoted to the generation of the logic signals that drive both the CA and the CDS blocks. To add flexibility, the integration time can also be changed by means of the external switches T0-T4. The chip has been tested, and its functionality has been proven to be in agreement with specifications. Figures 10.2 and 10.3 illustrate the experimental results relative to the spectral responsivity and the power responsivity, respectively.


Figure 10.2 Spectral responsivity of the photo-elements

The spectral responsivity, obtained by measuring the response of the photo-elements to a normally impinging beam of varying wavelength, is in agreement with data reported in the literature [23]. Its maximum value of ~0.25 A/W is found around λ ≈ 660 nm. The several peaks in the curve are due to multiple reflections of light passing through the oxide layers on top of the photosensitive area. The power responsivity has been measured by illuminating the whole array with a white light source and measuring the pixel response as the light power is increased. As expected, the curve can be divided into three main regions: a far-left region dominated by photo-element noise, a central region where the photo-element response is linear with the impinging power, and a saturation region. The values of the slope, linearity and dynamic range of the central region have been calculated for three chips.


Figure 10.3 Normalized power responsivity measured on two different samples.


Figure 10.4 Sensor response to a moving laser spot


2. OVERVIEW OF 3D IMAGING TECHNIQUES


3D imaging sensors generally operate by projecting (in the active form) or acquiring (in the passive form) electromagnetic energy onto/from an object, followed by recording the transmitted or reflected energy. The most important example of transmission gauging is industrial computed tomography (CT), which uses high-energy X-rays and measures the radiation transmitted through the object. Reflection sensors for shape acquisition can be subdivided into non-optical and optical sensing. Non-optical sensing includes acoustic sensors (ultrasonic, seismic), electromagnetic sensors (infrared, ultraviolet, microwave radar, etc.) and others. These techniques typically measure distances to objects by measuring the time required for a pulse of sound or microwave energy to bounce back from an object. In reflection optical sensing, light carries the measurement information. There is a remarkable variety of 3D optical techniques, and their classification is not unique. In this section, we mention the more prominent approaches, and we classify them as shown in Table 2.1.

Table 2.1 Classification of 3D imaging techniques


The 3D techniques are based on optical triangulation, on time delay, and on the use of monocular images. They are classified into passive and active methods. In passive methods, the reflectance of the object and the illumination of the scene are used to derive the shape information: no active device is necessary. In the active form, suitable light sources are used as the internal vector of information. A distinction is also made between direct and indirect measurements. Direct techniques result in range data, i.e., in a set of distances between the unknown surface and the range sensor. Indirect measurements are inferred from monocular images and from prior knowledge of the target properties; they result either in range data or in surface orientation. Excellent reviews of optical methods and range sensors for 3D measurement are presented in references [1-7]. In [1], a review of 20 years of development in the field of 3D laser imaging, with emphasis on commercial systems available in 2004, is given. Comprehensive summaries of earlier techniques and systems are provided in the other publications. A survey of 3D imaging techniques is also given in [8] and in [9], in the wider context of both contact and contactless 3D measurement techniques for the reverse engineering of shapes.


4.1 Laser triangulators


Both single-point triangulators and laser stripes belong to this category. They are all based on the active triangulation principle [17]. Figure 4.2 shows a typical system configuration. Points OP and OC are the exit and the entrance pupils of a laser source and of a camera, respectively. Their mutual distance is the baseline d. The optical axes zP and zC of the laser and the camera form an angle θ.

Figure 4.2 Schematics of the triangulation principle.

The laser source generates a narrow beam, impinging the object at point S (single-point triangulators). The backscattered beam is imaged at point S' on the image plane. The measurement of the location (iS', jS') of image point S' defines the line of sight S'OC and, by means of simple geometry, yields the position of S. The measurement of the surface is achieved by scanning. In a conventional triangulation configuration, a compromise is necessary between the field of view (FOV), the measurement resolution and uncertainty, and the shadow effects due to large values of the angle θ. To overcome this limitation, a method called synchronized scanning has been proposed. Using the approach illustrated in [18, 19], a large field of view can be achieved with a small angle θ, without sacrificing range measurement precision.
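The geometry of Figure 4.2 leads to a closed-form range equation. The sketch below assumes a convergent-beam configuration, with the camera pinhole at the origin, the laser exit pupil offset by the baseline d, and the beam tilted by θ toward the camera axis; all numerical parameters are arbitrary:

    import math

    def triangulation_range(x_image_mm, f_mm, d_mm, theta_deg):
        # Projection: x / f = X / Z.  Laser beam: X = d - Z * tan(theta).
        # Eliminating X gives Z = f * d / (x + f * tan(theta)).
        t = math.tan(math.radians(theta_deg))
        return f_mm * d_mm / (x_image_mm + f_mm * t)

    # Assumed parameters: f = 25 mm, baseline d = 100 mm, theta = 20 degrees.
    for x in (1.0, 2.0, 3.0):   # measured spot positions on the sensor (mm)
        print(f"x = {x:.1f} mm -> Z = {triangulation_range(x, 25, 100, 20):.0f} mm")

Differentiating the relation gives δZ ≈ Z²·δx/(f·d): a larger baseline d improves the depth resolution, which is exactly the trade-off against shadowing noted in the limitation above.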


Laser stripes exploit the optical triangulation principle shown in Figure 4.2. However, in this case, the laser is equipped with a cylindrical lens, which expands the light beam along one direction. Hence, a plane of light is generated, and multiple points of the object are illuminated at the same time. In the figure, the illuminated points belong to the intersection between the light plane and the unknown object (line AB). The measurement of the location of all the image points from A to B at the image plane allows the determination of the 3D shape of the object in correspondence with the illuminated points. For dense reconstruction, the plane of light must scan the scene [20-22]. An interesting enhancement of the basic principle exploits the Biris principle [23]: a laser source produces two stripes by passing through the Bi-Iris component (a mask with two apertures). The measurement of the point location at the image plane can then be carried out on a differential basis, which decreases the dependence on the object reflectivity and on the laser speckles. One of the most significant advantages of laser triangulators is their accuracy, and their relative insensitivity to illumination conditions and surface texture effects. Single-point laser triangulators are widely used in the industrial field for the measurement of distances, diameters and thicknesses, as well as in surface quality control applications. Laser stripes are becoming popular for quality control, reverse engineering, and the modeling of heritage. Examples are the Vivid 910 system (Konica Minolta, Inc.), the sensors of the SmartRay series (SmartRay GmbH, Germany) and the ShapeGrabber systems (ShapeGrabber Inc., CA, USA). 3D sensors are also designed for integration with robot arms, to implement pick-and-place operations in de-palletizing stations. An example is the Ranger system (Sick, Inc.). The NextEngine system (NextEngine, Inc., CA, USA) and the TriAngles scanner (ComTronics, Inc., USA) deserve particular attention, since they show how the field is progressing. These two systems, in fact, break the price barrier of laser-based systems, which is a limiting factor in their diffusion.

4.2 Structured light


Structured light based sensors share the active triangulation approach mentioned above. However, instead of scanning the surface, they project bi-dimensional patterns of non-coherent light and elaborate them to obtain the range information for each viewed point simultaneously. A single pattern as well as multiple patterns can be projected.

In both cases, the single light plane shown in Figure 4.2 is replaced by a bundle of geometrical planes, schematically shown in Figure 4.3. The planes are indexed along the LP coordinate at the projector plane. The depth information at object point S is obtained as the intersection between the line of sight SS' and the plane indexed by LP = LPS [25]. The critical point is to guarantee that different object points are assigned different indexes along the LP coordinate. To this aim, a large number of projection strategies have been developed. The projection of grid patterns [26, 27], of dot patterns [28], of multiple vertical slits [29] and of multi-colour projection patterns [30] have been extensively studied. Particular research has been carried out on the projection of fringe patterns [31, 32], which are particularly suitable for maximizing the measurement resolution. As an example, three fringe profiles can be considered: fringes of sinusoidal profile, the superposition of two patterns of sinusoidal fringes at different frequencies [33], and fringes of rectangular profile [34].

Figure 4.3 Principle of measurement in structured light systems


4.3 Photogrammetry
Photogrammetry obtains reliable 3D models by means of photographs. In digital photogrammetry, digital images are used. The elaboration pipeline consists basically of the following steps: camera calibration and orientation, image point measurement, 3D point cloud generation, surface generation and texture mapping. Camera calibration is crucial for obtaining accurate models. Image measurement can be performed by means of automatic or semi-automatic procedures. In the first case, very dense point clouds are obtained, even if at the expense of inaccuracy and missing parts. In the second case, very accurate measurements are obtained, over a significantly smaller number of points, and at increased elaboration and operator times [40-42]. Typical applications of photogrammetry, besides aerial photogrammetry, are in close-range photogrammetry, for city modeling, medical imaging and heritage [43-48]. Reliable packages are now commercially available for complete scene modeling or for sensor calibration and orientation. Examples are the Australis software (Photometrics, Australia), PhotoModeler (Eos Systems, Inc., Canada), the Leica Photogrammetry Suite (Leica Geosystems GIS & Mapping), and Menci (Menci Software srl, Italy). Both stereo vision and photogrammetry share the objective of obtaining 3D models from images and are currently used in computer vision applications. The methods rely on the use of camera calibration techniques. The elaboration chain is basically the same, even if linear models for camera calibration and simplified point measurement procedures are used in computer vision applications. In fact, in computer vision the automation level of the overall process is more important than the accuracy of the models [44]. In recent times, the photogrammetric and computer vision communities have proposed combined procedures for increasing both the measurement performance and the automation level. Examples are the combined use of laser scanning and photogrammetry [45-47] and the use of stereo vision and photogrammetry for facade reconstruction and object tracking [48, 50]. On the market, combinations of structured light projection with photogrammetry are available (Atos II scanner, GOM GmbH, Germany) for improved measurement performance.


4.4 Time of Flight


Surface range measurement can be made directly using the radar time-of-flight principle. The emitter unit generates a laser pulse, which impinges on the target surface. A receiver detects the reflected pulse, and suitable electronics measures the round-trip travel time of the returning signal and its intensity. Single-point sensors perform point-to-point distance measurement, and scanning devices are combined with the optical head to cover large bi-dimensional scenes. Large-range sensors allow measurement ranges from 15 m to 100 m (reflective markers must be put on the target surfaces); medium-range sensors can acquire 3D data over shorter ranges. The measurement resolution varies with the range. For large measuring ranges, time-of-flight sensors give excellent results. On the other hand, for smaller objects, about one metre in size, attaining 1 part per 1,000 accuracy with time-of-flight radar requires very high-speed timing circuitry, because the time differences are in the pico-second range. A few amplitude- and frequency-modulated radars have shown promise for close-range distance measurement [37, 51, 52]. In many applications, the technique is range-limited by the allowable power levels of laser radiation, determined by laser safety considerations. Additionally, time-of-flight sensors face difficulties with shiny surfaces, which reflect little back-scattered light energy except when oriented perpendicularly to the line of sight.
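The pico-second figure quoted above follows directly from the round-trip relation z = c·t/2; a quick numerical check:

    C = 299_792_458.0   # speed of light (m/s)

    def range_from_round_trip(t_s):
        # The pulse travels to the target and back, hence the factor 2.
        return C * t_s / 2.0

    # Timing resolution needed for 1 mm of range resolution:
    dt = 2 * 1e-3 / C
    print(f"1 mm of range corresponds to {dt * 1e12:.2f} ps of round-trip time")

    # Example: a pulse returning after 100 ns comes from about 15 m away.
    print(f"{range_from_round_trip(100e-9):.2f} m")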

4.5 Interferometry
Interferometric methods operate by projecting a spatially or temporally varying periodic pattern onto a surface, followed by mixing the reflected light with a reference pattern. The reference pattern demodulates the signal to reveal the variation in surface geometry. The measurement resolution is very high, since it is a fraction of the wavelength of the laser radiation. For this reason, surface quality control and micro-profilometry are the most explored applications. The use of multiple wavelengths and of double heterodyne detection lengthens the non-ambiguity range. Systems based on this principle are successfully used to calibrate CMMs. Holographic interferometry relies on mixing coherent illumination with different wave vectors.


Holographic methods typically yield range accuracy of a fraction of the light wavelength over microscopic fields of view [5, 53].
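The fraction-of-a-wavelength resolution can be made concrete. In a reflection setup, a height change Δz shifts the interference phase by Δφ = 4πΔz/λ, because the beam travels the height difference twice. The sketch assumes a HeNe wavelength and a 1-degree phase resolution, both chosen only for illustration:

    import math

    def height_from_phase(delta_phi_rad, wavelength_nm):
        # Invert delta_phi = 4 * pi * delta_z / wavelength (reflection).
        return wavelength_nm * delta_phi_rad / (4 * math.pi)

    # Assumed: HeNe laser (632.8 nm), 1 degree of measurable phase change.
    dz = height_from_phase(math.radians(1.0), 632.8)
    print(f"1 degree of phase ~ {dz:.2f} nm of height")   # about 0.88 nm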

4.6 Moiré fringe range contours


Moiré fringe range contours are obtained by illuminating the scene with a light source passing through a grating, and viewing the scene with a laterally displaced camera through an identical second grating. The resulting interference pattern shows equidistant contour lines, but there is no information about the corresponding depth values. In addition, it is impossible to obtain direct information about the sign of the concavities. Moiré methods can have phase discrimination problems when the surface does not exhibit smooth shape variations [54].

4.7 Shape from focusing


Depth and range can be determined by exploiting the focal properties of a lens. A camera lens can be used as a range finder by exploiting the depth-of-field phenomenon: the target image is blurred by an amount proportional to the distance between points on the object and the in-focus object plane. This technique has evolved from a passive approach to an active sensing strategy. In the passive case, surface texture is used to determine the amount of blurring; thus, the object must have surface texture covering the whole surface in order to extract shape. The active version of the method operates by projecting light onto the object, to avoid difficulties in discerning surface texture. Most prior work in active depth from focus has yielded moderate accuracy, up to one part per 400, over the field of view [55]. There is a direct trade-off between depth of field and field of view; satisfactory depth resolution is achieved at the expense of sub-sampling the scene, which in turn requires some form of mechanical scanning to acquire range measurements over the whole scene. Moreover, the spatial resolution is non-uniform: depth resolution is substantially less than the resolution perpendicular to the observation axis. Finally, objects not aligned perpendicularly to the optical axis and having a depth dimension greater than the depth of field will come into focus at different ranges, complicating the scene analysis and interpretation [56].


4.8 Shape from shadows


This technique is a variant of the structured light approach. The 3D model of the unknown object is rebuilt by capturing the shadow of a known object projected onto the target when the light is moving. Low cost and simple hardware are the main advantages of this approach, even if at the expense of low accuracy [57].

4.9 Shape from texture


The idea is to find possible transformations of texture elements (texels) to reproduce the object surface orientation. For example, given a planar surface, the image of a circular texel is an ellipse with varying values of its axes depending on its orientation [58]. A review of the techniques developed in this area is in reference [37]. These methods yield simple, low cost hardware. However, the measurement quality is low.

4.10 Strengths and weaknesses of 3D imaging techniques


The main characteristics of optical range imaging techniques are summarized in Table 2.1. The comments in the table are quite general in nature, and a number of exceptions are known. The strengths and weaknesses of the different techniques are strongly application-dependent. The common attribute of being non-contact is an important consideration in those applications characterized by fragile or deformable objects, on-line measurements, or hostile environments. Acquisition time is another important aspect, when the overall cost of the measurement process suggests reducing the operator time, even at the expense of deteriorating the quality of the data. On the other hand, active vision systems using a laser beam that illuminates the object inherently involve safety considerations and possible surface interaction effects. In addition, the speckle effect introduces disturbances that represent a serious limit to the accuracy of the measurements [60]. Fortunately, recent advances in LED technology yield a valuable solution to this problem, since LEDs are suitable candidates as light projectors in active triangulation systems. The dramatic evolution of CPUs and memories has led in the last three years to elaboration performances that were by far impossible before, at low cost.


For these reasons, techniques that are computation-demanding (e.g., passive stereo vision) are now more reliable and efficient. The selection of which sensor type should be used to solve a given depth measurement problem is a very complex task that must consider (i) the measurement time, (ii) the budget, and (iii) the quality expected from the measurement. In this respect, 3D imaging sensors may be affected by missing or bad-quality data. The reasons are related to the optical geometry of the system, the type of projector and/or acquisition optics, the measurement technique and the characteristics of the target objects. The sensor performance may depend on the dimension, the shape, the texture, the temperature and the accessibility of the object. Relevant factors that influence the choice are also the ruggedness, the portability and the adaptiveness of the sensor to the measurement problem, the ease of data handling, and the simplicity of use of the sensor.


EXPERIENCE IN THE APPLICATION OF 3D IMAGING SENSORS

A wide range of applications is possible nowadays, thanks to the availability of engineered sensors, of laboratory prototypes, and of suitable software environments to elaborate the measurements. This section presents a brief insight into the most important fields where 3D imaging sensors can be fruitfully used. For each one, an application example that has been carried out in our Laboratory is presented.

13.1 Industrial applications

Typical measurement problems in the industrial field deal with (i) the control of machined surfaces, for the quantitative measurement of roughness, waviness and form factor, (ii) the dimensional measurement and quality control of products, and (iii) the reverse engineering of complex shapes.

13.2 Surface quality control

Microprofilometry techniques and sensors are the best suited to carry out the measurement processes in the surface control field. In this application, the objective is to gauge the 3D quantitative

FUTURE SCOPE
Anti-reflection coating film deposition and RGB filter deposition can be used to enhance sensitivity and to enable colour sensing.
