Seminar Report
Sensors on 3D Digitization
1. INTRODUCTION
Digital 3D imaging can benefit from advances in VLSI technology to accelerate its deployment in many fields, such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information is relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.
Dept. of ECE
Seminar Report 11
Passive vision attempts to analyze the structure of the scene under ambient light. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations and then processed to find the correlations between them. Once matching points are identified, the geometry can be computed.
Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
4. AUTOSYNCHRONIZED SCANNER
Figure 4.1
The auto-synchronized scanner, depicted schematically in Figure 4.1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z coordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used to measure the colour map (reflectance map) of the scene.
Advantage
Triangulation is the most precise method of 3D measurement.
Limitation
Increasing the accuracy requires increasing the triangulation distance. The larger the triangulation distance, the more shadows appear on the scanned object and the larger the scanning head must be made.
The PSD is a precision semiconductor optical sensor that produces output currents related to the centre of mass of the light incident on its surface.
While several design variations exist for PSDs, the basic device can be described as a large-area silicon p-i-n photodetector. The detectors are available in single-axis and dual-axis models.
Figure illustrates the basic structure of a p-n type single-axis lateral-effect photodiode. Carriers produced by light impinging on the device are separated in the depletion region and distributed to the two sensing electrodes according to Ohm's law. Assuming equal electrode impedances, the electrode farthest from the centroid of the light distribution collects the least current. The normalized position of the centroid, insensitive to fluctuations in the light intensity, is given by P = (I2 - I1)/(I1 + I2).
The actual position on the detector is found by multiplying P by d/2, where d is the distance between the two sensing electrodes [3]. A CRPSD provides the centroid of the light distribution with a very fast response time (bandwidth on the order of 10 MHz). Theory predicts that a CRPSD provides a much more precise measurement of the centroid than a DRPSD. By precision, we mean measurement uncertainty; it depends, among other things, on the signal-to-noise ratio and the quantization noise. In practice, precision is important but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution.
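As an illustration, the current-ratio calculation above can be sketched in a few lines of Python. The function name and the numeric values (currents in microamps, electrode spacing in mm) are invented examples, not data from the report.

```python
# Minimal sketch of the lateral-effect PSD position calculation described
# above. i1 and i2 are the photocurrents at the two sensing electrodes and
# d is the electrode spacing; all numbers here are illustrative.

def psd_position(i1, i2, d):
    """Centroid position relative to the detector centre (same units as d)."""
    p = (i2 - i1) / (i1 + i2)   # normalized position, intensity-independent
    return p * d / 2            # scale by half the electrode spacing

# A spot offset towards electrode 2 draws more current there:
pos = psd_position(i1=2.0, i2=6.0, d=10.0)  # currents in uA, d in mm
print(pos)  # 2.5 mm from the centre, towards electrode 2
```

Because the ratio cancels the total intensity, the computed position is unaffected when both currents scale together, which is exactly the property exploited above.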
Figure 8.2: In a monochrome system, the incoming beam is split into two components.

Figure 8.2 illustrates single light spot measurement in the context of 3D and colour measurement. In a monochrome range camera, a portion of the reflected radiation entering the system is split into two beams. One portion is directed to a CRPSD, which determines the location of the best window and sends that information to the DRPSD.
In order to measure colour information, a different optical element, e.g., a diffractive optical element, is used to split the returned beam into four components (Figure 7.3). The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed
on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor. The weighted centroid value is fed to a control unit that selects a sub-set (window) of contiguous photo-detectors on the DRPSD. That sub-set is located around the estimate of the centroid supplied by the CRPSD. Then the best algorithms for peak extraction can be applied to the portion of interest.
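The windowing scheme above can be sketched as follows, assuming a 32-pixel DRPSD and a 16-pixel window as in the prototype chip described later; the helper names and the sample data are invented for illustration.

```python
# Sketch of the CRPSD-to-DRPSD windowing scheme: the CRPSD's fast centroid
# estimate selects a window of 16 contiguous DRPSD pixels, and peak
# extraction then runs only on that window.

def select_window(centroid_px, n_pixels=32, width=16):
    """Clamp a `width`-pixel window around the centroid estimate."""
    start = min(max(centroid_px - width // 2, 0), n_pixels - width)
    return start, start + width          # half-open interval [start, end)

def peak_in_window(samples, window):
    """Index of the strongest pixel inside the selected window."""
    start, end = window
    region = samples[start:end]
    return start + max(range(len(region)), key=region.__getitem__)

# 32-pixel line with a narrow peak at pixel 14; centroid estimate is 15:
samples = [0] * 12 + [1, 4, 9, 4, 1] + [0] * 15
window = select_window(centroid_px=15)
print(window)                            # (7, 23)
print(peak_in_window(samples, window))   # 14
```

The clamping in select_window keeps the window fully on the array even when the centroid estimate falls near either edge of the detector.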
An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation entering the system is split into four components by a diffractive optical element, as shown in Figure 4b. The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor.
The weighted centroid value is fed to a control unit that selects a sub-set (window) of contiguous photo-detectors on the DRPSD. That sub-set is located around the estimate of the centroid supplied by the CRPSD. Then the best algorithms for peak extraction can be applied to the portion of interest.
The prototype chip consists of an array of 32 pixels with related readout channels and has been fabricated using a commercial CMOS process. The novelties implemented are a variable gain in the readout channels and a selectable readout window of 16 contiguous pixels. Both features are necessary to comply with the requirements of 3D single-laser-spot sensors, i.e., a linear dynamic range of at least 12 bits and a high 3D data throughput. In the prototype, many of the signals that in the final system are to be generated by the CRPSDs are instead generated by external circuitry. The large pixel dimensions are required on the one hand to cope with speckle noise and, on the other, to facilitate system alignment. Each pixel is provided with its own readout channel for parallel reading. The channel contains a charge amplifier (CA) and a correlated double sampling (CDS) circuit. To span 12 bits of dynamic range, the integrating capacitor can assume five different values. In the prototype chip, the integrating capacitor value is selected by means of external switches C0-C4. In the final sensor, however, the proper value will be set automatically by on-chip circuitry on the basis of the total light intensity calculated by the CRPSDs.
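The gain-selection idea can be sketched as follows. The capacitor values, full-scale voltage and function name below are invented for illustration (the report gives no actual component values); the point is only that picking the smallest capacitor keeping the charge amplifier output in range extends the usable dynamic range.

```python
# Hedged sketch of integrating-capacitor selection: one of five capacitor
# values (cf. switches C0-C4) is chosen from the total light intensity so
# that the charge-amplifier output V = Q/C stays within its linear swing.
# All numbers here are invented for illustration.

C_VALUES_PF = [0.1, 0.4, 1.6, 6.4, 25.6]   # hypothetical capacitors, pF
V_FULL_SCALE = 2.0                          # hypothetical output swing, V

def pick_capacitor(charge_pC):
    """Return the smallest capacitor keeping V = Q/C below full scale."""
    for c in C_VALUES_PF:
        if charge_pC / c <= V_FULL_SCALE:
            return c
    return C_VALUES_PF[-1]                  # saturate at the largest value

print(pick_capacitor(0.15))  # 0.1 pF: V = 1.5 V, within range
print(pick_capacitor(3.0))   # 1.6 pF: V = 1.875 V, within range
```

Choosing the smallest adequate capacitor maximizes the conversion gain for weak returns while avoiding saturation on strong ones.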
During normal operation, all 32 pixels are first reset to their bias value and then left to integrate the light for a period of 1 ms. Within this time, the CRPSDs and an external processing unit estimate both the spot position and its total intensity, and these parameters are fed to the window selection logic. After that, the 16 contiguous pixels addressed by the window selection logic are read out in 5 ms, for a total frame time of 6 ms. Future sensors will operate at full speed, i.e., an order of magnitude faster. The window selection logic, LOGIC_A, receives the address of the central pixel of the 16-pixel window and calculates the addresses of the starting and ending pixels. The analog value at the output of each CA within the addressed window is sequentially put on the bit line by a decoding logic, DECODER, and read by the video amplifier. LOGIC_A also generates synchronization and end-of-frame signals, which are used by the external processing units. LOGIC_B is instead devoted to the generation of the logic signals that drive both the CA and the CDS blocks. To add flexibility, the integration time can also be changed by means of the external switches
T0-T4. The chip has been tested and its functionality proven to be in agreement with specifications.
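The timing figures quoted above can be checked with simple arithmetic:

```python
# Back-of-envelope check of the frame timing described above: a 1 ms
# integration followed by a 5 ms readout of the 16-pixel window gives a
# 6 ms frame, i.e. about 167 frames per second. The projected
# order-of-magnitude speed-up would bring this to roughly 1,700 frames/s.

INTEGRATION_MS = 1.0
READOUT_MS = 5.0

frame_ms = INTEGRATION_MS + READOUT_MS
frames_per_second = 1000.0 / frame_ms
print(frame_ms)                   # 6.0
print(round(frames_per_second))   # 167
```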
11. ADVANTAGES
Reduced size and cost
Better resolution at a lower system cost
High reliability, as required for high-accuracy 3D vision systems
Generation of complete images of visible surfaces that are rather featureless to the human eye or a video camera
12. DISADVANTAGE
The elimination of all stray light in an optical system requires sophisticated techniques.
13. APPLICATIONS
Intelligent digitizers capable of measuring colour and 3D shape accurately and simultaneously
Development of hand-held 3D cameras
Multi-resolution random-access laser scanners for fast search and tracking of 3D features
14. CONCLUSION
The results obtained so far show that optical sensors have reached a high level of development and reliability that makes them suited for high-accuracy 3D vision systems. The availability of standard fabrication technologies and the acquired know-how in design techniques allow the implementation of application-specific optical sensors: Opto-ASICs. The trend shows that the use of low-cost CMOS technology leads to competitive optical sensors. Furthermore, post-processing modules, for example anti-reflective coating film deposition and RGB filter deposition to enhance sensitivity and enable colour sensing, are at the final certification stage and will soon be available in standard fabrication technologies. The work on the Colorange is being finalized and work has started on a new, improved architecture.
REFERENCES
[1] L. Gonzo, A. Simoni, A. Gottardi, D. Stoppa, J.-A. Beraldin, "Sensors optimized for 3D digitization," IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 3, pp. 903-908, June 2003.
[2] P. Schafer, R.D. Williams, G.K. Davis, R.A. Ross, "Accuracy of position detection using a position sensitive detector," IEEE Transactions on Instrumentation and Measurement, vol. 47, no. 4, pp. 914-918, August 1998.
[3] J.-A. Beraldin, "Design of Bessel type pre-amplifiers for lateral effect photodiodes," International Journal of Electronics, vol. 67, pp. 591-615, 1989.
[4] A.M. Dhake, Television and Video Engineering, Tata McGraw-Hill.
[5] X. Arreguit, F.A. van Schaik, F.V. Bauduin, M. Bidiville, E. Raeber, "A CMOS motion detector system for pointing devices," IEEE Journal of Solid-State Circuits, vol. 31, pp. 1916-1921, Dec. 1996.
[6] P. Aubert, H.J. Oguey, R. Vuilleumier, "Monolithic optical position encoder with on-chip photodiodes," IEEE Journal of Solid-State Circuits, vol. 23, pp. 465-473, April 1988.
[7] K. Thyagarajan, A.K. Ghatak, Lasers: Theory and Applications, Plenum Press.
[8] N.H.E. Weste, K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Low Price Edition.
CONTENTS
INTRODUCTION
COLOUR 3-D IMAGING TECHNOLOGY
LASER SENSORS FOR 3-D IMAGING
POSITION SENSITIVE DETECTORS
PROPOSED SENSOR - COLORANGE
ADVANTAGES
DISADVANTAGES
APPLICATIONS
FUTURE SCOPE
CONCLUSION
REFERENCES
ABSTRACT
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structures of these objects, either to recognize or to measure their dimension, two basic vision strategies are available.
The first strategy, known as passive vision, attempts to analyze the structure of the scene under ambient light. In contrast, the second, known as active vision, attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with 2-D imaging systems. Moreover, with laser-based approaches, the 3-D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Thus the task of processing 3-D data is greatly simplified.
ACKNOWLEDGEMENT
I extend my sincere gratitude to Prof. Sasikumar, Head of Department, for sharing his invaluable knowledge and for his wonderful technical guidance.
I express my thanks to our staff advisor, Mrs. Remola Joy, for her kind cooperation and guidance in preparing and presenting this seminar.
I also thank all the other faculty members of ECE department and my friends for their help and support.
Many devices have been built or considered in the past for measuring the position of a light beam. Among these, one finds continuous response position sensitive detectors (CRPSD) and discrete response position sensitive detectors (DRPSD) [9-20]. The CRPSD category includes the lateral-effect photodiode and geometrically shaped photodiodes (wedges or segmented). DRPSD, on the other hand, comprise detectors such as charge-coupled devices (CCD) and arrays (1-D or 2-D) of photodiodes equipped with a multiplexer for sequential reading. One cannot achieve the high measurement speed found with continuous response position sensitive detectors while keeping the accuracy of discrete response position sensitive detectors. In Figure 1, we report two basic types of devices that have been proposed in the literature. A CRPSD provides the centroid of the light distribution with a very fast response time (bandwidth on the order of 10 MHz) [20]. DRPSD, on the other hand, are slower because all the photo-detectors have to be read sequentially prior to the measurement of the location of the real peak of the light distribution [21].

People try to cope with the detector that they have chosen; in other words, they pick a detector for a particular application and accept the tradeoffs inherent in their choice of position sensor. Consider the situation depicted in Figure 2: a CRPSD would provide A as an answer, but a DRPSD can provide B, the desired response. This situation occurs frequently in real applications. The elimination of all stray light in an optical system requires sophisticated techniques that increase the cost of a system. Also, in some applications, background illumination cannot be completely eliminated even with optical light filters. We propose to use the best of both worlds. Theory predicts that a CRPSD provides a very precise measurement of the centroid compared with a DRPSD (because of higher spatial resolution). By precision, we mean measurement uncertainty. It depends, among other things, on the signal-to-noise ratio, the quantization noise, and the sampling noise. In practice, precision is important but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution. Accuracy is understood as closeness to the true value of a quantity. DRPSD are very accurate (because of the knowledge of the distribution) but can be less precise (due to spatial sampling of the light distribution).
Figure 9.3: A typical situation where stray light blurs the measurement of the real but much narrower peak. A CRPSD would provide A as an answer, but a DRPSD can provide B, the desired peak.
Their slow speed is due to the fact that all the pixels of the array have to be read even though they do not all contribute to the computation of the peak. In fact, only a small portion of the discrete detector is required for the measurement of the light distribution peak; hence the new smart detector. Once the pertinent light distribution is available, one can compute the location of the desired peak very accurately. Figure 3 shows an example of the embodiment of the new smart position sensor for single light spot measurement in the context of 3D and colour measurement. An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation entering the system is split into four components by a diffractive optical element. The white zero-order component is directed onto the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip or off-chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor [20]. The centroid value is fed to a control unit that selects a sub-set of contiguous photo-detectors on the DRPSD. That sub-set is located around the estimate of the centroid supplied by the CRPSD. Then, the best algorithms for peak extraction can be applied to the light distribution of interest.
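The stray-light situation can be reproduced numerically. The distribution below (a broad stray-light triangle plus a narrow peak) is invented for illustration, but it shows why the intensity-weighted centroid (what a CRPSD reports, answer A) differs from discrete peak extraction (what a DRPSD permits, answer B):

```python
# Broad stray light centred at pixel 20 plus a narrow peak at pixel 45:
positions = list(range(64))
signal = [5.0 * (1 - abs(x - 20) / 32) if abs(x - 20) < 32 else 0.0
          for x in positions]
signal = [s + (50.0 if x == 45 else 0.0) for x, s in zip(positions, signal)]

# What a CRPSD reports: the intensity-weighted centroid of everything.
centroid = sum(x * s for x, s in zip(positions, signal)) / sum(signal)
# What a DRPSD allows: peak extraction over the sampled distribution.
peak = max(positions, key=lambda x: signal[x])

print(round(centroid, 1))  # pulled well below 45 by the stray light ("A")
print(peak)                # 45, the true narrow peak ("B")
```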
Figure 10.1: Artistic view of a smart sensor for accurate, precise and rapid light position measurement. The window selection logic, LOGIC_A, receives the address of the central pixel of the 16-pixel window and calculates the addresses of the starting and ending pixels. The analog value at the output of each CA within the addressed window is sequentially put on the bit line by a decoding logic, DECODER, and read by the video amplifier. LOGIC_A also generates synchronization and end-of-frame signals, which are used by the external processing units. LOGIC_B, instead, is devoted to the generation of the logic signals that drive both the CA and the CDS blocks. To add flexibility, the integration time can also be changed by means of the external switches T0-T4. The chip has been tested and its functionality has been proven to be in agreement with specifications. Figures 5 and 6 illustrate the experimental results for spectral responsivity and power responsivity, respectively.
Figure 10.2: Spectral responsivity of the photo-elements. The spectral responsivity, obtained by measuring the response of the photo-elements to a normally impinging beam of varying wavelength, is in agreement with data reported in the literature [23]. Its maximum value of ~0.25 A/W is found around λ ≈ 660 nm. The several peaks in the curve are due to multiple reflections of light passing through the oxide layers on top of the photosensitive area. The power responsivity has been measured by illuminating the whole array with a white light source and measuring the pixel response as the light power is increased. As expected, the curve can be divided into three main regions: a far-left region dominated by photo-element noise, a central region where the photo-element response is linear with the impinging power, and a saturation region. The values of the slope, linearity and dynamic range of the central region have been calculated for three chips.
The 3D techniques are based on optical triangulation, on time delay, and on the use of monocular images. They are classified into passive and active methods. In passive methods, the reflectance of the object and the illumination of the scene are used to derive the shape information: no active device is necessary. In the active form, suitable light sources are used as the internal vector of information. A distinction is also made between direct and indirect measurements. Direct techniques result in range data, i.e., into a set of distances between the unknown surface and the range sensor. Indirect measurements are inferred from monocular images and from prior knowledge of the target properties. They result either in range data or in surface orientation. Excellent reviews of optical methods and range sensors for 3D measurement are presented in references [17]. In [1] a review of 20 years of development in the field of 3D laser imaging with emphasis on commercial systems available in 2004 is given. Comprehensive summaries of earlier techniques and systems are provided in the other publications. A survey of 3D imaging techniques is also given in [8] and in [9], in the wider context of both contact and contactless 3D measurement techniques for the reverse engineering of shapes.
The laser source generates a narrow beam impinging on the object at point S (single-point triangulators). The back-scattered beam is imaged at point S′ on the image plane. The measurement of the location (iS, jS) of image point S′ defines the line of sight through S′ and, by means of simple geometry, yields the position of S. The measurement of the surface is achieved by scanning. In a conventional triangulation configuration, a compromise is necessary between the field of view (FOV), the measurement resolution and uncertainty, and the shadow effects due to large values of the triangulation angle. To overcome this limitation, a method called synchronized scanning has been proposed. Using the approach illustrated in [18, 19], a large field of view can be achieved with a small triangulation angle, without sacrificing range measurement precision. Laser stripes exploit the optical
triangulation principle shown in Figure 1. However, in this case, the laser is equipped with a cylindrical lens, which expands the light beam along one direction. Hence, a plane of light is generated, and multiple points of the object are illuminated at the same time. In the figure, the illuminated points belong to the intersection between the light plane and the unknown object (line AB). The measurement of the location of all the image points from A to B at the image plane allows the determination of the 3D shape of the object in correspondence with the illuminated points. For dense reconstruction, the plane of light must scan the scene [20-22]. An interesting enhancement of the basic principle exploits the Biris principle [23]: a laser source produces two stripes by passing through the Bi-Iris component (a mask with two apertures). The measurement of the point location at the image plane can then be carried out on a differential basis, which decreases the dependence on the object reflectivity and on the laser speckles. One of the most significant advantages of laser triangulators is their accuracy, and their relative insensitivity to illumination conditions and surface texture effects. Single-point laser triangulators are widely used in the industrial field for the measurement of distances, diameters and thicknesses, as well as in surface quality control applications. Laser stripes are becoming popular for quality control, reverse engineering, and the modeling of heritage. Examples are the Vivid 910 system (Konica Minolta, Inc.), the sensors of the SmartRay series (SmartRay GmbH, Germany) and the ShapeGrabber systems (ShapeGrabber Inc., CA, USA). 3D sensors are designed for integration with robot arms, to implement pick-and-place operations in de-palletizing stations. An example is the Ranger system (Sick, Inc.). The NextEngine system (NextEngine, Inc., CA, USA) and the TriAngles scanner (ComTronics, Inc., USA) deserve particular attention, since they show how the field is progressing. These two systems, in fact, break the price barrier of laser-based systems, which has been a limiting factor in their diffusion.
bundle of geometrical planes, schematically shown in Figure 2. The planes are indexed along the LP coordinate at the projector plane. The depth information at object point S is obtained as the intersection between the line of sight SS′ and the plane indexed by LP = LPS [25]. The critical point is to guarantee that different object points are assigned to different indexes along the LP coordinate. To this aim, a large number of projection strategies have been developed. The projection of grid patterns [26, 27], of dot patterns [28], of multiple vertical slits [29] and of multi-colour projection patterns [30] has been extensively studied. Particular research has been carried out on the projection of fringe patterns [31, 32], which are particularly suitable for maximizing the measurement resolution. As an example, three fringe patterns are shown in Figure 3. The first is formed by fringes of sinusoidal profile, the second is obtained by the superposition of two patterns with sinusoidal fringes at different frequencies [33], and the last is formed by fringes of rectangular profile [34].
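A sinusoidal fringe pattern of the first kind mentioned above can be sketched as follows; the width and frequency values are arbitrary illustration choices:

```python
# One row of a sinusoidal fringe pattern: intensity varies as
# 0.5*(1 + cos(2*pi*periods*x/width)) across the projector width, so the
# phase recovered at an object point indexes the illuminating plane.

import math

def fringe_row(width, periods):
    """One row of a sinusoidal fringe pattern with `periods` full cycles."""
    return [0.5 * (1 + math.cos(2 * math.pi * periods * x / width))
            for x in range(width)]

row = fringe_row(width=8, periods=1.0)
print([round(v, 2) for v in row])  # bright at x=0, dark at x=4
```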
4.3 Photogrammetry
Photogrammetry obtains reliable 3D models by means of photographs; in digital photogrammetry, digital images are used. The elaboration pipeline consists basically of the following steps: camera calibration and orientation, image point measurement, 3D point cloud generation, surface generation and texture mapping. Camera calibration is crucial for obtaining accurate models. Image measurement can be performed by means of automatic or semi-automatic procedures. In the first case, very dense point clouds are obtained, even if at the expense of inaccuracy and missing parts. In the second case, very accurate measurements are obtained, over a significantly smaller number of points, at increased elaboration and operator times [40-42]. Typical applications of photogrammetry, besides aerial photogrammetry, are in close-range photogrammetry, for city modeling, medical imaging and heritage [4843]. Reliable packages are now commercially available for complete scene modeling or for sensor calibration and orientation. Examples are the Australis software (Photometrics, Australia), Photomodeler (Eos Systems, Inc., Canada), Leica Photogrammetry Suite (Leica Geosystems GIS & Mapping), and Menci (Menci Software srl, Italy). Both stereo vision and photogrammetry share the objective of obtaining 3D models from images and are currently used in computer vision applications. The methods rely on the use of camera calibration techniques. The elaboration chain is basically the same, even if linear models for camera calibration and simplified point measurement procedures are used in computer vision applications; in fact, in computer vision the automation level of the overall process is more important than the accuracy of the models [44]. In recent times, the photogrammetric and computer vision communities have proposed combined procedures for increasing both the measurement performance and the automation level.
Examples are the combined use of laser scanning and photogrammetry [45-47] and the use of stereo vision and photogrammetry for facade reconstruction and object tracking [48, 50]. On the market, combinations of structured light projection with photogrammetry are available (Atos II scanner, GOM GmbH, Germany) for improved measurement performance.
4.5 Interferometry
Interferometric methods operate by projecting a spatially or temporally varying periodic pattern onto a surface, followed by mixing the reflected light with a reference pattern. The reference pattern demodulates the signal to reveal the variation in surface geometry. The measurement resolution is very high, since it is a fraction of the wavelength of the laser radiation. For this reason, surface quality control and micro-profilometry are the most explored applications. The use of multiple wavelengths and of double heterodyne detection lengthens the non-ambiguity range. Systems based on this principle are successfully used to calibrate CMMs. Holographic interferometry relies on mixing coherent illumination with different wave vectors. Holographic
methods typically yield range accuracy of a fraction of the light wavelength over microscopic fields of view [5, 53].
impossible before, at low cost. For this reason, techniques that are computationally demanding (e.g., passive stereo vision) are now more reliable and efficient. The selection of the sensor type to solve a given depth measurement problem is a very complex task that must consider (i) the measurement time, (ii) the budget, and (iii) the quality expected from the measurement. In this respect, 3D imaging sensors may be affected by missing or bad-quality data. The reasons are related to the optical geometry of the system, the type of projector and/or acquisition optics, the measurement technique and the characteristics of the target objects. The sensor performance may depend on the dimension, shape, texture, temperature and accessibility of the object. Relevant factors that influence the choice are also the ruggedness, portability and adaptiveness of the sensor to the measurement problem, the ease of data handling, and the simplicity of use of the sensor.
Experience in the application of 3D imaging sensors

A wide range of applications is possible nowadays, thanks to the availability of engineered sensors, of laboratory prototypes, and of suitable software environments to elaborate the measurements. This section presents a brief insight into the most important fields where 3D imaging sensors can be fruitfully used. For each one, an application example that has been carried out in our Laboratory is presented.

13.1 Industrial applications

Typical measurement problems in the industrial field deal with (i) the control of machined surfaces, for the quantitative measurement of roughness, waviness and form factor, (ii) the dimensional measurement and quality control of products, and (iii) the reverse engineering of complex shapes.

13.2 Surface quality control

Microprofilometry techniques and sensors are the best suited to carry out the measurement processes in the surface control field. In this application, the objective is to gauge the 3D quantitative
FUTURE SCOPE
Anti-reflective coating film deposition and RGB filter deposition can be used to enhance sensitivity and to enable colour sensing.