Lab 2 - Depth Sensor


Lab 2.

Depth Sensor

Floroiu Dragos, Nagy Nicoleta, Boca Ana

Activity 1: Make your own ‘surveys’ regarding these technologies and a short report, including new
developments (past 2 years).

3D scanning with lidar technology

With 3D scanning, one of the biggest challenges is edge fidelity, making sure that objects don't bleed into
the background or each other.

Lidar is a surveying method that measures distance to a target by illuminating the target with laser light
and measuring the reflected light with a sensor. Differences in laser return times and wavelengths can
then be used to make digital 3-D representations of the target. The name lidar, now used as an acronym
of light detection and ranging (sometimes, light imaging, detection, and ranging), was originally a
portmanteau of light and radar. Lidar is sometimes called 3D laser scanning, a special combination of 3D scanning and laser scanning. It has terrestrial, airborne, and mobile applications.
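
As an illustration of the return-time principle described above, here is a minimal Python sketch of pulsed-lidar ranging. The function name and the example value are ours, assuming an idealized sensor that reports the round-trip time of each pulse in nanoseconds:

    SPEED_OF_LIGHT = 299_792_458.0  # m/s

    def range_from_return_time(return_time_ns: float) -> float:
        """Convert a measured round-trip time (nanoseconds) into range in metres."""
        return SPEED_OF_LIGHT * (return_time_ns * 1e-9) / 2.0

    # Example: a pulse returning after ~66.7 ns corresponds to a target ~10 m away.
    print(round(range_from_return_time(66.7), 2))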

A lidar depth camera offers edge fidelity in a class of its own, combined with a quality FHD RGB camera and an IMU for a more robust handheld scanning solution.

Time-of-flight cameras (present development): ToF cameras have gained attention as the depth-sensing method of choice for their smaller form factor, wide dynamic range of sensing, and their ability to operate in a variety of environments.

Though ToF has been used for years in scientific and military fields, it has become more widely used since the early 2000s.

A ToF camera measures distance by actively illuminating an object with a modulated light source, while a sensor that is sensitive to the laser's wavelength captures the reflected light.
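
As a sketch of how a continuous-wave (phase-based) ToF sensor turns that reflected light into depth, assuming the sensor reports the measured phase shift and that the modulation frequency is known (the 20 MHz value below is illustrative, not tied to any particular device):

    import math

    SPEED_OF_LIGHT = 299_792_458.0  # m/s

    def cw_tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
        """Depth from the phase shift of the modulated illumination:
        d = c * phi / (4 * pi * f_mod), unambiguous only up to c / (2 * f_mod)."""
        return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

    # Example: a pi/2 phase shift at 20 MHz modulation is roughly 1.87 m of depth.
    print(round(cw_tof_depth(math.pi / 2, 20e6), 2))
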
Activity 2: Give some examples of typical applications related to depth sensors and provide also the
technical specifications, in order to see the differences between implementations (ex. distance, range,
resolution, source of errors). Discuss the hardware associated with 3D-sensing computer vision products.

AR / VR: For sensing real 3D environments and reconstructing them in the virtual world

For example, Project Tango by Google uses depth sensors to accurately measure the real
environment and inform its graphics algorithms to place virtual content at proper positions.
Contrast that with the AR mode of Pokemon Go, where users can often see Pokemon placed in
inaccurate positions since the algorithm has no depth information about the environment.
Depth information is also necessary for human-machine interaction with VR/AR devices. Devices must respond accurately to the 3D movement of users, which requires high-performance depth sensors.
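
A minimal sketch of the step that depth enables here: back-projecting a depth pixel into a 3D camera-space point at which virtual content can be anchored. It assumes a simple pinhole camera model, and the intrinsics used in the example are made up:

    def depth_pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
        """Back-project pixel (u, v) with sensed depth depth_m (metres, along the
        optical axis) into a 3D point, using pinhole intrinsics fx, fy, cx, cy."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return (x, y, depth_m)

    # Example with made-up intrinsics for a 640x480 depth camera.
    anchor = depth_pixel_to_3d(u=400, v=300, depth_m=2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    print(anchor)  # a 3D position at which a virtual object could be rendered
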
Robotics: For navigation, localization, mapping, and collision avoidance

Many warehouses are already taking advantage of fully autonomous vehicles that transport
objects from one place to another. The ability for the vehicle to move on its own requires depth
sensing so that it can know where it is in the environment, where other important things are, and
most importantly how it can safely get from A to B. Similarly, any robot used for picking
purposes relies on depth sensing to know where the target object is and how to get it.
These same applications are essential for any autonomous vehicle to succeed. In fact, one of the
most significant challenges for autonomous vehicles at the moment is equipping the car with an
accurate depth sensor without increasing the cost too dramatically.
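
A toy sketch of how depth readings could feed a collision-avoidance decision, assuming the depth map is available as rows of distances in metres; the thresholds are arbitrary illustrative values, not taken from any real vehicle:

    def nearest_obstacle(depth_row_m, max_valid_m=10.0):
        """Closest valid reading (metres) in one scan line; zeros and readings
        beyond max_valid_m are treated as invalid / no return."""
        valid = [d for d in depth_row_m if 0.0 < d <= max_valid_m]
        return min(valid) if valid else None

    def safe_to_advance(depth_row_m, stop_distance_m=0.5):
        """The vehicle may keep moving if nothing in its path is within stop_distance_m."""
        nearest = nearest_obstacle(depth_row_m)
        return nearest is None or nearest > stop_distance_m

    print(safe_to_advance([3.2, 2.8, 0.0, 1.9, 2.5]))  # True
    print(safe_to_advance([3.2, 0.4, 0.0, 1.9, 2.5]))  # False, obstacle at 0.4 m
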
Facial Recognition: For improving convenience while preventing fraud

Most facial recognition systems use a 2D camera to capture a photo and send it to an algorithm to
determine the person's identity. However, this has a significant loophole: a bad actor can fool such systems because they cannot tell whether they are seeing a real 3D face or a 2D photo. To make facial recognition safe, 3D cameras with depth sensing are necessary.
Besides blocking that loophole, 3D face modeling also captures more features of the face, enabling more accurate recognition.
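
A toy illustration of the anti-spoofing idea, assuming depth samples are already available over a detected face region; real liveness checks are far more sophisticated, and the 2 cm relief threshold is only an assumption for the sketch:

    def looks_three_dimensional(face_depths_m, min_relief_m=0.02):
        """Toy liveness check: a real face shows a few centimetres of depth relief
        (nose to cheeks), while a flat printed photo shows almost none."""
        valid = [d for d in face_depths_m if d > 0.0]
        if not valid:
            return False
        return (max(valid) - min(valid)) >= min_relief_m

    print(looks_three_dimensional([0.412, 0.405, 0.438, 0.421]))  # True, ~3 cm of relief
    print(looks_three_dimensional([0.400, 0.401, 0.400, 0.401]))  # False, flat like a photo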

Gesture and proximity detection: For gaming, security, and more.

Time-of-Flight (ToF) depth sensors are already being used by many devices for these purposes. In
simple implementations, a depth sensor only needs to detect the depth information of one point,
such as a hand for gesture detection or a face for proximity detection. Thus a depth sensing system
with simple optics (and a narrow Field-of-View) is sufficient. With the evolution of gesture detection,
more sophisticated depth sensing systems are used, such as Microsoft’s Kinect.
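
A minimal sketch of single-point proximity detection with hysteresis, assuming a stream of distance readings from such a simple ToF sensor; the class name and thresholds are illustrative, not taken from any particular device:

    class ProximityDetector:
        """Single-point proximity detection with hysteresis, so the state does not
        flicker when the reading hovers around a single threshold."""

        def __init__(self, near_m=0.05, far_m=0.10):
            self.near_m = near_m   # switch to "near" below this distance
            self.far_m = far_m     # switch back to "far" above this distance
            self.is_near = False

        def update(self, distance_m: float) -> bool:
            if distance_m < self.near_m:
                self.is_near = True
            elif distance_m > self.far_m:
                self.is_near = False
            return self.is_near

    detector = ProximityDetector()
    for d in (0.30, 0.04, 0.07, 0.15):   # stream of single-point ToF readings
        print(detector.update(d))        # False, True, True, False
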
Activity 3: Give some examples where you think GPS-based technology applications are useful instead of ATAP-based technology.

-Traffic monitoring applications like Waze

-Exercise trackers like MapMyRun

-Ski tracking and location applications like KingOfTheSlope

-Marine: when high accuracy GPS is fitted to boats and ships, it allows captains to navigate through
unfamiliar harbours, shipping channels and waterways without running aground or hitting known
obstacles. GPS is also used to position and map dredging operations in rivers, wharfs and
sandbars, so other boats know precisely where it is deep enough for them to operate.

-Aviation: almost all modern aircraft are fitted with multiple GPS receivers. This provides pilots (and
sometimes passengers) with a real-time aircraft position and map of each flight’s progress. GPS
also allows airline operators to pre-select the safest, fastest and most fuel-efficient routes to each
destination, and ensure that each route is followed as closely as possible when the flight is
underway.

Activity 5: Check the Project Tango Tablet Listed on Play Store (applications related to the
Tango project)

Activity 6: How do you think RGB cameras, IR and depth sensors work together?

RGB+IR technology is similar to standard RGB color sensor technology in that a mask is applied to a monochrome sensor.

The difference is that one of the two green pixels is replaced with an Infrared (IR) pixel. This IR
pixel contains Red, Green and Blue filter materials, effectively absorbing all of the light in the visible
spectrum. IR light with longer wavelengths will pass through this filter with minimal loss.

The main benefits of an RGB+IR sensor are sensitivity and good color reproduction. This sensor
will provide a brighter image as the available IR light is not blocked and is collected in addition to
the visible light. With a standard Bayer mask, IR degrades the color separation, as its contribution can only be approximated rather than measured. This results in lost color resolution and color that does not look natural.
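
A minimal sketch of separating such an RGB+IR mosaic into its sub-channels, assuming a 2x2 cell in which the position of the second green pixel carries the IR filter; the exact cell layout varies between sensor vendors:

    import numpy as np

    def split_rgbir_mosaic(raw: np.ndarray):
        """raw: a single-channel mosaic image with even height and width.
        Assumed 2x2 cell layout: R, G on the top row and IR, B on the bottom row."""
        r  = raw[0::2, 0::2]   # red-filtered pixels
        g  = raw[0::2, 1::2]   # the single remaining green pixel per cell
        ir = raw[1::2, 0::2]   # the pixel where the second green used to be
        b  = raw[1::2, 1::2]   # blue-filtered pixels
        return r, g, ir, b

    raw = np.arange(16, dtype=np.uint16).reshape(4, 4)  # stand-in for raw sensor data
    r, g, ir, b = split_rgbir_mosaic(raw)
    print(ir)  # quarter-resolution IR plane; the RGB planes would be demosaiced as usual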

Security and traffic applications are particularly well suited to this sensor, as they require continuous imaging from day to night, encountering the full spectrum of lighting conditions.

Activity 7: See Google Project Tango in Action! Why do we need augmented reality?
Augmented reality is an interactive experience of a real-world environment where the objects that
reside in the real world are enhanced by computer-generated perceptual information, sometimes
across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory.

Architecture

AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there.

Commerce

AR is used to integrate print and video marketing. Printed marketing material can be designed
with certain "trigger" images that, when scanned by an AR-enabled device using image
recognition, activate a video version of the promotional material.

Social interaction
AR can be used to facilitate social interaction. An augmented reality social network framework
called Talk2Me enables people to disseminate information and view others' advertised
information in an augmented reality way. The timely and dynamic information sharing and viewing functionalities of Talk2Me help users initiate conversations and make friends with people in physical proximity.

Video games
The gaming industry has embraced AR technology. A number of games have been developed for prepared
indoor environments, such as AR air hockey, Titans of Space, collaborative combat against virtual
enemies, and AR-enhanced pool table games. Augmented reality allowed video game players to
experience digital game play in a real-world environment.
