

STEREOSCOPY

Submitted To-

Submitted By-

Saumya Tripathi
2K7/ME/304
INTRODUCTION

Stereoscopy (also called stereoscopic or 3-D imaging) is any technique capable of recording three-dimensional visual information or creating the illusion of depth in an image.
Human vision uses several cues to determine relative depths in a perceived scene.
Some of these cues are:
- Stereopsis
- Accommodation of the eyeball (eyeball focus)
- Occlusion of one object by another
- Subtended visual angle of an object of known size
- Linear perspective (convergence of parallel edges)
- Vertical position (objects higher in the scene generally tend to be perceived as further away)
- Haze, desaturation, and a shift to bluishness
- Change in size of textured pattern detail
All the above cues, with the exception of the first two, are present in traditional two-
dimensional images such as paintings, photographs, and television. Stereoscopy is the
enhancement of the illusion of depth in a photograph, movie, or other two-dimensional
image by presenting a slightly different image to each eye, and thereby adding the first
of these cues (stereopsis) as well. It is important to note that the second cue is still not
satisfied and therefore the illusion of depth is incomplete.
Many 3D displays use this method to convey images. It was first invented by Sir Charles
Wheatstone in 1838. Stereoscopy is used in photogrammetry and also for entertainment
through the production of stereograms. Stereoscopy is useful for viewing images rendered from large multi-dimensional data sets, such as those produced by experimental data. Modern industrial three-dimensional photography may use 3D scanners to detect and record three-dimensional information. The three-dimensional depth information can be reconstructed from two images by a computer that matches corresponding pixels in the left and right images. Solving this correspondence problem is a central task in the field of computer vision, aiming to create meaningful depth information from two images.
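The correspondence problem mentioned above can be illustrated with a brute-force block-matching search. The sketch below is a minimal, illustrative implementation (not a production stereo matcher): for each pixel it tries every horizontal shift up to a maximum and keeps the one that best matches a small window, here tested on a synthetic random-texture pair with a known 4-pixel disparity.

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=5):
    """Brute-force block matching: for each pixel in the left image, find
    the horizontal shift into the right image that minimises the sum of
    absolute differences (SAD) over a small window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(int) - cand.astype(int)).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic example: a random texture shifted 4 pixels between the views.
rng = np.random.default_rng(0)
right = rng.integers(0, 256, (40, 40), dtype=np.uint8)
left = np.roll(right, 4, axis=1)   # left view = right view shifted by 4 px
d = disparity_map(left, right)
print(int(np.median(d[2:-2, 10:-2])))  # → 4 (the known disparity)
```

Real matchers add regularization, sub-pixel refinement, and occlusion handling, but the core search is the same.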
Traditional stereoscopic photography consists of creating a 3-D illusion starting from a
pair of 2-D images. The easiest way to enhance depth perception in the brain is to
provide the eyes of the viewer with two different images, representing
two perspectives of the same object, with a minor deviation exactly equal to the
perspectives that both eyes naturally receive in binocular vision. If eyestrain and
distortion are to be avoided, each of the two 2-D images preferably should be presented
to each eye of the viewer so that any object at infinite distance seen by the viewer
should be perceived by that eye while it is oriented straight ahead, the viewer's eyes
being neither crossed nor diverging. When the picture contains no object at infinite
distance, such as a horizon or a cloud, the pictures should be spaced correspondingly
closer together.
Stereopsis

Stereopsis (from stereo- meaning "solid", and opsis meaning view or sight) is the


process in visual perception leading to the sensation of depth from the two slightly
different projections of the world onto the retinas of the two eyes. The differences in the
two retinal images are called horizontal disparity, retinal disparity, or binocular
disparity. The differences arise from the eyes' different positions in the head. Stereopsis
is commonly referred to as depth perception. This is inaccurate: depth perception relies on many monocular cues in addition to stereoptical ones, and individuals with only one functional eye still have full depth perception except in artificial cases (such as stereoscopic images) where only binocular cues are present.

Geometrical basis for stereopsis


Stereopsis appears to be processed in the visual cortex in binocular
cells having receptive fields in different horizontal positions in the two eyes. Such a cell
is active only when its preferred stimulus is in the correct position in the left eye and in
the correct position in the right eye, making it a disparity detector.
When a person stares at an object, the two eyes converge so that the object appears at
the center of the retina in both eyes. Other objects around the main object appear
shifted in relation to the main object. In the following example, whereas the main object (dolphin) remains in the center of the two images in the two eyes, the cube is shifted to the right in the left eye's image and to the left in the right eye's image.

(Figure captions: The two eyes converge on the object of attention. The cube is shifted to the right in the left eye's image and to the left in the right eye's image. We see a single, Cyclopean, image from the two eyes' images. The brain gives each point in the Cyclopean image a depth value, represented here by a grayscale depth map.)
Because each eye is in a different horizontal position, each has a slightly different
perspective on a scene, yielding different retinal images. Normally two images are not observed, but rather a single view of the scene, a phenomenon known as singleness of vision. Nevertheless, stereopsis is possible with double vision. This form of stereopsis was called qualitative stereopsis by Kenneth Ogle.
If the images are very different (such as by going cross-eyed, or by presenting different
images in a stereoscope) then one image at a time may be seen, a phenomenon known
as binocular rivalry.

Visual requirements
Anatomically, there are three levels of binocular vision required to view stereo images:
Simultaneous perception
Fusion (binocular 'single' vision)
Stereopsis
These functions develop in early childhood. In some people, strabismus disrupts the development of stereopsis; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines the minimum image disparity they can perceive as depth.

Side-by-side (non-shared viewing scenarios)

Characteristics
Little or no additional image processing is required. Under some circumstances, such as
when a pair of images is presented for crossed or diverged eye viewing, no device or
additional optical equipment is needed.
The principal advantages of side-by-side viewers are that there is no diminution of brightness, so images may be presented at very high resolution and in full-spectrum color. The ghosting associated with polarized projection or color filtering is totally eliminated. The images are discretely presented to the eyes and the visual center of the brain, with no commingling of the views. The recent advent of flat screens and "software stereoscopes" has made larger 3D digital images practical in this side-by-side mode, which hitherto had been used mainly with paired photos in print form.

Freeviewing
Freeviewing is viewing a side-by-side image without using a viewer.
- The parallel view method uses two images with no more than 65 mm between corresponding image points; this is the average distance between the two eyes. The viewer looks through the image while keeping the lines of sight parallel; this can be difficult with normal vision, since eye focus and binocular convergence normally work together.
- The cross-eyed view method exchanges the right and left images and views them cross-eyed, with the right eye viewing the left image and vice versa.

Stereographic cards and the stereoscope


Two separate images are printed side by side. When viewed without a stereoscopic viewer, the user is required to force the eyes either to cross or to diverge, so that the two images appear to be three. Then, as each eye sees a different image, the effect of depth is achieved in the central image of the three.
The stereoscope offers several advantages:
Using positive curvature (magnifying) lenses, the focus point of the image is changed
from its short distance (about 30 to 40 cm) to a virtual distance at infinity. This allows
the focus of the eyes to be consistent with the parallel lines of sight, greatly reducing
eye strain.
The card image is magnified, offering a wider field of view and the ability to examine the
detail of the photograph.
The viewer provides a partition between the images, avoiding a potential distraction to
the user.
Stereogram cards are frequently used by orthoptists and vision therapists in the treatment of many binocular vision and accommodative disorders.
Transparency viewers
The practice of viewing transparencies in stereo via a viewer dates to at least as early
as 1931, when Tru-Vue began to market filmstrips that were fed through a handheld
device made from Bakelite. In the 1940s, a modified and miniaturized variation of this
technology was introduced as the View-Master. Pairs of stereo views are printed on
translucent film which is then mounted around the edge of a cardboard disk, images of
each pair being diametrically opposite. A lever is used to advance the disk to the next image pair. A series of seven views can thus be seen on each disk when it is inserted into the View-Master viewer. These viewers were available in many forms, both
non-lighted and self-lighted and may still be found today. One type of material
presented is children's fairy tale story scenes or brief stories using
popular cartoon characters. These use photographs of three dimensional model sets
and characters. Another type of material is a series of scenic views associated with
some tourist destination, typically sold at gift shops located at the attraction.
Another important development in the late 1940s was the introduction of the Stereo
Realist camera and viewer system. Using color slide film, this equipment made stereo
photography available to the masses and caused a surge in its popularity. The Stereo
Realist and competing products can still be found (in estate sales and elsewhere) and
utilized today.
Low-cost folding cardboard viewers with plastic lenses have been used to view images
from a sliding card and have been used by computer technical groups as part of annual
convention proceedings. These have largely been supplanted by DVD recordings displayed on a television set. By exhibiting moving images of rotating objects, a three-dimensional effect is obtained through other than stereoscopic means.
An advantage offered by transparency viewing is that a wider field of view may be
presented since images, being illuminated from the rear, may be placed much closer to
the lenses. Note that with simple viewers the images are limited in size as they must be
adjacent and so the field of view is determined by the distance between each lens and
its corresponding image.
Good quality wide angle lenses are quite expensive and they are not found in most
stereo viewers.
Head-mounted displays

The user typically wears a helmet or glasses with two small LCD or OLED displays with
magnifying lenses, one for each eye. The technology can be used to show stereo films,
images or games, but it can also be used to create a virtual display. Head-mounted
displays may also be coupled with head-tracking devices, allowing the user to "look
around" the virtual world by moving their head, eliminating the need for a separate
controller. Performing this update quickly enough to avoid inducing nausea in the user
requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the
equipment used. Owing to rapid advancements in computer graphics and the continuing
miniaturization of video and other equipment these devices are beginning to become
available at more reasonable cost.

Head-mounted or wearable glasses may be used to view a see-through image superimposed upon the real-world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors, while the real-world view is seen through the mirrors' reflective surface. Experimental systems have been used for
gaming, where virtual opponents may peek from real windows as a player moves about.
This type of system is expected to have wide application in the maintenance of complex
systems, as it can give a technician what is effectively "x-ray vision" by combining
computer graphics rendering of hidden elements with the technician's natural vision.
Additionally, technical data and schematic diagrams may be delivered to this same
equipment, eliminating the need to obtain and carry bulky paper documents.
Augmented stereoscopic vision is also expected to have applications in surgery, as it
allows the combination of radiographic data (CAT scans and MRI imaging) with the
surgeon's vision.

3D glasses

There are two categories of 3D glasses technology, active and passive. Active glasses
have electronics which interact with a display.

Active
Liquid crystal shutter glasses

Glasses containing liquid crystal that block or pass light in synchronization with the images on the computer display, using the concept of alternate-frame sequencing. See also time-division multiplexing.
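As a rough sketch of alternate-frame sequencing, the display refreshes can be modelled as an alternating left/right schedule; the function name and parameters below are illustrative assumptions, not part of any real shutter-glasses API. Each eye effectively sees half the display's refresh rate.

```python
def shutter_schedule(refresh_hz=120, seconds=0.05):
    """Alternate-frame sequencing: successive display refreshes carry the
    left and right views in turn, and the liquid-crystal shutters open in
    sync, so each eye sees refresh_hz / 2 frames per second."""
    n = int(refresh_hz * seconds)
    return ["L" if i % 2 == 0 else "R" for i in range(n)]

print(shutter_schedule())  # ['L', 'R', 'L', 'R', 'L', 'R']
```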

"Red eye" shutterglasses method

The Red Eye Method reduces the ghosting caused by the slow decay of the green and
blue P22-type phosphors typically used in conventional CRT monitors. This method
relies solely on the red component of the RGB image being displayed, with the green
and blue component of the image being suppressed.

Passive

Linearly polarized glasses

To present a stereoscopic motion picture, two images are projected superimposed onto
the same screen through orthogonal polarizing filters. It is best to use a silver screen so
that polarization is preserved. The projectors can receive their outputs from a computer
with a dual-head graphics card. The viewer wears low-cost eyeglasses which also
contain a pair of orthogonal polarizing filters. As each filter only passes light which is
similarly polarized and blocks the orthogonally polarized light, each eye only sees one
of the images, and the effect is achieved. Linearly polarized glasses require the viewer
to keep his head level, as tilting of the viewing filters will cause the images of the left
and right channels to bleed over to the opposite channel – therefore, viewers learn very
quickly not to tilt their heads. In addition, since no head tracking is involved, several
people can view the stereoscopic images at the same time.

Circularly polarized glasses

To present a stereoscopic motion picture, two images are projected superimposed onto
the same screen through circular polarizing filters of opposite handedness. The viewer
wears low-cost eyeglasses which contain a pair of analyzing filters (circular polarizers
mounted in reverse) of opposite handedness. Light that is left-circularly polarized is
extinguished by the right-handed analyzer, while right-circularly polarized light is
extinguished by the left-handed analyzer. The result is similar to that of stereoscopic viewing using linearly polarized glasses, except that the viewer can tilt his or her head and still maintain left/right separation.
The RealD Cinema system uses an electronically driven circular polarizer, mounted in
front of the projector and alternating between left- and right- handedness, in sync with
the left or right image being displayed by the (digital) movie projector. The audience
wears passive circularly polarized glasses.
Infitec glasses

Infitec stands for interference filter technology. Special interference filters in the glasses
and in the projector form the main item of technology and have given it this name. The
filters divide the visible color spectrum into six narrow bands - two in the red region, two
in the green region, and two in the blue region (called R1, R2, G1, G2, B1 and B2 for
the purposes of this description). The R1, G1 and B1 bands are used for one eye
image, and R2, G2, B2 for the other eye. The human eye is largely insensitive to such
fine spectral differences so this technique is able to generate full-color 3D images with
only slight colour differences between the two eyes.[7] Sometimes this technique is
described as a "super-anaglyph" because it is an advanced form of spectral-
multiplexing which is at the heart of the conventional anaglyph technique.
Dolby uses a form of this technology in its Dolby 3D theatres.

Complementary color anaglyphs

Complementary color anaglyphs employ one of a pair of complementary color filters for
each eye. The most common color filters used are red and cyan.
Employing tristimulus theory, the eye is sensitive to three primary colors, red, green,
and blue. The red filter admits only red, while the cyan filter blocks red, passing blue
and green (the combination of blue and green is perceived as cyan). If a paper viewer
containing red and cyan filters is folded so that light passes through both, the image will
appear black. Another recently introduced form employs blue and yellow filters. (Yellow
is the color perceived when both red and green light passes through the filter.)
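The red/cyan channel split described above can be sketched in a few lines. This is a minimal illustration assuming 8-bit RGB arrays, not a full anaglyph processor (which would also adjust parallax and color balance): the output's red channel is taken from the left view, and the green and blue (cyan) channels from the right view.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Simple colour anaglyph: the red channel comes from the left view,
    green and blue (together perceived as cyan) from the right view. A red
    filter over the left eye then passes only the left image; a cyan
    filter passes only the right image."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]  # replace red channel with left view's red
    return out

# Tiny example with flat-colour test images.
left = np.full((2, 2, 3), (200, 50, 50), dtype=np.uint8)
right = np.full((2, 2, 3), (10, 120, 180), dtype=np.uint8)
ana = red_cyan_anaglyph(left, right)
print(ana[0, 0])  # red from left (200), green/blue from right (120, 180)
```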
Anaglyph images have seen a recent resurgence because of the presentation of images on the Internet. Though traditionally a largely black-and-white format, recent digital camera and processing advances have brought very acceptable color images to the Internet and DVD field. With the online availability of low cost paper glasses with
improved red-cyan filters, and plastic framed glasses of increasing quality, the field of
3D imaging is growing quickly. Scientific images where depth perception is useful
include, for instance, the presentation of complex multi-dimensional data sets and
stereographic images of the surface of Mars. With the recent release of 3D DVDs, they
are more commonly being used for entertainment. Anaglyph images are much easier to view than either parallel-sighted or crossed-eye stereograms, although those types offer brighter and more accurate color rendering, most particularly in the red component, which is commonly muted or desaturated with even the best color anaglyphs. A
compensating technique, commonly known as Anachrome, uses a slightly more
transparent cyan filter in the patented glasses associated with the technique.
Processing reconfigures the typical anaglyph image to have less parallax to obtain a
more useful image when viewed without filters.
Compensating diopter glasses for red-green method
Simple sheet or uncorrected molded glasses do not compensate for the 250-nanometer difference in the wavelengths of the red and cyan filters. With simple glasses, the red-filtered image can appear blurry when viewing a close computer screen or printed image, since the retinal focus differs from that of the cyan-filtered image, which dominates the eyes' focusing.
Better quality molded plastic glasses employ a compensating differential diopter power
to equalize the red filter focus shift relative to the cyan. The direct view focus on
computer monitors has been recently improved by manufacturers providing secondary
paired lenses fitted and attached inside the red-cyan primary filters of some high end
anaglyph glasses. They are used where very high resolution is required, including
science, stereo macros, and animation studio applications. They use carefully balanced
cyan (blue-green) acrylic lenses, which pass a minute percentage of red to improve skin
tone perception. Simple red/blue glasses work well with black-and-white images, but the blue filter is unsuitable for rendering human skin in color.

ColorCode 3D
(Image caption: Michelle Obama, Barack Obama, and their party watch the commercials using ColorCode 3D during Super Bowl XLIII on February 1, 2009, in the White House theatre.)
ColorCode 3D is a newer, patented stereo viewing system deployed in the 2000s that
uses amber and blue filters. Notably, unlike other anaglyph systems, ColorCode 3D is
intended to provide perceived full colour viewing with existing television and paint
mediums. One eye (left, amber filter) receives the cross-spectrum colour information
and one eye (right, blue filter) sees a monochrome image designed to give the depth
effect. The human brain ties both images together.
Images viewed without filters will tend to exhibit light-blue and yellow horizontal fringing.
The backwards compatible 2D viewing experience for viewers not wearing glasses is
improved, generally being better than previous red and green anaglyph imaging
systems, and further improved by the use of digital post-processing to minimise fringing.
The displayed hues and intensity can be subtly adjusted to further improve the
perceived 2D image, with problems only generally found in the case of extreme blue.
The blue filter is centred around 450 nm and the amber filter lets in light at wavelengths
at above 500 nm. Wide spectrum colour is possible because the amber filter lets
through light across most wavelengths in spectrum. When presented via RGB color
model televisions, the original red and green channels from the left image are combined
with a monochrome blue channel formed by averaging the right image with the
weights {r:0.15,g:0.15,b:0.7}.
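That channel mixing can be sketched directly from the weights given above. The function name and array layout below are illustrative assumptions; the weights {0.15, 0.15, 0.7} are the ones stated in the text.

```python
import numpy as np

def colorcode_frame(left_rgb, right_rgb):
    """Sketch of the ColorCode 3D mixing described above: red and green
    come from the left image, while the blue channel is a weighted
    monochrome average of the right image (weights r:0.15, g:0.15, b:0.7)."""
    out = left_rgb.astype(np.float64)
    mono = (0.15 * right_rgb[..., 0] +
            0.15 * right_rgb[..., 1] +
            0.70 * right_rgb[..., 2])
    out[..., 2] = mono  # blue channel carries the right eye's depth image
    return out.astype(np.uint8)

# Tiny example with flat-colour test images.
left = np.full((2, 2, 3), (100, 150, 200), dtype=np.uint8)
right = np.full((2, 2, 3), (80, 60, 40), dtype=np.uint8)
cc = colorcode_frame(left, right)
print(cc[0, 0])  # red/green from left; blue = 0.15*80 + 0.15*60 + 0.7*40 = 49
```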

Chromadepth method and glasses


The ChromaDepth procedure of American Paper Optics is based on the fact that with a
prism colors are separated by varying degrees. The ChromaDepth eyeglasses contain
special view foils, which consist of microscopically small prisms. These prisms displace the image by an amount that depends on its color. If a prism foil is used over one eye but not the other, the two perceived pictures are, depending on color, more or less widely separated, and the brain produces the spatial impression from this difference. The chief advantage of this technology is that ChromaDepth pictures can also be viewed without eyeglasses (as ordinary two-dimensional images) without problems, unlike two-color anaglyphs. However, the colors are only selectable to a limited extent, since they carry the depth information of the picture: if the color of an object is changed, its observed distance also changes.

Anachrome "compatible" color anaglyph method

A recent variation on the anaglyph technique is called the "Anachrome method". This approach attempts to provide images that look fairly normal without glasses, as 2D images "compatible" for posting on conventional websites or in magazines. The 3D
effect is generally more subtle, as the images are shot with a narrower stereo base, (the
distance between the camera lenses). Pains are taken to adjust for a better overlay fit of
the two images, which are layered one on top of another. Only a few pixels of non-
registration give the depth cues. The range of color is perhaps three times wider in
Anachrome due to the deliberate passage of a small amount of the red information
through the cyan filter. Warmer tones can be boosted, and this provides warmer skin
tones and vividness.

Other display methods


More recently, random-dot autostereograms have been created using computers to hide
the different images in a field of apparently random noise, so that until viewed by
diverging or converging the eyes in a manner similar to naked eye viewing of stereo
pairs, the subject of the image remains a mystery. A popular example of this is
the Magic Eye series, a collection of stereograms based on distorted colorful and
interesting patterns instead of random noise.

Pulfrich effects
In the classic Pulfrich effect paradigm, a subject binocularly views a pendulum swinging perpendicular to the line of sight. When a neutral density filter (e.g., a darkened lens, like one from a pair of sunglasses) is placed in front of, say, the right eye, the pendulum appears to take on an elliptical orbit, being closer as it swings toward the right and farther as it swings toward the left.
The widely accepted explanation of the apparent motion with depth is that a reduction in
retinal illumination (relative to the fellow eye) yields a corresponding delay in signal
transmission, imparting instantaneous spatial disparity to moving objects. This occurs
because the eye, and hence the brain, respond more quickly to brighter objects than to
dimmer ones.

So if the brightness of the pendulum is greater in the left eye than in the right, the retinal
signals from the left eye will reach the brain slightly ahead of those from the right eye.
This makes it seem as if the pendulum seen by the right eye is lagging behind its
counterpart in the left eye. This difference in position over time is interpreted by the
brain as motion with depth: no motion, no depth.
The ultimate effect of this, with appropriate scene composition, is the illusion of motion
with depth. Object motion must be maintained for most conditions and is effective only
for very limited "real-world" scenes.
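The delay-to-disparity relationship described above amounts to a simple product: a target moving at velocity v appears displaced in the darkened eye by v times the interocular delay. The sketch below uses an illustrative delay of 15 ms; actual delays depend on filter density and luminance.

```python
def pulfrich_disparity(velocity_deg_per_s, delay_ms):
    """Apparent horizontal disparity (in degrees) induced by the Pulfrich
    effect: the darkened eye's signal lags by delay_ms, so a target moving
    at velocity_deg_per_s appears displaced by v * dt in that eye."""
    return velocity_deg_per_s * delay_ms / 1000.0

# A pendulum bob sweeping at 10 deg/s, viewed with a filter adding ~15 ms delay
# (illustrative numbers), yields 0.15 deg of apparent disparity.
print(pulfrich_disparity(10, 15))  # 0.15
```

A stationary pendulum (v = 0) yields zero disparity, which matches the text's summary: no motion, no depth.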
Prismatic & self-masking crossview glasses
"Naked-eye" cross viewing is a skill that must be learned to be used.
New prismatic glasses now make cross-viewing as well as over/under-viewing easier,
and also mask off the secondary non-3D images, that otherwise show up on either side
of the 3D image. The most recent low-cost glasses mask the images down to one per
eye using integrated baffles. Images or video frames can be displayed on a new
widescreen HD or computer monitor with all available area used for display.HDTV wide
format permits excellent color and sharpness. Cross viewing provides true "ghost-free
3D" with maximum clarity, brightness and color range, as does the stereopticon and
stereoscope viewer with the parallel approach and the KMQ viewer with the over/under
approach. The potential depth and brightness is maximized. A recent cross converged
development is a new variant wide format that uses a conjoining of visual information
outside of the regular binocular stereo window. This allows an efficient seamless visual
presentation in true wide-screen, more closely matching the focal range of the human
eyes.

Lenticular prints
Lenticular printing is a technique by which one places an array of lenses, with a texture
much like corduroy, over a specially made and carefully aligned print such that different
viewing angles will reveal different image slices to each eye, producing the illusion of
three dimensions, over a certain limited viewing angle. This can be done cheaply
enough that it is sometimes used on stickers, album covers, etc. It is the classic
technique for 3D postcards.

Displays with filter arrays


In one approach, used in some notebook and desktop computers, the LCD is covered with an array of prisms that divert the light from different pixel columns toward the left and right eyes. These displays usually cost upwards of 1000 dollars and are mainly targeted at science or medical professionals.
Another technique, for example used by the X3D company is simply to cover the LCD
with two layers, the first being closer to the LCD than the second, by some millimeters.
The two layers are transparent with black strips, each strip about one millimeter wide.
One layer has its strips about ten degrees to the left, the other to the right. This allows
seeing different pixels depending on the viewer's position.
Wiggle stereoscopy

This method, possibly the simplest stereogram viewing technique, simply alternates between the left and right images of a stereogram. In a web browser, this can easily be accomplished with an animated .gif image, a Flash applet, or a specialized Java applet.
Most people can get a crude sense of dimensionality from such images, due to parallax.
Closing one eye and moving the head from side-to-side when viewing a selection of
objects helps one understand how this works. Objects that are closer appear to move
more than those further away. This effect may also be observed by a passenger in a
vehicle or low-flying aircraft, where distant hills or tall buildings appear in three-
dimensional relief, a view not seen by a static observer as the distance is beyond the
range of effective binocular vision.
Advantages of the wiggle viewing method include:
- No glasses or special hardware required
- Most people can "get" the effect much more quickly than with cross-eyed and parallel viewing techniques
- It is the only method of stereoscopic visualization for people with limited or no vision in one eye
Disadvantages of the "wiggle" method:
- Does not provide true binocular stereoscopic depth perception
- Not suitable for print; limited to displays that can "wiggle" between the two images
- Difficult to appreciate details in images that are constantly "wiggling"
- Lack of 3D illusion for those who can detect the wiggling too easily
Most wiggle images use only two images, leading to an annoyingly jerky image. A
smoother image, more akin to a motion picture image where the camera is moved back
and forth, can be composed by using several intermediate images (perhaps with
synthetic motion blur) and longer image residency at the end images to allow inspection
of details. Another option is a shorter time between the frames of a wiggle image
through the use of an animated .png.
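The smoother multi-frame wiggle described above reduces to a frame ordering: sweep through the intermediate views and back, holding the end views longer so details can be inspected. A minimal sketch (function name and defaults are illustrative):

```python
def wiggle_sequence(n_views=5, end_hold=3):
    """Frame order for a smoother 'wiggle' animation: sweep through views
    0..n_views-1 and back, with longer residency (end_hold repeats) at the
    two end views, as suggested above. The list is meant to loop."""
    forward = ([0] * end_hold
               + list(range(1, n_views - 1))
               + [n_views - 1] * end_hold)
    backward = list(range(n_views - 2, 0, -1))  # return sweep, ends excluded
    return forward + backward

print(wiggle_sequence())  # [0, 0, 0, 1, 2, 3, 4, 4, 4, 3, 2, 1]
```

Feeding frames to an animated .gif or .png encoder in this order produces the back-and-forth motion with end-frame dwell described in the text.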
Although the "wiggle" method is an excellent way of previewing stereoscopic images, it
cannot actually be considered a true three-dimensional stereoscopic format. To
experience binocular depth perception as made possible with true stereoscopic formats,
each eyeball must be presented with a different image at the same time – this is not the
case with "wiggling" stereo. The apparent "stereo like effect" comes from syncing the
timing of the wiggle and the amount of parallax to the processing done by the visual
cortex. Three or five images with good parallax produce a much better effect than
simple left and right images.
Wiggling works for the same reason that a translational pan (or tracking shot) in a movie
provides good depth information: the visual cortex is able to infer distance information
from motion parallax, the relative speed of the perceived motion of different objects on
the screen. Many small animals bob their heads to create motion parallax (wiggling) so
they can better estimate distance prior to jumping. You can see this for yourself in a 3D
movie by removing the glasses during a scene where the camera is moving: the glasses
have very little additional effect at such a time.

Piku-Piku
A Piku-Piku is a new technique for viewing 3D photos on a computer screen pioneered
by 3D photo sharing site "Start 3D". Similar to a "wiggle", a Piku-Piku first converts a
stereo photo into a multiview 3D photo and then uses gentle animation to display the 3D
effect. Viewers can also stop the animation and interact with the Piku-Piku using a slider
giving the viewer control of the viewpoint. As with a "wiggle" the advantage of this
technique is that anyone can view a 3D photo on a normal screen without the need for
any special 3D display equipment.
Taking the pictures
It is necessary to take two photographs for a stereoscopic image. This can be done with
two cameras, with one camera moved quickly to two positions, or with a stereo
camera such as the Fujifilm FinePix Real 3D W1.
When using two cameras, there are two prime considerations to take into account when taking stereo pictures: how far the resulting image is to be viewed from, and how far the subject in the scene is from the two cameras.
The intended viewing distance determines the required separation between the cameras. This separation, called the stereo base or stereo baseline, results from the ratio of the distance to the image to the distance between your eyes.
The mean interpupillary distance (IPD) is 63 mm (about 2.5 inches), but varies with age,
race and gender. The vast majority of adults have IPDs in the range 50–75 mm. Almost
all adults are in the range 45–80 mm. The minimum IPD for children as young as five is
around 40 mm. In any case, the farther you are from the screen, the more the image will pop out; the closer you are to the screen, the flatter it will appear. Personal anatomical differences can be compensated for by moving closer to or farther from the screen.
For example, if you are going to view a stereo image on your computer monitor from a distance of 1000 mm, you will have an eye-to-view ratio of 1000/63, or about 16. To set your cameras the correct distance apart, take the distance to the subject (say, a person 3 metres from the cameras) and divide by 16, which gives a stereo base of about 188 mm between the cameras.
If you intend to view the stereo image from the same distance as it is captured (e.g. a
subject photographed three meters away, projected on a movie screen at a distance
from the viewer of three meters) then the stereo base separation will be the same as
the distance between the viewer's eyes (about 63 mm).
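The calculation above can be sketched as a small helper function, assuming the 63 mm mean IPD and distances measured in the same units (the function name and structure are illustrative, not a standard API):

```python
def stereo_base_mm(subject_distance_mm, viewing_distance_mm, ipd_mm=63.0):
    """Estimate camera separation (stereo base) for natural-looking viewing.

    The ratio of viewing distance to interpupillary distance gives the
    eye-to-view ratio; dividing the subject distance by that ratio
    yields the camera separation.
    """
    eye_to_view_ratio = viewing_distance_mm / ipd_mm
    return subject_distance_mm / eye_to_view_ratio

# Monitor viewed from 1 m, subject 3 m away: ratio ~16, base ~189 mm
# (the text rounds the ratio to 16, giving ~188 mm)
print(round(stereo_base_mm(3000, 1000)))  # -> 189

# Viewing distance equal to subject distance: base equals the IPD
print(round(stereo_base_mm(3000, 3000)))  # -> 63
```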
In the 1950s, stereoscopic photography regained popularity when a number of
manufacturers began introducing stereoscopic cameras to the public. The new cameras
were developed to use 135 film, which had gained popularity after the close of World
War II. Many of the conventional cameras used the film for 35 mm transparency slides,
and the new stereoscopic cameras utilized the film to make stereoscopic slides.
The Stereo Realist camera was the most popular, and the 35 mm picture format
became the standard by which other stereo cameras were designed. The stereoscopic
cameras were marketed with special viewers that allowed for the use of such slides,
which were similar to View-Master reels but offered a much larger image. With these
cameras the public could easily create their own stereoscopic memories. Although their
popularity has waned somewhat, these cameras are still in use today.
The 1980s saw a minor revival of stereoscopic photography when point-and-
shoot stereo cameras were introduced. These cameras suffered from poor optics and
plastic construction, so they never gained the popularity of the 1950s stereo cameras.
Over the last few years they have been improved upon and now produce good images.
The beginning of the 21st century marked the coming of the age of digital photography.
Stereo lenses were introduced which could turn an ordinary film camera into a stereo
camera by using a special double lens to take two images and direct them through a
single lens to capture them side-by-side on the film. Although current digital stereo
cameras cost thousands of dollars, cheaper models also exist, for example those
produced by the company Loreo. It is also possible to build a twin-camera rig by
mounting two cameras on a bracket, spaced a short distance apart, with a "shepherd"
device that synchronizes the shutter and flash so that both cameras fire at the same
moment. Newer cameras are even being used to shoot "step video": 3D slide shows of
many pictures that, viewed properly, approach a 3D motion picture. A modern camera
can take five pictures per second, with images that greatly exceed HDTV resolution.
The side-by-side method is extremely simple to create, but it can be difficult or
uncomfortable to view without optical aids. One such aid for non-crossed images is the
modern Pokescope. Traditional stereoscopes such as the Holmes can be used as well.
For the cross-view technique, simple Perfect-Chroma cross-viewing glasses are now
available to facilitate viewing.

Imaging methods
If anything is in motion within the field of view, it is necessary to take both images at
once, either through use of a specialized two-lens camera, or by using two identical
cameras, operated as close as possible to the same moment.
A single digital camera can also be used if the subject remains perfectly still (such as an
object in a museum display). Two exposures are required. The camera can be moved
on a sliding bar for offset, or with practice, the photographer can simply shift the camera
while holding it straight and level. In practice the hand-held method works very well.
This method of taking stereo photos is sometimes referred to as the "Cha-Cha" method.
A good rule of thumb is to shift sideways 1/30th of the distance to the closest subject for
'side by side' display, or just 1/60th if the image is also to be used for color anaglyph or
anachrome image display. For example, if you are taking a photo of a person in front of
a house, and the person is thirty feet away, then you should move the camera 1 foot
between shots.
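The 1/30th (or 1/60th) rule of thumb can be expressed as a tiny sketch; the divisor values come from the text, while the function itself is illustrative:

```python
def cha_cha_shift(nearest_subject_distance, display="side_by_side"):
    """Sideways camera shift for the two-exposure "Cha-Cha" method.

    Uses 1/30 of the distance to the nearest subject for side-by-side
    viewing, or 1/60 if the pair may also be shown as an anaglyph.
    Distance units are preserved (feet in, feet out).
    """
    divisor = 30 if display == "side_by_side" else 60
    return nearest_subject_distance / divisor

# Person 30 feet away, side-by-side display: move the camera 1 foot
print(cha_cha_shift(30))              # -> 1.0
print(cha_cha_shift(30, "anaglyph"))  # -> 0.5
```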
The stereo effect is not significantly diminished by slight pan or rotation between
images. In fact slight rotation inwards (also called 'toe in') can be beneficial. Bear in
mind that both images should show the same objects in the scene (just from different
angles) - if a tree is on the edge of one image but out of view in the other image, then it
will appear in a ghostly, semi-transparent way to the viewer, which is distracting and
uncomfortable. Therefore, you can either crop the images so they completely overlap,
or you can 'toe-in' the cameras so that the images completely overlap without having to
discard any of the image area. However, be a little cautious: too much 'toe-in' introduces
keystone distortion and vertical parallax, which can cause eye strain.

Longer base line


For making stereo images of a distant object (e.g., a mountain with foothills), one can
separate the camera positions by a larger distance (commonly called the "interocular")
than the adult human norm of 62–65 mm. This will effectively render the captured image
as though it were seen by a giant, which enhances the depth perception of these
distant objects and reduces the apparent scale of the scene proportionately. However, in
this case care must be taken not to bring objects in the close foreground too close to the
viewer, as they will require the viewer to become cross-eyed to resolve them.
In the red-cyan anaglyph example at right, a ten-meter baseline atop the roof ridge of
a house was used to image the mountain. The two foothill ridges are about four miles
(6.5 km) distant and are separated in depth from each other and the background. The
baseline is still too short to resolve the depth of the two more distant major peaks from
each other. Owing to various trees that appeared in only one of the images the final
image had to be severely cropped at each side and the bottom.
This technique can be applied to 3D imaging of the Moon: one picture is taken at
moonrise, the other at moonset. Because the face of the Moon stays oriented towards
the center of the Earth, the diurnal rotation carries the photographer around the
perimeter, giving a baseline of nearly the Earth's diameter.
In the wider image, taken from a different location, a single camera was walked about
one hundred feet (30 m) between pictures. The images were converted to monochrome
before combination (below).

Base line selection


There is a specific optimal distance for viewing of natural scenes (not stereograms),
which has been estimated by some to have the closest object at a distance of about
thirty times the distance between the eyes (when the scene extends to infinity). An
object at this distance will appear on the picture plane, the apparent surface of the
image. Objects closer than this will appear in front of the picture plane, or popping out of
the image. All objects at greater distances appear behind the picture plane.
This interpupillary or interocular distance will vary between individuals. If one assumes
that it is 2.5 inches (about 6.5 cm), then the closest object in a natural scene by this
criterion would be 30 × 2.5 = 75 inches (about 2 m). It is this ratio (1:30) that determines
the inter-camera spacing appropriate to imaging scenes. Thus if the nearest object is
thirty feet away, this ratio suggests an inter-camera distance of one foot. It may be that
a more dramatic effect can be obtained with a lower ratio, say 1:20 (in other words, the
cameras will be spaced further apart), but with some risk of having the overall scene
appear less "natural". This unnaturalness can often be seen in old stereoscope cards,
where a landscape will have the appearance of a stack of cardboard cut-outs. Where
images may also be used for anaglyph display, a narrower base, say 1:50 or 1:60, will
allow for less ghosting in the display.
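The ratios discussed above can be collected into a short sketch; the labels and function names are illustrative, while the numeric ratios (1:30 natural, 1:20 dramatic, 1:50 for anaglyph) come from the text:

```python
def picture_plane_distance(ipd=2.5):
    """Closest-object distance (same units as ipd) that appears on the
    picture plane, using the ~1:30 criterion."""
    return 30 * ipd

def camera_spacing(nearest_object_distance, ratio=30):
    """Inter-camera spacing for a 1:ratio baseline.

    ratio=30 gives a natural look, ratio=20 a more dramatic (but
    possibly cardboard cut-out) effect, and ratio=50-60 a narrower
    base that reduces ghosting in anaglyph display.
    """
    return nearest_object_distance / ratio

print(picture_plane_distance())      # -> 75.0 (inches, about 2 m)
print(camera_spacing(30))            # -> 1.0 (object 30 ft away, 1 ft spacing)
print(camera_spacing(30, ratio=50))  # -> 0.6 (narrower base for anaglyph)
```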

Multi-rig stereoscopic cameras


The precise methods for camera control have also allowed the development of multi-rig
stereoscopic cameras, where different slices of scene depth are captured using different
inter-axial settings; the images of the slices are then composited together to form the
final stereoscopic image pair. This allows important regions of a scene to be given
better stereoscopic representation while less important regions are assigned less of the
depth budget. It provides stereographers with a way to manage composition within the
limited depth budget of each individual display technology.
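The idea of dividing a display's depth budget across scene slices can be sketched as a simple proportional allocation. This scheme, the weights, and the pixel budget are all illustrative assumptions, not a published multi-rig algorithm:

```python
def allocate_depth_budget(slice_weights, total_budget_px):
    """Split a display's parallax "depth budget" across scene depth slices.

    slice_weights: importance weight per depth slice; a higher weight
    means more stereoscopic relief (i.e. a wider inter-axial setting)
    for that slice. Returns the parallax (in pixels) assigned to each
    slice, summing to the total budget.
    """
    total = sum(slice_weights)
    return [total_budget_px * w / total for w in slice_weights]

# Three slices: a foreground subject gets most of a 30 px budget,
# while two background slices share the remainder
print(allocate_depth_budget([3, 1, 1], 30))  # -> [18.0, 6.0, 6.0]
```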
CONCLUSION

It is quite apparent that the future of displays and image capture is stereoscopic, but
the technology still has some hurdles to clear:

 Lack of 3D content – There is no doubt that Hollywood is stepping up
production of 3D features. But even if all new movies started being produced in
3D, they would represent only a small percentage of the overall catalog of
available films. Yes, studios are starting to "retrofit" older movies for 3D (such as
the recently 3D-ified versions of "Toy Story" and "Toy Story 2"), but it's an
expensive and time-consuming process. New 3D-only TV channels--such as
those rumored to be coming from DirecTV--may solve part of the problem, but
until broadcasters and sports leagues start investing in producing 3D TV shows
and covering major games in 3D, expect these channels to be looping "Avatar,"
"Up," and the opening ceremonies of the Beijing Olympics all day.
 Upgrade fatigue – People will be reluctant to trash their expensive LCD and LED
TVs, which have only recently gained traction in the market, and gamble on an all-new
technology.
 The glasses – Newer 3D processes may far exceed what was offered in the
1950s or 1980s, but they still require the viewer to wear a pair of glasses. That's
an acceptable trade-off for a two- or three-hour "event" movie like "Avatar." But
do you really want to do it every time you watch "The Big Bang Theory," "Lost,"
or "American Idol"? How about football or baseball?

As for 3D, I think the industry's best bet would be to focus on gamers. Avid gamers have
proven to be tech-savvy, deep-pocketed, and the most willing to accept the need for
step-up peripherals to enhance the gaming experience--everything from headsets to
motion controllers. Adding goggles to the mix wouldn't be too much of a stretch. Gamers
are probably the most receptive audience for 3D in the home--at least until the
electronics industry can figure out a way to deliver it without the glasses.
SOURCES

Wikipedia.
Other internet sources.
http://coolpics.911mb.com/
ABSTRACT

Stereoscopy is a technique for the imaging and display of three-dimensional objects.
Each of us uses this effect in everyday life to make out the relative positions of
objects. The fact that our eyes are set a distance apart is exploited to display images
with an illusion of depth. The process rests on the capability of our brain to fuse the
two different images that the eyes see into a single three-dimensional object. Every
stereoscopic process presents a different visual signal to each of our eyes to create
the effect in one way or another.

A similar procedure is followed in recording a stereoscopic image, by taking two
pictures from different positions.
