Mathematics of Photogrammetry


Photogrammetry
photo = "picture", grammetry = "measurement"; therefore photogrammetry = "photo-measurement"
 Photogrammetry is the science or art of obtaining reliable
measurements by means of photographs.
Formal Definition:
Photogrammetry is the art, science and technology of obtaining reliable
information about physical objects and the environment, through processes
of recording, measuring, and interpreting photographic images and patterns
of recorded radiant electromagnetic energy and other phenomena.
- As given by the American Society for Photogrammetry and Remote
Sensing (ASPRS)

Uses of Photogrammetry
Products of photogrammetry:
1. Topographic maps: detailed and accurate graphic representation of cultural and natural
features on the ground.
2. Orthophotos: Aerial photograph modified so that its scale is uniform throughout.
3. Digital Elevation Models (DEMs): an array of points in an area that have X, Y and Z coordinates determined.

Current Applications:
1. Land surveying
2. Highway engineering
3. Preparation of tax maps, soil maps, forest maps, geologic maps, maps for city and
regional planning and zoning
4. Traffic management and traffic accident investigations
5. Military – digital mosaic, mission planning, rehearsal, targeting, etc.

Photogrammetry

• Metric Photogrammetry: making precise measurements from photos to determine the relative locations of points.
• Interpretative Photogrammetry: deals with recognizing and identifying objects and judging their significance through careful and systematic analysis. Its two branches are photographic interpretation and remote sensing (which includes the use of multispectral cameras, infrared cameras, thermal scanners, etc.).

Types of photographs

• Terrestrial
• Aerial
    - Vertical
        - Truly vertical
        - Tilted (1° < θ < 3°)
    - Oblique
        - Low oblique (does not include horizon)
        - High oblique (includes horizon)

Aerial Photography
• Vertical aerial photographs are taken along parallel passes called flight strips.
• Lateral overlapping of adjacent flight strips is called a side lap (usually 30%).

Aerial Photography
• The position of the camera at each exposure is called the exposure station.
• The altitude of the camera at exposure time is called the flying height.
• Photographs of 2 or more sidelapping strips used to cover an area are referred to as a block of photos.
• Overlapping photos are called a stereopair.
• The overlap of successive photographs along a flight strip is called end lap (usually 60%).
• The area of common coverage is called the stereoscopic overlap area.

Aerial mapping cameras are the traditional imaging devices of photogrammetry.

Focal plane
 The focal plane of an aerial camera is the plane in which all
incident light rays are brought to focus.
 Focal plane is set as exactly as possible at a distance equal to
the focal length behind the rear nodal point of the camera lens.
 Film emulsion rests on the focal plane.

Fiducial marks (Fiducials)
Fiducials are 2D control points whose xy coordinates are precisely and accurately determined as part of camera calibration.
• Fiducial marks are situated in the middle of the sides of the focal plane opening, in its corners, or in both locations (usually four or eight in number).
• They serve to establish a reference xy coordinate system for image locations on the photograph.

Lines joining opposite fiducials intersect at a point called the indicated principal point.
Aerial cameras are carefully manufactured so that this point lies very close to the true principal point.
True principal point: the point in the focal plane where a line from the rear nodal point of the camera lens, perpendicular to the focal plane, intersects the focal plane.

Scale
 Map scale is defined as the ratio of map distance to the corresponding
distance on the ground.
 Scale of the photograph is the ratio of the distance on the photo to the
corresponding distance on the ground.
Scale is represented as:
1. Unit equivalents: 1 in = 1000 ft
2. Unit fractions: 1 in/1000 ft
3. Dimensionless representative fraction: 1/12,000
4. Dimensionless ratio: 1:12,000

A large number in a scale expression denotes a small scale, and vice versa.
Example: 1:1000 is a larger scale than 1:5000.
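To see the connection between forms 2–4: 1000 ft = 12,000 in, so 1 in/1000 ft is the representative fraction 1/12,000, i.e. the ratio 1:12,000.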

Scale of Vertical Photograph over Flat Terrain
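In the usual notation (f = camera focal length, H = flying height above datum, h = elevation of the flat terrain above datum), the standard relation is

    S = ab/AB = f/(H - h)

and for terrain lying at the datum this reduces to S = f/H.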

Scale of Vertical Photograph over Variable Terrain
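Over variable terrain the scale changes from point to point with elevation: at a point of elevation h the scale is S_h = f/(H - h), and an average scale for the photograph is often quoted as S_avg = f/(H - h_avg), where h_avg is the average terrain elevation.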

Ground Coordinates from a Vertical Photograph

H - flying height above datum
A and B - ground points
a and b - images of points A and B on the photograph
xa, ya, xb and yb - measured photographic coordinates
XA, YA, XB and YB - arbitrary ground coordinates of points A and B

From similar triangles La’o and LA’A0

From similar triangles La’’o and LA’’A0 ,

The ground coordinates of point B are
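In the standard notation, these similar-triangle relations reduce to

    XA = xa (H - hA)/f      YA = ya (H - hA)/f
    XB = xb (H - hB)/f      YB = yb (H - hB)/f

where hA and hB are the elevations of points A and B above the datum.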

Relief Displacement on a Vertical Photograph

• The shift or displacement in the photographic position of an image caused by the relief of the object, i.e., its elevation above or below a selected datum.

Consider planes Lao and LAAo

Similar triangles La’o and LA’P

Equating expressions (d) and (e) yields

Substituting r-r’ as d

where,
d - relief displacement
h - height above datum of the object point whose image is displaced
r - radial distance on the photograph from the principal point to the displaced image (the units of d and r must be the same)
H - flying height above the same datum selected for measurement of h
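These quantities combine into the standard relief displacement relation

    d = r h / H

which can be rearranged as h = d H / r when object heights are to be determined from measured displacements. For illustration only (assumed numbers): with H = 3000 m, h = 150 m and r = 90 mm, d = 90 mm × 150/3000 = 4.5 mm.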

Refinement of Measured Image Coordinates
Photo coordinates contain systematic errors from various sources, such as:
1. Film distortions due to shrinkage, expansion and lack of flatness.
2. Failure of the fiducial axes to intersect at the principal point.
3. Lens distortions.
4. Atmospheric refraction distortions.
5. Earth curvature distortion.

Distortions of Photographic Films and Papers
1. Dimensional change during storage may be held to a minimum by maintaining constant temperature and humidity in the storage room.
2. The actual amount of distortion present in a film is a function of several variables, including the type of film and its thickness.
3. Typical values may vary from almost negligible amounts up to approximately 0.2 percent.

Shrinkage Correction
• Shrinkage or expansion present in a photograph can be determined by comparing measured photographic distances between opposite fiducial marks with their corresponding values determined in camera calibration.

x'a and y'a – corrected photo coordinates
xa and ya – measured photo coordinates
xc/xm and yc/ym – simply scale factors in the x and y directions
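In the usual form the correction simply applies these scale factors:

    x'a = (xc/xm) xa      y'a = (yc/ym) ya

where xc and yc are the calibrated fiducial distances and xm and ym are the corresponding measured distances.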
Correction for Lens Distortions
• Lens distortion causes imaged positions to be displaced from their ideal locations.
• Two components: symmetric radial distortion and decentering distortion.
• In modern precision aerial mapping cameras, lens distortions are typically less than 5 µm.
• Symmetric radial lens distortion is an unavoidable product of lens manufacture, although with careful design its effects can be reduced to a very small amount.
• Decentering distortion, on the other hand, is primarily a function of imperfect assembly of the lens elements, not the actual design.
The approach used for determining radial lens distortion values in older calibration reports was to fit a polynomial curve to a plot of displacements (on the ordinate) versus radial distances (on the abscissa).

Here,
∆r – amount of radial lens distortion
r – radial distance from the principal point
k1, k2, k3 and k4 – coefficients of the polynomial

• The coefficients of the polynomial are solved for by least squares using the distortion values from the calibration report.
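This polynomial is commonly written with odd powers of the radial distance,

    ∆r = k1 r + k2 r^3 + k3 r^5 + k4 r^7

although the number of terms retained varies from one calibration report to another.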

Coordinates xc and yc are then computed by
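A common form of this correction, with x and y measured from the principal point, is

    xc = x - x (∆r/r)      yc = y - y (∆r/r)

i.e., each image point is pulled back along its radial line by the amount of the distortion.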

Correction for Atmospheric Refraction

• The density of the atmosphere decreases with increasing altitude.
• Light rays therefore do not travel in straight lines through the atmosphere, but are bent according to Snell's law.

Here,
α – angle between the vertical and the ray of light
K – value which depends upon the flying height above mean sea level and the elevation of the object

H – flying height of the camera above mean sea level
K – expressed in degrees

• The radial distance r' from the principal point to the corrected image location can then be computed.
• The change in radial distance ∆r is then computed from r and r'.
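A common small-angle form of this correction, assuming the angular refraction is ∆α = K tan α and that r = f tan α, is

    ∆r = K (r + r^3/f^2)

with the corrected radial distance then taken as r' = r - ∆r.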

STEREOSCOPIC VIEWING

• With binocular vision, when the eyes fixate on a certain point, the optical axes of the two eyes converge on that point, intersecting at an angle called the parallactic angle.
• Methods of judging depth may be classified as either stereoscopic or monoscopic.
• Monocular vision is viewing with only one eye; depth judgments made this way are monoscopic.
• Stereoscopic depth perception is of fundamental importance in photogrammetry, for it enables the formation of a three-dimensional stereomodel by viewing a pair of overlapping photographs.

Stereoscopic Parallax
 Parallax is the apparent
displacement in the
position of an object
with respect to a frame
of reference, caused by a
shift in the position of
observation.
 The change in position
of an image from one
photograph to the next
caused by the aircraft’s motion is termed stereoscopic parallax,
x parallax , or simply parallax.

• The parallax of any point is directly related to the elevation of the point.
• Parallax is greater for high points than for low points.

pa = xa – x'a
pa – stereoscopic parallax of object point A
xa – measured photo coordinate of image a on the left photograph of the stereopair
x'a – photo coordinate of image a' on the right photo

• X, Y and Z ground coordinates can be calculated for points based upon measurements of their parallaxes.
• Consider an overlapping pair of vertical photographs exposed at equal flying heights above datum.

Similar triangles L1oay and L1A0Ay

Similar triangles L1oax and L1A0Ax,

Similar triangles L2oax and L2A0Ax,

From the above equations, substituting pa for (xa – x'a), we obtain the parallax equations, in which:

hA - elevation of point A above datum
H - flying height above datum
B - air base
f - focal length of the camera
pa - parallax of point A
XA and YA - ground coordinates of point A in the previously defined unique arbitrary coordinate system
xa and ya - photo coordinates of point a measured with respect to the flight-line axes on the left photo
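Written out in their standard form, the parallax equations are

    hA = H - B f / pa
    XA = B xa / pa
    YA = B ya / pa

For illustration only (assumed numbers): with B = 600 m, H = 1800 m, f = 152.4 mm and pa = 90 mm, hA = 1800 - 600 × 152.4/90 ≈ 784 m.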

Elements of Interior Orientation
Elements of interior orientation which can be determined through camera calibration are as
follows:
1. Calibrated focal length (CFL), the focal length that produces an overall mean distribution of lens distortion. Better termed the calibrated principal distance, since it represents the distance from the rear nodal point of the lens to the principal point of the photograph, which is set as close to the optical focal length of the lens as possible.

2. Principal point location, specified by coordinates of a principal point given


wrt x and y coordinates of the fiducial marks.
3. Fiducial mark coordinates: x and y coordinates of the fiducial marks which
provide the 2D positional reference for the principal point as well as images
on the photograph.
4. Symmetric radial lens distortion, the symmetric component of distortion that occurs along radial lines from the principal point. Although usually negligible, it is theoretically always present.
5. Decentering lens distortion, distortion that remains after compensating for symmetric
radial lens distortion. Components: asymmetric radial and tangential lens distortion.

Camera Calibration:
General Approach
Step 1) Photograph an array of targets whose relative positions
are accurately known.
Step 2) Determine elements of interior orientation –
• make precise measurements of target images
• compare actual image locations to positions they should
have occupied had the camera produced a perfect perspective
view.
This is the approach followed in most methods.

Analytical Photogrammetry
Definition: Analytical photogrammetry is the term used to describe the
rigorous mathematical calculation of coordinates of points in object
space based upon camera parameters, measured photo coordinates and
ground control.

Features of Analytical photogrammetry:


→ rigorously accounts for any tilts
→ generally involves solution of large, complex systems of redundant
equations by the method of least squares
→ forms the basis of many modern hardware and software systems, including stereoplotters, digital terrain model generation, orthophoto production, digital photo rectification and aerotriangulation.

Image Measurement
Considerations
Before using the x and y photo coordinate pair, the following conditions
should be considered:
1. Coordinates (usually in mm) are relative to the principal point - the
origin.
2. Analytical photogrammetry is based on assumptions such as “light
rays travel in straight lines” and “the focal plane of a frame camera
is flat". Thus, coordinate refinements may be required to compensate for the error sources that violate these assumptions.
3. Measurements must be ensured to have high accuracy.
4. While making measurements of image coordinates of common
points that appear in more than one photograph, each object point
must be precisely identified between photos so that the
measurements are consistent.
5. Object space coordinates are based on a 3D Cartesian system.
Collinearity Condition
 The collinearity condition is illustrated in the figure below. The exposure
station of a photograph, an object point and its photo image all lie along a
straight line. Based on this condition we can develop complex mathematical
relationships.

Collinearity Condition
Equations
• Coordinates of exposure station be XL, YL, ZL
wrt object (ground) coordinate system XYZ
• Coordinates of object point A be XA, YA, ZA
wrt ground coordinate system XYZ
• Coordinates of image point a of object point A be xa, ya, za wrt the xy photo coordinate system (with the principal point o as the origin; compensation for this offset is applied later)
• Coordinates of image point a be xa’, ya’, za’
in a rotated image plane x’y’z’ which is
parallel to the object coordinate system
• Transformation of (xa’, ya’, za’) to (xa, ya, za)
is accomplished using rotation equations.

Rotation Equations
• Omega rotation about the x' axis:
  New coordinates (x1, y1, z1) of a point (x', y', z') after rotation of the original coordinate reference frame about the x axis by angle ω are given by:
    x1 = x'
    y1 = y' cos ω + z' sin ω
    z1 = -y' sin ω + z' cos ω
• Similarly, we obtain equations for the phi rotation about the y axis:
    x2 = -z1 sin Φ + x1 cos Φ
    y2 = y1
    z2 = z1 cos Φ + x1 sin Φ
• And equations for the kappa rotation about the z axis:
    x = x2 cos κ + y2 sin κ
    y = -x2 sin κ + y2 cos κ
    z = z2
Final Rotation Equations
We substitute the equations at each stage to get the following:
    x = m11 x' + m12 y' + m13 z'
    y = m21 x' + m22 y' + m23 z'
    z = m31 x' + m32 y' + m33 z'
where the m's are functions of the rotation angles ω, Φ and κ.

In matrix form: X = M X', where

    X  = [x, y, z]^T
    X' = [x', y', z']^T
    M  = | m11 m12 m13 |
         | m21 m22 m23 |
         | m31 m32 m33 |

Properties of the rotation matrix M:
1. The sum of squares of the 3 direction cosines (elements of M) in any row or column is unity.
2. M is orthogonal, i.e. M^-1 = M^T
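As a quick numerical check of these properties, the following sketch (Python/NumPy; the function name and test angles are illustrative assumptions, not from the slides) composes the three rotations exactly as defined above and verifies that M is orthogonal:

import numpy as np

def rotation_matrix(omega, phi, kappa):
    # Omega rotation about the x' axis (the x1, y1, z1 equations above)
    m_omega = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(omega), np.sin(omega)],
                        [0.0, -np.sin(omega), np.cos(omega)]])
    # Phi rotation about the y axis (the x2, y2, z2 equations)
    m_phi = np.array([[np.cos(phi), 0.0, -np.sin(phi)],
                      [0.0, 1.0, 0.0],
                      [np.sin(phi), 0.0, np.cos(phi)]])
    # Kappa rotation about the z axis (the final x, y, z equations)
    m_kappa = np.array([[np.cos(kappa), np.sin(kappa), 0.0],
                        [-np.sin(kappa), np.cos(kappa), 0.0],
                        [0.0, 0.0, 1.0]])
    # Rotations applied in sequence: omega first, then phi, then kappa
    return m_kappa @ m_phi @ m_omega

M = rotation_matrix(np.radians(2.0), np.radians(-1.5), np.radians(90.0))
assert np.allclose(M @ M.T, np.eye(3))        # property 2: M^-1 = M^T
assert np.allclose((M**2).sum(axis=0), 1.0)   # property 1: each column of direction cosines
assert np.allclose((M**2).sum(axis=1), 1.0)   # property 1: each row of direction cosines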

Collinearity Equations
Using the property of similar triangles:

    xa'/(XA - XL) = ya'/(YA - YL) = za'/(ZA - ZL)

so that

    xa' = [(XA - XL)/(ZA - ZL)] za'
    ya' = [(YA - YL)/(ZA - ZL)] za'
    za' = [(ZA - ZL)/(ZA - ZL)] za'

Substituting these into the rotation formulas gives

    xa = m11 [(XA - XL)/(ZA - ZL)] za' + m12 [(YA - YL)/(ZA - ZL)] za' + m13 [(ZA - ZL)/(ZA - ZL)] za'
    ya = m21 [(XA - XL)/(ZA - ZL)] za' + m22 [(YA - YL)/(ZA - ZL)] za' + m23 [(ZA - ZL)/(ZA - ZL)] za'
    za = m31 [(XA - XL)/(ZA - ZL)] za' + m32 [(YA - YL)/(ZA - ZL)] za' + m33 [(ZA - ZL)/(ZA - ZL)] za'

Factoring out za'/(ZA - ZL), dividing xa and ya by za, adding corrections for the offset of the principal point (xo, yo), and equating za = -f, we get:

    xa = xo - f [ m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]
    ya = yo - f [ m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]

Review of Collinearity Equations

Collinearity equations:

    xa = xo - f [ m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]
    ya = yo - f [ m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]

where,
xa, ya are the photo coordinates of image point a
XA, YA, ZA are the object space coordinates of object/ground point A
XL, YL, ZL are the object space coordinates of the exposure station location
f is the camera focal length
xo, yo are the offsets of the principal point coordinates
m's are functions of the rotation angles omega, phi, kappa (as derived earlier)

The collinearity equations:
A. are nonlinear, and
B. involve 9 unknowns:
   1. omega, phi, kappa (inherent in the m's)
   2. object coordinates (XA, YA, ZA)
   3. exposure station coordinates (XL, YL, ZL)
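As a minimal forward-use sketch of these equations (Python/NumPy; the function name, the identity rotation matrix and all numbers are assumptions chosen only for illustration):

import numpy as np

def collinearity_project(M, f, xo, yo, ground_point, exposure_station):
    # Project ground point A into photo coordinates via the collinearity equations
    dX, dY, dZ = np.asarray(ground_point, float) - np.asarray(exposure_station, float)
    num_x = M[0, 0] * dX + M[0, 1] * dY + M[0, 2] * dZ
    num_y = M[1, 0] * dX + M[1, 1] * dY + M[1, 2] * dZ
    den   = M[2, 0] * dX + M[2, 1] * dY + M[2, 2] * dZ
    xa = xo - f * num_x / den
    ya = yo - f * num_y / den
    return xa, ya

# Truly vertical photo (M = identity), f = 152.4 mm, zero principal-point offset,
# camera 1500 m above a ground point; the result matches the scale relation f(XA - XL)/(H - h).
xa, ya = collinearity_project(np.eye(3), 152.4, 0.0, 0.0,
                              (500.0, 800.0, 100.0), (400.0, 750.0, 1600.0))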

Elements of Exterior Orientation
As already mentioned, the collinearity conditions involve 9 unknowns:
1) Exposure station attitude (omega, phi, kappa),
2) Exposure station coordinates (XL, YL, ZL ), and
3) Object point coordinates (XA, YA, ZA).

Of these, we first need to compute the position and attitude of the exposure
station, also known as the elements of exterior orientation.

Thus the 6 elements of exterior orientation are:


1) spatial position (XL, YL, ZL) of the camera and
2) angular orientation (omega, phi, kappa) of the camera

All methods to determine the elements of exterior orientation of a single tilted photograph require:
1) photographic images of at least three control points whose X, Y and Z
ground coordinates are known, and
2) calibrated focal length of the camera.

Space Resection By Collinearity
Space resection by collinearity involves formulating the “collinearity equations” for a
number of control points whose X, Y, and Z ground coordinates are known and whose
images appear in the vertical/tilted photo.
The equations are then solved for the six unknown elements of exterior orientation that
appear in them.
• 2 equations are formed for each control point
• 3 control points (min) give 6 equations: solution is unique, while 4 or more control
points (more than 6 equations) allows a least squares solution (residual terms will exist)
Initial approximations are required for the unknown orientation parameters, since the collinearity equations are nonlinear and must be linearized using Taylor's theorem.

No. of control points    No. of equations    No. of unknown exterior orientation parameters
        1                       2                                  6
        2                       4                                  6
        3                       6                                  6
        4                       8                                  6
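A sketch of the least-squares step (the matrix symbols here are assumptions for exposition, not from the slide): with n control points, linearizing the 2n collinearity equations about current estimates of the six unknowns gives a system A δ ≈ l, where A is the 2n × 6 matrix of partial derivatives, δ the vector of corrections to the exterior orientation parameters, and l the vector of misclosures; each iteration solves δ = (AᵀA)⁻¹ Aᵀ l and updates the parameters until the corrections become negligible.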

Coplanarity Condition
• A condition similar to collinearity is coplanarity: the two exposure stations of a stereopair, any object point, and its corresponding image points on the two photos all lie in a common plane.
• Like the collinearity equations, the coplanarity equation is nonlinear and must be linearized using Taylor's theorem. Linearization of the coplanarity equation is somewhat more difficult than that of the collinearity equations.
• Coplanarity is not used nearly as extensively as collinearity in analytical photogrammetry.
• Space resection by collinearity is the only method still commonly used to determine the elements of exterior orientation.

Space Intersection By Collinearity
Use: To determine object point coordinates for points that lie in the
stereo overlap area of two photographs that make up a stereopair.
Principle: Corresponding rays to the same object point from two photos
of a stereopair must intersect at the point.

• For a ground point A:


• Collinearity equations are written for
image point a1 of the left photo (of the
stereopair), and for image point a2 of
the right photo, giving 4 equations.
• The only unknowns are XA, YA and ZA.
• Since equations have been linearized
using Taylor’s theorem, initial
approximations are required for each
point whose object space coordinates
are to be computed.
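A simplified geometric stand-in for this idea (Python/NumPy; it intersects two rays by the midpoint of their closest approach rather than solving the linearized collinearity equations, and all names and numbers are illustrative assumptions):

import numpy as np

def intersect_rays(L1, d1, L2, d2):
    # Least-squares "intersection" of rays L1 + t1*d1 and L2 + t2*d2:
    # solve for t1, t2 minimizing the gap, then take the midpoint.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.column_stack((d1, -d2))              # 3 x 2 system
    b = L2 - L1
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = L1 + t[0] * d1
    p2 = L2 + t[1] * d2
    return (p1 + p2) / 2.0

# Two exposure stations 600 m apart at 1500 m; ray directions built from photo
# coordinates (mm) and f = 152.4 mm in a locally level camera frame.
f = 152.4
L1 = np.array([0.0, 0.0, 1500.0])
L2 = np.array([600.0, 0.0, 1500.0])
ray1 = np.array([40.0, 10.0, -f])               # through image point a1 on the left photo
ray2 = np.array([-21.0, 10.0, -f])              # through image point a2 on the right photo
A_ground = intersect_rays(L1, ray1, L2, ray2)   # approx. (393.4, 98.4, 1.0)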

Analytical Stereomodel
Aerial photographs for most applications are taken so that adjacent photos overlap
by more than 50%. Two adjacent photographs that overlap in this manner form
a stereopair.
Object points that appear in the overlap area of a stereopair constitute a
stereomodel.
The mathematical calculation of 3D ground coordinates of points in the
stereomodel by analytical photogrammetric techniques forms an analytical
stereomodel.

The process of forming an analytical stereomodel involves 3 primary steps:


1. Interior orientation (also called “photo coordinate refinement”):
Mathematically recreates the geometry that existed in the camera when a
particular photograph was exposed.

2. Relative (exterior) orientation: determines the relative angular attitude and positional displacement between the photographs that existed when the photos were taken.
3. Absolute (exterior) orientation: determines the absolute angular attitude and positions of both photographs.

After these three steps are accomplished, points in the analytical stereomodel will have object coordinates in the ground coordinate system.
