Digital Photogrammetry & Image Processing
Lecture Notes
2014
Lecturer: Dr. Ghadi Zakarneh
Contents

Chapter 0: Review of Analytical Photogrammetry
Chapter 1: Introduction
Chapter 2: Digital Images
Chapter 3: Digital Image Acquisition
Chapter 4: Image Compression
Chapter 5: Digital Image Enhancement
Chapter 6: Digital Photogrammetric Workstation
Chapter 7: Photogrammetric DTM & DSM
Chapter 8: Image Resampling
Chapter 9: Orthophoto Production
Course Syllabus
Palestine Polytechnic University
Engineering
Civil and Architectural Engineering Department
Surveying And Geomatics
COURSE: Digital Photogrammetry and Image Processing, 3 credit hours, CE531, 2nd
semester 2013/2014
TEXT BOOK: Lecture Notes: Digital Photogrammetry and Image Processing (2014)
REFERENCES: Digital Photogrammetry, 2nd edition, Michel Kasser and Yves Egels;
Elements of Photogrammetry, 3rd edition.
Review
Introduction
Digital images
Digital image acquisition
Image Compression
Digital image enhancement
Digital photogrammetric workstation
Photogrammetric DTM & DSM
Image Resampling
Orthophoto production
COURSE POLICIES:
TEACHING METHODS:
Lectures
Assignments, problem-solving sessions
Discussions
Lab work
ASSESSMENT:
First exam: 15%
Second exam: 15%
Final exam: 30%
Assignments: 5%
Class attendance: 5%
Project: 30%
Digital Photogrammetry
Ch.0
Review of Analytical Photogrammetry
1- Introduction
2- Image Measurements
1. A fundamental type of measurement used in analytical photogrammetry is an x
and y photo coordinate pair.
2. Since mathematical relationships in analytical photogrammetry are based on
assumptions such as "light rays travel in straight lines" and "the focal plane of a
frame camera is flat," various coordinate refinements may be required to correct
measured photo coordinates for distortion effects that otherwise cause these
assumptions to be violated.
3. A number of instruments and techniques are available for making photo
coordinate measurements.
3- Control Points
Object space coordinates of ground control points, which may be image-identifiable
features, are generally determined via some type of field survey technique such as GPS.
It is important that the object space coordinates be based on a three-dimensional
Cartesian system which has straight, mutually perpendicular axes.
4- Collinearity Condition
Perhaps the most fundamental and useful relationship in analytical photogrammetry is
the collinearity condition. Collinearity is the condition that the exposure station, any
object point, and its photo image all lie along a straight line in three-dimensional space.
$$x_a = x_0 - f\,\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}$$

$$y_a = y_0 - f\,\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}$$

where x0, y0, and f are the interior orientation elements, the m_ij are elements of the
rotation matrix, (XA, YA, ZA) are the object point coordinates, and (XL, YL, ZL) are the
exposure station coordinates.
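As a numerical sketch of how the collinearity equations are evaluated (the function and variable names below are illustrative, not from the notes), assuming the standard sequential rotation matrix M(ω, φ, κ):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M = M_kappa @ M_phi @ M_omega (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    M_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    M_phi   = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    M_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return M_kappa @ M_phi @ M_omega

def project(point, exposure, angles, f, x0=0.0, y0=0.0):
    """Project a ground point (XA, YA, ZA) into photo coordinates
    via the collinearity equations."""
    m = rotation_matrix(*angles)
    d = np.asarray(point, float) - np.asarray(exposure, float)  # (XA-XL, YA-YL, ZA-ZL)
    r, s, q = m @ d
    return x0 - f * r / q, y0 - f * s / q

# Vertical photo (all angles zero): camera at (0, 0, 1000) m, f = 150 mm
x, y = project((100.0, 200.0, 0.0), (0.0, 0.0, 1000.0), (0.0, 0.0, 0.0), f=0.150)
```

For this vertical photo the result reduces to the familiar scale relation x = f·X/H, i.e. x = 15 mm and y = 30 mm.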
5- Coplanarity Condition
Coplanarity is the condition that the two exposure stations of a stereopair, any object
point, and its corresponding image points on the two photos all lie in a common plane. In
the figure below, points L1, L2, a1, a2 and A all lie in the same plane.
Epipolar plane: any plane containing the two exposure stations and an object point, in
this instance plane L1AL2
Epipolar line: the intersection of the epipolar plane with the left and right photo planes.
Given the left photo location of image a1, its corresponding point a2 on the right photo is
known to lie along the right epipolar line. The coplanarity condition equation is:

$$\begin{vmatrix} B_X & B_Y & B_Z \\ u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \end{vmatrix} = 0$$

where BX, BY, BZ are the components of the air base from L1 to L2, and (u_i, v_i, w_i) are
the rotated image-space vectors of the point on each photo.
For a single photograph we have 6 unknowns (the exterior orientation parameters), and
each control point provides 2 observations (x, y), so 3 control points give an exact
solution; with 4 control points or more we can apply a least squares solution.
8- Analytical Stereomodel
1. Aerial photographs for most applications are taken so that adjacent photos overlap
by more than 55 percent. Two adjacent photographs that overlap in this manner
form a stereopair, and object points that appear in the overlap area constitute a
stereomodel.
2. The mathematical calculation of three-dimensional ground coordinates of points
in the stereomodel by analytical photogrammetric techniques forms an analytical
stereomodel.
$$a x + b y + c = X + V_X$$
$$d x + e y + f = Y + V_Y$$

where,
x and y are the machine coordinates,
X and Y are the fiducial coordinates, and
V_X and V_Y are the residuals.
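A least-squares fit of this two-dimensional affine transformation can be sketched as follows (the machine and fiducial values below are hypothetical, chosen only to illustrate the fit):

```python
import numpy as np

def fit_affine(machine_xy, fiducial_XY):
    """Least-squares fit of X = a*x + b*y + c and Y = d*x + e*y + f
    from n >= 3 measured fiducial marks."""
    x, y = np.asarray(machine_xy, float).T
    XY = np.asarray(fiducial_XY, float)
    A = np.column_stack([x, y, np.ones_like(x)])    # design matrix rows: [x, y, 1]
    (a, b, c), *_ = np.linalg.lstsq(A, XY[:, 0], rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, XY[:, 1], rcond=None)
    return (a, b, c), (d, e, f)

# Hypothetical check: a pure translation of (10, -5) at unit scale
machine  = [(0, 0), (1, 0), (0, 1), (1, 1)]
fiducial = [(10, -5), (11, -5), (10, -4), (11, -4)]
coefX, coefY = fit_affine(machine, fiducial)
```

For this synthetic data the fit recovers a = 1, b = 0, c = 10 and d = 0, e = 1, f = -5.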
Using the collinearity equations, with the coordinates of each image point measured in
both photos as input, each point gives two equations in the left photo and two equations
in the right photo. Each point has three unknown model coordinates X, Y, and Z, in
addition to the five relative orientation unknown parameters (ω2, φ2, κ2, YL2, and ZL2).
To solve this system of equations, the least number of pass points needed is n, calculated
as follows:

4n = 3n + 5

Then

n = 5
Written with residuals, the collinearity observation equations are:

$$x_a + v_{x_a} = -f\,\frac{r}{q}, \qquad y_a + v_{y_a} = -f\,\frac{s}{q}$$

Where,

$$r = m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)$$
$$s = m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)$$
$$q = m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)$$

and the m_ij are elements of the rotation matrix formed from the rotation angles
omega (ω), phi (φ), and kappa (κ).
13- Aerotriangulation
Aerotriangulation is the term most frequently applied to the process of determining the
X, Y, and Z ground coordinates of individual points based on photo coordinate
measurements.
The photogrammetric procedures discussed so far were restricted to one stereomodel.
However, it is quite unlikely that a photogrammetric project is covered by only two
photographs. Most mapping projects require many models; large projects may involve as
many as one thousand photographs, and medium-sized projects hundreds of photographs.
Advantages of Aerotriangulation
1. Minimizing the field surveying by minimizing the number of required control
points.
2. Most of work is done in laboratory.
3. Access to the property of project area is not required.
4. Field survey in steep and high slope areas is minimized.
5. Accuracy of the field surveyed control points can easily be verified by
aerotriangulation.
Classifications of Aerotriangulation processes
1. Analog: involves manual interior, relative, and absolute orientation of the
successive models of long strips of photos using stereoscopic plotting instruments
having several projectors.
2. Semianalytical aerotriangulation: involves manual interior and relative orientation
of stereomodels within a stereoplotter, followed by measurement of model
coordinates. Absolute orientation is performed numerically hence the term
semianalytical aerotriangulation.
3. Analytical methods: consist of photo coordinate measurement followed by
numerical interior, relative, and absolute orientation from which ground
coordinates are computed.
The solution depends basically on the collinearity condition, using the collinearity
equations given earlier. The solution of these equations gives the exterior orientation
parameters of all images included in the adjustment (omega, phi, kappa, XL, YL, ZL).
For the adjustment we have:
- 2 observations (x, y) for any control or tie point in a photo.
- 6 unknowns for each photo (omega, phi, kappa, XL, YL, ZL).
- 3 unknowns for each tie point: ground coordinates (X, Y, and Z).
Example:
For the bundle adjustment of the following four images, what is the number of unknowns
and observations, and how will the design matrix A appear?
Number of observations:
4 x 6 x 2 = 48 observations (4 photos, 6 points per photo, two collinearity equations per
image point).
Number of unknowns:
4 x 6 + 3 x 4 = 36 unknowns (6 exterior orientation parameters for each of the 4 photos,
plus 3 ground coordinates for each of the 4 tie points).
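The counting rule used in the example can be captured in a short helper (the function name is illustrative):

```python
def bundle_counts(n_photos, n_points_per_photo, n_tie_points):
    """Observation and unknown counts for a bundle adjustment:
    2 observations (x, y) per imaged point, 6 exterior orientation
    unknowns per photo, 3 ground coordinate unknowns per tie point."""
    observations = 2 * n_photos * n_points_per_photo
    unknowns = 6 * n_photos + 3 * n_tie_points
    return observations, unknowns, observations - unknowns

# The four-image case: 6 points imaged per photo, 4 of them tie points
obs, unk, redundancy = bundle_counts(4, 6, 4)
```

This reproduces the 48 observations and 36 unknowns above, with a redundancy of 12.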
Example:
For the following model what is the number of unknowns, observations, and how will the
design matrix A appear?
Digital Photogrammetry
Ch.1
Introduction
Digital photogrammetry
1- Introduction
Photogrammetry has two basic types according to the distance between the object and
the camera:
- aerial photogrammetry, with images taken from aircraft;
- terrestrial photogrammetry, using ground-fixed cameras.
Photogrammetry can also be classified according to the processing method:
- First, only analog photogrammetry was available, using analog measurements
and analog workstations.
- With computer advances came analytical photogrammetry, using analytical
workstations, with measurements made on the printed images and numerical
calculation of point coordinates.
- Recently, with high-quality computers, the complete photogrammetric process is
carried out by computer, using digital images as input and producing the output
as computer files (DTM, DSM, orthophotos). See Fig. (1).
Fig (1)
Topics in photogrammetry
Analog Photogrammetry
Analog photogrammetric theory
Analog inner orientation
Analog relative orientation
Analog absolute orientation
Analog aerial triangulation
Analog photogrammetric instrument
Analog rectifier
Metric camera
Imaging system
Analytic Photogrammetry
Analytical photogrammetric theory
Analytical inner orientation
Analytical relative orientation
Analytical absolute orientation
Analytical aerial triangulation
Multi-sensor, multi-platform integration technology and theory
Analytical plotter
Analytical orthophoto generation
Metric camera
Imaging system
Nonmetric camera
Digital Photogrammetry
Digital image processing
Digital image interpretation
Image matching
Full-automatic inner orientation
Full-automatic relative orientation
Full (semi)-automatic absolute orientation
Full-automatic aerial triangulation
3D measurement and viewing system
Visualization of scene
Multi-sensor, multi-platform integration technology and theory
Geometric rectification with various sensor and imaging system
Digital Photogrammetry
Ch.2
Digital Images
Digital images
1- Introduction
An image is a two-dimensional function of light intensity, f(x,y), where x and y are
the spatial coordinates and the value of f at a point is proportional to the luminance or
grey level at that point.
A digital image is a representation of a two-dimensional image as a finite set of
digital values, called picture elements or pixels. Typically, the pixels are stored in
computer memory as a raster image or raster map, a two-dimensional array of small
integers. These values are often transmitted or stored in a compressed form (JPEG,
TIFF, ...).
Digital images can be created by a variety of input devices and techniques, such as digital
cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne
radar, and more.
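As a minimal illustration of the "two-dimensional array of small integers" idea, using NumPy as a stand-in for whatever raster library is at hand:

```python
import numpy as np

# An 8-bit greyscale image is simply a 2-D array of integers in [0, 255].
image = np.array([[  0,  64, 128],
                  [192, 255,  32]], dtype=np.uint8)

rows, cols = image.shape      # spatial extent: rows (y) by columns (x)
grey = int(image[1, 2])       # grey level f at row 1, column 2 -> 32
```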
2- Types of images:
2-1 Binary Images:
In a binary image, each pixel assumes one of only two discrete values: 1 or 0. Binary
images are also called bi-level or two-level (or black-and-white, B&W) images.
Binary images often arise in digital image processing as masks or as the result of certain
operations such as segmentation, thresholding, and dithering. Some input/output devices,
such as laser printers, fax machines, and bi-level computer displays, can only handle
bi-level images.
A binary image is usually stored in memory as a bitmap, a packed array of bits.
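A thresholding operation producing such a binary image can be sketched in a few lines (the threshold value 127 below is arbitrary):

```python
import numpy as np

def threshold(image, t):
    """Produce a binary (bi-level) image: 1 where the grey value
    exceeds the threshold t, 0 elsewhere."""
    return (np.asarray(image) > t).astype(np.uint8)

binary = threshold([[10, 200], [130, 90]], t=127)
```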
3- Colors
The visible spectrum is divided into three sub domains that define the colors using a
constant radiation level:
Other definition of the basic colors according to the radiation level, so called the
complementary colors:
Color science talks about colors in the range 0.0 (minimum) to 1.0 (maximum).
Most color formulae take these values. For instance, full intensity red is 1.0, 0.0,
0.0.
The color values may be written as numbers in the range 0 to 255, simply by
multiplying the range 0.0 to 1.0 by 255. This is commonly found in computer
science, where programmers have found it convenient to store each color value in
one 8-bit byte. This convention has become so widespread that many writers now
consider the range 0 to 255 authoritative and do not give a context for their
values. Full intensity red is 255,0,0.
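The mapping between the two conventions is a simple scaling, sketched here:

```python
def to_byte(value):
    """Map a colour component from the 0.0-1.0 range to 0-255."""
    return round(value * 255)

def to_unit(byte):
    """Map an 8-bit component back to the 0.0-1.0 range."""
    return byte / 255

full_red = (to_byte(1.0), to_byte(0.0), to_byte(0.0))   # (255, 0, 0)
```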
When written, RGB values in 24 bpp, also known as Truecolor, are commonly specified
using three integers between 0 and 255, each representing red, green and blue intensities,
in that order. For example:
(0, 0, 0) is black
(255, 255, 255) is white
(255, 0, 0) is red
(0, 255, 0) is green
(0, 0, 255) is blue
(255, 255, 0) is yellow
(0, 255, 255) is cyan
(255, 0, 255) is magenta
3-1-2 HSV
The HSV (Hue, Saturation, Value) model, also known as HSB (Hue, Saturation,
Brightness), defines a color space in terms of three constituent components:
Let MAX equal the maximum of the (R, G, B) values, and MIN equal the minimum of
those values. Then V = MAX and S = (MAX - MIN) / MAX, with the hue H determined
by which of R, G, B is the maximum. For the inverse conversion, with hue sector
i = 0, ..., 5, fractional part f of the hue within the sector, and the auxiliary values
p = V(1 - S), q = V(1 - fS), and t = V(1 - (1 - f)S):

i = 0: R = V, G = t, B = p
i = 1: R = q, G = V, B = p
i = 2: R = p, G = V, B = t
i = 3: R = p, G = q, B = V
i = 4: R = t, G = p, B = V
i = 5: R = V, G = p, B = q
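Python's standard colorsys module implements these conversions (with all components in the 0.0-1.0 range), which makes the round trip easy to check:

```python
import colorsys

# Pure red in RGB -> HSV: hue 0.0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# And back again: HSV -> RGB recovers pure red.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
```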
Radiometric Resolution
The radiometric resolution defines how many different colors/gray values are available in
order to represent an image. The higher the radiometric resolution the more details are
visible, but the more storage space will be needed.
The lowest radiometric resolution we know is a black-and-white image: a pixel's status
can be either 0 (black) or 1 (white).
The radiometric resolution is usually expressed in bits.
1 bit -> 2 grey values
6 bit -> 64 grey values
8 bit -> 256 grey values
etc.
If we consider an RGB image with 8 bits per channel (a 24-bit image), we need a storage
space of nearly 3 MB!
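The storage figure follows directly from rows x columns x bits per pixel. The 1024 x 1024 image size below is an assumption, chosen so that the result comes out at the 3 MB mentioned above:

```python
def storage_mb(rows, cols, bits_per_pixel):
    """Uncompressed storage requirement in MB (1 MB = 1024 * 1024 bytes)."""
    return rows * cols * bits_per_pixel / 8 / (1024 * 1024)

size = storage_mb(1024, 1024, 24)   # assumed 1024 x 1024 pixel, 24-bit RGB image
```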
Spatial Resolution
The details visible in an image are directly dependent on the spatial resolution. The
higher the resolution the more information is visible. Most of the pixels in a coarse
resolution image contain more than one material or feature; they are "mixed pixels". A
mixed pixel results from the fact that individual areas consisting of different features
or classes may be below (smaller than) the resolution of the sensor. The phenomenon of
mixed pixels is a common problem in all digital images.
Digital Photogrammetry
Ch.3
Digital Image Acquisition
[Figure: three-line scanner imaging geometry with forward, nadir, and backward views]
Linear CCD cameras give images of good geometric quality with minor
distortions.
Because the trajectory of the plane is mostly uncertain, it is important to have a
high-precision real position of the sensors (this problem is negligible in
satellite images). To solve the problem, an inertial navigation system is
attached (accelerometers to measure the accelerations of the plane, and
gyroscopes to measure the rotation angles), in addition to precise GPS
receivers. Otherwise the sensed data are unusable. The so-called Kalman filter
algorithm is applied to get the final image pixels.
The residual errors in linear CCD cameras are about 0.1 pixel, as in matrix
CCD cameras.
4- Photogrammetric scanners
Scanners depend on the principle of CCD, arranged in matrix form, or in linear form
using TDI technology (Time Delay integration).
4-1 Types of Scanners
There are basically four different types of scanners: film, hand-held, flatbed, and drum.
Film Scanners / Slide Scanners: Film scanners are small desktop scanners used
to scan 35mm film and slides. Some film/slide scanners include an APS
(Advanced Photo System) film adapter for use with the APS film format.
Slides usually are higher quality than prints and produce a higher quality scan.
Slides are brighter than prints and have a higher dynamic range. Many slide
scanners have resolutions in the 5,000-6,000 ppi range and can be very expensive
to purchase.
Hand-held scanners: Hand-held scanners are small instruments that you slide
across the image by hand. They can only scan 2"-5" at a time so are only useful
for small photos. They are sometimes called half-page scanners and are the least
expensive type of scanners.
Flatbed scanners: Also called desktop scanners, flatbed scanners range from
inexpensive low-end scanners for hobby use to very high quality, expensive units
used by professionals. They generally are not as high quality as the drum
scanners.
Images are placed on a glass bed either with or without a holder. The scan area
varies in size from 8-1/2" x 11" to 13" x 18". Either the bed is stationary and the
scanning head moves or if the bed moves, the scanning head is stationary. They
are either a single-pass or three-pass scanner. Single-pass captures all the RGB
colors by moving the light source over the image once. Three-pass scanners use
three passes, one pass each for red, green and blue. The single-pass scanners are
faster but the three-pass scanners are generally more accurate.
Flatbed scanners can scan originals of varying thicknesses, and some are capable
of scanning three-dimensional objects. You can add adapters for automatic page
feeders. There are also templates you can use to hold pieces such as
transparencies or slides.
In traditional flatbed scanners, the scanning head moves in one direction only.
There is a new technology called XY scanning which positions its scanning head
along an XY axis. The scanner head slides both horizontally and vertically
beneath the bed. The XY scanning technology assures high resolution and
uniform sharpness of the entire scanning area. It also makes it possible to enlarge
an image to a much higher percentage than the traditional flatbed.
The highest resolution you can achieve without interpolation is about 5,000 dpi.
With interpolation, the resolution may increase to about 11,000 dpi.
Drum scanners: Also known as a rotary scanner, the drum scanner scans images
that are mounted on a rotating drum. The drum spins rapidly in front of a
stationary reading head on either a horizontal or vertical unit. The vertical ones
are beneficial since they save on space.
Drum scanners are generally higher quality but are also very expensive. Some
have the capability to scan at a resolution of 12,000 dpi without interpolation.
Drum scanners cost from $25,000 to several hundred thousand dollars and require
trained operators to achieve the best results.
Generally, drum scanners have a larger scanning area than the other types. Some
offer scanning drums that are 20" x 24" or larger. The larger scanning area makes
it possible to scan large items or a combination of several smaller items.
The disadvantage of drum scanners is that the original image must be thin and
flexible enough to be wrapped around the drum.
Geometry:
With current aerial photographs, a level of precision of the order of +/- 2 µm can
be reached in aerial triangulation. This precision is also usually obtained with
analytical plotters. Consequently, it is useful to require such precision for
photogrammetric scanners as well.
Image resolution:
This parameter is decisively determined by the quality of the film and by the
aerial camera. As will be shown later on, it seems appropriate to require a pixel
size of 10 x 10 µm for black-and-white images, whereas a pixel size of 15-20 µm
is sufficient for color images.
Dynamic range:
This should correspond to the contrast of aerial photographs, which might range
from 0.1 to 2.0 D for black-and-white pictures and from 0.2 to 3.5 D for color
photographs.
Image density is measured from image brightness with optical densitometers, and ranges
from 0 to 4, where 0 is pure white and 4 is very black. More density means less
brightness. The minimum and maximum values of density capable of being captured by a
specific scanner are called DMin and DMax. If the scanner's DMin were 0.2 and DMax
were 3.1, its dynamic range would be 2.9. DMax implies that unique image tone values
are distinguishable and not hidden by electronic noise. Greater dynamic range can detect
greater image detail in dark shadow areas of the photographic image, because the range is
extended at the black end.
An interesting mathematical curiosity is the absolute theoretical maximum density range
shown in the table below for the various numbers of bits. Log10 of the largest storable
number can be computed as the theoretical dynamic range. 8 bits can store a numerical
value from 0 to 255, and Log10 of 255 is 2.4. Log10 of 1 is 0 (log is only defined for
values > 0). The difference is 2.4.
Number of bits | Largest number  | Log10 of the largest number
4              | 2^4  = 16       | Log10 of 15    = 1.2
5              | 2^5  = 32       | Log10 of 31    = 1.5
8              | 2^8  = 256      | Log10 of 255   = 2.4
10             | 2^10 = 1024     | Log10 of 1023  = 3.0
12             | 2^12 = 4096     | Log10 of 4095  = 3.6
14             | 2^14 = 16384    | Log10 of 16383 = 4.2
16             | 2^16 = 65536    | Log10 of 65535 = 4.8
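The last column of the table can be reproduced with a one-liner:

```python
import math

def theoretical_dynamic_range(bits):
    """log10 of the largest value storable in the given number of bits (2^bits - 1)."""
    return math.log10(2 ** bits - 1)

dr8 = theoretical_dynamic_range(8)     # about 2.4
dr16 = theoretical_dynamic_range(16)   # about 4.8
```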
Image noise:
The noise of photographic film is mainly defined by its granularity. Considering
the values given by the producers, the sensor noise should not exceed +/- 0.03 D
for a pixel size of 10 x 10 µm, and an image noise of only 0.02 D could even be
reached with the Kodak Panatomic-X film. This presumes that the modulation
transfer function of the scanner also allows a resolution corresponding to the
pixel size.
Color reproduction:
With the increasing use of color photographs, it is important to be able to also
scan color photographs.
Data compression:
The great mass of data produced when digitizing images can be effectively
reduced by data compression techniques.
Instruments handling:
The handling of the instruments as well as the management of the considerable
amount of data are important criteria; however, this aspect is not going to be
discussed in more detail here.
Radiometry
Density range of 0 to 3.5D, optionally shifted to achieve a maximum of
4.0D
Digital Photogrammetry
Ch.4
Compression of Digital Images
A Spot satellite image in panchromatic mode represents a volume of 6,000 lines
times 6,000 columns at 8 bits/pixel = 288 Mbits. A classical digitized aerial
image, scanned with a 14 µm pixel size, provides 2,048 Mbits.
Considering the limitations on storage and/or transmission capacity that apply in
most systems, it is necessary first to reduce to the minimum the quantity of bits
per pixel needed to represent the picture, while preserving the necessary
information in the picture.
The efficiency of the compression is measured by the rate of compression, that
is, the ratio between the number of bits of the source picture and the number of
bits of the compressed picture.
Picture data present a natural redundancy that doesn't contribute to information
and that can be eliminated before storage and transmission. One can therefore
compress pictures efficiently without any loss of information (so-called
reversible compression).
The end user of pictures is often interested in only part of the information carried
by the picture (relevant information). It is therefore possible to compress pictures
more efficiently by removing non-relevant information (so-called compression
with losses).
The level of quality required by the user ranges from reversible compression (the
case of certain applications in medical imagery) to a very low quality level (the
case of certain applications transmitting pictures over the Internet).
For a picture with k bits per pixel, the entropy is

$$H_0(S) = \sum_{i=0}^{2^k - 1} P_i \log_2 \frac{1}{P_i}$$

where P_i is the probability of grey level i, and the maximum reversible compression
rate is CRmax = k / H_0(S).

Example-1: a uniform picture where all pixels have the same value, H0(S) = 0;
this picture contains no information.
Example-2: a binary black-and-white picture of the type processed by fax
machines: in practice P(black) << P(white), therefore H(S) << 1, which explains
why the reversible compression algorithms used in fax machines have mean rates
of compression greater than 100.
Example-3: the picture of the Spot satellite over Genoa on 8 bits has H0(S) = 6.97.
Example-4: a picture (8 bits) of saturated white noise in which all values are
equi-probable (flat histogram) has H0(S) = 8. It is therefore not possible to
compress this picture in a reversible way.
Compression with losses: when one searches for compression rates higher than
CRmax, which is the most common case, one is obliged to introduce losses of
information in the compression chain.
Example-1: an image has a size of 2.547 MB; if JPEG compression with CRmax = 1.65
is applied without loss, what is the compressed image size?

CRmax = (size of original image) / (size of compressed image)

size of compressed image = (size of original image) / CRmax = 2.547 / 1.65 = 1.544 MB
Example-2:
For the following 3-bit (k = 3) 4x4 image:

0 3 5 1
7 7 5 5
3 1 4 3
3 1 0 3

For a 3-bit image the grey values range from 0 to 7, and

$$H_0(S) = \sum_{i=0}^{7} P_i \log_2 \frac{1}{P_i}$$

Counting the occurrences of each grey value gives:

P0 = 2/16, P1 = 3/16, P2 = 0, P3 = 5/16, P4 = 1/16, P5 = 3/16, P6 = 0, P7 = 2/16

The corresponding terms P_i log2(1/P_i) are:

0.375, 0.453, 0, 0.524, 0.250, 0.453, 0, 0.375

so

H0(S) = 2.430

CRmax = k / H0(S) = 3 / 2.430 = 1.235
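The frequency counts and the entropy can be checked directly from the pixel matrix (plain Python, no external libraries):

```python
import math
from collections import Counter

pixels = [0, 3, 5, 1,
          7, 7, 5, 5,
          3, 1, 4, 3,
          3, 1, 0, 3]

counts = Counter(pixels)                 # occurrences of each grey value
n = len(pixels)

# H0(S) = sum of P_i * log2(1 / P_i) over grey values that actually occur
h0 = sum((c / n) * math.log2(n / c) for c in counts.values())

cr_max = 3 / h0                          # k = 3 bits per pixel
```

Running this confirms H0(S) of about 2.43 and a maximum reversible compression rate of about 1.23.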
JPEG-LS norm: The value of the current pixel x is predicted from a linear
combination of pixels a, b, c previously encoded.
6-2 GIF
GIF (Graphics Interchange Format) is an image format developed by CompuServe and is the most common type
of image format used on the Web. It was developed as a way to store images in small files that can be quickly
exchanged and easily downloaded. GIF files have a color depth of 8 bits per pixel, so the image must be in Index
color mode in order to be saved as a GIF. GIF files can be accurately displayed on a greater number of systems,
as most systems can display at least 256 colors. GIF files are also saved as low-resolution, usually 72 ppi. The
GIF format should never be used for images that will be professionally printed. If you have an image you would like
to put on the web and also printed, you will need to save two separate files, one as a GIF and one as a TIFF or
EPS. GIF compression is known as a "lossless compression" method, in which the image is analyzed and
compressed without the loss of the original picture data.
The compression technique used with GIF is called LZW compression, which stands for Lempel, Ziv, and Welch.
Lempel, Ziv, and Welch are the mathematicians who were the inventors of this technology. The computer maker
Unisys holds the patent on LZW file compression technique which means that anyone creating GIF files
should owe Unisys a licensing fee for the use of the LZW compression technology. Most software programs like
Adobe Photoshop and Macromedia Fireworks, that are used to create GIF files, are already licensed by Unisys,
so most people should not have to worry about it.
A technique called "run length encoding" is used in GIF compression. The "run length encoding" technique records
the color changes of each horizontal line of pixels, from left to right. If a complete row of pixels is of one color, then
there is less data to record. When there are fewer color changes per row of pixels, the result will be a smaller GIF
file and a faster loading time. If the file size and the loading time are of a major concern, then large amounts of
extra vertical detail should be avoided. In the example shown below, a border of stripes was added to each
identical GIF image. The image with the vertical stripes on the left, will cause the file size to be larger because
there are more color changes to record on each horizontal row of pixels. The horizontal stripes on the image on
the right, create a smaller file because there are fewer color changes running horizontally along the image.
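The run-length idea described above can be sketched as follows (this illustrates the principle only; it is not the actual LZW coder that GIF uses):

```python
def run_length_encode(row):
    """Encode one horizontal row of pixels as (value, run length) pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([value, 1])     # start a new run
    return [tuple(r) for r in runs]

# A row of one colour collapses to a single run; frequent colour
# changes produce almost as many runs as pixels.
uniform = run_length_encode(['w'] * 8)        # one run
striped = run_length_encode(['w', 'b'] * 4)   # eight runs, no saving
```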
6-3 JPEG
The JPEG (Joint Photographic Experts Group) format/compression technique was developed specifically for
photographs. JPEG is utilized to gain high compression levels for photographic images without sacrificing the
image quality. It is used primarily for the compression of 24-bit continuous-tone images and is poorly suited to images of
lower bit depth. It also does not work very well for non-photographic images such as illustrations, cartoons, flat color areas,
or images with defined edges. JPEG is much more suitable for images that contain irregularities and soft edges
rather than images with many straight lines and hard edges. The irregularities cause the pixels to be less well
defined, which decreases the size of the file. The more irregular the image is, the better suited it is for JPEG.
Note: The JPEG format is used mostly for the web and for PhotoCDs. Images that will
be used in a page layout program and printed on a press should NOT be saved as
JPEGs.
24-bit JPEG images look great on 24-bit monitors, but may not look so good on 8-bit or 16-bit systems. The colors
in the 24-bit image that are not contained in the 8-bit or 16-bit palette of the computer system, will be dithered.
Even if flat areas of color in the JPEG image are among the colors in the 8-bit or 16-bit color palette, there could
still be problems with the JPEG image when viewed on a lower bit depth system. The JPEG compression process
introduces elements into the solid color areas that make the images look muddy or blurry.
JPEG compression is known as "lossy compression", which means that non-essential data is lost during the
compression. JPEG images may be compressed at several levels. The way the compression works is that the
image data is separated into levels of importance. The more the image is compressed, the more levels of
information are thrown out, which creates a smaller file, and along with it, the loss of image detail. The loss of this
data is permanent and it cannot be restored. If the image is not compressed by too great a factor, the overall
quality does not suffer that much. With JPEG, you have the choice of compressing an image without sacrificing too
much in the way of image quality, or you can have the advantage of having a greatly reduced file size, but a
resulting image of much poorer quality.
[Figure: original image at 100% quality (78.81k) compared with a copy compressed to 25% quality]
Even though the image on the right has been compressed to 25% of the quality of the original image on the left,
the quality of the right image is still tolerable and the file size has been reduced to less than one-tenth of the
original.
6-4 PDF
Adobe Systems developed the PDF (Portable Document Format) as a file format which allows a document to be
viewed on any computer and look the same on all of them, regardless of how it was created or which
operating system it is viewed on. The PDF format stores all of the fonts, colors, and graphics in such a way that
these components will look exactly as they were intended to look. Also, regardless of the printing device that is
used, a PDF file will print correctly on all of them.
Converting files to PDF is one of the best options when transferring files via email or the Web. It ensures that the
files will be readable on other computers. PDF removes the problems of files not opening properly on different
computer systems or not opening at all.
PDF files are much smaller than the files they originate from and download faster for display on the Web. They can
be attachments for e-mail and can be integrated with Web sites or CD-ROM applications. PDF files can be
augmented with video, sound, Web links, and security alternatives for more enjoyable viewing.
Once a document has been saved as a PDF file, you are very limited with your ability to edit the document.
Because of the editing limitations, the document should also be saved in its original format so that if it is necessary
to make changes to the document, the editing can be accomplished easily using the original program in which it
was designed. The edited document can then be resaved as a PDF file.
In order for a PDF file to be viewed on any computer, the computer must have Adobe Acrobat Reader installed
on it. The software is free and can be downloaded from the Internet from the Adobe site at www.adobe.com.
The illustration below shows a PDF document displayed on Adobe Acrobat.
6-5 TIFF
TIF or TIFF format (Tagged Image File Format) is the most common format for saving bitmapped images that will
be printed or imported into a page layout program such as QuarkXPress. It can be used on both the Mac and the
PC. It was originally created by the Aldus Corp. for saving scanned images. A TIFF file can be CMYK, RGB,
grayscale, index, or bitmap.
Digital Photogrammetry
Ch.5
Dr. Ghadi Zakarneh
Image Enhancement
1- Image histogram
The image histogram plots the number of pixels at each grey level. The normalised histogram gives the fraction of pixels within the image at each value.
2- Point operators
They modify each pixel of the image as a function of its intensity value, independently of the rest of the image. The application is based on transformation functions and, in practice, is carried out using LUTs (Look-Up Tables), where each input grey level is assigned a fixed output value. The LUTs allow the storage of a transformation just by saving a simple table with one entry per grey level, so it is not necessary to evaluate the transformation for every pixel. The number of grey levels L depends on the bit depth:

6 bit: L = 2^6 = 64
8 bit: L = 2^8 = 256
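The LUT mechanism above can be sketched in a few lines. The negative transform used here is only an illustrative choice of point operation, not one taken from the notes:

```python
import numpy as np

def apply_lut(image, lut):
    """Apply a point operation via a look-up table:
    each input grey level indexes its fixed output value."""
    return lut[image]

# Build a LUT for an 8-bit negative transform: g = 255 - f
levels = 256                                            # 8 bit: L = 2**8
lut = np.array([levels - 1 - f for f in range(levels)], dtype=np.uint8)

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
neg = apply_lut(img, lut)          # -> [[255, 155], [55, 0]]
```

Because the table has only L entries, any transformation function, however expensive, is evaluated at most L times instead of once per pixel.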
Linear transformations: They are defined by a linear function. Here are some examples:

a) Modification of gain and offset:

g(x,y) = a · f(x,y) + b

where a controls the contrast and b controls the overall brightness of an image. It can be applied to correct the two calibration parameters of a sensor.

b) Radiometric normalization based on mean and standard deviation:
The mean (μ) and standard deviation (σ) of the input image f(x,y) and of the output (reference) image g(x,y) are needed. The parameters a and b are then expressed as a function of them:

a = σg / σf
b = μg − a · μf
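The normalization above can be sketched as follows (the small 1-D arrays stand in for images; any shapes work):

```python
import numpy as np

def normalize_to_reference(f, ref):
    """Match mean and standard deviation of image f to a reference image:
    a = sigma_ref / sigma_f,  b = mu_ref - a * mu_f,  g = a*f + b."""
    a = ref.std() / f.std()
    b = ref.mean() - a * f.mean()
    return a * f + b

f = np.array([10.0, 20.0, 30.0, 40.0])        # mean 25
ref = np.array([100.0, 120.0, 140.0, 160.0])  # mean 130
g = normalize_to_reference(f, ref)
# g now has the mean and standard deviation of the reference image
```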
c) Contrast enhancement:
The histogram of a digital image is the graphic representation of the frequencies of the different intensity values in the image.
The linear transformation methods to enhance the contrast are based on histogram stretching. Two intensity values are selected on the X-axis of the histogram, and then a linear transformation is defined to stretch or amplify the contrast inside this interval. The effect is shown in figure .3.
g = α·f                 for 0 ≤ f ≤ a
g = β·(f − a) + ga      for a < f ≤ b
g = γ·(f − b) + gb      for b < f ≤ L

Values of α, β, γ > 1 have the effect of increasing the contrast, while values of α, β, γ < 1 reduce the contrast.
e) Thresholding and density slicing: The simplest expression is binarization, where a threshold is defined and the values under it are recoded to zero, while values over the threshold are given the maximum value (figure .5).
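Binarization can be sketched as below. The notes only distinguish "under" and "over" the threshold; mapping the value exactly at the threshold to the maximum is an assumption of this sketch:

```python
import numpy as np

def binarize(image, threshold, high=255):
    """Values under the threshold become 0; values at or over it
    are given the maximum value (assumed behaviour at the threshold)."""
    return np.where(image < threshold, 0, high)

img = np.array([[12, 200], [128, 90]])
binary = binarize(img, 128)        # -> [[0, 255], [255, 0]]
```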
Figure .6.- Simple thresholding (left) and density slicing transformation (right)
gi = gmin + (fi − fmin) · (gmax − gmin) / (fmax − fmin)
Example:
Transform the following image from 6 bit to 8 bit.

6 bit: fmin = 0, fmax = 63
8 bit: gmin = 0, gmax = 255

gi = 0 + fi · (255 − 0) / (63 − 0) = fi · 255 / 63

The pixel values are always integers, so the results are rounded to give the final image.
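The 6 bit to 8 bit stretch above can be sketched as (the sample 2×2 image is illustrative):

```python
import numpy as np

def stretch(f, f_min, f_max, g_min, g_max):
    """Linear histogram stretch:
    g = g_min + (f - f_min) * (g_max - g_min) / (f_max - f_min),
    rounded because pixel values are integers."""
    g = g_min + (f - f_min) * (g_max - g_min) / (f_max - f_min)
    return np.rint(g).astype(int)

img6 = np.array([[0, 16], [32, 63]])        # 6 bit values in [0, 63]
img8 = stretch(img6, 0, 63, 0, 255)         # -> [[0, 65], [130, 255]]
```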
a) Logarithmic:

g(x,y) = k · log(1 + |f(x,y)|)

k = scale constant

This transformation stretches the low values and reduces the contrast in the high intensity values (figure .7).

b) Exponential: It has the opposite effect, increasing the contrast in the higher values and reducing the contrast in the lower values (figure .7):

g(x,y) = k · (e^|f(x,y)| − 1)

Figure .7.- Logarithmic (a) and exponential (b) transformations (Richards, 1993)
deviation of the histograms, when images of the same area that have been taken on different dates or by different sensors are being compared. This is common when analyzing changes or evolutions.
g(x, y) = Σ_{(i,j)∈W} f_ij(x, y) · h(i, j)
a) Smoothing filters
They are used to reduce noise on images. The most common are:
Mean filter: Each pixel value is substituted by the mean value of the neighbouring pixels:

g(x, y) = (1 / N_W) · Σ_{(i,j)∈W} f_ij(x, y)

For a 3×3 window the mask is:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
Median filter: Each pixel value is substituted by the median value of the
neighbouring pixels. It is efficient in eliminating binary (salt and pepper) noise.
Mode filter: Each pixel value is substituted by the most repeated value of the neighbouring pixels.

Gaussian filter: a weighted mean of the neighbouring pixels, with the 3×3 mask:

(1/16) · | 1 2 1 |
         | 2 4 2 |
         | 1 2 1 |

Figure 7.10 shows some examples of smoothing filters.

Figure .10.- Image with gaussian noise and the result of application of 3 low-pass filters: median (5×5), mean (5×5) and mean (11×11).
Example: What would be the result of applying a (3×3) mean filter over the 9 central pixels of the 5×5 image shown below? And the result of applying a (3×3) median filter? What is the difference between both results?

1  1  1  1  1
1  1  1  1  1
1  1 10  1  1
1  1  1  1  1
1  1  1  1  1

Mean: every 3×3 window centred on one of the 9 central pixels contains the value 10, so each output value is (8·1 + 10)/9 = 2:

2 2 2
2 2 2
2 2 2

Median: 1 1 1
        1 1 1
        1 1 1

The mean reduces the noise effect of the central pixel, but the median eliminates its effect. This is a typical case of the suitability of median filtering for eliminating binary noise.
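The worked example can be verified with a brute-force 3×3 filter (an illustrative sketch, not a production implementation):

```python
import numpy as np

def filter3x3(image, reducer):
    """Apply a 3x3 window reducer (mean or median) to the interior pixels,
    reading all windows from the original image."""
    out = image.astype(float).copy()
    for r in range(1, image.shape[0] - 1):
        for c in range(1, image.shape[1] - 1):
            out[r, c] = reducer(image[r - 1:r + 2, c - 1:c + 2])
    return out

img = np.ones((5, 5))
img[2, 2] = 10                                  # binary (salt-and-pepper) spike

mean9 = filter3x3(img, np.mean)[1:4, 1:4]       # each window -> (8*1 + 10)/9 = 2
median9 = filter3x3(img, np.median)[1:4, 1:4]   # each window -> 1
```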
b) High-pass filters
After the application of a high-pass filter, a scaling of the intensity range is usually needed for visualization purposes. Figure .11 shows an example of the application.
Example: a 5×5 image of 1s containing one bright pixel of value 49, filtered with the Laplacian mask:

| −1 −1 −1 |
| −1  8 −1 |
| −1 −1 −1 |

After scaling for visualization, the result is 0.3 for the background pixels and 9.7 at the bright pixel.
Example:
Apply a mean filter to the following 6 bit image.
Figure .11.- Original image (left), application of a high-pass filter (center) and a Sobel filter (right).
Figure .12.- Example of application of a High boost filter. There is a subtle increase of spatial detail
(high frequencies) in the filtered image (right).
|∇f| = sqrt( Gx² + Gy² )

The direction of the gradient vector at the point (x,y) is given by the angle:

θ = arctan( Gy / Gx )

measured with respect to the X axis.

The simplest approximations of the derivatives are the differences:

fx = f(x,y) − f(x−1,y)
fy = f(x,y) − f(x,y−1)

and their associated masks:

Δx = |  0  0 |      Δy = | 0 −1 |
     | −1  1 |           | 0  1 |
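The difference approximations and the gradient magnitude and direction can be sketched as below (the 4×4 test image with a vertical edge is illustrative):

```python
import numpy as np

def gradient_at(f, x, y):
    """Difference approximations of the partial derivatives, then
    gradient magnitude sqrt(Gx^2 + Gy^2) and direction (angle from X axis)."""
    fx = f[y, x] - f[y, x - 1]            # f(x,y) - f(x-1,y)
    fy = f[y, x] - f[y - 1, x]            # f(x,y) - f(x,y-1)
    magnitude = np.hypot(fx, fy)
    angle = np.degrees(np.arctan2(fy, fx))
    return magnitude, angle

# Vertical edge: grey level jumps from 0 to 40 between columns 1 and 2
f = np.array([[0, 0, 40, 40],
              [0, 0, 40, 40],
              [0, 0, 40, 40]])
mag, ang = gradient_at(f, 2, 1)   # fx = 40, fy = 0 -> magnitude 40, angle 0
```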
The gradient magnitude can also be approximated by the sum of the absolute differences:

|∇f| ≈ |Δx| + |Δy|

Kirsch directional operators: represented by the masks (in the four principal directions):

| −1 −2 −1 |   | −1  0  1 |   |  0  1  2 |   | −2 −1  0 |
|  0  0  0 |   | −2  0  2 |   | −1  0  1 |   | −1  0  1 |
|  1  2  1 |   | −1  0  1 |   | −2 −1  0 |   |  0  1  2 |

The final value for the pixel is selected as the maximum of the 4 resulting values, and a threshold value can be applied to define the edges.
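The maximum-of-four-responses rule plus threshold can be sketched as below. The masks are the Sobel-style directional masks reconstructed above (classical Kirsch masks use ±5/±3 coefficients, so this is an assumption about the notes' variant), and the small test image is illustrative:

```python
import numpy as np

# Four directional 3x3 masks (Sobel-style, as reconstructed above)
MASKS = [
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),   # horizontal edges
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),   # vertical edges
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),   # one diagonal
    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),   # other diagonal
]

def directional_edges(image, threshold):
    """For each interior pixel keep the maximum of the 4 directional
    responses, then apply the rule: value >= threshold -> 1, else 0."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=int)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = image[r - 1:r + 2, c - 1:c + 2]
            best = max(abs((win * m).sum()) for m in MASKS)
            out[r, c] = 1 if best >= threshold else 0
    return out

img = np.array([[0, 0, 40, 40],
                [0, 0, 40, 40],
                [0, 0, 40, 40],
                [0, 0, 40, 40]])
edges = directional_edges(img, 30)   # interior pixels beside the edge -> 1
```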
Example:
Apply the Kirsch filter to the following 6 bit image, and then apply to the final image the threshold rule: if the value ≥ 30 then 1, else 0.
1st filter:
2nd filter:
3rd filter:
4th filter:
Threshold of images:
Laplacian: it is based on the second derivatives:

∇²f = ∂²f/∂x² + ∂²f/∂y²

The second derivative in the same direction is obtained by derivation of the previous expression:

fxx = Δx[ f(x,y) − f(x−1,y) ]
    = [ f(x,y) − f(x−1,y) ] − [ f(x−1,y) − f(x−2,y) ]
    = f(x,y) − 2·f(x−1,y) + f(x−2,y)

This is equivalent to a linear mask (1 −2 1). Changing signs and extending the mask to the two directions:

∇² = |  0 −1  0 |
     | −1  4 −1 |
     |  0 −1  0 |
1-D continuous function:

Transform:          F(u) = ∫ f(x) · e^(−j2πux) dx
Inverse transform:  f(x) = ∫ F(u) · e^(j2πux) du

Components:
F(u) = |F(u)| · e^(jφ(u))
|F(u)| = sqrt( R²(u) + I²(u) )
φ(u) = arctan( I(u) / R(u) )

2-D continuous function:

F(u,v) = ∫∫ f(x,y) · e^(−j2π(ux + vy)) dx dy
f(x,y) = ∫∫ F(u,v) · e^(j2π(ux + vy)) du dv

2-D discrete function (M×N image):

F(u,v) = (1 / (M·N)) · Σ_{x=0..M−1} Σ_{y=0..N−1} f(x,y) · e^(−j2π(ux/M + vy/N))
f(x,y) = Σ_{u=0..M−1} Σ_{v=0..N−1} F(u,v) · e^(j2π(ux/M + vy/N))
In the spatial domain, filtering is the convolution g(x, y) = Σ_{(i,j)∈W} f_ij(x, y) · h(i, j).

If F(u,v) is the Fourier transform of f(x,y), and H(u,v) is the Fourier transform of h(x,y), then the Fourier transform of f(x,y)*h(x,y) is F(u,v)·H(u,v), and the transform of the product f(x,y)·h(x,y) is the convolution F(u,v)*H(u,v). That is,

f(x,y) * h(x,y)  ↔  F(u,v) · H(u,v)
f(x,y) · h(x,y)  ↔  F(u,v) * H(u,v)

The practical importance of this theorem is that any convolution of an image can be obtained as a product of the two transforms in the frequency domain. The advantages are that large masks in the spatial domain can be applied efficiently in the frequency domain, and that it is easier to filter periodic noise in this domain.
Steps of filtering:

Original image f(x,y)  → FT →  image spectrum F(u,v)
Filter h(x,y)          → FT →  transfer function H(u,v)
G(u,v) = F(u,v) · H(u,v)  → (FT)^(−1) →  filtered image g(x,y)
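The steps above can be sketched with numpy's FFT routines. The ideal circular low-pass transfer function used here is an illustrative choice, not one prescribed by the notes:

```python
import numpy as np

def lowpass_frequency_domain(image, radius):
    """Filtering in the frequency domain: transform the image to get the
    spectrum F(u,v), multiply by the transfer function H(u,v), and apply
    the inverse transform to get the filtered image g(x,y)."""
    F = np.fft.fftshift(np.fft.fft2(image))       # spectrum, DC term centred
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    dist = np.hypot(u[:, None], v[None, :])       # distance from the DC term
    H = (dist <= radius).astype(float)            # ideal circular low-pass
    G = F * H                                     # G(u,v) = F(u,v) . H(u,v)
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 ramp
smooth = lowpass_frequency_domain(img, radius=2)
# A radius large enough to keep every frequency returns the image unchanged
identity = lowpass_frequency_domain(img, radius=100)
```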
Examples
Figure .12.- Three images (left) and their respective Fourier transforms (right).
Low-pass filtering

[Figure: images (a) and (b) filtered with circular low-pass filters PB (r = 15), PB (r = 30) and PB (r = 50).]

Figure .14.- Transfer functions (left) defined by two gaussian functions, σ = 0.2 (above) and σ = 0.8 (below). Semi-profiles of the transfer functions (center). Filtered images (right).
High-pass filtering
(a)
(b)
(c)
Figure .15.- Examples of three images filtered in the frequency domain using: (a) High-pass
circular filter (radius=15); (b) Band-pass circular filter (ring between 10 and 40); and (c) the same
with the extreme frequencies.
Figure .16.- Periodic noise elimination: (a) Original image; (b) Fourier spectrum; (c) specific binary
filter (transfer function); (d) image result of filtering.
Ch06
Digital Photogrammetric Workstations
Photogrammetric Workstations

1- Introduction
The basic functions of the photogrammetric workstations are shown in the graph below.
3-3
3-4 Epipolar resampling
The visualization of a stereoscopic model can be tedious for operators when the two images have too different scales and orientations. The performance of the usual image-matching techniques is also degraded by this type of imagery.
3-5

4-
After this process, two points of a line of the left image will have their
counterparts in one line of the right image, and the transverse parallax is
constant.
Epipolar lines are the intersections of the bundle of planes containing the base with the two image planes. In the resampled images these lines are parallel, whereas they were converging in the initial images.
This resampling requires at least the calculation of the relative orientation to generate the new images.
Epipolar resampling improves the operator's comfort and accelerates the work of correlators, since the search zone for the homologous points is reduced to a one-dimensional space (instead of a two-dimensional space for a systematic correlation of images).
Some systems of digital photogrammetry require working with images in
epipolar geometry for the extraction of the DTM; others do this
resampling continuously.
vector display
Ch07
Photogrammetric DSM & DTM
The term digital elevation model (DEM) is used generically to mean the digital cartographic representation of the elevation of the earth's surface in any form.
Resource Management
5- DEM representation
A very large number of programs have been devised and written for terrain modelling applications in surveying and civil engineering. They basically follow one of two main approaches:
1- They are based on, or make use of, height data which has been collected or arranged in the form of a regular grid.
2- They are based on a triangular network.

5.1- Grid Representation
Grid models represent the terrain by interpolating from the input data onto a fixed grid. This technique has limitations, but it is an easy method to implement and store on a microcomputer. The main disadvantages are that the grid size must be fixed, and it must be small to accurately represent irregular surfaces, which naturally leads to excess data in smooth areas. Figure .4 shows the principle of grid representation.
6- Photogrammetric DTM Production
In digital photogrammetry a DTM can be produced either manually or automatically.
6.1- Automatic DTM production (Image Matching)
For the image matching we have the following workflow, as shown in figure .7:
Define a window A around the pixel in the first image, e.g. 3×3.
Find the correlation coefficient between window A in the first image and a window B of the same size around each pixel in the search area of the second image.
From all calculated correlation coefficients select the pixel with the best correlation coefficient. This is the pixel we need to find.
c = Σ_{i=1..m} Σ_{j=1..n} (A_ij − Ā)·(B_ij − B̄) / sqrt( Σ_{i=1..m} Σ_{j=1..n} (A_ij − Ā)² · Σ_{i=1..m} Σ_{j=1..n} (B_ij − B̄)² )
For the correlation coefficient c:
c = 1: the windows A and B are identical
c = 0: the windows A and B are uncorrelated
c = −1: the windows A and B are inverse (negative image)
c > 0.7: the points have good correlation
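The correlation coefficient can be sketched directly from its definition (the 2×2 windows are illustrative):

```python
import numpy as np

def correlation_coefficient(A, B):
    """Normalised cross-correlation between two equal-size windows:
    c = sum((A - Amean)(B - Bmean)) /
        sqrt(sum((A - Amean)^2) * sum((B - Bmean)^2))"""
    dA = A - A.mean()
    dB = B - B.mean()
    return (dA * dB).sum() / np.sqrt((dA ** 2).sum() * (dB ** 2).sum())

A = np.array([[10.0, 20.0], [30.0, 40.0]])
c_identical = correlation_coefficient(A, A)    # -> 1 (identical windows)
c_inverse = correlation_coefficient(A, -A)     # -> -1 (negative image)
```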
c1(i1, j1, i2, j2) = || v2(i2, j2) − v1(i1, j1) ||²

4- Scalar product:
This defines the cosine of the angle between the two vectors; if the cosine is 1, the angle is 0 and the two vectors are identical.

c2(i1, j1, i2, j2) = v1(i1, j1) · v2(i2, j2) / ( |v1(i1, j1)| · |v2(i2, j2)| )
Example:
The candidate array A is an ideal template for a fiducial cross, and the following
search array S is a portion of a digital image containing a fiducial cross. Compute the
position of the fiducial within the search array by correlation. Use a 5 X 5 search
window size.
Select the maximum correlation coefficient. The maximum value, 0.94, occurs at row
3, column 3 of the C array. The position in the second image is row 5 and column 5.
Feature Based matching
Features such as edges are extracted from the images; these are then compared to find the best match.
Advantages:
Fast, since only a small subset of pixels is used.
Accurate, since features can be located with sub-pixel precision.
More robust with respect to radiometric and perspective distortion.
Disadvantages:
Sparse depth maps: matching only takes place where features occur.
Intermediate matches must be interpolated.
Matching primitives:
Zero crossing locations
direction of sign change, contour orientation.
Edges
end point coordinates, length, orientation, edge strength (contrast with respect to background), difference between grey levels on either side.
Regions
shape, size, relative geometry.
Suitable for smooth objects delineated by edges.
correlation coefficient. The point along the epipolar line with the best correlation coefficient value is the matching point (x2, y2). The X, Y, Z coordinates of the point are computed from the left photo coordinates (x1, y1) and the right photo coordinates (x2, y2).

The estimated error in the elevation is:

σz = (H / B) · r0 · σmatch

Where:
H: flying height
B: air base
This value is the position of the best matching value as shown in the figure below.
H: flying height
B: air base
r0 : The ground pixel size in meters.
2.

σz = (H / (4·B)) · r0

Where:
H: flying height
B: air base
Ch08
Resampling
Resampling
Resampling is a technique for calculating image gray values by interpolation, after a
geometrical transformation of an image. This becomes necessary after geo-referencing,
rectification, magnification etc.
Another simple example of image resampling is zooming an image: if you zoom an image 4 times, you get an image in which only every 4th pixel is filled, while the rest are empty and have to be resampled.
Generally there are three interpolation methods, which lead to different results, as shown for the example below:
The resulting image after image registration is:
1. Nearest neighbour:
It is a very fast solution; the range is one pixel. The pixel values are directly copied from one image to the other; there is no interpolation involved. In the example above:
A pixel is superimposed at a fractional location (R 619.71, C 493.39). Rounding these values to the nearest integers yields 620 and 493 for the row and column indices, respectively. Thus, the resampled value is 56.
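The rounding rule can be sketched in plain Python. The 2×2 grid below is a stand-in for the neighbourhood of (R 619.71, C 493.39); apart from the resampled value 56, the true image values from the notes' example are not reproduced:

```python
def nearest_neighbour(image, row, col):
    """Resample at a fractional (row, col) by rounding to the nearest
    pixel; no interpolation, the value is copied directly."""
    return image[round(row)][round(col)]

# Illustrative neighbourhood; fractional offsets mirror (619.71, 493.39)
img = [[10, 20],
       [30, 56]]
value = nearest_neighbour(img, 0.71, 1.39)   # rounds to row 1, col 1 -> 56
```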
[Figure: nearest neighbour resampling of 3×3 blocks distributed over 7×7 blocks, and nearest neighbour resampling with a 7/3 resize.]
2. Bilinear:
It is a linear interpolation along rows and columns; the interpolation range is the 4 surrounding pixels, so the result is the weighted mean of the closest 4 pixels.
Finally, since DNs are generally integers, the value is rounded; in the example the result is 59.
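The weighted mean of the 4 surrounding pixels can be sketched as below (the 2×2 grid is illustrative, not the notes' example that yields 59):

```python
def bilinear(image, row, col):
    """Weighted mean of the 4 surrounding pixels; each weight is the
    product of the fractional distances along rows and columns."""
    r0, c0 = int(row), int(col)
    dr, dc = row - r0, col - c0
    return ((1 - dr) * (1 - dc) * image[r0][c0]
            + (1 - dr) * dc * image[r0][c0 + 1]
            + dr * (1 - dc) * image[r0 + 1][c0]
            + dr * dc * image[r0 + 1][c0 + 1])

img = [[10, 20],
       [30, 40]]
value = bilinear(img, 0.5, 0.5)   # centre of the 4 pixels -> mean = 25.0
```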
The picture below shows the zooming of 4 times using linear interpolation.
3. Bicubic:
It is based on the fitting of third degree polynomials to the region surrounding the point. The 16 nearest pixel values in the input image are used to estimate the value at (r, c) in the output image. The bicubic weighting follows a sinc function, as shown below, where:
The constant a is a free parameter that defines the weighting function at x = 1; best results are obtained with a = −0.5.
The value x is the absolute difference between the interpolated fractional pixel position and the column or row number.
And R is:
The final pixel value is found by rounding the DN; in the example the final result is 61.
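The notes' exact kernel R is not reproduced above; a common kernel with the stated properties (a free parameter a, best results at a = −0.5, sinc-like shape, 16-pixel support) is the Keys cubic convolution kernel, sketched here as an assumption:

```python
def cubic_kernel(x, a=-0.5):
    """Keys cubic convolution weight R(x): a piecewise cubic that
    approximates a sinc function (best results with a = -0.5)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_at(image, row, col, a=-0.5):
    """Weighted sum of the 16 nearest pixels; x is the absolute difference
    between the fractional position and each row/column index."""
    r0, c0 = int(row), int(col)
    total = 0.0
    for r in range(r0 - 1, r0 + 3):
        for c in range(c0 - 1, c0 + 3):
            total += image[r][c] * cubic_kernel(row - r, a) * cubic_kernel(col - c, a)
    return total

# On a constant image the 16 weights sum to 1, so the value is preserved
img = [[7.0] * 6 for _ in range(6)]
value = bicubic_at(img, 2.3, 2.6)
```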
Ch09
Orthophotography
3- Orthorectification Process
In a case like the Nebraska example, a simple rectification process, such as removing the effects of the tilt of the camera, may be all that is necessary. This is very rare, and in most cases a more involved process is required. After removing the effect of the camera tilt, the effects of relief must also be removed, which requires that the elevation of the terrain above (or below) the mapping plane be known.
4- Methods
There are two methods by which rectification of an aerial photograph can occur. In the first case, Ground Control Points (GCPs) are determined either by conventional ground surveys, from published maps, by Global Positioning System (GPS) surveys, or by aerotriangulation. These points are taken at visible physical features on the landscape. On the corresponding image, the x, y photo coordinates are then determined for each corresponding GCP. Depending on the type of algorithmic correction to be used, a
5- Steps of orthophotography
Image acquisition.
Image geometry: calculate the image orientation parameters omega, phi, kappa, XL, YL, and ZL.
Collect the DTM directly by photogrammetry, or use existing DTMs.
Orthophoto computation: this is done using the DTM and the image orientation parameters. For each DTM cell the X, Y and Z coordinates are taken and, using the collinearity equations, the xy-image coordinates are calculated to get the colour (grey values). The calculated xy-image values fall at sub-pixel positions, so resampling methods have to be used to interpolate the grey level values.
Photogrammetric project
In the project, take care of the following steps and requirements:
1- Image scanning for both photos; use 20 micron resolution.
2- Collection of ground control points; 5 points minimum.
3- Model orientation.
4- Digitizing of ground objects, to produce a detailed map (e.g. buildings, streets and roads, trees, power poles, telephone poles, walls, etc.).
5- DTM automatic creation.
6- Collection of ground check points to determine map accuracy and contour interval; at least 20 check points. For the accuracy standards used in testing, refer to the Photogrammetry 2 lecture notes appendix.
- 25 horizontal check points on well recognized objects.
- 25 vertical check points on open land (not on recognized objects: points on the terrain), checked by overlaying the points on the DTM in ArcGIS (the Spatial or 3D Analyst extension has to be used).
7- Production of the final map using ArcGIS direct printing, or export to an image with at least 300 dpi.
8- Production of an orthophoto for the model area, using bicubic image resampling.
- State its accuracy and spatial resolution.
- A 5 or 2.5 contour interval has to be overlaid.
- A coordinate grid with a proper interval.
9- Creation of a mosaic for the whole area covered by both photos. The Hebron contour map or Westbank contour map can be used to create the DTM (DEM) used for orthorectification of both photos. Use bilinear image resampling.
The final products are:
Printed details map for the model area.
Printed orthomap for the model area.
Printed mosaic of the orthorectified images.
CD-ROM of all project data, including ground control and check points as GIS shapefiles.
Marks:
The marks of the project are 30% of both the theoretical and the practical course:
10 marks: student work and effort
10 marks: cartography
10 marks: accuracy testing analysis report and its discussion
NOTE: The student must have all data available for the final exam, and may be asked to show some details on the PC.
Deadline: 10-12-2013.
Finished
Best of Luck
Sincerely Yours,
Dr.Eng. Ghadi Younis - Zakarneh
----------------------------------------------------------
Lecturer in Palestine Polytechnic University
----------------------------------------------------------
Palestine Polytechnic University
College of Engineering & Technology
Civil & Architectural Eng. Dept.
(B+) Building - Room (B+307)
Wadi Alhareya
Hebron
Palestine
P.O. Box :198
Tel: 00972-2-2233050
----------------------------------------------------------
email: ghadi@engineer.com
[email protected]
web:
www.ghadi.de.tf
Facebook: fb.com/ghadi.zakarneh
-----------------------------------------------------------