
lOMoARcPSD|27943792

SBM1603 - MEDICAL IMAGE PROCESSING

Medical Image Processing (Sathyabama Institute of Science and Technology)



Downloaded by BALAJI S ([email protected])

UNIT-5
IMAGE COMPRESSION
Image compression addresses the problem of reducing the amount of data required to represent
a digital image with no significant loss of information. Interest in image compression dates back
more than 25 years. The field is now poised for significant growth through the practical application
of the theoretic work that began in the 1940s, when C. E. Shannon and others first formulated the
probabilistic view of information and its representation, transmission and compression.

Images take a lot of storage space:

- a 1024 x 1024 image at 32 bits per pixel requires 4 MB
- 1 minute of video at 640 x 480 pixels, 24 bits per pixel and 30 frames per second requires about 1.54 GB

Large files also take a long time to transfer over slow connections. Over a 56,000 bps link,
4 MB takes almost 10 minutes and 1.54 GB takes almost 66 hours.
Storage problems, plus the desire to exchange images over the Internet, have led to a
large interest in image compression algorithms.
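The arithmetic behind these figures can be checked with a short script (sizes are in binary MB/GB, as the quoted numbers assume):

```python
# Back-of-the-envelope storage sizes and 56 kbps transfer times, matching
# the figures quoted above (binary MB/GB).

def image_size_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a single frame, in bytes."""
    return width * height * bits_per_pixel // 8

def transfer_seconds(size_bytes, bits_per_second):
    """Time to send size_bytes over a link of the given speed."""
    return size_bytes * 8 / bits_per_second

image = image_size_bytes(1024, 1024, 32)                    # 4 MB
minute_of_video = image_size_bytes(640, 480, 24) * 30 * 60  # ~1.54 GB

print(image / 2**20)                                            # MB
print(round(minute_of_video / 2**30, 2))                        # GB
print(round(transfer_seconds(image, 56_000) / 60))              # ~10 minutes
print(round(transfer_seconds(minute_of_video, 56_000) / 3600))  # ~66 hours
```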

Definition: Image compression refers to the process of reducing the amount of data required to
represent a given quantity of information in a digital image. The basis of the reduction process
is the removal of redundant data.

5.1 Data compression requires the identification and extraction of source redundancy. In other
words, data compression seeks to reduce the number of bits used to store or transmit information.

Need for Compression:


In terms of storage, the capacity of a storage device can be effectively increased with
methods that compress a body of data on its way to a storage device and decompress it when it is
retrieved.
In terms of communications, the bandwidth of a digital communication link can be
effectively increased by compressing data at the sending end and decompressing data at the
receiving end.
At any given time, the ability of the Internet to transfer data is fixed. Thus, if data can be
compressed wherever possible, significant improvements in data throughput can be achieved.
In addition, many files can be combined into one compressed document, making sending easier.

5.2 DATA REDUNDANCY: Data are the means by which information is conveyed. Various
amounts of data can be used to convey the same amount of information. Example: four different
representations of the same information (the number five):
1) A picture (1001, 632 bits);
2) The word "five" spelled in English using the ASCII character set (32 bits);
3) A single ASCII digit (8 bits);
4) A binary integer (3 bits).


Compression algorithms remove redundancy


If more data are used than is strictly necessary, then we say that there is redundancy in the
dataset.

Data compression is defined as the process of encoding data using a representation that reduces
the overall size of the data. This reduction is possible when the original dataset contains some
type of redundancy. Digital image compression is a field that studies methods for reducing the
total number of bits required to represent an image. This can be achieved by eliminating various
types of redundancy that exist in the pixel values. In general, three basic redundancies exist in
digital images, as follows.

REDUNDANCY IN DIGITAL IMAGES


– Coding redundancy – usually appears as a result of the uniform-length representation of each pixel.
– Spatial/temporal redundancy – adjacent pixels tend to be similar in practice.
– Irrelevant information – images contain information that is ignored by the human visual
system.

5.2.1 Coding Redundancy:

Our quantized data is represented using code words. The code words are ordered in the same
way as the intensities that they represent; thus the bit pattern 00000000, corresponding to the value
0, represents the darkest points in an image and the bit pattern 11111111, corresponding to the value
255, represents the brightest points.


- if the size of the code word is larger than is necessary to represent all quantization
levels, then we have coding redundancy

An 8-bit coding scheme has the capacity to represent 256 distinct levels of intensity in an image. But
if there are only 16 different grey levels in an image, the image exhibits coding redundancy because
it could be represented using a 4-bit coding scheme. Coding redundancy can also arise due to the use
of fixed-length code words.

The grey-level histogram of an image can also provide a great deal of insight into the construction
of codes that reduce the amount of data used to represent it.

Let us assume that a discrete random variable rk in the interval (0, 1) represents the grey levels of
an image and that each rk occurs with probability Pr(rk). The probability can be estimated from the
histogram of an image using

Pr(rk) = hk / n,    for k = 0, 1, ..., L-1

where L is the number of grey levels, hk is the frequency of occurrence of grey level k (the
number of times that the kth grey level appears in the image) and n is the total number of pixels
in the image. If the number of bits used to represent each value of rk is l(rk), the average number
of bits required to represent each pixel is:

Lavg = Σ l(rk) Pr(rk),    summed over k = 0, 1, ..., L-1

Example:
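The saving from a variable-length code can be worked through for a small hypothetical histogram (the four levels, probabilities and code lengths below are invented for illustration):

```python
# Hypothetical 4-level image: probabilities Pr(rk) estimated from a histogram,
# a natural fixed-length (2-bit) code, and an invented variable-length code
# (codes 0, 10, 110, 111 -> lengths 1, 2, 3, 3).
levels = [
    # (Pr(rk), fixed-length l(rk), variable-length l(rk))
    (0.60, 2, 1),   # the most frequent level gets the shortest code
    (0.25, 2, 2),
    (0.10, 2, 3),
    (0.05, 2, 3),
]

l_fixed = sum(p * l for p, l, _ in levels)
l_var = sum(p * l for p, _, l in levels)
print(round(l_fixed, 2))  # average bits/pixel with the fixed-length code
print(round(l_var, 2))    # average bits/pixel with the variable-length code
```

Here the variable-length code brings the average from 2 bits/pixel down to 1.55, a compression ratio of about 1.29.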


5.2.2 Interpixel Redundancy:


Consider the images shown in Figs. 1.1(a) and (b). As Figs. 1.1(c) and (d) show, these images have
virtually identical histograms. Note also that both histograms are trimodal, indicating the presence
of three dominant ranges of gray-level values. Because the gray levels in these images are not
equally probable, variable-length coding can be used to reduce the coding redundancy that would
result from a straight or natural binary encoding of their pixels. The coding process, however, would
not alter the level of correlation between the pixels within the images. In other words, the codes
used to represent the gray levels of each image have nothing to do with the correlation between
pixels. These correlations result from the structural or geometric relationships between the objects
in the image.

Fig.1.1 Two images and their gray-level histograms and normalized autocorrelation
coefficients along one line.

Figures 1.1(e) and (f) show the respective autocorrelation coefficients computed along one line of
each image.


The autocorrelation coefficients along one line are computed as

γ(n) = A(n) / A(0),  where  A(n) = (1/(N−n)) Σ f(x, y) f(x, y+n),  summed over y = 0, ..., N−1−n

The scaling factor 1/(N−n) in the equation above accounts for the varying number of sum terms
that arise for each integer value of n. Of course, n must be strictly less than N, the number of
pixels on a line. The variable x is the coordinate of the line used in the computation. Note the
dramatic difference between the shapes of the functions shown in Figs. 1.1(e) and (f). Their shapes
can be qualitatively related to the structure in the images in Figs. 1.1(a) and (b). This relationship
is particularly noticeable in Fig. 1.1(f), where the high correlation between pixels separated by 45
and 90 samples can be directly related to the spacing between the vertically oriented matches of
Fig. 1.1(b). In addition, the adjacent pixels of both images are highly correlated. When n is 1,
γ is 0.9922 and 0.9928 for the images of Figs. 1.1(a) and (b), respectively. These values are typical
of most properly sampled television images.

These illustrations reflect another important form of data redundancy, one directly related to the
interpixel correlations within an image. Because the value of any given pixel can be reasonably
predicted from the values of its neighbors, the information carried by individual pixels is relatively
small. Much of the visual contribution of a single pixel to an image is redundant; it could have been
guessed on the basis of the values of its neighbors. A variety of names, including spatial
redundancy, geometric redundancy, and interframe redundancy, have been coined to refer to these
interpixel dependencies. We use the term interpixel redundancy to encompass them all.

In order to reduce the interpixel redundancies in an image, the 2-D pixel array normally used for
human viewing and interpretation must be transformed into a more efficient (but usually
"nonvisual") format. For example, the differences between adjacent pixels can be used to represent
an image. Transformations of this type (that is, those that remove interpixel redundancy) are
referred to as mappings. They are called reversible mappings if the original image elements can be
reconstructed from the transformed data set.
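A minimal sketch of such a reversible mapping, using the adjacent-pixel differences mentioned above (the pixel values are hypothetical):

```python
# Mapping an image row to adjacent-pixel differences: a reversible
# (lossless) transformation that concentrates values near zero.

def forward_map(row):
    """Keep the first pixel, then store differences between neighbours."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def inverse_map(mapped):
    """Undo forward_map exactly: running sum of the differences."""
    row = [mapped[0]]
    for d in mapped[1:]:
        row.append(row[-1] + d)
    return row

row = [100, 102, 103, 103, 104, 180, 181]   # hypothetical pixel values
mapped = forward_map(row)
print(mapped)                      # [100, 2, 1, 0, 1, 76, 1]
assert inverse_map(mapped) == row  # the mapping is reversible
```

Because most differences are small, they can be coded with fewer bits than the raw intensities.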

5.2.3 Psychovisual Redundancy:

The brightness of a region, as perceived by the eye, depends on factors other than simply the light
reflected by the region. For example, intensity variations (Mach bands) can be perceived in an area
of constant intensity. Such phenomena result from the fact that the eye does not respond with equal
sensitivity to all visual information. Certain information simply has less relative importance than
other information in normal visual processing. This information is said to be psychovisually
redundant. It can be eliminated without significantly impairing the quality of image perception.

That psychovisual redundancies exist should not come as a surprise, because human perception of
the information in an image normally does not involve quantitative analysis of every pixel value
in the image. In general, an observer searches for distinguishing features such as edges or textural
regions and mentally combines them into recognizable groupings. The brain then correlates these
groupings with prior knowledge in order to complete the image interpretation process. Psychovisual
redundancy is fundamentally different from the redundancies discussed earlier. Unlike coding and
interpixel redundancy, psychovisual redundancy is associated with real or quantifiable visual
information. Its elimination is possible only because the information itself is not essential for
normal visual processing. Since the elimination of psychovisually redundant data results in a loss
of quantitative information, it is commonly referred to as quantization.


This terminology is consistent with normal usage of the word, which generally means the mapping
of a broad range of input values to a limited number of output values. As it is an irreversible
operation (visual information is lost), quantization results in lossy data compression.
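The many-to-one mapping that quantization performs can be sketched as follows (uniform bins over 8-bit values; the 16-level choice is illustrative):

```python
# Uniform quantization of 8-bit intensities to 16 output levels: a
# many-to-one mapping, so the original values cannot be recovered.

def quantize(pixel, levels=16):
    """Map an 8-bit value to the midpoint of its quantization bin."""
    step = 256 // levels                 # 16 input values per bin
    return (pixel // step) * step + step // 2

original = [0, 7, 8, 130, 255]
print([quantize(p) for p in original])   # [8, 8, 8, 136, 248]
# 0 and 7 map to the same output: the information loss is irreversible.
```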

5.3 IMAGE COMPRESSION MODELS

As the figure shows, a compression system consists of two distinct structural blocks: an encoder and
a decoder. An input image f(x, y) is fed into the encoder, which creates a set of symbols from the
input data. After transmission over the channel, the encoded representation is fed to the decoder,
where a reconstructed output image f^(x, y) is generated. In general, f^(x, y) may or may not be an
exact replica of f(x, y). If it is, the system is error free or information preserving; if not, some level
of distortion is present in the reconstructed image. Both the encoder and decoder shown in Fig. 3.1
consist of two relatively independent functions or sub-blocks. The encoder is made up of a source
encoder, which removes input redundancies, and a channel encoder, which increases the noise
immunity of the source encoder's output. As would be expected, the decoder includes a channel
decoder followed by a source decoder. If the channel between the encoder and decoder is noise free
(not prone to error), the channel encoder and decoder are omitted, and the general encoder and
decoder become the source encoder and decoder, respectively.

The Source Encoder and Decoder:


➢ Source Encoder
Reduces/eliminates any coding, interpixel or psychovisual redundancies. The source encoder
contains three processes:
• Mapper
Transforms the image into an array of coefficients, reducing interpixel redundancies. This is
a reversible process which is not lossy. It may or may not directly reduce the amount of data
required to represent the image.
• Quantizer
• Quantizer


This process reduces the accuracy, and hence the psychovisual redundancies, of a given image.
This process is irreversible and therefore lossy. It must be omitted when error-free
compression is desired.
• Symbol Encoder
This is the source encoding process where a fixed- or variable-length code is used to represent
the mapped and quantized data sets. This is a reversible process (not lossy). It removes coding
redundancy by assigning the shortest codes to the most frequently occurring output values.
➢ Source Decoder contains two components.
• Symbol Decoder: This is the inverse of the symbol encoder; the reverse of the
variable-length coding is applied.
• Inverse Mapper: The inverse of the mapping that removed the interpixel redundancy.
• The only lossy element is the quantizer, which removes the psychovisual redundancies,
causing irreversible loss. Every lossy compression method contains the quantizer module.
• If error-free compression is desired, the quantizer module is removed.
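The text does not fix a particular variable-length code for the symbol encoder; a Huffman code is one common choice. A minimal sketch that computes only the code lengths (the `heapq`-based layout here is illustrative, not a prescribed implementation):

```python
import heapq

def huffman_code_lengths(freqs):
    """Code length per symbol for a Huffman code built on `freqs`."""
    # Heap entries: (total probability, tie-breaker, [(symbol, depth), ...]).
    heap = [(p, i, [(sym, 0)]) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        pa, _, a = heapq.heappop(heap)     # two least probable subtrees
        pb, _, b = heapq.heappop(heap)
        merged = [(sym, d + 1) for sym, d in a + b]   # one level deeper
        heapq.heappush(heap, (pa + pb, tie, merged))
        tie += 1
    return dict(heap[0][2])

lengths = huffman_code_lengths({'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1})
print(lengths)   # {'a': 1, 'b': 2, 'd': 3, 'c': 3}
```

The most probable symbol receives the shortest code, so the average length (1.9 bits here) falls below the 2 bits a fixed-length code would need.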
The Channel Encoder and Decoder:
The channel encoder and decoder play an important role in the overall encoding-decoding process
when the channel is noisy or prone to error. They are designed to reduce the impact of channel noise
by inserting a controlled form of redundancy into the source encoded data. As the output of the
source encoder contains little redundancy, it would be highly sensitive to transmission noise
without the addition of this "controlled redundancy." One of the most useful channel encoding
techniques was devised by R. W. Hamming (Hamming [1950]). It is based on appending enough
bits to the data being encoded to ensure that some minimum number of bits must change between
valid code words. Hamming showed, for example, that if 3 bits of redundancy are added to a 4-bit
word, so that the distance between any two valid code words is 3, all single-bit errors can be detected
and corrected. (By appending additional bits of redundancy, multiple-bit errors can be detected and
corrected.) The 7-bit Hamming (7, 4) code word h1, h2, h3, ..., h6, h7 associated with a 4-bit binary
number b3b2b1b0 is

h1 = b3 ⊕ b2 ⊕ b0        h3 = b3
h2 = b3 ⊕ b1 ⊕ b0        h5 = b2
h4 = b2 ⊕ b1 ⊕ b0        h6 = b1
                         h7 = b0


where ⊕ denotes the exclusive OR operation. Note that bits h1, h2, and h4 are even-parity bits for
the bit fields b3b2b0, b3b1b0, and b2b1b0, respectively. (Recall that a string of binary bits has
even parity if the number of bits with a value of 1 is even.) To decode a Hamming encoded result,
the channel decoder must check the encoded value for odd parity over the bit fields in which even
parity was previously established. A single-bit error is indicated by a nonzero parity word c4c2c1,
where

c1 = h1 ⊕ h3 ⊕ h5 ⊕ h7
c2 = h2 ⊕ h3 ⊕ h6 ⊕ h7
c4 = h4 ⊕ h5 ⊕ h6 ⊕ h7

If a nonzero value is found, the decoder simply complements the code word bit position
indicated by the parity word. The decoded binary value is then extracted from the corrected code
word as h3h5h6h7.
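The parity relations described above can be rendered directly as a small encoder/decoder pair (a minimal Python sketch of the Hamming (7, 4) scheme):

```python
# Hamming (7, 4): h1, h2, h4 are even-parity bits; h3, h5, h6, h7 carry
# the data bits b3, b2, b1, b0 (^ is Python's exclusive OR).

def encode(b3, b2, b1, b0):
    h1 = b3 ^ b2 ^ b0
    h2 = b3 ^ b1 ^ b0
    h4 = b2 ^ b1 ^ b0
    return [h1, h2, b3, h4, b2, b1, b0]        # code word h1..h7

def decode(h):
    c1 = h[0] ^ h[2] ^ h[4] ^ h[6]   # parity over positions 1, 3, 5, 7
    c2 = h[1] ^ h[2] ^ h[5] ^ h[6]   # parity over positions 2, 3, 6, 7
    c4 = h[3] ^ h[4] ^ h[5] ^ h[6]   # parity over positions 4, 5, 6, 7
    pos = 4 * c4 + 2 * c2 + c1       # nonzero => position of the bad bit
    h = h.copy()
    if pos:
        h[pos - 1] ^= 1              # complement the indicated bit
    return h[2], h[4], h[5], h[6]    # decoded value b3 b2 b1 b0

word = encode(1, 0, 1, 1)
word[2] ^= 1                 # inject a single-bit error at h3
print(decode(word))          # (1, 0, 1, 1) -- the error is corrected
```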
5.4 ELEMENTS OF INFORMATION THEORY
Measuring Information
The generation of information is modeled as a probabilistic process. A random event E that occurs
with probability P(E) is said to contain

I(E) = -log P(E)

units of information. The base of the logarithm determines the units used to measure the
information. If base 2 is selected, the resulting information unit is called the bit. If P(E) = 0.5
(two possible equally likely events) the information is one bit.
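A minimal numeric check of the definition (base-2 logarithm, so the unit is the bit):

```python
import math

def information(p):
    """Self-information I(E) = -log2 P(E), measured in bits."""
    return -math.log2(p)

print(information(0.5))    # 1.0 -- two equally likely outcomes: one bit
print(information(0.25))   # 2.0 -- rarer events carry more information
print(information(1.0))    # -0.0 -- a certain event conveys no information
```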

The Information Channel


5.5 FUNDAMENTAL CODING THEOREMS

Fig: A Communication System Model

5.5.1 Noiseless Coding Theorem


5.5.2 Noisy Coding Theorem

5.6 PATTERNS AND PATTERN CLASSES

Pattern: an arrangement of descriptors.
Pattern class: a family of patterns, denoted by ω1, ω2, ω3, . . . , ωW, where W is the number
of classes.
Three common pattern arrangements are used in practice: vectors, strings, and trees.


Pattern classes: a pattern class is a family of patterns that share some common properties
Pattern recognition: to assign patterns to their respective classes

Here is another example of pattern vector generation.


In this case, the pattern vectors are generated from different types of noisy shapes.

String descriptions adequately generate patterns of objects and other entities whose structure is
based on a relatively simple connectivity of primitives, usually associated with boundary shape.


• Tree descriptions are more powerful than string descriptions.


• Most hierarchical ordering schemes lead to tree structure.

• Decision-theoretic approaches to recognition are based on the use of decision functions.

Given W pattern classes w1, w2, ..., wW, we want to find W decision functions
d1(x), d2(x), ..., dW(x) with the property that, if a pattern x belongs to class wi, then

di(x) > dj(x)    for j = 1, 2, ..., W; j ≠ i

• The decision boundary separating class wi and class wj is given by

di(x) = dj(x),  or  di(x) − dj(x) = 0
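The text does not specify a particular form for the di(x); a minimum-distance classifier is one standard choice, shown here as an illustrative sketch (the class means are invented):

```python
# A minimum-distance classifier: d_i(x) = x . m_i - 0.5 * (m_i . m_i),
# where m_i is the (hypothetical) mean vector of class w_i. The pattern x
# is assigned to the class whose decision function is largest.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def classify(x, means):
    """Index i of the class maximizing d_i(x)."""
    scores = [dot(x, m) - 0.5 * dot(m, m) for m in means]
    return scores.index(max(scores))

means = [(0.0, 0.0), (4.0, 4.0)]       # invented means for w1 and w2
print(classify((1.0, 1.0), means))     # 0 -- nearer to the first mean
print(classify((3.0, 4.0), means))     # 1 -- nearer to the second mean
```

Maximizing this di(x) is equivalent to picking the nearest class mean, and the boundary di(x) = dj(x) is the perpendicular bisector between the two means.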

References
1. Rafael C. Gonzalez, Richard E. Woods, Steven L. Eddins, "Digital Image Processing
Using MATLAB", 3rd Edition, Tata McGraw Hill Pvt. Ltd., 2011.
2. Anil K. Jain, "Fundamentals of Digital Image Processing", PHI Learning Pvt. Ltd., 2011.
3. William K. Pratt, "Introduction to Digital Image Processing", CRC Press, 2013.


Question Bank

PART-A
S.No
1. What is image compression?
2. Investigate the performance metrics for evaluating image compression.
3. List the need for compression.
4. Define compression ratio.
5. What is redundancy?
6. What is a pattern?


7. Validate the types of data redundancy.
8. What is the operation of the source encoder?
9. What is the function of the channel encoder?
10. Categorize video compression standards.
11. Specify the fundamental coding theorem.
12. What is meant by inverse mapping?

S.No PART-B
1. What is data redundancy? Illustrate the various types of data redundancy in detail.
2. Demonstrate in detail the image compression model.
3. Discuss in detail the source encoder and decoder.
4. Analyze Shannon's first theorem (the noiseless coding theorem).
5. Apply and analyze Shannon's second theorem (the noisy coding theorem).
6. Evaluate the fundamental coding theorem.
7. Summarize the different types of redundancy.
8. Compare and contrast the noiseless and noisy coding theorems.
