MODULE 3 Part1
SYLLABUS
Introduction, compression principles, text compression, image compression.
3.1 Introduction
Compression is generally applied in multimedia communication to reduce the volume of
information to be transmitted (text, fax and images) or to reduce the bandwidth that is
required for its transmission (speech, audio and video).
3.2 Compression principles
Compression algorithms are based on a number of underlying compression principles, which
are described below.
In the case of a lossless compression algorithm, the aim is to reduce the amount of source
information to be transmitted in such a way that, when the compressed information is
decompressed, there is no loss of information.
Figure 3.1 Source encoder/ destination decoder alternatives: (a) software only; (b)
special processors/hardware.
Lossless compression is therefore said to be reversible.
An example application of lossless compression is for the transfer over the network of a text
file since no part of the source information is lost during either the compression or
decompression operations.
The main aim of a lossy compression algorithm is normally not to reproduce an exact copy
of the source information after decompression but rather a version of it.
Example applications of lossy compression are the transfer of digitized images and audio
and video streams.
3.2.3 Entropy encoding
Entropy encoding is lossless and independent of the type of the information that is being compressed.
Examples of entropy encoding:
1. Run-length encoding and 2. Statistical encoding.
3.2.3.1 Run-Length Encoding
Typical applications of this type of encoding are when the source information comprises long
substrings of the same character or binary digit.
Instead of transmitting the source string in the form of independent codewords or bits, it is
transmitted in the form of a different set of codewords which indicate:
1. The particular character or bit being transmitted, and
2. The number of characters/bits in the substring.
The destination knows the set of codewords being used; it simply interprets each codeword
received and outputs the appropriate number of characters or bits. For example, in an
application that involves the transmission of long strings of binary bits that comprise a limited
number of substrings, each substring can be assigned a separate codeword.
The total bit string is then transmitted in the form of a string of codewords selected from
the codeword set.
An example application which uses this technique is for the transmission of the binary
strings produced by the scanner in a facsimile machine.
When scanning typed documents, in many instances the scanner produces long substrings of
either binary 0s or 1s. Instead of transmitting these directly, they are sent in the form of a
string of codewords, each indicating both the bit (0 or 1) and the number of bits in the substring.
For example, if the output of the scanner was 000000011111111110000011… this can be represented as
0,7 1,10 0,5 1,2 …
Alternatively, since only the two binary digits 0 and 1 are involved, if the first substring
always comprises binary 0s then the string could be represented as 7, 10, 5, 2 …
To send this in digital form, the individual decimal values would be sent in their binary form
and, assuming a fixed number of bits per codeword, the number of bits per codeword would be
determined by the largest possible substring.
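The run-length scheme just described can be illustrated with a short Python sketch (Python and the function names are our own, used purely for illustration):

```python
def rle_encode(bits):
    # Encode a binary string as a list of run lengths, assuming (as in
    # the text) that the first run comprises binary 0s; an empty leading
    # run of length 0 is emitted if the string starts with a 1.
    runs, current, count = [], "0", 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def rle_decode(runs):
    # Rebuild the binary string: the runs alternate 0s, 1s, 0s, ...
    return "".join(("0" if i % 2 == 0 else "1") * n
                   for i, n in enumerate(runs))

print(rle_encode("000000011111111110000011"))  # [7, 10, 5, 2]
```

This reproduces the 7, 10, 5, 2 representation from the scanner example above.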
3.2.4.1 Differential encoding
To exploit this property of the source information, a set of smaller codewords can be used,
each of which indicates only the difference in amplitude between the current value/signal
being encoded and the immediately preceding value/signal, instead of using relatively large
codewords to represent the amplitude of each value/signal.
For example, if the digitization of an analog signal requires 12 bits to obtain the required
dynamic range but the maximum difference in amplitude between successive samples of the
signal requires only 3 bits, then by using only the difference values a saving of 75% on
transmission bandwidth can be obtained.
Differential encoding can be lossless or lossy, depending on the number of bits used to
encode the difference values. If the number of bits used is sufficient to cater for the maximum
difference value, the encoding is lossless.
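A minimal sketch of this idea (the sample values below are illustrative; each difference fits in 3 bits signed, although the samples themselves would need 12 bits):

```python
def diff_encode(samples):
    # Send the first sample in full, then only the difference between
    # each sample and its predecessor.
    deltas = [samples[i] - samples[i - 1] for i in range(1, len(samples))]
    return samples[0], deltas

def diff_decode(first, deltas):
    # Rebuild each sample by accumulating the differences.
    out = [first]
    for d in deltas:
        out.append(out[-1] + d)
    return out

first, deltas = diff_encode([1000, 1003, 1001, 1004])
# deltas == [3, -2, 3]: small values, hence short codewords
```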
3.2.4.2 Transform encoding
Transform encoding involves transforming the source information from one form into
another form.
In general there is no loss of information associated with the transformation operation and this
technique is used in a number of applications involving both images and video.
For example, digitization of a continuous-tone monochromatic image produces a 2-D matrix
of pixel values, each of which represents the level of gray in a particular position of the
image. Some examples are shown in Figure 3.2 (a).
The rate of change in magnitude as one traverses the matrix gives rise to a term known as
spatial frequency. Scanning the matrix in the horizontal and vertical directions gives rise to
the terms horizontal and vertical frequency components respectively.
The transformation of the 2-D matrix of pixel values into an equivalent matrix of spatial
frequency components can be carried out using a mathematical technique known as the DCT
(discrete cosine transform). The basic principle behind the DCT is shown in Figure 3.2 (b).
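The forward DCT can be sketched directly from its defining expression; the following pure-Python version (written for clarity, not speed) transforms one 8 x 8 block of pixel values into its spatial frequency components:

```python
import math

def dct2(P):
    # 8x8 forward DCT-II as used in JPEG: C(0) = 1/sqrt(2), C(z) = 1
    # otherwise, with a 1/4 scale factor in front of the double sum.
    def C(z):
        return 1 / math.sqrt(2) if z == 0 else 1.0
    F = [[0.0] * 8 for _ in range(8)]
    for i in range(8):
        for j in range(8):
            s = sum(P[x][y]
                    * math.cos((2 * x + 1) * i * math.pi / 16)
                    * math.cos((2 * y + 1) * j * math.pi / 16)
                    for x in range(8) for y in range(8))
            F[i][j] = 0.25 * C(i) * C(j) * s
    return F

# A block of constant amplitude has no spatial frequency content:
# only the single DC coefficient F[0][0] is nonzero.
F = dct2([[128] * 8 for _ in range(8)])
```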
3.3.1 Static Huffman coding
The degree of imbalance is a function of the relative frequencies of occurrence of the
characters: the larger the spread, the more unbalanced is the tree. The resulting tree is
known as the Huffman code tree.
A Huffman (code) tree is a binary tree with branches assigned the values 0 or 1.
The base of the tree is known as the root node and the point at which a branch divides is a
branch node. The termination point of a branch is known as a leaf node.
Figure 3.2 Transform coding: (a) example pixel patterns; (b) DCT
transform principles
An example of a Huffman code tree and tree derivation is shown in Figure 3.3(a) and
(b). This corresponds to the string of characters AAAABBCD.
As each branch divides, a binary value of 0 or 1 is assigned to each new branch: a
binary 0 for the left branch and a binary 1 for the right branch.
Huffman codewords have the unique property that a shorter codeword will never form
the start of a longer codeword. This is known as the prefix property.
A flowchart of a suitable decoding algorithm is given in Figure 3.4 (a).
The algorithm assumes that a table of codewords is available at the receiver and that this
table also holds the corresponding ASCII codewords.
The received bit stream is held in the variable BIT-STREAM and the variable
CODEWORD is used to hold the bits in each codeword while it is being constructed.
Once a codeword is identified, the corresponding ASCII codeword is written into the
variable RECEIVE_BUFFER.
The procedure repeats until all the bits in the received string have been processed.
An example of a decoded string corresponding to the codeword set derived is given in
Figure 3.4 (b).
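The construction and use of a Huffman code tree for the example string AAAABBCD can be sketched as follows (a minimal Python illustration; tie-breaking between equal-frequency nodes can yield different but equally optimal codes):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Build the tree bottom-up: repeatedly merge the two least-frequent
    # nodes, then read the codes off the final tree.
    freq = Counter(text)
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):          # leaf node
            codes[node] = prefix or "0"
        else:                              # branch node: 0 left, 1 right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

def huffman_decode(bits, codes):
    # The prefix property guarantees each codeword is recognized as
    # soon as its last bit arrives.
    inverse = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return "".join(out)

codes = huffman_codes("AAAABBCD")
encoded = "".join(codes[c] for c in "AAAABBCD")
# 14 bits in total, against 8 x 7 = 56 bits with 7-bit ASCII
```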
Figure 3.3 Huffman code tree construction: (a) final tree with codes; (b) tree derivation
3.3.2 Dynamic Huffman coding
The basic Huffman coding method requires both the transmitter and the receiver to
know the table of codewords relating to the data being transmitted.
With dynamic Huffman coding, the transmitter (encoder) and receiver (decoder) build
the Huffman tree and hence the codeword table dynamically as the characters are
being transmitted/received.
With this method, if the character to be transmitted is currently present in the tree,
its codeword is determined and sent in the normal way.
If the character is not present, that is, it is its first occurrence, the character is
transmitted in its uncompressed form.
The encoder updates its Huffman tree either by incrementing the frequency of
occurrence of the transmitted character or by introducing the new character into the tree.
Both transmitter and receiver start with a tree that comprises the root node and a
single empty leaf node (a leaf node with a zero frequency of occurrence) assigned to its
0-branch.
Figure 3.4 Decoding of a received bit stream (a) Decoding algorithm (b) example
For each subsequent character, the encoder first checks whether the character is
already present in the tree.
If it is, then the encoder sends the current codeword for the character in the normal
way, the codeword being determined by its position in the tree. If it is not, the encoder
sends the codeword of the empty leaf node followed by the uncompressed codeword for
the character.
The encoder and decoder proceed to update their copy of the tree based on the last
character that has been transmitted/received.
If it is a new character, the existing empty leaf node in the tree is replaced with a new
branch node.
If the character is already present in the tree, then the frequency of occurrence of the
leaf node is incremented by unity.
The savings in transmission bandwidth start only when characters begin to repeat
themselves.
Dynamic Huffman coding is now used in a number of communication applications
that involve the transmission of text.
3.3.3 Arithmetic coding
Arithmetic coding is a form of entropy encoding used in lossless data compression.
Normally, a string of characters such as the words "hello there" is represented using a
fixed number of bits per character, as in the ASCII code.
When a string is converted to arithmetic encoding, frequently used characters will be
stored with fewer bits and not-so-frequently occurring characters will be stored with
more bits, resulting in fewer bits used in total.
Arithmetic coding differs from other forms of entropy encoding, such as Huffman
coding, in that rather than separating the input into component symbols and replacing
each with a code, arithmetic coding encodes the entire message into a single number,
an arbitrary-precision fraction q where 0.0 ≤ q < 1.0.
It represents the current information as a range, defined by two numbers.
A recent family of entropy coders called asymmetric numeral systems allows for faster
implementations by operating directly on a single natural number representing the
current information.
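The interval-narrowing idea can be sketched with exact fractions; the probabilities below match those used in the question bank for the string "went.", while the symbol ordering within [0, 1) is an arbitrary assumption:

```python
from fractions import Fraction as Fr

PROBS = {"e": Fr(3, 10), "n": Fr(3, 10), "t": Fr(2, 10),
         "w": Fr(1, 10), ".": Fr(1, 10)}

def ranges(probs):
    # Assign each symbol a sub-interval of [0, 1) whose width equals
    # its probability.
    out, lo = {}, Fr(0)
    for s, p in probs.items():
        out[s] = (lo, lo + p)
        lo += p
    return out

def ar_encode(msg, probs):
    # Narrow [lo, hi) once per symbol; any q with lo <= q < hi
    # identifies the whole message.
    r = ranges(probs)
    lo, hi = Fr(0), Fr(1)
    for s in msg:
        width = hi - lo
        slo, shi = r[s]
        lo, hi = lo + width * slo, lo + width * shi
    return lo, hi

def ar_decode(q, n, probs):
    # Reverse the narrowing: find which sub-interval q falls in,
    # emit that symbol, then rescale q and repeat.
    r = ranges(probs)
    out = []
    for _ in range(n):
        for s, (slo, shi) in r.items():
            if slo <= q < shi:
                out.append(s)
                q = (q - slo) / (shi - slo)
                break
    return "".join(out)

lo, hi = ar_encode("went.", PROBS)
```

A practical coder would emit a binary expansion of a number inside the final interval rather than the exact fraction, but the narrowing principle is the same.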
3.3.4 Lempel-Ziv coding
Lempel-Ziv compression algorithm uses strings of characters as the basis of the coding algorithm.
Ex.: For compression of text is table containing all the possible character strings, for Ex. words
– that occur in the text to be transferred is held by both encoder and decoder.
As each word occurs in the text, instead of sending the word as a set of individual (say,
ASCII) codewords, the encoder sends only the index of where the word is stored in the table.
On receipt of each index, the decoder uses this to access the corresponding word/string of
characters from the table and proceeds to reconstruct the text into its original form.
Thus the table is used as a dictionary, and the LZ algorithm is known as a dictionary-based
compression algorithm.
Most word-processing packages have a dictionary associated with them which is used for both
spell checking and for compression of text.
As with the other static coding methods, the basic requirement with the LZ algorithm is that a
copy of the dictionary is held by both the encoder and the decoder.
It can be relatively inefficient if the text to be transmitted comprises only a small
subset of the words stored in the dictionary.
Hence a variation of the LZ algorithm has been developed which allows the
dictionary to be built up dynamically by the encoder and decoder as the compressed
text is being transferred.
3.3.5 Lempel-Ziv-Welch coding
The principle of the Lempel-Ziv-Welch (LZW) coding algorithm is for the encoder
and decoder to build the contents of the dictionary dynamically as the text is being
transferred.
In order to describe how the dictionary is built up, assume the text to be compressed
starts with the string:
This is simple as it is…
Initially, the dictionary held by both the encoder and decoder contains only the
individual characters from the character set being used; for example the 128
characters in the ASCII character set.
Hence the first word in the example text is sent by the encoder using the index of each
of the four characters T, h, i and s.
When the encoder reads the next character from the string, the first space (SP)
character, it determines that this is not an alphanumeric character and transmits it
using its index.
When the decoder determines that the fifth character is a space character, it interprets
this as a word delimiter and proceeds to store the word this in its dictionary.
After the space character following the second occurrence of the word is, the contents of
the dictionary held by both the encoder and decoder are as shown in Figure 3.5 (a).
When the initial 256 dictionary locations become full, the encoder and decoder double the
size of their dictionaries to 512 locations. This necessitates an index length of 9 bits and so,
from this point on, the encoder uses 9-bit codewords.
This procedure is shown in diagrammatic form in Figure 3.5 (b).
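A minimal LZW sketch, assuming a 128-entry ASCII starting dictionary (the growth of the index length from 7 to 8 to 9 bits is omitted for brevity):

```python
def lzw_compress(text):
    # Start with single-character entries (here: the 128 ASCII codes),
    # then add each newly seen string w+c to the dictionary.
    dict_size = 128
    table = {chr(i): i for i in range(dict_size)}
    w, out = "", []
    for c in text:
        wc = w + c
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = dict_size
            dict_size += 1
            w = c
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    # Rebuild the same dictionary on the fly from the received indices.
    dict_size = 128
    table = {i: chr(i) for i in range(dict_size)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in table:
            entry = table[k]
        else:               # code not yet in table: must be w + w[0]
            entry = w + w[0]
        out.append(entry)
        table[dict_size] = w + entry[0]
        dict_size += 1
        w = entry
    return "".join(out)
```

Running `lzw_compress` on a repetitive string such as "This is simple as it is as it is" produces fewer indices than there are characters, and `lzw_decompress` recovers the text exactly.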
Figure 3.6 GIF compression principles; (a) basic operational mode; (b) dynamic
mode using LZW coding
It starts with the basic color table containing 256 colors, and the table can be extended to
contain up to 4096 entries containing common strings of pixels in the image being transferred.
Again, a standard format is used for the transfer of both the color table and the compressed
image data.
A technique known as overscanning is used, which means that all lines start with a minimum
of one white pel. To enable the receiver to regain synchronization, each scanned line is
terminated with a known end-of-line (EOL) code.
Figure 3.8 ITU-T group 3 and 4 facsimile conversion codes: (a) termination-codes, (b) make-up codes.
Modified READ, also known as 2D or two-dimensional coding, identifies black and white
run-lengths by comparing adjacent scan lines. READ stands for relative element address
designate.
With MMR coding the run-lengths associated with a line are identified by comparing the line
contents, known as the coding line (CL), relative to the immediately preceding line, known as
the reference line (RL).
The run-lengths associated with a coding line are classified into one of three possibilities or
modes relative to the reference line, as shown in Figure 3.9.
The three possibilities are:
i. Pass mode: This is the case when the run-length in the reference line (b1b2) is to
the left of the next run-length in the coding line (a1a2), that is, b2 is to the left of a1.
ii. Vertical mode: This is the case when the run-length in the reference line (b1b2)
overlaps the next run-length in the coding line (a1a2) by a maximum of plus or
minus 3 pels.
iii. Horizontal mode: This is the case when the run-length in the reference line (b1b2)
overlaps the run-length (a1a2) by more than plus or minus 3 pels.
Figure 3.9 Some example run-length possibilities: (a) pass mode, (b) vertical mode,
(c) horizontal mode
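Under the definitions above, the mode chosen depends only on the positions of the changing elements a1, b1 and b2; a simplified sketch of the selection (the full ITU-T procedure also generates the codes, which is omitted here):

```python
def mmr_mode(a1, b1, b2):
    # a1: next changing element on the coding line
    # b1, b2: first and second changing elements on the reference line
    if b2 < a1:
        return "pass"          # reference run lies wholly to the left of a1
    if abs(a1 - b1) <= 3:
        return "vertical"      # a1 within +/-3 pels of b1
    return "horizontal"        # otherwise: code two run-lengths explicitly

print(mmr_mode(a1=10, b1=4, b2=6))   # pass
print(mmr_mode(a1=5, b1=4, b2=12))   # vertical
print(mmr_mode(a1=14, b1=4, b2=20))  # horizontal
```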
3.4.5 JPEG
The lossy sequential mode, or baseline mode, is intended for the compression of both
monochromatic and color digitized pictures.
There are five main stages associated with this mode:
i. Image/block preparation
ii. Forward DCT
iii. Quantization
iv. Entropy encoding
v. Frame building.
These five stages are shown in Figure 3.10.
Figure 3.11 Image/block preparation: (a) image preparation, (b) block preparation
The transformed matrix is computed using the forward DCT expression:
F[i,j] = (1/4) C(i) C(j) Σx Σy P[x,y] cos((2x+1)iπ/16) cos((2y+1)jπ/16)
where C(z) = 1/√2 for z = 0, C(z) = 1 otherwise, and the summations run over x, y = 0 to 7.
A number of points follow from this expression:
1. All 64 values in the input matrix P[x,y] contribute to each entry in the transformed
matrix F[i,j].
2. For i = j = 0, the two cosine terms (the horizontal and vertical frequency
components) are both 1, so F[0,0] is simply a function of the mean of all 64 values
in the matrix; it is known as the DC coefficient.
3. The values in all the other locations of the transformed matrix have a frequency
coefficient associated with them, either horizontal (i = 1-7 for j = 0), vertical (i = 0 for
j = 1-7) or both (i = 1-7 for j = 1-7); they are known as AC coefficients.
4. For j = 0, only horizontal frequency coefficients are present, which increase in
frequency for i = 1-7.
5. For i = 0, only vertical frequency coefficients are present, which increase in
frequency for j = 1-7.
The above points are summarized in Figure 3.12.
3.4.5.3. Quantization
The main source of information loss occurs during the quantization and entropy
encoding stages where the compression takes place.
The quantization process aims to reduce the size of the DC and AC coefficients so
that less bandwidth is required for their transmission.
The sensitivity of the eye varies with spatial frequency, which implies that the
amplitude threshold below which the eye will detect a particular spatial frequency
also varies.
Therefore, the threshold values used vary for each of the 64 DCT coefficients.
These are held in a two-dimensional matrix known as the quantization table.
The JPEG standard includes two default quantization tables: one for use with the
luminance coefficients and the other for use with the two sets of chrominance
coefficients.
The quantization tables are shown in Figure 3.13.
A number of points follow from the values shown in the tables:
1. The computation of the quantized coefficients involves rounding the quotients
to the nearest integer value.
2. The threshold values used, in general, increase in magnitude with increasing
spatial frequency.
3. The DC coefficient in the transformed matrix is the largest.
4. Many of the highest-frequency coefficients are zero.
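The quantization and dequantization steps can be sketched as follows; the uniform threshold value of 16 is illustrative only, not a JPEG default table:

```python
def quantize(F, Q):
    # Divide each DCT coefficient by its threshold and round to the
    # nearest integer; this is where information is lost.
    return [[round(F[i][j] / Q[i][j]) for j in range(8)] for i in range(8)]

def dequantize(Fq, Q):
    # The decoder can only multiply back: rounded-away detail is gone.
    return [[Fq[i][j] * Q[i][j] for j in range(8)] for i in range(8)]

Q = [[16] * 8 for _ in range(8)]
F = [[0.0] * 8 for _ in range(8)]
F[0][0], F[0][1] = 1024.0, 7.0   # large DC, small AC coefficient
Fq = quantize(F, Q)
# Fq[0][0] == 64, but Fq[0][1] == 0: the small coefficient is discarded
```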
4. Huffman encoding: Significant levels of compression can be obtained by
replacing long strings of binary digits with a string of much shorter codewords, the
length of each codeword being a function of its relative frequency of occurrence.
Figure 3.14 Vectoring using zigzag scan: (a) principle; (b) vector representation.
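The zigzag scan order of Figure 3.14 can be generated compactly by sorting the 64 index pairs by anti-diagonal, alternating the direction on each diagonal (a sketch):

```python
def zigzag(block):
    # Diagonal d = i + j; even diagonals run bottom-left to top-right,
    # odd diagonals top-right to bottom-left, starting at the DC term.
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[i][j] for i, j in order]

block = [[8 * i + j for j in range(8)] for i in range(8)]
print(zigzag(block)[:6])  # [0, 1, 8, 16, 9, 2]
```

This ordering groups the low-frequency coefficients at the start of the vector and the (mostly zero) high-frequency coefficients at the end, which suits run-length encoding of the trailing zeros.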
v. The time to carry out the decoding functions is similar to that used to perform the
encoding.
On receipt of the encoded bit stream the frame decoder first identifies the control
information and tables within the various headers.
It then loads the contents of each table into the related table and passes the control
information to the image builder.
It then starts to pass the compressed bit stream to the Huffman decoder, which carries
out the corresponding decompression operation using either the default or the
preloaded table of codewords.
The two decompressed streams containing the DC and AC coefficients of each block
are then passed to the differential and run-length decoders respectively.
The resulting matrix of values is then dequantized using either the default or the
preloaded values in the quantization table.
Each resulting block of 8 x 8 spatial frequency coefficients is passed in turn to the
inverse DCT, which transforms the coefficients back into their spatial form using the
inverse DCT expression:
P[x,y] = (1/4) Σi Σj C(i) C(j) F[i,j] cos((2x+1)iπ/16) cos((2y+1)jπ/16)
with the summations running over i, j = 0 to 7.
Figure 3.16 JPEG decoder schematic
Finally, as with the GIF, it is also possible to encode and rebuild the image in a
progressive way by first sending an outline of the image and then progressively
adding more detail to it. This can be achieved in the following ways:
i. Progressive mode: In this mode, first the DC and low- frequency coefficients
of each block are sent and then the higher-frequency coefficients.
ii. Hierarchical mode: In this mode, the total image is first sent at a low
resolution and then at progressively higher resolutions.
Question bank
1. Explain the meaning of the following terms, relating to compression:
a. Lossless and lossy compression
b. Source and entropy encoding (Dec 2010 4M)
2. With the aid of a diagram, identify the five main stages of operation of JPEG and
explain each stage briefly. (Dec 2010 10M)
3. Code the given string “ABACADABACADABACABAB” using Huffman coding.
Derive Huffman code tree. Determine the savings in transmission bandwidth over
normal ASCII and binary coding (Dec 2010 6M, Jun 2012 6M)
4. Explain with a neat diagram the JPEG encoder. (Dec 2015 10M, Jun 2011 10M,
Jan 2016 10M)
5. Encode the string "went." comprising characters with probabilities e = 0.3, n = 0.3,
t = 0.2, w = 0.1, . = 0.1 using arithmetic coding. (Dec 2015 10M)
6. Explain clearly image encoding and decoding methods with a diagram. (Jun 2015
14M)
7. Explain the Huffman coding procedure for encoding any given data. (Jun 2015 6M)
8. With the help of a diagram, identify the main stages of operation of JPEG and explain
each stage in detail (encoder and decoder). (Jun 2012 14M, Jun 2013 14M)
9. Encode the string "went." comprising characters with probabilities e = 0.3, n = 0.3,
t = 0.2, w = 0.1, . = 0.1 using arithmetic coding. (Jun 2011 10M, Jan 2016 10M)
10. Code the given string "ABACADABACADABACABAB" using Huffman coding.
Derive the Huffman code tree. (Jun 2013 6M)
11. With the help of a diagram, identify the five main stages associated with the baseline
mode of operation of the JPEG encoder and give a brief description of the role of
image/block preparation. (Jun 2010 6M)
12. A series of messages is to be transmitted between computers over a PSTN. The
messages comprise the characters A through H. The probability of each character is as
follows:
A and B = 0.25; C and D = 0.14; E, F, G and H = 0.055.
a. Use Shannon's formula to derive the minimum average number of bits per
character.
b. Use Huffman coding to derive the codeword set and prove that this is the
minimum set by constructing the Huffman code tree. (Jun 2010 14M)
13. Briefly discuss the JPEG encoder and decoder. (Jan 2017 14M)
14. A series of messages is to be transmitted between computers over a PSTN. The
messages comprise the characters A through H. The probability of each character is
as follows:
A and B = 0.25; C and D = 0.14; E, F, G and H = 0.055. Derive the codeword set using
Huffman coding. (Jan 2017 6M)
15. What are the pass, vertical and horizontal modes in run-length possibilities?
Explain the same with a flow chart. (7M)
16. Describe the forward DCT and quantization blocks of the JPEG standard. (6M)