MODULE 3 Part1


MODULE 3

SYLLABUS
Introduction, compression principles, text compression, image compression.

3.1 Introduction
 Compression is generally applied in multimedia communication to reduce the volume
of information to be transmitted or to reduce the bandwidth that is required for its
transmission.
 Compression is applied to reduce the volume of information to be transmitted (text,
fax and images) or to reduce the bandwidth that is required for its transmission (speech,
audio and video).
3.2 Compression principles
 Compression algorithms are based on the following compression principles:

1. Source encoders and destination decoders.


2. Lossless and lossy compression.
3. Entropy encoding.
4. Source encoding
3.2.1 Source encoders and destination decoders
 Compression algorithm is applied to the source information relating to a particular multimedia
application prior to transmission.
 A matching decompression algorithm must be applied to reproduce the original source
information or a nearly exact copy of it.
 Source encoder is used for the application of the compression algorithm.
 Destination decoder is used for the application of decompression algorithm.
 When two computers communicate with each other, the time required to perform the
compression and decompression is not always critical.
 Both algorithms are implemented in software within the two computers.
 The general scheme is shown in part (a) of figure 3.1. An example application that uses this
approach is compression of text and image.
 In other applications, the time required to perform the Compression and decompression
algorithm in software is not acceptable and instead the two algorithms must be performed by
special processors in separate units as shown in part (b) of figure 3.1.
 An example application which uses this approach is compression of speech, audio and video.

3.2.2 Lossless and Lossy Compression


 Compression algorithms can be classified as either lossless or lossy.

 In the case of lossless compression algorithm the aim is to reduce the amount of source
information to be transmitted in such a way that, when the compressed information is
decompressed, there is no loss of information.

Figure 3.1 Source encoder/ destination decoder alternatives: (a) software only; (b)
special processors/hardware.
 Lossless compression is therefore said to be reversible.
 An example application of lossless compression is for the transfer over the network of a text
file since no part of the source information is lost during either the compression or
decompression operations.
 The main aim of a lossy compression algorithm is normally not to reproduce an exact
copy of the source information after decompression but rather a version of it.
 Example applications of lossy compression are for the transfer of digitized images and audio
and video streams
3.2.3 Entropy encoding
 Entropy encoding is lossless and independent of the type of the information that is being compressed.
 Examples of entropy encoding:
1. Run-length encoding and 2. Statistical encoding.
3.2.3.1 Run-Length Encoding
 Typical applications of this type of encoding are when the source information comprises long
substrings of the same character or binary digit.
 Instead of transmitting source string in the form of independent codewords or bits, it is
transmitted in the form of different set of codewords which indicate:
1. Particular character or bit being transmitted and
2. Number of characters/bits in the substring.
 Destination knows the set of codewords being used; it simply interprets each codeword
received and outputs the appropriate number of characters or bits. For example, in an
application that involves the transmission of long strings of binary bits that comprise a limited
number of substrings, each substring can be assigned a separate codeword.
 The total bit string is then transmitted in the form of a string of codewords selected
from the codeword set.
 An example application which uses this technique is for the transmission of the binary
strings produced by the scanner in a facsimile machine.
 When scanning typed documents, in many instances the scanner produces long substrings of
either binary 0s or 1s. Instead of transmitting these directly, they are sent in the form of a
string of codewords, each indicating both the bit (0 or 1) and the number of bits in the substring.
 For example, if the output of the scanner was 000000011111111110000011, this can be represented as
0,7 1,10 0,5 1,2 ….
 Alternatively, since only the two binary digits 0 and 1 are involved, if the first substring
always comprises binary 0s then the string could be represented as 7, 10, 5, 2 …
 To send this in digital form, the individual decimal digits would be sent in their binary
form; assuming a fixed number of bits per codeword, the number of bits per codeword would be
determined by the largest possible substring.
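The run-length idea above can be sketched in a few lines; the helper name is illustrative, not from the text.

```python
# Sketch of run-length extraction for a bit string. As described above, if
# the first run is assumed to comprise binary 0s, only the lengths need to
# be sent.

def run_lengths(bits: str) -> list[int]:
    """Return the lengths of consecutive runs of identical bits."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

print(run_lengths("000000011111111110000011"))  # [7, 10, 5, 2]
```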

3.2.3.2 Statistical Encoding


 Many applications use a set of codewords to transmit the source information.
 A set of ASCII codewords is often used for the transmission of strings of characters.
 Normally, all the codewords in the set comprise a fixed number of binary bits, for example
7 bits in the case of ASCII.
 In many applications, the symbols and hence codewords that are present in the source
information do not occur with the same frequency of occurrence, that is with equal probability.
 For example, in a string of text, the character A may occur more frequently than the
character P, which occurs more frequently than the character Z.
 Statistical encoding exploits this property by using a set of variable-length codewords, with
the shortest codewords used to represent the most frequently occurring symbols.
 As with run-length encoding, the destination must know the set of codewords used by the source.
 A decoder could wrongly interpret a shorter codeword as forming the start of a longer
codeword. A codeword set that avoids this is said to possess the prefix property; an example
of an encoding scheme that has this property is the Huffman encoding algorithm.
 The minimum average number of bits that are required to transmit a particular source
stream is known as the entropy and is computed using Shannon formula:
 Entropy, H = − Σ (i = 1 to n) Pi log2 Pi

where n is the number of different symbols and Pi is the probability of occurrence of symbol i.

 The average number of bits per codeword is:

Average number of bits per codeword = Σ (i = 1 to n) Ni Pi

where Ni is the number of bits in the codeword for symbol i.
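Both formulas can be computed directly; a minimal sketch, with illustrative probabilities and codeword lengths:

```python
import math

# Shannon entropy: the minimum average number of bits per symbol.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs)

# Average number of bits per codeword: sum of Ni * Pi.
def avg_codeword_len(lengths, probs):
    return sum(n * p for n, p in zip(lengths, probs))

probs = [0.5, 0.25, 0.25]
print(entropy(probs))                      # 1.5
print(avg_codeword_len([1, 2, 2], probs))  # 1.5 -> this code reaches the entropy
```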
3.2.4 Source Encoding
 Source encoding exploits a particular property of the source information in order to produce
an alternative form of representation that is either a compressed version of the original form or
is more amenable to the application of compression.
3.2.4.1. Differential Encoding
 Differential encoding is used extensively in applications where, the amplitude of a value or
signal covers a large range but the difference in amplitude between successive values/signals is
relatively small.

 To exploit this property of the source information, a set of smaller codewords can be used,
each of which indicates only the difference in amplitude between the current value/signal being
encoded and the immediately preceding one, instead of using relatively large codewords to
represent the amplitude of each value/signal.
 For example, if the digitization of an analog signal requires 12 bits to obtain the required
dynamic range but the maximum difference in amplitude between successive samples requires only
3 bits, then by sending only the difference values a saving of 75% in transmission bandwidth
can be obtained.
 Differential encoding can be lossless or lossy, depending on the number of bits used to
encode the difference values: if the number of bits used is sufficient to cater for the
maximum difference value, the encoding is lossless.
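A minimal sketch of the scheme just described, with illustrative sample values; the round trip is lossless as long as each difference fits in the codeword used.

```python
# Differential encoding sketch: send the first sample in full, then only
# the difference from the preceding sample.

def diff_encode(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def diff_decode(encoded):
    out = [encoded[0]]
    for d in encoded[1:]:
        out.append(out[-1] + d)
    return out

samples = [1000, 1003, 1001, 1006]
enc = diff_encode(samples)          # [1000, 3, -2, 5]
assert diff_decode(enc) == samples  # lossless round trip
```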
3.2.4.2 Transform encoding
 Transform encoding involves transforming the source information from one form into
another form.
 In general there is no loss of information associated with the transformation operation and this
technique is used in a number of applications involving both images and video.
 For example, digitization of a continuous-tone monochromatic image produces a 2-D matrix of
pixel values, each of which represents the level of gray at a particular position in the image.
Some examples are shown in Figure 3.2 (a).
 The rate of change in magnitude as one traverses the matrix gives rise to a term known as
spatial frequency.
 Scanning the matrix in the horizontal and vertical directions gives rise to the terms
horizontal and vertical frequency components.
 The transformation of the 2-D matrix of pixel values into an equivalent matrix of spatial
frequency components can be carried out using a mathematical technique known as the DCT
(Discrete Cosine Transform). The basic principle behind the DCT is shown in Figure 3.2 (b).

3.3 Text compression


 The three different types of text are: unformatted, formatted and hypertext. All are
represented as strings of characters selected from a defined set.
 Any compression algorithm associated with text must be lossless since the loss of just
a single character could modify the meaning of a complete string. Hence, statistical
encoding methods are used.
 There are two types of statistical encoding methods which are used with text
i. One which uses single characters as the basis of deriving an optimum set of
codewords (Huffman and arithmetic coding algorithms).
ii. Other uses variable-length strings of characters (Lempel-Ziv (LZ) algorithm).
 There are two types of coding used for text: static coding and dynamic coding
(adaptive coding).

3.3.1 Static Huffman coding


 The coding operation involves creating an unbalanced tree with some branches
shorter than others.

 The degree of imbalance is a function of the relative frequency of occurrence of the
characters: the larger the spread, the more unbalanced is the tree. The resulting tree is
known as Huffman code tree.
 A Huffman (code) tree is a binary tree with branches assigned the values 0 or 1.
 The base of the tree is known as the root node, the point at which a branch divides is
known as a branch node, and the termination point of a branch is known as a leaf node.

Figure 3.2 Transform coding: (a) example pixel patterns; (b) DCT
transform principles
 An example of a Huffman code tree and tree derivation is shown in Figure 3.3(a) and
(b). This corresponds to the string of characters AAAABBCD.
 As each branch divides, a binary value of 0 or 1 is assigned to each new branch: a
binary 0 for the left branch and a binary 1 for the right branch.

 Huffman codewords have the unique property that a shorter codeword will never form
the start of a longer codeword. This property is known as prefix property.
 A flowchart of a suitable decoding algorithm is given in Figure 3.4 (a).
 The algorithm assumes a table of codewords is available at the receiver and this also
holds the corresponding ASCII codeword.
 The received bit stream is held in the variable BIT-STREAM and the variable
CODEWORD is used to hold the bits in each codeword while it is being constructed.
 Once a codeword is identified the corresponding ASCII codeword is written into the
variable RECEIVE_BUFFER.
 The procedure repeats until all the bits in the received string have been processed.
 An example of a decoded string corresponding to the codeword set derived is given in
Figure 3.4 (b).
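The tree construction for the string AAAABBCD can be sketched with a priority queue; tie-breaking may yield a different but equally optimal code than the one shown in Figure 3.3.

```python
import heapq
from collections import Counter

# Minimal static Huffman code construction (a sketch, not the textbook's
# exact derivation): repeatedly merge the two lowest-frequency subtrees,
# prefixing 0 onto the left subtree's codes and 1 onto the right's.

def huffman_codes(text):
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("AAAABBCD")
# The most frequent symbol gets the shortest codeword.
print({s: len(c) for s, c in codes.items()})
```

The string compresses to 4x1 + 2x2 + 1x3 + 1x3 = 14 bits, compared with 8x7 = 56 bits using 7-bit ASCII.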

Figure 3.3 Huffman code tree construction: (a) final tree with codes; (b) tree derivation
3.3.2 Dynamic Huffman coding
 The basic Huffman coding method requires both the transmitter and the receiver to
know the table of codewords relating to the data being transmitted.
 With dynamic Huffman coding, the transmitter (encoder) and receiver (decoder) build
the Huffman tree and hence the codeword table dynamically as the characters are
being transmitted/received.
 With this method, if the character to be transmitted is currently present in the tree
its codeword is determined and sent in the normal way.

 If the character is not present, that is, it is its first occurrence, the character is
transmitted in its uncompressed form.
 The encoder updates its Huffman tree either by incrementing the frequency of
occurrence of the transmitted character or by introducing the new character into the tree.
 Both transmitter and receiver start with a tree that comprises the root node and a
single empty leaf node, a leaf node with a zero frequency of occurrence assigned to its
0-branch.

Figure 3.4 Decoding of a received bit stream (a) Decoding algorithm (b) example

 For each subsequent character, the encoder first checks whether the character is
already present in the tree.
 If it is, then the encoder sends the current codeword for the character in the normal
way, the codeword being determined by its position in the tree; if it is not, the encoder
sends the codeword of the empty leaf node followed by the uncompressed codeword for the character.
 The encoder and decoder proceed to update their copy of the tree based on the last
character that has been transmitted/received.
 If it is a new character, the existing empty leaf node in the tree is replaced with a new
branch node.

 If the character is already present in the tree, then the frequency of occurrence of the
leaf node is incremented by unity.
 The savings in transmission bandwidth start only when characters begin to repeat
themselves.
 Dynamic Huffman coding is now used in a number of communications applications
that involve the transmission of text.
3.3.3 Arithmetic coding
 Arithmetic coding is a form of entropy encoding used in lossless data compression.
Normally, a string of characters such as the words "hello there" is represented using a
fixed number of bits per character, as in the ASCII code.
 When a string is converted to arithmetic encoding, frequently used characters will be
stored with fewer bits and not-so-frequently occurring characters will be stored with
more bits, resulting in fewer bits used in total.
 Arithmetic coding differs from other forms of entropy encoding, such as Huffman
coding, in that rather than separating the input into component symbols and replacing
each with a code, arithmetic coding encodes the entire message into a single number,
an arbitrary-precision fraction q where 0.0 ≤ q < 1.0.
 It represents the current information as a range, defined by two numbers.
 A recent family of entropy coders called asymmetric numeral systems allows for faster
implementations by operating directly on a single natural number representing the
current information.
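The interval-narrowing idea can be sketched as follows (the two-symbol alphabet and probabilities are illustrative; a practical coder would also handle finite precision and renormalization):

```python
# Arithmetic coding sketch: each symbol narrows the current interval in
# proportion to its probability. Any number inside the final [low, high)
# interval identifies the whole message.

def arithmetic_encode(message, probs):
    # Build cumulative sub-ranges of [0, 1) for each symbol.
    cum, start = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (start, start + p)
        start += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = cum[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return low, high

low, high = arithmetic_encode("ab", {"a": 0.5, "b": 0.5})
print(low, high)  # 0.25 0.5 -> "ab" maps into [0.25, 0.5)
```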
3.3.4 Lempel-Ziv coding
 The Lempel-Ziv (LZ) compression algorithm uses strings of characters as the basis of the coding algorithm.
 For example, for the compression of text, a table containing all the possible character strings, for
example words, that occur in the text to be transferred is held by both the encoder and the decoder.
 As each word occurs in the text, instead of sending the word as a set of individual (say ASCII)
codewords, the encoder sends only the index of the location where the word is stored in the table.
 On receipt of each index, the decoder uses this to access the corresponding word/string of
characters from the table and proceeds to reconstruct the text into its original form.
 Thus the table is used as a dictionary and LZ algorithm is known as a dictionary-based
compression algorithm.
 Most word-processing packages have a dictionary associated with them which is used for both
spell checking and for compression of text.
 As with the other static coding methods, the basic requirement with the LZ algorithm is that a
copy of the dictionary is held by both the encoder and decoder.
 It can be relatively inefficient if the text to be transmitted comprises only a small
subset of the words stored in the dictionary.
 Hence a variation of the LZ algorithm has been developed which allows the
dictionary to be built up dynamically by the encoder and decoder as the compressed
text is being transferred.
3.3.5 Lempel-Ziv-Welsh coding
 The principle of the Lempel-Ziv-Welsh (LZW) coding algorithm is for the encoder
and decoder to build the contents of the dictionary dynamically as the text is being
transferred.

 In order to describe how the dictionary is built up, assume the text to be compressed
starts with the string:
This is simple as it is…
 Initially, the dictionary held by both the encoder and decoder contains only the
individual characters from the character set being used; for example the 128
characters in the ASCII character set.
 Hence the first word in the example text is sent by the encoder using the index of each
of the four characters T,h,i and s.
 When the encoder reads the next character from the string, the first space (SP)
character, it determines that this is not an alphanumeric character and transmits it
using its index.
 When the decoder determines that the fifth character is a space character, it interprets
this as a word delimiter and proceeds to store the word This in its dictionary.
 By the time the space character following the second occurrence of the word is has been
processed, the contents of the dictionary held by both the encoder and decoder are as shown
in Figure 3.5 (a).
 Once the initial 256 locations are occupied, the encoder and decoder double the size of
their dictionary to 512 locations. This necessitates an index length of 9 bits and so, from
this point on, the encoder uses 9-bit codewords.
 This procedure is shown in diagrammatic form in Figure 3.5 (b).
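A minimal LZW encoder sketch, assuming a tiny two-character initial alphabet for readability (the text's example starts from the full 128-character ASCII set):

```python
# LZW encoder sketch: the dictionary starts with single characters and is
# extended with each new string as it is encountered, exactly mirroring
# what the decoder can rebuild on its side.

def lzw_encode(text, alphabet):
    dictionary = {ch: i for i, ch in enumerate(alphabet)}
    out, current = [], ""
    for ch in text:
        if current + ch in dictionary:
            current += ch                           # extend the current match
        else:
            out.append(dictionary[current])         # emit index of longest match
            dictionary[current + ch] = len(dictionary)  # add new dictionary entry
            current = ch
    out.append(dictionary[current])
    return out

print(lzw_encode("ababab", ["a", "b"]))  # [0, 1, 2, 2]
```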

3.4 Image compression


Images can be of two types: computer-generated (graphical) images and digitized images (both
documents and pictures).

3.4.1 Graphics Interchange Format


 GIF is extensively used on the Internet for the representation and compression of
graphical images.
 Color images comprising 24-bit pixels (8 bits each for R, G, and B) are supported.
 GIF reduces the number of possible colors to 256 by selecting those that match most
closely the colors used in the original image. The resulting table of colors therefore
consists of 256 entries, each of which contains a 24-bit color value.
 Instead of sending the 24-bit value, an 8-bit index to the table entry that contains the
closest matching color is sent to the decoder, so a compression ratio of 3:1 is obtained.
 The table of colors is called a global color table if it relates to the whole image, and a
local color table if it relates to only a portion of the image.
 The contents of the table are sent across the network together with the compressed image
data and other information, such as the aspect ratio and screen size, in a standardized
format as shown in Figure 3.6 (a).
 In Figure 3.6 (b), the LZW coding algorithm can be used to obtain further levels of compression.
 GIF allows an image to be stored and subsequently transferred over the network in an
interlaced mode.
 The compression data is divided into four groups as shown in Figure 3.7, the first contains 1/8
of the total compressed image data, the second 1/8, the third a further 1/4, and the last the
remaining 1/2.
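The four-pass row grouping just described can be sketched as follows (rows are 0-indexed; the helper name is illustrative):

```python
# Which image rows belong to each of the four GIF interlace passes.

def gif_interlace_passes(height):
    passes = [range(0, height, 8),  # pass 1: every 8th row from row 0 (1/8)
              range(4, height, 8),  # pass 2: every 8th row from row 4 (1/8)
              range(2, height, 4),  # pass 3: every 4th row from row 2 (1/4)
              range(1, height, 2)]  # pass 4: every odd row (1/2)
    return [list(r) for r in passes]

for n, rows in enumerate(gif_interlace_passes(16), 1):
    print(f"pass {n}: {rows}")
```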
Figure 3.5 LZW compression algorithm: (a) basic operation; (b) dynamically extending the
number of entries in the dictionary

Figure 3.6 GIF compression principles; (a) basic operational mode; (b) dynamic
mode using LZW coding

3.4.2 Tagged Image file Format (TIFF)


 TIFF supports pixel resolutions of up to 48 bits (16 bits each for R, G, and B) and is
intended for the transfer of both images and digitized documents.
 Image data can be stored and transferred over the network in a number of different formats;
the particular format being used is indicated by a code number, as follows:
 1. Uncompressed format (code number 1)
 2. Digitized documents (code number 2, 3 and 4)
 3. LZW compressed (code number 5)
 The digitized document formats use the same compression algorithms that are used in facsimile machines.
 The LZW compression algorithm that is used is the same as that used with GIF.

 It starts with the basic color table containing 256 colors and the table can be extended to
contain up to 4096 entries containing common strings of pixels in the image being transferred.
 Again a Standard format is used for the transfer of both the color table and the compressed
image data.

Figure 3.7 GIF interlaced format

3.4.3 Digitized documents


 In most documents many scanned lines consist only of long strings of white pels, while
others comprise a mix of long strings of white and long strings of black pels.
 Since facsimile machines are normally used with public carrier networks, the ITU-T has
produced standards relating to them.
 These are: 1. T2 (Group 1), 2. T3 (Group 2), 3. T4 (Group 3), 4. T6 (Group 4). T2 and T3
are earlier standards that are now rarely used; both operate in an analog mode.
 As part of the Standardization process, extensive analyses of typical scanned document pages
were made.
 Tables of codewords were produced based on the relative frequency of occurrence of the
number of contiguous white and black pels found in a scanned line.
 The resulting codewords are fixed and grouped into two separate tables: a termination-codes
table and a make-up codes table. The codewords in each table are shown in Figure 3.8.

 A technique known as overscanning is used, which means that all lines start with a minimum
of one white pel. To enable the receiver to regain synchronization, each scanned line is
terminated with a known end-of-line (EOL) code.

(a) Termination codes:

White run-length   Codeword       Black run-length   Codeword
0                  00110101       0                  0000110111
1                  000111         1                  010
...                ...            ...                ...
62                 00110011       62                 000001100110
63                 00110100       63                 000001100111

(b) Make-up codes:

White run-length   Codeword       Black run-length   Codeword
64                 11011          64                 0000001111
128                10010          128                000011001000
...                ...            ...                ...
2560               000000011111   2560               000000011111
EOL                000000000001   EOL                000000000001
Figure 3.8 ITU-T group 3 and 4 facsimile conversion codes: (a) termination-codes, (b) make-up codes.

 Modified READ also known as 2D or two dimensional coding identifies black and white run-
lengths by comparing adjacent scan lines. READ stands for relative element address designate.
 With MMR coding the run-lengths associated with a line are identified by comparing the line
contents, known as the coding line (CL), relative to the immediately preceding line, known as
the reference line (RL).
 The run-length associated with a coding line is identified as one of three possibilities
or modes relative to the reference line, as shown in Figure 3.9.
 The three possibilities are :

i. Pass mode: This is the case when the run-length in the reference line (b1b2) is to
the left of the next run-length in the coding line (a1a2), that is, b2 is to the left
of a1.
ii. Vertical mode: This is the case when the run-length in the reference line (b1b2)
overlaps the next run-length in the coding line (a1a2) by a maximum of plus or
minus 3 pels.
iii. Horizontal mode: This is the case when the run-length in the reference line (b1b2)
overlaps the run-length (a1a2) by more than plus or minus 3 pels.

Figure 3.9 Some example run-length possibilities: (a) pass mode; (b) vertical mode;
(c) horizontal mode

3.4.4 Digitized pictures


 For the digitization of both continuous-tone monochromatic pictures and color pictures,
the amount of computer memory required to store and display these pictures on a number of
popular types of display was studied.
 The amount of memory needed ranged from approximately 307 kbytes to 2.4 Mbytes.
 In order to reduce the time taken to transmit digitized pictures, compression is normally
applied to the 2-D array of pixel values that represents the digitized picture before it is
transmitted over the network.
 The most widely adopted standard relating to the compression of digitized pictures was
developed by an international standards body known as JPEG (Joint Photographic Experts
Group); it also forms the basis for most video compression algorithms.

3.4.5 JPEG
 The lossy sequential mode, or baseline mode, is intended for the compression of both
monochromatic and color digitized pictures.
 There are five main stages associated with this mode:
i. Image/block preparation
ii. Forward DCT

iii. Quantization
iv. Entropy encoding
v. Frame building. These are shown in Figure 3.10.

Figure 3.10 JPEG encoder schematic

3.4.5.1 Image/block preparation


 In the case of continuous tone monochromatic image, just a single 2D matrix is
required.
 For a color image if a CLUT is used just a single matrix of values is required.
 If the image is represented in an R, G, B format three matrices are required.
 For color images, the alternative form of representation used for video signals,
known as Y, Cb, Cr, can also be used.
 The four alternative forms of representation are shown in Figure 3.11 (a).
 Block preparation is necessary since computing the transformed value for every position
across the whole matrix in a single step would take too long; each matrix is therefore
first divided into a set of smaller 8x8 submatrices, each known as a block.
 These are then fed sequentially to the DCT which transforms each block separately
as shown in figure 3.11 part (b).

3.4.5.2 Forward DCT


 Each pixel value is quantized using 8 bits, which produces values in the range 0 to
255 for the intensity/luminance values (R, G, B or Y) and values in the range -128 to
+127 for the two chrominance signals Cb and Cr.
 The DCT of each 8x8 block of values is computed using the expression:

F[i,j] = (1/4) C(i) C(j) Σ (x = 0 to 7) Σ (y = 0 to 7) P[x,y] cos[(2x+1)iπ/16] cos[(2y+1)jπ/16]

where C(z) = 1/√2 for z = 0 and C(z) = 1 otherwise.
15
Figure 3.11 Image /block preparation: (a) image preparation, (b) block preparation
A number of points can be noted from the above expression:

1. All 64 values in the input matrix P[x,y] contribute to each entry in the transformed
matrix F[i,j].

2. For i = j = 0, the two cosine terms are both 1 (since cos(0) = 1), and hence F[0,0] is
simply a function of the summation of all 64 values in the input matrix. Essentially, it
is the mean of all 64 values in the matrix and is known as the DC coefficient.
3. The values in all the other locations of the transformed matrix have a frequency
coefficient associated with them, either horizontal (i = 1-7 for j = 0), vertical
(j = 1-7 for i = 0) or both (i = 1-7 for j = 1-7); they are known as AC coefficients.
4. For j = 0, only horizontal frequency coefficients are present, which increase in
frequency for i = 1-7.
5. For i = 0, only vertical frequency coefficients are present, which increase in
frequency for j = 1-7.
The above points are summarized in Figure 3.12.

Figure 3.12: DCT computation features.
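The DC-coefficient behavior noted above can be checked with a direct, unoptimized implementation of the standard 8x8 forward DCT (a sketch; real codecs use fast factorized algorithms instead of this quadruple loop):

```python
import math

# Direct computation of the 8x8 forward DCT:
# F[i][j] = 1/4 * C(i) * C(j) * sum over x,y of
#           P[x][y] * cos((2x+1)*i*pi/16) * cos((2y+1)*j*pi/16)

def forward_dct(block):
    def c(z):
        return 1 / math.sqrt(2) if z == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for i in range(8):
        for j in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * i * math.pi / 16)
                    * math.cos((2 * y + 1) * j * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[i][j] = 0.25 * c(i) * c(j) * s
    return out

# A uniform block has no spatial frequency content: only the DC
# coefficient F[0][0] (= 8 x the constant value) is non-zero.
flat = [[10] * 8 for _ in range(8)]
F = forward_dct(flat)
print(round(F[0][0]))  # 80
```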

3.4.5.3. Quantization
 The main source of information loss occurs during the quantization stage; the
quantization and entropy encoding stages are where the compression takes place.
 The quantization process aims to reduce the size of the DC and AC coefficients so
that less bandwidth is required for their transmission.
 The sensitivity of the eye varies with spatial frequency, which implies that the
amplitude threshold below which the eye will detect a particular spatial frequency
also varies.
 Therefore, the threshold values used vary for each of the 64 DCT coefficients.
These are held in a two-dimensional matrix known as the quantization table.
 The JPEG standard includes two default quantization tables: one for use with
the luminance coefficients and the other for use with the two sets of chrominance
coefficients.
 The quantization table is shown in Figure 3.13.
 A number of points can be noted from the values shown in the tables:

1. The computation of the quantized coefficients involves rounding the quotients
to the nearest integer value.
2. The threshold values used, in general, increase in magnitude with increasing
spatial frequency.
3. The DC coefficient in the transformed matrix is largest.
4. Many of the highest-frequency coefficients are zero.
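Point 1 can be sketched directly; the 2x2 coefficients and thresholds below are illustrative, not JPEG default table values:

```python
# Quantization sketch: each DCT coefficient is divided by its threshold
# from the quantization table and the quotient is rounded to the nearest
# integer, so small high-frequency coefficients become zero.

def quantize(coeffs, table):
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, table)]

coeffs = [[120, 60], [35, 10]]
table = [[10, 15], [15, 30]]
print(quantize(coeffs, table))  # [[12, 4], [2, 0]]
```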

Figure 3.13 Example computation of a set of quantized DCT coefficients

3.4.5.4. Entropy Encoding


 The entropy encoding stage comprises four steps: vectoring, differential encoding,
run-length encoding, and Huffman encoding.
 The role of each step is separately discussed.
1. Vectoring: The output of the quantization stage is a 2-D matrix of values, which
must first be represented as a 1-D vector before any entropy encoding can be
applied. To exploit the presence of the large number of zeros in the quantized
matrix, a zigzag scan of the matrix is used, as shown in Figure 3.14.
2. Differential Encoding: The first element in each transformed block is the DC
coefficient, which is a measure of the average color associated with the
corresponding 8x8 block of pixel values; since this varies only slowly from block
to block, only the difference between successive DC coefficients is encoded.
3. Run-Length Encoding: The remaining 63 values in the vector are the AC
coefficients. Because of the zigzag scan, the vector contains long strings of
zeros; to exploit this feature, the AC coefficients are encoded in the form
of a string of pairs of values. Hence the 63 values in the vector are encoded as:

(0,6)(0,7) (0,3) (0,3) (0,3)(0,2) (0,2) (0,2) (0,2) (0,0)

4. Huffman Encoding: Significant levels of compression can be obtained by
replacing long strings of binary digits with a string of much shorter codewords, the
length of each codeword being a function of its relative frequency of occurrence.
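Steps 1 and 3 above can be sketched together: a zigzag scan followed by (zeros-skipped, value) pairs for the AC coefficients. This is a sketch only: the 3x3 matrix is illustrative (JPEG uses 8x8), and JPEG's actual run-length symbols also carry size categories.

```python
# Vectoring via zigzag scan: walk the anti-diagonals, alternating direction,
# so low-frequency coefficients come first and zeros cluster at the end.

def zigzag(matrix):
    n = len(matrix)
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [matrix[i][j] for i, j in order]

def ac_run_pairs(vector):
    """Encode the AC coefficients (vector[1:]) as (zeros-skipped, value) pairs."""
    pairs, zeros = [], 0
    for v in vector[1:]:
        if v == 0:
            zeros += 1
        else:
            pairs.append((zeros, v))
            zeros = 0
    pairs.append((0, 0))  # end-of-block marker
    return pairs

vec = zigzag([[6, 7, 0], [3, 0, 0], [2, 0, 0]])
print(vec)                # [6, 7, 3, 2, 0, 0, 0, 0, 0]
print(ac_run_pairs(vec))  # [(0, 7), (0, 3), (0, 2), (0, 0)]
```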

Figure 3.14 Vectoring using zigzag scan: (a) principle; (b) vector representation.

3.4.5.5. Frame building


 The compressed version of a picture is stored in the memory of a computer ready
either for integrating with other media, if necessary, or for accessing from a remote
computer.
 The JPEG standard, therefore, also includes a definition of the structure of the total bit
stream relating to a particular image/picture. This is known as a frame and its outline
structure is shown in Figure 3.15.
 The role of the frame builder is to encapsulate all the information relating to an
encoded image/ picture in this format.
 At the top level, the complete frame, plus a header, is encapsulated between a start-of-
frame and an end-of-frame delimiter, which allows the receiver to determine the start and
end of all the information relating to a complete image/picture.
 The frame header contains a number of fields that
include:
i. The overall width and height of the image in pixels.
ii. The number and type of components that are used to represent the
image (CLUT, R/G/B, Y/Cb/Cr).
iii. The digitization format used (4:2:2, 4:2:0 etc).
 At the second level, a frame consists of a number of components each of which is
known as a scan. These are preceded by a header which contains fields that include:
i. The identity of the components (R/G/B etc)
ii. The number of bits used to digitize each component;
iii. The quantization table of values that has been used to encode each
component.

3.4.5.6 JPEG decoding


 As shown in Figure 3.16, a JPEG decoder is made up of a number of stages which are simply
the corresponding decoder sections of those used in the encoder.
 The time taken to carry out the decoding functions is similar to that taken to perform
the encoding.

Figure 3.15 JPEG encoder output bit stream format

 On receipt of the encoded bit stream the frame decoder first identifies the control
information and tables within the various headers.
 It then loads the contents of each table into the related table and passes the control
information to the image builder.
 It then starts to pass the compressed bit stream to the Huffman decoder, which carries
out the corresponding decompression operation using either the default or the
preloaded table of codewords.
 The two decompressed streams containing the DC and AC coefficients of each block
are then passed to the differential and run-length decoders respectively.
 The resulting matrix of values is then dequantized using either the default or the
preloaded values in the quantization table.
 Each resulting block of 8x8 spatial frequency coefficients is passed in turn to the
inverse DCT, which transforms them back into their spatial form using the expression:

P[x,y] = (1/4) Σ (i = 0 to 7) Σ (j = 0 to 7) C(i) C(j) F[i,j] cos[(2x+1)iπ/16] cos[(2y+1)jπ/16]

where C(z) = 1/√2 for z = 0 and C(z) = 1 otherwise.

Figure 3.16 JPEG decoder schematic

 Finally, as with the GIF, it is also possible to encode and rebuild the image in a
progressive way by first sending an outline of the image and then progressively
adding more detail to it. This can be achieved in the following ways:
i. Progressive mode: In this mode, first the DC and low- frequency coefficients
of each block are sent and then the higher-frequency coefficients.
ii. Hierarchical mode: In this mode, the total image is first sent at a low
resolution and then at progressively higher resolutions.

Question bank
1. Explain the meaning of the following terms, relating to compression:
a. Lossless and lossy compression
b. Source and entropy encoding (Dec 2010 4M)
2. With the aid of a diagram, identify the five main stages of operation of JPEG and
explain each stage briefly. (Dec 2010 10M)
3. Code the given string “ABACADABACADABACABAB” using Huffman coding.
Derive Huffman code tree. Determine the savings in transmission bandwidth over
normal ASCII and binary coding (Dec 2010 6M, Jun 2012 6M)
4. Explain with a neat diagram JPEG encoder (Dec 2015 10M, jun 2011 10M,jan 2016
10M)
5. Encode the string "went." comprising characters with probabilities e = 0.3, n = 0.3,
t = 0.2, w = 0.1, . = 0.1 using arithmetic coding. (Dec 2015 10M)
6. Explain clearly about image encoding and decoding methods with diagram (jun 2015
14M)
7. Explain Huffman coding procedure for encoding any given data. (jun 2015 6M)

8. With the help of a diagram, identify the main stages of operation of JPEG and explain
each stage in detail (encoder and decoder). (Jun 2012 14M, Jun 2013 14M)
9. Encode the string "went." comprising characters with probabilities e = 0.3, n = 0.3,
t = 0.2, w = 0.1, . = 0.1 using arithmetic coding. (Jun 2011 10M, Jan 2016 10M)
10. Code the given string “ABACADABACADABACABAB” using Huffman coding.
Derive the Huffman code tree. (Jun 2013 6M)
11. With the help of a diagram, identify the five main stages associated with the baseline
mode of operation of the JPEG encoder and give a brief description of the role of
image/block preparation. (Jun 2010 6M)
12. A series of messages is to be transmitted between computers over a PSTN. The
messages comprise the characters A through H. The probability of each character is as
follows:
A and B = 0.25, C and D = 0.14, E, F, G and H = 0.055.
a. Use Shannon’s formula to derive the minimum average number of bits/
character.
b. Use Huffman coding to derive the codeword and prove that this is the
minimum set by constructing Huffman code tree. (jun 2010 14M)
13. Briefly discuss JPEG encoder and decoder. (jan 2017 14M)
14. A series of messages is to be transmitted between computers over a PSTN. The
messages comprise the characters A through H. The probability of each character is
as follows:
A and B = 0.25, C and D = 0.14, E, F, G and H = 0.055. Derive the codeword set using
Huffman coding. (Jan 2017 6M)
15. What are the pass, vertical and horizontal modes among the run-length possibilities?
Explain them with a flow chart. (7M)
16. Describe the forward DCT and quantization blocks of the JPEG standard. (6M)

