
ISSN 2348–2370
Vol. 06, Issue 10, November 2014, Pages: 1169-1173
www.ijatir.org

Designing an Image Compression for JPEG Format by Verilog HDL


B. MALLESH KUMAR[1], D. V. RAJESHWAR RAJU[2]
[1] PG Scholar, Dept. of ECE, Prasad Engineering College, India.
[2] Assoc. Prof., Dept. of ECE, Prasad Engineering College, India.

Abstract: Data compression is the reduction or elimination of redundancy in data representation in order to achieve savings in storage and communication costs. Data compression techniques can be broadly classified into two categories: lossless and lossy schemes. In lossless methods the exact original data can be recovered, while in lossy schemes only a close approximation of the original data can be obtained. Lossless methods are also called entropy coding schemes, since there is no loss of information content during the process of compression. Digital images require an enormous amount of space for storage. This work designs a VLSI architecture for the JPEG Baseline Image Compression Standard. The architecture exploits the principles of pipelining and parallelism to the maximum extent in order to obtain high speed; the architectures for the discrete cosine transform and the entropy encoder are based on efficient algorithms designed for high-speed VLSI. For example, a color image with a resolution of 1024 x 1024 picture elements (pixels) with 24 bits per pixel would require 3.15 Mbytes in uncompressed form. Very high-speed design of efficient compression techniques will significantly help in meeting that challenge. In recent years, a working group known as the Joint Photographic Experts Group (JPEG) has defined an international standard for coding and compression of continuous-tone still images, commonly referred to as the JPEG standard. The primary aim of the JPEG standard is to propose an image compression algorithm that would be application independent and aid VLSI implementation of data compression. In this project, we propose an efficient single-chip VLSI architecture for the JPEG baseline compression standard algorithm. The architecture fully exploits the principles of pipelining and parallelism to achieve high speed. The JPEG baseline algorithm consists mainly of two parts: (i) Discrete Cosine Transform (DCT) computation and (ii) entropy encoding. The architecture for entropy encoding is based on a hardware algorithm designed to yield maximum clock speed.

Keywords: VLSI Architecture, Discrete Cosine Transform (DCT) Computation, Entropy Encoding, Verilog HDL.

I. INTRODUCTION
Today we are talking about digital networks, digital representation of images, movies, video, TV, voice, digital libraries, all because digital representation of the signal is more robust than the analog counterpart for processing, manipulation, storage, recovery, and transmission over long distances, even across the globe through communication networks. In recent years, there have been significant advancements in the processing of still image, video, graphics, speech, and audio signals through digital computers in order to accomplish different application challenges. As a result, multimedia information comprising image, video, audio, speech, text, and other data types has the potential to become just another data type. Still image and video data comprise a significant portion of the multimedia data, and they occupy the lion's share of the communication bandwidth for multimedia communication. As a result, development of efficient image compression techniques continues to be an important challenge to us, both in academia and in industry.

Despite the many advantages of digital representation of signals compared to the analog counterpart, they need a very large number of bits for storage and transmission. For example, a high-quality audio signal requires approximately 1.5 megabits per second for digital representation and storage. A television-quality low-resolution color video of 30 frames per second, with each frame containing 640 x 480 pixels (24 bits per color pixel), needs more than 210 megabits per second of storage. As a result, a digitized one-hour color movie would require approximately 95 gigabytes of storage. The storage requirement for upcoming high-definition television (HDTV) of resolution 1280 x 720 at 60 frames per second is far greater: a digitized one-hour color movie of HDTV-quality video will require approximately 560 gigabytes of storage. A digitized 14 x 17 square inch radiograph scanned at 70 µm occupies nearly 45 megabytes of storage. Transmission of these digital signals through limited-bandwidth communication channels is an even greater challenge, and sometimes impossible in raw form. Although the cost of storage has decreased drastically over the past decade due to significant advancement in microelectronics and storage technology, the requirement of data storage and data processing applications is growing explosively to outpace this achievement.

A. Classification of Compression Algorithms:
In an abstract sense, we can describe "data compression" as a method that takes an input data D and generates a shorter representation of the data c(D) with a fewer number of bits compared to that of D.
The reverse process is called "decompression"; it takes the compressed data c(D) and generates or reconstructs the data D', as shown in Figure 1. Sometimes the compression (coding) and decompression (decoding) systems together are called a "CODEC," as shown by the broken box in Figure 1.

Fig.1 CODEC.
The reconstructed data D' could be identical to the original data D, or it could be an approximation of the original data D, depending on the reconstruction requirements. If the reconstructed data D' is an exact replica of the original data D, we call the algorithm applied to compress D and decompress c(D) "lossless". On the other hand, we say the algorithms are "lossy" when D' is not an exact replica of D. Hence, as far as the reversibility of the original data is concerned, data compression algorithms can be broadly classified in two categories: "lossless" and "lossy". Usually we need to apply lossless data compression techniques on text data or scientific data. For example, we cannot afford to compress the electronic copy of this textbook using a lossy compression technique. It is expected that we shall reconstruct the same text after the decompression process.

Fig.2 Classification of Compression Techniques.

A small error in the reconstructed text can have a completely different meaning. We do not expect the sentence "You should not delete this file" in a text to change to "You should now delete this file" as a result of an error introduced by a lossy compression or decompression algorithm. Similarly, if we compress a huge ASCII file containing a program written in the C language, we expect to get back the same C code after decompression, for obvious reasons. The lossy compression techniques are usually applicable to data where high fidelity of the reconstructed data is not required for perception by the human perceptual system. Examples of such types of data are image, video, graphics, speech, audio, etc. Some image compression applications may nevertheless require the compression scheme to be lossless (i.e., each pixel of the decompressed image should be exactly identical to the original one). Medical imaging is an example of such an application, where compressing digital radiographs with a lossy scheme could be a disaster if it makes any compromise with the diagnostic accuracy.
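To make the lossless property concrete, here is a minimal C sketch (our illustration, not part of the original design): a toy run-length coder whose decompressed output is checked byte-for-byte against the input D.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Minimal run-length coder: each run is stored as a (count, byte) pair.
     * Returns the number of bytes written to out. */
    static size_t rle_compress(const unsigned char *in, size_t n, unsigned char *out) {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            size_t run = 1;
            while (i + run < n && run < 255 && in[i + run] == in[i]) run++;
            out[o++] = (unsigned char)run;   /* run length */
            out[o++] = in[i];                /* run value  */
            i += run;
        }
        return o;
    }

    static size_t rle_decompress(const unsigned char *in, size_t n, unsigned char *out) {
        size_t o = 0;
        for (size_t i = 0; i + 1 < n; i += 2)
            for (unsigned char k = 0; k < in[i]; k++) out[o++] = in[i + 1];
        return o;
    }

    int main(void) {
        const unsigned char data[] = "aaaaabbbcc";
        unsigned char packed[64], unpacked[64];
        size_t np = rle_compress(data, sizeof data, packed);
        size_t nu = rle_decompress(packed, np, unpacked);
        /* Lossless: the reconstructed data D' is an exact replica of D. */
        assert(nu == sizeof data && memcmp(data, unpacked, nu) == 0);
        printf("%zu bytes -> %zu bytes, round trip exact\n", sizeof data, np);
        return 0;
    }

Running the same byte-for-byte check after a lossy scheme would fail, which is exactly why such schemes are ruled out for text and program sources.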
II. JPEG COMPRESSION
After each input 8x8 block of pixels is transformed to frequency space using the DCT, the resulting block contains a single DC component and 63 AC components. The DC component is predictively encoded as the difference between the current DC value and the previous one. This mode only uses Huffman coding models, not the arithmetic coding models that are used in JPEG extensions. This mode is the most basic, but it still enjoys wide acceptance for its high compression ratios, which fit many general applications very well.
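The DC prediction can be sketched in C as follows (a helper written for this discussion; the paper's own implementation is a Verilog architecture, and C is used here only to illustrate the arithmetic):

    #include <stdio.h>

    /* DPCM for DC terms: only the difference between the current block's
     * quantized DC value and the previous block's is entropy coded. */
    static int dc_encode(int dc_current, int *dc_previous) {
        int diff = dc_current - *dc_previous;
        *dc_previous = dc_current;  /* predictor for the next block */
        return diff;
    }

    int main(void) {
        int dc_values[] = {26, 28, 27, 27};  /* illustrative quantized DC terms */
        int prev = 0;                        /* predictor starts at 0 */
        for (int i = 0; i < 4; i++)
            printf("%d ", dc_encode(dc_values[i], &prev));
        printf("\n");  /* prints: 26 2 -1 0 */
        return 0;
    }

Because neighbouring blocks tend to have similar averages, the differences cluster near zero and code into short Huffman words.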
i. Lossless Mode
Quite simply, this mode of JPEG experiences no loss when comparing the source image to the reproduced image. This method does not use the discrete cosine transform; rather, it uses predictive, differential coding. As it is lossless, it also rules out the use of quantization. This method does not achieve high compression ratios, but some applications do require extremely precise image reproduction.
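A one-row sketch of such predictive, differential coding (the choice of the simplest left-neighbour predictor here is our assumption, made for illustration):

    #include <stdio.h>

    /* Each pixel is predicted from its left neighbour; only the residual is
     * stored. With no quantization, reconstruction is bit exact. */
    static void row_residuals(const unsigned char *row, int n, int *res) {
        int pred = 0;  /* the first pixel has no left neighbour */
        for (int i = 0; i < n; i++) {
            res[i] = (int)row[i] - pred;
            pred = row[i];
        }
    }

    int main(void) {
        const unsigned char row[] = {52, 55, 61, 66, 70};
        int res[5];
        row_residuals(row, 5, res);
        for (int i = 0; i < 5; i++) printf("%d ", res[i]);  /* 52 3 6 5 4 */
        printf("\n");
        return 0;
    }

The residuals cluster around zero in smooth image regions, which is what the subsequent entropy coder exploits.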
Fig.2 Classification of Compression Techniques. losing more quality than is acceptable. Although this mode
of JPEG is not highly configurable, it still allows a
A small error in the reconstructed text can have a considerable amount of compression. Furthermore
completely different meaning. We do not expect the compression can be achieved by sub sampling chrominance
sentence “You should not delete this file” in a text to change portions of the input image, which is a useful technique
to “You should now delete this file” as a result of an error playing on the human visual system.
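For instance, averaging each 2x2 neighbourhood of a chroma plane (a 4:2:0-style reduction; the specific subsampling scheme is our assumption for illustration) quarters the chrominance data while the luminance stays at full resolution:

    #include <stdio.h>

    /* Average each 2x2 neighbourhood of a chroma plane c (width w and height h,
     * both assumed even) into out, which is (w/2) x (h/2). */
    static void subsample_chroma(const unsigned char *c, int w, int h,
                                 unsigned char *out) {
        for (int y = 0; y < h; y += 2)
            for (int x = 0; x < w; x += 2)
                out[(y / 2) * (w / 2) + (x / 2)] = (unsigned char)(
                    (c[y * w + x] + c[y * w + x + 1] +
                     c[(y + 1) * w + x] + c[(y + 1) * w + x + 1] + 2) / 4);
    }

    int main(void) {
        const unsigned char cb[16] = {100, 102, 110, 112,
                                      104, 106, 114, 116,
                                      120, 122, 130, 132,
                                      124, 126, 134, 136};
        unsigned char small[4];
        subsample_chroma(cb, 4, 4, small);
        printf("%d %d %d %d\n", small[0], small[1], small[2], small[3]);
        /* 103 113 123 133: one quarter of the chroma samples remain */
        return 0;
    }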

A. Discrete Cosine Transform (DCT):
The discrete cosine transform is the basis for the JPEG compression standard. For JPEG, it allows for efficient compression by allowing quantization on the elements to which perception is less sensitive. The DCT algorithm is completely reversible, making it useful for both lossless and lossy compression techniques. The DCT is a special case of the well-known Fourier transform. Essentially, the Fourier transform in theory can represent a given input signal with a series of sine and cosine terms. The discrete cosine transform is a special case of the Fourier transform in which the sine components are eliminated. For JPEG, a two-dimensional DCT algorithm is used, which is essentially the one-dimensional version evaluated twice. By this property there are numerous ways to efficiently implement a software- or hardware-based DCT module. The DCT is operated two-dimensionally, taking into account 8 by 8 blocks of pixels. The resulting data set is an 8 by 8 block of frequency-space components, the coefficients scaling the series cosine terms, known as basis functions. The first element, at row 0 and column 0, is known as the DC term: the average frequency value of the entire block. The other 63 terms are AC components, which represent the spatial frequencies that compose the input pixel block by scaling the cosine terms within the series.
There are two useful products of the DCT algorithm. First, it has the ability to concentrate image energy into a small number of coefficients. Second, it minimizes the interdependencies between coefficients. These two points essentially state why this form of transform is used for the standard JPEG compression technique. By compacting the energy within an image, more coefficients are left to be quantized coarsely, impacting compression positively without losing quality in the resulting image after decompression. Taking away inter-pixel relations allows quantization to be non-linear, also affecting quantization positively. The DCT has been effective in producing great pictures at low bit rates and is fairly easy to implement with fast hardware-based algorithms. An orthogonal transform such as the DCT has the good property that the inverse DCT can take its frequency coefficients back to the spatial domain at no loss; however, implementations can be lossy due to bit limitations, which is especially apparent in hardware algorithms. The DCT also wins in terms of computational complexity, as numerous studies have been completed on different techniques for evaluating the DCT.

The discrete cosine transform is actually more efficient in reconstructing a given number of samples than a Fourier transform. By using the property of orthogonality of cosine, as opposed to sine, a signal can be periodically reconstructed based on a fewer number of samples. A sine-based transform is not orthogonal, and would have to take Fourier transforms of more samples to approximate a sequence of samples as a periodic signal. In the signal we are sampling, the given image, there is actually no real periodicity. If the image is run through a Fourier transform, the sine terms can incur large changes in amplitude for the signal, due to sine not being orthogonal; the DCT avoids this by not carrying this information to represent the changes. In the case of JPEG, a two-dimensional DCT is used, which correlates the image with 64 basis functions. The DCT equation can be represented in matrix format; the DCT matrix T is given by

    T(i, j) = 1 / sqrt(8)                       for i = 0,
    T(i, j) = (1/2) cos[(2j + 1) i pi / 16]     for i = 1, ..., 7,

with i, j = 0, ..., 7. The first row (i = 0) of the matrix has all its entries equal to 1/sqrt(8), as expected from this formula. The columns of T form an orthogonal set, so T is an orthogonal matrix. When doing the inverse DCT, the orthogonality of T is important, as the inverse of T is its transpose T^t, which is easy to calculate.
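A small C check of these claims (our sketch, not the paper's Verilog design): it builds T from the formula above and confirms that T times its transpose is the identity.

    #include <math.h>
    #include <stdio.h>

    #define N 8

    /* Build the 8x8 DCT matrix T from the formula above. */
    static void build_t(double T[N][N]) {
        const double PI = acos(-1.0);
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                T[i][j] = (i == 0) ? 1.0 / sqrt(8.0)
                                   : 0.5 * cos((2 * j + 1) * i * PI / 16.0);
    }

    int main(void) {
        double T[N][N];
        build_t(T);
        /* T * T^t should be the identity matrix: that is why the inverse
         * DCT can simply use the transpose of T. */
        double maxerr = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double s = 0.0;
                for (int k = 0; k < N; k++) s += T[i][k] * T[j][k];
                double e = fabs(s - (i == j ? 1.0 : 0.0));
                if (e > maxerr) maxerr = e;
            }
        printf("max |T T^t - I| = %g\n", maxerr);  /* on the order of 1e-16 */
        return 0;
    }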
B. Procedure for doing the DCT on an 8x8 Block:
Before we begin, it should be noted that the pixel values of a black-and-white image range from 0 to 255 in steps of 1, where 0 represents pure black and 255 pure white. Thus it can be seen how a photo, illustration, etc. can be accurately represented by these 256 shades of gray. Since an image comprises hundreds or even thousands of 8x8 blocks of pixels, the following description of what happens to one 8x8 block is a microcosm of the JPEG process: what is done to one block of image pixels is done to all of them, in the order specified earlier. Now, let's start with a block of image-pixel values. This particular block was chosen from the very upper-left-hand corner of an image.

    (1) [8x8 block of original image-pixel values]

Because the DCT is designed to work on pixel values ranging from -128 to 127, the original block is leveled off by subtracting 128 from each entry. This results in the following matrix:

    (2) [the same 8x8 block after subtracting 128 from each entry]
We are now ready to perform the Discrete Cosine Transform, which is accomplished by matrix multiplication:

    D = T M T^t                                   (3)

In Equation (3), M is first multiplied on the left by the DCT matrix T from the previous section; this transforms the rows. The columns are then transformed by multiplying on the right by the transpose of the DCT matrix. The resulting block matrix consists of 64 DCT coefficients cij, where i and j range from 0 to 7. The top-left coefficient, c00, correlates to the low frequencies of the original image block. As we move away from c00 in all directions, the DCT coefficients correlate to higher and higher frequencies of the image block, where c77 corresponds to the highest frequency. It is important to note that the human eye is most sensitive to low frequencies, and the results from the quantization step will reflect this fact.
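Steps (1)-(3) can be condensed into one C sketch (ours, not the paper's Verilog pipeline): level-shift the block, then apply D = T M T^t as two matrix multiplications.

    #include <math.h>
    #include <stdio.h>

    #define N 8

    /* 2-D DCT of one 8x8 block: level-shift the pixels to -128..127,
     * then compute D = T * M * T^t. */
    static void dct_block(const unsigned char pix[N][N], double D[N][N]) {
        const double PI = acos(-1.0);
        double T[N][N], M[N][N], TM[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                T[i][j] = (i == 0) ? 1.0 / sqrt(8.0)
                                   : 0.5 * cos((2 * j + 1) * i * PI / 16.0);
                M[i][j] = (double)pix[i][j] - 128.0;  /* level shift */
            }
        for (int i = 0; i < N; i++)      /* TM = T * M: transforms the rows */
            for (int j = 0; j < N; j++) {
                TM[i][j] = 0.0;
                for (int k = 0; k < N; k++) TM[i][j] += T[i][k] * M[k][j];
            }
        for (int i = 0; i < N; i++)      /* D = TM * T^t: transforms the columns */
            for (int j = 0; j < N; j++) {
                D[i][j] = 0.0;
                for (int k = 0; k < N; k++) D[i][j] += TM[i][k] * T[j][k];
            }
    }

    int main(void) {
        unsigned char pix[N][N];
        for (int i = 0; i < N; i++)      /* a made-up sample block */
            for (int j = 0; j < N; j++)
                pix[i][j] = (unsigned char)(120 + 8 * ((i + j) % 2));
        double D[N][N];
        dct_block(pix, D);
        printf("c00 = %.2f (DC term, scaled average of the block)\n", D[0][0]);
        return 0;
    }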
C. Quantization:
Our 8x8 block of DCT coefficients is now ready for compression by quantization. A remarkable and highly useful feature of the JPEG process is that, in this step, varying levels of image compression and quality are obtainable through selection of specific quantization matrices. This enables the user to decide on quality levels ranging from 1 to 100, where 1 gives the poorest image quality and highest compression, while 100 gives the best quality and lowest compression. As a result, the quality/compression ratio can be tailored to suit different needs. Subjective experiments involving the human visual system have resulted in the JPEG standard quantization matrix. With a quality level of 50, this matrix renders both high compression and excellent decompressed image quality:

    (4) [the standard 8x8 quantization matrix Q50]

If, however, another level of quality and compression is desired, scalar multiples of the JPEG standard quantization matrix may be used. For a quality level greater than 50 (less compression, higher image quality), the standard quantization matrix is multiplied by (100 - quality level)/50. For a quality level less than 50 (more compression, lower image quality), the standard quantization matrix is multiplied by 50/(quality level). The scaled quantization matrix is then rounded and clipped to have positive integer values ranging from 1 to 255. For example, the following quantization matrices yield quality levels of 10 and 90.

Quantization is achieved by dividing each element in the transformed image matrix D by the corresponding element in the quantization matrix, and then rounding to the nearest integer value. For the following step, quantization matrix Q50 is used. Recall that the coefficients situated near the upper-left corner correspond to the lower frequencies of the image block, to which the human eye is most sensitive. In addition, the zeros represent the less important, higher frequencies that have been discarded, giving rise to the lossy part of the compression. As mentioned earlier, only the non-zero components are used for reconstruction of the image. The number of zeros given by each quantization matrix varies.

    (5) [the quantized 8x8 coefficient matrix]
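Both the scaling rule and the quantization step can be sketched in C (the Q50 table below is the standard JPEG luminance quantization matrix, assumed here for illustration):

    #include <math.h>
    #include <stdio.h>

    #define N 8

    /* The standard JPEG luminance quantization matrix (quality level 50). */
    static const int Q50[N][N] = {
        {16, 11, 10, 16,  24,  40,  51,  61},
        {12, 12, 14, 19,  26,  58,  60,  55},
        {14, 13, 16, 24,  40,  57,  69,  56},
        {14, 17, 22, 29,  51,  87,  80,  62},
        {18, 22, 37, 56,  68, 109, 103,  77},
        {24, 35, 55, 64,  81, 104, 113,  92},
        {49, 64, 78, 87, 103, 121, 120, 101},
        {72, 92, 95, 98, 112, 100, 103,  99}
    };

    /* Scale Q50 to the requested quality level, round, clip to 1..255. */
    static void scale_q(int quality, int Q[N][N]) {
        double s = (quality > 50) ? (100.0 - quality) / 50.0 : 50.0 / quality;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                int v = (int)round(Q50[i][j] * s);
                Q[i][j] = v < 1 ? 1 : (v > 255 ? 255 : v);
            }
    }

    /* Quantize: divide each DCT coefficient by the matching table entry
     * and round to the nearest integer. */
    static void quantize(const double D[N][N], const int Q[N][N], int C[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                C[i][j] = (int)round(D[i][j] / Q[i][j]);
    }

    int main(void) {
        int Q10[N][N], Q90[N][N];
        scale_q(10, Q10);   /* more compression, lower quality  */
        scale_q(90, Q90);   /* less compression, higher quality */
        printf("Q10[0][0] = %d, Q90[0][0] = %d\n", Q10[0][0], Q90[0][0]);
        /* prints 80 and 3: coarser vs. finer quantization steps */
        double D[N][N] = {{0}};
        D[0][0] = -415.38;  /* an illustrative DC coefficient */
        int C[N][N];
        quantize(D, Q10, C);
        printf("round(-415.38 / %d) = %d\n", Q10[0][0], C[0][0]);  /* -5 */
        return 0;
    }

For quality 10 the top-left entry 16 scales to 80; for quality 90 it scales to 3, matching the stated trade-off between compression and quality.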

D. Encoder:
The quantized matrix is now ready for the final step of compression: the entire matrix of coefficients is coded into binary format by the encoder. After quantization it is quite common that most of the coefficients are equal to zero, and JPEG takes advantage of this by encoding the quantized coefficients in zigzag order. Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together and inserts length-coded zeros, and then using Huffman coding on what is left. The JPEG standard also allows, but does not require decoders to support, the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature has rarely been used, as it was historically covered by patents requiring royalty-bearing licenses, and because it is slower to encode and decode compared to Huffman coding; arithmetic coding typically makes files about 5-7% smaller. The previous quantized DC coefficient is used to predict the current quantized DC coefficient, and the difference between the two is encoded rather than the actual value. The encoding of the 63 quantized AC coefficients does not use such prediction differencing.
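The zigzag scan and the run-length view of the AC coefficients can be sketched as follows (our illustration; the coefficient values are made up):

    #include <stdio.h>

    #define N 8

    /* Zigzag scan of an 8x8 block: walk the anti-diagonals, alternating
     * direction, so low-frequency coefficients come first and the long
     * tail of zeros is grouped for run-length encoding. */
    static void zigzag(const int C[N][N], int out[N * N]) {
        int i = 0, j = 0;
        for (int k = 0; k < N * N; k++) {
            out[k] = C[i][j];
            if ((i + j) % 2 == 0) {              /* moving up-right */
                if (j == N - 1)      i++;
                else if (i == 0)     j++;
                else               { i--; j++; }
            } else {                             /* moving down-left */
                if (i == N - 1)      j++;
                else if (j == 0)     i++;
                else               { i++; j--; }
            }
        }
    }

    int main(void) {
        int C[N][N] = {{0}};
        C[0][0] = 26; C[0][1] = -3; C[1][1] = -3; C[2][0] = -3;
        int seq[N * N];
        zigzag(C, seq);
        /* Run-length view: emit (zero-run, value) pairs for the 63 AC
         * coefficients, as the entropy coder does before Huffman coding. */
        int run = 0;
        for (int k = 1; k < N * N; k++) {
            if (seq[k] == 0) { run++; continue; }
            printf("(%d zeros, %d) ", run, seq[k]);
            run = 0;
        }
        printf("EOB\n");  /* end-of-block once only zeros remain */
        return 0;
    }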
III. RESULT

Fig.3 Simulated result.

IV. CONCLUSION
The emerging JPEG continuous-tone image compression
standard is not a panacea that will solve the myriad issues
which must be addressed before digital images will be fully
integrated within all the applications that will ultimately
benefit from them. For example, if two applications cannot
exchange uncompressed images because they use
incompatible color spaces, aspect ratios, dimensions, etc.
then a common compression method will not help.
However, a great many applications are “stuck” because of
storage or transmission costs, because of argument over
which (nonstandard) compression method to use, or because
VLSI codecs are too expensive due to low volumes. For
these applications, the thorough technical evaluation,
testing, selection, validation, and documentation work
which JPEG committee members have performed is
expected to soon yield an approved international standard
that will withstand the tests of quality and time. As diverse
imaging applications become increasingly implemented on
open networked computing systems, the ultimate measure of
the committee's success will be when JPEG-compressed
digital images come to be regarded and even taken for
granted as “just another data type,” as text and graphics are
today.
V. FUTURE SCOPE
Other JPEG extensions include the addition of a version
marker segment that stores the minimum level of
functionality required to decode the JPEG data stream.
Multiple version markers may be included to mark areas of
the data stream that have differing minimum functionality
requirements. The version marker also contains information
indicating the processes and extensions used to encode the
JPEG data stream.
