MCT QP Bank With Answer


UNIT I – MULTIMEDIA COMPONENTS DSEC/ECE/QB

DHANALAKSHMI SRINIVASAN ENGINEERING COLLEGE, PERAMBALUR


DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

EC6018 MULTIMEDIA COMPRESSION & COMMUNICATION


UNIT I MULTIMEDIA COMPONENTS
PART – A
1. What are the responsibilities of interface and information designers in the development of a
multimedia project? (A/M 15)
 An interface designer is responsible for: i) creating a software device that organizes content, allows users to access or modify the content, and presents that content on the screen, ii) building a user-friendly interface.
 Information designers structure content, determine user pathways and feedback, and select presentation media.
2. List the features of multimedia. (A/M 14)
 A Multimedia system has four basic characteristics:
 Multimedia systems must be computer controlled.
 Multimedia systems are integrated.
 The information they handle must be represented digitally.
 The interface to the final presentation of media is usually interactive.
3. What are the multimedia components? (A/M 17)
 Text, Audio, Images, Animations, Video and interactive content are the multimedia components.
 The first multimedia element is text. Text is the most common multimedia element.
4. Differentiate Serif and Sans serif fonts. (N/D 16, A/M 16, N/D 15)
Answer:
S.No. Serif fonts vs. Sans serif fonts
1. A font that has decorative corners or strokes at the corners is called Serif. Fonts without such decorative corners are called Sans Serif (No Serif) fonts.
2. Serif stands for stroke or line. Sans means “without”, so a Sans Serif font means a font without strokes or lines.
3. Serif fonts have the extra stroke or decorative design on the ends of letters. Sans Serif fonts do not have any such design or stroke.


4. Serif example: Times New Roman font. Sans serif example: Arial font.


5. Write any two advantages of MIDI over digital audio. (A/M 15)
 Both are digital audio files, and the main difference is the way they produce sound.
 MIDI files are much more compact than digital audio files.
 MIDI files embedded in web pages load and play more quickly than their digital equivalent.
 MIDI data is completely editable. A particular instrument can be removed from the song and/or
a particular instrument can be changed by another just by selecting it.
 MIDI files may sound better than digital audio files if the MIDI sound source you are using is of
high quality.
6. How are GIF images generated? (A/M 16)
 Upload images: Click the upload button and select as many images as you want.
 Arrange images: Drag and drop the selected images until they are ordered correctly.
 Adjust options: Adjust the Delay until the speed of your GIF looks normal.
 Generate the image.
7. How are 2½-dimension animations created? (N/D 16, N/D 15)
 2½-D usually refers to an animation created in several flat layers to give some of the depth
effects of true 3-D.
 Various techniques used in creating 2½-D animations are morphing, tweening, onion skinning,
anime, and rotoscoping.
8. Define Luminance. (A/M 17)
 Luminance refers to brightness.
 Luminance is a measure of the light strength that is actually perceived by the human eye.
 It describes the amount of light that passes through, is emitted or reflected from a particular
area, and falls within a given solid angle.
 Luminance measures just the portion that is perceived.
9. Define multimedia.
 ‘Multi’ means ‘many’ and ‘media’ means ‘material through which something can be transmitted
or sent’.
 Information being transferred by more than one medium is called multimedia.


 It is the combination of text, image, audio, video, animation, graphic & hardware, that can be
delivered electronically / digitally which can be accessed interactively.
 It is of two types: Linear & Non – Linear.
10. Describe the applications of multimedia.
 Multimedia in Education: It is commonly used to prepare study material for the students and
also provide them proper understanding of different subjects.
 Multimedia in Entertainment:
a) Movies: Multimedia used in movies gives a special audio and video effect.
b) Games: Multimedia used in games by using computer graphics, animation, videos has
changed the gaming experience.
 Multimedia in Business:
a) Videoconferencing: This system enables to communicate using audio and video between
two different locations through their computers.
b) Marketing and advertisement: Different advertisement and marketing ideas about any
product on television and internet is possible with multimedia.
11. Write the difference between multimedia and hypermedia.
1. Multimedia is the presentation of media such as text, images, graphics, video & audio by the use of computers or other information content processing devices. Hypermedia is the use of an advanced form of hypertext: interconnected systems that store and present text, graphics & other media types, where the content is linked by hyperlinks.
2. Multimedia can be in linear or non-linear content format, but hypermedia is only in non-linear content format. Hypermedia is an application of multimedia, and hence a subset of multimedia.
12.Define Nyquist Sampling theorem.
 The Nyquist sampling theorem states that in order to obtain an accurate representation of a time-varying analog signal, its amplitude must be sampled at a minimum rate that is equal to or greater than twice the highest sinusoidal frequency component present in the signal.
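As a quick sketch of the arithmetic (a Python illustration added here, with our own function name; not part of the standard answer), the minimum sampling rate follows directly from the theorem:

```python
def nyquist_rate(f_max_hz: float) -> float:
    """Minimum sampling rate (Hz) for a signal whose highest
    sinusoidal frequency component is f_max_hz."""
    return 2.0 * f_max_hz

# Telephone-quality speech is usually band-limited to about 3.4 kHz,
# so a rate of at least 6.8 kHz is needed (8 kHz is used in practice).
print(nyquist_rate(3400.0))   # 6800.0
print(nyquist_rate(20000.0))  # 40000.0 (audible range; CD audio uses 44.1 kHz)
```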
13.Define Aspect ratio.
 Both the number of pixels per scanned line & the number of lines per frame vary, the actual numbers used being determined by the aspect ratio of the display screen. This is the ratio of the screen width to the screen height.
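The ratio can be computed from a resolution by dividing out the greatest common divisor. This short Python sketch (illustrative only; the function name is our own) shows the idea:

```python
from math import gcd

def aspect_ratio(width_px: int, height_px: int) -> str:
    """Reduce a screen resolution to its width:height aspect ratio."""
    g = gcd(width_px, height_px)
    return f"{width_px // g}:{height_px // g}"

print(aspect_ratio(640, 480))    # 4:3  (standard-definition display)
print(aspect_ratio(1920, 1080))  # 16:9 (widescreen HDTV)
```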


PART – B
1. Discuss the skill set needed to develop a multimedia project. Also describe how this is different
from the other skill sets? (N/D 16, N/D 15)
Key points:
 Multimedia definition: (1 Mark)
 Information being transferred by more than one medium is called as multimedia.
 Multimedia skill set: (12 Marks)

Role of multimedia team:


 Project Manager: design & management of a project.
 Multimedia Designer: deals with visuals.
 Interface Designer: devises the navigational pathways & content maps.
 Information Designer: structures content, determines user pathways & feedback, and selects presentation media.
 Multimedia Writer: writes proposals & test screens.
 Video Specialist: delivery of video files on CD, DVD or the web.
 Audio Specialist: schedules recording sessions.
 Multimedia Programmer: software engineer.
 Multimedia Producer: puts together a coordinated set of pages for the web.
2. Discuss on the text representation techniques. (A/M 16)
Key points:
 Text – definition: (2 Marks)
 Text includes unformatted text, comprising strings of characters from a limited character set
and formatted text, comprising strings for the structuring, access & presentation of electronic
documents.
 Representation of texts (2 Marks)
 Unformatted text

 Formatted text
 Hyper text
 Explanation of Unformatted text (3 Marks)
 Also known as plain text.
 Enables pages to be created which comprise strings of fixed sized characters from a limited
character set.
 Control characters: i) format control characters, ii) information separators, iii) transmission control characters.
 Refer Pg. No. 89 in Text book 1 (19th edition).
 Explanation of Formatted text (3 Marks)
 Also known as rich text.
 Enables documents to be created that consist of characters of different styles & variable size &
shape, each of which can be plain, bold or italic.
 Refer Pg. No. 91 in Text book 1 (19th edition).
 Explanation of Hyper text (3 Marks)
 It is a type of formatted text that enables a set of documents, referred to as pages, to be linked together.
 Refer Pg. No. 93 in Text book 1 (19th edition).
3. Compare and contrast MIDI and digital audio. (A/M 16, N/D 16, N/D 15)
Key points:
MIDI vs. Digital Audio
 Definition: A MIDI (Musical Instrument Digital Interface) file is software for representing musical information in a digital format; a digital audio file refers to the reproduction & transmission of sound stored in a digital format.
 Format type: Compressed (both).
 Recording: MIDI files do not contain a recording of sound; digital audio files contain a recording of sound.
 Storage: No actual sound is stored in a MIDI file; the actual sound is stored in a digital audio file.
 Advantages of MIDI: the files are tiny, often less than 10K; they download from a web page in no time; they fit easily on a floppy disk; the files are ideal at any time.
 Advantages of digital audio: the files reproduce the exact sound, and reproduction is better than CD quality.


 Disadvantages of MIDI: the files sound a little different from the original sounds.
 Disadvantages of digital audio: the files take up 10 MB or more per minute of sound; even with high-speed internet connections, a simple audio file can take several minutes to download; when combined with video, the files can cause problems.
4. Describe the capability and limitations of bitmap and vector images. (A/M 15)
Key points:
 Bitmap definition (2 marks)
 Bitmap is derived from the word ‘bit’ which means the simplest element in which only two digits
are used and ‘map’ which is a two-dimensional matrix of these bits.
 A Bitmap is a data matrix describing the individual dots of an image that are the smallest
elements (pixels) of resolution on a computer screen or a printer.
 Bitmap capability (5 Marks)
 Bitmaps are an image format suited for creation of: Photo – realistic images, Complex drawings,
Images that require fine detail.
 Bitmapped images are known as paint graphics and it can have varying bit and color depths.
 More bits provide more color depth, hence more photo – realism but requires more memory and
processing power.
 Monochrome just requires one bit per pixel, representing black or white.
8 bits per pixel allows 256 distinct colors; 16 bits per pixel represents 32K distinct colors; 24 bits per pixel allows millions of colors; 32 bits per pixel allows trillions of colors.
 Bitmaps are best for photo – realistic images or complex drawings requiring fine detail.
 Bitmaps can be inserted by using clip art galleries, using bitmap software, capturing and editing
images, scanning images.
 Limitations of Bitmap (1 Mark)
 Bitmaps are not easily scalable and resizable.
 Bitmaps use more memory space and have a larger file size.
 It can be converted to vector images using auto tracing.
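The memory cost noted above is easy to quantify: an uncompressed bitmap needs one value per pixel. A small illustrative Python sketch (our own, not from the textbook):

```python
def bitmap_size_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Uncompressed bitmap size: one bits_per_pixel value per pixel."""
    return width * height * bits_per_pixel // 8

# A 640x480 image at 24 bits per pixel (photo-realistic color):
print(bitmap_size_bytes(640, 480, 24))  # 921600 bytes (~900 KB)
# The same image in monochrome (1 bit per pixel):
print(bitmap_size_bytes(640, 480, 1))   # 38400 bytes
```

More bits of color depth multiply the file size directly, which is why bit depth is chosen against available memory and processing power.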
 Vector – Drawn image (5 Marks)


 Vector – Drawn images are created from geometric objects such as lines, rectangles, ovals,
polygons using mathematical formula.
 A vector is a line that is described by the location of its end points.
 Vector drawing makes use of Cartesian co-ordinates.
 Cartesian co-ordinates are numbers that describe a point in two or three-dimensional space as
the intersection of X, Y and Z axis.
 Vector images use less memory space and have a smaller file size (.svg) as compared to bitmaps.
 For the web, pages that use vector graphics in plug-ins download faster and when used for
animation, draw faster than bitmaps.
 It cannot be used for photorealistic images.
 It requires a plug-in for web-based display.
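To see why vector files are small, note that they store geometry rather than pixels. The following Python sketch builds a minimal SVG document (SVG is a standard vector format; the helper name is our own):

```python
def svg_rectangle(x: int, y: int, width: int, height: int, fill: str = "navy") -> str:
    """Describe a rectangle as geometry (vector) instead of a pixel grid (bitmap)."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">'
        f'<rect x="{x}" y="{y}" width="{width}" height="{height}" fill="{fill}"/>'
        "</svg>"
    )

doc = svg_rectangle(10, 10, 100, 50)
print(len(doc))  # a few hundred bytes, regardless of rendered resolution
```

The same rectangle stored as a 24-bit bitmap would cost three bytes per pixel; as geometry it scales to any size with no extra storage.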
5. Explain the technique of computer animation and compare it with the traditional cel
animation. (A/M 15) / Explain any two animation techniques with an example. (A/M 16)
 Animation definition (1 Mark)
 Types of Animation (1 Mark)
 Traditional Animation. (2D, Cel, Hand Drawn)
 2D Animation. (Vector-Based)
 3D Animation. (CGI, Computer Animation)
 Motion Graphics. (Typography, Animated Logos)
 Stop Motion. (Claymation, Cut-Outs)
 Techniques of animation (1 Mark)
 Drawn animation.
 Model animation or stop motion animation.
 Computer animation or computer generated imagery (CGI)
 Techniques description (8 Marks)
 Computer animation Vs. Traditional cel animation (2 Marks)
6. With the aid of a diagram, explain the terms interlaced scanning and progressive scanning in
detail. (A/M 16)

Interlaced scan: Traditional TV systems (such as NTSC, the standard TV system in the United States)
use an interlaced scan, where half the picture appears on the screen at a time. The other half of the picture
follows an instant later (1/60th of a second, to be precise). The interlaced system relies on the fact that
your eyes can’t detect this procedure in action — at least not explicitly.

Progressive scan: In a progressive-scan system, the entire picture is painted at once, which greatly
reduces the flickering that people notice when watching TV. Progressive scan is available throughout a
range of TV types.

7. Describe the various output devices available for personal computers and explain how they
may be used in multimedia production and delivery? (N/D 15, N/D 16)

 An output device is any piece of computer hardware equipment used to communicate the results of
data processing carried out by an information processing system (such as a computer) to the outside
world.

In computing, input/output, or I/O, refers to the communication between an information processing
system (such as a computer) and the outside world. Inputs are the signals or data sent to the system, and
outputs are the signals or data sent by the system to the outside.

Examples of output devices:

1. Speakers
2. Headphones
3. Screen (Monitor)
4. Printer

Output Devices

We need to head to the computer store one more time. We've picked out your system
unit and input devices. Now we've really got to figure out what's important for your output
devices, or how we are going to see and hear the data and information created and stored in
your computer.

Monitors
Monitors are created with LCD (liquid crystal display) or LED (light-emitting diode). LCDs have layers of glass,
polarized film and liquid crystals. Electrical impulses are sent through, and this causes the color to be
shown and the image to be displayed. LED monitors take the LCD one step further. They put a diode on the back
that forces light through the layers for a sharper picture and better colors. It is said that LED monitors will last
longer than LCD monitors.

Printers

The next difficult decision to make will be the printer that will work best for you. Printers are used to
create a tangible product to look at away from a monitor. For consumer use there are two kinds to choose
from: the inkjet and the laser printer.


The inkjet printer uses a liquid ink that's sprayed through a print head onto a piece of paper. How?
Simply put, the printer interprets the signal from the computer and converts it to instructions that go through
the print head. Inkjet printers are typically inexpensive to purchase, although the replacement ink can be
costly and add up.

Laser printers use heat technology and specialized powder called toner or another medium (I've seen it
with wax - it looked like crayons) that's heat-sealed onto a piece of paper. Laser printers are somewhat
expensive, though they've come down in cost as the technology has increased.

Speakers

Anytime you want to listen to something or record something, speakers are essential in completing
these processes. Together with a microphone for input, they translate the element of sound and let you
record it to use later on. The most common method for recording and saving sound is through .wav files;
this standard format is recognized by virtually all computing devices. You can use them for listening to and
recording music, watching movies, playing games or speaking with people online through a telephone-like
service.

8. Explain the working principles of digital camera and scanner with neat diagram. (A/M 17)
 256×256 – This is the most basic resolution a camera has. Images taken at this resolution look
blurred and grainy. These cameras are the cheapest, and the quality is generally unacceptable.
 640×480 – This is a somewhat higher resolution than the 256×256 type. Though a clearer image
than the former can be obtained, these cameras are frequently considered low end. They are
suitable for posting pictures and images on websites.
 1216×912 – This resolution is normally used in studios for printing pictures. A total of 1,109,000
pixels are available.
 1600×1200 – This is the high resolution type. The pictures are in their high end and can be used to
make a 4×5 with the same quality as that you would get from a photo lab.
 2240×1680 – This is commonly referred to as a 4-megapixel camera. With this resolution you can
easily make a photo print up to 16×20 inches.
 4064×2704 – This is commonly referred to as an 11.1-megapixel camera. With this resolution you
can easily make photo prints up to 13.5×9 inches with no loss of picture quality.
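The megapixel figures quoted above are just width × height. A small Python check (illustrative; the function name is our own):

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count of a sensor, in millions of pixels."""
    return width * height / 1_000_000

print(round(megapixels(1216, 912), 3))   # 1.109 -- matches the ~1,109,000 pixels above
print(round(megapixels(2240, 1680), 2))  # 3.76  -- marketed as a 4-megapixel camera
print(round(megapixels(4064, 2704), 2))  # 10.99 -- marketed as 11.1 megapixels
```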


Parameters of a Digital Camera

Aperture – Aperture refers to the diameter of the opening in the camera.


Shutter Speed – Shutter speed refers to the rate and amount of light that passes through the aperture.
Focal Length – The focal length is a factor that is designed by the manufacturer.
Lens – There are mainly four types of lenses used for a digital camera.

UNIT II AUDIO AND VIDEO COMPRESSION


PART – A
1. Define frequency masking APR/MAY2017
Auditory masking occurs when the perception of one sound is affected by the
presence of another sound. Auditory masking in the frequency domain is known as
simultaneous masking, frequency masking or spectral masking.
2. What is the principle of adaptive predictive coding APR/MAY2017, APR/MAY2015
By varying the number of bits used for difference signals based on their amplitude (i.e.,
using fewer bits to encode small difference values), bandwidth can be saved and quality
improved. This is the principle of Adaptive Differential Pulse Code Modulation (ADPCM).
3. What is the basic principle of ADPCM? NOV/DEC2015

By varying the number of bits used for difference signals based on their amplitude (i.e.,
using fewer bits to encode small difference values), bandwidth can be saved and quality
improved. This is the principle of Adaptive Differential Pulse Code Modulation (ADPCM).
4. What are the profiles in MPEG-2 video standard? NOV/DEC2015

MPEG-2 defines 7 profiles at different applications


1. Simple
2. Main

3. SNR scalable
4. Spatially scalable
5. High
6. 4:2:2
7. Multi-view
5. What are the different delays suffered by CELP coders NOV/DEC2016, APR/MAY2015

There are two delays through which a CELP (Code Excited Linear Prediction) coder
suffers:
Processing delay: this delay occurs while each block of digitized samples is analyzed
by the encoder and while the speech is being reconstructed at the decoder.
Algorithmic delay: the time required to accumulate the block of samples is known as
the algorithmic delay.

6. What is the advantage of adaptive predictive coding NOV/DEC201

(i) APC has a reduced bandwidth of up to 8 kbps.


(ii)The quality of the data is maintained, even after compression.

7. What is meant by delta modulation APR/MAY2017


Delta modulation is the one-bit version of differential pulse code modulation (DPCM).
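A minimal sketch of that idea in Python (illustrative only, with a fixed step size; practical delta modulators often adapt the step):

```python
def delta_modulate(samples, step=1.0):
    """One-bit DPCM: transmit only whether the signal rose or fell
    relative to the running staircase approximation."""
    bits, approx = [], 0.0
    for s in samples:
        bit = 1 if s >= approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def delta_demodulate(bits, step=1.0):
    """Rebuild the staircase approximation from the bit stream."""
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

bits = delta_modulate([0.4, 1.2, 2.1, 2.0, 1.1])
print(bits)                    # [1, 1, 1, 0, 0]
print(delta_demodulate(bits))  # [1.0, 2.0, 3.0, 2.0, 1.0] -- tracks the input
```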
8. What are the profiles in MPEG-2 video standards APR/MAY2017

MPEG-2 defines 7 profiles at different applications


 Simple
 Main
 SNR scalable
 Spatially scalable
 High
 4:2:2
 Multi-view
9. What are the benefits of Compression?
 It requires less disk space (i.e., more data can be stored in the same space).
 Faster file transfer.
 Faster writing and reading.
10.List the major features of H.263 standard.
H.263 supports CIF, QCIF, sub-QCIF, 4CIF and 16CIF picture formats. For the compressed video, the
standard defines the maximum bit rate per picture, measured in units of 1024 bits.
16 MARKS

1. Discuss the techniques of DPCM with a neat diagram. What are the advantages of ADPCM
over DPCM? (APR/MAY2015(8)), (APR/MAY2017(16)), (NOV/DEC 2016(8))
Principle of DPCM


(Figure: power spectral density of the quantization error q, measured for intraframe DPCM with a 16-level quantizer.)

Signal distortions due to intraframe DPCM coding


Granular noise: random noise in flat areas of the picture
Edge busyness: jittery appearance of edges (for video)
Slope overload: blur of high contrast edges, Moire patterns in periodic structures.

A practical DPCM design must address two problems:

Eliminating the accumulation of quantization noise
Reducing the effect of transmission errors

The general inferences that can be drawn from the above results are:
1. Backward adaptive predictors & quantizer are usually preferred in design as forward
adaptive algorithms introduce delay & require more bandwidth which is not acceptable
in connections having multiple links.
2. It is better to use backward adaptive predictors of higher order so as to obtain
a better estimate of the samples which also results in low MSE values.


3. The adaptive quantizer used in ADPCM should be chosen such that the reconstruction
levels used do not lose speech intelligibility. This is necessary as the adaptive quantizer
contributes significantly to the encoding time required.
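The difference-coding idea behind DPCM can be sketched in a few lines of Python (an illustration of our own with no quantizer, so the round trip is exact; in a real codec the quantizer makes it lossy):

```python
def dpcm_encode(samples):
    """First-order DPCM: send each sample's difference from the previous one.
    Small differences need fewer bits than the raw sample values."""
    prev, diffs = 0, []
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Accumulate the differences to rebuild the original samples."""
    prev, out = 0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

pcm = [100, 104, 106, 105, 101]
diffs = dpcm_encode(pcm)
print(diffs)  # [100, 4, 2, -1, -4] -- mostly small values, cheaper to code
assert dpcm_decode(diffs) == pcm
```

ADPCM goes one step further by adapting the number of bits (and the quantizer step) to the amplitude of these differences.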

2. Write a brief note on MPEG perceptual coders (APR/MAY2015(8)), (NOV/DEC 2015(8)), (NOV/DEC 2016(8))
Perceptual audio coder (PAC) is an algorithm, like MPEG's MP3 standard, used to
compress digital audio by removing extraneous information not perceived by most
people. It is used by Sirius Satellite Radio for their DARS service.

Transmission bandwidth increases continuously, but the demand increases even more; hence the need for compression technology.

Applications of audio coding:
– audio streaming and transmission over the internet
– mobile music players
– digital broadcasting
– soundtracks of digital video (e.g. digital television and DVD)

Requirements for audio coding systems:
– Compression efficiency: sound quality vs. bit-rate.
– Absolute achievable quality: often it is required that, given a sufficiently high bit-rate, there is no audible difference compared to the CD-quality original audio.
– Complexity: computational complexity is the main factor for general-purpose computers, while storage requirements are the main factor for dedicated silicon chips. Regarding encoder vs. decoder complexity, the encoder is usually much more complex than the decoder, and encoding can be done off-line in some applications.
– Algorithmic delay: depending on the application, the delay is or is not an important criterion. It is very important in two-way communication (~20 ms is acceptable), not important in storage applications, and somewhat important in digital TV/radio broadcasting (~100 ms).
– Editability: a certain point in the audio signal can be accessed from the coded bitstream; this requires that decoding can start at (almost) any point of the bitstream.
– Error resilience: susceptibility to single or burst errors in the transmission channel; usually combined with error correction codes, but that costs bits.


The main building blocks of a perceptual audio coder:
– Filter bank: used to decompose an input signal into subbands or spectral components (time-frequency domain).
– Perceptual model (aka psychoacoustic model): usually analyzes the input signal instead of the filterbank outputs (the time-domain input provides better time and frequency resolution); computes a signal-dependent masked threshold based on psychoacoustics.
– Quantization and coding: spectral components are quantized and encoded; the goal is to keep the quantization noise below the masked threshold.
– Frame packing: the bitstream formatter assembles the bitstream, which typically consists of the coded data and some side information.
– Perceptual model details: masked threshold, tonality estimation, the MPEG-1 Layer 2 model.

3. Describe the principle of MPEG-4 with diagrams of encoder and decoder (APR/MAY2015(8)), (NOV/DEC 2015(8)), (NOV/DEC 2016(10))
MPEG-4 absorbs many of the features of MPEG-1 and MPEG-2 and other related
standards, adding new features such as (extended) VRML support for 3D rendering,
object-oriented composite files (including audio, video and VRML objects), support
for externally specified Digital Rights Management and various types of
interactivity.

MPEG-4 provides a series of technologies for developers, for various service providers and for end users:

 MPEG-4 enables different software and hardware developers to create multimedia
objects possessing better abilities of adaptability and flexibility, to improve the quality
of such services and technologies as digital television, animation graphics, the World
Wide Web and their extensions.
 Data network providers can use MPEG-4 for data transparency. With the help of
standard procedures, MPEG-4 data can be interpreted and transformed into other
signal types compatible with any available network.
 The MPEG-4 format provides end users with a wide range of interaction with various
animated objects.
 Standardized Digital Rights Management signaling, otherwise known in the MPEG
community as Intellectual Property Management and Protection (IPMP).

The MPEG-4 format can perform various functions, among which might be the following:

 Multiplexes and synchronizes data, associated with media objects, in such a way that
they can be efficiently transported further via network channels.
 Interaction with the audio-visual scene, which is formed on the side of the receiver.

4. Give a brief note on the H.263 video compression standard (APR/MAY2015(8)), (NOV/DEC 2016(6))

H.263 was adopted by ITU-T. We would like to tell the reader why we do not
talk about any standard called H.262, which should logically have been there in
between H.261 and H.263.
The other requirements of H.263 standardization were:
• Use of available technology
• Interoperability between the other standards, like H.261

• Flexibility for future extensions


• Quality of service parameters, such as resolution, delay, frame-rate etc.
• Subjective quality measurements.
H.263 block diagram

H.263 sampling blocks


• 4:2:0 sampling: luminance Y to chrominance CB, CR
• Block: 8 x 8 pixels
• Macroblock (MB): 4 Y + CB + CR blocks
• Group of blocks (GOB): one or more rows of MBs; the GOB header allows resynchronization

The H.263 standard supports five pictures formats ---


• Sub-QCIF 128 x 96 pixels (Y), 64 x 48 pixels ( U,V)
• QCIF 176 x 144 pixels (Y ), 88 x 72 pixels (U,V)
• CIF 352 x 288 pixels (Y), 176 x 144 pixel (U,V)
• 4CIF 704 x 576 pixels (Y), 352 x 288 pixel (U,V)
• 16 CIF 1408 x 1152 pixels (Y), 704 x 576 pixel (U,V)
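The block arithmetic implied by these formats can be checked with a short Python sketch (illustrative; it uses the 16×16 macroblock and 6-blocks-per-macroblock structure described above):

```python
def macroblocks_per_frame(width: int, height: int, mb: int = 16) -> int:
    """Number of 16x16 macroblocks covering a luminance frame."""
    return (width // mb) * (height // mb)

# CIF: 352 x 288 luminance pixels
mbs = macroblocks_per_frame(352, 288)
print(mbs)       # 396 macroblocks (22 across x 18 down)
print(mbs * 6)   # 2376 blocks: 4 Y + CB + CR = 6 blocks per macroblock
print(macroblocks_per_frame(176, 144))  # 99 macroblocks for QCIF
```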
Two basic (six total) frame types:
–I-frames: intra coded
–P-frames: predictive (inter) coded
–B-frames (optional): bidirectionally predicted
–PB-frames (optional): decoded B and P frame as one unit
–EI-frames (optional): enhanced I-frame
–EP-frames (optional): enhanced P-frame
5. Elaborate on various video compression standards with emphasis on their supporting
features(any two standards).give required diagrams (APR/MAY2017(16))

H.264 (MPEG-4 AVC)


H.264, also known as MPEG-4 AVC (Advanced Video Coding) or MPEG-4 Part 10,
improves video compression compared to MPEG-4 and MPEG-2 by using
advanced algorithms, simplified integer transforms, and an in-loop deblocking filter.
MPEG-4
MPEG-4 is one of the most widely used codecs for video security. It offers
improved quality relative to MPEG-2. This codec is designed to operate within a
wide range of bit rates and resolutions, so it is well suited for the video surveillance
industry.

MPEG-2
MPEG-2 was approved as a standard in 1994 and was designed for high frame
and bit rates. MPEG-2 extends the earlier MPEG-1 compression standard to
produce high quality video at the expense of a lower compression ratio and at a
higher bit-rate. The frame rate is locked at 25 (PAL)/30 (NTSC) fps, as is the case
for MPEG-1.
JPEG
JPEG is a format specified in the JPEG still picture coding standard in which each
video frame is separately compressed as a JPEG image. JPEG is a very well-known
standard and is widely used in video surveillance applications and still image
cameras. The first generation of DVRs all used JPEG, but this is no longer the case.
JPEG 2000
JPEG 2000 is a wavelet-based image compression standard created by the Joint
Photographic Experts Group committee that provides better compression for still
image coding by filtering, sub-sampling, and “smoothing” video data to remove
unnecessary details. JPEG 2000 is very scalable and brings many new tools that
improve compression, but requires significantly more processing power than JPEG
to encode an image.

6. In detail, explain the concept of linear and adaptive predictive coding standards with
necessary figures (APR/MAY2017(16))
Linear predictive coding (LPC) is a tool used mostly in audio signal processing and
speech processing for representing the spectral envelope of a digital signal of speech
in compressed form, using the information of a linear predictive model.[1] It is one of
the most powerful speech analysis techniques, and one of the most useful methods for
encoding good quality speech at a low bit rate and provides extremely accurate
estimates of speech parameters.

Linear prediction is a well-known technique used in spectral analysis [4]. LPC
(Linear Predictive Coding) analyzes the speech signal by estimating the formants,
removing their effects from the speech signal, and estimating the intensity and frequency
of the remaining buzz. The process is called inverse filtering, and the remaining signal is
called the residue. In an LPC system, each sample is expressed as a linear combination of
the previous samples. This difference equation is called a linear predictor, hence the name
linear predictive coding. The coefficients of the difference equation characterize the
formants. A speech signal recorded using PRAAT and sampled at 16 kHz can be processed
to extract these features in MATLAB.

LPC applications:
Standard telephone systems
Text-to-speech synthesis
Voice mail systems and telephone answering machines
Multimedia applications
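The "linear combination of previous samples" idea can be sketched in Python (a toy first-order predictor of our own with a hand-picked coefficient; a real LPC analyzer computes the coefficients from the signal, e.g. by the Levinson-Durbin recursion):

```python
def lpc_predict(samples, coeffs):
    """Predict each sample as a linear combination of the previous
    len(coeffs) samples and return the prediction residual."""
    order = len(coeffs)
    residual = []
    for n in range(order, len(samples)):
        pred = sum(coeffs[k] * samples[n - 1 - k] for k in range(order))
        residual.append(samples[n] - pred)
    return residual

# A decaying signal x[n] = 0.9 * x[n-1] is predicted exactly by a
# first-order predictor with coefficient 0.9: the residual is zero,
# so only the coefficient and the residue need to be transmitted.
x = [1.0]
for _ in range(9):
    x.append(0.9 * x[-1])
res = lpc_predict(x, [0.9])
print(max(abs(r) for r in res))  # 0.0
```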

Adaptive predictive coding (APC) is a narrowband analog-to-digital conversion that
uses a one-level or multilevel sampling system in which the value of the signal at each
sampling instant is predicted according to a linear function of the past values of the
quantized signals.

APC is related to linear predictive coding (LPC) in that both use adaptive
predictors. However, APC uses fewer prediction coefficients, thus requiring a higher
sampling rate than LPC

Adaptive prediction can be done in two different ways:

Forward adaptive prediction
Based on the input of a DPCM system
More sensitive to variation of local statistics
Side information: prediction coefficients
Backward adaptive prediction
Based on the output of the DPCM system
Less sensitive to variation of local statistics
No side information

In either case, the data (either input or output) has to be buffered. Autocorrelation
coefficients are analyzed, based on which the prediction parameters are determined.


7. With the aid of an example explain how DCT blocks are derived from macro blocks in
an I frame (NOV/DEC 2015(8))

A macroblock is also classified as field coded or frame coded depending on how the
four blocks are extracted from it. See Field DCT Coding and Frame DCT Coding.

Field DCT Coding and Frame DCT Coding

Field DCT coding and frame DCT coding differ according to the contents of the
blocks that make up a macroblock.

In a frame coded macroblock, the four blocks each come from the same frame of
video.

In a field coded macroblock, there are two possibilities: either all four blocks come
from a given field of video, or two blocks come from one field and two from another field.

Progressive sequences: FRAME DCT ONLY

For progressive sequences, all pictures are frame pictures with frame DCT coded
macroblocks only.

Interlaced sequences: FRAME or FIELD DCT

Field DCT coding can be applied only to interlaced sequences.

For interlaced sequences, the encoder may decide on a frame by frame basis to use a
frame picture or two field pictures.

In the case of a field picture, all the blocks in every macroblock come from one field,
that is, there are only field coded macroblocks and no frame coded macroblocks.

In the case of an (interlaced) frame picture, the decision to use frame or field DCT
coding is made on a macroblock-by-macroblock basis.

- If the interlaced macroblock from an interlaced frame picture is frame DCT coded,
each of its four blocks has pixels from both fields.

- If the interlaced macroblock from an interlaced frame picture is field coded, each
block consists of pixels from only one of the two fields. Each 16x16 macroblock
is split into fields 16 pixels wide x 8 pixels high by taking alternating lines of
pixels, then each field is split into left and right parts, making two 8x8 blocks
from one field and two from the other field.

An I-frame (intra-coded picture) is a complete image, like a JPG or BMP image file. P-
and B-frames hold only part of the image information (the part that changes between
frames), so they need less space in the output file than an I-frame.

8. Discuss the methodology of achieving higher levels of compression by making the
predictor coefficients associated with ADPCM adaptive. (NOV/DEC 2015 (8))


ADPCM Encoder
Subsequent to the conversion of the A-law or μ-law PCM input signal to uniform
PCM, a difference signal is obtained by subtracting an estimate of the input signal from
the input signal itself. An adaptive 31-, 15-, 7-, or 4-level quantizer is used to assign five,
four, three, or two binary digits, respectively, to the value of the difference signal for
transmission to the decoder. An inverse quantizer produces a quantized difference signal
from these same five, four, three, or two binary digits, respectively. The signal estimate is
added to this quantized difference signal to produce the reconstructed version of the
input signal. Both the reconstructed signal and the quantized difference signal are
operated upon by an adaptive predictor, which produces the estimate of the input signal,
thereby completing the feedback loop.

ADPCM Decoder
The decoder includes a structure identical to the feedback portion of the encoder,
together with a uniform PCM to A-law or μ-law conversion and a synchronous coding
adjustment.
The synchronous coding adjustment prevents cumulative distortion from occurring on
synchronous tandem codings (ADPCM, PCM, ADPCM, etc., digital connections). Under
certain conditions, the synchronous coding adjustment is achieved by adjusting the PCM
output codes in a manner which attempts to eliminate quantizing distortion in the next
ADPCM encoding stage.
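The feedback loop described above can be sketched in simplified form. The following is a toy 1-bit ADPCM-style codec with a first-order predictor and a step size adapted from the transmitted code history (so the decoder can mirror the adaptation without side information); it illustrates the idea only and is not the G.726 algorithm:

```python
# Toy 1-bit ADPCM-style codec (a sketch, NOT the G.726 algorithm):
# first-order predictor plus a step size adapted from the code history,
# so the decoder can mirror the adaptation without side information.
def adpcm_encode(samples):
    step, pred, last, codes = 1.0, 0.0, 0, []
    for s in samples:
        code = 1 if s >= pred else -1        # 1-bit sign quantizer
        pred += code * step                   # decoder-side reconstruction
        step = step * 1.5 if code == last else max(step * 0.66, 0.05)
        last = code
        codes.append(code)
    return codes

def adpcm_decode(codes):
    step, pred, last, out = 1.0, 0.0, 0, []
    for code in codes:
        pred += code * step
        out.append(pred)
        step = step * 1.5 if code == last else max(step * 0.66, 0.05)
        last = code
    return out

samples = [0.0, 2.0, 4.0, 5.0, 5.0, 4.0, 2.0, 0.0]
codes = adpcm_encode(samples)
approx = adpcm_decode(codes)
print(codes)    # [1, 1, 1, 1, 1, -1, -1, -1]
```

Because the step adaptation depends only on the transmitted codes, encoder and decoder stay in lock-step, which is the same principle the standardized adaptive predictors rely on.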

9. Explain the different types of frames in video compression principles. (APR/MAY 2017 (16))

Three frame types:


• I-Picture (Intra-frame picture)
• P-Picture (Inter-frame predicted picture)
• B-Picture (Bi-directional predicted/interpolated pictures)

I-frames


Are encoded without reference to any other frames. Each frame is treated as a
separate (digitized) picture, and the Y, Cb and Cr matrices are encoded independently
using the JPEG algorithm.

P-frames

The encoding of a P-frame is relative to the contents of either a preceding I-frame
or a preceding P-frame.
P-frames are encoded using a combination of motion estimation and motion
compensation.
B-frames
Their contents are predicted using search regions in both past and future frames.
Allowing for occasional moving objects, this also provides better motion estimation.

UNIT III TEXT AND IMAGE COMPRESSION

PART – A
1. Define entropy encoding APR/MAY2017
Entropy coding is a type of lossless coding that compresses digital data by representing
frequently occurring patterns with few bits and rarely occurring patterns with many bits.
2. Define differential encoding APR/MAY2017


Differential encoding is used in applications where the amplitude of a value or


signal covers a large range but the difference in amplitude between successive values or
signals is relatively small.
Techniques that transmit information by encoding differences are called
differential encoding. Differential encoding schemes are very popular for speech coding.

3. Give one application each suitable for lossy& lossless compression? NOV/DEC2015
Compression of satellite images is an example of lossless compression, whereas
compression of general images (movie stills) is a good example of lossy compression.
4. Derive the binary form of the following run length encoded AC coefficients (0,6) (0,7)(3,3)
(0,-1) (0,0) NOV/DEC2015

5. Define the term ‘run length coding’ NOV/DEC2016

Run Length Encoding (RLE) is a very simple form of data compression in which
consecutive sequences of the same data value are stored or transmitted as a single data value
and count.
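A minimal sketch of the idea:

```python
# Minimal run-length encoder/decoder working on (value, count) pairs.
def rle_encode(data):
    runs = []
    for x in data:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1               # extend the current run
        else:
            runs.append([x, 1])            # start a new run
    return [tuple(r) for r in runs]

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

print(rle_encode("AAABBC"))   # [('A', 3), ('B', 2), ('C', 1)]
```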
6. Bring out the difference between loseless and lossy compression NOV/DEC2016

S.No Lossless compression | Lossy compression
1. In lossless compression, original data is exactly restored after decompression. | In lossy compression, original data is not exactly restored after decompression.
2. Mainly used for text data compression & decompression. | Mainly used for image data compression & decompression.
3. Compression ratio is less. | Compression ratio is high.
4. Ex: Run length coding, Huffman coding, Arithmetic coding. | Ex: Wavelet transform, Discrete cosine transform.

7. Give the principle of differential encoding APR/MAY2017, APR/MAY2015


Differential encoding is used in applications where the amplitude of a value or
signal covers a large range but the difference in amplitude between successive values or
signals is relatively small.
Techniques that transmit information by encoding differences are called
differential encoding. Differential encoding schemes are very popular for speech coding.
8. Define the term compression ratio.
If the total number of bits required to represent the data before compression is
B0 and the total number of bits required to represent the data after compression is B1, then
the compression ratio is given by
Compression Ratio (CR) = B0 / B1
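As a quick worked example, reusing the bit counts from the static Huffman exercise in Part B (160 bits before, 35 bits after):

```python
# Compression ratio CR = B0 / B1, where B0 = bits before compression
# and B1 = bits after compression.
def compression_ratio(bits_before, bits_after):
    return bits_before / bits_after

# e.g. the static Huffman exercise in Part B: 160 bits -> 35 bits
print(compression_ratio(160, 35))   # ~4.57:1
```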
9. Differentiate static and dynamic coding with respect to text compression.
S.No Static coding | Dynamic coding
1. In static coding, the shortest codewords are used for representing the most frequently occurring characters. | In dynamic coding, the codewords used can change as the transfer takes place.
2. Static coding is applied for applications in which the text to be compressed has known characteristics in terms of the characters & their relative frequencies. | In dynamic coding, the receiver is able to compute the same set of codewords that is being used at each point during the transfer.

10. When is a codeword said to have the prefix property? APR/MAY2017, APR/MAY2015


A code is said to have the prefix property if no codeword is a prefix of any other
codeword.
PART – B

1. Find the Huffman codewords for the text “AAAAAAAAAABBBBBCCCSS” using a static
Huffman tree. Calculate the entropy and derive the average number of bits per character for the
codewords. APR/MAY2017 (16)
Symbol occurrence probability codeword
A 10 10/20 0
B 5 5/20 10
C 3 3/20 110
S 2 2/20 111
SUM 20

Entropy: H = −Σ P(i) log2 P(i) = 0.5(1) + 0.25(2) + 0.15(2.737) + 0.1(3.322) ≈ 1.74 bits/character

AVERAGE CODEWORD LENGTH

Avg L = Σ Li * P(i)
= (10/20)*1 + (5/20)*2 + (3/20)*3 + (2/20)*3 = (10+10+9+6)/20 = 35/20 = 1.75 bits/character

Space required to store the original message = 20*8 = 160 bits

Space required to store the compressed message = 35 bits
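The table above can be checked with a short heap-based Huffman construction (a sketch; tie-breaking may yield different but equally optimal codewords):

```python
import heapq
from itertools import count

# Heap-based static Huffman construction. Tie-breaking may produce
# different but equally optimal codewords than the table above.
def huffman_codes(freqs):
    tick = count()                        # unique tie-breaker for the heap
    heap = [(f, next(tick), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: record the codeword
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

freqs = {"A": 10, "B": 5, "C": 3, "S": 2}
codes = huffman_codes(freqs)
avg = sum(len(codes[s]) * f for s, f in freqs.items()) / sum(freqs.values())
print(codes, avg)   # average length 1.75 bits/character
```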

2. Explain Lempel Ziv Welsh Compression APR/MAY2017 (16) , APR/MAY 2015(6)


The LZW algorithm is a very common compression technique, typically used in GIF and
optionally in PDF and TIFF, as well as in Unix’s ‘compress’ command, among other uses. It is
lossless, meaning no data is lost when compressing.


LZW compression works by reading a sequence of symbols, grouping the symbols into
strings, and converting the strings into codes. Because the codes take up less space than the
strings they replace, we get compression.

LZW ENCODING

* PSEUDOCODE
1 Initialize table with single character strings
2 P = first input character
3 WHILE not end of input stream
4 C = next input character
5 IF P + C is in the string table
6 P=P+C
7 ELSE
8 output the code for P
9 add P + C to the string table
10 P=C
11 END WHILE
12 output code for P

Compression using LZW

Example 1: Use the LZW algorithm to compress the string: BABAABAAA


The steps involved are systematically shown in the diagram below.
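A runnable version of the encoding pseudocode, traced on the example string (here the table is initialized with all 256 single-byte strings, so new entries start at code 256):

```python
# Runnable version of the LZW encoding pseudocode. The table starts
# with all 256 single-byte strings, so new entries begin at code 256.
def lzw_compress(text):
    table = {chr(i): i for i in range(256)}
    next_code = 256
    p, out = "", []
    for c in text:
        if p + c in table:
            p = p + c                     # extend the current string
        else:
            out.append(table[p])          # output the code for P
            table[p + c] = next_code      # add P + C to the string table
            next_code += 1
            p = c
    out.append(table[p])                  # output the final code
    return out

print(lzw_compress("BABAABAAA"))   # [66, 65, 256, 257, 65, 260]
```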

LZW Decompression Algorithm

* PSEUDOCODE
1 Initialize table with single character strings
2 OLD = first input code
3 output translation of OLD
4 WHILE not end of input stream
5 NEW = next input code
6 IF NEW is not in the string table

7 S = translation of OLD
8 S=S+C
9 ELSE
10 S = translation of NEW
11 output S
12 C = first character of S
13 add translation of OLD + C to the string table
14 OLD = NEW
15 END WHILE
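A runnable version of the decompression pseudocode; it round-trips the codes produced for "BABAABAAA" above, including the special case where a received code is not yet in the table:

```python
# Runnable version of the LZW decompression pseudocode, including the
# special case where a received code is not yet in the string table.
def lzw_decompress(codes):
    table = {i: chr(i) for i in range(256)}
    next_code = 256
    old = codes[0]
    result = table[old]
    for new in codes[1:]:
        if new in table:
            s = table[new]
        else:                             # code not yet known (KwKwK case)
            s = table[old] + table[old][0]
        result += s
        table[next_code] = table[old] + s[0]   # add OLD's string + C
        next_code += 1
        old = new
    return result

print(lzw_decompress([66, 65, 256, 257, 65, 260]))   # "BABAABAAA"
```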

3. (i) Describe the operation of JPEG encoder and decoder with neat diagrams (10)
APR/MAY2015 (10)
JPEG is an image compression standard that was developed by the “Joint
Photographic Experts Group”. JPEG was formally accepted as an international standard in
1992.
JPEG is a lossy image compression method. It employs a transform coding method
using the DCT (Discrete Cosine Transform).

Main Steps in JPEG Image Compression


• Transform RGB to YIQ or YUV and subsample color.
• DCT on image blocks.
• Quantization.
• Zig-zag ordering and run-length encoding.
• Entropy coding.
Run-length Coding (RLC) on AC coefficients
Entropy Coding
• The DC and AC coefficients finally undergo an entropy coding step to gain a possible
further compression.
• Use DC as an example: each DPCM coded DC coefficient is represented by (SIZE,
AMPLITUDE), where SIZE indicates how many bits are needed for representing the
coefficient, and AMPLITUDE contains the actual bits.
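The zig-zag ordering step can be sketched as follows (a generic diagonal scan, shown on a 4x4 block for brevity; JPEG uses the same pattern on 8x8 blocks):

```python
# Zig-zag scan of a DCT coefficient block: coefficients are read along
# anti-diagonals so low frequencies come first and trailing zeros
# cluster at the end, which suits run-length coding of the AC terms.
def zigzag_indices(n):
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def zigzag_scan(block):
    return [block[i][j] for i, j in zigzag_indices(len(block))]

# 4x4 block whose entries are numbered in zig-zag order, for checking:
block = [[0,  1,  5,  6],
         [2,  4,  7, 12],
         [3,  8, 11, 13],
         [9, 10, 14, 15]]
print(zigzag_scan(block))   # [0, 1, 2, ..., 15]
```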


ii) Give a brief note on GIF and TIFF formats APR/MAY2015 (6)

GIF(F) : stands for “Graphics Interchange Format.”


A small, limited-color raster file format, used for on-screen viewing only, when a very
small file with just a few solid colors is needed. It is a bit-mapped file format used for
graphics as opposed to photographic images. GIF supports 8-bit color (a maximum of 256
colors, compared to JPEG’s 16 million colors). It is widely used on the Web because the files
compress well. GIFs include a color table containing the 256 most representative colors
used. Not recommended for files with a lot of color shading!

Limited color palette

LZW compression

Transparency

Interlacing

Animation

Specsheet

Resolution

Name: GIF
Developer: CompuServe
Release date: 1987
Type of data: bitmap
Number of colors: 2, 4, 8, 16, 32, 64, 128 or 256
Color spaces: RGB
Compression algorithms: LZW
Ideal use: internet publishing
Extension on PC-platform: .gif
Macintosh file type: ?
Special features: support for transparency, interlacing, and animation


TIFF: stands for “Tagged Image File Format” and is one of the most widely supported file
formats for storing bit-mapped images on personal computers (both PCs and Macintosh
computers).

How to edit TIFF files

All professional image editing applications on the market are capable of opening TIFF
files. My favorite is Adobe Photoshop.

How to convert TIFF files


There are tons of converters that can convert a TIFF file to a JPEG, PNG, EPS,
PDF or other file format.

4. A series of messages is to be transferred between two computers. The messages comprise
the characters A, B, C, D and E. The probabilities of occurrence of these characters are
0.4, 0.19, 0.16, 0.15 and 0.1 respectively. Use Huffman coding to obtain a codeword for each
character. Determine the average number of bits per codeword. (10) APR/MAY 2015

Symbol probability codeword

A 0.4 0
B 0.19 100
C 0.16 101
D 0.15 110
E 0.1 111

(Tree construction: combine D+E = 0.25; combine B+C = 0.35; combine 0.25+0.35 = 0.6;
combine 0.6+0.4 = 1.0)

AVERAGE CODEWORD LENGTH

Avg L = Σ Li * P(i)
= (1*0.4) + (3*0.19) + (3*0.16) + (3*0.15) + (3*0.1) = 0.4 + 1.8 = 2.2 bits per codeword

Space required to store one instance of each character in 8-bit ASCII = 5*8 = 40 bits

Space required with the Huffman codewords = 1+3+3+3+3 = 13 bits

5. (i) Explain the operation of the LZ compression algorithm. Assuming a dictionary of 16,000
words and an average word length of 5 characters, derive the average compression ratio that is
achieved relative to using 7-bit ASCII codewords. (8) NOV/DEC2015, APR/MAY2017 (8)

Solution:
In general, a dictionary with an index of n bits can contain up to 2^n entries.
Now assume a dictionary of 16,000 words:
2^14 = 16384, and hence an index of 14 bits is required.
Using 7-bit ASCII codewords, an average of 5 characters per word requires 35
bits.
Hence the compression ratio is 35/14 = 2.5:1.


(ii) With the help of a diagram, identify the five main stages associated with the baseline mode
of operation of JPEG and give a brief description of the role of each stage. (8) NOV/DEC2015,
APR/MAY2017 (8)

JPEG: Joint Photographic Experts Group

An international standard for digital compression and coding of continuous-tone still images:
Gray-scale
Color

 Preparation: analog-to-digital conversion
 Processing: transform data into a domain easier to compress
 Quantization: reduce the precision at which the output is stored
 Entropy encoding: remove redundant information in the resulting data stream
6. (i) With the aid of a diagram, explain how individual 8x8 blocks of pixel values are
derived by the image and block preparation stage for a monochrome and RGB image.
NOV/DEC2015 (8), APR/MAY2017 (8)
Why 8x8 Blocks

RGB color system


 Three component representation of the color of a pixel
 Represents the intensities of the red, green, and blue components
 24 bit “True Color”
 Each component represented with 8 bits of precision


 The components each contain roughly the same amount of information

YUV Color Space


 An ideal format for JPEG compression
 The brightness and color information in an image are separated
 Concentrates the most important info into one component, allowing for greater
compression
 Y component represents the color intensity of the image (equivalent to a black
and white television signal)
 U and V represent the relative redness and blueness of the image

o Y = 0.299R + 0.587G + 0.114B


o U = -0.1687R -0.3313G + 0.5B + 128
o V = 0.5R – 0.4187G – 0.0813B + 128
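The conversion formulas above translate directly into code:

```python
# RGB -> YUV conversion using the formulas above (8-bit components).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    v = 0.5 * r - 0.4187 * g - 0.0813 * b + 128
    return y, u, v

# White has full luminance and neutral chrominance:
print(rgb_to_yuv(255, 255, 255))   # approximately (255.0, 128.0, 128.0)
```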

7. Design a Huffman code and find average length for a source that puts letters from an
alphabet A={a1,a2,a3,a4,a5} with P(a1)=P(a3)=P(a4)=0.1, P(a2)=0.3 and P(a5)=0.4
NOV/DEC2016(8)


a1=0.1
a2=0.3
a3=0.1
a4=0.1
a5=0.4

Tree construction (combine the two smallest probabilities at each step):

0.1 + 0.1 = 0.2
0.2 + 0.1 = 0.3
0.3 + 0.3 = 0.6
0.6 + 0.4 = 1.0

Path length

a1=0.1=4
a2=0.3=2
a3=0.1=4
a4=0.1=3
a5=0.4=1
Average codeword length

= (4*0.1) + (2*0.3) + (4*0.1) + (3*0.1) + (1*0.4)

= 0.4 + 0.6 + 0.4 + 0.3 + 0.4 = 2.1 bits
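The result can be cross-checked with a heap: the average Huffman codeword length equals the sum of all merged node weights, since each merge adds one bit to every symbol beneath the new internal node:

```python
import heapq

# Cross-check: average Huffman codeword length = sum of all merged
# node weights (each merge adds one bit to every symbol beneath it).
def huffman_avg_length(probs):
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

print(huffman_avg_length([0.1, 0.3, 0.1, 0.1, 0.4]))   # ~2.1 bits
```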

(ii) Describe dynamic Huffman code for the same output source with the above probabilities
NOV/DEC2016 (8)

Dynamic Huffman code

This coding scheme presupposes a previous determination of the symbol distribution.


The actual algorithm starts with this distribution which is regarded as constant about the entire
data. If the symbol distribution changes, then either losses in compression or a completely new
construction of the code tree must be accepted (incl. header data required).

In the following an example for a simple code tree is presented together with some
principal considerations.

Example: "abracadabra"

Symbol Frequency
a 5
b 2
r 2
c 1
d 1

According to the outlined coding scheme the symbols "d" and "c" will be coupled together in a
first step. The new interior node will get the frequency 2.

1. Step

Symbol Frequency Symbol Frequency


a 5 a 5
b 2 b 2
r 2 r 2
c 1 -----------> 1 2
d 1

2. Step
Symbol Frequency Symbol Frequency
a 5 a 5
b 2 b 2
r 2 -----------> 2 4
1 2

3. Step

Symbol Frequency Symbol Frequency


a 5 a 5
2 4 -----------> 3 6
b 2

4. Step
Symbol Frequency Symbol Frequency
3 6 -----------> 4 11
a 5


Code Table

If only one single node is remaining within the table, it forms the root of the Huffman tree. The
paths from the root node to the leaf nodes define the code word used for the corresponding
symbol:

Symbol Frequency Code Word


a 5 0
b 2 10
r 2 111
c 1 1101
d 1 1100

Complete Huffman Tree:

UNIT IV VOIP TECHNOLOGY

PART – A

1. What are the challenges involved in VoIP? APR/MAY2017, NOV/DEC2015

The major challenges of VOIP are good speech quality, low transmission delay, low
jitter and low loss of data during transmission and reception.

2. List the types of CODEC APR/MAY2017


Codecs are compression technologies with two components: an encoder to compress
the files, and a decoder to decompress them. There are codecs for data (PKZIP), still images
(JPEG, GIF, PNG), audio (MP3, AAC) and video (Cinepak, MPEG-2, H.264, VP8). There are
two kinds of codecs: lossless and lossy.
3. Write the various audio codec methods available. NOV/DEC2015

CODEC Data rate Voice quality

1 G.711 64 kbps High
2 G.723.1 6.4, 5.3 kbps Low
3 G.726 40, 32, 24, 16 kbps Medium
4 G.728 16 kbps Medium
5 G.729 8 kbps Medium

4. What is ip transport NOV/DEC2016



IP (Internet Protocol) is a routing protocol for passing data packets.

IP itself makes no guarantee that a given packet will be delivered; IP is known as a
“best-effort” protocol, which means information may be delivered when there is no traffic or
discarded when the traffic is heavy.

5. Write notes on SS7 NOV/DEC2016

SS7(Signaling System no.7) enables a wide range of services including caller-ID, toll
free calling, call screening, number portability. SS7 is the foundation for intelligent network
services. SS7 supports VOIP for many new services.

6. What is voice over IP technology APR/MAY2017, APR/MAY2015

VOIP is simply the transport of voice traffic using the Internet Protocol (IP). VOIP
can match traditional telephony in quality, reliability and scalability.
7. Differentiate lossy and lossless compression APR/MAY 2017
If the reconstructed data at the receiving end is the same as the original data,
then it is lossless compression.
If the reconstructed data at the receiving end differs from the original data,
then it is lossy compression.

8. what are the different factors that determine the QoS of VoIP systems? APR/MAY 2017

QoS is a collective measure of the level of service delivered to a customer. QoS can
be characterized by several performance criteria such as availability, throughput and
connection setup time, and can be measured in terms of bandwidth, packet loss, delay and
jitter.

9. Why are H.323 protocols designed?


The lack of interoperability between systems from different vendors was a major
inconvenience & impeded the early adoption of VOIP. To solve this issue, ITU-T
recommended H.323 protocol. H.323 acts as a signaling protocol for VOIP.
10. List the salient features of VOIP technology?
 Lower Equipment Cost
 Lower operating expense
 Widespread availability of IP
 Potentially lower bandwidth requirements
PART-B

1. Explain the network architecture and protocols supporting the functionality of VOIP
networks NOV/DEC2016(16) ,APR/MAY2017(16)

VOIP (Voice over Internet Protocol) is the technology for carrying voice conversations over the
Internet or any IP network. This means that it sends the voice signal in digital form in packets,
rather than over analog circuits using a mobile phone company or the conventional
PSTN (acronym for Public Switched Telephone Network).


Advantages of hosted VOIP phone systems:

1. VoIP Is Easier to Install, Configure, and Maintain

2. VoIP Scales Up or Down Easily

3. Employees' Numbers Follow Them Wherever They Go

4. A Range of Call Features Are Supported

5. Even Older Technology Like Fax Is Supported

6. Hosted VoIP Saves Businesses Money

7. VoIP Integrates With Other Business Systems

2. Discuss in detail about H.323 architectures(16) NOV/DEC2015 , APR/MAY 2017(16)


H.323 is a recommendation from the ITU Telecommunication Standardization
Sector (ITU-T) that defines the protocols to provide audio-visual communication sessions
on any packet network.
Protocols


H.323 is a system specification that describes the use of several ITU-T and IETF
protocols, which form the core of almost any H.323 system. H.323 utilizes both
ITU-defined codecs and codecs defined outside the ITU. Codecs that are
widely implemented by H.323 equipment include:
 Audio codecs: G.711, G.729 (including G.729a), G.723.1, G.726, G.722, G.728,
Speex, AAC-LD
 Text codecs: T.140
 Video codecs: H.261, H.263, H.264

Architecture
The H.323 system defines several network elements that work together in order to
deliver rich multimedia communication capabilities. Those elements are Terminals,
Multipoint Control Units (MCUs), Gateways, Gatekeepers, and Border Elements.
Collectively, terminals, multipoint control units and gateways are often referred to as
endpoints.

H.323: architectural elements


Gatekeeper
Gateway
Multipoint Control Unit

H.323 Network Signaling


3. Explain in detail the network architecture and protocol design of SIP (16) NOV/DEC2015 ,
APR/MAY 2017(16)

SIP
SIP (Session Initiation Protocol) is a protocol to initiate sessions
• It is an application layer protocol used to
– establish
– modify
– terminate sessions
• It supports name mapping and redirection services transparently
Used by a UA to indicate its current IP address and the URLs for which it would like to
receive calls.
– INVITE
• initiate sessions (session description included in the message body
encoded using SDP)
– ACK
• confirms session establishment
– BYE
• terminates sessions
– CANCEL
• cancels a pending INVITE
– REGISTER
• binds a permanent address to a current location
– OPTIONS
• capability inquiry
– Other extensions have been standardized
• e.g. INFO, UPDATE, MESSAGE, PRACK, REFER, etc.

4. Write a brief note on the challenges arise applications of VOIP APR/MAY 2015(8),
NOV/DEC2016(8)


H.323 is a gateway/gatekeeper-based VOIP framework which defines the protocols,
procedures and different network components needed to support good multimedia
communication capability over a packet-based network.

Session Initiation Protocol


SIP is used to control the initiation, modification and termination of interactive
multimedia sessions. The multimedia sessions may be established as audio or video calls
among two or more parties or subscriber, chat sessions (audio or video chat) or game
sessions.
VOIP packet format [15]:
RTP Header: RTP is the Real-time Transport Protocol, used for the transmission of
audio or video streams. This protocol is one of the important components of a VOIP
application. RTP allows the samples to be reconstructed in the proper sequence of
order and gives us a technique for the measurement of delay and jitter. The size of the
header is 12 bytes.
VOIP Codec:
The conversion process of an analog waveform to digital form is carried out by
a codec. Various types of codecs are present in real-time applications – GSM 6.10, G.711,
G.729, G.723.1, etc. A codec samples the waveform at regular intervals and generates a
value for each sample.
Three fundamental security requirements, named confidentiality, integrity and
availability, have to be addressed:

Attacks on VoIP

5. Discuss about the terminology and concept behind VOIP APR/MAY2017(16)


Terminals: LAN-based communication end-points
• Gateway (media and/or signaling):
– Interface between packet- and circuit-switched networks
– Media gateway: voice transcoding, protocol conversion
– Media gateway controller: call handling, call state
– Signaling gateway: signaling mediation
• Gatekeeper:
– Admission control, SNMP services, address translation
• MCU (Multipoint Control Unit):
– Handling of broadcasts / conference calls

Anti-tromboning
Back-to-Back User Agent
Call Origination
Chatter Bug
Downstream QoS
IP Multimedia Subsystem (IMS)

6. Explain ss7 protocol suit and also discuss ISUP call establishment and release in detail
APR/MAY2015

Signaling System No. 7 (SS7) is a set of telephony signaling protocols developed in 1975,
which is used to set up and tear down most of the world's public switched telephone


network (PSTN) telephone calls. It also performs number translation, local number
portability, prepaid billing, Short Message Service (SMS), and other mass market services.

SS7 Level 1: Physical Connection

This is the physical level of connectivity, virtually the same as Layer 1 of the OSI model.

SS7 Level 2: Data Link

The data link level provides the network with sequenced delivery of all SS7 message
packets.

SS7 Level 3: Network Level


The network level depends on the services of Level 2 to provide routing, message
discrimination and message distribution functions.
• Message Discrimination determines to whom the message is addressed.
• Message Distribution is passed here if it is a local message.
• Message Routing is passed here if it is not a local message.
SS7 Level 4: Protocols, User and Application Parts


Level 4 consists of several protocols, user parts and application parts

TCAP
Transactional Capabilities Application Part (TCAP) facilitates connection to an external
database.
ASP

Application Service Part (ASP) provides the functions of Layers 4 through 6 of the OSI model.

SCCP
Signaling Connection Control Part (SCCP) is a higher level protocol than MTP that provides end-
to-end routing

TUP
Telephone User Part (TUP) is an analog protocol that performs basic telephone call connect and
disconnect.
ISUP
ISDN User Part (ISUP) supports basic telephone call connect/disconnect between end offices.
BISUP
Broadband ISDN User Part (BISUP) is an ATM protocol intended to support services such as
high-definition television (HDTV), multilingual TV, voice and image storage and retrieval, video
conferencing, high-speed LANs and multimedia.

7. Discuss ISUP call establishment and release in detail

Call Initiated


Calling party goes “off hook” on an originating switch (SSP) and dials the directory
number of the called party.

Destination switch (SSP) checks the dialed number against its routing table and confirms
that the called party’s line is available for ringing.
ISUP Call Released
If the calling party hangs up first, the originating switch sends an ISUP release
message (REL) to release the trunk between the two switches. If the called party releases
first, the destination switch sends an REL message to the originating switch to release the
circuit.

When the destination switch receives the REL, it disconnects and idles the trunk,
and transmits an ISUP release complete message (RLC) to the originating switch to
acknowledge the release of the remote end of the circuit.

8. Explain CODEC methods


A codec is a device or computer program for encoding or decoding a digital data


stream or signal.
There are many codecs for audio, video, fax and text. Below is a list of the most
common codecs for VoIP. As a user, you may think that you have little to do with what
these are, but it is always good to know a minimum about these, since you might have to
make decisions one day relating to codecs concerning VoIP in your business; or at least
might one day understand some of the Greek VoIP people speak! I won’t drag you
into all the technicalities of codecs, but will just mention them.

Codecs are used to convert an analog voice signal to digitally encoded version.
Codecs vary in the sound quality, the bandwidth required, the computational
requirements, etc. Each service, program, phone, gateway, etc typically supports several
different codecs, and when talking to each other, negotiate which codec they will use.

AMR Codec

The AMR (Adaptive Multi-Rate) codec encodes narrowband (200-3400 Hz) signals
at variable bit rates ranging from 4.75 to 12.2 kbps with toll quality speech starting at 7.4
kbps. AMR is the required standard codec for 2.5G/3G wireless networks based on GSM
(WCDMA, EDGE, GPRS).
BroadVoice Codec

BroadVoice is based on Two-Stage Noise Feedback Coding (TSNFC) rather than the
popular Code-Excited Linear Prediction (CELP) coding paradigm. BroadVoice has two
variants: a 16 kb/s version called BroadVoice16 (BV16) for narrowband telephone-bandwidth
speech sampled at 8 kHz, and a 32 kb/s version called BroadVoice32 (BV32) for wideband
speech sampled at 16 kHz.

DoD CELP

DoD CELP Codec also known as Federal Standard 1016. 4.8 Kbps

GIPS

Global IP Sound (GIPS) is the producer of a family of VOIP codecs and related software.

GSM Codec

GSM (Global System for Mobile communications) is a cellular phone system


standard popular outside the USA.

iLBC

iLBC is a VOIP codec originally created by Global IP Sound but made available
(including its source code) under a restricted but free and fairly liberal license, including
permission to modify.

ITU G.722


G.722 is a high bit rate (48/56/64Kbps) ITU standard codec.

ITU G.723.1

 G.723.1 is an ITU standard codec.

ITU G.726

G.726 is an ITU standard codec. This codec uses the Adaptive Differential Pulse
Code Modulation (ADPCM) scheme.

UNIT V MULTIMEDIA NETWORKING

PART – A
1. Define packet jitter APR/MAY2017

Jitter in IP networks is the variation in the latency on a packet flow between two systems,
when some packets take longer to travel from one system to the other. A jitter buffer
(or de-jitter buffer) can mitigate the effects of jitter, either in the network on a router or
switch, or on a computer.
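A sketch of how jitter is typically estimated (in the style of the RFC 3550 interarrival jitter formula, using a 1/16 smoothing gain; the transit times below are illustrative values, not measurements):

```python
# Running jitter estimate in the style of RFC 3550: an exponentially
# smoothed average (gain 1/16) of the absolute difference between the
# transit times of successive packets. Transit times are illustrative.
def jitter_estimate(transit_ms):
    j = 0.0
    for prev, cur in zip(transit_ms, transit_ms[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0          # smoothing gain as in RFC 3550
    return j

print(jitter_estimate([20.0, 25.0, 22.0, 30.0, 21.0]))   # ~1.45 ms
```

A jitter buffer is then sized from this running estimate so that late packets can still be played out in order.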
2. What is meant by RSVP APR/MAY2017

RSVP (Resource Reservation Protocol) is a set of communication rules that allows


channels or paths on the Internet to be reserved for the multicast (one source to many
receivers) transmission of video and other high-bandwidth messages. RSVP is part of the
Internet Integrated Service (IIS) model, which ensures best-effort service, real-time service,
and controlled link-sharing.

3. Define any 4 quality of service parameters related to multimedia data transmission


NOV/DEC2015, APR/MAY2017

Bandwidth (throughput), end-to-end delay, delay jitter and packet loss.

4. What are the limitations of best effort service NOV/DEC2016

The limitations of best-effort service are packet loss, excessive end-to-end delay
and packet jitter.

5. What is meant by streaming NOV/DEC2016


Streaming media is video or audio content sent in compressed form over the
Internet and played immediately. It avoids the process of saving the data to the hard disk;
by streaming, a user need not wait to download a file before playing it.

6. Why multimedia networking is needed APR/MAY2017

The importance of communications or networking for multimedia lies in the new
applications that will be generated by adding networking capabilities to multimedia
computers, and in the hoped-for gains in efficiency and cost of ownership and use when
multimedia resources are part of distributed computing systems.


7. Give the applications of real time streaming protocol APR/MAY2015


Real Time Streaming Protocol (RTSP) is used by the client application to
communicate to the server information such as the media file being requested, the type of
client application, the mechanism of delivery of the file, and other actions such as resume,
pause, fast-forward and rewind. It is mostly used in entertainment and communication
systems to control streaming media servers.

8. Write the shortcomings of integrated services APR/MAY2015

The shortcomings of integrated services (IntServ) are that per-flow resource
reservation may impose a significant workload on routers, and that it does not allow more
qualitative definitions of service distinctions.
9. Mention the applications of multimedia networking.
 Streaming Video
 IP telephony
 Internet Radio
 Teleconferencing
 Interactive games
 Virtual worlds
 Multimedia web
10.Name any two packet loss recovery schemes.
Packet loss recovery schemes include Forward Error Correction (FEC), interleaving
and receiver-based repair.

PART – B
1. MULTIMEDIA NETWORKING
The network must provide applications with the level of performance they need in order to
function.
Multimedia applications: networked audio and video (“continuous media”)

2. MM Networking Applications
Classes of MM applications:
1) Streaming stored audio and video
2) Streaming live audio and video
3) Real-time interactive audio and video
Fundamental characteristics:
❒Typically delay sensitive
❍end-to-end delay
❍delay jitter
❒But loss tolerant:
infrequent losses cause minor glitches
❒Antithesis of data, which are loss intolerant but delay tolerant.


3. Streaming stored audio and video: making the best of best-effort service


Streaming:
❒media stored at source
❒transmitted to client
❒streaming: client playout begins before all data has arrived
❒timing constraint for still-to-be transmitted data: in time for playout

Streaming Stored Multimedia: Interactivity


VCR-like functionality: client can pause, rewind, FF, push slider bar
❍10 sec initial delay OK
❍1-2 sec until command effect OK
❍RTSP often used (more later)
❒timing constraint for still-to-be transmitted data: in time for playout
Streaming Live Multimedia
Examples:
❒Internet radio talk show
❒Live sporting event
Streaming
❒playback buffer
❒playback can lag tens of seconds after transmission
❒still have timing constraint
Interactivity
❒fast forward impossible
❒rewind, pause possible!
4. Give a detail notes on multimedia protocols for real time interactive applications with an
example APR/MAY2017(8)
❒applications: IP telephony, video conference, distributed interactive worlds
❒end-end delay requirements:
❍audio: < 150 msec good, < 400 msec OK
• includes application-level (packetization) and network delays
• higher delays noticeable, impair interactivity
❒session initialization
❍how does the callee advertise its IP address, port number, and encoding algorithms?

5. Beyond best effort service APR/MAY2015(8), APR/MAY2017(8)


Thus far: “making the best of best effort”
Future: next generation Internet with QoS guarantees


❍RSVP: signalling for resource reservations


❍Differentiated Services: differential guarantees
❍Integrated Services: firm guarantees
❒simple model for sharing and congestion studies:

Example: a 1 Mbps IP phone and FTP share a 1.5 Mbps link.


❍bursts of FTP can congest router, cause audio loss
❍want to give priority to audio over FTP

Principle 1
packet marking is needed for the router to distinguish between different classes, and a new
router policy to treat packets accordingly. What if applications misbehave (audio sends at a
higher rate than declared)?
❍policing: force source adherence to bandwidth allocations
❒marking and policing at network edge:
❍similar to ATM UNI (User Network Interface)

Principle 2
provide protection (isolation) for one class from others

Allocating fixed (non-sharable) bandwidth to a flow is an inefficient use of bandwidth if the
flow doesn't use its allocation.

Principle 3
While providing isolation, it is desirable to use resources as efficiently as possible

Basic fact of life: one cannot support traffic demands beyond link capacity.


Principle 4
Call Admission: flow declares its needs, network may block call (e.g., busy signal) if it
cannot meet needs

6. Explain the Scheduling and Policing Mechanisms suitable for multimedia systems with
suitable diagrams NOV/DEC2016(8) , APR/MAY2015(16) ,APR/MAY2017(16)
❒scheduling: choose next packet to send on link
❒FIFO (first in first out) scheduling: send in order of arrival to queue
❍real-world example?
❍discard policy: if a packet arrives to a full queue, which packet should be discarded?
• Tail drop: drop arriving packet
• priority: drop/remove on priority basis
• random: drop/remove randomly
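The tail-drop discard policy above can be sketched as a bounded FIFO queue in Python. The class and packet names are made up for illustration:

```python
from collections import deque

class TailDropFIFO:
    """Bounded FIFO queue: serve packets in arrival order, and drop
    the arriving packet (tail drop) when the queue is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.queue) >= self.capacity:
            self.dropped += 1        # tail drop: discard the arriving packet
            return False
        self.queue.append(pkt)
        return True

    def dequeue(self):
        # Send the packet that arrived earliest (first in, first out).
        return self.queue.popleft() if self.queue else None

q = TailDropFIFO(capacity=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(q.dropped)      # → 2  (p4 and p5 arrived to a full queue)
print(q.dequeue())    # → p1 (served in order of arrival)
```

Priority or random discard would differ only in the `enqueue` branch: instead of refusing the arriving packet, an already-queued packet is removed by priority or at random.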

Priority scheduling: transmit highest priority queued packet


❒multiple classes, with different priorities
❍class may depend on marking or other header info, e.g. IP-source/dest, port numbers,
etc..
❍Real world example?


round robin scheduling:


❒multiple classes
❒cyclically scan class queues, serving one from each class (if available)
❒real world example?

Weighted Fair Queuing:


❒generalized Round Robin
❒each class gets weighted amount of service in each cycle
❒real-world example?
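The cyclic, weight-per-class service of Weighted Fair Queuing can be approximated with a simple weighted round-robin sketch in Python. The class names and weights below are invented for illustration:

```python
from collections import deque

def weighted_round_robin(class_queues, weights, n_slots):
    """Cyclically scan the class queues, serving up to `weight` packets
    from each non-empty class per cycle (a coarse WFQ approximation)."""
    order = []
    queues = {c: deque(pkts) for c, pkts in class_queues.items()}
    while len(order) < n_slots and any(queues.values()):
        for cls, w in weights.items():
            for _ in range(w):
                if queues[cls] and len(order) < n_slots:
                    order.append(queues[cls].popleft())
    return order

classes = {"audio": ["a1", "a2", "a3"], "ftp": ["f1", "f2", "f3"]}
weights = {"audio": 2, "ftp": 1}      # audio gets 2/3 of the link service
print(weighted_round_robin(classes, weights, 6))
# → ['a1', 'a2', 'f1', 'a3', 'f2', 'f3']
```

True WFQ serves at the granularity of bits rather than whole packets, but the per-cycle weighted share is the same idea.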

7. Explain the policing mechanisms adopted in multimedia networks with necessary


diagrams NOV/DEC2016 (8) , APR/MAY2015(16)
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three common-used criteria:
❒(Long term) Average Rate: how many pkts can be sent per unit time (in the long run)
❍crucial question: what is the interval length? 100 packets per sec and 6000 packets per
min have the same average!
❒Peak Rate: e.g., 6000 pkts per min. (ppm) avg.; 1500ppm peak rate
❒(Max.) Burst Size: max. number of pkts sent consecutively (with no intervening idle)
Token Bucket: limit input to specified Burst Size and Average Rate.


❒bucket can hold b tokens


❒tokens generated at rate r token/sec unless bucket-full
❒over an interval of length t, the number of packets admitted is less than or equal to (r·t + b).

token bucket, WFQ combine to provide guaranteed upper bound on delay, i.e., QoS
guarantee!
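The token-bucket policer described above can be sketched as a short Python simulation. This is an illustrative sketch, not a production policer; one token is assumed to admit one packet:

```python
def token_bucket_admit(arrivals, r, b):
    """Token-bucket policer: the bucket holds at most b tokens, refilled at
    r tokens/sec; a packet is admitted only when a token is available, so
    over any interval of length t at most r*t + b packets get through."""
    tokens = b
    last = 0.0
    admitted = 0
    for t in arrivals:                          # arrival times (seconds), sorted
        tokens = min(b, tokens + r * (t - last))  # refill, capped at bucket size
        last = t
        if tokens >= 1:
            tokens -= 1
            admitted += 1
    return admitted

# A burst of 10 packets at t=0 with r=2 tokens/s and b=5:
print(token_bucket_admit([0.0] * 10, r=2, b=5))   # → 5 (limited to burst size b)
```

With arrivals spread one second apart and r = 1, every packet finds a fresh token, so a conforming source passes unharmed.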

8. Write a detailed note on Differentiated services (APR/MAY 2015(8)), NOV/DEC2016(8) ,


APR/MAY2017(8)
Concerns with Integrated service:
❒Scalability: signaling, maintaining per-flow router state difficult with large
number of flows
❒Flexible Service Models: Integrated service has only two classes. Also want
“qualitative” service classes
❍“behaves like a wire”
❍relative service distinction: Platinum, Gold, Silver
Differentiated-service approach:
❒simple functions in network core, relatively complex functions at edge routers
(or hosts)
❒Don't define service classes; provide functional components to build service
classes
Edge router:
_ per-flow traffic management
_ marks packets as in-profile and out-profile
Core router:
_ per class traffic management
_ buffering and scheduling based on marking at edge
_ preference given to in-profile packets
_ Assured Forwarding


Edge-router Packet Marking


❒class-based marking: packets of different classes marked differently
❒intra-class marking: conforming portion of flow marked differently than non-
conforming one
❒profile: pre-negotiated rate A, bucket size B
❒packet marking at edge based on per-flow profile

Classification and Conditioning
❒Packets are marked in the Type of Service (TOS) field in IPv4, and the Traffic Class field in IPv6
❒6 bits are used for the Differentiated Services Code Point (DSCP) and determine the PHB that the
packet will receive
❒2 bits are currently unused
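The DSCP layout described above (the upper 6 bits of the DS field, with the low 2 bits left alone — in modern practice they carry ECN) comes down to two bit operations. The Expedited Forwarding code point value 46 used below is the standard one:

```python
def dscp_to_ds_field(dscp):
    """Place a 6-bit DSCP in the upper 6 bits of the 8-bit DS field
    (the former IPv4 TOS byte); the low 2 bits are left clear."""
    assert 0 <= dscp < 64
    return dscp << 2

def ds_field_to_dscp(ds_field):
    """Extract the 6-bit DSCP from a received DS field byte."""
    return ds_field >> 2

EF = 0b101110                    # Expedited Forwarding code point (46)
ds = dscp_to_ds_field(EF)
print(hex(ds), ds_field_to_dscp(ds) == EF)   # → 0xb8 True
```

This is why Expedited Forwarding traffic shows up as TOS byte 0xB8 in packet captures: 46 shifted left by two bits.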

It may be desirable to limit the traffic injection rate of some class:


❒user declares traffic profile (e.g., rate, burst size)
❒traffic metered, shaped if non-conforming


Forwarding (PHB)
❒A PHB results in a different observable (measurable) forwarding performance behaviour
❒A PHB does not specify what mechanisms to use to ensure the required PHB performance
behaviour
❒Examples:
❍Class A gets x% of outgoing link bandwidth over time intervals of a specified
length
❍Class A packets leave first before packets from class B

PHBs being developed:


❒Expedited Forwarding: pkt departure rate of a class equals or exceeds specified rate
❍logical link with a minimum guaranteed rate
❒Assured Forwarding: 4 classes of traffic
❍each guaranteed minimum amount of bandwidth
❍each with three drop preference partitions

9. Give an overview of integrated services (NOV/DEC2016(8))


An architecture for providing QoS guarantees in IP networks for individual application
sessions.
❒resource reservation: routers maintain state info (à la VC) of allocated resources and
QoS requirements
❒admit/deny new call setup requests: resource reservation
❍call setup, signaling (RSVP)
❍traffic, QoS declaration
❍per-element admission control

Call Admission
Arriving session must :
❒declare its QOS requirement
❍R-spec: defines the QoS being requested
❒characterize traffic it will send into network
❍T-spec: defines traffic characteristics
❒signalling protocol: needed to carry R-spec and T-spec to routers (where reservation is
required)
❍RSVP
Guaranteed service:

❒worst case traffic arrival: leaky-bucket-policed source


❒simple (mathematically provable) bound on delay [Parekh 1992, Cruz 1988]
Controlled load service:
❒"a quality of service closely approximating the QoS that the same flow would receive from an
unloaded network element."

10. Explain the principle of RSVP (APR/MAY 2015(8)), APR/MAY2017(8)


Connectionless (stateless) forwarding by IP routers + best effort service = no network
signaling protocols in initial IP design
❒New requirement: reserve resources along end-to-end path (end system, routers) for
QoS for multimedia applications
❒RSVP: Resource Reservation Protocol [RFC 2205]
❍“allow users to communicate requirements to network in robust and efficient
way.” i.e., signaling!
❒Earlier Internet Signaling protocol: ST-II [RFC 1819]
RSVP Design Goals
1. Accommodate heterogeneous receivers (different bandwidth along paths)
2. Accommodate different applications with different resource requirements
3. Make multicast a first class service, with adaptation to multicast group membership
4. Leverage existing multicast/unicast routing, with adaptation to changes in underlying unicast,
multicast routes
5. Control protocol overhead to grow (at worst) linear in # receivers
6. Modular design for heterogeneous underlying technologies
RSVP: overview of operation
❒Senders, receiver joins a multicast group
❍done outside of RSVP
❍Senders need not join group
❒sender-to-network signaling
❍path message: make sender presence known to routers
❍path teardown: delete sender’s path state from routers
❒receiver-to-network signaling
❍reservation message: reserve resources from sender(s) to receiver
❍reservation teardown: remove receiver reservations
❒network-to-end-system signaling
❍path error
❍reservation error
