MCT QP Bank With Answer
UNIT I – MULTIMEDIA COMPONENTS DSEC/ECE/QB
5. Write any two advantages of MIDI over digital audio. (A/M 15)
Both are digital music file formats; the main difference is the way they produce sound.
MIDI files are much more compact than digital audio files.
MIDI files embedded in web pages load and play more quickly than their digital equivalent.
MIDI data is completely editable. A particular instrument can be removed from the song and/or
a particular instrument can be changed by another just by selecting it.
MIDI files may sound better than digital audio files if the MIDI sound source you are using is of
high quality.
6. How are GIF images generated? (A/M 16)
Upload images: Click the upload button and select as many images as you want.
Arrange images: Drag and drop the selected images until they are ordered correctly.
Adjust options: Adjust the Delay until the speed of your GIF looks normal.
Generate the image.
7. How are 2½-dimension animations created? (N/D 16, N/D 15)
2½-D usually refers to an animation created in several flat layers to give some of the depth
effects of true 3-D.
Various techniques used in creating 2½-D animations are morphing, tweening, onion skinning,
anime, and rotoscoping.
8. Define Luminance. (A/M 17)
Luminance refers to brightness.
Luminance is a measure of the light strength that is actually perceived by the human eye.
It describes the amount of light that passes through, is emitted or reflected from a particular
area, and falls within a given solid angle.
Luminance measures just the portion that is perceived.
9. Define multimedia.
‘Multi’ means ‘many’ and ‘media’ means ‘material through which something can be transmitted
or sent’.
Information being transferred by more than one medium is called multimedia.
It is the combination of text, image, audio, video, animation, graphic & hardware, that can be
delivered electronically / digitally which can be accessed interactively.
It is of two types: Linear & Non – Linear.
10. Describe the applications of multimedia.
Multimedia in Education: It is commonly used to prepare study material for the students and
also provide them proper understanding of different subjects.
Multimedia in Entertainment:
a) Movies: Multimedia used in movies gives a special audio and video effect.
b) Games: Multimedia used in games by using computer graphics, animation, videos has
changed the gaming experience.
Multimedia in Business:
a) Videoconferencing: This system enables users at two different locations to communicate
using audio and video through their computers.
b) Marketing and advertisement: Different advertisement and marketing ideas about any
product on television and the internet are possible with multimedia.
11. Write the difference between multimedia and hypermedia.
S.No Multimedia / Hypermedia
1. Multimedia is the presentation of media such as text, images, graphics, video & audio by the
use of computers or information content processing devices.
Hypermedia is the use of an advanced form of hypertext, interconnected systems that store and
present text, graphics & other media types where the content is linked by hyperlinks.
2. Multimedia can be in linear or non-linear content format, but hypermedia is only in
non-linear content format.
Hypermedia is an application of multimedia, hence a subset of multimedia.
12.Define Nyquist Sampling theorem.
Nyquist sampling theorem states that in order to obtain an accurate representation of a time-
varying analog signal, its amplitude must be sampled at a minimum rate that is equal to or
greater than twice the highest sinusoidal frequency component present in the signal.
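As a quick illustration (a minimal Python sketch, not part of the original answer): telephone speech is usually band-limited to about 3.4 kHz, which gives a Nyquist rate of 6.8 kHz and explains why standard PCM telephony samples at 8 kHz.

```python
def nyquist_rate(f_max_hz: float) -> float:
    """Minimum sampling rate for a signal whose highest sinusoidal
    frequency component is f_max_hz (Nyquist sampling theorem)."""
    return 2.0 * f_max_hz

# Telephone-quality speech is band-limited to about 3.4 kHz,
# so PCM telephony's 8 kHz sampling rate comfortably exceeds it.
print(nyquist_rate(3400))   # 6800.0
```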
13.Define Aspect ratio.
Both the number of pixels per scanned line & the number of lines per frame vary, the actual
numbers used being determined by the aspect ratio of the display screen. This is the ratio of the
screen width to the screen height.
PART – B
1. Discuss the skill set needed to develop a multimedia project. Also describe how this is different
from the other skill sets? (N/D 16, N/D 15)
Key points:
Multimedia definition: (1 Mark)
Information being transferred by more than one medium is called multimedia.
Multimedia skill set: (12 Marks)
Unformatted text
Formatted text
Hyper text
Explanation of Unformatted text (3 Marks)
Also known as plain text.
Enables pages to be created which comprise strings of fixed sized characters from a limited
character set.
Explanation of Control characters:
i) Format control character
ii) Information separator
iii) Transmission control character
Refer Pg. No. 89 in Text book 1 (19th edition).
Explanation of Formatted text (3 Marks)
Also known as rich text.
Enables documents to be created that consist of characters of different styles & variable size &
shape, each of which can be plain, bold or italic.
Refer Pg. No. 91 in Text book 1 (19th edition).
Explanation of Hyper text (3 Marks)
It is a type of formatted text that enables a set of documents referred to as pages.
Refer Pg. No. 93 in Text book 1 (19th edition).
3. Compare and contrast MIDI and digital audio. (A/M 16, N/D 16, N/D 15)
Key points:
MIDI vs. Digital Audio
Definition:
MIDI: A MIDI (Musical Instrument Digital Interface) file is software for representing musical
information in a digital format.
Digital audio: Digital audio refers to the reproduction & transmission of sound stored in a
digital format.
Format type:
MIDI: Compressed. Digital audio: Compressed.
Contains:
MIDI: Does not contain a recording of sound. Digital audio: Contains a recording of sound.
Storage:
MIDI: No actual sound is stored in a MIDI file. Digital audio: Actual sound is stored in the
digital audio file.
Advantages:
MIDI: Files are tiny, often less than 10K; they download from a web page in no time; they fit
easily on a floppy disk; they are editable at any time.
Digital audio: Reproduces the exact sound; can reproduce better than CD quality.
Vector-drawn images are created from geometric objects such as lines, rectangles, ovals, and
polygons using mathematical formulas.
A vector is a line that is described by the location of its end points.
Vector drawing makes use of Cartesian co-ordinates.
Cartesian co-ordinates are numbers that describe a point in two- or three-dimensional space as
the intersection of the X, Y and Z axes.
Vector images use less memory space and have a smaller file size (. svg) as compared to bitmaps.
For the web, pages that use vector graphics in plug-ins download faster and when used for
animation, draw faster than bitmaps.
It cannot be used for photorealistic images.
It requires a plug-in for web-based display.
5. Explain the technique of computer animation and compare it with the traditional cel
animation. (A/M 15) / Explain any two animation techniques with an example. (A/M 16)
Animation definition (1 Mark)
Types of Animation (1 Mark)
Traditional Animation. (2D, Cel, Hand Drawn)
2D Animation. (Vector-Based)
3D Animation. (CGI, Computer Animation)
Motion Graphics. (Typography, Animated Logos)
Stop Motion. (Claymation, Cut-Outs)
Techniques of animation (1 Mark)
Drawn animation.
Model animation or stop motion animation.
Computer animation or computer generated imagery (CGI)
Techniques description (8 Marks)
Computer animation Vs. Traditional cel animation (2 Marks)
6. With the aid of a diagram, explain the terms interlaced scanning and progressive scanning in
detail. (A/M 16)
Interlaced scan: Traditional TV systems (such as NTSC, the standard TV system in the United States)
use an interlaced scan, where half the picture appears on the screen at a time. The other half of the picture
follows an instant later (1/60th of a second, to be precise). The interlaced system relies on the fact that
your eyes can’t detect this procedure in action — at least not explicitly.
Progressive scan: In a progressive-scan system, the entire picture is painted at once, which greatly
reduces the flickering that people notice when watching TV. Progressive scan is available throughout a
range of TV types.
7. Describe the various output devices available for personal computers and explain how they
may be used in multimedia production and delivery? (N/D 15, N/D 16)
An output device is any piece of computer hardware equipment used to communicate the results of
data processing carried out by an information processing system (such as a computer) to the outside
world.
1. Speakers
2. Headphones
3. Screen (Monitor)
4. Printer
Output Devices
We need to head to the computer store one more time. We've picked out your system
unit and input devices. Now we've really got to figure out what's important for your output
devices, or how we are going to see and hear the data and information created and stored in
your computer.
Monitors
Monitors are created with LCD (liquid crystal display) or LED (light-emitting diode). LCDs have layers of glass,
polarized film and liquid crystals. You get electrical impulses sent through, and this causes the color to be
shown and image to be displayed. LED monitors take the LCD one step further. They put a diode on the back
that forces light through the layers for a sharper picture and better colors. It is said that LED monitors will last
longer than LCD monitors.
Printers
The next difficult decision to make will be the printer that will work best for you. Printers are used to
create a tangible product to look at away from a monitor. For consumer use there are two kinds to choose
from: the inkjet and the laser printer.
The inkjet printer uses a liquid ink that's sprayed through a print head onto a piece of paper. How?
Simply put, the printer interprets the signal from the computer and converts it to instructions that go through
the print head. Inkjet printers are typically inexpensive to purchase, although the replacement ink can be
costly and add up.
Laser printers use heat technology and specialized powder called toner or another medium (I've seen it
with wax - it looked like crayons) that's heat-sealed onto a piece of paper. Laser printers are somewhat
expensive, though they've come down in cost as the technology has increased.
Speakers
Anytime you want to listen to something or record something, your speakers are essential in
completing these processes. They work as both input and output devices, translating the element of sound and
recording it to use later on. The most common method for recording and saving sound is through .wav files;
this standard format is recognized by virtually all computing devices. You can use speakers for listening and
recording music, watching movies, playing games or speaking with people online through a
telephone-like service.
8. Explain the working principles of digital camera and scanner with neat diagram. (A/M 17)
256×256 – This is the most basic resolution a camera can have. Images taken at this resolution look
blurred and grainy. Such cameras are the cheapest, and the quality is generally unacceptable.
640×480 – A somewhat higher resolution than the 256×256 type. Though a clearer image
than the former can be obtained, these cameras are still considered low end. This type of camera
is suitable for posting pictures and images on websites.
1216×912 – This resolution is normally used in studios for printing pictures. A total of 1,109,000
pixels is available.
1600×1200 – This is a high-resolution type. The pictures are high end and can be used to
make a 4×5-inch print with the same quality you would get from a photo lab.
2240×1680 – This is commonly referred to as a 4-megapixel camera. With this resolution you can
easily take a photo print up to 16×20 inches.
4064×2704 – This is commonly referred to as an 11.1-megapixel camera. With this resolution you
can easily make prints up to 13.5×9 inches with no loss of picture quality.
By varying the number of bits used for the difference signal based on its amplitude
(i.e., using fewer bits to encode small difference values), bandwidth can be saved and
quality improved. This is the principle of Adaptive Differential Pulse Code
Modulation (ADPCM).
4. What are the profiles in MPEG-2 video standard? NOV/DEC2015
1. Simple
2. Main
3. SNR scalable
4. Spatially scalable
5. High
6. 4:2:2
7. Multi-view
5. What are the different delays suffered by CELP coders? NOV/DEC2016, APR/MAY2015
There are two delays that a CELP (Code Excited Linear Prediction) coder suffers:
Processing delay: This delay occurs when each block of digitized samples is
analyzed by the encoder & the speech is reconstructed at the decoder.
Algorithmic delay: The time required to accumulate the block of samples is known
as algorithmic delay.
1. Discuss the techniques of DPCM with a neat diagram. What are the advantages of ADPCM
over DPCM? (APR/MAY2015(8)), (APR/MAY2017(16)), (NOV/DEC 2016(8))
Principle of DPCM
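The DPCM principle, transmitting the quantized difference between each sample and a prediction of it rather than the sample itself, can be sketched as follows (an illustrative Python toy with a first-order predictor and a trivial identity quantizer; a real codec uses an adaptive quantizer and predictor):

```python
def dpcm_encode(samples, quantize=lambda d: int(round(d))):
    """First-order DPCM: send the quantized difference between each sample
    and the prediction (here simply the previous reconstructed sample)."""
    prediction = 0
    codes = []
    for s in samples:
        diff = s - prediction          # prediction error
        q = quantize(diff)             # quantized difference (what is transmitted)
        codes.append(q)
        prediction = prediction + q    # reconstructed sample becomes the new prediction
    return codes

def dpcm_decode(codes):
    """Mirror the encoder's predictor to rebuild the samples."""
    prediction = 0
    out = []
    for q in codes:
        prediction = prediction + q
        out.append(prediction)
    return out

signal = [0, 2, 5, 7, 8, 8, 6, 3]
codes = dpcm_encode(signal)
assert dpcm_decode(codes) == signal
# The differences are small, so they need fewer bits than the raw samples.
print(codes)
```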
[Figure: power spectral density of the DPCM quantization error q, measured for intraframe
DPCM with a 16-level quantizer]
The general inferences that can be drawn from the above results are:
1. Backward adaptive predictors & quantizers are usually preferred in design, as forward
adaptive algorithms introduce delay & require more bandwidth, which is not acceptable
in connections having multiple links.
2. It is better to use backward adaptive predictors of higher order so as to obtain
a better estimate of the samples which also results in low MSE values.
3. The adaptive quantizer used in ADPCM should be chosen such that the reconstruction
levels used do not lose speech intelligibility. This is necessary as the adaptive quantizer
contributes significantly to the encoding time required.
"Filter bank
– used to decompose an input signal into subbands or spectral components
(time-frequency domain)
"Perceptual model (aka psychoacoustic model)
– usually analyzes the input signal instead of the filterbank outputs (time-
domain input provides better time and frequency resolution)
– computes signal-dependent masked threshold based on psychoacoustics
"Quantization and coding
– spectral components are quantized and encoded
– goal is to keep quantization noise below the masked threshold
"Frame packing
– bitstream formatter assembles the bitstream, which typically consists of the
coded data and some side information
– Perceptual models: masked threshold
– Perceptual models: tonality estimation
– Perceptual models: MPEG-1 Layer 2
The MPEG-4 format can perform various functions, among which might be the following:
Multiplexes and synchronizes data, associated with media objects, in such a way that
they can be efficiently transported further via network channels.
Interaction with the audio-visual scene, which is formed on the side of the receiver.
Before discussing H.263, which was adopted by ITU-T, it is worth telling the reader why we do
not talk about a standard called H.262, which should logically have appeared between H.261
and H.263: H.262 exists, but it is identical to the MPEG-2 video standard.
The other requirements of H.263 standardization were:
• Use of available technology
• Interoperability between the other standards, like H.261
H.264, also known as MPEG-4 AVC (Advanced Video Coding) or MPEG-4 Part 10,
improves video compression when compared to MPEG-4 and MPEG-2 by using
advanced algorithms, simplified integer transforms, and an in-loop deblocking filter.
MPEG-4
MPEG-4 is one of the most widely used codecs for video security. It offers
improved quality relative to MPEG-2. This codec is designed to operate within a
wide range of bit rates and resolutions, so it is well suited for the video surveillance
industry.
MPEG-2
MPEG-2 was approved as a standard in 1994 and was designed for high frame
and bit rates. MPEG-2 extends the earlier MPEG-1 compression standard to
produce high quality video at the expense of a lower compression ratio and at a
higher bit-rate. The frame rate is locked at 25 (PAL)/30 (NTSC) fps, as is the case
for MPEG-1.
JPEG
JPEG is a format specified in the JPEG still picture coding standard in which each
video frame is separately compressed as a JPEG image. JPEG is a very well-known
standard and is widely used in video surveillance applications and still image
cameras. The first generation of DVRs all used JPEG, but this is no longer the case.
JPEG 2000
JPEG 2000 is a wavelet-based image compression standard created by the Joint
Photographic Experts Group committee that provides better compression for still
image coding by filtering, sub-sampling, and “smoothing” video data to remove
unnecessary details. JPEG 2000 is very scalable and brings many new tools that
improve compression, but it requires significantly more processing power than JPEG
to encode an image.
6. In detail, explain the concept of linear and adaptive predictive coding standards with
necessary figures (APR/MAY2017(16))
Linear predictive coding (LPC) is a tool used mostly in audio signal processing and
speech processing for representing the spectral envelope of a digital speech signal
in compressed form, using the information of a linear predictive model. It is one of
the most powerful speech analysis techniques, and one of the most useful methods for
encoding good-quality speech at a low bit rate, providing extremely accurate
estimates of speech parameters.
LPC Applications
Standard telephone system
Text-to-Speech synthesis
Voice mail systems,
telephone answering machines
multimedia applications.
APC is related to linear predictive coding (LPC) in that both use adaptive
predictors. However, APC uses fewer prediction coefficients, thus requiring a higher
sampling rate than LPC.
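The core LPC idea, predicting each sample from past samples so that only a low-energy residual needs to be coded, can be sketched for a first-order predictor (an illustrative Python toy; the coefficient a = R(1)/R(0) comes from the signal's autocorrelation, whereas real LPC uses higher orders solved via Levinson-Durbin):

```python
import math

def lpc_order1(samples):
    """Order-1 linear predictor: s[n] ~ a * s[n-1], with the optimal
    coefficient a = R(1)/R(0) from the signal's autocorrelation."""
    r0 = sum(s * s for s in samples)
    r1 = sum(samples[n] * samples[n - 1] for n in range(1, len(samples)))
    return r1 / r0

# A slowly varying sine is highly predictable from its previous sample.
samples = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
a = lpc_order1(samples)

signal_energy = sum(s * s for s in samples)
residual_energy = sum((samples[n] - a * samples[n - 1]) ** 2
                      for n in range(1, len(samples)))
# The prediction residual carries far less energy than the signal itself,
# which is why it can be encoded with far fewer bits.
assert residual_energy < 0.01 * signal_energy
```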
7. With the aid of an example explain how DCT blocks are derived from macro blocks in
an I frame (NOV/DEC 2015(8))
A macroblock is also classified as field coded or frame coded depending on how the
four blocks are extracted from it. See Field DCT Coding and Frame DCT Coding.
Field DCT coding and frame DCT coding differ according to the contents of the
blocks that make up a macroblock.
In a frame coded macroblock, the four blocks each come from the same frame of
video.
In a field coded macroblock, there are two possibilities: either all four blocks come
from a given field of video, or two blocks come from one field and two from another field.
For progressive sequences, all pictures are frame pictures with frame DCT coded
macroblocks only.
For interlaced sequences, the encoder may decide on a frame by frame basis to use a
frame picture or two field pictures.
In the case of a field picture, all the blocks in every macroblock come from one field,
that is, there are only field coded macroblocks and no frame coded macroblocks.
In the case of an (interlaced) frame picture, the decision to use frame or field DCT
coding is made on a macroblock-by-macroblock basis.
- If the interlaced macroblock from an interlaced frame picture is frame DCT coded,
each of its four blocks has pixels from both fields.
- If the interlaced macroblock from an interlaced frame picture is field coded, each
block consists of pixels from only one of the two fields. Each 16x16 macroblock
is split into fields 16 pixels wide x 8 pixels high by taking alternating lines of
pixels, then each field is split into left and right parts, making two 8x8 blocks
from one field and two from the other field.
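The splitting rule above can be sketched in Python (an illustrative toy using nested lists of (line, column) pairs rather than real pixel data):

```python
def frame_dct_blocks(mb):
    """Frame DCT coding: split a 16x16 macroblock (a list of 16 rows of
    16 pixels) into four 8x8 blocks; each block mixes lines of both fields."""
    blocks = []
    for r0 in (0, 8):
        for c0 in (0, 8):
            blocks.append([row[c0:c0 + 8] for row in mb[r0:r0 + 8]])
    return blocks

def field_dct_blocks(mb):
    """Field DCT coding: take alternating lines to form two fields that are
    16 pixels wide x 8 high, then split each field into left and right 8x8 blocks."""
    top = mb[0::2]      # even lines -> top field
    bottom = mb[1::2]   # odd lines  -> bottom field
    blocks = []
    for field in (top, bottom):
        for c0 in (0, 8):
            blocks.append([row[c0:c0 + 8] for row in field])
    return blocks

# A toy macroblock where each "pixel" records its (line, column) position.
mb = [[(r, c) for c in range(16)] for r in range(16)]
field = field_dct_blocks(mb)
# Every line in a field-coded block comes from a single field (even lines here).
assert all(r % 2 == 0 for (r, _) in (row[0] for row in field[0]))
```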
An I-frame (intra-coded picture) is a complete image, like a JPG or BMP image file. P- and
B-frames hold only part of the image information (the part that changes between frames),
so they need less space in the output file than an I-frame.
ADPCM Encoder
Subsequent to the conversion of the A-law or µ-law PCM input signal to uniform
PCM, a difference signal is obtained by subtracting an estimate of the input signal from
the input signal itself. An adaptive 31-, 15-, 7-, or 4-level quantizer is used to assign five,
four, three, or two binary digits, respectively, to the value of the difference signal for
transmission to the decoder. An inverse quantizer produces a quantized difference signal
from these same five, four, three or two binary digits, respectively. The signal estimate is
added to this quantized difference signal to produce the reconstructed version of the
input signal. Both the reconstructed signal and the quantized difference signal are
operated upon by an adaptive predictor, which produces the estimate of the input signal,
thereby completing the feedback loop.
ADPCM Decoder
The decoder includes a structure identical to the feedback portion of the encoder,
together with a uniform PCM to A-law or µ-law conversion and a synchronous coding
adjustment.
The synchronous coding adjustment prevents cumulative distortion occurring on
synchronous tandem codings (ADPCM, PCM, ADPCM, etc., digital connections) under
certain conditions. The synchronous coding adjustment is achieved by adjusting the PCM
output codes in a manner which attempts to eliminate quantizing distortion in the next
ADPCM encoding stage.
I-frames
are encoded without reference to any other frames. Each frame is treated as a
separate (digitized) picture and the Y, Cb and Cr matrices are encoded independently
using the JPEG algorithm.
P-frames
PART – A
1. Define entropy encoding APR/MAY2017
Entropy coding is a type of lossless coding that compresses digital data by representing
frequently occurring patterns with few bits and rarely occurring patterns with many bits.
2. Define differential encoding APR/MAY2017
3. Give one application each suitable for lossy& lossless compression? NOV/DEC2015
Compression of satellite images is an example of lossless compression, whereas
compression of general images (movie stills) is a good example of lossy compression.
4. Derive the binary form of the following run length encoded AC coefficients (0,6) (0,7)(3,3)
(0,-1) (0,0) NOV/DEC2015
Avg L = Σ Li · P(i)
      = (10/20)·1 + (5/20)·2 + (3/20)·3 + (2/20)·3 = (10 + 10 + 9 + 6)/20 = 35/20 = 1.75
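The same computation as a small Python sketch (illustrative only), with the symbol counts 10, 5, 3 and 2 out of 20 and code lengths 1, 2, 3 and 3 bits:

```python
def average_code_length(counts, lengths):
    """Average codeword length: sum over symbols of P(i) * L(i),
    where P(i) = count(i) / total count."""
    total = sum(counts)
    return sum(c * l for c, l in zip(counts, lengths)) / total

print(average_code_length([10, 5, 3, 2], [1, 2, 3, 3]))   # 1.75
```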
LZW compression works by reading a sequence of symbols, grouping the symbols into
strings, and converting the strings into codes. Because the codes take up less space than the
strings they replace, we get compression. Characteristic features of LZW include:
LZW ENCODING
* PSEUDOCODE
1 Initialize table with single character strings
2 P = first input character
3 WHILE not end of input stream
4 C = next input character
5 IF P + C is in the string table
6 P=P+C
7 ELSE
8 output the code for P
9 add P + C to the string table
10 P=C
11 END WHILE
12 output code for P
LZW DECODING
* PSEUDOCODE
1 Initialize table with single character strings
2 OLD = first input code
3 output translation of OLD
4 WHILE not end of input stream
5 NEW = next input code
6 IF NEW is not in the string table
7 S = translation of OLD
8 S=S+C
9 ELSE
10 S = translation of NEW
11 output S
12 C = first character of S
13 add translation of OLD + C to the string table
14 OLD = NEW
15 END WHILE
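A runnable Python version of the two pseudocode listings above (an illustrative sketch; real implementations also cap the table size and pack the codes into bits):

```python
def lzw_encode(data: str):
    """LZW encoder following the pseudocode: grow strings while they stay
    in the table, and emit the code for the longest match found."""
    table = {chr(i): i for i in range(256)}   # single-character strings
    p, out = "", []
    for c in data:
        if p + c in table:
            p = p + c
        else:
            out.append(table[p])
            table[p + c] = len(table)         # add P + C to the string table
            p = c
    out.append(table[p])                      # output code for P
    return out

def lzw_decode(codes):
    """LZW decoder: rebuilds the same string table from the codes alone."""
    table = {i: chr(i) for i in range(256)}
    old = codes[0]
    out = [table[old]]
    for new in codes[1:]:
        # NEW may not be in the table yet (the classic KwKwK corner case).
        entry = table[new] if new in table else table[old] + table[old][0]
        out.append(entry)
        table[len(table)] = table[old] + entry[0]   # translation of OLD + C
        old = new
    return "".join(out)

text = "abababababab"
assert lzw_decode(lzw_encode(text)) == text
```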
3. (i) Describe the operation of the JPEG encoder and decoder with neat diagrams. (10)
APR/MAY2015 (10)
JPEG is an image compression standard that was developed by the “Joint
Photographic Experts Group”. JPEG was formally accepted as an international standard in
1992.
JPEG is a lossy image compression method. It employs a transform coding method
using the DCT (Discrete Cosine Transform).
ii) Give a brief note on GIF and TIFF formats APR/MAY2015 (6)
LZW compression
Transparency
Interlacing
Animation
Specsheet
Resolution
Name: GIF
Developer: CompuServe
Release date: 1987
Type of data: bitmap
Number of colors: 2, 4, 8, 16, 32, 64, 128 or 256
Color spaces: RGB
Compression algorithms: LZW
Ideal use: internet publishing
Extension on PC-platform: .gif
Macintosh file type: ?
Special features: support for transparency, interlacing, and animation
TIFF: stands for “Tagged Image File Format” and is one of the most widely supported file
formats for storing bit-mapped images on personal computers (both PCs and Macintosh
computers).
All professional image editing applications on the market are capable of opening TIFF
files. My favorite is Adobe Photoshop.
Avg L = Σ Li · P(i)
      = (1/5)·1 + (1/5)·2 + (1/5)·3 + (1/5)·4 + (1/5)·4 = (1 + 2 + 3 + 4 + 4)/5 = 14/5 = 2.8
Solution:
In general, a dictionary with an index of n bits can contain up to 2^n entries.
Now assume a dictionary of 16,000 words.
2^14 = 16,384, and hence an index of 14 bits is required.
Using a 7-bit ASCII codeword and an average of 5 characters per word requires 35
bits.
Hence the compression ratio is 35/14 = 2.5:1.
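The same arithmetic as a small Python sketch (illustrative only):

```python
import math

def index_bits(entries: int) -> int:
    """Bits needed to index a dictionary with `entries` entries:
    an n-bit index addresses up to 2**n entries."""
    return math.ceil(math.log2(entries))

bits = index_bits(16000)        # 14, since 2**14 = 16384 >= 16000
word_bits = 7 * 5               # 7-bit ASCII, average 5 characters per word
print(bits, word_bits / bits)   # compression ratio 35/14 = 2.5:1
```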
(ii)With help of diagram identify the five main stages associated with the baseline mode
of operation JPEG and give a brief description of the role of each stages (8) NOV/DEC2015,
APR/MAY2017(8)
International Standard for digital compression and coding of continuous-tone still images
(gray-scale and color).
7. Design a Huffman code and find average length for a source that puts letters from an
alphabet A={a1,a2,a3,a4,a5} with P(a1)=P(a3)=P(a4)=0.1, P(a2)=0.3 and P(a5)=0.4
NOV/DEC2016(8)
Symbol probabilities: a1 = 0.1, a2 = 0.3, a3 = 0.1, a4 = 0.1, a5 = 0.4.
Merging the two lowest probabilities at each step:
0.1 + 0.1 = 0.2
0.2 + 0.1 = 0.3
0.3 + 0.3 = 0.6
0.6 + 0.4 = 1.0
Resulting path (code) lengths:
a1 = 0.1 → 4
a2 = 0.3 → 2
a3 = 0.1 → 4
a4 = 0.1 → 3
a5 = 0.4 → 1
Average length = Σ P(i)·Li
= 0.4·1 + 0.3·2 + 0.1·4 + 0.1·3 + 0.1·4 = 0.4 + 0.6 + 0.4 + 0.3 + 0.4 = 2.1 bits/symbol
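A small Python sketch (illustrative, not part of the original answer) that builds the Huffman code with a min-heap and reproduces these code lengths and the 2.1 bits/symbol average:

```python
import heapq

def huffman_lengths(probs):
    """Build a Huffman code and return the code length for each symbol
    (indexed as in `probs`). Heap items carry (probability, tiebreak,
    list of symbol indices in the subtree)."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tiebreak = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)      # two lowest probabilities
        p2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:                  # each merge adds one bit
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

probs = [0.1, 0.3, 0.1, 0.1, 0.4]            # a1..a5
lengths = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, lengths))
print(lengths, round(avg, 1))                # lengths 4,2,4,3,1; average 2.1
```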
(ii) Describe dynamic Huffman code for the same output source with the above probabilities
NOV/DEC2016 (8)
In the following an example for a simple code tree is presented together with some
principal considerations.
Example: "abracadabra"
Symbol Frequency
a 5
b 2
r 2
c 1
d 1
According to the outlined coding scheme, the symbols "d" and "c" will be coupled together in a
first step. The new interior node will get the frequency 2.
Step 1: combine d (1) and c (1) → new node with frequency 2. Remaining: a = 5, b = 2, r = 2, node = 2.
Step 2: combine r (2) and the new node (2) → node with frequency 4. Remaining: a = 5, b = 2, node = 4.
Step 3: combine b (2) and the node (4) → node with frequency 6. Remaining: a = 5, node = 6.
Step 4: combine a (5) and the node (6) → root with frequency 11.
Code Table
If only one single node is remaining within the table, it forms the root of the Huffman tree. The
paths from the root node to the leaf nodes define the code word used for the corresponding
symbol:
PART – A
The major challenges of VOIP are good speech quality, low transmission delay, low
jitter and low loss of data during transmission and reception.
SS7(Signaling System no.7) enables a wide range of services including caller-ID, toll
free calling, call screening, number portability. SS7 is the foundation for intelligent network
services. SS7 supports VOIP for many new services.
VoIP is simply the transport of voice traffic using the Internet Protocol (IP). VoIP is
evaluated on its quality, reliability and scalability.
7. Differentiate lossy and lossless compression. APR/MAY 2017
If the reconstructed data at the receiving end is the same as the original data,
then it is a lossless compression.
If the reconstructed data at the receiving end differs from the original
data, then it is a lossy compression.
8. What are the different factors that determine the QoS of VoIP systems? APR/MAY 2017
QoS is a collective measure of the level of service delivered to a customer. QoS can
be characterized by several performance criteria such as availability, throughput and
connection setup time. QoS can be measured in terms of bandwidth, packet loss, delay and
jitter.
1. Explain the network architecture and protocols supporting the functionality of VOIP
networks NOV/DEC2016(16) ,APR/MAY2017(16)
VoIP (Voice over Internet Protocol) is the technology for voice conversations over the Internet
or any IP network. This means that it digitizes the voice signal and sends it in packets, rather
than sending it over the digital or analog circuits of a mobile phone company or the conventional
PSTN (acronym for Public Switched Telephone Network).
H.323 is a system specification that describes the use of several ITU-T and IETF
protocols that comprise the core of almost any H.323 system.
Codecs
H.323 utilizes both ITU-defined codecs and codecs defined outside the ITU. Codecs that are
widely implemented by H.323 equipment include:
Audio codecs: G.711, G.729 (including G.729a), G.723.1, G.726, G.722, G.728,
Speex, AAC-LD
Text codecs: T.140
Video codecs: H.261, H.263, H.264
Architecture
The H.323 system defines several network elements that work together in order to
deliver rich multimedia communication capabilities. Those elements are Terminals,
Multipoint Control Units (MCUs), Gateways, Gatekeepers, and Border Elements.
Collectively, terminals, multipoint control units and gateways are often referred to as
endpoints.
3. Explain in detail the network architecture and protocol design of SIP (16) NOV/DEC2015 ,
APR/MAY 2017(16)
SIP
SIP (Session Initiation Protocol) is a protocol to initiate sessions
• It is an application layer protocol used to
– establish
– modify
– terminate multimedia sessions
• It supports name mapping and redirection services transparently
The REGISTER method is used by a UA to indicate its current IP address and the URLs for
which it would like to receive calls.
– INVITE
• initiate sessions (session description included in the message body
encoded using SDP)
– ACK
• confirms session establishment
– BYE
• terminates sessions
– CANCEL
• cancels a pending INVITE
– REGISTER
• binds a permanent address to a current location
– OPTIONS
• capability inquiry
– Other extensions have been standardized
• e.g. INFO, UPDATE, MESSAGE, PRACK, REFER, etc.
4. Write a brief note on the challenges arising in applications of VoIP. APR/MAY 2015(8),
NOV/DEC2016(8)
Attacks on VoIP
Anti-tromboning
Back-to-Back User Agent
Call Origination
Chatter Bug
Downstream QoS
IP Multimedia Subsystem (IMS)
6. Explain the SS7 protocol suite and also discuss ISUP call establishment and release in detail.
APR/MAY2015
Signaling System No. 7 (SS7) is a set of telephony signaling protocols developed in 1975,
which is used to set up and tear down most of the world's public switched telephone
network (PSTN) telephone calls. It also performs number translation, local number
portability, prepaid billing, Short Message Service (SMS), and other mass market services.
MTP Level 1: This is the physical level of connectivity, virtually the same as Layer 1 of the OSI
model.
MTP Level 2: The data link level provides the network with sequenced delivery of all SS7
message packets.
TCAP
Transactional Capabilities Application Part (TCAP) facilitates connection to an external
database.
ASP
Application Service Part (ASP) provides the functions of Layers 4 through 6 of the OSI model.
SCCP
Signaling Connection Control Part (SCCP) is a higher-level protocol than MTP that provides
end-to-end routing.
TUP
Telephone User Part (TUP) is an analog protocol that performs basic telephone call connect and
disconnect.
ISUP
ISDN User Part (ISUP) supports basic telephone call connect/disconnect between end offices.
BISUP
Broadband ISDN User Part (BISUP) is an ATM protocol intended to support services such as
high-definition television (HDTV), multilingual TV, voice and image storage and retrieval, video
conferencing, high-speed LANs and multimedia.
Call Initiated
Calling party goes “off hook” on an originating switch (SSP) and dials the directory
number of the called party.
Destination switch (SSP) checks the dialed number against its routing table and confirms
that the called party’s line is available for ringing.
ISUP Call Released
If the calling party hangs up first, the originating switch sends an ISUP release
message (REL) to release the trunk between the two switches. If the called party releases
first, the destination switch sends an REL message to the originating switch to release the
circuit.
When the destination switch receives the REL, it disconnects and idles the trunk,
and transmits an ISUP release complete message (RLC) to the originating switch to
acknowledge the release of the remote end of the circuit.
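The REL/RLC handshake described above can be sketched as a tiny state model: whichever switch's party hangs up first sends REL, and the other side idles the trunk and answers with RLC. The switch names and structure below are an illustrative simplification, not a real ISUP implementation.

```python
# Sketch of the ISUP release handshake: the side whose party hangs up
# first sends REL; the peer idles the trunk and acknowledges with RLC.
# Switch names and message handling are simplified for illustration.

class Switch:
    def __init__(self, name):
        self.name = name
        self.trunk_busy = True      # trunk is in use while the call is up

    def hangup(self, peer):
        """Local party hangs up: release our end, send REL to the peer."""
        self.trunk_busy = False
        return peer.receive_rel()

    def receive_rel(self):
        """On REL: disconnect, idle the trunk, acknowledge with RLC."""
        self.trunk_busy = False
        return "RLC"

originating = Switch("originating SSP")
destination = Switch("destination SSP")

ack = originating.hangup(destination)   # calling party releases first
print(ack)                              # destination acknowledges with RLC
```

The symmetric case (called party releases first) would simply run the same exchange starting from the destination switch.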
Codecs are used to convert an analog voice signal to a digitally encoded version.
Codecs vary in sound quality, the bandwidth required, the computational
requirements, etc. Each service, program, phone, gateway, etc. typically supports several
different codecs, and endpoints negotiate which codec they will use when talking to each other.
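The negotiation mentioned above can be sketched as picking the first codec in the caller's preference list that the callee also supports. The codec lists below are illustrative examples drawn from the codecs described in this section; real negotiation (e.g. SDP offer/answer) carries more detail.

```python
# Sketch of codec negotiation: choose the first codec in the caller's
# preference order that the callee also supports. Lists are illustrative.

def negotiate(offer, answer):
    """Return the first codec in `offer` also present in `answer`,
    or None if the endpoints share no common codec."""
    supported = set(answer)
    for codec in offer:
        if codec in supported:
            return codec
    return None

caller = ["AMR", "iLBC", "G.726", "G.711"]   # caller's preference order
callee = ["G.711", "G.726", "BV16"]          # callee's supported set
print(negotiate(caller, callee))             # -> G.726
```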
AMR Codec
The AMR (Adaptive Multi-Rate) codec encodes narrowband (200-3400 Hz) signals
at variable bit rates ranging from 4.75 to 12.2 kbps with toll quality speech starting at 7.4
kbps. AMR is the required standard codec for 2.5G/3G wireless networks based on GSM
(WCDMA, EDGE, GPRS).
BroadVoice Codec
BroadVoice is based on Two-Stage Noise Feedback Coding (TSNFC) rather than the
popular Code-Excited Linear Prediction (CELP) coding paradigm. BroadVoice has two
variants: a 16 kb/s version called BroadVoice16 (BV16) for narrowband telephone-bandwidth
speech sampled at 8 kHz, and a 32 kb/s version called BroadVoice32 (BV32) for wideband
speech sampled at 16 kHz.
DoD CELP
The DoD CELP codec, also known as Federal Standard 1016, operates at 4.8 kbps.
GIPS
GIPS (Global IP Sound) is the producer of a family of VoIP codecs and related software.
GSM Codec
iLBC
iLBC is a VoIP codec originally created by Global IP Sound but made available
(including its source code) under a free and fairly liberal, though restricted, license,
including permission to modify.
ITU G.722
ITU G.723.1
ITU G.726
G.726 is an ITU standard codec. This codec uses the Adaptive Differential Pulse
Code Modulation (ADPCM) scheme.
PART – A
1. Define packet jitter. APR/MAY 2017
Jitter in IP networks is the variation in latency on a packet flow between two systems,
arising when some packets take longer to travel from one system to the other. A jitter buffer
(or de-jitter buffer) can mitigate the effects of jitter, either in the network on a router or
switch, or on the receiving computer.
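A simple way to quantify jitter is to look at how much consecutive packets' one-way delays differ. The sketch below takes jitter as the mean absolute variation between successive delays; the delay values are made-up examples, and real protocols (e.g. RTP) use a smoothed estimator instead.

```python
# Sketch: jitter measured as the mean absolute variation between the
# one-way delays (in ms) of consecutive packets. Delay values are made up.

def mean_jitter(delays_ms):
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

delays = [40, 45, 38, 60, 42]   # per-packet one-way delay in milliseconds
print(mean_jitter(delays))      # -> 13.0
```

A de-jitter buffer hides this variation by delaying playout long enough that even the slower packets arrive before their scheduled playout time.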
2. What is meant by RSVP? APR/MAY 2017
RSVP (Resource Reservation Protocol) is a signalling protocol that lets applications reserve
resources along a flow's path, addressing the limitations of best-effort service: packet loss,
excessive end-to-end delay and packet jitter.
PART – B
1. MULTIMEDIA NETWORKING
The network provides an application with the level of performance needed for the application
to function.
Multimedia applications: networked audio and video ("continuous media")
2. MM Networking Applications
Classes of MM applications:
1) Streaming stored audio and video
2) Streaming live audio and video
3) Real-time interactive audio and video
Fundamental characteristics:
❒Typically delay sensitive
❍end-to-end delay
❍delay jitter
❒But loss tolerant:
❍infrequent losses cause minor glitches
❒Antithesis of data, which are loss intolerant but delay tolerant.
Principle 1
Packet marking is needed for the router to distinguish between different classes, along with a
new router policy to treat packets accordingly.
What if applications misbehave (audio sends at a higher rate than declared)?
❍policing: force source adherence to bandwidth allocations
❒marking and policing at network edge:
❍similar to ATM UNI (User Network Interface)
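Policing at the network edge is commonly done with a token bucket: tokens accumulate at the declared rate, and a packet is forwarded only if enough tokens are available. The sketch below uses illustrative parameters (rate, bucket depth, packet sizes), not values from any standard.

```python
# Sketch of edge policing with a token bucket: conforming packets pass,
# packets exceeding the declared rate are dropped (or marked).
# Rate, bucket depth, and packet sizes are illustrative.

class TokenBucket:
    def __init__(self, rate_bps, bucket_bits):
        self.rate = rate_bps          # token generation rate r (bits/s)
        self.capacity = bucket_bits   # bucket depth b (bits)
        self.tokens = bucket_bits     # bucket starts full
        self.last = 0.0               # time of last refill

    def conforms(self, now, packet_bits):
        """Refill tokens for the elapsed time, then test the packet."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True               # within declared rate: forward
        return False                  # misbehaving source: drop or mark

tb = TokenBucket(rate_bps=1000, bucket_bits=1500)
print(tb.conforms(0.0, 1000))   # -> True  (bucket starts full)
print(tb.conforms(0.1, 1000))   # -> False (only ~600 tokens left)
```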
Principle 2
provide protection (isolation) for one class from others
Principle 3
While providing isolation, it is desirable to use resources as efficiently as possible
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4
Call Admission: flow declares its needs, network may block call (e.g., busy signal) if it
cannot meet needs
6. Explain the Scheduling and Policing Mechanisms suitable for multimedia systems with
suitable diagrams. NOV/DEC 2016(8), APR/MAY 2015(16), APR/MAY 2017(16)
❒scheduling: choose next packet to send on link
❒FIFO (first in first out) scheduling: send in order of arrival to queue
❍real-world example?
❍discard policy: if packet arrives to full queue: who to discard?
• Tail drop: drop arriving packet
• priority: drop/remove on priority basis
• random: drop/remove randomly
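The FIFO-with-tail-drop combination above can be sketched as a bounded queue: packets are sent in arrival order, and an arrival to a full queue is discarded. The capacity and packet names below are illustrative.

```python
# Sketch of FIFO scheduling with a tail-drop discard policy:
# send in arrival order; an arrival to a full queue is dropped.

from collections import deque

class FifoQueue:
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def arrive(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped += 1          # tail drop: discard the arrival
        else:
            self.q.append(pkt)

    def send(self):
        """Transmit the next packet in order of arrival."""
        return self.q.popleft() if self.q else None

link = FifoQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4"]:   # p4 arrives to a full queue
    link.arrive(pkt)
print(link.send(), link.dropped)       # -> p1 1
```

A priority or random discard policy would differ only in which queued packet gets removed when the queue is full.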
Token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS
guarantee!
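The bound works as follows: for a flow policed by a token bucket of depth b, served by WFQ at a guaranteed rate R at least equal to the token rate, the worst case is a full bucket of b bits draining at rate R, giving a maximum queueing delay of b/R. The numbers below are illustrative.

```python
# Sketch of the token-bucket + WFQ delay guarantee: a flow policed by a
# bucket of depth b, served by WFQ at guaranteed rate R (>= token rate r),
# sees a maximum queueing delay of b / R. Numbers are illustrative.

def max_delay(bucket_bits, wfq_rate_bps):
    """Worst case: a full bucket of b bits drains at rate R."""
    return bucket_bits / wfq_rate_bps

b = 8000       # bucket depth: 8000 bits
R = 100000     # WFQ guaranteed rate: 100 kbps
print(max_delay(b, R))   # -> 0.08 (seconds)
```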
Forwarding (PHB)
❒A PHB results in a different observable (measurable) forwarding performance behaviour
❒A PHB does not specify what mechanisms to use to ensure the required performance
behaviour
❒Examples:
❍Class A gets x% of outgoing link bandwidth over time intervals of a specified
length
❍Class A packets leave first before packets from class B
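The second example above (class A packets leave before class B) corresponds to strict priority forwarding, sketched below with two queues; one possible mechanism, not the only one a PHB permits.

```python
# Sketch of strict priority forwarding: any queued class A packet is
# transmitted before packets from class B. Packet names are illustrative.

from collections import deque

queues = {"A": deque(), "B": deque()}

def enqueue(cls, pkt):
    queues[cls].append(pkt)

def dequeue():
    """Serve class A first; fall back to class B only when A is empty."""
    for cls in ("A", "B"):
        if queues[cls]:
            return queues[cls].popleft()
    return None

enqueue("B", "b1")
enqueue("A", "a1")
enqueue("B", "b2")
print(dequeue(), dequeue(), dequeue())   # -> a1 b1 b2
```

The first example (class A gets x% of the link bandwidth) would instead use a weighted scheduler such as WFQ.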
Call Admission
An arriving session must:
❒declare its QoS requirement
❍R-spec: defines the QoS being requested
❒characterize traffic it will send into network
❍T-spec: defines traffic characteristics
❒signalling protocol: needed to carry R-spec and T-spec to routers (where reservation is
required)
❍RSVP
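Call admission on a single link can be sketched as a running reservation total: a flow declaring its rate in an R-spec is admitted only if that rate still fits in the remaining capacity, otherwise the network blocks the call (the "busy signal"). Capacities and rates below are illustrative.

```python
# Sketch of call admission on one link: admit a flow's declared R-spec
# rate only if it fits in the unreserved capacity; otherwise block the
# call ("busy signal"). Capacities and rates are illustrative.

class Link:
    def __init__(self, capacity_bps):
        self.capacity = capacity_bps
        self.reserved = 0

    def admit(self, rspec_rate_bps):
        if self.reserved + rspec_rate_bps <= self.capacity:
            self.reserved += rspec_rate_bps
            return True               # reservation accepted
        return False                  # needs cannot be met: block call

link = Link(capacity_bps=1_000_000)
print(link.admit(600_000))   # -> True
print(link.admit(600_000))   # -> False (would exceed capacity)
```

In practice RSVP carries the R-spec and T-spec to every router on the path, and the flow is admitted only if each hop accepts the reservation.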
Guaranteed service: