MC-12 (MPEG Video Compression)
[Figure: MPEG encoder block diagram — inverse quantizer, IDCT, motion-compensation frame memory, motion estimation, motion vectors, predicted frame]
MPEG Decoder
[Figure: MPEG decoder block diagram — entropy decoder, inverse quantizer, IDCT, previous and future picture stores, motion compensation, output multiplexer]
Types of Frames
I frames (intra-coded pictures) are coded without using information from other frames (intraframe coding). An I frame is treated as a still image, so here MPEG falls back on JPEG-style coding.
Because they are coded without reference to other images, I frames can be decoded on their own. In a video sequence, I frames appear at regular intervals.
P-Frame (Prediction Frame)
P frames require information from the previous I and/or P frame for encoding and decoding.
A P frame can therefore be decoded only after the referenced I or P frame has been decoded.
The number of P frames in a group of pictures is generally limited, because an error in one frame may propagate into the others. The number of frames between a P frame and the preceding I or P frame is called the prediction span, which typically ranges from 1 to 3.
The compression ratio of P frames is higher than that of I frames, typically about 20:1 to 30:1.
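As a rough worked example of what these ratios mean in bytes, here is a minimal sketch. The 352x240 frame size, 4:2:0 chroma subsampling, 8-bit samples, and the ~10:1 I-frame ratio are assumptions used only for illustration; the 20:1 to 30:1 P-frame range comes from the text.

```python
# Rough frame-size arithmetic for the compression ratios quoted above.
# Assumptions (not from the slides): 352x240 frame, 4:2:0 chroma, 8-bit samples.
width, height = 352, 240
raw_bytes = width * height * 1.5          # luma plus two quarter-size chroma planes

def compressed_size(ratio):
    """Compressed frame size in bytes for a given compression ratio."""
    return raw_bytes / ratio

print(f"raw frame:             {raw_bytes / 1024:6.1f} KB")
print(f"I frame at ~10:1:      {compressed_size(10) / 1024:6.1f} KB  (illustrative ratio)")
print(f"P frame at 20:1-30:1:  {compressed_size(30) / 1024:.1f}-{compressed_size(20) / 1024:.1f} KB")
```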
Types of Individual Images in MPEG: I, B and P Frames
[Figure: frame-type sequence along the time axis, e.g. I B B P B B P ...]
B Frames
B frames (bidirectionally predictive-coded pictures) fill in the frames between the reference frames.
They require information from both the previous and the following I and/or P frame for encoding and decoding.
The highest compression ratio is attainable with these frames.
B frames are never used as a reference for other frames, and the reference frames must be transmitted first.
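Because the reference frames must be transmitted first, the bitstream order differs from the display order. The sketch below illustrates that reordering; it is a simplified model written for this note, not code from any MPEG implementation.

```python
# Minimal sketch: reorder a group of pictures from display order to
# transmission order, so that each B frame's future reference (the next
# I or P frame) is sent before the B frames that depend on it.
def transmission_order(display_order):
    out, pending_b = [], []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)   # hold B frames until their future reference arrives
        else:                         # I or P frame: a reference frame
            out.append(frame)         # send the reference first ...
            out.extend(pending_b)     # ... then the B frames that needed it
            pending_b = []
    return out + pending_b

display = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]
print(transmission_order(display))
# ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```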
Details
If a P-picture is completely different from the preceding I-picture (so that it is unpredictable), it is simply coded as a new I-picture. Normally, P-pictures are used to predict other P-pictures several times before MPEG encodes another I-picture.
The MPEG standard allows as many as three B-pictures in a row. I-pictures typically occur about twice per second, which means that P-pictures are used to forward-predict two to five following P-pictures before another I-picture is coded.
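Putting these rules together, a group of pictures can be described by its length and by the spacing between reference frames. The generator below is a hedged sketch using the usual textbook parameter names N and M; the specific values (a 15-frame GOP from 30 fps with two I-pictures per second) are illustrative, not mandated by the standard.

```python
# Sketch of a group-of-pictures (GOP) pattern generator.
# n = GOP length in frames, m = spacing between reference (I/P) frames,
# so each reference frame is followed by m-1 B frames.
def gop_pattern(n=15, m=3):
    """Return the display-order frame types for one GOP."""
    types = []
    for i in range(n):
        if i == 0:
            types.append("I")
        elif i % m == 0:
            types.append("P")
        else:
            types.append("B")
    return "".join(types)

# 30 fps with an I-picture roughly twice per second -> a 15-frame GOP
print(gop_pattern(15, 3))   # IBBPBBPBBPBBPBB
```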
MPEG-1
Designed for bit rates up to 1.5 Mbit/s.
A standard for the compression of moving pictures and audio that allows easy editing.
The MPEG-1 standard provides a video resolution of 352 x 240 at 30 frames per second (fps).
It was designed around CD-ROM video applications and is a popular standard for video on the Internet, transmitted as .mpg files. In addition, Layer 3 of MPEG-1 audio is the most popular standard for digital audio compression, known as MP3.
MPEG-1 is the compression standard for Video CD, the most popular video distribution format throughout much of Asia.
It produces video quality slightly below that of conventional VCR video.
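A quick back-of-the-envelope check shows why these numbers go together. The 352 x 240 at 30 fps resolution and the 1.5 Mbit/s target come from the text; the 4:2:0 chroma subsampling and 8-bit samples are assumptions made for the calculation.

```python
# Rough check of the MPEG-1 numbers above.
width, height, fps = 352, 240, 30
raw_bps = width * height * 1.5 * 8 * fps   # uncompressed bits per second (4:2:0, 8-bit)
target_bps = 1.5e6                          # MPEG-1 target bit rate

print(f"raw source:     {raw_bps / 1e6:5.1f} Mbit/s")
print(f"MPEG-1 target:  {target_bps / 1e6:5.1f} Mbit/s")
print(f"implied ratio: ~{raw_bps / target_bps:.0f}:1")
```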
MPEG-2
Offers resolutions of 720 x 480 and 1280 x 720 at 60 fps, with full CD-quality audio. This is sufficient for all the major TV standards, including NTSC and even HDTV.
MPEG-2 can compress a 2 hour video into a
few gigabytes. While decompressing an
MPEG-2 data stream requires only modest
computing power, encoding video in MPEG-2
format requires significantly more processing
power.
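The "2-hour video into a few gigabytes" claim follows directly from the bit rate. The 5 Mbit/s figure below is an assumed, DVD-like average rate, used only to make the arithmetic concrete.

```python
# Rough size of a 2-hour MPEG-2 video at an assumed average bit rate.
bitrate_bps = 5e6                  # assumed average MPEG-2 bit rate (not from the text)
duration_s = 2 * 60 * 60           # two hours
size_gb = bitrate_bps * duration_s / 8 / 1e9
print(f"{size_gb:.1f} GB")         # about 4.5 GB at 5 Mbit/s
```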
MPEG-2
Designed for bit rates between 1.5 and 15 Mbit/s.
The standard on which digital television set-top boxes and DVD compression are based. It builds on MPEG-1, but is designed for the compression and transmission of digital broadcast television.
The most significant enhancement over MPEG-1 is its ability to efficiently compress interlaced video.
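For readers unfamiliar with the term, an interlaced frame carries two fields, captured at slightly different instants: the even-numbered lines and the odd-numbered lines. The snippet below only illustrates that line-level structure; it is not an MPEG-2 coding tool.

```python
# Illustrative only: splitting an interlaced frame into its two fields.
def split_fields(frame):
    """Split a frame (a list of rows) into its top and bottom fields."""
    top_field = frame[0::2]       # even-numbered lines
    bottom_field = frame[1::2]    # odd-numbered lines
    return top_field, bottom_field

frame = [[row] * 4 for row in range(6)]   # toy 6-line "frame"
top, bottom = split_fields(frame)
print(len(top), len(bottom))              # 3 3
```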
MPEG-4 Coding of audio-visual objects
A graphics and video compression standard that builds on MPEG-1, MPEG-2, and Apple QuickTime technology.
A standard for multimedia and Web compression.
MPEG-4 uses object-based compression, similar in nature to the Virtual Reality Modeling Language (VRML).
Individual objects within a scene are tracked and compressed separately, then combined to create an MPEG-4 file.
This results in very efficient compression that is highly scalable, from very low bit rates to very high ones.
MPEG-4 (Contd)
It also allows developers to control objects independently within a scene, and therefore to introduce interactivity. The standard makes it possible to (see the sketch after this list):
Place media objects anywhere in a given coordinate system;
Apply transforms to change the geometrical or acoustical appearance of a media object;
Group primitive media objects to form compound media objects;
Apply streamed data to media objects in order to modify their attributes (e.g. a sound, a moving texture belonging to an object, or animation parameters driving a synthetic face);
Change, interactively, the user's viewing and listening points anywhere in the scene.
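The sketch below models the object-based scene composition described above: separately controlled media objects, transforms, and grouping into compound objects. All names here (MediaObject, Group, transform) are invented for illustration and are not part of any MPEG-4 API.

```python
# Minimal sketch of an object-based scene, in the spirit of MPEG-4 composition.
class MediaObject:
    def __init__(self, name, x=0.0, y=0.0, scale=1.0):
        self.name, self.x, self.y, self.scale = name, x, y, scale

    def transform(self, dx=0.0, dy=0.0, scale=1.0):
        """Change the object's geometrical appearance (placement, size)."""
        self.x += dx
        self.y += dy
        self.scale *= scale

class Group(MediaObject):
    """A compound media object built from primitive media objects."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

# Each object is coded and controlled separately, then composed into a scene.
background = MediaObject("background")
presenter  = MediaObject("presenter", x=120, y=40)
logo       = MediaObject("logo", x=300, y=10, scale=0.5)
scene      = Group("scene", [background, Group("overlay", [presenter, logo])])

presenter.transform(dx=5)      # move one object without touching the others
print(presenter.x)             # 125
```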
MPEG-7 Multimedia content description interface
MPEG-7, the Multimedia Content Description
Interface Standard, is the standard for rich
descriptions of multimedia content, enabling
highly sophisticated management, search, and
filtering of that content.
MPEG-7 is designed to be generic and not
targeted to a specific application.
The main tools used to implement MPEG-7
descriptions are the Description Definition
Language (DDL), Description Schemes (DSs),
and Descriptors (Ds).
MPEG-7 addresses both retrieval from digital archives (pull applications) and the filtering of streamed audiovisual broadcasts on the Internet (push applications).
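To make the D/DS distinction concrete, here is a toy description and a query against it. Real MPEG-7 descriptions are written in an XML-based DDL; the field names below are invented for illustration and do not follow the actual MPEG-7 schemas.

```python
# Illustrative only: the kind of structured description MPEG-7 targets.
clip_description = {
    "descriptors": {                 # Ds: features of the content itself
        "dominant_color": "blue",
        "duration_s": 95,
    },
    "description_scheme": {          # DS: structure relating descriptors
        "title": "Goal highlight",
        "segments": [
            {"start_s": 0,  "end_s": 40, "keywords": ["build-up"]},
            {"start_s": 40, "end_s": 95, "keywords": ["goal", "celebration"]},
        ],
    },
}

# A pull application (archive search) or a push application (stream filtering)
# would match queries against descriptions like this one.
matches = [seg for seg in clip_description["description_scheme"]["segments"]
           if "goal" in seg["keywords"]]
print(matches)
```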
MPEG-21 Multimedia framework
MPEG-21 will attempt to describe the elements
needed to build an infrastructure for the delivery
and consumption of multimedia content, and how
they will relate to each other.
Digital Items can be considered the "what" of the Multimedia Framework (e.g., a video collection or a music album), and Users the "who".
MPEG-21 identifies and defines the mechanisms
and elements needed to support the multimedia
delivery chain.
MPEG-21
Includes a Rights Expression Language
(REL) and a Rights Data Dictionary.
Unlike other MPEG standards, which describe compression coding methods, MPEG-21 defines the description of content together with the processes for accessing, searching, storing, and protecting the copyrights of that content.