
Video:

Video is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion. Video is a medium of communication that delivers more information per second than any other element of multimedia. Nowadays, much of the video we see on TV and in movies has a digital component. For example, many of the special effects in movies are digitally generated using computers.

Types of video:
Analog video: Analog video is represented as a continuous (time-varying) signal, in which the intensity and the color components vary with the (x, y) coordinates and with time.

Digital video: Digital video is represented as a sequence of digital images.


Each point of the image, or basic element of the image, is called a pixel (or pel). Each individual image is called a frame.
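These terms can be illustrated with a minimal sketch in pure Python, where a video is a list of frames and each frame is a 2-D grid of pixel intensities (the dimensions and pixel values below are made up for illustration):

```python
# A tiny "video": NUM_FRAMES frames, each HEIGHT x WIDTH pixels (0-255).
WIDTH, HEIGHT, NUM_FRAMES = 4, 3, 2

# Pixel value encodes its position, purely for illustration.
video = [
    [[(f * 10 + y * WIDTH + x) % 256 for x in range(WIDTH)]
     for y in range(HEIGHT)]
    for f in range(NUM_FRAMES)
]

frame = video[0]        # one frame (a single still image)
pixel = frame[1][2]     # the pixel (pel) at row y=1, column x=2

print(len(video), len(frame), len(frame[0]))  # frames, rows, columns
```

Indexing `video[f][y][x]` retrieves a single pel; indexing `video[f]` retrieves a whole frame.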

Advantages of Digital Video:


Random access, allowed by the digital format, enables us to quickly jump to any point in a movie. In analog format, we have to wind the tape backward and forward to reach a point in the movie. The digital format also allows us to quickly cut, paste, or otherwise edit video, and it makes it easy to add special effects.

It is also easy to duplicate digital video without loss of quality. With analog tape, video producers lose some quality each time they edit the video; this loss in quality is known as generation loss. Video producers can now convert real-life footage they have shot into digital format and edit it without losing the original quality of the film. Finally, digital video allows for interactivity.
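The random-access advantage can be made concrete with a little arithmetic: in an uncompressed digital file every frame occupies the same number of bytes, so a player can jump straight to frame n by computing its byte offset. A sketch with assumed, illustrative parameters (not tied to any real file format):

```python
# Illustrative frame parameters (assumptions, not a real format).
width, height = 640, 480          # frame dimensions in pixels
bytes_per_pixel = 3               # 24-bit RGB
frame_size = width * height * bytes_per_pixel   # 921,600 bytes per frame

def frame_offset(n: int) -> int:
    """Byte offset of frame n: a constant-time jump, no tape winding."""
    return n * frame_size

print(frame_offset(100))          # offset of frame 100 in the file
```

A tape, by contrast, must be physically wound past the first 100 frames before frame 100 can be read.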

Data compression:
Image and video data compression refers to a process in which the amount of data used to represent an image or video is reduced to meet a bit-rate requirement (below, or at most equal to, the maximum available bit rate), while the quality of the reconstructed image or video satisfies a requirement for a certain application and the complexity of the computation involved is affordable for that application. Bit rate (also known as coding rate), an important parameter in image and video compression, is often expressed in bits per second (bps), which is suitable for visual communication. The required quality of the reconstructed image and video is application dependent. In medical diagnosis and some scientific measurements, we may need the reconstructed image or video to mirror the original; in other words, only reversible, information-preserving schemes are allowed. This type of compression is referred to as lossless compression. In applications such as motion pictures and television (TV), a certain amount of information loss is allowed. This type of compression is called lossy compression.
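The lossless/lossy distinction can be demonstrated on a small byte string. In this sketch, `zlib` (from the Python standard library) provides genuinely reversible, lossless compression, while coarse quantization stands in as a toy illustration of the irreversible information loss of a lossy scheme:

```python
import zlib

pixels = bytes(range(256)) * 16          # 4096 bytes of sample "image" data

# Lossless: decompression returns the original data bit-for-bit.
compressed = zlib.compress(pixels)
assert zlib.decompress(compressed) == pixels

# Lossy (toy): keep only the top 4 bits of each pixel value. The result
# compresses better afterwards, but the originals cannot be recovered.
quantized = bytes(p & 0xF0 for p in pixels)

print(len(pixels), len(compressed))      # lossless already shrinks the data
```

A real lossy codec is far more sophisticated than bit masking, but the essential trade is the same: discard information the application can tolerate losing, in exchange for a lower bit rate.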

Need for video compression:


With increasingly demanding video services, such as three-dimensional (3-D) movies and games, and demands for higher video quality, such as HDTV, advanced image and video data compression is necessary. It has become an enabling technology that bridges the gap between the huge amount of video data required and limited hardware capability. It makes it possible to use digital video in transmission and storage environments that could not support uncompressed (raw) video. Data represents information, and the quantity of data can be measured. In the context of digital images and video, data is usually measured in binary units (bits).
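The size of this gap can be shown with back-of-the-envelope arithmetic. The sketch below uses typical illustrative HDTV-like parameters (assumed values, not taken from any standard):

```python
# Raw (uncompressed) bit rate of an HDTV-like stream.
width, height = 1920, 1080      # pixels per frame
bits_per_pixel = 24             # 8 bits for each of three color components
frames_per_sec = 30

raw_bps = width * height * bits_per_pixel * frames_per_sec
print(raw_bps)                  # roughly 1.5 gigabits per second, uncompressed
```

A raw rate of this magnitude is far beyond typical broadcast or network channels, which is precisely why compression is an enabling technology rather than an optimization.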

Redundancies in images and video:


Image and video compression is not only a necessity for the rapid growth of digital visual communications, but is also feasible. Its feasibility rests on two types of redundancy, i.e., spatial redundancy and temporal redundancy. By making use of these redundancies, we can achieve image and video compression.

Spatial redundancy: Spatial redundancy represents the statistical correlation between pixels within an image frame. Hence it is also called intraframe redundancy. Spatial redundancy implies that the intensity value of a pixel can be guessed from that of its neighboring pixels. In other words, it is not necessary to represent each pixel in an image frame independently.
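One common way to exploit spatial redundancy is simple predictive coding: each pixel is predicted from its left neighbor, and only the (usually small) differences are stored. The sketch below uses a made-up image row for illustration; note that this particular step is itself lossless:

```python
# A smooth row of pixel intensities: neighbors are highly correlated.
row = [100, 101, 101, 103, 104, 104, 105]

# Encode: first pixel as-is, then each pixel's difference from its neighbor.
residuals = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

# Decode: a running sum reconstructs the row exactly.
decoded, total = [], 0
for r in residuals:
    total += r
    decoded.append(total)

assert decoded == row
print(residuals)    # small values, cheaper to entropy-code than raw pixels
```

The residuals cluster near zero, so a subsequent entropy coder can represent them with far fewer bits than the original 8-bit pixel values.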

Temporal redundancy: Temporal redundancy is concerned with the statistical correlation between pixels from successive frames in a temporal image or video sequence. Therefore, it is also called interframe redundancy.

The need for video compression standards:


Image and video compression has been a very active field of research and development for over 20 years, and many different systems and algorithms for compression and decompression have been proposed and developed. In order to encourage interworking, competition, and increased choice, it has been necessary to define standard methods of compression encoding and decoding so that products from different manufacturers can communicate effectively. This has led to the development of a number of key international standards for image and video compression, including the JPEG, MPEG, and H.26x series of standards. To obtain a 2-D sampled image, a camera focuses a 2-D projection of the video scene onto a sensor, such as an array of Charge Coupled Devices (a CCD array). In the case of color image capture, each color component is separately filtered and projected onto a CCD array.

MPEG-4 AND H.264


MPEG-4 Visual and H.264 (also known as Advanced Video Coding) are standards for the coded representation of visual information. Each standard is a document that primarily defines two things: a coded representation (or syntax) that describes visual data in a compressed form, and a method of decoding the syntax to reconstruct visual information. Each standard aims to ensure that compliant encoders and decoders can successfully interwork with each other, whilst allowing manufacturers the freedom to develop competitive and innovative products. The standards specifically do not define an encoder; rather, they define the output that an encoder should produce. A decoding method is defined in each standard, but manufacturers are free to develop alternative decoders as long as they achieve the same result as the method in the standard.

MPEG-4 Visual (Part 2 of the MPEG-4 group of standards) was developed by the Moving Picture Experts Group (MPEG), a working group of the International Organization for Standardization (ISO). This group of several hundred technical experts (drawn from industry and research organisations) meets at two- to three-month intervals to develop the MPEG series of standards. MPEG-4 (a multipart standard covering audio coding, systems issues, and related aspects of audio/visual communication) was first conceived in 1993, and Part 2 was standardized in 1999.

The H.264 standardization effort was initiated by the Video Coding Experts Group (VCEG), a working group of the International Telecommunication Union (ITU-T) that operates in a similar way to MPEG and has been responsible for a series of visual telecommunication standards. The final stages of developing the H.264 standard were carried out by the Joint Video Team, a collaborative effort of both VCEG and MPEG, making it possible to publish the final standard under the joint auspices of ISO/IEC (as MPEG-4 Part 10) and ITU-T (as Recommendation H.264) in 2003.

Some abbreviations:

CODEC: COder/DECoder pair
H.264: A video coding standard
ISO: International Organization for Standardization, a standards body
ITU: International Telecommunication Union, a standards body
JPEG: Joint Photographic Experts Group, a committee of ISO (also an image coding standard)
Macroblock: Region of a frame coded as a unit (usually 16 x 16 pixels in the original frame)
MPEG: Motion Picture Experts Group, a committee of ISO/IEC
MPEG-1: A multimedia coding standard
MPEG-2: A multimedia coding standard
MPEG-4: A multimedia coding standard
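The macroblock entry above can be illustrated with a short calculation: a frame is partitioned into a grid of 16 x 16-pixel macroblocks, each coded as a unit. The frame size used below (CIF, 352 x 288) is an illustrative choice:

```python
MB = 16                              # macroblock side in pixels

def macroblock_grid(width: int, height: int) -> tuple:
    """Macroblocks per row and column (dimensions assumed divisible by 16)."""
    return width // MB, height // MB

cols, rows = macroblock_grid(352, 288)    # CIF resolution
print(cols, rows, cols * rows)            # the grid and total macroblock count
```

An encoder visits these macroblocks one by one, choosing a prediction and coding mode for each, which is why the macroblock is the basic coding unit in the MPEG and H.264 standards.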
