Fundamentals of Multimedia

UNIT 1: INTRODUCTION TO MULTIMEDIA

1.1 MULTIMEDIA - INTRODUCTION


Multimedia systems are becoming an integral part of our heterogeneous computing and
communication environment. We have seen an explosive growth of multimedia
computing, communication, and applications over the last decade. The World Wide Web,
conferencing, digital entertainment, and other widely used applications are using not only
text and images but also video, audio, and other continuous media. In the future, all
computers and networks will include multimedia devices. They will also require
corresponding processing and communication support to provide appropriate services for
multimedia applications in a seamless and often also ubiquitous way.
Multimedia is probably one of the most overused terms of the 90s. The field is at the
crossroads of several major industries: computing, telecommunications, publishing,
consumer audio-video electronics, and television/movie/broadcasting. Multimedia not only
brings new industrial players to the game but also adds a new dimension to the potential
market.
The word multimedia is composed of two parts: the prefix Multi and the root Media. The
prefix Multi does not pose any difficulty; it comes from the Latin word “multus”, which
means numerous or many. The root media has a more complicated story. Media is the plural
form of the Latin word "medium", a noun meaning "middle" or "center".
Multimedia is the integration of multiple forms of media, including text, audio,
animation, video, and so on. A medium is a means to distribute and represent
information. Media are, for example, text, graphics, pictures, voice, sound, and music.
1.2 CHARACTERISTICS OF A MULTIMEDIA SYSTEM
 Multimedia systems must be computer controlled.
 Multimedia systems are integrated.
 The information they handle must be represented digitally.
 The interface to the final presentation of media is usually interactive.
1.3 CHALLENGES FOR MULTIMEDIA SYSTEMS
Supporting multimedia applications over a computer network renders the application
distributed. Multimedia systems may have to render a variety of media at the same
instant -- a distinction from normal applications. There is a temporal relationship
between many forms of media (e.g. video and audio). There are two forms of problems
here:
 Sequencing within the media -- playing frames in correct order/time frame in
video
 Synchronisation -- inter-media scheduling (e.g. video and audio). Lip
synchronisation is clearly important when humans watch playback of video
with audio, and even animation with audio.
The key issues multimedia systems need to deal with here are:
 How to represent and store temporal information.
 How to strictly maintain the temporal relationships on playback/retrieval.
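
As a minimal sketch of these two problems (the frame rate, sample rate, and block size below are illustrative assumptions, not values from the text), a player can attach a presentation timestamp to every video frame and audio block and schedule both streams from the same clock:

```python
# Sketch: deriving presentation timestamps so that video frames play in
# the correct order/time frame and stay synchronised with audio blocks.
VIDEO_FPS = 25          # video frames per second (assumed)
AUDIO_RATE = 44100      # audio samples per second (assumed)
AUDIO_BLOCK = 1764      # samples per audio block = 40 ms at 44.1 kHz

def video_timestamp(frame_index: int) -> float:
    """Time in seconds at which a video frame must be displayed."""
    return frame_index / VIDEO_FPS

def audio_timestamp(block_index: int) -> float:
    """Time in seconds at which an audio block must start playing."""
    return block_index * AUDIO_BLOCK / AUDIO_RATE

# With these values, frame i and audio block i share a timestamp, so a
# scheduler that honours the timestamps keeps the two media in lip sync.
for i in range(3):
    print(f"frame {i}: {video_timestamp(i):.3f}s, "
          f"audio block {i}: {audio_timestamp(i):.3f}s")
```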
1.4 HISTORY
Newspapers were perhaps the first mass communication medium to employ multimedia -
they used mostly text, graphics, and images.
In 1895, Guglielmo Marconi sent his first wireless radio transmission at Pontecchio, Italy.
A few years later (in 1901) he detected radio waves beamed across the Atlantic. Initially
invented for telegraph, radio is now a major medium for audio broadcasting.
Television was the new medium of the 20th century. It brought video and has since
changed the world of mass communications.
Some of the important events in relation to Multimedia in Computing include:
1945 - Bush wrote about Memex.
1967 - Negroponte formed the Architecture Machine Group at MIT
1969 - Nelson & Van Dam hypertext editor at Brown, Birth of The Internet
1971 - Email
1976 - Architecture Machine Group proposal to DARPA: Multiple Media
1980 - Lippman & Mohl: Aspen Movie Map
1983 - Backer: Electronic Book
1985 - Negroponte, Wiesner: opened MIT Media Lab
1989 - Tim Berners-Lee proposed the World Wide Web to CERN (European Council for
Nuclear Research)
1990 - K. Hooper Woolsey, Apple Multimedia Lab, 100 people, educ.
1991 - Apple Multimedia Lab: Visual Almanac, Classroom MM Kiosk
1992 - the first M-bone audio multicast on the Net
1993 - U. Illinois National Center for Supercomputing Applications: NCSA Mosaic
1994 - Jim Clark and Marc Andreessen: Netscape
1995 - JAVA for platform-independent application development. Duke is the first applet.
1996 - Microsoft, Internet Explorer.
1.5 TYPES OF MULTIMEDIA

LINEAR MULTIMEDIA
Linear Multimedia is a type of multimedia that is designed to be presented in a sequential
manner. It has a distinct beginning and end. It goes on a logical flow from a starting point to a
conclusion. It is usually intended for display purposes with not much interaction or distraction
from the audience. Because of its nature where audience participation is not expected, Linear
Multimedia may also be referred to as "Passive Multimedia".
In this kind of presentation, the creator of the multimedia is in control.
This kind of media is preferable when interaction is not necessary in the presentation.
Main goals include: to entertain, to transmit knowledge, and to make people familiar with
a certain topic WITHOUT any form of diversion. Examples:
 A PowerPoint presentation
 A slideshow of pictures that proceeds in a specific direction
 A storyline/ A movie
 An anime episode
 A YouTube video
Advantages:
 Audience gets to focus and concentrate on a specific topic.
 There is a logical order in the presentation; it is organized.
 Presenter controls the flow of the presentation
 Effective when we need our audience to absorb the information well
Disadvantages:
 Minimal interactivity, or none at all
 The audience has no say in the topics they want to delve into.
NON-LINEAR MULTIMEDIA
Non-linear multimedia is a non-sequential type of multimedia where the person’s participation
is crucial. In this type of media, the person needs to interact with a computer program, thus
putting him in control of the experience.
With the presence of an interface, the person and the computer interact with each other. From
a starting point, the person using nonlinear multimedia is given a range of options that,
according to his own preferences, will lead him to new information.
Examples may include:
 A Website
 A search engine’s home page
 A DVD menu screen
 A YouTube Channel
 An anime or Korean drama streaming site
Advantages:
 The person is in control and may use the multimedia according to his preferences and
needs.
Disadvantages:
 Requires a level of computer literacy from the user
 May be unorganized if not used well
1.6 COMPONENTS AND STRUCTURE
Multimedia applications can include many types of media. The primary characteristic of a
multimedia system is the use of more than one kind of media to deliver content and
functionality. Web and desktop computing programs can both involve multimedia components.
As well as different media items, a multimedia application will normally involve programming
code and enhanced user interaction. Multimedia items generally fall into one of five main
categories and use varied techniques for digital formatting.
a) Text
It may be an easy content type to forget when considering multimedia systems, but text
content is by far the most common media type in computing applications. Most multimedia
systems use a combination of text and other media to deliver functionality. Text in
multimedia systems can express specific information, or it can act as reinforcement for
information contained in other media items. This is a common practice in applications with
accessibility requirements. For example, when Web pages include image elements, they
can also include a short amount of text for the user's browser to include as an alternative,
in case the digital image item is not available.
b) Images
Digital image files appear in many multimedia applications. Digital photographs can
display application content or can alternatively form part of a user interface. Interactive
elements, such as buttons, often use custom images created by the designers and developers
involved in an application. Digital image files use a variety of formats and file extensions.
Among the most common are JPEGs and PNGs. Both of these often appear on websites, as
the formats allow developers to minimize file size while maximizing picture quality.
Graphic design software programs such as Photoshop and Paint.NET allow developers to
create complex visual effects with digital images.
c) Audio
Audio files and streams play a major role in some multimedia systems. Audio files appear
as part of application content and also to aid interaction. When they appear within Web
applications and sites, audio files sometimes need to be deployed using plug-in media
players. Audio formats include MP3, WMA, Wave, MIDI and RealAudio. When
developers include audio within a website, they will generally use a compressed format to
minimize download times. Web services can also stream audio, so that users can begin
playback before the entire file is downloaded.
d) Video
Digital video appears in many multimedia applications, particularly on the Web. As with
audio, websites can stream digital video to increase the speed and availability of playback.
Common digital video formats include Flash, MPEG, AVI, WMV and QuickTime. Most
digital video requires use of browser plug-ins to play within Web pages, but in many cases
the user's browser will already have the required resources installed.
e) Animation
Animated components are common within both Web and desktop multimedia applications.
Animations can also include interactive effects, allowing users to engage with the
animation action using their mouse and keyboard. The most common tool for creating
animations on the Web is Adobe Flash, which also facilitates desktop applications. Using
Flash, developers can author FLA files, exporting them as SWF movies for deployment to
users. Flash also uses ActionScript code to achieve animated and interactive effects.

1.7 MULTIMEDIA HARDWARE AND SPECIFICATIONS


HARDWARE REQUIREMENTS
1. CPU: The Central Processing Unit (CPU) is the brain of the computer, where processing
and synchronization of all activities take place. The efficiency of a computer is
judged by the speed of the CPU in processing data. For a multimedia computer,
a Pentium processor is preferred because of its higher efficiency.
2. Monitor: The monitor is used to see the computer output. Generally, it displays
25 rows and 80 columns of text. The text or graphics in a monitor is created as a
result of an arrangement of tiny dots, called pixels. Resolution is the amount of
detail the monitor can render; it is defined in terms of the horizontal and
vertical pixels (picture elements) displayed on the screen.
3. Video Grabbing Card: It is needed to convert the analog video signal to digital
signal for processing in a computer. A normal computer cannot do this alone; it
requires special equipment called a video grabbing card, along with software, for
this conversion process. This card takes the analog signal it receives from
conventional sources such as a VCR or a video camera and converts it into
digital format.
4. Sound Card: Today's computers are capable of meeting professional
multimedia needs. Not only can the user compose his own music on the computer,
but it can also be used for speech recognition and synthesis. It can even read
an entire document back to you. But before all this happens, we need to convert
the conventional sound signal into computer-understandable digital signals. This is
done using a special component added to the system called a sound card.
5. Storage: For storage, devices such as a hard disk drive (HDD) with good
capacity and speed are needed. CD/DVD/Blu-ray discs may be required for
distribution purposes.
SOFTWARE REQUIREMENTS
For the creation of multimedia on the PC there are loads of software packages, ranging
from absolutely free to commercial products costing a few bucks.
Here is a summary of just a few of such programs.
1. Adobe CS: Adobe CS is a collection of graphic design, video editing, and web
development applications made by Adobe Systems, many of which are industry
standards. The collection includes the following.
2. Adobe Dreamweaver: A hybrid WYSIWYG and code-based web design and
development application that allows users to create websites; everything can be
done visually.
3. Adobe Fireworks: A graphics package that combines a bitmap and vector graphics
editor with features such as slices and the ability to add hotspots, used
for rapidly creating website prototypes and application interfaces.
4. Adobe Flash Player: Adobe Flash is a multimedia platform that is popular for
adding animation and interactivity to web pages. Flash was introduced in 1996,
after Macromedia acquired the original product, and is currently developed and
distributed by Adobe Systems.
Flash is commonly used to create animation, advertisements, and various web
page Flash components, to integrate video into web pages, and more recently, to
develop rich Internet applications.
5. Adobe Shockwave: Adobe Shockwave is a multimedia player program, first
developed by Macromedia, acquired by Adobe Systems in 2005. It allows Adobe
Director applications to be published on the Internet and viewed in a web browser
on any computer which has the Shockwave plug-in installed.
6. Adobe Photoshop: Photoshop is a graphics editing program
developed and published by Adobe Systems. It is the current market leader for
commercial bitmap and image manipulation software, and is the flagship product
of Adobe Systems. It has been described as “an industry standard for graphics
professionals”
7. GIMP: GIMP is a free alternative to Photoshop, though not quite as fully featured.
8. Google SketchUp: SketchUp is a 3D modelling program designed for architects,
civil engineers, filmmakers, game developers, and related professions.
9. Microsoft FrontPage: As a WYSIWYG editor, FrontPage is designed to hide
the details of pages’ HTML code from the user, making it possible for novices to
easily create web pages and sites.
10. Apple QuickTime: QuickTime is an extensible proprietary multimedia
framework developed by Apple, capable of handling various formats of digital
video, 3D models, sound, text, animation, music, panoramic images, and
interactivity.
11. Microsoft PowerPoint: PowerPoint presentations are generally made up of
slides that may contain text, graphics, movies, and other objects, which may be
arranged freely on the slide.
1.8 MULTIMEDIA INPUT AND OUTPUT DEVICES
1. Input Devices
Graphics workstations can make use of various devices for data input. Most systems
have a keyboard and one or more additional devices specifically designed for interactive
input. These include a mouse, trackball, space-ball, and joystick. Some other input
devices used in particular applications are digitizers, dials, button boxes, data gloves,
touch panels, and voice systems.
a) Keyboard: An alphanumeric keyboard on a graphics system is used primarily as a
device for entering text strings, issuing certain commands, and selecting menu
options. The keyboard is an efficient device for inputting such nongraphic data as
picture labels associated with a graphics display.
b) Joystick: A joystick is another positioning device, which consists of a small, vertical
lever mounted on a base. The joystick is used to steer the screen cursor around.
Most joysticks select screen positions with actual stick movement; others respond
to pressure on the stick. Some joysticks are mounted on a keyboard, and some are
designed as stand-alone units. The distance that the stick is moved in any direction
from its centre position corresponds to the relative screen-cursor movement in that
direction.
c) Data Gloves: A data glove can be used to grasp a "virtual object". The glove
is constructed with a series of sensors that detect hand and finger motions.
Electromagnetic coupling between transmitting antennas and receiving antennas is
used to provide information about the position and orientation of the hand. The
transmitting and receiving antennas can each be structured as a set of three mutually
perpendicular coils, forming a three-dimensional Cartesian reference system. Input
from the gloves is used to position or manipulate objects in a virtual-scene. A two-
dimensional projection of the scene can be viewed on a video monitor, or a three-
dimensional projection can be viewed with a headset.
d) Digitizers: A common device for drawing, painting, or interactively selecting
positions is a digitizer. These devices can be designed to input coordinate values in
either a two-dimensional or three-dimensional space. In engineering or architectural
applications, a digitizer is often used to scan a drawing or object and input a set of
discrete coordinate positions. The input positions are then joined with straight-line
segments to generate an approximation of a curve or surface shape.
e) Image Scanners: Drawings, graphs, photographs, or text can be stored for
computer processing with an image scanner by passing an optical scanning
mechanism over the information to be stored. The gradations of grey scale or color
are then recorded and stored in an array. Once we have the internal representation
of a picture, we can apply transformations to rotate, scale, or crop the picture to a
particular screen area.
f) Touch panels: As the name implies, touch panels allow displayed objects or screen
positions to be selected with the touch of a finger. A typical application of touch
panels is for the selection of processing options that are represented as a menu of
graphical icons. Some monitors can be adapted for touch input by fitting a
transparent device containing a touch-sensing mechanism over the video monitor
screen. Touch input can be recorded using optical, electrical, or acoustic methods.
g) Light Pens: Pencil-shaped devices are used to select screen positions by detecting
the light coming from points on the CRT screen. They are sensitive to the short
burst of light emitted from the phosphor coating at the instant the electron beam
strikes a particular point. Other light sources, such as the background light in the
room, are usually not detected by a light pen. An activated light pen, pointed at a
spot on the screen as the electron beam lights up that spot, generates an electrical
pulse that causes the coordinate position of the electron beam to be recorded. As
with cursor-positioning devices, recorded light-pen coordinates can be used to
position an object or to select a processing option.
h) Voice Systems: Speech recognizers are used with some graphics workstations as
input devices for voice commands. The voice system input can be used to initiate
graphics operations or to enter data. These systems operate by matching an input
against a predefined dictionary of words and phrases. A dictionary is set up by
speaking the command words several times. The system then analyses each word
and establishes a dictionary of word frequency patterns, along with the
corresponding functions that are to be performed. Later, when a voice command is
given, the system searches the dictionary for a frequency-pattern match. A separate
dictionary is needed for each operator using the system.
2. Output Devices
Graphics workstations can make use of various devices for data output. These include
monitors, printers, plotters, and speakers.
a) Monitors: Also called Visual Display Units (VDUs), monitors are the main output
device of a computer. They form images from tiny dots, called pixels, arranged in a
rectangular form. The sharpness of the image depends upon the number of pixels.
There are two kinds of viewing screen used for monitors.
 Cathode-Ray Tube (CRT)
 Flat-Panel Display
b) Printers: A printer is an output device, which is used to print information on paper.
There are two types of printers −
i. Impact Printers: Impact printers print the characters by striking them on the
ribbon, which is then pressed on the paper. Examples: Dot matrix, Daisy Wheel
Printer
ii. Non-Impact Printers: Non-impact printers print the characters without using the
ribbon. These printers print a complete page at a time, and are therefore also called
page printers. These printers are of two types − laser printers and inkjet printers.
c) Plotter: A plotter is a printer designed for printing vector graphics. Instead of
printing individual dots on the paper, plotters draw continuous lines. This makes
plotters ideal for printing architectural blueprints, engineering designs, and other
CAD drawings. There are two main types of plotters – drum and flatbed plotters.
d) Speaker: Speakers are transducers that convert electrical signals into sound
waves. The speakers receive audio input from a device such as a computer or an
audio receiver. This input may be in either analog or digital form. Analog speakers
simply amplify the analog waveform into sound waves. Regardless of
their design, the purpose of speakers is to produce audio output that can be heard
by the listener.
e) Projector: A projector is an output device that can take images generated by a
computer or Blu-ray player and reproduce them onto a screen, wall, or other surface.
Typically, the surface projected onto is large, flat, and lightly coloured. For
example, you could use a projector to show a presentation on a large screen so that
everyone in the room can see it. Projectors can produce either still (slides) or moving
images (videos).
1.9 USES OF MULTIMEDIA
In many ways, ours has been a multimedia society for decades. A variety of media – print
material, film strips, and visual aids - have been used in the classroom for years.
Conferences and seminars have made effective use of music, lights, slide projectors and
videotapes. And ubiquitous televisions have shaped a new multimedia generation.
What differentiates multimedia as the buzzword of the nineties, however, is the
partnership potential of multiple media and computer technologies. Computers can now
present data, text, sound, graphics, and limited motion video on the desktop. Computer
based multimedia skill and knowledge applications offer benefits and value difficult to
equal in non-technology implementations.
1. Advertising: Advertising has changed the way business is done over the past couple
of decades. Multimedia plays a great and vital role in the field of advertising. Whether
an advertisement is print or electronic, it is first prepared on the computer using
professional software and then brought in front of the target audience.
It may include: print advertising, radio (audio) advertising, television (video)
advertising, digital advertising, and display ads.
2. Education: In the area of education, multimedia has great importance,
particularly in schools, where its use plays a significant role in educating children
effectively and more visually.
Nowadays the classroom is not limited to traditional methods; it also needs
audio and visual media. With the use of multimedia, everything can be integrated into
one system. As an educational aid, the PC offers a high-quality display with a mic
option. All of this has promoted the development of a wide range of computer-based
training.
3. Mass Media: It is used in the field of mass media i.e. journalism, in various
magazines and newspapers that are published periodically. The use of multimedia
plays a vital role in a publishing house as there are many works of newspaper
designing and other stuff also.
Nowadays it is not only text that we see in the newspaper; we also see
photographs, which not only makes the newspaper a richer example of multimedia
but also demonstrates the worth of hypermedia.
4. Gaming Industry: One of the most exciting applications of multimedia is games.
Nowadays, playing games over the live Internet with multiple players has become
popular. In fact, the first application of multimedia systems was in the field of
entertainment and that too in the video game industry. The integrated audio and video
effects make various types of games more entertaining.
5. Science and Technology: Multimedia has wide application in the field of science
and technology. It is capable of transferring audio, sending messages, and delivering
formatted multimedia documents. It also enables live interaction through
audio messages, which is only possible with hypermedia. This reduces time and
cost, and communication can be arranged at any moment, even in emergencies.
At the same time, it is useful for surgeons, as they can use images created from
imaging scans of the human body to practice complicated procedures such as brain
surgery and reconstructive surgery. Plans can be made in a better way to reduce
costs and complications.
6. Pre-Production: Pre-production comprises everything you do before you start
recording audio or video. This phase of your project is extremely important:
everything you do in pre-production will save time and aggravation during
production and post-production. Techniques such as designing storyboards
(including showing correct camera angles for a scene), writing your story, and
planning video transitions can all be done with the help of multimedia.
7. Post Production: Post-production, the final step, involves editing scenes, adding
various transition effects, adding voices to characters, background score, dubbing,
and much more, all of which can be done using multimedia technologies.
8. Fine Arts: In fine arts, there are multimedia artists, who blend techniques using
different media that in some way incorporates interaction with the viewer. One of the
famous artists is Peter Greenaway who is blending cinema with opera with the help
of all sorts of digital media.
9. Engineering: Software engineers often use multimedia in computer simulations for
anything such as military or industrial training. It is also used for software interfaces
which are done as collaboration between creative professionals and software
engineers.
10. Research: In the area of mathematical and scientific research, multimedia is
primarily used for modelling and simulation. For example, a scientist can look at a
molecular model of a particular substance and manipulate it to arrive at a new
substance.

1.10 HYPERMEDIA
Hypertext is text displayed on a computer display or other electronic devices with
references (hyperlinks) to other text that the reader can immediately access, or where text
can be revealed progressively at multiple levels of detail. Hypertext documents are
interconnected by hyperlinks, which are typically activated by a mouse click, keypress
sequence or by touching the screen. Apart from text, the term "hypertext" is also
sometimes used to describe tables, images, and other presentational content formats with
integrated hyperlinks. Hypertext is one of the key underlying concepts of the World Wide
Web, where Web pages are often written in the Hypertext Mark-up Language (HTML).
As implemented on the Web, hypertext enables the easy-to-use publication of
information over the Internet.
Hypermedia is the use of text, data, graphics, audio and video as elements of an extended
hypertext system in which all elements are linked, where the content is accessible via
hyperlinks. Text, audio, graphics, and video are interconnected to each other creating a
compilation of information that is generally considered a non-linear system. The modern
World Wide Web is the best example of hypermedia, where the content is most of the
time interactive, hence non-linear. Hypertext is a subset of hypermedia, and the term was
first used by Ted Nelson in 1965.
Hypermedia content can be developed using specialized software such as Adobe Flash,
Adobe Director, and Macromedia Authorware. Some business software, such as Adobe
Acrobat and the Microsoft Office suite, offers limited hypermedia features with hyperlinks
embedded in the document itself.

1.11 MULTIMEDIA PRESENTATION AND PRODUCTION


A multimedia presentation is basically a digital show whose content is expressed through
various media types like text, images, sound, video etc. There can be various objectives
of the presentation for example, to deliver some information about a company’s
performance, to enhance the knowledge of students, to present the facilities offered by a
travel agency to the tourists, and so on. In fact, any subject matter where information may
be expressed through various visual and audio elements is a potential application
area for a multimedia presentation. The end users who execute and watch the presentation
are called the viewers or the target audience. Different types of presentations may have
different categories of audience, like company employees, students, professionals, factory
workers, tourists, etc. The presentation is usually played back on a PC, either from the
hard disk or a CD-ROM. Sometimes when the audience consists of a large number of
people, presentation may be projected on a big screen using a projecting system. Before
a presentation can be viewed, however it has to be created. This process is known as
multimedia production. The production work is carried out by a team of professionals
equipped with the required skills and knowledge. These professionals are called
developers or authors and development work is called authoring. Authoring involves a
number of steps.
1.12 CHARACTERISTICS OF A MULTIMEDIA PRESENTATION
 Multiple media: A multimedia presentation comprises text, graphics and images,
animation, sound, and video.
 Non-linearity: Non-linearity is the capability of ‘jumping’ or navigating from one
point within a presentation to another point without appreciable delay.
 Scope of interactivity: To make non-linearity a possibility, a user needs to interact
with a presentation. For non-linear presentation, a user can directly navigate to an area
of interest. Such interaction is made possible through a set of interactivity elements
embedded within the presentation like buttons, menu items or hyperlinks.
 Integrity: Although several media types may be present and played back
simultaneously, they need to be integrated or be part of a single entity, which is the
presentation.
 Digital representation: Multimedia requires instant access to different portions of
the presentation. This is best done inside a digital computer, which stores data on
random-access devices like hard disks and compact discs. Multimedia presentations are
produced and played back on the digital platform.

1.13 OVERVIEW OF MULTIMEDIA SOFTWARE AND AUTHORING TOOLS


Music Sequencing and Notation
1. Cakewalk Pro Audio is a well-known sequencing and editing program. The term
sequencer comes from older devices that stored sequences of notes in the MIDI
music language (called events in MIDI). It is also possible to insert WAV files
and Windows MCI commands (for animation and video) into music tracks. (MCI
is a ubiquitous component of the Windows API.)
2. Cubase is another sequencing / editing program, with capabilities similar to those
of Cakewalk. It includes some digital audio editing tools.
3. Digital Audio tools deal with accessing and editing the actual sampled sounds
that make up audio.
4. Cool Edit is a powerful, popular digital audio toolkit with capabilities (for PC
users, at least) that emulate a professional audio studio, including multitrack
productions and sound file editing, along with digital signal processing effects.
5. Sound Forge is a sophisticated PC-based program for editing WAV files. Sound
can be captured from a CD-ROM drive or from tape or microphone through the
sound card, then mixed and edited. It also permits adding complex special effects.
Graphics and Image Editing
1. Adobe Illustrator is a powerful publishing tool for creating and editing vector
graphics, which can easily be exported to use on the web.
2. Adobe Photoshop is the standard tool for graphics, image processing, and
image manipulation. Layers of images, graphics, and text can be separately
manipulated for maximum flexibility, and its "filter factory" permits creation of
sophisticated lighting effects.
3. Macromedia Fireworks is software for making graphics specifically for the
web. It includes a bitmap editor, a vector graphics editor, and a JavaScript
generator for buttons and rollovers.
4. Macromedia Freehand is a text and web graphics editing tool that supports
many bitmap formats, such as GIF, PNG, and JPEG. These are pixel-based
formats, in that each pixel is specified. It also supports vector-based formats, in
which endpoints of lines are specified instead of the pixels themselves, such as
SWF (Macromedia Flash) and FHC (Shockwave Freehand). It can also read
Photoshop format.
Video Editing
1. Adobe Premiere is a simple, intuitive video editing tool for nonlinear editing —
putting video clips into any order. Video and audio are arranged in tracks, like a
musical score. It provides a large number of video and audio tracks,
superimpositions, and virtual clips. A large library of built-in transitions, filters,
and motions for clips allows easy creation of effective multimedia productions.
2. Adobe After Effects is a powerful video editing tool that enables users to add
and change existing movies with effects such as lighting, shadows, and motion
blurring. It also allows layers, as in Photoshop, to permit manipulating objects
independently.
3. Final Cut Pro is a video editing tool offered by Apple for the Macintosh
platform. It allows the capture of video and audio from numerous sources, such
as film and DV. It provides a complete environment, from capturing the video to
editing and color correction and finally output to a video file or broadcast from
the computer.
Animation (Multimedia APIs)
1. Java3D is an API used by Java to construct and render 3D graphics, similar to
the way Java Media Framework handles media files. It provides a basic set of
object primitives (cube, splines, etc.) upon which the developer can build scenes.
It is an abstraction layer built on top of OpenGL or DirectX (the user can select
which), so the graphics are accelerated.
2. DirectX, a Windows API that supports video, images, audio, and 3D animation,
is the most common API used to develop modern multimedia Windows
applications, such as computer games.
3. OpenGL was created in 1992 and has become the most popular 3D API in use
today. OpenGL is highly portable and will run on all popular modern operating
systems, such as UNIX, Linux, Windows, and Macintosh.
Rendering Tools
1. 3D Studio Max includes a number of high - end professional tools for character
animation, game development, and visual effects production. Models produced
using this tool can be seen in several consumer games, such as for the Sony
PlayStation.
2. Softimage XSI (previously called Softimage 3D) is a powerful modelling,
animation, and rendering package for animation and special effects in films and
games.
3. Maya, a competing product to Softimage, is a complete modelling package. It
features a wide variety of modelling and animation tools, such as to create
realistic clothes and fur.
4. RenderMan is a rendering package created by Pixar. It excels in creating
complex surface appearances and images and has been used in numerous movies,
such as Monsters Inc. and Final Fantasy: The Spirits Within. It is also capable of
importing models from Maya.
5. GIF Animation Packages: For a simpler approach to animation that also allows
quick development of effective small animations for the web, many shareware
and other programs permit creating animated GIF images. GIFs can contain
several images, and looping through them creates a simple animation. Gifcon and
GifBuilder are two of these. Linux also provides some simple animation tools,
such as animate.
Multimedia Authoring
Tools that provide the capability for creating a complete multimedia presentation,
including interactive user control, are called authoring programs.
1. Adobe Flash is a multimedia platform used to add animation, video, and
interactivity to web pages. Flash is frequently used for advertisements, games and
flash animations for broadcast. More recently, it has been positioned as a tool for
"Rich Internet Applications" ("RIAs"). Flash manipulates vector and raster
graphics to provide animation of text, drawings, and still images. It supports
bidirectional streaming of audio and video, and it can capture user input via
mouse, keyboard, microphone, and camera. Flash contains an object-oriented
language called ActionScript and supports automation via the JavaScript Flash
language (JSFL).
Flash content may be displayed on various computer systems and devices, using
Adobe Flash Player, which is available free of charge for common web browsers,
some mobile phones, and a few other electronic devices (using Flash Lite).
Some users feel that Flash enriches their web experience, while others find the
extensive use of Flash animation, particularly in advertising, intrusive and
annoying. Flash has also been criticized for adversely affecting the usability of
web pages.
2. Adobe Director (formerly Macromedia Director) is a multimedia application
authoring platform created by Macromedia — now part of Adobe Systems. It
allows users to build applications on a movie metaphor, with the user as the
"director" of the movie. Originally designed for creating animation sequences,
the addition of a scripting language called Lingo made it a popular choice for
creating CD-ROMs, standalone kiosks, and web content using Adobe
Shockwave. Adobe Director supports both 2D and 3D multimedia projects.
3. Authorware is an interpreted, flowchart-based, graphical programming
language. Authorware is used for creating interactive programs that can integrate
a range of multimedia content, particularly e-learning applications. The
flowchart model differentiates Authorware from other authoring tools, such as
Adobe Flash and Adobe Director, which rely on a visual stage, time-line, and
script structure.
4. Quest, which uses a type of flowcharting metaphor, is similar to Authorware in
many ways. However, the flowchart nodes can encapsulate information in a more
abstract way (called "frames") than simply subroutine levels. As a result,
connections between icons are more conceptual and do not always represent flow
of control in the program.
Unit 2: Basics of Text and Images
TEXT – INTRODUCTION
It may be an easy content type to forget when considering multimedia systems, but text content
is by far the most common media type in computing applications. Most multimedia systems
use a combination of text and other media to deliver functionality. Text in multimedia systems
can express specific information, or it can act as reinforcement for information contained in
other media items. This is a common practice in applications with accessibility requirements.
For example, when Web pages include image elements, they can also include a short amount
of text for the user's browser to include as an alternative, in case the digital image item is not
available.
Text is an important component used in many multimedia applications. Text consists of
characters that are used to create words, sentences, and paragraphs. Text alone provides just
one source of information, yet it is good at providing basic information. It is the simplest, and
often the most effective, way to get one's message across. Insufficient attention given to the
presentation and flow of text within a multimedia application can result in failure to
communicate the presentation's central message.
STANDARDS - ASCII, UNICODE
1. ASCII:
ASCII stands for American Standard Code for Information Interchange. ASCII is a standard
that assigns letters, numbers, and other characters to the 256 slots available in an 8-bit
code. The ASCII decimal (Dec) number is derived from binary, which is the language of
all computers. For example, the lower case "h" character (Char) has a decimal value of 104,
which is "01101000" in binary.
ASCII is the most common format for text files in computers and on the Internet. In an
ASCII file, each alphabetic, numeric, or special character is represented with a 7-bit binary
number (a string of seven 0s or 1s). 128 possible characters are defined.
ASCII was first developed and published in 1963 by the X3 committee, a part of the
American Standards Association (ASA). The ASCII standard was first published as ASA
X3.4-1963, with 10 revisions of the standard being published between 1967 and 1986.
The ASCII table is divided into 3 different sections.
 Non-printable, system codes between 0 and 31.
 Lower ASCII, between 32 and 127. This table originates from the older, American
systems, which worked on 7-bit character tables.
 Higher ASCII, between 128 and 255. This portion is programmable; characters are
based on the language of your operating system or program you are using. Foreign
letters are also placed in this section.
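
As a quick illustration of these values, the sketch below uses Python's built-in ord, chr, and format functions to move between characters, decimal codes, and binary; it reproduces the lower case "h" example above:

```python
# ASCII: each character maps to a small integer code.
ch = 'h'
code = ord(ch)                 # 104
binary = format(code, '08b')   # '01101000'
print(ch, code, binary)
print(chr(104))                # 'h' -- the inverse mapping

# The three sections of the ASCII table:
non_printable = range(0, 32)    # system codes
lower_ascii   = range(32, 128)  # the original 7-bit character set
higher_ascii  = range(128, 256) # programmable / language-dependent codes
```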
2. Unicode:
Unicode is an entirely new idea in setting up binary codes for text or script character.
Unicode is a computing industry standard for the consistent encoding, representation, and
handling of text expressed in most of the world's writing systems. The Unicode Standard
consists of a set of code charts for visual reference, an encoding method and set of standard
character encodings, a set of reference data files, and a number of related items, such as
character properties, rules for normalization, decomposition, collation, rendering, and
bidirectional display order (for the correct display of text containing both right-to-left
scripts, such as Arabic and Hebrew, and left-to-right scripts). Unicode's success at unifying
character sets has led to its widespread and predominant use in the internationalization and
localization of computer software.
Unicode can be implemented by different character encodings. The Unicode standard
defines UTF-8, UTF-16, and UTF-32, and several other encodings are in use. The most
commonly used encodings are UTF-8, UTF-16 and UCS-2, a precursor of UTF-16.
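
The distinction between code points and encodings can be seen directly in Python; the two-character string below (a Latin letter and a Greek letter) is an arbitrary example:

```python
# One string, several encodings: the code points stay the same,
# but the byte representation differs per encoding.
text = "Aα"                     # 'A' (U+0041) and alpha (U+03B1)

print(text.encode('utf-8'))     # b'A\xce\xb1' -- 1 byte + 2 bytes
print(text.encode('utf-16'))    # BOM followed by 2 bytes per character here
print(text.encode('utf-32'))    # BOM followed by 4 bytes per character

print(hex(ord('α')))            # 0x3b1 -- the code point, independent of encoding
```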
HYPERMEDIA AND HYPERTEXT
Hypertext is text displayed on a computer display or other electronic devices with references
(hyperlinks) to other text that the reader can immediately access, or where text can be revealed
progressively at multiple levels of detail. Hypertext documents are interconnected by
hyperlinks, which are typically activated by a mouse click, keypress sequence or by touching
the screen. Apart from text, the term "hypertext" is also sometimes used to describe tables,
images, and other presentational content formats with integrated hyperlinks. Hypertext is one
of the key underlying concepts of the World Wide Web, where Web pages are often written in
the Hypertext Mark-up Language (HTML). As implemented on the Web, hypertext enables the
easy-to-use publication of information over the Internet.
Hypermedia is the use of text, data, graphics, audio and video as elements of an extended
hypertext system in which all elements are linked, where the content is accessible via
hyperlinks. Text, audio, graphics, and video are interconnected to each other creating a
compilation of information that is generally considered a non-linear system. The modern
World Wide Web is the best example of hypermedia, where the content is most of the time
interactive, hence non-linear. Hypertext is a subset of hypermedia, and the term was first used
by Ted Nelson in 1965.
Hypermedia content can be developed using specialized software such as Adobe Flash, Adobe
Director, and Macromedia Authorware. Some business software, such as Adobe Acrobat and
the Microsoft Office suite, offers limited hypermedia features with hyperlinks embedded in the
document itself.
ABOUT FONTS AND FACES
A typeface is a family of graphic characters that usually includes many type sizes and styles. A
font is a collection of characters of a single size and style belonging to a particular typeface
family. Typical font styles are bold face and italic. Other style attributes, such as underlining
and outlining of characters, may be added at the user’s choice.
The size of text is usually measured in points; one point is 1/72 of an inch, i.e. approximately
0.0139 inch. The size of a font does not exactly describe the height or width of its characters.
This is because the x-height (the height of the lower case character x) of two fonts may differ.
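
Because a point is a fixed physical unit, converting a point size to on-screen pixels only needs the display resolution in dots per inch. A minimal sketch of the arithmetic (the 96 dpi figure is a common assumption for desktop displays, not part of the point system itself):

```python
# 1 point = 1/72 inch, so pixels = points * dpi / 72.
def points_to_pixels(points: float, dpi: float = 96.0) -> float:
    return points * dpi / 72.0

print(points_to_pixels(12))   # 16.0 pixels on a 96 dpi display
print(round(1 / 72, 4))       # 0.0139 -- inches per point
```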
Typefaces of fonts can be described in many ways, but the most common characterization of a
typeface is serif and sans serif. The serif is the little decoration at the end of a letter stroke.
Times, Times New Roman, and Bookman are some fonts which come under the serif category;
Arial, Optima, and Verdana are some examples of sans serif fonts. Serif fonts are generally used
for the body of the text for better readability, and sans serif fonts are generally used for headings.

PostScript fonts are a method of describing an image in terms of mathematical constructs
(Bezier curves), so they are used not only to describe the individual characters of a font but
also to describe illustrations and whole pages of text. Since PostScript makes use of
mathematical formulas, fonts can be easily scaled bigger or smaller.
Apple and Microsoft announced a joint effort to develop a better and faster outline font
methodology based on quadratic curves, called TrueType. In addition to printing smooth
characters on printers, TrueType can draw characters on low-resolution (72 dpi or 96 dpi)
monitors.
FONT EDITORS
There are several software packages that can be used to create customized fonts. These tools
help a multimedia developer to communicate his idea or the graphic feeling. Using this
software, different typefaces can be created.
In some multimedia projects it may be required to create special characters. Using the font
editing tools, it is possible to create a special symbol and use it in the entire text. Following are
the popular software that can be used for editing and creating fonts:
 Fontographer
 Fontmonger
 Cool 3D text
IMAGE -INTRODUCTION
Images are an important element of a multimedia project or a web site. In order to make a
multimedia presentation look elegant and complete, it is necessary to spend ample
time designing the graphics and the layouts. Competent, computer-literate skills in graphic art
and design are vital to the success of a multimedia project.
Digital image files appear in many multimedia applications. Digital photographs can display
application content or can alternatively form part of a user interface. Interactive elements, such
as buttons, often use custom images created by the designers and developers involved in an
application. Digital image files use a variety of formats and file extensions. Among the most
common are JPEGs and PNGs. Both of these often appear on websites, as the formats allow
developers to minimize file size while maximizing picture quality. Graphic design
software programs such as Photoshop and Paint.NET allow developers to create complex visual
effects with digital images.
DIGITAL IMAGE FORMAT
Though there are different kinds of image formats in the literature, we shall consider the image
format that comes out of an image frame grabber, i.e., the captured image format, and the
format when images are stored, i.e., the stored image format.
Captured Image Format
The image format is specified by two main parameters: spatial resolution, which is specified
as pixels x pixels (e.g. 640x480) and color encoding, which is specified by bits per pixel. Both
parameter values depend on hardware and software for input/output of images.
Stored Image Format
When we store an image, we are storing a two-dimensional array of values, in which each value
represents the data associated with a pixel in the image. For a bitmap, this value is a binary
digit.
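
A minimal sketch of this stored format, using a tiny hypothetical 4x4 bitmap in which each value is a single binary digit:

```python
# A stored image as a two-dimensional array; for a bitmap,
# each pixel value is a binary digit.
bitmap = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

height = len(bitmap)
width = len(bitmap[0])
print(f"{width}x{height} bitmap, {width * height} pixels, "
      f"{width * height} bits of raw pixel data")
```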
TYPES
There are two main types of image files: raster and vector. Raster images are created with
pixel-based programs or captured with a camera or scanner. They are more common in general,
with formats such as JPG, GIF, and PNG widely used on the web. Vector graphics are created
with vector software and are common for images that will be applied onto a physical product;
they are also used in CAD, engineering, and 3D graphics.
Raster Images
Raster graphics are bitmaps. A bitmap is a grid of individual pixels that collectively compose
an image. Raster graphics render images as a collection of countless tiny squares. Each square,
or pixel, is coded in a specific hue or shade. Each color pixel contributes to the overall image.
Raster graphics are best used for non-line art images; specifically, digitized photographs,
scanned artwork or detailed graphics. Raster images are capable of rendering complex, multi-
coloured visuals, including soft color gradients. Digital cameras create raster images, and all
the photographs you see in print and online are raster images. Raster images are ideal for photo
editing and creating digital paintings in programs such as Photoshop and GIMP, and they can
be compressed for storage and web optimized images.
Quality and size of a raster image are often dictated by how many pixels are contained in an
inch, expressed as pixels-per-inch or ppi; as well as the overall dimensions of the image, also
expressed as pixels (for example, 5,000 pixels wide by 2,500 pixels high).
The greater the ppi and dimensional measurements, the higher the quality. Most printing
projects require images to be at least 300ppi, for example
There are different types of raster files: JPG, GIF, PNG, etc.
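
The ppi arithmetic above is simple enough to sketch directly; this uses the example dimensions from the text and the common 300 ppi print minimum:

```python
# Print size in inches = pixel dimensions / pixels-per-inch.
width_px, height_px = 5000, 2500   # example dimensions from the text
ppi = 300                          # common minimum for print work

print(f"{width_px / ppi:.2f} x {height_px / ppi:.2f} inches")
# 16.67 x 8.33 inches -- the largest size printable at 300 ppi
```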
Vector Images
Unlike raster graphics, which are comprised of coloured pixels arranged to display an image,
vector graphics are made up of paths, each with a mathematical formula (vector) that tells the
path how it is shaped and what color it is bordered with or filled by.
Since mathematical formulas dictate how the image is rendered, vector images retain their
appearance regardless of size. They can be scaled infinitely. Vector images can be created and
edited in programs such as Illustrator, CorelDraw, and InkScape.
Though vectors can be used to imitate photographs, they’re best-suited for designs that use
simple, solid colors. Vector images are comprised of shapes, and each shape has its own color;
thus, vectors cannot achieve the color gradients, shadows, and shading that raster images can
(it is possible to mimic them, but it requires rasterizing part of the image – which means it
would not be a true vector). True vector graphics are comprised of line art, sometimes called
wireframes, that are filled with color.
Because vectors can be infinitely scaled without loss of quality, they’re excellent for logos,
illustrations, engravings, etchings, product artwork, signage, and embroidery. Vectors should
not be used for digital paintings or photo editing; however, they’re perfect for projects such as
printing stickers that do not include photos.
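
A small sketch of why vectors scale without loss: the stored data is just path coordinates (the rectangle below is a hypothetical example), and scaling multiplies those numbers before any pixels are ever produced.

```python
# A vector shape stored as path coordinates.
path = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]  # a rectangle

def scale(points, factor):
    """Scale a path by multiplying its coordinates -- no pixel data, no loss."""
    return [(x * factor, y * factor) for x, y in points]

print(scale(path, 100))   # the same exact shape, 100 times larger
```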
This table compares some of the differences between raster and vector images.

Raster
1. Comprised of pixels, arranged to form an image
2. Constrained by resolution and dimensions
3. Capable of rich, complex color blends
4. Large file sizes (but can be compressed)
5. File types include .jpg, .gif, .png, .tif, .bmp, .psd; plus .eps and .pdf when created by raster programs
6. Raster software includes Photoshop and GIMP
7. Perfect for "painting"
8. Capable of detailed editing

Vector
1. Comprised of paths, dictated by mathematical formulas
2. Infinitely scalable
3. Difficult to blend colors without rasterizing
4. Small file sizes
5. File types include .ai, .cdr, .svg; plus .eps and .pdf when created by vector programs
6. Vector software includes Illustrator, CorelDraw, and InkScape
7. Perfect for "drawing"
8. Less detailed, but offers precise paths
COLOUR AND COLOUR MODELS
"Colour" refers to the human brain's subjective interpretation of combinations of a narrow band
of wavelengths of light. What wavelengths reach the eye depends on both the wavelengths
in the light source and what wavelengths are absorbed by the objects off which the light reflects.
A Colour Model is simply a way to define color. A color model is a system for creating a full
range of colours from a small set of primary colors. A model describes how color will appear
on the computer screen or on paper. Three popular color models are:
a) CMYK (Cyan, Magenta, Yellow, Black)
The CMYK model is used for print work and it describes colors based on their
percentage of Cyan, Magenta, Yellow and Black. These four colors are used by
commercial printers and bureaus and you may also find that your home printer uses
these colors too. These four colors are needed to reproduce full color artwork in
magazines, books and brochures. By combining Cyan, Magenta, Yellow and Black on
paper in varying percentages, the illusion of lots of colors is created.
CMYK is known as a “subtractive” color model. White is the natural color of the paper
or other background, while black results from a full combination of coloured inks.
b) RGB (Red, Green, Blue)
The RGB model is used when working with screen-based designs. A value between 0
and 255 is assigned to each of the light colors, Red, Green and Blue. So, for example,
if you wanted to create a purely blue color, Red would have a value of 0, Green would
have a value of 0 and Blue would have a value of 255 (pure blue). To create black, Red,
Green and Blue would each have a value of 0 and to create white, each would have a
value of 255. RGB is known as an “additive” model and is the opposite of the
subtractive color model.
In the case of the RGB model, the "value" of a color refers to the strength of the colors in
relation to each other.
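
As a sketch of how the additive and subtractive models relate, the following uses the common naive RGB-to-CMYK conversion formula; real print workflows rely on calibrated colour profiles, so this is illustrative only:

```python
# Naive RGB (0-255 per channel) to CMYK (0.0-1.0 per channel) conversion.
def rgb_to_cmyk(r: int, g: int, b: int):
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)          # black component
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0    # pure black
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(0, 0, 255))   # pure blue -> (1.0, 1.0, 0.0, 0.0)
```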
c) Lab Color
The Lab color model is a slightly more complex beast. It is made up of three
components: the lightness component (L), ranging from 0 to 100; the "a" component,
which comes from the green-red axis in the Adobe Color Picker; and the "b" component,
which comes from the blue-yellow axis in the Adobe Color Picker. Both "a" and "b" can range
from +127 to –128. When Photoshop is converting from one model to another, it uses
Lab as the intermediate color model.

d) HSL
The HSL model describes colors in terms of hue, saturation, and lightness (also called
luminance). The model has two prominent properties:
 The transition from black to a hue to white is symmetric and is controlled solely
by increasing lightness. Shading and tinting are controlled by a single value,
lightness
 Decreasing saturation transitions to a shade of gray dependent on the lightness,
thus keeping the overall intensity relatively constant. Tones are controlled by a
single value, saturation
The advantages of using hue are
 The relationship between tones around the color circle is easily identified
 Shades, tints, and tones can be generated easily without affecting the hue
Lightness combines the concepts of shading and tinting. Assuming full saturation,
lightness is neutral at the midpoint value, for example 50%, and the hue displays
unaltered. As lightness decreases below the midpoint, it has the effect of shading. Zero
lightness produces black. As lightness increases above 50%, it has the effect of tinting,
and full lightness produces white.
At zero saturation, lightness controls the resulting shade of grey. A value of zero still
produces black, and full lightness still produces white. The midpoint value results in
the "middle" shade of grey, with an RGB value of (128,128,128). As saturation
decreases, it produces tones of the reference hue that converge on a shade of grey that
is determined by the lightness. This keeps the total intensity relatively constant.
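
Python's standard colorsys module implements this model under the name HLS (hue, lightness, saturation, each in the range 0 to 1), which makes the behaviour described above easy to verify:

```python
import colorsys

# Full saturation, mid lightness: the hue displays unaltered.
print(colorsys.hls_to_rgb(0.0, 0.5, 1.0))   # (1.0, 0.0, 0.0) -- pure red

# Lightness above the midpoint tints towards white.
print(colorsys.hls_to_rgb(0.0, 0.75, 1.0))  # (1.0, 0.5, 0.5) -- pink

# Zero saturation: lightness alone sets the shade of grey.
print(colorsys.hls_to_rgb(0.0, 0.5, 0.0))   # (0.5, 0.5, 0.5) -- mid grey
```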
SPECIFICATION OF DIGITAL IMAGES
A digital image is a binary representation of a two-dimensional image. It may be of vector or
raster type. But most of the time, the term "digital image" refers to raster images or
bitmapped images.
A digital image may be characterized in three main ways:
 The image resolution refers to the image dimensions (width × height) in units of the
number of dots (pixels). Common resolutions are 640 × 480 or 1280 × 960, although
larger images from digital still cameras are common.
 The colour depth is the number of colours that may be specified for each pixel. For true
colour, this should be in the thousands or millions.
 The file format for an image describes the way it is saved on disk and affects its
compatibility with different programs for viewing, e-mailing, etc. The Internet-standard
image file format is JPEG, which carries the benefits of small file size, high definition,
and broad compatibility with Internet e-mail and browser software.
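
The first two characteristics translate directly into arithmetic; a brief sketch using the 1280 x 960 example resolution above and 24-bit true colour:

```python
width, height = 1280, 960   # image resolution from the example above
depth = 24                  # colour depth in bits per pixel ("true colour")

colours = 2 ** depth                     # number of specifiable colours
raw_bytes = width * height * depth // 8  # uncompressed pixel data

print(f"{colours:,} colours, {raw_bytes / 1024 / 1024:.1f} MB uncompressed")
# 16,777,216 colours, 3.5 MB -- file formats such as JPEG compress this heavily.
```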
OVERVIEW OF IMAGE PROCESSING
Image processing involves image recognition, image enhancement, image synthesis, image
reconstruction and image understanding. In a document image workflow management
system, the original is not altered; rather, annotations are recorded and stored separately. An image
processing system, on the other hand, may actually alter the contents of the image itself.
Examples of image processing systems applications include recognition of images, as in factory
floor quality assurance systems; image enhancement, as in satellite reconnaissance systems;
image synthesis, as in law enforcement suspect identification systems; and image
reconstruction, as in plastic surgery design systems.
Image enhancement: Most image display systems provide some level of image enhancement.
This may be a simple scanner sensitivity adjustment, very much akin to the light-dark
adjustment in a copier: increasing the sensitivity and contrast makes the picture darker by
making borderline pixels black or increasing the grey level of pixels. Or it may be more
complex, with capabilities built into the compression boards. These capabilities might include
the following:
a) Image calibration- the overall image density is calibrated, and the image pixels are
adjusted to a predefined level.
b) Real-time alignment- the image is aligned in real-time for skewing caused by improper
feeding of paper.
c) Grey-scale normalization- the overall level of an image is evaluated to determine if it
is skewed in one direction and if it needs correction.
d) RGB hue intensity adjustment- too much color makes a picture garish and fuzzy.
Automatic hue intensity adjustment brings the hue intensity within predefined ranges.
e) Color separation-A picture with very little color contrast can be dull and may not bring
out the details. The hardware used can detect and adjust the range of color separation.
Image Animation: Computer-created or scanned images can be displayed sequentially at
controlled display speeds to provide image animation that simulates real processes. Image
animation is a technology that was developed by Walt Disney and brought into every home in
the form of cartoons. The basic concept of displaying successive images at short intervals
to give the perception of motion is used successfully in designing moving parts such as
automobile engines.
Image Annotation: Image Annotation can be performed in one of two ways: as a text file
stored along with the image or as a small image stored with the original image.
Optical Character Recognition: Data entry has traditionally been the most expensive component
of data processing. OCR technology, used for data entry by scanning typed or printed words in
a form, has been in use for quite some time.
IMAGE FILE FORMAT
Image file formats are standardized means of organizing and storing digital images. Image files
are composed of digital data in one of these formats that can be rasterized for use on a computer
display or printer. An image file format may store data in uncompressed, compressed, or vector
formats. Once rasterized, an image becomes a grid of pixels, each of which has a number of
bits to designate its color equal to the color depth of the device displaying it.
There are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used
to display images on the Internet. A few of them are discussed below:
JPEG/JFIF: JPEG (Joint Photographic Experts Group) is a lossy compression method; JPEG-
compressed images are usually stored in the JFIF (JPEG File Interchange Format) file format.
The JPEG/JFIF filename extension is JPG or JPEG. Nearly every digital camera can save
images in the JPEG/JFIF format, which supports eight-bit grayscale images and 24-bit color
images (eight bits each for red, green, and blue). JPEG applies lossy compression to images,
which can result in a significant reduction of the file size.
TIFF: The TIFF (Tagged Image File Format) format is a flexible format that normally saves
eight bits or sixteen bits per color (red, green, blue) for 24-bit and 48-bit totals, respectively,
usually using either the TIFF or TIF filename extension. The tagged structure was designed to
be easily extendible, and many vendors have introduced proprietary special-purpose tags – with
the result that no one reader handles every flavour of TIFF file. TIFFs can be lossy or lossless,
depending on the technique chosen for storing the pixel data. TIFF can handle device-specific
color spaces, such as the CMYK defined by a particular set of printing press inks. OCR (Optical
Character Recognition) software packages commonly generate some form of TIFF image
(often monochromatic) for scanned text pages.
GIF: GIF (Graphics Interchange Format) is in normal use limited to an 8-bit palette, or 256
colors. GIF is most suitable for storing graphics with few colors, such as simple diagrams,
shapes, logos, and cartoon style images, as it uses LZW lossless compression, which is more
effective when large areas have a single color, and less effective for photographic or dithered
images. Due to its animation capabilities, it is still widely used to provide image animation
effects, despite its low compression ratio compared to modern video formats.
BMP: The BMP file format (Windows bitmap) handles graphic files within the Microsoft
Windows OS. Typically, BMP files are uncompressed, and therefore large and lossless; their
advantage is their simple structure and wide acceptance in Windows programs.
PNG: The PNG (Portable Network Graphics) file format was created as a free, open-source
alternative to GIF. The PNG file format supports eight-bit palette images (with optional
transparency for all palette colors) and 24-bit true color (16 million colors) or 48-bit true colour
with and without alpha channel - while GIF supports only 256 colors and a single transparent
color. PNG provides a patent-free replacement for GIF (though GIF is itself now patent-free),
and can also replace many common uses of TIFF. Indexed-color, grayscale, and true colour
images are supported, plus an optional alpha channel.
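Since most of these formats announce themselves with fixed "magic" bytes at the start of the file, a small sketch like the following can tell them apart (the signature table is abbreviated and the function name is my own; this is illustrative, not an exhaustive detector):

    # Identify common image formats by their leading "magic" bytes.
    SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": "PNG",
        b"\xff\xd8\xff": "JPEG/JFIF",
        b"GIF87a": "GIF (87a)",
        b"GIF89a": "GIF (89a)",
        b"BM": "BMP",
        b"II*\x00": "TIFF (little-endian)",
        b"MM\x00*": "TIFF (big-endian)",
    }

    def sniff_image_format(path: str) -> str:
        with open(path, "rb") as f:
            head = f.read(8)            # longest signature is 8 bytes
        for magic, name in SIGNATURES.items():
            if head.startswith(magic):
                return name
        return "unknown"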
IMAGE OUTPUT ON MONITOR AND PRINTER.
Monitors, also referred to as Visual Display Units (VDU), are the default standard output
devices of a computer. Output on the VDU is formed as images of thousands of tiny dots,
called pixels, that are arranged in a rectangular form. The sharpness of the image depends
upon the number of pixels. There are two kinds of viewing screen used for monitors.
Cathode-Ray Tube (CRT): CRT monitors look like an old television and are normally used
with desktop computer systems. The CRT display is made up of small picture elements called
pixels. The smaller the pixels, the better the image clarity or resolution. It takes more than
one illuminated pixel to form a whole character, such as the letter ‘e’ in the word help. A
finite number of characters can be displayed on a screen at once. The screen can be divided
into a series of character boxes - fixed locations on the screen where a standard character can
be placed. Most screens are capable of displaying 80 characters of data horizontally and 25
lines vertically.
There are some disadvantages of CRT −
 Large in Size
 High power consumption
Flat-Panel Display Monitor
Flat-panel monitors, once commonly used with laptops, are now the favourites for desktop
computer systems. They make use of two technologies: LCD (Liquid Crystal Display) and
LED (Light Emitting Diode). Flat-panel displays have reduced volume, weight and power
requirements in comparison to the CRT. They can be hung on walls or worn on the wrist.
Current uses of flat-panel displays include calculators, video games, monitors, laptop
computers, and graphics displays.
The flat-panel display is divided into two categories −
 Emissive Displays − Emissive displays are devices that convert electrical energy into
light. For example, plasma panel and LED (Light-Emitting Diodes).
 Non-Emissive Displays − Non-emissive displays use optical effects to convert sunlight
or light from some other source into graphics patterns. For example, LCD (Liquid-
Crystal Device).
PRINTER
A printer is an output device that prints paper documents. This includes text documents,
images, or a combination of both.
The printed output produced by a printer is often called a hard copy, which is the physical
version of an electronic document
TYPES OF PRINTERS
Printers fall broadly into two categories:
 Impact printers use a device to strike an inked ribbon, pressing ink from the ribbon
onto the paper. Examples: dot matrix and daisy wheel printers.
 Non-impact printers use different methods to place ink (or another substance) on the
page. Examples: laser and inkjet printers.
Inkjet Printer
It is a non-impact printer. An ink-jet printer produces
high-quality documents at a relatively low price. You can use the documents produced by an
ink-jet printer in most circumstances, except when only the highest quality is acceptable, such
as for important business correspondence. An ink-jet printer sprays ink through small nozzles
onto a page to produce images.
Laser Printer
A laser printer is a non-impact high-speed printer that is ideal for business documents and
graphics. Laser printers produce the highest quality images. Most laser printers are
monochrome, but colour laser printers are also available. Low-speed laser printers can print
4-12 pages per minute, while very high-speed laser printers can print 500-1000 pages per
minute.
A laser printer works like a photocopier to produce images on a page. A laser beam draws
images on a light-sensitive drum.
The drum picks up a fine powdered ink called toner, and then transfers the toner to the paper
to create the images.
Speed of printer
 Speed of character printers, such as dot matrix and inkjet printers, is measured in
Characters Per Second (CPS)
 Speed of line printers, such as drum printers, is measured in Lines Per Minute (LPM)
 Speed of page printers, such as laser printers, is measured in Pages Per Minute (PPM)
Unit 3: Introduction to Audio and Video
INTRODUCTION TO AUDIO
Audio technology is concerned with manipulating acoustic signals that can be perceived by
humans. Important aspects are psychoacoustics, music, the MIDI standard, and speech
synthesis and analysis. Most multimedia applications use audio in the form of music and/or
speech, and voice communication is of particular significance in distributed multimedia
applications.
Sound is perhaps the most important element of multimedia. It is meaningful “speech” in any
language, from a whisper to a scream. It can provide the listening pleasure of music, the
startling accent of special effects or the ambience of a mood-setting background. Sound is the
term used for the analog form; the digitized form of sound is called audio.
Sound is a physical phenomenon caused by the vibration of a material, such as a violin string
or a wooden log. This vibration triggers pressure wave fluctuations in the air around the
material. The pressure wave propagates through the air. The pattern of this oscillation is
called the wave form. We hear a sound when such a wave reaches our ears.
BASIC PROPERTIES/CHARACTERISTICS OF SOUND
1. Frequency refers to how often something happens -- or in our case, the number of
periodic, compression-rarefaction cycles that occur each second as a sound wave moves
through a medium -- and is measured in Hertz (Hz) or cycles/second. The term pitch is
used to describe our perception of frequencies within the range of human hearing.
2. Amplitude/Loudness refer to how loud or soft the sound is. The amplitude of a sound
is a measure of its power and is measured in decibels. It is perceived as loud and soft.
Studies in hearing show that we perceive sounds at very low and very high frequencies
as being softer than sounds in the middle frequencies, even though they have the same
amplitude.
3. Duration refers to how long a sound lasts.
4. Timbre (pronounced TAM-burr) refers to the characteristic sound or tone color of an
instrument. A violin has a different timbre than a piano.
5. Envelope refers to the shape or contour of the sound as it evolves over time. A simple
envelope consists of three parts: attack, sustain, and decay. An acoustic guitar has a
sharp attack, little sustain and a rapid decay. A piano has a sharp attack, medium sustain,
and medium decay. Voice, wind, and string instruments can shape the individual attack,
sustain, and decay portions of the sound.
6. Location describes the sound placement relative to our listening position. Sound is
perceived in three-dimensional space based on the time difference it reaches our left
and right eardrums.
These six properties of sound are studied in the fields of music, physics, acoustics, digital
signal processing (DSP), computer science, electrical engineering, psychology, and
biology.
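A minimal sketch tying the first three of these properties together: the Python snippet below (standard library only; the function name, file name and defaults are my own) synthesises a pure tone whose frequency, amplitude and duration are set explicitly:

    import math, struct, wave

    def write_sine(path, freq_hz=440.0, amplitude=0.5, duration_s=1.0, rate=44100):
        """Synthesise a pure tone: frequency sets pitch, amplitude sets loudness,
        duration sets how long the sound lasts."""
        n_samples = int(rate * duration_s)
        frames = bytearray()
        for i in range(n_samples):
            value = amplitude * math.sin(2 * math.pi * freq_hz * i / rate)
            frames += struct.pack("<h", int(value * 32767))   # 16-bit sample
        with wave.open(path, "wb") as w:
            w.setnchannels(1)      # monophonic
            w.setsampwidth(2)      # 16-bit samples
            w.setframerate(rate)
            w.writeframes(bytes(frames))

    write_sine("a440.wav")   # concert A: 440 Hz

A pure sine wave has no envelope shaping and a fixed timbre; real instruments add the attack/sustain/decay contour and harmonic content described above.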
NATURE OF SOUND WAVES
Sound is a longitudinal, mechanical wave, in which the particles oscillate to and fro in the
same direction as the wave propagation. Sound waves cannot be transmitted through a vacuum.
The transmission of sound requires a medium, which can be solid, liquid, or gas.
Sound is a variation in pressure. A region of increased pressure on a sound wave is called a
compression (or condensation). A region of decreased pressure on a sound wave is called a
rarefaction (or dilation).
The sources of sound
 vibrating solids
 rapid expansion or compression (explosions and implosions)
 Smooth (laminar) air flow around blunt obstacles may result in the formation of vortices
(the plural of vortex) that snap off or shed with a characteristic frequency. This process
is called vortex shedding and is another means by which sound waves are formed. This
is how a whistle or flute produces sound.
Human hearing and speech
Humans are generally capable of hearing sounds between 20 Hz and 20 kHz.
Sounds with frequencies above the range of human hearing are
called ultrasound. Sounds with frequencies below the range of human hearing are called
infrasound.
ELEMENTS OF A SOUND SYSTEM
- Microphone
A microphone or mic or mike is a transducer that converts sound into an electrical signal.
Microphones are used in many applications such as telephones, hearing aids, public address
systems for concert halls and public events, motion picture production, live and recorded audio
engineering, sound recording, two-way radios, megaphones, radio and television broadcasting,
and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes
such as ultrasonic sensors or knock sensors.
- Amplifier
Amplification is fundamental to modern electronics, and amplifiers are widely used in almost
all electronic equipment. An amplifier is an electronic device that can increase the power of a
signal. An amplifier uses electric power from a power supply to increase the amplitude of a
signal. The amount of amplification provided by an amplifier is measured by its gain: the ratio
of output voltage, current, or power to input. An amplifier is a circuit that has a power gain
greater than one.
An amplifier can either be a separate piece of equipment or an electrical circuit contained
within another device. Amplifiers can be categorized in different ways. One is by the frequency
of the electronic signal being amplified. For example, audio amplifiers amplify signals in the
audio (sound) range of less than 20 kHz, while RF amplifiers amplify frequencies in the radio-
frequency range between 20 kHz and 300 GHz.
- Speakers
Speakers are popular output devices used with computer systems. They receive audio input
from the computer's sound card and produce audio output in the form of sound waves. Most
computer speakers are active speakers, meaning they have an internal amplifier which allows
you to increase the volume, or amplitude, of the sound. Speakers usually come in pairs, which
allows them to produce stereo sound from two separate audio channels.
- Sound card
The sound card is an expansion card that allows the computer to send audio information to an
audio device, like speakers, a pair of headphones, etc. Although the computer does not need a
sound device to function, they are included on every machine in one form or another, either in
an expansion slot or built into the motherboard (onboard). Unlike the CPU and RAM, the sound
card is not a necessary piece of hardware required to make a computer work. The terms audio
card, audio adapter, and sound adapter are sometimes used in place of sound card.
Creative (Sound Blaster), Turtle Beach, and Diamond Multimedia are popular sound card
makers, but there are many others.
DIGITAL AUDIO
Digital audio is a technology that is used to record, store, manipulate, generate and reproduce
sound using audio signals that have been encoded in digital form.
It also refers to the sequence of discrete samples that are taken from an analog audio waveform.
Instead of a continuous sinusoidal wave, digital audio is composed of discrete points which
represent the amplitude of the waveform approximately.
The more samples taken, the better the representation, and hence the higher the quality of the
digital audio. Most modern multimedia devices can only process digital audio, and in the case
of cell phones requiring analog audio input, they still convert it to digital before transmission.
To create a digital audio from an analog audio source, tens of thousands of samples are taken
per second to ensure the replication of the waveform, with each sample representing the
intensity of the waveform in that instant.
The samples are stored in binary form, the same as any other digital data, regardless of type.
The samples, which are merged into a single data file, must be formatted correctly in order to
be played on a digital player, with the most common digital audio format being MP3.
Apart from the sampling frequency, another parameter in digital encoding is the number of bits
used when taking samples. The common parameter used is 16-bit samples taken at a rate of
44.1 thousand samples per second, or 44.1 kilohertz (kHz). CD-quality stereo digital
audio therefore requires about 1.4 million bits of data per second.
PREPARING DIGITAL AUDIO FILES
Preparing digital audio files is fairly straightforward. If you have analog source materials –
music or sound effects that you have recorded on analog media such as cassette tapes:
 The first step is to digitize the analog material and record it onto computer-
readable digital media.
 It is necessary to focus on two crucial aspects of preparing digital audio files:
o Balancing the need for sound quality against your available RAM and hard disk
resources.
o Setting proper recording levels to get a good, clean recording.
Remember that the sampling rate determines the frequency at which samples will be drawn for
the recording. Sampling at higher rates more accurately captures the high frequency content of
your sound. Audio resolution determines the accuracy with which a sound can be digitized.
Formula for determining the size of the digital audio
Monophonic = Sampling rate * duration in seconds * (bit resolution / 8) * 1
Stereo = Sampling rate * duration in seconds * (bit resolution / 8) * 2
 The sampling rate is how often the samples are taken.
 The sample size is the amount of information stored. This is called as bit resolution.
 The number of channels is 2 for stereo and 1 for monophonic.
 The time span of the recording is measured in seconds.
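A direct translation of the formulas above into code, assuming nothing beyond the stated parameters (the function name is my own):

    def audio_file_bytes(sampling_rate, seconds, bit_resolution, channels):
        """Sampling rate * duration * (bit resolution / 8) * channels."""
        return int(sampling_rate * seconds * (bit_resolution / 8) * channels)

    # One minute of CD-quality stereo (44.1 kHz, 16-bit, 2 channels):
    print(audio_file_bytes(44100, 60, 16, 2))   # 10,584,000 bytes (~10 MB)

    # The same minute recorded monophonic at 22.05 kHz, 8-bit:
    print(audio_file_bytes(22050, 60, 8, 1))    # 1,323,000 bytes (~1.3 MB)

This is why balancing sound quality against RAM and disk resources matters: halving the sampling rate, bit resolution and channel count together cuts the file to one-eighth of its size.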
Editing Digital Recordings
Once a recording has been made, it will almost certainly need to be edited. The basic sound
editing operations that most multimedia producers need are as follows:
1. Multiple Tasks: Able to edit and combine multiple tracks and then merge the tracks and
export them in a final mix to a single audio file.
2. Trimming: Removing dead air or blank space from the front of a recording and an
unnecessary extra time off the end.
3. Splicing and Assembly: Using the same tools mentioned for trimming, removing the
extraneous noises that inevitably creep into recording.
4. Volume Adjustments: If you are trying to assemble ten different recordings into a single
track, there is little chance that all the segments will have the same volume.
5. Format Conversion: In some cases, digital audio editing software might read a format
different from that read by your presentation or authoring program.
6. Resampling or downsampling: If sounds have been recorded and edited at 16-bit
sampling rates but will be used at lower rates, they must be resampled.
7. Equalization: Some programs offer digital equalization capabilities that allow you to
modify a recording's frequency content so that it sounds brighter or darker.
8. Digital Signal Processing: Some programs allow you to process the signal with
reverberation, multitap delay, and other special effects using DSP routines.
9. Reversing Sounds: Another simple manipulation is to reverse all or a portion of a digital
audio recording. Sounds can produce a surreal, otherworldly effect when played
backward.
10. Time Stretching: Advanced programs let the user alter the length of a sound file without
changing its pitch. This feature can be very useful, but most time-stretching algorithms
will severely degrade the audio quality.
MUSICAL INSTRUMENT DIGITAL INTERFACE (MIDI)
Musical Instrument Digital Interface (MIDI) is an industry standard for representing sound in
a binary format. MIDI is not an audio format, however. It does not store actual digitally
sampled sounds. Instead, MIDI stores a description of sounds, in much the same way that a
vector image format stores a description of an image and not image data itself. Sound in MIDI
data is stored as a series of control messages. Each message describes a sound event using
terms such as pitch, duration, and volume. When these control messages are sent to a MIDI-
compatible device, the information in the message is interpreted and reproduced by the device.
The MIDI standard also defines the interconnecting hardware used by MIDI devices and the
communications protocol used to interchange the control information.
MIDI data may be compressed, just like any other binary data, and does not require special
compression algorithms in the way that audio data does.
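As a rough illustration of such control messages, the sketch below builds raw Note On and Note Off bytes following the MIDI 1.0 wire format (the helper names are my own); a real application would send these bytes to a MIDI port rather than print them:

    # A MIDI Note On message is three bytes:
    # status (0x90 | channel), key number (0-127), velocity (0-127).
    def note_on(channel: int, key: int, velocity: int) -> bytes:
        return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

    # Note Off uses status 0x80; release velocity is often just 0.
    def note_off(channel: int, key: int) -> bytes:
        return bytes([0x80 | (channel & 0x0F), key & 0x7F, 0])

    print(note_on(0, 60, 100).hex())   # '903c64' -> middle C, velocity 100, channel 1

Three bytes describe an entire note event, which is why MIDI files are tiny compared with sampled audio of the same music.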
FILE FORMATS
Hundreds of file formats exist for recording and playing digital sound and music files. While
many of these file formats are software dependent — for example, a Creative Labs Music File
is a .cmf — there are several well-known and widely supported file formats.
Audio files come in all types and sizes. And while we may all be familiar with MP3, what about
other formats like, AAC, FLAC, OGG, or WMA? Why do so many standards exist?
File Format and Codec
An audio file format and an audio codec (compressor/decompressor) are two very different things.
Audio codecs are the libraries that are executed in multimedia players. The audio codec is
actually a computer program that compresses or decompresses digital audio data according to
the audio file format specifications. For example, the WAV audio file format is usually coded
in the PCM format, as are the popular Macintosh AIFF audio files.
Audio Formats
Audio formats can be broken down into three main categories: uncompressed formats, lossless
compression formats, and lossy compression formats.
 Uncompressed audio formats (often referred to as PCM formats) are simply formats
that use no compression. This means larger file sizes. A WAV audio file is an example
of an uncompressed audio file.
 Lossless compression applies compression to an uncompressed audio file, but it doesn't
lose information or degrade the quality of the digital audio file. The WMA Lossless
variant of the WMA format uses lossless compression.
 Lossy compression will result in some loss of data as the compression algorithm
eliminates redundant or unnecessary information. Lossy compression has become
popular online because its small file size makes it easier to transmit over the Internet.
MP3 and Real Audio files use lossy compression.
COMMON AUDIO FORMATS
MP3 (.mp3)
MP3 is the name of the file extension and also the name of the file type for MPEG audio
layer 3. Released back in 1993, it quickly exploded in popularity, eventually becoming the
most popular audio format in the world for music files. It uses perceptual audio coding and
psychoacoustic compression to remove superfluous information. The main pursuit of MP3
is to cut out the sound data that exists beyond the hearing range of most normal people
and to reduce the quality of sounds that aren't as easy to hear, and then to compress all other
audio data as efficiently as possible.
AAC
AAC stands for Advanced Audio Coding. It was developed in 1997 as the successor to MP3,
and while it did catch on as a popular format to use, it never really overtook MP3 as the most
popular for everyday music and recording.
The compression algorithm used by AAC is much more advanced and technical than MP3, so
when you compare a particular recording in MP3 and AAC formats at the same bitrate, the
AAC one will generally have better sound quality.
WMA - Windows Media Audio (.wma)
Short for Windows Media Audio, WMA is a Microsoft file format for encoding digital audio
files similar to MP3 though can compress files at a higher rate than MP3. WMA files, which
use the ".wma" file extension, can be of any size compressed to match many different
connection speeds, or bandwidths.
WAV (.wav)
WAV is the format used for storing sound in files developed jointly by Microsoft and IBM.
Support for WAV files was built into Windows 95 making it the de facto standard for sound
on PCs. WAV sound files end with a .wav extension and can be played by nearly all Windows
applications that support sound.
AIFF
AIFF stands for Audio Interchange File Format. Similar to how Microsoft and IBM developed
WAV for Windows, AIFF is a format that was developed by Apple for Mac systems back in
1988.
Also similar to WAV files, AIFF files can contain multiple kinds of audio. For example, there
is a compressed version called AIFF-C and another version called Apple Loops which is used
by GarageBand and Logic Audio — and they all use the same AIFF extension.
Real Audio (.ra .ram .rm)
Real Audio is a proprietary format and is used for streaming audio that enables you to play
digital audio files in real-time. To use this type of file you must have RealPlayer (for Windows
or Mac), which you can download for free. Real Audio was developed by RealNetworks.
MIDI - Musical Instrument Digital Interface (.mid)
It is a standard adopted by the electronic music industry for controlling devices, such as
synthesizers and sound cards, that emit music. At minimum, a MIDI representation of a sound
includes values for the note's pitch, length, and volume. It can also include additional
characteristics, such as attack and delay time.
OGG (.ogg)
Ogg is an audio compression format, comparable to other formats used to store and play digital
music, but differs in that it is free, open and unpatented. It uses Vorbis, a specific audio
compression scheme that's designed to be contained in Ogg.
FLAC
FLAC stands for Free Lossless Audio Codec. A bit on the nose maybe, but it has quickly
become one of the most popular lossless formats available since its introduction in 2001.
What’s nice is that FLAC can compress an original source file by up to 60% without losing a
single bit of data. What’s even nicer is that FLAC is an open source and royalty-free format
rather than a proprietary one, so it doesn’t impose any intellectual property constraints.
ALAC
ALAC stands for Apple Lossless Audio Codec. It was developed and launched in 2004 as a
proprietary format but eventually became open source and royalty-free in 2011. ALAC is
sometimes referred to as Apple Lossless. While ALAC is good, it’s slightly less efficient than
FLAC when it comes to compression.
RED BOOK STANDARD
The method for digitally encoding the high-quality stereo of the consumer CD music market is
an international standard, ISO 10149, also called the Red Book standard.
The developers of this standard claim that the digital audio sample size and sample rate of Red
Book audio allow accurate reproduction of all sounds that humans can hear. The Red Book
standard specifies audio recorded at a sample size of 16 bits and a sampling rate of 44.1 kHz.
VIDEO – INTRODUCTION
Video is a visual multimedia source that combines a sequence of images to form a moving
picture. The video transmits a signal to a screen and processes the order in which the screen
captures should be shown. Videos usually have audio components that correspond with the
pictures being shown on the screen.
Video is an excellent tool for delivering multimedia. Video places the highest performance
demand on computer and its memory and storage. Digital video has replaced analog video as
the method of choice for making and delivering video for multimedia
ANALOG VS DIGITAL VIDEO
Digital video has supplanted analog video as the method of choice for making video for
multimedia use. While broadcast stations and professional production and post-production
houses remain greatly invested in analog video hardware, digital video gear produces excellent
finished products at a fraction of the cost of analog.
A digital camcorder directly connected to a computer workstation eliminates the image-
degrading analog-to-digital conversion step typically performed by expensive video capture
cards, and brings the power of nonlinear video editing and production to everyday users.
BROADCAST VIDEO STANDARDS
Four broadcast and video standards and recording formats are commonly in use around the
world: NTSC, PAL, SECAM, and HDTV. These standards and formats are not easily
interchangeable.
NTSC: The United States, Japan, and many other countries use a system for broadcasting and
displaying video that is based upon the specifications set forth by the 1952 National Television
Standards Committee. These standards define a method for encoding information into the
electronic signal that ultimately creates a television picture. As specified by the NTSC standard,
a single frame of video is made up of 525 horizontal scan lines drawn onto the inside face of a
phosphor-coated picture tube every 1/30th of a second by a fast-moving electron beam.
PAL: The Phase Alternating Line (PAL) system is used in the United Kingdom, Europe,
Australia, and South Africa. PAL is an integrated method of adding color to a black-and-white
television signal that paints 625 lines at a frame rate of 25 frames per second.
SECAM: The Sequential Color with Memory (SECAM) system is used in France, Russia, and a
few other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both
the NTSC and the PAL color systems in its basic technology and broadcast method.
HDTV: High Definition Television (HDTV) provides high resolution in a 16:9 aspect ratio.
This aspect ratio allows the viewing of Cinemascope and Panavision
movies. There is contention between the broadcast and computer industries about whether to
use interlacing or progressive-scan technologies.
RECORDING FORMATS
S-VHS video: In S-VHS video, color and luminance information are kept on two separate
tracks. The result is a definite improvement in picture quality. This standard is also used in
Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast stations, this
would not be the best choice.
Component (YUV): In the early 1980s, Sony began to experiment with a new portable
professional video format based on Betamax, while Panasonic developed their own standard
based on a similar technology, called “MII.” Betacam SP has become the industry standard for
professional video field recording. This format may soon be eclipsed by a new digital version
called “Digital Betacam.”
VIDEO COMPRESSION
To digitize and store a 10-second clip of full-motion video in a computer requires the transfer
of an enormous amount of data in a very short amount of time. Reproducing just one frame of
digital component video at 24 bits requires almost 1 MB of computer data; 30 seconds of video
will fill a gigabyte hard disk. Full-size, full-motion video requires that the computer deliver
data at about 30MB per second. This overwhelming technological bottleneck is overcome using
digital video compression schemes or codecs (coders/decoders). A codec is the algorithm used
to compress a video for delivery and then decode it in real-time for fast playback.
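The arithmetic behind those figures can be checked with a short sketch (the function name is my own):

    def video_data_rate(width, height, bits_per_pixel, fps):
        """Bytes per second of uncompressed video."""
        return width * height * (bits_per_pixel // 8) * fps

    bytes_per_frame = 640 * 480 * 3              # 921,600 bytes: "almost 1 MB"
    print(video_data_rate(640, 480, 24, 30))     # 27,648,000 bytes/s (~30 MB/s)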
Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak,
Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress digital video
information. Compression schemes use Discrete Cosine Transform (DCT), an encoding
algorithm that quantifies the human eye’s ability to detect color and image distortion. All of
these codecs employ lossy compression algorithms.
MPEG: The MPEG standard has been developed by the Moving Picture Experts Group, a
working group convened by the International Standards Organization (ISO) and the
International Electro-technical Commission (IEC) to create standards for digital representation
of moving pictures and associated audio and other data. MPEG1 and MPEG2 are the current
standards. Using MPEG1, you can deliver 1.2 Mbps of video and 250 Kbps of two-channel
stereo audio using CD-ROM technology. MPEG2, a completely different system from
MPEG1, requires higher data rates (3 to 15 Mbps) but delivers higher image resolution, picture
quality, interlaced video formats, multiresolution scalability, and multichannel audio features.
DVI/Indeo: DVI is a proprietary, programmable compression/decompression technology based
on the Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integration)
chips to separate the image processing and display functions.
Two levels of compression and decompression are provided by DVI: Production Level Video
(PLV) and Real Time Video (RTV). PLV and RTV both use variable compression rates. DVI’s
algorithms can compress video images at ratios between 80:1 and 160:1. DVI will play back
video in full-frame size and in full color at 30 frames per second.
VIDEO FRAMES AND FRAME RATE
In video and animation, a frame is one of the many still images which compose the complete
moving picture. The term dates from the end of the 19th century and the beginnings of
modern filmmaking.
A frame rate refers to the number of individual frames or images that are displayed per second
of film or TV display. The frame rates for TV and films are standardized by the Society of
Motion Picture and Television Engineers, also known as SMPTE.
For example, a Flash movie on the Web may play 12 frames per second, creating the
appearance of motion. Most video is shot at 24 or 30 frames per second, or FPS. FPS is often
measured in 3D games as a way of checking how fast the graphics processor of a computer is.
Practically, there is no “best frame rate” for shooting video; it depends purely on the end
result to be achieved. Movies and films are almost exclusively projected at 24 frames per
second. Television does not have an internationally accepted frame rate: PAL and SECAM use
25 FPS in Europe, while Japan uses 29.97 FPS NTSC.
PREVALENT FRAME RATES
24 FPS (Film; HD): This is the universally accepted film frame rate. Movie theatres always
use this frame rate. Many high definition formats can record and play back video at this rate.
23.98 FPS (Film; HD with NTSC compatibility): Many HD formats (and some SD formats)
can record at this speed, which is usually preferred over true 24 FPS because of NTSC
compatibility.
25 FPS (PAL; HD video): The European video standard. Film is sometimes shot at 25 FPS
when destined for editing or distribution on PAL video.
29.97 FPS (NTSC; HD video): This has been the color NTSC video standard since 1953.
30 FPS (HD video; early black-and-white NTSC video): Before color was added to NTSC
video signals, the frame rate was truly 30 FPS. However, this format is almost never used
today.
50 FPS (PAL; HD video): This refers to the interlaced field rate (double the frame rate) of
PAL. Some 1080i HD cameras can record at this frame rate.
59.94 FPS (HD video with NTSC compatibility): HD cameras can record at this frame rate,
which is compatible with NTSC video. It is also the interlaced field rate of NTSC video.
60 FPS (HD video): High definition equipment can often play and record at this frame rate,
but 59.94 FPS is much more common because of NTSC compatibility.
FILE FORMATS
1. Flash Video Format or .flv: Due to the availability of cross-platform Flash video
players, this format has become ever more popular. In fact, flash videos can be played
within Flash movie files, and they are supported by every major browser on every
platform. The best thing about flash videos is that they support both streaming and
progressive downloads.
2. AVI Format or .avi: Created by none other than Microsoft, the AVI format effectively
stores data to be encoded in different codecs. The name is an abbreviation of "audio video
interleave", and the format contains both video and audio data. In this
format, you will notice that it utilizes less compression compared to other similar
formats. This is also one of the popular formats used by internet users.
3. MP4 Format: This is used in storing visual and audio streams online. This mainly utilizes
a separate compression intended for video and audio tracks. The video will be compressed
using the MPEG-4 video encoding.
4. MPG Format: Standardized by the famous MPEG, this video format is used to create
downloadable movies.
5. 3GP File Extension: This 3GP format is designed for transmitting video and audio files
between the internet and 3G cell phones.
6. The RealVideo Format: This mainly serves its purpose of streaming videos at low
bandwidths.
7. Quicktime Format [.MOV]: This is likewise used for saving video and movie files on the
internet. It can contain single or multiple tracks that store audio, text, video and
effects. It is also compatible with both Windows and Mac platforms.
Unit 4: Basics of Animation, Files and Disc formats
ANIMATION – INTRODUCTION
Animation is all about generating a chain of drawings or pictures by way of a simulation
procedure for creating movement. It is a type of optical illusion through which viewers are able
to see still images or drawings moving. The procedure involves the manifestation of motion as
a result of displaying still pictures or photographs one after the other at the rate of 24 pictures
per second.
TYPES OF ANIMATIONS
a) Traditional animation: Traditional animation, sometimes referred to as cel animation,
is one of the older forms of animation, in which the animator draws every frame to create
the animation sequence. In traditional animation, animators draw images on a
transparent piece of paper fitted on a peg using a coloured pencil, one frame at a time.
Animators usually test animations with very rough drawings to see how many frames
they would need for the action to work. The animation process of traditional animation
can be lengthy and costly.
b) 2D Vector-based animation: 2D animation is the term often used when referring to
traditional hand-drawn animation, but it can also refer to computer vector animations
that adopt the techniques of traditional animation.
Vector-based animations, meaning computer-generated 2D animations, use the exact
same techniques as traditional animation, but benefit from the lack of physical objects
needed to make traditional 2D animations, as well as the ability to use computer
interpolation to save time.
In addition to the option of animating frame by frame, an animator has the option of
creating rigs for the characters and then moving the body parts individually instead of
drawing the character over and over. These flexibilities provide beginners with more
options when approaching animation, especially if drawing isn’t their strong suit.
Traditional animation, on the other hand, requires very strong drawing skills.
c) 3D computer animation: 3D animation works in a completely different way than
traditional animation. They both require an understanding of the same principles of
movement and composition, but the technical skill set is very different for each task.
3D animation, also referred to as CGI, or just CG, is made by generating images using
computers. That series of images forms the frames of an animated shot. Instead of being
drawn or constructed with clay, characters in 3D animation are digitally modelled in the
program, and then fitted with a ‘skeleton’ that allows animators to move the models.
Animation is done by posing the models on certain key frames, after which the
computer will calculate and perform an interpolation between those frames to create
movement (a small interpolation sketch follows this list).
d) Motion graphics: Motion graphics is quite different from the other types of animation.
Unlike the other types on our list it is not character or story driven. It’s the art of
creatively moving graphic elements or texts, usually for commercial or promotional
purposes. Think animated logos, explainer videos, app commercials, television promos
or even film opening titles.
The process of creating Motion Graphics depends on the programs used, since video
editing software often has different UIs or settings, but the idea is the same. Motion
Graphics usually involves animating images, texts or video clips using key frames that
are tweened to make a smooth motion between frames.
e) Stop motion: Stop-motion animation refers to any animation that uses objects
that are photographed in a sequence to create the illusion of movement. The process of
stop-motion animation is very long, as each object has to be carefully moved inch by
inch, while it is photographed one frame at a time, to create a fluid sequence of
animation.
The different types of stop-motion animation are Claymation, Cut-Out, Silhouette,
Lego, and Pixelation.
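A minimal sketch of the key-frame interpolation ("tweening") idea mentioned in the 3D animation entry above, using plain linear blending between two hypothetical poses (real tools also offer eased and spline interpolation):

    # Tweening: the computer fills in the frames between two artist-posed keyframes.
    def tween(pose_a, pose_b, t):
        """Blend two poses (dicts of joint angles) at parameter t in [0, 1]."""
        return {joint: (1 - t) * pose_a[joint] + t * pose_b[joint] for joint in pose_a}

    key_0 = {"elbow": 0.0, "knee": 10.0}     # keyframe posed at frame 0
    key_12 = {"elbow": 90.0, "knee": 45.0}   # keyframe posed at frame 12

    for frame in range(13):                  # the in-between frames are computed
        print(frame, tween(key_0, key_12, frame / 12))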
USES OF ANIMATIONS
1. Cartoons: The most common use of animation, and perhaps the origin of it, is cartoons.
Cartoons appear all the time on television and the cinema and can be used for
entertainment, advertising, presentations and many more applications that are only
limited by the imagination of the designer.
2. Simulations: Many times, it is much cheaper to train people to use certain machines in
a virtual environment (i.e., in a computer simulation) than to actually train them on
the machines themselves. Simulations of all types that use animation are supposed to
respond to real-time stimuli, and hence the events that will take place are non-
deterministic.
3. Scientific Visualisation: Graphical visualisation is very common in all areas of science.
The usual form that it takes is x-y plots, and when things get more complicated, three-
dimensional graphs are used. However, there are many cases where something is too
complex to be visualised in a three-dimensional plot, even if that has been enhanced
with some other effect (e.g., colour). Here is where animation comes in. Data is
represented in multiple images (frames) which differ a little from each other, and
displayed one after the other to give the illusion of motion.
4. Teaching and Communicating: One of the most difficult aspects of teaching is
communicating ideas effectively. When this becomes too difficult using the classical
teaching tools (speech, blackboard etc.) animation can be used to convey information.
By its nature, an animation sequence contains much more information than a single
image or page of text. This, and the fact that an animation can be very “pleasing to the
eye”, makes animation a perfect tool for learning.
5. Medical Animation: A medical animation is a short educational film, usually based
around a physiological or surgical topic, rendered using 3D computer graphics. While
it may be intended for a variety of audiences, medical animation is most commonly
utilized as an instructional tool for medical professionals or their patients.
6. Architecture Visualization: Architectural Animation is a short architectural movie
created on a computer. A computer-generated building is created along with
landscaping and sometimes moving people & vehicles.
7. Mechanical Animation: Using computer modelling and animation to create virtual
models of products and mechanical designs can save companies thousands to millions
of dollars, by cutting down on development costs. Working in a virtual world can let
developers eliminate a lot of problems that would normally require extensive physical
test models & experimentation.
8. Forensic Animation: Forensic animation is a branch of forensics in which animated
recreations of incidents are created to aid investigators and help solve cases. Examples
include the use of computer animation, stills, and other audio-visual aids.
DATA COMPRESSION - INTRODUCTION
Compression, or "data compression," is used to reduce the size of one or more files. When a
file is compressed, it takes up less disk space than an uncompressed version and can be
transferred to other systems more quickly. Therefore, compression is often used to save disk
space and reduce the time needed to transfer files over the Internet.
Compression is used for different types of data, e.g. documents, sound and video. It is
applied either to speed the transfer of a file, whether on a disk or over a network, or to ensure
that the file can fit on the storage device, or both. The aim of compression is to reduce the
file size while keeping the quality of the original data.
When discussing compressing graphics (bitmaps) for use in multimedia, it should be
noted that bitmap graphics can produce some of the largest file sizes compared to other media
elements, such as text files, vectors and Flash animations. These large file sizes result in slow
loading times, particularly on the web, and a heavy burden on system resources (memory
and storage space).
A common data compression technique removes and replaces repetitive data elements and
symbols to reduce the data size (see the sketch below). Data compression for graphical data
can be lossless compression or lossy compression: the former encodes the repetitive data so
that it can be fully restored, while the latter permanently discards redundant data.
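A minimal sketch of one such technique, run-length encoding, which replaces each run of repeated symbols with a (symbol, count) pair; this toy version is purely illustrative, not any particular codec:

    # Run-length encoding: represent repeats as (symbol, run length) pairs.
    def rle_encode(data):
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            runs.append((data[i], j - i))
            i = j
        return runs

    def rle_decode(runs):
        return "".join(ch * n for ch, n in runs)

    encoded = rle_encode("AAAABBBCCD")
    print(encoded)                              # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
    assert rle_decode(encoded) == "AAAABBBCCD"  # lossless: the original is fully restored

Because decoding restores the input exactly, this is a lossless scheme; it pays off on data with long single-colour runs (such as simple graphics) and does poorly on noisy photographic data.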
There are two primary types of data compression:
 File Compression: File compression can be used to compress all types of data into a
compressed archive. These archives must first be decompressed with a decompression
utility in order to open the original file(s).
 Media Compression: Media compression is used to save compressed image, audio,
and video files. Examples of compressed media formats include JPEG images, MP3
audio, and MPEG video files. Most image viewers and media playback programs can
open standard compressed file types directly.
COMPRESSION TECHNIQUES
a) Lossless Compression: Lossless compression is a class of data compression algorithms
that allows the exact original data to be reconstructed from the compressed data. The
term lossless is in contrast to lossy data compression, which only allows an
approximation of the original data to be reconstructed, in exchange for better
compression rates.
Most lossless compression programs do two things in sequence: the first step generates
a statistical model for the input data, and the second step uses this model to map input
data to bit sequences in such a way that "probable" data will produce shorter output
than "improbable" data. Lossless refers to data compression techniques in which no data is
lost. This type of compression can be applied not just to graphics but to any kind of computer
data, such as spreadsheets, text documents and software applications. If you need to
send files as an email attachment, then you may be best to compress them first. A common
format which is used to do this is the .zip format. When you open the compressed file,
all the original data is retrieved.
For example, if you compress a word document with a lossless algorithm, it looks for
repeated sequences of letters and temporarily replaces them with shorter codes. When the
document is decompressed, the original letters are restored.
b) Lossy compression: Lossy compression technologies attempt to eliminate redundant
or unnecessary information. Most video compression technologies, such as MPEG, use
a lossy technique. Lossy compression is a data encoding method which discards some of
the data in order to achieve its goal, with the result that decompressing the data yields
content that is different from the original, though similar enough to be useful in some way.
Lossy compression is most commonly used to compress multimedia data (audio, video, still
images), especially in applications such as streaming media and internet telephony.
Lossy compression formats suffer from generation loss: repeatedly compressing and
decompressing the file will cause it to progressively lose quality. This is in contrast
with lossless data compression.
LOSSLESS vs. LOSSY DATA COMPRESSION
 The lossless technique keeps the source as it is during compression, while in the lossy
technique a change from the original source is expected, though very close to the original.
 The lossless technique is a reversible process, which means that the original data can be
reconstructed. The lossy technique is irreversible due to the loss of some data during
compression.
 The lossless technique produces larger compressed files compared with the lossy technique.
 The lossy technique is mostly used for images and sound.
CODEC
CODEC stands for COder and DECoder. It is a small piece of software that
encodes and compresses data (usually an audio or video clip) for data storage and/or decodes
and decompresses the same for playback or editing. However, a codec can also be a physical
piece of hardware responsible for turning analog video and audio into a digital format.
TYPE OF CODECS
Hardware-Based: It performs analog-to-digital and digital-to-analog conversion in real time
mode. For example, a modem used for sending data traffic over analog voice circuits.
Software-Based: The process of encoding source voice and video captured by a microphone
or video camera in digital form for transmission to other participants in calls, video
conferences, and streams or broadcasts.
Audio Codec: It converts analog audio signals into digital signals for transmission or encodes
them for storage. At the receiving end, the device converts the digital signals back to analog
form using an audio decoder for playback. Examples are the codecs used in the sound cards of
personal computers.
Video Codec: It converts analog video signals into digital signals for transmission or encodes
them for storage. At the receiving end, the device converts the digital signals back to analog
form using a video decoder for playback. Examples are the codecs used in the display cards of
personal computers.
OVERVIEW OF GIF, JPEG, MPEG.
1. GIF: Stands for Graphics Interchange Format, a bitmap image format that was
developed by CompuServe back in 1987 (GIF 87a). Later, in 1989, CompuServe
released an updated version called “GIF 89a”. The format is widely used for images on the
web and sprites (still or animated graphics) in software programs, due to its wide support
and portability.
It supports up to 8 bits of colour for each image (per pixel), allowing a maximum of
256 different colours (indexed colour). With the release of the “GIF 89a” version, images
gained support for transparent backgrounds and image metadata.
It also became popular due to the support for animation by allowing a stream of images to
be stored in a single file.
Since GIFs may only contain 256 colors, they are not ideal for storing digital photos, such
as those captured with a digital camera. GIFs are better suited for buttons and banners on
websites, since these types of images typically do not require a lot of colors.
GIF images are compressed using a lossless data compression technique (LZW) to reduce the
file size without compromising visual quality.
Usage of GIF images
a) Line art such as logos with limited number of colours.
b) Small animations and low-resolution video clips.
c) Low colour images and animation data for games.
2. JPEG: It stands for Joint Photographic Experts Group. It is the most popular image file
format and is commonly used in digital cameras to store photographs. It supports 2^24
(about 16.7 million) colours per pixel in an image.
The colours in a JPEG image are produced by using 8 bits for each color (red, green, and
blue) in the RGB color space. This provides 2^8 or 256 values for each of the three colors,
which combined allow for 256 × 256 × 256, or 16,777,216, colours. All three colours with
values of 0 produce pure black, whereas values of 255 for all three create white.
The Joint Photographic Experts Group published the first specification in 1992. Since then,
several variants of the format have been published, including JPEG 2000 and JPEG XR.
JPEG applies lossy compression to images, which can result in a significant reduction of
the file size. Applications can determine the degree of compression to apply, and the
amount of compression affects the visual quality of the result. JPEG can compress an image
by ten times with almost no perceptible degradation in quality.
Along with image data, JPEG files may also include metadata that tells about the contents
of the file such as, dimensions, colour space and colour profile (sRGB or Adobe RGB).
Digital photographers often prefer to capture images in a raw format for better control over
image editing in the highest quality possible. Thereafter, they export the pictures as
JPEG (.JPG).
JPEG files may also include EXIF data, added by the digital camera, recording aperture
settings, shutter speed, focal length, flash settings, ISO number and many more.
JPEG is not well suited for line drawings and other textual or iconic graphics, where the
sharp contrasts between adjacent pixels can cause noticeable changes. Such images are
better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format.
Moreover, JPEG is also not well suited for multiple edits, as some image quality tends to be
lost each time the image is recompressed. Instead, the image should be saved in a lossless
format, edited in that format, and finally published as JPEG for
distribution.
File extensions: .JPG, .JPEG, .JFIF, .JPX, .JP2
3. MPEG: It stands for Moving Picture Experts Group. The International Organization for
Standardization (ISO) and the International Electrotechnical Commission (IEC) formed the
group to set standards for encoding digital audio and video; its media compression standards
are widely adopted and universally available.
Since its establishment in 1988, the group has produced standards that help
industry offer end users an ever more enjoyable digital media experience. The MPEG
organization has produced a number of digital media standards since its inception.
Examples include:
MPEG-1 – Audio/video standards designed for digital storage media (such as an MP3 file)
MPEG-2 – Standards for digital television and DVD video
MPEG-4 – Multimedia standards for the computers, mobile devices, and the web
MPEG-7 – Standards for the description and search of multimedia content
MPEG-MAR – A mixed reality and augmented reality reference model
MPEG-DASH – Standards that provide solutions for streaming multimedia data over
HTTP (such as servers and CDNs)
MPEG compression significantly reduces the file size with very little loss in quality. This
makes transferring files over the Internet more efficient and faster.
File extensions: .MP3, .MP4, .M4V, .MPG, .MPE, .MPEG
CD-Technology
The invention of the laser diode, an essential part of the Compact Disc and all other
optical recording systems, led to the development of the Compact Disc. The basic principle is that
a fine laser beam is focused on a surface that contains digital information in the form of tiny
pits. Since the surface of the disc is reflective, the laser beam is reflected with the pattern of
the pits to a photodiode, after which the signal can be detected and converted into analogue
audio information. This means there is a non-contact readout system, which cannot damage the
information carrier, so that a Compact Disc in principle has an unlimited lifetime.
Points to ponder
 Consists of a circular disk, which is coated with a thin material that is highly reflective
 Laser beam technology is used for recording/reading of data on the disk
 A random-access medium for high capacity secondary storage
 It can store extremely large amounts of data in a limited space
TYPES OF OPTICAL DISKS
Optical disks can be classified in two ways:
 By media type: CD, DVD, Blu-ray
 By writability: ROM, WORM, Multi-session, RW (rewritable)
MEDIA WISE
COMPACT DISC
A compact disc is a portable circular polycarbonate disc that can be used to record, store
and play back audio, video and other data in digital form. The disc is coated with a
reflective aluminium layer on which a sequence of 'pits' is placed along a spiral track.
A standard compact disc measures 120 mm across, is 1.2 mm thick, weighs between 15 and 20
grams, and has a capacity of 80 minutes of audio, or 650 to 700 megabytes (MB) of data.
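The 700 MB figure follows directly from the disc's playing time and the standard CD sector
sizes (75 sectors per second; 2,048 data bytes per Mode 1 sector), as this small Python
calculation for the 80-minute variant shows:

    minutes = 80                      # playing time of a standard disc
    sectors = minutes * 60 * 75       # CDs are read at 75 sectors per second
    data_bytes = sectors * 2048       # 2,048 data bytes per CD-ROM Mode 1 sector
    print(data_bytes / 1024 / 1024)   # ~703 MB, the familiar "700 MB" figure
    audio_bytes = sectors * 2352      # audio sectors carry 2,352 bytes
    print(audio_bytes / 1024 / 1024)  # ~807 MB of raw audio data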
CDs are fragile and prone to scratches; they can be repaired, but disc readability may be
affected. To prevent corrosion and physical damage, a protective layer covers the reflective
surface.
DVD
DVD optical disc technology uses denser recording techniques in addition to layering and two-
sided manufacturing to achieve very large disc capacities. DVDs can hold video, audio and
computer data. DVD drives are also able to read CD-ROMs. The original purpose of DVD was
to hold video data in particular - DVD once was said to stand for Digital Video Disk.
However, as the number of DVD applications grew, the variety of data that can be stored on
a DVD was reflected in its present name, Digital Versatile Disc. A double-sided, dual-layer
DVD can store up to 17 GB of data on a single disc.
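For reference, the commonly cited nominal capacities of the four standard DVD
configurations can be tabulated as follows (values in decimal gigabytes; the 17 GB figure
above corresponds to DVD-18):

    dvd_capacities_gb = {              # nominal capacities, decimal gigabytes
        "DVD-5 (1 side, 1 layer)": 4.7,
        "DVD-9 (1 side, 2 layers)": 8.5,
        "DVD-10 (2 sides, 1 layer)": 9.4,
        "DVD-18 (2 sides, 2 layers)": 17.1,
    }
    for name, gb in dvd_capacities_gb.items():
        print(f"{name}: {gb} GB")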
BLU-RAY
Blu-ray (not Blue-ray), also known as Blu-ray Disc (BD), is the name of an optical disc
format jointly developed by the Blu-ray Disc Association (BDA), a group of the world's
leading consumer electronics, personal computer and media manufacturers (including Apple,
Dell, Hitachi, HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony,
TDK and Thomson). The format was designed for high-definition video and large-scale data
storage, holding 25 GB on a single-layer disc and 50 GB on a dual-layer disc.
WRITABILITY
Read-only memory (ROM) Disc
ROM disks are used for the distribution of standard program and data files. CD-ROMs are
stamped by the vendor, and once stamped, they cannot be erased and filled with new data.
ROM disks are particularly well suited to information that requires large storage capacity,
such as large software applications that include colour, graphics, sound, and especially video.
WORM (Write Once, Read Many) Disc
WORM disks can be written to once by the user, after which the recorded data can be read
many times but can never be erased or altered. CD-R is the most familiar example. Because
the contents cannot be modified after writing, WORM disks are well suited to archival
storage and to records that must be kept unchanged.
Multi-session Disc
A multisession disk is a recordable format that allows a compact disk to be recorded in
more than one recording session. If free space is left on the disk after the first session,
additional data can be written to it at a later date. Each session has its own lead-in,
program area, and lead-out, which takes up about 20 megabytes of space per session;
multisession recording is therefore less efficient than recording all the data at once, as
the sketch below illustrates.
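Using the approximate 20 MB per-session figure quoted above (an approximation; the true
lead-in/lead-out sizes differ between the first and later sessions), the remaining usable
space can be estimated in Python:

    def usable_mb(disc_mb=650, sessions=1, overhead_mb=20):
        """Approximate data space left after per-session lead-in/lead-out overhead."""
        return disc_mb - sessions * overhead_mb

    for n in range(1, 5):
        print(f"{n} session(s): about {usable_mb(sessions=n)} MB usable")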
Re-Writable Disc
RW disks, like magnetic storage disks, allow information to be recorded and erased many
times. Usually there is a separate erase cycle, although this may be transparent to the
user. RW is an optical disc format that allows repeated recording on a disc; it was
introduced by Hewlett-Packard, Mitsubishi, Philips, Ricoh, and Sony in 1997. RW drives can
write both CD-R and CD-RW discs and can read any type of CD.
SPEED
Rotation speed indicates the revolutions per minute or RPM range that the drive can produce.
Data transfer rate refers to the speed at which data can be read from an optical media drive.
The amount of time taken to write a disc depends on the speed of the recorder, the writing
method used and the amount of information to be written. Recording speed is measured in the
same way as the reading speed of ordinary CD-ROM drives and players: at single speed (1x) a
recorder writes 150 KB (153,600 bytes) of CD-ROM Mode 1 data per second, and at a multiple
of that figure at each speed increment above 1x, as the table below shows.
Read/Write Speed (CLV) | Audio (2,352 bytes/block) | CD-ROM Mode 1 (2,048 bytes/block) | CD-ROM Mode 2 (2,336 bytes/block) | CD-i/XA Form 1 (2,048 bytes/block) | CD-i/XA Form 2 (2,324 bytes/block)
1x  | 176,400   | 153,600   | 175,200   | 153,600   | 174,300
2x  | 352,800   | 307,200   | 350,400   | 307,200   | 348,600
4x  | 705,600   | 614,400   | 700,800   | 614,400   | 697,200
6x  | 1,058,400 | 921,600   | 1,051,200 | 921,600   | 1,045,800
8x  | 1,411,200 | 1,228,800 | 1,401,600 | 1,228,800 | 1,394,400
12x | 2,116,800 | 1,843,200 | 2,102,400 | 1,843,200 | 2,091,600
16x | 2,822,400 | 2,457,600 | 2,803,200 | 2,457,600 | 2,788,800
20x | 3,528,000 | 3,072,000 | 3,504,000 | 3,072,000 | 3,486,000
(All figures are in bytes per second.)
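Every entry in the table is simply the 1x rate for that sector format (block size
multiplied by 75 blocks per second) multiplied by the speed factor, as this short Python
sketch reproduces:

    base_rates = {                    # bytes per second at 1x (75 blocks per second)
        "Audio": 2352 * 75,           # 176,400
        "CD-ROM Mode 1": 2048 * 75,   # 153,600
        "CD-ROM Mode 2": 2336 * 75,   # 175,200
        "CD-i/XA Form 1": 2048 * 75,  # 153,600
        "CD-i/XA Form 2": 2324 * 75,  # 174,300
    }
    for speed in (1, 2, 4, 6, 8, 12, 16, 20):
        row = ", ".join(f"{fmt}: {rate * speed:,}" for fmt, rate in base_rates.items())
        print(f"{speed}x -> {row}")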
Writing Modes
Optical disc recording came into its own as writing speeds accelerated, driven by rapid
advances in hardware and media technology. Available units now employ a variety of writing
modes, including Constant Linear Velocity (CLV), Zone Constant Linear Velocity (ZCLV),
Partial Constant Angular Velocity (PCAV) and Constant Angular Velocity (CAV).
Constant Linear Velocity (CLV)
CDs were originally designed for consumer audio applications and initially operated using a
CLV mode to maintain a constant data transfer rate across the entire disc. The CLV mode sets
the disc’s rotation at 500 RPM decreasing to 200 RPM (1x CLV) as the optical head of the
player or recorder reads or writes from the inner to the outer diameter. Since the entire
disc is written at a uniform transfer rate, it takes, for example, roughly 76 minutes to
complete a full 74-minute/650 MB disc at 1x CLV. As the recording speed increases, the
transfer rate increases correspondingly, so that writing an entire disc takes about 9
minutes at 8x CLV and about 5 minutes at 16x.
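A back-of-the-envelope calculation in Python reproduces these figures; the extra couple of
minutes at 1x come from lead-in, lead-out and finalisation overhead, which the quoted
higher-speed times leave out:

    def clv_write_minutes(disc_minutes=74, speed=1):
        # At Nx CLV the whole disc streams N times faster than real time.
        return disc_minutes / speed

    print(clv_write_minutes(74, 8))    # 9.25 -> the "about 9 minutes" above
    print(clv_write_minutes(74, 16))   # 4.6  -> the "about 5 minutes" above
    # The ~76 minutes quoted for 1x adds roughly 2 minutes of lead-in/lead-out
    # and finalisation overhead on top of the 74 minutes of programme data.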
Zone Constant Linear Velocity (ZCLV)
In contrast to CLV which maintains a constant data transfer rate throughout the recording
process, ZCLV divides the disc into regions or zones and employs progressively faster CLV
writing speeds in each. For example, a 40x ZCLV recorder might write the first 10 minutes of
the disc at 20x CLV, the next 15 minutes at 24x CLV, the following 30 minutes at 32x CLV
and the remainder at 40x CLV speed.
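Assuming an 80-minute disc, so that the "remainder" zone above is 25 minutes long, the
total write time for that schedule works out as follows:

    zones = [(10, 20), (15, 24), (30, 32), (25, 40)]   # (disc minutes, CLV speed)
    total = sum(minutes / speed for minutes, speed in zones)
    print(f"{total:.1f} minutes")                      # ~2.7 minutes for 80 minutes of data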
Partial Constant Angular Velocity (PCAV)
Some recorders use the PCAV mode, which spins the disc at a lower fixed RPM while the
optical head is writing near the inner diameter, then shifts to CLV part way out on the
disc. As a result, the data transfer rate progressively increases until a predetermined
point is reached and thereafter remains constant. For example, a 24x PCAV recorder might
accelerate from 18x to 24x over the first 14 minutes of the disc, then maintain 24x CLV
writing for the remainder of the disc.
Constant Angular Velocity (CAV)
The CAV mode spins the disc at a constant RPM throughout the entire writing process.
Consequently, the data transfer rate continuously increases as the optical head writes from the
inner to outer diameter of the disc. For example, a 48x CAV recorder might begin writing at
22x at the inner diameter of the disc accelerating to 48x by the outer diameter of the disc.
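Assuming the speed multiplier ramps linearly with disc position (a simplification of real
drive behaviour), the total write time for an 80-minute disc in this example can be
estimated numerically:

    disc_minutes = 80                     # assumed disc length
    steps = 10_000                        # numerical integration of the varying speed
    total = 0.0
    for i in range(steps):
        position = (i + 0.5) / steps              # fraction of the disc written so far
        speed = 22 + (48 - 22) * position         # assumed linear ramp from 22x to 48x
        total += (disc_minutes / steps) / speed   # real time to write this slice
    print(f"{total:.1f} minutes")                 # ~2.4 minutes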
BURNING PROCESS (Example CD Burning)
The surface of a CD consists of a polycarbonate layer with a moulded spiral track on top.
The data are stored on the CD as a series of minute depressions known as 'pits' encoded
along this spiral track; the areas between the pits are known as 'lands'. The pits and
lands do not themselves represent the 1s and 0s: each change from pit to land or from land
to pit is read as a 1, while no change is read as a 0.
Burning a CD means creating this pattern of pits and lands on the polycarbonate layer.
Since the data must be accurately encoded at such a small scale, the burning process must
be extremely precise. A CD burner is used to write (burn) the data onto a CD. It
incorporates a moving laser, similar to that of a CD player, known as the 'Write Laser'.
The Write Laser is more powerful than the 'Read Laser' and can alter the surface of the CD
rather than just bouncing light off it. During the burning process the Write Laser traces
its beam over the CD surface and, according to the binary data, creates the series of pits.
READING PROCESS (Example: CD Reading)
When the user plays the CD, the Read Laser (which is not powerful enough to modify the
surface) bounces its light beam off the surface and detects the pits and lands. Each change
between pit and land, in either direction, is read as a 1, and no change (pit to pit or
land to land) is read as a 0. These binary values form the actual data.
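The transition rule can be expressed in a few lines of Python; the surface is modelled here
as a toy string of pit (P) and land (L) samples (real CDs add EFM modulation and
error-correction layers on top of this simple rule):

    surface = "PPPLLPLLLPP"    # toy sample of pits (P) and lands (L) along the track
    bits = [1 if prev != cur else 0 for prev, cur in zip(surface, surface[1:])]
    print(bits)                # [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]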
COMPACT DISC FORMATS
With the rise of personal computers (PCs) and other commercial technologies, various compact
disc formats branched off to store data. Sony and Philips created specifications for these CD
versions -- called Rainbow Books, due to the various colors on the book bindings -- to define
each product format. In 1985, the CD-ROM entered the market, taking the compact disc beyond
audio into optical data storage.
Compact disc variations include:
 CD-Read-Only Memory. CD-ROMs are readable by any computer with a CD-ROM
drive.
 CD-interactive. Released in 1993, CD-i discs could be played on dedicated CD-i players
but not in a CD-ROM drive. The format was later modified to be readable by both.
 CD-ReWritable. The CD-RW used a metallic alloy that reflected differently than
regular compact discs. This change in reflectivity made a CD-RW unreadable to many
early CD players.
 CD-Recordable. The CD-R is a compact disc that can be written to once and read many
times.
 CD-ROM eXtended Architecture. The CD-ROM XA is an extension of the standard
CD-ROM that allows audio, video and computer data to be accessed simultaneously.
 Photo CD. Designed by Kodak, the photo CD was created for the express purpose of
storing photographs in a digital format that could be accessed and edited on a computer.
It launched in 1992, and was originally designed to hold 100 high-quality images.
 Video CD. The video CD, or VCD, was created in 1993. VCD was intended to offer quality
comparable to VHS recordings, but it has a much lower resolution than a modern digital
video disk (DVD).
DVD FORMATS
 DVD-R is a type of write once, read many (WORM) DVD format that allows the user
to record a single time on a DVD disk.
 DVD-RW is a DVD format that allows the user to record and erase multiple times on a
single DVD disk.
 DVD-Audio (DVD-A) is a DVD format developed by Panasonic that is specifically
designed to hold audio data, and particularly, high-quality music.
 DVD-ROM stores the same type of computer data typical of a CD-ROM. DVD-ROMs
have seven times the storage capacity of CD-ROMs.
 Digital Versatile Disc - Random Access Memory (DVD-RAM) is an adaptation of DVD-ROM
that uses phase-change recording to write data both in the grooves and on the lands (flat
areas) of the disk, allowing it to be recorded and erased repeatedly.