ITT Project1


INTRODUCTION

Computers are indispensable in our lives today; we can hardly function without these wonders of technology. Be it the medical sector, the business sector, or entertainment, computers are present in every part of our lives. In every home and every office they have made a phenomenal impact, making our lives infinitely easier and more comfortable. Like every technological marvel, computers have gone through many makeovers and have evolved exponentially: the computer we see today is quite different from the earliest machines, and as the number of applications has grown, so have the speed and accuracy of its calculations. Reservation of tickets on airlines and railways, payment of telephone and electricity bills, deposits and withdrawals of money from banks, business data processing, and medical diagnosis are some of the areas where computers have become immensely helpful. However, the computer has one limitation: human beings can perform calculations of their own accord, whereas computers are dumb machines that must be given proper instructions to carry out any calculation. This is why we should know how a computer works.

ITT, The Institute Of Chartered Accountants Of India |

BASIC FUNDAMENTALS
A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format. While a computer can, in theory, be made out of almost anything, and mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940 to 1945). Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into mobile devices, and can be powered by a small battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are the most numerous. The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards, though, the word began to take on its more familiar meaning, describing a machine that carries out computations.[3]

The first adding machine, a precursor of the digital computer, was devised in 1642 by the French scientist, mathematician, and philosopher Blaise Pascal. This device employed a series of ten-toothed wheels, each tooth representing a digit from 0 to 9. The wheels were connected so that numbers could be added to each other by advancing the wheels by the correct number of teeth. In the 1670s the German philosopher and mathematician Gottfried Wilhelm Leibniz improved on this machine by devising one that could also multiply. The French inventor Joseph-Marie Jacquard, in designing an automatic loom, used thin, perforated wooden boards to control the weaving of complicated designs. During the 1880s the American statistician Herman Hollerith conceived the idea of using perforated cards, similar to Jacquard's boards, for processing data. Employing a system that passed punched cards over electrical contacts, he was able to compile statistical information for the 1890 United States census.

Babbage's Difference Engine
Considered by many to be a direct forerunner of modern calculating devices, the Difference Engine was able to compute mathematical tables. This woodcut shows a small portion of the ingenious machine, which was designed by Charles Babbage in the 1820s. Babbage's later idea for the Analytical Engine would have been a true, programmable computer had the project been pursued with adequate funding. As it was, neither machine was completed in his lifetime, although this was not beyond the technological capabilities of the time. In 1991 a team at the London Science Museum finished work on a fully functional Difference Engine No. 2.



Electronic Computers


Circuit Board and Transistors
A close-up of a smoke detector's circuit board reveals its components, which include transistors, resistors, capacitors, diodes, and inductors. Rounded containers house the transistors that make the circuit work. Transistors are capable of serving many functions, such as amplifying and switching. Each transistor consists of a small piece of semiconducting material, such as silicon, that has been doped, or treated with impurity atoms, to create n-type and p-type regions. Invented in 1948, transistors are a fundamental component in nearly all modern electronic devices.

During World War II a team of scientists and mathematicians, working at Bletchley Park, north of London, created one of the first all-electronic digital computers: Colossus. By December 1943, Colossus, which incorporated 1,500 vacuum tubes, was operational. It was used in the largely successful effort to read German teleprinter messages enciphered with the Lorenz cipher. Independently of this, in the United States, a prototype electronic machine had been built as early as 1939 by John Atanasoff and Clifford Berry at Iowa State College. This prototype and later research were completed quietly, and were later overshadowed by the development of the Electronic Numerical Integrator And Computer (ENIAC) in 1945. ENIAC was granted a patent, which was overturned decades later, in 1973, when the machine was revealed to have incorporated principles first used in the Atanasoff-Berry Computer (ABC).


UNIVAC Computer System
The first commercially available electronic computer, UNIVAC I, was also the first computer to handle both numeric and textual information. It was designed by J. Presper Eckert and John Mauchly, whose company subsequently passed to Remington Rand; the implementation of the machine marked the beginning of the computer era. Here, a UNIVAC computer is shown in action: the central computer is in the background, and in the foreground is the supervisory control panel. Remington Rand delivered the first UNIVAC machine to the US Bureau of the Census in 1951.

ENIAC contained 18,000 vacuum tubes and had a speed of several hundred multiplications per minute, but originally its program was wired into the processor and had to be manually altered. Later machines were built with program storage, based on the ideas of the Hungarian-American mathematician John von Neumann. The instructions, like the data, were stored within a memory, freeing the computer from the speed limitations of the paper-tape reader during execution and permitting problems to be solved without rewiring the computer (see Von Neumann architecture). The use of the transistor in computers in the late 1950s marked the advent of smaller, faster, and more versatile logical elements than were possible with vacuum-tube machines. Because transistors use much less power and have a much longer life, this development alone was responsible for the improved machines called second-generation computers. Components became smaller, as did inter-component spacings, and the systems became much less expensive to build.

A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
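The way circuits holding bits are arranged into logic gates, with some circuits controlling the state of others, can be sketched in Python. The gate functions below are an illustrative model, not a description of any particular hardware:

```python
# Model each logic gate as a function from bits (0 or 1) to a bit.
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))  # built from the gates above

# Gates compose: a "half adder" adds two one-bit numbers,
# producing a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum, carry)

print(half_adder(1, 1))  # 1 + 1 = binary 10, so (sum=0, carry=1)
```

Real gates are built from transistors rather than Python functions, but the composition principle is the same: a half adder like this is the first stage of a hardware adder.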

Control unit

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. The control system's function is as follows (note that this is a simplified
description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):

1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location, to a register, or perhaps to an output device.
8. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another, yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
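The step sequence above can be sketched as a toy interpreter. The instruction names (LOAD, ADD, STORE, HALT) and the machine itself are invented for illustration; a real CPU works on numeric opcodes, but the loop has the same shape:

```python
# A toy accumulator machine illustrating the fetch-decode-execute cycle.
memory = {0: 5, 1: 7, 2: 0}                     # data cells
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]

pc = 0            # program counter: which instruction comes next
acc = 0           # accumulator: the working register fed to the ALU

while True:
    op, arg = program[pc]        # step 1: fetch the instruction at the PC
    pc += 1                      # step 3: increment the PC
    if op == "LOAD":             # step 2: decode, then execute:
        acc = memory[arg]        # steps 4-5: read data into a register
    elif op == "ADD":
        acc = acc + memory[arg]  # step 6: have the ALU perform the operation
    elif op == "STORE":
        memory[arg] = acc        # step 7: write the result back to memory
    elif op == "HALT":
        break                    # step 8 would otherwise loop back to step 1

print(memory[2])  # 12
```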

Arithmetic/logic unit (ALU)

Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting, or might include multiplying, dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than, or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic. Superscalar computers may contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
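The claim that a machine supporting only the simplest operations can still perform any arithmetic operation can be sketched: here multiplication is broken down into repeated addition, using nothing but add, subtract, and compare.

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition,
    subtraction, and comparison (the minimal ALU operations)."""
    result = 0
    while b > 0:              # comparison: "is b greater than 0?"
        result = result + a   # addition
        b = b - 1             # subtraction
    return result

print(multiply(6, 7))   # 42
print(6 > 7)            # a comparison returning a boolean: False
```

As the text says, this takes more time than a hardware multiply, but it gets the same answer.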


Memory

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything: letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from -128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically.

A personal computer is made up of multiple physical components of computer hardware, upon which can be installed system software called an operating system and a multitude of software applications to perform the operator's desired functions. Though a PC comes in many different forms, a typical personal computer consists of a case or chassis in a tower shape (desktop), containing components such as a motherboard. The term hardware covers all of those parts of a computer that are tangible objects: circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[21] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures, such as personal computers and various video game consoles. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its
operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally, so that different sequences of instructions may be used depending on the result of some previous calculation or some external event.
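The point that a program is itself just a list of numbers, manipulable like any other data, can be sketched. The opcode numbering here is invented for illustration:

```python
# Hypothetical opcodes: 1 = add, 2 = multiply (each followed by an operand).
program = [1, 10, 2, 3]        # "add 10, then multiply by 3"

def run(program, value):
    pc = 0
    while pc < len(program):
        opcode, operand = program[pc], program[pc + 1]
        if opcode == 1:
            value += operand
        elif opcode == 2:
            value *= operand
        pc += 2
    return value

print(run(program, 5))    # (5 + 10) * 3 = 45

# Because the program is stored as numbers, it can be edited like data:
program[2] = 1            # change the multiply opcode to an add
print(run(program, 5))    # 5 + 10 + 3 = 18
```

This is the stored-program idea in miniature: the same memory that holds data holds the instruction codes, so programs can inspect and modify programs.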
Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they
may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will carry them out. While some computers may have strange concepts of "instructions" and "output" (see quantum computing), modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
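The 1-to-1,000 example above takes only a short loop in a high-level language; the repetition is handled by the flow of control rather than by thousands of button presses:

```python
# Add every number from 1 to 1,000 by repeating one add instruction.
total = 0
for n in range(1, 1001):
    total = total + n
print(total)  # 500500
```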
Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at
the University of Manchester in 1953.[16] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement to mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers became as common in the household as the television and the telephone. Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored-program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first working prototype to be demonstrated was the Manchester Small-Scale Experimental Machine (SSEM or "Baby") in 1948. The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described by von Neumann's paper, EDVAC, was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in
computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted research on ternary computers, devices that operated on a base-three numbering system of -1, 0, and 1 rather than the conventional binary numbering system upon which most computers are based. They designed the Setun, a functional ternary computer, at Moscow State University. The device was put into limited production in the Soviet Union, but was supplanted by the more common binary architecture.

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[32] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET.[33] The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices and stored information, as extensions of the resources of an individual computer.
Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that
are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Multiprocessing
Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[31] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
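Distributing independent work across several CPUs can be sketched with Python's standard multiprocessing module. The squaring workload here is an invented stand-in for an embarrassingly parallel task:

```python
from multiprocessing import Pool

def square(n):
    # Each input is independent of the others, so the work can be
    # split across CPUs with no coordination between workers.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:       # four worker processes
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The `__main__` guard matters: on systems that start workers by re-importing the script, it prevents each worker from spawning workers of its own.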
Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers, depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset.
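The cell model described earlier ("put the number 123 into the cell numbered 1357") and two's-complement storage of negative numbers can be sketched; the memory size below is arbitrary:

```python
memory = [0] * 4096        # memory as a numbered list of byte-sized cells

memory[1357] = 123                          # "put the number 123 into cell 1357"
memory[2468] = 77
memory[1595] = memory[1357] + memory[2468]  # "add cell 1357 to cell 2468, answer in 1595"
print(memory[1595])        # 200

# A byte holds 256 patterns. In two's complement, negative numbers reuse
# the upper patterns: -1 is stored as 255, and -128 as 128.
def as_byte(n):            # encode an integer in -128..127 as a byte value 0..255
    return n & 0xFF

print(as_byte(-1))         # 255
print(as_byte(-128))       # 128
```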
In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[26]

In more sophisticated computers there may be one or more levels of RAM cache memory, which is slower than the registers but faster than main memory. Computers with this sort of cache are generally designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)

Hard disk drives are common storage devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[27] Devices that provide input or output to the computer are called peripherals.[28] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[29] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing at any given instant. This method of multitasking is sometimes termed "time-sharing", since each program is allocated a "slice" of time in turn.[30] Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.

Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[31] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks, due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Networking and the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.[32] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET.[33] The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet.

The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous.

In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
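At its core, the networking described above is computers exchanging bytes over a connection. A minimal sketch, using a pair of connected sockets standing in for two networked machines:

```python
import socket

# A connected pair of endpoints, standing in for two networked computers.
endpoint_a, endpoint_b = socket.socketpair()

endpoint_a.sendall(b"hello over the network")   # one computer sends...
message = endpoint_b.recv(1024)                 # ...the other receives

endpoint_a.close()
endpoint_b.close()
print(message.decode())
```

Real networks add addressing, routing, and protocols on top, but the exchange of bytes between endpoints is the same basic operation.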

Stored-program architecture

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored-program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first working prototype to be demonstrated was the Manchester Small-Scale Experimental Machine (SSEM or "Baby") in 1948. The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described by von Neumann's paper, the EDVAC, was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted research on ternary computers, devices that operated on a base-three numbering system of −1, 0, and 1 rather than the conventional binary numbering system upon which most computers are based. They designed the Setun, a functional ternary computer, at Moscow State University. The device was put into limited production in the Soviet Union, but was supplanted by the more common binary architecture.
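The balanced-ternary digits of −1, 0, and 1 used by machines like the Setun can be illustrated with a short conversion routine. Printing the three digit values as '-', '0' and '+' is a common convention chosen here for readability, not Setun's own notation:

```python
def to_balanced_ternary(n):
    """Convert an integer to balanced ternary: digits -1/0/+1 as '-','0','+'."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("+")
            n -= 1
        else:               # remainder 2 stands for digit -1, carry one upward
            digits.append("-")
            n += 1
        n //= 3
    return "".join(reversed(digits))

def from_balanced_ternary(s):
    """Decode back to an integer: each '+' is +1, each '-' is -1."""
    value = 0
    for ch in s:
        value = value * 3 + {"+": 1, "0": 0, "-": -1}[ch]
    return value

print(to_balanced_ternary(5))   # "+--", i.e. 9 - 3 - 1
```

One appeal of balanced ternary is that negative numbers need no separate sign: negating a number just swaps every '+' and '-'.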

Semiconductors and microprocessors

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s they had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953.[16] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers.

By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement for mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household. Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. While some computers may have strange concepts of "instructions" and "output" (see quantum computing), modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored-program architecture

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally, so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
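The 1-to-1,000 example above takes only a short loop, using the "jump back while a condition holds" pattern described earlier:

```python
# Add all numbers from 1 to 1,000 using a conditional backwards jump (a loop).
total = 0
n = 1
while n <= 1000:   # the conditional branch: jump back while this holds
    total += n
    n += 1
print(total)       # 500500, matching the formula 1000 * 1001 / 2
```

A few instructions, repeated a thousand times by the flow of control, replace thousands of error-prone button presses.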
Machine code

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture.

In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
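The fact that opcodes are just numbers living in the same memory as data can be made concrete with a toy machine. The opcode numbering here is invented for illustration and does not correspond to any real instruction set:

```python
# Minimal von Neumann machine: program and data share one memory.
# Opcodes (hypothetical encoding): 1=LOAD addr, 2=ADD addr, 3=STORE addr, 0=HALT
def run(memory):
    acc, pc = 0, 0                  # accumulator and program counter
    while True:
        op = memory[pc]
        if op == 0:                 # HALT
            return memory
        arg = memory[pc + 1]
        if op == 1:                 # LOAD: acc <- memory[arg]
            acc = memory[arg]
        elif op == 2:               # ADD: acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == 3:               # STORE: memory[arg] <- acc
            memory[arg] = acc
        pc += 2                     # advance to the next instruction

# Program: load mem[8], add mem[9], store into mem[10], halt.
# Cells 0..7 hold the program; cells 8..10 hold the data (2, 3, result).
mem = [1, 8, 2, 9, 3, 10, 0, 0, 2, 3, 0]
run(mem)
print(mem[10])  # 5
```

Because the program is just a list of numbers in `mem`, another program could read or rewrite it exactly as it would any other data, which is the crux of the stored-program idea.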
Higher-level languages and program design


Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[21] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.


A personal computer is made up of multiple physical components of computer hardware, on which can be installed system software called an operating system and a multitude of software applications to perform the operator's desired functions. Though a PC comes in many different forms, a typical personal computer consists of a case or chassis in a tower shape (desktop), containing components such as a motherboard.

Motherboard

The motherboard is the main component inside the case. It is a large rectangular board with integrated circuitry that connects the rest of the parts of the computer, including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots. Components directly attached to the motherboard include:


- The central processing unit (CPU) performs most of the calculations which enable a computer to function, and is sometimes referred to as the "brain" of the computer. It is usually cooled by a heat sink and fan.
- The chip set mediates communication between the CPU and the other components of the system, including main memory.
- RAM (Random Access Memory) stores all running processes (applications) and the currently running OS.
- The BIOS includes boot firmware and power management. The Basic Input Output System tasks are handled by operating system drivers.
- Internal buses connect the CPU to various internal components and to expansion cards for graphics and sound.
  - Current: the north bridge memory controller, for RAM and PCI Express; PCI Express, for expansion cards such as graphics and physics processors, and high-end network interfaces; PCI, for other expansion cards; SATA, for disk drives.
  - Obsolete: ATA (superseded by SATA); AGP (superseded by PCI Express); VLB VESA Local Bus (superseded by AGP); ISA (expansion card slot format obsolete in PCs, but still used in industrial computers).
- External bus controllers support ports for external peripherals, such as USB, FireWire, eSATA and SCSI. These ports may be controlled directly by the south bridge I/O controller or by expansion cards attached to the motherboard through the PCI bus.



Power supply

A power supply unit (PSU) converts alternating current (AC) electric power to low-voltage DC power for the internal components of the computer. Some power supplies have a switch to change between 230 V and 115 V. Other models have automatic sensors that switch input voltage automatically, or are able to accept any voltage between those limits. Power supply units used in computers are nearly always switch-mode power supplies (SMPS). The SMPS provides regulated direct-current power at the several voltages required by the motherboard and accessories such as disk drives and cooling fans.

Removable media devices

- CD (compact disc) - the most common type of removable media, suitable for music and data.
  - CD-ROM Drive - a device used for reading data from a CD.
  - CD Writer - a device used for both reading and writing data to and from a CD.
- DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 12 times as much information. It is the most common way of transferring digital video, and is popular for data storage.
  - DVD-ROM Drive - a device used for reading data from a DVD.
  - DVD Writer - a device used for both reading and writing data to and from a DVD.
  - DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
- Blu-ray Disc - a high-density optical disc format for data and high-definition video. It can store 70 times as much information as a CD.
  - BD-ROM Drive - a device used for reading data from a Blu-ray disc.
  - BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.
- HD DVD - a discontinued competitor to the Blu-ray format.
- Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium, used today mainly for loading RAID drivers.
- Iomega Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
- USB flash drive - a flash memory data storage device integrated with a USB interface; typically small, lightweight, removable, and rewritable. Capacities vary from hundreds of megabytes (in the same ballpark as CDs) to tens of gigabytes (surpassing, at great expense, Blu-ray discs).
- Tape drive - a device that reads and writes data on a magnetic tape, used for long-term storage and backups.

Secondary storage

Secondary storage is the hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.

- Hard disk - for medium-term storage of data.


- Solid-state drive - a device similar to a hard disk but containing no moving parts, storing data in flash memory.
- RAID array controller - a device to manage several internal or external hard disks and optionally some peripherals in order to achieve performance or reliability improvements in what is called a RAID array.
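The persistence property, data surviving after the program or machine stops, can be sketched by writing to a file and reading it back through a completely separate handle. This is a toy illustration using a temporary file:

```python
import os
import tempfile

# Write data out through one handle, then close it: the bytes now live
# on secondary storage, not in the program's memory.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("survives beyond this handle")

# A fresh handle (or a fresh run of the program) can read it back.
with open(path) as f:
    recovered = f.read()

os.remove(path)
print(recovered)
```

Unlike a value held in RAM, the file's contents would still be there after a reboot, which is exactly the distinction the section draws.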

Input Devices

- Text input devices
  - Keyboard - a device to input text and characters by depressing buttons (referred to as keys).
- Pointing devices
  - Mouse - a pointing device that detects two-dimensional motion relative to its supporting surface.
    - Optical mouse - uses light to determine mouse motion.
  - Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
  - Touchscreen - senses the user pressing directly on the display.
- Gaming devices
  - Joystick - a control device that consists of a hand-held stick that pivots around one end, to detect angles in two or three dimensions.
  - Game pad - a hand-held game controller that relies on the digits (especially thumbs) to provide input.
  - Game controller - a specific type of controller specialized for certain gaming purposes.
- Image and video input devices
  - Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.


  - Web cam - a low-resolution video camera used to provide visual input that can be easily transferred over the internet.
- Audio input devices
  - Microphone - an acoustic sensor that provides input by converting sound into electrical signals.

Output Devices

- Printer - a device that produces a permanent human-readable text or graphic document.
- Speakers - typically a pair of devices (2 channels) which convert electrical signals into audio.
  - Headphones - for a single user hearing the audio.
- Monitor - an electronic visual display with textual and graphical information from the computer.
  - CRT (cathode ray tube) display.
  - LCD (liquid crystal display) - as of 2010, the primary visual display for personal computers.


Microsoft Windows
Microsoft Windows is a series of software operating systems and graphical user interfaces produced by Microsoft. Microsoft first introduced an operating environment named Windows in November 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs). Microsoft Windows came to dominate the world's personal computer market, overtaking Mac OS, which had been introduced in 1984. As of October 2009, Windows had approximately 91% of the market share of the client operating systems for usage on the Internet. The most recent client version of Windows is Windows 7; the most recent server version is Windows Server 2008 R2; the most recent mobile OS version is Windows Phone 7.

Early versions

The history of Windows dates back to September 1981, when Chase Bishop, a computer scientist, designed the first model of an electronic device and project "Interface Manager" was started. It was announced in November 1983 (after the Apple Lisa, but before the Macintosh) under the name "Windows", but Windows 1.0 was not released until November 1985. The shell of Windows 1.0 was a program known as the MS-DOS Executive. Other supplied programs were Calculator, Calendar, Cardfile, Clipboard viewer, Clock, Control Panel, Notepad, Paint, Reversi, Terminal, and Write. Windows 1.0 did not allow overlapping windows, due to Apple Computer owning this feature; instead, all windows were tiled. Only dialog boxes could appear over other windows.

Windows 2.0 was released in October 1987 and featured several improvements to the user interface and memory management. Windows 2.0 allowed application windows to overlap each other and also introduced more sophisticated keyboard shortcuts. It could also make use of expanded memory. Windows 2.1 was released in two different flavors: Windows/386 employed the 386 virtual 8086 mode to multitask several DOS programs and the paged memory model to emulate expanded memory using available extended memory, while Windows/286 (which, despite its name, would run on the 8086) still ran in real mode but could make use of the high memory area.

The early versions of Windows were often thought of as simply graphical user interfaces, mostly because they ran on top of MS-DOS and used it for file system services. However, even the earliest 16-bit Windows versions already assumed many typical operating system functions, notably having their own executable file format and providing their own device drivers (timer, graphics, printer, mouse, keyboard and sound) for applications. Unlike MS-DOS, Windows allowed users to execute multiple graphical applications at the same time, through cooperative multitasking.
Windows implemented an elaborate, segment-based, software virtual memory scheme, which allowed it to run applications larger than available memory: code segments and resources were swapped in and thrown away when memory became scarce, and data segments moved in memory when a given application had relinquished processor control, typically waiting for user input.

Windows 3.0 and 3.1

Windows 3.0 (1990) and Windows 3.1 (1992) improved the design, mostly because of virtual memory and loadable virtual device drivers (VxDs), which allowed them to share arbitrary devices between multitasked DOS windows. Also, Windows applications could now run in protected mode (when Windows was running in Standard or 386 Enhanced Mode), which gave them access to several megabytes of memory and removed the obligation to participate in the software virtual memory scheme. They still ran inside the same address space, where the segmented memory provided a degree of protection, and multitasked cooperatively. For Windows 3.0, Microsoft also rewrote critical operations from C into assembly.

Windows 95, 98, and Me

Windows 95 was released in August 1995, featuring a new user interface, support for long file names of up to 255 characters, and the ability to automatically detect and configure installed hardware (plug and play). It could natively run 32-bit applications, and featured several technological improvements that increased its stability over Windows 3.1. There were several OEM Service Releases (OSR) of Windows 95, each of which was roughly equivalent to a service pack.

Microsoft's next release was Windows 98, in June 1998. Microsoft released a second version of Windows 98 in May 1999, named Windows 98 Second Edition (often shortened to Windows 98 SE). In September 2000, Microsoft released Windows Me (Me standing for Millennium Edition), which updated the core from Windows 98 but adopted some aspects of Windows 2000 and removed the "boot in DOS mode" option. It also added a new feature called System Restore, allowing the user to set the computer's settings back to an earlier date.

Windows NT family

The NT family of Windows systems was fashioned and marketed for higher-reliability business use. The first release was NT 3.1 (1993), numbered "3.1" to match the consumer Windows version, which was followed by NT 3.5 (1994), NT 3.51 (1995), NT 4.0 (1996), and Windows 2000 (2000). Windows 2000 is the last NT-based Windows release which does not include Microsoft Product Activation. NT 4.0 was the first in this line to implement the "Windows 95" user interface (and the first to include Windows 95's built-in 32-bit runtimes).

Microsoft then moved to combine their consumer and business operating systems with Windows XP, coming in both home and professional versions (and later niche-market versions for tablet PCs and media centers); they also diverged release schedules for server operating systems. Windows Server 2003, released a year and a half after Windows XP, brought Windows Server up to date with Windows XP. After a lengthy development process, Windows Vista was released toward the end of 2006, and its server counterpart, Windows Server 2008, was released in early 2008. On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released as RTM (release to manufacturing); Windows 7 was released to the public on October 22, 2009.

64-bit operating systems

Windows NT included support for several different platforms before the x86-based personal computer became dominant in the professional world. Versions of NT from 3.1 to 4.0 variously supported PowerPC, DEC Alpha and MIPS R4000, some of which were 64-bit processors, although the operating system treated them as 32-bit processors. With the introduction of the Intel Itanium architecture (also known as IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 (32-bit) counterparts. On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 Editions to support the x86-64 (or x64, in Microsoft terminology) architecture. Microsoft dropped support for the Itanium version of Windows XP in 2005.

Windows Vista is the first end-user version of Windows that Microsoft released simultaneously in x86 and x64 editions. Windows Vista does not support the Itanium architecture. The modern 64-bit Windows family comprises AMD64/Intel 64 versions of Windows 7 and Windows Server 2008, in both Itanium and x64 editions. Windows Server 2008 R2 drops the 32-bit version, although Windows 7 does not.

On January 30, 2007, Microsoft released Windows Vista. It contains a number of new features, from a redesigned shell and user interface to significant technical changes, with a particular focus on security features. It is available in a number of different editions, and has been subject to some criticism.

MS-WORD
Microsoft Word is a word processor designed by Microsoft. It was first released in 1983 under the name Multi-Tool Word for Xenix systems.[1][2][3] Subsequent versions were later written for several other platforms, including IBM PCs running DOS (1983), the Apple Macintosh (1984), the AT&T Unix PC (1985), the Atari ST (1986), SCO UNIX, OS/2, and Microsoft Windows (1989). It is a component of the Microsoft Office system; it is also sold as a standalone product and included in Microsoft Works Suite. Beginning with the 2003 version, the branding was revised to emphasize Word's identity as a component within the Office suite on PC versions; Microsoft began calling it Microsoft Office Word instead of merely Microsoft Word. The 2010 version appears to be branded as Microsoft Word once again. The current versions are Microsoft Word 2010 for Windows and 2008 for Mac.

History

Word 1981 to 1989

Concepts and ideas that would be used in Microsoft Word were taken from Bravo, the original GUI word processor developed at Xerox PARC.[4][5] With this, development on what was originally named Multi-Tool Word began on February 1, 1983. Richard Brodie renamed it Microsoft Word, and Microsoft released the program on October 25, 1983, for the IBM PC. Free demonstration copies of the application were bundled with the November 1983 issue of PC World, making it the first program to be distributed on-disk with a magazine.[1][6]

Although MS-DOS was a character-based system, Microsoft Word was the first word processor for the IBM PC that showed actual line breaks and typeface markups such as bold and italics directly on the screen while editing, although this was not a true WYSIWYG system because available displays did not have the resolution to show actual typefaces. Other DOS word processors, such as WordStar and WordPerfect, used a simple text-only display with markup codes on the screen or sometimes, at the most, alternative colors.[7]

As with most DOS software, each program had its own, often complicated, set of commands and nomenclature for performing functions that had to be learned. For example, in Word for MS-DOS, a file would be saved with the sequence Escape-T-S: pressing Escape called up the menu box, T accessed the set of options for Transfer, and S was for Save (the only similar interface belonged to Microsoft's own Multiplan spreadsheet). As most secretaries had learned how to use WordPerfect, companies were reluctant to switch to a rival product that offered few advantages. Desired features in Word, such as indentation before typing (emulating the F4 feature in WordPerfect), the ability to block text to copy it before typing rather than picking up the mouse or blocking after typing, and a reliable way to have macros and other functions always replicate the same function time after time, were just some of Word's problems for production typing.
Word for Macintosh was ported, with minor changes, from the DOS source code,[citation needed] which had been written for use with high-resolution displays and laser printers, although none were yet available to the general public. Following the precedents of LisaWrite and MacWrite, Word for Macintosh attempted to add closer WYSIWYG features to its package. After Word for Mac was released in 1985, it gained wide acceptance.

There was no Word 2.0 for Macintosh. Instead, the second release of Word for Macintosh, shipped in 1987, was named Word 3.0; this was Microsoft's first attempt to synchronize version numbers across platforms. Word 3.0 included numerous internal enhancements and new features, including the first implementation of the Rich Text Format (RTF) specification, but was plagued with bugs. Within a few months Word 3.0 was superseded by Word 3.01, which was much more stable. All registered users of 3.0 were mailed free copies of 3.01.

In 1986, an agreement between Atari and Microsoft brought Word to the Atari ST.[8] The Atari ST version was a translation of Word 1.05 for the Apple Macintosh. It was released in 1988 under the name Microsoft Write (the name of the word processor included with Windows during the '80s and early '90s).[9][10] Unlike other versions of Word, the Atari version was a one-time release with no future updates or revisions. Microsoft Write was one of the two major PC applications released for the Atari ST (the other being WordPerfect).

Word 1990 to 1995

The first version of Word for Windows was released in 1989 at a price of 500 US dollars.[citation needed] With the release of Windows 3.0 the following year, sales began to pick up (Word for Windows 1.0 was designed for use with Windows 3.0, and its performance was poorer with the earlier versions of Windows available when it was first released). The failure of WordPerfect to produce a Windows version proved a fatal mistake.[citation needed] It was version 2.0 of Word that firmly established Microsoft Word as the market leader.[11]

After MacWrite, Word for Macintosh never had any serious rivals, although programs such as Nisus Writer provided features, such as non-contiguous selection, that were not added to Word until Word 2002 in Office XP. In addition, many users[who?] complained that major updates reliably came more than two years apart, too long for most business users at that time.

Word 5.1 for the Macintosh, released in 1992, was a very popular word processor owing to its elegance, relative ease of use and feature set. Version 6.0 for the Macintosh, released in 1994, was widely derided, unlike the Windows version. It was the first version of Word based on a common codebase between the Windows and Mac versions; many accused it of being slow, clumsy and memory-intensive. In response to user requests, Microsoft offered a free "downgrade" to Word 5.1 for dissatisfied Word 6.0 purchasers.

With the release of Word 6.0 in 1993, Microsoft again attempted to synchronize the version numbers and coordinate product naming across platforms, this time across the three versions for DOS, Macintosh, and Windows (where the previous version was Word for Windows 2.0). Thought may also have been given to matching the then-current version 6.0 of WordPerfect for DOS and Windows, Word's major competitor. This wound up being the last version of Word for DOS. In addition, subsequent versions of Word were no longer referred to by version number, and were instead named after the year of their release (e.g. Word 95 for Windows, synchronizing its name with Windows 95, and Word 98 for Macintosh), once again breaking the synchronization. When Microsoft became aware of the Year 2000 problem, it released the entire DOS port of Microsoft Word 5.5 free of charge instead of requiring people to pay for an update. As of July 2010, it is still available for download from Microsoft's web site.

Word 6.0 was the second attempt to develop a common codebase version of Word. The first, code-named Pyramid, had been an attempt to completely rewrite the existing product. It was abandoned when it was determined that it would take the development team too long to rewrite and then catch up with all the new capabilities that could have been added in the same time without a rewrite. Supporters of Pyramid claimed that it would have been faster, smaller, and more stable than the product that was eventually released for Macintosh, which was compiled using a beta version of Visual C++ 2.0 that targeted the Macintosh, so many optimizations had to be turned off (version 4.2.1 of Office was compiled using the final version) and the included Windows API simulation library was sometimes used.[13] Pyramid would have been truly cross-platform, with machine-independent application code and a small mediation layer between the application and the operating system. More recent versions of Word for Macintosh are no longer ported versions of Word for Windows, although some code is often appropriated from the Windows version for the Macintosh version.[citation needed]

Later versions of Word have more capabilities than merely word processing. The drawing tool allows simple desktop publishing operations such as adding graphics to documents. Collaboration, document comparison, multilingual support, translation and many other capabilities have been added over the years.

Word 97

Word 97 had the same general operating performance as later versions such as Word 2000. This was the first version of Word to feature the Office Assistant, "Clippit", an animated helper used in all Office programs. It was carried over from the concept introduced earlier in Microsoft Bob.

Word 98

Word 98 for the Macintosh gained many features of Word 97, and was bundled with the Macintosh Office 98 package. Document compatibility reached parity with Office 97, and Word on the Mac became a viable business alternative to its Windows counterpart. Unfortunately, Word on the Mac in this and later releases also became vulnerable to macro viruses that could compromise Word (and Excel) documents, leading to the only situation where viruses could be cross-platform. A Windows version of Word 98 was bundled only with the Japanese/Korean Microsoft Office 97 Powered By Word 98 and could not be purchased separately.

Word 2001/Word X

Word 2001 was bundled with Macintosh Office for that platform, acquiring most, if not all, of the feature set of Word 2000. Released in October 2000, Word 2001 was also sold as an individual product. The Macintosh version, Word X, released in 2001, was the first version to run natively on (and required) Mac OS X.

Word 2002/XP

Word 2002 was bundled with Office XP and was released in 2001. It had many of the same features as Word 2000, but added a major new feature called the Task Pane, which gave quicker access to many features that were previously available only in modal dialog boxes. One of the key advertising strategies for the software was the removal of the Office Assistant in favor of a new help system, although the Assistant was in fact simply disabled by default.

Word 2003

For the 2003 version, the Office programs, including Word, were rebranded to emphasize the unity of the Office suite, so that Microsoft Word officially became Microsoft Office Word. Microsoft Word 2003 also has a page limit of 32,767 pages.

Word 2004

A new Macintosh version of Office was released in May 2004. Substantial cleanup of the various applications (Word, Excel, PowerPoint) and feature parity with Office 2003 (for Microsoft Windows) created a very usable release. Microsoft released patches through the years to eliminate most known macro vulnerabilities from this version. While Apple released Pages and the open-source community created NeoOffice, Word remains the most widely used word processor on the Macintosh.

Word 2007

The Word 2007 release includes numerous changes, including a new XML-based file format, a redesigned interface, an integrated equation editor and bibliographic management. Additionally, an XML data bag, called Custom XML, was introduced, accessible via the object model and file format; this can be used in conjunction with a new feature called Content Controls to implement structured documents. Word 2007 also has contextual tabs, which expose functionality specific to the object with focus, and many other features such as Live Preview (which lets you view formatting changes in the document before making them permanent), the Mini Toolbar, Super-tooltips, the Quick Access Toolbar, SmartArt, etc.

Word 2007 uses a new file format called docx. Word 2000–2003 users on Windows systems can install a free add-on called the "Microsoft Office Compatibility Pack" to be able to open, edit, and save the new Word 2007 files.[14] Alternatively, Word 2007 can save to the old doc format of Word 97–2003.[15][16]
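Under the hood, a docx file is an ordinary ZIP archive containing XML parts, which is why generic tools can inspect it. The sketch below uses Python's standard zipfile module to build a deliberately simplified docx-like archive in memory and read its main part back; a real Word file also requires [Content_Types].xml, relationship parts, and the full WordprocessingML schema, all omitted here for illustration.

```python
import io
import zipfile

# A .docx file is a ZIP archive of XML parts. Build a simplified,
# illustrative archive in memory (NOT a valid Word document -- real
# files need [Content_Types].xml and relationship parts too).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        "word/document.xml",
        "<w:document><w:body><w:p><w:t>Hello</w:t></w:p></w:body></w:document>",
    )

# Reading it back works exactly as for any ZIP archive.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    text = zf.read("word/document.xml").decode("utf-8")

print(names)            # ['word/document.xml']
print("Hello" in text)  # True
```

The same approach (a ZIP container of XML parts) underlies the .xlsx and .pptx formats introduced with Office 2007.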


It is also possible to run Word 2007 on Linux using Wine.


Word 2010

Word 2010 includes a number of new features shared with the other Office 2010 applications, such as Excel and PowerPoint.

MS-EXCEL & MS-POWERPOINT


Microsoft Excel

Microsoft Excel (full name Microsoft Office Excel) is a spreadsheet application written and distributed by Microsoft for Microsoft Windows and Mac OS X. It features calculation, graphing tools, pivot tables and a macro programming language called VBA (Visual Basic for Applications). It has been a very widely used spreadsheet on these platforms, especially since version 5 in 1993. Excel forms part of Microsoft Office. The current versions are Microsoft Office Excel 2010 for Windows and 2008 for Mac.


Microsoft Excel has the basic features of all spreadsheets,[1] using a grid of cells arranged in numbered rows and letter-named columns to organize data manipulations such as arithmetic operations. It has a battery of supplied functions to answer statistical, engineering and financial needs. In addition, it can display data as line graphs, histograms and charts, and offers a very limited three-dimensional graphical display. It allows sectioning of data to view its dependencies on various factors from different perspectives (using pivot tables and the scenario manager[2]). And it has a programming aspect, Visual Basic for Applications, allowing the user to employ a wide variety of numerical methods, for example, for solving differential equations of mathematical physics,[3][4] and then reporting the results back to the spreadsheet. Finally, it has a variety of interactive features allowing user interfaces that can completely hide the spreadsheet from the user, so the spreadsheet presents itself as a so-called application, or decision support system (DSS), via a custom-designed user interface, for example, a stock analyzer,[5] or, in general, as a design tool that asks the user questions and provides answers and reports.[6][7][8] In a more elaborate realization, an Excel application can automatically poll external databases and measuring instruments using an update schedule,[9] analyze the results, make a Word report or PowerPoint slide show, and e-mail these presentations on a regular basis to a list of participants.
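The grid-of-cells model described above, where a cell holds either a value or a formula referring to other cells, can be sketched in a few lines of Python. This is a toy illustration of the concept only, not how Excel is implemented (Excel maintains a dependency graph and recalculates incrementally); the cell names and formulas are invented for the example.

```python
# Toy spreadsheet: cells hold plain values or formulas (functions
# that look up other cells). A cell's result is computed by
# recursively evaluating its dependencies on demand.
cells = {
    "A1": 10,
    "A2": 32,
    "A3": lambda get: get("A1") + get("A2"),  # like =A1+A2
    "B1": lambda get: get("A3") * 2,          # like =A3*2
}

def get(ref):
    value = cells[ref]
    return value(get) if callable(value) else value

print(get("A3"))  # 42
print(get("B1"))  # 84
```

Changing A1 or A2 and re-reading B1 mirrors the automatic recalculation behaviour the text describes.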

Charts

Like some other spreadsheet applications, Microsoft Excel supports charts, graphs and histograms generated from specified groups of cells. The generated graphic component can either be embedded within the current sheet or added as a separate object. These displays are dynamically updated if cell contents change, making the chart a useful design tool. For example, suppose that the important design requirements are displayed visually; then, in response to a user's change in trial values for parameters, the curves describing the design change shape and their points of intersection shift, assisting the selection of the best design.


Excel 2007

A tabulation of "what's new" in Excel 2007 is found in Dodge and Stinson. The most obvious change is a completely revamped menu system, which means a user must abandon most habits acquired from previous versions. Online help frequently cannot locate a feature you are looking for, so a handbook with a good index can be invaluable. Some practical advantages of the new system, which actually make the update worthwhile, are greatly improved management of named variables through the Name Manager, and much improved flexibility in formatting graphs, which now allow (x, y) coordinate labeling and lines of arbitrary weight. The number of rows is now 1,048,576 and the number of columns is 16,384. Several improvements to pivot tables were introduced. New file extensions are used, including .xlsm for a workbook with macros and .xlsx for a workbook without macros.

Microsoft PowerPoint

Microsoft PowerPoint, usually called just PowerPoint, is a presentation program by Microsoft. It is part of the Microsoft Office suite, and runs on Microsoft Windows and Apple's Mac OS X operating systems.


PowerPoint is used by business people, educators, students, and trainers. From Microsoft Office 2003 to Office 2008 for Mac, Microsoft revised the brand labeling to emphasize PowerPoint's place within the Office suite, calling it Microsoft Office PowerPoint instead of just Microsoft PowerPoint. However, with Office 2010 (for Windows) and the upcoming Office:Mac 2011, the name was changed back to Microsoft PowerPoint. The current versions are Microsoft PowerPoint 2010 for Windows and Microsoft Office PowerPoint 2008 for Mac.

History

The original version of this program was created by Dennis Austin and Thomas Rudkin of Forethought, Inc. Originally designed for the Macintosh computer, the initial release was called "Presenter". In 1987, it was renamed "PowerPoint" due to problems with trademarks, the idea for the name coming from Robert Gaskins. In August of the same year, Forethought was bought by Microsoft for $14 million USD ($26.8 million in present-day terms) and became Microsoft's Graphics Business Unit, which continued to develop the software.

PowerPoint changed significantly with PowerPoint 97. Prior to PowerPoint 97, presentations were linear, always proceeding from one slide to the next. PowerPoint 97 incorporated the Visual Basic for Applications (VBA) language, underlying all macro generation in Office 97, which allowed users to invoke pre-defined transitions and effects in a non-linear, movie-like style without having to learn programming (or even having to be aware of the existence of VBA). PowerPoint 2000 (and the rest of the Office 2000 suite) introduced a clipboard that could hold multiple objects at once. Another noticeable change was that the Office Assistant, whose frequent unsolicited appearances in PowerPoint 97 (as an animated paperclip) had annoyed many users, was changed to be less intrusive.

Operation

PowerPoint presentations consist of a number of individual pages or "slides". The "slide" analogy is a reference to the slide projector, a device that has become somewhat obsolete given the widespread use of PowerPoint and other presentation software. Slides may contain text, graphics, movies, and other objects, which may be arranged freely on the slide. PowerPoint, however, facilitates the use of a consistent style in a presentation using a template or "Slide Master". The presentation can be printed, displayed live on a computer, or navigated through at the command of the presenter. For larger audiences the computer display is often projected using a video projector. Slides can also form the basis of webcasts.

PowerPoint provides three types of movements:

1. Entrance, emphasis, and exit of elements on a slide itself are controlled by what PowerPoint calls Custom Animations.
2. Transitions, on the other hand, are movements between slides. These can be animated in a variety of ways.
3. Custom animation can be used to create small storyboards by animating pictures to enter, exit or move.

PowerPoint Viewer

The Microsoft Office PowerPoint Viewer is a program used to run presentations on computers that do not have Microsoft PowerPoint installed. The Office PowerPoint Viewer is added by default to the same disk or network location that contains one or more presentations packaged using the Package for CD feature. The PowerPoint Viewer is installed by default with a Microsoft Office 2003 installation for use with the Package for CD feature, and is also available for download from the Microsoft Office Online Web site. Presentations that are password-protected for opening or modifying can be opened by the PowerPoint Viewer. The Package for CD feature allows you to package any password-protected file or set a new password for all packaged presentations; the PowerPoint Viewer prompts for a password if the file is password-protected for opening. The PowerPoint Viewer supports opening presentations created using PowerPoint 97 and later, and it supports all file content except OLE objects and scripting.

Microsoft PowerPoint 2010

PowerPoint 2010 introduces several changes over its predecessor. Screen capturing has been introduced, allowing you to take a screen capture and add it to your presentation. You can now also remove background images and add special effects, such as 'Pencil effects', to pictures, and new transitions are available. However, the ability to apply text effects directly to existing text, as seen in Microsoft Word, is not available; a separate WordArt text box is required.


VIRUS
What is a computer virus?

A computer virus is a small software program that spreads from one computer to another computer and that interferes with computer operation. A computer virus may corrupt or delete data on a computer, use an e-mail program to spread the virus to other computers, or even delete everything on the hard disk.


Computer viruses are most easily spread by attachments in e-mail messages or by instant messaging messages. Therefore, you must never open an e-mail attachment unless you know who sent the message or you are expecting the attachment. Computer viruses can be disguised as attachments of funny images, greeting cards, or audio and video files. Computer viruses also spread through downloads on the Internet: they can be hidden in pirated software or in other files or programs that you may download.

Symptoms of a computer virus

If you suspect or confirm that your computer is infected with a computer virus, obtain current antivirus software. The following are some primary indicators that a computer may be infected:

- The computer runs slower than usual.
- The computer stops responding, or it locks up frequently.
- The computer crashes, and then it restarts every few minutes.
- The computer restarts on its own. Additionally, the computer does not run as usual.
- Applications on the computer do not work correctly.
- Disks or disk drives are inaccessible.
- You cannot print items correctly.
- You see unusual error messages.
- You see distorted menus and dialog boxes.
- There is a double extension on an attachment that you recently opened, such as a .jpg, .vbs, .gif, or .exe extension.
- An antivirus program is disabled for no reason. Additionally, the antivirus program cannot be restarted.
- An antivirus program cannot be installed on the computer, or the antivirus program will not run.
- New icons appear on the desktop that you did not put there, or the icons are not associated with any recently installed programs.
- Strange sounds or music plays from the speakers unexpectedly.
- A program disappears from the computer even though you did not intentionally remove it.

Note: These are common signs of infection. However, these signs may also be caused by hardware or software problems that have nothing to do with a computer virus. Unless you run the Microsoft Malicious Software Removal Tool and then install industry-standard, up-to-date antivirus software on your computer, you cannot be certain whether a computer is infected with a computer virus or not.
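One of the symptoms above, a deceptive double extension such as "photo.jpg.exe", can be checked mechanically. The following Python sketch flags such filenames; the extension lists and example filenames are small illustrative samples, not an exhaustive or official set.

```python
# Flag attachment names with a deceptive double extension, e.g.
# "photo.jpg.exe": an innocent-looking decoy extension followed by
# an executable one. Both sets below are illustrative samples only.
DANGEROUS = {".exe", ".scr", ".vbs", ".bat"}
DECOY = {".jpg", ".gif", ".doc", ".pdf", ".txt"}

def has_double_extension(filename):
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:          # fewer than two extensions present
        return False
    decoy, real = "." + parts[1], "." + parts[2]
    return decoy in DECOY and real in DANGEROUS

print(has_double_extension("holiday-photo.jpg.exe"))  # True
print(has_double_extension("report.pdf"))             # False
```

A mail filter applying such a check would still only catch one disguise technique; it is no substitute for proper antivirus scanning.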


Symptoms of worms and trojan horse viruses in e-mail messages

When a computer virus infects e-mail messages or infects other files on a computer, you may notice the following symptoms:

- The infected file may make copies of itself. This behavior may use up all the free space on the hard disk.
- A copy of the infected file may be sent to all the addresses in an e-mail address list.
- The computer virus may reformat the hard disk. This behavior will delete files and programs.
- The computer virus may install hidden programs, such as pirated software. This pirated software may then be distributed and sold from the computer.
- The computer virus may reduce security. This could enable intruders to remotely access the computer or the network.
- You receive an e-mail message that has a strange attachment. When you open the attachment, dialog boxes appear, or a sudden degradation in system performance occurs.
- Someone tells you that they have recently received e-mail messages from you containing attached files that you did not send. The files attached to these e-mail messages have extensions such as .exe, .bat, .scr, and .vbs.

Symptoms that may be the result of ordinary Windows functions

A computer virus infection may cause the following problems:


- Windows does not start even though you have not made any system changes or installed or removed any programs.
- There is frequent modem activity. If you have an external modem, you may notice the lights blinking frequently when the modem is not being used. You may be unknowingly supplying pirated software.
- Windows does not start because certain important system files are missing. Additionally, you receive an error message that lists the missing files.
- The computer sometimes starts as expected. However, at other times, the computer stops responding before the desktop icons and the taskbar appear.
- The computer runs very slowly. Additionally, the computer takes longer than expected to start.
- You receive out-of-memory error messages even though the computer has sufficient RAM.
- New programs are installed incorrectly.
- Windows spontaneously restarts unexpectedly.
- A disk utility such as Scandisk reports multiple serious disk errors.
- A partition disappears.
- The computer always stops responding when you try to use Microsoft Office products.
- You cannot start Windows Task Manager.
- Antivirus software indicates that a computer virus is present.

ANTIVIRUS

Antivirus (or anti-virus) software is used to prevent, detect, and remove malware, including computer viruses, worms, and trojan horses. Such programs may also prevent and remove adware, spyware, and other forms of malware.

A variety of strategies are typically employed. Signature-based detection involves searching for known patterns of data within executable code. However, it is possible for a user to be infected with new malware for which no signature exists yet. To counter such so-called zero-day threats, heuristics can be used. One type of heuristic approach, generic signatures, can identify new viruses or variants of existing viruses by looking for known malicious code (or slight variations of such code) in files. Some antivirus software can also predict what a file will do if opened or run by emulating it in a sandbox and analyzing its actions to see if it performs anything malicious. If it does, this could mean the file is malicious.

However useful antivirus software is, it can have drawbacks. Antivirus software can degrade computer performance. Inexperienced users may have trouble understanding the prompts and decisions that antivirus software presents them with, and an incorrect decision may lead to a security breach. If the antivirus software employs heuristic detection (of any kind), success depends on achieving the right balance between false positives and false negatives. The effectiveness of antivirus software has also been researched and debated; one study found that the detection success of major antivirus software dropped over a one-year period.


History

Most of the computer viruses written in the early and mid-1980s were limited to self-reproduction and had no specific damage routine built into the code (research viruses). That changed as more programmers became acquainted with virus programming and released viruses that manipulated or even destroyed data on infected computers. It then became necessary to think about antivirus software to fight these malicious viruses.

There are competing claims for the innovator of the first antivirus product. Possibly the first publicly documented removal of a computer virus in the wild was performed by Bernd Fix in 1987. Fred Cohen, who published one of the first academic papers on computer viruses in 1984, started to develop strategies for antivirus software in 1988 that were picked up and continued by later antivirus software developers. Also in 1988, a mailing list named VIRUS-L was started on the BITNET/EARN network, where new viruses and the possibilities of detecting and eliminating them were discussed. Some members of this mailing list, such as John McAfee and Eugene Kaspersky, later founded software companies that developed and sold commercial antivirus software.

Before Internet connectivity was widespread, viruses were typically spread by infected floppy disks. Antivirus software came into use, but was updated relatively infrequently. During this time, virus checkers essentially had to check only executable files and the boot sectors of floppy and hard disks. However, as internet usage became common, initially through the use of modems, viruses began to spread throughout the Internet. Over the years, antivirus software has had to check many more types of files (and not only executable files) for several reasons:

- Powerful macros used in word processor applications, such as Microsoft Word, presented a further risk. Virus writers started using the macros to write viruses embedded within documents. This meant that computers could now also be at risk from infection by documents with hidden attached macros.
- Later e-mail programs, in particular Microsoft Outlook Express and Outlook, were vulnerable to viruses embedded in the e-mail body itself. Now, a user's computer could be infected by just opening or previewing a message.

As always-on broadband connections became the norm and more and more viruses were released, it became essential to update virus checkers more and more frequently. Even then, a new zero-day virus could become widespread before antivirus companies released an update to protect against it.
Identification methods


There are several methods which antivirus software can use to identify malware.

Signature-based detection is the most common method. To identify viruses and other malware, antivirus software compares the contents of a file to a dictionary of virus signatures. Because viruses can embed themselves in existing files, the entire file is searched, not just as a whole but also in pieces.

Heuristic-based detection, like malicious activity detection, can be used to identify unknown viruses.

File emulation is another heuristic approach. It involves executing a program in a virtual environment and logging the actions the program performs. Depending on the actions logged, the antivirus software can determine whether the program is malicious and then carry out the appropriate disinfection actions.
Signature-based detection

Traditionally, antivirus software has relied heavily upon signatures to identify malware. This can be very effective, but cannot defend against malware unless samples have already been obtained and signatures created. Because of this, signature-based approaches are not effective against new, unknown viruses. Because new viruses are being created every day, the signature-based detection approach requires frequent updates of the virus signature dictionary. To assist the antivirus software companies, the software may allow the user to upload new viruses or variants to the company, allowing the virus to be analyzed and the signature added to the dictionary. Although the signature-based approach can effectively contain virus outbreaks, virus authors have tried to stay a step ahead of such software by writing "oligomorphic", "polymorphic" and, more recently, "metamorphic" viruses, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as not to match virus signatures in the dictionary.
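The dictionary lookup at the heart of signature-based detection can be sketched in a few lines of Python. The byte signatures and detection names below are invented placeholders, not real malware signatures; real engines hold enormous dictionaries, use optimized multi-pattern search, and scan files both whole and in pieces.

```python
# Simplified sketch of signature-based detection: scan a file's raw
# bytes for known byte patterns. Signatures and names are made-up
# placeholders for illustration only.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.TestVirus.A",
    b"EVIL-MARKER":      "Example.TestVirus.B",
}

def scan(data):
    """Return the names of all signatures found in the data."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

clean = b"just an ordinary document"
infected = b"header" + b"\xde\xad\xbe\xef" + b"payload"

print(scan(clean))     # []
print(scan(infected))  # ['Example.TestVirus.A']
```

The sketch also makes the section's limitation concrete: a polymorphic virus that re-encrypts its body would no longer contain the fixed byte pattern, so this scan would miss it.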
Effectiveness

Studies in December 2007 showed that the effectiveness of antivirus software had decreased in recent years, particularly against unknown or zero-day attacks. The German computer magazine c't found that detection rates for these threats had dropped from 40-50% in 2006 to 20-30% in 2007. At that time, the only exception was the NOD32 antivirus, which managed a detection rate of 68 percent.

The problem is magnified by the changing intent of virus authors. Some years ago it was obvious when a virus infection was present: the viruses of the day, written by amateurs, exhibited destructive behavior or pop-ups. Modern viruses are often written by professionals, financed by criminal organizations.

Traditional antivirus solutions run virus scanners on schedule, on demand, and sometimes in real time. If a virus or other malware is located, the suspect file is usually placed into quarantine to terminate its chances of disrupting the system. Traditional antivirus solutions scan and compare against a publicised and regularly updated dictionary of malware, otherwise known as a blacklist. Some antivirus solutions have additional options that employ a heuristic engine, which further examines the file to see whether it is behaving in a similar manner to previous examples of malware. A newer technology utilized by a few antivirus solutions is whitelisting: this technology first checks whether a file is trusted, questioning only those that are not.

Independent testing of all the major virus scanners consistently shows that none provides 100% virus detection. In tests conducted in February 2010, the best provided detection as high as 99.6%, while the lowest provided only 81.8%. All virus scanners produce false positive results as well, identifying benign files as malware. Although methodologies may differ, some notable independent quality testing agencies include AV-Comparatives, ICSA Labs, West Coast Labs, VB100 and other members of AMTSO (the Anti-Malware Testing Standards Organization).
New viruses


Most popular anti-virus programs are not very effective against new viruses, even those that use non-signature-based methods that should detect new viruses. The reason for this is that the virus designers test their new viruses on the major anti-virus applications to make sure that they are not detected before releasing them into the wild. Some new viruses, particularly ransomware, use polymorphic code to avoid detection by virus scanners. Jerome Segura, a security analyst with ParetoLogic, explained:

It's something that they miss a lot of the time because this type of [ransomware virus] comes from sites that use a polymorphism, which means they basically randomize the file they send you and it gets by well-known antivirus products very easily. I've seen people firsthand getting infected, having all the pop-ups and yet they have antivirus software running and it's not detecting anything. It actually can be pretty hard to get rid of, as well, and you're never really sure if it's really gone. When we see something like that usually we advise to reinstall the operating system or reinstall backups.
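The effect Segura describes, randomising the file so that signature checks fail, can be demonstrated with a toy example. The one-byte XOR "packer" below is purely illustrative (real polymorphic engines also mutate their decoding stub), but it shows the core problem: the same payload, re-encoded with a fresh random key, produces different bytes and therefore a different hash every time.

```python
import hashlib
import os

def polymorphic_encode(payload: bytes) -> bytes:
    """Toy polymorphism: prefix a fresh one-byte key, XOR the payload with it.

    The behaviour after decoding is identical on every run, but the on-disk
    bytes -- and therefore any hash-based signature -- differ each time.
    """
    key = os.urandom(1)[0] | 1          # force an odd, non-zero key
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(blob: bytes) -> bytes:
    """Recover the original payload from an encoded blob."""
    key = blob[0]
    return bytes(b ^ key for b in blob[1:])

payload = b"same malicious behaviour every time"
copy_a = polymorphic_encode(payload)
copy_b = polymorphic_encode(payload)
# Both copies decode to the identical payload, yet neither matches
# a signature computed from the plain payload's hash.
```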

A proof-of-concept malware program has shown how new viruses and malware could use the Graphics Processing Unit (GPU) to avoid detection by anti-virus software. The potential success of this approach lies in bypassing the CPU, making it much harder for security researchers to analyse the inner workings of such malware.



Popularity

A 2009 survey by Symantec suggests that a third of small to medium-sized businesses do not use antivirus protection, whereas more than 80% of home users have some kind of antivirus installed.

On 7 July 2010, OPSWAT issued an antivirus market-share report based on endpoint detections, which suggested that the majority of the endpoint antivirus market was held by free products such as Avast!, Avira and AVG.

Other methods

(Figure: a command-line virus scanner, ClamAV 0.95.2, running a virus signature definition update, scanning a file and identifying a Trojan.)

Installed antivirus software running on an individual computer is only one way of guarding against viruses. Other methods are also used, including cloud-based antivirus, firewalls and online scanners.

Cloud antivirus

In current antivirus software, a new document or program is scanned with only one virus detector at a time. CloudAV can instead send programs or documents to a network cloud, where multiple antivirus engines and behavioral detection are applied simultaneously. This is more thorough, and the system can also check the new document or program's access history. CloudAV is a cloud-computing antivirus developed by scientists at the University of Michigan. Each time a computer or device receives a new document or program, that item is automatically detected and sent to the antivirus cloud for analysis. The CloudAV system uses 12 different detectors that act together to tell the PC whether the item is safe to open.
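The idea of several detectors acting together can be sketched as simple verdict aggregation. The toy engines and the flag-if-any-engine-flags policy below are invented for illustration; CloudAV's actual decision logic is more involved.

```python
from typing import Callable, Iterable

# A detector takes file contents and returns True for "looks malicious".
Detector = Callable[[bytes], bool]

def cloud_scan(data: bytes, detectors: Iterable[Detector]) -> bool:
    """Aggregate verdicts: flag the item if ANY engine flags it."""
    return any(detect(data) for detect in detectors)

# Two invented engines: one signature-style, one crude heuristic.
def signature_engine(data: bytes) -> bool:
    return b"pretend-bad-bytes" in data

def heuristic_engine(data: bytes) -> bool:
    return data.count(b"eval(") > 3      # "too many eval calls" heuristic

ENGINES = [signature_engine, heuristic_engine]
```

Running many engines in parallel trades extra computation (done in the cloud, not on the endpoint) for broader coverage, since the engines' blind spots rarely coincide.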

Network firewall

Network firewalls prevent unknown programs and Internet processes from accessing the protected system. However, they are not antivirus systems as such, and so make no attempt to identify or remove anything. They may protect against infection from outside the protected computer or LAN, and limit the activity of any malicious software already present, by blocking incoming or outgoing requests on certain TCP/IP ports. A firewall is designed to deal with broader system threats that come from network connections, and it is not an alternative to a virus protection system.
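Blocking traffic on certain TCP/IP ports, as described above, amounts to checking each connection attempt against a policy. The port set and allow/deny function below are an invented, minimal sketch; real firewalls match on much richer rules (addresses, direction, connection state).

```python
# Hypothetical policy: deny a few TCP ports historically abused by worms.
BLOCKED_TCP_PORTS = {135, 139, 445}   # RPC / NetBIOS / SMB

def allow_connection(port: int) -> bool:
    """Return True if an inbound TCP connection to this port is permitted."""
    return port not in BLOCKED_TCP_PORTS
```

Note that the decision is made without inspecting the payload at all, which is why a firewall complements, rather than replaces, antivirus software.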
Online scanning

Some antivirus vendors maintain websites with free online scanning capability of the entire computer, critical areas only, local disks, folders or files.
Specialist tools

Virus removal tools are available to help remove stubborn infections or certain types of infection. Examples include the Trend Micro Rootkit Buster for detecting rootkits and VundoFix for removing some variants of Vundo infections. Antivirus vendors also offer stand-alone tools to remove virus infections, such as the Avira AntiVir Removal Tool. A bootable rescue disk (such as a CD/DVD disc or USB storage device) can be used to run anti-virus software outside the installed operating system and remove infections while they are dormant. A bootable anti-virus disk is useful when, for example, the installed operating system is no longer bootable, or harbours malware that resists all attempts at disinfection by the anti-virus program running on the infected computer. Examples of such bootable disks include the Avira AntiVir Rescue System (a Linux-based rescue CD) and the AVG Rescue CD. The AVG Rescue CD download page also offers a bootable USB version for booting from a USB storage device. This is useful on netbooks without an internal optical drive, or on any computer that supports booting from USB devices, as an alternative to burning a blank CD/DVD.

FUTURE PROSPECTS
The history of computers and computer technology has thus far been a long and fascinating one, stretching back more than half a century to the first primitive computing machines. As anyone who has looked at the world of computers lately can attest, the size of computers has shrunk sharply, even as their power has increased at an exponential rate. Indeed, the cost of computers has fallen so much that many households now own not just one but two, three or even more PCs. As computers and computer technology continue to evolve and change, many people, from science-fiction writers and futurists to computer professionals and ordinary users, have wondered what the future holds for the computer and related technologies. Beyond today's innovations, there are likely to be many, many more.

One of the most important areas of research in the world of computers is artificial intelligence. Though its ultimate goals remain the subject of ongoing research, artificial intelligence technology is already in place and already serving the needs of humans everywhere. Nanotechnology is another important part of the future of computers, expected to have a profound impact on people around the globe. Nanotechnology is the process whereby matter is manipulated at the atomic level, providing the ability to build objects from their most basic parts.

CONCLUSION

It was really a nice experience making this project, and we enjoyed it a lot. Though collecting information about the topic was not simple, it was interesting. The major part of this project provides valuable information about computers beyond the basics. The role of computers has always been indispensable to our lives. Computers have made human beings more efficient in their work by decreasing the complexity of their day-to-day tasks, and they are more reliable and faster as well. The last decade has proved to be a boom for the IT industry, and the process is still continuing. With increasing development in computer technology, many job opportunities have been created. The development of GUI-based operating systems has made personal computing easier and more user-friendly, and different types of application software now cover almost every field of our day-to-day work.

Finally, we would like to conclude that whether it is an accountant's job or an engineer's, computers have always earned the trust of these professionals by helping them in their complicated work.

Thank you,
ASHISH MAKKAR
AMIT SABU

BIBLIOGRAPHY
I took help from the following sources:
GOOGLE
WIKIPEDIA
MODULE I
MODULE II
