
ELE 275 Computer Applications I

1. Introduction to Computers and Computing: identification, functions, applications and use of PC parts and peripheral devices.

2. Safety precautions and preventive maintenance of the PC.

3. Filing System: word processing applications and use.

4. Internet: available services, principle of operation, applications, demonstrations.

5. Spreadsheet: applications and use.

6. Database Management package: applications, demonstrations, report presentation.

7. Report presentation, Software Packages: applications, demonstrations and use.

8. Mini project to test proficiency in the use of these software packages.

O. Oniyide

Engr. Dr. O. S. Sakariya

2022/2023 academic session

CHAPTER ONE

Introduction to Computers and Computing

1.0 Computer System.

The Computer System can be viewed as a system of interconnected devices that share a central storage system and various peripheral devices such as printers, scanners, modems, routers, etc. The Computer System is any programmable device, including its software, hardware, peripherals, procedures, users, interconnections and inputs, used for the electronic processing and output of information for reporting and control. The computer system developed from the invention of the Abacus through the work of Blaise Pascal to that of Charles Babbage, witnessing a trend from big calculating machines (analogue in nature) to digital and miniaturized equipment with underlying mini, micro and nano technologies. Today's computers are nanotechnology driven and, cost-wise, follow Moore's law. Computers, through the Internet, revolutionized the world and turned it into a global village with emerging technology-driven generations: 2G, 3G, 4G and 5G. Artificial Intelligence (AI) and Artificial General Intelligence (AGI) are gradually driving computers into wearable equipment on humans: pieces weighing a few grams, yet powerful and detachable, capable of being plugged into projectors, scanners and printers, and able to display information on the wearer's hand through laser ink projection. The AI Pin, which costs about 700 dollars and was slated for release in April 2024, is an example of an AGI-driven computer system of our generation. Chip-driven health care systems and fall-detection systems for elderly people all have micro and nano computer chips embedded in them and are subsets of AI systems. The terms Personal Computers, Industrial Computers and Data Capture Systems are often used in the identification and understanding of the functions and applications of computers, as outlined below.

Personal Computer is a term often used to refer to a computer system operated by one person at a time. Computers used by many people are often referred to as multi-user computer systems. Computer systems can be single-user single-task systems, single-user multi-task systems, multi-user single-task systems or multi-user multi-task systems.

Figures a, b, c and d show pictures of a 6th-generation, Core i7, touch-screen panel Personal Computer (PC) system. Picture e shows a 4th/5th-generation personal computer, while pictures h, g and f show computers of the 2nd and 3rd generations respectively, as desktop (h) or personal computers (f).

Industrial Computers are used in businesses, manufacturing plants and industrial settings. They are used mainly for data acquisition, process control, hosting and data transfers. Pictures 1 and 3 show a small-form-factor Intel quad-core i7-8550 industrial CPU used as a data capture or acquisition system. Pictures 2 and 4 show industrial computers used in process control. Picture 5 shows a CNC machine for process control. Picture 6 depicts an industrial-grade computer used in the electric power industry, and Picture 9 depicts a Mini Computer System.

Picture 7 shows an industrial computer used on oil rigs. Picture 8 depicts an industrial computer for a DCC train control system.

Computers, under the control of instructions (programs), accept and process data to produce an output. The outputs from computers are referred to as information. Information needs to be precise, concise, relevant, accurate and timely; hence machines used for processing information must showcase these inherent qualities, of which speed (a measure of the number of instructions the computer system can conveniently execute per second) plays an important role.

Examples of computer systems include: automated manufacturing equipment, automated laboratory equipment, manufacturing execution systems, laboratory, clinical and manufacturing database systems, data capture systems, control systems, cloud computing equipment, ubiquitous computing equipment, and electronic data capture systems in clinical data management.

The Computer System is defined as an electronic device because computers contain electronic components such as capacitors, resistors, transistors and integrated circuits with various levels of integration, ranging from Small Scale to Very Large Scale Integration (VLSI). These are the major components that form the processors (microprocessors, utilizing micro and nano technologies), memory chips, address decoders and encoders, multiplexers and demultiplexers, bus buffers, clock circuits and other units that exist in the computer system.

Computer Networks:

A computer network is a collection of computers and other hardware interconnected by communication channels that allow sharing of resources and information. The term network has been used to mean the interconnection of two or more devices for the purpose of resource sharing. Resources that can be shared on a computer network include storage/memory, printers (dot matrix, laser, thermal, ink jet and desk jet), scanners, and data capture systems (bar code readers, video and music servers, etc.). Computer networks can be classified based on:

 Medium: Based on the medium used to transport data over a communication network, computer networks exist as wired, wireless and exotic-technology computer networks.
 Coverage area: Computer networks exist as Local Area Networks (LAN), Wide Area Networks (WAN), and Metropolitan Area Networks (MAN).

Communication protocols define the rules and data formats for exchanging information on a computer network and provide the basis for network programming. Examples of communication protocols used for computer networks include: Border Gateway Protocol (BGP), Data Transfer Protocol (DTP), File Transfer Protocol (FTP), Ethernet (thin and thick), HyperText Transfer Protocol (HTTP), the IEEE 802.11 protocol, Transmission Control Protocol (TCP), Internet Protocol (IP), and Voice over Internet Protocol (VoIP).
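As an illustration of two of the protocols named above (TCP over IP), the short sketch below opens a TCP connection between two endpoints on one machine and exchanges a few bytes. It is a minimal Python illustration of mine, not part of the original notes; the port number is chosen automatically by the operating system.

    import socket
    import threading

    # Create a listening (server) socket; port 0 lets the OS pick a free port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo_once():
        # Accept one connection and echo back whatever arrives.
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"echo: " + conn.recv(1024))

    threading.Thread(target=echo_once).start()

    # The client side: connect, send, and read the reply over TCP/IP.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(b"hello")
        print(cli.recv(1024))   # prints b'echo: hello'
    srv.close()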

Computer networking is sometimes considered a sub-discipline of electrical


engineering, telecommunications, computer science, information technology or
computer engineering, since it relies upon the theoretical and practical application of
these disciplines.

Before the advent of computer networks that were based upon some type of
telecommunications system, communication between calculation machines and early
computers was performed by human users by carrying out instructions between them.
Many of the social behaviors seen in today’s internet were demonstrably present in
the 19th century and arguably in even earlier networks using visual signals.

 In September 1940, George Stibitz used a Teletype machine to send instructions for a problem set from Dartmouth College to his Complex Number Calculator in New York and received results back by the same means. Linking output systems like teletypewriters to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Computer Network", a precursor to the ARPANET.

 Early networks of communicating computers included the military radar


system Semi-Automatic Ground Environment (SAGE) which started in the
late 1950s.

 The commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes in 1960.
 In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing
System for distributed users of large computer systems. The same year, at
Massachusetts Institute of Technology, a research group supported by General
Electric and Bell Labs used a computer to route and manage telephone
connections.

 Throughout the 1960s Leonard Kleinrock, Paul Baran and Donald Davies
independently conceptualized and developed network systems which used
packets that could be used in a network between computer systems.

 In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network (WAN). This was an immediate precursor to the ARPANET, of which Roberts became program manager.

 The first widely used telephone switch that used true computer control was
introduced by Western Electric in 1965.

 In 1969 the University of California at Los Angeles, the Stanford Research


Institute, University of California at Santa Barbara, and the University of Utah
were connected as the beginning of the ARPANET network using 50 Kbit/s
circuits.

 Commercial services using X.25 were deployed in 1972, and later used as an
underlying infrastructure for expanding TCP/IP networks.

Today, computer networks are the core of modern communication. All modern aspects
of the public switched telephone network (PSTN) are computer-controlled, and telephony
increasingly runs over the Internet Protocol, although not necessarily the public Internet.
The scope of communication has increased significantly in the past decade, and this
boom in communications would not have been possible without the progressively
advancing computer network. Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive the computer hardware, software, and peripherals industries.

Common layouts

A network's topology is the layout of the interconnections of the nodes of a computer network. Common layouts are:

 A bus network: all nodes are connected to a common medium, and all traffic travels along this medium.

 A star network: all nodes are connected to a special central node. This is the
typical layout found in a Wireless LAN, where each wireless client connects to
the central Wireless access point.
 A ring network: each node is connected to its left and right neighbour nodes, such that all nodes are connected and each node can reach every other node by traversing nodes left- or rightwards.

 A mesh network: each node is connected to an arbitrary number of neighbours in


such a way that there is at least one traversal from any node to any other.

 A fully connected network: each node is connected to every other node in the
network.

Note that the physical layout of the nodes in a network may not necessarily reflect the
network topology.
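The layouts above can be modelled as adjacency lists. The short sketch below (an illustration of mine, not part of the original notes) builds a star and a ring and confirms that every node can reach every other node.

    from collections import deque

    def reachable(graph, start):
        """Return the set of nodes reachable from `start` (breadth-first search)."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        return seen

    # A star: every client connects only to the central node (e.g. an access point).
    star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
    # A ring: each node is linked to its left and right neighbours.
    ring = {"a": ["d", "b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}

    for name, graph in (("star", star), ("ring", ring)):
        fully_connected = all(reachable(graph, n) == set(graph) for n in graph)
        print(name, "fully reachable:", fully_connected)   # True for both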

Ubiquitous computing (ubicomp) is a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. Ubiquitous computing is defined as "machines that fit the human environment instead of forcing humans to enter theirs". For example, a domestic ubiquitous computing environment might interconnect lighting and environmental controls with personal biometric monitors woven into clothing, so that illumination and heating conditions in a room might be modulated continuously and imperceptibly.

Ubiquitous computing presents challenges across computer science: in systems design and engineering, in systems modeling, and in user interface design. Contemporary human-computer interaction models, whether command-line, menu-driven, or GUI-based, are inappropriate and inadequate for the ubiquitous case. This suggests that the "natural" interaction paradigm appropriate to a fully robust ubiquitous computing has yet to emerge, although there is also recognition in the field that in many ways we are already living in a ubicomp world.

Mark Weiser proposed three basic forms for ubiquitous system devices: tabs, pads and boards.
 Tabs: wearable centimeter sized devices

 Pads: hand-held decimeter-sized devices

 Boards: metre-sized interactive display devices.

These three forms proposed by Weiser are characterized by being macro-sized, having a planar form and incorporating visual output displays. Hence, three additional forms for ubiquitous systems have been proposed:

 Dust: miniaturized devices that can function without visual output displays, e.g., Micro Electro-Mechanical Systems (MEMS), ranging from nanometres through micrometres to millimetres.

 Skin: fabrics based upon light emitting and conductive polymers, organic
computer devices, can be formed into more flexible non-planar display surfaces
and products such as clothes and curtains.

 Clay: ensembles of MEMS can be formed into arbitrary three dimensional shapes
as artifacts resembling many different kinds of physical object.

Internet:

The Internet is a word which appeared in 1982, resulting from the work of Professor Leonard Kleinrock on the Advanced Research Projects Agency NETwork (ARPANET), begun in 1968. The ARPANET was commissioned to build a robust, fault-tolerant and distributed computer network for the United States. At takeoff, the ARPANET project lacked international acceptance and support, an event which led to the development of the X.25 protocol in Europe. Notable early connections outside the United States included the Norwegian Seismic Array (NORSAR) in June 1973 and the Tanum Earth Station in Sweden. Eight years before the establishment of the Tanum Earth Station, precisely in 1965, the Lanlate Earth Station in Oyo State, Nigeria, West Africa, was established as the first of three satellite earth stations in Nigeria. In the UK, a network was developed by Peter T. Kirstein's research group of the Institute of Computer Science, University of London, known today as University College London.

The Internet is a network of networks that consists of millions of private, public,


academic, business, and government networks of local to global scope that are linked by
a broad array of electronic, wireless and optical networking technologies.

Resources and services available on the internet include:

 The World Wide Web (WWW).

 Email support infrastructure.


 Voice over Internet Protocol (VoIP).

 Internet Protocol Television (IPTV).

 Blogging.

 Instant messaging.

 Internet forums.

 Social networking (Facebook, LinkedIn, Google Chat, Yahoo, Twitter, Hi5, Myspace, etc.).

 Online classrooms (Google Classroom, Google Meet, Zoom, etc.).

 Online shopping.

 E-commerce.

 E-Conference.

 E-voting.

 Financial Services.

The Internet and the World Wide Web are often used interchangeably in everyday speech; thus it is common to speak of going on the Internet when invoking a web browser to view web pages. The World Wide Web is just one of many services running on the Internet. It is a collection of interconnected documents and other resources linked by hyperlinks and Uniform Resource Locators (URLs). As of December 2012, more than 2.4 billion people worldwide used the Internet. The word "surfing" is also sometimes used in place of browsing, as in Internet surfing and Internet browsing.

A brief History of the Internet is summarized as follows:

 Research into packet switching started in the early 1960s, and packet-switched networks such as the Mark I at NPL in the UK, ARPANET, CYCLADES, Merit Network, Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, whereby multiple separate networks could be joined into a network of networks, thanks to the ground-breaking work of British scientist Donald Davies on packet switching.
 The first two nodes of what would become the ARPANET were interconnected
between Leonard Kleinrock’s Network Measurement Center at the UCLA’s
School of Engineering and Applied Science and Douglas Engelbart’s NLS system
at SRI International (SRI) in Menlo Park, California, on 29 October, 1969. The
third site on the ARPANET was the Culler-Fried Interactive Mathematics center
at the University of California at Santa Barbara, and the fourth was the University
of Utah Graphics Department. In an early sign of future growth, there were
already fifteen sites connected to the young ARPANET by the end of 1971.
These early years were documented in the 1972 film Computer Networks: The
Heralds of Resource Sharing.

 Early international collaborations on ARPANET were sparse. For various


political reasons, European developers were concerned with developing the X.25
networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in
June 1973, followed in 1973 by Sweden with satellite links to the Tanum Earth
Station and Peter T. Kirstein’s research group in the UK, initially at the Institute
of Computer Science, University of London and later at University College
London.

 In December 1974, RFC 675 – Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as shorthand for internetworking; later RFCs repeated this use, so the word started out as an adjective rather than the noun it is today. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized and the concept of a world-wide network of fully interconnected TCP/IP networks called the Internet was introduced.

 TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNET) provided access to supercomputer sites in the United States for research and education organisations, first at 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. Commercial Internet service providers (ISPs) began to emerge in the late 1980s and early 1990s. The ARPANET was decommissioned in 1990. The Internet was commercialized in 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic. The Internet expanded rapidly to Europe and Australia in the mid-to-late 1980s, and to Asia and Africa, including Nigeria, in the late 1980s and early 1990s. Early Internet service providers in Nigeria include Infoweb Limited, LinkServe Limited, Hyperia, Microcom Systems Limited and Cyber Space. The proliferation of the Global System for Mobile communications (GSM) in Nigeria has in recent times paved the way for easier access to the Internet through the GSM operators, enhanced by the introduction of fibre optic technology.

General structure of The Internet:

It has been determined that both the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks. The Internet has been described as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. Thus, the possibility of developing alternative structures is investigated. The Internet structure has been found to be highly robust to random failures yet very vulnerable to high-degree attacks.

Data Transfer

File sharing is an example of transferring large amounts of data across the Internet. A
computer file can be emailed to customers, colleagues and friends as an attachment.
It can be uploaded to a website or FTP server for easy download by others.

Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming services may also allow time-shifted viewing or listening, such as Preview Classic Clips and Listen Again features. Podcasting is a variation on this theme, where (usually audio) material is downloaded and played back on a computer, or shifted to a portable media player to be listened to on the move. For example, standard image quality needs a 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p. With fibre optic technology, speeds of 10 to 100 Mbit/s can be obtained.
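A short worked example (my own arithmetic, using the bitrates quoted above) shows how link speed translates into data volume for an hour of streaming:

    # Bitrates quoted in the paragraph above, in megabits per second.
    rates_mbit_s = {"SD 480p": 1.0, "HD 720p": 2.5, "HDX 1080p": 4.5}

    for quality, mbit_s in rates_mbit_s.items():
        # One hour is 3600 seconds; divide by 8 to convert megabits to megabytes.
        megabytes_per_hour = mbit_s * 3600 / 8
        print(f"{quality}: about {megabytes_per_hour:,.0f} MB per hour")
    # SD 480p: about 450 MB; HD 720p: about 1,125 MB; HDX 1080p: about 2,025 MB.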

Webcams are a low-cost extension of this phenomenon. Internet users can watch animals around an African waterhole. Video chat rooms and video conferencing are also popular, with many uses coming from personal webcams, with and without two-way sound. YouTube claims that its users watch hundreds of millions of videos and upload hundreds of thousands of videos daily.

The Internet is a globally distributed network comprising many voluntarily


interconnected autonomous networks. It operates without a central governing body. However, to maintain interoperability, all technical and policy aspects of the underlying core infrastructure and the principal name spaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), headquartered in Marina del Rey, California. ICANN is the authority that coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Examples of domain names include .com, .org and .edu.ng.

The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, data cards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly; within the limitations imposed by the small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Internet chat, whether using Yahoo, Hi5 or a Google chat room, an instant messaging system, or a social networking website, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.

Content management systems allow collaborating teams to work on shared sets of


documents simultaneously without accidentally destroying each other’s work.

The Internet allows computer users to remotely access other computers and information stores easily, wherever they may be. They may do this with or without computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries.
Computing:

Computing is described as any goal-oriented activity requiring, benefiting from, or


creating computers. For example, computing includes designing and building
hardware and software systems; processing, structuring, and managing various kinds
of information; doing scientific research on and with computers; making computer
systems behave intelligently; creating and using communications and entertainment
media; finding and gathering information relevant to any particular purpose and so
on.

There are five sub-disciplines of the computing field: Computer Science, Computer
Engineering, Information Systems, Information Technology, and Software
Engineering.

The discipline of computing is the systematic study of algorithmic processes that


describe and transform information: their theory, analysis, design, efficiency,
implementation, and application.

Computer Engineering:

Computer Engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer systems. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration, instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers and supercomputers to circuit design.

Cloud Computing: The origin of cloud computing is obscure. It is derived from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communication systems. The word cloud is used as a metaphor for the Internet, based on the standardized use of cloud-like shapes to denote networks on telephony schematics. The cloud symbol was used to represent the Internet as early as 1994.

The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe computers became available in academia and corporations, accessible via thin clients/terminal computers. Early proponents of cloud computing include Herb Grosch (1950s) and John McCarthy (1960s). In the 1950s, Herb Grosch, the author of Grosch's law, postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. In the 1960s, John McCarthy postulated that computation might someday be organized as a public utility. The postulates of Herb Grosch, being the earliest, gave birth to the establishment of the following data centers:

 Service Bureau Corporation (SBC) in 1957

 Tymshare in 1966

 National CSS in 1967

 Dial Data in 1968

 Bolt Beranek and Newman (BBN) in 1968

 Dun & Bradstreet in 1979

 Amazon Web Services in 2006

 Eucalyptus in 2008. Eucalyptus is the first open-source software for deploying private and hybrid clouds. It was a European Union-funded project.

 Real-time cloud environments in mid-2008.

 Smarter Computing, which gave birth to Cloud Computing on 1st March, 2011.

1.1 Areas of Application of the Computer System:

The computer system has become a very versatile piece of equipment for human beings, used for the gathering and processing of data and for the automation of processes (manufacturing, space exploration, nuclear research, the transportation industry, aviation, oceanography, agriculture, etc.). The foundational work of the Abacus, Blaise Pascal, Joseph Marie Jacquard, Charles Babbage and others, to be studied under the history of the development of computers, has in modern times been turned into a useful tool for all. I watched the recently concluded United States election which ushered in President Barack Obama for a second term in office; the conduct of the election, its broadcast and much more all had computer systems behind the scenes. With computers, the running of the affairs of countries, nations and continents has become easier. Manufacturing, communication, transport, commerce, entertainment, farming, education, home and office automation, and data processing activities are but a few areas of application of the computer system. For instance, whenever a telephone call is made, a computer determines how to route the call and calculates its cost. The computer also keeps a backup of all transactions, including call pickup and drop/loss rates, SMS messages, chats, etc. Banks use computers to store and validate (through verification and authentication) customers' transactions. Airline records, which include flight schedules, are kept on computers, and trajectories of satellites in space are calculated with computers. Business analysts use computers to create charts (line, bar, pie, histograms, etc.), which make information easier to understand or capture. Health care delivery has been revolutionized by the computer system: pulses, brain waves, ultrasound scans, X-ray pictures, drug mixing and production, and even operations are carried out with the computer system. Demographic data, maps and other geographic information are stored on computers for referencing.

1.2 The Sub Systems of the Computer System

The Computer System is made of two sub systems:

1.2.1 Hardware Sub System: The hardware sub system refers to the physical
components of the Computer System. This term refers to the electro-mechanical
components of the Computer System, Peripherals, disk drives and data terminals.

1.2.2 Software Sub System: The software sub system is made up of the instructions, otherwise known as programs, which make the hardware subsystem of the computer system work. The software subsystem comprises the operating system software, user software and application programs. In recent times, utility programs, device drivers and other routines that enable input and output operations on the computer system have been included in the operating system software.

The operating system enables input and output operations to be carried out on the computer system. The operating system software also manages the resources available on the computer system, as well as the storage and retrieval of data to and from the storage or memory devices on the computer system. Operating system software can be classified into two types, namely the command/character user interface operating system and the graphical user interface operating system. With a command user interface operating system, the user interacts with the computer system through the issuance of commands, which are combinations of tokens that serve as directives to make the computer system carry out tasks. Examples of command user interface operating systems include PC DOS (Personal Computer Disk Operating System) and MS DOS (Microsoft Disk Operating System). It is noteworthy that some character user interface commands are still included today in most graphical user interface operating systems; UNIX, Xenix and the Microsoft Windows operating systems are examples in this category.

1.3 Utility Software is system software designed to help analyze, configure, optimize or maintain a computer.

Utility software usually focuses on how the computer infrastructure (including the computer hardware, operating system, application software and data storage) operates. Due to this focus, utilities are often rather technical and targeted at people with an advanced level of computer knowledge, in contrast to application software, which allows users to do things like creating text documents, playing video games, listening to music or viewing websites.

1.3.1 Utility Software Categories

 Anti-virus utilities scan computer systems for viruses and other malicious software.

 Archivers output a stream or a single file when provided with a directory or a set
of files. Archive utilities, unlike archive suites, usually do not include
compression or encryption capabilities. Some archive utilities may even have a
separate un-archive utility for the reverse operation.

 Backup Software can make copies of all information stored on a disk and restore
either the entire disk (e.g. in an event of disk failure) or selected files (e.g. in an
event of accidental deletion).

 Clipboard Managers expand the clipboard functionality of an operating system.

 Cryptographic utilities encrypt and decrypt streams and files.

 Data Compression utilities output a shorter stream or a smaller file when


provided with a stream or file.

 Data Synchronization utilities establish consistency among data from a source to


target data storage and vice versa. There are several branches of this type of
utility:
o File Synchronization utilities maintain consistency between two sources.
They may be used to create redundancy or backup copies but are also used
to help users carry their digital music, photos and video in their mobile
devices.

o Revision Control utilities are intended to deal with situations where more
than one user attempts to simultaneously modify the same file.

 Disk Checkers can scan an operating hard drive for errors.

 Disk Cleaners can find files that are unnecessary to computer operation, or take
up considerable amounts of space. Disk cleaner helps the user to decide what to
delete when their hard disk is full.

 Disk Compression utilities can transparently compress/uncompress the contents


of a disk, increasing the capacity of the disk.

 Disk defragmenters can detect computer files whose contents are broken across
several locations on the hard disk, and move the fragments to one location to
increase efficiency.

 Disk partitioners can divide an individual drive into multiple logical drives, each with its own file system which can be mounted by the operating system and treated as an individual drive.

 Disk space analyzers for the visualization of disk space usage by getting the size
for each folder (including sub folders) & files in folder or drive, showing the
distribution of the used space.

 Disk storage utilities

 File managers provide a convenient method of performing routine data management tasks, such as deleting, renaming, cataloging, uncataloging, moving, copying, merging, generating and modifying data sets.

 Hex editors directly modify the text or data of a file. These files could be data or
an actual program.

 Memory testers check for memory failures

 Network utilities analyze the computer’s network connectivity, configure


network settings, check data transfer or log events.

 Registry cleaners clean and optimize the Windows registry by removing old
registry keys that are no longer in use.
 Screensavers were designed to prevent phosphor burn-in on CRT and plasma computer monitors by blanking the screen or filling it with moving images or patterns when the computer is not in use. Contemporary screensavers are used primarily for entertainment or security.

 System monitors for monitoring resources and performance in a computer


system.

 System profilers provide detailed information about the software installed and
hardware attached to the computer.

 In computing, a device driver or software driver is a computer program that


operates or controls a particular type of device that is attached to a computer. A
driver typically communicates with the device through the computer bus or
communications subsystem to which the hardware connects. When a calling
program invokes a routine in the driver, the driver issues commands to the device.
Once the device sends data back to the driver, the driver may invoke routines in
the original calling program. Drivers are hardware-dependent and operating-
system-specific. They usually provide the interrupt handling required for any
necessary asynchronous time-dependent hardware interface.

1.4 Solving problems with the computer system

To make the computer system solve a task for the user, the problem has to be analyzed and a plan for solving the task has to be produced. The person who solves tasks on the computer system by writing programs is known as a computer programmer. The series of instructions written to solve a task on the computer system is known as a computer program, and a person who uses a computer program is known as a user. Tasks are solved with the computer system through knowledge of the existence of the relevant data and the desired output, followed by a step-by-step procedure to process the data so as to yield the desired output. The step-by-step procedure may be produced with the aid of an algorithm, a flowchart, pseudo-code or top-down charts. Converting the algorithm, flowchart or pseudo-code, as the case may be, into specific syntax and semantics produces a computer program. Notice here that the algorithm, flowchart and pseudo-code are not computer programs but tools for developing computer programs.
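As a small illustration of this workflow (my own example, not from the notes), consider the task of computing the average of a set of scores: the algorithm is stated as pseudo-code, then converted into a Python program.

    # Algorithm (pseudo-code):
    #   1. Read a list of numbers           -- the relevant data
    #   2. Sum them and divide by the count -- the step-by-step procedure
    #   3. Output the result                -- the desired output

    def average(numbers):
        total = 0
        for n in numbers:      # step 2: accumulate the sum
            total += n
        return total / len(numbers)

    print(average([70, 85, 92]))   # step 3: prints 82.333...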

1.4.1 Data can be defined as raw facts and figures. The raw facts and figures that have to do with a human being as a person are known as personal data.

The personal data of a human being contain fields such as: name, age, sex, marital status, state of origin, local government area of origin, next of kin, e-mail address, telephone number and any other data that has to do with the person.

Data may be expressed in numerals, often referred to as numeric data, e.g. the age of a person, which is always a positive numeral belonging to the set of integers greater than zero.

Data may also be expressed as non-numeric or character data. Non-numeric data include: name, sex and marital status; codes (bar, colour); rays (visible light rays, infra-red, X-rays, gamma rays, etc.); images and sound.

Data expressed as a combination of both numerals and characters are known as alphanumeric data. The date of birth in the personal record of a human being, a telephone number (+234-08123456789) and an address (e.g. No 1 Michael Jenyo Close) are but a few examples of alphanumeric data.

When raw data are processed, they become information. For instance, if the ages of students in a class of 100 are arranged in descending order of magnitude, such that the oldest student in the class appears as the first entry and the youngest student as the last entry in the list, then information is produced (age information). The piece of information is obtained from the raw data in this case by carrying out comparison operations.
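A tiny sketch of that processing step (with made-up ages of mine):

    ages = [19, 23, 17, 21, 20]                   # raw data
    age_information = sorted(ages, reverse=True)  # comparison operations
    print(age_information)   # [23, 21, 20, 19, 17]: oldest first, youngest last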

In processing data, the arithmetic operations of addition (concatenation for characters), subtraction (de-concatenation), multiplication (often referred to as repeated addition) and division (often referred to as repeated subtraction) are employed as the case may be.

The total deduction from a salary account, for instance, if there are three deductions A to C to be made, will be: Total Deduction = A + B + C. This is an example of an addition operation. Also, if the first three letters of the alphabet are needed to form a name, then, since characters are involved, the operation becomes a concatenation operation expressed as Name = A + B + C. This result leaves Name = "ABC".
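The sketch below (an illustration of mine, with hypothetical figures) shows the two senses of the + operator described above: numeric addition and character concatenation.

    # Numeric addition: three salary deductions (hypothetical figures).
    a, b, c = 100, 250, 75
    total_deduction = a + b + c
    print(total_deduction)        # 425

    # Concatenation: the same operator joins characters instead of adding.
    A, B, C = "A", "B", "C"
    name = A + B + C
    print(name)                   # ABC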

The human brain receives input data, which may be audible sound, smell, sight, feeling, taste, etc., which get processed to produce the intelligence that lets the human being make decisions. Shortly before the decisions are made, arithmetic operations like those outlined above, including comparison, are carried out: for example, is the heat the skin experiences now equal to the normal room temperature? If yes, the human being flees nowhere; but as soon as the comparison shows it to be far greater than what the human being usually experiences, intelligence comes into play and the human brain abstractly runs a routine that takes the being out of the "bad" area. In other words, both the foolish and the wise process data. The measure of how fast the data are processed is known as the processing speed. It is pertinent to state here that the very first general-purpose computer systems were designed to act as human clerks.

The processing speed of the computer system, also known as the speed of operation of the computer system, is measured as the number of instructions the computer system can execute per second. It is also a measure of the number of bits or bytes the computer system can process per second.

A bit is either zero (0) or one (1). A byte is a collection of eight binary digits. A nibble is a combination of four binary digits. A word is a group of bits the processor handles as a unit; its size depends on the machine (commonly 16, 32 or 64 bits).

Bit = {0, 1}

Nibble = {four binary digits}, e.g. 1010

Byte = {eight binary digits}, e.g. 10101011

Kilobit = 1024 bit = 2^10 bit

Kilobyte = 1024 * 8 bit = 2^10 * 2^3 bit = 2^13 bit

Megabit = 1024 * 1024 bit = 2^10 * 2^10 bit = 2^20 bit

Megabyte = 1024 * 1024 * 8 bit = 2^20 * 2^3 bit = 2^23 bit

Gigabit = 1024 * 1024 * 1024 bit = 2^10 * 2^10 * 2^10 bit = 2^30 bit

Gigabyte = 1024 * 1024 * 1024 * 8 bit = 2^30 * 2^3 bit = 2^33 bit

Terabit = 1024 * 1024 * 1024 * 1024 bit = 2^40 bit

Terabyte = 1024 * 1024 * 1024 * 1024 * 8 bit = 2^40 * 2^3 bit = 2^43 bit
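The table above can be checked mechanically; a quick sketch (mine) in Python:

    # Binary units: 1 K = 1024 = 2**10, and a byte is 2**3 bits.
    kilobyte = 1024 * 8            # bits
    megabyte = 1024 * kilobyte
    gigabyte = 1024 * megabyte

    assert kilobyte == 2**13       # 2^10 * 2^3
    assert megabyte == 2**23
    assert gigabyte == 2**33
    print(gigabyte, "bits in a gigabyte")   # 8589934592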

Another measure of speed, often used in high-performance computing, is the number of Floating-Point Operations that can be executed per second (FLOPS). FLOPS are used to measure the speed of computer systems used for scientific calculations in floating-point format.

The terms:

KILOFLOPS (10^3),

MEGAFLOPS (10^6),

GIGAFLOPS (10^9),

TERAFLOPS (10^12),

PETAFLOPS (10^15),

EXAFLOPS (10^18),

ZETTAFLOPS (10^21), and

YOTTAFLOPS (10^24) are units used in quantifying the speed of operation of the Computer System in FLOPS.

The singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "Floating-point Operation", and a flop count is a count of these operations (e.g., as required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate; as a rate, 1 flops is interpreted as 1 flop per second (1 flops = 1 flop * s^-1).

For comparison, a hand-held calculator performs relatively few FLOPS. Each calculation
request, such as to add or subtract two numbers, requires only a single operation, so there
is rarely any need for its response time to exceed what the operator can physically use. A
computer response time below 0.1 second in a calculation context is usually perceived as
instantaneous by a human operator, so a simple calculator needs only about 10 FLOPS to
be considered functional.
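For a feel of the unit, the rough sketch below (my own, not a rigorous benchmark) times a loop of floating-point operations and reports an estimated rate in FLOPS; interpreter overhead makes the figure far lower than the hardware's peak.

    import time

    n = 1_000_000
    x = 1.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.000001 + 0.5    # two floating-point operations per iteration
    elapsed = time.perf_counter() - start

    flops = 2 * n / elapsed       # operations performed / seconds taken
    print(f"about {flops:,.0f} FLOPS (pure-Python loop overhead included)")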

1.5 CLASSIFICATION OF COMPUTER SYSTEM

1.5.0 Computer systems or computing equipment are classified here based on the following subheadings: type/input data, size, and age of technology.

Based on type of input data, computer systems exist as Analogue, Digital and Hybrid Computer Systems.

1.5.1 Analogue Computer System

These are computers that work on input data represented or presented in the form of continuous data or quantities. The output accuracy of such systems usually depends on the user. Analogue computers have been used, and are still in use today, in process control in industries and in decision making and data storage in banks. The pictures below show examples and areas of application of a few analogue computer systems:
1.5.2 Digital Computer Systems: These are computers that work on input data represented or presented in the form of discrete quantities, or data which exist at intervals of time. The pulse, heartbeat, light state (on/off), etc. are examples of discrete data. Examples of digital computer systems are shown in the pictures below. Digital computer systems are very accurate; however, the accuracy of digital systems is a function of the precision and rounding-off introduced by the manufacturer. Rounding and coding systems go a long way in determining the accuracy of digital systems. For instance, consider the task of determining a University Scholar between two candidates each having a CGPA of 4.732. The coding system used in determining the CGPA, if re-run n times, would produce the same result to any number of decimal places because the same weights are used for all candidates in the school. Nevertheless, it is obvious that the candidates may not have the same raw scores in all subjects: candidate A may have 75% in ELE 275 and candidate B 89% in ELE 275. The two scores are both grade A, with 5 points attached to each; hence, if ELE 275 is a 3-unit course, the two students would each have a weight of 15 points in ELE 275. If the calculation had been done using a normal averaging of raw scores, just a single student out of the two would have emerged as the University Scholar.
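A small sketch (with hypothetical scores of mine) makes the point concrete: grade-point coding ties the two candidates, while raw averages separate them.

    scores_a = [75, 71, 92]    # candidate A's raw percentage scores (hypothetical)
    scores_b = [89, 70, 95]    # candidate B's raw percentage scores (hypothetical)

    def grade_points(score):
        """Map a raw score to grade points: 5 (an A) for 70 and above, else 4."""
        return 5 if score >= 70 else 4

    gpa_a = sum(map(grade_points, scores_a)) / len(scores_a)
    gpa_b = sum(map(grade_points, scores_b)) / len(scores_b)
    print(gpa_a, gpa_b)                          # 5.0 5.0 -- the coding ties them
    print(sum(scores_a) / 3, sum(scores_b) / 3)  # 79.33 vs 84.67 -- raw averages differ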

1.5.3 Hybrid Computer Systems

Hybrid computer systems are computers that work on input data represented in the form of both discrete and continuous quantities. Examples are shown below:
1.6 Classification of Computers based on Physical Size. Computer systems exist as Mainframe Computers, Mini Computers, Microcomputers and Embedded Systems.

1.6.1 Mainframe Computers

Mainframe Computers: (colloquially referred to as “big iron”) are computers used


primarily by corporate and governmental organisations for critical applications, bulk data
processing such as census, industry and consumer statistics, enterprise resource planning,
and transaction processing. The term originally referred to the large cabinets that housed
the central processing unit and main memory of early computers. Later, the term was
used to distinguish high-end commercial machines from less powerful units. Most large-
scale computer system architectures were established in the 1960s, but continue to
evolve.

 Mainframes are measured in millions of instructions per second (MIPS) while


assuming typical instructions are integer operations, but supercomputers are
measured in floating point operations per second (FLOPS) and more recently by
traversed edges per second or TEPS. Examples of integer operations include
moving data around in memory or checking values. Floating point operations are
mostly addition, subtraction, and multiplication with enough digits of precision to
model continuous phenomena such as weather prediction and nuclear simulations.
In terms of computational ability, supercomputers are more powerful.
 Mainframes are built to be reliable for transaction processing as it is commonly
understood in the business world. A transaction could refer to a set of operations
including disk read/writes, operating system calls, or some form of data transfer
from one subsystem to another. This operation does not count toward the
processing power of a computer. Transaction processing is not exclusive to
mainframes but also used in the performance of microprocessor-based servers and
online networks.

1.6.2 Minicomputer System:

A minicomputer, or colloquially mini, is a class of smaller computers that evolved in the


mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. In a 1970 survey, the New York Times suggested a consensus definition of a minicomputer as a machine costing less than US$25,000 (about 3.75 million Nigerian Naira) with an input-output device such as a teleprinter and at least 4K words of memory, capable of running programs in a higher-level language such as Fortran or Basic.

The class formed a distinct group with its own hardware architectures and operating
systems.

When single-chip CPUs appeared, beginning with the Intel 4004 in 1971, the term
“minicomputer” came to mean a machine that lies in the middle range of the computing
spectrum, in between the smallest mainframe computers and the microcomputers. The
term “minicomputer” is little used today; the contemporary term for this class of system
is “midrange computer”, such as the higher-end SPARC, POWER and Itanium-based
systems from Oracle, IBM and Hewlett-Packard.
The term "minicomputer" evolved in the 1960s to describe the smaller computers that became possible with the use of transistor and core memory technologies, minimal instruction sets and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR. They usually took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room. The first successful minicomputer was Digital Equipment Corporation's 12-bit PDP-8.

Minicomputers were also known as midrange computers. They grew to have relatively
high processing power and capacity. They were used in manufacturing process control,
telephone switching and to control laboratory equipment. In the 1970s, they were the
hardware that was used to launch the computer-aided design (CAD) industry and other
similar industries where a smaller dedicated system was needed.

As microcomputers developed in the 1970s and 80s, minicomputers filled the mid-range area between low-powered microcomputers and high-capacity mainframes. At the time, microcomputers were single-user, relatively simple machines running simple program-launcher operating systems like CP/M or MS-DOS, while minis were much more powerful systems that ran full multi-user, multitasking operating systems such as VMS and Unix, often with timesharing versions of BASIC for application development (MAI Basic Four systems being very popular in that regard). The classical mini was a 16-bit computer, while the emerging higher-performance 32-bit minis were often referred to as superminis.

1.6.3 Microcomputer

A microcomputer is a small, relatively inexpensive computer with a microprocessor as its central processing unit (CPU). It includes a microprocessor, memory, and input/output (I/O) facilities. Microcomputers became popular in the 1970s and 80s with the advent of increasingly powerful microprocessors. The predecessors to these computers, mainframes and minicomputers, were comparatively much larger and more expensive (though indeed present-day mainframes such as the IBM System z machines use one or more custom microprocessors as their CPUs). Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers (in the generic sense).

Monitors, keyboards and other devices for input and output may be integrated or separate. Computer memory in the form of RAM, and at least one other less volatile memory storage device, are usually combined with the CPU on a system bus in one unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices).
Microcomputers are designed to serve only one user at a time, although they can often be
modified with software or hardware to concurrently serve more than one user.
Microcomputers fit well on or under desks or tables, so that they are within easy access
of users. Bigger computers like minicomputers, mainframes, and supercomputers take up
large cabinets or even dedicated rooms.

A microcomputer comes equipped with at least one type of data storage, usually RAM. Secondary storage devices (particularly in the form of floppy disk and hard disk drives) were later built into the microcomputer case.

1.7 Embedded system:

An embedded system is a computer system designed for specific control functions


within a larger system, often with real-time computing constraints. It is embedded as part
of a complete device often including hardware and mechanical parts. By contrast, a
general-purpose computer, such as a personal computer (PC), is designed to be flexible
and to meet a wide range of end-user needs. Embedded systems control many devices in
common use today.

Embedded systems contain processing cores that are either microcontrollers or digital signal processors (DSPs). The key characteristic, however, is being dedicated to handling a particular task. Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.

Physically, embedded systems range from portable devices such as digital watches and
MP3 players, to large stationary installations like traffic lights, factory controllers.
Complexity varies from low, with a single microcontroller chip, to very high with
multiple units, peripherals and networks mounted inside a large chassis or enclosure.

Embedded systems are especially suited for use in transportation, fire safety, safety and
security, medical applications and life critical systems as these systems can be isolated
from hacking and thus be more reliable. For fire safety, the systems can be designed to
have greater ability to handle higher temperatures and continue to operate. In dealing
with security, the embedded systems can be self-sufficient and be able to deal with cut
electrical and communication systems.

In addition to commonly described embedded systems based on small computers, a new


class of miniature wireless devices called motes are quickly gaining popularity as the
field of wireless sensor networking rises. Wireless sensor networking, WSN, makes use
of miniaturization made possible by advanced IC design to couple full wireless
subsystems to sophisticated sensors, enabling people and companies to measure a myriad
of things in the physical world and act on this information through IT monitoring and
control systems. These motes are completely self-contained, and will typically run off a battery source for many years before the batteries need to be changed or charged.

1.7.1 Characteristics of Embedded Systems

1. Embedded systems are designed to do some specific task, rather than be a


general-purpose computer for multiple tasks. Some also have real-time
performance constraints that must be met, for reasons such as safety and usability;
others may have low or no performance requirements, allowing the system
hardware to be simplified to reduce costs.

2. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.

3. The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory and a small or non-existent keyboard or screen.
1.8 History of the Development of Computers:

The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. A few devices are worth mentioning though, such as the mechanical aids to computing, which were very successful and survived for centuries until the advent of the electronic calculator. These are summarized as follows:

2500 BC

The Sumerian Abacus was designed. The device is still in use to date. It is noteworthy to state here that, several centuries after its invention, the Abacus competed with a modern desk calculating machine in a computation speed contest in Japan in 1946.

80 BC

The Antikythera mechanism, an ancient astronomical computer built by the Greeks


around 80 BC.
10 – 70 AD

The Greek mathematician Hero of Alexandria (c. 10-70AD) built a mechanical theater
which performed a play lasting 10 minutes and was operated by a complex system of
ropes and drums that might be considered to be a means of deciding which parts of the
mechanism performed which actions and when. This is the essence of programmability.

900 – 1000

Around the end of the 10th century, the French monk Gerbert d’Aurillac brought back
from Spain the drawings of a machine invented by the Moors that answered either Yes or
No to the questions it was asked.

1200 – 1300

Again in the 13th century, the monks Albertus Magnus and Roger Bacon built talking androids without any further development (Albertus Magnus complained that he had wasted forty years of his life when Thomas Aquinas, terrified by his machine, destroyed it).

1600 – 1650

The slide rule, invented in the 1620s, was carried on five Apollo space missions, including to the Moon.

In 1642, the Renaissance saw the invention of the mechanical calculator, a device that
could perform all four arithmetic operations (addition, subtraction, multiplication and
division) without relying on human intelligence.

The mechanical calculator was at the root of the development of computers in two
separate ways. Initially, it was in trying to develop more powerful and more flexible
calculators that the computer was first theorized by Charles Babbage and then developed.
Secondly, development of a low-cost electronic calculator, successor to the mechanical
calculator, resulted in the development by Intel of the first commercially available
microprocessor integrated circuit.
1800 - 1850

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing
a series of punched paper cards as a template which allowed his loom to weave intricate
patterns automatically. The resulting Jacquard loom was an important step in the
development of computers because the use of punched cards to define woven patterns can
be viewed as an early, albeit limited, form of programmability.

 George Boole (2 November 1815 – 8 December 1864) was an English mathematician, philosopher and logician; a self-taught British mathematician, he devised an algebra of logic that later became a key tool in computer design. His work was in the fields of differential equations and algebraic logic, and he is now best known as the author of The Laws of Thought. As the inventor of the prototype of what is now called Boolean logic, which became the basis of the modern digital computer, Boole is regarded in hindsight as a founder of the field of computer science.

 Charles Babbage (26 December 1791 – 18 October 1871): a British mathematician and
engineer, regarded as the father of the computer. Although the mechanical "analytical
engine" that he conceived was never built, it influenced the design of modern
computers. It had units for input, output, memory, arithmetic, logic and control.
Algorithms were intended to be communicated to the computer via punched cards, and
numbers were to be stored on toothed wheels. His earlier machine, the Difference
Engine, was designed in 1822. Charles Babbage was a rare gem.

 Augusta Ada Byron (10 December 1815 – 27 November 1852): a mathematician and a
colleague of Charles Babbage, regarded as the first programmer. She encouraged Babbage
to modify the design based on programming considerations. Together they developed the
concepts of decision structures, loops, and a library of procedures.

 Herman Hollerith: the founder of a company that was later to become IBM; at the age
of 20 he devised a machine that made it possible to process the data for the US Census
of 1890 in one-third the time required for the 1880 census. His electromechanical
"tabulating machine" passed metal pins through holes in punched cards and into
mercury-filled cups to complete an electric circuit. Each location of a hole
corresponded to a characteristic of the population.

1930s

Alan Turing: a gifted and far-sighted British mathematician; made fundamental
contributions to the theory of computer science, assisted in the construction of some of
the early large computers, and proposed a test for detecting intelligence within a
machine. His theoretical "Turing machine" laid the foundation for the development of
general-purpose programmable computers. He changed the course of the Second World War
by breaking the German "Enigma" code, thereby making secret German messages
comprehensible to the Allies.

John V. Atanasoff: a mathematician and physicist at Iowa State University; declared by a
federal court in Minnesota to be the inventor of the first electronic digital
special-purpose computer. Designed with the assistance of his graduate assistant,
Clifford Berry, this computer used vacuum tubes (instead of the less efficient relays)
for storage and arithmetic functions.

1940:

Howard Aiken: a professor at Harvard University; built the Mark 1, a large-scale digital
computer functionally similar to the "analytical engine" proposed by Babbage. This
computer, which took five years to build and used relays for storage and computations,
was technologically obsolete before it was completed.

Grace M. Hopper: retired in 1986 at the age of 79 as a rear admiral in the United
States Navy; wrote the first major subroutine (a procedure used to calculate sin x on
the Mark 1 computer) and one of the first assembly languages. In 1945 she found that a
moth fused into a wire of the Mark 1 was causing the computer to malfunction: thus the
origin of the term "debugging" for finding errors. As an administrator at Remington
Rand in the 1950s, Dr. Hopper pioneered the development and use of the COmmon Business
Oriented Language (COBOL), a programming language for the business community written in
English-like notation.

John Mauchly and J. Presper Eckert: electrical engineers working at the University of
Pennsylvania; built the first large-scale electronic digital general-purpose computer to
be put into full operation. The ENIAC (Electronic Numerical Integrator And Computer)
used 18,000 vacuum tubes for storage and arithmetic computations, weighed 30 tons, and
occupied 1,500 square feet. It could perform 300 multiplications of two 10-digit
numbers per second, whereas the Mark 1 required 3 seconds to perform a single
multiplication. Later they designed and developed the UNIVAC 1, the first commercial
electronic computer.

The ENIAC, which became operational in 1946, is considered to be the first general-
purpose electronic computer.

John von Neumann: a mathematical genius and member of the Institute for Advanced Study
in Princeton, New Jersey; developed the stored program concept used in all modern
computers. Prior to this development, instructions were programmed into computers by
manually rewiring connections. Along with Herman H. Goldstine, he wrote the first paper
on the use of flowcharts.

A replica of the Small-Scale Experimental Machine (SSEM), the world's first
stored-program computer, is displayed at the Museum of Science and Industry in
Manchester, England. EDSAC (Electronic Delay Storage Automatic Calculator) was one of
the first computers to implement the stored-program (von Neumann) architecture.

Stanislaw Ulam: American research mathematician and educator; pioneered the application
of random numbers and computers to the solution of problems in mathematics and physics.
His techniques, known as Monte Carlo methods or computer simulation, are used to
determine the likelihoods of various outcomes of games of chance and to analyze
business operations.

Maurice V. Wilkes: an electrical engineer at Cambridge University in England and
student of von Neumann; built the EDSAC, the first computer to use the stored-program
concept. Along with D.J. Wheeler and S. Gill, he wrote the first computer programming
text, The Preparation of Programs for an Electronic Digital Computer (Addison-Wesley,
1951), which dealt in depth with the use and construction of a versatile subroutine
library.

John Bardeen, Walter Brattain, and William Shockley: Physicists at Bell Labs;
developed the transistor, a miniature device that replaced the vacuum tube and
revolutionized computer design. It was smaller, lighter, more reliable, and cooler than
the vacuum tube.

1950s

John Backus: a programmer for IBM; in 1953 headed a small group of programmers who
wrote the most extensively used early interpretive computer system, the IBM 701
Speedcoding System. An interpreter translates a high-level language program into
machine language one statement at a time as the program is executed. In 1957, Backus
and his team produced the compiled language Fortran, which soon became the primary
academic and scientific language. A compiler translates an entire program into
efficient machine language before the program is executed. QBasic combines the best of
both worlds: it has the power and speed of a compiled language and the ease of use of
an interpreted language.
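
The statement-at-a-time idea can be illustrated with a minimal sketch in Python (a toy
interpreter written purely for illustration; it is unrelated to the IBM 701 Speedcoding
System, and the mini-language it accepts is hypothetical):

# Toy interpreter: each statement is translated and executed one at a
# time, as the paragraph above describes. Illustrative sketch only.
def interpret(program, env):
    """Execute simple assignment and PRINT statements line by line."""
    for line in program.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("PRINT "):
            # Translate and run this one statement immediately.
            print(eval(line[6:], {}, env))
        else:
            name, expr = line.split("=", 1)
            env[name.strip()] = eval(expr, {}, env)

interpret("""
x = 3
y = x * 4
PRINT x + y
""", {})   # prints 15

A compiler, by contrast, would translate the whole three-line program into machine
language first and only then run it.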

Donald L. Shell: in 1959, the year that he received his Ph.D. in mathematics from the
University of Cincinnati, published an efficient algorithm for ordering (or sorting)
lists of data. Sorting has been estimated to consume nearly one-quarter of the running
time of computers. Shell, bubble, quick, merge and heap sorts are common modern sorting
techniques; a sketch of Shell's method is given below.
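
A minimal Python sketch of Shell's method (an illustrative implementation, not Shell's
original 1959 code) follows; it performs insertion sorts over progressively smaller
gaps:

def shell_sort(items):
    """Shell sort: insertion sort over progressively smaller gaps."""
    n = len(items)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            current = items[i]
            j = i
            # Shift earlier gap-separated elements that are larger.
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2
    return items

print(shell_sort([23, 4, 42, 8, 15, 16]))  # [4, 8, 15, 16, 23, 42]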

1960s

John G. Kemeny and Thomas E. Kurtz: professors of mathematics at Dartmouth College and
the inventors of BASIC; led Dartmouth to national leadership in the educational uses of
computing. Kemeny's distinguished career included serving as an assistant to both John
von Neumann and Albert Einstein, serving as president of Dartmouth College, and
chairing the commission to investigate the Three Mile Island accident. In later years,
Kemeny and Kurtz devoted considerable energy to the promotion of structured BASIC.

Corrado Bohm and Giuseppe Jacopini: two European mathematicians; proved that any
program can be written with just three structures: sequence, decision, and loop. This
result led to the systematic methods of modern program design known as structured
programming; the short sketch below shows the three structures at work.
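
The following Python sketch (an illustrative example, not drawn from the Bohm–Jacopini
paper; the routine and its data are hypothetical) uses all three structures in one
small routine:

# The three structures of structured programming in one routine.
def passing_scores(scores):
    passed = []                    # sequence: statements run in order
    for s in scores:               # loop: repeat for each score
        if s >= 50:                # decision: choose between branches
            passed.append(s)
    return passed

print(passing_scores([35, 72, 50, 49]))  # [72, 50]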

Edsger W. Dijkstra: professor of computer science at the Technological University of
Eindhoven, The Netherlands; stimulated the move to structured programming with the
publication of a widely read article, "Go To Statement Considered Harmful". In that
article he proposed that GOTO statements be abolished from all high-level languages
such as BASIC. The modern programming structures available in QBasic do away with the
need for GOTO statements.

Harlan B. Mills: IBM Fellow and Professor of Computer Science at the University of
Maryland; long advocated the use of structured programming. In 1969, Mills was asked to
write a program creating an information database for the New York Times, a project that
was estimated to require thirty man-years with traditional programming techniques.
Using structured programming techniques, Mills single-handedly completed the project in
six months. The methods of structured programming are used throughout this material.
Today IBM operates delivery centres in the United States, Europe, Africa and Asia. The
picture below shows the IBM delivery centre in Brno, Czech Republic, which serves the
needs of IBM customers in Central Europe and France.

IBM delivery centre, Brno, Czech Republic: an IBM service centre for Central Europe.

Donald E. Knuth: Professor of Computer Science at Stanford University; is generally
regarded as the preeminent scholar of computer science in the world. He is best known
for his monumental series of books, The Art of Computer Programming, the definitive
work on algorithms.

Ted Hoff, Stan Mazor, Robert Noyce, and Federico Faggin: engineers at the Intel
Corporation; developed the first microprocessor chip. Such chips, which serve as the
central processing units of microcomputers, are responsible for the extraordinary
reduction in the size of computers. A computer with greater power than the ENIAC can
now be held in the palm of the hand.

1970s

Paul Allen and Bill Gates: cofounders of Microsoft Corporation; developed languages and
the operating system for the IBM PC. The operating system, known as MS-DOS, is a
collection of programs that manage the operation of the computer. In 1975, Gates
dropped out of Harvard, and Allen left a programming job with Honeywell, to write
software together. Their initial project was a version of BASIC for the Altair, the
first microcomputer. Microsoft is one of the most highly respected software companies
in the United States and a leader in the development of programming languages. Bill
Gates has ranked among the wealthiest people of modern times. He successfully upgraded
MS-DOS from a command-line operating system to a graphical user interface operating
system, purchasing the rights to several user and application programs along the way.
This move saw the release of successive versions of Windows: Windows 95, Windows 98,
Me, 2000, XP (developed with professionals in mind), Vista, Windows Mobile, Windows 7,
Windows 8, Windows 8.1, Windows 10 and, most recently, Windows 11. Each operating
system has distinguishing features such as installation requirements and areas of
application.

Stephen Wozniak and Stephen Jobs: cofounders of Apple Computer Inc.; started the
microcomputer revolution. The two had met as teenagers while working summers at
Hewlett-Packard. Another summer, Jobs worked in an orchard, a job that inspired the
name of their computers. Wozniak designed the Apple computer in Jobs' parents' garage,
and Jobs promoted it so successfully that the company was worth hundreds of millions of
dollars when it went public. Both men resigned from the company in 1985; Jobs returned
in 1997. The Apple computer system displayed in the picture below is found in the
Projects Laboratory of the Department of Electrical and Electronics Engineering,
University of Ilorin, Ilorin, Nigeria. Apple systems are still in production today,
spanning laptops, desktop personal computers, iPads and smartphones.

Dan Bricklin and Dan Fylstra: cofounders of Software Arts; wrote VisiCalc, the first
electronic spreadsheet program. An electronic spreadsheet is a worksheet divided into
rows and columns, which analysts use to construct budgets and estimate costs. A change
made in one number results in the updating of all numbers derived from it. For
instance, changing a person's housing expenses will immediately produce a change in his
total expenses. Bricklin got the idea for an electronic spreadsheet after watching one
of his professors at the Harvard Business School struggle while updating a spreadsheet
at the blackboard. VisiCalc became so popular that many people bought personal
computers just so they could run the program. A simplified spreadsheet produced with
the Microsoft Excel spreadsheet is shown below. Other examples of spreadsheet packages
include Lotus 1-2-3.

Microsoft Excel Spreadsheet showing the rows and columns with associated cell
references and menu bars.
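
The automatic-updating behaviour described above can be modelled in a few lines of
Python (a toy sketch only, not VisiCalc's or Excel's actual recalculation engine; the
cell names and figures are hypothetical):

# Toy spreadsheet model: a derived "cell" recomputes from input cells.
# Real spreadsheets track dependencies; this sketch just recomputes.
cells = {"housing": 300, "food": 200, "transport": 100}

def total_expenses():
    # Derived value: always computed from the current inputs.
    return cells["housing"] + cells["food"] + cells["transport"]

print(total_expenses())   # 600
cells["housing"] = 450    # change one number...
print(total_expenses())   # ...and the total updates: 750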

Robert Barnaby: a dedicated programmer; best known for writing WordStar, one of the
most popular word processors. Word processing programs account for 30% of all software
sold in the United States. The QBasic editor uses WordStar-like commands. Later word
processors include WordPerfect, WordPad, AmiPro, Microsoft Word, etc.

1980s

The 1980s witnessed home computers and the now ubiquitous personal computer. With
the evolution of the Internet, personal computers are becoming as common as the
television and the telephone in the household.

William L. Sydnes: manager of the IBM Entry Systems Boca engineering group; headed the
design team for the IBM Personal Computer. Shortly after its introduction in 1981, the
IBM PC dominated the microcomputer field. QBasic ran on all IBM Personal Computers and
compatibles in those days, though it has long since been superseded.
International Business Machines: abbreviated IBM and nicknamed "Big Blue", is a
multinational computer technology and IT consulting corporation headquartered in
Armonk, New York, United States. The company is one of the few information technology
companies with a continuous history dating back to the 19th century. IBM manufactures
and sells computer hardware and software (with a focus on the latter), and offers
infrastructure services, hosting services, and consulting services in areas ranging
from mainframe computers to nanotechnology. Arvind Krishna is the current chairman and
CEO of IBM.

IBM has been well known through most of its recent history as one of the world's
largest computer companies and systems integrators. With roughly 280,000 employees
worldwide (2023), IBM is one of the largest and most profitable information technology
employers in the world. For decades IBM held more patents than any other U.S.-based
technology company, and it operates research laboratories worldwide. The company has
scientists, engineers, consultants, and sales professionals in over 170 countries. IBM
employees have earned five Nobel Prizes, six Turing Awards, ten National Medals of
Technology, and five National Medals of Science.

Mitchell D. Kapor: cofounder of Lotus Corporation; wrote the business software program
Lotus 1-2-3, one of the most successful pieces of software for personal computers.
Lotus 1-2-3 is an integrated program consisting of a spreadsheet, a database manager,
and a graphics package. Lotus 1-2-3 was later eclipsed by Microsoft's own spreadsheet,
Microsoft Excel, and to date the Microsoft Office package includes the Excel
spreadsheet.

Tom Button: group product manager for applications programmability at Microsoft; headed
the team that developed QuickBasic, QBasic, and Visual Basic. These modern, easy-to-use
languages have greatly increased the productivity of programmers.

Alan Cooper: director of applications software for Coactive Computing Corporation; is
considered the father of Visual Basic. In 1987 he wrote a program called "Ruby" that
delivered visual programming to the average user. A few years later, Ruby was combined
with QuickBasic to produce Visual Basic, the remarkably successful language that allows
Windows programs to be written from within Windows easily and efficiently.

Tim Berners-Lee: British computer scientist; proposed the World Wide Web project in
1989 while working in Switzerland. His brainchild has grown into a global phenomenon
that rides on the Internet and has, in turn, given rise to cloud computing.

Philip Emeagwali (born in 1954) is a Nigerian-born engineer and computer
scientist/geologist who was one of two winners of the 1989 Gordon Bell Prize, a prize
from the IEEE, for his use of a Connection Machine supercomputer to help analyze
petroleum fields.

Emeagwali was born in Akure, Nigeria on 23 August 1954. His early schooling was
suspended in 1967 due to the Nigerian-Biafran war. When he turned fourteen, he served
in the Biafran army. After the war he completed a high-school equivalency through
self-study. He travelled to the United States to study under a scholarship after taking
a correspondence course at the University of London. He received a bachelor's degree in
mathematics from Oregon State University in 1977. During this time, he worked as a
civil engineer at the Bureau of Land Reclamation in Wyoming. He later moved to
Washington DC, receiving in 1986 a master's degree from George Washington University in
ocean and marine engineering, and a second master's in applied mathematics from the
University of Maryland.

Emeagwali received the $1,000 1989 Gordon Bell Prize, based on an application of the
CM-2 massively parallel computer to computational fluid dynamics (oil-reservoir
modeling). He won in the "price/performance" category, with a performance figure of
400 Mflops/$1M, corresponding to an absolute performance of 3.1 Gflops (dividing
3.1 Gflops by 400 Mflops per million dollars implies a machine cost of roughly $7.75
million). The other recipient of the award, who won in the "peak performance" category
for a similar application of the CM-2 to oil-related seismic data processing, actually
had a price/performance figure of 500 Mflops/$1M (superior to what Emeagwali had
achieved) and an absolute performance of 6.0 Gflops, but the judges decided not to
award both prizes to the same team. Emeagwali's simulation was the first program to
apply a pseudo-time approach to reservoir modeling. His achievements were quoted in a
speech by Bill Clinton as an example of what Nigerians could achieve when given the
opportunity.

1990s

Mark Andreessen: while a graduate student at the University of Illinois, led a small
band of fellow students to develop Mosaic, a program that allowed the user to move
around the World Wide Web by clicking on words and symbols. Andreessen went on to
cofound Netscape Communications Corporation, whose Netscape Navigator was for years the
world's leading Web browser. Visual Basic can be used to build a simplified Web
browser.

2000s

Modern smartphones are fully programmable computers in their own right, and as of
2009 may well be the most common form of such computers in existence.

 Microsoft Windows 2000 was released on February 17, 2000.

 Microsoft releases Windows ME on June 19, 2000.

 Microsoft introduces C# to the public in June 2000.

 Microsoft releases Internet Explorer 6.0 on August 27, 2001.

 USB 2.0 is introduced in 2001.

 SATA 1.0 is introduced in August 2001.

 Microsoft Windows XP Home and Professional editions are released on October 25, 2001.

 Microsoft Windows XP 64-Bit Edition (Version 2002) for Itanium systems is released in
2001.

 PCI Express is approved as a standard in 2002.

 Intel Pentium M is introduced in March 2003.

 Intel announces the new BTX form factor in 2003.

 Google acquires Picasa in 2004.

 The first version of Ubuntu is released on October 20, 2004.

 Firefox 1.0 is first introduced on November 9, 2004.

 Microsoft Windows XP Professional x64 Edition is released on April 24, 2005.

 Blu-ray is first introduced at CES on January 4, 2006.

 Microsoft Internet Explorer 7 is introduced on October 18, 2006.

 Microsoft releases Windows Vista to corporations on November 30, 2006.

 AMD releases the DTX motherboard form factor in January 2007.

 DDR3 is introduced in 2007.

 Google releases Android on November 5, 2007.

 T-Mobile's G1 phone (HTC Dream) is the first phone released with Google Android, on
September 23, 2008.

 The first Intel Core i7 is released to the public in November 2008.

 Microsoft Internet Explorer 8 is introduced on March 19, 2009.

 The analog TV signal begins to be phased out as broadcasts move to high definition on
June 12, 2009.

 Microsoft releases Windows 7 on October 22, 2009.

 USB 3.0 devices begin to appear in November 2009.

 Intel releases the AHCI specification in October 2010.

 Microsoft releases Internet Explorer 9 on March 14, 2011.

 Microsoft introduces Office 365 on June 28, 2011.

 Google and several other companies migrate to IPv6 on June 6, 2012.

 Microsoft Windows 8 and the Microsoft Surface are released on October 26, 2012.

1.9 Generation of Computers

A generation refers to the state of improvement in the development of a product. The
term is also used for the different advancements of computer technology. With each new
generation, the circuitry has become smaller and more advanced than in the previous
generation. As a result of this miniaturization, the speed, power, and memory of
computers have increased proportionally. New discoveries are constantly being made that
affect the way we live, work and play.
1.9.1 The First Generation: 1946-1958 (The Vacuum Tube Years)

The first generation computers were huge, slow, expensive, and often undependable. In
1946 two Americans, Presper Eckert and John Mauchly, built the ENIAC electronic
computer, which used vacuum tubes instead of the mechanical switches of the Mark 1. The
ENIAC used thousands of vacuum tubes, which took up a lot of space and gave off a great
deal of heat, just as light bulbs do. The ENIAC led to other vacuum tube computers like
the EDVAC (Electronic Discrete Variable Automatic Computer) and the UNIVAC 1
(UNIVersal Automatic Computer).

The vacuum tube was an extremely important step in the advancement of computers. The
vacuum tube was closely related to the light bulb invented by Thomas Edison and worked
in a very similar way. Its purpose was to act as an amplifier and a switch. Without any
moving parts, vacuum tubes could take very weak signals and make them stronger (amplify
them). Vacuum tubes could also stop and start the flow of electricity instantly
(switch). These two properties made the ENIAC computer possible.

The ENIAC gave off so much heat that it had to be cooled by gigantic air conditioners.
Yet even with these huge coolers, vacuum tubes still overheated regularly. It was time
for something new.

1.9.2 The Second Generation: 1959-1964 (The Era of the Transistor)

The transistor computer did not last as long as the vacuum tube computer, but it was no
less important in the advancement of computer technology. In 1947 three scientists,
John Bardeen, William Shockley, and Walter Brattain, working at AT&T's Bell Labs,
invented what would replace the vacuum tube forever. This invention was the transistor,
which functions like a vacuum tube in that it can be used to relay and switch
electronic signals.

There were obvious differences between the transistor and the vacuum tube. The
transistor was faster, more reliable, smaller, and much cheaper to build than a vacuum
tube. One transistor replaced the equivalent of 40 vacuum tubes. These transistors were
made of solid materials, including silicon, an abundant element (second only to oxygen)
found in beach sand and glass; they were therefore very cheap to produce. Transistors
were found to conduct electricity faster and better than vacuum tubes. They were also
much smaller and gave off virtually no heat compared with vacuum tubes. Their use
marked a new beginning for the computer. Without this invention, space travel in the
1960s would not have been possible. However, a new invention would advance our ability
to use computers even further.

1.9.3 The Third Generation: 1965-1970 (Integrated Circuits – Miniaturizing the
Computer)

Transistors were a tremendous breakthrough in advancing the computer. However, no one
could predict that thousands, even now millions, of transistors (circuits) could be
compacted into such a small space. The integrated circuit, sometimes referred to as a
semiconductor chip, packs a huge number of transistors onto a single wafer of silicon.
Robert Noyce of Fairchild Semiconductor and Jack Kilby of Texas Instruments
independently discovered the amazing attributes of integrated circuits. Placing such
large numbers of transistors on a single chip vastly increased the power of a single
computer and lowered its cost considerably.

Since the invention of integrated circuits, the number of transistors that can be
placed on a single chip has doubled every two years, shrinking both the size and cost
of computers even further and further enhancing their power. Most electronic devices
today use some form of integrated circuits placed on printed circuit boards – thin
pieces of bakelite or fiberglass that have electrical connections etched onto them –
sometimes called a motherboard.
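
This doubling rule (commonly called Moore's law) compounds very quickly, as the short
Python sketch below shows. The starting figures are illustrative only, taken from the
roughly 2,300 transistors of the Intel 4004 of 1971:

# Projecting transistor counts under a doubling-every-two-years rule.
# Starting point is illustrative (Intel 4004, 1971: ~2,300 transistors).
start_year, count = 1971, 2300
for year in range(start_year, start_year + 41, 10):
    elapsed = year - start_year
    projected = count * 2 ** (elapsed / 2)
    print(f"{year}: ~{projected:,.0f} transistors")
# Prints ~2,300 (1971), ~73,600 (1981), ~2.4 million (1991), and so on.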

These third generation computers could carry out instructions in billionths of a second.
The size of these machines dropped to the size of small file cabinets. Yet, the single
biggest advancement in the computer era was yet to be discovered.
1.9.4 The Fourth Generation: 1971-Today (The Microprocessor)

This generation can be characterized by both the jump to monolithic integrated circuits
(millions of transistors put onto one integrated circuit chip) and the invention of the
microprocessor (a single chip that could do all the processing of a full-scale
computer). By putting millions of transistors onto one single chip, more calculations
and faster speeds could be achieved by computers. Because electricity travels about a
foot in a billionth of a second, the smaller the distance, the greater the speed of
computers.

However, what really triggered the tremendous growth of computers and their significant
impact on our lives was the invention of the microprocessor. Ted Hoff, employed by
Intel (Robert Noyce's new company), invented a chip the size of a pencil eraser that
could do all the computing and logic work of a computer. The microprocessor was made to
be used in calculators, not computers. It led, however, to the invention of personal
computers, or microcomputers.

It wasn’t until the 1970’s that people began buying computer for personal use. One of
the earliest personal computers was the Altair 8800 computer kit. In 1975 you could
purchase this kit and put it together to make your own personal computer. In 1977 the
Apple II was sold to the public and in 1981 IBM entered the PC (personal computer)
market.
Today we have all heard of Intel and its Pentium® Processors and now we know how it
all got started. There is no end in sight for the computer movement.

1.9.5 Fifth Generation Computers: (1984-1990)

In this period, computer technology achieved more superiority, and parallel processing,
which had until then been limited to vector processing and pipelining, matured to the
point where hundreds of processors could all work on various parts of a single program.
Systems like the Sequent Balance 8000, which connected up to twenty processors to one
shared memory module, were introduced.

This machine was as competent as the DEC VAX-780 in the sense that it had a general
purpose UNIX system and each processor worked on a different user's job. On the other
hand, the INTEL iPSC-1, or Hypercube as it was called, connected each processor to its
own memory and used a network interface to connect the processors. With the concept of
the distributed network coming in, memory posed no further problem, and the largest
iPSC-1 was built with 128 processors. Towards the end of the fifth generation, another
form of parallel processing, called data-parallel or SIMD, was introduced; in this
scheme, all the processors operate under the instruction of a single control unit.

In this generation, semiconductor memories became the standard and were pursued
vigorously. Other developments were the increasing use of single-user workstations and
the widespread use of computer networks. Both wide area networks (WAN) and local area
networks (LAN) developed at an incredible pace and led to a distributed computing
environment. RISC technology (a particular technique for the internal organization of
the CPU) and the plunging cost of RAM ushered in huge gains in the computational power
of comparatively cheap servers and workstations. This generation also witnessed a sharp
increase in both the quantitative and qualitative aspects of scientific visualization.

1.9.6 Sixth Generation of Computers (1990 till date)

Of all the changes that have taken place in the field of computer technology, some are
abrupt whereas others are gradual. In the current period, the transition from one
generation to another is clear only in retrospect, because most changes are gradual
advancements of an already established system. This present generation of computer
technology is closely tied to parallel computing, and several growth areas have been
noticed in this field, both in hardware and in the better understanding of how to
develop algorithms that make full use of massively parallel architectures.

Though vector systems are still in use, it is often speculated that the future will be
dominated by parallel systems. However, several devices combine parallel and vector
architectures. Fujitsu Corporation planned to build a system with more than 200 vector
processors. Another goal of this sixth generation is to attain teraflops performance,
i.e. one trillion (10^12) arithmetic operations per second, which can be achieved by
building a system with more than a thousand processors. Currently, processors are
constructed with a combination of RISC, pipelining and parallel processing.

Networking technology is spreading rapidly, and one of the most conspicuous growths of
sixth generation computer technology is the huge growth of WANs. For regional networks,
T1 is the standard, and the national "backbone" uses T3 to interconnect the regional
networks. Finally, the rapid advancement and high level of awareness regarding computer
technology owe much to two pieces of legislation. Like the Lax Report of 1982, the High
Performance Computing Act of 1991 and the Information Infrastructure and Technology Act
of 1992 have strengthened and ensured the scope of high performance computing. The
former ensured the establishment of the High Performance Computing and Communications
Program (HPCCP), and the latter reinforced the necessity of making leading-edge
technologies available to academicians from kindergarten up to graduate level.

Examples of operating system software: UNICOS and COS, which run on Cray
supercomputers; MVS and VM, which run on IBM mainframes; WINDOWS, UNIX, XENIX and
LINUX, which run on personal computers; the Macintosh operating system, which runs on
Apple Macintosh computers; PenRight on pen computers; and Smart Boards for
classroom/lecture session applications.

BOOTING: Booting is the process of loading an operating system into the computer's main
memory. To the layman, it is the process of turning on the computer system and making
it ready to accept instructions from the user. The booting of the computer system
involves:

 Turning on the computer by the user.

 The loading of the diagnostic routine from the Read Only Memory (ROM), which tests
the main memory, the central processing unit, and the presence or absence of keyboards,
mice, printers, plotters, scanners and, in general, the peripherals connected to the
computer system, to make sure they will work properly.

 Next is the copying into memory of the routines of the Basic Input Output System
(BIOS), which enable the computer system to interpret characters from the keyboard,
mouse, display screen, and disk readers.

 The last stage is the loading of the boot program, which fetches the operating system
(usually from the hard disk, ROM, or a flash disk) and loads it into the computer's
main memory, where it remains until the computer is turned off.

Note that programs placed in the startup folders of the computer system are executed
after the loading of the operating system, and many of them (e.g. antivirus programs)
may noticeably slow down the booting process. This bottleneck has been eased in
multitasking operating systems, where more than one task can be executed at any point
in time. Note also the two types of booting: a warm boot restarts a computer that is
already powered on, while a cold boot starts it from a powered-off state. A simplified
sketch of the staged sequence above is given below.
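
The staged sequence can be mirrored in a few lines of Python (a purely illustrative
sketch; the stage names follow the list above and do not correspond to any real
firmware interface):

# A simplified, illustrative model of the boot sequence described above.
def power_on_self_test():
    print("POST: testing main memory, CPU and attached peripherals...")

def load_bios_routines():
    print("BIOS: copying input/output routines into memory...")

def load_operating_system():
    print("Boot loader: fetching the OS from disk into main memory...")

def run_startup_programs():
    print("Startup folder: launching antivirus and other programs...")

def boot():
    # The stages run strictly in order, mirroring the list above.
    for stage in (power_on_self_test, load_bios_routines,
                  load_operating_system, run_startup_programs):
        stage()

boot()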

1.10 STUDY QUESTIONS FOR CHAPTER I

Q1.1: In each of the four generations, what was the cause of the increase in speed,
power, or memory?
Q1.2: Why did the ENIAC and other computers like it give off so much heat? (Be very
specific).

Q1.3: What characteristics made the transistors better than the vacuum tube?

Q1.4: How was space travel made possible through the invention of transistors?

Q1.5: What did the microprocessor allow computers to do, and what was the
microprocessor's original purpose?

Q1.6: When was the first computer offered to the public and what was its name?

Q1.7: What were Robert Noyce and Jack Kilby known for?

Q1.8: Who started Intel?

Q1.9: What are monolithic integrated circuits?

Q1.10: How do you think society will be different if scientists are able to create a chip
that will perform a trillion operations in a single second?
