Operating Systems Notes FINAL - Unit2
CONTENTS
Objectives
Introduction
2.1 Operations and Functions of OS
2.2 Types of Operating System
2.3 Operating System: Examples
2.3.1 Disk Operating System (DOS)
2.3.2 UNIX
2.3.3 Windows
2.3.4 Macintosh
2.4 Summary
2.5 Keywords
2.6 Self Assessment
2.7 Review Questions
2.8 Further Readings
Objectives
After studying this unit, you will be able to:
Describe the operations and functions of an operating system
Explain the various types of operating systems
Introduction
The primary objective of an operating system is to increase the productivity of a processing resource,
such as the computer hardware or the computer-system users. In early systems, hardware was costly,
so maximising its utilisation came first; user convenience and productivity were secondary
considerations. At the other end of the spectrum, an OS may be designed for a personal computer
costing a few thousand dollars and serving a single user whose salary is high. In this case, it is
the user whose productivity is to be increased as much as possible, with hardware utilisation being
of much less concern. In single-user systems, the emphasis is on making the computer system easier
to use by providing a graphical and, ideally, more intuitive user interface.

2.1 Operations and Functions of OS
Process Management
The CPU executes a large number of programs. While its main concern is the execution of user
programs, the CPU is also needed for other system activities. These activities are called processes.
A process is a program in execution. Typically, a batch job is a process. A time-shared user
program is a process. A system task, such as spooling, is also a process. For now, a process may
be considered as a job or a time-shared program, but the concept is actually more general.
The operating system is responsible for the following activities in connection with process
management:
1. The creation and deletion of both user and system processes
2. The suspension and resumption of processes.
3. The provision of mechanisms for process synchronization
4. The provision of mechanisms for deadlock handling.
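On UNIX-like systems these activities map onto a handful of system calls. The minimal C sketch below is an illustration only (it assumes a POSIX environment; /bin/date is an arbitrary example program, and timing races between parent and child are ignored):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* creation of a new user process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                  /* child: replace its image with a program */
        execl("/bin/date", "date", (char *)NULL);
        perror("execl");             /* reached only if exec fails */
        _exit(1);
    }

    kill(pid, SIGSTOP);              /* suspension of the child process */
    kill(pid, SIGCONT);              /* resumption of the child process */

    int status;
    waitpid(pid, &status, 0);        /* "deletion": reap the terminated child */
    printf("child %d finished\n", (int)pid);
    return 0;
}
```

Here fork duplicates the calling process, exec replaces the duplicate's image with a new program, and waitpid lets the parent collect the child's exit status.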
Memory Management
Memory is one of the most expensive resources in a computer system. Memory is a large array of
words or bytes, each with its own address. Interaction is achieved through a sequence of reads or
writes to specific memory addresses. The CPU fetches from and stores in memory.
There are various algorithms that depend on the particular situation to manage the memory.
Selection of a memory management scheme for a specific system depends upon many factors, but
especially upon the hardware design of the system. Each algorithm requires its own hardware
support.
The operating system is responsible for the following activities in connection with memory
management:
1. Keep track of which parts of memory are currently being used and by whom.
2. Decide which processes are to be loaded into memory when memory space becomes
available.
3. Allocate and deallocate memory space as needed.
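As a toy illustration of activities 1 and 3 (a sketch only, not how any real kernel does it; the block count and the owner convention are made up), the following C fragment manages a small memory pool with a first-fit policy, keeping track of which parts are in use and by whom:

```c
#include <stdio.h>

#define BLOCKS 16              /* toy memory pool of 16 fixed-size blocks */
static int owner[BLOCKS];      /* 0 = free, otherwise the id of the owner */

/* first-fit: find n contiguous free blocks and mark them owned by pid */
int mem_alloc(int pid, int n) {
    for (int i = 0; i + n <= BLOCKS; i++) {
        int j = 0;
        while (j < n && owner[i + j] == 0) j++;
        if (j == n) {                          /* found a free run */
            for (j = 0; j < n; j++) owner[i + j] = pid;
            return i;                          /* "address" of the allocation */
        }
    }
    return -1;                                 /* no space available */
}

/* deallocate everything owned by pid, e.g. when the process terminates */
void mem_free(int pid) {
    for (int i = 0; i < BLOCKS; i++)
        if (owner[i] == pid) owner[i] = 0;
}

int main(void) {
    int a = mem_alloc(1, 4);
    int b = mem_alloc(2, 3);
    printf("p1 at %d, p2 at %d\n", a, b);
    mem_free(1);                               /* process 1 terminates */
    printf("p3 at %d\n", mem_alloc(3, 2));     /* reuses the freed space */
    return 0;
}
```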
Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with
the data they access, must be in main memory during execution. Since main memory is too small
to permanently accommodate all data and programs, the computer system must provide secondary
storage to back up main memory. Most modern computer systems use disks as the primary on-line
storage of information, of both programs and data. Most programs, like compilers, assemblers,
sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory,
and then use the disk as both the source and destination of their processing. Hence the proper
management of disk storage is of central importance to a computer system.
Notes: There are few alternatives. Magnetic tape systems are generally too slow; in addition, they
are limited to sequential access. Thus tapes are more suited for storing infrequently used files,
where speed is not a primary concern.
The operating system is responsible for the following activities in connection with disk
management:
1. Free space management
2. Storage allocation
3. Disk scheduling.
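Free-space management is often done with a bitmap, one bit per disk block. A minimal sketch (the block count is arbitrary and no real file system layout is implied):

```c
#include <stdint.h>
#include <stdio.h>

#define DISK_BLOCKS 64
static uint8_t bitmap[DISK_BLOCKS / 8];  /* one bit per block: 1 = allocated */

static int  block_used(int b) { return bitmap[b / 8] & (1u << (b % 8)); }
static void block_set (int b) { bitmap[b / 8] |= (uint8_t)(1u << (b % 8)); }
static void block_clr (int b) { bitmap[b / 8] &= (uint8_t)~(1u << (b % 8)); }

/* storage allocation: grab the first free block, or -1 if the disk is full */
int alloc_block(void) {
    for (int b = 0; b < DISK_BLOCKS; b++)
        if (!block_used(b)) { block_set(b); return b; }
    return -1;
}

int main(void) {
    int b1 = alloc_block(), b2 = alloc_block();
    block_clr(b1);                           /* free-space management */
    printf("allocated %d and %d, then freed %d\n", b1, b2, b1);
    return 0;
}
```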
I/O Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware
devices from the user. For example, in UNIX, the peculiarities of I/O devices are hidden from the
bulk of the operating system itself by the I/O system. The operating system is responsible for the
following activities in connection with I/O management:
1. A buffer caching system
2. To activate a general device driver code
3. To run the driver software for specific hardware devices as and when required.
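A common way for an operating system to "activate general device driver code" is through a table of function pointers, so that the device-independent layer calls every device the same way. A hypothetical sketch (the structure and names here are invented for illustration, not taken from any real kernel):

```c
#include <stdio.h>

/* generic driver interface: every device supplies the same operations */
struct device_ops {
    const char *name;
    int (*read) (char *buf, int len);
    int (*write)(const char *buf, int len);
};

/* a specific "driver" that happens to write to the terminal */
static int console_write(const char *buf, int len) {
    return (int)fwrite(buf, 1, (size_t)len, stdout);
}
static int console_read(char *buf, int len) {
    (void)buf; (void)len;
    return 0;                        /* stub: this toy console has no input */
}

static struct device_ops console = { "console", console_read, console_write };

/* device-independent layer: callers never see the specific hardware */
int dev_write(struct device_ops *dev, const char *buf, int len) {
    return dev->write(buf, len);
}

int main(void) {
    dev_write(&console, "hello from the I/O layer\n", 25);
    return 0;
}
```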
File Management
File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms: magnetic tape, disk, and drum are the most
common forms. Each of these devices has its own characteristics and physical organisation.
For convenient use of the computer system, the operating system provides a uniform logical view
of information storage. The operating system abstracts from the physical properties of its storage
devices to define a logical storage unit, the file. Files are mapped, by the operating system, onto
physical devices.
A file is a collection of related information defined by its creator. Commonly, files represent
programs (both source and object forms) and data. Data files may be numeric, alphabetic or
alphanumeric. Files may be free-form, such as text files, or may be rigidly formatted. In general,
a file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and
user. It is a very general concept.
The operating system implements the abstract concept of the file by managing mass storage
devices, such as tapes and disks. Files are also normally organised into directories to ease their
use. Finally, when multiple users have access to files, it may be desirable to control by whom and
in what ways files may be accessed.
The operating system is responsible for the following activities in connection with file
management:
1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto disk storage.
5. Backup of files on stable (non volatile) storage.
6. Protection and security of the files.
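On a POSIX system, several of these activities surface directly as system calls. A minimal sketch (the path names are examples only):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    mkdir("demo", 0755);                     /* creation of a directory */

    int fd = open("demo/data.txt",           /* creation of a file */
                  O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);                 /* a primitive that manipulates it */
    close(fd);

    unlink("demo/data.txt");                 /* deletion of the file */
    rmdir("demo");                           /* deletion of the directory */
    return 0;
}
```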
Protection
The various processes in an operating system must be protected from each other’s activities. For
that purpose, the operating system provides various mechanisms to ensure that the files, memory
segments, CPU and other resources can be operated on only by those processes that have gained
proper authorisation from the operating system.
Example: Memory addressing hardware ensures that a process can only execute within
its own address space. The timer ensures that no process can gain control of the CPU without
eventually relinquishing it. Finally, no process is allowed to do its own I/O, to protect the
integrity of the various peripheral devices. Protection refers to a mechanism for controlling the
access of programs, processes, or users to the resources defined by a computer system. This
mechanism must provide a means for specifying the controls to be imposed, together with some
means of enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can often prevent contamination of a healthy
subsystem by a subsystem that is malfunctioning. An unprotected resource cannot defend
against use (or misuse) by an unauthorised or incompetent user.
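On UNIX-like systems one visible piece of this machinery is the per-file permission check. A small sketch (assuming POSIX; /etc/passwd is just a convenient example of a file most users may read but not write):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *path = "/etc/passwd";

    /* ask the OS whether *this* process is authorised for each operation */
    printf("read:  %s\n", access(path, R_OK) == 0 ? "allowed" : "denied");
    printf("write: %s\n", access(path, W_OK) == 0 ? "allowed" : "denied");
    return 0;
}
```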
Networking
A distributed system is a collection of processors that do not share memory or a clock. Instead, each
processor has its own local memory, and the processors communicate with each other through
various communication lines, such as high speed buses or telephone lines. Distributed systems
vary in size and function. They may involve microprocessors, workstations, minicomputers, and
large general purpose computer systems.
The processors in the system are connected through a communication network, which can be
configured in a number of different ways. The network may be fully or partially connected.
The communication network design must consider routing and connection strategies and the
problems of connection and security.
A distributed system provides the user with access to the various resources the system maintains.
Access to a shared resource allows computation speed-up, data availability, and reliability.
Command Interpretation
One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
Many commands are given to the operating system by control statements. When a new job is
started in a batch system or when a user logs-in to a time-shared system, a program which reads
and interprets control statements is automatically executed. This program is variously called
(1) the control card interpreter, (2) the command line interpreter, (3) the shell (in Unix), and so on.
Its function is quite simple: get the next command statement, and execute it.
The command statements themselves deal with process management, I/O handling, secondary
storage management, main memory management, file system access, protection, and
networking.
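The "get the next command and execute it" loop is small enough to sketch directly. A toy command interpreter in C, assuming POSIX fork/exec (no arguments, pipes, or built-in commands are handled):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("> ");                            /* prompt */
        if (!fgets(line, sizeof line, stdin)) break;
        line[strcspn(line, "\n")] = '\0';        /* strip the newline */
        if (line[0] == '\0') continue;

        pid_t pid = fork();
        if (pid == 0) {                          /* child runs the command */
            execlp(line, line, (char *)NULL);
            perror(line);                        /* exec failed */
            _exit(1);
        }
        if (pid > 0) wait(NULL);                 /* shell waits, then loops */
    }
    return 0;
}
```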
Figure 2.1 depicts the role of the operating system in coordinating all of these functions.

Figure 2.1: The operating system at the centre of its functions: process management, memory
management, secondary storage management, I/O management, file management, protection and
security, communication management, networking and the user interface.
2.2 Types of Operating System

Batch Processing

In a batch processing operating system environment, users submit jobs to a central place where
these jobs are collected into a batch, and subsequently placed on an input queue at the computer
where they will be run. In this case, the user has no interaction with the job during its processing,
and the computer’s response time is the turnaround time: the time from submission of the job
until execution is complete and the results are ready for return to the person who submitted the
job.
Time Sharing
Another mode for delivering computing services is provided by time sharing operating
systems. In this environment a computer provides computing services to several or many users
concurrently on-line. Here, the various users are sharing the central processor, the memory, and
other resources of the computer system in a manner facilitated, controlled, and monitored by the
operating system. The user, in this environment, has nearly full interaction with the program
during its execution, and the computer’s response time may be expected to be no more than a
few seconds.
Real Time Systems

The third class is the real time operating systems, which are designed to service those applications
where response time is of the essence, in order to prevent error, misrepresentation or even disaster.
Examples of real time operating systems are those which handle airlines reservations, machine
tool control, and monitoring of a nuclear power station. The systems, in this case, are designed to
be interrupted by external signals that require the immediate attention of the computer system.
These real time operating systems are used to control machinery, scientific instruments and
industrial systems. An RTOS typically has very little user-interface capability, and no end-user
utilities. A very important part of an RTOS is managing the resources of the computer so that
a particular operation executes in precisely the same amount of time every time it occurs. In a
complex machine, having a part move more quickly just because system resources are available
may be just as catastrophic as having it not move at all because the system is busy.
A number of other definitions are important to gain an understanding of operating systems:
Multiprogramming

A multiprogramming operating system is a system that allows more than one active user program
(or part of user program) to be stored in main memory simultaneously. Thus, it is evident that a
time-sharing system is a multiprogramming system, but note that a multiprogramming system
is not necessarily a time-sharing system. A batch or real time operating system could, and indeed
usually does, have more than one active user program simultaneously in main storage. Another
important, and all too similar, term is “multiprocessing”.
Figure 2.2: Memory layout in a multiprogramming environment: the monitor (the resident
operating system) occupies one region of primary memory, while user programs 1 through N share
the remainder.
Buffering and spooling improve system performance by overlapping the input, output and
computation of a single job, but both of them have their limitations. A single user cannot always
keep the CPU or I/O devices busy at all times. Multiprogramming offers a more efficient approach
to increasing system performance. In order to increase resource utilisation, systems supporting
the multiprogramming approach allow more than one job (program) to reside in memory and
compete for CPU time at any moment; the more programs competing for system resources, the
better the resource utilisation.
The idea is implemented as follows. The main memory of a system contains more than one
program (Figure 2.2).
The operating system picks one of the programs and starts executing it. Suppose that during its
execution, program 1 needs some I/O operation to complete. In a sequential execution environment
(Figure 2.3a), the CPU would then sit idle; in a multiprogramming system (Figure 2.3b), the
operating system simply switches over to the next program (program 2).
Figure 2.3: (a) Sequential execution: the CPU runs program 1 to completion, idling during its
I/O waits, before turning to program 2. (b) Multiprogrammed execution: the CPU switches to
program 2 whenever program 1 waits for I/O.
When that program needs to wait for some I/O operation, it switches over to program 3, and so
on. If there is no other new program left in the main memory, the CPU will pass its control back
to the previous programs.
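The gain from multiprogramming can be estimated with a classic back-of-the-envelope model (a standard approximation, not taken from the text above): if each program spends a fraction p of its time waiting for I/O, and the waits are assumed independent, the CPU is idle only when all n resident programs wait at once, so utilisation is roughly 1 - p^n:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.8;                /* each program waits on I/O 80% of the time */
    for (int n = 1; n <= 5; n++)   /* n = degree of multiprogramming */
        printf("%d program(s) in memory -> CPU utilisation about %.0f%%\n",
               n, 100.0 * (1.0 - pow(p, n)));
    return 0;
}
```

With these illustrative numbers, going from one resident program to three raises utilisation from about 20% to about 49%.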
Multiprogramming has traditionally been employed to increase the resource utilisation of a
computer system and to support multiple interactive users (terminals) simultaneously.
Multiprocessing System
A multiprocessing system is a computer hardware configuration that includes more than one
independent processing unit. The term multiprocessing is generally used to refer to large
computer hardware complexes found in major scientific or commercial applications.
A multiprocessor system is simply a computer that has more than one CPU. If the operating
system is built to take advantage of this, it can run different processes (or different threads
belonging to the same process) on different CPUs.
Today’s operating systems strive to make the most efficient use of a computer’s resources.
Most of this efficiency is gained by sharing the machine’s resources among several tasks
(multi-processing). Such “large-grain” resource sharing is enabled by operating systems without
any additional information from the applications or processes. All these processes can potentially
execute concurrently, with the CPU (or CPUs) multiplexed among them. Newer operating
systems provide mechanisms that enable applications to control and share machine resources at
a finer grain, that is, at the thread level. Just as multiprocessing operating systems can perform
more than one task concurrently by running more than a single process, a process can perform
more than one task by running more than a single thread.
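A minimal POSIX threads sketch of "more than one task per process"; on a multiprocessor, the two threads may genuinely run on different CPUs at the same time (compile with -pthread):

```c
#include <stdio.h>
#include <pthread.h>

static void *task(void *arg) {
    printf("thread %s running\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "A");   /* two tasks in one process */
    pthread_create(&t2, NULL, task, "B");
    pthread_join(t1, NULL);                 /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}
```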
Figure: Examples of multiprocessor hardware: on the PC platform, multiprocessor chipsets (from
Intel, AMD, VIA and ServerWorks) for Intel IA-32/IA-64 and AMD Athlon processors with DDR
or RAMBUS memory; multiprocessor switch fabrics (Sun, Unisys) for the Sun UltraSPARC; and
interconnects such as the system bus, the PCI/PCI-X and InfiniBand peripheral buses, and AMD
HyperTransport.
Distributed Systems

A distributed computing system consists of a number of computers that are connected and
managed so that they automatically share the job processing load among the constituent
computers, or separate the job load, as appropriate, onto particularly configured processors. Such
a system requires an operating system which, in addition to the typical stand-alone functionality,
provides coordination of the operations and information flow among the component computers.
The networked and distributed computing environments and their respective operating systems
are designed with more complex functional capabilities. In a network operating system, the users
are aware of the existence of multiple computers, and can log in to remote machines and copy
files from one machine to another. Each machine runs its own local operating system and has its
own user (or users).
A distributed operating system, in contrast, is one that appears to its users as a traditional
uni-processor system, even though it is actually composed of multiple processors. In a true
distributed system, users should not be aware of where their programs are being run or where
their files are located; that should all be handled automatically and efficiently by the operating
system.
True distributed operating systems require more than just adding a little code to a uni-processor
operating system, because distributed and centralised systems differ in critical ways. Distributed
systems, for example, often allow programs to run on several processors at the same time, thus
requiring more complex processor scheduling algorithms in order to optimise the amount of
parallelism achieved.
Embedded Systems

As embedded systems (PDAs, cellphones, point-of-sale devices, VCRs, industrial robot controllers,
or even your toaster) become more complex hardware-wise with every generation, and more
features are put into them day by day, the applications they run increasingly require real
operating system code underneath in order to keep development time reasonable. Some of the
popular embedded operating systems are:
1. Nexus’s Conix: an embedded operating system for ARM processors.
2. Sun’s Java OS: a standalone virtual machine not running on top of any other OS; mainly
targeted at embedded systems.
3. Palm Computing’s Palm OS: Currently the leader OS for PDAs, has many applications and
supporting companies.
4. Microsoft’s Windows CE and Windows NT Embedded OS.
Bare Machine

In theory, every computer system may be programmed in its machine language, with no systems
software support. Programming of the “bare-machines” was customary for early computer
systems. A slightly more advanced version of this mode of operating is common for the simple
evaluation boards that are sometimes used in introductory microprocessor design and interfacing
courses.
Programs for the bare machine can be developed by manually translating sequences of instructions
into binary or some other code whose base is usually an integer power of 2. Instructions and data
are then fed into the computer by means of console switches, or perhaps through a hexadecimal
keyboard. Programs are started by loading the program counter with the address of the first
instruction. Results of execution are obtained by examining the contents of the relevant registers
and memory locations. Input/Output devices, if any, must be controlled by the executing
program directly, say, by reading and writing the related I/O ports. Evidently, programming of
the bare machine results in low productivity of both users and hardware. The long and tedious
process of program and data entry practically precludes execution of all but very short programs
in such an environment.
The next significant evolutionary step in computer system usage came about with the advent
of input/output devices, such as punched cards and paper tape, and of language translators.
Programs, now coded in a programming language, are translated into executable form by
a computer program, such as a compiler or an interpreter. Another program, called the loader,
automates the process of loading executable programs into memory. The user places a program
and its input data on an input device, and the loader transfers information from that input device
into memory. After transferring control to the loaded program by manual or automatic means,
execution of the program commences. The executing program reads its input from the designated
input device and may produce some output on an output device, such as a printer or display
screen. Once in memory, the program may be rerun with a different set of input data.
The mechanics of development and preparation of programs in such environments are quite slow
and cumbersome due to serial execution of programs and numerous manual operations involved
in the process. In a typical sequence, the editor program is loaded to prepare the source code of
the user program. The next step is to load and execute the language translator and to provide
it with the source code of the user program. When serial input devices, such as card readers,
are used, multiple-pass language translators may require the source code to be repositioned
for reading during each pass. If syntax errors are detected, the whole process must be repeated
from the beginning. Eventually, the object code produced from the syntactically correct source
code is loaded and executed. If run-time errors are detected, the state of the machine can be
examined and modified by means of console switches, or with the assistance of a program called
a debugger. The mode of operation described here was initially used in the late fifties, but it was
also common in low-end microcomputers of the early eighties, with cassettes as I/O devices.
In addition to language translators, system software includes the loader and possibly editor and
debugger programs. Most of them use input/output devices and thus must contain some code
to exercise those devices. Since many user programs also use input/output devices, the logical
refinement is to provide a collection of standard I/O routines for the use of all programs.
In the described system, I/O routines and the loader program represent a rudimentary form
of an operating system. Although quite crude, it still provides an environment for execution of
programs far beyond what is available on the bare machine. Language translators, editors, and
debuggers are system programs that rely on the services of, but are not generally regarded as
part of, the operating system.
Although a definite improvement over the bare machine approach, this mode of operation is
obviously not very efficient. Running of the computer system may require frequent manual loading
of programs and data. This results in low utilization of system resources. User productivity,
especially in multiuser environments, is low as users await their turn at the machine. Even with
such tools as editors and debuggers, program development is very slow and is ridden with
manual program and data loading.
Parallel Operating Systems

Parallel operating systems are primarily concerned with managing the resources of parallel
machines. This task faces many challenges: application programmers demand all the performance
possible, many hardware configurations exist and change very rapidly, yet the operating system
must increasingly be compatible with the mainstream versions used in personal computers and
workstations, due both to user pressure and to the limited resources available for developing
new versions of these systems. There are several components in an operating system that can be
parallelized. Most operating systems do not approach all of them and do not support parallel
applications directly. Rather, parallelism is frequently exploited by some additional software
layer such as a distributed file system, distributed shared memory support or libraries and
services that support particular parallel programming languages while the operating system
manages concurrent task execution.
The convergence in parallel computer architectures has been accompanied by a reduction in the
diversity of operating systems running on them. The current situation is that most commercially
available machines run a flavour of the UNIX OS (Digital UNIX, IBM AIX, HP UX, Sun Solaris,
Linux).
Others run a UNIX based microkernel with reduced functionality to optimize the use of the CPU,
such as Cray Research’s UNICOS. Finally, a number of shared memory MIMD machines run
Microsoft Windows NT (soon to be superseded by the high end variant of Windows 2000).
There are a number of core aspects to the characterization of a parallel computer operating
system: general features such as the degrees of coordination, coupling and transparency; and
more particular aspects such as the type of process management, inter-process communication,
parallelism and synchronization and the programming model.
Multitasking
In computing, multitasking is a method where multiple tasks, also known as processes, share
common processing resources such as a CPU. In the case of a computer with a single CPU, only
one task is said to be running at any point in time, meaning that the CPU is actively executing
instructions for that task. Multitasking solves this problem by scheduling which task may be the
one running at any given time, and when another waiting task gets a turn. The act of reassigning
a CPU from one task to another one is called a context switch. When context switches occur
frequently enough the illusion of parallelism is achieved. Even on computers with more than
one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than
there are CPUs.
In the early days, computers were considered advanced card machines, and the jobs they performed
were of the kind: “find all females in this bunch of cards (or records)”. Utilisation was high,
since one delivered a job to the computing department, which prepared and executed the job on
the computer, delivering the final result back. Advances in electronic engineering then increased
processing power several times over, leaving input/output devices (card readers, line printers)
far behind. This meant that the CPU had to wait for the data it required to perform a given task.
Soon, engineers thought: “what if we could prepare, process and output data at the same time?”,
and multitasking was born. Now one could read data for the next job while executing the current
job and outputting the results of a previous job, thereby increasing the utilisation of the very
expensive computer.
Cheap terminals allowed the users themselves to input data to the computer and to execute jobs
(having the department do it often took days) and to see results immediately on the screen, which
introduced what were called interactive tasks. They required a console to be updated when a key
was pressed on the keyboard (again, a task with slow input). The same thing happens today, where
your computer actually does no work most of the time: it just waits for your input. Therefore,
multitasking, where several tasks run on the same computer, improves performance.
Multitasking is the process of letting the operating system perform multiple tasks at what seems
to the user to be the same time. In SMP (Symmetric Multi-Processor) systems this is literally the
case, since there are several CPUs to execute programs on; in systems with only a single CPU it is
done by switching execution very rapidly between each program, thus giving the impression of
simultaneous execution. This process is also known as task switching or timesharing. Practically
all modern operating systems have this ability.
Multitasking is, on single-processor machines, implemented by letting the running process
own the CPU for a while (a timeslice); when required, it is replaced with another process, which
then owns the CPU. The two most common methods for sharing the CPU time are cooperative
multitasking and preemptive multitasking.
Cooperative Multitasking: The simplest form of multitasking is cooperative multitasking. It lets
the programs decide when they wish to let other tasks run. This method is not ideal, since it lets
one process monopolise the CPU and never let other processes run; a program may also be
reluctant to give away processing power for fear of another process hogging all the CPU time.
Early versions of Mac OS (up to Mac OS 8) and versions of Windows earlier than Windows 95/
Windows NT used cooperative multitasking (Windows 95 when running old applications).
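The idea can be shown without any OS support at all. In the toy C sketch below (all names invented for illustration), each task does a small piece of work and then voluntarily yields by returning; a task that refused to return would monopolise the loop, which is exactly the weakness described above:

```c
#include <stdio.h>

typedef int (*task_fn)(void);   /* a task step; returns 0 when finished */

static int a_left = 3, b_left = 2;
static int task_a(void) { printf("task A step\n"); return --a_left > 0; }
static int task_b(void) { printf("task B step\n"); return --b_left > 0; }

int main(void) {
    task_fn tasks[] = { task_a, task_b };
    int alive[] = { 1, 1 }, running = 2;

    while (running > 0)                      /* round-robin over live tasks */
        for (int i = 0; i < 2; i++)
            if (alive[i] && !(alive[i] = tasks[i]()))
                running--;
    return 0;
}
```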
Preemptive Multitasking: Preemptive multitasking moves control of the CPU to the OS,
letting each process run for a given amount of time (a timeslice) and then switching to another
task. This method prevents one process from taking complete control of the system and thereby
making it seem as if it has crashed. It is the most common method today, implemented by, among
others, OS/2, Windows 95/98, Windows NT, Unix, Linux, BeOS, QNX, OS-9 and most mainframe
operating systems. The assignment of CPU time is taken care of by the scheduler.
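The mechanism behind the scheduler is a periodic timer interrupt. A user-space analogue using a POSIX interval timer (a sketch of the idea only; a real scheduler would switch tasks inside the handler rather than count ticks):

```c
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_timer(int sig) {   /* the "timer interrupt" */
    (void)sig;
    ticks++;                      /* a real OS would preempt the task here */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_timer;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval slice = { {0, 10000}, {0, 10000} };  /* 10 ms timeslice */
    setitimer(ITIMER_REAL, &slice, NULL);

    while (ticks < 5)             /* the running "task" never yields, */
        ;                         /* yet it is still interrupted */
    printf("preempted %d times\n", (int)ticks);
    return 0;
}
```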
2.3 Operating System: Examples

2.3.1 Disk Operating System (DOS)

DOS (Disk Operating System) was the first widely-installed operating system for personal
computers. It is a master control program that is automatically run when you start your personal
computer (PC). DOS stays in the computer all the time letting you run a program and manage
files. It is a single-user operating system from Microsoft for the PC. It was the first OS for the PC
and is the underlying control program for Windows 3.1, 95, 98 and ME. Windows NT, 2000 and
XP emulate DOS in order to support existing DOS applications.
2.3.2 UNIX
UNIX operating systems are used in widely-sold workstation products from Sun Microsystems,
Silicon Graphics, IBM, and a number of other companies. The UNIX environment and the
client/server program model were important elements in the development of the Internet and
the reshaping of computing as centered in networks rather than in individual computers. Linux,
a UNIX derivative available in both “free software” and commercial versions, is increasing in
popularity as an alternative to proprietary operating systems.
UNIX is written in C. Both UNIX and C were developed by AT&T and freely distributed to
government and academic institutions, causing it to be ported to a wider variety of machine
families than any other operating system. As a result, UNIX became synonymous with “open
systems”.
UNIX is made up of the kernel, file system and shell (command line interface). The major shells are
the Bourne shell (original), C shell and Korn shell. The UNIX vocabulary is exhaustive with more
than 600 commands that manipulate data and text in every way conceivable. Many commands
are cryptic, but just as Windows hid the DOS prompt, the Motif GUI presents a friendlier image
to UNIX users. Even with its many versions, UNIX is widely used in mission critical applications
for client/server and transaction processing systems. The UNIX versions that are widely used
are Sun’s Solaris, Digital’s UNIX, HP’s HP-UX, IBM’s AIX and SCO’s UnixWare. A large number
of IBM mainframes also run UNIX applications, because the UNIX interfaces were added to
MVS and OS/390, which have obtained UNIX branding. Linux, another variant of UNIX, is also
gaining enormous popularity.
2.3.3 Windows
Windows is a personal computer operating system from Microsoft that, together with some
commonly used business applications such as Microsoft Word and Excel, has become a de facto
“standard” for individual users in most corporations as well as in most homes. Windows contains
built-in networking, which allows users to share files and applications with each other if their
PCs are connected to a network. In large enterprises, Windows clients are often connected to a
network of UNIX and NetWare servers. The server versions of Windows NT and 2000 are gaining
market share, providing a Windows-only solution for both the client and server. Windows is
supported by Microsoft, the largest software company in the world, as well as the Windows
industry at large, which includes tens of thousands of software developers.
This networking support is the reason why Windows became successful in the first place.
However, Windows 95, 98, ME, NT, 2000 and XP are complicated operating environments.
Certain combinations of hardware and software running together can cause problems, and
troubleshooting can be daunting. Each new version of Windows has interface changes that
constantly confuse users and keep support people busy, and installing Windows applications
is problematic too. Microsoft has worked hard to make Windows 2000 and Windows XP more
resilient to installation problems and to crashes in general.
2.3.4 Macintosh
The Macintosh (often called “the Mac”), introduced in 1984 by Apple Computer, was the first
widely-sold personal computer with a Graphical User Interface (GUI). The Mac was designed
to provide users with a natural, intuitively understandable, and, in general, “user-friendly”
computer interface. This includes the mouse, the use of icons or small visual images to represent
objects or actions, the point-and-click and click-and-drag actions, and a number of window
operation ideas. Microsoft was successful in adapting user interface concepts first made popular
by the Mac in its first Windows operating system. The primary disadvantage of the Mac is that
there are fewer Mac applications on the market than for Windows. However, all the fundamental
applications are available, and the Macintosh is a perfectly useful machine for almost everybody.
Data compatibility between Windows and Mac is an issue, although it is often overblown and
readily solved.
The Macintosh has its own operating system, Mac OS which, in its latest version is called Mac OS
X. Originally built on Motorola’s 68000 series microprocessors, Mac versions today are powered
by the PowerPC microprocessor, which was developed jointly by Apple, Motorola, and IBM.
While Mac users represent only about 5% of the total numbers of personal computer users, Macs
are highly popular and almost a cultural necessity among graphic designers and online visual
artists and the companies they work for.
Task: DOS is a character-based operating system. What about the Windows operating
system?