Lecture 2: OS


LECTURE 2

Evolution of Operating Systems

Dr. Akshi Kumar


Computer System Architecture (traditional)
Systems Today

Hardware Complexity Increases

OS needs to keep pace with hardware
improvements

Evolution of an Operating System?

• Must adapt to hardware upgrades and new types of hardware. Examples:


• Character vs. graphic terminals
• Introduction of paging hardware
• Must offer new services, e.g., internet support.
• The need to change the OS on a regular basis places requirements on its
design:
• modular construction with clean interfaces.
• object-oriented methodology.
Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Evolution of Operating Systems
Early Systems

• Structure
• Single user system.
• Programmer/User as operator (Open Shop).
• Large machines run from console.
• Paper Tape or Punched cards.
Example of an early computer system
Characteristics of Early Systems

• Early software: Assemblers, Libraries of common subroutines (I/O,
Floating-point), Device Drivers, Compilers, Linkers.
• Needed a significant amount of setup time.
• Extremely slow I/O devices.
• Very low CPU utilization.
• But the computer was very secure.
Serial Processing
• Before the 1950s, there was no operating system; users submitted their
programs to the computer system directly.
• As a result, speed was low and more errors were generated (since serial
processing is done on a single machine).
• The developers or programmers had to provide the entire program as
sequential instructions on punched cards. These punched cards were first
fed into a card reader and then submitted to the machine for execution.
Serial Processing
• Due to this lengthy process of executing even a simple program, and the
human intervention involved, the overall execution time was very long and
inefficient.

• There were various other problems, such as no user interaction,
execution of only one process at a time, very little memory, no error
handling, etc.

• A red light was used to signal an error in program execution: if there
was any error, it was indicated by blinking red lights.
Drawbacks of Serial Operating System

The major drawbacks of serial processing were:

• No interaction between the user and the computer system.

• Very little memory.
• It required a lot of time for program execution.
• Only one program could be executed at a time.
• The user could not execute another program while one program was in
execution.
Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Simple Batch Systems

• Use of high-level languages, magnetic tapes.

• Jobs are batched together by type of languages.

• An operator was hired to perform the repetitive


tasks of loading jobs, starting the computer, and
collecting the output (Operator-driven Shop).

• It was not feasible for users to inspect memory or


patch programs directly.
Operator-driven Shop
Operation of Simple Batch Systems

• The user submits a job (written on cards or tape) to a computer operator.


• The computer operator places a batch of several jobs on an input device.
• A special program, the monitor, manages the execution of each program in
the batch.
• Monitor utilities are loaded when needed.
• “Resident monitor” is always in main memory and available for execution.
Idea of Simple Batch Systems

• Reduce setup time by batching similar jobs.


• Alternate execution between user program and the monitor program.
• Rely on available hardware to effectively alternate execution from various
parts of memory.
• Use Automatic Job Sequencing – automatically transfer control from one
job when it finishes to another one.
Control Cards (1)

• Problems:
– 1. How does the monitor know about the nature of the job (e.g.,
Fortran versus Assembly) or which program to execute?
– 2. How does the monitor distinguish:
(a) job from job?
(b) data from program?
• Solution: Introduce Job Control Language (JCL) and
control cards.
Control Cards (2)

• Special cards that tell the monitor which programs to run:


$JOB
$FTN
$RUN
$DATA
$END
• Special characters distinguish control cards from data or program
cards:
$ in column 1
// in column 1 and 2
709 in column 1
Job Control Language (JCL)

• JCL is the language that provides instructions to the monitor:
• what compiler to use
• what data to use
• $FTN loads the compiler and transfers control to it.
• $LOAD loads the object code (in place of the compiler).
• $RUN transfers control to the user program.
• Example of job format:
$JOB
$FTN
...
FORTRAN program
...
$LOAD
$RUN
...
Data
...
$END
Example card deck of a Job
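
To make the control-card idea concrete, here is a small C sketch (an illustration only, not any historical monitor; the card names follow the $-convention above) that reads card images from standard input and classifies them the way a monitor would, by the $ in column 1:

    /* Classify punched cards: '$' in column 1 marks a control card,
     * anything else is passed to the current phase as program or data. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char card[81];                      /* one 80-column card per line */
        while (fgets(card, sizeof card, stdin)) {
            card[strcspn(card, "\n")] = '\0';
            if (card[0] == '$') {           /* control card */
                if (strcmp(card, "$FTN") == 0)
                    printf("monitor: load the FORTRAN compiler\n");
                else if (strcmp(card, "$LOAD") == 0)
                    printf("monitor: load the object code\n");
                else if (strcmp(card, "$RUN") == 0)
                    printf("monitor: transfer control to the user program\n");
                else if (strcmp(card, "$END") == 0)
                    printf("monitor: end of job, fetch the next job\n");
                else
                    printf("monitor: control card %s\n", card);
            } else {
                printf("monitor: pass card to current phase: %s\n", card);
            }
        }
        return 0;
    }
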
Effects of Job Control Language (JCL)

• Each read instruction (in the user program) causes one line of input to be read.
• It causes the (OS) input routine to be invoked, which:
• checks that it is not reading a JCL line.
• skips to the next JCL line at completion of the user program.
Resident Monitor

• Resident Monitor is first rudimentary OS.


• Resident Monitor (Job Sequencer):
• initial control is in monitor.
• loads next program and transfers control to it.
• when job completes, the control transfers back to monitor.
• Automatically transfers control from one job to another, no idle time
between programs.
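
As a rough illustration of automatic job sequencing, the C sketch below models the batch as an array of functions: the monitor "loads" each job, transfers control to it, and regains control when the job returns, with no idle time in between. The jobs themselves are invented placeholders; a real monitor would load them from cards or tape into user memory.

    #include <stdio.h>

    typedef void (*job_t)(void);

    static void job1(void) { puts("job1: running user program"); }
    static void job2(void) { puts("job2: running user program"); }

    int main(void) {
        job_t batch[] = { job1, job2 };     /* the batch prepared by the operator */
        int njobs = sizeof batch / sizeof batch[0];

        for (int i = 0; i < njobs; i++) {   /* no idle time between jobs */
            printf("monitor: loading job %d\n", i + 1);
            batch[i]();                     /* transfer control to the job */
            printf("monitor: job %d finished, control back to monitor\n", i + 1);
        }
        puts("monitor: batch complete");
        return 0;
    }
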
Resident Monitor Layout
Resident Monitor Parts

• Parts of resident monitor:


– Control Language Interpreter – responsible for reading and
carrying out instructions on the cards.
– Loader – loads systems programs and applications programs into
memory.
– Device drivers – know special characteristics and properties for
each of the system’s I/O devices.
Desirable Hardware Features

• Memory protection
• do not allow the memory area containing the monitor to be altered by a
user program.
• Privileged instructions
• can be executed only by the resident monitor.
• A trap occurs if a program tries these instructions.
• Interrupts
• provide flexibility for relinquishing control to and regaining control from
user programs.
• Timer interrupts prevent a job from monopolizing the system.
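
The timer-interrupt idea can be illustrated by analogy with a small POSIX C program (the 2-second limit and the endless loop are invented for the example): an interval timer delivers SIGALRM, so a runaway job cannot monopolize the processor forever.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void on_timer(int sig) {
        (void)sig;
        const char msg[] = "timer interrupt: job exceeded its time limit\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
        _exit(1);                             /* a real OS would return to the monitor */
    }

    int main(void) {
        struct itimerval limit = { {0, 0}, {2, 0} };  /* fire once, after 2 seconds */
        signal(SIGALRM, on_timer);
        setitimer(ITIMER_REAL, &limit, NULL);

        puts("job: entering an (intentionally) endless loop");
        for (;;)
            ;                                 /* simulate a runaway job */
    }
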
Offline Operation
• Problem:
• Card Reader slow, Printer slow (compared to
Tape).
• I/O and CPU could not overlap.
• Solution: Offline Operation (Satellite Computers):
speed up computation by loading jobs into memory
from tapes, while card reading and line printing are
done off-line using smaller machines.
Spooling (1)

• Problem:
• Card reader, Line printer and Tape drives slow
(compared to Disk).
• I/O and CPU could not overlap.
• Solution: Spooling -
• Overlap I/O of one job with the computation of
another job (using double buffering, DMA, etc).
• Technique is called SPOOLing: Simultaneous
Peripheral Operation On Line.
Spooling (2)
• While executing one job, the OS:
• Reads next job from card reader into a storage area on the disk (Job pool).
• Outputs printout of previous job from disk to printer.

• Job pool – data structure that allows the OS to select which job to run next
in order to increase CPU utilization.
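
A minimal sketch of the job pool, assuming a simple FIFO selection policy and invented job names: while one job computes, the spooler keeps adding newly read jobs to the pool, and the OS later selects the next job from it.

    #include <stdio.h>

    #define POOL_MAX 8

    static const char *pool[POOL_MAX];  /* jobs spooled to disk, awaiting selection */
    static int head, tail;

    static void spool_in(const char *job) {      /* card reader -> disk */
        if (tail - head < POOL_MAX)
            pool[tail++ % POOL_MAX] = job;
    }

    static const char *select_next(void) {       /* OS picks the next job to run */
        return (head < tail) ? pool[head++ % POOL_MAX] : NULL;
    }

    int main(void) {
        spool_in("payroll.job");                  /* read while another job computes */
        spool_in("simulation.job");
        spool_in("report.job");

        const char *job;
        while ((job = select_next()) != NULL)
            printf("dispatching %s from the job pool\n", job);
        return 0;
    }
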
We assumed Uniprogramming until now

• I/O operations are exceedingly slow (compared to instruction execution).


• A program containing even a very small number of I/O operations will
spend most of its time waiting for them.
• Hence: poor CPU usage when only one program is present in memory.
Uniprogramming

• In a uniprogramming system, jobs are submitted to the system one by one.
From within the batch, the jobs are processed one by one.
• A collection of jobs forms a batch (batch processing), from which the jobs
are executed one by one. Every user submits his/her job to the operator, who
forms a batch of jobs.
• The entire system is used by one process at a time.
• The sequence of steps is as follows:
1. All users submit their jobs to the operator (the one who manages the system).
2. The operator selects similar kinds of jobs and makes batches.
3. Job 1 from Batch 1 is submitted for processing.
4. Job 1 uses the CPU and I/O until it is completed. Until then, no other process
can use the CPU.
5. After Job 1 finishes, Job 2 is loaded into memory.
Memory Layout for Uniprogramming
Disadvantages of batch processing / uni-
programming:
1. Wastage of CPU time
2. No user interaction
3. No mechanism to prioritize processes
Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Memory Layout for Batch Multiprogramming

Several jobs are kept in main memory at the same time, and the CPU is
multiplexed among them.
Multiprogramming (1)
Multiprogramming (2)
What is Multiprogramming?
• In multiprogramming, more than one process can reside in main memory at a
time. Thus, when process P1 goes for an I/O operation, the CPU is not kept
waiting and is allocated to another process (let's say P2). This keeps the
CPU busy at all times.
• Use interrupts to run multiple programs concurrently:
● When a program performs I/O, instead of polling, execute another
program until an interrupt is received.
● Requires protected memory and I/O for each program.
● Requires intervention if a program loops indefinitely.
● Requires CPU scheduling to choose the next job to run.

Why Multiprogramming?
Multiprogramming needed for efficiency:
• Single user cannot keep CPU and I/O devices busy at all times.
• Multiprogramming organizes jobs (code and data) so CPU always has one
to execute.
• A subset of total jobs in system is kept in memory.
• One job selected and run via job scheduling.
• When it has to wait (for I/O for example), OS switches to another job.
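
The switch-on-I/O behaviour can be shown with a toy simulation in C (the two-job setup, CPU ticks, and I/O durations are invented): whenever the running job blocks on I/O, the "OS" gives the CPU to the other job instead of letting it idle.

    #include <stdio.h>

    enum state { READY, WAITING_IO, DONE };

    struct job {
        const char *name;
        int cpu_left;    /* CPU ticks still needed */
        int io_left;     /* remaining ticks of the current I/O, 0 if none */
        enum state st;
    };

    int main(void) {
        struct job jobs[2] = {
            { "P1", 3, 0, READY },
            { "P2", 4, 0, READY },
        };
        int finished = 0, current = 0;

        while (finished < 2) {
            struct job *j = &jobs[current];

            /* advance pending I/O for waiting jobs (devices run in parallel) */
            for (int i = 0; i < 2; i++)
                if (jobs[i].st == WAITING_IO && --jobs[i].io_left == 0)
                    jobs[i].st = READY;

            if (j->st == READY) {
                printf("CPU runs %s\n", j->name);
                if (--j->cpu_left == 0) {
                    j->st = DONE;
                    finished++;
                } else {
                    j->st = WAITING_IO;   /* the job issues an I/O request */
                    j->io_left = 2;
                    printf("%s blocks on I/O, OS switches jobs\n", j->name);
                }
            }
            current = 1 - current;        /* give the CPU to the other job */
        }
        puts("all jobs complete");
        return 0;
    }
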
Multiprogramming
• Advantages:
1. The CPU is utilized all the time, as long as there is more than one process.
2. The processes can be finished in less time.

• Disadvantages:
1. No user interaction
Types of Multiprogramming OS
There are mainly two types of multiprogramming operating systems:
• Multitasking Operating System
• Multiuser Operating System

• Multitasking Operating System: A multitasking operating system has the
ability to execute many programs at the same time. The operating system
achieves this by swapping programs in and out of memory. Whenever a program
is swapped out of memory, it is kept in secondary storage until it is
required again.

• Multitasking examples are: Windows XP, Windows Vista, Windows 7, Windows 8,
and more.
Types of Multiprogramming OS
• Multiuser Operating System: When an operating system allows many users to
connect to a single system running the same operating system, it is called a
multiuser operating system.

• On a multiuser operating system, several users can run programs and carry
out tasks at the same time. The main objective in designing a multiuser
operating system is to allow batch processing and time sharing on a
mainframe computer. Multiuser operating systems are mostly used in large
organizations, campuses and universities, the public sector, and so on.
Using a multiuser OS, users can exchange files or data and share hardware
components such as printers, plotters, and hard drives. Every user gets a
short period of CPU time for this.

• Multiuser OS examples are: Windows 2000, Ubuntu, Mac OS, Linux, Unix, and more.
Difference between Multiprogramming and
Multitasking

Multiprogramming:

• The approach of context switching is implemented.
• Multiprogramming enhances CPU utilization by keeping several jobs in memory.
• The aim is to decrease CPU idle time.
• Both techniques use a single CPU.

Multi-tasking:
• The approach of context switching and time sharing is implemented.
Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Timesharing
Hardware – getting cheaper; Human – getting expensive

● Programs queued for execution in FIFO order.


● Like multiprogramming, but timer device interrupts after a quantum
(timeslice).
● Interrupted program is returned to end of FIFO
● Next program is taken from head of FIFO
● Control card interpreter replaced by command language interpreter.
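
Below is a small C sketch of this round-robin mechanism, with an invented quantum and job lengths: each job runs for at most one quantum, and an unfinished job is returned to the end of the FIFO, exactly as a timer interrupt would force.

    #include <stdio.h>

    #define QUANTUM 2
    #define NJOBS   3

    int main(void) {
        int remaining[NJOBS] = { 5, 3, 1 };   /* CPU time each job still needs */
        int fifo[32], head = 0, tail = 0;

        for (int i = 0; i < NJOBS; i++)       /* initial FIFO order */
            fifo[tail++] = i;

        while (head < tail) {
            int j = fifo[head++];             /* take the job at the head of the FIFO */
            int slice = remaining[j] < QUANTUM ? remaining[j] : QUANTUM;
            remaining[j] -= slice;
            printf("job %d runs for %d tick(s), %d left\n", j, slice, remaining[j]);
            if (remaining[j] > 0)
                fifo[tail++] = j;             /* timer interrupt: back to the tail */
        }
        puts("all jobs finished");
        return 0;
    }
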

Timesharing (cont.)
● Interactive (action/response)
● when OS finishes execution of one command, it seeks the next control
statement from user.
● File systems
● online filesystem is required for users to access data and code.
● Virtual memory
● Job is swapped in and out of memory to disk.

Timesharing (cont.)
• In the time-sharing operating system, several jobs or processes can be
loaded into the main memory simultaneously and several users can share
the system as well.
• We can hence, say that the time-sharing operating system was a logical
extension of the multiprogramming operating system.
• The name time-sharing was used because the processes used to share an
equal amount of time specified by the operating system developer.
• The main aim of the time-sharing operating system was to reduce the
overall process response time.
• The CPU could execute several processes by providing an equal amount of
time to each process, so CPU utilization became better than in the
multiprogramming operating system.
Timesharing (cont.)
Note: Response time is the total amount of time it takes to respond to a
process. It should not be confused with the execution time. The switching
between several operating system processes was handled by an operating
system component known as the CPU scheduler.

The benefits of the time-sharing operating system are:

• Multiple processes and user requests can be served in an apparently
simultaneous fashion.
• Better response time than the previously used multiprogramming operating
system.
• The CPU does not have to sit idle, due to regular switching.
• More efficient utilization of the CPU.
Real-time systems

● Correct system function depends on timeliness


● Feedback/control loops
● Sensors and actuators
● Hard real-time systems -
● Failure if response time too long.
● Secondary storage is limited
● Soft real-time systems -
● Less accurate if response time is too long.
● Useful in applications such as multimedia, virtual reality.
Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Personal Computing Systems

● Single user systems, portable.


● I/O devices - keyboards, mice, display screens, small printers.
● Laptops and palmtops, Smart cards, Wireless devices.
● Single user systems may not need advanced CPU utilization or protection
features.
● Advantages:
● user convenience, responsiveness, ubiquitous

Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Multiprocessor Systems

● Multiprocessor systems with more than one CPU in close communication.


● Improved Throughput, economical, increased reliability.
● Kinds:
• Vector and pipelined
• Symmetric and asymmetric multiprocessing
• Distributed memory vs. shared memory
● Programming models:
• Tightly coupled vs. loosely coupled, message-based vs. shared variable

What is a Multiprocessor System?

● A multiprocessor operating system manages multiple processors, and these
processors are connected to shared physical memory, computer buses, clocks,
and peripheral devices.

● The main objective of using a multiprocessor operating system is to provide
high computing power and increase the execution speed of the system.

Multiprocessor System

Advantages of Multiprocessor Systems
There is a list of several advantages of multiprocessor operating systems, such as:
• Greater Reliability
• If for any reason one processor fails, the entire system will still work
properly. For example, if a multiprocessor has 6 processors and one of them
stops performing properly, the remaining processors take over its
responsibilities and keep the system running.
• Improved Throughput
• The throughput of the entire system improves when several processors work
in collaboration.

Advantages of Multiprocessor Systems
• Cost-Effective System
• Multiprocessor systems are cost effective compared to single-processor
systems in the long run, because they share all input/output devices, power
supplies, and data storage. In a multiprocessor, there is no need to connect
all peripheral terminals separately to each processor.
• Parallel Processing
• A multiprocessor OS achieves high performance through parallel processing:
a single job is divided into several smaller jobs, which are executed in
parallel (see the sketch below).

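
As a sketch of the parallel-processing point above (the thread count, array size, and summing job are arbitrary choices), the following C program divides one job across POSIX threads; on a multiprocessor the threads can run on different processors. Compile with cc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define NTHREADS 4

    static int data[N];

    struct slice { int lo, hi; long long sum; };

    static void *partial_sum(void *arg) {       /* one small piece of the job */
        struct slice *s = arg;
        for (int i = s->lo; i < s->hi; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            data[i] = 1;

        pthread_t tid[NTHREADS];
        struct slice slices[NTHREADS];

        for (int t = 0; t < NTHREADS; t++) {    /* divide the job */
            slices[t] = (struct slice){ t * (N / NTHREADS),
                                        (t + 1) * (N / NTHREADS), 0 };
            pthread_create(&tid[t], NULL, partial_sum, &slices[t]);
        }

        long long total = 0;
        for (int t = 0; t < NTHREADS; t++) {    /* collect the pieces */
            pthread_join(tid[t], NULL);
            total += slices[t].sum;
        }
        printf("total = %lld\n", total);        /* expect 1000000 */
        return 0;
    }
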
Disadvantages of Multiprocessor Systems
• A multiprocessor is complicated in nature, in terms of both hardware and
software.
• It is more expensive due to its large architecture.
• A multiprocessor operating system faces a daunting task in scheduling
processes, due to the shared nature of its resources.
• A multiprocessor system needs a large memory, because memory is shared with
other resources.
• Its speed can degrade if any one processor fails.
• There is a longer delay between a processor receiving a message and taking
the appropriate action.
• There are significant challenges related to skew and determinism.
• It needs context switching, which can impact its performance.
Examples of Multiprocessor Systems
• Examples for Symmetric Multiprocessor – Windows NT, Solaris, Digital
UNIX, OS/2 & Linux.
• Examples for Asymmetric Multiprocessor – SunOS Version 4, IOS

• Other Examples are


• Intel Nehalem – Beckton, Westmere, Sandy Bridge
• AMD Opteron – K10 (Barcelona, Magny Cours); Bulldozer
• ARM Cortex A9, A15 MPCore
• Oracle (Sun) UltraSPARC T1, T2, T3, T4 (Niagara)

Evolution of Operating Systems

• Early Systems (1950)


• Simple Batch Systems (1960)
• Multiprogrammed Batch Systems (1970)
• Time-Sharing and Real-Time Systems (1970)
• Personal/Desktop Computers (1980)
• Multiprocessor Systems (1980)
• Networked/Distributed Systems (1980)
Distributed Systems
Hardware – very cheap ; Human – very expensive

● Distribute computation among many processors.


● Loosely coupled -
• no shared memory, various communication lines
● client/server architectures
● Advantages:
• resource sharing
• computation speed-up
• reliability
• communication - e.g. email
● Applications - digital libraries, digital multimedia
Distributed Systems

• A distributed operating system distributes computation across a number of
central processors, and it serves multiple real-time applications as well
as multiple users.
• All processors are connected by a communication medium such as high-speed
buses or telephone lines, and each processor has its own local memory.
• Because of this, a distributed operating system is known as a loosely
coupled system.
• This kind of operating system involves multiple computers, nodes, and sites,
and these components are linked to each other by LAN/WAN lines.
• A distributed OS can share its computational capacity and I/O files while
providing a virtual machine abstraction to users.
Types of Distributed Systems

Distributed systems fall into three areas:

• Client-Server Systems
• Peer-to-Peer Systems
• Middleware

Client-Server Systems
A client-server system is known as a "tightly coupled operating system". This
system is designed mostly for multiprocessors and homogeneous multicomputers.
A client-server system works as a centralized server, because it handles all
the requests generated on the client side.
Types of Distributed Systems

Server systems can be divided into two segments:

Compute-Server System: This system provides an interface to which clients send
requests to execute an action. The server executes the action and sends the
result back to the client.

File-Server System: A file server provides a file-system interface for clients,
so that clients can perform tasks such as creating, updating, and deleting
files.

Objective – Hide and manage hardware resources.

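
A minimal sketch of the compute-server request/response pattern, with hypothetical operation codes and message layout; the network transport is replaced by a direct function call so the example stays self-contained.

    #include <stdio.h>
    #include <string.h>

    enum op { OP_CREATE, OP_UPDATE, OP_DELETE };

    struct request  { enum op op; char path[64]; };
    struct response { int status; char message[64]; };

    /* server side: carry out the request and build a reply */
    static struct response server_handle(const struct request *req) {
        struct response r = { 0, "" };
        const char *verb = req->op == OP_CREATE ? "created"
                         : req->op == OP_UPDATE ? "updated" : "deleted";
        snprintf(r.message, sizeof r.message, "%s %s", verb, req->path);
        return r;
    }

    /* client side: send a request and wait for the response */
    static void client_call(enum op op, const char *path) {
        struct request req = { op, "" };
        strncpy(req.path, path, sizeof req.path - 1);
        struct response resp = server_handle(&req);   /* stands in for send/receive */
        printf("client got: status=%d, %s\n", resp.status, resp.message);
    }

    int main(void) {
        client_call(OP_CREATE, "/reports/q1.txt");
        client_call(OP_UPDATE, "/reports/q1.txt");
        client_call(OP_DELETE, "/reports/q1.txt");
        return 0;
    }
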
Types of Distributed Systems

Peer-to-Peer System
A peer-to-peer system is known as a "loosely coupled system". This concept is
used in computer network applications, since such a system consists of a
collection of processors that share neither memory nor a clock. Every
processor has its own local memory, and the processors communicate with each
other through various communication media such as high-speed buses or
telephone lines.

Objective – It provides local services to remote clients.

Types of Distributed Systems

Middleware
Middleware provides interoperability between applications that run on
different operating systems. Using middleware services, those applications
are able to transfer data to each other.

Objective – It provides distribution transparency.

Applications of Distributed Systems

There are various real-life applications of distributed systems, such as:


• Telecommunication networks
• Internet Technology
• Airline reservation Control systems
• Distributed databases System
• Distributed Multimedia System
• Global positioning System
• World Wide Web
• Air Traffic Control System
• Automated Banking System
• Multiplayer online gaming
Examples of Distributed Systems

• Windows server 2003


• Windows server 2008
• Windows server 2012
• Ubuntu
• Linux (Apache Server)

Advantages of Distributed Systems

• All resources (CPU, disk, network interfaces, nodes, computers, and more)
can be shared from one site to another, which increases data availability
across the entire system.
• It enhances the speed of data exchange from one site to another.
• It reduces the probability of losing data, because data is replicated on all
sites; if any site fails, the user can access the data from another running
site.
• It provides excellent services to all users.
• It helps to decrease the load of jobs on any one host system.
• It can be scaled easily; any network can be attached to another network
without hindrance.
• It is more reliable than a single system.
• It has excellent performance.
Advantages of Distributed Systems

• Better portability.
• Better re-usability of existing hardware components.
• It helps to decrease data-processing time.
• It is a highly fault-tolerant system.
• Better flexibility, since it is easy to use, install, and detect errors in.
• It is an open system, because it can be accessed from both local and remote
sites.
• The sites work independently of each other, and due to this feature, if any
one site crashes, the entire system does not halt.
• It is a well-protected system: in a distributed operating system every user
has a unique UID, and using this UID users access the systems, which reduces
the chance of data being stolen.
Limitations of Distributed Systems

• If the central hub fails, the entire network will halt.

• Distributed operating systems are designed with languages and techniques
that are still not well defined.
• This system is more costly, because it is not easy to set up and it requires
a huge infrastructure.
• Sometimes security issues can arise while sharing data over the entire
network.
• Some data packets can be corrupted while travelling through large networks.
• Its maintenance is more costly, because it is distributed across multiple
servers.

Limitations of Distributed Systems

• If any one site gets overloaded, it can create big challenges.

• If multiple users try to access the same data from a local database as well
as a remote database at the same time, performance can degrade.

• Administration is a very difficult task in a distributed operating system.

• A distributed operating system can support only a limited range of software.

Summary

• Initially, the users provided the
instructions and the computer had to follow
them. After some time, the users prepared
their instructions in the form of jobs
on punched cards and submitted them to
the computer operator.

• After that, users were able to communicate
with the operating system with the help of
software called a terminal. Nowadays, with the
development of the Graphical User Interface, the
user experience is much more convenient.

Summary

• In the multiprogramming operating system,
several processes can be loaded into the
main memory simultaneously and share a
single processor.

• In a time-sharing system, processes share
an equal amount of time, specified by
the operating system developer.

Summary

• In a parallel processing operating system,
there is more than one processor. Hence,
several processes can be loaded into the
main memory simultaneously, and all the
processors work concurrently on these
processes.

• In the distributed operating system, two or
more nodes are connected. Various nodes are
connected, but neither memory nor a clock is
shared by the processors.

