Operating System (Day - 1)


Operating System
• Operating System is a program that:
– manages the computer hardware
– provides a basis for application programs, and
– acts as an intermediary between the computer user
and the computer hardware.
• Two Basic Functions of Operating System are:
– Resource Management
– Virtual Machine
Operating System

Fig: Layers and Views of a Computer System


Types of OS
• Batch System
• Time Sharing
– Processor time is shared among multiple users
– main objective is to maximize Response Time
• Multiprogramming
– Allows more than one program to run at a time
– Main objective is to maximize CPU usage during I/O wait, e.g. PC
• Real Time
• Distributed
• Multiprocessor (Parallel-Processing)
Parallel Systems
• Multiprocessing can be:
– Tightly Coupled (Shared Memory System)
– Loosely Coupled (Distributed memory System)
• Types:
– Asymmetric multiprocessing:
• Each processor assigned a specific task
• Presence of controller processor (Master-slave relationship)
– Symmetric multiprocessing:
• Each processor performs all tasks within OS
• All processors are peers (no master-slave relationship)
• Advantages:
– Increased Throughput
– Economy of Scale
– Increased Reliability
OS Structure
• OS is made of:
– Shell
• Includes the User Interface (CLI, or GUI)
• User interacts with the OS through the Shell
– Kernel
• Program remaining in main memory when the computer is running
• Heart of OS
• Interacts with hardware

Fig: Users and application programs interact with the hardware through the Shell and Kernel
Components of OS
• Kernel
• User Interface
• Security
• Process Execution
• Memory Management
• Interrupt
• Networking
Micro-Kernels
• The minimum set of low-level modules necessary to run an Operating System
• The module that always resides in memory and runs in kernel mode
• Other modules run as user processes – file systems, device drivers etc.
• This gives flexibility in adapting to changes in the low-level system implementation
• Also provides a platform on which other operating systems can be developed and new research carried out
Monolithic Kernel Vs Micro-Kernel
Process
• Program in execution
• Progresses in sequential fashion
• Also referred as
– jobs (in batch processing system)
– user programs or tasks (in time-shared system)
• It is a unit of work in a system
• Requires system resources for completion
Process…
• Process in memory includes:
– Program Counter
– Stack
– Data Section

Fig: Process in Memory (Stack, Heap, Data, and Text/Program sections)


Program Vs Process
S.N.  Program                                       Process
1     Consists of a set of instructions in a        It is a sequence of instruction
      programming language                          execution
2     It is a static object existing in file form   It is a dynamic object (i.e. a program
                                                    in execution)
3     Program is loaded into a secondary            Process is loaded into main memory
      storage device
4     The time span is unlimited                    Time span is limited
5     It is a passive entity                        It is an active entity
Process States (2-state process model)
Process States (5-state process model)
• As a process executes, it changes state
– new: The process is being created.
– running: Instructions are being executed.
– waiting: The process is waiting for some event to
occur.
– ready: The process is waiting to be assigned to a
processor.
– terminated: The process has finished execution.
Process States

Fig: State Transition Diagram For 5 States Process Model


State Transitions
From      To          Sample Event(s) that causes transition
New       Ready       Newly created process is admitted into the system
Ready     Running     Waiting process is assigned the CPU
Running   Blocked     I/O request and wait
Running   Ready       Time quantum used (i.e. time out) or interrupt occurred
Running   Terminated  Process completed or intentionally halted by OS
Blocked   Ready       I/O completed or the event the process is waiting for occurred
Process Control Block (PCB)
• Also called Task Control Block
• A data structure for process representation in OS
• Created by OS whenever a new process is
created
• Deleted from main memory and memory made
free after the termination
• Only a process with an active PCB can compete for
resources
PCB..
Information associated with each
process.
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
Fig: Process Control Block
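As a concrete picture of the fields listed above, the sketch below shows a minimal, hypothetical PCB layout in C. The field names and sizes are made up for illustration; a real kernel's structure (e.g. Linux's task_struct) is far larger.

#include <stdint.h>
#include <stdio.h>

/* Process states from the 5-state model */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A minimal, hypothetical PCB layout; field names and sizes are
   illustrative only, not taken from any real kernel. */
struct pcb {
    int             pid;             /* process identifier              */
    enum proc_state state;           /* current process state           */
    uint64_t        program_counter; /* address of the next instruction */
    uint64_t        registers[16];   /* saved CPU register contents     */
    int             priority;        /* CPU scheduling information      */
    void           *page_table;      /* memory-management information   */
    uint64_t        cpu_time_used;   /* accounting information          */
    int             open_files[16];  /* I/O status information          */
    struct pcb     *next;            /* link for ready/device queues    */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = NEW };
    p.state = READY;                 /* newly created process is admitted */
    printf("pid %d is now in state %d\n", p.pid, p.state);
    return 0;
}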
Threads
• Also called lightweight process
• A sequential execution stream within a
process
• Cannot run on its own; hence it is not by itself a
complete representation of a program

Threads…
• A thread has its own unique:
– Registers, Program Counter
– Stack, Stack Pointer
• A thread does not have its own (these are shared with all threads of the process):
– Address Space, Program Code
– Global Variables, Heap
– OS Resources (Files; I/O devices)

Threads…

Why Threads? (Benefits)
• A process with multiple threads makes a great server (e.g. print server)
• Increased responsiveness, i.e. with multiple threads in a process, if one thread blocks the others can still continue executing
• Sharing of common data reduces the need for inter-process communication
• Proper utilization of multiprocessors by increasing concurrency
• Threads are cheap to create and use very few resources
• Context switching is fast (only have to save/reload PC, Stack, SP, Registers)
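These properties can be seen with POSIX threads. In the sketch below (standard pthread API; compile with -pthread), two threads share the process's global counter while each keeps its own stack variable. The names worker and shared_counter are made up for the example.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 /* global data: shared by all threads    */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    int local = 0;                      /* stack variable: private to the thread */
    for (int i = 0; i < 1000; i++) {
        local++;
        pthread_mutex_lock(&lock);      /* shared data needs synchronization     */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld finished, local = %d\n", (long)arg, local);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* 2000: both threads updated it */
    return 0;
}

Because the counter is shared by both threads it is protected by a mutex, while each thread's local variable needs no protection since every thread has its own stack.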
Process Vs Thread
PROCESS                                       THREAD
Doesn't share memory (loosely coupled)        Shares memory and files (tightly coupled)
Creation is time consuming                    Fast
Execution is slow                             Fast
More time to terminate                        Less time
More time to switch between processes         Less time
System calls are required for communication   Not required
More resources are required                   Fewer resources are required
Not suitable for parallelism                  Suitable
Implementation of Threads
• User Level Threads
– Threads supported at the user level
– Reside above the kernel and are managed without kernel support
– A thread library provides functions that allow user processes to create and manage their own threads
– No knowledge of the threads is needed at the Operating System level
• Kernel Level Threads
– Supported directly by Operating Systems
– Kernel provides system calls for creating and managing the threads
• Combined or Hybrid Threads
– Combines both user level and kernel level threads
– Thread creation is done in user space
Implementation of Threads…
a. User level Threads

b. Kernel level Threads

c. Hybrid Threads
Multithreading

• one process, one thread
• one process, multiple threads
• multiple processes, one thread
• multiple processes, multiple threads
Multithreading Models
• The way of establishing the relationship
between user level thread and kernel level
thread
• Three basic models:
– Many-to-One model
– One-to-One model
– Many-to-many model

Many-to-One model…

User Thread

Kernel Thread
One-to-One model

User Thread

Kernel Thread

Many-to-Many model

User Thread

Kernel Thread

Scheduling
• In a time sharing system, the CPU switches
between processes
• Switching between processes maximizes
CPU utilization
• One process can get CPU time when another is
waiting for I/O operations to complete
• Increases throughput
• Process scheduler selects available process for
execution in CPU
Scheduling…(CPU switching between processes)
Scheduling Queues
• Job queue
– set of all processes in the system.
• Ready queue
– set of all processes residing in main memory,
ready and waiting to execute.
– Queue header points to the first and last processes in ready
queue
• Device queues
– set of processes waiting for an I/O device.
• A process migrates between the various queues (sometimes
multiple times) until its termination
Scheduling…
• At first the process is kept in the ready queue
• Then, from the queue, one of the processes is
selected for execution (i.e. dispatched)
• During execution, one of the following three cases
may arise:
– The process could issue an I/O request and be placed in an I/O queue
– It may create a child process and wait for its termination
– It may be removed forcibly from the CPU due to an interrupt and put
back into the ready queue
Scheduling…

Fig: Queueing Diagram of Process Scheduling


Scheduler
• It is an OS program (module) that selects the
next job to be admitted for execution
• Main objective of scheduling is to:
– Increase CPU utilization
– Increase throughput
• Types of scheduler:
– Long Term Scheduler
– Short Term Scheduler
– Medium Term Scheduler
Long Term Scheduler
• Also called job scheduler
• selects which processes should be brought into the
ready queue from batch queue
• Executes less frequently (seconds, even minutes)
• controls the degree of multiprogramming
• Responsible for
– selecting the processes from secondary storage device
– loading them into main memory for execution
Scheduler…
Short Term Scheduler
• Also called CPU scheduler
• selects which process should be executed next
and allocates CPU
• invoked very frequently (in milliseconds), hence
should be fast
• The process of assigning the CPU to a process is
also called process dispatching
• The program responsible is known as dispatcher
Medium Term Scheduler
• When a process is waiting for an I/O operation,
it may be moved out of main memory to the hard disk.
• Later, it can be reloaded into main memory and
continued from where it was left earlier
• A suspended process saved in this way is said to
be swapped out or rolled out
• This swapping is done by medium term
scheduler
Medium Term Scheduler
Scheduling
• Basis of multiprogrammed OS
• The run of a process: CPU Burst
• The wait for an I/O: I/O Burst
• A process with more CPU Burst is called CPU Bound.
• A process with more I/O Burst is called I/O Bound.
• CPU–I/O Burst Cycle – Process execution consists of
a cycle of CPU execution and I/O wait.
• Execution both starts and ends with a CPU burst.
• Switches between CPU burst and I/O burst multiple
times before termination
CPU-I/O burst cycle
Scheduling Criteria
• CPU Utilization:
– Fraction of time during which the CPU is busy executing user programs and/or
system programs.
– The more the better
• Throughput:
– Average amount of work completed per unit time
– The higher the better
• Turnaround Time (TAT):
– Time elapsed from submission of the task to the time it is completed
– Sum of the time spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O
– Hence,

TAT = process finish time – process arrival time


Scheduling Criteria
• Waiting Time(WT):
– Total time spent waiting in suspended state and ready state
– Hence,
WT = TAT – Processing Time

• Response Time:
– Interval between the submission of request and start of the
system’s first response
• All these times should be as small as possible
• Other criteria: fairness, balance etc.
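Both formulas translate directly into code. A small sketch with made-up arrival, burst and finish times (the finish times correspond to one possible non-preemptive SJF run):

#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 2, 4};              /* made-up process data (ms) */
    int burst[]   = {7, 4, 1};
    int finish[]  = {7, 12, 8};
    int n = 3;

    for (int i = 0; i < n; i++) {
        int tat = finish[i] - arrival[i];   /* TAT = finish time - arrival time */
        int wt  = tat - burst[i];           /* WT  = TAT - processing time      */
        printf("P%d: TAT = %d ms, WT = %d ms\n", i + 1, tat, wt);
    }
    return 0;
}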
Algorithms Used In Different Systems
– First-Come-First-Serve
– Shortest Job First
– Shortest Remaining Time Next
– Round Robin
– Priority
– Multilevel Queue
Scheduling Algorithms (Types)
Preemptive Algorithm
• CPU can be taken away from a process before its completion
• CPU switches from a lower preference job to a higher preference job in the middle of execution
• Not fair: the CPU switches either because of time constraints or due to a higher priority process
• Costlier to implement
• E.g.: Round Robin (RR) algorithm
First-Come-First-Served (FCFS)
• Allocates the CPU in the order the processes arrive
• Non-preemptive scheduling algorithm
• E.g.
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
• The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS…
Suppose that the processes arrive in the order
P2 , P3 , P1 .
• The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case.
• In example 1:
– Short processes arriving after a long process have to wait longer for the CPU.
This effect is called the Convoy effect.
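The two orderings can be checked numerically: under FCFS with all processes arriving at time 0, each process waits for the total burst time of everything ahead of it. A minimal sketch (the helper fcfs_avg_wait is made up for this example):

#include <stdio.h>

/* Average waiting time under FCFS when all processes arrive at t = 0 */
static double fcfs_avg_wait(const int burst[], int n)
{
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;        /* this process waited for everything before it */
        wait  += burst[i];    /* the next process also waits for this burst   */
    }
    return (double)total / n;
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* P2, P3, P1 */
    printf("order P1,P2,P3: average wait = %.1f\n", fcfs_avg_wait(order1, 3));  /* 17.0 */
    printf("order P2,P3,P1: average wait = %.1f\n", fcfs_avg_wait(order2, 3));  /*  3.0 */
    return 0;
}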
FCFS...
Example:

Gantt chart:

Calculation:
Shortest Job First (SJF)
• Allocate the CPU to the process with least CPU
burst time
• If two processes have same CPU burst, then FCFS is
used to break the tie
• Two schemes:
– Non Preemptive :
• once CPU given to the process it cannot be preempted until
completes its CPU burst
– Preemptive (also called Shortest Remaining Time First - SRTF):
• if a new process arrives with CPU burst length less than
remaining time of current executing process, preempt.
SJF (non-preemptive)…
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)
P1 P3 P2 P4

0 7 8 12 16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
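The schedule above can be reproduced by repeatedly picking the shortest job that has already arrived and running it to completion. A minimal sketch for exactly this workload (it assumes the CPU is never idle, which holds here because P1 arrives at time 0):

#include <stdio.h>

#define N 4

int main(void)
{
    int arrival[N] = {0, 2, 4, 5};      /* P1..P4 from the example above */
    int burst[N]   = {7, 4, 1, 4};
    int done[N]    = {0};
    int time = 0, total_wait = 0;

    for (int completed = 0; completed < N; completed++) {
        /* pick the shortest job that has arrived and is not yet finished;
           ties fall back to the earlier index, i.e. FCFS order here */
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;

        total_wait += time - arrival[pick];   /* waited since its arrival */
        time += burst[pick];                  /* run it to completion     */
        done[pick] = 1;
    }
    printf("average waiting time = %.2f\n", (double)total_wait / N);   /* 4.00 */
    return 0;
}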


SJF...
Example:

Gantt chart:

Calculation:
Shortest Remaining Time First (SRTF) –
SJF Preemptive
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)
P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

• Average waiting time = (9 + 1 + 0 +2)/4 = 3


– Here, big jobs keep waiting for the CPU, which may result in starvation (addressed by aging)
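The preemptive variant can be simulated one time unit at a time, always running the arrived process with the shortest remaining time. A minimal sketch that reproduces the 3 ms average above:

#include <stdio.h>

#define N 4

int main(void)
{
    int arrival[N] = {0, 2, 4, 5};      /* P1..P4 from the example above */
    int burst[N]   = {7, 4, 1, 4};
    int remaining[N], finish[N];
    int left = N;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    /* advance one time unit at a time, always running the arrived
       process with the shortest remaining time (preemption is implicit) */
    for (int time = 0; left > 0; time++) {
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (remaining[i] > 0 && arrival[i] <= time &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) continue;              /* CPU idle this tick */
        if (--remaining[pick] == 0) {
            finish[pick] = time + 1;
            left--;
        }
    }

    int total_wait = 0;
    for (int i = 0; i < N; i++)
        total_wait += finish[i] - arrival[i] - burst[i];   /* WT = TAT - burst */
    printf("average waiting time = %.2f\n", (double)total_wait / N);   /* 3.00 */
    return 0;
}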
Shortest Remaining Time First (SRTF)
Example:

Gantt chart:

Calculation:
Question:
• Make Gantt Chart and Find:
– Average Turnaround Time (ATAT)
– Average Waiting Time (AWT)
• Case1: FCFS
• Case2: SJF (preemptive)
• Case3: SJF (non preemptive)

Process Arrival Time(ms) CPU Burst (ms)


P0 0 10
P1 1 6
P2 3 2
P3 5 4
Priority Scheduling
• Gives preferential treatment to important jobs.
• Allows the programs with the highest priority to be processed first
• If two or more jobs have equal priority, the FCFS
algorithm is used to break the tie
• The priority determination is affected by:
– Resource requirement
– Processing time
– Total spent time on the system
• Types:
– Preemptive Priority Scheduling
– Non-preemptive Priority Scheduling
Preemptive Priority
• Example:
Process   Burst Time   Priority   Arrival Time
P1        10           3          0
P2        5            2          1
P3        2            1          2
Note:
Priority 1 means highest priority & 3 means lowest priority

Gantt Chart: P1 P2 P3 P2 P1
0 1 2 4 8 17
AWT = {0+(8-1)+0+(4-2)+0}/3
= 9/3 = 3 ms.
Preemptive Priority (Problems)
• Indefinite blocking of low priority processes by
high priority processes called Starvation
• Completion of a process within finite time is
not guaranteed
• Solution:
– Aging: Increasing priority of process that has
waited for long time. Older process gets high
priority and hence can complete in finite time.
Non-Preemptive Priority
• Example:
Process   Arrival Time (AT)   Burst Time   Priority
P0        0                   10           5
P1        1                   6            4
P2        3                   2            2
P3        5                   4            0

Gantt Chart: P0 P3 P2 P1
0 10 14 16 22
AWT = {0+(10-5)+(14-3)+(16-1)}/4
= 31/4 = 7.75 ms.
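The 7.75 ms result can be reproduced by repeatedly picking, among the processes that have arrived, the one with the best (lowest-numbered) priority and running it to completion. A minimal sketch for this workload:

#include <stdio.h>

#define N 4

int main(void)
{
    int arrival[N]  = {0, 1, 3, 5};     /* P0..P3 from the example above  */
    int burst[N]    = {10, 6, 2, 4};
    int priority[N] = {5, 4, 2, 0};     /* lower number = higher priority */
    int done[N] = {0};
    int time = 0, total_wait = 0;

    for (int completed = 0; completed < N; completed++) {
        /* non-preemptive: choose the highest-priority arrived process */
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || priority[i] < priority[pick]))
                pick = i;

        total_wait += time - arrival[pick];
        time += burst[pick];
        done[pick] = 1;
    }
    printf("average waiting time = %.2f\n", (double)total_wait / N);   /* 7.75 */
    return 0;
}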
Priority Scheduling
Example:

Non-Preemptive Solution:

Preemptive Solution:
Round Robin(RR) Scheduling
• Preemptive FCFS algorithm
• Each process gets a small unit of CPU time called time quantum (10-100
milliseconds).
• After this time has elapsed, the process is preempted and added to the end of the
ready queue.
• Let,
– n = number of processes,
– q = time quantum, then
• no process runs for more than one time slice (i.e. q) at a time
• no process waits more than (n-1)*q time units
• If a running process releases control due to some I/O, another process is scheduled
to run
• Note:
– q must be large w.r.t. context switch, otherwise overhead is too high.
RR Algorithm…
Example: Time Quantum = 20 ms
Process Burst Time
P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

• Typically, higher average turnaround than SJF, but better response.


• Calculate: AWT and ATAT yourself
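As a starting point for that exercise, the schedule can be simulated. The sketch below assumes all processes arrive at time 0, so a fixed rotation over the process list (skipping finished ones) behaves like the real ready queue; it reproduces the Gantt chart boundaries above.

#include <stdio.h>

#define N 4

int main(void)
{
    int burst[N] = {53, 17, 68, 24};    /* P1..P4, all arriving at t = 0 */
    int remaining[N], finish[N];
    int quantum = 20, left = N, time = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    /* cycle through the processes, giving each at most one quantum per
       turn, until every process has finished */
    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0) { finish[i] = time; left--; }
        }
    }

    for (int i = 0; i < N; i++) {
        int tat = finish[i];             /* arrival time is 0, so TAT = finish */
        int wt  = tat - burst[i];        /* WT = TAT - burst                   */
        printf("P%d: TAT = %3d, WT = %3d\n", i + 1, tat, wt);
    }
    return 0;
}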
RR Algorithm..
Fig: Illustration of average turnaround time (ATAT, y-axis) varying with the time quantum (x-axis)


RR Algorithm...
Example:

Gantt chart:

Calculation:
Multilevel Queue Scheduling
• Ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm,
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues.
– Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which
it can schedule amongst its processes; i.e., 80% to foreground in
RR; 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues
• aging can be implemented this way
