Chapter 6: CPU Scheduling and Deadlock
Scheduling
Basic Concepts of scheduling
Scheduling Criteria
Scheduling Algorithms
Algorithm Evaluation
Deadlock
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
2
CPU Scheduling
In discussing process management and synchronization, we talked
about context switching among processes/threads on the ready queue.
But we glossed over the details of exactly which thread is
chosen from the ready queue.
Making this decision is called scheduling.
Types of Scheduler
Non-preemptive scheduler: a process keeps the CPU until it voluntarily relinquishes it (i.e., it terminates or blocks).
Preemptive scheduler: a process may be descheduled at any time (e.g., when its time slice expires or a higher-priority process becomes ready).
3
When to schedule?
4
Scheduling/Performance Criteria
6
Gantt Chart
Example:
Process A runs from time 0 to 10, B from 10 to 12, and C from 12 to 16 (time on the horizontal axis).
9
SCHEDULING
ALGORITHMS
10
First-Come, First-Served (FCFS) Scheduling
Example: processes P1, P2, P3 arrive in that order with CPU burst times 24, 3, and 3 ms (as implied by the Gantt chart).
Gantt chart: P1: 0-24, P2: 24-27, P3: 27-30
Waiting times: P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27)/3 = 17 ms
11
FCFS Scheduling (Cont.)
If the same processes instead arrive in the order P2, P3, P1:
Gantt chart: P2: 0-3, P3: 3-6, P1: 6-30
Waiting times: P1 = 6, P2 = 0, P3 = 3; average waiting time = (6 + 0 + 3)/3 = 3 ms, much better than before.
This illustrates the convoy effect: short processes stuck waiting behind one long process.
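For illustration (not part of the original slides), a minimal Python sketch that computes FCFS waiting times; the process names and burst times are the ones from the example above.

# Minimal FCFS sketch: processes are served strictly in arrival order.
def fcfs(bursts):
    """bursts: list of (name, burst_time) pairs in arrival order.
    Returns {name: waiting_time}."""
    waiting = {}
    clock = 0
    for name, burst in bursts:
        waiting[name] = clock      # time spent waiting before the first (and only) run
        clock += burst             # the process runs to completion
    return waiting

if __name__ == "__main__":
    w = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
    print(w)                                  # {'P1': 0, 'P2': 24, 'P3': 27}
    print(sum(w.values()) / len(w))           # average waiting time = 17.0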
12
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
SJF is optimal – it gives the minimum average waiting time for a
given set of processes if all of them arrive simultaneously
The difficulty is knowing the length of the next CPU request
Could ask the user
13
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
Gantt chart (shortest burst first; the chart schedules as if all four processes were available at time 0): P4: 0-3, P1: 3-9, P3: 9-16, P2: 16-24
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms
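A minimal Python sketch of non-preemptive SJF (added for illustration, assuming all jobs are available at time 0 as the Gantt chart above does): it serves jobs in order of increasing burst length.

# Non-preemptive SJF, assuming every process is available at time 0.
def sjf(bursts):
    """bursts: dict {name: burst_time}. Returns {name: waiting_time}."""
    waiting = {}
    clock = 0
    # Serve jobs in order of increasing burst time.
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waiting[name] = clock
        clock += burst
    return waiting

if __name__ == "__main__":
    w = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
    print(w)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
    print(sum(w.values()) / len(w))   # average waiting time = 7.0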
14
SJF is not always optimal
Is SJF optimal if not all the processes are available simultaneously?
Gantt chart: P2: 0-2, P1: 2-12
16
Example of Shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to the
analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Preemptive SJF (shortest-remaining-time-first) Gantt chart: P1: 0-1, P2: 1-5, P4: 5-10, P1: 10-17, P3: 17-26
Waiting times:
P1: (0 - 0) + (10 - 1) = 9        P2: 1 - 1 = 0
P3: 17 - 2 = 15                   P4: 5 - 3 = 2
Therefore, the average waiting time = [(10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)]/4 = 26/4 = 6.5 ms
Exercise:
What is the average waiting time for non-preemptive SJF with the same processes?
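For illustration (not part of the original slides), a small Python simulation of shortest-remaining-time-first; the process data are the four processes above, and the simulation reproduces the 6.5 ms average.

# Shortest-remaining-time-first (preemptive SJF), simulated one time unit at a time.
def srtf(procs):
    """procs: dict {name: (arrival, burst)}. Returns {name: waiting_time}."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish = {}
    clock = 0
    while remaining:
        # Processes that have arrived and still need CPU time.
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining time wins
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
            del remaining[current]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

if __name__ == "__main__":
    w = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
    print(w)                          # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
    print(sum(w.values()) / len(w))   # average waiting time = 6.5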
17
Priority Scheduling
18
Example of Priority Scheduling
19
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to
the end of the ready queue.
If there are n processes in the ready queue and the time quantum is
q, then each process gets 1/n of the CPU time in chunks of at most
q time units at once.
No process waits more than (n-1)q time units.
A timer interrupts every quantum to schedule the next process.
Performance
q large ⇒ RR behaves like FIFO (FCFS)
q small ⇒ q must still be large with respect to the context-switch time, otherwise the switching overhead is too high
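A minimal Python round-robin simulation (added for illustration; context-switch cost is ignored). With burst times 24, 3, 3 and q = 4 it reproduces the example on the next slide.

from collections import deque

# Round-robin simulation with a fixed time quantum q (context-switch cost ignored).
def round_robin(bursts, q):
    """bursts: list of (name, burst_time), all arriving at time 0, in queue order.
    Returns {name: waiting_time}."""
    remaining = dict(bursts)
    last_ready = {name: 0 for name, _ in bursts}   # when the process last joined the queue
    waiting = {name: 0 for name, _ in bursts}
    queue = deque(name for name, _ in bursts)
    clock = 0
    while queue:
        name = queue.popleft()
        waiting[name] += clock - last_ready[name]
        run = min(q, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:               # quantum expired: go to the back of the queue
            last_ready[name] = clock
            queue.append(name)
    return waiting

if __name__ == "__main__":
    w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4)
    print(w)                          # {'P1': 6, 'P2': 4, 'P3': 7}
    print(sum(w.values()) / len(w))   # average waiting time ≈ 5.67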
20
Time Quantum and Context Switch Time
21
Example of RR with Time Quantum = 4
Using the same three processes as the FCFS example (burst times 24, 3, 3) with q = 4:
Gantt chart: P1: 0-4, P2: 4-7, P3: 7-10, then P1 runs 10-14, 14-18, 18-22, 22-26, 26-30
Waiting times: P1 = 6, P2 = 4, P3 = 7; average waiting time = 17/3 ≈ 5.67 ms
22
Multilevel Queue
23
Multilevel Queue Scheduling
24
Multilevel Feedback Queue
25
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0, which is served in FCFS order.
When it gains the CPU, the job receives 8 milliseconds.
If it does not finish in 8 milliseconds, it is moved to queue Q1.
At Q1 the job is again served in FCFS order and receives 16 additional milliseconds.
If it still does not complete, it is preempted and moved to queue Q2.
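A minimal Python sketch of this three-queue policy (added for illustration; the quanta of 8 and 16 ms come from the slide, while the job list is an assumption, and arrivals after time 0 are not modeled).

from collections import deque

# Three-level feedback queue: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
QUANTA = [8, 16, None]          # None means "run to completion" (FCFS)

def mlfq(jobs):
    """jobs: list of (name, burst_time), all arriving at time 0.
    Prints the order in which slices of CPU time are handed out."""
    queues = [deque(), deque(), deque()]
    for name, burst in jobs:
        queues[0].append((name, burst))      # every new job starts in Q0
    clock = 0
    while any(queues):
        # Always serve the highest-priority non-empty queue.
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        run = remaining if quantum is None else min(quantum, remaining)
        print(f"t={clock:3d}: {name} runs {run} ms in Q{level}")
        clock += run
        remaining -= run
        if remaining > 0:                    # used up its quantum: demote one level
            queues[min(level + 1, 2)].append((name, remaining))

if __name__ == "__main__":
    mlfq([("A", 30), ("B", 6), ("C", 20)])   # hypothetical burst times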
26
MULTIPROCESSOR
SCHEDULING
Affinity and Load balancing
27
Multiple-Processor Scheduling
CPU scheduling becomes more complex when multiple CPUs are
involved.
Approaches to multiple-processor scheduling:
Asymmetric multiprocessing, in which one processor is the master,
controlling all activities and running all kernel code, while the others run
only user code.
This approach is relatively simple, as there is no need to
share critical system data.
Symmetric multiprocessing (SMP), where each processor schedules
its own jobs, either from a common ready queue or from separate
ready queues for each processor.
Currently the most common approach (Windows XP, Solaris, Linux, etc.)
28
Processor Affinity
Processors contain cache memory, which speeds up repeated
accesses to the same memory locations.
If a process were to switch from one processor to another each time it
got a time slice,
the data in the cache (for that process) would have to be invalidated on
the first processor, and
the cache of the second processor would have to be repopulated from main memory,
thereby negating the benefit of the cache.
Therefore SMP systems attempt to keep each process on the same
processor. This is called processor affinity.
Soft affinity – the system attempts to keep processes on the same
processor but makes no guarantees. https://youtu.be/4Hef8nxWuYU
Hard affinity – a process can specify that it is not to be moved between
processors, e.g., supported by Linux and some other OSes
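As a concrete illustration (not from the slides): on Linux, a process can request hard affinity through the sched_setaffinity system call, exposed in Python as os.sched_setaffinity. A minimal sketch:

import os

# Hard-affinity sketch (Linux-only): pin the calling process to CPU 0.
pid = 0                                   # 0 means "the calling process"
print("allowed CPUs before:", os.sched_getaffinity(pid))
os.sched_setaffinity(pid, {0})            # from now on the kernel schedules us only on CPU 0
print("allowed CPUs after: ", os.sched_getaffinity(pid))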
29
Multiple-Processor Scheduling – Load Balancing
31
Multiple-Processor Scheduling – Load Balancing
Load balancing can be achieved through:
Push migration – a periodic task checks the load on each processor
and, if it finds an imbalance, pushes tasks from overloaded CPUs to idle or less-busy CPUs
Pull migration – an idle processor pulls a waiting task from a busy
processor
Push and pull migration are not mutually exclusive.
Note
Load balancing works against affinity:
load balancing moves processes from CPU to CPU to even out the load, while
affinity tries to keep a process running on the same processor.
If this is not carefully managed, the savings gained by balancing the system can
be lost in rebuilding caches.
One option is to allow migration only when the imbalance
surpasses a given threshold.
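A toy Python sketch of threshold-based push migration (purely illustrative; the queue contents and the threshold value are assumptions):

# Threshold-based push migration: move tasks from the busiest to the idlest CPU
# only while the imbalance in ready-queue length exceeds a threshold.
def balance(run_queues, threshold=2):
    """run_queues: dict {cpu_id: list_of_task_names}. Mutates it in place."""
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    idlest = min(run_queues, key=lambda c: len(run_queues[c]))
    while len(run_queues[busiest]) - len(run_queues[idlest]) > threshold:
        task = run_queues[busiest].pop()          # take a task off the overloaded CPU...
        run_queues[idlest].append(task)           # ...and push it to the idle one
        print(f"migrated {task}: CPU{busiest} -> CPU{idlest}")

if __name__ == "__main__":
    queues = {0: ["T1", "T2", "T3", "T4", "T5"], 1: ["T6"]}
    balance(queues)   # migrates T5 (5 vs 1); then 4 vs 2 is within the threshold, so it stops
    print(queues)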
32
REALTIME SCHEDULING
Rate Monotonic Scheduling
Earliest Deadline Scheduling
33
Real-Time CPU Scheduling
Real-time operating systems present obvious
scheduling challenges, because the time at
which tasks complete is crucial to their
performance. There are two types:
1. Soft real-time systems – no
guarantee as to when a critical real-time
process will be scheduled
Performance degrades if their timing needs
cannot be met. Example: streaming video.
2. Hard real-time systems – a task must
be serviced by its deadline
Two types of latencies affect the
performance of real-time systems (RTS):
1. Interrupt latency – time from the arrival of an interrupt to the
start of the routine that services it
2. Dispatch latency – time for the scheduler to take the
current process off the CPU and switch to another
34
Priority-based Scheduling
For real-time scheduling, the scheduler must support preemptive, priority-
based scheduling
But this only guarantees soft real-time behavior
For hard real-time, the scheduler must also provide the ability to meet
deadlines
Hard real-time systems are often characterized by tasks that must run at regular
periodic intervals
Each periodic task has a processing time t, a deadline d, and a period p,
with 0 ≤ t ≤ d ≤ p
36
Rate Monotonic Scheduling (RMS)
The rate-monotonic scheduling algorithm uses pre-emptive scheduling
with static priorities.
A priority is assigned based on the inverse of its period
Shorter periods = higher priority;
Longer periods = lower priority
Example: assume we have two periodic processes P1 and P2 with the following
characteristics
P1 has period p1 = 50, processing time t1 = 20, and a deadline that matches its period (d1 = 50).
P2 has p2 = 100, t2 = 35, and d2 = 100.
The CPU utilization is 20/50 = 0.4 for P1 and 35/100 = 0.35 for P2,
i.e., 0.75 (75%) overall.
Since P1 has the shorter period, it is assigned a higher priority than P2.
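For illustration (not in the original slides), a small Python simulation of rate-monotonic scheduling for these two tasks; it checks that both P1 and P2 meet every deadline over one hyperperiod of 100 ms.

# Rate-monotonic scheduling: fixed priorities, shorter period = higher priority.
def rms(tasks, horizon):
    """tasks: dict {name: (period, burst)}. Simulates 1-ms ticks up to `horizon`
    and returns True if every job meets its deadline (deadline = end of period)."""
    remaining = {n: 0 for n in tasks}            # remaining work of the current job
    for t in range(horizon):
        for n, (period, burst) in tasks.items():
            if t % period == 0:                  # a new job of task n is released
                if remaining[n] > 0:             # previous job still unfinished...
                    return False                 # ...so it missed its deadline
                remaining[n] = burst
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            # Highest priority = smallest period.
            current = min(ready, key=lambda n: tasks[n][0])
            remaining[current] -= 1
    return all(r == 0 for r in remaining.values())

if __name__ == "__main__":
    print(rms({"P1": (50, 20), "P2": (100, 35)}, horizon=100))   # True: schedulable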
37
Missed Deadlines with Rate Monotonic Scheduling
38
Algorithm Evaluation
1. Deterministic modeling
Type of analytic evaluation
Takes a particular predetermined workload and defines the
performance of each algorithm for that workload
Consider 5 processes arriving at time 0:
42
Deterministic Evaluation
For each algorithm, calculate the average waiting time for the given workload
(and pick the algorithm with the minimum)
Simple and fast, but it requires exact numbers for input and its results apply only
to those inputs
FCFS gives an average waiting time of 28 ms
RR gives 23 ms
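A Python sketch of deterministic evaluation (added for illustration; the five burst times below are assumed, since the original workload table is not reproduced here; with these values FCFS gives 28 ms and RR with q = 10 gives 23 ms, matching the figures above).

from collections import deque

# Deterministic evaluation: run the algorithms on one fully specified workload.
# Assumed workload: five processes, all arriving at time 0, burst times in ms.
WORKLOAD = [("P1", 10), ("P2", 29), ("P3", 3), ("P4", 7), ("P5", 12)]

def avg_wait_fcfs(jobs):
    clock, total = 0, 0
    for _, burst in jobs:
        total += clock            # each job waits for everything queued before it
        clock += burst
    return total / len(jobs)

def avg_wait_rr(jobs, q):
    remaining = dict(jobs)
    last_ready = {n: 0 for n, _ in jobs}
    waiting = {n: 0 for n, _ in jobs}
    queue, clock = deque(n for n, _ in jobs), 0
    while queue:
        n = queue.popleft()
        waiting[n] += clock - last_ready[n]
        run = min(q, remaining[n])
        clock += run
        remaining[n] -= run
        if remaining[n]:          # quantum expired: back of the queue
            last_ready[n] = clock
            queue.append(n)
    return sum(waiting.values()) / len(waiting)

if __name__ == "__main__":
    print("FCFS:", avg_wait_fcfs(WORKLOAD))       # 28.0 ms
    print("RR q=10:", avg_wait_rr(WORKLOAD, 10))  # 23.0 ms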
43
Queueing Models
44
Little’s Formula: n = λ × W, where n is the average queue length, λ is the average arrival rate into the queue, and W is the average waiting time in the queue
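For example (an illustrative calculation with assumed numbers): if processes arrive at a rate of λ = 7 per second and each spends W = 2 seconds in the queue on average, then by Little’s formula the queue contains n = 7 × 2 = 14 processes on average.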
45
Simulations
46
Evaluation of CPU Schedulers by Simulation
47
Implementation
48
Scheduling Summary
49
Deadlocks
What is the goal?
51
System Model
System consists of resources
Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
request
use
release
If a request cannot be granted:
– the requesting process can block, or
– the requesting process can be returned an error code and then try
requesting the resource again
52
Formal definition of deadlock
A set of processes is deadlocked if each process in the set is waiting
for an event that only another process in the set can cause
Usually the event is release of a currently held resource
When deadlock occurs, none of the processes can:
run
release resources
be awakened
53
Questions
Is deadlock the same as starvation (indefinite postponement)?
Can you mention some Real life analogies such as:
“You take the monitor, I grab the keyboard”
Cars in intersection
54
Deadlock Characterization
Deadlock can arise only if four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait.
55
Resource-Allocation Graph
A resource-allocation graph is a directed graph used to model deadlock.
It has a set of vertices V and a set of edges E.
The vertices V are partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the
system
The edges E are of two types:
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi
56
Resource-Allocation Graph (Cont.)
request edge Pi → Rj : process Pi requests an instance of Rj
assignment edge Rj → Pi : process Pi is holding an instance of Rj
57
Example of a Resource Allocation Graph
One instance of R1
Two instances of R2
One instance of R3
Three instances of R4
P1 holds one instance of R2 and is
waiting for an instance of R1
P2 holds one instance of R1, one
instance of R2, and is waiting for
an instance of R3
P3 holds one instance of R3
58
Resource Allocation Graph With A Deadlock
59
Graph With A Cycle But No Deadlock
Here we also have a cycle:
P1→R1→P3→R2→P1
But no deadlock since
Process P4 may release its instance of resource type R2 and that
resource can then be allocated to P3, breaking the cycle.
Process P2 may release its instance of resource type R1 and that
resource can then be allocated to P1, breaking the cycle.
60
Basic Facts
61
Methods for Handling Deadlocks
62
Approach 1: The ostrich algorithm
Pretend there is no problem
Reasonable if
– deadlocks occur very rarely
– the cost of prevention is high
Example of “cost”: allowing only one process to run at a time (no parallelism)
UNIX and Windows take this approach for some of the more complex
resource relationships they manage
63
Approach 2: Deadlock Prevention
Deadlocks can be prevented by ensuring that at least one of the four
conditions required for deadlock cannot hold
64
Cont’d...
Attacking the mutual-exclusion condition (let processes use
resources simultaneously)
Mutual exclusion is not required for sharable resources (e.g., read-only files);
but removing it where it is needed may result in race conditions
Not feasible in general (some devices/resources are intrinsically not
sharable)
e.g., a printer or a tape drive
Attacking the hold-and-wait condition – require each process to request and be
allocated all its resources before it begins execution, so that
a process never has to wait for what it needs
Issues
it may not know its required resources at the start of the run
not always possible
Another alternative:
allow a process to request resources only when the process has none
allocated to it
65
Deadlock Prevention (Cont.)
66
Example: assume we have plotter and scanner resources, numbered 1 and 2
respectively. The following RAG shows a deadlock situation
67
Summary of approaches to deadlock prevention
Condition Approach
• Mutual Exclusion • Not feasible
• Hold and Wait • Request resources initially
• No Preemption • Take resources away
• Circular Wait • Order resources numerically
69
Approach 3. Deadlock Avoidance
This approach requires that the system has some additional a priori
information available
70
Safe State
When a process requests an available resource, system must decide
if immediate allocation leaves the system in a safe state
A state is safe if the system can allocate all resources requested by all
processes ( up to their stated maximums ) without entering a deadlock
state.
Example (assume one resource type, tape drives, with 12 instances in total):
Process   Maximum Needs   Current Allocation
P0        10              5
P1        4               2
P2        9               2
Is this a safe state? Yes: 3 drives are free, and
the sequence <P1, P0, P2> satisfies the safety condition.
What happens to the above table if process P2 requests and is
granted one more tape drive?
72
Basic Facts
If a system is in a safe state ⇒ no deadlocks
If a system is in an unsafe state ⇒ possibility of deadlock
Avoidance ⇒ ensure that the system never enters an unsafe state
73
Deadlock Avoidance Algorithms
74
Resource-Allocation Graph Scheme
Claim edge Pi → Rj (conventionally drawn as a dashed edge)
indicates that process Pi may request resource Rj at some time in the future;
75
Resource-Allocation Graph
76
Unsafe State In Resource-Allocation Graph
The resulting resource-allocation graph would have a cycle in it, and so
the request cannot be granted.
77
Banker’s Algorithm
78
Banker’s Algorithm
When a process gets all its resources it must return them in a finite
amount of time
Safe State
there is some scheduling order in which every process can run to
completion
From a safe state, the system can guarantee that all processes will
finish
Unsafe state: no such guarantee
Not a deadlock state
Some process may be able to complete
79
Data Structures for the Banker’s Algorithm
80
Safety Algorithm
81
Resource-Request Algorithm for Process Pi
• Requesti = request vector for process Pi.
If Requesti[j] = k, then process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the
resources are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as
follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
4. Run the safety algorithm on the resulting state: if it is safe, the resources are
allocated to Pi; if it is unsafe, Pi must wait and the old resource-allocation
state is restored
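For illustration (not part of the slides), a compact Python sketch of the safety algorithm and the resource-request algorithm above; the usage example reuses the Allocation, Max, Need and Available values from the example slides that follow and reproduces the conclusion that P1's request (1,0,2) can be granted.

# Banker's algorithm sketch: safety check + resource-request check.
def is_safe(available, allocation, need):
    work = available[:]
    finish = [False] * len(allocation)
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion and then release everything it holds.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

def bankers_request(i, request, available, allocation, need):
    """Return True if granting `request` to Pi would keep the system in a safe state."""
    if any(request[j] > need[i][j] for j in range(len(request))):
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(len(request))):
        return False                      # must wait: resources not available
    # Pretend to allocate, then test safety.
    new_avail = [available[j] - request[j] for j in range(len(request))]
    new_alloc = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    for j in range(len(request)):
        new_alloc[i][j] += request[j]
        new_need[i][j] -= request[j]
    safe, order = is_safe(new_avail, new_alloc, new_need)
    return safe

if __name__ == "__main__":
    allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
    maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
    need       = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
    available  = [3,3,2]
    print(bankers_request(1, [1,0,2], available, allocation, need))   # True: safe to grant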
82
Example of Banker’s Algorithm
83
Example (Cont.)
Process   Need (A B C)   Available (A B C)
P0        7 4 3          3 3 2
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1
84
Example: P1 Request (1,0,2)
Check that Request ≤ Available: (1,0,2) ≤ (3,3,2) ⇒ true, so we pretend to allocate and get:
Process   Allocation (A B C)   Need (A B C)   Available (A B C)
P0        0 1 0                7 4 3          2 3 0
P1        3 0 2                0 2 0
P2        3 0 2                6 0 0
P3        2 1 1                0 1 1
P4        0 0 2                4 3 1
Executing the safety algorithm shows that the sequence <P1, P3, P4,
P0, P2> satisfies the safety requirement
Therefore, the request by P1 can be granted
Questions
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
85
Another Example: Is allocation (0 2 0) to P0 Safe?
95
So Unsafe State ⟹ Do Not Enter
96
P0 Suspended Pending Request
• Restore the system to the state before the request.
Process Alloc Max Need Available
A B C A B C A B C A B C
P0 0 1 0 7 5 3 7 4 3 2 3 0
P1 3 0 2 3 2 2 0 2 0
P2 3 0 0 9 0 2 6 0 2
P3 2 1 1 2 2 2 0 1 1
P4 0 0 2 4 3 3 4 3 1
• When enough resources become available, P0 can
awaken
97
Results
P0’s request for 2 Bs cannot be granted, because granting it would leave the
system in an unsafe state: the remaining resources could not guarantee that
every process completes if each requests its maximum claim
98
Deadlock Detection
Detection algorithm
Recovery scheme
101
Single Instance of Each Resource Type
Maintain wait-for graph
Nodes are processes
Pi → Pj if Pi is waiting for Pj
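For illustration (not in the slides), a minimal DFS cycle check over a wait-for graph; the edges in the usage example are hypothetical.

# Deadlock detection for single-instance resources: a cycle in the
# wait-for graph means the processes on the cycle are deadlocked.
def has_cycle(wait_for):
    """wait_for: dict {process: set_of_processes_it_waits_for}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

if __name__ == "__main__":
    # Hypothetical example: P1 waits for P2, P2 waits for P3, P3 waits for P1.
    print(has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))   # True (deadlock)
    print(has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": set()}))    # False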
102
Resource-Allocation Graph and Wait-for Graph for
deadlock detection
103
Several Instances of a Resource Type
104
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi; Finish[i] = true; go to step 2
4. If Finish[i] == false for some i (1 ≤ i ≤ n), then the system is in a
deadlocked state; moreover, each Pi with Finish[i] == false is deadlocked
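A Python sketch of this detection algorithm (illustrative; the Allocation, Request and Available values in the usage example are hypothetical).

# Deadlock detection for multiple-instance resources (Banker-style bookkeeping,
# but using each process's *current* Request instead of its maximum Need).
def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = available[:]
    # A process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    return [f"P{i}" for i in range(n) if not finish[i]]   # the deadlocked processes

if __name__ == "__main__":
    # Hypothetical state with two resource types (A, B):
    # P0 holds A and wants B, P1 holds B and wants A, P2 holds and wants nothing.
    allocation = [[1, 0], [0, 1], [0, 0]]
    request    = [[0, 1], [1, 0], [0, 0]]
    available  = [0, 0]
    print(detect_deadlock(available, allocation, request))   # ['P0', 'P1'] are deadlocked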
105
Example of Detection Algorithm
Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
106
Example (Cont.)
State of system?
Can reclaim the resources held by process P0, but there are insufficient
resources to fulfill the other processes’ requests
A deadlock exists, consisting of processes P1, P2, P3, and P4
107
Detection-Algorithm Usage
When, and how often, to invoke depends on:
how often is a deadlock likely to occur
how many processes will be affected by deadlock when it happens
108
Recovery From Deadlock
There are two options for breaking a deadlock
1. Process Termination:
a. Kill all deadlocked processes and release resources
b. Kill one deadlocked process at a time and release its resources
In which order should we choose to abort?
Priority of the process
How long process has computed, and how much longer to completion
Resources the process has used
Resources process needs to complete
How many processes will need to be terminated
Is process interactive or batch?
2. Resource Preemption: successively preempt some resources from
processes and give to other processes until the deadlock cycle is broken.
Issues that need to be addressed in this approach:
Selecting a victim: Which resources and which processes are to be preempted?
Rollback – return process to some safe state and restart it from that state
Starvation – same process may always be picked as victim
109
Deadlock Summary
In general, deadlock detection or avoidance is
expensive
Must evaluate cost of deadlock against detection or
avoidance costs
Deadlock avoidance and recovery may cause
starvation problems
UNIX and Windows use the ostrich algorithm
110
End of Chapter 6
111