U-III Deadlocks
A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause. Because all the processes are waiting, none of them will
ever cause any event that could wake up any of the other members of the set, and all the
processes continue to wait forever.
In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change
state, because the resources it has requested are held by other waiting processes. This
situation is called a deadlock.
System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources may be partitioned into several types (or classes), each
consisting of some number of identical instances. CPU cycles, files, and I/O devices (such as
printers and DVD drives) are examples of resource types. A process must request a resource
before using it and must release the resource after using it. A process may request as many
resources as it requires to carry out its designated task. Obviously, the number of resources
requested may not exceed the total number of resources available in the system.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request. The process requests the resource. If the request cannot be granted immediately
(for example, if the resource is being used by another process), then the requesting process
must wait until it can acquire the resource.
2. Use. The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).
3. Release. The process releases the resource.
Coffman et al. (1971) showed that four conditions must hold for there to be a deadlock:
1. Mutual exclusion. Each resource is either currently assigned to exactly one process or
is available.
2. Hold-and-wait. Processes currently holding resources that were granted earlier can
request new resources.
3. No-preemption. Resources previously granted cannot be forcibly taken away from a
process. They must be explicitly released by the process holding them.
4. Circular wait. There must be a circular list of two or more processes, each of which is
waiting for a resource held by the next member of the chain.
All four of these conditions must be present for a resource deadlock to occur. If one of them
is absent, no resource deadlock is possible.
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The
set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes
and R = {R1, R2, ..., Rm}, the set consisting of all resource types
• Request edge: A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
• Assignment edge: A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi.
Note that a request edge points only to the rectangle Rj, whereas an assignment edge must also designate one of the dots in the rectangle.
Given the definition of a resource-allocation graph, it can be shown that, if the graph contains
no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then
a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred. If the cycle involves only a set of resource types, each of which has only a single
instance, then a deadlock has occurred. Each process involved in the cycle is deadlocked. In
this case, a cycle in the graph is both a necessary and a sufficient condition for the existence
of deadlock. If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a
sufficient condition for the existence of deadlock. As described in Fig. (d), even though there is a cycle, there is no deadlock, because resource Rb has two instances, one of which is free and can be allocated to P2, thus breaking the cycle.
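To make the cycle test concrete, the following Python sketch represents a resource-allocation graph as an adjacency list (request edges P → R, assignment edges R → P) and uses a depth-first search to check for a cycle. The process and resource names are hypothetical, chosen only for illustration.

def has_cycle(edges):
    # Build an adjacency list from the directed edges of the graph.
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited, on the current DFS path, finished
    colour = {node: WHITE for node in graph}

    def dfs(node):
        colour[node] = GREY
        for nxt in graph[node]:
            if colour[nxt] == GREY:   # back edge: a cycle exists
                return True
            if colour[nxt] == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in graph)

# Hypothetical single-instance graph: P1 holds Ra and requests Rb,
# while P2 holds Rb and requests Ra.
edges = [("Ra", "P1"), ("P1", "Rb"), ("Rb", "P2"), ("P2", "Ra")]
print(has_cycle(edges))               # True: with single instances this is a deadlock

Since every resource here has a single instance, the cycle found by the search implies a deadlock; with multiple instances the same test would only signal that a deadlock may exist.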
Methods for Handling Deadlocks
1. Ignorance: Just ignore the problem. Maybe if you ignore it, it will ignore you.
2. Detection and recovery: Let them occur, detect them, and take action.
3. Dynamic avoidance: by careful resource allocation.
4. Prevention: by structurally negating one of the four conditions.
Deadlock Detection
The deadlock detection algorithm is based on comparing vectors. Let C be the current allocation matrix (row i gives the resources currently held by process Pi), R the request matrix (row i gives the resources Pi is still requesting), and A the vector of resources currently available. Each process is initially said to be unmarked. As the algorithm progresses, processes will be marked, indicating that they are able to complete and are thus not deadlocked. When the algorithm terminates, any unmarked processes are known to be deadlocked. This algorithm assumes a worst-case scenario: all processes keep all acquired resources until they exit.
The deadlock detection algorithm works as follows.
1. Look for an unmarked process, Pi, for which the ith row of R is less than or equal to
A.
2. If such a process is found, add the ith row of C to A, mark the process, and go back to
step 1.
3. If no such process exists, the algorithm terminates.
When the algorithm finishes, all the unmarked processes, if any, are deadlocked.
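A minimal Python sketch of this marking procedure is shown below. The matrices C and R and the vector A follow the definitions above; the sample values at the end are purely illustrative.

def find_deadlocked(C, R, A):
    # C[i]: resources currently held by process i; R[i]: its outstanding requests;
    # A: vector of currently available resources.
    A = list(A)
    marked = [False] * len(C)
    progress = True
    while progress:
        progress = False
        for i in range(len(C)):
            # Step 1: an unmarked process whose entire request can be met now.
            if not marked[i] and all(r <= a for r, a in zip(R[i], A)):
                # Step 2: assume it runs to completion and returns what it holds.
                A = [a + c for a, c in zip(A, C[i])]
                marked[i] = True
                progress = True
    # Step 3: whatever is still unmarked can never proceed, i.e. it is deadlocked.
    return [i for i in range(len(C)) if not marked[i]]

# Illustrative data: two processes and two resource types, each process holding one
# resource and requesting the other, with nothing available.
C = [[1, 0], [0, 1]]
R = [[0, 1], [1, 0]]
A = [0, 0]
print(find_deadlocked(C, R, A))       # [0, 1]: both processes are deadlocked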
To run the detection algorithm, we look for a process whose resource request can be satisfied. The first one cannot be satisfied because there is no Blu-ray drive available. The second cannot be satisfied either, because there is no scanner free. Fortunately, the third one can be satisfied, so process 3 runs and eventually returns all its resources, giving A = (2 2 2 0).
At this point process 2 can run and return its resources, giving A = (4 2 2 1). Now the remaining process can run. There is no deadlock in the system.
Now consider a minor variation of the situation. Suppose that process 3 needs a Blu-ray drive
as well as the two tape drives and the plotter. None of the requests can be satisfied, so the
entire system will eventually be deadlocked.
Even if we give process 3 its two tape drives and one plotter, the system deadlocks when it
requests the Blu-ray drive.
Resource Trajectories
The idea behind deadlock avoidance can be illustrated with a diagram of the joint progress of two processes, A and B, each of which needs the printer and the plotter for part of its execution. Every point in the diagram represents a joint state of the two processes. Initially, the state is at p, with neither process having executed any instructions. If the scheduler chooses to run A first, we get to the point q, in which A has executed some number of instructions, but B has executed none. At point q the trajectory becomes vertical, indicating that the scheduler has chosen to run B. With a single processor, all paths must be horizontal or vertical, never diagonal.
When A crosses the I1 line on the path from r to s, it requests and is granted the printer.
When B reaches point t, it requests the plotter. The regions that are shaded are especially
interesting. The region with lines slanting from southwest to northeast represents both
processes having the printer. The mutual exclusion rule makes it impossible to enter this
region. Similarly, the region shaded the other way represents both processes having the
plotter and is equally impossible.
If the system ever enters the box bounded by I1 and I2 on the sides and by I5 and I6 on the top and
bottom, it will eventually deadlock when it gets to the intersection of I2 and I6. At this point,
A is requesting the plotter and B is requesting the printer, and both are already assigned. The
entire box is unsafe and must not be entered. At point t the only safe thing to do is run
process A until it gets to I4. Beyond that, any trajectory to u will do.
The important thing to see here is that at point t, B is requesting a resource. The system must
decide whether to grant it or not. If the grant is made, the system will enter an unsafe region
and eventually deadlock. To avoid the deadlock, B should be suspended until A has requested
and released the plotter.
Safe and Unsafe States
A state is said to be safe if there is some scheduling order in which every process can run to completion, even if all of them suddenly request their maximum number of resources immediately.
The state of Fig. (a) is safe because there exists a sequence of allocations that allows all processes to complete. The order of execution is: first process B, resulting in state (c); then process C, resulting in state (d); and lastly process A.
Thus, the state of Fig. (a) is safe because the system, by careful scheduling, can avoid
deadlock.
Unsafe state
Now suppose we have the initial state shown in Fig.(a), but this time A requests and gets
another resource, giving Fig.(b). Can we find a sequence that is safe? Let us try. The
scheduler could run B until it asked for all its resources, as shown in Fig.(c). Eventually, B
completes and we get the state of Fig.(d). At this point we are stuck. We only have four
instances of the resource free, and each of the active processes needs five. There is no
sequence that guarantees completion. Thus, the allocation decision that moved the system
from Fig (a) to Fig. (b) went from a safe to an unsafe state. In retrospect, A’s request should
not have been granted.
It is worth noting that an unsafe state is not a deadlocked state. Starting at Fig.(b), the
system can run for a while. In fact, one process can even complete. Furthermore, it is possible
that A might release a resource before asking for any more, allowing C to complete and
avoiding deadlock altogether. Thus, the difference between a safe state and an unsafe state is
that from a safe state the system can guarantee that all processes will finish; from an unsafe
state, no such guarantee can be given.
The Banker’s Algorithm for a Single Resource
The banker’s algorithm considers each request as it occurs, seeing whether granting it leads to a safe state. If it does, the request is granted; otherwise, it is postponed until later. To see if a state is safe, the OS checks whether it has enough resources to satisfy some process. If so, those loans are assumed to be repaid, and the process now closest to its limit is checked, and so on. If all loans can eventually be repaid, the state is safe and the initial request can be granted.
If the system has N processes, each with a maximum request of two resources, and R resources in total, the system will be deadlock free in all cases if N <= R - 1.
More generally, consider a system with P processes, each needing a maximum of N resources, and a total of R resources available. The condition that must hold to make the system deadlock free in all cases is R >= P(N-1) + 1. For example, with P = 3 processes, each needing at most N = 4 resources, R = 3(4-1) + 1 = 10 resources guarantee that deadlock cannot occur.
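A minimal Python sketch of the single-resource safety check is shown below. The per-process loans and credit limits are illustrative numbers, chosen only to show one safe and one unsafe state.

def is_safe(holds, max_need, free):
    # holds[i]: what process i has now; max_need[i]: the most it will ever ask for;
    # free: what the banker still has on hand.
    still_need = [m - h for m, h in zip(max_need, holds)]
    finished = [False] * len(holds)
    while not all(finished):
        for i in range(len(holds)):
            # Pick a process whose remaining need fits in the free pool.
            if not finished[i] and still_need[i] <= free:
                free += holds[i]      # it runs to completion and returns its resources
                finished[i] = True
                break
        else:
            return False              # no process can finish: the state is unsafe
    return True

# Illustrative numbers: 10 resource units in total.
print(is_safe(holds=[1, 1, 2, 4], max_need=[6, 5, 4, 7], free=2))   # True  (safe)
print(is_safe(holds=[2, 1, 2, 4], max_need=[6, 5, 4, 7], free=1))   # False (unsafe)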
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to Pi by modifying the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting state is safe, the transaction is completed and Pi is allocated its resources. If the new state is unsafe, Pi must wait and the old resource-allocation state is restored.
Example :
At time T0, the following snapshot of the system has been taken:
       Allocation     Max       Available
       A B C          A B C     A B C
P0     0 1 0          7 5 3     3 3 2
P1     2 0 0          3 2 2
P2     3 0 2          9 0 2
P3     2 1 1          2 2 2
P4     0 0 2          4 3 3
Answer the following questions using the banker’s algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (1,0,2), can the request be granted
immediately?
a. The content of the matrix Need is defined to be Max − Allocation and is as follows:
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
b. The system is currently in a safe state, and the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
c. Since (1,0,2) <= (3,3,2) = Available and (1,0,2) + (2,0,0) = (3,0,2) <= (3,2,2) = Max1, the request can be granted if we can find a safe sequence. The new system state, assuming the request is granted, will be: Allocation1 = (2,0,0) + (1,0,2) = (3,0,2), Need1 = (1,2,2) - (1,0,2) = (0,2,0), and Available = (3,3,2) - (1,0,2) = (2,3,0).
       Allocation     Need      Available
       A B C          A B C     A B C
P0     0 1 0          7 4 3     2 3 0
P1     3 0 2          0 2 0
P2     3 0 2          6 0 0
P3     2 1 1          0 1 1
P4     0 0 2          4 3 1
The sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. Hence, we can
immediately grant the request of process P1.
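The same reasoning can be checked mechanically. The Python sketch below encodes the snapshot at time T0, computes Need = Max - Allocation, runs the safety algorithm, and then applies the resource-request steps to P1's request (1,0,2); the helper names are my own, but the data is taken from the example above.

# Snapshot at time T0, taken from the example above.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxm  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
avail = [3, 3, 2]

def need(alloc, maxm):
    # Need = Max - Allocation, row by row.
    return [[m - a for m, a in zip(mrow, arow)] for mrow, arow in zip(maxm, alloc)]

def safe_sequence(alloc, maxm, avail):
    # Safety algorithm: repeatedly pick a process whose Need fits in the work vector.
    nd = need(alloc, maxm)
    work = list(avail)
    finished = [False] * len(alloc)
    seq = []
    while len(seq) < len(alloc):
        for i in range(len(alloc)):
            if not finished[i] and all(n <= w for n, w in zip(nd[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                seq.append(i)
                break
        else:
            return None               # no process can proceed: the state is unsafe
    return seq

print(need(alloc, maxm))                   # the Need matrix from part (a)
print(safe_sequence(alloc, maxm, avail))   # a safe ordering exists, so part (b) holds

# Part (c): pretend to grant P1's request (1,0,2), then re-run the safety check.
req, i = [1, 0, 2], 1
if all(r <= n for r, n in zip(req, need(alloc, maxm)[i])) and \
   all(r <= a for r, a in zip(req, avail)):
    avail = [a - r for a, r in zip(avail, req)]
    alloc[i] = [a + r for a, r in zip(alloc[i], req)]
    print(safe_sequence(alloc, maxm, avail))   # still safe, so grant the request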
DEADLOCK PREVENTION
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at
least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion
The mutual exclusion condition must hold. That is, at least one resource must be non-sharable. Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource.
A process never needs to wait for a sharable resource. In general, however, we cannot prevent
deadlocks by denying the mutual-exclusion condition, because some resources are
intrinsically non-sharable. For example, a mutex lock cannot be simultaneously shared by
several processes.
Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources. One
protocol requires each process to request and be allocated all its resources before it begins
execution. It can be implemented by requiring that system calls requesting resources for a
process precede all other system calls. An alternative protocol allows a process to request
resources only when it has none. A process may request some resources and use them. Before
it can request any additional resources, it must release all the resources that it is currently
allocated.
Both these protocols have two main disadvantages. First, resource utilization may be low, since resources may be allocated but remain unused for a long period. Second, starvation is possible. A process that needs several popular resources may have to wait indefinitely, because at least one of the resources that it needs is always allocated to some other process.
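As a rough sketch of the first protocol (request everything before starting), the Python fragment below models resources as locks and acquires the whole set atomically under a single allocation lock, releasing everything if any resource is unavailable. The resource names and the busy-wait retry are illustrative simplifications.

import threading

# Resources modelled as locks; the names are illustrative.
printer = threading.Lock()
scanner = threading.Lock()
allocation_lock = threading.Lock()    # serializes whole-set acquisition

def acquire_all(resources):
    # Either take every resource in the list or take none of them,
    # so a process never holds some resources while waiting for others.
    with allocation_lock:
        got = []
        for r in resources:
            if r.acquire(blocking=False):
                got.append(r)
            else:
                for g in got:         # could not get everything: give it all back
                    g.release()
                return False
        return True

def job():
    while not acquire_all([printer, scanner]):
        pass                          # in a real system, sleep or block on a condition
    try:
        print("using the printer and the scanner")
    finally:
        printer.release()
        scanner.release()

threading.Thread(target=job).start()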
No Preemption
The third necessary condition for deadlocks is that there be no pre-emption of resources that
have already been allocated. To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all resources the
process is currently holding are pre-empted. In other words, these resources are implicitly
released. The process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting. Alternatively, if a process requests some resources, we first
check whether they are available. If they are, we allocate them. If they are not, we check
whether they are allocated to some other process that is waiting for additional resources. If
so, we preempt the desired resources from the waiting process and allocate them to the
requesting process. If the resources are neither available nor held by a waiting process, the
requesting process must wait. While it is waiting, some of its resources may be preempted,
but only if another process requests them. A process can be restarted only when it is allocated
the new resources it is requesting and recovers any resources that were pre-empted while it
was waiting.
This protocol is often applied to resources whose state can be easily saved and restored later,
such as CPU registers and memory space. It cannot generally be applied to such resources as
mutex locks and semaphores.
Circular Wait
One way to ensure that this condition never holds is to impose a total ordering of all resource
types and to require that each process requests resources in an increasing order of
numeration. Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two resources and to
determine whether one precedes another in our ordering.
F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12
We can now consider the following protocol to prevent deadlocks: each process can request resources only in an increasing order of enumeration. That is, a process can initially request any number of instances of a resource type, say Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).
Processes can request resources whenever they want to, but all requests must be made in
numerical order. A process may request first a scanner and then a tape drive, but it may not
request first a plotter and then a scanner.
With this rule, the resource-allocation graph can never have cycles. To see why, consider two processes, A and B, where A holds resource i and B holds resource j. We can get a deadlock only if A requests resource j and B requests resource i. Assuming i and j are distinct resources, they will have different numbers. If i > j, then A is not allowed to request j because that is lower than what it already has. If i < j, then B is not allowed to request i because that is lower than what it already has. Either way, deadlock is impossible.
With more than two processes, the same logic holds. At every instant, one of the assigned
resources will be highest. The process holding that resource will never ask for a resource
already assigned. It will either finish, or at worst, request even higher-numbered resources, all
of which are available. Eventually, it will finish and free its resources. At this point, some
other process will hold the highest resource and can also finish. In short, there exists a
scenario in which all processes finish, so no deadlock is present.
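The ordering rule is straightforward to enforce in code. In the Python sketch below, each resource type carries a rank playing the role of F above, and a helper acquires a set of locks strictly in increasing rank, refusing any request whose rank does not exceed the highest rank already held. The rank values and resource names are illustrative.

import threading

# Each resource type is given a fixed rank, playing the role of F (values illustrative).
RANK = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in RANK}

def acquire_in_order(names, held=()):
    # Requests are served in increasing rank; anything at or below the highest
    # rank already held violates the ordering rule and is rejected.
    highest_held = max((RANK[h] for h in held), default=0)
    for name in sorted(names, key=RANK.get):
        if RANK[name] <= highest_held:
            raise RuntimeError(name + " violates the ordering rule")
        locks[name].acquire()
        highest_held = RANK[name]

# A process may take the tape drive and then the printer, but never the reverse order.
acquire_in_order(["tape drive", "printer"])
for name in ("printer", "tape drive"):
    locks[name].release()

Because every process climbs the same ranking, no circular wait can form, which is exactly the argument made above.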