U-III Deadlocks


DEADLOCKS

Deadlock can be defined formally as follows:

A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause. Because all the processes are waiting, none of them will
ever cause any event that could wake up any of the other members of the set, and all the
processes continue to wait forever.
In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change
state, because the resources it has requested are held by other waiting processes. This
situation is called a deadlock.

System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources may be partitioned into several types (or classes), each
consisting of some number of identical instances. CPU cycles, files, and I/O devices (such as
printers and DVD drives) are examples of resource types. A process must request a resource
before using it and must release the resource after using it. A process may request as many
resources as it requires to carry out its designated task. Obviously, the number of resources
requested may not exceed the total number of resources available in the system.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request. The process requests the resource. If the request cannot be granted immediately
(for example, if the resource is being used by another process), then the requesting process
must wait until it can acquire the resource.
2. Use. The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).
3. Release. The process releases the resource.
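
This request-use-release discipline maps directly onto code whenever a thread uses a lock-protected resource. Below is a minimal sketch in Python; the printer here is hypothetical, standing in for any single-instance resource.

```python
import threading

printer_lock = threading.Lock()    # guards a hypothetical single printer

def print_job(text):
    printer_lock.acquire()         # 1. Request: wait until the printer is free
    try:
        print(text)                # 2. Use: operate on the resource
    finally:
        printer_lock.release()     # 3. Release: make it available to others
```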

NECESSARY CONDITIONS FOR DEADLOCKS

Coffman et al. (1971) showed that four conditions must hold for there to be a deadlock:
1. Mutual exclusion. Each resource is either currently assigned to exactly one process or
is available.
2. Hold-and-wait. Processes currently holding resources that were granted earlier can
request new resources.
3. No-preemption. Resources previously granted cannot be forcibly taken away from a
process. They must be explicitly released by the process holding them.
4. Circular wait. There must be a circular list of two or more processes, each of which is
waiting for a resource held by the next member of the chain.

All four of these conditions must be present for a resource deadlock to occur. If one of them
is absent, no resource deadlock is possible.
Resource-Allocation Graph

Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The
set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes
and R = {R1, R2, ..., Rm}, the set consisting of all resource types
• Request edge: A directed edge from process Pi to resource type Rj, denoted Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
• Assignment edge: A directed edge from resource type Rj to process Pi, denoted Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi.
Note that a request edge points to only the rectangle Rj , whereas an assignment edge must
also designate one of the dots in the rectangle.

Given the definition of a resource-allocation graph, it can be shown that, if the graph contains
no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then
a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred. If the cycle involves only a set of resource types, each of which has only a single
instance, then a deadlock has occurred. Each process involved in the cycle is deadlocked. In
this case, a cycle in the graph is both a necessary and a sufficient condition for the existence
of deadlock. If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a
sufficient condition for the existence of deadlock. As shown in fig. (d), even though there
is a cycle, there is no deadlock, because there are two instances of resource Rb, one of which
is free and can be allocated to P2, thus breaking the cycle.
Methods for Handling Deadlocks

1. Ignorance : Just ignore the problem. Maybe if you ignore it, it will ignore you.
2. Detection and recovery : Let them occur, detect them, and take action.
3. Dynamic avoidance: by careful resource allocation.
4. Prevention : by structurally negating one of the four conditions.

Deadlock Ignorance : The Ostrich Algorithm


The simplest approach is the ostrich algorithm: stick your head in the sand and pretend there
is no problem. Engineers ask how often the problem is expected, how often the system
crashes for other reasons, and how serious a deadlock is. If deadlocks occur on the average
once every five years, but system crashes due to hardware failures and operating system bugs
occur once a week, most engineers would not be willing to pay a large penalty in
performance or convenience to eliminate deadlocks.

DEADLOCK DETECTION AND RECOVERY


A second technique is detection and recovery. When this technique is used, the system does
not attempt to prevent deadlocks from occurring. Instead, it lets them occur, tries to detect
when this happens, and then takes some action to recover after the fact.

Deadlock Detection with One Resource of Each Type


Let us begin with the simplest case: there is only one resource of each type. Consider a
system with seven processes, A through G, and six resources, R through W. The state of
which resources are currently owned and which ones are currently being requested is as
follows:
1. Process A holds R and wants S.
2. Process B holds nothing but wants T.
3. Process C holds nothing but wants S.
4. Process D holds U and wants S and T.
5. Process E holds T and wants V.
6. Process F holds W and wants S.
7. Process G holds V and wants U.

Algorithm for detecting deadlock:


For each node N in the graph, perform the following five steps with N as the starting node.
1. Initialize L to the empty list, and designate all arcs as unmarked.
2. Add the current node to the end of L and check whether the node now appears in L two
times. If it does, the graph contains a cycle (listed in L) and the algorithm terminates.
3. From the given node, see if there are any unmarked outgoing arcs. If so, go to step 4; if
not, go to step 5.
4. Pick an unmarked outgoing arc at random and mark it. Then follow it to the new
current node and go to step 2.
5. If this node is the initial node, the graph does not contain any cycles and the algorithm
terminates. Otherwise, we have reached a dead end. Remove it, go back to the previous
node, make that one the current node, and go to step 2.
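
A minimal Python sketch of this cycle search, applied to the seven-process state listed above. The process and resource names follow the text; the adjacency-list encoding of the graph is an assumption of the sketch.

```python
# Resource-allocation graph: process -> requested resource,
# resource -> the process currently holding it.
graph = {
    'A': ['S'], 'B': ['T'], 'C': ['S'], 'D': ['S', 'T'],
    'E': ['V'], 'F': ['S'], 'G': ['U'],
    'R': ['A'], 'T': ['E'], 'U': ['D'], 'V': ['G'], 'W': ['F'], 'S': [],
}

def find_cycle(graph):
    def dfs(node, path):
        if node in path:                      # node seen twice: cycle found
            return path[path.index(node):]
        for succ in graph.get(node, []):      # follow each outgoing arc
            cycle = dfs(succ, path + [node])
            if cycle:
                return cycle
        return None                           # dead end: back up

    for start in graph:                       # try every node as the start
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(graph))
# ['T', 'E', 'V', 'G', 'U', 'D'] -> processes D, E and G (and the resources
# T, U, V they hold) are deadlocked.
```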

Deadlock Detection with Multiple Resources of Each Type


When multiple copies of some of the resources exist, a different approach is needed to detect
deadlocks. A matrix-based algorithm is used for detecting deadlock among n processes and m
resources, each resource having multiple instances. Let C be the current allocation matrix
(row i gives the number of instances of each resource held by process Pi), R the request
matrix (row i gives the number of instances of each resource that Pi still wants), and A the
vector of currently available resources.

The deadlock detection algorithm is based on comparing vectors. Each process is initially
unmarked. As the algorithm progresses, processes are marked, indicating that they are able to
complete and are thus not deadlocked. When the algorithm terminates, any unmarked
processes are known to be deadlocked. The algorithm assumes a worst-case scenario: every
process keeps all acquired resources until it exits.
The deadlock detection algorithm works as follows.
1. Look for an unmarked process, Pi, for which the ith row of R is less than or equal to
A.
2. If such a process is found, add the ith row of C to A, mark the process, and go back to
step 1.
3. If no such process exists, the algorithm terminates.

When the algorithm finishes, all the unmarked processes, if any, are deadlocked.
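
The following Python sketch implements these three steps. The matrices C (current allocations), R (outstanding requests), and A (available resources) follow the definitions above; the example values are assumptions, chosen to be consistent with the numbers quoted in the worked example that follows.

```python
C = [[0, 0, 1, 0],   # resources currently held by process 1
     [2, 0, 0, 1],   # resources currently held by process 2
     [0, 1, 2, 0]]   # resources currently held by process 3
R = [[2, 0, 0, 1],   # resources still wanted by process 1
     [1, 0, 1, 0],   # resources still wanted by process 2
     [2, 1, 0, 0]]   # resources still wanted by process 3
A = [2, 1, 0, 0]     # resources currently available

def detect_deadlock(C, R, A):
    A = list(A)                       # work on a copy of the available vector
    marked = [False] * len(C)
    progress = True
    while progress:
        progress = False
        for i in range(len(C)):
            # Step 1: an unmarked process whose requests can all be met
            if not marked[i] and all(R[i][j] <= A[j] for j in range(len(A))):
                # Step 2: assume it runs to completion and returns its resources
                for j in range(len(A)):
                    A[j] += C[i][j]
                marked[i] = True
                progress = True
    # Step 3: any process still unmarked is deadlocked
    return [i + 1 for i in range(len(C)) if not marked[i]]

print(detect_deadlock(C, R, A))
# [] -> no deadlock; the processes can finish in the order 3, 2, 1.
```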

To run the detection algorithm, we look for a process whose resource request can be satisfied.
The first one cannot be satisfied because there is no Blu-ray drive available. The second
cannot be satisfied either, because there is no scanner free. Fortunately, the third one can be
satisfied, so process 3 runs and eventually returns all its resources, giving A = (2 2 2 0).
At this point process 2 can run and return its resources, giving A = (4 2 2 1). Now the
remaining process can run. There is no deadlock in the system.
Now consider a minor variation of the situation. Suppose that process 3 needs a Blu-ray drive
as well as the two tape drives and the plotter. None of the requests can be satisfied, so the
entire system will eventually be deadlocked.
Even if we give process 3 its two tape drives and one plotter, the system deadlocks when it
requests the Blu-ray drive.

Recovery from Deadlock


Recovery through Preemption
In some cases it may be possible to temporarily take a resource away from its current owner
and give it to another process. In many cases, manual intervention may be required,
especially in batch-processing operating systems running on mainframes. The ability to take a
resource away from a process, have another process use it, and then give it back without the
process noticing it is highly dependent on the nature of the resource. Recovering this way is
frequently difficult or impossible. Choosing the process to suspend depends largely on which
ones have resources that can easily be taken back.

Recovery through Rollback


Processes can be checkpointed periodically. Checkpointing a process means that its state is
written to a file so that it can be restarted later. The checkpoint contains not only the memory
image, but also the resource state, in other words, which resources are currently assigned to
the process. To be most effective, new checkpoints should not overwrite old ones but should
be written to new files, so as the process executes, a whole sequence accumulates. When a
deadlock is detected, it is easy to see which resources are needed. To do the recovery, a
process that owns a needed resource is rolled back to a point in time before it acquired that
resource by starting at one of its earlier checkpoints. All the work done since the checkpoint
is lost. In effect, the process is reset to an earlier moment when it did not have the resource,
which is now assigned to one of the deadlocked processes. If the restarted process tries to
acquire the resource again, it will have to wait until it becomes available.

Recovery through Killing Processes


The crudest but simplest way to break a deadlock is to kill one or more processes. One
possibility is to kill a process in the cycle. With a little luck, the other processes will be able
to continue. If this does not help, it can be repeated until the cycle is broken.
Alternatively, a process not in the cycle can be chosen as the victim in order to release its
resources. In this approach, the process to be killed is carefully chosen because it is holding
resources that some process in the cycle needs.
Where possible, it is best to kill a process that can be rerun from the beginning with no ill
effects. For example, a compilation can always be rerun because all it does is read a source
file and produce an object file. If it is killed partway through, the first run has no influence on
the second run.
On the other hand, a process that updates a database cannot always be run a second time
safely. If the process adds 1 to some field of a table in the database, running it once, killing
it, and then running it again will add 2 to the field, which is incorrect.

In practice, only processes whose work is idempotent (that is, processes that can safely be rerun from the beginning) are good candidates for killing.


DEADLOCK AVOIDANCE
The main algorithms for deadlock avoidance are based on the concept of safe and unsafe
states. Let us see a model for dealing with two processes and two resources. The horizontal
axis represents the number of instructions executed by process A. The vertical axis represents
the number of instructions executed by process B.
Process A needs a printer from I1 to I3 and the plotter from I2 to I4.
Process B needs the plotter from I5 to I7 and the printer from I6 to I8.

Every point in the diagram represents a joint state of the two processes. Initially, the state is
at p, with neither process having executed any instructions. If the scheduler chooses to run A
first, we get to the point q, in which A has executed some number of instructions, but B has
executed none. At point q the trajectory becomes vertical, indicating that the scheduler has
chosen to run B. With a single processor, all paths must be horizontal or vertical, never
diagonal.
When A crosses the I1 line on the path from r to s, it requests and is granted the printer.
When B reaches point t, it requests the plotter. The regions that are shaded are especially
interesting. The region with lines slanting from southwest to northeast represents both
processes having the printer. The mutual exclusion rule makes it impossible to enter this
region. Similarly, the region shaded the other way represents both processes having the
plotter and is equally impossible.
If the system ever enters the box bounded by I1 and I2 on the sides and I5 and I6 top and
bottom, it will eventually deadlock when it gets to the intersection of I2 and I6. At this point,
A is requesting the plotter and B is requesting the printer, and both are already assigned. The
entire box is unsafe and must not be entered. At point t the only safe thing to do is run
process A until it gets to I4. Beyond that, any trajectory to u will do.
The important thing to see here is that at point t, B is requesting a resource. The system must
decide whether to grant it or not. If the grant is made, the system will enter an unsafe region
and eventually deadlock. To avoid the deadlock, B should be suspended until A has requested
and released the plotter.
Safe and Unsafe States

A state is said to be safe if there is some scheduling order in which every process can run to
completion, even if all of them suddenly request their maximum number of resources
immediately.

The state of Fig. (a) is safe because there exists a sequence of allocations that allows all
processes to complete. The order of execution is: process B (resulting in state (c)), then
process C (resulting in state (d)), and lastly process A.
Thus, the state of Fig. (a) is safe because the system, by careful scheduling, can avoid
deadlock.

Unsafe state

Now suppose we have the initial state shown in Fig.(a), but this time A requests and gets
another resource, giving Fig.(b). Can we find a sequence that is safe? Let us try. The
scheduler could run B until it asked for all its resources, as shown in Fig.(c). Eventually, B
completes and we get the state of Fig.(d). At this point we are stuck. We only have four
instances of the resource free, and each of the active processes needs five. There is no
sequence that guarantees completion. Thus, the allocation decision that moved the system
from Fig (a) to Fig. (b) went from a safe to an unsafe state. In retrospect, A’s request should
not have been granted.

It is worth noting that an unsafe state is not a deadlocked state. Starting at Fig.(b), the
system can run for a while. In fact, one process can even complete. Furthermore, it is possible
that A might release a resource before asking for any more, allowing C to complete and
avoiding deadlock altogether. Thus, the difference between a safe state and an unsafe state is
that from a safe state the system can guarantee that all processes will finish; from an unsafe
state, no such guarantee can be given.
The Banker’s Algorithm for a Single Resource
The banker's algorithm considers each request as it occurs and checks whether granting it
leads to a safe state. If it does, the request is granted; otherwise, it is postponed until later. To
see whether a state is safe, the OS checks whether it has enough resources to satisfy some
process. If so, that process is assumed to run to completion and return all its resources; the
process now closest to its maximum claim is checked next, and so on. If all processes can
eventually run to completion, the state is safe and the initial request can be granted.

(Figure: three resource-allocation states, labeled safe, safe, and unsafe.)

If a system has N processes, each with a maximum claim of two resources, and R resources
in total, the system is deadlock free for all cases if N ≤ R − 1 (equivalently, R ≥ N + 1).

More generally, consider a system with P processes, each needing a maximum of N resources,
and a total of R resources available. The condition that must hold to make the system
deadlock free for all cases is R ≥ P(N − 1) + 1.
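
For example, with P = 3 processes each needing at most N = 2 resources, the formula gives R ≥ 3(2 − 1) + 1 = 4: even in the worst case where every process holds one resource, a fourth resource is still free, so some process can obtain its second resource, finish, and release everything it holds.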

The Banker’s Algorithm for Multiple Resources

Algorithm for checking to see if a state is safe:


1. Look for a row, R, whose unmet resource needs are all ≤ A. If no such row exists, the
system will eventually deadlock, since no process can run to completion.
2. Assume the process of the chosen row requests all the resources it needs and finishes.
Mark the process as terminated and add all of its resources to the A vector.
3. Repeat steps 1 and 2 until either all processes are marked terminated (the initial state
was safe) or no process is left whose resource needs can be met (there is a deadlock).
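
A minimal Python sketch of this safety check, assuming the state is given as an Available vector, an Allocation matrix, and a Need matrix (the resources each process may still request):

```python
def is_safe(available, allocation, need):
    work = list(available)                # free resources as processes finish
    finished = [False] * len(allocation)
    sequence = []
    made_progress = True
    while made_progress:
        made_progress = False
        for i in range(len(allocation)):
            # Step 1: find a process whose remaining need fits in what is free
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Step 2: assume it finishes and returns everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                made_progress = True
    # Step 3: safe only if every process could be driven to completion
    return all(finished), sequence
```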

(Figure: allocation matrix and request matrix for processes A through E over four resource types.)


Available = (1 0 2 0)
1. Process D can be executed as its pending requests can be satisfied with available
resources. So after its execution Available = (2 1 2 1)
2. Process E can be executed as its pending requests can be satisfied with available
resources. So after its execution Available = (2 1 2 1)
3. Process A can be executed as its pending requests can be satisfied with available
resources. So after its execution Available = (5 1 3 2)
4. Process B can be executed as its pending requests can be satisfied with available
resources. So after its execution Available = (5 2 3 2)
5. Process C can be executed as its pending requests can be satisfied with available
resources. So after its execution Available = (6 3 4 2)
So the safe sequence will be D → E → A → B → C.
Now suppose B requests 1 printer. If we give the printer to B, the request row for B will be
(0 1 2 2) and Available will be (1 0 1 0). The system will remain safe, so B can be given the
printer.
But if E also asks for 1 printer, then granting it to E will make the system unsafe: after D
finishes, the system will enter a deadlock state involving processes A, B, C, and E.

The algorithm for determining whether requests can be safely granted.

Let Requesti be the request vector for process Pi. When a request for resources is made by
process Pi, the following actions are taken:

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process
has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are
not available.
3. Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available − Requesti
Allocationi = Allocationi + Requesti
Needi = Needi − Requesti

If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti,
and the old resource-allocation state is restored.
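
A Python sketch of this request algorithm, built on the is_safe() function sketched earlier; the vector names mirror the text, and process indices are zero-based.

```python
def request_resources(pid, request, available, allocation, need):
    # Step 1: the request may not exceed the process's declared maximum claim
    if any(r > n for r, n in zip(request, need[pid])):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: if not enough resources are free, the process must wait
    if any(r > a for r, a in zip(request, available)):
        return False
    # Step 3: pretend to allocate, then test the resulting state for safety
    new_available = [a - r for a, r in zip(available, request)]
    new_allocation = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    new_allocation[pid] = [a + r for a, r in zip(allocation[pid], request)]
    new_need[pid] = [n - r for n, r in zip(need[pid], request)]
    safe, _ = is_safe(new_available, new_allocation, new_need)
    if safe:
        available[:] = new_available        # commit the tentative allocation
        allocation[pid] = new_allocation[pid]
        need[pid] = new_need[pid]
        return True
    return False                            # unsafe: keep the old state, wait
```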

Example :
At time T0, the following snapshot of the system has been taken:
     Allocation   Max     Available
     A B C        A B C   A B C
P0   0 1 0        7 5 3   3 3 2
P1   2 0 0        3 2 2
P2   3 0 2        9 0 2
P3   2 1 1        2 2 2
P4   0 0 2        4 3 3
Answer the following questions using the banker’s algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (1,0,2), can the request be granted
immediately?
a. The content of the matrix Need is defined to be Max − Allocation and is as follows:
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
b. The system is currently in a safe state, and the sequence <P1, P3, P4, P2, P0> satisfies
the safety criteria.
c. Since Request1 = (1,0,2) ≤ Available = (3,3,2) and (1,0,2) ≤ Need1 = (1,2,2), the request
can be granted if we can find a safe sequence. Assuming the request is granted, the new
system state is obtained from Allocation1 = (2,0,0) + (1,0,2), Need1 = (1,2,2) − (1,0,2),
and Available = (3,3,2) − (1,0,2):
     Allocation   Need    Available
     A B C        A B C   A B C
P0   0 1 0        7 4 3   2 3 0
P1   3 0 2        0 2 0
P2   3 0 2        6 0 0
P3   2 1 1        0 1 1
P4   0 0 2        4 3 1

The sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. Hence, we can
immediately grant the request of process P1.
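
As a check, feeding this snapshot into the earlier sketches reproduces both answers; the matrices below are copied from the problem statement.

```python
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_claim  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mrow, arow)]
        for mrow, arow in zip(max_claim, allocation)]   # Need = Max - Allocation

print(is_safe(available, allocation, need))
# (True, [1, 3, 4, 0, 2]) -> safe; one valid order is <P1, P3, P4, P0, P2>

print(request_resources(1, [1, 0, 2], available, allocation, need))
# True -> P1's request for (1,0,2) can be granted immediately
```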

Q. Consider the following snapshot of a system:


     Allocation   Max       Available
     A B C D      A B C D   A B C D
P0   0 0 1 2      0 0 1 2   1 5 2 0
P1   1 0 0 0      1 7 5 0
P2   1 3 5 4      2 3 5 6
P3   0 6 3 2      0 6 5 2
P4   0 0 1 4      0 6 5 6
Answer the following questions using the banker’s algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (0,4,2,0), can the request be
granted immediately?
Ans.
a. Need
A B C D
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
b. Yes, the system is in a safe state; the sequence <P0, P2, P3, P4, P1> satisfies the safety criteria.
c. Since Request1 = (0,4,2,0) ≤ Need1 = (0,7,5,0) and (0,4,2,0) ≤ Available = (1,5,2,0), the
request can be granted if we can find a safe sequence. Assuming the request is granted,
the new system state is obtained from Allocation1 = (1,0,0,0) + (0,4,2,0), Need1 =
(0,7,5,0) − (0,4,2,0), and Available = (1,5,2,0) − (0,4,2,0):
     Allocation   Need      Available
     A B C D      A B C D   A B C D
P0   0 0 1 2      0 0 0 0   1 1 0 0
P1   1 4 2 0      0 3 3 0
P2   1 3 5 4      1 0 0 2
P3   0 6 3 2      0 0 2 0
P4   0 0 1 4      0 6 4 2
Yes, the request can be granted immediately, as the system remains safe with the sequence
<P0, P2, P3, P4, P1>.

DEADLOCK PREVENTION

For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at
least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion
The mutual exclusion condition must hold. That is, at least one resource must be non-
sharable. Sharable resources, in contrast, do not require mutually exclusive access and thus
cannot be involved in a deadlock. Read-only files are a good example of a sharable resource.
A process never needs to wait for a sharable resource. In general, however, we cannot prevent
deadlocks by denying the mutual-exclusion condition, because some resources are
intrinsically non-sharable. For example, a mutex lock cannot be simultaneously shared by
several processes.
Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources. One
protocol requires each process to request and be allocated all its resources before it begins
execution. It can be implemented by requiring that system calls requesting resources for a
process precede all other system calls. An alternative protocol allows a process to request
resources only when it has none. A process may request some resources and use them. Before
it can request any additional resources, it must release all the resources that it is currently
allocated.
Both protocols have two main disadvantages. First, resource utilization may be low, since
under either protocol resources may be allocated long before they are actually used and then
sit idle. Second, starvation is possible. A process that needs several popular resources may
have to wait indefinitely, because at least one of the resources it needs is always allocated to
some other process.
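
A minimal Python sketch of the first protocol, in which a process either acquires every lock it will need in one all-or-nothing step or holds nothing at all; the scanner and printer locks are hypothetical stand-ins.

```python
import threading

scanner = threading.Lock()
printer = threading.Lock()

def acquire_all(locks):
    """Acquire every lock, or release whatever was taken and report failure."""
    taken = []
    for lock in locks:
        if lock.acquire(blocking=False):
            taken.append(lock)
        else:
            for t in taken:              # could not get them all: hold nothing
                t.release()
            return False
    return True

def copy_document():
    while not acquire_all([scanner, printer]):
        pass                             # retry (a real system would block or back off)
    try:
        print("scanning and printing")   # use both resources
    finally:
        scanner.release()
        printer.release()
```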
No Preemption
The third necessary condition for deadlocks is that there be no pre-emption of resources that
have already been allocated. To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all resources the
process is currently holding are pre-empted. In other words, these resources are implicitly
released. The process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting. Alternatively, if a process requests some resources, we first
check whether they are available. If they are, we allocate them. If they are not, we check
whether they are allocated to some other process that is waiting for additional resources. If
so, we preempt the desired resources from the waiting process and allocate them to the
requesting process. If the resources are neither available nor held by a waiting process, the
requesting process must wait. While it is waiting, some of its resources may be preempted,
but only if another process requests them. A process can be restarted only when it is allocated
the new resources it is requesting and recovers any resources that were pre-empted while it
was waiting.
This protocol is often applied to resources whose state can be easily saved and restored later,
such as CPU registers and memory space. It cannot generally be applied to such resources as
mutex locks and semaphores.
Circular Wait
One way to ensure that this condition never holds is to impose a total ordering of all resource
types and to require that each process requests resources in an increasing order of
numeration. Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two resources and to
determine whether one precedes another in our ordering. For example:
F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12
We can now consider the following protocol to prevent deadlocks: each process can request
resources only in an increasing order of enumeration. That is, a process can initially request
any number of instances of a resource type, say Ri. After that, the process can request
instances of resource type Rj if and only if F(Rj) > F(Ri).

Processes can request resources whenever they want to, but all requests must be made in
numerical order. A process may request first a scanner and then a tape drive, but it may not
request first a plotter and then a scanner.
With this rule, the resource-allocation graph can never have cycles. Consider two processes:
we can get a deadlock only if A holds resource i and requests resource j while B holds j and
requests i. Assuming i and j are distinct resources, they will have different numbers. If i > j,
then A is not allowed to request j because that is lower than what it already has. If i < j, then
B is not allowed to request i because that is lower than what it already has. Either way,
deadlock is impossible.
With more than two processes, the same logic holds. At every instant, one of the assigned
resources will be the highest numbered one. The process holding that resource will never ask
for a resource that is already assigned. It will either finish or, at worst, request even higher-
numbered resources, all of which are available. Eventually, it will finish and free its
resources. At that point, some other process will hold the highest-numbered resource and can
also finish. In short, there exists a scenario in which all processes finish, so no deadlock is
possible.
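
A minimal Python sketch of this ordering rule, using the illustrative F() numbering given above; acquiring locks in sorted order guarantees that no process ever waits for a lower-numbered resource while holding a higher-numbered one.

```python
import threading

ORDER = {'tape drive': 1, 'disk drive': 5, 'printer': 12}   # the F() numbering
LOCKS = {name: threading.Lock() for name in ORDER}

def acquire_in_order(names):
    # Sort the requested resources by their F() number before locking,
    # so requests are always made in increasing order of enumeration.
    for name in sorted(names, key=ORDER.get):
        LOCKS[name].acquire()
    return names

def release_all(names):
    for name in names:
        LOCKS[name].release()

held = acquire_in_order(['printer', 'tape drive'])   # locks the tape drive first
release_all(held)
```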
