
Unit 5

Message Passing and Deadlocks

What is message passing technique in OS?

Message passing provides a mechanism that allows processes to communicate and to synchronize their actions without sharing the same address space.

For example − chat programs on the World Wide Web.

Now let us discuss the message passing step by step.

Step 1 − Message passing provides two operations which are as follows −

• Send message
• Receive message

Messages sent by a process can be either fixed or variable size.

Step 2 − For fixed-size messages, the system-level implementation is straightforward, but it makes the task of programming more difficult.

Step 3 − Variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler.

Step 4 − If processes P1 and P2 want to communicate, they need to send messages to and receive messages from each other, which means a communication link must exist between them.

Step 5 − There are several methods for logically implementing the link and the send() and receive() operations.
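As a sketch (not any particular OS API), the two operations and the link between them can be modeled with a shared queue connecting a sending and a receiving thread:

```python
import queue
import threading

channel = queue.Queue()   # stands in for the kernel-managed communication link

def sender():
    channel.put("hello")              # send(message)

def receiver(out):
    out.append(channel.get())         # receive(message) blocks until a message arrives

received = []
t_recv = threading.Thread(target=receiver, args=(received,))
t_send = threading.Thread(target=sender)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
print(received)   # ['hello']
```

Note that the receiver can start before the sender; receive() simply blocks until a message is available, which is exactly the synchronization message passing provides.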

Figure: Structure of the message passing technique.


Characteristics

The characteristics of the message passing model are as follows −

• Message passing is used mainly for communication.
• It is used in distributed environments where the communicating processes reside on remote machines connected by a network.
• No shared variables are required, because the message passing facility itself provides the mechanism for communication and for synchronizing the actions of the communicating processes.
• Message passing is time consuming, because it is implemented through the kernel (system calls).
• It is useful for sharing small amounts of data, so conflicts need not occur.
• Communication via message passing is slower than via the shared memory technique.

Message Passing

One way in which processes with private virtual address spaces can communicate is through message passing —
by sending and receiving messages to one another. Message passing allows programs to exchange arbitrary data
rather than just a small set of predefined messages like those supported by signals. Operating systems
typically implement a few different types of message passing abstractions that processes can use to communicate.

The message passing interprocess communication model consists of three parts:


1. Processes allocate some type of message channel from the OS. Example message channel types
include pipes for one-way communication, and sockets for two-way communication. There may be
additional connection setup steps that processes need to take to configure the message channel.

2. Processes use the message channel to send and receive messages to one another.

3. Processes close their end of the message channel when they are done using it.

A pipe is a one-way communication channel for two processes running on the same machine. One-way means that
one end of the pipe is for sending messages (or writing to) only, and the other end of the pipe is for receiving
messages (or for reading from) only. Pipes are commonly used in shell commands to send the output from one
process to the input of another process.

For example, consider the following command entered at a bash shell prompt that creates a pipe between two
processes (the cat process outputs the contents of file foo.c and the pipe (|) redirects that output to the input of
the grep command that searches for the string "factorial" in its input):

$ cat foo.c | grep factorial

To execute this command, the bash shell process calls the pipe system call to request that the OS create a pipe. The pipe will be used by the shell’s two child processes (cat and grep). The shell program sets up
the cat process’s stdout to write to the write end of the pipe and the grep process’s stdin to read from the read
end of the pipe, so that when the child processes are created and run, the cat process’s output will be sent as
input to the grep process (see Figure 1).
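The mechanics of the pipe itself can be sketched with Python's os.pipe. For brevity this sketch writes and reads within a single process; in the shell example the two ends are inherited by the cat and grep child processes after fork:

```python
import os

# pipe() returns two file descriptors: r is the read end, w is the write end.
r, w = os.pipe()

# The "cat" side writes into the write end...
os.write(w, b"int factorial(int n);\n")
os.close(w)            # closing the write end signals end-of-data to the reader

# ...and the "grep" side reads from the read end.
data = os.read(r, 1024)
os.close(r)
print(data.decode(), end="")
```

Closing the write end matters: a reader sees end-of-file only once every write end of the pipe has been closed.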

Figure 1. Pipes are unidirectional communication channels for processes on the same system. In this example, the
cat process sends the grep process information by writing to the write end of the pipe. The grep process receives
this information by reading from the read end of the pipe.

While pipes transmit data from one process to another in only one direction, other message passing abstractions
allow processes to communicate in both directions. A socket is a two-way communication channel, which means
that each end of a socket can be used for both sending and receiving messages. Sockets can be used by
communicating processes running on the same computer or running on different computers connected by a
network (see Figure 2). The computers could be connected by a local area network (LAN), which connects
computers in a small area, such as a network in a university computer science department. The communicating
processes could also be on different LANs, connected to the internet. As long as there exists some path through
network connections between the two machines, the processes can use sockets to communicate.
Figure 2. Sockets are bidirectional communication channels that can be used by communicating processes on
different machines connected by a network.
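A minimal sketch of two-way communication, using Python's socket.socketpair, which creates a connected pair of sockets on the same machine (network sockets add addressing and connection setup, but the send/receive pattern is the same):

```python
import socket

# Each endpoint of the pair can both send and receive.
end_a, end_b = socket.socketpair()

end_a.sendall(b"ping")        # A sends...
request = end_b.recv(16)      # ...B receives
end_b.sendall(b"pong")        # B replies...
reply = end_a.recv(16)        # ...A receives

end_a.close()
end_b.close()
print(request, reply)
```

Contrast this with the pipe above: there, one end was write-only and the other read-only; here, both endpoints are used in both directions.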

Because each individual computer is its own system (hardware and OS), and because the OS on one system does
not know about or manage resources on the other system, message passing is the only way in which processes on
different computers can communicate. To support this type of communication, operating systems need to
implement a common message passing protocol for sending and receiving messages over a network. TCP/IP is one
example of a messaging protocol that can be used to send messages over the internet. When a process wants to
send a message to another, it makes a send system call, passing the OS a socket on which it wants to transmit, the
message buffer, and possibly additional information about the message or its intended recipient. The OS takes care
of packing up the message in the message buffer and sending it out over the network to the other machine. When
an OS receives a message from the network, it unpacks the message and delivers it to the process on its system
that has requested to receive the message. This process may be in a Blocked state waiting for the message to
arrive. In this case, receipt of the message makes the process Ready to run again.

There are many system software abstractions built on top of message passing that hide the message passing
details from the programmer. However, any communication between processes on different computers must use
message passing at the lowest levels (communicating through shared memory or signals is not an option for
processes running on different systems). In the Parallel Systems Chapter, we discuss message passing and the
abstractions built atop it in more detail.

IMPLEMENTING MESSAGE PASSING

1 Buffering of Interprocess Messages

When a process Pi sends a message to some process Pj by using a nonblocking send, the kernel builds an
interprocess message control block (IMCB) to store all information needed to deliver the message (see the figure).
The control block contains names of the sender and destination processes, the length of the message, and the text
of the message. The control block is allocated a buffer in the kernel area. When process Pj makes a receive call, the
kernel copies the message from the appropriate IMCB into the message area provided by Pj .

The pointer fields of IMCBs are used to form IMCB lists to simplify message delivery. Figure shows the organization
of IMCB lists when blocking sends and FCFS message delivery are used. In symmetric naming, a separate list is
used for every pair of communicating processes. When a process Pi performs a receive call to receive a message
from process Pj , the IMCB list for the pair Pi–Pj is used to deliver the message. In asymmetric naming, a single
IMCB list can be maintained per recipient process. When a process performs a receive, the first IMCB in its list is
processed to deliver a message.
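The asymmetric-naming organization can be sketched as one FCFS IMCB list per recipient process (the class, fields, and function names here are illustrative, not any real kernel's):

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class IMCB:
    sender: str     # name of the sending process
    receiver: str   # name of the destination process
    text: bytes     # text of the message

# Asymmetric naming: a single IMCB list is maintained per recipient process.
imcb_lists = defaultdict(deque)

def send(sender, receiver, text):
    # The kernel builds an IMCB and appends it to the recipient's list.
    imcb_lists[receiver].append(IMCB(sender, receiver, text))

def receive(receiver):
    # FCFS delivery: the first IMCB in the recipient's list is processed.
    return imcb_lists[receiver].popleft()

send("Pi", "Pj", b"first")
send("Pk", "Pj", b"second")
m = receive("Pj")
print(m.sender, m.text)   # Pi b'first'
```

Under symmetric naming, the dictionary would instead be keyed by the (sender, receiver) pair, giving one list per communicating pair.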

If blocking sends are used, at most one message sent by a process can be undelivered at any point in time. The process is blocked until the message is delivered. Hence it is not necessary to copy the message into an IMCB; it can be copied directly into the receiver's message area when the receive call is made.

MAILBOXES

A mailbox is a repository for interprocess messages. It has a unique name. The owner of a mailbox is typically the
process that created it. Only the owner process can receive messages from a mailbox. Any process that knows the
name of a mailbox can send messages to it. Thus, sender and receiver processes use the name of a mailbox, rather
than each other's names, in send and receive statements; it is an instance of indirect naming.

Figure illustrates message passing using a mailbox named sample. Process Pi creates the mailbox, using the
statement create_mailbox. Process Pj sends a message to the mailbox, using the mailbox name in its send
statement.

If Pi has not already executed a receive statement, the kernel would store the message in a buffer. The kernel may
associate a fixed set of buffers with each mailbox, or it may allocate buffers from a common pool of buffers when a
message is sent. Both create_mailbox and send statements return with condition codes.

The kernel may provide a fixed set of mailbox names, or it may permit user processes to assign mailbox names of
their choice. In the former case, confidentiality of communication between a pair of processes cannot be
guaranteed because any process can use a mailbox.
Figure : Creation and use of mailbox sample.

The kernel may require a process to explicitly "connect" to a mailbox before using it and to "disconnect" when it finishes. This way, the kernel can destroy a mailbox if no process is connected to it. Alternatively, it may permit the owner of a mailbox to destroy it; in that case, it has the responsibility of informing all processes that have connected to the mailbox. The kernel may also permit the owner of a mailbox to transfer the ownership to another process.
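These rules can be sketched as follows (the class and its checks are an illustrative model, not a real kernel interface):

```python
import queue

class Mailbox:
    """A named repository for interprocess messages."""
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner              # typically the creating process
        self._buffers = queue.Queue()   # kernel-allocated message buffers

    def send(self, message):
        # Any process that knows the mailbox name may send to it.
        self._buffers.put(message)

    def receive(self, caller):
        # Only the owner process may receive from the mailbox.
        if caller != self.owner:
            raise PermissionError(f"{caller} does not own mailbox {self.name}")
        try:
            return self._buffers.get_nowait()
        except queue.Empty:
            return None                 # condition code: no message available

sample = Mailbox("sample", owner="Pi")   # Pi executes create_mailbox
sample.send("hello")                     # Pj sends, using only the mailbox name
print(sample.receive("Pi"))              # hello
```

Note that send never names the receiver and receive never names the sender: both sides address only the mailbox, which is what makes this indirect naming.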

Use of a mailbox has the following advantages:

• Anonymity of receiver: A process sending a message to request a service may have no interest in the identity of
the receiver process, as long as the receiver process can perform the needed function. A mailbox relieves the
sender process of the need to know the identity of the receiver. Additionally, if the OS permits the ownership of a
mailbox to be changed dynamically, one process can readily take over the service of another.

• Classification of messages: A process may create several mailboxes, and use each mailbox to receive messages of a specific kind. This arrangement permits easy classification of messages (see Example below).

Example Use of Mailboxes

An airline reservation system consists of a centralized data base and a set of booking processes; each process
represents one booking agent. The figure below shows pseudocode for the reservation server. It uses three mailboxes
named enquire, book, and cancel, and expects a booking process to send enquiry, booking, and cancellation
messages to these mailboxes, respectively. Values of flags in the receive calls are chosen such that a receive call returns with an error code if no message exists. For improved effectiveness, the server processes all pending cancellation messages before processing a booking request or an enquiry, and performs bookings before enquiries.

repeat
    while receive(book, flags1, msg_area1) returns a message
        while receive(cancel, flags2, msg_area2) returns a message
            process the cancellation;
        process the booking;
    if receive(enquire, flags3, msg_area3) returns a message then
        while receive(cancel, flags2, msg_area2) returns a message
            process the cancellation;
        process the enquiry;
forever
Figure : Airline reservation server using three mailboxes: enquire, book, and cancel.
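The server loop above can be made runnable by modeling the three mailboxes as queues and the flag-controlled receive as a non-blocking get (a sketch; the mailbox names come from the text, everything else is illustrative):

```python
import queue

enquire, book, cancel = queue.Queue(), queue.Queue(), queue.Queue()

def receive(mailbox):
    """Non-blocking receive: returns a message, or None as the error code."""
    try:
        return mailbox.get_nowait()
    except queue.Empty:
        return None

def serve_once(log):
    """One pass of the reservation server loop."""
    msg = receive(book)
    while msg is not None:
        # All pending cancellations are processed before each booking.
        c = receive(cancel)
        while c is not None:
            log.append(("cancel", c))
            c = receive(cancel)
        log.append(("book", msg))
        msg = receive(book)
    msg = receive(enquire)
    if msg is not None:
        # Again drain cancellations before answering the enquiry.
        c = receive(cancel)
        while c is not None:
            log.append(("cancel", c))
            c = receive(cancel)
        log.append(("enquire", msg))

book.put("B1"); cancel.put("C1"); enquire.put("E1")
log = []
serve_once(log)
print(log)   # [('cancel', 'C1'), ('book', 'B1'), ('enquire', 'E1')]
```

The processing order in the log matches the stated policy: cancellations before bookings, and bookings before enquiries.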

DEADLOCKS

A process requests resources; if the resources are not available at that time, the process enters a waiting state. Sometimes, a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock.

SYSTEM MODEL

• A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. Memory space, CPU cycles, files, and I/O devices are examples of resource types.

• A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task. The number of resources requested may not exceed the total number of resources available in the system.

Under the normal mode of operation, a process may utilize a resource in only the following sequence:

1. Request: The process requests the resource. If the request cannot be granted immediately, then the
requesting process must wait until it can acquire the resource.

2. Use: The process can operate on the resource.


3. Release: The process releases the resource.

A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused
only by another process in the set. The events with which we are mainly concerned here are resource acquisition
and release. The resources may be either physical resources or logical resources

To illustrate a deadlocked state, consider a system with three CD-RW drives. Suppose each of three processes holds one of these CD-RW drives. If each process now requests another drive, the three processes will be in a deadlocked state.

Each is waiting for the event "CD-RW is released," which can be caused only by one of the other waiting processes. This example illustrates a deadlock involving instances of the same resource type.
Deadlocks may also involve different resource types. For example, consider a system with one printer and one DVD
drive. Suppose that process Pi is holding the DVD and process Pj is holding the printer. If Pi requests the printer and
Pj requests the DVD drive, a deadlock occurs.

DEADLOCK CHARACTERIZATION

Necessary Conditions

A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is, only one process at
a time can use the resource. If another process requests that resource, the requesting process must be
delayed until the resource has been released.

2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources
that are currently being held by other processes.

3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by
the process holding it, after that process has completed its task.

4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
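All four conditions can be staged with two threads and two locks. A timeout on the second acquire is added here only so the sketch terminates instead of hanging forever, which is what a true deadlock would do:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)   # ensures both threads hold their first lock
results = {}

def worker(name, first, second):
    first.acquire()                       # hold and wait: keep one resource...
    barrier.wait()                        # ...until the other thread holds its lock too
    ok = second.acquire(timeout=0.5)      # circular wait: request the other's resource
    results[name] = ok
    if ok:
        second.release()
    first.release()

t1 = threading.Thread(target=worker, args=("P1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("P2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)   # at least one acquire times out: the circular wait cannot resolve
```

Mutual exclusion comes from the locks, hold-and-wait and circular wait from the opposite acquisition orders, and no preemption from the fact that Python never takes a held lock away. Remove the timeout and both threads block forever.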

Resource-Allocation Graph

Deadlocks can be described in terms of a directed graph called System Resource-Allocation Graph

The graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different
types of nodes:

• P = {P1, P2, ...,Pn}, the set consisting of all the active processes in the system.
• R = {R1, R2, ..., Rm} the set consisting of all resource types in the system.

A directed edge from process Pi to resource type Rj, denoted by Pi → Rj, signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.

A directed edge from resource type Rj to process Pi, denoted by Rj → Pi, signifies that an instance of resource type Rj has been allocated to process Pi.
• A directed edge Pi → Rj is called a Request Edge.
• A directed edge Rj → Pi is called an Assignment Edge.
Pictorially, each process Pi is represented as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, each instance is represented as a dot within the rectangle.

A request edge points to only the rectangle Rj, whereas an assignment edge must also designate one of the dots in
the rectangle.

When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation
graph. When this request can be fulfilled, the request edge is instantaneously transformed to an assignment edge.
When the process no longer needs access to the resource, it releases the resource; as a result, the assignment
edge is deleted.

The resource-allocation graph shown in Figure depicts the following situation.

The sets P, R, and E:

• P = {P1, P2, P3}


• R= {R1, R2, R3, R4}
• E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
Resource instances:

• One instance of resource type R1


• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4
Process states:

• Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
• Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
• Process P3 is holding an instance of R3.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.

• If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred. If the
cycle involves only a set of resource types, each of which has only a single instance, then a deadlock has
occurred. Each process involved in the cycle is deadlocked.

• If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has
occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of
deadlock.

To illustrate this concept, consider the resource-allocation graph depicted in the figure below.

Suppose that process P3 requests an instance of resource type R2. Since no resource instance is currently
available, a request edge P3 → R2 is added to the graph. At this point, two minimal cycles exist in the system:

1. P1 →R1 → P2 → R3 → P3 → R2→P1
2. P2 →R3 → P3 → R2 → P2

Figure: Resource-allocation graph with a deadlock.

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3.
Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition, process P1 is waiting
for process P2 to release resource R1.
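The edge sets above can be checked for cycles mechanically. A sketch using depth-first search (directly conclusive when every resource type has a single instance, where a cycle means deadlock):

```python
def has_cycle(edges):
    """Depth-first search cycle detection on a directed graph given as (u, v) edges."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY          # u is on the current DFS path
        for v in graph.get(u, []):
            c = color.get(v, WHITE)
            if c == GRAY:        # back edge: we returned to the current path
                return True
            if c == WHITE and dfs(v):
                return True
        color[u] = BLACK         # u and its descendants are fully explored
        return False
    return any(color.get(u, WHITE) == WHITE and dfs(u) for u in list(graph))

# Edge set E from the example above (request and assignment edges).
E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))                      # False: no cycle yet
print(has_cycle(E + [("P3", "R2")]))     # True: the request P3 -> R2 closes a cycle
```

With multi-instance resource types a cycle is only a necessary condition, so a detector like this would report "possible deadlock" rather than a certainty.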

Consider the resource-allocation graph in the figure below. This example also contains a cycle:

P1→R1→P3→R2→P1
Figure: Resource-allocation graph with a cycle but no deadlock

However, there is no deadlock. Observe that process P4 may release its instance of resource type R2. That
resource can then be allocated to P3, breaking the cycle.

METHODS FOR HANDLING DEADLOCKS

The deadlock problem can be handled in one of three ways:

1. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked
state.

2. Allow the system to enter a deadlocked state, detect it, and recover.
3. Ignore the problem altogether and pretend that deadlocks never occur in the system.
To ensure that deadlocks never occur, the system can use either deadlock prevention or a deadlock-avoidance
scheme.

Deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions cannot
hold. These methods prevent deadlocks by constraining how requests for resources can be made.

Deadlock-avoidance requires that the operating system be given in advance additional information concerning
which resources a process will request and use during its lifetime. With this additional knowledge, it can decide for
each request whether or not the process should wait. To decide whether the current request can be satisfied or
must be delayed, the system must consider the resources currently available, the resources currently allocated to
each process, and the future requests and releases of each process

If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm, then a deadlock
situation may arise. In this environment, the system can provide an algorithm that examines the state of the
system to determine whether a deadlock has occurred and an algorithm to recover from the deadlock.

In the absence of algorithms to detect and recover from deadlocks, the system may be in a deadlock state yet have no way of recognizing what has happened. In this case, the undetected deadlock will result in deterioration of the
system's performance, because resources are being held by processes that cannot run and because more and
more processes, as they make requests for resources, will enter a deadlocked state. Eventually, the system will
stop functioning and will need to be restarted manually.
DEADLOCK PREVENTION

Deadlock can be prevented by ensuring that at least one of the four necessary conditions cannot hold.

Mutual Exclusion

• The mutual-exclusion condition must hold for non-sharable resources. Sharable resources do not require mutually exclusive access and thus cannot be involved in a deadlock.

• Ex: Read-only files are an example of a sharable resource. If several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource.

• Deadlocks cannot be prevented by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.

Hold and Wait

To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that, whenever a process requests a resource, it does not hold any other resources.

• One protocol that can be used requires each process to request and be allocated all its resources before
it begins execution.

• Another protocol allows a process to request resources only when it has none. A process may request
some resources and use them. Before it can request any additional resources, it must release all the
resources that it is currently allocated.

Ex:

• Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the
results to a printer. If all resources must be requested at the beginning of the process, then the process
must initially request the DVD drive, disk file, and printer. It will hold the printer for its entire execution,
even though it needs the printer only at the end.

• The second method allows the process to request initially only the DVD drive and disk file. It copies from
the DVD drive to the disk and then releases both the DVD drive and the disk file. The process must then
again request the disk file and the printer. After copying the disk file to the printer, it releases these two
resources and terminates.

The two main disadvantages of these protocols:

1. Resource utilization may be low, since resources may be allocated but unused for a long period.
2. Starvation is possible.
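Both protocols can be sketched with locks standing in for the DVD drive, disk file, and printer. The acquire_all helper models the "all at once" grant of the first protocol; the two phased calls model the second protocol's request-release-request pattern (an illustrative scheme, not a standard API):

```python
import threading

dvd, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()
guard = threading.Lock()   # serializes whole-group requests, so a process
                           # never holds some resources while waiting for others

def acquire_all(*resources):
    """Protocol 1: grant a process all of its resources at once."""
    with guard:
        for r in resources:
            r.acquire()

def release_all(*resources):
    for r in resources:
        r.release()

# Protocol 2: request only what the current phase needs, release, re-request.
acquire_all(dvd, disk)        # copy phase: DVD drive and disk file only
release_all(dvd, disk)
acquire_all(disk, printer)    # print phase: disk file and printer only
release_all(disk, printer)
print(dvd.locked(), disk.locked(), printer.locked())   # False False False
```

The utilization cost noted above is visible here: under protocol 1 the process would hold the printer for its entire run, even though only the final phase uses it.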
No Preemption

The third necessary condition for deadlocks is that there be no preemption of resources that have already been
allocated.

To ensure that this condition does not hold, the following protocols can be used:

• If a process is holding some resources and requests another resource that cannot be immediately allocated
to it, then all resources the process is currently holding are preempted.

• The preempted resources are added to the list of resources for which the process is waiting. The process
will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.

If a process requests some resources, first check whether they are available. If they are, allocate them.

If they are not available, check whether they are allocated to some other process that is waiting for additional
resources. If so, preempt the desired resources from the waiting process and allocate them to the requesting
process.

If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is
waiting, some of its resources may be preempted, but only if another process requests them.

A process can be restarted only when it is allocated the new resources it is requesting and recovers any resources
that were preempted while it was waiting.

Circular Wait

One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require
that each process requests resources in an increasing order of enumeration.

To illustrate, let R = {R1, R2, ..., Rm} be the set of resource types. We assign a unique integer number to each resource type, which allows us to compare two resources and determine whether one precedes the other in the ordering. Formally, we define a one-to-one function

F: R → N, where N is the set of natural numbers.

Example: if the set of resource types R includes tape drives, disk drives, and printers, then the function F might be
defined as follows:

F (tape drive) = 1 F (disk drive) = 5 F (printer) = 12

Now consider the following protocol to prevent deadlocks. Each process can request resources only in an increasing order of enumeration. That is, a process can initially request any number of instances of a resource type, say Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).
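The protocol can be sketched as a check on request sequences, using the example F given above:

```python
# Resource numbering F from the example above.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}

def check_request_order(requests):
    """Allow a request sequence only if resource numbers strictly increase."""
    highest = 0   # highest-numbered resource requested so far
    for r in requests:
        if F[r] <= highest:
            raise RuntimeError(f"out-of-order request for {r} (F={F[r]})")
        highest = F[r]
    return True

print(check_request_order(["tape drive", "disk drive", "printer"]))   # True
# check_request_order(["printer", "tape drive"]) would raise: F decreases.
```

Because every process climbs the same numbering, no cycle of waits can form: in any would-be cycle, some process would have to request a lower-numbered resource than one it already holds, which the check forbids.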
DEADLOCK AVOIDANCE

• To avoid deadlocks, additional information is required about how resources are to be requested. With knowledge of the complete sequence of requests and releases for each process, the system can decide for each request whether or not the process should wait in order to avoid a possible future deadlock.

• Each request requires that in making this decision the system consider the resources currently available,
the resources currently allocated to each process, and the future requests and releases of each process.

• The various algorithms that use this approach differ in the amount and type of information required. The
simplest model requires that each process declare the maximum number of resources of each type that it
may need. Given this a priori information, it is possible to construct an algorithm that ensures that the
system will never enter a deadlocked state. Such an algorithm defines the deadlock-avoidance approach.

Safe State

• Safe state: A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid a deadlock. A system is in a safe state only if there exists a safe sequence.

• Safe sequence: A sequence of processes <P1, P2, ... , Pn> is a safe sequence for the current allocation state
if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently available
resources plus the resources held by all Pj, with j <i.

In this situation, if the resources that Pi needs are not immediately available, then Pi can wait until all Pj have
finished. When they have finished, Pi can obtain all of its needed resources, complete its designated task, return
its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on. If no
such sequence exists, then the system state is said to be unsafe.

A safe state is not a deadlocked state; conversely, a deadlocked state is an unsafe state. However, not all unsafe states are deadlocks, as shown in the figure. An unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe states.

Figure: Safe, unsafe, and deadlocked state spaces.


Resource-Allocation-Graph Algorithm

• If a resource-allocation system has only one instance of each resource type, then a variant of the
resource-allocation graph is used for deadlock avoidance.

• In addition to the request and assignment edges, a new type of edge is introduced, called a claim edge.
• A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. This edge resembles a request edge in direction but is represented in the graph by a dashed line.

• When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge. When resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj.

Figure: Resource-allocation graph for deadlock avoidance.

Note that the resources must be claimed a priori in the system. That is, before process Pi starts executing, all its
claim edges must already appear in the resource-allocation graph.

We can relax this condition by allowing a claim edge Pi → Rj to be added to the graph only if all the edges associated with process Pi are claim edges.

Now suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the resource-allocation graph.

We check for safety by using a cycle-detection algorithm. An algorithm for detecting a cycle in this graph requires on the order of n² operations, where n is the number of processes in the system.

• If no cycle exists, then the allocation of the resource will leave the system in a safe state.
• If a cycle is found, then the allocation will put the system in an unsafe state. In that case, process Pi will
have to wait for its requests to be satisfied.

To illustrate this algorithm, consider the resource-allocation graph as shown above. Suppose that P2 requests R2.
Although R2 is currently free, we cannot allocate it to P2, since this action will create a cycle in the graph.

A cycle indicates that the system is in an unsafe state. If P1 requests R2 and P2 requests R1, then a deadlock will occur.
Figure: An unsafe state in a resource-allocation graph

Banker's Algorithm

The Banker’s algorithm is applicable to a resource allocation system with multiple instances of each resource type.

• When a new process enters the system, it must declare the maximum number of instances of each resource
type that it may need. This number may not exceed the total number of resources in the system.

• When a user requests a set of resources, the system must determine whether the allocation of these
resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the process
must wait until some other process releases enough resources.

To implement the banker's algorithm the following data structures are used.

Let n = number of processes, and m = number of resource types

Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.

Max: An n x m matrix defines the maximum demand of each process. If Max [i,j] = k, then process Pi may request
at most k instances of resource type Rj

Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. If
Allocation[i,j] = k then Pi is currently allocated k instances of Rj

Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k
more instances of Rj to complete its task.

Need [i,j] = Max[i,j] – Allocation [i,j]
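The Need relation can be computed directly from the other two matrices; the values below are illustrative, not taken from this text:

```python
# Illustrative Max and Allocation for three processes and three resource types.
Max        = [[7, 5, 3],
              [3, 2, 2],
              [9, 0, 2]]
Allocation = [[0, 1, 0],
              [2, 0, 0],
              [3, 0, 2]]

# Need[i][j] = Max[i][j] - Allocation[i][j]
Need = [[Max[i][j] - Allocation[i][j] for j in range(3)] for i in range(3)]
print(Need)   # [[7, 4, 3], [1, 2, 2], [6, 0, 0]]
```

Each row of Need tells the algorithm the worst case still ahead for that process, which is all the information the safety check below requires.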


Safety Algorithm

The following algorithm determines whether or not a system is in a safe state:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 0, 1, ..., n-1

2. Find an index i such that both:
   (a) Finish[i] == false
   (b) Needi ≤ Work
   If no such i exists, go to step 4.

3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require on the order of m × n² operations to determine whether a state is safe.
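The four steps above translate almost line for line into code. A runnable sketch follows; the example data is a common textbook instance with five processes and three resource types, not taken from this text:

```python
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = available[:]          # step 1: Work = Available
    finish = [False] * n         #         Finish[i] = false for all i
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):       # step 2: find i with Finish[i] == false, Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # step 3: Pi finishes, returns resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    # step 4: the state is safe iff Finish[i] == true for all i
    return sequence if all(finish) else None

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # [1, 3, 4, 0, 2], a safe sequence
```

The returned list is one safe sequence; a state may admit several, and the algorithm only needs to find one to declare the state safe.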

Banker's Algorithm: How Does It Work?


Let us consider a bank with "n" account holders and a total cash reserve of "M".
When a customer applies for a loan, the bank first subtracts the loan amount from
the total cash available and then verifies that the remaining cash is still
sufficient to satisfy the needs of its other customers; only then is the loan
approved. This practice helps the bank serve all of its customers without
running out of cash.

In the same way, the banker’s algorithm works in a computer system’s operating system.
When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may require to execute. This number must
not be more than the total number of resources in the system. Now, when a
process requests resources, the system must calculate whether the allocation of
the requested resources will leave the computer system in a safe state. If so,
the process is allocated the requested resources; otherwise, it must wait until
some other process releases enough resources.

By following this practice, the banker’s algorithm avoids deadlock and allocates
resources safely. For this reason, it is termed a deadlock-avoidance algorithm
in the operating system.

Data Structures Used in Banker’s Algorithm

In a computer system, let the number of processes be "n" and the number of
resource types be "m". The following data structures are required to implement the
banker’s algorithm −

• Available − It is a vector of length "m" that indicates the number of available resources of each
type in the system. If Available [j] = k, then "k" be the instances of resource type Rj available in the
system.
• Max − It is a [n × m] matrix that defines the maximum resource demand of each process. When
Max [I, j] = k, then a process Pi may request maximum k instances of resource type, Rj.
• Allocation − It is also a [n × m] matrix that defines the number of resources of each type which
are currently allocated to each process. Therefore, if Allocation [i, j] = k, then a process Pi is
currently allocated "k" instances of resource type, Rj.
• Need − It is an [n × m] matrix that represents the remaining resource need of each process. When
Need [i, j] = k, then a process Pi may need "k" more instances of resource type Rj to accomplish the
assigned task.
• Finish − It is a vector of length "n". It holds a Boolean value, i.e. TRUE or FALSE, for each
process, indicating whether the process can obtain all of the resources it needs and run to
completion, releasing its resources when finished.

Also, it is to be noted that,


Need [i, j] = Max [i, j] − Allocation [i, j]
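As a small illustration of this formula (the numbers below are made up for the example), the Need matrix is computed elementwise:

```python
# Hypothetical 3-process, 2-resource-type state.
max_demand = [[7, 5], [3, 2], [9, 0]]   # Max[i][j]
allocation = [[0, 1], [2, 0], [3, 0]]   # Allocation[i][j]

# Need[i][j] = Max[i][j] - Allocation[i][j]
need = [[max_demand[i][j] - allocation[i][j] for j in range(2)]
        for i in range(3)]
# need == [[7, 4], [1, 2], [6, 0]]
```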

The banker’s algorithm is a combination of two algorithms, namely the Safety
Algorithm and the Resource-Request Algorithm. Together, these two algorithms
control the processes and avoid deadlock in a system.

Safety Algorithm

The safety algorithm determines whether or not the system is in a safe state.
It proceeds through the following steps −

Step 1

• Consider two vectors named Work and Finish of lengths "m" and "n" respectively.
• Initialize: Work = Available
• Finish [i] = false; for i = 0, 1, 2, …, n − 1.

Step 2

• Find "i" such that both Finish [i] = false and Needi ≤ Work
• If such "i" does not exist, then go to the step 4.

Step 3

• Work = Work + Allocationi


• Finish [i] = true
• Go to step 2.

Step 4

• If Finish [i] = true for all "i", it means that the system is in a safe state; otherwise, it is in an unsafe state.
• This algorithm takes O(m × n²) operations to decide whether the state is safe or not.

Now, let us discuss the Resource Request Algorithm.

Resource Request Algorithm

This algorithm determines how the system behaves when a process requests
resources.

Let us consider a request vector Requesti for a process Pi. When Requesti [j] = k, then
the process Pi requires k instances of resource type Rj.

When a process Pi requests for resources, the following sequence is followed −


Step 1

If Requesti ≤ Needi, then go to the step 2. Otherwise give an error as the process has
exceeded its maximum claim for the resource.

Step 2

If Requesti ≤ Available, then go to the step 3. Otherwise, the process Pi must wait until
the resource is available.

Step 3

The system pretends to allocate the requested resources to the process Pi by modifying the state as follows −

Available = Available – Requesti;


Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;

When the state of the resulting resource allocation is safe, then the requested resources
are allocated to the process, Pi. If the new state is unsafe, then the process Pi must wait
for Requesti and the old resource allocation state is restored.

Resource-Request Algorithm

The algorithm for determining whether requests can be safely granted.

Let Requesti be the request vector for process Pi. If Requesti [j] == k, then process Pi wants k instances of resource
type Rj. When a request for resources is made by process Pi, the following actions are taken:

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim

2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since resources are not available

3. Have the system pretend to allocate requested resources to Pi by modifying the state as follows:
Available = Available – Requesti; Allocationi = Allocationi + Requesti; Needi = Needi – Requesti;

If safe, the resources are allocated to Pi.

If unsafe, Pi must wait, and the old resource-allocation state is restored.
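The resource-request algorithm can be sketched in Python as follows. The function name request_resources is illustrative; the state is mutated in place so that a rollback can restore it, and the safety check from the safety algorithm is repeated inside the function to keep the sketch self-contained:

```python
def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm for process P_i (an illustrative sketch).

    Mutates the state in place. Returns True if the request is granted,
    False if P_i must wait.
    """
    n, m = len(allocation), len(available)

    def is_safe():
        # Safety algorithm over the current (pretended) state.
        work, finish = list(available), [False] * n
        progressed = True
        while progressed:
            progressed = False
            for k in range(n):
                if not finish[k] and all(need[k][j] <= work[j] for j in range(m)):
                    for j in range(m):
                        work[j] += allocation[k][j]
                    finish[k] = True
                    progressed = True
        return all(finish)

    # Step 1: the request must not exceed P_i's declared maximum claim.
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: if the resources are not currently available, P_i must wait.
    if any(request[j] > available[j] for j in range(m)):
        return False
    # Step 3: pretend to allocate the requested resources.
    for j in range(m):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe():
        return True                 # resulting state is safe: grant it
    # Resulting state is unsafe: restore the old resource-allocation state.
    for j in range(m):
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False
```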


Example

Consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource type A has ten
instances, resource type B has five instances, and resource type C has seven instances. Suppose that, at time T0, the
following snapshot of the system has been taken:

        Allocation    Max        Available
        A  B  C       A  B  C    A  B  C
  P0    0  1  0       7  5  3    3  3  2
  P1    2  0  0       3  2  2
  P2    3  0  2       9  0  2
  P3    2  1  1       2  2  2
  P4    0  0  2       4  3  3

The content of the matrix Need is defined to be Max − Allocation:

        Need
        A  B  C
  P0    7  4  3
  P1    1  2  2
  P2    6  0  0
  P3    0  1  1
  P4    4  3  1

The system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.

Suppose now that process P1 requests one additional instance of resource type A and two instances of resource
type C, so Request1 = (1,0,2). Decide whether this request can be immediately granted.

Check that Request1 ≤ Available:

(1,0,2) ≤ (3,3,2) ⇒ true

Then pretend that this request has been fulfilled, arriving at the following new state: Allocation1 = (3, 0, 2), Need1 = (0, 2, 0), and Available = (2, 3, 0).

Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2> satisfies safety requirement.
DEADLOCK DETECTION

If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm, then a deadlock
situation may occur. In this environment, the system may provide:

• An algorithm that examines the state of the system to determine whether a deadlock has occurred
• An algorithm to recover from the deadlock

Single Instance of Each Resource Type

• If all resources have only a single instance, then define a deadlock detection algorithm that uses a variant
of the resource-allocation graph, called a wait-for graph.

• This graph is obtained from the resource-allocation graph by removing the resource nodes and collapsing
the appropriate edges.

• An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a
resource that Pi needs. An edge Pi → Pj exists in a wait-for graph if and only if the corresponding resource
allocation graph contains two edges Pi →Rq and Rq→Pi for some resource Rq.

Example: In below Figure, a resource-allocation graph and the corresponding wait-for graph is presented.

Figure: (a) Resource-allocation graph. (b) Corresponding wait-for graph.

• A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect deadlocks, the
system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for a
cycle in the graph.

• An algorithm to detect a cycle in a graph requires an order of n2 operations, where n is the number of
vertices in the graph.
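The wait-for-graph construction and cycle search can be sketched as follows. The function names and the edge representation (sets of pairs) are illustrative choices for this example:

```python
def build_wait_for(request_edges, assignment_edges):
    """Collapse a single-instance resource-allocation graph into a
    wait-for graph.

    request_edges:    pairs (P_i, R_q), process P_i requests resource R_q
    assignment_edges: pairs (R_q, P_j), resource R_q is held by process P_j
    """
    holder = dict(assignment_edges)          # R_q -> process holding it
    wait_for = {}
    # P_i -> P_j iff P_i -> R_q and R_q -> P_j for some resource R_q
    for p, r in request_edges:
        if r in holder:
            wait_for.setdefault(p, set()).add(holder[r])
    return wait_for

def has_cycle(wait_for):
    """DFS cycle detection; a cycle means deadlock (single-instance case)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(u):
        color[u] = GRAY                      # u is on the current DFS path
        for v in wait_for.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and visit(v)):
                return True                  # back edge found: cycle
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and visit(u) for u in wait_for)
```

For example, if P1 holds R1 and requests R2 while P2 holds R2 and requests R1, the wait-for graph contains the cycle P1 → P2 → P1, so the system is deadlocked.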
Several Instances of a Resource Type

A deadlock detection algorithm that is applicable to several instances of a resource type. The algorithm employs
several time-varying data structures that are similar to those used in the banker's algorithm.

• Available: A vector of length m indicates the number of available resources of each type.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to each
process.

• Request: An n x m matrix indicates the current request of each process. If Request[i][j]
equals k, then process Pi is requesting k more instances of resource type Rj.

Algorithm:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:

(a) Work = Available


(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:

(a) Finish[i] == false


(b) Requesti ≤ Work
If no such i exists, go to step 4

3. Work = Work + Allocationi

Finish[i] = true

go to step 2

4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if

Finish[i] == false, then Pi is deadlocked

The algorithm requires O(m × n²) operations to detect whether the system is in a deadlocked state.
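The detection algorithm can be sketched in Python. As with the safety algorithm, the function name detect_deadlock and the list-of-lists representation are illustrative; note that the only real differences from the safety algorithm are the Finish initialization and the use of Request in place of Need:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm for multiple resource instances.

    Returns the indices of deadlocked processes (empty list if none).
    """
    n, m = len(allocation), len(available)
    work = list(available)                           # Step 1(a)
    # Step 1(b): a process holding no resources cannot be part of a deadlock
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # Step 2: Finish[i] == false and Request_i <= Work
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Step 3: optimistically assume P_i completes and releases
                # everything it currently holds
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    # Step 4: any process still with Finish[i] == false is deadlocked
    return [i for i in range(n) if not finish[i]]
```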

Example of Detection Algorithm

Consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource type A has
seven instances, resource type B has two instances, and resource type C has six instances. Suppose that, at time
T0, we have the following resource-allocation state:

        Allocation    Request    Available
        A  B  C       A  B  C    A  B  C
  P0    0  1  0       0  0  0    0  0  0
  P1    2  0  0       2  0  2
  P2    3  0  3       0  0  0
  P3    2  1  1       1  0  0
  P4    0  0  2       0  0  2

After executing the algorithm, the sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.

Suppose now that process P2 makes one additional request for an instance of type C. Row P2 of the Request matrix
is modified to (0, 0, 1).

The system is now deadlocked. Although we can reclaim the resources held by process P0, the number of available
resources is not sufficient to fulfill the requests of the other processes.

Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.

Detection-Algorithm Usage

When to invoke the detection algorithm depends on two factors:

1. How often is a deadlock likely to occur?


2. How many processes will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be invoked frequently. Resources allocated to
deadlocked processes will be idle until the deadlock can be broken.

If the detection algorithm is invoked at arbitrary points, there may be many cycles in the resource graph, and we
would not be able to tell which of the many deadlocked processes “caused” the deadlock.
RECOVERY FROM DEADLOCK

When a detection algorithm determines that a deadlock exists, the system must recover from it.
There are two options for breaking a deadlock: one is simply to abort one or more processes to
break the circular wait; the other is to preempt some resources from one or more of the
deadlocked processes.

Process Termination

To eliminate deadlocks by aborting a process, use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated processes.

1. Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at
great expense; the deadlocked processes may have computed for a long time, and the
results of these partial computations must be discarded and probably will have to be
recomputed later.

2. Abort one process at a time until the deadlock cycle is eliminated: This method incurs
considerable overhead, since after each process is aborted, a deadlock-detection algorithm
must be invoked to determine whether any processes are still deadlocked.

If the partial termination method is used, then we must determine which deadlocked process (or
processes) should be terminated. Many factors may affect which process is chosen, including:

1. What the priority of the process is


2. How long the process has computed and how much longer the process will compute before
completing its designated task

3. How many and what types of resources the process has used.
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch

Resource Preemption

To eliminate deadlocks using resource preemption, we successively preempt some resources from
processes and give these resources to other processes until the deadlock cycle is broken.

If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? As in process
termination, we must determine the order of preemption to minimize cost. Cost factors may
include such parameters as the number of resources a deadlocked process is holding and the
amount of time the process has thus far consumed during its execution.

2. Rollback. If we preempt a resource from a process, what should be done with that process?
Clearly, it cannot continue with its normal execution; it is missing some needed resource. We
must roll back the process to some safe state and restart it from that state. Since it is difficult
to determine what a safe state is, the simplest solution is a total rollback: abort the process
and then restart it.

3. Starvation. How do we ensure that starvation will not occur? That is, how can we guarantee
that resources will not always be preempted from the same process?
