OS UNIT-3
Deadlocks
Deadlock:
Every process needs some resources to complete its execution, and resources are granted in a
sequential order.
A deadlock is a situation in which each process in a set waits for a resource that is assigned to
another process in the set. None of the processes can proceed, because the resource each one needs
is held by another process that is itself waiting for some other resource to be released.
In other words, a deadlock happens in an operating system when two or more processes need
resources, held by the other processes, to complete their execution.
In the above diagram, process 1 holds resource 1 and needs to acquire resource 2, while process 2
holds resource 2 and needs to acquire resource 1. Process 1 and process 2 are in deadlock: each
needs the other's resource to complete its execution, and neither is willing to relinquish the
resource it holds.
A deadlock can arise only if the following four conditions hold simultaneously:
1. Mutual Exclusion
A resource can be used only in a mutually exclusive manner; two processes cannot use the
same resource at the same time.
2. Hold and Wait
A process holds one resource while waiting for some other resource at the same time.
3. No Preemption
A resource cannot be taken away from a process; once granted, it is released only voluntarily
when the process is done with it.
4. Circular Wait
The processes wait for resources in a cyclic manner, so that the last process is waiting for the
resource held by the first process.
1. Deadlock Ignorance
Deadlock ignorance is the most widely used approach of all the mechanisms, adopted by many
operating systems mainly for end-user systems. In this approach, the operating system assumes that
deadlock never occurs and simply ignores it. This approach is best suited to a single-user system
where the user only browses and does other everyday tasks.
There is always a trade-off between correctness and performance. Operating systems like Windows
and Linux mainly focus on performance, and the performance of the system decreases if it runs a
deadlock-handling mechanism all the time. If a deadlock happens only 1 time out of 100, then it is
completely unnecessary to pay the cost of the handling mechanism continuously.
In these types of systems, the user simply has to restart the computer in the case of a deadlock.
Windows and Linux mainly use this approach.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait
hold simultaneously. If the system ensures that at least one of the four conditions is violated at all
times, deadlock can never occur.
The idea behind the approach is simple: defeat one of the four conditions. There can, however, be a
big argument about its physical implementation in the system.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an
unsafe state at every step it performs. Processes continue as long as the system stays in a safe
state; as soon as an allocation would move the system to an unsafe state, the OS backtracks that step.
In simple words, the OS reviews each allocation so that the allocation cannot cause a deadlock in
the system.
4. Deadlock detection and recovery
This approach lets processes fall into deadlock and then periodically checks whether a deadlock has
occurred in the system. If one has occurred, it applies recovery methods to the system to get rid of
the deadlock.
DEADLOCK AVOIDANCE:
In deadlock avoidance, a request for a resource is granted only if the resulting state of the system
does not cause a deadlock. The state of the system is continuously checked for safe and unsafe
states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources it
may request to complete its execution.
The simplest and most useful approach requires each process to declare the maximum number of
resources of each type it may ever need. The deadlock avoidance algorithm then examines resource
allocations so that a circular wait condition can never arise.
The resource allocation state of a system is defined by the numbers of available and allocated
resource instances and by the maximum resource demands of the processes.
The resource allocation graph is a pictorial representation of the state of a system. As its name
suggests, the resource allocation graph holds the complete information about all the processes which
are holding some resources or waiting for some resources.
It also contains information about all the instances of all the resources, whether they are available
or being used by the processes.
In a resource allocation graph, a process is represented by a circle and a resource by a rectangle.
Let's see the types of vertices and edges in detail.
Vertices are mainly of two types, resource and process, each represented by a different shape: a
circle for a process and a rectangle for a resource.
A resource can have more than one instance. Each instance is represented by a dot inside the
rectangle.
Edges in a RAG are also of two types: one represents assignment, and the other represents a process
waiting for a resource. The image above shows each of them.
A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the
resource and the head is attached to a process.
A process is shown as waiting for a resource if the tail of the arrow is attached to the process while
the head points towards the resource.
Example
Let's consider three processes P1, P2 and P3, and two types of resources R1 and R2, each having a
single instance.
According to the graph, R1 is being used by P1; P2 holds R2 and is waiting for R1; P3 is waiting
for both R1 and R2.
The graph is deadlock-free, since no cycle is formed in it.
Banker's Algorithm
The Banker's algorithm carries out deadlock avoidance as explained above. It determines in advance
whether the system will be in a safe state by simulating the allocation of resources to the processes
up to their declared maximum needs. It makes this safe-state check before actually allocating the
resources to the processes.
The Banker's algorithm is especially useful when there are many processes and many resources.
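The safe-state check at the heart of the Banker's algorithm can be sketched as follows. This is a minimal sketch in Python; the process and resource numbers below are the classic textbook example, not figures from these notes.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: return a safe sequence of process
    indices if one exists, else None."""
    n = len(allocation)                # number of processes
    work = list(available)             # resources currently free
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]         # remaining demand of each process
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # pick a process whose remaining need fits in work
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # let it run to completion and reclaim its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                # unsafe state: no process can finish
    return sequence

# Classic 5-process, 3-resource-type example
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 4, 0, 2]
```

The returned list is a safe sequence: granting resources in that order lets every process finish.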
1. Prevention: The operating system takes steps to prevent deadlocks from occurring by
ensuring that the system is always in a safe state, where deadlocks cannot occur. This is
achieved through resource allocation algorithms such as the Banker’s Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect and
resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are used to identify
deadlocks, and recovery algorithms, such as the Rollback and Abort algorithm, are used to
resolve them. The recovery algorithm releases the resources held by one or more processes,
allowing the system to continue to make progress.
Deadlock Detection:
1. If resources have a single instance:
In this case, deadlock detection amounts to running an algorithm that checks for a cycle in the
resource allocation graph. With single-instance resources, the presence of a cycle in the graph is a
sufficient condition for deadlock.
In the above diagram, resource 1 and resource 2 have single instances, and there is a cycle R1 → P1
→ R2 → P2 → R1. So, deadlock is confirmed.
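With single-instance resources, the check reduces to cycle detection on the graph. A minimal sketch follows; the dictionary encoding of the graph is an assumption made for illustration.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on DFS stack / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge found: a cycle
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# The deadlocked graph from the text: R1 -> P1 -> R2 -> P2 -> R1
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))    # True: deadlock confirmed
```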
ADVANTAGES OF DEADLOCK DETECTION AND RECOVERY:
1. Improved System Stability: Deadlocks can cause system-wide stalls, and detecting and
resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating system
can ensure that resources are efficiently utilized and that the system remains responsive to user
requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide insight into
the behavior of the system and the relationships between processes and resources, helping to
inform and improve the design of the system.
If we model deadlock as a table standing on its four legs, the four legs correspond to the four
conditions which, when they occur simultaneously, cause the deadlock.
If we break one of the legs of the table, the table will certainly fall. The same goes for deadlock: if
we can violate one of the four necessary conditions and keep them from occurring together, then we
can prevent the deadlock.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, is the fact that a resource can never be used by
more than one process simultaneously. That is fair enough, but it is the main reason behind
deadlock: if a resource could be used by more than one process at the same time, no process would
ever have to wait for any resource.
However, if we can keep resources from behaving in a mutually exclusive manner, then the
deadlock can be prevented.
2. Hold and Wait
The hold and wait condition arises when a process holds a resource while waiting for some other
resource to complete its task. Deadlock occurs because several processes can each hold one
resource and wait for another in cyclic order.
To break this condition, we need a mechanism by which a process either holds no resource or does
not wait. That means a process must be assigned all the necessary resources before its execution
starts, and must not wait for any resource once execution has begun.
3. No Preemption
Deadlock arises partly from the fact that a resource cannot be taken from a process once it has been
allocated. However, if we take the resource away from the process that is causing the deadlock, we
can prevent the deadlock.
This is not a good approach in general: if we take away a resource that is being used by a process,
all the work it has done so far can become inconsistent.
4. Circular Wait
To violate circular wait, we can assign a priority number to each resource and forbid a process from
requesting a resource with a lower priority than one it already holds. Requests then always move in
one direction through the ordering, so no cycle can be formed.
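This ordering idea is commonly realized in code by always acquiring locks in one fixed global order, no matter which order the caller names them in. A sketch with Python threads; the two "resources" here are hypothetical locks introduced for illustration.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0

def ordered_acquire(first, second):
    """Always lock resources in a fixed global order (here: by object id),
    so no cyclic wait can form between threads."""
    global counter
    lo, hi = sorted((first, second), key=id)
    with lo, hi:                       # acquire in the fixed order
        counter += 1

# The two threads name the locks in opposite orders, but the fixed
# ordering inside ordered_acquire prevents any circular wait.
t1 = threading.Thread(target=ordered_acquire, args=(lock_a, lock_b))
t2 = threading.Thread(target=ordered_acquire, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)   # 2: both threads finished, no deadlock
```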
Deadlock Ignorance:
Stick your head in the sand and pretend there is no problem at all: this method of solving a problem
is called the Ostrich algorithm. The Ostrich algorithm is the most widely used technique for
ignoring deadlock, and it is used on most single-user systems. If there is a deadlock in the system,
the OS reboots the system in order to function well again. The method of solving a problem varies
according to the people involved.
Computer scientists generally believe that the most rigorous method of dealing with deadlock is
deadlock prevention. But the engineers who build systems believe that deadlock prevention should
be paid less attention, as the chances of deadlock occurring are very small.
System failures, compiler errors, programming bugs and hardware crashes that occur once a week
deserve more attention than a deadlock problem that occurs once in years. Therefore most engineers
do not spend much effort on eliminating deadlock.
Many operating systems suffer from deadlocks that are not even detected and are then broken
automatically. As an illustration, the number of processes is bounded by the process table. There are
only a finite number of slots in the process table, so when the table is full, fork fails; the reasonable
approach is for the new fork to wait and try again when a slot in the process table becomes free.
A similar problem is noticed while opening and closing files: the maximum number of open files is
restricted by the i-node table, so a similar problem occurs when that table is full. Swap space is
another limited resource. In fact, every table that stores data in the operating system has a finite
resource limit. It might happen that a collection of n processes each claims 1/n of the total and then
each tries to claim another one, which is exactly the recipe for deadlock.
Operating systems including UNIX and Windows simply ignore such deadlocks, assuming that
most users would prefer an occasional deadlock to a rule restricting every user to one process, one
open file, one of everything. By this method deadlock is handled for free, rather than spending
much on deadlock prevention methods and putting inconvenient restrictions on processes. Thus we
have to decide between correctness and convenience among the different methods of solving the
deadlock.
Ignoring the possibility of deadlock in operating systems can have both advantages and
disadvantages.
Advantages:
1. Simplicity: Ignoring the possibility of deadlock can make the design and implementation of
the operating system simpler and less complex.
2. Performance: Avoiding deadlock detection and recovery mechanisms can improve the
performance of the system, as these mechanisms can consume significant system resources.
Disadvantages:
1. Unpredictability: Ignoring deadlock can lead to unpredictable behavior, making the system
less reliable and stable.
2. System crashes: If a deadlock does occur and the system is not prepared to handle it, it can
cause the entire system to crash, resulting in data loss and other problems.
3. Reduced availability: Deadlocks can cause processes to become blocked, which can reduce
the availability of the system and impact user experience.
In general, the disadvantages of ignoring deadlock outweigh the advantages. It is important to
implement effective deadlock prevention and recovery mechanisms in operating systems to
ensure the stability, reliability, and availability of the system.
Process Synchronization:
Race Condition:
When more than one process executes the same code, or accesses the same memory or any shared
variable, there is a possibility that the output or the value of the shared variable is wrong; all the
processes race, each claiming that its output is the correct one. This condition is known as a race
condition. When several processes access and manipulate the same data concurrently, the outcome
depends on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section: the result of multiple threads
executing in the critical section differs according to the order in which the threads execute. Race
conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
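The effect of a lock on such a race can be sketched with two Python threads updating a shared counter; the counter and iteration count below are hypothetical values chosen for illustration.

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(N):
        with lock:           # critical section: the update runs atomically
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 200000; without the lock, interleaved updates could be lost
```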
Critical Section
The regions of a program that access shared resources and may cause race conditions are called
critical sections. To avoid race conditions among the processes, we need to ensure that only one
process at a time can execute within the critical section.
The critical section is the part of a program that accesses shared resources. A shared resource may
be any resource in a computer, such as a memory location, a data structure, the CPU, or an I/O
device.
Because the critical section cannot be executed by more than one process at the same time, the
operating system faces difficulties in allowing and disallowing processes to enter the critical
section.
The critical section problem is to design a set of protocols which can ensure that a race condition
among the processes never arises.
Any solution to the critical section problem must satisfy three requirements:
1. Mutual Exclusion
Our solution must provide mutual exclusion: if one process is executing inside the critical
section, then no other process may enter the critical section.
2. Progress
Progress means that if one process doesn't need to execute in the critical section, it should not
stop other processes from getting into the critical section.
3. Bounded Waiting
We should be able to bound the waiting time for every process to get into the critical section.
No process may be endlessly waiting to get into the critical section.
Peterson’s Solution:
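The notes leave this section blank, so here is a sketch. Peterson's solution for two processes uses a shared flag array and a turn variable; it is shown here with Python threads, and its correctness in this form relies on the effectively sequential memory behaviour of CPython.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
count = 0               # shared variable protected by the protocol

def process(i):
    global turn, count
    other = 1 - i
    for _ in range(1000):
        flag[i] = True              # announce intent to enter
        turn = other                # politely give priority to the other
        while flag[other] and turn == other:
            pass                    # busy-wait (entry section)
        count += 1                  # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)   # 2000: mutual exclusion preserved every update
```

Setting `turn = other` before waiting is what gives both mutual exclusion and progress: if both processes want in at once, whoever wrote `turn` last waits.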
Semaphores:
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled
by another thread. This differs from a mutex, which can be released only by the thread that
called the wait (lock) function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait() and
signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
• Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks,
as the locks can provide mutual exclusion. All the processes share the same mutex
semaphore, which is initialized to 1. A process has to wait until the semaphore becomes 1; it
then sets the semaphore to 0 and starts its critical section. When it completes its critical
section, it resets the value of the semaphore to 1 so that some other process can enter its
critical section.
• Counting Semaphores: They can take any non-negative value and are not restricted to 0
and 1. They can be used to control access to a resource that has a limit on the number of
simultaneous accesses. The semaphore is initialized to the number of instances of the
resource. Whenever a process wants to use the resource, it checks whether the number of
remaining instances is more than zero, i.e., whether an instance is available. If so, the process
enters its critical section, decreasing the value of the counting semaphore by 1. When the
process is done using the instance of the resource, it leaves the critical section, adding 1 to
the number of available instances of the resource.
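A counting semaphore limiting concurrent access can be sketched with Python's threading module; the pool size and thread count below are hypothetical.

```python
import threading
import time

pool = threading.Semaphore(3)    # a resource with 3 instances
active = 0
peak = 0
guard = threading.Lock()         # protects the two counters above

def use_resource():
    global active, peak
    with pool:                   # wait(): take one instance (blocks at 0)
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # pretend to use the resource
        with guard:
            active -= 1
    # leaving the with-block performs signal(): the instance is returned

threads = [threading.Thread(target=use_resource) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)   # never exceeds 3, the number of instances
```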
Mutual exclusion can also be built on a hardware test-and-set instruction: a process spins until the
lock becomes free.
while (1) {
    while (TestAndSet(&lock))
        ;                 /* busy-wait until the lock is acquired */
    /* critical section */
    lock = false;         /* release the lock */
    /* remainder section */
}
Classical problems of Synchronization
In this section, we will see a number of classical problems of synchronization as examples of a large
class of concurrency-control problems. In the solutions to these problems, we use semaphores for
synchronization, since that is the traditional way to present such solutions. However, actual
implementations of these solutions could use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed synchronization scheme. The
following problems of synchronization are considered as classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem
2. Dining-Philosophers Problem
3. Readers and Writers Problem
The bounded-buffer problem is also called the producer-consumer problem; it is the generalized
form of the producer-consumer problem. The solution is to create two counting semaphores, "full"
and "empty", to keep track of the current numbers of full and empty buffer slots respectively.
Producers produce items and consumers consume them, but both use the same shared buffer.
The producer-consumer problem is a classical synchronization problem in the operating system.
With more than one process and limited resources in the system, synchronization problems arise: if
one resource is shared between more than one process at the same time, it can lead to data
inconsistency. In the producer-consumer problem, the producer produces an item and the consumer
consumes the item produced by the producer.
Before looking at the problem itself, we have to know what the producer and consumer are: the
producer is the process that produces items and places them in the shared buffer, and the consumer
is the process that removes items from the buffer and consumes them. A correct solution must
satisfy the following conditions:
1. The producer process should not produce any data when the shared buffer is full.
2. The consumer process should not consume any data when the shared buffer is empty.
3. Access to the shared buffer should be mutually exclusive, i.e., at a time only one process
should be able to access the shared buffer and make changes to it.
For consistent data synchronization between producer and consumer, the above conditions must be
satisfied.
Semaphores are variables used to indicate the number of resources available in the system at a
particular time. Semaphore variables are used to achieve process synchronization. The solution uses
three of them.
Full
The Full semaphore tracks the filled slots in the buffer. It is initialized to 0, as initially no slot is
filled by the producer process.
Empty
The Empty semaphore tracks the empty slots in the buffer. It is initially initialized to
BUFFER-SIZE, as initially the whole buffer is empty.
Mutex
Mutex is used to achieve mutual exclusion: it ensures that at any particular time only the producer
or the consumer is accessing the buffer.
We will use the signal() and wait() operations on the above-mentioned semaphores to arrive at a
solution to the producer-consumer problem.
Signal() - The signal operation increases the semaphore value by 1. Wait() - The wait operation
decreases the semaphore value by 1.
• wait(Full) - Before the consumer process starts consuming any item from the buffer, it checks
whether the buffer holds any item. The value of Full decreases by one when wait(Full) is
executed; consuming the item will then create one more empty slot in the buffer. If Full is
already zero, i.e., the buffer is empty, the consumer process cannot consume any item from the
buffer and goes into a busy-waiting state.
• wait(mutex) - It does the same as explained for the producer process: it decreases the mutex
by 1 and restricts other processes from entering the critical section until the consumer process
increases the value of mutex by 1.
• consume() - This function consumes an item from the buffer. While the code is in the
consume() function, no other process may access the critical section, which maintains data
consistency.
• signal(mutex) - After consuming the item, the consumer increases the mutex value by 1 so
that other processes which are in a busy-waiting state can access the critical section.
• signal(Empty) - When the consumer process consumes an item, it increases the value of the
Empty semaphore, indicating that the number of empty slots in the buffer has increased by 1.
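Putting the Full, Empty and mutex semaphores together, the bounded-buffer solution can be sketched as runnable Python; the buffer size and item count are hypothetical.

```python
import threading

BUFFER_SIZE = 4
buffer = []
mutex = threading.Semaphore(1)            # mutual exclusion on the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots
full = threading.Semaphore(0)             # counts filled slots
consumed = []

def producer():
    for item in range(10):
        empty.acquire()        # wait(Empty): need an empty slot
        mutex.acquire()        # wait(mutex): enter critical section
        buffer.append(item)    # produce the item
        mutex.release()        # signal(mutex)
        full.release()         # signal(Full): one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()         # wait(Full): need a filled slot
        mutex.acquire()        # wait(mutex)
        consumed.append(buffer.pop(0))   # consume the oldest item
        mutex.release()        # signal(mutex)
        empty.release()        # signal(Empty): one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] in production order
```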
A mutex is used to solve the producer-consumer problem because it provides mutual exclusion: it
prevents more than one process from entering the critical section. A mutex takes the binary values 0
and 1, so whenever any process tries to enter the critical section code, it first checks the mutex value
by using the wait operation.
wait(mutex);
wait(mutex) decreases the value of mutex by 1. Suppose a process P1 tries to enter the critical
section when the mutex value is 1: P1 executes wait(mutex) and decreases the value of mutex to 0
as it enters the critical section of the code.
Now suppose process P2 tries to enter the critical section. It will again try to decrease the value of
mutex, but the mutex value is already 0, so wait(mutex) blocks and P2 keeps waiting for P1 to come
out of the critical section.
signal(mutex)
signal(mutex) increases the value of mutex by 1, so the mutex value becomes 1 again. Now process
P2, which was in a busy-waiting state, is able to enter the critical section by completing
wait(mutex).
In both the producer process code and the consumer process code above, the wait and signal
operations on mutex provide mutual exclusion and solve the producer-consumer problem.
The dining philosophers problem is a classical synchronization problem which says that five
philosophers sit around a circular table, and their job is to think and eat alternately. A bowl of
noodles is placed at the center of the table, along with five chopsticks, one between each pair of
philosophers. To eat, a philosopher needs both the right and the left chopstick; a philosopher can eat
only if both the immediate left and right chopsticks are available. If they are not both available, the
philosopher puts down whichever chopstick is held (left or right) and starts thinking again.
The dining philosophers problem demonstrates a large class of concurrency-control problems;
hence it is a classic synchronization problem.
Five Philosophers sitting around the table
Dining Philosophers Problem - Let's understand the dining philosophers problem with the code
below; fig 1 is used as a reference to make the problem clear. The five philosophers are represented
as P0, P1, P2, P3 and P4, and the five chopsticks as C0, C1, C2, C3 and C4.
void Philosopher(int i)
{
    while(1)
    {
        take_chopstick[i];
        take_chopstick[(i+1) % 5];
        . . .
        EATING THE NOODLE
        . . .
        put_chopstick[i];
        put_chopstick[(i+1) % 5];
        . . .
        THINKING
    }
}
Suppose philosopher P0 wants to eat: it enters the Philosopher() function, executes
take_chopstick[i], holding C0, and then take_chopstick[(i+1) % 5], holding C1. Similarly, suppose
philosopher P1 now wants to eat: it enters the Philosopher() function and executes
take_chopstick[i], by which it holds C1, and then take_chopstick[(i+1) % 5], by which it holds C2
(since i = 1, (1 + 1) % 5 = 2).
But in practice chopstick C1 is not available, as it has already been taken by philosopher P0; hence
the above code generates problems and produces a race condition.
We use a semaphore to represent a chopstick and this truly acts as a solution of the Dining
Philosophers Problem. Wait and Signal operations will be used for the solution of the Dining
Philosophers Problem, for picking a chopstick wait operation can be executed while for releasing a
chopstick signal semaphore can be executed.
Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed only
through two standard atomic operations, wait and signal, whose definitions are as follows:
wait( S )
{
    while( S <= 0 )
        ;        // busy-wait
    S--;
}

signal( S )
{
    S++;
}
From the above definition of wait, it is clear that if the value of S <= 0, the process busy-waits
(because of the semicolon after the while condition) until S becomes positive, whereas the job of
signal is to increment the value of S.
The chopsticks are represented as an array of semaphores, as shown below:
semaphore take_chopstickC[5];
Initially, each element take_chopstickC[0] .. take_chopstickC[4] is initialized to 1, as the chopsticks
are on the table and not picked up by any of the philosophers.
Let's modify the above code of the dining philosophers problem by using the semaphore operations
wait and signal; the desired code looks like this:
void Philosopher(int i)
{
    while(1)
    {
        wait( take_chopstickC[i] );
        wait( take_chopstickC[(i+1) % 5] );
        . . .
        EATING THE NOODLE
        . . .
        signal( take_chopstickC[i] );
        signal( take_chopstickC[(i+1) % 5] );
        . . .
        THINKING
    }
}
In the above code, the wait operation is first performed on take_chopstickC[i] and
take_chopstickC[(i+1) % 5], which shows that philosopher i has picked up the chopsticks on the
left and right. The eating function is performed after that.
When philosopher i has finished eating, the signal operation is performed on take_chopstickC[i]
and take_chopstickC[(i+1) % 5], which shows that philosopher i has put down both the left and
right chopsticks. Finally, the philosopher starts thinking again.
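One caveat not covered above: if all five philosophers pick up their left chopstick at the same moment, this semaphore solution itself deadlocks. A common fix is to always acquire the lower-numbered chopstick first, which breaks the circular wait; a runnable sketch in Python follows.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all initially free
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Always pick up the lower-numbered chopstick first. Philosopher 4
    # therefore reaches for chopstick 0 before 4, so no cycle can form.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(3):
        chopstick[first].acquire()     # wait()
        chopstick[second].acquire()    # wait()
        meals[i] += 1                  # eating
        chopstick[second].release()    # signal()
        chopstick[first].release()     # signal()
        # thinking

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [3, 3, 3, 3, 3]: everyone ate, no deadlock
```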
Consider a file shared among a number of processes:
• If one of the processes tries editing the file, no other process should be reading or writing it at
the same time; otherwise the changes will not be visible to the others.
• However, if some process is reading the file, then other processes may read it at the same time.
Precisely this situation is what we call the readers-writers problem in OS.
Problem parameters:
• One set of data is shared among a number of processes.
• Only one writer may write at a time.
• If a process is writing, no other process can read the data.
• If at least one reader is reading, no other process can write.
Here priority means that no reader should wait if the share is currently opened for reading.
Three variables are used to implement the solution: mutex (protects readcnt), wrt (grants exclusive
access to writers), and readcnt (the number of readers currently in the critical section).
Writer process:
do {
    // writer requests entry to the critical section
    wait(wrt);

    // performs the write

    // leaves the critical section
    signal(wrt);
} while(true);

Reader process:
do {
    // reader wants to enter the critical section
    wait(mutex);
    readcnt++;                // one more reader inside
    if (readcnt == 1)         // the first reader locks out writers
        wait(wrt);
    signal(mutex);

    // current reader performs the read

    wait(mutex);
    readcnt--;                // a reader leaves
    if (readcnt == 0)         // the last reader lets writers in
        signal(wrt);
    signal(mutex);
} while(true);
Thus, the semaphore 'wrt' is used by both readers and writers in a manner such that preference is
given to readers when writers are also present. No reader is kept waiting simply because a writer
has requested to enter the critical section.
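The same scheme can be sketched as runnable Python; the reader and writer counts below are hypothetical.

```python
import threading

mutex = threading.Semaphore(1)  # protects readcnt
wrt = threading.Semaphore(1)    # held by a writer, or by the group of readers
readcnt = 0
value = 0                       # the shared data
reads = []

def reader():
    global readcnt
    mutex.acquire()
    readcnt += 1
    if readcnt == 1:            # first reader locks out writers
        wrt.acquire()
    mutex.release()
    reads.append(value)         # reading (many readers may be here at once)
    mutex.acquire()
    readcnt -= 1
    if readcnt == 0:            # last reader lets writers in
        wrt.release()
    mutex.release()

def writer():
    global value
    wrt.acquire()
    value += 1                  # writing, with exclusive access
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(value)   # 3: every writer update applied exactly once
```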
MONITORS:
Monitors are a higher-level synchronization construct that simplifies process synchronization by
providing a high-level abstraction for data access and synchronization. Monitors are implemented
as programming language constructs, typically in object-oriented languages, and provide mutual
exclusion, condition variables, and data encapsulation in a single construct.
Syntax of monitor in OS
monitor monitor_name
{
    // shared variable declarations

    procedure P1 (...) { ... }
    procedure P2 (...) { ... }
    ...
    procedure Pn (...) { ... }

    initializing_code;
}
Characteristics of Monitors in OS
Only one process can be active inside a monitor at a time; the shared data is accessible only through
the monitor's procedures; and synchronization is expressed through condition variables.
Condition Variables
There are two sorts of operations we can perform on the monitor's condition variables:
1. Wait
2. Signal
y.wait(): The activity/process that applies the wait operation on a condition variable will be
suspended, and the suspended process is located in the condition variable's block queue.
y.signal(): If an activity/process applies the signal action on the condition variable, then one of the
blocked activity/processes in the monitor is given a chance to execute.
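Python has no monitor construct, but a monitor with a condition variable can be approximated with threading.Condition; a sketch of a one-slot monitor-style buffer follows, with hypothetical names.

```python
import threading

class SlotMonitor:
    """Monitor-style one-slot buffer: the lock gives mutual exclusion,
    and the condition variable provides the wait()/signal() operations."""
    def __init__(self):
        self.cond = threading.Condition()   # lock + condition variable
        self.slot = None

    def put(self, item):
        with self.cond:                     # enter the monitor
            while self.slot is not None:
                self.cond.wait()            # y.wait(): suspend in the queue
            self.slot = item
            self.cond.notify_all()          # y.signal(): wake a waiter

    def get(self):
        with self.cond:                     # enter the monitor
            while self.slot is None:
                self.cond.wait()
            item, self.slot = self.slot, None
            self.cond.notify_all()
            return item

m = SlotMonitor()
out = []
c = threading.Thread(target=lambda: out.extend(m.get() for _ in range(3)))
c.start()
for x in "abc":
    m.put(x)
c.join()
print(out)   # ['a', 'b', 'c']
```

Note the `while` (not `if`) around each wait: a woken process must re-check the condition before proceeding, since another process may have run in between.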
Advantages of Monitor in OS
• Monitors offer the benefit of making concurrent or parallel programming easier and less error-
prone than semaphore-based solutions.
• It helps in process synchronization in the operating system.
• Monitors have built-in mutual exclusion.
• Monitors are easier to set up than semaphores.
• Monitors may avoid the timing errors that can occur with semaphores.
Disadvantages of Monitor in OS
• Monitors must be supported by the programming language; they cannot easily be added to a
language that lacks them.
• The compiler must implement the monitor's mutual exclusion, so monitors are less portable
than semaphores.
Interprocess Communication
Processes are of two types:
1. Independent Process
2. Cooperating Process
Independent processes are not affected by the execution of other processes, while cooperating
processes affect each other and may share data and information among themselves.
Interprocess Communication, or IPC, provides a mechanism to exchange data and information
across multiple processes, which might be on a single computer or on multiple computers connected
by a network. Its benefits include:
• Computational speedup
• Modularity
• Information and data sharing
• Privilege separation
• Processes can communicate with each other and synchronize their actions.
Pipes
• A pipe is a half-duplex (one-way) communication method used for IPC between two related
processes.
• It is like filling a bucket with water from a tap: the writing process fills the pipe, and the
reading process retrieves data from it.
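A pipe in this tap-and-bucket sense can be sketched with os.pipe; for brevity the writer and reader are in the same process here, where in practice they would be related processes.

```python
import os

read_fd, write_fd = os.pipe()    # a unidirectional channel: read end, write end

# The writing end fills the pipe...
os.write(write_fd, b"hello through the pipe")
os.close(write_fd)               # closing signals end-of-data to the reader

# ...and the reading end drains it.
data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())   # hello through the pipe
```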
Shared Memory
Multiple processes can access a common region of shared memory: one process makes changes at a
time, and the others then see the change. Once the shared region is set up, communication does not
go through the kernel.
Message Passing
Message Queues
The OS kernel stores messages in a linked list, and a message queue is identified by a "message
queue identifier".
Direct Communication
In direct communication, the processes that want to communicate must explicitly name the receiver
or sender of the message.
FIFO
A FIFO, or named pipe, is like a pipe but has a name in the file system, so it can be used for
communication between two unrelated processes.
Since multiple processes can read and write on shared variables and data, it is very important to
maintain consistency of data, else we can get error-prone results. To prevent this we synchronize the
processes. Some common ways to achieve this are given below.
• Semaphore: A variable that manages access by several processes to a shared resource. Binary
and counting semaphores are the two types of semaphores.
• Mutual Exclusion or mutex: a mechanism by which only one process or thread at a time can
enter the critical section. This avoids race conditions.
Advantages of IPC:
Data Sharing: IPC allows processes to share data with each other. This can be useful in
situations where one process needs to access data that is held by another process.
Resource Sharing: IPC allows processes to share resources such as memory, files, and
devices. This can help reduce the amount of memory or disk space that is required by a
system.
Synchronization: IPC allows processes to synchronize their activities. For example, one
process may need to wait for another process to complete its task before it can continue.
Modularity: IPC allows processes to be designed in a modular way, with each process
performing a specific task. This can make it easier to develop and maintain complex systems.
Scalability: IPC allows processes to be distributed across multiple systems, which can help
improve performance and scalability.
Overall, IPC is a powerful tool for building complex, distributed systems that require communication
and coordination between different processes.
Disadvantages of IPC:
Complexity: IPC can add complexity to the design and implementation of software systems,
as it requires careful coordination and synchronization between processes. This can lead to
increased development time and maintenance costs.
Overhead: IPC can introduce additional overhead, such as the need to serialize and
deserialize data, and the need to synchronize access to shared resources. This can impact the
performance of the system.
Scalability: IPC can also limit the scalability of a system, as it may be difficult to manage and
coordinate large numbers of processes communicating with each other.
Security: IPC can introduce security vulnerabilities, as it creates additional attack surfaces for
malicious actors to exploit. For example, a malicious process could attempt to gain
unauthorized access to shared resources or data.
Compatibility: IPC can also create compatibility issues between different systems, as
different operating systems and programming languages may have different IPC mechanisms
and APIs. This can make it difficult to develop cross-platform applications that work
seamlessly across different environments.
There are many examples of Inter-Process Communication (IPC) mechanisms used in modern
Operating Systems (OS). Here are some common examples:
Pipes: Pipes are a simple form of IPC used to allow communication between two processes.
A pipe is a unidirectional communication channel that allows one process to send data to
another process. The receiving process can read the data from the pipe, and the sending
process can write data to the pipe. Pipes are commonly used for shell pipelines, where the
output of one command is piped as input to another command.
Shared Memory: Shared Memory is a type of IPC mechanism that allows two or more
processes to access the same portion of memory. This can be useful for sharing large
amounts of data between processes, such as video or audio streams. Shared Memory is
faster than other IPC mechanisms since data is directly accessible in memory, but it requires
careful management to avoid synchronization issues.
Message Queues: Message Queues are another type of IPC mechanism used to allow
processes to send and receive messages. A message queue is a buffer that stores messages
until the receiver is ready to receive them. The sender can place messages in the queue, and
the receiver can retrieve messages from the queue.
Sockets: Sockets are a type of IPC mechanism used for communication between processes
on different machines over a network. A socket is a combination of an IP address and a port
number, which allows a process to connect to another process over the network. Sockets are
commonly used for client-server applications, where a server listens on a socket for
incoming connections, and clients connect to the server over the socket.
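A minimal sketch of socket-based IPC using a connected socket pair, which stands in for the client and server ends of a real network connection:

```python
import socket

# socketpair() returns two already-connected sockets, standing in for
# the client and server ends of a network connection
server, client = socket.socketpair()

client.sendall(b"ping")               # client sends a request
request = server.recv(1024)           # server receives it
server.sendall(request + b"-pong")    # server sends back a reply

reply = client.recv(1024)
print(reply.decode())   # ping-pong
server.close()
client.close()
```

In a real client-server application the sockets would instead be created with socket(), bind(), listen(), accept() and connect() over an IP address and port.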