Inter Process Communication (IPC)


Inter Process communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or the transferring of data from one process to another.
Inter Process Communication (IPC)
Types of Process
• Independent process.
An independent process is not affected by the execution of other
processes.
• Co-operating process.
A co-operating process can be affected by other executing processes.

Advantages of co-operating processes:
• Increasing computational speed
• Convenience
• Modularity
Inter Process Communication (IPC)

Establish a communication link (if a link already exists, there is no need to establish it again).
Start exchanging messages using basic primitives.
We need at least two primitives:
• send(message, destination) or send(message)
• receive(message, host) or receive(message)
Inter Process Communication (IPC)

A standard message has two parts: a header and a body.

The header is used for storing the message type, destination id, source id, message length, and control information. The control information covers things like what to do if the receiver runs out of buffer space, the sequence number, and the priority.
Approaches to Interprocess Communication
• Pipe:-
The pipe is a type of data channel that is unidirectional in nature: data in this channel can be moved in only a single direction at a time. Still, one can use two pipes of this type so that two processes can both send and receive data. Typically, a pipe uses the standard methods for input and output. Pipes are used in all types of POSIX systems and in different versions of the Windows operating system as well.
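As a sketch of the idea, a unidirectional pipe shared between a parent and a child process can be created in Python (assuming a POSIX system, since `os.pipe` and `os.fork` are Unix-only; the message text is illustrative):

```python
import os

# Create a unidirectional POSIX pipe: r is the read end, w is the write end.
r, w = os.pipe()

pid = os.fork()
if pid == 0:
    # Child process: write a message into the pipe, then exit.
    os.close(r)
    os.write(w, b"hello from the child process")
    os.close(w)
    os._exit(0)
else:
    # Parent process: read what the child wrote.
    os.close(w)
    message = os.read(r, 1024).decode()
    os.close(r)
    os.waitpid(pid, 0)
    print(message)
```

For two-way communication one would open two such pipes, one per direction, as the text above suggests.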

• Shared Memory:-
Shared memory is memory that can be used or accessed by multiple processes simultaneously. It is primarily used so that the processes can communicate with each other. Shared memory is supported by almost all POSIX and Windows operating systems.
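One minimal way to sketch this in Python (again assuming a POSIX system) is an anonymous shared memory mapping, which is inherited across `fork()` so parent and child read and write the same memory:

```python
import mmap
import os
import struct

# An anonymous shared mapping (MAP_SHARED on Unix) is inherited across
# fork(), so parent and child see the same bytes.
shm = mmap.mmap(-1, 8)
shm.write(struct.pack("q", 0))   # a 64-bit integer, initially 0

pid = os.fork()
if pid == 0:
    # Child: store a value directly in the shared region.
    shm.seek(0)
    shm.write(struct.pack("q", 42))
    os._exit(0)

# Parent: waiting for the child doubles as the synchronization that
# real shared-memory users must provide themselves (e.g. a semaphore).
os.waitpid(pid, 0)
shm.seek(0)
(value,) = struct.unpack("q", shm.read(8))
print(value)
```

Note that shared memory by itself provides no synchronization; that is why it is usually combined with semaphores or other mechanisms discussed later.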
Approaches to Interprocess Communication
• Message Queue:-
In general, several different processes are allowed to read and write messages to the message queue. The messages stay in the queue until their recipients retrieve them. In short, the message queue is very helpful in inter-process communication and is provided by all major operating systems.

• Message Passing:-
Message passing is a mechanism that allows processes to synchronize and communicate with each other. By using message passing, processes can communicate with each other without resorting to shared variables.
The inter-process communication mechanism provides two operations:
• send (message)
• receive (message)
Approaches to Interprocess Communication
• Direct Communication:-
In this type of communication process, usually, a link is created or
established between two communicating processes. However, in every
pair of communicating processes, only one link can exist.

• Indirect Communication:-
Indirect communication can only be established when processes share a common mailbox. Each pair of such processes may share several communication links, and these links can be unidirectional or bi-directional.

FIFO:-
• A FIFO (named pipe) provides general communication between two unrelated processes. It can also be considered full-duplex, meaning that one process can communicate with another process and vice versa.
Some other different approaches
• Socket:-
A socket acts as an endpoint for sending or receiving data over a network. It is suitable both for data sent between processes on the same computer and for data sent between different computers on the same network. Hence, it is used by several types of operating systems.
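A small sketch using a connected pair of UNIX-domain sockets; in real use the two endpoints would normally live in different processes or on different machines, but the send/receive pattern is the same:

```python
import socket

# A connected pair of UNIX-domain sockets: each end can send and receive.
a, b = socket.socketpair()

a.sendall(b"ping")          # endpoint a sends a request
request = b.recv(1024)      # endpoint b receives it
b.sendall(b"pong")          # b replies
reply = a.recv(1024)        # a receives the reply

print(request, reply)
a.close()
b.close()
```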

• File:-
A file is a data record or a document stored on disk that can be retrieved on demand by the file server. Importantly, several processes can access that file as required or needed.
Some other different approaches
• Signal:-
As the name implies, signals are used in inter-process communication in a minimal way. Typically, they are system messages sent from one process to another. Therefore, they are not used for sending data but for sending commands between processes.
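A minimal Python sketch (POSIX only): a process would normally signal *another* process by its pid, but to keep the example self-contained the process here sends SIGUSR1 to itself. Note that no data travels with the signal, only the signal number:

```python
import os
import signal

received = []

def handler(signum, frame):
    # The handler runs when SIGUSR1 is delivered; the signal itself
    # carries no payload, just its number.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)

# Normally: os.kill(other_pid, signal.SIGUSR1). Self-signalling here
# keeps the example in one process.
os.kill(os.getpid(), signal.SIGUSR1)

print(received)
```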
What is a race condition?
A race condition is a situation that may occur inside a critical section. It happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute.

Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
What is a race condition?

A race condition can occur in the following process. Executed one thread after the other, the result is correct:

| Thread 1              | Thread 2              | Integer value |
|-----------------------|-----------------------|---------------|
|                       |                       | 10            |
| Read the value        |                       | 10            |
| Reduce the value by 2 |                       | 10            |
| Write the value       |                       | 8             |
|                       | Read the value        | 8             |
|                       | Reduce the value by 2 | 8             |
|                       | Write the value       | 6             |
What is a race condition?
If the operations of these two threads execute concurrently
without the concept of locking or synchronization, the outcome
of the operations could be wrong.

| Thread 1              | Thread 2              | Integer value |
|-----------------------|-----------------------|---------------|
| Read the value        |                       | 10            |
|                       | Read the value        | 10            |
| Reduce the value by 2 |                       | 10            |
|                       | Reduce the value by 2 | 10            |
| Write the value       |                       | 8             |
|                       | Write the value       | 8             |
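The locking fix described above can be sketched with Python threads; the variable names are illustrative, and the lock makes each read-reduce-write sequence atomic, so the final value is always 6 regardless of scheduling:

```python
import threading

value = 10
lock = threading.Lock()

def reduce_by_two():
    global value
    # The lock makes the read-modify-write atomic: the second thread
    # cannot read the value until the first has written it back.
    with lock:
        v = value          # read the value
        v = v - 2          # reduce the value by 2
        value = v          # write the value

t1 = threading.Thread(target=reduce_by_two)
t2 = threading.Thread(target=reduce_by_two)
t1.start(); t2.start()
t1.join(); t2.join()
print(value)   # 6, never 8
```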
What is a Critical Section?
When more than one process accesses the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources which need to be synchronized to maintain the consistency of the data.

The critical section can be accessed by only one process at a time. In the critical section there are many variables and functions that are shared among different processes. Now the question is: which process will access the critical section first? That depends on the selected scheduling algorithm, which decides which process gets into the critical section first and uses the shared variables and functions there.
What is a Critical Section?
The critical section can be defined by the following three sections:
1. Entry Section : Each process must request permission to enter its critical section; processes enter the critical section through the entry section.
2. Exit Section : Each critical section is terminated by the exit section.
3. Remainder Section : The code after the exit section is the remainder section.
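The three sections can be sketched with Python threads, where a `threading.Lock` plays the role of the permission mechanism (the names and log messages are illustrative):

```python
import threading

lock = threading.Lock()
log = []

def process(name):
    lock.acquire()        # entry section: request permission to enter
    # ---- critical section: only one thread runs this at a time ----
    log.append(name + " in critical section")
    # ---------------------------------------------------------------
    lock.release()        # exit section
    log.append(name + " in remainder section")   # remainder section

threads = [threading.Thread(target=process, args=(n,)) for n in ("P1", "P2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))   # 4 log entries: two per process
```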
What is a Critical Section?
Requirements of Synchronization mechanisms
The solution to the critical section problem must satisfy the following conditions −

Mutual Exclusion: Mutual exclusion implies that only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free.

Progress: Progress means that if a process is not using the critical section, then it should not stop any other process from accessing it. In other words, any process can enter a critical section if it is free.

Bounded Waiting: Bounded waiting means that each process must have a limited waiting time. It should not wait endlessly to access the critical section.
Critical Section Solutions :
1. Disabling Interrupts :
This is the simplest solution. When a process enters its critical region, it disables interrupts, and once it leaves the critical section, it enables them again.
The CPU switches from one process to another because of clock or other interrupts. When interrupts are disabled, CPU switching cannot happen, so the process can complete its shared-memory operations without worrying about another process entering.
The important point is that this gives a process the power to disable interrupts. That is unwise, because if a process disables interrupts and never re-enables them, many problems can occur. Only the kernel can safely enable and disable interrupts, so this is not a good approach for mutual exclusion.
Critical Section Solutions :
2. Lock Variables :
Here we declare a single shared variable (lock), initialized to 0. If a process finds that lock is 0, it enters the critical region and sets the lock variable to 1; just before leaving, it sets it back to 0.
If another process enters the entry section and sees that the lock variable is 1, it waits until it becomes 0. But there is a case where, while one process is waiting, the lock becomes 0 and a second process enters and sets the lock variable to 1 before the waiting process does. So this approach is also not effective, because this solution brings its own race condition.
Critical Section Solutions :
2. Lock Variables : (Mutual Exclusion)
We have two processes P1 and P2. The process P1 wants to execute its critical section. P1 gets into the entry section and finds that the value of lock is 0, but it is preempted by the CPU before it can change the value from 0 to 1.

Now P2 gets scheduled. There is no other process in the critical section and the value of the lock variable is still 0. P2 also wants to execute its critical section, so it enters the critical section by setting the lock variable to 1.

Later, the CPU changes P1's state from waiting to running. P1 is yet to finish its entry section. P1 has already checked the value of the lock variable and remembers that its value was 0 when it previously checked it.
Critical Section Solutions :
2. Lock Variables : (Mutual Exclusion)
Hence, P1 also enters the critical section, without checking the updated value of the lock variable.

Now we have two processes in the critical section. According to the condition of mutual exclusion, more than one process must not be present in the critical section at the same time. Hence, the lock variable mechanism doesn't guarantee mutual exclusion.

The problem with the lock variable mechanism is that, at the same moment, more than one process can see the vacant tag and more than one process can enter the critical section. Hence, the lock variable doesn't provide mutual exclusion, and that's why it cannot be used in general.
Critical Section Solutions :
3. Test Set Lock Mechanism :
Sometimes a process reads the old value of the lock variable and enters the critical section. For this reason, more than one process might get into the critical section. By changing the order of operations in the entry code, we can provide mutual exclusion to some extent, but not completely.
Critical Section Solutions :
3. Test Set Lock Mechanism :
In the updated version of the code (Section 2 below), the value of lock is loaded into the local register R0 and then the value of lock is set to 1.
Then, in step 3, the previous value of lock (now stored in R0) is compared with 0. If it is 0, the process simply enters the critical section; otherwise it waits by executing the loop continuously.
The benefit of having the process set the lock to 1 immediately is that the process which enters the critical section carries the updated value of the lock variable, which is 1.
Section 1               Section 2

1. Load Lock, R0        1. Load Lock, R0
2. CMP R0, #0           2. Store #1, Lock
3. JNZ step 1           3. CMP R0, #0
4. Store #1, Lock       4. JNZ step 1
Critical Section Solutions :
3. Test Set Lock Mechanism :
The previous solution provides mutual exclusion to some extent, but it doesn't make sure that mutual exclusion will always hold. There is still a possibility of having more than one process in the critical section.
Section 2

1. Load Lock, R0
2. Store #1, Lock
3. CMP R0, #0
4. JNZ step 1

What if the process gets preempted just after executing the first instruction of the assembly code written in section 2? In that case, it will carry the old value of the lock variable with it and will enter the critical section regardless of the current value of the lock variable. This can again put two processes in the critical section at the same time.
Critical Section Solutions :
3. Test Set Lock Mechanism :
To get rid of this problem, we have to make sure that preemption cannot take place between loading the previous value of the lock variable and setting it to 1. The problem can be solved if we can merge the first two instructions.
To address this, the hardware provides a special instruction called Test and Set Lock (TSL), which loads the value of the lock variable into the local register R0 and sets it to 1 in a single, indivisible step.
The process which executes the TSL first will enter into the critical
section and no other process after that can enter until the first
process comes out.
Section 2

TSL Lock, R0
CMP R0, #0
JNZ step 1

No process can execute the critical section even in the case of preemption of the first process.
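A sketch of the idea in Python: since Python has no TSL instruction, the atomicity of test-and-set is simulated here with an internal lock that stands in for the hardware guarantee (real TSL needs no such lock, being a single indivisible instruction):

```python
import threading

# `_hw` simulates the hardware guarantee that TSL executes atomically.
_hw = threading.Lock()
lock_var = 0

def tsl():
    """Atomically load the old value of lock_var and set it to 1."""
    global lock_var
    with _hw:
        old = lock_var
        lock_var = 1
        return old

counter = 0

def worker():
    global counter, lock_var
    for _ in range(1000):
        while tsl() != 0:      # spin until the old value read was 0
            pass
        counter += 1           # critical section
        lock_var = 0           # release the lock

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 2000: every increment happened under mutual exclusion
```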
Critical Section Solutions :
3. Test Set Lock Mechanism :
• Mutual Exclusion is guaranteed in TSL mechanism since a process
can never be preempted just before setting the lock variable.
• According to the definition of the progress, a process which
doesn't want to enter in the critical section should not stop other
processes to get into it.
• Bounded Waiting is not guaranteed in TSL. Some process might not get a chance for a long time; we cannot guarantee that a given process will enter the critical section within a bounded time.
Critical Section Solutions :
4. Strict Alternation Approach :
The strict alternation approach is a software mechanism implemented in user mode. It is a busy-waiting solution which can be implemented only for two processes. In this approach, a turn variable is used, which actually acts as a lock.

In general, let the two processes be Pi and Pj. They share a variable called the turn variable.

This algorithm ensures that only one process at a time can be in the critical section, but it does not satisfy the progress condition.
Critical Section Solutions :
5. Peterson’s Solution :
void Entry_Section(int process)
{
    1. int other;                                   /* index of the other process */
    2. other = 1 - process;
    3. interested[process] = TRUE;                  /* interested[] is shared by both processes */
    4. turn = process;
    5. while (interested[other] == TRUE && turn == process);   /* busy wait */
}

    /* Critical Section */

void Exit_Section(int process)
{
    6. interested[process] = FALSE;
}
Critical Section Solutions :
5. Peterson’s Solution :
Let us consider two cooperating processes P1 and P2. The entry section and exit section are shown earlier. Initially, the interested variables and the turn variable are 0.
First, process P1 arrives and wants to enter the critical section. It sets its interested variable to TRUE (line 3) and sets turn to its own number (line 4). Since the condition in line 5 is false (the other process is not interested), P1 does not wait and enters the critical section.
Critical Section Solutions :
5. Peterson’s Solution :
Meanwhile, process P1 is preempted and process P2 gets scheduled. P2 also wants to enter the critical section and executes instructions 1, 2, 3 and 4 of the entry section. At instruction 5 it gets stuck, since the condition holds (the other process's interested variable is still TRUE). Therefore it goes into busy waiting.

P1 gets scheduled again and finishes its critical section, executing instruction 6 of the exit section (setting its interested variable to FALSE). Now when P2 checks the condition again, it fails, since the other process's interested variable has become FALSE. P2 therefore also enters the critical section.
Either process may enter the critical section any number of times; the procedure repeats in this cyclic order.
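The pseudocode above translates into runnable Python as a sketch. A caveat: Peterson's algorithm assumes sequentially consistent memory, which CPython's interpreter happens to provide for these simple loads and stores, but which modern CPUs and compilers do not guarantee in general without memory barriers:

```python
import threading

interested = [False, False]   # one flag per process
turn = 0
counter = 0

def entry_section(process):
    global turn
    other = 1 - process           # lines 1-2
    interested[process] = True    # line 3
    turn = process                # line 4
    while interested[other] and turn == process:   # line 5: busy wait
        pass

def exit_section(process):
    interested[process] = False   # line 6

def worker(process):
    global counter
    for _ in range(100):
        entry_section(process)
        counter += 1              # critical section
        exit_section(process)

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)   # 200: no increment was lost
```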
Critical Section Solutions :
5. Peterson’s Solution :
Peterson’s Solution preserves all three conditions:
1. Mutual Exclusion is assured as only one process can access the
critical section at any time.
2. Progress is also assured, as a process outside the critical section
does not block other processes from entering the critical
section.
3. Bounded Waiting is preserved as every process gets a fair
chance.
Critical Section Solutions :
5. Peterson’s Solution :
Disadvantages of Peterson’s solution:

1. It involves busy waiting. (In Peterson's solution, the statement “while (interested[other] == TRUE && turn == process);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.)
2. It is limited to 2 processes.
3. Peterson’s solution cannot be used in modern CPU
architectures.
What is Semaphore?
Semaphore is simply a variable that is non-negative and shared between threads.

A semaphore is a signalling mechanism: a thread that is waiting on a semaphore can be signalled by another thread. It uses two atomic operations, 1) wait and 2) signal, for process synchronization.

A semaphore either allows or disallows access to the resource, depending on how it is set up.
What is Semaphore?
Types of Semaphores
The two common kinds of semaphores are
• Counting semaphores
• Binary semaphores.
Semaphore
• Counting Semaphores
This type of semaphore uses a count that allows a resource to be acquired or released numerous times. If the initial count = 0, the counting semaphore is created in the unavailable state.
However, if the count is > 0, the semaphore is created in the available state, and the number of tokens it has equals its count.

• Binary Semaphores
Binary semaphores are quite similar to counting semaphores, but their value is restricted to 0 and 1. In this type of semaphore, the wait operation proceeds only if the semaphore = 1, and the signal operation succeeds when the semaphore = 0. Binary semaphores are easier to implement than counting semaphores.
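A binary semaphore used as a lock can be sketched with Python's `threading.Semaphore` (the task names are illustrative):

```python
import threading

# A binary semaphore: its value is either 0 or 1. Initialized to 1,
# so the first acquirer gets in and the rest must wait.
sem = threading.Semaphore(1)
order = []

def task(name):
    sem.acquire()        # wait(): decrements; blocks while the value is 0
    order.append(name)   # critical section
    sem.release()        # signal(): increments, waking a waiter if any

threads = [threading.Thread(target=task, args=(n,)) for n in ("A", "B", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(order))   # all three tasks ran, one at a time
```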
Semaphore
Some points regarding the P and V operations:
• The P operation is also called the wait, sleep, or down operation, and the V operation is also called the signal, wake-up, or up operation.
• Both operations are atomic, and the semaphore S is typically initialized to one. Here atomic means that the read, modify, and update of the variable happen together, with no preemption; in between the read, modify, and update, no other operation is performed that could change the variable.
• A critical section is surrounded by both operations to implement process synchronization.
Wait and Signal Operations in Semaphores
Wait operation
This semaphore operation controls the entry of a task into the critical section. If the value of the semaphore S is positive, it is decremented; if the value is zero or negative, the process performing the operation is blocked. It is also called the P(S) operation.

If the semaphore value is decremented and becomes negative, the calling process is held up until the required condition is satisfied.
Wait and Signal Operations in Semaphores
Signal operation
This type of Semaphore operation is used to control the exit of a
task from a critical section. It helps to increase the value of the
argument by 1, which is denoted as V(S).
Wait and Signal Operations in Semaphores
Counting semaphores can be used to control access to resources that have a finite number of instances. Every process that needs a resource performs the wait() operation on the semaphore and decrements it.

When a process releases the resource, it uses the signal() operation, and the semaphore is incremented. When the semaphore's value is zero, all resources are in use. At that point, any process that needs a resource is blocked until the semaphore's value becomes greater than zero.
Wait and Signal Operations in Semaphores
To perform synchronization using semaphores, following are the
steps −
Step 1 − Create a semaphore or connect to an already existing
semaphore (semget())
Step 2 − Perform operations on the semaphore i.e., allocate or
release or wait for the resources (semop())
Step 3 − Perform control operations on the semaphore
(semctl())
Monitors
Monitors are a programming-language construct that aids in the regulation of shared data access. A monitor is a package that contains shared data structures, operations, and synchronization between concurrent procedure calls. Therefore, a monitor is also known as a synchronization tool. Java, C#, Visual Basic, Ada, and Concurrent Euclid are among the languages that support monitors. Processes operating outside the monitor can't access the monitor's internal variables, but they can call the monitor's procedures.

For example, synchronization methods like the wait() and notify() constructs are available in the Java programming language.
Monitors
• It is the collection of condition variables and procedures
combined together in a special kind of module or a package.
• The processes running outside the monitor can’t access the
internal variable of the monitor but can call procedures of the
monitor.
• Only one process at a time can execute code inside monitors.
Monitors
Condition Variables
Two different operations are performed on the condition variables of the monitor:
• wait
• signal
Wait operation x.wait(): A process performing a wait operation on a condition variable is suspended. The suspended processes are placed in the blocked queue of that condition variable.
Signal operation x.signal(): When a process performs a signal operation on a condition variable, one of the blocked processes is given a chance to proceed.
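The wait/signal pattern can be sketched with Python's `threading.Condition`, whose internal lock plays the role of the monitor lock, so only one thread at a time runs inside the "monitor" procedures (the procedure names are illustrative):

```python
import threading

# The condition variable's lock acts as the monitor lock: only one
# thread at a time executes inside produce()/consume().
cond = threading.Condition()
items = []

def consume():
    with cond:
        while not items:        # x.wait(): suspend until signalled
            cond.wait()
        return items.pop(0)

def produce(item):
    with cond:
        items.append(item)
        cond.notify()           # x.signal(): wake one blocked thread

result = []
t = threading.Thread(target=lambda: result.append(consume()))
t.start()                       # consumer blocks: nothing to consume yet
produce("data")                 # producer signals the waiting consumer
t.join()
print(result)
```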
Semaphore can be used in other synchronization problems besides
Mutual Exclusion.
Differences between the Semaphore and Monitor

• A semaphore is an integer variable that allows many processes in a parallel system to manage access to a common resource, as in a multitasking OS. A monitor, on the other hand, is a synchronization construct that enables threads to achieve mutual exclusion and to wait() for a given condition to become true.
• When a process uses shared resources with a semaphore, it calls the wait() method to block the resources and executes the signal() method to release them. In contrast, when a process uses shared resources in a monitor, it has to access them via the monitor's procedures.
Differences between the Semaphore and Monitor
• A semaphore is an integer variable, whereas a monitor is an abstract data type.
• In a semaphore, an integer variable shows the number of resources available in the system. In contrast, a monitor is an abstract data type that permits only one process to execute in the critical section at a time.
• Semaphores have no concept of condition variables, while monitors have condition variables.
• A semaphore's value can only be changed using the wait() and signal() operations. In contrast, the monitor has shared variables as well as the tools that enable processes to access them.
CLASSICAL IPC PROBLEMS
1. Bounded Buffer (Producer-Consumer) Problem
2. The Readers Writers Problem
3. Dining Philosophers Problem
CLASSICAL IPC PROBLEMS
1. Bounded Buffer (Producer-Consumer) Problem:
The bounded buffer problem is also called the producer-consumer problem; it is the general form of that problem. The solution is to create two counting semaphores, “full” and “empty”, to keep track of the current number of full and empty buffer slots respectively. Producers produce items and consumers consume them, but both use one of the buffer slots each time.
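The "full"/"empty" counting-semaphore solution sketched above can be written with Python threads; the buffer size and item counts are illustrative:

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty buffer slots
full = threading.Semaphore(0)             # counts full buffer slots
mutex = threading.Lock()                  # protects the buffer itself
consumed = []

def producer():
    for i in range(10):
        empty.acquire()          # wait(empty): claim an empty slot
        with mutex:
            buffer.append(i)
        full.release()           # signal(full): one more full slot

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty): the slot is free again

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)   # all 10 items, in order, none lost or duplicated
```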
CLASSICAL IPC PROBLEMS
2. The Readers Writers Problem:
Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. In OS terms, this situation is called the readers-writers problem. Problem parameters:
• One set of data is shared among a number of processes.
• Once a writer is ready, it performs its write. Only one writer may write at a time.
• If a process is writing, no other process can read the data.
• If at least one reader is reading, no other process can write.
• Readers may only read; they may not write.
CLASSICAL IPC PROBLEMS
3. Dining Philosophers Problem:
The dining philosophers problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by both. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
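One classic deadlock-free strategy (resource ordering, which is one of several known solutions and not the only one) can be sketched in Python: every philosopher picks up the lower-numbered chopstick first, which breaks the circular wait:

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    # Always pick up the lower-numbered chopstick first; this resource
    # ordering breaks the circular wait and so prevents deadlock.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1    # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # every philosopher ate all 10 meals: no deadlock, no starvation
```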
