OS Lec 25-26


Operating Systems

Week 13- Lecture 25&26 – Distributed Coordination


Event Ordering
• In a distributed system we cannot rely on physical clocks, even if the computers themselves have them, because two clocks on two different systems can never be perfectly synchronized; in reality, there will always be some drift between any two physical clocks.
• So, physical time is mostly ignored when it comes to ordering events within a distributed system; only the events themselves are observable within the system.
• We assume that a single process is completely sequential, so that all the events within a single process can be totally ordered, giving the exact sequence in which those events occurred within that process.
• An event is simply the sending or receiving of a message.
• Partial ordering of events is denoted by “⇢”, and we can define it as follows: if two events a and b are within the same process and a comes before b, then a ⇢ b, i.e., a happened before b.
• If two processes are communicating with each other, a is the sending of a message, and b is the receipt of that message by the other process, then we define a as happening before b.
• Lastly, the relation is transitive: if a happened before b and b happened before c, then a happened before c as well; a ⇢ b and b ⇢ c implies a ⇢ c.
• Since this is a partial ordering, it can well happen that for two events a and b neither happened before the other; we simply cannot tell by looking at the observable events in the system, and in that case we say that a and b happened concurrently.
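The happened-before rules above are exactly what Lamport logical clocks implement. A minimal sketch (class and method names are illustrative, not from the lecture): every local event and send increments the clock, and a receive jumps past the sender's timestamp, so a ⇢ b always implies C(a) < C(b).

```python
# Sketch of Lamport logical clocks implementing the happened-before rules.
class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        # Any internal event advances the logical clock.
        self.clock += 1
        return self.clock

    def send(self):
        # Sending a message is an event; its timestamp travels with it.
        self.clock += 1
        return self.clock

    def receive(self, msg_timestamp):
        # On receipt, jump past the sender's timestamp so that
        # send ⇢ receive always yields an increasing clock value.
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process("P"), Process("Q")
t_send = p.send()           # event a: P sends a message
t_recv = q.receive(t_send)  # event b: Q receives it
assert t_send < t_recv      # a ⇢ b implies C(a) < C(b)
```

Note the converse does not hold: two concurrent events may still have ordered clock values, which is why this captures only a partial ordering.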
Event Ordering
• P, Q, and R are processes; each dot on a line
denotes an event, and the curved lines denote
messages sent between processes. From this
representation we can relate p1 and r4
and say that p1 happened before r4:
p1 sent a message to Q, received as
event q2; moving forward in time within
process Q, a message is sent to process R
and received there at r3. So we can
say p1 happens before r4. Events r2 and q6 are
concurrent, as they cannot causally affect one
another.
• The happened-before relation is an irreflexive partial
ordering on the set of all events happening in the
system, i.e., (a ⇢ a) does not hold for any event a.
• This relates back to Einstein’s special theory of
relativity, where events are ordered in terms of
messages that could possibly be sent.
Mutual Exclusion
• Mutual exclusion is a concurrency control property introduced to prevent race
conditions. It is the requirement that a process cannot enter its critical section while
another concurrent process is currently present or executing in its critical section, i.e., only
one process is allowed to execute the critical section at any given instant of time.
• Requirements of Mutual exclusion Algorithm:
• No Deadlock:
Two or more sites should not endlessly wait for messages that will never arrive.
• No Starvation:
Every site that wants to execute its critical section should get an opportunity to execute it in
finite time. No site should wait indefinitely to execute the critical section while other sites
repeatedly execute it.
• Fairness:
Each site should get a fair chance to execute the critical section. Requests to execute the critical
section must be served in the order they are made, i.e., critical-section execution requests
should be executed in the order of their arrival in the system.
• Fault Tolerance:
In case of a failure, the system should be able to recognize it by itself in order to continue
functioning without disruption.
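The fairness requirement is usually met by ordering requests with logical timestamps, as in Lamport's distributed mutual-exclusion algorithm. A minimal sketch under simplifying assumptions (the network, REPLY, and RELEASE messages are elided; site names are hypothetical): requests are queued by (timestamp, site id) and granted strictly in that order.

```python
# Sketch: serving critical-section requests in (timestamp, site_id) order,
# the fairness rule of Lamport's distributed mutual-exclusion algorithm.
# Message passing is elided; only the request ordering is shown.
import heapq

request_queue = []  # min-heap of (timestamp, site_id) pairs

def request_cs(timestamp, site_id):
    # A site asks to enter its critical section.
    heapq.heappush(request_queue, (timestamp, site_id))

def grant_next():
    # The request at the head of the queue enters the critical section.
    return heapq.heappop(request_queue)

request_cs(5, "S2")
request_cs(3, "S1")
request_cs(3, "S3")

# Served by timestamp, ties broken by site id: S1, S3, then S2.
order = [grant_next() for _ in range(3)]
```

Because every site applies the same total order (timestamp, then site id), no request can be overtaken indefinitely, which rules out starvation.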
Concurrency Control
• Concurrency controlling techniques ensure that multiple transactions are executed
simultaneously while maintaining the ACID properties of the transactions and serializability
in the schedules.
• Locking Based Concurrency Control Protocols
• Locking-based concurrency control protocols use the concept of locking data items.
A lock is a variable associated with a data item that determines whether read/write
operations can be performed on that data item. Generally, a lock compatibility matrix is used
which states whether a data item can be locked by two transactions at the same time.
• Locking-based concurrency control systems can use either one-phase or two-phase locking
protocols.
• One-phase Locking Protocol
• In this method, each transaction locks an item before use and releases the lock as soon as it
has finished using it. This locking method provides for maximum concurrency but does not
always enforce serializability.
Concurrency Control
• Two-phase Locking Protocol
• In this method, all locking operations precede the first lock-release or unlock operation. The
transaction comprises two phases. In the first phase, a transaction only acquires all the
locks it needs and does not release any lock. This is called the expanding or growing
phase. In the second phase, the transaction releases the locks and cannot request any new
locks. This is called the shrinking phase.
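The two-phase discipline can be sketched as a small guard object (the class name is illustrative): locks may be acquired only until the first release, after which any further lock request is rejected.

```python
# Minimal sketch of the two-phase locking (2PL) discipline:
# all lock acquisitions must precede the first unlock.
class TwoPhaseTransaction:
    def __init__(self):
        self.held = set()
        self.shrinking = False  # flips to True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock requested after first unlock")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True   # entering the shrinking phase
        self.held.discard(item)

t = TwoPhaseTransaction()
t.lock("A")
t.lock("B")      # growing phase: acquire every needed lock
t.unlock("A")    # first release starts the shrinking phase
try:
    t.lock("C")  # rejected: would violate the two-phase rule
except RuntimeError:
    pass
```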
• Time Stamp Ordering Protocol: A timestamp is a tag that can be attached to any
transaction or any data item, which denotes a specific time at which the transaction or the
data item was used in some way. A timestamp can be implemented in two ways. One is to
directly assign the current value of the clock to the transaction or data item. The other is to
attach the value of a logical counter that is incremented as new timestamps are required.
The timestamp of a data item can be of two types:
• (i) W-timestamp(X): This means the latest time when the data item X has been written into.
• (ii) R-timestamp(X): This means the latest time when the data item X has been read from.
These 2 timestamps are updated each time a successful read/write operation is performed on
the data item X.
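Using the W-timestamp and R-timestamp defined above, the basic timestamp-ordering checks can be sketched as follows (a simplified sketch; function names are illustrative, and Thomas' write rule and rollback handling are omitted). An operation returns False when it would have to be rejected, meaning the requesting transaction must abort.

```python
# Sketch of the basic timestamp-ordering protocol checks on a data item X.
class DataItem:
    def __init__(self):
        self.w_ts = 0  # W-timestamp(X): latest write on X
        self.r_ts = 0  # R-timestamp(X): latest read of X

def read(item, ts):
    if ts < item.w_ts:               # a younger transaction already wrote X
        return False                 # reject: the reader must abort
    item.r_ts = max(item.r_ts, ts)   # successful read updates R-timestamp
    return True

def write(item, ts):
    if ts < item.r_ts or ts < item.w_ts:
        return False                 # reject: write arrives out of timestamp order
    item.w_ts = ts                   # successful write updates W-timestamp
    return True

x = DataItem()
assert write(x, 10)      # T10 writes X
assert not read(x, 5)    # T5 is too old to read X: rejected
assert read(x, 12)       # T12 reads X; R-timestamp(X) becomes 12
assert not write(x, 11)  # T11 cannot write after T12's read
```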
Deadlock Handling
• Deadlock is a situation where a process or a set of processes is blocked, waiting for some
other resource that is held by some other waiting process. It is an undesirable state of the
system. The following are the four conditions that must hold simultaneously for a deadlock
to occur.
• Mutual Exclusion – A resource can be used by only one process at a time. If another
process requests that resource, the requesting process must be delayed until the
resource has been released.
• Hold and wait – Some processes must be holding some resources in non shareable mode
and at the same time must be waiting to acquire some more resources, which are currently
held by other processes in non-shareable mode.
• No pre-emption – Resources granted to a process can be released back to the system only as
a result of voluntary action of that process, after the process has completed its task.
• Circular wait – Deadlocked processes are involved in a circular chain such that each
process holds one or more resources being requested by the next process in the chain.
Deadlock Handling
• Deadlock Handling Methods
• 1. Deadlock Prevention : The strategy of deadlock prevention is to design the system
in such a way that the possibility of deadlock is excluded. Indirect methods prevent the
occurrence of one of three necessary conditions of deadlock, i.e., mutual exclusion, no
pre-emption, and hold and wait. Direct methods prevent the occurrence of circular
wait. Prevention techniques – Mutual exclusion – is supported by the OS. Hold and
Wait – this condition can be prevented by requiring that a process request all its required
resources at one time, blocking the process until all of its requests can be granted
simultaneously. But this prevention does not yield good results because :
• long waiting times are required
• inefficient use of allocated resources
• A process may not know all the required resources in advance
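The hold-and-wait prevention rule above can be sketched as an all-or-nothing grant (resource names are hypothetical): a process receives its entire resource set at once, or nothing at all, so it never holds some resources while waiting for others.

```python
# Sketch of hold-and-wait prevention: a process requests all of its
# resources at once and is granted them only if every one is free.
free = {"printer", "disk", "tape"}  # resources currently available

def acquire_all(needed):
    needed = set(needed)
    if needed <= free:               # grant only if ALL are available
        free.difference_update(needed)
        return True
    return False                     # grant nothing; process keeps waiting

assert acquire_all({"printer", "disk"})   # both free: granted together
assert not acquire_all({"disk", "tape"})  # disk is held: nothing is granted
assert "tape" in free                     # tape was not partially taken
```

The last assertion shows the cost noted above: the tape sits idle (inefficient use) while the second process waits for the whole set.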
Deadlock Handling
• 2. Deadlock Avoidance : This approach allows the first three necessary conditions of deadlock
but makes judicious choices to ensure that the deadlock point is never reached. It allows more
concurrency than prevention. A decision is made dynamically whether the current
resource-allocation request will, if granted, potentially lead to deadlock. It requires
knowledge of future process requests. Two techniques to avoid deadlock :
• Process initiation denial
• Resource allocation denial
• 3. Deadlock Detection : Deadlock detection employs an algorithm that tracks
circular waiting and kills one or more processes so that the deadlock is removed. The
system state is examined periodically to determine whether a set of processes is deadlocked. A
deadlock is resolved by aborting and restarting a process, relinquishing all the resources
that the process held.
• This technique does not limit resources access or restrict process action.
• Requested resources are granted to processes whenever possible.
• It never delays the process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
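Detection typically works on a wait-for graph, where an edge P → Q means process P is waiting for a resource held by Q; a cycle in this graph is exactly the circular-wait condition. A minimal sketch using depth-first search (process names are illustrative):

```python
# Sketch of deadlock detection: find a cycle in a wait-for graph.
# wait_for maps each process to the processes it is waiting on.
def has_cycle(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)  # nodes on the current DFS path
        for nxt in wait_for.get(node, ()):
            if nxt in on_stack:
                return True  # back edge: a cycle (deadlock) exists
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# P1 waits for P2, P2 for P3, P3 for P1: circular wait, deadlock.
assert has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})
# No cycle: the chain eventually ends, so no deadlock.
assert not has_cycle({"P1": ["P2"], "P2": ["P3"]})
```

Once a cycle is found, the system would pick a victim process on the cycle to abort, releasing its resources, as described above.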
Reaching Agreement
• For a system to be reliable, we need a mechanism that allows a set of
processes to agree on a common value. Such an agreement may not take place, for several
reasons. First, the communication medium may be faulty, resulting in lost or garbled
messages. Second, the processes themselves may be faulty, resulting in unpredictable
process behavior. The best we can hope for in this case is that processes fail in a clean way,
stopping their execution without deviating from their normal execution pattern. In the worst
case, processes may send garbled or incorrect messages to other processes or even
collaborate with other failed processes in an attempt to destroy the integrity of the system.
Assignment

• Topic: Protection
1. Goals and principles of protection
2. Domains of protection
3. Access control and revocation of access rights
4. Access matrix and its implementation
Any Question?
