IAT_DC_2MARK


PART-A(2 MARKS)

1 State the differences between synchronous and asynchronous communication, along with the
buffering strategy adopted in each.
Synchronous form of communication:
The sending and receiving processes synchronize at every message. In this case, both send and
receive are blocking operations. Whenever a send is issued the sending process is blocked until the
corresponding receive is issued. Whenever receive is issued, the process blocks until a message
arrives.
Asynchronous form of communication
The use of the send operation is non-blocking in that the sending process is allowed to proceed as
soon as the message has been copied to a local buffer, and the transmission of the message proceeds
in parallel with the sending process. The receive operation can have blocking and non-blocking
variants.
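The two forms can be sketched with in-process buffers. This is a minimal illustration, not a real messaging API: the names `async_send`, `sync_send`, etc. are assumptions, and the "handshake" of the synchronous form is modelled with a task-completion acknowledgement.

```python
import queue
import threading

# Asynchronous send: the message is copied into a local buffer and the
# sender proceeds at once; delivery happens later, in parallel.
async_buffer = queue.Queue()

def async_send(msg):
    async_buffer.put(msg)        # returns as soon as the local copy is made

def blocking_receive():
    return async_buffer.get()    # blocking variant: waits until a message arrives

# Synchronous send: sender and receiver synchronize on every message;
# the sender stays blocked until the receiver has taken the message.
sync_buffer = queue.Queue()

def sync_send(msg):
    sync_buffer.put(msg)
    sync_buffer.join()           # blocks until the receiver calls task_done()

def sync_receive():
    msg = sync_buffer.get()
    sync_buffer.task_done()      # completes the handshake, unblocking the sender
    return msg
```

Note that `async_send` never blocks the caller, while `sync_send` returns only after the matching `sync_receive` has run.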

2 Define Distributed Computing


 A distributed system is a collection of independent entities that cooperate to
solve a problem that cannot be solved individually.
A distributed system is one in which hardware or software
components located at networked computers communicate and
coordinate their actions only by passing messages. Each component has its own
memory and operating system; the components share neither a common memory
nor a common physical clock.
3 List the role of shared memory systems.
 The processes share variables directly, so there is no marshalling and unmarshalling. Shared variables
can be named, stored and accessed in DSM.
 Here, a process does not have a private address space, so one process can alter the execution
of another.
 This cannot be used with heterogeneous computers.
 Synchronization is through locks and semaphores.
 Processes communicating through DSM may execute with non-overlapping lifetimes.
Any particular read or update may or may not involve communication by the underlying runtime
support.
4 Define distributed program
A distributed program is composed of a set of n asynchronous processes p1, p2,..,pi,…,pn that communicate by
message passing over the communication network.
 The communication delay is finite and unpredictable
 Let Cij denote the channel from process pi to process pj
 Let mij denote a message sent by pi to pj.
 Process execution and message transfer are asynchronous

5 Compare message passing systems and shared memory systems


Message Passing vs. Distributed Shared Memory:
 Marshalling: In message passing, variables have to be marshalled from one process, transmitted,
and unmarshalled into other variables at the receiving process. In DSM, the processes share
variables directly, so there is no marshalling and unmarshalling; shared variables can be named,
stored and accessed.
 Protection: In message passing, processes can be protected from one another by having private
address spaces. In DSM, a process does not have a private address space, so one process can
alter the execution of another.
 Heterogeneity: Message passing can be used with heterogeneous computers; DSM cannot.
 Synchronization: In message passing, synchronization between processes is through message
passing primitives; in DSM, synchronization is through locks and semaphores.
 Lifetimes: Processes communicating via message passing must execute at the same time;
processes communicating through DSM may execute with non-overlapping lifetimes.
6 Name the primitives for distributed communication.
 Blocking/non-blocking, synchronous/asynchronous primitives
 Synchronous primitives
 Asynchronous primitives
 Blocking primitives
 Non-blocking primitives
 Processor synchrony
 Libraries and standards
7 Define Synchronous Primitives.
 A Send or a Receive primitive is synchronous if both the Send() and
Receive() handshake with each other. The processing for the Send primitive
completes only after the corresponding Receive primitive has also
been completed. The processing for the Receive primitive completes when
the data to be received is copied into the receiver’s user buffer.

8 Identify the relationship that exists between e1 and e2 "if the two events e1 and e2 of two different
processes occur independently at the same time".
The events e1 and e2 are concurrent (e1 || e2); such events define consistent states.
9 Define the scalar time and the vector time.
Scalar time
 The scalar time representation was proposed by Lamport to totally order events in a
distributed system. Time domain is represented as the set of non-negative integers.
 The logical local clock of a process pi and its local view of global time are squashed into
one integer variable Ci.

vector time
 The system of vector clocks was developed independently by Fidge, Mattern, and Schmuck.
 Here, the time domain is represented by a set of n-dimensional non-negative integer
vectors.
 Each process pi maintains a vector vti[1..n], where vti[i] is the local logical clock of pi that
specifies the progress at process pi.
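The two clock systems can be sketched as small classes. This is an illustrative sketch following Lamport's scalar rule (C := max(C, ts) + 1 on receive) and the Fidge–Mattern vector rules; the class and method names are assumptions.

```python
class ScalarClock:
    """Lamport's scalar clock: local time squashed into one integer Ci."""
    def __init__(self):
        self.c = 0

    def tick(self):
        # Local or send event: increment the scalar clock.
        self.c += 1
        return self.c

    def receive(self, ts):
        # Receive event: merge the piggybacked timestamp, then increment.
        self.c = max(self.c, ts) + 1
        return self.c


class VectorClock:
    """Fidge/Mattern vector clock for process i out of n processes."""
    def __init__(self, i, n):
        self.i = i
        self.vt = [0] * n          # vti[1..n], all components start at 0

    def tick(self):
        # Local or send event: advance only this process's component.
        self.vt[self.i] += 1
        return list(self.vt)

    def receive(self, ts):
        # Receive event: componentwise max with the message's vector,
        # then advance this process's own component.
        self.vt = [max(a, b) for a, b in zip(self.vt, ts)]
        self.vt[self.i] += 1
        return list(self.vt)
```

For example, if p0 executes an event and sends its timestamp to p1, p1's vector becomes the componentwise maximum plus its own increment.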


10 Define middleware.
 Distributed software is also termed middleware.
 A distributed system follows a layered architecture that reduces the complexity of
the system design.
 Middleware hides the heterogeneity transparently at the platform level.
11 Define Processor synchrony
 Processor synchrony indicates that all the processors execute in lock-step with their
clocks synchronized.
 As this synchrony is difficult to achieve in a distributed system, a larger granularity of code,
termed a step, is used, and the processors are synchronized at step boundaries.
 This synchronization ensures that no processor begins executing the next step of
code until all the processors have completed executing the previous steps of code
assigned to each of the processors.
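Step-level processor synchrony can be sketched with a barrier: no processor starts step k+1 until every processor has finished step k. This is a thread-based analogy (the counts and names are illustrative), not an actual multiprocessor implementation.

```python
import threading

N_PROCS, N_STEPS = 3, 2
barrier = threading.Barrier(N_PROCS)   # rendezvous point at each step boundary
trace = []                             # (step, pid) in the order work completed
lock = threading.Lock()

def processor(pid):
    for step in range(N_STEPS):
        with lock:
            trace.append((step, pid))  # the "work" of this step
        barrier.wait()                 # block until all processors finish the step

threads = [threading.Thread(target=processor, args=(p,)) for p in range(N_PROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# In the trace, every step-0 entry precedes every step-1 entry.
```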
12 Define Causal precedence relation.
 The execution of a distributed application results in a set of distributed events
produced by the processes.
 Let H =Uihi denote the set of events executed in a distributed computation.
 A binary relation on the set H, denoted →, expresses causal dependencies
between events in the distributed execution.

 The relation → is Lamport’s “happens before” relation.
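With vector timestamps, causal precedence can be tested directly: e → f iff vt(e) ≤ vt(f) componentwise and vt(e) ≠ vt(f); two events are concurrent when neither precedes the other. A minimal sketch (function names are illustrative):

```python
def happened_before(u, v):
    """True iff the event with vector timestamp u causally precedes v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def concurrent(u, v):
    """True iff neither event causally precedes the other (u || v)."""
    return not happened_before(u, v) and not happened_before(v, u)
```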

13 Define the term: Clock Skew and clock drift.


1. Skew The skew of a clock is the difference in the frequencies of the clock and the perfect
clock. The skew of a clock Ca relative to clock Cb at time t is Ca'(t)−Cb'(t).

If the skew is bounded by ρ, then, as per Eq. (3.1), clock values are allowed to diverge at a
rate in the range 1−ρ to 1+ρ.
2. Drift (rate) The drift of clock Ca is the second derivative of the clock value with respect
to time, namely, Ca''(t). The drift of clock Ca relative to clock Cb at time t is Ca''(t)−Cb''(t).

14 List out the different forms of message ordering paradigms.


 The order of delivery of messages in a distributed system is an important
aspect of system executions because it determines the messaging behavior that can be
expected by the distributed program.
 Distributed program logic greatly depends on the order of delivery of messages.
 Several orderings on messages have been defined:
(i) non-FIFO,
(ii) FIFO,
(iii) causal order, and
(iv) synchronous order.
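FIFO order, for instance, can be enforced with per-sender sequence numbers: out-of-order messages are buffered until every earlier message from that sender has been delivered. A minimal sketch (the class and its interface are assumptions for illustration):

```python
import heapq
from collections import defaultdict

class FifoChannel:
    """Deliver messages from each sender in sequence-number order."""
    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected seq per sender
        self.pending = defaultdict(list)   # min-heap of (seq, msg) per sender
        self.delivered = []                # messages in delivery order

    def receive(self, sender, seq, msg):
        heapq.heappush(self.pending[sender], (seq, msg))
        heap = self.pending[sender]
        # Deliver every buffered message that is next in sequence.
        while heap and heap[0][0] == self.next_seq[sender]:
            _, m = heapq.heappop(heap)
            self.delivered.append(m)
            self.next_seq[sender] += 1
```

If message 1 arrives before message 0, it is held back and both are delivered, in order, once message 0 arrives.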

15 List out the characteristics of Group communication.


Group communication is done by broadcasting of messages. A message
broadcast is the sending of a message to all members in the distributed system. The
communication may be:
 Multicast: a message is sent to a certain subset or a group.
 Unicast: a point-to-point message communication.
characteristics
 Fault tolerance based on replicated server
 Finding the discovery servers from spontaneous networks
 Better performance through replicated data
 Propagation of event notification
16 Define Closed and open groups.
Closed group algorithms:
 If the sender is also one of the receivers in the multicast algorithm, then it is a closed
group algorithm.
 They are specific and easy to implement.
 They do not support large systems where client processes have short lives.
Open group algorithms:
 If the sender is not a part of the communication group, then it is an open
group algorithm.
 They are more general, but difficult to design and expensive.
 They can support large systems.
17 Define Distributed Mutual Exclusion with neat diagram
 Mutual exclusion is a fundamental problem in distributed computing systems.
 Mutual exclusion ensures that concurrent access of processes to a shared resource or data is
serialized, that is, executed in a mutually exclusive manner.

Figure 1: Three processes accessing a shared resource (critical section) simultaneously.


 Mutual exclusion in a distributed system states that only one process is allowed to execute
the critical section (CS) at any given time.
 Message passing is the sole means for implementing distributed mutual exclusion.

18 Write the happened before relation.


The “causal ordering” model is based on Lamport’s “happens before” relation. A system
that supports the causal ordering model satisfies the following property:
CO: For any two messages mij and mkj,
if send(mij) → send(mkj)
then rec(mij) → rec(mkj)
19 Identify the basic approaches for implementing distributed mutual exclusion.
1. Token based approach
2. Non-token based approach
3. Quorum based approach
1. In the token-based approach,
 A unique token (also known as the PRIVILEGE message) is shared among the sites.
 A site is allowed to enter its CS if it possesses the token and it continues to hold the token
until the execution of the CS is over.
 Mutual exclusion is ensured because the token is unique.
 Example:Suzuki-Kasami’s Broadcast Algorithm.
2. In the non-token based approach,
 Two or more successive rounds of messages are exchanged among the sites to determine
which site will enter the CS next.
 This approach uses timestamps instead of sequence numbers to order requests for the critical
section.
 Whenever a site makes a request for the critical section, it gets a timestamp. The timestamp is
also used to resolve any conflict between critical section requests.
 All algorithms that follow the non-token-based approach maintain a logical clock. Logical
clocks get updated according to Lamport’s scheme.
 Example: Lamport's algorithm, Ricart–Agrawala algorithm
3. In the quorum-based approach,
 Instead of requesting permission to execute the critical section from all other sites, each site
requests permission only from a subset of sites, called a quorum.
 Any two quorums contain a common site.
 This common site is responsible for ensuring mutual exclusion.
Example: Maekawa’s Algorithm
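The intersection property behind the quorum-based approach can be sketched with grid quorums (the style used in Maekawa-like constructions): arrange n = k×k sites in a grid and let each site's quorum be its row plus its column, so any two quorums share at least one site. This is an illustrative sketch, not Maekawa's full algorithm; the function name is an assumption.

```python
import math

def grid_quorum(site, n=9):
    """Quorum of `site` in a sqrt(n) x sqrt(n) grid: its row plus its column."""
    k = math.isqrt(n)
    r, c = divmod(site, k)
    row = {r * k + j for j in range(k)}   # all sites in the same row
    col = {i * k + c for i in range(k)}   # all sites in the same column
    return row | col

# Site i's row always meets site j's column, so any two quorums intersect;
# the common site arbitrates and ensures mutual exclusion.
```

Each quorum has only 2·sqrt(n) − 1 members, so a site contacts far fewer than all n − 1 other sites.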
20 Define synchronization delay.
Synchronization delay is the time required, after a site leaves the CS, before the next site
enters the CS (see Figure 9.1).
