Unit 5 & 6 DBM


UNIT – V

Parallel Databases
Multi-user DBMS Architectures

 The common architectures used to implement multi-user database management systems are:
 Teleprocessing
 File-Server
 Client-Server
Teleprocessing

 One computer with a single CPU and a number of terminals.

 Processing is performed within the same physical computer. User terminals are typically “dumb”, incapable of functioning on their own, and cabled to the central computer.
File-Server Architecture

 In a file-server environment, the processing is distributed over the network, typically a local area network (LAN).

 The file server holds the files required by the application and the DBMS.
However, the applications and DBMS run on each workstation,
requesting files from the file server when necessary.

 The file server acts simply as a shared data disk. The DBMS on each workstation sends requests to the file server for all of the data that the DBMS requires that is stored on disk.

 The file-server architecture has three main disadvantages:
 There is a large amount of network traffic.
 A full copy of the DBMS is required on each workstation.
 Concurrency, recovery, and integrity control are more complex because there can be multiple DBMSs accessing the same files.
Client Server
 To overcome the disadvantages of the first two methods, the client-server architecture was developed. As the name suggests, there is:
 A client process, which requires some resource, and
 A server process, which provides the resource.

Fig. Traditional Two-Tier Client-Server

Traditional Two-Tier Client-Server
 Client process (tier 1): There is no requirement that the client and server reside on the same machine. In the database context, the client manages the user interface and the application logic.
 Accepts the user's requests.
 Accepts and checks the syntax of user input.
 Generates database requests in SQL or another database language.
 Transmits the message to the server, waits for a response, and formats the response for the end-user.

 Server process (tier 2):
 Accepts and processes database requests from clients.
 Checks authorization.
 Ensures integrity constraints are not violated.
 Performs query/update processing and transmits response to the client.
 Maintains system catalogue.
 Provides concurrent database access.
 Provides recovery control.
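The division of labour between the two tiers can be made concrete with a small, self-contained sketch. This is only an illustrative, assumption-laden example: SQLite stands in for the server-side DBMS, the trivial authorization check is invented, and the names handle_request and run_client are not part of any real two-tier product.

```python
# Hedged sketch of a two-tier exchange: the "client" builds a SQL request and
# formats the reply; the "server" checks authorization and runs the query.
import sqlite3

AUTHORIZED_USERS = {"alice"}          # assumption: a trivial authorization check

def handle_request(user: str, sql: str):
    """Server process (tier 2): check authorization, run the query, return rows."""
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"user {user!r} is not authorized")
    with sqlite3.connect(":memory:") as conn:
        conn.execute("CREATE TABLE account (number TEXT, balance INTEGER)")
        conn.execute("INSERT INTO account VALUES ('A-305', 500)")
        return conn.execute(sql).fetchall()

def run_client(user: str):
    """Client process (tier 1): generate the SQL request, send it, format the reply."""
    rows = handle_request(user, "SELECT number, balance FROM account")
    for number, balance in rows:
        print(f"{number}: balance = {balance}")

run_client("alice")
```

In a real deployment the two functions would run on different machines and exchange messages over the network; here they share a process purely to keep the sketch runnable.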
Client Server
 Advantages of Client-Server Architecture:
 It enables wider access to existing databases.
 Increased Performance: If the clients and the server reside on different computers, then different computers can process the application in parallel.
 Hardware Costs may be Reduced: It is only the server that requires storage and processing power sufficient to store and manage the database.
 Communication Costs are Reduced: Applications carry out part of the operations on the client and send only the requests for database access over the network.
 Increased Consistency: The server can handle integrity checks, so constraints need be defined and validated in only one place.
 It maps onto open-systems architecture quite naturally.


Alternative client server topologies

 Fig. 1 Single Client, Single Server.

 Fig. 2 Multiple Client, Single Server.

 Fig. 3 Multiple Client, Multiple Server.


Three-Tier Client-Server

 The need for enterprise scalability challenged the traditional two-tier client-server model.
 The client side presented two problems preventing true scalability:
 ‘Fat’ client, requiring considerable resources on the client’s computer to run effectively.
 Significant client-side administration overhead.
 By 1995, three layers had been proposed, each potentially running on a different platform.
Three-Tier Client-Server
 Advantages:

 ‘Thin’ client, requiring less expensive hardware.

 Application maintenance centralized.

 Easier to modify or replace one tier without affecting others.

 Separating business logic from database functions makes it easier to implement load balancing.

 Maps quite naturally to Web environment.


Oracle Architecture

Fig. Oracle architecture


Oracle Architecture
 Oracle server:
 An Oracle server includes an Oracle Instance and an Oracle
database.
 An Oracle database includes several different types of files: datafiles,
control files, redo log files and archive redo log files. The Oracle server
also accesses parameter files and password files.

 This set of files has several purposes.


 One is to enable system users to process SQL statements.
 Another is to improve system performance.
 Still another is to ensure the database can be recovered if there is a
software/hardware failure.

 The database server must manage large amounts of data in a multi-user environment.
 The server must manage concurrent access to the same data.
 The server must deliver high performance. This generally means fast
response times.
Oracle Architecture
 Oracle instance: An Oracle Instance consists of two different sets of
components:
 The first component set is the set of background processes
(PMON, SMON, RECO, DBW0, LGWR, CKPT, D000 and others).
 These will be covered later in detail – each background process
is a computer program.
 These processes perform input/output and monitor other Oracle
processes to provide good performance and database reliability.

 The second component set includes the memory structures that comprise the Oracle instance.
 When an instance starts up, a memory structure called the
System Global Area (SGA) is allocated.
 At this point the background processes also start.

 An Oracle Instance provides access to one and only one Oracle database.
Oracle Architecture
 Oracle database: An Oracle database consists of files.
 Sometimes these are referred to as operating system files, but they
are actually database files that store the database information that
a firm or organization needs in order to operate.
 The redo log files are used to recover the database in the event of
application program failures, instance failures and other minor
failures.
 The archived redo log files are used to recover the database if a
disk fails.
 Other files not shown in the figure include:
 The required parameter file that is used to specify parameters
for configuring an Oracle instance when it starts up.
 The optional password file authenticates special users of the
database – these are termed privileged users and include
database administrators.
 Alert and Trace Log Files – these files store information about
errors and actions taken that affect the configuration of the
database.
Oracle Architecture
 User and server processes: The processes shown in the figure are
called user and server processes. These processes are used to
manage the execution of SQL statements.

 A Shared Server Process can share memory and variable processing for multiple user processes.
 A Dedicated Server Process manages memory and variables for a single user process.
Parallel Databases
 Parallel database systems consist of multiple processors and multiple
disks connected by a fast interconnection network.

 A coarse-grain parallel machine consists of a small number of powerful processors.
 A massively parallel or fine-grain parallel machine utilizes thousands of smaller processors.

 Two main performance measures:
 Throughput --- the number of tasks that can be completed in a given time interval.
 Response time --- the amount of time it takes to complete a single task from the time it is submitted.
Speed-Up and Scale-Up

 Speedup: The ability to execute the tasks in less time by increasing the number of resources is called speedup.
 Measured by:
      speedup = time_original / time_parallel
   where
      time_original = time required to execute the task using 1 processor
      time_parallel = time required to execute the task using 'n' processors
 Speedup is linear if the ratio equals N, the number of processors.
Fig. Speedup
Speed-Up and Scale-Up

 Scaleup: The ability to maintain the performance of the system when both the workload and the resources increase proportionally.
 Measured by:
      scaleup = volume_parallel / volume_original
   where
      volume_parallel = volume executed in a given amount of time using 'n' processors
      volume_original = volume executed in a given amount of time using 1 processor
 Scaleup is linear if the ratio equals 1 (see the sketch below).


Fig. Scaleup
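As a quick numeric illustration of the two measures, here is a minimal sketch. The function names and timings are invented, and scaleup is expressed in a common time-based form (time for the original workload on one processor divided by the time for an n-times-larger workload on n processors), which is the reading under which linear scaleup equals 1, matching the statement above.

```python
def speedup(time_original: float, time_parallel: float) -> float:
    # Same task, more processors: linear speedup equals n, the processor count.
    return time_original / time_parallel

def scaleup(time_original: float, time_scaled: float) -> float:
    # n-times-larger task on n processors: linear scaleup equals 1.
    return time_original / time_scaled

# Hypothetical timings for an 8-processor machine.
print(speedup(time_original=120.0, time_parallel=15.0))  # 8.0 -> linear speedup
print(scaleup(time_original=120.0, time_scaled=120.0))   # 1.0 -> linear scaleup
```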
Interconnection Network Architectures
 Bus. System components send data on and receive data from a single communication bus.
 Does not scale well with increasing parallelism.
 Mesh. Components are arranged as nodes in a grid, and each component is connected to all adjacent components.
 The number of communication links grows with the number of components, and so the mesh scales better.
 But it may require up to 2(√n - 1) hops to send a message to a node (or about √n with wraparound connections at the edge of the grid).
 Hypercube. Components are numbered in binary; components are connected to one another if their binary representations differ in exactly one bit (see the sketch below).
 Each of the n components is connected to log(n) other components and can reach any other component via at most log(n) links, which reduces communication delays.
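A small sketch makes the hypercube rule concrete. The function name hypercube_neighbors is invented for illustration, and it assumes the component count n is a power of two.

```python
def hypercube_neighbors(node: int, n: int) -> list[int]:
    """Return the components directly linked to `node` in an n-node hypercube
    (n a power of two): exactly those whose binary labels differ in one bit."""
    dimensions = n.bit_length() - 1          # log2(n) address bits
    return [node ^ (1 << bit) for bit in range(dimensions)]

# In an 8-node hypercube, node 0b101 (5) has log2(8) = 3 neighbours.
print(hypercube_neighbors(5, 8))   # [4, 7, 1] -> labels 0b100, 0b111, 0b001
```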
Interconnection Architectures
Parallel Database Architectures

 Shared memory -- processors share a common memory
 Shared disk -- processors share a common disk
 Shared nothing -- processors share neither a common memory nor a common disk
 Hierarchical -- hybrid of the above architectures


Parallel Database Architectures
Shared Memory

 Processors and disks have access to a common memory, typically via a bus or through an interconnection network.
 Extremely efficient communication between processors: data in shared memory can be accessed by any processor without having to move it using software.
 Downside: the architecture is not scalable beyond 32 or 64 processors, since the bus or the interconnection network becomes a bottleneck.
 Widely used for lower degrees of parallelism (4 to 8).


Shared Disk
 All processors can directly access all disks via an
interconnection network, but the processors have private
memories.
 The memory bus is not a bottleneck
 Architecture provides a degree of fault-tolerance — if a
processor fails, the other processors can take over its
tasks since the database is resident on disks that are
accessible from all processors.
 Examples: IBM Sysplex and DEC clusters (now part of
Compaq) running Rdb (now Oracle Rdb) were early
commercial users
 Downside: bottleneck now occurs at interconnection to the
disk subsystem.
 Shared-disk systems can scale to a somewhat larger number
of processors, but communication between processors is
slower.
Shared Nothing
 Node consists of a processor, memory, and one or more disks.
Processors at one node communicate with another processor
at another node using an interconnection network. A node
functions as the server for the data on the disk or disks the
node owns.
 Examples: Teradata, Tandem, Oracle nCUBE.
 Data accessed from local disks (and local memory accesses) do not pass through the interconnection network, thereby minimizing the interference of resource sharing.
 Shared-nothing multiprocessors can be scaled up to thousands
of processors without interference.
 Main drawback: cost of communication and non-local disk
access; sending data involves software interaction at both
ends.
Hierarchical
 Combines characteristics of shared-memory, shared-disk, and
shared-nothing architectures.
 Top level is a shared-nothing architecture – nodes connected
by an interconnection network, and do not share disks or
memory with each other.
 Each node of the system could be a shared-memory system
with a few processors.
 Alternatively, each node could be a shared-disk system, and
each of the systems sharing a set of disks could be a shared-
memory system.
 The complexity of programming such systems can be reduced by distributed virtual-memory architectures.
 Also called non-uniform memory architecture (NUMA)
Evaluating Parallel Query in Parallel Databases

 The two techniques used in parallel query evaluation are as follows:
 Inter-query parallelism
 This technique runs multiple queries on different processors simultaneously.
 Pipelined parallelism is achieved by using inter-query parallelism, which improves the throughput of the system.
 For example: if there are 6 queries and each query takes 3 seconds to evaluate, the total time to complete the evaluation sequentially is 18 seconds. Inter-query parallelism can finish the same work in about 3 seconds (see the sketch below). However, inter-query parallelism is not always easy to achieve.
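A minimal sketch of the 6-query example follows. It assumes a thread pool stands in for the independent processors; run_query and the 3-second sleep are invented stand-ins for real query execution.

```python
# Hedged sketch of inter-query parallelism: six independent "queries" run
# concurrently instead of one after another.
import time
from concurrent.futures import ThreadPoolExecutor

def run_query(query_id: int) -> str:
    time.sleep(3)                       # pretend each query needs 3 seconds
    return f"result of query {query_id}"

start = time.time()
with ThreadPoolExecutor(max_workers=6) as pool:   # one worker per query
    results = list(pool.map(run_query, range(6)))
print(f"6 queries finished in about {time.time() - start:.1f} s")  # ~3 s, not 18 s
```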
Evaluating Parallel Query in Parallel Databases contd.

 Intra-query parallelism
 In this technique a query is divided into sub-queries that can run simultaneously on different processors; this minimizes the query evaluation time.
 Intra-query parallelism improves the response time of the system.
 For example: a single query that would take 18 seconds to evaluate on one processor can be divided into sub-queries that run on 6 processors, completing the evaluation in about 3 seconds (see the sketch below).
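A minimal sketch of intra-query parallelism, assuming an aggregate query over a table that has been horizontally partitioned across processors; partial_sum and the partition data are invented for the example.

```python
# Hedged sketch of intra-query parallelism: one aggregate query is split into
# sub-queries, each scanning one horizontal partition, and the partial results
# are combined at the end.
from concurrent.futures import ProcessPoolExecutor

partitions = [                       # account balances, partitioned across nodes
    [500, 336, 62],                  # partition held by processor 0
    [205, 10_000, 1_123, 750],       # partition held by processor 1
]

def partial_sum(rows: list[int]) -> int:
    """Sub-query: SUM(balance) over a single partition."""
    return sum(rows)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_sum, partitions))   # combine partial results
    print(f"SUM(balance) = {total}")                     # 12976
```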
Virtualization on Multicore processors

 Multicore processing and virtualization are rapidly becoming ubiquitous in software development. They are widely used in the commercial world, especially in large data centers supporting cloud-based computing, to:
(1) isolate application software from hardware and operating
systems,
(2) decrease hardware costs by enabling different applications
to share underutilized computers or processors,
(3) improve reliability and robustness by limiting fault and
failure propagation and support failover and recovery, and
(4) enhance scalability and responsiveness through the use of
actual and virtual concurrency in architectures, designs, and
implementation languages.
UNIT – VI

Distributed Databases
Distributed Systems
 Data is spread over multiple machines (also referred to as sites or nodes).
 A network interconnects the machines.
 Data is shared by users on multiple machines.


Distributed Systems

 A distributed database system consists of loosely coupled sites that share no physical component.
 The database systems that run on each site are independent of each other.
 Transactions may access data at one or more sites.


Types of Distributed Databases
 In a homogeneous distributed database
 All sites have identical software
 Are aware of each other and agree to cooperate in
processing user requests.
 Each site surrenders part of its autonomy in terms of right to
change schemas or software
 Appears to user as a single system
 In a heterogeneous distributed database

 Different sites may use different schemas and software


 Difference in schema is a major problem for query
processing
 Difference in software is a major problem for transaction
processing
 Sites may not be aware of each other and may provide only
limited facilities for cooperation in transaction processing
Distributed Databases
 Homogeneous distributed databases
 Same software/schema on all sites, data may be partitioned
among sites
 Goal: provide a view of a single database, hiding details of
distribution
 Heterogeneous distributed databases
 Different software/schema on different sites
 Goal: integrate existing databases to provide useful
functionality
 Differentiate between local and global transactions
 A local transaction accesses data in the single site at
which the transaction was initiated.
 A global transaction either accesses data in a site different
from the one at which the transaction was initiated or
accesses data in several different sites.
Trade-offs in Distributed Systems
 Sharing data – users at one site able to access the data
residing at some other sites.
 Autonomy – each site is able to retain a degree of control over
data stored locally.
 Higher system availability through redundancy — data can
be replicated at remote sites, and system can function even if a
site fails.
 Disadvantage: added complexity required to ensure proper
coordination among sites.
 Software development cost.
 Greater potential for bugs.
 Increased processing overhead.
Implementation Issues for Distributed Databases
 Atomicity needed even for transactions that update data at
multiple sites
 The two-phase commit protocol (2PC) is used to ensure
atomicity
 Basic idea: each site executes the transaction until just before commit, and then leaves the final decision to a coordinator.
 Each site must follow the decision of the coordinator, even if there is a failure while waiting for the coordinator's decision.
 2PC is not always appropriate: other transaction models
based on persistent messaging, and workflows, are also used
 Distributed concurrency control (and deadlock detection)
required
 Data items may be replicated to improve data availability
Distributed Data Storage
 Assume relational data model

 Replication
 The system maintains multiple copies of data, stored in different sites, for faster retrieval and fault tolerance.
 Fragmentation
 A relation is partitioned into several fragments stored in distinct sites.
 Replication and fragmentation can be combined
 A relation is partitioned into several fragments, and the system maintains several identical replicas of each such fragment.
Data Replication
 A relation or fragment of a relation is replicated if it is stored
redundantly in two or more sites.

 Full replication of a relation is the case where the relation is stored at all sites.
 Fully redundant databases are those in which every site contains a copy of the entire database.
Data Replication (Cont.)

 Advantages of Replication

 Availability: failure of a site containing relation r does not result in unavailability of r if replicas exist.
 Parallelism: queries on r may be processed by several nodes in
parallel.
 Reduced data transfer: relation r is available locally at each site
containing a replica of r.
 Disadvantages of Replication
 Increased cost of updates: each replica of relation r must be
updated.
 Increased complexity of concurrency control: concurrent
updates to distinct replicas may lead to inconsistent data unless
special concurrency control mechanisms are implemented.
 One solution: choose one copy as primary copy and apply
concurrency control operations on primary copy
Data Fragmentation

 Division of relation r into fragments r1, r2, …, rn that contain sufficient information to reconstruct relation r.
 Horizontal fragmentation: each tuple of r is assigned to one or more fragments.
 Vertical fragmentation: the schema for relation r is split into several smaller schemas.
 All schemas must contain a common candidate key (or superkey) to ensure the lossless-join property.
 A special attribute, the tuple-id attribute, may be added to each schema to serve as a candidate key.
Horizontal Fragmentation of account Relation

branch_name   account_number   balance
Hillside      A-305            500
Hillside      A-226            336
Hillside      A-155            62

account1 = σ branch_name=“Hillside” (account)

branch_name   account_number   balance
Valleyview    A-177            205
Valleyview    A-402            10000
Valleyview    A-408            1123
Valleyview    A-639            750

account2 = σ branch_name=“Valleyview” (account)


Vertical Fragmentation of employee_info Relation

branch_name   customer_name   tuple_id
Hillside      Lowman          1
Hillside      Camp            2
Valleyview    Camp            3
Valleyview    Kahn            4
Hillside      Kahn            5
Valleyview    Kahn            6
Valleyview    Green           7

deposit1 = Π branch_name, customer_name, tuple_id (employee_info)

account_number   balance   tuple_id
A-305            500       1
A-226            336       2
A-177            205       3
A-402            10000     4
A-155            62        5
A-408            1123      6
A-639            750       7

deposit2 = Π account_number, balance, tuple_id (employee_info)
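A minimal sketch of both kinds of fragmentation over a small account relation like the one above, using plain Python dictionaries in place of stored relations; the row values and helper names are invented for illustration, and a real DBMS would perform this at the storage level.

```python
# Hedged sketch: horizontal and vertical fragmentation of a small relation.
account = [
    {"branch_name": "Hillside",   "account_number": "A-305", "balance": 500},
    {"branch_name": "Hillside",   "account_number": "A-226", "balance": 336},
    {"branch_name": "Valleyview", "account_number": "A-177", "balance": 205},
]

def horizontal_fragment(rows, predicate):
    """sigma_predicate(rows): keep whole tuples that satisfy the predicate."""
    return [row for row in rows if predicate(row)]

def vertical_fragment(rows, attributes):
    """pi_attributes(rows): keep only the listed columns, adding a tuple_id key."""
    return [{**{a: row[a] for a in attributes}, "tuple_id": i}
            for i, row in enumerate(rows, start=1)]

account1 = horizontal_fragment(account, lambda r: r["branch_name"] == "Hillside")
deposit2 = vertical_fragment(account, ["account_number", "balance"])
print(account1)
print(deposit2)
```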
Advantages of Fragmentation

 Horizontal:
 allows parallel processing on fragments of a relation
 allows a relation to be split so that tuples are located
where they are most frequently accessed
 Vertical:

 allows tuples to be split so that each part of the tuple is stored where it is most frequently accessed
 tuple-id attribute allows efficient joining of vertical
fragments
 allows parallel processing on a relation
 Vertical and horizontal fragmentation can be mixed

 Fragments may be successively fragmented to an arbitrary depth.
Distributed Transactions
 Transaction may access data at several sites.

 Each site has a local transaction manager responsible for:


 Maintaining a log for recovery purposes
 Participating in coordinating the concurrent execution of the
transactions executing at that site.
 Each site has a transaction coordinator, which is responsible
for:
 Starting the execution of transactions that originate at the
site.
 Distributing subtransactions at appropriate sites for
execution.
 Coordinating the termination of each transaction that
originates at the site, which may result in the transaction
being committed at all sites or aborted at all sites.
Transaction System Architecture
System Failure Modes
 Failures unique to distributed systems:

 Failure of a site.
 Loss of messages
 Handled by network transmission control protocols such as TCP/IP
 Failure of a communication link
 Handled by network protocols, by routing messages via
alternative links
 Network partition
 A network is said to be partitioned when it has been
split into two or more subsystems that lack any
connection between them
– Note: a subsystem may consist of a single node
 Network partitioning and site failures are generally
indistinguishable.
Commit Protocols
 Commit protocols are used to ensure atomicity across sites

 a transaction that executes at multiple sites must either be committed at all the sites or aborted at all the sites.
 it is not acceptable to have a transaction committed at one site and aborted at another.

 The two-phase commit (2PC) protocol is widely used

 The three-phase commit (3PC) protocol is more complicated and more expensive, but avoids some drawbacks of the two-phase commit protocol. This protocol is not used in practice.
Two Phase Commit Protocol (2PC)
 Assumes fail-stop model – failed sites simply stop working,
and do not cause any other harm, such as sending incorrect
messages to other sites.

 Execution of the protocol is initiated by the coordinator after the last step of the transaction has been reached.
 The protocol involves all the local sites at which the transaction executed.
 Let T be a transaction initiated at site Si, and let the transaction coordinator at Si be Ci.
Phase 1: Obtaining a Decision
 Coordinator asks all participants to prepare to commit
transaction Ti.
 Ci adds the record <prepare T> to the log and forces the log to stable storage
 sends prepare T messages to all sites at which T executed
 Upon receiving message, transaction manager at site
determines if it can commit the transaction
 if not, add a record <no T> to the log and send abort T
message to Ci
 if the transaction can be committed, then:

 add the record <ready T> to the log

 force all records for T to stable storage

 send ready T message to Ci


Phase 2: Recording the Decision
 T can be committed if Ci received a ready T message from all the participating sites; otherwise T must be aborted.
 The coordinator adds a decision record, <commit T> or <abort T>, to the log and forces the record onto stable storage. Once the record reaches stable storage, the decision is irrevocable (even if failures occur).
 Coordinator sends a message to each participant informing it of
the decision (commit or abort)
 Participants take appropriate action locally.
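The two phases can be summarized in a short sketch of the coordinator's side of the protocol. This is a simplified, single-process illustration under the fail-stop assumption; Participant, its prepare/commit/abort methods, and the in-memory log list are invented stand-ins for real sites, network messages, and stable storage.

```python
# Hedged sketch of the coordinator's role in two-phase commit (2PC).
class Participant:
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit = name, can_commit

    def prepare(self, txn: str) -> bool:
        # Phase 1 at a site: log <ready T> and vote yes, or log <no T> and refuse.
        return self.can_commit

    def commit(self, txn: str) -> None:
        print(f"{self.name}: commit {txn}")

    def abort(self, txn: str) -> None:
        print(f"{self.name}: abort {txn}")

def two_phase_commit(txn: str, participants: list[Participant], log: list[str]) -> bool:
    log.append(f"<prepare {txn}>")                      # coordinator's log record
    votes = [p.prepare(txn) for p in participants]      # Phase 1: collect votes
    decision = all(votes)                               # commit only if all are ready
    log.append(f"<{'commit' if decision else 'abort'} {txn}>")  # Phase 2: decide
    for p in participants:                              # Phase 2: notify every site
        (p.commit if decision else p.abort)(txn)
    return decision

log: list[str] = []
sites = [Participant("S1"), Participant("S2"), Participant("S3", can_commit=False)]
print("committed" if two_phase_commit("T", sites, log) else "aborted")   # aborted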
2PC Disadvantages
Disadvantages:
The greatest disadvantage of the two-phase commit protocol is
that it is a blocking protocol. If the coordinator fails permanently,
some cohorts will never resolve their transactions: After a cohort
has sent an agreement message to the coordinator, it will block
until a commit or rollback is received.
Three Phase Commit (3PC)
 Assumptions:
 No network partitioning
 At any point, at least one site must be up.
 At most K sites (participants as well as coordinator) can fail
 Phase 1: Obtaining Preliminary Decision: Identical to 2PC Phase 1.
 Every site is ready to commit if instructed to do so
 Phase 2 of 2PC is split into 2 phases, Phase 2 and Phase 3 of 3PC
 In phase 2 coordinator makes a decision as in 2PC (called the pre-commit
decision) and records it in multiple (at least K) sites
 In phase 3, the coordinator sends a commit/abort message to all participating sites.
 Under 3PC, knowledge of pre-commit decision can be used to commit despite
coordinator failure
 Avoids blocking problem as long as < K sites fail
 Drawbacks:
 higher overheads
 assumptions may not be satisfied in practice
Alternative Models of Transaction Processing
 Notion of a single transaction spanning multiple sites is
inappropriate for many applications
 E.g. transaction crossing an organizational boundary
 No organization would like to permit an externally initiated
transaction to block local transactions for an indeterminate
period
 Alternative models carry out transactions by sending messages
 Code to handle messages must be carefully designed to
ensure atomicity and durability properties for updates
 Isolation cannot be guaranteed, in that intermediate stages are visible, but the code must ensure that no inconsistent states result due to concurrency
 Persistent messaging systems are systems that provide
transactional properties to messages
 Messages are guaranteed to be delivered exactly once
Alternative Models (Cont.)
 Motivating example: funds transfer between two banks
 Two phase commit would have the potential to block updates
on the accounts involved in funds transfer
 Alternative solution:
 Debit money from source account and send a message to
other site
 Site receives message and credits destination account
 Messaging has long been used for distributed transactions
(even before computers were invented!)
 Atomicity issue
 once the transaction sending a message has committed, the message must be guaranteed to be delivered
 the guarantee holds as long as the destination site is up and reachable; code to handle undeliverable messages must also be available
– e.g. credit money back to source account.
 If sending transaction aborts, message must not be sent
Handling of Failures - Site Failure
When site Sk recovers, it examines its log to determine the fate of
transactions active at the time of the failure.
 Log contain <commit T> record: txn had completed, nothing to
be done
 Log contains <abort T> record: txn had completed, nothing to be
done
 Log contains <ready T> record: site must consult Ci to determine
the fate of T.
 If T committed, redo (T); write <commit T> record
 If T aborted, undo (T)
 The log contains no log records concerning T:
 Implies that Sk failed before responding to the prepare T
message from Ci
 since the failure of Sk precludes the sending of such a response, the coordinator Ci must abort T
 Sk must execute undo (T)
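A compact sketch of this recovery decision follows, assuming the recovering site's log is just a list of record strings and ask_coordinator is an invented stand-in for contacting the coordinator Ci.

```python
# Hedged sketch: what a recovering site Sk does for one in-flight transaction T,
# based purely on its local log records.
def recover(txn: str, log: list[str], ask_coordinator) -> str:
    if f"<commit {txn}>" in log or f"<abort {txn}>" in log:
        return "nothing to do"                    # fate already decided locally
    if f"<ready {txn}>" in log:
        decision = ask_coordinator(txn)           # must consult Ci for T's fate
        if decision == "commit":
            log.append(f"<commit {txn}>")
            return "redo(T)"
        return "undo(T)"
    return "undo(T)"                              # no record: Sk never voted, so abort

# Example: the site had voted ready, and the coordinator says T committed.
print(recover("T", ["<ready T>"], lambda t: "commit"))   # redo(T)
```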
Handling of Failures- Coordinator Failure

 If the coordinator fails while the commit protocol for T is executing, then the participating sites must decide on T's fate:
1. If an active site contains a <commit T> record in its log, then T
must be committed.
2. If an active site contains an <abort T> record in its log, then T
must be aborted.
3. If some active participating site does not contain a <ready T>
record in its log, then the failed coordinator Ci cannot have
decided to commit T.
 Can therefore abort T; however, such a site must reject any
subsequent <prepare T> message from Ci
4. If none of the above cases holds, then all active sites must have a
<ready T> record in their logs, but no additional control records
(such as <abort T> or <commit T>).
 In this case active sites must wait for Ci to recover, to find
decision.
 Blocking problem: active sites may have to wait for failed coordinator
to recover.
Handling of Failures - Network Partition
 If the coordinator and all its participants remain in one partition,
the failure has no effect on the commit protocol.
 If the coordinator and its participants belong to several
partitions:
 Sites that are not in the partition containing the coordinator
think the coordinator has failed, and execute the protocol to
deal with failure of the coordinator.
 No harm results, but sites may still have to wait for
decision from coordinator.
 The coordinator and the sites that are in the same partition as the coordinator think that the sites in the other partition have failed, and follow the usual commit protocol.
 Again, no harm results
Recovery and Concurrency Control
 In-doubt transactions have a <ready T>, but neither a
<commit T>, nor an <abort T> log record.
 The recovering site must determine the commit-abort status of such transactions by contacting other sites; this can be slow and can potentially block recovery.
 Recovery algorithms can note lock information in the log.
 Instead of <ready T>, write out <ready T, L>, where L is the list of locks held by T when the log record is written (read locks can be omitted).
 For every in-doubt transaction T, all the locks noted in the
<ready T, L> log record are reacquired.
 After lock reacquisition, transaction processing can resume; the
commit or rollback of in-doubt transactions is performed
concurrently with the execution of new transactions.
Concurrency Control
 Modify concurrency control schemes for use in distributed
environment.

 We assume that each site participates in the execution of a commit protocol to ensure global transaction atomicity.
 We assume that all replicas of any item are updated.

Single-Lock-Manager Approach
 System maintains a single lock manager that resides in a single
chosen site, say Si

 When a transaction needs to lock a data item, it sends a lock request to Si, and the lock manager determines whether the lock can be granted immediately.
 If yes, the lock manager sends a message to the site that initiated the request.
 If no, the request is delayed until it can be granted, at which time a message is sent to the initiating site (a minimal sketch of this grant-or-queue logic follows the advantages and disadvantages below).
Single-Lock-Manager Approach (Cont.)
 The transaction can read the data item from any one of the
sites at which a replica of the data item resides.

 Writes must be performed on all replicas of a data item.
 Advantages of the scheme:
 Simple implementation
 Simple deadlock handling
 Disadvantages of the scheme:
 Bottleneck: the lock manager site becomes a bottleneck.
 Vulnerability: the system is vulnerable to lock manager site failure.
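A minimal sketch of the central lock manager's grant-or-queue decision, assuming exclusive locks only; SingleLockManager and its method names are invented for the example, and real messages to the initiating sites are reduced to return values and dictionary updates.

```python
# Hedged sketch of the single-lock-manager approach: one site holds all lock
# state; requests are granted immediately or queued until release.
from collections import deque

class SingleLockManager:
    def __init__(self):
        self.holder: dict[str, str] = {}              # data item -> holding txn
        self.waiting: dict[str, deque[str]] = {}      # data item -> queued txns

    def request(self, txn: str, item: str) -> bool:
        """Return True if the (exclusive) lock is granted now, False if queued."""
        if item not in self.holder:
            self.holder[item] = txn                            # grant immediately
            return True
        self.waiting.setdefault(item, deque()).append(txn)     # delay the request
        return False

    def release(self, txn: str, item: str) -> None:
        """Release the lock and grant it to the next waiting transaction, if any."""
        if self.holder.get(item) == txn:
            queue = self.waiting.get(item)
            self.holder.pop(item)
            if queue:
                self.holder[item] = queue.popleft()   # "message" the next site

lm = SingleLockManager()
print(lm.request("T1", "Q"))   # True  -> granted
print(lm.request("T2", "Q"))   # False -> queued until T1 releases
lm.release("T1", "Q")          # now T2 holds the lock on Q
```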
Distributed Lock Manager
 In this approach, functionality of locking is implemented by lock
managers at each site
 Lock managers control access to local data items
 But special protocols may be used for replicas
 Advantage: work is distributed and can be made robust to
failures
 Disadvantage: deadlock detection is more complicated

 Lock managers cooperate for deadlock detection


 Several variants of this approach
 Primary copy
 Majority protocol
 Biased protocol
 Quorum consensus
Primary Copy
 Choose one replica of data item to be the primary copy.

 The site containing that replica is called the primary site for the data item.
 Different data items can have different primary sites.
 When a transaction needs to lock a data item Q, it requests a lock at the primary site of Q.
 It implicitly gets a lock on all replicas of the data item.
 Benefit
 Concurrency control for replicated data is handled similarly to unreplicated data - simple implementation.
 Drawback
 If the primary site of Q fails, Q is inaccessible even though other sites containing a replica may be accessible.
