Unit1 Parallel and Distributed
04/07/2024
UNIT – I, PART - C
OVERVIEW OF DISTRIBUTED COMPUTING
Dr. Prakash Singh Tanwar
Parallel Computing
Distributed Systems
By:Dr. P.S.Tanwar
Distributed OS
[Figure: four nodes (Node1–Node4) interconnected through a network]
Distributed OS
• Functions
• Resource Sharing
• Computation Speedup
• Reliability
• Communication
PART II: PARALLEL & DISTRIBUTED COMPUTING
INTRODUCTION
FEATURES OF DISTRIBUTED SYSTEMS
No Common Physical Clock
This is an important assumption: it introduces the element of “distribution” in the system and
gives rise to the inherent asynchrony among the processors.
No Shared Memory
A key feature that requires message passing for communication.
This feature also implies the absence of a common physical clock.
Geographical Separation:
The more widely separated the processors are geographically, the more representative the system
is of a distributed system.
However, it is not necessary for the processors to be on a wide-area network.
Autonomy and Heterogeneity: The processors are “loosely coupled”: they have different speeds
and each can run a different operating system. They are not part of a dedicated system, but
cooperate with one another by offering services or solving a problem jointly.
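The no-shared-memory feature above can be sketched with message passing. In this illustrative Python sketch (not from the source), threads stand in for separate machines and a queue stands in for the network channel; the two parties cooperate only by exchanging messages, never by touching shared state directly:

```python
import queue
import threading

# The queue plays the role of the network channel; the message format
# ({"op": ..., "value": ...}) is a hypothetical convention for this sketch.
channel = queue.Queue()
results = []

def sender():
    # Send a request message over the "network".
    channel.put({"op": "double", "value": 21})

def receiver():
    # Receive the message and act on it locally.
    msg = channel.get()
    results.append(msg["value"] * 2)

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [42]
```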
RELATION TO COMPUTER SYSTEM COMPONENT
The distributed system is presented as:
Each computer has a memory-processing unit, and the computers are connected by a
communication network. The figure shows the relationship of the software components that run
on each of the computers and use the local operating system and network protocol stack for
functioning.
The distributed software is also termed “middleware”.
A distributed execution is the execution of processes across the distributed system to collaboratively
achieve a common goal.
An execution is also termed a “computation” or “run”.
Middleware
• Middleware is an intermediate layer of software that sits between the
application and the network. It is used in distributed systems to provide
common services, such as authentication, authorization, compilation for
best performance on particular architectures, input/output translation, and
error handling.
• Middleware offers a number of advantages to distributed systems. Because
middleware is modularized separately from the application, it has better
potential for reuse with other applications running on different platforms.
• Application developers can design middleware so it is sufficiently high-
level that it becomes independent of specific hardware environments or
operating system platforms. This simplifies porting applications
developed on one type of platform onto another without rewriting code or
resorting to inefficient and expensive binary-compatibility toolsets such
as cross-compilers.
Various primitives and calls to functions defined in libraries of the middleware layer are
embedded in the user program code.
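The embedding of middleware calls in the user program can be sketched with Python's built-in XML-RPC library, used here only as a stand-in middleware (the `add` service and the local loopback setup are hypothetical for this sketch):

```python
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client
import threading

# Stand-in "middleware": a tiny XML-RPC service exposing one function.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)  # port 0: OS picks a free port
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.handle_request, daemon=True).start()

# In the user program, the middleware call is embedded like a local call,
# but it is actually a network round trip handled by the middleware layer.
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
server.server_close()
print(result)  # 5
```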
Parallel Systems
Symmetric Multiprocessing Systems
Asymmetric Multiprocessing Systems
PARALLEL COMPUTING
INTRODUCTION
It is the use of multiple processing elements
simultaneously for solving any problem.
Problems are broken down into instructions that are solved concurrently, as every resource
applied to the work operates at the same time.
Advantages (over Serial Computing)
It saves time and money as many resources working
together will reduce the time and cut potential costs.
Larger problems can be impractical to solve with serial computing.
It can take advantage of non-local resources when the local
resources are finite.
Serial computing “wastes” potential computing power, so parallel computing makes better
use of the hardware.
PARALLEL COMPUTING
TYPES OF PARALLELISM
Bit Level Parallelism
It is the form of parallel computing based on increasing the processor’s word size.
It reduces the number of instructions that the system must execute in
order to perform a task on large-sized data.
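As an illustration (a hypothetical sketch, not from the source), adding two 64-bit integers on a 32-bit machine takes two 32-bit additions plus carry handling, whereas a 64-bit machine does it in a single instruction:

```python
# Simulate a 32-bit machine adding two 64-bit integers: the low and high
# 32-bit words are added separately, propagating the carry between them.
MASK32 = (1 << 32) - 1

def add64_on_32bit(a, b):
    low = (a & MASK32) + (b & MASK32)          # first 32-bit add
    carry = low >> 32
    high = ((a >> 32) + (b >> 32) + carry) & MASK32  # second 32-bit add
    return (high << 32) | (low & MASK32)

a, b = 2**40 + 123, 2**40 + 456
print(add64_on_32bit(a, b) == a + b)  # True: two narrow ops did one wide op
```

A wider word size halves (or better) the instruction count for such operations, which is exactly the bit-level parallelism described above.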
Task Parallelism
It employs the decomposition of a task into subtasks and then allocates each of the subtasks for
execution.
The processors perform the execution of sub-tasks concurrently.
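The decomposition into concurrently executed subtasks can be sketched as follows; the two subtasks (`count_words`, `count_lines`) are hypothetical examples chosen only to illustrate independent pieces of one job:

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(text):
    # Subtask 1: count whitespace-separated words.
    return len(text.split())

def count_lines(text):
    # Subtask 2: count lines.
    return text.count("\n") + 1

def analyze(text):
    # Task parallelism: the two *different* subtasks run concurrently,
    # then their results are combined.
    with ThreadPoolExecutor(max_workers=2) as pool:
        words = pool.submit(count_words, text)
        lines = pool.submit(count_lines, text)
        return {"words": words.result(), "lines": lines.result()}

print(analyze("hello world\nfoo bar baz"))  # {'words': 5, 'lines': 2}
```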
PARALLEL COMPUTING
TYPES OF PARALLELISM
Data Level Parallelism
Instructions from a single stream operate concurrently on several data elements.
It is limited by irregular data-manipulation patterns and by memory bandwidth.
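A minimal data-parallel sketch, assuming Python's thread pool as the set of processing elements: in contrast to task parallelism, the *same* operation is applied to every element of the data concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # The single instruction stream applied to each data element.
    return x * x

data = [1, 2, 3, 4, 5, 6, 7, 8]

# Data parallelism: the pool applies the same operation across the data,
# with elements processed concurrently by the workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)  # [1, 4, 9, 16, 25, 36, 49, 64]
```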
WHY PARALLEL COMPUTING?
Real-world data needs more dynamic simulation and modeling, and for achieving the same, parallel
computing is the key.
Parallel computing provides concurrency and saves time and money.
Large, complex datasets and their management can be organized effectively only with a parallel
computing approach.
It ensures effective utilization of resources.
APPLICATIONS
Databases and Data mining.
Real-time simulation of systems
Advanced graphics, augmented reality, and virtual reality.
PARALLEL COMPUTING
LIMITATIONS OF PARALLEL COMPUTING
It introduces challenges such as communication and synchronization between multiple sub-tasks
and processes, which are difficult to achieve.
Algorithms must be structured so that they can be executed in parallel.
The algorithms or programs must have low coupling and high cohesion, but it is difficult to
create such programs.
Only technically skilled and expert programmers can code a parallelism-based program well.
PARALLEL COMPUTING ARCHITECTURE
Parallel Architecture Types:
The parallel computer architecture is classified based on the following:
Multiprocessors
Multi Computers
Models based on shared-memory multiprocessors:
1. Uniform Memory Access (UMA)
all the processors share the physical memory uniformly.
All the processors have equal access time to all the memory words.
Each processor may have a private cache memory.
Same rule is followed for peripheral devices.
When all the processors have equal access to all the peripheral devices, the system is called
a symmetric multiprocessor.
When only one or a few processors can access the peripheral devices, the system is called
an asymmetric multiprocessor.
PARALLEL COMPUTING ARCHITECTURE
Parallel Architecture Types:
2. Non-Uniform Memory Access (NUMA)
In the NUMA multiprocessor model, the access time varies with the location of the memory word.
In this, the shared memory is physically distributed among all the processors, called local
memories.
The collection of all local memories forms a global address space that can be accessed by all the
processors.
DISTRIBUTED SYSTEM TYPES
The nodes in the distributed system can be arranged in the form of client/server or peer-to-peer
systems.
Client/Server Model
In this model, the client requests resources and the server provides them.
A server may serve multiple clients at the same time, while a client is in contact with only one server.
The client and the server communicate with each other via computer networks.
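The client/server model above can be sketched with Python sockets (the echo service and loopback setup are hypothetical, chosen for a self-contained example): the server answers a client's request, and the client is in contact with one server.

```python
import socket
import threading

def serve_once(server_sock):
    # Server side: accept one client and provide the "resource" (a reply).
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)           # receive the client's request
        conn.sendall(b"echo: " + data)   # respond to it

def request(port, message):
    # Client side: contact the one server and send a request.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()
reply = request(port, b"hello")
t.join()
server.close()
print(reply)  # b'echo: hello'
```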
Peer-to-Peer Model
They contain nodes that are equal participants in data sharing.
All the tasks are equally divided between all the nodes.
The nodes interact with each other as required and share the resources.
DISTRIBUTED SYSTEM – ADVANTAGES/DIS. ADV.
Advantages of Distributed System
All the nodes in the distributed system are connected to each other. So nodes can share data with
other nodes
More nodes can easily be added to the distributed system, i.e. it can scale as required.
Failure of one node does not lead to the failure of an entire distributed system. Other nodes can still
communicate with each other.
Resources like printers can be shared with multiple nodes rather than being restricted to just one.
Disadvantages of Distributed System
It is difficult to provide adequate security in distributed systems because the nodes as well as the
connections need to be secured.
Some messages and data can be lost in the network while traveling from one node to another
The database connected to the distributed system is quite complicated and difficult to handle as
compared to a single user system
Overloading may occur in the network if all the nodes try to send data at once.
DISTRIBUTED COMPUTING MODELS
The Models are:
Virtualization
Service-oriented Architecture (SOA)
Grid Computing
Utility Computing
Virtualization
It is a technique that allows sharing a
single physical instance of an
application or resource among
multiple organizations or tenants.
DISTRIBUTED COMPUTING MODELS
Service-Oriented Architecture
It helps to use the application as a service for other applications regardless of the type of vendor,
product, or technology
It makes it possible to exchange data between applications of different vendors without additional
programming or changes to the service.
DISTRIBUTED COMPUTING MODELS
Grid Computing
It refers to distributed computing in which a group of computers from multiple locations are
connected with each other to achieve a common objective. These computer resources are
heterogeneous and geographically distributed.
It breaks complex tasks into smaller pieces, which are distributed to CPUs that reside within the grid.
Utility Computing
It is based on a pay-per-use model.
It offers computational resources on-
demand as a metered service.
Cloud computing, Grid computing and
managed IT services are based on the
concept of utility computing.
COMPARISON OF PARALLEL & DISTRIBUTED COMPUTING
Parallel Computing
In parallel computing, multiple processors perform multiple tasks assigned to them simultaneously.
Memory in a Parallel system can either be shared or distributed.
Parallel Computing Provides concurrency and saves time and money.
Distributed Computing
In this, multiple autonomous computers work together and appear to the user as a single
system.
In this there is no shared memory and computers communicate with each other through message
passing.
In this computing, a single task is divided among different computers.
COMPARISON OF PARALLEL & DISTRIBUTED COMPUTING
Parameter: Parallel vs. Distributed
Parallel computing is a computation type in which multiple processors execute multiple tasks
simultaneously.
Distributed computing is a computation type in which multiple computers execute common tasks
while communicating with each other using message passing.
Parameter: No. of Computers Required
Parallel computing occurs on one computer.
Distributed computing occurs between multiple computers.
Parameter: Processing Mechanism
In parallel computing, multiple processors perform the processing.
In distributed computing, computers rely on message passing.
Parameter: Synchronization
In parallel computing, all processors share a single master clock for synchronization.
In distributed computing, there is no global clock; synchronization algorithms are used instead.
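One well-known family of such synchronization algorithms is logical clocks. A minimal, illustrative sketch (not from the source) of a Lamport logical clock: each process keeps a counter, increments it on local events, and on receiving a message sets its clock to the maximum of the local and received timestamps plus one.

```python
class LamportClock:
    """Logical clock for a process in a system with no global clock."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # A local event advances the clock by one.
        self.time += 1
        return self.time

    def send(self):
        # Sending is an event; the returned timestamp travels with the message.
        return self.tick()

    def receive(self, msg_time):
        # Merge the sender's timestamp with the local clock.
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t = p1.send()          # p1's clock becomes 1; message carries timestamp 1
p2.tick()              # independent local event: p2's clock becomes 1
after = p2.receive(t)  # max(1, 1) + 1 = 2
print(after)  # 2
```

The resulting timestamps order causally related events consistently, which is what distributed systems use in place of a shared master clock.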