CS416 - Parallel and Distributed Computing: Lecture # 6 (19-03-2021) Spring 2021 FAST - NUCES, Faisalabad Campus
Parallel and Distributed Computing
Lecture # 6 (19-03-2021)
Spring 2021
FAST – NUCES, Faisalabad Campus
Agenda
A Quick Review
Parallel Algorithm Design Life Cycle
Tasks, Decomposition, and Task-dependency graphs
Granularity
Fine-grained
Coarse-grained
Concurrency
Max-degree of concurrency
Critical path length
Average-degree of concurrency
Task-interaction Diagrams
Processes and mapping
Quick Review of the Previous Lecture
Decomposition:
The process of dividing a computation into smaller parts,
some or all of which may potentially be executed in parallel.
Tasks
Programmer-defined units of computation into which the
main computation is subdivided by means of decomposition
Tasks can be of arbitrary size, but once defined, they are
regarded as indivisible units of computation.
The tasks into which a problem is decomposed may not all
be of the same size
Concurrent execution of multiple tasks is the key to reducing
the time required to solve the entire problem.
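The idea above can be sketched in Python. This is a minimal illustration (function names such as parallel_sum and chunk_sum are hypothetical, not from the lecture): the sum of a large list is decomposed into independent chunk-sum tasks that can execute concurrently, followed by a small final reduction.

```python
# A sketch of decomposition: split one computation (a sum) into
# smaller tasks, some of which can run in parallel.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(data, lo, hi):
    # One task: sum a contiguous slice of the input.
    return sum(data[lo:hi])

def parallel_sum(data, num_tasks=4):
    n = len(data)
    step = (n + num_tasks - 1) // num_tasks
    bounds = [(i, min(i + step, n)) for i in range(0, n, step)]
    # The chunk sums are independent tasks; only the final
    # reduction depends on all of them.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda b: chunk_sum(data, *b), bounds)
    return sum(partials)

print(parallel_sum(list(range(100))))  # 4950, same as sum(range(100))
```

Note that the tasks need not be the same size: the last chunk may be smaller than the others when the list length is not a multiple of the task count.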
Task-Dependency Graph
The tasks in the previous examples are independent and
can be performed in any sequence.
In most problems, however, there exist
dependencies among the tasks.
An abstraction used to express such dependencies
among tasks and their relative order of execution is known
as a task-dependency graph
It is a directed acyclic graph in which nodes are tasks and
the directed edges indicate the dependencies between
them.
The task corresponding to a node can be executed only
when all the predecessor [parent] tasks complete their
execution.
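The execution rule above can be sketched as a small scheduler (a sketch with hypothetical names such as run_dag; it is not the lecture's notation): tasks and their predecessor sets form a DAG, and a task becomes ready only once all of its parents have completed.

```python
# A sketch of executing a task-dependency graph: a task runs only
# after all of its predecessor (parent) tasks have finished.
from collections import deque

def run_dag(tasks, deps):
    """tasks: {name: callable}; deps: {name: set of predecessor names}."""
    indegree = {t: len(deps.get(t, set())) for t in tasks}
    children = {t: [] for t in tasks}
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)
    # Tasks with no predecessors are ready immediately.
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()            # execute the task
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:   # all parents of c are done
                ready.append(c)
    return order

order = run_dag(
    {name: (lambda n=name: None) for name in "ABCD"},
    {"C": {"A", "B"}, "D": {"C"}},  # C waits for A and B; D waits for C
)
print(order)  # a valid order, e.g. ['A', 'B', 'C', 'D']
```

Independent tasks (here A and B) may run in any order, or in parallel; the graph only constrains the relative order of dependent tasks.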
Granularity
The number and sizes of tasks into which a problem is
decomposed determines the granularity of the
decomposition
A decomposition into a large number of small tasks is called
fine-grained
A decomposition into a small number of large tasks is called
coarse-grained
For matrix-vector multiplication, the decomposition in
Figure 3.1 would usually be considered fine-grained
Figure 3.4 shows a coarse-grained decomposition as
each task computes n/4 entries of the output vector
of length n
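The two decompositions described above can be contrasted in a short sketch (function names are illustrative, not from the text): fine-grained gives n tasks, one per output entry, while coarse-grained gives four tasks, each computing n/4 entries of the output vector.

```python
# A sketch of granularity for y = A*x: fine-grained vs coarse-grained
# decompositions of the same computation.

def fine_grained_tasks(A, x):
    # n tasks: each computes a single entry of the output vector.
    n = len(A)
    return [lambda i=i: sum(A[i][j] * x[j] for j in range(len(x)))
            for i in range(n)]

def coarse_grained_tasks(A, x, p=4):
    # p tasks: each computes a block of n/p consecutive output entries
    # (assumes p divides n, for simplicity).
    n = len(A)
    step = n // p
    def block(lo, hi):
        return [sum(A[i][j] * x[j] for j in range(len(x)))
                for i in range(lo, hi)]
    return [lambda lo=k * step, hi=(k + 1) * step: block(lo, hi)
            for k in range(p)]

A = [[1] * 8 for _ in range(8)]
x = [1] * 8
y_fine = [t() for t in fine_grained_tasks(A, x)]                  # 8 tasks
y_coarse = [v for t in coarse_grained_tasks(A, x) for v in t()]   # 4 tasks
print(y_fine == y_coarse)  # True: both decompositions compute the same y
```

Both decompositions produce the same result; they differ only in the number and size of tasks, i.e., the granularity.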
Principles of Parallel Algorithm Design