Parallel Algorithms Underlying MPI Implementations
• Assume that each processor has formed the partial sum of the
components of the vector that it has.
• Step 1: Processor 2 sends its partial sum to processor 1 and
processor 1 adds this partial sum to its own. Meanwhile, processor 4
sends its partial sum to processor 3 and processor 3 performs a
similar summation.
• Step 2: Processor 3 sends its partial sum, which is now the sum of
the components on processors 3 and 4, to processor 1 and
processor 1 adds it to its partial sum to get the final sum across all
the components of the vector.
• At each stage of the process, the number of processes doing work is cut in half. The algorithm is depicted in Figure 13.1 below, where the solid arrow denotes a send operation and the dotted arrow denotes a receive operation followed by a summation; a point-to-point sketch of this scheme follows.
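A minimal sketch of this recursive-halving reduction written with explicit point-to-point calls (the variable names and the stand-in partial sums are illustrative, not the course's own code):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, step;
    float local_sum, recv_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local_sum = (float)(rank + 1);   /* stand-in for this rank's partial sum */

    /* Recursive halving: at each step, half of the still-active ranks send
       their partial sums to a partner, which adds them in. */
    for (step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == step) {
            MPI_Send(&local_sum, 1, MPI_FLOAT, rank - step, 0, MPI_COMM_WORLD);
        } else if (rank % (2 * step) == 0 && rank + step < size) {
            MPI_Recv(&recv_sum, 1, MPI_FLOAT, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            local_sum += recv_sum;
        }
    }

    if (rank == 0)
        printf("total sum = %g\n", local_sum);   /* final sum lands on rank 0 */

    MPI_Finalize();
    return 0;
}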
Recursive Halving and Doubling
• Pseudocode for Broadcast Operation:
• The following algorithm completes a broadcast operation in logarithmic time. Figure 13.4 illustrates the idea.
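The pseudocode itself is not reproduced here; as a hedged stand-in, a recursive-doubling broadcast with explicit sends and receives might look like the following (names are illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, step;
    float data = 0.0f;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) data = 3.14f;   /* only the root holds the value at first */

    /* Recursive doubling: after k steps, the first 2^k ranks hold the data,
       so the number of ranks holding it doubles at every step. */
    for (step = 1; step < size; step *= 2) {
        if (rank < step && rank + step < size) {
            MPI_Send(&data, 1, MPI_FLOAT, rank + step, 0, MPI_COMM_WORLD);
        } else if (rank >= step && rank < 2 * step) {
            MPI_Recv(&data, 1, MPI_FLOAT, rank - step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    printf("rank %d now has data = %g\n", rank, data);

    MPI_Finalize();
    return 0;
}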
• Example 1:
– Matrix-vector multiplication using collective communication.
• Example 2:
– Matrix-matrix multiplication using collective communication.
• Example 3:
– Solving Poisson's equation through the use of ghost cells.
• Example 4:
– Matrix-vector multiplication using a client-server approach.
Example 1: Matrix-vector Multiplication
Figure: the partial results held on processors P0, P1, P2, and P3 are combined with a reduction (SUM).
Example 1: Matrix-vector Multiplication
• The columns of matrix B and elements of column vector
C must be distributed to the various processors using
MPI commands called scatter operations.
• Note that MPI provides two types of scatter operations (MPI_Scatter and MPI_Scatterv), depending on whether or not the problem can be divided evenly among the number of processors; a sketch of the uneven case follows this list.
• Each processor now has a column of B, called Bpart,
and an element of C, called Cpart. Each processor can
now perform an independent vector-scalar multiplication.
• Once this has been accomplished, every processor will
have a part of the final column vector A, called Apart.
• The column vectors on each processor can be added
together with an MPI reduction command that computes
the final sum on the root processor.
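When the data does not divide evenly, MPI_Scatterv takes per-rank counts and displacements; a minimal sketch (the global element count and array bounds here are illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, i, nglobal = 10;     /* 10 elements, e.g., over 4 ranks */
    int counts[64], displs[64], mycount; /* assumes at most 64 processes    */
    float data[10], part[10];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Decide how many elements each rank gets; the remainder is spread out. */
    for (i = 0; i < size; i++) {
        counts[i] = nglobal / size + (i < nglobal % size ? 1 : 0);
        displs[i] = (i == 0) ? 0 : displs[i - 1] + counts[i - 1];
    }
    mycount = counts[rank];

    if (rank == 0)
        for (i = 0; i < nglobal; i++) data[i] = (float)i;

    /* MPI_Scatterv sends counts[i] elements starting at displs[i] to rank i. */
    MPI_Scatterv(data, counts, displs, MPI_FLOAT,
                 part, mycount, MPI_FLOAT, 0, MPI_COMM_WORLD);

    printf("rank %d received %d elements\n", rank, mycount);

    MPI_Finalize();
    return 0;
}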
Example 1: Matrix-vector Multiplication
#include <stdio.h>
#include <mpi.h>
#define NCOLS 4
int main(int argc, char **argv) {
    int i, j, k, l;
    int ierr, rank, size, root;
    float A[NCOLS];
    float Apart[NCOLS];
    float Bpart[NCOLS];
    float C[NCOLS];
    float A_exact[NCOLS];
    float B[NCOLS][NCOLS];
    float Cpart[1];
    root = 0;

    /* Initiate MPI. */
    ierr = MPI_Init(&argc, &argv);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* Initialize B and C. */
    if (rank == root) {
        B[0][0] = 1;  B[0][1] = 2;  B[0][2] = 3;  B[0][3] = 4;
        B[1][0] = 4;  B[1][1] = -5; B[1][2] = 6;  B[1][3] = 4;
        B[2][0] = 7;  B[2][1] = 8;  B[2][2] = 9;  B[2][3] = 2;
        B[3][0] = 3;  B[3][1] = -1; B[3][2] = 5;  B[3][3] = 0;
        C[0] = 1;  C[1] = -4;  C[2] = 7;  C[3] = 3;
    }
    /* Synchronize all processes before the collective operations. */
    ierr = MPI_Barrier(MPI_COMM_WORLD);

    /* Scatter matrix B by rows. */
    ierr = MPI_Scatter(B, NCOLS, MPI_FLOAT, Bpart, NCOLS, MPI_FLOAT,
                       root, MPI_COMM_WORLD);

    /* Scatter the elements of vector C. */
    ierr = MPI_Scatter(C, 1, MPI_FLOAT, Cpart, 1, MPI_FLOAT,
                       root, MPI_COMM_WORLD);

    /* Do the vector-scalar multiplication. */
    for (j = 0; j < NCOLS; j++)
        Apart[j] = Cpart[0] * Bpart[j];

    /* Reduce the partial results to vector A on the root. */
    ierr = MPI_Reduce(Apart, A, NCOLS, MPI_FLOAT, MPI_SUM,
                      root, MPI_COMM_WORLD);
    if (rank == root) {
        printf("\nThis is the result of the parallel computation:\n\n");
        printf("A[0]=%g\n", A[0]);
        printf("A[1]=%g\n", A[1]);
        printf("A[2]=%g\n", A[2]);
        printf("A[3]=%g\n", A[3]);

        /* Serial check: compute A_exact = C*B on the root alone. */
        for (k = 0; k < NCOLS; k++) {
            A_exact[k] = 0.0;
            for (l = 0; l < NCOLS; l++) {
                A_exact[k] += C[l] * B[l][k];
            }
        }
    }

    /* Every process must call MPI_Finalize, not just the root. */
    MPI_Finalize();
    return 0;
}
Example 1: Matrix-vector Multiplication
• It is important to realize that this algorithm would change if the program were written in Fortran. This is because C stores two-dimensional arrays in memory by rows (row-major order), while Fortran stores them by columns (column-major order).
• If you translated the above program directly into a Fortran
program, the collective MPI calls would fail because the
data going to each of the different processors is not
contiguous.
• This problem can be solved with derived datatypes, which are discussed in Chapter 6 - Derived Datatypes; a sketch of this approach follows Figure 13.6 below.
• A simpler approach would be to decompose the vector-
matrix multiplication into independent scalar-row
computations and then proceed as above. This approach
is shown schematically in Figure 13.6.
Example 1: Matrix-vector Multiplication
Figure 13.6. Schematic of the decomposition B*C = A, with B (n x 4m), C (4m x M), and A (n x M).
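Returning to the derived-datatype remedy mentioned above, one standard MPI technique is to describe a column of a row-major (C-ordered) matrix with MPI_Type_vector and resize its extent with MPI_Type_create_resized so that consecutive columns can be scattered; a minimal sketch, assuming one column per process (names are illustrative):

#include <stdio.h>
#include <mpi.h>
#define N 4   /* run with N processes so that each gets one column */

int main(int argc, char **argv) {
    int rank, size, i, j;
    float B[N][N], col[N];
    MPI_Datatype coltype, scattercol;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                B[i][j] = (float)(i * N + j);

    /* One column of a row-major NxN matrix: N blocks of 1 float, stride N. */
    MPI_Type_vector(N, 1, N, MPI_FLOAT, &coltype);
    /* Shrink the extent so consecutive "columns" start one float apart. */
    MPI_Type_create_resized(coltype, 0, (MPI_Aint) sizeof(float), &scattercol);
    MPI_Type_commit(&scattercol);

    /* Each process receives one (non-contiguous) column as N contiguous floats. */
    MPI_Scatter(B, 1, scattercol, col, N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    printf("rank %d received the column starting with %g\n", rank, col[0]);

    MPI_Type_free(&scattercol);
    MPI_Type_free(&coltype);
    MPI_Finalize();
    return 0;
}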
Example 2: Matrix-matrix Multiplication
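A minimal sketch of one common collective-communication scheme for computing C = A*B (not necessarily the exact layout used in the course's figures): the root scatters row blocks of A, broadcasts all of B, each process multiplies its block, and the row blocks of C are gathered back. The matrix size is illustrative and is assumed to divide evenly by the number of processes.

#include <stdio.h>
#include <mpi.h>
#define N 4   /* illustrative size; assumed divisible by the process count */

int main(int argc, char **argv) {
    int rank, size, i, j, k, nrows;
    float A[N][N], B[N][N], C[N][N], Apart[N][N], Cpart[N][N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    nrows = N / size;

    if (rank == 0)                        /* root initializes A and B */
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                A[i][j] = (float)(i + j);
                B[i][j] = (float)(i - j);
            }

    /* Give each process nrows rows of A and a full copy of B. */
    MPI_Scatter(A, nrows * N, MPI_FLOAT, Apart, nrows * N, MPI_FLOAT,
                0, MPI_COMM_WORLD);
    MPI_Bcast(B, N * N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    /* Each process computes its own block of rows of C. */
    for (i = 0; i < nrows; i++)
        for (j = 0; j < N; j++) {
            Cpart[i][j] = 0.0f;
            for (k = 0; k < N; k++)
                Cpart[i][j] += Apart[i][k] * B[k][j];
        }

    /* Collect the row blocks of C back on the root. */
    MPI_Gather(Cpart, nrows * N, MPI_FLOAT, C, nrows * N, MPI_FLOAT,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("C[0][0] = %g\n", C[0][0]);

    MPI_Finalize();
    return 0;
}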
Example 3: The Use of Ghost Cells to solve a Poisson Equation
• The problem is to solve the Poisson equation on a 2D grid with periodic boundary conditions,
    ∇²φ(x,y) = −4π ρ(x,y),
where phi(x,y) is our unknown potential function and rho(x,y) is the known source charge density, here taken to be
    ρ(x,y) = (a/π) ( e^(−a[(x−L/4)² + y²]) − e^(−a[(x−3L/4)² + y²]) ).
The domain of the problem is the box defined by the x-axis, y-axis, and the lines x=L and y=L.
Figure 13.10. Poisson equation on a 2D grid with periodic boundary conditions.
Example 3: The Use of Ghost Cells to solve a Poisson Equation
• Serial Code:
• To solve this equation, an iterative scheme is employed using finite differences. The update equation for the field phi at the (n+1)th iteration is written in terms of the values at the nth iteration via
    φ_{i,j}^{n+1} = π Δx² ρ_{i,j} + (1/4) ( φ_{i+1,j}^{n} + φ_{i−1,j}^{n} + φ_{i,j+1}^{n} + φ_{i,j−1}^{n} ),
iterating until the condition
    Σ_{i,j} | φ_{i,j}^{new} − φ_{i,j}^{old} |  /  Σ_{i,j} | ρ_{i,j} |  <  ε
is satisfied.
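A minimal serial sketch of this iteration (the grid size, the constant a, the tolerance, and the iteration cap are illustrative; the periodic boundaries are handled with modular indexing):

#include <stdio.h>
#include <math.h>
#define N 32           /* illustrative grid size */

int main(void) {
    static double phi[N][N], phinew[N][N], rho[N][N];
    double pi = 4.0 * atan(1.0), L = 1.0, dx, a = 40.0, eps = 1.0e-5;
    double x, y, diff, norm;
    int i, j, ip, im, jp, jm, iter = 0;

    dx = L / N;

    /* Source term: two Gaussian charges of opposite sign, as in rho(x,y). */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            x = i * dx;  y = j * dx;
            rho[i][j] = (a / pi) *
                (exp(-a * ((x - L/4)*(x - L/4) + y*y)) -
                 exp(-a * ((x - 3*L/4)*(x - 3*L/4) + y*y)));
            phi[i][j] = 0.0;
        }

    /* Jacobi-style update with periodic boundaries (modular indexing). */
    do {
        diff = 0.0;  norm = 0.0;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                ip = (i + 1) % N;  im = (i + N - 1) % N;
                jp = (j + 1) % N;  jm = (j + N - 1) % N;
                phinew[i][j] = pi * dx * dx * rho[i][j]
                    + 0.25 * (phi[ip][j] + phi[im][j] + phi[i][jp] + phi[i][jm]);
                diff += fabs(phinew[i][j] - phi[i][j]);
                norm += fabs(rho[i][j]);
            }
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                phi[i][j] = phinew[i][j];
    } while (diff / norm > eps && ++iter < 100000);   /* safety cap */

    printf("stopped after %d iterations, phi[0][0] = %g\n", iter, phi[0][0]);
    return 0;
}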
• Parallel Code:
• In this example, the domain is chopped into rectangles, in what is often called block-block decomposition, illustrated in Figure 13.11 below.
Figure 13.12. Array indexing in a parallel Poisson solver on a 3x5 processor grid.
Example 3: The Use of Ghost Cells to solve a Poisson Equation
• Note that P(1,2) (i.e., P(7)) is responsible for indices i=23-43 and
j=27-39 in the serial code double do-loop.
• A parallel speedup is obtained because each processor is working
on essentially 1/15 of the total data.
• However, there is a problem. What does P(1,2) do when its 5-point stencil hits the boundaries of its domain (i.e., when i=23 or i=43, or j=27 or j=39)? The stencil then reaches into another processor's domain, which means that the boundary data it needs resides in the memory of a separate processor.
• Because the update formula for phi at grid point (i,j) involves
neighboring grid indices {i-1,i,i+1;j-1,j,j+1}, P(1,2) must communicate
with its North, South, East, and West (N, S, E, W) neighbors to get
one column of boundary data from its E, W neighbors and one row
of boundary data from its N,S neighbors.
• This is illustrated in Figure 13.13 below.
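A minimal sketch of this N, S, E, W ghost-cell (halo) exchange using MPI_Sendrecv on a periodic Cartesian process grid (the local array sizes and names are illustrative; the E/W boundary columns are packed into contiguous buffers because the local array is stored row-major in C):

#include <stdio.h>
#include <mpi.h>
#define NL 10   /* illustrative number of local rows    */
#define ML 10   /* illustrative number of local columns */

int main(int argc, char **argv) {
    int rank, size, i, j, north, south, east, west;
    int dims[2] = {0, 0}, periods[2] = {1, 1};
    double phi[NL + 2][ML + 2];            /* +2 for the ghost rows/columns */
    double sendbuf[NL], recvbuf[NL];
    MPI_Comm grid;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Build a 2D periodic process grid and find the four neighbours. */
    MPI_Dims_create(size, 2, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);
    MPI_Cart_shift(grid, 0, 1, &north, &south);
    MPI_Cart_shift(grid, 1, 1, &west, &east);

    for (i = 0; i <= NL + 1; i++)          /* stand-in local data */
        for (j = 0; j <= ML + 1; j++)
            phi[i][j] = (double)rank;

    /* North/South: the interior boundary rows are contiguous in memory. */
    MPI_Sendrecv(&phi[1][1], ML, MPI_DOUBLE, north, 0,
                 &phi[NL + 1][1], ML, MPI_DOUBLE, south, 0, grid, &status);
    MPI_Sendrecv(&phi[NL][1], ML, MPI_DOUBLE, south, 1,
                 &phi[0][1], ML, MPI_DOUBLE, north, 1, grid, &status);

    /* East/West: pack the boundary column, exchange, unpack into ghosts. */
    for (i = 1; i <= NL; i++) sendbuf[i - 1] = phi[i][1];
    MPI_Sendrecv(sendbuf, NL, MPI_DOUBLE, west, 2,
                 recvbuf, NL, MPI_DOUBLE, east, 2, grid, &status);
    for (i = 1; i <= NL; i++) phi[i][ML + 1] = recvbuf[i - 1];

    for (i = 1; i <= NL; i++) sendbuf[i - 1] = phi[i][ML];
    MPI_Sendrecv(sendbuf, NL, MPI_DOUBLE, east, 3,
                 recvbuf, NL, MPI_DOUBLE, west, 3, grid, &status);
    for (i = 1; i <= NL; i++) phi[i][0] = recvbuf[i - 1];

    printf("rank %d finished its halo exchange\n", rank);
    MPI_Finalize();
    return 0;
}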
Example 3: The Use of Ghost Cells to solve a Poisson Equation
(6) for (i = 1; i <= N_local; i++) {
      for (j = 1; j <= M_local; j++) {
        update phi[i][j]
      }
    }
    End loop over stencil iterations
(7) Output data
Example 3: The Use of Ghost Cells to solve a Poisson Equation
• Note that initializing the data should be performed in parallel. That is,
each processor P(i,j) should only initialize the portion of phi for
which it is responsible. (Recall NO processor contains the full global
phi).
• In relation to this point, step (7), Output data, is not such a simple-
minded task when performing parallel calculations. Should you
reduce all the data from phi_local on each processor to one giant
phi_global on P(0,0) and then print out the data? This is certainly
one way to do it, but it seems to defeat the purpose of not having all
the data reside on one processor.
• For example, what if phi_global is too large to fit in memory on a single processor? A second alternative is for each processor to write out its own phi_local to a file "phi.ij", where ij indicates the processor's 2-digit designation (e.g., P(1,2) writes out to file "phi.12"); a sketch of this approach follows this list.
• The data then has to be manipulated off processor by another code
to put it into a form that may be rendered by a visualization package.
This code itself may have to be a parallel code.
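A minimal sketch of that second alternative, with each process writing its own phi_local to a file named from its grid coordinates (the array sizes, the coordinate-to-filename scheme, and the plain-text format are illustrative):

#include <stdio.h>
#include <mpi.h>
#define NL 10   /* illustrative local block size */
#define ML 10

int main(int argc, char **argv) {
    int rank, size, i, j, coords[2], dims[2] = {0, 0}, periods[2] = {1, 1};
    double phi_local[NL][ML];
    char fname[64];
    FILE *fp;
    MPI_Comm grid;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Dims_create(size, 2, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);
    MPI_Cart_coords(grid, rank, 2, coords);

    for (i = 0; i < NL; i++)                 /* stand-in local data */
        for (j = 0; j < ML; j++)
            phi_local[i][j] = (double)rank;

    /* Each process writes its own block, e.g. P(1,2) writes "phi.12". */
    sprintf(fname, "phi.%d%d", coords[0], coords[1]);
    fp = fopen(fname, "w");
    if (fp != NULL) {
        for (i = 0; i < NL; i++) {
            for (j = 0; j < ML; j++)
                fprintf(fp, "%g ", phi_local[i][j]);
            fprintf(fp, "\n");
        }
        fclose(fp);
    }

    MPI_Finalize();
    return 0;
}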