Message Passing Interface (MPI)
UNIT 5
Outline
Background
Message Passing
MPI
Group and Context
Communication Modes
Blocking/Non-blocking
Features
Programming issues
Tutorial
Distributed Computing Paradigms
Communication Models:
Message Passing
Shared Memory
Computation Models:
Functional Parallel
Data Parallel
Message Passing
A process is a program counter and an address space.
Inter-process communication: the movement of data from one process's address space to another's.
Types: synchronous / asynchronous.
Synchronous vs. Asynchronous
A synchronous communication does not complete until the message has been received.
An asynchronous communication completes as soon as the message is on its way.
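MPI exposes both styles. A minimal sketch (assuming MPI_COMM_WORLD holds at least two ranks): MPI_Send blocks until its buffer can be reused, while MPI_Isend returns immediately and is completed later with MPI_Wait.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, data = 42, recvbuf;
    MPI_Request req;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* non-blocking send: returns at once, transfer completes later */
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... useful work can overlap the communication here ... */
        MPI_Wait(&req, &status);   /* send buffer is safe to reuse now */
    } else if (rank == 1) {
        /* blocking receive: returns only after the message has arrived */
        MPI_Recv(&recvbuf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", recvbuf);
    }
    MPI_Finalize();
    return 0;
}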
What is message passing?
Data transfer.
General
Threads
Portable.
Basic (6 functions; see the prototypes below).
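For reference, the six basic functions are MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize. A sketch of their C prototypes (the const qualifier on the send buffer is the MPI-3 form; older headers use plain void *):
int MPI_Init(int *argc, char ***argv);
int MPI_Comm_size(MPI_Comm comm, int *size);    /* how many processes */
int MPI_Comm_rank(MPI_Comm comm, int *rank);    /* which one am I */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status);
int MPI_Finalize(void);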
Basic Commands
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* main part of the program:
       use MPI calls as needed, depending on your data
       partitioning and parallelization strategy */
    MPI_Finalize();
    return 0;
}
Initializing MPI
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    printf("Hello, world!\n");
    MPI_Finalize();
    return 0;
}
A minimal MPI program (C) (cont.)
#include "mpi.h" provides basic MPI definitions and types.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("I am %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
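Launched with four processes (for example, mpirun -np 4 ./a.out; launcher names vary by installation), each rank prints one line such as "I am 2 of 4". The lines can appear in any order, since the ranks run concurrently.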
Some concepts
The receiver can specify a wildcard value for source (MPI_ANY_SOURCE) and/or a wildcard value for tag (MPI_ANY_TAG), indicating that any source and/or tag is acceptable.
Status is used for extra information about the received message when a wildcard receive mode is used.
If the count of the message received is less than or equal to the count specified in the MPI receive call, the message is received successfully; otherwise it is a buffer overflow error.
MPI_STATUS
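A minimal sketch of a wildcard receive that inspects the status object afterwards (the choice of tag values and of one int per message is illustrative):
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, value, count, i;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        /* accept messages from any source, with any tag */
        for (i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_INT, &count);  /* items actually received */
            printf("got %d from rank %d (tag %d, count %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG, count);
        }
    } else {
        value = rank * 10;                       /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 0, rank,    /* tag = sender's rank */
                 MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}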
When to use MPI
Compile using:
mpicc -o pi pi.c
or
mpic++ -o pi pi.cpp
Run using:
mpirun -np <# of procs> -machinefile XXX pi
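The machinefile (XXX above) is a plain text file listing the hosts to run on, one per line; the names below are placeholders for your own machines:
node01
node02
node03
node04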
#include "mpi.h"
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    while (!done)
    {
        if (myid == 0) {
            printf("Enter the number of intervals: (0 quits) ");
            scanf("%d", &n);
        }
        /* rank 0 broadcasts n to every process */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0) break;
        h = 1.0 / (double) n;
        sum = 0.0;
        /* each rank sums every numprocs-th rectangle */
        for (i = myid + 1; i <= n; i += numprocs)
        {
            x = h * ((double) i - 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        mypi = h * sum;
        /* add the partial sums onto rank 0 */
        MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (myid == 0)
            printf("pi is approximately %.16f, Error is %.16f\n",
                   pi, fabs(pi - PI25DT));
    }
    MPI_Finalize();
    return 0;
}
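How the work is split: the program approximates pi = integral from 0 to 1 of 4/(1+x^2) dx with the midpoint rule. Rank myid sums rectangles myid+1, myid+1+numprocs, myid+1+2*numprocs, and so on; with 4 processes, rank 0 takes intervals 1, 5, 9, ... This cyclic distribution balances the load for any n, and MPI_Reduce collects the partial sums on rank 0.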
MPI on ECE Solaris Machines (3)
How to compile:
mpicc ex2.c -o ex2 -lm
How to run:
mpirun -np 4 -machinefile ml ex2
Where to get MPI library?
#include "stdafx.h"
#include <mpi.h>
#include <stdio.h>
Execute using:
mpiexec.exe -localonly <# of procs> exe_file_name.exe