Computer Science 400
Parallel Processing and High Performance Computing
Fall 2017, Siena College
For this lab, we'll look at MPI's non-blocking point-to-point communication.
You may work alone or with a partner on this lab.
Getting Set Up
You will receive an email with the link to follow to set up your GitHub repository nb-lab-yourgitname for this lab. One member of the group should follow the link to set up the repository on GitHub, then that person should email the instructor with the other group members' GitHub usernames so they can be granted access. This will allow all members of the group to clone the repository and commit and push changes to the origin on GitHub. At least one group member should make a clone of the repository to begin work.
Non-Blocking Point-to-Point Communication
Consider the mpiexchange.c program you wrote for the previous lab. Suppose we wanted to generalize that program to work for more processes. That is, each process picks a number to send to the process with the next highest rank (except the process with the highest rank, which sends to process 0). Each process then receives a value, modifies it in some way, and sends it back. Those returned messages then need to be received and printed.
The program mpiring_danger.c does just this. We can compile and run it on any number of processes and it will likely work.
However, it is not guaranteed to work, since an MPI_Send call will not necessarily return until there is a corresponding MPI_Recv call to receive the message. If each process performs its first MPI_Send and those calls cannot complete until the matching MPI_Recv is posted, we can enter a deadlock condition, where each process is waiting for something to happen in some other process before it can continue execution.
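The heart of that exchange looks roughly like the sketch below. This is a minimal illustration of the send-first, receive-second ordering that creates the risk, not the actual mpiring_danger.c; the variable names and the way the value gets modified are assumptions.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;         /* neighbor we send to */
    int prev = (rank + size - 1) % size;  /* neighbor we receive from */

    int value = rank * 100;               /* the number this process picks */
    int incoming, returned;

    /* Every process sends first; if the messages cannot be buffered,
       every process blocks here and none reaches its receive: deadlock. */
    MPI_Send(&value, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    MPI_Recv(&incoming, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Modify the received value and return it to its sender. */
    incoming *= 2;
    MPI_Send(&incoming, 1, MPI_INT, prev, 1, MPI_COMM_WORLD);
    MPI_Recv(&returned, 1, MPI_INT, next, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Process %d sent %d and got back %d\n", rank, value, returned);
    MPI_Finalize();
    return 0;
}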
The chances of this problem increase with larger messages. The program mpiring_danger_large.c sends arrays in the same pattern as the previous program sent single int values. The #define at the top of the program determines how large these messages are.
These kinds of communication patterns are very common in parallel programming, so we need a way to deal with them. MPI provides another variation on point-to-point communication that is non-blocking. These are also known as immediate mode sends and receives. The functions are called MPI_Isend and MPI_Irecv, and they always return immediately.
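For reference, each immediate-mode prototype mirrors its blocking counterpart with one extra argument: an MPI_Request handle that is later used to test or wait for completion. (The const qualifier on the send buffer is the MPI-3 form of the prototype.)

/* Immediate-mode prototypes from <mpi.h>; each records the pending
   operation in the MPI_Request it is handed and returns right away. */
int MPI_Isend(const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm, MPI_Request *request);

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm, MPI_Request *request);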
The program mpiring_safe_large.c uses non-blocking sends to remove the danger of deadlock in this program. Please see the comments there about the need to wait for the message to be delivered (or at least to have it in progress) before the send buffer can be modified. This example uses MPI_Wait to accomplish that.
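In outline, the safe pattern starts each send with MPI_Isend, posts the receive, and then calls MPI_Wait on the send's request before the send buffer is touched again. The sketch below shows one leg of the exchange; it is not the actual mpiring_safe_large.c, the variable names are assumptions, and MSGSIZE stands in for whatever the program's #define specifies.

#include <stdio.h>
#include <mpi.h>

#define MSGSIZE 100000   /* stand-in for the program's message-size #define */

/* Send and receive buffers, declared as globals. */
int senddata[MSGSIZE], recvdata[MSGSIZE];

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;

    for (int i = 0; i < MSGSIZE; i++) senddata[i] = rank;

    MPI_Request send_request;

    /* Start the send; this returns immediately, but senddata must not
       be modified until the request completes. */
    MPI_Isend(senddata, MSGSIZE, MPI_INT, next, 0, MPI_COMM_WORLD, &send_request);

    /* With our send already in progress, the blocking receive can be
       posted without risk of deadlock. */
    MPI_Recv(recvdata, MSGSIZE, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Wait until MPI is finished with senddata before reusing it. */
    MPI_Wait(&send_request, MPI_STATUS_IGNORE);

    printf("Process %d received data starting with %d from process %d\n",
           rank, recvdata[0], prev);

    MPI_Finalize();
    return 0;
}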
We can get past the limit mentioned above by making the arrays for our send and receive buffers global variables.

Practice program: mpiring_safe_large.c still uses a blocking receive (MPI_Recv rather than MPI_Irecv). Modify the program to use MPI_Irecv, adding any other needed variables and function calls to achieve this. (5 points)

Submitting
Your submission requires that all required deliverables are committed and pushed to the master branch of your repository on GitHub.
Grading
This assignment is worth 20 points, which are distributed as follows:
Feature | Value | Score
Question 1 | 3 |
Question 2 | 5 |
Question 3 | 5 |
Question 4 | 2 |
Practice program | 5 |
Total | 20 |