Computer Science 400
Parallel Processing and High Performance Computing

Fall 2017, Siena College

Lab 7: Non-Blocking Messages
Due: 11:59 PM, Wednesday, October 4, 2017

For this lab, we'll look at MPI's non-blocking point-to-point communication.

You may work alone or with a partner on this lab.

Getting Set Up

You will receive an email with the link to follow to set up your GitHub repository nb-lab-yourgitname for this Lab. One member of the group should follow the link to set up the repository on GitHub, then that person should email the instructor with the other group members' GitHub usernames so they can be granted access. This will allow all members of the group to clone the repository and commit and push changes to the origin on GitHub. At least one group member should make a clone of the repository to begin work.

Non-Blocking Point-to-point Communication

Consider the mpiexchange.c program you wrote for the previous lab. Suppose we wanted that program to be generalized to work for more processes. That is, each process picks a number to send to the process with the next highest rank (except the process with the highest rank, which sends to the process with rank 0). Then each process will receive that value, modify it in some way, then send it back. Those messages then need to be received and printed.

The program mpiring_danger.c does just this. We can compile and run it on any number of processes and it will likely work.

However, it is not guaranteed to work, since an MPI_Send call is not guaranteed to return until a matching MPI_Recv has been posted to receive the message. If every process blocks in its first MPI_Send, no process ever reaches its MPI_Recv, and we enter a deadlock condition, where each process is waiting for something to happen in some other process before it can continue execution.

Question 1: Try this program on noreaster for 2, 8, and 32 processes. Which ones run successfully to completion? (3 points)

The chances of this problem increase with larger messages.

The program mpiring_danger_large.c sends arrays in the same pattern as the previous program sent single int values. The #define at the top of the program determines how large these messages are.

Question 2: Run the program repeatedly with 8 processes on noreaster. Start with a message size of 16. It should run to completion. Double the message size, recompile, and re-run until the program no longer produces output. That will indicate a deadlock situation. What message size is the first that leads to deadlock in this program? (5 points)

These kinds of communication patterns are very common in parallel programming, so we need a way to deal with them. MPI provides another variation on point-to-point communication that is non-blocking. These are also known as immediate mode sends and receives. These functions are called MPI_Isend and MPI_Irecv, and will always return immediately.

The program mpiring_safe_large.c uses non-blocking sends to remove the danger of deadlock in this program. Please see the comments there about the need to wait for the message to be delivered (or at least to have it in progress) before the send buffer can be modified. This example uses MPI_Wait to accomplish that.

Question 3: Verify that this works by repeating the above experiment to see how large your messages can be before an error occurs. Eventually, you will run into trouble with the size of the arrays on the stack. At what message size does this problem occur, and what error do you see? (5 points)

We can get past the limit above by making the arrays for our send and receive buffers global variables.
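The change is just a matter of where the arrays are declared. A minimal sketch, where MESSAGE_SIZE and the buffer names are illustrative rather than the ones in the handout's code:

```c
/* Buffer size in ints; adjust and recompile as in the experiments. */
#define MESSAGE_SIZE (1 << 20)

/* As globals, these arrays live in the program's data segment, so they
   are not subject to the stack size limit (commonly around 8 MB) that
   large local arrays run into.  They are instead limited by what the
   compiler, linker, and loader will accept. */
int send_buffer[MESSAGE_SIZE];
int receive_buffer[MESSAGE_SIZE];
```

Note that global arrays of int are zero-initialized automatically, whereas the stack-allocated versions started with garbage values.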

Question 4: How large can you allocate your send and receive buffer arrays as globals before you get a compiler error? Do not run these versions, just compile! This is what caused noreaster's brief problems on Tuesday night. (2 points)

Practice Program: While it is not necessary for the correctness of the program in this case, we could also use non-blocking receives (MPI_Irecv) in mpiring_safe_large.c. Modify the program to use MPI_Irecv, adding any other needed variables and function calls to achieve this. (5 points)


Your submission requires that all required deliverables are committed and pushed to the master branch of your repository on GitHub.


This assignment is worth 20 points, which are distributed as follows:

Feature            Value  Score
Question 1             3
Question 2             5
Question 3             5
Question 4             2
Practice program       5
Total                 20