Computer Science 335
Parallel Processing and High Performance Computing
Fall 2024, Siena College
For this lab, you will write your first real message passing programs using MPI. We focus here on the fundamental building blocks of message passing: sends and receives, known as point-to-point communication.
You may work alone or with a partner on this lab.
Learning goals:
Getting Set Up
In Canvas, you will find a link to follow to set up your GitHub repository, which will be named p2p-lab-yourgitname, for this lab. Only one member of the group should follow the link to set up the repository on GitHub, then others should request a link to be granted write access.
You may answer the lab questions right in the README.md file of your repository, or use the README.md to provide either a link to a Google document that has been shared with your instructor or the name of a PDF of your responses uploaded to your repository.
Point-to-Point Communication
Pacheco Chapter 3, in Sections 1, 2 and 3, introduces MPI's basic point-to-point communication capabilities. Note that you can obtain copies of all examples from the book's web site, and a copy of the examples has been placed on noreaster in /home/cs335/pacheco/ipp-source-use. The ones we are using today are in the pacheco-ch3 directory in your repository.
First, let's look at Pacheco's mpi_hello.c program. This is a more elaborate version of the "Hello, World" program we looked at in the previous lab. Instead of each process printing its message directly with printf, only the process whose rank (as returned by MPI_Comm_rank) is 0 prints anything; all other processes send their messages to process 0, which prints them.
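The overall structure is roughly the sketch below (a simplified rendering, not necessarily the book's exact code; the buffer size, the tag value 0, and the message text are placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(void) {
        char greeting[100];          /* buffer for one message */
        int comm_sz, my_rank;

        MPI_Init(NULL, NULL);
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

        if (my_rank != 0) {
            /* every nonzero rank sends its greeting to process 0 */
            sprintf(greeting, "Greetings from process %d of %d!", my_rank, comm_sz);
            MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        } else {
            /* process 0 prints its own greeting, then one per other rank */
            printf("Greetings from process %d of %d!\n", my_rank, comm_sz);
            for (int q = 1; q < comm_sz; q++) {
                MPI_Recv(greeting, 100, MPI_CHAR, q, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("%s\n", greeting);
            }
        }

        MPI_Finalize();
        return 0;
    }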
Pacheco describes what is required for a send to match a corresponding receive. Modify the mpi_hello.c program so that one of the parameters to MPI_Send does not match the intended corresponding MPI_Recv.
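For example, one option (an assumption about which parameter you might choose; any of them is fair game) is to change the tag argument of the send so it no longer matches the tag the receive is posted with. Only the relevant line is shown here, not a complete program:

    /* original: tags match, so process 0's receive can be satisfied        */
    MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);

    /* mismatched: tag 1 never matches a receive posted with tag 0, so the
       receive on process 0 blocks and the program hangs                    */
    MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);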
The mpi_trap1.c program uses MPI to parallelize the use of the trapezoidal rule to calculate the integral of a function between two endpoints. The math isn't our main concern, but the refresher in Pacheco Section 3.2.1 is worth reading.
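As a quick reminder, with n trapezoids of width h = (b-a)/n the rule approximates the integral as h*(f(x_0)/2 + f(x_1) + ... + f(x_{n-1}) + f(x_n)/2). Each process applies that formula to its own piece of [a, b], and the per-process results are combined on process 0. A minimal sketch of the local computation (function names follow the book's style but may not match exactly; the integrand f here is just a placeholder):

    /* Example integrand; the real program defines its own f(). */
    double f(double x) {
        return x * x;
    }

    /* Serial trapezoidal rule over [left, right] using trap_count
       trapezoids of width h; each MPI process calls this on its
       own subinterval.                                            */
    double Trap(double left, double right, int trap_count, double h) {
        double estimate = (f(left) + f(right)) / 2.0;
        for (int i = 1; i <= trap_count - 1; i++)
            estimate += f(left + i * h);
        return estimate * h;
    }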
mpi_trap2.c performs the same computation, but rather than hard-coding the endpoints of the interval and the number of trapezoids, it prompts for and reads those values at run time.
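In this version the input is still distributed with point-to-point communication: process 0 reads the values and sends a copy to every other process. A minimal sketch of that idea, assuming a Get_input-style helper (the names, tags, and prompt text here are illustrative, not necessarily the book's exact code):

    #include <stdio.h>
    #include <mpi.h>

    /* Process 0 reads a, b, and n from stdin and forwards them to the
       other processes with point-to-point sends; everyone else receives. */
    void Get_input(int my_rank, int comm_sz, double* a_p, double* b_p, int* n_p) {
        if (my_rank == 0) {
            printf("Enter a, b, and n\n");
            scanf("%lf %lf %d", a_p, b_p, n_p);
            for (int dest = 1; dest < comm_sz; dest++) {
                MPI_Send(a_p, 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
                MPI_Send(b_p, 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
                MPI_Send(n_p, 1, MPI_INT,    dest, 0, MPI_COMM_WORLD);
            }
        } else {
            MPI_Recv(a_p, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(b_p, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(n_p, 1, MPI_INT,    0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }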
Gathering Timings
The trapezoidal rule program can take a significant amount of processing time if run with a sufficiently large number of trapezoids (i.e., a large enough n). Let's measure it!
First, take a look at the man page for the MPI_Wtime function. This function is commonly used by MPI programs for timings, much in the way we used gettimeofday for our previous C programs. MPI_Wtime conveniently returns an elapsed time from a fixed point in time in the past, measured in seconds.
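A typical pattern is to record MPI_Wtime before and after the region you care about and subtract. The complete toy program below is a minimal sketch of that idea (the barrier, the variable names, and the per-process printing are choices of this sketch, not requirements):

    #include <stdio.h>
    #include <mpi.h>

    int main(void) {
        int my_rank;
        double start, finish;

        MPI_Init(NULL, NULL);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

        MPI_Barrier(MPI_COMM_WORLD);   /* line the processes up before timing */
        start = MPI_Wtime();

        /* ... the code being timed, e.g., the trapezoidal rule work ... */

        finish = MPI_Wtime();
        printf("Process %d: elapsed time %f seconds\n", my_rank, finish - start);

        MPI_Finalize();
        return 0;
    }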
Some Practice
Here is my program's output:
    0 sending 49152 to 1
    0 received 98304 from 1
    1 received 49152 from 0
    1 sending 98304 to 0
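The details of the mpiexchange.c task are given in the lab questions, but the output above suggests a two-process exchange in which each rank both sends and receives, with every send matched by a receive so neither process deadlocks. Purely as an illustration of that pattern (the specific value, the doubling, the ordering, and the print statements here are guesses based only on the sample output, not the actual assignment):

    #include <stdio.h>
    #include <mpi.h>

    /* Two-process exchange sketch: rank 0 sends a value to rank 1; rank 1
       modifies it (here: doubles it, an assumption from the sample output)
       and sends the result back.  Each send has a matching receive.       */
    int main(void) {
        int my_rank, value;

        MPI_Init(NULL, NULL);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

        if (my_rank == 0) {
            value = 49152;
            printf("0 sending %d to 1\n", value);
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("0 received %d from 1\n", value);
        } else if (my_rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("1 received %d from 0\n", value);
            value *= 2;
            printf("1 sending %d to 0\n", value);
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }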
Submission
Commit and push your code. Make sure your answers to the lab questions are provided using one of the mechanisms mentioned in the "Getting Set Up" part of the lab.
Grading
This assignment will be graded out of 40 points.
Feature | Value | Score |
Question 1 | 1 | |
Question 2 | 1 | |
Question 3 | 1 | |
Question 4 | 6 | |
Question 5 | 1 | |
Question 6 | 2 | |
Question 7 | 5 | |
timers | 3 | |
Question 8 | 10 | |
mpiexchange.c | 10 | |
Total | 40 | |