Computer Science 335
Parallel Processing and High Performance Computing
Fall 2021, Siena College
Lecture 09: MPI Collective Communication
Date: Friday, October 8, 2021
Agenda
- Announcements
- Lab 7: Collective Communication is out; it will likely be the
main task for Monday's class and is due in a week
- Programming Project 3: Parallelizing Jacobi Iteration out
- Example: MPI implementation of Conway's Game of Life
- Uses a distributed data structure: Each process maintains
its own subset of the computational domain, in this case just
some of the rows of the grid. Other processes do not know about
the data on a given process. Only the data that is needed to
compute the next generation, a one-cell overlap (sometimes
called ghost cells or a halo), is exchanged
between neighbors before each iteration.
- When we need to get a global count of some statistic, such as
the count of live cells at the start, we use a reduction.
- The communication of cell data between iterations is done
with two pairs of sends and receives. Here, we use nonblocking
calls, then wait for their completion with the MPI_Waitall
call.
- This example also shows a technique for writing a file when
the intended contents are distributed among a set of processes.
Terminology
- collective communication
- reduction
- ghost cells/halo
Examples
- mpilife: an MPI parallelization of a Conway's
Game of Life simulator