Computer Science 400
Parallel Processing and High Performance Computing
Fall 2017, Siena College
This brief lab exercise will introduce you to computing with multiple processes. First, we will see an example using the Unix fork system call. Then we will run our first message passing programs. Please work individually on this lab.
Getting Set Up
You will receive an email with the link to follow to set up your GitHub repository processes-lab-yourgitname for this Lab.
Introduction
Our first mechanism for introducing parallelism into our programs is to have multiple processes in execution that cooperate to solve a problem. Those processes will not share any memory - in fact, they will often be running on different physical pieces of hardware. When those processes need to communicate with each other (which they'll almost always need to do to perform a meaningful parallel computation), they will send messages to each other.
This approach is called the message passing paradigm. It is very flexible in that message passing programs can be executed by creating multiple processes on the same physical system (usually one with multiple processors/cores), or by creating them on different systems that can communicate across some network medium.
Some characteristics of the message passing paradigm: the cooperating processes share no memory, all communication between them is explicit (through send and receive operations), and the same program can run on a single multicore machine or across many networked systems.
Creating Unix Processes
Unix programs can use fork() to create new processes.
The Unix system call fork() duplicates a process. The child is a copy of the parent, in execution at the same point: the statement after the return from fork().
The return value indicates whether the process is the child or the parent: 0 means child, a positive value (the child's process ID) means parent, and -1 means the call failed (e.g., a process limit was reached or permission was denied).
Example C program fragment:

pid = fork();
if (pid) {
  /* parent stuff */
} else {
  /* child stuff */
}
A more complete program that uses fork() along with three other system calls (wait(), getpid(), and getppid()) is in the forking example in your repository for this lab.
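As a preview, here is a minimal sketch in that spirit; it is not the exact code in your repository, which may differ in its details.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  pid_t pid = fork();
  if (pid < 0) {
    perror("fork");   /* fork failed: limit reached or permission denied */
    exit(1);
  }
  if (pid == 0) {
    /* child: fork() returned 0 */
    printf("child: pid=%d, parent pid=%d\n", getpid(), getppid());
    exit(0);
  }
  /* parent: fork() returned the child's pid */
  printf("parent: pid=%d, child pid=%d\n", getpid(), pid);
  wait(NULL);   /* wait for the child to terminate */
  return 0;
}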
Processes created using fork() do not share any context - each child gets its own copy of the parent's address space - so they must allocate shared memory explicitly or rely on a form of message passing to communicate.
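One explicit-sharing option on Unix systems is to map a shared region with mmap() before calling fork(); the following minimal sketch assumes a Linux-style MAP_ANONYMOUS flag and is only an illustration, not part of this lab's code.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  /* one int visible to both parent and child after the fork */
  int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (shared == MAP_FAILED) {
    perror("mmap");
    exit(1);
  }
  *shared = 0;
  if (fork() == 0) {
    *shared = 42;   /* child writes into the shared region */
    exit(0);
  }
  wait(NULL);       /* parent waits, then sees the child's write */
  printf("value written by child: %d\n", *shared);
  munmap(shared, sizeof(int));
  return 0;
}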
Run the program on noreaster.teresco.org.
Remember that the advantage of using processes such as these instead of threads is that the processes could potentially be running on different systems. But if they are going to cooperate, they will need to communicate:
Sockets and pipes provide only very rudimentary interprocess communication. Each "message" sent through a pipe or across a socket has a unique sender and a unique receiver, and is really nothing more than a stream of bytes. The sender and receiver must add any structure to these communications.
The sockets example in your repository is a very simplistic example of two processes that communicate over raw sockets. It is included mainly to show you that you don't want to be doing this if you can help it.
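To give the flavor without reproducing that example, here is a minimal sketch (not the repository's code) that uses socketpair() to connect a parent and child, showing that what travels over a socket is nothing more than bytes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  int fds[2];
  /* a connected pair of Unix-domain stream sockets */
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
    perror("socketpair");
    exit(1);
  }
  if (fork() == 0) {
    /* child: send a "message" -- really just a stream of bytes */
    const char *msg = "hello from the child";
    close(fds[0]);
    write(fds[1], msg, strlen(msg) + 1);
    exit(0);
  }
  /* parent: read whatever bytes arrive; any structure is up to us */
  char buf[64];
  close(fds[1]);
  if (read(fds[0], buf, sizeof(buf)) > 0) {
    printf("parent received: %s\n", buf);
  }
  wait(NULL);
  return 0;
}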
For many applications, this primitive interface is unreasonable. We want something at a higher level. Message passing libraries have evolved to meet this need.
Message Passing Libraries
Message passing is supported through a set of library routines, which allows programmers to avoid dealing with the hardware directly. Programmers want to concentrate on the problem they are trying to solve, not on writing to special memory buffers, making TCP/IP calls, or creating sockets.
Examples: P4, PVM, MPL, MPI, MPI-2, etc. MPI has become an industry standard, under the guidance of the MPI Forum.
We will be looking at MPI in detail for the next couple weeks. For today, you will be considering an MPI-based "Hello, World" program, mpihello.
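The actual code is in your repository; a typical MPI "Hello, World" looks something like this sketch.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
  int rank, size;

  MPI_Init(&argc, &argv);                /* first executable statement */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
  MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes are there? */
  printf("Hello, World from process %d of %d\n", rank, size);
  MPI_Finalize();                        /* last executable statement */
  return 0;
}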
Compile the program on noreaster.teresco.org with the make command. MPI programs need to know about additional headers and libraries, so they are often compiled with a different command (than, say, gcc alone) that is aware of the extra MPI components.
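On most MPI installations, that MPI-aware command is a compiler wrapper called mpicc, so the Makefile likely runs something along these lines (check the Makefile to see exactly what it does):

mpicc -o mpihello mpihello.c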
You should now have an executable mpihello. Run it.
Running the program the standard way, by typing just its name at the command line, results in an MPI run with a single process.
The mechanism to run an MPI program and launch multiple processes is somewhat system-dependent, but often involves a command such as mpirun or mpiexec. On noreaster, the command is mpirun. To run two processes:
mpirun -np 2 ./mpihello
Now run it with increasing powers of two for the number of processes.
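A small shell loop is a convenient way to do this (adjust the list of process counts to whatever the lab questions call for):

for np in 1 2 4 8 16 32; do mpirun -np $np ./mpihello; done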
Notice that the first executable statement in an MPI program's main function is a call to the MPI_Init function, and the last is a call to the MPI_Finalize function. We will take care not to have any executable code outside that block.
Submitting
Your submission requires that all required deliverables are committed and pushed to the master branch of your repository on GitHub.
Grading
This assignment is worth 30 points, which are distributed as follows:
Feature | Value | Score |
Lab Question 1 | 2 | |
Lab Question 2 | 3 | |
Lab Question 3 | 1 | |
sockets additions | 10 | |
Lab Question 4 | 1 | |
Lab Question 5 | 1 | |
Lab Question 6 | 2 | |
Lab Question 7 | 2 | |
Lab Question 8 | 1 | |
Lab Question 9 | 3 | |
mpihello additions | 4 | |
Total | 30 | |