Computer Science 400 - Parallel Processing, Siena College, Fall 2008
Lecture 04: Data Parallel Computation, Recursive Parallelism, OpenMP
Date: Wednesday, September 24, 2008
- Announcements
- I have limited in-person availability this week, but email me and
I will get back to you as soon as I can
- Schedule
- For Thursday: complete the palindromic pthreads implementation
- Lecture assignment recap
- Data Parallel Computation
- Explicit domain decomposition (see the matmult_decomp sketch at the end of these notes)
- Recursive Parallelism
- Adaptive quadrature: sequential code and discussion of how to parallelize (see the adapt_quad sketch at the end of these notes)
- OpenMP: compiler-directed multithreading (see the OpenMP sketch at the end of these notes)
The next lecture assignment is due at the start of class on Wednesday,
October 1. Write a few paragraphs in answer to the following question
and turn in a hard copy (typeset or handwritten is fine). We will
discuss this at the start of our next class, so late submissions
cannot be accepted.
- Our multithreading capabilities are limited by the fact that our
nodes contain up to 4 processors. Soon we will start looking at
message passing, which will allow us to make use of multiple nodes.
But for now, imagine you had access to a shared-memory multiprocessor
system with dozens or even hundreds of processors. Which of the
approaches we have discussed do you think would be most effective?
Why?
The reading for next time is Quinn Chapter 17.
Examples from this lecture:
- matmult_decomp
- adapt_quad
- openmp_hello
- matmult_openmp
- matmult_omp_explicit
- matmult_omp_bagoftasks
- openmp_private
- openmp_shared
- openmp_reduction
- matmult_omp_explicit2
- openmp_sections
- matmult_omp_explicit3
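The examples listed above were distributed as code in class and are not
reproduced in these notes. As a rough illustration of the first topic,
here is a minimal sketch of explicit domain decomposition for matrix
multiplication in the spirit of matmult_decomp; the matrix size, thread
count, and all names are assumptions rather than the actual example.
Each pthread computes one contiguous band of rows of the product, so no
synchronization beyond the final joins is needed.

#include <pthread.h>
#include <stdio.h>

#define N 512            /* matrix dimension (arbitrary for this sketch) */
#define NUM_THREADS 4    /* one thread per processor on our 4-way nodes */

static double a[N][N], b[N][N], c[N][N];

struct band {
    int first_row;       /* first row this thread computes */
    int last_row;        /* one past the last row it computes */
};

static void *multiply_band(void *arg) {
    struct band *myband = (struct band *) arg;
    int i, j, k;
    for (i = myband->first_row; i < myband->last_row; i++)
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    struct band bands[NUM_THREADS];
    int i, j, t;

    /* fill a and b with something simple */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = i + j;
            b[i][j] = i - j;
        }

    /* explicit domain decomposition: give each thread N/NUM_THREADS
       consecutive rows (N is assumed divisible by NUM_THREADS) */
    for (t = 0; t < NUM_THREADS; t++) {
        bands[t].first_row = t * (N / NUM_THREADS);
        bands[t].last_row = (t + 1) * (N / NUM_THREADS);
        pthread_create(&threads[t], NULL, multiply_band, &bands[t]);
    }

    /* wait for all of the worker threads to finish */
    for (t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    printf("c[0][0]=%f  c[N-1][N-1]=%f\n", c[0][0], c[N-1][N-1]);
    return 0;
}

With gcc this would be built with something like
gcc -o matmult_sketch matmult_sketch.c -lpthread.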
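Next, a minimal sequential adaptive quadrature sketch in the spirit of
adapt_quad; the integrand, interval, and tolerance are placeholders. The
interval is split recursively until the two-half estimate agrees with the
whole-interval estimate, and the independence of the two recursive calls
is exactly where recursive parallelism could be introduced.

#include <stdio.h>
#include <math.h>

/* the function being integrated -- just an example integrand */
static double f(double x) {
    return sin(x) * sin(x);
}

/* trapezoidal estimate of the integral of f on [left,right] */
static double trap(double left, double right) {
    return (right - left) * (f(left) + f(right)) / 2.0;
}

/* adaptive quadrature: accept the estimate when splitting the interval
   no longer changes the answer by more than tol, otherwise recurse on
   the two halves with a tighter tolerance */
static double adapt_quad(double left, double right, double tol) {
    double mid = (left + right) / 2.0;
    double whole = trap(left, right);
    double halves = trap(left, mid) + trap(mid, right);
    if (fabs(whole - halves) <= tol)
        return halves;
    /* the two recursive calls are completely independent, which is what
       makes this a natural candidate for recursive parallelism: one half
       could be handed to a new thread while the caller does the other */
    return adapt_quad(left, mid, tol / 2.0) +
           adapt_quad(mid, right, tol / 2.0);
}

int main(void) {
    printf("integral = %.10f\n", adapt_quad(0.0, 4.0, 1.0e-10));
    return 0;
}

Built with something like gcc -o adapt_quad_sketch adapt_quad_sketch.c -lm.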
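Finally, a minimal OpenMP sketch combining the ideas behind openmp_hello
and matmult_openmp (again, sizes and names are assumptions, not the
distributed examples): a parallel region in which every thread prints a
greeting, followed by a parallel for loop whose iterations the compiler
and runtime divide among the threads.

#include <stdio.h>
#include <omp.h>

#define N 300    /* matrix dimension (arbitrary for this sketch) */

static double a[N][N], b[N][N], c[N][N];

int main(void) {
    int i, j, k;

    /* every thread in the team executes the body of the parallel region */
    #pragma omp parallel
    printf("hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());

    /* fill a and b with something simple */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = i + j;
            b[i][j] = i - j;
        }

    /* the compiler and runtime split the iterations of the i loop among
       the threads; i is automatically private as the parallel loop index,
       j and k are listed as private so each thread gets its own copies,
       and a, b, and c are shared */
    #pragma omp parallel for private(j, k)
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

    printf("c[0][0] = %f\n", c[0][0]);
    return 0;
}

Built with something like gcc -fopenmp -o openmp_sketch openmp_sketch.c.
The openmp_private, openmp_shared, and openmp_reduction examples
presumably explore the data-scoping clauses further; a clause such as
reduction(+:total) on the parallel for directive, for instance, is how a
shared running sum like a dot product would be accumulated safely.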