This course gives an introduction to parallel programming for both the shared-memory and message-passing paradigms. It presents the basic functionality of two of the most widely used parallel programming tools: the MPI (Message Passing Interface) library for distributed-memory architectures and the OpenMP API for shared-memory and multicore architectures.
MPI is a message-passing library specification which provides a powerful and portable way for expressing parallel programs.
OpenMP is a portable and scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from desktop to supercomputers.
Implementations of both MPI and OpenMP are available for all modern computer architectures. Programs can be written in C/C++ or FORTRAN.
A large part of the course will be devoted to practical sessions in which students apply the concepts discussed in the presentations to parallelise the proposed programs.
Overview of message passing paradigm. MPI: point-to-point and collective communications, non-blocking communications, communicators and virtual topologies. Shared Memory parallel programming. OpenMP: Fork & Join model, Compiler directives, Parallel regions, Data scope, Worksharing, master and synchronization constructs, Environment variables and Runtime library routines.
Students and researchers interested in developing or optimizing parallel programs, either in shared or distributed memory computing environments.
Good knowledge of and experience with C or FORTRAN. Good experience with UNIX operating systems.