THIS COURSE HAS BEEN CANCELLED DUE TO THE COVID-19 EMERGENCY PLAN.
Coordinating teacher: A. Marani
The aim of this course is to give an introduction to parallel programming for the shared-memory and message-passing paradigms. The basic functionalities of two of the most widely used parallel programming tools are presented: the MPI (Message Passing Interface) library for distributed architectures and the OpenMP system for shared-memory and multicore architectures.
MPI is a message-passing library specification that provides a powerful and portable way to express parallel programs.
OpenMP is a portable and scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications on platforms ranging from desktops to supercomputers.
Implementations of both MPI and OpenMP are available for all modern computer architectures. Programs can be written in C/C++ or FORTRAN.
A large part of the course will be devoted to practical sessions in which students apply the concepts discussed in the presentations to parallelise the proposed programs.
By the end of the course the student will be able to:
- understand distributed-memory parallel programming
- manage communications in MPI
- use MPI processor groups and topologies
- understand shared-memory parallel programming
- use OpenMP compiler directives, parallel regions, data scoping, and worksharing constructs
- explain the OpenMP fork & join model
- use OpenMP environment variables and the runtime library
Target audience: students and researchers interested in developing or optimising parallel programs, in either shared- or distributed-memory computing environments.
Prerequisites: good knowledge of and experience with C or FORTRAN, and good experience with UNIX operating systems.