456-0324/01 – Parallel Programming (PPA)
Guarantor department | Department of Computer Science | Credits | 4 |
Subject guarantor | RNDr. Ondřej Jakl, CSc. | Subject version guarantor | RNDr. Ondřej Jakl, CSc. |
Study level | undergraduate or graduate | Requirement | Optional |
Year | 1 | Semester | winter |
Study language | Czech |
Year of introduction | 2003/2004 | Year of cancellation | 2009/2010 |
Intended for the faculties | FEI | Intended for study types | Follow-up Master |
Subject aims expressed by acquired skills and competences
Survey in the field of design, realization and evaluation of parallel algorithms. Practical experience in parallel programming of selected multiprocessor computer systems.
Working knowledge in the field of parallel systems and their programming, including:
Design of parallel algorithms, parallelization of sequential algorithms.
Practical realization of a parallel algorithm based on the message passing or shared variables models.
Analysis of the algorithm and evaluation of its implementation. Optimization and efficiency improvement.
Teaching methods
Lectures
Tutorials
Project work
Other activities
Summary
The course provides students with working knowledge of parallel systems, algorithms and programming. It concentrates on the practical development of parallel codes, so that students can make effective use of modern powerful hardware, from supercomputers to multicore notebooks, to solve demanding tasks from various application fields. The focus is on so-called multicomputers, where parallel processes interact via message passing, but the specifics of shared-memory systems (symmetric multiprocessors) are discussed as well. Exercises are devoted to the design of parallel algorithms and their implementation in the PVM, MPI or OpenMP environments, or possibly in parallel Matlab, on the most powerful computer systems of VŠB-TUO/CIT, nowadays on the Ultra network of workstations with remote access to the 32-processor cluster Termit and to the four-processor symmetric multiprocessor Quad.
Compulsory literature:
I. Foster: Designing and Building Parallel Programs. Addison-Wesley, 1995.
C. Lin, L. Snyder: Principles of Parallel Programming. Pearson, 2009
Al Geist et al.: PVM: Parallel Virtual Machine. A User's Guide and Tutorial for Networked Parallel Computing. The MIT Press, 1994.
MPI: A Message-Passing Interface Standard. Message Passing Interface Forum, 2009.
Recommended literature:
B. Wilkinson, M. Allen: Parallel Programming. Prentice Hall, 1999.
R. Chandra et al.: Parallel Programming in OpenMP. Morgan Kaufmann Publishers, 2001.
Way of continuous check of knowledge in the course of semester
E-learning
Other requirements
Prerequisites
The subject has no prerequisites.
Co-requisites
The subject has no co-requisites.
Subject syllabus:
Lectures:
Introduction.
Motivation, historical remarks, actual trends. Relation to High Performance Computing.
Area of interest. Assumed knowledge.
Parallel computer systems.
Flynn's classification and memory-related categories. Used terminology. Review of architectures. Examples
of supercomputers. Clusters.
Interconnection subsystems.
Technical solutions (bus, hypercube, multistage network, etc.). Consequences for the design
of parallel algorithms.
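The hypercube topology mentioned above can be made concrete: in a d-dimensional hypercube, each node is addressed by a d-bit number, and its d neighbours are the nodes whose addresses differ in exactly one bit. A minimal Python sketch (the helper name is ours, for illustration only, not part of any course tool):

```python
def hypercube_neighbors(node, dim):
    """Neighbours of `node` in a dim-dimensional hypercube:
    flipping each of the dim address bits yields one neighbour."""
    return [node ^ (1 << bit) for bit in range(dim)]
```

For example, in a 3-dimensional hypercube node 5 (binary 101) is directly connected to nodes 4, 7 and 1, which is why routing between any two of the 2^d nodes takes at most d hops.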
Introduction to parallel programming.
Parallel programming models (overview): message passing, shared variables, data parallel approach.
Examples. Implementation tools. Building a parallel application.
Message passing model.
Principles and characteristics. Types of communication, basic concepts. Implementation conditions.
Message passing systems, major representatives.
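The send/receive style of the message passing model can be sketched without PVM or MPI themselves; here Python threads and queues stand in for processes and communication channels (a conceptual illustration only; the course environments are C-based):

```python
import queue
import threading

def worker(inbox, outbox):
    # "Receive" a message (blocking, in the spirit of pvm_recv/MPI_Recv),
    # process it, and "send" the result back (pvm_send/MPI_Send).
    n = inbox.get()
    outbox.put(n * n)

def ping_pong(n):
    """Master sends n to a worker and receives n*n back over two channels."""
    to_worker = queue.Queue()
    from_worker = queue.Queue()
    t = threading.Thread(target=worker, args=(to_worker, from_worker))
    t.start()
    to_worker.put(n)            # send
    result = from_worker.get()  # blocking receive
    t.join()
    return result
```

The key characteristic of the model is visible even in this toy: the two sides share no variables and interact only through explicit messages.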
Parallel Virtual Machine.
Introduction to PVM. Historical remarks. Components and user's interface. Available implementations
(Termit, IBM SP). Illustrative example.
Parallel Virtual Machine (continued).
Overview of constructs and library routines: process control, information retrieval, message passing,
etc. Collective communication. Advanced techniques.
Development of parallel algorithms.
Domain, functional decomposition. Communication analysis. Agglomeration. Mapping onto processors.
Load-balancing techniques. Examples, perfectly parallel problems.
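Block-style domain decomposition, the first step above, can be sketched as a small index computation: each process owns a contiguous block of the data, with the remainder spread over the lowest-ranked processes (an illustrative Python helper of our own, not a PVM/MPI routine):

```python
def block_range(n, p, rank):
    """Half-open index range [lo, hi) of the block owned by process `rank`
    when n items are distributed as evenly as possible over p processes."""
    base, extra = divmod(n, p)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi
```

For n = 10 items and p = 3 processes this yields blocks of sizes 4, 3 and 3, so no process owns more than one extra item, which is the load-balancing property the decomposition aims for.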
Message Passing Interface.
Development of the MPI standard. Comparison with PVM, specific features (derived datatypes,
virtual topologies, etc.).
Analysis of parallel algorithms.
Performance models, metrics. Parallel execution time. Speedup, efficiency, cost.
Superlinear speedup. Amdahl's law. Experimental studies.
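The metrics listed above reduce to short formulas: speedup S = T_serial / T_parallel, efficiency E = S / p, cost C = p * T_parallel, and Amdahl's law bounds S by 1 / (f + (1 - f)/p) for serial fraction f. A small illustrative sketch (function names are ours):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_serial / T_parallel."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E = S / p, i.e. average utilization per processor."""
    return speedup(t_serial, t_parallel) / p

def cost(t_parallel, p):
    """Cost C = p * T_parallel, the total processor time consumed."""
    return p * t_parallel

def amdahl(f, p):
    """Amdahl's law: speedup bound with serial fraction f on p processors."""
    return 1.0 / (f + (1.0 - f) / p)
```

Note the bound amdahl(f, p) approaches 1/f as p grows: with a 10 % serial fraction, no number of processors yields more than tenfold speedup.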
Analysis of parallel algorithms (continued).
Scalability, fixed and variable problem size. Asymptotic analysis.
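For variable (scaled) problem size, the usual counterpart to Amdahl's law is the scaled-speedup formula commonly attributed to Gustafson; the syllabus does not name it, so this is a supplementary sketch under that assumption:

```python
def scaled_speedup(f, p):
    """Scaled speedup when the problem grows with machine size:
    with serial fraction f measured on the parallel system,
    S_scaled = p - f * (p - 1)."""
    return p - f * (p - 1)
```

Unlike the fixed-size bound, this grows almost linearly in p, which is why scalability analysis distinguishes fixed and variable problem sizes.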
Programming of symmetric processors.
Technical conditions. Low-level tools, operating system bindings. Thread-based programming.
The OpenMP standard.
Motivation and historical remarks. Constructs (parallel regions, load-balancing directives,
synchronization, locking...). Example.
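An OpenMP parallel-for distributes loop iterations over a team of threads. As a conceptual analogue only (the real constructs are C/Fortran compiler directives, not shown here), a Python thread-pool version of the same idea:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(func, items, workers=4):
    # Conceptual analogue of an OpenMP "parallel for": iterations are
    # handed out to a team of worker threads, and the gathered results
    # preserve the original loop order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))
```

The shared-variables character of the model is the point: unlike message passing, the threads all see the same address space and only the loop iterations are divided.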
Selected parallel algorithms.
Numerical algorithms, sorting, graph algorithms and other applications. Summary and trends.
Alternative tools for the development of parallel applications (ADA, High Performance Fortran, etc.).
Developments in hardware. Parallel programming on personal computers.
Computer labs:
Tutorial organization. "Parallel thinking".
Working environment.
Multiprocessor Quad and cluster Termit (Ultra).
Building parallel codes in PVM.
Simple programs under PVM.
Domain decomposition.
Domain decomposition (cont.).
Functional decomposition.
Collective communication in PVM. XPVM.
Analysis of algorithms.
MPI.
MPICH, LAM implementations.
MPI (cont.).
Collective communication.
Advanced techniques.
Presentations.
Projects.
Conditions for subject completion
Conditions for completion are defined only for a particular subject version and form of study.
Occurrence in study plans
Occurrence in special blocks
Assessment of instruction