Timothy Mattson to Give Workshops on CPU and GPU Parallel Programming

Timothy Mattson will visit us during the week of March 24-28, 2025. In addition to his invited talk at our celebration of Marvin's first anniversary on March 25, 2025, he will give three workshops on March 27 and 28 (see below).
Timothy Mattson is an Honorary Professor of the University of Bristol and is regarded as one of the important co-founders of OpenMP (an API supporting multi-platform shared-memory multiprocessing programming). During his time at Intel, he also worked on the further development of MPI, PyOMP, and GraphBLAS. He has written over 150 publications, including six books on parallel computing. During his visit to Bonn University, he will give three workshops on various aspects of parallel programming and HPC.
Short CV of Timothy Mattson (provided by him):
Tim Mattson is a parallel programmer obsessed with every variety of science (Ph.D. Chemistry, UCSC, 1985). In 2023 he retired after a 45-year career in HPC (30 years of which were with Intel). He has had the privilege of working with people much smarter than himself on great projects, including: (1) the first TFLOP computer (ASCI Red), (2) parallel programming languages (Linda, Strand, MPI, OpenMP, OpenCL, OCR, and PyOMP), (3) two different research processors (Intel's TFLOP chip and the 48-core SCC), (4) data management systems (polystore systems and array-based storage engines), and (5) the GraphBLAS API for expressing graph algorithms as sparse linear algebra. Tim has over 150 publications, including six books on different aspects of parallel computing.
He is also a recently retired kayak coach and instructor trainer (ACA certified). His obsession with sea kayaking, including “self-wetting” moments in the ocean, is pretty bad.
For more information about him, see the Homepage of Timothy Mattson.
Workshops with Timothy Mattson organized by HPC/A Lab and HPC@HRZ
An Introduction to Parallel Programming with OpenMP
Thursday, 27 March 2025, 9:00 am - 5:00 pm

Using OpenMP to program GPUs
Friday, 28 March 2025, 9:00 am - 3:00 pm

Floating Point arithmetic is not real. And your random numbers aren't random.
Friday, 28 March 2025, 3:30 pm - 5:00 pm
For detailed information on the workshops, see below.
An Introduction to Parallel Programming with OpenMP
27 March 2025, 9:00 am - 5:00 pm, Poppelsdorf Campus
This is a full-day workshop that introduces parallel programming. It is a hands-on workshop, meaning most of the learning happens through exercises. We use OpenMP to write parallel code for the cores inside a CPU. Even if your goal is scaling across a full cluster, OpenMP is the right place to start. Why? The amount you need to learn before writing code is minimal. That means less time spent listening to lectures about parallelism and more time actually writing parallel code. While the emphasis is on programming the cores of a CPU, we will cover many of the core concepts you need for programming clusters and GPUs.
Using OpenMP to program GPUs
28 March 2025, 9:00 am - 3:00 pm, Poppelsdorf Campus
The future is built around clusters with heterogeneous nodes composed of CPUs and GPUs. The smart programmer wants to use all the hardware on a node. OpenMP is one of the few (if not the only) ways to program both CPUs and GPUs from within a single programming model. In this hands-on workshop, we will cover GPU programming with OpenMP. I assume you have experience programming CPUs with OpenMP (for example, from my prior workshop that introduces OpenMP) but no experience with GPU programming. Running from morning to mid-afternoon (three quarters of a day), we'll cover everything you need to write well-optimized code for a GPU. We will focus particularly on understanding GPU programming models in general (i.e., what you need to make sense of CUDA, OpenACC, or OpenMP) and how they compare to CPU programming.
Floating Point arithmetic is not real. And your random numbers aren’t random.
28 March 2025, 3:30 pm - 5:00 pm, Poppelsdorf Campus
This is a 90-minute workshop that covers the key topics EVERY scientist MUST know about numbers on computers. In particular, we cover the fundamental issues raised by the IEEE-754 standard and what you need to know to avoid writing programs that cause disasters. We then talk about how to safely use random numbers on a computer with a particular focus on how random numbers fall apart on parallel computers.
Invited Talk for Marvin's Birthday
25 March 2025, Poppelsdorf Campus
Tim Mattson is also the invited speaker at our celebration of Marvin's first anniversary on March 25, 2025. Here are the title and abstract:
The Hitchhiker's Guide to the Future of HPC: Processors, People, and Programming
Hardware trends are clear. Driven by economics and the need to deliver increasing performance within a fixed power budget, computer systems are becoming increasingly complex. This complexity is managed directly by software; hence the need for programmers with a detailed understanding of computer architecture.
Unfortunately, programmers today are trained with programming languages that hide the hardware. You can't specialize an algorithm to hardware features if there is a virtual machine between your code and the system, or if you program in an interpreted language (such as Python).
How are we going to bridge this disconnect between our processors, the people who write our software, and the programming languages they use? We must fundamentally change how we construct software. We must automate key steps in software development, using machine learning and AI technologies to map code onto the details of different systems.
In this talk, after describing the fundamentals of hardware evolution, we'll explore research to automate key aspects of software development. We will describe successes and reasons for hope, but also fundamental challenges that limit the applicability of AI to address this problem.