Date: 5 December 2024 @ 18:00 - 20:00

Chapel is a parallel programming language for scientific computing designed to exploit parallelism across a wide range of hardware, from multi-core computers to large HPC clusters. Recently, Chapel introduced support for GPUs, allowing the same code to run seamlessly on both NVIDIA and AMD GPUs without modification. Programming GPUs in Chapel is significantly easier than using CUDA or ROCm/HIP, and more flexible than OpenACC, as you can run fairly generic Chapel code on GPUs. You will benefit most from GPU acceleration with calculations that can be broken into many independent, identical pieces. In Chapel, data transfer to/from a GPU (and between GPUs) is straightforward, thanks to a well-defined coding model that associates both calculations and data with a clear concept of locality.

As of this writing, on the Alliance systems you can run multi-locale (multiple-node) GPU Chapel natively on Cedar, and single-locale GPU Chapel on all other clusters with NVIDIA cards via a container. Efforts are underway to expand native GPU support to more systems.

In this course, we will learn GPU programming in Chapel through many hands-on examples. We will provide the system to run on, but to follow the exercises you will need an SSH client on your computer to connect to this system.
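For a flavour of what this programming model looks like, here is a minimal sketch (assuming a GPU-enabled Chapel build; the names n, A, and HostA are purely illustrative). An on here.gpus[0] block places both the data and the computation on the first GPU of the current node, and a whole-array assignment copies the results back to host memory:

    config const n = 1_000_000;       // problem size, settable from the command line

    var HostA: [1..n] real;           // array in host (CPU) memory

    on here.gpus[0] {                 // run this block on the first GPU of the node
      var A: [1..n] real;             // array allocated in GPU memory
      forall i in 1..n do             // order-independent loop compiles to a GPU kernel
        A[i] = sqrt(i:real);
      HostA = A;                      // bulk copy of the results back to the host
    }

    writeln("first values: ", HostA[1..5]);

If you drop the on block, the same forall loop simply runs across the CPU cores, which is part of what makes Chapel GPU code portable across hardware.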

Keywords: HPC, GPU

Venue: online
