Date: 4 April 2024 @ 17:00 - 19:00

In scientific computing, Python is the most popular programming/scripting language. While known for its high-level features, hundreds of excellent libraries, and ease of use, Python is slow compared to traditional compiled languages (C, C++, Fortran) and newer ones (Julia, Chapel). In this course we'll focus on speeding up your Python workflows using several different approaches.

In Part 1 we will start with traditional vectorization with numpy, discuss Python compilers (numba) and profiling, and then move on to parallelization. We'll do a little multithreading (possible via numexpr, despite the global interpreter lock) but will target primarily multiprocessing.

In Part 2 we will study Ray, a unified framework for scaling AI and Python applications. Since this is not a machine learning workshop, we will not touch most of Ray's AI capabilities, but will focus on its core distributed runtime and data libraries. We will learn several different approaches to parallelizing purely numerical (and therefore CPU-bound) workflows, both with and without reduction. If your code is I/O-bound, you will also benefit from this course, as I/O-bound workflows can be easily sped up with Ray.
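To give a flavour of the Part 1 material, here is a minimal sketch (illustrative only, not taken from the course materials) comparing a pure-Python loop with numpy vectorization, multithreaded evaluation with numexpr, and just-in-time compilation with numba. The function names and array size are made up for the example.

```python
import numpy as np
import numexpr as ne
from numba import njit

a = np.random.rand(10_000_000)

# Pure-Python loop: every iteration goes through the interpreter
def sum_of_squares_loop(arr):
    total = 0.0
    for x in arr:
        total += x * x
    return total

# numpy vectorization: the loop runs in compiled code inside numpy
def sum_of_squares_numpy(arr):
    return np.sum(arr * arr)

# numexpr: evaluates the expression in chunks across multiple threads,
# sidestepping the global interpreter lock for this numeric kernel
def sum_of_squares_numexpr(arr):
    return ne.evaluate("sum(arr * arr)")

# numba: compiles the explicit loop to machine code on first call
@njit
def sum_of_squares_numba(arr):
    total = 0.0
    for x in arr:
        total += x * x
    return total

print(sum_of_squares_loop(a),
      sum_of_squares_numpy(a),
      sum_of_squares_numexpr(a),
      sum_of_squares_numba(a))
```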
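For Part 2, a minimal sketch of the core pattern we will build on: turning an ordinary function into a Ray remote task, launching many tasks in parallel, and reducing their results. The partial_sum function and the chunk size are illustrative assumptions, not course code.

```python
import ray

ray.init()  # start a local Ray runtime; on a cluster you would connect to its address

# A Ray remote task: each call runs as an independent unit of work
# on one of Ray's worker processes
@ray.remote
def partial_sum(start, stop):
    return sum(i * i for i in range(start, stop))

# Launch 8 tasks in parallel; .remote() returns futures immediately
chunk = 1_000_000
futures = [partial_sum.remote(i * chunk, (i + 1) * chunk) for i in range(8)]

# Reduction step: gather the partial results and combine them on the driver
total = sum(ray.get(futures))
print(total)
```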

Keywords: GPU, HPC, Python, Programming, Julia

Venue: Online
