Date: 31 October 2024 @ 17:00 - 19:00

In scientific computing, Python is the most popular programming language. While known for its high-level features, hundreds of excellent libraries and ease of use, Python is slow compared with traditional (C, C++, Fortran) and newer (Julia, Chapel) compiled languages. In this course we will focus on speeding up your Python workflows using a number of different approaches.

In Part 1, we will start with traditional vectorization with NumPy, discuss Python compilers (Numba) and profiling, and cover parallelization. We will do a little multithreading (possible via NumExpr despite the global interpreter lock in Python versions before 3.13, and via free threading starting with 3.13), but will focus primarily on multiprocessing.

In Part 2, we will study Ray, a unified framework for scaling AI and Python applications. Since this is not a machine learning workshop, we will not touch Ray's AI capabilities, but will focus on its core distributed runtime and data libraries. We will learn several different approaches to parallelizing purely numerical (and therefore CPU-bound) workflows, both with and without reduction. If your code is I/O-bound, you will also benefit from this course, as I/O-bound workflows are easily handled with Ray.

We will not cover GPU-accelerated computing in Python (a topic worth its own course), nor will we cover mpi4py (the most popular MPI interface for Python).
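
To give a flavour of the Part 1 material, here is a minimal sketch (not taken from the course materials, and assuming NumPy and Numba are installed) comparing a pure-Python loop, a vectorized NumPy version, and a Numba-compiled version of the same sum-of-squares computation; the array size and function names are arbitrary choices for illustration.

    import numpy as np
    from numba import njit

    def sum_of_squares_loop(values):
        # Pure-Python loop: every iteration goes through the interpreter.
        total = 0.0
        for v in values:
            total += v * v
        return total

    def sum_of_squares_numpy(values):
        # Vectorized version: the loop runs inside compiled NumPy code.
        return float(np.sum(values * values))

    @njit
    def sum_of_squares_numba(values):
        # Same loop as above, but JIT-compiled to machine code by Numba.
        total = 0.0
        for v in values:
            total += v * v
        return total

    if __name__ == "__main__":
        x = np.random.rand(10_000_000)
        print(sum_of_squares_loop(x))   # slow: millions of interpreted iterations
        print(sum_of_squares_numpy(x))  # fast: single call into compiled code
        print(sum_of_squares_numba(x))  # fast after the first call, which triggers compilation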
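
Similarly, for Part 2, below is a minimal sketch of a CPU-bound computation with a reduction expressed as Ray remote tasks; it assumes a local Ray installation, and the chunking scheme and the partial_sum function are illustrative choices rather than course code.

    import numpy as np
    import ray

    @ray.remote
    def partial_sum(start, stop):
        # Each task runs in its own worker process, so tasks execute in parallel.
        x = np.arange(start, stop, dtype=np.float64)
        return float(np.sum(np.sqrt(x)))

    if __name__ == "__main__":
        ray.init()  # start a local Ray runtime; on a cluster this would connect to it instead
        n, chunks = 10_000_000, 8
        step = n // chunks
        # .remote() returns futures immediately; the tasks run concurrently.
        futures = [partial_sum.remote(i * step, (i + 1) * step) for i in range(chunks)]
        # Reduction step: gather the partial sums and combine them on the driver.
        total = sum(ray.get(futures))
        print(total)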

Keywords: GPU, HPC, Python, Programming, Julia

Venue: online

