# Selected Presentations

**M. Croci**, *Solving differential equations in reduced- and mixed-precision*, Oden Institute, Yale University and École Polytechnique Fédérale de Lausanne (2022).

The advent of machine learning has motivated a return of hardware-supported reduced-precision computing in recent years. Computations with fewer digits are faster and more memory- and energy-efficient, but careful implementation and rounding error analysis are required to ensure that sensible results are still obtained.

This talk is divided into two parts, focusing on reduced- and mixed-precision algorithms respectively. Reduced-precision algorithms compute a solution that is as accurate as the precision allows while avoiding catastrophic rounding error accumulation. Mixed-precision algorithms, on the other hand, combine low- and high-precision computations so as to benefit from the performance gains of reduced precision while retaining good accuracy.
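The distinction can be illustrated with a toy summation (an illustrative sketch, not an example from the talk, with `numpy`'s `float16` standing in for the low-precision format):

```python
import numpy as np

# Sum 10,000 terms of size 1e-4 (exact result: 1.0).
N = 10_000
terms = np.full(N, 1e-4)
exact = 1.0

# Reduced precision: every operation in float16. Once the running sum
# exceeds about 0.25, each term is smaller than half an ulp of the
# accumulator and the sum stagnates.
s_low = np.float16(0.0)
for t in terms:
    s_low = np.float16(s_low + np.float16(t))

# Mixed precision: the terms are still rounded to float16, but the
# accumulation is carried out in float64, so accuracy is retained.
s_mixed = 0.0
for t in terms:
    s_mixed += float(np.float16(t))

print(float(s_low), s_mixed)  # the reduced-precision sum is far from 1.0
```

The reduced-precision sum stalls near 0.25, while the mixed-precision sum is accurate to roughly the float16 unit roundoff.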

In the first part of the talk we study the accumulation of rounding errors in the solution of the heat equation, a proxy for parabolic PDEs, in reduced precision using round-to-nearest (RtN) and stochastic rounding (SR). We demonstrate how to implement the numerical scheme so as to reduce rounding errors, and we present *a priori* estimates for the local and global rounding errors. While RtN leads to rapid rounding error accumulation and stagnation, SR yields much more robust implementations for which the error remains at roughly the level of the working precision.
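The two rounding modes can be sketched with a scalar toy model of time-stepping (illustrative only; the talk's experiments concern the heat equation, not this toy): repeatedly add an increment smaller than half the spacing of a made-up low-precision grid. RtN stagnates immediately, while SR is correct in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
ulp = 2.0**-10  # grid spacing of a made-up low-precision format

def round_rtn(x):
    # Deterministic round-to-nearest onto the grid.
    return np.round(x / ulp) * ulp

def round_sr(x):
    # Stochastic rounding: round up with probability equal to the
    # fractional distance to the next grid point, so the rounding
    # error is zero in expectation.
    q = x / ulp
    frac = q - np.floor(q)
    return (np.floor(q) + (rng.random() < frac)) * ulp

h = 1e-4  # increment well below half the grid spacing
s_rtn = s_sr = 0.0
for _ in range(10_000):
    s_rtn = round_rtn(s_rtn + h)  # stagnates at 0: h always rounds away
    s_sr = round_sr(s_sr + h)     # unbiased, so the sum keeps growing

exact = 10_000 * h  # 1.0
print(s_rtn, s_sr)
```

The RtN sum never leaves zero, whereas the SR sum lands close to the exact value, mirroring the stagnation-versus-robustness behaviour described above.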

In the second part of the talk we focus on mixed-precision explicit stabilised Runge-Kutta methods. We show that a naive mixed-precision implementation harms convergence and leads to error stagnation, and we present a more accurate alternative. We introduce new Runge-Kutta-Chebyshev schemes that use only $q\in\{1,2\}$ high-precision function evaluations to achieve a limiting convergence order of $O(\Delta t^{q})$, leaving the remaining evaluations in low precision. These methods are essentially as cheap as their fully low-precision equivalents, and they are as accurate and (almost) as stable as their high-precision counterparts.
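The general principle behind such mixed-precision time-steppers can be sketched on a toy problem (a hypothetical illustration using Heun's method on $y' = -y$, not the Runge-Kutta-Chebyshev schemes from the talk): carrying the state itself in low precision makes the update stagnate, whereas keeping the state and the update in high precision while evaluating the right-hand side in low precision retains accuracy near the low-precision unit roundoff.

```python
import numpy as np

def f(y):
    return -y  # test problem y' = -y, exact solution y(t) = exp(-t)

fp16 = np.float16  # stand-in for the low-precision format

def heun_naive(y0, dt, n):
    # Naive approach: state and all arithmetic in low precision. For small
    # dt the increment falls below half an ulp of y and the solution stalls.
    y = fp16(y0)
    for _ in range(n):
        k1 = fp16(f(y))
        k2 = fp16(f(fp16(y + fp16(dt) * k1)))
        y = fp16(y + fp16(0.5 * dt) * (k1 + k2))
    return float(y)

def heun_mixed(y0, dt, n):
    # Mixed approach: function evaluations in low precision, but the state
    # and the update are accumulated in float64.
    y = float(y0)
    for _ in range(n):
        k1 = float(fp16(f(fp16(y))))
        k2 = float(fp16(f(fp16(y + dt * k1))))
        y = y + 0.5 * dt * (k1 + k2)
    return y

dt, n = 0.0002, 5000  # integrate to T = 1
exact = np.exp(-1.0)
print(abs(heun_naive(1.0, dt, n) - exact))  # large: the update stagnates
print(abs(heun_mixed(1.0, dt, n) - exact))  # small: near the fp16 roundoff
```

In this sketch both variants spend their function evaluations in low precision; only where the results are combined differs, which is the sense in which mixed-precision schemes can be "essentially as cheap" as fully low-precision ones.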