DLFP8Types.jl
Provides the 8-bit floating-point (FP8) types proposed in "FP8 Formats for Deep Learning" (Float8_E4M3FN, Float8_E5M2) and "8-bit Numerical Formats for Deep Neural Networks" (Float8_E4M3FNUZ, Float8_E5M2FNUZ). Intended mainly for handling data stored in these formats: all floating-point arithmetic is performed in Float32, and results are converted back to FP8.
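To illustrate the bit layout behind these types, here is a minimal, hedged sketch in plain Julia that decodes a Float8_E4M3FN byte (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits; no infinities, a single NaN pattern) into a Float32. The function name `e4m3fn_to_float32` is hypothetical and is not part of the DLFP8Types.jl API; it only demonstrates the format.

```julia
# Sketch only: decode one Float8_E4M3FN byte into a Float32.
# Layout: S EEEE MMM, exponent bias 7, no Inf, NaN = exponent and
# mantissa all ones (0x7f / 0xff).
function e4m3fn_to_float32(b::UInt8)
    s = (b >> 7) & 0x1          # sign bit
    e = (b >> 3) & 0xf          # 4-bit exponent field
    m = b & 0x7                 # 3-bit mantissa field
    if e == 0xf && m == 0x7
        return NaN32            # E4M3FN reserves only this NaN pattern
    end
    bias = 7
    mag = e == 0 ?
        Float32(m) * 2f0^(1 - bias - 3) :                # subnormal
        (1f0 + Float32(m) / 8f0) * 2f0^(Int(e) - bias)   # normal
    return s == 0x1 ? -mag : mag
end

e4m3fn_to_float32(0x38)  # exponent field 7, mantissa 0 → 1.0f0
e4m3fn_to_float32(0x7e)  # largest finite value → 448.0f0
```

The same scheme, with 5 exponent bits (bias 15) and 2 mantissa bits, gives Float8_E5M2, which does encode infinities like IEEE 754.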
Languages: Julia 100.0%
Latest release: v0.1.0 (February 12, 2024), MIT License
Created: February 8, 2024
Updated: October 16, 2025