Hi, why do you believe that bfloat16 is not supported? Can you please provide some references (specifically the part about the hardware "doesn't do it")?
For the hardware you are focusing on (gfx11), the reference manual [2] and the list of supported LLVM gfx11 instructions [1] both describe the bfloat16 vdot & WMMA operations, and these are in fact implemented and working in software such as composable kernels and rocBLAS, which I have used (and can guarantee are not simply being run as float). I've also used them in the AMD fork of llm.c [3].
Outside of gfx11, I have also used bfloat16 on CDNA2 & CDNA3 devices, where it likewise works and is supported.
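To make that concrete, here is roughly what the bf16 WMMA path looks like from HIP on gfx11. This is a minimal sketch, not a tuned kernel: the per-lane fragment layout is deliberately glossed over (see the ISA guide [2] for the real mapping), and the builtin is the one ROCm's clang exposes for gfx1100-class parts.

    #include <hip/hip_runtime.h>

    // bf16 operands travel as raw 16-bit patterns in i16 lanes; accumulation is f32.
    typedef short bf16x16 __attribute__((ext_vector_type(16)));
    typedef float floatx8 __attribute__((ext_vector_type(8)));

    // One wave32 computes a 16x16x16 tile. Launch with blockDim.x == 32.
    __global__ void wmma_bf16_demo(const short* a, const short* b, float* d) {
        bf16x16 va, vb;
        floatx8 acc = {};                   // start the f32 accumulator at zero
        const int lane = threadIdx.x % 16;  // both half-waves load the same fragments
        for (int i = 0; i < 16; ++i) {
            va[i] = a[lane * 16 + i];       // illustrative loads; the real fragment
            vb[i] = b[lane * 16 + i];       // layout is described in the ISA guide [2]
        }
        // Lowers to v_wmma_f32_16x16x16_bf16 on gfx11 -- an actual bf16 dot
        // product in hardware, not an fp32 fallback.
        acc = __builtin_amdgcn_wmma_f32_16x16x16_bf16_w32(va, vb, acc);
        for (int i = 0; i < 8; ++i)
            d[threadIdx.x * 8 + i] = acc[i];  // dump the raw result fragment
    }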
Regarding cublasLt, what is your plan for support there? Pass everything through to hipblasLt (hipify style) or something else?

Cheers, -A

[1] https://llvm.org/docs/AMDGPU/AMDGPUAsmGFX11.html
[2] https://www.amd.com/content/dam/amd/en/documents/radeon-tech...
[3] http://github.com/anthonix/llm.c
> Hi, why do you believe that bfloat16 is not supported?
Apologies, I appear to be talking nonsense. I conflated bfloat16 with nvidia's other wacky floating point formats. This is probably my cue to stop answering reddit/HN comments and go to bed. :D
So, ahem: bfloat16 support is basically just missing the fairly boring header.
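For the curious, the header in question is mostly bit-fiddling of this flavour. This is a hypothetical sketch of an __nv_bfloat16 lookalike, not our actual code, and it omits NaN handling; the conversion is the standard "top 16 bits of a binary32, rounded to nearest even" trick.

    #include <cstdint>
    #include <cstring>

    // bf16 is just the top 16 bits of an IEEE-754 binary32.
    struct __nv_bfloat16 {
        uint16_t x;
        __nv_bfloat16() = default;
        explicit __nv_bfloat16(float f) {
            uint32_t bits;
            std::memcpy(&bits, &f, sizeof bits);
            bits += 0x7FFFu + ((bits >> 16) & 1u);  // round to nearest, ties to even
            x = static_cast<uint16_t>(bits >> 16);  // sign + exponent + top 7 mantissa bits
        }
        explicit operator float() const {
            uint32_t bits = static_cast<uint32_t>(x) << 16;  // zero-fill the mantissa
            float f;
            std::memcpy(&f, &bits, sizeof f);
            return f;
        }
    };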
> Regarding cublasLt, what is your plan for support there? Pass everything through to hipblasLt (hipify style) or something else?
Pretty much that, yes. Not much point reimplementing all the math libraries when AMD is already doing that part of the legwork.
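i.e. shims along these lines. A sketch only, assuming the shim supplies the cublasLt type and enum definitions itself; the real layering has more to it, but the essence is that hipBLASLt's handles and status codes mirror their cuBLAS counterparts closely enough to alias and forward.

    #include <hipblaslt/hipblaslt.h>

    // Alias the cuBLASLt-facing types onto their hipBLASLt equivalents,
    // then forward each entry point straight through.
    using cublasStatus_t   = hipblasStatus_t;
    using cublasLtHandle_t = hipblasLtHandle_t;

    extern "C" cublasStatus_t cublasLtCreate(cublasLtHandle_t* handle) {
        return hipblasLtCreate(handle);  // pass straight through to hipblasLt
    }

    extern "C" cublasStatus_t cublasLtDestroy(cublasLtHandle_t handle) {
        return hipblasLtDestroy(handle);
    }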
OK, so in the case of llm.c, if you're just including the HIP headers, using hipblasLt, etc., what would be the benefit of using SCALE instead of hipify?