CLBlast cannot produce FP16 tuning results on NVIDIA GPUs such as the RTX 4090 on Windows

Dear NVIDIA experts,
I would like to report a potential problem with the OpenCL Windows driver on many NVIDIA GPUs.
Recently, my students and I have been trying to tune an OpenCL library called CLBlast to help the community speed up AI workloads and more. However, the CLBlast tuner reports that the device does not support -precision 16.
We know that NVIDIA has upgraded the OpenCL compiler to support the cl_khr_fp16 extension, as noted in section 2.8 of 535.98-win11-win10-release-notes.pdf and in previous release notes.
Still, we could not produce any tuning results for FP16.
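
For context, the tuner's refusal is consistent with the driver simply not advertising the extension. Here is an illustrative sketch (pure Python, hypothetical function name) of the kind of check a tuner can perform before compiling half-precision kernels: look for the cl_khr_fp16 token in the device's extension string, which is the same list that clinfo prints further below.

```python
def supports_fp16(device_extensions: str) -> bool:
    """Return True if the cl_khr_fp16 token appears in an OpenCL extension string."""
    return "cl_khr_fp16" in device_extensions.split()

# Abbreviated extension list as reported by the clinfo output below;
# note that cl_khr_fp64 is present but cl_khr_fp16 is not.
nvidia_extensions = (
    "cl_khr_global_int32_base_atomics cl_khr_fp64 "
    "cl_khr_byte_addressable_store cl_khr_icd cl_nv_compiler_options"
)

print(supports_fp16(nvidia_extensions))  # False: without cl_khr_fp16, -precision 16 is rejected
```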

To reproduce the issue:

  1. Go to “Release CLBlast 1.6.0 · CNugteren/CLBlast · GitHub” and download CLBlast-1.6.0-windows-x64.7z

  2. Copy clblast.dll from the lib directory to the bin directory

  3. Execute any tuner, such as “clblast_tuner_xgemm.exe -precision 16”, from the command line (Command Prompt, not PowerShell).

In addition, the following are all the test cases, in case you would like to run them all (you can put the commands into a .bat file):

Best wishes,

Jinchuan Tang

Contents of the .bat file:

clblast_tuner_copy_fast.exe -precision 32
clblast_tuner_copy_fast.exe -precision 64
clblast_tuner_copy_fast.exe -precision 3232
clblast_tuner_copy_fast.exe -precision 6464
clblast_tuner_copy_fast.exe -precision 16
clblast_tuner_copy_pad.exe -precision 32
clblast_tuner_copy_pad.exe -precision 64
clblast_tuner_copy_pad.exe -precision 3232
clblast_tuner_copy_pad.exe -precision 6464
clblast_tuner_copy_pad.exe -precision 16
clblast_tuner_transpose_fast.exe -precision 32
clblast_tuner_transpose_fast.exe -precision 64
clblast_tuner_transpose_fast.exe -precision 3232
clblast_tuner_transpose_fast.exe -precision 6464
clblast_tuner_transpose_fast.exe -precision 16
clblast_tuner_transpose_pad.exe -precision 32
clblast_tuner_transpose_pad.exe -precision 64
clblast_tuner_transpose_pad.exe -precision 3232
clblast_tuner_transpose_pad.exe -precision 6464
clblast_tuner_transpose_pad.exe -precision 16
clblast_tuner_xaxpy.exe -precision 32
clblast_tuner_xaxpy.exe -precision 64
clblast_tuner_xaxpy.exe -precision 3232
clblast_tuner_xaxpy.exe -precision 6464
clblast_tuner_xaxpy.exe -precision 16
clblast_tuner_xdot.exe -precision 32
clblast_tuner_xdot.exe -precision 64
clblast_tuner_xdot.exe -precision 3232
clblast_tuner_xdot.exe -precision 6464
clblast_tuner_xdot.exe -precision 16
clblast_tuner_xger.exe -precision 32
clblast_tuner_xger.exe -precision 64
clblast_tuner_xger.exe -precision 3232
clblast_tuner_xger.exe -precision 6464
clblast_tuner_xger.exe -precision 16
clblast_tuner_xgemm.exe -precision 32
clblast_tuner_xgemm.exe -precision 64
clblast_tuner_xgemm.exe -precision 3232
clblast_tuner_xgemm.exe -precision 6464
clblast_tuner_xgemm.exe -precision 16
clblast_tuner_xgemm_direct.exe -precision 32
clblast_tuner_xgemm_direct.exe -precision 64
clblast_tuner_xgemm_direct.exe -precision 3232
clblast_tuner_xgemm_direct.exe -precision 6464
clblast_tuner_xgemm_direct.exe -precision 16
clblast_tuner_xgemv.exe -precision 32
clblast_tuner_xgemv.exe -precision 64
clblast_tuner_xgemv.exe -precision 3232
clblast_tuner_xgemv.exe -precision 6464
clblast_tuner_xgemv.exe -precision 16
clblast_tuner_invert.exe -precision 32
clblast_tuner_invert.exe -precision 64
clblast_tuner_invert.exe -precision 3232
clblast_tuner_invert.exe -precision 6464
clblast_tuner_invert.exe -precision 16
clblast_tuner_routine_xgemm.exe -precision 32
clblast_tuner_routine_xgemm.exe -precision 64
clblast_tuner_routine_xgemm.exe -precision 3232
clblast_tuner_routine_xgemm.exe -precision 6464
clblast_tuner_routine_xgemm.exe -precision 16
clblast_tuner_routine_xtrsv.exe -precision 32
clblast_tuner_routine_xtrsv.exe -precision 64
clblast_tuner_routine_xtrsv.exe -precision 3232
clblast_tuner_routine_xtrsv.exe -precision 6464
clblast_tuner_routine_xtrsv.exe -precision 16

Also, clinfo shows the same problem (no FP16 available) on Ubuntu with an RTX 3080 Laptop GPU:

(base) lgpt@lgpt-Precision-7560:~$ clinfo
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 3.0 CUDA 12.0.151
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
Platform Extensions with Version cl_khr_global_int32_base_atomics 0x400000 (1.0.0)
cl_khr_global_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_local_int32_base_atomics 0x400000 (1.0.0)
cl_khr_local_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_fp64 0x400000 (1.0.0)
cl_khr_3d_image_writes 0x400000 (1.0.0)
cl_khr_byte_addressable_store 0x400000 (1.0.0)
cl_khr_icd 0x400000 (1.0.0)
cl_khr_gl_sharing 0x400000 (1.0.0)
cl_nv_compiler_options 0x400000 (1.0.0)
cl_nv_device_attribute_query 0x400000 (1.0.0)
cl_nv_pragma_unroll 0x400000 (1.0.0)
cl_nv_copy_opts 0x400000 (1.0.0)
cl_nv_create_buffer 0x400000 (1.0.0)
cl_khr_int64_base_atomics 0x400000 (1.0.0)
cl_khr_int64_extended_atomics 0x400000 (1.0.0)
cl_khr_device_uuid 0x400000 (1.0.0)
cl_khr_pci_bus_info 0x400000 (1.0.0)
cl_khr_external_semaphore 0x9000 (0.9.0)
cl_khr_external_memory 0x9000 (0.9.0)
cl_khr_external_semaphore_opaque_fd 0x9000 (0.9.0)
cl_khr_external_memory_opaque_fd 0x9000 (0.9.0)
Platform Numeric Version 0xc00000 (3.0.0)
Platform Extensions function suffix NV
Platform Host timer resolution 0ns
Platform External memory handle types Opaque FD
Platform External semaphore import types Opaque FD
Platform External semaphore export types Opaque FD

Platform Name NVIDIA CUDA
Number of devices 1
Device Name NVIDIA GeForce RTX 3080 Laptop GPU
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 3.0 CUDA
Device UUID 5c120e9c-9523-d810-c2c3-524b79718c2c
Driver UUID 5c120e9c-9523-d810-c2c3-524b79718c2c
Valid Device LUID No
Device LUID 6d69-637300000000
Device Node Mask 0
Device Numeric Version 0xc00000 (3.0.0)
Driver Version 525.105.17
Device OpenCL C Version OpenCL C 1.2
Device OpenCL C all versions OpenCL C 0x400000 (1.0.0)
OpenCL C 0x401000 (1.1.0)
OpenCL C 0x402000 (1.2.0)
OpenCL C 0xc00000 (3.0.0)
Device OpenCL C features __opencl_c_fp64 0xc00000 (3.0.0)
__opencl_c_images 0xc00000 (3.0.0)
__opencl_c_int64 0xc00000 (3.0.0)
__opencl_c_3d_image_writes 0xc00000 (3.0.0)
Latest conformance test passed v2022-10-05-00
Device Type GPU
Device Topology (NV) PCI-E, 0000:01:00.0
Device PCI bus info (KHR) PCI-E, 0000:01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 48
Max clock frequency 1365MHz
Compute Capability (NV) 8.6
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple (device) 32
Preferred work group size multiple (kernel) 32
Warp size (NV) 32
Max sub-groups per work group 0
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
External memory handle types Opaque FD
External semaphore import types Opaque FD
External semaphore export types Opaque FD
Global memory size 16899571712 (15.74GiB)
Error Correction support No
Max memory allocation 4224892928 (3.935GiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing No
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Atomic memory capabilities relaxed, work-group scope
Atomic fence capabilities relaxed, acquire/release, work-group scope
Max size for global variable 0
Preferred total size of global vars 0
Global Memory cache type Read/Write
Global Memory cache size 1376256 (1.312MiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 268435456 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 0 bytes
Pitch alignment for 2D image buffers 0 pixels
Max 2D image size 32768x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 32
Max number of read/write image args 0
Pipe support No
Max number of pipe args 0
Max active pipe reservations 0
Max pipe packet size 0
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Generic address space support No
Max size of kernel argument 4352 (4.25KiB)
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Device enqueue capabilities (n/a)
Queue properties (on device)
Out-of-order execution No
Profiling No
Preferred size 0
Max size 0
Max queues on device 0
Max events on device 0
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Non-uniform work-groups No
Work-group collective functions No
Sub-group independent forward progress No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
IL version (n/a)
ILs with version (n/a)
printf() buffer size 1048576 (1024KiB)
Built-in kernels (n/a)
Built-in kernels with version (n/a)
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
Device Extensions with Version cl_khr_global_int32_base_atomics 0x400000 (1.0.0)
cl_khr_global_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_local_int32_base_atomics 0x400000 (1.0.0)
cl_khr_local_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_fp64 0x400000 (1.0.0)
cl_khr_3d_image_writes 0x400000 (1.0.0)
cl_khr_byte_addressable_store 0x400000 (1.0.0)
cl_khr_icd 0x400000 (1.0.0)
cl_khr_gl_sharing 0x400000 (1.0.0)
cl_nv_compiler_options 0x400000 (1.0.0)
cl_nv_device_attribute_query 0x400000 (1.0.0)
cl_nv_pragma_unroll 0x400000 (1.0.0)
cl_nv_copy_opts 0x400000 (1.0.0)
cl_nv_create_buffer 0x400000 (1.0.0)
cl_khr_int64_base_atomics 0x400000 (1.0.0)
cl_khr_int64_extended_atomics 0x400000 (1.0.0)
cl_khr_device_uuid 0x400000 (1.0.0)
cl_khr_pci_bus_info 0x400000 (1.0.0)
cl_khr_external_semaphore 0x9000 (0.9.0)
cl_khr_external_memory 0x9000 (0.9.0)
cl_khr_external_semaphore_opaque_fd 0x9000 (0.9.0)
cl_khr_external_memory_opaque_fd 0x9000 (0.9.0)

NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, …) NVIDIA CUDA
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, …) Success [NV]
clCreateContext(NULL, …) [default] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) Invalid device type for platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform

ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.3.1
ICD loader Profile OpenCL 3.0
(base) lgpt@lgpt-Precision-7560:~$ nvidia-smi
Wed Jun 7 17:10:37 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce … Off | 00000000:01:00.0 Off | N/A |
| N/A 36C P0 N/A / 55W | 6MiB / 16384MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1138 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
(base) lgpt@lgpt-Precision-7560:~$

Hi @tjcgy, welcome to the NVIDIA developer forums.

I don’t think I can add much more than what Robert already posted as a reply to the related post over in the CUDA category.

The release notes specify that the NVVM compiler was upgraded and allows use of the cl_khr_fp16 extension.

But actual support for that feature on specific hardware will vary.
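
In practice, that means an application should query the device's extension string at runtime and fall back when cl_khr_fp16 is absent. A minimal sketch of such a fallback (pure Python for illustration; the helper name is hypothetical, and the precision codes mirror CLBlast's -precision flags, 16 = half and 32 = single):

```python
def pick_precision(device_extensions: str, requested: int = 16) -> int:
    """Prefer the requested precision, but fall back from FP16 to FP32
    when the driver does not advertise the cl_khr_fp16 extension."""
    have_fp16 = "cl_khr_fp16" in device_extensions.split()
    if requested == 16 and not have_fp16:
        return 32  # FP16 not advertised by this driver: fall back to FP32
    return requested

print(pick_precision("cl_khr_fp64 cl_khr_icd"))   # 32 on drivers without cl_khr_fp16
print(pick_precision("cl_khr_fp16 cl_khr_fp64"))  # 16 when the extension is present
```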

I hope that helps clarify the situation.

Thanks!

Dear Markus,

Thank you very much for your fast response.
I will try to file a bug report, since currently all the consumer cards I know of do not expose the FP16 flag. The RTX 4090 is one example: New tuning results · Issue #1 · CNugteren/CLBlast · GitHub

Best wishes,
Jinchuan Tang