CUDA complains that there is no float16_t type, yet the manual says there is!

I get this error when compiling my CUDA code:

error: identifier "float16_t" is undefined

Yet, in the CUDA manual here:

it says that the float16 data type is the optimal one to use for Compute Capability 5.3 (i.e. the Jetson Nano).

How the heck do I use float16 in my code??


The manual indicates 16-bit floating point arithmetic is available. It doesn’t say that the name of the type is float16_t.

Take a look here. And there are other forum questions discussing use of fp16 as well.

And be sure to compile for the architecture you are running on (`-arch=sm_53`).
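To make the naming point concrete, here is a minimal sketch of half-precision usage. The 16-bit type CUDA actually declares (in `cuda_fp16.h`) is `__half`, not `float16_t`; the kernel name and values below are illustrative. Compile with `nvcc -arch=sm_53` (or newer):

```cuda
// Minimal fp16 sketch: the CUDA half type is __half, declared in cuda_fp16.h.
#include <cuda_fp16.h>
#include <cstdio>

// Illustrative kernel: y[i] = a * x[i] + y[i] in half precision.
__global__ void axpy_fp16(int n, __half a, const __half *x, __half *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // __hfma is the half-precision fused multiply-add intrinsic
        y[i] = __hfma(a, x[i], y[i]);
    }
}

int main() {
    const int n = 256;
    __half *x, *y;
    cudaMallocManaged(&x, n * sizeof(__half));
    cudaMallocManaged(&y, n * sizeof(__half));
    for (int i = 0; i < n; ++i) {
        x[i] = __float2half(1.0f);  // convert float -> __half on the host
        y[i] = __float2half(2.0f);
    }
    axpy_fp16<<<1, n>>>(n, __float2half(3.0f), x, y);
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", __half2float(y[0]));
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

For throughput, `__half2` with the paired intrinsics (e.g. `__hfma2`) processes two fp16 values per instruction on sm_53-class hardware.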

Thanks a lot. If only the documents used the correct nomenclature.
