Can I use half float (16-bit) in OptiX?

I’m new to OptiX and CUDA. When I try to use float16 in OptiX by including “cuda_fp16.h”, something unusual happens: NVCC doesn’t give me any error, but when I call “createProgramFromPTXString” to create the program, I get the error shown below:

Parse error (Details: Function "_rtProgramCreateFromPTXString" caught exception: (api input string): error: Failed to parse input PTX string
(api input string), line 434; error   : Incorrect type for operation in instruction 'mul'
Cannot parse input PTX string
)

My test code is here:

float sample_importance = 5.f;
const float inv_sqrt_samples = 1.f / __half2float(__hmul(__float2half(1.f), __float2half(sample_importance)));

If I change the code to " 1.f / __half2float(__float2half(sample_importance)); " then everything works fine.

My compiler options are:

const char *compiler_options[] =
	{
		"-arch",
		"compute_61",
		"-use_fast_math",
		"-default-device",
		"-rdc",
		"true",
		"-D__x86_64",
		0
	};

Thanks so much.

I would not recommend using half-precision arithmetic in OptiX device code. Simply use float.
Even if the OptiX PTX parser handled it, you might still get varying performance across GPUs due to different levels of hardware support for half-precision floating-point arithmetic.
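Applied to the snippet from the question, the all-float version would simply be:

```cpp
// All-float version of the snippet from the question: no fp16
// intrinsics, so the generated PTX contains only a plain 'mul.f32',
// which the OptiX PTX parser accepts without trouble.
float sample_importance = 5.f;
const float inv_sqrt_samples = 1.f / (1.f * sample_importance);
```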

You could still use half data in input/output buffers, though, to save memory bandwidth.
Unfortunately, cuda_fp16.h doesn’t define a half4 type, which would have been convenient when using RT_FORMAT_HALF4 in OptiX buffers, so you need to define your own data structure.
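For illustration, here is one way such a structure could look. This is only a sketch: the struct name half4 and the helpers float_to_half_bits / half_bits_to_float are made up for this post, and the components are stored as raw uint16_t bits so the 8-byte layout can be shown (and tested) on the host. In device code you would declare the fields as __half and use the __float2half / __half2float intrinsics instead, and subnormal halves are flushed to zero here for brevity.

```cpp
#include <cstdint>
#include <cstring>

// cuda_fp16.h defines __half and __half2 but no 4-component type, so an
// 8-byte struct matching the RT_FORMAT_HALF4 element layout is hand-rolled.
struct half4 { uint16_t x, y, z, w; };

// Host-side float -> IEEE 754 binary16 bits, round to nearest even.
// Subnormal results are flushed to zero to keep the sketch short.
static uint16_t float_to_half_bits(float f) {
    uint32_t x; std::memcpy(&x, &f, sizeof x);
    uint32_t sign = (x >> 16) & 0x8000u;
    int      e    = int((x >> 23) & 0xFFu) - 127 + 15;      // rebias exponent
    uint32_t mant = x & 0x7FFFFFu;
    if (e >= 0x1F) return uint16_t(sign | 0x7C00u | (mant ? 0x200u : 0u)); // Inf/NaN
    if (e <= 0)    return uint16_t(sign);                    // flush to signed zero
    uint16_t h   = uint16_t(sign | (uint32_t(e) << 10) | (mant >> 13));
    uint32_t rem = mant & 0x1FFFu;                           // dropped bits
    if (rem > 0x1000u || (rem == 0x1000u && (h & 1))) ++h;   // round to nearest even
    return h;
}

// Host-side binary16 bits -> float (subnormal inputs read as zero).
static float half_bits_to_float(uint16_t h) {
    uint32_t sign = uint32_t(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = uint32_t(h & 0x3FFu) << 13;
    uint32_t out;
    if (exp == 0x1F)   out = sign | 0x7F800000u | mant;      // Inf/NaN
    else if (exp == 0) out = sign;                           // zero / flushed subnormal
    else               out = sign | ((exp + 112u) << 23) | mant;
    float f; std::memcpy(&f, &out, sizeof f);
    return f;
}

static half4 make_half4(float a, float b, float c, float d) {
    return half4{ float_to_half_bits(a), float_to_half_bits(b),
                  float_to_half_bits(c), float_to_half_bits(d) };
}
```

An array of such structs then matches the 8 bytes per element that a RT_FORMAT_HALF4 buffer stores, and the conversions happen only at the buffer boundary while all arithmetic stays in float.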

If you’re new to OptiX, please have a look at the OptiX introduction GTC 2018 presentation and the example source code:
[url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]

Thanks a lot for your reply!