optixRaycasting SDK Example uses int for ray t

Hi,

I’ve been going through the OptiX SDK examples and they have been very helpful for learning about ray tracing. I was modifying the code in optixRaycasting to render a depth image, but ran into truncation/rounding errors: all the pixels in the rendered image had integral values.

On closer inspection, it seems that the __closesthit__buffer_hit() function assigns the output of optixGetRayTmax() to a const unsigned int. I wanted to ask about the intention behind using an int here, because after changing it to a float I get depth values that aren’t rounded/truncated to the nearest integer.
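
For context, this is roughly the line I mean (paraphrased from the sample, so the exact surrounding code may differ slightly from the SDK source):

```
// Inside __closesthit__buffer_hit() in optixRaycasting.cu (approximate excerpt):
const unsigned int t = optixGetRayTmax();  // optixGetRayTmax() returns a float,
                                           // so this conversion truncates the hit distance
```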

Thanks!

Yeah, that’s just wrong and must be const float t instead.
It’s even assumed to be a float in the later __float_as_uint(t) reinterpretation of the 32 bits.
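
In other words, that part of the closest-hit program should look roughly like this. This is just a sketch of the intended fix, not the verbatim SDK code, and the payload call is my shorthand for wherever the reinterpreted bits end up; the key point is that __float_as_uint() expects a float argument:

```
// Sketch of the fix inside __closesthit__buffer_hit() (approximate):
const float t = optixGetRayTmax();          // the ray tmax at the closest hit is a float distance

// A 32-bit payload register carries the float via a bit-level reinterpretation,
// which only makes sense when t really is a float.
optixSetPayload_0( __float_as_uint( t ) );
```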

I’ll file a bug report to have it changed in the next OptiX SDK release.