Hi,
I’ve been going through the OptiX SDK examples, and they have been very helpful for learning about ray tracing. I was modifying the code in optixRaycasting to render a depth image, but I found some truncation/rounding errors: all the pixels in the rendered image had integral values.
On closer inspection, it seems that the __closesthit__buffer_hit() function assigns the return value of optixGetRayTmax() to a const unsigned int. I wanted to ask what the intention is behind using an unsigned int here, because after changing it to a float I get depth values that aren’t rounded/truncated to the nearest integer.
Thanks!