OptiX Hit.t data represented as unsigned int

I have performed ray tracing using OptiX 8.0 on mesh data. The intersection output is represented as a float, but it is always truncated down to the nearest integer (as if a floor function were applied). This is the “t” value in the “Hit” data structure.

Is there a way to run the ray tracing so that the full float data is preserved for the hits (not truncated)?

Hi @alex.chisholm777, welcome!

The “t” parameter you get out of optixGetRayTmax() is always a full fp32 float. If you store it in your payload values, which are typed as unsigned int, it is easy to accidentally value-cast from float to unsigned int, which produces exactly the kind of truncation you’re seeing. Make sure you use the functions __float_as_uint() and __uint_as_float() to reinterpret between float and unsigned int without affecting the bits.
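For example, here’s a minimal sketch (program names and structure are illustrative, not SDK code) of a bit-preserving round trip through payload register 0:

extern "C" __global__ void __closesthit__example()
{
    // Reinterpret the float's bits as an unsigned int; no value conversion.
    optixSetPayload_0( __float_as_uint( optixGetRayTmax() ) );
}

// Back in the ray-generation program, after optixTrace() has written the
// payload register p0, recover the exact float:
//     const float t = __uint_as_float( p0 );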

For a complete working example of this, see the SDK sample optixPathTracer; in particular, examine the functions storeClosesthitRadiancePRD() and traceRadiance().


David.

@dhart Thank you for the response! However, I think my application is slightly different from what you mentioned above.

In particular, I am looking at the “optixRaycasting.cpp” sample code from the OptiX 8.0 installation.

See the below screenshot of the “Hit” structure:
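It is essentially the following (the field after “t” is from memory and may not match the header exactly):

struct Hit
{
    float  t;            // hit distance along the ray, declared as a float
    float3 geom_normal;  // assumed second field; normal at the hit point
};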

“t” is represented as a float there. When I copy this data back to the CPU, the results become truncated. Consider the below screenshot:

When populating the “hits” vector here (copying back from the GPU), all the “t” values are truncated floats. Is there a way to get a true floating-point representation?
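For reference, the copy-back amounts to roughly this (illustrative names; d_hits is the device buffer the launch writes into):

#include <vector>
#include <cuda_runtime.h>

std::vector<Hit> hits( numRays );
cudaMemcpy( hits.data(), d_hits, numRays * sizeof( Hit ),
            cudaMemcpyDeviceToHost );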

The out-of-the-box sample data “DuckHole.gltf” also produces truncated floats when copying the hit data back to the CPU.

In the below screenshot of “__raygen_from_buffer()”, “t” is defined as a uint, but it still gets truncated.

Thanks again for your help!

Alex

Hey Alex,

cudaMemcpy does not truncate floats (it knows nothing about floats), so we can rule that one out. There must be a type-casting problem, a struct mismatch, or something else elsewhere in the code, and there isn’t enough information yet to see where it might be occurring. Check carefully every point at which your t values are touched. This is a common issue when t values are transferred into and out of the payload, since that necessarily involves a reinterpret cast using __float_as_uint(), which is why I assumed that might be your problem; but it could just as easily be happening elsewhere in your host code, or even during the printout of the values.
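To illustrate the difference (device-side code, since __float_as_uint() is a CUDA device intrinsic):

const float t = 3.75f;

unsigned int wrong = (unsigned int) t;        // value conversion: wrong == 3
unsigned int bits  = __float_as_uint( t );    // bit reinterpretation: exact
float        back  = __uint_as_float( bits ); // back == 3.75f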

If after triple-checking every contact point of your floats it’s still mysterious, maybe you can put together a complete minimal reproducer against optixPathTracer or any of our samples, and send the code directly or via GitHub?


David.

@dhart I think I found where the issue is coming from. Consider the below screenshot:

This is the original source from the OptiX 8.0 installation, in “optixRaycasting.cu”.

The “t” value gets implicitly converted to an unsigned int by this line: “const unsigned int t = optixGetRayTmax();”.

I am using “__closesthit__buffer_hit()” to set up the program groups for the ray tracing.

Setting the payload as follows fixes the problem (“t” comes back as a true float):

“optixSetPayload_0( __float_as_uint( optixGetRayTmax()) );”
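So, side by side:

// OptiX 8.0 sample (buggy): the initializer performs an implicit
// float -> unsigned int value conversion, truncating t.
const unsigned int t = optixGetRayTmax();

// Fixed: reinterpret the float's bits instead of converting its value.
optixSetPayload_0( __float_as_uint( optixGetRayTmax() ) );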

Is this an issue with the OptiX source code? Or is there something else I am missing?

Thanks!

Aha, yes, it is definitely a bug in the optixRaycasting sample in OptiX 8.0. Sorry for the mistake and the confusion. It has been fixed internally, just not released yet; the correction will ship in the next release, OptiX 8.1. The new version reads:
const float t = optixGetRayTmax();


David.

@dhart Thanks so much for confirming this! Very helpful.

Is there a time estimate for when OptiX 8.1 will be released?

Any day now, it’s overdue. ;)


David.

@dhart Thanks!

One more question, if that’s OK!

I have a laptop with a GeForce GTX 1060 GPU, driver version 560.76, and CUDA 12.6 installed. OptiX 8.0 runs fine on that system.

However, I have another desktop computer with a Quadro K5200, driver version 475.14, which doesn’t seem to be compatible with CUDA 12.6 (I’m getting a CUDA driver error when running OptiX).

Is the Quadro K5200 not compatible with CUDA 12.6?

Yes, questions are invited here! ;)

CUDA 12 requires a minimum compute capability (CC) of 5.0, and the K5200 has a CC of 3.5 [1], so yes, I believe CUDA 12 has dropped support for Kepler and only supports Maxwell and later GPUs. Additionally, be aware that OptiX 8.0 requires a 535 driver or higher [2], so 475 won’t work either.

[1] CUDA GPUs - Compute Capability | NVIDIA Developer
[2] OptiX 8.0 Release Notes
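
If you ever want to check a GPU’s compute capability programmatically, a quick standalone query (a sketch using the CUDA runtime API) looks like this:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties( &prop, 0 );  // query device 0
    std::printf( "%s: compute capability %d.%d\n",
                 prop.name, prop.major, prop.minor );
    // CUDA 12.x requires compute capability 5.0 (Maxwell) or newer.
    return 0;
}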

Thanks!