optix::Aabb for Optix 7

Good morning,

I am interested in older code I found online at: Optix-PathTracer/quad_intersect.cu at master · knightcrawler25/Optix-PathTracer · GitHub

In particular, the `RT_PROGRAM void bounds(int, float result[6])` function at line 87 (the optix::Aabb call). Is there a corresponding call like this in OptiX 7?

Thanks,

Nope, there are no bounding box programs inside OptiX 7 anymore.

There are only built-in geometric primitives like triangles and curves (since OptiX 7.1.0), which build their own axis-aligned bounding boxes (AABBs), and custom primitives for which you need to provide the pre-calculated AABBs.

That means you would normally calculate these AABBs either on the host or, much faster (recommended!), with a native CUDA kernel doing the same calculation as that former bounding box program, and pass the CUdeviceptr with the results to the OptixBuildInputCustomPrimitiveArray aabbBuffers field. (Mind the plural: motion blur requires a set of AABBs per motion key.)
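As a minimal host-side sketch, here is what such an AABB calculation could look like for a quad defined by an anchor point and two edge vectors (similar to the parallelogram in the old OptiX-PathTracer sample). The `Quad` description and function names are hypothetical; the `Aabb` struct mirrors the six-float field layout of OptixAabb. A CUDA kernel would run the same per-primitive loop in parallel and write into the buffer passed via aabbBuffers:

```cpp
#include <algorithm>
#include <cfloat>

// Mirrors the field layout of OptixAabb (six floats: minX..maxZ).
struct Aabb
{
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Hypothetical quad description: anchor point plus two edge vectors.
struct Quad
{
    float anchor[3];
    float v1[3];
    float v2[3];
};

// Host-side replacement for the old bounds() program:
// take the component-wise min/max over the four corners of the quad.
inline Aabb computeQuadAabb(const Quad& q)
{
    float mn[3] = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    float mx[3] = { -FLT_MAX, -FLT_MAX, -FLT_MAX };

    for (int i = 0; i < 4; ++i)       // the four corners
    {
        for (int c = 0; c < 3; ++c)   // x, y, z
        {
            const float p = q.anchor[c]
                          + ((i & 1) ? q.v1[c] : 0.0f)
                          + ((i & 2) ? q.v2[c] : 0.0f);
            mn[c] = std::min(mn[c], p);
            mx[c] = std::max(mx[c], p);
        }
    }
    return Aabb{ mn[0], mn[1], mn[2], mx[0], mx[1], mx[2] };
}
```

The resulting array of Aabb structs would then be copied to the device (or written there directly by a kernel) and its CUdeviceptr assigned to the build input.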

Explained here, see Listing 5.3:
https://raytracing-docs.nvidia.com/optix7/guide/index.html#acceleration_structures#acceleration-structures
https://raytracing-docs.nvidia.com/optix7/api/html/struct_optix_build_input_custom_primitive_array.html

Please search the OptiX SDK 7.2.0 source code for “customPrimitiveArray” and you’ll find various examples showing this for very simple cases, like optixWhitted, where it happens on the host inside the sphere_bound() and parallelogram_bound() functions.

Thanks @droettger. I will take a look at the OptiX SDK 7.2.0 for “customPrimitiveArray” for some simple examples.

It’s also always a good idea to search this OptiX developer forum for the topic you’re interested in.
There are multiple threads dealing with custom primitives already.

You also need to have an intersection program for these custom primitives, and the reporting looks a little different in the OptiX 7 API.
For example, you need to define how many of the maximum of 8 attribute registers your pipeline is using if it’s more than the default of 2 for the triangle barycentrics.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#device_side_functions#reporting-intersections-and-attribute-access
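To illustrate, here is a hedged sketch of how that fits together for a custom primitive. The program name `__intersection__quad` and the (u, v) attribute layout are made up for this example; optixReportIntersection() and the numAttributeValues pipeline compile option are the real API pieces:

```cuda
// Host side (sketch): declare the attribute register count for the pipeline.
//   OptixPipelineCompileOptions pco = {};
//   pco.numAttributeValues = 2;  // two registers for the (u, v) reported below

// Device side: custom intersection program for a hypothetical quad primitive.
extern "C" __global__ void __intersection__quad()
{
    // ... compute the hit distance t and the parametric (u, v) here ...
    float t = 0.0f;
    float u = 0.0f;
    float v = 0.0f;

    // Report the hit with a custom hit kind and two attribute registers.
    // Attributes are passed as unsigned ints, so floats are reinterpreted.
    // Any-hit and closest-hit programs read them back with
    // optixGetAttribute_0() / optixGetAttribute_1().
    optixReportIntersection(t, 0 /* hitKind */,
                            __float_as_uint(u),
                            __float_as_uint(v));
}
```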

You should also tell the pipeline which geometric primitive types are actually used inside the code. This is mandatory for curves. See Listing 8.1 here.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#curves#differences-between-curves-and-triangles

Given that the registers defined by optixSetPayload_0 … _7 and optixSetAttribute_0 … _7 do not overlap, is it possible to pass up to 16 32b values from one OptiX program to another?

Whoa, wait a minute, these are completely different things.

Attribute registers are only written inside intersection programs by optixReportIntersection().
There is no optixSetAttribute call, only optixGetAttribute calls.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#device_side_functions#device-side-functions

For questions about how to report intersections, go through these search results for optixReportIntersection()

Read these recent threads instead if you need a per-ray payload larger than what fits into 8 32-bit registers.
https://forums.developer.nvidia.com/t/global-payload/159415
https://forums.developer.nvidia.com/t/optic-7-passing-multiple-ray-data-to-closesthit-program/160005

Thank you @droettger for the fast response. What you’re saying makes perfect sense.

What I wrote before was pretty dumb. I was kind of hoping for a simple solution - I will think things through better next time before I embarrass myself with another dumb question.

Thank you for the links. Much appreciated.

No problem.

In case you write custom intersection programs, make sure you only produce the minimal number of attributes you need to calculate the final surface attributes of the hit, because the intersection program is the most frequently called program and it needs to be as efficient as possible.

It’s faster to calculate the dependent hit data deferred until you’ve reached the closest hit (or, more often, an any-hit program).

The minimum is two attribute registers to have the barycentric coordinates beta and gamma for built-in triangles covered. Built-in curve primitives only use one attribute register.
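A sketch of that deferral, assuming a custom primitive whose intersection program reported only a parametric (u, v) in two attribute registers (the program name and attribute layout are hypothetical):

```cuda
// Closest-hit program: derive the dependent hit data here, once,
// instead of computing it for every candidate intersection.
extern "C" __global__ void __closesthit__quad()
{
    // Fetch the two attribute registers written by optixReportIntersection().
    const float u = __uint_as_float(optixGetAttribute_0());
    const float v = __uint_as_float(optixGetAttribute_1());

    // The world-space hit point needs no attribute at all;
    // it can be reconstructed from the ray:
    //   P = optixGetWorldRayOrigin() +
    //       optixGetRayTmax() * optixGetWorldRayDirection();

    // ... interpolate normals, texture coordinates, etc. from (u, v) ...
}
```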

Is there any information regarding VBOs (from OpenGL/Cg) for OptiX 7? An array of floats such as float *vboArray, the number of vertices as int vertices, and an offset as int vboOffset, maybe stored as a single struct and copied from HOST to DEVICE? Or would one optimally use an SBT?

Thanks again for any help/hints.

If you mean how you would normally access the vertex attribute data (of which you have been using the vertex position field to build the geometry acceleration structure (GAS)), that is, accessing the positions and additional vertex attributes like tangents, normals, and texture coordinates, as well as the geometric primitive topology via indices inside the device programs, then yes, the SBT data is the place to store pointers to that.
That data can be retrieved with the optixGetSbtDataPointer() device function.

Example code here:
Declaring an SBT data structure for the hit groups
Setting the data inside the SBT data on the host
Using optixGetSbtDataPointer() to retrieve the data
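The pattern in those examples boils down to an SBT record layout like the following. The `GeometryData` contents are hypothetical for this sketch; the header size and alignment values are the OPTIX_SBT_RECORD_HEADER_SIZE (32) and OPTIX_SBT_RECORD_ALIGNMENT (16) constants from optix_types.h in OptiX 7.x, defined locally here so the snippet stands alone:

```cpp
#include <cstddef>
#include <cstdint>

// Constants from optix_types.h (OptiX 7.x).
constexpr size_t SBT_RECORD_HEADER_SIZE = 32; // OPTIX_SBT_RECORD_HEADER_SIZE
constexpr size_t SBT_RECORD_ALIGNMENT   = 16; // OPTIX_SBT_RECORD_ALIGNMENT

// Hypothetical per-hit-group data: device pointers to the vertex
// attributes and indices that were used to build the GAS.
struct GeometryData
{
    const float*    positions;
    const float*    normals;
    const float*    texcoords;
    const uint32_t* indices;
};

// An SBT hit record is the opaque header (filled on the host with
// optixSbtRecordPackHeader()) followed by the user-defined data.
struct alignas(SBT_RECORD_ALIGNMENT) HitGroupRecord
{
    char         header[SBT_RECORD_HEADER_SIZE];
    GeometryData data;
};

// Device side, inside a hit program, the data part is retrieved as:
//   const GeometryData* g =
//       reinterpret_cast<const GeometryData*>(optixGetSbtDataPointer());
```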

Vertex positions could also be read from the GAS itself, which requires OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS to be set.

If you mean how to do resource sharing of an OpenGL Vertex Buffer Object in OptiX 7, that would be done with OpenGL-CUDA interop.
If the buffer resource is created by the NVIDIA OpenGL implementation, you can access the data via the cuGraphicsGLRegisterBuffer() function (or the resp. CUDA runtime API function).

That’s the same for Pixel Buffer Objects for which OpenGL interop is shown here for example. Search that whole file for m_cudaGraphicsResource
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/src/DeviceSingleGPU.cpp#L64
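The core interop sequence looks roughly like this sketch (CUDA driver API, error checking omitted; an existing OpenGL context and a buffer created by the NVIDIA OpenGL implementation are assumed):

```cuda
#include <cuda.h>
#include <cudaGL.h> // CUDA driver API OpenGL interop

// Register once after the VBO has been created by OpenGL.
CUgraphicsResource registerVbo(unsigned int vbo /* GLuint */)
{
    CUgraphicsResource resource = nullptr;
    cuGraphicsGLRegisterBuffer(&resource, vbo,
                               CU_GRAPHICS_REGISTER_FLAGS_READ_ONLY);
    return resource;
}

// Map before each OptiX launch that reads the buffer. The returned
// pointer may differ between map calls, so refresh it every time.
CUdeviceptr mapVbo(CUgraphicsResource resource, CUstream stream)
{
    cuGraphicsMapResources(1, &resource, stream);
    CUdeviceptr d_vbo  = 0;
    size_t      nbytes = 0;
    cuGraphicsResourceGetMappedPointer(&d_vbo, &nbytes, resource);
    return d_vbo; // store this inside the SBT data or launch parameters
}

// Before using the buffer from OpenGL again:
//   cuGraphicsUnmapResources(1, &resource, stream);
```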

That could get a little involved if you need that for many VBOs.

Though note that OpenGL and CUDA have different vector type alignment restrictions!
While it’s fine to use a tightly packed structure with, for example, { float3 position; float2 texcoord; } under OpenGL, using the same data under CUDA with the same vector types will not work, because float2 needs to be 8-byte aligned and it isn’t in that structure when it comes from OpenGL.
You would need to interpret that data as individual floats and load it manually into the respective vector types on CUDA side.
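The mismatch and the manual-load fix can be demonstrated without any CUDA headers by mocking the vector types with the same size and alignment rules (float3: 12 bytes, 4-byte aligned; float2: 8 bytes, 8-byte aligned):

```cpp
#include <cstddef>

// Mocks of CUDA's vector types with matching size/alignment rules.
struct Float3 { float x, y, z; };
struct alignas(8) Float2 { float x, y; };

// What CUDA sees if you declare the struct with vector types:
// the Float2 member is padded to an 8-byte boundary.
struct VertexCuda
{
    Float3 position; // offset 0, 12 bytes
    Float2 texcoord; // offset 16, after 4 bytes of padding
};

// What OpenGL actually stores in the tightly packed VBO: 5 plain floats.
struct VertexPacked
{
    float position[3];
    float texcoord[2];
};

// Safe way to consume the packed OpenGL data on the CUDA side:
// load individual floats and assemble the vector types manually.
inline VertexCuda loadVertex(const float* vbo, size_t index)
{
    const float* v = vbo + index * 5; // 5 floats per packed vertex
    return VertexCuda{ { v[0], v[1], v[2] }, { v[3], v[4] } };
}
```

Casting the packed buffer directly to `VertexCuda*` would read the second vertex 4 bytes off, because the structs have different sizes (20 vs. 24 bytes).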

Thanks for the fast response @droettger.

Unfortunately, it looks like the latter is what I am looking for: an OpenGL Vertex Buffer Object in OptiX 7. This is from legacy code that is at least 7 years old, and I really appreciate all the information and links. I apologize for hitting the forum so much. It looks like the old code (shader side, anyway) just declared an rtBuffer<float> vbobuff at the top of the intersection program.

The question is how you set that buffer on host side. You cannot simply name it “vbo” and that’s it.
There must be quite some code on the host side which would get the device pointer from OpenGL and set it inside the old OptiX buffer variable.
The same steps would need to happen in OptiX 7, just explicitly using the cited CUDA interop functions.

If that was using rtBufferCreateFromGLBO in your old code, then all that mapping and unmapping happened behind your back inside the old API. That resource management is now your responsibility in OptiX 7.

The major hassle would be to map that shared buffer resource and upload the pointers to the proper locations.
Something like this, here for a PBO: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/src/DeviceSingleGPU.cpp#L168

Now the question is, what happens when switching between OpenGL and OptiX rendering back and forth, because I do not know what happens when using the resource in OpenGL while CUDA has it mapped. You should try that first. Hopefully OpenGL doesn’t care.
If that needs to be unmapped in CUDA to become usable in OpenGL, then you would need to update all VBO pointers on the OptiX side every time you switched APIs, because the cuGraphicsResourceGetMappedPointer() function could return a different virtual pointer every time you call it, per its documentation. (It normally doesn’t, but that’s implementation-dependent behaviour, which is always undefined.)

Anyway you have enough example code to figure out the necessary steps.
Vertex-BufferObjects and Pixel-BufferObjects are just linear memory. There should be no problem to get CUdeviceptr to them.
(BTW, FrameBuffer-Objects (note the different dash location!) are something completely different.)


You are correct. There is quite a bit of code on the HOST side, and it is going to be a long haul just going through all the files. I am currently working on the shaders, of which there are many.

Once again, thank you @droettger for walking me through much of the OptiX 7 code as I try to correlate it to what exists in OptiX 5/6. You have been invaluable in this process.