Basic question - world and object coordinates

Can anyone tell me what the difference between a world coordinate and an object coordinate is with regard to OptiX? I think that an object coordinate is a local coordinate for a given object and a world coordinate is the same coordinate transformed to relate to the entire scene, but I am unsure.

Sorry for such a basic question, but I am relatively new to graphics programming.

Thanks

The object coordinate space is your original vertex attribute data without any transformations applied.

The world space is where the vertex attributes end up after transforming them with the matrix which represents the concatenation of the current transformation hierarchy above an instance of that object. That means the object-to-world matrix you get from combining all matrices of all instances, static or motion transform handles, above that current geometry.
(Food for thought: Not starting the optixTrace() at the same top level traversable handle all the time means the world coordinate space can change. It’s defined by the current transformation hierarchy in all cases.)

Now in OptiX:

  • If you’re only using a single GAS level there is no transform hierarchy, which means the “transform” is the identity and the object coordinate space is identical to the world coordinate space.
  • The same is true if all instance matrices are the identity inside an IAS->GAS hierarchy. That can actually happen if all parts of a model are defined in world space but you would still like to move individual parts at some point. I have seen CAD data doing this.
  • For the other cases, the interesting question is: “In what coordinate space is my ray?” and the answer differs among device program domains.
    While the current ray is in world coordinate space inside the ray generation, closest-hit and miss programs, it is in object coordinate space for the intersection and any-hit programs.
    There is a table inside the OptiX 6.5.0 Programming Guide that explains it.
    Same in OptiX 7, but be aware of common pitfalls inside intersection programs due to the coordinate spaces: https://forums.developer.nvidia.com/t/objects-appearing-in-the-wrong-order-after-scaling/83884/7
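To illustrate the difference in OptiX 7 device code, here is a minimal sketch (the program names and the shading details are hypothetical) of which query functions return which space:

#include <optix.h>

// Inside ray generation, closest-hit and miss programs the current ray is in world space:
extern "C" __global__ void __closesthit__example()
{
    const float3 rayOriginWorld    = optixGetWorldRayOrigin();
    const float3 rayDirectionWorld = optixGetWorldRayDirection();
    // Attributes computed from the vertex data (e.g. a geometric normal) are in object space
    // and would need optixTransformNormalFromObjectToWorldSpace() before shading in world space.
}

// Inside intersection (and any-hit) programs the current ray is in object space:
extern "C" __global__ void __intersection__example()
{
    const float3 rayOriginObject    = optixGetObjectRayOrigin();
    const float3 rayDirectionObject = optixGetObjectRayDirection();
    // Intersect against the untransformed, object-space primitive data here.
}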

Thanks again @droettger - great information as always.

I apologize for asking this again about the SBT versus a launch parameter structure that is passed to a shader, but I am not 100% sure I understand it - sorry.

I have textures and array(s) that I will pass to a given shader (i.e. I know which shader will be employing the particular launch parameter), although there could be a number of these launch parameters in a given structure. Is the major reason for choosing an SBT that you don’t know exactly which shader will get particular data, or is it a matter of available memory?

Thanks again.

Not sure I understand the question.

Let’s start with the terminology. The launch parameter block is a single structure in constant memory per optixLaunch call.
The size of available constant memory is limited! Means you cannot have arbitrarily many fields in that structure.
But if you store CUdeviceptr to global memory in there, that indirection allows using as much memory as the GPU can address.

That launch parameter block doesn’t need to be passed to shaders. That block of constant memory is accessible in all OptiX device modules directly if the necessary extern declaration is present.
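As a minimal sketch (the struct name and its fields are hypothetical; the variable name just needs to match OptixPipelineCompileOptions::pipelineLaunchParamsVariableName), this is all that is needed to make the block visible in a module:

#include <optix.h>
#include <cuda_runtime.h>

struct LaunchParams
{
    float4*                outputBuffer; // pointer into global memory, not the data itself
    unsigned int           width;
    unsigned int           height;
    OptixTraversableHandle topObject;
};

// Declared once per OptiX device module; no explicit passing to shaders is required.
extern "C" __constant__ LaunchParams theLaunchParams;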

Everything else depends on the connection between your texture objects and data arrays to the shaders.
To answer that, it depends on how the material system works. For example:
1.) Are the texture objects and data arrays constant per shader?
2.) Are different instances using the same shader but with different texture objects and data arrays?

The SBT is very flexible. Possible layouts for the cases above would be:

  • For 1: The SBT contains as many hit group records as there are different shaders.
    The instance sbtOffset defines which shader is applied to which instance.
    Each shader knows exactly (offset hardcoded inside the code) where its texture objects and data arrays are stored inside global memory pointed to by CUdeviceptrs inside the launch parameters.

  • For 1: Same thing as above but instead of hardcoded offsets the texture objects and CUdeviceptr to the data arrays are stored inside additional data fields of the SBT record, behind the 32 bytes of the shader header. That would allow changing these parameters between launches without recompiling the shaders.

  • For 2: The SBT contains as many hit group records as there are different shaders.
    The instance sbtOffset defines which shader is applied to which instance.
    The instanceId defines which particular texture object and data arrays are used inside the shader.
    (This would require less memory than the following case.)

  • For 2: The SBT contains as many hit group records as there are instances.
    Means the shader header is set per instance, and the texture objects and data arrays are stored in the additional data of the hit group record. The benefit would be that assigning different shaders wouldn’t require rebuilding the IAS because the sbtOffset doesn’t change, and the instanceId isn’t used, or rather could be used for something else. (See the sketch below for what such a record could look like.)

SBT entries are comparably small. They are at least OPTIX_SBT_RECORD_HEADER_SIZE (32) bytes for the shader header with an alignment of OPTIX_SBT_RECORD_ALIGNMENT (16).

There are more options (e.g. parameter indexing could even be per primitive per instance, per ray, etc.), but you get the idea.
It’s all your choice and depends on how you need to connect data to shaders.
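For illustration, here is a minimal sketch (all names are hypothetical) of a hit group record that carries its parameters behind the 32-byte shader header, matching the second and fourth layouts above:

#include <optix.h>
#include <cuda.h>
#include <cuda_runtime.h>

struct HitGroupData
{
    cudaTextureObject_t albedoTexture; // texture object used by the hit programs
    CUdeviceptr         attributes;    // per-vertex data in global memory
    int                 materialIndex; // optional index into a global material buffer
};

struct HitGroupRecord
{
    __align__(OPTIX_SBT_RECORD_ALIGNMENT) char header[OPTIX_SBT_RECORD_HEADER_SIZE];
    HitGroupData data;
};

// Host side: pack the header per record with optixSbtRecordPackHeader(), fill the data
// fields, upload the array of records, and point OptixShaderBindingTable::hitgroupRecordBase
// at it with hitgroupRecordStrideInBytes = sizeof(HitGroupRecord).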

Thanks again for the reply @droettger - I apologize for the unclear question.

Perhaps some background will help. I am converting old OptiX 6.5 code to OptiX 7.x, and there were several cases where the older code got variables passed in via rtDeclareVariable and rtBuffer (at the top of the file with the defined shader, such as the ray generation program). I have been humming along converting these older passed-in variables to launch parameter structures. However, these shaders (of which there will be a single pipeline/launch for each) started to get more involved variables passed in, such as textures and VBO arrays, and I began to have serious doubts about my approach.

SBTs are looking like a better approach - however, being relatively new to the OptiX game, I am looking for a veteran’s opinion. Also, did OptiX provide SBTs in older versions such as OptiX 6.5?

Thank you again for the massive assist.

You need to keep in mind that in the OptiX 1 - 6 versions, all rtDeclareVariable declarations have specific scopes and a lookup order.
Means you could actually declare the variable at different OptiX objects (program, material, geometry, geometry instance, context) at the same time and they shadowed each other in a very specific lookup order, which allowed some type of hierarchy of values. Though it was good practice to only declare variables in exactly one scope for performance reasons.
See Table 6 here: https://raytracing-docs.nvidia.com/optix6/guide_6_5/index.html#programs#program-variable-scoping

That concept doesn’t exist in OptiX 7 versions anymore. There everything is explicit.

Variables which were inside the context scope in former OptiX versions, normally go into a single launch parameter struct in OptiX 7.
Everything else either needs to be indexed somehow, for example via the instances’ instanceId or via other indices stored inside the SBT records, or the values are stored directly in the SBT records.
Maybe think of the different SBT layouts I described above to be similar to program or geometry or geometry instance scope. Pick what matches your use case best.

Now I understand your other question about having multiple launch parameter blocks. With that background, I’d say that doesn’t make sense and you should use a different approach. Think about buffers (CUdeviceptr in your launch parameters or the SBT records) holding the data, and index into these.
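A minimal sketch (hypothetical names) of that approach, indexing one global material parameter buffer with the instanceId from a closest-hit program:

#include <optix.h>
#include <cuda_runtime.h>

struct MaterialParameters
{
    cudaTextureObject_t albedo;
    float               roughness;
};

struct LaunchParams
{
    MaterialParameters* materialParameters; // one entry per instance, in global memory
    // ... everything else that was in context scope before
};

extern "C" __constant__ LaunchParams theLaunchParams;

extern "C" __global__ void __closesthit__radiance()
{
    // The instanceId set on the OptixInstance is used as an index into the buffer.
    const unsigned int id = optixGetInstanceId();
    const MaterialParameters& mat = theLaunchParams.materialParameters[id];
    // ... sample mat.albedo and use mat.roughness for shading
}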

In the end, all memory management is your responsibility in OptiX 7, which includes the design of how you want to store and access what kind of data.

Former versions of OptiX did not offer a Shader Binding Table. That API was not explicit like in OptiX 7.


Thank you for the response and information.

After reading the information you provided, the best approach may be to go with SBT. This requires a lot of re-writes but it’s better to discover this sooner rather than later :)

Thanks again for the assist.

I noticed that a lot of OptiX 7 examples have the output buffer (float4*) stored in the launch parameter structure - is there a danger with this?

Thanks,

That is not the buffer itself, that is just a pointer to global memory.
You could also write that as CUdeviceptr and reinterpret that.
(I use that in my OptiX 7 samples when switching between float4 and half4 formats.)
Both are just 64-bit values and need to be aligned to 8 bytes.

That’s the only way this works. The launch parameter structure itself lies in constant memory, which means you cannot write to it from device code. You need that indirection to output anything.
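A minimal sketch, reusing the hypothetical LaunchParams/theLaunchParams declaration from above:

extern "C" __global__ void __raygen__renderFrame()
{
    const uint3 launchIndex = optixGetLaunchIndex();
    const unsigned int pixel = launchIndex.y * theLaunchParams.width + launchIndex.x;

    // theLaunchParams itself is read-only constant memory, but the buffer it points to
    // is ordinary global memory and can be written.
    theLaunchParams.outputBuffer[pixel] = make_float4(0.0f, 0.0f, 0.0f, 1.0f);
}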


That makes perfect sense. I feel so stupid now.

Thanks for the help.

Another dumb question.

Would it be in bad OptiX form to use the same SBT for a ‘hit’ program as for another program such as ‘ray generation’?

For example, in a file called mySbtRecords.h I have an SBT defined as such:

template <typename T>
struct SbtRecord
{
    __align__(OPTIX_SBT_RECORD_ALIGNMENT) char header[OPTIX_SBT_RECORD_HEADER_SIZE];
    T data;
};

Within this same file I have defined a structure called RadianceData:
struct RadianceData { float value; };

Now inside my Ray Generation program shader, rayGen.cu, I get the SBT as follows:

extern "C" __global__ void __raygen__generateRays() {
    const RadianceData* rtData = (RadianceData*)optixGetSbtDataPointer();
    const float value = rtData->value;

And inside my ClosestHit program shader, closestHit.cu, I get the SBT record as follows:

extern "C" __global__ void __closesthit__intensity() {
    const RadianceData* rtData = (RadianceData*)optixGetSbtDataPointer();
    const float value = rtData->value;

Would there be an issue with using an SBT in this manner? I know that this could be done with a launch parameter structure passed but I would like to know if this can be better accomplished with SBT.

Thanks again, and sorry for the long (and probably dumb) question

Edit:

I think I just found the answer - sorry. However, if I am incorrect about passing the same SBT to different programs please let me know.

Thanks again for the assists. Your advice has been incredibly helpful and I really appreciate it.

Let’s clear up the terminology again. That template is the declaration of the SbtRecord struct.
The SBT holds multiple SbtRecords and they can be different SbtRecord types per record group inside the OptixShaderBindingTable structure:
https://raytracing-docs.nvidia.com/optix7/api/html/struct_optix_shader_binding_table.html

For example, I use two different SbtRecord structures because only the hit group records contain additional data.
All my other SBT record groups only need the 32-byte shader header in my examples. Means they are not using optixGetSbtDataPointer().
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/inc/Device.h#L191

Your code excerpts should also have the actual SbtRecord declaration using that template, or none of that will work:
typedef SbtRecord<RadianceData> SbtRecordRadianceData;
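On the host side, a minimal sketch of filling such records before uploading them into the SBT (the program group handles are hypothetical and assumed to have been created via optixProgramGroupCreate() beforehand):

OptixProgramGroup raygenProgramGroup;     // assumed created elsewhere
OptixProgramGroup closesthitProgramGroup; // assumed created elsewhere

SbtRecordRadianceData raygenRecord;
optixSbtRecordPackHeader(raygenProgramGroup, &raygenRecord);
raygenRecord.data.value = 1.0f;

SbtRecordRadianceData hitRecord;
optixSbtRecordPackHeader(closesthitProgramGroup, &hitRecord);
hitRecord.data.value = 0.5f;

// Copy both records to device memory, then set sbt.raygenRecord and sbt.hitgroupRecordBase
// (with hitgroupRecordStrideInBytes = sizeof(SbtRecordRadianceData)) before optixLaunch.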

You normally don’t need SBT record data on the ray generation program, because there can only be one, and that data can also be stored inside the launch parameters since those can change often (camera, output buffer, resolution, samples per pixel, sub-frame index, etc.).

Would there be an issue with using an SBT in this manner?

No, that is exactly how you’re supposed to use the SBT record data.

I know that this could be done with a launch parameter structure passed but I would like to know if this can be better accomplished with SBT.

Please forget the idea of different launch parameter structures. There can be only one constant launch parameter block per optixLaunch(). But that can of course contain pointers to buffers of whatever data you need at whatever frequency. You only need to be able to index into these buffers somehow with the instanceId, the SBT hit record data, the primitiveIndex, or whatever fits your application needs.

I would normally not hardcode the offsets to material parameters data into shader code because I would want to be able to exchange materials on instances. That was just one simple example. The other methods I described are more flexible if that is required.

The SBT can get even more complicated than that because you can have multiple SBT records per GAS as well, for different build inputs.
Please read the chapter about the SBT for as long as you need to understand its different options.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#shader_binding_table#shader-binding-table

If you need to port from OptiX 6.5.0 to OptiX 7.x you should have read and understood both OptiX Programming Guides.

Thank you @droettger for the information - very helpful.

Sorry for the constant questions. I think the problem is that all the older OptiX code is jammed into a single directory without any defined structure. There appear to be multiple ray tracing shaders that make up many different ray tracing pipelines - for example, I have multiple ray generation shaders, multiple hit shaders, etc. The names of the files don’t appear to help either (e.g. one ray generation shader is called Ray.cu) - there is no coherent naming convention to help, and the person who designed it is no longer working here. This is making it difficult to define the SBTs and launch parameters.

I do however, appreciate your patience and assistance.

I would start with the ray generation programs.

The old OptiX API allows multiple ray generation programs in a “pipeline” (not an explicit object in that API) between which you can switch by the entry_point argument in the rtContextLaunch calls.

Means that would directly match OptiX 7: you can have multiple ray generation programs inside an OptixPipeline.
But you cannot switch with some entry_point index inside the optixLaunch call anymore, because that takes the OptixPipeline and the SBT. Means the SBT’s single raygenRecord defines the raygen entry point.
That can either be done by having different SBTs per raygen program, by manually exchanging the CUdeviceptr raygenRecord, or even by exchanging the raygenRecord’s 32-byte shader header between launches.
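A minimal sketch of the second option (the record pointers, pipeline, stream and launch parameter handles are hypothetical and assumed to have been set up beforehand):

CUdeviceptr d_raygenRecordA; // pre-packed raygen record A, already in device memory
CUdeviceptr d_raygenRecordB; // pre-packed raygen record B, already in device memory

// Switch the entry point by pointing sbt.raygenRecord at a different record per launch.
sbt.raygenRecord = d_raygenRecordA;
optixLaunch(pipeline, stream, d_launchParams, sizeof(LaunchParams), &sbt, width, height, 1);

sbt.raygenRecord = d_raygenRecordB;
optixLaunch(pipeline, stream, d_launchParams, sizeof(LaunchParams), &sbt, width, height, 1);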

Explanation and potential caveats in the links of this post:
https://forums.developer.nvidia.com/t/multiple-raygen-functions-within-same-pipeline-in-optix-7/122305
Read especially this one! https://forums.developer.nvidia.com/t/how-to-handle-multiple-ray-generators/83446

What approach makes more sense depends on how the other device programs are used inside the application.
You would need to know exactly which programs can be reached by what ray generation program and ray type to understand the shader hierarchy.

If different raygen programs can reach completely different hit and miss shaders then having different pipelines and SBT per raygen entry point would match better.

If there are just different raygen programs but the hit and miss shaders are mostly reused, then they could be all put in one pipeline and then you could either build different SBTs or exchange the raygen record or shader header to switch entry points as said above.
It all depends on how the application structured the materials, raytypes and variables.

The only things which could make the port complicated are if the old programs used multiple materials per geometry instance, Selector nodes, or a lot of variable scope shadowing.
None of these were recommended, so your chances are good to have a rather straightforward port to OptiX 7.
The multiple materials case can be identified by looking at the rtReportIntersection argument. If that is not always zero, that is the case, but it is also possible to unfold that. (E.g. the material system in my OptiX 7 examples could even have different materials per ray if I wanted. It’s all defined by a single material index.)


Cool.
Thanks again @droettger .