[OptiX 7.0] Fastest way to reflect a ray

Hey!
Using OptiX 7.0, CUDA 10.1, driver 441.66 and VS 2019.

Just a quick question, for which I didn’t find a satisfying answer on the forum.

What is the fastest way to reflect a ray in the closest hit (ch) program?
If I am not mistaken, the only way to get the intersection point is the following:

float3 preDir = optixGetWorldRayDirection();
float3 P = optixGetWorldRayOrigin() + preDir * optixGetRayTmax();

When it comes to reflecting the ray I am not so sure. I don’t know which is the most efficient way to calculate the normal.

    float3 vertices[3];
    optixGetTriangleVertexData(optixGetGASTraversableHandle(),
        nPrim,
        optixGetSbtGASIndex(), // SBT GAS index, not the instance index.
        0.0f,                  // time
        vertices);
    vertices[1] -= vertices[0];
    vertices[2] -= vertices[0];
    // Geometric normal in object space.
    vertices[0] = normalize(cross(vertices[1], vertices[2]));

I then get my normal in object space. Now I don’t fully grasp the difference between:

optixTransformVectorFromObjectToWorldSpace(float3 vec) and
optixTransformNormalFromObjectToWorldSpace(float3 normal)

Which function should I use for my transformation?

Thank you for your help!

Please have a look into my OptiX 7 examples.
Those use only one closest hit program and have a specular BRDF sampling routine using a reflect() function.
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/shaders/closesthit.cu#L126
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/shaders/bxdf_specular.cu#L94

Taking these and removing everything you’re not interested in would leave you with something like this:

Not compiled, just to show what remains when reflecting the ray direction at the face normal.
Note that the reflect() function doesn’t care on which side of the surface the face normal lies; the reflection will be the same.
Normally you would first flip the tangent space to the side of the surface from which the ray arrives.
This uses a manual matrix transformation (because this was planned to be done in other places as well, which wouldn’t have access to the transform hierarchy), but it should answer your question.
Normals need to be multiplied with the inverse transpose of the objectToWorld matrix, so the code uses worldToObject, which is the inverse, and multiplies transposed.
Tangent and bitangent are transformed with transformVector() using the objectToWorld matrix!

// InverseMatrix3x4^T * normal. v.w == 0.0f
// Takes the inverse matrix as input and applies it as inverse transpose.
__forceinline__ __device__ float3 transformNormal(const float4* m, float3 const& v)
{
  float3 r;

  r.x = m[0].x * v.x + m[1].x * v.y + m[2].x * v.z;
  r.y = m[0].y * v.x + m[1].y * v.y + m[2].y * v.z;
  r.z = m[0].z * v.x + m[1].z * v.y + m[2].z * v.z;

  return r;
}

// In shaders/vector_math.h
__forceinline__ __device__ float3 reflect(const float3& i, const float3& n)
{
  return i - 2.0f * n * dot(n, i);
}


extern "C" __global__ void __closesthit__radiance()
{
  GeometryInstanceData* theData = reinterpret_cast<GeometryInstanceData*>(optixGetSbtDataPointer());

  // Cast the CUdeviceptr to the actual format for Triangles geometry.
  const unsigned int thePrimitiveIndex = optixGetPrimitiveIndex();

  const uint3* indices = reinterpret_cast<uint3*>(theData->indices);
  const uint3  tri     = indices[thePrimitiveIndex];

  const TriangleAttributes* attributes = reinterpret_cast<TriangleAttributes*>(theData->attributes);

  TriangleAttributes const& attr0 = attributes[tri.x];
  TriangleAttributes const& attr1 = attributes[tri.y];
  TriangleAttributes const& attr2 = attributes[tri.z];

  const float3 normalGeoObject = cross(attr1.vertex - attr0.vertex, attr2.vertex - attr0.vertex);

  // Not necessary if object coordinates == world coordinates and there is no non-uniform scaling
  // on the instance transforms, i.e. all transforms act like the identity on vectors after normalization.
  const OptixTraversableHandle handle = optixGetTransformListHandle(0); // Assumes OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_SINGLE_LEVEL_INSTANCING only!
  const float4* worldToObject = optixGetInstanceInverseTransformFromHandle(handle);
  
  const float3 normalGeoWorld = normalize(transformNormal(worldToObject, normalGeoObject)); // It's a normal, use the inverse transpose matrix.
  
  const float3 direction = optixGetWorldRayDirection();
  const float3 position = optixGetWorldRayOrigin() + direction * optixGetRayTmax();
  
  const float3 reflectionWorld = reflect(direction, normalGeoWorld);
  ...
}

As you can see, I store my vertex attributes and indices in the Shader Binding Table data per instance, at the expense of keeping the vertex data around, but my vertex attributes are interleaved anyway.
That way I do not need to use optixGetTriangleVertexData(), which can be slower and requires the OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS flag in OptixAccelBuildOptions::buildFlags.
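
For completeness, if you do want optixGetTriangleVertexData() to work on a GAS, random vertex access must be enabled at build time. A host-side fragment (not a complete build setup, just the relevant options) would look like this:

```cpp
// Host side: enable random vertex access so device code may call
// optixGetTriangleVertexData() on this GAS.
OptixAccelBuildOptions accelOptions = {};
accelOptions.buildFlags = OPTIX_BUILD_FLAG_ALLOW_COMPACTION |
                          OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS;
accelOptions.operation  = OPTIX_BUILD_OPERATION_BUILD;
```

Note that this flag can cost some acceleration structure memory, which is one more reason to prefer the SBT-based attribute fetch shown above.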