Are there any compiler settings to enable the RT cores?

Now I try to migrate to OptiX 6.0.
I replaced Geometry with GeometryTriangles and set the global attribute before creating an OptiX context to enable RTX mode.
I measured performance on an RTX 2080 Ti, but it seemed to run at a similar speed as with RTX off (using the traditional Geometry).
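For reference, this is a minimal sketch of the setup described above, assuming the OptiX 6.0 host API (most error handling omitted):

```c
/* Sketch: enabling the RTX execution mode in OptiX 6.0.
 * Assumes the OptiX 6.0 SDK headers are on the include path. */
#include <optix.h>
#include <stdio.h>

int main(void)
{
    /* RT_GLOBAL_ATTRIBUTE_ENABLE_RTX must be set BEFORE the first context is created. */
    int rtx = 1;
    if (rtGlobalSetAttribute(RT_GLOBAL_ATTRIBUTE_ENABLE_RTX, sizeof(rtx), &rtx) != RT_SUCCESS)
        printf("Could not enable the RTX execution mode.\n");

    RTcontext context;
    rtContextCreate(&context);

    /* ... build the scene with GeometryTriangles nodes here ... */

    rtContextDestroy(context);
    return 0;
}
```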

Are there any required compiler settings, like code generation targets (e.g. compute_75, sm_75), to enable the RT cores?
Or should I use NVRTC?

That should just do it.
You can use SM 3.0 as the PTX target as before. Compiling to SM 7.5 is probably not yet handled by the OptiX internal parser.

What is your benchmark?
What about scene complexity (number of GeometryTriangles nodes, number of triangles in each, depth of the transform hierarchy), shader complexity, the attribute program used, other custom geometry primitives, motion blur?
Is vsync disabled, is OpenGL interop used, are you maybe running a debug executable, etc.?

Thanks, I will try more scenes.

By the way, is there a way to check the RTX capability of a GPU?
I had guessed that setting the global attribute would fail on a non-RTX GPU, but it unexpectedly succeeded on a GT 750M.
The GT 750M is Kepler and shouldn’t support RTX.

rtGlobalSetAttribute doesn’t do any validation of that kind.

You could check the GPU device attributes before creating a context. Find a link to example code here:

The first Maxwell GPU has SM 5.0.
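A minimal sketch of such a check, assuming the OptiX 6.0 host API (rtDeviceGetAttribute works before any context exists):

```c
/* Sketch: query the compute capability of each visible device before
 * creating a context, to decide whether RTX mode can be expected to work. */
#include <optix.h>
#include <stdio.h>

int main(void)
{
    unsigned int count = 0;
    rtDeviceGetDeviceCount(&count);

    for (unsigned int i = 0; i < count; ++i)
    {
        char name[256];
        int  cc[2]; /* [0] = major, [1] = minor compute capability */

        rtDeviceGetAttribute(i, RT_DEVICE_ATTRIBUTE_NAME, sizeof(name), name);
        rtDeviceGetAttribute(i, RT_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY, sizeof(cc), cc);

        printf("Device %u: %s, SM %d.%d\n", i, name, cc[0], cc[1]);

        /* The RTX execution mode requires Maxwell (SM 5.0) or newer;
         * a Kepler device like the GT 750M (SM 3.0/3.5) does not qualify. */
        if (cc[0] < 5)
            printf("Device %u does not support the RTX execution mode.\n", i);
    }
    return 0;
}
```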

It would be good if the documentation explained this.