How to enable the RTX 2060's RT cores with OptiX 6

I made an application that uses OptiX 6.
I added the following API call to my code to manually enable the RT cores of my graphics card:

// RT_GLOBAL_ATTRIBUTE_ENABLE_RTX has to be set before the context is created.
const int enablingRTX = 1;
RTresult rtxResult = rtGlobalSetAttribute(RT_GLOBAL_ATTRIBUTE_ENABLE_RTX, sizeof(enablingRTX), &enablingRTX);
if (rtxResult == RT_SUCCESS)
    std::cout << "using RTX" << std::endl;
else
    std::cout << "not using RTX" << std::endl;

But the console output says that the RT cores are never turned on, which would mean I'm not using RTX mode to speed up ray intersections.
Our program uses GeometryTriangles (see the sketch below).
How can I be sure that the video card's RTX mode is turned on and used correctly?
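For context, this is roughly how we build the geometry. A minimal sketch with illustrative names; error checking and buffer creation are omitted, and the parameters come from our application:

#include <optix.h>

// Minimal GeometryTriangles setup sketch (OptiX 6 C API).
// "context", "vertexBuffer" and the counts are placeholders from our app.
RTgeometrytriangles createTriangles(RTcontext context,
                                    RTbuffer vertexBuffer,     // RT_FORMAT_FLOAT3 input buffer
                                    unsigned int vertexCount,
                                    unsigned int triangleCount)
{
    RTgeometrytriangles triangles = nullptr;
    rtGeometryTrianglesCreate(context, &triangles);

    // One triangle per three consecutive vertices (no index buffer in this sketch).
    rtGeometryTrianglesSetPrimitiveCount(triangles, triangleCount);
    rtGeometryTrianglesSetVertices(triangles, vertexCount, vertexBuffer,
                                   0,                  // byte offset into the buffer
                                   sizeof(float) * 3,  // byte stride between vertices
                                   RT_FORMAT_FLOAT3);
    return triangles;
}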

Our scene has 200,000 triangles. When we set RT_GLOBAL_ATTRIBUTE_ENABLE_RTX to 1, the frame rate reaches 23 fps; when we set it to 0, the frame rate is only 1.9 fps. I want to know whether this difference is caused by the RTX execution mode or by the RT cores?
Is there an engineer here who can help?

On the RTX 2060, with 10 million triangles in the scene and RT_GLOBAL_ATTRIBUTE_ENABLE_RTX set to 1, the fps is close to 30. On a GTX 980 with the same 10 million triangles, the fps is close to 25. What puzzles me is how the GTX 980, which has no RT cores, can reach such performance.

Which OptiX 6 version (major.minor.micro) are you using?
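If you're not sure, you can query the version your application is actually running against; a minimal sketch using rtGetVersion (error handling reduced to a success check):

#include <optix.h>
#include <cstdio>

// Print the OptiX version the application is running against.
// Since OptiX 4.0 the encoding is major * 10000 + minor * 100 + micro.
void printOptixVersion()
{
    unsigned int version = 0;
    if (rtGetVersion(&version) == RT_SUCCESS)
        std::printf("OptiX %u.%u.%u\n", version / 10000, (version % 10000) / 100, version % 100);
}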

How can I be sure that the video card's RTX mode is turned on and used correctly?

There is no need to enable the RTX execution strategy at all. That has been the default for over a year.
Please do not use the old execution strategy anymore.

The performance of a ray tracer depends on many things, and it’s more dependent on the number of rays than on the number of triangles.
It’s not possible to say what the limiting factors are in your case with the given information.
Maybe you’re shading bound, or maybe you had vertical sync enabled during the benchmark, etc.
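If you want to isolate the ray tracing work, time the launches directly rather than reading a frame rate with presentation and vsync in the loop. A rough sketch with placeholder names ("context", "entryPoint", "width", "height" come from your application):

#include <optix.h>
#include <chrono>
#include <cstdio>

// Times N consecutive launches of one entry point. rtContextLaunch2D blocks
// until the launch has finished, so wall-clock time around it is meaningful.
void benchmarkLaunch(RTcontext context, unsigned int entryPoint,
                     RTsize width, RTsize height)
{
    const int iterations = 100;

    // Warm-up launch: builds acceleration structures and compiles kernels.
    rtContextLaunch2D(context, entryPoint, width, height);

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        rtContextLaunch2D(context, entryPoint, width, height);
    const auto end = std::chrono::steady_clock::now();

    const double ms = std::chrono::duration<double, std::milli>(end - start).count() / iterations;
    std::printf("average launch time: %.3f ms (%.1f fps equivalent)\n", ms, 1000.0 / ms);
}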

Please have a look into some related topics about what RT cores accelerate:
https://forums.developer.nvidia.com/t/optix-6-0-rtx-acceleration-is-supported-on-maxwell-and-newer-gpus/70206
https://forums.developer.nvidia.com/t/api-related-to-triangle-mesh/82909
https://forums.developer.nvidia.com/t/leveraging-rtx-hardware-capabilities-with-optix-7-0/107733

Thank you for your reply.
With OptiX 6.0.0, I can reach 27 fps with RTX mode manually turned on, and only 1 fps with it manually turned off. Does this mean that turning it off disables the RTX 2060's RT cores?

If you're still using OptiX 6.0.0, I would seriously recommend upgrading to the newest available version of that API, 6.5.0.

OptiX 6.0.0 was the first version to support RT cores on RTX (Turing) boards, and that support has been improved considerably since then. Since 6.5.0 the OptiX core implementation as well as the denoiser reside inside the display drivers, so by staying on that old version you're missing out on the core improvements shipped with each released display driver.

Please forget about that execution strategy attribute and do not use it anymore. It’s neither tested nor supported.
It is not actually just switching the RT cores on and off. That functionality doesn’t even exist. Instead it changes the whole code path taken inside the OptiX core implementation. That was only intended to have the former mega-kernel execution strategy still available for a transition period. I wouldn’t be surprised if it’s doing something unreasonable to result in that dramatic performance loss. Again, simply don’t do that.