Hello,
I am currently developing a hybrid rendering application that uses OptiX for the ray tracing stage. To maintain a stable framerate, I plan to implement a heuristic that terminates further ray calculations once a time budget is exceeded. I've run into a strange problem: the timeout callback does not seem to work at all. No matter whether I set the min_polling_seconds argument to a very small value (like 0.0001) or a large one, the provided callback function is never called, whether my application runs at 120 or at 5 frames per second.
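For context, the kind of frame-budget heuristic I eventually want looks roughly like this (just a sketch of the idea; beginFrame, budgetCallback and kFrameBudgetSeconds are my own illustrative names, not OptiX API):

#include <chrono>

// Hypothetical sketch: abort the launch once the frame budget is used up.
static std::chrono::steady_clock::time_point g_frameStart;
static const double kFrameBudgetSeconds = 1.0 / 30.0;

void beginFrame()
{
    g_frameStart = std::chrono::steady_clock::now();
}

int budgetCallback()
{
    const double elapsed = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - g_frameStart).count();
    // a non-zero return value asks OptiX to wind down the current launch
    return elapsed > kFrameBudgetSeconds ? 1 : 0;
}

But before building that, I need the callback to fire at all, which brings me to the problem below.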
My platform: Windows 10 Version 1709 x64, Microsoft Visual Studio 2015, OptiX 5.0.0, CUDA 9.0, GeForce GTX 960M, Driver version 391.35.
I’ve tried using the timeout callback both in my own application and in the OptiX SDK samples, and it never fired in either case.
The parts of code I’m using to set the callback are as follows:
#include <optixu/optixpp_namespace.h>
#include <iostream>

optix::Context m_context;
extern unsigned int width, height; // output dimensions, defined elsewhere

// matches RTtimeoutcallback: return non-zero to ask OptiX to abort the launch
int timeoutCallback()
{
    std::cout << "Timeout callback!" << std::endl;
    // for testing purposes - just ask for abort
    return 1;
}

// called once
void load()
{
    m_context = optix::Context::create();
    m_context->setTimeoutCallback(timeoutCallback, 1.0 / 30.0); // try to maintain ~30 FPS
    m_context->setEntryPointCount(1);
    m_context->setRayTypeCount(2);
    //(...)
}

// called every frame
void draw()
{
    m_context->launch(0, width, height); // entry point 0
    // buffer displaying stuff
}
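For what it's worth, the frame rates I quoted were measured on the host around the launch itself, roughly like this (drawTimed is an illustrative helper, not code from my renderer):

#include <chrono>
#include <iostream>

void drawTimed()
{
    const auto t0 = std::chrono::steady_clock::now();
    m_context->launch(0, width, height);
    const double ms = std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - t0).count();
    std::cout << "launch: " << ms << " ms" << std::endl; // ~8 ms at 120 FPS, ~200 ms at 5 FPS
}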
I’m not receiving any error. The application behaves exactly the same as it does without the call to setTimeoutCallback: no ray termination despite low performance, and no messages on the application’s console. I’ve read in the programming guide that the timeout callback doesn’t work with remote rendering, but I don’t use remote rendering anywhere in my application.
Do I misunderstand the purpose or behaviour of the timeout callback, am I setting it up the wrong way, am I missing an important parameter somewhere, or is this a bug?
Thanks in advance for any advice.