In the process of adding support for Pascal, I am trying to upgrade from OptiX 3.8.1 to 3.9.1. I cannot upgrade to 4.0 or later for the time being, since those versions drop support for Fermi cards.
After upgrading the SDK, linking against it, dropping the new DLLs into the folder, etc., my application simply refuses to run. I receive the following exception whether I compile with CUDA 7.0 or 7.5:
"Unknown error (Details: Function “_rtContextLaunch2D” caught exception: Assertion failed: “splitPrimsIfBelowLimit <= INT_MAX && splitPrimsIfAboveLimit <= INT_MAX”, )
The reference to "primitives" gave me a hint on where to look to narrow it down. Indeed, I found that if I replaced all Trbvh accelerations with any other type, everything worked as it did before the upgrade.
I then took it a step further and noticed that it didn't matter which scene was in question, whether one with 4 vertices or one with hundreds: if "Trbvh" was involved, calling launch would throw the exception.
Finally, I tried changing the properties of the accelerations, and found the actual culprit:
accel = m_Context->createAcceleration( "Trbvh", "Bvh" );
accel->setProperty( "vertex_buffer_name", VAR_VERTICES );
accel->setProperty( "index_buffer_name", VAR_TRIANGLES_VERTICES_INDICES );
accel->setProperty( "chunk_size", "-1" ); // THIS LINE CAUSES THE CRASHES
If I comment out the "chunk_size" line, the application works fine; if the line is left in, the exception is thrown. Note that this is different from 3.8.1, where the same code with the exact same input worked without a hitch.
For the moment I have left it out, but I don't really understand what this property does or what its effects are.
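My interim workaround is to set "chunk_size" only for non-Trbvh builders. As a minimal, self-contained sketch of that logic (the property map, builder names, and buffer-name strings here are placeholders for illustration; the real code calls accel->setProperty for each pair):

```cpp
#include <map>
#include <string>

// Hypothetical helper: collect the acceleration properties to apply,
// skipping "chunk_size" when the builder is Trbvh, which is the
// combination that trips the assertion on OptiX 3.9.1 in my setup.
std::map<std::string, std::string> accelProperties(const std::string& builder)
{
    std::map<std::string, std::string> props;
    props["vertex_buffer_name"] = "VAR_VERTICES";                   // placeholder
    props["index_buffer_name"]  = "VAR_TRIANGLES_VERTICES_INDICES"; // placeholder
    if (builder != "Trbvh")
        props["chunk_size"] = "-1"; // fine for the other builders in 3.9.1
    return props;
}
```

This keeps the pre-upgrade behavior for Bvh and the other builders while avoiding the crash with Trbvh.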
Is this just an OptiX bug, or is it a symptom of something deeper going on that deserves attention?
Win 7 x64
Tested against both CUDA 7.0 & 7.5
Tested on GT740, GTX750Ti, GTX1050