I’m using OptiX 4.1.0 with CUDA 8 and NVIDIA driver 382.05 on a GTX 980 Ti, and have noticed the following issue:
I have to set the stack size explicitly, otherwise rendering breaks: with the default settings, not all geometry is traversed. However, if I do:
context->setStackSize( context->getStackSize() );
I don’t have the problem anymore!
I further noticed that if I query the stack size before setting it, I get 5120. However, the rendering result looks similar to what I get when I manually set the stack size to 1024, so I suspect there is a problem with the default stack size handling.
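For reference, here is a minimal sketch of what I’m doing (using the optixu C++ wrapper; the real application sets up programs and geometry as well, which I’ve omitted):

```cpp
#include <optixu/optixpp_namespace.h>
#include <iostream>

int main()
{
    optix::Context context = optix::Context::create();

    // Querying the default stack size reports 5120 on my machine,
    // yet rendering behaves as if the effective stack were much smaller.
    std::cout << "default stack size: " << context->getStackSize() << "\n";

    // Writing the queried value straight back fixes the rendering,
    // which suggests get and set don't agree on the default.
    context->setStackSize( context->getStackSize() );

    context->destroy();
    return 0;
}
```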
What is most annoying is that this problem appears only on my GTX 980 Ti. I have tried a GTX 970, a Maxwell-based Titan X, a GTX 1060, a GTX 1080, and even a mobile 1070, and didn’t see the issue on any of them. My original test measured free CUDA VRAM after the OptiX launch (via cudaMemGetInfo). I had a fairly large stack size at the time (24000), and on the 980 Ti there was 0 free VRAM left after the OptiX launch, while on all the other GPUs mentioned there was still VRAM left with the same settings. So I think there might be an additional problem with memory allocation during launch, not only the incorrect default stack size.
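For context, my VRAM measurement is essentially the following (a minimal sketch; the helper name and label parameter are just my own logging, not part of any API). I call it right after `context->launch()`:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Print free/total device memory as reported by the CUDA runtime.
// Called immediately after the OptiX launch to see how much VRAM it consumed.
static void printFreeVram( const char* label )
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo( &freeBytes, &totalBytes );
    std::printf( "%s: %zu MiB free of %zu MiB total\n",
                 label,
                 freeBytes  / ( 1024 * 1024 ),
                 totalBytes / ( 1024 * 1024 ) );
}
```

On the 980 Ti this reports 0 MiB free after the launch with a 24000 stack size; the other GPUs report a healthy remainder.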
Finally, I worked around the issue by reducing the stack size to roughly 4000, but I’m still puzzled whether I’m misunderstanding something or there really is a problem with the stack size set/get and memory allocation functions. (Incidentally, I didn’t have this issue with OptiX 3.8.0.)
Any explanations would be helpful!