We have started designing a new renderer using OptiX, building on what we learnt from our research task. One thing we are interested in is having better control over when acceleration structures are calculated. Part of the idea is to have a background worker process whose sole job is generating acceleration structures.
The principal idea is that we could load complex geometry and calculate its acceleration structure over multiple frames without affecting the next launch, so rendering remains realtime however much happens in the frame. When the acceleration build is complete, we copy the data into the rendering scene. The only side effect is that a model appears a couple of frames late. In the case of character animation, the updated character pose would be built in parallel with the rendering of the previous frame, removing the serial acceleration-update-then-render dependency.
The only way we can currently see to achieve this is via a second Context whose sole task is updating acceleration structures. We would then get the acceleration caches and insert them into the rendering context, along with setting/switching the geometry buffers, which would be passed using simple CUDA interop, so requiring only a GPU-to-GPU copy or no copy at all.
There are a couple of issues with our multi-context idea though:
- To copy the acceleration data it has to go via host memory, so it wouldn’t be as fast as technically possible.
- The acceleration context will still ‘compile’ when we call launch(0), which is redundant work.
- We will essentially double-buffer the character geometry and swap the buffers, which could cause problems in OptiX?
Any views on how we could gain control over acceleration-build scheduling, whether with a single context, multiple contexts, or some other method, would be greatly appreciated. We see that OptiX builds are getting faster and faster, so is it the view of the OptiX team that builds will remain automatic for the foreseeable future?