I’m working on an offline renderer that may need to handle 100 GB+ of scene data.
Since it’s targeted at users with consumer-market GPUs rather than ProVis organisations, most users will likely have only ~8 GB of VRAM.
Since CUDA supports out-of-core processing with unified memory, where the GPU can page memory in on demand, I would have thought OptiX could support this as well.
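For context, this is the kind of on-demand paging I mean: plain CUDA managed memory via `cudaMallocManaged`. A minimal sketch (assuming a Pascal-or-newer GPU, where managed allocations can oversubscribe physical VRAM and pages migrate on access):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Touching the buffer from a kernel faults pages onto the GPU on demand.
__global__ void touch(float* data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const size_t n = 1ull << 31;  // 2^31 floats = 8 GiB, more than many consumer GPUs hold
    float* data = nullptr;
    // cudaMallocManaged may exceed physical VRAM; the driver pages data
    // between host and device as it is accessed ("out-of-core" behaviour).
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        std::printf("managed allocation failed\n");
        return 1;
    }
    touch<<<(unsigned)((n + 255) / 256), 256>>>(data, n);
    cudaDeviceSynchronize();
    cudaFree(data);
    return 0;
}
```

This works for ordinary CUDA kernels; my question is whether OptiX launches can draw scene data (geometry, acceleration structures) from managed allocations the same way.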
However, based on previous forum posts, OptiX has not supported out-of-core rendering since the feature was deprecated in OptiX 3.x, meaning the entire workload, including the raw scene data, must fit in the user’s VRAM.
Is there any workaround with the current API? If not, how likely is it that one will be supported in the future, say three years down the line?