My renderer, the OptiX denoiser, and NVENCODER share the same CUDA context.
Is this considered good practice?
I create the OptiX denoiser and NVENCODER one after another on the same CPU thread: the OptiX denoiser first, then NVENCODER.
I run the OptiX denoiser on the default CUDA stream, but it may run on a non-default stream as well. A minimal sketch of this setup follows the questions below.
a) Is there a recommended order in which the OptiX denoiser and NVENCODER should be created, and does it depend on whether the OptiX denoiser uses the default or a non-default CUDA stream?
b) What stream does NVENCODER run in? Is it the default CUDA stream, or could it be any stream?
c) Is any synchronization required between OptiX denoiser and NVENCODER creation when they use the same or different CUDA contexts and the same or different CUDA streams? If so, which synchronization API (cudaDeviceSynchronize or cuCtxSynchronize) should be used, and when?
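For concreteness, here is a minimal sketch of the setup described above, with error handling omitted. The specific calls (an OptiX 7.3+ denoiser created from a shared CUcontext, then an NVENC session opened on that same context via nvEncOpenEncodeSessionEx) are my reading of the two SDKs, not necessarily the actual application code:

```cpp
#include <cuda.h>
#include <optix.h>
#include <optix_function_table_definition.h> // defines the OptiX function table (one TU only)
#include <optix_stubs.h>
#include "nvEncodeAPI.h"

int main()
{
    // One CUDA context shared by the renderer, the denoiser, and the encoder.
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext cuCtx;
    cuCtxCreate(&cuCtx, 0, dev);

    // 1. OptiX device context and denoiser first, on this CPU thread...
    optixInit();
    OptixDeviceContextOptions ctxOpts = {};
    OptixDeviceContext optixCtx = nullptr;
    optixDeviceContextCreate(cuCtx, &ctxOpts, &optixCtx);

    OptixDenoiserOptions dOpts = {};
    OptixDenoiser denoiser = nullptr;
    optixDenoiserCreate(optixCtx, OPTIX_DENOISER_MODEL_KIND_HDR, &dOpts, &denoiser);

    // 2. ...then the NVENC session, bound to the very same CUcontext.
    NV_ENCODE_API_FUNCTION_LIST nvenc = { NV_ENCODE_API_FUNCTION_LIST_VER };
    NvEncodeAPICreateInstance(&nvenc);

    NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS sp = { NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS_VER };
    sp.device     = cuCtx;
    sp.deviceType = NV_ENC_DEVICE_TYPE_CUDA;
    sp.apiVersion = NVENCAPI_VERSION;
    void* encoder = nullptr;
    nvenc.nvEncOpenEncodeSessionEx(&sp, &encoder);
    return 0;
}
```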
I’ve already answered in the OptiX forum to the best of my ability. The questions ask about how the NVENCODER API works, which I have no expertise in, so I sent @petr.mpp over here to find out. Please assign someone on the NVENCODER team to answer the questions as best they can.

I glanced at the docs, and the NVENCODER Programming Guide suggests in multiple places that people should create a new “floating” context for encoder work, which is not something OptiX necessarily recommends by default. So the first part of the question is whether those recommendations imply that NVENCODER users should try to limit whatever else they do in the same context.

My guess is that nobody knows exactly what the interaction between the OptiX denoiser and NVENCODER is, but I’m certain that best practices include doing the initialization for each API serially and not trying to use one of them while the other is initializing. That is the problem @petr.mpp had initially, and synchronizing around initialization seems to resolve it. The user just wants to understand this workaround better so they can be confident in the fix, and to make sure there isn’t a better way to resolve it. A sketch of what that synchronized initialization might look like follows below.
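To make the workaround concrete, here is a sketch of what synchronizing around initialization could look like, assuming the single shared CUcontext from the sketch above. The helper name is hypothetical; the essential part is the cuCtxSynchronize() placed between the denoiser’s stream-ordered setup work and the encoder session creation:

```cpp
#include <cuda.h>
#include <optix.h>
#include <optix_stubs.h>
#include "nvEncodeAPI.h"

// Hypothetical helper: serialize the two initializations on one CPU thread
// and drain all pending GPU work in the shared context before touching NVENC.
void initDenoiserThenEncoder(CUcontext cuCtx, OptixDeviceContext optixCtx,
                             NV_ENCODE_API_FUNCTION_LIST& nvenc)
{
    // 1. Create the denoiser. A subsequent optixDenoiserSetup() call runs
    //    asynchronously on whatever stream it is given.
    OptixDenoiserOptions dOpts = {};
    OptixDenoiser denoiser = nullptr;
    optixDenoiserCreate(optixCtx, OPTIX_DENOISER_MODEL_KIND_HDR, &dOpts, &denoiser);
    // ... optixDenoiserSetup(denoiser, stream, ...) would be issued here ...

    // 2. cuCtxSynchronize() blocks until all preceding work in the current
    //    context has finished, regardless of which stream it was issued on,
    //    so it covers both the default and any non-default stream. The
    //    runtime-API cudaDeviceSynchronize() would serve the same purpose.
    cuCtxSynchronize();

    // 3. Only now open the NVENC session on the same context.
    NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS sp = { NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS_VER };
    sp.device     = cuCtx;
    sp.deviceType = NV_ENC_DEVICE_TYPE_CUDA;
    sp.apiVersion = NVENCAPI_VERSION;
    void* encoder = nullptr;
    nvenc.nvEncOpenEncodeSessionEx(&sp, &encoder);
}
```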