Multi-process access to a single OptiX context

Dear All,

We use OptiX extensively for camera view synthesis in various robotics simulations. This currently involves exporting complex scene geometry from a game engine environment to an OptiX context via a binary network protocol. This works well and allows us to generate as many camera views as we wish at high speed without slowing the game engine, which currently runs in a single thread.

Lately, however, we have also had the requirement to simulate other sensor modalities such as lidar, sonar, radar and infrared, which all require different kinds of physics and hence distinct ray generation programs and post-processing. Rather than merging all of our sensor simulations into a single OptiX application, or duplicating scene geometry across multiple applications, I am wondering if there is a way of sharing a single OptiX context between multiple applications, each of which could then simulate the different physics for its own sensor.

This would remove a significant roadblock we currently face with GPU memory usage.

Best Regards,

David.

Which OptiX versions are we talking about? Assuming OptiX 7.

That isn’t possible. You cannot even share OptiX contexts between GPU devices in the same application. They are per CUDA context and these are per device.
Within a single process, though, you can share data between NVLink devices in a multi-GPU system.
OptiX 7 doesn’t know about multiple GPUs. That would all be handled inside the application’s CUDA host code.
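
For illustration (error checks omitted), creating a context binds it to the CUDA context of one device in the current process:

```cpp
// Minimal illustration: an OptixDeviceContext is tied to the CUDA context
// of one device in the current process.
#include <optix.h>
#include <optix_function_table_definition.h>  // exactly one translation unit needs this
#include <optix_stubs.h>
#include <cuda_runtime.h>

OptixDeviceContext createContextForDevice(int deviceOrdinal)
{
    cudaSetDevice(deviceOrdinal);  // select the GPU
    cudaFree(0);                   // make sure the primary CUDA context exists
    optixInit();                   // load the OptiX function table (once per process)

    OptixDeviceContextOptions options = {};
    OptixDeviceContext context = nullptr;
    // Passing 0 for the CUDA context means "use the current context of this process".
    optixDeviceContextCreate(0, &options, &context);
    return context;
}
```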

You might need to add some more details on why you cannot share the acceleration structures among different rendering algorithms inside a single application.
How much memory would you need?
Which graphics boards are you running on, and with how much VRAM?

The different rendering algorithms are effectively implemented inside the OptiX pipelines and shader binding tables (SBTs), which should be rather small compared to the acceleration structures.
Having different ray generation programs means having different SBTs. There can only be one ray generation entry point per SBT.
Depending on the rendering algorithms, the other program records might even be shareable among the pipelines.
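
As a rough sketch (the device pointers, counts and stride below are placeholders), two sensor pipelines could use two SBTs that differ only in the raygen record while reusing the same miss and hit group records:

```cpp
#include <optix.h>
#include <cuda.h>

// Placeholder sketch: build an SBT from pre-uploaded records.
OptixShaderBindingTable makeSbt(CUdeviceptr raygenRecord,
                                CUdeviceptr missRecords, unsigned int missCount,
                                CUdeviceptr hitRecords,  unsigned int hitCount,
                                unsigned int recordStride)
{
    OptixShaderBindingTable sbt = {};
    sbt.raygenRecord                = raygenRecord;  // one raygen entry point per SBT
    sbt.missRecordBase              = missRecords;
    sbt.missRecordStrideInBytes     = recordStride;
    sbt.missRecordCount             = missCount;
    sbt.hitgroupRecordBase          = hitRecords;    // shared among the sensor pipelines
    sbt.hitgroupRecordStrideInBytes = recordStride;
    sbt.hitgroupRecordCount         = hitCount;
    return sbt;
}

// e.g. a camera SBT and a lidar SBT reusing the same miss/hit records:
// OptixShaderBindingTable cameraSbt = makeSbt(dCameraRaygenRec, dMissRecs, 1, dHitRecs, hitCount, stride);
// OptixShaderBindingTable lidarSbt  = makeSbt(dLidarRaygenRec,  dMissRecs, 1, dHitRecs, hitCount, stride);
```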

There isn’t really anything to gain from running multiple OptiX applications in parallel if the underlying system is running only one GPU device and at least one application saturates the GPU.
An OptiX 7 launch takes the pipeline and SBT as arguments, so switching between algorithms is effectively just a matter of which pipeline and SBT you pass to each launch.
Using multiple pipelines with multiple CUDA streams can potentially run more work in parallel, especially if your launch sizes are not saturating the GPU.
Having multiple GPUs accessible would allow things to run completely in parallel in a multi-threaded process.
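
A hedged sketch of what that could look like in host code (the pipeline, SBT, stream and Params names are placeholders, error checks omitted): one shared traversable handle, two pipelines, each launched on its own CUDA stream so the work can overlap:

```cpp
#include <optix.h>
#include <cuda.h>
#include <cuda_runtime.h>

// Placeholder launch parameters: both pipelines reference the same acceleration structure.
struct Params
{
    OptixTraversableHandle handle;  // shared GAS/IAS handle
    CUdeviceptr            output;  // per-sensor output buffer
};

void launchSensors(OptixPipeline cameraPipeline, const OptixShaderBindingTable& cameraSbt,
                   OptixPipeline lidarPipeline,  const OptixShaderBindingTable& lidarSbt,
                   CUdeviceptr dCameraParams, CUdeviceptr dLidarParams,
                   cudaStream_t cameraStream, cudaStream_t lidarStream)
{
    // Each optixLaunch takes its own pipeline, stream, launch parameters and SBT,
    // so the two sensor simulations are submitted back to back and may overlap on the GPU.
    optixLaunch(cameraPipeline, cameraStream, dCameraParams, sizeof(Params),
                &cameraSbt, 1920, 1080, 1);          // camera: one ray per pixel
    optixLaunch(lidarPipeline,  lidarStream,  dLidarParams,  sizeof(Params),
                &lidarSbt,  64 * 1024, 1, 1);        // lidar: one ray per beam sample

    cudaStreamSynchronize(cameraStream);
    cudaStreamSynchronize(lidarStream);
}
```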

Thanks for the prompt reply, Detlef. There’s no real problem sharing acceleration structures from a software perspective; it just works against the modularity that would let us mix and match sensor modalities in different situations. Sometimes you want lidar, vision and radar, sometimes just vision, etc. I guess we’ll implement a monolithic OptiX application with internal semaphores to run different ray types on demand.

I had thought there might be something new in OptiX 7 that would allow a more modular approach. All of our existing code is written for OptiX 5.1.1 and will shortly be ported to OptiX 7, so it’s a significant rewrite anyway.

Cheers,
David.

Hi David,

It’s not yet clear to me what your requirements are, but you might want to take a closer look at Detlef’s suggestion to use CUDA streams and multiple pipelines. The ability to use multiple streams with overlapping launches, and to manage and interface with CUDA memory directly, is the new thing in OptiX 7 that allows a more modular approach.

From your description so far, this might do what you’re after. You could (just as an example) use multiple pipelines and streams to implement an OptiX server that would allow multiple separate client applications to render with different sensor modalities on the server in parallel. I am assuming that a measurement with a given sensor type is a separate launch from a measurement with another sensor type.

I wouldn’t think that sharing acceleration structures is an impediment to modularity; the shared BVH just needs to be treated like a shared resource. I don’t really understand your comment about using a semaphore, though, which might be an indication that I don’t understand your overall question or that I’m making some incorrect assumptions…


David.

Hi David,

We’ll definitely explore those possibilities for parallelism, and we are genuinely looking forward to moving to OptiX 7, as we think the new memory management will give us greater flexibility. What I meant about the semaphores was:

  • Different sensors run at different rates, so yes, we will need separate launches, ray generation and post-processing for different ray types. Each launch will be independent, but parallelism would still be very useful to avoid blocking.

  • Different sensors have different ray types and payloads. Our camera payloads are just RGB values, lidar payloads have RGBI values and XYZ coordinates, etc. Some of our other payloads are fairly large. With certain types of rays, we resort to wavefront processing, which requires an entirely different data flow (see the sketch after this list).
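
Just to make the payload point concrete, something roughly along these lines (the structs below are only indicative, not our actual definitions):

```cpp
// Indicative per-sensor result records (not our actual code).
struct CameraSample
{
    float r, g, b;        // colour only
};

struct LidarSample
{
    float r, g, b;        // reflected colour
    float intensity;      // return intensity
    float x, y, z;        // hit position in world space
};
```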

So sharing OptiX data across applications was just one idea (now discounted!). There is no reason we can’t combine our different ray types and launches into a single application and simply not use all of the ray types and data feeds in any given simulation.

Thank you both again for your guidance on this. No doubt we’ll have more questions as we get deeper into OptiX 7.

Cheers,
David.