OptiX for usage in VR headsets?

I am thinking about simulating scenes with OptiX to present them in VR headsets. Do I need a separate graphics engine for this task (e.g. Unreal Engine)?

Are there any samples using OptiX with VR headsets? Do some VR headsets offer better support for OptiX than others?


Hi Superdev,

OptiX is purely a ray tracing API: it provides a way to render your geometry to framebuffers using ray tracing and your own shaders, but it does not give you integrated authoring tools, a scripting environment, or an interface to controllers the way a game engine does. OptiX can do the rendering part, and a separate game engine is not required, but whether you would want to use Unreal or some other engine will depend entirely on what kinds of features you’re looking for and how you want to write your code and author your scenes. What kind of simulation do you need to do?

The OptiX samples are only meant to demonstrate how to use the OptiX API with as little code as possible and minimal dependencies, so there aren’t any that target VR headsets or stereo environments specifically.

As far as which headsets might work best, I don’t know. I’m not aware of any that have specific OptiX support, but others might know more and be able to jump in here with suggestions. Obviously you’d want to pick something compatible with an external RTX board, and not an all-in-one headset that has its own integrated GPU.

–
David.


This ^^^.

In the end you need to provide an image to the VR runtime software for your specific VR headset.

The VR headsets on the market provide runtimes supporting various raster graphics APIs (e.g. OpenGL, Vulkan, DirectX11/12). Depending on which graphics APIs the different VR headset vendors support best, your own application would need to be able to generate an image for that expected raster API texture object.

The Khronos group is working on OpenXR, which should make that task less vendor-specific in the future: https://www.khronos.org/openxr

Since VR use cases are real-time, at high refresh rates and high resolutions, the time budget is rather limited, and there wouldn’t be many use cases for ray tracing under those circumstances.
Because of that, for optimal performance it makes sense to target a VR runtime raster API which already has native support for ray tracing. That means: when targeting DirectX12, use DXR; when targeting Vulkan, use the NVIDIA Vulkan ray tracing extensions. Those combinations would not need to care about CUDA (OptiX) context switches and interoperability, which would otherwise affect potential hybrid rasterizing/raytracing algorithms.

That said, it’s possible with OptiX and OpenGL as well, as this GTC 2019 presentation shows:
http://on-demand-gtc.gputechconf.com/gtc-quicklink/1sQYeyg
EDIT: The on-demand GTC pages have been moved. This is the older 2019 and a newer 2020 presentation from above:
https://www.nvidia.com/en-us/on-demand/session/gtcsiliconvalley2019-s9717/
https://www.nvidia.com/en-us/on-demand/session/gtcsj20-s21425/


Hi @droettger ! So, from your answer:

Because of that, for optimal performance it makes sense to target a VR runtime raster API which already has native support for ray tracing. That means: when targeting DirectX12, use DXR; when targeting Vulkan, use the NVIDIA Vulkan ray tracing extensions. Those combinations would not need to care about CUDA (OptiX) context switches and interoperability, which would otherwise affect potential hybrid rasterizing/raytracing algorithms.

Are you suggesting using either the DXR or VKR API over OptiX for a ray tracing VR implementation? This post is from 2019; with today’s RTX tech it now seems feasible to work on ray and path tracing on high pixel density displays like VR. Could you please add some more suggestions on that?

Hi @_Bi2022,

I don’t speak for Detlef, but since he’s out on holiday break already I’ll try to help.

There are several reasons one might want or need to use DXR or VKR instead of OptiX. Even though this thread is several years old, it’s still the case that many VR applications and headsets have pretty tight compute budgets. Even though they can handle high-density displays and high framerates, those things consume a lot of precious memory bandwidth. It’s also the case that some headsets are designed with support for specific APIs, and trying to interop with an API the vendor didn’t explicitly plan for may increase the required effort considerably.

Detlef was pointing out that OptiX is ray tracing only, while DXR and VKR have a core raster API in addition to the ray tracing extensions. This means you can render your primary visibility using rasterization, and reserve your ray tracing workload for secondary effects like reflections, shadows, or bounce lighting. It depends on your scene data, but speaking very generally, I would expect that less complex scenes with game-like assets will be more efficient to rasterize for the first bounce, while ray tracing throughout might pay off with high complexity and instancing.

If your VR budget allows for 100% ray tracing, and if your target headset will be easy to integrate with OptiX output, then there’s no particular reason to avoid OptiX. I imagine that OptiX would be simpler and preferable to trying to mix raster and ray tracing, as well as being a little simpler than DXR/VKR while allowing seamless CUDA interop. But the point is to pay attention to the details of your target VR devices, whether they support DirectX, Vulkan, OpenGL, or other APIs, and to be aware of the GPU driving the system, including its memory bandwidth and RTX specs, to make sure it can handle your plans and expectations. If you are developing for multiple VR headset models and/or multiple brands, it might be safer to start with the API that has the broadest advertised support across the models you’re targeting.

–
David.


Hello David!

I sincerely appreciate the time you have invested to answer my question in detail.

My interest is actually in research, because I truly believe that with the development of RTX, the future graphics pipeline will be path tracing. I am aware of the VR requirement of at least 90 fps (achieving that will be challenging, of course) and latency of no more than 20 ms. From some internet research I found that OptiX (with CUDA) is faster on NVIDIA RTX cards than DXR and VKR, and of course that makes sense.

To test real-time bidirectional path tracing, I have recently bought two RTX 3090 GPUs, and I also have a Vive Pro Eye headset with a Tobii eye tracker. Honestly speaking, I am just a beginner in this real-time rendering domain. What I am missing is a lead, some guideline on how I can combine all these resources. I also found on Google that I will need an SDK for VR development, e.g., OpenXR.

A sample project would be very helpful as a lead. As that is not available, could you please suggest some good references? I believe this would not only help me, but also future developers.


A pair of 3090s is a very beefy setup that should be able to handle your VR application just fine, provided that you can push the rendered results into the Vive headset.

The main issue for you to solve is how to render something (using any means whatsoever) and send those rendered results to the headset display. We on the OptiX team are completely unfamiliar with the Vive SDKs for display, but assuming that Vive provides a path for you to build your own rendering engine, the place to look for such an example would be on their developer page and/or forums.

https://developer.vive.com/resources/
https://vr.tobii.com/sdk/develop/

From a little bit of searching, it appears that Vive headsets have the most support for Unity & Unreal based applications and content. But I can see a Wave Native SDK that you can use to build OpenGL applications. There are details and a tutorial here: https://hub.vive.com/storage/docs/en-us/RenderRuntime.html

I recommend going through their tutorial to build a very basic OpenGL-based renderer for your display, without trying to use OptiX (and don’t bother with eye tracking or head motion yet either). Just get to the point where you can drive the headset display with your own code. Once that is up and running, you can then consider the problem of how to render using OptiX, push the resulting image into an OpenGL buffer, and use it in your Vive application. This part won’t be very difficult because the OptiX SDK samples already have built-in OpenGL interop. For a path tracing sample you could use optixPathTracer, but pay attention to the OpenGL code in our sutil library, in particular GLDisplay.cpp. You’re going to need to create the GL context and display buffer in your own application, follow our example of how to render into the display buffer, and then pass it to the Vive SDK for display.

There will be several other bits and pieces to integrate once you get the display rendering to work. Then you can investigate your eye tracking API and figure out how to pass the camera parameters over to your OptiX-based renderer. There will be plenty of work figuring out how to synchronize your OptiX launches with the Vive display refresh. I’m assuming you’ll probably use one GPU for each eye. Because Vive supports OpenGL, you will have the option to do some raster-based rendering in OpenGL before or after your OptiX ray tracing phase, if you want. This is how you might display on-screen text or UI. With a pre-render phase, you can consider rasterizing your first bounce and ray tracing the secondary bounces (reflections, refractions, shadows, etc.). This is optional, and can be fairly complicated, so you could start with pure OptiX and investigate the raster integration later if it seems necessary.

This path will take some time to explore, I’m sure, and we will be interested to hear how it’s going and see demos of OptiX on Vive Pro Eye once it’s up and running. Good luck!!

–
David.


BTW I just realized I should at least step back from the OptiX question and mention a couple of things worth considering.

Unity and Unreal both have path tracing engines, so it’s worth carefully considering your goals for this project. If your goal is rendering engine research specifically, and part of your primary goal is to implement the renderer yourself, then doing the OptiX integration is certainly a good idea. But if your goal is to have basic path tracing support to improve the visuals as part of user studies or a higher-level interaction study, or if your research interest is how to build a VR path tracing pipeline or a VR ray tracing application or game in general, or to support other non-rendering-specific research, then it’s worth carefully considering whether starting with the Unity or Unreal path tracing modes might help you achieve your goals much faster with less effort.

It will be a considerable amount of work just to integrate OptiX with Vive, but that will only be the beginning. If that goes well, then you’ll need to write a path tracer that includes a content pipeline and can easily handle the VR’s high resolution and refresh rate demands. Getting content from somewhere into your renderer alone can easily take as much time as writing the renderer, even if you build on top of existing content pipeline and authoring tools.

Note that I’m speaking to anyone here thinking about developing VR applications using OptiX, so please don’t take this as me being patronizing or making any assumptions about what you are capable of. I also love OptiX and love seeing people use it, so I don’t want to recommend against it. Nonetheless, it would be a very good idea to think carefully about what your goals are and whether starting with a pre-existing game engine like Unity or Unreal will meet those goals more easily than a DIY approach. The benefits of having an existing path tracer, and of having all the authoring tools these game engines provide, cannot be overstated. There are very good reasons to leverage the game engines if you aren’t primarily focused on writing your own renderer.

–
David.


I would like to see those statements, because that actually doesn’t make any sense.

All three ray tracing APIs, OptiX, DXR, and Vulkan RT, should run at the same maximum speed when things are programmed optimally, since there is nothing in the underlying hardware which works differently between them. They even go through the same kernel driver.

I’ve ported one of my OptiX 7 examples to Vulkan RT before and it was actually a tad faster in Vulkan RT.

It’s just that the functionality of the three APIs differs; OptiX is targeted at professional renderers and has a few more features, like multi-level traversal and built-in curve primitives, for example.

What I was trying to explain before is that you should first figure out which graphics API has the best support in your HMD vendor’s SDK. That defines how you need to transfer the image to the HMD, which normally happens via a texture inside a graphics API, that is, not CUDA.
This in turn means that when using OptiX you need to figure out how to best transfer your resulting image from CUDA into that target graphics API texture. And since that step carries some overhead, it would be more efficient to directly use the ray tracing API available inside the graphics API the HMD uses to transfer and display the image.
Again, when the HMD prefers DirectX12, you could use DXR for ray tracing and would be able to share all graphics resources without any need for CUDA-graphics interop like when using OptiX. Similarly, when the HMD prefers Vulkan for the texture and display, using the Khronos Vulkan ray tracing extensions would let you share the graphics resources without any interop necessity.

This discussion is all about the overhead necessary for the transfer of the final image to the HMD. Your time budget is limited, e.g. to 11.1 milliseconds at 90 Hz, and you don’t want to waste any time on just transferring the final image when that can be avoided by using the more suitable ray tracing API instead.


My main goal is actually to lower the rendering load to achieve a faster framerate (90 fps as the target). For this, I chose to use the eye-tracking data, a.k.a. foveated rendering / gaze-contingent rendering. My initial idea is to lower the sampling rate towards the peripheral regions. The main drawback in my case is that I do not have much time for developing an in-house rendering engine. On the other hand, after a short look at Unity and Unreal Engine, it seems that both engines do not support ray tracing for VR yet, as those are mainly for commercial use.

After this long discussion in this thread (your replies are real gold for future users), it seems that OptiX on the backend would be a really good option to try if developers have plenty of time and resources. This would be really good research, as besides ESI Group (NVIDIA On-Demand) no one else has tried it yet. McCarthy et al. ([PDF] Distributed VR Rendering Using NVIDIA OptiX | Semantic Scholar) used OptiX for a CAVE environment, CalVR.
