Looking for guidelines about using OptiX robustly

Hi David and Detlef!

My issue here is not about coding problems. I have been learning OptiX since last year, and my main research task is to find ways to make path tracing converge faster.

Using the SDK samples and some open-source GitHub repositories, I built a toy path tracing engine. But I suspect it is not optimized at all. Most importantly, due to my limited knowledge, the coding takes a lot of time, which I would like to reduce.

I know there is the Falcor framework, Ingo Wald’s OWL wrapper library, and shocker-0x15’s wrapper library. Your OptiX_Apps repository is also one of the best available resources. Commercial-grade rendering engines like Cycles, Iray, and V-Ray also have OptiX backends. I am not sure about Omniverse, though.

So, my question is: for research purposes, is there a better option than developing my own rendering engine with OptiX? Of course, my own codebase gives me more freedom and understanding. But if I wanted to use an existing codebase with path tracing algorithms that I can modify, manipulate, or extend, what would you suggest?

This is difficult to answer. If you’re not a seasoned graphics programmer, this will always get complicated, independent of the underlying framework. There is no “one SDK fits all” solution here.

Of course you would have the most benefit from developing exactly what you need yourself.

Most authors of papers centered around ray tracing research have either developed their own renderer over the years (insert any of the big renderer developers and VFX studio names here) or maintain their own smaller custom frameworks which do exactly what is required to compare existing solutions against new ones.

my main research task is to find ways to make path tracing converge faster

That topic generally has nothing to do with which renderer framework you use. It can be researched even on simple CPU ray tracers, which actually happens quite often, because those have the fewest limitations.

  • That can start as simply as this “small path tracer”: https://www.kevinbeason.com/smallpt/ (a minimal sketch of such a progressive accumulation loop follows below this list).
  • My OptiX 7 examples only show how fundamental things can be done in OptiX; their random number sampling and lighting algorithms are far from what you’d really do in a professional renderer (aside from the MDL_renderer materials). There are plenty of ways to make them converge faster, but I cannot use those in open-source code for intellectual property reasons.
  • The OptiX wrapper library OWL is just a convenience API on top of the OptiX 7 host API (a minimal host-side sketch appears at the end of this reply). It’s meant to quick-start the OptiX host-side programming for beginners, but it does not affect the device-side programming tasks required to develop a renderer. Once you understand how the OptiX 7 host API works, you don’t need it anymore.
  • The renderers in Omniverse are not programmable for end-users. That includes Iray.
  • Same for all non-open-source renderers.
  • While Blender Cycles is open-source, I’m not sure how suitable it would be as a base for research purposes. The Blender 3.x Cycles is a wavefront renderer, which is pretty involved. You’d need to know exactly what you’re doing to keep it working.
  • Falcor is DirectX/DXR.
  • Many people do raytracing research using the PBRT implementation https://pbrt.org/
  • Others use the Mitsuba renderer https://github.com/mitsuba-renderer
  • For more real-time ray tracing there is also the new NVIDIA RTX Path Tracing SDK: https://developer.nvidia.com/rtx/path-tracing. That is based on the DXR and Vulkan ray tracing APIs, is meant for games, and makes use of other NVIDIA SDKs to converge and denoise faster, which means not everything in it is fully programmable.

I have not worked with any of the latter four.
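To make the “converge faster” target concrete, here is a minimal sketch of the progressive accumulation loop such research revolves around, in the spirit of smallpt. The radiance() stub is a hypothetical stand-in for a real path-traced estimate; the rest shows only the Monte Carlo averaging whose error falls off as O(1/sqrt(N)) in the sample count N, the baseline that better sampling strategies try to beat.

```cpp
// Minimal progressive path tracing loop (sketch, not a full renderer).
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

// Hypothetical stand-in: a real renderer would trace one light path
// through pixel (px, py) here and return its radiance estimate.
static Vec3 radiance(int px, int py, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    return { u(rng), u(rng), u(rng) };  // noisy dummy sample
}

int main()
{
    const int width = 640, height = 480, spp = 256;
    std::vector<Vec3> accum(width * height);
    std::mt19937 rng(42);

    // Progressive accumulation: each pass adds one sample per pixel and
    // updates the running mean. The variance of that mean drops as 1/N,
    // i.e. the error as 1/sqrt(N) -- convergence research (importance
    // sampling, better sample sequences, denoising, ...) attacks exactly
    // this slow falloff.
    for (int s = 0; s < spp; ++s)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                const Vec3 L = radiance(x, y, rng);
                Vec3& a = accum[y * width + x];
                const float t = 1.0f / float(s + 1);  // running-mean weight
                a.x += (L.x - a.x) * t;
                a.y += (L.y - a.y) * t;
                a.z += (L.z - a.z) * t;
            }

    std::printf("accumulated %d samples per pixel\n", spp);
    return 0;
}
```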

There are definitely a lot more things to look at than the list above: other “small” implementations of different light transport algorithms, plenty of research code accompanying published papers, etc.

The difficulty with using complete renderer implementations is then figuring out whether it is possible to change them in the way you need, and how that would actually be implemented.
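For reference, regarding the OWL point above, here is a minimal sketch of the host-side initialization that such a wrapper hides, assuming the OptiX 7.x headers; error checking is elided for brevity, and everything that actually makes a renderer would follow the middle comment.

```cpp
// Minimal OptiX 7 host-side initialization (sketch; no error checking).
#include <cuda_runtime.h>
#include <optix.h>
#include <optix_function_table_definition.h>  // define the function table in exactly one translation unit
#include <optix_stubs.h>

int main()
{
    cudaFree(0);   // force creation of the primary CUDA context
    optixInit();   // load the OptiX entry points from the driver

    OptixDeviceContextOptions options = {};
    OptixDeviceContext context = nullptr;
    optixDeviceContextCreate(0 /* 0 = current CUDA context */, &options, &context);

    // ... build acceleration structures, modules, pipelines, and the
    // shader binding table here; this is where the device-side renderer
    // work actually plugs in ...

    optixDeviceContextDestroy(context);
    return 0;
}
```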


Detlef’s answer is fantastic. I’ll just add my own two cents. I don’t have any recommendations, just some ideas about how to identify and think about what you might need.

my main research task is to find ways to make path tracing converge faster.

There are (at least) two fundamentally different kinds of path tracing speedup: algorithmic speedups and system (optimization) speedups. You talked about optimization, which implies system speedups, and also about research, which implies algorithmic speedups. Do you have one of those in mind more than the other?

The reasons to choose someone else’s application versus building your own may depend on whether you’re going for algorithmic or system speedups. Certain kinds of algorithmic speedups aren’t easy, or even possible, in existing renderers without a lot of refactoring and difficulty. System speedups might also be difficult to achieve, given that the authors of all the frameworks and renderers you mentioned have spent time optimizing, and given that their systems come with many constraints. On the flip side, you can learn a lot by reading and modifying someone else’s source, and you get to start with a functionally complete renderer that comes with a lot of features. This shouldn’t be underestimated; it may be worth putting up with the overhead in order to take advantage of the existing infrastructure.

the coding takes a lot of time, which I would like to reduce.

It’s good to consider what your expectations are and what your time budget really is. Making path tracing faster and optimizing GPU code are both areas of heavy, active research. It may take a pretty big time commitment, and a lot of study of what other people have done, to find any meaningful improvements. It would also be good to identify your deadlines and goals more specifically and concretely. Is your primary output a fast renderer, knowledge of how to build a fast renderer, or a paper or thesis?

When I studied ray tracing in grad school, most people told me to find an existing renderer and make an incremental improvement, because writing my own would waste time. It’s very true and good advice! But I ignored the advice and wrote my own renderer. I learned a whole lot doing it, but I certainly spent some time reinventing wheels, and I ended up with a not-so-fully-featured renderer; it couldn’t make pictures as pretty as other renderers could. My primary goal was to find a thesis topic and graduate, and by paving my own road I found lots of things that I thought were solved problems but actually weren’t and could be researched. I also spent more time than I might have otherwise. This was a while ago now; renderers are more sophisticated and GPU development is more involved today, so the advice to start from someone else’s renderer is even better advice now than it was when I was in school. And the advice to do it yourself is as good as ever, if learning is your goal.

Good luck!

–
David.


Hi Detlef! Hi David!

Thanks a lot for the deep insights. Honestly, you are the reason I started learning and working with the OptiX API. Before learning this API, I read the long discussions following every problem, and your suggestions motivated me a lot. Even today, if I have a problem and nowhere else to discuss it, I feel free to post it here, confident that I will get some of the best suggestions. Special thanks to David for describing the grad school dilemma of choosing an existing rendering engine versus reinventing some of the wheels. I guess I am in the same dilemma now.

I do not have any intellectual property constraints, so let me give you a bit of detail here. My research topic is foveated rendering, so I chose a path like this:

|-- Foveated rendering
    |-- Ray tracing
        |-- Path tracing
            |-- Hardware acceleration
            |-- Foveated path tracing
                |-- Multi-display setup (current work)
                |-- VR (future work)

I started from level 0 and spent quite a bit of time comparing OpenGL vs. Vulkan vs. DirectX vs. OptiX. At that time, I was also studying Unreal Engine, Unity3D, and Blender to find the easiest path to algorithmic improvement. Now, my personal feeling is that using a raw API is better than sticking with a commercial-grade production engine like Unity or Unreal. I do not know whether I am right or wrong.

Now I am really on a tight time budget to produce some results. I spent a considerable amount of time learning the OptiX API. As my main focus is not path tracing itself, but manipulating path tracing for foveation, my idea behind this post was to find a better code base (rendering engine) that already has OptiX, C++, a path tracing implementation, documentation, etc.

From Detlef’s suggestions, it seems that pbrt or Mitsuba could be a good fit for the research, but I need to find out how much OptiX is in them. I am thrilled about the Path Tracing SDK, but again, due to the time limit, I do not know how much DXR I can take on. Finally, if nothing works, I may stick with my own toy renderer.

Just to give you some examples of the research results you’re up against, here are three videos showing state-of-the-art interactive and real-time path tracing results, also in VR.

Two Minute Papers on ray tracing: https://www.youtube.com/watch?v=NRmkr50mkEE
NVIDIA Path Tracing SDK: https://youtu.be/dwH_u9cr4bM?t=82
Overview of NVIDIA Omniverse XR: https://www.youtube.com/watch?v=Jm155QkRjl0

Note the comments inside the Path Tracing SDK video about combining decades of NVIDIA research and technologies. These inventions don’t happen under a tight time budget; many of these methods took years of continuous improvement and brilliant ideas from leading professional researchers.


Hi Detlef! Just to extend this conversation (off-topic) a little more: you might already know that the Unity3D game engine recently extended its VR pipeline to DX12 and Vulkan. I think that could be a milestone for VR and path tracing. If I am not wrong, OptiX is widely used in Iray and Cycles, and maybe also in V-Ray. My personal feeling is that the OptiX API is more focused on offline and production rendering than on real-time rendering. Probably that is what OptiX was originally developed for.

It’s true that OptiX was originally geared more toward batch rendering than real-time, you’re right. That changed considerably after the OptiX 7 API was introduced; the limitations that would have impeded real-time apps were removed as part of the API redesign. Today OptiX is as usable for real-time work as Vulkan and DX: they all have approximately the same functionality and limitations with respect to performance and interactivity. OptiX still has some optional extra features for film-quality rendering that are not available elsewhere, such as motion blur and multi-level instancing. The primary reason we don’t see OptiX in many game engines is that game engines need to support rasterization, which DX and Vulkan both do primarily, while OptiX is wholly dedicated to ray tracing. A game engine or real-time framework could easily use OptiX if it were designed around ray tracing only; maybe we’ll see that begin to happen in the future as ray tracing support and hardware become more ubiquitous…
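To illustrate those two film-rendering extras, here is a minimal sketch of where they are switched on in the OptiX 7 API. The structure and flag names follow the OptiX 7 headers; the key count and shutter times are illustrative assumptions:

```cpp
// Sketch: enabling motion blur and multi-level instancing in OptiX 7.
#include <optix.h>

void enableFilmFeatures(OptixAccelBuildOptions&      accelOptions,
                        OptixPipelineCompileOptions& pipelineOptions)
{
    // Motion blur: give the acceleration structure two motion keys
    // spanning an illustrative shutter interval [0, 1].
    accelOptions.motionOptions.numKeys   = 2;
    accelOptions.motionOptions.timeBegin = 0.0f;
    accelOptions.motionOptions.timeEnd   = 1.0f;
    accelOptions.motionOptions.flags     = OPTIX_MOTION_FLAG_NONE;

    // The pipeline must be compiled with motion blur enabled ...
    pipelineOptions.usesMotionBlur = 1;

    // ... and with a traversable graph that allows arbitrary nesting
    // (multi-level instancing, i.e. instances of instances). A renderer
    // restricted to one instance level would use
    // OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_SINGLE_LEVEL_INSTANCING instead.
    pipelineOptions.traversableGraphFlags = OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_ANY;
}
```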

–
David.


Thanks, David. I also hope to see more and more OptiX in the near future. Then again, the whole OptiX 7.0+ API is still comparatively new, while DX has been in use for a long time and is established across multiple platforms.