GTC 2020: RTX Accelerated Raytracing With OptiX 7

GTC 2020 S21888
Presenters: Tony Kanell, NVIDIA; Ingo Wald, NVIDIA
Abstract
This session is for programmers interested in using OptiX to write RTX-accelerated raytracing applications. We’ll start with the general concepts behind the OptiX 7 API, and then build up to more advanced topics such as how to properly use, and optimize for, the hardware raytracing cores of modern GPUs.

Watch this session
Join in the conversation below.

Great talk!
Excited to start writing my first OptiX ray-tracer! :)

A few pieces of feedback/thoughts though:

  1. The cuts in the video are really jarring and make it harder to follow - it looks like either the uncut version should have been published instead, however much longer it might be, or the talk should have been split in two.

  2. For the SBT section, given its multi-dimensional structure/nature, it seems like some 2D/3D diagramming would have gone a long way towards clarifying that structure in the presentation. As it stands, the information is conveyed as a serialisation of a multi-dimensional story into a one-dimensional stream of verbal and textual descriptions - and that rarely works well for multi-dimensional stories.

  3. I have been working as a developer in VFX for many years, and shader binding is always a complex multi-dimensional story. Packing all these dimensions into a one-dimensional array of structs, while surely efficient, leaves a lot to be desired from a developer perspective. I don’t know what OptiX 6 was like, as I haven’t studied it, but the whole packing/unpacking story seems like a very low-level implementation detail that should never have been exposed the way it is. The combination of having to construct this multi-dimensional structure manually and coordinate it against its indexing story on the shader side, all while accounting for an implicit formula that’s embedded in the API (see the first sketch after this list), screams of “abstraction leakage” to me if anything ever did. The likelihood that any developer would get all of it correct is very low. The prevalence of confusion and bugs surrounding this is very telling, and should be considered strong evidence of a design issue there.
    A better design might have been to provide a much wider-surface-area API with per-dimension functionality for describing the multi-dimensional, graph-like structure of the binding, with some identifier per dimension. The actual construction of the packed form of the data structure would then be done internally by the API. On the querying side, one could imagine extracting the multi-dimensional identifiers out of the hit data, then handing them to some shader-domain utility procedures that internally generate the proper “final” indices/offsets and just produce the actual relevant data for the shaders as their output (roughly the shape of the second sketch below). Such a higher-level abstraction would not take away any flexibility or control from developers. If anything, it would expose that control more explicitly and with better visibility. The performance cost could probably be made easily acceptable.
    The current approach could still be left accessible for “advanced use-cases” or as a performance-optimisation choice, but it seems wrong for it to be the default. I can’t imagine any production-level solution using it as-is, ad hoc like that. My guess would be that they would all end up devising some abstraction layer on top of it, of the kind I’ve described here (perhaps that’s what OWL is…?)
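
    For anyone who hasn’t seen the SBT part of the talk yet, the low-level shape of it is roughly this - a minimal host-side sketch along the lines of the OptiX 7 SDK samples; the HitGroupData fields and the record ordering here are assumptions of mine, not anything the API mandates:

    ```cpp
    #include <optix.h>
    #include <optix_stubs.h>      // host entry points (plus optix_function_table_definition.h in one TU)
    #include <cuda_runtime.h>
    #include <vector>

    // The usual record layout from the SDK samples: an opaque header that OptiX
    // fills in, followed by whatever per-hit-group data the application defines.
    template <typename T>
    struct SbtRecord
    {
        __align__(OPTIX_SBT_RECORD_ALIGNMENT) char header[OPTIX_SBT_RECORD_HEADER_SIZE];
        T data;
    };

    // Made-up payload, just as an example of per-geometry data.
    struct HitGroupData
    {
        float3*      vertices;
        unsigned int materialId;
    };

    using HitGroupRecord = SbtRecord<HitGroupData>;

    // Host side: one record per (geometry, ray type) pair, flattened into a single
    // array. The multi-dimensional structure survives only in the append order.
    std::vector<HitGroupRecord> buildHitGroupRecords(
        const std::vector<OptixProgramGroup>& hitGroupPGs,     // one per ray type
        const std::vector<HitGroupData>&      perGeometryData)
    {
        std::vector<HitGroupRecord> records;
        for (const HitGroupData& geom : perGeometryData)        // geometry dimension
        {
            for (OptixProgramGroup pg : hitGroupPGs)             // ray-type dimension
            {
                HitGroupRecord rec = {};
                optixSbtRecordPackHeader(pg, &rec);               // fill the opaque header
                rec.data = geom;
                records.push_back(rec);
            }
        }
        return records;
    }

    // The implicit formula from the programming guide that everything above has to
    // agree with (instance offsets, the stride/offset passed to optixTrace(), and
    // the record order all have to line up, or the wrong data gets fetched):
    //
    //   sbt-index = sbt-instance-offset
    //             + sbt-GAS-index * sbt-stride-from-trace-call
    //             + sbt-offset-from-trace-call
    ```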

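    And to make the wrapper suggestion a bit more concrete, here is a toy, host-side-only sketch of what I mean - entirely hypothetical, not an existing OptiX (or OWL) API, and glossing over the matching device-side lookup utility:

    ```cpp
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // A toy host-side wrapper that hides the flat-index arithmetic: callers
    // address records by (geometry, rayType) and the class does the packing.
    // All names are hypothetical, purely to illustrate the idea.
    class HitGroupTable
    {
    public:
        HitGroupTable(std::size_t numGeometries, std::size_t numRayTypes,
                      std::size_t recordSize)
            : numRayTypes_(numRayTypes),
              recordSize_(recordSize),
              storage_(numGeometries * numRayTypes * recordSize, 0) {}

        // Per-dimension addressing on the way in...
        void setRecord(std::size_t geometry, std::size_t rayType, const void* record)
        {
            std::memcpy(&storage_[flatIndex(geometry, rayType) * recordSize_],
                        record, recordSize_);
        }

        // ...and the packed, upload-ready blob on the way out.
        const std::vector<unsigned char>& packed() const { return storage_; }

    private:
        // The "implicit formula" confined to exactly one place.
        std::size_t flatIndex(std::size_t geometry, std::size_t rayType) const
        {
            return geometry * numRayTypes_ + rayType;
        }

        std::size_t numRayTypes_, recordSize_;
        std::vector<unsigned char> storage_;
    };
    ```
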
Just my unenlightened 2 cents on that :)