Writing Portable Rendering Code with NVRHI

Originally published at: Writing Portable Rendering Code with NVRHI | NVIDIA Developer Blog

Modern graphics APIs, such as Direct3D 12 and Vulkan, are designed to provide relatively low-level access to the GPU and eliminate the GPU driver overhead associated with API translation. This low-level interface allows applications to have more control over the system and provides the ability to manage pipelines, shader compilation, memory allocations, and resource descriptors…


Thank you for your interest in NVRHI. It’s been used internally by NVIDIA DevTech for years, and now it’s available as open source for everyone else. I hope someone finds it a useful library for their project. If you have any comments or questions, you can post them here.

This looks very interesting, but I have a few questions:

a) Looks like this will work on Intel and AMD GPUs too, as long as they support Vulkan or Direct3D 11/12. Is that correct?

b) Could you comment on how this compares with libraries like gfx-rs?

Hi,
Does NVRHI support multiple GPUs? I mean, in the case of an application with two windows, running on a system with two GPUs (each GPU having one output), where each output contains one single window, would NVRHI support rendering to each window with its respective GPU?
I haven’t seen any code that highlights that feature, but I’m not really sure I haven’t missed it!
Cheers,
Greg

NVRHI does work with AMD GPUs, that’s right.

Comparison with gfx-rs: first, gfx-rs is written in Rust and for Rust, while NVRHI is in C++. Beyond that, the APIs seem somewhat similar - but then they’re similar to the backend APIs as well. gfx-rs has some functionality for adapter creation and windowing-system interop, which NVRHI does not. NVRHI has automatic resource state tracking and barriers, and it supports ray tracing.
That’s what I could find from a quick look at the gfx-rs documentation, anyway.
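
To illustrate what the automatic state tracking buys you, here is a minimal sketch of recording a draw with NVRHI, with no explicit barrier or resource-transition calls anywhere - the type and field names are taken from the public NVRHI headers as I remember them, so double-check against nvrhi.h, and pipeline/framebuffer creation is assumed to happen elsewhere:

```cpp
// Minimal sketch: NVRHI tracks resource states on the command list and
// inserts the required barriers itself, so the draw below needs no
// explicit transitions.
#include <nvrhi/nvrhi.h>

void drawTriangle(nvrhi::IDevice* device,
                  nvrhi::ICommandList* commandList,
                  nvrhi::IGraphicsPipeline* pipeline,
                  nvrhi::IFramebuffer* framebuffer)
{
    commandList->open();

    nvrhi::GraphicsState state;
    state.pipeline = pipeline;
    state.framebuffer = framebuffer;
    state.viewport.addViewportAndScissorRect(nvrhi::Viewport(1280.f, 720.f));
    commandList->setGraphicsState(state);

    nvrhi::DrawArguments args;
    args.vertexCount = 3;
    commandList->draw(args);

    commandList->close();
    device->executeCommandList(commandList);
}
```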

Yes, NVRHI should support multiple GPUs in that scenario. You can create multiple NVRHI device objects, one for each GPU, and interact with them separately.
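
For illustration, a rough sketch of that setup with the D3D12 backend might look like the following; the DeviceDesc field names (pDevice, pGraphicsCommandQueue, errorCB) are my reading of nvrhi/d3d12.h, so treat them as assumptions, and the underlying D3D12 device and queue creation per adapter is omitted:

```cpp
// Hypothetical multi-GPU setup: one NVRHI device per physical GPU.
#include <d3d12.h>
#include <nvrhi/d3d12.h>

nvrhi::DeviceHandle wrapAdapter(ID3D12Device* d3dDevice,
                                ID3D12CommandQueue* graphicsQueue,
                                nvrhi::IMessageCallback* messageCallback)
{
    nvrhi::d3d12::DeviceDesc desc;
    desc.pDevice = d3dDevice;                 // D3D12 device created on this adapter
    desc.pGraphicsCommandQueue = graphicsQueue;
    desc.errorCB = messageCallback;           // where NVRHI reports errors
    return nvrhi::d3d12::createDevice(desc);  // independent NVRHI device
}
```

You would call something like this once per adapter, then create the resources and command lists for each window through the NVRHI device that corresponds to the GPU driving that output; the two devices do not share resources.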

I checked out the Donut samples, and NVRHI seems to be a very nice abstraction layer; Donut itself is also very good if you want to avoid reinventing all the redundant work, like a TAA pass.
Is there a specific reason why it doesn’t handle runtime compilation of shaders? I’m tinkering with the idea of writing another client for Quake 3 where I translate every material to a custom shader that returns diffuse, specular, and emissive colors and handles all the texture rotations and blending operations.

Thank you for the positive feedback, Robert!

There is no single, strong reason why NVRHI/Donut do not handle runtime shader compilation; they just use a different path. One reason is that they are designed for shipping demos, among other things, and compiling shaders in shipping code is… well, let’s say frowned upon - it’s even unsupported on some platforms. Also, when you compile shaders offline and list all the necessary permutations, you are less likely to leave bugs in the code, because every code path gets at least some checking from the compiler. Finally, you don’t need to handle the situation in the engine where a shader reload is requested but the shaders cannot be compiled, and you don’t need to implement multi-threaded compilation to speed things up - which can be tricky if the code requests shaders one by one.

That said, it’s certainly possible to implement a custom mechanism for runtime shader compilation in an app that uses NVRHI, as the library just takes a shader binary and doesn’t care where it came from.
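
As a hypothetical example of that, using the legacy D3DCompile path for HLSL - the ShaderDesc fields and the createShader(desc, binary, size) signature reflect my reading of the NVRHI headers, so verify them before relying on this:

```cpp
// Sketch: compile HLSL at runtime and hand the resulting bytecode to NVRHI.
#include <d3dcompiler.h>
#include <wrl/client.h>
#include <nvrhi/nvrhi.h>
#include <string>

nvrhi::ShaderHandle createPixelShaderFromSource(nvrhi::IDevice* device,
                                                const std::string& hlsl)
{
    Microsoft::WRL::ComPtr<ID3DBlob> bytecode, errors;
    HRESULT hr = D3DCompile(hlsl.data(), hlsl.size(), "runtime_shader",
                            nullptr, nullptr, "main", "ps_5_0",
                            0, 0, &bytecode, &errors);
    if (FAILED(hr))
        return nullptr; // inspect errors->GetBufferPointer() in real code

    // NVRHI only sees the compiled binary; it does not care how it was produced.
    nvrhi::ShaderDesc desc;
    desc.shaderType = nvrhi::ShaderType::Pixel;
    desc.entryName = "main";
    return device->createShader(desc,
                                bytecode->GetBufferPointer(),
                                bytecode->GetBufferSize());
}
```

The same idea applies with DXC if you need DXIL or SPIR-V output for the D3D12 and Vulkan backends.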