Introduction to NVIDIA RTX and DirectX Ray Tracing

Originally published at:

“Ray tracing is the future, and it always will be!” has been the tongue-in-cheek phrase used by graphics developers for decades when asked whether real-time ray tracing will ever be feasible. Everyone seems to agree on the first part: ray tracing is the future. That’s because ray tracing is the only technology we know of…

I'm glad to see this is finally getting traction. I'm looking forward to videogames that aren't high-frame-rate shooters using this, hopefully across the board. I play a lot of adventure games and similar titles that would look beautiful with this. It'll be a great day when developers can flip a switch in Unity or UE and just have it work after defining some light sources.

Can you tell us if the Volta tensor units are required for DXR acceleration? Wondering what I would lose by using this on a P6000, for example.

Linux support?

Did the "DirectX" part just fly under your radar?

Coool! Combine this with IBM's photon-based supercomputer circuitry and we'll be onto sumthin' for real.

The post might not say anything about Vulkan, but I'm pretty sure that Nvidia's ray-tracing technology will also reach it. Therefore, Linux support.

I'm really glad that Nvidia is finally getting more serious about DX12. We can finally start seeing games released on that API rather than on the lackluster DX11. Hopefully we also see more Vulkan titles come out rather than just DX.

NVidia's general lack of gains on DX12 vs. DX11 was more a reflection of their superior DX11 drivers and better utilization of their GPUs. So many people seem confused by this.

Async Compute on AMD GPUs helped games like DOOM more fully utilize the processors, as they were otherwise having difficulty utilizing the GPU fully.

That doesn't make one product better or worse.

I believe NVidia said we MAY see DXR support in products without Tensor Cores, but that the difference in performance would be huge; thus you'd need Tensor Cores for any real-time applications like games.

Ray-Tracing may have been announced recently but don't expect much working software for a while.

We will see a MIXTURE of traditional rasterization and ray-tracing techniques for many years to come. This will be a slow process, though I would imagine a 2028 game console would be designed specifically with ray-tracing in mind.

I suspect we may not see any games or mainstream GPUs with ray-tracing until 2020. I think it will be mostly development hardware/software before that point; then we'll see some games with, again, a mixture that may only make sense on the more expensive cards.

DX12 only.
DXR runs on top of DX12, and there's zero chance they will integrate this with DX11, since that would be a coding nightmare.
Not sure why it would matter, since you'd need a video card with TENSOR CORES, and that will support DX12 by default. The only reason you'd care is if you had W8.1 or W7 as your OS.

Of course NVidia's RT code will end up in applications based on Vulkan. Which in turn means they can be used in Linux. It's simply a matter of which applications and when.

I'm more curious whether 2020 game consoles like the PS5 will have anything like Tensor Cores to support a similar AMD method. I'm a PC gamer but adding ray-tracing (even if just a mix of 90% rasterization and say 10% ray-tracing) would push ray-tracing on PC much, much faster.

AMD's Rapid Packed Math does FP16 at twice the rate of FP32, so in a sense they already have tensor cores... kinda. Well, not exactly, but Tensor Cores do FP16 ops at twice the rate too, so a 12 TFLOP console powered by AMD hardware would have 24 TFLOPs of FP16. That's quite a bit below the Titan V at 110, I think it was, but the Titan V is $3,000 and around 800 mm².
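The arithmetic above is easy to sanity-check (the 12 TFLOP console is the hypothetical from the comment, and 110 is the quoted Tensor Core figure for the Titan V):

```python
# Quick sanity check on the FP16 throughput figures in the comment above.
fp32_tflops = 12.0              # hypothetical AMD-powered console
fp16_tflops = fp32_tflops * 2   # Rapid Packed Math doubles FP16 throughput
titan_v_tensor_tflops = 110.0   # quoted Tensor Core figure for the Titan V

print(fp16_tflops)                          # 24.0
print(titan_v_tensor_tflops / fp16_tflops)  # roughly a 4.6x gap
```

So even granting the packed-FP16 comparison, the console figure lands well short of the Titan V number the comment cites.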

Is this global illumination done in two passes? The first pass would just improve baking lighting information into vertices. The second pass would use rays to interpolate this baked lighting for accurate one-bounce reflections. Finally, everything is mixed with the material properties. However, I do not know how soft shadows are calculated.
With pure ray tracing, shadows come out crisp. Do these new shading languages have something for shadow calculation? Something special?
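For what it's worth, soft shadows in a ray tracer usually don't need anything special in the shading language: they fall out of casting several shadow rays toward points sampled across an area light and averaging visibility, instead of one ray to a point light. A minimal sketch of that idea (the sphere occluder and light geometry here are made up for illustration, not anything from DXR):

```python
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    """Standard quadratic ray/sphere test; direction is assumed normalized."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-4  # only count intersections in front of the ray origin

def soft_shadow(point, light_center, light_radius,
                occluder_center, occluder_radius, n=200):
    """Average visibility of an area light from `point` (1.0 = fully lit)."""
    visible = 0
    for _ in range(n):
        # Jitter a sample point across the area light (crude box sampling).
        s = [light_center[i] + random.uniform(-light_radius, light_radius)
             for i in range(3)]
        d = [s[i] - point[i] for i in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [x / norm for x in d]
        if not ray_hits_sphere(point, d, occluder_center, occluder_radius):
            visible += 1
    return visible / n
```

With `light_radius = 0` every shadow ray collapses onto the same direction, giving exactly the crisp shadows the comment mentions; a nonzero radius yields fractional visibility, i.e. a penumbra.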

Do you plan to make the ray-triangle intersection builtins and BVH acceleration routines available to developers directly from CUDA, without OptiX or other middleware involved?
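For context, the ray-triangle intersection builtin being asked about is typically some variant of the Möller–Trumbore test; a minimal sketch of that test follows (plain Python for illustration, not the actual CUDA builtin):

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore: return hit distance t along the ray, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:        # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:    # barycentric coordinate out of range
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None  # only hits in front of the origin
```

An acceleration structure (the BVH the question mentions) just narrows down which triangles a test like this runs against.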