I have not found ANY tutorial that explains in depth how to implement all the RTX/Turing functionalities.
For instance, I have found code examples of shadows, reflections, and ray intersection shaders, but they are poorly documented and come with no tutorials.
All the tutorials I have found only explain a very basic approach that simply mimics a basic rasterizer (I have already found these, so please don’t waste time sending the links).
My project is a game engine built from scratch with a ray-tracing-only renderer, and I need to implement full raytracing shadows, reflections and intersection shaders.
Also, DLSS and Denoising would be great… I have not found anything on these, not even sample code.
If anyone knows a way to learn these in depth, or wants to coach me directly, it would be very appreciated.
(Also, I would pay $$$ for good coaching on these topics)
It depends on how comfortable you are with ray tracing already. I have found that once you’ve read some reference material (in my case I read the first two books of Ray Tracing In One Weekend, https://github.com/RayTracing/raytracing.github.io), the NVIDIA VKRT API concepts are relatively easy to understand.
The hard bits are the Vulkan boilerplate code and the weird “virtual table dispatch” (the shader binding table) used to map different object groups to different sub-shaders.
Hi GPSnoopy, funnily enough I was already learning from your RayTracingInVulkan source code, and yes, I have done all the *In One Weekend tutorials too.
Your project is very inspiring to learn from, and I have advanced a lot in my understanding of NVidia’s extension and ray tracing in general, but I am still stuck on DLSS and denoising.
Also, I was actually trying to find a way to contact you, without any success, and now here you are!
I would like to have a discussion with you (or coaching) to make the right decisions for my project, which is a very, VERY ambitious real-time game engine based on Vulkan and ray tracing.
The project is a full-scale multiplayer space game engine where you can walk on the surface of trillions of realistically sized procedural planets and travel between stars across realistic distances, with realistic space physics.
Are you interested in talking about it?
My engine currently in development:
The game’s Unity prototype I made last year:
Impressive Galaxy4D video, I like the space video as well. Sounds quite involved.
If you’ve read the ray tracing book series and have seen the RayTracingInVulkan source code, then you likely know as much as I do already. This was a hobby project for me.
Denoising is a subject I’m not familiar with, hence why I’ve not implemented it. I’d suggest trawling the internet for articles and demos to learn more about it. The Q2 RTX source code is probably a good example of a production-quality real-time implementation (https://github.com/NVIDIA/Q2RTX). You can even try emailing Peter Shirley (the author of the aforementioned books) for pointers on the subject; he’s quite open on social media.
DLSS is a proprietary implementation by NVIDIA, and the AI model needs to be trained by NVIDIA itself on their own compute platform. IMHO this is unlikely to succeed because it’s so closed. Even their Q2 RTX did not use DLSS.
Hope this helps.
Just saw that NVIDIA has a few videos on this topic on one of their YouTube channels. For example:
Conquering Noisy Images in Ray Tracing with Next Event Estimation
Real Time Path Tracing and Denoising in Quake II RTX
Using Path Tracing: Quake 2 on Vulkan
Quake 2 on Vulkan
Some of these videos are building on top of the experience gained from the previous implementation. So I’ll let you re-arrange them in the correct order.
I have gone through practically all the tutorials and all the official documentation related to ray tracing in Vulkan.
However, there is one very simple thing that I cannot figure out.
In every tutorial and in the documentation, they keep saying that instead of having many BLASes, we should use fewer BLASes, each with multiple geometries, and each geometry with its own hit shader in the binding table.
But nowhere do they explain how to specify the shader binding table offset for individual geometries. It seems to be configurable only per instance in the TLAS. Maybe the trick would be multiple instances, each pointing to a different geometry by specifying a geometry index in the instance, but that also seems to be impossible.
So as I understand it, one can have no more than one hit shader group per BLAS for the primary rays.
But that is not what the documentation says.
Am I missing something here?
Thanks a lot for the help.