I modified the code according to the picture and recompiled the program. Does this mean that verification is enabled?
Yes, as I said, if you rebuilt the application with `#define USE_DEBUG_EXCEPTIONS 1`, the OptiX validation mode was enabled.
It’s surprising that there is no additional output about the compiler error. I’m going to try reproducing this.
After I turned off verification, the FPS did improve a lot,
Right, as I explained in the "Never benchmark OptiX applications in debug mode or with validation enabled!" paragraph above.
In your results it may be a coincidence, but results that seem to be limited to 60 fps are usually an indication that vertical sync is enabled for the OpenGL swap buffers on a 60 Hz monitor.
Do not benchmark graphics applications with vsync enabled!
Please go to the NVIDIA Display Control Panel, find the Vertical sync option inside Manage 3D Settings, and change it to Off.
Press Apply and close the Control Panel again.
Then benchmark again and check if the results exceed 60 fps now.
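To see why vsync caps the numbers: with vsync on, a finished frame is only displayed on the next vertical blank, so the effective frame time is rounded up to a whole number of refresh intervals. A small standalone sketch (the helper name and the numbers are mine, purely for illustration):

```cpp
#include <cmath>

// Estimates the displayed frame rate under vsync: the render time is rounded
// up to the next multiple of the monitor's refresh interval before display.
inline double vsyncLimitedFps(double renderTimeMs, double refreshHz)
{
  const double intervalMs  = 1000.0 / refreshHz;
  const double displayedMs = std::ceil(renderTimeMs / intervalMs) * intervalMs;
  return 1000.0 / displayedMs;
}
```

For example, on a 60 Hz monitor a 10 ms render still shows 60 fps, while a 17 ms render immediately drops to 30 fps, which is why vsync'ed numbers say little about the actual render time.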
Also note that I do not update the HDR texture which displays the image on every render call. Read the description of the `present` option inside one of the system description text files.
The `system_mdl_demo.txt` sets the rendering resolution to 1280x360 irrespective of the window client area size.
For comparisons, on my RTX 6000 Ada board, that scene accumulates and displays every sub-frame image (present 1, interop 1) with around 250 to 1200 fps depending on the view.
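For reference, these options are plain key/value lines in the system description text files. A minimal fragment (the option names `resolution`, `present` and `interop` are the ones mentioned in this thread; check the shipped `system_*.txt` files for the exact syntax and the full list of options):

```
resolution 1280 720
present 1
interop 1
```

With `present 1` every accumulated sub-frame is displayed, and `interop 1` selects the CUDA-OpenGL interop path for the display texture.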
I guess the error may be related to the DevIL image loading library, because when I run `rtigo9.exe -s system_rtigo9_demo.txt -d scene_rtigo9_demo.txt`, the image is not loaded successfully.
I cannot reproduce the DevIL error 1291 (`#define IL_INVALID_FILE_HEADER 0x0508`) on my system when using DevIL 1.8.0.
Since the application started at all, at least the DevIL.dll was found. I therefore assume you copied the DevIL.dll, ILU.dll and ILUT.dll files from the local DevIL installation against which you compiled the example applications next to the executables.
Also, if the texture images were not found at all, the error message would say `ERROR: Picture::load() <filename> not found`, so you apparently also copied all files from the `data` folder next to the executables. Both steps are described in the README.md Building chapter.
Note that in the MDL_renderer example, the textures used inside the MDL materials are loaded by the `nv_openimageio.dll` by default (alternatively by the `nv_freeimage.dll` when you switch to that inside the source code). DevIL is only used for the emissive textures on the built-in lights (environment, rectangle, IES) in that example.
When starting the examples, the working directory must be the module directory, that is, the folder containing the executables, because the texture image files are looked up relative to it.
If that isn’t working, either your DevIL library is broken or not the right version (the default build uses 1.8.0), or the image files were corrupted during download or copying.
Please try loading the `*.png` and `*.jpg` images inside the executable folder into some paint program to see if the files themselves work.
If not, try downloading them again and verify that they work.
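Another quick sanity check for transfer corruption, without a paint program, is to compare the first bytes of a file against the format signatures; a file that fails this check is exactly what an image loader reports as an invalid file header. A self-contained sketch (the helper names are mine, not from the examples):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// PNG files always start with the fixed 8-byte signature below.
inline bool looksLikePng(const std::vector<std::uint8_t>& bytes)
{
  static const std::uint8_t signature[8] = {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};
  if (bytes.size() < 8)
    return false;
  for (std::size_t i = 0; i < 8; ++i)
    if (bytes[i] != signature[i])
      return false;
  return true;
}

// JPEG files start with the SOI marker 0xFF 0xD8 followed by another 0xFF.
inline bool looksLikeJpeg(const std::vector<std::uint8_t>& bytes)
{
  return bytes.size() >= 3 && bytes[0] == 0xFF && bytes[1] == 0xD8 && bytes[2] == 0xFF;
}
```

Read the first few bytes of the suspect file into a buffer and run these checks; a mismatch means the file was damaged before the loader ever saw it.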
Also I normally do not use the Windows PowerShell but the old Windows command prompt to run all my applications.
I would have expected that the command line inside the PowerShell would need to be `.\rtigo9.exe` when running from inside the module directory, unless you added `.` to the PATH environment variable.
Works on my Win10 system with either the command prompt or the PowerShell though.
Also note that `rtigo9` is the slower example compared to `rtigo10` and `rtigo12` because it works differently and handles cutout opacity. They mostly use the same scene description, so the system and scene description files with `_rtigo9_` in the name work for all three executables. `rtigo12` can handle two more BXDFs and supports singular lights better.
For comparison: `rtigo12.exe -s system_rtigo9_demo.txt -d scene_rtigo9_demo.txt` accumulates around 1200 samples per second with display on an RTX 6000 Ada.
Again, read the `README.md` for the differences (also in lighting) between these examples.
The examples do not support Unicode characters in paths. That means if anything uses Chinese characters inside these examples (including the environment variables `MDL_SYSTEM_PATH` and `MDL_USER_PATH` set by the MDL vMaterials installation), the files would probably not be found, but your error is different.
After I turned off verification, the FPS did improve a lot, but it seems to be slower than the CUDA renderer I developed.
The following picture is a screenshot of the renderer I developed with CUDA. An RTX 2070 gets about 25 FPS at 1280x720 resolution, so I want to use DXR, Vulkan or OptiX to turn on BVH hardware acceleration to speed up the FPS, and then combine it with MDL to provide more realistic image quality, but I don’t know if MDL will cause a very low FPS.
You cannot compare the performance of two ray tracers like this without knowing exactly what light transport they implement, how they do that, and especially how many rays are shot per frame.
It’s not clear what the 25 fps mean for your images.
If you get that final frame quality for each individual frame with 25 fps on an RTX 2070, congratulations, that’s looking great.
If you mean 25 samples per pixel per second in a progressively accumulating Monte Carlo renderer and the posted images needed N samples per pixel, then that would be reasonable.
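The difference between those two readings is just arithmetic. A tiny sketch of the conversion (hypothetical helper, illustrative numbers):

```cpp
// In a progressive Monte Carlo renderer that accumulates S samples per pixel
// per second, reaching the quality of an N-samples-per-pixel image takes
// N / S seconds of wall-clock time.
inline double secondsToReachQuality(double targetSamplesPerPixel, double samplesPerSecond)
{
  return targetSamplesPerPixel / samplesPerSecond;
}
```

So at 25 samples per pixel per second, an image that needs 500 samples per pixel converges in 20 seconds, while "25 finished frames per second" would be a completely different claim.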
If you already have a CUDA based renderer, the part which could be enhanced by OptiX would be the acceleration structure (BVH) building and traversal and the ray-triangle intersection.
The shading part would not benefit much because that would stay mostly the same instructions in either ray tracing implementation.
That means: if you can benchmark inside your CUDA application what percentage of the overall time is spent in BVH traversal and ray-triangle intersection only, that gives you an estimate of the maximum improvement you could get from using the RT cores on RTX boards for that part.
I would definitely not exchange the material shading part of your current ray tracer implementation before having the BVH traversal and ray-triangle intersection hardware accelerated with OptiX or DXR or Vulkan.
To understand some differences between OptiX and Vulkan raytracing (and DXR) please read this thread: "What are the advantages and differences between Optix 7 and Vulkan Raytracing API?"
Also, I don’t know how GLTF_renderer.exe runs or what parameters to pass, and I really want to try the GLTF_renderer.
1.) All my examples print a usage message when using the command line options `?` or `help` or `--help`, or when there is an error inside the given command line options.
2.) The root README.md inside the repository describes what each example does. For the GLTF_renderer there is this paragraph inside it: https://github.com/NVIDIA/OptiX_Apps?tab=readme-ov-file#simple-and-fast-physically-based-rendering
3.) The Running chapter also mentions the GLTF_renderer at the very end. (Look at your very first screenshot you posted in this thread.)
4.) For the GLTF_renderer example there exists a very extensive README.md which describes all command line parameters, the whole GUI, and some more implementation details.
5.) You have the source code to the applications. All of them have code inside the main.cpp file which handles command line options.