OptiX advanced samples on GitHub

In case you’re missing some of the more advanced samples like “glass” that were removed from the
SDK in OptiX 4.0, we’ve put a handful of them on GitHub:

OptiX Advanced Samples

Along the way we switched from GLUT to GLFW and added UIs with the imgui library; for
example, the new “optixGlass” sample has sliders for glass color and other things.

Build instructions for Linux and Windows are in the top level directory.



The OptiX Advanced Samples repository on GitHub has received some updates in the meantime.

To accompany the GTC 2018 tutorial S8518 - An Introduction to NVIDIA OptiX, a set of nine increasingly complex examples has been added inside the optixIntroduction sub-folder.

The extensive README there explains which features each example adds.
Although it starts out very basic, the tutorial quickly builds the foundation for a renderer with a substantial feature set, in the form of an elegant and easy-to-extend uni-directional path tracer architecture.

The recording and slides (without animations) of the GTC presentation are publicly available online.

Another new example, optixParticleVolumes, has been added recently.
It demonstrates how to use the OptiX BVH traversal for volumetric depth samples of unstructured geometry and provides a reference for particle volume rendering, similar to splatting, within OptiX.


A new OptiX Introduction example, optixIntro_10, has been added to show how to use the OptiX 5.1.0 HDR DL denoiser implementation.

With the OptiX 5.1.0 release, the built-in DL denoiser has been improved to support HDR input directly, which allows moving the tone mapping back into the final post-processing step after the denoising stage.

Please compare the necessary code changes against the optixIntro_09 example, which showed how to use the standard DL denoiser supported since OptiX 5.0.0.


I’m happy to announce that new OptiX 7 advanced samples have gone live here:


The first two examples, intro_runtime and intro_driver, both port the seventh introduction example from the OptiX 5/6 API to the OptiX 7 API, using the CUDA Runtime API and the CUDA Driver API, respectively.

The intro_denoiser example adds the OptiX 7 HDR DL denoiser on top, with RGB, RGB+Albedo, or RGB+Albedo+Normal buffers in either float4 or half4 buffer formats, controlled via a compile-time option.
It effectively matches the optixIntro_10 advanced sample, except that it omits motion blur and adds normal buffer support.

The rtigo3 example demonstrates different strategies for distributing multi-GPU rendering, where all GPUs work on the same frame, combined with different OpenGL interop methods. Of course it also works on a single GPU for comparison.
Its mode of operation and scene layout are controlled by two simple text files. The rendering resolution is independent of the window size. It also contains code to load triangle mesh data from any file format supported by the ASSIMP library.

Please read the README.md and system_*.txt and scene_*.txt files inside the repository for more information.

For more introductory examples using the OptiX 7 API, please refer to the OptiX 7 SIGGRAPH course examples first:


The OptiX 7 advanced samples have been updated to also compile with the OptiX SDK 7.1.0 now.

The OptiX SDK 7.1.0 release changed a few API structures (listed in its release notes) which required some small adjustments to the existing OptiX SDK 7.0.0 based examples to compile. Look for compile-time decisions based on OPTIX_VERSION to find the differences.

Currently the examples still build against OptiX 7.0.0 by default. To change that, simply replace find_package(OptiX7 REQUIRED) with find_package(OptiX71 REQUIRED) in the CMakeLists.txt of each individual example you want to switch to the new SDK.
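For example, switching one example over to the 7.1.0 SDK is a one-line change in its CMakeLists.txt:

```cmake
# Before: builds against OptiX SDK 7.0.0
find_package(OptiX7 REQUIRED)

# After: builds against OptiX SDK 7.1.0
find_package(OptiX71 REQUIRED)
```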

Note that the OptiX SDK 7.1.0 strictly requires R450 drivers and only works on Maxwell GPUs and newer.

I also overhauled the default structure initializations, the text output, and fixed a multi-GPU synchronization issue inside the rtigo3 compositor() routine.

A new example called nvlink_shared has been added, which demonstrates how to share textures and/or geometry acceleration structures among the GPU devices inside an NVLINK configuration to increase the loadable scene size.
It’s derived from the existing rtigo3 example, but moves the resource management from the Device class up to the Raytracer class, because the latter decides which device shares which resource.

Please read the README.md and source code comments for more information. Find them via the link in the post above.


The OptiX 7 advanced samples have been updated to automatically use the latest installed OptiX SDK version, which includes support for OptiX SDK 7.2.0 (Oct. 2020) and, in the meantime, OptiX 7.4.0.

Support for the NVIDIA Management Library (NVML) has been added to the nvlink_shared and rtigo3 examples to determine the system’s current NVLINK topology. That allows supporting arbitrary NVLINK configurations, especially with more than two GPUs installed.

A new introduction example intro_motion_blur has been added as well.
It demonstrates how to implement motion blur with linear matrix transforms, scale-rotate-translate (SRT) motion transforms, and optional camera motion blur inside an animation timeline, where the frame number, frames per second, object velocity, and angular velocity of the rotating object can be changed interactively. This example is only built when using OptiX SDK 7.2.0 or newer.

Please always refer to the updated README.md, which contains more information.


The OptiX 7 advanced samples have been updated to support Microsoft Visual Studio 2022 and OptiX SDK 7.5.0 now.

When using OptiX SDK 7.5.0 and CUDA Toolkit 11.7 or newer, the examples’ CMakeLists.txt scripts will automatically switch the NVCC output from PTX to OptiX IR.

Additionally, the CMake macro generating the custom build rules for the input *.cu files has been replaced with one that covers both *.ptx and *.optixir outputs (see NVCUDA_COMPILE_MODULE).

The examples themselves select between *.ptx and *.optixir input files at compile time, depending on the USE_OPTIX_IR definition added by the CMake scripts. That required centralizing the module filename definitions, which then led to more streamlined handling of the OptixModule, OptixProgramGroupDesc, and OptixProgramGroup objects.


The OptiX 7 advanced examples have been updated with two new examples (as explained inside the README.md there).

rtigo9 is similar to nvlink_shared, but is also optimized for single-GPU use: the compositing step is skipped unless multiple GPUs are used. The main difference is that it shows how to implement more light types.

It supports the following light types:

  • Constant environment light: Uniformly sampled, constant HDR color built from emission color and multiplier.
  • Spherical environment map light: Importance sampled area light. Now supporting arbitrary orientations of the environment via a rotation matrix. Also supporting low dynamic range textures scaled by the emission multiplier (as in all light types).
  • Point light: Singular light type with or without colored omnidirectional projection texture.
  • Spot light: Singular light type with cone spread angle in range [0, 180] degrees (hemisphere) and falloff (exponent on a cosine), with or without colored projection texture limited to the sphere cap described by the cone angle.
  • IES light: Singular light type (point light) with omnidirectional emission distribution defined by an IES light profile file which gets converted to a float texture on load. With or without additional colored projection texture.
  • Rectangular light: Area light with constant color or importance sampled emission texture. Also supports a cutout opacity texture.
  • Arbitrary triangle mesh light: Uniformly sampled light geometry, with or without emission texture. Also supports a cutout opacity texture.

To be able to define scenes with these different light types, this example’s scene description file format has been enhanced. The camera and tonemapper settings defined inside the system description file can now be overridden inside the scene description. The previously hardcoded light definitions inside the system description file have been removed; the scene description now allows defining light materials, creating specific light types with these emissive materials, and assigning them to arbitrary triangle meshes.
Please read the system_rtigo9_demo.txt and scene_rtigo9_demo.txt files which explain the creation of all supported light types inside a single scene.

The previous compile-time switch inside the config.h file to enable or disable direct lighting (“next event estimation”) has also been converted to a runtime switch which can be toggled inside the GUI. Note that the singular light types do not work without direct lighting enabled, because they do not exist as geometry inside the scene and cannot be hit implicitly. (The probability of hitting them is zero; such lights do not exist in the physical world.)

In addition to CUDA peer-to-peer data sharing via NVLINK, the rtigo9 example also allows sharing via PCI-E, but for performance reasons this is absolutely not recommended for geometry. Please read the explanation of the peerToPeer option inside the system description.


Light types shown in the image above: The grey background is from a constant environment light.
Then from left to right: point light, point light with projection texture, spot light with cone angle and falloff, spot light with projection texture, IES light, IES light with projection texture, rectangle area light, rectangle area light with importance sampled emission texture, arbitrary mesh light (cow), arbitrary mesh light with emission texture.

rtigo10 is meant to show how to architect a renderer for maximum performance with the fastest possible shadow/visibility ray type implementation and the smallest possible shader binding table layout.

It’s based on rtigo9 and supports the same system and scene description file format, but removes support for cutout opacity and for surface materials on emissive area light geometry (arbitrary mesh lights). The renderer architecture implements all materials as individual closesthit programs, instead of a single closesthit program plus direct callable programs per material as in all previous examples above. Lens shaders and explicit light sampling are still done with direct callable programs per light type for optimal code size.

To reduce the shader binding table size: where the previous examples used one hit record per instance, with additional data for the geometry vertex attributes, the index data defining the mesh topology, plus material and light IDs, the shader binding table in rtigo10 holds only one hit record per material shader, selected via the instance’s sbtOffset field. All other data is indexed via the user-defined instanceId field.

On top of that, by not supporting cutout opacity there is no need for anyhit programs in the whole pipeline. The shadow/visibility test ray type is implemented with just a miss shader, which also means there is no need to store hit records for the shadow ray type inside the shader binding table at all.


The OptiX 7 Advanced Examples repository has now been updated to support OptiX SDK 7.6.0.
