OptixHello embedded in a new application runs in Release but not in Debug mode

Hi,
after having successfully built and run (Release/Debug) all of the OptiX 7.7 SDK samples (with CUDA 12.1),
I have managed to run the optixHello sample from my own application.
Strangely, I can only see it running in the Release configuration. In the Debug configuration I get an

Microsoft C++ exception: sutil::Exception at memory location 0x000000E64EEF4E48.

at:

        OPTIX_CHECK_LOG(optixModuleCreateFromPTX(
            contextOptix,
            &module_compile_options,
            &pipeline_compile_options,
            input,
            inputSize,
            LOG, &LOG_SIZE,
            &module
        ));

If I comment out this part of the code, I get the same error at the next OPTIX_CHECK_LOG:

        OPTIX_CHECK_LOG(optixProgramGroupCreate(
            contextOptix,
            &miss_prog_group_desc,
            1,   // num program groups
            &program_group_options,
            LOG, &LOG_SIZE,
            &miss_prog_group
        ));

It looks like something is undefined in Debug mode, while it is defined and works well in Release mode.

Any ideas why it does not run in Debug mode?
Thanks!

PS:
I don’t know if it is related, but I have set a breakpoint before this OPTIX_CHECK_LOG and I can see that the pipeline is not defined in Debug mode:

Release:
pipeline user32.dll!0x00007ff959d4eb96 (load symbols for additional information) {...} OptixPipeline_t *
Debug:
pipeline 0x0000000000000000 <NULL> OptixPipeline_t *

It looks like something is undefined in Debug mode, while it is defined and works well in Release mode.

It’s not possible to analyze that without the complete source code.
Usually it’s the other way round with uninitialized data.

Some guesses:

Which optixHello source code are you looking at?
OptiX SDK 7.7.0 does not define optixModuleCreateFromPTX anymore. It has been renamed to optixModuleCreate because it can take both PTX and OptiX IR input.
https://raytracing-docs.nvidia.com/optix7/api/optix__host_8h.html#ad270324129ed5f291b80b128daf8edea
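
For reference, a minimal sketch of the renamed call in 7.7, reusing the variable names from your snippet above (the port is essentially just the new function name):

        // Sketch: optixModuleCreate replaces optixModuleCreateFromPTX in OptiX 7.7.
        // The argument list is unchanged; the input may be PTX or OptiX IR.
        OPTIX_CHECK_LOG(optixModuleCreate(
            contextOptix,
            &module_compile_options,
            &pipeline_compile_options,
            input,
            inputSize,
            LOG, &LOG_SIZE,
            &module
        ));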

First of all please make sure that you always null all OptiX structures you use in your code.
Things like this: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/MDL_renderer/src/Device.cpp#L347
That is required in case the structures change in the future and add new fields.
Usually zero is the default for OptiX field members.
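
As a sketch of what that looks like in practice (using the structure names from your snippets plus the device context options), simply brace-initialize everything before assigning individual fields:

// Sketch: zero-initialize every OptiX host structure before filling in fields,
// so that fields added in future SDK versions keep their default (zero) value.
OptixDeviceContextOptions   context_options          = {};
OptixModuleCompileOptions   module_compile_options   = {};
OptixPipelineCompileOptions pipeline_compile_options = {};
OptixProgramGroupOptions    program_group_options    = {};
OptixProgramGroupDesc       miss_prog_group_desc     = {};
// ... then assign only the fields you actually need.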

You might have compiled the OptiX device code with debug information but didn’t set up the OptixModuleCompileOptions and OptixPipelineLinkOptions to use debug information.
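
A hedged sketch of matching OptixModuleCompileOptions, assuming the *.cu files are built with -G (device debug) in the Debug target and optimized in Release:

OptixModuleCompileOptions module_compile_options = {};
#ifndef NDEBUG
// Debug target: device code compiled with -G needs full debug info and no optimization.
module_compile_options.optLevel   = OPTIX_COMPILE_OPTIMIZATION_LEVEL_0;
module_compile_options.debugLevel = OPTIX_COMPILE_DEBUG_LEVEL_FULL;
#else
// Release target: optimized device code with line info only (fast, still maps to source lines).
module_compile_options.optLevel   = OPTIX_COMPILE_OPTIMIZATION_LEVEL_3;
module_compile_options.debugLevel = OPTIX_COMPILE_DEBUG_LEVEL_MINIMAL;
#endif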

To get more information, please set a logger callback on the OptixDeviceContextOptions, set its level to 4, and enable the validationMode.
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/MDL_renderer/src/Device.cpp#L281
https://raytracing-docs.nvidia.com/optix7/guide/index.html#context#validation-mode
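
A minimal sketch of that context setup (the callback name is just an example; 0 as the first argument of optixDeviceContextCreate means "use the current CUDA context"):

// Example logger callback; OptiX calls it with a level, a tag and the message text.
static void contextLogCallback(unsigned int level, const char* tag, const char* message, void* /*cbdata*/)
{
    std::cerr << "[" << level << "][" << tag << "]: " << message << "\n"; // needs <iostream>
}

OptixDeviceContextOptions options = {};
options.logCallbackFunction = &contextLogCallback;
options.logCallbackLevel    = 4; // 4 = most verbose ("print") level
options.validationMode      = OPTIX_DEVICE_CONTEXT_VALIDATION_MODE_ALL;

OptixDeviceContext contextOptix = nullptr;
OPTIX_CHECK(optixDeviceContextCreate(0, &options, &contextOptix));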

Note that the OptiX SDK examples are building the OptiX device code with debug information for the debug target. The resulting runtime performance is really, really slow.
I would not recommend doing that unless really required. I usually translate the OptiX device code as release with line info for all targets, which results in fast runtime in debug targets and helps as long as you only need to debug host code.

What is the proper process of including the optixHello sample code in a new testClienApp solution and debugging this new app?

I wouldn’t like to use project references, I would prefer to use the provided optix dlls/pdbs (sutil_7_sdk, glad, glfw3) and set breakpoints in sutil for example when debugging the testClienApp solution.
So far,

  • I have managed to run the Release testClienApp with the Release OptiX DLL files, but when I run the testClienApp in Debug mode with the Debug OptiX DLL files I get the exceptions described in my first post.

  • I can run the testClienApp in Debug mode with the Release OptiX DLL files.

Please provide a minimal and complete reproducer in failing state. The given information is insufficient to analyze what you’ve done exactly.

Other than that, if you’re not working inside the OptiX SDK example application framework itself, I would not recommend using the OptiX SDK sutil library in your own applications. It’s meant to simplify the OptiX SDK examples. None of its code is strictly required for your own projects, and you also incur some issues with it, like hardcoded folder names for resources. Check what the sampleDataFilePath(), sampleFilePath() and getSampleDir() functions do. You simply don’t want that in your own applications.

glad and glfw3 are just 3rd party libraries used inside the OptiX SDK. You can include them into your own OptiX application framework from wherever you like. There is not really a need to use the ones built by the OptiX SDK example framework. You could even use completely different libraries to build your own OptiX application framework. This is all independent of the OptiX API itself.

Then the question is how you built your new application’s solution.

If you’re using CMake to build the solution for that, then it’s a matter of finding the 3rd party library include folders, adding them to the additional includes, and copying their DLLs over to your executable’s module directory.

Compiling the OptiX device code CUDA sources could be done in different ways, with custom build rules or using the built-in CMake CUDA language support.
The OptiX SDK examples generate custom build rules one way; my OptiX 7 examples (link in this sticky thread) do it in a different way with a custom macro NVCUDA_COMPILE_MODULE.

If you’re building the Microsoft Visual Studio solution completely by hand, then it’s basically a matter of adding the required *.cpp and *.h files of your application’s code, then adding the OptiX device CUDA code *.cu files and setting up the CUDA Visual Studio integration options for them exactly how it’s required!
https://raytracing-docs.nvidia.com/optix7/guide/index.html#program_pipeline_creation#program-input

If you’re building the Microsoft Visual Studio solution completely by hand, then it’s basically a matter of adding the required *.cpp and *.h files of your application’s code, then adding the OptiX device CUDA code *.cu files and setting up the CUDA Visual Studio integration options for them exactly how it’s required!

Indeed, this is my case. I am trying to create the simplest VS solution which uses CUDA exactly the way it happens in the optixHello sample. I have actually implemented it, as I wrote earlier, in Release mode. I would like to be able to do it in Debug mode too.

From what you write I have already:

  • added the 3rd party libs to the additional includes and copied the DLLs to the executable directory
  • added the OptiX device CUDA code “draw_solid_color.cu”
  • wrote the necessary simple code to draw a square of a color (pink, red, blue, etc.) using CUDA. This is mainly based on the optixHello sample.

What I don’t understand from your help (which seems to be the most important part) is that you don’t recommend the use of sutil, yet while trying to tweak and learn from the optixHello sample and reading the OptiX documentation, I see that sutil functions are used all over the place.
For example, in the 7.6 optixHello:

const char* input = sutil::getInputData
sutil::CUDAOutputBuffer<uchar4> output_buffer(sutil::CUDAOutputBufferType::CUDA_DEVICE, width, height);
sutil::ImageBuffer buffer;
buffer.pixel_format = sutil::BufferImageFormat::UNSIGNED_BYTE4;
....

So, according to what you suggest, should I rewrite my own sutil functions from scratch?

The issue with your problem descriptions is that only you know what you programmed. It’s simply not possible to analyze why the debug sutil DLL is not working in your new project without a reproducer.

If the CUDAOutputBuffer.h is all you’re using sutil for, then there shouldn’t be a need to link against its DLL for just that.

There isn’t necessarily a need to rewrite that CUDAOutputBuffer class either. It’s a header-only implementation which means you could simply copy that header (and the Exception.h it includes) into your project and use it directly if that removes the sutil library dependency.

The most important thing is to learn which CUDA runtime or driver API calls you need to make to set up the CUDA resources for OptiX inside your application. That means you should be able to implement this yourself in the end.
Whether you do that inside a class or directly inside your application doesn’t really matter for understanding the underlying concepts of the CUDA resource management required inside an OptiX application.
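
As a rough sketch of that setup with the CUDA runtime API (error checks omitted; width, height and the later pipeline objects are placeholders):

// Minimal CUDA/OptiX initialization sketch for a "Hello OptiX"-style application.
cudaFree(0);                    // touch the CUDA runtime to create the primary context
OPTIX_CHECK(optixInit());       // load the OptiX entry points from the driver

OptixDeviceContextOptions options = {};
OptixDeviceContext        context = nullptr;
OPTIX_CHECK(optixDeviceContextCreate(0, &options, &context)); // 0 = current CUDA context

cudaStream_t stream = nullptr;
cudaStreamCreate(&stream);

uchar4* d_image = nullptr;      // device memory that receives the launch output
cudaMalloc(reinterpret_cast<void**>(&d_image), width * height * sizeof(uchar4));

// ... followed by module, program group, pipeline and SBT creation, and finally optixLaunch().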

That output buffer is special in that it can be device-only memory, a pointer to a mapped OpenGL pixel buffer object on the device, pinned host memory, or even CUDA peer-to-peer access on another GPU connected with NVLINK.
It’s used in different OpenGL examples showing these things. For a first “Hello OptiX” program most of that flexibility isn’t needed.

It’s using GLAD for the CUDA-OpenGL interoperability which is one of the main points for that output buffer abstraction. Maybe you use Vulkan in the future to do the display part, then that wouldn’t be required either.

For other buffers you use inside OptiX programs, that is usually not required, and then you can use CUDA device buffers via their CUdeviceptr directly as well.
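
For example (a sketch with a hypothetical Params struct, following the pattern used in optixHello), a plain device allocation can be passed to optixLaunch directly as a CUdeviceptr:

// Sketch: a plain device allocation used as the launch parameter block (Params is hypothetical).
Params params = {};
params.image  = d_image;        // e.g. the output buffer allocated earlier
CUdeviceptr d_params = 0;
cudaMalloc(reinterpret_cast<void**>(&d_params), sizeof(Params));
cudaMemcpy(reinterpret_cast<void*>(d_params), &params, sizeof(Params), cudaMemcpyHostToDevice);

OPTIX_CHECK(optixLaunch(pipeline, stream, d_params, sizeof(Params), &sbt, width, height, /*depth=*/1));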

As an alternative example implementation, my OptiX 7 examples handle device or CUDA-OpenGL interop for the displayed output buffer directly.
It’s the only buffer needing that special handling.
If you search for m_systemParameter.outputBuffer inside the Application.cpp file of the first introductory example (using the CUDA runtime API), you will see how it is either allocated as a device buffer directly (cudaMalloc/cudaFree), which is then copied to host memory (m_outputBuffer), or set from an OpenGL pixel buffer object pointer when using CUDA-OpenGL interop (all code inside the m_interop cases).
It’s then finally uploaded to an OpenGL texture image from either the host memory or the PBO (faster).
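
In the plain device-memory case that path boils down to something like this sketch (generic names, not the verbatim example code):

// Sketch of the non-interop path: render into a device buffer, copy it to the host,
// then upload the host pixels to an OpenGL texture for display.
float4* d_outputBuffer = nullptr;
cudaMalloc(reinterpret_cast<void**>(&d_outputBuffer), width * height * sizeof(float4));
// ... optixLaunch() writes into d_outputBuffer ...
std::vector<float4> host_pixels(width * height);
cudaMemcpy(host_pixels.data(), d_outputBuffer, width * height * sizeof(float4), cudaMemcpyDeviceToHost);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, host_pixels.data());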

Very useful notes, thanks for the guidance and the…patience :)