Segfault when using printf in closesthit shader

I keep getting a segfault when I use printf in the closesthit shader. I’m able to print in the raygen shader, so it’s weird I can’t do it here.

Works on my machine.

Please provide more information about your system configuration and, ideally, a minimal and complete reproducer project so that we can try to reproduce the behavior.

These are all mandatory for a bug report:
OS version, installed GPU(s), VRAM amount, display driver version, OptiX major.minor.micro version, CUDA toolkit version used to generate the input PTX, host compiler version.

  • If you’re not on the newest display drivers for your configuration, try whether upgrading them fixes the issue.

  • If you’re not on the newest OptiX SDK version supported by your display driver, then try switching to the newest possible OptiX SDK version.
    Please read the OptiX Release Notes (linked below the download button of each version).

  • If you used a CUDA toolkit newer than the version recommended in the OptiX Release Notes, try the recommended one and see if that works.

If any of the above changes the behavior, please let us know which combination works and which fails.

I’m assuming you’re limiting the printf to a specific launch index so that you don’t exceed the CUDA printf output buffer limit?
Otherwise the output is overwritten in a circular buffer, but that shouldn’t crash.
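Something along these lines is what I mean (a minimal sketch; the program name, launch index, and message are just examples):

```cpp
#include <optix.h>
#include <cstdio>

extern "C" __global__ void __closesthit__example()
{
    const uint3 idx = optixGetLaunchIndex();

    // Only print for a single launch index to avoid flooding
    // (and wrapping) the CUDA printf output buffer.
    if (idx.x == 256 && idx.y == 256)
    {
        printf("closesthit: primitive index = %u\n", optixGetPrimitiveIndex());
    }
}
```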

Did you try enabling OptiX validation mode with a logging callback to see if there is any additional information?
https://forums.developer.nvidia.com/t/need-help-understanding-why-optixlaunch-is-failing/275372/4
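For reference, enabling validation mode with a logging callback happens at device context creation and would look roughly like this (a sketch; `cuCtx` is assumed to be your current CUDA context, and error checking on the OptiX call is omitted):

```cpp
#include <optix.h>
#include <cstdio>

static void contextLogCallback(unsigned int level, const char* tag,
                               const char* message, void* /*cbdata*/)
{
    fprintf(stderr, "[%2u][%12s]: %s\n", level, tag, message);
}

// ... inside your initialization code:
OptixDeviceContextOptions options = {};
options.logCallbackFunction = &contextLogCallback;
options.logCallbackLevel    = 4; // 4 == print everything up to status/progress messages
options.validationMode      = OPTIX_DEVICE_CONTEXT_VALIDATION_MODE_ALL;

OptixDeviceContext context = nullptr;
optixDeviceContextCreate(cuCtx, &options, &context);
```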

Hi @droettger, sorry for the late response.
This bug has occurred on three different machines with the following GPUs: RTX 3090, RTX 3090 Ti, RTX A6000.
The only commonality between these machines is the use of OptiX 7.6.

The only commonality between these machines is the use of OptiX 7.6

Are you saying that happens with different OS platforms, different OS versions, different display driver versions, different CUDA toolkits used to generate the OptiX module input, with PTX or OptiX-IR input?

With the given information, I cannot say why you’re experiencing that.
Analyzing it internally would require the complete system configuration descriptions of the failing systems and a minimal and complete reproducer.

I’m using printf to debug OptiX device code a lot and have not experienced this under Windows 10 with any OptiX 7 or 8 version.

What are the OptiX module and pipeline compile and link options?
There is one known open issue with OptiX-IR device code built with debug compile and link options while validation is enabled:
https://forums.developer.nvidia.com/t/debuggable-optix-ir-makes-launching-a-pipeline-failed/274765

Does this happen with fully optimized OptiX device code and release module and pipeline compile and link options?
Does it happen with OptiX validation enabled or disabled in either case?
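For comparison, release-style module and pipeline options would look roughly like this (a sketch; note that `OptixPipelineLinkOptions` still contains a `debugLevel` field in OptiX 7.6, which was removed in 7.7):

```cpp
#include <optix.h>

OptixModuleCompileOptions moduleOptions = {};
moduleOptions.maxRegisterCount = OPTIX_COMPILE_DEFAULT_MAX_REGISTER_COUNT;
moduleOptions.optLevel         = OPTIX_COMPILE_OPTIMIZATION_LEVEL_3; // full optimization
moduleOptions.debugLevel       = OPTIX_COMPILE_DEBUG_LEVEL_NONE;     // no debug info

OptixPipelineLinkOptions linkOptions = {};
linkOptions.maxTraceDepth = 2; // example value; match your recursion depth
linkOptions.debugLevel    = OPTIX_COMPILE_DEBUG_LEVEL_NONE; // field exists up to OptiX 7.6
```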

I don’t need to solve this issue that badly, so I’m sorry, but I’m not going to put in the work to reproduce it. All I can say is that the issue happens under Arch Linux and Debian; I have not tested Windows.