OptiX 5.1.1 precompiled samples on GeForce RTX 3060 do not work

Good morning. I've changed computers and I'm having trouble porting my program to OptiX 6.5: the selectors have been removed, which breaks my shadow calculation.
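For context, this is the kind of shadow pattern I'm trying to reproduce. The sketch below follows the convention used by the OptiX 6.x SDK samples (the names `PerRayData_shadow`, `any_hit_shadow`, and the shadow ray type are taken from those samples, not from my actual app):

```cuda
// Sketch of the shadow pattern used in the OptiX 6.x SDK samples.
// Names (PerRayData_shadow, any_hit_shadow) follow the samples; adapt as needed.
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

struct PerRayData_shadow
{
  optix::float3 attenuation;
};

rtDeclareVariable(PerRayData_shadow, prd_shadow, rtPayload, );

// Any-hit program attached to the shadow ray type: any intersection
// fully occludes the light, so zero the attenuation and stop traversal.
RT_PROGRAM void any_hit_shadow()
{
  prd_shadow.attenuation = optix::make_float3(0.0f);
  rtTerminateRay();
}
```

In the closest-hit program the shadow ray is then traced with `rtTrace` against the top-level group using the shadow ray type, and the returned `attenuation` scales the light contribution.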

I wanted to fall back to OptiX 5.1, but the NVIDIA precompiled samples work on a GTX 1660 SUPER (desktop) and not on a GeForce RTX 3060 Laptop GPU.

If I run “optixDeviceQuery” (OptiX 5.1.1), I get:

```
OptiX 5.1.1
Number of Devices = 1

Device 0 (0000:01:00.0): NVIDIA GeForce RTX 3060 Laptop GPU
Compute Support: 8 6
Total Memory: 6442450944 bytes
Clock Rate: 1425000 kilohertz
Max. Threads per Block: 1024
SM Count: 30
Execution Timeout Enabled: 1
Max. HW Texture Count: 1048576
TCC driver enabled: 0
CUDA Device Ordinal: 0

Constructing a context…
Created with 1 device(s)
Supports 2147483647 simultaneous textures
Free memory:
Device 0: 5407899648 bytes
```

whereas if I run “optixConsole” I get:

```
OptiX error: Unknown error (Details: Function "_rtBufferCreate" caught exception: Encountered a rtcore error: m_exports->rtcDeviceContextCreateForCUDA( context, devctx ) returned (2): Invalid device context)
```

Do I have any hope of using OptiX on my new PC? (I don’t want to port the app to OptiX 7.x; it would be too expensive…)


OptiX 5 is not supported on Ampere GPUs; it predates that architecture.
Starting with OptiX 6, the OptiX core implementation moved into the display driver precisely so that newer GPU architectures could be supported going forward.
This was answered before here: https://forums.developer.nvidia.com/t/optix-5-1-1-crashes-on-a100-gpu-on-attempt-to-create-buffer/178530