Should I downgrade from JetPack 6.2 to 6.0 for better ONNX Runtime GPU compatibility on Jetson Orin Nano?

I am trying to get ONNX Runtime GPU working stably on a Jetson Orin Nano (custom board), but inference fails with the following cuDNN-related error:

Failed to load library libonnxruntime_providers_cuda.so with error: libcudnn.so.8: cannot open shared object file

It seems that the combination of cuDNN 9.x (included in JetPack 6.2) and ONNX Runtime GPU 1.19 may not be stable.

I’m now wondering whether downgrading to JetPack 6.0 (which uses cuDNN 8.x) might lead to a more stable setup.
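Before deciding on a downgrade, it may help to confirm which cuDNN sonames the dynamic loader can actually find on the device. A minimal Python probe (the soname list is illustrative; adjust for your image):

```python
import ctypes

def probe_libs(sonames):
    """Try to dlopen each soname; return {soname: loaded_ok}."""
    results = {}
    for name in sonames:
        try:
            ctypes.CDLL(name)
            results[name] = True
        except OSError:
            results[name] = False
    return results

# JetPack 6.2 ships cuDNN 9, so libcudnn.so.8 is expected to be missing:
print(probe_libs(["libcudnn.so.8", "libcudnn.so.9"]))
```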

■ Current environment

  • JetPack 6.2 (CUDA 12.6, cuDNN 9.3.0)
  • ONNX Runtime GPU 1.19
  • Jetson Orin Nano (custom board)

■ What I’d like to know

  1. Is it likely that ONNX Runtime GPU will work more stably with JetPack 6.0 and cuDNN 8.x?
  2. Are there any alternative solutions or workarounds?

I’m not very familiar with this area, so I’d appreciate any friendly advice or pointers. Thank you!

*** Please note that this reply is generated by LLM automatically ***

I’d be happy to help you troubleshoot the issue with ONNX Runtime GPU on your Jetson Orin Nano custom board.

Firstly, it’s great that you’ve identified the potential issue with the combination of cuDNN 9.x and ONNX Runtime GPU 1.19. This is a good starting point for troubleshooting.

To answer your question, it’s possible that ONNX Runtime GPU might work more stably with JetPack 6.0 and cuDNN 8.x. Here’s why:

  1. cuDNN version compatibility: the ONNX Runtime GPU 1.19 builds you are using link against cuDNN 8 (which is why the loader looks for libcudnn.so.8). cuDNN 9.x is not ABI-compatible with 8.x — the major-version bump changes the library soname — so a binary built against cuDNN 8 cannot use cuDNN 9 as a drop-in replacement.
  2. JetPack version compatibility: JetPack 6.2 includes CUDA 12.6, which is newer than the CUDA 12.2 shipped with JetPack 6.0. While this is probably not the direct cause of the missing-library error, a newer CUDA toolkit can introduce its own compatibility issues with a prebuilt ONNX Runtime GPU.

That being said, it’s not a guarantee that downgrading to JetPack 6.0 and cuDNN 8.x will resolve the issue. However, it’s definitely worth trying as a potential workaround.
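To verify which cuDNN a given JetPack image actually provides, one option is to read the version defines out of cudnn_version.h. A small sketch; the header path is the usual JetPack location and may differ on custom images:

```python
import re

def cudnn_version_from_header(text):
    """Parse the CUDNN_MAJOR/MINOR/PATCHLEVEL #defines from the text of
    cudnn_version.h and return them as a dict of ints (None if absent)."""
    vals = {}
    for key in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(rf"#define\s+{key}\s+(\d+)", text)
        vals[key] = int(m.group(1)) if m else None
    return vals

# On JetPack the header usually lives at /usr/include/cudnn_version.h:
# with open("/usr/include/cudnn_version.h") as f:
#     print(cudnn_version_from_header(f.read()))
```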

Here are some alternative solutions or workarounds you could consider:

  1. Update ONNX Runtime GPU: You could try updating ONNX Runtime GPU to a newer version, such as 1.20 or later, which might include fixes for compatibility issues with cuDNN 9.x.
  2. Use a matching cuDNN version: if updating ONNX Runtime GPU isn’t an option, you could install the cuDNN 8.x libraries that the 1.19 build expects and make sure they are on the loader path, to see if that resolves the issue.
  3. Check your custom board configuration: ensure that your custom board is correctly configured for the Jetson Orin Nano module, double-checking the power delivery, thermal design, and electrical settings.
  4. Verify your ONNX model: Make sure that your ONNX model is correctly optimized for the Jetson Orin Nano and that there are no issues with the model itself.
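While you sort out the cuDNN mismatch, ONNX Runtime will fall back gracefully if you pass it an ordered provider list. A small helper to build one from whatever is available (the provider names are the standard ONNX Runtime identifiers; "model.onnx" in the usage comment is a placeholder):

```python
def pick_providers(available,
                   preferred=("TensorrtExecutionProvider",
                              "CUDAExecutionProvider",
                              "CPUExecutionProvider")):
    """Return the preferred providers that are actually available, keeping
    preference order, and always fall back to CPU so inference still runs."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# At runtime you would feed the result to onnxruntime, e.g.:
#   import onnxruntime as ort
#   sess = ort.InferenceSession(
#       "model.onnx",
#       providers=pick_providers(ort.get_available_providers()))
```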

In terms of friendly advice or pointers, I’d recommend the following:

  1. Check the NVIDIA forums: The NVIDIA forums are a great resource for troubleshooting issues related to Jetson modules and ONNX Runtime GPU. You can search for similar issues or post a new question to get help from the community.
  2. Consult the ONNX Runtime GPU documentation: The ONNX Runtime GPU documentation provides detailed information on compatibility, configuration, and troubleshooting. Make sure to review the documentation to ensure that you’re using the correct configuration and settings.
  3. Reach out to NVIDIA support: If you’re unable to resolve the issue through troubleshooting and workarounds, you can reach out to NVIDIA support for further assistance.

I hope this helps! Let me know if you have any further questions or if there’s anything else I can do to assist you.


Thanks again for the detailed advice.

I tried using cuDNN 8.x on JetPack 6.2, but the following error occurs during inference with ONNX Runtime GPU 1.19:

2025-08-20 09:45:10.234488584 [E:onnxruntime:Default, cuda_call.cc:118 CudaCall] CUDNN failure 8: CUDNN_STATUS_EXECUTION_FAILED ; GPU=0 ; hostname=yamagishi-desktop ; file=/home/yifanl/Documents/onnxruntime/onnxruntime/contrib_ops/cuda/fused_conv.cc ; line=67 ; expr=cudnnConvolutionForward(cudnnHandle, &alpha, Base::s_.x_tensor, Base::s_.x_data, Base::s_.w_desc, Base::s_.w_data, Base::s_.conv_desc, Base::s_.algo, workspace.get(), Base::s_.workspace_bytes, &beta, Base::s_.y_tensor, Base::s_.y_data); 
2025-08-20 09:45:10.234651838 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running FusedConv node. Name:'conv1' Status Message: CUDNN failure 8: CUDNN_STATUS_EXECUTION_FAILED ; GPU=0 ; hostname=yamagishi-desktop ; file=/home/yifanl/Documents/onnxruntime/onnxruntime/contrib_ops/cuda/fused_conv.cc ; line=67 ; expr=cudnnConvolutionForward(cudnnHandle, &alpha, Base::s_.x_tensor, Base::s_.x_data, Base::s_.w_desc, Base::s_.w_data, Base::s_.conv_desc, Base::s_.algo, workspace.get(), Base::s_.workspace_bytes, &beta, Base::s_.y_tensor, Base::s_.y_data); 

It seems like cuDNN 8 is still not working properly on this setup.
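Since the failure is inside the contrib FusedConv kernel — a node the graph optimizer creates by fusing Conv with its activation — one isolation step is to run with graph optimizations disabled so plain Conv nodes execute instead. A sketch for narrowing down the bug, not a permanent fix; "model.onnx" is a placeholder:

```python
try:
    import onnxruntime as ort  # onnxruntime-gpu on the Jetson
except ImportError:
    ort = None  # must be installed on the target device

def make_unfused_session(model_path):
    """Create a session with graph optimizations off, so the optimizer
    does not emit the FusedConv contrib op that is failing here."""
    so = ort.SessionOptions()
    so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
    return ort.InferenceSession(
        model_path, so,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

# usage on the device: sess = make_unfused_session("model.onnx")
```

If inference succeeds with optimizations off, that points at the FusedConv/cuDNN interaction rather than the model itself.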

🟢 Additional Questions:

  1. Is this CUDNN_STATUS_EXECUTION_FAILED likely due to a mismatch between ONNX Runtime and the current JetPack/cuDNN version?
  2. Where can I download Jetson-compatible builds of ONNX Runtime GPU 1.20 or newer (e.g. .whl files)?
    • If there are reliable sources (like precompiled wheels for JetPack 6.2 / CUDA 12.6 / cuDNN 9.3), I’d greatly appreciate a link.

Thanks in advance for your help!

Hi,

You can find our prebuilt package for JetPack 6.2.1 in the link below:

The ONNX Runtime version is 1.23.0.
Thanks.


Everything worked as expected. Thank you very much!