Build onnxruntime v1.19.2 for JetPack 5.1.4 L4T 35.6 Failed

Following the build instructions in NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin, I tried to build ONNX Runtime v1.19.2 for JetPack 5.1.4 (L4T 35.6):

git clone git@github.com:microsoft/onnxruntime.git
cd onnxruntime/
git checkout v1.19.2
git submodule update --init --recursive

export PATH="/usr/local/cuda/bin:${PATH}"
export CUDACXX="/usr/local/cuda/bin/nvcc"

./build.sh --config Release --update --build --parallel --build_wheel \
--use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
--tensorrt_home /usr/lib/aarch64-linux-gnu

The build failed with: "The compiler doesn't support BFLOAT16!!!"

For details, see log.txt (35.5 KB)

Please help!

EDIT: I have searched Jetson Zoo - eLinux.org; there is no v1.19.2 binary for JetPack 5.1.4 L4T 35.6
EDIT2: Build onnxruntime v1.19.2 for Jetpack 5.1.4 L4T 35.6 Faild #23267

Hi,

This is a known issue. Please see the link below:

The ONNXRuntime team’s suggestion is to upgrade to JetPack 6.

Thanks.

It’s caused by the Jetson Orin aarch64 NOT supporting BF16.

Why doesn’t the Jetson Orin aarch64 machine have support for BF16?

Hi,

If you set up JetPack 6 on the same Orin board, ONNXRuntime can build and run normally.
This sounds more like a software capability issue in ONNXRuntime than a hardware problem.

Thanks.

I think most aarch64 machines support BF16, while Jetson Orin doesn’t.

And GCC 9.4 doesn’t handle this very well, so we have to upgrade to GCC 11, as suggested in NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin.
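For reference, below is a minimal standalone probe (my own test file, not part of the onnxruntime build; the flag and intrinsics assume an ARMv8.2+ target) to check what the toolchain actually supports. GCC 9.4 rejects it, while GCC 10 and later with the +bf16 extension compile it:

// bf16_probe.cpp -- hypothetical standalone BF16 probe, not part of onnxruntime.
// Build with: g++ -march=armv8.2-a+bf16 bf16_probe.cpp -o bf16_probe
#include <arm_neon.h>

#if !defined(__ARM_FEATURE_BF16_VECTOR_ARITHMETIC)
#error "This compiler/target does not advertise BF16 vector arithmetic"
#endif

int main() {
    // Exercise a real BF16 intrinsic (BFDOT) so the feature is actually used.
    float32x4_t acc = vdupq_n_f32(0.0f);
    bfloat16x8_t a = vreinterpretq_bf16_u16(vdupq_n_u16(0x3f80));  // 0x3f80 is BF16 1.0
    bfloat16x8_t b = vreinterpretq_bf16_u16(vdupq_n_u16(0x3f80));
    acc = vbfdotq_f32(acc, a, b);                                  // each lane gets 1*1 + 1*1 = 2
    return vgetq_lane_f32(acc, 0) == 2.0f ? 0 : 1;                 // exit 0 when BF16 math works
}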

The onnxruntime dev team will NOT put resources into this, as Ubuntu 20.04 reaches EOL at the end of April 2025, probably?

And I want to use this on JetPack 5.1.4, as it supports ROS, while JetPack 6 only supports ROS 2.

BTW, “JetPack 6.0 is not an option for Xavier or Nano users!”

Currently, gcc/g++ has been upgraded to version 13 on my JetPack 5.1.4 setup, but…

Hi,

These questions are specific to ONNXRuntime support.
Have you checked with them?

Thanks.

With the EOL of Ubuntu 20.04 coming, they will NOT put resources into this, which is NOT good for JetPack 5 users.

It seems that most of the issue is related to NVIDIA software version dependencies; see below:

[ 79%] Building CXX object CMakeFiles/onnxruntime_mlas_test.dir/home/daniel/Work/onnxruntime/onnxruntime/test/mlas/unittest/test_fgemm.cpp.o
/home/daniel/Work/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc: In member function ‘onnxruntime::common::Status onnxruntime::TensorrtExecutionProvider::CreateNodeComputeInfoFromGraph(const onnxruntime::GraphViewer&, const onnxruntime::Node&, std::unordered_map<std::__cxx11::basic_string<char>, long unsigned int>&, std::unordered_map<std::__cxx11::basic_string<char>, long unsigned int>&, std::vector<onnxruntime::NodeComputeInfo>&)’:
/home/daniel/Work/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:3053:17: error: ‘class nvinfer1::IBuilderConfig’ has no member named ‘setHardwareCompatibilityLevel’
 3053 |     trt_config->setHardwareCompatibilityLevel(nvinfer1::HardwareCompatibilityLevel::kAMPERE_PLUS);
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/daniel/Work/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:3053:57: error: ‘nvinfer1::HardwareCompatibilityLevel’ has not been declared
 3053 |     trt_config->setHardwareCompatibilityLevel(nvinfer1::HardwareCompatibilityLevel::kAMPERE_PLUS);
      |                                                         ^~~~~~~~~~~~~~~~~~~~~~~~~~
/home/daniel/Work/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc: In lambda function:
/home/daniel/Work/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:3644:21: error: ‘class nvinfer1::IBuilderConfig’ has no member named ‘setHardwareCompatibilityLevel’
 3644 |         trt_config->setHardwareCompatibilityLevel(nvinfer1::HardwareCompatibilityLevel::kAMPERE_PLUS);
      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/daniel/Work/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:3644:61: error: ‘nvinfer1::HardwareCompatibilityLevel’ has not been declared
 3644 |         trt_config->setHardwareCompatibilityLevel(nvinfer1::HardwareCompatibilityLevel::kAMPERE_PLUS);
      |                                                             ^~~~~~~~~~~~~~~~~~~~~~~~~~

For detailed info, please check out log.txt (46.6 KB)
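For context: nvinfer1::HardwareCompatibilityLevel and IBuilderConfig::setHardwareCompatibilityLevel only exist from TensorRT 8.6 onward, while JetPack 5.1.4 ships TensorRT 8.5.2. A small standalone check I use to confirm which headers the build actually sees (my own file; the Jetson TensorRT headers are assumed to be on the default include path):

// trt_version_check.cpp -- hypothetical standalone check, not part of onnxruntime.
// Build with: g++ trt_version_check.cpp -o trt_version_check
#include <NvInferVersion.h>   // defines NV_TENSORRT_MAJOR / MINOR / PATCH
#include <cstdio>

int main() {
    std::printf("TensorRT headers: %d.%d.%d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);
#if NV_TENSORRT_MAJOR > 8 || (NV_TENSORRT_MAJOR == 8 && NV_TENSORRT_MINOR >= 6)
    std::printf("setHardwareCompatibilityLevel should be available\n");
#else
    std::printf("setHardwareCompatibilityLevel is NOT available (needs TensorRT >= 8.6)\n");
#endif
    return 0;
}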

  • current version:
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.216-tegra
jtop:
 - Version: 4.2.12
 - Service: Active
Libraries:
 - CUDA: 11.8.89
 - cuDNN: 8.6.0.166
 - TensorRT: 8.5.2.2
 - VPI: 2.4.8
 - Vulkan: 1.3.204
 - OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3

Python Environment:
Python 3.8.10
    GStreamer:                   YES (1.16.3)
  NVIDIA CUDA:                   YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
        OpenCV version: 4.9.0  CUDA True
          YOLO version: 8.3.33
         Torch version: 2.5.1+l4t35.6
   Torchvision version: 0.20.1a0+3ac97aa
DeepStream SDK version: 1.1.8

Hi,

The error comes from the ONNXRuntime source, so they need to fix the issue.
But since the library is open-sourced, you can also try to fix it directly.
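For example, one rough, untested local workaround is to guard the two failing calls (lines 3053 and 3644 in the log) on the TensorRT version macros; this is only a sketch and assumes the hardware-compatibility setting can simply be skipped on TensorRT releases older than 8.6:

// Untested sketch for tensorrt_execution_provider.cc, not an official fix.
// NV_TENSORRT_MAJOR / NV_TENSORRT_MINOR come from the TensorRT headers already included there.
#if NV_TENSORRT_MAJOR > 8 || (NV_TENSORRT_MAJOR == 8 && NV_TENSORRT_MINOR >= 6)
    trt_config->setHardwareCompatibilityLevel(nvinfer1::HardwareCompatibilityLevel::kAMPERE_PLUS);
#else
    // TensorRT 8.5 (JetPack 5.1.x) has no hardware-compatibility API; skip the setting.
#endif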

Thanks.

Yes, they say it’s CUDA related and that the CUDA version should be upgraded, but I don’t know which version is suitable, as I have already upgraded from 11.4 to 11.8.

Still no luck with CUDA 11.8 + ONNXRuntime 1.19.2
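To make sure the build is really picking up the upgraded toolkit rather than a leftover 11.4, I use a small check program (my own file; build line assumed, exact paths may differ):

// cuda_version_check.cpp -- hypothetical check of which CUDA toolkit/driver the toolchain sees.
// Build with: nvcc cuda_version_check.cpp -o cuda_version_check
#include <cuda_runtime_api.h>
#include <cstdio>

int main() {
    int runtime_ver = 0, driver_ver = 0;
    cudaRuntimeGetVersion(&runtime_ver);   // version of the cudart this binary linked against
    cudaDriverGetVersion(&driver_ver);     // version supported by the installed driver
    std::printf("headers (CUDART_VERSION): %d\n", CUDART_VERSION);  // e.g. 11080 for CUDA 11.8
    std::printf("runtime: %d, driver: %d\n", runtime_ver, driver_ver);
    return 0;
}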