ONNX Runtime 1.17.3 compilation error on JetPack 6.0

Hello,

I am getting compilation errors trying to compile onnxruntime 1.17.3 against CUDA 12.2 and TensorRT on our AGX Orin 64 GB devkit. I am using the C++ bindings for onnxruntime as well as the CUDA and TensorRT execution providers, so I have no option but to compile from source. I have also tried asking on the onnxruntime GitHub repo, but a similar issue has been open for over a month now with no reply, so I don't have much hope of it getting solved there.

Description

The build fails about halfway through, on the CUDA provider, with the message nvcc fatal : 'ON': expected a number
I am trying to compile onnxruntime 1.17.3 against CUDA 12.2 (the version preinstalled with the JetPack 6.0.0 release) as well as TensorRT.

For my application I need to be able to use TensorRT, because of the optimizations it provides.

Urgency

To me: relatively high. :)
Our development is blocked on this issue.

Target system

Linux ubuntu 5.15.136-tegra #1 SMP PREEMPT Mon May 6 09:56:39 PDT 2024 aarch64 aarch64 aarch64 GNU/Linux
Jetpack 6.0.0

# R36 (release), REVISION: 3.0, GCID: 36106755, BOARD: generic, EABI: aarch64, DATE: Thu Apr 25 03:14:05 UTC 2024
# KERNEL_VARIANT: oot

Build script

cmake \
  -Donnxruntime_ENABLE_LANGUAGE_INTEROP_OPS=true \
  -Donnxruntime_ENABLE_CUDA_PROFILING=true \
  -Donnxruntime_USE_CUDA=true \
  -Donnxruntime_BUILD_FOR_NATIVE_MACHINE=true \
  -Donnxruntime_USE_TENSORRT=true \
  -Donnxruntime_BUILD_SHARED_LIB=true \
  -S /opt/onnxruntime/onnxruntime-1.17.3/cmake \
  -B /opt/onnxruntime/onnxruntime-1.17.3/build
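One guess, not verified on this exact setup: the 'ON': expected a number message reads as if a CMake boolean (ON) leaked into a flag where nvcc expects a numeric compute capability, e.g. a -gencode/arch value. If so, pinning the architecture for AGX Orin (SM 8.7) via CMAKE_CUDA_ARCHITECTURES, instead of relying on onnxruntime_BUILD_FOR_NATIVE_MACHINE, might sidestep it. A sketch of the same configure step with that change:

```shell
# Sketch only: same configure command as above, but with the CUDA
# architecture pinned explicitly for AGX Orin (compute capability 8.7)
# and onnxruntime_BUILD_FOR_NATIVE_MACHINE dropped. Paths are taken
# from the original script; this workaround is an assumption, not a
# confirmed fix for this failure.
cmake \
  -Donnxruntime_ENABLE_LANGUAGE_INTEROP_OPS=true \
  -Donnxruntime_ENABLE_CUDA_PROFILING=true \
  -Donnxruntime_USE_CUDA=true \
  -Donnxruntime_USE_TENSORRT=true \
  -Donnxruntime_BUILD_SHARED_LIB=true \
  -DCMAKE_CUDA_ARCHITECTURES=87 \
  -S /opt/onnxruntime/onnxruntime-1.17.3/cmake \
  -B /opt/onnxruntime/onnxruntime-1.17.3/build
```

If this helps, the generated nvcc flags should then contain arch=compute_87 rather than a stray ON.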

Compiler version

nvcc --version

Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

gcc --version

gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

Error/output

Terminal output:

[ 51%] Building CUDA object CMakeFiles/onnxruntime_providers_cuda.dir/opt/onnxruntime/onnxruntime-1.17.3/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o
nvcc fatal   : 'ON': expected a number
gmake[2]: *** [CMakeFiles/onnxruntime_providers_cuda.dir/build.make:1365: CMakeFiles/onnxruntime_providers_cuda.dir/opt/onnxruntime/onnxruntime-1.17.3/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:2117: CMakeFiles/onnxruntime_providers_cuda.dir/all] Error 2
gmake: *** [Makefile:166: all] Error 2

Makefile command at the location where the crash seems to happen:

/usr/local/cuda-12.2/bin/nvcc -forward-unknown-to-host-compiler $(CUDA_DEFINES) $(CUDA_INCLUDES) $(CUDA_FLAGS) -MD -MT CMakeFiles/onnxruntime_providers_cuda.dir/opt/onnxruntime/onnxruntime-1.17.3/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o -MF CMakeFiles/onnxruntime_providers_cuda.dir/opt/onnxruntime/onnxruntime-1.17.3/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o.d -x cu -c /opt/onnxruntime/onnxruntime-1.17.3/onnxruntime/core/providers/cuda/activation/activations_impl.cu -o CMakeFiles/onnxruntime_providers_cuda.dir/opt/onnxruntime/onnxruntime-1.17.3/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o
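To pin down where the stray ON comes from, it may help to look at what $(CUDA_FLAGS) actually expands to; with CMake's Makefile generator those variables live in the generated flags.make for the target. A diagnostic sketch, with paths assumed from the build directory above:

```shell
# Show the CUDA flag variables the Makefile expands for this target;
# look for a literal 'ON' where a compute capability number should be.
grep -n "CUDA_FLAGS" \
  /opt/onnxruntime/onnxruntime-1.17.3/build/CMakeFiles/onnxruntime_providers_cuda.dir/flags.make

# Or have make echo the full nvcc command line before it fails:
cmake --build /opt/onnxruntime/onnxruntime-1.17.3/build -- VERBOSE=1
```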

@j.heuver I pretty regularly build onnxruntime from source in a container; you can see how I do it in the build script here:

I haven’t specifically built onnxruntime 1.17.3, but have compiled 1.17.0 on JetPack 6 this way.

Thanks for the script, I will try to get that set up this afternoon! I'll post whether it works for me or not. In the meantime I also tried building the Python wheel (suggested on the onnxruntime website for installation), but ran into other compilation errors there.

Sorry for the delay. I managed to extract the shared libs from the container and run my application with them, so this issue can be closed. I am now running into issues with the TensorRT examples (none of them have worked so far), but I'll open another issue for that. Thanks for the support!
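For anyone landing here with the same problem, extracting built libraries from a container can be done with docker cp. The image name and library paths below are placeholders, not the actual ones from the container build:

```shell
# Create a stopped container from the build image (placeholder name),
# copy the onnxruntime shared libraries out, then clean up.
mkdir -p ./ort-libs
docker create --name ort-extract onnxruntime-build-image
docker cp ort-extract:/usr/local/lib/libonnxruntime.so ./ort-libs/
docker cp ort-extract:/usr/local/lib/libonnxruntime_providers_tensorrt.so ./ort-libs/
docker rm ort-extract
```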

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.