GCC and g++ versions on JetPack 5.1.2

Hello

On the Xavier developer board with Ubuntu 20.04 and JetPack 5.1.2, the default gcc and g++ version is 9.

We can install version 10 with:

sudo apt install gcc-10 g++-10 -y
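
The packages install alongside the default toolchain, so we can verify them without touching the system gcc:

gcc-10 --version
g++-10 --version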

To install version 11, we need:

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-11 g++-11
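
To make one of the newer versions the default, update-alternatives can register each toolchain and switch between them (a sketch; the priority numbers 90/100/110 are arbitrary):

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 110 --slave /usr/bin/g++ g++ /usr/bin/g++-11
sudo update-alternatives --config gcc
gcc --version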

We know that gcc is backward compatible, and JetPack 5.1.2 is built around gcc and g++ version 9.
Is it safe to use the newer versions 10 or 11 on JetPack 5.1.2?

Wow, I don’t think there is a certain answer for that.

Hello

Thank you for the answer.

When we install the onnxruntime .whl for JetPack 5.1.2 from the link on Jetson Zoo - eLinux.org,
we get a GLIBCXX error, and we need to update to gcc 11 to get the matching GLIBCXX version.
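
The GLIBCXX symbol versions the system actually provides can be listed from the runtime library (path assumed for aarch64 Ubuntu 20.04); the version the wheel complains about is presumably GLIBCXX_3.4.29, which ships with the gcc 11 toolchain:

strings /usr/lib/aarch64-linux-gnu/libstdc++.so.6 | grep GLIBCXX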

Even when we try to build from source by following the links from Jetson Zoo - eLinux.org to Build with different EPs | onnxruntime,
we observe that gcc 11 is needed to build onnxruntime from source.
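
Roughly, the source build with the CUDA execution provider looks like this (a sketch; the CUDA and cuDNN paths are the usual Jetson locations and may differ on your image):

export CC=/usr/bin/gcc-11
export CXX=/usr/bin/g++-11
./build.sh --config Release --update --build --build_wheel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu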

The reliability of gcc 11 on JetPack 5.1.2 is important for us.
We appreciate any help.

Hi,

Which onnxruntime package do you use?
Do you use Python 3.8 packages?
If not, could you give it a try?

Thanks.

Thank you for the answer.

We use the onnxruntime 1.17.0 and 1.18.0 Python 3.8 wheels for JetPack 5.1.2.
Both of these wheels need GLIBCXX symbol versions that come with gcc 11.
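
The failure shows up on a bare import, and the versions the wheel's native module expects can be inspected directly (the site-packages path below is where pip usually puts it; adjust for your environment):

python3.8 -c "import onnxruntime"   # fails with an ImportError naming the missing GLIBCXX version
strings ~/.local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_pybind11_state.so | grep GLIBCXX_3.4.2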

Hi,

Sorry for the late update.

Confirmed that we can reproduce the same issue with ONNXRuntime 1.17.0.
Will check it further and provide more info to you.

Thanks.

Thank you for the answer.

I want to note that onnxruntime can be an alternative for running inference with CUDA 12.2 on JetPack 5.1.2, since the TensorRT that ships with JetPack 5.1.2 cannot be used with CUDA 12.2.
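
A quick way to confirm that a given onnxruntime build can actually use the GPU is to list its execution providers (python3.8 assumed, as above):

python3.8 -c "import onnxruntime as ort; print(ort.get_available_providers())"   # a CUDA-enabled build lists CUDAExecutionProvider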

Hi,

It looks like you are looking for GPU inference frameworks.
As an alternative, have you tried building PyTorch from source with JetPack 5 + CUDA 12?
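
A rough sketch of such a build on Xavier (GPU compute capability 7.2); the branch here is only an example, and the exact switches depend on the PyTorch version:

git clone --recursive --branch v2.1.0 https://github.com/pytorch/pytorch
cd pytorch
export USE_NCCL=0 USE_DISTRIBUTED=0   # Jetson builds typically disable these
export TORCH_CUDA_ARCH_LIST="7.2"     # Xavier's GPU architecture
python3 setup.py bdist_wheel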

Thanks.

Hi,

We also built PyTorch for JetPack 5.1.2 with the updated CUDA 12.2.
I think with both PyTorch and onnxruntime it is possible to run inference using CUDA 12.2.
The TensorRT that comes with JetPack 5.1.2 is compatible with CUDA 11.4, but not with CUDA 12.2.
That is why PyTorch and onnxruntime inference must rely on CUDA instead of TensorRT.
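
For anyone following along, the resulting build can be sanity-checked with a one-liner (run with the same Python used for the build):

python3.8 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"   # should report 12.2 and True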
