Hi! I'm using an NVIDIA Jetson Orin NX 16GB with JetPack 5.1.2, L4T 35.4.1, CUDA 11.4.315, cuDNN 8.6.0.166, and TensorRT 8.5.2.2. I use the Docker image nvcr.io/nvidia/l4t-ml:r35.2.1-py3, and inside it I install onnxruntime 1.18.0 following the Jetson Zoo instructions (I have also tested 1.17.0).
When I run a model with onnxruntime I get this warning:

/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
I tested whether onnxruntime has the CUDAExecutionProvider with:
$ python3
>>> import onnxruntime as ort
>>> ort.get_available_providers()
['AzureExecutionProvider', 'CPUExecutionProvider']
What can I do? I have tried the Docker image directly (it supposedly ships with onnxruntime), and I have also tried installing it following Jetson Zoo - eLinux.org.
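In case it helps, here is a minimal, self-contained version of the check I run. (A sketch: `pick_providers` is just a helper I wrote for this post, and "model.onnx" is a placeholder path, not part of any API.)

```python
def pick_providers(available, preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Return the subset of preferred providers this onnxruntime build actually offers."""
    return [p for p in preferred if p in available]

try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    # onnxruntime is not installed in this environment; fall back to the
    # list reported by the warning above.
    providers = ["AzureExecutionProvider", "CPUExecutionProvider"]

print(pick_providers(providers))
# A working GPU build should include 'CUDAExecutionProvider' here; my
# broken install lists only 'CPUExecutionProvider'.
# sess = ort.InferenceSession("model.onnx", providers=pick_providers(providers))
```

Passing an explicit provider list this way at least makes the CPU fallback visible instead of a one-line warning that is easy to miss.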
OK, in Docker I had both the onnxruntime and onnxruntime-gpu packages installed. I removed both, reinstalled the Jetson Zoo wheel for JetPack 5.1.2 and Python 3.8, and now I get this error:
ImportError: /lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/orin/.local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_pybind11_state.so)
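To confirm where the mismatch comes from, I can list which GLIBCXX symbol versions the system libstdc++ actually exports. (A sketch, assuming a standard Linux setup where `ldconfig -p` can locate the library; `grep -a` scans the binary directly, so binutils' `strings` is not needed. If I understand correctly, GLIBCXX_3.4.29 corresponds to GCC 11, while JetPack 5.x is based on Ubuntu 20.04, whose libstdc++ stops earlier.)

```shell
# Locate the system libstdc++ and print the newest GLIBCXX versions it exports.
LIBSTDCPP=$(ldconfig -p | grep -m1 'libstdc++.so.6 ' | awk '{print $NF}')
grep -ao 'GLIBCXX_3\.4\.[0-9]*' "$LIBSTDCPP" | sort -uV | tail -n 3
```

If GLIBCXX_3.4.29 is missing from the output, the wheel was built against a newer toolchain than the one the container provides.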