CUDAExecutionProvider doesn't appear in onnxruntime 1.15.1 - Jetson Orin Nano

Hi, I'm trying to run Roboflow Inference with CUDA on a Jetson Orin Nano, but apparently it isn't working well. The Python version is 3.9, and I'm using an environment created with conda.

If you want to try it, you can follow the steps here: https://inference.roboflow.com/quickstart/run_a_model/#install-inference

  1. pip install inference
  2. pip install inference-gpu (you need to install onnxruntime first; see the quick check after this list)
  3. pip install supervision
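After step 2, I verify which onnxruntime build Python actually picks up (a minimal sketch; note that having both the CPU and GPU wheels installed at once can make the import ambiguous):

import onnxruntime as rt

# Version and install location of whichever onnxruntime gets imported
print(rt.__version__)
print(rt.__file__)

# Returns 'GPU' if this build was compiled with CUDA support, else 'CPU'
print(rt.get_device())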

I installed version 1.15.1 of onnxruntime from the Jetson Zoo (https://elinux.org/Jetson_Zoo#ONNX_Runtime), because that version is apparently required to run Roboflow Inference.

These are the onnxruntime versions I installed:
onnxruntime 1.15.1
onnxruntime-gpu 1.15.1

And when I run the project, onnxruntime only shows the CPU provider:
Providers: ['CPUExecutionProvider']
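For reference, this is roughly how I check it (a minimal sketch; model.onnx stands in for my actual model file):

import onnxruntime as rt

# Providers compiled into this onnxruntime build
print("Providers:", rt.get_available_providers())

# Request CUDA explicitly; onnxruntime silently falls back to CPU
# when the CUDA provider fails to load (e.g. missing CUDA libraries)
sess = rt.InferenceSession(
    "model.onnx",  # placeholder, not my real model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # the providers actually bound to the session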

I ran the same project on my computer with the same Roboflow Inference packages and onnxruntime versions and it works fine, but on my Jetson Orin Nano it doesn't.

If you have had the same problem, could you please help me?
Have a nice day!


Hi,

Just to confirm first: did you install this package?

onnxruntime_gpu-1.15.1-cp39-cp39-linux_aarch64.whl

Thanks.

Yes, I installed it. I don't have any problem with the installation or execution; the problem is that it isn't recognizing CUDA as a provider to run on the GPU 😕

Do you have any example of this onnxruntime_gpu build working on a Jetson Orin Nano?

Hi,

We gave JetPack 5.1.2 and ONNXRuntime v1.17.0 a try, and it works correctly.
Please find the details below:

$ sudo apt install python3.9 python3.9-dev
$ sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
$ sudo apt install -y g++-11
$ wget https://nvidia.box.com/shared/static/6orewbbm76n871pmchr7u3nfeecl5r20.whl -O onnxruntime_gpu-1.17.0-cp39-cp39-linux_aarch64.whl
$ python3.9 -m pip install onnxruntime_gpu-1.17.0-cp39-cp39-linux_aarch64.whl
$ python3.9
Python 3.9.5 (default, Nov 23 2021, 15:27:38)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnxruntime as rt
>>> rt.get_available_providers()
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
>>>
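If the CUDA provider still does not show up on your side, you can enable verbose logging to see why a provider failed to load (a minimal sketch; severity 0 means VERBOSE, and model.onnx is a placeholder path):

import onnxruntime as rt

# Severity 0 = VERBOSE; with this set, creating an InferenceSession
# reports why a provider could not be loaded (e.g. a missing libcudart)
rt.set_default_logger_severity(0)
print(rt.get_available_providers())
sess = rt.InferenceSession(
    "model.onnx",  # placeholder, substitute your own model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)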

Thanks.

Where did you get
https://nvidia.box.com/shared/static/6orewbbm76n871pmchr7u3nfeecl5r20.whl ?

How come all these wheels are just sprinkled all over the forum? Where is the proper documentation?
Why aren't these packages available at https://pypi.nvidia.com/ or at the /compute/redist/jp index (https://developer.download.nvidia.com/compute/redist/jp/)?

Hi,

The wheel is in the Jetson Zoo:

https://elinux.org/Jetson_Zoo#ONNX_Runtime

JetPack 5.1.2
onnxruntime 1.17.0 + Python 3.9

Thanks.
