Hi, I’m trying to run Roboflow Inference with CUDA on a Jetson Orin Nano, but apparently it isn’t working correctly. The Python version is 3.9, inside an environment created with conda.
These are the steps I followed to install the packages:
pip install inference-gpu (onnxruntime needs to be installed first)
pip install supervision
I installed onnxruntime version 1.15.1 from Jetson Zoo - eLinux.org, because apparently that version is needed to run Roboflow Inference.
These are the onnxruntime packages installed in the environment:
onnxruntime 1.15.1
onnxruntime-gpu 1.15.1
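To double-check what pip actually resolved inside the conda environment, I can list both distributions from Python. This is just a small check using the standard library, nothing Roboflow-specific:

```python
from importlib import metadata

# Both the CPU and GPU wheels can coexist in one environment;
# listing them confirms exactly which distributions pip resolved.
for dist in ("onnxruntime", "onnxruntime-gpu"):
    try:
        print(dist, metadata.version(dist))
    except metadata.PackageNotFoundError:
        print(dist, "not installed")
```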
And when I run the project, I only see CPUExecutionProvider from onnxruntime:
Providers: ['CPUExecutionProvider']
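This is the minimal check I run to see what the onnxruntime build itself reports (stock onnxruntime API only):

```python
import onnxruntime as ort

# Which wheel is actually active in this environment
print("onnxruntime version:", ort.__version__)

# 'GPU' if the active package was built with CUDA support, 'CPU' otherwise
print("Device:", ort.get_device())

# Providers compiled into this build; CUDAExecutionProvider should appear here
print("Available providers:", ort.get_available_providers())
```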
I run the same project on my computer with the same Roboflow Inference packages and onnxruntime versions and it works fine, but on my Jetson Orin Nano it doesn't.
If you have had the same problem, could you please help me?
Have a nice day!
Yes, I installed it. I don't have any problem with the installation or execution; the problem is that onnxruntime isn't recognizing CUDA as a provider, so inference won't run on the GPU 😕
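In case it helps to reproduce, this is roughly how I force the CUDA provider when creating a session (a minimal sketch; model.onnx is just a placeholder path, not my actual model):

```python
import onnxruntime as ort

# Placeholder path; any exported ONNX model will do for this test
MODEL_PATH = "model.onnx"

# Request CUDA explicitly, with CPU as a fallback. If the CUDA provider
# can't be loaded (e.g. missing CUDA/cuDNN libraries), onnxruntime emits
# a warning and falls back to CPU, which matches the behavior I'm seeing.
session = ort.InferenceSession(
    MODEL_PATH,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Shows which providers the session actually ended up with
print("Session providers:", session.get_providers())
```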