Error while running YOLOv5 on Orin

I am trying to run YOLOv5 inference on an Orin developer board and got the following error:

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize, PythonTLSSnapshot].

CPU: registered at /media/amir/SSD-PUT/orin_pytorch_installation/vision/torchvision/csrc/ops/cpu/nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at /media/amir/SSD-PUT/orin_pytorch_installation/vision/torchvision/csrc/ops/quantized/cpu/qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at …/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at …/aten/src/ATen/core/PythonFallbackKernel.cpp:67 [backend fallback]
Named: registered at …/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at …/aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at …/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at …/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:51 [backend fallback]
AutogradLazy: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:55 [backend fallback]
AutogradXPU: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradMLC: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:59 [backend fallback]
AutogradHPU: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:68 [backend fallback]
Tracer: registered at …/torch/csrc/autograd/TraceTypeManual.cpp:293 [backend fallback]
AutocastCPU: fallthrough registered at …/aten/src/ATen/autocast_mode.cpp:461 [backend fallback]
Autocast: fallthrough registered at …/aten/src/ATen/autocast_mode.cpp:305 [backend fallback]
Batched: registered at …/aten/src/ATen/BatchingRegistrations.cpp:1059 [backend fallback]
VmapMode: fallthrough registered at …/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at …/aten/src/ATen/FunctionalizeFallbackKernel.cpp:52 [backend fallback]
PythonTLSSnapshot: registered at …/aten/src/ATen/core/PythonFallbackKernel.cpp:71 [backend fallback]
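Since the trace points at torchvision's NMS op, a minimal snippet along these lines (illustrative, independent of YOLOv5) should trigger the same error:

```python
import torch
from torchvision.ops import nms

# Two overlapping boxes in (x1, y1, x2, y2) format on the GPU; calling
# torchvision::nms on CUDA tensors is what YOLOv5's post-processing does
# under the hood.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")

# Raises NotImplementedError when torchvision was built without CUDA kernels.
keep = nms(boxes, scores, iou_threshold=0.5)
print(keep)
```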

I would appreciate some help in fixing this issue.

Hi,

Could you please share how you installed the PyTorch package?

This error usually means that torchvision was built without CUDA support, so no CUDA kernel is registered for the torchvision::nms operator.

To install PyTorch with CUDA support, it's recommended to use our prebuilt package.
The detailed guide can be found on the page below:
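After reinstalling, a quick check along these lines (an illustrative sketch, not part of the guide) should confirm that CUDA is available and that the torchvision::nms kernel now runs on the GPU:

```python
import torch
import torchvision
from torchvision.ops import nms

print(torch.__version__, torchvision.__version__)
print(torch.cuda.is_available())  # should be True with the CUDA-enabled build
print(torch.version.cuda)         # CUDA version the package was built against

# The originally failing op should now succeed on CUDA tensors.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0]], device="cuda")
scores = torch.tensor([0.9], device="cuda")
print(nms(boxes, scores, iou_threshold=0.5))
```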

Thanks.
