Description
Currently running on a Jetson Orin with JetPack 6.0, I'm trying to use YOLOv7 (from: GitHub - linghu8812/yolov7: Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors) on a live camera.
Torch has been installed from: PyTorch for Jetson,
and torchvision from: GitHub - pytorch/vision: Datasets, Transforms and Models specific to Computer Vision, at v0.17.1.
To run my inference I'm using detect.py from the yolov5 repo.
I took the following files from the yolov5 repo: dataloaders.py, common.py, augmentations.py, general.py, downloads.py, metrics.py, export.py, plots.py and torch_utils.py, which I adapt for yolov7 with every yolov5 iteration.
But when I try to run it, I keep getting this error:


```
Traceback (most recent call last):
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/detect7.py", line 257, in <module>
    main(opt)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/detect7.py", line 252, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/detect7.py", line 128, in run
    pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/utils/general7.py", line 867, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/usr/local/lib/python3.10/dist-packages/torchvision-0.17.1+4fd856b-py3.10-linux-aarch64.egg/torchvision/ops/boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 743, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
I don't really know whether my torchvision version is incompatible with my CUDA or my torch build.
If you have any suggestions, thanks in advance.
Sorry if there is any confusion in my explanation; I'm new to this.
## Environment
**JetPack Version**: 6.0
**TensorRT Version**: 8.6.2.3
**GPU Type**: Jetson Orin AGX (dev)
**Nvidia Driver Version**: 540.2.0
**CUDA Version**: 12.2
**Operating System + Version**: Docker Ubuntu 22.04.4
**Python Version (if applicable)**: 3.10
**PyTorch Version (if applicable)**: torch 2.2.0a0+6a974be.nv23.11-cp310-cp310-linux_aarch64; torchvision: 0.17.1+4fd856b
**Baremetal or Container (if container which image + tag)**: FROM nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel