Trying to use YOLOv7 on Jetson Orin


Currently running on a Jetson Orin with JetPack 6.0, I'm trying to use YOLOv7 (from: GitHub - linghu8812/yolov7: Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors) on a live camera.
Torch has been installed from: PyTorch for Jetson,
and torchvision from: GitHub - pytorch/vision: Datasets, Transforms and Models specific to Computer Vision, at v0.17.1.

To run my inference, I took files from the yolov5 repo and changed every yolov5 reference to yolov7 in them.
But when I try to run it, I keep getting this error:


Traceback (most recent call last):
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/", line 257, in <module>
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/", line 252, in main
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/", line 128, in run
    pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/utils/", line 867, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/usr/local/lib/python3.10/dist-packages/torchvision-0.17.1+4fd856b-py3.10-linux-aarch64.egg/torchvision/ops/", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "/usr/local/lib/python3.10/dist-packages/torch/", line 743, in __call__
    return self._op(*args, **kwargs or {})

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
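For context, the failing `torchvision::nms` operator is ordinary non-maximum suppression; the error only means that no CUDA kernel for it was compiled into this torchvision build (note that `CUDA` is absent from the list of available backends, while `CPU` is present). A minimal pure-Python sketch of what the op computes, as an illustration of the logic rather than torchvision's actual implementation:

```python
def iou(a, b):
    # Intersection-over-union of two boxes in (x1, y1, x2, y2) format.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold):
    # Greedily keep the highest-scoring boxes, dropping any box that
    # overlaps an already-kept box by more than iou_threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores, 0.5))  # [0, 2]: box 1 overlaps box 0 too much
```

Since the op itself is simple, the error is a build issue, not a usage issue: the torchvision wheel/egg in use was built without its CUDA extensions.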

I don't really know whether it's my torchvision version that is incompatible with my CUDA or my torch build.

If you have any suggestions, thanks in advance.

Sorry if there is any confusion in my explanation; I'm new to this.

## Environment
**JetPack Version**: 6.0
**TensorRT Version**:
**GPU Type**: Jetson Orin AGX (dev)
**Nvidia Driver Version**:   540.2.0
**CUDA Version**:  12.2
**Operating System + Version**: Docker Ubuntu 22.04.4
**Python Version (if applicable)**: 3.10
**PyTorch Version (if applicable)**:  torch 2.2.0a0+6a974be.nv23.11-cp310-cp310-linux_aarch64; torchvision: 0.17.1+4fd856b
**Baremetal or Container (if container which image + tag)**: FROM

Found a solution by changing my torchvision version.

But I don't understand it, because this solution works only when I install the new version (0.16.0-rc5) over the previous one (0.17.1).
When I run the check in Python I get CUDA as I'm supposed to, but when I do the same thing while building an image for my Docker setup, I can't get CUDA.
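One common cause of exactly this Docker symptom (an assumption here, not confirmed from the logs) is that during `docker build` no GPU is visible, so torchvision's `setup.py` detects no CUDA and silently compiles CPU-only kernels; exporting `FORCE_CUDA=1` forces the CUDA extensions to build anyway. A sketch of such a from-source build, where the tag and arch list are assumptions chosen to match this setup:

```shell
# Hypothetical from-source torchvision build for Jetson Orin (SM 8.7).
# FORCE_CUDA=1 builds the CUDA kernels even when no GPU is visible
# at build time (e.g. inside `docker build`).
export FORCE_CUDA=1
export TORCH_CUDA_ARCH_LIST="8.7"
git clone --branch v0.16.0-rc5 https://github.com/pytorch/vision.git torchvision
cd torchvision
python3 setup.py install --user
```

This would also explain why installing interactively inside a running container (where the GPU is visible) produces a CUDA-enabled build while `docker build` does not.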

Any idea?


Could you share with us the steps for how you installed 0.16 and 0.17?

I avoided the problem by just creating a new image based on the running container, which has the right version.
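For reference, that workaround corresponds to `docker commit`, which snapshots a running container into a new image (the container name and image tag below are placeholders):

```shell
# Snapshot the container where the working torchvision was installed
# (container name and image tag are placeholders).
docker commit yolov7-container yolov7-jetson:torchvision-cuda

# Run the new image with GPU access on Jetson.
docker run --runtime nvidia -it yolov7-jetson:torchvision-cuda
```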


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.