Trying to use yolov7 on Jetson Orin

Description

Currently running on a Jetson Orin with JetPack 6.0, I'm trying to use YOLOv7 (from: GitHub - linghu8812/yolov7: Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors) on a live camera.
PyTorch was installed from: PyTorch for Jetson
and torchvision from: GitHub - pytorch/vision: Datasets, Transforms and Models specific to Computer Vision, at v0.17.1.

To run my inference I'm using detect.py from the yolov5 repo.

I took these files from the yolov5 repo: dataloaders.py, common.py, Augmentations.py, general.py, downloads.py, metrics.py, export.py, plots.py, and torch_utils.py, which I update at every yolov5 iteration to work with yolov7.
But when I try to run it, I keep getting this error:

```
Traceback (most recent call last):
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/detect7.py", line 257, in <module>
    main(opt)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/detect7.py", line 252, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/detect7.py", line 128, in run
    pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
  File "/home/tensorrt_inference/tensorrt_custom/tensorrt_inference/yolov7/utils/general7.py", line 867, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/usr/local/lib/python3.10/dist-packages/torchvision-0.17.1+4fd856b-py3.10-linux-aarch64.egg/torchvision/ops/boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 743, in __call__
    return self._op(*args, **kwargs or {})

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```


I don't really know whether my torchvision version is incompatible with my CUDA or torch version.
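For what it's worth, the error message suggests torchvision was compiled without its CUDA kernels even though torch has CUDA support. A minimal sketch of a compatibility check (the helper name and the stand-in modules are my own, not from either repo) could compare the CUDA version each build reports, since a CPU-only torchvision exposes `torchvision.version.cuda` as `None`:

```python
from types import SimpleNamespace

def cuda_builds_match(torch_mod, tv_mod):
    """Return True only when both torch and torchvision report a CUDA build.

    A torchvision compiled CPU-only has torchvision.version.cuda == None,
    which is exactly the case where torch.ops.torchvision.nms has no CUDA
    backend registered and raises NotImplementedError.
    """
    torch_cuda = getattr(torch_mod.version, "cuda", None)
    tv_cuda = getattr(tv_mod.version, "cuda", None)
    return torch_cuda is not None and torch_cuda == tv_cuda

# Stand-in modules so the check runs anywhere; on the Jetson you would
# pass the real `torch` and `torchvision` modules instead.
cuda_torch = SimpleNamespace(version=SimpleNamespace(cuda="12.2"))
cpu_only_tv = SimpleNamespace(version=SimpleNamespace(cuda=None))
print(cuda_builds_match(cuda_torch, cpu_only_tv))  # prints False
```

If this returns False with the real modules, rebuilding torchvision from source with CUDA enabled (rather than hunting for a matching wheel) would be the likely fix.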

If you have any suggestions, thanks in advance.

Sorry if there's any confusion in my explanation; I'm new to this.

## Environment
**JetPack Version**: 6.0
**TensorRT Version**: 8.6.2.3
**GPU Type**: Jetson Orin AGX (dev)
**NVIDIA Driver Version**: 540.2.0
**CUDA Version**: 12.2
**Operating System + Version**: Docker Ubuntu 22.04.4
**Python Version (if applicable)**: 3.10
**PyTorch Version (if applicable)**: torch 2.2.0a0+6a974be.nv23.11-cp310-cp310-linux_aarch64; torchvision 0.17.1+4fd856b
**Baremetal or Container (if container which image + tag)**: FROM nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel
---

I found a solution by changing my torchvision version.

But I don’t understand why this solution only works when I install the new version (0.16.0-rc5) over the previous one (0.17.1).
When I check in Python, `torchvision.version` is supposed to expose:
`torchvision.version.cuda`
but when I do the same thing in an image I build for Docker, the `cuda` attribute is missing.
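One way to catch this earlier (a sketch of my own, not from the thread; the function name is hypothetical) is a fail-fast check at container startup that refuses to run when `torchvision.version.cuda` is missing, instead of hitting the NMS error mid-inference:

```python
from types import SimpleNamespace

def assert_cuda_torchvision(tv_mod):
    """Raise early when the given torchvision module was built CPU-only.

    Takes the imported torchvision module; a build without CUDA support
    has no usable torchvision.version.cuda, so we fail fast at startup
    instead of raising NotImplementedError inside non_max_suppression.
    """
    cuda = getattr(getattr(tv_mod, "version", None), "cuda", None)
    if cuda is None:
        raise RuntimeError(
            "torchvision was built without CUDA; rebuild it from source "
            "with CUDA enabled before running inference"
        )
    return cuda

# Stand-in module for illustration; in the container you would pass the
# real `torchvision` module instead.
fake_tv = SimpleNamespace(version=SimpleNamespace(cuda="12.2"))
print(assert_cuda_torchvision(fake_tv))  # prints 12.2
```

Running this in the Dockerfile (e.g. as a final `RUN` step) would make a broken image fail at build time rather than at deploy time.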

Any idea?

Hi,

Could you share with us the steps you used to install 0.16 and 0.17?
Thanks.

Hi,
I avoided the problem by creating a new image based on the running container, which has the right version.

Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.