YOLOv5 + TensorRT

Hello,

I tried to use YOLOv5 on an NVIDIA Jetson with JetPack 5 together with TensorRT, following the instructions in the last cell of the Google Colab notebook. I used the following command:

python export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0

Since TensorRT should be preinstalled with JetPack 5, I did not run the first command from the notebook. That command also does not work for me; pip reports that no matching version is available.
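To double-check whether the JetPack-provided TensorRT (and ONNX) Python bindings are actually visible to Python, without going through pip, I used a small check along these lines (just a sketch; the module names are what I expect JetPack to provide):

```python
# Check whether modules are importable from the current environment.
# importlib.util.find_spec only inspects the search path; it does not
# import the module or contact pip.
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` can be imported from the current environment."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    for mod in ("tensorrt", "onnx"):
        status = "found" if module_available(mod) else "MISSING"
        print(f"{mod}: {status}")
```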

During the python export command I get the following error:

export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['engine']
YOLOv5 v6.1-161-ge54e758 torch 1.12.0a0+2c916ef.nv22.3 CUDA:0 (Xavier, 31011MiB)

Fusing layers…
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)
/home/collins/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
/home/collins/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
requirements: nvidia-tensorrt not found and is required by YOLOv5, attempting auto-update…
ERROR: Could not find a version that satisfies the requirement nvidia-tensorrt (from versions: none)
ERROR: No matching distribution found for nvidia-tensorrt
requirements: Command 'pip install 'nvidia-tensorrt' -U --index-url https://pypi.ngc.nvidia.com' returned non-zero exit status 1.

ONNX: starting export with onnx 1.11.0…
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (repeated 6 times)
ONNX: export failure: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)

TensorRT: starting export with TensorRT 8.4.0.9…

TensorRT: export failure: failed to export ONNX file: yolov5s.onnx

Is there something I can do to fix this? As far as I understand, ONNX is already preinstalled on the Jetson, so I cannot find the issue.

Kind regards,

Robert

Hi,

Could you double-check your environment again?
JetPack 5.0 DP only supports Xavier and Orin devices. Are you using a Nano?

Thanks.

The result of sudo apt show nvidia-jetpack is:

Package: nvidia-jetpack
Version: 5.0-b114
Priority: standard
Section: metapackages
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-cuda (= 5.0-b114), nvidia-opencv (= 5.0-b114), nvidia-cudnn8 (= 5.0-b114), nvidia-tensorrt (= 5.0-b114), nvidia-container (= 5.0-b114), nvidia-vpi (= 5.0-b114), nvidia-nsight-sys (= 5.0-b114), nvidia-l4t-jetson-multimedia-api (>> 34.1-0), nvidia-l4t-jetson-multimedia-api (<< 34.2-0)
Homepage: Autonomous Machines | NVIDIA Developer
Download-Size: 29,4 kB
APT-Sources: https://repo.download.nvidia.com/jetson/t194 r34.1/main arm64 Packages
Description: NVIDIA Jetpack Meta Package

I am 100% on the Xavier.

Kind regards

Hi,

Thanks for the confirmation.
We asked for the device information because this error is usually seen on the Nano board rather than on Xavier.

Based on the error below, the failure is caused by tensors being placed on different devices (CPU vs. GPU):

ONNX: export failure: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
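The failure pattern can be sketched in plain Python (a simplified model of the behaviour, not actual PyTorch internals; in PyTorch the usual fix is to move one operand onto the other's device with .to()):

```python
# Simplified sketch of a device-mismatch error: comparing a "CPU" tensor
# with a "CUDA" tensor raises, and moving one operand to the other's
# device resolves it. FakeTensor is a stand-in, not a PyTorch class.
class FakeTensor:
    def __init__(self, value, device):
        self.value = value
        self.device = device

    def to(self, device):
        # Return a copy of the tensor placed on the requested device.
        return FakeTensor(self.value, device)

    def equal(self, other):
        if self.device != other.device:
            raise RuntimeError(
                "Expected all tensors to be on the same device, but found "
                f"at least two devices, {self.device} and {other.device}!"
            )
        return self.value == other.value


anchors = FakeTensor([10, 13], "cpu")    # e.g. a constant created on the CPU
stride = FakeTensor([10, 13], "cuda:0")  # e.g. a model buffer on the GPU

try:
    anchors.equal(stride)  # mixed devices -> raises RuntimeError
except RuntimeError as e:
    print(e)

# Moving the CPU tensor onto the GPU device first makes the comparison work.
print(anchors.to(stride.device).equal(stride))
```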

Do you get an ONNX model as output after running the script?

Thanks.
