Hello,
I tried to use YOLOv5 on an NVIDIA Jetson with JetPack 5 together with TensorRT, following the instructions in the last cell of the Google Colab notebook. I used the following command:
python export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0
Since TensorRT should already be preinstalled with JetPack 5, I skipped the first command from the notebook (the pip install of nvidia-tensorrt). That command also does not work for me anyway; it fails because pip cannot find a matching version (the same error that appears in the log below).
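To double-check that TensorRT itself is visible to Python on the Jetson (outside of pip), I would run a quick import check like this; it is only a sanity check and not part of the notebook:

python3 -c "import tensorrt; print(tensorrt.__version__)"

As far as I understand, TensorRT on JetPack is installed as a system package rather than from PyPI, which would explain why pip cannot find an nvidia-tensorrt package on the NGC index.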
When I run the python export command, I get the following output and error:
export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['engine']
YOLOv5 v6.1-161-ge54e758 torch 1.12.0a0+2c916ef.nv22.3 CUDA:0 (Xavier, 31011MiB)
Fusing layers…
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)
/home/collins/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
/home/collins/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
requirements: nvidia-tensorrt not found and is required by YOLOv5, attempting auto-update…
ERROR: Could not find a version that satisfies the requirement nvidia-tensorrt (from versions: none)
ERROR: No matching distribution found for nvidia-tensorrt
requirements: Command 'pip install 'nvidia-tensorrt' -U --index-url https://pypi.ngc.nvidia.com' returned non-zero exit status 1.
ONNX: starting export with onnx 1.11.0…
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
ONNX: export failure: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
TensorRT: starting export with TensorRT 8.4.0.9…
TensorRT: export failure: failed to export ONNX file: yolov5s.onnx
Is there something I can do to fix this? From my understanding, ONNX is already preinstalled on the Jetson, so I cannot see where the problem is.
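For completeness, this is how I would verify the ONNX and PyTorch setup on the device (just sanity checks, not commands from the notebook; versions on other setups may differ):

python3 -c "import onnx; print(onnx.__version__)"
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

The log above already reports onnx 1.11.0 and torch 1.12.0a0 with CUDA:0 available, so the packages themselves seem to be in place.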
Kind regards,
Robert