I have followed these docs/guides:
Building PyTorch: PyTorch for Jetson
Ultralytics on Jetson: NVIDIA Jetson - Ultralytics YOLO Docs
Thank you for your support.
Here is my environment info:
python3 -m torch.utils.collect_env
Collecting environment information…
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.19.1
Libc version: glibc-2.31
Python version: 3.8.19 (default, Mar 20 2024, 19:53:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.120-tegra-aarch64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.4.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 3
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 2201.6001
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 3 MiB
L3 cache: 6 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0
[pip3] torchvision==0.16.2+c6f3977
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-cuda 11.8 h8dd9ede_2 pytorch
[conda] torch 2.1.0 pypi_0 pypi
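One thing I notice in the dump above is that both cuDNN 8.7.0 and 9.4.0 libraries are present under /usr/lib/aarch64-linux-gnu. As a quick sanity check (my own suggestion, not from the guides above), this prints which CUDA/cuDNN this PyTorch build actually loads:

```python
import torch

# Report the CUDA / cuDNN that this PyTorch build actually uses.
# With both libcudnn 8.x and 9.x on the system, a mismatch here is a
# common cause of "unable to find an engine" errors from F.conv2d.
print("torch:", torch.__version__)                      # expect 2.1.0
print("CUDA (build):", torch.version.cuda)              # expect 11.8
print("cuDNN loaded:", torch.backends.cudnn.version())  # e.g. 8700 for 8.7.0
print("cuDNN available:", torch.backends.cudnn.is_available())
```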
I get the following exception when trying to export a model:
yolo export model=yolov8n.pt format=engine
WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics 8.3.7 🚀 Python-3.8.19 torch-2.1.0 CUDA:0 (Orin, 62800MiB)
YOLOv8n summary (fused): 168 layers, 3,151,904 parameters, 0 gradients
Traceback (most recent call last):
  File "/home/jacob/work/github/jpisaac/testproj/env/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/cfg/__init__.py", line 831, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/engine/model.py", line 736, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 265, in __call__
    y = model(im)  # dry runs
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/nn/tasks.py", line 111, in forward
    return self.predict(x, *args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/nn/tasks.py", line 129, in predict
    return self._predict_once(x, profile, visualize, embed)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/nn/tasks.py", line 150, in _predict_once
    x = m(x)  # run
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/ultralytics/nn/modules/conv.py", line 54, in forward_fuse
    return self.act(self.conv(x))
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/jacob/work/github/jpisaac/testproj/env/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: GET was unable to find an engine to execute this computation
Sentry is attempting to send 2 pending events
Waiting up to 2 seconds
Press Ctrl-C to quit