YOLOv8 ONNX Export Fails on Jetson – Assertion Error in stl_vector.h, Core Dumped

Hi NVIDIA Team,

I’m working on an NVIDIA Jetson device and trying to export a YOLOv8 segmentation model (best.pt) to ONNX format using the Ultralytics CLI. However, the export process fails with an assertion error and core dump:
/opt/rh/gcc-toolset-14/root/usr/include/c++/14/bits/stl_vector.h:1130:
std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type)
[with _Tp = unsigned int; _Alloc = std::allocator<unsigned int>; reference = unsigned int&; size_type = long unsigned int]:
Assertion '__n < this->size()' failed.

Aborted (core dumped)

System Details:

  • Device: NVIDIA Jetson (ARM Cortex-A78AE)
  • JetPack Version: 6.0
  • Python: 3.10.12
  • Torch: 2.3.0 (CPU only build)
  • Ultralytics: 8.3.107
  • GCC Toolset: gcc-toolset-14
  • Model: YOLOv8-seg (custom trained)
  • Command Used:
yolo export model=best.pt format=onnx simplify=True dynamic=False imgsz=640

Notes:

  • The model runs fine in PyTorch inference on the Jetson.
  • ONNX export works on my x86 system but fails on Jetson.
  • It looks like an out-of-bounds std::vector access during export.
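Since the export already succeeds on x86, one workaround I'm considering is exporting there and only running inference on the Jetson. A rough sketch of that flow (the hostname and paths below are placeholders, and this assumes the ONNX file is portable across architectures, which it should be since ONNX is platform-independent):

```shell
# On the x86 machine, where export already works:
yolo export model=best.pt format=onnx simplify=True dynamic=False imgsz=640

# Copy the resulting ONNX file to the Jetson
# (hostname and paths are placeholders):
scp best.onnx user@jetson:/home/user/models/

# On the Jetson, run inference directly from the ONNX file:
yolo predict task=segment model=/home/user/models/best.onnx imgsz=640
```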

Questions:

  1. Is PyTorch 2.3.0 officially supported on Jetson, or could this be the cause?
  2. Is there a known issue with ONNX export and vector assertions on Jetson?
  3. Should I downgrade PyTorch or do the export on x86 and run inference-only on Jetson?

Any help or pointers would be really appreciated!
Thanks,
Karthik

Hi,

Does YOLO11 work for you?
We tested a YOLO11 segmentation model and it works well on our latest JetPack 6.2 release.

$ yolo export model=yolo11n-seg.pt format=onnx
Downloading https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt to 'yolo11n-seg.pt'...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.90M/5.90M [00:02<00:00, 2.12MB/s]
Ultralytics 8.3.82 🚀 Python-3.10.12 torch-2.6.0 CPU (ARMv8 Processor rev 1 (v8l))
YOLO11n-seg summary (fused): 113 layers, 2,868,664 parameters, 0 gradients, 10.4 GFLOPs

PyTorch: starting from 'yolo11n-seg.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) ((1, 116, 8400), (1, 32, 160, 160)) (5.9 MB)

ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 2.0s, saved as 'yolo11n-seg.onnx' (11.2 MB)

Export complete (3.2s)
Results saved to /home/nvidia
Predict:         yolo predict task=segment model=yolo11n-seg.onnx imgsz=640  
Validate:        yolo val task=segment model=yolo11n-seg.onnx imgsz=640 data=/ultralytics/ultralytics/cfg/datasets/coco.yaml  
Visualize:       https://netron.app
💡 Learn more at https://docs.ultralytics.com/modes/export

Thanks.

Hi,

Thanks for confirming that YOLO11 segmentation works on JetPack 6.2 — that’s really encouraging!

I tested the same export command on my Jetson device, which is currently running JetPack 6.0 (versions 6.0+b106 and 6.0+b87):

yolo export model=yolo11n-seg.pt format=onnx

The model downloads and begins the export, but it crashes with this error:

/opt/rh/gcc-toolset-14/root/usr/include/c++/14/bits/stl_vector.h:1130: 
std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) 
[with _Tp = unsigned int; _Alloc = std::allocator<unsigned int>; reference = unsigned int&; size_type = long unsigned int]: 
Assertion '__n < this->size()' failed.
Aborted (core dumped)

Environment details:

  • Ultralytics 8.3.108
  • Python 3.10.12
  • Torch 2.3.0 (CPU)
  • Cortex-A78AE architecture
  • JetPack 6.0

It seems like the export logic (possibly from ONNXSlim or export internals) is trying to access an index that’s out of bounds in a std::vector.
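If ONNXSlim is the culprit, one way to narrow this down might be to export with simplification disabled (`simplify=False` is a standard Ultralytics export argument), so the crash can be attributed to either the slimming pass or the core exporter:

```shell
# Skip the onnxslim pass to check whether the crash comes
# from slimming or from the core ONNX export itself:
yolo export model=best.pt format=onnx simplify=False dynamic=False imgsz=640
```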

Any workaround for this issue that doesn’t require upgrading the JetPack version?

Thanks in advance!

Hi,

A possible reason is that the latest Ultralytics release is not compatible with the older JetPack 6.0 environment.

Have you checked this with the Ultralytics team?
We would expect there to be an earlier version that works on JetPack 6.0.
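If an older release does work, pinning it in a clean virtual environment would be a way to test that. A minimal sketch — the pinned version below is only illustrative; the last version that worked on JetPack 6.0 would need to be found by trial:

```shell
# Create a clean environment and pin an older Ultralytics release;
# 8.2.0 here is illustrative, not a known-good version for JetPack 6.0:
python3 -m venv ultra-env
source ultra-env/bin/activate
pip install "ultralytics==8.2.0"
yolo export model=best.pt format=onnx imgsz=640
```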

Thanks.

Hi @AastaLLL,
I checked with the Ultralytics team, and they replied:
"You could open an issue on NVIDIA Forums since it seems platform specific."
Do I need to downgrade the Jetson?

You can try using the Ultralytics Docker image for Jetson to run the export.
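For reference, a rough sketch of that approach — the image tag follows the pattern used in the Ultralytics docs for JetPack 6 builds, but please verify the exact tag for your JetPack version on Docker Hub:

```shell
# Pull the Ultralytics image built for Jetson / JetPack 6
# (check Docker Hub for the exact tag matching your JetPack):
sudo docker pull ultralytics/ultralytics:latest-jetson-jetpack6

# Run it with GPU access and a bind mount for the model file
# (/path/to/models is a placeholder for your local model directory):
sudo docker run -it --ipc=host --runtime=nvidia \
    -v /path/to/models:/models \
    ultralytics/ultralytics:latest-jetson-jetpack6 \
    yolo export model=/models/best.pt format=onnx imgsz=640
```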

Hi,

Is upgrading to the latest JetPack 6.2 an option for you?
The segmentation model exports successfully on that release.

Thanks.