Yolov8 on Orin NX with Jetpack 5.1.2

Hi,
I am trying to get a YOLOv8 model running on my Orin NX (8GB) module.
I thought I'd use the DeepStream-Yolo repo (GitHub - marcoslucianops/DeepStream-Yolo, the NVIDIA DeepStream SDK implementation for YOLO models) for the nvinfer adapter.
I build the model with Ultralytics on Colab as in the standard notebooks, then export it to ONNX on Colab (opset 13; I also tried 17). After that I transfer the ONNX file to the device and run INT8 calibration as described in the DeepStream-Yolo repo. The test video runs fine.
When I then try to run my own videos through my own pipeline, I get a segfault.
I am out of ideas on this one. My old INT8 engine, built from a YOLOv4 model, works fine (if with low accuracy) with the very same setup. Replace the engine: segfault.
UPDATE: I have in the meantime changed the process to do the ONNX export on the target system. I have also written a Python script to do the INT8 calibration and build the TensorRT engine in a more controlled way, directly with TensorRT on the device. Same result: segfault when the first inference is done.
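As a point of comparison for the scripted build described above, the on-device INT8 engine can also be built with trtexec, which ships with TensorRT on JetPack. The file names below are placeholders, not the ones from this thread:

```shell
# Sketch of an on-device INT8 engine build with trtexec
# (JetPack installs it under /usr/src/tensorrt/bin).
# yolov8s.onnx, calib.cache and the engine name are placeholders.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov8s.onnx \
    --int8 \
    --calib=calib.cache \
    --saveEngine=model_b1_gpu0_int8.engine \
    --verbose
```

If the resulting engine also crashes outside DeepStream (e.g. when loaded back with `trtexec --loadEngine=...`), the problem is likely in the engine or the ONNX export; if it only crashes inside the pipeline, the nvinfer configuration (network-mode, num-detected-classes, the custom bounding-box parser) is the more likely suspect.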

You can use the gdb tool to locate the crash first.

$ gdb --args <your_command>
(gdb) r
After the segmentation fault:
(gdb) bt
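If the interactive session is awkward, gdb can also run non-interactively and dump the backtrace automatically (same placeholder command as above):

```shell
# Batch mode: start the program, and when it crashes, print the backtrace
# and exit without waiting at the (gdb) prompt.
gdb -batch -ex run -ex bt --args <your_command>
```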

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.