Segmentation fault (core dumped) when running a YOLOv8 ONNX model with DeepStream-Yolo

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** = Jetson Nano
**• DeepStream Version** = 6.0.1
**• JetPack Version (valid for Jetson only)** = 4.6.1
**• TensorRT Version** = 8.2.1.8-1+cuda10.2
**• Python version** = 3.6.9
**• Issue Type (questions, new requirements, bugs)** = Bug
I am getting

Error: Segmentation fault (core dumped)

while running

deepstream-app -c deepstream_app_config.txt

Hello NVIDIA Team,

I will divide my process into two parts.

First part: creation of the ONNX model
I followed the first four steps from this link.

The first three steps were successful, but while executing the 4th step

sudo python3 export_yoloV8.py -w yolov8s.pt --dynamic

I got the error below:

Illegal instruction

So I created and downloaded the ONNX model using the link below.

I put the yolov8n.onnx model file in the DeepStream-Yolo folder.

Second part: using that ONNX model to run the program
I followed the 5th step from this link below.

  1. Output when executing

CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

Terminal Output

  2. Output when executing

deepstream-app -c deepstream_app_config.txt

Please let me know why I am getting this error and what the solution for it is.

The model you are using seems to be incorrect; you can see there is only one OUTPUT, with size 84x6300.
Below is the output of the yolov8s.onnx exported by export_yoloV8.py (run on DeepStream 6.3); you can see the output has 3 parts, bbox/scores/classes:
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

Yes, you are right, I had taken the ONNX model from a different place.

Now I have generated the ONNX model using this command

python3 export_yoloV8.py -w yolov8s.pt --dynamic

from this link

I followed all the steps in the above link and was able to generate the ONNX model, but while testing

deepstream-app -c deepstream_app_config.txt

I am getting an error. Please see the terminal images below for the output:

Why am I getting this error? I followed exactly the same steps.

Please refer to this topic.

Yes, I already referred to this topic, but I didn't encounter the same error that he did. I have also checked the solution and already taken care of it, but the error persists.

About that "Assertion failed: inputs.at(0).isInt32() && " error, please refer to this topic. TensorRT 8.2.1 can't support that model directly.

I followed the topic, but sorry, I was not able to figure out what I need to change and how. Please help.

Sorry for the late reply. You can use "python3 export_yoloV8.py -w yolov8s.pt --dynamic --simplify" to generate the model; I have verified this on Jetson Nano with DS 6.0.1. Here is the log:
log.txt (3.5 KB)
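For completeness, the nvinfer config then has to point at the exported ONNX file and at the custom parser library built earlier. A sketch of the relevant keys, following the DeepStream-Yolo README conventions (file names, engine name, and the 80-class COCO count are assumptions for a stock yolov8s model; adjust to your setup):

```
# config_infer_primary_yoloV8.txt (sketch, not a verified config)
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
gie-unique-id=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
```

The deepstream_app_config.txt's [primary-gie] section must reference this file via config-file for deepstream-app to pick it up.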

Thank you for your assistance.
Yes, I guess this will work as per the log you provided, but I am unable to try it, as I am getting an error installing onnxsim.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

OK, please fix the onnxsim installation issue first.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.