Segmentation fault when running DeepStream-Yolo with Yolov8

• Hardware Platform (Jetson / GPU) : Jetson Orin Nano 8GB dev kit
• DeepStream Version : 6.2
• JetPack Version : 5.1.1-b56 installed on a 500GB NVMe
• TensorRT Version :
• Issue Type : bug
• How to reproduce the issue ?

Hi, I’m trying to run a YOLOv8n model in a DeepStream pipeline using the following repo: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models, and followed these instructions:

I have no problem generating the engine or deserializing it with DeepStream or even TensorRT (trtexec), but once DeepStream finishes deserializing it and starts running the pipeline, the window opens with a black image and then closes abruptly with a segmentation fault.

Here is an example of the log output:

$ deepstream-app -c deepstream_app_config.txt
0:00:04.714466199 32182 0xaaaae4fb96c0 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/yumain/workspace/DeepStream-Yolo/model_fp16.plan
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kHALF  images          3x640x640       
1   OUTPUT kHALF  output0         84x8400         

0:00:04.931299491 32182 0xaaaae4fb96c0 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/yumain/workspace/DeepStream-Yolo/model_fp16.plan
0:00:04.961969137 32182 0xaaaae4fb96c0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/yumain/workspace/DeepStream-Yolo/config_infer_primary_yoloV8.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:239>: Pipeline ready

NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 

**PERF:  FPS 0 (Avg)	
**PERF:  0.00 (0.00)	
** INFO: <bus_callback:225>: Pipeline running

Segmentation fault (core dumped)

I tried different gie configurations (fp32, fp16), with engines generated by trtexec or built directly by DeepStream from an ONNX file, but the result is always the same.
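For reference, the trtexec invocation I used to build the fp16 engine looked roughly like this (file names are placeholders matching my setup, not required names):

```shell
# Build a TensorRT engine from the exported ONNX model.
# --fp16 enables half-precision; --saveEngine writes the serialized plan.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --fp16 \
    --saveEngine=model_fp16.plan
```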

I was able to run several DeepStream sample apps on this device without any problem, but none with YOLOv8 in place.
Could you help me, please?

Issue solved: I was missing the "--dynamic" flag when exporting my ONNX model.
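For anyone hitting the same segfault, the corrected export step looks roughly like this (assuming the export_yoloV8.py script from the DeepStream-Yolo repo; the weights file name is just my example):

```shell
# Export the PyTorch weights to ONNX with dynamic batch support,
# which the DeepStream-Yolo parser expects.
python3 export_yoloV8.py -w yolov8n.pt --dynamic
```

After re-exporting with --dynamic and rebuilding the engine, the pipeline ran without the segmentation fault.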

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.