Hello,
I am encountering a segmentation fault while running a DeepStream pipeline that uses a custom TensorRT engine (YOLOv8). Below are the details of the setup and the error:
Model: NVIDIA Orin NX Developer Kit - JetPack 5.1.2
Hardware:
Module: NVIDIA Jetson Orin NX (16 GB RAM)
Platform:
Distribution: Ubuntu 20.04 focal
Libraries:
CUDA: 11.4.315
cuDNN: 8.6.0.166
TensorRT: 8.5.2.2
VPI: 2.3.9
Vulkan: 1.3.204
OpenCV: 4.5.5 - with CUDA: YES
Error Output
The pipeline initializes successfully, and the TensorRT engine (i2.engine) is deserialized without issues.
During pipeline execution, the following warning appears:
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
Shortly after, the pipeline crashes with a segmentation fault:
Segmentation fault (core dumped)
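For context on the warning above: the kEXPLICIT_BATCH flag it mentions refers to how the network was created at engine-build time, and engines built this way always report getMaxBatchSize() == 1. A rough sketch of what that creation looks like in TensorRT 8.x (the builder pointer is assumed to exist; this is not the actual build code of this engine):

// Sketch: explicit-batch network creation in TensorRT 8.x.
// Engines built from such a network always return 1 from getMaxBatchSize(),
// which is what the warning in the log refers to.
#include <cstdint>
#include "NvInfer.h"

nvinfer1::INetworkDefinition *makeExplicitBatchNetwork(nvinfer1::IBuilder *builder)
{
    const uint32_t flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    return builder->createNetworkV2(flags);
}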
Steps Taken So Far
Verified that the TensorRT engine (i2.engine) is compatible with the platform’s TensorRT version.
Confirmed that the config_infer_primary_bins.txt file is correctly configured and loads successfully.
Ensured that the RTSP source is working and provides a valid stream.
Questions
What could be the possible reasons for the segmentation fault in this context?
Is the warning related to the kEXPLICIT_BATCH flag a potential cause for this issue? If so, how can I address it?
Are there any additional debug steps I can take to identify the root cause of the segmentation fault?
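For reference, one generic way to capture the failing call stack (a sketch, assuming the pipeline is launched with the stock deepstream-app binary; the config file name below is a placeholder):

gdb --args deepstream-app -c deepstream_app_config.txt
(gdb) run
... reproduce the crash ...
(gdb) bt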
Any help or suggestions to resolve this issue would be greatly appreciated.
Thank you for your response.
DeepStream-6.3
Based on inspection of the stack trace, I have identified that execution reaches the NvDsInferParseCustomYoloV8 function in the custom YOLOv8 parsing logic. The tensor in question is named output0, and its shape appears to be quite complex.
Given this tensor shape, the issue may arise from how output0 is processed in the parsing function. Could you confirm whether there is a dedicated parsing function for custom YOLO models that handles the output0 tensor? That would help ensure the parsing logic is properly aligned with the tensor layout and might resolve the parsing issue.
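For illustration, here is a minimal sketch of what such a parser could look like. This is not NVIDIA's official parser; it assumes the common raw ultralytics export layout output0 = [4 + num_classes, num_anchors] (e.g. [84, 8400] for COCO) and the standard DeepStream custom-parser prototype from nvdsinfer_custom_impl.h. The bounds checks are the important part: indexing past the output buffer in a parser like this is a classic cause of exactly this kind of segmentation fault.

/* Hypothetical sketch -- not NVIDIA's official YOLOv8 parser. Assumes
 * output0 is the raw ultralytics head [4 + num_classes, num_anchors]. */
#include <algorithm>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYoloV8(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;                  // no output tensor at all

    const NvDsInferLayerInfo &layer = outputLayersInfo[0];
    if (layer.inferDims.numDims != 2)  // layout mismatch: bail out
        return false;                  // instead of reading out of bounds

    const int numAttrs   = layer.inferDims.d[0];  // 4 box coords + classes
    const int numAnchors = layer.inferDims.d[1];
    const int numClasses = numAttrs - 4;
    if (numClasses <= 0 || numClasses > detectionParams.numClassesConfigured)
        return false;

    const float *data = static_cast<const float *>(layer.buffer);
    for (int a = 0; a < numAnchors; ++a) {
        // Attributes are stored plane by plane:
        // attribute i of anchor a lives at data[i * numAnchors + a].
        float cx = data[0 * numAnchors + a];
        float cy = data[1 * numAnchors + a];
        float w  = data[2 * numAnchors + a];
        float h  = data[3 * numAnchors + a];

        int   bestClass = -1;
        float bestScore = 0.f;
        for (int c = 0; c < numClasses; ++c) {
            float s = data[(4 + c) * numAnchors + a];
            if (s > bestScore) { bestScore = s; bestClass = c; }
        }
        if (bestClass < 0 ||
            bestScore < detectionParams.perClassPreclusterThreshold[bestClass])
            continue;

        NvDsInferParseObjectInfo obj{};
        obj.classId = bestClass;
        obj.detectionConfidence = bestScore;
        obj.left   = std::max(cx - w / 2.f, 0.f);
        obj.top    = std::max(cy - h / 2.f, 0.f);
        obj.width  = std::min(w, networkInfo.width  - obj.left);
        obj.height = std::min(h, networkInfo.height - obj.top);
        objectList.push_back(obj);
    }
    return true;
}
/* Sanity-check the exported symbol against DeepStream's expected prototype. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);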
Thank you again for your help, and I look forward to your advice on how to proceed.
Where did you get this API? We currently have no post-processing for the YOLOv8 model in DeepStream. If you have the source code for this API, we recommend that you first debug it yourself against the model you are using.
Thank you for your response. I implemented the solution using the following repository: YOLOv8-TensorRT by triple-Mu.
Could you please confirm which versions of YOLO are officially supported by DeepStream? Additionally, do you have any suggestions or recommendations for adapting YOLOv8 for DeepStream if there is no official post-processing support?
Since we currently have no post-processing for the YOLOv8 model in DeepStream, we suggest you follow the code of the project I attached earlier, nvdsparsebbox_Yolo.cpp. If you have any questions, you can consult the project owner directly.
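As a general pointer, whichever parser is used only takes effect if it is registered in the nvinfer config (config_infer_primary_bins.txt in this setup). A minimal sketch of the relevant keys, with the library path and engine name as placeholders that must match your build:

[property]
# Engine and network type (0 = detector).
model-engine-file=i2.engine
network-type=0
# NMS-style clustering of the parsed boxes.
cluster-mode=2
# Custom bbox parser: exported function name and the library that contains it.
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=/path/to/libnvdsinfer_custom_impl_yolo.so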