YOLOv4 inference stuck in the Jetson NX DeepStream pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson NX
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only) L4T 32.7.1
• TensorRT Version 8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,
I trained the YOLOv4-tiny network on my own dataset with the TAO Toolkit and deployed it on my Jetson NX through the DeepStream SDK.
However, there is a problem: after the network successfully infers a few frames, the whole pipeline appears to be stuck, without any ERROR or WARNING. The most obvious symptom is that the nveglglessink window appears to freeze.
When I debug the program with VS Code, every thread in the call stack shows as running and nothing looks wrong, which is very strange.

The log and the result are attached below as screenshots.

The YOLOv4 PGIE config file is shown below:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=yolov4_labels.txt
model-engine-file=../export/yolov4_cspdarknet_tiny_epoch_050.etlt_b1_gpu0_int8.engine
int8-calib-file=../export/cal.bin
tlt-encoded-model=../export/yolov4_cspdarknet_tiny_epoch_050.etlt
tlt-model-key=YWZsMWNlMW40NTNpaG8zb2dtZDM5aGFlZWk6NGE2NDg1NmUtY2U2Mi00ZGUxLWIwNTgtOTQ2NzdjMWM4ZWMw
infer-dims=3;384;1248
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../post_processor/libnvds_infercustomparser_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

What’s your pipeline?
Can you try to run it with a GStreamer command line?
I suspect this is not caused by the model or its config file, but by one of the other GStreamer plugins in your pipeline.
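For example, a minimal sketch along these lines (the source file, stream resolution, and config filename are placeholders, assuming an H.264 elementary-stream input):

gst-launch-1.0 filesrc location=test.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 \
    nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=pgie_yolov4_tiny_config.txt ! \
    nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

If the model were at fault, this command line should stall in the same way; if it runs cleanly, the problem is in the application code around the pipeline.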

Yes, it is not about the model. I have found the cause.
It is due to the appsink element: in the new-sample callback, the sample has to be pulled from the sink and then unreffed; otherwise, the pipeline blocks there.

sample = gst_app_sink_pull_sample (GST_APP_SINK (sink));
/* ... some operation on the sample ... */
gst_sample_unref (sample);
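
For reference, a fuller sketch of that pattern (function and variable names are illustrative; this assumes the callback was registered with gst_app_sink_set_callbacks()):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Illustrative new-sample callback: every sample must be pulled and
   unreffed, otherwise appsink's internal queue fills up, upstream
   elements block, and the whole pipeline appears frozen. */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (sample == NULL)
    return GST_FLOW_EOS;  /* appsink is stopping or has no more data */

  GstBuffer *buffer = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* ... process map.data / map.size here ... */
    gst_buffer_unmap (buffer, &map);
  }

  gst_sample_unref (sample);  /* the unref that prevents the stall */
  return GST_FLOW_OK;
}

/* Registration, e.g. right after creating the appsink element: */
static void
attach_callbacks (GstElement *appsink)
{
  GstAppSinkCallbacks callbacks = { NULL, NULL, on_new_sample };
  gst_app_sink_set_callbacks (GST_APP_SINK (appsink), &callbacks,
                              NULL, NULL);
}

Returning GST_FLOW_OK tells appsink the sample was consumed; if the callback never pulls and unrefs the sample, appsink's bounded internal queue fills up and upstream elements block on it, which is exactly the silent freeze described above.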

Thanks for the update!
I will close this topic then.