DeepStream inference hangs at a specific frame every time when cascading a secondary network

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bugs

I want to recognize human behavior. My steps are as follows (a rough pipeline sketch follows the list):

  1. Identify the location of people.
  2. Use OpenPose to detect the key points of each person.
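
For context, the cascade is wired like deepstream-test2: the PeopleNet pgie detects people, and the pose-estimation sgie runs only on those detected objects (process-mode=2 and operate-on-gie-id=1 in the config below). Here is a rough sketch of the pipeline order; my actual app is the modified C deepstream-test1, so the Python/parse_launch form and the sample file name here are only a simplification for illustration:

```python
#!/usr/bin/env python3
# Rough sketch of the cascaded pipeline (PeopleNet pgie -> pose sgie).
# Approximation of my modified deepstream-test1, not the exact code.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    'filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 '
    'nvstreammux name=mux batch-size=1 width=1280 height=720 ! '
    'nvinfer config-file-path=dstest1_pgie_config.txt ! '  # PeopleNet detector, gie-unique-id=1
    'nvinfer config-file-path=dstest1_sgie_config.txt ! '  # pose sgie on detected persons, gie-unique-id=2
    'nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink'
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```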

The system works well in real time, but it blocks at a certain frame, and every run blocks at that same frame, as the following picture shows:

Running each model separately works fine, so the models and the video file are OK. But when the two models are cascaded, the pipeline blocks.

I suspect the blocking happens because the first model runs fast while the second model is slow, but I don’t know how to solve this. Has anyone had a similar problem? Can anyone help me?

I modified deepstream-test1 to build this. My config files are as follows:

(1)dstest1_pgie_config.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=../../../../samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
labelfile-path=labels_peoplenet.txt
model-engine-file=../../../../samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
input-dims=3;544;960;0
uff-input-blob-name=input_1
#force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=3
cluster-mode=1
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.5

# Set eps=0.7 and minBoxes for cluster-mode=1 (DBSCAN)

eps=0.6
minBoxes=1

(2)dstest1_sgie_config.txt
[property]
gpu-id=0
#net-scale-factor=1
net-scale-factor=0.0174292
#net-scale-factor=0.0039215697906911373
offsets=123.675;116.28;103.53
model-file=pose_estimation.onnx
model-engine-file=pose.engine

force-implicit-batch-dim=1
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=2
input-object-min-width=35
input-object-min-height=56
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
#is-classifier=1
classifier-async-mode=1
classifier-threshold=0.3
output-tensor-meta=1
network-type=100
workspace-size=3000
#secondary-reinfer-interval=15

Hey, which 2nd model (the OpenPose one) are you using, and how do you handle the post-processing?

I am using the model provided by Creating a Human Pose Estimation Application with NVIDIA DeepStream | NVIDIA Developer Blog, and the post-processing also follows what that blog provides.
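
To add a bit more detail: since the sgie config sets network-type=100 and output-tensor-meta=1, nvinfer does no built-in parsing and instead attaches the raw output tensors (heatmaps and part-affinity fields) as user meta on every detected person, and the post-processing from the blog reads them in a pad probe downstream of the sgie. My app does this in C; the sketch below only illustrates where the tensors show up, using the pyds names from the DeepStream Python samples, and makes no assumption about your layer names or shapes:

```python
# Illustration only: where the sgie raw tensors appear when output-tensor-meta=1.
# API names follow the DeepStream Python (pyds) samples; my actual app is in C.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def downstream_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == \
                        pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    # tensor_meta.num_output_layers / tensor_meta.output_layers_info(i)
                    # hold the raw pose tensors for this person crop; the blog's
                    # post-processing decodes the key points from them.
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```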

What is the batch size for the 2nd model? Could you try increasing the batch size and run again?

Thanks, I re-converted the 2nd ONNX model to increase the batch size, and it works now.
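
For anyone hitting the same hang: the fix was to re-export pose_estimation.onnx with a larger batch dimension, set batch-size in dstest1_sgie_config.txt to the same value, and delete the old pose.engine so nvinfer rebuilds it. Below is roughly how the re-export can be done; the trt_pose model builder, checkpoint file name, 224x224 input size, and output names are assumptions based on the trt_pose repository that the blog uses, so adjust them to your own export path:

```python
# Sketch: re-export the pose model with a larger static batch size.
# Assumed: trt_pose resnet18_baseline_att backbone, 224x224 input, standard
# checkpoint name, output names 'cmap'/'paf' -- adapt these to your model.
import torch
import trt_pose.models

MAX_BATCH = 16  # largest number of people the pgie is expected to detect per frame

model = trt_pose.models.resnet18_baseline_att(18, 2 * 21).eval()
model.load_state_dict(
    torch.load('resnet18_baseline_att_224x224_A_epoch_249.pth',
               map_location='cpu'))

dummy = torch.zeros((MAX_BATCH, 3, 224, 224))
torch.onnx.export(
    model, dummy, 'pose_estimation.onnx',
    input_names=['input'],
    output_names=['cmap', 'paf'],
    opset_version=11)
```

After the export, batch-size=16 (or whatever value you chose) goes into dstest1_sgie_config.txt, and the old engine file has to be removed so it is regenerated with the new batch size.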