DeepStream pipeline stops when it runs into multiple faces while doing face recognition

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU V100
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.3
• NVIDIA GPU Driver Version (valid for GPU only) 460+
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I have a DeepStream app running which takes input from 4 RTSP cameras for the task of facial recognition. The pipeline is as follows:

rtsp(srcs)*4 → nvstreammux → queue → face_detector → queue → face_classifier → queue → fakesink
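
(The app is built with the DeepStream Python bindings; very roughly, leaving out the per-camera source bins, pad requests and error handling, the elements are linked as in the sketch below. The variable names are placeholders, not the actual code.)

# Rough linking order only; creation of the 4 RTSP source bins and the
# nvstreammux sink-pad requests are omitted.
streammux.link(queue1)
queue1.link(face_detector)       # nvinfer PGIE (face detector)
face_detector.link(queue2)
queue2.link(face_classifier)     # nvinfer SGIE (facenet classifier)
face_classifier.link(queue3)
queue3.link(fakesink)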

The issue arises when more than 5 faces are seen at a time: the pipeline halts without emitting any error message. The app still shows up as a running process but does not produce any output.

Can you refer to "Pipeline freezes when secondary batchsize less than detected objects" (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums)?

Hi @Fiona.Chen

The link you have pointed me towards uses nvinferserver for inference; I am currently using NVIDIA's nvinfer GStreamer plugin to run inference. Could you point me to a solution using nvinfer?

Attaching config files for reference:

PGIE:
[property]
gpu-id=0
process-mode=1
net-scale-factor=0.0039215697906911373
#model-file=./models/Secondary_FaceDetect/fd_lpd.caffemodel
#proto-file=./models/Secondary_FaceDetect/fd_lpd.prototxt
model-engine-file=/home/nxtgen/deepstream-fr/models/Secondary_FaceDetect/fd_lpd.caffemodel_b1_gpu0_fp16.engine
labelfile-path=./models/Secondary_FaceDetect/labels.txt
#int8-calib-file=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=2
num-detected-classes=3
interval=2
gie-unique-id=1
output-blob-names=output_bbox;output_cov
input-object-min-width=64
input-object-min-height=64
maintain-aspect-ratio=1

# Person has class-id 2 for the primary detector. This ensures that this secondary
# detector only works on persons.
#operate-on-class-ids=2

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

SGIE:

[property]
gpu-id=0
net-scale-factor=0.0039215686274
#net-scale-factor=1
#force-implicit-batch-dim=1
#onnx-file=./facenet.onnx
model-engine-file=facenet.onnx_b1_gpu0_fp16.engine
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
#infer-dims=3;40;160
#input-object-min-width=30
#input-object-min-height=30
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=0
classifier-threshold=0.0
process-mode=2
output-tensor-meta=1
#interval=5
#scaling-filter=1
#scaling-compute-hw=0

What is your config for the face_classifier? What is the config of nvstreammux? Can you try running your pipeline with "gst-launch-1.0"?
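
For example, something along these lines (just a sketch to check whether the hang reproduces outside the app; it assumes H.264 RTSP streams, and the config file names and camera URIs are placeholders):

gst-launch-1.0 nvstreammux name=mux batch-size=4 width=640 height=360 live-source=1 batched-push-timeout=400000 ! queue ! nvinfer config-file-path=pgie_config.txt ! queue ! nvinfer config-file-path=sgie_config.txt ! queue ! fakesink \
rtspsrc location=rtsp://<camera-1> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
rtspsrc location=rtsp://<camera-2> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_1 \
rtspsrc location=rtsp://<camera-3> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_2 \
rtspsrc location=rtsp://<camera-4> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_3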

Streammux:

streammux.set_property('width', 640)
streammux.set_property('height', 360)
streammux.set_property('batch-size', 4)
streammux.set_property('batched-push-timeout', 400000)
streammux.set_property('attach-sys-ts', True)
streammux.set_property('compute-hw', 1)
streammux.set_property('live-source', 1)

Face classifier:

[property]
gpu-id=0
net-scale-factor=0.0039215686274

model-engine-file=facenet.onnx_b1_gpu0_fp16.engine
batch-size=1

network-mode=2
#infer-dims=3;40;160
#input-object-min-width=30
#input-object-min-height=30
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=0
classifier-threshold=0.0
process-mode=2
output-tensor-meta=1
#interval=5
#scaling-filter=1
#scaling-compute-hw=0

The pipeline normally runs fine. It only stops executing when a large number of faces is detected at one camera.

If you only use PGIE, can the pipeline run?
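
Also, the thread linked earlier describes exactly this symptom when the secondary GIE's batch size is smaller than the number of detected objects, and your SGIE (facenet) config uses batch-size=1 with a b1 engine. A sketch of the change to try on the nvinfer side (illustrative value only; the engine file name below is hypothetical and the engine has to be regenerated for the larger batch):

[property]
# ... other properties unchanged ...
# should cover the maximum number of faces expected in one batch
batch-size=16
# hypothetical name; rebuild the engine for the new batch size,
# or drop this line and let nvinfer generate it
model-engine-file=facenet.onnx_b16_gpu0_fp16.engine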

Will run the test and get back to you. The error occurrence is very rare within 24 hours, so I will post an update 24 hours from now.

But there is no error in just running PGIE.

The pipeline did not fail. I also checked the interval parameter: at the cost of skipped frames, the amount of time between failures is reduced when the interval is increased.