No Detection for CenterFace Model Inference with DeepStream

Hi Nvidia Team,

I am trying to run CenterFace model inference (following exactly this guide: deepstream_triton_model_deploy/centerface at master · NVIDIA-AI-IOT/deepstream_triton_model_deploy · GitHub) on my laptop using the DeepStream container nvcr.io/nvidia/deepstream:5.0.1-20.09-triton. The app runs successfully with the DeepStream and Triton Server integration (I get detections in the output) when I follow the reference in the GitHub repo exactly.

But when I try to run inference with DeepStream alone (no Triton Server integration), there are no detections in the generated .mp4 output file.

I am attaching the config files below:
1.) config_infer_primary.txt:

[property]
gpu-id=0
#net-scale-factor=0.0039215697906911373
net-scale-factor=1.0
model-color-format=0
onnx-file=model.onnx
model-engine-file=centerface_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
output-blob-names=537;538;539;540
parse-bbox-func-name=NvDsInferParseCustomCenterNetFace
custom-lib-path=libnvds_infercustomparser_centernet.so
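
For reference, parse-bbox-func-name must match a function exported from custom-lib-path with DeepStream's standard custom bbox-parser prototype from nvdsinfer_custom_impl.h. A minimal sketch, with an illustrative body rather than the repo's actual implementation:

#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomCenterNetFace(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // One entry per blob listed in output-blob-names (537;538;539;540):
    // for CenterFace these are the heatmap, scale, offset and landmark blobs.
    if (outputLayersInfo.size() != 4)
        return false; // wrong number of output layers bound

    // ...decode the heatmap/scale/offset blobs into objectList here...
    return true;
}

// nvinfer checks the exported symbol against the expected prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomCenterNetFace);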

2.) deepstream_app_config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=kitti-trtis

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file:/workspace/trt/centerface/4.mp4
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
iframeinterval=10
bitrate=2000000
output-file=/workspace/trt/centerface/out.mp4
source-id=0


[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
batch-size=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=centerface.txt

[tests]
file-loop=0

Please help me resolve this issue.

Thanks in advance,
Darshan C G

centerface_nvinfer.tgz (4.9 KB)

Please try the attached change.
I verified that it works on my side.

Launch docker
$ docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/$USER/:/home/$USER/ -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream nvcr.io/nvidia/deepstream:5.1-21.02-triton

In docker
# git clone https://github.com/NVIDIA-AI-IOT/deepstream_triton_model_deploy.git
# cd deepstream_triton_model_deploy/
# tar xpf …/centerface_nvinfer.tgz
# cd centerface/centerface/1/ && ./run.sh
# cd customparser/ && make
# cd nvinfer_config
# deepstream-app -c source1_primary_detector.txt

Hi, I ran into the same problem with version 5.1; there were no visual results in the output.

@mchi I have also followed the above instructions, but the out.mp4 doesn’t show boxes on faces.

I confirmed again that it works.

Adding a few more commands below:

# pip3 install onnx
# git clone https://github.com/NVIDIA-AI-IOT/deepstream_triton_model_deploy.git
# cd deepstream_triton_model_deploy/
# tar xpf …/centerface_nvinfer.tgz
# cd centerface/centerface/1/ && ./run.sh
# cd customparser/
# make clean                // clean the original lib
# make
# cd nvinfer_config
# wget https://developer.nvidia.com/blog/wp-content/uploads/2020/02/Redaction-A_1.mp4  // download the test video
# deepstream-app -c source1_primary_detector.txt
 // check the out.mp4

Hello, I installed the DeepStream 5.1 SDK on my own server and did not use deepstream_triton. Could that be related to this issue?

You can try the nvcr.io/nvidia/deepstream:5.1-21.02-triton container first.

After installing the 5.1 DS SDK, shouldn’t there be no need to rely on Docker?

Yes, you can certainly run DS outside of Docker.

I deployed it myself. In my DS 5.1 environment I have the same problem as the original poster: I can’t see any detections in the output MP4.

Try hardcoding the heatmap dimensions in the custom parser code.

So do you think it is a calculation problem in the parsing code? Could you share the changes?

Change these two lines and check:

int fea_h = 120; //#heatmap.size[2];
int fea_w = 160; //#heatmap.size[3];
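
For reference, 120 x 160 is the heatmap size for a 480 x 640 input (CenterFace downsamples by 4). A sketch of deriving the size from the layer info instead of hardcoding it, assuming outputLayersInfo[0] is the heatmap blob and that inferDims holds C, H, W (nvinfer reports dimensions without the batch axis):

// Hypothetical alternative to the hardcoded values: read the feature-map
// size from the heatmap layer's reported dimensions.
const NvDsInferLayerInfo &heatmap = outputLayersInfo[0];
int fea_h = heatmap.inferDims.d[1]; // H: 120 for a 480 x 640 input
int fea_w = heatmap.inferDims.d[2]; // W: 160 for a 480 x 640 input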

thanks

I saw a PR on your GitHub. There seems to be an error in the array dimensions in your code. I followed this PR and the problem is gone.

Sorry, which PR?
Did you get it working now with the PR?

Yes, the face boxes are output normally now; it was a small error in the array dimensions. See: “A small error is fixed: the BOX cannot be output. In fact, the h and w of the network output dimension are wrong, resulting in a product of 0.” by positive666 · Pull Request #14 · NVIDIA-AI-IOT/deepstream_triton_model_deploy · GitHub
