Custom model with DeepStream SDK

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU (2080 ti)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 450.102.04
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

deepstream-app -c source_config_file.txt

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi, I used a custom model for face detection (CenterFace). Link-

I have a few doubts:

I tried to run it with the DeepStream SDK and updated the config file as well, but there was no face detection.
I debugged the heatmap dims output in custom_parser.cpp.
Debug results:
heatmap->inferDims[0] = 1
heatmap->inferDims[1] = 8
heatmap->inferDims[2] = 8
heatmap->inferDims[3] = 0
face detected = 0

While debugging with the Docker Triton-server-based DeepStream (where I do get detections), I got:
heatmap->inferDims[0] = 1
heatmap->inferDims[1] = 1
heatmap->inferDims[2] = 120
heatmap->inferDims[3] = 160
face size 3

What is the issue with the model dims? How can I use this model with the DeepStream SDK? Does the parsing logic need to be changed?
I also provided infer-dims = 0 (NCHW) in the infer config, but got no results.
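For reference, in the Gst-nvinfer config file `infer-dims` takes the input binding's dimensions in `C;H;W` form (no batch axis), not an NCHW/NHWC layout flag. A minimal sketch of a primary-GIE config for this setup, assuming a 480x640 ONNX model; the file paths and the parser function name below are placeholders, not the thread's actual files:

```ini
[property]
gpu-id=0
net-scale-factor=1.0
# placeholder path to the resized ONNX model
onnx-file=centerface_480x640.onnx
batch-size=1
# 0 = FP32
network-mode=0
num-detected-classes=1
gie-unique-id=1
# C;H;W of the input binding (nvinfer reports dims without the batch axis)
infer-dims=3;480;640
# output tensor names from the TensorRT binding log in this thread
output-blob-names=537;538;539;540
# placeholder parser symbol and library path
parse-bbox-func-name=NvDsInferParseCustomCenterFace
custom-lib-path=./libnvds_infercustomparser.so
```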

I also tried to load the model in standalone TensorRT code, and it gave me an error for the model.
The error is:
[TRT] binding to input 0 input.1 binding index: 0
[TRT] binding to input 0 input.1 dims (b=1 c=1 h=3 w=480) size=5760
[TRT] binding to output 0 537 binding index: 1
[TRT] binding to output 0 537 dims (b=1 c=1 h=1 w=120) size=480
[TRT] binding to output 1 538 binding index: 2
[TRT] binding to output 1 538 dims (b=1 c=1 h=2 w=120) size=960
[TRT] binding to output 2 539 binding index: 3
[TRT] binding to output 2 539 dims (b=1 c=1 h=2 w=120) size=960
[TRT] binding to output 3 540 binding index: 4
[TRT] binding to output 3 540 dims (b=1 c=1 h=10 w=120) size=4800

Is there any difference between the Triton-server-based DeepStream app and the local DeepStream SDK?
Any suggestions?

Hi, your model is CenterFace; where is it from?

The model is from -
CenterFace/models/onnx at master · Star-Clouds/CenterFace · GitHub

Any update?

Please let me know if you still require more information or are not able to reproduce the issue.

I’m trying to repro locally and will update you ASAP

Any update on this?

Sorry for the late reply. For the local DeepStream SDK, are you still using nvinferserver as the infer plugin?
If so, that must be the problem, since we only support Triton Server via Docker on x86; you can see the release notes.

No, I have updated the config file to be used with the local SDK.

Then I was having this issue.
You can check these for your reference:

The model is from the same website.
Since the default model's input size is 32x32, I updated the model dims as well:

import onnx

model = onnx.load("centerface.onnx")  # placeholder path to the downloaded model
inputs = model.graph.input
outputs = model.graph.output

inputs[0].type.tensor_type.shape.dim[0].dim_value = 1
inputs[0].type.tensor_type.shape.dim[2].dim_value = 480
inputs[0].type.tensor_type.shape.dim[3].dim_value = 640

for output in outputs:
    output.type.tensor_type.shape.dim[0].dim_value = 1
    output.type.tensor_type.shape.dim[2].dim_value = 120  # input height / 4
    output.type.tensor_type.shape.dim[3].dim_value = 160  # input width / 4

onnx.save(model, "centerface_480x640.onnx")  # placeholder output path

Please see the info in the release notes I shared: currently we only officially support Triton Server via Docker on x86, so please don't use nvinferserver as the infer plugin without Docker.

As I mentioned above (perhaps I wasn't clear):

I said I have made a config file for the local SDK, so the infer type is nvinfer only.

Can you check the files I have provided above?

So you are using nvinferserver as the infer plugin in the Triton Docker and nvinfer on the local SDK, right?

In other words, you are trying to deploy on nvinfer a model that runs well on nvinferserver, correct?

The priority is the local SDK only.

As the author (link I shared at the beginning) already said, CenterFace can run on both the Triton server and the local DeepStream SDK. I have tested it with the Triton server and it is working.

My aim is to get this working on the local SDK.

Any other information/files I can share?

Are you working on the same and able to reproduce the issue?

Really sorry for the late reply; have you got this issue fixed?

Yes, it's working now. I needed to fix the heatmap dims in the custom parser code.
Thanks for the support.

Great work!

Hello, thanks for contributing.
Can you share this custom parser file? Thanks.

The custom parser code is the same as deepstream_triton_model_deploy/centerface at master · NVIDIA-AI-IOT/deepstream_triton_model_deploy · GitHub

The only change is to have:
int fea_w = 160;
int fea_h = 120;
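These hardcoded values follow directly from the input size: the dims in this thread imply CenterFace's output heatmap is at stride 4 relative to the network input (480x640 in, 120x160 out), so `fea_h`/`fea_w` can be derived rather than guessed. A quick sketch; `feature_dims` is an illustrative helper, not part of the repo:

```python
# CenterFace's output heads run at stride 4 relative to the network input,
# which is what the dims in this thread imply (480x640 in -> 120x160 out).
def feature_dims(input_h: int, input_w: int, stride: int = 4):
    """Return (fea_h, fea_w) for a given network input size."""
    return input_h // stride, input_w // stride

print(feature_dims(480, 640))    # (120, 160)
print(feature_dims(1088, 1920))  # (272, 480)
```

The second line matches the 1088x1920 numbers reported later in this thread (fea_w = 480, fea_h = 272).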

Hello, thanks for the quick reply.
Can I ask a question? Can you share the preprocessing for the CenterFace model?
DeepStream 5.0
NVIDIA T4

In my process, I downloaded the model from this link:
After that, I changed the model dims following the code below to make the model input size 480x640 and 1088x1920.
I changed fea_h and fea_w following your instructions:
480x640 model: fea_w = 160, fea_h = 120
1088x1920 model: fea_w = 480, fea_h = 272

After that I used the config file attached below to run DeepStream, but the result looks strange. I attached the result in a video below.
Can you help me resolve this? Thanks. (568 Bytes)
config_infer_primary_centerface.txt (838 Bytes)

If you cannot see the video, please download it from here: