Custom model with DeepStream SDK

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU (2080 ti)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 450.102.04
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

deepstream-app -c source_config_file.txt

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi, I used a custom model for face detection (CenterFace). Link-

I have a few doubts-

I tried to run the model with the DeepStream SDK and updated the config file accordingly, but there was no face detection.
I debugged the heatmap dims output in custom_parser.cpp.
Debug results-
heatmap->inferDims[0] = 1
heatmap->inferDims[1] = 8
heatmap->inferDims[2] = 8
heatmap->inferDims[3] = 0
face detected = 0

While debugging with the Docker Triton-server-based DeepStream (where I do get detections), I got-
heatmap->inferDims[0] = 1
heatmap->inferDims[1] = 1
heatmap->inferDims[2] = 120
heatmap->inferDims[3] = 160
face size 3

What is the issue with the model dims? How can I use this with the DeepStream SDK? Does the parsing logic need to be changed?
I also set infer-dims = 0 (NCHW) in the infer config, but got no results.
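For reference, the relevant part of my nvinfer config looks roughly like this (a sketch - the ONNX file name, library path, and parser function name are placeholders, and the output blob names are the ones reported in the TensorRT binding log below)-

```ini
[property]
# placeholder paths/names - not the exact values from my setup
onnx-file=centerface.onnx
# output blob names as reported by TensorRT (heatmap, scale, offset, landmarks)
output-blob-names=537;538;539;540
infer-dims=3;480;640
network-type=0
num-detected-classes=1
parse-bbox-func-name=NvDsInferParseCustomCenterFace
custom-lib-path=./libcustomparser.so
```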

Note-
I also tried to load the model with standalone TensorRT code, and it gave me an error for the model.
The reported bindings are-
[TRT] binding to input 0 input.1 binding index: 0
[TRT] binding to input 0 input.1 dims (b=1 c=1 h=3 w=480) size=5760
[TRT] binding to output 0 537 binding index: 1
[TRT] binding to output 0 537 dims (b=1 c=1 h=1 w=120) size=480
[TRT] binding to output 1 538 binding index: 2
[TRT] binding to output 1 538 dims (b=1 c=1 h=2 w=120) size=960
[TRT] binding to output 2 539 binding index: 3
[TRT] binding to output 2 539 dims (b=1 c=1 h=2 w=120) size=960
[TRT] binding to output 3 540 binding index: 4
[TRT] binding to output 3 540 dims (b=1 c=1 h=10 w=120) size=4800

Is there any difference between using the Triton-server-based DeepStream app and the local DeepStream SDK?
Any suggestions?
Thanks.

Hi, your model is CenterFace - where is it from?

Hi,
The model is from -
CenterFace/models/onnx at master · Star-Clouds/CenterFace · GitHub

Hi,
Any update?

Hi,
Please let me know if you still require more information or are unable to reproduce the issue.
Thanks

I’m trying to repro locally and will update you ASAP

Hi,
Any update on this?
Thanks

Sorry for the late reply. When using the local DeepStream SDK, are you still using nvinferserver as the infer plugin?
If so, that must be the problem, since we only support Triton Server via Docker on x86. You can see the release notes: https://docs.nvidia.com/metropolis/deepstream/DeepStream_5.1_Release_Notes.pdf

No, I have updated the config file to be used with the local SDK.

That is where I was having the issue-
You can check these for your reference-

The model is from the same website.
Since the default model has a 32x32 input size, I also updated the model dims like-

import onnx

# load the CenterFace ONNX model (file names here are placeholders)
model = onnx.load("centerface.onnx")
inputs = model.graph.input
outputs = model.graph.output

# fix the input resolution; dim[1] (channels = 3) is left unchanged
inputs[0].type.tensor_type.shape.dim[0].dim_value = 1
inputs[0].type.tensor_type.shape.dim[2].dim_value = 480
inputs[0].type.tensor_type.shape.dim[3].dim_value = 640

# the outputs are at 1/4 of the input resolution
for output in outputs:
    output.type.tensor_type.shape.dim[0].dim_value = 1
    output.type.tensor_type.shape.dim[2].dim_value = 120
    output.type.tensor_type.shape.dim[3].dim_value = 160

onnx.save(model, "centerface_480x640.onnx")

centerface_sdk.zip (3.7 KB)

Please see the following info in the release notes I shared: currently we only support Triton Server via Docker on x86 officially, so please don't use nvinferserver as an infer plugin without Docker.

As I mentioned above - I may not have made myself clear.

I said I have made the config file for the local SDK, so the infer type is nvinfer only.

Can you check the files I have provided above?

So you are using nvinferserver as the infer plugin in the Triton Docker, and nvinfer with the local SDK, right?

So you are trying to deploy the model, which runs well on nvinferserver, on nvinfer - can I say this?

Yes,
the priority is the local SDK only.

As the author (link I shared at the beginning) already said, this CenterFace model can run on both the Triton server and the local DeepStream SDK. I have tested it with the Triton server and it is working.

My aim is to get this working on the local SDK.

Is there any other information or files I can share?

Hi,
Are you still working on this, and were you able to reproduce the issue?

Really sorry for the late reply - have you got this issue fixed?

Yes, it's working now. I needed to fix the heatmap dims handling in the custom parser code.
Thanks for the support.

Great work!