Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU (2080 Ti)
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 450.102.04
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
deepstream-app -c source_config_file.txt.
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or sample application, and the function description.)
Hi, I used a custom model for face detection (CenterFace). Link-
I have a few doubts:
I tried to run it with the DeepStream SDK and updated the config file as well, but there was no face detection.
I tried debugging the heatmap dims output in custom_parser.cpp.
Debug results:
heatmap->inferDims[0] = 1
heatmap->inferDims[1] = 8
heatmap->inferDims[2] = 8
heatmap->inferDims[3] = 0
face detected = 0
While debugging with the Docker tritonserver-based DeepStream (where I do get detections), I got:
heatmap->inferDims[0] = 1
heatmap->inferDims[1] = 1
heatmap->inferDims[2] = 120
heatmap->inferDims[3] = 160
face size 3
What is the issue with the model dims? How can I use this model with the DeepStream SDK? Does the parsing logic need to be changed?
I also provided infer-dims = 0 (NCHW) in the infer config, but got no results.
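For reference, in the DeepStream 5.0 nvinfer config file the input dims are given as channel;height;width without the batch dimension, not as an order flag. A sketch of the relevant [property] lines, with values assumed from the 480x640 model discussed in this thread:

```ini
[property]
# C;H;W of the network input, without the batch dimension
# (values assume the 480x640 CenterFace ONNX from this thread)
infer-dims=3;480;640
# 0 = FP32 network mode (assumption; adjust to your setup)
network-mode=0
```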
Note:
I also tried to load the model with standalone TensorRT code, and it gave me an error for the model. The output is:
[TRT] binding to input 0 input.1 binding index: 0
[TRT] binding to input 0 input.1 dims (b=1 c=1 h=3 w=480) size=5760
[TRT] binding to output 0 537 binding index: 1
[TRT] binding to output 0 537 dims (b=1 c=1 h=1 w=120) size=480
[TRT] binding to output 1 538 binding index: 2
[TRT] binding to output 1 538 dims (b=1 c=1 h=2 w=120) size=960
[TRT] binding to output 2 539 binding index: 3
[TRT] binding to output 2 539 dims (b=1 c=1 h=2 w=120) size=960
[TRT] binding to output 3 540 binding index: 4
[TRT] binding to output 3 540 dims (b=1 c=1 h=10 w=120) size=4800
Is there any difference between the tritonserver-based DeepStream app and the local DeepStream SDK?
Any suggestions?
Thanks.
Please see the info in the release notes I shared: currently we only officially support Triton Server via Docker on x86, so please don't use nvinferserver as an infer plugin without Docker.
As the author (link I shared at the beginning) already said, this CenterFace model can run on both the TRT server and the local DeepStream SDK. I have tested it with Triton Server and it is working.
Hello, thanks for the quick reply.
Can I ask a question? Can you share the preprocessing of the CenterFace model?
SPEC:
DeepStream 5.0
NVIDIA T4
In my process, I downloaded the model from this link: https://github.com/Star-Clouds/CenterFace/raw/master/models/onnx/centerface.onnx
After that, I changed the dims of the model following the code below, to make the input size of the model 480x640 and 1088x1920.
I changed fea_h and fea_w following your instructions:
480x640 model : fea_w = 160, fea_h = 120
1088x1920 model : fea_w = 480, fea_h = 272
After that I used the config file attached below to run DeepStream, but the result looks strange. I attached the result in the video below.
Can you help me resolve this? Thanks.