ResNet-50 based UFF model gives an error due to a dims mismatch

Hi all,
I am using a ResNet-50 model as a UFF file, and I am getting:

$ deepstream-app -c config_car.txt 

** (deepstream-app:28543): CRITICAL **: gst_ffmpeg_cfg_set_property: assertion 'qdata->size == sizeof (gint64)' failed
Using FP32 data type.
Parameter check failed at: ../builder/Network.cpp::addInput::364, condition: isValidDims(dims)
Segmentation fault (core dumped)

Potentially this is because of a mismatch in the input/output node names, as discussed in https://devtalk.nvidia.com/default/topic/1032314/tensorrt-4-0-uff-parser-fails-to-parse-keras-resnet50/.

Can anyone shed some light? We have more UFF models converted from .pb files.
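As a side note, the `addInput ... isValidDims(dims)` failure above usually means the parsed input dimensions are invalid. A minimal sanity check of the `input-dims` string, assuming the `C;H;W;order` layout used in the configs in this thread (the function name and the meaning of the 4th field are assumptions, not DeepStream code):

```python
# Hypothetical sanity check for an "input-dims" config value before the
# UFF parser sees it. The C;H;W;order layout is inferred from the config
# files in this thread; the 4th field is assumed to be an order flag.

def parse_input_dims(value):
    c, h, w, order = (int(v) for v in value.split(";"))
    if c <= 0 or h <= 0 or w <= 0:
        # TensorRT's addInput() rejects non-positive dims (isValidDims).
        raise ValueError("invalid dims: %s" % value)
    return (c, h, w), order

print(parse_input_dims("3;224;224;0"))  # ((3, 224, 224), 0)
```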

The configuration is as follows; I have used the nvparsing plugin as well.

[primary-gie]
enable=1
gpu-id=0
net-scale-factor=0.0039215697906911373
uff-file=./carshape.uff
input-dims=3;224;224;0
#model-file=../../models/Primary_Detector/resnet10.caffemodel
#proto-file=../../models/Primary_Detector/resnet10.prototxt
model-cache=./carshape.cache
labelfile-path=./carshape.txt
#int8-calib-file=../../models/Primary_Detector/cal_trt4.bin
net-stride=16
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
bbox-border-color4=1;0;1;1
num-classes=5
class-thresholds=0.2;0.2;0.1;0.2;0.2
class-eps=0.2;0.2;0.2;0.2;0.2
class-group-thresholds=1;1;1;1;1
roi-top-offset=0;0;0;0;0
roi-bottom-offset=0;0;0;0;0
detected-min-w=0;0;0;0;0
detected-min-h=0;0;0;0;0
detected-max-w=1280;1280;1280;1280;1280
detected-max-h=720;720;720;720;720
interval=0
gie-unique-id=1
parse-func=0
parse-bbox-func-name=parse_bbox_custom_resnet
parse-bbox-lib-name=/home/xxx/Deep/DeepStream_Release/sources/libs/nvparsebbox/libnvparsebbox.so
output-bbox-name=conv2d_bbox
output-blob-names=conv2d_cov
parser-bbox-norm=35.0;35.0
#config-file=config_infer_resnet.txt
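Incidentally, the `net-scale-factor` in this config is just 1/255: to my understanding, nvinfer preprocesses each pixel as `y = net-scale-factor * (x - mean)`, rescaling 8-bit values into [0, 1]. A minimal sketch (the formula and the zero default mean are assumptions based on DeepStream's documented scaling):

```python
# Sketch of the preprocessing step implied by net-scale-factor.
# Assumption: y = net_scale_factor * (x - mean), with mean defaulting to 0.

NET_SCALE_FACTOR = 0.0039215697906911373  # ~= 1/255, from the config above

def preprocess(pixel, mean=0.0):
    return NET_SCALE_FACTOR * (pixel - mean)

print(preprocess(255.0))  # ~1.0
```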

It seems we can't give the output node name through the config file, which is necessary for creating the TensorRT engine.

have used the nvparsing plugin as well
What is the “nvparsing plugin” here?

Seems like we can’t give output node name through config files which is necessary for creating tensorrt engine
Is the “output node name” the output layer? You can set “output-bbox-name” and “output-blob-names” in the config file.

We will release DeepStream 3.0 this month, which supports UFF models better and also supports TensorRT IPlugin via IPluginCreator (only for TensorRT 5.0).

Hi Chris,

I meant the nvparser plugin. I compiled it after changing the parsing functionality accordingly, so I am using that for parsing.

I am not sure what the parameters “output-bbox-name” and “output-blob-names” mean. Can you elaborate?
Looking forward to hearing from you.

Can you have a quick look at the config below and tell me if it is good to go? It now runs fine, but the output has no detections.
I haven't used nvparser here.

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
flow-original-resolution=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../streams/sample_720p.mp4
num-sources=4
gpu-id=0

[sink0]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=0
bitrate=2000000
output-file=car_shape.mp4
source-id=0

[osd]
enable=1
gpu-id=0
osd-mode=1
border-width=1
text-size=15
text-color=1;1;1;1
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[primary-gie]
enable=1
gpu-id=0
net-scale-factor=0.0039215697906911373
uff-file=./uffmodel.uff
input-dims=3;224;224;0
#model-file=../../models/Primary_Detector/resnet10.caffemodel
#proto-file=../../models/Primary_Detector/resnet10.prototxt
model-cache=./carshape.cache
labelfile-path=./carshape.txt
#int8-calib-file=../../models/Primary_Detector/cal_trt4.bin
net-stride=16
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
bbox-border-color4=1;0;1;1
num-classes=5
class-thresholds=0.2;0.2;0.1;0.2;0.2
class-eps=0.2;0.2;0.2;0.2;0.2
class-group-thresholds=1;1;1;1;1
roi-top-offset=0;0;0;0;0
roi-bottom-offset=0;0;0;0;0
detected-min-w=0;0;0;0;0
detected-min-h=0;0;0;0;0
detected-max-w=1280;1280;1280;1280;1280
detected-max-h=720;720;720;720;720
interval=0
gie-unique-id=1
parse-func=4
output-bbox-name=output_neckline/Softmax
output-blob-names=output_neckline/Softmax
parser-bbox-norm=35.0;35.0
#config-file=config_infer_resnet.txt


[tracker]
enable=0
tracker-width=640
tracker-height=368
gpu-id=0

[tests]
file-loop=0

Hi,

Check out the output;

https://drive.google.com/file/d/1o5IrZCD1c-eGVoYn_UwdOKiqGkcx7gQB/view?usp=sharing

“output-bbox-name” is the NN layer name of bbox output
“output-blob-names” is the NN layer name of coverage data output

The bbox output layer buffer and the coverage layer output buffer from TensorRT will be passed as parameters to the parser function:

void parse_bbox_custom_resnet(DimsCHW outputDims, DimsCHW outputDimsBBOX,
    std::vector<cv::Rect> *rectList, int class_num, int batch_th, int net_width, int net_height,
    float *output_cov_buf, float *output_bbox_buf, float *classthreshold);
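For illustration, here is a Python sketch of what such a grid-cell parser typically does with the coverage and bbox buffers. The planar buffer layout, the center-relative decoding, and the use of `net-stride` and `parser-bbox-norm` are assumptions modeled on DetectNet-style parsers, not the actual libnvparsebbox source:

```python
# Illustrative grid-cell bbox parser (a sketch, not libnvparsebbox code):
# the coverage buffer holds one confidence per grid cell, and the bbox
# buffer holds 4 planes (x1, y1, x2, y2 offsets) in planar layout.

STRIDE = 16        # net-stride from the config above
BBOX_NORM = 35.0   # parser-bbox-norm from the config above

def parse_bboxes(output_cov_buf, output_bbox_buf, grid_w, grid_h, threshold):
    n = grid_w * grid_h
    rects = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            i = gy * grid_w + gx
            if output_cov_buf[i] < threshold:
                continue
            # Grid-cell center in input-image pixel coordinates.
            cx = gx * STRIDE + STRIDE / 2.0
            cy = gy * STRIDE + STRIDE / 2.0
            # Decode the four planar bbox channels around the center.
            x1 = cx - output_bbox_buf[0 * n + i] * BBOX_NORM
            y1 = cy - output_bbox_buf[1 * n + i] * BBOX_NORM
            x2 = cx + output_bbox_buf[2 * n + i] * BBOX_NORM
            y2 = cy + output_bbox_buf[3 * n + i] * BBOX_NORM
            rects.append((x1, y1, x2, y2))
    return rects

# Tiny 2x2 grid with one cell above threshold:
cov = [0.0, 0.9, 0.0, 0.0]
bbox = [0.0] * 16
bbox[1], bbox[5], bbox[9], bbox[13] = 1.0, 0.1, 1.0, 0.2
print(parse_bboxes(cov, bbox, 2, 2, 0.5))
```

With input-dims 3;224;224 and net-stride 16, the real coverage grid would be 14x14 cells per class.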

Did you change the below config to your bbox parser function name and library name ?
parse-bbox-func-name=parse_bbox_custom_resnet
parse-bbox-lib-name=/usr/local/deepstream/libnvparsebbox.so

It looks good. Congratulations!

Hi, I guess there is still an unresolved issue; the output below comes from the sample source code.

Now I realize that the sample source code is not giving proper output.
The labels are not there, and there is no indication of the secondary GIEs.


[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
flow-original-resolution=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=4
gpu-id=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=0
bitrate=2000000
output-file=x.mp4
source-id=0

[osd]
enable=1
gpu-id=0
osd-mode=0
border-width=1
text-size=16
text-color=0;0;0.7;1
text-bg-color=0;0;0;0.5
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;1

[primary-gie]
enable=1
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
model-cache=../../models/Primary_Detector/resnet10.caffemodel_b4_int8.cache
labelfile-path=../../models/Primary_Detector/labels.txt
int8-calib-file=../../models/Primary_Detector/cal_trt4.bin
net-stride=16
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
num-classes=4
class-thresholds=0.2;0.2;0.1;0.2
class-eps=0.2;0.2;0.2;0.2
class-group-thresholds=1;1;1;1
roi-top-offset=0;0;0;0
roi-bottom-offset=0;0;0;0
detected-min-w=0;0;0;0
detected-min-h=0;0;0;0
detected-max-w=1280;1280;1280;1280
detected-max-h=720;720;720;720
interval=0
gie-unique-id=1
parse-func=0
#parse-bbox-func-name=parse_bbox_custom_resnet
#parse-bbox-lib-name=/home/graymatics/Deep/DeepStream_Release/sources/libs/nvparsebbox/libnvparsebbox.so
parse-func=4
output-bbox-name=conv2d_bbox
output-blob-names=conv2d_cov
parser-bbox-norm=35.0;35.0
#config-file=config_infer_resnet.txt

[tracker]
enable=1
tracker-width=640
tracker-height=368
gpu-id=0

[secondary-gie0]
enable=1
net-scale-factor=1
model-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=../../models/Secondary_VehicleTypes/resnet18.prototxt
model-cache=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.cache
mean-file=../../models/Secondary_VehicleTypes/mean.ppm
labelfile-path=../../models/Secondary_VehicleTypes/labels.txt
int8-calib-file=../../models/Secondary_VehicleTypes/cal_trt4.bin
gpu-id=0
batch-size=16
num-classes=6
network-mode=1
detected-min-w=128
detected-min-h=128
detected-max-w=1280
detected-max-h=720
model-color-format=1
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51

[secondary-gie1]
enable=1
net-scale-factor=1
model-file=../../models/Secondary_CarColor/resnet18.caffemodel
proto-file=../../models/Secondary_CarColor/resnet18.prototxt
model-cache=../../models/Secondary_CarColor/resnet18.caffemodel_b16_int8.cache
mean-file=../../models/Secondary_CarColor/mean.ppm
labelfile-path=../../models/Secondary_CarColor/labels.txt
int8-calib-file=../../models/Secondary_VehicleTypes/cal_trt4.bin
batch-size=16
network-mode=1
detected-min-w=128
detected-min-h=128
detected-max-w=1280
detected-max-h=720
model-color-format=1
num-classes=12
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51

[secondary-gie2]
enable=0
net-scale-factor=1
model-file=../../models/Secondary_CarMake/resnet18.caffemodel
proto-file=../../models/Secondary_CarMake/resnet18.prototxt
model-cache=../../models/Secondary_CarMake/resnet18.caffemodel_b16_int8.cache
mean-file=../../models/Secondary_CarMake/mean.ppm
labelfile-path=../../models/Secondary_CarMake/labels.txt
int8-calib-file=../../models/Secondary_CarMake/cal_trt4.bin
batch-size=16
network-mode=1
num-classes=24
detected-min-w=128
detected-min-h=128
detected-max-w=1280
detected-max-h=720
model-color-format=1
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51

[tests]
file-loop=0

Why does the output not have labels?

The SGIE is attaching its outputs, but they are just not being displayed through the OSD component; this is done deliberately to increase performance.

How can I make it happen?
We need the inference output for the time being, displayed through the OSD component. I have checked the [osd] section and it is enabled, as you can see; why is it still not showing anything?

Can you try adding “sync=1” in the [osd] part of the config file?
If there are still no labels, I will check later.

Hi Chris,
sync is not an attribute of [osd]; it is an attribute of the sink. I have tried it anyway, but it is not working as expected.
Also, I have tried classifier-async-mode=1 under the secondary GIEs, but that too gives no output, which is expected.

Set `appCtx[0]->show_bbox_text = TRUE;` in `deepstream_app_main.c` before `g_main_loop_new`,
then rebuild and run this new deepstream-app.

Note: label/text OSD will cut performance roughly in half, because it needs to get the dot-matrix data of the letters on the CPU side via the cairo/pango libraries, copy that data to the GPU side, and draw it with CUDA on the GPU.

Hi,
Where is deepstream_app_main.c?
I am sorry, but I do not have the source code for deepstream-app; only deepstream-app1 and deepstream-app2 are available.

The installation uses an already-built deepstream-app, which comes from a .tar file.

Oh, we didn't open-source deepstream-app in DeepStream 2.0.
We will open it in DeepStream 3.0, which has reached code freeze and will be released soon.

Hi,

Is it possible to display the output of DeepStream in a readable format, such as the label, bounding box, and confidence, through the terminal or any other means?
I would highly appreciate your response. I have to check the output and its accuracy, especially for the secondary GIEs.
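(For reference, one way to get readable detections without touching the OSD is to uncomment `gie-kitti-output-dir` in the [application] section so deepstream-app dumps per-frame KITTI label files, then read those. A sketch; the exact field layout of DeepStream's KITTI dump is an assumption based on the standard KITTI format, where the bbox sits in fields 4-7 and an optional score comes last:)

```python
# Hypothetical reader for KITTI-style label lines as dumped via
# gie-kitti-output-dir. Field layout assumed:
#   type trunc occl alpha x1 y1 x2 y2 <7 unused 3D fields> [score]

def parse_kitti_line(line):
    f = line.split()
    label = f[0]
    bbox = tuple(float(v) for v in f[4:8])       # x1, y1, x2, y2
    conf = float(f[15]) if len(f) > 15 else None  # optional score
    return label, bbox, conf

line = "car 0.0 0 0.0 100.0 120.0 260.0 300.0 0 0 0 0 0 0 0 0.87"
print(parse_kitti_line(line))
```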

Set `appCtx[0]->show_bbox_text = TRUE;` in `deepstream_app_main.c` before `g_main_loop_new`
and make/run this new deepstream-app.

Did you try this with the DeepStream 3.0 deepstream-app?