Sample: deepstream-app using DetectNet v2 ResNet 18

Please provide complete information as applicable to your setup.

Hardware / Config

HW: Ubuntu (Intel i7) / RTX 3060 TI 8 GB
DeepStream: 6.1
TensorRT: using docker image: nvcr.io/nvidia/deepstream:6.1-devel
Driver: 515
deepstream-app version 6.1.0
DeepStreamSDK 6.1.0
CUDA Driver Version: 11.7
CUDA Runtime Version: 11.6
TensorRT Version: 8.2
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3

Samples work fine

I built deepstream-app (no source changes) using the default resnet10.caffe model. It works perfectly, with pretty good accuracy on my RTSP cameras.

Using TAO cv_samples 1.4.1 (detectnet_v2), I trained a DetectNet_v2 ResNet-18 model. Training worked perfectly, with good evaluation results (65-80%).

Substituting resnet10.caffe with DetectNet_v2 ResNet-18

This is where I ran out of talent. I want to replace the default resnet10.caffe model in deepstream-app with my newly trained DetectNet_v2 ResNet-18 model.

I made changes to the app config file:

primary-gie:
  enable: 1
  gpu-id: 0
  # Required to display the PGIE labels; should be added even when using the config-file property
  batch-size: 2
  # Required by the app for OSD, not a plugin property
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;0;1;1
  bbox-border-color3: 0;1;0;1
  interval: 0
  # Required by the app for SGIE, when used along with the config-file property
  gie-unique-id: 1
  nvbuf-memory-type: 0
  model-engine-file: /project/deepstream/nvidia/tao/tao-experiments/detectnet_v2/experiment_dir_final/resnet18_detector_qat.trt.int8
  labelfile-path: /project/deepstream/nvidia/tao/tao-experiments/detectnet_v2/experiment_dir_final/labels.txt
  config-file: /project/deepstream/nvidia/tao/tao-experiments/detectnet_v2/experiment_dir_final/config_infer_primary.yml
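One thing worth noting: model-engine-file points at a pre-serialized TensorRT engine, and an engine built inside the TAO container may not match the TensorRT version in the DeepStream container, in which case nvinfer rebuilds or misbehaves at startup. A hedged alternative (the .etlt filename below is an assumption based on the usual TAO export output, not a value from this thread) is to also give nvinfer the exported .etlt model and key in config_infer_primary.yml, so it can build a matching engine itself:

```yaml
property:
  # Filename is assumed; use the .etlt actually produced by the TAO export step
  tlt-encoded-model-file: /project/deepstream/nvidia/tao/tao-experiments/detectnet_v2/experiment_dir_final/resnet18_detector_qat.etlt
  # Same key already used for tlt-model-key in config_infer_primary.yml
  tlt-model-key: cHJoOHMxMzc0ZjU4NmhjY2E5YXFxOXZwcjQ6ODk2NzFiMzAtYTJiMy00Yjg4LWFkM2ItNWVhOGYxN2JkNTI5
```

With both tlt-encoded-model-file and model-engine-file set, nvinfer can fall back to building the engine if the pre-built one fails to deserialize.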

Starting from the generated nvinfer_config.txt, I made a config file named config_infer_primary.yml.
This is the current content of config_infer_primary.yml:

property:
  gpu-id: 0
  net-scale-factor: 0.00392156862745098
  int8-calib-file: /project/deepstream/nvidia/tao/tao-experiments/detectnet_v2/experiment_dir_final/calibration_qat.bin
  batch-size: 30
  process-mode: 1
  offsets: 0.0;0.0;0.0
  infer-dims: 3;384;1248
  tlt-model-key: cHJoOHMxMzc0ZjU4NmhjY2E5YXFxOXZwcjQ6ODk2NzFiMzAtYTJiMy00Yjg4LWFkM2ItNWVhOGYxN2JkNTI5
  network-type: 0
  num-detected-classes: 3
  uff-input-order: 0
  output-blob-names: output_cov/Sigmoid;output_bbox/BiasAdd
  uff-input-blob-name: input_1
  model-color-format: 0
  maintain-aspect-ratio: 0
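One thing the file above does not configure is clustering. DetectNet_v2 emits a dense grid of coverage/bbox tensors, and the published TAO sample configs for DetectNet_v2 set a cluster-mode plus per-class thresholds; without them, detections can be largely suppressed. A hedged sketch of what that section could look like (the threshold and DBSCAN values are starting-point assumptions to tune, not values from this thread):

```yaml
property:
  # ... existing keys from config_infer_primary.yml above ...
  cluster-mode: 2   # 2 = DBSCAN, as used in the DetectNet_v2 sample configs

class-attrs-all:
  # Assumed starting values; tune per model and class
  pre-cluster-threshold: 0.2
  eps: 0.7
  minBoxes: 1
```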

The deepstream-app runs, and the speed (fps) is comparable to the resnet10 implementation. However, the accuracy is horrible (it rarely detects anything). It finds a car occasionally, with a correct detection and the correct label, so that seems to be evidence that everything is connected, but I'm missing something. Is there an example of a PGIE nvinfer element configured to work with a DetectNet_v2 ResNet-18 model?

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Yes. There is the sample deepstream_reference_apps/deepstream_app_tao_configs at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com)
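For reference, the DetectNet_v2 configs in that repo are plain-text nvinfer config files roughly along these lines (the paths, input dims, and key below are illustrative placeholders in the style of the public samples, not values from this thread):

```ini
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
# For a self-trained model, use your own TAO encode key here
tlt-model-key=tlt_encode
tlt-encoded-model-file=../models/detectnet_v2/resnet18_detector.etlt
labelfile-path=labels.txt
infer-dims=3;544;960
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
batch-size=1
num-detected-classes=3
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.7
minBoxes=1
```

Comparing a config like this against config_infer_primary.yml is a quick way to spot missing groups such as [class-attrs-all].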
