Invalid confidence values in TrafficCamNet KITTI detector output

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2
• NVIDIA GPU Driver Version (valid for GPU only) 460.80
• Issue Type( questions, new requirements, bugs) Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I am running trafficcamnet directly out of the 5.1-21.02-devel container using the included deepstream_app_source1_trafficcamnet.txt and config_infer_primary_trafficcamnet.txt. When I use the gie-kitti-output-dir option to save out the detection data, the confidence values for the detections are always -0.1. Example from my output:

car 0.0 0 0.0 654.666687 498.970581 802.666687 557.205872 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.100000
person 0.0 0 0.0 290.666656 516.176453 310.666656 569.117615 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.100000
road_sign 0.0 0 0.0 150.666672 477.794128 168.000000 501.617645 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.100000

When running peoplenet under a similar configuration, I see valid confidence values from the detector output:

person 0.0 0 0.0 468.712311 239.408798 672.233765 643.830872 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.979707
person 0.0 0 0.0 0.060968 393.945251 170.399490 651.968201 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.964589
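For context, each line in these dumps follows the KITTI label format, where the 16th and final field is the detection score. A minimal sketch (my own helper, not part of DeepStream) that extracts it:

```python
def kitti_confidence(line):
    """Return the confidence score from one KITTI label line.

    KITTI label lines have 16 whitespace-separated fields; the last
    one is the detection score written by gie-kitti-output-dir.
    """
    fields = line.split()
    assert len(fields) == 16, "expected 16 KITTI label fields"
    return float(fields[-1])

line = ("car 0.0 0 0.0 654.666687 498.970581 802.666687 557.205872 "
        "0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.100000")
print(kitti_confidence(line))  # -> -0.1
```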

Have you modified the original config files inside deepstream package?

No major changes. I have included a zip archive with my config file and a bash script that reproduces the issue. Executing run-trafficcamnet-container.sh will download and unzip the model file, launch the container with the model and my app config mounted, and write the KITTI detector output to the kitti-detections/ folder. You can then see the detections with -0.1 confidence.

The only changes I have made to the config are: disabling the EGL previewer (the same issue occurs when it is enabled), changing the inference config path to make mounting easier, and enabling KITTI detection output.
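In deepstream-app config terms, those three changes amount to something like the following (the paths here are illustrative, not my exact mount points):

```ini
[application]
# write per-frame KITTI detector output here
gie-kitti-output-dir=/opt/out/kitti-detections

[sink0]
# EGL/on-screen preview disabled (the issue occurs either way)
enable=0

[primary-gie]
# inference config path adjusted for the container mount
config-file=/opt/configs/config_infer_primary_trafficcamnet.txt
```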

nvidia-test.zip (2.9 KB)

Hey @admayber, have you resolved this issue?

No

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Could you check if the tracker is enabled in the pipeline?
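To check, look for a [tracker] section in the app config. A quick way to rule it out (a sketch, assuming the standard deepstream-app config format) is to disable it and re-run:

```ini
[tracker]
# Set to 0 to temporarily take the tracker out of the pipeline.
# In DeepStream 5.x, tracked objects can report a placeholder
# confidence of -0.1 because the tracker does not set a score.
enable=0
```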