PeopleNet Inference KITTI Dump

I have run inference using the DetectNet_v2 PeopleNet model, and I am going through the KITTI dump. I expected to see all the predictions made by the model along with their confidence scores (similar to how SSD formats its inference output). Instead, I get KITTI-format files where the confidence value in the last column is sometimes above 1 and sometimes below 1. What does this mean? An example is below:

Person 0.00 0 0.00 233.848 72.381 288.068 189.313 0.00 0.00 0.00 0.00 0.00 0.00 0.00 7.231
Person 0.00 0 0.00 35.421 221.233 227.554 320.000 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.395
Person 0.00 0 0.00 191.437 61.039 216.414 127.582 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.762
Person 0.00 0 0.00 366.526 0.000 566.195 284.986 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.320
Person 0.00 0 0.00 0.000 0.000 174.785 285.052 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.258
Bag 0.00 0 0.00 28.256 179.259 146.084 288.715 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.396
Face 0.00 0 0.00 246.556 78.662 259.312 93.874 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.482
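
For reference, each line above follows the 16-field KITTI label format, and the value in question is the last field, the score. A minimal sketch for pulling the class names and scores out of a dump file ("000001.txt" is a placeholder for your own tlt-infer output):

# Minimal sketch: extract class names and scores from a KITTI dump file.
# "000001.txt" is a placeholder; point it at your own tlt-infer output.
with open("000001.txt") as f:
    for line in f:
        fields = line.split()
        # Field 1 is the class label; field 16 is the score.
        print(fields[0], float(fields[15]))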

Please refer to "Why is tlt-infer's label confidence threshold high above 1? - #10 by Morganh"

Please download the latest tlt_2.0 docker; it was released today.
Then try the "mean_cov" mode with the default threshold of 0.1.
It should work now.

More reference is available in the new doc:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#bbox_handler
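
For orientation, here is a rough sketch of what the bbox_handler section of the inference spec might look like with mean_cov enabled. The field names and values are assumptions recalled from the linked doc, so treat this as illustrative and check the doc for the authoritative layout:

bbox_handler_config {
  kitti_dump: true
  disable_overlay: false
  overlay_linewidth: 2
  classwise_bbox_handler_config {
    key: "person"
    value: {
      confidence_model: "mean_cov"  # assumption: selects mean instead of aggregate coverage
      output_map: "person"
      confidence_threshold: 0.1     # assumption: the default 0.1 threshold mentioned above
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 4
      }
    }
  }
}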

In mean_cov mode, the final confidence is the mean confidence of all the bboxes in the cluster.
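
That also explains the values above 1 in the original dump: assuming the default mode sums the coverage of every raw box in a cluster (as the linked thread suggests), the total can easily exceed 1, whereas the mean stays in [0, 1]. A toy illustration with made-up coverage values:

# Toy illustration: summed vs. mean coverage for one cluster of raw boxes.
# The coverage values are invented for the example.
cluster_coverages = [0.9, 0.8, 0.7, 0.85]
print(sum(cluster_coverages))                           # 3.25: a sum can exceed 1
print(sum(cluster_coverages) / len(cluster_coverages))  # 0.8125: the mean stays in [0, 1]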

Thanks. I would like to print out all the confidence scores, just as SSD does with its KITTI dump. Is that possible?

Currently, I am getting this error when running inference in the newly updated TLT:
Traceback (most recent call last):
File "/usr/local/bin/tlt-infer", line 8, in <module>
sys.exit(main())
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/magnet_infer.py", line 54, in main
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 194, in main
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 150, in inference_wrapper_batch
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/inferencer/tlt_inferencer.py", line 143, in infer_batch
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1169, in predict
steps=steps)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_arrays.py", line 294, in predict_loop
batch_outs = f(ins_batch)
File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1472, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.UnimplementedError: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
[[{{node model_1/conv1/convolution}}]]
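
For what it's worth, this error message indicates the convolution is executing on the CPU, which does not support the NCHW layout the model requests, so a common first check is whether TensorFlow can see a GPU inside the container. A minimal check with the TF 1.x API (assuming that is the version in the container):

# Minimal sanity check: confirm TensorFlow 1.x sees a GPU in the container.
# If this prints False, ops fall back to the CPU, which rejects NCHW Conv2D.
import tensorflow as tf
print(tf.test.is_gpu_available())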

Yes, it is possible.

When you run tlt-infer with the tlt_2.0_py3 version, was the tlt model trained with detectnet_v2 in tlt_2.0_dp?

I am running inference using the PeopleNet model.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Can you share your command and the spec file you used when you got the above error?