Unable to get confidence from TLT model in DeepStream

• Hardware Platform: GPU
• DeepStream Version: 5.1
• TensorRT Version: 7.2.2
• NVIDIA GPU Driver Version: 460.84
• Issue Type: Bugs

I trained a Faster RCNN + EfficientNet B1 model in TLT and successfully ran it with the DeepStream SDK. However, in my custom Python app, obj_meta.confidence is always -0.10000000149011612. How can I solve this?
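Side note: -0.10000000149011612 is exactly -0.1 stored in single precision (NvDsObjectMeta.confidence is a C float) and printed back as a Python double, so it looks like a hard-coded -0.1 sentinel rather than a corrupted value. A quick sketch to confirm the round-trip:

```python
import struct

# Pack -0.1 as a 32-bit float (as the C struct stores it),
# then unpack it back into a Python double (as pyds reports it).
f32 = struct.unpack('f', struct.pack('f', -0.1))[0]
print(f32)  # -0.10000000149011612
```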

I’m using the deepstream_tlt_apps/post_processor.

May I know more details about which app you are running? Officially, TLT provides sample apps in GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream for inference.

I’m running a custom DeepStream Python App.

Example:

# l_obj starts from frame_meta.obj_meta_list inside a pad-probe callback
while l_obj is not None:
    try:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break

    print(obj_meta.confidence)

    try:
        l_obj = l_obj.next
    except StopIteration:
        break

When you mention that you ran the TLT model successfully in the DeepStream SDK, which DeepStream app did you run?

In short, can you share the steps to reproduce?

I’m using DeepStream SDK 5.1.

Steps in TLT: Training > Export > Convert

Steps in DeepStream:
1 - Move deepstream_tlt_apps/post_processor to my PGIE folder
2 - Compile post_processor (CUDA_VER=11.1 make -C post_processor)
3 - Edit config_infer_primary.txt
4 - Build engine with deepstream-app
5 - Run the model (with config_infer_primary.txt and generated engine) in my custom python app
6 - Got -0.10000000149011612 in print(obj_meta.confidence)
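For reference, step 3 roughly looks like the fragment below (paths and num-detected-classes are placeholders; the parser lines point at the library built from deepstream_tlt_apps/post_processor, and the exact parse-bbox-func-name for Faster RCNN should be checked against the repo's sample pgie config):

```ini
[property]
tlt-model-key=<KEY>
tlt-encoded-model=model.etlt
labelfile-path=labels.txt
network-type=0
num-detected-classes=<N>
# Custom TLT bbox parser from deepstream_tlt_apps/post_processor
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=post_processor/libnvds_infercustomparser_tlt.so
```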

Files (I removed my KEY from the config_infer_primary.txt file):
labels.txt (36 Bytes)
config_infer_primary.txt (688 Bytes)
deepstream-app_config.txt (806 Bytes)

Could you share your custom python app in above step 5 as well? Thanks.

I can’t share the Python code because it’s a private project, but the snippet above would fit in any deepstream-test app. I do get confidence values from other models: TLT YOLOv4 + CSPDarknet-53, TLT DetectNet_v2 + ResNet-34, a converted Darknet YOLOv4, etc.

OK, so can you reproduce this behavior in deepstream_tlt_apps too? Please try the official faster_rcnn model from GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream; it provides official models:
$ wget https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip -O models.zip

If you can reproduce it there, please share your modifications. Then we can focus on deepstream_tlt_apps.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.