Classification output is always the same when probed at the classifier "src" pad, and the output tensor meta is always None

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Tesla V100-PCIE
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only) : 470.129.06
• Issue Type( questions, new requirements, bugs): Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing):

Modified the DeepStream Test 3 sample application by adding a secondary GIE for classification. Attaching the configuration file and the modified Python file:
test_classifier_config.txt (3.5 KB)
deepstream_test_3.py (25.1 KB)

Ran it using the command shown at the end of this post.

We just need to know whether the classifier works with any of the detector modules in place. Right now, all we see from the probe is a single class.

Here is the output during the run

INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input:0 3x224x224 min: 1x3x224x224 opt: 16x3x224x224 Max: 16x3x224x224
1 OUTPUT kFLOAT dropout_1 128 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT dense 29 min: 0 opt: 0 Max: 0

When we add a probe at the nvosd module, we see the following output:

labelCount= 1
label-num_classes 0
label-res_label n01729977
label-classid 55
label-prb 2.802734375
label-labelid 0
label Info list <pyds.GList object at 0x7fa7e308a8b0>
unique_component_id 2
num_labels 2
base_meta <pyds.NvDsBaseMeta object at 0x7fa7e308a9b0>
labelCount= 2
label-num_classes 0
label-res_label n01729977
label-classid 55
label-prb 2.802734375
label-labelid 0
label Info list <pyds.GList object at 0x7fa7e308a8b0>
unique_component_id 2
num_labels 2
base_meta <pyds.NvDsBaseMeta object at 0x7fa7e308a730>

When we add the same probe function to the classifier module, we see the following output; the probability and class ID are the same every time.

num_labels 2
base_meta <pyds.NvDsBaseMeta object at 0x7f719638e7b0>
None
labelCount= 1
label-num_classes 0
label-res_label
label-classid 27
label-prb 0.9999014139175415
label-labelid 1
label Info list <pyds.GList object at 0x7f719638e530>
unique_component_id 2
num_labels 2
base_meta <pyds.NvDsBaseMeta object at 0x7f719638ebf0>
labelCount= 2
label-num_classes 0
label-res_label
label-classid 27
label-prb 0.9999014139175415
label-labelid 1
label Info list <pyds.GList object at 0x7f719638e530>
unique_component_id 2
num_labels 2
base_meta <pyds.NvDsBaseMeta object at 0x7f719638e5b0>
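
A probe that prints these fields walks the metadata roughly as in the following sketch. This is a minimal sketch using the standard pyds traversal (frame meta to object meta to classifier meta to label info); it is not the exact probe from the attached deepstream_test_3.py:

```
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds


def glist_iter(glist, cast_fn):
    # Walk a pyds GList with the try/except pattern used in the
    # official deepstream_python_apps samples.
    while glist is not None:
        yield cast_fn(glist.data)
        try:
            glist = glist.next
        except StopIteration:
            break


def classifier_meta_probe(pad, info, u_data):
    # Print the classifier meta fields shown in the output above.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    for frame_meta in glist_iter(batch_meta.frame_meta_list, pyds.NvDsFrameMeta.cast):
        for obj_meta in glist_iter(frame_meta.obj_meta_list, pyds.NvDsObjectMeta.cast):
            for cls_meta in glist_iter(obj_meta.classifier_meta_list, pyds.NvDsClassifierMeta.cast):
                print("unique_component_id", cls_meta.unique_component_id,
                      "num_labels", cls_meta.num_labels)
                for label in glist_iter(cls_meta.label_info_list, pyds.NvDsLabelInfo.cast):
                    print("label-res_label", label.result_label,
                          "label-classid", label.result_class_id,
                          "label-prb", label.result_prob,
                          "label-labelid", label.label_id)
    return Gst.PadProbeReturn.OK
```

The probe is attached to the classifier's src pad with get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, classifier_meta_probe, 0); the element variable name is illustrative.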

python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/exp/data/input/1637673766363_00098960_None_1.mp4 file:///opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/exp/data/input/1637673766363_00098960_None_1.mp4 --pgie nvinfer -c config_infer_primary_yolov4-tiny.txt --no-display --silent

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Could you attach the files below via Google Drive or another method?

"exp2/test2.onnx", "exp2/labels_resnet.txt","1637673766363_00098960_None_1.mp4" ,"config_infer_primary_yolov4-tiny.txt"

Did you set the output-tensor-meta parameter?

Hello Yuweiw, could you share your email ID? I will enable the permission for just your email ID.

Yes, I have experimented with the output-tensor-meta parameter in the classifier configuration, but I always see the tensor metadata as None.

Hi @nagabharath.vadla, you can send messages to me through the forum: just click my ID and use the Message icon.
Also, you can refer to the link below:
https://forums.developer.nvidia.com/t/how-to-look-at-the-tensor-output-in-deepstream-infer-tensor-meta-test/145985

Hello Yuweiw,

Thank you for your support over the chat window.

Here is a summary of what solved the above issues.

  1. The classifier (secondary GIE) tensor data can only be accessed through obj_user_meta_list, not through frame_user_meta_list; frame_user_meta_list is used when referring to the primary GIE's tensor output. Got this information from the Gst-nvinfer — DeepStream 6.1.1 Release documentation.
  2. I had to change the classifier's network-type to 100; without that, the tensor output does not show up.
  3. Followed the sample at /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test (a minimal Python sketch of the same pattern is shown below).
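
For anyone hitting the same problem, reading the raw sgie tensor from obj_user_meta_list looks roughly like the sketch below. This is a minimal sketch following the pyds tensor-output pattern from the Python deepstream-ssd-parser sample (the C sample above uses the same flow), not the exact code used here; the layer name "dense" and the 29-element output size come from the engine info earlier in this topic, and the sgie config is assumed to have network-type=100 and output-tensor-meta=1 set.

```
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import numpy as np
import pyds


def glist_iter(glist, cast_fn):
    # Same GList helper as in the earlier sketch.
    while glist is not None:
        yield cast_fn(glist.data)
        try:
            glist = glist.next
        except StopIteration:
            break


def sgie_tensor_probe(pad, info, u_data):
    # Read the raw classifier tensor attached by the sgie
    # (network-type=100, output-tensor-meta=1) from obj_user_meta_list.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    for frame_meta in glist_iter(batch_meta.frame_meta_list, pyds.NvDsFrameMeta.cast):
        for obj_meta in glist_iter(frame_meta.obj_meta_list, pyds.NvDsObjectMeta.cast):
            # obj_user_meta_list, NOT frame_user_meta_list, for a secondary GIE.
            for user_meta in glist_iter(obj_meta.obj_user_meta_list, pyds.NvDsUserMeta.cast):
                if user_meta.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    continue
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    if layer.layerName != "dense":  # 29-element output layer of this model
                        continue
                    # get_detections() reads one float from the layer buffer at an index.
                    scores = [pyds.get_detections(layer.buffer, k) for k in range(29)]
                    class_id = int(np.argmax(scores))
                    print("class_id", class_id, "score", scores[class_id])
    return Gst.PadProbeReturn.OK
```

It is attached to the classifier's src pad the same way as the probe sketched earlier.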

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.