Secondary classifier outputs integers instead of labels

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) -> dGPU aws T4
• DeepStream Version -> 5.0 devel
• TensorRT Version -> 7
• NVIDIA GPU Driver Version (valid for GPU only) -> 440.82

This is the primary detector config file:

[property]
gpu-id=0
#net-scale-factor=0.0039215697906911373
net-scale-factor=1.0
model-color-format=0
offsets=123.0;117.0;104.0
model-engine-file=./tensorrt_engines_awsT4/retina_r50.engine

# create labels file
labelfile-path=./labels.txt
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=prob
parse-bbox-func-name=NvDsInferParseCustomFD
custom-lib-path=./nvdsinfer_customparser_fd/libplugin.so


[class-attrs-all]
pre-cluster-threshold=0.1
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
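
As an editorial aside: the config above points labelfile-path at ./labels.txt, and for a detector DeepStream expects one class name per line. Given the single face class mentioned later in the thread, the file would presumably contain just:

face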

This is the secondary classifier config file:

[property]
gpu-id=0
net-scale-factor=1
model-engine-file=./tensorrt_engines_awsT4/resnet18beard.engine
labelfile-path=./beard_labels.txt
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
input-object-min-width=224
input-object-min-height=224
process-mode=2
model-color-format=0
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=prob
classifier-async-mode=1
classifier-threshold=0.
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0
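
One thing worth checking here (editorial note, not part of the original post): for classifiers, the Gst-nvinfer label-file convention differs from detectors. All labels for one attribute go on a single line, semicolon-delimited, with one line per output attribute, rather than one label per line. Assuming the two classes mentioned later in the thread, beard_labels.txt would look like:

No_beard;beard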

The output drawn over the bounding boxes shows numbers instead of the labels (No_beard, beard).

Are you using deepstream-app?
Does the PGIE label display correctly?

@bcao Yes, deepstream-app. And yes, there is only one label in the PGIE -> face, which shows correctly!

Can you share your deepstream-app config file?

I shared both the PGIE and SGIE config files in the first post!

deepstream-app config file

Okay, you mean this? deepstream.c (21.6 KB)

No. When you run deepstream-app -c ‘config-file’, I mean this config file. It is the config file used by deepstream-app itself.

So @bcao, I am using two config files, one for the face detector and the other for classification; both are shared in the first post!

So you don’t use deepstream-app, right? I asked about this in the second comment. deepstream-app uses an additional config file of its own; your two config files are for nvinfer.

Yes! I don’t have an extra config file for the deepstream-app, only a .c file. Am I supposed to use an extra config file for the app? I did not find one in the test app.

@bcao I am using a .c file to create the deepstream-app configuration, and I am getting correct output up until the tracker, but not for the classification part!
My pipeline is FaceDetector -> tracker -> classifier, and for 4 faces I get the output
face 4, face 3, face 2, face 1. My detector’s labels.txt file has only one label, i.e. face, but I can’t seem to get the secondary classifier’s labels. Below is the terminal output:

./deepstream-custom -c retinaface_pgie_config.txt -i download.jpeg 
Now playing: retinaface_pgie_config.txt
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.041383090   220 0x55c4bb290360 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface/tensorrt_engines_awsT4/beard_resnet18.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x224x224       
1   OUTPUT kFLOAT prob            2x1x1           

0:00:02.041475711   220 0x55c4bb290360 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface/tensorrt_engines_awsT4/beard_resnet18.engine
0:00:02.042334763   220 0x55c4bb290360 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:beard_sgie_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.428758247   220 0x55c4bb290360 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface/tensorrt_engines_awsT4/retina_r50.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x640x1088      
1   OUTPUT kFLOAT prob            428401x1x1      

0:00:02.428832403   220 0x55c4bb290360 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface/tensorrt_engines_awsT4/retina_r50.engine
0:00:02.452104069   220 0x55c4bb290360 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:retinaface_pgie_config.txt sucessfully
Running...
KLT Tracker Init
End of stream
Returned, stopping playback
Deleting pipeline

Below is the output from the model

Have you tried deepstream-app? Please refer to https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_app_config.3.1.html#


@bcao I’m having the exact same issue, but I’m using Python instead of C. Why is it that the example apps don’t need deepstream-app, yet you are recommending that we use it?

@y14uc339 I had the same issue, but it worked when I changed the secondary classifier to be synchronous instead of asynchronous in its config file. Let me know if it helps, because I want to make sure it is an actual fix.
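
(Editorial note.) In the SGIE config shown in the first post, that change would be flipping one property; my understanding, not confirmed in the thread, is that classifier-async-mode=1 only attaches labels via the tracker's cached object IDs, so labels can fail to appear when that caching path does not line up:

[property]
# was: classifier-async-mode=1
classifier-async-mode=0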