Classification result is always the same with sgie classifier

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.1.3
• Issue Type( questions, new requirements, bugs) Question

Hi, I have the following inference pipeline: pgie->sgie1->sgie2. pgie is a people detector, sgie1 detects faces within the bounding boxes from pgie, and sgie2 is a classifier with the labels mask/no-mask.

In the DeepStream pipeline, the output of sgie2 is always the same (mask). I tested the TLT model outside the DeepStream pipeline and its inference is correct.

What setting may be causing this?

Config file for pgie (people detection):

[property]
gpu-id=0
enable-dla=1
use-dla-core=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=../../app-be/resources/models/tailgating/tlt_pretrained_models/peoplenetv2/resnet34_peoplenet_pruned.etlt
labelfile-path=../../app-be/resources/models/tailgating/tlt_pretrained_models/peoplenetv2/labels.txt
model-engine-file=../../app-be/resources/models/tailgating/tlt_pretrained_models/peoplenetv2/resnet34_peoplenet_pruned_int8.etlt_b1_dla0_int8.engine
int8-calib-file=../../app-be/resources/models/tailgating/tlt_pretrained_models/peoplenetv2/resnet18_peoplenet_int8_dla.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
cluster-mode=1
interval=0
gie-unique-id=1
force-implicit-batch-dim=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1

Configuration file for sgie1 (face detector):

[property]
#Running in GPU:
gpu-id=0

#Running in DLA:
#enable-dla=1
#use-dla-core=0

net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=../../sauroneye-be/resources/models/maskdetection/labels.txt
int8-calib-file=../../sauroneye-be/resources/models/maskdetection/model_tlt3/cal.bin
tlt-encoded-model=../../sauroneye-be/resources/models/maskdetection/model_tlt3/ssd_resnet18_epoch_100.etlt
tlt-model-key=tlt_encode
output-tensor-meta=1
model-engine-file=../../sauroneye-be/resources/models/maskdetection/model_tlt3/ssd_resnet18_epoch_100.etlt_b1_gpu0_int8.engine
infer-dims=3;544;960
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
maintain-aspect-ratio=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=2
interval=0
gie-unique-id=2
process-mode=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=../../sauroneye-be/lib/post_processor_ssd/libnvds_infercustomparser_tlt.so

[class-attrs-all]
threshold=0.8
pre-cluster-threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Configuration file for sgie2 (mask usage classifier):

[property]
gpu-id=0
# preprocessing parameters: These are the same for all classification models generated by TLT.
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=1

# Model specific paths. These need to be updated for every classification model.
int8-calib-file=../../app-be/resources/models/maskclassification/calibration.bin
labelfile-path=../../app-be/resources/models/maskclassification/labels_mask.txt
tlt-encoded-model=../../app-be/resources/models/maskclassification/final_model_int8.etlt
tlt-model-key=tlt_encode
output-tensor-meta=1
model-engine-file=../../app-be/resources/models/facedetection/face_mask_classification.trt
input-dims=3;224;224;0
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=1
interval=0
# defines that the model is a classifier.
network-type=1
gie-unique-id=3
operate-on-gie-id=2
operate-on-class-ids=0;1
classifier-threshold=0.5

labels.txt (labels file for sgie1):
face

labels_mask.txt (labels file for sgie2):
no-mask;mask

Did you try reducing classifier-threshold=0.5 to see if anything changes?

Yes, I tried various values for classifier-threshold but the result is always the same. The probabilities aren't always exactly these numbers, but they are always around:

no-mask: 0.00937588
mask: 0.990624
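For reference, classifier post-processing of this kind essentially takes the argmax over the output probabilities and only reports a label if the winning probability exceeds classifier-threshold, so lowering the threshold cannot change which label wins. A minimal sketch (the function name is mine; labels and probabilities are the ones from this thread):

```python
def classify(probs, labels, threshold):
    """Pick the highest-probability label. The threshold only gates
    whether any label is reported at all; it never changes which
    class the argmax selects."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best] if probs[best] >= threshold else None

# With the probabilities reported above, "mask" wins at any
# threshold below ~0.99:
print(classify([0.00937588, 0.990624], ["no-mask", "mask"], 0.5))  # mask
print(classify([0.00937588, 0.990624], ["no-mask", "mask"], 0.1))  # mask
```

This is why sweeping classifier-threshold leaves the reported label unchanged as long as the model output itself does not change.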

@bcao I used this patch to visualize the input images for each gie:

I noticed the sgie input images were not correct: they were very dark and distorted. To fix this, I removed the offsets configuration and added enable-padding=1.

After changing the configuration, the input images for the sgie classifier look as expected, but the classification results are still wrong.
I used these same images to run inference with the same model in TLT and the result is correct. In the DeepStream pipeline, however, the same label is always predicted, with the values I mentioned in my previous comment.

Could you dump the raw output tensor from the model in DeepStream to check the difference against TLT? I think you can add the dump in nvdsinfer_context_impl_output_parsing.cpp → fillClassificationOutput.

Hi.
The output is an array of probabilities of the object belonging to each class. I dumped outputLayersInfo[l].buffer in that function and the result is [0.9911895990371704, 0.008810401].
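A quick sanity check one can run on a dumped buffer like this (a sketch using the numbers reported above; the index-to-label mapping assumes the buffer order matches the labels-file order, as fillClassificationOutput does):

```python
dumped = [0.9911895990371704, 0.008810401]  # buffer dumped in fillClassificationOutput
labels = ["no-mask", "mask"]                # order from labels_mask.txt

# A well-formed softmax output should sum to ~1 and lie in [0, 1].
assert abs(sum(dumped) - 1.0) < 1e-2
assert all(0.0 <= p <= 1.0 for p in dumped)

# The winning index maps to the corresponding entry in the labels file.
winner = max(range(len(dumped)), key=lambda i: dumped[i])
print(winner, labels[winner])  # 0 no-mask
```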

So what's the input of sgie2? Can you dump it too and check the difference against the TLT result?

@catia.mourao.896,
Are your offsets=103.939;116.779;123.68 values specific to your own dataset, or do they come with the pretrained ResNet model?
I fine-tuned the pretrained resnet10 with TLT 3.0 and want to deploy it in DeepStream with nvinfer. In my config file for that model, what should the offsets, model-color-format, and net-scale-factor values be? Are these values derived from the custom dataset, or are they static for all ResNet-trained models?

Another question: when I set enable-center-crop=False, evaluation accuracy drops. Is it necessary to set this in the config file when I want to run inference?
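For reference on the offsets question: the comment in the sgie2 config above states these preprocessing parameters are the same for all classification models generated by TLT, and nvinfer applies them per channel as y = net-scale-factor * (x - offset). A minimal sketch of that formula (the function name is mine):

```python
def preprocess_pixel(bgr, net_scale_factor=1.0,
                     offsets=(103.939, 116.779, 123.68)):
    """Per-channel mean subtraction and scaling as nvinfer applies it:
    y = net-scale-factor * (x - offset)."""
    return [net_scale_factor * (x - o) for x, o in zip(bgr, offsets)]

# A pixel exactly equal to the channel means maps to all zeros:
print(preprocess_pixel([103.939, 116.779, 123.68]))  # [0.0, 0.0, 0.0]
```

The offsets must match the per-channel means the model was trained with, and model-color-format must match the channel order those means assume; they do not need to be re-derived per dataset unless training used different means.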

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi @catia.mourao.896, is the issue fixed?