Problem accessing secondary gie classifier result

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.1.3
• Issue Type (questions, new requirements, bugs) question

I’m building a DeepStream pipeline in Python. As the pgie I’m using the FaceDetect model (exported to INT8 with TLT v3). As the sgie I’m using a classifier with two labels (mask, no-mask), also trained with TLT v3. The pipeline is based on the test2 example, but I removed the tracker.

When running the pipeline, only the pgie’s outputs appear on the output video. In this pipeline, the pgie links directly to the sgie.

Also, how can I access the sgie’s results in a probe? I tried l_class = obj_meta.classifier_meta_list, but the result is always None.

Do I need to include the tracker?

Configuration file for the pgie:

[property]
gpu-id=0
enable-dla=1
use-dla-core=0
# preprocessing parameters.
model-color-format=0
net-scale-factor=0.0039215697906911373
tlt-model-key=nvidia_tlt
tlt-encoded-model=../../app-be/resources/models/facedetection/model.etlt
model-engine-file=../../app-be/resources/models/facedetection/face_detection.trt
labelfile-path=../../app-be/resources/models/facedetection/labels.txt
input-dims=3;416;736;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
cluster-mode=1
interval=0
gie-unique-id=1
force-implicit-batch-dim=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.4
## Set eps=0.7 and minBoxes for cluster-mode=1 (DBSCAN)
eps=0.7
minBoxes=1

Configuration file for the sgie:

[property]
gpu-id=0
# preprocessing parameters: These are the same for all classification models generated by TLT.
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=1

# Model specific paths. These need to be updated for every classification model.
int8-calib-file=../../app-be/resources/models/maskclassification/calibration.bin
labelfile-path=../../app-be/resources/models/maskclassification/labels_mask.txt
tlt-encoded-model=../../app-be/resources/models/maskclassification/final_model_int8.etlt
tlt-model-key=tlt_encode
model-engine-file=../../app-be/resources/models/facedetection/face_mask_classification.trt
input-dims=3;224;224;0
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=2
interval=0
# network-type=1 defines that the model is a classifier.
network-type=1
gie-unique-id=2
operate-on-gie-id=1
classifier-async-mode=0
classifier-threshold=0.2

Hey,

It’s funny, I’m working on the same problem: mask/no-mask detection.
I’m using the default 4-class model from the python-apps as the pgie, and my own mask detector trained in TLT as the sgie. The sgie runs only on the “Person” class from the pgie (because in the future I need to train it to detect other equipment as well).
In my case I can see the sgie’s bounding boxes on the display. Maybe you should add operate-on-class-id=0 to tell the model it should run only on that class, and not on the whole frame? Give it a try.

As for getting the results from the sgie in a probe, I’m having the same problem: although I CAN see the bounding boxes on the display, obj_meta.classifier_meta_list is always None.
I’ve checked this user’s GitHub and it seems like it only works for Caffe models; it returns None for models like ours (trained in TLT).
Can someone from NVIDIA support give us a hand with this case?

The labels_mask.txt file had the wrong format (I had followed TLT’s getting-started guide). The correct format is mask;no-mask instead of mask\nno-mask.
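To illustrate the difference: DeepStream classifier label files use a single semicolon-separated line, while detector label files use one label per line. A minimal sketch (the filename matches the one in the sgie config, but the paths here are hypothetical):

```python
# Detector (pgie) style: one label per line.
detector_labels = "face\n"

# Classifier (sgie) style: all labels on one line, separated by semicolons.
classifier_labels = "mask;no-mask"

with open("labels_mask.txt", "w") as f:
    f.write(classifier_labels)
```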

So was the issue resolved when you changed the labels?
For the TLT doc issue, you can create a new topic in the TLT forum.

After altering the labels file, the result didn’t change. I then found what was causing the issue: the pgie and sgie were not properly linked in the pipeline.

That’s great!
Are you able to see your sgie detections on the display as well as in the command prompt (that is, by extracting the metadata from the probe)?
I mean, are:
l_class = obj_meta.classifier_meta_list
and
print("Result:", label_info.result_label)
working for you?

Yes. But I’m having another issue where the classification result is always the same: Classification result is always the same with sgie classifier - #5 by catia.mourao.896