Secondary-gie classifier shows only one class for all inputs

So I have created a simple pipeline for face mask detection using the Triton Inference Server of DeepStream 5.0. The primary-gie is a face detector and the secondary-gie is a classifier that classifies [other, mask, no mask].
The primary-gie face detector works without any issue. However, the secondary-gie classifier labels every input as mask (the second element of the classifier's output) for all objects generated by the primary-gie.
I tried a different classifier model with the same outputs, but this time the class prediction is stuck at other (the first element of the output).
What am I doing wrong?
source1_primary_face_detection.txt (2.9 KB) config_infer_secondary_xception_mask_classification.txt (1.3 KB)

Below is the application config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
num-sources=1
uri=file:/opt/nvidia/deepstream/deepstream-5.0/models/vid.mp4
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
enable-padding=0
##Set muxer output width and height
width=1920
height=1080
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
container=1
bitrate=4000000
output-file=/opt/nvidia/deepstream/deepstream-5.0/models/output.mp4
codec=1

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
#(0): nvinfer; (1): nvinferserver
plugin-type=1
#infer-raw-output-dir=trtis-output
batch-size=1
interval=0
gie-unique-id=1
config-file=config_infer_primary_face_detection.txt

[secondary-gie0]
enable=0
plugin-type=1
batch-size=4
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1;
config-file=config_infer_secondary_mask_classification.txt

[secondary-gie1]
enable=1
plugin-type=1
batch-size=1
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=1;
config-file=config_infer_secondary_xception_mask_classification.txt

[tracker]
enable=1
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=…/deepstream-app/tracker_config.yml
#ll-config-file=…/deepstream-app/iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1
display-tracking-id=1

[tests]
file-loop=0

infer_config {
  unique_id: 3
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "xception_mask_classifier"
      version: -1
      model_repo {
        root: "/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo"
        strict_model_config: true
        log_level: 2
        tf_gpu_memory_fraction: 0.5
        tf_disable_soft_placement: 0
      }
    }
  }
 
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }
 
  postprocess {
    labelfile_path: "/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/xception_mask_classifier/labels.txt"
    classification {
      threshold: 0.51
    }
  }
  
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1
  operate_on_class_ids: [1]
  interval: 0
  async_mode: true
  object_control {
    bbox_filter {
      min_width: 16
      min_height: 16
    }
  }
}
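One thing worth double-checking (my assumption, not something confirmed in the thread): Keras-style Xception models are usually trained on inputs scaled to [-1, 1], but the preprocess block above uses scale_factor: 1.0 with zero channel_offsets, so the network would receive raw 0-255 pixels. A softmax saturated on a single class is a common symptom of that kind of preprocessing mismatch. A minimal sketch of the normalization the model would then expect, assuming DeepStream applies y = scale_factor * (x - channel_offset) per channel:

```python
# Assumed normalize semantics: y = scale_factor * (x - channel_offset).
# Keras' Xception preprocess_input maps pixel values [0, 255] -> [-1, 1];
# with scale_factor = 1.0 and no offsets, the net sees raw 0-255 instead.

scale_factor = 1.0 / 127.5   # ~0.0078431
offset = 127.5

def normalize(x):
    """Per-pixel normalization as the config would apply it."""
    return scale_factor * (x - offset)

# Pixel extremes should land on the [-1, 1] range Xception expects.
print(normalize(0))    # -1.0
print(normalize(255))  # 1.0
```

If this is the cause, the corresponding config change would be roughly scale_factor: 0.0078431 with channel_offsets: [127.5, 127.5, 127.5] — again, assuming the model was trained with Keras-style Xception preprocessing.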

The above is the inference config for the secondary-gie.

labels:

other;mask;no-mask

Any help is appreciated.

• Hardware Platform: GPU
• DeepStream Version: 5.0
• Docker image: nvcr.io/nvidia/deepstream:5.0-20.07-triton
• Issue Type: question

I saw you had disabled the sgie.

I had included multiple sgies to check whether the loaded model was the issue. In the config I provided, sgie0 is disabled and sgie1 is enabled; they are different models that perform the same mask classification.

[secondary-gie1]
enable=1
plugin-type=1
batch-size=1
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=1;
config-file=config_infer_secondary_xception_mask_classification.txt

Both give the same issue.

Hey, did you customize your postprocess for the sgie? Your sgie's outputs are 3 classes (i.e. mask, no mask, other), right?

Yes, my sgie model has 3 outputs, [other; mask; no-mask], in that order if it matters.

This is all the postprocessing I have done via DeepStream:

  postprocess {
    labelfile_path: "/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/xception_mask_classifier/labels.txt"
    classification {
      threshold: 0.51
    }
  }

As far as the model itself goes, the output is the result of a softmax activation on the 3-element array (a pretty standard classifier).

Please refer to nvdsinfer_customclassifierparser.cpp → NvDsInferClassiferParseCustomSoftmax.
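For reference, here is a rough Python sketch of the logic such a softmax classifier parser implements (the actual DeepStream implementation is C++; names and details here are illustrative): apply softmax to the raw output tensor, then attach the top label only if its probability clears the threshold. Two things follow from this: with a 3-class softmax and threshold: 0.51, objects where no class reaches 0.51 get no label at all; and if the model's final layer already applies softmax (as stated above), running softmax a second time in a parser flattens the probabilities and can keep every class near or below the threshold — worth checking whether the parser receives logits or probabilities.

```python
import math

def parse_softmax(scores, labels, threshold=0.51):
    """Illustrative parse: softmax over raw scores, then return
    (label, prob) for the top class if it clears the threshold,
    else None (no label attached to the object)."""
    m = max(scores)
    exps = [math.exp(v - m) for v in scores]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None
    return labels[best], probs[best]

labels = ["other", "mask", "no-mask"]
print(parse_softmax([0.1, 2.0, 0.3], labels))  # top class 'mask' clears 0.51
print(parse_softmax([1.0, 1.0, 1.0], labels))  # None: all probs ~0.33
```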

Can you please elaborate on how you solved it? I'm facing a similar issue: I used both the custom and default parsers but get only one class for all inputs.

Hi saransh,

Please open a new topic for your issue. Thanks.