Issue with operate-on-class-ids

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have set up a pipeline with a classification model as the pgie (gie-unique-id is 1) and a detection model as the sgie (gie-unique-id is 2).

Now, in the detection config file, I have set operate-on-gie-id to 1 and operate-on-class-ids to 22 (the pgie model has 24 classes).

Even with this configuration, when the pgie model classifies the frame as class-id 0 or 1, the detection model still runs on those class ids. Ideally it should only run on class-id 22.
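
For reference, the pipeline is arranged roughly as follows (a simplified gst-launch sketch only; the actual application code, input and config file names differ):

gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=pgie_classifier_config.txt ! \
  nvinfer config-file-path=sgie_detector_config.txt ! \
  nvvideoconvert ! nvdsosd ! nv3dsink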

Here are the config files.

Pgie classification model:

[property]
gpu-id=0
net-scale-factor=1
onnx-file=/app/models/csgo/status.onnx
model-engine-file=/app/models/csgo/status.engine
labelfile-path=/app/models/csgo/status.txt
batch-size=16
# 0=FP32 and 1=INT8 mode
network-mode=1
process-mode=1
network-type=1
model-color-format=1
gpu-id=0
gie-unique-id=1
is-classifier=1
classifier-async-mode=1
classifier-threshold=0.05
#scaling-filter=0
#scaling-compute-hw=0
offsets=103.939;116.779;123.68
infer-dims=3;384;384
uff-input-blob-name=input_1
uff-input-order=0
output-blob-names=predictions/Softmax

Sgie detection model (I have to pass the full frame to the sgie as well):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/app/models/csgo/bbox.onnx
model-engine-file=/app/models/csgo/bbox.engine
labelfile-path=/app/models/csgo/bbox.txt
batch-size=1
network-mode=0
num-detected-classes=10
interval=0
gie-unique-id=2
operate-on-gie-id=1 
operate-on-class-ids=22;
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/app/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

Could you add some logs in our source code below to check whether the parameter is effective? (See the example after the snippet.)

deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp
static inline gboolean
should_infer_object (GstNvInfer * nvinfer, GstBuffer * inbuf,
    NvDsObjectMeta * obj_meta, gulong frame_num,
    GstNvInferObjectHistory * history)
{
...

  /* Infer on object if the operate_on_class_ids list is empty or if
   * the flag at index  class_id is TRUE. */
  if (!nvinfer->operate_on_class_ids->empty () &&
      ((int) nvinfer->operate_on_class_ids->size () <= obj_meta->class_id ||
          nvinfer->operate_on_class_ids->at (obj_meta->class_id) == FALSE)) {
    return FALSE;
  }
...
}
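
For example, a debug print could be added around that check, roughly like this (a minimal sketch, not tested; after editing you would rebuild and reinstall the gst-nvinfer plugin):

  /* Sketch: log what the plugin actually sees for this object before the
   * operate-on-class-ids check. */
  g_print ("should_infer_object: parent gie=%d class_id=%d list size=%d\n",
      obj_meta->unique_component_id, obj_meta->class_id,
      (int) nvinfer->operate_on_class_ids->size ());

  if (!nvinfer->operate_on_class_ids->empty () &&
      ((int) nvinfer->operate_on_class_ids->size () <= obj_meta->class_id ||
          nvinfer->operate_on_class_ids->at (obj_meta->class_id) == FALSE)) {
    g_print ("should_infer_object: skipping object with class_id=%d\n",
        obj_meta->class_id);
    return FALSE;
  }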

My pgie model is a classifier, and in that case obj_meta.class_id is -1 since the classifier data is inside classifier_meta; that's why it is running inference on all the classes.

Yes. We don’t support that when the parent plugin is a classifier. You can customize it by referring to the current source code, as sketched below.
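
For instance, one possible direction (a rough sketch only, not tested) would be to extend should_infer_object so that, when the parent object carries classifier meta instead of a detector class id, the check is done against the classifier result:

  /* Sketch: gate on the classifier's result_class_id when the parent
   * attached classifier meta (obj_meta->class_id is -1 in that case). */
  for (NvDsMetaList *l_cls = obj_meta->classifier_meta_list; l_cls;
      l_cls = l_cls->next) {
    NvDsClassifierMeta *cmeta = (NvDsClassifierMeta *) l_cls->data;
    for (NvDsMetaList *l_lbl = cmeta->label_info_list; l_lbl;
        l_lbl = l_lbl->next) {
      NvDsLabelInfo *label = (NvDsLabelInfo *) l_lbl->data;
      if (label->result_class_id < nvinfer->operate_on_class_ids->size () &&
          nvinfer->operate_on_class_ids->at (label->result_class_id))
        return TRUE;   /* classifier result is in the allowed list */
    }
  }
  return FALSE;        /* no matching classifier result found */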

I am facing the same issue in your deepstream-test2 sample application.
In the test2 application I modified the following things:

The pgie model has 4 classes (car, bicycle, person, road_sign); I added filter-out-class-ids=0 in the pgie config file.
sgie1 is set to operate on the pgie model with operate-on-class-ids=0, and similarly for sgie2.

Now ideally the sgie models shouldn't run, since no class-id 0 objects are passed from the pgie to the sgies.
But when I set process-mode to 1 for both sgies, I still see results from the classification models.

Here is the modified pad probe function:

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    # Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            # obj_counter[obj_meta.class_id] += 1
            print(f"Detection {obj_meta.obj_label}")
            l_classifier = obj_meta.classifier_meta_list
            while l_classifier is not None:
                try:
                    classifier_meta = pyds.NvDsClassifierMeta.cast(l_classifier.data)
                except StopIteration:
                    break

                l_label = classifier_meta.label_info_list
                while l_label is not None:
                    try:
                        label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    except StopIteration:
                        break
                    print(f"Classification Result Object : {label_info.result_label} ID : {label_info.result_class_id}")
                        
                    try:
                        l_label = l_label.next
                    except StopIteration:
                        break

                try:
                    l_classifier = l_classifier.next
                except StopIteration:
                    break

            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    #past tracking meta data
    # l_user=batch_meta.batch_user_meta_list
    # while l_user is not None:
    #     try:
    #         # Note that l_user.data needs a cast to pyds.NvDsUserMeta
    #         # The casting is done by pyds.NvDsUserMeta.cast()
    #         # The casting also keeps ownership of the underlying memory
    #         # in the C code, so the Python garbage collector will leave
    #         # it alone
    #         user_meta=pyds.NvDsUserMeta.cast(l_user.data)
    #     except StopIteration:
    #         break
    #     if(user_meta and user_meta.base_meta.meta_type==pyds.NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META):
    #         try:
    #             # Note that user_meta.user_meta_data needs a cast to pyds.NvDsTargetMiscDataBatch
    #             # The casting is done by pyds.NvDsTargetMiscDataBatch.cast()
    #             # The casting also keeps ownership of the underlying memory
    #             # in the C code, so the Python garbage collector will leave
    #             # it alone
    #             pPastDataBatch = pyds.NvDsTargetMiscDataBatch.cast(user_meta.user_meta_data)
    #         except StopIteration:
    #             break
    #         for miscDataStream in pyds.NvDsTargetMiscDataBatch.list(pPastDataBatch):
    #             print("streamId=",miscDataStream.streamID)
    #             print("surfaceStreamID=",miscDataStream.surfaceStreamID)
    #             for miscDataObj in pyds.NvDsTargetMiscDataStream.list(miscDataStream):
    #                 print("numobj=",miscDataObj.numObj)
    #                 print("uniqueId=",miscDataObj.uniqueId)
    #                 print("classId=",miscDataObj.classId)
    #                 print("objLabel=",miscDataObj.objLabel)
    #                 for miscDataFrame in pyds.NvDsTargetMiscDataObject.list(miscDataObj):
    #                     print('frameNum:', miscDataFrame.frameNum)
    #                     print('tBbox.left:', miscDataFrame.tBbox.left)
    #                     print('tBbox.width:', miscDataFrame.tBbox.width)
    #                     print('tBbox.top:', miscDataFrame.tBbox.top)
    #                     print('tBbox.right:', miscDataFrame.tBbox.height)
    #                     print('confidence:', miscDataFrame.confidence)
    #                     print('age:', miscDataFrame.age)
    #     try:
    #         l_user=l_user.next
    #     except StopIteration:
    #         break
    return Gst.PadProbeReturn.OK	
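
For reference, the probe is attached to the OSD sink pad as in the stock deepstream-test2 Python app (the element variable name nvosd follows that sample; adjust if yours differs):

    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write("Unable to get sink pad of nvosd\n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)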

Here is the output; no car is detected, so there should be no classification results.

Frame Number=1437 Number of Objects=5 Vehicle_count=0 Person_count=0
Detection 
Classification Result Object : largevehicle ID : 1
Detection 
Classification Result Object : mercedes ID : 15
Detection person
Detection person
Detection person
Detection person
Frame Number=1438 Number of Objects=6 Vehicle_count=0 Person_count=0
nvstreammux: Successfully handled EOS for source_id=0
Detection 
Classification Result Object : largevehicle ID : 1
Detection 
Classification Result Object : mercedes ID : 15
Detection person
Detection person
Detection person
Detection person
Frame Number=1439 Number of Objects=6 Vehicle_count=0 Person_count=0
Detection 
Classification Result Object : largevehicle ID : 1
Detection 
Classification Result Object : mercedes ID : 15
Detection person
Detection person
Detection person
Frame Number=1440 Number of Objects=5 Vehicle_count=0 Person_count=0
Detection 
Classification Result Object : suv ID : 3
Detection 
Classification Result Object : bmw ID : 2
Frame Number=1441 Number of Objects=2 Vehicle_count=0 Person_count=0

Can you guide me on this?
Does process-mode=1 have any impact on operate-on-class-ids for classification models?

I have tried on my side with DeepStream 7.1. It works well: no car classification results are produced. Could you upgrade your DeepStream to 7.1?

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2
1. filter-out-class-ids: 0 for the pgie
2. operate-on-class-ids: 0 for sgie1 and sgie2
3. process-mode: 1 for sgie1 and sgie2 (see the config sketch after this list)
4. ./deepstream-test2-app dstest2_config.yml
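
The corresponding nvinfer settings would look roughly like this (a sketch only, shown in the classic key=value form; the stock dstest2 config file names and the remaining keys are left unchanged):

# pgie (detector, 4 classes: car, bicycle, person, road_sign)
[property]
gie-unique-id=1
filter-out-class-ids=0

# sgie1 (classifier, now running on full frames)
[property]
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
process-mode=1

# sgie2: same operate-on-gie-id, operate-on-class-ids and process-mode, with its own gie-unique-id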

Did you try with filter-out-class-ids=0 in the pgie config file?

Yes. I have updated the steps above.

I’m using DeepStream 7.1 with two detection models: one as the primary inference engine (pgie) and the other as the secondary inference engine (sgie). The operate-on-class-ids property works correctly when process-mode is set to 2. However, when I change process-mode to 1, the sgie begins detecting objects independently of the pgie’s output, ignoring the operate-on-class-ids setting.

You need to implement your feature by customizing the source code I attached above, gstnvinfer.cpp.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.