How to extract only a specific class in DeepStream with Triton Inference Server

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 7.1
• CUDA Version: 12.6
• NVIDIA GPU Driver Version: 560.35.03
• Issue Type: Question

I am running inference in DeepStream with Python, using a Triton Inference Server.
It works, but I don't know how to get only one specific class from the inference results.
Say my model outputs 6 classes and I only want class number 2.

Thank you in advance!

This is my nvinferserver configuration file:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "input_1:0"
    }]
    outputs: [
      {name: "output_bbox/BiasAdd:0"},
      {name: "output_cov/Sigmoid:0"}
    ]
    triton {
      model_name: "peoplenet"
      version: -1
      grpc {
        url: "0.0.0.0:8001"
        enable_cuda_buffer_sharing: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "input_1:0"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/myapp/models/peoplenet/1/labels.txt"
    detection {
      num_detected_classes: 3
      per_class_params {
        key: 2
        value { pre_threshold: 0.4 }
      }
      nms {
        confidence_threshold: 0.2
        topk: 20
        iou_threshold: 0.5
      }
    }
  }

  extra {
    copy_input_to_host_buffers: false
    output_buffer_pool_size: 2
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_class_ids: [2]
  interval: 0
}

You can add a probe function to the src pad of the nvinferserver element and then pick out the specified class from the obj_meta_list. This code snippet is adapted from deepstream_test3_app.c:

NvDsMetaList *l_frame = NULL;
NvDsMetaList *l_obj = NULL;
NvDsObjectMeta *obj_meta = NULL;
guint vehicle_count = 0, person_count = 0, num_rects = 0;

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
        l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
            l_obj = l_obj->next) {
        obj_meta = (NvDsObjectMeta *) (l_obj->data);
        /* Only count the classes of interest. */
        if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
            vehicle_count++;
            num_rects++;
        }
        if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
            person_count++;
            num_rects++;
        }
    }
}
Thanks for this idea. I was thinking there might be a config parameter for nvinferserver like filter-out-class-ids in the nvinfer config file that does this.

Something like this:
filter-out-class-ids = 0;1;3;4;5;6;7;8;9;10;11;12 ...
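
For reference, in a Gst-nvinfer configuration file that key lives under the [property] group (the class IDs here are just an example):

[property]
filter-out-class-ids=0;1;3;4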

This is currently not supported.

However, nvinferserver is open source, so you can add this feature yourself. You can refer to this patch:

diff --git a/sources/gst-plugins/gst-nvinferserver/gstnvinferserver_meta_utils.cpp b/sources/gst-plugins/gst-nvinferserver/gstnvinferserver_meta_utils.cpp
index d13a005..39c7336 100755
--- a/sources/gst-plugins/gst-nvinferserver/gstnvinferserver_meta_utils.cpp
+++ b/sources/gst-plugins/gst-nvinferserver/gstnvinferserver_meta_utils.cpp
@@ -105,6 +105,11 @@ attachDetectionMetadata(
             }
         }
 
+        const auto& ids = config.output_control().filter_out_class_ids();
+        if (!ids.empty() &&
+            (std::find(ids.begin(), ids.end(), obj.classIndex) != ids.end()))
+            continue;
+
         /* Scale the bounding boxes proportionally based on how the object/frame
          * was scaled during input. */
         obj.left = (obj.left - offsetLeft) / scaleX + roiLeft;
diff --git a/sources/gst-plugins/gst-nvinferserver/nvdsinferserver_plugin.proto b/sources/gst-plugins/gst-nvinferserver/nvdsinferserver_plugin.proto
index 3d39caf..3843ec8 100755
--- a/sources/gst-plugins/gst-nvinferserver/nvdsinferserver_plugin.proto
+++ b/sources/gst-plugins/gst-nvinferserver/nvdsinferserver_plugin.proto
@@ -124,6 +124,8 @@ message PluginControl {
       /* Classifier type of a particular nvinferserver component. */
       string classifier_type = 3;
     }
+    /* Filter specified class IDs */
+    repeated int32 filter_out_class_ids = 4;
   }
Note: /opt/nvidia/deepstream/deepstream/sources/includes/nvdsinferserver_plugin.proto also needs to be modified in the same way. Then rebuild and reinstall the plugin:
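A typical sequence, assuming the default DeepStream 7.1 source layout (the plugin directory is inferred from the paths in the diff above):

cd /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinferserver
make CUDA_VER=12.6 install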

Finally, add this to your nvinferserver configuration file:

output_control {
  filter_out_class_ids: [1, 2, 3]
}
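
With the patch applied, any detected object whose class ID is listed in filter_out_class_ids is skipped in attachDetectionMetadata, so no object metadata is attached for those classes downstream.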

Add to FAQ

Thank you so much for the solution!
