• Hardware Platform ------> GPU
• DeepStream Version ------> 7.0
• TensorRT Version ------> 8.6
• NVIDIA GPU Driver Version ------> 545
My pipeline flow is nvinfer ------> nvtracker ------> nvdsanalytics.
Let’s say I have loaded two models, head_detection_model and bag_detection_model. In the nvdsanalytics config for head_detection_model I have set class-id=0, which means "head", and in the config for bag_detection_model I have also set class-id=0, which means "bag". Because the two models share the same class id, head detections end up in the bag ROI and vice versa. Is there any way to resolve this other than a custom plugin? If not, how can I write the custom plugin?
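For illustration only (the ROI names and coordinates below are made-up placeholders, and these are two separate config files), both nvdsanalytics configs key on class-id=0:

```
# nvdsanalytics config used with head_detection_model
# class-id 0 means "head" for this model
[roi-filtering-stream-0]
enable=1
roi-Head_ROI=100;100;500;100;500;400;100;400
inverse-roi=0
class-id=0

# nvdsanalytics config used with bag_detection_model
# class-id 0 means "bag" for this model
[roi-filtering-stream-0]
enable=1
roi-Bag_ROI=600;100;1000;100;1000;400;600;400
inverse-roi=0
class-id=0
```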
You may add a pad probe function after the two nvinfer plugins to change the class id of one of them.
E.g. configure the head detection model with “gie-unique-id=1” and the bag detection model with “gie-unique-id=2” in their nvinfer configuration files. In the probe function, get the object meta (NVIDIA DeepStream SDK API Reference: _NvDsObjectMeta Struct Reference | NVIDIA Docs). If “unique_component_id” equals 2, set “class_id” to 1.
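A minimal sketch of such a probe, assuming the bag detector runs with gie-unique-id=2 and the probe is attached to the src pad of the second nvinfer instance (element and function names here are placeholders):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Buffer probe that remaps class ids coming from the bag detector
 * (gie-unique-id=2) from 0 to 1, so nvdsanalytics can tell heads and
 * bags apart by class-id. */
static GstPadProbeReturn
remap_class_id_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Objects produced by the detector with gie-unique-id=2 (bag model). */
      if (obj_meta->unique_component_id == 2 && obj_meta->class_id == 0)
        obj_meta->class_id = 1;
    }
  }
  return GST_PAD_PROBE_OK;
}

/* Attach the probe; "bag_gie" is a placeholder for the second nvinfer element. */
static void
attach_remap_probe (GstElement *bag_gie)
{
  GstPad *src_pad = gst_element_get_static_pad (bag_gie, "src");
  gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      remap_class_id_probe, NULL, NULL);
  gst_object_unref (src_pad);
}
```

With this remap in place, the analytics config used for the bag detections would then filter on class-id=1 instead of 0.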
Is there any way to do this class-id change before I get obj_meta? Once I get obj_meta and do this, it is a little time-consuming and complex. Is there anything I can do at the nvinfer or nvdsanalytics stage only (any property that can restrict the class-id)?
No. Replacing the class_id will not take much time. gst-nvinfer is a plugin for general use; it does not know how many detectors your application will use. Since you already know how many detectors you will use and what each detector does, you can modify the gst-nvinfer source code with some hard-coded logic.
Both gst-nvinfer and gst-nvdsanalytics are open source, so you can modify them according to your own requirements.
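As a sketch only, such a hard-coded remap would go where gst-nvinfer fills the object meta from the detector output; the exact file and function (in recent releases, attach_metadata_detector() in sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp) may differ between DeepStream versions, and the variable names below follow the public sources:

```c
/* Sketch: inside gst-nvinfer's detector-metadata attachment code,
 * where obj_meta is populated from one detected object "obj". */
obj_meta->unique_component_id = nvinfer->unique_id;

if (nvinfer->unique_id == 2) {
  /* Hard code for the bag detector: shift its class ids so class 0 ("bag")
   * becomes 1 and no longer collides with the head detector's class 0. */
  obj_meta->class_id = obj.classIndex + 1;
} else {
  obj_meta->class_id = obj.classIndex;
}
```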