Preprocessing images for the secondary classifier in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) NVIDIA Jetson Nano (Developer Kit Version)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) JetPack 4.4
• TensorRT Version 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)

I am using the deepstream-5.0 test5 example, which lets me run both a primary and a secondary classifier.
Path of deepstream-test5: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5
I made the necessary changes in the config file (path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/test5_config_file_src_infer_tracker_sgie.txt), and it works with the primary classifier, but it is not working with the secondary classifier.
I have tested the secondary classifier's model separately, and it works fine when deployed as a primary classifier.
I thought the bounding-box crops have to be enlarged with some padding, so after following some forum discussions I tried making changes in the gst-dsexample plugin (path: /opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-dsexample), but that is not working either.
My question is:
Am I on the right track? If not, what is the right way to solve this issue?

Hi,
A better way would be to improve the accuracy of the secondary-gie model(s). After the primary-gie, the detected objects are put into the metadata and the whole frame is sent on to the next secondary-gie. It seems strange to process the frame just to scale up the detected objects.
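If you want to verify what the secondary-gie receives, you can inspect the object metadata attached by the primary-gie with a pad probe (for example on the secondary-gie's sink pad). A minimal sketch, assuming the standard NvDsBatchMeta API; the probe name is just an illustration:

#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Sketch: print every detected object carried in the batch metadata. */
static GstPadProbeReturn
print_objects_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      /* rect_params is the box the secondary classifier will crop from */
      g_print ("frame %d: class %d, box %.0fx%.0f at (%.0f, %.0f)\n",
          frame_meta->frame_num, obj->class_id,
          obj->rect_params.width, obj->rect_params.height,
          obj->rect_params.left, obj->rect_params.top);
    }
  }
  return GST_PAD_PROBE_OK;
}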

Hi,
I'm running face recognition. For the second model, I need to transpose the cropped image before feeding it in. I inserted gst-dsexample between the face detection model and the face embedding model, but it does not seem to work correctly. This is the code to build the pipeline:


and this is the code I modified in gstdsexample.cpp:

Why does the second model not take the cropped image from dsexample->cvmat?
Can you suggest any solution?
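To clarify what I changed: the modification sits in the per-object loop of gst_dsexample_transform_ip(); roughly, the idea is the following (a sketch assuming the stock get_converted_mat() helper and cvmat member of the sample plugin, not the exact code):

/* Sketch only: inside the per-object loop of gst_dsexample_transform_ip(),
 * after get_converted_mat() has filled dsexample->cvmat with the cropped,
 * scaled object. */
if (get_converted_mat (dsexample, surface, frame_meta->batch_id,
        &obj_meta->rect_params, scale_ratio,
        dsexample->video_info.width,
        dsexample->video_info.height) != GST_FLOW_OK) {
  continue;
}

/* Transpose the cropped face before using it further; write into a
 * separate Mat rather than transposing in place. */
cv::Mat transposed;
cv::transpose (*dsexample->cvmat, transposed);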

Hi,
The primary detector and the secondary classifier are both nvinfer plugins. The input to nvinfer is the complete frame, not cropped images. The secondary classifier crops objects itself according to the metadata from the primary detector.
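For reference, that linkage is set up in the config: the [secondary-gieX] section points at the primary detector via operate-on-gie-id, and the classifier's own nvinfer config file runs in secondary (object) mode. A rough sketch of the relevant keys, with placeholder values and file names:

[secondary-gie0]
enable=1
gpu-id=0
# must match gie-unique-id of the primary detector
operate-on-gie-id=1
# optional: only classify these primary classes
operate-on-class-ids=0
gie-unique-id=2
batch-size=16
config-file=config_infer_secondary_example.txt

# and in config_infer_secondary_example.txt (the classifier's nvinfer config):
# process-mode=2        operate on detected objects, not the full frame
# input-object-min-width / input-object-min-height can skip very small crops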

So for your use case, we would suggest improving the accuracy of the classifier, or using a larger input resolution.

If you prefer a more advanced approach, you can check the source of nvinfer and customize it. The source code is in

/opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-nvinfer/