Proper way to integrate classifiers into deepstream pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) dGPU
• DeepStream Version 7.0
• TensorRT Version 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only) 550.107.02
• Issue Type( questions, new requirements, bugs) Questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I took models from this repo.

I am trying to integrate these two classifiers into my code. However, I ran into some problems: these models predict only "Male" as the gender and "(0-2)" as the age, and they do not produce predictions for all faces. How should I link the elements (I have the ArcFace model enabled too), and how should I configure these models? Should I write custom parsers for these elements? Could you guide me, please?

I need to write code in Python.

The pipeline now is following:
nvurisrcbin->streammux->nvinfer(retinaface as pgie)->nvinfer(arcface as sgie)->nvvideoconvert->nvosd->sink
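For a setup like this, the age and gender classifiers would typically be added as two more secondary `nvinfer` elements after the PGIE, each with its own unique ID and configured (via `operate-on-gie-id` in its config file) to run on the faces detected by the PGIE. A minimal sketch of the resulting element order, written as a gst-launch-style pipeline string (the config file names, unique IDs, and URI below are assumptions, not taken from this thread):

```python
# Sketch of the element ordering only: the gender/age classifiers are
# chained as additional SGIEs after the PGIE and ArcFace. All file names
# and IDs are hypothetical; a real app would set operate-on-gie-id=1 in
# each SGIE config so they run on the PGIE's face objects.
pgie = "nvinfer config-file-path=retinaface_pgie.txt unique-id=1"
sgie_arcface = "nvinfer config-file-path=arcface_sgie.txt unique-id=2"
sgie_gender = "nvinfer config-file-path=gender_sgie.txt unique-id=3"
sgie_age = "nvinfer config-file-path=age_sgie.txt unique-id=4"

pipeline_desc = " ! ".join([
    "nvurisrcbin uri=file:///path/to/video.mp4",
    "m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080",
    pgie,
    sgie_arcface,
    sgie_gender,
    sgie_age,
    "nvvideoconvert",
    "nvdsosd",
    "nveglglessink",
])
print(pipeline_desc)
```

In a Python app the same ordering applies when linking `Gst.Element` objects directly; the string form above is only meant to make the SGIE placement explicit.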

The models in the link you posted are all Caffe models. DeepStream no longer supports Caffe models.

You may try to get ONNX models instead. There are also classifier samples in the DeepStream SDK: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_C_Sample_Apps.html
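For reference, a secondary classifier config along the lines of deepstream-test2's vehicle-make SGIE would look roughly like the fragment below. The file names and the threshold are placeholders, not values from this thread; the important keys for this use case are `process-mode=2` (operate on objects, not frames), `operate-on-gie-id=1` (run on the PGIE's detections), `network-type=1` (classifier), and a correct `labelfile-path`:

```ini
[property]
gpu-id=0
# Placeholder file names -- replace with your exported ONNX model and labels
onnx-file=gender_classifier.onnx
labelfile-path=labels_gender.txt
batch-size=16
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Secondary mode: infer on objects from an upstream detector
process-mode=2
# Only classify objects produced by the PGIE (gie-unique-id=1)
operate-on-gie-id=1
gie-unique-id=3
# 1 = classifier
network-type=1
is-classifier=1
classifier-async-mode=0
classifier-threshold=0.5
```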

Thanks! Is that the reason why the models are not working properly?

Yes. Please follow the way we demonstrate in the samples.

I have converted those models to .onnx files and followed the deepstream-test2 sample app. However, the results are still the same. I have also tried converting other models (in .tflite format) to .onnx and putting them into my code, but they do not give useful results either: they predict only occasionally, and the prediction is always the first label.

Is there a DeepStream-specific way of converting to an .onnx file? What could be the reason for such behavior?

No.

There are many factors which will impact the classification output. Some possible reasons:

  1. Your model’s output layers are not compatible with the default classifier postprocessing algorithm inside gst-nvinfer; you may need to customize the postprocessing according to the model’s output layers. You can refer to deepstream_tao_apps/apps/tao_others/deepstream_lpr_app at master · NVIDIA-AI-IOT/deepstream_tao_apps for how we customize the LPR model’s postprocessing.
  2. Your classifier model’s label file is not correct. Please refer to the classifier model sample used in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2, the label file can be found in /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleMake
  3. The classifier model itself is not accurate enough. You may need to consult the author of the model for how to improve the accuracy of the model.
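Regarding point 1: since the app is in Python, one option is to enable `output-tensor-meta=1` in the SGIE config, read the raw output tensor from `NvDsInferTensorMeta` in a pad probe on the classifier's src pad (via `pyds`), and apply the postprocessing yourself. The probe wiring depends on DeepStream bindings, but the core postprocessing step is just softmax, argmax, and a confidence threshold, as in this self-contained sketch (the labels and threshold are illustrative assumptions):

```python
import math

def parse_classifier_output(logits, labels, threshold=0.5):
    """Softmax over raw logits; return (label, confidence) or None.

    In the real pipeline, `logits` would be read from NvDsInferTensorMeta
    in a pad probe (with output-tensor-meta=1 set in the SGIE config);
    here it is just a list of floats for illustration.
    """
    # Numerically stable softmax
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    # Mimic classifier-threshold: drop low-confidence predictions
    if probs[best] < threshold:
        return None
    return labels[best], probs[best]
```

If a model outputs probabilities rather than logits, the softmax step should be skipped; applying softmax twice flattens the distribution and can produce exactly the "always the first label" symptom described above.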