Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) → GPU
• DeepStream Version → 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version → 7.2
• NVIDIA GPU Driver Version (valid for GPU only) → 455.32.00
• Issue Type( questions, new requirements, bugs) → Question: how to correctly set up the training spec file for an EfficientNet classification model, and the inference config for using that model as a classifier in DeepStream
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
classification_dvd_spec.cfg (1.3 KB)
sgie_efficientnet_tlt_config.txt (2.9 KB)
The inference results produced by DeepStream and by the TAO inference.py script are different. TAO inference works very well on the images cropped out by the primary engine (object detection): almost all of them are classified correctly.
However, the model does not work well inside DeepStream itself when deployed as a secondary engine. There are many misclassifications, and sometimes it fails to classify at all.
How do I configure the spec files on both sides so that they produce the same (or similar) inference results? I have also tried following the config file in deepstream-test2 and experimented with almost all the parameters in the docs, but the secondary classifier engine still does not output correct results.
I have attached the spec file used to train the EfficientNet image classification model and the config file used to run the classifier as a secondary inference engine in DeepStream. Please take a look and let me know whether there is any incompatibility between them.
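For context, here is a sketch of the preprocessing-related section of my sgie config that I suspect is the problem area. The numeric values below are illustrative assumptions, not necessarily what the trained model expects; my understanding is that `net-scale-factor`, `offsets`, and `model-color-format` in the DeepStream config must reproduce exactly the normalization used during TAO training, or the classifier output degrades in the way described above:

```
[property]
# Hypothetical preprocessing values for illustration only.
# They must match the normalization mode used in the TAO training spec.
# Caffe-style preprocessing (BGR, mean subtraction, no scaling) would look like:
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# Torch-style preprocessing (RGB, ImageNet mean/std) would instead use
# a scale factor near 0.0175 and different offsets, with model-color-format=0.

# Must match input_image_size in the training spec (example: 3x224x224).
infer-dims=3;224;224

# Run as a classifier on the primary engine's detections.
network-type=1
process-mode=2
is-classifier=1
classifier-threshold=0.2
```

If someone can confirm which normalization mode the EfficientNet classification model in TAO uses by default, I can align both spec files accordingly.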