Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
I have trained a classifier model (darknet53 backbone) with the newest version of TAO (3.22.05) to classify detections from PeopleNet (v2.6) into gender (Female/Male). The classifier works as expected when running inference on some test data with
tao classification inference.
However, when the model is exported in .etlt format and deployed to DS, the outputs are completely broken: only one label is ever predicted, at 100% confidence.
The model is deployed with the nvinfer_config.txt file generated by the --gen_ds_config flag of the tao classification export command (config attached below).
When inspecting the raw output tensor by setting output-tensor-meta=1, the results are the same: only one label is predicted, at 100% confidence.
Model .etlt file at the following link: https://transfer.sh/(/ELcTKy/gender.etlt).zip
Config and label file attached.
gender_config.txt (461 Bytes)
gender_labels.txt (12 Bytes)
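For context, a typical SGIE classifier configuration for a TAO .etlt model looks roughly like the sketch below. All values here are illustrative placeholders, not the attached gender_config.txt; the model key, blob names, and dims depend on the actual export:

```
[property]
tlt-encoded-model=gender.etlt
tlt-model-key=<your key>
labelfile-path=gender_labels.txt
infer-dims=3;224;224
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
network-mode=2
network-type=1
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
classifier-threshold=0.5
output-tensor-meta=1
```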
Any help is greatly appreciated,
Changing the scaling-filter, or flipping from BGR to RGB (adjusting both the offsets and model-color-format) or vice versa, unfortunately didn’t help. Do you have any other suggestions?
Thanks in advance,
In the link I shared, the classification model is actually working as the primary TRT engine.
How about you? Did you set it as the primary TRT engine directly?
The classifier works as an SGIE on the bounding boxes produced by the PeopleNet PGIE. Setting the classifier as the PGIE does not make sense to me, since it would then run inference on the entire frame instead of just the bounding boxes?
Got it. I also described how to run it as a secondary TRT engine in Issue with image classification tutorial and testing with deepstream-app - #12 by Morganh. Please check.
And please modify to
I’ve changed the network input dimensions to 3;256;128 in the training phase (instead of the default 3;224;224). The model now produces correct results when deployed to DS.
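For anyone hitting the same mismatch, the corresponding dimension entry in the nvinfer config would then be something like the following (the exact key depends on the DeepStream version; this is a sketch, not the attached config):

```
# DeepStream 6.x style
infer-dims=3;256;128
# older releases use the UFF-style key instead:
# uff-input-dims=3;256;128;0
```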
I’m using custom image means when training the model, since our images are quite “dark”. Should I use these means as the offsets when deploying to DS as well, or the default offsets=103.939;116.779;123.68 suggested by the generated nvinfer_config.txt?
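For reference, the per-channel means can be computed over the training images and pasted into the offsets value, in the same channel order the network expects (B;G;R when model-color-format=1). A minimal numpy sketch, assuming the images are already loaded as HxWx3 uint8 arrays in the network’s channel order:

```python
import numpy as np

def channel_means(images):
    """Compute per-channel means across a list of HxWx3 uint8 images."""
    # Flatten every image to (num_pixels, 3) and average per channel.
    pixels = np.concatenate(
        [img.reshape(-1, 3).astype(np.float64) for img in images]
    )
    return pixels.mean(axis=0)

# Example with synthetic data: two solid-color "images".
imgs = [np.full((4, 4, 3), (10, 20, 30), dtype=np.uint8),
        np.full((4, 4, 3), (30, 40, 50), dtype=np.uint8)]
means = channel_means(imgs)
print(";".join(f"{m:.3f}" for m in means))  # -> 20.000;30.000;40.000
```

The resulting string can be used directly as the offsets line in the [property] group.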
Thanks in advance,
OK, how did you train your model? You set the value above in input_image_size, right?
experiment settings as below:
I will change the nvinfer_config.txt from
Perfect - appreciate the swift response.
Perhaps for future releases of TAO you could consider having the generated nvinfer_config.txt use the custom image means from the spec file when exporting (currently it always outputs offsets=103.939;116.779;123.68).
Currently, there is a “--gen_ds_config” option when exporting. See Image Classification — TAO Toolkit 3.22.05 documentation