Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only)
512.95
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Hi,
I have trained a classifier model (darknet53 backbone) with the newest version of TAO (3.22.05) to classify detections from PeopleNet (v2.6) into gender (Female/Male). The classifier works as expected when running inference on some test data with tao classification inference.
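For reference, the inference check was run roughly like the sketch below. The paths, spec file, and class map are placeholders, and the exact flag set is an assumption based on the TAO 3.22.05 classification docs, not a verbatim copy of the command used:

tao classification inference -e /workspace/specs/classification_spec.cfg \
                             -m /workspace/results/weights/gender_final.tlt \
                             -k 06052019 \
                             -d /workspace/data/test \
                             -cm /workspace/results/classmap.json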
However, when the model is exported to .etlt format and deployed to DeepStream, the outputs are completely broken: only one label is ever predicted, always with 100% confidence.
The model is deployed with the nvinfer_config.txt file generated by the --gen_ds_config flag of the tao classification export command, shown below:
[property]
net-scale-factor=1.0
batch-size=1
offsets=103.939;116.779;123.68
infer-dims=3;224;224
tlt-encoded-model=gender.etlt
tlt-model-key=06052019
labelfile-path=gender_label.txt
network-type=1
num-detected-classes=2
uff-input-order=0
output-blob-names=predictions/Softmax
uff-input-blob-name=input_1
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0
process-mode=2
network-mode=0
gie-unique-id=3
operate-on-gie-id=1
classifier-threshold=0.0
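For completeness, the export that produced this config was invoked roughly as follows. This is a sketch only: the paths are placeholders and the flag set is assumed from the TAO 3.22.05 classification docs; only --gen_ds_config and the key (the tlt-model-key above) come from the actual setup:

tao classification export -m /workspace/results/weights/gender_final.tlt \
                          -k 06052019 \
                          -o /workspace/export/gender.etlt \
                          --gen_ds_config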
When inspecting the raw output tensor by setting output-tensor-meta=1, the results are the same: only one label is predicted, at 100% confidence.
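For reference, the raw tensor was read with a buffer probe on the classifier's src pad, along the lines of the sketch below. Element and variable names are placeholders; it assumes the DeepStream Python bindings (pyds) and that nvinfer attaches NVDSINFER_TENSOR_OUTPUT_META to each object when output-tensor-meta=1:

import ctypes
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def _iter(glist, cast):
    # Walk a pyds GList, casting each node to the given meta type.
    while glist is not None:
        yield cast(glist.data)
        try:
            glist = glist.next
        except StopIteration:
            break

def sgie_src_pad_buffer_probe(pad, info, u_data):
    # Print the raw predictions/Softmax values attached by the gender classifier.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    for frame_meta in _iter(batch_meta.frame_meta_list, pyds.NvDsFrameMeta.cast):
        for obj_meta in _iter(frame_meta.obj_meta_list, pyds.NvDsObjectMeta.cast):
            for user_meta in _iter(obj_meta.obj_user_meta_list, pyds.NvDsUserMeta.cast):
                if user_meta.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    continue
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    probs = ctypes.cast(pyds.get_ptr(layer.buffer),
                                        ctypes.POINTER(ctypes.c_float))
                    # Two classes (Female/Male) -> two softmax values
                    print(layer.layerName, [probs[j] for j in range(2)])
    return Gst.PadProbeReturn.OK

The probe is attached to the src pad of the nvinfer element running gender.etlt, e.g. sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0).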
The model .etlt file is available at the following link: https://transfer.sh/(/ELcTKy/gender.etlt).zip
The config and label files are attached:
gender_config.txt (461 Bytes)
gender_labels.txt (12 Bytes)
Any help is greatly appreciated,
/M