Trained .tlt model works with more than 95% accuracy but exported .etlt performs poorly

Please provide the following information when requesting support.

• Hardware: Ubuntu
• Network Type: DetectNet_v2
• TAO version: 4

I have trained resnet_18 with 1280x1280 images using tao detectnet_v2. After training, I tested the generated .tlt model. The detection accuracy is quite good, more than 95%.

But after exporting the model to .etlt, I tested it with the same set of images, and now the accuracy is very bad: the .etlt model can hardly detect anything. Can you advise why there is such a difference in performance between the .tlt and the .etlt?

How I exported to .etlt:

tao detectnet_v2 calibration_tensorfile \
  -e /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/train_config_badgep.txt \
  -m 15 \
  -o /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/cal_tensor/calibration.tensor \
  --use_validation_set

tao detectnet_v2 export \
  -m /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/newmodel/model.tlt \
  -k badge \
  -o /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/engine_cal/model_badge_step_v21.etlt \
  --cal_image_dir /nfsdata/badge/kitti/data/ \
  --cal_data_file /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/cal_tensor/calibration.tensor \
  --data_type int8 \
  --batches 15 \
  --batch_size 16 \
  --cal_cache_file /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/engine_cal/model_badge_step_v21.bin \
  --engine_file /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/engine_cal/model_badge_step_v21.etlt_b16_gpu0_int8.engine

How did you test? Could you share more info?

Hi,

I ran inference on some sample images (outside the training set) with the following tao command:
tao detectnet_v2 inference -e /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/infer2.txt -i /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd2/testdata2/ -o /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd2/testout6/ -k badge > /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/logs/eval_badge_infer_v12.log
infer2.txt (1.4 KB)

Also attaching the training config.
train_config_badgep.txt (3.7 KB)

Please delete --use_validation_set, then run "export" and "inference" to test again.

Tried, but the result is the same. I am confused: if the .tlt can detect so well, why does the .etlt fail to detect?

To narrow it down, please generate an fp16 or fp32 engine and test again.
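For reference, a minimal sketch of what an fp16 export could look like, reusing the model path and key from the int8 export command above; the fp16 output filenames are placeholders, and paths should be adjusted to your setup:

```shell
# Sketch: export without int8 calibration so TensorRT builds an fp16 engine.
# No --cal_* flags are needed for fp16; only the data type and engine path change.
tao detectnet_v2 export \
  -m /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/newmodel/model.tlt \
  -k badge \
  -o /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/engine_cal/model_badge_fp16.etlt \
  --data_type fp16 \
  --engine_file /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/engine_cal/model_badge_fp16.engine
```

Comparing fp16 against int8 isolates whether the accuracy drop comes from quantization/calibration rather than from the export itself.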

Hi,
Thanks, the fp16 engine performs with almost the same accuracy as the .tlt does. How can I achieve similar accuracy with the int8 engine?

There is no update from you for a period, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Could you enlarge the calibration set to use all the training images?
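A sketch of how the calibration could be enlarged, under the assumption that -m (in calibration_tensorfile) and --batches (in export) should be raised so that batches × batch_size covers the whole training set; the batch count of 125 assumes roughly 2000 training images with batch_size 16 and is only an example, and the paths are the ones from the earlier commands:

```shell
# Sketch: regenerate the calibration tensorfile over (roughly) the whole training set.
# With batch_size 16, choose -m so that m * 16 >= number of training images.
tao detectnet_v2 calibration_tensorfile \
  -e /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/train_config_badgep.txt \
  -m 125 \
  -o /home/azure_devops/nvidia/tlt-experiments/badge_model/train_fsd3/cal_tensor/calibration.tensor

# Then re-export with a matching --batches value so calibration sees the same images.
# (Other flags as in the original int8 export command.)
```

A small calibration set (here originally 15 × 16 = 240 images) can produce poor int8 activation ranges, which is a common cause of an int8 engine detecting far worse than fp16.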

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.