Hi, I would like some help understanding the floating-point precision of TLT models. I understand that when exporting a model we can set the data type to FP32, FP16, or INT8. But just to confirm: before export, is my trained model in FP64? And is that why there is a slight decrease in accuracy when exporting the model?
It is an FP32 model.
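For context on why a reduced-precision export can change results at all: casting FP32 weights down to FP16 shrinks the mantissa from 23 bits to 10, so each value is rounded to roughly 3-4 significant decimal digits. A minimal NumPy sketch (the weight values are made up for illustration):

```python
import numpy as np

# Hypothetical FP32 weight values; casting them to FP16 is lossy,
# which is one source of small accuracy changes after export.
weights_fp32 = np.float32([0.1234567, 1.0000001, 3.1415927])
weights_fp16 = weights_fp32.astype(np.float16)

# Round-trip back to FP32 to measure the rounding error per weight.
error = np.abs(weights_fp32 - weights_fp16.astype(np.float32))
print(error.max() > 0)  # True: the cast does not preserve the values exactly
```

This only illustrates the precision loss mechanism; whether it produces a measurable mAP drop depends on the network.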
May I know why there is a drop in accuracy when running the exported models in DeepStream?
Which network did you use?
I'm using FasterRCNN and YOLOv4 for object detection.
Please share the config file you use to run DeepStream.
Please try the default DeepStream GitHub repo: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
pgie_yolov4_tlt_config_50.txt (2.1 KB)
Sorry, but how does this help?
Please try lowering pre-cluster-threshold.
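For reference, pre-cluster-threshold is set per class in the nvinfer config file (the same kind of file as the attached pgie_yolov4_tlt_config_50.txt). A sketch of the relevant section; the value shown is illustrative, not taken from the attached config:

```
[class-attrs-all]
# Detections scoring below this value are discarded before clustering;
# lowering it keeps more candidate boxes. 0.3 is a hypothetical example value.
pre-cluster-threshold=0.3
```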
Sorry, but from what I know, pre-cluster-threshold is the confidence threshold used during inference. I tried to ensure that the confidence threshold was the same before and after exporting, but there still seems to be a drop in accuracy.
So I'm just trying to figure out why there is a slight drop in accuracy when, logically, the accuracy should remain the same.
No, usually it is not the same.
The inference results from "tao inference" and from DeepStream may differ.
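One common source of such differences (an assumption here, not confirmed in this thread) is a preprocessing mismatch: the nvinfer [property] settings must reproduce the normalization and color format used during TAO training, or the input distribution shifts and accuracy drops. The values below are illustrative, not taken from the attached config:

```
[property]
# Preprocessing must match what TAO used at training time.
# Example values only; check them against your training spec.
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
maintain-aspect-ratio=1
```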
Also, I suggest running "tao evaluate" against the xxx.engine to double-check whether there is an mAP drop.
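A sketch of such an evaluation run, assuming YOLOv4 and using placeholder paths and a $KEY variable (substitute your own spec file, engine path, and encryption key):

```
# Evaluate the exported TensorRT engine with the same experiment spec
# used for training, so mAP can be compared against the .tlt model.
tao yolo_v4 evaluate -e /workspace/specs/yolo_v4_train.txt \
                     -m /workspace/export/yolov4.engine \
                     -k $KEY
```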
But from what I know, tlt evaluate does not support evaluating engine files…
-m, --model: The path to the model file to use for evaluation. The model can be either a .tlt model file or a TensorRT engine.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.