Accuracy loss in a trained SSD-MobileNetV1 model in detectnet when converting to TensorRT

I trained the SSD-MobileNetV1 model on a custom dataset using the PyTorch code provided with the jetson-inference repository, but when I convert it to a TensorRT model using the given method, the F1 score drops from 76% to 49%. Is there a known cause or a workaround to avoid this accuracy drop?
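For context, this is roughly how the exported model can be sanity-checked before it ever reaches TensorRT. The sketch below (paths are placeholders, and it assumes the input/output names that onnx_export.py normally assigns: `input_0`, `scores`, `boxes`, plus the 300x300 / mean 127 / std 128 preprocessing used by the MobileNetV1-SSD code) runs the exported ONNX file through onnxruntime on a single validation image. Comparing its raw outputs against the PyTorch checkpoint on the same image helps tell whether the drop happens at the ONNX export step or later in the TensorRT/detectnet stage:

```python
# Sketch: run the exported ONNX model with onnxruntime and inspect raw outputs.
# Paths and the exact preprocessing are assumptions -- adjust to your setup.
import numpy as np
import onnxruntime as ort
from PIL import Image

MODEL = "models/mydataset/ssd-mobilenet.onnx"   # placeholder: output of onnx_export.py
IMAGE = "test/example.jpg"                      # placeholder: any validation image

# Assumed preprocessing of the pytorch-ssd MobileNetV1-SSD eval:
# resize to 300x300, subtract mean 127, divide by 128, NCHW float32
img = Image.open(IMAGE).convert("RGB").resize((300, 300))
x = (np.asarray(img, dtype=np.float32) - 127.0) / 128.0
x = np.transpose(x, (2, 0, 1))[None, ...]

sess = ort.InferenceSession(MODEL)
# 'input_0', 'scores', 'boxes' are the names onnx_export.py typically uses
scores, boxes = sess.run(["scores", "boxes"], {"input_0": x})

print("scores:", scores.shape, "boxes:", boxes.shape)
print("max confidence (excluding background):", scores[0, :, 1:].max())
```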

@dusty_nv I hope you can help me here again.

Hi @ash.hemax, I'm not sure without knowing your dataset and how you are evaluating the F1 score, but my suggestions would be the same as in your other thread:

https://forums.developer.nvidia.com/t/inferencing-pretrained-custom-tensoflow-ssd-mobilnetv2-model-using-tensorrt-and-detectnet/175322
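One thing that is often worth checking in this kind of comparison (not specific to this thread) is the detection threshold: detectnet defaults to a confidence threshold of 0.5, so if the PyTorch F1 evaluation used a lower cutoff, the TensorRT-side F1 will look worse simply because low-confidence detections are dropped. A minimal sketch of loading the exported ONNX model through detectNet with an explicit threshold, assuming the standard train_ssd.py / onnx_export.py layout (paths and the 0.3 value are placeholders):

```python
#!/usr/bin/env python3
# Sketch: load the exported ONNX model via detectNet (TensorRT) with a matched
# confidence threshold and print detections for one image. Paths are placeholders.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=[
    "--model=models/mydataset/ssd-mobilenet.onnx",
    "--labels=models/mydataset/labels.txt",
    "--input-blob=input_0",
    "--output-cvg=scores",
    "--output-bbox=boxes",
    "--threshold=0.3",   # match whatever threshold the PyTorch F1 eval used
])

img = jetson.utils.loadImage("test/example.jpg")   # placeholder validation image
detections = net.Detect(img, overlay="none")

for d in detections:
    print(net.GetClassDesc(d.ClassID), d.Confidence, d.Left, d.Top, d.Right, d.Bottom)
```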