Inference time of ssdlite_mobilenetv2 using TF-TRT

Hi everyone,
I converted ssdlite_mobilenetv2 (1 class) using TF-TRT, but I didn’t get any improvement in speed. In your opinion, is this because the model is lightweight and already a well-optimized network, or did I convert the model incorrectly? The conversion reported TensorRT Engine nodes = 0, but for the ssd_resnet50 model I got TensorRT Engine nodes ~= 23.
I reach 27 FPS with ssdlite_mobilenetv2 (one class) on a Jetson TX2. Is that speed good? Is there room to improve?
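For reference, TF-TRT replaces each converted subgraph with a `TRTEngineOp` node, so "TensorRT Engine nodes = 0" means nothing was offloaded to TensorRT. A minimal sketch of how one might count them after conversion (the helper name and the example op list are illustrative; in practice you would feed it the op types from the converted `GraphDef`, e.g. `count_trt_engine_nodes(n.op for n in graph_def.node)`):

```python
def count_trt_engine_nodes(node_ops):
    """Count TRTEngineOp nodes among the op types of a converted graph.

    node_ops: iterable of op-type strings, e.g. (n.op for n in graph_def.node).
    A result of 0 means TF-TRT created no TensorRT engines and the graph
    still runs entirely in native TensorFlow.
    """
    return sum(1 for op in node_ops if op == "TRTEngineOp")

# Illustrative example with made-up op types:
ops = ["Conv2D", "TRTEngineOp", "Relu", "TRTEngineOp", "Identity"]
print(count_trt_engine_nodes(ops))  # → 2
```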


Can you share a repro package with the

  1. original model
  2. converted model
  3. scripts that you ran to convert the model

so we can further debug this?

Please also share your specific TensorRT, TensorFlow, CUDA, and cuDNN versions.
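A small script like the following can collect those versions into one report on the Jetson (a sketch, assuming it runs in the same environment used for conversion; package and command names may vary by JetPack release, and missing components are simply reported as "not found"):

```python
import subprocess

def version_of(cmd):
    """Run a shell command and return its first output line, or 'not found'."""
    try:
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        lines = out.stdout.strip().splitlines()
        return lines[0] if out.returncode == 0 and lines else "not found"
    except OSError:
        return "not found"

# Commands below are typical for a JetPack install; adjust to your setup.
report = {
    "TensorFlow": version_of('python3 -c "import tensorflow as tf; print(tf.__version__)"'),
    "CUDA (nvcc)": version_of("nvcc --version | tail -1"),
    "TensorRT (dpkg)": version_of("dpkg -l | grep -i tensorrt"),
    "cuDNN (dpkg)": version_of("dpkg -l | grep -i cudnn"),
}
for name, ver in report.items():
    print(f"{name}: {ver}")
```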

NVIDIA Enterprise Support