Does anyone know the typical FPS for MobileNet v2 SSD (300) on the TX2?

Hi, I converted MobileNet v2 SSD (300) from the TensorFlow model zoo to a TensorRT model, but I can only get 30 FPS on the TX2. Does anyone know the typical FPS for this configuration?
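For reference, FPS numbers like these are usually measured by timing repeated inference calls after a warm-up phase. A minimal sketch of that measurement loop (the `run_inference` callable is a hypothetical stand-in for the actual TensorRT engine call; a real measurement should also exclude pre/post-processing):

```python
import time

def measure_fps(run_inference, n_warmup=10, n_iters=100):
    """Time repeated inference calls and return frames per second.

    run_inference: callable performing one forward pass (a hypothetical
    stand-in here for the TensorRT engine execution).
    """
    for _ in range(n_warmup):          # warm-up: the first calls are slower
        run_inference()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Example with a dummy 10 ms "inference" -> roughly 100 FPS
fps = measure_fps(lambda: time.sleep(0.01), n_warmup=2, n_iters=20)
print(round(fps))
```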

Hello,

Per https://medium.com/@jonathan_hui/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359

22-59 FPS is the range (lowest to highest) reported by the corresponding papers for MobileNet v2 SSD.

@abyss.heidiz - I’d love to learn how you got 30 FPS on the TX2. I’m trying the same on a Drive PX 2 (similar HW specs) and can’t get over 11 FPS on that model… My problem is probably that TensorRT doesn’t want to optimize anything for me, so I’d appreciate it if you could share your solution.

I just converted the model from the TensorFlow model zoo to TensorRT.

You can find some repos on GitHub that run at around 40 or even 50 FPS with a model converted from Caffe, but I can only get 30 FPS with the model I converted myself…

Can you post your code for the conversion, and the environment setup (I mean which versions of TensorFlow, TensorRT, etc.), please?
I’m trying to convert with the original NVIDIA code from https://github.com/NVIDIA-AI-IOT/tf_trt_models and so far I haven’t gotten any improvements. My steps are described at:
https://devtalk.nvidia.com/default/topic/1047512/tensorrt/no-improvements-from-tensorrt-on-nvidia-ai-iot-tf_trt_models-/
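For what it’s worth, the tf_trt_models path boils down to a single conversion call like the one below (a sketch only, assuming TensorFlow 1.x with the contrib TensorRT bridge; `frozen_graph` and `output_names` are placeholders for the model’s frozen GraphDef and output node names). When TensorRT “doesn’t optimize anything”, a common cause is that unsupported ops fragment the graph so that no subgraph meets `minimum_segment_size`:

```python
# Sketch of the TF-TRT conversion used by NVIDIA-AI-IOT/tf_trt_models
# (assumes TensorFlow 1.x; frozen_graph / output_names are placeholders).
import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,        # frozen GraphDef of the SSD model
    outputs=output_names,                # e.g. boxes/scores/classes tensors
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode="FP16",               # FP16 helps significantly on TX2
    minimum_segment_size=3,              # smaller -> more nodes offloaded
)

# Count how much of the graph TensorRT actually took over:
n_trt = len([n for n in trt_graph.node if n.op == "TRTEngineOp"])
print("TensorRT engine nodes:", n_trt)
```

If `n_trt` comes out as 0, TensorRT did not accelerate anything and the graph runs as plain TensorFlow, which would explain seeing no speedup.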

Indeed, AFAIR I’ve read a few comments here and there that the Caffe implementation seems to perform better on the TX2 compared to TensorFlow.

I use the config file from the TensorRT example => <tensorrt_home>/samples/sampleUffSSD/config.py

with the command `convert-to-uff tensorflow --input-file net.pb -O NMS -p config.py`
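For anyone else following this thread: that config.py is essentially a graphsurgeon script that swaps the TensorFlow SSD pre/postprocessing subgraphs for TensorRT plugin nodes before UFF export. Roughly, it looks like the following (a sketch from memory of the sampleUffSSD layout, not the exact shipped file; the real one passes many more parameters to the plugin nodes):

```python
# Rough shape of <tensorrt_home>/samples/sampleUffSSD/config.py
# (a sketch, not the exact shipped file).
import graphsurgeon as gs
import tensorflow as tf

# Static input replacing the TF placeholder (NCHW, 300x300)
Input = gs.create_node("Input", op="Placeholder",
                       dtype=tf.float32, shape=[1, 3, 300, 300])

# TensorRT plugin nodes that replace the TF anchor-generation and
# postprocessing subgraphs
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT")
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT")

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "image_tensor": Input,
}

def preprocess(dynamic_graph):
    # Collapse whole TF namespaces into the single plugin nodes above;
    # NMS then becomes the graph output named in "-O NMS"
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
```

Note that the shipped sample targets a specific SSD model, so adapting it to MobileNet v2 SSD may require adjusting the namespace map and plugin parameters.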

env:

TensorRT 4

TensorFlow 1.12.0

Thanks, I’ll try the Caffe model.