run ssd_mobilenet_v2_quantized_300x300_coco

Hi
I want to run ssd_mobilenet_v2_quantized_300x300_coco on Jetson Nano to compare the performance with Google TPU.
When I download this model from the TensorFlow detection model zoo, it contains the following files:
* model.ckpt.data-00000-of-00001
* model.ckpt.index
* model.ckpt.meta
* pipeline.config
* tflite_graph.pb
* tflite_graph.pbtxt
How can I apply this detection model to a .mp4 video on Jetson Nano?
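The processing loop I have in mind looks roughly like this (in a real script the frames would come from cv2.VideoCapture and detect() would run the converted model; both are placeholders here so the structure is clear):

```python
# Rough sketch of applying a detector to a .mp4, frame by frame.
# In a real script, frames would come from cv2.VideoCapture("input.mp4")
# and detect() would run the converted model; both are stand-ins here.

def read_frames(path, limit=5):
    """Placeholder frame source: yields dummy 'frames' instead of
    decoding the video, so the loop is runnable anywhere."""
    for i in range(limit):
        yield {"frame_id": i, "data": None}

def detect(frame):
    """Placeholder for a single inference call on one frame."""
    return []  # a real detector would return (class, score, box) tuples

results = []
for frame in read_frames("input.mp4"):
    detections = detect(frame)
    results.append((frame["frame_id"], detections))

print(len(results))  # number of frames processed
```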

Hi,

Please check this tutorial:
https://github.com/NVIDIA-AI-IOT/tf_trt_models

Thanks.

Hi,

I followed the tutorial and managed to run mobilenet_v1_coco. However, the results were very disappointing: 100-200 ms per inference. Comparing the resulting program to the uff_ssd sample and the C++ sample used for benchmarking, it seems a completely different approach was used in those. Alas, uff_ssd is very specific to ssd_inception, and the C++ code is difficult to convert to Python.
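For reference, this is roughly how I measured the per-inference time (the warm-up and run counts are arbitrary choices of mine; the warm-up matters because TF-TRT builds its engines lazily, so the first calls are much slower):

```python
import time

def benchmark(infer, n_warmup=10, n_runs=50):
    """Time a callable and report mean latency (ms) and the implied FPS.

    `infer` stands in for a single inference call; warm-up runs are
    excluded because the first TF-TRT calls trigger engine building.
    """
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    mean_ms = elapsed / n_runs * 1000.0
    return mean_ms, 1000.0 / mean_ms

# Example with a dummy 10 ms "inference":
# mean_ms, fps = benchmark(lambda: time.sleep(0.01))
```

At 100-200 ms per inference this works out to only 5-10 FPS, which is why the benchmark numbers surprised me.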

Would it be possible to provide, or point us to, a sample that performs like the benchmark? It looks like the secret is converting to UFF and using a plugin to speed up some layers that are not natively supported by TensorRT. Please correct me if this assertion is wrong.

The whole point of benchmarks is to show performance that is reproducible with other models. Currently this is not really possible, and it can lead people to believe they can get 30 FPS when they cannot.

Hi,

Our benchmark uses pure TensorRT (the C++ version).
The one you used is TF-TRT, which introduces some latency from TensorFlow.

You will get much better performance with pure TensorRT, but the plugin is required.
I'm not familiar with the detailed layers in mobilenet_v1_coco,
but we have already implemented several plugins that are used in the SSD model.

Here is a Python version of the sample code: https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#uff_ssd
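The pre-processing for the 300x300 SSD input can be sketched like this (the [-1, 1] scaling and CHW layout are assumptions based on common UFF-based SSD pipelines; check the uff_ssd sample for the exact constants your model needs):

```python
import numpy as np

def preprocess(frame):
    """Scale a 300x300x3 uint8 frame to [-1, 1] and flatten in CHW
    order, the layout UFF-based SSD engines typically consume.
    Note: the normalization constants here are an assumption;
    verify them against the uff_ssd sample for your model."""
    x = frame.astype(np.float32)
    x = (2.0 / 255.0) * x - 1.0        # uint8 [0, 255] -> float [-1, 1]
    x = np.transpose(x, (2, 0, 1))     # HWC -> CHW
    return np.ascontiguousarray(x).ravel()

frame = np.full((300, 300, 3), 255, dtype=np.uint8)
inp = preprocess(frame)
```

The flattened array can then be copied into the engine's input binding before running inference.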
We also have some benchmark results here: https://devblogs.nvidia.com/jetson-nano-ai-computing/

Thanks.