Hello, I have a Jetson Nano and want to run an object detection model trained with the TensorFlow Object Detection API on it. By default, the frozen TensorFlow model runs with very high latency (MobileNet V2: ~134 ms per image). I have tried running the MobileNet V2 model provided by @dusty_nv by following this topic: https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/post/5327974. However, when I tried to convert my own model to UFF, I encountered the following error:
ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
CUDA Version: 10.0
TensorRT version: 184.108.40.206
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
I have been stuck on this conversion for a long time, so any help is highly appreciated. Also, will an output plugin for FPN be provided soon?
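For context, my conversion follows the sampleUffSSD approach: a graph-surgeon preprocessing config passed to convert-to-uff that collapses the TensorFlow SSD pre/postprocessing namespaces into TensorRT plugin nodes (which is also what normally removes the BoxPredictor Reshape pattern the UFF parser rejects). This is only a sketch adapted from TensorRT's sampleUffSSD config.py; plugin parameters such as numClasses, featureMapShapes, and the input shape are placeholders that must match the model's pipeline.config:

```python
# config.py -- graph-surgeon preprocessing for convert-to-uff
# (sketch based on TensorRT's sampleUffSSD; all plugin parameters below
#  are assumptions and must match your pipeline.config)
import graphsurgeon as gs
import tensorflow as tf

# Replace the TF preprocessing subgraph with a plain NCHW input placeholder.
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])  # assumption: 300x300 SSD input

# Anchor generation mapped onto the GridAnchor_TRT plugin.
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1],  # assumption: 300x300 MobileNet V2 SSD
    numLayers=6)

# TF postprocessing mapped onto the NMS_TRT plugin.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0,
    backgroundLabelId=0, confidenceThreshold=1e-8,
    nmsThreshold=0.6, topK=100, keepTopK=100,
    numClasses=91,  # assumption: COCO label count
    inputOrder=[0, 2, 1], confSigmoid=1, isNormalized=1)

concat_priorbox = gs.create_node("concat_priorbox",
    op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc",
    op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf",
    op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the TF namespaces into the plugin nodes above, then drop
    # the original graph outputs so NMS becomes the sole output.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
```

Invoked as: `convert-to-uff frozen_inference_graph.pb -O NMS -p config.py`. My understanding is that if the namespace names here don't match the exported graph (they differ between Object Detection API versions), the Postprocessor subgraph is left intact and the parser then trips over the BoxPredictor Reshape nodes.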