Conversion of MobileNet V2 to TensorRT failing for a model trained with the TensorFlow Object Detection API

Hello, I have a Jetson Nano. I want to run an object detection model trained with the TensorFlow Object Detection API on the Nano. By default, the TensorFlow frozen model runs with very high latency (MobileNet V2 ~134 ms per image). I have tried running the MobileNet V2 model provided by @dusty_nv by following this topic: [url]https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/post/5327974[/url]. When I tried to convert my own model, I encountered the following error:

ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
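
For reference, the conversion step I am running is roughly the following (a minimal sketch using the UFF converter Python API; the file paths, output node name, and config.py preprocessor are placeholders taken from the sampleUffSSD instructions, not necessarily what my graph uses):

# Sketch: frozen graph -> UFF conversion.
# "config.py" is the graphsurgeon preprocessor script that maps unsupported
# TF ops to TensorRT plugin nodes; all names here are placeholders.
import uff

uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",        # frozen graph exported by the TF OD API
    output_nodes=["NMS"],               # output node created by the plugin mapping
    preprocessor="config.py",
    output_filename="ssd_mobilenet_v2.uff")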

Jetson Nano
CUDA Version: 10.0
TensorRT version: 5.0.6.3
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic

I have been stuck on this conversion for a long time, so any help is highly appreciated. Also, will an output plugin for FPN be provided soon?

Hi,

I have double-checked Dusty's comment and it works correctly:
[url]https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/post/5327974/#5327974[/url]

Please note that these steps use pure TensorRT rather than TF-TRT.
Thanks.

Hello,

Thank you for your reply. I have also been able to run the SSD MobileNet V2 (~26 ms per image) provided by him (sorry for not mentioning this earlier). However, I am having trouble converting my own MobileNet trained with the TensorFlow Object Detection API: I am not able to create a TensorRT engine after the model is converted to UFF. I read in a separate thread that the recent protos of the TensorFlow Object Detection API are causing some problems.
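
For completeness, the engine-building step I am attempting looks roughly like this (a sketch using the TensorRT 5 Python API; the input/output node names follow the sampleUffSSD config and may not match every graph):

# Sketch: build a TensorRT engine from the converted UFF file.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28             # modest workspace for the Nano
    builder.max_batch_size = 1
    parser.register_input("Input", (3, 300, 300))    # CHW input declared in config.py
    parser.register_output("MarkOutput_0")           # output auto-marked by the UFF converter
    parser.parse("ssd_mobilenet_v2.uff", network)
    engine = builder.build_cuda_engine(network)      # this is where it fails for my model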

Could you please let me know how to overcome this?

Hi,

Is there any new layer in your model compared to the standard SSD MobileNet V2?
Would you mind sharing the model, or the differences, with us?

Thanks.

Thank you for looking into this.

I will attach my pipeline.config file, a set of weights, the frozen TF model, and the converted UFF model.

Avishek Mobilenet V2 - Google Drive

I have figured out this particular issue. When exporting the model to a frozen graph, I specified the batch size as -1. In the newer TensorFlow Object Detection API, the box coder is built through the research/object_detection/predictors/box_head.py file, which has the following line (line 183):

box_encodings = tf.reshape(box_encodings, [batch_size, -1, 1, self._box_code_size])

Since I was providing the batch size as -1, the reshape ends up with two -1 dimensions. However, after specifying a positive batch size (see the export sketch at the end of this post), I am getting yet another error:

ERROR: Parameter check failed at: …/builder/Layers.h::setAxis::315, condition: axis>=0
ERROR: Concatenate/concat: all concat input tensors must have the same dimensions except on the concatenation axis

Could you please look into this?
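
For reference, this is roughly how I re-exported the graph with a fixed batch size to get past the first error (a sketch assuming the exporter module of the TF Object Detection API; the paths, checkpoint name, and 300x300 input size are placeholders, and it mirrors what export_inference_graph.py does with --input_shape 1,300,300,3):

# Sketch: re-export the frozen graph with an explicit batch size of 1 so that
# BoxPredictor_0/Reshape only contains a single -1 dimension.
import tensorflow as tf
from google.protobuf import text_format
from object_detection import exporter
from object_detection.protos import pipeline_pb2

# Load the training pipeline config (path is a placeholder).
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.gfile.GFile("pipeline.config", "r") as f:
    text_format.Merge(f.read(), pipeline_config)

exporter.export_inference_graph(
    input_type="image_tensor",
    pipeline_config=pipeline_config,
    trained_checkpoint_prefix="model.ckpt-50000",   # placeholder checkpoint
    output_directory="export_fixed_batch",
    input_shape=[1, 300, 300, 3])                   # fixed batch size instead of -1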

Hi,

Please check this comment for some information:
[url]https://devtalk.nvidia.com/default/topic/1050465/jetson-nano/how-to-write-config-py-for-converting-ssd-mobilenetv2-to-uff-format/[/url]
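
The core of that thread is a graphsurgeon config.py that replaces the unsupported pre/post-processing ops with TensorRT plugin nodes before the UFF conversion; the concat-related errors above usually mean those mappings or parameters do not match the exported graph. A rough sketch is below (the anchor/NMS parameters, numClasses, and inputOrder depend on your pipeline.config and exported graph, so treat the values as placeholders):

# Sketch of a config.py preprocessor for SSD MobileNet V2 -> UFF conversion.
import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_plugin_node(name="Input", op="Placeholder",
                              dtype=tf.float32, shape=[1, 3, 300, 300])

PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
                                 minSize=0.2, maxSize=0.95,
                                 aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
                                 variance=[0.1, 0.1, 0.2, 0.2],
                                 featureMapShapes=[19, 10, 5, 3, 2, 1],
                                 numLayers=6)

NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                            shareLocation=1, varianceEncodedInTarget=0,
                            backgroundLabelId=0, confidenceThreshold=1e-8,
                            nmsThreshold=0.6, topK=100, keepTopK=100,
                            numClasses=91,            # 90 COCO classes + background; change for a custom model
                            inputOrder=[0, 2, 1],     # depends on the exported graph
                            confSigmoid=1, isNormalized=1)

concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
                                       dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
                                        dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Replace the unsupported namespaces/ops with the plugin nodes defined above.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop the original graph outputs so the NMS plugin becomes the output node.
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)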

Thanks.

Thank you so much. :-)

Hi @avishek.alex15

I am trying to optimise an SSD MobileNet V2 model trained on a custom dataset.

Which commit of the Object Detection API (tensorflow/models) are you using to generate the frozen graph?

I am at commit e21dcdd03250900da35b267f34010efd738d93cf

I am getting the following error when I try to optimise the model:

[TensorRT] ERROR: UFFParser: Validator error: Cast: Unsupported operation _Cast

Please help.