Conversion of ssd_inception_v2_coco in current form


I would like to convert my TensorFlow models to TensorRT, for example ssd_inception_v2_coco_2017 from the Object Detection API model zoo. This works thanks to the helpful demo provided with the toolkit.

Now I want to convert my own trained version of the same network. This does not work with the same code, even after adjusting numClasses. In the current TensorFlow version one needs to set

override_base_feature_extractor_hyperparams: true

in pipeline.config while training, an option that does not seem to exist in the version from the model zoo.
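For reference, the relevant fragment of pipeline.config looks roughly like this (field names come from the Object Detection API's proto definitions; the num_classes value is just illustrative):

```
model {
  ssd {
    num_classes: 2  # illustrative value, set to your own class count
    feature_extractor {
      type: "ssd_inception_v2"
      override_base_feature_extractor_hyperparams: true
    }
  }
}
```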

Is this a problem stemming from TensorFlow version incompatibilities?

Is there a more robust way of converting at least the common models from TensorFlow to TensorRT? That would be incredibly helpful and would allow us to use this great library to its full potential.

The recommended method of importing TensorFlow models to TensorRT is TensorFlow with TensorRT (TF-TRT). I am also trying to convert my own trained TensorFlow model to TensorRT, but I found it really tricky because we have little insight into the UFF converter and the TensorRT UFF parser.
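For the UFF route, the core call is uff.from_tensorflow_frozen_model from the uff package that ships with TensorRT's Python bindings. A minimal sketch follows; the paths and output node names are placeholders, and note that SSD graphs additionally need the graphsurgeon preprocessing from the TensorRT sample (to replace unsupported ops such as the NMS postprocessing) before the UFF parser will accept them:

```python
def frozen_graph_to_uff(frozen_pb_path, output_nodes, uff_path):
    """Convert a frozen TensorFlow graph (.pb) to a UFF file.

    Requires the `uff` package that ships with TensorRT; the import is
    done lazily so the function can be defined without it installed.
    """
    import uff  # provided by the TensorRT Python bindings

    # Serializes the converted graph to `uff_path` and also returns
    # the UFF buffer.
    return uff.from_tensorflow_frozen_model(
        frozen_pb_path,
        output_nodes=output_nodes,
        output_filename=uff_path,
    )
```

For a plain classification graph this call alone is often enough; for detection models like ssd_inception_v2 the graph-surgery step is the part that is hard to get right for a custom-trained model.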

I understand that TF-TRT is the recommended method of using TensorFlow models.
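For completeness, on TensorFlow 1.x (the version the Jetson images I tested ship with) the TF-TRT conversion is a single call to create_inference_graph. A hedged sketch, with the graph and output names left as parameters and the workspace size as an assumed value:

```python
def tftrt_optimize(frozen_graph_def, output_names):
    """Rewrite a frozen GraphDef so that TensorRT-compatible subgraphs
    are replaced by TRT engine ops.

    Uses the TF 1.x contrib API; imported lazily so the sketch loads
    without TensorFlow installed.
    """
    import tensorflow.contrib.tensorrt as trt  # TF 1.x TF-TRT module

    return trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_names,
        max_batch_size=1,
        max_workspace_size_bytes=1 << 25,  # assumed: keep workspace small on the Nano
        precision_mode="FP16",
    )
```

The resulting GraphDef is still executed by a TensorFlow session, which is exactly where the extra memory overhead discussed below comes from.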

In my testing I found a huge discrepancy at inference between pure TensorRT and TF-TRT, especially in the amount of RAM used. For ssd_inception_v2 models, TF-TRT on the Jetson Nano uses around 1.5 GB of RAM, while a pure TensorRT engine needs a much more reasonable 500 MB.

Since my use case is somewhat RAM-limited, it would be ideal to have more flexible tools for converting TensorFlow models to TensorRT engines than what currently exists. TensorFlow inference on the GPU seems to add unneeded overhead.