Roadmap for a TensorFlow-to-TensorRT conversion tool

Hi,

I would like to inquire whether a robust tool for converting TensorFlow models to TensorRT is in the works.

First let me give some background on the situation as I perceive it.

I have used both the OpenVINO and TensorRT inference frameworks. They both provide impressive inference speeds and enable applications that would not otherwise be feasible.

One workflow that is much smoother on the OpenVINO side is converting models from other frameworks and building engines from them. OpenVINO's tool for this is called the Model Optimizer. It converts models from TensorFlow, Caffe, ONNX and other frameworks to OpenVINO's intermediate representation, and it ships with plugins for individual layers and extensions that make it possible to convert, for example, SSD detection models out of the box. The resulting user experience is pointing the optimizer at your model and receiving an optimized model ready for inference tasks, roughly as sketched below.
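To make the contrast concrete, here is a minimal sketch of that one-step workflow, assuming a frozen TensorFlow graph. The file paths are placeholders, and the exact Model Optimizer entry point and flags may differ between OpenVINO releases.

```python
import subprocess

# Single call to the Model Optimizer: point it at a frozen TensorFlow graph
# and receive IR files ready for inference. Paths are placeholders.
subprocess.run(
    [
        "mo_tf.py",
        "--input_model", "frozen_inference_graph.pb",
        "--output_dir", "./ir",
    ],
    check=True,
)
```

No per-layer mapping or hand-written configuration is needed for most standard models, which is the experience I am contrasting with below.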

Compared to that, TensorRT lacks some crucial features and usability. TensorRT provides the convert_to_uff tool, which generates a UFF file from which one can then build an engine. For most networks, however, this involves remapping namespaces to certain plugins, collapsing namespaces, and more, which leaves a lot of guesswork on the user's side: it is not clear how to correctly write the config.py one needs to provide (a typical example is sketched below). It does not feel like a stable approach to converting models, and it does not inspire confidence that the next model one trains will convert easily.
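For reference, this is roughly what such a config.py looks like, modeled on the ssd_inception_v2 sample. The plugin names, attribute values, and namespace names here are illustrative placeholders; they have to be matched to the specific graph and the registered TensorRT plugins by hand, which is exactly the guesswork I am describing.

```python
import graphsurgeon as gs

# Plugin nodes that will replace entire TensorFlow namespaces.
# Names, ops and attribute values are illustrative; they must match the
# plugins registered with TensorRT and the graph being converted.
PriorBox = gs.create_plugin_node(
    name="GridAnchor",
    op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
)
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    topK=100,
    keepTopK=100,
    numClasses=91,
)

# Map TensorFlow namespaces onto the plugin nodes defined above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
}

def preprocess(dynamic_graph):
    # Called by the UFF converter before conversion: collapse each mapped
    # namespace into its plugin node so the parser sees single ops.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
```

The converter is then invoked with something like `convert-to-uff frozen_inference_graph.pb -O NMS -p config.py`, and a mismatch in the mapping typically only surfaces later, when building or running the engine.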

The OpenVINO conversion tools seem to operate at a slightly higher level than those of TensorRT. I believe that higher-level tools able to convert all or most standard models would be very beneficial to many users.

So here are my questions:

  • Is a robust toolchain for converting models from major frameworks in the works? If so, when can we expect it to be released?
  • If not, could the TensorRT team provide more conversion examples along with common pitfalls? At the moment the ssd_inception_v2 sample is all we have, and as mentioned above, it does not generalize well.

Thank you