TensorFlow Slim models - Inception ResNet V2 using TensorRT


I am trying to use the TensorRT engine to accelerate my Inception ResNet V2 model on the TX2. However, I am not able to create an engine from a frozen graph (.pb file) using the TensorRT 5.0 / CUDA 10 Python API because of the limitations on supported operations (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#samplecode3).

An end-to-end example going from .pb to TensorRT exists at https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#samplecode3, but it only demonstrates loading a UFF file and running inference. There is also no example of exporting a Slim .pb model such as Inception ResNet V2.
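For reference, here is roughly the conversion path I am attempting with the TensorRT 5 Python API. This is only a sketch: the input/output node names ("input", "InceptionResnetV2/Logits/Predictions") are guesses for my graph and may not match the actual frozen model.

```python
import tensorrt as trt
import uff

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Convert the frozen TensorFlow graph to UFF in memory.
# Output node name is a guess for an Inception ResNet V2 Slim export.
uff_model = uff.from_tensorflow_frozen_model(
    "inception_resnet_v2_frozen.pb",
    output_nodes=["InceptionResnetV2/Logits/Predictions"])

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    # Register graph I/O (CHW input shape for a 299x299 RGB image).
    parser.register_input("input", (3, 299, 299))
    parser.register_output("InceptionResnetV2/Logits/Predictions")
    parser.parse_buffer(uff_model, network)

    builder.max_workspace_size = 1 << 30  # 1 GiB
    # This step fails when the graph contains ops the UFF parser
    # does not support.
    engine = builder.build_cuda_engine(network)
```

The parse/build step is where the unsupported-operation errors show up for me.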

If a model like Inception ResNet V2 is to be deployed on the TX2 without relying on TensorFlow at runtime, what parameters need to be handled during training so that the exported graph is convertible?

Thank you.