TF Faster R-CNN on TX2

Hi, I am trying to run a Faster R-CNN model built with TensorFlow. These are my observations from deploying the model on a TX2 board:

1. It works with TensorFlow-CPU, or when the tf.where operation is moved to the GPU (a sketch of the device pinning follows this list).
2. It does not work with TensorFlow-GPU; it fails with a memory issue.
3. Splitting the graph into sub-graphs does not work either.
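
For reference, here is a minimal TF 1.x sketch of the two knobs involved in observations 1 and 2: pinning tf.where to a specific device, and relaxing TensorFlow's GPU allocator. The allocator option is a common first attempt at Jetson memory failures, an assumption rather than a confirmed fix here, and the scores tensor is a toy input:

```python
import tensorflow as tf  # TF 1.x API

# The TX2 shares its memory between CPU and GPU, and TF 1.x reserves
# nearly all GPU memory up front by default, so allocating on demand is
# a common first step when TF-GPU fails with memory errors on Jetson.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand

# Pinning a single op such as tf.where to one device via tf.device:
scores = tf.constant([0.1, 0.9, 0.7])         # toy input for illustration
with tf.device('/cpu:0'):                     # or '/gpu:0'
    keep = tf.where(tf.greater(scores, 0.5))  # indices of scores > 0.5

with tf.Session(config=config) as sess:
    print(sess.run(keep))                     # -> [[1] [2]]
```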

Can you share insights on how to run Faster R-CNN with an Inception/ResNet backbone on a TX2?

Reply from a similar post on the Jetson forum:

[i]We recommend using TensorRT for deep models on Jetson.

In TensorRT, we provide an fp16 mode which can cut the required memory in half.
This allows users to run deeper models on a memory-limited device such as the TX2.

Please find more details about fp16 mode in our documentation:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#googlenet_sample

You can check our supported layers in detail here:
UFF parser: http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#tfops
TensorRT engine: http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#layers[/i]
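
Following up on the fp16 suggestion, here is a minimal sketch of building an fp16 engine from a UFF file with the TensorRT 4/5-era Python API (the file name, input/output names, and input shape are placeholders for whatever your exported graph uses):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()

# Parse a UFF graph; all names and the CHW shape below are placeholders.
parser = trt.UffParser()
parser.register_input("image_tensor", (3, 600, 600))
parser.register_output("detection_boxes")
parser.parse("model.uff", network)

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 28   # 256 MB of build-time scratch space
if builder.platform_has_fast_fp16:     # true on the TX2's Pascal GPU
    builder.fp16_mode = True           # halves weight/activation storage

engine = builder.build_cuda_engine(network)
```

fp16_mode is the switch the quoted reply refers to; the rest is standard UFF-parser boilerplate.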

When I looked into TensorRT, these were my observations:

1. TensorRT supports the SSD architecture.
2. TensorRT supports a C++ sample of Faster R-CNN built from Caffe.
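
That C++ Caffe sample does not directly help with a TensorFlow graph, so the first conversion step would be the UFF exporter, which also reports which ops it cannot map. A minimal sketch, where the frozen-graph path and output node name are placeholders:

```python
import uff

# Convert a frozen TF graph to UFF. Ops without a UFF mapping (commonly
# the NonMaxSuppression/proposal logic in detection graphs) are reported
# as warnings during conversion -- those are the layers that would need
# custom plugins on the TensorRT side.
uff.from_tensorflow_frozen_model(
    "frozen_frcnn.pb",                 # placeholder frozen-graph path
    output_nodes=["detection_boxes"],  # placeholder output node name
    output_filename="frcnn.uff")
```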

Can you please share links or references for using TensorRT to run Faster R-CNN + Inception V2 models?