Trying to go from a TensorFlow Faster R-CNN frozen model to a TensorRT engine

I have the faster_rcnn_resnet101_coco_2018_01_28 frozen model for TensorFlow (from the Google object detection model zoo), and I'm attempting to optimize it with TensorRT 4. I know there is a C++ example using Caffe and a couple of modified layers, but it is not much help for TensorFlow and Python.
The code I'm using is here:

import tensorflow as tf
import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.67)

# Names of the graph's output nodes -- this is what question 1 is about.
outputs = ["output_node_name"]  # placeholder

# Convert the frozen TensorFlow graph to UFF.
uff_model = uff.from_tensorflow_frozen_model('frozen_inference_graph.pb', outputs)

# Parse the UFF model and build a TensorRT engine.
parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 448, 448), 0)
parser.register_output(outputs[0])
engine = trt.utils.uff_to_trt_engine(
    G_LOGGER, uff_model, parser, 1, 4000000000, trt.infer.DataType.HALF)

I have two questions:

  1. Does anyone know how to obtain the output layer name(s)? The model is from the Google detection API, and I do not know what the output layer is called.

  2. Does TensorRT support all the layers in faster_rcnn_resnet101_coco_2018_01_28? If not, is there an easy workaround?

Thanks!

same problem…

1.) The following code prints the names of all operations in the TensorFlow graph; the output nodes are typically among the last ones listed.

for op in sess.graph.get_operations():
    print(op.name)
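A more targeted approach than eyeballing every op is to look for nodes that no other node consumes — those are the candidates for output nodes. Here is a minimal sketch of that idea using plain dicts; the node names are hypothetical, and with a real frozen graph you would build the same mapping from `graph_def.node`, using each node's `name` and `input` fields:

```python
# Hypothetical graph: node name -> list of input node names.
# With a real TF GraphDef you would build this from graph_def.node.
graph = {
    "image_tensor": [],
    "resnet/conv1": ["image_tensor"],
    "detection_boxes": ["resnet/conv1"],
    "detection_scores": ["resnet/conv1"],
}

# A node consumed by no other node is an output candidate.
consumed = {inp for inputs in graph.values() for inp in inputs}
outputs = sorted(name for name in graph if name not in consumed)
print(outputs)  # ['detection_boxes', 'detection_scores']
```

The same "no consumers" filter applied to a real GraphDef usually narrows hundreds of ops down to a handful of plausible output names.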

2.) TensorRT does not support dropout layers. In my case the dropout layers produced an error; after removing them, everything worked fine.
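Removing dropout for inference amounts to graph surgery: delete the dropout node and rewire its consumers to the dropout's input (TensorFlow also ships a graph_transforms tool with a remove_nodes transform for this kind of edit). A minimal sketch of the rewiring step with plain dicts — the node names are hypothetical, and a real implementation would operate on `graph_def.node`:

```python
def remove_node(graph, name):
    """Remove a node and rewire its consumers to its first input.
    Sketch only: assumes the removed node has one meaningful input."""
    replacement = graph[name][0]
    del graph[name]
    for inputs in graph.values():
        for i, inp in enumerate(inputs):
            if inp == name:
                inputs[i] = replacement
    return graph

# Hypothetical chain: fc1 -> dropout -> logits
graph = {
    "fc1": ["input"],
    "dropout": ["fc1"],
    "logits": ["dropout"],
}
remove_node(graph, "dropout")
print(graph["logits"])  # ['fc1'] -- logits now reads fc1 directly
```

This is safe at inference time because dropout is an identity function when not training.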