TF-TRT questions on output_node_name and inference

I am following this TF-TRT guide: https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html#using-metagraph-checkpoint. I have a few questions.

  1. What does sess.run(output_node) – the very last line of the example shown in the guide above – do that is different from regular TensorFlow's sess.run? If I want to run inference with TF-TRT with the model I successfully loaded, what do I do from the following example? (I sketched at the end of this post what I imagine inference would look like.)

  2. I need to supply output_node_names as an argument to the function below:

frozen_graph = tf.graph_util.convert_variables_to_constants(
    sess,
    tf.get_default_graph().as_graph_def(),
    output_node_names=["your_outputs"])

How do I find out the output node names that are used? I am using both Estimators and the low-level TensorFlow API.

  3. How is TF-TRT different from TensorRT?
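
For question 1, to make it concrete, the following is roughly what I imagine real inference would look like after the TF-TRT conversion (TF 1.x; the graph path, tensor names, and input shape are placeholders for my model) – is this the right approach?

import numpy as np
import tensorflow as tf

# Placeholders for my own model -- substitute the real values.
GRAPH_PATH = "trt_graph.pb"        # GraphDef saved after TF-TRT conversion
INPUT_NAME = "input:0"             # name of the input tensor
OUTPUT_NAME = "your_outputs:0"     # name of the output tensor

# Load the converted frozen graph from disk.
graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    input_tensor = graph.get_tensor_by_name(INPUT_NAME)
    output_tensor = graph.get_tensor_by_name(OUTPUT_NAME)

    with tf.Session(graph=graph) as sess:
        # Ordinary sess.run: feed real data, fetch the output tensor.
        batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
        result = sess.run(output_tensor, feed_dict={input_tensor: batch})
        print(result.shape)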

Hello,

  1. To my understanding, output_node is the argument for the sess.run method; it indicates which operations in the graph to run.

  2. You can print the list of nodes (see the snippet after this list) or use TensorBoard to see the node names.

  3. TF-TRT converts as many operations as it supports, whereas TensorRT supports plugins to extend its compatibility to additional operations.
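
For example, one way to print the node names is to iterate over the GraphDef once the model has been loaded into the default graph (a TF 1.x sketch):

import tensorflow as tf

# Assumes the model has already been restored into the default graph,
# e.g. via tf.train.import_meta_graph(...) + saver.restore(sess, checkpoint).
graph_def = tf.get_default_graph().as_graph_def()
for node in graph_def.node:
    print(node.name, node.op)

# The output nodes are usually the last printed ops with no consumers,
# e.g. a Softmax or BiasAdd at the end of the network.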

Thanks.

Hello NVESJ,

thanks for your answers.

I was hoping to ask a few follow-up questions:

So it is the name of the very last node in the graph, am I correct?

I know that you can build custom layers for operations that are not natively supported by TensorRT. Is that the functionality you refer to as “supports plugins to extend its compatibility”?

Are TF-TRT and TensorRT two different programs? From my understanding, they seem to be based on two different code repositories: one is under TensorFlow's contrib directory, while TensorRT seems to come from its own independent repository. Also, TF-TRT seems to require no additional installation beyond TensorFlow itself, whereas TensorRT requires a more involved installation.
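
For instance, if I understand the guide correctly, the conversion step only needs the TensorFlow package (TF 1.x contrib API; the parameter values below are just placeholders), whereas standalone TensorRT needs its own libraries installed:

import tensorflow.contrib.tensorrt as trt

# frozen_graph is the GraphDef from convert_variables_to_constants above;
# batch size, workspace size and precision are placeholder values.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["your_outputs"],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16")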