How to Visualize Optimized Graph in TensorBoard using optimized_model.engine?

Dear All,

I am able to run the TensorRT sample code given in the examples directory, uff_mnist.py. I saved my optimized engine using trt.utils.write_engine_to_file(). I used a TensorFlow protobuf (.pb) model file to generate the optimized TensorRT engine. I am getting accurate results and roughly 100x less inference time with TensorRT. Now I want to view this optimized graph in TensorBoard. I need help with this; I searched but did not find any reference. How can I visualize the graph from optimized_model.engine using TensorBoard or any other library? Thanks in advance.

Hello,
Did you get any information about this issue?
Or, perhaps, did you solve it yourself?

If so, it would be great if you could share the solution or any good reference.

Thanks,

Would love to know too.

Hello,

For generating the TensorBoard visualization, I use tf.import_graph_def to import the optimized graph_def into a tf.Graph, then create a session with that graph, and write the output for TensorBoard with tf.summary.FileWriter.
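A minimal sketch of that flow (TF 1.x API) is below. The file name "optimized_model.pb" and the log directory "./tb_logs" are placeholders for this example, not values from the original post; a session is not strictly required if you pass the graph to the FileWriter directly.

    import tensorflow as tf

    # Load the frozen/optimized GraphDef from disk.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("optimized_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # Import it into a fresh tf.Graph.
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    # Write the graph for TensorBoard; view it with: tensorboard --logdir ./tb_logs
    writer = tf.summary.FileWriter("./tb_logs", graph=graph)
    writer.close()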

I didn’t see the uff_mnist.py file in our latest TensorRT container. Please let me know where you found it; it would be good if I could look into the code.

Thank you.

Hello NVESJ,
Thanks for your answer.

If I understand correctly, the method you described above applies to the unoptimized TensorFlow graph.
The question is how it is possible to visualize the TensorRT-optimized graph after it has been generated by the UFF parser as a CUDA engine using the following C++ APIs:
nvuffparser::IUffParser::m_parser,
nvinfer1::IBuilder::buildCudaEngine

Is there a way to visualize this optimized graph?

Regards,

Hello,

No. Currently, the TensorRT APIs do not support recovering a TensorFlow graph from a plan file.

However, as per the documentation, the recommended approach is to use the TF-TRT integration to convert your TensorFlow network. In that case, you still end up with a TensorFlow graph.

Steps:

  1. Use the trt.create_inference_graph() method, which returns an optimized TensorFlow graph.
  2. Then generate the TensorBoard visualization with the optimized graph as described above (a sketch follows this list).
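As a rough illustration of those two steps, here is a sketch using the TF 1.x contrib TF-TRT module. The frozen graph file "frozen_model.pb", the output node name "logits", and the build settings (batch size, workspace, precision) are assumptions for the example, not values from this thread:

    import tensorflow as tf
    from tensorflow.contrib import tensorrt as trt

    # Load the frozen TensorFlow graph.
    frozen_graph = tf.GraphDef()
    with tf.gfile.GFile("frozen_model.pb", "rb") as f:
        frozen_graph.ParseFromString(f.read())

    # TF-TRT replaces supported subgraphs with TRTEngineOp nodes but still
    # returns a TensorFlow GraphDef, so it remains visualizable.
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=["logits"],
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode="FP16")

    # Export for TensorBoard exactly as in the earlier snippet.
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(trt_graph, name="")
    tf.summary.FileWriter("./tb_logs_trt", graph=graph).close()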

Thank you.

Hello NVESJ,
Thanks for your answer.

I understand that, for now, there is no option to visualize the plan file generated by nvinfer1::IBuilder::buildCudaEngine.

These are my questions:

  1. Is there a roadmap to support this in a future release?
  2. How close is the plan file to the graph (model) generated by the trt.create_inference_graph service?
  3. Can the graph generated by the trt.create_inference_graph service be used with the TensorRT C++ APIs? How?

My purpose is to be able to debug the model/plan/graph that was generated by TensorRT.
I am working with the TensorRT C++ API, not the TensorRT Python API, and using the Jetson Xavier for inference.
I am also able to run inference on the original TensorFlow model using the TensorFlow C++ APIs without any involvement of TensorRT.
The TensorFlow C++ inference works well, while the TensorRT C++ inference does not.
I only have the TensorFlow frozen graph, without any access to its source code.

Regards,

I have the same issue with visualizing a .plan file and checking its structure for debugging. I use trtexec to convert an ONNX file to a plan (TensorRT engine file).
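While the original graph cannot be recovered from the plan, the engine's I/O bindings can at least be listed as a sanity check. A minimal sketch with the TensorRT Python API (binding calls as in TensorRT 7/8; the file name "model.plan" is a placeholder):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(TRT_LOGGER)

    # Deserialize the plan produced by trtexec (or any serialized engine).
    with open("model.plan", "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())

    # Print input/output bindings with their shapes.
    for i in range(engine.num_bindings):
        kind = "input" if engine.binding_is_input(i) else "output"
        print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))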