Visualize TF-TRT optimized model using Tensorboard

Description

We are trying to visualize a TF-TRT optimized model using TensorBoard and haven't had any luck. The code for the model conversion and for saving the graph is below. The official NVIDIA documentation says to use tf.summary.FileWriter('./tensorboard_events', sess.graph), which is deprecated in TF2, so we tried tf.summary.create_file_writer and tf.summary.trace_on instead. When the logs were loaded into TensorBoard, it reported "Graph visualization failed".

Environment

TensorRT Version :
GPU Type : Tesla T4
Nvidia Driver Version : 450.80.02
CUDA Version : 11.0
Operating System + Version : AWS Sagemaker instance ml.g4dn.xlarge
Python Version (if applicable) : 3.7
TensorFlow Version (if applicable) : 2.5

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

import logging
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def convert_to_trt_graph_and_save(precision_mode='float32', input_saved_model_dir='resnet_v2_152_saved_model', calibration_data=''):

    # Map the requested precision to the TF-TRT enum and pick a suffix
    # for the output directory
    if precision_mode == 'float32':
        precision_mode = trt.TrtPrecisionMode.FP32
        converted_save_suffix = '_TFTRT_FP321'
    elif precision_mode == 'float16':
        precision_mode = trt.TrtPrecisionMode.FP16
        converted_save_suffix = '_TFTRT_FP161'
    elif precision_mode == 'int8':
        precision_mode = trt.TrtPrecisionMode.INT8
        converted_save_suffix = '_TFTRT_INT81'

    output_saved_model_dir = input_saved_model_dir + converted_save_suffix

    conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        precision_mode=precision_mode, 
        max_workspace_size_bytes=8000000000
    )

    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_saved_model_dir,
        conversion_params=conversion_params
    )

    # A simple generator that yields the calibration data
    def calibration_input_fn():
        yield (calibration_data, )
            
    if precision_mode == trt.TrtPrecisionMode.INT8:
        # When performing INT8 optimization, we must pass a calibration function to convert
        converter.convert(calibration_input_fn=calibration_input_fn)

    else:
        converter.convert()
    
    converter.save(output_saved_model_dir=output_saved_model_dir)
    print('Complete')

logdir = 'path to log dir'
writer = tf.summary.create_file_writer(logdir)
tf.summary.trace_on(graph=True, profiler=True)
batched_input = ''
with writer.as_default():
    output = convert_to_trt_graph_and_save(precision_mode='float32', input_saved_model_dir='model', calibration_data=batched_input)
with writer.as_default():
    tf.summary.trace_export(
      name="my_func_trace",
      step=0,
      profiler_outdir=logdir)

Hi, please refer to the link below for performing inference in INT8:
https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleINT8/README.md

Thanks!

Hi, we tried the INT8 optimization along with create_file_writer and trace_export to capture the graph of the optimized model, and when we tried to load the logs in TensorBoard, it gave the following error. We also tried optimizing the model for float32 and float16, and those runs likewise failed to capture the graph using the code above. In every case the model itself was optimized successfully, but TensorBoard failed to load the logs. We have attached the logs written during the INT8 optimization for your reference.

### No dashboards are active for the current data set.

Probable causes:

* You haven’t written any data to your event files.
* TensorBoard can’t find your event files.


logs_int8.zip (164.2 KB)

Hello @dhivya.jayaraman,

We are looking into this issue; please allow us some time.

Thank you.

@dhivya.jayaraman,

We took a look at the issue, and the crux of the matter is that one needs to perform a forward pass with the model in order to visualize the graph. Here is a script that performs just the TensorBoard visualization, given a model that has already been converted with TF-TRT.
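A minimal sketch of the idea follows (the paths, signature name, and (1, 224, 224, 3) input shape are placeholders to adjust for your model):

import tensorflow as tf

# Placeholder paths; point these at your converted model and log directory.
saved_model_dir = 'resnet_v2_152_saved_model_TFTRT_FP321'
logdir = './tensorboard_events'

# Load the TF-TRT converted SavedModel and grab its serving signature.
model = tf.saved_model.load(saved_model_dir)
infer = model.signatures['serving_default']

# Wrap the signature in a tf.function so the trace machinery records its graph.
# Depending on your signature, the input may need to be passed by name.
@tf.function
def forward(x):
    return infer(x)

# Dummy input; the shape is an assumption for a ResNet-style model.
dummy_input = tf.zeros((1, 224, 224, 3), dtype=tf.float32)

writer = tf.summary.create_file_writer(logdir)
tf.summary.trace_on(graph=True)

# The forward pass is what actually records the graph for TensorBoard.
forward(dummy_input)

with writer.as_default():
    tf.summary.trace_export(name='tftrt_graph', step=0)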

Thank you.

@spolisetty Thank you very much for the response. We followed your suggestion and were able to generate the graph, but noticed that we weren't able to see an optimizer node in certain layers, as shown in the attached image (optimized_model_with_no_layer_replacement.png). As you can see in the images, the upsampling_2d layer has the optimizer node. According to the official NVIDIA documentation, the upsampling_2d layer is not supported, which makes us wonder whether this is the optimizer we should be looking for in a TensorRT optimized model.

We also noticed that not all of our conv_2d layers have this optimizer. As shown in the image (device_color.png), only the GPU-supported layers have the optimizer node in them. Is there any reason why the other conv_2d layers aren't supported by the GPU?

We found an issue from 2018 in which the TensorRT optimizer layers show up as nodes named my_trt_op, which wasn't found in any of our layers. We have also attached the logs that were generated while running tf_trt_model_visualize.py for our model, for your reference.
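As a side note, here is a minimal sketch we can use to check the converted SavedModel for TRT engine nodes, which in TF2-era TF-TRT are named TRTEngineOp rather than my_trt_op (the path is a placeholder):

import tensorflow as tf

# Placeholder path to a TF-TRT converted SavedModel.
saved_model_dir = 'resnet_v2_152_saved_model_TFTRT_FP321'

model = tf.saved_model.load(saved_model_dir)
graph_def = model.signatures['serving_default'].graph.as_graph_def()

# TF-TRT places converted segments in TRTEngineOp nodes; they may live in
# the top-level graph or in the graph's function library.
count = sum(1 for node in graph_def.node if node.op == 'TRTEngineOp')
for func in graph_def.library.function:
    count += sum(1 for node in func.node_def if node.op == 'TRTEngineOp')

print('TRTEngineOp nodes found:', count)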

@dhivya.jayaraman,

Please allow us some time to get back on this.

Hi @dhivya.jayaraman,

As you can see in the images, the upsampling_2d layer has the optimizer node. According to the official NVIDIA documentation, the upsampling_2d layer is not supported, which makes us wonder whether this is the optimizer we should be looking for in a TensorRT optimized model.

It's a layout optimizer, not an optimizer (that is just the node's name). It's designed to apply a transposition between NCHW and NHWC layouts to improve compute speed. It has nothing to do with TF-TRT.
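If you want to confirm this, one quick check is to disable TensorFlow's grappler layout optimizer and verify that those nodes disappear from the traced graph; a minimal sketch:

import tensorflow as tf

# Disable the grappler layout optimizer; the NCHW/NHWC transpose nodes
# should then no longer be inserted into the optimized graph.
tf.config.optimizer.set_experimental_options({'layout_optimizer': False})
print(tf.config.optimizer.get_experimental_options())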

We also noticed that not all our conv_2d layers have this optimizer.

Once again, this optimizer has nothing to do with TF-TRT; it is actually introduced by TensorFlow itself.

Is there any reason why the other conv_2d layers aren't supported by the GPU?

Conv2D is supported on GPU.
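If you want to verify where each op actually executes, a minimal sketch using TensorFlow's device placement logging:

import tensorflow as tf

# Enable device placement logging before any ops execute; TensorFlow will
# then print the device (CPU or GPU) each operation is assigned to.
tf.debugging.set_log_device_placement(True)

# Example: this matmul's placement will be logged when it runs.
a = tf.random.uniform((2, 3))
b = tf.random.uniform((3, 2))
print(tf.matmul(a, b))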

As this looks to be more a question of understanding TensorFlow's graph optimizations, we recommend going through the related docs.

Thank you.