TensorRT view the layers that are converted

Description

I’m trying to convert a saved_model to a TensorRT model using tensorflow.python.compiler.tensorrt.trt_convert.TrtGraphConverterV2, but I’m not able to see the log with details on which layers have been converted or optimized for TensorRT. Also, is there a way to view the layers of the model after it has been converted to a TRT saved_model?

Environment

TensorRT Version:
GPU Type: Tesla T4
Nvidia Driver Version: 450.80.02
CUDA Version: 11.0
Operating System + Version: AWS Sagemaker instance ml.g4dn.xlarge
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 2.5

Relevant Files

from tensorflow.python.compiler.tensorrt import trt_convert as trt

def convert_to_trt_graph_and_save(precision_mode='float32', input_saved_model_dir='resnet_v2_152_saved_model', calibration_data=''):

    # Map the requested precision to the corresponding TF-TRT enum and
    # pick a suffix for the output directory.
    if precision_mode == 'float32':
        precision_mode = trt.TrtPrecisionMode.FP32
        converted_save_suffix = '_TFTRT_FP32'

    elif precision_mode == 'float16':
        precision_mode = trt.TrtPrecisionMode.FP16
        converted_save_suffix = '_TFTRT_FP16'

    elif precision_mode == 'int8':
        precision_mode = trt.TrtPrecisionMode.INT8
        converted_save_suffix = '_TFTRT_INT8'

    else:
        raise ValueError('Unsupported precision_mode: {}'.format(precision_mode))
    output_saved_model_dir = input_saved_model_dir + converted_save_suffix
    
    conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        precision_mode=precision_mode, 
        max_workspace_size_bytes=8000000000
    )

    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_saved_model_dir,
        conversion_params=conversion_params
    )

    
    print('Converting {} to TF-TRT graph precision mode {}...'.format(input_saved_model_dir, precision_mode))
    
    if precision_mode == trt.TrtPrecisionMode.INT8:

        # INT8 optimization requires calibration data; here a simple
        # generator yields it.
        def calibration_input_fn():
            yield (calibration_data, )

        # When performing INT8 optimization, we must pass a calibration function to convert
        converted_func = converter.convert(calibration_input_fn=calibration_input_fn)

    else:
        converted_func = converter.convert()

    # convert() returns the converted function, whose outputs we can inspect
    print(converted_func.structured_outputs)
    print('Saving converted model to {}...'.format(output_saved_model_dir))
    converter.save(output_saved_model_dir=output_saved_model_dir)
    print('Complete')

convert_to_trt_graph_and_save(precision_mode='float32', input_saved_model_dir='savedmodel', calibration_data=batched_input)

Hi @dhivya.jayaraman,

If you are using TF-TRT, TensorRT logs appear as part of the TensorFlow logs, so the verbosity level of the TensorFlow logs controls the verbosity level of the TensorRT logs. Please refer to:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#verbose
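A minimal sketch of raising the verbosity before the conversion runs. The `TF_CPP_VMODULE` module names follow the verbose-logging section of the guide above; treat the exact list as an assumption that may vary across TF versions:

```python
import os

# These environment variables must be set BEFORE importing TensorFlow.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'   # show INFO-level C++ logs

# Per-module verbose logging for the TF-TRT conversion components
# (module names assumed from the TF-TRT user guide's verbose section).
os.environ['TF_CPP_VMODULE'] = ('trt_logger=2,trt_engine_op=2,'
                                'convert_nodes=2,segment=2,convert_graph=2')

# import tensorflow as tf  # import only after the env vars are set
```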

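As for viewing the layers after conversion: converted subgraphs show up as `TRTEngineOp` nodes in the saved graph, so one common approach is to load the converted saved_model and count op types in its graph def. A sketch of that idea (the TensorFlow loading lines are shown as comments and are an assumption for your setup; the counting logic is demonstrated on a small stub):

```python
from collections import Counter

def count_op_types(graph_def):
    """Count each op type in a GraphDef-like object (anything exposing
    .node, where each node has an .op string). With TF-TRT, converted
    subgraphs appear as 'TRTEngineOp' nodes."""
    return Counter(node.op for node in graph_def.node)

# With TensorFlow available (assumed usage, not executed here):
#   model = tf.saved_model.load(output_saved_model_dir)
#   func = model.signatures['serving_default']
#   counts = count_op_types(func.graph.as_graph_def())
#   print(counts['TRTEngineOp'], 'TensorRT engine ops')

# Minimal stub objects to illustrate the counting logic:
class _Node:
    def __init__(self, op):
        self.op = op

class _GraphDef:
    def __init__(self, ops):
        self.node = [_Node(o) for o in ops]

counts = count_op_types(_GraphDef(
    ['Placeholder', 'TRTEngineOp', 'Identity', 'TRTEngineOp']))
print(counts['TRTEngineOp'])  # 2
```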
Thank you.