Printing "float_val: " log for a long time when converting saved model

Description

Printing "float_val: " log for a long time when converting saved model.

Environment

TensorRT Version: 5.1.5
GPU Type: GeForce GTX 1080Ti
Nvidia Driver Version: 455.45.01
CUDA Version: 10.0
CUDNN Version: 7.4
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.4
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Code as follows:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverter(
    input_saved_model_dir='/root/models/v0/cpu/embed_v3_no_scope',
    max_workspace_size_bytes=(11 << 32),
    precision_mode='FP32',
    maximum_cached_engines=100,
    input_saved_model_signature_key='predict_y',
    input_saved_model_tags=['serve'],
    is_dynamic_op=True)
converter.convert()
converter.save('/tmp/tensorrt_model')

During conversion it keeps printing "float_val:" log lines and has not stopped after more than an hour so far. Is this normal? My model is actually quite large.

The log looks like this:

    float_val: -0.0009008728666231036
    float_val: -0.0017594805685803294
    float_val: 0.0001926921249832958
    float_val: 0.0007630143663845956
    float_val: 0.0010934184538200498
    float_val: 0.0013937398325651884
    float_val: 0.0011748558608815074
    float_val: -0.0005035705980844796
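
For context, "float_val:" is the field name used when a constant tensor is dumped in TensorProto text format, so a large model's weights can produce an enormous amount of this output. Purely as a guess (not a confirmed cause or fix), if the dump is coming from TensorFlow's own logging, lowering the verbosity before creating the converter might quiet it:

# A guess, not a confirmed fix: reduce TensorFlow's logging before conversion.
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # hide INFO/WARNING messages from the C++ runtime

import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # hide Python-side INFO logs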

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we request that you try the ONNX parser instead.
Please check the link below for details.
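
As a rough sketch of that route (not something spelled out in this thread: the tf2onnx export step, the ONNX/engine file names, and the TensorRT 7+ Python calls below are assumptions based on common usage; only the SavedModel path, tag, and signature key come from the post above), the SavedModel could first be exported to ONNX and then parsed with TensorRT's ONNX parser:

# Assumed prerequisite (not from this thread): export the SavedModel to ONNX,
# e.g. with the tf2onnx command-line converter:
#   python -m tf2onnx.convert --saved-model /root/models/v0/cpu/embed_v3_no_scope \
#       --signature_def predict_y --tag serve --output /tmp/embed_v3.onnx

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
# The ONNX parser requires an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open('/tmp/embed_v3.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError('Failed to parse the ONNX model')

config = builder.create_builder_config()
config.max_workspace_size = 1 << 32  # 4 GB of build workspace; adjust as needed

engine = builder.build_engine(network, config)
if engine is None:
    raise RuntimeError('Engine build failed')

with open('/tmp/embed_v3.engine', 'wb') as f:
    f.write(engine.serialize())

The same ONNX file can also be fed to the trtexec command-line tool (--onnx=/tmp/embed_v3.onnx) to build and benchmark an engine without writing any Python.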

Thanks!

Thanks, I will try.