What is the best practice for inference using a SavedModel?

Reference: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html

An error occurs when I convert a SavedModel with trt.TrtGraphConverterV2, load the newly generated SavedModel, and run inference on it.
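
For context, the conversion step looked roughly like this (a minimal sketch; saved_model_path and convert_path are placeholder names for my input and output directories):

    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # Convert the original SavedModel into a TF-TRT optimized SavedModel.
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=saved_model_path)
    converter.convert()
    converter.save(convert_path)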

The function convert_variables_to_constants_v2 is defined in 'tensorflow/tensorflow/python/framework/convert_to_constants.py', so I expected to be able to use it, but my script fails before it ever runs.

Please tell me how to fix this, or any other way to run inference with the converted SavedModel.
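
For example, would calling the signature directly, without freezing it first, be the correct approach? A rough sketch of what I mean (the input shape and the keyword name input_1 are assumptions; the real names can be checked with print(infer.structured_input_signature)):

    import tensorflow as tf

    loader = tf.saved_model.load(convert_path, tags=['serve'])
    infer = loader.signatures["serving_default"]

    # Signature functions take keyword arguments and return a dict of
    # named output tensors; 'input_1' is an assumed key.
    x = tf.zeros([1, 28, 28, 1])  # dummy MNIST-shaped batch
    result = infer(input_1=x)
    output = list(result.values())[0].numpy()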

Version

tensorflow==2.0.0
Keras==2.2.4
tensorrt==7.0.0

Ubuntu 18.04
CUDA 10.2
Python 3.6.9

Python Code

    # Take one test image and add a batch dimension.
    x = X_test[0][None, :, :, :]

    # Load the TF-TRT converted SavedModel and get its serving signature.
    loader = tf.saved_model.load(convert_path, tags=['serve'])
    infer = loader.signatures["serving_default"]

    # Freeze the signature's variables into constants, then run inference.
    # (The NameError below is raised on this line.)
    frozen_func = convert_to_constants.convert_variables_to_constants_v2(infer)
    output = frozen_func(x)[0].numpy()

Error

Traceback (most recent call last):
  File "mnist_leran.py", line 214, in <module>
    execute_new_model(convert_path, X_test)
  File "mnist_leran.py", line 178, in execute_new_model
    frozen_func = convert_to_constants.convert_variables_to_constants_v2(infer)
NameError: name 'convert_to_constants' is not defined
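
Is the problem simply a missing import? A minimal sketch of the fix I would try, assuming the module path matches the TF 2.0 sources:

    from tensorflow.python.framework import convert_to_constants

    # With the module imported, the freezing call should at least resolve:
    frozen_func = convert_to_constants.convert_variables_to_constants_v2(infer)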