Error when converting TensorFlow model to TensorRT

Description

Converting a custom YOLOv4 TensorFlow model with TF-TRT fails on Jetson NX with: ValueError: Got unexpected field names: ['max_batch_size'].

Environment

TensorRT Version:
8.0.1
CUDA Version:
10.2
Operating System + Version:
Jetson NX (jetpack 4.6)

TensorFlow Version (if applicable):
2.5.0

I followed the steps in this repo:

The steps:

python3 save_model.py --weights ./data/custom.weights --output ./checkpoints/custom.tf --input_size 416 --model yolov4
python3 convert_trt.py --weights ./checkpoints/custom.tf --quantize_mode float16 --output ./checkpoints/custom-trt-fp16-416

but got the error:

Traceback (most recent call last):
  File "convert_trt.py", line 100, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "convert_trt.py", line 96, in main
    save_trt()
  File "convert_trt.py", line 58, in save_trt
    max_batch_size=8)
  File "<string>", line 28, in _replace
ValueError: Got unexpected field names: ['max_batch_size']
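The last frame (File "&lt;string&gt;", line 28, in _replace) is the generated _replace method of a namedtuple: in TF 2.5, TrtConversionParams is a namedtuple that no longer defines a max_batch_size field, so _replace rejects it. A stdlib-only sketch of the mechanism (the field list here is illustrative, not TF's actual one):

```python
from collections import namedtuple

# Minimal stand-in for TF 2.5's TrtConversionParams, which no longer
# defines a max_batch_size field (earlier TF versions did).
TrtConversionParams = namedtuple(
    'TrtConversionParams', ['precision_mode', 'max_workspace_size_bytes'])

params = TrtConversionParams('FP16', 1 << 30)

try:
    # Same kind of call convert_trt.py makes via _replace(max_batch_size=8)
    params._replace(max_batch_size=8)
except ValueError as e:
    print(e)  # prints: Got unexpected field names: ['max_batch_size']
```

So the script is written against an older TF-TRT API; dropping the max_batch_size argument (or pinning the TF version the repo targets) avoids this particular ValueError.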

Any idea on this issue?

Hi,
We recommend checking the sample links below for TF-TRT integration issues.
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#samples
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#framework-integration
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#integrate-ovr
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usingtftrt
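For reference, a TF-TRT FP16 conversion under TF 2.x can be written without the removed max_batch_size field. A minimal sketch (paths taken from the commands above; assumes a TensorFlow build with TensorRT support, such as the JetPack TF wheel):

```python
# Sketch: TF-TRT FP16 conversion without the removed max_batch_size field.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,  # build float16 engines
    max_workspace_size_bytes=1 << 30,          # 1 GiB TensorRT workspace
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='./checkpoints/custom.tf',
    conversion_params=params,
)
converter.convert()
converter.save('./checkpoints/custom-trt-fp16-416')
```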

If the issue persists, we recommend reaching out to the TensorFlow forum.
Thanks!