Error when converting ONNX to FP16 TensorRT with my model

Hello.

I am trying to convert my model to an FP16 TensorRT engine.

My conversion pipeline is PyTorch -> ONNX -> TensorRT.

The PyTorch-to-ONNX step succeeded, but the ONNX-to-TensorRT step still fails.
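
For reference, the export step that succeeded looked roughly like this. This is only a minimal sketch: the stand-in module, the 640x640 input resolution, and the opset version are illustrative assumptions, not my exact training code.

import torch
import torch.nn as nn

# Stand-in for my real network (a detector with a MobileNet0.25 backbone);
# the single conv layer below only exists so this sketch runs end to end.
model = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU()).eval()

# Assumed input resolution; my real preprocessing may differ.
dummy_input = torch.randn(1, 3, 640, 640)

torch.onnx.export(
    model,
    dummy_input,
    'mobilenet0.25_Final.onnx',
    opset_version=11,            # assumed opset
    input_names=['input'],
    output_names=['output'],
)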

My conversion code is:

import os
import sys
c_folder = os.path.abspath(os.path.dirname(__file__))
p_folder = os.path.abspath(os.path.dirname(c_folder))
sys.path.append(c_folder)
sys.path.append(p_folder)

import argparse
import tensorrt as trt


def fp16_convert_main(args):
    """trt settings"""
    trt_logger = trt.Logger()
    builder = trt.Builder(trt_logger)
    network_flags = 0
    if args['explicit_batch']:
        network_flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    if args['explicit_precision']:
        network_flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION)

    network = builder.create_network(network_flags)
    trt_parser = trt.OnnxParser(network, trt_logger)

    """Parse ONNX"""
    with open(args['onnx_file_path'], 'rb') as onnx_model:
        print('[INFO] Beginning ONNX file parsing')
        trt_parser.parse(onnx_model.read())
    print('[INFO] Completed parsing of ONNX file')

    """allow TensorRT to use up to 1GB of GPU memory for tactic selection"""
    builder.max_workspace_size = 1 << 30

    """we have only one image in batch"""
    builder.max_batch_size = 5

    """select FP16 mode if possible"""
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True

    """generate TensorRT engine optimized for the target platform"""
    print('[INFO] Building an engine...')
    # network.mark_output(network.get_layer(network.num_layers - 1).get_output(0))
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("[INFO] Completed creating Engine")

    """save built engine"""
    folder_path = os.path.split(args['trt_engine_save_path'])[0]
    if not os.path.exists(folder_path):
        os.makedirs(folder_path)

    with open(args['trt_engine_save_path'], 'wb') as f:
        f.write(engine.serialize())

    return None



if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--onnx_file_path", default=os.path.join(c_folder, 'mobilenet0.25_Final.onnx'), type=str, help='Path of the onnx file.')
    parser.add_argument("--trt_engine_save_path", default=os.path.join(c_folder, 'mobilenet0.25_Final.trt'))
    parser.add_argument("--select_mode", default="fp16", type=str, help="Select mode type of trt engine. (Currently, only fp16 is available.)")
    parser.add_argument("--explicit_batch", default=True, help="Set trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH.")
    parser.add_argument("--explicit_precision", default=False, help="Set trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION.")
    args = vars(parser.parse_args())

    fp16_convert_main(args)

When I run it, I get the following output:

[INFO] Beginning ONNX file parsing
[TensorRT] WARNING: onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(the same warning is printed 29 times in total)
[INFO] Completed parsing of ONNX file
[INFO] Building an engine...
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
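
Looking at my script again, one thing I notice is that trt_parser.parse() returns a status flag that my code never checks. If parsing silently failed, the network would have no layers and no outputs, which would match the "Network must have at least one output" error. A minimal sketch of the check I intend to add, using the parser error API from the TensorRT 7 Python bindings shipped in this container:

    """Parse ONNX and surface parser errors instead of ignoring them"""
    with open(args['onnx_file_path'], 'rb') as onnx_model:
        print('[INFO] Beginning ONNX file parsing')
        if not trt_parser.parse(onnx_model.read()):
            # print every error recorded by the parser, then abort
            for i in range(trt_parser.num_errors):
                print('[ERROR]', trt_parser.get_error(i))
            raise RuntimeError('Failed to parse the ONNX file')
    print('[INFO] Completed parsing of ONNX file')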

If you need my model, please let me know.
For additional context, my model's backbone is MobileNet0.25 and I am using the Docker image nvcr.io/nvidia/tensorrt:20.03-py3.
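
For completeness, once the engine builds I plan to load it back like this (a minimal sketch; the actual inference code is omitted):

import tensorrt as trt

trt_logger = trt.Logger()

# Deserialize the saved engine with the TensorRT runtime
with open('mobilenet0.25_Final.trt', 'rb') as f, trt.Runtime(trt_logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()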