Description
I am trying to convert the yolov3-tiny-416 model to TensorRT with a dynamic batch size, using code modified from the yolo directory of the jkjung-avt/tensorrt_demos repo on GitHub (master branch).
The resulting engine is always None. Code snippets are below.
Environment
Using the docker container nvcr.io/nvidia/tensorrt:20.08-py3
TensorRT Version: 7.1.3.4
GPU Type: Titan X
Nvidia Driver Version: 450.51.06
CUDA Version: 11.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
ONNX Version: 1.4.1
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:20.08-py3
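(The TensorRT version was confirmed from inside the container with a one-liner like the following:)

import tensorrt as trt
print(trt.__version__)  # prints 7.1.3.4 in the 20.08-py3 image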
Relevant Files
I modified the build_engine function in onnx_to_tensorrt.py from the jkjung-avt/tensorrt_demos repo (master branch) to:
import tensorrt as trt

from plugins import add_yolo_plugins  # yolo_layer plugin helper from the tensorrt_demos repo

# Explicit-batch network flag, required by the ONNX parser in TensorRT 7.
EXPLICIT_BATCH = [1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)]

def build_engine(onnx_file_path, category_num=80, verbose=True):
    """Build a TensorRT engine from an ONNX file."""
    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)  # if verbose else trt.Logger()
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(*EXPLICIT_BATCH) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30
        # builder.max_batch_size = 32
        builder.fp16_mode = True
        # builder.strict_type_constraints = True
        config = builder.create_builder_config()

        # Parse the model file.
        print('Loading ONNX file from path {}...'.format(onnx_file_path))
        with open(onnx_file_path, 'rb') as model:
            if not parser.parse(model.read()):
                print('ERROR: Failed to parse the ONNX file.')
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return None

        # Mark the batch dimension of the input as dynamic (-1).
        shape = list(network.get_input(0).shape)
        shape[0] = -1
        network.get_input(0).shape = shape
        print(network.get_input(0).shape)

        print('Adding yolo_layer plugins...')
        model_name = onnx_file_path[:-5]  # strip the '.onnx' suffix
        network = add_yolo_plugins(
            network, model_name, category_num, TRT_LOGGER)

        # Optimization profile covering batch sizes 1 (min) / 16 (opt) / 32 (max).
        profile = builder.create_optimization_profile()
        profile.set_shape(network.get_input(0).name,
                          (1, 3, 416, 416), (16, 3, 416, 416), (32, 3, 416, 416))
        config.add_optimization_profile(profile)

        print('Building an engine. This would take a while...')
        print('(Use "--verbose" to enable verbose logging.)')
        engine = builder.build_engine(network, config)
        print('Completed creating engine.')
        return engine
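For completeness, this is roughly how I invoke the function and check the result; a minimal sketch, where the file names are my own choice and the serialization step is only reached if the build succeeds:

# Hypothetical driver for the build_engine() function above.
# 'yolov3-tiny-416.onnx' is the model from the link below.
engine = build_engine('yolov3-tiny-416.onnx', category_num=80, verbose=True)
if engine is None:
    raise SystemExit('build_engine() returned None')
with open('yolov3-tiny-416.trt', 'wb') as f:
    f.write(engine.serialize())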
ONNX model: https://drive.google.com/file/d/1-WJCijVL9_JdEVVLOanzGvl9RmIyJWF_/view?usp=sharing
Model IR version: 4
Opset Version: 9
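(For reference, the IR and opset versions above were read from the model with a snippet along these lines, assuming the onnx Python package and my local file name:)

import onnx

model = onnx.load('yolov3-tiny-416.onnx')
print('IR version:', model.ir_version)                  # 4
print('Opset version:', model.opset_import[0].version)  # 9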
Steps To Reproduce
- Run the build_engine function above with the ONNX model from the link; the resulting engine is always None.
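For context, the dynamic batch size is meant to be selected at inference time per execution context; a sketch of the intended usage, based on the TensorRT 7 Python API:

# Sketch only; assumes 'engine' was built successfully with the profile above.
context = engine.create_execution_context()
context.active_optimization_profile = 0          # the single profile added in build_engine()
context.set_binding_shape(0, (8, 3, 416, 416))   # any batch size in the 1..32 profile range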
I am not sure if I am missing something simple or if there is a compatibility issue here. I thought adding the optimization profile would do the trick.
Any help is much appreciated.