I have a TensorFlow model derived from VGG16 that worked fine when converted with TensorRT 5.0.2 (tensorrt:19.02-py3, served with tensorrtserver:19.02-py3).
TensorRT 5.1.2 (tensorrt:19.03-py3, served with tensorrtserver:19.03-py3) is making my life miserable.
In 5.0.2 I specified the parser input as follows:
parser.register_input(tname, (3, 224, 224), trt.UffInputOrder.NHWC)
In 5.1.2 I get the following errors when building the plan:
[TensorRT] ERROR: import/conv1/convolution: kernel weights has count 32670 but 2439360 was expected
[TensorRT] ERROR: UffParser: Parser error: import/conv1/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
I can get the plan to build if I do either of the following:
- parser.register_input(tname, (224, 224, 3), trt.UffInputOrder.NHWC)
- parser.register_input(tname, (3, 224, 224), trt.UffInputOrder.NCHW)
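For context, here is a sketch of the build flow those register_input calls sit in, using TensorRT 5.x Python API names; the UFF path, tensor names, and sizes are placeholders for my actual model, so treat this as pseudocode rather than my exact script:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network()
parser = trt.UffParser()

# The call that behaves differently between 5.0.2 and 5.1.2:
parser.register_input("input_tensor", (3, 224, 224), trt.UffInputOrder.NHWC)
parser.register_output("output_tensor")

parser.parse("model.uff", network)      # placeholder UFF file

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 30
engine = builder.build_cuda_engine(network)  # fails with the errors above
```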
But then my softmax outputs come out as garbage numbers, regardless of whether I feed the input images in HWC or CHW layout.
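To rule out a preprocessing bug on my side, this is the kind of HWC-to-CHW conversion I'm doing before inference (a minimal NumPy sketch; the synthetic image stands in for my real input):

```python
import numpy as np

# Hypothetical 224x224 RGB image in HWC layout (TensorFlow's default).
hwc = np.arange(224 * 224 * 3, dtype=np.float32).reshape(224, 224, 3)

# Move channels first to get CHW (TensorRT's native layout); make the
# result contiguous so it can be copied into a device buffer directly.
chw = np.ascontiguousarray(hwc.transpose(2, 0, 1))

print(chw.shape)                      # (3, 224, 224)
print(bool(chw[1, 5, 7] == hwc[5, 7, 1]))  # True: same pixel, new position
```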
Other TensorFlow architectures that I use are working fine with 5.1.2.
Any advice on debugging this problem? Until I fix this, I can’t use the 1.0.0 TRT Inference Server because it is not backwards compatible with 5.0 TRT plan files.