I am using TensorRT 5.1.5.0 on Windows 10 with CUDA 9.0.
I have run into a problem where certain input dimensions passed to the parser's registerInput function cause errors.
Using the line
parser->registerInput("Placeholder", DimsCHW(1, 400, 231), UffInputOrder::kNCHW);
causes the program to return the following errors:
[E] [TRT] concat: all concat input tensors must have the same dimensions except on the concatenation axis
[E] [TRT] UffParser: Parser error: dconv1_1/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
[E] [TRT] Network must have at least one output
Changing the line to:
parser->registerInput("Placeholder", DimsCHW(1, 496, 400), UffInputOrder::kNCHW);
causes the program to run without any errors.
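For context, this is roughly what my surrounding parse/build code looks like (a simplified sketch; the UFF file path and the output node name below are placeholders, not my actual values):

#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger required by the builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

ICudaEngine* buildEngine()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    IUffParser* parser = createUffParser();

    // This registration works; changing it to DimsCHW(1, 400, 231)
    // produces the errors shown above.
    parser->registerInput("Placeholder", DimsCHW(1, 496, 400), UffInputOrder::kNCHW);
    parser->registerOutput("output_node");  // placeholder for my real output name

    parser->parse("model.uff", *network, DataType::kFLOAT);  // placeholder path

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    parser->destroy();
    builder->destroy();
    return engine;
}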
Why are the dimensions causing errors in my program? With the adjusted dimensions I was able to perform inference properly with my network and get my desired output.
P.S. I am using a UFF file.
The dimensions of the image should not matter with this network.