Note: TensorRT expects the input tensor to be in CHW order. When importing from TensorFlow, ensure that the input tensor is in the required order, and if it is not, convert it to CHW.
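For reference, converting an interleaved HWC buffer (the layout most image libraries produce) into the planar CHW layout is just an index permutation. A minimal sketch in plain C++, independent of any TensorRT API:

```cpp
#include <cstddef>
#include <vector>

// Copy an interleaved HWC buffer into a planar CHW buffer.
// HWC offset of (y, x, ch) is (y * w + x) * c + ch;
// CHW offset of the same element is (ch * h + y) * w + x.
std::vector<float> hwcToChw(const std::vector<float>& hwc,
                            std::size_t h, std::size_t w, std::size_t c) {
    std::vector<float> chw(hwc.size());
    for (std::size_t ch = 0; ch < c; ++ch)
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < w; ++x)
                chw[(ch * h + y) * w + x] = hwc[(y * w + x) * c + ch];
    return chw;
}
```

The resulting CHW buffer can then be copied into the engine's input binding.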
Where the answer accepted by the forum admin states:
If your tensorflow model is trained with NHWC format, please feed HWC image into tensorRT.
The API for nvuffparser::IUffParser::registerInput has an inputOrder argument that accepts nvuffparser::UffInputOrder::kNHWC, which leads me to believe that NHWC format is supported.
Although networks can use NHWC and NCHW, when importing TensorFlow models, users are encouraged to convert their networks to use NCHW data ordering explicitly in order to achieve the best possible performance. TensorRT’s C++ API input and output tensors are in NCHW format.
This didn’t seem to be the case for me. My network is configured as NHWC. Despite that, I fed input data in NCHW format, as you said was required. However, the output data I copied out via TensorRT’s C++ API was in NHWC format; I determined this empirically by analyzing and visualizing the output data.
Moving forward I plan to use an NCHW network for the performance reasons you mention, but I wanted to point out that my experience does not match your quoted statement.
I agree with behrooze.sirang. Using IUffParser, the TRT engine’s output format is consistent with the UFF model (so if the original model’s output is NHWC, the output shape is NHWC), but the input tensor is always expected to be NCHW.
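If the input binding is NCHW while the output binding keeps the model’s NHWC layout, the two buffers must be indexed with different formulas. A small sketch of both linear-offset computations (plain C++; the shapes in the usage below are illustrative):

```cpp
#include <cstddef>

// Linear offset of element (n, c, h, w) in an NCHW buffer
// with C channels, height H, width W.
std::size_t nchwOffset(std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                       std::size_t C, std::size_t H, std::size_t W) {
    return ((n * C + c) * H + h) * W + w;
}

// Linear offset of the same element in an NHWC buffer.
std::size_t nhwcOffset(std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                       std::size_t C, std::size_t H, std::size_t W) {
    return ((n * H + h) * W + w) * C + c;
}
```

For example, with C=3, H=4, W=5, element (0, 1, 2, 3) lives at offset 33 in an NCHW buffer but at offset 40 in an NHWC buffer, which is exactly why misinterpreting the output layout produces scrambled results.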
Again, a weird TensorRT “design” (if not a bug) that is documented nowhere. So tired of being a TensorRT user…