TensorRT 5 Input Tensor Format NCHW / NHWC

Hi,

I am seeing some conflicting information and looking for a conclusive answer.

In the SDK documentation here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#import_tf_c

There is a note under section 2.2.4 which states:

Note: TensorRT expects the input tensor to be in CHW order. When importing from TensorFlow, ensure that the input tensor is in the required order, and if not, convert it to CHW.

I stumbled across a forum post on this forum here: https://devtalk.nvidia.com/default/topic/1025594/jetson-tx2/how-to-feed-a-3-channel-image-to-tensorrt/

Where the answer accepted by the forum admin states:

If your tensorflow model is trained with NHWC format, please feed HWC image into tensorRT.

The nvuffparser::IUffParser::registerInput API takes an inputOrder argument that accepts nvuffparser::UffInputOrder::kNHWC, which leads me to believe that NHWC format is supported.
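For reference, this is the call in question. A fragment only (not a complete program); the tensor name "input" and the 3x224x224 dimensions are illustrative assumptions, not taken from any particular model:

```cpp
// Register the UFF graph's input with the parser.
// Dimensions here exclude the batch dimension.
parser->registerInput("input", nvinfer1::Dims3(3, 224, 224),
                      nvuffparser::UffInputOrder::kNCHW);
// The enum also offers kNHWC, which is what prompts the question above.
```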

Can this discrepancy please be explained? Thanks.

Hello,

Although networks can use NHWC and NCHW, when importing TensorFlow models, users are encouraged to convert their networks to use NCHW data ordering explicitly in order to achieve the best possible performance. TensorRT’s C++ API input and output tensors are in NCHW format.

This didn’t seem to be the case for me. I am using a network configured as NHWC. Despite that, I did input data in NCHW format as you mentioned was a requirement. However, the output data I am copying out via TensorRT’s C++ API was in NHWC format. I determined this empirically by analyzing/visualizing the output data.

Moving forward I plan to use an NCHW network for the performance reasons you mention, but I wanted to point out that my experience does not match your quoted statement.

I agree with behrooze.sirang. Using IUffParser, the TRT engine’s output format is consistent with the UFF model (so if the original model’s output is NHWC, the output shape is NHWC), but the input tensor is always expected to be NCHW.
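If the engine's output really does keep the UFF model's NHWC layout, the flat output buffer has to be indexed accordingly. A small sketch of the index arithmetic, assuming that layout holds (the helper name is illustrative):

```cpp
#include <cstddef>

// Flat-buffer offset of element (n, y, x, k) in an NHWC tensor
// of shape (N, h, w, c) -- assuming the engine's output kept the
// original model's NHWC ordering, as observed in this thread.
inline std::size_t nhwcIndex(std::size_t n, std::size_t y, std::size_t x,
                             std::size_t k,
                             std::size_t h, std::size_t w, std::size_t c) {
    return ((n * h + y) * w + x) * c + k;
}
```

Reading the output with NCHW arithmetic instead would scramble the channels, which matches the garbled visualizations reported above.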

Again, a strange TensorRT “design” (if not a bug) that is documented nowhere. I’m so tired of being a TensorRT user…

Same as you, I’ve spent several days on this input/output problem, and in the end the input is NCHW while the output is NHWC… which is not reasonable.


PS. My TensorRT version is the latest, 6.0.1.

Has anyone noticed a difference in inference time when converting a TensorFlow model/network from NHWC to NCHW?

Thanks.