TensorRT UFF Tensorflow NHWC (channels last) to NCHW (channels first) conversion buggy

Hi,

I need to understand exactly what happens during the conversion from TensorFlow's NHWC (channels last) layout to the NCHW (channels first) layout that TensorRT uses. Does the UFF parser convert everything to NCHW, so I can assume that .uff files are entirely NCHW? Does TensorRT inference always run in NCHW mode? If my TensorFlow model is NHWC and I want to go the route of using the UFF parser, what is the recommended approach?
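For context, the layout difference itself is just an axis permutation. A minimal numpy sketch (not TensorRT or UFF code, just an illustration of the two layouts) of what a correct NHWC-to-NCHW conversion must do:

```python
import numpy as np

# A TensorFlow NHWC tensor (batch, height, width, channels) becomes
# TensorRT-style NCHW (batch, channels, height, width) via a transpose.
nhwc = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)  # N=2, H=4, W=4, C=3

nchw = nhwc.transpose(0, 3, 1, 2)  # move channels to axis 1
print(nchw.shape)  # (2, 3, 4, 4)

# Note that a plain reshape to the same target shape is NOT equivalent:
# it keeps the flat memory order and scrambles the data instead of
# permuting the axes.
wrong = nhwc.reshape(2, 3, 4, 4)
print(np.array_equal(nchw, wrong))  # False
```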

There is some conflicting information (and there are broken links) in these related threads that I would love to have clarified:
https://devtalk.nvidia.com/default/topic/1045832/tensorrt/tensorrt-5-input-tensor-format-nchw-nhwc/
https://devtalk.nvidia.com/default/topic/1036701/jetson-tx2/tensorrt-support-nhwc-model-/

I found a problem in my code where the concat layer translated by the UFF parser still concatenates on the C dimension in HWC mode (followed by a reshape layer added by the UFF parser to convert to CHW), while the TensorRT documentation says that TensorRT inference models are CHW all the way through. Is this concat layer broken? Is the translation from .pb to UFF broken? Or am I misunderstanding something?
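To make the concern concrete, here is a hypothetical numpy sketch (not the UFF parser itself) showing that a channel concat performed while still in NHWC, followed by a true transpose to NCHW, is mathematically equivalent to concatenating in NCHW directly; the equivalence only holds if the inserted layer is a genuine permutation, not a flat reshape:

```python
import numpy as np

# Two NHWC feature maps with different channel counts (illustrative shapes).
a_nhwc = np.random.rand(1, 4, 4, 3)
b_nhwc = np.random.rand(1, 4, 4, 5)

# Path 1: concat on C while still in NHWC, then permute to NCHW.
path1 = np.concatenate([a_nhwc, b_nhwc], axis=3).transpose(0, 3, 1, 2)

# Path 2: permute both inputs to NCHW first, then concat on C (axis 1).
path2 = np.concatenate(
    [a_nhwc.transpose(0, 3, 1, 2), b_nhwc.transpose(0, 3, 1, 2)], axis=1
)

print(np.allclose(path1, path2))  # True
```

So a concat-then-permute sequence is not wrong per se; the question is whether the layer the parser inserts actually performs the permutation.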

Thank you

Hello,

The UFF parser will insert transposes wherever required. The end user can feed data in the same format they used with the TF model, and expect output back in that same format.

If that’s not the case, please share a small repro that demonstrates the issue you are seeing and we will triage.

regards,
NVIDIA Enterprise Support

Hi,

I have a similar problem here. I'm trying to convert a custom TensorFlow op to a TensorRT plugin. The raw input data is a batch of point clouds with shape (batch_size, num_points, channels). I forced TensorFlow to use NHWC during training, but now it seems that NCHW is the only layout option in TensorRT, while the op kernels I wrote assume NHWC. Do I need to rewrite all of those kernels to make them compatible with NCHW, or is there a better solution?
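One possible workaround (a hypothetical numpy sketch, not the actual TensorRT plugin API; `nhwc_kernel` and `nchw_plugin` are made-up names) is to keep the channels-last kernels and transpose at the plugin boundary instead of rewriting them:

```python
import numpy as np

def nhwc_kernel(points):
    """Toy stand-in for an existing op that assumes channels-last input
    of shape (batch_size, num_points, channels)."""
    return points - points.mean(axis=1, keepdims=True)  # center each cloud

def nchw_plugin(points_cf):
    """Wrapper seen by the NCHW runtime: input (batch, channels, num_points)."""
    out = nhwc_kernel(points_cf.transpose(0, 2, 1))  # to channels-last
    return out.transpose(0, 2, 1)                    # back to channels-first

x = np.random.rand(2, 3, 100)   # batch=2, channels=3, num_points=100
print(nchw_plugin(x).shape)     # (2, 3, 100)
```

The extra transposes cost memory traffic on every call, so rewriting the kernels for NCHW would still be faster; this just avoids touching the kernel code.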

Thank you!