Hi,
I need to understand exactly what is going on with the conversion from TensorFlow's NHWC layout to the NCHW layout that TensorRT uses. Does the UFF parser convert everything into NCHW, so that I can assume .uff files are completely NCHW? Does TensorRT inference always run in NCHW mode? What is the recommended approach if my TensorFlow model is in NHWC and I want to go the UFF parser route?
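For concreteness, here is my understanding of the layout difference in a small NumPy sketch (the shapes are my own example, not from any real model):

```python
import numpy as np

# A dummy NHWC batch: N=2, H=3, W=4, C=5 (example shapes only).
x_nhwc = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# NHWC -> NCHW is a transpose with axis permutation (0, 3, 1, 2) ...
x_nchw = x_nhwc.transpose(0, 3, 1, 2)
assert x_nchw.shape == (2, 5, 3, 4)

# ... and NCHW -> NHWC is the inverse permutation (0, 2, 3, 1).
assert np.array_equal(x_nchw.transpose(0, 2, 3, 1), x_nhwc)
```

So my question is really whether the parser applies this kind of permutation once at the input (and fixes up axis-sensitive layers accordingly), or expects the model to already be NCHW.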
There is some conflicting information (and there are broken links) in these related threads that I would love to have clarified:
https://devtalk.nvidia.com/default/topic/1045832/tensorrt/tensorrt-5-input-tensor-format-nchw-nhwc/
https://devtalk.nvidia.com/default/topic/1036701/jetson-tx2/tensorrt-support-nhwc-model-/
I found a problem in my code where the concat layer translated by the UFF parser still concatenates on the C dimension as if in HWC mode (followed by a reshape layer, added by the UFF parser, that reshapes to CHW), even though the TensorRT documentation says inference models are in CHW all the way through. Is this concat layer broken? Is the translation from .pb to UFF broken? Or am I misunderstanding something?
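To illustrate why this matters, here is a minimal NumPy sketch (my own illustration, not taken from the actual model) showing that a channel concat must have its axis remapped when the layout changes from NHWC to NCHW:

```python
import numpy as np

# Two NHWC feature maps with matching N, H, W but different channel counts
# (example shapes only).
a_nhwc = np.random.rand(1, 4, 4, 3)
b_nhwc = np.random.rand(1, 4, 4, 5)

# Concatenating on C in NHWC means axis 3 ...
cat_nhwc = np.concatenate([a_nhwc, b_nhwc], axis=3)   # shape (1, 4, 4, 8)

# ... while the equivalent concat in NCHW is on axis 1.
a_nchw = a_nhwc.transpose(0, 3, 1, 2)
b_nchw = b_nhwc.transpose(0, 3, 1, 2)
cat_nchw = np.concatenate([a_nchw, b_nchw], axis=1)   # shape (1, 8, 4, 4)

# The two results agree only because the axis was remapped with the layout.
assert np.array_equal(cat_nhwc.transpose(0, 3, 1, 2), cat_nchw)
```

If the parser leaves the concat axis as the NHWC one while the tensors are actually CHW, the concat would glue the tensors along the wrong dimension, which is what I seem to be observing.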
Thank you