Why is there a need to transpose the image before feeding into the UFF model?

I trained my model in ‘channels first’ format, and since parser.register_input() takes NCHW input, why are the results correct only when I transpose the input image?

Also, if I were to train my model in ‘channels_last’ format, how should the input image be processed before feeding it into the UFF model? Transposing doesn’t work for me in this case.
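For context, this is the kind of explicit transpose I mean: converting a channels-last (HWC) image to channels-first (CHW) before feeding it in. A minimal NumPy sketch (array names and sizes are illustrative, not from my actual model):

```python
import numpy as np

# A dummy channels-last image: height 28, width 28, 3 channels
hwc_image = np.arange(28 * 28 * 3, dtype=np.float32).reshape(28, 28, 3)

# Move the channel axis to the front: (H, W, C) -> (C, H, W)
chw_image = hwc_image.transpose(2, 0, 1)

print(hwc_image.shape)  # (28, 28, 3)
print(chw_image.shape)  # (3, 28, 28)

# The values are unchanged; only the axis order differs
assert np.array_equal(chw_image[0], hwc_image[:, :, 0])
```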


“NHWC is the TensorFlow default and NCHW is the optimal format to use when training on NVIDIA GPUs using cuDNN.”


Hi user_jay,

I had to transpose the input explicitly before feeding it in; neither the UffInputOrder flag nor a transpose layer worked. I haven’t checked whether it’s fixed in the most recent release, and I wouldn’t be surprised if it isn’t.

Hi dhingratukl,

I’m aware that NHWC is the TensorFlow default, but I changed the default to NCHW during training, so technically there shouldn’t be a need to transpose.

Hi aleksandr.gorlin,

Not sure if this helps, but I changed my Keras Flatten layer to tf.reshape, and my frozen protobuf model then produced results equivalent to my UFF model (refer to this issue: https://devtalk.nvidia.com/default/topic/1037712/tensorrt/uff-model-fails-when-there-is-more-than-1-convolution2d-layer/). Still hoping someone can explain why this is happening.
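One possible explanation (an assumption on my part, not confirmed): Flatten serializes elements in whatever memory layout the tensor is stored in, so flattening an NHWC tensor and the equivalent NCHW tensor produces different element orderings, and any dense layer after the Flatten then sees permuted inputs if the layout changes between training and inference. A small NumPy illustration:

```python
import numpy as np

# The same 2x2 image with 3 channels, stored in both layouts
hwc = np.arange(2 * 2 * 3).reshape(2, 2, 3)  # channels-last (NHWC-style)
chw = hwc.transpose(2, 0, 1)                 # channels-first (NCHW-style)

# Row-major flattening of each layout yields different orderings,
# even though the underlying values are identical
print(hwc.flatten()[:6])  # [0 1 2 3 4 5]  -> pixel by pixel
print(chw.flatten()[:6])  # [0 3 6 9 1 4]  -> channel by channel
```

If that is what is going on, a reshape with an explicit target shape would sidestep the ambiguity that Flatten leaves to the converter.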

In TensorRT, you can specify the input format when registering the input:

parser->registerInput(INPUT_BLOB_NAME, DimsCHW(1, 28, 28), UffInputOrder::kNCHW);