TensorRT specify layer NCHW -> NHWC

TensorRT expects network inputs in NCHW format. Is there any way to specify a different format when constructing the network from a Caffe / UFF / ONNX parser? I can't find anything in the C++ API reference (https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/c_api) that suggests you can, but I wanted to ask whether anyone has run into a similar problem.

I know that NCHW is better performance-wise, but I am willing to take the hit on one single layer (my input layer) to avoid having to reorder my memory manually (or using a CUDA helper).
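For reference, the manual reordering I mean is just an interleaved-to-planar copy. A minimal CPU-side sketch (the `hwcToChw` helper is my own illustration, not a TensorRT function):

```cpp
#include <cstddef>
#include <vector>

// Reorder an interleaved HWC buffer into the planar CHW layout
// that TensorRT expects for its input bindings.
std::vector<float> hwcToChw(const std::vector<float>& src,
                            std::size_t h, std::size_t w, std::size_t c) {
    std::vector<float> dst(src.size());
    for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x)
            for (std::size_t k = 0; k < c; ++k)
                // src index: (y * w + x) * c + k  ->  dst index: k * h * w + y * w + x
                dst[k * h * w + y * w + x] = src[(y * w + x) * c + k];
    return dst;
}
```

A CUDA helper would be the same index arithmetic inside a kernel; either way it's an extra pass over the input that I'd rather let TensorRT absorb.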

Hi,

It’s recommended to use NCHW format to get better performance with TensorRT.

Actually, we allow a user to input an NHWC model, but we automatically insert several format converters to make it compatible.
We chose NCHW as our internal implementation format because of GPU acceleration.

If NHWC format is preferred, you can just let the UFF parser handle the compatibility for you.
If performance is more important, it's recommended to use NCHW across the whole model.
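Concretely, the UFF parser lets you declare the input order when registering the input, and TensorRT inserts the conversion for you. A sketch against the TensorRT 4.x C++ API (the tensor name and dimensions are placeholders for your model's actual input):

```cpp
#include "NvInfer.h"
#include "NvUffParser.h"

// Sketch: declare that the UFF model's input data is laid out as NHWC.
// TensorRT then handles conversion to its internal NCHW layout.
void registerNhwcInput(nvuffparser::IUffParser* parser) {
    // Dimensions are given in CHW order here (3 x 224 x 224 is a placeholder);
    // the last argument declares the order of the data in the original model.
    parser->registerInput("input_0",
                          nvinfer1::DimsCHW(3, 224, 224),
                          nvuffparser::UffInputOrder::kNHWC);
}
```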

Thanks.

I see that this is possible with the UFF parser - is there any way this function can be applied after a network has already been parsed, to the input layer? I am working with a Caffe (and not a UFF) model and didn’t realize the functionality for importing was different.

For example, if I have an ITensor (https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/c_api/classnvinfer1_1_1_i_tensor.html), there doesn't seem to be a way to override the ordering for just that tensor, as opposed to the entire model.

Hi,

In general, Caffe uses NCHW format, so it doesn't have this format-compatibility issue.

Do you want to use the converter as part of your model?
May I know more about your use case?

Thanks.

I have a model in Caffe and I am importing it for use in TensorRT. I want to change the first layer (or add a layer) so that it takes input in HWC (or frankly even CWH) format, but have the rest of the network use CHW.

Hi,

You can add a permute layer between the input and the rest of the network:
[url]https://github.com/intel/caffe/blob/master/src/caffe/layers/permute_layer.cpp[/url]

Thanks.

Sorry, am I missing something? That layer doesn't have a direct equivalent in TensorRT, so I would need to implement it manually anyway. Is that right?
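For what it's worth, one workaround I'm considering is TensorRT's own IShuffleLayer (present in the 4.x C++ API), which can express the transpose without a custom plugin. A sketch (function and variable names are my own; assumes a 3-dimensional HWC input tensor):

```cpp
#include "NvInfer.h"

// Sketch: insert an explicit HWC -> CHW transpose at the front of the
// network with IShuffleLayer, so the rest of the graph sees CHW data.
nvinfer1::ITensor* addHwcToChw(nvinfer1::INetworkDefinition& network,
                               nvinfer1::ITensor& hwcInput) {
    nvinfer1::IShuffleLayer* shuffle = network.addShuffle(hwcInput);
    // Output dim i takes input dim order[i]: C<-2, H<-0, W<-1 for HWC input.
    nvinfer1::Permutation perm;
    perm.order[0] = 2;
    perm.order[1] = 0;
    perm.order[2] = 1;
    shuffle->setFirstTranspose(perm);
    return shuffle->getOutput(0);
}
```

The rest of the Caffe-parsed network would then be wired to the shuffle layer's output instead of the raw input tensor.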