TensorRT memory layout

Hi,

I have a question about the TensorRT memory layout. I'm converting a model trained in TensorFlow to be served by TensorRT. As far as I know, the memory layouts of TensorRT and TensorFlow differ. I have several TensorFlow custom ops with CPU and GPU implementations that will need to be reimplemented as TensorRT plugin layers. Should I change their source code to make them compatible with NCHW? By the way, the data I'm processing is a point cloud with shape [batch_size, point_size, 3], so the shape of the tensor in TRT would be [batch_size, 3, point_size], right?
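For context, this is the kind of host-side transpose I'd expect to do before copying the data into the TensorRT input buffer (just a NumPy sketch under my own assumption about the layout; the names and sizes are placeholders):

```python
import numpy as np

# Placeholder point cloud batch in the TensorFlow layout: [batch_size, point_size, 3]
batch_size, point_size = 8, 1024
points_tf = np.random.rand(batch_size, point_size, 3).astype(np.float32)

# Transpose to the channel-first layout I assume TensorRT expects: [batch_size, 3, point_size]
# ascontiguousarray makes the buffer dense again so it can be memcpy'd to the device.
points_trt = np.ascontiguousarray(points_tf.transpose(0, 2, 1))

print(points_trt.shape)  # (8, 3, 1024)
```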

Hi,
Input formats for TRT:

  • (N, C, H, W) for 2D convolution.
  • (N, C, D, H, W) for 3D convolution.
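As a rough sketch, this is how a channel-first input can be declared with the TensorRT Python API (details are version-dependent, and the name "points" and the dimensions are only placeholders for your point cloud case):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network so the batch dimension is part of the declared shape.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Channel-first shape [batch_size, C, W], e.g. [8, 3, 1024] for a point cloud.
input_tensor = network.add_input(name="points", dtype=trt.float32, shape=(8, 3, 1024))
print(input_tensor.shape)
```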

Thanks