TensorRT: Multi-dimensional plugins and InnerProduct layer documentation

The plugin interface seems to generally support 3D (CHW) inputs and 3D (CHW) outputs. Are 4D inputs or outputs supported? If so, how does this affect batchSize?

My intuition is that batchSize would remain unchanged even if we began using a 4th dimension… we would just be adding another, totally independent dimension.

It is not clear to me whether TensorRT supports this 4th dimension. The best example might be the RPROIFused layer in FasterRCNN, but that is closed source. It would be really interesting to know whether the “pool5” output from that layer is 4D. Can someone provide some information here?

More frustratingly, AFAICT the TensorRT InnerProduct layer is completely undocumented. How does it handle a 4th dimension? What is the relationship of outputs to inputs?

Any general guidance or information specific to this example would be very helpful. Thanks!

Hi,

TensorRT takes 4-D inputs: BATCHxCxHxW.
Cross-batch handling is not supported.

The same implementation is applied to every sample in the batch.
That’s why you only need to take care of the CxHxW data.
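
For reference, here is a minimal sketch of what this looks like in a custom plugin, based on the legacy nvinfer1::IPlugin interface: getOutputDimensions() reports per-sample CHW dimensions only, and the batch size arrives as a separate argument to enqueue(). The class name and the kernel-launch helper are hypothetical.

```cpp
#include <NvInfer.h>
#include <cuda_runtime.h>

using namespace nvinfer1;

// Hypothetical plugin skeleton (other IPlugin methods omitted for brevity).
// Dimensions are declared per sample, i.e. CHW only; the batch dimension is
// handled by TensorRT and handed to enqueue() as a separate argument.
class MyPlugin : public IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        // Return CHW only -- no batch dimension here.
        return DimsCHW(inputs[0].d[0], inputs[0].d[1], inputs[0].d[2]);
    }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // The same per-sample computation is applied batchSize times;
        // launchMyKernel() is a placeholder for your CUDA kernel launch.
        // launchMyKernel(batchSize, inputs[0], outputs[0], stream);
        return 0;
    }

    // ... configure(), initialize(), terminate(), getWorkspaceSize(),
    //     getSerializationSize(), serialize() are also required by IPlugin.
};
```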

When building the TensorRT engine, you need to specify the maximum batch size, for example:
https://github.com/dusty-nv/jetson-inference/blob/master/tensorNet.cpp#L158
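
A minimal sketch of the build-time step, assuming the older implicit-batch IBuilder API (setMaxBatchSize / buildCudaEngine); the builder and network are assumed to be created and populated elsewhere:

```cpp
#include <NvInfer.h>

using namespace nvinfer1;

// Minimal sketch: the maximum batch size is fixed when the engine is built.
// 'builder' and 'network' are assumed to have been set up elsewhere.
ICudaEngine* buildEngine(IBuilder* builder, INetworkDefinition* network, int maxBatchSize)
{
    builder->setMaxBatchSize(maxBatchSize);     // e.g. 8
    builder->setMaxWorkspaceSize(16 << 20);     // 16 MB of scratch space
    return builder->buildCudaEngine(*network);  // engine accepts batch sizes 1..maxBatchSize
}
```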

When running inference, you need to specify the actual batch size to execute, for example:
https://github.com/dusty-nv/jetson-inference/blob/master/imageNet.cpp#L306
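
And a minimal sketch of the execution step, assuming the same implicit-batch API: the actual batch size (up to the build-time maximum) is passed to IExecutionContext::execute(). The buffers array of device bindings is assumed to be allocated elsewhere.

```cpp
#include <NvInfer.h>

using namespace nvinfer1;

// Minimal sketch: the actual batch size (<= the build-time maximum) is passed
// at execution time. 'buffers' holds the device pointers for the engine's
// input and output bindings.
bool infer(IExecutionContext* context, void** buffers, int batchSize)
{
    // Synchronous execution; context->enqueue(batchSize, buffers, stream, nullptr)
    // is the asynchronous variant.
    return context->execute(batchSize, buffers);
}
```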

By the way, TensorRT’s documentation is located at /usr/share/doc/tensorrt/.