Combining Dimensions (Volume?)


To allocate input and output buffers for my TensorRT engine correctly, I need to combine all the dimensions into a single byte count that I can then use in my C++ program.

Here’s the code that I have been using (I wrote this many versions of TRT ago):

// Returns the number of bytes we need.
size_t get_trt_io_size(const nvinfer1::Dims& dims, const size_t batch_size) {
  // We start with the batch size, since this is NOT provided via the dims
  // parameter and we need to account for this dimension. We also assume
  // that all I/O to/from the engine is 32-bit floats.
  size_t num_floats = batch_size;
  for (int i = 0; i < dims.nbDims; ++i) {
    const auto type = dims.type[i];
    if ((type == nvinfer1::DimensionType::kSPATIAL) ||
        (type == nvinfer1::DimensionType::kCHANNEL)) {
      num_floats *= dims.d[i];
    }
  }
  return num_floats * sizeof(float);
}

The problem I am now facing is that I have an output layer that doesn’t seem to have valid types for its dimensions. That is, I get the expected dimension back by looking at dims.d[i], but the type is a value that doesn’t map to any nvinfer1::DimensionType enumerator.

I saw in some documentation that the dimension type has been deprecated. As a result, I’m wondering what the correct way is to compute the value I’m looking for. The reason for checking the dimension type in the first place was to remove any possibility of counting the batch size twice.

I also found a function in the Python API: based on some testing, it appears to ignore the dimension type entirely. I’m wondering if my code should also ignore the dimension type, and whether I’m guaranteed not to account for the batch size when multiplying all the individual Dims components together (i.e., multiplying all dims.d[i] together without checking their type).
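For reference, here is a minimal sketch of the type-agnostic computation being described (multiplying every dims.d[i] without inspecting the type). It uses a stand-in DimsLike struct in place of nvinfer1::Dims so that it compiles without the TensorRT headers; in real code the actual nvinfer1::Dims type would be used:

```cpp
#include <cstddef>

// Minimal stand-in for nvinfer1::Dims so this compiles without TensorRT
// headers (assumption: only nbDims and d[] are needed for the size math).
struct DimsLike {
  int nbDims;
  int d[8];
};

// Product of every dimension component, ignoring dimension type entirely.
size_t volume(const DimsLike& dims) {
  size_t v = 1;
  for (int i = 0; i < dims.nbDims; ++i) {
    v *= static_cast<size_t>(dims.d[i]);
  }
  return v;
}

// Byte count for a tensor of 32-bit floats with these dimensions.
size_t io_size_bytes(const DimsLike& dims) {
  return volume(dims) * sizeof(float);
}
```

For a 3x224x224 binding, volume() gives 150528 floats and io_size_bytes() gives 602112 bytes; whether the batch size still needs to be multiplied in is exactly the open question above.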

Edited to add:

The Dims I am checking are returned from nvinfer1::ICudaEngine::getBindingDimensions(). I want to be sure that these Dims do NOT include the batch size, so I don’t count the batch size multiple times.
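One way to express the double-counting concern is to make the batch handling explicit. This sketch uses a plain bool as a stand-in for a runtime query such as ICudaEngine::hasImplicitBatchDimension() (an assumption; check your TRT version's API): if the engine uses an implicit batch dimension, the binding dims exclude the batch and it must be multiplied in; with an explicit batch, the dims already contain it:

```cpp
#include <cstddef>

// Product of all dimension components (d and nbDims stand in for the
// corresponding fields of nvinfer1::Dims).
size_t volume(const int* d, int nbDims) {
  size_t v = 1;
  for (int i = 0; i < nbDims; ++i) {
    v *= static_cast<size_t>(d[i]);
  }
  return v;
}

// In implicit-batch mode the binding dims exclude the batch, so it is
// multiplied in here; in explicit-batch mode the dims already contain it.
// The implicit_batch flag stands in for a query like
// engine->hasImplicitBatchDimension() (an assumption, not verified here).
size_t buffer_bytes(const int* d, int nbDims, size_t batch_size,
                    bool implicit_batch) {
  size_t v = volume(d, nbDims);
  if (implicit_batch) {
    v *= batch_size;
  }
  return v * sizeof(float);
}
```

With this split, a 3x224x224 binding at batch size 8 in implicit-batch mode and an 8x3x224x224 binding in explicit-batch mode both yield the same buffer size, and the batch is counted exactly once in each case.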

Could you please let us know if you are still facing this issue?