TensorRT "Assertion `isIndexedCHW(d)' failed" error when the network contains tensor whose ndim>4

Hello. I have been using TensorRT for a while, mainly building networks directly through the API.

Whenever any layer anywhere in the network produces a tensor whose number of dimensions (ndim) is larger than 4, the TensorRT build process raises the following error:

% ./repr
repr: helpers.cpp:39: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `isIndexedCHW(d)' failed.

However, the TensorRT user guide (page 3, “1.2. Key Concepts” chapter) says

Tensors can have at most Dims::MAX_DIMENSIONS dimensions in total, where that constant is set to 8.

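For what it's worth, the same limit is visible in the API headers; in the NvInfer.h from my install the constant is spelled Dims::MAX_DIMS rather than MAX_DIMENSIONS, and its value can be checked at compile time:

#include <NvInfer.h>

// Sanity check: the header constant should agree with the user guide's stated limit of 8.
static_assert(nvinfer1::Dims::MAX_DIMS == 8, "Dims::MAX_DIMS is expected to be 8");
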
Therefore I believe this is not intentional behavior, and I suspect I am making a mistake somewhere. (Judging from the message, the builder seems to force every tensor's dims through an internal getCHW() helper, which would explain a 4-dim limit, but that contradicts the documented limit of 8.) However, as far as I can tell from the user guide and the API reference HTML, I cannot figure out what is wrong.

Can anyone give me some advice?

Below is a minimal reproduction.
I tested it in the following environment.

  • Ubuntu 16.04
  • TensorRT 3.0.4, deb-based install (I have had the same issue since 2.1, through 3.0 RC and 3.0.2)
  • CUDA 9.0
  • cuDNN 7.0.5

When I change the line "out_dims.nbDims = 5;" in reproduction.cpp below to "out_dims.nbDims = 4;", the build no longer fails; the resulting setup is sketched right after this paragraph.
(Similarly, when we use CHW instead of NCHW, i.e. set the first axis to kCHANNEL and the remaining axes to kSPATIAL, nbDims has to be <= 3, otherwise we get the same error.)
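For clarity, here is the 4-dim variant of the dims setup that builds without the assertion for me. (With nbDims = 4 the trailing d[4] assignment is simply ignored; a real reshape would also need matching element counts, but the stub plugin never executes, so only the dims metadata matters here.)

// 4-dim NCHW output: with nbDims = 4 the engine builds fine
Dims out_dims;
out_dims.nbDims = 4;
out_dims.d[0] = 1;  out_dims.type[0] = DimensionType::kINDEX;
out_dims.d[1] = 3;  out_dims.type[1] = DimensionType::kCHANNEL;
out_dims.d[2] = 32; out_dims.type[2] = DimensionType::kSPATIAL;
out_dims.d[3] = 8;  out_dims.type[3] = DimensionType::kSPATIAL;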

reproduction.cpp

#include <NvInfer.h>
using namespace nvinfer1;

// Minimal stub plugin: does no real work, it only reports the requested output dims.
class Reshape : public IPlugin
{
    Dims dims;
public:
    Reshape(Dims _dims) :dims(_dims){ }

    int getNbOutputs() const override { return 1; }
    int initialize() override { return 0; }
    void terminate() override { }
    size_t getWorkspaceSize(int) const override { return 0; }
    size_t getSerializationSize() override { return 0; }
    void serialize(void *buffer) override {}
    void configure(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, int maxBatchSize) override { }
    int enqueue(int batchSize, const void* const* inputs, void** outputs, void* workspace, cudaStream_t stream) override { return 0; } // stub; never launched

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override { return dims; }
};

// Silent logger; a logger instance is required to create the builder.
struct Logger : public ILogger
{
    void log(ILogger::Severity severity, const char* msg) override { }
} logger;

int main(int argc, char** argv)
{
    auto builder = createInferBuilder(logger);
    auto network = builder->createNetwork();

    // input
    auto input_tensor = network->addInput("input", DataType::kFLOAT, DimsCHW(3, 32, 32));

    // reshape
    Dims out_dims;
    out_dims.nbDims = 5;
    out_dims.d[0] = 1; out_dims.type[0] = DimensionType::kINDEX;
    out_dims.d[1] = 3; out_dims.type[1] = DimensionType::kCHANNEL;
    out_dims.d[2] = 32; out_dims.type[2] = DimensionType::kSPATIAL;
    out_dims.d[3] = 8; out_dims.type[3] = DimensionType::kSPATIAL;
    out_dims.d[4] = 4; out_dims.type[4] = DimensionType::kSPATIAL;
    Reshape reshape_plugin(out_dims);
    auto reshape = network->addPlugin(&input_tensor, 1, reshape_plugin);
    auto output_tensor = reshape->getOutput(0);

    network->markOutput(*output_tensor);

    auto engine = builder->buildCudaEngine(*network); // the assertion fires inside this call
}
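
Not related to the failure itself, but for completeness: TensorRT objects are released with destroy() rather than delete. I left the cleanup out above because the process aborts inside buildCudaEngine(), but a well-behaved main() would end with:

// TensorRT objects are released via destroy(), not delete
if (engine) engine->destroy();
network->destroy();
builder->destroy();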

Makefile

SRC=reproduction.cpp
TARGET=repr
${TARGET}: ${SRC}
	g++ -O2 -std=c++14 ${SRC} -lcudart -lnvinfer -o ${TARGET}
clean:
	rm -f ${TARGET}

How to run

% ls
Makefile reproduction.cpp
% make
% ./repr
repr: helpers.cpp:39: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `isIndexedCHW(d)' failed.

I tested with TensorRT 4.0 RC, which was released right after the above post, and confirmed that this problem no longer happens.
I guess the release-note entry

Dimension Types are now ignored in the API

is related.
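
If that note means what I think it means, the type[] assignments in the repro above are now irrelevant; presumably something like the following is sufficient under 4.0 RC (an untested sketch on my part):

// TensorRT 4: only nbDims and d[] should matter; DimensionType is ignored
Dims out_dims;
out_dims.nbDims = 5;
out_dims.d[0] = 1;
out_dims.d[1] = 3;
out_dims.d[2] = 32;
out_dims.d[3] = 8;
out_dims.d[4] = 4;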
