Bug in nvinfer1::plugin::createSSDAnchorGeneratorPlugin.

I am currently building an SSD network with TensorRT; the model is converted from TensorFlow.

However, whenever I set numLayers in nvinfer1::plugin::createSSDAnchorGeneratorPlugin to a value lower than 6, it reports the following error:

After concat removal: 326 layers
Graph costruction and optimization completed in 0.0583201 seconds.
uff_to_plan: ../builder/tacticOptimizer.cpp:2169: nvinfer1::query::AbstractTensor nvinfer1::builder::{anonymous}::makeAbstractTensor(const nvinfer1::builder::Tensor&): Assertion `a.dims[i] != 0' failed.
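
For context, this is roughly how I create the plugin. It is only a minimal sketch of my setup, assuming the legacy NvInferPlugin.h interface where the first argument is an array of per-layer GridAnchorParameters; the field values below are illustrative placeholders, not my real config.

#include "NvInferPlugin.h"

nvinfer1::plugin::INvPlugin* makeAnchorPlugin()
{
    static const int kNumLayers = 5; // anything lower than 6 triggers the assertion for me
    static float aspectRatios[] = {1.0f, 2.0f, 0.5f};
    static nvinfer1::plugin::GridAnchorParameters params[kNumLayers];

    for (int i = 0; i < kNumLayers; ++i)
    {
        params[i].minSize = 0.2f;        // placeholder anchor scales
        params[i].maxSize = 0.95f;
        params[i].aspectRatios = aspectRatios;
        params[i].numAspectRatios = 3;
        params[i].H = 19 >> i;           // placeholder feature-map sizes
        params[i].W = 19 >> i;
        params[i].variance[0] = 0.1f;
        params[i].variance[1] = 0.1f;
        params[i].variance[2] = 0.2f;
        params[i].variance[3] = 0.2f;
    }

    // numLayers is the value I change; the failure appears whenever it is < 6.
    return nvinfer1::plugin::createSSDAnchorGeneratorPlugin(params, kNumLayers);
}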

However, when I set numLayers to 6, the same config for this op goes through, and the output looks like this:

After concat removal: 333 layers
Graph costruction and optimization completed in 0.183224 seconds.

--------------- Timing (Unnamed Layer* 0) [Padding](18)
Tactic 0 is the only option, timing skipped

--------------- Timing FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise(3)
Tactic 0 time 0.449184

When I checked the code in sampleUffSSD, I found that in FlattenConcat’s getOutputDimensions function, the debug loop over the inputs is hard-coded to 6 iterations rather than using the nbInputDims argument, like this:

#ifdef SSD_INT8_DEBUG
        std::cout << " Concat nbInputs " << nbInputDims << "\n";
        std::cout << " Concat axis " << mConcatAxisID << "\n";
        for (int i = 0; i < 6; ++i)
            for (int j = 0; j < 3; ++j)
                std::cout << " Concat InputDims[" << i << "]"
                          << "d[" << j << " is " << inputs[i].d[j] << "\n";
#endif

Since it is inside the SSD_INT8_DEBUG scope, it does not matter in the general case.
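
Even so, a more robust version of that debug block would use the runtime arguments instead of the constants 6 and 3. A small sketch of what I mean, based on the snippet above:

#ifdef SSD_INT8_DEBUG
        std::cout << " Concat nbInputs " << nbInputDims << "\n";
        std::cout << " Concat axis " << mConcatAxisID << "\n";
        for (int i = 0; i < nbInputDims; ++i)          // use the actual number of inputs
            for (int j = 0; j < inputs[i].nbDims; ++j) // and each input's own rank
                std::cout << " Concat InputDims[" << i << "]"
                          << "d[" << j << "] is " << inputs[i].d[j] << "\n";
#endif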

But it seems that the code handling nvinfer1::plugin::DetectionOutputParameters also uses a hard-coded value rather than the input parameter numLayers.

Could someone help me check the relevant implementation in that function?

Thanks a lot!

Best,

Yes, I tried it; the value is fixed at 6.