Assertion Error in checkDimsSanity: 0 (dims.d[i] >= 0)

This sample builds two engines: a preprocessor engine and a prediction engine.

The prediction engine was built successfully (finally, ha ha… step by step).
Now the issue is with building the preprocessor engine.

I checked my plugin's inputs at runtime in configurePlugin and supportsFormatCombination.
Both came out as:

pos 0 dims (88, 1, 35) format 0 type 0
pos 1 dims (1) format 0 type 3
pos 2 dims (1, 20) format 0 type 3
pos 0 dims (88, 1, 35) format 0 type 1
[09/01/2020-13:05:08] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[09/01/2020-13:05:16] [I] [TRT] Detected 1 inputs and 1 output network tensors.
nbInput 2 nbOutput 1
in 0 min (88, 1, 35)
in 0 max (88, 1, 35)
in 1 min (1)
in 1 max (1)
out 1 min (1, 20)
out 1 max (1, 20)

So that looked fine. All inputs and outputs have dimensions.
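
(For reference, those lines come from debug prints roughly along these lines; this is only a sketch, not the exact plugin code:)

#include <iostream>
#include "NvInfer.h"

// Sketch of a helper called from supportsFormatCombination() / configurePlugin()
// to produce the "pos N dims (...) format F type T" lines above.
static void logTensorDesc(int pos, const nvinfer1::PluginTensorDesc& d)
{
    std::cout << "pos " << pos << " dims (";
    for (int i = 0; i < d.dims.nbDims; ++i)
    {
        std::cout << d.dims.d[i] << (i + 1 < d.dims.nbDims ? ", " : "");
    }
    std::cout << ") format " << static_cast<int>(d.format)
              << " type " << static_cast<int>(d.type) << std::endl;
}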

But when the preprocessor engine was built, there was an error:

[09/01/2020-13:05:16] [E] [TRT] ../builder/cudnnBuilderGraph.cpp (794) - Assertion Error in checkDimsSanity: 0 (dims.d[i] >= 0)

What could be wrong?

Hi @edit_or,
This sample does not use a plugin; the preprocessor just uses a resize layer and the prediction network uses an ONNX model.
So where is your plugin coming from?

Thanks!

My network needs a plugin, so I created one.

Sorry, I made that confusing. This is not related to my plugin.
It is happening inside buildPreprocessorEngine.
Once the build reaches this line, I get the error.

mPreprocessorEngine = makeUnique(builder->buildEngineWithConfig(*preprocessorNetwork, *preprocessorConfig));

bool NumPlateRecognition::buildPreprocessorEngine(const SampleUniquePtr<nvinfer1::IBuilder>& builder)
{
    // Create the preprocessor engine using a network that supports full dimensions (createNetworkV2).
    auto preprocessorNetwork = makeUnique(builder->createNetworkV2(1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    if (!preprocessorNetwork)
    {
        sample::gLogError << "Create network failed." << std::endl;
        return false;
    }

    // Resize the dynamically shaped input to the size expected by the prediction model (mPredictionInputDims).
    auto input = preprocessorNetwork->addInput("input:0", nvinfer1::DataType::kFLOAT, Dims4{-1, 24, 94, 3});
    auto resizeLayer = preprocessorNetwork->addResize(*input);
    resizeLayer->setOutputDimensions(mPredictionInputDims);
    preprocessorNetwork->markOutput(*resizeLayer->getOutput(0));
    std::cout << "mPredictionOutputDims1 " << mPredictionOutputDims << std::endl;
    // Finally, configure and build the preprocessor engine.
    auto preprocessorConfig = makeUnique(builder->createBuilderConfig());
    if (!preprocessorConfig)
    {
        sample::gLogError << "Create builder config failed." << std::endl;
        return false;
    }

    // Create an optimization profile so that we can specify a range of input dimensions.
    const int batchSize{1};
    auto profile = builder->createOptimizationProfile();
    // Here kMIN, kOPT, and kMAX are all set to the same shape, (1, 24, 94, 3),
    // so the profile only covers that exact input size.
    // We do not need to check the return of setDimensions and addOptimizationProfile here as all dims are explicitly set
    profile->setDimensions(input->getName(), OptProfileSelector::kMIN, Dims4(batchSize, 24, 94, 3));
    profile->setDimensions(input->getName(), OptProfileSelector::kOPT, Dims4(batchSize, 24, 94, 3));
    profile->setDimensions(input->getName(), OptProfileSelector::kMAX, Dims4(batchSize, 24, 94, 3));
    preprocessorConfig->addOptimizationProfile(profile);

    // Create a calibration profile.
    auto profileCalib = builder->createOptimizationProfile();
    const int calibBatchSize{16};
    // We do not need to check the return of setDimensions and setCalibrationProfile here as all dims are explicitly set
    profileCalib->setDimensions(input->getName(), OptProfileSelector::kMIN, Dims4{calibBatchSize, 24, 94, 3});
    profileCalib->setDimensions(input->getName(), OptProfileSelector::kOPT, Dims4{calibBatchSize, 24, 94, 3});
    profileCalib->setDimensions(input->getName(), OptProfileSelector::kMAX, Dims4{calibBatchSize, 24, 94, 3});
    preprocessorConfig->setCalibrationProfile(profileCalib);

    std::unique_ptr<IInt8Calibrator> calibrator;
    if (mParams.int8)
    {
        preprocessorConfig->setFlag(BuilderFlag::kINT8);
        const int nCalibBatches{10};
        MNISTBatchStream calibrationStream(
            calibBatchSize, nCalibBatches, "train-images-idx3-ubyte", "train-labels-idx1-ubyte", mParams.dataDirs);
        calibrator.reset(
            new Int8EntropyCalibrator2<MNISTBatchStream>(calibrationStream, 0, "MNISTPreprocessor", "input"));
        preprocessorConfig->setInt8Calibrator(calibrator.get());
    }

    mPreprocessorEngine = makeUnique(builder->buildEngineWithConfig(*preprocessorNetwork, *preprocessorConfig));
    if (!mPreprocessorEngine)
    {
        sample::gLogError << "Preprocessor engine build failed." << std::endl;
        return false;
    }
    else
    {
        sample::gLogInfo << "Preprocessor engine build succeeded." << std::endl;
    }
    sample::gLogInfo << "Profile dimensions in preprocessor engine:" << std::endl;
    sample::gLogInfo << "    Minimum = " << mPreprocessorEngine->getProfileDimensions(0, 0, OptProfileSelector::kMIN)
                     << std::endl;
    sample::gLogInfo << "    Optimum = " << mPreprocessorEngine->getProfileDimensions(0, 0, OptProfileSelector::kOPT)
                     << std::endl;
    sample::gLogInfo << "    Maximum = " << mPreprocessorEngine->getProfileDimensions(0, 0, OptProfileSelector::kMAX)
                     << std::endl;
    return true;
}

The only difference I found between the sample program and my network is the input format.
The sample program uses NCHW, while mine was converted from TensorFlow, so it is NHWC.

Sample
mPredictionInputDims
{static MAX_DIMS = 8, nbDims = 4, d = {1, 1, 28, 28, 85, 1439782968, 85, 1729716496}, type = {85, 119, 124,
-10368, 127, -1441266780, 127, -1440363920}}
mPredictionOutputDims
{static MAX_DIMS = 8, nbDims = 2, d = {1, 10, 0, 0, 0, 0, 0, 0}, type = {nvinfer1::DimensionType::kSPATIAL,
nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL,
nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL,
nvinfer1::DimensionType::kSPATIAL}}

My network has
mPredictionInputDims
{static MAX_DIMS = 8, nbDims = 4, d = {-1, 24, 94, 3, 85, 1441699688, 85, 1754525184}, type = {85, 119, 124,
-10592, 127, -1445747804, 127, -1444844944}}

mPredictionOutputDims
{static MAX_DIMS = 8, nbDims = 2, d = {1, 20, 0, 0, 0, 0, 0, 0}, type = {nvinfer1::DimensionType::kSPATIAL,
nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL,
nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL, nvinfer1::DimensionType::kSPATIAL,
nvinfer1::DimensionType::kSPATIAL}}

Could that be the issue?

Any idea about this problem? I am stuck there.

Hi @edit_or,

This should not make any difference, as the NHWC input format is supported.
However, did you try changing the input format to NCHW?
And which version of TRT are you using?
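
If regenerating the model is inconvenient, one option is to transpose on the TensorRT side by adding a shuffle layer right after the input. A rough sketch (illustrative only, not part of the sample; it assumes you already have an INetworkDefinition):

#include "NvInfer.h"

// Illustrative: declare an NHWC input and transpose it to NCHW inside the network.
nvinfer1::ITensor* addNhwcInputAsNchw(nvinfer1::INetworkDefinition& network)
{
    // NHWC input as exported from TensorFlow.
    nvinfer1::ITensor* nhwcInput = network.addInput(
        "input:0", nvinfer1::DataType::kFLOAT, nvinfer1::Dims4{1, 24, 94, 3});
    nvinfer1::IShuffleLayer* toNCHW = network.addShuffle(*nhwcInput);
    nvinfer1::Permutation perm{{0, 3, 1, 2}}; // N, H, W, C -> N, C, H, W
    toNCHW->setFirstTranspose(perm);
    // Output shape is now (1, 3, 24, 94) and can feed NCHW-oriented layers.
    return toNCHW->getOutput(0);
}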

Thanks!

I am using TensorRT 7.1. Can I send you my code so that you can help check it?

For the sample program, line number 306 has
mPredictionInput.resize(mPredictionInputDims);

This initializes mPredictionInput from mPredictionInputDims.
I printed mPredictionInputDims and it has:

    {static MAX_DIMS = 8, nbDims = 4, d = {1, 1, 28, 28, 85, 1439782968, 85, 
    1729716496}, type = {85, 119, 124, -10688, 127, -1441266780, 127, 
    -1440363920}}

For me, I get an error: Thread 1 "platerecg_debug" received signal SIGABRT, Aborted.
When I checked mPredictionInputDims:

  mPredictionInputDims={static MAX_DIMS = 8, nbDims = 4, d = {-1, 24, 94, 3, 85, 1441707880, 85, 1742426192}, 
  type = {85, 119, 124, -10592, 127, -1445747804, 127, -1444844944}}

The only difference I found is the NCHW vs. NHWC layout.
How do I change the model's format to NCHW? Should that be done in the ONNX model?

I changed to NCHW but still have the same error.

Can you help me? I have no clue how to proceed and have been stuck here for a few days already.

Now I know my input dimensions are (-1, 3, 24, 94). TensorRT does not accept the -1 there; that is the problem. The assertion requires dims.d[i] >= 0, so it needs to be 1.
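
So the fix is to give those dims a concrete value before they are used. A minimal sketch of what that looks like (illustrative; variable names as in the code above):

// Illustrative: replace the dynamic -1 with a concrete batch size before the
// dims are passed to places that require dims.d[i] >= 0, such as
// resizeLayer->setOutputDimensions() and mPredictionInput.resize().
nvinfer1::Dims concreteDims = mPredictionInputDims;
for (int i = 0; i < concreteDims.nbDims; ++i)
{
    if (concreteDims.d[i] < 0)
    {
        concreteDims.d[i] = 1; // the batch size the engine will actually run with
    }
}
resizeLayer->setOutputDimensions(concreteDims);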