No optimization profile has been defined

I have changed my plugin to use IPluginV2DynamicExt.

The build runs and the plugin is created successfully:

[08/31/2020-13:17:46] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: CTCGreedyDecoder. Attempting to import as plugin.
[08/31/2020-13:17:46] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: CTCGreedyDecoder, plugin_version: 1, plugin_namespace: 
[08/31/2020-13:17:46] [I] [TRT] builtin_op_importers.cpp:3676: Successfully created plugin: CTCGreedyDecoder
[08/31/2020-13:17:46] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[08/31/2020-13:17:46] [E] [TRT] Network validation failed.
[08/31/2020-13:17:46] [E] Prediction engine build failed.

But I now get a new error at runtime:
Network has dynamic or shape inputs, but no optimization profile has been defined.

I have an optimization profile defined as follows.

    auto profile = builder->createOptimizationProfile();
    // This profile will be valid for all images whose size falls in the range of [(1, 1, 1, 1), (1, 1, 36, 126)],
    // but TensorRT will optimize for (1, 1, 24, 94).
    // We do not need to check the return of setDimensions and addOptimizationProfile here as all dims are explicitly set
    profile->setDimensions(input->getName(), OptProfileSelector::kMIN, Dims4{1, 1, 1, 1});
    profile->setDimensions(input->getName(), OptProfileSelector::kOPT, Dims4{1, 1, 24, 94});
    profile->setDimensions(input->getName(), OptProfileSelector::kMAX, Dims4{1, 1, 36, 126});
    preprocessorConfig->addOptimizationProfile(profile);

What am I still missing? My code is attached: NumPlateRecognition.cpp (27.4 KB)

The model is available at this link.

I am using JetPack 4.4.

Hi @edit_or
Please refer to the link below:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-713/developer-guide/index.html#opt_profiles
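For reference, the pattern described in that section looks roughly like this (a minimal sketch; the input name, the example shapes, and the builder/config/context variables are placeholders to adapt to your sample):

    // Build time: describe the range of input shapes the engine must support
    auto profile = builder->createOptimizationProfile();
    profile->setDimensions("input", OptProfileSelector::kMIN, Dims4{1, 1, 1, 1});
    profile->setDimensions("input", OptProfileSelector::kOPT, Dims4{1, 1, 28, 28});
    profile->setDimensions("input", OptProfileSelector::kMAX, Dims4{1, 1, 56, 56});
    config->addOptimizationProfile(profile); // must be the config used to build this engine

    // Run time: select the profile, then set the actual input shape on the context
    context->setOptimizationProfile(0);
    context->setBindingDimensions(0, Dims4{1, 1, 28, 28});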

Thanks!

According to that document, I have to call
context.setOptimizationProfile(0)

I have two contexts:
SampleUniquePtr<nvinfer1::IExecutionContext> mPreprocessorContext{nullptr}, mPredictionContext{nullptr};

When I tried the following:

mPredictionContext.setOptimizationProfile(0)

Then I get a build error:

Recognition::SampleUniquePtr<nvinfer1::IExecutionContext> {aka class std::unique_ptr<nvinfer1::IExecutionContext, samplesCommon::InferDeleter>}' has no member named 'setOptimizationProfile'
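(Note: mPredictionContext is a SampleUniquePtr, i.e. a std::unique_ptr wrapper, which is why the compiler says the smart pointer itself has no such member. The context's method has to be reached through operator ->, for example:)

    mPredictionContext->setOptimizationProfile(0);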

Moreover, the original error comes from building the engine:

mPredictionEngine = makeUnique(builder->buildEngineWithConfig(*network, *config));

I added the optimization profile to the config:

    config->addOptimizationProfile(profileCalib);
    mPredictionEngine = makeUnique(builder->buildEngineWithConfig(*network, *config));

My network input shape is (?, 24, 94).

Now the error is

input:0: for dimension number 1 in profile 0 does not match network definition (got min=1, opt=1, max=1), expected min=opt=max=24).

The problem is solved. I added the optimization profile before building the engine:

    auto profile = builder->createOptimizationProfile();
    const int batchSize{1};
    profile->setDimensions(inputName, OptProfileSelector::kMIN, Dims4{batchSize, 24, 94, 3});
    profile->setDimensions(inputName, OptProfileSelector::kOPT, Dims4{batchSize, 24, 94, 3});
    profile->setDimensions(inputName, OptProfileSelector::kMAX, Dims4{batchSize, 24, 94, 3});
    config->addOptimizationProfile(profile);

The profile dimensions are set to match my input shape (batchSize, 24, 94, 3).
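With the engine built against that profile, the matching runtime side (a sketch following the sample's makeUnique helper, and assuming binding index 0 is the input and a batch size of 1) is to select the profile on the execution context and pin the concrete input shape before running inference:

    mPredictionContext = makeUnique(mPredictionEngine->createExecutionContext());
    // Select profile 0, then fix the actual input shape for this inference
    mPredictionContext->setOptimizationProfile(0);
    mPredictionContext->setBindingDimensions(0, Dims4{1, 24, 94, 3});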
