Network has dynamic or shape inputs but no optimization profiles have been defined

Description

I have been stuck on this error for the past 3 days. It looks minor, but I am still unable to figure it out. Could anyone kindly help me get past this?

I have trained my model on CIFAR10 in TensorFlow & then exported it to ONNX. Do I need to play around with some dynamic shapes while exporting? Also, I have exported the whole “.pb”; I haven’t frozen any graph or checkpoint. Is that fine? If freezing a graph or something similar is required, kindly shed some light on that (with links).
Also, I have attached the Netron output; do let me know if it’s correct.

Also, I am referring to “sample_dynamic_reshape.cpp”.

What are these formats for the images & how do I pass my CIFAR10 images in such a format? My CIFAR10 images come in batches in a binary file when downloaded. How can I feed them in here, i.e. in which format & how many images?
“train-images-idx3-ubyte”, “train-labels-idx1-ubyte”

Is it necessary to pass images as PGM / PPM? Aren’t there other ways to pass an image?

If yes, what are the ways?
If no, then how do I convert each of my images into this format? I have my CIFAR10 images as NumPy arrays.

Here’s the GitHub link for my code: GitHub - yashraj02/Tensor-RT

Tensorflow- 2.2.0
onnx-1.7.0
tf2onnx-1.6.2

Using TensorRT Container Image (20.06) Latest
CUDA <<11.0.167>>
<<TensorRT 7.1.2>>
Method: TensorRT C++ API for inference
Which samples (from the TensorRT C++ API) should be used for my task?

PS:
Also, a suggestion if anyone from the TensorRT team is reading. Kindly add numbering (1, 2, …) & sub-numbering [a, b, …] to the sections of the TensorRT GitHub Readme. It’s a nightmare for a beginner like me to get started. Certain sections are optional and certain ones are important, and I can’t differentiate them easily. It’s just a suggestion.


Environment

TensorRT Version: 7.1.3
GPU Type: Tesla V100
Nvidia Driver Version: 450
CUDA Version: 8.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.2
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


When using runtime dimensions, you must create at least one optimization profile at build time. Please refer to the link below:

https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleDynamicReshape/sampleDynamicReshape.cpp#L153

For ONNX model generation from a saved model, checkpoint, or GraphDef format, please refer to the link below:

Supported data formats in TRT:

For pre-processing of the input image in additional formats, please refer to the link below; examples are provided for streaming from a live camera feed and for processing images.

Thanks


I guess you haven’t referred to my code from the GitHub link provided above.
I have used the same example as sample_dynamic_reshape.
Hence,
“auto profile = builder->createOptimizationProfile();”
is already in my code.

Please understand my concern correctly!
You are talking about int8, fp16, etc., while I am asking about image formats, i.e. .png, .jpg, etc.

Which formats are accepted (jpg, png, etc.) & how do I pass my CIFAR10 images in such a format to the ONNX model (any working example or GitHub link)?
My CIFAR10 images are available in batches (batch1, batch2, …) in a binary file when downloaded. How can I feed them in here, i.e. in which format & how many images?
https://www.cs.toronto.edu/~kriz/cifar.html

Hi @yashkhokarale,

The input name used in your code is incorrect (similarly, the output tensor name needs to be updated).

As per your model it should be “conv2d_input:0”

I ran your ONNX model using the trtexec command-line tool and I am able to successfully generate the TRT engine file:
trtexec --onnx=cifar.onnx --explicitBatch --minShapes=conv2d_input:0:1x32x32x3 --optShapes=conv2d_input:0:16x32x32x3 --maxShapes=conv2d_input:0:32x32x32x3 --shapes=conv2d_input:0:5x32x32x3 --verbose
[07/13/2020-12:59:32] [I] min: 0.0390015 ms
[07/13/2020-12:59:32] [I] max: 0.0667419 ms
[07/13/2020-12:59:32] [I] mean: 0.0411783 ms
[07/13/2020-12:59:32] [I] median: 0.0410156 ms
[07/13/2020-12:59:32] [I] percentile: 0.0432129 ms at 99%
[07/13/2020-12:59:32] [I] total compute time: 2.42643 s
&&&& PASSED TensorRT.trtexec # trtexec --onnx=cifar.onnx --explicitBatch --minShapes=conv2d_input:0:1x32x32x3 --optShapes=conv2d_input:0:16x32x32x3 --maxShapes=conv2d_input:0:32x32x32x3 --shapes=conv2d_input:0:5x32x32x3 --verbose

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#example-4-running-an-onnx-model-with-full-dimensions-and-dynamic-shapes

You can refer to this link; it has multiple examples of formatting the input image, including jpg and camera input. Along similar lines you can perform your input image pre-processing.
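Regarding feeding CIFAR10 from its binary batch files: each record in a data_batch file is one label byte followed by 3072 pixel bytes stored plane-wise (1024 R, then 1024 G, then 1024 B, each plane row-major 32x32). Since your model’s input is Nx32x32x3 (i.e. NHWC, matching the trtexec shapes above), the planes need to be interleaved and the bytes scaled to float. Here is a minimal sketch; `CifarImage` and `decodeRecord` are hypothetical helper names, not part of any TensorRT sample:

```cpp
#include <cstdint>
#include <vector>

// CIFAR-10 binary layout: each record is 1 label byte followed by
// 3072 pixel bytes stored plane-wise (1024 R, 1024 G, 1024 B),
// each plane row-major 32x32.
constexpr int kH = 32, kW = 32, kC = 3;
constexpr int kRecordBytes = 1 + kH * kW * kC;

struct CifarImage
{
    uint8_t label;
    std::vector<float> nhwc; // 32x32x3, normalized to [0, 1]
};

// Decode one raw CIFAR-10 record into the HWC float layout that a
// Nx32x32x3 input tensor (e.g. "conv2d_input:0") expects.
CifarImage decodeRecord(const uint8_t* record)
{
    CifarImage img;
    img.label = record[0];
    img.nhwc.resize(kH * kW * kC);
    const uint8_t* pixels = record + 1;
    for (int c = 0; c < kC; ++c)
        for (int h = 0; h < kH; ++h)
            for (int w = 0; w < kW; ++w)
                // Source index is plane-major (CHW); destination is HWC.
                img.nhwc[(h * kW + w) * kC + c]
                    = pixels[c * kH * kW + h * kW + w] / 255.0f;
    return img;
}
```

Each data_batch_*.bin file holds 10000 such records back-to-back, so you can read them with std::ifstream and copy several decoded images consecutively into one host buffer to form a batch.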

Thanks


Thanks for your valuable feedback. I am able to run the model using trtexec, but the issue persists when I build with make & run the binary as ./sample_cifar.
I have changed the input & output names as suggested by you.

Hi,

You can’t directly port the same code to run your model.
You were getting the error because buildPredictionEngine doesn’t set any optimization profile, and the buildPreprocessorEngine code adds an additional input to create a dynamic-input case in the sample code.

In order to support your model, you can try removing the buildPreprocessorEngine-related code and updating the buildPredictionEngine function similar to the code below to generate a TRT model.

    const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = makeUnique(builder->createNetworkV2(explicitBatch));
    auto parser = nvonnxparser::createParser(*network, gLogger.getTRTLogger());
    bool parsingSuccess = parser->parseFromFile(
        locateFile(mParams.onnxFileName, mParams.dataDirs).c_str(), static_cast<int>(gLogger.getReportableSeverity()));
    if (!parsingSuccess)
    {
        throw std::runtime_error{"Failed to parse model"};
    }

    mPredictionInputDims = network->getInput(0)->getDimensions();
    mPredictionOutputDims = network->getOutput(0)->getDimensions();

    // Create a builder config.
    auto predictionConfig = makeUnique(builder->createBuilderConfig());

    // Create an optimization profile so that we can specify a range of input dimensions.
    auto profile = builder->createOptimizationProfile();

    profile->setDimensions("conv2d_input:0", OptProfileSelector::kMIN, Dims4{1, 32, 32, 3});
    gLogInfo << "Passed min" << std::endl;
    profile->setDimensions("conv2d_input:0", OptProfileSelector::kOPT, Dims4{16, 32, 32, 3});
    gLogInfo << "Passed opt" << std::endl;
    profile->setDimensions("conv2d_input:0", OptProfileSelector::kMAX, Dims4{32, 32, 32, 3});
    gLogInfo << "Passed max" << std::endl;
    predictionConfig->addOptimizationProfile(profile);

    predictionConfig->setMaxWorkspaceSize(100_MiB);
    // Build the prediction engine.
    mPredictionEngine = makeUnique(builder->buildEngineWithConfig(*network, *predictionConfig));
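One more note for inference with this engine: because the input dimensions are dynamic, the buffer sizes are no longer fixed at build time. Before calling enqueueV2 you have to tell the execution context the actual shape with context->setBindingDimensions(0, Dims4{batchSize, 32, 32, 3}), where batchSize must lie within the profile’s kMIN/kMAX range (1 to 32 here), and allocate your buffers for that runtime shape. A minimal, TensorRT-free sketch of the size arithmetic; `volume` and `inputBytes` are hypothetical helper names:

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// With dynamic shapes, buffer sizes depend on the dimensions chosen at
// runtime. A plain helper like this computes how many bytes to allocate
// (e.g. via cudaMalloc) once the actual batch size is known.
std::size_t volume(const std::vector<int>& dims)
{
    return std::accumulate(dims.begin(), dims.end(), std::size_t{1},
        [](std::size_t acc, int d) { return acc * static_cast<std::size_t>(d); });
}

std::size_t inputBytes(int batchSize)
{
    // Runtime shape of the input tensor: batchSize x 32 x 32 x 3 floats.
    return volume({batchSize, 32, 32, 3}) * sizeof(float);
}
```

The output buffer is sized the same way from the runtime output dimensions, which you can query from the context after setting the input shape.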

Please refer to below documentation as well for more details

Thanks
