Dear @spolisetty,
How are you doing? Do you have any updates regarding the current issue?
In the meantime, I'd like to ask you for some clarification.
I've gone back to the sampleOnnxMNIST.cpp code.
Since our model needs dynamic input shapes, I added the following code to SampleOnnxMNIST::constructNetwork():
Case A:
// Create an optimization profile so that we can specify a range of input dimensions.
auto profile = builder->createOptimizationProfile();
profile->setDimensions("input_1", OptProfileSelector::kMIN, Dims4{1, 64, 64, 3});
profile->setDimensions("input_1", OptProfileSelector::kOPT, Dims4{20, 64, 64, 3});
profile->setDimensions("input_1", OptProfileSelector::kMAX, Dims4{100, 64, 64, 3});
config->addOptimizationProfile(profile);
Case B:
// Create an optimization profile so that we can specify a range of input dimensions.
auto profile = builder->createOptimizationProfile();
profile->setDimensions("input_1", OptProfileSelector::kMIN, Dims4{1, 64, 64, 3});
profile->setDimensions("input_1", OptProfileSelector::kOPT, Dims4{1, 64, 64, 3});
profile->setDimensions("input_1", OptProfileSelector::kMAX, Dims4{1, 64, 64, 3});
config->addOptimizationProfile(profile);
In both case (A) and case (B) I get a runtime error:
****************** infer() *******************
terminate called after throwing an instance of ‘std::bad_alloc’
what(): std::bad_alloc
The error is thrown when we try to get buffers from the engine:
// Create RAII buffer manager object
samplesCommon::BufferManager buffers(mEngine, mParams.batchSize = 1);
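For context, here is a minimal sketch of how I currently understand buffer sizing with dynamic shapes (the binding index 0 and the 1x64x64x3 shape below are my assumptions, not verified): with a dynamic profile the engine reports -1 on the dynamic axis, so concrete dimensions would have to be fixed on the execution context before buffers are allocated.

```cpp
// Sketch only, assuming a TensorRT 7.x-style API and that "input_1" is
// binding index 0. With dynamic shapes, engine->getBindingDimensions()
// contains -1, so concrete sizes must come from the execution context.
auto context = mEngine->createExecutionContext();

// Pin the actual input shape for this inference call (hypothetical shape).
context->setBindingDimensions(0, nvinfer1::Dims4{1, 64, 64, 3});

// Query concrete dims from the context, not the engine, when sizing buffers.
nvinfer1::Dims inputDims = context->getBindingDimensions(0);
```

If that understanding is wrong, please correct me.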
However, when I build the engine with the trtexec tool:
sudo ./trtexec --verbose --onnx=/usr/src/tensorrt/data/mnist/apm_one_input.onnx --explicitBatch=1 --dumpProfile --int8 --shapes=input_1:1x64x64x3,input_1:20x64x64x3,input_1:100x64x64x3 --saveEngine=engine.trt
and then load the engine file in my app:
IRuntime* runtime = createInferRuntime(gLogger);
ICudaEngine *engine = runtime->deserializeCudaEngine(…)
no errors occur.
Could you please explain why trtexec accepts these shapes, while setting them via an optimization profile in the app causes a runtime error?
Thank you for your support,