Assertion `engine->getNbBindings() == 2' failed after doing INT8 calibration

I have some models running well on TX2 devices, and I want to move them to an NX device. I did INT8 calibration for my models from ONNX. When I load the INT8 models on my NX device, I get an error at assert(engine->getNbBindings() == 2). Copying a few lines of code below.

// Deserialize the engine from the serialized model blob
engine = runtime->deserializeCudaEngine(modelStream, size);
delete[] modelStream;
assert(engine != nullptr);
context = engine->createExecutionContext();
assert(context != nullptr);
// This assertion fails for the INT8 engine
assert(engine->getNbBindings() == 2);
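One way to see why the assertion fires is to print every binding before the check. Below is a sketch against the TensorRT 7 binding API your snippet already uses (getNbBindings and friends; these are deprecated in newer releases in favor of getNbIOTensors). It only reads engine metadata, so it is safe to call right after deserialization:

```cpp
#include <iostream>
#include "NvInfer.h"

// Dump every binding's index, direction, name, and dimensions.
// Helps reveal whether the engine has extra (unexpected) outputs.
void dumpBindings(const nvinfer1::ICudaEngine* engine)
{
    const int n = engine->getNbBindings();
    std::cout << "Engine has " << n << " bindings:\n";
    for (int i = 0; i < n; ++i)
    {
        const nvinfer1::Dims dims = engine->getBindingDimensions(i);
        std::cout << "  [" << i << "] "
                  << (engine->bindingIsInput(i) ? "input  " : "output ")
                  << engine->getBindingName(i) << "  dims=(";
        for (int d = 0; d < dims.nbDims; ++d)
            std::cout << dims.d[d] << (d + 1 < dims.nbDims ? "," : "");
        std::cout << ")\n";
    }
}
```

Calling dumpBindings(engine) just before the assertion will show which tensors the engine actually exposes.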

What may cause this kind of issue? How should I debug it? Are any additional code changes needed for the new INT8 models?

Thanks
Harry

Hi,

The error indicates that the number of bindings doesn’t equal 2.
The bindings correspond to the tensors marked as input and output.

For a classification model, one image input and one softmax output are expected.
So the sample checks that the binding count is 2.

However, this is a model-dependent check.
If your model has more outputs, please modify the assertion to the corresponding number.

Thanks.

Looks like it is an engine model issue. At first, I used yolov5’s export.py tool to convert my YOLOv5 PyTorch model to ONNX format, then used GitHub - qq995431104/Pytorch2TensorRT: CUDA10.0, CUDNN7.5.0, TensorRT7.0.0.11 to do INT8 calibration and generate the engine file. I always get the “assert(engine->getNbBindings() == 2)” error with this engine file. But I had no issue generating another GoogleNet model with this tool.

Then I used GitHub - wang-xinyu/tensorrtx: Implementation of popular deep learning networks with TensorRT network definition API to do INT8 calibration and generate an engine file for my YOLOv5 model. This engine file works.

I don’t know why this happens.

Hi,

GoogleNet is a classifier with one data input and one prob output.
So the expected binding number is two, which meets the condition.

However, for YOLOv5, the output needs a customized parser.
It is implemented as a plugin library in the second link you shared.

Thanks.