Does tensorrt support calibration on dynamic shape?


Calibration with dynamic shapes causes a CUDA error

log.txt (286.0 KB)


TensorRT Version: 8.0.16

Relevant Files

FaceDetector.onnx (1.7 MB)

Steps To Reproduce

I define a calibrator and implement the get_batch method, where the shape of self.buffer may differ between calibration batches.

    def get_batch(self, *args, **kwargs):
        if self.count < len(self.images):
            for i in range(self.batch_size):
                idx = self.count % len(self.images)  # roll around if not a multiple of the dataset size
                cv_im = cv2.imread(self.image_path + self.images[idx])
                self.buffer = self.preprocess(cv_im)  # shape may differ between batches
                self.count += 1
            # get_batch must return a list of device pointers, one per input
            # (assuming self.buffer is a device allocation)
            return [int(self.buffer)]
        # returning None tells TensorRT the calibration data is exhausted
        return None

Does tensorrt support calibration on dynamic shape? Does calibration with different shape help the accuracy?
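The roll-around indexing used in get_batch can be sketched as standalone logic, with no TensorRT or OpenCV dependency (hypothetical function name, for illustration only):

```python
def iter_batches(images, batch_size, num_batches):
    """Yield lists of dataset indices, rolling around when the dataset
    size is not a multiple of batch_size (mirrors the get_batch loop)."""
    count = 0
    for _ in range(num_batches):
        batch = []
        for _ in range(batch_size):
            batch.append(count % len(images))  # roll around
            count += 1
        yield batch

# e.g. 5 images, batch size 2: the third batch wraps back to index 0
print(list(iter_batches(list(range(5)), 2, 3)))  # → [[0, 1], [2, 3], [4, 0]]
```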

Hi @OnePieceOfDeepLearning,
Please refer to the link below in case it helps:


Good, I will try next week.

Note: If the calibration optimization profile is not set, the first network optimization profile is used as the calibration optimization profile.
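Setting an explicit calibration profile can be sketched with the TensorRT Python API roughly as below; the input name "input", the shapes, and `my_calibrator` are placeholders, not taken from the model in this thread:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
# ... populate the network, e.g. by parsing the ONNX model ...

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = my_calibrator  # hypothetical calibrator instance

# Runtime profile with a dynamic input range (placeholder shapes)
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 240, 320), (1, 3, 480, 640),
                  (1, 3, 960, 1280))
config.add_optimization_profile(profile)

# Pin calibration to one fixed shape; without this, the first
# optimization profile above is used for calibration.
calib_profile = builder.create_optimization_profile()
calib_profile.set_shape("input", (1, 3, 480, 640), (1, 3, 480, 640),
                        (1, 3, 480, 640))
config.set_calibration_profile(calib_profile)
```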

So is it really necessary to set a calibration profile?

I found that it may cause a segmentation fault with another backbone (ResNet50) when using dynamic-shape calibration.

There are 3000 images in total for calibration and the batch size is set to 1; the segmentation fault occurs after it has processed 2886 images.

log.txt (5.4 KB)

The link below might help with your query. Kindly check it for all supported 3D layers:


I don’t understand what you mean.

I found that it may crash at certain input shapes.

By the way, is this equivalent to post-training quantization (Basic Functionalities — pytorch-quantization master documentation)?


Yes, post-training quantization is what we call calibration. Are you still facing the above segmentation fault? If yes, please share repro steps with us so we can try it on our end and help better.

Thank you.

I am preparing the code to reproduce it now.


Could you please share the issue repro?

Thank you.