TensorRT INT8 NMS

I’m trying to convert PyTorch --> ONNX --> TensorRT, and the conversion runs successfully. But in INT8 mode I get the errors below. I found the error is caused by keep = nms(boxes_for_nms, scores, iou_threshold). Could you help me fix it? Thanks a lot.

[03/01/2023-19:51:41] [TRT] [E] 2: [helpers.h::divUp::70] Error Code 2: Internal Error (Assertion n > 0 failed. )
[03/01/2023-19:51:41] [TRT] [E] 3: [engine.cpp::~Engine::306] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/engine.cpp::~Engine::306, condition: mObjectCounter.use_count() == 1. Destroying an engine object before destroying objects it created leads to undefined behavior.
)
[03/01/2023-19:51:41] [TRT] [E] 2: [calibrator.cpp::calibrateEngine::1181] Error Code 2: Internal Error (Assertion context->executeV2(&bindings[0]) failed. )
[03/01/2023-19:51:41] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

The environment is:

jetpack 5.1
torch 1.13.0
torchvision 0.15.0
tensorrt 8.5.3.1

And the INT8 calibrator is:

import numpy as np
import torch
import tensorrt as trt
from glob import glob
from PIL import Image
from torchvision import transforms
from cuda import cudart


class MyCalibrator(trt.IInt8EntropyCalibrator2):

    def __init__(self, calibrationDataPath, nCalibration, inputShape, cacheFile):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.imageList = glob(calibrationDataPath + "*.jpg")[:100]
        self.nCalibration = nCalibration
        self.shape = inputShape  # (N, C, H, W)
        self.bufferSize = trt.volume(inputShape) * trt.float32.itemsize
        self.cacheFile = cacheFile
        _, self.dIn = cudart.cudaMalloc(self.bufferSize)  # device buffer for one calibration batch
        self.oneBatch = self.batchGenerator()

        print(int(self.dIn))

    def __del__(self):
        cudart.cudaFree(self.dIn)

    def batchGenerator(self):
        # Yields nCalibration batches of shape[0] randomly chosen images
        for i in range(self.nCalibration):
            print("> calibration %d" % i)
            subImageList = np.random.choice(self.imageList, self.shape[0], replace=False)
            yield np.ascontiguousarray(self.loadImageList(subImageList))

    def loadImageList(self, imageList):
        # transform = transforms.Compose([transforms.ToTensor()])
        transform = transforms.Compose([transforms.Resize([800, 800]), transforms.ToTensor()])
        imgs = []
        for i in range(self.shape[0]):
            original_img = Image.open(imageList[i]).convert('RGB')
            # original_img.show()
            trans_img = transform(original_img)
            imgs.append(trans_img)
        batch_img = torch.stack(imgs, dim=0)
        data = batch_img.numpy()
        return data
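
For completeness, an INT8 calibrator also has to implement the entry points TensorRT calls during calibration (get_batch_size, get_batch, read_calibration_cache, write_calibration_cache). A minimal sketch of those methods for MyCalibrator, assuming a single input tensor, would be:

    def get_batch_size(self):
        # Batch size the builder sees during calibration
        return self.shape[0]

    def get_batch(self, names):
        # Return a list with one device pointer per network input, or None when done
        try:
            data = next(self.oneBatch)
        except StopIteration:
            return None
        cudart.cudaMemcpy(self.dIn, data.ctypes.data, self.bufferSize,
                          cudart.cudaMemcpyKind.cudaMemcpyHostToDevice)
        return [int(self.dIn)]

    def read_calibration_cache(self):
        # Reuse a previous calibration cache if one exists
        try:
            with open(self.cacheFile, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cacheFile, "wb") as f:
            f.write(cache)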

Hi,

Could you check if your model can work with our trtexec binary first?

$ /usr/src/tensorrt/bin/trtexec --onnx=<file> --int8
$ /usr/src/tensorrt/bin/trtexec --loadEngine=<file> --int8

Thanks.

Yes, it works with trtexec.

I would like to know why the conversion works with trtexec but not with my own Python program. Also, if I use trtexec for INT8 calibration, how can I calibrate with my own dataset?
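
For reference, the Python build path I'm comparing against trtexec looks roughly like the sketch below (the file names, input shape, and calibrator arguments are placeholders, not my exact script):

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = MyCalibrator("./calib/", 10, (1, 3, 800, 800), "calib.cache")

# build_serialized_network runs INT8 calibration via the calibrator above
engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise RuntimeError("engine build failed")
with open("model.plan", "wb") as f:
    f.write(engine_bytes)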

Hi,

Could you check if your model can work with the python sample below?
https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_PLAN_model

A TensorRT calibration tutorial can be found below:

Thanks.

Unfortunately, my model doesn’t work with it. It fails with this error:

[E] 2: [helpers.h::divUp::70] Error Code 2: Internal Error (Assertion n > 0 failed. )

I just want to convert a PyTorch Mask R-CNN to TensorRT with INT8, but I keep running into all kinds of errors. Could you give me a full example of how to do it?

Hi,

Could you share the complete output log with us?
Since your model works with trtexec, it’s expected to work with the eLinux sample above as well.

Thanks.

I have the same question

Hi,
How can I get the output log?

Hi,

The output log from the console.

Thanks.

I don’t have this problem with NVIDIA/TensorRT/blob/release/8.5/tools/Polygraphy/examples/api/04_int8_calibration_in_tensorrt/README.md.

I only have this problem when I use the INT8 calibrator with multi-input models.
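
For anyone hitting the same multi-input case: in that Polygraphy example the calibration data loader yields a dict keyed by input name, which is how several inputs can be fed per calibration batch. A minimal sketch (the input names and shapes below are placeholders):

import numpy as np
from polygraphy.backend.trt import Calibrator, CreateConfig, EngineFromNetwork, NetworkFromOnnxPath

def calib_data():
    # One dict per calibration batch; keys must match the ONNX input names
    for _ in range(10):
        yield {"images": np.random.rand(1, 3, 800, 800).astype(np.float32),
               "masks": np.random.rand(1, 1, 800, 800).astype(np.float32)}

calibrator = Calibrator(data_loader=calib_data(), cache="calib_multi.cache")
build_engine = EngineFromNetwork(NetworkFromOnnxPath("model.onnx"),
                                 config=CreateConfig(int8=True, calibrator=calibrator))
engine = build_engine()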
