Int8 calibrator issue

Hi.

I have a customized YOLOv3 model built in Darknet and converted to a caffemodel. I can run this converted model in FP32 and FP16 successfully. I'm now trying to use INT8, and I followed the sample code in "sampleINT8" very closely. However, I get this error:

[E] [TRT] C:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.1\engine\engine.cpp (572) - Cuda Error in nvinfer1::rt::ExecutionContext::commonEmitTensor: 11 (invalid argument)
[E] [TRT] Failure while trying to emit debug blob.

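In case it helps, this is roughly how the calibrator is wired into my builder code (paraphrased from sampleINT8; the class and variable names below are simplified placeholders, and the batch loading is stubbed out):

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Entropy calibrator modeled on sampleINT8. Names are placeholders and the
// batch loading is a stub; the real code reads preprocessed images.
class Int8Calibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    Int8Calibrator(int batchSize, size_t inputVolume, const std::string& cacheFile)
        : mBatchSize(batchSize), mInputCount(batchSize * inputVolume), mCacheFile(cacheFile)
    {
        cudaMalloc(&mDeviceInput, mInputCount * sizeof(float));
    }
    ~Int8Calibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        // Placeholder: fill hostBatch with the next preprocessed calibration
        // batch and return false once all calibration images are consumed.
        std::vector<float> hostBatch(mInputCount);
        if (!nextBatch(hostBatch.data()))
            return false;
        cudaMemcpy(mDeviceInput, hostBatch.data(), mInputCount * sizeof(float),
                   cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput; // single input binding
        return true;
    }

    const void* readCalibrationCache(size_t& length) override
    {
        mCache.clear();
        std::ifstream in(mCacheFile, std::ios::binary);
        if (in)
            mCache.assign(std::istreambuf_iterator<char>(in),
                          std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    // Stub: the real code loads and preprocesses the calibration images.
    bool nextBatch(float* /*dst*/) { return false; }

    int mBatchSize;
    size_t mInputCount;
    std::string mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};

// Builder side, as in sampleINT8:
//   builder->setInt8Mode(true);
//   builder->setInt8Calibrator(&calibrator);
```
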
I suspect it is because some of my layers come from the plugin library (upsample, etc.).
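
If the plugin layers are indeed the problem, would forcing them to stay in FP32 be the right approach? Something along these lines is what I have in mind (the helper name is just a placeholder; builder and network are the usual objects from parsing, and I haven't verified this fixes anything):

```cpp
#include "NvInfer.h"

// Placeholder helper (not from sampleINT8): keep plugin layers in FP32 so
// only the built-in layers get INT8 kernels. Called after parsing the
// network and enabling INT8 mode, before building the engine.
void keepPluginsInFp32(nvinfer1::IBuilder* builder, nvinfer1::INetworkDefinition* network)
{
    builder->setStrictTypeConstraints(true);
    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        if (layer->getType() == nvinfer1::LayerType::kPLUGIN
            || layer->getType() == nvinfer1::LayerType::kPLUGIN_V2)
        {
            layer->setPrecision(nvinfer1::DataType::kFLOAT);
            for (int j = 0; j < layer->getNbOutputs(); ++j)
                layer->setOutputType(j, nvinfer1::DataType::kFLOAT);
        }
    }
}
```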

Does anyone have an idea what I can do to resolve this?

Thanks.

Colin