Platform: Jetson Xavier with JetPack 4.1.1, TensorRT 5.0
Hello,
I have built the engine in FP32 and FP16 format and it works well, but as soon as I try to create the engine in INT8 format, the following problem occurs while it performs INT8 calibration (my INT8 build setup is sketched below the error log):
ERROR:engine.cpp(404) - Cuda Error in commonEmitTensor: 11
ERROR:Failure while trying to emit debug blob
ERROR: Calibration failure occured with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
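For reference, this is roughly how I switch the builder into INT8 mode (a minimal sketch, assuming the standard IBuilder INT8 API in TensorRT 5; buildInt8Engine and the batch/workspace values are simplified stand-ins for my actual code):

```cpp
// Minimal sketch of my INT8 build path (names and sizes are simplified
// stand-ins for my actual code).
#include "NvInfer.h"

nvinfer1::ICudaEngine* buildInt8Engine(nvinfer1::IBuilder* builder,
                                       nvinfer1::INetworkDefinition* network,
                                       nvinfer1::IInt8Calibrator* calibrator)
{
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);

    // The FP32 and FP16 engines build fine; the failure only appears once
    // these two calls switch the builder into INT8 calibration mode.
    builder->setInt8Mode(true);
    builder->setInt8Calibrator(calibrator);

    return builder->buildCudaEngine(*network);
}
```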
My network has some custom layers, which implement the nvcaffeparser1::IPluginFactoryExt and nvinfer1::IPluginFactory classes and are parsed with the Caffe parser. When I change the network input to the ssd.prototxt that sampleSSD provides, everything also works well, so I suspect that these plugin layers do not support the INT8 format (see the sketch below). If so, how should I modify the code of these plugin layers?
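This is roughly how the plugins currently report supported formats (a minimal sketch, assuming the plugin classes derive from nvinfer1::IPluginExt as the factory setup suggests; MyPlugin is a placeholder for my actual custom layers):

```cpp
// Minimal sketch; MyPlugin is a placeholder for my actual custom layers,
// assuming they derive from nvinfer1::IPluginExt.
#include "NvInfer.h"

class MyPlugin : public nvinfer1::IPluginExt
{
public:
    // Only FP32/FP16 in NCHW are accepted here; there is no INT8 path,
    // which is what makes me suspect the plugins block INT8 calibration.
    bool supportsFormat(nvinfer1::DataType type,
                        nvinfer1::PluginFormat format) const override
    {
        return (type == nvinfer1::DataType::kFLOAT
                || type == nvinfer1::DataType::kHALF)
            && format == nvinfer1::PluginFormat::kNCHW;
    }

    // ... the remaining IPlugin/IPluginExt overrides (getNbOutputs,
    // getOutputDimensions, configureWithFormat, enqueue, serialize, etc.)
    // are unchanged and omitted here.
};
```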
Thanks in advance for your reply.