Hi, I am trying to run INT8 inference with TensorRT for Faster R-CNN. I followed the SSD sample (sampleSSD) in the TensorRT 5 samples. INT8 calibration works for SSD, but it fails for Faster R-CNN.
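For context, my calibrator follows the sampleSSD pattern. Below is a simplified sketch of that setup, not my exact code: the batch loader (nextCalibrationBatch), the cache file name, and the single-input binding are placeholders.

#include <cuda_runtime_api.h>
#include <NvInfer.h>

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// INT8 entropy calibrator, adapted from the sampleSSD pattern.
class Int8EntropyCalibrator2 : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    Int8EntropyCalibrator2(int batchSize, size_t inputVolume, const std::string& cacheFile)
        : mBatchSize(batchSize), mInputCount(batchSize * inputVolume), mCacheFile(cacheFile)
    {
        cudaMalloc(&mDeviceInput, mInputCount * sizeof(float));
    }

    ~Int8EntropyCalibrator2() { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* /*names*/[], int /*nbBindings*/) override
    {
        std::vector<float> batch(mInputCount);
        if (!nextCalibrationBatch(batch))   // placeholder: real preprocessing/data loader goes here
            return false;                   // no more batches -> calibration finishes
        cudaMemcpy(mDeviceInput, batch.data(), mInputCount * sizeof(float), cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;         // single input blob assumed in this sketch
        return true;
    }

    const void* readCalibrationCache(size_t& length) override
    {
        mCache.clear();
        std::ifstream input(mCacheFile, std::ios::binary);
        input >> std::noskipws;
        if (input.good())
            std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
                      std::back_inserter(mCache));
        length = mCache.size();
        return length ? mCache.data() : nullptr;
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        std::ofstream output(mCacheFile, std::ios::binary);
        output.write(reinterpret_cast<const char*>(cache), length);
    }

private:
    // Placeholder: fill `batch` with the next preprocessed calibration images,
    // return false when the calibration set is exhausted.
    bool nextCalibrationBatch(std::vector<float>&) { return false; }

    int mBatchSize;
    size_t mInputCount;
    std::string mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};

The engine is then built with builder->setInt8Mode(true) and builder->setInt8Calibrator(&calibrator) before buildCudaEngine(), the same way sampleSSD does it.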
I am getting the following error:
&&&& RUNNING TensorRT.sample_pva # ./sample_ssd --int8
[I] Begin parsing model…
[I] INT8 mode running…
[I] End parsing model…
[I] Using Entropy Calibrator 2
[I] Begin building engine…
[W] [TRT] TensorRT was compiled against cuDNN 7.5.0 but is linked against cuDNN 7.1.4
[W] [TRT] TensorRT was compiled against cuDNN 7.5.0 but is linked against cuDNN 7.1.4
[W] [TRT] TensorRT was compiled against cuDNN 7.5.0 but is linked against cuDNN 7.1.4
[E] [TRT] engine.cpp (572) - Cuda Error in commonEmitTensor: 11 (invalid argument)
[E] [TRT] Failure while trying to emit debug blob.
engine.cpp (572) - Cuda Error in commonEmitTensor: 11 (invalid argument)
[E] [TRT] engine.cpp (572) - Cuda Error in commonEmitTensor: 11 (invalid argument)
[E] [TRT] Failure while trying to emit debug blob.
engine.cpp (572) - Cuda Error in commonEmitTensor: 11 (invalid argument)
[E] [TRT] cuda/customWinogradConvActLayer.cpp (342) - Cuda Error in execute: 11 (invalid argument)
[E] [TRT] cuda/customWinogradConvActLayer.cpp (342) - Cuda Error in execute: 11 (invalid argument)
Cuda failure: 77
Aborted (core dumped)
Kindly help me resolve this.
Thanks
Sambasivarao