Core dump when using fasterRCNN Model

Hi,
There is a core dump error here:
Begin parsing model…
End parsing model…
[INFO] Setting Per Tensor Dynamic Range
Begin building engine…
WARNING: Int8 mode specified but no calibrator specified. Please ensure that you supply Int8 scales for the network
layers manually.
ERROR: …/builder/cudnnBuilderUtils.cpp (255) - Cuda Error in findFastestTactic: 77
ERROR: runtime.cpp (30) - Cuda Error in free: 77
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)

This happens after I edited sample/sampleFasterRCNN.cpp to use INT8 instead of FP32. Instead of calibration, I set the dynamic ranges manually with network->getInput(i)->setDynamicRange(-37.5, 50.0); and network->getLayer(i)->getOutput(j)->setDynamicRange(-37.5, 50.0);
Please help find the root cause so I can finish this testing, thanks!
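For reference, here is a minimal sketch of what the modification above looks like, loosely following the pattern in NVIDIA's sampleINT8API. The helper name and the single shared range are assumptions for illustration; the -37.5/50.0 values simply mirror the post, and in practice each tensor needs its own range taken from the training framework or a calibration run. It will not compile without the TensorRT 5.x headers and libraries.

```cpp
#include "NvInfer.h"

// Hypothetical helper: assign a per-tensor dynamic range to every network
// input and every layer output, so the builder can run INT8 without a
// calibrator. Using one range for all tensors (as in the post) is only a
// smoke test; accuracy requires real per-tensor ranges.
void setAllTensorDynamicRanges(nvinfer1::INetworkDefinition* network,
                               float rangeMin, float rangeMax)
{
    for (int i = 0; i < network->getNbInputs(); ++i)
        network->getInput(i)->setDynamicRange(rangeMin, rangeMax);

    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        for (int j = 0; j < layer->getNbOutputs(); ++j)
            layer->getOutput(j)->setDynamicRange(rangeMin, rangeMax);
    }
}

// On the builder side (TensorRT 5.x API), INT8 is enabled without a
// calibrator, which produces the "no calibrator specified" warning seen
// in the log above:
//   builder->setInt8Mode(true);
//   builder->setInt8Calibrator(nullptr);
```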

I met the same problem.

I tried doing calibration in the fasterRCNN sample and still hit this issue. There is a sample on GitHub, https://github.com/Beastmaster/faster-RCNN-trtINT8, which does calibration in fasterRCNN and has the same issue too.

I also have the same issue.

Got the same issue here. Anyone knows how to fix it?

It’s a known issue for TRT 5.0 GA and will be fixed in the next version.

Dear Nfeng,
Is this issue fixed in v5.1.2.2? It might still remain. Could you check this?