TensorRT fails to build FasterRCNN GIE model when using INT8

The samples in sampleINT8 simply generate batch files in a data+label format, but a detection network's output contains both a classification label and bounding-box coordinates. So when using my dataset to generate the batch files, how should I write the labels into the batch file? Should the label contain the bounding box?
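
For reference, a minimal sketch of how one batch file could be written, assuming the layout that sampleINT8's BatchStream reads (a four-int N,C,H,W header, then N*C*H*W floats of image data, then N floats of labels); since INT8 calibration only consumes the input tensor, a placeholder label may be enough for a detection network:

#include <cstdio>
#include <vector>

// Hypothetical helper: writes one calibration batch file in the layout
// assumed above (dims header, image data, then per-image labels).
void writeBatchFile(const char* path, int n, int c, int h, int w,
                    const std::vector<float>& images) // n*c*h*w floats
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return;
    int dims[4] = {n, c, h, w};
    std::fwrite(dims, sizeof(int), 4, f);                        // dims header
    std::fwrite(images.data(), sizeof(float), images.size(), f); // image data
    std::vector<float> labels(n, 0.0f);                          // placeholder labels
    std::fwrite(labels.data(), sizeof(float), labels.size(), f); // label block
    std::fclose(f);
}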

I have reproduced the errors.

fasterrcnnint8: cudnnBuilderWeightConverters.cpp:118: float nvinfer1::builder::makeFullyConnectedInt8Weights(nvinfer1::FullyConnectedParameters&, const nvinfer1::cudnn::EngineTensor&, const nvinfer1::cudnn::EngineTensor&, nvinfer1::CpuMemoryGroup&, bool): Assertion `in.region->getDimensions() == in.extent' failed.

Any update?

Hi, I am using the VGG16_train prototxt and its corresponding caffemodel to generate batch files on the VOC2007 dataset, but I get the following error:

I0814 18:31:07.706250 3612 net.cpp:255] Network initialization done.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 548317115
I0814 18:31:08.013990 3612 net.cpp:744] Ignoring source layer drop6
I0814 18:31:08.028578 3612 net.cpp:744] Ignoring source layer drop7
I0814 18:31:08.028596 3612 net.cpp:744] Ignoring source layer fc7_drop7_0_split
I0814 18:31:08.028959 3612 net.cpp:744] Ignoring source layer loss_cls
I0814 18:31:08.028966 3612 net.cpp:744] Ignoring source layer loss_bbox
I0814 18:31:08.030992 3612 net.cpp:744] Ignoring source layer silence_rpn_cls_score
I0814 18:31:08.031002 3612 net.cpp:744] Ignoring source layer silence_rpn_bbox_pred
I0814 18:31:08.032707 3612 caffe.cpp:290] Running for 400 iterations.
F0814 18:31:08.818816 3612 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
@ 0x7fc11b9c45cd google::LogMessage::Fail()
@ 0x7fc11b9c6433 google::LogMessage::SendToLog()
@ 0x7fc11b9c415b google::LogMessage::Flush()
@ 0x7fc11b9c6e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fc11c160788 caffe::SyncedMemory::mutable_gpu_data()
@ 0x7fc11c118b32 caffe::Blob<>::mutable_gpu_data()
@ 0x7fc11c1a5520 caffe::ConvolutionLayer<>::Forward_gpu()
@ 0x7fc11c1695f1 caffe::Net<>::ForwardFromTo()
@ 0x7fc11c1696f7 caffe::Net<>::Forward()
@ 0x409337 test()
@ 0x4072e0 main
@ 0x7fc11a934830 __libc_start_main
@ 0x407b09 _start
@ (nil) (unknown)
Aborted (core dumped)

Could you give me some suggestions?

Hi, I am working with TensorRT 2.1 right now. I adopted the entropy INT8 calibrator and the batch stream from sampleINT8 (I generated the batch data myself so it matches the Faster RCNN input). However, when trying to build the CUDA engine, I get the following error:

Begin parsing model…
End parsing model…
Begin building engine…
locatefile in readcalibrationcache
sample_fasterRCNN_int8: sampleFasterRcnnINT8.cpp:108: std::__cxx11::string locateFile(const string&): Assertion `i != MAX_DEPTH && "Make sure the data is set properly. Check README.txt"' failed.
Aborted (core dumped)

This means the calibration table cannot be found, and I checked that no calibration table is generated after parsing the model. I then checked the parameters I set; they seem to be the same as in sampleINT8.
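
For reference, this is roughly how I expected the cache path to behave: a minimal sketch of just the cache hooks, written against the IInt8EntropyCalibrator interface used by sampleINT8 (the file name "CalibrationTable" and the getBatch stub are placeholders). Returning nullptr when no cache exists should make the builder run calibration and write a fresh table, rather than asserting inside locateFile:

#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

// Sketch: only the cache hooks matter here; getBatch() would feed the
// batch stream to the device as in sampleINT8.
class Int8Calibrator : public nvinfer1::IInt8EntropyCalibrator
{
public:
    int getBatchSize() const override { return 1; } // real code: batch stream size
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        return false; // stub; real code copies the next calibration batch
    }

    const void* readCalibrationCache(size_t& length) override
    {
        std::ifstream in("CalibrationTable", std::ios::binary); // assumed file name
        if (!in)
        {
            length = 0;
            return nullptr; // no cache yet: tells the builder to calibrate
        }
        mCache.assign(std::istreambuf_iterator<char>(in),
                      std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        std::ofstream out("CalibrationTable", std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    std::vector<char> mCache;
};
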
@kometa_triatlon @FanYE can you give me some suggestions?

Hi, I am new to TensorRT 2.1. When I run the example, I get an error like this:
tang@tang:/usr/src/tensorrt$ ./bin/giexec --deploy=/usr/src/tensorrt/data/mnist/mnist.prototxt --model=/usr/src/tensorrt/data/mnist/mnist.caffemodel --output=prob --batch=12
deploy: /usr/src/tensorrt/data/mnist/mnist.prototxt
model: /usr/src/tensorrt/data/mnist/mnist.caffemodel
output: prob
batch: 12
Input "data": 1x28x28
Output "prob": 10x1x1
cudnnEngine.cpp (48) - Cuda Error in initializeCommonContext: 4
could not build engine
Engine could not be created
Engine could not be created

My GPU is a GTX 1060, the driver version is 384.69, the CUDA toolkit is 8.0, and cuDNN is 7.0.
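
For reference, the numeric code in "Cuda Error in initializeCommonContext: 4" can be decoded with the CUDA runtime; a minimal sketch (on CUDA 8 this code maps to cudaErrorLaunchFailure):

#include <cstdio>
#include <cuda_runtime_api.h>

int main()
{
    // Decode the numeric code reported by TensorRT ("Cuda Error ... : 4").
    cudaError_t err = static_cast<cudaError_t>(4);
    std::printf("%s\n", cudaGetErrorString(err)); // "unspecified launch failure" on CUDA 8
    return 0;
}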

Can anyone give me some suggestions?
Thanks a lot.

TensorRT 3.0.0 still has not solved the issue.

Plugin layer output count is not equal to caffe output count

@FanYe I am also getting the same error (on PX2), but I believe the calibration file is correct. What other reasons could there be for this error?

Calibrating using cache file: ../../data/dnn/mnist_data/calibration_cache.bin

1
MaxPool_1: 3fdb8f17
Relu_1: 3fde031a
MaxPool: 3c8d0fb0
(Unnamed ITensor* 2): 3c008912
Reshape_1: 3fdb8f17
(Unnamed ITensor* 11): 3fdb8f17
MatMul: 427c85cc
Relu_2: 427c8bbf
Output/MatMul: 44498e13
Output/output: 44498e27
Relu: 3c8d0fb0
Input/Placeholder: 3c008912
Reshape: 3c008912

sample_cityscapes_pruned: cudnnBuilder2.cpp:996: nvinfer1::cudnn::Engine* nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, const nvinfer1::cudnn::HardwareContext&, const nvinfer1::Network&): Assertion `it != tensorScales.end()' failed.
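
For what it's worth, the per-tensor values in the cache dump above appear to be the raw bit patterns of IEEE-754 float scales, and the tensorScales assertion fires when some tensor in the network has no matching entry in that list. A minimal sketch decoding one entry (the MaxPool_1 value above):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    std::uint32_t bits = 0x3fdb8f17;          // "MaxPool_1: 3fdb8f17" above
    float scale;
    std::memcpy(&scale, &bits, sizeof scale); // reinterpret the bit pattern
    std::printf("scale = %f\n", scale);       // ~1.7153
    return 0;
}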

Moving topic to TensorRT area.

-Siddharth

^ Resolved after fixing the “freezing” function