TensorRT parsing crashes with more than one Caffe model

Hi,

My application uses multiple Caffe models. My goal is to enable TensorRT optimization for all of them, and I have integrated TensorRT into my project by following the sampleMNIST code.

The problem is that if I enable TensorRT for any single one of my models, the program runs fine, but if I enable it for two or more models, it crashes inside nvparsers.dll.

In more detail, the call nvcaffeparser1::IBlobNameToTensor* blobNameToTensor = parser->parse(…) succeeds for the 1st Caffe model without any error, but the same call for the 2nd Caffe model crashes.
At that point, parse() pops up a message box reporting "0x00007FFABA70CB58 (nvparsers.dll) exception: 0xC0000005: access violation reading location 0x0000000000002100".
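
For reference, each of my models goes through roughly the flow below. This is only a simplified sketch, not my exact sources: the helper name buildEngine(), the logger, the output blob, and the batch/workspace settings are placeholders mirroring sampleMNIST's caffeToTRTModel().

#include <iostream>

#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by createInferBuilder().
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

// Builds one engine from one Caffe model, following sampleMNIST's caffeToTRTModel().
ICudaEngine* buildEngine(const char* deployFile, const char* modelFile, const char* outputBlob)
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // The second model's parse() call is where the access violation occurs.
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse(deployFile, modelFile, *network, DataType::kFLOAT);

    network->markOutput(*blobNameToTensor->find(outputBlob));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    parser->destroy();
    network->destroy();
    builder->destroy();
    shutdownProtobufLibrary(); // copied over from the sample, so it runs once per model
    return engine;
}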

Is there any clue or advice on this?

My Environment:
GeForce GTX 1080 Ti, recent driver
Win10 64-bit, TensorRT 5 RC for Windows
CUDA 10, cuDNN 7.3.1

thanks
Charles

Hello,

It would help us debug if you could provide a small repro package containing the source, model, and dataset that exhibit this symptom.

The problem is fixed now; it was caused by nvcaffeparser1::shutdownProtobufLibrary().
In sampleMNIST, this function is called inside caffeToTRTModel(), which misled me into calling it once per model in my project, where there is more than one model. But shutdownProtobufLibrary() has global impact, and it should apparently be called ONLY after all Caffe models have been handled.
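
In other words, the flow now looks roughly like this (again just a sketch, reusing the hypothetical buildEngine() helper from my first post, but with the shutdownProtobufLibrary() call removed from inside it; the model file and blob names are placeholders):

// buildEngine() no longer calls shutdownProtobufLibrary() internally.
ICudaEngine* engineA = buildEngine("modelA.prototxt", "modelA.caffemodel", "probA");
ICudaEngine* engineB = buildEngine("modelB.prototxt", "modelB.caffemodel", "probB");

// Only now, when no further Caffe parsing will happen in this process,
// is it safe to release the protobuf resources, exactly once.
nvcaffeparser1::shutdownProtobufLibrary();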

Although NvCaffeParser.h has a few comments about shutdownProtobufLibrary(), the developer guide does not mention it explicitly. In my opinion, the documentation could be improved here.