TensorRT Caffe parser crashes when loading a second model

Hi everyone,

Thanks for releasing the TensorRT library.
I'd like to describe a problem I've run into with the otherwise great library.

When trying to load two Caffe networks using the Caffe parser that comes with TensorRT, the program crashes while loading the second model. This is very easy to reproduce.
For example, edit the sampleMNIST sample: all you have to do is add the following lines after the existing caffeToTRTModel call in the body of main().

IHostMemory* trtModelStream2{nullptr};
caffeToTRTModel("mnist.prototxt", "mnist.caffemodel", std::vector<std::string>{OUTPUT_BLOB_NAME}, 1, trtModelStream2);

It would be great if someone could confirm whether loading multiple models is supported when using the TensorRT Caffe parser.

Thanks for your time!

I'm using Ubuntu 16.04 with CUDA 8.0 and the following packages:
{libnvinfer-dev,libnvinfer4} 4.1.2-1+cuda8.0


Loading multiple models is supported by TensorRT. The reason the sample crashes with the changes you posted is that shutdownProtobufLibrary() is called prematurely: caffeToTRTModel() calls it after parsing, so the protobuf library has already been shut down by the time the second parse begins. If you move the call out of caffeToTRTModel() to just before main() returns, your code should work fine.
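To make the fix concrete, here is a rough sketch of the corrected control flow. It assumes the sampleMNIST helpers (caffeToTRTModel, OUTPUT_BLOB_NAME) from the TensorRT 4.x samples, with the shutdownProtobufLibrary() call removed from inside caffeToTRTModel(); it is an outline of where the call belongs, not a complete program:

```cpp
// Sketch only: assumes caffeToTRTModel() and OUTPUT_BLOB_NAME from
// sampleMNIST, with shutdownProtobufLibrary() removed from the helper.
int main(int argc, char** argv)
{
    // Parse the first model.
    IHostMemory* trtModelStream{nullptr};
    caffeToTRTModel("mnist.prototxt", "mnist.caffemodel",
                    std::vector<std::string>{OUTPUT_BLOB_NAME}, 1, trtModelStream);

    // Parsing a second model is fine as long as the protobuf library
    // has not been shut down in between.
    IHostMemory* trtModelStream2{nullptr};
    caffeToTRTModel("mnist.prototxt", "mnist.caffemodel",
                    std::vector<std::string>{OUTPUT_BLOB_NAME}, 1, trtModelStream2);

    // ... deserialize the engines and run inference ...

    trtModelStream->destroy();
    trtModelStream2->destroy();

    // Shut down protobuf exactly once, after all parsing is finished.
    nvcaffeparser1::shutdownProtobufLibrary();
    return 0;
}
```

The key point is that shutdownProtobufLibrary() is a process-wide teardown, so it should run once at the end of the program rather than once per parsed model.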

Hope this helps.