Why can't TensorRT parse two models simultaneously?


I have two Caffe models that I want to parse with TensorRT, but when I parse the first one, the second breaks in this function:

const IBlobNameToTensor *blobNameToTensor = parse->parse(deploy.c_str(),model.c_str(), *network, modelDataType);

Once the first model has been parsed and saved to a cache file, if I restart the application, it can then parse the second model.


I created two instances to parse the two models, and I found that caffeToTRTModel() calls

shutdownProtobufLibrary()

When I don't call it in the first instance, it can parse the two models one after another. Why does this function affect the two instances?


You can create two caffeParsers to avoid this issue.

As you mentioned, some buffers are created when the model is initially read into memory.
Reusing those buffers may lead to memory errors.

shutdownProtobufLibrary() releases the protobuf library used for loading Caffe models.
Please keep this library loaded until all the models have been parsed.
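To illustrate the point above, here is a minimal sketch of parsing two Caffe models with separate parsers, deferring shutdownProtobufLibrary() until both are done. It assumes the pre-TensorRT-7 Caffe parser API (NvCaffeParser.h); buildEngine() and the file names are hypothetical, and error handling is omitted.

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by createInferBuilder().
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

// Hypothetical helper: build one engine with its OWN parser instance.
ICudaEngine* buildEngine(IBuilder* builder, const char* deploy, const char* model)
{
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();  // a fresh parser per model
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse(deploy, model, *network, DataType::kFLOAT);
    network->markOutput(*blobNameToTensor->find("prob"));  // assumed output blob
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    parser->destroy();    // destroy the parser, but NOT the protobuf library
    network->destroy();
    return engine;
}

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    ICudaEngine* e1 = buildEngine(builder, "deploy1.prototxt", "model1.caffemodel");
    ICudaEngine* e2 = buildEngine(builder, "deploy2.prototxt", "model2.caffemodel");

    // Only now, after ALL models are parsed, is it safe to call this:
    shutdownProtobufLibrary();

    // ... run inference with e1 / e2, then destroy them and the builder.
    return 0;
}
```

The key design point is that shutdownProtobufLibrary() tears down process-wide protobuf state, so calling it between the two parses invalidates resources the second parser still needs.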

Further, we recommend converting the caffemodel to a TensorRT PLAN instead of parsing it from Caffe every time.
A TensorRT PLAN lets you launch the engine directly and saves a lot of initialization time.
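The save/reload flow can be sketched as follows, again assuming the pre-TensorRT-7 C++ API; savePlan() and loadPlan() are hypothetical helper names.

```cpp
#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

using namespace nvinfer1;

// Serialize a built engine to a PLAN file once, offline.
void savePlan(ICudaEngine* engine, const char* path)
{
    IHostMemory* plan = engine->serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
    plan->destroy();
}

// At startup, deserialize the PLAN directly -- no Caffe parsing,
// no protobuf library involved at all.
ICudaEngine* loadPlan(IRuntime* runtime, const char* path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```

Note that a PLAN is specific to the GPU and TensorRT version it was built with, so it should be regenerated when either changes.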