TensorRT CaffeParser for models > 1GB

Hi,

I’m trying to parse a .caffemodel using the ICaffeParser interface class from TensorRT (v3.0.2). My model is 1.1 GB in size, which results in the following protobuf error:

[libprotobuf ERROR google/protobuf/io/coded_stream.cc:207] A protocol message was rejected because it was too big (more than 1073741824 bytes).  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.

I tried setting a higher limit using the ICaffeParser::setProtobufBufferSize() method, but any value greater than 2^30 (1 GB) is apparently reset to zero internally, giving the following error:

[libprotobuf ERROR google/protobuf/io/coded_stream.cc:207] A protocol message was rejected because it was too big (more than 0 bytes).  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.

What can be done to fix this, or is this the hard limit on what you can do with TensorRT?
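For reference, this is roughly the call sequence I’m using (TensorRT 3.x C++ API; the file paths, logger, and buffer size are placeholders):

```cpp
#include <cstdio>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the builder (placeholder implementation).
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            printf("%s\n", msg);
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Raise the protobuf buffer limit before parsing. Any value above
    // 2^30 bytes seems to be reset to zero internally.
    parser->setProtobufBufferSize(1500 << 20); // ~1.5 GB -> fails

    const IBlobNameToTensor* blobNameToTensor = parser->parse(
        "deploy.prototxt",  // placeholder path
        "model.caffemodel", // placeholder path (1.1 GB)
        *network,
        DataType::kFLOAT);

    // blobNameToTensor is null when parsing fails.
    return blobNameToTensor == nullptr ? 1 : 0;
}
```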

Thanks in advance,

Maarten

Did you fix it?
I have the same problem…

Nope, I started using a smaller network instead. This is probably an issue in Google protobuf, which Caffe uses. It’s still not clear to me whether this is a hard limit that can’t easily be patched or just a configuration setting.
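For what it’s worth, the 1 GB figure is not protobuf’s absolute ceiling: plain protobuf can be raised to INT_MAX (~2 GB), since message lengths are signed ints. Below is a hedged sketch of reading the .caffemodel directly with a raised limit — it assumes you link against libprotobuf (an older version with the two-argument SetTotalBytesLimit) and have caffe.pb.h generated from Caffe’s caffe.proto; it bypasses the TensorRT parser entirely, since setProtobufBufferSize is the only knob it exposes:

```cpp
#include <climits>
#include <fstream>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include "caffe.pb.h" // generated from caffe.proto

// Sketch: parse a large .caffemodel with protobuf's byte limit raised
// to INT_MAX (~2 GB), the hard ceiling imposed by signed-int lengths.
bool readLargeCaffeModel(const char* path, caffe::NetParameter& net)
{
    std::ifstream file(path, std::ios::binary);
    if (!file)
        return false;

    google::protobuf::io::IstreamInputStream raw(&file);
    google::protobuf::io::CodedInputStream coded(&raw);

    // First argument: total bytes limit; second: warning threshold.
    coded.SetTotalBytesLimit(INT_MAX, INT_MAX);

    return net.ParseFromCodedStream(&coded);
}
```

You’d then have the weights in memory yourself, but you’d still need a way to hand them to TensorRT (see the builder-API suggestion below in this thread).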

Hmm, did you shrink your network? I don’t need the fully-connected part at the end of mine, but I don’t know how to remove its weights.
But thank you for your reply!

Have you solved it?
I have the same problem.


One less-than-perfect option would be to bypass their nvcaffe_parser and use the builder API yourself. Then you have complete control over protobuf. Of course, this means writing your own parser.
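To sketch what that looks like (TensorRT 3.x C++ API; the weight-loading helper, layer shapes, and layer names are hypothetical — you’d fill them in from your own .caffemodel reader):

```cpp
#include "NvInfer.h"
using namespace nvinfer1;

// Hypothetical helper: returns weights you loaded from the .caffemodel
// yourself (e.g. via protobuf with a raised size limit).
Weights loadWeights(const char* layerName, const char* blobName);

INetworkDefinition* buildNetwork(IBuilder& builder)
{
    INetworkDefinition* network = builder.createNetwork();

    // Input dimensions are placeholders for an ImageNet-style model.
    ITensor* data = network->addInput("data", DataType::kFLOAT,
                                      DimsCHW{3, 224, 224});

    // One convolution layer, with weights supplied directly --
    // no nvcaffe_parser, so no protobuf limit on the TensorRT side.
    IConvolutionLayer* conv1 = network->addConvolution(
        *data, 64, DimsHW{7, 7},
        loadWeights("conv1", "kernel"),
        loadWeights("conv1", "bias"));
    conv1->setStride(DimsHW{2, 2});

    // ... repeat for the remaining layers, then:
    network->markOutput(*conv1->getOutput(0));
    return network;
}
```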