How to parse a Caffe model from memory in TensorRT 4.1.2

I’m using TensorRT 4.1.2.
I found there is only a method to parse a Caffe model from disk, namely:

virtual const IBlobNameToTensor* parse(const char* deploy,
                                       const char* model,
                                       nvinfer1::INetworkDefinition& network,
                                       nvinfer1::DataType weightType) = 0;

But my models have already been read into memory; how do I parse them?

Hello,

Are you saying the const char* deploy and const char* model contents are already in memory? The TensorRT API doesn’t require that the files reside on a physical disk. If you have an in-RAM file system, just write your deploy and model files to that filesystem and point the deploy and model parameters at those paths. This is beyond the scope of TensorRT.
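For example, here is a minimal sketch of that workaround, assuming a Linux tmpfs mount at /dev/shm (the file names here are hypothetical):

#include <fstream>
#include <string>
#include <NvCaffeParser.h>
#include <NvInfer.h>

// Sketch: write the in-memory buffers to tmpfs, then use the normal
// file-based parse(). /dev/shm is a common Linux tmpfs mount point.
const nvcaffeparser1::IBlobNameToTensor*
parseFromMemory(nvcaffeparser1::ICaffeParser& parser,
                nvinfer1::INetworkDefinition& network,
                const std::string& deployContent,
                const std::string& weightContent)
{
    const char* deployPath = "/dev/shm/net.deploy";     // hypothetical paths
    const char* modelPath  = "/dev/shm/net.caffemodel";
    std::ofstream(deployPath, std::ios::binary) << deployContent;
    std::ofstream(modelPath,  std::ios::binary) << weightContent;
    return parser.parse(deployPath, modelPath, network,
                        nvinfer1::DataType::kFLOAT);
}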

regards,
NVES

Thank you for your reply.
I don’t have an in-RAM filesystem; I have only read the files into memory as follows:

#include <fstream>
#include <iterator>
#include <string>

const char* deploy = "/tmp/googlenet.deploy";
const char* model  = "/tmp/googlenet.caffemodel";

// Read both files fully into memory.
std::ifstream is1(deploy);
std::string deploy_content((std::istreambuf_iterator<char>(is1)),
                           std::istreambuf_iterator<char>());
std::ifstream is2(model, std::ios::binary);
std::string weight_content((std::istreambuf_iterator<char>(is2)),
                           std::istreambuf_iterator<char>());

Now what can I do to parse these buffers into an nvinfer1::INetworkDefinition?
If you ask why I don’t just parse from “/tmp/googlenet.deploy”: there may be many reasons. For example, I may receive the deploy content from a network socket.
Why not add an API such as:

virtual const IBlobNameToTensor* parse(const char* deploy_content, size_t deploy_content_size,  /* the deploy content in memory instead of a file path */
                                       const char* model_content, size_t model_content_size,
                                       nvinfer1::INetworkDefinition& network,
                                       nvinfer1::DataType weightType) = 0;

Thank you for your suggestion. We are always evaluating customer feedback with engineering.

It’s not a fun option, but you can construct the network yourself using the builder API.
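For illustration, a rough sketch of what that means with the TensorRT 4 builder API: you deserialize the caffemodel buffer yourself (e.g., via the Caffe protobuf classes), wrap each layer’s weights in nvinfer1::Weights, and add the layers by hand. Everything below (shapes, names, the single convolution) is hypothetical:

#include <NvInfer.h>

using namespace nvinfer1;

// Sketch only: convKernel/convBias must be extracted from the caffemodel
// buffer by your own deserialization code.
INetworkDefinition* buildByHand(IBuilder& builder,
                                const float* convKernel, int64_t kernelCount,
                                const float* convBias, int64_t biasCount)
{
    INetworkDefinition* network = builder.createNetwork();

    // Hypothetical input: 3x224x224, as in GoogLeNet's deploy file.
    ITensor* data = network->addInput("data", DataType::kFLOAT,
                                      DimsCHW{3, 224, 224});

    Weights kernel{DataType::kFLOAT, convKernel, kernelCount};
    Weights bias{DataType::kFLOAT, convBias, biasCount};

    IConvolutionLayer* conv =
        network->addConvolution(*data, 64, DimsHW{7, 7}, kernel, bias);
    conv->setStride(DimsHW{2, 2});

    // ... repeat for every layer in the deploy file ...
    network->markOutput(*conv->getOutput(0));
    return network;
}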

Can you explain how to parse a Caffe model from memory using the builder?

I found that TensorRT 6 supports loading models from memory!
Refer to ICaffeParser::parseBuffers.
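With the buffers already read as above, the TensorRT 6 usage looks roughly like this (network is an existing nvinfer1::INetworkDefinition):

#include <NvCaffeParser.h>
#include <NvInfer.h>

// deploy_content and weight_content are the std::string buffers from above.
nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
const nvcaffeparser1::IBlobNameToTensor* blobs =
    parser->parseBuffers(deploy_content.data(), deploy_content.size(),
                         weight_content.data(), weight_content.size(),
                         network, nvinfer1::DataType::kFLOAT);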