Are you saying the const char* deploy and char* model contents are already in memory? The TensorRT API doesn't require that the files reside on a physical disk. If you have an in-RAM file system, just commit your deploy and model files to that filesystem and set the deploy and model parameter pointers to paths on it. This is beyond the scope of TensorRT.
Now what can I do to parse it into an nvinfer1::INetworkDefinition?
If you ask why I don't just parse from "/tmp/googlenet.deploy": there can be many reasons; for example, I may receive the deploy content from a network socket.
Why not add an API such as:
virtual const IBlobNameToTensor* parse(const char* deploy_content, size_t deploy_content_size, /* the deploy content in memory instead of a file path */
const char* model_content, size_t model_content_size,
nvinfer1::DataType weightType) = 0;