Hello,
I went over the samples in the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation, and I have two questions:
How do I extend an imported network into a custom-built network? In the sample code:
// parse the caffe model to populate the network, then set the outputs
INetworkDefinition* network = builder->createNetwork();
ICaffeParser* parser = createCaffeParser();
parser->setPluginFactory(pluginFactory);
std::cout << "Begin parsing model..." << std::endl;
const IBlobNameToTensor* blobNameToTensor = parser->parse(locateFile(deployFile).c_str(),
locateFile(modelFile).c_str(),
*network,
DataType::kFLOAT);
std::cout << "End parsing model..." << std::endl;
I can use the network and create more layers by calling, for example,
network->addFullyConnected(*pool2->getOutput(0), 500, weightMap["ip1filter"], weightMap["ip1bias"])
But how do I set the weights if my original Caffe2 model cannot be saved to a protobuf .pb file because of the custom layers, which I want to implement manually with the C++ API?
Is it possible to initialize the model with randomly generated weights?