Loading a TensorRT model through the nvinfer plugin works fine with the config below.
[config.file]
model-engine-file=MODEL_PATH
But when I load the same engine file from C++ code, it crashes with a segmentation fault inside "deserializeCudaEngine".
What am I doing wrong?
[c++ code]
ifstream cache(MODEL_PATH, std::ios::binary);  // open in binary mode so the engine bytes are not mangled
if (!cache)
    cout << "file does not exist" << endl;
stringstream gieModelStream;
gieModelStream << cache.rdbuf();
cache.close();
infer = nvinfer1::createInferRuntime(marvinpose->gLogger);
// determine the size of the serialized engine
gieModelStream.seekg(0, std::ios::end);
const int modelSize = gieModelStream.tellg();
gieModelStream.seekg(0, std::ios::beg);
void *modelMem = malloc(modelSize);
gieModelStream.read((char*)modelMem, modelSize);
engine = infer->deserializeCudaEngine(modelMem, modelSize, NULL);
if (!engine)
    cout << "deserializeCudaEngine failed" << endl;
free(modelMem);