Description
```cpp
#include <NvInfer.h>

#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

std::shared_ptr<nvinfer1::ICudaEngine> parse_planmodel() {
    std::shared_ptr<nvinfer1::ILogger> logger(new Logger());
    std::shared_ptr<nvinfer1::IRuntime> runtime(nvinfer1::createInferRuntime(*logger));
    std::shared_ptr<nvinfer1::ICudaEngine> engine = nullptr;

    // Read the serialized engine (plan file) into memory.
    std::ifstream planFile(planmodel, std::ios::binary);
    planFile.seekg(0, planFile.end);
    int model_size = planFile.tellg();
    planFile.seekg(0, planFile.beg);
    std::vector<char> serialize_model(model_size);
    planFile.read(serialize_model.data(), model_size);

    engine.reset(runtime->deserializeCudaEngine(serialize_model.data(), model_size));
    if (engine == nullptr) {
        std::cout << "deserialization fails" << std::endl;
        exit(1);
    } else {
        std::cout << "deserialization success" << std::endl;
    }
    return engine;  // `runtime` and `logger` go out of scope here
}

int main(void) {
    auto engine = parse_planmodel();
    engine->createExecutionContext();
    return 0;
}
```
This program segfaults in the engine's destructor at the end of main(). Why?
Environment
TensorRT Version: 8.6.1.6
GPU Type: Tesla P40
Nvidia Driver Version: 535.86.10
CUDA Version: 12.2
CUDNN Version:
Operating System + Version: CentOS 7
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered