Segmentation fault when using deserializeCudaEngine in the C++ API

I am using the C++ API to serialize and deserialize a TensorRT model from file. However, a segmentation fault happens when I call deserializeCudaEngine. Here is my code.

// for serializing
std::ofstream trt_file(trt_model_path, std::ios::binary);
auto *serialized_model = this->engine_->serialize();
if (serialized_model == nullptr) {
    std::cout << "could not serialize engine." << std::endl;
}

trt_file.write(reinterpret_cast<const char*>(serialized_model->data()), serialized_model->size());
trt_file.close();

// for deserializing
std::vector<char> trt_model_stream;
std::size_t size{0};
std::ifstream in_file(trt_model_path, std::ios::binary);
if(in_file.good())
{
    in_file.seekg(0, in_file.end);
    size = in_file.tellg();
    in_file.seekg(0, in_file.beg);
    trt_model_stream.resize(size);
    in_file.read(trt_model_stream.data(), size);
    in_file.close();
}

nvinfer1::IRuntime *runtime = nvinfer1::createInferRuntime(*logger);
nvinfer1::ICudaEngine *engine = runtime->deserializeCudaEngine(trt_model_stream.data(), size, nullptr);
 
runtime->destroy();

I can serialize the CUDA engine successfully. However, I get a segmentation fault when calling the deserializeCudaEngine API. I even printed the values in serialized_model->data() and trt_model_stream.data(); they are exactly the same.

I don’t know where the trouble could come from. Thanks in advance.

Hi kealennieh,

Are you running this all in the same process? What happens if you write to disk in one process and exit, then start a new process for reading? If you still have trouble, maybe you can provide a gdb backtrace to get some hint of what’s happening inside TRT (along with any console output, if there is any).
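
It can also help to add a couple of sanity checks around the call, something along these lines (just a sketch that drops into your existing deserialization code and reuses your variable names):

// Right before the deserializeCudaEngine call: make sure the file was actually read.
if (trt_model_stream.empty()) {
    std::cerr << "failed to read " << trt_model_path << std::endl;
}
// Right after the call: make sure TRT actually returned an engine.
if (engine == nullptr) {
    std::cerr << "deserializeCudaEngine returned a null engine" << std::endl;
}

If the crash happens even though the buffer is non-empty, that points at something going wrong inside TRT itself.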

By the way, there are simpler ways to read a file in C++:

std::vector<char> ReadFile(std::string const& filename)
{
  std::ifstream input{ filename, std::ios::binary };
  return { std::istreambuf_iterator<char>(input), {} };
}
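
With that helper, the deserialization side could look something like this (trt_model_path being the same path you wrote the engine to earlier):

auto const trt_model = ReadFile(trt_model_path);
nvinfer1::ICudaEngine *engine =
    runtime->deserializeCudaEngine(trt_model.data(), trt_model.size(), nullptr);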

Cheers,
Tom

Hi Tom,

Thanks for your help. I am running this in two processes. You reminded me that there might be a problem in TensorRT itself, so I updated TensorRT from 5.1.2.2 to 5.1.5.0. Surprisingly, it works with the new version.

Sincerely,
Kealennieh