Model parsing source code

I was looking at the source code for the DeepStream reference app and also that of the gstnvinfer plugin. I am unable to locate where exactly the contents of the model file are read from disk. Kindly help with this.

It’s in

File: /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp

Function:

```cpp
/* Deserialize engine from file */
std::unique_ptr<TrtEngine>
TrtModelBuilder::deserializeEngine(const std::string& path, int dla)
{
    std::ifstream fileIn(path, std::ios::binary);
    if (!fileIn.is_open())
    {
        dsInferError(
            "Deserialize engine failed because file path: %s open error",
            safeStr(path));
        return nullptr;
    }

    /* ... buffer `data` is sized to the file and filled here ... */
    fileIn.read(data.data(), size); // read the file into the buffer

    /* call the TensorRT API to deserialize the buffer as a TRT CUDA engine */
    UniquePtrWDestroy<nvinfer1::ICudaEngine> engine =
        runtime->deserializeCudaEngine(data.data(), size, factory);
    /* ... */
}
```

Thanks.