Nvinfer: use model from memory instead of path

Dear Forum or Mods,

It’s said here that plan models can be reverse-engineered very easily: Can compiled .engine be reverse engineered? - #6 by mathiasbertorelli

Therefore, I’d like to keep the model encrypted on the filesystem and only decrypt it at runtime. However, nvinfer takes a path to a model file as input; see: Gst-nvinfer — DeepStream 6.1.1 Release documentation
Is it possible to load the model from a memory pointer instead? Otherwise I’d have to dump the decrypted model as a plan file to the filesystem, which would make it unnecessarily easy for an attacker to obtain the model and reverse-engineer its architecture and weights (according to the post above).
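To illustrate the flow I have in mind, here is a minimal sketch; decryptBuffer() and the XOR “cipher” inside it are just placeholders for whatever real encryption scheme we would actually use:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Placeholder "cipher" -- stands in for a real scheme such as AES.
std::vector<char> decryptBuffer(std::vector<char> buf) {
    for (char& c : buf) c ^= 0x5A;
    return buf;
}

// Read the encrypted engine from disk and decrypt it in memory only,
// so the plain plan file never touches the filesystem.
std::vector<char> loadDecryptedEngine(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    std::vector<char> encrypted(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(encrypted.data(), static_cast<std::streamsize>(encrypted.size()));
    return decryptBuffer(std::move(encrypted));
}
```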

Please note that nvinfer is proprietary to NVIDIA, so one cannot simply change its code. Or is it okay for us to modify and recompile that code?

Cheers

TRT can build the TRT engine from a memory buffer; the buffer’s contents can be loaded from a TRT engine file on storage.
For reference: https://github.com/NVIDIA/TensorRT/blob/main/samples/sampleMNIST/sampleMNIST.cpp#L176
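Adapted from that sample, the core of it looks roughly like this (assuming TRT 8.x, where deserializeCudaEngine() takes just a pointer and a size):

```cpp
#include <iostream>
#include <vector>
#include "NvInfer.h"

// Minimal logger that the TensorRT runtime requires.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

// Deserialize an engine directly from an in-memory buffer; no .engine
// file needs to exist on disk at this point.
nvinfer1::ICudaEngine* engineFromBuffer(const std::vector<char>& buf) {
    static Logger logger;
    // Note: the runtime must outlive the engine it deserializes.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(buf.data(), buf.size());
}
```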

Dear mchi,

thank you for your swift reply.
I see that this is possible with TRT, but I don’t see how that could be used with nvinfer (DeepStream). Am I missing something here?
To spell out the point I’m trying to make: nvinfer only accepts paths for loading a plan file. Or are you saying it’s not possible without modifying the proprietary source code?

nvinfer can load a plan file into a memory buffer, or it can load a UFF, ONNX, or ETLT model and convert it into a TRT engine in a memory buffer.
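For reference, both paths are driven by file keys in the nvinfer config (key names per the Gst-nvinfer documentation linked above); the buffering and conversion happen internally. A rough sketch, with an illustrative file name:

```
[property]
# Option 1: a pre-built plan; nvinfer reads this file into a memory
# buffer and deserializes it.
model-engine-file=model_b1_gpu0_fp16.engine

# Option 2 (instead of the above): point nvinfer at a source model
# (onnx-file, uff-file, or tlt-encoded-model); it converts the model
# to a TRT engine in memory and caches the serialized engine to disk.
onnx-file=model.onnx
```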

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
