Can I embed a TensorRT model into a DeepStream plugin?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson TX2)
• DeepStream Version (6.0)
• JetPack Version (4.6)
• TensorRT Version (8)
• NVIDIA GPU Driver Version (n/a)
• Issue Type (Question)
• How to reproduce the issue? (n/a)
• Requirement details (?)

Hi,

Is there a way to protect a TensorRT model inside DeepStream?

In the past, I protected a model by encoding the input and decoding the output, but that approach uses a lot of resources.

For resource-limited hardware, I can convert the model (in the case of TFLite) to a C array and embed it in the code. It looks like this:

unsigned char converted_model_tflite[] = {
  0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00, 
 // <Lines omitted>
};
unsigned int converted_model_tflite_len = 18200;
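
For reference, an array in this format can be generated from the model file with a dump tool such as xxd -i converted_model.tflite > converted_model.h (the file names here are just placeholders for whatever the model is called).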

Once built, this obfuscates the model and lets me add more controls on top of it (licensing, etc.).

I wonder if I could do the same with a TensorRT engine and embed it into an inference plugin for DeepStream, or if there are other ways to protect the model.

Thanks.

Hi,

We suppose yes, this should be possible.

nvdsinfer expects the model to be a binary file that it deserializes.
You can modify the following function to meet your requirement:

/opt/nvidia/deepstream/deepstream-6.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp

/* Deserialize engine from file */
std::unique_ptr<TrtEngine>
TrtModelBuilder::deserializeEngine(const std::string& path, int dla)
{
    std::ifstream fileIn(path, std::ios::binary);
    if (!fileIn.is_open())
    ...
}
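
As a rough illustration of the direction (not code from DeepStream), the file read could be replaced by handing an embedded buffer straight to TensorRT, since deserializeCudaEngine() accepts any in-memory blob. The converted_model_engine / converted_model_engine_len symbols below are assumed to come from a generated header like the array shown earlier:

#include <NvInfer.h>

/* Assumed symbols from a generated header (e.g. an xxd-style dump of the
 * serialized engine file); they are not part of DeepStream itself. */
extern unsigned char converted_model_engine[];
extern unsigned int converted_model_engine_len;

/* Sketch: deserialize the engine from the embedded buffer instead of a file */
nvinfer1::ICudaEngine*
deserializeEmbeddedEngine(nvinfer1::IRuntime& runtime)
{
    /* deserializeCudaEngine() takes an in-memory blob, so the serialized
     * engine never has to exist as a standalone file on disk. */
    return runtime.deserializeCudaEngine(
        converted_model_engine, converted_model_engine_len);
}

Inside deserializeEngine() the same idea would mean skipping the std::ifstream read and passing the embedded buffer to the existing deserialization path, so the rest of nvdsinfer keeps working unchanged.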

Thanks.


That might work better than I imagined. Thank you very much!
