Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson TX2
• DeepStream Version: 6.0
• JetPack Version: 4.6
• TensorRT Version: 8
• NVIDIA GPU Driver Version: n/a
• Issue Type: Question
• How to reproduce the issue?: n/a
• Requirement details: ?
Hi,
Is there a way to protect a TensorRT model inside DeepStream?
In the past, I protected a model by encoding the input and decoding the output, but that approach uses a lot of resources.
For resource-limited hardware, I can convert the model (in the case of TFLite) to a C array (e.g. with xxd -i) and embed it in the code. It looks like this:
unsigned char converted_model_tflite[] = {
0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,
// <Lines omitted>
};
unsigned int converted_model_tflite_len = 18200;
Once built, the model is obfuscated inside the binary, and I can add more controls around it (licensing, etc.).
I wonder whether I could do the same with a serialized TensorRT engine and embed it into an inference plugin for DeepStream, or whether there are other ways to protect the model.
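In my mind it would look something like the sketch below (untested; model_engine / model_engine_len are hypothetical symbols generated the same way as the TFLite array above, and the loading code assumes the standard TensorRT 8 runtime API):

#include <NvInfer.h>
#include <iostream>

// Hypothetical symbols, generated with something like
// `xxd -i model.engine` from a serialized TensorRT engine:
extern unsigned char model_engine[];
extern unsigned int model_engine_len;

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING)
      std::cerr << msg << std::endl;
  }
};

// Deserialize the engine directly from the embedded bytes,
// so no .engine file has to exist on the filesystem.
nvinfer1::ICudaEngine* loadEmbeddedEngine() {
  static Logger logger;
  static nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
  if (!runtime)
    return nullptr;
  // Note: the runtime must stay alive for as long as the engine is used.
  return runtime->deserializeCudaEngine(model_engine, model_engine_len);
}

That way the engine bytes would live inside the plugin's shared library, like the TFLite case, and I could add the extra controls (licensing checks, decoding an encrypted array before deserializing) in the same place.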
Thanks.