Preventing engine duplication

Some potential customers want to use our object detection TRT engine model without the rest of our software application.

Is there a way for us to distribute a TRT engine (built for a specific Jetson/JetPack version) to customers and lock its use to a specific hardware device? We have a runtime licence for our full application that is locked to the device’s serial number, but if they are just using our model, then we have no way of preventing unauthorised duplication.


This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.


I can create TRT models without a problem. My question relates to preventing someone from duplicating a TRT model and then using it on multiple machines without permission.

By default, TensorRT engines are compatible only with the version of TensorRT and the hardware on which they are built. In version 8.6, hardware compatibility and version compatibility were introduced. However, hardware compatibility is currently supported only on Ampere and later device architectures, and it is not supported on NVIDIA DRIVE OS or JetPack.

Please refer to the following document for more details:

We are moving this post to the Jetson-related forum to get better help.

Thank you.

Yes, so we will create a TRT model on JetPack x with TRT version y. We plan to then use it on a customer’s hardware, which will also have the same JetPack x and TRT version y. So the model will work without issue, which is fine and already proven.

My question relates to whether there is any way of preventing the customer from just copying the engine file and using it on an unlimited number of compatible Jetsons without permission. There’s probably not a built-in mechanism, but perhaps there is a workaround we could implement?


Could you share more about the duplication?
Do you mean copy or redistribution?

Is the model run with a custom application or trtexec?
If a custom application is used, maybe you can add an encryption/decryption mechanism on top of TensorRT inference.
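To make the encryption/decryption idea concrete, here is a minimal sketch of wrapping a serialized engine in an encryption layer. All names here are hypothetical, and the stdlib hash-chain keystream is for illustration only; a real deployment should use an authenticated cipher such as AES-GCM (e.g. from the `cryptography` package) and keep the key out of easy reach.

```python
# Hypothetical sketch: encrypt the serialized engine ("plan") at build time,
# and decrypt it in memory inside the application before handing it to TensorRT.
# The XOR keystream below is a stdlib-only stand-in for a real cipher.
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hash-chaining key, nonce, counter."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_engine(plan: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR the engine bytes with the keystream to produce the shipped blob."""
    ks = _keystream(key, nonce, len(plan))
    return bytes(a ^ b for a, b in zip(plan, ks))

def decrypt_engine(blob: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR is symmetric, so decryption is the same operation."""
    return encrypt_engine(blob, key, nonce)

# In the inference application, the decrypted bytes go straight to TensorRT
# instead of loading a plain .engine file from disk, e.g.:
#   runtime = trt.Runtime(logger)
#   engine = runtime.deserialize_cuda_engine(decrypt_engine(blob, key, nonce))
```

With this approach the plaintext engine never needs to exist on the customer's disk; only your application, which is already licence-locked, can turn the encrypted blob back into a loadable plan.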


As in, we sell a single trt.engine model which we only want the customer to use on a single Jetson, without it being used on many more devices.

When we run our custom application, we have a licence to stop redistribution. The issue arises if we were to try and sell someone just the model without our application.

Maybe we could create a small application whose redistribution we control and which decrypts an encrypted engine file. Without this new custom app, the customer would just have an encrypted file. Does that sound like the way to do it?


Let us check with our internal team to see whether this feature exists in TensorRT.
We will update you with more information later.



Here are some suggestions from our internal team.

Using your own encryption between loading the file and passing it to TensorRT would be the most secure approach available.
You can also create a custom serialization/deserialization function on top of the TensorRT engine.
For example, prepend a magic number or a serial number so that others cannot use the engine without permission.
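The "prepend a magic number/serial number" suggestion could be sketched like this: wrap the engine bytes in a small header containing a magic value and an HMAC keyed on a vendor secret plus the target device's serial number, and refuse to return the plan unless the local serial matches. All names and the header layout are assumptions for illustration; on a Jetson the board serial is typically readable from `/proc/device-tree/serial-number`.

```python
# Hypothetical device-binding wrapper for a serialized TensorRT engine.
# Layout of the shipped blob: 8-byte magic | 32-byte HMAC tag | engine bytes.
import hashlib
import hmac

MAGIC = b"TRTLOCK1"  # 8 bytes

def pack_engine(plan: bytes, device_serial: str, vendor_key: bytes) -> bytes:
    """Bind the engine bytes to one device's serial number via an HMAC."""
    tag = hmac.new(vendor_key + device_serial.encode(), plan, hashlib.sha256).digest()
    return MAGIC + tag + plan

def unpack_engine(blob: bytes, device_serial: str, vendor_key: bytes) -> bytes:
    """Verify the header against this device's serial before returning the plan."""
    if blob[:8] != MAGIC:
        raise ValueError("not a wrapped engine file")
    tag, plan = blob[8:40], blob[40:]
    expect = hmac.new(vendor_key + device_serial.encode(), plan, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise PermissionError("engine is not licensed for this device")
    return plan

# In the loader, read the local serial (e.g. /proc/device-tree/serial-number on
# Jetson), call unpack_engine(), and pass the result to
# runtime.deserialize_cuda_engine() only if verification succeeds.
```

Note that a check like this only deters casual copying: anyone who can modify or debug the loader can bypass it, so for stronger guarantees the secret and the check should be pushed into hardware-backed mechanisms (e.g. secure boot and on-device key storage) rather than application code alone.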


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.