DeepStream engine, tlt-converter security practices

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and any other details needed to reproduce.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and a description of the function.)

I am currently working with tlt-converter and DeepStream and was hoping you could provide insight into model encryption and decryption. Currently, tlt-converter requires the key to be passed on the command line; on the other hand, letting DeepStream build the engine without tlt-converter requires the key to be in plain text within the pgie.txt file. Is there a way to override this behavior so that the key is never in plain text or visible in logs? If not, is there somewhere I can look to get an idea of the tlt-converter source code so we can implement the same thing ourselves?

Any insight into this is much appreciated, as model security is crucial, and storing the key as an environment variable, passing it on the command line, or leaving it in plain text in pgie.txt is counterproductive to the whole idea of encryption.
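For reference, here is a minimal sketch of the entries in question in a typical pgie.txt (the property names come from the standard nvinfer config file format; the file name and key value are placeholders):

    [property]
    # the .etlt model and its decode key sit together in plain text
    tlt-encoded-model=model.etlt
    tlt-model-key=<your-key-here>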

Moving this topic into TAO.

If you run an .etlt model with DeepStream, the key needs to be set in plain text in the config file.
DeepStream will generate a TensorRT engine on the first run.
You can then run this TensorRT engine directly in subsequent runs.
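As a sketch of that workflow (property names from the standard nvinfer config; the engine file name follows DeepStream's usual auto-generated naming and is only an example): once the first run has serialized the engine, you can point the config at it directly and remove the key entries:

    [property]
    # First run only: .etlt model plus decode key, engine gets serialized to disk
    # tlt-encoded-model=model.etlt
    # tlt-model-key=<your-key-here>
    # Subsequent runs: load the serialized engine directly; no key required
    model-engine-file=model.etlt_b1_gpu0_fp16.engine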

The tlt-converter is not open source. It is a tool that generates a TensorRT engine from an .etlt model.
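For context, a typical tlt-converter invocation looks like the following (flag meanings per the TLT/TAO converter documentation; the model file, input dimensions, output node names, and key are placeholders for a DetectNet_v2-style model). Note that the key passed via -k is visible in shell history and the process list, which is exactly the exposure described above:

    # -k: decode key (plain text on the command line)
    # -d: input dimensions as C,H,W
    # -o: comma-separated output node names
    # -e: path for the generated TensorRT engine
    ./tlt-converter -k <your-key-here> \
        -d 3,544,960 \
        -o output_cov/Sigmoid,output_bbox/BiasAdd \
        -e detectnet.engine \
        detectnet.etlt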
