Are config files compatible across different DeepStream SDK versions?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: DeepStream 5.0 SDK
• JetPack Version (valid for Jetson only): JetPack 4.4.1
• Issue Type (questions, new requirements, bugs): question (deepstream-app inference)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I ran deepstream-app on DeepStream 5.0 with the config.txt file I had used on DeepStream 4.0.

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description.)

Do you have a config file set up for DeepStream 5.0?
Does a custom (TLT) .engine file that ran on DeepStream 4.0.1 work with DeepStream 5.0?

Hi,

You can use the same TLT file with DeepStream 5.0.
However, since the TensorRT engine file is not portable, please recreate it in the DeepStream 5.0 environment.

Below is our migration document for your reference:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Application_migration.html

Thanks.

Hi AastaLLL,

Thank you for your answer.
I am a beginner with DeepStream, sorry.

I installed the DeepStream 5.0 SDK for the first time on JetPack 4.4.
I copied only the engine file and config file from another device and tried to run them.

These are my config files:
config_test.txt (4.4 KB) source_test.txt (5.1 KB)

However, it does not run correctly and fails with the following error.

Is this a TensorRT portability problem?
Will migration solve this problem?
Or should I retrain the model in an environment suitable for DeepStream 5.0?

Hi,

Magic tag does not match

Yes. The error indicates you are using an engine file created from another TensorRT version.

To solve this issue, please recreate the engine on the target platform directly.
To be clear, you don’t need to retrain the model; just bring the original model files rather than the compiled engine.

For example, in your use case, please use the xxxxx.caffemodel and xxxxx.prototxt files instead.
When the engine file is not available, DeepStream will automatically generate xxxxx.engine for the target.
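As a sketch of what this looks like in practice (the `xxxxx` file names are placeholders, as in the advice above, and the other property values are illustrative, not taken from this thread), the nvinfer `[property]` group of a deepstream-app config on DeepStream 5.0 might contain:

```
[property]
# Original Caffe model files (placeholders) -- DeepStream parses these directly
model-file=xxxxx.caffemodel
proto-file=xxxxx.prototxt
# If this engine file does not exist yet, DeepStream builds it from the
# model files above on first run and caches it at this path
model-engine-file=xxxxx.engine
```

On the first run DeepStream serializes a fresh engine for the local TensorRT version, so the "Magic tag" mismatch cannot occur.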

Thanks.


Yes,

Thank you for your answer.

I used tlt-converter on the DeepStream 5.0 device to convert the .etlt file into an engine file.
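For anyone following along, a typical tlt-converter invocation on the target Jetson looks roughly like the sketch below. The key, input dimensions, and output node names are placeholders for a DetectNet_v2-style model and are not taken from this thread; substitute the values used when the .etlt was exported.

```
# Run on the target device so the engine matches its TensorRT version.
# -k: the TLT encoding key used when exporting the .etlt (placeholder)
# -d: model input dimensions C,H,W (placeholder)
# -o: output node name(s) of the model (placeholder)
# -e: path where the serialized engine file is written
./tlt-converter -k <tlt-key> \
    -d 3,544,960 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -e model.engine \
    model.etlt
```

Alternatively, you can point the nvinfer config at the .etlt via `tlt-encoded-model` and `tlt-model-key` and let DeepStream generate the engine itself.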

This solved the problem.

Thank you.