Problem with NVIDIA-AI-IOT/deepstream_lpr_app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0.1
• TensorRT Version 7.0.0-1
• NVIDIA GPU Driver Version (valid for GPU only) 450.102.04
• Issue Type( questions, new requirements, bugs) bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt

/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter: invalid option -- 'p'
Unrecognized argument
Aborted (core dumped)
/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter -k nvidia_tlt -d image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt

terminate called after throwing an instance of 'std::invalid_argument'
  what():  stoi
Aborted (core dumped)
/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter -k nvidia_tlt -d 1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt

[ERROR] UffParser: Could not parse MetaGraph from /tmp/fileDCLZQ2
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello, I am trying to reproduce the work published in the GitHub repo deepstream_lpr_app. However, when I run the tlt-converter step, I get the errors shown above. What am I doing wrong? As far as I can tell, I followed the tutorial to the letter, except that the page specifies ./tlt-converter … -p …, and since that option does not exist in my build, I used ./tlt-converter … -d … instead. Help, please.

Extra note: I am only using the -p and -k options, which are the required flags, plus the input model file.


Where did you download tlt-converter?

From here Tlt-converter

It is available for download from the TLT Getting Started page; choose the one for CUDA 10.2.

The latest version of tlt-converter is needed; it supports the “-p” option.
I’m still syncing with the internal team about its release.
Sorry for the confusion. I will update this thread when I have more information.

Sorry for the inconvenience. We haven’t officially released TLT3.0. We will release it this week. Please stay tuned.

Thanks Morganh

Please download the latest tlt-converter for your device.
See Transfer Learning Toolkit Get Started | NVIDIA Developer