Problem with NVIDIA-AI-IOT/deepstream_lpr_app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0.1
• TensorRT Version: 7.0.0-1
• NVIDIA GPU Driver Version (valid for GPU only): 450.102.04
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt

/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter: invalid option -- 'p'
Unrecognized argument
Aborted (core dumped)
/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter -k nvidia_tlt -d image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt

terminate called after throwing an instance of 'std::invalid_argument'
  what():  stoi
Aborted (core dumped)
/home/tlt-converter-7.0-cuda10.2-x86/tlt-converter -k nvidia_tlt -d 1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt

[ERROR] UffParser: Could not parse MetaGraph from /tmp/fileDCLZQ2
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Hello, I am trying to reproduce the work presented in the GitHub repo deepstream_lpr_app. However, when I get to the tlt-converter step, I hit the errors shown above. What am I doing wrong? As far as I can tell, I follow the tutorial to the letter, except that the page says to use ./tlt-converter … -p …, but that option does not exist in my build, so I use ./tlt-converter … -d … instead. Help, please.

Extra tip: I only use the -p and -k options, which the tutorial lists as the required flags, in addition to the input model file.

Thanks

Where did you download tlt-converter?

From here: Tlt-converter

It is available for download from the TLT getting-started page; choose the one for CUDA 10.2.

The latest version of tlt-converter is needed; it supports the "-p" option.
I'm still syncing with the internal team about its release.
Sorry for the confusion. I will update the info when I have it.

Sorry for the inconvenience. We haven't officially released TLT 3.0 yet. We will release it this week. Please stay tuned.

Thanks, Morganh

Please download the latest tlt-converter according to your device.
See https://developer.nvidia.com/tlt-getting-started

@Morganh, can you please provide the link to the tlt-converter? The link here, https://developer.nvidia.com/cuda102-trt71-jp44, doesn't work: I get the same issue with the -p flag not being recognized.

./tlt-converter: invalid option -- 'p'
Unrecognized argument
Aborted (core dumped)

Please check again. There is no issue; it has the "-p" option.

nvidia@nvidia:~/morganh/cuda10.2_trt7.1_jp4.4$ ./tlt-converter -h
usage: ./tlt-converter [-h] [-v] [-e ENGINE_FILE_PATH]
	[-k ENCODE_KEY] [-c CACHE_FILE]
	[-o OUTPUTS] [-d INPUT_DIMENSIONS]
	[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
	[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
	[-i INPUT_ORDER] [-s] [-u DLA_CORE]
	input_file

Generate TensorRT engine from exported model

positional arguments:
  input_file		Input file (.etlt exported model).

required flag arguments:
  -d		comma separated list of input dimensions (not required for TLT 3.0 new models).
  -k		model encoding key.

optional flag arguments:
  -b		calibration batch size (default 8).
  -c		calibration cache file (default cal.bin).
  -e		file the engine is saved to (default saved.engine).
  -i		input dimension ordering -- nchw, nhwc, nc (default nchw).
  -m		maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly.
  -o		comma separated list of output node names (default none).
  -p		comma separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: <n>x<c>x<h>x<w>. Can be specified multiple times if there are multiple input tensors for the model. This argument is only useful in dynamic shape case.
  -s		TensorRT strict_type_constraints flag for INT8 mode (default false).
  -t		TensorRT data type -- fp32, fp16, int8 (default fp32).
  -u		Use DLA core N for layers that support DLA (default = -1, which means no DLA core will be utilized for inference. Note that it'll always allow GPU fallback).
  -w		maximum workspace size of TensorRT engine (default 1<<30). If meet with out-of-memory issue, please increase the workspace size accordingly.
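
With this build, the dynamic-shape conversion from the first post should go through. A minimal sketch, reusing the key and the min/opt/max shapes from the first post (the -e engine file name is just an example):

# batch 1 (min) / 4 (opt) / 16 (max), each with a 3x48x96 CHW input
./tlt-converter -k nvidia_tlt \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    -e lprnet_us.engine \
    ./us_lprnet_baseline18_deployable.etlt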

Thanks a lot for getting back to me. I downloaded the executable from the link I shared, and I don't see the -p flag:

@THdev-desktop:/opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR$ ./tlt-converter -h
usage: ./tlt-converter [-h] [-v] [-e ENGINE_FILE_PATH]
	[-k ENCODE_KEY] [-c CACHE_FILE]
	[-o OUTPUTS] [-d INPUT_DIMENSIONS]
	[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
	[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
	[-i INPUT_ORDER]
	input_file

Generate TensorRT engine from exported model

positional arguments:
  input_file		Input file (.etlt exported model).

required flag arguments:
  -d		comma separated list of input dimensions
  -k		model encoding key

optional flag arguments:
  -b		calibration batch size (default 8)
  -c		calibration cache file (default cal.bin)
  -e		file the engine is saved to (default saved.engine)
  -i		input dimension ordering -- nchw, nhwc, nc (default nchw)
  -m		maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly
  -o		comma separated list of output node names (default none)
  -t		TensorRT data type -- fp32, fp16, int8 (default fp32)
  -w		maximum workspace size of TensorRT engine (default 1<<30). If meet with out-of-memory issue, please increase the workspace size accordingly

I am assuming I am using the wrong executable.
Thanks again

Please check the steps below. I confirm it has "-p".

On my Nano:

wget https://developer.nvidia.com/cuda102-trt71-jp44
unzip cuda102-trt71-jp44
cd cuda10.2_trt7.1_jp4.4
chmod +x tlt-converter
./tlt-converter -h
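
A quick sanity check (just a sketch) that the downloaded build supports dynamic shapes:

# should print the "-p ... optimization profile shapes" line from the help output above
./tlt-converter -h 2>&1 | grep -e '-p'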
