Deepstream-lpr-python-version

• GeForce GTX 1660 Ti Mobile
• DeepStream 6.1
• Ubuntu 20.04
• TensorRT 8.2.5.1
• GStreamer 1.16.2
• NVIDIA driver 510.47.03
• CUDA 11.6 Update 1

I want to start the deepstream-lpr-python-version project (GitHub - preronamajumder/deepstream-lpr-python-version: Python version of NVIDIA DeepStream's LPR app. https://developer.nvidia.com/blog/creating-a-real-time-license-plate-detection-and-recognition-app/). But to start it I need to generate an .engine file, and there is no tlt-converter version for CUDA 11.6 and TensorRT 8.2.5.1 (or maybe I just didn't find it).

How can I generate the engine file?

tao-converter (the successor to tlt-converter) is available on NGC: TAO Converter | NVIDIA NGC

For DS 6.1 dGPU, you need to download the v3.21.11_trt8.0_x86 version.
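For example, with the NGC CLI (a sketch, assuming ngc is installed and configured; check the exact version tag and download directory name on the NGC page, as they may differ):

$ ngc registry resource download-version "nvidia/tao/tao-converter:v3.21.11_trt8.0_x86"
$ cd tao-converter_vv3.21.11_trt8.0_x86    # hypothetical directory name created by the download
$ chmod +x tao-converter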


I got this error after starting the converter.

I tried the v3.22.05_trt8.2_x86 version and got this error (screenshot):

Please double-check your command line, and make sure the .etlt file is available.

It's available, but the labels file is empty. I downloaded it with download_us.sh:
[screenshot]
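(A quick text-based way to verify the downloaded files, assuming the model paths used in the commands later in this thread; the labels filename is whatever download_us.sh created:)

$ ls -l models/LP/LPR/
$ file models/LP/LPR/us_lprnet_baseline18_deployable.etlt
$ wc -c models/LP/LPR/*.txt    # a 0-byte labels file means that download step failed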

Please share the full command line and log again, as text. Please don't use screenshots.

When I use the v3.22.05_trt8.2_x86 version:

codeinside@CI-1442:~/deepstream_lpr_app$ sudo sh ./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96            models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
./tao-converter: 7: Syntax error: Unterminated quoted string

When I use the v3.21.11_trt8.0_x86 version:

codeinside@CI-1442:~/deepstream_lpr_app$ sudo sh ./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96            models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
./tao-converter: 1: ELF: not found
./tao-converter: 1: : not found
./tao-converter: 2: @:@J@888: not found
./tao-converter: 3: �: not found
./tao-converter: 4: Syntax error: ")" unexpected

How about:
$ ./tao-converter -h

codeinside@CI-1442:~/deepstream_lpr_app$ sudo sh ./tao-converter -h
./tao-converter: 1: ELF: not found
./tao-converter: 1: : not found
./tao-converter: 2: @:@J@888: not found
./tao-converter: 3: �: not found
./tao-converter: 4: Syntax error: ")" unexpected
codeinside@CI-1442:~/deepstream_lpr_app$ sudo sh ./tao-converter_8 -h
./tao-converter_8: 7: Syntax error: Unterminated quoted string

Can you remove "sh"?
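tao-converter is a compiled ELF binary, not a shell script. Running it through sh makes the shell parse raw binary bytes as script text, which is exactly where messages like "ELF: not found" and "Syntax error: Unterminated quoted string" come from. A quick check (output abbreviated and may differ):

$ file ./tao-converter
./tao-converter: ELF 64-bit LSB executable, x86-64, ...
$ ./tao-converter -h    # run the binary directly, not via sh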

codeinside@CI-1442:~/deepstream_lpr_app$ sudo ./tao-converter_8 -h
sudo: ./tao-converter_8: command not found

codeinside@CI-1442:~/deepstream_lpr_app$ ./tao-converter_8 -h
bash: ./tao-converter_8: Permission denied

Please run:
$ chmod +x tao-converter*

This fixed ./tao-converter -h, but I still get the same error when converting:

codeinside@CI-1442:~/deepstream_lpr_app$ ./tao-converter_8 -h
usage: ./tao-converter_8 [-h] [-e ENGINE_FILE_PATH]
	[-k ENCODE_KEY] [-c CACHE_FILE]
	[-o OUTPUTS] [-d INPUT_DIMENSIONS]
	[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
	[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
	[-i INPUT_ORDER] [-s] [-u DLA_CORE]
	input_file

Generate TensorRT engine from exported model

positional arguments:
  input_file		Input file (.etlt exported model).

required flag arguments:
  -d		comma separated list of input dimensions(not required for TLT 3.0 new models).
  -k		model encoding key.

optional flag arguments:
  -b		calibration batch size (default 8).
  -c		calibration cache file (default cal.bin).
  -e		file the engine is saved to (default saved.engine).
  -i		input dimension ordering -- nchw, nhwc, nc (default nchw).
  -m		maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly.
  -o		comma separated list of output node names (default none).
  -p		comma separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has `x` as delimiter, e.g., NxC, NxCxHxW, NxCxDxHxW, etc. Can be specified multiple times if there are multiple input tensors for the model. This argument is only useful in dynamic shape case.
  -s		TensorRT strict_type_constraints flag for INT8 mode(default false).
  -t		TensorRT data type -- fp32, fp16, int8 (default fp32).
  -u		Use DLA core N for layers that support DLA(default = -1, which means no DLA core will be utilized for inference. Note that it'll always allow GPU fallback).
  -w		maximum workspace size of TensorRT engine (default 1<<30). If meet with out-of-memory issue, please increase the workspace size accordingly.

codeinside@CI-1442:~/deepstream_lpr_app$ sudo sh ./tao-converter_8 -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96            models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
./tao-converter_8: 7: Syntax error: Unterminated quoted string

Please type the command in manually this time. Don't copy it directly from the website, to avoid unexpected characters in the string.
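For reference, the retyped command would look like this (a sketch assuming the same paths as above; note that sh is dropped, since it was the cause of the earlier syntax errors, and the positional .etlt input is placed last as the -h output describes):

$ ./tao-converter_8 -k nvidia_tlt \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    -t fp16 \
    -e models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine \
    models/LP/LPR/us_lprnet_baseline18_deployable.etlt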
