LPRNET - tao_converter Error: no input dimensions given

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)

Configuration of the TAO Toolkit Instance
task_group: ['model', 'dataset', 'deploy']
format_version: 3.0
toolkit_version: 5.0.0
published_date: 07/14/2023

• Training spec file(If have, please share here)

random_seed: 42
lpr_config {
  hidden_units: 512
  max_label_length: 3
  arch: "baseline"
  nlayers: 18 # setting nlayers to be 10 to use baseline10 model
}
training_config {
  batch_size_per_gpu: 128
  num_epochs: 250
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 1e-6
      max_learning_rate: 1e-5
      soft_start: 0.001
      annealing: 0.5
    }
  }
  regularizer {
    type: L2
    weight: 5e-4
  }
  checkpoint_interval: 15
  max_queue_size: 16
  n_workers: 8
  visualizer {
    enabled: true
    num_images: 15
  }
}
eval_config {
  validation_period_during_training: 15
  batch_size: 128
}
augmentation_config {
  output_width: 96
  output_height: 48
  output_channel: 3
  max_rotate_degree: 5
  rotate_prob: 0.5
  gaussian_kernel_size: 5
  gaussian_kernel_size: 7
  gaussian_kernel_size: 15
  blur_prob: 0.5
  reverse_color_prob: 0.5
  keep_original_prob: 0.3
}
dataset_config {
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/dataset/train/label_line_1"
    image_directory_path: "/workspace/tao-experiments/dataset/train/image"
  }
  characters_list_file: "/workspace/tao-experiments/lprnet/specs/line_1_characters.txt"
  validation_data_sources: {
    label_directory_path: "/workspace/tao-experiments/dataset/val/label_line_1"
    image_directory_path: "/workspace/tao-experiments/dataset/val/image"
  }
}

• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

/opt/tao-converter/tao-converter \
  /opt/models_deploy/lprnet/lprnet_moto_line_1_20230913.onnx \
  -e /models/NVIDIA-GeForce-RTX-2070-SUPER/lprnet/20230913/model.plan \
  -k nvidia_tlt \
  -p image_input,1x3x48x96,32x3x48x96,64x3x48x96 \
  -t fp16
Error: no input dimensions given

I use Triton Server 22.09 to build my TensorRT engines. I previously trained the LPRNet model on TAO 4.0, and converting the exported model with tao-converter worked perfectly, using the same command as documented.

I then upgraded TAO from version 4.0 to version 5.0 and ran a new training on a different dataset. Training went smoothly, and generating the TensorRT engine with tao deploy also worked; the command below succeeded.

# Convert to a TensorRT engine (FP16).
tao deploy lprnet gen_trt_engine --gpu_index=0 -m $USER_EXPERIMENT_DIR/export/lprnet_epoch-250-line_1.onnx \
                                  --data_type fp16 \
                                  --engine_file $USER_EXPERIMENT_DIR/export/lprnet_epoch-250_line_1.fp16.engine \
                                  --min_batch_size 1 \
                                  --opt_batch_size 16 \
                                  --max_batch_size 32 \
                                  --results_dir $USER_EXPERIMENT_DIR/export

However, I am facing an issue when trying to generate the TensorRT engine for Triton Server using the tao-converter utility (v4.0.0_trt8.5.2.2_x86).
I need to use tao-converter because I have developed software that automates the deployment of Docker images and the generation of TRT engines based on the GPU available in each environment.
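As an aside on that automation: the per-GPU engine directory used in the command above (e.g. /models/NVIDIA-GeForce-RTX-2070-SUPER/...) can be derived from the device name reported by nvidia-smi. This is only a sketch of that idea; engine_dir_for is a hypothetical helper, not part of any NVIDIA tool.

```shell
# Sketch: map a GPU name to a directory name by replacing spaces with
# hyphens, matching the /models/<GPU-NAME>/... layout used in this thread.
engine_dir_for() {
    printf '%s' "$1" | tr ' ' '-'
}

# In a live environment the name would come from:
#   nvidia-smi --query-gpu=name --format=csv,noheader
engine_dir_for "NVIDIA GeForce RTX 2070 SUPER"
# prints: NVIDIA-GeForce-RTX-2070-SUPER
```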

Could someone please help me with this issue?

Refer to TRTEXEC with LPRNet - NVIDIA Docs; you can use trtexec instead.
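For a TAO 5.0 ONNX export, the equivalent of the tao-converter command above would look roughly like the following trtexec invocation. This is a sketch based on the paths and shapes quoted earlier in this thread; adjust them to your environment.

```shell
# Build an FP16 engine from the exported ONNX with dynamic batch size.
# image_input and the 48x96 input shape come from the spec in this thread.
trtexec --onnx=/opt/models_deploy/lprnet/lprnet_moto_line_1_20230913.onnx \
        --minShapes=image_input:1x3x48x96 \
        --optShapes=image_input:32x3x48x96 \
        --maxShapes=image_input:64x3x48x96 \
        --fp16 \
        --saveEngine=/models/NVIDIA-GeForce-RTX-2070-SUPER/lprnet/20230913/model.plan
```

Note that no -k key is needed: TAO 5.0 ONNX exports are plain, unencrypted models.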

Thank you, it worked.
In previous versions, the exported models (.etlt files) were encrypted with a key.
Has this changed in TAO 5.0? Is there any note regarding this?

I found it.

  • Pipeline features
    • Export to deserialize ONNX models for direct integration with TensorRT (except MaskRCNN)
    • Decrypted checkpoint serialization across all networks

Deprecated Features

  • The ability to use tao-converter to generate TensorRT engines from .etlt files has been deprecated. All networks support direct integration with TensorRT and the trtexec sample.

This confirms that exported models are no longer encrypted.
