License Plate detection error: ValueError: axes don't match array

Hello. When I run inference for license plate detection in the tao_deploy.ipynb notebook, I get the following error: ValueError: axes don't match array.

How can I fix this problem?

My TensorRT version is 8.5.1.7 and my Python version is 3.8.10.

Below is the error output from the license plate detection inference:

Running inference for LicensePlateDetection on /root/nvidia-tao/tao_deploy/trt_out_folderLicensePlateDetection/LicensePlateDetection.trt.fp32
Loading uff directly from the package source code
[03/26/2023-18:15:08] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[03/26/2023-18:15:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[03/26/2023-18:15:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[03/26/2023-18:15:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[03/26/2023-18:15:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[03/26/2023-18:15:08] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
Producing predictions:   0%|                              | 0/1 [00:00<?, ?it/s][03/26/2023-18:15:08] [TRT] [W] The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.
[03/26/2023-18:15:08] [TRT] [W] Also, the batchSize argument passed into this function has no effect on changing the input shapes. Please use setBindingDimensions() function to change input shapes instead.
Producing predictions:   0%|                              | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py>", line 3, in <module>
  File "<frozen cv.detectnet_v2.scripts.inference>", line 169, in <module>
  File "<frozen cv.detectnet_v2.scripts.inference>", line 85, in main
  File "<frozen cv.detectnet_v2.inferencer>", line 136, in infer
  File "<frozen cv.detectnet_v2.inferencer>", line 37, in trt_output_process_fn
ValueError: axes don't match array
2023-03-26 18:15:09,328 [INFO] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Sending telemetry data.
2023-03-26 18:15:09,359 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Telemetry data couldn't be sent, but the command ran successfully.
2023-03-26 18:15:09,359 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: [Error]: <urlopen error [Errno -2] Name or service not known>
2023-03-26 18:15:09,359 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Execution status: FAIL
if ptm_model_name in ("VehicleMakeNet", "VehicleTypeNet"):

Below is the inference spec file:

inferencer_config{
  # defining target class names for the experiment.
  # Note: This must be mentioned in order of the networks classes.
  target_classes: "lpd"
  # Inference dimensions.
  image_width: 640
  image_height: 480
  # Must match what the model was trained for.
  image_channels: 3
  batch_size: 16
  gpu_index: 0
  stride: 16
  # model handler config
  tensorrt_config{
    parser: ETLT
    etlt_model: "lpd.etlt"
    backend_data_type: INT8
    save_engine: true
    trt_engine: "lpd.trt"
    calibrator_config: {
      calibration_cache: "lpd.cache"
    }
  }
}

bbox_handler_config{
  kitti_dump: true
  disable_overlay: false
  overlay_linewidth: 2
  classwise_bbox_handler_config{
    key:"lpd"
    value: {
      confidence_model: "aggregate_cov"
      output_map: "lpd"
      bbox_color{
        R: 255
        G: 0
        B: 0
      }
      clustering_config{
        coverage_threshold: 0.00
        clustering_algorithm: DBSCAN
        dbscan_confidence_threshold: 0.9
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 4
      }
    }
  }
  classwise_bbox_handler_config{
    key:"default"
    value: {
      confidence_model: "aggregate_cov"
      bbox_color{
        R: 0
        G: 255
        B: 0
      }
      clustering_config{
        coverage_threshold: 0.005
        clustering_algorithm: DBSCAN
        dbscan_confidence_threshold: 0.9
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 4
      }
    }
  }
}
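For context on the clustering_config values above: as I understand it, detectnet_v2 groups the raw grid-cell detections with DBSCAN and then merges each cluster into a single box. The rough sketch below only illustrates how coverage_threshold, dbscan_eps, and minimum_bounding_box_height could be used; it is not the actual TAO implementation.

# Rough, illustrative sketch of DBSCAN-based box clustering
# (not the TAO detectnet_v2 code; parameter names mirror the spec above).
import numpy as np
from sklearn.cluster import DBSCAN

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def cluster_boxes(boxes, coverages, eps=0.3, coverage_threshold=0.005, min_height=4):
    """boxes: (N, 4) array, coverages: (N,) array of coverage scores."""
    keep = coverages >= coverage_threshold          # coverage_threshold
    boxes, coverages = boxes[keep], coverages[keep]
    if len(boxes) == 0:
        return []
    # DBSCAN over a 1 - IoU distance matrix; eps corresponds to dbscan_eps
    dist = 1.0 - np.array([[iou(a, b) for b in boxes] for a in boxes])
    labels = DBSCAN(eps=eps, min_samples=1, metric="precomputed").fit_predict(
        dist, sample_weight=coverages)
    merged = []
    for lbl in set(labels) - {-1}:                  # -1 marks noise points
        m = labels == lbl
        w = coverages[m] / coverages[m].sum()
        box = (boxes[m] * w[:, None]).sum(axis=0)   # coverage-weighted average
        if box[3] - box[1] >= min_height:           # minimum_bounding_box_height
            merged.append(box)
    return merged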

Below is the command used to run inference:

if ptm_model_name in ("PeopleNet","LicensePlateDetection","DashCamNet","TrafficCamNet","FaceDetect","FaceDetectIR"):
    action = "_infer"

os.environ["inference_experiment_spec"] = f"{os.environ.get('COLAB_NOTEBOOKS_PATH')}/tao_deploy/specs/{ptm_model_name}/{ptm_model_name}{action}.txt"

# FIXME 10 - inference_out_folder: Folder path to write the inference results to
os.environ["inference_out_folder"] = "/root/nvidia-tao/tao_deploy/tao_ptm_inference/"
!rm -rf $inference_out_folder

# FIXME 11 - inference_input_images_folder: Folder path containing images to run the inference on
os.environ["inference_input_images_folder"] = f"/root/nvidia-tao/tao_deploy/tao_deploy_input_images/{ptm_model_name}"

print(f"Running inference for {ptm_model_name} on {os.environ['trt_out_file_name']}")
if ptm_model_name in ("PeopleNet","LicensePlateDetection","DashCamNet","TrafficCamNet","FaceDetect","FaceDetectIR"):
    !detectnet_v2 inference -e $inference_experiment_spec \
                                   -m $trt_out_file_name \
                                   -r $inference_out_folder \
                                   -i $inference_input_images_folder
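With the variables above resolved (and assuming trt_out_file_name was set to the engine path shown in the log), the last cell effectively runs:

detectnet_v2 inference -e $COLAB_NOTEBOOKS_PATH/tao_deploy/specs/LicensePlateDetection/LicensePlateDetection_infer.txt \
                       -m /root/nvidia-tao/tao_deploy/trt_out_folderLicensePlateDetection/LicensePlateDetection.trt.fp32 \
                       -r /root/nvidia-tao/tao_deploy/tao_ptm_inference/ \
                       -i /root/nvidia-tao/tao_deploy/tao_deploy_input_images/LicensePlateDetection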

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The above notebook is for Google Colab. To run locally instead, please download the notebook from the TAO Toolkit Quick Start Guide - NVIDIA Docs or TAO Toolkit Getting Started | NVIDIA NGC.
