UnboundLocalError: 'output_meta_cov' referenced before assignment after tao-deploy detectnet_v2 inference

Please provide the following information when requesting support.
• Hardware: A100
• Network Type: Detectnet_v2
• TLT Version: TAO Toolkit 4.0 (x86 Docker)
• Training spec file: Same as the TAO Toolkit 4.0 Jupyter notebook, but with 2 new classes added for testing (copied/pasted from an existing class block and renamed)
• How to reproduce the issue?: Just run the cells of the new TAO Toolkit 4.0 Jupyter notebook:

!tao-deploy detectnet_v2 inference -e $SPECS_DIR/detectnet_v2_inference_kitti_etlt.txt \
                                   -m $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.int8.engine \
                                   -r $USER_EXPERIMENT_DIR/etlt_infer_testing \
                                   -i $DATA_DOWNLOAD_DIR/test_samples

Hi, I am trying the new TAO Toolkit Jupyter notebook and ran into this error. I don’t know the cause, since it appears to be a problem inside the TAO Docker container. Everything before this step works perfectly fine.

Error:

2022-12-26 21:59:41,816 [INFO] root: Registry: ['nvcr.io']
2022-12-26 21:59:41,829 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:4.0.0-deploy
Loading uff directly from the package source code
[12/26/2022-22:00:06] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[12/26/2022-22:00:06] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
Producing predictions:   0%|                             | 0/60 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py>", line 3, in <module>
  File "<frozen cv.detectnet_v2.scripts.inference>", line 169, in <module>
  File "<frozen cv.detectnet_v2.scripts.inference>", line 85, in main
  File "<frozen cv.detectnet_v2.inferencer>", line 136, in infer
  File "<frozen cv.detectnet_v2.inferencer>", line 44, in trt_output_process_fn
UnboundLocalError: local variable 'output_meta_cov' referenced before assignment
2022-12-26 22:00:07,671 [INFO] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Sending telemetry data.
2022-12-26 22:00:07,683 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Telemetry data couldn't be sent, but the command ran successfully.
2022-12-26 22:00:07,683 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: [Error]: <urlopen error [Errno -2] Name or service not known>
2022-12-26 22:00:07,683 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Execution status: FAIL
2022-12-26 22:00:08,081 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
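
From the traceback, my guess is that trt_output_process_fn only binds output_meta_cov inside a branch that matches the engine outputs against the spec’s class list, so any mismatch leaves the variable unbound. Here is a minimal runnable sketch of that generic Python pitfall (my assumption of the mechanism, not the actual nvidia_tao_deploy source; the tensor names and the shape-matching rule are made up for illustration):

import numpy as np

def trt_output_process_fn(outputs, target_classes):
    # outputs: output-tensor name -> numpy array (names below are assumed)
    num_classes = len(target_classes)
    for name, tensor in outputs.items():
        if tensor.shape[0] == num_classes:        # coverage head: 1 channel per class
            output_meta_cov = tensor
        elif tensor.shape[0] == num_classes * 4:  # bbox head: 4 channels per class
            output_meta_bbox = tensor
    # If the spec's class list disagrees with the engine, neither branch
    # ever runs and the next line raises:
    # UnboundLocalError: local variable 'output_meta_cov' referenced before assignment
    return output_meta_cov, output_meta_bbox

# Engine exported for 3 classes, spec (wrongly) listing 5 -> the error above.
outputs = {
    "output_cov/Sigmoid": np.zeros((3, 34, 60)),
    "output_bbox/BiasAdd": np.zeros((12, 34, 60)),
}
trt_output_process_fn(outputs, ["car", "cyclist", "pedestrian", "classA", "classB"])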

Can you attach the training spec file and the inference spec file (detectnet_v2_inference_kitti_etlt.txt)?

Hi, after digging a little, it turned out to be a simple misspelling: while copy/pasting class blocks in the conf file to test with 5 classes, I wrote one class name wrong. Everything works now :^).
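
In case it helps others, a quick way to catch this kind of copy/paste typo is to count every class-name string in the spec: a misspelled copy shows up as a name with a lone occurrence. A rough sketch (the target_classes/key fields follow the usual detectnet_v2 spec layout; adapt the regex to your own spec):

import re
from collections import Counter

# Example spec excerpt with a deliberate typo ("pedestrain" vs "pedestrian").
spec_text = '''
inferencer_config {
  target_classes: "car"
  target_classes: "pedestrian"
}
bbox_handler_config {
  classwise_bbox_handler_config { key: "car" value: {} }
  classwise_bbox_handler_config { key: "pedestrain" value: {} }
}
'''

# Count every quoted class name attached to a class field.
counts = Counter(
    name for _, name in re.findall(r'(target_classes|key)\s*:\s*"([^"]+)"', spec_text)
)
for name, n in sorted(counts.items()):
    flag = "  <-- check spelling" if n == 1 else ""
    print(f"{name}: {n}{flag}")

# Prints:
#   car: 2
#   pedestrain: 1  <-- check spelling
#   pedestrian: 1  <-- check spelling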