IndexError when running TrafficCamNet inference in Docker

Please provide the following information when requesting support.

• Hardware: GeForce RTX 3050
• Network Type: Detectnet_v2
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file:

inferencer_config{
  # defining target class names for the experiment.
  # Note: This must be mentioned in order of the networks classes.
  target_classes: "car"
  # Inference dimensions.
  image_width: 960
  image_height: 544
  # Must match what the model was trained for.
  image_channels: 3
  batch_size: 1
  gpu_index: 0
  #model handler config
  tlt_config{
    model: "/home/tao/resnet18_trafficcamnet.tlt"
  }
}
bbox_handler_config{
  kitti_dump: true
  disable_overlay: true
  overlay_linewidth: 4
  classwise_bbox_handler_config{
    key:"car"
    value: {
      confidence_model: "aggregate_cov"
      output_map: "car"
      bbox_color{
        R: 0
        G: 255
        B: 0
      }
      clustering_config{
        coverage_threshold: 0.00
        dbscan_eps: 0.3
        dbscan_min_samples: 1
        minimum_bounding_box_height: 4
      }
    }
  }
  classwise_bbox_handler_config{
    key:"default"
    value: {
      confidence_model: "aggregate_cov"
      bbox_color{
        R: 255
        G: 255
        B: 255
      }
      clustering_config{
        coverage_threshold: 0.00
        dbscan_eps: 0.3
        dbscan_min_samples: 1
        minimum_bounding_box_height: 4
      }
    }
  }
}

• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I’m trying to run inference with TrafficCamNet in a Docker container using the following command:

docker run --rm --gpus all -v ./:/home/tao/ nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 \
detectnet_v2 inference -e /home/tao/resnet18_trafficcamnet_infer_spec.txt \
-i /home/tao/input_images/ -r /home/tao/input_images/ -k tlt_encode

and I’m getting the following error:

INFO: Initialized model
INFO: Commencing inference
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 3, 544, 960)       0         
_________________________________________________________________
model_1 (Model)              [(None, 4, 34, 60), (None 11558548  
=================================================================
Total params: 11,558,548
Trainable params: 11,546,900
Non-trainable params: 11,648
_________________________________________________________________
  0%|          | 0/7 [00:03<?, ?it/s]
INFO: list index out of range
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/postprocessor/bbox_handler.py", line 93, in render_single_image_output
    bbox_list, confidence_list = _get_bbox_and_confs(class_wise_detections[key][idx],
IndexError: list index out of range
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/inference.py", line 294, in <module>
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/inference.py", line 278, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/inference.py", line 267, in main
    inference_wrapper_batch(inferencer_config, bbox_handler_config,
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/inference.py", line 190, in inference_wrapper_batch
    bboxer.render_outputs(classwise_detections,
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/postprocessor/bbox_handler.py", line 434, in render_outputs
    pool.map(partial(render_single_image_output,
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
IndexError: list index out of range
Execution status: FAIL

The inference spec file is the one listed above under “Training spec file”.

The model was downloaded from the TrafficCamNet page on NVIDIA NGC.

The error comes from https://github.com/NVIDIA/tao_tensorflow1_backend/blob/main/nvidia_tao_tf1/cv/detectnet_v2/postprocessor/bbox_handler.py#L93.
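The failing lookup can be sketched like this (hypothetical data structures, not the TAO source; `render_single_image_output` here is a simplified stand-in for the real function at that line):

```python
# Minimal sketch of why a mismatch between the classes in the spec file and
# the classes the model was trained on can surface as
# "IndexError: list index out of range" during rendering.

def render_single_image_output(idx, class_wise_detections, target_classes):
    """Mimic the failing lookup: for each target class, fetch the
    detections belonging to image number `idx`."""
    rendered = []
    for key in target_classes:
        per_image_lists = class_wise_detections[key]
        # Raises IndexError when the per-class list holds fewer entries
        # than the number of images being inferred.
        rendered.append((key, per_image_lists[idx]))
    return rendered

# With only "car" declared in the spec, the per-class detection lists can
# end up misaligned with the number of input images, so image index 1 is
# already out of range:
detections = {"car": [["bbox_for_image_0"]]}
try:
    render_single_image_output(1, detections, ["car"])
except IndexError as exc:
    print("IndexError:", exc)  # same symptom as the inference run
```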

Was your model trained for only one class (car)?

@Morganh thanks for the quick reply. No, the model was trained on 4 classes. It wasn’t clear to me that all classes are required.

Once I added the 4 classes, the error is gone. However, it doesn’t produce any output images with rendered bounding boxes, even though disable_overlay is set to true. With kitti_dump set to true it does produce some text files.
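For reference, the fix was declaring all four classes in target_classes, in the network’s class order. A rough sketch of what that looks like (the class names below are assumptions on my part; verify the exact names and ordering against the TrafficCamNet model card on NGC):

```
inferencer_config{
  # All classes the model was trained on, in the network's class order.
  # NOTE: names below are assumed -- check the model card for the exact
  # class names and their order.
  target_classes: "car"
  target_classes: "persons"
  target_classes: "road_sign"
  target_classes: "two_wheeler"
}
```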

Is there an issue with disable_overlay, or is there some additional configuration required?

That is expected. If disable_overlay: true is set, then according to https://github.com/NVIDIA/tao_tensorflow1_backend/blob/c7a3926ddddf3911842e057620bceb45bb5303cc/nvidia_tao_tf1/cv/detectnet_v2/scripts/inference.py#L110 and https://github.com/NVIDIA/tao_tensorflow1_backend/blob/main/nvidia_tao_tf1/cv/detectnet_v2/postprocessor/bbox_handler.py#L128, the images will not be saved.
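So, to save images with the rendered bounding boxes, set disable_overlay to false in the spec file. A minimal sketch of the relevant part of bbox_handler_config:

```
bbox_handler_config{
  kitti_dump: true        # still write KITTI-format label text files
  disable_overlay: false  # render and save images with bounding boxes
  overlay_linewidth: 4
}
```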

OK, thanks. That works. However, that option name is very misleading.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.