Evaluating pretrained models with Detectnet_v2

• Hardware: NVIDIA TITAN Xp. The computer has an Intel® Xeon® CPU X5680 @ 3.33 GHz × 12 with 24 GB RAM and is running Ubuntu 22.04.5 LTS.
• Network Type: Detectnet_v2
• TAO Version (Please run “tlt info --verbose” and share “docker_tag” here)
(launcher) harold@TrainingComp:~/workspace/tao-experiments/data/training/image_2$ tao info --verbose
Configuration of the TAO Toolkit Instance

task_group:
    model:
        dockers:
            nvidia/tao/tao-toolkit:
                5.5.0-pyt:
                    docker_registry: nvcr.io
                    tasks:
                        1. action_recognition
                        2. centerpose
                        3. visual_changenet
                        4. deformable_detr
                        5. dino
                        6. grounding_dino
                        7. mask_grounding_dino
                        8. mask2former
                        9. mal
                        10. ml_recog
                        11. ocdnet
                        12. ocrnet
                        13. optical_inspection
                        14. pointpillars
                        15. pose_classification
                        16. re_identification
                        17. classification_pyt
                        18. segformer
                        19. bevfusion
                5.0.0-tf1.15.5:
                    docker_registry: nvcr.io
                    tasks:
                        1. bpnet
                        2. classification_tf1
                        3. converter
                        4. detectnet_v2
                        5. dssd
                        6. efficientdet_tf1
                        7. faster_rcnn
                        8. fpenet
                        9. lprnet
                        10. mask_rcnn
                        11. multitask_classification
                        12. retinanet
                        13. ssd
                        14. unet
                        15. yolo_v3
                        16. yolo_v4
                        17. yolo_v4_tiny
                5.5.0-tf2:
                    docker_registry: nvcr.io
                    tasks:
                        1. classification_tf2
                        2. efficientdet_tf2
    dataset:
        dockers:
            nvidia/tao/tao-toolkit:
                5.5.0-data-services:
                    docker_registry: nvcr.io
                    tasks:
                        1. augmentation
                        2. auto_label
                        3. annotations
                        4. analytics
    deploy:
        dockers:
            nvidia/tao/tao-toolkit:
                5.5.0-deploy:
                    docker_registry: nvcr.io
                    tasks:
                        1. visual_changenet
                        2. centerpose
                        3. classification_pyt
                        4. classification_tf1
                        5. classification_tf2
                        6. deformable_detr
                        7. detectnet_v2
                        8. dino
                        9. dssd
                        10. efficientdet_tf1
                        11. efficientdet_tf2
                        12. faster_rcnn
                        13. grounding_dino
                        14. mask_grounding_dino
                        15. mask2former
                        16. lprnet
                        17. mask_rcnn
                        18. ml_recog
                        19. multitask_classification
                        20. ocdnet
                        21. ocrnet
                        22. optical_inspection
                        23. retinanet
                        24. segformer
                        25. ssd
                        26. trtexec
                        27. unet
                        28. yolo_v3
                        29. yolo_v4
                        30. yolo_v4_tiny
format_version: 3.0
toolkit_version: 5.5.0

I have a very general question about evaluation of the stock NGC models that we have not trained ourselves.

What is the correct way to determine mAP values for an off-the-shelf pretrained model from NGC, such as Trafficcamnet?

In my case I would like to evaluate Trafficcamnet. I can download the .onnx for v1.0.3 and create an .engine file for my machine with it, and I have been trying to use tao deploy detectnet_v2 evaluate to get mAP for this model using the KITTI images from the detectnet_v2 Jupyter notebook tutorial, but I can't seem to get the spec file syntax right for evaluation.

In the Jupyter notebook, when we trained our own model (which I successfully did this spring), the evaluation step used the same spec file that was used for training. At that time I used:
!tao deploy detectnet_v2 evaluate -e $SPECS_DIR/detectnet_v2_retrain_trafficcamnet_kitti.txt
-m $USER_EXPERIMENT_DIR/experiment_dir_final/trafficcamnet_detector_pruned.engine
-i $DATA_DOWNLOAD_DIR/training/image_2
-l $DATA_DOWNLOAD_DIR/training/label_2
-r $USER_EXPERIMENT_DIR/experiment_dir_final_Mar7/
within the notebook.

I am assuming the problem is that I do not have the spec file that was used to train the off-the-shelf pretrained version of Trafficcamnet. I assume there must be some way to derive mAP values for a fully trained model from the NGC catalog, but I have not been able to figure out what it is.

Is tao deploy detectnet_v2 evaluate the right approach? If so, what should the spec file look like? If not, what should I be using?

I’m not able to upgrade to TAO 6.0 at this time (I’m limited by my GPU), so I am hoping that someone can still shed some light on this for me with TAO 5.5. Thank you and best regards!

Here is a reference for running evaluation against PeopleNet directly.

Please see PeopleNet v1.0 unpruned model shows very bad results on COCO dataset - #11 by Morganh

But if you want to run with TAO 5.5 and Trafficcamnet, some changes are needed.
Please refer to the spec below.

random_seed: 42
dataset_config {
data_sources {
tfrecords_path: "your tfrecord"  # generated with dataset_convert; see the sketch after this spec
image_directory_path: "your own image"
}
image_extension: "jpg"
target_class_mapping {
key: "car"
value: "car"
}
target_class_mapping {
key: "person"
value: "person"
}
target_class_mapping {
key: "bicycle"
value: "bicycle"
}
target_class_mapping {
key: "road_sign"
value: "road_sign"
}

validation_fold: 0
}
augmentation_config {
preprocessing {
output_image_width: 960
output_image_height: 544
min_bbox_width: 1.0
min_bbox_height: 1.0
output_image_channel: 3
}
spatial_augmentation {
hflip_probability: 0.5
zoom_min: 1.0
zoom_max: 1.0
translate_max_x: 8.0
translate_max_y: 8.0
}
color_augmentation {
hue_rotation_max: 25.0
saturation_shift_max: 0.20000000298
contrast_scale_max: 0.10000000149
contrast_center: 0.5
}
}
postprocessing_config {
target_class_config {
key: "car"
value {
clustering_config {
coverage_threshold: 0.00499999988824
dbscan_eps: 0.20000000298
dbscan_min_samples: 1
minimum_bounding_box_height: 4
}
}
}
target_class_config {
key: "person"
value {
clustering_config {
coverage_threshold: 0.00499999988824
dbscan_eps: 0.15000000596
dbscan_min_samples: 1
minimum_bounding_box_height: 4
}
}
}
target_class_config {
key: "bicycle"
value {
clustering_config {
coverage_threshold: 0.00499999988824
dbscan_eps: 0.15000000596
dbscan_min_samples: 1
minimum_bounding_box_height: 4
}
}
}
target_class_config {
key: "road_sign"
value {
clustering_config {
coverage_threshold: 0.00499999988824
dbscan_eps: 0.15000000596
dbscan_min_samples: 1
minimum_bounding_box_height: 4
}
}
}
}
model_config {
pretrained_model_file: "resnet18_trafficcamnet.tlt"
num_layers: 18
#load_graph: True
use_batch_norm: true
objective_set {
bbox {
scale: 35.0
offset: 0.5
}
cov {
}
}
arch: "resnet"
}
evaluation_config {
validation_period_during_training: 1
first_validation_epoch: 1
minimum_detection_ground_truth_overlap {
key: "car"
value: 0.5
}
minimum_detection_ground_truth_overlap {
key: "person"
value: 0.5
}
minimum_detection_ground_truth_overlap {
key: "bicycle"
value: 0.5
}
minimum_detection_ground_truth_overlap {
key: "road_sign"
value: 0.5
}
evaluation_box_config {
key: "car"
value {
minimum_height: 10
maximum_height: 9999
minimum_width: 10
maximum_width: 9999
}
}
evaluation_box_config {
key: "person"
value {
minimum_height: 10
maximum_height: 9999
minimum_width: 10
maximum_width: 9999
}
}
evaluation_box_config {
key: "bicycle"
value {
minimum_height: 10
maximum_height: 9999
minimum_width: 10
maximum_width: 9999
}
}
evaluation_box_config {
key: "road_sign"
value {
minimum_height: 10
maximum_height: 9999
minimum_width: 10
maximum_width: 9999
}
}
average_precision_mode: INTEGRATE
}
cost_function_config {
target_classes {
name: "car"
class_weight: 1.0
coverage_foreground_weight: 0.0500000007451
objectives {
name: "cov"
initial_weight: 1.0
weight_target: 1.0
}
objectives {
name: "bbox"
initial_weight: 10.0
weight_target: 10.0
}
}
target_classes {
name: "person"
class_weight: 8.0
coverage_foreground_weight: 0.0500000007451
objectives {
name: "cov"
initial_weight: 1.0
weight_target: 1.0
}
objectives {
name: "bbox"
initial_weight: 10.0
weight_target: 10.0
}
}
target_classes {
name: "bicycle"
class_weight: 4.0
coverage_foreground_weight: 0.0500000007451
objectives {
name: "cov"
initial_weight: 1.0
weight_target: 1.0
}
objectives {
name: "bbox"
initial_weight: 10.0
weight_target: 10.0
}
}
target_classes {
name: "road_sign"
class_weight: 4.0
coverage_foreground_weight: 0.0500000007451
objectives {
name: "cov"
initial_weight: 1.0
weight_target: 1.0
}
objectives {
name: "bbox"
initial_weight: 10.0
weight_target: 10.0
}
}
enable_autoweighting: true
max_objective_weight: 0.999899983406
min_objective_weight: 9.99999974738e-05
}
training_config {
batch_size_per_gpu: 16
num_epochs: 10
learning_rate {
soft_start_annealing_schedule {
min_learning_rate: 10e-10
max_learning_rate: 10e-10
soft_start: 0.0
annealing: 0.3
}
}
regularizer {
type: L1
weight: 3.00000002618e-09
}
optimizer {
adam {
epsilon: 9.99999993923e-09
beta1: 0.899999976158
beta2: 0.999000012875
}
}
cost_scaling {
initial_exponent: 20.0
increment: 0.005
decrement: 1.0
}
checkpoint_interval: 10
}
bbox_rasterizer_config {
target_class_config {
key: "car"
value {
cov_center_x: 0.5
cov_center_y: 0.5
cov_radius_x: 0.40000000596
cov_radius_y: 0.40000000596
bbox_min_radius: 1.0
}
}
target_class_config {
key: "person"
value {
cov_center_x: 0.5
cov_center_y: 0.5
cov_radius_x: 1.0
cov_radius_y: 1.0
bbox_min_radius: 1.0
}
}
target_class_config {
key: "bicycle"
value {
cov_center_x: 0.5
cov_center_y: 0.5
cov_radius_x: 1.0
cov_radius_y: 1.0
bbox_min_radius: 1.0
}
}
target_class_config {
key: "road_sign"
value {
cov_center_x: 0.5
cov_center_y: 0.5
cov_radius_x: 1.0
cov_radius_y: 1.0
bbox_min_radius: 1.0
}
}
deadzone_radius: 0.400000154972
}
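A note on the "your tfrecord" placeholder in dataset_config: with this dataset_config, evaluation reads TFRecords rather than the raw KITTI labels, so the KITTI images/labels from the notebook need to be converted first. A minimal sketch, assuming the KITTI layout from the notebook (the conversion spec filename and output path are illustrative):

# Convert the KITTI images/labels into TFRecords; the output path then goes into
# tfrecords_path above, and image_directory_path points at the image folder.
tao model detectnet_v2 dataset_convert \
    -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
    -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval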

Then run:
# detectnet_v2 evaluate xxx
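With the TAO 5.5 launcher, the full evaluate command would look roughly as below (paths are illustrative, the spec is the one above, and the key comes from the Trafficcamnet model card download page):

tao model detectnet_v2 evaluate \
    -e /workspace/specs/detectnet_v2_eval_trafficcamnet_kitti.txt \
    -m /workspace/pretrained_models/trafficcamnet_v1.0/resnet18_trafficcamnet.tlt \
    -k <key_from_model_card>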

Thank you @Morganh !

I had to step away for some medical reasons, but I’m back to trying to figure this out.
I notice that this spec file is designed for the .tlt version of trafficcamnet. That seems to imply that it's for version 1.0.0.
Is it possible to evaluate the .etlt versions? (like pruned_v1.0.3, which is an .etlt, or pruned_onnx_v1.0.4, which is an .onnx)

I did try this spec file with the unpruned_v1.0 which is resnet18_trafficcamnet.tlt, but I got the following:

Set variable for the v1.0.0 UNPRUNED TLT model

%env TLT_MODEL_DIR=/workspace/pretrained_models/trafficcamnet_v1.0
# Set variable for the spec file
%env FINAL_SPECS_DIR={os.environ['USER_EXPERIMENT_DIR']}/specs/detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt
# Run the evaluation
!tao model detectnet_v2 evaluate \
    -e $FINAL_SPECS_DIR \
    -m $TLT_MODEL_DIR/resnet18_trafficcamnet.tlt \
    -k nvidia_tlt
env: TLT_MODEL_DIR=/workspace/pretrained_models/trafficcamnet_v1.0
env: FINAL_SPECS_DIR=/workspace/experiments/eval_stock_trafficcamnet/specs/detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt
2025-09-16 11:59:34,110 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2025-09-16 11:59:34,262 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2025-09-16 11:59:34,344 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
2025-09-16 18:59:35.222087: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2025-09-16 18:59:35,274 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2025-09-16 18:59:36,714 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:36,747 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:36,752 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:38,228 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-aa_2yb84 because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2025-09-16 18:59:38,439 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:40,283 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:40,331 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:40,337 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-16 18:59:41,217 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.spec_handler.spec_loader 113: Merging specification from /workspace/experiments/eval_stock_trafficcamnet/specs/detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/evaluate.py", line 253, in <module>
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/evaluate.py", line 223, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/utilities/timer.py", line 46, in wrapped_fn
    return_args = fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/evaluate.py", line 183, in main
    experiment_spec = load_experiment_spec(
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/spec_handler/spec_loader.py", line 136, in load_experiment_spec
    experiment_spec = load_proto(spec_path, experiment_spec, default_spec_path,
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/spec_handler/spec_loader.py", line 114, in load_proto
    _load_from_file(spec_path, proto_buffer)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/spec_handler/spec_loader.py", line 100, in _load_from_file
    merge_text_proto(f.read(), pb2)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 719, in Merge
    return MergeLines(
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 793, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 818, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 837, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 967, in _MergeField
    merger(tokenizer, message, field)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 1042, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 910, in _MergeField
    name = tokenizer.ConsumeIdentifierOrNumber()
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 1379, in ConsumeIdentifierOrNumber
    raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 98:1 : '//load_graph: True': Expected identifier or number, got /.
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: HTTPSConnectionPool(host='telemetry.metropolis.nvidia.com', port=443): Max retries exceeded with url: /api/v1/telemetry (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')))
Execution status: FAIL
2025-09-16 11:59:43,003 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 363: Stopping container.

Also on that note, should I be running tao deploy or tao model for the detectnet_v2 evaluate command?

Thank you for your patient advice!

H

Please delete //load_graph: True and retry.
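(Side note: these spec files are protobuf text format, which only treats # as a comment marker; that is why the spec posted above keeps the line as a # comment, while // trips the parser with the error you saw.)

#load_graph: True   # '#' comments are ignored by the protobuf text parser; '//' is not valid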

detectnet_v2 evaluate does not support evaluating against an .onnx or .etlt file. It only supports evaluating against an .hdf5 file (i.e., a .tlt file) or a TensorRT engine. Please refer to tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/detectnet_v2.ipynb at tao_5.5_release · NVIDIA/tao_tutorials · GitHub.
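Concretely, mirroring the commands already used in this thread (paths, filenames, and the gen_trt_engine flags are illustrative; check --help or the 5.5 notebook for the exact options on your install):

# .tlt / .hdf5 checkpoint -> evaluate with "tao model"
tao model detectnet_v2 evaluate -e <eval_spec.txt> -m resnet18_trafficcamnet.tlt -k <key>

# .onnx / .etlt release -> build a TensorRT engine first, then evaluate the engine with "tao deploy"
tao deploy detectnet_v2 gen_trt_engine -e <eval_spec.txt> -m <trafficcamnet.onnx or .etlt> \
    -k <key> --engine_file trafficcamnet.engine
tao deploy detectnet_v2 evaluate -e <eval_spec.txt> -m trafficcamnet.engine \
    -i <image_dir> -l <label_dir> -r <results_dir>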

Really? How is one able to judge their own trained model against the latest in the NGC zoo? (which for trafficcamnet is v1.0.4)

As for my attempts to evaluate the v1.0 .tlt file: I removed the //load_graph: True line and I am still having errors. I’ll paste my notebook entry and the output, and try to link my spec file:

Set variable for the v1.0.0 UNPRUNED TLT model

%env TLT_MODEL_DIR=/workspace/pretrained_models/trafficcamnet_v1.0
# Set variable for the spec file
%env FINAL_SPECS_DIR={os.environ['USER_EXPERIMENT_DIR']}/specs/detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt
# Run the evaluation
!tao model detectnet_v2 evaluate \
    -e $FINAL_SPECS_DIR \
    -m $TLT_MODEL_DIR/resnet18_trafficcamnet.tlt \
    -k nvidia_tlt
env: TLT_MODEL_DIR=/workspace/pretrained_models/trafficcamnet_v1.0
env: FINAL_SPECS_DIR=/workspace/experiments/eval_stock_trafficcamnet/specs/detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt
2025-09-17 22:04:22,876 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2025-09-17 22:04:23,031 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2025-09-17 22:04:23,126 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
2025-09-18 05:04:24.194229: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2025-09-18 05:04:24,246 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2025-09-18 05:04:26,184 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:26,283 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:26,290 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:28,289 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-c_1z7p8_ because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2025-09-18 05:04:28,548 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:30,226 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:30,259 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:30,263 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-18 05:04:30,943 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.spec_handler.spec_loader 113: Merging specification from /workspace/experiments/eval_stock_trafficcamnet/specs/detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/evaluate.py", line 253, in <module>
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/evaluate.py", line 223, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/utilities/timer.py", line 46, in wrapped_fn
    return_args = fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/evaluate.py", line 183, in main
    experiment_spec = load_experiment_spec(
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/spec_handler/spec_loader.py", line 136, in load_experiment_spec
    experiment_spec = load_proto(spec_path, experiment_spec, default_spec_path,
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/spec_handler/spec_loader.py", line 114, in load_proto
    _load_from_file(spec_path, proto_buffer)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/spec_handler/spec_loader.py", line 100, in _load_from_file
    merge_text_proto(f.read(), pb2)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 719, in Merge
    return MergeLines(
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 793, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 818, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 837, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 967, in _MergeField
    merger(tokenizer, message, field)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 1042, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 967, in _MergeField
    merger(tokenizer, message, field)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 1042, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 932, in _MergeField
    raise tokenizer.ParseErrorPreviousToken(
google.protobuf.text_format.ParseError: 154:1 : Message type "EvaluationConfig.EvaluationBoxConfigEntry" has no field named "evaluation_box_config".
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: HTTPSConnectionPool(host='telemetry.metropolis.nvidia.com', port=443): Max retries exceeded with url: /api/v1/telemetry (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')))
Execution status: FAIL
2025-09-17 22:04:33,407 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 363: Stopping container.

detectnet_v2_eval_trafficcamnet_kitti_FP16_forum.txt (4.7 KB)

There is a format issue in the spec file: the bicycle evaluation_box_config block is missing its closing brace, so the road_sign block ends up nested inside it (hence the "has no field named evaluation_box_config" error). Please change this:

evaluation_box_config {
key: "bicycle"
value {
minimum_height: 10
maximum_height: 9999
minimum_width: 10
maximum_width: 9999
}
evaluation_box_config {
	key: "road_sign"
	value {
	minimum_height: 10
	maximum_height: 9999
	minimum_width: 10
	maximum_width: 9999
	}
}

to this:

evaluation_box_config {
key: "bicycle"
value {
minimum_height: 10
maximum_height: 9999
minimum_width: 10
maximum_width: 9999
}
}
evaluation_box_config {
	key: "road_sign"
	value {
	minimum_height: 10
	maximum_height: 9999
	minimum_width: 10
	maximum_width: 9999
	}
}

Additionally, you can refer to the format in tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/specs/detectnet_v2_train_resnet18_kitti.txt at tao_5.5_release · NVIDIA/tao_tutorials · GitHub.