DetectNet_v2: enable_auto_resize: true causes 0 mAP in TAO 5.0.0

Please provide the following information when requesting support.

• Hardware: T4 (g4dn at AWS)
• Network Type: TrafficCamNet (detectnet_v2)
• TLT Version: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
• Training spec file:

detectnet_v2_train_resnet18_kitti.txt (7.0 KB)

detectnet_v2_tfrecords_kitti_trainval.txt (608 Bytes)

I am trying to retrain the TrafficCamNet model on my own dataset using TAO.

  • The pretrained TrafficCamNet input size is 960×544
  • My dataset images are 1920×1200
  • In my training spec, I set the model resolution to 960×544
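For concreteness, the per-axis scale factors implied by these sizes can be computed as below (a minimal Python sketch; scale_bbox is my own illustration, not a TAO function). Note that the two aspect ratios differ (1.60 vs ~1.76), so the resize is non-uniform and boxes must be scaled per axis:

```python
# Resolutions from the setup above.
src_w, src_h = 1920, 1200   # dataset images
dst_w, dst_h = 960, 544     # model input (TrafficCamNet)

sx = dst_w / src_w          # horizontal scale factor: 0.5
sy = dst_h / src_h          # vertical scale factor: ~0.4533

def scale_bbox(x1, y1, x2, y2):
    """Scale a box from source-resolution to model-resolution coordinates."""
    return x1 * sx, y1 * sy, x2 * sx, y2 * sy

# Aspect ratios differ (1920/1200 = 1.60 vs 960/544 ~= 1.76), so a direct
# resize stretches the image; labels must use the same per-axis factors.
```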

Case 1: Without enable_auto_resize

Initially, I did not set enable_auto_resize: true. Training gave reasonable validation results around epoch 25, before overfitting:

epoch_20: vehicle AP ≈ 0.55
epoch_30: vehicle AP ≈ 0.98

However, inference on test data was poor (missed many objects).

Case 2: With enable_auto_resize: true

To explicitly handle the resolution mismatch, I added enable_auto_resize: true to my spec (everything else unchanged).
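For reference, the relevant part of my spec now looks roughly like this (abridged; field names as in the detectnet_v2 augmentation_config, values from my setup):

```
augmentation_config {
  preprocessing {
    output_image_width: 960
    output_image_height: 544
    enable_auto_resize: true
  }
}
```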

But with this option enabled, validation mAP was always 0.0, even after 30 epochs:

epoch_20: vehicle AP = 0.0
epoch_30: vehicle AP = 0.0

It looks like enable_auto_resize resizes the images but does not adjust bounding box coordinates from the TFRecords, so labels become misaligned.

Did I misunderstand how enable_auto_resize should work, or is this a bug in TAO 5.0.0?

Please share an image file and a label file.
Also, please run the experiments in the 4.0.1 docker to narrow this down: docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5 /bin/bash

I ended up resizing my images manually. The AP still stayed at 0.0 through epochs 100–105, but at epoch 120 it finally reached ~80% AP and gave decent inference on new images. So there is probably no issue with enable_auto_resize; the specifics of my dataset likely caused the slow AP growth.
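For reference, the label side of that manual resize boils down to scaling the KITTI bbox columns by the same factors as the images. A minimal sketch (my own helper, not a TAO utility; the images themselves were resized separately with an image tool):

```python
# Scale factors for my resolutions: 1920x1200 -> 960x544.
SX, SY = 960 / 1920, 544 / 1200

def scale_kitti_line(line, sx=SX, sy=SY):
    """Scale the bbox fields (columns 5-8) of one KITTI label line."""
    f = line.split()
    x1, y1, x2, y2 = (float(v) for v in f[4:8])
    f[4:8] = [f"{v:.2f}" for v in (x1 * sx, y1 * sy, x2 * sx, y2 * sy)]
    return " ".join(f)

print(scale_kitti_line("car 0.0 0 0.0 100.0 200.0 300.0 400.0 0 0 0 0 0 0 0"))
# -> bbox columns become 50.00 90.67 150.00 181.33
```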

However, I faced a problem when generating the .engine.

  1. My pruned and retrained .hdf5 model produces good inference.

  2. I exported the model with the following command:
detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.hdf5 -e specs/detectnet_v2_retrain_resnet18_kitti.txt -o detectnet_v2/experiment_dir_final/resnet18_detector.onnx --onnx_route tf2onnx --gen_ds_config

(By the way, I also tried exporting with --data_type=int8 and provided calibration parameters, but the command never generated a calibration file.)

  3. After the export, I created an .engine file on my Jetson Xavier NX using the command:

trtexec --onnx=resnet18_detector.onnx --saveEngine=resnet18_detector_night.engine

When I ran inference with this engine, it produced huge bounding boxes almost the size of the full frame:

  4. I also tried to generate an engine on the T4, which I used for training.
    Using the nvcr.io/nvidia/tao/tao-toolkit:5.2.0-deploy container, I invoked the command:
detectnet_v2 gen_trt_engine -m detectnet_v2/experiment_dir_final/resnet18_detector.onnx --data_type int8 --batches 10 --batch_size 4 --max_batch_size 64 --engine_file detectnet_v2/experiment_dir_final/resnet18_detector.trt.int8 --cal_cache_file detectnet_v2/experiment_dir_final/calibration.bin -e specs/detectnet_v2_retrain_resnet18_kitti.txt --results_dir detectnet_v2/experiment_dir_final --verbose

(As suggested in the Jupyter notebook, I pointed detectnet_v2_retrain_resnet18_kitti.txt at the actual image directory instead of the data root before launching the command.)

This generated the engine and calibration files. However, inference with

detectnet_v2 inference -e specs/detectnet_v2_inference_kitti_engine.txt -m detectnet_v2/experiment_dir_final/resnet18_detector.trt.int8 -r detectnet_v2/infer_testing -i data/testing -b 4

was also meaningless:

What am I doing wrong?

Here are my retrain and inference config files:

detectnet_v2_retrain_resnet18_kitti.txt (6.9 KB)

detectnet_v2_inference_kitti_tlt.txt (3.0 KB)

detectnet_v2_inference_kitti_engine.txt (3.1 KB)

For the above result, which docker did you use: TAO 5.0 or TAO 4.0.1?

Could you check against more test images? Are the inference results as expected?

The calibration file should be available. Could you double-check? Is there any log for it? Did you ever successfully run tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/detectnet_v2.ipynb (tao_5.5_release branch of NVIDIA/tao_tutorials on GitHub)?

TAO 5.0 (http://nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5)

Sure, I’ve tested against 300 new images (they were not used in training); the inference is good, as expected.

Here are the logs:

  1. detectnet_v2 calibration_tensorfile
root@4e8c31023546:/workspace/tao-experiments# detectnet_v2 calibration_tensorfile -m 10 -e specs/detectnet_v2_retrain_resnet18_kitti.txt -o detectnet_v2/calibration.tensor
2025-09-15 08:50:22.638980: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2025-09-15 08:50:23,080 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2025-09-15 08:50:28,026 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:50:28,171 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:50:28,192 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:50:36,161 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:50:36,201 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:50:36,205 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:50:36,457 [TAO Toolkit] [INFO] __main__ 172: This method is soon to be deprecated. Please use the -e option in the export command to instantiate the dataloader and generate samples for calibration from the training dataloader.
2025-09-15 08:50:36,457 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.spec_handler.spec_loader 113: Merging specification from specs/detectnet_v2_retrain_resnet18_kitti.txt
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:153: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

2025-09-15 08:50:36,464 [TAO Toolkit] [WARNING] tensorflow 137: From /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:153: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

2025-09-15 08:50:37,015 [TAO Toolkit] [INFO] root 522: Sampling mode of the dataloader was set to user_defined.
2025-09-15 08:50:37,016 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 175: Serial augmentation enabled = False
2025-09-15 08:50:37,016 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 177: Pseudo sharding enabled = False
2025-09-15 08:50:37,016 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 269: Max Image Dimensions (all sources): (0, 0)
2025-09-15 08:50:37,016 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 380: number of cpus: 4, io threads: 8, compute threads: 4, buffered batches: 4
2025-09-15 08:50:37,016 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 387: total dataset size 5720, number of sources: 1, batch size per gpu: 8, steps: 715
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow_core/python/autograph/converters/directives.py:119: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.

2025-09-15 08:50:37,060 [TAO Toolkit] [WARNING] tensorflow 137: From /usr/local/lib/python3.8/dist-packages/tensorflow_core/python/autograph/converters/directives.py:119: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.

2025-09-15 08:50:39,336 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataloader.default_dataloader 546: Bounding box coordinates were detected in the input specification! Bboxes will be automatically converted to polygon coordinates.
2025-09-15 08:50:42,483 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 409: shuffle: True - shard 0 of 1
2025-09-15 08:50:42,489 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 479: sampling 1 datasets with weights:
2025-09-15 08:50:42,489 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 481: source: 0 weight: 1.000000
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow_core/python/autograph/converters/directives.py:119: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.

2025-09-15 08:50:43,224 [TAO Toolkit] [WARNING] tensorflow 137: From /usr/local/lib/python3.8/dist-packages/tensorflow_core/python/autograph/converters/directives.py:119: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.

Writing calibration tensorfile:   0%|                                                            | 0/10 [00:00<?, ?it/s]WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/calibration_tensorfile.py:102: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2025-09-15 08:50:43,944 [TAO Toolkit] [WARNING] tensorflow 137: From /usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/calibration_tensorfile.py:102: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

Writing calibration tensorfile: 100%|███████████████████████████████████████████████████| 10/10 [00:07<00:00,  1.32it/s]
Time taken to run __main__:main: 0:00:15.073744.
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: HTTPSConnectionPool(host='telemetry.metropolis.nvidia.com', port=443): Max retries exceeded with url: /api/v1/telemetry (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')))
Execution status: PASS
  2. detectnet_v2 export
root@4e8c31023546:/workspace/tao-experiments# detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.hdf5 --data_type int8 --cal_data_file detectnet_v2/calibration.tensor --cal_cache_file detectnet_v2/experiment_dir_final/cal.bin -e specs/detectnet_v2_retrain_resnet18_kitti.txt -o detectnet_v2/experiment_dir_final/resnet18_detector.onnx --batches 10 --max_batch_size 16 --batch_size 8
2025-09-15 08:57:38.261319: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2025-09-15 08:57:38,314 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2025-09-15 08:57:39,939 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:57:39,980 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:57:39,984 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:57:43,938 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:57:43,977 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:57:43,981 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 08:57:44,659 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.app 264: Saving exported model to detectnet_v2/experiment_dir_final/resnet18_detector.onnx
2025-09-15 08:57:44,659 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 119: Setting the onnx export route to keras2onnx
2025-09-15 08:57:44,659 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.spec_handler.spec_loader 113: Merging specification from specs/detectnet_v2_retrain_resnet18_kitti.txt
2025-09-15 08:57:44,823 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 429: Using input nodes: ['input_1']
2025-09-15 08:57:44,823 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 430: Using output nodes: ['output_cov/Sigmoid', 'output_bbox/BiasAdd']
Checking for quantized layers in the exporter.
Quantized model: False
Loaded model
The ONNX operator number change on the optimization: 137 -> 53
2025-09-15 08:57:48,437 [TAO Toolkit] [INFO] keras2onnx 347: The ONNX operator number change on the optimization: 137 -> 53
2025-09-15 08:57:48,438 [TAO Toolkit] [WARNING] onnxmltools 71: The maximum opset needed by this model is only 9.
2025-09-15 08:57:49,117 [TAO Toolkit] [INFO] root 522: Sampling mode of the dataloader was set to user_defined.
2025-09-15 08:57:49,118 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 175: Serial augmentation enabled = False
2025-09-15 08:57:49,118 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 177: Pseudo sharding enabled = False
2025-09-15 08:57:49,118 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 269: Max Image Dimensions (all sources): (0, 0)
2025-09-15 08:57:49,118 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 380: number of cpus: 4, io threads: 8, compute threads: 4, buffered batches: 4
2025-09-15 08:57:49,118 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 387: total dataset size 5720, number of sources: 1, batch size per gpu: 8, steps: 715
2025-09-15 08:57:51,457 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataloader.default_dataloader 546: Bounding box coordinates were detected in the input specification! Bboxes will be automatically converted to polygon coordinates.
2025-09-15 08:57:54,300 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 409: shuffle: True - shard 0 of 1
2025-09-15 08:57:54,305 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 479: sampling 1 datasets with weights:
2025-09-15 08:57:54,305 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 481: source: 0 weight: 1.000000
2025-09-15 08:57:55,765 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.export.detectnet_calibrator 113: Number of samples from the dataloader: 5720
2025-09-15 08:57:56,117 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 479: Calibration takes time especially if number of batches is large.
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: HTTPSConnectionPool(host='telemetry.metropolis.nvidia.com', port=443): Max retries exceeded with url: /api/v1/telemetry (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')))
Execution status: PASS

The cal.bin file has not been created:

root@4e8c31023546:/workspace/tao-experiments# ls detectnet_v2/experiment_dir_final
resnet18_detector.onnx

Alternatively, when using images for calibration:

detectnet_v2 export

root@4e8c31023546:/workspace/tao-experiments# detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.hdf5 --data_type int8 --cal_image_dir data/training/image_2 --cal_data_file detectnet_v2/experiment_dir_final/cal.tensorfile --cal_cache_file detectnet_v2/experiment_dir_final/cal.bin -e specs/detectnet_v2_retrain_resnet18_kitti.txt -o detectnet_v2/experiment_dir_final/resnet18_detector.onnx --batches 10 --max_batch_size 16 --batch_size 8
2025-09-15 09:06:24.862175: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2025-09-15 09:06:24,916 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2025-09-15 09:06:26,520 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 09:06:26,560 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 09:06:26,564 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 09:06:30,498 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 09:06:30,538 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 09:06:30,542 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2025-09-15 09:06:31,223 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.app 264: Saving exported model to detectnet_v2/experiment_dir_final/resnet18_detector.onnx
2025-09-15 09:06:31,223 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 119: Setting the onnx export route to keras2onnx
2025-09-15 09:06:31,224 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.spec_handler.spec_loader 113: Merging specification from specs/detectnet_v2_retrain_resnet18_kitti.txt
2025-09-15 09:06:31,384 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 429: Using input nodes: ['input_1']
2025-09-15 09:06:31,384 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 430: Using output nodes: ['output_cov/Sigmoid', 'output_bbox/BiasAdd']
Checking for quantized layers in the exporter.
Quantized model: False
Loaded model
The ONNX operator number change on the optimization: 137 -> 53
2025-09-15 09:06:34,860 [TAO Toolkit] [INFO] keras2onnx 347: The ONNX operator number change on the optimization: 137 -> 53
2025-09-15 09:06:34,861 [TAO Toolkit] [WARNING] onnxmltools 71: The maximum opset needed by this model is only 9.
2025-09-15 09:06:35,378 [TAO Toolkit] [INFO] root 522: Sampling mode of the dataloader was set to user_defined.
2025-09-15 09:06:35,378 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 175: Serial augmentation enabled = False
2025-09-15 09:06:35,378 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 177: Pseudo sharding enabled = False
2025-09-15 09:06:35,379 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 269: Max Image Dimensions (all sources): (0, 0)
2025-09-15 09:06:35,379 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 380: number of cpus: 4, io threads: 8, compute threads: 4, buffered batches: 4
2025-09-15 09:06:35,379 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 387: total dataset size 5720, number of sources: 1, batch size per gpu: 8, steps: 715
2025-09-15 09:06:37,743 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataloader.default_dataloader 546: Bounding box coordinates were detected in the input specification! Bboxes will be automatically converted to polygon coordinates.
2025-09-15 09:06:40,630 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 409: shuffle: True - shard 0 of 1
2025-09-15 09:06:40,634 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 479: sampling 1 datasets with weights:
2025-09-15 09:06:40,635 [TAO Toolkit] [INFO] nvidia_tao_tf1.blocks.multi_source_loader.data_loader 481: source: 0 weight: 1.000000
2025-09-15 09:06:42,096 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.export.detectnet_calibrator 113: Number of samples from the dataloader: 5720
2025-09-15 09:06:42,450 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.export.keras_exporter 479: Calibration takes time especially if number of batches is large.
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: HTTPSConnectionPool(host='telemetry.metropolis.nvidia.com', port=443): Max retries exceeded with url: /api/v1/telemetry (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')))
Execution status: PASS

Again, cal.bin has not been generated:

root@4e8c31023546:/workspace/tao-experiments# ls detectnet_v2/experiment_dir_final
resnet18_detector.onnx

I used the detectnet_v2.ipynb notebook, but launched commands directly in the docker container.

Please refer to tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/detectnet_v2.ipynb (tao_5.5_release branch of NVIDIA/tao_tutorials on GitHub) and use detectnet_v2 gen_trt_engine to generate cal.bin.
For the poor int8 inference result, you can narrow it down by running the following:

  • How about an fp32 trt engine generated from the pruned onnx file?
  • How about running an int8 trt engine generated from the unpruned onnx file?
  • How about running an fp32 trt engine generated from the unpruned onnx file?

In the detectnet_v2_retrain_resnet18_kitti.txt file, model_config → pretrained_model_file was set to the pruned (not the retrained) model. I changed it to the retrained model before launching the detectnet_v2 export and gen_trt_engine commands.

After this:

  • The fp32 trt engine generated from either the retrained or the unpruned onnx file works well.
  • The int8 trt engine generated from the retrained onnx file does not produce any bboxes.
  • The int8 trt engine generated from the unpruned onnx file also produces almost no bboxes, apart from rare meaningless ones, like this:

So the problem seems to be with calibration.

Is my tensorrt_config in detectnet_v2_inference_kitti_engine.txt (used by detectnet_v2 inference) sufficient?

tensorrt_config {
  trt_engine: "/workspace/tao-experiments/detectnet_v2/experiment_dir_final/resnet18_detector.trt.int8"
  backend_data_type: INT8
  calibrator_config {
    calibration_cache: "/workspace/tao-experiments/detectnet_v2/experiment_dir_final/cal.bin"
  }
}

OK, glad to know it can work.

Please use the total training dataset to generate cal.bin again.

I generated cal.bin using the total training dataset. Still, the int8 engine does not produce any bboxes (I launched inference on the T4).

There is an error when generating cal.bin:

2025-09-17 15:52:19,011 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.calibrator 88: Calibrating image 6000 / 6000
[09/17/2025-15:52:19] [TRT] [I]   Calibrated batch 1499 in 0.0747333 seconds.
--- Logging error ---
Traceback (most recent call last):
  File "<frozen engine.calibrator>", line 87, in get_batch
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
    msg = self.format(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
    return fmt.format(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 681, in format
    s = self.formatMessage(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 650, in formatMessage
    return self._style.format(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 440, in format
    return self._format(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 436, in _format
    return self._fmt % values
TypeError: %d format: a real number is required, not NoneType
Call stack:
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/gen_trt_engine.py", line 3, in <module>
    __pyarmor_vax_001219__(__name__, __file__, <obfuscated PyArmor bytecode blob omitted>)
0\x29\x92\x85\x17\x60\xeb\x36\xe2\xfc\x01\x01\x06\x45\xca\x52\x13\xe4\x17\x6e\x1e\x7d\x24\x7b\x6b\x96\xee\xca\x18\xe0\x4e\x2b\x66\x18\xf7\xad\xc2\x41\xc2\x02\x51\x8a\xa5\x30\x86\x5a\x32\x2e\xd6\xaf\x8a\x2c\xd7\xad\xe8\x73\xec\x1e\xc5\x45\xc7\x07\x5c\xb5\x48\x55\x83\xcb\xa3\xa6\x74\xfb\x7a\xf5\x4d\x1e\xd3\xf5\xca\xe6\x4a\x08\x7d\xb5\x97\x8b\xb7\xa7\x80\x39\x89\xfc\xe3\xe2\xd1\x40\x9f\xcb\x8b\xf5\xd5\x7d\xe9\x1a\x76\x3c\xef\x34\x61\x75\x5f\xb7\x21\xc1\xa5\x77\xc4\x7f\xe2\x8c\x69\x35\xab\x67\x8b\x0e\x34\xfb\xa2\x1d\x76\xc0\x53\xf9\x5b\xd9\xfd\x7e\x4a\x5e\xcb\x2b\xb4\x3b\x36\x4d\x7c\x9e\x31\x16\x16\xbe\xc9\xa8\x74\xc0\x5f\x5d\x8c\xe7\x15\x87\x9b\xfe\x78\xd7\x9a\xac\xa2\xb1\xa4\x13\xcd\x41\xfe\xfb\x9e\xd8\xbc\x3c\xcf\x22\x6d\x37\x56\xe3\x8d\xc4\xab\x73\x97\xcb\x4c\x1d\x74\x9b\x38\x4f\xc1\x3f\x01\x7f\x1c\xa0\x58\x47\x30\x82\x9b\x0a\x27\xb1\x9b\xed\x15\x15\xca\x5b\x43\x99\x45\x53\x33\xd8\xe6\xf5\x7b\x83\x99\x1e\x09\x69\xaa\x36\x74\x99\xd7\x9e\x3e\x7a\x38\x85\x2e\x86\xac\xa8\x31\x42\xe4\x54\xc1\xb0\xf2\xdf\xb0\x6f\x69\x04\xd0\xab\x7b\xb9\x6b\x65\x40\xd0\x9a\xa1\xf9\x10\x95\x33\x44\x94\x1a\x75\x74\x87\x06\x13\xfd\x02\x71\xbe\x3b\x45\x70\x8b\x2b\x87\x3d\xeb\xf4\xef\x8a\x8d\x0c\x09\x02\x43\xbb\x73\x1a\xd2\x52\xc6\xc1\xbf\x07\xac\xa2\x68\xdb\xc2\x08\xc0\xab\xec\x2d\x74\x23\xaf\x82\x50\x74\xdd\xc2\x72\x10\x26\x87\x30\x42\x83\x02\x3e\xd6\x8f\xba\x60\x28\x54\x12\x8e\x62\xc2\xe5\x14\xb5\x0f\x9b\x44\x57\x88\xd8\x87\xb3\xff\xde\x9b\xbc\x69\x73\x26\x31\x7e\xd5\x4b\x29\xeb\x9d\xb7\x2d\xee\xa6\xe3\x2c\xba\x68\xda\x61\xaf\x54\xb9\x8c\x2f\x32\x0a\x0a\x63\x2b\xba\xdc\x99\x68\x9d\x34\x1b\x3f\x9e\x6a\xd3\x7a\xa7\x6c\x5f\xbd\x8c\xd3\xb8\x28\xa4\x2b\x39\xa5\x23\x70\x38\x33\xe9\x93\xea\x3e\x60\x42\xb7\xca\x6a\xa4\xf1\x84\x3f\x87\x93\x1e\x5f\x82\x72\x85\x42\xe9\xcf\x2b\x24\x20\x5c\x65\x06\x89\x51\x81\x7c\xc4\x51\xbf\xe8\xf8\x6d\x10\x45\x36\xb1\x0e\xac\x2b\xd0\x4a\xe1\x42\xec\xe5\x8d\x5b\xd8\xeb\x5f\xce\x19\xb6\x24\x03\x98\xe5\xa6\x39\x18\xcc\x07\xfb\xff\x75\x95\xa6\x0f\x70\xf0\x22\x44\xc6\x52\x4
8\xf7\xd2\x26\x5a\xae\xaa\xd4\xe7\xdc\x51\xc1\x2e\xee\xc1\x81\x2c\xb9\xe9\x55\x32\x15\x12\x60\x6c\x5b\xe9\xa7\x9f\x99\x10\x65\xb2\xda\x3f\xf9\x6f\x57\x49\x26\x4e\x61\x3c\x80\x73\xdb\x27\x95\xda\xe2\xc2\xab\x7d\xe2\xf8\x70\x63\x25\x10\x12\x93\xf9\x33\xeb\xaa\xec\xae\x20\x1f\xf7\xfa\x19\x24\x3d\x81\x40\xd2\xd9\x0c\xbb\x2b\xcc\x87\x89\x95\x76\x1c\xae\xdf\xad\xfc\x0c\x97\x1d\x18\x13\x8e\x5c\xd2\x34\xbb\x3e\xf9\x4c\x51\x32\xc4\x4e\xe1\x73\x8f\xb4\x9a\x7a\x78\x96\x66\x71\x4b\x7b\xca\xb2\xe8\x6f\x31\x28\x99\xd7\xff\x9b\x05\x39\x56\xe2\xa3\xda\xdb\x57\x91\x75\x7f\x85\xad\xcf\x87\x6c\x5a\x0c\xf3\x29\x7b\x66\x5b\xee\xa4\xdd\xb4\xa9\xa5\x49\xda\xdf\x7f\x51\x83\xcc\x22\x92\x61\x46\x5e\x55\x55\xd6\x65\x22\x3f\x43\x85\x51\x8f\xa2\x01\xa6\x64\xb7\x3d\xae\x10\x26\x3d\x0e\xae\x4d\x5f\x74\x8e\x8f\x1d\x0c\x80\x36\xbf\xf6\x86\x12\xec\x52\x5e\x73\xd4\xc2\xb4\x1e\xea\x7c\x5a\x58\xf5\x2c\x46\x78\xb7\x98\x54\xe3\x12\xf9\xfc\xf2\x68\xbb\x05\xae\x5e\x65\x58\xbb\x92\xd0\xc1\x76\x02\x9d\xca\x36\x05\x3a\x9f\x8c\xfe\x55\x36\x95\x87\xdc\x2a\x35\x3a\xb2\x11\x49\x1f\x1e\xe1\xfe\x22\xf7\xfa\x9f\x3f\x53\x7b\x59\xa3\x29\xc9\x56\x4f\x54\xa6\x44\x3a\x77\x7a\x9b\xc7\x68\x11\xa6\x06\x3b\x30\x21\x6c\xe5\x6f\xe0\x5a\xbb\xf2\x5a\xf5\xb0\xcb\xa2\xa3\x76\x40\x62\x57\x44\x68\xd7\x0e\x80\x9d\x66\xbc\xe4\x67\x06\x1e\x7d\x76\xf7\x75\x51\x3b\x92\x87\xda\x2d\x53\xae\x78\x56\x11\xb9\xb5\x00\xc5\x48\x3d\x50\x32\x2a\x53\x39\x0a\x8c\x58\x87\xcf\x58\x98\x08\xb6\x4e\x61\xb5\x06\x19\xe9\x9d\xf9\x76\xa5\x08\xbf\xad\x6d\x3f\xa9\x56\x41\x0e\xaf\xeb\x17\x2d\x2b\x16\xc0\xdb\x40\x1c\xca\xe5\x97\x0f\xcb\x5a\x11\xdb\x15\x21\xfe\xfc\xaf\x1b\x87\xbd\xcf\xef\xae\x5f\x71\x13\x63\x76\xb7\x57\x06\xcd\x05\x0a\xf9\xd6\x2b\x84\xbd\xba\x25\x70\xf6\xe8\x5e\x6a\xe8\x17\xfb\x76\x92\xcd\xb4\x26\x38\xa4\x5f\x1c\x0d\x0e\xb4\x7a\x3d\xeb\x9b\x5c\x4a\xe6\xb5\x1e\x69\x7d\xcb\xef\x62\x5d\x37\x0c\xbc\x76\xc6\xc1\x36\xf1\x81\x87\x12\x03\x64\x1f\x6d\xaf\xd3\x57\x7c\xce\xb0\x49\x0d\x5d\xa8\x14\xf9\xb4\xbb\xb5\x83\x7f\xac\xeb\x3a\x7d\x27\x4e\xd4\xbe\x6f\x9
d\x21\x88\x51\x48\x0d\x27\x1f\x46\x18\x52\x20\xbe\xb4\xee\x93\x62\x92\x8b\x28\x5a\xd7\x35\xde\x31\xc9\xb2\xae\xee\xc8\x79\xca\x97\x25\x22\x6d\x71\xcd\xba\xe1\x43\x83\xaa\x79\xa0\xb6\x77\x15\xa8\xc1\x4b\xf0\xea\x1d\x3a\xea\x70\xb8\xd2\x31\xd6\x7a\x1c\x40\x2f\xac\xb8\x12\xe7\x5a\x39\x61\x5d\x2f\x0e\xdc\xca\x3b\xd1\x27\xd9\x44\x6f\x3e\xb8\x57\xad\xe0\xe2\x72\xe9\x35\x7b\x37\x7a\x65\xb4\x32\xce\x62\x78\xe2\x30\xc7\xe2\x9b\x46\x93\x9b\x6e\x7e\x50\xb1\xac\xb2\x5d\x44\x6a\x4b\x3b\x14\x42\x14\xd2\x33\xda\x14\x0c\xbc\xcb\x91\x45\xa4\x08\x62\x23\x22\x1a\x44\x6e\x66\x82\xf2\xdc\x9e\xbe\xe9\x07\x76\xe1\xe8\x51\xb2\x09\xbe\xdf\xfb\xb8\x91\x05\xb4\x07\x86\x1c\x64\xf6\xed\x74\x1e\xb8\xf8\x9e\xbd\x21\xf9\x61\x9d\x4c\x10\xa5\x9d\xde\xda\xba\x81\x4e\x3e\x9e\x1d\x9f\xf4\xb2\xb3\xbd\x43\x71\xb8\xe7\xcc\x9c\x82\xdf\x3b\xa7\x2e\x93\x5f\x56\x45\x07\xa3\xf9\x6a\x58\xa9\xf9\x41\x40\x0d\x7a\x70\x86\x7d\xd7\x76\xff\xfc\xac\x9c\xe1\x07\x2f\x6b\x7d\x80\xf0\x0b\xf7\xf2\xc0\xac\x97\x36\x39\xfb\x6b\xd4\x2b\xc6\x01\x54\xd3\x45\xf3\x16\x42\xf6\xa5\xfa\x66\xf7\xe0\x40\xff\x01\xc6\xca\xe5\x4d\x84\xe9\xae\x92\x72\xf1\x2c\xc6\x10\x76\x7b\x12\x42\xf5\xf0\xff\x33\x73\xb6\x2f\xc9\x57\x8e\xa8\xc5\x51\xd3\x0d\x63\x74\x80\x46\xd5\x96\x0b\x1e\x49\xfe\xb6\x5f\x45\xdb\x9b\x7a\x40\xc9\xe4\x27\xae\x8a\x0e\xd4\x11\xe4\xfe\x76\x6f\xe2\xc6\xea\x84\x9c\x30\xbf\x39\xb0\xc1\x5a\x0b\xc5\x5d\x55\x72\x27\xbe\x28\x59\xf6\x59\x6a\x5d\x0f\x3e\x5b\x8f\xff\x78\xb0\x79\xcb\xaf\x94\x81\x83\x9b\x05\xf8\x5c\x2c\x11\x92\x9c\x99\x0e\xed\xda\xde\x89\x94\xa3\xc5\x67\x90\x73\x08\xa1\xc7\x57\x80\x4d\xe7\x06\x65\xe3\xc1\x1a\x4b\x48\x0c\xeb\x7f\x07\x24\x25\xc3\xd0\x08\x4c\xf9\xcb\x34\x55\x8d\xae\x45\x9b\x7f\xc2\x13\xcf\x37\xe8\x6d\x6d\xe7\xf8\x42\xbb\xc2\x2e\x86\x6c\x54\xfd\xb2\x92\x35\xde\x98\xcc\x8e\x75\x8a\x4d\xdf\xb2\xe6\x56\xef\x6c\x3a\x6f\xe1\xd6\x35\xca\xc6\xfe\x15\x5f\xa2\xb0\xc2\x4a\xe5\xf8\x03\x06\x85\x5b\xab\xa7\xbd\xdb\xb8\x6d\x41\xa4\x4c\x1a\x1f\xa8\xc6\x5b\x11\x39\xd1\xe8\x3f\xd4\x31\x33\x91\xb3\x06\x46\x61\x4a\xfb\x10\x02\x7
3\x6c\x4d\xd9\x22\xa2\x3f\xb8\x1e\x35\x39\x2d\x49\xfd\x48\x74\x2a\x50\x98\x88\xfb\xbb\x2f\xc5\xfb\xc7\x13\x9d\x62\xb3\x5f\xce\xb5\x36\x19\x20\x70\x5c\x68\xcf\xd0\x9c\x17\x5e\x07\x90\xcd\xb5\x67\x79\xe6\xee\xe1\xe0\xef\x65\x04\x68\x6b\xc9\x90\x59\x3d\x52\x1a\x55\xe2\xe6\x9f\x59\x97\xec\x4a\x71\xee\xcf\xab\x44\xc6\xcd\xa2\x3d\xe8\xe9\x16\xf0\x2e\xa4\xde\x2a\x1d\x19\x93\x44\xe2\x7a\x25\xe6\x19\xf7\x2a\x91\xca\x08\xac\xd0\x58\xf7\x82\x59\x18\xa1\xc7\x41\xd3\xf4\xac\xeb\x47\x5b\x13\x7d\xef\xa7\x39\x36\x7e\x4f\x72\xd8\x8a\x44\x21\x8b\x64\xab\xe4\x7d\x4b\xdc\x36\xdb\xeb\x53\x5d\xe8\x00\x65\x7a\xef\xe8\xb8\x24\xe0\xa6\xab\x97\xa7\x5a\xfd\x31\xbe\xab\xf2\x54\x80\xef\x79\xda\xca\x62\x2a\x10\xb4\x15\xf5\xfa\x3f\xbc\xba\x96\x7a\x6f\xad\x50\xdd\xb2\xf1\xd7\x12\x49\x24\x7d\x7e\x2a\x26\x98\x7d\xab\xbf\x28\xdb\xda\xa7\xaf\x4a\xa1\x46\x84\x3e\xfe\x73\xb2\xc0\xe7\xb5\x4f\xdf\x42\x2e\x2e\x30\x2f\x36\x89\x3f\x56\xa8\xa7\x57\x31\x00\x67\xbd\x6e\x3a\xb3\x91\x48\x45\x16\x7e\xbe\xe5\x00\xc3\xb5\x4e\x1f\xb7\x59\xb6\x35\xe9\x43\xd6\x19\x9a\x28\x9b\x5d\xdf\x0f\xa5\x19\xe7\x3c\xa6\x46\x40\xf1\xea\x9d\xe8\xe6\x59\xf3\x1f\xad\xf2\xaf\x8a\xf9\x6e\xdc\x5e\xfc\xe1\xbc\x29\x79\xc9\x5a\xd0\xe8\x35\x60\x4a\x84\xa2\x38\xff\xc3\x65\xef\x93\xb2\x7f\xc6\x6c\xe2\x2a\xad\x18\x90\x25\x33\xf1\x30\x23\xff\x37\x95\xac\x1b\xf0\x6a\xb1\x7a\x10\xf6\x0d\x2e\xff\xab\x46\xfa\xc7\xbe\x5a\x9a\x3a\x81\x0d\x70\xd4\x33\x3e\xac\x6c\xc9\x04\xc1\x68\x19\xe4\x4b\x7e\xc7\x85\xa4\x45\xbd\xfd\x30\x39\x66\xef\x30\xb3\xd9\x09\x5a\xb5\x81\x55\xed\x0a\x25\x49\xc4\x76\x86\xf1\xd8\xff\xeb\x75\xa0\x11\x51\x89\x3b\xf9\x33\xa7\x3b\x67\x4e\xfd\xca\x8f\xc8\x70\xd0\xd7\xc8\x0b\x72\x74\x4f\x09\x47\xa2\x2e\x85\x1c\xc6\xf0\x3f\x4b\x5f\x61\xba\xdf\x7c\x39\xd7\xdc\xc2\xaf\x06\x15\xec\xd8\xcf\x04\x84\x06\x96\x2e\xd5\xca\x37\xf9\x5b\xdb\x6e\xb9\x1c\xf5\xd8\xdf\x6a\xfa\x44\xe4\xb3\x67\xf6\xe9\x63\x8f\xd4\x51\xb3\x41\xba\x46\x84\xab\x5d\x8e\x08\xaa\x73\xe8\x38\xe1\xaa\x27\x78\x88\xc2\xa1\xfa\xc8\x15\xb5\x83\x1b\x51\x0c\xea\x03\x36\xd6\xc1\x60\x57\x9
d\xe5\x9e\x92\x01\x5e\xf3\x18\x6c\x5b\x33\x2b\x24\x56\xf3\xa2\x2e\x5d\xe5\xf9\x26\x58\x93\xbf\xce\x8a\x04\xb6\xf7\x32\xa0\xdf\x38\x96\xef\x83\x66\x22\xb8\x1d\x41\x43\x44\xc4\x43\xdb\x7f\xae\x98\x06\x50\x39\x7b\xc1\xaa\x97\xe9\x98\x25\xe3\x73\xce\x15\xd7\xf3\x91\x01\x30\x35\x17\x13\x9b\xcf\x0d\x7f\xff\x80\x13\x31\xc3\xdc\x2c\xcd\x1a\x4f\x24\x63\x70\x12\x1c\xec\xa3\xe9\x23\xf7\xee\x5d\x59\x4a\x1c\x92\xfb\x73\x27\xd3\xa9\x77\x67\x64\x4a\xcb\xb4\xdb\x4b\x32\x19\x79\xd2\xc8\x84\x5f\x39\x99\x6b\x29\xd5\x5a\x62\x84\xad\xf0\xf7\x9f\x40\x12\x47\x3f\x44\x82\x49\x46\x12\xca\xa9\x07\xa5\x48\xb3\xc8\x9d\x30\x01\xa1\xe9\x5f\x74\xc3\x47\x6a\x4b\x4e\x5d\x41\xa8\x8f\x5d\xfa\xde\x8e\x07\xfb\x9b\x6d\x8a\x72\xf8\xfb\x8e\xb6\xf1\xd8\xde\x5f\x99\x8c\xbb\x23\xb6\xa6\x60\xa2\xa7\xc8\x7b\xe6\xa0\x28\xf7\xcc\xf8\xec\xda\x7e\x9c\x22\x5c\x09\x17\x78\x3f\x06\x5f\x77\x1c\x16\xbc\xf0\x0e\x5a\x91\x9f\x4c\x08\xd1\x29\x9f\x61\x2c\xd3\x53\xcb\xce\xc8\xd5\x82\x4f\xa6\xb9\x34\x36\x93\x74\x81\x53\x68\x10\xd7\xe8\x1c\x65\x4b\xc9\x00\xc7\x35\x3d\x69\x92\x42\x48\xd3\x4a\x20\xcd\x9b\x7a\x9c\x10\xd1\xec\x7b\x2d\x6a\xb7\x6f\xcc\xf1\x2d\xff\xf9\xde\xe1\xd7\xa1\xb4\xd6\xc4\xe8\x37\x4c\x37\xe6\x78\x97\xaa\x99\x61\xa5\xe2\xd2\xad\xef\xbd\x07\x80\x88\xab\x69\x02\x11\xe3\x5f\xf5\x59\x64\xd0\x00\xc7\xdb\x27\x79\xb5\xde\x3f\xbf\xeb\x46\xcd\x03\x10\x4f\x4d\xfb\x01\xc5\xb0\xae\x05\x2e\x83\x78\x34\xda\xf0\xdc\xe5\xd5\x07\x47\xff\xf0\x51\x61\x83\x70\x50\x60\x15\x5e\xe3\xc8\xf2\xb7\x42\x4b\xc9\xf1\x0a\x22\x55\xbb\x13\xc5\xed\x9d\xcb\x04\xf7\xb0\x8c\x99\x60\x23\x23\x17\xde\x41\x31\x34\xb4\x32\x92\x2b\xe3\x69\x8e\x24\xba\xe2\x65\xff\xdf\xc8\x66\xdf\x14\x4d\x6e\x3e\x64\x3e\x4b\xbb\x6e\xf6\x4a\x72\x1e\x9b\x01\x21\xd3\xa7\xda\x5b\xe9\x45\x18\xe7\x41\xbb\x03\xf5\x1c\x46\x31\x17\xc8\x42\x62\xba\xa6\x3b\x2c\xf9\x4b\x1f\x92\xe0\x3a\x8f\x97\x96\xed\xac\x1a\x23\x1c\x03\xa0\xc8\x9c\x5e\xf3\x49\x23\x99\xc6\x4f\x46\x92\x95\xa0\xd5\x1a\x75\x75\x5a\xa8\x53\x88\xcf\x5d\x38\x01\xdc\x03\x5a\xfd\xf1\x1c\xf4\x8a\xd1\x78\x35\xec\x15\x17\x42\x7
3\x4d\x06\xfb\x61\x49\x72\x0e\xb0\xec\xd4\x34\x92\xd2\xe8\x0a\xff\x4a\xbf\xc3\x85\x42\x6e\x2e\x6c\x74\xc0\xe3\x01\x9f\x26\xcd\xc8\x3b\xc2\x82\xe9\x54\x80\xab\x7a\xb6\x38\xee\xdd\x28\x0d\x8c\x65\x80\x6d\x40\x5d\x4e\x39\x27\x88\xc6\x02\x6a\xfa\x44\x7b\x1e\x26\x5e\x34\x36\x14\xd4\x67\x74\x39\xf9\xd4\xed\x7e\xef\xe3\x49\x3d\x41\x4d\x28\x07\x6c\x3b\x11\xc3\xd0\x1e\x02\xb0\xe0\x18\x93\x04\x42\x54\xd8\xeb\x79\xed\xad\xb3\xd5\x66\x45\x1a\x40\x5d\xca\x37\xfc\xf5\xe5\xde\x9b\xff\xce\xe1\xea\xd1\x46\x28\x79\x04\x6a\xe2\x6b\x84\x9b\x02\x03\xdf\xb5\xb3\x6c\xcc\xef\x3c\x2c\x80\x5f\x1b\xb6\x8e\xd4\x0d\x15\x87\x42\xbd\x95\x61\xad\x7c\x49\x8c\x20\x6e\x21\x58\x86\xbf\xd0\x62\xe2\xd6\xa3\xc0\xfc\x81\x0c\x0f\x5a\xa6\x9f\xb4\x4d\xb7\xfa\x78\x2f\x67\x21\xa8\x52\xf3\x66\x9e\x35\x78\xf2\x64\x62\xc9\x4e\x5d\x46\x7f\x8c\x8f\xc3\x17\x28\x30\x08\x0b\x81\x46\x7c\xbc\x79\x5b\x0d\x94\x62\xc0\x1e\x94\x58\xcd\xc3\x0e\x8d\x88\xb4\xce\x35\x6d\xe5\xa9\x60\x7a\xfb\xad\x16\xd1\x0a\x36\xca\x7b\x1b\xe7\xe0\xc8\x47\x3e\x11\x8b\x9b\xbc\x2b\x9d\x86\x76\xfb\xf0\x72\xd2\x30\xb9\xb4\xa1\x88\xae\x85\xe1\x25\x43\x26\x62\xb5\xc8\xe9\x92\xc4\xa2\xc6\xb8\xe7\x1a\xf4\x29\x3a\x45\xcf\xc1\x82\xbd\x79\xba\x28\x87\xe0\xc9\x3e\x55\x4c\xb5\x49\xfa\x38\x3d\xee\x43\x12\xc1\x8e\xc0\xc7\x70\x49\x3c\xd8\xe6\xc1\xd1\x25\xb0\x2e\x6c\x6b\x39\x3a\x87\x48\xe2\xf6\x90\xb3\xfd\x4c\xf7\xdb\x26\xa7\xc3\x4e\x83\x22\x19\x50\xa6\x88\xb4\x9b\xea\xda\x1c\xd9\x6a\x58\xef\x5c\x48\x36\x8f\xf8\xab\x26\x68\x08\xe8\x3f\x20\x3e\xcd\x93\x01\xd4\xc2\xcc\x31\x3f\x29\x61\x56\x4d\x47\x90\x49\x7d\x88\x0c\x8e\x98\x93\x3e\x3b\x77\x2a\x9a\xba\x46\xe8\xd9\x71\x72\x79\x0c\x1e\x8c\x6c\xd8\x78\x04\xcf\xe7\x65\x0e\x0a\x3f\x1c\x6b\x6b\x93\xb0\xf6\x3f\x51\xf6\x74\x30\x38\x51\x7e\x6e\xa3\x6d\xfe\x04\x61\x85\xf7\x7d\x11\x31\x9e\x12\x86\xac\x59\x11\xb7\xbb\x89\x9e\xc5\xce\x83\x31\x47\x16\xb1\x47\x71\x7e\x7e\x32\x74\x3d\x0f\xdd\xb4\x50\xa6\x8f\xba\xf9\x6e\xdf\x67\x73\xbe\xb6\xe6\x6d\xd2\x60\xd1\xdd\x81\xd1\x9b\xf9\xde\xe1\x8e\x11\xaa\xa7\x1b\xaf\x86\x96\xe8\xc0\x54\x4
e\x6b\x6a\xe5\x3c\xa0\x10\x67\xd8\x28\xc9\x9c\x1e\xb2\x1f\xe0\xfb\x84\x5d\x93\x83\x02\x18\x50\xc3\xb4\x04\xe0\x67\x7c\xcb\xfd\xc8\xad\x41\xdd\x41\x49\x12\x53\x13\x07\x0e\x44\x34\xc5\xb4\x48\x13\xe9\x8a\x0b\xb7\x7d\x6e\xc2\x66\x1c\x47\x07\x50\x34\x37\x83\x39\x36\xff\xe1\x43\xdd\xee\x85\x6e\x02\x54\x2a\xcd\x3f\x93\xc8\x72\x62\x15\xb5\x45\x35\x79\xc3\x28\xd3\x29\xdb\x3b\x67\x1a\xe7\x66\x65\x7d\x21\x2d\x09\x50\x34\xda\xe9\xa7\xe6\x44\x92\x25\xdd\x93\x2f\x30\x6c\x45\x8a\x36\x6c\x58\x48\x6f\x4e\x0e\x6b\xe1\x02\xff\x9f\xdb\x87\xfb\x07\x26\xc7\xc7\xec\x40\x4d\xed\x8f\x12\xb6\xd3\x3f\xd6\x1e\x46\x5a\xb0\x59\x66\xbf\x18\xb8\x64\x59\x7e\x5b\x7c\x90\xfa\xe4\x70\x34\x77\x2d\x8e\x0c\x6b\x25\xbf\x79\x33\xd0\x06\x8c\xf5\x70\x09\x00\x30\x6c\xdb\x70\x40\x69\xde\x0a\x96\xbd\x68\x85\x2e\x51\x22\x35\xba\x6f\x34\x2e\xf9\x09\xcc\x89\x36\x6a\xc9\xc8\xc7\xfd\x5a\x7a\x10\x8f\xdc\x81\x37\x54\x85\xb7\xe0\x16\x57\x9e\xf4\xd4\x21\xef\xf0\x9e\xee\x90\x31\xde\xb5\xa7\x69\x11\x6a\x6f\x94\xe5\xc6\x09\xb4\xe4\xca\xd2\x94\x16\xf1\x36\x54\xf2\xee\xec\x72\x88\x9c\xd9\x4b\xb4\x7c\x64\xb4\xdd\xec\x0d\x0f\x81\xe8\xf0\x41\xde\x92\x48\xdb\x90\x82\x14\xcb\xf3\x90\x1a\xf6\x27\x78\xfd\xca\x23\x37\xac\x99\x8a\x95\x41\x60\x02\xce\xc5\x9a\xc8\xad\xff\x02\x9d\xe3\x65\x97\xd4\x70\x05\xdf\xb1\x7f\x6b\x91\x8e\xbf\x53\x65\x46\x03\x6f\x4b\xad\xad\xee\xe2\x47\x66\xaa\xe3\x35\x94\x84\xd3\x59\x55\x22\x53\xe9\x27\xbb\x4f\x11\x5b\xf7\xf8\x8a\xb1\xb9\xfd\x15', 2)
  File "<frozen cv.detectnet_v2.scripts.gen_trt_engine>", line None, in <module>
  File "<frozen cv.common.decorators>", line 47, in _func
  File "<frozen cv.detectnet_v2.scripts.gen_trt_engine>", line 73, in main
  File "<frozen engine.builder>", line 307, in create_engine
  File "<frozen engine.calibrator>", line None, in get_batch
Message: 'Finished calibration batches'

I don’t think it influenced calibration, though.

Please find below the detectnet_v2 export command

detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.hdf5 -e specs/detectnet_v2_export_resnet18_kitti.txt -o detectnet_v2/experiment_dir_final/resnet18_detector.onnx --onnx_route tf2onnx --gen_ds_config

export_log.txt (17.9 KB)

and the detectnet_v2 gen_trt_engine command

detectnet_v2 gen_trt_engine -m detectnet_v2/experiment_dir_final/resnet18_detector.onnx --data_type int8 --batches 1500 --batch_size 4 --max_batch_size 64 --engine_file detectnet_v2/experiment_dir_final/resnet18_detector.trt.int8 --cal_cache_file detectnet_v2/experiment_dir_final/cal.bin -e specs/detectnet_v2_export_resnet18_kitti.txt --results_dir detectnet_v2/experiment_dir_final --verbose

engine_log.txt (5.4 MB)

Logs are attached, as well as the config files:

detectnet_v2_export_resnet18_kitti.txt (7.1 KB)

detectnet_v2_inference_kitti_engine.txt (3.4 KB)

My case looks similar to Issue running tlt trained SSD-resnet18 on Xavier with deepstream-app .
There, the author used the deepstream-custom method to generate the INT8 engine, but that method appears to have been deprecated long ago.

Please set to --batches 6000 --batch_size 1 --max_batch_size 1 and retry.

I have identified the root cause of the problem.

  • The images I used for training were nighttime black-and-white images with equal color values (R=G=B).

  • The images I used for inference were also nighttime black-and-white, but they were captured with a different camera. In those images the R, G and B values differed slightly, and that difference was enough to make the retrained model stop working.

Probably the model, already overfitted to R=G=B inputs, lost additional robustness when it was converted from FP32 to INT8.
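To check this camera difference objectively, one can measure the largest per-pixel spread between the colour channels. The `max_channel_spread` helper below is a hypothetical sketch (plain NumPy, not part of TAO): it returns 0 only for truly grayscale (R=G=B) frames.

```python
import numpy as np

def max_channel_spread(img):
    """Return the largest per-pixel difference between the R, G and B
    channels of an HxWx3 uint8 array; 0 means truly grayscale (R=G=B)."""
    channels = img.astype(np.int16)                      # avoid uint8 wraparound
    spread = channels.max(axis=2) - channels.min(axis=2)
    return int(spread.max())

# Synthetic stand-ins for real camera frames:
gray = np.full((4, 4, 3), 128, dtype=np.uint8)           # training camera: R=G=B
almost = gray.copy()
almost[0, 0, 0] = 131                                    # inference camera: R slightly off
print(max_channel_spread(gray))    # 0
print(max_channel_spread(almost))  # 3
```

Running this over a sample of frames from each camera would quantify how far the inference-camera images deviate from strict grayscale.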

Moreover, the retrained model lost its ability to make correct inferences on daytime images. Because of this, I compiled a new dataset from different sources, containing both nighttime and daytime images with multiple viewing angles (11,328 images in total), and retrained from scratch (starting from the pretrained TrafficCamNet model).

However, I encountered the same issue: training consistently produces 0.0 mAP up to the 120th epoch (end of training).

Here are examples of the images I used for training (all resized to 960x544 prior to training).
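When resizing images offline like this, the KITTI box coordinates must be rescaled by the same factors (which is what enable_auto_resize is meant to do internally). A minimal sketch, assuming KITTI-format label lines; the `scale_kitti_labels` helper is hypothetical, not a TAO utility:

```python
def scale_kitti_labels(label_text, src_wh, dst_wh):
    """Rescale the x1 y1 x2 y2 fields (columns 4-7) of KITTI label lines
    so they match an image resized from src_wh to dst_wh."""
    sx = dst_wh[0] / src_wh[0]
    sy = dst_wh[1] / src_wh[1]
    out = []
    for line in label_text.strip().splitlines():
        f = line.split()
        f[4] = f"{float(f[4]) * sx:.2f}"   # x1
        f[5] = f"{float(f[5]) * sy:.2f}"   # y1
        f[6] = f"{float(f[6]) * sx:.2f}"   # x2
        f[7] = f"{float(f[7]) * sy:.2f}"   # y2
        out.append(" ".join(f))
    return "\n".join(out)

# Example: one box from a 1920x1200 frame, rescaled for 960x544 training input
print(scale_kitti_labels(
    "car 0.0 0 0.0 100.0 200.0 300.0 400.0 0 0 0 0 0 0 0",
    (1920, 1200), (960, 544)))
```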

They contain these 4 classes:

  1. bus count = 665, min_height = 11.33 px
  2. truck count = 2230, min_height = 17.00 px
  3. twowheeler count = 4179, min_height = 11.05 px
  4. car count = 17142, min_height = 4.08 px
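Per-class counts and minimum box heights like those above can be gathered from the KITTI label files with a short script; `class_stats` below is an illustrative sketch, not a TAO tool:

```python
from collections import defaultdict

def class_stats(label_texts):
    """Count boxes and track the minimum box height (y2 - y1, in px)
    per class across a collection of KITTI label files."""
    count = defaultdict(int)
    min_h = {}
    for text in label_texts:
        for line in text.strip().splitlines():
            f = line.split()
            cls, y1, y2 = f[0], float(f[5]), float(f[7])
            count[cls] += 1
            h = y2 - y1
            min_h[cls] = min(min_h.get(cls, h), h)
    return dict(count), min_h

# Synthetic label files standing in for the real dataset:
labels = [
    "car 0 0 0 10 20 60 24.08 0 0 0 0 0 0 0\n"
    "bus 0 0 0 10 20 60 31.33 0 0 0 0 0 0 0",
    "car 0 0 0 10 10 60 40 0 0 0 0 0 0 0",
]
print(class_stats(labels))
```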

Here is the training log:

train_log_ending.txt (268.0 KB)

train_log_beginning.txt (48.5 KB)

Attached is the status.json and my training config file.

status.json.txt (61.3 KB)

detectnet_v2_train_resnet18_kitti.txt (6.8 KB)

I tried different parameters and tried freezing the first few layers, but every run ended with a final mAP of 0.0 (although some runs showed non-zero mAP at intermediate evaluations during training).

I would appreciate your advice on how to correctly retrain the TrafficCamNet model. I need it to work on both nighttime and daytime images. I am considering these options:

  • Increase the number of layers beyond 18 in the ResNet backbone
  • Increase the number of epochs beyond 120
  • Exclude small objects. If so, what minimum object height should I keep? Should I remove them from the label files or avoid using images with small objects entirely?
  • Keep only images shot from low and medium elevations, and exclude high-elevation images
  • Keep only images from the Jetson device camera (the one used in production)

What do you think about these options? Do you have any other recommendations?
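If the small-object option were taken at the label level (dropping individual boxes rather than discarding whole images), a sketch like the following could be used. The `drop_small_boxes` helper and the 16 px threshold are assumptions for illustration, not TAO functionality:

```python
MIN_H = 16.0  # assumed minimum usable object height, in pixels

def drop_small_boxes(label_text, min_h=MIN_H):
    """Keep only KITTI label lines whose box height (y2 - y1) is >= min_h.
    Dropping single lines keeps the rest of the image's labels usable."""
    kept = []
    for line in label_text.strip().splitlines():
        f = line.split()
        if float(f[7]) - float(f[5]) >= min_h:
            kept.append(line)
    return "\n".join(kept)

example = (
    "car 0 0 0 10.0 20.0 60.0 50.0 0 0 0 0 0 0 0\n"   # height 30 px -> kept
    "car 0 0 0 10.0 20.0 60.0 30.0 0 0 0 0 0 0 0"     # height 10 px -> dropped
)
print(drop_small_boxes(example))
```

An alternative to deleting the lines outright would be relabeling the small boxes as a "don't care" class so the trainer ignores those regions instead of treating them as background.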

The objects are too small for detectnet_v2 training. As mentioned in the model card, objects are expected to be larger than 16 pixels; the larger, the better.
I suggest you run some experiments. For example:
Use a larger backbone.
Use larger objects, or set a larger input size in the spec file.
Most importantly, train with more images of the kind used for inference (nighttime black-and-white, captured with the other camera).
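Two of those suggestions map onto spec-file fields. A hypothetical fragment of the training spec (field names as in the public detectnet_v2 spec; the values are illustrative, and detectnet_v2 input dimensions must be multiples of 16):

```
augmentation_config {
  preprocessing {
    # Larger input keeps more pixels on small objects
    output_image_width: 1280
    output_image_height: 736
    ...
  }
}
model_config {
  # Larger backbone than the default ResNet-18
  arch: "resnet"
  num_layers: 34
}
```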

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.