Error while converting the model

I got the following error while converting the exported .etlt model to a TensorRT engine:

[ERROR] UffParser: Output error: Output NMS not found

[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct

[INFO] Detected 1 inputs and 2 output network tensors.

[INFO] Starting Calibration with batch size 8.

[INFO] Post Processing Calibration data in 2.605e-06 seconds.

[INFO] Calibration completed in 2.47101 seconds.

[ERROR] Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.

[ERROR] Builder failed while configuring INT8 mode.

[ERROR] Unable to create engine

Segmentation fault (core dumped)
Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : Tesla V4
• Network Type : Detectnet_v2
• TLT Version : 3.0
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
I got this error while running the tlt-converter command.

Please share the full command line and full log.

Actually, I just saw that my cal.bin file was being created at /workspace when I ran the export command as below:
detectnet_v2 export -e /workspace/tlt-experiments/specs/detectnet_v2_retrain_resnet18_kitti.txt -m /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector.tlt -k ZHFkNzJmbGhhOGpocXNzcnRpaXRjM2dsZnQ6MDNhYmEyNzAtNTYwZS00Y2FhLTgzZWItMWJlNjI1NDZhMGYx -o /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector_int8.etlt --data_type int8

While launching the container I am mounting only the /workspace/tlt-experiments path, so the cal.bin is not getting saved on the host. Can you tell me how to save cal.bin at a desired path?

Please refer to DetectNet_v2 — Transfer Learning Toolkit 3.0 documentation to generate the .etlt model and cal.bin.

Where did you run the command? Inside the docker? If yes, how did you log in to the docker?

Yes, I ran:

sudo docker run --gpus all -it -v /workspace/tlt-experiments/:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 /bin/bash
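Note that with this docker run command, only files written under the bind-mounted /workspace/tlt-experiments prefix survive on the host; anything the tools write elsewhere in the container (such as a cal.bin dropped at /workspace) vanishes when the container exits. A minimal sketch of that prefix check, with a hypothetical CAL path:

```shell
# Only files under the bind-mounted prefix persist on the host.
MOUNT=/workspace/tlt-experiments
CAL=/workspace/cal.bin   # hypothetical default location for cal.bin

case "$CAL" in
  "$MOUNT"/*) echo "persists on host" ;;
  *)          echo "lost when the container exits" ;;
esac
```

Pointing --cal_cache_file at a path inside the mounted directory keeps the file on the host.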

So you can refer to the command below. It generates calibration.bin and resnet18_detector.etlt:

detectnet_v2 export \
  -e $USER_EXPERIMENT_DIR/experiment_dir_retrain/experiment_spec.txt \
  -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
  -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
  -k $KEY \
  --cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor \
  --data_type int8 \
  --batches 10 \
  --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
  --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet_18.engine

Reference: DetectNet_v2 — Transfer Learning Toolkit 3.0 documentation
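Regarding the original "Output NMS not found" error: DetectNet_v2's output tensors are output_cov/Sigmoid and output_bbox/BiasAdd, not NMS (NMS is the output node for SSD-style detectors), so passing -o NMS to tlt-converter would produce exactly that UffParser error. A sketch of a tlt-converter invocation, where the paths, key, and -d input dimensions are assumptions to adapt to your setup:

```shell
# Sketch only: paths, $KEY, and the -d dimensions are placeholders.
# DetectNet_v2 output nodes are output_cov/Sigmoid and output_bbox/BiasAdd.
tlt-converter /workspace/tlt-experiments/experiment_dir_final/resnet18_detector.etlt \
  -k $KEY \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -d 3,384,1248 \
  -c /workspace/tlt-experiments/experiment_dir_final/calibration.bin \
  -t int8 \
  -b 8 \
  -e /workspace/tlt-experiments/experiment_dir_final/resnet18_detector.trt
```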

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.