I got the following error while converting the exported .etlt model to a TensorRT engine:
[ERROR] UffParser: Output error: Output NMS not found
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Starting Calibration with batch size 8.
[INFO] Post Processing Calibration data in 2.605e-06 seconds.
[INFO] Calibration completed in 2.47101 seconds.
[ERROR] Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
[ERROR] Builder failed while configuring INT8 mode.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) : Tesla V4
• Network Type : Detectnet_v2
• TLT Version : 3.0
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
I got this error while running the tlt-converter command.
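For context, my conversion command was along these lines (a sketch; the input dimensions and file paths are my own guesses, and $KEY stands for the encoding key from my export command). Note that, as far as I understand, `-o NMS` applies to SSD/YOLO-style models, while DetectNet_v2 exposes `output_cov/Sigmoid` and `output_bbox/BiasAdd` as its output nodes, which might be why the parser reports "Output NMS not found":

```shell
# Hypothetical tlt-converter invocation for a DetectNet_v2 .etlt model.
# Paths and input dims (-d) are assumptions; adjust to the trained model.
tlt-converter /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector_int8.etlt \
  -k $KEY \
  -d 3,384,1248 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -c /workspace/tlt-experiments/experiment_dir_pruned/weights/cal.bin \
  -t int8 \
  -b 8 \
  -e /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector_int8.engine
```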
Actually, I just saw that my cal.bin file was getting created at /workspace when I ran the export command as below:

detectnet_v2 export -e /workspace/tlt-experiments/specs/detectnet_v2_retrain_resnet18_kitti.txt -m /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector.tlt -k ZHFkNzJmbGhhOGpocXNzcnRpaXRjM2dsZnQ6MDNhYmEyNzAtNTYwZS00Y2FhLTgzZWItMWJlNjI1NDZhMGYx -o /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector_int8.etlt --data_type int8

When launching the container I mount only the /workspace/tlt-experiments path, so cal.bin does not get saved inside the mount. Can you tell me how to save cal.bin at a desired path?
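If I read the TLT 3.0 docs correctly, detectnet_v2 export accepts a --cal_cache_file argument that controls where the INT8 calibration cache is written, so pointing it inside the mounted volume should work (the target path below is just an example location I picked under the mount; $KEY stands for the encoding key):

```shell
# Same export as above, but with --cal_cache_file (assumed flag per the
# TLT 3.0 docs) writing cal.bin inside the mounted /workspace/tlt-experiments path.
detectnet_v2 export \
  -e /workspace/tlt-experiments/specs/detectnet_v2_retrain_resnet18_kitti.txt \
  -m /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector.tlt \
  -k $KEY \
  -o /workspace/tlt-experiments/experiment_dir_pruned/weights/resnet18_detector_int8.etlt \
  --data_type int8 \
  --cal_cache_file /workspace/tlt-experiments/experiment_dir_pruned/weights/cal.bin
```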