When I ran tlt-converter on my Jetson Nano, the process was killed. I have tried several times and the problem still persists.
I retrained the detectnet_v2 model with new data and performed INT8 optimization, using the sample script from detectnet_v2.ipynb in the Docker container. I noticed that the TensorRT version in the Docker container (220.127.116.11-1) differs from the TensorRT version on the Jetson Nano (18.104.22.168-1). Could that be causing the problem?
Here is the command and the result when I run it:

$ ./tlt-converter resnet18_detector.etlt \
    -k $KEY \
    -c calibration.bin \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,384,1248 \
    -i nchw \
    -m 64 \
    -t int8 \
    -e resnet18_detector.trt \
    -b 4
[WARNING] Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
[INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
[INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[INFO] To regenerate calibration cache, please delete the existing one. TensorRT will use the new calibration cache.
Killed
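For what it's worth, a bare "Killed" with no error message on the Nano usually means the kernel's OOM killer terminated the process, since the Nano's 4 GB is shared between CPU and GPU. A rough diagnostic sketch is below; it assumes a standard L4T kernel log, and the `-w` workspace flag and the suggested smaller `-m`/`-b` values are taken from `tlt-converter -h` rather than from your exact setup:

```shell
# Check the kernel log for OOM-killer activity around the time of the crash
# (may need sudo; `|| true` keeps the script going if nothing matches)
dmesg 2>/dev/null | grep -iE 'out of memory|killed process' | tail -n 5 || true

# A smaller TensorRT builder workspace often avoids the OOM kill.
# 256 MiB expressed in bytes, as -w expects:
WORKSPACE=$((256 * 1024 * 1024))
echo "$WORKSPACE"

# Suggested retry (printed here, not executed): shrink -w, -m, and -b
echo "./tlt-converter resnet18_detector.etlt ... -w $WORKSPACE -m 1 -b 1"
```

Adding swap on the Nano before converting is another common workaround, though swap-backed conversion can be very slow.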