Hi there!
I am currently trying to convert my .etlt model into a .trt engine on a Jetson TX2.
In my Jupyter notebook, I generated the INT8 calibration tensorfile by running this command:
!tlt-int8-tensorfile detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti_OCR_third_trial_2.txt \
-m 10 \
-o $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.tensor
Then, we exported the model to .etlt by running this command:
!tlt-export $USER_EXPERIMENT_DIR/experiment_third_trial_OCR/weights/resnet18_detector_OCR_third_trial.tlt \
-o $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/resnet18_detector_jeff.etlt \
--outputs output_cov/Sigmoid,output_bbox/BiasAdd \
--enc_key $KEY \
--input_dims 3,320,832 \
--max_workspace_size 1100000 \
--export_module detectnet_v2 \
--cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.tensor \
--data_type int8 \
--batches 10 \
--cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.bin \
--cal_batch_size 4 \
--verbose
After that, we generated the TensorRT engine by running this command inside the Jupyter notebook:
!tlt-converter $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/resnet18_detector_jeff.etlt \
-k $KEY \
-c $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.bin \
-o output_cov/Sigmoid,output_bbox/BiasAdd \
-d 3,320,832 \
-i nchw \
-m 64 \
-t int8 \
-e $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/resnet18_detector.trt \
-b 4
Everything seems to work inside Jupyter on x86. Then we tried to generate the TensorRT engine on the Jetson TX2 by running this command:
./tlt-converter ./resnet18_detector_jeff.etlt \
-k XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
-c ./calibration.bin \
-o output_cov/Sigmoid,output_bbox/BiasAdd \
-d 3,320,832 \
-i nchw \
-m 64 \
-t int8 \
-e ./resnet18_detector_jetson.trt \
-b 4
Unfortunately, we ran into this error:
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
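One thing we have not ruled out yet is whether the .etlt or calibration.bin file got corrupted while being copied over to the TX2. Below is a rough sketch of the check we have in mind (the hostname and destination path are placeholders, not our actual setup):

# On the x86 host: record checksums of the exported files
md5sum resnet18_detector_jeff.etlt calibration.bin

# Copy them to the TX2 and compare the checksums there
scp resnet18_detector_jeff.etlt calibration.bin nvidia@tx2:/home/nvidia/
ssh nvidia@tx2 "md5sum /home/nvidia/resnet18_detector_jeff.etlt /home/nvidia/calibration.bin"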
Do you have any insight into this problem? I would appreciate any ideas.
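In case the versions matter: as far as we understand, the tlt-converter binary has to match the JetPack/TensorRT release installed on the device, so we also plan to double-check what the TX2 actually has installed, roughly like this (just a sketch of the lookup, output omitted):

# On the Jetson TX2: list the installed TensorRT / nvinfer packages
dpkg -l | grep -Ei "tensorrt|nvinfer"

# Print the converter's usage text to confirm the flags match this build
./tlt-converter -h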
(Note: by the way, the Readme.md inside https://developer.nvidia.com/tlt-converter is broken.)
Best regards,
Jeff