I want to deploy detectnet_v2_resnet18 on a Jetson Nano. I followed all the steps for training the model, and now I want to export the .tlt model to .etlt as FP16 on a PC with a GTX 1080. When I run the command below I get an error, but the FP32 export completes without any error.
tlt-export detectnet_v2 \
  -m /workspace/tmp2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
  -o /workspace/tmp2/experiment_dir_final/resnet18_detector.etlt \
  -k key \
  --data_type fp16
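For comparison, this is the FP32 variant that works for me (same paths and key; the only difference I make is the --data_type value, which is FP32 by default):

tlt-export detectnet_v2 \
  -m /workspace/tmp2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
  -o /workspace/tmp2/experiment_dir_final/resnet18_detector.etlt \
  -k key \
  --data_type fp32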
How can I solve this problem?
Also, in the TLT example notebook for detectnet_v2 (detectnet_v2.ipynb), there is a note about supported precisions. Does DetectNet_v2 support only INT8, or does it support INT8/FP16/FP32? If it only supports INT8, how can I run the model on a Jetson Nano, which does not support INT8? Is it possible at all?