Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) RTX4080
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) Yolov4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
I trained YOLOv4 in the docker and tried to export an ONNX model with INT8 calibration using the following command.
yolo_v4 export -m /workspace/tao_tutorials/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18_epoch_080.hdf5 \
-k nvidia_tlt \
-e /workspace/tao_tutorials/notebooks/tao_launcher_starter_kit/yolo_v4/specs/yolo_v4_train_resnet18_kitti.txt \
--batch_size 3 \
--data_type int8 \
--cal_image_dir /workspace/tao_tutorials/data/images \
--batches 10 \
--cal_cache_file /workspace/tao_tutorials/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18.cal \
--cal_data_file /workspace/tao_tutorials/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18.tensorfile
Only the ONNX file and the tensorfile are produced; there is no cal file. How can I get the cal file?
Do you mean the /workspace/tao_tutorials/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18.cal is not generated?
Yes, it is not produced during export. Only the ONNX file and the tensorfile are produced.
Thanks.
I can produce the ONNX file using the following command.
yolo_v4 export -m /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18_epoch_080.hdf5 \
-o /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18_epoch_80.onnx \
-e /workspace/tao_tutorials/notebooks/tao_launcher_starter_kit/yolo_v4/specs/yolo_v4_train_resnet18_kitti.txt \
--target_opset 12 \
--gen_ds_config
But when I run the following command to produce the INT8 engine,
yolo_v4 gen_trt_engine -m /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18_epoch_80.onnx \
-e /workspace/tao_tutorials/notebooks/tao_launcher_starter_kit/yolo_v4/specs/yolo_v4_train_resnet18_kitti.txt \
--cal_image_dir /workspace/tao_tutorials/StAndrew/data/images \
--data_type int8 \
--batch_size 16 \
--min_batch_size 1 \
--opt_batch_size 8 \
--max_batch_size 16 \
--batches 10 \
--cal_cache_file /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/cal.bin \
--cal_data_file /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/cal.tensorfile \
--engine_file /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/trt.engine.int8 \
--results_dir /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights
I get an error saying yolo_v4 has no gen_trt_engine task:
usage: yolo_v4 [-h] [--num_processes NUM_PROCESSES] [--gpus GPUS] [--gpu_index GPU_INDEX [GPU_INDEX ...]] [--use_amp] [--log_file LOG_FILE]
{train,prune,kmeans,inference,export,evaluate,dataset_convert} ...
yolo_v4: error: argument /tasks: invalid choice: 'gen_trt_engine' (choose from 'train', 'prune', 'kmeans', 'inference', 'export', 'evaluate', 'dataset_convert')
I am using yolo_v4 from docker nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
That is because you are running the container directly with docker run instead of using the TAO launcher. With the TAO launcher, you can find the docker info by running $ tao info --verbose. The gen_trt_engine task is from the tao-deploy docker, i.e., nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy (TAO Toolkit | NVIDIA NGC). You can pull it.
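As a sketch, pulling the deploy image and invoking gen_trt_engine inside it could look like the following. The mount path and file paths are the ones from this thread; the --gpus flag assumes the NVIDIA Container Toolkit is installed on the host, and you should check NGC for the exact deploy tag matching your TAO version.

```shell
# Pull the tao-deploy image (tag as mentioned above; verify on NGC)
docker pull nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy

# Run gen_trt_engine inside the deploy docker, mounting the same workspace
# used in the commands above so the paths resolve identically.
docker run --rm --gpus all \
  -v /workspace/tao_tutorials:/workspace/tao_tutorials \
  nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy \
  yolo_v4 gen_trt_engine \
    -m /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/yolov4_resnet18_epoch_80.onnx \
    -e /workspace/tao_tutorials/notebooks/tao_launcher_starter_kit/yolo_v4/specs/yolo_v4_train_resnet18_kitti.txt \
    --data_type int8 \
    --cal_image_dir /workspace/tao_tutorials/StAndrew/data/images \
    --batches 10 \
    --cal_cache_file /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/cal.bin \
    --engine_file /workspace/tao_tutorials/StAndrew/data/experiment_dir_unpruned/resnet18/weights/trt.engine.int8
```

Inside the deploy container the yolo_v4 entrypoint exposes gen_trt_engine, so the same arguments from the failing command should be accepted there.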
OK, I need to use the deploy docker then.