• Hardware: GeForce RTX 4070 Ti
• Network Type: DetectNet_v2
• TLT Version: format_version: 2.0, toolkit_version: 4.0.1
Here is a link to my previous question for more reference:
Very low precision while Training detectnet_v2 model using custom data in TAO
I have now reached a reasonable accuracy of about 0.79 mAP, and my aim is to deploy this model on a Jetson Xavier NX device in DeepStream.
I have generated the following files in the export folder:
- cal.bin -- calibration cache file
- cal.tensorfile -- tensorfile used for calibration
- trt.engine -- FP32 engine file
- trt.engine.fp16 -- FP16 engine file
- trt.engine.int8 -- INT8 engine file
- labels.txt -- class names
- nvinfer_config.txt -- an incomplete DeepStream-related config file (though I am not sure this is expected for detectnet_v2)
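For context, this is roughly what I would expect the completed nvinfer config to look like. This is only a sketch based on the stock detectnet_v2 DeepStream samples; the model file name, key, class count, and blob names are assumptions, not values from my actual export:

```ini
# Hypothetical completed Gst-nvinfer config for a detectnet_v2 .etlt model.
# File names, the key, class count, and blob names are placeholders --
# substitute the values from your own export and training spec.
[property]
tlt-encoded-model=resnet18_detector.etlt
tlt-model-key=tlt_encode
int8-calib-file=cal.bin
labelfile-path=labels.txt
network-mode=1
num-detected-classes=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
uff-input-blob-name=input_1
infer-dims=3;384;1248
```

Here `network-mode=1` selects INT8, which is why the cal.bin calibration cache has to be shipped alongside the .etlt.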
Now I want to deploy it on the Jetson, and for that QAT training is required. I assumed training would start from the pretrained .etlt, but I just observed that it is using the downloaded .hdf5 model:
pretrain_model_path: "/workspace/tao-experiments/yolo_v4_tiny/pretrained_cspdarknet_tiny/pretrained_object_detection_vcspdarknet_tiny/cspdarknet_tiny.hdf5"
- So is there a way I can use the .etlt model as the pretrained model for training, instead of the downloaded .hdf5 (i.e. instead of training from scratch)?
- Also, if I use an INT8 model trained with TAO (not a QAT-trained model) directly on the Jetson, is it compulsory to use the TAO converter? And will it give the same accuracy after conversion?
- Finally, for deployment on the Jetson, should I retrain with QAT for INT8, or convert the model on the Jetson as I stated earlier?
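For reference, this is the tao-converter invocation I believe I would run on the Jetson itself to build the INT8 engine from the exported .etlt. The model file name, key variable, input dims, and output blob names are assumptions taken from the standard detectnet_v2 examples, not from my setup; the script only assembles and prints the command, since tao-converter exists only on the target device:

```shell
# Sketch: building an INT8 TensorRT engine on the Jetson from the .etlt.
# All concrete values below are placeholders -- substitute your own.
ETLT=resnet18_detector.etlt          # hypothetical exported model name
KEY="${TAO_MODEL_KEY:-tlt_encode}"   # key used during tao export (placeholder)

# -c: calibration cache from export; -t: precision; -d: C,H,W input dims
# (must match the training spec); -o: detectnet_v2 output blobs;
# -e: engine file that the DeepStream nvinfer config will point at.
CMD="tao-converter -k $KEY \
  -c cal.bin \
  -t int8 \
  -d 3,384,1248 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -e trt.engine.int8 \
  $ETLT"

# Print rather than execute, since the converter binary lives on the Jetson.
echo "$CMD"
```

My understanding is that the engine must be built on the Jetson (engines are not portable across GPU architectures), which is why the dGPU-built trt.engine files from my x86 export cannot be copied over directly.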