How can we generate a DLA calibration bin file for the LPD model?

• Hardware Platform (Jetson / GPU): Jetson (Xavier NX)
• DeepStream Version : 5.1
• JetPack Version (valid for Jetson only) : 4.5
• TensorRT Version : 7.1.x

Hi,
I fine-tuned the usa_lpd model on a custom dataset, and I want to run it with DLA on a Jetson Xavier NX, but I don't know how to generate the calibration bin file for the trained model.
Is it possible to use the usa_lpd_dla.bin file with my trained model? Is that the correct way?

I converted the trained model to an engine file with the command below on a GTX 1080 Ti, which generated two files: an engine file and a calibration bin file.
But when I use that bin file on the Jetson to build a DLA engine, it does not work; building for the Jetson's GPU works fine.
When I use usa_lpd_dla.bin instead, the engine is generated on the DLA.
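For context, the DLA/GPU choice is made when the engine is built on the Jetson from the exported .etlt. Below is a hedged sketch of that conversion step using tlt-converter; the input dimensions, file names, and the `-u` (DLA core) flag here are assumptions based on typical TLT 3.0 usage, not taken from the original post:

```shell
# Sketch (assumed paths/dims): build an INT8 engine targeting the DLA
# from the exported .etlt on the Jetson.
#   -t int8          : INT8 precision
#   -c calibration.bin : calibration cache produced by "tlt ... export"
#   -u 0             : target DLA core 0 (flag support depends on the
#                      tlt-converter build shipped for your JetPack)
tlt-converter resnet18_detector.etlt \
    -k $KEY \
    -d 3,480,640 \
    -t int8 \
    -c calibration.bin \
    -u 0 \
    -e resnet18_detector_dla.trt.int8
```

If the calibration cache was produced without DLA-compatible scales, this conversion is the step that fails, which matches the behavior described above.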

tlt detectnet_v2 export \
                  -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
                  -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
                  -k $KEY  \
                  --cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor \
                  --data_type int8 \
                  --batches 10 \
                  --batch_size 4 \
                  --max_batch_size 4 \
                  --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.int8 \
                  --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
                  --verbose

Moving this topic from the DeepStream forum into the TLT forum.

Please use "--force_ptq" to generate the cal.bin file.
See DetectNet_v2 — Transfer Learning Toolkit 3.0 documentation
and
Improving INT8 Accuracy Using Quantization Aware Training and the NVIDIA Transfer Learning Toolkit | NVIDIA Developer Blog
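Applying that suggestion to the export command from the original post, the only change is adding the `--force_ptq` flag, which forces a post-training-quantization calibration cache (the QAT-derived scale cache is not usable on DLA, as the blog post above explains). This is a sketch of the original command with that one flag added; paths are unchanged from the post:

```shell
tlt detectnet_v2 export \
    -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
    -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
    -k $KEY \
    --cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor \
    --data_type int8 \
    --batches 10 \
    --batch_size 4 \
    --max_batch_size 4 \
    --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.int8 \
    --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
    --force_ptq \
    --verbose
```

The resulting calibration.bin should then work when building the DLA engine on the Jetson.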