Label files generated by tlt-infer

For initial_weight and weight_target, see the reference thread: How to set initial_weight and weight_target in the detectnet_v2 spec file?

For “AttributeError: Specified FP16 but not supported on platform.”, I am afraid the GPU in your host PC does not support FP16. See more in Support Matrix :: NVIDIA Deep Learning TensorRT Documentation and CUDA GPUs - Compute Capability | NVIDIA Developer.

Note that the etlt model is always saved in FP32 mode, regardless of which mode you set when you run tlt-export. You can run tlt-export on the host PC to generate the etlt model.
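As a rough sketch, exporting on the host PC looks like the following (the model paths, output name, and $KEY are placeholders; adjust them for your experiment):

```shell
# Export a trained detectnet_v2 model to .etlt on the host PC.
# The requested --data_type only affects later engine building;
# the .etlt file itself is stored in FP32.
tlt-export detectnet_v2 \
    -m /workspace/experiments/resnet18_detector.tlt \
    -k $KEY \
    -o /workspace/experiments/resnet18_detector.etlt \
    --data_type fp16
```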

The Nano supports FP32 and FP16, so you can deploy the etlt model on it.
Alternatively, you can use tlt-converter to generate a TensorRT engine for deployment.
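For example, building an FP16 engine on the Nano with tlt-converter might look like this (the input dimensions, output node names, and file names below are placeholders for a detectnet_v2 model; check your own spec file and network for the correct values):

```shell
# Build an FP16 TensorRT engine from the .etlt model on the Nano.
# -d : input dimensions (C,H,W) the model was trained with
# -o : comma-separated output node names of the network
# -t : engine precision; the Nano supports fp32 and fp16
./tlt-converter resnet18_detector.etlt \
    -k $KEY \
    -d 3,384,1248 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -t fp16 \
    -e resnet18_detector.fp16.engine
```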