Can't convert detectnet_v2_resnet18 trained model to FP16 using TLT on a desktop GTX 1080

Hi all,
I want to deploy detectnet_v2_resnet18 on a Jetson Nano. I followed all the training steps and now want to export the .tlt model to .etlt as FP16 on a PC with a GTX 1080, but when I run the command below I get an error. The FP32 export works without any error.

tlt-export detectnet_v2 \
    -m /workspace/tmp2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
    -o /workspace/tmp2/experiment_dir_final/resnet18_detector.etlt \
    -k key \
    --data_type fp16

How can I solve this problem?

In the TLT example notebook detectnet_v2.ipynb, does the note mean DetectNet_v2 supports INT8 only, or INT8/FP16/FP32? If it only supports INT8, then since the Jetson Nano does not support INT8, how can I run the model on the Nano? Is it possible?

The Jupyter notebook for DetectNet_v2 just shows end users how to run tlt-export in INT8/FP16/FP32.
When you run tlt-export, you need to check whether your host PC supports FP16.
See https://developer.nvidia.com/cuda-gpus#compute and https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix. If your device does not support FP16, tlt-export will log "Specified FP16 but not supported on platform".
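
As a quick sanity check on the host, you can query the GPU's compute capability and compare it with the hardware precision matrix linked above. A minimal sketch, assuming a reasonably recent driver (older nvidia-smi builds may not have the compute_cap query field):

    # Print the GPU name and compute capability; compare the result against the
    # TensorRT hardware precision matrix (6.1, as on a GTX 1080, lacks FP16 support).
    nvidia-smi --query-gpu=name,compute_cap --format=csv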

Also, if your edge device does not support INT8, you do not need to run tlt-converter with INT8.
The Nano does not support INT8 precision, so it is not possible to generate an INT8 TensorRT engine on the Nano.

As mentioned in my post, my GPU is a GTX 1080.
It works with nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2 and converts to FP16, but with nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2 I get that error when I try to convert to FP16.

That's because the 1.0 docker does not perform the platform compatibility check.
For the GTX 1080, the compute capability is 6.1, which does not support FP16. It does support INT8.

What about the GTX 1080 Ti? Does it support FP16?
Is it possible to do this conversion step in Google Colab?

Please check the CUDA-Enabled GeForce and TITAN Products table at https://developer.nvidia.com/cuda-gpus and the hardware precision matrix at https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix.

Thanks.
Is it possible to use the .tlt model with FP32 and convert it to FP16 with DeepStream on the Jetson Nano?
Is it possible to use tlt-export on the Jetson Nano to convert the .tlt model to an .etlt model in FP16?

Inside the docker, please use tlt-export to generate the .etlt model. Actually, all .etlt models are in FP32 mode. See the topic "Difference in data type specified during tlt export and tlt convert".
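
Concretely, on the GTX 1080 host the export can stay at FP32. A sketch reusing the paths and key from the command at the top of this thread (adjust them to your own workspace):

    # Export the pruned/retrained model to .etlt inside the TLT docker.
    # FP32 is used here because the GTX 1080 does not support FP16.
    tlt-export detectnet_v2 \
        -m /workspace/tmp2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
        -o /workspace/tmp2/experiment_dir_final/resnet18_detector.etlt \
        -k key \
        --data_type fp32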

On the Nano, please use tlt-converter to generate the FP16 TensorRT engine.
The Nano supports FP16.
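
For reference, a tlt-converter run on the Nano might look like the sketch below. The -d input dimensions and -o output node names assume the default detectnet_v2 KITTI example from the notebook; adjust them to match your own training spec, and the file names are placeholders:

    # Build an FP16 TensorRT engine from the exported .etlt on the Jetson Nano.
    # -k: the same key used during tlt-export
    # -d: input dims C,H,W (must match the export resolution)
    # -o: detectnet_v2 output nodes (coverage and bbox heads)
    # -t: target precision, -e: output engine path
    ./tlt-converter resnet18_detector.etlt \
        -k key \
        -d 3,384,1248 \
        -o output_cov/Sigmoid,output_bbox/BiasAdd \
        -t fp16 \
        -e resnet18_detector_fp16.trt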