Difference in data type specified during tlt-export and tlt-convert

There is an optional data type argument (fp16, fp32, int8) in both tlt-export and tlt-convert. How do they differ?

They are two different data type settings.
Please see https://devtalk.nvidia.com/default/topic/1065558/transfer-learning-toolkit/trt-engine-deployment/ for more info.

Hi,
I mean to say, why do we need to specify the data type in both tlt-export and tlt-convert? Let’s say I have defined the data type as fp16 in tlt-export and fp32 in tlt-convert, then what will happen, and vice versa?

The tlt-export will generate an etlt model. If run in int8 mode, the calibration table is also generated.
The tlt-convert will generate a TRT engine.
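
For illustration, an int8 export looks roughly like the sketch below. This is only a sketch based on the DetectNet_v2 sample workflow; the exact flags, file names, and the $KEY value depend on your TLT version and network, so please check the docs/notebook of your release.

```
# Export the .tlt model to .etlt; with --data_type int8 the calibration
# table (calibration.bin) is generated as well.
tlt-export detectnet_v2 \
    -m experiment_dir_unpruned/weights/resnet18_detector.tlt \
    -o experiment_dir_final/resnet18_detector.etlt \
    -k $KEY \
    --data_type int8 \
    --batches 10 \
    --batch_size 4 \
    --cal_data_file  experiment_dir_final/calibration.tensor \
    --cal_cache_file experiment_dir_final/calibration.bin
```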

More info:

  1. For tlt-export, no matter whether FP32, FP16, or INT8 is chosen, the etlt model is exactly the same. Its data type is always fp32.
  2. For tlt-export, if set to INT8, it will also run calibration and generate the INT8 calibration table for deployment.
  3. FP16 tlt-export + FP32 tlt-convert will behave exactly like FP32 all the way. It will generate an FP32 engine.
  4. FP32 tlt-export + FP16 tlt-convert will generate an FP16 engine (see the sketch after this list).
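
To put points 3 and 4 concretely: since the etlt weights are always fp32, the precision of the final engine is decided by the converter's -t flag. A rough sketch, assuming the converter binary is named tlt-converter and using the DetectNet_v2 resnet18 example values for the key, output node names, and input dims (replace these with the ones for your own model):

```
# The same fp32 .etlt can be built into engines of different precisions.

# FP16 engine:
tlt-converter resnet18_detector.etlt \
    -k $KEY \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,384,1248 \
    -t fp16 \
    -e resnet18_detector_fp16.trt

# INT8 engine (needs the calibration table produced by the INT8 export):
tlt-converter resnet18_detector.etlt \
    -k $KEY \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,384,1248 \
    -t int8 \
    -c calibration.bin \
    -e resnet18_detector_int8.trt
```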

Hi Morganh,
Thanks for the reply.