Training Precision Issue

I wonder if I can set the training precision to backend_floatx: FLOAT16 or backend_floatx: INT8?

I ask because I only found backend_floatx: FLOAT32 used in the API.

If I want to use mixed precision, should I export TF_ENABLE_AUTO_MIXED_PRECISION=1 inside the Docker container?

For training precision, only FLOAT32 and FLOAT16 are supported.

For AMP, please refer to https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#auto_mixed_precision.

In TLT, enabling AMP is as simple as setting the environment variable TF_ENABLE_AUTO_MIXED_PRECISION=1 when running tlt-train. This will help speed up training by using FP16 Tensor Cores. Note that AMP is only supported on GPUs with the Volta architecture or newer.

For example,

TF_ENABLE_AUTO_MIXED_PRECISION=1 tlt-train …
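As a rough sketch, if you start the container yourself, the same variable can be passed with Docker's -e flag instead of prefixing the command; the image name and mount path below are placeholders for illustration, not the actual TLT image tag:

docker run --runtime=nvidia -it \
  -e TF_ENABLE_AUTO_MIXED_PRECISION=1 \
  -v /local/workspace:/workspace \
  <tlt_docker_image> /bin/bash

Or export it inside the container before training:

export TF_ENABLE_AUTO_MIXED_PRECISION=1
tlt-train …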

Thanks for the reply!