How does INT4 work in the Tesla T4?

I have a Tesla T4 and I am testing it.
The GPU supports inference at the following precisions: FP32, FP16, INT8, and INT4.

When using INT8, a calibration dataset is required.
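For context on what the calibration dataset is used for: the calibrator runs representative inputs through the network to estimate per-tensor dynamic ranges, from which quantization scale factors are derived. Below is a minimal NumPy sketch of the idea using simple max-abs (symmetric) calibration; note that TensorRT's default entropy calibrator is more sophisticated than this, and the function names here are illustrative, not part of any API.

```python
import numpy as np

def calibrate_scale(activations, num_bits=8):
    # Symmetric quantization: map [-max_abs, max_abs] onto the signed
    # integer range. qmax is 127 for INT8 and 7 for INT4.
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(np.abs(a).max() for a in activations)
    return max_abs / qmax

def quantize(x, scale, num_bits=8):
    # Round to the nearest representable integer and clip to range.
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)

# "Calibration dataset": a few batches of representative activations.
calib = [np.random.randn(4, 16).astype(np.float32) for _ in range(8)]

scale = calibrate_scale(calib, num_bits=8)   # derived from the data
x = calib[0]
xq = quantize(x, scale)                      # INT8 representation
x_restored = xq.astype(np.float32) * scale   # dequantized approximation
```

With max-abs calibration and no clipping, the round-trip error per element is bounded by half the scale; a smaller `num_bits` (e.g. 4) gives a coarser scale and larger error, which is why INT4 is expected to be more sensitive to calibration quality than INT8.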

Where can I find more information on how to do inference using INT4? Does it also need a calibration dataset?
Does it work with TensorFlow-TRT, or only with TensorRT?