Quantization of Weights/Activations on TX2

Hi,
I want to reduce the bit-width of the weights and activation maps in my CNN to speed it up, specifically to 8-bit integer (INT8) and 16-bit float (FP16).

  1. Do I have to use TensorRT? Is there any other way, such as using framework functions (Caffe/TensorFlow)?
  2. As I understand it, there is no dedicated 8-bit INT accelerator on the TX2. Does that mean I cannot run an 8-bit INT network on the TX2 at all? Or can I still run it in 8-bit INT mode, just without any speed-up?

Besides,
I have a problem with my account on the NVIDIA Developer site and the forum. How can I reach account support?
(I'm trying to register with my company e-mail address, but the password reset always fails.)

You can get web site support here (e.g., account issues):
[url]https://developer.nvidia.com/contact[/url]

Be aware that some downloads/documentation require a free registration separate from the forums (it may look like the same login, but it is actually a separate registration).

Hi,

Are there any answers regarding INT8 and 16-bit float support on the TX2?
I saw that the Pascal architecture has such support; is that the case for the TX2 as well?

Many thanks, Matan

Hi,

Float16: YES.

INT8: NO. INT8 only works on P4/P40/TITAN X/…, not on the TX1 or TX2.
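
For FP16, a minimal sketch of what the TensorRT C++ flow could look like on a TX2 is below. This assumes a TensorRT 2.x/3.x-style API as shipped with JetPack; the prototxt/caffemodel paths, the "prob" output blob name, and the batch/workspace sizes are placeholders, and newer TensorRT releases rename setHalf2Mode to setFp16Mode.

[code]
// Sketch: build an FP16 ("half") TensorRT engine from a Caffe model on TX2.
// Assumes a TensorRT 2.x/3.x-era C++ API; file names and "prob" are placeholders.
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT requires a logger implementation.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);

    // FP16 only pays off if the hardware has fast FP16 paths (true on TX2).
    bool useFp16 = builder->platformHasFastFp16();

    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse weights directly into FP16 when supported, otherwise FP32.
    DataType modelDataType = useFp16 ? DataType::kHALF : DataType::kFLOAT;
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, modelDataType);

    network->markOutput(*blobNameToTensor->find("prob"));  // placeholder output blob

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    builder->setHalf2Mode(useFp16);  // enable FP16 kernels ("half2" mode in TensorRT 2/3)

    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // ... serialize/deploy the engine, then clean up.
    network->destroy();
    parser->destroy();
    builder->destroy();
    if (engine) engine->destroy();
    return 0;
}
[/code]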

Thanks.