I want to shrink the bit-width of the weights and activation maps to speed up my CNN, specifically to 8-bit integer (INT8) and 16-bit float (FP16).
- Do I have to use TensorRT, or is there another way, for example using built-in framework functions (Caffe/TensorFlow)?
- I have heard that the TX2 has no INT8 acceleration hardware. Does that mean I cannot run an INT8 network on the TX2 at all? Or can I still run it in INT8 mode, just without any speed-up?
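For context, here is my understanding of what INT8 quantization does to the weights, as a minimal plain-Python sketch of symmetric per-tensor quantization (illustrative only; I assume TensorRT's actual calibration is more sophisticated than this):

```python
# Symmetric per-tensor INT8 quantization sketch (illustrative,
# not TensorRT's actual calibration algorithm).

def quantize_int8(weights):
    """Map float weights to INT8 codes with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0  # largest value -> +/-127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the INT8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)      # q = [50, -127, 3, 100]
approx = dequantize(q, scale)          # close to the original weights
```

My question above is whether storing/computing with codes like `q` actually runs faster on the TX2, or whether the lack of INT8 hardware means it only saves memory.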
Separately, I have a problem with my account on the NVIDIA Developer site and forum. How can I reach account support?
(I am trying to register with my company e-mail address, but the password reset always fails.)