Is it even possible to perform INT8 PTQ on a Jetson Nano devkit? I’ve looked at the support matrix, and it seems that, due to the lack of Tensor Cores, it can’t be done. Is that correct?
I’ve tried it briefly, but with the INT8 calibrator I’ve written the engine fails to build: the build call just returns a NoneType object. If INT8 really is unsupported on this hardware, silently returning None is a poor way to indicate that.
Environment
TensorRT Version: 8.0
GPU Type: Jetson Nano (Maxwell)
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
I don’t think either of those resources answers my question of whether it’s possible to reduce precision to INT8 on the Jetson Nano at all. But yes, this is related to Jetson, so thank you for moving it; I didn’t realize I hadn’t posted it in the Jetson section.