Is INT8 PTQ even possible on Jetson Nano?

Description

Is it even possible to perform INT8 PTQ on a Jetson Nano devkit? I’ve looked at the support matrix, and due to the lack of Tensor Cores it seems it can’t be done at all. Is that right?
I’ve briefly tried: with the INT8 calibrator I’ve written, the engine fails to build and comes back as a NoneType object. If INT8 really is unsupported, returning None is a poor way to indicate that.
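A minimal sketch of a defensive build wrapper for the NoneType situation described above. The helper name `build_int8_engine` and the `FakeBuilder`-style object are illustrative, not part of TensorRT; the real `trt.Builder` in TensorRT 8.x does expose a `platform_has_fast_int8` attribute you can check before enabling INT8, and `build_engine()` does return `None` on failure rather than raising:

```python
# Hedged sketch: fail loudly instead of silently getting a NoneType engine.
# `build_int8_engine` is a hypothetical wrapper, not a TensorRT API.

def build_int8_engine(builder, network, config):
    """Build an engine, raising a descriptive error instead of returning None."""
    # On TensorRT 8.x, trt.Builder exposes this capability flag.
    if not getattr(builder, "platform_has_fast_int8", False):
        raise RuntimeError(
            "This GPU reports no fast INT8 support; "
            "TensorRT will not build an INT8 engine on it."
        )
    engine = builder.build_engine(network, config)
    if engine is None:
        raise RuntimeError(
            "build_engine() returned None; check the TensorRT "
            "logger output for the underlying error."
        )
    return engine
```

With a check like this, the failure mode is an explicit error message rather than a `NoneType` object surfacing later in the pipeline.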

Environment

TensorRT Version: 8.0
GPU Type: Jetson Nano (Maxwell GPU)
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9

Hi,

This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.

Thanks!

I don’t think either of those resources answers my question of whether it is possible to reduce precision to INT8 on a Jetson Nano at all. But yes, this is Jetson-related; thank you for moving it. I didn’t realize I wasn’t posting in the Jetson section.

Hi,

Unfortunately, this is a hardware limitation.
INT8 operation on Jetson is available from the Xavier series onward (compute capability ≥ 7.2); the Nano’s Maxwell GPU does not support it.

Thanks.

Alright thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.