Int16 Inference on Jetson Xavier NX/Orin NX

Hi everyone,

I’m currently working on a special use case where I need INT16 inference (not FP16 or INT8). I saw in this thread that the Jetson Xavier supports INT8 inference, and in this blog post that the architecture preceding Xavier’s Volta GPU supports INT16 inference. So my question is: do the Jetson Xavier or Orin also support INT16 inference?


Do you want to use TensorRT for inference?
If yes, note that TensorRT supports FP32, FP16, INT8, INT32, and BOOL, so INT16 is not among its supported precisions.
You can find more details in the TensorRT documentation.

If you are trying other frameworks that have an INT16 implementation: the Xavier NX does have IMMA (integer matrix multiply-accumulate) operations, like other Volta-generation GPUs.
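For context, IMMA-style operations multiply low-precision integer operands and accumulate the products into a wider INT32 result, which avoids overflow in the dot products. A minimal NumPy sketch of that accumulation pattern (illustrative only, not the actual Tensor Core API):

```python
import numpy as np

# INT16 operands (hypothetical example values).
a = np.array([[1000, -2000], [3000, 4000]], dtype=np.int16)
b = np.array([[5, 6], [7, 8]], dtype=np.int16)

# Accumulate in INT32, as IMMA-style hardware does: entries such as
# 3000*6 + 4000*8 = 50000 would overflow a 16-bit accumulator.
acc = a.astype(np.int32) @ b.astype(np.int32)
print(acc)
print(acc.dtype)  # int32
```

The key point is the widened accumulator: the inputs stay 16-bit, but the partial sums are carried in 32 bits.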
