How to use int8 inference with JetsonTX2+TensorRT-2.1.2

When I run the TensorRT 2.1.2 sample "./sample_int8 mnist", I get the following error: "Int8 support requested on hardware without native Int8 support, performance will be negatively affected. ERROR LAUNCHING INT8-to-INT8 GEMM: 8". However, the TensorRT User Guide states that "INT8 inference is available only on GPUs with compute capability 6.1". Does this mean the compute capability of the Jetson TX2 GPU is below 6.1? What is the exact compute capability of the Jetson TX2 GPU, and is it equivalent to the GeForce GTX 750?


TensorRT INT8 inference is only supported on GPUs with the SM 6.1 architecture (compute capability 6.1).
The TX2 GPU is SM 6.2, which does not have the native INT8 feature.

GPU architecture information can be found here:

Thanks and sorry for the inconvenience.