platformHasFastInt8 returns true on GeForce GTX 1060


Hi, as the title states, I’m getting a `true` return from `platformHasFastInt8` on a GeForce GTX 1060.
According to the hardware-precision-matrix documentation, INT8 is supported at compute capability 6.1, but only on the Tesla architecture. I checked the compute capability here


TensorRT Version: 6.0.1-1+cuda10.0
GPU Type: GeForce GTX 1060
Nvidia Driver Version: 440.64
CUDA Version: 10.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Baremetal or Container (if container which image + tag): Baremetal

Steps To Reproduce

Creating a simple builder and querying for the int8 availability should suffice:

// Assumes `logger` is an existing nvinfer1::ILogger implementation.
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
if (builder->platformHasFastInt8()) {
    std::cout << "INT8 Enabled" << std::endl;
}
I’m writing a function that checks for the fastest available precision. I didn’t really think INT8 was enabled on GeForce; I’m cross-compiling for Xavier, where of course it is available.
Any insight into this? It could cause some issues down the line.

Hi @bpinaya,

I believe INT8 should be supported on your device (and all other CC 6.1 devices); please see this post: FP16 support on gtx 1060 and 1080

The table you mentioned just lists the Tesla P4 as an example of a CC 6.1 GPU; it’s not the only CC 6.1 GPU with INT8 support.

Oh I see @NVES_R, thanks for the quick reply. So that means I don’t have FP16 but INT8 is available, right? Since something like `builder->platformHasFastFp16()` returns false. Thanks for the help!