sampleINT8 problem in TensorRT 3.0.4

Hi,

When I run the sampleINT8 program, it says:

Engine could not be created at this precision

Is this expected, or does the sample code bundled with TensorRT 3.0.4 have a bug?

This is the message:

$ ./sample_int8 mnist

FP32 run:400 batches of size 100 starting at 100
........................................
Top1: 0.9904, Top5: 1
Processing 40000 images averaged 0.00264616 ms/image and 0.264616 ms/batch.

FP16 run:400 batches of size 100 starting at 100
........................................
Top1: 0.9904, Top5: 1
Processing 40000 images averaged 0.00239607 ms/image and 0.239607 ms/batch.

INT8 run:400 batches of size 100 starting at 100
Engine could not be created at this precision

The platform is TensorRT 3.0.4, Ubuntu 16.04, CUDA 9.0, cuDNN 7.

I don’t know whether the sample code can perform INT8 calibration.
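
For reference, this is roughly how I understand the sample asks the builder for an INT8 engine (a sketch against the TensorRT 3.x API, not the exact sample code; the workspace size and the way the network and calibrator are passed in are placeholders):

#include "NvInfer.h"

// Sketch: configure a TensorRT 3.x builder for an INT8 engine. The network
// and entropy calibrator are assumed to be created elsewhere, as in the sample.
nvinfer1::ICudaEngine* buildInt8Engine(nvinfer1::IBuilder& builder,
                                       nvinfer1::INetworkDefinition& network,
                                       nvinfer1::IInt8Calibrator* calibrator)
{
    builder.setMaxBatchSize(100);          // batch size used in the run above
    builder.setMaxWorkspaceSize(1 << 30);  // 1 GiB scratch space (placeholder)
    builder.setInt8Mode(true);             // request INT8 kernels
    builder.setInt8Calibrator(calibrator); // calibrator supplies the dynamic ranges
    // Returns nullptr if the engine cannot be built at this precision.
    return builder.buildCudaEngine(network);
}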

Thanks

I’ve found the reason.

The graphics card I was using doesn’t support INT8.
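
If it helps anyone else, a quick way to confirm this from code is to ask the builder itself (a minimal standalone check against the TensorRT 3.x API; link with -lnvinfer):

#include <iostream>
#include "NvInfer.h"

// Minimal logger required to create a TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    // These report whether the current GPU has fast FP16/INT8 kernels;
    // sampleINT8 presumably skips building the engine when they return false.
    std::cout << "Fast FP16: " << builder->platformHasFastFp16() << std::endl;
    std::cout << "Fast INT8: " << builder->platformHasFastInt8() << std::endl;
    builder->destroy();
    return 0;
}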

Hi,

What graphics card were you using? I’m getting this error as well. I’m using a GeForce GTX 1050, and the NVIDIA docs say this card should support INT8.

Thanks

At first I used a Tesla P100 and got that error. When I switched to a GTX 1080, it ran successfully. The P100 is compute capability 6.0, which lacks the DP4A instructions TensorRT’s fast INT8 path relies on, while the GTX 1080 and GTX 1050 are compute capability 6.1, so I think the 1050 should work.
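
If you want to double-check which compute capability your card reports, here is a small CUDA snippet using only the standard runtime API (compile with nvcc):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // e.g. Tesla P100 -> 6.0, GTX 1080 / GTX 1050 -> 6.1
        std::printf("GPU %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}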