Hi NVIDIA,
I want to do calibration with TensorRT, so I tried the TF-TRT example. I ran it on two graphics cards: a Tesla K40 and a GTX 970M. The log shows that neither card supports INT8 or FP16, but the conversion still runs and writes out a .pb file. The .pb file is larger than I expected.
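For reference, here is roughly the conversion call I am using, following the blog post (a minimal sketch; frozen_graph_def and the output node name "logits" are placeholders for my actual model):

```python
# Minimal sketch of the TF-TRT conversion from the blog post.
# "frozen_graph_def" and the output name "logits" are placeholders
# for my actual model.
import tensorflow.contrib.tensorrt as trt

calib_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,   # frozen TensorFlow GraphDef
    outputs=["logits"],                 # output node names
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,   # 1 GB workspace for TensorRT
    precision_mode="INT8")              # also tried "FP16"

# After feeding calibration data through calib_graph,
# convert it to the final inference graph:
trt_graph = trt.calib_graph_to_infer_graph(calib_graph)
```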
Below are the sizes of the .pb files and my TensorFlow/TensorRT environment:
The sample doesn't work correctly, and I suspect it is related to my graphics cards.
From the following page I learned that FP16 and INT8 are only supported on cards with compute capability 6.1 or above:
https://devtalk.nvidia.com/default/topic/1023708/gpu-accelerated-libraries/fp16-support-on-gtx-1060-and-1080/
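For what it's worth, this is how I check the compute capability of my cards (a small sketch; TensorFlow reports it in the device description string):

```python
# Print the compute capability of each visible GPU.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        # e.g. "... name: Tesla K40, ... compute capability: 3.5"
        print(dev.physical_device_desc)
```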
Does this mean I can't do calibration on my older graphics cards, or will they just run slower than cards with a higher compute capability?
I got the sample from this page: https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/
Best regards,
Matthias