I converted a TensorFlow model to UFF. How can I do INT8 inference using the UFF file?
INT8 is supported on GPU architectures with compute capability 6.1 and 7.x.
Jetson TX2 is sm 6.2, which does not support INT8 inference.
I want to do it on a GTX 1060.
You should be able to control this with FieldType.
Please check our documentation for details.
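For reference, a minimal sketch of building an INT8 engine from a UFF file with the TensorRT 5/6-era Python API (before the UFF parser was deprecated). The file name `model.uff`, the tensor names `input`/`output`, and the input shape are placeholders, not taken from the question above; you must supply a calibrator (an `IInt8EntropyCalibrator2` subclass fed with representative data) for TensorRT to pick per-tensor scales:

```python
def int8_scale(max_abs, num_bits=8):
    """Symmetric quantization scale: maps [-max_abs, max_abs] onto the
    signed integer range (divisor is 127 for int8). Calibration is, at
    its core, choosing a good max_abs per tensor."""
    return max_abs / float(2 ** (num_bits - 1) - 1)


def build_int8_engine(uff_path, calibrator):
    # Imported inside the function: tensorrt needs an NVIDIA GPU with
    # compute capability 6.1 or 7.x for INT8 (e.g. a GTX 1060, not a TX2).
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network()
    parser = trt.UffParser()

    # Placeholder tensor names and CHW shape -- use your model's real ones.
    parser.register_input("input", (3, 224, 224))
    parser.register_output("output")
    parser.parse(uff_path, network)

    if not builder.platform_has_fast_int8:
        raise RuntimeError("This GPU has no fast INT8 support")

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30  # 1 GiB of build workspace
    builder.int8_mode = True
    builder.int8_calibrator = calibrator
    return builder.build_cuda_engine(network)
```

The resulting engine is then used like any other TensorRT engine (allocate device buffers, create an execution context, call `execute`); only the build step differs from FP32.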