Inference on a UFF file using INT8

I converted a TensorFlow model to UFF. How can I run INT8 inference using the UFF file?

Hi,

INT8 is supported on GPUs with compute capability 6.1 and 7.x.
Jetson TX2 is sm_62, which does not support INT8 inference.

Thanks.

Hi AastaLLL:

I want to do it on a GTX 1060.

Hi,

I suppose you can control it with FieldType.
Please check our documentation for details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/classnvuffparser_1_1_field_map.html#details
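For reference, a minimal sketch of building an INT8 engine from a UFF file with the TensorRT Python API (TensorRT 5/6-era UFF workflow). The file name `model.uff`, the input/output tensor names, and the input shape are placeholders for your own model; INT8 mode also requires a calibrator (or per-tensor dynamic ranges) that you must supply yourself:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_int8_engine(uff_path, input_name, input_shape, output_name, calibrator):
    """Parse a UFF model and build a TensorRT engine in INT8 mode.

    `calibrator` must implement trt.IInt8EntropyCalibrator2 (it feeds
    representative input batches so TensorRT can choose INT8 scales).
    """
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.UffParser()

    # Register the graph's I/O tensors, then parse the UFF file.
    parser.register_input(input_name, input_shape)   # e.g. (3, 224, 224), CHW
    parser.register_output(output_name)
    parser.parse(uff_path, network)

    # Enable INT8 precision; requires a calibrator for scale selection.
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30
    builder.int8_mode = True
    builder.int8_calibrator = calibrator

    return builder.build_cuda_engine(network)

# Usage (names and shape are assumptions for illustration):
# engine = build_int8_engine("model.uff", "input", (3, 224, 224),
#                            "output", my_calibrator)
```

Note that INT8 mode only takes effect on GPUs that support it (compute capability 6.1 or 7.x, which includes the GTX 1060); on unsupported hardware the engine build will fail or fall back depending on the TensorRT version.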

Thanks.