How to run uff_resnet50.py sample in INT8 mode

Hi all, I am working with the sample uff_resnet50.py, which demonstrates TensorRT inference using the Python API and is located at /usr/src/tensorrt/samples/python/introductory_parser_samples. I have the following questions:

1- How can I run the uff_resnet50.py sample in INT8 mode?
2- Why doesn't it have an INT8 calibration step like the TensorRT C++ API or TF-TRT implementations do?
3- What is the default inference mode of this script?

Thanks

Hello,

By default, TensorRT uses FP32 inference. To enable INT8 for uff_resnet50, please refer to: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#enable_int8_python
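As background on why INT8 mode involves calibration at all: TensorRT quantizes each tensor symmetrically, mapping its dynamic range [-amax, amax] onto the integer range [-127, 127]. A minimal NumPy sketch of that mapping (illustrative only, not TensorRT API code):

```python
import numpy as np

def quantize_int8(x, amax):
    """Symmetric INT8 quantization: map [-amax, amax] onto [-127, 127]."""
    scale = amax / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_int8(x, amax=1.0)
x_hat = dequantize(q, scale)
# Round-trip error is bounded by the quantization step size
assert np.max(np.abs(x - x_hat)) <= scale
```

Calibration exists to pick a good amax per tensor: too small clips large activations, too large wastes the 8-bit resolution.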

Hello,

Is it enough to just set the builder flag trt_builder.int8_mode = True, or is it also necessary to set the dynamic range for each network tensor in order to perform INT8 inference? Could you please provide a code sample?
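To illustrate the second option mentioned above: whether it is supplied by a calibrator or set manually, the per-tensor dynamic range is essentially the largest absolute value the tensor takes on representative input data. A toy NumPy sketch (not TensorRT API code) of collecting it for one tensor:

```python
import numpy as np

def collect_amax(batches):
    # A calibrator essentially estimates, per tensor, the largest
    # absolute activation value seen over representative input data.
    amax = 0.0
    for batch in batches:
        amax = max(amax, float(np.max(np.abs(batch))))
    return amax

rng = np.random.default_rng(0)
# Stand-in for the activations of one network tensor over 4 calibration batches
batches = [rng.normal(scale=2.0, size=(8, 16)).astype(np.float32)
           for _ in range(4)]
amax = collect_amax(batches)
assert amax > 0.0
```

If the manual route is taken, a range like [-amax, amax] is what would be supplied per tensor (via ITensor's set_dynamic_range in the TensorRT Python API); otherwise an INT8 calibrator attached to the builder derives such ranges automatically.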