How to run sample in INT8 mode

Hi all, I am working with the sample that implements TensorRT using the Python API, located at /usr/src/tensorrt/samples/python/introductory_parser_samples. I have the following questions:

1- How can I run the sample in INT8 mode?
2- Why doesn’t it have the INT8 calibration step like the TRT C++ API or TF-TRT implementations?
3- What is the default inference mode of this script?



By default, TensorRT uses FP32 inference. To enable uff_resnet50 for INT8, please refer to:


Is it enough to just set the builder flag trt_builder.int8_mode = True, or do I also need to set the dynamic range for each network tensor in order to perform INT8 inference? Could you please provide a code sample?
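Setting the INT8 flag alone is not sufficient when no calibrator is supplied: TensorRT then needs an explicit dynamic range for every tensor it quantizes. Below is a minimal, hedged sketch of that flow. It assumes a TensorRT 7+ style Python API (where `config.set_flag(trt.BuilderFlag.INT8)` replaces the legacy `builder.int8_mode = True`), and the `symmetric_range` helper and the `ranges_per_tensor` dictionary are illustrative names, not part of TensorRT itself:

```python
def symmetric_range(values):
    """Illustrative helper: derive a symmetric dynamic range (-amax, amax)
    from activation values observed on representative input data."""
    amax = max(abs(v) for v in values)
    return (-amax, amax)

def enable_int8(builder, network, ranges_per_tensor):
    """Sketch: build an INT8 builder config without a calibrator.

    `network` is assumed to be an already-parsed INetworkDefinition;
    `ranges_per_tensor` maps tensor name -> (min, max) dynamic range.
    """
    import tensorrt as trt

    config = builder.create_builder_config()
    # Modern equivalent of the legacy `builder.int8_mode = True` flag.
    config.set_flag(trt.BuilderFlag.INT8)

    # With no IInt8Calibrator attached, every tensor that gets quantized
    # needs an explicit range, including the network inputs...
    for i in range(network.num_inputs):
        t = network.get_input(i)
        if t.name in ranges_per_tensor:
            t.set_dynamic_range(*ranges_per_tensor[t.name])

    # ...and each layer's output tensors.
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        for j in range(layer.num_outputs):
            t = layer.get_output(j)
            if t.name in ranges_per_tensor:
                t.set_dynamic_range(*ranges_per_tensor[t.name])

    return config
```

The alternative is to attach an `IInt8Calibrator` (e.g. an entropy calibrator fed with representative input batches) to the builder config, in which case TensorRT computes these ranges itself, which is the calibration step the C++ samples perform.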