The documentation on enabling INT8 is badly lacking!!!

Could you please give some examples of INT8 optimization, including calibration and so on?

You know, the documentation here [https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#optimizing_int8_python] is far too short on details! What is ImageBatchStream? How do I import it? How do I construct calibration files? When I search the Internet, many examples show this:

import tensorrt as trt
Int8_calibrator = trt.infer.EntropyCalibrator()...

The problem is that TensorRT 5.0.2 has no `infer` attribute and no module named `tensorrt.infer`.

Asking for help!!! I am going crazy…

How can I @ the TensorRT developers for help?

Agreed. I guess more information will be added when TensorRT 5.10 is released.

Hello,

Please see the example code in this post on INT8 inference with Python: https://devblogs.nvidia.com/int8-inference-autonomous-vehicles-tensorrt/. It has examples of ImageBatchStream, calibrators, etc.
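To answer the "What is ImageBatchStream?" question directly: it is not part of the `tensorrt` module itself, just a helper class defined in that blog post that serves preprocessed images in fixed-size batches to the calibrator. A minimal numpy-only sketch of the idea (the class name and array shapes here are illustrative, not the blog's exact code):

```python
import numpy as np

class ImageBatchStream:
    """Minimal sketch of the ImageBatchStream idea: serve preprocessed
    images in fixed-size batches for INT8 calibration."""

    def __init__(self, images, batch_size):
        self.images = images          # list of (C, H, W) float32 arrays
        self.batch_size = batch_size
        self._idx = 0

    def reset(self):
        # Allow the calibrator to rewind and replay the data.
        self._idx = 0

    def next_batch(self):
        # Return the next full batch as one contiguous (N, C, H, W)
        # array, or None when the data is exhausted.
        chunk = self.images[self._idx:self._idx + self.batch_size]
        self._idx += self.batch_size
        if len(chunk) < self.batch_size:
            return None  # calibrators expect full batches; drop the remainder
        return np.ascontiguousarray(np.stack(chunk))
```

In the blog example the stream also handles file reading and preprocessing (resizing, mean subtraction); here the images are assumed to be already preprocessed.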

However, it looks like some of the documentation around that code is out of date for TensorRT 5. Please see https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/migrationGuide.html for tips on migrating it. For these examples, this note is the most relevant:

The new API removes the infer, parsers, utils, and lite submodules. Instead, all functionality is now included in the top-level tensorrt module.

So, for example, it looks like trt.infer.EntropyCalibrator is now trt.IInt8EntropyCalibrator.
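In the TensorRT 5 Python API, a calibrator is written by subclassing one of the IInt8*Calibrator classes and implementing get_batch_size, get_batch, read_calibration_cache, and write_calibration_cache. Here is a hedged sketch (the class name, cache file name, and the host-pointer shortcut in get_batch are illustrative; real use needs a pycuda device buffer, and IInt8EntropyCalibrator2 is used here, though IInt8EntropyCalibrator also exists):

```python
import os
import numpy as np

try:
    import tensorrt as trt
    CalibratorBase = trt.IInt8EntropyCalibrator2
except ImportError:
    CalibratorBase = object  # lets the sketch run without TensorRT installed

class SketchInt8Calibrator(CalibratorBase):
    """Hedged sketch of a TensorRT 5 INT8 entropy calibrator."""

    def __init__(self, batches, batch_size, cache_file="calibration.cache"):
        super().__init__()               # TensorRT requires initializing the base class
        self._batches = iter(batches)    # iterable of contiguous float32 (N,C,H,W) arrays
        self._batch_size = batch_size
        self._cache_file = cache_file
        self._keepalive = None

    def get_batch_size(self):
        return self._batch_size

    def get_batch(self, names):
        batch = next(self._batches, None)
        if batch is None:
            return None                  # tells TensorRT the calibration data is exhausted
        # Real use: copy to a pre-allocated GPU buffer with pycuda, e.g.
        #   cuda.memcpy_htod(self.d_input, batch)
        #   return [int(self.d_input)]
        self._keepalive = np.ascontiguousarray(batch)  # keep the memory alive
        return [self._keepalive.ctypes.data]           # placeholder host pointer

    def read_calibration_cache(self):
        # Reuse the scale factors from a previous calibration run if present.
        if os.path.exists(self._cache_file):
            with open(self._cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self._cache_file, "wb") as f:
            f.write(bytes(cache))
```

In TensorRT 5 the calibrator is then attached to the builder before building the engine, along the lines of builder.int8_mode = True and builder.int8_calibrator = SketchInt8Calibrator(...).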

A quick, short-term fix for debugging purposes, instead of changing the code, might be to replace

import tensorrt as trt

with

import tensorrt.legacy as trt

as mentioned in the migration guide.

Thanks,
NVIDIA Enterprise Support

Thanks very much! I will give it a try following your suggestions.