Fast INT8 Inference for Autonomous Vehicles with TensorRT 3

Originally published at: https://developer.nvidia.com/blog/int8-inference-autonomous-vehicles-tensorrt/

Autonomous driving demands safety, and a high-performance computing solution to process sensor data with extreme accuracy. Researchers and developers creating deep neural networks (DNNs) for self-driving must optimize their networks to ensure low-latency inference and energy efficiency. Thanks to a new Python API in NVIDIA TensorRT, this process just became easier.

Can you release the model?

The blog is informative and helpful!
I have one question: how should the read_calibration_cache() function be written? Could you please provide an example?
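
Not the author, but here is a minimal sketch of what read_calibration_cache() typically looks like. Note this uses the IInt8EntropyCalibrator2 class from the current TensorRT Python API rather than the TensorRT 3 Lite API the post was written against, and the cache filename is just a placeholder:

```python
import os

import tensorrt as trt


class Calibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, cache_file="calibration.cache"):
        super().__init__()
        self.cache_file = cache_file

    def read_calibration_cache(self):
        # TensorRT calls this before calibration. Returning the cached
        # scale table as bytes skips calibration entirely; returning None
        # tells TensorRT to calibrate from scratch.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        # Called once calibration finishes; persist the cache so later
        # engine builds can reuse it.
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

A complete calibrator also needs get_batch_size() and get_batch() to feed calibration data; see the sketch further down the thread.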

How much does inference accuracy drop when using INT8? Much of the AI research community is talking about scaling floating-point precision up for AI, not down.

Could you please clarify what getLabels() should return for semantic segmentation? Thanks.

getLabels() is only necessary for the LegacyCalibrator. You don't need the score function or ground truth when using the EntropyCalibrator.
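
To illustrate that point with a rough sketch (again using the current IInt8EntropyCalibrator2 Python API, not the TensorRT 3 Lite API; the `batches` argument is a placeholder, not something from the post): the entropy calibrator consumes only unlabeled input batches, with no score function or labels anywhere.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt


class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches):
        # batches: list of preprocessed NCHW numpy arrays, one per batch.
        super().__init__()
        self.iterator = iter(batches)
        self.batch_size = batches[0].shape[0]
        self.d_input = cuda.mem_alloc(batches[0].nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        # Only unlabeled inputs are needed -- no scores, no ground truth.
        try:
            batch = next(self.iterator)
        except StopIteration:
            return None  # no more data; calibration stops here
        cuda.memcpy_htod(self.d_input, np.ascontiguousarray(batch))
        return [int(self.d_input)]

    def read_calibration_cache(self):
        return None  # always calibrate fresh in this sketch

    def write_calibration_cache(self, cache):
        pass  # see the caching example earlier in the thread
```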

Thank you for the blog! May I ask, have you tried the trt.lite.Engine() method with the "calibrator" parameter to convert to INT8? I tried, but found that the method is no longer supported. How did you make it work? Would you mind sharing more details? Many thanks!
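
Not the author, but for what it's worth: trt.lite was dropped from later TensorRT releases, so on recent versions the usual route is the standard builder API, setting the INT8 flag and attaching a calibrator to the builder config. A rough sketch under those assumptions ("model.onnx" is a placeholder path, and `calibrator` is an instance like the EntropyCalibrator above):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder model path
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)  # request INT8 kernels
config.int8_calibrator = calibrator   # e.g. an EntropyCalibrator instance
engine_bytes = builder.build_serialized_network(network, config)
```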