TensorRT: dumping the weights of an INT8 engine

With reference to https://devblogs.nvidia.com/int8-inference-autonomous-vehicles-tensorrt/, we managed to use the calibrator from the Python interface to convert the MNIST caffemodel to an INT8 inference engine. We can dump the FP32 weights from the original caffemodel through the protobuf utility. Is there a way to dump the weights of the INT8 engine as well?
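For reference, here is a minimal sketch of the protobuf-based FP32 dump mentioned above. It assumes `caffe_pb2` has been generated from `caffe.proto` with `protoc`, and the filename `mnist.caffemodel` is just a placeholder:

```python
import numpy as np
import caffe_pb2  # generated via: protoc --python_out=. caffe.proto

# Parse the serialized caffemodel into a NetParameter message.
net = caffe_pb2.NetParameter()
with open("mnist.caffemodel", "rb") as f:
    net.ParseFromString(f.read())

# Each layer stores its weights/biases as repeated float blobs.
# (Use net.layers instead of net.layer for legacy-format models.)
for layer in net.layer:
    for i, blob in enumerate(layer.blobs):
        weights = np.asarray(blob.data, dtype=np.float32)
        print(layer.name, layer.type, "blob", i, "count:", weights.size)
```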
