However, I cannot find the definition of ImageBatchStream in the Python API, so I don't know how to do the following steps. I also checked the samples, but could only find INT8 samples written in C++, and the BatchStream class is defined in a header file inside the samples.
So, can we do INT8 inference using the Python API? If we can, how should we build the data pipeline?
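To make the question concrete, the pipeline I have in mind would look roughly like this. This is only a sketch based on the C++ BatchStream idea and my reading of the TensorRT 5 Python bindings; the base class (IInt8EntropyCalibrator), the get_batch(names) signature, and the preloaded NumPy calibration array are assumptions that may need adjusting for a specific TensorRT release:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt


class PythonEntropyCalibrator(trt.IInt8EntropyCalibrator):
    """Streams preprocessed calibration batches to TensorRT (Python analogue of BatchStream)."""

    def __init__(self, calibration_data, batch_size, cache_file="calibration.cache"):
        super().__init__()
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.data = calibration_data  # assumed: float32 NumPy array of shape (N, C, H, W)
        self.current_index = 0
        # Device buffer large enough for one batch.
        self.device_input = cuda.mem_alloc(self.data[0].nbytes * batch_size)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        # Returning None tells TensorRT that the calibration data is exhausted.
        if self.current_index + self.batch_size > self.data.shape[0]:
            return None
        batch = np.ascontiguousarray(
            self.data[self.current_index:self.current_index + self.batch_size])
        cuda.memcpy_htod(self.device_input, batch)
        self.current_index += self.batch_size
        return [int(self.device_input)]  # one device pointer per network input

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except IOError:
            return None

    def write_calibration_cache(self, cache):
        # Straightforward when `cache` is a bytes-like object; see the
        # capsule problem described in my next post for 5.0.2.6.
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

Is something along these lines the intended way to do it, or is there an official Python INT8 sample I have missed?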
Thanks for your reply. I have written a program referring to https://devblogs.nvidia.com/int8-inference-autonomous-vehicles-tensorrt/. However, there is still a problem with the definition of Int8Calibrator::write_calibration_cache().
In the example, it accepts a parameter 'ptr' and converts it with int(ptr). However, in 5.0.2.6 this function receives the parameter as a capsule ('data: capsule'), and int(ptr) raises an error.
How can I fix this? I think the problem is caused by the API differences between TensorRT 3 and TensorRT 5.
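The only workaround I can think of is to pull the raw pointer out of the capsule with ctypes and copy the bytes myself, roughly like this. It is untested against 5.0.2.6, and it assumes a size/length argument is also passed to write_calibration_cache (the parameter names in the commented method are my guess from the error message):

```python
import ctypes

# CPython's capsule API, reached through ctypes; both functions exist in the
# standard interpreter. Assumption: TensorRT also passes the cache size.
ctypes.pythonapi.PyCapsule_GetName.restype = ctypes.c_char_p
ctypes.pythonapi.PyCapsule_GetName.argtypes = [ctypes.py_object]
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]


def capsule_to_bytes(capsule, size):
    """Copy `size` bytes from the raw pointer wrapped by a PyCapsule."""
    name = ctypes.pythonapi.PyCapsule_GetName(capsule)
    ptr = ctypes.pythonapi.PyCapsule_GetPointer(capsule, name)
    return ctypes.string_at(ptr, size)


# Inside the calibrator (hypothetical parameter names):
#     def write_calibration_cache(self, data, size):
#         with open(self.cache_file, "wb") as f:
#             f.write(capsule_to_bytes(data, size))
```

If write_calibration_cache in 5.0.2.6 does not receive a size, is there another way to get the cache length, or is moving to a release where the cache is passed as a plain buffer the only clean fix?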