Some questions on TensorRT 2.1.2:
I have read Section 3.5 of the TensorRT 2.1.2 user guide (SampleInt8 - Calibration and 8-bit Inference), and I have some questions:
- What do the 1000 batch files (batch0 to batch999) contain?
- If there are 1000 batch files, how many images are inside each one? I guess 10000 images in total, since I built the batches from the test dataset.
- "The calibration dataset must be representative of input data at runtime" - to satisfy this requirement, how does TensorRT select representative input images when it only takes the first 500 images (batches 0-9) as the calibration dataset?
- If the first 10 batches of size 50 (500 images) are taken as the calibration dataset, does TensorRT then run inference only on the images outside the calibration dataset?
- The user guide mentions 1000 iterations to generate 1003 batch files - why is that? Why 1000? What would be affected if I ran only 1 iteration?
- What role does batch size play in TensorRT inference? I saw some presentations indicating that batch size is a major factor in inference-time improvement. Why?
- Can the sampleInt8 example be used for inference on CIFAR and IMAGENET?
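For context, here is roughly how I am generating my batch files. This is just a Python sketch; the binary layout (a header of four int32s N, C, H, W, followed by N*C*H*W float32 image values, then N float32 labels) is my assumption from reading the sample's BatchStream code, so please correct me if 2.1.2 expects something different:

```python
import os
import struct
import tempfile

def write_batch_file(path, shape, images, labels):
    """Write one calibration batch file.

    ASSUMED layout: four int32s (N, C, H, W), then N*C*H*W float32
    image values in NCHW order, then N float32 labels. This is my
    reading of sampleINT8's BatchStream, not a documented format.
    """
    n, c, h, w = shape
    assert len(images) == n * c * h * w
    assert len(labels) == n
    with open(path, "wb") as f:
        f.write(struct.pack("4i", n, c, h, w))              # dimensions header
        f.write(struct.pack("%df" % len(images), *images))  # image data
        f.write(struct.pack("%df" % len(labels), *labels))  # one label per image

# Tiny example: one batch of 2 single-channel 4x4 images (placeholder values)
shape = (2, 1, 4, 4)
images = [0.5] * (2 * 1 * 4 * 4)
labels = [3.0, 7.0]
path = os.path.join(tempfile.gettempdir(), "batch0")
write_batch_file(path, shape, images, labels)
```

In my real script I loop this over the whole test set in chunks of 50 to produce batch0 through batch999.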
Thanks in advance