I’m trying to make my TensorRT object detection program infer two images at the same time (batch inference). The inferred results from TensorRT are stored in an output array. When batchSize == 1, the program works well; however, when I set batchSize == 2, I only get correct results for the first image, and the results for the second image in the output array are all zeros. The following is a detailed description of my output.
If I set batchSize == 1, the size of the output array is 136459 and I can get the predicted bounding boxes. When I set batchSize == 2, the size of the output array is 272918, and I can get the predicted results of the first image at indices 0~136458. But the results of the second image, which should start at index 136459 of the array, are all zeros.
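To make the layout concrete, here is a minimal sketch of how I pack the two images and where I expect each image's results. The size constants and the helper name are placeholders for this post, not my exact code; 136459 is the per-image output size I observe:

[code]
#include <cstring>
#include <vector>

// Placeholder input sizes for illustration.
constexpr int kInputH = 512;
constexpr int kInputW = 512;
constexpr int kInputSizePerImage  = 3 * kInputH * kInputW;  // CHW floats per image
constexpr int kOutputSizePerImage = 136459;                 // floats per image

// Pack the preprocessed images back-to-back into one contiguous host buffer.
std::vector<float> packBatch(const std::vector<std::vector<float>>& images)
{
    std::vector<float> input(images.size() * kInputSizePerImage);
    for (size_t b = 0; b < images.size(); ++b)
        std::memcpy(input.data() + b * kInputSizePerImage,
                    images[b].data(), kInputSizePerImage * sizeof(float));
    return input;
}

// After inference, I read the results of image b from:
//   output[b * kOutputSizePerImage] ... output[(b + 1) * kOutputSizePerImage - 1]
[/code]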
Snapshots of the output array contents:
[url]https://drive.google.com/open?id=1ZOV4SInQVzLCG_n_-vwoVkeqrVzZfckX[/url] (results of the first image)
[url]https://drive.google.com/open?id=18YwUggoTjaEsS9WjAJ7QySCCp0heZna9[/url] (results of the second image)
Additional information:
- I’ve tried both mTrtContext->execute() and mTrtContext->enqueue() (see the sketch after this list), and the inferred results are the same.
- Inference time if batchSize == 1: 18.5816 ms
- Inference time if batchSize == 2: 35.7012 ms
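For reference, this is roughly how I call the two variants. mTrtContext is my nvinfer1::IExecutionContext*, and buffers holds the device pointers for the input and output bindings; the wrapper functions below are simplified for this post:

[code]
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Synchronous variant.
bool runSync(nvinfer1::IExecutionContext* context, void** buffers, int batchSize)
{
    return context->execute(batchSize, buffers);
}

// Asynchronous variant; I synchronize the stream before reading the output.
bool runAsync(nvinfer1::IExecutionContext* context, void** buffers,
              int batchSize, cudaStream_t stream)
{
    bool ok = context->enqueue(batchSize, buffers, stream, nullptr);
    cudaStreamSynchronize(stream);
    return ok;
}
[/code]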
I want to ask:
- If my input array containing two images in a batch is packed correctly, should the output after inference contain bounding boxes for both images?
- Are the results inferred with mTrtContext->execute() or mTrtContext->enqueue() guaranteed to be correct for a batched input?
- Is there a size limit on the output when using batch inference in TensorRT? In other words, is an output array of size 272918 too big for TensorRT?