Hi all, I want to know how the existing sample code can be modified to run inference with a batch size greater than 1.
In the existing sample code from the directory
/usr/src/tensorrt/samples/python/network_api_pytorch_mnist/sample.py and model.py
inference on the GPU is done with only one image of shape (784,), obtained by calling
img, expected_output = model.get_random_testcase()
This returns a tensor of shape (784,), where 784 = 28 * 28, the size of a single image.
How can I modify the existing code to run inference with a batch size greater than 1? Please let me know.
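For context, here is a rough sketch of how I imagine several test cases could be stacked into one (batch_size, 784) input before handing it to the engine. The get_random_testcase below is only a NumPy stand-in mimicking the sample's model API, and get_batch is a hypothetical helper, not part of the sample:

```python
import numpy as np

def get_random_testcase():
    # Stand-in for the sample's model.get_random_testcase():
    # returns one flattened 28x28 image and its expected label.
    img = np.random.rand(784).astype(np.float32)
    label = np.random.randint(0, 10)
    return img, label

def get_batch(batch_size):
    # Hypothetical helper: stack batch_size flattened images
    # into a single (batch_size, 784) array for batched inference.
    imgs, labels = zip(*(get_random_testcase() for _ in range(batch_size)))
    return np.stack(imgs), np.array(labels)

batch, labels = get_batch(8)
print(batch.shape)  # (8, 784)
```

My question is then what else needs to change (e.g. the engine's expected input shape / max batch size) so that an input like this is accepted.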
Thanks and Regards