Hi, I was analyzing the model.py and sample.py code from the directory
/usr/src/tensorrt/samples/python/network_api_pytorch_mnist
I want to know what type of data the statement below downloads. Is it the handwritten digit images, or some other kind of image? Please clarify.
self.train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('/tmp/mnist/data', train=True, download=True, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=self.batch_size,
    shuffle=True,
    num_workers=1,
    timeout=600)
OK Thank you.
Following up on this, I have another query. I want to measure the difference in inference time between running the model on the CPU only and running it on the GPU after conversion with TensorRT.
How can the existing code be modified to first measure the inference time on the CPU, and then on the GPU using TensorRT?
This is required for a research thesis I am submitting, whose purpose is to improve the inference speed of a pretrained model on NVIDIA devices.
Any guidance on this would be very helpful to me.
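To make the question concrete, this is the kind of timing harness I have in mind: warm up the model, then average wall-clock time over repeated calls, once with the CPU model and once with the TensorRT engine. The sketch below is only an outline of that idea; `dummy_infer`, the warm-up count, and the run count are placeholders of my own, not part of the sample code, and the real `infer` callable would be the PyTorch model call or the TensorRT execution.

```python
import time

def benchmark(infer, inputs, warmup=10, runs=100):
    """Average the latency of an inference callable.

    Warm-up iterations are executed first and excluded from timing,
    since the first calls often pay one-time setup costs.
    """
    for _ in range(warmup):
        infer(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        infer(inputs)
    elapsed = time.perf_counter() - start
    return elapsed / runs  # mean seconds per inference

# Placeholder standing in for the real model call
# (e.g. the PyTorch forward pass, or a TensorRT context execution).
def dummy_infer(x):
    return [v * 2 for v in x]

mean_latency = benchmark(dummy_infer, [1, 2, 3])
print(f"mean latency: {mean_latency * 1e6:.2f} microseconds")
```

Is this the right general approach, and what TensorRT calls should replace the placeholder for the GPU measurement? (Note that for GPU timing, synchronization before reading the clock would presumably also be needed.)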