Want to know what type of data the statement torch.utils.data.DataLoader downloads

Hi, I was analyzing the model.py and sample.py code from the directory
/usr/src/tensorrt/samples/python/network_api_pytorch_mnist

I want to know what type of data the statement below downloads. Is it handwritten digits, or some other kind of image? Please clarify.
self.train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('/tmp/mnist/data', train=True, download=True, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=self.batch_size,
    shuffle=True,
    num_workers=1,
    timeout=600)

Hi,

Yes, it downloads the MNIST handwritten-digit dataset, which is a PyTorch (torchvision) built-in dataset.
Please find the details below:

https://pytorch.org/vision/main/datasets.html

MNIST(root[, train, transform, …]) MNIST Dataset.
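In other words, each sample is a 28x28 grayscale image of a handwritten digit (0-9). The `Normalize((0.1307,), (0.3081,))` step standardizes pixel values using MNIST's mean and standard deviation. A minimal pure-Python sketch (not part of the sample) of what the pipeline computes per pixel:

```python
# Sketch of the ToTensor + Normalize((0.1307,), (0.3081,)) pipeline above.
# ToTensor() scales raw pixel values from [0, 255] to [0.0, 1.0];
# Normalize then applies (x - mean) / std with MNIST's dataset statistics.

MEAN, STD = 0.1307, 0.3081

def normalize_pixel(raw_value):
    """Mimic ToTensor + Normalize for a single grayscale pixel (0-255)."""
    scaled = raw_value / 255.0       # ToTensor: [0, 255] -> [0.0, 1.0]
    return (scaled - MEAN) / STD     # Normalize: roughly zero-mean, unit-variance

# A black background pixel (0) and a fully bright stroke pixel (255):
print(normalize_pixel(0), normalize_pixel(255))
```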

Thanks.

Thank you for providing the information.

I have one more question w.r.t. this file, sample.py:
how do I measure the inference time in this file?

Thanks
Nagaraj Trivedi

Hi,

You can measure the elapsed time of the below API call:

https://github.com/NVIDIA/TensorRT/blob/release/8.2/samples/python/network_api_pytorch_mnist/sample.py#L112

For example:

import time
...

def main():
    ...
    start_time = time.time()
    [output] = common.do_inference_v2(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
    print("inference time: " + str(time.time() - start_time))
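A single measurement can be noisy. A common refinement (a sketch, not part of the sample) is to add warm-up runs and average over several iterations, using `time.perf_counter` for better resolution; `infer_fn` below is a placeholder for whatever call you want to time, e.g. a lambda wrapping `common.do_inference_v2(...)`:

```python
import time

def time_inference(infer_fn, warmup=5, iters=50):
    """Average the wall-clock time of infer_fn over several runs.

    infer_fn is a hypothetical placeholder for the call being timed,
    e.g. lambda: common.do_inference_v2(context, bindings=bindings,
                                        inputs=inputs, outputs=outputs,
                                        stream=stream)
    """
    for _ in range(warmup):          # warm-up runs are excluded from timing
        infer_fn()
    start = time.perf_counter()      # higher resolution than time.time()
    for _ in range(iters):
        infer_fn()
    return (time.perf_counter() - start) / iters

# Example with a dummy workload standing in for real inference:
avg = time_inference(lambda: sum(i * i for i in range(1000)))
print("average time per call: {:.6f} s".format(avg))
```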

Thanks.

OK Thank you.
With respect to this I have another query. I want to determine the difference in the inference time between the model that runs only on the CPU and in the GPU using conversion to the TensorRT.
How can this existing code can be modified to first measure the inference time on the CPU and then on the GPU using tensorrt.

This is required for the research thesis I am submitting. Its purpose is to improve the inference speed of a pretrained model on the NVIDIA devices.
If you provide this information then it will be helpful to me.
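In case it helps other readers, one way to structure such a comparison is to time both paths with the same harness and report the speedup. This is only a sketch: `run_cpu_inference` and `run_trt_inference` are hypothetical placeholders (the first would wrap the PyTorch forward pass on the CPU, the second the `common.do_inference_v2` call), simulated here with dummy workloads:

```python
import time

def measure(fn, iters=20):
    """Average wall-clock seconds per call of fn over iters runs."""
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Hypothetical placeholders -- substitute the real inference calls:
#   run_cpu_inference: e.g. lambda: network(data) with the PyTorch model on CPU
#   run_trt_inference: e.g. lambda: common.do_inference_v2(context, ...)
def run_cpu_inference():
    sum(i * i for i in range(20000))   # dummy stand-in for a CPU forward pass

def run_trt_inference():
    sum(i * i for i in range(2000))    # dummy stand-in for a TensorRT execution

cpu_time = measure(run_cpu_inference)
trt_time = measure(run_trt_inference)
print("CPU: {:.6f} s, TensorRT: {:.6f} s, speedup: {:.1f}x".format(
    cpu_time, trt_time, cpu_time / trt_time))
```

The key point is to use one timing harness for both paths, so the comparison is not skewed by different measurement overheads.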

It is already present. Kindly ignore my previous post.

def test(epoch):
    self.network.eval()
    test_loss = 0
    correct = 0
    for data, target in self.test_loader:
        with torch.no_grad():
            data, target = Variable(data), Variable(target)
            output = self.network(data)
            test_loss += F.nll_loss(output, target).data.item()
            pred = output.data.max(1)[1]
            correct += pred.eq(target.data).cpu().sum()
    test_loss /= len(self.test_loader)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(self.test_loader.dataset),
        100. * correct / len(self.test_loader.dataset)))
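The accuracy bookkeeping in that loop (`output.data.max(1)[1]` followed by `pred.eq(target)`) is just an arg-max over the per-class scores and a count of matches. A minimal pure-Python sketch of the same logic, with made-up log-probabilities:

```python
def argmax(scores):
    """Index of the largest score -- what output.data.max(1)[1] computes per row."""
    best = 0
    for i in range(1, len(scores)):
        if scores[i] > scores[best]:
            best = i
    return best

# One batch of hypothetical per-class log-probabilities for 3 images, 10 classes:
outputs = [
    [-9.0, -0.1, -8.0, -7.5, -6.0, -5.0, -4.0, -3.0, -2.0, -8.5],  # predicts 1
    [-0.2, -9.0, -8.0, -7.5, -6.0, -5.0, -4.0, -3.0, -2.0, -8.5],  # predicts 0
    [-9.0, -8.0, -0.3, -7.5, -6.0, -5.0, -4.0, -3.0, -2.0, -8.5],  # predicts 2
]
targets = [1, 0, 7]

preds = [argmax(row) for row in outputs]                  # like pred = output.data.max(1)[1]
correct = sum(1 for p, t in zip(preds, targets) if p == t)  # like pred.eq(target).sum()
print("accuracy: {}/{}".format(correct, len(targets)))    # 2 of 3 match
```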

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.