Can't use GPU to accelerate inference of ResNet model

@814928072 for Jetson Nano, you should downgrade to PyTorch 1.10, as that was the last PyTorch version to officially support Python 3.6. You can find the wheel in this topic: PyTorch for Jetson
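
Once the 1.10 wheel is installed, a quick sanity check (generic PyTorch calls, not something from the linked topic) confirms the build can actually see the Nano's GPU:

```python
import torch

print(torch.__version__)              # should report a 1.10.x build
print(torch.cuda.is_available())      # True only if the wheel was built with CUDA
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # the Nano's integrated GPU
```

If `torch.cuda.is_available()` returns False, the wheel is a CPU-only build (for example one installed from PyPI rather than the Jetson wheel from that topic), which would explain inference not running on the GPU.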

If you follow the PyTorch part of the Hello AI World tutorial from jetson-inference, it covers training the model, exporting it to ONNX, and then running it with TensorRT: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md
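
For the export step, a rough sketch looks like the following (this is not the tutorial's own onnx_export.py; the ResNet-18 variant, input size, and opset version are assumptions you would adapt to your trained model):

```python
import torch
import torchvision.models as models

# Load a ResNet and put it in inference mode before exporting
model = models.resnet18(pretrained=True)
model.eval()

# Dummy input defining the shape TensorRT will see (1x3x224x224 assumed here)
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input_0"],
    output_names=["output_0"],
    opset_version=11,
)
```

The resulting .onnx file can then be loaded by the jetson-inference imagenet tool (or TensorRT's trtexec) to build and run a TensorRT engine on the GPU.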