CUDA in PyTorch fails to perform inference

Environment:
Jetson Orin Nano
Jetpack 5.1.1
torchvision 2.0.0

R35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023

Python 3.8.10
deepstream-app version 6.1.1
DeepStreamSDK 6.1.1
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.5
cuDNN Version: 8.6
libNVWarp360 Version: 2.0.1d3
Linux tensor 5.10.104-tegra #1 SMP PREEMPT Sun Mar 19 07:55:28 PDT 2023 aarch64 aarch64 aarch64 GNU/Linux

Issue:
When I try to upgrade torch to a CUDA-enabled build using
torch-2.0.0a0+8aa34602.nv23.03-cp38-cp38-linux_aarch64.whl
which is slightly different from the wheel named in the instructions online:

https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
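Since the wheel differs from the one in the instructions, one quick sanity check is that its filename tags match the interpreter and platform. A minimal sketch (the `wheel_tags` helper is mine for illustration, not part of any NVIDIA or pip tooling):

```python
def wheel_tags(filename):
    """Split a wheel filename into (distribution, version, python tag,
    abi tag, platform tag), per the PEP 427 naming convention:
    name-version(-build)?-pythontag-abitag-platformtag.whl
    """
    stem = filename[: -len(".whl")]
    parts = stem.split("-")
    # distribution and version come first; the last three fields are the tags
    return parts[0], parts[1], parts[-3], parts[-2], parts[-1]

dist, version, py_tag, abi_tag, plat_tag = wheel_tags(
    "torch-2.0.0a0+8aa34602.nv23.03-cp38-cp38-linux_aarch64.whl"
)
# cp38 must match Python 3.8.10, and the platform must be linux_aarch64
print(dist, version, py_tag, plat_tag)
```

Here the tags (cp38, linux_aarch64) do line up with Python 3.8.10 on aarch64, so the wheel itself is at least the right build target.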

torch recognizes the CUDA driver, that is:

torch.cuda.is_available()
returns True
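Note that `is_available()` alone can be misleading: a torch/CUDA mismatch can pass that check and still fail on the first real kernel launch. A minimal diagnostic sketch that also exercises the device (the `cuda_report` helper is mine; it degrades gracefully if torch is not importable):

```python
def cuda_report():
    """Collect a small torch/CUDA diagnostic as a dict."""
    info = {"torch_available": False}
    try:
        import torch  # assumes the NVIDIA wheel above is installed
    except ImportError:
        return info
    info["torch_available"] = True
    info["torch_version"] = torch.__version__
    info["cuda_built"] = torch.version.cuda        # CUDA version torch was built against, or None
    info["cuda_available"] = torch.cuda.is_available()
    if info["cuda_available"]:
        # Run an actual kernel: this is where a broken install typically errors out
        x = torch.ones(8, device="cuda")
        info["gpu_sum"] = float(x.sum().item())
    return info

print(cuda_report())
```

On a working install, `cuda_built` should report 11.4 (matching the JetPack runtime) and `gpu_sum` should come back as 8.0 without raising.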

but image processing in torchvision breaks, and inference no longer recognizes anything. Reinstalling torchvision 2.0.0 tosses out the CUDA upgrade and restores normal function, but then only the CPU is recognized.
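That symptom is consistent with torch and torchvision being from incompatible release series, since each torchvision minor version is built against one torch minor version. A minimal check against the commonly published pairings (the table below is an abbreviated assumption drawn from the pytorch/vision compatibility matrix, not exhaustive, and the helper is mine):

```python
# Assumption: abbreviated torch -> torchvision compatibility pairs,
# as commonly published in the pytorch/vision README. Not exhaustive.
TORCHVISION_FOR_TORCH = {
    "2.0": "0.15",
    "1.13": "0.14",
    "1.12": "0.13",
    "1.11": "0.12",
}

def expected_torchvision(torch_version):
    """Map a torch version string (e.g. '2.0.0a0+8aa34602.nv23.03')
    to its matching torchvision minor series, or None if unknown."""
    # Strip any local build suffix, then keep major.minor
    major_minor = ".".join(torch_version.split("+")[0].split(".")[:2])
    return TORCHVISION_FOR_TORCH.get(major_minor)

print(expected_torchvision("2.0.0a0+8aa34602.nv23.03"))  # 0.15
```

If this pairing holds, the torch 2.0.0 NVIDIA wheel would want a torchvision from the 0.15 series built against it, rather than a prebuilt package that pulls in its own CPU-only torch as a dependency.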