CUDA in PyTorch fails to perform inference

Jetson Orin Nano
Jetpack 5.1.1
torchvision 2.0.0

R35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023

Python 3.8.10
deepstream-app version 6.1.1
DeepStreamSDK 6.1.1
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.5
cuDNN Version: 8.6
libNVWarp360 Version: 2.0.1d3
Linux tensor 5.10.104-tegra #1 SMP PREEMPT Sun Mar 19 07:55:28 PDT 2023 aarch64 aarch64 aarch64 GNU/Linux

When I try to upgrade torch to a CUDA-enabled build using the following command, which is slightly different from the instructions online:

torch recognizes the CUDA driver, that is:

returns True
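For reference, the check described above is presumably the standard `torch.cuda.is_available()` call; a minimal sketch that also prints the related version fields, guarded so it degrades gracefully when torch is not importable:

```python
# Sketch of the availability check, using the standard PyTorch API.
try:
    import torch
    TORCH_OK = True
except ImportError:
    torch = None
    TORCH_OK = False

if TORCH_OK:
    print("torch:", torch.__version__)
    print("built with CUDA:", torch.version.cuda)   # None on CPU-only wheels
    print("cuda available:", torch.cuda.is_available())
else:
    print("torch is not installed in this environment")
```

On a working CUDA build, `torch.version.cuda` should report a version (e.g. 11.4) rather than None, and `torch.cuda.is_available()` should print True.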

but it breaks image processing in torchvision, and inference fails to detect anything. Reinstalling torchvision 2.0.0 undoes the CUDA upgrade and restores normal function, but then only the CPU is recognized.
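I suspect the common Jetson failure mode here: the new CUDA-enabled torch wheel no longer matches the torch that torchvision was compiled against, which breaks torchvision's C++ ops, and on Jetson torchvision generally has to be rebuilt from source against the installed torch wheel. A hypothetical helper sketching the version-pairing sanity check; the pairing table below is an assumption and should be verified against the official torch/torchvision compatibility matrix:

```python
# Hypothetical pairing table: torch release -> expected torchvision minor series.
# These entries are assumptions; confirm against the official compatibility matrix.
COMPATIBLE = {
    "2.0.0": "0.15",
    "1.13.0": "0.14",
    "1.12.0": "0.13",
}

def versions_match(torch_version: str, torchvision_version: str) -> bool:
    """Return True when torchvision's series matches the installed torch release."""
    base = torch_version.split("+")[0]   # strip local tags like "+nv23.05"
    expected = COMPATIBLE.get(base)
    if expected is None:
        return False
    return torchvision_version.startswith(expected)

print(versions_match("2.0.0+nv23.05", "0.15.1"))  # True
print(versions_match("2.0.0", "0.14.1"))          # False: mismatched pair
```

If the installed pair fails a check like this, rebuilding torchvision from source against the current torch wheel is the usual fix on Jetson.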