Hello all,
I want to deploy a deep learning project, originally developed with Anaconda on a desktop GPU, to my Jetson Nano 4GB. The project uses VGG19, and its benchmark states that running inference with that network on the Nano is feasible. I am able to run the project on the CPU without any errors and I get good results too. But my problem is with one line of the project, which says
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if device == None else device
I changed it to
self.device = torch.device("cuda")
because when I don't change it, the project either runs on the CPU (which is not what I want) or raises AssertionError: Torch not compiled with CUDA enabled, even though I installed the CUDA-enabled version.
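For what it's worth, here is a sketch of how that device-selection line could fail loudly instead of silently falling back to the CPU; the helper name `pick_device` is my own, and it assumes the same optional `device` argument as the original code:

```python
import torch

def pick_device(device=None):
    # Honor an explicitly supplied device first.
    if device is not None:
        return torch.device(device)
    # Otherwise require CUDA rather than silently falling back to the CPU,
    # so a CPU-only PyTorch build is caught immediately.
    if not torch.cuda.is_available():
        raise RuntimeError("CUDA requested but not available in this PyTorch build")
    return torch.device("cuda:0")
```

With this, a CPU-only wheel produces a clear RuntimeError at startup instead of quietly running inference on the CPU.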
My system info:
JetPack 4.6 (L4T 32.6.1)
PyTorch 1.7.1
Note: I have run some sample CUDA deep learning projects on the same card and they work fine on the GPU, so CUDA is definitely installed with the JetPack image. But since the project I want to run was built with Anaconda, and Anaconda manages its own environment and build files, could that be why this project can't reach CUDA on my Nano?
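For reference, this is the quick check I use to see whether the installed PyTorch wheel was actually built with CUDA support (it only assumes a working Python environment with torch importable):

```python
import torch

# Report whether this PyTorch build includes CUDA support
# and whether it can actually reach the GPU at runtime.
print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)        # None for a CPU-only wheel
print("CUDA available:", torch.cuda.is_available())  # False if the GPU/runtime is unreachable

if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
```

If `torch.version.cuda` prints `None`, the wheel itself is CPU-only and no amount of `torch.device("cuda")` will help; the fix is installing the CUDA-enabled PyTorch build for the Jetson.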