Torch not compiled with CUDA enabled on Jetson Nano

I’ve got a Jetson Nano with JetPack 4.5.1, and torch.cuda.is_available() returns False. How can I solve this? Torch is 1.9.0.
Is there a command to install torch 1.9.0+cu, perhaps?
Thanks in advance.

You need to build it manually or use the pre-built binaries (or Docker images on NGC).

https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-9-0-now-available/

Compiling PyTorch manually for CUDA/cuDNN requires setting some build options. Usually, if you just pip install PyTorch, you only get a CPU version, and often, if some dependencies are not found, you do not even get a CPU-optimized version (e.g., for AVX/SSE-capable hosts).
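You can check which kind of wheel you actually ended up with from Python itself. A minimal sketch (the helper name `torch_cuda_info` is my own; the key point is that a CPU-only wheel reports `torch.version.cuda` as `None`):

```python
import importlib.util

def torch_cuda_info():
    """Report whether the installed torch wheel was built with CUDA support."""
    if importlib.util.find_spec("torch") is None:
        return {"installed": False, "cuda_build": None, "cuda_available": False}
    import torch
    return {
        "installed": True,
        "cuda_build": torch.version.cuda,        # None on a CPU-only wheel
        "cuda_available": torch.cuda.is_available(),
    }

print(torch_cuda_info())
```

If `cuda_build` is `None`, no driver or JetPack tweak will help; you need a different wheel (or a source build).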

The official PyTorch build pipeline uses MAGMA and other libraries to build optimized CPU versions. On amd64 you either use the official pre-built binaries or install via conda, which makes sure all those optimizations are enabled.
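For reference, a rough sketch of the kind of build options involved when compiling from source on the Nano. The exact versions and flags here are assumptions (check NVIDIA’s build instructions for your JetPack release); `5.3` is the compute capability of the Nano’s Maxwell GPU:

```shell
# Sketch only: typical environment variables for a CUDA-enabled source build.
export USE_CUDA=1
export USE_CUDNN=1
export TORCH_CUDA_ARCH_LIST="5.3"   # Jetson Nano GPU (Maxwell, sm_53)
export MAX_JOBS=2                   # limit parallel compile jobs on 4 GB RAM

git clone --recursive --branch v1.9.0 https://github.com/pytorch/pytorch
cd pytorch
pip3 install -r requirements.txt
python3 setup.py bdist_wheel        # resulting wheel lands in dist/
```

Expect a build like this to take many hours on the Nano itself, which is why the pre-built wheels are usually the better option.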

And, as simply as possible, what is the command?
A command line would also help the next person who runs into this. Thanks.

How do I compile PyTorch manually for CUDA/cuDNN, and which build options does that require?
What are these build options?
Could you please explain in detail how to do that, and what the proper steps are?
Please answer as soon as possible; I will be very thankful.

Do you need to build it manually, or are the NVIDIA pre-built binaries sufficient? I did build it myself once, as the required version was not available from NVIDIA back then and I needed to test a model for someone. But if 1.9.0 (the newest from NVIDIA) is sufficient, you can get it from the link I posted. Instructions are there too.

A solution? Yes. I downloaded JetPack 4.6 [L4T 32.6.1] and its PyTorch install is CUDA-enabled. Solved this way. But I have not been able to install TensorFlow; I’ll open a new topic for that issue. My Nano is the normal Nano with 4 GB of memory.