Issue with CUDA driver trying to run Alpaca-LoRA on a Jetson Nano


I am trying to reproduce the steps at replicate[dot]com/blog/fine-tune-alpaca-with-lora in order to fine-tune and run a ChatGPT-like model. In this other repo, github[dot]com/tloen/alpaca-lora, I read that it is possible to run it on a Raspberry Pi, so why not try it on a Jetson Nano instead?

Following the steps, I am stuck at the weight-conversion script, where I get the following error:

Running ‘python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir unconverted-weights --model_size 7B --output_dir weights’ in Docker with the current directory mounted as a volume…
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.7 --pid=16965 /var/lib/docker/overlay2/39b33e1b5509878b89f2fab614d680d3e7d941511243b044fd2d377c0286525d/merged]
nvidia-container-cli: requirement error: unsatisfied condition: cuda >= 11.7: unknown.
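For what it's worth, that `cuda>=11.7` requirement usually comes from the container image itself: NVIDIA base images set an `NVIDIA_REQUIRE_CUDA` environment variable, and `nvidia-container-cli` refuses to start the container when the host driver stack cannot satisfy it. A quick way to confirm where the requirement comes from (the image name below is a placeholder, substitute whatever image the script actually runs):

```shell
# Inspect the image's environment variables and pull out the CUDA
# requirement. "cog-base" is a placeholder image name, not the real one.
docker inspect --format '{{.Config.Env}}' cog-base \
  | tr ' ' '\n' \
  | grep '^NVIDIA_REQUIRE_CUDA'
```

If that variable is present, the container will not start on a host whose driver reports an older CUDA version, no matter which toolkit you install inside the container.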

I then updated the CUDA toolkit to 12.1 following this thread, and after retrying I got the same error.

So I ran the deviceQuery sample to check whether everything was correct, but:

jetson@nano:~/cuda-samples/Samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting…

CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
→ CUDA driver version is insufficient for CUDA runtime version
Result = FAIL

After that, I googled the error, but it seems there is nothing I can do about the CUDA driver version itself on a Jetson Nano. As far as I understand, it is tied to JetPack.
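On a Jetson, the GPU driver ships as part of L4T/JetPack rather than as a separate driver package, so this runtime/driver mismatch typically means the installed CUDA toolkit is newer than the flashed JetPack supports. A quick sketch to check what is actually flashed, assuming the standard `/etc/nv_tegra_release` format:

```shell
# Print the L4T major release recorded by JetPack; R32.x corresponds to
# JetPack 4.x, whose bundled CUDA on the Nano is 10.2.
sed -n 's/^# R\([0-9]*\).*/\1/p' /etc/nv_tegra_release
```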

Is there anything I could do to sort this out?

FYI, I have nvidia-jetpack 4.6-b197 and CUDA Toolkit 12.1 installed.

Nanos don’t support CUDA 12. They’re limited to the CUDA release installed by JetPack/SDK Manager 4.x or earlier, which is CUDA 10. The CUDA builds supplied for most computers expect a GPU connected via PCI, but the Nano’s GPU is wired directly to the memory controller, so those drivers won’t work on a Nano, and the Nano reached end of life with regard to new features some time back. CUDA 12 would require JetPack/SDKM 5.x+, which is not compatible with a Nano.


Thanks for your response, it makes sense. What about updating to CUDA 11.x? I found some threads about it; I will give it a try.

CUDA 11 won’t work on a Nano either. You need a Xavier or newer; Orin is currently the newest.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.