How to install CUDA 10.0 on a Jetson Nano separately?

That should exist on the default rootfs provided. If it doesn’t, something is probably very wrong and you may wish to just reflash JetPack 4.3 on a new SD card. SDK Manager is meant to be installed on a Linux desktop and cannot be installed on the Nano itself (however, there is no longer any real need for it at all, at least with the dev kit).

I just used another microSD card and downloaded the Jetpack 4.3 image from https://developer.nvidia.com/embedded/jetpack to see if that would make a difference - it didn’t.

What does this command show when you run it on your Nano?

apt list --installed | grep cuda

How about this one?

ls /usr/local/cuda
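
If it helps, here’s a small combined check; it assumes the standard JetPack layout under /usr/local, which may differ on other images:

```shell
# Sketch: report any CUDA toolkits under /usr/local (standard JetPack layout).
# The paths are assumptions; adjust them if your image installs elsewhere.
cuda_dirs=$(ls -d /usr/local/cuda-* 2>/dev/null || true)
if [ -n "$cuda_dirs" ]; then
  echo "found toolkits: $cuda_dirs"
  # /usr/local/cuda is normally a symlink to the default toolkit.
  /usr/local/cuda/bin/nvcc --version 2>/dev/null | grep release || true
else
  echo "no CUDA toolkit found under /usr/local"
fi
checked=yes
```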

It shows CUDA 10.0.326-1 installed, automatic (unknown, stable, now). I must mention that I’m not new, but VERY rusty with programming. Thanks for helping me.

No worries. Glad to help. That’s what the forum is for. Feel free to ask for any examples in a new thread or search the forum with Google.

Hello…

What about cuDNN?

I flashed the image onto the Jetson Nano with balenaEtcher. It shows CUDA 10.0, but cuDNN 7.5.
I want cuDNN 7.6 instead. How can I upgrade it?

Thanks in advance
Abhi J K

Same answer: they can’t be upgraded individually, only through JetPack with the versions it includes.

I currently have CUDA 10.2, so what exactly do you suggest for downgrading it?

My understanding of what @kayccc is saying is the versions are tied to the JetPack. I’m not aware of any way currently to downgrade the JetPack other than backing up and reflashing, but someone else might suggest a possible path.

I also flashed the same image onto my Nano via SD, and it likewise installed CUDA 10.2.
I installed CUDA 10.0.0 via apt install as @mdegans pointed out. That was enough for me to successfully use the version of PyTorch installed by the install_torch.sh script that comes with the SD image from the “Getting Started” page. So, while you can’t “downgrade”, you can have 10.0.0 installed in parallel with 10.2.
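
For anyone trying the same, the way I select between the parallel toolkits per shell is plain PATH/LD_LIBRARY_PATH juggling; a sketch, assuming 10.0 landed in the default /usr/local/cuda-10.0 prefix:

```shell
# Point this shell at the side-installed 10.0 toolkit. The /usr/local/cuda
# symlink typically keeps pointing at the default (10.2) toolkit, so nothing
# changes system-wide. /usr/local/cuda-10.0 is the assumed install prefix.
export CUDA_HOME=/usr/local/cuda-10.0
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Open a fresh shell (or re-source your profile) to drop back to 10.2.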

It would be useful and save time for new users if SDKs that use CUDA were built to use the latest version of CUDA supplied by the flashable image.

It may work, but be aware that the configuration is unsupported for downgrading. I did notice it’s possible to have parallel versions of some libraries since some offline apt repos were still in the lists on my Xavier, but I expect some side effects.

If you notice any unusual behavior, I would back up and start from scratch. I really don’t think NVIDIA tested it like this. I was watching the other thread, and @dusty_nv is right that PyTorch should be building. It may be that there is something else going on. A reflash never hurts to rule things out.

I’ve reflashed twice just to make sure that I wasn’t missing something and didn’t pull a package down from a different repo. I’ve retraced my steps through the “Getting Started” and “Hello AI World” tutorials, and no, PyTorch fails to install from the installer script run here:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md

It looks like the torch-1.1.0-cp36-cp36m-linux_aarch64.whl that gets downloaded was not built against CUDA 10.2 and is failing while looking for 10.0.0.
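
A quick way to confirm which CUDA a given PyTorch build targets is torch.version.cuda; here is a sketch that also degrades gracefully when torch isn’t importable:

```python
# Sketch: report the CUDA version a PyTorch install was built against.
# None here means torch is not installed in this environment; per the
# mismatch above, a wheel built for 10.0 reports "10.0" even on a
# system where only 10.2 is present.
import importlib.util


def torch_cuda_version():
    """Return torch's compiled CUDA version string, or None without torch."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return torch.version.cuda


print("torch built for CUDA:", torch_cuda_version())
```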

@dusty_nv you may want to have a look at this.

It looks like the script you refer to might need to be updated. Try downloading 1.5 from here for the latest JetPack:

To be honest I don’t know a lot about PyTorch, but my understanding is that 1.5 targets the latest JetPack with CUDA 10.2.

Do you need a specific version? If so it might be worth it to reflash with 4.3, and only upgrade when the thing you need is ready for 4.4. Upgrading should work without a problem, but downgrading… well, it’s untested. Apt itself works pretty well, but the packages themselves probably weren’t designed for this.

Hi all, the install_pytorch.sh script has been updated to install the PyTorch wheels for CUDA 10.2, sorry about that. Please refer to this post for more information: https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048/291

It is not recommended or necessary to downgrade to CUDA 10.0, please stay with CUDA 10.2.

Hi,

When I try to install CUDA 10.0 it tells me the following:

Reading package lists… Done
Building dependency tree
Reading state information… Done
E: Unable to locate package cuda-core-10-0

I have Jetson Nano Jetpack 4.4. Is CUDA 10.0 not supported?

Thank you

Svetlana


The cuda version on JetPack 4.4 is 10.2. There might be a way to install an old version but it would likely break more things than anything else. Is there a reason you can’t use 10.2?
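
Before going down that road, it’s worth checking whether the configured apt repos even advertise any 10.0 packages; a sketch (package names and repo contents vary by JetPack release, and the check is guarded so it degrades on non-apt systems):

```shell
# Sketch: list CUDA 10.0 package names visible to apt, if any.
if command -v apt-cache >/dev/null 2>&1; then
  matches=$(apt-cache search cuda 2>/dev/null | grep '10-0' || true)
  if [ -n "$matches" ]; then
    echo "$matches"
  else
    echo "no CUDA 10.0 packages in the configured repos"
  fi
else
  echo "apt-cache not available on this system"
fi
searched=yes
```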

Thank you for your reply.

I am trying to convert a model that was developed in TensorFlow 2.0 with the Keras backend (tf.keras). Our model is in the SavedModel format and I am using tf2onnx to convert it to ONNX. When I try to do that I get the following segmentation fault:

`python3 -m tf2onnx.convert --saved-model . --opset 12 --output model.onnx --fold_const `

2020-06-30 14:00:40.443942: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.2/lib64
2020-06-30 14:00:40.444009: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-06-30 14:00:40.444227: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.2/lib64
2020-06-30 14:00:40.444265: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Segmentation fault (core dumped)
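
For what it’s worth, the warnings at the top boil down to TensorFlow failing to dlopen libcudart.so.10.0; you can reproduce that check directly (a sketch; the library names are taken from the log above):

```python
# Sketch: mimic TensorFlow's runtime check by trying to dlopen the
# versioned CUDA runtime library, as in the dso_loader warnings above.
import ctypes


def cudart_loadable(version):
    """True if libcudart.so.<version> can be loaded from the library path."""
    try:
        ctypes.CDLL("libcudart.so." + version)
        return True
    except OSError:
        return False


for v in ("10.0", "10.2"):
    print("libcudart.so." + v, "loadable:", cudart_loadable(v))
```

A TF 2.0 wheel built against CUDA 10.0 needs libcudart.so.10.0 on LD_LIBRARY_PATH, which a JetPack 4.4 board with only 10.2 can’t provide.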

When I tried using TensorFlow 1.15 to do the conversion (which I probably shouldn’t), I get the following error:

2020-06-29 16:09:21.083854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 570 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
Traceback (most recent call last):
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from fe_0_conv0/kernel:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tf2onnx/convert.py", line 169, in <module>
    main()
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tf2onnx/convert.py", line 142, in main
    tf.import_graph_def(graph_def, name='')
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 505, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: Input 1 of node StatefulPartitionedCall was passed float from fe_0_conv0/kernel:0 incompatible with expected resource.

So I am running out of choices converting the model to ONNX…

Thanks again

Svetlana

Hello,

Same problem with the Jetson Nano and CUDA. I need CUDA 10.0, but 10.2 is installed with JetPack 4.4.
I’ve tried to install it with sudo apt-get install cuda-core-10-0, but got this error:
E: Unable to locate package cuda-core-10-0

I’ve also tried installing it from the CUDA downloads page, but it seems there is no target for the aarch64 architecture.
So how do we install cuda 10.0 for Nano? Anyone please help!

Thanks,
Bence

@espressobot, how exactly did you install CUDA 10.0.0? Would you please help me?

Thanks,
Bence

The JetCard system configuration helped me!