How to install CUDA 10.0 on Jetson Nano separately?

Hi,

When I try to install CUDA 10.0 it tells me the following:

Reading package lists… Done
Building dependency tree
Reading state information… Done
E: Unable to locate package cuda-core-10-0

I have a Jetson Nano with JetPack 4.4. Is CUDA 10.0 not supported?

Thank you

Svetlana


The CUDA version on JetPack 4.4 is 10.2. There might be a way to install an older version, but it would likely break more than it fixes. Is there a reason you can’t use 10.2?
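A quick way to confirm which toolkit JetPack actually installed (paths assumed from a default JetPack install, where `/usr/local/cuda` is a symlink to the versioned directory and CUDA 10.x ships a `version.txt`):

```shell
# Print the full version string of the installed CUDA toolkit
cat /usr/local/cuda/version.txt
# Extract just major.minor, e.g. "10.2"
grep -o '[0-9]\+\.[0-9]\+' /usr/local/cuda/version.txt | head -n 1
```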

Thank you for your reply.

I am trying to convert a model that was developed in TensorFlow 2.0 with the Keras backend (tf.keras). Our model is in the SavedModel format, and I am using tf2onnx to convert it to ONNX. When I try to do that I get the following segmentation fault:

`python3 -m tf2onnx.convert --saved-model . --opset 12 --output model.onnx --fold_const`

2020-06-30 14:00:40.443942: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.2/lib64
2020-06-30 14:00:40.444009: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-06-30 14:00:40.444227: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.2/lib64
2020-06-30 14:00:40.444265: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Segmentation fault (core dumped)
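The cudart warnings above indicate the pip TensorFlow wheel was built against CUDA 10.0, while JetPack 4.4 ships 10.2; the cleanest fix is usually an NVIDIA-built TensorFlow wheel for JetPack 4.4 rather than a generic pip wheel. An unsupported workaround that is sometimes suggested is to alias the 10.2 runtime under the soname the wheel asks for (a sketch only; CUDA minor versions are not ABI-compatible, so this can still crash):

```shell
# Unsupported workaround sketch: give the CUDA 10.2 runtime the
# libcudart.so.10.0 soname the wheel dlopen()s. May still misbehave,
# since CUDA 10.0 and 10.2 are not ABI-compatible.
sudo ln -sf /usr/local/cuda-10.2/lib64/libcudart.so.10.2 \
            /usr/local/cuda-10.2/lib64/libcudart.so.10.0
sudo ldconfig
```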

When I tried using TensorFlow 1.15 to do the conversion (which I probably shouldn’t), I get the following error:

2020-06-29 16:09:21.083854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 570 MB memory) → physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
Traceback (most recent call last):
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from fe_0_conv0/kernel:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tf2onnx/convert.py", line 169, in <module>
    main()
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tf2onnx/convert.py", line 142, in main
    tf.import_graph_def(graph_def, name='')
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 505, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: Input 1 of node StatefulPartitionedCall was passed float from fe_0_conv0/kernel:0 incompatible with expected resource.

So I am running out of options for converting the model to ONNX…
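Since the cudart warnings suggest the GPU path is involved in the TF2 segfault, one low-risk experiment (just a sketch, reusing the same command from earlier) is to hide the GPU so the TF2 conversion runs on CPU only and cannot touch the mismatched CUDA libraries:

```shell
# Hide all CUDA devices from TensorFlow for this one command, so the
# conversion path never loads the missing libcudart.so.10.0.
CUDA_VISIBLE_DEVICES="" python3 -m tf2onnx.convert \
    --saved-model . --opset 12 --output model.onnx --fold_const
```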

Thanks again

Svetlana

Hello,

Same problem here with the Jetson Nano and CUDA. I need CUDA 10.0, but 10.2 is installed with JetPack 4.4.
I’ve tried to install it with sudo apt-get install cuda-core-10-0, but got this error:
E: Unable to locate package cuda-core-10-0

I’ve also tried to install it via
https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux
, but it seems there is no target for the aarch64 architecture.
So how do we install CUDA 10.0 on the Nano? Can anyone please help!
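You can confirm what the JetPack apt repository actually carries (a sketch; the `cuda-*` package naming is assumed from the JetPack repos, where 4.4 only publishes the 10-2 variants, which is why cuda-core-10-0 cannot be located):

```shell
# List every CUDA package the configured apt repositories know about;
# on JetPack 4.4 only the *-10-2 packages should appear.
apt-cache search --names-only '^cuda-' | sort
```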

Thanks,
Bence


@espressobot, how exactly did you install CUDA 10.0? Would you please help me?

Thanks,
Bence

The JetCard system configuration helped me!

Hi,
I currently have JP43 and that came with Cuda10.2 on Nano. Now i am trying to run the carter example of Isaac platform and that gives me error regarding the libnppicc.so. Cuda 10.2 has libnppicc.so.10. The example seems to be needing libnppicc.so.10.0, which i am assuming is in Cuda10.0. I am also getting error of undefined symbol IsaacGatherComponentInfo from libperception_module.so. In order to resolve this i am trying other version of these libraries. So I am installed Cauda 11 and Cuda 10.0 on my host and Nano/tx. I also am facing error after installing the cuda10.0 iwth regards to the fact that it has no lib64 folder itself. I was able to install cuda10 for amd64 on the host using the local run file of cuda10 installation. But for arm64 the only way I was able to get the debian package was through using the sdk manager. But even after installing that using dpkg , I still get issue with regards to lib64 not being present in cuda10.0. See attached text file.log_cuda10.txt (1.5 KB)