Issue when installing CUDA toolkit on TX2

First, I have already checked some previous topics from others, and I got the idea that I can check GPU usage on my TX2 with the command "sudo ~/tegrastats", and that GR3D means the GPU, but I still have no idea about xx%@yy: what do these numbers really mean?
Second, I want to use the GPU to run my code on the TX2, so I followed instructions from some other topics and downloaded the (linux->x86_64->ubuntu->16.04->runfile(local)) installer from
After running "sudo sh", it shows that the disk space is not enough… What should I do about this? Would it be possible to insert an additional SD card to solve this problem, or do I still need to do something else so that CUDA can be installed on the additional SD card?
All I want to do is run TensorFlow code on the GPU. Are the above steps necessary, or is there something else I need to do? Thanks!

I can’t answer the CUDA-specific stuff, but most everything related to this is installed in “/usr/local”. You could mount a hard drive or SD card there (make sure it is formatted as ext4…you may get unexpected results with the Windows VFAT type).

To see the current system disk space run “df -H -t ext4”. It is possible the system was flashed without taking advantage of all of the eMMC…if that is the case then you might not need an external disk (or the excess unused space could be mounted at “/usr/local”).
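For scripting, the same free-space check can be sketched in Python with the standard library (the ~2 GiB headroom threshold below is just an assumed figure for illustration, not an official CUDA requirement):

```python
# Sketch: check free space on the filesystem backing /usr/local before
# installing CUDA there. The local repo alone is ~1.2 GB, and the
# installed packages need more on top of that.
import shutil

def free_gib(path):
    """Return free space in GiB on the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    return usage.free / 2**30

if __name__ == "__main__":
    avail = free_gib("/usr/local")
    # 2 GiB is an assumed minimum, not an NVIDIA-documented number.
    print(f"Free space under /usr/local: {avail:.1f} GiB")
    if avail < 2.0:
        print("Probably not enough room for the CUDA packages here.")
```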

Note that normally Ubuntu repositories are on some remote network-accessible server. In the case of much of the CUDA software the CUDA repo package installs an entire repository in “/var”, e.g., “/var/cuda-repo-8-0-local/”. If you look at the files present there you’ll see many “.deb” files…total occupied space should be around 1.2 GB. I wouldn’t recommend it since apt update mechanisms might not like it, but once the packages are installed you could move this repository somewhere else (e.g., to an SD card) and only run apt update while that card is mounted. This could be more trouble than it is worth for the small amount of space it saves.


1. xx%@yy: xx indicates the GPU load, and yy is the current GPU clock frequency in MHz.
2. The x86-based CUDA toolkit can’t be used on Jetson.
Please install the aarch64 version of the CUDA toolkit from JetPack.


Hi AastaLLL,
Thank you!
For 1., I always see GR3D 0%@114 when there is no task running on my TX2, so 0% means that no GPU is being used right now, is that right? I am still confused: from the spec I found that the TX2 has 256 GPUs, so if I want to use all 256 GPUs as much as possible in some tasks, how can I check whether all of them are in use? Where can I monitor the usage of the 256 GPUs?

For 2., do you mean that I need to install a new OS from JetPack? I found that many topics said that the OS originally installed on the TX2 is not so good for development. Would it be necessary to install a new OS from JetPack?


JetPack can both flash and install extras after the flash. Without a more recent L4T (Ubuntu with NVIDIA hardware accelerated drivers on top of it) the newer packages can’t be installed. Reliability is also much better with a recent L4T.

If you don’t want the extra packages, then you can flash from the command line alone (a combination of the driver package plus the sample rootfs…JetPack uses these when flashing; JetPack itself just runs the driver package and does not understand flashing without it).


1. xx indicates the utilization of the GPU. The TX2 has only ONE GPU; 256 is the number of CUDA cores.

  • Build and run the deviceQuery sample to get more information about the platform:
    /usr/local/cuda-9.0/bin/cuda-install-samples-9.0.sh .
    cd NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery
    make && ./deviceQuery
  • Read this blog to get familiar with the GPU architecture

2. The installation approach differs between x86 and aarch64 machines:

  • x86:
    - Desktop GPU, e.g., GTX-1080, TITAN Xp, ...
    - Install CUDA from the package
  • aarch64 (Jetson):
    - Built-in GPU
    - Install CUDA from JetPack directly
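Once CUDA is installed from JetPack and a Jetson TensorFlow wheel is in place, a quick sanity check that TensorFlow actually sees the GPU can be sketched like this (assumes the TF 1.x-era device_lib API, which the JetPack-era wheels used; the function degrades to an empty list if TensorFlow is not installed):

```python
# Sketch: list the GPU devices visible to TensorFlow, if any.
def list_visible_gpus():
    """Return names of GPUs TensorFlow can see (empty list if none, or if
    TensorFlow itself is not installed on this machine)."""
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return []  # TensorFlow not installed
    return [d.name
            for d in device_lib.list_local_devices()
            if d.device_type == "GPU"]

if __name__ == "__main__":
    gpus = list_visible_gpus()
    print("GPUs visible to TensorFlow:", gpus or "none")
```

If this prints "none" on a TX2 with TensorFlow installed, the wheel was likely built without CUDA support (an x86 CPU-only wheel, for example).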