nvidia-smi installation on Jetson TX2

How can I install the NVIDIA driver on a Jetson TX2 board running Ubuntu 18.04 (ARM64 architecture)? I am unable to install the packages, so I cannot use the nvidia-smi utility or post the GPU status to Kubernetes nodes.

Please help with this.

Hi venkatashivak, nvidia-smi isn’t supported on Jetson platforms, and the GPU driver already comes bundled with JetPack-L4T. The nvidia-driver package won’t work here - it’s the driver for discrete PCIe GPUs, whereas the Jetson’s integrated GPU uses a userspace driver provided by L4T.

To query GPU status, I would recommend checking the tegrastats application or jtop.
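As a rough illustration of that suggestion, tegrastats prints one status line per interval, and the GPU load appears in the GR3D_FREQ field. The snippet below parses a sample line; the sample text is illustrative (field layout can vary between L4T releases), so treat it as a sketch rather than a guaranteed format:

```shell
# Extract the GPU load (GR3D_FREQ) from one tegrastats-style line.
# On a real board you would pipe live output, e.g.: tegrastats | while read line; do ...; done
line='RAM 2143/7852MB (lfb 1154x4MB) CPU [2%@345,off,off,1%@345,1%@345,2%@345] GR3D_FREQ 57%@1122'
gpu_load=$(echo "$line" | sed -n 's/.*GR3D_FREQ \([0-9]*\)%.*/\1/p')
echo "GPU load: ${gpu_load}%"
```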


Hi dusty,

Thanks for the information.
When I try to run a sample Docker container on the Jetson TX2 board, I am facing an NVML error. Please find the log below.

root@demo-desktop:~# docker run --security-opt=no-new-privileges --cap-drop=ALL --network=none -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.0.0-beta4

2020/02/06 05:12:20 Loading NVML
2020/02/06 05:12:20 Failed to initialize NVML: could not load NVML library.
2020/02/06 05:12:20 If this is a GPU node, did you set the docker default runtime to nvidia?
2020/02/06 05:12:20 You can check the prerequisites at: https://github.com/NVIDIA/k8s-device-plugin#prerequisites
2020/02/06 05:12:20 You can learn how to set the runtime at: https://github.com/NVIDIA/k8s-device-plugin#quick-start
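For reference, the log above asks whether the Docker default runtime is set to nvidia. On Jetson that is configured in /etc/docker/daemon.json; the sketch below writes the commonly documented configuration to a temporary path for illustration (the exact runtime path may differ by JetPack release, and Docker must be restarted after editing the real file):

```shell
# Illustrative daemon.json making "nvidia" the default Docker runtime.
# Written to /tmp here for demonstration; the real file is /etc/docker/daemon.json,
# followed by: sudo systemctl restart docker
cat <<'EOF' > /tmp/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
```

Note that this only selects the container runtime; it does not by itself provide an NVML library on Jetson.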

Below is the Jetson Configuration:
root@demo-desktop:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic

root@demo-desktop:~# uname -a
Linux fccl-desktop 4.9.140-tegra #1 SMP PREEMPT Mon Aug 12 21:29:52 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux

root@demo-desktop:~# docker --version
Docker version 18.09.7, build 2d0083d

Are there any specific packages that need to be installed?


NVML and nvidia-smi are one and the same (nvidia-smi uses the NVML library to get its info). Since NVML is based on the discrete GPU driver architecture, it isn’t supported on Jetson, which uses an integrated GPU driver.

Are you able to use kubernetes without this status plugin? Perhaps instead you could pipe jtop or tegrastats output.

Hi Dusty,

Yes, I am able to use Kubernetes without this plugin. The problem is that I am unable to assign GPUs to containers running on top of Kubernetes.

When I do a describe on a Kubernetes node, it should display the GPUs available on that node. If it shows GPUs available, then we can assign GPUs to containers from Kubernetes. As of now, Kubernetes is not reading the available GPUs.
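To illustrate what a successful describe would look like: once a device plugin advertises the GPU, `kubectl describe node <node>` lists it under Capacity/Allocatable as `nvidia.com/gpu`. The snippet below parses a canned sample of that output (the sample is illustrative, not captured from a real node):

```shell
# Sketch: pulling the advertised GPU count out of describe-node style output.
# On a real cluster: kubectl describe node <node> | awk '/nvidia.com\/gpu/ {print $2; exit}'
describe_output='Capacity:
  cpu:             6
  nvidia.com/gpu:  1'
gpus=$(echo "$describe_output" | awk '/nvidia.com\/gpu/ {print $2}')
echo "GPUs advertised: $gpus"
```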

I am trying nvidia-smi because I think that, if we install the NVML library, Kubernetes will read the GPUs available on the nodes.