Couldn't use SDK Manager command line in a Dockerfile

Hi guys,

I want to use the SDK Manager command line to install host packages while building a Docker image, but it fails with:

(sdkmanager:1): Gtk-WARNING **: 02:06:53.903: cannot open display: 
The command '/bin/bash -o pipefail -c /opt/nvidia/sdkmanager/sdkmanager --cli install --responsefile /opt/sdkm_responsefile_drive.ini' returned a non-zero code: 1

We run docker build.

Dockerfile:

FROM ubuntu:18.04

# Copy the SDK Manager installer and response file from the build context
COPY sdkmanager_1.0.0-5517_amd64.deb /
COPY sdkm_responsefile_drive.ini /opt/

# Install Nvidia SDK Manager and its GUI library dependencies
RUN apt-get update && apt-get -y --no-install-recommends install \
        libcanberra-gtk-module \
        libgconf-2-4 \
        libgtk-3-0 \
        libnss3 \
        libx11-xcb1 \
        libxtst6 \
        libxss1 \
        locales \
        && dpkg -i sdkmanager_1.0.0-5517_amd64.deb \
        && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN /opt/nvidia/sdkmanager/sdkmanager --cli install --responsefile /opt/sdkm_responsefile_drive.ini

I thought SDK Manager's command-line mode doesn't require X11 and could run while building a Docker image. Any idea how to solve this?

Thanks.

Dear wenbinjlumj,
We will check internally and update you.

It does require X11, but it shouldn't. It's been a known issue since they added the command-line mode. A few ways around it are to use a fake display or to manually move the files inside. Neither is a pretty solution.
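For the fake-display route, a minimal sketch would be to wrap the CLI call in xvfb-run from the xvfb package, which starts a virtual X server for the command; whether SDK Manager is actually happy with it inside a container is not something I've verified:

# Sketch only: give sdkmanager a virtual display instead of a real X server
RUN apt-get update && apt-get -y --no-install-recommends install xvfb \
        && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN xvfb-run /opt/nvidia/sdkmanager/sdkmanager --cli install \
        --responsefile /opt/sdkm_responsefile_drive.ini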

Dear wenbinjlumj,
Currently it requires a GUI to run in Docker. We are working on running the SDKM CLI in Docker without that GUI dependency and will update you on this feature.

Thanks

Dear wenbinjlumj,
Do you have any further questions?

Has this ever been solved? I have the same issue.

The best solution I can suggest is not to use SDK Manager at all. You can obtain a Linux_for_Tegra folder from the BSP driver package, which removes any need to use SDK Manager to obtain those files. You can use proot and some patched scripts to get around any requirement for elevated privileges (e.g. mount). Any NVIDIA software, including the fun stuff like DeepStream, can be installed from their apt repositories from within a proot inside a Docker container.

Normally you would have to use --privileged, but that isn't possible in all environments. Advice from Docker's security experts is to treat root inside as root outside, and it is usually a bad idea to build as root. --privileged only compounds the problem by removing the few safeguards that prevent root inside the container from, say, remounting / and shredding all files on the host.
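As a rough sketch of that first step (the URL below is a placeholder; the real download link depends on the L4T release and board, so treat it as an assumption):

# Sketch only: fetch the L4T BSP driver package instead of running SDK Manager.
# Replace the URL with the one for your release from developer.nvidia.com/embedded.
wget -O bsp.tbz2 https://developer.nvidia.com/embedded/<your-l4t-release>/driver-package.tbz2
tar xjf bsp.tbz2          # unpacks a Linux_for_Tegra/ directory
ls Linux_for_Tegra        # flash.sh, apply_binaries.sh, rootfs/, etc.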

OK. I have already basically done what you said, but I did it by installing SDK Manager on my host machine and then recursively copying Linux_for_Tegra into my Docker container. One caveat: no Docker build method I used (ADD, COPY) retains the suid flag on files, which causes a problem with the rootfs subfolder. I can flash my Xavier board, but since the suid flag is dropped, I can't sudo on the OS on the board. I finally fixed this by capturing the file permissions (on the host computer) for Linux_for_Tegra/* using getfacl -R . > permissions_backup, and then in my Dockerfile applying those permissions back with setfacl --restore=permissions_backup, as sketched below.
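Roughly, the steps look like this (the /workspace path is just my choice here, not anything the tools require):

# On the host, from inside Linux_for_Tegra, record owners and permissions
# (getfacl notes setuid/setgid/sticky bits in its "# flags:" lines):
cd Linux_for_Tegra && getfacl -R . > permissions_backup

# In the Dockerfile: COPY drops the suid bits, setfacl --restore puts them back
COPY Linux_for_Tegra /workspace/Linux_for_Tegra
RUN cd /workspace/Linux_for_Tegra && setfacl --restore=permissions_backup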

As for running as root and using --privileged: I am definitely aware of the security implications.

My approach has been to download and extract everything during the image build stage so it's not tied to my workstation. The idea is to build flashable images in rootless Docker, on the local server (which I don't want things running on as root) or in the cloud. If I make some software for, say, a Nano, I want the ability to offer nightly builds of flashable images with my software installed on top of all the NVIDIA stuff.

So that's one of the reasons I am using rootless Docker. Usually it's good practice to remove suid binaries because of endless exploits like this, but that shouldn't apply to rootless Docker. I have tried it, and suid binaries owned by other users, which can then be mapped to a fake root, work fine… so from inside a proot, inside rootless Docker, inside the tarball extracted during the image build:

root@e0f81356fb5d:/usr/bin# find . -perm /4000
./newgrp
./chsh
./pkexec
./passwd
./arping
./gpasswd
./sudo
./chfn
root@e0f81356fb5d:/usr/bin# id -u
0 (this is a lie, by proot)

The way proot works is that I extract the rootfs as my regular user and proot maps that user to a fake root user. Then I do my apt stuff, my compilation, whatever. When I leave the proot, the magic is gone, but the permissions, other than the owner, are as they should be, including any suid bits.
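For reference, a minimal sketch of that proot invocation (qemu-aarch64-static from qemu-user-static is assumed, so the target's aarch64 binaries can run on an x86_64 build host):

# Sketch only: pretend to be root (-0) inside the extracted rootfs (-r),
# running the guest's aarch64 binaries through qemu (-q)
proot -0 -r Linux_for_Tegra/rootfs -q qemu-aarch64-static -b /proc -b /dev /bin/bash
# inside: apt-get update, apt-get install ..., then exit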

The idea for creating an actual filesystem out of that is to use mke2fs's newer option to create a filesystem from a source directory. It also has some gid/uid mapping features, from what I remember of skimming the manual. flash.sh, among others, will need to be patched, but I'm 80% certain it's doable. Worst comes to worst, I can't build the image inside the container, but even building just a system image tarball is very useful: I just extract it as root into the rootfs/ folder of some SDK Manager installation and run the usual image creation scripts.
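The mke2fs part would look roughly like this (the size is a placeholder, and -d needs a reasonably recent e2fsprogs):

# Sketch only: create a sparse image file, then build an ext4 filesystem in it
# directly from the rootfs tree, with no loop mount needed to write the image
dd if=/dev/zero of=system.img bs=1M count=0 seek=16384   # ~16 GiB sparse file
mke2fs -F -t ext4 -d Linux_for_Tegra/rootfs system.img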

I suppose that works too, but you might run into problems when installing software that adds files with suid bits inside your chroot/proot. IDK if what's happening to you is because of Docker or because of the way your tarball is extracted.

I have stopped running Docker at all unless it's rootless. It's a total trashfire and unlikely to improve. They didn't even have functional 2FA on Docker Hub for years, despite being warned, despite multiple breaches. The way C:\Users is mounted 777 on Windows is dangerous, and it creates a network share that's accessible from everything.

I have zero trust whatsoever in Docker anymore, and at least I know by running rootless Docker that if sh** happens, it's not Docker's fault. The one exception is stuff like DIGITS that requires NVIDIA Docker, and I don't think its runtime is compatible with rootless Docker yet. Frankly, I hope NVIDIA starts looking at other container runtimes (podman, maybe).