sdkmanager --cli option requires X

$ sdkmanager --cli

(sdkmanager:7381): Gtk-WARNING **: 21:09:52.580: cannot open display:

I was hoping to use it in a container but this seems to not be currently possible.

Hi mdegans,

This is a known issue. It’s not a simple fix. We want to fix it but there is no schedule for it yet.

BTW, check out this thread if you are trying to run SDK Manager from a Docker container:

NVIDIA SDK Manager on Docker container - Jetson AGX Xavier - NVIDIA Developer Forums

Thanks, but I can wait. The reason I don’t want to run that is this:

# Share the host X socket and an xauth cookie file with the container
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
touch $XAUTH
# Rewrite the cookie so it is valid for any hostname, then merge it into the file above
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -

docker run --privileged --rm -it \
           --volume=$XSOCK:$XSOCK:rw \
           --volume=$XAUTH:$XAUTH:rw \
           --volume=/dev:/dev:rw \
           --shm-size=1gb \
           --env="XAUTHORITY=${XAUTH}" \
           --env="DISPLAY=${DISPLAY}" \
           --env=TERM=xterm-256color \
           --env=QT_X11_NO_MITSHM=1 \
           --net=host \
           -u "jetpack"  \
           jetpack:latest \
           bash

It looks like it still requires X, runs privileged, mounts way too much stuff rw on the host, etc. I’d like to run it on my (headless) server to securely churn out custom SD card images in the background at regular intervals. That currently isn’t possible.

OK, gotcha. Well, another thing you can currently do is run SDK Manager from your PC once and have it download the packages. Then you can just use these packages (L4T, CUDA, etc.) as you wish. They can all pretty much be installed according to the procedures in the individual package documentation.
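Roughly like this, for example (the exact download location depends on your SDK Manager settings, and the package names depend on the JetPack version you downloaded):

DOWNLOADS=~/Downloads/nvidia/sdkm_downloads   # default download folder, adjust to yours
ls "$DOWNLOADS"                               # the .deb packages and L4T tarballs land here
sudo dpkg -i "$DOWNLOADS"/some-package.deb    # install whichever pieces you need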

Unfortunately that solution isn’t really compatible with the CI workflow I have in mind. What I’d like to be able to do is something like what is done with nginx:

FROM ubuntu:bionic
RUN get and install nvidia gpg key \
   && add nvidia apt source \
   && apt update && apt install -y \
   sdkm_packages_here_including_l4t_home \
   && cleanup
...
COPY my_scripts
USER someuser:somegroup
...
ENTRYPOINT entrypoint.sh

Then one can do FROM that image to apply various customizations, or add an entrypoint script that takes a defconfig or runs menuconfig interactively and spits out a flashable image to a volume accessible by sftp/whatever. It would certainly make it easier for those on the forum to customize the kernel/rootfs/etc., and one could even do all this in the cloud (e.g., gcr.io) from something as potato as a Chromebook. It would entirely remove the requirement to have a Linux workstation and make automated SD image builds a breeze.
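i.e., roughly this, if such a base image existed (all of the names here are made up for illustration):

docker build -t someuser/jetson-image-builder .
docker run --rm \
    -v "$PWD/defconfig:/config/defconfig:ro" \
    -v "$PWD/out:/out" \
    someuser/jetson-image-builder
# -> /out/sdcard.img, ready to flash or serve over sftp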

Provided the packages are signed with your keys, anybody building this way will know with near 100% certainty that the files come from NVIDIA. Yes, I can tarball up the L4T home folder, put it somewhere, and have it verified with a SHA checksum or sign it myself, but then I’m the man in the middle, when there shouldn’t be one. I also can’t distribute the image this way, as it would probably be against your license terms.
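That self-hosted route looks roughly like this (filename made up), which only proves the tarball is the one I uploaded, not that it came from NVIDIA:

sha256sum Linux_for_Tegra_home.tar.gz > Linux_for_Tegra_home.tar.gz.sha256
# ...and whoever downloads it later:
sha256sum -c Linux_for_Tegra_home.tar.gz.sha256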

A Dockerfile like the above, on the other hand, has none of these drawbacks. It’s a recipe. I can just put it on GitHub and anybody can then configure a cloud build of the image and push it to their private repo. Then they just pull that and run it at their leisure to spit out arrays of SD cards to provision the many devices they will buy from you. Right now that seems to be a bit of a problem (the customization and provisioning part, at least): people are cloning images with dd and probably deploying them all over the country. That’s going to have consequences.
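The workflow I’m imagining, assuming the recipe above were possible (repo and project names are made up):

git clone https://github.com/someuser/jetson-image-builder.git
cd jetson-image-builder
gcloud builds submit --tag gcr.io/$PROJECT_ID/jetson-image-builder .
docker run --rm -v "$PWD/out:/out" gcr.io/$PROJECT_ID/jetson-image-builder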

I’m glad you’ve moved to apt for OTA updates. It’s great. What I would love now is the ability to securely download the required files to build an SD card image without needing a GUI or a separate Linux workstation. A tarball with the L4T home folder and a bunch of Debian packages for the dependencies would work in a pinch, but what would really be great is an x86 apt repository that did the same job. It would enable so many nice things.
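Purely hypothetical, but what I mean is being able to do something like this on any x86 box (the URL and package name don’t exist today, this is the wish):

wget -qO - https://example.com/jetson/jetson-public.asc | sudo apt-key add -
echo "deb https://example.com/jetson/x86 bionic main" | \
    sudo tee /etc/apt/sources.list.d/nvidia-jetson.list
sudo apt-get update
sudo apt-get install -y nvidia-l4t-flash-tools   # made-up package name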

I encountered the same issue using sdkmanager version 1.0.0-5517. This is preventing integration with a build server, and a fix should be prioritized in my opinion.

You may consider Vagrant instead of Docker for the moment. SDKM seems to install and build images fine in a VM. Only flashing is a problem. It’s not as lightweight, but it should work.
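Roughly like this (the box name is just an example; SDKM itself still gets installed inside the VM by hand or by a provisioning script):

vagrant init ubuntu/bionic64
vagrant up
vagrant ssh -- -X        # forwards X into the VM if you want the SDKM GUI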

Alternatively, you could try making an image that downloads the kernel sources, rootfs, and apt key and use that to build an image. All the packages SDKM installs, including DeepStream, seem to be in the online repositories as well, so the goal is closer now at least. You will probably need a few scripts like flash.sh and the image creator script, depending on whether you want to build an actual image.
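Very roughly, the manual flow looks like this, assuming the usual L4T release layout (the filenames are version-dependent placeholders; grab the real ones from the L4T release page):

tar xjf Jetson-210_Linux_R32.x.x_aarch64.tbz2
sudo tar xjf Tegra_Linux_Sample-Root-Filesystem_R32.x.x_aarch64.tbz2 \
    -C Linux_for_Tegra/rootfs/
cd Linux_for_Tegra
sudo ./apply_binaries.sh                        # mounts into the rootfs, needs root
sudo ./create-jetson-nano-sd-card-image.sh -o sdcard.img -s 8G -r 200   # or ./flash.sh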

The container will need to run --privileged since Nvidia’s scripts need to be able to mount inside the container, but you might be able to avoid this by copying the apt key into the rootfs yourself instead of using Nvidia’s scripts to apply the binaries. You will have to read apply_binaries.sh and replicate the functionality.

See the apt-key manpage for details about alternatives to running the apt-key command:
http://manpages.ubuntu.com/manpages/cosmic/man8/apt-key.8.html
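e.g., something along these lines instead of running apt-key in a chroot (the key filename is just an example; use whatever you downloaded the public L4T key as):

# convert the ASCII-armored key and drop it straight into the rootfs keyring dir
gpg --dearmor < jetson-ota-public.asc | \
    sudo tee Linux_for_Tegra/rootfs/etc/apt/trusted.gpg.d/jetson-ota-public.gpg > /dev/null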

Hi mdegans, thanks for your suggestions, but I am already using the workaround suggested by nvidia (download files manually using sdkmanager, push them to git lfs, and then do the build using those files).

Using Vagrant instead of Docker is not really an option, because GitLab CI supports Docker out of the box and switching to Vagrant would be a huge change.

The thing is that there is really no reason to require an X server when running sdkmanager in command-line mode; ncurses alone is enough, and this should be easy for NVIDIA to fix.

Re: GitLab, yeah, I agree. I can’t use gcr.io either, which is my usual go-to. Also, I don’t like Vagrant much.

Re: ease of fix, eh, it may not be an easy fix depending on how SDK Manager is designed. IMO it was designed to look “green”, with functionality and native look and feel as an afterthought. What was wrong with Gtk?

I would rather they delete SDK Manager entirely in favor of a more traditional approach.

Instead, simply provide some reasonable, minimal, flashable images (OpenOffice isn’t necessary, and a pure CLI image would be nice) with Nvidia’s apt repository enabled to add more software. Maybe a few scripts to modify the image before flash, like adding users, setting up first-boot services, and running apt. It would probably solve a lot of the common problems people complain about around here.
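The kind of thing I mean, roughly (this assumes the rootfs is the first partition of the image and that qemu-user-static is installed on the x86 host so the chroot works):

LOOP=$(sudo losetup -fP --show sdcard.img)
sudo mount "${LOOP}p1" /mnt                            # APP / rootfs partition
sudo cp /usr/bin/qemu-aarch64-static /mnt/usr/bin/
sudo chroot /mnt useradd -m -G sudo,video someuser
sudo chroot /mnt apt-get update
sudo umount /mnt && sudo losetup -d "$LOOP"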

That direction seems to be one they are already moving in. I am very happy with the online apt repo, but SDK Manager is still more necessary than it should be. Gotta be a hell of a thing to maintain anyway. Just delete it and get it over with.