Where do the local Debian repositories used during post-install come from?

I see that during post-install, after flashing and rebooting, the host system SSHes into the target (Jetson platform) and installs a bunch of packages from local repos laid down in /var/.

Is there a way I can replicate that behavior myself? Where do these debs come from?

What I’m trying to do is create a rootfs like sdkmanager does, but slightly stripped down. Building everything from source seems like complete overkill and error-prone. The same issue applies if I wanted to use a Docker image instead: I would need these debs.

The downloadable source tarballs seem to be overlays on top of the sample rootfs (which is great!). But I don’t see, for instance, CUDA. CUDA seems to be downloaded as part of the SDK Manager software instead (not sure from where, though).

Digging deeper, I think I found the spot (I won’t post it since the URLs are all 403 Forbidden anyway).

Is there any reason why we can’t download the same deb packages NVIDIA uses during install and run the same remote-exec bash lines (hint!) as you do? I see how the installer works now, but it seems all the assets are private to the installer, which is odd. Why not give us the binaries too?

The only way for me to replicate a JetPack install and then overlay my stuff on top is to first completely install JetPack, write out the rootfs image using flash.sh, and then extract the raw image as my “build” rootfs directory to overlay on top of. Ouch. Is that what NVIDIA really envisions everyone doing?

Seems like there has GOT to be a better way to replicate Jetpack manually.

The standard Ubuntu packages are remotely accessed via apt at the official Ubuntu repos. The “/var/” content is a full local repository…apt sees this and uses it as if it were a remote repo. Those local repos come from the “repo” “.deb” files downloaded by SDKM or JetPack. The downloads you see in SDKM on the host PC (if you don’t remove them after use) are what provide this. If you manually use dpkg to install the repo “.deb” files you get basically the same result (though the apt source list has to be updated to point at it). Those files could also be used for a private server.
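The mechanics above can be sketched as follows. The filenames and key path here are hypothetical examples for one release, not the actual names from any particular JetPack; by default the script only prints each command (set DRY_RUN=0 to really run them):

```shell
#!/bin/sh
# Sketch: register a local "repo" .deb with apt. Filenames are hypothetical.
# DRY_RUN=1 (the default) only prints each privileged command.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

REPO_DEB=cuda-repo-l4t-10-0-local-10.0.166_1.0-1_arm64.deb   # hypothetical name

# Unpacks the packages into /var/<repo-name>/ and drops an apt source entry
run sudo dpkg -i "./${REPO_DEB}"

# Local repos ship a signing key under /var/<repo-name>/ (exact name varies)
run sudo apt-key add /var/cuda-repo-10-0-local-10.0.166/*.pub

# Now apt treats the /var/ directory as if it were a remote repository
run sudo apt update
```

With DRY_RUN left at 1 this prints the command lines so you can inspect the order before running anything for real.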

I don’t know about the future, but somewhere here on the forums I remember seeing something about the possibility (in the future) of having a regular repo server. Keep in mind that there is a EULA tied to the NVIDIA-specific packages, so there might be some sort of EULA step if this is added.

We are saying the same thing; HOWEVER, you didn’t answer my question!

How do I get those repo deb files? They seem to be forbidden (403) if I use the NVIDIA URLs directly. I am reading them RIGHT out of the installer’s JSON file, which defines the assets per Tegra release.

SDKM uses a login first. If you go to that address after first visiting another address that allows login, it should work. Probably go here, log in, and then go to the address you want:
…it’s SSL plus a password. The browser should be able to negotiate that if you log in at the correct location first. If not, then you might post a specific URL.

If you look at your sdkm_downloads directory, you will see a ‘dkml3_jetpack_l4t_42.json’ file (or some derivative of that, depending on platform and Jetson version). That JSON file contains URL definitions and install directives which SDK Manager uses to create little post-flash scripts that are executed after the first boot-up.
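For example, download URLs can be pulled out of such a JSON file with a few lines of shell. The sample document below is an invented stand-in, NOT the real SDKM schema; adjust the key names to whatever your .json actually contains:

```shell
#!/bin/sh
# Illustrative only: extract every "url" value from a JSON file.
# The sample document written here is NOT the real SDKM schema.
json=$(mktemp)
cat > "$json" <<'EOF'
{
  "components": [
    { "name": "cuda-repo", "url": "https://example.com/cuda-repo_arm64.deb" },
    { "name": "opencv",    "url": "https://example.com/opencv_arm64.deb" }
  ]
}
EOF

# Crude but dependency-free: grab the quoted value after each "url" key
urls=$(grep -o '"url": *"[^"]*"' "$json" | sed 's/^"url": *"//; s/"$//')
echo "$urls"
```

For the real file, a proper JSON parser (jq or python3) would be safer than grep, since the actual layout is nested and undocumented.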

Most of these scripts install the repo deb and/or build something (which makes sense, since they need a runtime to build, say, OpenCV). This set of debs is what I need access to in order to replicate what sdkmanager does without actually having to run sdkmanager. I want a streamlined build where I can run ‘make image’ and it will:

  • Call apply_binaries/flash.sh
  • Boot up the system and execute some init script that I squirrel away in rootfs that in turn:
    • Downloads these needed deb repo files
    • Installs the repo
    • Installs the packages
    • Performs any custom setup
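The first-boot script in that flow might look roughly like this. Everything here is a placeholder: the URLs, package names, and layout are my assumptions, and with DRY_RUN=1 (the default) each command is only printed:

```shell
#!/bin/sh
# Sketch of a first-boot init script. URLs and package names are placeholders.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

REPO_URLS="https://example.com/cuda-repo_arm64.deb
https://example.com/tensorrt-repo_arm64.deb"

for url in $REPO_URLS; do
    run wget -q "$url"                        # 1. download the repo debs
    run sudo dpkg -i "./$(basename "$url")"   # 2. install each local repo
done

run sudo apt update                           # pick up the new local repos
run sudo apt install -y cuda-toolkit-10-0     # 3. install packages (placeholder)
# 4. any custom setup goes here (users, services, config files, ...)
```

The script would be dropped into the rootfs before flashing and triggered once by a systemd unit or rc.local entry on first boot.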

Trying to do this in a static rootfs without a runtime is extremely tedious. It basically means imaging the TX2 after setup and restoring it, but that’s hard to track in source control. I’d rather have a set of patches to apply on top of the sample rootfs and then build with.

Anyway, I tried using authentication but I suspect I have to generate some session key to get to these debs:


Try to get that one!

Addendum: https://developer.nvidia.com/embedded/downloads is not password protected. All of those links I can wget all day long without having to save any session key.

Another addendum: these JSON files are also available in ~/.nvsdkm/dist/*.json; they outline how packages get installed at post-flash time. There is also the JSON file in the sdkm_downloads directory, which I just noticed!

But the bottom line is this: is there a way to download the same debs that sdkmanager downloads at run-time to populate ~/Downloads/nvidia/sdkm_downloads? Note that this would also help in creating Docker images.

There is no official support for bypassing SDKM.

The session key is why I suggest logging in via browser and then using the browser to look for the file…the session would be valid from that login. You would only need to do that once since you’d have the files downloaded.

If you look closely at SDKM, then you’ll see you can check to download all files but not actually flash or install. That would get you a copy of all of the files.

Of the files downloaded, only the “noarch”, “arm64”, or “aarch64” ones apply to the TX2. Ignore the others (which are probably for the host PC).
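Sorting the download directory by architecture is a simple filename filter. This is just a sketch, and the filenames created below are invented for illustration:

```shell
#!/bin/sh
# Sketch: pick out the Jetson-side files from a download directory by name.
# The filenames created here are invented examples.
dir=$(mktemp -d)
touch "$dir/cuda-repo-l4t-10-0-local_arm64.deb" \
      "$dir/graphsurgeon-tf_arm64.deb" \
      "$dir/cuda-repo-ubuntu1804_amd64.deb" \
      "$dir/python-libnvinfer_amd64.deb"

# Only "noarch", "arm64", or "aarch64" files belong on the TX2;
# the amd64 ones are host-PC packages and can be ignored.
target_files=$(ls "$dir" | grep -E 'noarch|arm64|aarch64')
echo "$target_files"
```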

Once you have those files, then essentially all SDKM does is copy the “repo” files first, and among those the “cuda” repo file is first. Once that is in, and you run “sudo apt update”, then you can put in the other files. Probably you’d start with the other repo files, and then another “sudo apt update”. Once repos are in, then you can install individual files from the unpack, or you can view what is in “/var/cuda-repo-10-0-local-10.0.166/” and pick something to install.
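That ordering might be sketched like this. The repo deb names are release-specific examples (the 10.0.166 names match the /var/ path above, the rest are guesses), and DRY_RUN=1 (the default) only prints each command:

```shell
#!/bin/sh
# Sketch of the suggested install order. Repo deb names are examples only.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. The cuda repo goes in first, followed by an apt refresh
run sudo dpkg -i ./cuda-repo-l4t-10-0-local-10.0.166_1.0-1_arm64.deb
run sudo apt update

# 2. Then the remaining repo debs, and another refresh
for deb in ./libcudnn*-repo*.deb ./tensorrt-repo*.deb; do
    run sudo dpkg -i "$deb"
done
run sudo apt update

# 3. Now individual packages resolve from the local /var/ repos
run sudo apt install -y cuda-toolkit-10-0
```

In dry-run mode, unmatched globs print literally; with the real files present the loop picks them up by name.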

The magic of SDKM is mostly that it understands all of the dependencies. The order of install won’t matter to SDKM. You might have to figure out an order. In some cases you can just name a lot of packages in a single apt install command and apt will figure out the order of install.


If you are looking for an easier way to automatically download each time, then you are probably out of luck. If you have a pre-downloaded set of files, then it shouldn’t be too bad.

And that’s the problem! Btw I’m not trying to bypass SDKM - I’m just trying to emulate it so I can automate our internal build.

Yes, I suppose I could run through the sdkmanager and collect all of these debs manually, tarball them up, and then use that for an official build. It just seems utterly ridiculous to do that.

I understand how the update works. Check out the JSON file I outlined above. It has the EXACT commands SDKM uses after the flash and first boot-up. It even outlines how to build OpenCV.

I really wish NVIDIA did not obscure all of this package management. It seems counterproductive to me.

Can you confirm if https://devtalk.nvidia.com/default/topic/1055063/jetson-tx2/custom-postinstall-hook-script-executed-as-part-of-install/post/5350399/#5350399 solved your problem?