Is it possible to flash a Jetson Nano over the network?

Exploring if this is an option. I am wondering if we can flash the Jetson Nano over the network to update devices in the field.

OTA updates are possible with apt-get since JetPack 4.3. If you want to update your own software you could use some docker container orchestration solution or a custom apt repository.
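For reference, on a device flashed with JetPack 4.x the OTA packages come from NVIDIA's apt server; the source list looks roughly like this (the r32.4 release tag and the t210 board id are examples, match them to your JetPack version and board):

```
deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/t210 r32.4 main
```

With that in /etc/apt/sources.list.d/, a plain `sudo apt-get update && sudo apt-get upgrade` pulls the updated L4T packages.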

I did not try flashing the Nano over the network, but a full disk rewrite with dd did work over the network. You may find some reference for that approach.
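For the record, a sketch of the dd-over-network approach; the host name, user, and device node here are assumptions, and the Nano must not be running from the disk being overwritten (boot from other media or use a second system):

```shell
# stream a disk image to the device over ssh and write it with dd;
# compress on the wire, and flush writes before dd exits (conv=fsync)
flash_over_network() {
    img="$1"     # local image file, e.g. jetson-nano.img
    target="$2"  # ssh target, e.g. user@nano.local (assumption)
    dev="$3"     # block device on the Nano, e.g. /dev/mmcblk0 -- verify first!
    gzip -c "$img" | ssh "$target" "gunzip -c | sudo dd of=$dev bs=4M conv=fsync"
}

# usage: flash_over_network jetson-nano.img user@nano.local /dev/mmcblk0
```

Writing the wrong device with dd destroys data, so double-check the device node with lsblk on the target before running anything like this.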

Thanks. Since our devices are set up on a private network, there is no public internet access … I have set up a private apt repository, but it is a lot of work to update dependency libs after deployment … I think Docker may be worth trying. However, I have some questions.

Currently I took the NVIDIA SDK rootfs and customized it by adding more open source libs. If using Docker, I need to install these libs into the containers, right? What about CUDA and TensorRT? They were on the Nano host rootfs, I think … do I need to install them inside the container again, or can they be shared?

Is there an example of how to run Docker on the Nano?
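(A minimal example of running a container on the Nano; the l4t-base image is real, but the tag is an example, so pick one matching your installed L4T release:)

```shell
# --runtime nvidia is what triggers the csv-driven bind mounts, so CUDA,
# TensorRT, etc. from the host are visible inside the container
sudo docker run -it --rm --runtime nvidia \
    nvcr.io/nvidia/l4t-base:r32.4.3 \
    /bin/bash
```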


If using Docker, I need to install these libs into the containers, right?

Strictly speaking, you could add the libs and such to a .csv file and put it in /etc/nvidia-container-runtime/host-files-for-container.d/

Just follow the examples in that folder… but it’s probably a much better solution to install them in the Docker image, yes.
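For illustration, a hypothetical my-libs.csv in that folder might look like this (the paths are made-up examples); each line is an entry type (dev, dir, lib, sym) plus a host path that the nvidia container runtime will bind mount into containers:

```
lib, /usr/lib/aarch64-linux-gnu/libexample.so.1
sym, /usr/lib/aarch64-linux-gnu/libexample.so
dir, /opt/example-models
```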

Re: TensorRT, that is bind mounted from the host when you run a container (see tensorrt.csv), so it does not need to be installed inside the image unless you need those libraries at build time.

I take this strategy when building OpenCV rather than setting the default runtime to nvidia, which I strongly recommend against.

This script is run inside a Dockerfile. First, dependencies are installed:

install_dependencies () {
    # open-cv has a lot of dependencies, but most can be found in the default
    # package repository or should already be installed (e.g. CUDA).
    echo "Installing build dependencies."
    apt-get update && apt-get install -y --no-install-recommends \
        gosu \
        cuda-compiler-10-2 \
        cuda-minimal-build-10-2 \
        cuda-libraries-dev-10-2 \
        libcudnn8-dev \
        build-essential \
        cmake \
        git \
        gfortran \
        libatlas-base-dev \
        libavcodec-dev \
        libavformat-dev \
        libavresample-dev \
        libeigen3-dev \
        libgstreamer-plugins-base1.0-dev \
        libgstreamer-plugins-good1.0-dev \
        libgstreamer1.0-dev \
        libjpeg-dev \
        libjpeg8-dev \
        libjpeg-turbo8-dev \
        liblapack-dev \
        liblapacke-dev \
        libopenblas-dev \
        libpng-dev \
        libpostproc-dev \
        libswscale-dev \
        libtbb-dev \
        libtbb2 \
        libtesseract-dev \
        libtiff-dev \
        libv4l-dev \
        libx264-dev \
        pkg-config \
        python3-dev \
        python3-numpy \
        python3-pil \
        python3-matplotlib \
        v4l-utils
}

Then OpenCV is built, and finally the dependencies that will be bind mounted at runtime are removed (as well as others not needed).

cleanup () {
    echo "REMOVING build files"
    rm -rf ${BUILD_TMP}

    echo "REMOVING build dependencies"
    apt-get purge -y --autoremove \
        gosu \
        build-essential \
        cmake \
        git \
        cuda-compiler-10-2 \
        cuda-minimal-build-10-2 \
        cuda-libraries-dev-10-2 \
        libcudnn8-dev
    # there are probably more -dev packages that can be removed if the 
    # runtime packages are explicitly added below in install_dependencies
    # but the above ones I know offhand can be removed without breaking open_cv
    # TODO(mdegans): separate more build and runtime deps, purge build deps

    # this shaves about 20 MB off the image
    echo "REMOVING apt cache and lists"
    apt-get clean
    rm -rf /var/lib/apt/lists/*

    echo "REMOVING builder user and any owned files"
    deluser --remove-all-files builder
}

This leads to a slim image. Otherwise cuDNN, CUDA, etc. stay in the image pointlessly and are mounted over at runtime anyway.
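For what it’s worth, the overall flow might look like this in a Dockerfile (the script name and base tag are placeholders); the important part is doing install, build, and cleanup in a single RUN so the purged packages never persist in an intermediate layer:

```dockerfile
FROM nvcr.io/nvidia/l4t-base:r32.4.3
COPY build_opencv.sh /tmp/
# install deps, build OpenCV, then remove build deps -- all in one layer
RUN /tmp/build_opencv.sh && rm /tmp/build_opencv.sh
```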

If you need a base image with Nvidia apt sources enabled, one is here:

Linked is the Dockerfile used to build that image as well. It should not matter much which tag you use when building (xavier, nano, etc.) since the libraries that differ are mounted over at runtime anyway. The tags are intended for the case where you delete all the .csv files and use a purely containerized approach instead (no bind mounting).