Is the 'dli-nano-ai' Dockerfile published anywhere?

I’d like to add a couple more Python libraries to a version of this image (Seaborn and Pandas), but still have the new image launch Jupyter when run. I’ve been making some progress working this out from the base image, but it would be so much easier to just clone the ‘dli-nano-ai’ Dockerfile and add the two libraries.

Thanks.

Hi,

You can find some Dockerfiles in this GitHub repo:

Thanks.


Thanks. I had already found that repo, and a few others, but none of them fire up Jupyter as part of the image. So I am specifically looking to see if the Dockerfile for the DLI course has been shared anywhere.

Hi @atlay, the l4t-ml container from jetson-containers automatically starts a Jupyter server. It also has Pandas pre-installed.

It has not, but I don’t believe you need the Dockerfile of a base image in order to build a new container on top of it.

Add these lines to your own Dockerfile:

ARG BASE_IMAGE=nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4
FROM ${BASE_IMAGE}

and then proceed to install the packages you want later in your Dockerfile. This will base your new container off of the DLI container, and then you can add your extra stuff to it.
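Putting it together, a minimal sketch for the Seaborn/Pandas case from the original question could look like this (the pip3 line is just an example, install whatever you need there):

ARG BASE_IMAGE=nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4
FROM ${BASE_IMAGE}

# extra Python packages on top of the DLI image
RUN pip3 install seaborn pandas

Since the CMD/ENTRYPOINT of the base image is inherited, the new image should still launch Jupyter the same way the DLI container does.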


Perfect, thanks. My bad for not spotting that the ML Dockerfile does almost all of what I need. I was sure I’d gone through all of them in that repo before asking here.

Hi @dusty_nv, I’m fairly new to Docker and would like to add torch2trt on top of the dli-nano-ai image. Could you please advise on how I could add that to the container? I’m assuming I would just need to create a Dockerfile, add these commands:

ARG BASE_IMAGE=nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4
FROM ${BASE_IMAGE}

And a few more commands for installing torch2trt, then just build and run the container. But I’m not sure exactly what those commands should be, so your assistance would be highly appreciated.

P.S. I have already tried installing it on the container itself by cloning the repo and running the setup.py install and everything works perfectly but re-installing over and over again is not quite efficient.

Regards,
Abu

Hi Abu, you’ve got the first two lines correct there. Next you want to add RUN commands to your Dockerfile, which allow you to run things like git/pip3/apt during the build to install the packages you want.

For example:

RUN git clone https://github.com/NVIDIA-AI-IOT/torch2trt && \
    cd torch2trt && \
    python3 setup.py install

Then build your Dockerfile with docker build:
sudo docker build -t my-container:r32.4.4 -f Dockerfile .
(assuming the filename of your Dockerfile is indeed Dockerfile, and you run the command from the directory containing it)

Then to run it, just change the image name in your DLI run script to my-container:r32.4.4 (or whatever you named your new container).
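For reference, the modified run command would look roughly like this (just a sketch based on the standard DLI run script, your volume/device options may differ):

sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --device /dev/video0 \
    my-container:r32.4.4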

Hi @dusty_nv, thanks a bunch for your response, I’ve tried out the exact commands that you mentioned but unfortunately I ran into an error when cloning into ‘torch2trt’.

Any idea on how I could solve this problem? Once again thank you.

Best,
Abu

Ah, ok - you need to set your default docker-runtime to nvidia and reboot - https://github.com/dusty-nv/jetson-containers#docker-default-runtime

This will allow CUDA/cuDNN/etc. to be used while you are building the container.
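If it helps, the change from that link boils down to adding "default-runtime": "nvidia" to /etc/docker/daemon.json, so that it looks roughly like this, and then restarting the Docker service (or rebooting):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}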

Hi @dusty_nv, thank you very much, it worked! Interesting, so is that the reason why we don’t add --gpus all during the docker run?

I have one more question if you don’t mind: I have a new notebook which I created, and I was wondering if it would be possible to also add that permanently into the docker container without requiring the Dockerfile? That would be very helpful to me.

Also is there a reason for the Dockerfile not being shared publicly? I’m just wondering.

Best,
Abu

That’s correct, you don’t need --gpus all on Jetson because --runtime nvidia is used and takes care of it. I’ve seen mixed feedback about whether --gpus all works on Jetson unless you also use --runtime nvidia.

What I would recommend is adding a COPY command to the Dockerfile you wrote above to install torch2trt. If you didn’t want to use a Dockerfile, you could use the docker cp command on a running container, and then use docker commit to save your changes to that container.
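For example, with placeholder names (the notebook filename and the destination path inside the container are assumptions here, adjust them to wherever Jupyter serves notebooks from in your image):

# in the same Dockerfile you used for torch2trt
COPY my_notebook.ipynb /nvdli-nano/my_notebook.ipynb

Or, going the docker cp / docker commit route on a running container:

sudo docker cp my_notebook.ipynb <container-id>:/nvdli-nano/
sudo docker commit <container-id> my-container:r32.4.4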

The original Dockerfile for the dlinano container isn’t public because the source used to generate the course isn’t open-source. Regardless, it is probably easier/faster for you to make a derivative Dockerfile using dli-nano as a base (like you did above), rather than rebuilding the whole container.

Hi @dusty_nv, I see, thank you very much for the help and the clarification, I really appreciate it. The reason I asked about the Dockerfile not being public was that there were things like a default password (“dlinano”) set for JupyterLab, and some notebooks were already in there as well (the course materials, obviously).

And so I was thinking of a way that I could maybe change the default login password to something else, as well as delete the original notebooks and replace them with my new project notebooks. I’m assuming those are pre-set inside the container, or could those be changed too? If not, then I’ll probably just have to resort to creating my own container and installing all the dependencies.

Regards,
Abu
(Newbie in Docker)

You should be able to set your own Jupyter password by adding this command to your Dockerfile and setting your desired password there. It will overwrite the setting made by the original dli-nano Dockerfile:

https://github.com/dusty-nv/jetson-containers/blob/1e10908a104494a883f6855d1e9947827f2a17bc/Dockerfile.ml#L162

RUN python3 -c "from notebook.auth.security import set_password; set_password('nvidia', '/root/.jupyter/jupyter_notebook_config.json')"

You should be able to overwrite the original notebooks, but in Docker you unfortunately can’t easily delete files that came from an earlier layer. I would recommend creating a new directory for your own notebooks. Or if you wanted to create a totally new container, you could base yours off of the l4t-ml container, as that already has many of the dependencies (including PyTorch and JupyterLab):

https://ngc.nvidia.com/catalog/containers/nvidia:l4t-ml
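A rough sketch of that alternative, with an assumed tag and notebook path (check the NGC page for the tag matching your JetPack/L4T version):

FROM nvcr.io/nvidia/l4t-ml:r32.5.0-py3

# your own notebooks in a fresh directory, separate from anything pre-installed
COPY notebooks/ /opt/my-notebooks/

l4t-ml already starts JupyterLab on launch, so no extra CMD should be needed.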


Hi @dusty_nv, just curious: apart from JupyterLab, may I know what other packages the “DLI Getting Started with AI on Jetson Nano” container has on top of the “NVIDIA L4T PyTorch” container, or is that all?

I ask because, unlike other containers such as “NVIDIA L4T ML” and “NVIDIA L4T PyTorch”, the “DLI Getting Started with AI on Jetson Nano” container does not list the packages it contains in the catalog. Thank you.

Hi @Abuelgasim, basically it’s just JupyterLab and jetcam that get installed on top of l4t-pytorch. And I think opencv-python.


@dusty_nv Amazing!! Thank you very much for the reply. One last question if you don’t mind: I know how to overwrite the default Jupyter login password, but is it possible to modify the dli-nano Dockerfile to get rid of the password entirely when launching JupyterLab? Once again, thank you.

I haven’t tried doing this, but I found a suggestion for disabling the password here: https://github.com/jupyterlab/jupyterlab/issues/4667#issuecomment-443672397

You would want to make your own Dockerfile which uses the dlinano container as a base and changes the start-up command:

FROM nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

CMD /bin/bash -c "jupyter lab --LabApp.token='' --LabApp.password='' --ip 0.0.0.0 --port 8888 --allow-root &> /var/log/jupyter.log" & \
	echo "allow 10 sec for JupyterLab to start @ http://$(hostname -I | cut -d' ' -f1):8888 (password ${JUPYTER_PASSWORD})" && \
	echo "JupterLab logging location:  /var/log/jupyter.log  (inside the container)" && \
	/bin/bash

Then build your Dockerfile with sudo docker build -t my-dli-nano-ai:latest -f Dockerfile .

And remember to change your run script to use the my-dli-nano-ai:latest container instead of nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

Alternatively, to change the password, try this Dockerfile instead:

FROM nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0
RUN  python3 -c "from notebook.auth.security import set_password; set_password('MYPASSWORD', '/root/.jupyter/jupyter_notebook_config.json')"

Thank you very much for the response, it worked! But I just needed to add --LabApp.password='' as well, together with --LabApp.token='', otherwise it would still ask for the password. I found the suggestion at this link here:

However, I couldn’t help noticing the safety/security issues that contributors were mentioning with regard to doing this, so my next step is figuring out a way to set up a one-time password just like in the JetBot course, where you only need to type in the password once and, any other time you launch Jupyter from the same IP address, the password is no longer required. I would really appreciate some guidance on that as well.

Here is the code from the JetBot Dockerfile where the password is set: https://github.com/NVIDIA-AI-IOT/jetbot/blob/442105ae4480fe67b99d660084f768f58bb9edb2/docker/jupyter/Dockerfile#L16

However, it is set the same way as above:

python3 -c "from notebook.auth.security import set_password; set_password('${JUPYTER_PASSWORD}', '/root/.jupyter/jupyter_notebook_config.json')" 

So I’m not sure how that behavior is achieved. I wonder if it has to do with the --restart always flag that is used to run the container: https://github.com/NVIDIA-AI-IOT/jetbot/blob/442105ae4480fe67b99d660084f768f58bb9edb2/docker/jupyter/enable.sh#L14

Let’s say that you close your JupyterLab browser window, and then, without shutting down the container, you open a new browser window. Does JupyterLab ask you to log in?

I’ve just tested this out, adding --restart always instead of --rm when running the container, and it solved my problem! The password is only required once; then even after exiting the container (but not shutting it down), when I re-open JupyterLab, authentication is no longer required. Thank you very much!

So using the same IP address works as long as the docker container remains running in the background; as soon as you shut down and re-run the container, the password is required again.
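For reference, here is roughly the run command I ended up with (adapted from the DLI run script with --rm swapped out for --restart always; adjust the image name and options to your setup):

sudo docker run -it --restart always --runtime nvidia --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --device /dev/video0 \
    my-dli-nano-ai:latest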