Machine Learning Containers for Jetson

Hi All,

We’ve released the following framework containers for Jetson and JetPack 4.4 Developer Preview on NVIDIA GPU Cloud (NGC):

  • TensorFlow Container (l4t-tensorflow) - contains TensorFlow pre-installed in a Python 3.6 environment to get up & running quickly with TensorFlow on Jetson.

  • PyTorch Container (l4t-pytorch) - contains PyTorch and torchvision pre-installed in a Python 3.6 environment to get up & running quickly with PyTorch on Jetson.

  • Machine Learning Container (l4t-ml) - contains TensorFlow, PyTorch, JupyterLab, and other popular ML and data science frameworks such as scikit-learn, scipy, and Pandas pre-installed in a Python 3.6 environment.

You can also find the Dockerfiles and build scripts on GitHub - have fun!
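For example, pulling and starting the l4t-ml container from NGC might look like this (the r32.4.2-py3 tag below is an assumption for the JetPack 4.4 Developer Preview; check the NGC page for the tag matching your L4T release):

```shell
# Pull the ML container image from NGC (tag is an example; match it to your L4T version)
sudo docker pull nvcr.io/nvidia/l4t-ml:r32.4.2-py3

# Start it interactively with GPU access and host networking
sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.4.2-py3
```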

I tried running sudo docker run -it --rm --runtime nvidia --network host, but I get the following error. What is this?
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=12903 /var/lib/docker/overlay2/f4c44c75650f359a9d2a4be86376a0af5eef21f0cc31715eba5a020f8853e28d/merged]\\nnvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/f4c44c75650f359a9d2a4be86376a0af5eef21f0cc31715eba5a020f8853e28d/merged/usr/lib/ file exists\\n\\\"\"": unknown.

Hmm I am not sure, haven’t seen that one before. Are you on JetPack 4.4 Developer Preview? As a test, could you try one of the other containers like l4t-pytorch or l4t-tensorflow?

In my case, I had used SDK Manager 1.0.1-5538 and installed JetPack 4.3 on my TX2 instead of JetPack 4.4. I also tried l4t-pytorch and l4t-tensorflow, but the same error appears.

I installed JetPack 4.4 and it worked. Thank you.

Does the container use the GPU?
If it doesn't, do I need to install tensorflow-gpu, CUDA, and cuDNN myself? I'd like to know how to use the GPU.

Hi @yoshifumi_watanabe_aa, yes, the containers include GPU support in the TensorFlow and PyTorch packages. PyCUDA is also included. The CUDA Toolkit and cuDNN are automatically mapped into the containers from the Jetson device.

I want to install OpenCV in the l4t-ml container. Is there a reference page? I'd also like to know which commands would be helpful, because I will be using a camera at runtime.

If you run this from within the container, it should install OpenCV for you:

$ sudo apt-get update
$ sudo apt-get install libopencv-dev

You might want to either commit your docker container to save these changes, or use a Dockerfile to create your own image using l4t-ml as the base container.
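A minimal Dockerfile for that second approach might look like this (the base image tag is an assumption; use the one that matches your JetPack release):

```dockerfile
# Hypothetical sketch: extend l4t-ml with the OpenCV development packages
FROM nvcr.io/nvidia/l4t-ml:r32.4.2-py3

RUN apt-get update && \
    apt-get install -y libopencv-dev && \
    rm -rf /var/lib/apt/lists/*
```

Then build it with sudo docker build -t my-l4t-ml . and run it with the same docker run flags as the base image.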

The l4t-ml container contains tensorflow, but isn't tensorflow-gpu needed to use the GPU?

I tried it, but I get an error:

Reading package lists… Done
Building dependency tree
Reading state information… Done
E: Unable to locate package opencv-dev

This is awesome, good job NVIDIA!!! My robotic animatronic project can really benefit from this.

Hi @yoshifumi_watanabe_aa, I’m sorry, I meant libopencv-dev. I have corrected this above as well.

As per the Release Notes of the TensorFlow for Jetson installer wheel, the package name has changed from tensorflow-gpu to just tensorflow. So yes, GPU is still available in this TensorFlow container. You can check it by running the following from python3:

from tensorflow.python.client import device_lib
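A slightly fuller check along those lines (a sketch; run it inside the container, and note that the exact device list varies by board):

```python
# List the devices TensorFlow can see; on Jetson this should include a GPU entry
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
for d in devices:
    print(d.name, d.device_type)

# True if TensorFlow found at least one GPU device
print(any(d.device_type == 'GPU' for d in devices))
```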

Inside the l4t-ml container, I ran sudo apt-get update and sudo apt-get install libopencv-dev. However, running import cv2 from python3 results in No module named 'cv2'.

Confirmed. Thank you.

What I want to do is install OpenCV in the l4t-ml container and get a USB camera ready to use. So first, how should I install OpenCV? I'd also like to know how to make the USB camera visible from inside the container, and which options to pass to the docker run command for the camera once the container is set up.

Try also installing the python3-opencv package with apt-get.
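For example, from inside the container (sudo may be unnecessary if you are already root in the container):

```shell
# libopencv-dev provides the C++ headers and libraries;
# python3-opencv provides the Python cv2 module
apt-get update
apt-get install -y python3-opencv

# Quick sanity check that the Python bindings import
python3 -c "import cv2; print(cv2.__version__)"
```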

When you start the container with sudo docker run, you need to include the --device flag for the V4L2 camera nodes that you want to use. For example:

sudo docker run -it --rm --runtime nvidia --network host --device /dev/video0

Note the addition of the --device /dev/video0 flag. If your V4L2 USB camera is on a different device than /dev/video0, substitute that instead.
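If you are not sure which node your camera is on, you can list the V4L2 device nodes on the Jetson host before starting the container:

```shell
# Run on the host; each attached V4L2 camera shows up as /dev/videoN
ls -l /dev/video*
```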