We’ve released the following framework containers for Jetson and JetPack 4.4 Developer Preview on NVIDIA GPU Cloud (NGC):
TensorFlow Container (l4t-tensorflow) - contains TensorFlow pre-installed in a Python 3.6 environment to get up & running quickly with TensorFlow on Jetson.
PyTorch Container (l4t-pytorch) - contains PyTorch and torchvision pre-installed in a Python 3.6 environment to get up & running quickly with PyTorch on Jetson.
Machine Learning Container (l4t-ml) - contains TensorFlow, PyTorch, JupyterLab, and other popular ML and data science frameworks such as scikit-learn, scipy, and Pandas pre-installed in a Python 3.6 environment.
You can also find the Dockerfiles and build scripts on GitHub - have fun!
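For reference, one of these containers can be pulled and started roughly like this (a sketch; the r32.4.2-py3 tag matches the L4T release of JetPack 4.4 Developer Preview and will differ on other releases):

```shell
# Pull the ML container image from NGC
sudo docker pull nvcr.io/nvidia/l4t-ml:r32.4.2-py3

# Run it interactively with GPU access via the NVIDIA container runtime
# (installed by default with JetPack)
sudo docker run -it --runtime nvidia nvcr.io/nvidia/l4t-ml:r32.4.2-py3
```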
Hmm I am not sure, haven’t seen that one before. Are you on JetPack 4.4 Developer Preview? As a test, could you try one of the other containers like l4t-pytorch or l4t-tensorflow?
In my case, I used SDK Manager 1.0.1-5538 and installed JetPack 4.3 on the TX2 instead of JetPack 4.4. I also tried the l4t-pytorch and l4t-tensorflow containers, but the same error is displayed.
Does the nvcr.io/nvidia/l4t-ml:r32.4.2-py3 container use the GPU?
If it doesn't, do I need to install tensorflow-gpu, CUDA, and cuDNN myself? I want to know how to use the GPU.
Hi @yoshifumi_watanabe_aa, yes, the containers include GPU support in the TensorFlow and PyTorch packages. PyCUDA is also included. The CUDA Toolkit and cuDNN are automatically mapped into the containers from the Jetson device.
I want to install OpenCV in an l4t-ml container. Do you have a reference website? I would also like to know which commands will be helpful, because I will be using a camera at runtime.
You might want to either commit your docker container to save these changes, or use a Dockerfile to create your own image using l4t-ml as the base container.
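As a sketch of the Dockerfile route (the python3-opencv package name and the image tag are assumptions; adjust them to your JetPack release):

```dockerfile
# Hypothetical Dockerfile: extend l4t-ml with OpenCV's Python bindings
FROM nvcr.io/nvidia/l4t-ml:r32.4.2-py3

RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-opencv && \
    rm -rf /var/lib/apt/lists/*
```

Build it on the Jetson with `sudo docker build -t my-l4t-ml .` and run the resulting image in place of the stock one.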
As per the Release Notes of the TensorFlow for Jetson installer wheel, the package name has changed from tensorflow-gpu to just tensorflow. So yes, GPU is still available in this TensorFlow container. You can check it by running the following from python3:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
Inside the l4t-ml container, I ran:
$ sudo apt-get update
$ sudo apt-get install libopencv-dev
However, running import cv2 from python3 results in No module named ‘cv2’.
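One likely explanation, offered as an assumption about the Ubuntu packaging: libopencv-dev only installs the C++ libraries and headers, while the Python cv2 module ships in a separate package. Something like this, run inside the container, should make import cv2 work:

```shell
# Install the Python 3 bindings for OpenCV
# (python3-opencv is the Ubuntu package that provides the cv2 module)
sudo apt-get update
sudo apt-get install -y python3-opencv

# Verify the module is importable
python3 -c "import cv2; print(cv2.__version__)"
```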
What I want to do is install OpenCV in the l4t-ml container and use a USB camera from inside it. So I would like to ask how to install OpenCV, how to make the USB camera visible from inside the container, and which options I need to pass to the docker run command for the camera.
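On the question of run options for the camera, a minimal sketch, assuming the USB camera enumerates as /dev/video0 on the host (verify the node with ls /dev/video*):

```shell
# Start the l4t-ml container with GPU runtime and the USB camera mapped in.
# --device passes the V4L2 device node through to the container;
# /dev/video0 is an assumption, so check which node your camera uses first.
sudo docker run -it --runtime nvidia --device /dev/video0 \
    nvcr.io/nvidia/l4t-ml:r32.4.2-py3
```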