Hi, I have been instructed to create a Docker container to run a couple of scripts on a Jetson TX2.
I was wondering what the best Docker image to choose is for a Jetson TX2. Should I use plain Ubuntu, since the board has already been flashed, or does that not matter? I'm not too well versed with Docker, but any help is appreciated. If you need more information, just let me know.
Ideally, you could send me a Dockerfile that has Python 3.6, a virtual environment, and the same files you would get from flashing a Jetson TX2 with the SDK, such as TensorRT, CUDA, and the NVIDIA toolkit (all from JetPack 4.6.2). But if you don't want to do that because you fear I won't learn, then by all means walk me through the process slowly.
Hi @frankvanpaassen3, on Jetson it's recommended to use the l4t-base container with --runtime nvidia so that you are able to use GPU acceleration inside the container. On JetPack 4.x, CUDA/cuDNN/TensorRT get mounted into the l4t-base container (and into your derived containers built upon l4t-base) when --runtime nvidia is used to start the container.
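For example, a typical invocation looks something like this (the tag here is just an example; match it to your L4T version as described below):

sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1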
Pick an l4t-base container tag that matches your version of JetPack-L4T (you can check this with cat /etc/nv_tegra_release). For example, if you are running L4T R32.7, then use nvcr.io/nvidia/l4t-base:r32.7.1
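Concretely, that check and pull look something like this on the device (output abridged; the REVISION field is what maps to the tag):

cat /etc/nv_tegra_release
# R32 (release), REVISION: 7.1, ...
sudo docker pull nvcr.io/nvidia/l4t-base:r32.7.1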
There are also other NVIDIA-provided containers such as l4t-pytorch, l4t-tensorflow, l4t-ml, and deepstream-l4t that you may choose to use if you need those components as well.
@dusty_nv thank you, that helps a lot. With that being said though, when I choose nvcr.io/nvidia/l4t-base:r32.7.2 as my base image, it would come with TensorRT and all the other packages that are normally flashed to the Jetson TX2, right?
For L4T R32.7.2, you should just use nvcr.io/nvidia/l4t-base:r32.7.1, as R32.7.2 shares the R32.7.1 containers. But yes, if CUDA/cuDNN/TensorRT are installed on your device, then they should be in the container.
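A quick way to sanity-check that the mount is working is a one-off run like this (adjust the tag to your release):

sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1 python3 -c "import tensorrt; print(tensorrt.__version__)"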
Ok perfect. That helps a lot
@dusty_nv I have a question though: I am currently using this as my base image
FROM nvcr.io/nvidia/l4t-base:r32.7.1
And then I am copying in a main.py file which contains import tensorrt. Then the file gets run in the Docker container and I get an error. Is there anything you know of that I'm doing wrong? The error is that it cannot import tensorrt, but when I do python3 -c "import tensorrt" it all works?
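For reference, here's a minimal sketch of my Dockerfile (the /app path is just what I happen to use):

FROM nvcr.io/nvidia/l4t-base:r32.7.1
COPY main.py /app/main.py
WORKDIR /app
CMD ["python3", "main.py"]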
Also with that, if I wanted to include the PyTorch package as well, is there any specific way I could do that?
Are you running your main.py with python3, as opposed to python 2.7? It should be python3.
Let me double check - give me 2 seconds
If you want PyTorch, then base your image off of the l4t-pytorch container instead.
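For example, your FROM line would change to something like this (the exact tag depends on your L4T release and desired PyTorch version; check the l4t-pytorch page on NGC):

FROM nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3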
I'm still getting the "No module named" error for TensorRT, and I understand what you mean with the PyTorch thing as well now. Keep in mind the error happens even with CMD ["python3", "main.py"].
Just an idea: what if you put #!/usr/bin/env python3 at the top of your .py file, then set it as executable (+x) and just launch the .py file directly?
how do I set it as executable?
And how do I launch the .py file directly? Like, do you mean I use the -it flag?
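Roughly like this, assuming the script is main.py with the shebang line at the top:

chmod +x main.py    # mark the script as executable
./main.py           # the #!/usr/bin/env python3 line makes this run under python3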
@dusty_nv never mind, I understand what you mean now. However, here's the error I get:
It looks like you haven't included --runtime nvidia in your docker run command. Can you try that?
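That is, something like this (where my-image is a placeholder for whatever you tagged your build as):

sudo docker run -it --rm --runtime nvidia my-image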
@dusty_nv Thank you so much, it worked <3
OK, great! Glad you got it working :)
Hi @dusty_nv, I'm back again. I am wondering if it is possible to get the same base image but with Python 3.7 instead? I need Python 3.7 since I need detectron2, and detectron2 won't take Python 3.6.
Sorry, we build the containers (and PyTorch pip wheels) for the default Python version of the Ubuntu release, and on JetPack 4.x that's Ubuntu 18.04 with Python 3.6.