Export customized Docker image based on official container

Hello,

Is it possible to export a customized container image that supports CUDA, created by adding tools such as vim, ssh, … on top of the official docker image l4t-base r32.5.0?

I exported the image and removed the container, then imported the image again.

The CUDA test failed. Sorry, I didn’t record the details of the log.

The terminal reported that no CUDA-capable device was detected.

My test of the build flow for the customized image:

l4t-base r32.5.0 → install software in container (customized part; currently does nothing) → export docker image → remove container → import image → run container → run a simple CUDA sample → fail
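
Roughly, the commands I ran look like the following (the container and image names here are just placeholders for what I actually used):

    sudo docker run -it --name my-l4t nvcr.io/nvidia/l4t-base:r32.5.0
    # ... install vim, ssh, etc. inside the container, then exit ...
    sudo docker export my-l4t > l4t-custom.tar
    sudo docker rm my-l4t
    sudo docker import l4t-custom.tar l4t-custom:r32.5.0
    sudo docker run -it l4t-custom:r32.5.0 /bin/bash
    # run a simple CUDA sample here -> it fails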

It seems that this flow doesn’t work …

If it is possible, how should I do it?

Thank you in advance !!

Hi @LeoLiao, did you run the container with --runtime nvidia? That is necessary to use CUDA inside of it.
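
For example, something along these lines (using the same l4t-base tag you mentioned):

    sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.5.0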

Also, you may find adding your customizations in a Dockerfile that uses l4t-base as the base image is easier to manage. That’s what I do in these jetson-containers: GitHub - dusty-nv/jetson-containers: Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
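
As a rough sketch, a Dockerfile for your case could look something like this (the exact package list is just an illustration):

    FROM nvcr.io/nvidia/l4t-base:r32.5.0
    RUN apt-get update && \
        apt-get install -y --no-install-recommends vim openssh-server && \
        rm -rf /var/lib/apt/lists/*

Then build it with docker build -t <your-image-name> . and run the resulting image with --runtime nvidia as above.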


Hi @dusty_nv ,

I ran the flow again today.

The log showed “Error: Only 0 Devices available. 1 requested. Exiting” after I ran the CUDA samples copied from the host into the container.

Finally, running the samples kept failing until I added --gpus '"device=0"' and --runtime nvidia to the docker command.
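
For reference, the working command now looks roughly like this (the image name is a placeholder for my customized image):

    sudo docker run -it --runtime nvidia --gpus '"device=0"' l4t-custom:r32.5.0 /bin/bash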

Thanks for your reply.

Thanks @LeoLiao, good to know you were able to get it working. I haven’t had to use the --gpus argument on Jetson before - does --gpus all also work for you?

Hi @dusty_nv,

 "--gpus all " also works !!

Thanks for your reply !!