Is it possible to build a Dockerfile with both ROS Melodic and the NVIDIA Hello AI World inside the same container?

I am on a host machine emulating ARM64 in order to build a Dockerfile that contains both, and I run into issues that seem to be related to Python versions.
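(For context, a cross-build setup along these lines typically uses Docker buildx with QEMU binfmt emulation; the image name and tag below are only placeholders:)

```bash
# Register QEMU binfmt handlers so the x86 host can run ARM64 binaries during the build
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Cross-build the image for linux/arm64 (image name/tag are placeholders)
docker buildx build --platform linux/arm64 -t my-ros-melodic-inference:latest .
```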

I wanted to know if it is even possible to do so.

I have seen ROS in Dockerfiles, and even combined with NVIDIA images, but not with such an old ROS version.

Hi @kobrien2, the ros-*-pytorch-* container images from https://github.com/dusty-nv/jetson-containers use the jetson-inference container as a base image, and hence have Hello AI World installed into them.
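For example, pulling and running one of these prebuilt images would look roughly like the following (the tag here is only an example; pick the one that matches your JetPack/L4T release):

```bash
# Pull a prebuilt ROS + PyTorch + jetson-inference image (match the tag to your L4T version)
sudo docker pull dustynv/ros:noetic-pytorch-l4t-r32.7.1

# Run it with the NVIDIA runtime so CUDA is available inside the container
sudo docker run -it --rm --runtime nvidia --network host dustynv/ros:noetic-pytorch-l4t-r32.7.1
```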

However, I’ve not built one of these “PyTorch” container images for Melodic, because Melodic uses Python 2.7 (which is EOL) whereas PyTorch and Hello AI World are on Python >= 3.6.

If you’d consider upgrading to ROS Noetic, you could use the dustynv/ros:noetic-pytorch-l4t-r32.7.1 container image for example. Or you could attempt to build a ros:melodic-pytorch-* container by enabling it in my docker_build_ros.sh script. Or you could start with dustynv/ros:melodic-ros-base-l4t-r32.7.1 and install jetson-inference on top of it yourself.
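For that last option, a rough Dockerfile sketch could look like the one below. This is not a tested recipe: the dependency list follows the usual jetson-inference source build, it assumes the Docker default runtime is set to nvidia so CUDA is visible at build time, and the interactive model/PyTorch download steps triggered by cmake may need to be skipped or pre-answered in a non-interactive build.

```dockerfile
# Sketch only -- the base tag should match your JetPack/L4T release
FROM dustynv/ros:melodic-ros-base-l4t-r32.7.1

# Build dependencies for jetson-inference (Hello AI World)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        git cmake build-essential libpython3-dev python3-numpy && \
    rm -rf /var/lib/apt/lists/*

# Clone and build jetson-inference from source.
# Note: cmake normally launches interactive model/PyTorch download tools,
# which may need to be disabled or pre-answered for a non-interactive build.
RUN git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference /opt/jetson-inference && \
    cd /opt/jetson-inference && \
    mkdir build && cd build && \
    cmake ../ && \
    make -j$(nproc) && \
    make install && \
    ldconfig
```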

Thanks for the quick reply, that was the conclusion I came to. When I build them in individual Dockerfiles they both work fine; together they don’t.

Noetic looked like the solution, but I don’t think we will be able to switch for our current use case.
