ROS2 in nvidia-pytorch NGC container

Hi,

I’m hoping to get some help.

I’m completely stuck trying to install ROS 2 (from deb packages or from source) inside the NGC PyTorch container.

The problem I’m encountering is that the default Python in the container comes from conda, while ROS installs its Python packages system-wide. When building from source, some Python dependencies are only packaged for apt, and I can’t install them with pip into the conda environment (PyKDL, for example).
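For anyone hitting the same wall, here is a quick way to see the mismatch from inside the container (a diagnostic sketch; the paths assume an Ubuntu base with conda’s Python first on PATH, as in the NGC image):

```shell
# Which interpreter is first on PATH? In the NGC PyTorch image this is
# conda's Python, not /usr/bin/python3.
which python3
python3 -c "import sys; print(sys.prefix)"

# apt-get installs Python modules under /usr/lib/python3/dist-packages,
# a directory the conda interpreter does not search:
python3 -c "import sys; print('/usr/lib/python3/dist-packages' in sys.path)"
```

If the last line prints `False`, anything ROS pulls in via apt will be invisible to the conda Python.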

So I’m wondering if anyone has successfully gotten ros2 installed inside of the pytorch container, and what were the steps you followed?

(This is for desktop, not any of the Jetsons.)

Thanks in advance.

Hi @BrannigansLaw ,

If you get an error similar to the lines below:

Err:6 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  InRelease                                       
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC

try adding this line at the beginning of your Dockerfile; it will temporarily fix the error:

# Fundamentals
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub

Thanks for the suggestion.

I’m not having key issues. The problem is that for:

(1) Installing ROS 2 from deb packages: the Python packages are installed with apt-get and are not accessible from the conda environment, which breaks a lot of ROS.

(2) Building from source: colcon is a Python package, and rosdep uses apt-get to pull in Python dependencies. So building packages with colcon breaks (even for message types, since some dependencies don’t get installed), and pulling in dependencies with rosdep also breaks because the conda environment can’t see them.

Thanks.

@BrannigansLaw
RoboStack builds ROS for conda, and it can be installed via the conda package manager into the same virtual environment that PyTorch lives in within the base NGC PyTorch container.
https://robostack.github.io/GettingStarted.html
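For completeness, the RoboStack route looks roughly like this (a sketch based on their Getting Started page; the `robostack-staging` channel and the `ros-humble-desktop` package name are what their docs list for Humble and may differ for other distros — check the link above):

```shell
# Assumes conda is already available, as in the NGC PyTorch image.
conda config --add channels conda-forge
conda config --add channels robostack-staging
conda install -y ros-humble-desktop

# Re-activating the environment runs the ROS setup hooks installed by
# the package, after which ros2/colcon are on PATH.
```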

Thanks, but I don’t think this would work for our use case.

If this package were maintained by NVIDIA, or had an NVIDIA maintainer, that might have worked, since we already depend on your tools. We’re likely going to need to rely on this package for a long time and would rather not add too many extra dependencies.

I think we really do need to be able to work with native ROS. Building it from source is fine.

I can see on NGC that there are a number of other containers. Would you be able to point me to the container just upstream of the PyTorch one, which has all of the NVIDIA dependencies installed (cuDNN, TensorRT, etc.)? That way we could install ROS and PyTorch (as well as torch-tensorrt) ourselves.

Maybe if torch-tensorrt could also get a new wheel release (or regular releases?), we could then assemble our own images.

Thanks.

@jtichy sorry to bump this, but could you please take a look at my last question?

Would there be a good starting container that has CUDNN and everything installed but not conda yet?

Thanks.

I don’t know everything you’ll need inside of a container to run PyTorch, but you can start with the TensorRT container and build up from there. TensorRT | NVIDIA NGC
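To make that concrete, building up from the TensorRT container might look like the sketch below. The base image tag, ROS distro, and PyTorch install line are all assumptions to adapt; the apt-repository steps follow the standard ROS 2 install instructions, and since this image has no conda, apt’s and pip’s Python packages land in the same system interpreter.

```dockerfile
# Hypothetical tag -- pick the TensorRT release matching your CUDA/Ubuntu needs.
FROM nvcr.io/nvidia/tensorrt:23.04-py3

# Standard ROS 2 apt repository setup.
RUN apt-get update && apt-get install -y curl gnupg lsb-release && \
    curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
        -o /usr/share/keyrings/ros-archive-keyring.gpg && \
    echo "deb [signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] \
http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" \
        > /etc/apt/sources.list.d/ros2.list && \
    apt-get update && \
    apt-get install -y ros-humble-ros-base python3-colcon-common-extensions

# PyTorch from a wheel rather than conda (torch-tensorrt would go here too,
# once a wheel matching the container's TensorRT version is available).
RUN python3 -m pip install torch
```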