Running `sudo apt-get install nvidia-cuda-toolkit && sudo apt-get update` in the container did not solve this issue for me; I still see:
```
6.672 The following information may help to resolve the situation:
6.672
6.672 The following packages have unmet dependencies:
6.985  ros-humble-gxf-isaac-triton : Depends: libnvinfer10 but it is not installable
6.985                                Depends: libnvonnxparsers10 but it is not installable
6.985  ros-humble-isaac-ros-tensor-rt : Depends: libnvinfer-plugin10 but it is not installable
6.985                                   Depends: libnvinfer10 but it is not installable
6.985                                   Depends: libnvonnxparsers10 but it is not installable
6.985                                   Depends: ros-humble-gxf-isaac-tensor-rt but it is not going to be installed
6.992 E: Unable to correct problems, you have held broken packages.
------
```
This is despite `ros-humble-isaac-ros-tensor-rt` and `ros-humble-gxf-isaac-triton` not being in my install list at all.
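If you want to confirm the root cause before changing anything, you can ask apt directly whether it has any source for the missing libraries (a quick sketch, not from the original report; assumes you're inside the container on an Ubuntu 22.04 base):

```bash
# before the repo fix, apt has no source for these TensorRT libs, so this
# either prints "Unable to locate package" or shows "Candidate: (none)"
apt-cache policy libnvinfer10 libnvonnxparsers10 libnvinfer-plugin10
```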
The solution for me was not to install `nvidia-cuda-toolkit`, but rather to add the following to the Dockerfile:
```dockerfile
# very very dumb that we have to do this... nvidia issue! (?)
# add the arm64 repo if we're on arm, x86_64 if we're on x86_64
RUN if [ "$(uname -m)" = "aarch64" ]; then \
        echo "arm64 architecture detected, adding arm64 cuda repo"; \
        add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/ /"; \
    else \
        echo "x86_64 architecture detected, adding x86_64 cuda repo"; \
        add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"; \
    fi
```
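One caveat before copying this in (an assumption about your base image, not something from this thread): `add-apt-repository` is provided by `software-properties-common`, and apt will reject the new source without NVIDIA's repo signing key. The Isaac ROS base images appear to ship both already, but on a bare `ubuntu:22.04` base you may need something like the following first (the `cuda-keyring` package name and URL are NVIDIA's published ones for the Ubuntu 22.04 x86_64 repo; use the arm64 path on Jetson):

```dockerfile
# assumption: only needed if the base image lacks add-apt-repository
# and nvidia's apt signing key; cuda-keyring installs the key
RUN apt-get update && apt-get install -y software-properties-common wget && \
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb && \
    dpkg -i cuda-keyring_1.1-1_all.deb && rm cuda-keyring_1.1-1_all.deb
```

After the repo is added, re-running `apt-cache policy libnvinfer10` should show a real candidate version instead of none.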
I’m happy to report that the issue I was facing is now resolved. Following your steps fixed the dependency problems with nvblox after I restarted my Docker container.