Getting an error when running the Docker container

Hi, I am using a Jetson Orin Nano. A month ago I built the Isaac ROS packages and the Docker image built successfully. After reflashing my Jetson I followed all the steps in the development guide, but when I run the command
cd ${ISAAC_ROS_WS}/src/isaac_ros_common &&
./scripts/run_dev.sh -d ${ISAAC_ROS_WS}
it runs for a very long time and then gives:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh -d ${ISAAC_ROS_WS}
Launching Isaac ROS Dev container with image key aarch64.ros2_humble.realsense.user: /mnt/nova_ssd/workspaces/isaac_ros-dev/
Building aarch64.ros2_humble.realsense.user base as image: isaac_ros_dev-aarch64
Building layered image for key aarch64.ros2_humble.realsense.user as isaac_ros_dev-aarch64
Using configured docker search paths: /mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker
Checking if base image nvcr.io/isaac/ros:aarch64-ros2_humble-realsense-user_474334a38bc61614605d8bc1af11882f exists on remote registry
Checking if base image nvcr.io/isaac/ros:aarch64-ros2_humble-realsense_6c677b302ddf2d9594dd22518b05fcae exists on remote registry
Checking if base image nvcr.io/isaac/ros:aarch64-ros2_humble_5d698e0d23e98e2567b1c9b70abd0c1f exists on remote registry
Checking if base image nvcr.io/isaac/ros:aarch64_614b366df729318fe81c054b575cee53 exists on remote registry
Resolved the following 4 Dockerfiles for target image: aarch64.ros2_humble.realsense.user
/mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.user
/mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.realsense
/mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.ros2_humble
/mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.aarch64
Building /mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.aarch64 as image: aarch64-image with base:
[+] Building 3.8s (26/26) FINISHED docker:default
=> [internal] load build definition from Dockerfile.aarch64 0.0s
=> => transferring dockerfile: 8.32kB 0.0s
=> [internal] load metadata for nvcr.io/nvidia/l4t-cuda:12.2.12-devel 3.6s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [stage-0 1/22] FROM nvcr.io/nvidia/l4t-cuda:12.2.12-devel@sha256:f9c 0.0s
=> CACHED [stage-0 2/22] RUN mkdir -p /opt/nvidia/isaac_ros_dev_base && 0.0s
=> CACHED [stage-0 3/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 4/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 5/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 6/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 7/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 8/22] RUN update-alternatives --install /usr/bin/pyt 0.0s
=> CACHED [stage-0 9/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 10/22] RUN python3 -m pip install -U Cython p 0.0s
=> CACHED [stage-0 11/22] RUN update-alternatives --install /usr/bin/llv 0.0s
=> CACHED [stage-0 12/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 13/22] RUN --mount=type=cache,target=/var/cache/apt m 0.0s
=> CACHED [stage-0 14/22] RUN mkdir -p /opt/nvidia/tao && cd /opt/nvidia 0.0s
=> CACHED [stage-0 15/22] RUN python3 -m pip install --no-cache 0.0s
=> CACHED [stage-0 16/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 17/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 18/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 19/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 20/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 21/22] RUN python3 -m pip install -U jetson-stats 0.0s
=> CACHED [stage-0 22/22] RUN mkdir -p /opt/nvidia/isaac_ros_dev_base && 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:1160f0291eeebf04f89a4685a12f5294b771f7a026caa 0.0s
=> => naming to docker.io/library/… 0.0s

1 warning found (use docker --debug to expand):

  • LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 18)
Building /mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.ros2_humble as image: ros2_humble-image with base: aarch64-image
[+] Building 0.2s (27/27) FINISHED docker:default
=> [internal] load build definition from Dockerfile.ros2_humble 0.0s
=> => transferring dockerfile: 13.71kB 0.0s
=> WARN: InvalidDefaultArgInFrom: Default value for ARG $BASE_IMAGE resu 0.0s
=> WARN: LegacyKeyValueFormat: "ENV key=value" should be used instead of 0.0s
=> [internal] load metadata for docker.io/library/… 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [stage-0 1/22] FROM docker.io/library/… 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 276.05kB 0.0s
=> CACHED [stage-0 2/22] RUN mkdir -p /opt/nvidia/isaac_ros_dev_base && 0.0s
=> CACHED [stage-0 3/22] RUN locale-gen en_US en_US.UTF-8 && update-loc 0.0s
=> CACHED [stage-0 4/22] RUN echo "Warning: Using the PYTHONWARNINGS en 0.0s
=> CACHED [stage-0 5/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 6/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 7/22] RUN python3 -m pip install -U flake8-b 0.0s
=> CACHED [stage-0 8/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 9/22] COPY rosdep/extra_rosdeps.yaml /etc/ros/rosdep 0.0s
=> CACHED [stage-0 10/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 11/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 12/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 13/22] COPY patches/rclcpp-disable-tests.patch /tmp/ 0.0s
=> CACHED [stage-0 14/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 15/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 16/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 17/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 18/22] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 19/22] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> CACHED [stage-0 20/22] RUN python3 -m pip install -U paho-mqt 0.0s
=> CACHED [stage-0 21/22] RUN sudo sed -i '917i #ifdef GTEST_INTERNAL_NE 0.0s
=> CACHED [stage-0 22/22] RUN mkdir -p /opt/nvidia/isaac_ros_dev_base && 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8b95dc50f7437e4615f0487fe8ff9c120f3e7065344f6 0.0s
=> => naming to docker.io/library/… 0.0s
Building /mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.realsense as image: realsense-image with base: ros2_humble-image
[+] Building 0.2s (12/12) FINISHED docker:default
=> [internal] load build definition from Dockerfile.realsense 0.0s
=> => transferring dockerfile: 1.33kB 0.0s
=> WARN: InvalidDefaultArgInFrom: Default value for ARG ${BASE_IMAGE} re 0.0s
=> [internal] load metadata for docker.io/library/… 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/7] FROM docker.io/library/… 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 11.54kB 0.0s
=> CACHED [2/7] COPY scripts/build-librealsense.sh /opt/realsense/build- 0.0s
=> CACHED [3/7] COPY scripts/install-realsense-dependencies.sh /opt/real 0.0s
=> CACHED [4/7] RUN chmod +x /opt/realsense/install-realsense-dependenci 0.0s
=> CACHED [5/7] RUN mkdir -p /opt/realsense/ 0.0s
=> CACHED [6/7] COPY scripts/hotplug-realsense.sh /opt/realsense/hotplug 0.0s
=> CACHED [7/7] COPY udev_rules/99-realsense-libusb-custom.rules /etc/ud 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:1ab4d76fa300b323e2dc08eef6e566dc7c821b6335696 0.0s
=> => naming to docker.io/library/… 0.0s

1 warning found (use docker --debug to expand):

  • InvalidDefaultArgInFrom: Default value for ARG ${BASE_IMAGE} results in empty or invalid base image name (line 12)
Building /mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/…/docker/Dockerfile.user as image: isaac_ros_dev-aarch64 with base: realsense-image
[+] Building 0.2s (16/16) FINISHED docker:default
=> [internal] load build definition from Dockerfile.user 0.0s
=> => transferring dockerfile: 2.23kB 0.0s
=> WARN: InvalidDefaultArgInFrom: Default value for ARG ${BASE_IMAGE} re 0.0s
=> [internal] load metadata for docker.io/library/… 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.00kB 0.0s
=> [stage-0 1/11] FROM docker.io/library/… 0.0s
=> CACHED [stage-0 2/11] RUN --mount=type=cache,target=/var/cache/apt a 0.0s
=> CACHED [stage-0 3/11] RUN if [ $(getent group triton-server) ]; then 0.0s
=> CACHED [stage-0 4/11] RUN if [ ! $(getent passwd admin) ]; then 0.0s
=> CACHED [stage-0 5/11] RUN echo admin ALL=(root) NOPASSWD:ALL > /etc/ 0.0s
=> CACHED [stage-0 6/11] RUN mkdir -p /usr/local/bin/scripts 0.0s
=> CACHED [stage-0 7/11] COPY scripts/entrypoint.sh /usr/local/bin/scr 0.0s
=> CACHED [stage-0 8/11] RUN chmod +x /usr/local/bin/scripts/*.sh 0.0s
=> CACHED [stage-0 9/11] RUN mkdir -p /usr/local/share/middleware_profi 0.0s
=> CACHED [stage-0 10/11] COPY middleware_profiles/*profile.xml /usr/loc 0.0s
=> CACHED [stage-0 11/11] RUN --mount=type=cache,target=/var/cache/apt 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:7713b4f83600ff9f2f4bd0e977bd40b7c7ae2995b0a92 0.0s
=> => naming to docker.io/library/… 0.0s

1 warning found (use docker --debug to expand):

  • InvalidDefaultArgInFrom: Default value for ARG ${BASE_IMAGE} results in empty or invalid base image name (line 10)
Running isaac_ros_dev-aarch64-container
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #1: error running hook: exit status 1, stdout: , stderr: time="2024-11-07T10:36:10+05:00" level=info msg="Symlinking /mnt/nova_ssd/docker/overlay2/a3d16823b619173212686978cbf09b9a5be3651beac06171720f86b57722d33e/merged/usr/lib/aarch64-linux-gnu/nvidia/libgstnvcustomhelper.so to libgstnvcustomhelper.so.1.0.0"
time="2024-11-07T10:36:10+05:00" level=info msg="Symlinking /mnt/nova_ssd/docker/overlay2/a3d16823b619173212686978cbf09b9a5be3651beac06171720f86b57722d33e/merged/usr/lib/aarch64-linux-gnu/nvidia/libgstnvdsseimeta.so to libgstnvdsseimeta.so.1.0.0"
time="2024-11-07T10:36:10+05:00" level=error msg="failed to create link [/usr/lib/aarch64-linux-gnu/nvidia/nvidia_icd.json /etc/vulkan/icd.d/nvidia_icd.json]: failed to check if link exists: unexpected link target: /mnt/nova_ssd/docker/overlay2/a3d16823b619173212686978cbf09b9a5be3651beac06171720f86b57722d33e/merged/etc/vulkan/icd.d/nvidia_icd.json": unknown.
/mnt/nova_ssd/workspaces/isaac_ros-dev/src/isaac_ros_common

At the end it gives the error shown above. I searched for fixes and found some for older versions, but nothing for the newer versions. The warnings above were also appearing a month ago and are still appearing now. Is there any solution to all of this?

sudo chown -R your_user:your_user /mnt/nova_ssd/docker
sudo: /etc/sudo.conf is owned by uid 1000, should be 0
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
I am also getting these permission errors. From reading other people's fixes, some say you have to downgrade Docker, but it won't let me do that either:
sudo apt install docker-ce=5:27.2.1-1~ubuntu.22.04~jammy
sudo: /etc/sudo.conf is owned by uid 1000, should be 0
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
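Those sudo messages indicate that sudo itself is broken on the host: /etc/sudo.conf and /usr/bin/sudo are owned by uid 1000 instead of root, so nothing run through sudo will work until the ownership is restored. A rough sketch of the recovery, assuming a real root shell is still reachable some other way (su, a root login, or booting into recovery mode):

# run from a root shell, not via sudo (sudo is unusable at this point)
chown root:root /etc/sudo.conf /usr/bin/sudo
chmod 4755 /usr/bin/sudo    # sudo has to be setuid root
# if a recursive chown hit more of the filesystem, other files under /etc
# and /usr may need their ownership checked as well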

Hi @musmannoor2004,

Thanks for your post.
Could you please check your JetPack version? Is it JetPack 6.0?
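One quick way to check, for example, is to read the version of the nvidia-jetpack meta-package (the exact output varies a little between releases):

apt-cache show nvidia-jetpack | grep -i version
# or check the L4T release string:
cat /etc/nv_tegra_release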

My JetPack version is 6.1.

Hi @musmannoor2004,

Isaac ROS 3.1 is only compatible with JetPack 6.0 at this time.
Please make sure you meet the requirements in Getting Started — isaac_ros_docs documentation and give it a try. Thank you.

Might be unrelated, but I found this thread as the only Google hit after running into a somewhat similar error from docker build:
Failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #1: error running hook: exit status 1, stdout: , stderr: time="2024-11-07T16:04:58+02:00" level=error msg="failed to create link [/usr/lib/aarch64-linux-gnu/tegra/nvidia_icd.json /etc/vulkan/icd.d/nvidia_icd.json]: failed to check if link exists: unexpected link target: ...

This happened just after I had upgraded the nvidia-container-toolkit package from 1.16.2-1 to 1.17.0-1. Downgrading it back cured the issue. I haven't looked deeper into this yet.
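In case it helps someone else hitting the same hook error, a rough sketch of the downgrade (these are the standard NVIDIA container toolkit package names on Ubuntu; adjust the pinned version to whatever was last working for you, 1.16.2-1 in my case):

# see what is currently installed/available
dpkg -l | grep nvidia-container
apt-cache policy nvidia-container-toolkit
# pin back to the known-good release
sudo apt-get install --allow-downgrades \
    nvidia-container-toolkit=1.16.2-1 \
    nvidia-container-toolkit-base=1.16.2-1 \
    libnvidia-container-tools=1.16.2-1 \
    libnvidia-container1=1.16.2-1
# hold the packages so a later apt upgrade doesn't pull 1.17.x back in
sudo apt-mark hold nvidia-container-toolkit nvidia-container-toolkit-base \
    libnvidia-container-tools libnvidia-container1
sudo systemctl restart docker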

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.