Isaac ROS 3.2 Dockerfile for ZED SDK & zed-ros2-wrapper

Hi,
The Isaac ROS 3.2 documentation has instructions for setting up a ZED camera as a user install inside the container. I was wondering how to add it to a Dockerfile instead.

My current attempt is the following Dockerfile.zed under isaac_ros_common/docker/:

ARG BASE_IMAGE
FROM ${BASE_IMAGE} AS catscanners

# disable terminal interaction for apt
ENV DEBIAN_FRONTEND=noninteractive
ENV SHELL=/bin/bash
SHELL ["/bin/bash", "-c"]

# Download dependencies for zed SDK
RUN apt-get update && apt-get install -y --no-install-recommends \
    lsb-release \
    wget \
    less \
    zstd \
    udev \
    sudo \
    apt-transport-https

# ZED SDK itself
RUN --mount=type=cache,target=/var/cache/apt \
    wget -q --no-check-certificate -O ZED_SDK_Linux.run https://stereolabs.sfo2.cdn.digitaloceanspaces.com/zedsdk/4.2/ZED_SDK_Tegra_L4T36.4_v4.2.2.zstd.run \
    && chmod 777 ./ZED_SDK_Linux.run \
    && ./ZED_SDK_Linux.run silent skip_od_module skip_python skip_drivers \
    && ln -sf /usr/lib/aarch64-linux-gnu/tegra/libv4l2.so.0 /usr/lib/aarch64-linux-gnu/libv4l2.so \
    && rm -rf /usr/local/zed/resources/* \
    && rm -rf ZED_SDK_Linux.run \
    && rm -rf /var/lib/apt/lists/*

# Does not work
# zed-ros2-wrapper
# RUN --mount=type=cache,target=/var/cache/apt \
#     mkdir -p ${ROS_ROOT}/src && cd ${ROS_ROOT}/src \
#     && git clone --recurse-submodules https://github.com/stereolabs/zed-ros2-wrapper && cd .. \
#     && apt update \
#     && rosdep install --from-paths src/zed-ros2-wrapper --ignore-src -r -y \
#     && colcon build --symlink-install --packages-above zed_wrapper

After entering the container with cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh -i "ros2_humble.zed", ZED Explorer works, but only after a chown -hR admin /usr/local/zed.
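
One idea I might try to avoid the manual chown (an untested sketch on my part): relax permissions on the SDK tree at the end of the SDK install layer, so the non-root user that run_dev.sh creates later can still use it:

# sketch: make the SDK tree usable by the non-root dev user created later
RUN chmod -R a+rwX /usr/local/zed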

For a persistent build of the wrapper, I can run this in the container:

cd ${ISAAC_ROS_WS} && \
sudo apt update && \
rosdep update && rosdep install --from-paths src/zed-ros2-wrapper --ignore-src -r -y && \
colcon build --symlink-install --packages-up-to zed_wrapper
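
As I understand it, this persists because run_dev.sh bind-mounts ${ISAAC_ROS_WS} from the host, so the colcon output lands on the host filesystem:

ls ${ISAAC_ROS_WS}
# build/  install/  log/  src/   <- all on the host, so they survive new containers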

After which, all new containers started via the run_dev script let me launch the ZED node, but only after:
chown -hR admin /usr/local/zed && source $(pwd)/install/local_setup.bash

Is there a way to include that wrapper build step in the Dockerfile itself, or am I going about this the wrong way? I noticed Isaac ROS 3.1 had a user-install Dockerfile for the SDK, but the wrapper build was still handled inside the container. Stereolabs does have a standalone zed-ros2-wrapper Dockerfile (unrelated to Isaac ROS) in which they run colcon build. Thanks in advance.

Edit: I just tried following these steps again on an Orin NX with a (mostly) fresh isaac_ros_common, and it appears the colcon build is no longer persistent, but I can launch ZED_Explorer without the chown & source. Not sure what the difference is…

Some further developments: I modified the Dockerfile as follows:

ARG BASE_IMAGE
FROM ${BASE_IMAGE} AS catscanners

# disable terminal interaction for apt
ENV DEBIAN_FRONTEND=noninteractive
ENV SHELL=/bin/bash
SHELL ["/bin/bash", "-c"]

# Download dependencies for zed SDK
RUN apt-get update && apt-get install -y --no-install-recommends \
    lsb-release \
    wget \
    less \
    zstd \
    udev \
    sudo \
    apt-transport-https

# isaac_ros_yolov8 deps
RUN --mount=type=cache,target=/var/cache/apt \
    apt-get update && apt-get install -y \
    ros-humble-isaac-ros-yolov8 \
    ros-humble-isaac-ros-dnn-image-encoder \
    ros-humble-isaac-ros-tensor-rt \
    ros-humble-isaac-ros-examples \
    ros-humble-isaac-ros-stereo-image-proc \
    ros-humble-isaac-ros-zed

# ZED SDK 
RUN --mount=type=cache,target=/var/cache/apt \
    wget -q --no-check-certificate -O ZED_SDK_Linux.run https://stereolabs.sfo2.cdn.digitaloceanspaces.com/zedsdk/4.2/ZED_SDK_Tegra_L4T36.4_v4.2.2.zstd.run \
    && chmod 777 ./ZED_SDK_Linux.run \
    && ./ZED_SDK_Linux.run silent skip_od_module skip_python skip_drivers \
    && ln -sf /usr/lib/aarch64-linux-gnu/tegra/libv4l2.so.0 /usr/lib/aarch64-linux-gnu/libv4l2.so \
    && rm -rf /usr/local/zed/resources/* \
    && rm -rf ZED_SDK_Linux.run \
    && rm -rf /var/lib/apt/lists/*

RUN --mount=type=cache,target=/var/cache/apt \
    mkdir -p ${ROS_ROOT}/src && cd ${ROS_ROOT}/src \
    && git clone --recurse-submodules https://github.com/stereolabs/zed-ros2-wrapper && cd .. \
    && apt-get update \
    && rosdep install --from-paths src/zed-ros2-wrapper --ignore-src -r -y

Entering the container:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh -i "ros2_humble.zed"

First-time build only (seems persistent?):

cd ${ISAAC_ROS_WS} && \
colcon build --symlink-install --packages-up-to zed_wrapper

Any new container afterwards:

source $(pwd)/install/local_setup.bash

Launching the ZED node:

ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed

It took about 11 minutes for “optimizing AI model”, which happens the first time the node is launched in each new container. Perhaps straying off-topic from my original question, but running the Isaac YOLOv8 example instead seems to skip that step. I'm not sure how to either make that optimization persistent in the image or skip it like the example does:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py \
    launch_fragments:=zed_mono_rect,yolov8 \
    model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8/yolov8s.onnx \
    engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8/yolov8s.plan \
    interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_yolov8/zed2_quickstart_interface_specs.json

First, I wouldn't suggest building the package into the image if you're using it as a development container rather than deploying it.

Just build it while you're in the container; the workspace is already mounted into the container, so the build will persist even after you close the container.

As you mentioned, they had a Dockerfile for the ZED SDK in the 3.1 release; I use that too.

For the YOLOv8 model to be persistent, find the directory where the ZED SDK stores the optimized model and mount it into the Docker container in the run_dev.sh script.

In the Isaac ROS example it is persistent because the model lives under ISAAC_ROS_WS, which is mounted into the container.
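
For example, run_dev.sh accepts extra docker run arguments through its -a flag, so something along these lines should work:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh -i "ros2_humble.zed" \
    -a "-v /usr/local/zed/resources/:/usr/local/zed/resources/"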

You can follow this calibration-file post and do the same thing to make the AI model persistent:
How can I use the ZED with Docker on a robot with no internet connection? – Help Center | Stereolabs


Actually, AI models are also covered in that link:

Robots and AI modules

The same problem occurs when the robot uses an AI module of the ZED SDK.

The solution is to add a volume for the folder /usr/local/zed/resources/:

-v /usr/local/zed/resources/:/usr/local/zed/resources/

Example:

docker run --runtime nvidia -it --privileged \
  -v /usr/local/zed/resources/:/usr/local/zed/resources/ \
  <docker_image> sh

Thanks for the response. I forgot to update this post after I resolved the issue following similar instructions in the zed-ros2-wrapper README.

As for building in the image, the target directory $ROS_ROOT was a total oversight on my part, since it points to /opt/ros/humble rather than a separate workspace. I created a separate workspace directory in Dockerfile.zed as follows:

# following ZED SDK install as above

ENV ZED_ROOT=/zed_ws

# zed-ros2-wrapper
RUN --mount=type=cache,target=/var/cache/apt \
    mkdir -p ${ZED_ROOT}/src && cd ${ZED_ROOT}/src \
    && git clone --recurse-submodules https://github.com/stereolabs/zed-ros2-wrapper && cd .. \
    && apt update \
    && rosdep install --from-paths src/zed-ros2-wrapper --ignore-src -r -y

RUN cd ${ZED_ROOT} && source /opt/ros/${ROS_DISTRO}/setup.bash && colcon build --symlink-install --cmake-args=-DCMAKE_BUILD_TYPE=Release --parallel-workers $(nproc)

RUN echo "source ${ZED_ROOT}/install/local_setup.sh" | sudo tee --append /etc/bash.bashrc 

As for running the Isaac container & keeping persistent ZED AI model:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh -i "ros2_humble.zed" -a "-v /usr/local/zed/resources/:/usr/local/zed/resources/"

The bind-mounted directory came out read-only in the container at first; I adjusted it with chown on the host. One remaining unknown is the ownership/permissions of the ZED SDK when it's installed via the Dockerfile: on my Orin Nano the entrypoint needs sudo chown -hR admin /usr/local/zed, while it works from the get-go on another Orin (NX) with a seemingly identical setup.
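
For reference, the host-side fix was along these lines (a sketch; adjust the user and path to your setup):

# on the host, before starting the container
sudo chown -R $USER /usr/local/zed/resources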

References for anyone stumbling across this in the future:

Hi @mitjarislakki

Thank you for your detailed posts and solution.

You can use the deploy script docker_deploy.sh to build your Docker image and package it in a standalone container.

I've marked your last post as the solution to your question.

Best,
Raffaello
