Hi,
Isaac ROS 3.2 documentation has instructions to set up a ZED camera as a user install in the container. I was wondering how to add it to a Dockerfile.
My current attempt is creating the following Dockerfile.zed under isaac_ros_common/docker/:
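Roughly the following (sketch only; the SDK download URL/version and silent-install flags are placeholders based on the Stereolabs install docs, so pick the installer matching your JetPack/L4T release):

```dockerfile
# Dockerfile.zed -- sketch; the SDK URL/version below is a placeholder,
# use the one matching your JetPack/L4T release
ARG BASE_IMAGE
FROM ${BASE_IMAGE}

# ZED SDK silent install, skipping the GUI tools
RUN apt-get update && apt-get install -y --no-install-recommends \
        lsb-release wget sudo zstd udev \
    && wget -q -O /tmp/zed_sdk.run https://download.stereolabs.com/zedsdk/4.2/l4t36.4/jetsons \
    && chmod +x /tmp/zed_sdk.run \
    && /tmp/zed_sdk.run -- silent skip_tools \
    && rm /tmp/zed_sdk.run \
    && rm -rf /var/lib/apt/lists/*
```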
After entering the container with `cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh -i "ros2_humble.zed"`, ZED Explorer works, but only after a `chown -hR admin /usr/local/zed`.
For a persistent build of the wrapper, I can run this in the container:
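Roughly the following, along the lines of the zed-ros2-wrapper README (sketch; exact branch/flags may differ):

```bash
# inside the container: build the wrapper into the mounted workspace so it survives container restarts
cd ${ISAAC_ROS_WS}
git clone --recurse-submodules https://github.com/stereolabs/zed-ros2-wrapper src/zed-ros2-wrapper
sudo apt update && rosdep update
rosdep install --from-paths src/zed-ros2-wrapper --ignore-src -r -y
colcon build --symlink-install --cmake-args=-DCMAKE_BUILD_TYPE=Release
source install/local_setup.bash
```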
After which, all new containers via the run_dev script allow me to launch the zed node, but only after `chown -hR admin /usr/local/zed && source $(pwd)/install/local_setup.bash`.
Is there a way to include that wrapper build step in the Dockerfile itself, or am I going about this the wrong way? I noticed Isaac ROS 3.1 had a user-install Dockerfile for the SDK, but the wrapper build was still handled inside the container. Stereolabs does have an unrelated (non-Isaac) zed-ros2-wrapper Dockerfile where they run colcon build as part of the image build. Thanks in advance.
Edit: Just tried following these steps again on Orin NX with a (mostly) fresh isaac_ros_common and it appears now the colcon build isn’t persistent, but I can launch ZED_Explorer without the chown & source. Not sure what the difference is…
It took about 11 minutes for "optimizing AI model", which happens every time the node is first launched in a new container. Perhaps straying off-topic from my original question, but running the Isaac YOLOv8 example instead seems to skip that step. I'm not sure how to either make that persistent in the image or skip it like the example does.
Firstly, I wouldn't suggest building the package in the image if you're using it as a development container rather than deploying it.
Just build it once you're inside the container; the workspace is already mounted into the container, so the build will persist even after you close the container.
As you mentioned, they had a Dockerfile for the ZED SDK with the 3.1 release; I use that as well.
For the YOLOv8 model to be persistent, find the directory in the ZED files where the optimized model is stored and mount it into the Docker container in the run_dev.sh script.
In the Isaac ROS example it is persistent because the model lives under ISAAC_ROS_WS, which is mounted into the container.
Thanks for the response. I forgot to update this post after I resolved the issue following similar instructions in the zed-ros2-wrapper README.
As for building in the image, the target directory $ROS_ROOT was a total oversight since it points to /opt/ros/humble instead of a separate workspace. I created a separate directory in the Dockerfile.zed as follows:
```dockerfile
# following ZED SDK install as above
ENV ZED_ROOT=/zed_ws

# zed-ros2-wrapper
RUN --mount=type=cache,target=/var/cache/apt \
    mkdir -p ${ZED_ROOT}/src && cd ${ZED_ROOT}/src \
    && git clone --recurse-submodules https://github.com/stereolabs/zed-ros2-wrapper && cd .. \
    && apt update \
    && rosdep install --from-paths src/zed-ros2-wrapper --ignore-src -r -y

RUN cd ${ZED_ROOT} && source /opt/ros/${ROS_DISTRO}/setup.bash \
    && colcon build --symlink-install --cmake-args=-DCMAKE_BUILD_TYPE=Release --parallel-workers $(nproc)

RUN echo "source ${ZED_ROOT}/install/local_setup.sh" | sudo tee --append /etc/bash.bashrc
```
As for running the Isaac container and keeping the ZED AI model persistent:
```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh -i "ros2_humble.zed" -a "-v /usr/local/zed/resources/:/usr/local/zed/resources/"
```
The bind-mounted directory came out read-only inside the container at first, so I adjusted it with chown on the host. One remaining unknown is the ownership/permissions of the ZED SDK when installed via the Dockerfile: on my Orin Nano the entrypoint needs `sudo chown -hR admin /usr/local/zed`, while it's fine from the get-go on another Orin (NX) with seemingly the exact same setup.
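For the read-only bind mount, the host-side fix was roughly the following (assuming run_dev.sh maps the container user to your host user, which it appears to do):

```bash
# on the host, before starting the container: make the bind-mounted
# resources directory writable by the mapped container user
sudo chown -R $USER:$USER /usr/local/zed/resources/
```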
References for anyone stumbling across this in the future: