Ros2_jetson repo: Unable to run ROS2 example app inside container built from repo Dockerfile

I’ve built a Docker container using this Dockerfile from the ros2_jetson repo:

I then follow steps here (to run an example) inside the container:

  1. cd /workspace/ros2_ws/src/ros2_deepstream/
  2. colcon build
  3. source /opt/ros/eloquent/setup.bash
  4. . install/setup.bash
  5. ros2 run single_stream_pkg single_stream --ros-args -p input_source:="/dev/video0"

Steps 1-4 run fine, but when running step 5 I receive this error:

** (process:139): WARNING **: 18:44:29.567: Failed to load shared library 'libgstreamer-1.0.so.0' referenced by the typelib: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0)
Traceback (most recent call last):
  File "/workspace/ros2_ws/src/ros2_deepstream/install/single_stream_pkg/lib/single_stream_pkg/single_stream", line 11, in <module>
    load_entry_point('single-stream-pkg==0.0.0', 'console_scripts', 'single_stream')()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 480, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2693, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2324, in load
    return self.resolve()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2330, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/workspace/ros2_ws/src/ros2_deepstream/install/single_stream_pkg/lib/python3.6/site-packages/single_stream_pkg/single_stream.py", line 24, in <module>
    from single_stream_pkg.single_stream_class import InferencePublisher
  File "/workspace/ros2_ws/src/ros2_deepstream/install/single_stream_pkg/lib/python3.6/site-packages/single_stream_pkg/single_stream_class.py", line 39, in <module>
    from gi.repository import GObject, Gst
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
  File "/usr/lib/python3/dist-packages/gi/importer.py", line 146, in load_module
    dynamic_module = load_overrides(introspection_module)
  File "/usr/lib/python3/dist-packages/gi/overrides/__init__.py", line 125, in load_overrides
    override_mod = importlib.import_module(override_package_name)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/usr/lib/python3/dist-packages/gi/overrides/Gst.py", line 58, in <module>
    class Bin(Gst.Bin):
  File "/usr/lib/python3/dist-packages/gi/module.py", line 181, in __getattr__
    interfaces = tuple(interface for interface in get_interfaces_for_object(info)
  File "/usr/lib/python3/dist-packages/gi/module.py", line 105, in get_interfaces_for_object
    interfaces.append(getattr(module, name))
  File "/usr/lib/python3/dist-packages/gi/overrides/__init__.py", line 39, in __getattr__
    return getattr(self.introspection_module, name)
  File "/usr/lib/python3/dist-packages/gi/module.py", line 220, in __getattr__
    wrapper = metaclass(name, bases, dict_)
  File "/usr/lib/python3/dist-packages/gi/types.py", line 234, in __init__
    register_interface_info(cls.__info__.get_g_type())
TypeError: must be an interface

The shared object file in question (libgstreamer-1.0.so.0) is located here (inside the container):
root@ubuntu:/# find . -name libgstreamer-1.0.so.0
./usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
./usr/lib/aarch64-linux-gnu/tegra/libgstreamer-1.0.so.0

So I tried:

  • export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0:$LD_LIBRARY_PATH
  • (receive same error as above)
  • export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra/libgstreamer-1.0.so.0:$LD_LIBRARY_PATH
  • (receive same error as above)
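A side note on those exports: LD_LIBRARY_PATH takes a colon-separated list of directories, not paths to individual .so files, so appending the library path itself has no effect on the loader. A minimal sketch of deriving the intended entry:

```python
import os

# The dynamic loader searches *directories* on LD_LIBRARY_PATH; a path
# to the .so file itself is silently ignored. Use the parent directory:
lib = "/usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0"
lib_dir = os.path.dirname(lib)
print(lib_dir)  # -> /usr/lib/aarch64-linux-gnu
```

i.e. export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu:$LD_LIBRARY_PATH — though as the error above hints, the root cause here is a GLIBC version mismatch rather than the search path.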

I’m not sure where to go from here. It seems the container is not built correctly to run code from the ros2_deepstream repo, even though that appears to be the intended purpose of these containers?

Hi @coreyslick, from what I can tell from searching for this, it seems related to Ubuntu 20.04 using a newer version of GLIBC than Ubuntu 18.04. Are you on JetPack 4.x or JetPack 5.x?
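One quick way to confirm the mismatch from inside the container is to check which C library the Python interpreter is linked against (a sketch; the error above requires GLIBC_2.29 or newer, so anything older confirms a base-image mismatch):

```python
import platform

# Report the C library name and version of the running interpreter,
# e.g. "glibc 2.27" on Ubuntu 18.04 vs "glibc 2.31" on Ubuntu 20.04.
name, version = platform.libc_ver()
print(name, version)
```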

I think you may need to adjust the ARG BASE_IMAGE=nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples line of the Dockerfile to use a deepstream-l4t container image that’s compatible with your version of JetPack.

Note that I don’t believe this ros2_deepstream container has been maintained recently and as such you may run into some issues using it with updated versions of JetPack.

Hi @dusty_nv,

Thanks for your response. I forget sometimes that Docker containers are somewhat dependent on the host OS, unlike virtual machines.

I’m using JetPack 5.0.2 - I’ll try changing to ARG BASE_IMAGE=nvcr.io/nvidia/deepstream-l4t:6.1.1-samples, as I’ve been able to use that container on the Xavier NX for other purposes.

As an alternative, are you aware of a more up-to-date Docker container (either pre-built or a Dockerfile) that contains both DeepStream and ROS? I’d really like to be able to run the examples from the ros2_deepstream repo.

Thanks.

An alternative might be to change the base image used in the ROS2 containers from my jetson-containers repo. Those don’t contain the ros2_deepstream package, but I do keep those Dockerfiles updated to build ROS2 Foxy/Galactic/Humble. BTW the base container for those gets set here: https://github.com/dusty-nv/jetson-containers/blob/eb2307d40f0884d66310e9ac34633a4c5ef2e083/scripts/docker_build_ros.sh#L170

You may also be interested to check out the packages from Isaac ROS to see if any of those contain similar functionality to what you are trying to achieve.

I changed your Dockerfile.ros.humble.
I commented out the ARG BASE_IMAGE line and added:
FROM nvcr.io/nvidia/deepstream-l4t:6.1.1-samples

Then I ran:
./scripts/docker_build_ros.sh --distro humble --package ros_base

This builds up to the point of the OpenCV install and then fails with an error:

ImportError: libavcodec.so.58: cannot open shared object file: No such file or directory

Is the base image that I used possibly just a poor choice for compatibility? I need a container that also has DeepStream installed, so I figured that would be the easiest way, but maybe not…
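As a quick diagnostic, one can check whether the dynamic loader can actually resolve the library OpenCV is complaining about inside the container (a sketch, independent of OpenCV itself):

```python
import ctypes

# Try to dlopen the library from the ImportError; on failure the
# OSError message shows exactly what the dynamic loader is missing.
try:
    ctypes.CDLL("libavcodec.so.58")
    print("libavcodec.so.58: found")
except OSError as err:
    print("libavcodec.so.58: not loadable ->", err)
```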

Hmm, it would seem that the deepstream-l4t container doesn’t have the FFMPEG libraries installed, whereas the previous base image I was using (l4t-jetpack) does. Could you try adding this command to Dockerfile.ros.humble:

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        libavcodec58 \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

I added that command right before the OpenCV installation command, but unfortunately I got the same libavcodec.so.58 error as before…

Can you copy & paste the error stack that you’re getting? I will also try building it here against nvcr.io/nvidia/deepstream-l4t:6.1.1-samples base image.

Great, thank you…

This captures a little more than the error stack for some context:

Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
+ rm -rf /var/lib/apt/lists/auxfiles /var/lib/apt/lists/lock /var/lib/apt/lists/packages.ros.org_ros2_ubuntu_dists_focal_InRelease /var/lib/apt/lists/packages.ros.org_ros2_ubuntu_dists_focal_main_binary-arm64_Packages.lz4 /var/lib/apt/lists/partial /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-backports_InRelease /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-backports_main_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-backports_universe_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal_InRelease /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal_main_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal_multiverse_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal_restricted_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-security_InRelease /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-security_main_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-security_multiverse_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-security_restricted_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-security_universe_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal_universe_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-updates_InRelease /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-updates_main_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-updates_multiverse_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-updates_restricted_binary-arm64_Packages.lz4 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_focal-updates_universe_binary-arm64_Packages.lz4 /var/lib/apt/lists/repo.download.nvidia.com_jetson_common_dists_r35.1_InRelease /var/lib/apt/lists/repo.download.nvidia.com_jetson_common_dists_r35.1_main_binary-arm64_Packages.lz4
+ apt-get clean
+ cd ../
+ rm -rf opencv
++ python3 -c 'import sys; version=sys.version_info[:3]; print("{0}.{1}".format(*version))'
+ PYTHON3_VERSION=3.8
+ '[' aarch64 = aarch64 ']'
+ local_include_path=/usr/local/include/opencv4
+ local_python_path=/usr/local/lib/python3.8/dist-packages/cv2
+ '[' -d /usr/local/include/opencv4 ']'
+ '[' -d /usr/local/lib/python3.8/dist-packages/cv2 ']'
+ ln -s /usr/include/opencv4 /usr/local/include/opencv4
+ ln -s /usr/lib/python3.8/dist-packages/cv2 /usr/local/lib/python3.8/dist-packages/cv2
+ echo 'testing cv2 module under python...'
testing cv2 module under python...
+ python3 -c 'import cv2; print('\''OpenCV version:'\'', str(cv2.__version__)); print(cv2.getBuildInformation())'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/cv2/__init__.py", line 96, in <module>
    bootstrap()
  File "/usr/local/lib/python3.8/dist-packages/cv2/__init__.py", line 86, in bootstrap
    import cv2
ImportError: libavcodec.so.58: cannot open shared object file: No such file or directory
The command '/bin/bash -c cd /tmp && ./opencv_install.sh ${OPENCV_URL} ${OPENCV_DEB}' returned a non-zero code: 1

OK, thanks - what I found was that the deepstream-l4t container thought those FFMPEG packages were already installed, but the libraries were missing (presumably they got dropped during a multi-stage build or something like that). So first I had to remove those packages from apt and then re-install them, and then it worked:

RUN apt-get update && \
	apt-get purge -y libavcodec58 libavutil56 && \
	apt-get install -y --no-install-recommends libavcodec58 libavutil56 && \
	rm -rf /var/lib/apt/lists/* && \
	apt-get clean
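For anyone hitting the same "package installed but libraries missing" state, here is a hedged sketch of how to spot it; the helper name and sample paths are illustrative, not from the thread. It takes the output of `dpkg -L <package>` and returns files dpkg records that aren't actually on disk:

```python
import os

def missing_files(dpkg_listing, exists=os.path.exists):
    """Given the stdout of `dpkg -L <pkg>`, return the paths dpkg
    records for the package that are absent from the filesystem."""
    return [p for p in dpkg_listing.splitlines()
            if p.startswith("/") and not exists(p)]

# Illustrative listing (the real one comes from `dpkg -L libavcodec58`);
# the lambda fakes a filesystem where only "/." exists.
listing = "/.\n/usr/lib/aarch64-linux-gnu/libavcodec.so.58\n"
print(missing_files(listing, exists=lambda p: p == "/."))
# -> ['/usr/lib/aarch64-linux-gnu/libavcodec.so.58']
```

A non-empty result is exactly the "dpkg thinks it's installed, but the files were dropped" situation that the purge-and-reinstall above works around.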

Interesting. I’ve seen something like that happen before. Definitely not obvious what the issue is when it happens.

It looks to have gotten past the OpenCV install this time.

However, it then failed when trying to build ROS from source. The error stack is here:

Finished <<< sensor_msgs [10min 25s]
Starting >>> cv_bridge
Starting >>> stereo_msgs
Starting >>> image_geometry
Starting >>> visualization_msgs
--- stderr: image_geometry
CMake Error at /usr/share/cmake-3.24/Modules/FindCUDA.cmake:859 (message):
Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
/usr/lib/cmake/opencv4/OpenCVConfig.cmake:86 (find_package)
/usr/lib/cmake/opencv4/OpenCVConfig.cmake:108 (find_host_package)
CMakeLists.txt:18 (find_package)


Failed <<< image_geometry [12.4s, exited with code 1]
Aborted <<< stereo_msgs [21.5s]
Aborted <<< cv_bridge [22.4s]
Aborted <<< visualization_msgs [22.7s]
Aborted <<< rcl [8min 5s]

Summary: 155 packages finished [1h 8min 50s]
1 package failed: image_geometry
4 packages aborted: cv_bridge rcl stereo_msgs visualization_msgs
37 packages had stderr output: ament_copyright ament_cppcheck ament_cpplint ament_flake8 ament_index_python ament_lint ament_lint_cmake ament_mypy ament_package ament_pep257 ament_pycodestyle ament_uncrustify ament_xmllint console_bridge_vendor cv_bridge domain_coordinator foonathan_memory_vendor google_benchmark_vendor iceoryx_posh image_geometry launch launch_testing launch_xml launch_yaml libyaml_vendor mimick_vendor orocos_kdl_vendor osrf_pycommon osrf_testing_tools_cpp rmw_connextdds_common rosidl_cli rosidl_runtime_py rpyutils rti_connext_dds_cmake_module shared_queues_vendor uncrustify_vendor zstd_vendor
73 packages not processed
The command '/bin/bash -c mkdir -p ${ROS_ROOT}/src && cd ${ROS_ROOT} && rosinstall_generator --deps --rosdistro ${ROS_DISTRO} ${ROS_PKG} launch_xml launch_yaml launch_testing launch_testing_ament_cmake demo_nodes_cpp demo_nodes_py example_interfaces camera_calibration_parsers camera_info_manager cv_bridge v4l2_camera vision_opencv vision_msgs image_geometry image_pipeline image_transport compressed_image_transport compressed_depth_image_transport > ros2.${ROS_DISTRO}.${ROS_PKG}.rosinstall && cat ros2.${ROS_DISTRO}.${ROS_PKG}.rosinstall && vcs import src < ros2.${ROS_DISTRO}.${ROS_PKG}.rosinstall && rm -r ${ROS_ROOT}/src/ament_cmake && git -C ${ROS_ROOT}/src/ clone https://github.com/ament/ament_cmake -b ${ROS_DISTRO} && apt-get update && cd ${ROS_ROOT} && rosdep init && rosdep update && rosdep install -y --ignore-src --from-paths src --rosdistro ${ROS_DISTRO} --skip-keys "libopencv-dev libopencv-contrib-dev libopencv-imgproc-dev python-opencv python3-opencv" && rm -rf /var/lib/apt/lists/* && apt-get clean && colcon build --merge-install --cmake-args -DCMAKE_BUILD_TYPE=Release && rm -rf ${ROS_ROOT}/src && rm -rf ${ROS_ROOT}/logs && rm -rf ${ROS_ROOT}/build && rm ${ROS_ROOT}/*.rosinstall' returned a non-zero code: 1

OK, it appears that deepstream-l4t container doesn’t have the full version of CUDA Toolkit installed (presumably to reduce the container size), so I’m now trying to install that before building ROS in the Dockerfile:

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
		  cuda-toolkit-11-4 \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

OK, with the addition of that change, it was able to build the Humble-deepstream container successfully here - were you able to get it built as well?

The build failed here on my end. Here is the error stack (and the Dockerfile is attached):

[Processing: compressed_image_transport, depth_image_proc, robot_state_publisher, rosbag2_transport, stereo_image_proc, v4l2_camera]
[Processing: compressed_image_transport, depth_image_proc, robot_state_publisher, rosbag2_transport, stereo_image_proc, v4l2_camera]
[Processing: compressed_image_transport, depth_image_proc, robot_state_publisher, rosbag2_transport, stereo_image_proc, v4l2_camera]
--- stderr: rosbag2_transport
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [CMakeFiles/test_play_services__rmw_cyclonedds_cpp.dir/build.make:76: CMakeFiles/test_play_services__rmw_cyclonedds_cpp.dir/test/rosbag2_transport/test_play_services.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:450: CMakeFiles/test_play_services__rmw_cyclonedds_cpp.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs…
make: *** [Makefile:146: all] Error 2

Failed <<< rosbag2_transport [1h 19min 57s, exited with code 2]
Aborted <<< v4l2_camera [1h 12min 47s]
Aborted <<< compressed_image_transport [1h 12min 59s]
Aborted <<< robot_state_publisher [1h 18min 42s]
Aborted <<< stereo_image_proc [1h 15min 30s]
Aborted <<< depth_image_proc [1h 19min 21s]

Summary: 221 packages finished [3h 35min 15s]
1 package failed: rosbag2_transport
5 packages aborted: compressed_image_transport depth_image_proc robot_state_publisher stereo_image_proc v4l2_camera
59 packages had stderr output: ament_copyright ament_cppcheck ament_cpplint ament_flake8 ament_index_python ament_lint ament_lint_cmake ament_mypy ament_package ament_pep257 ament_pycodestyle ament_uncrustify ament_xmllint camera_calibration console_bridge_vendor demo_nodes_py domain_coordinator foonathan_memory_vendor google_benchmark_vendor iceoryx_posh launch launch_ros launch_testing launch_testing_ros launch_xml launch_yaml libyaml_vendor mimick_vendor orocos_kdl_vendor osrf_pycommon osrf_testing_tools_cpp rmw_connextdds_common ros2action ros2cli ros2component ros2doctor ros2interface ros2launch ros2lifecycle ros2multicast ros2node ros2param ros2pkg ros2run ros2service ros2test ros2topic rosbag2_transport rosidl_cli rosidl_runtime_py rpyutils rti_connext_dds_cmake_module shared_queues_vendor sros2 stereo_image_proc tf2_ros_py tf2_tools uncrustify_vendor zstd_vendor
6 packages not processed
The command '/bin/bash -c mkdir -p ${ROS_ROOT}/src && cd ${ROS_ROOT} && rosinstall_generator --deps --rosdistro ${ROS_DISTRO} ${ROS_PKG} launch_xml launch_yaml launch_testing launch_testing_ament_cmake demo_nodes_cpp demo_nodes_py example_interfaces camera_calibration_parsers camera_info_manager cv_bridge v4l2_camera vision_opencv vision_msgs image_geometry image_pipeline image_transport compressed_image_transport compressed_depth_image_transport > ros2.${ROS_DISTRO}.${ROS_PKG}.rosinstall && cat ros2.${ROS_DISTRO}.${ROS_PKG}.rosinstall && vcs import src < ros2.${ROS_DISTRO}.${ROS_PKG}.rosinstall && rm -r ${ROS_ROOT}/src/ament_cmake && git -C ${ROS_ROOT}/src/ clone https://github.com/ament/ament_cmake -b ${ROS_DISTRO} && apt-get update && cd ${ROS_ROOT} && rosdep init && rosdep update && rosdep install -y --ignore-src --from-paths src --rosdistro ${ROS_DISTRO} --skip-keys "libopencv-dev libopencv-contrib-dev libopencv-imgproc-dev python-opencv python3-opencv" && rm -rf /var/lib/apt/lists/* && apt-get clean && colcon build --merge-install --cmake-args -DCMAKE_BUILD_TYPE=Release && rm -rf ${ROS_ROOT}/src && rm -rf ${ROS_ROOT}/logs && rm -rf ${ROS_ROOT}/build && rm ${ROS_ROOT}/*.rosinstall' returned a non-zero code: 2
Dockerfile.ros.humble (6.9 KB)

This typically means that your board ran out of memory, which is kind of odd since you are on AGX Xavier and I built mine on Xavier NX - however, I do have additional swap mounted. I would recommend mounting additional swap.

I’ve also uploaded the container image that I built to here: dustynv/ros:humble-ros-base-deepstream-l4t-r35.1.0

I have multiple large Docker images on my Xavier NX, along with a large amount of image data (current disk usage of 94%, after the last build attempt). I will clear out some space and try the build process again, thanks. In the meantime, I will try the container you uploaded.

For my production environment, I need a container with both DeepStream and ROS (not ROS2) - likely Noetic. Do you think these same modifications you made to the Dockerfile.ros.humble can be applied to your Dockerfile.ros.noetic?

I believe they should, yes - although I don’t install the custom OpenCV+CUDA package into the noetic container so those may not even be needed (I’ll try)

Note that those ros2_deepstream nodes are for ROS2 only, but perhaps you have your own DeepStream nodes for ROS1.

My understanding of communication between ROS (both 1 & 2) and DeepStream (Python, really, not DS directly) has evolved a good bit since I originally posted here. My main goal was to be able to communicate object detection messages from DS to ROS1, but the only solid examples of DS-to-ROS communication I’d found were in the ros2_deepstream repo.

The main difference between python / DS comms with ROS1 & ROS2 seems to be the use of either “rospy” or “rclpy”.

I’ve successfully tested basic communication between a ROS node & my own python code.

I’d already had DS python bindings-based code to run my object detection model. Based on parts of this function (single_stream_class.py at main in the NVIDIA-AI-IOT/ros2_deepstream repo on GitHub), I modified my code to send the {class, bounding box info, confidence} inference results to ROS1 using the "rospy" package. I believe the code is mostly correct (assuming the example linked above works…), but I haven’t tested it yet because I don’t have a container with both ROS1 & DS.

I just pulled the dustynv/ros:humble-ros-base-deepstream-l4t-r35.1.0 container and attempted to run an example from the ros2_deepstream repo.
Here is the input (as per the directions at https://github.com/NVIDIA-AI-IOT/ros2_deepstream) & output from inside the container:

mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
git clone https://github.com/NVIDIA-AI-IOT/ros2_deepstream.git
colcon build
source /opt/ros/humble/install/setup.bash
. install/setup.bash
ros2 run single_stream_pkg single_stream --ros-args -p input_source:="/dev/video0"

Traceback (most recent call last):
  File "/root/ros2_ws/src/install/single_stream_pkg/lib/single_stream_pkg/single_stream", line 33, in <module>
    sys.exit(load_entry_point('single-stream-pkg==0.0.0', 'console_scripts', 'single_stream')())
  File "/root/ros2_ws/src/install/single_stream_pkg/lib/single_stream_pkg/single_stream", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/usr/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/root/ros2_ws/src/install/single_stream_pkg/lib/python3.8/site-packages/single_stream_pkg/single_stream.py", line 24, in <module>
    from single_stream_pkg.single_stream_class import InferencePublisher
  File "/root/ros2_ws/src/install/single_stream_pkg/lib/python3.8/site-packages/single_stream_pkg/single_stream_class.py", line 29, in <module>
    from vision_msgs.msg import Classification2D, ObjectHypothesis, ObjectHypothesisWithPose, BoundingBox2D, Detection2D, Detection2DArray
ImportError: cannot import name 'Classification2D' from 'vision_msgs.msg' (/opt/ros/humble/install/lib/python3.8/site-packages/vision_msgs/msg/__init__.py)
[ros2run]: Process exited with failure 1

So it appears that vision_msgs is installed; it just doesn’t contain the Classification2D name. I checked inside ptpython and these are the names available in vision_msgs:

BoundingBox2D
BoundingBox2DArray
BoundingBox3D
BoundingBox3DArray
Classification
Detection2D
Detection2DArray
Detection3D
Detection3DArray
ObjectHypothesis
ObjectHypothesisWithPose
Point2D
Pose2D
VisionInfo
__doc__
__file__
__name__
__package__
_bounding_box2_d
_bounding_box2_d_array
_bounding_box3_d
_bounding_box3_d_array
_classification
_detection2_d
_detection2_d_array
_detection3_d
_detection3_d_array
_object_hypothesis
_object_hypothesis_with_pose
_point2_d
_pose2_d
_vision_info

I attempted to upgrade the “vision_msgs” python package, but it was already up to date, so I’m not sure how to fix this issue…

From my own code, it appears that Classification was renamed to Classification2D at some point (or vice-versa) in the vision_msgs package - so I would just try changing the name of it in the code.
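A hedged sketch of a compatibility import for that rename, trying the older name used by ros2_deepstream first and falling back to the name present in the Humble container (it degrades to None here, since vision_msgs isn't guaranteed to be importable outside ROS):

```python
# Try the old vision_msgs name first, then the newer one; leave None
# if vision_msgs isn't available in this environment at all.
try:
    from vision_msgs.msg import Classification2D as Classification  # older API
except ImportError:
    try:
        from vision_msgs.msg import Classification  # name seen in the Humble container
    except ImportError:
        Classification = None
print(Classification)
```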

OK, it appears that the ROS Noetic + DeepStream container is building now - interestingly, I didn’t need the previous patches from this thread, but I did need to apply this fix to the Dockerfile.ros.noetic for some reason:

python3 ./src/catkin/bin/catkin_make_isolated --install --install-space ${ROS_ROOT} -DCMAKE_BUILD_TYPE=Release -DSETUPTOOLS_DEB_LAYOUT=OFF && \

OK, it did indeed build with that -DSETUPTOOLS_DEB_LAYOUT=OFF workaround and I uploaded the container image to dustynv/ros:noetic-ros-base-deepstream-l4t-r35.1.0