ROS Noetic on Jetson TX2

Hello everyone!
I am running into trouble. I have to use Python 3 because I am running TensorFlow 2.4.
My environment is:
Jetson TX2, JetPack 4.5, Python 3, ROS Melodic.
I want to publish video frames from one node to another, so I have built vision_opencv inside the catkin_ws.
After running catkin build I do not get any errors, but when I run this in Python 3:

from cv_bridge.boost.cv_bridge_boost import getCvType

I get this error:

Traceback (most recent call last):
  File "/home/jetson/catkin_ws/src/soka_drone/Scripts/test_opencv_2.py", line 104, in <module>
    cam_test()
  File "/home/jetson/catkin_ws/src/soka_drone/Scripts/test_opencv_2.py", line 87, in cam_test
    msg = bridge.cv2_to_imgmsg(img, encoding="bgr8")
  File "/home/jetson/catkin_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 263, in cv2_to_imgmsg
    if self.cvtype_to_name[self.encoding_to_cvtype2(encoding)] != cv_type:
  File "/home/jetson/catkin_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 91, in encoding_to_cvtype2
    from cv_bridge.boost.cv_bridge_boost import getCvType
ModuleNotFoundError: No module named 'cv_bridge.boost.cv_bridge_boost'

This is how I built cv_bridge:

sudo apt-get install python-catkin-tools python3-dev python3-catkin-pkg-modules python3-numpy python3-yaml ros-melodic-cv-bridge
# Create catkin workspace
mkdir catkin_ws
cd catkin_ws
catkin init
# Instruct catkin to set CMake variables
catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.6m.so
# Instruct catkin to install built packages into the install space ($CATKIN_WORKSPACE/install)
catkin config --install
# Clone cv_bridge src
git clone https://github.com/ros-perception/vision_opencv.git src/vision_opencv
# Find version of cv_bridge in your repository
apt-cache show ros-melodic-cv-bridge | grep Version
cd src/vision_opencv/
git checkout 1.13.0
cd ../../
# Build
catkin build cv_bridge
# Extend environment with new package
source install/setup.bash --extend
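
After sourcing the workspace, a quick way to check the build is to see where Python resolves cv_bridge from and whether the compiled Boost module loads (a minimal sketch; the getCvType call mirrors the import that was failing):

import cv_bridge
print(cv_bridge.__file__)  # should point into the workspace's install space, not /opt/ros

# the import that was failing in the traceback above
from cv_bridge.boost.cv_bridge_boost import getCvType
print(getCvType("bgr8"))   # should print 16 (CV_8UC3) if the Boost module built correctly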

Is there a way to install ROS Noetic on the Jetson TX2?

Thank you so much in advance.

Hi @alberto18_90, I haven’t encountered this myself, but can you check your PYTHONPATH to make sure the cv_bridge module can be found? Can ROS find the cv_bridge package?

Are you able to import cv_bridge at all, or is it just the extended from cv_bridge.boost.cv_bridge_boost import getCvType that gives the error?

Dear @dusty_nv
Thank you for your reply.
Since I am living in Japan I cannot answer right now, but I am running the same project on my laptop and it runs well (too slow; I realize it's because I don't have a GPU).
So I suspect the problem is the path. Can you send me the commands to set up the paths for cv_bridge and Python correctly?
Thank you so much

Sorry, I’m not exactly familiar with it, so you may need to play around. I believe ROS should automatically set the correct PYTHONPATH when your workspace overlay is sourced (hence my question about whether ROS can find the cv_bridge package).

Also, after you source your workspace, try running:

echo $PYTHONPATH
python3 -c 'import cv_bridge'
python -c 'import cv_bridge'

Are you able to import it with either python3 or python?
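
If the plain import fails, printing the relevant sys.path entries can show whether the workspace overlay actually made it onto the path (a small diagnostic sketch; matching on "catkin_ws" is an assumption based on the workspace name above):

import sys
for p in sys.path:
    if "catkin_ws" in p or "/ros/" in p:
        print(p)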

@dusty_nv
Thank you for your help. I have been able to build the package by adding these two lines to catkin_ws/src/vision_opencv/cv_bridge/CMakeLists.txt:

set(PYTHON_NUMPY_INCLUDE_DIR ~/.local/lib/python3.6/site-packages/numpy/core/include)
set(PYTHON_INCLUDE_PATH /usr/include/python3.6)
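
If those paths differ on another system, Python itself can report the right values to plug into the two set() lines (a sketch; run it with the same python3 that catkin is configured to use):

import numpy, sysconfig
print(numpy.get_include())               # value for PYTHON_NUMPY_INCLUDE_DIR
print(sysconfig.get_paths()["include"])  # the Python include directory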

But when I run my code, it seems that cv_bridge is too slow for real-time video streaming. Do you know how I can improve it?
This is my code:

#!/usr/bin/env python3
import cv2
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def cam_test():
    cap = cv2.VideoCapture(-1)           # open the first available camera
    pub = rospy.Publisher('chatter', Image, queue_size=1000)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(100)
    while not rospy.is_shutdown():
        ret, img = cap.read()
        if not ret:                      # skip the iteration if the grab failed
            continue
        cv2.circle(img, (320, 240), 15, (0, 0, 255), 2)
        image_message = bridge.cv2_to_imgmsg(img, encoding="passthrough")
        # note: logging the whole message prints every pixel to the console,
        # which by itself can make the loop far too slow for real time
        rospy.loginfo(image_message)
        pub.publish(image_message)
        rate.sleep()
        #cv2.imshow('img', img)
        k = cv2.waitKey(30) & 0xff
        if k == 27:                      # Esc key exits the loop
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    try:
        cam_test()
    except rospy.ROSInterruptException:
        pass
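
For reference, a minimal subscriber on the other node would look roughly like this (a sketch; the chatter topic matches the publisher above, and the imshow window is just for quick viewing):

#!/usr/bin/env python3
import cv2
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(msg):
    # convert the ROS Image message back into an OpenCV array
    img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    cv2.imshow('received', img)
    cv2.waitKey(1)

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('chatter', Image, callback, queue_size=1)
    rospy.spin()

if __name__ == '__main__':
    listener()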

In my ros_deep_learning nodes, I use CUDA to convert the images as opposed to OpenCV. It uses my imageConverter class and the jetson-utils library.

From Python, you could try using jetson.utils directly for colorspace conversion - https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#color-conversion
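
Roughly, the conversion from that page looks like this (a sketch; it loads from a file here, but the same cudaConvertColor call works on any cudaImage):

import jetson.utils

# load the input image as bgr8 (matching OpenCV's channel order)
img_bgr = jetson.utils.loadImage('frame.jpg', format='bgr8')

# allocate the output image in rgb8
img_rgb = jetson.utils.cudaAllocMapped(width=img_bgr.width,
                                       height=img_bgr.height,
                                       format='rgb8')

# perform the colorspace conversion on the GPU
jetson.utils.cudaConvertColor(img_bgr, img_rgb)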


I am trying to publish the video I'm getting from the webcam, then subscribe from other ROS machines and show the video in real time. Is it possible to do that using your ros_deep_learning node?
opencv_video → convert_image_to_other_format → publish_msg. Is this process correct?

What kind of webcam is it? You could try using my video_source node, which uses CUDA:

https://github.com/dusty-nv/ros_deep_learning#video_source-node

It supports these kinds of interfaces: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#input-streams

A good idea is to try the video-viewer program first to check that jetson-utils can properly capture your camera.
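
Under the hood, video-viewer is roughly equivalent to this jetson.utils loop (a sketch; the /dev/video0 URI is an assumption for a V4L2 USB webcam):

import jetson.utils

camera = jetson.utils.videoSource("/dev/video0")   # or "csi://0" for a MIPI CSI camera
display = jetson.utils.videoOutput("display://0")  # render to an OpenGL window

while display.IsStreaming():
    img = camera.Capture()   # returns a cudaImage in GPU-mapped memory
    display.Render(img)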
