dusty_nv's ros-deep-learning, CUDA version, JetPack and sudo apt-get update questions

Hi

I want to use @dusty_nv's ros-deep-learning with jetson-inference, and error reports plus GitHub issue posts seem to suggest it needs CUDA 8.0 to run (@dusty_nv, can you confirm this is correct please?). My TX2 had CUDA 9.0 by default, and without realising that it is only installed during the post-flash step, I uninstalled 9.0. So I need to put CUDA back on via JetPack, which I removed from the host months back due to space constraints.

Both times I installed JetPack (3.1 and 3.2) it broke sudo apt-get update on my Ubuntu 16.04 PC. Has this issue been fixed in the latest version? I am wary of knowingly breaking my PC in case it cannot be recovered this time.

If I need CUDA 8.0, can I force JetPack to install that version?

My use case is passing RGB images from a Kinect v2, through kinect2_bridge on a ROS topic, to ros-deep-learning, to run an inference / object detection step and output commands to servos.

Many thanks

Can’t answer most of that, but the L4T version (which is what JetPack installs when flashing) determines which CUDA version is compatible, so on a Jetson you will most likely fail if you mix and match. You are advised to use the same CUDA version that comes with the JetPack used to flash the TX2.

Hi RoboRoss, that repo doesn’t strictly require CUDA 8.0; it depends on https://github.com/dusty-nv/jetson-inference, which supports a variety of JetPack and CUDA versions.

What errors are you getting?

Hi guys, thanks for the replies! I will come back to this topic, as I am doing a lot of firefighting right now around associated problems. Just didn’t want the topic locked!

Cheers

Hi @dustyNV, my apologies for the delay in coming back to you.

Please can you advise whether it is possible to pass a ROS RGB image topic, i.e. live images, to ros-deep-learning and imagenet-camera? I already have a Kinect v2 set up with iai_kinect2's kinect2_bridge for another project, which produces a nice RGB stream. It would be awesome if that could be used.

My intention is to use the output from the inference step to drive a servo and stepper motor via an Arduino over serial, and I would like to have it all neatly tied into ROS and visualise the output in RViz.
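(Not part of the original thread, just a minimal sketch of how the classification-to-serial bridge might look. The function name `format_servo_command`, the `S:<id>` wire protocol, and the `/dev/ttyACM0` port are all hypothetical; a real node would call this from the classification callback and write the result with pyserial.)

```python
def format_servo_command(class_id, confidence, threshold=0.5):
    """Build a simple ASCII command for the Arduino, e.g. b'S:12\\n'.

    Returns None when confidence is below the threshold, so nothing
    is written to the serial port for weak detections.
    """
    if confidence < threshold:
        return None
    return ("S:%d\n" % class_id).encode("ascii")

# In a real ROS node you would open the port with pyserial and write
# the command inside the classification callback, e.g.:
#   ser = serial.Serial("/dev/ttyACM0", 115200)
#   cmd = format_servo_command(result_id, result_score)
#   if cmd is not None:
#       ser.write(cmd)
```

Keeping the protocol-formatting logic in a pure function like this makes it easy to unit-test without a serial port attached.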

Unfortunately I cannot use the onboard TX2 camera due to the physical design of the project, and cash is running low to go and buy a separate webcam, so that is another reason I would really like to use the Kinect v2!

Many thanks!

Hi RoboRoss, the imagenet ROS node is currently set up to receive live BGR8 images and classify them. Is that the encoding your topic publishes?

See this section of the readme for an example of setting up the imagenet nodes: https://github.com/dusty-nv/ros_deep_learning#imagenet

By default, it receives video on the /imagenet/image_in topic and outputs the classification results to /imagenet/classification topic.

Hi @dusty_nv,

I am afraid I am a little confused: doesn’t the example refer to publishing just a single image rather than a camera stream? Or should it be modified somehow?

To answer your question, yes: the kinect2_bridge node publishes the /kinect2/hd/image_color topic, which is a message with encoding bgr8:

$ rostopic echo /kinect2/hd/image_color/encoding
"bgr8"
---
"bgr8"
.....

At the moment I have tried using

roslaunch ros_deep_learning imagenet.launch

and get the error message:

[ERROR] [1547071038.693147336]: Failed to load nodelet [/gst_camera] of type [ros_jetson_video/gst_camera] 
even after refreshing the cache: According to the loaded plugin descriptions the class
 ros_jetson_video/gst_camera with base class type nodelet::Nodelet does not exist. Declared types
 are  depth_image_proc/convert_metric depth_image_proc/crop_foremost depth_image_proc/disparity
 depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyz_radial 
depth_image_proc/point_cloud_xyzi depth_image_proc/point_cloud_xyzi_radial
 depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_proc/crop_decimate 
image_proc/crop_nonZero image_proc/crop_non_zero image_proc/debayer image_proc/rectify 
image_proc/resize image_publisher/image_publisher image_rotate/image_rotate image_view/disparity
 image_view/image kinect2_bridge/kinect2_bridge_nodelet 
kobuki_safety_controller/SafetyControllerNodelet nodelet_tutorial_math/Plus pcl/BAGReader 
pcl/BoundaryEstimation pcl/ConvexHull2D pcl/CropBox pcl/EuclideanClusterExtraction 
pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP 
pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX 
pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter 
pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer 
pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers 
pcl/RadiusOutlierRemoval pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SHOTEstimation 
pcl/SHOTEstimationOMP pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/VFHEstimation 
pcl/VoxelGrid ros_deep_learning/ros_imagenet rtabmap_ros/data_odom_sync rtabmap_ros/data_throttle 
rtabmap_ros/disparity_to_depth rtabmap_ros/icp_odometry rtabmap_ros/obstacles_detection 
rtabmap_ros/obstacles_detection_old rtabmap_ros/point_cloud_aggregator rtabmap_ros/point_cloud_xyz
 rtabmap_ros/point_cloud_xyzrgb rtabmap_ros/pointcloud_to_depthimage rtabmap_ros/rgbd_odometry 
rtabmap_ros/rgbd_sync rtabmap_ros/rgbdicp_odometry rtabmap_ros/rtabmap rtabmap_ros/stereo_odometry 
rtabmap_ros/stereo_sync rtabmap_ros/stereo_throttle rtabmap_ros/undistort_depth 
stereo_image_proc/disparity stereo_image_proc/point_cloud2 
yocs_velocity_smoother/VelocitySmootherNodelet

[ERROR] [1547071038.693274439]: The error before refreshing the cache was: 
According to the loaded plugin descriptions the class ros_jetson_video/gst_camera with base class type nodelet::Nodelet does not exist. Declared types are [... the same list as above ...]

[FATAL] [1547071038.693538822]: Failed to load nodelet '/gst_camera` of type 
`ros_jetson_video/gst_camera` to manager `standalone_nodelet'
[gst_camera-3] process has died [pid 6596, exit code 255, cmd /opt/ros/kinetic/lib/nodelet/nodelet 
load ros_jetson_video/gst_camera standalone_nodelet ~image_raw:=/image_raw __name:=gst_camera 
__log:=/home/nvidia/.ros/log/e2bc8628-1458-11e9-85e8-00044b8d1fa5/gst_camera-3.log].

log file: /home/nvidia/.ros/log/e2bc8628-1458-11e9-85e8-00044b8d1fa5/gst_camera-3*.log

The named log shows:

[ INFO] [1547071035.381774585]: Loading nodelet /gst_camera of type ros_jetson_video/gst_camera to manager standalone_nodelet with the following remappings:
[ INFO] [1547071035.381907033]: /gst_camera/image_raw -> /image_raw
[ INFO] [1547071035.389916992]: waitForService: Service [/standalone_nodelet/load_nodelet] has not been advertised, waiting...
[ INFO] [1547071035.436526706]: waitForService: Service [/standalone_nodelet/load_nodelet] is now available.

I have also tried using a separate program, ros-virtual-cam, which creates a v4l2 camera device via v4l2loopback. As part of another effort to get inference working, I edited gstCamera.cpp to look for the index of the camera device (/dev/video1) as follows:

gstCamera::gstCamera()
{	
	mAppSink    = NULL;
	mBus        = NULL;
	mPipeline   = NULL;	
	mV4L2Device = 1;   // use /dev/video1 instead of the default /dev/video0

...

as I (probably wrongly) guessed that gstCamera.cpp was what the launch file was referring to. I was hoping that the

<!-- gst_camera nodelet -->
    <node name="gst_camera"
          pkg="nodelet" type="nodelet"
          args="load ros_jetson_video/gst_camera standalone_nodelet">
          <remap from="~image_raw" to="/image_raw"/>
    </node>

section in the launch file would then just see my new v4l2 camera object and use that, but no dice.

I am not sure if it's relevant, but I needed to change paths.yaml to get rid of file-not-found errors. I changed it to:

imagenet_node:
    prototxt_path: "/home/nvidia/jetson-inference/data/networks/googlenet.prototxt"
    model_path: "/home/nvidia/jetson-inference/data/networks/bvlc_googlenet.caffemodel"
    class_labels_path: "/home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt"
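(An aside, not from the original thread: a quick way to rule out path problems like the ones above is to check each configured file on disk before launching. This is a hedged sketch; `check_model_paths` is a hypothetical helper, and the paths would come from your own paths.yaml.)

```python
import os

def check_model_paths(paths):
    """Return the list of configured model files that do not exist on disk.

    `paths` is a dict like the imagenet_node section of paths.yaml,
    mapping parameter names to absolute file paths.
    """
    return [p for p in paths.values() if not os.path.isfile(p)]

# Example: report anything missing before starting the node.
# missing = check_model_paths(yaml.safe_load(open("paths.yaml"))["imagenet_node"])
# if missing:
#     print("Missing model files:", missing)
```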

Am I missing something simple? I must apologise, as I am an educational TX2 user and still very much learning, so I am also bumping up against the limits of my coding ability :(

Thanks for reading!
RR

The imagenet.launch file uses the nodelet; the plain node should be easier to get running. Can you try launching the imagenet node like this, connecting it to your kinect2 BGR topic?

$ rosrun ros_deep_learning imagenet /imagenet/image_in:=/kinect2/hd/image_color

If the BGR camera image is published on a topic other than ‘/kinect2/hd/image_color’, please substitute that name in the command above.
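(For a more permanent setup, the same remap can live in a small launch file. This is an untested sketch: the node name and package follow the rosrun command above, and the remap simply points the imagenet input at the Kinect v2 colour stream.)

```xml
<launch>
  <node name="imagenet" pkg="ros_deep_learning" type="imagenet" output="screen">
    <!-- feed the Kinect v2 colour stream into the imagenet node -->
    <remap from="/imagenet/image_in" to="/kinect2/hd/image_color"/>
  </node>
</launch>
```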

@dusty_nv apologies for the delay in replying, this solution works perfectly thank you!