ROS packages for working with Cameras on Jetson Nano

Hi all

I am currently trying to collect ROS bag files containing images with a Jetson Nano. I have the Jetson Nano version B01 flashed with the newest L4T release (32.4.4), one Raspberry Pi Camera v2.1, and one HBV-1615 USB camera (amazon link).
Now, if I follow along with the great tutorial from JetsonHacks on running a camera, the Raspberry camera works fine using GStreamer. The HBV camera does not work at all; it just returns a blank frame.

I attempted to follow some recommendations from related forum topics on debugging camera issues, such as 731141, 72003, 75058, all to no avail.

So I decided to skip this part and attempted to integrate the Raspberry Pi Camera v2.1 with ROS. I first tried the melodic release of video_stream_opencv, see here, which returned a VIDIOC_QBUF error, for which I was unable to find a solution online.
The raspicam_node package was missing the mmal library, which I was unable to find a fix for either. usb_cam threw an unsupported error, as did uvc_cam. gscam, including the wrapper found here, does not work either.

The fix seemed to be to follow along with the jetbot ros package, which also uses the camera. However, running the camera requires the jetson-utils module, which is part of the jetson-inference package but pinned to a fixed commit there, and jetson-utils does not have any instructions for a standalone installation.

Upon installing and running those packages, it turns out that some things, such as the camera ID, are hard-coded, and the entire application is built around the jetson-utils package and its use of GStreamer and V4L2, see here. The output of running the node is simply an image, which is a nice start.
For development with ROS I unfortunately need to perform common manipulations on the image stream, such as camera calibration, resizing the images, flipping the images, adapting the framerates, and other operations that are often part of a launch file setup, see example here.

Now I must admit that I am a little clueless on how to continue with development from here on out. I believe that using a camera with common nodes should not be this cumbersome: either I am missing something fundamental, or this is harder than it is supposed to be. I attempted to use virtually every package I could find for working with cameras and getting the cameras to work. I had to tinker with a few CMake paths to find the correct OpenCV system install and keep running into errors that I cannot debug.
While the jetson-utils package apparently provides the necessary options to wrap GStreamer commands in a fashion that might allow running the cameras, I do not believe it viable for me to spend weeks on extending the package and its functionality just to record data and do calibration. Furthermore, I do not even know whether the cameras I have will work with the Jetson Nano at all.

I would greatly appreciate any hints on how to proceed. How can I get basic functionality for recording ROS bag files with camera images and a few image options out of a Jetson Nano without sinking weeks of development into GStreamer and ROS wrappers? @dusty_nv, the package functionalities seem great and look like they are just what I need. Any hints on what I could do here?

Cheers
Nico

Hi @NicoMandel, if you check out the ros_deep_learning package, that has been updated more recently than jetbot_ros and includes the new videoSource node:

The node now includes the ability to change the CSI camera index. For your Raspberry Pi Camera v2, you would want to use csi://0. Your USB camera would use V4L2 (e.g. /dev/video1).
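For example, assuming the ROS1 launch files from the repo, selecting between the two cameras would look like this:

```shell
# MIPI CSI camera (Raspberry Pi Camera v2) on sensor port 0
roslaunch ros_deep_learning video_source.ros1.launch input:=csi://0

# USB camera through V4L2 (device node is an assumption -- check with ls /dev/video*)
roslaunch ros_deep_learning video_source.ros1.launch input:=/dev/video1
```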

Can you run these commands with your USB camera plugged in and post the output of the second command?

$ sudo apt-get install v4l-utils
$ v4l2-ctl --device=/dev/video1 --list-formats-ext

Also, when you install the updated jetson-inference from master, can you run video-viewer /dev/video1 and see the USB camera? If not, please post the console log.

I’m not experienced with recording rosbag files myself, but my understanding is that it is independent of the nodes and you can simply point it at a topic that it will record. So you could record the raw output of the video_source node, for example.
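For example, something along these lines should do it, with the topic name swapped for whatever `rostopic list` actually shows on your system (the name below is an assumption):

```shell
# record the raw image topic into a bag file
rosbag record -O camera.bag /video_source/raw

# afterwards, check what got recorded
rosbag info camera.bag
```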

You may want to perform these image manipulations in another ROS node, or you could add functions into the video_source node to do it there. For example, included in jetson-utils are CUDA-accelerated functions for intrinsic/extrinsic camera calibration and de-warping:

Hi @dusty_nv

thank you very much for taking the time to read the lengthy description and for providing a detailed answer with all the tips; this is very helpful and I genuinely appreciate this interaction. I am splitting the further discussion into two parts, one concerning the USB camera, the other the ROS integration, for ease of reference. This answer is regarding the USB camera.

I had previously already installed the Video4Linux utilities. The output of v4l2-ctl --device=/dev/video1 --list-formats-ext is:

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 160x120
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 352x288
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 176x144
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 1280x1024
			Interval: Discrete 0.133s (7.500 fps)

Running git fetch master and git status on jetson-inference actually tell me that the local repo is up to date with the remote master. Running video-viewer /dev/video1 results in the following:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video1

(video-viewer:7681): GStreamer-CRITICAL **: 10:43:10.297: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(video-viewer:7681): GStreamer-CRITICAL **: 10:43:10.298: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(video-viewer:7681): GStreamer-CRITICAL **: 10:43:10.298: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[gstreamer] gstCamera -- found v4l2 device: USB 2.0 PC Cam
[gstreamer] v4l2-proplist, device.path=(string)/dev/video1, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"USB\ 2.0\ PC\ Cam", v4l2.device.bus_info=(string)usb-70090000.xusb-2.3, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 6 caps for v4l2 device /dev/video1
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)1024, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)15/2;
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] gstCamera -- selected device profile:  codec=raw format=yuyv width=1280 height=1024
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video1 ! video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)1024 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video1
[video]  created gstCamera from v4l2:///dev/video1
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video1
     - protocol:  v4l2
     - location:  /dev/video1
     - port:      1
  -- deviceType: v4l2
  -- ioType:     input
  -- codec:      raw
  -- width:      1280
  -- height:     1024
  -- frameRate:  7.500000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1920
  -- height:     1080
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
video-viewer:  failed to capture video frame
video-viewer:  failed to capture video frame
video-viewer:  failed to capture video frame
^Creceived SIGINT
video-viewer:  failed to capture video frame
video-viewer:  shutting down...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
video-viewer:  shutdown complete

I am a little stumped that GStreamer throws an early critical error but continues regardless, successfully creating the source, the sink, and the OpenGL display, just failing to read any frames. The GStreamer pipeline string v4l2src device=/dev/video1 ! video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)1024 ! appsink name=mysink is pretty similar to what I attempted to run from the command line, just with different sinks.
I have two suspicions about what the issue could be:

  1. Could this have to do with the camera format YUYV 4:2:2? I tried to look up what the 4:2:2 meant and if I would have to adapt something, but could not find anything.
  2. The camera also has no autofocus, could this be an issue?
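One more thing I could try is to take the appsink out of the equation and render the same caps directly with gst-launch, to see whether v4l2src alone delivers frames (an untested sketch on my side; the sink assumes an X display):

```shell
# same source and caps as the gstCamera pipeline string, but with a plain X video sink
gst-launch-1.0 -v v4l2src device=/dev/video1 \
    ! 'video/x-raw,format=YUY2,width=640,height=480' \
    ! videoconvert ! xvimagesink
```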

Thank you for taking the time to look at all this again. Any ideas how to proceed here?

Kind regards
Nico

Hi dusty

this is the second answer, regarding running the Raspberry Camera in ROS. Again, thank you for taking the time to read the issue and providing feedback.

I first ran the simple camera script from JetsonHacks, which displays the CSI camera just fine. Then I made sure that ros_deep_learning was checked out on master and ran the ./imagenet-camera application in jetson-inference (in aarch64/bin) to ensure the connection there works too.
roslaunch ros_deep_learning video_viewer.ros1.launch works fine with the camera.

I was able to launch the RPi camera as well with roslaunch ros_deep_learning video_source.ros1.launch input:=csi://0 and display the image with the ROS image_view package. However, the only topic that showed up was an image_raw topic, with no camera_info or compressed data. As far as I understand, the dependencies are the video_options and videoSource utilities, here, as well as the image_converter class, which appear to be tightly integrated C++ and CUDA code with a ROS wrapper around them for compatibility. Unfortunately, I might have to adapt some parameters on the spot, depending on the system functionality in the field.
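For reference, this is roughly how I inspected the topics (the exact topic name below is an assumption from memory):

```shell
# list all advertised topics -- only the image_raw topic showed up for me
rostopic list

# measure the effective publish rate of the image topic (name assumed)
rostopic hz /video_source/raw
```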

Maybe some context will help to illustrate what we are trying to achieve and why we are required to use ROS. We are putting the Nano with the camera on a drone and want to record images (along with some other timestamped data, which is why we need rosbag), so there are a few things where I am not certain how to accomplish them with the packages at hand. Ideally I would be able to set parameters in a launch file, similar to the video_stream_opencv utility. Any recommendations would be greatly appreciated.

  1. We may be required to flip the image in the launch file, due to mounting constraints.
  2. The data needs to be timestamped in the header, on ROS time.
  3. We need to log the data at the maximum framerate; however, the control application running on the UAV is time-critical, which would require us to make swift changes to image compression (for ROS) or framerates if necessary.

What are your recommendations on achieving these features? I have worked my way partially through the Udacity cs344 course for CUDA programming, so I understand some fundamentals, but I am not proficient enough with GStreamer and CUDA to find the places in the code I would need to change to add the required parameters to the launch file. Would you say that attempting to use gscam might be worth investing more time into? What do you think the GStreamer string would have to be for this usage?

Cheers
Nico

I don’t believe the YUYV 4:2:2 format (aka YUY2) should be an issue, as my code does support it. I’m not sure why it isn’t getting data. If there were an error during colorspace conversion, you would see that, but it doesn’t get that far (it doesn’t appear to be getting a frame). You can try running it with the --debug flag to see if any more info is logged.

You might want to try running that camera in a lower resolution in case there is a bandwidth issue somehow. Use one of the resolutions from your v4l2-ctl output. You can do that by specifying the --input-width and --input-height arguments like below:

$ video-viewer --input-width=320 --input-height=240 --debug /dev/video1

That is correct, the video_source node only publishes the raw topic. It doesn’t publish camera_info or compressed data. If you wanted to compress the video to a video file on disk or stream it out via RTP, the video_output node can do that - it doesn’t send the encoded video out via a ROS topic though, because the GStreamer pipelines it uses for encoding terminate with an output element (like a file or RTP stream).
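If you do need a compressed image topic inside ROS, one option that might work without modifying my nodes is the republish node from image_transport (the topic names here are assumptions):

```shell
# subscribe to the raw image topic and republish it as a compressed topic
rosrun image_transport republish raw in:=/video_source/raw compressed out:=/video_source/republished
```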

For CSI camera, there is the input-flip setting which can be used to flip/rotate the video.

--input-flip=FLIP      flip method to apply to input (excludes V4L2):
                             * none (default)
                             * counterclockwise
                             * rotate-180
                             * clockwise
                             * horizontal
                             * vertical
                             * upper-right-diagonal
                             * upper-left-diagonal

However I don’t seem to have exposed that as a ROS parameter in the video_source node, sorry about that. It should be simple for you to add to the node if you needed it.

The image messages should already be timestamped - see here in the node code:

Each of the nodes in the ros_deep_learning package timestamp their messages like that.

Personally I would just use the video_output node to save the video to an H.264/H.265-encoded file (like MP4, MKV, AVI, etc.). I realize this is not a rosbag. I’m not sure how you would get an encoded bitstream into a rosbag or make sense of it.
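For example, with the video-viewer utility the encoded output is just a second URI argument (file name and container chosen arbitrarily here):

```shell
# capture from the V4L2 camera and write an H.264-encoded MP4 to disk
video-viewer /dev/video1 file://my_video.mp4
```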

Hi dusty

thanks for the input, this was very helpful. We are making slight progress. First I checked the camera on my desktop and made sure it works there. Then I plugged it into its own USB port stack on the Nano (keyboard and mouse occupy the other stack).
I ran video-viewer --help and checked that the --debug and --verbose flags result in the same output. Then I proceeded to run through all of these configurations:

tl;dr:

  • 160x120 and 176x144 @ 30 Hz work
  • 320x240 and above do not work
  • the --input-rate parameter does not get passed through

Long version:

I tried running the smaller sizes and got the following results for video-viewer --input-width=160 --input-height=120 --input-rate=15 --verbose /dev/video1

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video1

(video-viewer:10401): GStreamer-CRITICAL **: 10:39:26.638: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(video-viewer:10401): GStreamer-CRITICAL **: 10:39:26.638: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(video-viewer:10401): GStreamer-CRITICAL **: 10:39:26.639: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[gstreamer] gstCamera -- found v4l2 device: USB 2.0 PC Cam
[gstreamer] v4l2-proplist, device.path=(string)/dev/video1, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"USB\ 2.0\ PC\ Cam", v4l2.device.bus_info=(string)usb-70090000.xusb-2.3, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 6 caps for v4l2 device /dev/video1
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)1024, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)15/2;
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] gstCamera -- selected device profile:  codec=raw format=yuyv width=160 height=120
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video1 ! video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video1
[video]  created gstCamera from v4l2:///dev/video1
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video1
     - protocol:  v4l2
     - location:  /dev/video1
     - port:      1
  -- deviceType: v4l2
  -- ioType:     input
  -- codec:      raw
  -- width:      160
  -- height:     120
  -- frameRate:  30.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1920
  -- height:     1080
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera recieve caps:  video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
[gstreamer] gstCamera -- recieved first frame, codec=raw format=yuyv width=160 height=120 size=38400
RingBuffer -- allocated 4 buffers (38400 bytes each, 153600 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (57600 bytes each, 230400 bytes total)
[gstreamer] gstreamer message qos ==> v4l2src0
video-viewer:  captured 1 frames (160 x 120)
[OpenGL] glDisplay -- set the window size to 160x120
[OpenGL] creating 160x120 texture (GL_RGB8 format, 57600 bytes)
[cuda]   registered openGL texture for interop access (160x120, GL_RGB8, 57600 bytes)
video-viewer:  captured 2 frames (160 x 120)
video-viewer:  captured 3 frames (160 x 120)
video-viewer:  captured 4 frames (160 x 120)
video-viewer:  captured 5 frames (160 x 120)
video-viewer:  captured 6 frames (160 x 120)
video-viewer:  captured 7 frames (160 x 120)
video-viewer:  captured 8 frames (160 x 120)
video-viewer:  captured 9 frames (160 x 120)
video-viewer:  captured 10 frames (160 x 120)
video-viewer:  captured 11 frames (160 x 120)
video-viewer:  captured 12 frames (160 x 120)
video-viewer:  captured 13 frames (160 x 120)
video-viewer:  captured 14 frames (160 x 120)
video-viewer:  captured 15 frames (160 x 120)
video-viewer:  captured 16 frames (160 x 120)
video-viewer:  captured 17 frames (160 x 120)
video-viewer:  captured 18 frames (160 x 120)
video-viewer:  captured 19 frames (160 x 120)
video-viewer:  captured 20 frames (160 x 120)
video-viewer:  captured 21 frames (160 x 120)
video-viewer:  captured 22 frames (160 x 120)
video-viewer:  captured 23 frames (160 x 120)
video-viewer:  captured 24 frames (160 x 120)
video-viewer:  captured 25 frames (160 x 120)
video-viewer:  captured 26 frames (160 x 120)
video-viewer:  captured 27 frames (160 x 120)
video-viewer:  captured 28 frames (160 x 120)
video-viewer:  captured 29 frames (160 x 120)
video-viewer:  captured 30 frames (160 x 120)
video-viewer:  captured 31 frames (160 x 120)
video-viewer:  captured 32 frames (160 x 120)
video-viewer:  captured 33 frames (160 x 120)
video-viewer:  captured 34 frames (160 x 120)
video-viewer:  captured 35 frames (160 x 120)
video-viewer:  captured 36 frames (160 x 120)
video-viewer:  captured 37 frames (160 x 120)
video-viewer:  captured 38 frames (160 x 120)
video-viewer:  captured 39 frames (160 x 120)
video-viewer:  captured 40 frames (160 x 120)
^Creceived SIGINT
video-viewer:  shutting down...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
video-viewer:  shutdown complete

The video options section tells me that the --input-rate=15 parameter does not get passed through. I tried --input-rate=15/1 as well, also without success.
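To check whether the camera itself would honor a 15 fps cap, independent of the application, I suppose I could run something like this (untested):

```shell
# -v prints the negotiated caps, so the effective framerate becomes visible
gst-launch-1.0 -v v4l2src device=/dev/video1 \
    ! 'video/x-raw,format=YUY2,width=160,height=120,framerate=15/1' \
    ! fakesink
```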

Running higher resolutions with the --debug flag (and attempted --input-rate) resulted in the following:
cmd: video-viewer --input-width=640 --input-height=480 --input-rate=15 --debug /dev/video1

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video1

(video-viewer:10683): GStreamer-CRITICAL **: 10:47:53.226: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(video-viewer:10683): GStreamer-CRITICAL **: 10:47:53.227: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(video-viewer:10683): GStreamer-CRITICAL **: 10:47:53.227: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[gstreamer] gstCamera -- found v4l2 device: USB 2.0 PC Cam
[gstreamer] v4l2-proplist, device.path=(string)/dev/video1, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"USB\ 2.0\ PC\ Cam", v4l2.device.bus_info=(string)usb-70090000.xusb-2.3, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 6 caps for v4l2 device /dev/video1
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)1024, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)15/2;
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1 };
[gstreamer] gstCamera -- selected device profile:  codec=raw format=yuyv width=640 height=480
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video1 ! video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video1
[video]  created gstCamera from v4l2:///dev/video1
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video1
     - protocol:  v4l2
     - location:  /dev/video1
     - port:      1
  -- deviceType: v4l2
  -- ioType:     input
  -- codec:      raw
  -- width:      640
  -- height:     480
  -- frameRate:  30.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1920
  -- height:     1080
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
video-viewer:  failed to capture video frame
video-viewer:  failed to capture video frame
video-viewer:  failed to capture video frame
video-viewer:  failed to capture video frame
^Creceived SIGINT
video-viewer:  failed to capture video frame
video-viewer:  shutting down...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
video-viewer:  shutdown complete

So I think we are getting somewhere. Any thoughts on why the --input-rate parameter does not get fed through? I am not quite sure which file in which package requires modification for this, video-source.cpp or video-options.cpp? What do you think?

Kind regards
Nico

Thank you for pointing me to the line; that is really helpful. One problem down.

Thank you for putting that in as a parameter. However, I am not sure where I should add this parameter so that it gets passed through, similar to the section I mentioned before:

Or even in the node_video_source file?


On a higher level, maybe we can circumvent adapting entire ROS nodes. The issue for us is that we rely on ROS1 Melodic at the current stage, since UAV and flight controller communication is not yet robust enough with ROS2 to move our experiments across. As you discussed in #142517, 20.04 won’t be available anytime soon, and the only solutions would be to use Docker or build from source.

We need time-stamped transform data from an external localization system, which is why rosbag recording is important for us, as it automatically takes care of that. The potential for use with the standard image_pipeline package is also a major benefit, as it would allow post-recording calibration. gscam claims to provide these capabilities:

gscam is fully compatible with the ROS Camera interface and can be calibrated to provide rectified images. For details, see the appropriate ROS documentation on the image_pipeline wiki page.

However, I cannot figure out a way to set the correct string for the gstreamer sink that is required - modifying the recommended export GSCAM_CONFIG="v4l2src device=/dev/video2 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" to – what I assume would be a minimally working string on both ends – export GSCAM_CONFIG="nvarguscamerasrc ! video/x-raw-rgb(memory:NVMM),format=(string)NV12 ! ffmepgcolorspace" resulted only in:

[ INFO] [1608340492.481443660]: Using gstreamer config from env: "nvarguscamerasrc ! video/x-raw-rgb(memory:NVMM),format=(string)NV12 ! ffmepgcolorspace"
[ INFO] [1608340492.488712090]: using default calibration URL
[ INFO] [1608340492.488836102]: camera calibration URL: file:///home/jetson/.ros/camera_info/camera.yaml
[ INFO] [1608340492.489010844]: Unable to open camera calibration file [/home/jetson/.ros/camera_info/camera.yaml]
[ WARN] [1608340492.489090584]: Camera calibration file /home/jetson/.ros/camera_info/camera.yaml not found.
[ INFO] [1608340492.489145950]: Loaded camera calibration from 
[ WARN] [1608340492.490017732]: No camera frame_id set, using frame "/camera_frame".

(gscam:12442): GStreamer-WARNING **: 11:14:52.610: 0.10-style raw video caps are being created. Should be video/x-raw,format=(string).. now.
[FATAL] [1608340492.611546851]: GStreamer: cannot link launchpipe -> sink
[FATAL] [1608340492.611783209]: Failed to initialize gscam stream!

Do you have any hints on what a feasible string would look like for this case?
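My current best guess, pieced together from other forum posts (untested, and the caps values are placeholders), would be something along these lines:

```shell
# Untested guess at a minimal gscam string for the CSI camera:
# nvvidconv copies frames out of NVMM memory, videoconvert then produces
# the plain RGB caps gscam expects. Width/height/framerate are placeholders.
export GSCAM_CONFIG="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=RGB"
echo "$GSCAM_CONFIG"
```

As far as I understand, the nvvidconv element would be needed because regular GStreamer elements cannot read the NVMM buffers that nvarguscamerasrc produces. Is that along the right lines?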

Kind regards
Nico

Ah, I think there are two things that need adjusting for that setting to take effect with V4L2 cameras:

  1. See gstCamera.cpp:L163. It seems like I forgot to add mOptions.frameRate to the caps string there like I do with the width & height.

  2. gstCamera::matchCaps() and gstCamera::parseCaps() pick the highest supported framerate for that camera format. In your situation, you should be able to ignore the framerate it picks and keep the one you requested by making a change like this at gstCamera.cpp:342

float bestFrameRate = 0.0f;	// receives (and discards) the auto-selected rate, so mOptions.frameRate keeps your requested value
if( !parseCaps(bestCaps, &mOptions.codec, &mFormatYUV, &mOptions.width, &mOptions.height, &bestFrameRate) )
		return false;

After you change the code, remember to re-run make and sudo make install.
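For example, something like this (the path assumes the default jetson-inference checkout location - adjust it to wherever your clone lives):

```shell
# Rebuild sketch after editing gstCamera.cpp; the build directory path
# below is an assumption based on the usual jetson-inference layout.
BUILD_DIR="$HOME/jetson-inference/build"
if [ -d "$BUILD_DIR" ]; then
    cd "$BUILD_DIR"
    make -j"$(nproc)"       # recompile the changed sources
    sudo make install       # install the updated libraries system-wide
else
    echo "jetson-inference build directory not found at $BUILD_DIR"
fi
```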

Note that the input-flip argument currently only applies to MIPI CSI cameras, because that GStreamer pipeline has access to the nvvidconv element that can do the rotations. If you need that for a V4L2 camera too, you may be able to add a similar nvvidconv element to the V4L2 version of the pipeline. That stuff is all done around here in the code:

In terms of adding it as a ROS parameter, you would add it around here in the video_source node:

You would probably want to add something like this:

std::string flip_str;

ROS_DECLARE_PARAMETER("flip", flip_str);
ROS_GET_PARAMETER("flip", flip_str);

if( flip_str.size() != 0 )
    video_options.flipMethod = videoOptions::FlipMethodFromStr(flip_str.c_str());

Hi @dusty_nv

happy New Year and a belated Merry Christmas to you. Thank you for all the insightful tips and tricks you have provided me with. In the end we decided to use the Intel RealSense D435 for recording videos, since that seems to be a fairly straightforward process.
Now I am about to work with the bag files and want to run a suite of NNs on them. I may have to do part of this on the Jetson and part on my desktop, and again I see a whole range of options and am not sure how to proceed. I wrote a standard node which converts the ROS image type to an OpenCV image. I can run the OpenCV dnn module on the images; however, this still only uses the CPU. OpenCV does not let me set the options

net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

as suggested by PyImageSearch at the beginning of last year, which I assume is due to the missing CMake flag -D OPENCV_DNN_CUDA=YES (see JetsonHacks), despite the JetsonHacks resource (Nov. 2019) being older than the PyImageSearch article (February 2020). cv2.getBuildInformation() does not indicate that the default OpenCV 4.1.1 install uses CUDA at all. Was this on purpose? Would you propose following the instructions from another PyImageSearch blog post on updating the default versions for the Jetson Nano?
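To illustrate what I am checking: a small helper that parses the relevant line of the build-information dump (the "NVIDIA CUDA" label is how it appears in my cv2.getBuildInformation() output; on the Jetson you would pass the real string):

```python
def cuda_enabled(build_info: str) -> bool:
    """Return True if an OpenCV build-information dump reports CUDA support."""
    for line in build_info.splitlines():
        if "NVIDIA CUDA" in line:
            return "YES" in line
    return False  # CUDA not mentioned at all

# On the Jetson you would pass cv2.getBuildInformation(); here a trimmed
# sample of what the stock JetPack OpenCV 4.1.1 reports for me:
sample = "  NVIDIA CUDA:                   NO"
print(cuda_enabled(sample))  # → False
```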

Another option I came across was the blog post you wrote for NVIDIA in January 2020, where you make use of the jetson-utils and jetson-inference imports in the Python section. The simplicity of the wrapping code caught my attention, and I was hoping to adapt it for my own purposes. However, running pdb told me that the img returned from videoCapture is a cudaImage type, which I assume lives in GPU memory. From my point of view this is not a straightforward conversion, as I would like to insert my code right at this line. I am aware that OpenCV images are essentially NumPy arrays; I am just not sure how I would go about that conversion. Any suggestions?
Also, is it possible to change the location of the TensorRT-optimized models, i.e. make them part of the config folder of the ROS package in question?

I hope you find some time to help me shed light on this confusion between all the packages; I feel a little lost without your help.
With kind regards
Nico

Yes, it is intentionally disabled by default because some CUDA tests do not pass in OpenCV (neither on desktop GPUs nor on Jetson), so you would need to re-build OpenCV with CUDA/cuDNN enabled - for that, you can see @mdegans' script here: https://github.com/mdegans/nano_build_opencv

The jetson-inference library already has ROS node wrappers available here: https://github.com/dusty-nv/ros_deep_learning
So you could use those to run recognition, detection, and segmentation DNN models.

To convert the CUDA images to/from NumPy arrays and OpenCV, see these resources:
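As a rough sketch, the flow looks something like this (function names are from the jetson.utils Python bindings; whether cudaToNumpy takes a single cudaImage argument depends on your jetson-utils version, so verify against the linked resources):

```python
import numpy as np

# jetson.utils is only available on a Jetson with jetson-inference built,
# so guard the import for desktop use.
try:
    from jetson.utils import videoSource, cudaToNumpy, cudaDeviceSynchronize
    HAVE_JETSON = True
except ImportError:
    HAVE_JETSON = False

def cuda_to_bgr(cuda_img):
    """Map a cudaImage into host-accessible memory and swap RGB -> BGR for OpenCV."""
    cudaDeviceSynchronize()          # wait for the GPU to finish writing the frame
    arr = cudaToNumpy(cuda_img)      # NumPy array over the image memory
    return np.ascontiguousarray(arr[..., ::-1])  # reverse the channel order

if HAVE_JETSON:
    camera = videoSource("csi://0")  # device URI here is just an example
    bgr_frame = cuda_to_bgr(camera.Capture())
```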

Hi dusty

thank you for the continuous support. Any thoughts on the approach of just updating Ubuntu through dist-upgrade, as documented in this blog post? Any reasons this would fail on the Nano?
The Docker images you published for newer ROS versions look to me like standard installs, with the apt lists deleted after each step. Can I find the ros_entrypoint script anywhere to see what it does?

Kind regards
Nico

Hi @NicoMandel, I haven’t tried dist-upgrade myself and am not personally sure if all the CUDA/GPU stuff would still work - but that blog author seemed to have good luck with it. You could always try it after backing up your work or on a fresh SD card.

Here is the link to the ros_entrypoint.sh script: https://github.com/dusty-nv/jetson-containers/blob/master/packages/ros_entrypoint.sh

It basically just activates the ROS environment variables and runs the user command (if specified).
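In essence it boils down to something like this (a sketch, not the literal script - see the link above for the real thing; the setup.bash path is the standard ROS install location):

```shell
#!/usr/bin/env bash
# Sketch of a typical ROS container entrypoint: source the ROS setup
# script and then hand control to the user's command.
ROS_SETUP="/opt/ros/${ROS_DISTRO:-melodic}/setup.bash"
if [ -f "$ROS_SETUP" ]; then
    . "$ROS_SETUP"   # activates the ROS_* environment variables
fi
exec "$@"            # run whatever command was passed to `docker run`
```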

Thanks @dusty.

is the CUDA on the Nano a special version, or could it just be compiled from source or installed through a package manager, as on a desktop?

Kind regards
Nico

The CUDA libraries in JetPack (like the CUDA Toolkit, cuBLAS, cuDNN, TensorRT, etc.) are specific to Jetson. Through the nvidia-l4t apt repo (which that blog post re-enables), they can be installed from the package manager - however, these are different packages than you would install on an Ubuntu desktop (like the nvidia-driver-* ones for desktop).
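If you want to see what is already on your system from that repo, something like this works (the grep pattern is just a guess at the common package-name fragments):

```shell
# Read-only check: list installed JetPack CUDA-related packages.
PATTERN='cuda|cudnn|tensorrt|nvinfer'
if command -v dpkg >/dev/null 2>&1; then
    dpkg -l | grep -iE "$PATTERN" || echo "no matching packages installed"
else
    echo "dpkg not available on this system"
fi
```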


I assume the keyserver that the "Updating the Host" section points at is specific to Jetson?
Is there somewhere I can compile the packages from source (including maybe installing the correct driver for the graphics unit), since the x86_64 repos seem to be for 16.04 and 18.04 only?

That keyserver is for the JetPack apt repo, which does contain packages for both Jetson/ARM and x86_64. You don’t actually need the host PC packages if you don’t want them - they are for cross-compilation for ARM (from PC) and the like. You can’t compile them from source because they are distributed as binaries.

@Andrey1984 has also had an interesting idea on upgrading to 20.04 inside the l4t-base container, and keeping it all inside there:

https://forums.developer.nvidia.com/t/jetpack-4-5-production-release-with-l4t-32-5/166473/9

So then you could install 20.04 packages inside the upgraded container, and it would help with keeping a clean environment on your Jetson.