Ideal GStreamer Pipeline for using Appsink

Hi there, first off this forum is extremely helpful so thank you to NVIDIA for being so active around here.

Second, I am trying to find the ideal way of getting RGB images into C++ using an NVIDIA Jetson Nano and a CSI IMX219 camera. So far I have tried numerous pipelines, both executed through the OpenCV VideoCapture object and constructed manually as a GStreamer pipeline in code. My ultimate goal is a ROS2 publisher that publishes RGB images at a consistent 30 fps, ideally at 1280x720 or 640x480. I have not been able to get anywhere close to 30 fps consistently with reasonable CPU usage.

Right now the only way I can get the appsink element to work is to include a ‘videoconvert’ element in the pipeline, which does the color space conversion on the CPU and therefore uses a lot of CPU resources. Is there a way to have the GPU do all of the color processing and then simply copy or share the RGB buffer with the CPU for image processing? I have tried using nvvidconv alone, but it will not link directly to appsink, which I assume is because the buffer is still in NVMM (GPU) memory, though I am not sure about that.

The current pipeline I am using, which I know works, is given below, but performance only reaches around 16 fps when published to my ROS2 topic. A simplified version of my capture code is included after the pipeline, since I think some of the copying done with OpenCV might have room for improvement.

nvarguscamerasrc silent=false sensor_id=0 sensor_mode=4 ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink
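Roughly, my capture loop in C++ looks like the sketch below (simplified; the ROS2 publishing side is omitted, and it assumes OpenCV was built with GStreamer support so cv::CAP_GSTREAMER is available):

#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // The same pipeline as above, opened through OpenCV's GStreamer backend.
    std::string pipeline =
        "nvarguscamerasrc silent=false sensor_id=0 sensor_mode=4 ! "
        "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
        "nvvidconv flip-method=0 ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        return -1;
    }

    cv::Mat frame;
    while (cap.read(frame)) {
        // frame is a BGR cv::Mat here; in the real node it gets copied into a
        // sensor_msgs::msg::Image and published, which likely adds extra copies.
    }
    return 0;
}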

A more ideal pipeline would be something like the following:

nvarguscamerasrc silent=false sensor_id=0 sensor_mode=4 ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! nvvidconv flip-method=0 ! video/x-raw(memory:NVMM), format=I420 ! nvvidconv ! video/x-raw, format=RGBA ! appsink
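For context, the kind of appsink code I would like to end up with is roughly the following untested sketch, which uses a single nvvidconv to produce system-memory RGBA and only does the cheap RGBA-to-BGR channel reorder on the CPU. The caps, the fixed 1280x720 size, and the lack of stride handling are simplifications on my part:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    // nvvidconv does the NVMM -> system-memory RGBA conversion in hardware.
    const char *launch =
        "nvarguscamerasrc silent=false sensor_id=0 sensor_mode=4 ! "
        "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
        "nvvidconv flip-method=0 ! video/x-raw, format=RGBA ! "
        "appsink name=sink drop=true max-buffers=1";

    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(launch, &err);
    if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", err->message);
        g_error_free(err);
        return -1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    while (true) {
        GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample) {
            break;  // EOS or error
        }
        GstBuffer *buffer = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
            // RGBA -> BGR is only a channel reorder, much cheaper than the
            // full colorspace conversion videoconvert performs from I420/NV12.
            cv::Mat rgba(720, 1280, CV_8UC4, (void *)map.data);
            cv::Mat bgr;
            cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR);
            // ... publish bgr to the ROS2 topic here ...
            gst_buffer_unmap(buffer, &map);
        }
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}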

Also, is nvarguscamerasrc the best option for this application? I have seen that there are also v4l2 and VPI for getting camera input, but I have not seen any evidence that either would be better.

I am quite new to image pipelines, but I assume there has to be a good way to do this. Any help is welcome! Thank you in advance!

Hi,
The hardware converter does not support BGR, so you would need to use a software converter. Please check the discussion in
[Gstreamer] nvvidconv, BGR as INPUT

To get optimal performance, please execute sudo nvpmodel -m 0 and sudo jetson_clocks so that the CPU cores run at maximum clock.