nvcamerasrc+OpenCV image latency

I’m testing image latency when capturing with OpenCV + GStreamer + nvcamerasrc on the TX2, using the Jetson’s on-board OmniVision camera.

My capture pipeline looks something like:

cv::VideoCapture cap("nvcamerasrc sensor-id=0 name=camerasrc ! video/x-raw(memory:NVMM), format=UYVY, width=some_width, height=some_height, framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx, width=some_width, height=some_height ! videoconvert ! video/x-raw, format=BGR ! appsink name=appsink");

Where I specify some_width and some_height.

I measure the latency of incoming frames using GStreamer’s PTS timestamp on each frame, adding the pipeline’s start time on the system’s monotonic clock:

frame_timestamp = GST_TIME_AS_MSECONDS(gst_element_get_base_time(_pipeline) + GST_BUFFER_PTS(buffer));

and then subtract that value from the current monotonic clock time, so:

latency = get_time_mono_ms() - frame_timestamp

Here’s a list of latencies in ms for different frame sizes:

720p: ~11ms
1080p: ~35ms
2592x1944: ~155ms

I’m looking to get that number much lower, particularly for the 2592x1944 size. I realize there are a lot of memory copies currently happening in my OpenCV capture pipeline, but based on other forum posts I’m not sure how to reduce them. Additionally, I was hoping to adjust the output of nvcamerasrc or nvvidconv to be a better fit for de-Bayering or conversion to BGR for OpenCV, but I don’t see any obvious path there. Lastly, I’ve played around with some nvcamerasrc element properties like intent, but haven’t seen much (if any) improvement.

Any suggestions on how to reduce latency of incoming frames using the OpenCV+nvcamerasrc capture method?

Or, do I need to consider a different API, like libargus?

Hi Allanm,

I’m not sure if you have checked, but according to NVIDIA’s documentation, the nvcamerasrc plugin is deprecated as of the L4T 28.1 release and will no longer be supported. So libargus is the way to go moving forward.

There is also a hint that a new GStreamer plugin based on libargus will be released, but there is no estimate of when it will be available.

Do you know of an example of libargus and OpenCV being used to create a cv::Mat?

Hi Allanm,

The pipeline is slow because OpenCV currently only accepts BGR and grayscale input, and videoconvert does the conversion on the CPU alone, which causes the slowdown.

I see OpenCV ToT (top of tree) accepts more formats now. Please check.

Hi WayneWWW-

Thanks for the pointer, I’ll take a look. However, as SirRobert pointed out it sounds like nvcamerasrc is going away, so I am interested in learning more about using libargus with OpenCV. I took a look at some of the tutorials and it doesn’t appear there is any direct LibArgus->EGLStreams->OpenCV code in there. Any direction there would be very helpful.


WayneWWW- Apologies for the multiple messages, but I tried modifying my nvcamerasrc pipeline to output GRAY8 images directly (removing videoconvert):

nvcamerasrc sensor-id=0 name=camerasrc ! video/x-raw(memory:NVMM), format=UYVY, width=some_width, height=some_height, framerate=30/1 ! nvvidconv ! video/x-raw, format=GRAY8, width=some_width, height=some_height ! appsink name=appsink

but my latencies are still rather high. For 2592x1944, frames are at least 200 ms old, and usually 250 ms or more, by the time I receive them (via a GstAppSinkCallback on the new_buffer signal).

It seems like removing videoconvert helped a bit, but I would’ve expected much greater gains, and I would expect the frames to be far more recent than 250 ms old.

Thanks for the help!

Edit: the latency appears to be due to the appsink queueing up frames because my application isn’t always pulling them fast enough. When used properly, I’m getting ~7ms of latency on an incoming GRAY8 or I420 image. Thanks again for the help.
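For anyone hitting the same queueing problem: appsink has drop and max-buffers properties that keep it from backing up old frames when the application falls behind. Setting them in the launch string looks roughly like this (a sketch; tune max-buffers to taste):

```
... ! video/x-raw, format=GRAY8 ! appsink name=appsink drop=true max-buffers=1
```

With drop=true and max-buffers=1, appsink discards stale buffers instead of queueing them, so each pull returns the most recent frame.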


Good to hear that!

You mention that your latency is low “when used properly.”
What do you mean by that?

rm95- I believe what I meant was that I just wasn’t pulling frames from the appsink fast enough (my polling loop was too slow). That being said, I’ve dropped using nvcamerasrc in favor of the Argus library, which results in much faster (lower-latency) image capture. I would highly recommend looking into that path instead.