I’m testing image latency when capturing with OpenCV + GStreamer + nvcamerasrc on the TX2, using the Jetson’s on-board OmniVision camera.
My capture pipeline looks something like:
cv::VideoCapture("nvcamerasrc sensor-id=0 name=camerasrc ! video/x-raw(memory:NVMM), format=UYVY, width=some_width, height=some_height, framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx, width=some_width, height=some_height ! videoconvert ! video/x-raw, format=BGR ! appsink name=appsink");
Where I specify some_width and some_height.
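For reference, the surrounding capture code is essentially just the standard OpenCV + GStreamer loop; a minimal sketch (assuming an OpenCV 3.x build with the GStreamer backend enabled, and using 2592x1944 as an example size):

// Sketch of the capture loop around the pipeline string above.
// Assumes OpenCV was built with the GStreamer backend (cv::CAP_GSTREAMER).
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    std::string pipeline =
        "nvcamerasrc sensor-id=0 name=camerasrc ! "
        "video/x-raw(memory:NVMM), format=UYVY, width=2592, height=1944, framerate=30/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx, width=2592, height=1944 ! "
        "videoconvert ! video/x-raw, format=BGR ! "
        "appsink name=appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;                       // BGR frame in CPU memory
    while (cap.read(frame)) {
        // ... measure latency / process frame here ...
    }
    return 0;
}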
I measure the latency of incoming frames by taking GStreamer’s PTS timestamp of each incoming frame and adding the pipeline’s base time (the system’s monotonic clock time at which the pipeline started):
frame_timestamp = GST_TIME_AS_MSECONDS(gst_element_get_base_time(_pipeline) + GST_BUFFER_PTS(buffer));
and then subtracting that value from the current monotonic clock time, so:
latency = get_time_mono_ms() - frame_timestamp;
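For context, here is a stripped-down sketch of where that measurement happens (shown as an appsink new-sample callback purely for illustration; get_time_mono_ms() just reads CLOCK_MONOTONIC, and _pipeline is the top-level pipeline element):

// Sketch: latency measurement inside an appsink "new-sample" callback.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <time.h>

static gint64 get_time_mono_ms()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (gint64)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data)
{
    GstElement *_pipeline = GST_ELEMENT(user_data);   // top-level pipeline
    GstSample  *sample    = gst_app_sink_pull_sample(appsink);
    GstBuffer  *buffer    = gst_sample_get_buffer(sample);

    // base time (monotonic time when the pipeline went to PLAYING) + PTS
    gint64 frame_timestamp =
        GST_TIME_AS_MSECONDS(gst_element_get_base_time(_pipeline) +
                             GST_BUFFER_PTS(buffer));

    gint64 latency = get_time_mono_ms() - frame_timestamp;
    g_print("frame latency: %" G_GINT64_FORMAT " ms\n", latency);

    gst_sample_unref(sample);
    return GST_FLOW_OK;
}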
Here’s a list of latencies in ms for different frame sizes:
720p: ~11ms
1080p: ~35ms
2592x1944: ~155ms
I’m looking to get that number much lower, particularly for the 2592x1944 size. I realize there are a lot of memory copies currently happening in my OpenCV capture pipeline, but based on other forum posts I’m not sure how to reduce them. Additionally, I was hoping to adjust the output of nvcamerasrc or nvvidconv to something better suited for de-Bayering or conversion to BGR for OpenCV, but I don’t see any obvious path there. Lastly, I’ve played around with some nvcamerasrc element properties, like intent, but haven’t seen much (if any) improvement.
Any suggestions on how to reduce latency of incoming frames using the OpenCV+nvcamerasrc capture method?
Or, do I need to consider a different API, like libargus?