TX2 Onboard camera with OpenCV 4.1 works but has high latency

I’ve compiled OpenCV 4.1 for the TX2 and have successfully connected to the onboard camera with GStreamer. However, when I connect to the camera from OpenCV in Python, the feed I get is delayed by several seconds.
Here’s the code I use to capture:

cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)NV12, framerate=(fraction)2/3 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER)

I would really appreciate if anyone knows and can share a way to get the feed in real time.
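The rest of my script is just a basic read-and-display loop, roughly like this (simplified):

import cv2

# cap created with the GStreamer pipeline string shown above
while True:
    ret, frame = cap.read()        # fetch the next frame from appsink
    if not ret:
        break
    cv2.imshow("camera", frame)    # display; this is where the delay shows up
    if cv2.waitKey(1) == 27:       # press ESC to quit
        break
cap.release()
cv2.destroyAllWindows()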

The 2/3 framerate is probably too low for nvcamerasrc; you may instead try to set this framerate in the output caps of nvvidconv:

nvarguscamerasrc  ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1280, height=(int)720, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx, framerate=(int)2/3 ! videoconvert ! video/x-raw, format=(string)BGR  ! appsink

[EDIT: the pipeline seems to work when 2/3 is cast to int, but cast as fraction it fails to open the camera (I’m currently running R28.2 on a TX2 with nvcamerasrc, not nvarguscamerasrc, so it may be different in your case).]
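Used from OpenCV in Python, that would be something like this (just a sketch; I haven’t tested this exact string since I’m on nvcamerasrc):

import cv2

gst_str = ("nvarguscamerasrc ! video/x-raw(memory:NVMM), format=(string)NV12, "
           "width=(int)1280, height=(int)720, framerate=(fraction)30/1 ! "
           "nvvidconv ! video/x-raw, format=(string)BGRx, framerate=(int)2/3 ! "
           "videoconvert ! video/x-raw, format=(string)BGR ! appsink")
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
print(cap.isOpened())   # should print True if the pipeline could be negotiated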

GST_ARGUS: PowerService: requested_clock_Hz=12096000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: 
PowerServiceHwVic::cleanupResources

That is the output I get, and cap.read()[0] (i.e. ret) is False.

Well, I cannot say much more, but in my case it works fine at 30 fps with less than half a second latency (it may take a couple of seconds at launch, but after that it’s fine).
You may try a standard pipeline such as:

nvarguscamerasrc  ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1280, height=(int)720, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR  ! appsink

and if it works, try to customize from there.
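A quick sanity check is to time the read loop and see what frame rate you actually get, something like (just a sketch):

import time
import cv2

gst_str = ("nvarguscamerasrc ! video/x-raw(memory:NVMM), format=(string)NV12, "
           "width=(int)1280, height=(int)720, framerate=(fraction)30/1 ! "
           "nvvidconv ! video/x-raw, format=(string)BGRx ! "
           "videoconvert ! video/x-raw, format=(string)BGR ! appsink")
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

frames = 0
t0 = time.time()
while frames < 300:                     # about 10 seconds at 30 fps
    ret, _ = cap.read()
    if not ret:
        break
    frames += 1
print("measured fps:", frames / (time.time() - t0))
cap.release()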

If it doesn’t work, check that you’re running the expected OpenCV version from Python with:

print(cv2.__version__)
print(cv2.getBuildInformation())

I tweaked your code a bit for my use case and it’s working. However, I’m still getting the delay. Could it have to do with the fact that I’m processing the video for 0.6 seconds before displaying it? Is there any way to make each new frame reflect the current view of the camera at the moment it’s fetched?

TO CLARIFY: the delay is much more than 0.6 seconds.

I’d suggest first trying without your processing to check the basic setup.

  1. Check with the commands from my previous post that your Python has the expected OpenCV 4.1 version.

  2. Try this example. You could reduce the GST_DEBUG level from 3 to 0, because level 3 may be quite verbose and might end up slowing down the terminal.
    If it still shows a delay of several seconds (a few seconds after pipeline creation), then something is probably wrong, such as an OpenCV build with the wrong options or the system being loaded by something else. Also check your nvpmodel setting and try MAXN (-m 0) so that all 6 cores are available.

  3. When that’s ok, you may try to optimize further by adding a queue in the pipeline after videoconvert, so that it can run on a different core than appsink while your OpenCV application does its processing (see the example pipeline after this list).

  4. You may also boost the clocks with the jetson_clocks script. If that’s not enough, check where the bottleneck is with tegrastats.
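For step 3, the pipeline with the extra queue would look like this (same caps as above, with queue added before appsink):

nvarguscamerasrc  ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1280, height=(int)720, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! queue ! appsink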