How to eliminate the GStreamer camera buffer

In my use case, I need as little latency as possible between the time of the command to take a picture and the time of the frame being returned to my program. There appears to be a buffer that stores frames from the camera, causing up to a 3 second discrepancy between when the picture was taken and when the command was issued. Is there a way to either eliminate this buffer, or read from the front so I get the newest frame instead of the oldest?

This is how I currently initialize the camera and take pictures in python:

def gstreamer_pipeline(capture_width=3280, capture_height=2464, exposure_time=90, framerate=8):
    exposure_time = exposure_time * 1000000  # ms to ns
    exp_time_str = '"' + str(exposure_time) + ' ' + str(exposure_time) + '"'

    return ('nvarguscamerasrc '
            'wbmode=0 awblock=true aelock=true '
            'gainrange="2 2" '
            'ispdigitalgainrange="1 1" '
            'exposuretimerange=%s ! '
            'video/x-raw(memory:NVMM), format=NV12, '
            'width=%d, height=%d, '
            'framerate=%d/1 ! '
            'nvvidconv flip-method=2 ! '
            'video/x-raw, format=I420 ! '
            'appsink '
            % (exp_time_str, capture_width, capture_height, framerate))

import cv2

camera = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
_, image = camera.read()
image = cv2.cvtColor(image, cv2.COLOR_YUV2BGR_I420)

Hi,

A trick to stop GStreamer elements from buffering is to add queues that discard older buffers (you can also discard newer buffers with leaky=upstream):

queue max-size-buffers=1 leaky=downstream

Also try adding sync=false to your appsink.

return ('nvarguscamerasrc '
        'wbmode=0 awblock=true aelock=true '
        'gainrange="2 2" '
        'ispdigitalgainrange="1 1" '
        'exposuretimerange=%s ! '
        'queue max-size-buffers=1 leaky=downstream ! '
        'video/x-raw(memory:NVMM), format=NV12, '
        'width=%d, height=%d, '
        'framerate=%d/1 ! '
        'nvvidconv flip-method=2 ! '
        'queue max-size-buffers=1 leaky=downstream ! '
        'video/x-raw, format=I420 ! '
        'appsink sync=false'
        % (exp_time_str, capture_width, capture_height, framerate))

Hi, thanks for the reply. Unfortunately I see no difference. I ran another script in a 2nd terminal that displays the current system time and took pictures of my screen. Most images were around 5 seconds old.

Edit: I tried posting my test script but the forums keep giving me error 14, 15, or 16 “blocked by security rules.”

Update: Here’s the GStreamer initialization, but I can’t upload the rest of the script for whatever reason.

def gstreamer_pipeline(capture_width=3280, capture_height=2464, exposure_time=90, framerate=8):
    exposure_time = exposure_time * 1000000  # ms to ns
    exp_time_str = '"' + str(exposure_time) + ' ' + str(exposure_time) + '"'

    return ('nvarguscamerasrc '
            'wbmode=0 awblock=true aelock=true '
            'gainrange="1 1" '
            'ispdigitalgainrange="1 1" '
            'exposuretimerange=%s ! '
            'queue max-size-buffers=1 leaky=downstream ! '
            'video/x-raw(memory:NVMM), format=NV12, '
            'width=%d, height=%d, '
            'framerate=%d/1 ! '
            'nvvidconv flip-method=2 ! '
            'queue max-size-buffers=1 leaky=downstream ! '
            'video/x-raw, format=I420 ! '
            'appsink sync=false '
            % (exp_time_str, capture_width, capture_height, framerate))

To summarize the rest of the script that I am unable to upload:

Initialize camera
start indefinite while loop
    take picture
    record time of picture as string
    convert image to BGR
    save image with timestamp in filename
    wait 1 second
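The steps above can be sketched roughly as follows. This is my reconstruction, not the actual script; the `timestamped_name` helper and the filename pattern are assumptions:

```python
import time

def timestamped_name(t=None):
    # Hypothetical helper: build a filename like "frame_2021-01-01_12-30-05.png"
    # from the capture time (current time if t is None)
    return "frame_" + time.strftime("%Y-%m-%d_%H-%M-%S", time.localtime(t)) + ".png"

def capture_loop(camera):
    # Indefinite loop matching the summary above
    import cv2  # imported here so the timestamp helper stays usable without OpenCV
    while True:
        ok, image = camera.read()                            # take picture
        name = timestamped_name()                            # record time of picture
        if not ok:
            continue
        image = cv2.cvtColor(image, cv2.COLOR_YUV2BGR_I420)  # convert to BGR
        cv2.imwrite(name, image)                             # save with timestamp in filename
        time.sleep(1)                                        # wait 1 second
```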

leaky=downstream appears to work as intended, since my images are now consistently 5 seconds behind; I used to get inconsistent delays between frames.

Hi,
I haven’t measured latency on the Jetson Nano, but 5 seconds seems like too much to be caused by GStreamer. Can you measure glass-to-glass latency with your current setup?
Here is a wiki page on how to do this measurement:
https://developer.ridgerun.com/wiki/index.php?title=Jetson_glass_to_glass_latency
There are other ways to measure latency, for example using GstShark to measure the latency of each element. You can get more info in this section:
https://developer.ridgerun.com/wiki/index.php?title=Xavier/GStreamer_Pipelines/Capture_and_Display#Latency
This can help you identify where the latency is being generated.

The very first image produced by the test script is ~0.3 seconds old. The latency builds up to 5 seconds in the next 3 images and stays there.

I’ll look into measuring the glass to glass latency and get back to you with some numbers.

This isn’t exactly how the glass-to-glass latency measurement was performed in those links, but it’s probably close enough.

I took 5 pictures of the system time displayed on my screen, then saved them, repeating that process 10 times. Saving all 5 pictures takes some time, so I get snapshots of the buffer at different points in time.

Image set A: The first 5 images have a latency of 0.15 ± 0.05 seconds which is perfectly fine for my use case.

6 seconds pass as the images are saved.

Image set B: The next 5 images have a latency of 6 seconds. They appear to just be the next images in the queue from the camera and clearly were not captured at the time of the command. The first image shows a time 0.05 seconds after the last image of set A.

Hi,

The appsink may queue buffers internally if you do not drain them fast enough; its default queue size is unlimited. You can limit the appsink queue with the “max-buffers” property.

@miguel.taylor

Adding a queue of one would only keep upstream elements from buffering, not downstream elements like the appsink. I think a queue with a size of one should only be used as a thread-boundary element. If you use the queue to avoid buffering in upstream elements, you still waste the resources required to process those buffers.

Thanks so much! Setting max-buffers=1 by itself didn’t fix the issue, but adding drop=True to discard old frames solved everything.

For future reference:

'appsink max-buffers=1 drop=True'
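Putting it together, here is a sketch of the pipeline function with the working fix applied. This is my own consolidation of the snippets in this thread, not the poster’s final script; I have dropped the intermediate queues on the assumption that the appsink now discards stale buffers itself:

```python
def gstreamer_pipeline(capture_width=3280, capture_height=2464,
                       exposure_time=90, framerate=8):
    # Same structure as the function posted earlier, with the appsink
    # configured to keep at most one buffer and drop old frames.
    exposure_ns = exposure_time * 1000000  # ms to ns
    exp_time_str = '"%d %d"' % (exposure_ns, exposure_ns)
    return (
        'nvarguscamerasrc '
        'wbmode=0 awblock=true aelock=true '
        'gainrange="1 1" '
        'ispdigitalgainrange="1 1" '
        'exposuretimerange=%s ! '
        'video/x-raw(memory:NVMM), format=NV12, '
        'width=%d, height=%d, framerate=%d/1 ! '
        'nvvidconv flip-method=2 ! '
        'video/x-raw, format=I420 ! '
        'appsink max-buffers=1 drop=true sync=false'
        % (exp_time_str, capture_width, capture_height, framerate)
    )
```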

Thank you so much! This solution removed the serious ~500 ms lag on a Jetson Nano with a Raspberry Pi V2 CSI camera.
Now the lag is not noticeable to my eyes.

I will be submitting a pull request to github.com/JetsonHacksNano/CSI-Camera to reflect this.