V4L2 real-time receive

I am working on different V4L2 projects for my company, and some of them need real-time, or at least near-real-time, V4L2 access.
For my last project I was working with the Xavier NX and AGX. Receiving V4L2 buffers had quite some jitter when using user pointers, but with MMAP buffers they arrived exactly on the microsecond without any jitter, and my project worked like a charm.

For another project we are using the Jetson Nano, and there using memory-mapped buffers does not solve this. Do you have any hints on how to achieve less jitter when receiving V4L2 frames?

I have already patched my kernel with the real-time patches and I am experimenting with thread priority and CPU affinity, but it seems there are always outliers that are 1-2 ms late.

Hints for improving the general real-time behavior of Jetson systems are also very welcome.

I seem to have the same problem as described in this thread:

Do you have similar advice for the Jetson Nano?

Please check if you can use V4L2_MEMORY_DMABUF mode. We have a demonstration in


In this mode, frame data is captured into the buffer directly and there is no additional memory copy. This is an optimal solution and it would be great if you could apply it to your use case.

Just to be clear: normally there is a copy inside the V4L2 pipeline, but with V4L2_MEMORY_DMABUF the data is not copied, and I can access it in userspace using NvBufferMemSyncForCpu or NvBufferMemSyncForDevice, depending on whether I want to use the CPU or the GPU.

What are your kernel parameters? How is rcu_nocbs configured? How did you partition your CPUs?
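To make the question concrete: a typical isolation setup on a 4-core system might look like the following kernel command line. The core numbers are purely illustrative and not a recommendation for any specific board:

```shell
# Illustrative kernel parameters for a 4-core system (e.g. added to the
# APPEND line of /boot/extlinux/extlinux.conf on a Jetson):
# cores 2-3 reserved for the real-time capture threads,
# IRQs and RCU callbacks kept on cores 0-1.
isolcpus=2,3 nohz_full=2,3 rcu_nocbs=2,3 irqaffinity=0,1
```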

Since NVIDIA already answered “here is an optimal solution, do it our way” I think you’ll be best off following that advice. If you’re unlucky there will be some NVIDIA magic they won’t tell you anything about which effectively prohibits you from doing it any other way.

I am having trouble adapting /usr/src/jetson_multimedia_api/samples/12_v4l2_camera_cuda for my use case. Since I have a RAW10 sensor, what type should I make the NvBuffer?

I tried making it GRAY8 and just doubling the width.

On VIDIOC_QUERYBUF I get this error:
VIDIOC_QUERYBUF error 25, Inappropriate ioctl for device

Are you sure this will fix the jitter? Can you confirm you see jitter on the Jetson Nano as well? Could it also depend on the JetPack version? I am using JetPack 4.5.

For us it would mean a lot of work to change the Jetson in this phase of the project, but it seems we have to upgrade to the Jetson Orin Nano if there is no fix for the Jetson Nano jitter.

Any idea what is different between the Nano and the Xavier NX that could result in this kind of jitter?

Please note that the same problem is discussed in this thread (V4L2 capture jitter problem). But while using MMAP buffers instead of USERPTR fixes the problem on the Xavier NX and AGX, it does not on the Jetson Nano.

RAW10 is not supported among the NvBuffer formats. We assumed your source was in YUV422, such as YUYV or UYVY.

For RAW10, please refer to the sample:


You can capture frame data into a CUDA buffer and implement CUDA code to convert it to another format.

v4l2cuda does not support DMABUF, so I am stuck with USERPTR and MMAP buffers, which show the described jitter.

Do you see any chance of getting rid of the jitter?

If you capture frame data through the v4l2cuda app, do you observe the jitter?

Yes! Are you not using a Jetson Nano?

We have not actually tried this use case. For RAW10, we would suggest using the ISP engine to get an optimal solution, but it looks like you are not going that route. For getting RAW10 frame data through V4L2, MMAP should be the optimal mode. This may be due to a hardware constraint: the Jetson Nano may simply not be able to achieve the target performance.

Okay, it seems we have to switch devices then…

“Real-time” camera receive seems impossible on the Jetson Nano, then. This is sad, but I can understand that you do not want to work on this, since the Jetson Nano is an old device.

But you should add jitter tests to your testing pipeline: since USERPTR also does not work on Xavier devices, this seems like a general problem across all Jetson devices.

Can you confirm you see the same jitter as I do with MMAP? Just capture a timestamp after each frame is received via V4L2 and plot the results.

Here is a graph of the jitter recorded using /usr/src/jetson_multimedia_api/samples/v4l2cuda with MMAP buffers.

You can see two outliers that break the real-time behavior.

Do you use a Bayer camera sensor? We have an RPi camera v2 and can set it up for a try.

Yes! I have our own implementation of a 10-bit Bayer sensor and an RPi camera for comparison. Same behavior. Thank you!

Please share the steps for reference. We will try it and see whether our results are similar to yours. It would be great if you could consider using a later Jetson platform.

Modify v4l2cuda to print the timestamp of each received frame. Either take a timestamp when the image is received from V4L2 or use the v4l2_buffer timestamp; both show the same jitter.

Replace the call to process_image() with:

/* requires <sys/time.h> and <stdio.h> */
struct timeval stamp;
gettimeofday(&stamp, NULL);
printf("%llu\n", (unsigned long long)stamp.tv_sec * 1000000ULL + stamp.tv_usec);
//process_image (buffers[buf.index].start);

Plotting the time between each frame will give you the jitter graph.

You can compare Xavier NX/AGX with MMAP and USERPTR. You will see no jitter with MMAP, but some with USERPTR.

Now compare MMAP on the Nano to MMAP on the Xavier. The Nano will show jitter, but the Xavier will not.

If you do this with no other load, the jitter will be relatively low. But once you add some load, you will see the big outliers like I did.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.