V4L2 driver returning empty and corrupted frame buffers from VIDIOC_DQBUF IOCTL

I am using a Raspberry Pi Camera Module 2 (IMX219) with my Jetson Nano, and I am trying to get high-framerate 640x480 video. I am targeting 180fps, since I’ve seen people on other forums achieve this with this sensor at this resolution. I modified the stock Nvidia V4L2 driver to add this resolution mode (adding entries to imx219_mode_tbls.h and tegra210-camera-rbpcv2-imx219.dtsi), and I am able to start a stream from a C program.

In imx219_mode_tbls.h, I needed to define a new set of register values for the new resolution mode. I copied the values from imx219_mode_1280x720_60fps and modified the ones relevant to sensor cropping, binning, and output resolution (and made the corresponding changes in the device tree file).

When I run my application, the video output is glitchy: the VIDIOC_DQBUF IOCTL seems to be returning empty or partially-filled frame buffers (i.e. half the scan lines of the buffer are updated and the rest are stale). I am using four user pointer buffers. It seems like only one of the four frame buffers is updated, and only part of it is filled. When I changed the frame_rate control with v4l2-ctl while my application was running, I found that lower framerates did not produce this problem.

With the default pixel clock settings (182.4MHz as defined in the device tree and sensor registers, again copied from the 720p60 mode), the video stream is fine up to about 110fps. If I change the registers and device tree to 169.6MHz (as used in the 720p120 line-doubled mode), the video works up to about 100fps. If I increase the pixel clock to 224MHz (extrapolating the PLL multiplier register values from those in the other modes), it works up to about 130fps. In every case, the stream is fine up to some threshold framerate, and increasing it by just 1fps immediately triggers the glitching.

I have also tried using nonblocking mode and increasing the number of frame buffers to 10, but neither changed the behavior. In no case does the IOCTL report any errors (no error flags in the v4l2_buffer struct, and errno is not set).

640x480 180fps video has fewer pixels per second than some of the other stock modes, which all work fine (including 720p 120fps), so I think the clock speed isn’t truly the limiting factor here. I don’t want to just keep pushing the clock speed up, since other drivers (for the Raspberry Pi) that supposedly achieve 180fps at 480p are using 182.4MHz. I am not familiar with the details of the CSI interface or how the V4L2 driver is implemented, so does anyone recognize what might be going on here and how I can fix it?

I read some more driver code and figured it out. I found this suspicious line in imx219.c:

frame_length = (u32)(mode->signal_properties.pixel_clock.val *
		(u64)mode->control_properties.framerate_factor /
		mode->image_properties.line_length / val);

At higher frame rates, frame_length becomes smaller. When it got below about 500, the video glitching started. I haven’t read the entire IMX219 datasheet, but I assume this is because frame_length was dropping below the sensor’s minimum: the 480 active rows plus some required vertical blanking lines. It turns out the sensor has a lot of clock speed headroom relative to the default, so I increased the pixel clock to 348.8MHz (I think 350MHz is the maximum supported?). I was then able to get 200fps, though there’s a bit of artifacting from signal integrity issues with the long ribbon cable I’m using.

Are you able to post your driver modifications here?

Sure. Here are the three files I changed (this also enables the 720p 120fps mode that was disabled in imx219_mode_tbls.h).
imx219_mode_tbls.h.patch (4.7 KB)
tegra210-camera-rbpcv2-dual-imx219.dtsi.patch (2.9 KB)
tegra210-camera-rbpcv2-imx219.dtsi.patch (1.6 KB)

For anyone who wants to duplicate this modification, I more or less followed this guide to recompile the kernel:

TEGRA_KERNEL_OUT=[full path to]/Linux_for_Tegra/source/public/build
TOOLCHAIN_PREFIX=[full path to]/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
KERNEL_MODULES_OUT=[full path to]/Linux_for_Tegra/source/public/modules

cd Linux_for_Tegra/source/public
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} tegra_defconfig
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc)
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} INSTALL_MOD_PATH=$KERNEL_MODULES_OUT modules_install

sudo cp build/arch/arm64/boot/Image [card root]/boot/Image
sudo rm -rf [card root]/boot/dtb/*
sudo cp -r build/arch/arm64/boot/dts/* [card root]/boot/dtb
sudo cp -r build/arch/arm64/boot/dts/* [card root]/boot

Then I added the line FDT /boot/tegra210-p3448-0000-p3449-0000-a02.dtb to the boot entry in /boot/extlinux/extlinux.conf.
(You might need to use a different .dtb file here if you have a different board. This one works for the Jetson Nano SD card version.)
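For context, the relevant entry in my extlinux.conf ends up looking something like this (a rough sketch — the LABEL name, INITRD, and APPEND lines depend on your L4T version and existing config; only the FDT line is the addition):

```
LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      FDT /boot/tegra210-p3448-0000-p3449-0000-a02.dtb
      INITRD /boot/initrd
      APPEND ${cbootargs} quiet
```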

When running at 200fps, signal integrity can be a problem with long ribbon cables. You can reduce the frame rate by reducing the clock speed and related register values in imx219_mode_tbls.h and the device tree files. Here are my notes on adjusting the registers:
IMX219 200FPS Registers.pdf (28.9 KB)

Edit: I made a GitHub repo with a more detailed writeup: GitHub - aWZHY0yQH81uOYvH/jetson-nano-200fps-cam: Jetson Nano V4L2 driver modifications to add a 640x480 200FPS mode for the Sony IMX219 (Raspberry Pi Camera 2)

Thanks for sharing.