Tegra camera module V4L2 I/O methods and accessing GStreamer's NVMM memory

Hi there,

We are trying to use IO_METHOD_USERPTR on our custom V4L2 video device, but it seems L4T R23.2's Tegra camera module does not support it. Has anyone else tried this I/O method?

This is the GStreamer pipeline we are testing with; it only works with io-mode set to "mmap":

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-raw, format=UYVY, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420, framerate=30/1' ! nvoverlaysink sync=false -vvv

The other question is whether we can access "memory:NVMM" buffers directly in any of NVIDIA's video sinks or in "nvvidconv", and pass them to our custom OpenGL GPU frame buffer.

Hi usaarizona,
Jetson TX1 supports two I/O streaming methods with L4T R23.2: MMAP and USERPTR.
Could you please let us know exactly what the issue is when you try USERPTR?
If you are not going to use an NVIDIA sink afterwards, why do you need the memory layout conversion along with the color conversion?

Hi Nvconan,

We have an H2C (HDMI-to-CSI) bridge chip driven by a V4L2 driver based on the tegra_camera module on the TX1. With the MMAP I/O method we can capture buffers from the V4L2 video device, and it works fine apart from a performance problem: each 4K (3840x2160) UYVY frame (around 16 MB) takes around 55-60 ms to copy from the kernel buffer to an OpenGL pixel buffer, which means we can reach around 18 FPS at most. By comparison, copying a malloc'ed 4K-sized buffer (RGBA, around 24 MB) to the GPU side takes only around 9-15 ms. Do you have any clue on this?
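To make the numbers concrete, the copy we are timing is essentially the memcpy in the sketch below (illustrative only, not our exact code; the PBO setup and the texture upload that follows are omitted):

/* Illustrative sketch of the timed copy: one dequeued 4K UYVY frame is
 * copied from the V4L2 buffer into a GL pixel buffer object. */
#include <string.h>
#include <GLES3/gl3.h>

#define FRAME_W 3840
#define FRAME_H 2160
#define FRAME_BYTES (FRAME_W * FRAME_H * 2)  /* UYVY = 2 bytes/pixel, ~16 MB */

void copy_frame_to_pbo(GLuint pbo, const void *v4l2_buf)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* Orphan the previous storage so the driver does not stall the copy. */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, FRAME_BYTES, NULL, GL_STREAM_DRAW);

    void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, FRAME_BYTES,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    /* This memcpy takes ~55-60 ms when v4l2_buf is the mmap'ed V4L2 buffer,
     * vs ~9-15 ms from a plain malloc'ed buffer of similar size. */
    memcpy(dst, v4l2_buf, FRAME_BYTES);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    /* The texture upload from the PBO would follow here. */
}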

As for the USERPTR method: we can successfully request buffers with VIDIOC_REQBUFS, but queuing a buffer with VIDIOC_QBUF always fails, and the error is always "Bad address". Did you ever try and test USERPTR? Thank you.

Kevin

Hi Nvconan,

The exact error code returned from the VIDIOC_QBUF ioctl call is the following:
EINVAL:
The buffer type is not supported, or the index is out of bounds, or no buffers have been allocated yet, or the userptr or length are invalid.
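
Since the last cause in that list points at the userptr or length, here for reference is roughly the sequence we are using, reduced to a sketch (single-planar capture, one buffer, error handling trimmed; not our exact code). One detail that reportedly matters to many drivers is that the user pointer be page-aligned and at least sizeimage bytes long:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);

    /* Ask the driver how large one frame is. */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_G_FMT, &fmt);

    /* Request one USERPTR buffer slot -- this step succeeds for us. */
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = 1;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_USERPTR;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        perror("VIDIOC_REQBUFS");

    /* Many drivers reject user pointers that are not page-aligned or are
     * shorter than sizeimage, which shows up as EINVAL from VIDIOC_QBUF. */
    void *mem = NULL;
    posix_memalign(&mem, 4096, fmt.fmt.pix.sizeimage);

    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type      = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory    = V4L2_MEMORY_USERPTR;
    buf.index     = 0;
    buf.m.userptr = (unsigned long)mem;
    buf.length    = fmt.fmt.pix.sizeimage;
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
        perror("VIDIOC_QBUF");   /* this is where we see the failure */

    free(mem);
    return 0;
}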

Kevin

Hi usaarizona,

Could you please check whether yavta works for you?
I did a quick local test (sensor with UYVY/640x480@30) and both MMAP and USERPTR work as expected.
FYI:

ubuntu@tegra-ubuntu:~$ ./yavta -c1 -n1 -s640x480 -fUYVY -Fcam.raw /dev/video0
Device /dev/video0 opened.
Device `vi' on `' is a video capture device.
Video format set: UYVY (59565955) 640x480 (stride 1280) buffer size 614400
Video format: UYVY (59565955) 640x480 (stride 1280) buffer size 614400
1 buffers requested.
length: 614400 offset: 0 timestamp type: monotonic
Buffer 0 mapped at address 0xf736f000.
0 (0) [-] 0 614400 bytes 1464762201.686934 296.512458 0.001 fps
Captured 1 frames in 0.018856 seconds (53.031982 fps, 32582849.478372 B/s).
1 buffers released.
ubuntu@tegra-ubuntu:~$ ./yavta -c1 -n1 -s640x480 -fUYVY -u -Fcam.raw /dev/video0
Device /dev/video0 opened.
Device `vi' on `' is a video capture device.
Video format set: UYVY (59565955) 640x480 (stride 1280) buffer size 614400
Video format: UYVY (59565955) 640x480 (stride 1280) buffer size 614400
1 buffers requested.
length: 614400 offset: 0 timestamp type: monotonic
Buffer 0 allocated at address 0xf71bb000.
0 (0) [-] 0 614400 bytes 1464762203.647772 298.473429 0.001 fps
Captured 1 frames in 0.029709 seconds (33.659666 fps, 20680498.814709 B/s).
1 buffers released.

Definitely, sharing buffers between the V4L2 video device and the subsequent components is the best choice.
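
For example, if the driver implements the standard VIDIOC_EXPBUF ioctl (in mainline V4L2 since kernel 3.8; whether the R23.2 vi driver supports it is something you would need to verify), a captured MMAP buffer can be exported as a dmabuf fd and consumed with no CPU copy. A minimal sketch:

/* Hedged sketch: export an MMAP capture buffer as a dmabuf fd so a later
 * component can import it zero-copy. Assumes the driver supports
 * VIDIOC_EXPBUF; error handling trimmed. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int export_capture_buffer(int video_fd, unsigned int index)
{
    struct v4l2_exportbuffer exp;
    memset(&exp, 0, sizeof(exp));
    exp.type  = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    exp.index = index;      /* buffer previously set up with REQBUFS/MMAP */

    if (ioctl(video_fd, VIDIOC_EXPBUF, &exp) < 0)
        return -1;          /* driver does not support dmabuf export */

    return exp.fd;          /* dmabuf fd for the consumer to import */
}

The returned fd could then be imported on the GL side, for example as an EGLImage via EGL_EXT_image_dma_buf_import, instead of memcpy'ing each frame.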

Hi nVConan,

We tested our video device with yavta, and you are right: it does support user-pointer mode. We compared our code with yavta's code and finally found the bug on our side, so I can confirm that the TX1's (L4T R23.2) "vi" V4L2 driver supports IO_METHOD_USERPTR. Thanks for your hint.

Kevin