v4l2src is not responding in L4T R24.1

When I run the following command

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=640,height=480,format=(string)I420,framerate=(fraction)12/1" ! xvimagesink -e --gst-debug=10

A window shows up, but it never displays any video. If I run this locally on the TX1, the board sometimes becomes unresponsive until I reboot it. The only thing I see in the log is the following:

0:00:01.264803581  4526       0x612800 LOG                  basesrc gstbasesrc.c:2663:gst_base_src_loop:<v4l2src0> next_ts 99:99:99.999999999 size 4096
0:00:01.265230509  4526       0x612800 DEBUG                basesrc gstbasesrc.c:2388:gst_base_src_get_range:<v4l2src0> calling create offset 18446744073709551615 length 4096, time 0
0:00:01.265645509  4526       0x612800 DEBUG                   v4l2 gstv4l2bufferpool.c:864:gst_v4l2_buffer_pool_acquire_buffer:<v4l2bufferpool0> acquire
0:00:01.266054937  4526       0x612800 LOG                     v4l2 gstv4l2bufferpool.c:641:gst_v4l2_object_poll:<v4l2src0> polling device
0:00:01.266469572  4526       0x612800 DEBUG               GST_POLL gstpoll.c:1196:gst_poll_wait: timeout :99:99:99.999999999

Running

./yavta /dev/video0 -c1 -n1 -s1920x1080 -Fov.raw

I get:

Device /dev/video0 opened: vi-output-2 (platform:vi:2).
Video format set: width: 1920 height: 1080 buffer size: 4147200
Video format: RG10 (30314752) 1920x1080
1 buffers requested.
length: 4147200 offset: 0
Buffer 0 mapped at address 0x7f8226e000.

But it hangs here. I have tried re-flashing the board and using a different Jetson board, all with the same results.

Hello, cdriscoll:
The on-board sensor on the Jetson TX1 (OV5693) is a raw Bayer sensor, so it generates raw data through the V4L2 interface.
Your first pipeline is therefore not correct. You can use the nvcamerasrc plugin instead if you want to preview.
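As a rough sketch only (not verified on this exact release, and the caps may need adjusting for your L4T version), a preview pipeline through the ISP could look like:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)I420' ! xvimagesink -e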

For the yavta command, please specify the format as well.
e.g.
/home/ubuntu/yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw

This works on my side and generates the raw data.
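If you are not sure which formats your driver exposes, v4l2-ctl (from the v4l-utils package) can list them; the device node below is just an example:

v4l2-ctl -d /dev/video0 --list-formats-ext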

br
Chenjian

Hello jachen,

I also tried to capture RAW images on L4T R24.1 using V4L2 (image sensor: OV5693).
Unfortunately, the image file created by yavta contains only zeros:

ubuntu@tegra-ubuntu:~/yavta$ ./yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
Device /dev/video0 opened.
Device 'vi-output-2' on 'platform:vi:2' (driver 'tegra-video') is a video capture (without mplanes) device.
Video format set: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
Video format: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
1 buffers requested.
length: 4147200 offset: 0 timestamp type/source: mono/EoF
Buffer 0/0 mapped at address 0x7f80406000.
0 (0) [E] none 0 4147200 B 88.966057 88.966145 0.250 fps ts mono/EoF
Captured 1 frames in 4.002829 seconds (0.249823 fps, 1036067.009787 B/s).
1 buffers released.

Do you have any idea how to solve this issue?
I used the latest yavta version (git commit 449a146784d554ef40e5dea3483cb5e9bacbb2c8).

Best regards,
Christian

I have gotten the camera to work with the nvcamerasrc plugin; now I am trying to get the v4l2src plugin to work. If the pipeline I gave in the original post is invalid, could you suggest one that is valid?

Hi cdriscoll,

From the comment you made above, it looks like you were trying to stream raw data via v4l2src. If so, that is expected to fail, since v4l2src has no capability to do image processing from raw Bayer to YUV (I420).

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=640,height=480,format=(string)I420,framerate=(fraction)12/1" ! xvimagesink -e --gst-debug=10

Usually, we use v4l2src for YUV CSI sensors or USB cameras.
The reason nvcamerasrc works for you is that it involves the HW-based NVIDIA ISP.
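Just for illustration (a sketch, not tested here; /dev/video1 and the YUY2 caps are assumptions, please check your device with v4l2-ctl), a typical v4l2src pipeline for a camera that already outputs YUV would be:

gst-launch-1.0 v4l2src device=/dev/video1 ! 'video/x-raw, width=(int)640, height=(int)480, format=(string)YUY2, framerate=(fraction)30/1' ! videoconvert ! xvimagesink -e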

Hi nVConan,

I’m a little bit confused now. I thought that the ISP only supports the OV5693 image sensor and that the V4L2 interface is the recommended way to use RAW image sensors other than the OV5693.
Now, it seems that the V4L2 interface doesn’t support RAW image sensors.

Could you please explain the recommended way to interface a RAW image sensor other than the OV5693?

Best regards,
Christian

Hello, cmenssen:
That’s weird. I pasted my log for your reference.

/home/ubuntu/yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
Device /dev/video0 opened.
Device 'vi-output-2, ov5693 6-0036' on 'platform:vi:2' is a video capture device.
Video format set: SRGGB10 (30314752) 1920x1080 (stride 3840) buffer size 4147200
Video format: SRGGB10 (30314752) 1920x1080 (stride 3840) buffer size 4147200
3 buffers requested.
length: 4147200 offset: 0 timestamp type: monotonic
Buffer 0 mapped at address 0x7fa54f7000.
length: 4147200 offset: 4149248 timestamp type: monotonic
Buffer 1 mapped at address 0x7fa5102000.
length: 4147200 offset: 8298496 timestamp type: monotonic
Buffer 2 mapped at address 0x7fa4d0d000.
0 (0) [E] 0 4147200 bytes 8708.194036 8708.261344 15.036 fps
Captured 1 frames in 0.133813 seconds (7.473071 fps, 30992318.714627 B/s).
3 buffers released.
ubuntu@tegra-ubuntu:~/Work/bug_fixing/forum_947908$ ll
total 8112
drwxrwxr-x 2 ubuntu ubuntu 4096 Jul 1 06:04 ./
drwxrwxr-x 32 ubuntu ubuntu 4096 Jul 5 10:11 ../
-rw-rw-r-- 1 ubuntu ubuntu 203 Jul 1 06:01 command.txt
-rw-rw-r-- 1 ubuntu ubuntu 8294400 Jul 14 09:25 ov.raw
ubuntu@tegra-ubuntu:~/Work/bug_fixing/forum_947908$ hexdump -C ov.raw -n 1000
00000000 57 01 86 01 5f 01 80 01 83 01 72 01 68 01 86 01 |W..._.....r.h...|
00000010 7d 01 8f 01 53 01 84 01 83 01 99 01 90 01 92 01 |}...S...........|
00000020 74 01 6f 01 6e 01 92 01 68 01 88 01 60 01 86 01 |t.o.n...h...`...|
00000030 47 01 77 01 5c 01 7d 01 3f 01 87 01 6e 01 69 01 |G.w.\.}.?...n.i.|
00000040 7d 01 89 01 8b 01 6d 01 6e 01 68 01 68 01 72 01 |}.....m.n.h.h.r.|
00000050 68 01 73 01 54 01 89 01 7f 01 7a 01 40 01 6e 01 |h.s.T.....z.@.n.|
00000060 60 01 80 01 53 01 89 01 60 01 83 01 53 01 8c 01 |`...S...`...S...|
00000070 90 01 91 01 6e 01 71 01 60 01 8c 01 7a 01 9a 01 |....n.q.`...z...|
00000080 74 01 8d 01 71 01 7a 01 54 01 9b 01 77 01 77 01 |t...q.z.T...w.w.|
00000090 8b 01 80 01 7d 01 9e 01 74 01 97 01 6e 01 74 01 |....}...t...n.t.|
000000a0 7c 01 81 01 80 01 7f 01 6b 01 8c 01 6e 01 98 01 ||.......k...n...|
000000b0 77 01 89 01 6b 01 77 01 56 01 98 01 65 01 7d 01 |w...k.w.V...e.}.|
000000c0 70 01 74 01 5d 01 70 01 7c 01 7b 01 59 01 a7 01 |p.t.].p.|.{.Y...|
000000d0 7a 01 7d 01 94 01 89 01 8f 01 90 01 94 01 6e 01 |z.}...........n.|
000000e0 90 01 73 01 67 01 81 01 96 01 77 01 70 01 74 01 |..s.g.....w.p.t.|
For the OV5693, you can get YUV data (through the internal ISP) via the GST plugin nvcamerasrc, or RAW data (bypassing the internal ISP) via the v4l2 driver.

Hope that can help.

br
ChenJian

Hi,

If you are using a raw sensor you can capture with both nvcamerasrc and v4l2src; the difference is the following:

*nvcamerasrc: it will allow you to capture the raw frame and convert it in the ISP to produce an NV12 frame suitable for the encoder.
*v4l2src: it bypasses the ISP, so it will give you a raw frame. You would be able to capture it, but you wouldn't be able to encode it unless you convert it to NV12 first. For this reason v4l2src is recommended for sensors that output YUV directly, so you don't need to convert the frame to NV12 in software later - for instance [1] (see the sketch after this list).
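As a rough sketch of the YUV-sensor case (untested; the UYVY caps and resolution are assumptions that depend on your sensor, and omxh264enc is the hardware encoder element on L4T R24.x):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)1280, height=(int)720, format=(string)UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! filesink location=test.h264 -e

This writes a raw H.264 elementary stream; you could add a muxer such as qtmux if you need an MP4 container.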

The obvious conclusion would be to always use nvcamerasrc. However, several people have posted that it is not possible to use sensors other than the default one included on the Jetson board with nvcamerasrc.

-David

[1] https://developer.ridgerun.com/wiki/index.php?title=Toshiba_TC358743_Linux_driver_for_Tegra_X1