A window shows up, but it never displays any video. If I run this locally on the TX1, the board sometimes becomes unresponsive until I reboot the device. The only thing I see in the log is the following:
Hello, cdriscoll:
The on-board sensor in the Jetson TX1 (OV5693) is a raw sensor, so it generates raw data through the V4L2 interface.
The first pipeline is therefore not correct. You can use the nvcamerasrc plugin instead if you want to preview.
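For example, a minimal preview pipeline with nvcamerasrc would look like the following (a sketch; nvoverlaysink renders straight to the display, and the caps should match a mode your sensor actually supports):

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvoverlaysink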
For the yavta command, please specify the format as well.
e.g.
/home/ubuntu/yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
This works on my side, and it generates the raw data.
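As a cross-check, v4l2-ctl from the v4l-utils package should be able to do an equivalent capture (a sketch; RG10 is the V4L2 fourcc for 10-bit SRGGB, and on some later L4T releases the Jetson-specific control bypass_mode=0 may also need to be set for direct V4L2 capture):

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --stream-mmap --stream-count=1 --stream-to=ov.raw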
I also tried to capture RAW images on L4T 24.1 using V4L2 (image sensor: OV5693).
Unfortunately, the image file created by yavta contains only zeros:
ubuntu@tegra-ubuntu:~/yavta$ ./yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
Device /dev/video0 opened.
Device 'vi-output-2' on 'platform:vi:2' (driver 'tegra-video') is a video capture (without mplanes) device.
Video format set: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
Video format: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
1 buffers requested.
length: 4147200 offset: 0 timestamp type/source: mono/EoF
Buffer 0/0 mapped at address 0x7f80406000.
0 (0) [E] none 0 4147200 B 88.966057 88.966145 0.250 fps ts mono/EoF
Captured 1 frames in 4.002829 seconds (0.249823 fps, 1036067.009787 B/s).
1 buffers released.
Do you have any idea how to solve this issue?
I used the latest yavta version (git commit 449a146784d554ef40e5dea3483cb5e9bacbb2c8).
I have gotten the camera to work with the nvcamerasrc plugin; now I am trying to get the v4l2src plugin to work. If the pipeline I gave in the original post is invalid, could you suggest one that is valid?
From the comment you made above, it looks like you were trying to stream raw data via v4l2src. If so, that is expected to fail, since v4l2src has no capability to do image processing from raw Bayer to YUV (I420).
Usually, we use v4l2src for a YUV CSI sensor or a USB camera.
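For example, with a USB camera that outputs YUY2, a pipeline like the following would preview directly (a sketch; the device node, format, and resolution are assumptions that depend on your camera):

gst-launch-1.0 v4l2src device=/dev/video1 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! xvimagesink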
The reason nvcamerasrc works for you is that it involves the hardware-based NVIDIA ISP.
I’m a little bit confused now. I thought that the ISP only supports the OV5693 image sensor and that the V4L2 interface is the recommended way to use RAW image sensors other than the OV5693.
Now, it seems that the V4L2 interface doesn’t support RAW image sensors.
Could you please explain the recommended way to interface a RAW image sensor other than the OV5693?
If you are using a raw sensor you can capture with both nvcamerasrc and v4l2src; the difference is the following:
*nvcamerasrc: it allows you to capture the raw frame and convert it in the ISP to produce an NV12 frame suitable for the encoder
*v4l2src: it bypasses the ISP, so it gives you a raw frame. You can capture it, but you cannot encode it unless you convert it to NV12 first. For this reason, v4l2src is recommended for sensors that output YUV directly, so you don't need to convert the frame to NV12 in software later - for instance [1]. A capture-and-encode sketch follows this list.
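As an illustration of the nvcamerasrc path above, a capture-and-encode pipeline on this release might look like the following (a sketch; omxh264enc is the hardware H.264 encoder element on L4T 24.x, and -e makes gst-launch send EOS on Ctrl-C so the MP4 gets finalized properly):

gst-launch-1.0 -e nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4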
Obviously, the conclusion would be: always use nvcamerasrc. However, several people have posted that it is not possible to use other sensors with nvcamerasrc, only the default one included on the Jetson board.