With R24.1 L4T, I tried to capture OV5693 Bayer data through yavta using the command below:
yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
Log of the above command:
Device /dev/video0 opened.
'vi-output-2' on 'platform:vi:2' (driver 'tegra-video') is a video capture (without mplanes) device.
Video format set: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
Video format: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
1 buffers requested.
length: 4147200 offset: 0 timestamp type/source: mono/EoF
Buffer 0/0 mapped at address 0x7f83784000.
[ 651.852158] video4linux video0: frame start syncpt timeout!
[ 651.858294] video4linux video0: TEGRA_VI_CSI_ERROR_STATUS 0x00000000
[ 653.851838] video4linux video0: MW_ACK_DONE syncpoint time out!
0 (0) [E] none 0 4147200 B 653.857026 653.857691 0.250 fps ts mono/EoF
Captured 1 frames in 4.000328 seconds (0.249979 fps, 1036714.943759 B/s).
1 buffers released.
An ov.raw file of size 4147200 bytes is created, but a hexdump shows the entire file is zeros.
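As an aside (not from the thread): rather than eyeballing a hexdump, a minimal Python sketch with a hypothetical helper can confirm whether a capture really contains nothing but zeros:

```python
# Hypothetical helper: scan a raw capture and report whether every
# byte is zero, reading in chunks so large files stay cheap.
def all_zero(path, chunk=1 << 20):
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                return True          # reached EOF without a non-zero byte
            if block.count(0) != len(block):
                return False         # found at least one non-zero byte

# Quick self-check with synthetic files:
with open("/tmp/zeros.raw", "wb") as f:
    f.write(bytes(4096))
with open("/tmp/data.raw", "wb") as f:
    f.write(bytes(4095) + b"\x01")
assert all_zero("/tmp/zeros.raw")
assert not all_zero("/tmp/data.raw")
```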
I also tried recording with nvcamerasrc, and I am able to record MP4 properly.
Can someone tell me if I am missing anything in HW or SW?
I could see multiple posts in which developers have raised this issue, but could not find any solid solution. Is there any workaround to read raw Bayer data, or do we have to use only nvcamerasrc?
We have released R24.2 with a re-architected camera framework and some fixes.
We suggest moving to this new version.
Thanks for the response!
Yes, I have already moved to R24.2 and the above issue is not happening now.
But I have a few other queries:
- We tried the steps below:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --stream-mmap --stream-count=100 -d /dev/video0
yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
but the recorded file is always sized for 2592x1944. Is this some default value which cannot be overridden?
Device /dev/video0 opened.
'vi-output-2, ov5693 6-0036' on 'platform:vi:2' (driver 'tegra-video') is a video capture (without mplanes) device.
Video format set: SRGGB10 (30314752) 2592x1944 (stride 3840) field none buffer size 7464960
Video format: SRGGB10 (30314752) 2592x1944 (stride 5184) field none buffer size 10077696
3 buffers requested.
length: 7464960 offset: 0 timestamp type/source: mono/EoF
Buffer 0/0 mapped at address 0x7fb61fa000.
length: 7464960 offset: 7467008 timestamp type/source: mono/EoF
Buffer 1/0 mapped at address 0x7fb5adb000.
length: 7464960 offset: 14934016 timestamp type/source: mono/EoF
Buffer 2/0 mapped at address 0x7fb53bc000.
Warning: bytes used 7464960 != image size 10077696 for plane 0
0 (0) [E] none 0 7464960 B 1686.709448 1686.777462 15.166 fps ts mono/EoF
Captured 1 frames in 0.133950 seconds (7.465452 fps, 55729299.847218 B/s).
3 buffers released.
ubuntu@tegra-ubuntu:~$ ls -l ov.raw
-rw-rw-r-- 1 ubuntu ubuntu 14929920 Sep 29 05:06 ov.raw
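A side note (my own reading of the log, not something the thread states): the numbers above are self-consistent if each buffer was filled using the 3840-byte line stride left over from the earlier 1920-wide format, while yavta computed the image size from the full 2592x1944 frame. A quick arithmetic check, assuming SRGGB10 occupies 2 bytes per pixel:

```python
# Assumption: SRGGB10 is stored as 2 bytes per pixel on Tegra.
def frame_bytes(stride, height):
    """Bytes occupied by one raw frame with the given line stride."""
    return stride * height

# yavta's expected image size: 2592 px * 2 B = 5184 B stride, 1944 lines
assert frame_bytes(5184, 1944) == 10077696
# "bytes used" per buffer matches a 3840 B stride (1920 px * 2 B)
assert frame_bytes(3840, 1944) == 7464960
# and the 14929920-byte ov.raw is exactly two such buffers
assert 2 * frame_bytes(3840, 1944) == 14929920
```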
- If recording is done with some other sensor (other than the OV5693) for which there is no ISP support, how do we verify the recorded file?
Yes, I will look at the driver code.
I am not sure whether the Bayer file is recorded properly or not, because the file is too big. I converted it to a TIFF file and tried viewing it through IrfanView, but could not succeed.
- R24.2 implements the V4L2 driver based on the new V4L2 framework. You can get the 1920x1080 raw data with the command below; yavta may not be fully supported yet.
- For raw2bmp, you can find it on the web.
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=ov1080.raw
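In case a ready-made raw2bmp is hard to find, here is a hedged Python sketch for eyeballing the capture as a grayscale PGM. The assumptions are mine, not from the thread: each RG10 sample occupies a little-endian 16-bit word, and the position of the 10 valid bits inside that word can vary between L4T releases, so the `shift` parameter may need adjusting:

```python
import struct

def raw_to_pgm(src, dst, width, height, shift=0):
    """Dump a 16-bit-per-sample raw Bayer frame as an 8-bit PGM.

    Assumptions (verify against your capture): little-endian 16-bit
    words, 10 significant bits per sample after shifting right by
    `shift`.
    """
    with open(src, "rb") as f:
        data = f.read(width * height * 2)
    samples = struct.unpack("<%dH" % (width * height), data)
    with open(dst, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))
        # drop the 2 low bits to scale 10-bit samples into 8 bits
        f.write(bytes(min(255, (s >> shift) >> 2) for s in samples))

# e.g. raw_to_pgm("ov1080.raw", "ov1080.pgm", 1920, 1080)
```

The PGM shows the un-demosaiced Bayer mosaic as a grayscale image; a recognizable (if slightly checkerboarded) scene means the capture is good. If it looks like pure noise, try `shift=6` or double-check the stride.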
Thanks for the update. We will use this method.
Also, can you please confirm that the TX1 is capable of processing raw Bayer of types RGGB10 and also BGGR10?
Yes, TX1 supports both RGGB10 and BGGR10.
Okay. Thanks for the info.
We are planning to connect three Bayer sensors to the TX1 and do simultaneous YUV conversion and encoding. Each sensor will have a different lens. In this case, can a single ISP configuration work for all three sensors?
FYI, we are going to get this ISP configuration from a third party.
No, you can't apply the same ISP config file to different sensors with different lenses.
Okay. So is processing three raw Bayer sensors (with different lenses) simultaneously using the two TX1 ISPs not possible, or can we have multiple ISP configuration files to achieve this?
Our concern is that we need to process all three simultaneously, so could you please let us know how the ISP will handle that?
ISP config files are for image tuning: one ISP config file per sensor if the sensors differ in any way. If you load the same config file onto a different sensor/lens module, you may have image quality problems.
Yes, we understand that the ISP is used for image tuning. But our concern is how these different ISP configuration files will be used by the nvcamerasrc plugin.
Assume we have created three different ISP configuration files, one for each Bayer sensor.
As Bayer-to-YUV conversion is needed, we will be using nvcamerasrc. Per our use case, this conversion needs to happen simultaneously for all three Bayer sensors.
Below are the questions:
There are only two ISP blocks in the TX1, but we have three configuration files and three Bayer sensors. How should nvcamerasrc be invoked in this scenario?
Can two ISPs handle three Bayer sensors at a time?
Okay Shane. Thanks for providing detailed information!
Let me consolidate our design…
We will be connecting one YUV sensor and three Bayer sensors through MIPI CSI. We will be creating three ISP configuration files, one for each Bayer sensor. We will use GStreamer plugins such as v4l2src (for the YUV sensor) and nvcamerasrc (for the three Bayer sensors) to encode and record an MP4 file from each sensor simultaneously.
Note: Using v4l2src and nvcamerasrc together is supported from R24.2.
Could you please let me know if the above design is possible in R24.2?
Okay Shane. So assuming the TX1 can support the above design, we will go ahead with the TX1.
The current R24.2 has a multiple-camera performance limitation: when launching multiple cameras at a time, they may not reach 30 fps.
Per our requirement, the YUV sensor will be 1080p @ 30 fps and the three Bayer sensors will be 720p @ 30 fps.
Please clarify the queries below:
- When you say multiple-camera performance, does it include the possibility of connecting one YUV and three Bayer sensors?
- As there is a limitation in FPS, will it be resolved in the next release, or can it not be done?
Okay Shane, sounds good.
Please clarify the queries below:
- The YUV sensor will be 1080p and the three Bayer sensors will be 720p. As you said, 30 fps cannot be achieved; roughly what fps will we get for the above configuration?
- Could you please let me know how the different Bayer sensors can be referenced in GStreamer while using nvcamerasrc? Generally, sensor-id is used. Please confirm whether with R24.2 we also need to use only sensor-id, or a different method.