I’m testing on two Jetson Nano 2GB Dev Kits running L4T 32.6.1. For some reason, only one of them is able to run v4l2-ctl (see here for more info). On the one that does work with v4l2-ctl, I’ve been testing both the stock NVIDIA and Arducam drivers; the latter provides support for the full-resolution 4032x3040 at 30fps mode, while the NVIDIA driver only provides 3840x2160 at 30fps and 1920x1080 at 60fps. Here’s a breakdown of the functionality of Argus versus v4l2 when running the Arducam drivers (where mode 0 = 4032x3040 at 30fps):
And just to clarify, Argus is 100% able to actually capture 4032x3040 at 30fps and encode at 30fps via nvv4l2h265enc/nvv4l2h264enc. I have confirmed this multiple ways.
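For reference, this is the shape of one such end-to-end check (a sketch using the standard L4T GStreamer plugins and assuming the Arducam driver's mode 0 = 4032x3040@30; it is illustrative, not the exact command I ran):

```shell
# Sketch: capture 300 full-res frames through Argus and encode to H.265.
# Assumes sensor-mode=0 is the Arducam driver's 4032x3040@30 mode.
argus_fullres_test() {
  gst-launch-1.0 nvarguscamerasrc sensor-mode=0 num-buffers=300 ! \
    'video/x-raw(memory:NVMM),width=4032,height=3040,framerate=30/1' ! \
    nvv4l2h265enc ! h265parse ! matroskamux ! filesink location=argus_4032.mkv
}

# Only meaningful on a Jetson with the camera attached:
if [ -e /dev/video0 ] && command -v gst-launch-1.0 >/dev/null 2>&1; then
  argus_fullres_test
fi
```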
v4l2 via v4l2-ctl:
- 4032x3040 at 30fps - FAIL (kernel log: tegra_channel_error_status:error 20022 frame 90)
- 3840x2160 at 30fps - PASS
- 1920x1080 at 60fps - PASS
Example command: v4l2-ctl -d /dev/video0 --set-fmt-video=width=4032,height=3040,pixelformat=RG10 --set-ctrl bypass_mode=0,sensor_mode=0 --stream-mmap
To rule out v4l2-ctl being the limitation, I tested with the v4l2cuda sample app (modified to support raw bayer output) and it exhibits the same behavior as v4l2-ctl. Arducam kindly provided a custom driver with a 4032x3040 at 15fps mode. This change enabled v4l2-ctl to capture correctly at that resolution.
Given my (admittedly limited) understanding of the Argus and v4l2 camera pipelines, I believe they should share the same MIPI CSI bandwidth characteristics. The primary difference is that Argus interfaces with the ISP while v4l2 does not:
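To put rough numbers on that belief, here is the raw pixel payload each mode has to move over the CSI link (a back-of-the-envelope sketch assuming 10-bit RAW (RG10); blanking and D-PHY protocol overhead are ignored, so real link rates are somewhat higher):

```shell
# Raw pixel payload per mode, assuming 10-bit RAW (RG10).
bits_per_pixel=10

payload() {
  w=$1; h=$2; fps=$3
  echo "${w}x${h}@${fps}: $(( w * h * fps * bits_per_pixel / 1000000 )) Mbit/s"
}

payload 4032 3040 30   # full-res Arducam mode -> 3677 Mbit/s
payload 3840 2160 30   # stock NVIDIA 4K mode  -> 2488 Mbit/s
payload 1920 1080 60   #                       -> 1244 Mbit/s
```

So the failing full-resolution mode needs roughly 48% more payload bandwidth than the 3840x2160 mode that passes — a real jump, but evidently one the link sustains, since Argus handles it.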
So my question becomes, how can Argus handle 4032x3040 at 30fps while v4l2 can’t? Any help in understanding, or ideally, finding a way to actually capture in this mode with v4l2 would be greatly appreciated. Thanks!
The clock allocation approaches are different.
Could you please try running jetson_clocks.sh and setting the NVPModel clock configuration to MaxN when testing v4l2?
Thanks
Are the clock allocations something that can be modified either in the kernel sources, or at runtime? All of my tests have been run with jetson_clocks.sh at MaxN. Also, note that I updated the post above to include the correct error when trying to use v4l2 with 4032x3040 at 30fps:
tegra_channel_error_status:error 20022 frame 90
The other error I had previously included:
video4linux video0: frame start syncpt timeout!0
is what I see on my other Nano 2GB Dev Kit (the one that never works with v4l2, per this post).
Argus supports DFS (dynamic frequency scaling) based on load, so it can adjust the clocks at runtime; there is no DFS for the VI in the v4l2 path.
Also check pixel_clk_hz: the camera software uses the sensor pixel clock to calculate the sensor’s exposure and frame rate, so it must be set correctly to avoid potential issues.
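As an illustration of that relationship (the numbers below are hypothetical placeholders, not the IMX477's actual mode values — take line_length and frame_length from the sensor mode's device-tree entry):

```shell
# Frame rate implied by the sensor clock settings: pixel_clk_hz divided by
# the total pixels per frame (line length x frame length, both including
# blanking). All three values here are hypothetical placeholders.
pixel_clk_hz=300000000
line_length=5000    # pixels per line, incl. horizontal blanking
frame_length=2000   # lines per frame, incl. vertical blanking

# x1000 keeps integer math at millihertz resolution
fps_mhz=$(( pixel_clk_hz * 1000 / line_length / frame_length ))
echo "max frame rate: ${fps_mhz} mHz (~$(( fps_mhz / 1000 )) fps)"
```

If pixel_clk_hz is declared lower than the sensor's real pixel clock, the software computes exposure and frame-rate limits that the hardware cannot actually meet.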
Please keep running in performance mode, and also keep the VIC running at its maximum clock rate for testing.
For example:
disable runtime suspend of the VIC:
# echo on > /sys/devices/50000000.host1x/54340000.vic/power/control
set the userspace governor:
# echo userspace > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/governor
list the available frequencies and set the max frequency:
# cat /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/available_frequencies
# echo [max_freq_val] > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/max_freq
set the target frequency:
# echo [max_freq_val] > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/cur_freq
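The steps above can be bundled into one script (a sketch; the sysfs paths match the Nano per the commands above, and it must run as root):

```shell
# Pin the VIC at its maximum clock, following the sysfs steps above.
pin_vic_max() {
  base=$1                                   # e.g. /sys/devices/50000000.host1x/54340000.vic
  devfreq=$base/devfreq/$(basename "$base")
  echo on > "$base/power/control"           # disable runtime suspend
  echo userspace > "$devfreq/governor"      # take manual clock control
  # pick the highest advertised frequency, then pin both cap and target to it
  max_freq=$(tr ' ' '\n' < "$devfreq/available_frequencies" | sort -n | tail -1)
  echo "$max_freq" > "$devfreq/max_freq"
  echo "$max_freq" > "$devfreq/cur_freq"
  echo "VIC pinned at $max_freq Hz"
}

# VIC path as on the Nano (adjust for other Jetson modules); run as root.
if [ -d /sys/devices/50000000.host1x/54340000.vic ]; then
  pin_vic_max /sys/devices/50000000.host1x/54340000.vic
fi
```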
May I know what the actual use-case is for running v4l2 camera pipelines?
I’ve checked the CSI and USB Camera Features documentation again, and it surprised me that capture has only been validated up to 3280x2464.
My use-case requires long-exposure still photographs with raw bayer output (ideally captured in a trigger mode rather than streaming, but I know this isn’t yet supported). Per this thread, I was pointed to v4l2 as the only way long exposures are currently possible on the Nano. Why does the validated resolution surprise you?
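For context, this is the shape of capture I’m after (a sketch only; control names and the exposure units/range are driver-specific, so discover the real controls with `v4l2-ctl --list-ctrls` first — the values below are placeholders):

```shell
# Sketch: grab a single long-exposure raw bayer frame via V4L2.
# The exposure value's units and range are driver-specific placeholders;
# check them with: v4l2-ctl -d /dev/video0 --list-ctrls
capture_still() {
  dev=$1; exposure=$2; out=$3
  v4l2-ctl -d "$dev" \
    --set-fmt-video=width=3840,height=2160,pixelformat=RG10 \
    --set-ctrl bypass_mode=0,exposure="$exposure" \
    --stream-mmap --stream-count=1 --stream-to="$out"
}

# Only meaningful on a Jetson with the sensor attached:
if [ -e /dev/video0 ] && command -v v4l2-ctl >/dev/null 2>&1; then
  capture_still /dev/video0 500000 still.raw
fi
```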
So far, we don’t have sensor hardware that produces 4Kx3K sources on Nano platforms.
You may work around this by lowering the frame rate; please consider this a known issue, since the software release notes only state that capture has been validated up to 3280x2464.
I’d be happy to send you / NVIDIA a couple of IMX477 MIPI CSI sensors to test with. Would that be helpful? Do you know what the validation involves? Is there a test suite / unit tests?
This NVIDIA list suggests there are many 4Kx3K sensors that are compatible with the Nano (and other Jetson platforms). Thanks again.
Is there a reason for this? Given that GStreamer is fully capable of 4032x3040 at 30fps, I would expect v4l2 to also be able to function, and be supported, at that resolution/fps. I’d just love to understand why this isn’t the case.
I’m not sure I understand. There was an older L4T release whose IMX477 driver included several modes (including the 4032x3040 mode) that are no longer present in the current L4T release. The only way I can test 4032x3040 is by using the Arducam/RidgeRun patched drivers. What exactly will be changing in future releases regarding the IMX477 driver? IMO, the 4032x3040 mode should be added back into the stock NVIDIA driver, as it works perfectly fine with Argus at 30fps. Why it doesn’t work with v4l2 is still a mystery…
It’s due to some 2-lane configuration updates: the default 4K sensor mode has been replaced with standard 4K (i.e. 3840x2160).
So the v4l2 standard-control issue with 4032x3040@30fps will not be fixed.
OK, I don’t understand 100%, but as long as I can install the Arducam drivers that expose the 4032x3040@30fps mode, that should be fine. In my opinion, many users will want/need access to the full 4:3 FOV of the IMX477 (even if it has to be at a lower frame rate, though it does work at 30fps in GStreamer, 100%, on a Nano 2GB).