New sensor driver works with v4l2 but nvarguscamerasrc does not work

Hello,

Can you help with this? We created a new sensor driver for the Jetson Nano. With v4l2-ctl we can capture raw frames, but nvarguscamerasrc does not work; there is no further output after the information below on the command line:
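(For reference, a raw v4l2-ctl capture of the kind mentioned above is typically done with a command of this shape; the RG10 pixel format and the device node are assumptions, not taken from the original post:)

```shell
# Sketch of a raw capture via v4l2-ctl on Jetson; RG10 (10-bit Bayer RGGB)
# and /dev/video0 are assumptions for this sensor.
v4l2-ctl -d /dev/video0 \
    --set-fmt-video=width=2688,height=1520,pixelformat=RG10 \
    --set-ctrl bypass_mode=0 \
    --stream-mmap --stream-count=100 \
    --stream-to=frame.raw
```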

leon@leon-desktop:~$ gst-launch-1.0 nvarguscamerasrc num-buffers=200 ! 'video/x-raw(memory:NVMM),width=2688, height=1520, format=NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 40 
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 2688 x 1520 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 1 
   Output Stream W = 2688 H = 1520 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: PowerService: requested_clock_Hz=26812800
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.

After enabling debug output and starting streaming, we see the log below, with the error “fence timeout on [ffffffc0836d9d80] after 1500ms”:

[  100.501155] fence timeout on [ffffffc0836d9d80] after 1500ms
[  100.501164] name=[nvhost_sync:20], current value=0 waiting value=1
[  100.501169] ---- mlocks ----

[  100.501181] ---- syncpts ----
[  100.501190] id 5 (disp1_a) min 771 max 771 refs 1 (previous client : )
[  100.501194] id 6 (disp1_b) min 2 max 2 refs 1 (previous client : )
[  100.501198] id 7 (disp1_c) min 2 max 2 refs 1 (previous client : )
[  100.501202] id 9 (gm20b_507) min 36924 max 36924 refs 1 (previous client : )
[  100.501209] fence timeout on [ffffffc0836d9c00] after 1500ms
[  100.501209] id 11 (gm20b_506) min 14 max 14 refs 1 (previous client : )
[  100.501213] id 12 (gm20b_505) min 4154 max 4154 refs 1 (previous client : gm20b_505)
[  100.501214] name=[nvhost_sync:21], current value=0 waiting value=1
[  100.501217] ---- mlocks ----
[  100.501218] id 13 (54340000.vic_gst-launch-1.0_0) min 204 max 204 refs 1 (previous client : vi)
[  100.501225] id 20 (54680000.isp_0) min 0 max 3 refs 4 (previous client : )

[  100.501229] id 21 (54680000.isp_1) min 0 max 3 refs 4 (previous client : )
[  100.501230] ---- syncpts ----
[  100.501232] id 22 (54680000.isp_2) min 7 max 15 refs 10 (previous client : )
[  100.501235] id 23 (54680000.isp_3) min 0 max 3 refs 4 (previous client : )
[  100.501239] id 24 (gm20b_504) min 686 max 686 refs 1 (previous client : )

It could be that the sensor configuration caused the ISP pipeline capture to fail.
Is there any VI/CSI log before the fence timeout?

Thanks ShaneCCC,

Sorry to trouble you further: which sensor configuration settings could impact the ISP pipeline (the pixel clock?), and do you have any suggestions for debugging and fixing this issue?

Hello ShaneCCC,

Below is the VI/CSI log, thanks!

[   64.119198] video4linux video0: Syncpoint already enabled at capture done!0
[   64.321876] video4linux video0: tegra_channel_capture_done: MW_ACK_DONE syncpoint time out!0
[   64.330610] video4linux video0: TEGRA_VI_CSI_ERROR_STATUS 0x00000005
[   64.330648] vi 54080000.vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0x00000080
[   64.330674] vi 54080000.vi: TEGRA_CSI_CIL_STATUS 0x00000000
[   64.330698] vi 54080000.vi: TEGRA_CSI_CILX_STATUS 0x00000000
[   64.330829] vi 54080000.vi: cil_settingtime was autocalculated
[   64.330846] vi 54080000.vi: csi clock settle time: 13, cil settle time: 10

It looks like v4l2-ctl failed to get raw data too.
The log shows:
PPA_SHORT_FRAME: Set when CSI-PPA receives a short frame. This bit gets set even if
CSI_PPA_PAD_FRAME specifies that short frames are to be padded to the correct line length.

[ 64.330610] video4linux video0: TEGRA_VI_CSI_ERROR_STATUS 0x00000005
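(Editor's aside: the TEGRA_VI_CSI_ERROR_STATUS values in this thread are bitmasks. A small sketch for listing which bits are set; the meaning of each bit is defined in the TX1 TRM and is not reproduced here:)

```python
# List the set bits of a status-register value (bit meanings: TX1 TRM).
def set_bits(status):
    return [b for b in range(32) if (status >> b) & 1]

# The three status values seen in this thread all have bit 2 set:
for s in (0x00000005, 0x00000006, 0x00000004):
    print(hex(s), set_bits(s))
```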

Thanks ShaneCCC,

Could you kindly share the expected value of TEGRA_VI_CSI_ERROR_STATUS? Also, any suggestions on how to debug this situation? Thanks a lot!

You can find it in the TX1 TRM.
A short frame means the sensor's output lines were not as expected. You can reduce active_y in the sensor mode of the device tree to narrow it down.
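(Editor's note: the value in question lives in the per-mode node of the sensor's device tree. A sketch of the relevant fragment follows; the property names follow the L4T sensor-programming convention, and the values shown are illustrative, not taken from this sensor:)

```
mode0 { /* 2688 x 1520 @ 30 fps mode */
    active_w = "2688";
    active_h = "1520";    /* reduce this value to narrow down short-frame errors */
    line_length = "2928"; /* illustrative; use the value from the sensor timing */
};
```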

Hello ShaneCCC,

I tried modifying the line_length and got the status below:

[  148.697443] video4linux video0: tegra_channel_capture_done: MW_ACK_DONE syncpoint time out!0
[  148.706767] video4linux video0: TEGRA_VI_CSI_ERROR_STATUS 0x00000006
[  148.706838] vi 54080000.vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0x00000080
[  148.706886] vi 54080000.vi: TEGRA_CSI_CIL_STATUS 0x00000000
[  148.707110] vi 54080000.vi: TEGRA_CSI_CILX_STATUS 0x00000000
[  148.707610] vi 54080000.vi: cil_settingtime was autocalculated
[  148.707748] vi 54080000.vi: csi clock settle time: 13, cil settle time: 10

It doesn't relate to the line length; it's about the output size. You should try reducing the active_y.

I checked the device tree but cannot find active_y; is it active_h? Thanks!

Yes, it's active_h.

Thanks ShaneCCC for the instruction, I will try to reduce the active_h. Just one more question: the resolution I added in the device tree comes from the sensor supplier. With 2688x1520 output, by how much should I try to decrease active_h? To 2688x1080? Any principles?

Usually the sensor outputs a few lines fewer than the vendor configuration. You can try reducing it by 2 at a time, e.g. 1518, 1516, 1514 …

Hello ShaneCCC,

I tried several numbers but still got the same failure. I wonder whether this error is also related to the data format, i.e. 10 bits vs. 12 bits per pixel.

I notice that the imx219 supports only 10 bits/pixel, and in the device tree it is fixed at a 10-bit depth. The sensor I am using is the imx347, which supports both 10-bit and 12-bit output. I configured it for 10-bit/pixel output; do I need to change the device tree accordingly?
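(Editor's note: in the L4T camera device-tree convention the per-mode bit depth is declared explicitly. A sketch of the properties involved follows; the exact property set varies between L4T releases, so treat these names and values as assumptions to check against your release's documentation:)

```
mode0 {
    mode_type = "bayer";
    pixel_phase = "rggb";        /* assumption: RGGB Bayer order */
    csi_pixel_bit_depth = "10";  /* must match the sensor's configured output */
    dynamic_pixel_bit_depth = "10";
};
```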

Thanks!

Hello yawei.yang,
The sensor datasheet provides information about the active width and height, line length, and bits per pixel. For a particular mode (say 1920x1080 @ 30 fps), there are fixed width, height, line-length, and bits/pixel parameters given in the sensor datasheet, and those fixed values have to be used in the DTS for a correct configuration. Refer to the Device Tree and Camera Modules section of the L4T Camera Development Guide (link below) for more details about setting these parameters (active_w, active_h, line-length, pixel depth, pix_clk) in the device tree:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fcamera_sensor_prog.html%23wwpID0E0XBB0HA

Thanks Sarah,

I have created the device tree with those parameters (active_w, active_h, line-length, pixel depth, pix_clk), and I am able to capture frames with the v4l2 utilities. The problem is that argus_camera and nvarguscamerasrc are not working, and the log I captured is:

[  148.697443] video4linux video0: tegra_channel_capture_done: MW_ACK_DONE syncpoint time out!0
[  148.706767] video4linux video0: TEGRA_VI_CSI_ERROR_STATUS 0x00000004
[  148.706838] vi 54080000.vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0x00000080
[  148.706886] vi 54080000.vi: TEGRA_CSI_CIL_STATUS 0x00000000
[  148.707110] vi 54080000.vi: TEGRA_CSI_CILX_STATUS 0x00000000
[  148.707610] vi 54080000.vi: cil_settingtime was autocalculated
[  148.707748] vi 54080000.vi: csi clock settle time: 13, cil settle time: 10

As instructed by Shane and per the TX1 TRM, this is the PPA_SHORT_FRAME error, meaning the number of lines received is smaller than the frame height. Do you have any suggestions for this?

Thanks!

Hello yawei.yang,

As Shane suggested, a short-frame error happens due to an incorrect output size, so it is better to check the active_w, active_h, and line_length values according to the imx347 datasheet. My suggestion would be to check whether the sensor is properly configured to output 2688x1520 @ 30 fps. The error we have here is all related to output size, so the parameters to consider are active_w, active_h, and line_length.

Hello Sarath,

There is one point confusing me. Taking the L4T documentation as reference:
https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-322/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide%2Fcamera_sensor_prog.html%23wwpID0E0340HA
The optical black and ignored effective pixels are added to active_h:
•active_w: Total width of the pixel-active region. In this case, it is 3840 + 4 (LI) + 12 (left margin) + 0 (right margin) = 3856.
•active_h: Total height of the pixel-active region. In this case, it is (2160 + 8 (OB) + 6 (Ignored Area Effective Pixels) + 50 (VBP)) * 2 = 4448.

But for the imx219, active_w is 3264 and active_h is 2464, which does not include the optical black and ignored effective pixels.
Which instruction should I follow to calculate active_w and active_h? Thanks!
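(Editor's note: as a numeric sanity check, the two calculations quoted from the L4T example above can be reproduced. This small sketch only restates the arithmetic of the documentation's example; the function and parameter names are my own, not L4T API names:)

```python
# Reproduce the active_w/active_h arithmetic from the quoted L4T example.

def active_w(effective_w, li, left_margin, right_margin):
    # Total width of the pixel-active region.
    return effective_w + li + left_margin + right_margin

def active_h(effective_h, ob, ignored, vbp):
    # Total height; the documented example multiplies the summed lines by 2.
    return (effective_h + ob + ignored + vbp) * 2

print(active_w(3840, 4, 12, 0))   # -> 3856, matching the example
print(active_h(2160, 8, 6, 50))   # -> 4448, matching the example
```

Whether the margin-inclusive style (as above) or the imx219 style (no margins added) applies to a given sensor is exactly the open question in this thread.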