How to correctly set pxl_clk_hz, and strange behaviour with frame pause time

Reading the Sensor Driver Programming Guide, I’ve seen in the Sensor Pixel Clock section that there are different ways to calculate the pxl_clk_hz parameter in the device tree. I’m using an FPGA to send the data to the TX2, and the parameters are the following:

  • Sensor data rate = 1.2 Gbps
  • Number of lanes = 4
  • Bits per pixel = 10

Calculated this way, pxl_clk_hz = 1.2 Gbps × 4 / 10 = 480 000 000. However, considering the third option with the following values:

  • Width = 1920
  • Height = 1080
  • Framerate = 50 fps

I get a completely different pxl_clk_hz of 1920 × 1080 × 50 = 103 680 000. Which one is correct? I’m focusing on this because of a strange behaviour in the data acquisition: with a frame pause of >4 ms @ 50 fps, every second frame is not captured, so I get frames 1, 3, 5, 7… However, if I set the frame pause to <3.24 ms, I get all the frames correctly.
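For reference, the two candidate calculations can be reproduced with shell arithmetic (a quick sanity check using the numbers above):

```shell
#!/bin/sh
# pxl_clk_hz from the serial data rate: rate * lanes / bits_per_pixel
data_rate=1200000000   # 1.2 Gbps
lanes=4
bpp=10
echo "from data rate:  $(( data_rate * lanes / bpp ))"   # 480000000

# pxl_clk_hz from the active video timing: width * height * fps
width=1920
height=1080
fps=50
echo "from resolution: $(( width * height * fps ))"      # 103680000
```

The gap between the two numbers is expected: the first is the instantaneous pixel rate while data is actually on the CSI bus, while the second is the average rate over a whole frame period, so blanking and frame-pause time account for the difference.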

    Summing up: with this signal (4 ms frame pause), I get images at 25 fps, as every second frame is not acquired, without any error being reported in either the trace log or dmesg.

    When reducing the frame pause to 3.24 ms, I get the images at 50 fps.

    pxl_clk_hz is used to calculate the clocks below. You can set them to their maximum to confirm whether host bandwidth causes the issue.

    sudo su
    echo 1 > /sys/kernel/debug/bpmp/debug/clk/vi/mrq_rate_locked
    echo 1 > /sys/kernel/debug/bpmp/debug/clk/isp/mrq_rate_locked
    echo 1 > /sys/kernel/debug/bpmp/debug/clk/nvcsi/mrq_rate_locked
    cat /sys/kernel/debug/bpmp/debug/clk/vi/max_rate
    cat /sys/kernel/debug/bpmp/debug/clk/isp/max_rate
    cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/max_rate
    echo ${max_rate} > /sys/kernel/debug/bpmp/debug/clk/vi/rate
    echo ${max_rate} > /sys/kernel/debug/bpmp/debug/clk/isp/rate
    echo ${max_rate} > /sys/kernel/debug/bpmp/debug/clk/nvcsi/rate
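The same steps can be wrapped in a small helper (a convenience sketch only; the debugfs path is the one used in the commands above and requires root):

```shell
#!/bin/sh
# Lock a clock's rate and pin it to its maximum, given its debugfs directory.
lock_clock_max() {
    dir="$1"
    echo 1 > "$dir/mrq_rate_locked"       # prevent BPMP from changing the rate
    cat "$dir/max_rate" > "$dir/rate"     # pin the clock at its maximum
}

CLK_BASE=/sys/kernel/debug/bpmp/debug/clk
if [ -d "$CLK_BASE" ]; then
    for clk in vi isp nvcsi; do
        lock_clock_max "$CLK_BASE/$clk"
    done
fi
```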

    I’ve tried it, setting them to the following values:

    root@nvidia-desktop:/home/nvidia# cat /sys/kernel/debug/bpmp/debug/clk/vi/max_rate
    1011200000
    root@nvidia-desktop:/home/nvidia# cat /sys/kernel/debug/bpmp/debug/clk/isp/max_rate
    1088000000
    root@nvidia-desktop:/home/nvidia# cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/max_rate
    225000000
    
    echo 1011200000  > /sys/kernel/debug/bpmp/debug/clk/vi/rate
    echo 1088000000 > /sys/kernel/debug/bpmp/debug/clk/isp/rate
    echo 225000000 > /sys/kernel/debug/bpmp/debug/clk/nvcsi/rate
    

    Still, for a 1920x1080 @ 50 Hz signal, I get no errors in either the trace log or dmesg.

    nvidia@nvidia-desktop:~/Pictures$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=GRAY16_LE --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1000 --stream-to=img.raw
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.00 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.24 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.16 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 43.89 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 44.91 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 45.68 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.29 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.16 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 45.73 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.25 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.54 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.87 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.74 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.00 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.23 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.40 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.56 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.30 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.13 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 46.15 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 45.52 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 45.16 fps
    <<<<
    

    The framerate is not constant, which I can understand if frames are being lost somewhere. As mentioned in the original post, I get the correct framerate by making the frame pause smaller: when I read about 100 frames, none are lost, but when the number of frames is much higher, some frames are not processed.

    Lastly, I wrote a script to count the number of “successful” trace log messages, and this is what I got:

    CSIMUX_STREAM: 1
    CHANSEL_PXL_SOF: 1004
    ATOMP_FS: 1004
    CHANSEL_LOAD_FRAMED: 963
    CHANSEL_PXL_EOF: 1004
    ATOMP_FE: 1004
    

    Does this mean it tried to process 1004 images and captured 963? If I process the raw file with ImageJ, it shows 1000 images, so I don’t know how to interpret these values.
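For reference, counts like the ones above can be produced from a saved trace log with a short helper (the file name is an argument; `trace.log` below is just an example name):

```shell
#!/bin/sh
# Count occurrences of each RTCPU/VI event tag in a captured trace log.
count_events() {
    for tag in CSIMUX_STREAM CHANSEL_PXL_SOF ATOMP_FS \
               CHANSEL_LOAD_FRAMED CHANSEL_PXL_EOF ATOMP_FE; do
        printf '%s: %s\n' "$tag" "$(grep -c "$tag" "$1")"
    done
}

# Typical use, after `cat /sys/kernel/debug/tracing/trace > trace.log`:
# count_events trace.log
```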

    I’ve analysed the position and number of lost frames, and these are the results:

    Missing Number:
    index: 140, numbers: 1
    index: 189, numbers: 4
    index: 227, numbers: 4
    index: 302, numbers: 4
    index: 340, numbers: 12
    index: 372, numbers: 5
    index: 410, numbers: 4
    index: 448, numbers: 4
    index: 486, numbers: 4
    index: 523, numbers: 5
    index: 560, numbers: 4
    index: 566, numbers: 2
    index: 599, numbers: 3
    index: 635, numbers: 5
    index: 671, numbers: 4
    index: 708, numbers: 4
    index: 746, numbers: 9
    index: 768, numbers: 4
    index: 804, numbers: 5
    index: 818, numbers: 7
    index: 829, numbers: 4
    index: 840, numbers: 9
    index: 841, numbers: 17
    index: 863, numbers: 4
    index: 901, numbers: 4
    index: 938, numbers: 4
    index: 976, numbers: 4
    

    Index refers to the position of the frame that is not correctly processed, and numbers refers to the amount of frames not captured at that point. The positions don’t seem to have anything in common, so the behaviour looks random.
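The gap analysis itself can be sketched with a short awk helper, assuming each captured frame carries a monotonically increasing counter saved one value per line (the input file name is hypothetical):

```shell
#!/bin/sh
# Report gaps in a sorted list of captured frame counters (one per line):
# prints the last good counter before each gap and how many frames are missing.
find_gaps() {
    awk 'NR > 1 && $1 != prev + 1 {
             printf "index: %d, numbers: %d\n", prev, $1 - prev - 1
         }
         { prev = $1 }' "$1"
}

# Typical use, with frames.txt holding the per-frame counters:
# find_gaps frames.txt
```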

    Moreover, to see whether the problem of getting half of the frames is an issue with the camera or with the Jetson, I tested sending alternating black and white images: after a white image a black one is sent, and after the black one a white one again. Measuring on the oscilloscope, I’ve confirmed that both are really being sent, yet the TX2 only processes either all the white images or all the black images.

    (Attached captures: two consecutive frames on the oscilloscope, a white image, a black image)

    Is it possible that there is a kind of overflow somewhere in the TX2? Maybe in the NVCSI block, or something similar.

    Does this sensor output in continuous clock mode? Can it be configured in non-continuous mode to try?

    The FPGA can output in discontinuous clock mode; up until now, I was always testing in continuous mode.
    Anyway, I’ve generated a new image with the “discontinuous_clk” parameter enabled in the device tree and activated discontinuous clock mode in the FPGA, and I still get the same results.
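For reference, on the TX2 the discontinuous-clock setting lives in the sensor mode node of the device tree; a minimal fragment might look like the following (property spelling follows the typical tegra-camera DT layout and should be checked against your own kernel’s sensor DT bindings):

```
mode0 {
    num_lanes = "4";
    discontinuous_clk = "yes";   /* FPGA drives a non-continuous CSI clock */
    /* ...remaining mode properties (resolution, timings, clocks)... */
};
```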
    Result of 300 images @ 50fps - “small” frame pause

    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.49 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.24 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.16 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.88 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 47.90 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 43.12 fps
    <<<<<<<<<<<<<<<<<<<<<<<<<<
    

    Result of 300 images @ 50fps - larger frame pause

    <<<<<<<<<<<<<<<<<<<<<<<<<<< 25.24 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 25.00 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 25.08 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 25.06 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 25.04 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 25.04 fps
    <<<<<<<<<<<<<<<<<<<<<<<< 24.89 fps
    <<<<<<<<<<<<<<<<<<<<<<<< 24.68 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 24.86 fps
    <<<<<<<<<<<<<<<<<<<<<<<< 24.77 fps
    <<<<<<<<<<<<<<<<<<<< 24.34 fps
    <<<<<<<<<<<<<<<<<<<<<<<<< 24.39 fps
    <<<<<<
    

    I’ve been thinking about why this could be happening, as I don’t think it is caused by the device tree or the driver. Could it be that I also need to change further files (like vi4_fops, csi4_fops, …)? Or should the device tree plus the driver be enough?

    Link to the original thread: https://devtalk.nvidia.com/default/topic/1071801