We designed a custom carrier board for the Jetson Nano with two cameras: an AR0521 and an IMX219.
The driver code for the AR0521 is also custom.
When we started mass production, we saw a problem on about 20% of our Jetson Nano SOMs, which appears only when a single camera is running.
• When requesting an image from the sensor, we get corrupted data in the image. The extent of the corruption depends on the requested resolution; at a low resolution of 640x480 it does not happen.
• The problem appears at the full image size of 2592x1944.
• This is not a MIPI problem from the sensor or kernel module.
• If we stream this corrupted video and, in parallel, request images from the IMX219 camera, the problem disappears (while the IMX219 sensor is running).
• We see the problem in the RAW data when grabbing the Bayer frame from V4L.
• We tried forcing all the relevant clocks to the max; it still didn't help.
Has anyone encountered this problem before and have any idea how to solve it?
This looks like power noise. You may want to arrange hardware resources to investigate.
This is not a power-noise, PCB, or MIPI problem; it is a SOM problem. We tested the sensor on other EVBs and got the same results.
We tested more than 100 SOMs; only about 20 of them show the problem, and during streaming, if we start the other camera, the problem is solved.
My guess is a clock configuration in the VI, or memory allocation, but no attempt to change the kernel or the device tree has solved the problem yet.
May I know how many camera streams are in your system? Does it always happen on a specific camera?
Could you please also set up a terminal to keep gathering logs, i.e.
$ dmesg --follow
and check whether any failure is reported when you see the abnormal result.
There are no errors in dmesg.
We see the problem in V4L when running the command
v4l2-ctl --verbose --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw -d /dev/video0
and converting the raw capture to an image.
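For inspecting a capture like the one above, a short script can unpack the raw file. This is a minimal sketch assuming the common Jetson layout of 10-bit Bayer samples stored one per pixel in little-endian 16-bit containers; the helper name and the `shift` value (valid bits in the low bits) are assumptions, so adjust them for your pipeline:

```python
import numpy as np

def load_bayer_raw16(path, width, height, shift=2):
    """Load a V4L2 raw Bayer capture stored as one little-endian uint16
    container per pixel, and scale it down to 8 bits for quick viewing.
    shift=2 assumes the 10 valid bits sit in the low bits of each
    container (an assumption; some pipelines left-justify them)."""
    raw = np.fromfile(path, dtype="<u2", count=width * height)
    if raw.size != width * height:
        raise ValueError("file smaller than expected frame size")
    return (raw.reshape(height, width) >> shift).astype(np.uint8)

# e.g. img = load_bayer_raw16("test.raw", 2592, 1944)
# then save for inspection, e.g. with PIL: Image.fromarray(img).save("test.png")
```

Viewing the unscaled 16-bit array directly (without the shift) is also useful, since corruption patterns in the padding bits can hint at a DMA or unpacking issue.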
Does it only happen on the first frame? Do you also see it in the preview stream?
If we set the mode to 640x480, we don't get the problem.
We see it from the first frame, and only when running as a single-camera system; it does not appear when we use a two-camera system.
When we increase the resolution, we encounter the problem, and it gets worse at higher resolutions.
Does it always happen on the first frame, even at higher resolutions?
How about dropping the first capture buffer; is that an acceptable solution?
It starts from the first frame, but all subsequent frames look the same when running with one camera, so dropping the first frame will not help.
Please review your AR0521 clock configurations, thanks.
The only clocks I share with the IMX sensor are vi, isp, and nvcsi. For the TX2 NX SOM, I can control them from kernel debug (Jetson/l4t/Camera BringUp - eLinux.org).
Do you have the same interface for the Nano?
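For reference, the kernel-debug approach from the eLinux page for locking vi/isp/nvcsi at their maximum rates on the TX2 family looks roughly like this. This is a sketch based on that page, run as root; the debugfs layout is SoC-specific (T210/Nano differs), so the paths should be verified before use:

```shell
# Lock VI / ISP / NVCSI at their max rates via BPMP debugfs
# (TX2-family paths per the eLinux Camera BringUp page; run as root).
# Assumption: on T210/Nano the debugfs layout differs, so check first.
for c in vi isp nvcsi; do
    d=/sys/kernel/debug/bpmp/debug/clk/$c
    echo 1 > "$d/mrq_rate_locked"
    cat "$d/max_rate" > "$d/rate"
done
```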
Did you mean you've set everything to the maximum?
Could you please refer to the Sensor Pixel Clock documentation to calculate the clock settings.
The pix_clk_hz = "168000000"; we use a 27 MHz external clock.
Could you please refer to those formulas to calculate the clock settings?
Or, what happens when you increase the pixel clock?
The formula inputs are:
External clock: 27 MHz
REG 0x306 [15:8] (pll_multiplier): 0x70
REG 0x304 [13:8] (pre_pll_clk_div): 3
REG 0x300 (vt_pix_clk_div): 6
REG 0x302 (vt_sys_clk_div): 1
VT pix_clk: 168000000
I tried increasing the pix_clk to 207000000; same problem.
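The quoted register values are consistent with the 168 MHz figure under the standard SMIA-style PLL formula, vt_pix_clk = ext_clk × pll_multiplier / (pre_pll_clk_div × vt_sys_clk_div × vt_pix_clk_div). A quick sanity check (the register-to-field mapping here follows the common sensor convention and should be confirmed against the AR0521 datasheet):

```python
EXT_CLK_HZ = 27_000_000   # external input clock
PLL_MULTIPLIER = 0x70     # REG 0x306 [15:8] -> 112
PRE_PLL_CLK_DIV = 3       # REG 0x304 [13:8]
VT_SYS_CLK_DIV = 1        # REG 0x302
VT_PIX_CLK_DIV = 6        # REG 0x300

# vt_pix_clk = ext_clk * multiplier / (product of the three dividers)
vt_pix_clk = (EXT_CLK_HZ * PLL_MULTIPLIER
              // (PRE_PLL_CLK_DIV * VT_SYS_CLK_DIV * VT_PIX_CLK_DIV))
print(vt_pix_clk)  # 168000000, matching the pix_clk_hz in the device tree
```

Since the arithmetic checks out, the device-tree pix_clk_hz itself does not look like the source of the corruption.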
It looks more like an instability problem on the CSI lines of the carrier board. Have you checked the CSI routing on your board? It should closely follow the routing guidelines in the Design Guide.
Yes, we tested them, and we also use our camera with the Auvidea JN30B EVB and get the same results.
The main problem is that more than 80% of our SOMs do not show the problem.
For the 20% that do, it only shows when we run a single camera in the system; if this were a hardware problem in the camera, it could not be solved by running different interfaces in parallel.