We stream video data from an FPGA over MIPI CSI-2 to the NVIDIA Jetson. When we use a video format such as RAW12 2048x128 pixels, everything works fine. However, when we use RAW12 3640x2304 pixels, we get some pixel errors.
We already verified the FPGA functionality with the FPGA manufacturer.
Additionally, we have run the same tests with an AGX Orin Devkit, where we see the same effect as with the Xavier NX Devkit. That’s why we assume the error is on the NVIDIA side.
The attached picture visualizes the raw file content (2D, pixel value vs. pixel number) when we stream a sawtooth signal (pixel values 1,2,3,4,5,1,2,3,4,5,1,2,…) into the Jetson using v4l2. The pixel errors are marked with red rectangles. There are duplicated pixels in the data stream, and there are zeros in the data stream although we never transmit zeros from the FPGA. Every frame looks like that, and the error does not change from frame to frame, even when we change the physical data rate.
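For reference, this is roughly how we check a captured frame against the expected sawtooth. It is only a sketch: it assumes v4l2 stores the RAW12 data as one little-endian 16-bit word per pixel with no line padding (which matches the 16.0 MB size of the attached file); the file name is from our setup.

```python
# Sketch: compare a captured frame against the expected sawtooth.
# Assumptions (not verified): RAW12 stored as one little-endian 16-bit
# word per pixel, no line padding. Note that the width (3640) is
# divisible by 5, so the sawtooth phase is identical whether the
# pattern restarts per line or runs continuously.
import numpy as np

WIDTH, HEIGHT = 3640, 2304
PERIOD = 5  # transmitted values 1,2,3,4,5 repeating

pixels = np.fromfile("data_constData.raw", dtype="<u2", count=WIDTH * HEIGHT)

expected = (np.arange(pixels.size) % PERIOD) + 1   # 1,2,3,4,5,1,2,...

bad = np.flatnonzero(pixels != expected)           # 0-based indices
print(bad.size, "pixels deviate from the sawtooth")
for i in bad[:20]:
    print("index", i, "got", pixels[i], "expected", expected[i])

# The FPGA never sends zeros, so any zero is a transport error.
zeros = np.flatnonzero(pixels == 0)
print(zeros.size, "zero-valued pixels, first at", zeros[:10])
```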
Can you please verify the CSI video functionality with the RAW12 3640x2304 pixel format?
I also find it hard to believe that the problem lies with NVIDIA, but the error has to be somewhere.
The pictures visualize the camera data. All of them are 2D graphs which show the pixel value on the y-axis over the pixel number on the x-axis.
If you look at the first graph, you can see the values of all 8386560 pixels (3640 × 2304). With that many pixels in one graph, we just see one blue block. That’s OK. But remember that we stream a sawtooth with values from 1 to 5, and the blue block goes down to zero, which should never happen!
The three graphs below it show the data when we zoom further into the graph on top.
If you look at the third graph, you can see that pixel number 3617 has a value of 1 instead of 5.
If you look at the graph at the bottom, you can see that pixel numbers 7257 to 7264 are zeros. These are values we never transmit to the Jetson, since we only send the repeating sequence (1,2,3,4,5,1,2,3,4,5,1,2,3,…) to get a clean sawtooth.
I also attached the raw file created with v4l2, so you can analyze it yourself in case my explanation is still confusing. data_constData.raw (16.0 MB)
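If it helps, the graphs above can be reproduced from the attached file with a few lines of Python (same assumptions as before: 16-bit little-endian words, no line padding; the zoom windows are just examples placed around the pixel numbers mentioned above):

```python
# Sketch: plot the attached raw file the same way as my graphs.
# Same assumptions as above: little-endian 16-bit words, no padding.
import numpy as np
import matplotlib.pyplot as plt

pixels = np.fromfile("data_constData.raw", dtype="<u2", count=3640 * 2304)

# Full frame plus three zoom levels; the last two windows are centered
# on the errors described above (pixel 3617, pixels 7257..7264).
windows = [(0, pixels.size), (0, 20000), (3580, 3660), (7220, 7300)]

fig, axes = plt.subplots(len(windows), 1, figsize=(10, 12))
for ax, (lo, hi) in zip(axes, windows):
    x = np.arange(lo, hi)
    ax.plot(x, pixels[lo:hi])
    ax.set_xlabel("pixel number")
    ax.set_ylabel("pixel value")
plt.tight_layout()
plt.show()
```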
I’m sure that there are formats which work fine. We have already used different formats in the past and we never had problems. However, for the current project our target format is 3640x2304 pixels. This format is based on the application and on FPGA restrictions. In my opinion the Jetson should be able to handle this format if there is no bug. Is there a restriction on the NVIDIA side that the frame width must be divisible by 32?
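To make that question concrete, here is the arithmetic for our format (just a sketch; which alignment the capture hardware actually requires is exactly what I am asking about):

```python
# Width/stride arithmetic for RAW12 3640x2304 (sketch only; the required
# alignment on the NVIDIA side is what I am asking about).
width = 3640
print(width % 32)                # 24 -> width is NOT divisible by 32
print((width * 12) // 8)         # 5460 bytes per line on the CSI link (RAW12)
print(((width * 12) // 8) % 64)  # 20 -> link line length not 64-byte aligned
print((width * 2) % 64)          # 48 -> 16-bit container stride not 64-byte aligned
```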