I’m looking into writing my own driver for an unconventional vision sensor that I would like to connect to the Tegra through MIPI CSI-2. The sensor uses a custom format. It would be sufficient if the driver simply dumped the data from the sensor into memory without processing it, i.e. no ISP is used. Is that possible? Where could I obtain the documentation needed to achieve that? Is the source code of some of the existing MIPI CSI drivers available so that it could be used for reference?
The available V4L2 support will actually do what you need: it bypasses the ISP and puts the data directly into memory. You can use a tool such as yavta or GStreamer to test the capture. The Tegra X1 uses the V4L2 media controller framework for its camera drivers.
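In case it helps, here is a minimal sketch of what the userspace side of such a test could look like: it captures a single frame through the plain V4L2 mmap streaming API and dumps it to disk. The device node /dev/video0, the 1280x720 resolution, and the GREY (8-bit) pixel format are assumptions; substitute whatever your driver actually advertises.

```c
/* Minimal V4L2 capture sketch: grabs one frame and writes it to disk.
 * Device path, resolution, and pixel format are assumptions; adjust
 * them for your sensor. Compile with: gcc -o capture capture.c */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Ask the driver for a raw 8-bit greyscale frame (stand-in for a
     * custom format that is captured as opaque bytes). */
    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1280;
    fmt.fmt.pix.height = 720;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    /* Request one memory-mapped buffer from the driver. */
    struct v4l2_requestbuffers req = {0};
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    struct v4l2_buffer buf = {0};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }

    void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, buf.m.offset);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Queue the buffer and start streaming. */
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("VIDIOC_QBUF"); return 1; }
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("VIDIOC_STREAMON"); return 1; }

    /* Dequeue blocks until the VI has written a frame into memory. */
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); return 1; }

    FILE *out = fopen("frame.raw", "wb");
    if (out) { fwrite(mem, 1, buf.bytesused, out); fclose(out); }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}
```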
You can read more about it in the documentation package called L4T R24-2 Documentation, which you can download from the Jetson Download Center (Jetson Download Center | NVIDIA Developer). See nvl4t_docs/index.html, sections “Video for Linux User Guide” and “Sensor Driver Programming Guide”.
The default kernel comes with an example driver for the ov5693, which is the sensor included on the Jetson evaluation board. Since you will need to rebuild the kernel when creating your driver, you might find this guide useful: https://developer.ridgerun.com/wiki/index.php?title=Compiling_Tegra_X1_source_code
We create V4L2 drivers for our customers; if you need help, just let us know.
Thanks for your fast reply and the link to the documentation. It looks exactly like what I need - this is really great.
Just to be clear, our vision sensor doesn’t output plain image data; rather, it outputs a mix of image data and other types of data. While on the low level our sensor sticks to the packet-based protocol specified by MIPI (i.e. long packet = packet header + payload in 8-bit words + checksum), on the layer above, where bytes are usually converted to pixels, we would need to do some customization. This is basically the layer that specifies the data format (e.g. RGB888, RAW12, etc.). Since we have our custom format and would need to be able to change how the data is interpreted, I’m now trying to understand whether I have access to that layer and whether it can be customized. Is that also part of V4L2, or is that happening somewhere else?
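To illustrate what I mean by the low-level protocol, a CSI-2 long packet could be modeled conceptually like this (field widths per the MIPI CSI-2 spec; just a sketch, since as far as I understand the CSI receiver hardware strips the header and checksum before the payload reaches memory):

```c
#include <stdint.h>

/* Conceptual layout of a MIPI CSI-2 long packet (not a real driver
 * structure): 32-bit header, payload of 8-bit words, 16-bit checksum. */
struct csi2_long_packet {
    uint8_t  data_id;     /* 2-bit virtual channel + 6-bit data type */
    uint16_t word_count;  /* payload length in bytes */
    uint8_t  ecc;         /* error correction code over the header */
    uint8_t *payload;     /* word_count bytes of custom data */
    uint16_t checksum;    /* CRC over the payload */
};
```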
Hi Marc, from the documentation I can see it supports:
• RGB888
• RAW8
• RAW10
• YUV422
From what you say, I would try to capture plain RAW8 and then interpret the information as needed once you have captured the frame. Not sure if this is what you need.
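If that works for you, the reinterpretation could then live entirely in userspace, with the driver treating the stream as opaque bytes. A sketch of the idea, where the record layout (one tag byte plus three data bytes) is purely hypothetical:

```c
/* Hypothetical post-processing of a captured RAW8 frame: the driver
 * treats the custom stream as opaque bytes, and userspace reinterprets
 * them. The record layout below is made up for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct custom_record {   /* hypothetical: 1 tag byte + 3 data bytes */
    uint8_t tag;         /* e.g. 0x01 = pixel data, 0x02 = metadata */
    uint8_t data[3];
};

static void parse_frame(const uint8_t *buf, size_t len)
{
    for (size_t off = 0; off + sizeof(struct custom_record) <= len;
         off += sizeof(struct custom_record)) {
        struct custom_record rec;
        memcpy(&rec, buf + off, sizeof(rec));
        if (rec.tag == 0x02)
            printf("metadata record at offset %zu\n", off);
        /* ... handle pixel records, events, etc. ... */
    }
}
```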
Ok, that seems like a good option for a start. I will carefully read the documentation and then try it out. What I’m still confused about is up to which level I have access and what I can change. For example, are these data formats specified somewhere in the source code (in that case I could add support for my own format), or is this happening in some firmware that I don’t have access to? Do you happen to know?
I haven’t had to go that deep yet, but from previous experience with other SoCs, normally you can’t modify how the data is received and packed into memory directly in the capture subsystem (VI). Maybe NVIDIA can provide more details.
Looking into the Technical Reference Manual [1], chapter 31: Table 168 contains the list of all supported formats, along with a description of several registers used to configure the VI. That should give you a good idea of whether what you need is possible. At this point, the formats supported by the software are the ones listed above.
Thanks a lot. Perfect, this is exactly what I was looking for. I also think that I can’t implement my own data format at such a low level (since this would require writing firmware for the VI), but it seems like I could make use of the RAW formats. There also seems to be support for arbitrary embedded 8-bit data.
I will try to get started and if I run into troubles I might shoot you a PM and ask for help :)
I know it is an old thread, yet I hope someone can help with the following:
I need to find the exact point in the kernel where the frame is captured for a MIPI CSI camera (direct V4L2).
I searched through the files inside the tree mentioned in these posts (drivers/video/tegra/camera) but haven’t found where the frame is captured.
Take a look at the tegra_channel_capture_frame function in kernel/kernel-4.4/drivers/media/platform/tegra/camera/vi/vi4_fops.c; this is where the frame capture is requested in the case of the TX2 with Jetpack 3.3.
I added a lot of logs in vi2_fops.c and then started a v4l2/GStreamer application that captures frames.
Yet, surprisingly, there are no logs from tegra_channel_capture_done or tegra_channel_capture_frame, only from vi2_channel_start_streaming and vi2_channel_stop_streaming.
Does anyone have an idea why I can’t see logs from tegra_channel_capture_frame? Is it because the printing rate is too high?
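In case the print rate really is the problem, would something like this be a more reliable way to instrument the path? printk_ratelimited and trace_printk are standard kernel APIs in 4.4; the function below is just a placeholder, not the actual driver code.

```c
/* Sketch: instrumenting a hot per-frame path without flooding the log.
 * The surrounding function is a placeholder for illustration only. */
#include <linux/kernel.h>
#include <linux/printk.h>

static void example_per_frame_hook(int channel_id)
{
    /* Rate-limited: survives high frame rates without the messages
     * being dropped wholesale by the printk throttle. */
    printk_ratelimited(KERN_INFO "capture requested on channel %d\n",
                       channel_id);

    /* Alternatively, log to the ftrace ring buffer (read it back via
     * /sys/kernel/debug/tracing/trace), which is much cheaper. */
    trace_printk("capture requested on channel %d\n", channel_id);
}
```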