Just got the TX1 and I am trying to hook up multiple cameras to it. I read all the relevant posts in the TX1 forum but found no definitive answer. Is it possible to attach something like 6 cameras using MIPI 2-lane and receive all video streams simultaneously? Any tutorials/examples or information that could help? Thanks in advance!
Yes, the OEM Design Guide claims that the TX1 can do that, but I cannot find information about the software/firmware side of things, such as how to interface these six dual-lane CSI cameras with the two ISPs inside the TX1.
That topic would be interesting for me as well, but the comments in the documentation are a bit confusing to me.
In the document “Tegra X1/Tegra Linux Driver Package Multimedia User Guide (R23.2)”, section “SUPPORTED CAMERAS → CSI CAMERAS”, there is a note that the TX1 currently supports only one sensor. How should we understand that?
Is it ‘just’ the current software package (L4T 23.2) that doesn’t support it, or is there another reason? Are there other ways to support multiple cameras?
It would be great to hear something about the software side.
This question is discussed quite a lot here. You can have multiple cameras connected to the board as long as they fit in the CSI lanes available on the SoM. In my project we have 3.
Then, provided you write the appropriate V4L2 driver for the camera (mainly a matter of obtaining the configuration register tables from somewhere), you can use a program of your choice to get raw frames from the camera; yavta is a good choice. This means that if your camera is a UYVY one, you get the frame in exactly that format, while a Bayer sensor would give a BGBG/GRGR sequence in the frame.
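To make the raw-frame part more concrete, here is a minimal sketch of what tools like yavta do internally: plain V4L2 with one memory-mapped buffer and a single frame dumped to disk. The device node /dev/video0, the 1920x1080 mode and the UYVY fourcc are assumptions; your driver may expose a different node, resolution, or a Bayer format.

```c
/* Minimal V4L2 raw-frame grab (sketch). Assumes /dev/video0 and UYVY output;
 * adjust the node, resolution and fourcc for your sensor. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);            /* camera node (assumption) */
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1920;                         /* pick your sensor mode */
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;      /* or a Bayer fourcc */
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    struct v4l2_requestbuffers req = {0};
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    struct v4l2_buffer buf = {0};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }

    void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, buf.m.offset);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0 ||
        ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("start streaming"); return 1; }

    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); return 1; }

    /* Dump the frame exactly as the sensor delivered it (UYVY or Bayer). */
    FILE *out = fopen("frame.raw", "wb");
    if (out) { fwrite(mem, 1, buf.bytesused, out); fclose(out); }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}
```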
However, in order to use all the features that GStreamer or VisionWorks provide, you need some closed-source components from NVIDIA that convert YUV or Bayer data to an acceptable format (for example NV12). In my experience these components have showstopper compatibility bugs that still make cameras other than the default ov5693 (the one the evaluation kit ships with) unusable.
The Tegra X1 provides 6 CSI ports (CSI_A/B/C/D/E/F), and each port has 2 lanes. Each port can be programmed individually.
For the HW connection, we can support the following cases:
<=6 sensors with 2-lane or 1-lane output
<=3 sensors with 4-lane output (port combinations A+B, C+D, or E+F)
L sensors with 1-lane, N sensors with 2-lane, and M sensors with 4-lane output, under the condition (L x 1 + N x 2 + M x 4) <= 12 lanes && (L + N + M) <= 6 (a small validity check is sketched after this list)
Note that each port can accept only one sensor.
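As a quick illustration of that lane budget, here is a tiny check of whether a given mix of 1-lane, 2-lane and 4-lane sensors fits the 12-lane / 6-port limits. The helper fits_tx1_csi is hypothetical, not an NVIDIA API; it just encodes the condition above.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper: does a mix of l 1-lane, n 2-lane and m 4-lane sensors
 * fit the TX1 limits of 12 CSI lanes and 6 ports (one sensor per port)? */
static bool fits_tx1_csi(int l, int n, int m)
{
    return (l * 1 + n * 2 + m * 4) <= 12 && (l + n + m) <= 6;
}

int main(void)
{
    printf("%d\n", fits_tx1_csi(0, 6, 0)); /* 6 x 2-lane: 12 lanes, 6 ports -> fits */
    printf("%d\n", fits_tx1_csi(0, 0, 3)); /* 3 x 4-lane: 12 lanes, 3 ports -> fits */
    printf("%d\n", fits_tx1_csi(2, 2, 2)); /* 2 + 4 + 8 = 14 lanes -> does not fit */
    return 0;
}
```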
I need some information regarding the power rail on the J22 connector of the Jetson TX1. We are using the J22 5V rail (VDD_5V0_IO_SYS) to power multiple CSI cameras. What is the maximum load current that can be sourced from this 5V rail?
Can this supply provide around 2A if required? Or is it better for us to use an external power supply?
We are developing our own processing card, which uses an FPGA as an interface to two digital cameras.
How can we connect our two digital cameras to the Jetson TX1 platform via the FPGA?
We see the configuration file “board-t210ref-camera”. Could you tell us which parts of this file need modification, and how? Could you provide some advice and guidelines?
We noticed that the camera provided with the Jetson TX1 development board is configured as an I2C device in the Linux device tree. With our board, should we treat our FPGA + digital cameras as an I2C device as well?
How do we make our board work as an I2C device? What should we do?
There are 12 CSI lanes on the TX1 for connecting multiple cameras. How do we assign these lanes to each camera?
Could you provide sample code for two Camera-Link cameras at 1280x1024 resolution?