Multiple CSI cameras on TX1

Hi, everyone!

Just got the TX1 and I am trying to hook up multiple cameras to it. I read all the relevant posts in the TX1 forum but found no definitive answer. Is it possible to attach, say, 6 cameras using MIPI 2-lane and receive all the video streams simultaneously? Any tutorials/examples or information that could help? Thanks in advance!

Regards,
Dan

Hi cdpango,

You could refer to the Jetson TX1 OEM Product Design Guide:
http://developer.nvidia.com/embedded/dlc/jetson-tx1-oem-product-design-guide

The TX1 can support up to 6 dual-lane camera streams.

Thanks

Hi Kayccc,

Yes, the OEM Design Guide says the TX1 can do that, but I cannot find information about the software/firmware side of things, such as how to interface these 6 dual-lane CSI cameras with the two ISPs inside the TX1.

Thanks!

I have the same question.
For example, I need 4 OV5693 sensors; how can I get video or capture a picture from each camera?

Hi,

That topic would be interesting for me as well, but the comments in the documentation are a bit confusing to me.

In the document “Tegra X1/Tegra Linux Driver Package Multimedia User Guide (R23.2)”, under “SUPPORTED CAMERAS -> CSI CAMERAS”, there is a note that the TX1 currently supports only one sensor. How should we understand that?

Is it just the current software package (L4T 23.2) that doesn’t support multiple sensors, or is there another reason? Are there other ways to support multiple cameras?

It would be great to hear something about the software side.

Thanks a lot
Marc

This question is discussed quite a lot here. You can connect multiple cameras to the board as long as they fit in the CSI lanes available on the SoM. In my project we have three.

Then, provided you write an appropriate V4L2 driver for the camera (this is mainly a matter of obtaining configuration register tables from somewhere), you can use a program of your choice to grab raw frames from the camera; yavta is a good choice. This means that if your camera outputs UYVY, you get the frame in exactly that format, while a Bayer sensor would give a BGBG/GRGR sequence in the frame.

However, to use all the features that GStreamer or VisionWorks provide, you need some closed-source components from NVIDIA that convert YUV or Bayer to an acceptable format (for example NV12). I found these components to have showstopper compatibility bugs that still make cameras other than the default OV5693 (the one shipped with the evaluation kit) unusable.

The Tegra X1 provides 6 CSI ports (CSI_A/B/C/D/E/F), each with 2 lanes. Each port can be programmed individually.

HW connection:

We can support the following cases:

  1. <=6 sensors with 2-lane or 1-lane output
  2. <=3 sensors with 4-lane output (combined as A+B, C+D, or E+F)
  3. L sensors with 1-lane, N sensors with 2-lane, and M sensors with 4-lane output, under the condition (Lx1 + Nx2 + Mx4) <= 12 lanes && (L + N + M) <= 6
    Note that each port can accept only one sensor.

SW configuration:

Please review the following file for reference:

$KERNEL/arch/arm64/mach-tegra/board-t210ref-camera.c

Hi nVConan,

We are also trying to get 6 cameras working through the CSI ports. I was not able to locate the file you mentioned above:

$KERNEL/arch/arm64/mach-tegra/board-t210ref-camera.c

In the latest Kernel_src, the folder structure exists, but there are no files by that name.

Could you elaborate more on what’s necessary in software to get 6 cameras up and running? Thanks!

-marc

Even with the latest kernel for R24.2, which is still pending release, the file exists as expected. Please check whether something is wrong on your side.

IMO, the following snippet from that file is a good example for your reference.

/* Register one soc_camera platform device per CSI port (A-F) */
else if (of_machine_is_compatible("nvidia,e2220")) {
    platform_device_register(&t210ref_ov5693_a_soc_camera_device);
    platform_device_register(&t210ref_ov5693_b_soc_camera_device);
    platform_device_register(&t210ref_ov5693_c_soc_camera_device);
    platform_device_register(&t210ref_ov5693_d_soc_camera_device);
    platform_device_register(&t210ref_ov5693_e_soc_camera_device);
    platform_device_register(&t210ref_ov5693_f_soc_camera_device);
}

Hi nVConan/kayccc,

I need some information regarding the power rail on the J22 connector of the Jetson TX1. We are using the J22 5V rail (VDD_5V0_IO_SYS) to power multiple CSI cameras. What is the maximum load current that can be drawn from this 5V rail?

Can it provide around 2A if required, or would it be better to use an external power supply?

Hi dili,

2A is supported on VDD_5V0_IO_SYS through the J22 connector; there is no need to add an external power supply.

6 MIPI CSI-2 camera support for Jetson TX1: https://www.e-consystems.com/multiple-csi-cameras-for-nvidia-jetson-tx2.asp

Hi e-ConSystems,
I do not see any information about the camera adapter used for the 6-camera demo; could you provide some? Thanks.

Best Regards,
Ruih

Our camera adapter board is currently under development. If you need any further information, please feel free to contact us: sales@e-consystems.com

  1. We are developing our own processing card, which uses an FPGA as an interface to two digital cameras.
    So, how can we configure our two digital cameras for the Jetson TX1 platform via the FPGA?
    We do see the configuration file “board-t210ref-camera”. Could you tell us which parts of this file need modification, and how? Could you provide some advice and guidelines?

  2. We noticed that the camera provided with the Jetson TX1 development board is configured as an I2C device in the Linux device tree. With our developed board, should we also treat our FPGA plus digital cameras as an I2C device?
    How do we make our developed board work as an I2C device? What should we do?

  3. There are 12 CSI lanes on the TX1 for connecting multiple cameras. How do we assign these lanes to each camera?
    Could you provide sample code for two Camera Link cameras at 1280x1024 resolution?