TX1 I2C / CSI-2 Quad Camera Input

Hello,

I am in need of assistance with Linux kernel drivers for the CSI-2 output of the Texas Instruments DS90UB964-Q1 quad camera hub attached to an NVIDIA Jetson TX1. If there are any Linux device drivers available for the DS90UB964-Q1, or any documentation on writing drivers for this chip, please advise.

My problem may lie elsewhere, however, so I am providing further information on my project:

Project specifications:

Run machine vision software (OpenCV-based) using multiple camera inputs (four in this case) on an embedded device.

(OpenCV can access a camera once a handle to the device is available under /dev/video* in Linux.)

Therefore, the aim is to make the camera devices available via /dev/video*.

Materials:

Embedded device: NVIDIA Jetson TX1
Operating system: Linux4Tegra (L4T) R24.2.1
Quad camera hub: Texas Instruments DS90UB964-Q1
Cameras: 4x OV10635 serial coax

Connectivity:

The DS90UB964-Q1 (hereafter the UB964) was connected to the NVIDIA TX1 via the TX1's J21 GPIO expansion header (for I2C connectivity) and J22 camera connector (for MIPI CSI-2 connectivity). The OV10635 cameras connect to the UB964 board via coax. Development is done in Linux on the TX1 (L4T R24.2.1).

Methods tested:

First, the I2C connection was confirmed by checking for visibility in userspace using the Linux i2c-tools package. The following i2cdetect command:

i2cdetect -y -r 0

revealed the UB964 at 7-bit I2C address 0x30. This was double-checked in Windows via a Total Phase Aardvark I2C-to-USB adapter and the Texas Instruments Analog LaunchPAD program. The registers read by both methods were checked against the expected default values in the DS90UB964-Q1 Quad FPD-Link III Deserializer Hub datasheet. Electrically, the connection was checked on an oscilloscope (connected to SDA and SCL on the UB964), and write commands to this I2C address produced the expected clock and data waveforms.

From this, it appears we have a connection to the UB964's I2C bus from Linux on the TX1. Next, we wanted to test the connection to the OV10635 cameras (attached through the UB964), so the following step was to set up the UB964 to alias the OV10635s onto the I2C bus (visible in TX1 userspace). A sample script (found in a Texas Instruments UB964 PDF) for locking ports, setting up CSI parameters and aliasing remote devices onto the I2C bus was reproduced in Linux as a shell script of i2cset commands, using the same registers and values.

After running this shell script (.sh), two extra I2C addresses per connected camera appeared on the bus. Both new addresses were "pinged" via i2cget commands (which read registers at I2C bus addresses). For each camera, one of the two new addresses produced the expected clock/data activity on an oscilloscope attached to the SDA/SCL lines of the OV10635 serial coax board, while the other produced nothing.
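In case it helps anyone reproducing this, the aliasing script looked roughly like the sketch below for a single camera. The register offsets are my reading of the DS90UB964-Q1 datasheet and the alias addresses are arbitrary free ones on the TX1 bus; verify everything against your datasheet revision before running it.

```shell
#!/bin/sh
# Sketch: alias one OV10635 (behind RX port 0 of the UB964) onto the TX1 I2C
# bus. The register offsets (0x4C FPD3_PORT_SEL, 0x5C SER_ALIAS_ID,
# 0x5D SlaveID[0], 0x65 SlaveAlias[0]) are my reading of the DS90UB964-Q1
# datasheet; verify them against your copy before running anything.
BUS=0               # TX1 I2C bus that the UB964 answers on
DES=0x30            # UB964 7-bit address found with i2cdetect
SER_ALIAS=0x41      # arbitrary free local address for the remote serializer
SENSOR_ID=0x30      # OV10635 7-bit address on the far side (check your module)
SENSOR_ALIAS=0x40   # arbitrary free local address for the sensor itself

if command -v i2cset >/dev/null 2>&1; then
    # Select RX port 0 for both register writes and reads (FPD3_PORT_SEL).
    i2cset -y "$BUS" "$DES" 0x4C 0x01
    # The UB964 ID/alias registers take 8-bit (left-shifted) addresses,
    # while i2c-tools themselves take 7-bit addresses.
    i2cset -y "$BUS" "$DES" 0x5C $(( SER_ALIAS << 1 ))     # SER_ALIAS_ID
    i2cset -y "$BUS" "$DES" 0x5D $(( SENSOR_ID << 1 ))     # SlaveID[0]
    i2cset -y "$BUS" "$DES" 0x65 $(( SENSOR_ALIAS << 1 ))  # SlaveAlias[0]
    # The sensor should now answer at its alias:
    i2cget -y "$BUS" "$SENSOR_ALIAS" 0x00
else
    echo "i2c-tools not installed; skipping hardware accesses"
fi
```

The two extra addresses per camera described above would then be the serializer alias and the sensor alias; the sensor alias is the one that should produce traffic on the remote SDA/SCL lines.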

From this, it appears we have access to the OV10635 registers through the I2C bus.
I don't have much experience with the Linux kernel, or with its lower-level I2C and CSI-2 functionality, so general pointers may prove useful.

At this point, I believe my next step is to work out how to get the CSI-2 data from the UB964 into the TX1. From what I've read on the NVIDIA forums and about Linux, I believe I need a device driver so that the TX1 kernel can identify the I2C device in the device tree (via its address on the TX1 I2C bus) and, using the "compatible" property in a .dtsi file, link it to a driver (.c) file containing information about the incoming frames (height, width, clock, etc.). That should allow the Video4Linux2 (V4L2) framework to create a node under /dev/video* from which the camera frames can be read. I may be incorrect in my understanding of this process, and I'm not sure where a CSI-2 handle fits into the kernel either.
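For what it's worth, my current understanding of the device-tree side is sketched below. This is a hypothetical fragment: the node names, the "ti,ds90ub964" compatible string, and the controller address are placeholders I made up to illustrate the matching mechanism, not a known working binding.

```dts
/* Hypothetical sketch only: compatible string and node layout are
 * placeholders to illustrate the matching mechanism, not a tested binding. */
i2c@7000c000 {                           /* TX1 I2C controller the UB964 hangs off */
        deserializer@30 {
                compatible = "ti,ds90ub964";  /* must match the driver's of_match table */
                reg = <0x30>;                 /* 7-bit I2C address found with i2cdetect */
                /* frame geometry, lane count, clocks, etc. would be read here
                 * by the driver and handed to the V4L2 / Tegra VI layer */
        };
};
```

As I understand it, when the kernel finds a node whose compatible string matches an entry in a driver's of_match table, it calls that driver's probe() with this I2C client, and only if probe() succeeds does a /dev/video* node get registered.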

I have attempted to create device driver files and have recompiled the Linux kernel with them, but nothing shows up under /dev/video*. The driver files I compiled were simply renamed copies of the OV5693 driver files, with the I2C address in the .dtsi files changed to that of the UB964.
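One thing I have been checking before digging deeper (a sketch; the bus number and address are assumed from the i2cdetect results above) is whether the renamed driver ever bound to the device at all:

```shell
#!/bin/sh
# Did the renamed OV5693 driver ever bind to the UB964's I2C address?
# A bound I2C device has a "driver" symlink under its sysfs node.
# Bus 0 / address 0x30 are assumed from the i2cdetect results above.
DEV=0-0030   # sysfs name: <bus number>-<zero-padded 4-digit hex address>
if [ -e "/sys/bus/i2c/devices/$DEV/driver" ]; then
    echo "a driver is bound to $DEV"
else
    echo "no driver bound to $DEV"
fi

# Probe-time errors usually land in the kernel log:
dmesg 2>/dev/null | grep -iE 'ov5693|ub964|video4linux' | tail -n 5

# And list whatever video nodes do exist:
ls /dev/video* 2>/dev/null || echo "no /dev/video* nodes yet"
```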

Although I am on L4T R24.2.1, I haven't found many useful forum posts about using the Media Controller API, so I have been attempting the soc_camera route mentioned in this post:
https://devtalk.nvidia.com/default/topic/946840/soc_camera-driver-in-l4t-r24-1/

I had difficulty making nvhost_vi a module, so I removed it and manually exported the symbols that were missing when I tried to recompile the kernel. I added soc_camera as a module (along with my created driver files) and recompiled the kernel again so that I could modprobe soc_camera, which loads successfully. modprobe tegra_camera, however, fails with “device or resource busy”.
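In case it is useful, this is roughly how I have been poking at the modprobe failure (module names are the ones from the soc_camera route above):

```shell
#!/bin/sh
# Rough sketch of narrowing down modprobe's "device or resource busy":
# the kernel log usually names the failed probe or the clashing resource.
MOD=tegra_camera
if modprobe "$MOD" 2>/dev/null; then
    echo "$MOD loaded"
else
    echo "$MOD did not load; last kernel messages:"
    dmesg 2>/dev/null | tail -n 10
fi
# Which camera-related modules are actually resident?
lsmod 2>/dev/null | grep -iE 'soc_camera|tegra' || echo "none found"
```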

That’s about where I’m up to. Other than trying the Media Controller route, I am not sure how to debug this problem further, and I can’t seem to find any more potential development avenues after searching the NVIDIA, Texas Instruments and Linux forums.

I followed this guide to build the kernel: http://www.jetsonhacks.com/2016/09/28/build-tx1-kernel-and-modules-nvidia-jetson-tx1/

I have also posted this to the Texas Instruments E2E forums, as I believe I may need help from both TI and NVIDIA to get this board functional.

Thanks for your time,

Regards,

Peter.

Hi Poydah

I guess you can start with the “Sensor Programming Guide” first.

http://developer.nvidia.com/embedded/dlc/l4t-documentation-24-2-1

Hi Poydah,

We are about to start adding support for the DS90UB964 on the TX1; however, we have some questions about it, and I was wondering whether those affect you as well.

When using 4 cameras with the UB964, it seems that it will mix or interleave the 4 streams into one or two MIPI CSI-2 streams on the same physical output, using virtual channel IDs (VC-IDs; see page 28 of the datasheet):

http://www.ti.com/lit/ds/symlink/ds90ub964-q1.pdf

Looking into the Technical Reference Manual for the TX1, table 155 (page 2391) mentions that the TX1 doesn’t support virtual channel interleaving. My questions are:

a) Are you able to capture using virtual channel interleaving, or are you using a different configuration in the UB964?

b) For NVIDIA: is there some way to get virtual channel interleaving working on the TX1?

Thanks,
-David

Thread bump, I was actually looking to build something similar, perhaps we can solve the problems together.

Was there any reason why the I2C pins on the camera connector weren’t used for the interface (CAM_I2C_SCL, I2C_CAM_CLK)?

Have you had any luck since the last post in getting the TI deserializer to work without virtual channel interleaving?

Hi Guys,
Virtual channels are not implemented in the current BSP. We are considering implementing them in the future.

Are we able to dump the entire interleaved block into memory, and manually splice out the virtual channels if we know the packing order?

@xtracrispy
You can try, but there may be a performance issue: the post-processing you would need could add latency. Also, argus/nvcamerasrc may not be usable in that mode.

Hi ShaneCCC,

The first time I read the TRM and found that VC IDs are not supported, I thought it was a hardware limitation. Your comment that it is a feature that could be implemented in the future makes me believe I was wrong, and that VI could extract or capture frames that arrive with different IDs. Is this correct? I would like to work on adding this feature.

@ShaneCCC,

What is the expected outcome of grabbing frames from a deserializer that encodes 4 virtual channels through the standard long-packet header identifier?

Would each channel be grabbed in sequential order as one stream? Is there any way to export the packet header information with each frame, so we can manually associate the channels?

Also, poking around, it looks like the NVIDIA PX2 boards used a quad deserializer as well, albeit a MAX9286. Do you think any of that codebase is portable to our version of L4T to gain VC support?

@David
Could you tell me which TRM, and where in it, says that VC IDs are not supported?

@xtracrispy
First, the TX1 only has two channels for each 4-lane CSI. As I mentioned earlier, the current software driver does not support VC yet. For now, you may need to split it into two 4-lane sources.

Hi ShaneCCC,

I am referring to the Tegra X1 (SoC) Technical Reference Manual, which you can download from the Embedded Download Center; in table 155 (page 2391) [1] there is a note that says:

“Virtual channel based interleaving is not supported”

I just noticed that the Parker (TX2) TRM was just released, so I decided to take a look at it. Interestingly, table 366 (page 2847) mentions that among the main differences from the TX1 are:

Support for virtual channel (VC) interleaving
Support for data type (DT) interleaving

Then I went to page 2846, which lists virtual channel interleaving as a new feature now supported by the hardware:

“Virtual Channel Interleaving: VCs are defined in the CSI-2 specification, and are useful when supporting multiple camera sensors. With the VC capability, one pixel parser (PP) can interleave up to four image streams. This feature is useful in supporting up to 12 streams for automotive use case”

However, it seems that the maximum number of cameras you can multiplex is two per every 2 MIPI CSI lanes, because it says up to 12 streams. In conclusion, VC IDs should be possible, but only on the TX2. ShaneCCC, what would be needed if I wanted to add this support to the main VI driver?

[1] https://devtalk.nvidia.com/default/topic/1009457/jetson-tx1/tx1-i2c-csi-2-quad-camera-input/post/5166453/#5166453

@David
To be honest, it’s difficult for me to say what would be needed, so we have raised this topic with the developers to put it on the plan.

Hi ShaneCCC,

Thanks for your honesty ;) Please keep me posted with any guidance or news you get about it. I will be looking into this in more detail and asking NVIDIA internally as well.

-David

@Shane,
Acknowledged: VC is not supported yet. What I am wondering is whether manual workarounds are possible, or whether the data will be dropped altogether.

Example application: two standard VC streams coming in. Would frame grabs using V4L2 as a time series look like:
a) VC0->VC1->VC0->VC1
b) VC0->DROP->VC0->DROP
c) error, everything breaks?

@xtracrispy
I suppose it would be (c): error, everything breaks.

Hi All,

I was wondering if NVIDIA has made any progress on Virtual Channel ID support; we are going to work on this with one of the MAX9286 deserializers and the Leopard Imaging camera:

https://www.leopardimaging.com/LI-OV10640-490-GMSL.html

We will be creating the drivers, but we also want to use the VC ID feature. If anyone has made progress on this, any input is appreciated. As I understand it, this should be possible only on the TX2.

-David

Hi All,

My setup is a TI DS90UB964-Q1 quad camera hub attached to an NVIDIA Jetson TX2.
Has anybody made progress on working with virtual channels?

Hi Rabe,

RidgeRun created the drivers for the MAX SerDes, and this solution was enough for our customer; they use one GMSL camera per deserializer. We are planning to continue researching how to add virtual channel ID support as an internal project. I asked NVIDIA about it, but they haven’t made progress on this yet:

https://devtalk.nvidia.com/default/topic/1030137/virtual-channel-id-on-tx2/?offset=4

-David