Camera device tree

What’s the difference between these two device trees?

i2c@3180000 {
    ov5693_a@36 {
        // .....
        // .....
    };
};

i2c@3180000 {
    tca9548@77 {
        i2c@0 {
            ov5693_a@36 {
                // .....
                // .....
            };
        };
    };
};

I’m trying to capture the CSI data from the camera without i2c communication. In other topics I referred to, I saw the authors use the first way, so I want to know what the difference is.
Thanks.

Hello @leo857366,

The difference between the two device tree snippets is that in the first you describe your camera module as directly connected to the i2c bus, whereas in the second you configure the system to reach the sensor through an i2c mux (the tca9548).

Now, the reason for choosing the second over the first is that if you want to connect two or more camera modules to the board, you can end up with two devices that share the same i2c address on the same i2c bus. In order to tell the system which sensor to communicate with, you need to mux the i2c bus so you can control which camera receives the i2c commands.
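For example, here is a sketch of the mux layout with a hypothetical second camera (the ov5693_b node is an assumption for illustration): both sensors respond at address 0x36, but each sits behind a different mux channel, so they never collide on the bus.

```dts
i2c@3180000 {
    tca9548@77 {
        i2c@0 {
            ov5693_a@36 {   /* first camera, reachable via mux channel 0 */
                // .....
            };
        };
        i2c@1 {
            ov5693_b@36 {   /* second camera, same 0x36 address,
                             * reachable via mux channel 1 */
                // .....
            };
        };
    };
};
```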

We hope that helps answer some of your questions.

Now, in order to better assist you, would it be possible for you to elaborate a bit more on your specific use case?

What do you mean by capturing without i2c communication?

Please let us know if you have other questions or comments or if you require further assistance, we would be glad to assist you.

regards,
Andrew


What do you mean by capturing without i2c communication?

I’m doing similar work to the topics below.

  1. How to configure only MIPI lanes for non I2C camera sensor
  2. Jetson Xavier NX MIPI CSI-2 without I2C from FPGA

Both of them use the first way to add the camera to the device tree, and several other topics do the same thing in the same way, so I’m not sure whether I should choose the first way.

Hello @leo857366,

Thanks for getting back with more details.

Now everything is a bit clearer.

If your sensor does not require any i2c configuration, I believe it makes no difference which DTB configuration you go with.

This is because, at the end of the day, that configuration is used by the Jetson camera subsystem to register an i2c_client, which your .c camera driver later uses to register video devices and, more importantly, to communicate with the device.

Now, given that you don’t need to configure your camera through i2c, you should be fine with either i2c configuration. You just need to adjust your driver so that its logic avoids any i2c operations. This is a similar approach to a sensor behind a SERDES connection: the sensor itself is not configured through i2c directly, since you interact with it through the SERDES modules, so you only need to make sure you configure all the CSI, VI and tegracam endpoints properly.

Finally, you need to make sure your camera is producing frames for the driver to capture. Usually that is done by changing registers through i2c; if your camera offers a different method, you need to ensure it is properly configured and set to stream whenever you want to capture, otherwise the driver might break.

In conclusion, you should be fine with either DTB configuration. If you are going with one camera, I would suggest the DTB config with no mux; otherwise, for the sake of respecting the camera subsystem organization, I would go with the i2cmux option.

Sorry if this is a bit dense; it is usually harder to explain through text. If you want us to jump on a call with you to give you a bit more guidance, please let us know; we would do it at no cost.

Please let us know if you have any other questions or if you require further assistance.

regards,
Andrew

Hi @proventusnova,

The camera module we use is an FPGA combining a visible sensor and a thermal sensor. The FPGA sends the CSI data to an NVIDIA Jetson Orin Nano.
I want to know whether I should add the FPGA as a single new device in the device tree and treat visible and thermal as different modes,
or add visible and thermal as separate new devices in the device tree.

Thanks.

Hello @leo857366,

That depends on what you want to achieve and how the camera/FPGA works.

Assuming the camera/FPGA is able to provide both streams at the same time by using virtual channels, you should be able to create two separate nodes in the DTB and configure them to use different virtual channels for routing.
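A rough sketch of that two-node layout could look like the following; node names, the 0x36/0x37 addresses, and the vc-id property are illustrative assumptions, so please check the sensor device tree bindings for your L4T release:

```dts
i2c@3180000 {
    /* Two logical sensor nodes for one FPGA that outputs two
     * streams on different MIPI virtual channels. */
    fpga_vis@36 {
        /* ... mode tables, clocks, etc. ... */
        ports {
            port@0 {
                endpoint {
                    vc-id = <0>;    /* visible stream */
                    /* ... */
                };
            };
        };
    };
    fpga_therm@37 {
        ports {
            port@0 {
                endpoint {
                    vc-id = <1>;    /* thermal stream */
                    /* ... */
                };
            };
        };
    };
};
```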

In the other case, if the sensor can only provide one mode at a time depending on its register configuration, you would need to create only one DTB node and define two separate capture modes for it.
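A single-node layout with two capture modes might look roughly like this; the node and mode names are illustrative, and real Jetson mode nodes carry many more properties (active_w, active_h, pixel_t, and so on):

```dts
i2c@3180000 {
    fpga_cam@36 {
        /* One device node; the driver exposes two capture modes
         * and switches between them at runtime. */
        mode0 {
            /* visible: active_w, active_h, pixel_t, ... */
        };
        mode1 {
            /* thermal: active_w, active_h, pixel_t, ... */
        };
    };
};
```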

We hope that helps. Please let us know if you have any other questions or comments.

regards,
Andrew
