I have a single sensor that can output multiple image streams simultaneously over MIPI using Virtual Channels.
Is it possible to connect this sensor directly to Jetson, without using GMSL or SerDes, and have each Virtual Channel recognized as a separate device (for example, /dev/video0, /dev/video1, and so on)?
Note that connecting this sensor directly to Jetson and capturing a single image stream already works.
In the Developer Guide, the chapter “Jetson Virtual Channel with GMSL Camera Framework” gives an example that uses GMSL, but there is no description of a configuration without GMSL.
Please let me know if this is technically possible.
I understand that it is possible,
but if I create a corresponding device tree file based on “tegra194-camera-imx390-a00.dtsi”,
what should I do with the “gmsl-link” entry?
Sorry for the repetitive questions.
Since this is a single sensor, there will be duplicate slave addresses;
how should this be handled in the device tree file?
For example, should we set the correct slave address for VC0 and a dummy address for VC1?
You can set the VC1 address to the real address + 1 and handle it in the sensor driver.
For example, if the sensor slave address is 0x10, set VC1 to 0x11 and handle it properly in the driver.
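As a rough sketch of what that might look like in the device tree (modeled loosely on tegra194-camera-imx390-a00.dtsi; the node names, the placement of the `vc-id` property inside the mode node, and the +1 dummy address are assumptions to be validated against your sensor driver and Jetson release):

```dts
/* Hypothetical sketch, not a drop-in file: two sensor nodes describing
 * one physical sensor, one per Virtual Channel. Only VC0 uses the real
 * I2C slave address; VC1 uses a dummy address (real + 1) that the
 * driver must map back to the real device.
 */
i2c@3180000 {
    /* VC0: real sensor slave address */
    sensor_a@10 {
        compatible = "nvidia,imx390";   /* replace with your sensor */
        reg = <0x10>;
        mode0 {
            vc-id = "0";                /* capture Virtual Channel 0 */
            num_lanes = "4";
            /* other mode properties omitted */
        };
    };
    /* VC1: dummy address = real address + 1 */
    sensor_b@11 {
        compatible = "nvidia,imx390";
        reg = <0x11>;
        mode0 {
            vc-id = "1";                /* capture Virtual Channel 1 */
            num_lanes = "4";
            /* other mode properties omitted */
        };
    };
};
```

With two nodes like this, the VI/CSI framework should enumerate two video devices, and the driver probe for the 0x11 node would internally talk to the real device at 0x10.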
Please let me ask one more question on this matter, just to be sure.
Multiple frames of data from one sensor module are output over one MIPI signal line using Virtual Channels.
Is there any problem if the data size, data type, and frame rate differ between the streams?
Sorry for the late reply, but thank you for your answer.
I was able to confirm that I can get multiple images from one sensor
using Virtual Channels even in a configuration without GMSL,
so I will mark this issue as resolved.