I am attempting to design a high-throughput multi-camera system and need to confirm the constraints of the imaging pipeline.
Information I have gathered so far (please correct me if my info is incorrect):
The Xavier SoC has 16 CSI-2 lanes; each lane can handle 2.5 Gbps via D-PHY.
Each camera group has 4 ports, and a camera group can only interface with one type of camera at a time (e.g., 1 to 4 IMX390).
Each group is connected to 2x MAX9296A dual GMSL1/GMSL2 deserializers, making a total of 8 deserializers available.
Each deserializer uses 2x CSI-2 lanes per Xavier SoC (really not sure on this one).
Each deserializer is set up in replication mode to send 2x CSI-2 lanes to each Xavier SoC.
Each deserializer connects via D-PHY and has a max output rate of 2 Gbps per lane, making each deserializer capable of moving 4 Gbps of data (assuming it is connected to 2 lanes).
Therefore, each camera group can handle 8 Gbps of data.
Unknowns:
A. How many Gpps (gigapixels per second) can the Xavier ICP/ISP process?
B. What is the CSI overhead? I can calculate the raw bitrate from a camera, but I am assuming there is some overhead.
C. What other aspects/constraints should I be aware of when designing a multi-camera suite?
If I take the AR0820 as an example:
8.3 MP * 20 fps * 12 bit * 4 cameras ≈ 7.97 Gbps
That fits within my estimated 8 Gbps per camera group but assumes zero overhead. Can the AGX handle 4x AR0820 cameras per camera group? How about 16x AR0820 on 4 camera groups? I imagine I would hit some bottleneck in the image processing pipeline, but I'm not sure what!
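To keep the arithmetic straight, here is a minimal sketch of my math. The ~5% CSI-2 protocol/blanking overhead factor is only a placeholder assumption of mine (this is exactly unknown B above), and the 8 Gbps budget comes from my per-group assumptions earlier (2 deserializers * 2 lanes * 2 Gbps):

#include <cstdio>

int main() {
    // Per-camera raw payload: pixels/frame * frames/s * bits/pixel.
    const double pixelsPerFrame  = 8.3e6;  // AR0820, approx. 8.3 MP
    const double framesPerSecond = 20.0;
    const double bitsPerPixel    = 12.0;   // RAW12
    const double camerasPerGroup = 4.0;

    // Assumed CSI-2 protocol overhead (packet headers/CRC, blanking).
    // The 5% value is a placeholder, not a measured or documented number.
    const double overheadFactor  = 1.05;

    const double rawGbps    = pixelsPerFrame * framesPerSecond * bitsPerPixel
                              * camerasPerGroup / 1e9;
    const double onWireGbps = rawGbps * overheadFactor;

    // Per-group budget from my earlier assumptions:
    // 2 deserializers * 2 CSI-2 lanes * 2 Gbps per lane = 8 Gbps.
    const double groupBudgetGbps = 2.0 * 2.0 * 2.0;

    printf("raw payload:   %.2f Gbps\n", rawGbps);           // ~7.97 Gbps
    printf("with overhead: %.2f Gbps (budget %.1f Gbps)\n",
           onWireGbps, groupBudgetGbps);                      // ~8.37 vs 8.0 Gbps
    return 0;
}

Even a few percent of overhead pushes 4x AR0820 at 20 fps past the 8 Gbps figure, which is why I want to pin down the real overhead.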
No, I am mostly interested in WHY the AGX can/cannot handle a certain number of cameras. The example I used was 4x AR0820 in one camera group.
Additionally, the link you posted for the ISP spec also includes info for camera bandwidth (90 Gb/s over 16 GMSL(R)).
However, I can't figure out how this spec is derived. If I am correct, the Xavier SoC has 16x CSI-2 lanes, each capable of 2.5 Gbps, which only adds up to 40 Gbps. The only way I can get close to 90 Gbps is by using 16 GMSL2 cameras at max rate (16 * 6 Gbps = 96 Gbps). But again, the Xavier doesn't appear capable of consuming that much data over CSI. Can you tell me where that 90 Gbps comes from?
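For reference, this is how I arrived at the two aggregate numbers above; the 2.5 Gbps per CSI-2 lane and 6 Gbps per GMSL2 link figures are my own assumptions from reading the docs:

#include <cstdio>

int main() {
    // Aggregate CSI-2 capture bandwidth into the Xavier SoC (my understanding).
    const double csiLanes    = 16.0;
    const double gbpsPerLane = 2.5;   // D-PHY per-lane rate
    printf("CSI-2 aggregate: %.0f Gbps\n", csiLanes * gbpsPerLane);   // 40 Gbps

    // Aggregate GMSL2 link bandwidth on the camera side of the deserializers.
    const double gmslLinks   = 16.0;
    const double gbpsPerLink = 6.0;   // GMSL2 max forward link rate
    printf("GMSL2 aggregate: %.0f Gbps\n", gmslLinks * gbpsPerLink);  // 96 Gbps
    return 0;
}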
Thanks. That appears to be a quad-port GMSL2 deserializer, which is different from what I had initially thought.
I don't think the camera vendors will have any input on this matter because I am looking to understand the performance of the AGX (deserializer, CSI, NVCSI, VI, ISP, etc.), not the camera itself.
The Xavier TRM is providing some insights into the limitations of the camera pipeline. I will continue my reading there.
Another thing I'd like to learn about is how the ISP is currently configured for the supported camera modules (AR0231, IMX390, AR0820).
Primarily, how many line exposures per "frame", and what is the pixel bit depth (8/16/20/24)?
You can check them by running the nvsipl_camera sample application with the camera modules.
The following code snippet is from /home/vyu/nvidia/nvidia_sdk/DRIVE_OS_5.2.0_SDK_HW_Linux_OS_DDPX/DRIVEOS/drive-t186ref-linux/samples/nvmedia/nvsipl/test/camera/main.cpp, FYI.
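// The check below treats any RAW6 through RAW20 capture input format as a raw sensor output (isRawSensor = true).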
if ((sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW6) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW7) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW8) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW10) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW12) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW14) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW16) or
    (sensor.vcInfo.inputFormat == NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RAW20)) {
    isRawSensor = true;
}