I am currently using an FPD-Link III camera module with an onsemi image sensor, an onsemi ISP, and a TI serializer. I need to develop the driver to get the stream in YUV422 8-bit image format on the Jetson AGX Xavier, bypassing the Jetson internal ISP.
Here is the block diagram of the communication between the camera module, the deserializer, and the Jetson camera interface:
Do we need to develop drivers for both the ISP and the image sensor to get the stream in YUV, or do we only need to develop the driver that sends data from the ISP to the serializer, and set the GPIO signal in the serializer to send the Bayer data from the image sensor to the ISP?
Do we need both isp_driver.c and image_sensor.c, or only isp_driver.c, to get the YUV data?
Is it possible to use LibArgus to get YUV422 while bypassing the ISP, or does using LibArgus require the Jetson ISP pipeline?
Can you suggest whether, for this kind of camera module, we have to use LibArgus to get the YUV data, or whether we have to use the V4L2 API directly?
Argus only supports Bayer formats. in other words, according to the diagram, it's sending YUV to the CSI brick. you can only enable v4l2src to access the stream.
Thank you for your feedback, we will use v4l2src directly.
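For reference, here is a minimal sketch of the same access at the raw V4L2 API level (roughly what v4l2src does for us). The /dev/video0 node and the 1824x940 UYVY mode are assumptions matching the DT draft later in this thread; adjust both for the real driver.

/* Minimal V4L2 format-negotiation sketch; not a full streaming loop.
 * Assumes the YUV camera enumerates as /dev/video0 and outputs
 * 1824x940 UYVY -- both are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_format fmt;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
                perror("open /dev/video0");
                return 1;
        }

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1824;
        fmt.fmt.pix.height = 940;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY; /* YUV 4:2:2, 8-bit */
        fmt.fmt.pix.field = V4L2_FIELD_NONE;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                perror("VIDIOC_S_FMT");
        else
                printf("negotiated %ux%u\n",
                       fmt.fmt.pix.width, fmt.fmt.pix.height);

        close(fd);
        return 0;
}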
We haven't written the driver yet, and that's why I'm confused: for camera modules that have an internal ISP, will the ISP be the main sensor in the device tree, i.e. the node that carries the mode properties? For example, if the ISP is the OV491, should the DT look like the following?
i2c@3180000 {
ov491_a@24 {
compatible = "nvidia,ov491";
/* I2C device address */
reg = <0x24>;
/* V4L2 device node location */
devnode = "video0";
/* Physical dimensions of sensor */
physical_w = "3.674";
physical_h = "2.738";
/* Define any required hw resources needed by driver */
/* ie. clocks, io pins, power sources */
avdd-reg = "vana";
iovdd-reg = "vif";
/* Sensor output flip settings */
vertical-flip = "true";
/**
* A modeX node is required to support v4l2 driver
* implementation with NVIDIA camera software stack
*
* mclk_khz = "";
* Standard MIPI driving clock, typically 24MHz
*
* num_lanes = "";
* Number of lane channels sensor is programmed to output
*
* tegra_sinterface = "";
* The base tegra serial interface lanes are connected to
*
* discontinuous_clk = "";
* The sensor is programmed to use a discontinuous clock on MIPI lanes
*
* dpcm_enable = "true";
* The sensor is programmed to use DPCM modes
*
* cil_settletime = "";
* MIPI lane settle time value.
* A "0" value attempts to autocalibrate based on mclk_multiplier
*
* active_w = "";
* Pixel active region width
*
* active_h = "";
* Pixel active region height
*
* pixel_t = "";
* The sensor readout pixel pattern
*
* readout_orientation = "0";
* Based on camera module orientation.
* Only change readout_orientation if you specifically
* program a different readout order for this mode
*
* line_length = "";
* Pixel line length (width) for sensor mode.
* This is used to calibrate features in our camera stack.
*
* mclk_multiplier = "";
* Multiplier to MCLK to help time hardware capture sequence
* TODO: Assign to PLL_Multiplier as well until fixed in core
*
* pix_clk_hz = "";
* Sensor pixel clock used for calculations like exposure and framerate
*
* inherent_gain = "";
* Gain obtained inherently from mode (ie. pixel binning)
*
* min_gain_val = ""; (floor to 6 decimal places)
* max_gain_val = ""; (floor to 6 decimal places)
* Gain limits for mode
*
* min_exp_time = ""; (ceil to integer)
* max_exp_time = ""; (ceil to integer)
* Exposure Time limits for mode (us)
*
*
* min_hdr_ratio = "";
* max_hdr_ratio = "";
* HDR Ratio limits for mode
*
* min_framerate = "";
* max_framerate = "";
* Framerate limits for mode (fps)
*/
mode0 { // 1824x940 YUV422 mode
mclk_khz = "24000";
num_lanes = "2";
tegra_sinterface = "serial_a";
discontinuous_clk = "no";
dpcm_enable = "false";
cil_settletime = "0";
active_w = "1824";
active_h = "940";
pixel_t = "uyvy";
readout_orientation = "90";
line_length = "1825";
inherent_gain = "1";
mclk_multiplier = "6.67";
pix_clk_hz = "160000000";
min_gain_val = "1.0";
max_gain_val = "16";
min_hdr_ratio = "1";
max_hdr_ratio = "64";
min_framerate = "2.787078";
max_framerate = "30";
min_exp_time = "22";
max_exp_time = "358733";
};
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
e3327_ov491_out0: endpoint {
csi-port = <2>;
bus-width = <2>;
remote-endpoint = <&e3327_csi_in0>;
};
};
};
};
};
tegra-camera-platform {
compatible = "nvidia, tegra-camera-platform";
/**
* Physical settings to calculate max ISO BW
*
* num_csi_lanes = <>;
* Total number of CSI lanes when all cameras are active
*
* max_lane_speed = <>;
* Max lane speed in Kbit/s
*
* min_bits_per_pixel = <>;
* Min bits per pixel
*
* vi_peak_byte_per_pixel = <>;
* Max byte per pixel for the VI ISO case
*
* vi_bw_margin_pct = <>;
* Vi bandwidth margin in percentage
*
* max_pixel_rate = <>;
* Max pixel rate in Kpixel/s for the ISP ISO case
*
* isp_peak_byte_per_pixel = <>;
* Max byte per pixel for the ISP ISO case
*
* isp_bw_margin_pct = <>;
* Isp bandwidth margin in percentage
*/
num_csi_lanes = <2>;
max_lane_speed = <1500000>;
min_bits_per_pixel = <10>;
vi_peak_byte_per_pixel = <2>;
vi_bw_margin_pct = <25>;
max_pixel_rate = <160000>;
isp_peak_byte_per_pixel = <5>;
isp_bw_margin_pct = <25>;
/**
* The general guideline for naming badge_info contains 3 parts, and is as follows,
* The first part is the camera_board_id for the module; if the module is in a FFD
* platform, then use the platform name for this part.
* The second part contains the position of the module, ex. "rear" or "front".
* The third part contains the last 6 characters of a part number which is found
* in the module's specsheet from the vendor.
*/
modules {
module0 {
badge = "e3327_front_P5V27C";
position = "rear";
orientation = "1";
drivernode0 {
/* Declare PCL support driver (classically known as guid) */
pcl_id = "v4l2_sensor";
/* Driver v4l2 device name */
devname = "ov491 2-0024";
/* Declare the device-tree hierarchy to driver instance */
proc-device-tree = "/proc/device-tree/i2c@3180000/ov491_a@24";
};
};
};
};
};
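A quick sanity check on the mode0 numbers above: mclk_multiplier is expected to equal pix_clk_hz / mclk, i.e. 160000000 / 24000000 ≈ 6.67, which matches the value set.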
1. Is it correct that the ISP, acting as the sensor node, will have the mode properties instead of the image sensor?
2. Also, the example in the Jetson developer guide uses a GMSL serial link; do we have to develop an FPD-Link III driver? Do you think it can also support virtual channels?
that ov491 code snippet looks insufficient;
besides the sensor-specific settings, you'll also need to add the CSI/VI/Sensor port bindings.
there's a YUV reference camera available. please download the latest rel-35 public sources, and you shall see a device tree like the following for reference.
for example, $public_sources/kernel_src/hardware/nvidia/platform/t19x/galen/kernel-dts/common/tegra194-p2822-camera-eCAM130A_CUXVR.dtsi
FYI,
we had another customer using FPD-Link III; they are also using virtual channel support.
@JerryChang Indeed, the CSI/VI/Sensor port bindings are missing from the DT I wrote. Also, before the DT, correct me if I'm wrong, I think we have to write all of the following drivers (a rough bring-up sketch follows the list):
TI954 deserializer driver
TI935 serializer driver
FPD-Link III serial link driver and virtual channels support
OX03A01 image sensor driver (MIPI CSI-2, I²C, GPIO) with serializer and deserializer instances (initialization, power on/off, stream on/off)
ISP OV491 driver
Device tree (top-level DTS, image sensor mapping with the SerDes, the ISP, a TCA9546 mux for 4-camera support, etc.)
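To make the SerDes items concrete, a rough bring-up sketch for the deserializer is below. The regmap calls are standard kernel API, but every DS90UB954 register offset, bit position, and value here is an assumption to be verified against the DS90UB954-Q1 datasheet.

/* Rough DS90UB954 bring-up sketch; all register offsets and bit
 * positions are assumptions -- verify against the DS90UB954-Q1
 * datasheet before use. */
#include <linux/bits.h>
#include <linux/err.h>
#include <linux/i2c.h>
#include <linux/regmap.h>

#define UB954_REG_FPD3_PORT_SEL 0x4c /* RX port select (verify) */
#define UB954_REG_BCC_CONFIG    0x58 /* back-channel config (verify) */
#define UB954_REG_SLAVE_ID0     0x5d /* remote device 8-bit address (verify) */
#define UB954_REG_SLAVE_ALIAS0  0x65 /* alias seen on the Jetson bus (verify) */

static const struct regmap_config ub954_regmap_config = {
        .reg_bits = 8,
        .val_bits = 8,
};

/* Called from probe: enables I2C pass-through so the Jetson can reach
 * the remote ISP through the FPD-Link back channel. */
static int ub954_init(struct i2c_client *client)
{
        struct regmap *map;

        map = devm_regmap_init_i2c(client, &ub954_regmap_config);
        if (IS_ERR(map))
                return PTR_ERR(map);

        /* Point the indirect per-port registers at RX port 0. */
        regmap_write(map, UB954_REG_FPD3_PORT_SEL, 0x01);
        /* Enable I2C pass-through (bit position is an assumption). */
        regmap_write(map, UB954_REG_BCC_CONFIG, BIT(6));
        /* Map the remote OV491 (7-bit 0x24 -> 8-bit 0x48) onto the local bus. */
        regmap_write(map, UB954_REG_SLAVE_ID0, 0x24 << 1);
        return regmap_write(map, UB954_REG_SLAVE_ALIAS0, 0x24 << 1);
}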
The confusing part is: do we have to write separate drivers for the image sensor and the ISP?
Because the image sensor sends the Bayer data to the ISP, do you think that in the device tree we have to set the mapping between the image sensor OX03A01 and the ISP OV491? For example:
/*
* Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/ {
host1x {
vi@15700000 {
num-channels = <1>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
e3327_vi_in0: endpoint {
csi-port = <2>;
bus-width = <2>;
remote-endpoint = <&e3327_csi_out0>;
};
};
};
};
nvcsi@150c0000 {
num-channels = <1>;
#address-cells = <1>;
#size-cells = <0>;
channel@0 {
reg = <0>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
e3327_csi_in0: endpoint@0 {
csi-port = <2>;
bus-width = <2>;
remote-endpoint = <&e3327_ov491_out0>;
};
};
port@1 {
reg = <1>;
e3327_csi_out0: endpoint@1 {
remote-endpoint = <&e3327_vi_in0>;
};
};
};
};
};
};
i2c@3180000 {
status = "okay";
ov491@20 {
compatible = "ovti,ov491";
reg = <0x20>;
clocks = <&tegra_car TEGRA210_CLK_PLLP_OUT0>;
clock-names = "csi_phy_1";
power-domains = <&pd_g>, <&pd_cil>;
power-domain-names = "csi",
"cil";
num-lanes = <1>;
lane-assignments = "0x0";
csi-port = <&mipi_csi2>;
status = "okay";
ti935@3c {
compatible = "ti,ds90ub935-q1";
reg = <0x3c>;
interrupt-parent = <&gpio>;
interrupts = <TEGRA_GPIO(TEGRA_SOC, M, 6) IRQ_TYPE_LEVEL_HIGH>;
power-domains = <&pd_m>;
power-domain-names = "serializer";
num-lanes = <1>;
csi-port = <&mipi_csi2>;
status = "okay";
};
ti954@2c {
compatible = "ti,ds90ub954-q1";
reg = <0x2c>;
interrupt-parent = <&gpio>;
};
ox03a01@60 {
compatible = "your_ox03a01_compatible";
reg = <0x60>;
clocks = <&tegra_car TEGRA210_CLK_PLLP_OUT0>;
clock-names = "mclk";
power-domains = <&pd_g>;
power-domain-names = "csi";
num-lanes = <1>;
lane-assignments = "0x0";
csi-port = <&mipi_csi2>;
status = "okay";
};
};
};
Do I have to set the modeX properties for the image sensor or for the ISP?
Thank you a lot for your support. I am used to working with cameras that have Bayer output; that's why I'm confused…
typically, the ISP part is done by the sensor vendor; there should be an on-chip ISP to process the data.
since you're using a YUV camera, all you have to do is provide those I2C, GPIO, etc. sensor-related settings; then you shall obtain the YUV data for your own use case.
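To illustrate what providing those settings typically amounts to: a YUV camera driver is largely a table of I2C writes applied at power-on / stream-on. In the sketch below the register addresses and values are placeholders, not real OV491 settings; the actual sequence has to come from the vendor.

/* Sketch of a table-driven register write; addresses/values are
 * placeholders, not real OV491 settings. Assumes a regmap created
 * with .reg_bits = 16, .val_bits = 8. */
#include <linux/regmap.h>
#include <linux/types.h>

struct reg_val {
        u16 addr;
        u8 val;
};

/* hypothetical stream-on table */
static const struct reg_val ov491_stream_on[] = {
        { 0x0100, 0x01 }, /* placeholder: streaming enable */
};

static int ov491_write_table(struct regmap *map,
                             const struct reg_val *tbl, size_t len)
{
        size_t i;
        int ret;

        for (i = 0; i < len; i++) {
                ret = regmap_write(map, tbl[i].addr, tbl[i].val);
                if (ret < 0)
                        return ret;
        }
        return 0;
}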
@JerryChang Thank you for your feedback. So should the driver be written for the image sensor or for the ISP?
Because the image sensor does the capture and the ISP does the processing, they are two separate components in the camera.
Also, the modeX properties needed by the NVIDIA camera stack should go on the ISP sensor node, right? Given that the serializer outputs YUV data to the Jetson?
So if I understand correctly, I only have to write the driver for the image sensor, right? And in the device tree I should put the YUV format in the modeX properties?
Sorry for all these questions, but for the DT I want to know whether I have to include both the ISP and the image sensor nodes.
The image sensor captures raw Bayer data, which needs to be processed by the ISP to generate YUV data. Therefore, the ISP acts as an intermediary between the image sensor and the final output on the MIPI CSI-2 interface.
Also note that the image sensor and the ISP are two different components; the ISP is not integrated into the image sensor. The ISP reference is the OV491 and the image sensor is the OX03A01 Bayer sensor; the data is sent over MIPI CSI-2 from the image sensor to the ISP, and the control is over I2C.
Without the ISP driver, the raw Bayer data captured by the image sensor cannot be processed into usable image data. And without the image sensor driver, the ISP cannot communicate with the image sensor to capture the raw Bayer data.
correct.
let's ignore the SerDes chips at this point, since they're only for data transfer.
I assume you don't need to do any extra work, because that's an external ISP being used.
device tree configuration is used to specify hardware settings. the CSI brick can handle Bayer or YUV formats as long as you've given the correct settings.
the images sent to the CSI brick are already in YUV format. hence, you just enable v4l2src to access the stream.
as I mentioned in comment #5, there's a YUV reference camera available; please take a look.
you may also see the V4L2 Sensor Driver Development Tutorial, which dives deep into the steps of writing a complete V4L2-compliant driver for an image sensor.
that's correct.
BUT… you can send raw Bayer data and let the NV internal ISP process the frames.
Thank you a lot for your support !
Maybe one last clarification before closing this topic.
It seems that only the ISP OV491 can be driven by the Jetson over I2C: in the hardware schematic there is an i2c_ext (external I2C) input on the ISP OV491, but the image sensor OX03A01 has no i2c_ext pins; there is only i2c_int (internal I2C) between the ISP and the image sensor.
So it seems that to drive the image sensor we have to go through the ISP for the I2C requests, hence the need for an ISP driver (the ISP is what the Jetson drives over I2C).
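Purely as an illustration of that idea, the sketch below forwards one sensor register write through hypothetical ISP "proxy" registers on the internal bus; every ISP_PROXY_* name and offset is invented, and the real mechanism (if any exists) is in onsemi's OV491 documentation.

/* Invented example: forwarding a sensor register write through the ISP.
 * None of these offsets are real OV491 registers. */
#include <linux/regmap.h>
#include <linux/types.h>

#define ISP_PROXY_REG_HI  0x7000 /* hypothetical: sensor reg, high byte */
#define ISP_PROXY_REG_LO  0x7001 /* hypothetical: sensor reg, low byte */
#define ISP_PROXY_DATA    0x7002 /* hypothetical: data to forward */
#define ISP_PROXY_TRIGGER 0x7003 /* hypothetical: start the transfer */

static int isp_forward_sensor_write(struct regmap *isp, u16 reg, u8 val)
{
        regmap_write(isp, ISP_PROXY_REG_HI, reg >> 8);
        regmap_write(isp, ISP_PROXY_REG_LO, reg & 0xff);
        regmap_write(isp, ISP_PROXY_DATA, val);
        return regmap_write(isp, ISP_PROXY_TRIGGER, 1);
}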
Anyway, it's a bit more complicated than using a Bayer sensor directly…