Hello,
I am having some issues bringing up a MIPI driver based on the ov5693. The idea is the following:
I am using L4T 32.6.1
We have an FPGA delivering two MIPI CSI-2 streams, one to CSI0 and one to CSI4 (2 different interfaces, with 2 lanes each). The configuration of the IP core is as follows:
- 2 lanes
- YUV 4:2:2 8-bit stream, 1080p60
- Line rate: 1200 Mbps per lane
To get the driver running, I added this dtsi file to the device tree. I skipped the HW-resources part, since the driver does not need it. The idea is to have CSI0 bound to /dev/video0 and CSI4 to /dev/video1:
/*
* Copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "dt-bindings/clock/tegra194-clock.h"
/ {
host1x {
vi@15c10000 {
num-channels = <2>;
ports {
#address-cells = <1>;
#size-cells = <0>;
vi_port0: port@0 {
reg = <0>;
udv_vi_in0: endpoint {
port-index = <0>;
bus-width = <2>;
remote-endpoint = <&udv_csi_out0>;
};
};
vi_port1: port@1 {
reg = <1>;
udv_vi_in1: endpoint {
port-index = <4>;
bus-width = <2>;
remote-endpoint = <&udv_csi_out1>;
};
};
};
};
nvcsi@15a00000 {
num-channels = <2>;
#address-cells = <1>;
#size-cells = <0>;
csi_chan0: channel@0 {
reg = <0>;
ports {
#address-cells = <1>;
#size-cells = <0>;
csi_chan0_port0: port@0 {
reg = <0>;
udv_csi_in0: endpoint@0 {
port-index = <0>;
bus-width = <2>;
remote-endpoint = <&udv_out0>;
};
};
csi_chan0_port1: port@1 {
reg = <1>;
udv_csi_out0: endpoint@1 {
remote-endpoint = <&udv_vi_in0>;
};
};
};
};
csi_chan1: channel@1 {
reg = <1>;
ports {
#address-cells = <1>;
#size-cells = <0>;
csi_chan1_port0: port@0 {
reg = <0>;
udv_csi_in1: endpoint@2 {
port-index = <4>;
bus-width = <2>;
remote-endpoint = <&udv_out1>;
};
};
csi_chan1_port1: port@1 {
reg = <1>;
udv_csi_out1: endpoint@3 {
remote-endpoint = <&udv_vi_in1>;
};
};
};
};
};
};
cam_i2cmux {
i2c_0: i2c@0 {
udv_cam0: ov5693_a@32 {
compatible = "nvidia,ov5693";
/* I2C device address */
reg = <0x32>;
/* V4L2 device node location */
devnode = "video0";
/* Physical dimensions of sensor */
physical_w = "3.674";
physical_h = "2.738";
/* Define any required hw resources needed by driver */
/* ie. clocks, io pins, power sources */
//avdd-reg = "vana";
//iovdd-reg = "vif"
/* Sensor output flip settings */
vertical-flip = "true";
/* Disable EEPROM support */
has-eeprom = "0";
use_sensor_mode_id = "true";
sensor-bpp = "16"; // used by V4L2 tools; YUYV is 16 bpp (8 bits per component)
clocks = <&bpmp_clks TEGRA194_CLK_EXTPERIPH1>,
<&bpmp_clks TEGRA194_CLK_PLLP_OUT0>;
clock-names = "extperiph1", "pllp_grtba";
mclk = "extperiph1";
clock-frequency = <24000000>;
/**
* A modeX node is required to support v4l2 driver
* implementation with NVIDIA camera software stack
*
* mclk_khz = "";
* Standard MIPI driving clock, typically 24MHz
*
* num_lanes = "";
* Number of lane channels sensor is programmed to output
*
* tegra_sinterface = "";
* The base tegra serial interface lanes are connected to
* In case of virtual HW devices, use virtual
* For SW emulated devices, use host
*
* phy_mode = "";
* PHY mode used by the MIPI lanes for this device
*
* discontinuous_clk = "";
* The sensor is programmed to use a discontinuous clock on MIPI lanes
*
* dpcm_enable = "true";
* The sensor is programmed to use a DPCM modes
*
* cil_settletime = "";
* MIPI lane settle time value.
* A "0" value attempts to autocalibrate based on mclk_multiplier
*
*
*
*
* active_w = "";
* Pixel active region width
*
* active_h = "";
* Pixel active region height
*
* pixel_t = "";
* The sensor readout pixel pattern
*
* readout_orientation = "0";
* Based on camera module orientation.
* Only change readout_orientation if you specifically
* Program a different readout order for this mode
*
* line_length = "";
* Pixel line length (width) for sensor mode.
* This is used to calibrate features in our camera stack.
*
* mclk_multiplier = "";
* Multiplier to MCLK to help time hardware capture sequence
* TODO: Assign to PLL_Multiplier as well until fixed in core
*
* pix_clk_hz = "";
* Sensor pixel clock used for calculations like exposure and framerate
*
*
*
*
* inherent_gain = "";
* Gain obtained inherently from mode (ie. pixel binning)
*
* == Source Control Settings ==
*
* Gain factor used to convert fixed point integer to float
* Gain range [min_gain/gain_factor, max_gain/gain_factor]
* Gain step [step_gain/gain_factor is the smallest step that can be configured]
* Default gain [Default gain to be initialized for the control.
* use min_gain_val as default for optimal results]
* Framerate factor used to convert fixed point integer to float
* Framerate range [min_framerate/framerate_factor, max_framerate/framerate_factor]
* Framerate step [step_framerate/framerate_factor is the smallest step that can be configured]
* Default Framerate [Default framerate to be initialized for the control.
* use max_framerate to get required performance]
* Exposure factor used to convert fixed point integer to float
* For convenience use 1 sec = 1000000us as conversion factor
* Exposure range [min_exp_time/exposure_factor, max_exp_time/exposure_factor]
* Exposure step [step_exp_time/exposure_factor is the smallest step that can be configured]
* Default Exposure Time [Default exposure to be initialized for the control.
* Set default exposure based on the default_framerate for optimal exposure settings]
*
* gain_factor = ""; (integer factor used for floating to fixed point conversion)
* min_gain_val = ""; (ceil to integer)
* max_gain_val = ""; (ceil to integer)
* step_gain_val = ""; (ceil to integer)
* default_gain = ""; (ceil to integer)
* Gain limits for mode
*
* exposure_factor = ""; (integer factor used for floating to fixed point conversion)
* min_exp_time = ""; (ceil to integer)
* max_exp_time = ""; (ceil to integer)
* step_exp_time = ""; (ceil to integer)
* default_exp_time = ""; (ceil to integer)
* Exposure Time limits for mode (us)
*
*
* min_hdr_ratio = "";
* max_hdr_ratio = "";
* HDR Ratio limits for mode
*
* framerate_factor = ""; (integer factor used for floating to fixed point conversion)
* min_framerate = "";
* max_framerate = "";
* step_framerate = ""; (ceil to integer)
* default_framerate = ""; (ceil to integer)
* Framerate limits for mode (fps)
*/
mode0 { // OV5693_MODE_1920X1080
mclk_khz = "24000";
num_lanes = "2";
tegra_sinterface = "serial_a";
phy_mode = "DPHY";
discontinuous_clk = "no";
dpcm_enable = "false";
cil_settletime = "0";
active_w = "1920";
active_h = "1080";
dynamic_pixel_bit_depth = "16";
csi_pixel_bit_depth = "16";
pixel_t = "yuv_yuyv";
mode_type = "yuv";
pixel_phase = "yuyv";
readout_orientation = "0";
line_length = "1920";
inherent_gain = "1";
mclk_multiplier = "9.33";
pix_clk_hz = "124416000";
gain_factor = "10";
min_gain_val = "10";/* 1DB*/
max_gain_val = "160";/* 16DB*/
step_gain_val = "1";
default_gain = "10";
min_hdr_ratio = "1";
max_hdr_ratio = "1";
framerate_factor = "1000000";
min_framerate = "2000000";/* 2 fps */
max_framerate = "80000000";/* 80 fps */
step_framerate = "1";
default_framerate = "60000000";
exposure_factor = "1000000";
min_exp_time = "34";/* us */
max_exp_time = "550385";/* us */
step_exp_time = "1";
default_exp_time = "33334";/* us */
embedded_metadata_height = "0";
};
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
udv_out0: endpoint {
port-index = <0>;
bus-width = <2>;
remote-endpoint = <&udv_csi_in0>;
};
};
};
};
};
};
i2c@3180000 {
udv_cam1: ov5693_c@36 {
compatible = "nvidia,ov5693";
/* I2C device address */
reg = <0x36>;
/* V4L2 device node location */
devnode = "video1";
/* Physical dimensions of sensor */
physical_w = "3.674";
physical_h = "2.738";
/* Define any required hw resources needed by driver */
/* ie. clocks, io pins, power sources */
//avdd-reg = "vana";
//iovdd-reg = "vif"
/* Sensor output flip settings */
vertical-flip = "true";
/* Disable EEPROM support */
has-eeprom = "0";
use_sensor_mode_id = "true";
sensor-bpp = "16"; // used by V4L2 tools; YUYV is 16 bpp (8 bits per component)
clocks = <&bpmp_clks TEGRA194_CLK_EXTPERIPH1>,
<&bpmp_clks TEGRA194_CLK_PLLP_OUT0>;
clock-names = "extperiph1", "pllp_grtba";
mclk = "extperiph1";
clock-frequency = <24000000>;
/* See the modeX property documentation in the ov5693_a@32 node above;
 * the same notes apply here. */
mode0 { // OV5693_MODE_1920X1080
mclk_khz = "24000";
num_lanes = "2";
tegra_sinterface = "serial_e";
phy_mode = "DPHY";
discontinuous_clk = "no";
dpcm_enable = "false";
cil_settletime = "0";
active_w = "1920";
active_h = "1080";
dynamic_pixel_bit_depth = "16";
csi_pixel_bit_depth = "16";
pixel_t = "yuv_yuyv";
mode_type = "yuv";
pixel_phase = "yuyv";
readout_orientation = "0";
line_length = "1920";
inherent_gain = "1";
mclk_multiplier = "9.33";
pix_clk_hz = "124416000";
gain_factor = "10";
min_gain_val = "10";/* 1DB*/
max_gain_val = "160";/* 16DB*/
step_gain_val = "1";
default_gain = "10";
min_hdr_ratio = "1";
max_hdr_ratio = "1";
framerate_factor = "1000000";
min_framerate = "2000000";/* 2 fps */
max_framerate = "80000000";/* 80 fps */
step_framerate = "1";
default_framerate = "60000000";
exposure_factor = "1000000";
min_exp_time = "34";/* us */
max_exp_time = "550385";/* us */
step_exp_time = "1";
default_exp_time = "33334";/* us */
embedded_metadata_height = "0";
};
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
udv_out1: endpoint {
port-index = <4>;
bus-width = <2>;
remote-endpoint = <&udv_csi_in1>;
};
};
};
};
};
};
/ {
tegra-camera-platform {
compatible = "nvidia, tegra-camera-platform";
/**
* Physical settings to calculate max ISO BW
*
* num_csi_lanes = <>;
* Total number of CSI lanes when all cameras are active
*
* max_lane_speed = <>;
* Max lane speed in Kbit/s
*
* min_bits_per_pixel = <>;
* Min bits per pixel
*
* vi_peak_byte_per_pixel = <>;
* Max byte per pixel for the VI ISO case
*
* vi_bw_margin_pct = <>;
* Vi bandwidth margin in percentage
*
* max_pixel_rate = <>;
* Max pixel rate in Kpixel/s for the ISP ISO case
*
* isp_peak_byte_per_pixel = <>;
* Max byte per pixel for the ISP ISO case
*
* isp_bw_margin_pct = <>;
* Isp bandwidth margin in percentage
*/
num_csi_lanes = <4>;
max_lane_speed = <1500000>;
min_bits_per_pixel = <16>;
vi_peak_byte_per_pixel = <2>;
vi_bw_margin_pct = <25>;
max_pixel_rate = <750000>;
isp_peak_byte_per_pixel = <5>;
isp_bw_margin_pct = <25>;
/**
* The general guideline for naming badge_info contains 3 parts, and is as follows,
* The first part is the camera_board_id for the module; if the module is in a FFD
* platform, then use the platform name for this part.
* The second part contains the position of the module, ex. "rear" or "front".
* The third part contains the last 6 characters of a part number which is found
* in the module's specsheet from the vender.
*/
modules {
cam_module0: module0 {
badge = "udv_front_RBP194";
position = "front";
orientation = "1";
cam_module0_drivernode0: drivernode0 {
/* Declare PCL support driver (classically known as guid) */
pcl_id = "v4l2_sensor";
/* Driver v4l2 device name */
devname = "ov5693 10-0032";
/* Declare the device-tree hierarchy to driver instance */
proc-device-tree = "/proc/device-tree/cam_i2cmux/i2c@0/ov5693_a@32";
};
};
cam_module1: module1 {
badge = "udv_rear_RBP194";
position = "rear";
orientation = "1";
cam_module1_drivernode0: drivernode0 {
/* Declare PCL support driver (classically known as guid) */
pcl_id = "v4l2_sensor";
/* Driver v4l2 device name */
devname = "ov5693 3-0036";
/* Declare the device-tree hierarchy to driver instance */
proc-device-tree = "/proc/device-tree/i2c@3180000/ov5693_c@36";
};
};
};
};
};
On the kernel side, this is the ov5693.c file. I basically rewrote it to stub out all the I2C accesses.
/*
* ov5693_v4l2.c - ov5693 sensor driver
*
* Copyright (c) 2013-2020, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/gpio.h>
#include <linux/module.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <media/tegra-v4l2-camera.h>
#include <media/tegracam_core.h>
#include <media/ov5693.h>
//#include "../platform/tegra/camera/camera_gpio.h"
#include "ov5693_mode_tbls.h"
#define CREATE_TRACE_POINTS
#include <trace/events/ov5693.h>
#define OV5693_MAX_COARSE_DIFF 6
#define OV5693_MAX_FRAME_LENGTH (0x7fff)
#define OV5693_MIN_EXPOSURE_COARSE (0x0002)
#define OV5693_MAX_EXPOSURE_COARSE \
(OV5693_MAX_FRAME_LENGTH-OV5693_MAX_COARSE_DIFF)
#define OV5693_DEFAULT_LINE_LENGTH (0xA80)
#define OV5693_DEFAULT_PIXEL_CLOCK (160)
#define OV5693_DEFAULT_FRAME_LENGTH (0x07C0)
#define OV5693_DEFAULT_EXPOSURE_COARSE \
(OV5693_DEFAULT_FRAME_LENGTH-OV5693_MAX_COARSE_DIFF)
static const u32 ctrl_cid_list[] = {
TEGRA_CAMERA_CID_GAIN,
TEGRA_CAMERA_CID_EXPOSURE,
TEGRA_CAMERA_CID_EXPOSURE_SHORT,
TEGRA_CAMERA_CID_FRAME_RATE,
//TEGRA_CAMERA_CID_GROUP_HOLD,
TEGRA_CAMERA_CID_HDR_EN,
};
struct ov5693 {
struct i2c_client *i2c_client;
struct v4l2_subdev *subdev;
const char *devname;
struct mutex streaming_lock;
bool streaming;
s32 group_hold_prev;
u32 frame_length;
bool group_hold_en;
struct camera_common_i2c i2c_dev;
struct camera_common_data *s_data;
struct tegracam_device *tc_dev;
};
static struct regmap_config ov5693_regmap_config = {
.reg_bits = 16,
.val_bits = 8,
};
static inline void ov5693_get_frame_length_regs(ov5693_reg *regs,
u32 frame_length)
{
regs->addr = OV5693_FRAME_LENGTH_ADDR_MSB;
regs->val = (frame_length >> 8) & 0xff;
(regs + 1)->addr = OV5693_FRAME_LENGTH_ADDR_LSB;
(regs + 1)->val = (frame_length) & 0xff;
}
static inline void ov5693_get_coarse_time_regs(ov5693_reg *regs,
u32 coarse_time)
{
regs->addr = OV5693_COARSE_TIME_ADDR_1;
regs->val = (coarse_time >> 12) & 0xff;
(regs + 1)->addr = OV5693_COARSE_TIME_ADDR_2;
(regs + 1)->val = (coarse_time >> 4) & 0xff;
(regs + 2)->addr = OV5693_COARSE_TIME_ADDR_3;
(regs + 2)->val = (coarse_time & 0xf) << 4;
}
static inline void ov5693_get_coarse_time_short_regs(ov5693_reg *regs,
u32 coarse_time)
{
regs->addr = OV5693_COARSE_TIME_SHORT_ADDR_1;
regs->val = (coarse_time >> 12) & 0xff;
(regs + 1)->addr = OV5693_COARSE_TIME_SHORT_ADDR_2;
(regs + 1)->val = (coarse_time >> 4) & 0xff;
(regs + 2)->addr = OV5693_COARSE_TIME_SHORT_ADDR_3;
(regs + 2)->val = (coarse_time & 0xf) << 4;
}
static inline void ov5693_get_gain_regs(ov5693_reg *regs,
u16 gain)
{
regs->addr = OV5693_GAIN_ADDR_MSB;
regs->val = (gain >> 8) & 0xff;
(regs + 1)->addr = OV5693_GAIN_ADDR_LSB;
(regs + 1)->val = (gain) & 0xff;
}
static int test_mode;
module_param(test_mode, int, 0644);
static inline int ov5693_read_reg(struct camera_common_data *s_data,
u16 addr, u8 *val)
{
return 0;
}
static int ov5693_write_reg(struct camera_common_data *s_data, u16 addr, u8 val)
{
return 0;
}
static int ov5693_write_table(struct ov5693 *priv,
const ov5693_reg table[])
{
return 0;
}
static int ov5693_power_on(struct camera_common_data *s_data)
{
struct camera_common_power_rail *pw = s_data->power;
pw->state = SWITCH_ON;
return 0;
}
static int ov5693_power_off(struct camera_common_data *s_data)
{
struct camera_common_power_rail *pw = s_data->power;
pw->state = SWITCH_OFF;
return 0;
}
static int ov5693_power_put(struct tegracam_device *tc_dev)
{
return 0;
}
static int ov5693_power_get(struct tegracam_device *tc_dev)
{
struct camera_common_data *s_data = tc_dev->s_data;
struct camera_common_power_rail *pw = s_data->power;
pw->state = SWITCH_OFF;
return 0;
}
static int ov5693_set_gain(struct tegracam_device *tc_dev, s64 val);
static int ov5693_set_frame_rate(struct tegracam_device *tc_dev, s64 val);
static int ov5693_set_exposure(struct tegracam_device *tc_dev, s64 val);
static int ov5693_set_exposure_short(struct tegracam_device *tc_dev, s64 val);
static const struct of_device_id ov5693_of_match[] = {
{
.compatible = "nvidia,ov5693",
},
{ },
};
static int ov5693_set_group_hold(struct tegracam_device *tc_dev, bool val)
{
int err;
struct ov5693 *priv = tc_dev->priv;
int gh_prev = switch_ctrl_qmenu[priv->group_hold_prev];
struct device *dev = tc_dev->dev;
if (priv->group_hold_en == true && gh_prev == SWITCH_OFF) {
camera_common_i2c_aggregate(&priv->i2c_dev, true);
/* enter group hold */
err = ov5693_write_reg(priv->s_data,
OV5693_GROUP_HOLD_ADDR, val);
if (err)
goto fail;
priv->group_hold_prev = 1;
dev_info(dev, "%s: enter group hold\n", __func__);
} else if (priv->group_hold_en == false && gh_prev == SWITCH_ON) {
/* leave group hold */
err = ov5693_write_reg(priv->s_data,
OV5693_GROUP_HOLD_ADDR, 0x11);
if (err)
goto fail;
err = ov5693_write_reg(priv->s_data,
OV5693_GROUP_HOLD_ADDR, 0x61);
if (err)
goto fail;
priv->group_hold_prev = 0;
dev_info(dev, "%s: leave group hold\n", __func__);
}
return 0;
fail:
dev_info(dev, "%s: Group hold control error\n", __func__);
return err;
}
static int ov5693_set_gain(struct tegracam_device *tc_dev, s64 val)
{
struct camera_common_data *s_data = tc_dev->s_data;
struct ov5693 *priv = (struct ov5693 *)tc_dev->priv;
struct device *dev = tc_dev->dev;
const struct sensor_mode_properties *mode =
&s_data->sensor_props.sensor_modes[s_data->mode_prop_idx];
ov5693_reg reg_list[2];
int err;
u16 gain;
int i;
if (!priv->group_hold_prev)
ov5693_set_group_hold(tc_dev, 1);
/* translate value */
gain = (u16) (((val * 16) +
(mode->control_properties.gain_factor / 2)) /
mode->control_properties.gain_factor);
ov5693_get_gain_regs(reg_list, gain);
dev_info(dev, "%s: gain %d val: %lld\n", __func__, gain, val);
for (i = 0; i < 2; i++) {
err = ov5693_write_reg(s_data, reg_list[i].addr,
reg_list[i].val);
if (err)
goto fail;
}
return 0;
fail:
dev_info(dev, "%s: GAIN control error\n", __func__);
return err;
}
static int ov5693_set_frame_rate(struct tegracam_device *tc_dev, s64 val)
{
struct camera_common_data *s_data = tc_dev->s_data;
struct device *dev = tc_dev->dev;
struct ov5693 *priv = tc_dev->priv;
const struct sensor_mode_properties *mode =
&s_data->sensor_props.sensor_modes[s_data->mode_prop_idx];
ov5693_reg reg_list[2];
int err;
u32 frame_length;
int i;
if (!priv->group_hold_prev)
ov5693_set_group_hold(tc_dev, 1);
frame_length = mode->signal_properties.pixel_clock.val *
mode->control_properties.framerate_factor /
mode->image_properties.line_length / val;
ov5693_get_frame_length_regs(reg_list, frame_length);
dev_dbg(dev, "%s: val: %d\n", __func__, frame_length);
for (i = 0; i < 2; i++) {
err = ov5693_write_reg(s_data, reg_list[i].addr,
reg_list[i].val);
if (err)
goto fail;
}
priv->frame_length = frame_length;
return 0;
fail:
dev_dbg(dev, "%s: FRAME_LENGTH control error\n", __func__);
return err;
}
static int ov5693_set_exposure(struct tegracam_device *tc_dev, s64 val)
{
struct camera_common_data *s_data = tc_dev->s_data;
struct device *dev = tc_dev->dev;
struct ov5693 *priv = tc_dev->priv;
const s32 max_coarse_time = priv->frame_length - OV5693_MAX_COARSE_DIFF;
const struct sensor_mode_properties *mode =
&s_data->sensor_props.sensor_modes[s_data->mode_prop_idx];
ov5693_reg reg_list[3];
int err;
u32 coarse_time;
int i;
if (!priv->group_hold_prev)
ov5693_set_group_hold(tc_dev, 1);
coarse_time = (u32)(((mode->signal_properties.pixel_clock.val*val)
/mode->image_properties.line_length)/
mode->control_properties.exposure_factor);
if (coarse_time < OV5693_MIN_EXPOSURE_COARSE)
coarse_time = OV5693_MIN_EXPOSURE_COARSE;
else if (coarse_time > max_coarse_time)
coarse_time = max_coarse_time;
ov5693_get_coarse_time_regs(reg_list, coarse_time);
dev_dbg(dev, "%s: val: %d\n", __func__, coarse_time);
for (i = 0; i < 3; i++) {
err = ov5693_write_reg(s_data, reg_list[i].addr,
reg_list[i].val);
if (err)
goto fail;
}
return 0;
fail:
dev_dbg(dev, "%s: COARSE_TIME control error\n", __func__);
return err;
}
static int ov5693_set_exposure_short(struct tegracam_device *tc_dev, s64 val)
{
struct camera_common_data *s_data = tc_dev->s_data;
struct device *dev = tc_dev->dev;
struct ov5693 *priv = tc_dev->priv;
const struct sensor_mode_properties *mode =
&s_data->sensor_props.sensor_modes[s_data->mode_prop_idx];
ov5693_reg reg_list[3];
int err;
struct v4l2_control hdr_control;
int hdr_en;
u32 coarse_time_short;
int i;
if (!priv->group_hold_prev)
ov5693_set_group_hold(tc_dev, 1);
/* check hdr enable ctrl */
hdr_control.id = TEGRA_CAMERA_CID_HDR_EN;
err = camera_common_g_ctrl(s_data, &hdr_control);
if (err < 0) {
dev_err(dev, "could not find device ctrl.\n");
return err;
}
hdr_en = switch_ctrl_qmenu[hdr_control.value];
if (hdr_en == SWITCH_OFF)
return 0;
coarse_time_short = (u32)(((mode->signal_properties.pixel_clock.val*val)
/mode->image_properties.line_length)
/mode->control_properties.exposure_factor);
ov5693_get_coarse_time_short_regs(reg_list, coarse_time_short);
dev_dbg(dev, "%s: val: %d\n", __func__, coarse_time_short);
for (i = 0; i < 3; i++) {
err = ov5693_write_reg(s_data, reg_list[i].addr,
reg_list[i].val);
if (err)
goto fail;
}
return 0;
fail:
dev_dbg(dev, "%s: COARSE_TIME_SHORT control error\n", __func__);
return err;
}
MODULE_DEVICE_TABLE(of, ov5693_of_match);
static struct camera_common_pdata *ov5693_parse_dt(struct tegracam_device
*tc_dev)
{
struct device *dev = tc_dev->dev;
struct device_node *node = dev->of_node;
struct camera_common_pdata *board_priv_pdata;
const struct of_device_id *match;
int err;
if (!node)
return NULL;
match = of_match_device(ov5693_of_match, dev);
if (!match) {
dev_err(dev, "Failed to find matching dt id\n");
return NULL;
}
board_priv_pdata = devm_kzalloc(dev,
sizeof(*board_priv_pdata), GFP_KERNEL);
if (!board_priv_pdata)
return NULL;
err = camera_common_parse_clocks(dev,
board_priv_pdata);
if (err) {
dev_err(dev, "Failed to find clocks\n");
}
return board_priv_pdata;
}
static int ov5693_set_mode(struct tegracam_device *tc_dev)
{
struct ov5693 *priv = (struct ov5693 *)tegracam_get_privdata(tc_dev);
struct camera_common_data *s_data = tc_dev->s_data;
int err;
err = ov5693_write_table(priv, mode_table[s_data->mode_prop_idx]);
if (err)
return err;
return 0;
}
static int ov5693_start_streaming(struct tegracam_device *tc_dev)
{
struct ov5693 *priv = (struct ov5693 *)tegracam_get_privdata(tc_dev);
mutex_lock(&priv->streaming_lock);
priv->streaming = true;
mutex_unlock(&priv->streaming_lock);
return 0;
}
static int ov5693_stop_streaming(struct tegracam_device *tc_dev)
{
struct ov5693 *priv = (struct ov5693 *)tegracam_get_privdata(tc_dev);
u32 frame_time;
mutex_lock(&priv->streaming_lock);
priv->streaming = false;
mutex_unlock(&priv->streaming_lock);
/*
* Wait for one frame to make sure sensor is set to
* software standby in V-blank
*
* frame_time = frame length rows * Tline
* Tline = line length / pixel clock (in MHz)
*/
frame_time = priv->frame_length *
OV5693_DEFAULT_LINE_LENGTH / OV5693_DEFAULT_PIXEL_CLOCK;
usleep_range(frame_time, frame_time + 1000);
return 0;
}
static struct camera_common_sensor_ops ov5693_common_ops = {
.numfrmfmts = ARRAY_SIZE(ov5693_frmfmt),
.frmfmt_table = ov5693_frmfmt,
.power_on = ov5693_power_on,
.power_off = ov5693_power_off,
.write_reg = ov5693_write_reg,
.read_reg = ov5693_read_reg,
.parse_dt = ov5693_parse_dt,
.power_get = ov5693_power_get,
.power_put = ov5693_power_put,
.set_mode = ov5693_set_mode,
.start_streaming = ov5693_start_streaming,
.stop_streaming = ov5693_stop_streaming,
};
static struct tegracam_ctrl_ops ov5693_ctrl_ops = {
.numctrls = ARRAY_SIZE(ctrl_cid_list),
.ctrl_cid_list = ctrl_cid_list,
.set_gain = ov5693_set_gain,
.set_exposure = ov5693_set_exposure,
.set_exposure_short = ov5693_set_exposure_short,
.set_frame_rate = ov5693_set_frame_rate,
.set_group_hold = ov5693_set_group_hold,
};
static int ov5693_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
dev_dbg(&client->dev, "%s:\n", __func__);
return 0;
}
static const struct v4l2_subdev_internal_ops ov5693_subdev_internal_ops = {
.open = ov5693_open,
};
static int ov5693_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct device *dev = &client->dev;
struct device_node *node = client->dev.of_node;
struct tegracam_device *tc_dev;
struct ov5693 *priv;
int err;
const struct of_device_id *match;
dev_info(dev, "probing v4l2 sensor.\n");
match = of_match_device(ov5693_of_match, dev);
if (!match) {
dev_err(dev, "No device match found\n");
return -ENODEV;
}
if (!IS_ENABLED(CONFIG_OF) || !node)
return -EINVAL;
priv = devm_kzalloc(dev,
sizeof(struct ov5693), GFP_KERNEL);
if (!priv)
return -ENOMEM;
tc_dev = devm_kzalloc(dev,
sizeof(struct tegracam_device), GFP_KERNEL);
if (!tc_dev)
return -ENOMEM;
priv->i2c_client = tc_dev->client = client;
tc_dev->dev = dev;
strncpy(tc_dev->name, "ov5693", sizeof(tc_dev->name));
tc_dev->dev_regmap_config = &ov5693_regmap_config;
tc_dev->sensor_ops = &ov5693_common_ops;
tc_dev->v4l2sd_internal_ops = &ov5693_subdev_internal_ops;
tc_dev->tcctrl_ops = &ov5693_ctrl_ops;
err = tegracam_device_register(tc_dev);
if (err) {
dev_err(dev, "tegra camera driver registration failed\n");
return err;
}
priv->tc_dev = tc_dev;
priv->s_data = tc_dev->s_data;
priv->subdev = &tc_dev->s_data->subdev;
tegracam_set_privdata(tc_dev, (void *)priv);
mutex_init(&priv->streaming_lock);
err = tegracam_v4l2subdev_register(tc_dev, true);
if (err) {
dev_err(dev, "tegra camera subdev registration failed\n");
return err;
}
dev_dbg(dev, "Detected OV5693 sensor\n");
return 0;
}
static int
ov5693_remove(struct i2c_client *client)
{
struct camera_common_data *s_data = to_camera_common_data(&client->dev);
struct ov5693 *priv = (struct ov5693 *)s_data->priv;
tegracam_v4l2subdev_unregister(priv->tc_dev);
ov5693_power_put(priv->tc_dev);
tegracam_device_unregister(priv->tc_dev);
mutex_destroy(&priv->streaming_lock);
return 0;
}
static const struct i2c_device_id ov5693_id[] = {
{ "ov5693", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, ov5693_id);
static struct i2c_driver ov5693_i2c_driver = {
.driver = {
.name = "ov5693",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(ov5693_of_match),
},
.probe = ov5693_probe,
.remove = ov5693_remove,
.id_table = ov5693_id,
};
module_i2c_driver(ov5693_i2c_driver);
MODULE_DESCRIPTION("Media Controller driver for OmniVision OV5693");
MODULE_AUTHOR("arb");
MODULE_LICENSE("GPL v2");
Here is ov5693_mode_tbls.h, edited to match the desired input stream; basically, I changed it to a Full HD 60 fps mode.
/*
* ov5693_mode_tbls.h - ov5693 sensor mode tables
*
* Copyright (c) 2015-2019, NVIDIA CORPORATION, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __OV5693_TABLES__
#define __OV5693_TABLES__
#include <media/camera_common.h>
#define OV5693_TABLE_WAIT_MS 0
#define OV5693_TABLE_END 1
#define OV5693_MAX_RETRIES 3
#define OV5693_WAIT_MS 10
#define ENABLE_EXTRA_MODES 0
#define ov5693_reg struct reg_8
static const ov5693_reg ov5693_start[] = {
{0x0100, 0x01}, /* mode select streaming on */
{OV5693_TABLE_END, 0x00}
};
static const ov5693_reg ov5693_stop[] = {
{0x0100, 0x00}, /* mode select streaming off */
{OV5693_TABLE_END, 0x00}
};
static const ov5693_reg tp_colorbars[] = {
{0x0600, 0x00},
{0x0601, 0x02},
{OV5693_TABLE_WAIT_MS, OV5693_WAIT_MS},
{OV5693_TABLE_END, 0x00}
};
static const ov5693_reg mode_1920x1080[] = {
{OV5693_TABLE_WAIT_MS, OV5693_WAIT_MS},
{OV5693_TABLE_END, 0x0000}
};
enum {
OV5693_MODE_1920X1080,
OV5693_MODE_START_STREAM,
OV5693_MODE_STOP_STREAM,
OV5693_MODE_TEST_PATTERN
};
static const ov5693_reg *mode_table[] = {
[OV5693_MODE_1920X1080] = mode_1920x1080,
[OV5693_MODE_START_STREAM] = ov5693_start,
[OV5693_MODE_STOP_STREAM] = ov5693_stop,
[OV5693_MODE_TEST_PATTERN] = tp_colorbars,
};
static const int ov5693_60fps[] = {
60,
};
static const struct camera_common_frmfmt ov5693_frmfmt[] = {
{{1920, 1080}, ov5693_60fps, 1, 0, OV5693_MODE_1920X1080},
};
#endif /* __OV5693_TABLES__ */
After compiling and flashing the kernel and DTB, the port binding works: the driver loads correctly and the formats are reported as expected. These are the commands I ran to check.
- Port binding
sudo media-ctl -p -d /dev/media0
Media controller API version 0.1.0
Media device information
  driver          tegra194-vi5
  model           NVIDIA Tegra Video Input Device
  serial
  bus info
  hw revision     0x3
  driver version  0.0.0
Device topology
- entity 1: ov5693 9-0032 (1 pad, 1 link)
    type V4L2 subdev subtype Sensor flags 0
    device node name /dev/v4l-subdev0
    pad0: Source
      [fmt:YUYV8_1X16/1920x1080 field:none colorspace:srgb]
      → "15a00000.nvcsi--2":0 [ENABLED]
- entity 3: 15a00000.nvcsi--2 (2 pads, 2 links)
    type V4L2 subdev subtype Unknown flags 0
    device node name /dev/v4l-subdev1
    pad0: Sink
      ← "ov5693 9-0032":0 [ENABLED]
    pad1: Source
      → "vi-output, ov5693 9-0032":0 [ENABLED]
- entity 6: vi-output, ov5693 9-0032 (1 pad, 1 link)
    type Node subtype V4L flags 0
    device node name /dev/video0
    pad0: Sink
      ← "15a00000.nvcsi--2":1 [ENABLED]
- entity 18: ov5693 2-0036 (1 pad, 1 link)
    type V4L2 subdev subtype Sensor flags 0
    device node name /dev/v4l-subdev2
    pad0: Source
      [fmt:YUYV8_1X16/1920x1080 field:none colorspace:srgb]
      → "15a00000.nvcsi--1":0 [ENABLED]
- entity 20: 15a00000.nvcsi--1 (2 pads, 2 links)
    type V4L2 subdev subtype Unknown flags 0
    device node name /dev/v4l-subdev3
    pad0: Sink
      ← "ov5693 2-0036":0 [ENABLED]
    pad1: Source
      → "vi-output, ov5693 2-0036":0 [ENABLED]
- entity 23: vi-output, ov5693 2-0036 (1 pad, 1 link)
    type Node subtype V4L flags 0
    device node name /dev/video1
    pad0: Sink
      ← "15a00000.nvcsi--1":1 [ENABLED]
- List formats
v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)
	Index       : 1
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)
v4l2-ctl -d /dev/video1 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)
	Index       : 1
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)
- dmesg dump at bootup
dmesg | grep ov5693
[ 2.198376] ov5693 2-0036: probing v4l2 sensor.
[ 2.198519] ov5693 2-0036: tegracam sensor driver:ov5693_v2.0.6
[ 2.198616] ov5693 9-0032: probing v4l2 sensor.
[ 2.198766] ov5693 9-0032: tegracam sensor driver:ov5693_v2.0.6
[ 3.287888] tegra194-vi5 15c10000.vi: subdev ov5693 9-0032 bound
[ 3.288601] tegra194-vi5 15c10000.vi: subdev ov5693 2-0036 bound
[ 5.566645] Could not create tracefs 'ov5693_s_stream' directory
[ 5.566859] ov5693: module is already loaded
[ 5.613959] Could not create tracefs 'ov5693_s_stream' directory
[ 5.614165] ov5693: module is already loaded
After checking all this, I tried to capture an image but could not. The FPGA is already delivering video; I verified the signal with an oscilloscope. This is the pipeline I am using (only CSI0 for the moment):
- gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUYV,width=1920,height=1080,framerate=60/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink
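As a side note for anyone debugging a similar setup: before involving GStreamer it is usually simpler to try a raw capture with v4l2-ctl, which avoids caps negotiation entirely. On Jetson, setting bypass_mode=0 keeps the capture on the plain V4L2 path instead of routing through the ISP. The frame count and output path below are just examples:

```shell
# Raw capture test on CSI0, bypassing the ISP (bypass_mode=0).
# If the fps counter advances, the VI/NVCSI path is working and the
# problem is in the GStreamer caps; if it times out, the failure is
# below the V4L2 layer.
v4l2-ctl -d /dev/video0 \
  --set-fmt-video=width=1920,height=1080,pixelformat=YUYV \
  --set-ctrl bypass_mode=0 \
  --stream-mmap --stream-count=100 --stream-to=/tmp/test.raw
```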
I enabled the trace log, and this is what I am currently seeing:
log.txt (315.0 KB)
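For reference, this is the usual sequence for enabling the RTCPU/camera trace events on Jetson (assuming stock L4T debugfs paths; run as root):

```shell
# Enable camera RTCPU tracing (standard Jetson camera-debug settings)
echo 1 > /sys/kernel/debug/tracing/tracing_on
echo 30720 > /sys/kernel/debug/tracing/buffer_size_kb
echo 1 > /sys/kernel/debug/tracing/events/tegra_rtcpu/enable
echo 1 > /sys/kernel/debug/tracing/events/freertos/enable
echo 2 > /sys/kernel/debug/camrtc/log-level
echo > /sys/kernel/debug/tracing/trace    # clear old entries
# ... attempt a capture, then dump the trace:
cat /sys/kernel/debug/tracing/trace
```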
Also, here is the dump from dmesg:
dmesg | grep vi5
[ 3.281020] tegra194-vi5 15c10000.vi: using default number of vi channels, 36
[ 3.283632] tegra194-vi5 15c10000.vi: initialized
[ 3.287888] tegra194-vi5 15c10000.vi: subdev ov5693 9-0032 bound
[ 3.287918] tegra194-vi5 15c10000.vi: subdev 15a00000.nvcsi--2 bound
[ 3.288601] tegra194-vi5 15c10000.vi: subdev ov5693 2-0036 bound
[ 3.288608] tegra194-vi5 15c10000.vi: subdev 15a00000.nvcsi--1 bound
[ 136.395887] tegra194-vi5 15c10000.vi: corr_err: discarding frame 0, flags: 6, err_data 512
[ 136.412504] tegra194-vi5 15c10000.vi: corr_err: discarding frame 0, flags: 96, err_data 4194400
[ 136.413146] [RCE] vi5_hwinit: firmware CL2018101701 protocol version 2.2
[ 136.429198] tegra194-vi5 15c10000.vi: corr_err: discarding frame 0, flags: 96, err_data 4194400
[ 136.445859] tegra194-vi5 15c10000.vi: corr_err: discarding frame 0, flags: 96, err_data 4194400
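One thing that can at least be ruled out is raw link bandwidth. Using the stream parameters above (1080p60, YUV 4:2:2 at 16 bits/pixel, 2 lanes at 1200 Mbps), the configured lane rate comfortably covers the active payload, so the discarded frames look more like a CSI/framing issue than a throughput problem:

```shell
# Active payload for 1080p60 YUV 4:2:2 (16 bits/pixel),
# compared against 2 lanes x 1200 Mbps.
# Blanking is ignored here, so the real requirement is somewhat higher.
payload_bps=$((1920 * 1080 * 60 * 16))
link_bps=$((2 * 1200 * 1000 * 1000))
echo "payload: ${payload_bps} bps, link: ${link_bps} bps"
if [ "$payload_bps" -lt "$link_bps" ]; then
  echo "link rate sufficient"
fi
```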
I also tried boosting the clocks, but it made no difference. Any ideas on how to continue debugging? At the moment I cannot think of other approaches.
Update: I am attaching the beginning of the trace log, as it was missing from the log I attached first.
fullLog.txt (58.2 KB)
This file covers the start of the trace; the log attached earlier shows what follows it.