We have a video input board where the ‘cameras’ i.e. video input streams
are not controlled directly by a single i2c controller (there are several
spi and i2c devices and intermediate circuitry). The video eventually
ends up at the csi input ports on the TX2 module. I am looking at the
“Sensor Driver Programming Guide” and the example ov4693.c driver and it
all seems very tightly tied to i2c devices. Many of the calls (e.g.
v4l2_i2c_subdev_init() and camera_common_parse_ports()) expect a
struct i2c_client, but since our board is not a single i2c device this
doesn’t really exist. Even struct camera_common_data (camera_common.h) expects
a struct i2c_client.
I’m not sure if I need to search for other registration calls, or provide
a fake i2c client structure. If I have to create a dummy i2c client structure,
it is unclear how many fields need to be filled in.
Is your video input source a CSI device? If so, I2C (called CCI in the CSI spec) is part of the CSI interface, so the device should have it. If not, how does the video end up at the CSI ports?
@cobrien
You can use the reference driver ov5693 as a template but replace the i2c access with SPI, as sketched below.
Of course, you still need to add the SPI device initialization in your sensor driver.
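A minimal sketch of that conversion, assuming a regmap-based driver like the reference ov5693 (the register and value widths are placeholders for your device):

#include <linux/err.h>
#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/regmap.h>

/* Placeholder register layout -- adjust for your device. */
static const struct regmap_config sensor_regmap_config = {
	.reg_bits = 16,
	.val_bits = 8,
};

static int sensor_probe(struct spi_device *spi)
{
	struct regmap *map;

	/* devm_regmap_init_spi() replaces devm_regmap_init_i2c() from
	 * the i2c version; the rest of the register access code stays
	 * bus-agnostic through regmap_read()/regmap_write(). */
	map = devm_regmap_init_spi(spi, &sensor_regmap_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* ... continue with the ov5693-style subdev setup ... */
	return 0;
}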
Again, the problem is that the frame capture board is not controlled by a single spi device either.
What I am facing is that the only subdev registration call I can find is v4l2_i2c_subdev_init().
Right now I create a non-existent i2c device on an existing bus and use that, which gets
me through the _probe function, so I can call v4l2_i2c_subdev_init() and v4l2_async_register_subdev(),
but I’m not seeing the video device yet. I need to look at the device tree entries next.
I’d really like to use the camera framework because it seems to handle the csi frame grabbing and
v4l2 integration automatically provided it’s configured properly in the device tree.
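In outline, the workaround looks something like this (a sketch only; the adapter number and the 0x36 address are arbitrary unused placeholders, and error cleanup is omitted):

#include <linux/i2c.h>
#include <media/v4l2-device.h>
#include <media/v4l2-subdev.h>

static struct v4l2_subdev sd;
static const struct v4l2_subdev_ops fake_sensor_ops = {
	/* ... video/pad ops for the real capture hardware ... */
};

static int fake_sensor_register(void)
{
	struct i2c_adapter *adap;
	struct i2c_client *client;

	/* Borrow an existing bus and claim an address nothing answers
	 * at; the client exists only to satisfy the i2c-centric
	 * camera framework calls. */
	adap = i2c_get_adapter(2);
	if (!adap)
		return -ENODEV;

	client = i2c_new_dummy(adap, 0x36);
	if (!client)
		return -ENODEV;

	v4l2_i2c_subdev_init(&sd, client, &fake_sensor_ops);
	return v4l2_async_register_subdev(&sd);
}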
We faced the same problem. It seems as if the i2c device is only needed for debugging output inside the nvidia driver. The debug functions used within the nvidia driver need a device, so we created a dummy i2c structure containing the necessary fields.
The media framework seems to expect an i2c device, so I created one that referenced
an unused address on one of the i2c ports. The actual video capture and conversion
to CSI is done by several chips and an FPGA controlled by several i2c and spi bus
connections. There are actually four streams captured. All of this is controlled outside
of the i2c device that is nominally connected to the CSI input and VI processing
unit in the device tree.
In any case, I have /dev/video0 to /dev/video3, and media-ctl -p shows the proper
linkage with the CSI ports. Unfortunately, trying to capture data with v4l2-ctl
still fails.
I think I’m in the same situation with a generated CSI video stream input, but still cannot get /dev/video* for the video source.
Could you share more details about how you got the /dev/video* nodes, i.e. how to add video configurations to the existing drivers and
device tree entries to make /dev/video* available?
We are still struggling with this. The CSI converter we use can generate a test pattern
that we can capture, but actual video frames are failing to be captured by the CSI
subsystem. Our understanding is that the CSI input section needs to have the exact frame
size (width and height) or it will fail to capture. The relationship between the active
video and any extra blanking lines/columns is what we are investigating now.
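For reference, we test captures with something like the following v4l2-ctl invocation (the UYVY pixel format is an assumption; use whatever your pipeline advertises in v4l2-ctl --all):

v4l2-ctl -d /dev/video0 \
	--set-fmt-video=width=1920,height=1080,pixelformat=UYVY \
	--stream-mmap --stream-count=1 --stream-to=frame.raw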
Another problem was adding the correct frame size entries so that v4l2-ctl --all showed
the proper resolution and video format. Without this, several things failed: either the v4l2
device registration failed (so /dev/video* wasn't created), or the resolution wouldn't be
shown in v4l2-ctl --all. We had to change sensor_common.c:extract_pixel_format() and
add an entry in camera_common.c:camera_common_color_fmts. But again, if you are getting
a /dev/video* and v4l2-ctl --all shows the right formats, you are beyond this.
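For a YUV422 source, the kind of entry needed looks roughly like this (a sketch against the r28-era camera_common.c table; UYVY is just an example, and the struct fields should be verified against your release's camera_common.h):

/* Sketch: an added row in camera_common.c's format table, mapping a
 * YUV422 (UYVY) media bus code to the pixel format exposed at
 * /dev/video*. Field order follows the r28-era
 * struct camera_common_colorfmt { code; colorspace; pix_fmt; }. */
static const struct camera_common_colorfmt camera_common_color_fmts[] = {
	/* ... existing Bayer entries ... */
	{
		MEDIA_BUS_FMT_UYVY8_1X16,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_UYVY,
	},
};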
Another thing to check is that media-ctl -p shows the proper connections between camera,
csi, and vi subdevices.
Hi, Cary.
Thanks for your tips; we can now get the /dev/video0 node. But I am not sure the device tree node parameters are correct.
The MIPI signal is a 1080p, 60 fps, YUV422 8-bit stream. How should I set these parameters in the device tree node?
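For context, the sensor mode node in question looks roughly like the sketch below, assuming r28-era property names and standard 1080p60 timing (2200 x 1125 total, hence pix_clk_hz = 2200 * 1125 * 60 = 148500000); the pixel_t string for YUV422 is an assumption to verify against the Sensor Driver Programming Guide:

mode0 {
	/* Placeholders throughout -- verify names and values against
	 * your L4T release's Sensor Driver Programming Guide. */
	mclk_khz = "24000";
	num_lanes = "4";
	tegra_sinterface = "serial_c";
	discontinuous_clk = "no";
	cil_settletime = "0";

	active_w = "1920";
	active_h = "1080";
	pixel_t = "yuv_uyvy16";		/* assumption: UYVY 4:2:2, 8-bit */
	line_length = "2200";		/* total line length incl. blanking */
	pix_clk_hz = "148500000";	/* 2200 x 1125 x 60 */

	min_framerate = "60";
	max_framerate = "60";
	embedded_metadata_height = "0";
};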
Here are the results of media-ctl -p and v4l2-ctl --all
nvidia@tegra-ubuntu:~$ sudo media-ctl -p
Media controller API version 0.1.0
Media device information
------------------------
driver tegra-vi4
model NVIDIA Tegra Video Input Device
serial
bus info
hw revision 0x3
driver version 0.0.0
Device topology
- entity 1: 150c0000.nvcsi-2 (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev0
pad0: Sink
<- "ov5693 2-0036":0 [ENABLED]
pad1: Source
-> "vi-output, ov5693 2-0036":0 [ENABLED]
- entity 2: ov5693 2-0036 (1 pad, 1 link)
type V4L2 subdev subtype Sensor flags 0
device node name /dev/v4l-subdev1
pad0: Source
[fmt:SRGGB10/1920x1080 field:none]
-> "150c0000.nvcsi-2":0 [ENABLED]
- entity 3: vi-output, ov5693 2-0036 (1 pad, 1 link)
type Node subtype V4L flags 0
device node name /dev/video0
pad0: Sink
<- "150c0000.nvcsi-2":1 [ENABLED]
nvidia@tegra-ubuntu:~$ v4l2-ctl --all
Driver Info (not using libv4l2):
Driver name : tegra-video
Card type : vi-output, ov5693 2-0036
Bus info : platform:15700000.vi:2
Driver version: 4.4.38
Capabilities : 0x84200001
Video Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 2: no power)
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'RG10'
Field : None
Bytes per Line : 3840
Size Image : 4147200
Colorspace : sRGB
Transfer Function : Default
YCbCr Encoding : Default
Quantization : Default
Flags :
Camera Controls
frame_length (int) : min=0 max=32767 step=1 default=1984 value=1984 flags=slider
coarse_time (int) : min=2 max=32761 step=1 default=1978 value=1978 flags=slider
coarse_time_short (int) : min=2 max=32761 step=1 default=1978 value=1978 flags=slider
group_hold (intmenu): min=0 max=1 default=0 value=0
hdr_enable (intmenu): min=0 max=1 default=0 value=0
otp_data (str) : min=0 max=1024 step=2 value='a0b9e7ecc1ffffffcc2a0f00c0ffffff20c22201c0ffffff0000000000000000c0b9e7ecc1ffffffac480f00c0ffffff1800000000000000000000000000000060bae7ecc1ffffff504c0f00c0ffffff184c0f00c0ffffff40effd00c0ffffff00000000000000000077b2eac1ffffff880803e6c1ffffff580803e6c1ffffffc80e03e6c1ffffffeaffffff000000002024b4ebc1ffffff18a202e6c1ffffffd8d73e01c0ffffff000000000000000070bae7ec00000000400000000000000030bbe7ecc1ffffff30bbe7ecc1fffffff0bae7ecc1ffffffc8ffffff0000000090bae7ecc1ffffff28121700c0ffffff30bbe7ecc1ffffff30bbe7ecc1fffffff0bae7ecc1ffffffc8ffffff0000000030bbe7ecc1ffffffc4fd7600c0ffffff01000000000000000077b2eac1ffffff30bbe7ecc1ffffff30bbe7ecc1fffffff0bae7ecc1ffffffc8ffffff0000000030bbe7ecc1ffffff30bbe7ecc1fffffff0bae7ecc1ffffffc8ffffff0000000060bbe7ecc1ffffffb0fd7600c0ffffff0100000000000000000000000000000000000000000000001500000000000000bc77b2eac1ffffff000000000000000040bbe7ecc1ffffff38097800c0ffffff80bbe7ecc1ffffff78087700c0ffffff2024b4ebc1ffffff180803e6c1ffffff580803e6c1ffffff0020c100c0ffffff88c92c01c0ffffff0024b4ebc1ffffff' flags=read-only, has-payload
fuse_id (str) : min=0 max=16 step=2 value='4cdc7500c0ffffff' flags=read-only, has-payload
gain (int) : min=256 max=4096 step=1 default=256 value=256 flags=slider
bypass_mode (intmenu): min=0 max=1 default=0 value=0
override_enable (intmenu): min=0 max=1 default=0 value=0
height_align (int) : min=1 max=16 step=1 default=1 value=1
size_align (intmenu): min=0 max=2 default=0 value=0
write_isp_format (int) : min=1 max=1 step=1 default=1 value=1
I am following up on this thread for my own purposes. Did you get this working with 2 different CSI streams (i.e. streams that map to 2 different device nodes)? I have a similar situation where we have a pre-configured data stream coming from an FPGA and I simply want to be able to capture data over CSI using V4L2. Is there any way you could share the snippets of your DTSI files that made this work? Did you use the OV5693 DTSI file (e3326 board), and were you able to extend this to support multiple input streams or “cameras”? And what changes do you need to make in the device driver to make sure that it doesn't attempt to interface over i2c? My current understanding of the multi-stream device tree shape is sketched below, but I have not verified it.
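A sketch of the general two-channel shape, based on the e3326/ov5693-style vi and nvcsi nodes (all labels, port-index values, and lane counts are placeholders; the real files are in the L4T kernel device tree sources):

vi@15700000 {
	num-channels = <2>;		/* one channel per /dev/video node */
	ports {
		#address-cells = <1>;
		#size-cells = <0>;
		port@0 {
			reg = <0>;
			vi_in0: endpoint {
				port-index = <0>;	/* placeholder CSI brick */
				bus-width = <4>;	/* placeholder lane count */
				remote-endpoint = <&csi_out0>;
			};
		};
		port@1 {
			reg = <1>;
			vi_in1: endpoint {
				port-index = <2>;	/* placeholder CSI brick */
				bus-width = <4>;
				remote-endpoint = <&csi_out1>;
			};
		};
	};
};

/* nvcsi@150c0000 similarly needs num-channels = <2> and one channel@N
 * per stream, each linking sensor endpoint -> nvcsi -> vi endpoint.
 * Each completed chain should yield its own /dev/video node. */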