nvcamerasrc source code request to support a sensor other than the ov5693.

I’d like to support another sensor on the TX1. The only sensor supported right now is the ov5693. The TX1 does not yet expose a /dev/videoX V4L2 (soc-camera) interface for it, but there is a GStreamer plugin, ‘nvcamerasrc’, that currently works.
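For context, here is the kind of pipeline that plugin is typically used in. This is only a sketch: the element names, caps string, and resolution below are assumptions for illustration, so check gst-inspect-1.0 on your L4T release for the actual properties.

```python
# Build an illustrative gst-launch-1.0 command line for nvcamerasrc.
# The caps and sink element are assumptions, not a verified configuration.
pipeline = " ! ".join([
    "nvcamerasrc",                # camera source (sensor + capture path)
    "'video/x-raw(memory:NVMM), width=1920, height=1080, format=I420'",
    "nvoverlaysink",              # on-screen display sink
])
print("gst-launch-1.0 " + pipeline)
```

Running the printed command on the board (not this script) is what would actually start the capture.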

I have looked at the ov5693.c driver, but it mostly configures the sensor registers over I2C according to parameters requested via ioctl; it does not configure the VI (Video Input) hardware or actually retrieve the frame data.

I imagine that the ‘nvcamerasrc’ plugin is configuring the sensor and the VI separately, and as far as I can tell, there is no single example of how to do this.

For that reason, could NVIDIA please provide the source code for this plugin? At the moment it seems quite difficult to figure out which other drivers need to be interfaced with, and in what way, in order to support any other sensor.

Or perhaps a member of this community knows how to do this?

In an L4T update dropping soon (R23.2, in a few days), a V4L2 interface will be provided with the kernel source to promote ease of use.

For gstreamer, perhaps the released gstomx sources will help? [URL]http://developer.download.nvidia.com/embedded/L4T/r23_Release_v1.0/source/gstomx1_src.tbz2[/URL]

Hi Dusty

I need some help.

The V4L2 software implementation bypasses the Tegra ISP, and is suitable for use when Tegra ISP support is not required, such as with sensors or input devices that provide data in YUV format.

The above statement is from the Jetson TX1 documentation.
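To illustrate why a YUV-output sensor can skip the ISP: such a sensor delivers data that is already debayered and color-converted, so the driver only has to unpack it. A minimal sketch of decoding one YUYV (YUY2) macropixel, assuming BT.601 full-range conversion math:

```python
def yuyv_to_rgb(macropixel):
    """Convert one 4-byte YUYV macropixel (two pixels sharing chroma)
    into two RGB triples. BT.601 full-range coefficients assumed."""
    y0, u, y1, v = macropixel

    def conv(y):
        r = y + 1.402 * (v - 128)
        g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
        b = y + 1.772 * (u - 128)
        return tuple(max(0, min(255, round(c))) for c in (r, g, b))

    return conv(y0), conv(y1)

# Chroma at 128 means zero color offset, so the result is neutral gray.
print(yuyv_to_rgb(bytes([128, 128, 128, 128])))
```

A Bayer-output sensor, by contrast, needs demosaicing and color correction, which is the work the Tegra ISP (or a software equivalent) performs.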

But I would like to know whether the normal V4L2 implementation in Linux supports the ISP or not.

And could you also help me regarding “How the data flows from ov5693 sensor to display through v4l2?”

I’d like to know how to capture a raw image using nvcamerasrc.

I want to compare the bit packing of a V4L2 raw image with that of an nvcamerasrc raw image.
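For reference when comparing the two buffers, here are two common 10-bit raw layouts. This is only a sketch of the layouts themselves; which one each capture path actually emits on the TX1 is exactly what you would be verifying.

```python
import struct

def pack_raw10(pixels):
    """Pack 10-bit pixels MIPI CSI-2 RAW10 style: 4 pixels -> 5 bytes
    (four MSB bytes, then one byte carrying the four 2-bit LSBs)."""
    assert len(pixels) % 4 == 0
    out = bytearray()
    for i in range(0, len(pixels), 4):
        group = pixels[i:i + 4]
        out += bytes(p >> 2 for p in group)      # 8 MSBs of each pixel
        lsb = 0
        for j, p in enumerate(group):
            lsb |= (p & 0x3) << (2 * j)          # 2 LSBs, pixel 0 in bits 0-1
        out.append(lsb)
    return bytes(out)

def unpack_v4l2_10bit(buf):
    """Unpack V4L2-style 'unpacked' 10-bit data: one pixel per
    little-endian 16-bit word, value in the low 10 bits."""
    return [w & 0x3FF for (w,) in struct.iter_unpack("<H", buf)]

pixels = [0, 1, 512, 1023]
packed = pack_raw10(pixels)                                      # 5 bytes
unpacked = b"".join(struct.pack("<H", p) for p in pixels)        # 8 bytes
print(len(packed), len(unpacked), unpack_v4l2_10bit(unpacked))
```

Comparing buffer sizes for the same resolution (1.25 vs 2 bytes per pixel here) is a quick first check of which packing a given path uses.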