User Space V4L driver for TX1 and other MIPI/CSI cameras

A user space V4L driver would save users the effort of patching and building the kernel for TX1 and other MIPI/CSI cameras.

E.g., camera initialization in the kernel driver handles all register reads/writes through I2C, but a user may want to access more registers than the kernel driver already implements.

If a user space V4L driver accessed the I2C bus and handled all register reads/writes, it would be easier for users to expand driver functions and develop drivers for other cameras.

The Raspberry Pi camera has a user space V4L driver and works well, but it's closed source.

How difficult would it be to convert the existing TX1 V4L driver to a user space V4L driver?

Thanks in advance.

yahoo2016
The same applies to the upcoming R24.1 camera V4L2 kernel driver model: it sits at the kernel layer with full source code access, so developers like you can make changes on your end. Running it in user space would depend on the execution sequence of the entire camera software stack to cover the camera data path, and you won't be able to access that code.

"Raspberry Pi camera has user space V4L driver and works well, but it’s closed sources.
=> That’s why we’d like to make it a kernel driver so you have full source code access.

"How difficult to convert existing TX1 V4L driver to user space V4L driver?
=> You have full access to the code so should not be. The issue, as I explained earlier, it’s the data path that you could not control other user space software component.

Thanks for the response. With the kernel V4L driver, can the registers of the image sensor chip be read and written by the user?

There is also a helpful post on this topic:
https://devtalk.nvidia.com/default/topic/934354/jetson-tx1/typical-approaches-to-test-camera-be-only-applicable-for-l4t-r23-2-jetson-tx1-release-/

I assume your ‘user’ means user space. In fact, you can always program registers from user space. However, that's not what our camera software stack is designed for: all the image sensor register programming for initialization and setup is done on the kernel side.
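For example, a minimal sketch of what such user space register access could look like through the Linux i2c-dev interface (the bus /dev/i2c-0, the 7-bit slave address 0x36, the 16-bit register addressing, and the chip-ID register 0x300A are all hypothetical and board-specific):

```c
/* Minimal sketch of user-space sensor register access via i2c-dev.
 * Assumptions (board-specific, adjust for your setup):
 *   - sensor is on /dev/i2c-0 at 7-bit address 0x36
 *   - registers use 16-bit addresses and 8-bit values
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

#define I2C_DEV     "/dev/i2c-0"  /* hypothetical bus */
#define SENSOR_ADDR 0x36          /* hypothetical slave address */

static int reg_read(int fd, uint16_t reg, uint8_t *val)
{
    uint8_t addr[2] = { reg >> 8, reg & 0xff };
    struct i2c_msg msgs[2] = {
        /* write the 16-bit register address, then read one byte back */
        { .addr = SENSOR_ADDR, .flags = 0,        .len = 2, .buf = addr },
        { .addr = SENSOR_ADDR, .flags = I2C_M_RD, .len = 1, .buf = val  },
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };
    return ioctl(fd, I2C_RDWR, &xfer) < 0 ? -1 : 0;
}

static int reg_write(int fd, uint16_t reg, uint8_t val)
{
    uint8_t buf[3] = { reg >> 8, reg & 0xff, val };
    struct i2c_msg msg = { .addr = SENSOR_ADDR, .flags = 0, .len = 3, .buf = buf };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = &msg, .nmsgs = 1 };
    return ioctl(fd, I2C_RDWR, &xfer) < 0 ? -1 : 0;
}

int main(void)
{
    int fd = open(I2C_DEV, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t id = 0;
    if (reg_read(fd, 0x300A, &id) == 0)  /* hypothetical chip-ID register */
        printf("chip ID high byte: 0x%02x\n", id);

    close(fd);
    return 0;
}
```

Note that if a kernel driver has already claimed the sensor on that bus, user space access can conflict with it, so this approach is mostly useful for bring-up and experimentation.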

Are you looking for a shorter turn-around development cycle using user mode programming instead of kernel mode?

We have been using FPGA grabbers for Aptina image sensors, with which we can fine-tune frame rates, blank pixels, blank lines, analog/digital gains, integration time, and binning/subsampling using APIs in C/C++. Many parameters can be changed in real time. I'm wondering if a kernel mode driver can provide such flexibility. We can also integrate different image sensors without spending days/weeks writing drivers for each sensor.

Good to know, yahoo2016. When using a different image sensor, something has to change to make it work: things like sensor initialization, mode setup, etc. Much of this information and register programming usually comes from the sensor spec, but often, having the sensor spec does not mean you can program the sensor successfully. We work with many sensor vendors, and we usually obtain the sensor initialization and (one) mode setup register settings and code from them.

To enable Jetson developers to write their own sensor drivers, we architected the driver model as a kernel driver, so all the required driver code is available in source form. In the upcoming R24.1 release, there is an OV5693 V4L2 sample sensor driver implementation you can reference. As for changing sensor registers in real time, there is nothing preventing you from writing a little user-mode tool (leveraging IOCTL kernel calls) to do that during the development, debugging, or testing phase. Hope this helps.
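As a rough illustration of such a user-mode tool (a sketch only, not NVIDIA's implementation: it assumes the sensor driver exposes the standard V4L2_CID_GAIN control on /dev/video0; the raw register write additionally requires a kernel built with CONFIG_VIDEO_ADV_DEBUG, root privileges, and a driver that implements the debug register ops; the register address 0x3500 is hypothetical):

```c
/* Sketch of a small user-mode tuning tool using V4L2 ioctls.
 * Assumptions: sensor driver on /dev/video0 exposes V4L2_CID_GAIN;
 * raw register access needs CONFIG_VIDEO_ADV_DEBUG and root.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Change gain on the fly through a standard V4L2 control. */
    struct v4l2_control ctrl = { .id = V4L2_CID_GAIN, .value = 16 };
    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
        perror("VIDIOC_S_CTRL(gain)");

    /* Poke a raw sensor register via the debug interface
     * (availability is kernel- and driver-dependent). */
    struct v4l2_dbg_register dbg;
    memset(&dbg, 0, sizeof(dbg));
    dbg.match.type = V4L2_CHIP_MATCH_SUBDEV;  /* first sub-device = sensor */
    dbg.match.addr = 0;
    dbg.reg = 0x3500;  /* hypothetical exposure register */
    dbg.val = 0x10;
    if (ioctl(fd, VIDIOC_DBG_S_REGISTER, &dbg) < 0)
        perror("VIDIOC_DBG_S_REGISTER");

    close(fd);
    return 0;
}
```

During bring-up, the stock v4l2-ctl utility can do much of the same without writing any code, e.g. `v4l2-ctl --set-ctrl=gain=16`, assuming the driver exposes a control with that name.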

I heard Leopard Imaging is releasing TX1 camera adapters for many of their MIPI cameras. I hope it won't be too difficult for someone to write drivers for those cameras. We are interested in those MIPI cameras for the TX1.