Questions about Camera Driver development

Hi Team,

I am currently working on writing a driver for a custom sensor that sends out data via CSI. The configuration of the device is fixed: as soon as it powers on it sends out CSI data, and it is not controlled via I2C. However, the frame length and height and the framerate can vary depending on other inputs that are not controllable by the kernel module. The data is sent in the RAW8 format and needs to be accessed unprocessed.

For this I am working along this guide: Sensor Software Driver Programming — NVIDIA Jetson Linux Developer Guide documentation

I am currently trying to understand the nv_imx185.c file that is referred to in the guide. As far as I understand, I don't need to implement all the functions that only serve the purpose of configuring the camera (like imx185_power_on / power_off, start_streaming / stop_streaming, set_frame_rate and parse_dt). I only need to implement the probe, open and remove functions along with the necessary structs and variables.
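
Roughly, I picture it like the following sketch (assuming the tegracam framework that nv_imx185.c is built on; all names are invented by me and the exact struct members may differ between L4T releases):

#include <media/camera_common.h>
#include <media/tegracam_core.h>

/* The sensor cannot be configured at all, so every op that would normally
 * talk to it just reports success. */
static int dummy_power_on(struct camera_common_data *s_data)
{
        /* nothing to switch on, but mark the power rail as up for the framework */
        s_data->power->state = SWITCH_ON;
        return 0;
}

static int dummy_power_off(struct camera_common_data *s_data)
{
        s_data->power->state = SWITCH_OFF;
        return 0;
}

static int dummy_power_get(struct tegracam_device *tc_dev) { return 0; }
static int dummy_power_put(struct tegracam_device *tc_dev) { return 0; }
static int dummy_set_mode(struct tegracam_device *tc_dev) { return 0; }
static int dummy_start_streaming(struct tegracam_device *tc_dev) { return 0; }
static int dummy_stop_streaming(struct tegracam_device *tc_dev) { return 0; }

static struct camera_common_sensor_ops dummy_sensor_ops = {
        .power_on        = dummy_power_on,
        .power_off       = dummy_power_off,
        .power_get       = dummy_power_get,
        .power_put       = dummy_power_put,
        .set_mode        = dummy_set_mode,
        .start_streaming = dummy_start_streaming,
        .stop_streaming  = dummy_stop_streaming,
        /* .parse_dt, .frmfmt_table and .numfrmfmts would still have to be filled in */
};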

Is that correct?

From where does V4L2 know how the CSI signal is configured (lanes, format, framerate, clock speed, …)? Do I still need to implement a device tree parser function for this, or is it configured otherwise (for example when calling a v4l2-ctl command)?
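
Purely as an illustration of what I mean (the property name is copied from the imx185 device tree, where the numeric mode values are stored as strings; the helper itself is made up by me): would I be expected to read such values myself in a parse_dt-style function, roughly like this, or does the framework pull them out of the device tree on its own?

#include <linux/kernel.h>
#include <linux/of.h>

/* hypothetical helper: read the "num_lanes" string property of a mode
 * node and convert it to an integer */
static int read_lane_count(struct device_node *mode_node, u32 *num_lanes)
{
        const char *str;
        int err;

        err = of_property_read_string(mode_node, "num_lanes", &str);
        if (err)
                return err;

        return kstrtou32(str, 10, num_lanes);
}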

Thank you and Best regards,
Tim

Hello tteuber,

FYI,
during the kernel initialization stage, camera device registration sets up a video device node in the Linux kernel. Sensor probing only runs once, during the kernel initialization stage of system boot-up.
For a typical camera application running cycle, the driver will power on the sensor, start sensor streaming, send the relevant v4l2 controls, and finally power off the sensor.
A video node (/dev/video0) will be registered with the Linux kernel if all of the above is correct.
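
For reference, a minimal user-space capture sequence looks roughly like the sketch below (plain V4L2, error handling removed, resolution and pixel format are only placeholders); these are the calls that end up invoking the driver's power and streaming ops.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video0", O_RDWR);

        struct v4l2_format fmt = {0};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1920;                      /* placeholder */
        fmt.fmt.pix.height = 1080;                     /* placeholder */
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB8; /* a RAW8 format */
        ioctl(fd, VIDIOC_S_FMT, &fmt);                 /* selects a sensor mode */

        struct v4l2_requestbuffers req = {0};
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        void *data = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, buf.m.offset);

        ioctl(fd, VIDIOC_QBUF, &buf);
        int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);  /* driver powers on / starts streaming */
        ioctl(fd, VIDIOC_DQBUF, &buf);      /* one raw frame is now in 'data' */
        ioctl(fd, VIDIOC_STREAMOFF, &type); /* driver stops streaming / powers off */

        munmap(data, buf.length);
        close(fd);
        return 0;
}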

There might be another issue even after you have the camera device registered with the Linux kernel.
For instance,
when you enable the camera stream via an application, it is by default expected that the camera driver will power on the sensor, start sensor streaming, and send the relevant v4l2 controls.
For your use-case, you should issue a reset of the VI engine so that it can recognize the signaling.

Hello JerryChang,

Thank you for your detailed answer. I want to set up the driver as a loadable kernel module and not have it built directly into the kernel. Would it be a workaround to just leave the power-on, start-streaming, … functions blank, simply returning 0, so that V4L2 thinks the probing was successful? If I did so, in my understanding, a video node would then be created even when there is not even a sensor connected.

Then I would just have to define the correct signal bindings in the device tree and let V4L2 do the initialization in my kernel. Also: for my application it would be nice if I could change the resolution without reloading the driver; is this possible?
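
For example (the sizes are hypothetical and the structure is copied from the frmfmt table in nv_imx185.c), I imagine listing one entry per device tree mode and then selecting between them at runtime via v4l2-ctl / VIDIOC_S_FMT instead of reloading the module:

#include <media/camera_common.h>

static const int sensor_framerates[] = { 30 };  /* placeholder */

/* one entry per mode node in the device tree */
static const struct camera_common_frmfmt sensor_frmfmt[] = {
        { { 256, 128 }, sensor_framerates, 1, 0, 0 },  /* mode0 */
        { { 512, 128 }, sensor_framerates, 1, 0, 1 },  /* mode1 */
        { { 512, 256 }, sensor_framerates, 1, 0, 2 },  /* mode2 */
};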

I should maybe add that the data I am trying to receive is not image data, but comes from another sensor that just happens to send out data in the CSI format. The clock is discontinuous. Different configurations of the sensor result in different “pixel heights” and widths. The configuration is not done by the Jetson, so there is no need to implement this in the driver.

Hello tteuber,

Those processes will be the same when you insert a sensor kernel module.

but…

May I have some more details…
For instance, could you please share an example of what the frame packet looks like?

Hi JerryChang,

I am working with a radar sensor that sends raw ADC data via CSI.

This is an example of one vertical line of a frame; the width of the frame is defined by the number of sampling loops per frame.

Hello tteuber,

Please see also some similar discussion threads for reference.
For example, Topic 256575, Topic 238270, and Topic 200846.

Hi JerryChang,

Thank you for the references. I am checking those out to see if they answer my questions.
