I have a working custom camera (not a MIPI sensor, but the output of a MIPI IP Core) connected to the Jetson TX2 camera expansion header. I use 4 lanes of data. The pixel format is RAW10, the framerate is 50 fps, and the resolution is 1920x1080.
I’ve built a device tree and a driver for the camera based on the Sensor Driver Programming Guide, and I got it loaded and recognized (a /dev/video0 entry appears). However, when I try to stream video (nvgstcapture-1.0), I get the following error:
[ +1,504481] fence timeout on [ffffffc18e485900] after 1500ms
[ +0,000006] fence timeout on [ffffffc18e485180] after 1500ms
[ +0,000002] name=[nvhost_sync:44], current value=0 waiting value=1
[ +0,000004] name=[nvhost_sync:31], current value=0 waiting value=1
[ +0,000001] ---- mlocks ----
[ +0,000004] fence timeout on [ffffffc18e485000] after 1500ms
[ +0,000001] ---- mlocks ----
[ +0,000004] name=[nvhost_sync:34], current value=0 waiting value=1
[ +0,000002] 8: locked by channel 7
[ +0,000001] ---- mlocks ----
[ +0,000003] 8: locked by channel 7
[ +0,000007] 8: locked by channel 7
[ +0,000007] ---- syncpts ----
[ +0,000004] ---- syncpts ----
The pipeline I use is the following:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)50/1' ! nvoverlaysink -e
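As a sanity check I also plan to bypass the ISP (nvarguscamerasrc) entirely and read RAW10 frames straight from the VI/CSI path over V4L2, to see whether the capture hardware is receiving anything at all. A dry-run sketch of the command I'd use (RG10 assumes a 10-bit RGGB Bayer phase, and bypass_mode is the Tegra-specific V4L2 control; both may differ on your setup):

```shell
# Dry-run sketch: build the capture command and print it.
# Run the echoed command on the TX2 itself; it grabs 100 RAW10 frames
# directly from /dev/video0, without the ISP in the loop.
CMD="v4l2-ctl -d /dev/video0 \
  --set-fmt-video=width=1920,height=1080,pixelformat=RG10 \
  --set-ctrl bypass_mode=0 \
  --stream-mmap --stream-count=100 --stream-to=frames.raw"
echo "$CMD"
```

If this captures frames but nvarguscamerasrc still times out, the problem is likely in the device-tree mode properties the ISP side consumes rather than in the CSI link itself.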
Anyway, many of the properties that must be defined in the device tree and driver are not relevant in my case (e.g. power functions, sensor dimensions, mclk …), since the camera is already booted and transmitting data over the CSI lanes. Is there any simpler way to build a driver that captures data in the specified format?
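For reference, this is roughly how my mode0 node looks, trimmed to the properties the capture path seems to actually consume. All the values below are illustrative assumptions (interface name, pixel phase, clock figures), not necessarily my real ones:

```
mode0 { /* 1920x1080 @ 50 fps, RAW10, 4 lanes -- illustrative values */
	mclk_khz = "24000";            /* meaningless for an IP core, but expected */
	num_lanes = "4";
	tegra_sinterface = "serial_a"; /* depends on which CSI brick is wired */
	discontinuous_clk = "no";
	cil_settletime = "0";          /* 0 = auto-calibrate THS-SETTLE */
	active_w = "1920";
	active_h = "1080";
	pixel_t = "bayer_rggb";        /* assumed 10-bit RGGB phase; must match the core */
	line_length = "2200";          /* total line length incl. blanking -- a guess */
	pix_clk_hz = "182400000";      /* assumed; if set below the real pixel rate,
	                                  captures can time out */
};
```

I'd be happy to drop everything here that the VI driver does not strictly need.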