I’m writing a driver to support the OV5647 and have made some progress: I can get an image out. However, I suspect the frame timing is getting out of sync after a few frames, as I’m seeing the following errors:
[ 93.305310] video4linux video0: frame start syncpt timeout!0
[ 93.349157] vi 54080000.vi: tegra_channel_error_status:error 2 frame 3
[ 160.337401] vi 54080000.vi: tegra_channel_error_status:error 24002 frame 0
[ 160.401863] vi 54080000.vi: tegra_channel_error_status:error 24002 frame 1
[ 160.658097] vi 54080000.vi: tegra_channel_error_status:error 24002 frame 2
[ 203.253901] vi 54080000.vi: tegra_channel_error_status:error 20022 frame 0
[ 203.382330] vi 54080000.vi: tegra_channel_error_status:error 24002 frame 1
[ 203.905692] video4linux video0: tegra_channel_capture_done: MW_ACK_DONE syncpoint time out!0
Any help with the error codes would be useful, as would pointers on how to turn on debugging.
It looks like a LINE_WIDTH_LONG_ERROR. Please review the sensor specification and update the active_w setting in your sensor device tree.
You may also download the [Tegra X1 (SoC) Technical Reference Manual] from the Jetson Download Center for reference.
You might also check the register description of VI_CSI_[0…5]_ERROR_STATUS for the error reporting bits.
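To make that register description easier to apply, the hex error values the kernel prints can be decoded bit by bit and each set bit looked up in the VI_CSI_[0…5]_ERROR_STATUS table in the TRM. A small sketch, using the 0x24002 value from the log above (the bit meanings themselves still have to be looked up in the TRM; the active_w value is an assumption for a full-resolution OV5647 mode):

```shell
#!/bin/sh
# Decode which bits are set in a reported VI/CSI error value (printed in hex),
# then look each set bit up in the VI_CSI_[0..5]_ERROR_STATUS description.
err=0x24002
for bit in $(seq 0 31); do
  if [ $(( (err >> bit) & 1 )) -eq 1 ]; then
    echo "bit $bit set"
  fi
done

# For a LINE_WIDTH_LONG_ERROR it is also worth sanity-checking the expected
# line length: for RAW10, the VI expects active_w * 10 / 8 bytes per line.
active_w=2592   # assumed full-resolution width for the OV5647
echo "expected bytes per line: $(( active_w * 10 / 8 ))"
```

For 0x24002 this reports bits 1, 14, and 17 set, and an expected RAW10 line length of 3240 bytes for a 2592-pixel-wide mode.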
thanks
Thanks for the info. I have another question: I’m trying to run a GStreamer pipeline against the driver on /dev/video0 (which is present). The error I’m seeing is from gstnvarguscamerasrc:
jetson-nano@jetsonnano-desktop:~/kernel/git/sources/kernel/kernel-4.9$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=2592, height=1944, format=NV12' ! nvvidconv flip-method=0 ! multifilesink location=test_%d.yuv
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:521 No cameras available
I can’t find the source for gstnvarguscamerasrc.cpp; do you know what the error “No cameras available” means?
According to the OV5647 sensor specification, this sensor outputs 8/10-bit raw RGB data.
Currently, we need the sensor format to be RGBA for encode/decode, hence you’ll only be able to use v4l2src to access the camera sensor.
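As a sketch of the v4l2 path, raw frames can be captured directly from /dev/video0 with v4l2-ctl, bypassing nvarguscamerasrc and the ISP entirely. The RG10 pixel format (10-bit RGGB Bayer) and the frame size here are assumptions; they have to match whatever the driver actually reports:

```shell
# First check what the driver advertises, then capture a few raw frames.
v4l2-ctl -d /dev/video0 --list-formats-ext
v4l2-ctl -d /dev/video0 --set-fmt-video=width=2592,height=1944,pixelformat=RG10 \
         --stream-mmap --stream-count=5 --stream-to=ov5647.raw
```

If this captures clean frames, the sensor driver and CSI/VI path are working and the remaining problem is on the argus/ISP side.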
thanks
What’s confusing is that the IMX219 (the supported camera) also outputs 10-bit raw RGB data, per the definitions in the DTS files, and that works with gstnvarguscamerasrc. From what I understand, the conversion to RGBA would be done between the CSI and the VI (Video Input) unit.
Update:
I think the reason I can’t use gstnvarguscamerasrc is that it needs a custom ISP configuration file, and this is only available to selected ODMs. Am I right?
To clarify: CSI and VI handle sensor initialization, signal processing, and buffer allocation and writing.
There’s the VIC engine to handle color format conversion. You may check the [Tegra X1 (SoC) Technical Reference Manual] for more details.
thanks
My driver is also configured with the same pixel_t = “bayer_rggb” in the DTS, and the TX1 OV5693 implementation has the same value. The only difference I see is the ISP configuration; see the ‘ISP Support’ section of the documentation.
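For reference, the kind of sensor mode node being discussed looks roughly like the sketch below. The values are illustrative for a full-resolution OV5647 mode, not a verified configuration; the property names follow the Tegra sensor device tree convention used by the ov5693 reference driver:

```dts
mode0 {
        mclk_khz  = "24000";       /* assumed sensor input clock */
        num_lanes = "2";           /* OV5647 modules are typically 2-lane */
        pixel_t   = "bayer_rggb";
        active_w  = "2592";        /* must match the line width the sensor really sends */
        active_h  = "1944";
};
```

If active_w here disagrees with the sensor’s actual output width, the VI reports exactly the LINE_WIDTH_LONG_ERROR discussed earlier in the thread.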
That’s not correct; the device tree only reports the actual sensor capability to the driver.
Please contact a Jetson Preferred Partner for further camera solution support.
thanks
I am currently attempting to use an OV5647 sensor with the Jetson and stumbled across this thread. Did you complete the camera driver, and if so, is it available for use somewhere?