We added a Bayer camera on TK1 R21.5; the camera driver is modified from the TK1 imx135_v4l2.c.
We failed to stream with GStreamer but could capture a frame using yavta.
Is there anything we missed?
Below are the commands we used and the log.
./yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fimx1x5.raw
gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvoverlaysink -ev
ubuntu@tegra-ubuntu:~$ ./yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fimx1x5.raw
Device /dev/video0 opened.
Device `vi' on `' is a video capture device.
Video format set: SRGGB10 (30314752) 1920x1080 (stride 3840) buffer size 4147200
Video format: SRGGB10 (30314752) 1920x1080 (stride 3840) buffer size 4147200
1 buffers requested.
length: 4147200 offset: 0
Buffer 0 mapped at address 0xb69de000.
0 (0) [E] 0 4147200 bytes 1491904018.880566 500.980152 -0.002 fps
Captured 1 frames in 1.967854 seconds (0.508168 fps, 2107472.440365 B/s).
1 buffers released.
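As a sanity check on the captured file, the raw data can be unpacked and inspected offline. This is only an illustrative sketch: it assumes the SRGGB10 frame stores one 10-bit sample per 16-bit little-endian word (which matches the reported 3840-byte stride for 1920 pixels), and the file and function names are ours, not part of yavta.

```python
import os
import numpy as np

WIDTH, HEIGHT = 1920, 1080  # from the yavta command

def unpack_srggb10(data, width=WIDTH, height=HEIGHT):
    """Unpack a V4L2 SRGGB10 frame, assuming one 10-bit Bayer sample
    per 16-bit little-endian word (stride 3840 = 1920 px * 2 bytes)."""
    raw = np.frombuffer(data, dtype="<u2")[: width * height]
    return (raw & 0x3FF).reshape(height, width)

def to_pgm(frame, path):
    """Scale the 10-bit data to 8 bits and save a grayscale PGM for viewing."""
    view = (frame >> 2).astype(np.uint8)
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (frame.shape[1], frame.shape[0]))
        f.write(view.tobytes())

if __name__ == "__main__" and os.path.exists("imx1x5.raw"):
    frame = unpack_srggb10(open("imx1x5.raw", "rb").read())
    # a live sensor should be neither all-black nor fully clipped
    print("min/max/mean:", frame.min(), frame.max(), frame.mean())
    to_pgm(frame, "imx1x5.pgm")
```

Viewing the PGM (still mosaiced, so it looks like a gray checkerboard) is a quick way to tell a real exposure from an all-black or garbage frame.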
ubuntu@tegra-ubuntu:~$ gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvoverlaysink -ev
Setting pipeline to PAUSED …
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingNvCamHwHalDeviceRead: 18 - -1
NvCamHwHalDeviceRead: 19 - -1
NvCamHwHalDeviceRead: 20 - -1
NvCamHwHalDeviceRead: 22 - -1
NvCamHwHalDeviceRead: 23 - -1
NvCamHwHalDeviceInstall: PCLLK_IOCTL_DEV_REG fail. -1
ImagerDeviceDetect: Failed to register new device
IMX135 **** Can not open camera device: No such file or directory
NvOdmImagerOpenExpanded 462: Sensor ERR
NvOdmImagerOpenExpanded FAILED!
camera_open failed
ERROR: Pipeline doesn't want to pause.
Setting pipeline to NULL …
Freeing pipeline …
Error message on the host:
i2c i2c-2: Failed to register i2c client pcl_IMX1x5 at 0x1a (-16)
pcl-generic pcl-generic: camera_new_device cannot allocate client: pcl_IMX1x5 bus 2, 1a
Hi linuxsky,
Sorry to let you know there is no way to use nvcamerasrc on TK1, because its software architecture is different from TX1's. On TK1 you can only use the v4l2src and bayer2rgb elements for your project.
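For intuition about what bayer2rgb does with the sensor data, here is an illustrative nearest-neighbour demosaic of an 8-bit BGGR mosaic in Python. This is only a sketch of the general technique, not the element's actual algorithm, and the function name is made up.

```python
import numpy as np

def demosaic_bggr_nn(bayer):
    """Nearest-neighbour demosaic of an 8-bit BGGR mosaic.

    BGGR 2x2 tile layout:  B G
                           G R
    The missing colours at each pixel are copied from the nearest
    sample in the same 2x2 tile (greens are averaged). Illustrative
    sketch only; real demosaic elements interpolate more carefully.
    """
    h, w = bayer.shape  # assumed even
    rgb = np.zeros((h, w, 3), dtype=bayer.dtype)
    b = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    r = bayer[1::2, 1::2]
    # replicate each tile's B and R samples to all four tile positions
    rgb[0::2, 0::2, 2] = rgb[0::2, 1::2, 2] = rgb[1::2, 0::2, 2] = rgb[1::2, 1::2, 2] = b
    rgb[0::2, 0::2, 0] = rgb[0::2, 1::2, 0] = rgb[1::2, 0::2, 0] = rgb[1::2, 1::2, 0] = r
    # average the two green samples of each tile, then replicate
    g = ((g1.astype(np.uint16) + g2) // 2).astype(bayer.dtype)
    rgb[..., 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    return rgb
```

Note that this operates on 8-bit data; a 10-bit capture like the SRGGB10 frames above would need to be scaled down first.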
We encountered an error when we tested the v4l2src command you provided (message below).
We will re-check our driver and would appreciate any suggestions.
One more question: does this use the TK1 ISP when streaming through the v4l2src and bayer2rgb elements?
ubuntu@tegra-ubuntu:~$ gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-bayer,format=bggr,width=1280,height=720' ! bayer2rgb ! videoconvert ! xvimagesink sync=false
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.735842046
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
ubuntu@tegra-ubuntu:~$
Sorry. Both running with sudo and removing format=bggr produced the same error.
ubuntu@tegra-ubuntu:~$ gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-bayer,width=1920,height=1080' ! bayer2rgb ! videoconvert ! xvimagesink sync=false
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.731233233
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
ubuntu@tegra-ubuntu:~$ sudo gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-bayer,width=1920,height=1080' ! bayer2rgb ! videoconvert ! xvimagesink sync=false
[sudo] password for ubuntu:
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.733260232
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
ubuntu@tegra-ubuntu:~$
We are still struggling to figure out whether our issue is caused by the sensor driver or by the sensor hardware, since this is our first camera sensor bring-up on TK1 and the sensor module is new as well.
After modifying the sensor driver, we could get a frame with format=bggr removed, but we still failed to capture multiple frames.
The thing is, we actually got the error messages below even when capturing a single frame with the yavta command, yet the frame/picture we got was correct, although the test pattern was not completely correct.
They all report a Control Error. The Control Error flag is set when a proper LP → HS or HS → LP transition is not detected. This may or may not be a real issue.
For a real issue, you should check the MIPI bus to see whether the sensor transitions through this phase properly; such an issue usually manifests itself as frame loss.
If you do not see any missing frames, the errors could instead be caused by software sequencing. When the CIL (CSI PHY layer) is enabled, the MIPI bus can sometimes still be in the LP00 state, and the internal logic treats this as an error because it expects the MIPI signals to be at logic 11.
You can resolve this error by making sure the MIPI bus is in the LP11 state before you enable the CIL; in other words, turn on the receiver before you turn on the transmitter.
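The ordering described above can be sketched as follows. All function names here are hypothetical stand-ins for illustration; in a real driver these steps would be CSI/CIL register writes and the sensor's stream-on I2C sequence.

```python
# Hypothetical bring-up sequence illustrating "receiver before transmitter".
call_order = []

def put_sensor_in_lp11():
    # sensor powered up, MIPI lanes idle at LP11, streaming not yet started
    call_order.append("sensor_lp11")

def enable_cil_receiver():
    # enable the Tegra CSI PHY (CIL); it samples the bus state here, so the
    # lanes must already be at LP11 (logic 11), not LP00, to avoid the
    # Control Error flag
    call_order.append("cil_enable")

def start_sensor_streaming():
    # only now does the sensor begin LP->HS transitions and send frames
    call_order.append("sensor_stream_on")

put_sensor_in_lp11()
enable_cil_receiver()
start_sensor_streaming()
print(call_order)
```

The point of the sketch is only the ordering: enabling the receiver between the sensor's LP11 idle state and its stream-on avoids the spurious Control Error.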
Thank you for your support.
Attached are some waveforms of the camera hsync (C1), MIPI lane 0 clock (C2), and lane 0 data (C3).
We still cannot capture multiple frames:
one frame: OK, image OK
two frames: OK, first image OK, second image black
three frames: OK, first image OK, second/third images black
four frames: hang
I attached the zip file first.
Sorry, I could not find a way to enable discontinuous clock mode (in neither the Sony datasheet nor the driver examples).
I will try to figure it out later.
Sorry, we still don't know how to do it.
We would appreciate any sample or information on enabling discontinuous clock mode.
We will try to get proper measurement equipment to check the MIPI timing and will post an update.
Thank you for your support again.
Sorry, we have not gotten the equipment yet, so I'm afraid there are no updated results.
Also, we are considering testing first with a camera module that is known to work on TK1.