TX1 CSI without I2C

static const struct camera_common_colorfmt camera_common_color_fmts[] = {
	{
		MEDIA_BUS_FMT_SRGGB12_1X12,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB12,
	},
	{
		MEDIA_BUS_FMT_SRGGB10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB10,
	},
	{
		MEDIA_BUS_FMT_SBGGR10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SBGGR10,
	},
	{
		MEDIA_BUS_FMT_SRGGB8_1X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB8,
	},
	{ /* added this entry for YUYV capture */
		MEDIA_BUS_FMT_YUYV8_2X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_YUYV,
	},

I made that change. But there is also this in ov5693.c:

#define OV5693_DEFAULT_MODE	OV5693_MODE_2592X1944
#define OV5693_DEFAULT_HDR_MODE	OV5693_MODE_2592X1944_HDR
#define OV5693_DEFAULT_WIDTH	1280
#define OV5693_DEFAULT_HEIGHT	720
#define OV5693_DEFAULT_DATAFMT	MEDIA_BUS_FMT_YUYV8_2X8
#define OV5693_DEFAULT_CLK_FREQ	24000000

I need to change these as well. Should I go and edit ov5693_mode_tbls.h? (There are a lot of register tables.)

And in the device tree:

pixel_t = "bayer_bggr";

This is still here. I couldn't find a replacement for it, even though it should be yuv422.

Can I just use,

mode_type = "yuv";
csi_pixel_bit_depth = "???" (What to enter here?)
pixel_phase = "yuyv";

instead? Does ov5693.c support this? I know imx185 does.
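For reference, a hedged sketch of how the three-property YUV form usually looks in a mode node, going by the L4T sensor programming guide Shane links below. The "16" is an assumption on my part (YUYV 4:2:2 carries 16 bits per pixel), and mode0 is a hypothetical node name; verify both against sensor_common.c for your release:

```dts
mode0 { /* hypothetical mode node */
	mode_type = "yuv";
	csi_pixel_bit_depth = "16"; /* assumption: YUYV 4:2:2 = 16 bits/pixel */
	pixel_phase = "yuyv";
};
```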

@ShaneCCC any idea? Thank you Shane.

ov5693_mode_tbls.h is the sensor initialization table; if your device doesn't need it, you can remove/ignore it.
The properties below don't need to be exactly correct. They are for the ISP pipeline, and the ISP pipeline doesn't support YUV sensors.

pixel_t = "bayer_bggr";
mode_type = "yuv";
csi_pixel_bit_depth = "???" (What to enter here?)
pixel_phase = "yuyv";

But you said before that these were required in order to work.

How will the TX1 know that my input is YUYV then? It will try to capture the stream as if it were bayer_bggr in this case, am I wrong?

Correction to my comment: the properties below aren't needed for a YUV sensor.
Look into kernel/kernel-4.4/drivers/media/platform/tegra/camera/sensor_common.c; both of them need to be configured correctly. Please refer to the document below for detailed information.

https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fcamera_sensor_prog.html%23

pixel_t = "bayer_bggr";
mode_type = "yuv";
csi_pixel_bit_depth = "???" (What to enter here?)
pixel_phase = "yuyv";

I'm connecting to the TX1 over SSH. How can I check that it's working?

nvidia@tegra-ubuntu:~$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! fakesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' cannot capture in the specified format
Additional debug info:
gstv4l2object.c(3481): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Tried to capture in YU12, but device returned format YUYV
Execution ended after 0:00:00.000113073
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

I want this kind of setup: one command to publish and one to capture. The TX1 sends the video stream over Ethernet, and I watch it on the host PC.

Use v4l2-ctl to capture first, to make sure the sensor output is working. Take the command below as a reference and modify it to try; you need to change the width/height/pixelformat to match your sensor driver.

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw

I used the command you gave.

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw

I took the test.raw file back with scp (over SSH) and used 7YUV to inspect it (resolution set to 1280x720, format to 4:2:2 YUYV). Note that this is just the pattern generator we're verifying rather than the actual camera, but the outputs are similar.
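A quick way to cross-check that the capture really is packed YUYV is the file size: 4:2:2 YUYV carries 2 bytes per pixel, so a single 1280x720 frame must be exactly width x height x 2 bytes.

```shell
# Size sanity check for one captured frame: YUYV is packed 4:2:2,
# i.e. 2 bytes per pixel.
WIDTH=1280
HEIGHT=720
BYTES_PER_PIXEL=2
FRAME_BYTES=$((WIDTH * HEIGHT * BYTES_PER_PIXEL))
echo "$FRAME_BYTES"   # prints 1843200; compare with: ls -l test.raw
```

A 100-frame capture would then be 184,320,000 bytes, which is consistent with the ~176 MB ov491.raw seen in the scp output later in this thread.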

We get the pattern generator's pattern right. How can I watch it as a video instead of just a single frame?

I need two compatible commands like these:

// watch stream on PC
gst-launch-1.0 udpsrc port=5600 ! application/x-rtp, encoding-name=H265, payload=96 ! rtph265depay ! h265parse ! queue ! avdec_h265 ! xvimagesink sync = false

// mipi camera J1 on TX2
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM),width=(int)1280,height=(int)720,framerate=(fraction)20/1' ! nvvidconv ! omxh265enc control-rate=4 bitrate=2000000 qp-range=35,50:35,50:35,50 ! video/x-h265,stream-format=byte-stream ! rtph265pay config-interval=1 pt=96 ! udpsink host=192.168.1.10 port=5600 sync=false async=false

// usb camera
gst-launch-1.0 v4l2src device=/dev/video0 ! videorate max-rate=25 ! videoconvert ! omxh265enc qp-range=30,50:30,50:30,50 control-rate=4 bitrate=3000000 ! "video/x-h265, stream-format=(string)byte-stream" ! rtph265pay mtu=1400 ! udpsink host=192.168.1.100 port=5600 sync=false async=false

These are obviously for H.265, but we need to be able to stream the data just like this from a YUV source.

nvcamerasrc and nvarguscamerasrc don't support YUV sensors; you can only run the GStreamer pipeline with v4l2src, like the command below.

gst-launch-1.0 -v v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=720, format=(string)I420" ! nvvidconv ! "video/x-raw(memory:NVMM)" ! nvoverlaysink
nvidia@tegra-ubuntu:~$ gst-launch-1.0 -v v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=720, format=(string)I420" ! nvvidconv ! "video/x-raw(memory:NVMM)" ! nvoverlaysink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' cannot capture in the specified format
Additional debug info:
gstv4l2object.c(3481): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Tried to capture in YU12, but device returned format YUYV
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
nvidia@tegra-ubuntu:~$

I tried it, and it immediately exits and returns control back to me.

Did you modify the format to match your case? Maybe change the I420 to YUYV.

"video/x-raw,width=1280,height=720, format=(string)I420"
nvidia@tegra-ubuntu:~$ gst-launch-1.0 -v v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=720, format=(string)YUYV" ! nvvidconv ! "video/x-raw(memory:NVMM)" ! nvoverlaysink
WARNING: erroneous pipeline: could not link v4l2src0 to nvvconv0

I did as you said, but the pipeline couldn't link this time.

I wrote a shell script that ran this command 10 times, saving the files as

test1.raw
test2.raw
test3.raw

test10.raw

And I can see the video is playing, since the square from our pattern generator moves and changes position with each frame. So the video stream seems fine; capturing it as a video is the problem now.
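The ten single-frame captures can be sketched as a small loop. Since the real commands need the sensor attached, the sketch below only prints each invocation; drop the leading `echo` to capture for real (device path, geometry and format taken from the command above):

```shell
# Dry-run sketch of the capture loop: prints the ten v4l2-ctl invocations
# that write test1.raw .. test10.raw. Drop the 'echo' to capture for real.
i=1
while [ "$i" -le 10 ]; do
  echo v4l2-ctl -d /dev/video0 \
    --set-fmt-video=width=1280,height=720,pixelformat=YUYV \
    --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 \
    --stream-to="test$i.raw"
  i=$((i + 1))
done
```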

v4l2-ctl --set-fmt-video=width=1280,height=720,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 -d /dev/video0 --stream-to=ov491.raw
scp ov491.raw  burak@192.168.1.100:/home/burak/Desktop
mplayer /home/burak/Desktop/ov491.raw -demuxer rawvideo -rawvideo w=1280:h=720:fps=30:format=yuy2

I can play the recorded raw video just fine with these commands, but I need gst-launch-1.0 equivalents of them. Can you please help me, Shane? Thank you.

How about the command line below?

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=1280, height=720, format=YUYV' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=I420' ! nvoverlaysink sync=false
nvidia@tegra-ubuntu:~$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=1280, height=720, format=YUYV' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=I420' ! nvoverlaysink sync=false
WARNING: erroneous pipeline: could not link v4l2src0 to nvvconv0

Still the same error.
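The "could not link" failure looks like a caps-naming issue rather than a driver problem: in GStreamer `video/x-raw` caps, the packed-YUYV FourCC is spelled `YUY2`, so `format=YUYV` is not a valid format string and v4l2src cannot be linked. A minimal sketch (only printed here since it needs the board; drop the leading `echo` to run it):

```shell
# GStreamer spells the V4L2 'YUYV' FourCC as 'YUY2' in video/x-raw caps.
# Dry-run sketch: drop the leading 'echo' to launch it on the TX1.
CAPS="video/x-raw, width=1280, height=720, format=YUY2"
echo gst-launch-1.0 -v v4l2src device=/dev/video0 ! "$CAPS" ! fakesink
```

If this links, the same `YUY2` caps can replace `YUYV` in the nvvidconv pipelines above.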

nvidia@tegra-ubuntu:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'RG10'
	Name        : 10-bit Bayer RGRG/GBGB

^C^C^C^C^C

We can record raw images/video, but even this command gets stuck. Is this a problem for us, and where might the problem be, Shane?

I used the command:

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw

I wrote a shell script that ran this command 10 times, saving the files as

test1.raw
test2.raw
test3.raw

test10.raw

And I can see the video is playing, since the square of our pattern generator moves and changes position with each frame.

Then I proceeded with the video:

v4l2-ctl --set-fmt-video=width=1280,height=720,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 -d /dev/video0 --stream-to=ov491.raw

scp ov491.raw  burak@192.168.1.100:/home/burak/Desktop

mplayer /home/burak/Desktop/ov491.raw -demuxer rawvideo -rawvideo w=1280:h=720:fps=30:format=yuy2

I can play the recorded raw video just fine with these commands.

But I need to stream the video with commands similar to these;

// usb camera
gst-launch-1.0 v4l2src device=/dev/video0 ! videorate max-rate=25 ! videoconvert ! omxh265enc qp-range=30,50:30,50:30,50 control-rate=4 bitrate=3000000 ! "video/x-h265, stream-format=(string)byte-stream" ! rtph265pay mtu=1400 ! udpsink host=192.168.1.100 port=5600 sync=false async=false

// watch stream on PC
gst-launch-1.0 udpsrc port=5600 ! application/x-rtp, encoding-name=H265, payload=96 ! rtph265depay ! h265parse ! queue ! avdec_h265 ! xvimagesink sync = false

However the first command outputs;

nvidia@tegra-ubuntu:~$ gst-launch-1.0 v4l2src device=/dev/video0 ! videorate max-rate=25 ! videoconvert ! omxh265enc qp-range=30,50:30,50:30,50 control-rate=4 bitrate=3000000 ! "video/x-h265, stream-format=(string)byte-stream" ! rtph265pay mtu=1400 ! udpsink host=192.168.1.100 port=5600 sync=false async=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' cannot capture in the specified format
Additional debug info:
gstv4l2object.c(3481): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Tried to capture in YU12, but device returned format YUYV
Execution ended after 0:00:00.001618073
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Any idea on how I can get from YU12 to YUYV, Shane? Thank you!
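One plausible fix, assuming the only problem is the caps name: v4l2src negotiates I420 (FourCC YU12) when nothing pins the source caps, and GStreamer spells the YUYV FourCC as `YUY2`. Pinning `YUY2` right after v4l2src should let the rest of the H.265/RTP pipeline from the commands above stay unchanged. Dry-run sketch (drop the leading `echo` to launch on the TX1; host/port/bitrate carried over from the usb-camera command):

```shell
# Pin the v4l2src caps to YUY2 (GStreamer's name for the YUYV FourCC) so
# negotiation cannot fall back to the I420/YU12 default; videoconvert then
# feeds the OMX encoder as in the thread's usb-camera command.
SRC_CAPS="video/x-raw, width=1280, height=720, format=YUY2"
ENC_CAPS="video/x-h265, stream-format=byte-stream"
echo gst-launch-1.0 v4l2src device=/dev/video0 ! "$SRC_CAPS" \
  ! videoconvert ! omxh265enc control-rate=4 bitrate=3000000 ! "$ENC_CAPS" \
  ! rtph265pay mtu=1400 pt=96 ! udpsink host=192.168.1.100 port=5600 \
  sync=false async=false
```

The receiver side on the PC (the udpsrc ... avdec_h265 pipeline quoted above) can stay exactly as it is.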

I'll add the whole process; maybe the terminal messages can be used to debug the problem. (I record, transfer, and then watch the video without any error.)

nvidia@tegra-ubuntu:~$ v4l2-ctl --set-fmt-video=width=1280,height=720,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 -d /dev/video0 --stream-to=ov491.raw
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 61.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
nvidia@tegra-ubuntu:~$
nvidia@tegra-ubuntu:~$ scp ov491.raw  burak@192.168.1.100:/home/burak/Desktop
burak@192.168.1.100's password: 
ov491.raw                                                                                            100%  176MB  11.0MB/s   00:16
burak@burak-lenovo-v330-15ikb:~/Desktop$ mplayer /home/burak/Desktop/ov491.raw -demuxer rawvideo -rawvideo w=1280:h=720:fps=30:format=yuy2
MPlayer 1.3.0 (Debian), built with gcc-7 (C) 2000-2016 MPlayer Team
do_connect: could not connect to socket
connect: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.

Playing /home/burak/Desktop/ov491.raw.
rawvideo file format detected.
Failed to open VDPAU backend libvdpau_i965.so: cannot open shared object file: No such file or directory
[vdpau] Error when calling vdp_device_create_x11: 1
==========================================================================
Opening video decoder: [raw] RAW Uncompressed Video
Could not find matching colorspace - retrying with -vf scale...
Opening video filter: [scale]
Movie-Aspect is undefined - no prescaling applied.
[swscaler @ 0x5647792b6ce0] bicubic scaler, from yuyv422 to yuv420p using MMXEXT
[swscaler @ 0x5647792b6ce0] using unscaled yuyv422 -> yuv420p special converter
VO: [xv] 1280x720 => 1280x720 Planar YV12 
Selected video codec: [rawyuy2] vfm: raw (RAW YUY2)
==========================================================================
Load subtitles in /home/burak/Desktop/
Audio: no sound
Starting playback...
V:   3.3 101/101  0%  4%  0.0% 0 0 

Exiting... (End of file)
burak@burak-lenovo-v330-15ikb:~/Desktop$

I mostly hope you can find something in that last one, since it gives the most information about the process and about the format the video is recorded, converted, and played in.