First steps in understanding how to implement a custom camera driver (CSI)

Hello. I am still confused about custom camera implementation on the TX2.
I have several sensors.
Now I am trying to bring up another camera sensor with this stream:
1280x720, 16 bits per pixel, 30 fps, format YUV422, 8 bits per component (less than 16).
I also found in a Xilinx example that the maximum bits per component is 14 (also less than 16).
As I understand the answer, 14 is the number of useful bits in a 16-bit component (the other 2 bits are empty).
Can this cause problems with TX2 capture via V4L2?
What is the valid way to set csi_pixel_bit_depth? In the kernel source, for YUV it is only allowed to be 16, not 8 or 14.


In my DTS I set:
active_w = "1280";
active_h = "720";
mode_type = "yuv";
pixel_phase = "uyvy";
csi_pixel_bit_depth = "16";
readout_orientation = "90";
line_length = "2560";
inherent_gain = "1";
mclk_multiplier = "24";
pix_clk_hz = "74250000";
I am confused about pix_clk_hz. Isn't it derived from the sensor signal? Why does it need to be specified?
What is readout_orientation? Could this parameter cause errors or problems?
A general question: how can I calculate and set the required DTS parameters if I only presumably know the video signal parameters?
Also, what values are required for the framerate-related fields?
framerate_factor = "1000000";
min_framerate = "2000000";
max_framerate = "60000000";
step_framerate = "1";
default_framerate= "30000000";
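To show my current understanding, here is the arithmetic I can reconstruct from my own values (the 3300 x 750 frame totals are an assumption based on standard CEA-861 720p30 timing, not something taken from our FPGA configuration):

```shell
# line_length matches active_w * 2 bytes per pixel for packed YUV422:
echo $((1280 * 2))             # 2560

# pix_clk_hz matches total_w * total_h * fps with assumed 720p30
# CEA-861 frame totals of 3300 x 750:
echo $((3300 * 750 * 30))      # 74250000

# The framerate fields appear to be fps scaled by framerate_factor:
echo $((30000000 / 1000000))   # 30 fps (default_framerate)
```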
Thanks.

I was able to run a stream.
I get CHANSEL_NOMATCH with data = 0x07fc03c9.
How can I interpret the 0x7fc part?
Thanks.

My trace contains:
tag:CSIMUX_STREAM channel:0xff frame:0 vi_tstamp:67548507172 data:0x00000001
tag:FS channel:0x00 frame:0 vi_tstamp:67548514459 data:0x00000010
tag:CHANSEL_PXL_SOF channel:0x00 frame:0 vi_tstamp:67548514499 data:0x00000001
tag:ATOMP_FS channel:0x00 frame:0 vi_tstamp:67548514520 data:0x00000000
tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:0 vi_tstamp:67548516980 data:0x08000000
tag:CHANSEL_PXL_EOF channel:0x00 frame:0 vi_tstamp:67548842490 data:0x02cf0002
tag:ATOMP_FRAME_DONE channel:0x00 frame:0 vi_tstamp:67548842507 data:0x00000000
tag:FE channel:0x00 frame:0 vi_tstamp:67548842527 data:0x00000020
tag:ATOMP_FE channel:0x00 frame:0 vi_tstamp:67548842530 data:0x00000000
tag:FS channel:0x00 frame:0 vi_tstamp:67548842545 data:0x00000010
tag:CHANSEL_NOMATCH channel:0x01 frame:0 vi_tstamp:67548842585 data:0x07fc03c9
tag:FE channel:0x00 frame:0 vi_tstamp:67549170613 data:0x00000020
tag:FS channel:0x00 frame:0 vi_tstamp:67549170631 data:0x00000010
tag:CHANSEL_PXL_SOF channel:0x00 frame:0 vi_tstamp:67549170670 data:0x00000001
tag:ATOMP_FS channel:0x00 frame:0 vi_tstamp:67549170691 data:0x00000000
tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:0 vi_tstamp:67549173895 data:0x08000000
tag:CHANSEL_PXL_EOF channel:0x00 frame:0 vi_tstamp:67549498661 data:0x02cf0002
tag:ATOMP_FRAME_DONE channel:0x00 frame:0 vi_tstamp:67549498686 data:0x00000000
tag:FE channel:0x00 frame:0 vi_tstamp:67549498698 data:0x00000020
tag:ATOMP_FE channel:0x00 frame:0 vi_tstamp:67549498702 data:0x00000000
tag:FS channel:0x00 frame:0 vi_tstamp:67549498716 data:0x00000010
tag:CHANSEL_NOMATCH channel:0x01 frame:0 vi_tstamp:67549498756 data:0x07fc03c9
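One field I can decode from the trace myself (assuming the upper half-word of the CHANSEL_PXL_EOF payload is the index of the last active line, which would match my mode):

```shell
# 0x02cf0002: the upper 16 bits are 0x02cf = 719, i.e. the last line
# index of a 720-line frame, consistent with active_h = "720".
printf '%d\n' $((0x02cf0002 >> 16))    # 719
```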
And this is the image I get from the live stream with GStreamer:
gst-launch-1.0 v4l2src device=/dev/video0 ! videoparse width=1280 height=720 framerate=30/1 format=2 ! autovideoconvert ! autovideosink

If I try to save the stream and then read it from a file:
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=NV16 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 --stream-to=fff.yuv
And play it:
gst-launch-1.0 filesrc location=fff.yuv ! videoparse width=1280 height=720 framerate=24/1 format=2 ! autovideoconvert ! autovideosink

I receive alternating frames: one whole, and one split vertically (you can see part of one test rectangle at the bottom and part at the top).
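One way to sanity-check the capture is to compare the file size with the expected frame size (a sketch, assuming NV16 at 16 bits per pixel overall):

```shell
# Expected bytes per NV16 frame: a full-resolution Y plane plus a
# same-size interleaved UV plane (4:2:2), i.e. 2 bytes per pixel overall.
expected=$((1280 * 720 * 2))
echo "$expected"               # 1843200 bytes per frame
echo $((expected * 100))       # 184320000 bytes for 100 frames
# Compare against the actual size reported by: stat -c %s fff.yuv
```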

My embedded_metadata_height = "0";
What could be the reason?
The signal comes as a test pattern from a Xilinx FPGA (we have not had any success with real sensors).
Thanks

Hello @sergeyfevg,

That seems like a timing issue.
You can try 2 things:

  1. Use a higher pix_clk_hz. That is the clock at which the NVIDIA Jetson camera subsystem syncs to capture data from the sensor. You can always set it higher than the theoretical value. See if that has any effect on the stream.

  2. Can you capture into a raw file using v4l2-ctl and share it, so we can help you check the file size and compare it with what you have configured for your capture mode?

Best regards,
Andrew
Embedded Software Engineer at ProventusNova

The problem seems to be that the Jetson board tries to parse a planar 422 format,
while we use packed YUV422 8-bit.
In YUV viewer I see this:
This image is exactly the pattern that we observe from the Jetson output.


If I set the UYVY packed format, we see the black color (as we expect).

But how can we make the Jetson source understand packed UYVY422 8-bit?
In DTS I set:
active_w = "1280";
active_h = "720";
mode_type = "yuv";
pixel_phase = "uyvy";
csi_pixel_bit_depth = "16";
readout_orientation = "90";
line_length = "2560";

I also suspect that there is some problem on the V4L2 or GStreamer side.
I save a file with:
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=UYVY --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 --stream-to=test.yuv
And try to play it:
gst-launch-1.0 filesrc location=test.yuv ! videoparse format=5 width=1280 height=720 framerate=1/1 ! xvimagesink

But I get this error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstVideoParse:videoparse0/GstRawVideoParse:inner_rawvideoparse: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbaseparse.c(3702): gst_base_parse_loop (): /GstPipeline:pipeline0/GstVideoParse:videoparse0/GstRawVideoParse:inner_rawvideoparse:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...

But if I set format=2, I am able to play the video, though in planar format, which shows a wrong image with green colors.

My single frame and a split frame, saved through v4l2:

uyvy_test.zip (9.1 KB)

Thanks.

Can you confirm whether the sensor outputs embedded data lines?

Usually CHANSEL_NOMATCH can be caused by an incorrect virtual channel ID or incorrect embedded data lines.

Thanks

I will clarify the situation with the embedded lines.
For now I am able to play a normal stream, but only with the ffplay utility:
ffplay -f v4l2 -framerate 30 -pixel_format uyvy422 -video_size 1280x720 -i /dev/video0

But with GStreamer it still fails when I set format=5.
Thanks.

@sergeyfevg,

That is very interesting behavior you are seeing.
And great catch finding out that it was actually the YUV format.

The good news is that if you are able to view your buffers after capturing them raw, it means that the camera driver and DTB configuration should be fine.

Now, as for GStreamer, can you try capturing with:

gst-launch-1.0 nvv4l2camerasrc ! nvvidconv ! xvimagesink 
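If you still want to play the raw file back directly, note that videoparse's numeric format values follow the GstVideoFormat enum (2 = I420, planar; 5 = UYVY, packed), so parsing UYVY data as I420 produces exactly the green, planar-looking image. A sketch that names the format explicitly and lets videoconvert negotiate with the sink:

```shell
# Hypothetical playback pipeline for the raw UYVY capture; videoconvert
# covers the case where xvimagesink does not accept UYVY directly.
gst-launch-1.0 filesrc location=test.yuv \
    ! rawvideoparse width=1280 height=720 format=uyvy framerate=30/1 \
    ! videoconvert ! xvimagesink
```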

best regards,
Andrew
Embedded Software Engineer at ProventusNova

Hello.
Thanks for the advice.
gst-launch-1.0 nvv4l2camerasrc ! nvvidconv ! xvimagesink
It works.
However, I get a full-screen stream (we need a windowed one).
I managed to get a window with:
sudo gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! "video/x-raw(memory:NVMM), format=(string)UYVY,width=(int)1280, height=(int)720,framerate=(fraction)28/1" ! nvvidconv ! xvimagesink
But strangely, the command where I use format=5 still fails.

Still, my original question remains:
How should I set the various DTS parameters (or estimate some of them) when all I know
about the sensor is a string like:
1280x720, 16 bits per pixel, 30 fps, format YUV422, 8 bits per component.
I would like to write a utility that creates the DTS automatically from camera properties like the above.
Also, do the parameters related to the physical size of the sensor matter? For example physical_w: as far as I know it only describes the width of the sensor, but does the driver actually use it?
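As a rough sketch of what such a utility could look like: a small script that emits the mode properties from the known signal parameters (the blanking totals used for pix_clk_hz are an assumption that would have to come from the real sensor or FPGA timing):

```shell
#!/bin/sh
# Known signal parameters: 1280x720, packed YUV422 (2 bytes/px), 30 fps.
W=1280; H=720; FPS=30; BPP=2
# Assumed frame totals (CEA-861 720p30: 3300 x 750); adjust to real timing.
TOTAL_W=3300; TOTAL_H=750

cat <<EOF
active_w = "$W";
active_h = "$H";
mode_type = "yuv";
pixel_phase = "uyvy";
csi_pixel_bit_depth = "16";
line_length = "$((W * BPP))";
pix_clk_hz = "$((TOTAL_W * TOTAL_H * FPS))";
framerate_factor = "1000000";
default_framerate = "$((FPS * 1000000))";
EOF
```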

Thanks

Check the programming guide for the details.

By the way, physical_w is used for calculating lens information. For a YUV sensor you can just ignore it.