Hello. I am still confused by the custom camera implementation on TX2.
I have several sensors.
Now I am trying to bring up another camera sensor with this stream: 1280x720, 16 bits per pixel, 30 fps, format YUV422, 8 bits per component (less than 16).
Also, I found in a Xilinx example that the maximum bits per component is 14 (also less than 16).
As I understand the answer, 14 is the number of useful bits in a 16-bit component (the other 2 bits are empty).
Can this cause a problem with TX2 capture via v4l2?
What is the valid way to adjust csi_pixel_bit_depth? For YUV, the kernel source only allows 16, not 8 or 14.
In my DTS I set:
active_w = "1280";
active_h = "720";
mode_type = "yuv";
pixel_phase = "uyvy";
csi_pixel_bit_depth = "16";
readout_orientation = "90";
line_length = "2560";
inherent_gain = "1";
mclk_multiplier = "24";
pix_clk_hz = "74250000";
I have some confusion about pix_clk_hz. Isn't it derived from the sensor signal? Why does it have to be specified here?
What is readout_orientation? Could this parameter cause errors or problems?
A general question: how can I calculate and set the required DTS parameters if I only know the video signal parameters approximately?
Also, what values are required for the framerate-related fields? framerate_factor = "1000000"; min_framerate = "2000000"; max_framerate = "60000000"; step_framerate = "1"; default_framerate = "30000000";
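My current guess (an assumption on my part, based on the values above) is that these fields hold the frame rate in fps multiplied by framerate_factor, e.g. for a 2-60 fps range with a 30 fps default:

```
framerate_factor  = "1000000"; /* value = fps * 1000000 */
min_framerate     = "2000000"; /* 2 fps */
max_framerate     = "60000000"; /* 60 fps */
step_framerate    = "1";
default_framerate = "30000000"; /* 30 fps */
```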
Thanks.
If I try to save the stream and read it from file: v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=NV16 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 --stream-to=fff.yuv
And play it: gst-launch-1.0 filesrc location=fff.yuv ! videoparse width=1280 height=720 framerate=24/1 format=2 ! autovideoconvert ! autovideosink
I receive alternating frames: one whole and one split vertically (you can see part of one test rectangle at the bottom and another at the top).
My embedded_metadata_height = "0";
What could be the reason?
The signal comes as a test pattern from a Xilinx FPGA (we have not had any success with real sensors).
Thanks
That seems like a timing issue.
You can try two things:
Use a higher pix_clk_hz. That is the clock at which the NVIDIA Jetson camera subsystem syncs to capture data from the sensor. You can always set it higher than the theoretical value. See if that has any effect on the stream.
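As a rough sanity check (the total-lines value below is my assumption, not a number from your sensor timing), the theoretical pixel clock is approximately total line length times total lines per frame times frame rate:

```shell
# Rough theoretical pixel clock: total line length (pixels) * total lines per frame * fps.
# total_lines is an assumed value (720 active lines plus blanking); use your real timing.
line_length=2560   # value from the DTS in this thread
total_lines=750
fps=30
echo $(( line_length * total_lines * fps ))
```

If the result is below what you configured (74250000 in your DTS), you already have some headroom, but you can still try raising pix_clk_hz further.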
Can you capture into a raw file using v4l2-ctl and share it, so we can help you check the file size and compare it with what you have configured for your capture mode?
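For reference, the expected size of such a raw file is simply width x height x bytes per pixel x frame count; assuming packed UYVY at 2 bytes per pixel:

```shell
# Expected raw capture size, assuming packed UYVY (2 bytes/pixel) and no embedded metadata
width=1280; height=720; bytes_per_pixel=2; frames=100
echo $(( width * height * bytes_per_pixel * frames ))
# compare against the actual file, e.g.:  stat -c %s test.yuv
```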
Best regards,
Andrew
Embedded Software Engineer at ProventusNova
The problem seems to be that the Jetson board tries to parse the 422 format differently.
But we use the YUV422 8bit packed.
In a YUV viewer I see this:
This image is exactly the pattern that we observe from the Jetson output.
But how can we configure the Jetson source to understand UYVY422 8-bit packed?
In the DTS I set:
active_w = "1280";
active_h = "720";
mode_type = "yuv";
pixel_phase = "uyvy";
csi_pixel_bit_depth = "16";
readout_orientation = "90";
line_length = "2560";
And I suspect there is some problem on the V4L side or in GStreamer.
I save file with: v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=UYVY --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 --stream-to=test.yuv
And try to play it: gst-launch-1.0 filesrc location=test.yuv ! videoparse format=5 width=1280 height=720 framerate=1/1 ! xvimagesink
But I get the error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstVideoParse:videoparse0/GstRawVideoParse:inner_rawvideoparse: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbaseparse.c(3702): gst_base_parse_loop (): /GstPipeline:pipeline0/GstVideoParse:videoparse0/GstRawVideoParse:inner_rawvideoparse:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
But if I set format=2, I can play the video, but as a planar format, which shows a wrong image with green colors.
Here are my single frame and the split frame saved through v4l2:
I will clarify about embedded lines.
Now I am able to play a normal stream, but only using the ffplay utility: ffplay -f v4l2 -framerate 30 -pixel_format uyvy422 -video_size 1280x720 -i /dev/video0
But with GStreamer it still fails when I set format=5.
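One thing I have not tried yet, so this is only a guess: using rawvideoparse directly with a named format, instead of videoparse's numeric enum, might avoid any ambiguity about which number maps to UYVY:

```shell
# Untested sketch: rawvideoparse takes the format by name (uyvy) rather than an enum number
gst-launch-1.0 filesrc location=test.yuv ! \
  rawvideoparse format=uyvy width=1280 height=720 framerate=30/1 ! \
  videoconvert ! xvimagesink
```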
Thanks.
That is a very interesting behavior you are seeing.
And great catch finding out that it was actually the YUV format being used.
The good news is that if you are able to view your buffers after capturing them raw, it means that the camera driver and DTB configuration should be fine.
Now, as per GStreamer, can you try capturing with:
gst-launch-1.0 nvv4l2camerasrc ! nvvidconv ! xvimagesink
Hello.
Thanks for the advice. gst-launch-1.0 nvv4l2camerasrc ! nvvidconv ! xvimagesink
It works.
Although, I get a full-screen stream (we need it windowed).
I got a windowed stream with: sudo gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! "video/x-raw(memory:NVMM), format=(string)UYVY, width=(int)1280, height=(int)720, framerate=(fraction)28/1" ! nvvidconv ! xvimagesink
But strangely, the command where I use format=5 still fails.
Although, my original question remains:
How should I set the various DTS parameters (or estimate some of them) when all I know about the sensor is this line: 1280x720, 16 bits per pixel, 30 fps, format YUV422, 8 bits per component?
I would like to write a utility that creates the DTS automatically from camera properties like those above.
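As a first sketch of such a utility (everything below is my assumption; it only templates the mode fields discussed in this thread), a small shell script could generate the DTS fragment from the known signal parameters:

```shell
#!/bin/sh
# Hypothetical sketch: emit a DTS sensor-mode fragment from known signal parameters.
# Field names follow the ones used earlier in this thread; adjust values for your sensor.
W=1280; H=720; PIXCLK=74250000

cat <<EOF
mode0 {
    active_w = "$W";
    active_h = "$H";
    mode_type = "yuv";
    pixel_phase = "uyvy";
    csi_pixel_bit_depth = "16";
    line_length = "$(( W * 2 ))"; /* assuming 2 bytes per pixel for packed YUV422 */
    pix_clk_hz = "$PIXCLK";
};
EOF
```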
Also, do the parameters related to the physical size of the sensor matter? For example physical_w. As I understand it, this only describes the width of the sensor, but does it matter for the driver?