I’ve captured 100 images and converted them to GIF, so they look like this:
Now I’d like to stream the video, so I thought of using GStreamer. For that, I checked the available formats, which are these:
nvidia@nvidia-desktop:~/Pictures$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'RG10'
Name : 10-bit Bayer RGRG/GBGB
Size: Discrete 1920x1080
Interval: Discrete 0.020s (50.000 fps)
Anyway, I tried to run the following command to capture live video, but I get the error below. How should I proceed? Do I need to change something in the driver?
nvidia@nvidia-desktop:~/Pictures$ gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=(string)BGRx" ! xvimagesink -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
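The "not-negotiated" error is caps negotiation failing: the driver only offers 10-bit Bayer (RG10), and nothing in that pipeline converts it to the requested video/x-raw BGRx (older stock GStreamer releases have no element that debayers 10-bit data). One way to sidestep GStreamer entirely is to grab raw frames with v4l2-ctl. A sketch, where the device path, resolution, and output filename are assumptions for illustration (run it on the target with the camera attached):

```shell
# Build the capture command as a string; it only executes when the device
# node exists, so the sketch can be inspected on machines without the camera.
DEV=/dev/video0
CMD="v4l2-ctl -d $DEV \
  --set-fmt-video=width=1920,height=1080,pixelformat=RG10 \
  --stream-mmap --stream-count=1 --stream-to=frame.raw"
if [ -c "$DEV" ]; then
  $CMD
else
  echo "SKIP: $DEV not present"
fi
```

The resulting frame.raw holds the sensor data untouched, which can then be interpreted as grayscale offline.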
It is not; the sensor is monochrome, but I haven’t found a way to specify that it is monochrome.
The data coming from the FPGA is RAW10 monochrome, 10-bit pixel depth.
But it is not a Bayer sensor; it is monochrome. Anyway, is it possible to capture the data without processing and then use a third-party tool to interpret it as grayscale? I ask because I am able to capture frames with v4l2-ctl.
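Interpreting the captured frames offline is straightforward if the driver stores each 10-bit sample in a little-endian 16-bit word, which is the usual layout for RG10 dumps from v4l2-ctl (verify this against your driver). A minimal sketch under that assumption, converting a raw frame to an 8-bit grayscale PGM that most image viewers can open (the filenames are illustrative):

```python
import array
import sys

def raw10_to_8bit(raw_bytes):
    """Assumes one pixel per little-endian 16-bit word, 10 significant bits."""
    pixels = array.array('H')
    pixels.frombytes(raw_bytes)
    if sys.byteorder == 'big':
        pixels.byteswap()  # V4L2 buffers here are little-endian
    # Mask to 10 bits, then drop the 2 LSBs to get an 8-bit grayscale sample.
    return bytes((p & 0x3FF) >> 2 for p in pixels)

def write_pgm(path, width, height, gray8):
    """Write binary PGM (P5), readable by most viewers and tools."""
    with open(path, 'wb') as f:
        f.write(b'P5\n%d %d\n255\n' % (width, height))
        f.write(gray8)
```

Usage on a dump would look like `write_pgm('frame.pgm', 1920, 1080, raw10_to_8bit(open('frame.raw', 'rb').read()))`.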
Are you referring to the output of “v4l2-ctl -d /dev/video0 --list-formats-ext” that I listed in comment #1? That output does NOT define the driver. I set it up like this in the device tree mostly because I see no other way to define my sensor. I understand that v4l2-ctl works because it bypasses the ISP, but my sensor is not Bayer. How should I define it in the DT?
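Since this capture path bypasses the ISP, the RAW samples reach memory unmodified regardless of the declared Bayer phase, so one common workaround is to keep describing the mode as Bayer in the DT and treat the bytes as grayscale afterwards. A sketch of the relevant mode properties (property names follow the Jetson sensor device-tree convention; the values shown are assumptions, so verify them against the sensor programming guide for your L4T version):

```
mode0 {
        /* Illustrative assumptions for a 10-bit mode declared as Bayer;
         * with the ISP bypassed, the phase label does not alter the data. */
        mode_type = "bayer";
        pixel_phase = "rggb";
        csi_pixel_bit_depth = "10";
        /* ... remaining mode properties unchanged ... */
};
```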
As far as I can see, there has been some progress in this thread.
To summarize: I have a monochrome (grayscale) sensor with 10-bit pixel depth, outputting data in RAW8, RAW10, RAW12, or RAW14 format.
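Whichever RAW depth is used, the same MSB-alignment trick converts a sample to 8-bit for display: keep the 8 most significant bits of the N-bit value. A small sketch of that, assuming samples arrive right-aligned in their container word:

```python
def to_8bit(pixel, bit_depth):
    """Keep the 8 most significant bits of a right-aligned N-bit sample (N >= 8)."""
    mask = (1 << bit_depth) - 1      # strip any padding bits above the sample
    return (pixel & mask) >> (bit_depth - 8)
```

For example, full scale maps to 255 at every depth: 255 at 8-bit, 1023 at 10-bit, 4095 at 12-bit, 16383 at 14-bit.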