GStreamer pipeline produces greyscale video stream (using NV12 and nvvidconv)

Hi.
I am preparing a pipeline using two imx219 camera sources.
When I run:
"gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=3264,height=1848,format=NV12,framerate=28/1' ! nvvidconv ! video/x-raw,format=NV12 ! autovideosink"
I get a greyscale stream.
I have spent time looking at earlier posts on related 'greyscale' issues; two years ago this kind of problem seemed to relate to NV12 encoding. Given the intervening years, my expectation was that as more 'NV' plugins shift processing from the CPU to the GPU, autovideosink would look for a 3-channel sink to display the output rather than fall back to a 1-channel quick fix.
Should I be using a different sink? I first want to view the stream and then once everything is working I want to save the source as a cropped .mp4, which might be possible using:
! nvvidconv ! video/x-raw,format=NV12,left=0,right=0,top=424,bottom=424 ! nvv4l2h264enc ! qtmux ! filesink location=x/x/camera1/croptest.mp4

This is a precursor to running inference on data gathered after converting the .mp4 material to .pngs (i.e. 'real time' here means seconds, rather than milliseconds).

I have also tried reformatting to RGBx and RGB via nvvidconv and videoconvert before using autovideosink, to solve the greyscale issue, but this produces error messages including caps negotiation failures.

May I request advice on where to focus my attention?

Thank you

autovideosink may select xvimagesink, and that sink doesn't support the NV12 format, so you can simply remove NV12 from the caps:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=3280,height=1848,format=NV12,framerate=28/1' ! nvvidconv ! video/x-raw ! queue ! autovideosink -v

This would use YUY2 format.

Also note that as opposed to videocrop, nvvidconv doesn’t use margins but coordinates.
Assuming that you want to just remove 424 lines on top and 424 lines on bottom (resulting in a 3264x1000 resolution), the correct pipeline would be:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=3264,height=1848,format=NV12,framerate=28/1' ! nvvidconv left=0 right=3263 top=424 bottom=1423 ! video/x-raw,width=3264,height=1000,pixel-aspect-ratio=1/1 ! queue ! autovideosink -v
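As a sanity check, the margin-to-coordinate conversion described above can be sketched in a few lines of Python (the numbers match the pipeline above; the helper name is my own, not part of any GStreamer API):

```python
def margins_to_coords(width, height, left, right, top, bottom):
    """Convert videocrop-style margins (pixels removed from each edge)
    to nvvidconv-style crop coordinates (inclusive pixel positions)."""
    return {
        "left": left,                   # first column kept
        "right": width - right - 1,     # last column kept
        "top": top,                     # first row kept
        "bottom": height - bottom - 1,  # last row kept
    }

coords = margins_to_coords(3264, 1848, left=0, right=0, top=424, bottom=424)
print(coords)  # {'left': 0, 'right': 3263, 'top': 424, 'bottom': 1423}

# Resulting output height: 1423 - 424 + 1 = 1000
print(coords["bottom"] - coords["top"] + 1)  # 1000
```

This reproduces the `left=0 right=3263 top=424 bottom=1423` values and the 3264x1000 output size used in the pipelines.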

and if you want to encode into H264 and save into mp4 container, you would use:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=3264,height=1848,format=NV12,framerate=28/1' ! nvvidconv left=0 right=3263 top=424 bottom=1423 ! 'video/x-raw(memory:NVMM),width=3264,height=1000,pixel-aspect-ratio=1/1' ! queue ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4 -ev

Many thanks for your highly detailed answer.

That is extremely helpful.

However, it generated a couple of errors which I need to investigate. The first was:
“Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute: 787 FrameRate specified is greater than supported”
so it switched to camera mode 0 and tried to run W = 3264, H = 2464, Frame Rate = 21.000000;
however, “Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp” then produced a second error.
I reran the code, using width=3264 instead of width=3280 and made the necessary adjustments to the coordinates.
Now the camera stream runs but:

  1. it is greyscale
  2. it is very noisy

I would appreciate any advice you can offer on point 1.

On point 2: in order to keep things simple I did not mention the following. I have been using “tnr-mode=2 tnr-strength=1” after “nvarguscamerasrc sensor-id=0”, and it appears that the noise reduction is no longer working. Do I need to use some other noise reduction now?

Many thanks in advance

Hi,
Please try the command and see if video preview is good:

$ gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! nvoverlaysink

If you use a Raspberry Pi camera v2, it is enabled by default. If you use an imx219 from another vendor, you would need to port the sensor driver and device tree accordingly.

Hi there.

The command runs. It is in colour. It defaults to camera mode 2.
I have used this camera in colour before.

Hi,
Please try I420:

$ gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! nvvidconv ! video/x-raw,format=I420 ! xvimagesink sync=0

Hi again.

That produces the same result: positive, colour.

Hi,
You can use I420 in your use-case. I420 and NV12 are the same format, differing only in data layout.
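For illustration, here is a small Python sketch (plain bytes, toy 4x2 frame) of the layout difference: both are 4:2:0 YUV with identical pixel data, but I420 stores U and V as two separate planes while NV12 interleaves them in one UV plane. The function name is mine, not a library API:

```python
def nv12_to_i420(nv12, width, height):
    """Repack an NV12 buffer (Y plane + interleaved UV plane)
    into I420 (Y plane + U plane + V plane). Same pixel values,
    different chroma layout; both are 4:2:0 subsampled."""
    y_size = width * height
    y = nv12[:y_size]
    uv = nv12[y_size:]   # interleaved: U0 V0 U1 V1 ...
    u = uv[0::2]         # every even byte is a U sample
    v = uv[1::2]         # every odd byte is a V sample
    return y + u + v

# Toy 4x2 frame: 8 luma bytes, then 2 interleaved U/V pairs
nv12 = bytes([10, 11, 12, 13, 14, 15, 16, 17,   # Y plane
              100, 200, 101, 201])               # U0 V0 U1 V1
i420 = nv12_to_i420(nv12, 4, 2)
print(list(i420[8:]))  # [100, 101, 200, 201] -> U plane, then V plane
```

The luma plane is untouched; only the chroma bytes are regrouped, which is why the two formats are interchangeable for the purposes of this pipeline.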

Hi DaneLLL thanks for that.

Unfortunately that doesn’t run:


There are no errors but the stream doesn’t play. I think the syntax is right, or have I missed something?

Hi,
Please confirm that 3264x1848 at 28fps is a valid sensor mode. It seems the mode is not supported.

For your reference, we can run the command on Jetpack 4.6.4/Jetson Nano+ Raspberry Pi camera v2:

$ DISPLAY=:0 gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3264,height=1848,framerate=28/1' ! nvvidconv left=0 right=3263 top=424 bottom=1423 ! 'video/x-raw,width=3264,height=1000' ! xvimagesink sync=0

Hi DaneLLL.

Please confirm if 3264x1848,28fps is a valid sensor mode. Seems like the mode is not supported
This is the command that gives me the greyscale image


Here is the list of supported modes:

(it's mode 1)
I am using Jetpack 4.5.1/Jetson Nano+Waveshare Raspberry Pi camera v2. If I take out the crop command it runs in colour.


Hi,
We would suggest upgrading to Jetpack 4.6.4. This is the latest version for Jetson Nano.

Hello.

Okay, will this action necessitate any other upgrades?

Please advise before I make the upgrade.

Hi,
If you use a custom board, please contact the vendor for support. The default image is for the Jetson Nano developer kit, and we can flash the system through SDKManager. A custom board may deviate in hardware design and need the board vendor to provide a customized system image.

We also suggest trying a Jetson Nano developer kit + Raspberry Pi camera v2 if you can get this reference setup.

Thank you I am now using this setup.

Do I follow the instructions for “To update to a new minor release” at:
https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3261/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/updating_jetson_and_host.html#
?
Thank you

Hi DaneLLL

I updated to 4.6.1 without using the SDK manager. If I update from 4.6.1 to 4.6.4 using the SDK manager, do I need to back up the contents of the Nano first?

Hi,
The system is re-flashed while using SDKManager. Please backup the content before re-flashing.

You may try the command on Jetpack 4.6.1 first:
GStreamer pipeline produces greyscale video stream (using NV12 and nvvidconv) - #11 by DaneLLL

Hi

On Jetpack 4.6.1 the command:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 tnr-mode=2 tnr-strength=1 ! 'video/x-raw(memory:NVMM),width=3264,height=1848,format=I420,framerate=28/1' ! nvvidconv left=0 right=3263 top=424 bottom=1423,height=1000,pixel-aspect-ratio=1/1 ! queue ! autovideosink -v

produces an "erroneous pipeline" warning.

Before I look at re-flashing the system, is there no way to update from 4.6.1 to 4.6.4 without re-flashing? I thought point releases could be updated via apt?

Hi,
Please try the commands on Jetpack 4.6.1:

$ rm ~/.cache/gstreamer-1.0/registry.aarch64.bin
$ DISPLAY=:0 gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3264,height=1848,framerate=28/1' ! nvvidconv left=0 right=3263 top=424 bottom=1423 ! 'video/x-raw,width=3264,height=1000' ! xvimagesink sync=0

We would expect the command to work on 4.6.1 (r32.7.1) if the sensor driver is good.

Hi there.

I get the following errors:
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
ERROR: Pipeline doesn’t want to pause.
ERROR: From element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1773): gst_xv_image_sink_open (): /GstPipeline:pipeline0/GstXvImageSink/xvimagesink0:
Could not open display (null)

Do I need to update my sensor driver?

Thank you