nvvideoconvert UYVY support on Xavier

Hello,

We are using CSI cameras that natively output images in the UYVY color format. The 'nvvideoconvert' plugin does not implement support for this color format. The DeepStream sample applications use the standard GStreamer 'videoconvert' plugin to perform conversions from UYVY. Unfortunately this results in high CPU utilization, since 'videoconvert' runs on the CPU. The 'nvvidconv' plugin does support hardware-accelerated colorspace conversion but does not appear to work with nvstreammux. When we attempt to use it we encounter the following error:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160, framerate=(fraction)30/1' ! nvvidconv ! queue ! m.sink_0 nvstreammux live-source=1 name=m batch-size=1 width=3840 height=2160 ! nvegltransform ! nveglglessink

ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:m: Input buffer number of surfaces (0) must be equal to mux->num_surfaces_per_frame (1)
Set nvstreammux property num-surfaces-per-frame appropriately

Is there a way to get hardware-accelerated UYVY colorspace conversion to work on the Xavier when the data is being routed through 'nvstreammux'? Will 'nvvideoconvert' be enhanced to support this format?

Thanks,
Jon


Hi,
We will evaluate improving performance for this use case. On DS 4.0, the videoconvert plugin is required.

Hi,
For our reference, please share which camera module you are using.

Hello,

We are using the e-con Systems e-CAM130_CUXVR camera solution.

Thanks,
Jon

Hi jbchapman1,

As DaneLLL suggested, on DS 4.0 the videoconvert plugin is required.

Please try the sample pipelines below to use DeepStream with the e-CAM130_CUXVR product.
You can modify them based on your use case.

For 1080p:

gst-launch-1.0 v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=1920,height=1080,framerate=60/1,format=(string)UYVY" ! videoconvert ! nvvideoconvert ! capsfilter caps="video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1,format=(string)NV12" ! nv.sink_01 nvstreammux name=nv width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 live-source=true ! queue ! nvvideoconvert ! nvinfer config-file-path=<deep_streamer_path>/deepstream_sdk_on_jetson/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt ! fpsdisplaysink video-sink=fakesink -v

For 4K:
gst-launch-1.0 v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=3840,height=2160,framerate=60/1,format=(string)UYVY" ! videoconvert ! nvvideoconvert ! capsfilter caps="video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1,format=(string)NV12" ! nv.sink_01 nvstreammux name=nv width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 live-source=true ! queue ! nvvideoconvert ! nvinfer config-file-path=<deep_streamer_path>/deepstream_sdk_on_jetson/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt ! fpsdisplaysink video-sink=fakesink -v

Please let me know if you are still facing any problems using DeepStream.
You may see a lower framerate than the sensor supports because of the videoconvert GStreamer element.
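To quantify that framerate penalty, one hedged sketch (my own assumption, reusing the 1080p caps from the example above) is to terminate the pipeline in fpsdisplaysink over a fakesink and filter its rolling fps reports from the verbose output:

```shell
# Hypothetical helper for measuring the cost of the software videoconvert
# step: build the test command as a string, then (on real hardware) run it
# and grep fpsdisplaysink's periodic "current:" fps messages.
# The device path and caps below are assumptions taken from the 1080p example.
CMD='gst-launch-1.0 v4l2src device=/dev/video0 \
  ! capsfilter caps="video/x-raw,width=1920,height=1080,framerate=60/1,format=(string)UYVY" \
  ! videoconvert ! nvvideoconvert \
  ! capsfilter caps="video/x-raw(memory:NVMM),format=(string)NV12" \
  ! fpsdisplaysink text-overlay=false video-sink=fakesink -v'

# On the target board you would run something like:
#   eval "$CMD" 2>&1 | grep -o "current: [0-9.]*"
echo "$CMD"
```

Swapping videoconvert in and out of the string makes the CPU-conversion overhead directly visible in the reported fps.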

Thanks,
Ritesh Kumar
e-con Systems India Pvt.Ltd
https://www.e-consystems.com/nvidia-jetson-camera.asp

I've found the most performant workaround is to use nvvidconv to convert back to non-NVMM memory before sending the data down the rest of the pipeline. A solution such as:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)4096, height=(int)2160, framerate=(fraction)28/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12, width=(int)4096, height=(int)2160, framerate=(fraction)28/1' ! nvvidconv ! 'video/x-raw, format=NV12, width=(int)4096, height=(int)2160, framerate=(fraction)28/1' ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12, width=(int)4096, height=(int)2160, framerate=(fraction)28/1' ! …

This allows the color conversion to be hardware accelerated and vastly outperforms videoconvert.
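For completeness, one possible way the elided tail of that pipeline could be filled in is sketched below. Everything after the final capsfilter is an assumption on my part: it simply reuses the nvstreammux/nvegltransform/nveglglessink tail from the original question, purely for illustration.

```shell
# Sketch of the workaround as a complete pipeline. The nvstreammux tail is
# HYPOTHETICAL -- the original post elides it with "..."; this version just
# borrows the mux + EGL display tail from the question at the top of the
# thread. Build the command as a string so it can be inspected before running.
PIPELINE='v4l2src device=/dev/video0 \
  ! "video/x-raw, format=(string)UYVY, width=(int)4096, height=(int)2160, framerate=(fraction)28/1" \
  ! nvvidconv ! "video/x-raw(memory:NVMM), format=NV12" \
  ! nvvidconv ! "video/x-raw, format=NV12" \
  ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=NV12" \
  ! m.sink_0 nvstreammux live-source=1 name=m batch-size=1 width=4096 height=2160 \
  ! nvegltransform ! nveglglessink'

# On the Xavier itself you would run:  eval "gst-launch-1.0 $PIPELINE"
echo "gst-launch-1.0 $PIPELINE"
```

The key point is the nvvidconv (NVMM, UYVY to NV12) then nvvidconv (back to system memory) then nvvideoconvert (back into NVMM) chain, which keeps the color conversion on the hardware converter while still handing nvstreammux buffers it accepts.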

Thanks,
Jon

Hi Jon,
Thanks for sharing the solution. It should work better than using the videoconvert plugin.

Greetings Jon.
I was trying the pipeline, as I am also working with an e-con camera that outputs UYVY, on the Jetson Nano. I was wondering if you encountered the following error when running the pipeline:

(gst-launch-1.0:8217): GLib-GObject-WARNING **: 14:08:14.118: cannot register existing type ‘GstInterpolationMethod’

(gst-launch-1.0:8217): GLib-GObject-CRITICAL **: 14:08:14.118: g_param_spec_enum: assertion ‘G_TYPE_IS_ENUM (enum_type)’ failed

(gst-launch-1.0:8217): GLib-GObject-CRITICAL **: 14:08:14.118: validate_pspec_to_install: assertion ‘G_IS_PARAM_SPEC (pspec)’ failed

I know that it happens because of nvvidconv and nvvideoconvert compatibility with DeepStream. Even with the error, though, the pipeline seems to run just fine, so I was wondering if it is safe to simply ignore it?

Best regards Bjørn.

Hi,
There is a sample for this use case:
https://devtalk.nvidia.com/default/topic/1066418/deepstream-sdk/use-yuy2-format-as-input/post/5404526/#5404526
FYR. Also, please try clearing the GStreamer cache.
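Clearing the cache usually means deleting the plugin registry files; a minimal sketch, assuming the default registry location under ~/.cache/gstreamer-1.0 (adjust the path if your XDG cache directory differs):

```shell
# Remove the GStreamer plugin registry cache. This commonly resolves
# "cannot register existing type" warnings left over from stale plugin
# scans. The registry is rebuilt automatically the next time any
# gst-launch-1.0 / gst-inspect-1.0 command runs, so deletion is safe.
rm -f "$HOME"/.cache/gstreamer-1.0/registry.*.bin
```

After removing the files, re-run the pipeline once; the first invocation will be slower while the registry is rebuilt.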