Using DeepStream plugins in GStreamer pipelines

Is it possible to use the NVIDIA plugins provided with DeepStream in regular GStreamer pipelines?

I have followed the installation instructions for DeepStream 2.0 on a system with a Tesla P4. When I run gst-inspect | grep nv I see both nvvidconv and nvdec_h264 (I think that’s what it’s called). However, when I swap these into our current GStreamer pipeline, I see output from both plugins, but the pipeline fails to change to the playing state.

We would like to quickly add hardware decoding to our existing application rather than redeveloping it around DeepStream. Any ideas?

Could you give your whole pipeline here?

Thanks
wayne zhu

Of course:

The full pipeline I have tried to use is

rtspsrc location={} ! rtph264depay ! h264parse ! nvdec_h264 ! nvvidconv ! video/x-raw,format=RGBA ! videocrop top={} bottom={} left={} right={} ! videoscale ! video/x-raw,width={},height={} ! appsink name=opencvsink

Where the {} are input parameters.

When I run it with GST_DEBUG="*:2" I get:

nvvidconv: line=300 ---- video/x-raw, width=(int)1280, height=(int)960, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)RGBA; video/x-raw, width=(int)[ 1, 32767 ], height=(int)[ 1, 32767 ], interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)[ 1/2147483647, 2147483647/1 ], framerate=(fraction)0/1, format=(string)RGBA
0:00:00.677649686 15862 0x7f788c00e190 WARN           basetransform gstbasetransform.c:1414:gst_base_transform_setcaps:<nvvidconv0> transform could not transform video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1280, height=(int)960, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)0/1 in anything we support
nvvidconv: line=300 ---- video/x-raw, width=(int)1280, height=(int)960, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)RGBA; video/x-raw, width=(int)[ 1, 32767 ], height=(int)[ 1, 32767 ], interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)[ 1/2147483647, 2147483647/1 ], framerate=(fraction)0/1, format=(string)RGBA
0:00:00.677918044 15862 0x7f788c00e190 WARN           basetransform gstbasetransform.c:1414:gst_base_transform_setcaps:<nvvidconv0> transform could not transform video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1280, height=(int)960, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)0/1 in anything we support
0:00:00.677956109 15862 0x7f788c00e190 WARN                GST_PADS gstpad.c:4092:gst_pad_peer_query:<nvcuvidh264dec0:src> could not send sticky events

Where the nvvidconv message repeats multiple times. I am checking to see if the pipeline reaches the playing state, but it never does.
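A minimal sketch of that check, using GStreamer's Python bindings (the URL and crop/scale numbers below are just placeholders), would look like this:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://10.1.10.179:9000/test1.sdp ! rtph264depay ! h264parse ! "
    "nvdec_h264 ! nvvidconv ! video/x-raw,format=RGBA ! "
    "videocrop top=0 bottom=0 left=0 right=0 ! videoscale ! "
    "video/x-raw,width=1280,height=720 ! appsink name=opencvsink"
)
# set_state returns FAILURE on a hard error; otherwise wait for the
# (possibly asynchronous) transition to finish and inspect the result.
if pipeline.set_state(Gst.State.PLAYING) == Gst.StateChangeReturn.FAILURE:
    print("pipeline refused to go to PLAYING")
else:
    ret, state, pending = pipeline.get_state(5 * Gst.SECOND)
    print("state change result:", ret, "current state:", state)
pipeline.set_state(Gst.State.NULL)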

Now I’ve run some more pipelines and it appears nvvidconv is the problem.

This pipeline:

rtspsrc location=rtsp://10.1.10.179:9000/test1.sdp ! rtph264depay ! h264parse ! nvdec_h264 ! appsink name=opencvsink

does get to the playing state and produces buffers. Our app wants BGR or RGBA images, however, and when I try

rtspsrc location=rtsp://10.1.10.179:9000/test1.sdp ! rtph264depay ! h264parse ! nvdec_h264 ! nvvidconv ! video/x-raw,format=RGBA ! appsink name=opencvsink

The output reads:
nvvidconv0: NOT SUPPROTED CONVERSION … Use videoconvert … EXITING…

When I try to use videoconvert in its place, I get:
ERROR GST_PIPELINE grammar.y:642:gst_parse_perform_link: could not link videoconvert0 to opencvsink

Try the following:
gst-launch-1.0 rtspsrc location=rtsp://10.1.10.179:9000/test1.sdp ! rtph264depay ! h264parse ! nvdec_h264 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvvidconv ! video/x-raw,format=RGBA ! videocrop top={} bottom={} left={} right={} ! videoscale ! video/x-raw,width={},height={} ! appsink name=opencvsink

Thanks
wayne zhu

I tried following the accepted answer with a little modification (to be used with OpenCV):

gst_str = ('rtspsrc location={} latency={} ! rtph264depay ! h264parse ! nvdec_h264 ! nvvidconv ! video/x-raw(memory:NVMM), format=RGBA ! nvvidconv ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw ! appsink drop=true sync=false').format(stream, 200)

and I got the following error:

[TensorRT] ERROR: cuda/reformat.cu (773) - Cuda Error in NCHWToNCHHW2: 33 (invalid resource handle)
[TensorRT] ERROR: cuda/reformat.cu (773) - Cuda Error in NCHWToNCHHW2: 33 (invalid resource handle)
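For context, the string is handed to OpenCV roughly like this (a minimal sketch; it assumes an OpenCV build with GStreamer support, and as far as I know the capture negotiates BGR on the appsink, which is what the trailing videoconvert ! video/x-raw allows):

import cv2

# gst_str is the pipeline string built above; CAP_GSTREAMER selects the
# GStreamer backend explicitly.
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("failed to open the GStreamer pipeline")
ok, frame = cap.read()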

I also tried this:

gst_str = ('rtspsrc location={} latency={} ! rtph264depay ! h264parse ! nvdec_h264 ! nvvidconv ! video/x-raw(memory:NVMM), format=RGBA ! nvvidconv ! video/x-raw,format=RGBA ! videoscale ! video/x-raw ! appsink drop=true sync=false').format(stream, 200)

but I got stuck here:

0:00:00.502411500 16597 0x7fe7200044f0 WARN           basetransform gstbasetransform.c:1355:gst_base_transform_setcaps:<nvvidconv1> transform could not transform video/x-raw(memory:NVMM), format=(string)RGBA, width=(int)1280, height=(int)720, framerate=(fraction)15/1 in anything we support

Any idea how to resolve this? I’m using a P100 for this task. Most of the examples I see that use nvvidconv are Tegra-based, so I’m not sure whether this even works on Tesla. I might have to use avdec_h264 instead of nvdec_h264 if this persists. Thanks!
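In case it helps, the software-decode fallback I have in mind is roughly this (a sketch only, untested, with the same placeholder URL and latency as above):

# avdec_h264 in place of nvdec_h264, with videoconvert producing BGR for OpenCV.
fallback_str = (
    "rtspsrc location={} latency={} ! rtph264depay ! h264parse ! "
    "avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true sync=false"
).format(stream, 200)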

Hi,
If you install DeepStream SDK 2.0/3.0 for Tesla, you should see the nvvidconv and nvdec_h264 plugins. The nvvidconv plugin has the same name on Jetson platforms and on desktop GPUs, but it runs on different hardware engines.

Hi, I can see both plugins, but when I try to use them together I run into the same problems I noted above. I’m not really sure what’s going on here. Is it possible for you to provide a working pipeline that uses rtspsrc, nvvidconv and nvdec_h264? Also, would you happen to know how to debug the first error (the invalid resource handle issue)? Is the videoconvert plugin the issue here?

Refer to this:
gst-launch-1.0 rtspsrc location=rtsp://172.31.29.17:3000/stream ! rtph264depay ! h264parse ! queue ! nvdec_h264 ! 'video/x-raw(memory:NVMM)' ! nvvidconv ! 'video/x-raw' ! videoconvert ! videoscale ! videorate ! 'video/x-raw,format=(string)NV21' ! fakesink dump=1

It works… thank you! By the way, the output seems to be in NV21 format. If I wanted RGBA/BGRx instead, how would I have to change this? Do I just add another videoconvert to the pipeline to convert to RGBA/BGRx? Thanks!

I see videoconvert supports "(string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, …". Can you try it?
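For example, something along these lines (a sketch only, not verified: the reference pipeline above with the final caps changed from NV21 to BGRx, written as a Python string so it can be tested with Gst.parse_launch; the variable name is arbitrary):

# Sketch: reference pipeline with BGRx output caps instead of NV21.
test_str = (
    "rtspsrc location=rtsp://172.31.29.17:3000/stream ! rtph264depay ! h264parse ! queue ! "
    "nvdec_h264 ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw ! "
    "videoconvert ! videoscale ! videorate ! video/x-raw,format=BGRx ! fakesink dump=1"
)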