Can I predict the output format of the decodebin module in advance?

I ran the same command on two different servers.
(server1: Tesla P4; server2: Tesla P4 + GeForce GTX 1080)

gst-launch-1.0 rtspsrc location="rtsp://" ! decodebin ! nvvidconv ! 'video/x-raw,format=NV12' ! appsink

As a result, we found that one server failed while the other operated normally.

I found that if both the input and output of nvvidconv are video/x-raw, I get an error.

Is that true?

If so, is there a way to know in advance whether decodebin will output GPU or CPU memory?

Or is there a way to force decodebin to output GPU memory?

server1_failed.txt (181 KB)

server2_ok.txt (105 KB)

The nvdecoder output is GPU memory.

We just released DeepStream 4.0. Enjoy.

I am using DeepStream 3.0.

I think the output of the decodebin plugin can be either GPU memory or CPU memory.

If you check with gst-inspect-1.0, you can see that its pad capability is "ANY".

When I compare the pipeline graph images generated on the two servers,

I can see that the decodebin output is shown as video/x-raw(memory:NVMM) on one and video/x-raw on the other.

(See server1_failed.png and server2_ok.png.)
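The two cases can also be told apart without the graph images: running gst-launch-1.0 with -v prints the negotiated caps, and checking that string for the NVMM memory feature distinguishes GPU from CPU buffers. A minimal shell sketch (the caps string below is a hypothetical example of such -v output):

```shell
# Hypothetical caps string, as it would appear in `gst-launch-1.0 -v` output
caps='video/x-raw(memory:NVMM), format=(string)NV12'

# NVMM caps mean the buffers live in GPU memory; plain video/x-raw is CPU memory
case "$caps" in
  *"memory:NVMM"*) echo "GPU (NVMM) memory" ;;
  *)               echo "CPU (system) memory" ;;
esac
```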

Searching around, I found that the nvvidconv plugin can fail if both its input and output formats are video/x-raw.

So I removed nvvidconv and found that the pipeline works.

Is it possible to know in advance whether decodebin's output will be GPU memory or CPU memory, so that I can adjust the pipeline accordingly?

The pipeline now runs, but I noticed that GStreamer periodically emits the following error message:

ERROR libav :0:: Invalid UE golomb code

This message seems to be related to the RTSP stream, and I would like to know exactly what causes it.
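If the errors are caused by RTP packet loss over UDP (a common cause of corrupted H.264 bitstreams, which is what an invalid Exp-Golomb code suggests), one thing worth trying is forcing TCP transport on rtspsrc. A sketch, with the URL left as a placeholder:

```shell
# Force RTSP/RTP over TCP to rule out UDP packet loss as the cause
# of the corrupted-bitstream ("Invalid UE golomb code") errors.
gst-launch-1.0 rtspsrc location="rtsp://" protocols=tcp ! decodebin ! appsink
```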

decodebin is not from NVIDIA.
You can use uridecodebin. It will call the NVIDIA decoder plugin.

You can check whether nvdec is working with "$ nvidia-smi dmon" and watching the %dec column.

But the output is not NVMM. You need an extra nvvidconv in the pipeline to get NVMM.
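For example, a sketch of such a pipeline (the URL is a placeholder); the capsfilter after nvvidconv requests NVMM, i.e. GPU, buffers:

```shell
# nvvidconv converts the decoder's system-memory output into NVMM (GPU) buffers
gst-launch-1.0 uridecodebin uri="rtsp://" ! nvvidconv \
  ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! fakesink
```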

In DS 1.5 TX2, uridecodebin can output NVMM directly.

But the output is not NVMM
What's your pipeline and platform?

Thanks. I found the answer here

Both servers run Ubuntu 16.04.

The pipelines we have tested are roughly:

Pipeline1) gst-launch-1.0 rtspsrc location="rtsp://~" ! decodebin ! nvvidconv ! 'video/x-raw, format=(string)NV12' ! appsink

Pipeline2) gst-launch-1.0 rtspsrc location="rtsp://~" ! decodebin ! appsink

One server works with Pipeline1.

On the other server, Pipeline1 does not work, so we use Pipeline2 instead.

One server is equipped with a Tesla P4 graphics card; the other has both a Tesla P4 and a GeForce 1080 Ti.

Does this affect whether decodebin outputs GPU or CPU memory?

I saw the nvv4l2decoder plugin in the link. Is it a plugin from DeepStream 4.0?

On my server, gst-inspect-1.0 cannot find it.

decodebin is a software decoder. Please use nvv4l2decoder from DeepStream 4.0.
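For reference, a pipeline along these lines exercises nvv4l2decoder (a sketch, assuming DeepStream 4.0 is installed and the RTSP source carries H.264; the depay/parse elements would change for other codecs, and the URL is a placeholder):

```shell
# Explicit hardware-decode pipeline: depayload RTP, parse H.264, decode on nvdec
gst-launch-1.0 rtspsrc location="rtsp://" ! rtph264depay ! h264parse \
  ! nvv4l2decoder ! fakesink
```

While it runs, "$ nvidia-smi dmon" should show a non-zero %dec if the hardware decoder is in use.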