Issues with hardware decoding in gstreamer 1.x.

In gstreamer 1.x I get a broken image with the OMX video decoders (I’ve tested H.264 and MPEG-4). I don’t get these issues with the libav/ffmpeg decoders.

I’ll show some commands I’ve run and a screenshot:

gst-launch-1.0 filesrc location=h264.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink

gst-launch-1.0 filesrc location=h264.mp4 ! qtdemux ! h264parse ! omxh264dec ! videoconvert ! ximagesink

gst-launch-1.0 filesrc location=mpeg4.mp4 ! qtdemux ! mpeg4videoparse ! avdec_mpeg4 ! videoconvert ! ximagesink

gst-launch-1.0 filesrc location=mpeg4.mp4 ! qtdemux ! mpeg4videoparse ! omxmpeg4videodec ! videoconvert ! ximagesink

This is an issue in either the decoders or videoconvert; I’m guessing it’s either a misinterpretation of the data coming from the decoder, or bad output from the decoder itself (the OMX decoders in this example were producing progressive-scan NV12).

For reference, the same thing was happening both with the gstreamer 1.x packages from the repos and with a fresh build of the latest releases. I only have the OMX plugins from NVIDIA, as I couldn’t get them to compile (the package on the support site doesn’t even have the configure script, and one generated with autoconf fails to run).

Hi Nezumi-sama,

We have tried to play the exact pipelines you mentioned with the hardware-accelerated decoders and don’t see the issue.
Could you please let us know which release you are using?
Could you try the latest release, R21.2, and let us know your observations?

Regards,
Sanket

I have a similar issue with omxh264dec. It works fine when run with gst-launch, but the colors are not correct when I run it from my C code. I am new to this area, but when I looked at the output, some color channels appeared to be missing.

gst-launch-1.0 filesrc location=samplevideo.avi ! qtdemux ! h264parse ! omxh264dec ! videoconvert ! ximagesink

http://i.imgur.com/MdB2UDU.png

This is part of my code. I’m using an appsink to convert frames into an OpenCV Mat object.

gchar *descr = g_strdup(
    "filesrc  location=/home/v5user/samplevideo.avi ! "
    "qtdemux ! "
    "h264parse ! "
    "omxh264dec ! "
    "videoconvert ! "
    "appsink name=sink sync=true"
  );

http://i.imgur.com/57GqUIp.png
http://i.imgur.com/JrzkVCF.jpg

gchar *descr = g_strdup(
    "filesrc  location=/home/v5user/samplevideo.avi ! "
    "qtdemux ! "
    "h264parse ! "
    "avdec_h264 ! "
    "videoconvert ! "
    "appsink name=sink sync=true"
  );

http://i.imgur.com/gwIyFhF.png
http://i.imgur.com/iJnbUit.png

I’m guessing omxh264dec may not be compatible with appsink. It doesn’t make any difference whether I use videoconvert or not. Also, I’m using R21.2.

In the gst-launch pipeline, videoconvert converts the video to something ximagesink understands. In your code, I think the appsink accepts everything, so videoconvert doesn’t actually do anything.

Try adding the following between videoconvert and appsink:

capsfilter caps="video/x-raw,format=RGB"

Videoconvert does its work purely on the CPU. If you get that working, try replacing it with the Tegra-specific converter “nvvidconv”. It should be more efficient.

EDIT: if your app can handle YUV, then try a YUV format in the caps filter instead of RGB.
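Applied to the pipeline string from the earlier post, the suggestion might look like this (a sketch; the RGB format is an assumption about what the app expects, and in the real code the string would go through g_strdup()/gst_parse_launch() as before):

```c
#include <string.h>

/* Sketch: the pipeline description amended with an inline caps filter
   after videoconvert (GStreamer 1.x caps syntax). gst_parse_launch()
   treats bare caps between elements as a capsfilter. */
static const char *descr =
    "filesrc location=/home/v5user/samplevideo.avi ! "
    "qtdemux ! "
    "h264parse ! "
    "omxh264dec ! "
    "videoconvert ! "
    "video/x-raw,format=RGB ! "  /* force an RGB conversion (assumption) */
    "appsink name=sink sync=true";
```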

Thanks Kulve. I was able to fix it with “nvvidconv” plus the capsfilter. BTW, how can I know which elements utilize the GPU?

You really can’t, in general. What you do know is that if your pipeline works with the plugins whose names start with “nv”, they are using some HW block on Tegra. nvvidconv does a conversion, so it’s always extra work, but in some cases you simply need it.

Most of the video work (e.g. decoding/encoding) is actually done not by the GPU but by dedicated video hardware. Displaying the frames is then a separate step, which might do e.g. scaling using the GPU or the display hardware.