This is either an issue in the decoders or in videoconvert; my guess is either a bad interpretation of the data coming from the decoder, or bad encoding (the omx decoders in this example were outputting progressive-scan NV12).
For reference, the same thing was happening with the gstreamer1.x packages from the repos and with a fresh build of the latest releases. The only omx plugins I have are the prebuilt ones from nvidia, as I couldn’t get them to compile myself (the package on the support site doesn’t even have a configure script, and one generated by autoconf fails to run).
We have tried playing the exact pipelines you mentioned with the hardware-accelerated decoders and don’t see the issue.
Can you please let us know which release you are using?
Can you try the latest release, R21.2, and let us know your observations?
I have a similar issue with omxh264dec. It works fine when run with gst-launch, but the colors are not correct when I run it from my C code. I am new to this area, but when I looked into it, some color channels were missing.
In the gst-launch pipeline, videoconvert converts the video to something ximagesink understands. With your code I think the appsink accepts everything, so videoconvert doesn’t actually do anything.
Try adding the following between videoconvert and appsink:
capsfilter caps=video/x-raw-rgb
(Note: that is GStreamer 0.10 caps syntax; with the gstreamer1.x packages mentioned above, the equivalent is capsfilter caps=video/x-raw,format=RGB.)
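For what it’s worth, here is a minimal sketch of the same idea using the GStreamer 1.x appsink API; instead of a separate capsfilter element you can set the caps directly on the appsink, which forces the same negotiation. The mp4 file name and the demux chain are assumptions, so adapt them to your stream:

  /* caps_demo.c -- force appsink caps so videoconvert actually runs.
   * Build: gcc caps_demo.c $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0)
   */
  #include <gst/gst.h>
  #include <gst/app/gstappsink.h>

  int main (int argc, char *argv[])
  {
    gst_init (&argc, &argv);

    /* "test.mp4" is a placeholder source; adjust to your stream. */
    GstElement *pipeline = gst_parse_launch (
        "filesrc location=test.mp4 ! qtdemux ! h264parse ! omxh264dec "
        "! videoconvert ! appsink name=sink", NULL);
    GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

    /* Without this, appsink accepts any raw format and videoconvert is
     * a no-op; with it, negotiation forces a real NV12 -> RGB conversion. */
    GstCaps *caps = gst_caps_from_string ("video/x-raw,format=RGB");
    gst_app_sink_set_caps (GST_APP_SINK (sink), caps);
    gst_caps_unref (caps);

    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    /* Pull a few frames; each sample's buffer is now packed RGB. */
    for (int i = 0; i < 10; i++) {
      GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (sink));
      if (!sample)
        break;
      /* ... process the RGB frame here ... */
      gst_sample_unref (sample);
    }

    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (sink);
    gst_object_unref (pipeline);
    return 0;
  }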
Videoconvert does its work purely on the CPU. If you get that working, try replacing it with the Tegra-specific converter “nvvidconv”; it should be more efficient.
EDIT: if your app can handle YUV, then try x-raw-yuv instead of x-raw-rgb (again, on 1.x that becomes video/x-raw with e.g. format=I420).
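Tying the EDIT together with the nvvidconv suggestion, a rough sketch of what would change in the C code above (I420 here is an assumption; run gst-inspect-1.0 nvvidconv to see which output formats your build actually advertises):

  /* CPU path: ... ! omxh264dec ! videoconvert ! appsink name=sink */
  /* HW path:  ... ! omxh264dec ! nvvidconv   ! appsink name=sink */
  GstCaps *caps = gst_caps_from_string ("video/x-raw,format=I420");
  gst_app_sink_set_caps (GST_APP_SINK (sink), caps);
  gst_caps_unref (caps);

The caps set on the appsink stay the same either way; only the converter element in the pipeline string changes.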
You really can’t, directly. What you can know is that if you get your pipeline working with the plugins whose names start with “nv”, then they are using some HW block on Tegra. Nvvidconv does a conversion, so it’s always extra work, but in some cases you just need it.
Most of the video work (e.g. decoding/encoding) is actually done not by the GPU but by dedicated video hardware. Showing the frames is then a separate step, and that might do e.g. scaling using the GPU or the display hardware.
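If you want to check which of those hardware-backed elements exist on your board, listing the installed elements is the quickest way (plugin names vary between releases, so treat the grep pattern as a starting point):

  gst-inspect-1.0 | grep -i -e omx -e nvvidconv

Anything that shows up there and actually negotiates in your pipeline is going through one of the Tegra HW blocks rather than the CPU.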