NvMMDecNVMEDIACreateParser failed

We have a custom pipeline running on a Jetson AGX Xavier with JetPack 4.4 to receive an encoded video stream, and sometimes it fails at the decoding stage with these messages:

NvMMLiteOpen : Block : BlockType = 279
Opening channel /dev/nvhost-nvdec1 failed
NVMEDIA: NvMMDecNVMEDIACreateParser: 2786: - Could not get NVDEC Channel handle for inst 1
NVMEDIA: NvMMLiteTVMRDecBlockOpen: 3302: NvMMDecNVMEDIACreateParser failed!
free(): double free detected in tcache 2

The pipeline can be represented, in simplified form, as:

gst-launch-1.0 webrtcbin bundle-policy=max-bundle ! rtph265depay ! h265parse ! queue max-size-buffers=30 max-size-bytes=0 leaky=downstream ! nvv4l2decoder drop-frame-interval=0 num-extra-surfaces=1 !  fakesink

The webrtcbin element's behavior looks pretty messy: it constantly drops buffers for some reason and doesn't expose many configurable parameters out of the box. The blue spikes in the graph are from the webrtcbin queue.

I'm just wondering: could this be the reason for the decoding problems, especially memory allocation issues like the one shown above?

And what should I check to try to avoid this issue?

Hi,
Generally we run RTSP or UDP streaming and don't have much experience with webrtcbin. Could you please share the steps so that we can set it up and run it to reproduce the issue and check?

Thanks for your reply. Actually, it's not so easy to run this pipeline without our custom workaround. I'm just interested in finding the proper way to catch this issue, so for now I'm going in two directions:

  1. Add more queues after webrtcbin and maybe change some parameters of the queue before nvv4l2decoder (a rough sketch of this is shown right after this list)
  2. Build the open-source part of the nvv4l2decoder plugin and try to catch the issue there
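
For the first direction, this is roughly what I have in mind, written in the same simplified gst-launch form as above (the extra queue placement and the limit values are only placeholders I intend to experiment with, not settings I have validated):

gst-launch-1.0 webrtcbin bundle-policy=max-bundle ! rtph265depay ! queue ! h265parse ! queue max-size-buffers=60 max-size-bytes=0 max-size-time=0 leaky=downstream ! nvv4l2decoder drop-frame-interval=0 num-extra-surfaces=1 ! fakesink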

I would really appreciate it if you could help me with some advice or feedback, just to validate my approach to resolving this issue.

Hi,
You may try to add a queue element and set these properties:

  min-threshold-buffers: Min. number of buffers in the queue to allow reading (0=disable)
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  min-threshold-bytes : Min. amount of data in the queue to allow reading (bytes, 0=disable)
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  min-threshold-time  : Min. amount of data in the queue to allow reading (in ns, 0=disable)
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0

See if it helps to buffer more stream data before sending it to nvv4l2decoder.
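
For example, applied to the queue in front of the decoder it could look like this (the threshold value is only an illustration; please tune it for your stream):

... ! h265parse ! queue max-size-buffers=30 max-size-bytes=0 min-threshold-buffers=15 leaky=downstream ! nvv4l2decoder drop-frame-interval=0 num-extra-surfaces=1 ! fakesink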

Thanks for your reply. I'm checking these options, trying to make some progress on the issue.

Just one more thing I found very recently while trying to catch this issue; maybe it is the origin of the problem. Some prints get duplicated while the pipelines are working in our app.

[attachment "DecoderOpening": duplicated decoder opening prints]

Could it be some issue related to a race for resources? And if two different threads try to set up the decoder, could that be an unhandled situation in the NVIDIA Multimedia API?
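
One thing I plan to try, just to confirm whether the decoder open path really runs more than once per pipeline, is raising the GStreamer debug level for the decoder element (I'm assuming here that nvv4l2decoder still uses the standard v4l2videodec debug category, and the grep pattern is only an example):

GST_DEBUG=3,v4l2videodec:6 ./our_app 2>&1 | grep -iE "NvMMLiteOpen|nvdec"   # ./our_app is just a placeholder for our application binary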

Hi,
For every decoding task, these prints should appear only once. It looks similar to this issue:
Jetson/L4T/r32.6.x patches - eLinux.org
[gstreamer]Memory leak in UDP streaming

Please apply the patch, rebuild/replace libgstnvvideo4linux2.so and try again.

Thanks for this answer. The pipeline behavior became cleaner, but the issue is still there. Probably I need to go another way and check your suggestion about patching the library.

Hi, can you share the method for "rebuild/replace libgstnvvideo4linux2.so", please?

Hi,
Please follow the README in the gst-v4l2 package. You can download the source code from
https://developer.nvidia.com/embedded/linux-tegra
under L4T Driver Package (BSP) Sources.
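
For reference, a rough sketch of the typical steps is below; the exact archive names, directory layout, and install path differ between L4T releases, so treat this as an illustration and follow the README shipped with the sources for your release:

  # illustrative only; archive names/paths vary by L4T release, see the README
  tar xjf public_sources.tbz2
  cd Linux_for_Tegra/source/public
  tar xjf gst-nvvideo4linux2_src.tbz2
  cd gst-v4l2
  # install the GStreamer dev packages listed in the README, apply the patch
  # from the linked topic, then build
  make
  # back up the stock plugin and install the rebuilt one
  cp /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so ~/libgstnvvideo4linux2.so.bak
  sudo cp libgstnvvideo4linux2.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/
  rm -rf ~/.cache/gstreamer-1.0/   # force the GStreamer plugin registry to rebuild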

I tried to apply the patch but didn't get much progress from it. However, if I try to run a similar pipeline from a file, there are some issues with nvv4l2decoder.

The pipeline I used looks like this:

gst-launch-1.0 -v multifilesrc location=test.mp4 loop=true ! h265parse ! queue max-size-buffers=30 max-size-bytes=0 leaky=downstream ! nvv4l2decoder drop-frame-interval=0 num-extra-surfaces=1 ! nvvidconv ! "video/x-raw(memory:NVMM),width=(int)1920,height=(int)1080" ! nvvidconv ! "video/x-raw(memory:NVMM),format=(string)RGBA" ! queue max-size-buffers=30 leaky=downstream ! fakesink

And the error message I got is the following:

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstH265Parse:h265parse0.GstPad:src: caps = video/x-h265, width=(int)1920, height=(int)1080, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)main, tier=(string)main, level=(string)4
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-h265, width=(int)1920, height=(int)1080, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)main, tier=(string)main, level=(string)4
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-h265, width=(int)1920, height=(int)1080, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)main, tier=(string)main, level=(string)4
NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 
/GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0.GstPad:sink: caps = video/x-h265, width=(int)1920, height=(int)1080, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)main, tier=(string)main, level=(string)4
ERROR: from element /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0: Failed to process frame.
Additional debug info:
gstv4l2videodec.c(1609): gst_v4l2_video_dec_handle_frame (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:
Maybe be due to not enough memory or failing driver
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
  1. The issue disappears if the queue before nvv4l2decoder is removed, but for us this is not a solution, because we need queuing in our decoding pipeline (a non-leaky variant I plan to test is sketched right after this list).
  2. If I run this command in a loop, like while true; do gst-launch-1.0 ... ; done, I eventually get the file playing, but with a bunch of artifacts.
  3. If I try to use the patch above, nothing happens; the pipeline just freezes. Probably because I use a JetPack version older than the one mentioned in the patch thread.
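
Referring to point 1 above, one simplified variant I plan to test keeps the queue but drops the leaky/limit settings, to see whether dropped buffers are involved (the queue settings here are only for this experiment, not production values):

gst-launch-1.0 -v multifilesrc location=test.mp4 loop=true ! h265parse ! queue max-size-buffers=30 ! nvv4l2decoder drop-frame-interval=0 num-extra-surfaces=1 ! nvvidconv ! "video/x-raw(memory:NVMM),format=(string)RGBA" ! fakesink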

Any suggestions for how this issue can be solved?

Hi,
The gst-v4l2 package is public for each r32 release. You may download the package that fits your release.
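
If it helps, the installed L4T release can be checked with (assuming a standard JetPack/L4T installation):

head -n 1 /etc/nv_tegra_release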

Since it works without the queue plugin, the properties you set on the queue may cause certain data to be dropped, so the data sent to nvv4l2decoder may be incomplete. If this is your use case, we suggest you investigate why the stream data is not complete. The hardware decoder should work fine if the stream data is valid.
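
As one way to check this (just an illustration using the standard queue element's debug output; the exact log text may differ between GStreamer versions), you can raise the debug level of the queue and look for messages about leaked buffers:

GST_DEBUG=queue_dataflow:5 gst-launch-1.0 <your pipeline> 2>&1 | grep -i leak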
