Nvv4l2decoder won't decode when using tee in pipeline

Hey there…
I’m trying to create a pipeline which takes an RTSP stream and:

  1. Passes the H264 stream to an appsink, which then feeds an appsrc in an RTSP server (not really relevant to this question).
  2. Decodes the frames and shows them on screen (in an app); a rough Python sketch of this layout is below.
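
Here is that sketch (simplified and untested; the appsink callback and the display sink are only placeholders for what the real app does):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# One tee with two branches:
#   branch 1: encoded H264 to an appsink (would feed the RTSP server's appsrc)
#   branch 2: nvv4l2decoder, then display (autovideosink here is just a placeholder)
pipeline = Gst.parse_launch(
    'rtspsrc location=rtsp://127.0.0.1:8510/video latency=0 ! rtph264depay ! tee name=t '
    't. ! queue ! appsink name=h264_sink emit-signals=true sync=false '
    't. ! queue ! nvv4l2decoder disable-dpb=true ! nvvidconv ! autovideosink')

def on_new_sample(sink):
    # Pull the encoded buffer; the real app would push it into the RTSP server's appsrc.
    sample = sink.emit('pull-sample')
    print('got H264 buffer of size', sample.get_buffer().get_size())
    return Gst.FlowReturn.OK

pipeline.get_by_name('h264_sink').connect('new-sample', on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()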

For simplicity, I removed the appsinks and replaced them with fakesinks. To see the output, I used identity dump=true.

The final command I used is:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8510/video latency=0 ! rtph264depay ! tee name=tee ! queue ! fakesink tee. ! queue ! nvv4l2decoder disable-dpb=true ! identity dump=true ! fakesink
If I replace the decoder with avdec_h264, everything works fine.
Also, if I remove the ! queue ! fakesink on the first branch, it also works fine (tee with only one output).

How can I solve this issue?

Thank you very much.
Omer

Not sure about your case, but it may be an H264 profile issue.

You may try this Python RTSP server:

import gi
gi.require_version('Gst','1.0')
gi.require_version('GstVideo','1.0')
gi.require_version('GstRtspServer','1.0')
from gi.repository import GObject, Gst, GstVideo, GstRtspServer

Gst.init(None)


mainloop = GObject.MainLoop()
server = GstRtspServer.RTSPServer()
mounts = server.get_mount_points()
factory = GstRtspServer.RTSPMediaFactory()

# Using NVENC - baseline H264 profile
#factory.set_launch('( videotestsrc is-live=1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 idrinterval=30 insert-vui=1 ! rtph264pay name=pay0 )')

# Using legacy omxh264enc - baseline H264 profile
#factory.set_launch('( videotestsrc is-live=1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! nvvidconv ! video/x-raw(memory:NVMM),format=I420 ! omxh264enc insert-sps-pps=1 idrinterval=30 insert-vui=1 ! h264parse ! rtph264pay name=pay0 )')

# Using x264enc with I420 or NV12 - baseline H264 profile
#factory.set_launch('( videotestsrc is-live=1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=I420 ! x264enc key-int-max=30 tune=zerolatency ! video/x-h264,stream-format=byte-stream,profile=main ! h264parse config-interval=1 ! rtph264pay name=pay0 )')

# Using x264enc with Y42B - implies main H264 profile
factory.set_launch('( videotestsrc is-live=1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! x264enc key-int-max=30 tune=zerolatency ! video/x-h264,stream-format=byte-stream,profile=main ! h264parse config-interval=1 ! rtph264pay name=pay0 )')

mounts.add_factory("/test", factory)
server.attach(None)

print ("stream ready at rtsp://127.0.0.1:8554/test")
mainloop.run()

The first 3 pipelines would encode into H264 baseline profile, so you can decode with:

gst-launch-1.0 -v rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! rtph264depay ! tee name=tee ! queue ! fakesink  tee. ! queue ! h264parse ! nvv4l2decoder disable-dpb=true ! fakesink dump=1

For the 4th case, it would encode into H264 main profile, so you would decode with:

gst-launch-1.0 -v rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! rtph264depay ! video/x-h264,profile=main ! tee name=tee ! queue ! fakesink   tee. ! queue ! h264parse ! nvv4l2decoder disable-dpb=true ! fakesink dump=1
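
If you are not sure which profile your actual source sends, one easy check (just a suggestion) is to read the profile field in the caps that -v prints after h264parse, or to point gst-discoverer-1.0 at the URI:

gst-discoverer-1.0 -v rtsp://127.0.0.1:8554/test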

Hello,

Thanks for your reply.

I’ll try your proposed solution on Sunday, as soon as I’m in the office again.
I don’t understand, however, why the tee element (or the queues) has anything to do with it.
As I said, everything works fine if I remove the first queue and fakesink. So how is the profile related?

I would expect that if it were a profile issue, it wouldn’t work either with or without the tee.

Thanks again.

Hi again.

So I was able to reproduce this issue even with the profile set in the caps.
I did, however, see a correlation between how easily it reproduces and the GOP size.

When I used this pipeline:

videotestsrc is-live=true ! video/x-raw,framerate=30/1 ! videoconvert ! x264enc tune=zerolatency key-int-max=25 ! rtph264pay name=pay0

The issue wasn’t reproduced.

With this pipeline:

videotestsrc is-live=true ! video/x-raw,framerate=30/1 ! videoconvert ! x264enc tune=zerolatency key-int-max=100 ! rtph264pay name=pay0

I was able to reproduce this issue almost every time.
Again, with the client pipeline:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8510/video latency=0 ! rtph264depay ! tee name=tee ! queue ! fakesink tee. ! queue ! nvv4l2decoder disable-dpb=true ! fakesink dump=true

To me it seems that when the decoder receives too many P-frames before getting an I-frame, it just deadlocks.
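
One thing that may be worth ruling out (just a guess on my side) is a queue filling up and blocking the tee before an I-frame ever reaches the decoder; setting the queue limits to 0 makes them unbounded, e.g.:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8510/video latency=0 ! rtph264depay ! tee name=tee ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! fakesink tee. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! nvv4l2decoder disable-dpb=true ! fakesink dump=true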

Not sure, but after x264enc, I’d add h264parse config-interval=1.

Same on the receiver side: does adding h264parse after rtph264depay change anything?

What about using nvv4l2h264enc? It should be better since it uses dedicated HW.
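
For example, the receiver I have in mind would look roughly like this (just a sketch, untested against your source):

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8510/video latency=0 ! rtph264depay ! h264parse ! tee name=tee ! queue ! fakesink tee. ! queue ! nvv4l2decoder disable-dpb=true ! fakesink dump=true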

Unfortunately I don’t have control over the server, and adding h264parse to the receiver didn’t help.

Edit: Actually, I didn’t check with config-interval=1. I’ll try it and report back.

Edit2: Yep… It doesn’t help…

I think I’ve been able to reproduce your issue. You could try specifying caps after rtph264depay so that both sub-pipelines use the same format and buffers:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8510/video latency=0 ! rtph264depay ! video/x-h264,stream-format=avc ! tee name=tee ! queue ! fakesink tee. ! queue ! nvv4l2decoder disable-dpb=true ! fakesink dump=true

Note that avc stream-format is fine over localhost with zero latency in your x264enc reproduction, but depending on your encoder/payloader the stream may instead be in byte-stream format, and you may need a different latency depending on your network and the UDP stacks on both ends.
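
For example, if your real source ends up negotiating byte-stream, the equivalent test would look roughly like this (the latency value is only a placeholder that you would tune for your network):

gst-launch-1.0 -v rtspsrc location=rtsp://127.0.0.1:8510/video latency=200 ! rtph264depay ! video/x-h264,stream-format=byte-stream ! tee name=tee ! queue ! fakesink tee. ! queue ! h264parse ! nvv4l2decoder disable-dpb=true ! fakesink dump=true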
