And see if there is any improvement. Furthermore, you may construct the pipeline manually so that enable-max-performance can be enabled on the nvv4l2decoder plugin.
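Something like this might work, reusing the address, port and caps from your pipelines (just a sketch, not tested here):
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! xvimagesink sync=0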
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! avdec_h264 ! xvimagesink sync=0
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: The stream is in the wrong format.
Additional debug info:
gstrtph264depay.c(1270): gst_rtp_h264_depay_process (): /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0:
NAL unit type 26 not supported yet
Execution ended after 0:00:10.619273746
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
The second pipeline:
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink sync=0
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Stream format not found, dropping the frame
Stream format not found, dropping the frame
Stream format not found, dropping the frame
Stream format not found, dropping the frame
Stream format not found, dropping the frame
Stream format not found, dropping the frame
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:01.198406354
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
The third pipeline:
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 caps='application/x-rtp,encoding-name=H264,payload=96' ! rtpmp2tdepay ! tsdemux name=demux ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink
WARNING: erroneous pipeline: could not link udpsrc0 to rtpmp2tdepay0
Maybe for 4K you would have to increase the maximum kernel socket buffer size. You may try the buffer-size property of udpsrc, but the limitation may also be on the sender side (udpsink also has a buffer-size property if you are using GStreamer on the sender).
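As an illustration (the 25 MB value is only an example, adjust it to your case), raising the kernel limit and then requesting a bigger socket buffer from udpsrc could look like:
sudo sysctl -w net.core.rmem_max=25000000
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 buffer-size=25000000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink sync=0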
Probably because I imagined that you were receiving RTP/MP2T over UDP, but with the last information you’ve provided it seems that you are receiving raw MP2 TS over UDP. Note that UDP has no flow control and that packets may be lost. Furthermore, multicast may not be the best solution over WiFi, if that is your case.
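For reference, if it really is plain 188-byte MPEG-TS over UDP (an assumption on my part), a receiver pipeline without any RTP depayloading might look like:
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 caps='video/mpegts,systemstream=true,packetsize=188' ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink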
If you can change the sender source, I’d advise switching either to raw TS streaming over TCP (it would be better to use matroskamux with streamable=true on the sender side and matroskademux on the receiver side), or, if staying with UDP, to RTP/MP2T.
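A rough sketch of both options (videotestsrc, nvv4l2h264enc and the host/port values are only placeholders for your actual source and network setup):
Sender, Matroska over TCP:
gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux streamable=true ! tcpserversink host=0.0.0.0 port=5000
Receiver, Matroska over TCP:
gst-launch-1.0 tcpclientsrc host=<sender-ip> port=5000 ! matroskademux ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink
Sender, RTP/MP2T over UDP:
gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! rtpmp2tpay ! udpsink host=239.192.0.20 port=37004
Receiver, RTP/MP2T over UDP:
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 caps='application/x-rtp,media=video,clock-rate=90000,encoding-name=MP2T,payload=33' ! rtpmp2tdepay ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink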
Not sure if it might help, but you may also try adding tsparse after tsdemux.
If I change the sender source’s packetisation method to TS (instead of TTS), then your pipelines work well.
Unfortunately I need to display the TTS-packetised stream only.
The difference between TS and TTS is the added timestamp: each packet carries roughly 4 additional bytes that require parsing, and it looks like this format is not supported in GStreamer :(
I may not be able to help further; I have no experience with MP2 TTS. Most of the information about it seems to be aimed at Asia, and it might be patented, so it is not readily available in open-source software.
My last suggestion would be trying to set the rtpmp2tdepay property skip-first-bytes to 4 on the receiver, but I suppose it would discard the TS timestamp. It would be harmless to try, though.
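That would just mean setting the property on the depayloader, something like this (assuming RTP/MP2T caps as above; not tested here):
gst-launch-1.0 udpsrc address=239.192.0.20 port=37004 caps='application/x-rtp,media=video,clock-rate=90000,encoding-name=MP2T,payload=33' ! rtpmp2tdepay skip-first-bytes=4 ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink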
Thank you so much indeed for your suggestions. To tell the truth, they didn’t work, but that was a GStreamer problem or deficiency, because the solution came from this patch: