My pipelines are as follows:
gst-launch-1.0 -v v4l2src device="/dev/video2" ! queue ! nvvidconv ! nvv4l2h264enc ! h264parse ! queue ! rtph264pay ! udpsink port=5000 host=192.168.12.187
gst-launch-1.0 -v udpsrc port=5000 caps='application/x-rtp, media=(string)video, encoding-name=(string)H264, framerate=(fraction)0/1' ! queue ! rtph264depay ! h264parse ! queue ! avdec_h264 ! queue ! videoscale ! videoconvert ! ximagesink sync=false async=false -e
The source portion is run on the nano.
I have a very poor quality stream (very fuzzy and pixelated when there is any motion).
I run tegrastats and see that GR3D_FREQ is consistently 0%. Why is this the case when I am using the nvidia plugins to convert and encode the h264 stream? How do I fix this?
The encoding is done on a dedicated hardware engine, NVENC, not on the GPU, so the GR3D_FREQ load is near 0% in tegrastats. Please check these threads, and try the commands to see if there is improvement:
Jetson 4k Encoding -> Decoding Pipeline and latency - #11 by DaneLLL
Gstreamer TCPserversink 2-3 seconds latency - #5 by DaneLLL
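To confirm the NVENC engine is actually active while streaming, one option is to read its clock from debugfs during encoding. This is a sketch under two assumptions: that debugfs is mounted, and that the clock is exposed at this path on the Nano (the path can differ across Jetson modules and L4T releases; some tegrastats versions also print an NVENC frequency entry while the encoder is running).

```shell
# While the encode pipeline is running, read the NVENC clock rate.
# A non-zero value suggests the hardware encoder is clocked up.
# Path is an assumption for the Nano; it may differ on other Jetson modules.
sudo cat /sys/kernel/debug/clk/nvenc/clk_rate
```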
You may also try H265 encoding.
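As a sketch of what an H265 sender could look like with the same camera and receiver host (nvv4l2h265enc, h265parse, and rtph265pay replacing their H264 counterparts; the receiving side would need a matching rtph265depay and H265 decoder):

```shell
# Illustrative H265 variant of the original sender pipeline.
gst-launch-1.0 -v v4l2src device="/dev/video2" ! queue ! nvvidconv ! \
  'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h265enc maxperf-enable=1 insert-sps-pps=1 ! \
  h265parse ! queue ! rtph265pay ! udpsink port=5000 host=192.168.12.187
```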
So I have a stream that seems to have zero latency and no blockiness during periods of quick and unpredictable motion.
However, I achieved it by making every frame an IDR frame which doesn’t seem ideal.
gst-launch-1.0 -v v4l2src device="/dev/video2" ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12,width=1280,height=720' ! nvv4l2h264enc maxperf-enable=1 insert-sps-pps=1 idrinterval=1 ! h264parse ! queue ! rtph264pay ! udpsink port=5000 host=192.168.12.187 sync=false
Is this going to hurt me down the line?
Decoding begins once the first complete IDR frame is received, so this behavior looks expected. However, the compression rate is low if you encode every frame as an IDR frame. This is a tradeoff between latency and compression rate; you may tune the setting to find a balance for your use case.
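As one sketch of a middle ground, you could keep periodic IDR frames instead of making every frame an IDR frame, e.g. roughly one IDR per second at 30 fps. The idrinterval=30 below is an illustrative value to tune against your motion and latency requirements, not a recommendation:

```shell
# Same sender pipeline, but with one IDR frame about every 30 frames
# instead of every frame, trading some recovery latency for bitrate.
gst-launch-1.0 -v v4l2src device="/dev/video2" ! queue ! nvvidconv ! \
  'video/x-raw(memory:NVMM),format=NV12,width=1280,height=720' ! \
  nvv4l2h264enc maxperf-enable=1 insert-sps-pps=1 idrinterval=30 ! \
  h264parse ! queue ! rtph264pay ! udpsink port=5000 host=192.168.12.187 sync=false
```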